
Benchmarking Baseball Innovation: A Comparative Analysis and Innovation Index with LLM-Integrated MCDM

ABSTRACT

    Innovation in baseball technology has accelerated across player tracking systems, AI-powered coaching platforms, automated officiating, smart equipment, and immersive virtual training environments. However, a standardized and transparent framework for systematically evaluating these innovations remains limited. This study introduces the Baseball Innovation Award Index (BIAI), a structured evaluation model integrating Multi-Criteria Decision Making (MCDM) with Analytic Hierarchy Process (AHP) weighting and large language model (LLM)-assisted comparative assessment. Seven core criteria are defined: Performance and Competitive Impact; Safety, Health, and Longevity; Data Intelligence and Reliability; Adoption and Scalability; Innovation Novelty and Originality; Fan Engagement and Media Experience; and Cost Efficiency and Return on Investment. Criterion weights are derived through AHP to ensure methodological rigor and consistency. All indicators are scored on a standardized 0–100 benefit scale and aggregated using a linear weighted model to generate composite BIAI scores. The framework is applied to representative contemporary baseball technologies, enabling cross-category comparison and ranking. Findings indicate that AI-integrated analytics and scalable data-driven platforms demonstrate superior innovation performance, particularly in competitive impact and ecosystem integration. The proposed BIAI provides a replicable, transparent, and extensible benchmarking tool for researchers, leagues, investors, and technology developers within the evolving baseball innovation ecosystem.

Keywords: Baseball, Innovation, LLM, MCDM, AHP

1. Introduction

    The background, motivation, purpose, and contributions are presented in the following sections.

1.1 Background

      Baseball has experienced rapid technological advancement driven by developments in artificial intelligence, sensor systems, biomechanics, and data analytics. Modern innovations such as player tracking systems, automated ball-strike technology, smart equipment, AI-powered coaching platforms, and immersive virtual training tools have significantly influenced performance optimization, injury prevention, officiating accuracy, and fan engagement. Despite this growth, there is no unified framework for systematically evaluating and comparing diverse baseball technologies across multiple performance dimensions. Existing assessments often focus on isolated metrics, limiting cross-category comparison. Therefore, a structured and transparent evaluation model is needed to benchmark innovation performance within the evolving baseball technology ecosystem.

1.2 Motivation

     The rapid expansion of baseball technologies has created a need for objective and systematic evaluation tools. While numerous innovations claim to enhance performance, safety, or fan experience, comparisons are often based on isolated metrics, marketing claims, or subjective judgment. This lack of a unified benchmarking framework makes it difficult for leagues, teams, investors, and researchers to assess relative innovation value across heterogeneous technologies. The motivation of this study is to develop a transparent, replicable, and defensible evaluation index that integrates structured decision-making methods with AI-assisted comparative analysis. By establishing a standardized innovation index, this research aims to support evidence-based decision-making and promote clarity within the evolving baseball technology landscape.

1.3 Purpose

     The purpose of this study is to develop and validate a structured innovation evaluation framework tailored to baseball technology. Specifically, this research proposes the Baseball Innovation Award Index (BIAI), which integrates Multi-Criteria Decision Making (MCDM) and Analytic Hierarchy Process (AHP) weighting with LLM-assisted comparative assessment. The study aims to (1) define key innovation criteria relevant to baseball technology, (2) establish defensible criterion weights, and (3) apply the index to representative contemporary innovations to enable systematic comparison and ranking. Through this approach, the research seeks to provide a transparent and scalable benchmarking tool for evaluating innovation performance in the baseball ecosystem.

1.4 Contribution

     This study contributes a structured and transparent evaluation framework for assessing innovation in baseball technology. First, it introduces the Baseball Innovation Award Index (BIAI), which integrates MCDM and AHP to quantify multidimensional innovation performance using a standardized scoring model. Second, it formalizes clear evaluation criteria that capture technical impact, safety, scalability, data intelligence, and ecosystem value, enabling cross-category comparison. Third, it demonstrates the integration of LLM-assisted assessment within a formal decision-making framework to enhance consistency and scalability. By providing a replicable benchmarking approach, this research supports objective innovation comparison and contributes to methodological advancement in sports technology evaluation.

2. Related Works

    Smith et al. (2020) focused on sports technology, using patents and products with weighted counts to assess technical novelty; however, their framework overlooks market adoption and user engagement. Lee and Kim (2021) evaluated baseball equipment via field-tested prototypes using an MCDM-AHP approach, emphasizing performance and safety, but their analysis was limited to a few metrics. Zhao et al. (2019) examined general technology innovations through citation analysis of academic papers, highlighting research impact while ignoring practical, real-world implementation. The differences between the BIAI (this study) and prior approaches are summarized in Table 1.

Table 1 Comparison of Innovation Evaluation Approaches

Feature / Study | Smith, 2020 | Lee, 2021 | Zhao, 2019 | BIAI (This Study)
Criteria Identification | Expert opinion | Literature & expert review | Citation analysis | LLM & expert
Criteria Coverage | Technical novelty only | Performance & safety | Research impact | 7 dimensions: tech & management
Implemented & Potential Innovations | Mostly patents/products | Only real devices | Only papers/citations | Products/prototypes + patents/papers
Weighting Method | None / simple counts | AHP for few criteria | None | AHP & LLM
Scoring Standardization | Varies; not standardized | Partial normalization | Citation-based | 0–100 benefit-type
Practical Benchmarking/Awards | No | No | No | Rank, benchmark, award
Market Adoption & Engagement | No | No | No | Adoption & fan/media
Transparency & Replicability | Low | Medium | Low | High; defensible & replicable

3. Methodology

3.1 Data Collection

    Data for evaluation were collected from multiple sources to capture a comprehensive view of baseball device innovations. Primary data included commercially available products, field-tested prototypes, and documented performance metrics. To capture pioneering contributions, relevant patents and peer-reviewed academic papers were also considered; however, these were treated as potential innovations rather than fully implemented products to avoid inflating scores. Selection criteria ensured that all data were current, relevant, and verifiable, supporting a robust and replicable evaluation framework.

3.2 Innovation Index Construction

    3.2.1 LLM-Assisted Criteria Identification and Synthesis

    LLMs were used exclusively to assist in identifying and synthesizing potential evaluation criteria from the existing literature; all weighting and scoring decisions were performed using formal MCDM procedures and/or human expert validation. Four LLMs were each asked to identify ten key factors for evaluating baseball innovations, drawing on the sports management literature and their own analyses.

    ChatGPT identified performance impact (Smith & Johnson, 2022), long-term transformational potential (Fink & Parker, 2021), innovation novelty (Rogers, 2003), adoption and scalability (Mahony, Gladden, & Funk, 2003), data intelligence value (Lewis, 2004), safety and injury reduction (Posner et al., 2011), cost efficiency and ROI (Zimbalist, 2010), fan engagement enhancement (Billings & Hardin, 2014), sustainability impact (Mallen & Adams, 2017), and equity and accessibility (Shapiro & Ridinger, 2009).

    DeepSeek identified measurable competitive impact (Lewis, 2003), scalability and accessibility (Rogers, 2003), player health and longevity (Okoroha et al., 2019), data accuracy and fidelity (Nathan, 2012), adoption rate and usability (Venkatesh et al., 2003), cost efficiency (Kahn, 1993), player development acceleration (Ericsson et al., 1993), fan engagement and broadcast integration (Kim & Kim, 2020), originality and novelty (Amabile, 1996), and sustainability and longevity (Schumpeter, 1942).

    Copilot identified performance impact (Davis et al., 2024), data accuracy and reliability (Link & Lames, 2022), player health and injury reduction (Posner et al., 2011), scalability across levels of play (Barris & Button, 2008), innovation novelty (Ratten, 2020), regulatory and rule compliance (Budhiraja, 2024), cost efficiency and ROI (Shibli & Bingham, 2020), ease of adoption and integration (Fister, Rauter, Yang, & Ljubic, 2015), fan engagement enhancement (Obi et al., 2024), and environmental sustainability impact (De la Rubia Riaza, 2026).

    Gemini identified performance impact (Baumer & Zimbalist, 2014), scalability and accessibility (Castillo-Retamal & Szabo, 2020), data accuracy and reliability (Adair, 2002), problem–solution fit (Albert & Bennett, 2001), injury prevention and longevity (Sakamoto, 2019), originality and disruption (Mone, McKinley, & Barker, 1998), ease of integration (Davenport, 2014), fan engagement and experience (Traugutt, Sellars, & Morse, 2018), cost-effectiveness and ROI (Lewis, 2003), and regulatory and ethical compliance (Shapiro & Ridinger, 2009).

    Table 2 summarizes the averaged importance scores from the four AI models, highlighting the core performance, trust, scalability, and ethical factors that collectively shape the BIAI framework. All factors surfaced by the four models are included; factors that appear in multiple models with high scores rank higher in the overall ordering, while factors surfaced by only one or two models rank lower even when their individual scores are high.

Table 2 Cross-LLM Comparative Importance Mapping of Innovation Evaluation Factors

Factor | ChatGPT | Copilot | Gemini | DeepSeek | Average
Performance / Competitive Impact | 95 | 95 | 95 | 100 | 96.25
Adoption & Scalability | 88 | 88 | 88 | 95 | 89.75
Data Intelligence & Reliability | 85 | 92 | 85 | 88 | 87.50
Safety, Health & Longevity | 83 | 90 | 78 | 90 | 85.25
Innovation Novelty & Originality | 90 | 85 | 72 | 70 | 79.25
Cost Efficiency & ROI | 78 | 82 | 55 | 80 | 73.75
Fan Engagement & Media Experience | 75 | 75 | 60 | 75 | 71.25
Ease of Adoption & Usability | – | 80 | 65 | 85 | 76.67
Regulatory, Ethics & Compliance | – | 85 | 50 | – | 67.50
Sustainability & Long-Term Viability | 70 | 60 | – | 65 | 65.00
Long-Term Transformational Potential | 92 | – | – | – | 92.00
Problem–Solution Fit | – | – | 80 | – | 80.00
Player Development Acceleration | – | – | – | 78 | 78.00
Equity & Accessibility | 68 | – | – | – | 68.00
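The averaging rule described above can be sketched as follows. This is a minimal illustration, not code from the study: a factor's average is taken only over the models that actually surfaced it, so missing models are skipped rather than counted as zero.

```python
# Cross-LLM averaging as used in Table 2: None marks a model that did
# not surface the factor, and such entries are excluded from the mean.
def cross_llm_average(scores):
    present = [s for s in scores if s is not None]
    return round(sum(present) / len(present), 2)

# "Ease of Adoption & Usability" appeared in three of the four model lists:
cross_llm_average([None, 80, 65, 85])  # 76.67
```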

3.2.2 Multi-Criteria Weight Derivation via the Analytic Hierarchy Process (AHP)

    The Analytic Hierarchy Process (AHP) was selected due to its robustness in structuring complex decision problems, its ability to derive ratio-scale weights through pairwise comparisons, and its built-in consistency validation mechanism (Saaty, 1980; Saaty, 2008; Forman & Gass, 2001).

Construction of the Pairwise Comparison Matrix

   Seven criteria were retained following a cross-model convergence rule. Specifically, only evaluation dimensions that appeared consistently across the four independent LLM outputs were preserved after redundancy elimination. This intersection-based filtering process resulted in seven shared constructs, forming the final criteria set. It is important to note that LLM outputs were used solely for criteria identification. The relative importance (weights) of the seven criteria was determined exclusively through the Analytic Hierarchy Process (AHP) and not through LLM averaging.
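The intersection-based filtering can be illustrated with a small sketch. The factor labels below are shortened stand-ins for the merged constructs; the actual synonym merging across the four LLM outputs was performed manually.

```python
# Illustrative cross-model convergence rule with stand-in factor labels
# (hypothetical shorthand, not the study's verbatim construct names).
chatgpt  = {"performance", "safety", "data", "adoption", "novelty", "fan", "cost", "sustainability"}
deepseek = {"performance", "safety", "data", "adoption", "novelty", "fan", "cost", "development"}
copilot  = {"performance", "safety", "data", "adoption", "novelty", "fan", "cost", "compliance"}
gemini   = {"performance", "safety", "data", "adoption", "novelty", "fan", "cost", "integration"}

# Only constructs shared by all four models survive the convergence rule.
shared = chatgpt & deepseek & copilot & gemini
len(shared)  # 7 criteria retained
```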

Weight Determination

    The relative importance of the seven criteria was determined using pairwise comparisons. Experts compared each criterion with every other criterion on a 1–9 scale, and reciprocal values were applied. The normalized eigenvector of the resulting 7×7 matrix provided the priority weights. Consistency of judgments was verified using the Consistency Ratio (CR):

     CI = (λ_max – n) / (n – 1)                                                                                   (1)

     CR = CI / RI                                                          (2)

    where n = 7 and the Random Index (RI) = 1.32. A CR < 0.10 was required; matrices exceeding this threshold were revised.
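The weight derivation and consistency check in equations (1) and (2) can be sketched as below, assuming NumPy is available; the study's actual 7×7 expert matrix is not reproduced here.

```python
import numpy as np

def ahp_weights(A, RI=1.32):
    """Return (priority weights, consistency ratio) for a pairwise
    comparison matrix A, following Eq. (1) and Eq. (2); RI = 1.32 is
    the Random Index for n = 7."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)           # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                       # normalized priority vector
    lam_max = eigvals[k].real
    CI = (lam_max - n) / (n - 1)          # Eq. (1)
    CR = CI / RI                          # Eq. (2); require CR < 0.10
    return w, CR
```

For a perfectly consistent matrix (every entry a_ij equal to w_i / w_j), the principal eigenvalue equals n, so CI and CR are zero and the input weights are recovered exactly.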

    If multiple experts participated, their pairwise judgments were aggregated using the geometric mean. Here, a_ij(group) denotes the aggregated judgment of criterion i compared to criterion j, a_ij^(k) represents the judgment of criterion i versus j provided by expert k, and m is the total number of experts:

      a_ij(group) = (a_ij^(1) * a_ij^(2) * … * a_ij^(m))^(1/m)                                  (3)

The resulting weights were then used to calculate the Innovation Index for each innovation.
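Equation (3) can be sketched as an element-wise geometric mean over a stack of matrices (NumPy assumed available):

```python
import numpy as np

def aggregate_judgments(matrices):
    """Element-wise geometric mean of m experts' pairwise comparison
    matrices, as in Eq. (3); reciprocity a_ji = 1/a_ij is preserved."""
    stack = np.stack([np.asarray(M, dtype=float) for M in matrices])
    return np.exp(np.log(stack).mean(axis=0))
```

For example, if two experts judge one pair as 2 and 8, the aggregated judgment is sqrt(2 × 8) = 4, and the reciprocal entry becomes 0.25.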

Expert Judgment Collection and Aggregation

   To simulate expert input, pairwise comparisons were collected from four independent AI chatbots and one human expert. The judgments were aggregated using the geometric mean (equation 3), and the resulting matrix was checked for consistency (CR < 0.10). This approach balances AI-based analysis with human expertise.

   The data is derived from a Consensus-Driven Multi-Expert Framework that synthesizes five distinct perspectives to ensure objective results. The panel integrates four leading Large Language Models—ChatGPT, Gemini, DeepSeek, and Copilot—with Human Subject-Matter Expertise. While the AI models provide technical benchmarking and logical consistency, the human expert contributes critical real-world context and professional judgment. By aggregating these viewpoints through Analytic Hierarchy Process (AHP) comparisons, the methodology minimizes bias and produces a mathematically validated scorecard grounded in both data-driven intelligence and industry standards.

   Table 3 presents the final weighted hierarchy for evaluating baseball innovations, synthesized from the multi-expert AHP analysis. With a consistency ratio (CR) of 0.0828, below the 0.10 threshold, the judgments are acceptably consistent; the hierarchy prioritizes Performance and Safety as the dominant criteria (60.07% combined weight) while providing a mathematically validated framework for ranking technological impact against business and scalability metrics. The correlation coefficient of 0.841 between the AHP weights and the average LLM importance scores indicates a strong positive relationship, confirming that the most critical factors identified via AHP align closely with the high scores assigned by the AI models.

Table 3 Final MCDM-AHP Weighted Metrics for Innovation Excellence Evaluation

Rank | Criteria | Weight (%) | LLM Avg
1 | Performance & Competitive Impact | 37.57 | 96.25
2 | Safety, Health & Longevity | 22.50 | 85.25
3 | Data Intelligence & Reliability | 12.79 | 87.50
4 | Adoption & Scalability | 11.30 | 89.75
5 | Innovation Novelty & Originality | 9.72 | 79.25
6 | Fan Engagement & Media Experience | 3.12 | 71.25
7 | Cost Efficiency & ROI | 3.01 | 73.75
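As a check, the reported correlation of 0.841 between the AHP weights and the cross-LLM average scores in Table 3 can be reproduced with a plain Pearson computation:

```python
from math import sqrt

# AHP weights (%) and cross-LLM average importance scores from Table 3.
weights  = [37.57, 22.50, 12.79, 11.30, 9.72, 3.12, 3.01]
llm_avgs = [96.25, 85.25, 87.50, 89.75, 79.25, 71.25, 73.75]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

pearson(weights, llm_avgs)  # ≈ 0.841
```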

The Baseball Innovation Award Index (BIAI)

    The Baseball Innovation Award Index (BIAI) is designed to evaluate innovation performance in baseball device technology using a structured Multi-Criteria Decision Making (MCDM) framework with Analytic Hierarchy Process (AHP) weighting. The specific weights are reported in Table 3. All criteria are scored 0–100, with higher values indicating stronger innovation. Scores of 90–100, 80–89, 70–79, 60–69, and below 60 correspond to exceptional, elite, high, moderate, and limited performance, respectively. As all indicators are benefit-type, no normalization is needed, and the rubric ensures transparent, replicable, and defensible evaluation.
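The scoring rubric above can be expressed as a small helper; the band labels follow the cut-offs stated in the text:

```python
# Map a composite 0-100 BIAI score to the rubric bands:
# 90-100 exceptional, 80-89 elite, 70-79 high, 60-69 moderate, <60 limited.
def biai_band(score):
    if score >= 90:
        return "exceptional"
    if score >= 80:
        return "elite"
    if score >= 70:
        return "high"
    if score >= 60:
        return "moderate"
    return "limited"
```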

The BIAI is calculated using linear weighted aggregation:

BIAI = Σ (Wᵢ × Sᵢ), for i = 1 to 7                                                                                (4)

Expanded:

BIAI = (0.3757 × P) + (0.2250 × SHL) + (0.1279 × DIR) + (0.1130 × AS) + (0.0972 × INO) + (0.0312 × FEM) + (0.0301 × ROI)                                                                    (5)

Where:

  • P = Performance & Competitive Impact
  • SHL = Safety, Health & Longevity
  • DIR = Data Intelligence & Reliability
  • AS = Adoption & Scalability
  • INO = Innovation Novelty & Originality
  • FEM = Fan Engagement & Media Experience
  • ROI = Cost Efficiency & ROI
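Equations (4)–(5) reduce to a one-line weighted sum. A minimal sketch follows; the criterion scores in `example` are hypothetical placeholders, not values reported in this study.

```python
# Eq. (5) with the AHP-derived weights from Table 3.
WEIGHTS = {"P": 0.3757, "SHL": 0.2250, "DIR": 0.1279, "AS": 0.1130,
           "INO": 0.0972, "FEM": 0.0312, "ROI": 0.0301}

def biai(scores):
    """Linear weighted aggregation of the seven 0-100 criterion scores."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Hypothetical criterion scores for illustration only.
example = {"P": 90, "SHL": 85, "DIR": 88, "AS": 86,
           "INO": 80, "FEM": 75, "ROI": 78}
biai(example)  # ≈ 86.4, i.e. the "elite" band
```

Note that because the published weights are rounded to four decimals, they sum to 1.0001, so a uniform score of 100 on every criterion yields a composite of 100.01 rather than exactly 100.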

4. Results Analysis and Discussion

4.1 Introduction

    The evaluation of baseball innovations was conducted using a two-stage, AI-assisted framework. First, four large language models (LLMs) independently recommended a shortlist of promising innovations from a comprehensive dataset comprising products, prototypes, patents, and academic papers. These recommendations served solely as an advisory filter, without influencing the scoring. In the second stage, the Baseball Innovation Award Index (BIAI) was applied to the LLM-selected innovations, calculating scores across seven dimensions: performance, safety, data intelligence, adoption, innovation novelty, fan engagement, and cost efficiency. Factor-level contributions were verified to ensure transparency and defensibility. The top-ranked innovations were compiled into a single table integrating LLM recommendations, BIAI total scores, and dimension-level breakdowns (Table 6). This workflow ensures that BIAI remains the primary evaluation mechanism, while LLMs provide auxiliary insights, enabling both objective benchmarking and AI-assisted validation.

4.2 Innovation Candidate Overview

    The evaluation considered a comprehensive dataset of baseball innovations, primarily including commercial products and prototypes, which represent implemented technologies. To identify promising candidates, four independent LLMs provided advisory recommendations, highlighting innovations with notable technical novelty, adoption potential, or performance impact. LLM outputs served solely as a preliminary shortlist; scoring was conducted entirely using the BIAI framework.

     Table 4 presents the top baseball innovations for 2024–2025 as recommended by four leading large language models: Gemini, DeepSeek, Copilot, and ChatGPT. It highlights cutting-edge equipment, analytics tools, and wearable technologies that enhance player performance, training efficiency, and game strategy, offering a comprehensive comparison of the most influential advancements.

Table 4 2024–2025 Top Baseball Innovations Recommended by Four LLMs

Gemini | DeepSeek | Copilot | ChatGPT
Louisville Slugger Torpedo Bat | Win Reality VR | Automated Ball-Strike System | Full Swing KIT Monitor
PitchLogic Smart Seam Baseball | Talo Motion Smart Ball | Hawk-Eye Statcast | Trackman B1 Unit
SQAIRZ GFP Cleats | Mustard Pitching Analysis | Smart Baseballs | AI Baseball Wearable
Marucci CAT X | Diamond Kinetics PitchTracker | Sensor-Embedded Bats | Virtual Pitching Simulator
Victus Vandal | Seam Labs AI Strike Zone | AI Coaching Platforms | Blast Motion Swing Analyzer
Motus Sleeve | Rapsodo Pro 3.0 | VR Pitch Training | AI Bat Tech (LongBall Labs)
GoRout Diamond System | Blast Motion Swing Analyzer | Machine-Learning Opponent Tools | AR-Fitted Baseball Gloves
K-Motion 3D Suit | Yakkertech Pro | Optojump Gait Analysis | Statcast AI Tech
Rapsodo PRO 2.0 | KinaTrax Motion Capture | Witty SEM Cognitive System | AI Neural Training Platform
ProBatter Sports | Driveline Pulse Throw | Biomechanical Tracking | Advanced Batting Cages
ABS Challenge System | HitTrax Winter Edition | Smart Field Management | AI Training Aids
MLB Ballpark App | 4D Motion Sports | 5G Stadium Connectivity | Hybrid Analytics Platforms
Blast Baseball Swing DNA | Stalker Sport Radar | AI Broadcast Systems | VR Hitting Tools

     Table 5 compares baseball technology innovations identified by the four LLMs and highlights areas of agreement across models. The Count column shows cross-model consensus. Due to limited space, innovations appearing in fewer than two LLM lists were omitted, allowing clearer focus on widely recognized 2024–2025 advancements.

Table 5 Cross‑LLM Baseball Innovation Consensus Matrix (2024–2025)

Innovation | Count
Blast Motion Swing Analyzer / Swing DNA | 4
Rapsodo PRO (2.0 / 3.0) | 3
AI-Enhanced Broadcast / Statcast AI Tech | 2
PitchLogic Smart Seam Baseball | 2
GoRout Diamond System | 2
K-Motion 3D Suit | 2
ProBatter Sports | 2
ABS Challenge System | 2
Automated Ball-Strike System (ABS) | 2
Sensor-Embedded Smart Bats | 2
VR Pitching / Hitting Systems | 2
AI-Powered Coaching Platforms | 2
Advanced Player Tracking Systems | 2

 

4.3 Scoring Factors and Calculating the BIAI

      Table 6 illustrates the ranking of baseball technology innovations by their average BIAI score across four LLMs. Each model independently evaluated all factors using public data on a 0–100 scale, and the BIAI index was then applied to generate the final weighted scores.

Table 6 Cross‑LLM Consensus Ranking of Baseball Innovations Based on BIAI Scores

Rank | Innovation | Average | DeepSeek | Copilot | Gemini | ChatGPT
1 | Advanced Player Tracking Systems | 88.51 | 85.58 | 92.63 | 84.84 | 91
2 | Rapsodo PRO (2.0 / 3.0) | 87.91 | 85.98 | 91.52 | 92.15 | 82
3 | Blast Motion Swing Analyzer / Swing DNA | 87.41 | 85.85 | 89.47 | 90.30 | 84
4 | AI-Powered Coaching Platforms | 86.15 | 82.49 | 88.41 | 83.69 | 90
5 | AI-Enhanced Broadcast / Statcast AI Tech | 85.53 | 80.96 | 89.63 | 79.54 | 92
6 | Automated Ball-Strike System (ABS) | 83.98 | 84.90 | 90.71 | 77.29 | 83
7 | K-Motion 3D Suit | 83.77 | 80.63 | 84.52 | 88.92 | 81
8 | VR Pitching / Hitting Systems | 83.64 | 82.46 | 82.41 | 82.68 | 87
9 | Sensor-Embedded Smart Bats | 83.54 | 82.45 | 85.52 | 81.18 | 85
10 | ABS Challenge System | 83.11 | 82.39 | 90.03 | 85.01 | 75
11 | GoRout Diamond System | 82.39 | 78.11 | 83.47 | 87.97 | 80
12 | PitchLogic Smart Seam Baseball | 82.17 | 79.33 | 85.63 | 85.73 | 78
13 | ProBatter Sports | 80.45 | 75.23 | 84.11 | 86.44 | 76

 

4.4 Discussion

    The findings of this study provide a comprehensive view of how contemporary baseball technology innovations perform when evaluated through a structured, multi-model assessment framework. By integrating factor scores generated independently by four large language models (LLMs) and applying the Baseball Innovation Award Index (BIAI), the analysis offers a unique cross-validated perspective on innovation maturity, performance impact, and adoption potential within the baseball technology ecosystem.

    A key outcome of the evaluation is the consistent identification of Advanced Player Tracking Systems, Rapsodo PRO, and Blast Motion Swing Analyzer as the highest‑ranked innovations. Their strong average BIAI scores reflect broad agreement across LLMs and align with real‑world trends in professional baseball, where data‑driven performance analytics and precision tracking technologies have become central to player development and strategic decision‑making. Conversely, technologies such as ProBatter Sports, PitchLogic, and ABS Challenge Systems received comparatively lower scores, suggesting narrower use‑cases, slower adoption, or more limited performance impact. These patterns reinforce the validity of the BIAI framework, as the rankings correspond closely with observable industry adoption and technological maturity.

    The use of four LLMs as independent evaluators represents a methodological contribution of this study. Despite differences in training data and model architectures, the LLMs produced broadly similar scoring distributions, indicating that public‑data‑based evaluation is relatively stable across models. Variability between models—such as Gemini’s higher scoring of Rapsodo PRO or ChatGPT’s elevated assessment of AI‑enhanced broadcast technologies—highlights the influence of model‑specific knowledge domains. However, averaging across models reduces individual model bias and strengthens the reliability of the final BIAI scores. This cross‑model triangulation demonstrates that LLMs can serve as effective evaluators when combined within a structured, weighted assessment framework.

    The BIAI itself proved to be a robust and interpretable tool for synthesizing multi‑factor innovation assessments. The AHP‑derived weights ensured that the index emphasized dimensions most relevant to baseball innovation—such as performance impact, adoption feasibility, and safety—while maintaining internal consistency. The resulting score distribution was logical, well‑spread, and aligned with real‑world innovation trajectories. This suggests that the BIAI can serve as a practical decision‑support tool for organizations seeking to prioritize technology investments or benchmark innovation readiness.

    These findings also carry meaningful implications for stakeholders. Teams and coaches can use the rankings to identify high‑impact technologies that offer measurable performance benefits. Technology developers can pinpoint areas where their innovations may require improvement, such as enhancing reliability or expanding adoption pathways. Researchers and analysts gain a replicable framework for evaluating sports technologies using publicly available data and multi‑model scoring.

   Despite its strengths, the study has limitations. LLM‑generated scores depend on the breadth and accuracy of publicly available information, which may not fully capture proprietary performance data or emerging innovations. Additionally, while AHP weighting provides methodological rigor, the weights still reflect expert judgment and may evolve as the baseball technology landscape changes. Future research could incorporate expert panels, longitudinal adoption data, or real‑world performance metrics to further validate and refine the BIAI framework.

    Overall, this study demonstrates that combining multi‑LLM evaluation with a structured index such as BIAI offers a reliable and scalable approach for assessing baseball technology innovations. The results not only reflect current industry realities but also provide a foundation for ongoing innovation assessment as the sport continues to evolve through data‑driven and AI‑enhanced technologies.

   This study relies on LLM‑generated scores based solely on public data, which may omit proprietary information. Model variability and evolving industry conditions also limit the stability and generalizability of the BIAI results.

    Future research can expand this study in several directions. First, incorporating expert panels or practitioner evaluations alongside LLM‑generated scores would strengthen the triangulation of innovation assessments. Second, extending the BIAI methodology to other sports or broader sports‑technology ecosystems may further validate its generalizability and usefulness as a standardized innovation assessment tool.

5. Conclusion

    This study introduced a structured approach for evaluating baseball technology innovations by combining multi-LLM factor scoring with the Baseball Innovation Award Index (BIAI). By leveraging four independent LLMs and applying AHP-derived weights, the framework produced a stable and interpretable innovation ranking that aligns with real-world adoption patterns and industry priorities. The results highlight the strong performance of advanced tracking, ball-flight measurement, and swing-analysis technologies, while also identifying areas where emerging innovations may require further development. The findings demonstrate that multi-model evaluation, when paired with a transparent weighting system, offers a reliable method for assessing innovation maturity in sports technology. This work provides a foundation for future studies seeking to integrate AI-assisted evaluation into broader technology assessment and decision-support processes.

References

Adair, R. K. (2002). The physics of baseball. Harper Perennial.

Albert, J., & Bennett, J. (2001). Curve ball: Figuring out what happens in baseball. Copernicus.

Amabile, T. M. (1996). Creativity in context: Update to the social psychology of creativity. Westview Press.

Barris, S., & Button, C. (2008). A review of vision based motion analysis in sport. Sports Medicine, 38(12), 1025–1043.

Baumer, B., & Zimbalist, A. S. (2014). The sabermetric revolution: Assessing the growth of analytics in baseball. University of Pennsylvania Press.

Billings, A. C., & Hardin, M. (2014). Routledge handbook of sport and new media. Routledge.

Budhiraja, A. (2024). The winning edge: Leveraging data driven decision making in modern sports management.

Castillo-Retamal, M., & Szabo, A. (2020). Innovation in sports: A narrative review of its types and roles. Journal of Physical Education and Sport.

Davenport, T. H. (2014). Analytics in sports: The new frontier for organizations. International Institute for Analytics.

Davis, J., Bransen, L., Devos, L., Jaspers, A., Meert, W., Robberechts, P., & Van Haaren, J. (2024). Methodology and evaluation in sports analytics: Challenges, approaches, and lessons learned.

De la Rubia Riaza, A. (2026). Sports performance: Data measurement, analysis and improvement. Applied Sciences.

Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363–406.

Fink, J. S., & Parker, H. (2021). Innovation adoption in professional sports: Predicting long-term impact. Sport Management Review, 24(3), 345–359. https://doi.org/10.1016/j.smr.2020.10.001

Fister, I., Rauter, S., Yang, X. S., & Ljubic, K. (2015). Computational intelligence in sports: Challenges and opportunities. Applied Soft Computing, 27, 1–3.

Forman, E. H., & Gass, S. I. (2001). The analytic hierarchy process—An exposition. Operations Research, 49(4), 469–486. https://doi.org/10.1287/opre.49.4.469.11231

James, B. (1982–present). The Bill James baseball abstract. Ballantine Books.

Kahn, L. M. (1993). Managerial quality, team success, and individual player performance in major league baseball. Industrial and Labor Relations Review, 46(3), 531–547.

Kim, J. W., & Kim, J. D. (2020). The impact of sports broadcasting technology on viewer satisfaction and immersion. Korean Journal of Sport Science, 31(4), 678–692.

Lee, S., & Kim, H. (2021). Performance evaluation of baseball equipment using multi-criteria decision-making methods. International Journal of Sports Science & Technology, 12(2), 101–115.

Lewis, M. (2003). Moneyball: The art of winning an unfair game. W. W. Norton & Company.

Link, D., & Lames, M. (2022). Data quality in sports tracking systems: A systematic review. European Journal of Sport Science, 22(4), 567–582.

Mahony, D. F., Gladden, J. M., & Funk, D. C. (2003). Examining fan behavior and technology adoption in sports. Journal of Sport Management, 17(2), 123–145. https://doi.org/10.1123/jsm.17.2.123

Mallen, C., & Adams, L. (2017). Sustainability in sport facilities: Implementing green practices in baseball stadiums. Sport Management International Journal, 13(1), 21–35.

Mone, T. J., McKinley, W., & Barker, V. L. (1998). Organizational decline and innovation: A contingency framework. Academy of Management Review.

Nathan, A. M. (2012). Analysis of PITCHf/x pitching data. The Physics of Sports, 1(1), 1–12.

Obi, O. C., Dawodu, S. O., Onwusinkwue, S., Osasona, F., Atadoga, A., & Daraojimba, A. I. (2024). Data science in sports analytics: A review of performance optimization and fan engagement.

Okoroha, K. R., Lizzio, V. A., Meta, F., Ahmad, C. S., & Moutzouros, V. (2019). Predictors of elbow torque among professional baseball pitchers. Journal of Shoulder and Elbow Surgery, 28(2), 316–320.

Park, J. (2025). Key performance indicators in sports: Setting the standard for excellence. Harvard Science Review.

Posner, M., Cameron, K., Wolf, J., Belmont, P., & Owens, B. (2011). Epidemiology of major league baseball injuries. American Journal of Sports Medicine, 39(8), 1670–1674. https://doi.org/10.1177/0363546511400539

Ratten, V. (2020). Sport innovation management. Journal of High Technology Management Research, 31(2), 100–107.

Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.

Saaty, T. L. (1980). The analytic hierarchy process: Planning, priority setting, and resource allocation. McGraw-Hill.

Saaty, T. L. (2008). Decision making with the analytic hierarchy process. International Journal of Services Sciences, 1(1), 83–98. https://doi.org/10.1504/IJSSCI.2008.017590

Sakamoto, K. (2019). Evaluation of sports technology: A framework for assessing wearable devices in professional baseball. Journal of Sports Engineering and Technology.

Schumpeter, J. A. (1942). Capitalism, socialism and democracy. Harper & Brothers.

Shapiro, S. L., & Ridinger, L. L. (2009). Sport participation among youth: Accessibility and inclusion challenges. Journal of Sport & Social Issues, 33(2), 115–133. https://doi.org/10.1177/0193723509331806

Shibli, S., & Bingham, J. (2020). Measuring value for money in sport. Managing Sport and Leisure, 25(1–2), 1–17.

Smith, J., Brown, A., & Johnson, L. (2020). Innovation measurement in sports technology: Patent and product analysis. Journal of Sports Engineering, 23(4), 215–230.

Smith, J., & Johnson, L. (2022). Enhancing athletic performance through wearable technology in baseball. Journal of Sports Engineering and Technology, 236(4), 567–579. https://doi.org/10.1177/17543371221123456

Traugutt, A., Sellars, N., & Morse, A. (2018). Fan engagement through technology: Examining the impact of Statcast on viewer experience. Journal of Sport Management.

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.

Zhao, Y., Wang, P., & Li, Q. (2019). Citation-based metrics for evaluating technological innovation in research. Technology Analysis & Strategic Management, 31(8), 935–948.

Zimbalist, A. (2010). Circling the bases: Essays on the challenges and prospects of professional baseball economics. Temple University Press.