[6] viXra:2503.0169 [pdf] submitted on 2025-03-27 02:30:35
Authors: Sanjay Sharma, Akshit Rao, Chetan Sawant, Mangesh Gangurde
Comments: 4 Pages.
Automated sports analytics using artificial intelligence (AI) and computer vision has gained significant attention in recent years. This project presents a tennis match analysis system that detects players, tracks ball movement, and extracts performance metrics using deep learning techniques. The system employs YOLO for real-time player and ball detection, along with CNNs for court keypoint extraction and perspective correction. By analyzing video frames, the system calculates shot speed, player movement speed, and shot counts, providing valuable insights into gameplay dynamics. The methodology involves video acquisition, frame extraction, preprocessing, and deep learning-based tracking. A rolling mean filter is applied to ball trajectory data to identify shot impact points and analyze rally patterns. Experimental results demonstrate the model's effectiveness, achieving a detection accuracy of 92.3% (mAP) and reliable tracking of key game events. The extracted performance metrics offer valuable applications for coaches, analysts, and players, enhancing strategic decision-making and training efficiency. The proposed approach bridges the gap between traditional sports analysis and AI-based automation, paving the way for more advanced player performance evaluation and match strategy optimization.
Category: Artificial Intelligence
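The abstract above mentions applying a rolling mean filter to ball trajectory data to find shot impact points. A minimal sketch of that idea, under the assumption (not stated in the paper) that impacts appear as direction reversals in the smoothed per-frame vertical ball position; window size and data layout are illustrative:

```python
def rolling_mean(values, window=5):
    """Trailing rolling mean over a list of per-frame positions."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        out.append(sum(values[lo:i + 1]) / (i + 1 - lo))
    return out

def detect_impacts(y_positions, window=5):
    """Frame indices where the smoothed vertical ball motion reverses direction.

    A sign change between consecutive deltas of the smoothed trajectory is
    treated as a candidate shot impact point.
    """
    smooth = rolling_mean(y_positions, window)
    impacts = []
    for i in range(1, len(smooth) - 1):
        d_prev = smooth[i] - smooth[i - 1]
        d_next = smooth[i + 1] - smooth[i]
        if d_prev * d_next < 0:  # direction reversal
            impacts.append(i)
    return impacts
```

For example, `detect_impacts([0, 1, 2, 1, 0], window=1)` flags frame 2, the apex of a simple up-then-down trajectory. A real pipeline would smooth noisy detector output with a larger window before looking for reversals.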
[5] viXra:2503.0107 [pdf] submitted on 2025-03-17 05:57:25
Authors: Hidehiko Okada
Comments: 7 Pages.
The author previously reported experimental results on evolutionary reinforcement learning of binary neural network controllers. In the previous study, the controller was trained by an Evolution Strategy (ES). In this study, the author applies a Genetic Algorithm (GA) instead of ES and compares the results between GA and ES. Both studies use the same Acrobot control task and the same three-layer feedforward neural network; the difference lies in the training algorithm. The findings of this study are: (1) GA trained the controller better than ES (p < .01); (2) increasing the population size, rather than the number of generations, improved performance more in GA (p < .01); and (3) the optimal number of hidden units for the binary MLP was 128 among the choices of 16, 32, 64, 128, and 256, consistent with the previous study using ES.
Category: Artificial Intelligence
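The entry above trains binary neural network weights with a Genetic Algorithm. A minimal generational-GA sketch over +1/-1 weight vectors, with truncation selection, one-point crossover, and bit-flip mutation; the toy fitness function, population sizes, and rates here are illustrative assumptions, whereas the paper evaluates controllers on the Acrobot task:

```python
import random

def evolve(fitness, n_weights=8, pop_size=20, generations=30,
           mut_rate=0.05, seed=0):
    """Generational GA over binary (+1/-1) genomes; returns the best genome."""
    rng = random.Random(seed)
    pop = [[rng.choice([-1, 1]) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_weights)      # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation: each weight flips sign with prob. mut_rate
            child = [-w if rng.random() < mut_rate else w for w in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy stand-in fitness: count of +1 weights. In the paper, fitness would be
# the controller's return on the Acrobot task.
best = evolve(lambda genome: sum(genome))
```

Because parents are carried over unchanged, the best genome found so far is never lost (elitism), which is one common design choice when GA fitness evaluations (here, control rollouts) are expensive.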
[4] viXra:2503.0093 [pdf] submitted on 2025-03-16 01:20:00
Authors: Fei Ding
Comments: 7 Pages.
The success of DeepSeek-R1 has demonstrated the effectiveness of the GRPO algorithm. However, due to the absence of process rewards, GRPO often suffers from inefficiencies in exploration, as a single detailed error can result in an entirely incorrect final answer, leading to zero rewards. To address these challenges, we propose MGRPO (Multi-layer GRPO). In the first layer, GRPO operates identically to the original version, generating an initial response. This response is then fed into a second-stage GRPO process, which primarily trains the model to correct errors. Experimental results indicate that MGRPO outperforms standard GRPO, achieving superior performance.
Category: Artificial Intelligence
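The zero-reward failure mode described above follows directly from how GRPO computes advantages: each sampled response's reward is normalized against its group's mean and standard deviation, so if every response in a group fails (all rewards zero), every advantage is zero and there is no learning signal. A sketch of that group-relative normalization, simplified to scalar outcome rewards:

```python
def group_advantages(rewards, eps=1e-8):
    """Group-relative advantages as used by GRPO-style methods.

    Each reward is standardized against its sampling group:
    A_i = (r_i - mean(r)) / (std(r) + eps).
    If all rewards in the group are equal (e.g. all zero), every
    advantage is zero and the policy receives no gradient signal.
    """
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

For instance, `group_advantages([0, 0, 1, 1])` yields roughly `[-1, -1, 1, 1]`, while `group_advantages([0, 0, 0, 0])` yields all zeros, which is the degenerate case MGRPO's second, error-correcting stage is meant to mitigate.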
[3] viXra:2503.0073 [pdf] submitted on 2025-03-12 23:02:10
Authors: Shuai Liu
Comments: 10 Pages. (Note by viXra Admin: AI assisted article is in general not acceptable)
This paper summarizes the characteristics of neural networks. It challenges the assumed sudden randomness of gene mutation and explains the active source of mild gene mutation. Using, as an example, a life-algorithm framework in which neural networks combined with genetic algorithms simulate changes in genes, it shows that genes can maintain overall stability while actively updating, exhibiting random exploration that is kept within a relatively small scope, and thus driving species evolution; keeping this random exploration small is necessary. The article breaks the shackles of the view that genes determine everything in life: genes and neural networks are both important. Beyond innate genes, people's ability to learn, update, shape themselves, and explore is particularly important. The paper points out the importance of neural networks in the evolution of species. In contrast to the view that the neural network is only the intelligent organizational structure of the human brain, it extends that view: the neural network is also an important intelligent structure of cells and of organs.
Category: Artificial Intelligence
[2] viXra:2503.0016 [pdf] submitted on 2025-03-03 15:30:10
Authors: Yu Zhou, Fuyuan Xiao
Comments: 3 Pages.
By exploiting the computational potential of quantum computing beyond the computational power of classical computing, an adaptive quantum algorithm of the generalized evidential combination rule (AQ-QECR) is proposed to reduce the computational complexity of QECR at the credibility and plausibility levels with no information loss.
Category: Artificial Intelligence
[1] viXra:2503.0009 [pdf] submitted on 2025-03-02 21:53:17
Authors: Stephane H. Maes
Comments: 33 Pages. All related details of the projects (and updates) can be found and followed at https://shmaes.wordpress.com/
Since the release of DeepSeek LLMs, the industry, the investors, and the media have reacted with alarm, surprised that a Chinese startup—despite operating on a low budget and with limited access to specialized AI hardware—could surpass the latest models with reasoning capabilities. This has led to geopolitical concerns about threats to U.S. technological dominance and the effectiveness of AI chip sanctions imposed by the U.S. on China. Investor confidence in leading U.S. tech companies involved in AI, AI hardware, and AI/cloud hosting has been shaken, contributing to a significant stock market drop on January 27, 2025. In this paper, we argue that while the success of DeepSeek V3 and R1 is remarkable, it does not signal the decline of any major player. Instead, it is a natural progression of how LLMs and generative AI function. Most LLM providers of the same LLM generation rely on similar algorithms, big-data pools, and development techniques, meaning that models tend to converge in performance once their methodologies become public. Different starting points often lead to LLMs of comparable capabilities for the same generation. Techniques such as model distillation and reinforcement learning further enable the reduction of model size, data requirements, and hardware constraints. As a result, each time a model is developed, it can be replicated, closely matched, or even surpassed soon after—sometimes with significantly lower effort than the original, or with a significantly smaller set of parameters. This cycle of life will continue as long as LLMs remain a competitive field rather than a commodity, and until new AI approaches beyond GenAI emerge, or the old AI reemerges. Such a pattern will continue, repeating the cycle. Open source models have the advantage of drawing from broader communities and collective innovation, making it increasingly difficult for proprietary models to maintain an edge.
As development costs rise, it will be interesting to see whether proprietary models can sustain their dominance. Ultimately, there was no reason for panic. AI may be in a bubble, but if it bursts, it will not be because DeepSeek outperforms OpenAI's latest model. Instead, the real challenges facing LLMs and GenAI lie elsewhere. The path to AGI is likely beyond current LLMs. While AI agents may extend the viability of GenAI, other factors pose more significant long-term threats. If LLMs are not the future of AI, there is little reason to be concerned about new players mastering them.
Category: Artificial Intelligence