[8] viXra:2401.0154 [pdf] submitted on 2024-01-31 21:27:08
Authors: TongGuk Kim, CholRyon Pak, KwangJin Ryang
Comments: 9 Pages.
As manufacturing technology develops, hardware manufacturing costs keep falling, and more and more computers equipped with multiple CPUs and large data disks have emerged. Existing programming models prevent people from making effective use of these growing computational resources; hence cloud computing appeared. With the MapReduce parallel programming model, existing computing and storage capabilities are effectively integrated and powerful distributed computing ability is provided. Association rules can reveal horizontal relationships in big data, and the Apriori algorithm is one of the most significant association-rule algorithms. Traditional mining based on parallel Apriori algorithms needs increasingly more time for data I/O as the size of a large transaction database grows. This paper improves the Apriori algorithm by compressing transactions, reducing the number of scans, and simplifying candidate-set generation. The improved algorithm is then parallelized on the Hadoop framework. Experiments show that the improved algorithm is suitable for large-scale data mining and has good scalability and effectiveness.
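The improvements the abstract names (transaction compression, fewer scans, simpler candidate generation) build on the classic level-wise Apriori search. A minimal single-machine sketch of that baseline follows; it is an illustration only, not the paper's Hadoop implementation, and `min_support` is assumed here to be an absolute count:

```python
def apriori(transactions, min_support):
    """Level-wise Apriori sketch: transactions stay in memory and each
    candidate level is counted in one pass over them."""
    # Level 1: count individual items.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s for s, c in counts.items() if c >= min_support}
    result = {s: counts[s] for s in frequent}

    k = 2
    while frequent:
        # Candidate generation: join frequent (k-1)-itemsets whose union
        # has exactly k items (classic Apriori also prunes candidates with
        # an infrequent subset; omitted here for brevity).
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == k}
        counts = {c: sum(1 for t in transactions if c <= set(t))
                  for c in candidates}
        frequent = {c for c, n in counts.items() if n >= min_support}
        result.update({c: counts[c] for c in frequent})
        k += 1
    return result
```

In a MapReduce setting such as the paper's, the support-counting step is the part that gets distributed: mappers emit candidate itemsets per transaction split and reducers sum their counts.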
Category: Artificial Intelligence
[7] viXra:2401.0130 [pdf] submitted on 2024-01-25 14:06:19
Authors: Yew Kee Wong, Yifan Zhou, Yan Shing Liang
Comments: 10 Pages.
Quantum Image Processing (QIP) is a field that aims to utilize the benefits of quantum computing for manipulating and analyzing images. However, QIP faces two challenges: the limited number of qubits and the presence of noise in a quantum machine. In this research we propose a novel approach to address the issue of noise in QIP. By training and employing a machine learning model that identifies and corrects the noise in quantum-processed images, we can compensate for the noise caused by the machine and retrieve a processing result similar to that produced by a classical computer, with higher efficiency. The model is trained on a dataset consisting of both existing processed images and quantum-processed images from open-access datasets. This model will be capable of providing us with a confidence level for each pixel and its potential original value. To assess the model's accuracy in compensating for loss and decoherence in QIP, we evaluate it using three metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Mean Opinion Score (MOS). Additionally, we discuss the applicability of our model across domains as well as its cost-effectiveness compared to alternative methods.
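Of the three metrics the abstract names, PSNR is the simplest to state precisely; a minimal sketch for 8-bit images represented as flat lists of pixel intensities (SSIM and MOS are omitted, and the pixel representation is an assumption):

```python
import math

def psnr(original, restored, max_value=255.0):
    """Peak Signal-to-Noise Ratio in decibels between two equal-sized
    images. Higher is better; identical images give infinity."""
    mse = sum((o - r) ** 2 for o, r in zip(original, restored)) / len(original)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_value ** 2 / mse)
```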
Category: Artificial Intelligence
[6] viXra:2401.0071 [pdf] submitted on 2024-01-16 01:05:49
Authors: Ait-TYaleb Nabil
Comments: 12 Pages.
In this paper, we examine the causation of multiple causes acting on a single variable, as computed from correlations. Using an example, we show when strong or weak correlations between multiple causes and a variable imply strong or weak causation between those causes and the variable.
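The correlations from which such causation arguments start are typically sample Pearson coefficients between each candidate cause and the variable; a minimal sketch (the paper does not specify its correlation measure, so Pearson is an assumption here):

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two sequences,
    in [-1, 1]; +1/-1 indicate perfect linear (anti)correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```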
Category: Artificial Intelligence
[5] viXra:2401.0059 [pdf] submitted on 2024-01-12 18:25:00
Authors: Naguneu Lionel Perin, Jimbo Claver, Bouetou Thomas, Tchoua Paul
Comments: 9 Pages.
This paper presents a deep learning-based approach to stock price prediction in financial markets. Accurately predicting future stock price movements is of crucial importance to investors and traders, as it allows them to make informed investment decisions. Deep learning, a branch of artificial intelligence, offers new perspectives on this complex challenge. Deep learning models, such as deep neural networks, are capable of extracting complex features and patterns from large amounts of historical data on stock prices, trading volumes, financial news, and other relevant factors. Using this data, deep learning and machine learning models can learn to recognize trends, patterns, and non-linear relationships between variables that can influence stock prices. Once trained, these models can be used to predict future stock prices. This study aims to find the most suitable model for predicting stock prices, comparing the deep learning and machine learning methods RNN, LSTM, GRU, SVM, and linear regression on Apple stock price data from Yahoo Finance covering 2000 to 2024. The results showed that SVM modeling is not suitable for predicting Apple stock prices. In comparison, GRU showed the best performance in predicting Apple stock prices, with an MAE of 1.64 and an RMSE of 2.14, exceeding the results of LSTM, linear regression, and SVM. A limitation of this research is that only time-series data were used. It is important to note, however, that stock price forecasting remains a complex challenge due to the volatile nature of financial markets and the influence of unpredictable factors. Although deep learning models can improve prediction accuracy, it is essential to understand that errors can still occur.
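The MAE of 1.64 and RMSE of 2.14 reported for GRU are the standard error metrics over actual versus predicted prices; a minimal sketch of both:

```python
import math

def mae(actual, predicted):
    """Mean Absolute Error: average magnitude of prediction errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Squared Error: like MAE but penalizes large errors more."""
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )
```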
Category: Artificial Intelligence
[4] viXra:2401.0045 [pdf] submitted on 2024-01-08 13:33:43
Authors: Junjie Huang, Fuyuan Xiao
Comments: 2 Pages.
In this paper, a novel TFN-based complex basic belief assignment generation method is proposed to improve decision-making accuracy in complex evidence theory.
Category: Artificial Intelligence
[3] viXra:2401.0043 [pdf] submitted on 2024-01-08 20:00:56
Authors: Sana Shakeel
Comments: 8 Pages.
Machine Learning is the study of computer algorithms that improve automatically through experience and the use of data. Over the past two decades, the complex mathematical expressions of the physical processes of floods have been studied through Machine Learning, and these methods have contributed greatly to the advancement of prediction systems, providing better performance and cost-effective solutions. Due to its vast benefits and potential, Machine Learning is highly popular among hydrologists. By introducing novel Machine Learning methods and hybridizing existing ones, researchers aim to discover more accurate and efficient prediction models. Flooding is the most devastating natural hazard in Pakistan, and the recent flooding has demonstrated its severity through large-scale destruction and the displacement of homes and businesses in Interior Sindh. This paper aims to explore the flood detection methodologies currently used in Pakistan and the potential of Machine Learning in prediction systems within the country. Drawing on sources such as journals, scientific articles, and websites, the research assembles relevant information concerning floods and their prevention.
Category: Artificial Intelligence
[2] viXra:2401.0021 [pdf] submitted on 2024-01-05 01:17:17
Authors: Budee U. Zaman
Comments: 16 Pages.
This paper introduces a preliminary concept aimed at achieving Artificial General Intelligence (AGI) by leveraging a novel approach rooted in two key aspects. Firstly, we present the General Intelligent Network (GIN) paradigm, which integrates information entropy principles with a generative network, reminiscent of Generative Adversarial Networks (GANs). Within the GIN network, original multimodal information is encoded as low-information-entropy hidden state representations (HPPs). These HPPs serve as efficient carriers of contextual information, enabling reverse parsing by contextually relevant generative networks to reconstruct observable information. Secondly, we propose a Generalized Machine Learning Operating System (GML System) to facilitate the seamless integration of the GIN paradigm into the AGI framework. The GML system comprises three fundamental components: an Observable Processor (AOP) responsible for real-time processing of observable information, an HPP Storage System for the efficient retention of low-entropy hidden state representations, and a Multimodal Implicit Sensing/Execution Network designed to handle diverse sensory inputs and execute corresponding actions.
Category: Artificial Intelligence
[1] viXra:2401.0012 [pdf] submitted on 2024-01-03 19:13:36
Authors: Mayur Sinha, Sangram Kesari Ray, Khirawadhi
Comments: 4 Pages.
Runtime Application Security Protection (RASP) is crucial in safeguarding applications against evolving cyber threats. This research presents a novel approach leveraging a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) model as the cornerstone of a robust RASP solution. The fine-tuning process optimizes BERT's natural language processing capabilities for application security, enabling nuanced threat detection and mitigation at runtime. The developed RASP system harnesses BERT's contextual understanding to proactively identify and neutralize potential vulnerabilities and attacks within diverse application environments. Through comprehensive evaluation and experimentation, this study demonstrates the efficacy and adaptability of the BERT-based RASP solution in enhancing application security, thereby contributing to the advancement of proactive defense mechanisms against modern cyber threats.
Category: Artificial Intelligence