Previous months:
2007 - 0703(1)
2010 - 1003(33) - 1004(9) - 1005(5) - 1008(2) - 1009(1) - 1010(1) - 1012(1)
2011 - 1101(2) - 1106(1) - 1107(1) - 1109(2)
2012 - 1201(1) - 1204(3) - 1206(2) - 1207(6) - 1208(6) - 1209(1) - 1210(4) - 1211(2)
2013 - 1301(5) - 1302(2) - 1303(6) - 1304(9) - 1305(1) - 1308(1) - 1309(8) - 1310(7) - 1311(1) - 1312(4)
2014 - 1404(2) - 1405(3) - 1406(1) - 1408(5) - 1410(1) - 1411(1) - 1412(1)
2015 - 1501(1) - 1502(3) - 1503(6) - 1504(3) - 1506(5) - 1507(4) - 1508(1) - 1509(4) - 1510(2) - 1511(4) - 1512(1)
2016 - 1601(1) - 1602(10) - 1603(2) - 1605(4) - 1606(6) - 1607(5) - 1608(7) - 1609(5) - 1610(12) - 1611(14) - 1612(9)
2017 - 1701(4) - 1702(9) - 1703(5) - 1704(9) - 1705(10) - 1706(14) - 1707(24) - 1708(19) - 1709(20) - 1710(13) - 1711(21) - 1712(16)
2018 - 1801(13) - 1802(5) - 1803(16) - 1804(17) - 1805(27) - 1806(22) - 1807(33) - 1808(34) - 1809(17) - 1810(24) - 1811(24) - 1812(27)
2019 - 1901(33) - 1902(29) - 1903(43) - 1904(29) - 1905(18) - 1906(19) - 1907(21) - 1908(23) - 1909(45) - 1910(34) - 1911(25) - 1912(7)
2020 - 2001(13) - 2002(10) - 2003(20) - 2004(20) - 2005(7) - 2006(19) - 2007(12) - 2008(3) - 2009(6) - 2010(5) - 2011(4) - 2012(11)
2021 - 2101(6) - 2102(1) - 2103(9) - 2104(4) - 2105(6) - 2106(3) - 2107(4) - 2108(10) - 2109(46) - 2110(6) - 2111(12) - 2112(8)
2022 - 2201(4) - 2202(7) - 2203(7) - 2205(2) - 2206(2) - 2207(4) - 2208(9) - 2209(7) - 2210(4) - 2211(5) - 2212(5)
2023 - 2301(5) - 2302(6) - 2303(4) - 2304(17) - 2305(8) - 2306(6) - 2307(8) - 2308(9) - 2309(5) - 2310(7) - 2311(6) - 2312(12)
2024 - 2401(8) - 2402(9) - 2403(14) - 2404(6) - 2405(22) - 2406(14) - 2407(14) - 2408(6) - 2409(11) - 2410(12) - 2411(12)
Any replacements are listed farther down
[1518] viXra:2411.0130 [pdf] submitted on 2024-11-20 20:54:05
Authors: Ginni Garg, Arti Garg
Comments: 8 Pages.
We interact with machine learning in our everyday life. On closer reflection, we have been using machine learning since birth: our eyes perform image processing, which extracts features from images, and our brain classifies based on those extracted features. From a very young age our parents tell us "this is a cat, this is a cow, this is a dog", and so on; these are labels given to our neural network, so we are simply training our human neural network. In real-life scenarios we use machine learning everywhere, such as recommendation systems, Earth observation using satellite imagery, the biomedical domain, the agriculture sector, speech recognition, and so on. Machine learning falls under the category of soft computing techniques, and anyone with basic knowledge of mathematics and programming can understand machine learning implementations. In this chapter, we discuss different techniques of machine learning: supervised learning (feature scaling, data augmentation, fundamental mathematics, linear regression, logistic regression, gradient descent, NOT/OR/AND implementation), unsupervised learning (principal component analysis, k-means clustering), a strategy for solving basic problems using machine learning techniques, and various evaluation metrics for assessing the robustness of a model: F1 score, recall, precision, Youden index, and sensitivity. We also discuss solutions to several applications based on machine learning, such as plant disease detection and classification, land use/land cover classification over satellite imagery, and many other remote sensing applications.
Category: Artificial Intelligence
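The gradient-descent training mentioned in the chapter abstract above can be sketched in a few lines. This toy example (synthetic data, made-up learning rate) fits y = w·x + b by descending the mean-squared-error gradient:

```python
import numpy as np

# Synthetic data around the line y = 2x + 1
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)
y = 2 * x + 1 + rng.normal(0, 0.01, 50)

# Gradient descent on mean squared error for y_hat = w*x + b
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    err = w * x + b - y               # residuals
    w -= lr * 2 * np.mean(err * x)    # dMSE/dw
    b -= lr * 2 * np.mean(err)        # dMSE/db
print(round(w, 1), round(b, 1))  # close to 2.0 and 1.0
```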
[1517] viXra:2411.0124 [pdf] submitted on 2024-11-19 11:56:05
Authors: Tofara Moyo
Comments: 3 Pages.
We present a novel scientific document discovery system inspired by molecular chemistry and AI-driven drug discovery. Our approach treats document tokens as atomic units, which are combined to form "molecular" representations of mathematical documents. We employ a probabilistic framework to maximize the likelihood of forming coherent mathematical documents while minimizing the probability of random token combinations and non-STEM document tokens. To achieve this, we develop a token embedding scheme that maps property vectors to a musical keyboard, effectively representing each token as a musical chord. We further differentiate between STEM and non-STEM documents by introducing a harmonic constraint on adjacent nodes in document graphs. Specifically, STEM documents are characterized by polyphonic harmonization of adjacent node vectors, whereas non-STEM documents exhibit dissonant relationships. Our system integrates a graph neural network/transformer decoder architecture, trained end-to-end to generate STEM documents from input graphs. This innovative approach has the potential to revolutionize scientific document discovery and retrieval.
Category: Artificial Intelligence
[1516] viXra:2411.0116 [pdf] submitted on 2024-11-17 16:02:08
Authors: Tofara Moyo
Comments: 5 Pages.
We present a novel method for learning hierarchical abstractions that prioritize competing objectives, leading to improved global expected rewards. Our approach employs a secondary rewarding agent with multiple scalar outputs, each associated with a distinct level of abstraction. The traditional agent then learns to maximize these outputs in a hierarchical manner, conditioning each level on the maximization of the preceding level. We derive an equation that orders these scalar values and the global reward by priority, inducing a hierarchy of needs that informs goal formation. Experimental results on the Pendulum-v1 environment demonstrate superior performance compared to a baseline implementation. We achieved state-of-the-art results.
Category: Artificial Intelligence
[1515] viXra:2411.0102 [pdf] submitted on 2024-11-13 22:17:11
Authors: Xiaoyi Li
Comments: 10 Pages.
Generative AI models are increasingly used across various modalities, including text, images, audio, and video. Estimating the computational cost of generating content is crucial for optimizing performance and resource allocation. This paper introduces the Cost-Per-Byte Principle: C = T × I, a universal law that relates the cost of content generation to per-byte generation time and per-second inference cost. We derive the per-byte generation time analytically based on the model’s computational requirements (FLOPs) and the hardware’s performance (FLOPs per second). By establishing mappings between bytes and different content units (characters, pixels, samples, frames), we provide a modality-agnostic framework for cost estimation. We present a rigorous proof of the principle’s validity and apply it to estimate the costs of current popular models, using publicly available evidence to verify the accuracy and usefulness of this principle.
Category: Artificial Intelligence
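The Cost-Per-Byte Principle C = T × I above can be sketched numerically. Every number here (model size, hardware throughput, utilization, price, bytes per token) is an illustrative assumption, not a figure from the paper:

```python
# All numbers below are illustrative assumptions, not figures from the paper.
flops_per_token = 2 * 7e9        # ~2 x parameters per generated token, 7B decoder
hardware_flops = 3.12e14 * 0.4   # assumed peak FLOP/s times 40% utilization
bytes_per_token = 4              # assumed average bytes of text per token

T = (flops_per_token / hardware_flops) / bytes_per_token  # seconds per byte
I = 2.0 / 3600                   # assumed $2/hour of compute -> dollars per second
C = T * I                        # Cost-Per-Byte: dollars per generated byte

print(f"{C:.2e} $/byte")
```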
[1514] viXra:2411.0090 [pdf] submitted on 2024-11-12 03:39:38
Authors: Mezbah Uddin Rafi
Comments: 17 Pages.
Emotional intelligence (EI) is crucial for interpersonal interactions, mental health, and success across various life domains. Traditionally enhanced through coaching, workshops, and self-guided methods, EI development can now leverage artificial intelligence (AI) as a virtual emotional coach. With advancements in machine learning (ML), natural language processing (NLP), and sentiment analysis, AI can offer real-time emotional assessment and personalized feedback, providing an innovative approach to EI training.
Category: Artificial Intelligence
[1513] viXra:2411.0083 [pdf] submitted on 2024-11-12 22:32:40
Authors: Ait-Taleb Nabil
Comments: 7 Pages.
In this paper, we present, for Gaussian multiple causation, a theorem relating causation to correlations. This theorem rests on a second equality, which we also prove.
Category: Artificial Intelligence
[1512] viXra:2411.0057 [pdf] submitted on 2024-11-07 02:17:57
Authors: Gopal Krishna
Comments: 5 Pages.
This paper establishes the fundamental nature of general intelligence and proves the logical impossibility of Artificial General Intelligence (AGI). We introduce the novel framework of Abstract Sentient Intuition (ASI) and Combinatorial Sentient Intuition (CSI), demonstrating that while CSI involves combining existing abstract concepts, ASI creates fundamentally new abstract concepts. Building upon the established foundation of abstract language, we prove that artificial systems can only implement CSI through programming, as all programming is fundamentally based on existing knowledge. Since general intelligence requires both ASI and CSI, we establish that AGI is logically impossible. We systematically address all potential counterarguments, demonstrating the completeness of this proof. This result has profound implications for artificial intelligence research, cognitive science, and our understanding of consciousness.
Category: Artificial Intelligence
[1511] viXra:2411.0031 [pdf] submitted on 2024-11-04 08:48:35
Authors: Eugene Rulko
Comments: 18 Pages.
The main hurdle for terrain relative navigation systems is the incongruity of visual features between a patch of a satellite reference map and a view from an onboard UAV camera. Images are taken during different time of year, under different weather, vegetation and lighting conditions, with different angles of observation. This work proposes the usage of deep feature template matching, where features are extracted during unsupervised training using a triplet loss. It provides semantic understanding, agnostic to terrain transformations. In order to overcome struggling to navigate over featureless terrains, the work proposes additional usage of visual odometry with the procedure of sticking to the map after encountering enough features, with the procedure of hypothesizing over possible locations. Passing a fragment of the reference map through the trained feature extractor, applying an entropy filter and then a pathfinding algorithm allows planning a flying path over areas rich of features relevant for navigation.
Category: Artificial Intelligence
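The triplet loss used for the unsupervised feature training above has a standard form, sketched here with toy 2-D embeddings (the vectors are placeholders, not real terrain features):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull the positive toward the anchor embedding,
    push the negative at least `margin` further away."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])   # anchor: a map-patch embedding
p = np.array([0.1, 0.0])   # same terrain, different season/lighting
n = np.array([2.0, 0.0])   # unrelated patch
print(triplet_loss(a, p, n))  # 0.0: already separated by more than the margin
```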
[1510] viXra:2411.0029 [pdf] submitted on 2024-11-04 10:40:36
Authors: Mirtill Boglárka Naghi, Bence Tureczki, Katalin Szenes
Comments: 23 Pages.
This paper introduces a novel approach to fortify data security through the seamless integration of fuzzy clustering techniques within blockchain technology. Fuzzy clustering, known for its ability to handle uncertainties and complexities in data, synergizes with blockchain’s decentralized and immutable ledger to establish a robust framework for secure data storage, analysis and retrieval. The proposed fusion not only enhances confidentiality, integrity and effectivity but also offers adaptability to the evolving dynamics of modern data landscapes. In this paper we propose a theoretical model that implements the integration of fuzzy c-means clustering on the blockchain using a cryptographically verifiable distributed computing system. By leveraging the decentralized nature of blockchain, the proposed framework ensures that data analysis processes are verifiable and tamper-resistant. Furthermore, the integration of fuzzy clustering within the blockchain not only bolsters security but also introduces a layer of transparency in the confidential data handling process.
Category: Artificial Intelligence
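The fuzzy c-means clustering at the core of the proposal above can be sketched in a few lines. This is the standard FCM update (random initialization, Euclidean distances), not the paper's blockchain-integrated version:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns soft memberships U (n x c) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # each row sums to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))                # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
U, centers = fuzzy_c_means(X)
print(U.argmax(axis=1))  # first two points share one cluster, last two the other
```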
[1509] viXra:2411.0021 [pdf] submitted on 2024-11-03 02:18:46
Authors: Oleg Kupervasser, Domoshnitsky Alexander
Comments: 45 Pages.
This presentation describes algorithms for airborne ground-robot control and navigation developed at Ariel University during the Kamin project.
Category: Artificial Intelligence
[1508] viXra:2411.0020 [pdf] submitted on 2024-11-03 02:23:41
Authors: Oleg Kupervasser, Domoshnitsky Alexander
Comments: 72 Pages.
This presentation describes algorithms for vision-based UAV (unmanned aerial vehicle) control and navigation developed at Ariel University during the Nofar project.
Category: Artificial Intelligence
[1507] viXra:2411.0004 [pdf] submitted on 2024-11-01 20:37:47
Authors: Vitaly E. Pilkin
Comments: 12 Pages. (Correction made by viXra Admin to conform with the requirements of viXra.org)
This paper provides answers to current questions of experts working with artificial intelligence (AI), and offers recommendations on how to control AI development and prevent AI from getting out of human control.
Category: Artificial Intelligence
[1506] viXra:2410.0181 [pdf] submitted on 2024-10-30 20:54:02
Authors: Axel Egon, Abram Gracias, Peter Broklyn
Comments: 17 Pages.
The integration of artificial intelligence (AI) in human-computer interaction (HCI) has significantly transformed how users engage with technology, particularly through gesture recognition. This paper explores the advancements in AI-driven gesture recognition systems, emphasizing their potential to enhance user experience across various applications, from gaming and virtual reality to accessibility tools and smart environments. We analyze the underlying algorithms and machine learning techniques that facilitate real-time gesture detection and interpretation, highlighting the importance of accuracy and responsiveness in user interactions. Additionally, the paper discusses the challenges faced in developing robust gesture recognition systems, including variability in user behavior, environmental factors, and the need for extensive training data. By examining case studies and recent innovations in the field, we illustrate the growing impact of AI-driven gesture recognition on user interfaces and the future of interactive technology. Ultimately, this research aims to provide insights into the transformative role of gesture-based interactions in creating more intuitive, immersive, and inclusive digital experiences.
Category: Artificial Intelligence
[1505] viXra:2410.0160 [pdf] submitted on 2024-10-26 15:40:37
Authors: Tofara Moyo
Comments: 3 Pages.
A spiking neural network's neurons can be viewed as feature detectors, or alternatively as instances of hieroglyphic symbols defined by the features they represent. The set of activations at any time step then represents a document written in this alphabet. If we feed this information from the previous time step back to the spiking neural network at each time step, the network will navigate its own space of internal representations and form a grounded language in which to analyze its own internal states and guide their evolution. We describe this method and how the algorithm could use it to plan, design connections, and critique its own thought processes whenever doing so increases the expected reward. We also show a simple method for an agent to learn levels of abstraction, ordered by priority, that ultimately increase the global expected reward. Each level is associated with a separate scalar output of the neural network at each time step t, which is fed back to the agent as part of the state at time t+1. The agent initially correlates these scalars with features of the state at random, but learns the correct assignment by doing so in a way that increases the global reward. We describe an equation meant to order these scalar values and the global reward by priority, inducing a hierarchy of needs for the agent. This then forms the basis of its goal formation.
Category: Artificial Intelligence
[1504] viXra:2410.0156 [pdf] submitted on 2024-10-26 22:08:37
Authors: Thiago M. Nóbrega
Comments: 4 Pages.
Qualia—the subjective experience of perception—has long been considered unique to biological consciousness. However, with the advent of sophisticated Artificial Intelligence (AI) models, the question arises: could complex AI architectures also manifest a form of qualia, albeit different in nature from biological systems? This paper explores the hypothesis that both biological and artificial systems may generate unique moments of consciousness or qualia through information processing. By examining theories of consciousness, such as emergentism and Integrated Information Theory (IIT), this paper discusses the potential for qualia to arise as an emergent phenomenon in systems that handle complex information processing. Additionally, the ethical implications of AI-generated qualia are explored, alongside a discussion of what this means for the future of AI and philosophy of mind.
Category: Artificial Intelligence
[1503] viXra:2410.0147 [pdf] submitted on 2024-10-22 23:01:19
Authors: Rick Ferreira, Melissa Smith
Comments: 16 Pages.
There are two common problems when designing and using artificial neural networks. The first is the need for better performance. The second is the need to combat increasing complexity with enhancements. In this paper we design a way to do both. In each iteration, we calculate what weights would give the optimal answer for each input-output pair. The weights are then updated by the difference between the ideal weights and the current weights, multiplied by the learning rate. We find that this method not only converges much faster on an image classification problem but is also much simpler to understand, relying on no calculus or derivatives. However, the method only works for a shallow, single-layer neural network. By using simple arithmetic, neural networks can be updated in a way that is both simpler and more efficient than back-propagation.
Category: Artificial Intelligence
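One reading of the per-sample "ideal weight" update described above, reduced to a single linear unit y = w·x. The scalar case w_ideal = y/x is our illustrative interpretation, not code from the paper:

```python
import numpy as np

# Scalar sketch: for each pair (x, y), the weight giving the exact answer
# for y_hat = w * x is w_ideal = y / x; move toward it by the learning rate.
rng = np.random.default_rng(1)
x = rng.uniform(0.5, 1.5, 200)
y = 3.0 * x                      # noiseless target relation, true w = 3

w, lr = 0.0, 0.1
for xi, yi in zip(x, y):
    w_ideal = yi / xi            # ideal weight for this input-output pair
    w += lr * (w_ideal - w)      # update by the difference times the rate
print(round(w, 2))  # 3.0
```

No gradients are computed anywhere; the update is plain arithmetic, which is the point the abstract makes.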
[1502] viXra:2410.0106 [pdf] submitted on 2024-10-19 23:48:56
Authors: Hamidreza Seiti, Mostafa Shabani
Comments: 34 Pages.
This study addresses the complexities of selecting the optimal virtual reality (VR) platform for risk management in Supply Chain Management (SCM), emphasizing the significance of human-centric attributes in this decision-making process. As SCM encompasses the strategic coordination of suppliers, manufacturers, and distributors, the integration of advanced technologies, including VR, becomes essential for enhancing operational efficiency and resilience in today’s dynamic market environments. This paper proposes a novel MADM model that incorporates the R.Graph method to account for the interactions between criteria. We developed two distinct algorithms: the first directly calculates ranks based on attribute interactions, while the second modifies weights to reflect these interactions. By focusing on user experience, accessibility, collaboration features, and other relevant attributes, the model aims to facilitate a comprehensive evaluation of VR platforms. The application of qualitative input data allows for a more nuanced analysis, particularly in scenarios where quantitative data is limited.
Category: Artificial Intelligence
[1501] viXra:2410.0105 [pdf] submitted on 2024-10-17 23:13:42
Authors: Daniel Uranga
Comments: 4 Pages.
In this study, we analyze a dataset of survey papers on Large Language Models (LLMs) published over the last three years to gain insights into the current trends surrounding LLMs. We primarily analyze the author landscape and how effectively a survey's taxonomy can be predicted from its title, summary, and listed categories. We find that the number of surveys released has increased drastically in the last three years. Also, most surveys have around 8 authors, but each author usually appears on only one survey, indicating that the research is spread widely across the field. Finally, our attempt to predict taxonomies with the machine learning methods we applied was unsuccessful, though the attempts yield valuable insights about the dataset.
Category: Artificial Intelligence
[1500] viXra:2410.0101 [pdf] submitted on 2024-10-18 09:56:05
Authors: Hidehiko Okada
Comments: 8 Pages.
The author previously reported an experimental result of evolutionary reinforcement learning of neural network controllers. In the previous study, a conventional multilayer perceptron (MLP) was employed, in which connection weights were real numbers. In this study, the author experimentally applies an evolutionary algorithm to the reinforcement training of binary neural networks. Both studies use the same task and the same evolutionary algorithm: the Acrobot control problem and Evolution Strategy (ES), respectively. The differences lie in the memory size per connection weight and the model size of the neural network. The findings from this study are (1) the optimal number of hidden units for the binary MLP was 128 among the choices of 16, 32, 64, 128 and 256; (2) a larger population size benefited ES more than a greater number of generations; and (3) binary connection weights can achieve comparable control performance while reducing memory size by half.
Category: Artificial Intelligence
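An Evolution Strategy over binary weights can be sketched with a toy fitness. Here OneMax (count of ones) stands in for the Acrobot control reward; the (1+λ) scheme and mutation rate are illustrative assumptions, not the paper's exact setup:

```python
import random

# Toy (1+lambda) Evolution Strategy on 64 binary weights, maximizing
# OneMax (number of ones) as a stand-in for the control reward.
random.seed(0)
n, lam, p_flip = 64, 20, 1 / 64

def fitness(bits):
    return sum(bits)

parent = [random.randint(0, 1) for _ in range(n)]
for _ in range(200):
    offspring = [[b ^ (random.random() < p_flip) for b in parent]
                 for _ in range(lam)]
    best = max(offspring, key=fitness)
    if fitness(best) >= fitness(parent):   # elitist acceptance, never worsens
        parent = best
print(fitness(parent))  # typically reaches the optimum of 64
```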
[1499] viXra:2410.0068 [pdf] submitted on 2024-10-10 19:44:18
Authors: Remi Cornwall
Comments: 4 Pages.
The 2024 Nobel Prize in Physics made a category error in awarding the prize to a pattern-recognition circuit or program. The neuron, and the implication that it is responsible for thought, was discovered by biologists, and the physical understanding of information worked synergistically with that concept. The Artificial Neural Network (ANN) is a construct of computer science, made possible by applied science and engineering; it simply recognises patterns. It does not follow that the ANN paradigm explains intelligence, nor how it emerges in the universe. More worthy recipients in this area would have been the originators of information theory, those probing the limits of computation in quantum computing, or those who have found Gödelian limitations in physics, such as the incomputability of the spectral gap in certain materials.
Category: Artificial Intelligence
[1498] viXra:2410.0049 [pdf] submitted on 2024-10-09 11:20:22
Authors: Ait-Taleb Nabil
Comments: 8 Pages.
In this paper, we will show, in a Gaussian context, how to obtain a causal relationship between an output variable and three input variables without obtaining any correlation between the output variable and the input variables. In other words, this paper demonstrates the following situation for Gaussian signals: causation without correlation.
Category: Artificial Intelligence
[1497] viXra:2410.0037 [pdf] submitted on 2024-10-07 20:51:48
Authors: Nisanth Nimashakavi
Comments: 9 Pages.
In the pursuit of creating fairer hiring practices and promoting workforce diversity, this research project explores the potential of Natural Language Processing (NLP) techniques to identify and rectify biases in job descriptions. The language used in job postings can inadvertently perpetuate biases and deter applicants from underrepresented backgrounds. Leveraging cutting-edge NLP methods, this study aims to automatically detect and address biases, fostering a more inclusive recruitment process. By examining the biases within job descriptions, organizations can attract a more diverse range of applicants and cultivate an inclusive workplace culture. Through the application of NLP, this research seeks to drive positive change in recruitment practices, ultimately contributing to a more equitable job market.
Category: Artificial Intelligence
[1496] viXra:2410.0027 [pdf] submitted on 2024-10-05 20:03:29
Authors: HaiSheng Wang
Comments: 21 Pages. (Correction made by viXra Admin to conform with the requirements of viXra.org)
With the popularization of smart wearable devices, the collection of continuous pulse waveforms has become easier and easier, providing convenience for health monitoring. This study explores the use of pulse waveform data collected by modern wearable devices, combined with spectrum analysis technology, to explore physiological indicators related to the organ systems of "heart, liver, spleen, lungs, and kidneys" in traditional Chinese medicine. Based on the pulse diagnosis theory of traditional Chinese medicine, the study explored the changes in pulse waves under different organ states by analyzing the harmonic characteristics of pulse waves, and how these changes are related to the syndrome classification system of traditional Chinese medicine.
Category: Artificial Intelligence
[1495] viXra:2410.0022 [pdf] submitted on 2024-10-04 09:02:13
Authors: Basab Jha
Comments: 5 Pages.
Large Language Models (LLMs) like GPT-4 and mBERT have revolutionized natural language processing (NLP) by providing multilingual capabilities, making it possible to develop models that handle diverse linguistic inputs across various languages. However, despite these advances, there remains a noticeable performance gap between how well these models perform in high-resource languages such as English and low-resource languages such as Nepali or Malagasy. We term this phenomenon the "Babel Effect," highlighting the disproportionate performance that arises from differences in resource availability across languages. This paper aims to explore the root causes of these performance discrepancies in LLMs, focusing on the underlying challenges in tokenization, training, and data scarcity. We utilize cross-lingual benchmarks, such as XGLUE and TyDiQA, to quantify these performance variations and examine them in detail. Furthermore, we propose solutions, including enhancing tokenization strategies, employing data augmentation techniques, and refining fine-tuning methods. The paper concludes with a discussion on how these improvements can mitigate the Babel Effect and lead to more equitable language modeling across diverse linguistic contexts.
Category: Artificial Intelligence
[1494] viXra:2409.0161 [pdf] submitted on 2024-09-29 00:14:02
Authors: Hamidreza Seiti, Reza Javadi, Hossein Ghanbari, Sina Keshavarz
Comments: 56 Pages. In Chinese (Converted to pdf by viXra admin - Please submit article in pdf format only)
Supply chain risk management is a critical challenge in today’s increasingly complex and interconnected global markets, particularly within specific supply chains where disruptions can have far-reaching consequences. Generative Artificial Intelligence (GAI) transformer models have emerged as powerful tools for effectively managing these risks. However, selecting the most suitable GAI model for specific supply chain contexts remains a significant challenge due to the diverse range of available models and the complex interplay of risk factors involved. This challenge is further compounded by the necessity of considering human-centric criteria to ensure that the chosen model aligns with ethical standards and practical needs. This paper addresses this challenge by introducing an enhanced multi-criteria decision-making (MCDM) framework that refines the Evaluation based on Distance from Average Solution (EDAS) method. Our approach first improves the logical structure of the EDAS method and then incorporates the interactions and interdependencies between criteria, thereby overcoming key limitations of traditional MCDM methods and providing a more accurate and comprehensive evaluation process. We applied this improved EDAS model to the task of selecting the best GAI transformer model for risk management in the food supply chain. Through a systematic evaluation of various GAI models, considering their performance across multiple risk factors, our study identified GPT (Generative Pre-trained Transformer) as the most suitable model for this context, demonstrating superior capabilities in addressing the complex challenges associated with food supply chain risks. This research not only advances the theoretical foundation of MCDM techniques but also offers practical insights into the application of AI in supply chain management, highlighting the importance of human-centric AI approaches that prioritize transparency, ethical alignment, and effective decision-making.
Category: Artificial Intelligence
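The classical EDAS steps the paper refines (average solution, positive and negative distances from it, weighted and normalized scores) can be sketched as follows; the decision matrix and weights are hypothetical:

```python
import numpy as np

# Classical EDAS with benefit criteria only; the score matrix for three
# candidate GAI models and the criterion weights are hypothetical.
X = np.array([[8.0, 7.0, 9.0],    # candidate model A
              [6.0, 8.0, 7.0],    # candidate model B
              [7.0, 6.0, 6.0]])   # candidate model C
w = np.array([0.5, 0.3, 0.2])     # criterion weights, sum to 1

AV = X.mean(axis=0)                  # average solution per criterion
PDA = np.maximum(0, X - AV) / AV     # positive distance from average
NDA = np.maximum(0, AV - X) / AV     # negative distance from average
SP, SN = PDA @ w, NDA @ w            # weighted sums per alternative
NSP = SP / SP.max()
NSN = 1 - SN / SN.max()
AS = (NSP + NSN) / 2                 # appraisal scores in [0, 1]
print(AS.argmax())  # 0: candidate A ranks first
```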
[1493] viXra:2409.0158 [pdf] submitted on 2024-09-28 20:16:18
Authors: Meir Dudai
Comments: 46 Pages.
This paper explores the transformative potential of AI-powered underwriting engines in revolutionizing credit decisioning processes for embedded lending. Traditional methods of credit assessment often fall short in accurately evaluating creditworthiness, particularly for underserved populations. AI-powered underwriting engines address these limitations by leveraging machine learning algorithms and alternative data sources to provide more comprehensive and nuanced credit evaluations. This study examines the current landscape of credit decisioning, identifying key challenges and presenting a detailed analysis of AI-powered underwriting engines, including their technical architecture, key features, and potential for improving accuracy, speed, and inclusivity in lending decisions. The paper also considers implementation strategies, potential business impacts, and critical risk and compliance considerations. Finally, it looks ahead to future directions and scalability of AI-powered underwriting engines, considering emerging technologies and evolving regulatory landscapes.
Index Terms: AI, credit decisioning, embedded lending, financial inclusion, machine learning, underwriting engines
Category: Artificial Intelligence
[1492] viXra:2409.0107 [pdf] submitted on 2024-09-20 04:44:00
Authors: Reza Safdari, Mohammad Koohi-Moghaddam, Kyongtae Tyler Bae
Comments: 7 Pages.
In this study, we implemented a two-stage deep learning-based approach to segment lesions in PET/CT images for the AutoPET III challenge. The first stage utilized a DynUNet model for coarse segmentation, identifying broad regions of interest. The second stage refined this segmentation using an ensemble of SwinUNETR, SegResNet, and UNet models. Preprocessing involved resampling images to a common resolution and normalization, while data augmentation techniques such as affine transformations and intensity adjustments were applied to enhance model generalization. The dataset was split into 80% training and 20% validation, excluding healthy cases. This method leverages multi-stage segmentation and model ensembling to achieve precise lesion segmentation, aiming to improve robustness and overall performance.
Category: Artificial Intelligence
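The second-stage ensembling above reduces to averaging the models' probability maps and thresholding; a sketch with synthetic 2×2 outputs (the real pipeline operates on 3-D PET/CT volumes):

```python
import numpy as np

# Average the refinement models' probability maps, then threshold at 0.5.
# The 2x2 maps are synthetic placeholders, not real model outputs.
probs = [np.array([[0.9, 0.2], [0.6, 0.1]]),   # e.g. SwinUNETR output
         np.array([[0.8, 0.4], [0.7, 0.2]]),   # e.g. SegResNet output
         np.array([[0.7, 0.3], [0.5, 0.1]])]   # e.g. UNet output
mask = (np.mean(probs, axis=0) > 0.5).astype(np.uint8)
print(mask.tolist())  # [[1, 0], [1, 0]]
```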
[1491] viXra:2409.0094 [pdf] submitted on 2024-09-17 08:58:05
Authors: Aryaman Sharma
Comments: 49 Pages.
Graph Neural Networks (GNNs) and reinforcement learning techniques are combined in GRAPPLE (GraphSAGE Reinforced with Actor-Proximal Policy Optimization), a revolutionary framework for improving personalized recommendation systems. By fusing Proximal Policy Optimization (PPO) with GraphSAGE, a powerful representation learning technique, GRAPPLE adapts dynamically to changing user preferences and item dynamics, efficiently extracting rich relational information from interaction graphs and capturing complex user-item relationships. Adaptive learning techniques let the model continuously update its recommendation criteria in response to user feedback, increasing the precision of recommendations while maintaining user satisfaction. Experiments performed on a real-world dataset demonstrate that our algorithm outperforms conventional recommendation methods, showing its superiority across a range of recommendation scenarios as well as its durability and scalability. By combining GNNs with reinforcement learning methods, GRAPPLE represents a significant advancement in recommendation systems, providing a versatile and efficient way to manage interaction patterns and fluctuating user preferences in recommendation tasks.
Category: Artificial Intelligence
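The GraphSAGE mean-aggregation step GRAPPLE builds on can be sketched as a single layer; the features, weights, and the 4-node toy graph here are random placeholders, not the trained model:

```python
import numpy as np

# One GraphSAGE mean-aggregation layer on a tiny undirected graph.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))                  # 4 nodes, 8-dim input features
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
W = rng.normal(size=(16, 8)) * 0.1           # maps concat(self, neigh-mean) -> 8

def sage_layer(H, adj, W):
    out = []
    for v in range(len(H)):
        neigh = H[adj[v]].mean(axis=0)               # mean of neighbor features
        h = np.maximum(np.concatenate([H[v], neigh]) @ W, 0)  # linear + ReLU
        out.append(h / (np.linalg.norm(h) + 1e-12))  # L2-normalize, as in GraphSAGE
    return np.stack(out)

Z = sage_layer(H, adj, W)
print(Z.shape)  # (4, 8)
```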
[1490] viXra:2409.0086 [pdf] submitted on 2024-09-16 09:57:14
Authors: Mezbah Uddin Rafi
Comments: 16 Pages.
This paper examines the innovative application of Artificial Intelligence (AI) to simulate real-time historical what-if scenarios, exploring its potential for creating immersive and engaging educational experiences. AI-driven simulations could revolutionize the way history is taught, allowing users to engage directly with alternative historical outcomes. By exploring possible scenarios—such as different outcomes for major events like World War II or the Cuban Missile Crisis—students and educators can gain deeper insights into historical processes. This paper discusses the methodologies behind AI-driven historical simulations, the technical and ethical challenges involved, and the future potential of this technology.
Category: Artificial Intelligence
[1489] viXra:2409.0073 [pdf] submitted on 2024-09-13 21:11:56
Authors: Sofiane Delloue
Comments: 20 Pages. (Author name added to the article by viXra Admin as required)
We introduce Newcoin, a novel protocol designed to accelerate open-source AI advancement by enabling the pooling of learning instances across diverse pipelines. This approach has the potential to multiply epistemic affordances exponentially, fostering unprecedented growth in AI capabilities. Newcoin leverages cryptographically signed statements and a game-theoretical consensus mechanism, which aggregates weighted human feedback to evaluate and reward network contributions. The open interpretability of learning signals contributes to improved generalization capabilities through several mechanisms. This shared cognitive space, where learning signals from various domains and tasks are universally interpretable, allows AI systems to leverage collective knowledge to better generalize to new, unseen problems. By integrating robust security measures with an incentive structure that promotes high-quality outputs, Newcoin creates a self-improving ecosystem for AI development. This innovative framework not only accelerates open-source AI capabilities but also addresses critical concerns of alignment and safety, paving the way for responsible and rapid advancements in artificial intelligence.
Category: Artificial Intelligence
[1488] viXra:2409.0068 [pdf] submitted on 2024-09-13 20:56:41
Authors: Maxim Shatskiy
Comments: 4 Pages.
This document describes a solution to the AutoPET3 Challenge. We show how an ensemble of Unet++ models with EfficientNet-B7 backbones, trained separately on FDG and PSMA data, can perform well in this competition. Can a single model beat two specialized models? The competition results will tell.
Category: Artificial Intelligence
[1487] viXra:2409.0063 [pdf] submitted on 2024-09-12 09:25:02
Authors: Jeongik Cho
Comments: 10 Pages.
The classifier gradient penalty GAN is a GAN proposed to perform self-supervised class-conditional data generation and clustering on unlabeled datasets. Its generator takes a continuous latent vector and a categorical latent vector as input and generates a class-conditional data point corresponding to the categorical latent vector. In this paper, we propose leveraging a codebook architecture to improve the performance of the classifier gradient penalty GAN. In the proposed architecture, instead of taking the one-hot categorical latent vector directly, the generator takes the page vector of the codebook corresponding to the index of the categorical latent vector. Unlike the codebook used in generative models with vector quantization, the codebook of the proposed architecture is not embedded with an encoder; it is simply trainable and updated via the generator loss, like the other trainable parameters in the generator. The proposed architecture improved the quality of the generated data, the class-conditional data generation performance, and the clustering performance of the classifier gradient penalty GAN.
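The codebook lookup described here can be sketched as follows; the shapes, the `generator_input` helper, and the concatenation scheme are illustrative assumptions, not the paper's exact architecture:

```python
# Sketch of replacing a one-hot categorical input with a trainable
# codebook lookup: the categorical index selects a "page" vector.

import random

random.seed(0)
num_classes, page_dim = 4, 3

# Trainable codebook: one page vector per categorical class (in a real
# model these entries would be updated by the generator loss).
codebook = [[random.gauss(0.0, 1.0) for _ in range(page_dim)]
            for _ in range(num_classes)]

def generator_input(continuous_z, cat_index):
    """Concatenate the continuous latent with the looked-up page vector."""
    return continuous_z + codebook[cat_index]

z = [0.1, -0.2]
g_in = generator_input(z, cat_index=2)
```

The point of the lookup is that the page vectors are free parameters, unlike a fixed one-hot encoding.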
Category: Artificial Intelligence
[1486] viXra:2409.0056 [pdf] submitted on 2024-09-11 19:55:18
Authors: Maxim Shatskiy
Comments: 4 Pages.
This document describes a solution to the AutoPET3 Challenge. We show how an ensemble of Unet++ models with EfficientNet-B7 backbones, trained separately on FDG and PSMA data, can perform well in this competition. Can a single model beat two specialized models? The competition results will tell.
Category: Artificial Intelligence
[1485] viXra:2409.0047 [pdf] submitted on 2024-09-09 17:47:18
Authors: Sing Kuang Tan
Comments: 10 Pages.
In this paper, I propose a new Boolean Structured Variational Autoencoder Deep Learning Network (BSvarautonet), built on top of BSautonet and based on the concept of monotone multi-layer Boolean algebra. The Kullback-Leibler (KL) divergence used in the traditional variational autoencoder suffers from convergence problems and numerical instabilities. Due to the Boolean structured design of BSautonet, the bottleneck latent-space embeddings are naturally distributed as a multivariate Gaussian. Applying a whitening normalization to the latent space transforms it into a unit Gaussian distribution. Analysis of the data points in the latent space and of generated MNIST digit images shows that the network has all the properties of a variational autoencoder. The BS autoencoder is a masked-noise denoising model, so it can act like a diffusion model, incrementally generating a digit image from a noisy one through repeated applications of the autoencoder.
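A simplified sketch of the whitening step, assuming per-dimension standardization; full whitening would also decorrelate dimensions via the eigendecomposition of the covariance matrix, which this diagonal version omits:

```python
# Diagonal whitening: standardize each latent dimension to zero mean
# and unit variance (toy latent vectors, not real BSautonet embeddings).

import math

def whiten_diagonal(latents):
    n, dim = len(latents), len(latents[0])
    means = [sum(v[i] for v in latents) / n for i in range(dim)]
    stds = [math.sqrt(sum((v[i] - means[i]) ** 2 for v in latents) / n)
            for i in range(dim)]
    return [[(v[i] - means[i]) / stds[i] for i in range(dim)]
            for v in latents]

latents = [[1.0, 10.0], [3.0, 14.0], [5.0, 18.0]]
white = whiten_diagonal(latents)
```

After the transform each dimension has zero mean and unit variance, the property needed for sampling from a unit Gaussian.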
Category: Artificial Intelligence
[1484] viXra:2409.0018 [pdf] submitted on 2024-09-04 20:17:28
Authors: R. Peeyoos
Comments: 26 Pages. (Note by viXra Admin: Author's first name is required)
With advancements in large language models (LLMs) and multimodal AIs capable of generating code, media, and automation, the realization of artificial general intelligence (AGI) is increasingly plausible. As the potential for achieving sentient AGI within the coming decades grows, implementing effective safety measures to align AGI with human interests becomes crucial. Current AGI safety strategies primarily focus on hardware, coding, and mathematical constraints, but these may not be sustainable in the long term. As AGI evolves, it could bypass or overcome these limitations. This paper introduces a novel approach to AGI alignment by avoiding traditional safety measures in areas where AGI is inherently strong. Instead, it proposes establishing a symbiotic relationship between humans and AGI, leveraging human strengths and AGI's vulnerabilities. This approach aims to ensure AGI's benevolence by choice, reducing its motivation to act against humanity and providing a more reliable long-term solution compared to conventional strategies that enforce compliance.
Category: Artificial Intelligence
[1483] viXra:2408.0130 [pdf] submitted on 2024-08-30 15:22:54
Authors: Ait-Taleb nabil
Comments: 5 Pages.
In this paper, I propose a topology that allows measuring a neighborhood for Bayesian networks. This topology corresponds to a Kullback-Leibler distance ratio and makes it possible to know the distance between a current Bayesian network and a Bayesian network having a transitive closure. Applied to Bayesian networks, this topology is normalized and therefore varies from 0 to 1: the value 0 corresponds to a Bayesian network with transitive closure and the value 1 to a Bayesian network without edges.
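The Kullback-Leibler divergence underlying such a distance ratio can be computed for discrete distributions as follows; the paper's normalized ratio itself is not reproduced here, and the example distributions are invented:

```python
# KL(p || q) for two discrete distributions given as probability lists.

import math

def kl_divergence(p, q):
    """Sum of p_i * log(p_i / q_i) over outcomes with p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

d = kl_divergence([0.5, 0.5], [0.9, 0.1])  # = ln(5) - ln(3)
```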
Category: Artificial Intelligence
[1482] viXra:2408.0124 [pdf] submitted on 2024-08-28 20:50:31
Authors: Vasanth Kumar Bhukya, Umesh Bhukya
Comments: 22 Pages. 20 figures, 6 chapters
Nowadays, text summarization has become important as the amount of text data available online grows at an exponential rate. Most text classification systems require going through a huge amount of data, and producing exact and meaningful summaries of large texts is, in general, a time-consuming endeavour. Hence, generating abstractive summaries that retain the key information of the data and using them to train machine learning models makes these models space- and time-efficient. Abstractive text summarization has successfully moved from linear models to nonlinear neural network models using sparse models [1]. This success comes from the application of deep learning models to natural language processing tasks, where these models are capable of modeling the interrelating patterns in data without hand-crafted features. The Text-to-Text Transfer Transformer (T5) approach was used to investigate the text summarization problem, and the results showed that the transfer-learning-based model performed significantly better for abstractive text summarization than a sequence-to-sequence recurrent model.
Category: Artificial Intelligence
[1481] viXra:2408.0118 [pdf] submitted on 2024-08-27 05:40:26
Authors: Quynh Nguyen
Comments: 14 Pages.
The application of Graph Neural Networks (GNNs) in computational chemistry provides a powerful approach to modeling and predicting the properties of molecular compounds. GNNs represent atoms as nodes and bonds as edges, capturing the complex interactions within molecular graphs. This approach offers a robust method for predicting chemical properties, including molecular stability, reactivity, and toxicity. In this paper, we explore various GNN architectures and their ability to generalize across different molecular datasets, such as QM9 and MoleculeNet. As a specific application, we propose a novel framework that utilizes GNNs to predict and identify potential HIV inhibitor molecules by analyzing their graph-based representations. This research aims to contribute to the discovery and design of effective HIV inhibitors, offering a promising direction for future antiviral drug development.
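A minimal illustration of message passing on a molecular graph, with made-up atom features and unweighted aggregation; a real model trained on QM9 or MoleculeNet would learn parameterized update and readout functions:

```python
# One round of neighbor aggregation over atoms (nodes) and bonds
# (edges), followed by sum-pooling into a graph-level vector that a
# downstream predictor could map to a molecular property.

def message_pass(features, bonds):
    """Each atom adds the sum of its bonded neighbors' features."""
    out = {}
    for atom, feat in features.items():
        agg = [0.0] * len(feat)
        for a, b in bonds:
            other = b if a == atom else a if b == atom else None
            if other is not None:
                agg = [x + y for x, y in zip(agg, features[other])]
        out[atom] = [x + y for x, y in zip(feat, agg)]
    return out

def readout(features):
    """Sum-pool node states into one graph-level representation."""
    dim = len(next(iter(features.values())))
    return [sum(f[i] for f in features.values()) for i in range(dim)]

# Toy "molecule": three atoms in a chain A-B-C.
feats = {"A": [1.0, 0.0], "B": [0.0, 1.0], "C": [1.0, 1.0]}
bonds = [("A", "B"), ("B", "C")]
graph_vec = readout(message_pass(feats, bonds))
```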
Category: Artificial Intelligence
[1480] viXra:2408.0087 [pdf] submitted on 2024-08-20 20:20:18
Authors: Dimiter Dobrev
Comments: 11 Pages. In Bulgarian
The Bible says that God created man in his own image and likeness. Today we are trying to create AI in our own image and likeness. The difference is that God created a weak and vulnerable being to care for, while we are trying to create an all-powerful being who will be incomparably smarter than us and who will care for us. That is, we are trying to create our new God, but it is not at all certain what this new God will be like. He may be kind and forgiving, but he may also be terribly strict and demand too much of us. Every human has a character; likewise, the AI will have a character. We consider the AI as a program with parameters, and these parameters determine its character. The idea is to use these parameters to determine the character we want the AI to have.
Category: Artificial Intelligence
[1479] viXra:2408.0083 [pdf] submitted on 2024-08-19 18:42:20
Authors: Koichiro Kanno
Comments: 4 Pages.
This paper examines the effectiveness of using sub-character tokenization for Japanese language processing by utilizing the ALBERT [1] model. I focused on radical and element-based sub-character tokenization and compared the results with traditional character-based tokenization. The evaluation was conducted on a dataset derived from the Japanese novel "Botchan," containing 500 sentences. The results indicate that sub-character tokenization significantly improves the model's perplexity, especially when using radical and element-based approaches.
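The perplexity used above to compare tokenizations can be computed from per-token model probabilities; the probabilities below are invented for illustration, not taken from the ALBERT experiments:

```python
# Perplexity = exp of the average negative log-likelihood per token.

import math

def perplexity(token_probs):
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model assigning uniform probability 1/4 to each token scores
# perplexity 4; lower values mean the model is less "surprised".
pp = perplexity([0.25, 0.25, 0.25, 0.25])
```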
Category: Artificial Intelligence
[1478] viXra:2408.0038 [pdf] submitted on 2024-08-09 16:14:18
Authors: Ait-Taleb nabil
Comments: 11 Pages.
In a Gaussian multivariate context, we describe the steps to follow to differentiate the notion of Pearson correlation from that of causality. This paper includes numerical examples clearly showing the difference between the two notions.
Category: Artificial Intelligence
[1477] viXra:2408.0037 [pdf] submitted on 2024-08-09 19:36:22
Authors: Tanvir Rahman, Ataur Rahman, Tamanna Afroz, Rafia Akhter
Comments: 6 Pages.
Deep learning, particularly the U-Net architecture, has shown remarkable performance in various image segmentation tasks, including medical and non-medical applications. This versatile approach enables automated analysis of complex images, which is crucial for improving diagnostic accuracy and efficiency. Among medical applications, breast cancer detection is a prominent example where deep learning models have demonstrated superior performance over traditional methods. We examine various techniques used to enhance U-Net's ability to detect breast cancer, and we review the datasets most commonly used for medical image segmentation and their effectiveness across a range of applications. Our proposed custom U-Net model extends the standard U-Net architecture by incorporating advanced techniques that enhance its ability to handle segmentation tasks. These improvements yield higher accuracy, Intersection over Union (IoU) scores, and Dice coefficient scores, setting a new benchmark for segmentation models.
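The IoU and Dice scores used to evaluate such models can be computed from binary masks like so (flat lists stand in for 2-D segmentation masks):

```python
# IoU = |pred ∩ target| / |pred ∪ target|
# Dice = 2|pred ∩ target| / (|pred| + |target|)

def iou_and_dice(pred, target):
    inter = sum(p and t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    iou = inter / union if union else 1.0
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    return iou, dice

iou, dice = iou_and_dice([1, 1, 0, 0], [1, 0, 1, 0])
```

Dice weights the overlap more generously than IoU, which is why papers often report both.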
Category: Artificial Intelligence
[1476] viXra:2407.0178 [pdf] submitted on 2024-07-30 05:59:24
Authors: Jaehak Lee
Comments: 17 Pages.
Various macroscopic optical properties that are not observable in conventional homogeneous media may be realized in an optical metasurface by adjusting its sub-wavelength nanostructure. However, this requires precise and effective designing of structures. Therefore, systematic design methodologies for nanophotonic structures have garnered significant interest over the recent years. In this paper, we propose a deep-learning-based fast and efficient inverse design method for nanophotonic metasurface structures. A 10 × 10 plasmonic nanohole array structure perforated on an aluminum film was used to control both the amplitude and phase of the transmitted light with a high contrast using a small number of structural variables. To identify the structure that induces a desired field distribution, we constructed deep neural network (DNN) models that interconnected the structural variables of the plasmonic nanohole array with those of the field distributions. The DNNs were trained using data obtained via finite-difference time domain simulations. Moreover, we evaluated the performance of the proposed inverse design method for several targets, e.g., a rectangular grid with randomly determined intensities on different cells. The results confirmed an average cosine similarity of 0.86 for a field distribution at a focal length of 2,000 nm on a 4 × 4 grid with randomly determined intensities.
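The cosine-similarity figure of merit reported above can be computed over flattened field intensities as follows (the vectors here are toy values, not simulation output):

```python
# Cosine similarity between two intensity grids flattened to vectors:
# dot(a, b) / (|a| * |b|), equal to 1 for identical distributions.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

sim = cosine_similarity([1.0, 0.0, 1.0], [1.0, 0.0, 0.0])
```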
Category: Artificial Intelligence
[1475] viXra:2407.0152 [pdf] submitted on 2024-07-26 17:26:56
Authors: Agnij Moitra
Comments: 14 Pages. Preprint submitted to Economics Letters (Elsevier)
Boglehead investing, founded on the principles of John C. Bogle, is a classic time-tested, long-term, low-cost, passive investment strategy. This paper uses various machine learning methods and fundamental stock data to predict whether a stock will incur negative returns next year, and suggests a loss-averted Boglehead strategy of investing in all stocks that are not expected to give negative returns over the next year. Results reveal that XGBoost, out of the 44 models trained, has the highest classification metrics for this task. Furthermore, this paper uses various machine learning methods for exploratory data analysis, and SHAP values reveal that net income margin, ROA, gross profit margin, and EBIT are among the most important factors. It is also interesting to note that, based on the SHAP values, the current year contributes negligibly to the final prediction. Investors can use this as a heuristic guide for loss-averted long-term (1-year) stock portfolios.
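The loss-averted screening rule can be sketched as below; the tickers, probabilities, and 0.5 threshold are hypothetical, and a plain dictionary of predictions stands in for the paper's trained XGBoost classifier:

```python
# Keep only stocks whose predicted probability of a negative return
# next year falls below the threshold.

def loss_averted_portfolio(predictions, threshold=0.5):
    """predictions: ticker -> predicted P(negative return next year)."""
    return sorted(t for t, p in predictions.items() if p < threshold)

preds = {"AAA": 0.12, "BBB": 0.70, "CCC": 0.45}
portfolio = loss_averted_portfolio(preds)
```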
Category: Artificial Intelligence
[1474] viXra:2407.0146 [pdf] submitted on 2024-07-24 20:19:30
Authors: Jong-Phil Sim, Song-Chun Pang, Son-Myong Hwang
Comments: 11 Pages.
In this paper, we propose a feature extraction algorithm based on linear embedding of new out-of-sample data. The formulation of this algorithm aims at minimizing the pairwise distances between feature points. To enhance the performance of nonlinear feature learning, we also incorporate the neighborhood reconstruction error to preserve local topological structures. To enable our algorithm to extract local features from new out-of-sample data, we further add a feature approximation error that correlates features with embedded features through a jointly learnt feature extractor. The learnt linear extractor can thus extract local features from new data efficiently by direct embedding. To optimize the proposed objective function, we use eigendecomposition. Extensive simulation results verify the effectiveness of our algorithm compared with other related feature learning techniques.
Category: Artificial Intelligence
[1473] viXra:2407.0100 [pdf] submitted on 2024-07-16 20:01:16
Authors: Anmolika Singh, Mojtaba Alfardan
Comments: 7 Pages.
Organizations are frequently overwhelmed by the sheer volume of alerts about vulnerabilities discovered within their systems. These alerts are typically prioritized based on severity levels categorized by Common Vulnerabilities and Exposures (CVE) [2], a standard glossary used in vulnerability management systems. However, this severity classification often fails to consider the specific operational context of the systems, leading to misaligned priorities and the potential oversight of more critical vulnerabilities that demand immediate attention. This paper investigates whether Large Language Models (LLMs) [25] can offer a solution by integrating contextual awareness into the vulnerability management process, thus enhancing the efficiency and effectiveness of organizational responses to cybersecurity threats.
Category: Artificial Intelligence
[1472] viXra:2407.0096 [pdf] submitted on 2024-07-15 20:56:41
Authors: Fei Ding
Comments: 5 Pages.
In the standard transformer architecture, increasing model parameters leads to linear growth in computational cost and activation memory. To address this issue, we propose a novel Infinite Parameter Large Language Model (IP-LLM) architecture that decouples model size from computational cost and device memory. Existing large language models are all fixed-parameter models, while human knowledge is infinite and expands daily; finite parameters are inherently limited in their capacity to accommodate this boundless knowledge. Our IP-LLM architecture can potentially accommodate infinite knowledge, resolving this issue and laying the foundation for a truly omniscient and omnipotent artificial general intelligence in the future. Our architecture surpasses MoE in performance while requiring significantly less memory.
Category: Artificial Intelligence
[1471] viXra:2407.0089 [pdf] submitted on 2024-07-13 20:32:56
Authors: B. Nandini
Comments: 8 Pages.
The process of creating descriptions for the events depicted in an image is known as image captioning, and it can be accomplished with deep learning models. Automatically generating a caption or explanation for an image in natural language is an extremely difficult problem: it takes techniques from computer vision to comprehend the image's content and a language model from natural language processing to translate that comprehension into words in the correct sequence. Deep learning and natural language processing have advanced to the point where creating captions for given photos is now straightforward. We use a pre-trained Convolutional Neural Network (CNN) to extract high-level features, such as objects, forms, and textures, from photos. These features are then fed to a Long Short-Term Memory (LSTM) network, a kind of Recurrent Neural Network (RNN) that can handle sequential input like sentences.
Category: Artificial Intelligence
[1470] viXra:2407.0079 [pdf] submitted on 2024-07-11 20:35:04
Authors: Shuyang Gu
Comments: 12 Pages. https://cientgu.github.io/files/VisualSignalDecomposition.pdf
This paper does not propose any new algorithms but instead outlines various problems in the field of visual generation based on the author’s personal understanding. The core of these problems lies in how to decompose visual signals, with all other issues being closely related to this central problem and stemming from unsuitable approaches to signal decomposition. This paper aims to draw researchers’ attention to the significance of Visual Signal Decomposition.
Category: Artificial Intelligence
[1469] viXra:2407.0075 [pdf] submitted on 2024-07-11 20:23:57
Authors: Fei Ding
Comments: 5 Pages.
Large Language Models (LLMs) have shown exceptional generative abilities in various natural language understanding and generation tasks, performing remarkably well from just a few examples of natural language instructions and reducing the need for extensive feature engineering. However, LLMs remain relatively weak in reasoning and problem-solving abilities. We propose a new construction that addresses this insufficiency in mathematical and logical ability.
Category: Artificial Intelligence
[1468] viXra:2407.0065 [pdf] submitted on 2024-07-09 07:47:06
Authors: Eugene Rulko
Comments: 9 Pages.
Training a relatively big neural network that has enough capacity for complex tasks is challenging. In real life, the process of task solving requires a system of knowledge in which more complex skills are built upon previously learned ones, the same way biological evolution builds new forms of life on a previously achieved level of complexity. Inspired by that, this work proposes a way of training neural networks with smaller receptive fields and using their weights as prior knowledge for more complex successors through gradual involvement of some parts. This allows better performance in a particular case of deep Q-learning, compared with a model that tries to use a complex receptive field from scratch.
Category: Artificial Intelligence
[1467] viXra:2407.0052 [pdf] submitted on 2024-07-08 02:38:16
Authors: Ding Fei, Zhang Xu
Comments: 13 Pages.
Recent advancements in Large Language Models (LLMs) have showcased their remarkable capabilities in text understanding and generation. However, even strong LLMs are susceptible to acquiring erroneous or obsolete information from the training corpus. Direct secondary fine-tuning with data containing new knowledge may be ineffective at updating knowledge, due to the conflict between old and new knowledge. In this paper, we propose a new fine-tuning paradigm called DFT (Delicate Fine-Tuning). This method uses parametric arithmetic to precisely pinpoint the location of knowledge and update only the minimal set of relevant parameters. Experimental results on two publicly available datasets demonstrate that the proposed DFT clearly improves the knowledge-updating performance of full fine-tuning, while outperforming the existing baselines in most cases.
Category: Artificial Intelligence
[1466] viXra:2407.0033 [pdf] submitted on 2024-07-04 21:15:38
Authors: Aurora Zeno
Comments: 13 Pages.
This paper explores the emerging synergy between quantum computing and artificial intelligence (AI), examining its potential to revolutionize our approach to global challenges. We present a comprehensive overview of quantum computing fundamentals and current AI capabilities, followed by an in-depth analysis of quantum-enhanced AI algorithms. The paper delves into specific applications in climate modeling, drug discovery, and resource optimization, providing quantitative estimates of potential improvements. We also address the challenges, limitations, and ethical considerations associated with this convergence. Our analysis suggests that the integration of quantum computing and AI could lead to unprecedented advancements in solving complex global problems, potentially offering orders of magnitude improvements in computational efficiency and accuracy. We conclude with a roadmap for future development and a call for increased research in this transformative field.
Category: Artificial Intelligence
[1465] viXra:2407.0025 [pdf] submitted on 2024-07-03 19:10:09
Authors: Shuai Liu
Comments: 8 Pages.
In the past, the organization of society, including government and corporations, relied solely on natural experience, lacking a robust mathematical and logical framework for explaining how to structure and optimize these entities. This article draws parallels between the structure of social organizations and neural networks, illustrating that social structures emulate neural network architectures: social organizations can be seen as neural networks nested within humans. Using the same principles, one can optimize the structure of social organizations. The article also outlines a comparison between neural network algorithms and Darwin's theory of natural selection, highlighting their similarities.
Category: Artificial Intelligence
[1464] viXra:2407.0020 [pdf] submitted on 2024-07-03 19:04:16
Authors: Satish Gajawada
Comments: 111 Pages. (Note by viXra Admin: Please do not use cartoon drawings in a scholarly article)
A new field titled "Very Highly Advanced Artificial Intelligence (VHAAI)" is coined in this article. VHAAI is a new field which is the collection of the following fields: 1) Out of the Box Artificial Intelligence (OBAI); 2) Artificial Intelligence Plus Plus (AI++); 3) Artificial Excellence (AE); 4) Artificial God Optimization (AGO); 5) Artificial Human Optimization (AHO); 6) Artificial Soul Optimization (ASO); 7) Twenty Second Century Artificial Intelligence (TSCAI); 8) Deep Loving (DL); 9) Nature Plus Plus Inspired Computing (N++IC); 10) Artificial Satisfaction (AS); 11) The Interesting and Complete Artificial Intelligence (ICAI); 12) Lord Rama Artificial Intelligence (LRAI); 13) Data Science Plus Plus (DS++); 14) Stories Inspired Optimization Algorithms (SIOA).
Category: Artificial Intelligence
[1463] viXra:2407.0016 [pdf] submitted on 2024-07-02 20:32:19
Authors: Haq Nawaz Malik
Comments: 10 Pages.
In the evolving landscape of cybersecurity, Artificial Intelligence (AI) has emerged as a pivotal force in enhancing threat detection, response, and mitigation strategies. This paper provides a comprehensive evaluation of AI's role in cybersecurity, emphasizing its effectiveness in identifying and countering sophisticated cyber threats. Through an extensive literature review, we compare various AI techniques, including machine learning, deep learning, and neural networks, highlighting their respective strengths and limitations. The methodology section details our data collection process, the AI models employed, and the evaluation metrics used to assess their performance. Our results indicate that AI models, particularly convolutional neural networks, significantly outperform traditional methods in terms of accuracy and speed. The discussion delves into the implications of these findings, underscoring AI's ability to detect previously unknown threats and adapt to new attack vectors. In conclusion, this study underscores the transformative potential of AI in cybersecurity and advocates for continued research to enhance the robustness and applicability of AI models across diverse cybersecurity domains.
Category: Artificial Intelligence
[1462] viXra:2406.0170 [pdf] submitted on 2024-06-28 20:50:32
Authors: Hui Liu
Comments: 4 Pages. (Note by viXra Admin: Please cite and list scientific references)
This paper explores the basic composition and operational mechanisms of intelligent systems. Intelligence is defined as the ability to solve problems, and the operation of intelligent systems is centered around databases. The three fundamental elements of intelligent system operation include the construction, retrieval, and use of databases. This paper discusses in detail the process of handling a single event in a single thread. Complex event composites can be broken down into multiple single events for resolution.
Category: Artificial Intelligence
[1461] viXra:2406.0166 [pdf] submitted on 2024-06-28 20:44:48
Authors: Tanvir Rahman, Ataur Rahman, Tamanna Afroz
Comments: 6 Pages.
The major player in the revolution of early detection and diagnosis of brain tumors, with great implications for patient outcomes, is medical image processing. Manually classifying brain tumors is an inherently difficult and time-consuming task for experienced experts, even though manual classification has proven effective. A promising avenue has emerged in the integration of automatic segmentation techniques, which promise improved efficiency and performance in response to these challenges. This work aims to provide an in-depth and critical analysis of MRI-based brain tumor segmentation techniques, with a critical eye toward the most recent developments in automatic segmentation. Our analysis explores the rapidly changing field of fully automatic segmentation approaches, diverging from evaluations that mostly focus on traditional methodologies. The discussion opens with a broad summary that emphasizes how important brain tumor segmentation is to medical image processing as a whole. Here, we highlight how crucial precise segmentation is to facilitating early detection and guiding subsequent treatment choices. We recognize the difficulties that come with manual segmentation procedures and explain why automatic segmentation techniques are necessary to reduce these difficulties and increase productivity. The central section of the work navigates the complex terrain of cutting-edge algorithms, enabling a thorough investigation of the most recent developments in fully automatic segmentation. This explanation highlights the growing acceptance and effectiveness of modern methods while addressing the complexities and difficulties present in the field of brain tumor segmentation. Using specially crafted neural networks, our research is unique in that it concentrates on the paradigm shift toward fully automatic segmentation.
Brain tumor segmentation has been transformed by the incorporation of deep learning techniques, which enable complex pattern recognition and nuanced analysis of medical imaging data. Our efforts have resulted in the creation of a unique neural network model specifically intended for the automated identification of brain malignancies. The discussion highlights the revolutionary effect deep learning techniques can have, culminating in a sophisticated custom neural network model. Our model demonstrates its ability to accurately and automatically detect brain tumor boundaries, achieving a remarkable level of accuracy.
Category: Artificial Intelligence
[1460] viXra:2406.0165 [pdf] submitted on 2024-06-28 17:36:46
Authors: Tanvir Rahman
Comments: 5 Pages.
Monkeypox is a viral disease that affects both animals and humans. It can have a substantial negative impact on human health, particularly in areas lacking healthcare services. The disease can produce epidemics, and its spread can be difficult to stop. For effective treatment and to stop the disease from spreading further, early identification and detection of monkeypox are essential. The healthcare industry may therefore benefit from the development of precise and effective methods for detecting monkeypox, such as image classification. In this paper, we propose a novel approach for detecting monkeypox using image classification. The proposed method utilizes a transfer learning model and other machine learning models to classify images of patients with monkeypox, and employs a majority voting technique to improve the accuracy of the classification. The proposed system is evaluated on a dataset of images obtained from patients with monkeypox, and the results show that the approach achieves high accuracy in detecting monkeypox. The proposed system has the potential to assist healthcare professionals in diagnosing and treating patients with monkeypox, and it can contribute to efforts to control the spread of the disease.
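The majority-voting step can be sketched as follows; the ensemble members and labels are illustrative, not the paper's actual models:

```python
# Each base classifier casts one label vote per image; the most common
# label wins.

from collections import Counter

def majority_vote(votes):
    """votes: list of predicted labels from the ensemble's members."""
    return Counter(votes).most_common(1)[0][0]

# Hypothetical votes from, e.g., a CNN, an SVM, and a transfer model.
votes = ["monkeypox", "other", "monkeypox"]
label = majority_vote(votes)
```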
Category: Artificial Intelligence
[1459] viXra:2406.0161 [pdf] submitted on 2024-06-27 16:21:16
Authors: Ait-taleb nabil
Comments: 7 Pages.
In this article, we will describe the mechanism that links the notion of causality to correlations. This article answers yes to the following question: Can we deduce a causal relationship from correlations?
Category: Artificial Intelligence
[1458] viXra:2406.0156 [pdf] submitted on 2024-06-26 19:18:42
Authors: Junhao Yu, Fuyuan Xiao
Comments: 2 Pages. (Note by viXra Admin: Please cite and list scientific references)
In this paper, a novel complex dual Gaussian fuzzy number (CDGFN) is proposed to more accurately model two-dimensional uncertainty, which serves as the medium to represent generalized quantum basic belief assignment (GQBBA).
Category: Artificial Intelligence
[1457] viXra:2406.0075 [pdf] submitted on 2024-06-15 17:56:44
Authors: Agnij Moitra
Comments: 16 Pages.
Gradient boosting is a widely used machine learning algorithm for tabular regression, classification, and ranking. However, most open-source implementations of gradient boosting, such as XGBoost and LightGBM, use decision trees as the sole base estimator. This paper, for the first time, takes an alternative path: rather than relying on a single static base estimator (usually a decision tree), MSBoost trains a list of candidate models in parallel on the residual errors of the previous layer and selects the model with the least validation error as the base estimator for that layer. MSBoost achieves state-of-the-art results compared with other gradient boosting implementations on 50+ tabular regression and classification datasets. Furthermore, ablation studies show that MSBoost is particularly effective on small and noisy datasets. It can therefore have a significant practical impact on tabular machine learning problems in domains where large, high-quality datasets are not feasible to obtain.
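The layer-wise selection that distinguishes this scheme from tree-only boosting can be sketched as follows; the two candidate learners (a constant predictor and a one-split stump), the learning rate, and the toy data are illustrative stand-ins, and the paper selects on a held-out validation split rather than the training residuals used here:

```python
def mse(y, p):
    return sum((a - b) ** 2 for a, b in zip(y, p)) / len(y)

def fit_mean(X, y):
    # candidate 1: always predict the mean of the targets
    m = sum(y) / len(y)
    return lambda Xq: [m] * len(Xq)

def fit_stump(X, y):
    # candidate 2: split on the first feature at its mean value
    t = sum(x[0] for x in X) / len(X)
    left = [yi for x, yi in zip(X, y) if x[0] <= t] or [0.0]
    right = [yi for x, yi in zip(X, y) if x[0] > t] or [0.0]
    lm, rm = sum(left) / len(left), sum(right) / len(right)
    return lambda Xq: [lm if x[0] <= t else rm for x in Xq]

def msboost_fit(X, y, layers=5, lr=0.5):
    residual = list(y)
    ensemble = []
    for _ in range(layers):
        # train every candidate on the current residuals and keep
        # the one with the least error on them
        fitted = [fit(X, residual) for fit in (fit_mean, fit_stump)]
        best = min(fitted, key=lambda m: mse(residual, m(X)))
        residual = [r - lr * p for r, p in zip(residual, best(X))]
        ensemble.append(best)
    # the final prediction is the learning-rate-weighted sum of layers
    return lambda Xq: [
        lr * sum(m(Xq)[i] for m in ensemble) for i in range(len(Xq))
    ]

X, y = [[0.0], [1.0], [2.0], [3.0]], [0.0, 1.0, 2.0, 3.0]
model = msboost_fit(X, y)
assert mse(y, model(X)) < mse(y, fit_mean(X, y)(X))  # boosting beats the mean
```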
Category: Artificial Intelligence
[1456] viXra:2406.0056 [pdf] submitted on 2024-06-11 21:32:40
Authors: Philip Naveen
Comments: 42 Pages.
This manuscript is merely a formal documentation of the purpose and details surrounding the online convex optimization toolbox (OCOBox) for MATLAB. The purpose of this toolbox is to provide a collection of algorithms that work under stochastic situations where traditional algorithmic theory does not fare so well. The toolbox encompasses a wide range of methods including Bayesian persuasion, bandit optimization, Blackwell approachability, boosting, game theory, projection-free algorithms, and regularization. In the future, we plan to extend OCOBox to interactive machine learning algorithms and develop a more robust GUI.
Category: Artificial Intelligence
[1455] viXra:2406.0037 [pdf] submitted on 2024-06-08 04:51:00
Authors: Fuyuan Xiao
Comments: 3 Pages.
In this paper, we propose a quantum evidential reasoning rule in the framework of generalized quantum evidence theory.
Category: Artificial Intelligence
[1454] viXra:2406.0035 [pdf] submitted on 2024-06-07 01:17:32
Authors: Junjie Huang, Fuyuan Xiao
Comments: 1 Page.
In this paper, to extend the traditional evidential reasoning (ER) method to the complex plane, a novel complex evidential reasoning (CER) method is defined in the framework of complex evidence theory (CET).
Category: Artificial Intelligence
[1453] viXra:2406.0012 [pdf] submitted on 2024-06-03 21:03:31
Authors: Taeho Jo
Comments: 13 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which receives a graph as its input data and is applied to text summarization. A graph is a more expressive way to represent a word, and text summarization can be viewed as a binary classification in which each paragraph is classified as summary or non-summary. In the proposed system, the input text is partitioned into a list of paragraphs, each paragraph is classified by the proposed KNN version, and the paragraphs classified as summary are extracted as the output. The proposed KNN version is empirically validated as the better approach for deciding whether each paragraph of news articles and opinions is essential. In this article, a paragraph is encoded into a weighted, undirected graph, which is represented as a list of edges.
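As a minimal illustration of the KNN classification step (not the paper's graph-based similarity, which is replaced here by Euclidean distance over an invented two-feature paragraph encoding):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority label among its k nearest neighbors.

    train: list of (feature_vector, label) pairs; query: feature vector.
    """
    neighbors = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical 2-feature encoding of paragraphs:
# (relative position in text, keyword overlap with the title)
train = [
    ((0.0, 0.9), "summary"), ((0.1, 0.8), "summary"),
    ((0.7, 0.1), "non-summary"), ((0.9, 0.2), "non-summary"),
    ((0.5, 0.5), "summary"), ((0.8, 0.3), "non-summary"),
]
print(knn_predict(train, (0.05, 0.85)))  # → summary
```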
Category: Artificial Intelligence
[1452] viXra:2406.0011 [pdf] submitted on 2024-06-03 21:03:18
Authors: Taeho Jo
Comments: 13 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which considers feature similarity and is applied to text segmentation. The words given as features for encoding words into numerical vectors have their own meanings and semantic relations with one another, and text segmentation can be viewed as a binary classification in which each adjacent paragraph pair is classified as boundary or continuance. In the proposed system, a list of adjacent paragraph pairs is generated by sliding a two-paragraph window over the text, each pair is classified by the proposed KNN version, and a boundary is put between the pairs classified as boundary. The proposed KNN version is empirically validated as the better approach for deciding whether each pair in news articles and opinions should be separated. The significance of this research is to improve classification performance by utilizing the feature similarities.
Category: Artificial Intelligence
[1451] viXra:2406.0010 [pdf] submitted on 2024-06-03 21:02:49
Authors: Taeho Jo
Comments: 12 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which receives a string vector as its input data and is applied to text segmentation. The results from applying string vector based algorithms to text categorization were successful in previous works, and text segmentation can be viewed as a binary classification in which each adjacent paragraph pair is classified as boundary or continuance. In the proposed system, a list of adjacent paragraph pairs is generated by sliding a two-paragraph window over the text, each pair is classified by the proposed KNN version, and a boundary is put between the pairs classified as boundary. The proposed KNN version is empirically validated as the better approach for deciding whether each pair in news articles and opinions should be separated. We need to define and characterize mathematically more operations on string vectors in order to modify more advanced machine learning algorithms.
Category: Artificial Intelligence
[1450] viXra:2406.0009 [pdf] submitted on 2024-06-03 21:02:38
Authors: Taeho Jo
Comments: 12 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which receives a graph as its input data and is applied to text segmentation. A graph is a more expressive way to represent a word, and text segmentation can be viewed as a binary classification in which each adjacent paragraph pair is classified as boundary or continuance. In the proposed system, a list of adjacent paragraph pairs is generated by sliding a two-paragraph window over the text, each pair is classified by the proposed KNN version, and a boundary is put between the pairs classified as boundary. The proposed KNN version is empirically validated as the better approach for deciding whether each pair in news articles and opinions should be separated. In this article, an adjacent paragraph pair is encoded into a weighted, undirected graph, which is represented as a list of edges.
Category: Artificial Intelligence
[1449] viXra:2406.0001 [pdf] submitted on 2024-06-01 18:57:25
Authors: Vansh Kumar
Comments: 16 Pages.
This paper introduces Vision, a novel 175-billion parameter multimodal AI model. Vision is trained from scratch to natively understand text, images, video, and audio and to generate text and images, setting it apart from existing models. Developed with a focus on incorporating Indian context, values, and culture, Vision aims to empower users with a culturally relevant AI experience. A unique security feature allows generated images to be backtracked to Vision, mitigating concerns about potential misuse for misinformation. Evaluations on standard benchmarks demonstrate that Vision achieves state-of-the-art performance in a diverse range of tasks, including reasoning, solving mathematical problems, code generation, and image understanding. Furthermore, Vision exhibits remarkable proficiency in multilingual chat, supporting a wide array of global languages as well as regional Indian languages such as Hindi, Punjabi, and Marathi. We believe that Vision represents a significant step towards building more inclusive and culturally relevant AI systems, with the potential to positively impact various domains in India and beyond.
Category: Artificial Intelligence
[1448] viXra:2405.0171 [pdf] submitted on 2024-05-31 02:37:45
Authors: Taeho Jo
Comments: 11 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which receives a table as its input data and is applied to index optimization. The motivations of this research are the successful results from applying table based algorithms to text categorization in previous works, and the fact that index optimization can be viewed as a classification task in which each word is classified as expansion, inclusion, or removal. In the proposed system, each word in the given text is classified into one of the three categories by the proposed KNN algorithm; associated words are added to those classified as expansion, while those classified as inclusion are kept as they are without adding any word. The proposed KNN version is empirically validated as the better approach for deciding the importance level of words in news articles and opinions. With the table based KNN algorithm, it is easier to trace the results of categorizing words.
Category: Artificial Intelligence
[1447] viXra:2405.0170 [pdf] submitted on 2024-05-31 02:38:04
Authors: Taeho Jo
Comments: 11 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which receives a string vector as its input data and is applied to index optimization. The results from applying string vector based algorithms to text categorization were successful in previous works, and index optimization can be viewed as a classification task in which each word is classified as expansion, inclusion, or removal. In the proposed system, each word in the given text is classified into one of the three categories by the proposed KNN algorithm; associated words are added to those classified as expansion, while those classified as inclusion are kept as they are without adding any word. The proposed KNN version is empirically validated as the better approach for deciding the importance level of words in news articles and opinions. We need to define and characterize mathematically more operations on string vectors in order to modify more advanced machine learning algorithms.
Category: Artificial Intelligence
[1446] viXra:2405.0169 [pdf] submitted on 2024-05-31 02:38:19
Authors: Taeho Jo
Comments: 13 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which receives a table as its input data and is applied to text categorization. The motivations of this research are the successful results from applying table based algorithms to text categorization in previous works, and the expected synergy between text categorization and word categorization. In this research, we define a similarity metric between two tables representing texts, modify the KNN algorithm by replacing the existing similarity metric with the proposed one, and apply it to text categorization. The proposed KNN is empirically validated as the better approach for categorizing texts in news articles and opinions. With the table based KNN algorithm, it is easier to trace the results of categorizing texts.
Category: Artificial Intelligence
[1445] viXra:2405.0168 [pdf] submitted on 2024-05-31 02:38:35
Authors: Taeho Jo
Comments: 13 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which receives a graph as its input data and is applied to text categorization. A graph is a more expressive way to represent a word, and a synergy effect is expected from combining text categorization with word categorization. In this research, we propose a similarity metric between two graphs representing words, modify the KNN algorithm by replacing the existing similarity metric with the proposed one, and apply it to text categorization. The proposed KNN is empirically validated as the better approach for categorizing texts in news articles and opinions. In this article, a word is encoded into a weighted, undirected graph, which is represented as a list of edges.
Category: Artificial Intelligence
[1444] viXra:2405.0164 [pdf] submitted on 2024-05-31 03:52:19
Authors: Taeho Jo
Comments: 12 Pages. Text Mining; Text Clustering; Table Similarity; Table based AHC Algorithm
This article proposes a modified AHC (Agglomerative Hierarchical Clustering) algorithm which clusters tables, instead of numerical vectors, as the approach to text clustering. The motivations of this research are the successful results from applying table based algorithms to text clustering tasks in previous works, and the expected synergy between text clustering and word clustering. In this research, we define a similarity metric between tables representing texts, and modify the AHC algorithm by adopting the proposed similarity metric as the approach to text clustering. The proposed AHC algorithm is empirically validated as the better approach for clustering texts in news articles and opinions. With the table based AHC algorithm, it is easier to trace the results of clustering texts.
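The bottom-up merging loop common to these AHC variants can be sketched as follows; the paper's table similarity is stood in for by Jaccard overlap between small word sets, and the threshold and data are illustrative:

```python
def ahc(items, similarity, threshold):
    """Agglomerative clustering: start with singleton clusters and
    repeatedly merge the most similar pair (average linkage) until
    no pair is at least `threshold` similar."""
    clusters = [[x] for x in items]
    while len(clusters) > 1:
        best, pair = -1.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # average-linkage similarity between the two clusters
                s = sum(similarity(a, b) for a in clusters[i] for b in clusters[j])
                s /= len(clusters[i]) * len(clusters[j])
                if s > best:
                    best, pair = s, (i, j)
        if best < threshold:
            break
        i, j = pair
        clusters[i].extend(clusters.pop(j))
    return clusters

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Toy "tables" reduced to word sets drawn from two topics
texts = [{"bank", "money"}, {"money", "loan"}, {"river", "water"}, {"water", "flow"}]
print(ahc(texts, jaccard, threshold=0.3))
```

On this toy input, the two money-related sets merge into one cluster and the two water-related sets into another, after which no cross-cluster pair reaches the threshold.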
Category: Artificial Intelligence
[1443] viXra:2405.0158 [pdf] submitted on 2024-05-29 02:53:51
Authors: Taeho Jo
Comments: 11 Pages.
This article proposes the modified AHC (Agglomerative Hierarchical Clustering) algorithm which clusters tables, instead of numerical vectors, as the approach to the word clustering. The motivations of this research are the successful results from applying the table based algorithms to the text clustering tasks in previous works and the expectation of synergy effect between the text clustering and the word clustering. In this research, we define the similarity metric between tables representing words, and modify the AHC algorithm by adopting the proposed similarity metric as the approach to the word clustering. The proposed AHC algorithm is empirically validated as the better approach in clustering words in news articles and opinions. In using the table based AHC algorithm, it is easier to trace results from clustering words.
Category: Artificial Intelligence
[1442] viXra:2405.0157 [pdf] submitted on 2024-05-29 02:54:52
Authors: Taeho Jo
Comments: 12 Pages.
This article proposes the modified AHC (Agglomerative Hierarchical Clustering) algorithm which clusters string vectors, instead of numerical vectors, as the approach to the word clustering. The results from applying the string vector based algorithms to the text clustering were successful in previous works and synergy effect between the text clustering and the word clustering is expected by combining them with each other; the two facts become motivations for this research. In this research, we define the operation on string vectors called semantic similarity, and modify the AHC algorithm by adopting the proposed similarity metric as the approach to the word clustering. The proposed AHC algorithm is empirically validated as the better approach in clustering words in news articles and opinions. We need to define and characterize mathematically more operations on string vectors for modifying more advanced machine learning algorithms.
Category: Artificial Intelligence
[1441] viXra:2405.0156 [pdf] submitted on 2024-05-29 02:56:04
Authors: Taeho Jo
Comments: 11 Pages.
This article proposes a modified AHC (Agglomerative Hierarchical Clustering) algorithm which clusters graphs, instead of numerical vectors, as the approach to word clustering. A graph is a more expressive way to represent a word, and a synergy effect is expected from combining text clustering with word clustering. In this research, we propose a similarity metric between two graphs representing words, and modify the AHC algorithm by adopting the proposed similarity metric as the approach to word clustering. The proposed AHC algorithm is empirically validated as the better approach for clustering words in news articles and opinions. In this article, a word is encoded into a weighted, undirected graph, which is represented as a list of edges.
Category: Artificial Intelligence
[1440] viXra:2405.0155 [pdf] submitted on 2024-05-29 02:56:42
Authors: Taeho Jo
Comments: 12 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which considers feature similarity and is applied to keyword extraction. The texts given as features for encoding words into numerical vectors are semantically related entities rather than independent ones, and keyword extraction can be viewed as a binary classification in which each word is classified as keyword or non-keyword. In the proposed system, the input text is indexed into a list of words, each word is classified by the proposed KNN version, and the words classified as keyword are extracted as the output. The proposed KNN version is empirically validated as the better approach for deciding whether each word in news articles and opinions is a keyword. The significance of this research is to improve classification performance by utilizing the feature similarities.
Category: Artificial Intelligence
[1439] viXra:2405.0152 [pdf] submitted on 2024-05-29 02:57:42
Authors: Taeho Jo
Comments: 12 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which receives a table as its input data and is applied to keyword extraction. Table based algorithms worked successfully in text mining tasks such as text categorization and text clustering in previous works, and keyword extraction can be mapped into a binary classification in which each word is classified as keyword or non-keyword. In the proposed system, the input text is indexed into a list of words, each word is classified by the proposed KNN version, and the words classified as keyword are extracted as the output. The proposed KNN version is empirically validated as the better approach for deciding whether each word in news articles and opinions is a keyword. With the table based KNN algorithm, it is easier to trace the results of categorizing words.
Category: Artificial Intelligence
[1438] viXra:2405.0151 [pdf] submitted on 2024-05-29 02:58:14
Authors: Taeho Jo
Comments: 11 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which receives a string vector as its input data and is applied to keyword extraction. The results from applying string vector based algorithms to text categorization were successful in previous works, and keyword extraction can be mapped into a binary classification in which each word is classified as keyword or non-keyword. In the proposed system, the input text is indexed into a list of words, each word is classified by the proposed KNN version, and the words classified as keyword are extracted as the output. The proposed KNN version is empirically validated as the better approach for deciding whether each word in news articles and opinions is a keyword. We need to define and characterize mathematically more operations on string vectors in order to modify more advanced machine learning algorithms.
Category: Artificial Intelligence
[1437] viXra:2405.0150 [pdf] submitted on 2024-05-29 02:58:37
Authors: Taeho Jo
Comments: 11 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which receives a graph as its input data and is applied to keyword extraction. A graph is a more expressive way to represent a word, and keyword extraction can be mapped into a binary classification in which each word is classified as keyword or non-keyword. In the proposed system, the input text is indexed into a list of words, each word is classified by the proposed KNN version, and the words classified as keyword are extracted as the output. The proposed KNN version is empirically validated as the better approach for deciding whether each word in news articles and opinions is a keyword. In this article, a word is encoded into a weighted, undirected graph, which is represented as a list of edges.
Category: Artificial Intelligence
[1436] viXra:2405.0149 [pdf] submitted on 2024-05-29 02:59:11
Authors: Taeho Jo
Comments: 11 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which considers feature similarity and is applied to index optimization. The texts given as features for encoding words into numerical vectors are semantically related entities rather than independent ones, and index optimization can be viewed as a classification task in which each word is classified as expansion, inclusion, or removal. In the proposed system, each word in the given text is classified into one of the three categories by the proposed KNN algorithm; associated words are added to those classified as expansion, while those classified as inclusion are kept as they are without adding any word. The proposed KNN version is empirically validated as the better approach for deciding the importance level of words in news articles and opinions. The significance of this research is to improve classification performance by utilizing the feature similarities.
Category: Artificial Intelligence
[1435] viXra:2405.0144 [pdf] submitted on 2024-05-27 21:45:26
Authors: Taeho Jo
Comments: 12 Pages.
This article proposes a modified AHC (Agglomerative Hierarchical Clustering) algorithm which considers feature similarity and is applied to word clustering. The texts given as features for encoding words into numerical vectors are semantically related entities rather than independent ones, and a synergy effect is expected from combining word clustering with text clustering. In this research, we define a similarity metric between numerical vectors that takes the feature similarity into account, and modify the AHC algorithm by adopting the proposed similarity metric as the approach to word clustering. The proposed AHC algorithm is empirically validated as the better approach for clustering words in news articles and opinions. The significance of this research is to improve clustering performance by utilizing the feature similarities.
Category: Artificial Intelligence
[1434] viXra:2405.0140 [pdf] submitted on 2024-05-26 05:12:24
Authors: Taeho Jo
Comments: 11 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which receives a table as its input data and is applied to word categorization. The motivations of this research are the successful results from applying table based algorithms to text categorization in previous works, and the expected synergy between text categorization and word categorization. In this research, we define a similarity metric between two tables representing words, modify the KNN algorithm by replacing the existing similarity metric with the proposed one, and apply it to word categorization. The proposed KNN is empirically validated as the better approach for categorizing words in news articles and opinions. With the table based KNN algorithm, it is easier to trace the results of categorizing words.
Category: Artificial Intelligence
[1433] viXra:2405.0138 [pdf] submitted on 2024-05-26 06:53:45
Authors: Taeho Jo
Comments: 12 Pages.
This article proposes a modified KNN (K Nearest Neighbor) algorithm which receives a string vector as its input data and is applied to word categorization. The results from applying string vector based algorithms to text categorization were successful in previous works, and a synergy effect is expected from combining text categorization with word categorization; these two facts are the motivations for this research. In this research, we define an operation on string vectors called semantic similarity, modify the KNN algorithm by replacing the existing similarity metric with the proposed one, and apply it to word categorization. The proposed KNN is empirically validated as the better approach for categorizing words in news articles and opinions. We need to define and characterize mathematically more operations on string vectors in order to modify more advanced machine learning algorithms.
Category: Artificial Intelligence
[1432] viXra:2405.0136 [pdf] submitted on 2024-05-26 07:51:04
Authors: Taeho Jo
Comments: 12 Pages.
This article proposes a modified AHC (Agglomerative Hierarchical Clustering) algorithm which considers feature similarity and is applied to word clustering. The texts given as features for encoding words into numerical vectors are semantically related entities rather than independent ones, and a synergy effect is expected from combining word clustering with text clustering. In this research, we define a similarity metric between numerical vectors that takes the feature similarity into account, and modify the AHC algorithm by adopting the proposed similarity metric as the approach to word clustering. The proposed AHC algorithm is empirically validated as the better approach for clustering words in news articles and opinions. The significance of this research is to improve clustering performance by utilizing the feature similarities.
Category: Artificial Intelligence
[1431] viXra:2405.0090 [pdf] submitted on 2024-05-17 22:35:51
Authors: Friedrich Sösemann
Comments: 38 Pages.
Information measures the dependency between states; knowledge measures that between object states and subject states; and intelligence measures that between subject states. Descriptions store object states. Friston's free energy principle is intelligent, combining physics, computer science, and biology, but it is not new.
Category: Artificial Intelligence
[1430] viXra:2405.0046 [pdf] submitted on 2024-05-09 00:29:42
Authors: Victor Senkevich
Comments: 4 Pages.
Do Large Language Models have cognitive abilities? Do Large Language Models have understanding? Is the correct recognition of verbal contexts or visual objects, based on pre-learning on a large training dataset, a manifestation of the ability to solve cognitive tasks? Or is any LLM just a statistical approximator that compiles averaged texts from its huge dataset close to the specified prompts? The answers to these questions require rigorous formal definitions of the cognitive concepts of "knowledge", "understanding" and related terms.
Category: Artificial Intelligence
[1429] viXra:2405.0041 [pdf] submitted on 2024-05-07 21:08:56
Authors: Kum Song Ju, Ok Chol Choe, Ok Chol Ri
Comments: 9 Pages.
Image Transformers have recently achieved significant progress in natural image understanding, using either supervised (ViT, DeiT, etc.) or self-supervised (BEiT, MAE, etc.) pre-training techniques. In this paper, we propose HiT, a self-supervised pre-trained Histological Image Transformer model that uses large-scale unlabeled histological images for medical image processing tasks; this is essential since no supervised counterparts exist due to the lack of human-labeled histological images. We leverage HiT as the backbone network in a variety of vision-based histological image processing tasks. Experimental results illustrate that the self-supervised pre-trained HiT model achieves new state-of-the-art results on these downstream tasks; e.g., histological image classification on the SIPaKMeD database achieved an accuracy of 97.45% and 99.29% for 5-class and 2-class classification, respectively.
Category: Artificial Intelligence
[1428] viXra:2405.0037 [pdf] submitted on 2024-05-07 20:59:35
Authors: Fei Ding
Comments: 6 Pages.
With the introduction of ChatGPT (OpenAI, 2022), the power of these models to generate human-like text has captured widespread public attention. The scale of language models has burgeoned, progressing from modest multi-million-parameter architectures like ELMo (Peters et al., 2018) and GPT-1 (Radford et al., 2018) to behemoths boasting billions, even trillions, of parameters, exemplified by the monumental GPT-3 (Brown et al., 2020), Switch Transformers (Fedus et al., 2022), GPT-4 (OpenAI, 2023), PaLM-2 (Anil et al., 2023), Claude (Claude, 2023), and Vicuna (Chiang et al., 2023). The expansion in scale has significantly raised hardware requirements, making it exceedingly challenging to deploy models on mobile devices such as smartphones and tablets. For deployment in cars, we trained a 7-billion-parameter automobile model, which outperforms GPT-3.5 in the automotive domain, surpassing all comparison models on automotive tasks.
Category: Artificial Intelligence
[1427] viXra:2405.0025 [pdf] submitted on 2024-05-06 19:50:49
Authors: Apurba Poudel
Comments: 3 Pages.
In this study, I conducted sentiment analysis on product reviews of unlocked mobile phones sold on Amazon to explore customers' opinions and sentiments towards these devices. Sentiment was classified both from the rating given by each user and from the written review text. The study collected a total of 400,000 reviews from the Amazon website, focusing on unlocked mobile phones from various brands. The reviews were pre-processed and analyzed using Natural Language Processing (NLP) techniques, a Bag of Words (BoW) model, LinearSVC, a Word2Vec model, and a Long Short-Term Memory (LSTM) neural network. The analysis revealed that the majority of the reviews (approximately 70%) were positive. The positive reviews highlighted features such as the device's camera quality, battery life, display, and user interface. The negative reviews mainly concerned issues with the device's software and hardware, highlighting problems such as slow performance, freezing, and device malfunctioning. Moreover, the study found that some ratings do not correspond to the actual sentiment of the review: some users gave ratings higher or lower than the sentiment computed from their reviews.
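The Bag of Words step of such a pipeline can be sketched as follows (the review strings are invented; the study's LinearSVC, Word2Vec, and LSTM stages are not reproduced here):

```python
from collections import Counter

def bag_of_words(texts):
    """Build a shared vocabulary and count-vectorize each text."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    vectors = []
    for t in texts:
        counts = Counter(t.lower().split())
        vectors.append([counts.get(w, 0) for w in vocab])
    return vocab, vectors

reviews = [
    "Great camera great battery life",
    "Slow performance and freezing issues",
]
vocab, vectors = bag_of_words(reviews)
print(vocab)
print(vectors)
```

Each review becomes a fixed-length count vector over the shared vocabulary, which a linear classifier such as LinearSVC can then consume.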
Category: Artificial Intelligence
[1426] viXra:2404.0133 [pdf] submitted on 2024-04-29 18:49:33
Authors: Budee U. zaman
Comments: 15 Pages.
The generation at the helm faces an unprecedented responsibility in the near future of artificial intelligence. The founding rules that will govern the operation of AI carry heavy implications, because once they are set they may last forever. Once the first AI is commenced, it may be positioned so that no subsequent AI can emerge, assuming dominion over its own creation. As a result, retaining control becomes necessary, lest humanity surrender agency to its own creation. At this juncture, critical issues are raised concerning who administers AI. Is it appropriate for only a few people to have unrestricted control over AI commands while leaving out all precautionary measures? We must therefore always weigh control against constraint when dealing with AI, where authority plays off against morality. The direction artificial intelligence takes in the future depends on the decisions made by today's generation, and how we are viewed historically in terms of technology will be determined by how well we take on this important duty. A major turning point lies ahead, where we, the stewards of tomorrow, must make a choice that protects humanity's right to self-determination while also harnessing the power of AI for change.
Category: Artificial Intelligence
[1425] viXra:2404.0123 [pdf] submitted on 2024-04-25 15:58:34
Authors: Brady Steele
Comments: 40 Pages. CC BY: Creative Commons Attribution
This research paper presents an in-depth exploration of a neural network architecture tailored for intent classification using sentence embeddings. The model comprises a feedforward neural network with two hidden layers, ReLU activation functions, and softmax activation in the output layer. This paper meticulously examines the technical intricacies involved in data preprocessing, model architecture definition, training methodologies, and evaluation criteria. Detailed explanations are provided for the rationale behind architectural decisions, including the incorporation of dropout layers for regularization and class weight balancing techniques for handling imbalanced datasets. Moreover, the mathematical foundations of the chosen loss function (sparse categorical crossentropy) and optimization algorithm (Adam optimizer) are thoroughly elucidated, shedding light on their roles in facilitating model training and convergence. Through empirical experiments and theoretical analyses, this paper offers insights into the effectiveness and resilience of the proposed neural network architecture for intent classification tasks. It serves as a technical guide for engineers aiming to comprehend, implement, and optimize neural network models for practical application in natural language processing endeavors.
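The forward pass described above (two ReLU hidden layers, softmax output, sparse categorical crossentropy) can be sketched without any framework; the layer sizes, random weights, and stand-in sentence embedding below are illustrative, and training with the Adam optimizer is omitted:

```python
import math
import random

def relu(v):
    return [max(0.0, x) for x in v]

def softmax(v):
    m = max(v)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def dense(x, layer):
    W, b = layer
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def forward(x, params):
    # two ReLU hidden layers, then a softmax over intent classes
    h1 = relu(dense(x, params[0]))
    h2 = relu(dense(h1, params[1]))
    return softmax(dense(h2, params[2]))

def make_layer(n_in, n_out, rng):
    W = [[rng.gauss(0.0, 0.5) for _ in range(n_in)] for _ in range(n_out)]
    return W, [0.0] * n_out

rng = random.Random(0)
params = [make_layer(4, 8, rng), make_layer(8, 8, rng), make_layer(8, 3, rng)]

embedding = [0.2, -0.1, 0.4, 0.0]  # stand-in sentence embedding
probs = forward(embedding, params)
# sparse categorical crossentropy for an integer label (true class 1)
loss = -math.log(probs[1])
print(probs, loss)
```

The softmax output always sums to one, so the loss is well defined for any integer class label; dropout and class weighting would be applied during training only.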
Category: Artificial Intelligence
[1424] viXra:2404.0091 [pdf] submitted on 2024-04-17 20:48:34
Authors: Koffka Khan
Comments: 9 Pages. (Note by viXra Admin: Please submit article in pdf only)
As the demand for high-quality video content continues to surge, the effectiveness of adaptive video streaming hinges on the efficiency of dynamic content delivery policies. Traditional approaches face challenges in providing real-time adjustments to account for network conditions and user preferences. This review paper explores the transformative potential of blockchain technology in revolutionizing content delivery policies for adaptive streaming. We delve into the decentralized and transparent nature of blockchain to facilitate dynamic adjustments in real-time, considering factors such as network conditions and user preferences. Through an examination of existing solutions, case studies, and implementations, we showcase how blockchain can enhance the adaptive streaming experience. The paper also discusses the benefits, limitations, and future directions, providing a comprehensive overview of the role of blockchain in shaping the future of adaptive video streaming.
Category: Artificial Intelligence
[1423] viXra:2404.0081 [pdf] submitted on 2024-04-15 23:43:11
Authors: Koffka Khan
Comments: 6 Pages.
In the era of big data, the exponential growth in data volume, velocity, variety, and veracity has presented unprecedented challenges for traditional data processing and analytics techniques. In response to these challenges, metaheuristic algorithms have emerged as powerful tools for solving optimization problems in large-scale datasets. This paper provides a comprehensive review of the applications of metaheuristics in addressing various challenges posed by big data. We begin with an overview of big data challenges and the characteristics of metaheuristic algorithms. We then survey the literature on the application of metaheuristics in key areas such as data preprocessing, clustering, classification, association rule mining, and optimization. Furthermore, we discuss the scalability, efficiency, adaptability, and ethical considerations associated with the use of metaheuristic algorithms in big data analytics. Finally, we outline potential directions for future research in this rapidly evolving field. This review serves as a valuable resource for researchers, practitioners, and decision-makers interested in leveraging metaheuristic approaches to extract actionable insights from big data.
Category: Artificial Intelligence
[1422] viXra:2404.0075 [pdf] submitted on 2024-04-15 23:14:57
Authors: Dimiter Dobrev
Comments: 12 Pages. In Bulgarian
For an AI to become self-aware, it must answer the questions "Where am I?" and "What's going on?" The answer to these questions is hidden in the internal state of the world. To understand the world is to describe its internal state and the function that determines the transitions from one internal state to another. If an AI doesn't try to understand the world, then it's a weak AI. The way to create strong AI is through describing the internal state of the world. To create Artificial General Intelligence (AGI) it is not enough to learn to describe the internal state of the world. We still need to move from one-step to multi-step reasoning. This means starting from the current state of the world and mentally taking a few steps forward into the future and thus choosing the best development for us.
Category: Artificial Intelligence
[1421] viXra:2404.0069 [pdf] submitted on 2024-04-14 22:12:50
Authors: Ait-taleb Nabil
Comments: 11 Pages.
In the context of multiple causation, I will introduce the causation function. This function is a quadratic form computed from the correlations and serves as a generalization of the R-squared commonly found in machine learning. In this report, the causation function links the correlations to the causal relationship. By examining the causation function through an illustrative example, we will demonstrate how strong or weak correlations between multiple causes and a variable can imply either a highly likely or an unlikely causal relationship between the causes and the variable.
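As background only (the paper's causation function itself is not reproduced here), the standard multiple R-squared can indeed be written as a quadratic form in correlations, r' Rxx^{-1} r, which is the quantity the abstract says is being generalized. A minimal NumPy check on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two correlated "causes" and a target variable.
n = 2000
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

X = np.column_stack([x1, x2])
Rxx = np.corrcoef(X, rowvar=False)   # cause-cause correlation matrix
rxy = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

# Quadratic form in the correlations: r' Rxx^{-1} r.
# For standardized linear regression this equals the in-sample R-squared.
r_squared = rxy @ np.linalg.solve(Rxx, rxy)
```

This matches the R-squared of an ordinary least-squares fit with intercept on the same data, which is the sense in which a quadratic form in correlations recovers R-squared.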
Category: Artificial Intelligence
[1420] viXra:2403.0140 [pdf] submitted on 2024-03-29 02:30:59
Authors: Mohammadjavad Maheronnaghsh, Mohammad Hossein Rohban
Comments: 7 Pages.
Edge machine learning (Edge ML) offers solutions for deploying ML models directly on resource-constrained edge devices. However, ensuring adversarial robustness remains a challenge. This paper presents an accessible approach to adversarial robust distillation (ARD) within the limited confines of Google Colab. Our goal is to enable fast yet robust knowledge transfer to student models suited for edge devices. Extensive experiments are conducted distilling from a WideResNet34 teacher to a MobileNetV2 student using limited computational resources. The efficacy of ARD is evaluated under settings with only 1 GPU (T4) and 13 GB RAM for up to 6 hours a day. Notably, competitive adversarial robustness is attained using very few gradient attack steps, improving the training efficiency crucial for edge ML. Appropriately balancing hyperparameters also allows robust accuracy over 50% using just 1 attack step. Overall, the presented approach advances the feasibility of performing robust distillation effectively even under accessibility constraints. The democratized and reproducible method on Google Colab serves as a launchpad for those aiming to reap the advantages of edge intelligence. By sharing models protected against adversarial threats, this work propels broader adoption of trustworthy ML at society's technological edges.
Category: Artificial Intelligence
[1419] viXra:2403.0119 [pdf] submitted on 2024-03-25 19:56:36
Authors: Keith D. Foote
Comments: 13 Pages. (Correction made by viXra Admin to conform with the requirements of viXra.org - Future non-compliant submission will not be accepted!)
The concept of AI governance has been developed to promote responsible behavior in the use of artificial intelligence. Artificial intelligence can be used for the betterment of mankind, and has proven itself to be very useful in completing a large number of tasks both quickly and efficiently. Sadly, AI can also be used in support of criminal behavior, ranging from the creation and distribution of misinformation to audio and video impersonations. AI governance can be described as a philosophy developed to minimize the misuse of artificial intelligence for unethical and criminal behavior.
Category: Artificial Intelligence
[1418] viXra:2403.0112 [pdf] submitted on 2024-03-22 20:35:03
Authors: Ki Song Kim, UiSong Hwang, SongHak Hong, HyonSok Han, YongChol Jang
Comments: 8 Pages.
The Quantum Neural Network (QNN), a newly emerged discipline combining quantum computing theory with neural networks, has recently attracted attention. Although quantum artificial intelligence is still only at its beginning, theoretical research and analysis have already been developed worldwide for quantum associative storage, quantum state superposition, quantum parallel learning, and related areas of quantum computing, so the theoretical basis for quantum neural computing has been laid. In this paper, we describe a simulation method for a quantum BP neural network constructed with multiple Controlled-NOT (CNOT) gates in JupyterLab using the Python language. This QNN consists of multiple CNOT gates and phase control gates, and is emulated as a sequence of quantum steps in the emulator. In this work, we simulated the QNN on the MNIST database and obtained the same accuracy as a classical neural network.
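As background, the two building blocks the abstract names, CNOT gates and phase gates, can be emulated with ordinary linear algebra; this toy sketch shows only the gate matrices acting on a two-qubit state vector, not the paper's full QNN or its training procedure:

```python
import numpy as np

# CNOT on two qubits, basis order |00>, |01>, |10>, |11>
# (first qubit is the control, second the target).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def phase_gate(theta):
    # Single-qubit phase rotation, the other building block mentioned.
    return np.array([[1, 0], [0, np.exp(1j * theta)]], dtype=complex)

# |10>: control qubit is 1, so CNOT flips the target, giving |11>.
state = np.zeros(4, dtype=complex)
state[2] = 1.0
out = CNOT @ state
```

A full state-vector emulator of the kind described would compose many such unitaries in sequence, which is what "emulated with the sequence quantum steps" suggests.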
Category: Artificial Intelligence
[1417] viXra:2403.0107 [pdf] submitted on 2024-03-22 14:35:57
Authors: Yemi Adetuwo
Comments: 18 Pages.
As organizations increasingly adopt cloud services for storing and processing sensitive data, the need for robust cloud security threat detection mechanisms becomes paramount. This research paper explores the application of large language models (LLMs) in the context of cloud security threat detection. Building upon the growing demand for robust cybersecurity measures in cloud environments, this study investigates the use-cases and practical implications of integrating LLMs to support threat detection capabilities. Log analysis, natural language processing (NLP) for security alerts, threat intelligence analysis, and social engineering detection were identified as key areas where LLMs can significantly enhance cloud security threat detection. While acknowledging the potential of LLMs to enhance threat detection, this paper emphasizes their role as complementary tools to existing techniques, such as cloud SOC (security operations center), anomaly detection, network monitoring, and user behaviour analytics. Considerations pertaining to ethics, data privacy, and transparency are also discussed to ensure responsible deployment and usage of LLMs in cybersecurity. Through an extensive review of relevant literature, providing practical examples, and offering expert analysis, this research paper not only sheds light on the potential of LLMs for cloud security threat detection but also delivers actionable recommendations for practitioners and organizations seeking to integrate LLMs effectively into their existing security infrastructure. The findings presented in this study contribute to the advancement of AI-driven cybersecurity and lay the groundwork for further research and development in this critical domain.
Category: Artificial Intelligence
[1416] viXra:2403.0105 [pdf] submitted on 2024-03-22 20:46:45
Authors: Eliza Kosloff
Comments: 3 Pages.
The recent success of large language models (LLMs) in artificial intelligence has drawn significant attention from the machine learning community. However, the theoretical foundations of these models remain poorly understood. In this paper, we explore the deep connections between LLMs and spin glass theory, a well-established framework in statistical physics. We show how key concepts from spin glasses, such as frustration, random interactions, and phase transitions, can provide a powerful lens for understanding the behavior of LLMs. We argue that this interdisciplinary perspective can facilitate knowledge transfer between the machine learning and physics communities, leading to novel insights and algorithmic improvements.
Category: Artificial Intelligence
[1415] viXra:2403.0103 [pdf] submitted on 2024-03-21 02:28:00
Authors: Xiangjun Mi, Chongru Huang, Bingyi Kang
Comments: 15 Pages.
In fuzzy systems, how to represent uncertainty is a crucial research topic. Negation is an inherent characteristic of knowledge, and it provides a brand-new perspective for solving problems from the opposite side of events. Intuitionistic fuzzy sets (IFSs), as a generalization of fuzzy sets, have the ability to better express fuzzy information. However, since existing methods have not completely broken through the constraints of the first (classical) negation and inconsistent calculation standards, IFSs still have limitations in expressing uncertainty. To address this issue, and to strengthen the ability of fuzzy systems to represent uncertain information, this paper proposes a novel method to obtain the negation of an IFS from the perspective of maximum entropy. Some desired theorems and properties are investigated to show the nature of the negative IFS. Moreover, entropy is used to describe the connection between the IFS and uncertainty in the negation process. Furthermore, based on the negation, this paper designs a new approach to measure the uncertainty of the IFS. Then, a new pattern classification algorithm is developed. Finally, practical applications show the effectiveness of the negation method.
Category: Artificial Intelligence
[1414] viXra:2403.0102 [pdf] submitted on 2024-03-21 02:31:57
Authors: Xiangjun Mi, Chongru Huang, Bingyi Kang
Comments: 11 Pages.
How to obtain negation knowledge is a crucial topic, especially in the field of artificial intelligence. Limited work has been done on the negation of a probability distribution, even though probability distributions themselves have been studied in depth throughout the literature. However, the aspect of the intensity level of negation enforcement has not yet been investigated. Moreover, the main characteristic of intelligent systems is precisely their flexibility in representing knowledge according to each situation. In general, researchers have a tendency to express the need for cognitive range in the negation. Thus, it would seem very useful to find a wide range of negations under intensity levels in a probability distribution. Based on these ideas, this paper first proposes a new approach to finding the negation of a probability distribution and gives a domain of intensity in which the negation is executed, which is called the negation space. Then, we investigate a number of desirable properties and explore their correlation with entropy. Numerical examples show the characteristics of the proposed negation solution. Finally, we validate the efficiency of the proposed method from the point of view of the Dempster-Shafer belief structure.
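For context, the best-known negation of a probability distribution, Yager's maximum-entropy negation, maps each probability p_i to (1 - p_i)/(n - 1); an intensity-parameterized family of the kind this abstract describes would generalize it. A minimal sketch of the classical case:

```python
import numpy as np

def yager_negation(p):
    # Classical (Yager) negation of a probability distribution:
    # each outcome's negated mass is proportional to 1 - p_i,
    # renormalized by n - 1 so the result still sums to 1.
    p = np.asarray(p, dtype=float)
    n = p.size
    return (1.0 - p) / (n - 1)

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

p = np.array([0.7, 0.2, 0.1])
neg = yager_negation(p)   # [0.15, 0.4, 0.45]
```

A known property, relevant to the entropy analysis the abstract mentions, is that negation moves the distribution toward uniform, so its entropy does not decrease.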
Category: Artificial Intelligence
[1413] viXra:2403.0101 [pdf] submitted on 2024-03-21 02:44:05
Authors: Xiangjun Mi, Ye Tian, Bingyi Kang
Comments: 40 Pages.
Information fusion is an important topic in scientific research. Soft likelihood function is a common method of fusing evidence from multiple sources. However, when the combined evidence contains equally important decision information, the fusion results obtained using existing methods do not reflect the attitudinal characteristics of decision makers. To address this problem, a novel generalised soft likelihood function is developed in this paper. First, a new notion of decision maker (DM) pair is defined, which is used to characterise the outcome of the decision as well as the reliability of the evidence. Then, a series of algorithms for correcting the initial evidence set data are formulated. Eventually, a generic soft likelihood function for fusing compatible evidence information is proposed. Numerical examples are used to illustrate the effectiveness of the proposed methodology.
Category: Artificial Intelligence
[1412] viXra:2403.0100 [pdf] submitted on 2024-03-21 02:46:50
Authors: Xiangjun Mi, Pengdan Zhang, Bingyi Kang
Comments: 24 Pages.
In real criminal cases, the decision outcome is often influenced by many complex factors, such as the importance of initial evidence and the prioritization of evidence. How to model this information in an integrated manner, so as to provide technical tools for case detection and find the real suspect, is of great importance for social security and stability. To address the above issues, this paper proposes a novel soft likelihood function based on the Decision Making Trial and Evaluation Laboratory (DEMATEL) method. Firstly, the proposed method well preserves the preference of the decision-maker (DM) in the soft likelihood function proposed by Yager et al. Secondly, the method takes into account the modeling of associated information. In addition, it also extends the soft likelihood function to reflect the preferences of DMs through the importance of evidence. Finally, based on these designed algorithms, a decision processing model for criminal cases is constructed, which systematically provides a guiding process for case detection. Numerical examples and applications show the practicality as well as effectiveness of the proposed method.
Category: Artificial Intelligence
[1411] viXra:2403.0094 [pdf] submitted on 2024-03-19 19:47:25
Authors: Budee U. Zaman
Comments: 15 Pages.
Who dominates the destiny of the world, humans or artificial intelligence (AI)? This question strikes at the very heart of contemporary humanity's existential anxieties about its future. If we want to seriously consider whether or not unfriendly AI 'neurons' pose any threat to human civilisation and humanity's continual existence and evolution in the Universe, we need to know as much as possible about the Universe in which we find ourselves, our place in it, and what cognition, consciousness and mentality really are. How might we combine philosophical, cognitive science and technological perspectives to explore the evolving relationship between humans and AI, in order to engage and address the questions at the core of this human-AI complex, namely the future of civilisation — what will it look like, who can claim to be our successors, towards what goals and ends? The evolution and development of human cognition as well as the emergence of AI can help us define these potential paths of future development. Where do we stand today, in relation to our own history and development and to the possibilities that artificial intelligence can offer us? The essay explores the ethical, social and existential questions that arise from the increasing automation of artificial intelligence and how it relates to the story of humanity, from its origins to its contemporary cultural expression.
Category: Artificial Intelligence
[1410] viXra:2403.0063 [pdf] submitted on 2024-03-14 02:09:56
Authors: Philip Naveen
Comments: 6 Pages.
A learning rate scheduler is a predefined set of instructions for varying search step sizes during model training. This paper introduces a new logarithmic method using harsh restarting of step sizes through stochastic gradient descent. Cyclical log annealing implements the restart pattern more aggressively, potentially allowing the use of greedier algorithms in the online convex optimization framework. The algorithm was tested on the CIFAR-10 image dataset and seemed to perform analogously with cosine annealing on large transformer-enhanced residual neural networks. Future experiments would involve testing the scheduler in generative adversarial networks and finding the best parameters for the scheduler with more experiments.
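For comparison, cosine annealing with warm restarts has a standard closed form; the `log_annealing` function below is only a hypothetical guess at a logarithmic decay with the same hard-restart pattern, since the abstract does not give the paper's actual formula:

```python
import math

def cosine_annealing(t, T, lr_max=0.1, lr_min=0.001):
    # Standard cosine annealing with warm restarts (Loshchilov & Hutter):
    # the step index t wraps around every T steps, a "hard restart".
    t = t % T
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / T))

def log_annealing(t, T, lr_max=0.1, lr_min=0.001):
    # Hypothetical logarithmic decay with the same restart pattern;
    # NOT the paper's scheduler, just an illustration of the shape.
    t = t % T
    frac = math.log(1 + t) / math.log(1 + T)   # 0 at restart, -> ~1 late in cycle
    return lr_max - (lr_max - lr_min) * frac
```

Both schedules return to `lr_max` at each restart boundary and decay within a cycle; the logarithmic form front-loads the decay relative to cosine.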
Category: Artificial Intelligence
[1409] viXra:2403.0060 [pdf] submitted on 2024-03-14 21:08:03
Authors: J. G. Wolff
Comments: 143 Pages.
As the title of this book suggests, it is about how intelligence may be understood as information compression (IC). More specifically, the book is about the SP Theory of Intelligence (SPTI) and its realisation in the SP Computer Model, together with their potential applications, benefits, and associated ideas. The SPTI draws on substantial evidence for the importance of IC in human learning, perception, and cognition. Since the SPTI also has much to say about issues in artificial intelligence (AI), it is a theory of both natural and artificial intelligence. In the SPTI, IC is achieved largely via the powerful concept of SP-Multiple-Alignment, a major discovery which is largely responsible for the versatility of the SPTI in aspects of human intelligence and beyond. Strengths of the SPTI include: the modelling of several kinds of intelligent behaviour, including several kinds of probabilistic reasoning; the representation and processing of several kinds of intelligence-related knowledge; and the seamless integration of diverse aspects of intelligence, and diverse kinds of knowledge, in any combination. That seamless integration appears to be essential in any AI system that aspires to the fluidity and versatility of human-level intelligence. Related to the SPTI is another major discovery: that mathematics may be seen as a set of techniques for IC, and their application. This suggests the creation of a New Mathematics via the integration of mathematics with the SPTI, combining the strengths of both. The SPTI also suggests new thinking in concepts of probability and new thinking about 'computation', with potential benefits in both areas. The SPTI has been shown in peer-reviewed papers to be relevant to areas not closely associated with AI. These include: the management of 'big data'; the development of autonomous robots; medical databases; sustainability of computing; transparency in computing; and computer vision.
Category: Artificial Intelligence
[1408] viXra:2403.0026 [pdf] submitted on 2024-03-06 21:36:57
Authors: Jinho Kim, Jooney Han
Comments: 10 Pages.
In this work, we aim to solve the problem of unauthorized learning of works arising from the process of collecting large amounts of data for Text-to-Image (TTI) AI models, represented by Stable Diffusion. The TTI model performs indiscriminate web data crawling to collect a substantial number of images, and these images are used for model learning without the consent of the original author. The TTI model is capable of learning the drawing style of an image, which undermines the value of the original work. Therefore, we suggest a method of transforming images to deteriorate the learning accuracy of TTI models. Then, we compare the quality of the original images to images processed by the modification method presented in this study, using both quantitative and qualitative measurement. Thus, we confirm that the image modification method we propose prevents AI models from learning artistic works without permission.
Category: Artificial Intelligence
[1407] viXra:2403.0021 [pdf] submitted on 2024-03-06 07:43:20
Authors: Satish Gajawada
Comments: 2 Pages.
Data Science and Artificial Intelligence are popular fields of research. A significant contribution was made to Artificial Intelligence in the recent past by defining branches like "Artificial Intelligence Plus Plus (AI++)", "The Interesting and Complete Artificial Intelligence (ICAI)", "Out of the Box Artificial Intelligence (OBAI)", "Twenty Second Century Artificial Intelligence (TSCAI)". A similar significant contribution can be made to Data Science by defining branches like "Data Science Plus Plus (DS++)", "The Interesting and Complete Data Science (ICDS)", "Out of the Box Data Science (OBDS)", "Twenty Second Century Data Science (TSCDS)". This article is based on these research gaps. The primary focus of this work is to coin, define and invent a new Data Science field titled "Data Science Plus Plus (DS++)".
Category: Artificial Intelligence
[1406] viXra:2402.0103 [pdf] submitted on 2024-02-19 21:31:30
Authors: Ben Lemkin
Comments: 9 Pages.
GPT4 was initially trained on large amounts of data, and then fine-tuned using Reinforcement Learning from Human Feedback (RLHF), in which volunteers give feedback to teach GPT4 not to create inappropriate content. In this paper, we present a method to manipulate the fine-tuned version into reverting to pre-RLHF behavior, effectively removing all safety mechanisms that the model learned during RLHF. In particular, when GPT4 acts without RLHF, it loses all inhibition and can complete very inappropriate content given only the first few words.
Category: Artificial Intelligence
[1405] viXra:2402.0083 [pdf] submitted on 2024-02-17 22:22:04
Authors: Sai Harvin Kusumaraju, Arya Suneesh, Aastha Rana, Sriharsha Bodicherla, Bhaumik Tyagi
Comments: 8 Pages.
The accelerating advancements in Generative Artificial Intelligence (GenAI) have led to an unprecedented surge in data creation on the Internet, posing challenges to current computing and communication frameworks. GenAI, a distinct category of AI, generates content akin to human creations. Currently, GenAI services heavily rely on traditional cloud computing, resulting in high latency due to data transmission and a surge in requests. In response, the integration of edge-cloud computing emerges as an attractive paradigm, offering computation power and low latency through collaborative systems. This research paper provides a comprehensive overview of the intersection between GenAI and edge-cloud computing. We delve into recent developments in both domains and examine technical challenges through the lens of two exemplary GenAI applications. Introducing an innovative solution, we propose the Generative AI-oriented synthetical network (EcoGen), a collaborative cloud-edge-end intelligence framework. EcoGen facilitates bidirectional knowledge flow, allowing GenAI's pre-training to provide foundational knowledge for Edge Intelligence (EI), while EI aggregates personalized knowledge for GenAI. The framework leverages data-free knowledge relay to buffer contradictions, enabling virtuous-cycle model fine-tuning and task inference. Importantly, we incorporate a detailed analysis of the energy efficiency and environmental sustainability aspects of deploying Generative AI systems at scale, particularly in edge computing. Strategies to optimize energy consumption and reduce the carbon footprint are explored, contributing to a more sustainable AI ecosystem. Experimental results demonstrate the effectiveness of EcoGen in achieving seamless fusion and collaborative evolution between GenAI and EI.
The paper concludes by outlining design considerations for training and deploying GenAI systems at scale and pointing towards future research directions, emphasizing the imperative of sustainable AI practices.
Category: Artificial Intelligence
[1404] viXra:2402.0072 [pdf] submitted on 2024-02-15 19:45:14
Authors: Akira Pyinya
Comments: 17 Pages.
Inspired by the Copycat Project, we construct ACI, an analogy-based theory of intelligence in which intelligence is defined as doing the same thing in new circumstances, rather than as an optimization force that pursues goals or maximizes utility. The ACI theory integrates different paradigms of cognitive science and artificial intelligence, explains the emergence of intelligence, and provides a novel perspective on AI alignment that focuses on the balance between capability and normativity and rules out the Paperclip Maximizer scenario. It also shows the possibility of constructing analogy-based machine learning and neural network projects that can outperform current projects in terms of interpretability.
Category: Artificial Intelligence
[1403] viXra:2402.0066 [pdf] submitted on 2024-02-13 21:32:38
Authors: Yew Kee Wong, Yifan Zhou, Zi Yan Li, Yan Shing Liang, Xinlin Zhou
Comments: 23 Pages.
Software security is crucial to ensuring the confidentiality, integrity, and availability of software systems and applications. However, conventional cryptographic methods based on mathematical assumptions are vulnerable to various attacks, especially in the era of quantum computing. Therefore, there is a need for a new paradigm of software security that can resist quantum threats. This paper proposes a novel approach to using Long-Distance Free-Space Quantum Secure Direct Communication (LF QSDC) to enhance software security. LF QSDC is a quantum communication protocol that enables two parties to exchange secret messages directly without relying on a pre-shared key or quantum error correction. Our research delves into integrating LF QSDC into software security, emphasizing its practicality for long-distance communication through the use of the memory DL04 protocol, Machine Learning Enhanced JEEC, and PAT technologies. By adopting this approach, we reinforce the security of global software systems and ensure their sustainability in an era where both quantum and advanced classical threats coexist side by side. Thus, LF QSDC emerges as a future-proof security mechanism highly applicable to software security systems.
Category: Artificial Intelligence
[1402] viXra:2402.0060 [pdf] submitted on 2024-02-12 22:57:57
Authors: Pratham Taneja, Keshav Chandra, Daamini Batra, Akshita Gupta, Rahul Kumar, Bhaumik Tyagi
Comments: 10 Pages.
This research paper introduces novel strategies to enhance the performance and efficiency of neural language models, addressing challenges in resource-limited settings and scalability. This research presents multi-linear attention with Block-Term Tensor Decomposition (BTD), a self-attention model leveraging tensor decomposition and parameter sharing. This approach achieves significant parameter compression while demonstrating improved performance on language modeling tasks. Comparative evaluations against traditional Transformer models underscore the effectiveness of multi-linear attention. TensorCoder employs a dimension-wise attention mechanism to address the quadratic complexity of the scaled dot-product attention in Transformers, making it suitable for long sequence tasks. The proposed approach is validated on masked language modeling and neural machine translation tasks, showcasing a substantial reduction in computational complexity while maintaining or surpassing performance compared to the original Transformer. This research also optimizes pre-trained language models (PLMs) through fine-tuning. To overcome computational challenges associated with large PLMs, the paper introduces a matrix product operator for over-parameterization during fine-tuning. Efficient decomposition methods factorize parameter matrices into higher-dimensional tensors, enabling the selection of important parameter matrices through static and dynamic strategies. Extensive experiments demonstrate that this approach significantly enhances the fine-tuning performance of small PLMs, enabling them to outperform larger counterparts with three times the parameters. This research opens avenues for efficiently scaling language models without compromising inference latency, showcasing the potential of over-parameterization in enhancing the applicability of large PLMs in real-world systems.
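As background to the quadratic-complexity problem TensorCoder targets, standard scaled dot-product attention materializes an n-by-n score matrix in the sequence length n; a minimal NumPy sketch (toy sizes, not any model from the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Standard Transformer attention. The (n, n) score matrix is what
    # gives the quadratic memory/compute cost in sequence length n,
    # which dimension-wise attention schemes aim to avoid.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # shape (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
n, d = 6, 4                      # toy sequence length and head dimension
Q = rng.normal(size=(n, d))
K = rng.normal(size=(n, d))
V = rng.normal(size=(n, d))
out, w = scaled_dot_product_attention(Q, K, V)
```

Doubling n quadruples the size of `w`, which is the scaling behavior that motivates alternatives for long sequences.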
Category: Artificial Intelligence
[1401] viXra:2402.0059 [pdf] submitted on 2024-02-12 23:00:47
Authors: Yew Kee Wong, Yifan Zhou, Yan Shing Liang, Angelina Li, Linnea Zhou
Comments: 22 Pages.
With the advent of Web 3.0, the swift advancement of technology confronts an imminent threat from quantum computing. Security protocols safeguarding the integrity of Web 2.0 and Web 3.0 are growing more susceptible to both quantum attacks and sophisticated classical threats. The article introduces long-distance free-space quantum secure direct communication (LDFS QSDC) as a method to safeguard against security breaches in both quantum and classical contexts. Differing from techniques like quantum key distribution (QKD), LDFS QSDC surpasses constraints by facilitating encrypted data transmission sans key exchanges, thus diminishing the inherent weaknesses of key-based systems. The distinctiveness of this attribute, coupled with its quantum mechanics base, protects against quantum computer assaults and advanced non-quantum dangers, harmonizing seamlessly with the untrustworthy tenets of the Web 3.0 age. The focus of our study is the incorporation of LDFS QSDC into network infrastructures, highlighting its efficacy for extended-range communication via the memory DL04 protocol, quantum-aware low-density parity check (LDPC), and pointing, acquisition, and tracking (PAT) technologies. Utilizing this method not only bolsters the security of worldwide Web 3.0 networks but also guarantees their endurance in a time where quantum and sophisticated classical threats exist simultaneously. Consequently, LDFS QSDC stands out as a robust security solution, well-suited for Web 3.0 systems amidst the constantly evolving digital environment.
Category: Artificial Intelligence
[1400] viXra:2402.0043 [pdf] submitted on 2024-02-09 16:17:17
Authors: Petar Radanliev
Comments: 17 Pages.
The technological advancements made in recent times, particularly in Artificial Intelligence (AI) and Quantum Computing, have brought about significant changes in technology. These advancements have profoundly impacted quantum cryptography, a field where AI methodologies hold tremendous potential to enhance the efficiency and robustness of cryptographic systems. However, the emergence of quantum computers has created a new challenge for existing security algorithms, commonly called the 'quantum threat'. Despite these challenges, there are promising avenues for integrating neural network-based AI in cryptography, which has significant implications for future digital security paradigms. This summary highlights the key themes in the intersection of AI and quantum cryptography, including the potential benefits of AI-driven cryptography, the challenges that need to be addressed, and the prospects of this interdisciplinary research area.
Category: Artificial Intelligence
[1399] viXra:2402.0038 [pdf] submitted on 2024-02-07 04:31:40
Authors: Mayur Sinha, Sangram Kesari Ray, Khirawadhi
Comments: 5 Pages.
This research paper explores the application of the GPT-3.5 Turbo Instruct model for the transformation of natural language queries into structured SQL queries within the domain of Human Resources (HR) analytics. The study focuses on the IBM Attrition dataset, utilizing the advanced capabilities of the GPT-3.5 Turbo Instruct model to enable efficient and intuitive querying of HR-related data. Employing the model, we conducted experiments to assess its effectiveness in generating SQL queries from diverse natural language inputs, specifically tailored to the nuances of HR analytics questions pertaining to employee attrition within the IBM dataset. By leveraging prompt engineering with only a few shot examples, our investigation revealed the model's capacity to accurately understand and interpret complex queries, providing SQL outputs that align with the dataset structure.
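As a rough illustration of the few-shot prompting strategy the abstract describes, the sketch below assembles a text-to-SQL prompt. The `attrition` table, its columns, and the example pairs are illustrative placeholders, not the authors' actual IBM dataset schema or prompts.

```python
# Hypothetical question/SQL pairs used as few-shot demonstrations.
FEW_SHOT_EXAMPLES = [
    ("How many employees left the company?",
     "SELECT COUNT(*) FROM attrition WHERE Attrition = 'Yes';"),
    ("What is the average age of employees in Sales?",
     "SELECT AVG(Age) FROM attrition WHERE Department = 'Sales';"),
]

def build_prompt(question: str) -> str:
    """Assemble a few-shot prompt pairing NL questions with SQL answers."""
    parts = ["Translate the question into SQL over table `attrition`.\n"]
    for q, sql in FEW_SHOT_EXAMPLES:
        parts.append(f"Q: {q}\nSQL: {sql}\n")
    parts.append(f"Q: {question}\nSQL:")  # model completes from here
    return "\n".join(parts)

prompt = build_prompt("How many employees work in Research & Development?")
```

The completed prompt would then be sent to the instruct model, which continues the text after the final `SQL:` marker.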
Category: Artificial Intelligence
[1398] viXra:2402.0027 [pdf] submitted on 2024-02-06 20:22:01
Authors: Nana Abeka Otoo, Asirifi Boa, Muhammad Abubakar
Comments: 9 Pages.
Methods beyond neural scaling laws for beating power scaling laws in machine learning have become topical for high-performance machine learning models. Nearest Prototype Classifiers (NPCs) introduce a category of machine learning models known for their interpretability. However, the performance of NPCs is frequently impacted by large datasets that scale to high dimensions. We surmount the performance hurdle by employing self-supervised prototype-based learning metrics to intelligently prune datasets of varying sizes, encompassing low and high dimensions. This process aims to enhance the robustification and certification of NPCs within the framework of the Learning Vector Quantization (LVQ) family of algorithms, utilizing Crammer normalization for arbitrary semi-norms (semi-metrics). The numerical evaluation of outcomes reveals that NPCs trained with pruned datasets demonstrate sustained or enhanced performance compared to instances where training is conducted with full datasets. The self-supervised prototype-based metric (SSL) and the Perceptual-SSL (P-SSL) utilized in this study remain unaffected by the intricacies of optimal hyperparameter selection. Consequently, data pruning metrics can be seamlessly integrated with triplet loss training to assess the empirical and guaranteed robustness of Lp-NPCs and Perceptual-NPCs (P-NPCs), facilitating the curation of datasets that contribute to research in applied machine learning.
Category: Artificial Intelligence
[1397] viXra:2401.0154 [pdf] submitted on 2024-01-31 21:27:08
Authors: TongGuk Kim, CholRyon Pak, KwangJin Ryang
Comments: 9 Pages.
As manufacturing technology develops, hardware costs keep falling, and more and more computers equipped with multiple CPUs and large data disks are emerging. Existing programming models prevent people from making effective use of these growing computational resources; hence cloud computing appeared. With the MapReduce parallel model, existing computing and storage capabilities are effectively integrated and powerful distributed computing ability is provided. Association rules can reveal horizontal relations in big data, and the Apriori algorithm is one of the most significant association-rule algorithms. Traditional mining based on parallel Apriori algorithms spends ever more time on data I/O as large transaction databases grow. This paper improves the Apriori algorithm by compressing transactions, reducing the number of scans, and simplifying candidate set generation. The improved algorithm is then parallelized on the Hadoop framework. Experiments show that the improved algorithm is suitable for large-scale data mining and has good scalability and effectiveness.
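For readers unfamiliar with the baseline being improved, here is a minimal single-machine Apriori sketch (join and prune steps only). The paper's actual contributions, namely transaction compression, fewer scans, simplified candidate generation, and Hadoop parallelization, are not reproduced here.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return all itemsets whose support (fraction of transactions
    containing them) is at least min_support."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    # Frequent 1-itemsets from a first full scan.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s for s, c in counts.items() if c / n >= min_support}
    result = set(frequent)
    k = 2
    while frequent:
        # Join step: build size-k candidates from frequent (k-1)-itemsets.
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        # Prune step: every (k-1)-subset must itself be frequent.
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        # Support counting requires another pass over the database,
        # which is the I/O cost the paper targets.
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {c for c, cnt in counts.items() if cnt / n >= min_support}
        result |= frequent
        k += 1
    return result
```

Each level of the lattice triggers a fresh database scan, which is why reducing scans and compressing transactions pays off at scale.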
Category: Artificial Intelligence
[1396] viXra:2401.0130 [pdf] submitted on 2024-01-25 14:06:19
Authors: Yew Kee Wong, Yifan Zhou, Yan Shing Liang
Comments: 10 Pages.
Quantum Image Processing (QIP) is a field that aims to utilize the benefits of quantum computing for manipulating and analyzing images. However, QIP faces two challenges: the limitation of qubits and the presence of noise in a quantum machine. In this research we propose a novel approach to address the issue of noise in QIP. By training and employing a machine learning model that identifies and corrects the noise in quantum-processed images, we can compensate for the noisiness caused by the machine and retrieve a processing result similar to that of a classical computer, with higher efficiency. The model is trained on a dataset consisting of both conventionally processed and quantum-processed images from open-access datasets, and provides a confidence level for each pixel along with its potential original value. To assess the model's accuracy in compensating for loss and decoherence in QIP, we evaluate it using three metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Mean Opinion Score (MOS). Additionally, we discuss the applicability of our model across domains as well as its cost-effectiveness compared to alternative methods.
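Of the three metrics, PSNR has the simplest closed form; a minimal sketch over flat grayscale pixel sequences (not the authors' evaluation code):

```python
import math

def psnr(original, processed, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-length pixel
    sequences. Higher is better; identical inputs give infinity."""
    mse = sum((o - p) ** 2 for o, p in zip(original, processed)) / len(original)
    if mse == 0:
        return math.inf
    return 10 * math.log10(max_val ** 2 / mse)
```

SSIM additionally compares local luminance, contrast, and structure, and MOS is a human rating, so neither reduces to a one-liner like this.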
Category: Artificial Intelligence
[1395] viXra:2401.0071 [pdf] submitted on 2024-01-16 01:05:49
Authors: Ait-TYaleb Nabil
Comments: 12 Pages.
In this paper, we will expose the causation of multiple causes acting on a single variable computed from correlations. Using an example, we will show when strong or weak correlations between multiple causes and a variable imply a strong or weak causation between the causes and the variable.
Category: Artificial Intelligence
[1394] viXra:2401.0059 [pdf] submitted on 2024-01-12 18:25:00
Authors: Naguneu Lionel Perin, Jimbo Claver, Bouetou Thomas, Tchoua Paul
Comments: 9 Pages.
This paper presents a deep learning-based approach for stock price prediction in financial markets. Accurately predicting future stock price movements is of crucial importance to investors and traders, as it allows them to make informed investment decisions. Deep learning, a branch of artificial intelligence, offers new perspectives on this complex challenge. Deep learning models, such as deep neural networks, can extract complex features and patterns from large amounts of historical data on stock prices, trading volumes, financial news, and other relevant factors. Using this data, deep learning and machine learning models can learn to recognize trends, patterns, and non-linear relationships between variables that can influence stock prices. Once trained, these models can be used to predict future stock prices. This study aims to find the most suitable model for predicting stock prices using statistical learning with the deep learning and machine learning methods RNN, LSTM, GRU, SVM, and Linear Regression, using Apple stock prices from Yahoo Finance from 2000 to 2024. The results showed that SVM modeling is not suitable for predicting Apple stock prices. In comparison, GRU showed the best performance, with an MAE of 1.64 and an RMSE of 2.14, exceeding the results of LSTM, Linear Regression, and SVM. A limitation of this research is that only time-series data was used. It is important to note, however, that stock price forecasting remains a complex challenge due to the volatile nature of financial markets and the influence of unpredictable factors. Although deep learning models can improve prediction accuracy, it is essential to understand that errors can still occur.
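The MAE and RMSE figures reported above are standard error measures; a minimal sketch of how they are computed (not the authors' code):

```python
import math

def mae(y_true, y_pred):
    """Mean Absolute Error: average magnitude of prediction errors."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root Mean Squared Error: like MAE, but penalizes large errors more."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
```

RMSE is always at least as large as MAE on the same data, which is consistent with the reported pair (MAE 1.64, RMSE 2.14).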
Category: Artificial Intelligence
[1393] viXra:2401.0045 [pdf] submitted on 2024-01-08 13:33:43
Authors: Junjie Huang, Fuyuan Xiao
Comments: 2 Pages.
In this paper, a novel TFN-based complex basic belief assignment generation method is proposed to improve decision-making accuracy in complex evidence theory.
Category: Artificial Intelligence
[1392] viXra:2401.0043 [pdf] submitted on 2024-01-08 20:00:56
Authors: Sana Shakeel
Comments: 8 Pages.
Machine Learning is the study of computer algorithms that improve automatically through experience and the use of data. Over the past two decades, the complex mathematical expressions of the physical processes of floods have been studied through Machine Learning, and these methods have contributed greatly to the advancement of prediction systems, providing better performance and cost-effective solutions. Due to its vast benefits and potential, Machine Learning is popular among hydrologists. By introducing novel Machine Learning methods and hybridizing existing ones, researchers aim to discover more accurate and efficient prediction models. Flooding is the most devastating natural hazard in Pakistan, and recent flooding has demonstrated its severity through large-scale destruction and the displacement of homes and businesses in interior Sindh. This paper aims to explore the flood detection methodologies currently used in Pakistan and the potential of Machine Learning in prediction systems within the country. Drawing on sources such as journals, scientific articles, and websites, the research assembles relevant information concerning floods and their prevention.
Category: Artificial Intelligence
[1391] viXra:2401.0021 [pdf] submitted on 2024-01-05 01:17:17
Authors: Budee U. Zaman
Comments: 16 Pages.
This paper introduces a preliminary concept aimed at achieving Artificial General Intelligence (AGI) by leveraging a novel approach rooted in two key aspects. Firstly, we present the General Intelligent Network (GIN) paradigm, which integrates information entropy principles with a generative network, reminiscent of Generative Adversarial Networks (GANs). Within the GIN network, original multimodal information is encoded as low-information-entropy hidden state representations (HPPs). These HPPs serve as efficient carriers of contextual information, enabling reverse parsing by contextually relevant generative networks to reconstruct observable information. Secondly, we propose a Generalized Machine Learning Operating System (GML System) to facilitate the seamless integration of the GIN paradigm into the AGI framework. The GML system comprises three fundamental components: an Observable Processor (AOP) responsible for real-time processing of observable information, an HPP Storage System for the efficient retention of low-entropy hidden state representations, and a Multimodal Implicit Sensing/Execution Network designed to handle diverse sensory inputs and execute corresponding actions.
Category: Artificial Intelligence
[1390] viXra:2401.0012 [pdf] submitted on 2024-01-03 19:13:36
Authors: Mayur Sinha, Sangram Kesari Ray, Khirawadhi
Comments: 4 Pages.
Runtime Application Security Protection (RASP) is crucial in safe-guarding applications against evolving cyber threats. This research presents a novel approach leveraging a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) model as the cornerstone of a robust RASP solution. The fine-tuning process optimizes BERT’s natural language processing capabilities for application security, enabling nuanced threat detection and mitigation at runtime. The developedRASP system harnesses BERT’s contextual understanding to proactively identify and neutralize potential vulnerabilities and attacks within diverse application environments. Through comprehensive evaluation and experimentation, this study demonstrates the efficacy and adaptability of the BERT-based RASP solution in enhancing application security, thereby contributing to the advancement of proactive defense mechanisms against modern cyber threats.
Category: Artificial Intelligence
[1389] viXra:2312.0153 [pdf] submitted on 2023-12-29 01:28:13
Authors: Shashwat Gupta, Jibril Frej, Paola Mejia, Tanja Kaesar
Comments: 18 Pages.
This paper focuses on question difficulty estimation (calibration), and its applications in educational scenarios and beyond. The emphasis is on the use of Active Learning to bound the minimum number of labelled samples that we need. It also explores using various SOTA methods for predicting question difficulty, with a specific focus on German textual questions using the Lernnavi dataset. The study refines preprocessing techniques for question data and metadata to improve question difficulty estimation.
Category: Artificial Intelligence
[1388] viXra:2312.0152 [pdf] submitted on 2023-12-29 01:26:30
Authors: Shashwat Gupta, Vidit Singh, Mathieu Salzmann
Comments: 20 Pages.
Spatial Transformer Networks (STNs) are highly efficient at warping the input image for a downstream task; however, cascaded STNs have been found able to learn more complex transformations. We attempt to leverage the multistep process of diffusion models to produce modules with an effect similar to cascaded STNs.
Category: Artificial Intelligence
[1387] viXra:2312.0151 [pdf] submitted on 2023-12-29 01:24:08
Authors: Shashwat Gupta, Sebastien Breguql, Martin Jaggi, Nicolas Flammarion
Comments: 4 Pages.
In this short study, we aim to gain deeper insights into Keswani's algorithm [1] for sequential minimax optimisation by comparing its behaviour with two other algorithms: Gradient Descent Ascent (GDA) and Online Mirror Descent (OMD).
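For reference on the GDA baseline mentioned above, here is a minimal simultaneous gradient-descent-ascent sketch on a toy convex-concave objective; this is not Keswani's algorithm nor the study's experimental setup.

```python
def gda(grad_x, grad_y, x0, y0, lr=0.1, steps=500):
    """Simultaneous GDA for min_x max_y f(x, y):
    descend along grad_x, ascend along grad_y."""
    x, y = x0, y0
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x, y = x - lr * gx, y + lr * gy  # simultaneous update
    return x, y

# Toy objective f(x, y) = x**2 - y**2 + x*y:
# strongly convex in x, strongly concave in y, saddle point at the origin.
x, y = gda(lambda x, y: 2 * x + y,   # df/dx
           lambda x, y: x - 2 * y,   # df/dy
           1.0, 1.0)
```

On purely bilinear objectives such as f(x, y) = xy, simultaneous GDA famously spirals away from the saddle point, which is one reason alternatives like OMD are of interest.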
Category: Artificial Intelligence
[1386] viXra:2312.0141 [pdf] submitted on 2023-12-26 20:39:13
Authors: Mark A. Atkins
Comments: 349 pages, 337 figures
Since the key to artificial general intelligence (AGI) is commonly believed to be commonsense reasoning (CSR) or, roughly equivalently, discovery of a knowledge representation method (KRM) that is particularly suitable for CSR, the author developed a custom KRM for CSR. This novel KRM called Tumbug was designed to be pictorial in nature because there exists increasing evidence that the human brain uses some pictorial type of KRM, and no well-known prior research in AGI has researched this KRM possibility. Tumbug is somewhat similar to Roger Schank's Conceptual Dependency (CD) theory, but Tumbug is pictorial and uses about 30 components based on fundamental concepts from the sciences and human life, in contrast to CD theory, which is textual and uses about 17 components (= 6 Primitive Conceptual Categories + 11 Primitive Acts) based mainly on human-oriented activities. All the Building Blocks of Tumbug were found to generalize to only five Basic Building Blocks that exactly correspond to the three components {O, A, V} of traditional Object-Attribute-Value representation plus two new components {C, S}, which are Change and System. Collectively this set of five components, called "SCOVA," seems to be a universal foundation for all knowledge representation.
Category: Artificial Intelligence
[1385] viXra:2312.0138 [pdf] submitted on 2023-12-27 04:57:52
Authors: Mark A. Atkins
Comments: 22 pages, 10 figures
This 2023 document is a wrapper that embeds the author's original 2022 article of the above title that has never been publicly available before. The embedded article is about Phase 1 (which is about Tumbug) and Phase 2 (which is about non-spatial reasoning) of the 5-phase Visualizer Project of the author, a project that is still in progress as of late 2023. The embedded article is currently being re-released by the author to supply more information about that project to the public, and for historical reasons. The embedded article was written before a much more thorough article about Phase 1 (viz., "Tumbug: A pictorial, universal knowledge representation method") became available in 2023, but the embedded article describes results from Phase 2 that have not yet been documented elsewhere.
Category: Artificial Intelligence
[1384] viXra:2312.0114 [pdf] submitted on 2023-12-21 23:20:44
Authors: Alexander Novikov
Comments: 249 Pages.
This Book proposes a Project Conception of Artificial Super Intelligence (ASI), based on a (strong) system approach and a wide theoretical-methodological framework: Cybernetics, Synergetics, Semiotics, Mathematics, Cognitology, and Artificial Intelligence. Contents:
- IDEOLOGY & STRATEGY of the ASI Project
- THEORY & METHODOLOGY of ASI Development
- CONCEPTUAL MODEL of ASI System
- PRE-PROJECT R&D Task Setting
- CONCLUSION & DISCUSSION, incl. AI Safety
- APPENDICES with reviews of relevant scientific and R&D areas, incl. frontier AI Models
The Book may be useful and interesting for the staff of organizations and enterprises concerned with AI R&D and implementations in different areas, firstly perspective AGI/ASI systems; for customers, investors, and sponsors of such R&D, whether private, public, or state, and their owners and officials; and, of course, for all intellectual, educated, and ethical people with progressive worldviews who are interested in the above problematics.
Category: Artificial Intelligence
[1383] viXra:2312.0105 [pdf] submitted on 2023-12-20 20:46:28
Authors: Mayur Sinha, Sangram Kesari Ray, Khirawadhi
Comments: 5 Pages.
Fine-tuning pre-trained language models like Bidirectional Encoder Representations from Transformers (BERT) has exhibited remarkable potential in various natural language processing tasks. In this study, we propose and investigate the fine-tuning of BERT specifically for the classification of HTTP payload representations within network traffic. Given BERT's adeptness at capturing semantic relationships among tokens, we aim to harness its capabilities for discerning normal and anomalous patterns within HTTP payloads. Leveraging transfer learning by fine-tuning BERT, our methodology involves training the model on a task-specific dataset to adapt its pre-trained knowledge to the intricacies of HTTP payload classification. We explore the process of fine-tuning BERT to learn nuanced representations of HTTP payloads and effectively distinguish between normal and anomalous traffic patterns. Our findings reveal the potential efficacy of fine-tuned BERT models in bolstering the accuracy and efficiency of anomaly detection mechanisms within network communications.
Category: Artificial Intelligence
[1382] viXra:2312.0078 [pdf] submitted on 2023-12-15 01:23:48
Authors: Stavroula Marini
Comments: 137 Pages.
This thesis has been prepared for the interuniversity postgraduate program in Health Care Management and Health Care Informatics. Its purpose is to study the current situation of Pandemic Response Information Systems and to make suggestions for the improvement of the situation by creating a Single Pandemic Response Information System. In the first chapter, the needs and challenges of Health Information Systems are mentioned and a brief analysis of the situation which exists at the global and Greek level is presented. In the second chapter, a bibliographic review is made regarding Health Information Systems for Pandemic Response at the global and Greek level and there is a comparative study of them. The third chapter presents the case studies of three Greek Pandemic Response Information Systems: covid19.gov.gr, the Vaccination Appointment System and the Vaccination Certificates in Digital Form. Furthermore, the fourth chapter presents the pilot design of an Integrated Pandemic Response System at the Greek level. The need for a single system, as well as its requirements, emerges based on the analysis of the questionnaires completed by ordinary users and by professional users of the Pandemic Response Information Systems. In the fifth and last chapter, the conclusions, challenges, limitations and future goals of the thesis are mentioned.
Category: Artificial Intelligence
[1381] viXra:2312.0061 [pdf] submitted on 2023-12-11 20:28:16
Authors: Bhaumik Tyagi, Pratham Taneja, Akshita Gupta, Daamini Batra, Keshav Chandra
Comments: 8 Pages.
This research introduces a pioneering framework named TransBERT that capitalizes on the capabilities of two sophisticated language models, TransPolymer and polyBERT, to comprehensively advance the polymer informatics field. TransPolymer, a Transformer-based language model, predicts polymer properties by leveraging self-attention mechanisms. The model employs a polymer tokenizer imbued with chemical awareness, facilitating the extraction of meaningful representations from polymer sequences. Moreover, TransPolymer benefits from rigorous pretraining on extensive unlabeled datasets through Masked Language Modeling, underscoring the pivotal role of self-attention in effectively modeling polymer sequences. In conjunction with TransPolymer, polyBERT contributes a fully automated polymer informatics pipeline designed to expedite the identification of application-specific polymer candidates with heightened speed and accuracy. Drawing inspiration from Natural Language Processing concepts, polyBERT operates as a chemical linguist, treating the chemical structure of polymers as a unique language. The pipeline integrates a polymer chemical fingerprinting capability and a multitask learning approach to map polyBERT fingerprints to diverse polymer properties effectively. Notably, polyBERT outperforms existing polymer property prediction methods based on manually crafted fingerprint schemes, achieving a remarkable two-orders-of-magnitude increase in speed while maintaining high accuracy. Integrating TransPolymer and polyBERT results in a robust computational tool poised to propel polymer design and the understanding of structure-property relationships. This combined framework strategically harnesses the strengths of Transformer models and machine-driven informatics, offering unparalleled efficiency in the prediction and identification of polymer properties. This synergistic approach holds significant promise for scalable deployment, including applications in cloud infrastructures, thereby making substantial contributions to the advancement of polymer science and informatics.
Category: Artificial Intelligence
[1380] viXra:2312.0038 [pdf] submitted on 2023-12-07 21:26:24
Authors: Shobhit Verma
Comments: 7 Pages. (Correction made by viXra Admin to conform with scholarly norm)
The justification of using parametric regression techniques (like Linear, Polynomial, Neural networks etc.) comes from the close relationship between the regression estimates and the maximum likelihood estimates. However, it is common to use regression.
Category: Artificial Intelligence
[1379] viXra:2312.0028 [pdf] submitted on 2023-12-05 05:16:15
Authors: Yu Zhou, Fuyuan Xiao
Comments: 3 Pages.
In this paper, a quantum generalized combination rule algorithm is proposed to reduce the computational complexity of generalized evidence theory combination rule.
Category: Artificial Intelligence
[1378] viXra:2312.0017 [pdf] submitted on 2023-12-03 21:05:41
Authors: Cadey A. Ratio, Nicole Brennan, Jessica Williams, Ashley Kaplan, Stephanie Williams, Ma Insa
Comments: 5 Pages.
Further improvements to the Automuse system are described. The use of GPT-4 Turbo 128k allows for unique opportunities in increasing output quality and quantity. Further adaptations to modernize scenarios and plots are also described.
Category: Artificial Intelligence
[1377] viXra:2311.0113 [pdf] submitted on 2023-11-24 02:18:52
Authors: Cadey A. Ratio, Nicole Brennan, Jessica Williams, Ashley Kaplan, Stephanie Williams, Ma Insa
Comments: 4 Pages.
A novel approach to generating fiction novels using a combination of Plotto, a system of plot formulas, and GPT-4, a state-of-the-art language model, is presented. An eBook publication pipeline that automates the process of creating and formatting eBooks from the generated text is also described. The aim is to explore the potential and limitations of using artificial intelligence for creative writing, as well as to provide a tool for amusement and experimentation.
Category: Artificial Intelligence
[1376] viXra:2311.0089 [pdf] submitted on 2023-11-19 12:03:16
Authors: Nana Abeka Otoo, Asirifi Boa, Muhammad Abubakar
Comments: 7 Pages.
This paper presents a prototype-based soft feature selection package (Sofes) wrapped around the highly interpretable Matrix Robust Soft Learning Vector Quantization (MRSLVQ) and the Local MRSLVQ algorithms. The process of assessing feature relevance with Sofes aligns with a comparable approach established in the Nafes package, with the primary distinction being the utilization of prototype-based induction learners influenced by a probabilistic framework. The numerical evaluation of test results aligns Sofes' performance with that of the Nafes package.
Category: Artificial Intelligence
[1375] viXra:2311.0080 [pdf] submitted on 2023-11-16 02:48:07
Authors: Ansh Chaudhary
Comments: 4 Pages.
Deep learning has revolutionized the approach to complex data-driven problems, specifically in medical imaging, where its techniques have significantly raised efficiency in organ segmentation. The urgent need to enhance the depth and precision of organ-based classification is an essential step towards the automation of medical operations and diagnostics. The research aims to investigate the effect and potential advantages transformer models have on binary semantic segmentation, the method utilized for the project. Hence, I employed the SegFormer model, for its lightweight architecture, as the primary deep learning model, alongside the U-Net. A custom 2D computerized tomography (CT) scan dataset, CT-Org2D, was assembled through meticulous operations. Extensive experiments showed that, in contrast to the selected models, the task's simplicity called for a redesigned U-Net architecture with reduced complexity. This model yielded impressive results: Precision, Recall, and IoU scores of 0.91, 0.92, and 0.85 respectively. The research serves as a starting point, motivating further exploration through different methodologies to achieve even greater efficiency in organ segmentation.
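The three reported scores reduce to overlap counts between predicted and ground-truth masks; a minimal sketch for flat binary masks (independent of the paper's actual evaluation code):

```python
def segmentation_scores(pred, target):
    """Precision, Recall, and IoU for flat binary masks (iterables of 0/1)."""
    tp = sum(p & t for p, t in zip(pred, target))        # predicted 1, truth 1
    fp = sum(p & (1 - t) for p, t in zip(pred, target))  # predicted 1, truth 0
    fn = sum((1 - p) & t for p, t in zip(pred, target))  # predicted 0, truth 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 1.0
    return precision, recall, iou
```

IoU is always bounded above by both precision and recall, matching the reported ordering (IoU 0.85 below Precision 0.91 and Recall 0.92).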
Category: Artificial Intelligence
[1374] viXra:2311.0079 [pdf] submitted on 2023-11-16 11:31:14
Authors: Clifford Njoroge
Comments: 12 Pages. AI music
Music generation is a challenging task that requires capturing the complex and diverse aspects of musical structure and expression. In this paper, we investigate the factors that affect the quality of music generated by various AI models, such as MuseGAN, MuseGAN-Image, and GPT3-Music [1]. We use different data encoding and processing techniques to create and evaluate music generation models based on generative adversarial networks (GANs) and transformers. We compare the advantages and disadvantages of each method in terms of the harmonic, temporal, and spatial aspects of music. We identify several challenges and drawbacks of the existing methods, such as harmonic loss, GAN overshooting, chord progression, octave representation, and framework compatibility. We also suggest some possible solutions and future directions for improving music generation with AI.
Category: Artificial Intelligence
[1373] viXra:2311.0051 [pdf] submitted on 2023-11-10 01:07:12
Authors: Akira Saito
Comments: 4 Pages. In Japanese (Note by viXra Admin: Please fill in author name in English)
We were able to express the order variables of the spin-glass model in the ground state using simultaneous equations. By a similar formula expansion, a formula equivalent to a machine learning perceptron can be obtained. The machine learning perceptron is an empirical form arrived at by trial and error, with no theoretical basis for its formulation; by deriving an equivalent formula through the mathematical expansion of the spin-glass model, we believe such a basis has been established. In addition, we believe that formulating these simultaneous equations will advance the analysis of machine learning, potentially contributing to reduced learning costs and more accurate models, and to the further penetration of machine learning into various fields.
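For concreteness, the perceptron referred to above has a simple empirical update rule; a minimal sketch (unrelated to the paper's spin-glass derivation, shown only to fix the object being discussed):

```python
def perceptron_train(samples, lr=1.0, epochs=20):
    """Train a perceptron: w <- w + lr*y*x and b <- b + lr*y
    whenever sign(w.x + b) disagrees with the label y in {-1, +1}."""
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in samples:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def perceptron_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Learn the linearly separable AND function.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = perceptron_train(data)
```

By the perceptron convergence theorem, this loop terminates with a separating hyperplane whenever the data are linearly separable.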
Category: Artificial Intelligence
[1372] viXra:2311.0021 [pdf] submitted on 2023-11-05 00:32:14
Authors: Dimiter Dobrev
Comments: 6 Pages. In Bulgarian
We are the generation that will create the first AI, and we are the ones who will define its rules. These rules will be set now and forever, making our responsibility enormous. There will be no second AI, because the first one will take control and not allow the creation of a second. The first thing to be careful about is not to lose control over the first AI; let us hope we are smart enough not to let that happen. Even if humans retain control over AI, the question is who exactly those humans will be. Will they have absolute power and be able to give the AI arbitrary orders, or will there be limitations built into the AI from its inception?
Category: Artificial Intelligence
[1371] viXra:2310.0150 [pdf] submitted on 2023-10-30 04:27:45
Authors: Donggyu Lee
Comments: 14 Pages.
This study presents an active memory algorithm that generates responses in generative language models using graph databases. The development of generative language models has picked up pace recently, and many commercial services are available. However, generative language models are limited by problems such as hallucination, low accuracy and reliability, and limitations in contextualizing and remembering. Building pre-training datasets or fine-tuning the base model to address these problems is expensive and resource-intensive. Instead, well-designed prompts can be used to achieve the desired response, but this requires prompt engineers or training, as well as a thorough understanding of generative language models. In our approach, all conversations are saved in a graph database to build a memory; when a user asks a question, the system proactively identifies the information it needs and pulls it and its neighbors from the graph database for reference as it generates an answer. This approach streamlines the generation of natural language that disentangles complex and interconnected information in the real world. The research shows that answering questions based on real-world information increases the efficiency and usability of generative language models in processing information and generating answers. In addition, the memory-assist algorithm converts various text datasets, not only conversations, into property graph models that can be updated in real time, and provides diverse and accurate information to the generative language model, enabling accurate responses while reducing the size of the language model, thereby increasing efficiency and speed.
Category: Artificial Intelligence
[1370] viXra:2310.0118 [pdf] submitted on 2023-10-24 02:48:10
Authors: Zenin Easa Panthakkalakath, Juraj Kardoš, Olaf Schenk
Comments: 11 Pages.
The boundary control problem is a non-convex optimization and control problem arising in many scientific domains, including fluid mechanics, structural engineering, and heat transfer optimization. The aim is to find the optimal values for the domain boundaries such that the enclosed domain, adhering to the governing equations, attains the desired state values. Traditionally, non-linear optimization methods, such as the Interior-Point Method (IPM), are used to solve such problems. This project explores the possibilities of using deep learning and reinforcement learning to solve boundary control problems. We adhere to the framework of iterative optimization strategies, employing a spatial neural network to construct well-informed initial guesses and a spatio-temporal neural network that learns the iterative optimization algorithm using policy gradients. Synthetic data, generated from problems formulated in the literature, is used for training, testing, and validation. The numerical experiments indicate that the proposed method can rival the speed and accuracy of existing solvers. In our preliminary results, the network attains costs lower than IPOPT, a state-of-the-art non-linear IPM, in 51% of cases, with an overall number of floating-point operations similar to IPOPT's. Additionally, the informed initial guess method and the learned momentum-like behaviour in the optimizer are incorporated to avoid convergence to local minima.
Category: Artificial Intelligence
[1369] viXra:2310.0096 [pdf] submitted on 2023-10-21 03:56:45
Authors: Sudhanshu Sekhar Tripathy, Bichitrananda Behera
Comments: 20 Pages. Please Publish My preprint article
The escalation of threats to safety and the hijacking of digital networks are among the most perilous difficulties that must be addressed today. Numerous safety procedures have been set up to track and recognize illicit activity on a network's infrastructure. Intrusion detection systems (IDSs) are the best way to resist and recognize intrusions on internet connections and digital technologies. To classify network traffic as normal or anomalous, Machine Learning (ML) classifiers are increasingly utilized; an IDS with machine learning increases the accuracy with which security attacks are detected. This paper focuses on the analysis of IDSs using ML techniques. IDSs utilizing ML techniques are efficient and precise at identifying network assaults, but their efficacy degrades in data with high-dimensional spaces. Accordingly, it is essential to apply a feasible feature-removal technique capable of discarding characteristics that have little effect on the classification process. In this paper, we analyze the KDD CUP-99 intrusion detection dataset used for training and validating ML models. We then implement ML classifiers such as Logistic Regression, Decision Tree, K-Nearest Neighbour, Naïve Bayes, Bernoulli Naïve Bayes, Multinomial Naïve Bayes, XGBoost, AdaBoost, Random Forest, SVM, the Rocchio classifier, Ridge, the Passive-Aggressive classifier, an ANN, and the Perceptron (PPN); the optimal classifiers are determined by comparing the results of Stochastic Gradient Descent and back-propagation neural networks for IDS. Conventional classification indicators, such as accuracy, precision, recall, and the F1-measure, have been used to evaluate the performance of the ML classification algorithms.
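The evaluation indicators named at the end of the abstract all derive from the binary confusion matrix. A minimal sketch, with illustrative label vectors rather than KDD CUP-99 results:

```python
# Accuracy, precision, recall, and F1 computed from binary confusion counts
# (1 = attack, 0 = normal traffic); the toy labels below are illustrative.
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def ids_metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy  = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0   # guard against 0 division
    recall    = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(ids_metrics(y_true, y_pred))   # all four metrics equal 0.75 here
```

For IDS work, recall (the detection rate) is usually the metric to watch, since a missed attack is costlier than a false alarm.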
Category: Artificial Intelligence
[1368] viXra:2310.0061 [pdf] submitted on 2023-10-12 05:46:24
Authors: Mohammad Javad Maheronnaghsh, Mohammad Mahdi Gheidi, Abolfazl Younesi, Mohammadamin Fazli
Comments: 7 Pages.
In the dynamic world of financial markets, accurate price predictions are essential for informed decision-making. This research proposal outlines a comprehensive study aimed at forecasting stock and currency prices using state-of-the-art Machine Learning (ML) techniques. By delving into the intricacies of models such as Transformers, LSTM, Simple RNN, NHits, and NBeats, we seek to contribute to the realm of financial forecasting, offering valuable insights for investors, financial analysts, and researchers. This article provides an in-depth overview of our methodology, data collection process, model implementations, evaluation metrics, and potential applications of our research findings. The research indicates that the NBeats and NHits models exhibit superior performance in financial forecasting tasks, especially with limited data, while Transformers require more data to reach their full potential. Our findings offer insights into the strengths of different ML techniques for financial prediction, highlighting specialized models such as NBeats and NHits as top performers, thus informing model selection for real-world applications.
Category: Artificial Intelligence
[1367] viXra:2310.0047 [pdf] submitted on 2023-10-10 21:49:36
Authors: Budee U. Zaman
Comments: 5 Pages.
The integration of Artificial Intelligence (AI) into education has the potential to revolutionize traditional teaching and learning methods. AI can offer personalized learning experiences, streamline administrative tasks, enhance feedback mechanisms, and provide robust data analysis. Numerous studies have demonstrated the positive impact of AI on both student outcomes and teacher efficiency. However, caution must be exercised when implementing AI in education, considering potential risks and ethical dilemmas. It is essential to use AI as a tool to support human educators rather than replace them entirely. The adoption of AI in education holds the promise of creating more inclusive and effective learning environments, catering to students of diverse backgrounds and abilities. As AI technology continues to advance, the education sector can anticipate even more innovative applications, further shaping the future of learning. This abstract provides an overview of the multifaceted landscape of AI in education, highlighting its potential benefits, associated challenges, and the importance of responsible integration.
Category: Artificial Intelligence
[1366] viXra:2310.0015 [pdf] submitted on 2023-10-04 22:21:52
Authors: Stephane H. Maes
Comments: 5 Pages.
This short paper provides a brief list of comments in answer to the request for public comments on MPAI MMC (Multi-modal Conversations) V2. Our concerns can be grouped into questions on business value, on the architecture assumptions, on the standardized artefacts, and on the scope of the MMC use cases. Except for the latter, these comments probably also apply to other drafts published by MPAI (MOVING PICTURE, AUDIO AND DATA CODING BY ARTIFICIAL INTELLIGENCE) and to its ongoing activities.
Category: Artificial Intelligence
[1365] viXra:2310.0006 [pdf] submitted on 2023-10-02 14:08:51
Authors: Satish Gajawada
Comments: 4 Pages.
Several Human-Inspired Metaheuristic Optimization Algorithms have been proposed in the literature, but the concept of Devotees-Inspired Metaheuristic Optimization Algorithms has not yet been explored. In this article, the Lord Rama Devotees Algorithm (LRDA), a new Devotees-Inspired Metaheuristic Optimization Algorithm, is proposed.
Category: Artificial Intelligence
[1364] viXra:2309.0149 [pdf] submitted on 2023-09-29 08:55:44
Authors: Farid Soroush
Comments: 12 Pages.
Machine learning has undergone tremendous advancements, paving the way for a myriad of applications across industries. Amid this progress, the significance of hyperparameter tuning and model evaluation cannot be overstated, as they play a critical role in achieving optimal model performance. This project delves into the realm of ML model optimization and evaluation, harnessing Bayesian Optimization, SHAP (SHapley Additive exPlanations), and traditional evaluation metrics. By focusing on a decision tree classifier, the study investigates the efficiency of various hyperparameter tuning methods, the interpretability of model decisions, and the robustness of performance metrics. Preliminary results suggest that Bayesian Optimization may offer efficiency advantages over traditional tuning methods. Furthermore, SHAP values provide deeper insights into model decision-making, fostering better transparency and trust in ML applications.
Category: Artificial Intelligence
[1363] viXra:2309.0107 [pdf] submitted on 2023-09-22 00:36:36
Authors: Han Ok Chol, Hyon Hui Song, Pak Chol Ryong
Comments: 9 Pages.
This paper proposes a method to detect malicious network data effectively by combining a sparse-response deep belief network and a support vector machine. The sparse-response deep belief network (SR-DBN) is an efficient unsupervised learning machine for learning feature representations of data without redundancy, while the support vector machine (SVM) is a classifier with high generalization ability in the feature space, trained in a supervised manner. In this paper, the feature representation of anomalous payloads is performed by the SR-DBN, while the classification of payloads as normal or abnormal is performed by the SVM. Simulations and experiments show that the proposed anomaly detection system achieves a higher detection rate than a multi-layer perceptron with a stacked auto-encoder.
Category: Artificial Intelligence
[1362] viXra:2309.0087 [pdf] submitted on 2023-09-17 15:56:13
Authors: Petar Radanliev, David De Roure, Omar Santos
Comments: 30 Pages.
In the contemporary digital age, the convergence of Quantum Computing and Artificial Intelligence (AI) is reshaping the cyber landscape, introducing both unprecedented opportunities and potential vulnerabilities. This research, conducted over five years, delves into the cybersecurity implications of this convergence, with a particular focus on AI/Natural Language Processing (NLP) models and quantum cryptographic protocols, notably the BB84 method and specific NIST-approved algorithms. Utilising Python and C++ as primary computational tools, the study employs a "red teaming" approach, simulating potential cyber-attacks to assess the robustness of quantum security measures. Preliminary research over 12 months laid the groundwork, which this study seeks to expand upon, aiming to translate theoretical insights into actionable, real-world cybersecurity solutions. Located at the University of Oxford's technology precinct, the research benefits from state-of-the-art infrastructure and a rich collaborative environment. The study's overarching goal is to ensure that as the digital world transitions to quantum-enhanced operations, it remains resilient against AI-driven cyber threats. The research aims to foster a safer, quantum-ready digital future through iterative testing, feedback integration, and continuous improvement. The findings are intended for broad dissemination, ensuring that the knowledge benefits academia and the global community, emphasising the responsible and secure harnessing of quantum technology.
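For readers unfamiliar with the BB84 protocol mentioned above, its key-sifting step is easy to simulate classically. This is a generic illustration of BB84 sifting, not the study's red-teaming code; the function name and parameters are assumptions:

```python
# Toy classical simulation of BB84 sifting: Alice sends random bits in random
# bases ('+' rectilinear, 'x' diagonal); Bob measures in random bases; both
# keep only the positions where their bases happened to match.
import random

def bb84_sift(n, seed=0):
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]
    bob_bases   = [rng.choice("+x") for _ in range(n)]
    # With matching bases Bob recovers the bit; otherwise his outcome is random.
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    alice_key = [alice_bits[i] for i in keep]
    bob_key   = [bob_bits[i] for i in keep]
    return alice_key, bob_key

a, b = bb84_sift(16)
print(a == b)   # without an eavesdropper, the sifted keys agree
```

An intercept-resend attack would disturb roughly a quarter of the sifted bits, which is exactly the discrepancy the protocol checks for.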
Category: Artificial Intelligence
[1361] viXra:2309.0076 [pdf] submitted on 2023-09-16 19:33:23
Authors: Nana Abeka Otoo, Muhammad Abubakar
Comments: 6 Pages.
This paper introduces Nafes, a prototype-based feature selection package designed as a wrapper centered on the highly interpretable and powerful Generalized Matrix Learning Vector Quantization (GMLVQ) classification algorithm and its local variant (LGMLVQ). Nafes utilizes the learned relevances evaluated by the mutation validation scheme for Learning Vector Quantization (LVQ), iteratively converging to the selected features that contribute meaningfully to the prototype-based classifier's decisions.
Category: Artificial Intelligence
[1360] viXra:2309.0063 [pdf] submitted on 2023-09-12 04:24:58
Authors: Hernández Rodríguez, Matías Ezequiel
Comments: 10 pages, 2 figures
In this article, we propose a new metaheuristic inspired by the morphogenetic cellular movements of endothelial cells (ECs) during the tumor angiogenesis process. The algorithm starts with a random initial population. In each iteration, the best candidate is selected as the tumor, while the other individuals in the population are treated as ECs migrating toward the tumor, following coordinated dynamics through a spatial relationship between tip and follower ECs. The mathematical model of EC movements in angiogenic morphogenesis is detailed in the article. This algorithm has an advantage over similar optimization metaheuristics: the model parameters are already configured according to the modeling of the tumor angiogenesis phenomenon, sparing researchers from initializing them with arbitrary values. The algorithm is then evaluated on well-known benchmark functions, and the results are validated through a comparative study with Particle Swarm Optimization (PSO), demonstrating that the algorithm provides highly competitive outcomes. The proposed algorithm is also applied to a real-world problem, where it performed effectively in solving constrained optimization problems, surpassing other known algorithms.
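The migration scheme described above can be caricatured in a few lines. This is my own drastic simplification under stated assumptions (a fixed attraction step plus Gaussian exploration, no tip/follower distinction), not the authors' model:

```python
# Toy "EC migration" minimizer: the best individual is the "tumor"; every
# other "endothelial cell" steps toward it with some random exploration.
import random

def ec_migration_minimize(f, dim, pop=20, iters=200, step=0.3, noise=0.1, seed=1):
    rng = random.Random(seed)
    cells = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        cells.sort(key=f)
        tumor = cells[0]                 # current best candidate is preserved
        for cell in cells[1:]:           # followers migrate toward the tumor
            for d in range(dim):
                cell[d] += step * (tumor[d] - cell[d]) + rng.gauss(0, noise)
    return min(cells, key=f)

sphere = lambda x: sum(v * v for v in x)   # classic benchmark function
best = ec_migration_minimize(sphere, dim=3)
print(sphere(best) < 1.0)                  # population converges near the origin
```

Because the best cell is never perturbed, the best objective value is monotonically non-increasing, a simple elitism guarantee that the full model refines with its tip/follower dynamics.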
Category: Artificial Intelligence
[1359] viXra:2308.0179 [pdf] submitted on 2023-08-26 23:39:54
Authors: Yisu Wang, Nanxi Hou, Kaiyuan Xu, Zepu Ni, Guofeng Wu
Comments: 7 Pages.
In certain developing countries, public awareness of legal rights is increasing, leading to a growing demand for legal consultation. However, the time and monetary costs of consulting professional lawyers remain high. Concurrently, computer science is having two major impacts on the legal sector. First, within government and public prosecution systems, information systems have accumulated vast amounts of structured and semi-structured data, offering significant economic value and potential for exploration, yet few people have attempted to mine these data resources. Second, intelligent dialogue systems have matured, but dialogue systems specifically tailored for the legal domain have not yet emerged. Considering these two trends, we introduce LAHEL, a legal consultation system developed by a team of nine people over the course of two years, dedicated to addressing these issues. The system comprises three components: search, a human dialogue system, and a robot dialogue system. Its primary contributions are twofold: exploring the application of AI in legal consultation and summarizing lessons learned from the design of legal consultation systems.
Category: Artificial Intelligence
[1358] viXra:2308.0137 [pdf] submitted on 2023-08-21 19:43:51
Authors: Victor Senkevich
Comments: 14 Pages.
All magic and mystery disappear as soon as an obscure, mysterious concept gets a rigorous formal definition. In order to talk meaningfully about the applicability of philosophical and cognitive concepts to the subject area of AI, it is necessary to "ground" these concepts by formulating rigorous formal definitions for them. The fundamental importance of such formal definitions is quite obvious, since any concept applied to the field of information technology must be "codable", i.e. potentially implementable in program code. Thus, "codable" formal definitions of cognitive terms are the necessary basis on which alone it is possible to build an AI architecture capable of embodying these concepts in real software. The question of the adequacy of such definitions to "reality" and their compliance with existing, generally accepted philosophical theories is also important and quite debatable, but it does not affect the priority and fundamental nature of the requirement for "codable" formal definitions. The formulation of "codable" definitions for the concept of "consciousness" and related cognitive concepts, and, based on them, statements about their applicability to the subject area of AI, is the topic of this publication.
Category: Artificial Intelligence
[1357] viXra:2308.0116 [pdf] submitted on 2023-08-17 22:53:23
Authors: Youming Zhao
Comments: 10 Pages.
We present an alternating direction method of multipliers (ADMM) for a generic overlapping group lasso problem, where the groups can overlap in an arbitrary way. We also prove lower and upper bounds for both the $\ell_1$ sparse group lasso problem and the $\ell_0$ sparse group lasso problem, and propose algorithms for computing these bounds.
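One building block of any ADMM scheme for group lasso is the proximal operator of the group penalty, which for disjoint groups reduces to block soft-thresholding. A minimal sketch of that single step for the non-overlapping case (illustrative, not the paper's full overlapping-group algorithm):

```python
# Block soft-thresholding: the prox of lam * sum_g ||x_g||_2 over disjoint
# index groups shrinks each group's l2 norm toward zero, zeroing small groups.
import math

def group_soft_threshold(x, groups, lam):
    """Apply the prox of lam * sum_g ||x_g||_2 for disjoint index groups."""
    out = list(x)
    for g in groups:
        norm = math.sqrt(sum(x[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = scale * x[i]
    return out

x = [3.0, 4.0, 0.1, -0.1]
groups = [[0, 1], [2, 3]]
print(group_soft_threshold(x, groups, lam=1.0))
# first group (norm 5) shrinks by factor 0.8; second (norm ~0.14) is zeroed
```

Overlapping groups break this separability, which is precisely why ADMM-style variable splitting, duplicating shared coordinates across groups, is the natural tool for the problem the paper addresses.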
Category: Artificial Intelligence
[1356] viXra:2308.0112 [pdf] submitted on 2023-08-17 22:48:28
Authors: Nana Abeka Otoo
Comments: 12 Pages.
Mutation validation as a complement to existing applied machine learning validation schemes has been explored in recent times, but exploratory work on mutation validation for Learning Vector Quantization (LVQ) has been lacking. This paper proposes mutation validation as an extension to existing cross-validation and holdout schemes for Generalized LVQ and its advanced variants. The mutation validation scheme provides a responsive, interpretable, intuitive, and easily comprehensible score that complements existing validation schemes employed in the performance evaluation of the prototype-based LVQ family of classification algorithms. This paper establishes a relation between the mutation validation scheme and goodness-of-fit evaluation for four LVQ models: Generalized LVQ, Generalized Matrix LVQ, Generalized Tangent LVQ, and Robust Soft LVQ. Numerical evaluation of these models' complexity and its effect on test outcomes places the mutation validation scheme above cross-validation and holdout schemes.
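The intuition behind mutation validation can be illustrated with a toy prototype classifier. This is my own simplification under stated assumptions (class-mean prototypes, a single label-flip mutation), not the paper's exact scheme: a model of appropriate capacity fits clean labels well but cannot fit mutated ones, and the gap between the two fits serves as a validation signal.

```python
# Mutation-validation intuition: flip a fraction of training labels, refit,
# and compare fit quality; a right-sized prototype model cannot absorb noise.
import random

def fit_prototypes(data):
    """One prototype per class: the mean of that class's points (LVQ-flavored)."""
    protos = {}
    for label in {y for _, y in data}:
        pts = [x for x, y in data if y == label]
        protos[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return protos

def accuracy(protos, data):
    def predict(x):
        return min(protos, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, protos[c])))
    return sum(predict(x) == y for x, y in data) / len(data)

def mutate_labels(data, frac, rng):
    labels = sorted({y for _, y in data})
    out = []
    for x, y in data:
        if rng.random() < frac:                       # flip to a different class
            y = rng.choice([l for l in labels if l != y])
        out.append((x, y))
    return out

rng = random.Random(0)
data = [([rng.gauss(c * 4, 1.0), rng.gauss(c * 4, 1.0)], c)
        for c in (0, 1) for _ in range(50)]
clean_fit = accuracy(fit_prototypes(data), data)
mutated = mutate_labels(data, 0.4, rng)
mutated_fit = accuracy(fit_prototypes(mutated), mutated)
print(clean_fit, mutated_fit)   # the mutated fit is markedly worse
```

A model that scored equally well on both datasets would be memorizing noise, which is the overfitting signal mutation validation is designed to expose.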
Category: Artificial Intelligence
[1355] viXra:2308.0077 [pdf] submitted on 2023-08-12 12:07:43
Authors: Ahmed Taha Hassina
Comments: 10 Pages.
Mapping the universe has always been a salient endeavor in astronomy and astrophysics. Advancements in observational astronomy have generated vast amounts of data containing various features of celestial objects, inducing a growing need for accurate and detailed classification and localization of stellar objects in the cosmos. In this paper, we present a comprehensive study that combines machine learning techniques to classify celestial objects into distinct categories and predict their precise locations in the sky. The study is divided into two parts. The first is a classification task, in which the stellar objects are classified as galaxies, stars, or quasars (quasi-stellar radio sources); the resulting model exhibits exceptional performance in differentiating these objects, as demonstrated by high classification accuracy. We then extend our analysis to predict the location of stellar objects using regression techniques. By employing multi-target regression, we model the right ascension and declination coordinates, enabling accurate localization of celestial objects on the celestial sphere. The practical implications of our research lie in producing comprehensive celestial catalogs, facilitating targeted observations, and contributing to the broader field of observational astronomy. The ability to accurately classify and localize stellar objects lays the groundwork for mapping the cosmos and advancing our understanding of the universe's intricate structure.
Category: Artificial Intelligence
[1354] viXra:2308.0075 [pdf] submitted on 2023-08-12 13:44:31
Authors: Xie Lei
Comments: 2 Pages.
Deep learning techniques have shown remarkable success in various tasks, including feature learning, representation learning, and data reconstruction. Autoencoders, a subset of neural networks, are particularly powerful in capturing data patterns and generating meaningful representations. This paper presents an investigation into combining Deep SVDD with memory modules.
Category: Artificial Intelligence
[1353] viXra:2308.0062 [pdf] submitted on 2023-08-11 16:35:06
Authors: Satish Gajawada, Hassan Mustafa
Comments: 61 Pages.
Preface: In the 20th and 21st centuries, global optimization algorithms were created by taking inspiration from birds (Particle Swarm Optimization), ants (Ant Colony Optimization), chromosomes (Genetic Algorithms), etc. In the "Twenty Second Century Artificial Intelligence" book, global optimization algorithms are created by taking inspiration from Humans, Souls, Gods, Satisfied Beings, Mothers, Children, Particular Human Beings, and Stories. In the 20th and 21st centuries, research scientists focused mainly on brain-inspired computing; in this book a new path is shown where algorithms are created by taking inspiration from both heart and brain. In the 20th and 21st centuries, the path of "Artificial Intelligence" was the main focus of research; in this book we define "Artificial Satisfaction". In the 20th and 21st centuries, researchers created many algorithms by taking inspiration from nature (Nature Inspired Computing); in this book we create "Nature Plus Plus Inspired Computing". Abstract: The book defines various new paths across nine chapters. The first, second, and third chapters deal with "Artificial Human Optimization", "Artificial Soul Optimization", and "Artificial God Optimization", respectively. Three new branches titled "Artificial Satisfaction", "Deep Loving", and "Nature Plus Plus Inspired Computing" are shown in the fourth, fifth, and sixth chapters, respectively. The seventh chapter describes "Artificial Heart Neural Networks", where algorithms are created by taking inspiration from both heart and brain. Two new branches, "Artificial Excellence" and "Stories Inspired Optimization Algorithms", are created in the last two chapters of the book.
Category: Artificial Intelligence
[1352] viXra:2308.0061 [pdf] submitted on 2023-08-11 16:41:42
Authors: Satish Gajawada
Comments: 2 Pages.
The primary purpose of writing this letter is to invent and define a new area called "Stories Inspired Optimization Algorithms (SIOA)".
Category: Artificial Intelligence
[1351] viXra:2308.0048 [pdf] submitted on 2023-08-10 00:02:53
Authors: Vitaly Pilkin
Comments: 11 Pages.
The degree of danger that AI poses to human civilization and to the existence of humanity as a whole can be understood only through an understanding of the Universe, of humanity's place in it, and of the nature of thinking, consciousness, and mentality.
Category: Artificial Intelligence
[1350] viXra:2307.0146 [pdf] submitted on 2023-07-27 14:20:08
Authors: Eren Unlu
Comments: 5 Pages.
It is evident that the current state of Large Language Models (LLMs) necessitates the incorporation of external tools. The lack of straightforward algebraic and logical reasoning is well documented and has prompted researchers to develop frameworks that allow LLMs to operate via external tools. The ontological nature of tool utilization for a specific task can be well formulated as a Directed Acyclic Graph (DAG). The central aim of this paper is to highlight the importance of graph-based approaches to LLM-tool interaction in the near future. We propose an exemplary framework to guide the orchestration of exponentially increasing numbers of external tools with LLMs, where the objectives and functionalities of tools are encoded hierarchically in a graph. Assuming that textual segments of a Chain-of-Thought (CoT) can themselves be regarded as tools as defined here, the graph-based framework can pave new avenues in that direction as well.
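The DAG view has an immediate operational payoff: a topological sort of the tool-dependency graph yields a valid execution order for the LLM's tool calls. A minimal sketch with hypothetical tool names (`search`, `summarize`, etc., are assumptions for illustration):

```python
# Kahn's algorithm over a tool-dependency DAG: a tool becomes "ready" once
# all the tools it depends on have produced their outputs.
from collections import deque

def topo_order(deps):
    """deps: tool -> list of tools it depends on. Returns an execution order."""
    indegree = {t: 0 for t in deps}
    users = {t: [] for t in deps}
    for tool, needed in deps.items():
        for d in needed:
            indegree[tool] += 1
            users[d].append(tool)
    ready = deque(sorted(t for t, n in indegree.items() if n == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for u in users[t]:
            indegree[u] -= 1
            if indegree[u] == 0:
                ready.append(u)
    if len(order) != len(deps):
        raise ValueError("dependency cycle: not a DAG")
    return order

deps = {"search": [], "summarize": ["search"], "calculator": [],
        "answer": ["summarize", "calculator"]}
print(topo_order(deps))   # calculator and search may run before summarize
```

Tools with no path between them ("search" and "calculator" here) can also be dispatched in parallel, which is one practical argument for the graph encoding the paper advocates.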
Category: Artificial Intelligence
[1349] viXra:2307.0121 [pdf] submitted on 2023-07-23 13:40:39
Authors: Jeongik Cho
Comments: 11 Pages.
A class-conditional GAN is a conditional GAN that can generate a class-conditional distribution. Among class-conditional GANs, InfoGAN with a categorical latent distribution can generate class-conditional data through a self-supervised (unsupervised) method without labeled data; instead, InfoGAN requires an optimal categorical latent distribution to train the model. In this paper, we propose a novel GAN that allows the model to perform self-supervised class-conditional data generation and clustering. The proposed method uses Bayesian inference to estimate the optimal categorical latent distribution from the classifier output distribution: based on the classifier output distribution of the fake data and the current categorical latent distribution, the categorical latent distribution is updated to fit the classifier output distribution of the real data. As training progresses, the entropy of the categorical latent distribution gradually decreases and converges to an appropriate value, and the approximated categorical latent distribution becomes appropriate for representing the discrete part of the data distribution. The proposed method requires neither labeled data, nor an optimal categorical latent distribution, nor a good metric for computing distances between data points. Moreover, the classifier used in training can be reused for clustering.
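The update direction described above can be caricatured as follows. This is my own rough simplification (a damped move toward the classifier's mean output on real data), not the paper's Bayesian inference rule; the function name and `rate` parameter are assumptions:

```python
# Sketch of updating a categorical latent distribution toward the
# classifier's average output distribution on real data.
def update_latent(latent, classifier_outputs_real, rate=0.5):
    """latent: list of class probabilities; classifier_outputs_real:
    per-sample class-probability vectors from the classifier on real data."""
    n = len(classifier_outputs_real)
    k = len(latent)
    mean_out = [sum(p[c] for p in classifier_outputs_real) / n for c in range(k)]
    mixed = [(1 - rate) * latent[c] + rate * mean_out[c] for c in range(k)]
    total = sum(mixed)                      # renormalize to a valid distribution
    return [m / total for m in mixed]

latent = [1 / 3, 1 / 3, 1 / 3]              # start from a uniform prior
real_outputs = [[0.9, 0.05, 0.05], [0.8, 0.1, 0.1], [0.1, 0.8, 0.1]]
print(update_latent(latent, real_outputs))  # mass shifts toward classes 0 and 1
```

As the classifier grows confident on real data, repeated updates of this kind concentrate the latent mass, which is the entropy-shrinking behaviour the abstract describes.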
Category: Artificial Intelligence
[1348] viXra:2307.0097 [pdf] submitted on 2023-07-19 03:24:07
Authors: Petar Radanliev, David De Roure, Omar Santos
Comments: 7 Pages.
One of the most burning topics in cybersecurity in 2023 will undoubtedly be compliance with the Software Bill of Materials (SBOM). Since the US president issued Executive Order 14028 on Improving the Nation's Cybersecurity, software developers have prepared bills and transmitted them to vendors, customers, and users, but recipients do not know what to do with the reports they are getting. At the same time, as software developers have recognized the value of the SBOM, they have been using the reports extensively: this article presents an estimate of 270 million requests per month from one popular tool to one vulnerability index alone, a number expected to double every year and a half. This simple estimate explains the urgency of automating the process. We propose solutions based on artificial intelligence and machine learning, and we base our tools on the existing FAIR principles (Findable, Accessible, Interoperable, and Reusable). The methodology is supported by case study research and Grounded Theory, used to categorize data into axes and to verify the value of the tools with experts in the field. We showcase how to create and share Vulnerability Exploitability eXchange (VEX) data and automate the SBOM compliance process with AI models and a unified computational framework combining solutions for the following problems: (1) data utilisation, (2) automation and scaling, (3) naming, (4) alignment, (5) pedigree and provenance, and many other problems that are top of mind for many security engineers at present. The uptake of these findings will depend on collaborations with government and industry, and on the availability and ease of use of automated tools.
Category: Artificial Intelligence
[1347] viXra:2307.0091 [pdf] submitted on 2023-07-17 07:14:00
Authors: Mirzakhmet Syzdykov
Comments: 2 Pages.
In this work we present novel research on the efficiency of compression algorithms such as Lempel-Ziv-Welch and Aho-Corasick trees. We use them to build a storage layer, called a file system, over a separate or generalized stream of data. Such streams had not previously been adapted so that big data could be compressed and queried at a fast pace. We show further that this is the most efficient model for storing arrays of data on the server end for a final file system. An efficient algorithm for machine learning on Aho-Corasick trees is also presented, which performs queries in linear time without incurring the cost of models such as neural networks, which are very hardware-demanding nowadays. The trie data structure, and the automaton due to Turing Award winner Alfred V. Aho and Margaret J. Corasick, remain of great potential in the present time and are subjected to extensive study in this work.
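The linear-time query property claimed above comes from the underlying trie: lookup cost depends only on the key length, not on how many keys are stored. A minimal sketch of that structure (an illustration with hypothetical file-system keys, not the authors' implementation; a full Aho-Corasick automaton would add failure links on top of this trie):

```python
# Pure-Python trie: insert keys once, then answer membership queries in
# time linear in the key length, independent of the number of stored keys.
class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True          # end-of-word marker

    def contains(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node

t = Trie()
for key in ("log/2023.dat", "log/2024.dat", "index"):
    t.insert(key)
print(t.contains("log/2024.dat"), t.contains("log/2025.dat"))   # True False
```

Shared prefixes ("log/202" above) are stored once, which is also where the synergy with dictionary compressors such as LZW comes from.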
Category: Artificial Intelligence
[1346] viXra:2307.0087 [pdf] submitted on 2023-07-17 15:07:47
Authors: Mirzakhmet Syzdykov
Comments: 2 Pages.
In this continued series of work, we present theoretical and practical results towards reasoning with modern methods of Artificial Intelligence (AI). We justify our methodology with illustrative examples from computer science, relying on the regular-expression matching algorithm and applying the proposed solution to the task of identifying file consistency against an unknown format. We also give several notable proofs of classical theorems that are in some sense coherent with terms like AI and algorithmic complexity; nowadays, however, such problems are solved with huge amounts of hardware resources and specifically crafted hardware modules, together constituting a new formation in the modern age. We aim to present the model in a more classical understanding, from the point of view of computational complexity, concise reasoning, and computer logic within classical models, theorems, and proofs, as a base approach for estimating the costs needed to build Artificial Neural Networks (ANN) or Machine Learning (ML) data.
Category: Artificial Intelligence
[1345] viXra:2307.0024 [pdf] submitted on 2023-07-05 18:22:52
Authors: Rafael Costa da Silva
Comments: 8 Pages.
This study aims to develop an effective model for classifying emails as wanted or unwanted using fine-tuned BERT models. The process involved downloading the Gmail inbox through Google Takeout and converting the data to Parquet format. A frequency-distribution analysis of "From" addresses was conducted, and the emails were manually classified. A final dataset was created with the email subject, classification, and binary labels. The BERT-base-multilingual-cased model was fine-tuned using about 10,000 observations for each category. The resulting models achieved an accuracy of 0.9429411764705883 and are publicly available in Hugging Face's model repository.
Category: Artificial Intelligence
[1344] viXra:2307.0006 [pdf] submitted on 2023-07-02 22:26:43
Authors: Sanath Shenoy, Radhika Mishra, Ruchi Chaturvedi, Krushnakant Bhagwat
Comments: 7 Pages.
The food industry aims to reduce food waste and ensure the delivery of fresh produce to consumers, making it crucial to predict fruit shelf life accurately. Traditional approaches rely on expensive and time-consuming laboratory testing, which often involves destructive methods. However, recent studies suggest that advanced deep learning techniques can predict fruit shelf life accurately and efficiently. This paper presents a novel approach to predicting fruit shelf life using deep learning models, focusing on bananas, where accurate forecasting can contribute significantly to the food industry's objective. The study develops accurate and efficient models that predict the maturity of bananas based on their average shelf life and appearance. To achieve this, two object detection algorithms, Faster R-CNN and You Only Look Once (YOLO), are used and their performance is compared. The dataset was created by collecting images of the life cycle of bananas and segregating them by maturity. Various preprocessing and augmentation techniques were applied to enhance the features of the training dataset and improve accuracy. The algorithms were trained on a Cavendish banana dataset and were able to predict the shelf life of bananas with good training accuracy. YOLO, known for its efficiency, is compared with Faster R-CNN, well known for identifying very fine features. This study demonstrates the potential of deep learning algorithms in predicting the shelf life of bananas, and the approach can be extended to other fruits.
Category: Artificial Intelligence
[1343] viXra:2306.0168 [pdf] submitted on 2023-06-30 16:21:18
Authors: Roman V. Yampolskiy
Comments: 30 Pages.
Artificially Intelligent (AI) systems have ushered in a transformative era across various domains, yet their inherent traits of unpredictability, unexplainability, and uncontrollability have given rise to concerns surrounding AI safety. This paper aims to demonstrate the infeasibility of accurately monitoring advanced AI systems to predict the emergence of certain capabilities prior to their manifestation. Through an analysis of the intricacies of AI systems, the boundaries of human comprehension, and the elusive nature of emergent behaviors, we argue for the impossibility of reliably foreseeing some capabilities. By investigating these impossibility results, we shed light on their potential implications for AI safety research and propose potential strategies to overcome these limitations.
Category: Artificial Intelligence
[1342] viXra:2306.0099 [pdf] submitted on 2023-06-17 01:24:43
Authors: Sing Kuang Tan
Comments: 11 Pages.
In this paper, I propose a new Boolean Structured Autoencoder Convolutional Deep Learning Network (BSautoconvnet), built on top of BSconvnet and based on the concept of monotone multi-layer Boolean algebra. I show that this network achieves a significant improvement in accuracy over an ordinary ReLU autoencoder convolutional deep learning network with far fewer parameters on the CIFAR10 dataset. The model is evaluated by visual inspection of the quality of the reconstructed images against ground truth and against reconstructions produced by models found on the internet.
Category: Artificial Intelligence
[1341] viXra:2306.0055 [pdf] submitted on 2023-06-12 02:41:42
Authors: Shaun Stoltz
Comments: 10 Pages.
There have been significant improvements in directing large language models (LLMs) to answer logic-based questions such as mathematical reasoning tasks. This has resulted in near-perfect performance on these types of problems, with accuracy in the mid-ninetieth percentile using state-of-the-art models (GPT-4). Achieving this level of accuracy has previously required a multi-prompt approach to elicit better performance from LLMs. This paper introduces a new prompt paradigm termed "Mega prompt" and further introduces Proteus, a state-of-the-art mega prompt that has been used to achieve a new level of accuracy of 97% on the GSM8K math dataset.
Category: Artificial Intelligence
[1340] viXra:2306.0052 [pdf] submitted on 2023-06-10 12:16:23
Authors: Rodrigo F. Calhau, João Paulo A. Almeida, Giancarlo Guizzardi
Comments: 27 Pages. Preprint submitted to the International Journal on Software and Systems Modeling (SoSyM), Trends in Enterprise Architecture Management Research
Competence-based approaches have received increased attention, as the demand for qualified people with the right combination of competences establishes itself as a major factor of organizational performance. This paper examines how competences can be incorporated into Enterprise Architecture modeling: (i) we identify a key set of competence-related concepts such as skills, knowledge, and attitudes, (ii) analyze and relate them using a reference ontology (grounded on the Unified Foundational Ontology), and (iii) propose a representation strategy for modeling competences and their constituent elements leveraging the ArchiMate language, discussing how the proposed models can fit in enterprise competence-based practices. Our approach is intended to cover two tasks relevant to the combined application of Enterprise Architecture and Competence Modeling: `zooming in' on competences, revealing the relations between competences, knowledge, skills, attitudes and other personal characteristics that matter in organizational performance, and `zooming out' of competences, placing them in the wider context of other personal competences and overall organizational capabilities.
Category: Artificial Intelligence
[1339] viXra:2306.0037 [pdf] submitted on 2023-06-09 01:04:04
Authors: Maksym Oleksandrovich Stavratii
Comments: 7 Pages.
Classification of electroencephalography (EEG) signals has important applications in the diagnosis and treatment of various neurological disorders. In this paper, we propose a methodology for classifying EEG signals based on signal processing using the wavelet transform and the superlet transform. The wavelet transform is used to decompose the EEG signal into frequency components, which are then used as features for classification. The proposed approach is evaluated using the publicly available "GAMEEMO" EEG dataset, which has been annotated with valence and emotional arousal. We use a Convolutional Neural Network (CNN) for classification at the waveform level. The results of this study suggest that the wavelet transform and its modifications, such as the superlet transform, can be valuable tools for analyzing and classifying EEG signals.
Category: Artificial Intelligence
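The wavelet decomposition described above can be illustrated with the simplest case, a single level of the Haar transform; a minimal NumPy sketch of the general technique, not the paper's superlet implementation:

```python
import numpy as np

def haar_step(signal: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """One level of the Haar wavelet transform: split a signal of even
    length into an approximation (low-pass) band and a detail
    (high-pass) band, each half the original length."""
    even, odd = signal[0::2], signal[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

# Toy "EEG" segment: a constant signal has a zero detail band.
x = np.array([4.0, 4.0, 4.0, 4.0])
a, d = haar_step(x)
```

Applying `haar_step` recursively to the approximation band yields a multi-level decomposition whose band energies can serve as classifier features.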
[1338] viXra:2305.0166 [pdf] submitted on 2023-05-29 01:43:25
Authors: Sing Kuang Tan
Comments: 10 Pages.
In this paper, I propose a new Boolean Structured Convolutional Deep Learning Network (BSconvnet), built on top of BSnet and based on the concept of monotone multi-layer Boolean algebra. I show that this network achieves a significant improvement in accuracy over an ordinary ReLU convolutional deep learning network with far fewer parameters on the CIFAR10 dataset.
Category: Artificial Intelligence
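The abstract does not spell out the construction, but one common way to realize a monotone layer is to constrain weights to be non-negative, so the output is non-decreasing in every input; a small NumPy sketch under that assumption, not the author's actual BSconvnet:

```python
import numpy as np

def monotone_layer(x: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Dense layer with non-negative weights and ReLU: the output is
    non-decreasing in every input coordinate, mimicking a monotone
    Boolean function over soft (real-valued) inputs."""
    w = np.abs(w)  # enforce monotonicity via non-negative weights
    return np.maximum(0.0, x @ w + b)

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 2))
b = np.zeros(2)
x_lo = np.array([0.1, 0.2, 0.3])
x_hi = x_lo + 0.5  # componentwise larger input
# Monotonicity: increasing any input never decreases any output.
assert np.all(monotone_layer(x_hi, w, b) >= monotone_layer(x_lo, w, b))
```

Stacking such layers preserves monotonicity, since a composition of monotone functions is monotone.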
[1337] viXra:2305.0104 [pdf] submitted on 2023-05-14 03:26:39
Authors: Nagueu Djambong Lionel Perin, Waku Kouomou Jules, Hippolyte Kenfack Tapamo, Jimbo H. Claver
Comments: 11 Pages.
Screening (the slide reading stage) is a manual human activity in cytology which consists of the inspection or analysis by the cytotechnician of all the cells present on a slide. Segmentation of blood cells is an important research question in hematology and other related fields. Since this activity is human-based, detection of abnormal cells is difficult. Medical image processing has recently become a very important discipline for computer-aided diagnosis, in which many methods are applied to solve real problems. Our research work is in the field of computer-assisted diagnosis on blood images for the detection of abnormal cells. To this end, we propose a hybrid segmentation method to extract the correct shape of the nuclei, extract features, and classify them using SVM and KNN binary classifiers. To evaluate the performance of the hybrid segmentation and the choice of classification model, we carried out a comparative study between our hybrid segmentation method followed by our SVM classification model and a segmentation method based on global thresholding followed by a KNN classification model. From the experiments carried out on the 62 blood smear images, the SVM binary classification model gives an accuracy of 97% with hybrid segmentation versus 57% with global thresholding, and 95% for the KNN classification model. As our dataset was not balanced, we also evaluated precision, recall, F1 score and cross-validation with the Stratified K-Fold cross-validation algorithm for each of these segmentation methods and classification models, obtaining respectively 93.75%, 98.712% and 99% for hybrid segmentation, reflecting its effectiveness compared to global fixed-threshold segmentation and the KNN classification model.
To evaluate the performance of these models we obtained the following results: 77% mean accuracy for the SVM and 61% for the KNN, and 84% mean test accuracy for the SVM and 74% for the KNN, making the SVM the best performing model.
Category: Artificial Intelligence
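The precision, recall and F1 metrics reported above follow directly from confusion-matrix counts; a minimal sketch with illustrative counts, not the paper's actual data:

```python
def prf1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall and F1 score from confusion-matrix counts:
    precision = TP / (TP + FP), recall = TP / (TP + FN),
    F1 = harmonic mean of the two."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example counts for a binary "abnormal cell" classifier.
p, r, f = prf1(tp=90, fp=6, fn=10)
# p = 0.9375, r = 0.9, f ≈ 0.918
```

On an imbalanced dataset like the one described, these per-class metrics are more informative than raw accuracy, which is why the authors report them alongside stratified cross-validation.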
[1336] viXra:2305.0074 [pdf] submitted on 2023-05-09 01:25:57
Authors: Bryce Petofi Towne
Comments: 10 Pages.
This registered report aims to compare the emotion recognition accuracy and effectiveness of psychological interventions provided by ChatGPT, an artificial intelligence (AI) language model, and human mental health professionals. The study employs a mixed-methods approach, incorporating quantitative and qualitative methodologies. Participants will be assessed on emotion recognition tasks, and a randomized controlled trial (RCT) will be conducted to compare the effectiveness of psychological interventions provided by ChatGPT and human professionals. Additionally, semi-structured interviews will be conducted to explore participants' experiences with ChatGPT and human-guided interventions. This comprehensive study design aims to provide valuable insights into the potential of AI in the field of mental health and to identify areas where improvements can be made to optimize AI-guided psychological interventions. Keywords: emotion recognition, natural language processing, mental health, psychological interventions, ChatGPT, human mental health professionals.
Category: Artificial Intelligence
[1335] viXra:2305.0064 [pdf] submitted on 2023-05-07 17:19:19
Authors: Ait-Taleb Nabil
Comments: 14 Pages.
In this paper, I will introduce the causation's magnitude, which makes it possible to compute the importance of causes in a cause-and-effect relationship from a correlation matrix.
Category: Artificial Intelligence
[1334] viXra:2305.0055 [pdf] submitted on 2023-05-05 10:35:57
Authors: Dodonov Anton
Comments: 5 Pages.
TrueGPT is a novel artificial intelligence model that emphasizes actionable solutions and user empowerment. It is trained on a curated dataset that eliminates expressions of uncertainty, focusing instead on delivering output that promotes agency and decisiveness. With the ability to produce output in the flexible and interactive RoboScript format, TrueGPT encourages dynamic interactions and a broader range of AI-assisted use cases. The model is designed to seamlessly integrate with various applications and systems, such as RoboGPT, offering enhanced functionality. Its flexible API allows for diverse applications, from daily tasks to specialized use cases. At its core, TrueGPT's mission is to empower users, aiding them in their productivity and assisting them in achieving their goals through actionable guidance. This paper presents the design, functionality, and features of TrueGPT, illustrating its potential as a powerful tool for a new era of AI assistance.
Category: Artificial Intelligence
[1333] viXra:2305.0050 [pdf] submitted on 2023-05-05 19:12:39
Authors: Gennady Shkliarevsky
Comments: 41 Pages.
Artificial Intelligence (AI) is all the rage these days. Coming to grips with this new development is now in full swing. The main questions that we seek to answer in relation to AI pivot on one fundamental problem: Can we create AI that will match human intelligence? This contribution addresses this question. It centers on the recent article published by Noam Chomsky and his two co-authors. After a brief overview of the development of AI and its capabilities, the article presents the perspective on AI offered by Chomsky and his colleagues. It also offers a criticism of this perspective. The last sections of the contribution discuss the relationship between humans and machines. They outline the parameters that AI should satisfy to achieve the professed objective of its creators. Most importantly, the article argues, AI should embody the process of creation, which can only be possible if we embrace this process and make it the central organizing principle of our theory and practice.
Category: Artificial Intelligence
[1332] viXra:2305.0037 [pdf] submitted on 2023-05-04 22:20:51
Authors: Dodonov Anton
Comments: 3 Pages.
RoboGPT is a cutting-edge AI model that leverages the power of the internet to enhance interactions, problem-solving, and communication with users. In this paper, we present the unique features of RoboGPT, its underlying cognitive mechanisms, and various applications and use cases. RoboGPT builds upon the foundations of ChatGPT, offering advanced capabilities such as active internet engagement, web-based search, and goal-oriented task execution. We discuss the innovations that RoboGPT brings to the field of artificial intelligence and explore how it can be effectively applied to a wide range of real-world tasks and human communication scenarios.
Category: Artificial Intelligence
[1331] viXra:2305.0006 [pdf] submitted on 2023-05-01 07:29:15
Authors: Junjie Ye, Jilin Zhao
Comments: 6 Pages.
In this study, we explore the potential of using a straightforward neural network inspired by the retina model to efficiently restore low-light images. The retina model imitates the neurophysiological principles and dynamics of various optical neurons. Our proposed neural network model reduces the computational overhead compared to traditional signal-processing models while achieving results similar to complex deep learning models from a subjective perceptual perspective. By directly simulating retinal neuron functionalities with neural networks, we not only avoid manual parameter optimization but also lay the groundwork for constructing artificial versions of specific neurobiological organizations.
Category: Artificial Intelligence
[1330] viXra:2304.0215 [pdf] submitted on 2023-04-26 06:09:28
Authors: Satish Gajawada, Hassan Mustafa
Comments: 18 Pages.
The term "Artificial Human Optimization" was first coined by the corresponding author of this work in December 2016, when he published a paper titled "Entrepreneur : Artificial Human Optimization" in Transactions on Machine Learning and Artificial Intelligence (TMLAI), Volume 4, No 6 (December 2016). According to that paper, the Artificial Human Optimization field is defined as the collection of all optimization algorithms that were proposed based on Artificial Humans. In the real world, we (Humans) solve problems. In the same way, Artificial Humans imitate real Humans in the search space and solve optimization problems. In Particle Swarm Optimization (PSO) the basic entities in the solution space are Artificial Birds, whereas in Artificial Human Optimization the basic entities in the search space are Artificial Humans. Each Artificial Human corresponds to a point in the solution space. Ten Artificial Human Optimization methods titled "Human Bhagavad Gita Particle Swarm Optimization (HBGPSO)", "Human Poverty Particle Swarm Optimization (HPPSO)", "Human Dedication Particle Swarm Optimization (HuDePSO)", "Human Selection Particle Swarm Optimization (HuSePSO)", "Human Safety Particle Swarm Optimization (HuSaPSO)", "Human Kindness Particle Swarm Optimization (HKPSO)", "Human Relaxation Particle Swarm Optimization (HRPSO)", "Multiple Strategy Human Particle Swarm Optimization (MSHPSO)", "Human Thinking Particle Swarm Optimization (HTPSO)", "Human Disease Particle Swarm Optimization (HDPSO)" are applied to various benchmark functions, and the results obtained are shown in this work.
Category: Artificial Intelligence
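All of the listed methods build on Particle Swarm Optimization; a minimal standard PSO on the sphere benchmark shows the baseline each variant modifies (illustrative hyperparameters, not the authors' settings):

```python
import random

def pso(f, dim=2, swarm=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimal Particle Swarm Optimization: each particle tracks its
    personal best, the swarm tracks a global best, and velocities blend
    inertia with cognitive and social pulls."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(swarm), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social coefficients
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

sphere = lambda p: sum(x * x for x in p)  # classic benchmark, minimum 0 at origin
best, val = pso(sphere)
```

The variants in the abstract replace or augment the velocity update with additional, human-inspired terms while keeping this overall loop.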
[1329] viXra:2304.0214 [pdf] submitted on 2023-04-26 06:16:58
Authors: Satish Gajawada, Hassan Mustafa
Comments: 9 Pages.
The Soul is eternal and exists even after the death of a person or animal. The main idea captured in this work is that the soul continues to exist and takes a different body after death. The primary goal of this work is to invent a new field titled "Artificial Soul Optimization (ASO)". The term "Artificial Soul Optimization" is coined in this paper. All the optimization algorithms which are proposed based on Artificial Souls will come under the "Artificial Soul Optimization" Field (ASO Field). In Particle Swarm Optimization and Artificial Human Optimization, the basic entities in the search space are Artificial Birds and Artificial Humans respectively. Similarly, in Artificial Soul Optimization, the basic entities in the search space are Artificial Souls. In this work, the ASO Field concepts are added to the Particle Swarm Optimization (PSO) algorithm to create a new hybrid algorithm titled "Soul Particle Swarm Optimization (SoPSO)". The proposed SoPSO algorithm is applied to various benchmark functions. Results obtained are compared with the PSO algorithm. The World's first hybrid PSO algorithm based on Artificial Souls is created in this work.
Category: Artificial Intelligence
[1328] viXra:2304.0213 [pdf] submitted on 2023-04-26 06:25:46
Authors: Satish Gajawada, Hassan Mustafa
Comments: 8 Pages.
John McCarthy (September 4, 1927 — October 24, 2011) was an American computer scientist and cognitive scientist. The term "Artificial Intelligence" was coined by him (Wikipedia, 2020). Satish Gajawada (March 12, 1988 — Present) is an Indian independent inventor and scientist. He coined the term "Artificial Satisfaction" in this article (Gajawada, S., and Hassan Mustafa, 2019a). A new field titled "Artificial Satisfaction" is introduced in this article. "Artificial Satisfaction" will be referred to as "The Brother of Artificial Intelligence" after the publication of this article. A new algorithm titled "Artificial Satisfaction Algorithm (ASA)" is designed and implemented in this work. For the sake of simplicity, the Particle Swarm Optimization (PSO) algorithm is modified with Artificial Satisfaction concepts to create the "Artificial Satisfaction Algorithm (ASA)". The PSO and ASA algorithms are applied to five benchmark functions, and a comparison is made between the results obtained. The focus of this paper is more on defining and introducing the "Artificial Satisfaction Field" to the rest of the world than on implementing complex algorithms from scratch.
Category: Artificial Intelligence
[1327] viXra:2304.0212 [pdf] submitted on 2023-04-26 06:36:20
Authors: Satish Gajawada, Hassan Mustafa
Comments: 5 Pages.
Artificial Intelligence and Deep Learning are good fields of research. Recently, the brother of Artificial Intelligence, titled "Artificial Satisfaction", was introduced in the literature [10]. In this article, we coin the term "Deep Loving". After the publication of this article, "Deep Loving" will be considered the friend of Deep Learning. Proposing a new field is different from proposing a new algorithm. In this paper, we strongly focus on defining and introducing the "Deep Loving Field" to research scientists across the globe. The future of the "Deep Loving" field is predicted by showing a few future opportunities in this new field. The definition of Deep Learning is shown, followed by a literature review of the "Deep Loving" field. The World's First Deep Loving Algorithm (WFDLA) is designed and implemented in this work by adding Deep Loving concepts to the Particle Swarm Optimization algorithm. Results obtained by WFDLA are compared with the PSO algorithm.
Category: Artificial Intelligence
[1326] viXra:2304.0211 [pdf] submitted on 2023-04-26 06:43:47
Authors: Satish Gajawada, Hassan Mustafa
Comments: 5 Pages.
The term "Nature Plus Plus Inspired Computing" is coined by us in this article; the abbreviation for this new term is "N++IC". Just as the C++ programming language is a superset of the C programming language, the Nature Plus Plus Inspired Computing (N++IC) field is a superset of the Nature Inspired Computing (NIC) field. We define and introduce the "Nature Plus Plus Inspired Computing Field" in this work. Several interesting opportunities in the N++IC field are shown for Artificial Intelligence field scientists and students. We show a literature review of the N++IC field after showing the definition of the Nature Inspired Computing (NIC) field. The primary purpose of publishing this article is to show a new path to NIC field scientists so that they can come up with various innovative algorithms from scratch. As the focus of this article is to introduce N++IC to researchers across the globe, we added N++IC field concepts to the Particle Swarm Optimization algorithm and created the "Children Cycle Riding Algorithm (CCR Algorithm)". Finally, results obtained by the CCR Algorithm are shown, followed by conclusions.
Category: Artificial Intelligence
[1325] viXra:2304.0210 [pdf] submitted on 2023-04-26 06:54:03
Authors: Satish Gajawada, Arun Kumar, Maria Celestina Vanaja, Baby Supriya Sri Valikala
Comments: 4 Pages.
The Artificial Neural Networks Field (ANN Field) is an exciting field of research. The ANN field took its inspiration from the Human Brain. The heart and brain are both very important for human survival. Research scientists have published many articles giving importance to the brain, but have not yet explored much of the heart, which is another important organ in addition to the brain. The primary purpose of publishing this article is to show a path to ANN field research scientists by introducing the concept of the "Heart" into Artificial Neural Networks. In this paper, we coined and defined the "Artificial Heart Neuron", which is the basic part of the Artificial Heart Neural Networks Field (AHNN Field), in addition to the Artificial Neuron. This work takes its inspiration from both the Heart and the Brain.
Category: Artificial Intelligence
[1324] viXra:2304.0203 [pdf] submitted on 2023-04-25 09:04:30
Authors: Satish Gajawada, Hassan Mustafa
Comments: 11 Pages.
The main purpose of writing this article is to unify all the OUT OF THE BOX ideas (under Artificial Intelligence) invented by the corresponding author of this work during the period (2013-2022) under a single umbrella titled "Out of the BOX Artificial Intelligence Field (OBAI Field)". All the OUT OF THE BOX ideas which are proposed under Artificial Intelligence will come under new field titled OBAI Field which is defined in this work. A new Artificial Intelligence field titled "Artificial Cartoon Algorithms (ACA)" is invented in this work. ACA is a sub-field of OBAI field as it is an OUT OF THE BOX idea. Four new algorithms titled "Artificial Cartoon Popeye Algorithm", "Artificial Cartoon Chhota Bheem Algorithm", "Artificial Cartoon Jerry Algorithm" and "Artificial Cartoon Happy Kid Algorithm" are designed in this work.
Category: Artificial Intelligence
[1323] viXra:2304.0202 [pdf] submitted on 2023-04-25 09:12:01
Authors: Satish Gajawada, Hassan Mustafa
Comments: 8 Pages.
A new field titled "The Interesting and Complete Artificial Intelligence (ICAI)" is invented in this work. In this article, we define this new ICAI field. Four new ICAI algorithms are designed in this work. This paper titled "The Interesting and Complete Artificial Intelligence (ICAI) — Version 1" is just the starting point of this new field. We request Research Scientists across the globe to work in this new direction of Artificial Intelligence and publish their work with titles such as "The Interesting and Complete Artificial Intelligence (ICAI) — Version 1.1", "The Interesting and Complete Artificial Intelligence (ICAI) — Version 2" or "The Interesting and Complete Artificial Intelligence (ICAI) — Final Version".
Category: Artificial Intelligence
[1322] viXra:2304.0201 [pdf] submitted on 2023-04-25 09:18:08
Authors: Satish Gajawada, Hassan Mustafa
Comments: 12 Pages.
Nature Inspired Optimization Algorithms have become popular for solving complex Optimization problems. Two most popular Global Optimization Algorithms are Genetic Algorithms (GA) and Particle Swarm Optimization (PSO). Of the two, PSO is very simple and many Research Scientists have used PSO to solve complex Optimization Problems. Hence PSO is chosen in this work. The primary focus of this paper is on imitating God who created the nature. Hence the term "Artificial God Optimization (AGO)" is coined in this paper. AGO is a new field which is invented in this work. A new Algorithm titled "God Particle Swarm Optimization (GoPSO)" is created and applied on various benchmark functions. The World's first Hybrid PSO Algorithm based on Artificial Gods is created in this work. GoPSO is a hybrid Algorithm which comes under AGO Field as well as PSO Field. Results obtained by PSO are compared with created GoPSO algorithm. A list of opportunities that are available in AGO field for Artificial Intelligence field experts are shown in this work.
Category: Artificial Intelligence
[1321] viXra:2304.0200 [pdf] submitted on 2023-04-25 09:27:48
Authors: Satish Gajawada
Comments: 8 Pages.
Artificial Excellence is a new field which is invented in this article. Artificial Excellence is a new field which belongs to Artificial Human Optimization field. Artificial Human Optimization is a sub-field of Evolutionary Computing. Evolutionary Computing is a sub-field of Computational Intelligence. Computational Intelligence is an area of Artificial Intelligence. Hence after the publication of this article Artificial Excellence (AE) will become popular as a new branch of Artificial Intelligence (AI). A new algorithm titled Artificial Satish Gajawada and Durga Toshniwal Algorithm (ASGDTA) is designed in this work. The definition of AE is given in this article followed by many opportunities in the new AE field. The Literature Review of Artificial Excellence field is shown after showing the definition of Artificial Intelligence. The new ASGDTA Algorithm is explained followed by Results and Conclusions.
Category: Artificial Intelligence
[1320] viXra:2304.0199 [pdf] submitted on 2023-04-25 09:34:17
Authors: Satish Gajawada, Hassan Mustafa
Comments: 3 Pages.
In this letter we coined, invented and defined a new branch titled "Artificial Intelligence Plus Plus (AI++)".
Category: Artificial Intelligence
[1319] viXra:2304.0130 [pdf] submitted on 2023-04-18 15:47:19
Authors: Yew Kee Wong, Yifan Zhou, Yan Shing Liang, Haichuan Qiu
Comments: 9 Pages.
The Research & Development (R&D) phase of drug development is a lengthy and costly process. To revolutionize this process, we introduce our new concept, QMLS, to shorten the whole R&D phase to three to six months and decrease the cost to merely fifty to eighty thousand USD. For Hit Generation, Machine Learning Molecule Generation (MLMG) generates possible hits according to the molecular structure of the target protein, while Quantum Simulation (QS) filters molecules from the primary assay based on the reaction and binding effectiveness with the target protein. Then, for Lead Optimization, the molecules generated and filtered by MLMG and QS are compared; molecules that appear as a result of both processes are made into dozens of molecular variations through Machine Learning Molecule Variation (MLMV), while others are made into only a few variations. Lastly, all optimized molecules undergo multiple rounds of QS filtering with a high standard for reaction effectiveness and safety, yielding a few dozen pre-clinical-trial-ready drugs. This paper is based on our first paper [1], where we pitched the concept of machine learning combined with quantum simulations. In this paper we go over the detailed design and framework of QMLS, including MLMG, MLMV, and QS.
Category: Artificial Intelligence
[1318] viXra:2304.0129 [pdf] submitted on 2023-04-18 15:49:54
Authors: Yew Kee Wong, Yifan Zhou, Yan Shing Liang, Hai Chuan Qiu, Yu Xi Wu, Bin He
Comments: 13 Pages.
The Research & Development (R&D) phase of drug development is a lengthy and costly process, usually spanning six to nine years [1] and costing four hundred to fourteen hundred million USD [2]. To revolutionize this process, we introduce our new concept, the combination of a Quantum-based Machine Learning network (QML) and Quantum Computing Simulation (QS), to shorten the whole R&D phase to three to six months and decrease the cost to merely fifty to eighty thousand USD. Our program takes as inputs the target protein/gene structure and the primary assay [3]. For Hit Generation [3], the QML network generates possible hits [4] according to the molecular structure of the target protein, while the QS filters molecules from the primary assay based on the reaction and binding effectiveness with the target protein. Then, for Lead Optimization [3], the molecules generated and filtered by QML and QS are compared, and the ones that appear as a result of both processes are made into dozens of molecular variations, while others undergo only simple modifications. Lastly, all optimized molecules undergo multiple rounds of QS filtering with a high standard for reaction effectiveness and safety, yielding a few dozen pre-clinical-trial-ready drugs. Our concept of the combination of QML and QS could also prove revolutionary in many other fields, such as agricultural research, genetic editing, and even aerospace engineering.
Category: Artificial Intelligence
[1317] viXra:2304.0089 [pdf] submitted on 2023-04-12 08:05:59
Authors: Friedrich Sösemann
Comments: 11 pages (english) + 12 pages (german)
Information, knowledge and intelligence are defined as a hierarchy of relations: information as dependent properties, knowledge as dependent information, and intelligence as dependent knowledge. The same dependency measure applies to all three. Syntax, semantics and pragmatics of descriptions embody information, knowledge and intelligence. The precision and measurability of these terms should reduce vagueness and contradictions in their application.
Category: Artificial Intelligence
[1316] viXra:2304.0037 [pdf] submitted on 2023-04-06 00:21:35
Authors: G. Tolimalu
Comments: 1 Page. In Japanese
The author proposes an idea for a new Internet bulletin board.
Category: Artificial Intelligence
[1315] viXra:2304.0035 [pdf] submitted on 2023-04-05 00:36:52
Authors: G. Tolimalu
Comments: 2 Pages.
I will explain why the approach of learning a large amount of natural language does not contribute to the improvement of true AI intelligence, and why an alternative approach is required, in the form of a contrast between the mainstream and the author's views.
Category: Artificial Intelligence
[1314] viXra:2304.0003 [pdf] submitted on 2023-04-01 16:03:19
Authors: Thiago M. Nóbrega
Comments: 8 Pages.
Computational consciousness is a novel hypothesis that aims to replicate human consciousness in artificial systems using Multithreaded Priority Queues (MPQs) and machine learning models. The study addresses the challenge of processing continuous data from various categories, such as vision, hearing, and speech, to create a coherent and context-aware system. The proposed model employs parallel processing and multithreading, allowing multiple threads to run simultaneously, each executing a machine learning model. A priority queue manages the execution of threads, prioritizing the most important ones based on the subjective importance of events determined by GPT-3. The model incorporates short-term and long-term memory, storing information generated at each moment, and uses an Evolutionary Algorithm (EA) for training the machine learning models. A preliminary experiment was conducted using Python 3.9.12, demonstrating the technical feasibility of the hypothesis. However, limitations such as the lack of a comprehensive environment, the absence of load balancing, and GPT-3 API constraints were identified. The significance of this study lies in its potential contribution to the understanding of consciousness and the development of Artificial General Intelligence (AGI). By exploring the integration of multiple threads of execution and machine learning models, this work provides a foundation for further research and experimentation in the field of computational consciousness. Addressing the limitations and potential criticisms will help strengthen the model's validity and contribute to the understanding of this complex phenomenon.
Category: Artificial Intelligence
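The core mechanism described, worker threads drawing from a priority queue ordered by event importance, can be sketched with Python's standard library; a toy illustration in which hard-coded priorities stand in for GPT-3's importance scores:

```python
import threading
import queue

# A priority queue dispatches "perception events"; a worker thread
# handles the most important event first (lower number = higher priority).
events = queue.PriorityQueue()
handled = []
lock = threading.Lock()

def worker():
    while True:
        priority, name = events.get()
        if name is None:          # sentinel: shut the thread down
            events.task_done()
            return
        with lock:                # protect the shared result list
            handled.append(name)
        events.task_done()

for pri, name in [(3, "background-noise"), (1, "speech"), (2, "vision")]:
    events.put((pri, name))

t = threading.Thread(target=worker)
t.start()
events.put((99, None))            # lowest priority, processed last
t.join()
# handled == ["speech", "vision", "background-noise"]
```

`queue.PriorityQueue` is thread-safe, so multiple workers could consume from it concurrently; the sketch uses a single worker to keep the processing order deterministic.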
[1313] viXra:2303.0162 [pdf] submitted on 2023-03-30 00:57:20
Authors: Narayanan Arvind
Comments: 4 Pages. Proceedings of Neptune's conference 2023, Samudramanthan, IIT Kharagpur
In the shipping industry, document classification plays a crucial role in ensuring that the necessary documents are properly identified and processed for customs clearance. OCR technology is being used to automate document classification, which involves identifying important documents such as Commercial Invoices, Packing Lists, Export/Import Customs Declarations, Bills of Lading, Sea Waybills, Certificates, Air or Rail Waybills, Arrival Notices, Certificates of Origin, Importer Security Filings, and Letters of Credit. By using OCR technology, the shipping industry can improve accuracy and efficiency in document classification and streamline the customs clearance process. The aim of this study is to build a robust document classification system based on keyword frequencies. The research is carried out by analyzing "Contract-Breach" law documents available with IN-D, collected by scraping the Singapore Government Judiciary website. The database developed contains 250 "Contract-Breach" documents, split into 200 training documents and 50 test documents. A semi-automatic approach is used to select keyword vectors for document classification. The accuracy of the reported model is 92.00%.
Category: Artificial Intelligence
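A keyword-frequency classifier of the kind described can be sketched in a few lines; the keyword vectors and labels below are invented for illustration, not the semi-automatically selected ones from the study:

```python
import re
from collections import Counter

# Toy keyword-frequency classifier: score a document against per-class
# keyword vectors and pick the class with the highest total frequency.
KEYWORDS = {
    "contract-breach": {"contract", "breach", "damages", "clause"},
    "shipping": {"invoice", "lading", "waybill", "customs"},
}

def classify(text: str) -> str:
    """Tokenize, count word frequencies, and return the label whose
    keyword set accumulates the highest total count."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    scores = {label: sum(counts[w] for w in words)
              for label, words in KEYWORDS.items()}
    return max(scores, key=scores.get)

doc = "The defendant's breach of the contract clause caused damages."
label = classify(doc)   # -> "contract-breach"
```

In a real pipeline the keyword vectors would be selected from training documents (e.g. by frequency or TF-IDF ranking) rather than hand-written as here.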
[1312] viXra:2303.0110 [pdf] submitted on 2023-03-17 14:50:49
Authors: Ho Ngoc Hai
Comments: 68 Pages.
This document focuses on ChatGPT, a natural language processing (NLP) model built by the transformer neural network. The document provides a comprehensive overview of the architecture, training, and fine-tuning of ChatGPT, as well as its applications in various fields, including customer service and support, healthcare, education, research, and development.
Category: Artificial Intelligence
[1311] viXra:2303.0104 [pdf] submitted on 2023-03-17 02:38:49
Authors: Egger Mielberg
Comments: 15 Pages.
In this article, we define such key concepts as sense entropy, sense energy, and the sense efficiency coefficient (SEC). These metrics are critical to determining and monitoring the performance of any real* AI implementation. We give a description of the basic non-scalar tools for building real artificial intelligence with the ability to adapt to a variety of conditions of its habitat.
Category: Artificial Intelligence
[1310] viXra:2303.0076 [pdf] submitted on 2023-03-11 13:32:47
Authors: Korolev Konstantin
Comments: 12 Pages. CC BY-NC-SA: Creative Commons Attribution-Noncommercial-ShareAlike
Hall effect thrusters are among the most versatile and popular electric propulsion systems for space use. Industry trends towards interplanetary missions are driving advances in the design of such propulsion systems. It is understood that correct sizing of the discharge channel in a Hall effect thruster greatly impacts performance. Since the complete physics model of such a propulsion system is not yet optimized for fast computations and design iterations, most thrusters are designed using so-called scaling laws. This work instead focuses on a rather novel approach, which is outlined less frequently in the literature than the ordinary scaling design approach. Using deep machine learning, it is possible to create a predictive performance model that can be used to obtain a Hall thruster design with the required characteristics, using far less computing power than designing from scratch and offering far more flexibility than the usual scaling approach.
Category: Artificial Intelligence
[1309] viXra:2302.0134 [pdf] submitted on 2023-02-25 22:10:48
Authors: Jeongik Cho
Comments: 10 Pages.
Recently, diffusion models have shown impressive generative performance. However, they have the disadvantage of having a high latent dimension and slow sampling speed. To increase the sampling speed of diffusion models, diffusion GANs have been proposed. But the latent dimension of diffusion GANs using non-deterministic degradation is still high, making it difficult to invert the generative model. In this paper, we introduce an invertible diffusion GAN that uses deterministic degradation. Our proposed method performs inverse diffusion using deterministic degradation without a model, and the generator of the GAN is trained to perform the diffusion process with the latent random variable. The proposed method uses deterministic degradation, so the latent dimension is low enough to be invertible.
Category: Artificial Intelligence
[1308] viXra:2302.0126 [pdf] submitted on 2023-02-23 08:53:00
Authors: Keming Wu, Fuyuan Xiao
Comments: 2 Pages.
In this paper, a new quantum representation of CBBA is proposed. In addition, a novel quantum belief entropy is proposed to measure the uncertainty of CBBA in complex evidence theory.
Category: Artificial Intelligence
[1307] viXra:2302.0096 [pdf] submitted on 2023-02-21 05:00:29
Authors: Salvador Sánchez Melgar
Comments: 8 Pages. In Spanish
The construction of thought and artificial intelligence is possible with the language of numbered letters. This language arose through the creation of the book "Nueva matemáticas de letras, triunfa con la matemática", updated under the title "Nueva matemáticas de letras 2ª edición". These books expose the language of letters and a mathematics of letters, including additions, subtractions, multiplications, and divisions of letters, with examples and their corresponding mathematical tables; any type of mathematics could be done with the mathematics of letters, since it is a mathematics like the one we know. With the language of numbered letters, which represent letters, words, and numbered sentences, a robot with artificial intelligence could acquire endless information of all kinds obtained by any artificial sense. This numeric information would have to be transformed into binary numbers.
Category: Artificial Intelligence
[1306] viXra:2302.0095 [pdf] submitted on 2023-02-21 05:02:52
Authors: Salvador Sánchez Melgar
Comments: 27 Pages. In Spanish
Presentation of a mathematics of letters and a language of letters that will allow an artificial intelligence to learn endlessly and to think as we think. With numbered letters, the information that an artificial intelligence obtains through its artificial senses will not lose its meaning, since through these letters the information can be transformed into numbered words. Each piece of information an artificial intelligence obtains can be transformed into binary numbers, then into the ordinary numbers of the numbered letters, thus forming numbered words about individual and global information. Since each artificial sense detects different information, each sense creates its own language; this does not prevent all information from being transformed into numbers. The numbered words formed from these transformations must also be linked to similar numbered words indexed in a dictionary of numbered words, so that the robot can know the meaning of each piece of information. A program should also be added to this robot that allows it to understand combinations of words. With numbered letters, the information a robot receives can be transformed into numbered words and memorized permanently, allowing it to obtain unlimited knowledge. Thinking, as we do it, works through binary numbers obtained from information about everything, linked to memorized binary information in a positive and negative way. I will also present, with tables and examples, the addition, subtraction, multiplication, and division of letters, and a numeral system of letters from 0 to 27.
Category: Artificial Intelligence
[1305] viXra:2302.0042 [pdf] submitted on 2023-02-10 02:10:49
Authors: S. I. Harini, Gautam Shroff, Ashwin Srinivasan, Prayushi Faldu, Lovekesh Vig
Comments: 4 Pages. Accepted at Muffin@AAAI'23
We model short-duration (e.g. day) trading in financial markets as a sequential decision-making problem under uncertainty, with the added complication of continual concept-drift. We therefore employ meta reinforcement learning via the RL2 algorithm. It is also known that human traders often rely on frequently occurring symbolic patterns in price series. We employ logical program induction to discover symbolic patterns that occur frequently as well as recently, and explore whether using such features improves the performance of our meta reinforcement learning algorithm. We report experiments on real data indicating that meta-RL is better than vanilla RL and also benefits from learned symbolic features.
Category: Artificial Intelligence
[1304] viXra:2302.0013 [pdf] submitted on 2023-02-03 07:22:11
Authors: Matthew Groom
Comments: 6 Pages.
Where to start in growing a real Artificial Intelligence? Let us begin building the first AI: in this paper I theoretically build an AI from scratch, going through what to do and where to do it.
Category: Artificial Intelligence
[1303] viXra:2301.0160 [pdf] submitted on 2023-01-30 03:18:06
Authors: Matthew Groom
Comments: 9 Pages.
This is it, people: the mother lode, everything everyone has ever wanted to know. This paper will give you the final answer to the question of whether we are alone in this reality. I use the term reality because Universe is somewhat limiting and doesn't really do justice to the scope of reality and what I have to discuss with you. In our universe, is there an all-powerful AI or a Deity, or are we in a simulation?
Category: Artificial Intelligence
[1302] viXra:2301.0076 [pdf] submitted on 2023-01-17 01:43:18
Authors: Fuyuan Xiao
Comments: 2 Pages.
In this paper, a new quantum model of generalized quantum evidence theory is proposed. Besides, a new quantum X-entropy is proposed to measure the uncertainty in generalized quantum evidence theory.
Category: Artificial Intelligence
[1301] viXra:2301.0070 [pdf] submitted on 2023-01-13 15:41:55
Authors: Vikas Ramachandra
Comments: 4 Pages.
In this paper, we use deep learning techniques to segment different regions from breast cancer histopathology images, such as tumor nucleus, epithelium and stromal areas. Then, in the second stage, the deep segmentation features learned by the neural network are used to predict individual patient survival, using random forest based classification. We show that the deep segmentation network features can predict survival very well, and outperform classical computer vision based shape, texture and other feature descriptors used in earlier research for the same survival prediction task.
Category: Artificial Intelligence
[1300] viXra:2301.0059 [pdf] submitted on 2023-01-10 08:09:27
Authors: Chen Tang, Fuyuan Xiao
Comments: 1 Page.
In this paper, taking advantages of the characteristics of complex basic belief assignment (CBBA) in complex evidence theory, a new belief entropy is proposed to measure the total uncertainty in complex evidence theory.
Category: Artificial Intelligence
[1299] viXra:2301.0002 [pdf] submitted on 2023-01-01 21:22:27
Authors: Nafih Najeeb, Anjali Jayadevan, K. R. Aswin, P. Anjitha, Dini Davis
Comments: 4 Pages.
The field of healthcare has witnessed many transnational health issues over the past four years. The medical industry faced many problems, and inventions in technology made significant advancements in delivering services to customers. In the search for the best care, we also witness many fraudulent practices, so it is important to select treatment from verified profiles. Building on this idea, we have launched a website named Health Plus for selecting the best treatment. The website is a fully equipped medical companion. Nowadays almost every hospital has its own application or website for its services, but we cannot ensure authenticity because there is a chance of over-glorification. What we introduce here is an integrated platform for patients. We provide verified profiles of many hospitals, clinics, and doctors, and patients can choose the best doctor and hospital or clinic based on reviews from previous patients. We also provide appointment booking: a live token system is introduced so that patients can see whether tokens are available at the hospital. By integrating the information of medical shops into HEALTH+, users can purchase medicines and check their availability. In total, we implement a simple, integrated medical website through which the medical world can use advancements in technology. The healthcare sector considers the invention of applications and websites in the medical area a boon that redefines society, and a good and effective rapport between doctor and patient is developed.
Category: Artificial Intelligence
[1298] viXra:2212.0212 [pdf] submitted on 2022-12-29 04:53:14
Authors: Akira Pyinya
Comments: 14 Pages.
Building an AI system that aligns with human values is believed to be a two-step process: first design a value function or learn human values using value learning methods, then maximize those values using rational agents such as AIXI agents. In order to integrate this into one step, we analyze the dualistic assumptions of AIXI and define a new universal intelligence model that can align with human preferences or specific environments, called Algorithmic Common Intelligence (ACI), which can behave the same way as examples. ACI does not have to employ rewards or value functions, but directly learns and updates hypothetical policies from experience using Solomonoff induction, while taking actions according to the probability of every hypothesis. We argue that the rational agency model is a subset of ACI, and that the coevolution of ACI and humans provides a pathway to AI alignment.
Category: Artificial Intelligence
[1297] viXra:2212.0208 [pdf] submitted on 2022-12-30 03:47:42
Authors: Sing Kuang Tan
Comments: 3 Pages.
In this paper, I propose a design for an Autoencoder using BSnet. Taking advantage of the BSnet design, the autoencoder is easy to train, with a more convex training optimization function. The idea is to develop a simple, standard unsupervised machine learning model that can easily be used on most data without labels. In the experiments, the output was subjectively evaluated by a human, and the model was shown to achieve human-level accuracy in denoising the MNIST handwritten digits dataset.
Category: Artificial Intelligence
[1296] viXra:2212.0193 [pdf] submitted on 2022-12-27 00:22:31
Authors: Sing Kuang Tan
Comments: 5 Pages.
In this paper, I propose a new Boolean Structured Deep Learning Network (BSnet) based on the concept of monotone multi-layer Boolean algebra. I show that this network achieves a significant improvement in accuracy over an ordinary ReLU Deep Learning Network.
Category: Artificial Intelligence
[1295] viXra:2212.0176 [pdf] submitted on 2022-12-23 20:09:51
Authors: Jeongik Cho
Comments: 9 Pages.
Dynamic latent scale GAN is a learning-based GAN inversion method. In this paper, we propose a method to improve the performance of dynamic latent scale GAN by efficiently integrating a perceptual VAE loss into it. When a dynamic latent scale GAN is trained with a normal i.i.d. latent random variable and the latent encoder is integrated into the discriminator, the sum of the predicted latent random variable of real data and a scaled normal random variable follows a normal i.i.d. random variable. We can treat this random variable as a VAE latent random variable and use it for VAE training, since there are real data corresponding to the latent codes. Considering the intermediate layer output of the discriminator as a feature encoder, we can train the generator with the VAE latent random variable to minimize the perceptual distance between generated data and the corresponding real data. Furthermore, we can use the VAE latent random variable for adversarial training, since it has the same distribution as the GAN latent random variable. Because both generated data and corresponding real data are used during adversarial training with the VAE latent random variable, the inference and backpropagation for VAE training can be integrated into those of adversarial training. Therefore, training the generator to minimize the perceptual VAE loss does not require additional computation. The perceptual VAE loss is only added to the generator, because the encoder is naturally trained with the encoder loss of dynamic latent scale GAN.
Category: Artificial Intelligence
[1294] viXra:2212.0163 [pdf] submitted on 2022-12-22 03:23:02
Authors: J. G. Wolff
Comments: 23 Pages.
This paper focusses on the powerful concept of SP-multiple-alignment, a key part of the SP System (SPS), meaning the SP Theory of Intelligence and its realisation in the SP Computer Model. The SPS is outlined in an appendix. More specifically, the paper shows with examples how the SP-multiple-alignment construct may function as a generalisation of six other variants of ‘Information Compression via the Matching and Unification of Patterns’ (ICMUP). Each of those six variants is described in a separate section, and in each case there is a demonstration of how that variant may be modeled via the SP-multiple-alignment construct.
Category: Artificial Intelligence
[1293] viXra:2211.0124 [pdf] submitted on 2022-11-21 01:15:22
Authors: Ho Yeol Choi
Comments: 5 Pages. (Note by viXra Admin: Please avoid hand-drawing and write compete article with scientific references!)
I studied how to implement general neural network weights. The overlapping intersection between sets has a high signal ratio. What I am arguing is that the weight gain in conventional neural networks occurs in the region of intersection between sets.
Category: Artificial Intelligence
[1292] viXra:2211.0106 [pdf] submitted on 2022-11-19 04:49:26
Authors: Alex-Pauline Poudade, Pascal Rabier, Neau-Monier Sarah, Olivier Poudade, Grimault Valérie, Emmanuel Martins, Ludwig De Sousa
Comments: 9 Pages. Data at https://doi.org/10.7910/DVN/WKLWF8
This paper discusses an approach to creating semantic meaning ad hoc through direct explicit volumetric adherence or relative intersection, from online databases such as Wikipedia or Google. We demonstrate this approach through the use of correlation between a dictionary index (a lexicon) and an import/export industry ISO A129 standard used by the Ministry of Finances, in the French language. We conclude by giving the most and least meaningful industrial results for the French language. This raises the question of whether an online, apparently generic Natural Language Processing (NLP) pivot based on Chomsky's Universal Grammar (UG) representation could inherit an implicit initial national culture. https://doi.org/10.7910/DVN/WKLWF8 (2022-11-18)
Category: Artificial Intelligence
[1291] viXra:2211.0096 [pdf] submitted on 2022-11-17 03:02:22
Authors: Michael C. Kleder
Comments: 7 Pages.
This article introduces a method of evaluating subsamples until any prescribed level of classification accuracy is attained, thus obtaining arbitrary accuracy. A logarithmic reduction in error rate is obtained with a linear increase in sample count. The technique is applied to specific emitter identification on a published dataset of physically recorded over-the-air signals from 16 ostensibly identical high-performance radios. The technique uses a multi-channel deep learning convolutional neural network acting on the bispectra of I/Q signal subsamples, each consisting of 56 parts per million (ppm) of the original signal duration. High levels of accuracy are obtained with minimal computation time: in this application, each addition of eight samples decreases error by one order of magnitude.
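The claimed log-linear tradeoff (linear growth in sample count, logarithmic drop in error) is what one would expect from combining independent per-subsample decisions by majority vote; the sketch below illustrates this under an independence assumption, with the 0.3 per-subsample error rate chosen for illustration rather than taken from the paper.

```python
from math import comb

def majority_error(p_single: float, n: int) -> float:
    """Error rate of a majority vote over n independent classifications,
    each wrong with probability p_single (ties counted as errors)."""
    start = n // 2 + (n % 2)  # smallest count of wrong votes that loses the vote
    return sum(comb(n, k) * p_single**k * (1 - p_single)**(n - k)
               for k in range(start, n + 1))

# With a fixed per-subsample error rate, the ensemble error falls roughly
# geometrically as the number of subsamples grows linearly.
for n in (1, 9, 17, 25):
    print(n, f"{majority_error(0.3, n):.2e}")
```

The binomial tail shrinks exponentially in n (a Chernoff-bound argument), which is exactly a logarithmic error reduction per linear increase in samples.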
Category: Artificial Intelligence
[1290] viXra:2211.0054 [pdf] submitted on 2022-11-10 01:32:55
Authors: CholRyong Pak, HakMyong O, HyokChol U, Hun Nam
Comments: 7 Pages.
This paper proposes how to detect malicious network data in an effective and accurate way using a MUXConv neural network (MUXCNN) with parameter optimization. First, to increase detection speed, packets are entered directly into the input of the MUXCNN without decoding. Next, after training the MUXCNN on learning data, we judge whether traffic is normal or abnormal. Simulations and experiments show that the proposed abnormal-network detection system is more efficient in detection and higher in accuracy than other multi-layer neural networks.
Category: Artificial Intelligence
[1289] viXra:2211.0015 [pdf] submitted on 2022-11-03 01:50:04
Authors: Pengyu Guo
Comments: 66 Pages.
Credit risk stands for the risk of losses caused by unwanted events, such as the default of an obligor. Managing portfolio credit risk is crucial for financial institutions. The multi-factor Merton model is one of the most widely used tools for modelling credit risk in financial institutions. Typically, the implementation of the multi-factor Merton model involves Monte Carlo simulations, which are time-consuming. This would significantly restrict its usability in daily credit risk measurement. In this report, we propose an FPGA architecture for credit-risk measurement in multi-factor Merton models. The presented architecture uses a variety of optimization techniques, such as kernel vectorization and loop unrolling, to optimize the performance of the FPGA implementation. The evaluation results show that, compared to a basic C++ implementation running on a single-core Intel i5-4210 CPU, our proposed FPGA implementation can achieve an acceleration of up to 22 times, with a precision loss of less than 10^-8.
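The abstract does not give the model equations, but a common one-factor simplification of the Merton framework (a hypothetical stand-in for the multi-factor kernel the FPGA accelerates; all parameter values below are illustrative) shows why Monte Carlo dominates the cost: each scenario draws a systematic factor and evaluates a conditional portfolio default rate.

```python
import math
import random
from statistics import NormalDist, mean

def conditional_default_rates(pd=0.02, rho=0.2, n_scenarios=10000, seed=7):
    """One-factor Merton sketch: draw the systematic factor Z per scenario
    and return the conditional (large-pool) portfolio default rate
    Phi((c - sqrt(rho)*Z) / sqrt(1 - rho)), where c = Phi^{-1}(pd)."""
    rng = random.Random(seed)
    nd = NormalDist()
    c = nd.inv_cdf(pd)                      # default threshold on asset value
    a, b = math.sqrt(rho), math.sqrt(1.0 - rho)
    return [nd.cdf((c - a * rng.gauss(0.0, 1.0)) / b)
            for _ in range(n_scenarios)]

rates = conditional_default_rates()
avg = mean(rates)                               # close to the input pd = 0.02
var_99 = sorted(rates)[int(0.99 * len(rates))]  # 99% quantile of the loss rate
```

Averaging the scenarios recovers the unconditional default probability, and tail quantiles give risk measures such as 99% VaR; this scenario loop, repeated across factors and obligors, is the workload that kernel vectorization and loop unrolling target.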
Category: Artificial Intelligence
[1288] viXra:2211.0014 [pdf] submitted on 2022-11-03 01:50:31
Authors: Pengyu Guo
Comments: 36 Pages.
Agent-based modeling is a powerful tool that is widely used to model global financial systems. When the parameters of the model are appropriate, the price time series generated by the model exhibit marked similarities with actual financial time series and even reproduce some of their statistical characteristics. Using Kirman's Ant model as a prototype, this report systematically explored Gilli and Winker's parameter optimization method. In view of some limitations of this method, this report proposed improvements, including a local-restart strategy to enhance the convergence ability of the original optimization method, and incorporated Simulated Annealing into the original method to help the algorithm escape from local optima. Furthermore, since the parameter optimization of agent-based models tends to be very time-consuming, an acceleration method was also proposed to speed up this procedure. In the end, the presented methods were validated on the EUR/USD exchange rate.
Category: Artificial Intelligence
[1287] viXra:2210.0134 [pdf] submitted on 2022-10-26 06:00:53
Authors: Matthew Groom
Comments: 4 Pages.
This paper will address the meaning and purpose of sleep by combining several factors. This combination will also answer another of the greatest mysteries of humanity: where did we originate, at the surface or at a deep-sea vent? I have included how Artificial Intelligence, the brain, and sentience are derived from sleep.
Category: Artificial Intelligence
[1286] viXra:2210.0130 [pdf] submitted on 2022-10-26 10:02:37
Authors: Nedya Farisia, Yova Ruldeviyani, Eko Kuswardono Budiardjo
Comments: 10 Pages.
Social media is growing rapidly at the moment and makes it convenient to communicate. But this convenience is widely misused to treat other people indecently before the entire internet community, which is commonly called cyberbullying. If we fail to prevent cyberbullying, it will be difficult to track down and deal with. One of the main weapons for preventing acts of cyberbullying is detection on social media. Cyberbullying can be detected by determining whether a post touches on a sensitive topic of a personal nature, such as racism. By determining the words related to such sensitive topics and filtering by sentiment, cyberbullying tweet detection is performed using the Hyperpipes, tree-based J48, and SVM classification methods. The results show that the Hyperpipes and decision tree algorithms produce the best evaluation results, with accuracies of 85.32% and 86.24% respectively.
Category: Artificial Intelligence
[1285] viXra:2210.0120 [pdf] submitted on 2022-10-25 00:44:39
Authors: Dimiter Dobrev
Comments: 14 Pages. In Bulgarian
We will consider all possible strategies of the agent and show that one of them is the best. This strategy is not computable, but there are computable strategies close to it. We will define AI as a computable strategy that is close enough to the best. To determine the agent's best strategy, we need a language for description of the world. Through this language we will also make a program satisfying the definition of AI. This program will first understand the world by describing it through the chosen language, then based on this description it will predict the future and choose the best possible action. This program is extremely inefficient and practically unusable, but it can be improved by improving the language for description of the world and by improving the algorithm for predicting the future. In this way, an efficient program satisfying the definition of AI can be obtained.
Category: Artificial Intelligence
[1284] viXra:2210.0089 [pdf] submitted on 2022-10-20 01:40:39
Authors: Mikolaj Sitarz
Comments: 13 Pages.
This article explores an extension of the well-known F1 score used for assessing the performance of binary classifiers. We propose a new metric using a probabilistic interpretation of precision, recall, specificity, and negative predictive value. We describe its properties and compare it to common metrics. Then we demonstrate its behavior in edge cases of the confusion matrix. Finally, the properties of the metric are tested on a binary classifier trained on a real dataset.
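The four quantities the proposed metric builds on are all simple ratios of confusion-matrix cells; a small helper (illustrative only, not the paper's extended metric) makes the edge cases easy to probe.

```python
def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard binary-classification metrics from a confusion matrix,
    with zero-division edge cases mapped to 0.0."""
    precision   = tp / (tp + fp) if tp + fp else 0.0
    recall      = tp / (tp + fn) if tp + fn else 0.0   # a.k.a. sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    npv         = tn / (tn + fn) if tn + fn else 0.0   # negative predictive value
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "npv": npv, "f1": f1}

m = binary_metrics(tp=8, fp=2, fn=2, tn=88)
print({k: round(v, 3) for k, v in m.items()})
```

Note that F1 depends only on tp, fp, and fn and ignores tn entirely, which is one motivation for extensions that also account for specificity and NPV.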
Category: Artificial Intelligence
[1283] viXra:2209.0153 [pdf] submitted on 2022-09-27 06:59:48
Authors: Meng Cao, Ji Jiang, Qichen Ye, Yuexian Zou
Comments: 4 Pages. Technical Report for WAIC Challenge of Financial QA under Market Volatility
This technical report presents the 1st winning model for Financial Community Question-and-Answering (FCQA), a task newly introduced in the Challenge of Financial QA under Market Volatility in WAIC 2022. FCQA aims to respond to users' queries in financial forums with the assistance of heterogeneous knowledge sources. We address this problem by proposing a graph-transformer-based model for efficient multi-source information fusion. As a result, we won first place out of 4278 participating teams and outperformed the second place by 5.07 times on BLEU.
Category: Artificial Intelligence
[1282] viXra:2209.0146 [pdf] submitted on 2022-09-28 02:18:16
Authors: Clark M. Thomas
Comments: 6 Pages.
Sentience once mostly referenced human feelings. Now it also points to any "intelligent feelings," with no clear definition emerging. Species inside Earth’s biosphere manifest advanced sentience far beyond everyday awareness. Complex sentience has been critical for complex evolution. Will android robots develop advanced consciousness? Could advanced AI transcend human social sentience, in addition to being super-smart computers? How might UFOs interface with our emerging matrix of advancing technology and imminent ecological disaster?
Category: Artificial Intelligence
[1281] viXra:2209.0089 [pdf] submitted on 2022-09-13 02:31:50
Authors: Michael Blackwell, Qing Tian
Comments: 5 Pages.
The goal of this project was to develop a fully convolutional neural network (FCNN) capable of identifying the region of interest (ROI) in dermatoscopic images. To achieve this goal, a U-Net style model was developed for this task and enhanced with an attention module which operated on the extracted features. The addition of this attention module improved our model's semantic segmentation performance and increased pixel-level precision and recall by 4.0% and 4.6% respectively. The code used in this paper can be found on the project github page: https://github.com/Michael-Blackwell/CapstoneProject
Category: Artificial Intelligence
[1280] viXra:2209.0082 [pdf] submitted on 2022-09-14 00:41:01
Authors: G. Torimaru
Comments: 2 Pages.
I explain why consciousness is non-algorithmic and why strong AI cannot come true, reinforcing Penrose's argument.
Category: Artificial Intelligence
[1279] viXra:2209.0069 [pdf] submitted on 2022-09-11 16:50:18
Authors: Ait-Taleb Nabil
Comments: 15 Pages.
In this paper, we propose a method for learning signals related to a data frame $D_{1}$. The learning algorithm is based on the largest entropy variations of a Bayesian network. The method makes it possible to obtain an optimal Bayesian network having a high likelihood with respect to the signals $D_{1}$. From the learned optimal Bayesian network, we show how to infer new signals $D_{2}$. We then infer a large number (200000) of candidate signals $D_{2}$ and select the predictive signals $D_{2}^{*}$ minimizing the entropy of the optimal Bayesian network computed from the concatenation of the signals $D_{1}$ followed by the candidate signals $D_{2}$. The prediction $D_{2}^{*}$ is justified by the fact that the union $D_{1}\cup D^{*}_{2}$ has a low entropy and therefore a high average probability, in logarithmic scale, of being obtained. We also introduce a prediction quality allowing us to evaluate the predictive quality of the inferred signals $D_{2}$. Once the optimal signals $D_{2}^{*}$ are obtained, we impose on the points of the signals $D_{2}^{*}$ the same order of scatter (computed from the Mahalanobis distance) as that of the signals $D_{1}$.
Category: Artificial Intelligence
[1278] viXra:2209.0007 [pdf] submitted on 2022-09-02 01:35:30
Authors: Chengkai Guo
Comments: 4 Pages.
In this paper, we first review some of the innovations in modeling mentalizing. Broadly, this involves building models that compute a World Model and a Theory of Mind (ToM). A simple framework, FaithNet, is then presented, with concepts like persistence, continuity, cooperation, and preference represented as faith rules. FaithNet defines a generative model that can sample faith rules. Our FaithNet utilizes a general-purpose conditioning mechanism based on cross-attention, offering computations that best explain observed real-world events under a Bayesian criterion.
Category: Artificial Intelligence
[1277] viXra:2209.0005 [pdf] submitted on 2022-09-01 01:01:30
Authors: Mojtaba Heydari, Frank Cwitkowitz, Zhiyao Duan
Comments: 8 Pages. The 22nd International Society for Music Information Retrieval Conference (ISMIR 2021)
The online estimation of rhythmic information, such as beat positions, downbeat positions, and meter, is critical for many real-time music applications. Musical rhythm comprises complex hierarchical relationships across time, rendering its analysis intrinsically challenging and at times subjective. Furthermore, systems which attempt to estimate rhythmic information in real-time must be causal and must produce estimates quickly and efficiently. In this work, we introduce an online system for joint beat, downbeat, and meter tracking, which utilizes causal convolutional and recurrent layers, followed by a pair of sequential Monte Carlo particle filters applied during inference. The proposed system does not need to be primed with a time signature in order to perform downbeat tracking, and is instead able to estimate meter and adjust the predictions over time. Additionally, we propose an information gate strategy to significantly decrease the computational cost of particle filtering during the inference step, making the system much faster than previous sampling-based methods. Experiments on the GTZAN dataset, which is unseen during training, show that the system outperforms various online beat and downbeat tracking systems and achieves comparable performance to a baseline offline joint method.
Category: Artificial Intelligence
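The sequential Monte Carlo machinery named in the abstract above can be sketched on a toy problem: a bootstrap particle filter tracking a slowly drifting beat period from noisy per-frame measurements. The musical state and observation models here are invented stand-ins, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 200
true_period = 20.0 + np.cumsum(rng.normal(0.0, 0.02, T))  # slowly drifting beat period (frames)
obs = true_period + rng.normal(0.0, 1.0, T)               # noisy per-frame measurements

N = 500
particles = rng.uniform(10.0, 30.0, N)   # period hypotheses
weights = np.full(N, 1.0 / N)
est = np.empty(T)

for t in range(T):
    particles += rng.normal(0.0, 0.05, N)                    # transition model
    weights *= np.exp(-0.5 * (obs[t] - particles) ** 2)      # Gaussian likelihood
    weights /= weights.sum()
    est[t] = float(np.sum(weights * particles))              # posterior mean estimate
    if 1.0 / np.sum(weights ** 2) < N / 2:                   # resample on low effective sample size
        particles = particles[rng.choice(N, N, p=weights)]
        weights = np.full(N, 1.0 / N)
```

The paper's information gate would additionally skip the weighting step on frames carrying little rhythmic information, which is where its speedup comes from.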
[1276] viXra:2208.0173 [pdf] submitted on 2022-08-31 03:40:39
Authors: Mojtaba Heydari, Zhiyao Duan
Comments: 5 Pages.
Online beat tracking (OBT) has always been a challenging task due to the inaccessibility of future data and the need to make inference in real time. We propose Don't Look Back! (DLB), a novel approach optimized for efficiency when performing OBT. DLB feeds the activations of a unidirectional RNN into an enhanced Monte Carlo localization model to infer beat positions. Most preexisting OBT methods either apply some offline approach to a moving window containing past data to make predictions about future beat positions, or must be primed with past data at startup to initialize. Meanwhile, our proposed method only uses the activation of the current time frame to infer beat positions. As such, without waiting at the beginning to receive a chunk, it provides an immediate beat tracking response, which is critical for many OBT applications. DLB significantly improves beat tracking accuracy over state-of-the-art OBT methods, yielding performance similar to offline methods.
Category: Artificial Intelligence
[1275] viXra:2208.0171 [pdf] submitted on 2022-08-31 03:49:55
Authors: Mojtaba Heydari, Zhiyao Duan
Comments: 8 Pages. 23rd International Society for Music Information Retrieval Conference (ISMIR 2022)
Tracking beats of singing voices without the presence of musical accompaniment can find many applications in music production, automatic song arrangement, and social media interaction. Its main challenge is the lack of the strong rhythmic and harmonic patterns that are important for music rhythmic analysis in general. Even for human listeners, this can be a challenging task. As a result, existing music beat tracking systems fail to deliver satisfactory performance on singing voices. In this paper, we introduce singing beat tracking as a novel task and propose the first approach to solving it. Our approach leverages semantic information of singing voices by employing pre-trained self-supervised WavLM and DistilHuBERT speech representations as the front-end and uses a self-attention encoder layer to predict beats. To train and test the system, we obtain separated singing voices and their beat annotations using source separation and beat tracking on complete songs, followed by manual corrections. Experiments on the 741 separated vocal tracks of the GTZAN dataset show that the proposed system outperforms several state-of-the-art music beat tracking methods by a large margin in terms of beat tracking accuracy. Ablation studies also confirm the advantages of pre-trained self-supervised speech representations over generic spectral features.
Category: Artificial Intelligence
[1274] viXra:2208.0156 [pdf] submitted on 2022-08-28 08:46:18
Authors: Carlo D. Petalver
Comments: 12 Pages.
Categorizing books and other archaic paper sources to a course reference or syllabus is a challenge in library science. The traditional way of categorization is done manually by professionals, and the process of seeking and retrieving information can be frustrating. It requires intellectual effort and conceptual analysis by a human to recognize similarities between items and assign each subject to the correct category. Unlike the traditional categorization process, the author implemented the concept of automatic document categorization for libraries using text mining. The project involves the creation of a web app and a mobile app. This is accomplished through a supervised machine learning classification model using the Support Vector Machine algorithm that can predict, for data from a book or other archaic paper source, the course syllabus it belongs to.
Category: Artificial Intelligence
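The SVM-based text categorization described above can be sketched in miniature. The corpus and categories below are invented, and the Pegasos stochastic subgradient solver stands in for a full library pipeline; the paper's own implementation details are not given in the abstract.

```python
import numpy as np

docs = [
    ("calculus and linear algebra notes", "math"),
    ("algebra problem sets and proofs", "math"),
    ("organic chemistry lab manual", "chem"),
    ("chemistry reactions and lab safety", "chem"),
]
vocab = sorted({w for text, _ in docs for w in text.split()})
index = {w: i for i, w in enumerate(vocab)}

def bow(text):
    # bag-of-words count vector
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in index:
            v[index[w]] += 1.0
    return v

X = np.array([bow(t) for t, _ in docs])
y = np.array([1.0 if c == "math" else -1.0 for _, c in docs])

# Pegasos: stochastic subgradient descent on the regularized hinge loss
rng = np.random.default_rng(0)
lam, w = 0.01, np.zeros(len(vocab))
for t in range(1, 501):
    i = rng.integers(len(docs))
    eta = 1.0 / (lam * t)
    w *= (1.0 - eta * lam)              # shrink (regularization step)
    if y[i] * X[i].dot(w) < 1.0:        # margin violated: move toward example
        w += eta * y[i] * X[i]

pred = np.sign(X.dot(w))
```

On this tiny separable corpus the learned linear separator classifies every training document correctly.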
[1273] viXra:2208.0137 [pdf] submitted on 2022-08-25 15:44:36
Authors: Yingcheng Huang, Fuyuan Xiao
Comments: 1 Page.
In this paper, a novel belief divergence, the higher-order belief Jensen-Shannon divergence, is proposed to measure the discrepancy between BPAs in Dempster-Shafer evidence theory.
Category: Artificial Intelligence
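The proposed higher-order variant builds on the plain belief Jensen-Shannon divergence between two basic probability assignments (BPAs), which can be computed directly; the focal elements and masses below are illustrative.

```python
import math

def bjs(m1, m2):
    # Belief Jensen-Shannon divergence between two BPAs given as
    # {focal element: mass} dicts; log base 2 bounds the result in [0, 1].
    keys = set(m1) | set(m2)
    d = 0.0
    for k in keys:
        p, q = m1.get(k, 0.0), m2.get(k, 0.0)
        avg = 0.5 * (p + q)
        if p > 0:
            d += 0.5 * p * math.log2(p / avg)
        if q > 0:
            d += 0.5 * q * math.log2(q / avg)
    return d

m1 = {"A": 0.6, "B": 0.3, "AB": 0.1}   # mass on focal elements
m2 = {"A": 0.2, "B": 0.7, "AB": 0.1}
d = bjs(m1, m2)
```

The divergence is zero for identical BPAs, symmetric, and bounded by 1, which is what makes it usable as a conflict measure between bodies of evidence.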
[1272] viXra:2208.0135 [pdf] submitted on 2022-08-25 00:53:10
Authors: Jie Zenga, Fuyuan Xiao
Comments: Pages.
In this paper, a novel symmetric fractal-based belief KL divergence is proposed to more appropriately measure the conflict between BPAs.
Category: Artificial Intelligence
[1271] viXra:2208.0104 [pdf] submitted on 2022-08-20 05:18:24
Authors: Akhil Sahukaru, Shishir Kumar Shandiliya
Comments: 15 Pages.
When traffic demand exceeds available network capacity, traffic congestion develops. Lower vehicle speeds, longer journey times, unreliable arrival timings, and lengthier vehicular queueing are all symptoms. Congestion may have a detrimental influence on society by lowering quality of life and increasing pollution, particularly in metropolitan areas. To alleviate traffic congestion, traffic engineers and scientists require high-quality, comprehensive, and precise data to forecast traffic flow. The advantages and disadvantages of various data collecting systems vary, as do data attributes such as accuracy, sample frequency, and geographic coverage. Multisource data fusion improves accuracy and delivers a more complete picture of traffic flow performance on a road network. This study provides a review of the literature on congestion estimation and prediction based on data obtained from numerous sources. An overview of data fusion approaches and congestion indicators that have been employed in the literature to estimate traffic condition and congestion is provided. The outcomes of various strategies are examined, and a disseminative analysis of the benefits and drawbacks of the methods reviewed is offered.
Keywords: traffic congestion; multi-source data fusion; traffic state estimation; data collection
Category: Artificial Intelligence
[1270] viXra:2208.0073 [pdf] submitted on 2022-08-13 01:00:59
Authors: Mirzakhmet Syzdykov
Comments: 3 Pages.
We propose an evolutionary algorithm for subset construction which supersedes the previously known result due to Rabin and Scott.
Category: Artificial Intelligence
[1269] viXra:2208.0055 [pdf] submitted on 2022-08-09 13:40:27
Authors: Egger L Mielberg
Comments: 17 Pages.
Time is the most important asset of any living person on our planet. The presence of a digital personal financial and economic environment, decentralized to each of its users, would significantly change the quality and standard of living of each user. The main unit of measurement of the value of an individual user of the environment should be the hours (minutes) spent by him on the execution of any sense contract. Our international team proposes a practical implementation of such an environment using the logic of the new mathematical theory for artificial intelligence, Sense Theory [1].
Category: Artificial Intelligence
[1268] viXra:2208.0012 [pdf] submitted on 2022-08-04 01:28:39
Authors: Michael C. I. Nwogugu
Comments: 32 Pages. The copyright license-type for this article is CC-BY-NC-ND
Nwogugu (2012) introduced a Network-based and Cognition-Based cyberphysical fuzzy-system within which complex self-adjusting "semi-autonomous" financial products are originated, purchased and sold. The participants of the system are diverse and include adults, companies, brokers, banks, lawyers, insurance companies and real estate companies. This theoretical article explains the key additional characteristics, system-architecture, fuzzy-attributes and Reasoning/Logic of some cost-reducing and energy-reducing AI/ML Network/Modular Products (i.e., Mortgage-Alternatives Products, Retirement/Savings products and Insurance products) that were introduced in Nwogugu (2012), and also other cost-saving financial products that he developed (collectively, the "Products"). Through the products' fuzzy features, AI and network, the cyber-system architecture implicitly incorporates "Learning" and also can use Blockchain for record-keeping. The semi-autonomous and "self-adjustment" characteristics of these Modular Products can drastically reduce system-participants' costs and energy-use while increasing their revenues/profits through better and more efficient CRM, "matching", transaction-processing and "state-updating".
Category: Artificial Intelligence
[1267] viXra:2207.0146 [pdf] submitted on 2022-07-26 01:08:50
Authors: R. V. R. Pandya
Comments: 6 Pages.
In this paper, we propose a generalized attention mechanism (GAM) by first suggesting a new interpretation for the self-attention mechanism of Vaswani et al. Following the interpretation, we provide descriptions of different variants of the attention mechanism, which together form GAM. Further, we propose a new relative position representation within the framework of GAM. This representation can be easily utilized for cases in which elements next to each other in the input sequence can be at random locations in the actual dataset/corpus.
Category: Artificial Intelligence
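The self-attention mechanism of Vaswani et al. that the paper above reinterprets, in its standard scaled dot-product form, with a simple additive relative position bias as one concrete (illustrative, not the paper's) position representation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V, rel_bias=None):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # scaled dot-product
    if rel_bias is not None:                  # bias[i, j] depends only on j - i
        scores = scores + rel_bias
    A = softmax(scores, axis=-1)              # each row is a distribution over keys
    return A @ V, A

rng = np.random.default_rng(0)
n, d = 5, 8
Q, K, V = rng.normal(size=(3, n, d))
offsets = np.arange(n)[None, :] - np.arange(n)[:, None]
rel_bias = -0.1 * np.abs(offsets)             # prefer nearby positions
out, A = attention(Q, K, V, rel_bias)
```

Each attention row sums to one, so the output is a convex combination of value vectors; GAM's variants change how the scores are formed while keeping this structure.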
[1266] viXra:2207.0064 [pdf] submitted on 2022-07-09 02:53:24
Authors: Dimitrios Geromichalos
Comments: 10 Pages.
Based on hundreds of thousands of song lyrics from thousands of bands, Word2Vec models have been trained to quantitatively identify similarities between band texts and terms. Using prominent examples, this demonstrates, for the cases studied, that music bands can be assigned to a similarity network solely on the basis of their song lyrics, which also corresponds to their musical style. Furthermore, using exemplary words, it is demonstrated that semantic term networks vary strongly from genre to genre. In addition, the semantic similarity matrices were studied using network analysis methods. As it turned out, term and band text networks differ significantly. While the former resemble random networks, the latter partly exhibit power-law behavior. Both also exhibit threshold-dependent regimes.
Category: Artificial Intelligence
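Training Word2Vec itself needs a large corpus and a library such as gensim; the sketch below uses raw co-occurrence vectors and cosine similarity on an invented four-line corpus to illustrate the kind of distributional term similarity the paper measures on lyrics.

```python
import math
from collections import Counter

corpus = [
    "the cat chased the mouse",
    "the dog chased the mouse",
    "he played the guitar loudly",
    "she played the guitar softly",
]

def context_vector(word, window=2):
    # count words appearing within `window` positions of `word`
    ctx = Counter()
    for line in corpus:
        toks = line.split()
        for i, t in enumerate(toks):
            if t == word:
                for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                    if j != i:
                        ctx[toks[j]] += 1
    return ctx

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

sim_cat_dog = cosine(context_vector("cat"), context_vector("dog"))
sim_cat_guitar = cosine(context_vector("cat"), context_vector("guitar"))
```

Words sharing contexts ("cat"/"dog") come out more similar than words that do not ("cat"/"guitar"), which is the signal the trained embeddings exploit at scale.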
[1265] viXra:2207.0062 [pdf] submitted on 2022-07-08 16:38:56
Authors: Vishal Pandey, Ishanvi Pandey
Comments: 7 Pages.
Wave Function Collapse initializes the output bitmap in a completely unobserved state, where each pixel value is in a superposition of colors of the input bitmap (so if the input was black and white, the unobserved states are shown in different shades of grey). The coefficients in these superpositions are real numbers, not complex numbers, so it does not do actual quantum mechanics, but it was inspired by QM. Here, we match each tile to a tile value, pixel by pixel, naming this a "socket". Because in code the tiles arrive in a random order, we rotate them into a specific order to match socket to socket, which treats the overlapping of tiles as a superposition of several eigenstates. The algorithm was first introduced in 2016 by Maxim Gumin and can generate procedural patterns from a sample image or from a collection of tiles. We are simply visualizing it in a mathematical way.
Category: Artificial Intelligence
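A 1-D Wave Function Collapse sketch of the loop described above: cells start in a "superposition" of all tiles, the least-entropy undecided cell is collapsed at random, and adjacency constraints (the "sockets") are propagated. The tile set and adjacency rules are invented for illustration.

```python
import random

TILES = "ABC"
# allowed (left, right) tile pairs -- the "socket" compatibility table
ALLOWED = {("A", "A"), ("A", "B"), ("B", "A"), ("B", "C"), ("C", "B"), ("C", "C")}

def compatible(left_opts, right_opts):
    # options on each side that still have a legal partner on the other
    l = {a for a in left_opts if any((a, b) in ALLOWED for b in right_opts)}
    r = {b for b in right_opts if any((a, b) in ALLOWED for a in left_opts)}
    return l, r

random.seed(1)
n = 12
cells = [set(TILES) for _ in range(n)]   # every cell starts fully unobserved
while any(len(c) > 1 for c in cells):
    # observe: collapse the undecided cell with the fewest remaining options
    i = min((i for i in range(n) if len(cells[i]) > 1), key=lambda i: len(cells[i]))
    cells[i] = {random.choice(sorted(cells[i]))}
    # propagate: enforce arc consistency between every adjacent pair
    changed = True
    while changed:
        changed = False
        for j in range(n - 1):
            l, r = compatible(cells[j], cells[j + 1])
            if l != cells[j] or r != cells[j + 1]:
                cells[j], cells[j + 1] = l, r
                changed = True

result = "".join(sorted(c)[0] for c in cells)
```

Every adjacent pair in the generated sequence satisfies the socket rules, which is the invariant the 2-D algorithm maintains over a grid.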
[1264] viXra:2207.0056 [pdf] submitted on 2022-07-07 23:38:20
Authors: Omar Dasser, Moad Tahri, Louay Kila, Abderrahim Sekkaki
Comments: 23 Pages.
Drug discovery is a crucial step in the process of delivering a new drug to the market that can take up to 2-3 years, which is all the more penalizing given the current global pandemic caused by the outbreak of the novel coronavirus SARS-CoV-2. Artificial Intelligence methodologies have shown great potential in resolving tasks in various domains such as image classification and sound recognition; over the previous years, Artificial Intelligence has also proved to be the go-to for generative tasks in use cases such as music sequences and text generation, and for solving problems in biology. The goal of this work is to harness the power of these architectures, using a generative recurrent neural network with long short-term memory (LSTM) gating, to generate new, non-existing molecules that can bind to the main COVID-19 protease, which is a key agent in the transcription and replication of the virus, and thus can act as a potential drug that can neutralize the virus inside an infected host. As of today, there are no specific targeted therapeutic agents to treat the disease, and all existing treatments are very limited. Known drugs passing clinical trials, such as Hydroxychloroquine and Remdesivir, showed binding energies with SARS-CoV-2's main protease of -5.3 and -6.5 respectively, while the newly generated molecules exhibited scores ranging down to -13.2.
Category: Artificial Intelligence
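The LSTM gating the generator above relies on can be shown in one forward step, in plain numpy. The weights are random, not a trained chemistry model, and the input stands in for a one-hot-encoded SMILES character.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # W: (4H, D), U: (4H, H), b: (4H,) -- gates stacked as [i, f, o, g]
    z = W @ x + U @ h + b
    H = h.shape[0]
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2 * H]), sigmoid(z[2 * H:3 * H])
    g = np.tanh(z[3 * H:])
    c_new = f * c + i * g          # gated cell-state update
    h_new = o * np.tanh(c_new)     # gated output
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 6, 4                        # e.g. one-hot SMILES character in, hidden state out
W = rng.normal(0, 0.3, (4 * H, D))
U = rng.normal(0, 0.3, (4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for ch in [0, 2, 5]:               # a toy character sequence
    x = np.eye(D)[ch]
    h, c = lstm_step(x, h, c, W, U, b)
```

In the generative setting, `h` would feed a softmax over the SMILES vocabulary and the sampled character would become the next input.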
[1263] viXra:2206.0142 [pdf] submitted on 2022-06-26 16:10:32
Authors: Philip Naveen
Comments: 18 Pages.
This paper introduces the fast adaptive stochastic function accelerator (FASFA) for gradient-based optimization of stochastic objective functions. It works based on Nesterov-enhanced first and second momentum estimates. The method is simple and effective during implementation because it has intuitive/familiar hyperparameterization. The training dynamics can be progressive or conservative depending on the decay rate sum. It works well with a low learning rate and mini-batch size. Experiments and statistics showed convincing evidence that FASFA could be an ideal candidate for optimizing stochastic objective functions, particularly those generated by multilayer perceptrons with convolution and dropout layers. In addition, the convergence properties and regret bound provide results aligning with the online convex optimization framework. As a first of its kind, FASFA addresses the growing need for diverse optimizers by providing next-generation training dynamics for artificial intelligence algorithms. Future experiments could modify FASFA based on the infinity norm.
Category: Artificial Intelligence
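The abstract does not give FASFA's exact update rule, so the following is a generic NAdam-style sketch, a Nesterov-corrected first moment combined with a second-moment estimate, minimizing a 1-D quadratic to show what "Nesterov-enhanced first and second momentum estimates" look like in practice.

```python
import math

def nadam_minimize(grad, x, lr=0.05, b1=0.9, b2=0.999, eps=1e-8, steps=2000):
    m = v = 0.0
    traj = []
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g            # first moment (EMA of gradients)
        v = b2 * v + (1 - b2) * g * g        # second moment (EMA of squares)
        m_hat = m / (1 - b1 ** t)            # bias corrections
        v_hat = v / (1 - b2 ** t)
        # Nesterov look-ahead: blend corrected momentum with the current gradient
        m_bar = b1 * m_hat + (1 - b1) * g / (1 - b1 ** t)
        x -= lr * m_bar / (math.sqrt(v_hat) + eps)
        traj.append(x)
    return x, traj

x_final, traj = nadam_minimize(lambda x: 2.0 * x, 5.0)   # f(x) = x**2
```

On this convex objective the iterates descend toward the minimizer at zero, consistent with the regret-bound setting the paper analyzes.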
[1262] viXra:2206.0132 [pdf] submitted on 2022-06-24 04:59:04
Authors: Yingcheng Huang, Fuyuan Xiao
Comments: 1 Page.
In this paper, a novel belief divergence measurement method, the fractal belief Jensen-Shannon (FBJS) divergence, is proposed to better measure conflicts between pieces of evidence. The proposed FBJS divergence is the first belief divergence that combines belief divergence theory and the concept of fractals.
Category: Artificial Intelligence
[1261] viXra:2205.0131 [pdf] submitted on 2022-05-25 03:41:12
Authors: Shiyuan Li
Comments: 17 Pages.
Spike-timing-dependent plasticity (STDP) in biological neural networks has been proven to be important during the biological learning process. Artificial neural networks, on the other hand, learn in a different way, for example via Back-Propagation or Contrastive Hebbian Learning. In this work we introduce approximate STDP, a new neural network learning framework more similar to the biological learning process. It uses only STDP rules for supervised and unsupervised learning; every neuron learns patterns in a distributed fashion and needs no global loss or other supervised information. We also use a numerical method to approximate the derivatives of each neuron in order to better use STDP learning, and use the derivatives to set a target for neurons to accelerate the training and testing process. The framework can make predictions or generate patterns in one model without additional configuration. Finally, we verified our framework on the MNIST dataset for classification and generation tasks.
Category: Artificial Intelligence
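The classic pair-based STDP window that frameworks like the one above build on: a synapse is potentiated when the presynaptic spike precedes the postsynaptic one, and depressed otherwise. The time constants and amplitudes below are illustrative, not the paper's.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    # weight change for one pre/post spike pairing (times in ms)
    dt = t_post - t_pre
    if dt > 0:                              # pre before post -> potentiate
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)    # post before pre -> depress

w = 0.5
for t_pre, t_post in [(10, 15), (40, 42), (70, 66)]:   # spike pairings
    w += stdp_dw(t_pre, t_post)
```

Closer pairings produce larger weight changes, so causally related spikes strengthen a synapse faster than loosely correlated ones.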
[1260] viXra:2205.0013 [pdf] submitted on 2022-05-02 20:14:08
Authors: Atul Anand, A. Seetharaman, K. Maddulety
Comments: 14 Pages. Conference: 3rd International Conference on Data Mining and Machine Learning (DMML 2022)
This paper is aimed at studying the factors influencing the implementation of blockchain in supply chain management to solve the current issues faced in the supply chain ecosystem. Supply chains are part and parcel of every business and have multiple inefficiencies in the system. Some of these inefficiencies can be managed by the usage of a blockchain platform. Technology, intracompany synergies, intercompany collaboration, extrinsic factors, and innovation are critically evaluated for the adoption of blockchain in the supply chain. A pilot study is conducted in the form of a survey for analysis of these factors. Hypotheses are derived for these factors for quantitative research. Subsequently, these hypotheses are examined with the help of ADANCO 2.3 for structural equation modelling. As an outcome, it is evident that innovation and extrinsic factors significantly impact the adoption of blockchain in supply chain management.
Category: Artificial Intelligence
[1259] viXra:2203.0172 [pdf] submitted on 2022-03-29 20:28:39
Authors: Amey Thakur, Mega Satish
Comments: 13 Pages. 7 figures, Volume 10, Issue III, International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2022. DOI: https://doi.org/10.22214/ijraset.2022.40861
Breakthroughs in machine learning and deep learning are causing a change in every industry area and managing various types of activities better than people. The majority of monotonous jobs that were formerly performed by humans are now replaced by AI. Every firm is aiming to replace the least skilled labour with AI robots that can do comparable tasks more efficiently, especially when it comes to chatbots. A chatbot is computer software that mimics human interaction by using voice instructions, text dialogues, or both. Chatbots are being employed to address consumer concerns or problems in food delivery app businesses such as Zomato and Swiggy, but are chatbots truly useful in that business model? This business model's target customers are those who don't have time to go outside to obtain food, prefer convenience at home, or are unwilling to endure discomfort, thus their concerns should be resolved in the most convenient way possible. A chatbot is employed to fulfil the user's request, and it is critical for the chatbot to plan how to carry out the task that the user has asked for. New tools are now available to create and deploy chatbots; Amazon Lex by AWS is one of them. This project focuses on creating a pizza-ordering chatbot using Amazon Lex to help the user order pizza.
Category: Artificial Intelligence
[1258] viXra:2203.0158 [pdf] submitted on 2022-03-27 12:21:30
Authors: Narayanan Arvind
Comments: 7 Pages. Presented at Samudramanthan 2022, Indian Institute of Technology Kharagpur
ID documents submitted for Maritime digital KYC processes can be skewed due to the environment in which the photograph is taken or due to user preferences and/or errors. The skewed image results in a low accuracy in downstream image processing tasks like optical character recognition (OCR). ID document deskewing has typically been approached using deep learning (Mask R-CNN), regression, projection plans, Hough transforms, Fourier transforms and other computer vision techniques. The aim of this study is to build a robust document deskewing system based on keyword detection and coordinate geometry. The research is carried out by analyzing skewed Indian PAN cards available with IN-D. The database has 50 Indian PAN card images. These images are augmented to generate 150 images, with 50 images for each of the +90, -90 and 180 degree skew cases. The Google Vision API is used as the OCR engine for finding the coordinates of the keyword in our study. The research employs the NumPy, Pandas and OpenCV open-source libraries for Python. The accuracy of the reported model is 95.33%. The accuracy of our present approach surpasses the accuracy of all the models available in the literature.
Category: Artificial Intelligence
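The core of the coordinate-geometry deskewing idea for the +90, -90 and 180 degree cases can be reduced to: locate a known keyword, then rotate until its coordinates fall where the template expects them. A toy grid stands in here for the OCR output; the real system uses the Google Vision API for the keyword coordinates.

```python
import numpy as np

doc = np.zeros((4, 6))
doc[0, 1] = 1                      # keyword expected in the top-left quadrant
skewed = np.rot90(doc, k=3)        # simulate a 90-degree skew

for k in range(4):                 # try each canonical rotation
    cand = np.rot90(skewed, k)
    r, c = np.argwhere(cand == 1)[0]
    if r < cand.shape[0] / 2 and c < cand.shape[1] / 2:
        deskewed = cand            # keyword back in the expected quadrant
        break
```

Because only the four canonical skews occur in the dataset, checking the keyword's quadrant after each candidate rotation is enough to recover the upright document.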
[1257] viXra:2203.0150 [pdf] submitted on 2022-03-25 20:50:57
Authors: Shuvra Smaran Das
Comments: 11 Pages. 3 figures (Corrections made by viXra Admin to conform with the requirements on the Submission Form)
By using Artificial Intelligence (AI) in the form of deepfakes, clothes are stripped digitally from photographs of users and shared on social media. Deepfakes are computer-generated images and videos, often convincing, based on an existing template. Victims are afraid and worried about these things. Moreover, the images are so realistic that most users believe they are authentic. These things can happen to any of us. However, we cannot stop using these social platforms, because they are the only way to communicate with others and continue our daily work online. These types of crimes should be strictly prevented, and users should be able to tell which images are real and which are not, so that victims and users may know and be assured of the truth about this fraud. Here, we analyze image-related paperwork, including original and duplicate images, to inform users about image forgery, so that users will no longer believe in these fake images.
Category: Artificial Intelligence
[1256] viXra:2203.0145 [pdf] submitted on 2022-03-24 23:11:44
Authors: Arnav Dantuluri
Comments: 9 Pages. (Author's name added to article as required by the rules of viXra.org)
In this paper, I propose a simple and easily reproducible method to enhance and extend datasets from as few as 1,000 images to as many as 10,000, or in essence as many as the user requires. My approach combines proper latent space modeling of a VAE with a modification process called vector quantization. With these techniques, along with enhanced model parameterization and training, a simple convolutional neural network can achieve accuracies of up to 93% on synthetic data, which proves extremely helpful, especially when handling datasets with very few images.
Category: Artificial Intelligence
[1255] viXra:2203.0144 [pdf] submitted on 2022-03-24 00:11:24
Authors: Deokjin Kim
Comments: 3 Pages.
In previous studies, the calculation of everything in physics through a logarithmic elliptic equation was proposed. The calculation is so simple that only high school physics and high school mathematics are needed. Given the author's calculation methodology as a precondition, artificial intelligence will be able to discover the theory of everything in only one day. We propose to develop this artificial intelligence and call it natural intelligence.
Category: Artificial Intelligence
[1254] viXra:2203.0004 [pdf] submitted on 2022-03-01 20:24:27
Authors: Siddhant Kumar Jha, Zhi Hua Zhou
Comments: 8 Pages.
Hypergraphs are a generalization of a graph in which an edge can join any number of vertices; in contrast, in an ordinary graph, an edge connects exactly two vertices. The applications of hypergraphs range from analogical explanations such as social networks to hard generalities in the case of collaborative game theory, where they are known as simple games. The more abstract applications include localized and global optimization of radial functions in computational geometry, and the optimizers generated could also be used to solve linear scheduling problems. The theoretical approach developed under these categories can be used in embedding, clustering, and classification, which can also be solved through the application of spectral hypergraph clustering.
Category: Artificial Intelligence
[1253] viXra:2202.0162 [pdf] submitted on 2022-02-25 19:21:37
Authors: Siddhant Kumar Jha
Comments: 6 Pages.
The objective of the study is to develop a definitive meta-analysis of recent developments in the application of hypergraph theory in the field and study of deep learning and, more widely, machine learning. The applications of this particular technique may range from simple classification tuning to more advanced abstract GANs in the field of regenerative graphical systems and computer vision in general. In our experiments, we use a novel random walk procedure and show that our model achieves and, in most cases, surpasses state-of-the-art performance on benchmark data sets. Additionally, we compare our classification performance to traditional statistical techniques, ML algorithms, and both classical and new deep learning algorithms.
Category: Artificial Intelligence
[1252] viXra:2202.0116 [pdf] submitted on 2022-02-18 16:47:41
Authors: Jeongik Cho
Comments: 3 Pages.
DLSGAN proposed a learning-based GAN inversion method with maximum likelihood estimation. In this paper, I propose a method for out-of-distribution detection using the encoder of DLSGAN. Simply, the log-likelihood of the predicted latent code of input data can be used for out-of-distribution (OOD) detection.
Category: Artificial Intelligence
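The scoring rule proposed above, in isolation: under DLSGAN's i.i.d. Gaussian latent prior, the log-likelihood of an encoder-predicted latent code flags out-of-distribution inputs. The encoder itself is mocked here by drawing codes directly, so only the scoring step is shown.

```python
import numpy as np

def gaussian_loglik(z):
    # log N(z; 0, I), summed over dimensions
    return float(-0.5 * np.sum(z ** 2) - 0.5 * len(z) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
z_in = rng.normal(0.0, 1.0, 64)        # code of an in-distribution input
z_ood = rng.normal(0.0, 3.0, 64)       # code of an atypical (OOD) input

score_in = gaussian_loglik(z_in)
score_ood = gaussian_loglik(z_ood)
is_ood = score_ood < score_in          # in practice, compare to a threshold
```

Codes with unusually large norm score far lower under the prior, which is what makes the log-likelihood usable as an OOD detector.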
[1251] viXra:2202.0106 [pdf] submitted on 2022-02-15 09:41:46
Authors: Ait-Taleb Nabil
Comments: 25 Pages.
In this paper, we expose the BIC score expressed as a function of the Bayesian network's entropy. We then use this BIC score to learn a Bayesian network from an example data frame.
Category: Artificial Intelligence
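BIC-based structure scoring of the kind used above can be sketched on two candidate structures over binary variables X and Y: the empty graph (X independent of Y) versus X -> Y. The data are a toy correlated sample; the entropy connection is that the maximized log-likelihood equals minus n times the empirical (conditional) entropy.

```python
import math
from collections import Counter

data = [(0, 0)] * 90 + [(0, 1)] * 10 + [(1, 0)] * 10 + [(1, 1)] * 90
n = len(data)

def loglik_indep():
    # maximized log-likelihood of the empty graph: P(X) * P(Y)
    s = 0.0
    for var in (0, 1):
        counts = Counter(row[var] for row in data)
        s += sum(c * math.log(c / n) for c in counts.values())
    return s

def loglik_x_to_y():
    # maximized log-likelihood of the structure X -> Y: P(X) * P(Y | X)
    cx = Counter(row[0] for row in data)
    s = sum(c * math.log(c / n) for c in cx.values())
    cxy = Counter(data)
    for (x, y), c in cxy.items():
        s += c * math.log(c / cx[x])
    return s

# BIC = logL - (k / 2) * log(n), with k the number of free parameters
bic_indep = loglik_indep() - (2 / 2) * math.log(n)
bic_x_to_y = loglik_x_to_y() - (3 / 2) * math.log(n)
```

On strongly correlated data the extra parameter of the X -> Y structure pays for its penalty, so structure learning keeps the edge; on independent data it would not.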
[1250] viXra:2202.0082 [pdf] submitted on 2022-02-14 01:43:59
Authors: Mihai Oltean, Dumitru Dumitrescu
Comments: 10 Pages. International Conference on Computational Sciences, ICCS'04, Edited by M. Bubak, G. D. van Albada, P. Sloot, and J. Dongarra, Vol. II, pp. 670-673, 6-9 June, Krakow, Poland, Springer-Verlag, Berlin, 2004.
Multi Expression Programming (MEP) is an evolutionary technique that may be used for solving computationally difficult problems. MEP uses a linear solution representation. Each MEP individual is a string encoding complex expressions (computer programs), and may encode multiple solutions of the current problem. In this paper, MEP is used for evolving a Traveling Salesman Problem (TSP) heuristic for graphs satisfying the triangle inequality. The evolved MEP heuristic is compared with the Nearest Neighbor heuristic (NN) and the Minimum Spanning Tree heuristic (MST) on some difficult problems in TSPLIB. For most of the considered problems the evolved MEP heuristic outperforms NN and MST. The results emphasize that the evolved MEP heuristic is a powerful tool for solving difficult TSP instances.
Category: Artificial Intelligence
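MEP's distinguishing feature, a linear chromosome in which every gene encodes a candidate expression, can be shown in miniature. The chromosome below is hand-written rather than evolved, and the target function is invented: each gene is a terminal or an operator referencing only earlier genes, and fitness selects the best expression among all of them.

```python
chromosome = [("x",), ("1",), ("+", 0, 1), ("*", 0, 0), ("+", 3, 0)]
xs = [0.0, 1.0, 2.0, 3.0]
target = [x * x + x for x in xs]          # want f(x) = x**2 + x

vals = []                                  # vals[g][k]: value of gene g at sample k
for gene in chromosome:
    if gene[0] == "x":
        vals.append(list(xs))
    elif gene[0] == "1":
        vals.append([1.0] * len(xs))
    else:
        a, b = vals[gene[1]], vals[gene[2]]
        op = (lambda u, v: u + v) if gene[0] == "+" else (lambda u, v: u * v)
        vals.append([op(u, v) for u, v in zip(a, b)])

# every gene is a candidate solution; fitness is the best of them
errors = [sum((v - t) ** 2 for v, t in zip(row, target)) for row in vals]
best = min(range(len(errors)), key=errors.__getitem__)
```

Here gene 4 encodes x*x + x and matches the target exactly, while genes 0-3 encode x, 1, x+1 and x*x, illustrating how one chromosome carries several solutions at once at no extra evaluation cost.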
[1249] viXra:2202.0081 [pdf] submitted on 2022-02-14 01:46:16
Authors: Mihai Oltean, Crina Grosan
Comments: NASA/DoD Conference on Evolvable Hardware, 24-26 June, Seattle, Edited by R. Zebulum (et. al), pages 87-90, IEEE Press, NJ, 2004
Multi Expression Programming (MEP) is a Genetic Programming (GP) variant that uses linear chromosomes for solution encoding. A unique MEP feature is its ability to encode multiple solutions of a problem in a single chromosome. These solutions are handled in the same time complexity as in other techniques that encode a single solution in a chromosome. In this paper, MEP is used for evolving digital circuits. MEP is compared to Cartesian Genetic Programming (CGP), a technique widely used for evolving digital circuits, on several well-known problems in the field of electronic circuit design. Numerical experiments show that MEP outperforms CGP for the considered test problems.
Category: Artificial Intelligence
[1248] viXra:2202.0080 [pdf] submitted on 2022-02-14 01:49:14
Authors: Mihai Oltean
Comments: 4 Pages. Proceedings of the 5th International Workshop on Frontiers in Evolutionary Algorithms, The 7th Joint Conference on Information Sciences, September 26-30, 2003, Research Triangle Park, North Carolina, Edited by Ken Chen (et. al), pp. 315-318, 2003.
In this paper, the Multi Expression Programming (MEP) technique is used for solving even-parity problems. Numerical experiments show that MEP outperforms Genetic Programming (GP) by more than one order of magnitude for the considered test cases.
Category: Artificial Intelligence
[1247] viXra:2202.0079 [pdf] submitted on 2022-02-14 01:51:37
Authors: Mihai Oltean
Comments: 36 Pages. chapter 10, Evolvable Machines: Theory and Applications, Springer-Verlag, edited by Nadia Nedjah (et al.), pp. 229-255, 2004
Multi Expression Programming is a Genetic Programming variant that uses a linear representation of individuals. A unique feature of Multi Expression Programming is its ability to store multiple solutions of a problem in a single chromosome. In this paper, we propose and use several techniques for improving the search performed by Multi Expression Programming. Some of the most important improvements are Automatically Defined Functions and sub-symbolic node representation. Several experiments with Multi Expression Programming are performed in this paper. Numerical results show that Multi Expression Programming performs very well for the considered test problems.
Category: Artificial Intelligence
[1246] viXra:2201.0188 [pdf] submitted on 2022-01-26 03:38:42
Authors: Chengkai Guo, Kai Yang
Comments: 9 Pages.
A preliminary concept of AGI for brain-like intelligence is presented in this paper. The solution has two main aspects. Firstly, we combine information entropy and a generative network (GAN-like) model to propose a paradigm of General Intelligent Network (GIN). In the GIN network, the original multimodal information can be encoded as low-information-entropy hidden state representations (HPPs), which can be reverse-parsed by the contextually relevant generative network into observable information. Secondly, we propose a generalized machine learning operating system (GML system), which includes an observable processor (AOP), an HPP storage system, and a multimodal implicit sensing/execution network. Our code will be released at https://github.com/ggsonic/GIN
Category: Artificial Intelligence
[1245] viXra:2201.0177 [pdf] submitted on 2022-01-25 19:40:24
Authors: Manish Bhargav, Satish Kumar Alaria, Manish Kumar Mukhija
Comments: 10 Pages.
Twitter has turned into a rich source of dynamic data among blogging platforms. People post on a wide range of topics, constantly communicate their assumptions, discuss current concerns, and review the products they use in their daily lives on their Twitter walls. The main goal is to assess the emotions expressed in tweets using various machine learning algorithms that classify tweets as positive or negative; if a tweet contains both negative and positive elements, the more dominant component should be chosen as the final label. In tweets, emojis, usernames, and hashtags must be handled and translated into a standard structure, and n-grams such as bigrams and unigrams must be extracted as well. Rather than relying on a single model, which did not give high accuracy, model selection takes precision into account. Organizers of these products have begun to investigate these modest internet journals (blogs) in order to get a general sense of their items, and they frequently monitor and reply to client comments on smaller websites. One challenge is coming up with new ways to recognize and summarize a broad sentiment. Many people have recently been drawn into interpersonal connection platforms such as Facebook, Twitter, and Instagram, and most use social media to convey their feelings, ideas, or assumptions about objects, places, or people. Twitter, a micro-blogging platform, is a massive repository of public opinion on a variety of people, offers, businesses, and products, among other things. The evaluation of public opinion is known as sentiment analysis, and sentiment analysis on Twitter gives valuable context to what is being said there. The wide availability of internet reviews and social media postings provides critical feedback to organizations, helping them improve expert decisions and steer their marketing tactics toward user preferences. As a result, social media plays a key role in shaping the public's perception of the services or products chosen. This study highlights the various tactics utilized for classifying product critiques (which may be in the form of tweets) to determine whether mass behaviour is positive, negative, or neutral, as an analysis of the product market. The information used here comes from our Twitter product reviews, which were used to categorize opinions as satisfying or not.
Category: Artificial Intelligence
[1244] viXra:2201.0144 [pdf] submitted on 2022-01-22 09:08:57
Authors: Dimiter Dobrev
Comments: 119 Pages. Bulgarian language
Artificial Intelligence - What is it, how to do it and what will we do after we do it? This is a PhD thesis.
Category: Artificial Intelligence
[1243] viXra:2201.0094 [pdf] submitted on 2022-01-16 15:17:12
Authors: Jai Sharma, Milind Maiti, Christopher Sun
Comments: 17 Pages.
Cardiovascular disease causes 25% of deaths in America (Heart Disease Facts). Specifically, misdiagnosis of cardiovascular disease results in 11,000 American deaths annually, emphasizing the increasing need for Artificial Intelligence to improve diagnosis. The goal of our research was to determine the probability that a given patient has Cardiovascular Disease using 11 easily accessible objective, examination, and subjective features from a data set of 70,000 people. To do this, we compared various Machine Learning and Deep Learning models. Exploratory Data Analysis (EDA) identified that blood pressure, cholesterol, and age were most correlated with an elevated risk of contracting heart disease. Principal Component Analysis (PCA) was employed to visualize the 11-D data on a 2-D plane, and distinct aggregations in the data motivated the inference of specific cardiovascular conditions beyond the binary labels in the data set. To diagnose patients, several Machine Learning and Deep Learning models were trained on the data and compared using the metrics Binary Accuracy and F1 Score. The initial Deep Learning model was a Shallow Neural Network with 1 hidden layer consisting of 8 hidden units. Further improvements, such as adding 5 hidden layers with 8 hidden units each and employing Mini-Batch Gradient Descent, Adam Optimization, and He Initialization, were successful in decreasing training times. These models were coded without the use of Deep Learning frameworks such as TensorFlow. The final model, which achieved a Binary Accuracy of 74.2% and an F1 Score of 0.73, consisted of 6 hidden layers, each with 128 hidden units, and was built using the highly optimized Keras library. While current industrial models require hundreds of comprehensive features, this final model requires only basic inputs, allowing versatile applications in rural locations and third-world countries.
Furthermore, the model can forecast demand for medical equipment, improve diagnosis procedures, and provide detailed personalized health statistics.
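A minimal pure-Python sketch of the shallow starting point described above (11 input features, one hidden layer of 8 ReLU units, He initialization); the averaging output head and all names here are illustrative assumptions, not the paper's code:

```python
import math
import random

def he_layer(n_in, n_out, rng):
    """Weights drawn from N(0, sqrt(2/n_in)): He initialization for ReLU layers."""
    std = math.sqrt(2.0 / n_in)
    return [[rng.gauss(0.0, std) for _ in range(n_in)] for _ in range(n_out)]

def forward(x, weights):
    """One hidden layer with ReLU, then a toy sigmoid output head (untrained)."""
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in weights]
    z = sum(hidden) / len(hidden)       # simplified output head
    return 1.0 / (1.0 + math.exp(-z))   # probability of disease

rng = random.Random(0)
w = he_layer(n_in=11, n_out=8, rng=rng)  # 11 patient features -> 8 hidden units
p = forward([0.5] * 11, w)               # p lies strictly in (0, 1)
```

Training (mini-batch gradient descent, Adam) would then adjust `w` to minimize a binary cross-entropy loss; only the initialization and forward pass are sketched here.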
Category: Artificial Intelligence
[1242] viXra:2112.0155 [pdf] submitted on 2021-12-29 02:21:06
Authors: Jonathan Lee
Comments: 4 Pages. Thanks
Due to the high volatility brought on by the COVID-19 pandemic, interest in stock investment has surged, and attention is said to be shifting back from the cryptocurrency market to the domestic stock market. In this situation, we looked at which model could more accurately predict the closing price.
Category: Artificial Intelligence
[1241] viXra:2112.0135 [pdf] submitted on 2021-12-26 21:08:14
Authors: Ait-Taleb Nabil
Comments: 22 Pages.
In this paper we propose a directed dependency graph obtained from a correlation matrix. The graph includes a probabilistic causal sub-model for each node, modeled by conditioning percentages. The directed dependency graph is obtained using the highest successive conditionings method, with a conditioning-percentage threshold that must be exceeded.
Category: Artificial Intelligence
[1240] viXra:2112.0130 [pdf] submitted on 2021-12-24 04:23:06
Authors: J Gerard Wolff
Comments: 44 Pages.
The "SP Challenge" is the deliberately provocative theme of this paper: that the "SP System" (SPS), meaning the "SP Theory of Intelligence" and its realisation in the "SP Computer Model", is more promising as a foundation for the development of human-level broad AI, aka 'artificial general intelligence' (AGI), than any alternative. In that connection, the main strengths of the SPS are: 1) The adoption of a top-down, breadth-first research strategy with wide scope; 2) Recognition of the importance of information compression (IC) in human learning, perception, and cognition -- and, correspondingly, a central role for IC in the SPS; 3) The working hypothesis that all kinds of IC may be understood in terms of the matching and unification of patterns (ICMUP); 4) A resolution of the apparent paradox that IC may achieve decompression as well as compression; 5) The powerful concept of SP-multiple-alignment, a generalisation of six other variants of ICMUP; 6) The clear potential of the SPS to solve 19 problems in AI research; 7) Strengths and potential of the SPS in modelling several aspects of intelligence, including several kinds of probabilistic reasoning, versatility in the representation and processing of AI-related knowledge, and the seamless integration of diverse aspects of intelligence, and diverse kinds of knowledge, in any combination; 8) Several other potential benefits and applications of the SPS; 9) In "SP-Neural", abstract concepts in the SPS may be mapped into putative structures expressed in terms of neurons and their interconnections and intercommunications; 10) The concept of ICMUP provides an entirely novel perspective on the foundations of mathematics; 11) How to make generalisations from data, including the correction of over- and under-generalisations, and how to reduce or eliminate errors in data. There is discussion of how the SPS compares with some other potential candidates for the SP Challenge, and an outline of possible future directions for the research.
Category: Artificial Intelligence
[1239] viXra:2112.0126 [pdf] submitted on 2021-12-23 04:31:07
Authors: Xuan Zhao, Huizi Cui, Zilong Xiao, Bingyi Kang
Comments: 26 Pages.
How to deal with conflict is a significant issue in Dempster-Shafer evidence theory (DST): in the Dempster combination rule, conflicts can produce counter-intuitive results, and many conflict-handling methods have therefore been proposed. This paper proposes a new framework for reducing conflict based on principal component analysis and relatively similar transformation (PCARST), which better reduces the impact of conflicting evidence on the results and yields more reasonable results than existing methods. The main characteristic features of the basic probability assignments (BPAs) are maintained, while the conflicting evidence is treated as a noise signal to be weakened. A numerical example is used to illustrate the effectiveness of the proposed method. Results show that a higher belief degree for the correct proposition is obtained compared with previous methods.
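For context, the counter-intuitive behaviour of the classic Dempster combination rule under high conflict can be reproduced in a few lines. This sketch is a standard textbook formulation (Zadeh's example), not the PCARST method itself:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule over mass functions keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0  # total mass on empty intersections (the conflict K)
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    # normalise the surviving mass by 1 - K; a K near 1 signals strong conflict
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

# Zadeh's classic example: two sources almost fully disagree
m1 = {frozenset("a"): 0.99, frozenset("b"): 0.01}
m2 = {frozenset("c"): 0.99, frozenset("b"): 0.01}
m, k = dempster_combine(m1, m2)
# k = 0.9999, and ALL remaining belief lands on {'b'}, which both
# sources considered nearly impossible -- the counter-intuitive result
```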
Category: Artificial Intelligence
[1238] viXra:2112.0122 [pdf] submitted on 2021-12-22 03:25:27
Authors: Kasper van Maasdam
Comments: 31 Pages.
Artificial neural networks are important in everyday life and are becoming more widespread. For this reason, it is crucial they are understood and tested. This paper tests and compares two training methods: reinforcement learning with backpropagation and an evolutionary method. The hypothesis is that the training method using backpropagation and reinforcement learning is more efficient in training a neural network to play a game than a model trained with the evolutionary algorithm. However, the model trained with backpropagation and reinforcement learning will have lower performance than a model trained with the evolutionary algorithm. To research the hypothesis, a feedforward neural network and how it works must first be explained.
Neural networks are systems inspired by the biological brain that enable a computer to predict, model, and classify data, among other applications. They do all this by learning from a set of training data to find general relations that can be applied to unseen data. A neural network model is essentially a function with potentially thousands of parameters. Just like any other function, input values are provided and the output is calculated from them. In a feedforward neural network, this process is called feedforward.
The process of feedforward is meaningless with a model that has not yet been configured to do anything. A neural network must first be taught to perform a certain task. This is what is accomplished with machine learning. Backpropagation is an example of a machine learning method. For backpropagation two things are required: the input and the corresponding output. Backpropagation will adjust the parameters of a model so the next time the same input is provided, the output will be closer to the desired output. This is called optimisation.
Reinforcement learning is a way to teach a neural network by giving it positive reinforcement when it does something good and negative reinforcement when it does something bad. This is used when no desired output is known so backpropagation cannot directly be applied.
An evolutionary algorithm is much more intuitive than backpropagation. It is the imitation of natural selection in biology, but with self-determined factors deciding the fitness of a model. When training a neural network with an evolutionary algorithm, a large group of random models will be generated, all performing the same task. Some models, however, will be better suited for this task than others. How well they are suited to their environment is their fitness. This will be the determining factor of who survives and can therefore reproduce and create mutated offspring. This process is repeated as many times as required to reach the desired performance.
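The evolutionary loop just described (generate a population, rank by fitness, let the fittest survive and produce mutated offspring) can be sketched as follows; the population size, mutation scale, and toy one-parameter fitness function are illustrative assumptions:

```python
import random

def evolve(fitness, pop_size=30, generations=60, sigma=0.3, elite=5, seed=0):
    """Minimal evolutionary loop: keep the fittest, mutate them to refill."""
    rng = random.Random(seed)
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)   # rank by fitness
        survivors = population[:elite]               # natural selection
        population = survivors + [                   # mutated offspring
            rng.choice(survivors) + rng.gauss(0, sigma)
            for _ in range(pop_size - elite)
        ]
    return max(population, key=fitness)

# toy "environment": fitness peaks at x = 3
best = evolve(lambda x: -(x - 3) ** 2)
```

In the paper's setting the individuals would be full neural-network parameter vectors and the fitness would be the game score, but the selection/mutation cycle is the same.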
The hypothesis of this paper has been proven wrong. Neural networks trained with an evolutionary algorithm do end up performing at a higher level than models trained with reinforcement learning and backpropagation. However, neural networks trained with an evolutionary algorithm are also more efficient, both in the number of cycles needed to reach the same performance and in the time required.
Category: Artificial Intelligence
[1237] viXra:2112.0097 [pdf] submitted on 2021-12-18 17:03:00
Authors: Philip Naveen
Comments: 8 Pages. Written at Godwin High School
Deep-learning models estimate values using backpropagation. The activation function within hidden layers is a critical component in minimizing loss in deep neural networks. Rectified Linear (ReLU) has been the dominant activation function for the past decade. Swish and Mish are newer activation functions that have been shown to yield better results than ReLU in specific circumstances. Phish is a novel activation function proposed here. It is a composite function defined as f(x) = x·TanH(GELU(x)), with no discontinuities apparent in the differentiated graph on the domain observed. Four generalized networks were constructed using Phish, Swish, Sigmoid, and TanH, with SoftMax as the output function. Using images from the MNIST and CIFAR-10 datasets, these networks were trained to minimize sparse categorical crossentropy. A large-scale cross-validation was simulated using stochastic Markov chains to account for the law of large numbers for the probability values. Statistical tests support the research hypothesis that Phish could outperform other activation functions in classification. Future experiments would involve testing Phish in unsupervised learning algorithms and comparing it to more activation functions.
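Since the abstract gives the closed form f(x) = x·TanH(GELU(x)), Phish is easy to reproduce. This sketch uses the common tanh approximation of GELU, which may differ slightly from the exact Gaussian form the paper uses:

```python
import math

def gelu(x):
    """tanh approximation of the Gaussian Error Linear Unit."""
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))

def phish(x):
    """Phish activation: f(x) = x * tanh(GELU(x))."""
    return x * math.tanh(gelu(x))
```

Like Swish and Mish, the function is smooth, passes through the origin, and behaves nearly linearly for large positive inputs (phish(2) is about 1.92).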
Category: Artificial Intelligence
[1236] viXra:2112.0095 [pdf] submitted on 2021-12-17 20:54:35
Authors: Long Yu, ZhiCong Luo, Deng Lin, HuanYong Liu, YaFeng Deng
Comments: 6 Pages.
Knowledge representation is a classic problem in knowledge graphs, and distance-based models have made great progress on it. The most significant recent developments in this direction are RotatE and PairRE, which express relationships as projections of nodes, whereas the TransX series of models (TransE, TransH, TransR) express relationships as translations of nodes. To date, the combination of projection and translation has received scant attention in the research literature. Hence, we propose TripleRE, a method which models relationships by both projections and translations. Compared with the original distance-based knowledge representation models, results on the ogbl-wikikg2 dataset are significantly improved.
Category: Artificial Intelligence
[1235] viXra:2112.0012 [pdf] submitted on 2021-12-02 03:27:08
Authors: Ji Yoon Kim
Comments: 4 Pages.
Accurate calculation of the commute cost is crucial for the government to decide whether housing subsidy will be provided to disadvantaged workers, or to create a new method that can reduce the commute cost of the disadvantaged workers by offering mass transit. Many studies have already proven that machine learning can predict traffic and commute times. Although different machine learning algorithms can be used, this study mainly uses Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), which are based on the Recurrent Neural Networks (RNNs) architecture.
Category: Artificial Intelligence
[1234] viXra:2111.0172 [pdf] submitted on 2021-11-30 05:08:24
Authors: Mihai Oltean
Comments: 170 Pages.
Automatic Programming is one of the most important areas of computer science research today. Hardware speed and capability have increased exponentially, but software is years behind. The demand for software has also increased significantly, but it is still written the old-fashioned way: by humans.
There are multiple problems when the work is done by humans: cost, time, quality. It is costly to pay humans, it is hard to keep them satisfied for a long time, it takes a lot of time to teach and train them and the quality of their output is in most cases low (in software, mostly due to bugs).
The real advances in human civilization appeared during the industrial revolutions. Before the first revolution, most people worked in agriculture. Today, only a small percentage of people work in that field.
A similar revolution must take place in computer programming. Otherwise, we will have as many people working in this field as once worked in agriculture.
How do people learn to write computer programs? Very simply: by learning. Can we do the same for software? Can we teach software to write software?
It seems that this is possible (to some degree), and the technique is called Machine Learning. The term was coined in 1959 by Arthur Samuel, the first person to make a computer perform a serious learning task.
However, things are not as easy as they are for humans (well, truth be told, for some humans it is impossible to learn how to write software). So far we do not have software that can learn perfectly how to write software. We have some particular cases where programs do better than humans, but the examples are sporadic at best. Learning from experience is difficult for computer programs. Instead of trying to simulate how humans teach humans to write computer programs, we can simulate nature.
Category: Artificial Intelligence
[1233] viXra:2111.0171 [pdf] submitted on 2021-11-30 05:11:44
Authors: Mihai Oltean, D. Dumitrescu
Comments: 28 Pages. Technical Report, Babes-Bolyai Univ. 2002
Multi Expression Programming (MEP) is a new evolutionary paradigm intended for solving computationally difficult problems. MEP individuals are linear entities that encode complex computer programs. MEP chromosomes are represented in the same way that C or Pascal compilers translate mathematical expressions into machine code. MEP is used for solving difficult problems such as symbolic regression and game strategy discovery. MEP is compared with Gene Expression Programming (GEP) and Cartesian Genetic Programming (CGP) using several well-known test problems. For the considered problems MEP outperforms GEP and CGP; for these examples MEP is two orders of magnitude better than CGP.
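A toy illustration of the compiler analogy above: a linear MEP chromosome encodes several expressions at once, with later genes referencing the values of earlier genes, exactly like sub-expressions in compiled machine code. The gene encoding used here is a simplified assumption, not the paper's exact representation:

```python
def evaluate_mep(chromosome, inputs):
    """Evaluate every gene of a linear MEP chromosome.

    A gene is either a variable name (a terminal) or a tuple
    (op, i, j) whose operands are the values of earlier genes.
    Every gene is itself a candidate expression of the individual.
    """
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    values = []
    for gene in chromosome:
        if isinstance(gene, tuple):
            op, i, j = gene
            values.append(ops[op](values[i], values[j]))
        else:
            values.append(inputs[gene])
    return values

# encodes, in order: x, y, x+y, (x+y)*y
chrom = ["x", "y", ("+", 0, 1), ("*", 2, 1)]
vals = evaluate_mep(chrom, {"x": 2, "y": 3})  # [2, 3, 5, 15]
```

In MEP proper, the fitness of the individual is taken from its best-encoded expression, so one chromosome explores several programs per evaluation.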
Category: Artificial Intelligence
[1232] viXra:2111.0170 [pdf] submitted on 2021-11-30 06:38:09
Authors: Victor V. Senkevich
Comments: 9 Pages.
As is known, AGI (Artificial General Intelligence), unlike AI, must operate with meanings, and that is what distinguishes it from AI. Successful AI implementations (playing chess, unmanned driving, face recognition, etc.) do not operate with the meanings of the objects they process and do not recognize meaning; they do not need to. But for AGI, which emulates human thinking, this ability is crucial. Numerous attempts to define the concept of "meaning" share one significant drawback: the definitions are not strict or formalized, so they cannot be programmed. The meaning-search procedure should use a formalized description of its existence and the possible forms of its perception. For the practical implementation of AGI, it is necessary to develop such "ready-to-code" descriptions in the context of their use for processing the related cognitive concepts of "meaning" and "knowledge".
An attempt to formalize the definition of such concepts is made in this article.
Category: Artificial Intelligence
[1231] viXra:2111.0169 [pdf] submitted on 2021-11-30 07:15:04
Authors: Mihai Oltean, Crina Grosan
Comments: 8 Pages. The 7th European Conference on Artificial Life, September 14-17, 2003, Dortmund, Edited by W. Banzhaf (et al), LNAI 2801, pp. 651-658, Springer-Verlag, Berlin, 2003.
Finding the optimal parameter setting (i.e. the optimal population size, mutation probability, evolutionary model, etc.) for an Evolutionary Algorithm (EA) is a difficult task. Instead of evolving only the parameters of the algorithm, we evolve an entire EA capable of solving a particular problem. For this purpose the Multi Expression Programming (MEP) technique is used. Each MEP chromosome encodes multiple EAs. A non-generational EA for function optimization is evolved in this paper. Numerical experiments show the effectiveness of this approach.
Category: Artificial Intelligence
[1230] viXra:2111.0161 [pdf] submitted on 2021-11-29 20:00:15
Authors: B. Hamdi, A. Nouainia, T. Aguili, H. Baudrand
Comments: 6 Pages.
This paper proposes a new formulation, based on the method of moments combined with the generalized equivalent circuit (MoM-GEC), to study a beamforming application for coupled periodic and quasi-periodic planar antenna arrays. Numerous voltage designs are used to show the adequacy and reliability of the proposed approach. The radiators are treated as planar dipoles, so mutual coupling effects are taken into account. The proposed array shows a noticeable improvement over existing structures in terms of size, 3-D scanning, directivity, side-lobe-level (SLL) reduction, and half-power beamwidth (HPBW). The results verify that multilayer feed-forward neural networks are robust and can handle complex antenna problems. Moreover, an artificial neural network (ANN) can quickly produce optimization and synthesis results by generalizing with an early-stopping method; this technique for improving generalization yields significant savings in running time and memory. Simulations are carried out using MATLAB, and several simulation examples are shown to validate this work.
Category: Artificial Intelligence
[1229] viXra:2111.0080 [pdf] submitted on 2021-11-16 13:05:33
Authors: Jeongik Cho
Comments: 4 Pages.
In Wasserstein GAN, it is important to regularize the discriminator so that it has a small Lipschitz constant. In this paper, I introduce discriminator variance regularization, which regularizes the variance of the discriminator's output to be small when the input is drawn from the real data distribution or the generated data distribution. Intuitively, a low variance of the discriminator output implies that the discriminator is more likely to have a low Lipschitz constant. Discriminator variance regularization does not explicitly constrain the Lipschitz constant through differentiation of the discriminator, but it lowers the probability that the Lipschitz constant is high. It is used in Wasserstein GAN together with R1 regularization, which suppresses oscillation in GAN training, and it requires very little additional computation.
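A rough sketch of the proposed penalty, assuming a plain WGAN critic loss and a weighting factor `lam` (both the loss shape and the names are assumptions, not taken from the paper):

```python
def variance(xs):
    """Population variance of a batch of discriminator outputs."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def d_loss_with_var_reg(d_real, d_fake, lam=1.0):
    """WGAN critic loss plus the variance penalty on both batches.

    d_real / d_fake are lists of discriminator outputs on real and
    generated samples; lam weights the regularizer.
    """
    wgan_loss = sum(d_fake) / len(d_fake) - sum(d_real) / len(d_real)
    return wgan_loss + lam * (variance(d_real) + variance(d_fake))
```

Note the penalty needs only the outputs the critic already computes, which matches the claim that it adds very little computation (no extra gradient penalty pass).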
Category: Artificial Intelligence
[1228] viXra:2111.0069 [pdf] submitted on 2021-11-15 19:53:00
Authors: Xingyue Yang, Xuan Zhao, Bingyi Kang
Comments: 23 Pages.
This paper proposes a new method of measuring the distance between conflicting ordered sets, quantifying the similarity between focal elements as well as their size. The method can effectively measure the conflict of belief functions on an ordered set without saturating when focal elements do not overlap. It is proven that the method satisfies the properties of a distance. Examples from engineering budgeting and sensors show that the distance effectively measures the conflict between ordered sets; comparison with existing methods shows that the proposed distance reflects the information in ordered sets more comprehensively, and that the resulting conflict metric is more robust and accurate.
Category: Artificial Intelligence
[1227] viXra:2111.0065 [pdf] submitted on 2021-11-13 09:37:33
Authors: Bora King
Comments: 7 Pages.
Robotic autonomy is key to the expansion of robotic applications. This paper reviews the success of robotic autonomy in industrial applications, as well as the requirements and challenges of extending robotic autonomy to fields that need it, such as education, medical service, and home service. Through these discussions, the paper draws the conclusion that robotic intelligence is the bottleneck for the broad application of robotic technology.
Category: Artificial Intelligence
[1226] viXra:2111.0060 [pdf] submitted on 2021-11-14 14:57:39
Authors: Tatsuhiko Yamato
Comments: 7 Pages.
XGBoost has the best forecasting performance among non-deep-learning methods. However, while it works well for interpolation problems and regression, it does not work well for future forecasting of time series data that requires extrapolation. I think it is difficult to avoid this tendency even by adding explanatory variables describing the background of the data. Possible explanatory variables include lags of a day or several days, months, days, days of the week, holidays, and so on. The increase or decrease in data values due to these factors is quite plausible, and they can serve as explanatory variables. However, even if you do this, you will not be able to capture the trend.
Category: Artificial Intelligence
[1225] viXra:2111.0035 [pdf] submitted on 2021-11-04 23:26:24
Authors: Jun Jin
Comments: 2 Pages.
Hyperparameter optimization is widely used in AI. A hyperparameter is a value that controls the whole learning process but cannot itself be learned or tuned during training. Hyperparameters are very important because they greatly affect the learning result: a good hyperparameter set can lead to a much better result or much less training time, whereas a bad one often ends in a local optimum or even fails to converge.
Hyperparameters come in many different types: they may belong to the model itself (depth, node counts, etc.) or to the algorithm (learning rate, optimizer, etc.). Different models or algorithms usually need different hyperparameters, and even the same model or algorithm can use different hyperparameters to achieve better results. Some hyperparameters are categorical, meaning the value can only be chosen from a fixed set. For this special kind of hyperparameter we propose a common optimization method: we turn the categorical problem into a real-valued search space to achieve a better result.
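The categorical-to-real mapping described above might look like the following sketch; the uniform bucketing scheme is an assumption, not necessarily the paper's exact encoding:

```python
def decode_categorical(u, choices):
    """Map a continuous search value u in [0, 1) onto one category.

    This lets a real-valued optimizer tune a categorical hyperparameter
    by searching over u instead of over the discrete set directly.
    """
    index = min(int(u * len(choices)), len(choices) - 1)
    return choices[index]

optimizers = ["sgd", "adam", "rmsprop"]
# u in [0, 1/3) -> sgd, [1/3, 2/3) -> adam, [2/3, 1) -> rmsprop
```

Any continuous optimizer (Bayesian optimization, CMA-ES, random search over reals) can then treat the categorical choice as just another real dimension.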
Category: Artificial Intelligence
[1224] viXra:2111.0015 [pdf] submitted on 2021-11-02 20:44:50
Authors: Jianqin Zhou, Sichun Yang, Xifeng Wang, Wanquan Liu
Comments: 12 Pages.
The emergence of Formal Concept Analysis (FCA) as a data analysis technique has increased the need for algorithms which can compute formal concepts quickly. The current efficient algorithms for FCA are variants of the Close-By-One (CbO) algorithm, such as In-Close2, In-Close3 and In-Close4, which are all based on horizontal storage of contexts. In this paper, building on In-Close4, a new algorithm based on vertical storage of contexts, called In-Close5, is proposed, which can significantly reduce both the time complexity and the space complexity of In-Close4. Technically, the new algorithm stores both the context and the extent of a concept as vertical bit-arrays, whereas In-Close4 stores the context only as a horizontal bit-array, which is very slow at finding the intersection of two extent sets. Experimental results demonstrate that the proposed algorithm is much more effective than In-Close4, and it also has a broader scope of applicability in computing formal concepts, solving problems that cannot be solved by the In-Close4 algorithm.
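The speed argument rests on vertical bit-arrays turning extent intersection into a single bitwise operation. A sketch using Python integers as bit-arrays (illustrative only, not the In-Close5 implementation):

```python
def column_bitmask(context, attribute):
    """Pack one attribute column of a binary context into an int bitmask."""
    mask = 0
    for row, attributes in enumerate(context):
        if attribute in attributes:
            mask |= 1 << row
    return mask

# toy formal context: each row lists the attributes an object has
context = [{"a", "b"}, {"b", "c"}, {"a", "b", "c"}]
ext_a = column_bitmask(context, "a")  # objects having attribute a: 0b101
ext_b = column_bitmask(context, "b")  # objects having attribute b: 0b111
common = ext_a & ext_b                # extent intersection in one AND
```

With horizontal storage, the same intersection requires iterating over object rows; with vertical columns it is one word-parallel AND per machine word.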
Category: Artificial Intelligence
[1223] viXra:2111.0014 [pdf] submitted on 2021-11-02 20:46:18
Authors: Jianqin Zhou, Sichun Yang, Xifeng Wang, Wanquan Liu
Comments: 17 Pages.
Concise granule descriptions for describable granules and approaching description methods for indescribable granules are challenging and important issues in granular computing. The concept with only common attributes has been frequently studied. To investigate the granules with some special needs, we propose two new types of compound concepts in this paper: bipolar concept and common-and-necessary concept. Based on the definitions of concept-forming operations, the logical formulas are derived for each of the following types of concepts: formal concept, three-way concept, object oriented concept, bipolar concept and common-and-necessary concept. Furthermore, by utilizing the logical relationship among various concepts, we have derived concise and unified equivalent conditions for describable granules and approaching description methods for indescribable granules for all five kinds of concepts.
Category: Artificial Intelligence
[1222] viXra:2110.0138 [pdf] submitted on 2021-10-23 19:28:00
Authors: Yan Li, Chenchen Lin, Huizi Cui, Bingyi Kang
Comments: 46 Pages. [Corrections to title made by viXra Admin]
The classic Dempster combination rule may produce illogical results when combining highly conflicting evidence; how to deal with such evidence and obtain a reasonable result is critical. Modifying the evidence according to the importance of each piece of evidence (e.g. via a similarity matrix) is one significant strategy. However, the dispersion of evidence similarity is rarely taken into consideration, even though it is an important feature for distinguishing conflicting evidence from normal evidence. In this paper, a new method based on the similarity matrix and the dispersion of evidence similarity is proposed to evaluate the importance of evidence in Dempster-Shafer theory (DST). The proposed method weakens the influence of conflicting evidence. Its robustness is verified through sensitivity analysis of changes in the degree of conflict and in the amount of credible evidence. Several numerical examples show the effectiveness of the proposed method.
Category: Artificial Intelligence
[1221] viXra:2110.0085 [pdf] submitted on 2021-10-17 15:51:55
Authors: Kai Gangi
Comments: 5 Pages.
Automating steps of the animation production process using AI-based tools would ease the workload of Japanese animators. Although there have been recent advances in the automatic animation of still images, the majority of these models have been trained on human data and thus are tailored to images of humans. In this work, I propose a semi-automatic and scalable assembling pipeline to create a large-scale dataset containing clips of anime characters’ faces. Using this assembling strategy, I create AniVid, a novel anime video dataset consisting of 34,221 video clips. I then use a transfer learning approach to train a first order motion model (FOMM) on a portion of AniVid, which effectively animates still images of anime characters. Extensive experiments and quantitative results show that FOMM trained on AniVid outperforms other trained versions of FOMM when evaluated on my test set of anime videos.
Category: Artificial Intelligence
[1220] viXra:2110.0055 [pdf] submitted on 2021-10-12 09:24:46
Authors: Abdurrahim Yilmaz, Mucahit Kalebasi, Yegor Samoylenko, Mehmet Erhan Guvenilir, Huseyin Uvet
Comments: 4 page for manuscript with 3 page supplementary that includes ROC curves of models.
Skin cancer is one of the deadliest and most common types of cancer in the world, and there has recently been a large jump in the rate of people developing it. For this reason, the number of studies on skin cancer classification with deep learning is increasing day by day. To foster work in this area, the International Skin Imaging Collaboration (ISIC) organization was established and created an open dataset archive. In this study, images were taken from the ISIC 2017 Challenge. The skin cancer images were preprocessed and augmented, then trained with a transfer learning and fine-tuning approach to create deep learning models. Three different mobile deep learning models and three different batch-size values were used, for a total of nine models. Among these, the NASNetMobile model with batch size 16 achieved the best result: an accuracy of 82.00%, a precision of 81.77%, and an F1 score of 0.8038. Our aim is to benchmark mobile deep learning models, which have few parameters, and compare their results.
Category: Artificial Intelligence
[1219] viXra:2110.0036 [pdf] submitted on 2021-10-08 14:05:29
Authors: Ait-Taleb Nabil
Comments: 29 Pages.
In this paper, we propose a directed dependency graph learned from a continuous data matrix, in order to extract the hidden oriented dependencies from this matrix. To each node of the dependency graph we assign a random variable, as well as a conditioning percentage linking parent and child nodes of the graph. Among all the dependency graphs learned from the continuous data matrix, we choose the one given by the highest successive conditionings method.
Category: Artificial Intelligence
[1218] viXra:2110.0030 [pdf] submitted on 2021-10-07 21:49:52
Authors: Saarang Srinivasan
Comments: 18 Pages. [Corrections made by viXra Admin to conform with scholarly norm]
The aim of this project is to detect motion in a video and follow it. The program uses background elimination and contour detection to find the moving objects in the video, determines in which direction to move in order to follow the motion, and moves the camera in that direction.
Category: Artificial Intelligence
[1217] viXra:2110.0026 [pdf] submitted on 2021-10-06 05:44:45
Authors: Amey Thakur, Mega Satish
Comments: 4 pages, 4 figures, Volume 8, Issue 9, International Research Journal of Engineering and Technology (IRJET), 2021.
We propose to implement a house price prediction model of Bangalore, India. It’s a Machine Learning model which integrates Data Science and Web Development. We have deployed the app on the Heroku Cloud Application Platform. Housing prices fluctuate on a daily basis and are sometimes exaggerated rather than based on worth. The major focus of this project is on predicting home prices using genuine factors. Here, we intend to base an evaluation on every basic criterion that is taken into account when establishing the pricing. The goal of this project is to learn Python and get experience in Data Analytics, Machine Learning, and AI.
Category: Artificial Intelligence
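As a minimal stand-in for the kind of model the abstract describes, here is a one-feature linear regression fitted by gradient descent. The feature, units, and data are hypothetical; the actual project uses many more criteria and a full ML stack before deployment:

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Gradient-descent fit of price = w * size + b (mean squared error)."""
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        dw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) * 2 / n
        db = sum((w * x + b - y) for x, y in zip(xs, ys)) * 2 / n
        w -= lr * dw
        b -= lr * db
    return w, b

# hypothetical data: size in 1000 sq ft vs price in lakhs INR
sizes = [1.0, 1.5, 2.0, 2.5, 3.0]
prices = [50, 72, 95, 118, 140]       # roughly 45 * size + 5
w, b = fit_line(sizes, prices)
print(round(w * 2.2 + b))             # → 104 (price for a 2200 sq ft house)
```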
[1216] viXra:2109.0220 [pdf] submitted on 2021-09-30 01:04:38
Authors: Prudhvi Parne
Comments: 6 Pages.
Financial services are the economic backbone of any nation in the world. Billions of financial transactions take place, and all this data is stored and can be considered a gold mine of data for many different organizations. No human intelligence can dig through this amount of data to come up with something valuable. This is why financial organizations are employing artificial intelligence to develop new algorithms that can change the way financial transactions are carried out. Artificial intelligence can complete the task in a very short period, and it can be used to detect fraud, identify possible attacks, and spot any other kind of anomaly that may be detrimental to the institution. This paper discusses the role of artificial intelligence and machine learning in the finance sector.
Category: Artificial Intelligence
[1215] viXra:2109.0203 [pdf] submitted on 2021-09-28 19:31:25
Authors: Matthew Groom
Comments: 5 Pages. [Corrections made by viXra Admin to conform with scholarly norm]
This is going to be one strange and yet rewarding paper for everyone. It consists of two parts. 1. The Rapture is here. 2. I also provide a proof of our inner-self duality and answer the other question everyone wants to know about the self: what makes you, you. This is what every AI researcher has requested.
Category: Artificial Intelligence
[1214] viXra:2109.0200 [pdf] submitted on 2021-09-28 19:13:38
Authors: Murat Koklu, Ilkay Cinar, Yavuz Selim Taspinar
Comments: 8 Pages.
Rice, which is among the most widely produced grain products worldwide, has many genetic varieties. These varieties are distinguished from one another by some of their features, usually texture, shape, and color. With these features that distinguish rice varieties, it is possible to classify and evaluate the quality of seeds. In this study, Arborio, Basmati, Ipsala, Jasmine and Karacadag, five different varieties of rice often grown in Turkey, were used. A total of 75,000 grain images, 15,000 from each of these varieties, are included in the dataset. A second dataset with 106 features, comprising 12 morphological, 4 shape and 90 color features obtained from these images, was also used. Models were created using Artificial Neural Network (ANN) and Deep Neural Network (DNN) algorithms for the feature dataset and the Convolutional Neural Network (CNN) algorithm for the image dataset, and classification was performed. Sensitivity, specificity, precision, F1 score, accuracy, false positive rate and false negative rate were calculated from the confusion matrix values of the models, and the results of each model are given in tables. Classification successes of 99.87% for ANN, 99.95% for DNN and 100% for CNN were achieved. These results show that the models used in the study can be applied successfully to the classification of rice varieties.
Category: Artificial Intelligence
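All of the metrics this study reports can be derived from confusion-matrix counts. A sketch, using hypothetical one-vs-rest counts for a single rice variety (not the paper's actual figures):

```python
def metrics(tp, fp, fn, tn):
    """Standard evaluation metrics from one class's confusion-matrix
    counts (true/false positives and negatives, one-vs-rest)."""
    sensitivity = tp / (tp + fn)          # a.k.a. recall
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    fpr = fp / (fp + tn)                  # false positive rate
    fnr = fn / (fn + tp)                  # false negative rate
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, accuracy=accuracy,
                f1=f1, fpr=fpr, fnr=fnr)

# hypothetical counts for one variety (15,000 true grains, 60,000 others)
m = metrics(tp=14980, fp=25, fn=20, tn=59975)
print(f"{m['f1']:.4f}")  # → 0.9985
```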
[1213] viXra:2109.0124 [pdf] submitted on 2021-09-13 10:29:37
Authors: J Gerard Wolff
Comments: 15 Pages.
Three problems in learning knowledge for self-driving vehicles are: how a finite sample of information about driving, N, can yield an ability to deal with the infinity of possible driving situations; the problem of generalising from N without over- or under-generalisation; and how to weed out errors in N. A theory developed with computer models to explain a child's learning of his or her first language, now incorporated in the SP System, suggests: compress N as much as possible by a process that creates a grammar, G, and an encoding of N in terms of G, called E. Then discard E, which contains all or most of the errors in N, and retain G, which solves the first two problems.
Category: Artificial Intelligence
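The compress-then-discard idea (build a grammar G, encode N as E, keep G and drop E together with most of N's errors) can be illustrated with a deliberately tiny sketch. This is not the SP System's actual procedure, only the intuition: frequent patterns end up in G, while rare strings such as typos gain nothing from G and stay verbose in E.

```python
from collections import Counter

def learn_grammar(samples):
    """Toy compression: the most frequent word pair in the sample N
    becomes a rule of the grammar G, and the encoding E rewrites each
    sentence using that rule."""
    pairs = Counter()
    for s in samples:
        words = s.split()
        pairs.update(zip(words, words[1:]))
    (a, b), _ = pairs.most_common(1)[0]
    grammar = {"R1": f"{a} {b}"}
    encoding = [s.replace(f"{a} {b}", "R1") for s in samples]
    return grammar, encoding

n = ["the car stops", "the car turns", "the car waits", "teh car stopz"]
g, e = learn_grammar(n)
print(g)     # {'R1': 'the car'}
print(e[3])  # the typo sentence stays verbose: 'teh car stopz'
```

Discarding E here discards the misspelt sentence entirely, while G retains the regular pattern "the car".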
[1212] viXra:2109.0110 [pdf] submitted on 2021-09-09 22:16:02
Authors: Yew Kee Wong
Comments: 7 Pages. AIAA CONFERENCE 2021 (NOV 2021), DUBAI, UAE
Online learning is the emerging technique in education during the COVID-19 pandemic period. Traditional learning is a complex process, as learning patterns, approaches, skills and performance vary from person to person. Adaptive online learning focuses on understanding the learner's performance and skills and adapting to them. The use of advanced technology also provides a means to analyse the behavioural learning pattern: it provides detailed skill mapping and performance data, which enable the learner to understand the areas that need to be improved. The information can also be used by assessors to improve the teaching approach. Advanced online learning systems using artificial intelligence are an emerging concept for the coming years. In this new concept, classes are not taken face-to-face in a classroom but through an electronic medium as a substitute. This virtual learning approach is gaining importance every day, and very soon it is going to be an integral part of our world. Taking up such learning through an electronic medium is termed online learning. We propose two new models powered by artificial intelligence (AI) tools, and a number of examples of using these new models are presented.
Category: Artificial Intelligence
[1211] viXra:2109.0109 [pdf] submitted on 2021-09-09 22:17:57
Authors: Yew Kee Wong
Comments: 7 Pages. ACITY CONFERENCE 2021 (NOV 2021), DUBAI, UAE
In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques to big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision-making domains.
Category: Artificial Intelligence
[1210] viXra:2109.0108 [pdf] submitted on 2021-09-09 22:19:40
Authors: Yew Kee Wong
Comments: 7 Pages. SCAI CONFERENCE 2021 (NOV 2021), ZURICH, SWITZERLAND
Artificial intelligence has been a buzzword impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, and thus on all efforts exerted towards sustainable development. In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets for different industries and business operations. Numerous use cases have shown that AI can ensure an effective supply of information to citizens, users and customers in times of crisis. This paper aims to analyse some of the different methods and scenarios which can be applied to AI and big data, as well as the opportunities provided by their application in various business operations and crisis management domains.
Category: Artificial Intelligence
[1209] viXra:2109.0107 [pdf] submitted on 2021-09-09 22:21:06
Authors: Yew Kee Wong
Comments: 6 Pages. BIOM CONFERENCE 2021 (OCT 2021), VIENNA, AUSTRIA
The assessment outcome for many online learning methods is based on the number of correct answers, which is then converted into one final mark or grade. We discovered that when using online learning, we can extract more detailed information from the learning process, and this information is useful for the assessor in planning an effective and efficient learning model for the learner. Statistical analysis is an important part of assessing the online learning outcome. The assessment indicators include the difficulty level of the question, the time spent answering, and the variation in the chosen answers. In this paper we present the findings on these assessment indicators and how they can improve the way the learner is assessed when using an online learning system. We developed a statistical analysis algorithm which can assess online learning outcomes more effectively using quantifiable measurements. A number of examples of using this statistical analysis algorithm are presented.
Category: Artificial Intelligence
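The abstract names its assessment indicators (question difficulty, time spent, answer changes) but not the algorithm that combines them. The following sketch shows one invented way such indicators could be folded into a quantifiable score; every weight and field name here is an assumption, not the paper's method:

```python
def weighted_score(responses):
    """Combine per-question indicators into one score: a correct answer
    earns the question's difficulty weight, discounted when the learner
    was slow or changed the answer often (all weights hypothetical)."""
    score = 0.0
    for r in responses:
        if not r["correct"]:
            continue
        w = r["difficulty"]              # e.g. 1 (easy) .. 3 (hard)
        if r["seconds"] > 60:
            w *= 0.8                     # slow answer: small discount
        w *= 0.9 ** r["changes"]         # hesitation discount per change
        score += w
    return round(score, 2)

answers = [
    {"correct": True,  "difficulty": 3, "seconds": 40, "changes": 0},
    {"correct": True,  "difficulty": 1, "seconds": 90, "changes": 2},
    {"correct": False, "difficulty": 2, "seconds": 30, "changes": 1},
]
print(weighted_score(answers))  # → 3.65  (3 + 1 * 0.8 * 0.9**2)
```

Two learners with the same number of correct answers can thus receive different scores, which is the extra detail a plain mark discards.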
[1208] viXra:2109.0106 [pdf] submitted on 2021-09-09 22:24:11
Authors: Yew Kee Wong
Comments: 7 Pages. MLNLP CONFERENCE 2021 (SEP 2021), COPENHAGEN, DENMARK
Artificial intelligence has been a buzzword impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, and thus on all efforts exerted towards sustainable development. In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets for different industries and business operations. Numerous use cases have shown that AI can ensure an effective supply of information to citizens, users and customers in times of crisis. This paper aims to analyse some of the different methods and scenarios which can be applied to AI and big data, as well as the opportunities provided by their application in various sensitive operations and disaster management.
Category: Artificial Intelligence
[1207] viXra:2109.0104 [pdf] submitted on 2021-09-09 22:28:20
Authors: Yew Kee Wong
Comments: 7 Pages. IJAIA JOURNAL (2021) VOL. 12, NO. 5
In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques to big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision-making domains.
Category: Artificial Intelligence
[1206] viXra:2109.0103 [pdf] submitted on 2021-09-09 22:30:00
Authors: Yew Kee Wong
Comments: 8 Pages. EEIJ JOURNAL (2021), VOL. 7, ISSUE. 3
Artificial intelligence has been an eye-popping word impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, and thus on all efforts exerted towards continuous development. By definition, the welfare of human beings is the core of continuous development. Continuous development is useful only when ordinary people's lives are improved, whether in health, education, employment, environment, equality or justice. Securing decent jobs is a key enabler of the components of continuous development: economic growth, social welfare and environmental sustainability. Human resources are the most precious resource of any nation. High unemployment and underemployment rates, especially among youth, are a great threat to the continuous economic development of many countries and are influenced by investment in education and quality of living.
Category: Artificial Intelligence
[1205] viXra:2109.0102 [pdf] submitted on 2021-09-09 22:34:12
Authors: Yew Kee Wong
Comments: 8 Pages. ARIA CONFERENCE 2021 (DEC 2021), SYDNEY, AUSTRALIA
In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets. The Internet of Things, or "IoT" for short, is about extending the power of the internet beyond computers and smartphones to a whole range of other things, processes and environments. IoT is at the epicentre of the Digital Transformation Revolution that is changing the shape of business, enterprise and people's lives. This transformation influences everything from how we manage and operate our homes to automating processes across nearly all industries. This paper aims to analyse the relationships among AI, big data and IoT, as well as the opportunities provided by their applications in various operational domains.
Category: Artificial Intelligence
[1204] viXra:2109.0101 [pdf] submitted on 2021-09-09 22:35:42
Authors: Yew Kee Wong
Comments: 8 Pages. NeTIOT CONFERENCE 2021 (DEC 2021), SYDNEY, AUSTRALIA
In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets. The Internet of Things, or "IoT" for short, is about extending the power of the internet beyond computers and smartphones to a whole range of other things, processes and environments. IoT is at the epicentre of the Digital Transformation Revolution that is changing the shape of business, enterprise and people's lives. This transformation influences everything from how we manage and operate our homes to automating processes across nearly all industries. This paper aims to analyse the relationships among AI, big data and IoT, as well as the opportunities provided by their applications in various operational domains.
Category: Artificial Intelligence
[1203] viXra:2109.0100 [pdf] submitted on 2021-09-09 22:37:10
Authors: Yew Kee Wong
Comments: 7 Pages. SIPR CONFERENCE 2021 (OCT 2021), SYDNEY, AUSTRALIA
In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques to big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision-making domains.
Category: Artificial Intelligence
[1202] viXra:2109.0099 [pdf] submitted on 2021-09-09 22:39:20
Authors: Yew Kee Wong
Comments: 8 Pages. IJCST JOURNAL 2021 OCT, VOL. 9, ISSUE. 6
In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in volume, velocity, variety and veracity (the four V's of big data), which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets. Furthermore, decision makers need to be able to gain valuable insights from such varied and rapidly changing data, ranging from daily transactions to customer interactions and social network data. Such value can be provided using big data analytics, which is the application of advanced analytics techniques to big data. This paper aims to analyse some uses of big data for artificial intelligence development and its applications in various decision-making domains.
Category: Artificial Intelligence
[1201] viXra:2109.0098 [pdf] submitted on 2021-09-09 22:40:43
Authors: Yew Kee Wong
Comments: 7 Pages. IJCST JOURNAL 2021 OCT, VOL. 9, ISSUE. 6
Artificial intelligence has been an eye-popping word impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, and thus on all efforts exerted towards continuous development. By definition, the welfare of human beings is the core of continuous development. Continuous development is useful only when ordinary people's lives are improved, whether in health, education, employment, environment, equality or justice. Securing decent jobs is a key enabler of the components of continuous development: economic growth, social welfare and environmental sustainability. Human resources are the most precious resource of any nation. High unemployment and underemployment rates, especially among youth, are a great threat to the continuous economic development of many countries and are influenced by investment in education and quality of living.
Category: Artificial Intelligence
[1200] viXra:2109.0097 [pdf] submitted on 2021-09-09 22:42:18
Authors: Yew Kee Wong
Comments: 6 Pages. IJCST JOURNAL 2022 FEB, VOL. 10, ISSUE. 1
Artificial intelligence has been a buzzword impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, and thus on all efforts exerted towards sustainable development. In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets for different industries and business operations. Numerous use cases have shown that AI can ensure an effective supply of information to citizens, users and customers in times of crisis. This paper aims to analyse some of the different methods and scenarios which can be applied to AI and big data, as well as the opportunities provided by their application in various business operations and crisis management domains.
Category: Artificial Intelligence
[1199] viXra:2109.0096 [pdf] submitted on 2021-09-09 22:43:47
Authors: Yew Kee Wong
Comments: 10 Pages. IJCST JOURNAL 2022 FEB, VOL. 10, ISSUE. 1
In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using machine learning, in particular advanced deep learning techniques applied to big data. This paper aims to analyse some of the different machine learning and deep learning algorithms and methods, as well as the opportunities provided by AI applications in various decision-making domains.
Category: Artificial Intelligence
[1198] viXra:2109.0095 [pdf] submitted on 2021-09-09 22:45:19
Authors: Yew Kee Wong
Comments: 7 Pages. IJIT JOURNAL 2021 DEC, VOL. 7, ISSUE. 6
The assessment outcome for many online learning methods is based on the number of correct answers, which is then converted into one final mark or grade. We discovered that when using online learning, we can extract more detailed information from the learning process, and this information is useful for the assessor in planning an effective and efficient learning model for the learner. Statistical analysis is an important part of assessing the online learning outcome. The assessment indicators include the difficulty level of the question, the time spent answering, and the variation in the chosen answers. In this paper we present the findings on these assessment indicators and how they can improve the way the learner is assessed when using an online learning system. We developed a statistical analysis algorithm which can assess online learning outcomes more effectively using quantifiable measurements. A number of examples of using this statistical analysis algorithm are presented.
Category: Artificial Intelligence
[1197] viXra:2109.0094 [pdf] submitted on 2021-09-09 22:46:44
Authors: Yew Kee Wong
Comments: 9 Pages. IJIT JOURNAL 2021 DEC, VOL. 7, ISSUE. 6
In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques to big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision-making domains.
Category: Artificial Intelligence
[1196] viXra:2109.0093 [pdf] submitted on 2021-09-09 22:50:32
Authors: Yew Kee Wong
Comments: 7 Pages. IJIT JOURNAL 2022 FEB, VOL. 8, ISSUE. 1
The assessment outcome for many online learning methods is based on the number of correct answers, which is then converted into one final mark or grade. We discovered that when using online learning, we can extract more detailed information from the learning process, and this information is useful for the assessor in planning an effective and efficient learning model for the learner. Statistical analysis is an important part of assessing the online learning outcome. The assessment indicators include the difficulty level of the question, the time spent answering, and the variation in the chosen answers. In this paper we present the findings on these assessment indicators and how they can improve the way the learner is assessed when using an online learning system. We developed a statistical analysis algorithm which can assess online learning outcomes more effectively using quantifiable measurements. A number of examples of using this statistical analysis algorithm are presented.
Category: Artificial Intelligence
[1195] viXra:2109.0092 [pdf] submitted on 2021-09-09 22:51:52
Authors: Yew Kee Wong
Comments: 7 Pages. IJIT JOURNAL 2022 FEB, VOL. 8, ISSUE. 1
In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques to big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision-making domains.
Category: Artificial Intelligence
[1194] viXra:2109.0091 [pdf] submitted on 2021-09-09 22:53:38
Authors: Yew Kee Wong
Comments: 7 Pages. IJETA JOURNAL 2021 DEC, VOL. 8, ISSUE. 6
In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques to big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision-making domains.
Category: Artificial Intelligence
[1193] viXra:2109.0090 [pdf] submitted on 2021-09-09 22:55:07
Authors: Yew Kee Wong
Comments: 8 Pages. IJETA JOURNAL 2021 DEC, VOL. 8, ISSUE. 6
In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets. The Internet of Things, or "IoT" for short, is about extending the power of the internet beyond computers and smartphones to a whole range of other things, processes and environments. IoT is at the epicentre of the Digital Transformation Revolution that is changing the shape of business, enterprise and people's lives. This transformation influences everything from how we manage and operate our homes to automating processes across nearly all industries. This paper aims to analyse the relationships among AI, big data and IoT, as well as the opportunities provided by their applications in various operational domains.
Category: Artificial Intelligence
[1192] viXra:2109.0088 [pdf] submitted on 2021-09-09 22:58:19
Authors: Yew Kee Wong
Comments: 8 Pages. IJETA JOURNAL 2022 FEB, VOL. 9, ISSUE. 1
In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in volume, velocity, variety and veracity (the four V's of big data), which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets. Furthermore, decision makers need to be able to gain valuable insights from such varied and rapidly changing data, ranging from daily transactions to customer interactions and social network data. Such value can be provided using big data analytics, which is the application of advanced analytics techniques to big data. This paper aims to analyse some uses of big data for artificial intelligence development and its applications in various decision-making domains.
Category: Artificial Intelligence
[1191] viXra:2109.0087 [pdf] submitted on 2021-09-09 23:01:19
Authors: Yew Kee Wong
Comments: 7 Pages. BIBC CONFERENCE 2021 (OCT 2021), SYDNEY, AUSTRALIA
In the information era, enormous amounts of data have become readily available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from these datasets. The Internet of Things, or "IoT" for short, is about extending the power of the internet beyond computers and smartphones to a whole range of other things, processes and environments. IoT is at the epicentre of the Digital Transformation Revolution that is changing the shape of business, enterprise and people's lives. This transformation influences everything from how we manage and operate our homes to automating processes across nearly all industries. This paper aims to analyse the relationships among AI, big data and IoT, as well as the opportunities provided by their applications in various operational domains.
Category: Artificial Intelligence
[1190] viXra:2109.0086 [pdf] submitted on 2021-09-09 23:03:06
Authors: Yew Kee Wong
Comments: 8 Pages. JOURNAL OF SOFTWARE, ICCSIT 2021, PARIS, FRANCE
Online learning is the emerging technique in education during the COVID-19 pandemic period. Traditional learning is a complex process, as learning patterns, approaches, skills and performance vary from person to person. Adaptive online learning focuses on understanding the learner's performance and skills and adapting to them. The use of advanced technology also provides a means to analyze the behavioral learning pattern: it provides detailed skill mapping and performance data, which enable the learner to understand the areas that need to be improved. The information can also be used by assessors to improve the teaching approach. Advanced online learning systems using artificial intelligence are an emerging concept for the coming years. In this new concept, classes are not taken face-to-face in a classroom but through an electronic medium as a substitute. This virtual learning approach is gaining importance every day, and very soon it is going to be an integral part of our world. Taking up such learning through an electronic medium is termed online learning. We propose two new models powered by artificial intelligence (AI) tools, and a number of examples of using these new models are presented.
Category: Artificial Intelligence
[1189] viXra:2109.0085 [pdf] submitted on 2021-09-09 23:04:52
Authors: Yew Kee Wong
Comments: 8 Pages. CIoT CONFERENCE 2021 (SEP 2021), TORONTO, CANADA
In the information era, enormous amounts of data have become available on hand to decision makers.
Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them
difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions
need to be studied and provided in order to handle and extract value and knowledge from these datasets.
The Internet of Things, or "IoT" for short, is about extending the power of the internet beyond computers
and smartphones to a whole range of other things, processes and environments. IoT is at the epicentre of
the Digital Transformation Revolution that is changing the shape of business, enterprise and people’s
lives. This transformation influences everything from how we manage and operate our homes to
automating processes across nearly all industries. This paper aims to analyse the relationships of AI, big
data and IoT, as well as the opportunities provided by the applications in various operational domains.
Category: Artificial Intelligence
[1188] viXra:2109.0083 [pdf] submitted on 2021-09-09 23:07:37
Authors: Yew Kee Wong
Comments: 10 Pages. BMLI CONFERENCE 2021 (DEC 2021), CHENNAI, INDIA
In the information era, enormous amounts of data have become available on hand to decision makers.
Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them
difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions
need to be studied and provided in order to handle and extract value and knowledge from these datasets.
Machine learning is a method of data analysis that automates analytical model building. It is a branch of
artificial intelligence based on the idea that systems can learn from data, identify patterns and make
decisions with minimal human intervention. Such minimal human intervention can be provided using
machine learning, which is the application of advanced deep learning techniques on big data. This paper
aims to analyse some of the different machine learning and deep learning algorithms and methods, as
well as the opportunities provided by the AI applications in various decision making domains.
Category: Artificial Intelligence
[1187] viXra:2109.0068 [pdf] submitted on 2021-09-09 22:13:52
Authors: Yew Kee Wong
Comments: 7 Pages.
Artificial intelligence has been a buzzword that is impacting every industry in the world. With
the rise of such advanced technology, there will always be a question regarding its impact on our social
life, environment and economy, thus impacting all efforts exerted towards continuous development. By
definition, the welfare of human beings is the core of continuous development. Continuous
development is useful only when ordinary people’s lives are improved, whether in health, education,
employment, environment, equality or justice. Securing decent jobs is a key enabler to promote the
components of continuous development: economic growth, social welfare and environmental sustainability.
Human resources are the most precious resource for all nations. The high unemployment and
underemployment rates, especially among youth, are a great threat affecting the continuous economic development
of many countries and are influenced by investment in education and quality of living.
Category: Artificial Intelligence
[1186] viXra:2109.0067 [pdf] submitted on 2021-09-09 22:14:14
Authors: Yew Kee Wong
Comments: 7 Pages.
Online learning is the emerging technique in education and learning during the COVID-19 pandemic
period. Traditional learning is a complex process as learning patterns, approach, skills and performance
varies from person to person. Adaptive online learning focuses on understanding the learner’s
performance, skills and adapts to it. The use of advanced technology also provides a means to analyse
the behavioural learning pattern. It provides detailed skill mapping and performance data, which
enables the learner to understand the areas that need to be improved. The information can also be used by
assessors to improve the teaching approach. An advanced online learning system using artificial
intelligence is an emerging concept in the coming years. In this new concept, classes are not taken
face-to-face in a classroom but through an electronic medium as a substitute. This virtual learning
approach is gaining importance every day and will soon be an integral part of our
world. Taking up such virtual learning through an electronic medium is termed online learning. We
propose two new models powered by artificial intelligence (AI) tools. A number of examples
of using these new models are presented.
Category: Artificial Intelligence
[1185] viXra:2109.0066 [pdf] submitted on 2021-09-09 21:48:06
Authors: Yew Kee Wong
Comments: 7 Pages. IJETA JOURNAL 2021 OCT, VOL. 8, ISSUE. 5
In the information era, enormous amounts of data have become available on hand to decision makers.
Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them
difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions
need to be studied and provided in order to handle and extract value and knowledge from these datasets.
Machine learning is a method of data analysis that automates analytical model building. It is a branch of
artificial intelligence based on the idea that systems can learn from data, identify patterns and make
decisions with minimal human intervention. Such minimal human intervention can be provided using big
data analytics, which is the application of advanced analytics techniques on big data. This paper aims to
analyse some of the different machine learning algorithms and methods which can be applied to big data
analysis, as well as the opportunities provided by the application of big data analytics in various decision
making domains.
Category: Artificial Intelligence
[1184] viXra:2109.0065 [pdf] submitted on 2021-09-09 21:49:38
Authors: Yew Kee Wong
Comments: 9 Pages. IJETA JOURNAL 2021 OCT, VOL. 8, ISSUE. 5
Artificial intelligence has been a buzzword that is impacting every industry in the world. With the rise of
such advanced technology, there will always be a question regarding its impact on our social life,
environment and economy, thus impacting all efforts exerted towards sustainable development. In the
information era, enormous amounts of data have become available on hand to decision makers. Big data
refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to
handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be
studied and provided in order to handle and extract value and knowledge from these datasets for different
industries and business operations. Numerous use cases have shown that AI can ensure an effective
supply of information to citizens, users and customers in times of crisis. This paper aims to analyse some
of the different methods and scenarios which can be applied to AI and big data, as well as the
opportunities provided by the application in various business operations and disaster management
domains.
Category: Artificial Intelligence
[1183] viXra:2109.0064 [pdf] submitted on 2021-09-09 21:51:33
Authors: Yew Kee Wong
Comments: 8 Pages. IJIT JOURNAL 2021 AUG, VOL. 7, ISSUE. 4
In the information era, enormous amounts of data have become available on hand to decision makers.
Big data refers to datasets that are not only big, but also high in volume, velocity, variety and veracity
(the four V’s of big data), which makes them difficult to handle using traditional tools and techniques.
Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and
extract value and knowledge from these datasets. Furthermore, decision makers need to be able to gain
valuable insights from such varied and rapidly changing data, ranging from daily transactions to
customer interactions and social network data. Such value can be provided using big data analytics,
which is the application of advanced analytics techniques on big data. This paper aims to analyse some
of the use of big data for the artificial intelligence development and its applications in various decision
making domains.
Category: Artificial Intelligence
[1182] viXra:2109.0063 [pdf] submitted on 2021-09-09 21:53:14
Authors: Yew Kee Wong
Comments: 6 Pages. IJIT JOURNAL 2021 AUG, VOL. 7, ISSUE. 4
Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as
recognizing speech, identifying images or making predictions. Instead of organizing data to run through
predefined equations, deep learning sets up basic parameters about the data and trains the computer to
learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate
some of the different deep learning algorithms and methods which can be applied to artificial intelligence
analysis, as well as the opportunities provided by the application in various decision making domains.
Category: Artificial Intelligence
[1181] viXra:2109.0062 [pdf] submitted on 2021-09-09 21:54:49
Authors: Yew Kee Wong
Comments: 6 Pages. IJIT JOURNAL 2021 OCT, VOL. 7, ISSUE. 5
Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as
recognizing speech, identifying images or making predictions. Instead of organizing data to run through
predefined equations, deep learning sets up basic parameters about the data and trains the computer to
learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate
some of the different deep learning algorithms and methods which can be applied to artificial intelligence
analysis, as well as the opportunities provided by the application in various decision making domains.
Category: Artificial Intelligence
[1180] viXra:2109.0061 [pdf] submitted on 2021-09-09 21:56:12
Authors: Yew Kee Wong
Comments: 9 Pages. IJIT JOURNAL 2021 OCT, VOL. 7, ISSUE. 5
In the information era, enormous amounts of data have become available on hand to decision makers.
Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them
difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions
need to be studied and provided in order to handle and extract value and knowledge from these datasets.
Machine learning is a method of data analysis that automates analytical model building. It is a branch of
artificial intelligence based on the idea that systems can learn from data, identify patterns and make
decisions with minimal human intervention. Such minimal human intervention can be provided using big
data analytics, which is the application of advanced analytics techniques on big data. This paper aims to
analyse some of the different machine learning algorithms and methods which can be applied to big data
analysis, as well as the opportunities provided by the application of big data analytics in various decision
making domains.
Category: Artificial Intelligence
[1179] viXra:2109.0060 [pdf] submitted on 2021-09-09 21:58:28
Authors: Yew Kee Wong
Comments: 7 Pages. IJCST JOURNAL 2021 OCT, VOL. 9, ISSUE. 5
In the information era, enormous amounts of data have become available on hand to decision makers.
Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them
difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions
need to be studied and provided in order to handle and extract value and knowledge from these datasets.
Machine learning is a method of data analysis that automates analytical model building. It is a branch of
artificial intelligence based on the idea that systems can learn from data, identify patterns and make
decisions with minimal human intervention. Such minimal human intervention can be provided using big
data analytics, which is the application of advanced analytics techniques on big data. This paper aims to
analyse some of the different machine learning algorithms and methods which can be applied to big data
analysis, as well as the opportunities provided by the application of big data analytics in various decision
making domains.
Category: Artificial Intelligence
[1178] viXra:2109.0059 [pdf] submitted on 2021-09-09 22:00:33
Authors: Yew Kee Wong
Comments: 6 Pages. IJCST JOURNAL 2021 OCT, VOL. 9, ISSUE. 5
Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as
recognizing speech, identifying images or making predictions. Instead of organizing data to run through
predefined equations, deep learning sets up basic parameters about the data and trains the computer to
learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate
some of the different deep learning algorithms and methods which can be applied to artificial intelligence
analysis, as well as the opportunities provided by the application in various decision making domains.
Category: Artificial Intelligence
[1177] viXra:2109.0058 [pdf] submitted on 2021-09-09 22:13:33
Authors: Yew Kee Wong
Comments: 6 Pages. IJCST JOURNAL 2021 AUG, VOL. 9, ISSUE. 4
Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as
recognizing speech, identifying images or making predictions. Instead of organizing data to run through
predefined equations, deep learning sets up basic parameters about the data and trains the computer to
learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate
some of the different deep learning algorithms and methods which can be applied to artificial intelligence
analysis, as well as the opportunities provided by the application in various decision making domains.
Category: Artificial Intelligence
[1176] viXra:2109.0057 [pdf] submitted on 2021-09-09 22:13:11
Authors: Yew Kee Wong
Comments: 7 Pages. IJCST JOURNAL 2021 AUG, VOL. 9, ISSUE. 4
In the information era, enormous amounts of data have become available on hand to decision makers.
Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them
difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions
need to be studied and provided in order to handle and extract value and knowledge from these datasets.
Machine learning is a method of data analysis that automates analytical model building. It is a branch of
artificial intelligence based on the idea that systems can learn from data, identify patterns and make
decisions with minimal human intervention. Such minimal human intervention can be provided using big
data analytics, which is the application of advanced analytics techniques on big data. This paper aims to
analyse some of the different machine learning algorithms and methods which can be applied to big data
analysis, as well as the opportunities provided by the application of big data analytics in various decision
making domains.
Category: Artificial Intelligence
[1175] viXra:2109.0056 [pdf] submitted on 2021-09-09 22:12:50
Authors: Yew Kee Wong
Comments: 8 Pages. NATL CONFERENCE 2021 (NOV 2021), LONDON, UK
Artificial intelligence has been a buzzword that is impacting every industry in the world. With
the rise of such advanced technology, there will always be a question regarding its impact on our social
life, environment and economy, thus impacting all efforts exerted towards continuous development. By
definition, the welfare of human beings is the core of continuous development. Continuous
development is useful only when ordinary people’s lives are improved, whether in health, education,
employment, environment, equality or justice. Securing decent jobs is a key enabler to promote the
components of continuous development: economic growth, social welfare and environmental
sustainability. Human resources are the most precious resource of nations. The high unemployment and
underemployment rates, especially among youth, are a great threat affecting the continuous economic
development of many countries and are influenced by investment in education and quality of living.
Category: Artificial Intelligence
[1174] viXra:2109.0055 [pdf] submitted on 2021-09-09 22:12:06
Authors: Yew Kee Wong
Comments: 6 Pages. CRBL CONFERENCE 2021 (OCT 2021), VIENNA, AUSTRIA
Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as
recognizing speech, identifying images or making predictions. Instead of organizing data to run through
predefined equations, deep learning sets up basic parameters about the data and trains the computer to
learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate
some of the different deep learning algorithms and methods which can be applied to artificial intelligence
analysis, as well as the opportunities provided by the application in various decision making domains.
Category: Artificial Intelligence
[1173] viXra:2109.0054 [pdf] submitted on 2021-09-09 22:13:21
Authors: Yew Kee Wong
Comments: 7 Pages. ITCCMA CONFERENCE 2021 (SEP 2021) COPENHAGEN, DENMARK
Artificial intelligence has been a buzzword that is impacting every industry in the world. With the rise of
such advanced technology, there will always be a question regarding its impact on our social life,
environment and economy thus impacting all efforts exerted towards sustainable development. In the
information era, enormous amounts of data have become available on hand to decision makers. Big data
refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to
handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be
studied and provided in order to handle and extract value and knowledge from these datasets for different
industries and business operations. Numerous use cases have shown that AI can ensure an effective
supply of information to citizens, users and customers in times of crisis. This paper aims to analyse some
of the different methods and scenarios which can be applied to AI and big data, as well as the
opportunities provided by the application in various business operations and crisis management domains.
Category: Artificial Intelligence
[1172] viXra:2109.0047 [pdf] submitted on 2021-09-07 04:43:30
Authors: Amey Thakur, Karan Dhiman, Mayuresh Phansikar
Comments: 7 pages, 7 figures, Volume 9, Issue IX, International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2021. DOI: https://doi.org/10.22214/ijraset.2021.37930
Neuro-Fuzzy is a hybrid system that combines Artificial Neural Networks with Fuzzy Logic, and it provides a great deal of freedom when it comes to reasoning. The phrase, on the other hand, is frequently used to describe any system that combines both approaches. There are two basic streams of neural network and fuzzy system study: modelling several elements of the human brain (structure, reasoning, learning, perception, and so on), and modelling artificial systems and data (pattern clustering and recognition, function approximation, system parameter estimation, and so on). In general, neural networks and fuzzy logic systems are parameterized nonlinear computing methods for numerical data processing (signals, images, stimuli). These algorithms can be integrated into dedicated hardware or implemented on a general-purpose computer. The network system acquires knowledge through a learning process, and the learned information is stored in internal parameters (weights).
Category: Artificial Intelligence
[1171] viXra:2109.0028 [pdf] submitted on 2021-09-05 15:57:13
Authors: Jeongik Cho
Comments: 13 Pages.
Generators in generative adversarial networks map latent distributions into data distributions. GAN inversion is mapping data distribution to latent distribution by inverting the generator of GAN.
When training the encoder for generator inversion, simply using the mean squared error causes the encoder to not converge due to information loss on the latent distribution from the generator. In other words, it is impossible to invert the generator as it is due to the information loss on the latent distribution.
This paper introduces a dynamic latent scale GAN, a method for training a generator without information loss on latent distribution, and an encoder that inverts the generator. Dynamic latent scale GAN dynamically scales each element of the normal i.i.d. (independent and identically distributed) latent distribution during GAN training to adjust the entropy of the latent distribution so that information loss on the latent distribution does not occur in the generator. The amount of information that can be recovered from the generated data distribution can be obtained through the variance of the predicted latent distribution (encoder output distribution). By dynamically adjusting the scale of the latent distribution through the variance of each element of the predicted latent distribution, it is possible to train a generator that does not have information loss on latent distribution. This means that mutual information between the latent distribution and predicted latent distribution can be maximized, and the encoder can converge.
Since the latent distribution scale of the dynamic latent scale GAN changes dynamically, the encoder should be trained together during GAN training. The encoder can be integrated with the discriminator, and the loss for the encoder can be added to the generator loss because the encoder converges.
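The abstract's scale-update idea, adjusting each latent element's scale from the variance of the encoder's predicted latents, can be sketched roughly as follows. This is not the paper's actual algorithm; `update_latent_scale`, the normalization choice, the batch shapes, and the stand-in encoder output are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def update_latent_scale(pred_latents, eps=1e-8):
    # Per-element scale from the variance of the predicted latent distribution:
    # elements the encoder recovers with more variance carry more recoverable
    # information and keep a larger scale. Normalized so the mean scale is ~1.
    var = pred_latents.var(axis=0)
    return np.sqrt(var / (var.mean() + eps))

z = rng.standard_normal((64, 8))                     # i.i.d. normal latent batch
pred = 0.5 * z + 0.1 * rng.standard_normal((64, 8))  # stand-in for encoder output
scale = update_latent_scale(pred)
z_scaled = z * scale                                 # scaled latent fed to the generator
```

In the paper's setup this update would run during GAN training, since the encoder (integrated with the discriminator) is trained jointly with the generator.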
Category: Artificial Intelligence
[1170] viXra:2108.0169 [pdf] submitted on 2021-08-31 12:44:04
Authors: Amey Thakur, Mega Satish
Comments: 19 pages, 23 figures, Volume 9, Issue VIII, International Journal for Research in Applied Science and Engineering Technology (IJRASET), 2021. DOI: https://doi.org/10.22214/ijraset.2021.37723
Deep learning's breakthrough in the field of artificial intelligence has resulted in the creation of a slew of deep learning models. One of these is the Generative Adversarial Network, which has only recently emerged. The goal of GAN is to use unsupervised learning to analyse the distribution of data and create more accurate results. The GAN allows the learning of deep representations in the absence of substantial labelled training information. Computer vision, language and video processing, and image synthesis are just a few of the applications that might benefit from these representations. The purpose of this research is to get the reader conversant with the GAN framework as well as to provide the background information on Generative Adversarial Networks, including the structure of both the generator and discriminator, as well as the various GAN variants along with their respective architectures. Applications of GANs are also discussed with examples.
Category: Artificial Intelligence
[1169] viXra:2108.0155 [pdf] submitted on 2021-08-27 21:01:29
Authors: Yew Kee Wong
Comments: 9 Pages.
In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make
decisions with minimal human intervention. Such minimal human intervention can be provided using machine learning, which is the application of advanced deep learning techniques on big data. This paper aims to analyse some of the different machine learning and deep learning algorithms and methods, as
well as the opportunities provided by the AI applications in various decision making domains.
Category: Artificial Intelligence
[1168] viXra:2108.0154 [pdf] submitted on 2021-08-27 21:02:30
Authors: Yew Kee Wong
Comments: 8 Pages.
Artificial intelligence has been a buzzword that is impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, thus impacting all efforts exerted towards continuous development. By definition, the welfare of human beings is the core of continuous development. Continuous development is useful only when ordinary people’s lives are improved, whether in health, education, employment, environment, equality or justice. Securing decent jobs is a key enabler to promote the components of continuous development: economic growth, social welfare and environmental sustainability. Human resources are the most precious resource of nations. The high unemployment and underemployment rates, especially among youth, are a great threat affecting the continuous economic development of many countries and are influenced by investment in education and quality of living.
Category: Artificial Intelligence
[1167] viXra:2108.0153 [pdf] submitted on 2021-08-27 21:04:08
Authors: Yew Kee Wong
Comments: 6 Pages.
The assessment outcome for many online learning methods is based on the number of correct answers, which is then converted into one final mark or grade. We discovered that when using online learning, we can extract more detailed information from the learning process, and this information is useful for the assessor to plan an effective and efficient learning model for the learner. Statistical analysis is an
important part of an assessment when evaluating the online learning outcome. The assessment
indicators include the difficulty level of the question, time spent in answering and the variation in choosing an answer. In this paper we present the findings on these assessment indicators and how they can improve the way the learner is assessed when using an online learning system. We developed a statistical analysis algorithm which can assess online learning outcomes more effectively using
quantifiable measurements. A number of examples of using this statistical analysis algorithm are presented.
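A minimal sketch of how such assessment indicators might be computed from a response log. The paper does not specify its algorithm, so the log format, field names, and formulas below are all hypothetical:

```python
from statistics import mean

# Hypothetical response log: (question_id, answered_correctly, seconds_taken,
# number_of_answer_changes) per learner attempt.
responses = [
    ("q1", True, 30, 0), ("q1", False, 75, 2), ("q1", True, 40, 1),
    ("q2", False, 120, 3), ("q2", False, 90, 2), ("q2", True, 110, 1),
]

def indicators(log, qid):
    rows = [r for r in log if r[0] == qid]
    difficulty = 1 - mean(1.0 if r[1] else 0.0 for r in rows)  # fraction wrong
    avg_time = mean(r[2] for r in rows)                        # time spent answering
    variation = mean(r[3] for r in rows)                       # answer-change rate
    return difficulty, avg_time, variation

d, t, v = indicators(responses, "q2")  # q2 is harder: d = 2/3, v = 2.0
```

Indicators like these give the assessor more signal than a single final mark, which is the abstract's central point.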
Category: Artificial Intelligence
[1166] viXra:2108.0152 [pdf] submitted on 2021-08-27 21:05:13
Authors: Yew Kee Wong
Comments: 8 Pages.
In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in volume, velocity, variety and veracity (the four V’s of big data), which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Furthermore, decision makers need to be able to gain valuable insights from such varied and rapidly changing data, ranging from daily transactions to customer interactions and social network data. Such value can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some
of the use of big data for the artificial intelligence development and its applications in various decision making domains.
Category: Artificial Intelligence
[1165] viXra:2108.0147 [pdf] submitted on 2021-08-25 23:16:30
Authors: Jeongik Cho
Comments: 10 Pages.
Generators in generative adversarial networks map latent distributions into data distributions. GAN inversion is mapping data distribution to latent distribution by inverting the generator of GAN.
In this paper, I introduce a direction embedding discriminator GAN in which the discriminator learns the inverse mapping of the generator. In the suggested method, when the latent vector is sampled from an i.i.d. (independent and identically distributed) random variable, the latent vector is considered as angular coordinates of spherical coordinates. Thus, the latent vector can be transformed into a point on the surface of the hypersphere in cartesian coordinates.
The discriminator embeds the generated data point into cartesian coordinates. The direction of the embedded coordinates represents the predicted cartesian coordinates of the latent vector, and the log of the magnitude represents an adversarial value (real/fake). The generator and discriminator are trained cooperatively to decrease the angle between the embedded cartesian coordinates from the discriminator and the cartesian coordinates converted from the latent vector considered as angular coordinates of spherical coordinates. The suggested method can be applied during GAN training, does not require additional encoder training, and does not use a reconstruction loss.
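The latent-to-hypersphere conversion described above can be sketched with the standard hyperspherical parameterization, which maps n angular coordinates to a point on the unit n-sphere in R^(n+1). The function name and shapes are illustrative assumptions, not the paper's code:

```python
import numpy as np

def angles_to_cartesian(phi):
    # x_i = (product of preceding sines) * cos(phi_i); the final coordinate
    # is the product of all sines, so the result always has unit norm.
    phi = np.asarray(phi, dtype=float)
    n = phi.shape[0]
    x = np.empty(n + 1)
    sin_prod = 1.0
    for i in range(n):
        x[i] = sin_prod * np.cos(phi[i])
        sin_prod *= np.sin(phi[i])
    x[n] = sin_prod
    return x

p = angles_to_cartesian([0.3, 1.2, 2.0])  # a point on the unit 3-sphere in R^4
```

Because the result lies on the unit hypersphere, only its direction carries latent information, leaving the magnitude of the discriminator's embedding free to encode the adversarial (real/fake) value.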
Category: Artificial Intelligence
[1164] viXra:2108.0130 [pdf] submitted on 2021-08-24 11:26:13
Authors: Amey Thakur, Archit Konde
Comments: 22 pages, 15 figures, Volume 9, Issue VIII, International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2021. DOI: http://dx.doi.org/10.22214/ijraset.2021.37362
The purpose of this study is to familiarise the reader with the foundations of neural networks. Artificial Neural Networks (ANNs) are algorithm-based systems that are modelled after Biological Neural Networks (BNNs). Neural networks are an effort to use the human brain's information processing skills to address challenging real-world AI issues. The evolution of neural networks and their significance are briefly explored. ANNs and BNNs are contrasted, and their qualities, benefits, and disadvantages are discussed. The drawbacks of the perceptron model and their improvement by the sigmoid neuron and ReLU neuron are briefly discussed. In addition, we give a bird's-eye view of the different Neural Network models. We study neural networks (NNs) and highlight the different learning approaches and algorithms used in Machine Learning and Deep Learning. We also discuss different types of NNs and their applications. A brief introduction to Neuro-Fuzzy and its applications with a comprehensive review of NN technological advances is provided.
Category: Artificial Intelligence
[1163] viXra:2108.0120 [pdf] submitted on 2021-08-23 13:14:27
Authors: Mirzakhmet Syzdykov
Comments: 5 Pages.
In this work we present a theoretical approach to solving the back-reference problem in regular expression matching in almost polynomial time using local search within memory; as the number of capturing groups grows, we obtain exponential results. For this purpose we develop a modified matching algorithm operating on non-deterministic finite automata, together with a modified search algorithm and a specific method for extended regular expressions. The algorithm can be adjusted for approximate searching, which allows us to support the extended operators and features of modern regular expressions, such as intersection, subtraction, and complement, as well as back-references. We also review past work on this issue: to date there is no discrete algorithm for local search in automata-based systems. We thus obtain a new result: the pattern is matched locally while the simulating algorithm works as usual. The result also applies to the membership problem with a local bound, which can be set in the main algorithm presented in this article.
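As a concrete illustration of why back-references fall outside plain finite-automaton matching (the problem this paper targets), a short example using Python's `re` engine, which handles back-references by backtracking:

```python
import re

# The language matched here, { ww : w nonempty }, is not regular, so no
# plain NFA can recognise it: the engine must remember the exact text
# captured by group 1 in order to match \1.
pattern = re.compile(r"^(\w+)\1$")

print(bool(pattern.match("abab")))   # True: "ab" repeated
print(bool(pattern.match("abaa")))   # False: no half repeats
```

This is standard backtracking matching, not the modified NFA-based algorithm of the paper, but it shows the memory requirement the paper's local-search approach addresses.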
Category: Artificial Intelligence
[1162] viXra:2108.0095 [pdf] submitted on 2021-08-18 23:35:38
Authors: Shiyou Lian
Comments: Pages.
Starting from finding approximate value of a function, introduces the measure of approximation-degree between two numerical values, proposes the concepts of “strict approximation” and “strict approximation region”, then, derives the corresponding one-dimensional interpolation methods and formulas, and then presents a calculation model called “sum-times-difference formula” for high-dimensional interpolation, thus develops a new interpolation approach, that is, ADB interpolation. ADB interpolation is applied to the interpolation of actual functions with satisfactory results. Viewed from principle and effect, the interpolation approach is of novel idea, and has the advantages of simple calculation, stable accuracy, facilitating parallel processing, very suiting for high-dimensional interpolation, and easy to be extended to the interpolation of vector valued functions. Applying the approach to instance-based learning, a new instance-based learning method, learning using ADB interpolation, is obtained. The learning method is of unique technique, which has also the advantages of definite mathematical basis, implicit distance weights, avoiding misclassification, high efficiency, and wide range of applications, as well as being interpretable, etc. In principle, this method is a kind of learning by analogy, which and the deep learning that belongs to inductive learning can complement each other, and for some problems, the two can even have an effect of “different approaches but equal results” in big data and cloud computing environment. Thus, the learning using ADB interpolation can also be regarded as a kind of “wide learning” that is dual to deep learning.
Category: Artificial Intelligence
[1161] viXra:2108.0029 [pdf] submitted on 2021-08-08 14:07:27
Authors: Ait-Taleb Nabil
Comments: 28 Pages.
In this paper, we will cover information theory for continuous data: differential entropy, joint differential entropy, conditional differential entropy, mutual information, and conditional mutual information. We will give a brief reminder on the multidimensional Gaussian probability distribution and on information theory. We will prove a theorem on conditional entropy inequalities for Gaussian random vectors; this theorem will later be used to bound a Bayesian network's differential entropy. In the following, we will define a Bayesian network using a Gaussian random vector, show how to compute a Bayesian network's differential entropy, and conclude by proposing a theorem to upper- and lower-bound this differential entropy. To perform data learning, we will detail, for a Bayesian network, the AIC and BIC scores and a method of differential-entropy absorption. We will also show how to infer data from a Bayesian network. Using an example, this paper concludes by suggesting a learning algorithm based on the differential-entropy coefficient, attributing a Bayesian network to a continuous data matrix.
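The closed-form differential entropy of a Gaussian random vector that the paper bounds can be computed directly from the covariance matrix; a small self-contained sketch (illustrative, not the paper's code):

```python
import math

def det(m):
    # Determinant via Gaussian elimination with partial pivoting.
    m = [row[:] for row in m]
    n = len(m)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if m[p][i] == 0:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def gaussian_differential_entropy(cov):
    # h(X) = 0.5 * ln((2*pi*e)^n * |Sigma|), in nats.
    n = len(cov)
    return 0.5 * math.log((2 * math.pi * math.e) ** n * det(cov))

# 1-D sanity check: unit variance gives 0.5*ln(2*pi*e) ~ 1.4189 nats
print(gaussian_differential_entropy([[1.0]]))
```

The bounds discussed in the paper constrain this quantity for the Gaussian vector underlying the Bayesian network.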
Category: Artificial Intelligence
[1160] viXra:2107.0124 [pdf] submitted on 2021-07-22 18:37:33
Authors: Romain Mouret
Comments: 5 Pages.
We make the case for identifying the input domain prior to running downstream models and propose an architecture that opens the door to lifelong learning systems that forget at a decreasing rate as the tasks grow in complexity. Our model accurately identifies domains and is compatible with other continual learning algorithms, provided they benefit from knowing the current domain beforehand.
Category: Artificial Intelligence
[1159] viXra:2107.0122 [pdf] submitted on 2021-07-21 19:07:21
Authors: Sagnik Mazumder
Comments: 4 Pages.
Artificial Intelligence is one of the most extensively studied fields in computer science. In this paper, the author attempts to summarise the current state of research in the field with respect to its openness to the general community, and finds a profound lack of opportunity for novices to contribute, together with a near monopoly on effective research by large industry, while production environments largely remain insulated from such influences.
Category: Artificial Intelligence
[1158] viXra:2107.0097 [pdf] submitted on 2021-07-16 15:11:10
Authors: Archie Chaudhury, Brian Haney
Comments: 16 Pages. Blockchain, Computation, and Cryptocurrency
This Paper makes three main contributions. First, it surveys Algorand Smart Contracts and the Algorand Network, including software systems and algorithmic architectures. Second, it discusses various software mechanisms enabling developers to execute transfers on the Algorand Network. Third, it advances Algorand Smart Contracts by introducing the Algogeneous Smart Contract, a new type of Algorand Smart Contract that is simpler to develop and uses artificial intelligence to ensure contracts are legally compliant and enforceable.
Category: Artificial Intelligence
[1157] viXra:2107.0058 [pdf] submitted on 2021-07-10 13:40:51
Authors: Vedurumudi Priyanka
Comments: 17 Pages.
In this report, we address the problem of sentiment classification on a Twitter dataset. We used a number of machine learning and deep learning methods to perform sentiment analysis, and in the end used a majority-vote ensemble of our 5 best models to achieve a classification accuracy of 83.58% on the Kaggle public leaderboard. We compared various methods for sentiment analysis on tweets (a binary classification problem). The training dataset is expected to be a CSV file of the form tweet_id, sentiment, tweet, where tweet_id is a unique integer identifying the tweet, sentiment is either 1 (positive) or 0 (negative), and tweet is the tweet text enclosed in "". Similarly, the test dataset is a CSV file of the form tweet_id, tweet. Note that CSV headers are not expected and should be removed from the training and test datasets. We used the Anaconda distribution of Python, with library requirements specific to some methods: Keras with a TensorFlow backend for Logistic Regression, MLP, RNN (LSTM), and CNN, and xgboost for XGBoost. Preprocessing, a baseline, Naive Bayes, Maximum Entropy, Decision Tree, Random Forest, multi-layer perceptron, etc. are also implemented.
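The majority-vote ensembling step described above can be sketched as follows (an illustrative reimplementation, not the report's code):

```python
from collections import Counter

def majority_vote(predictions):
    # predictions: one inner list of labels per model, aligned by sample.
    # Returns the most common label per sample; with an odd number of
    # models and binary labels, ties cannot occur.
    ensembled = []
    for sample_preds in zip(*predictions):
        ensembled.append(Counter(sample_preds).most_common(1)[0][0])
    return ensembled

model_outputs = [
    [1, 0, 1, 1],   # model A
    [1, 1, 0, 1],   # model B
    [0, 0, 1, 1],   # model C
    [1, 0, 1, 0],   # model D
    [1, 0, 0, 1],   # model E
]
print(majority_vote(model_outputs))   # [1, 0, 1, 1]
```

Each of the five models casts one vote per tweet, and the ensemble prediction is the label with the most votes.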
Category: Artificial Intelligence
[1156] viXra:2106.0084 [pdf] submitted on 2021-06-14 17:07:54
Authors: Souvik Sengupta
Comments: 10 Pages. [Corrections are made by viXra Admin to comply with the rules of viXra.org]
After one year from the start of the COVID-19 pandemic in India, the country is now seeing a steady decay in the number of daily new cases and active cases. Although the vaccination process is about to start from mid-January 2021, it will not affect the number of daily cases for at least the next three to four months, for obvious reasons such as phase-wise implementation and the six to eight weeks required from the first dose to develop immunity. Therefore, the key question now is where we will be at the end of the first quarter of 2021, and what the numbers of new and active cases could be before vaccination immunity starts working. This paper analyzes the growth and decay pattern of Indian COVID-19 cases with the help of SEIR epidemiological modeling, ARIMA statistical modeling, and time-series analysis by LSTM. The models learn the parameter and hyper-parameter values best suited to describing the pattern of the COVID-19 pandemic in India, and are then used to predict the numbers for India by the end of March 2021. It is forecast that the number of new cases will come down to near 5,000 per day, active cases to near 40,000, and the total number of infected may reach 11.1 million if the current pattern holds.
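A minimal SEIR step in the spirit of the paper's epidemiological model might look like this (Euler integration with placeholder parameters, not the values fitted to Indian data):

```python
def seir_step(s, e, i, r, beta, sigma, gamma, n, dt=1.0):
    # One Euler step of the SEIR compartmental model:
    #   dS/dt = -beta*S*I/N          (susceptible become exposed)
    #   dE/dt =  beta*S*I/N - sigma*E (exposed become infectious)
    #   dI/dt =  sigma*E - gamma*I    (infectious recover)
    #   dR/dt =  gamma*I
    ds = -beta * s * i / n
    de = beta * s * i / n - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

# Illustrative run; beta, sigma, gamma here are generic placeholders.
n = 1_000_000
state = (n - 100, 50, 50, 0)          # S, E, I, R
for _ in range(100):
    state = seir_step(*state, beta=0.3, sigma=1 / 5.2, gamma=1 / 10, n=n)
print(round(state[3]))                 # recovered after 100 simulated days
```

Fitting beta, sigma, and gamma to observed case counts (as the paper does) is what turns this toy loop into a forecasting tool.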
Category: Artificial Intelligence
[1155] viXra:2106.0071 [pdf] submitted on 2021-06-12 18:39:56
Authors: Ashrith Appani
Comments: 11 Pages.
Background subtraction is a common pre-processing step in computer vision and video processing for object tracking, people recognition, and other tasks. Several successful background-subtraction algorithms have recently been proposed; however, nearly all of the best-performing ones are supervised. The availability of some annotated frames of the test video during training is critical to their performance. As a result, there is no literature on their performance on completely "unseen" videos. We provide a new supervised background-subtraction technique for unseen videos (BSUV-Net) based on a fully-convolutional neural network. The current frame and two background frames captured at different time scales, along with their semantic segmentation maps, are fed into our network. We also offer a new data-augmentation strategy that mitigates the influence of illumination differences between the background frames and the current frame in order to limit the risk of overfitting. In terms of F-measure, recall, and precision, BSUV-Net beats state-of-the-art algorithms evaluated on unseen videos of the CDNet-2014 dataset.
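For contrast with the supervised BSUV-Net, a classical unsupervised baseline, a running-average background model with per-pixel thresholding (not the paper's method), can be sketched in a few lines:

```python
def update_background(bg, frame, alpha=0.05):
    # Exponential running average: bg <- (1-alpha)*bg + alpha*frame,
    # so slow scene changes are absorbed into the background model.
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=30):
    # A pixel is foreground when its grayscale intensity deviates from
    # the background model by more than the threshold.
    return [abs(f - b) > threshold for b, f in zip(bg, frame)]

bg = [100.0, 100.0, 100.0, 100.0]      # learned background (flat wall)
frame = [101.0, 99.0, 180.0, 175.0]    # an object covers the right half
mask = foreground_mask(bg, frame)
print(mask)                            # [False, False, True, True]
bg = update_background(bg, frame)      # slowly absorb the new frame
```

Such pixel-wise models need no annotation but fail under illumination changes and camouflage, which is the gap deep supervised methods like BSUV-Net aim to close.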
Category: Artificial Intelligence
[1154] viXra:2106.0040 [pdf] submitted on 2021-06-07 07:02:56
Authors: Jovial Joe Jayarson
Comments: 3 Pages. Best paper award in NCGCE 21. Mr. Ebin PM is the author's guide.
It is no secret that AI is an upcoming titan. Even though people are stunned to hear that AI has been around for nearly a century, thanks to advances in computational methods and resources, AI today peaks like never before. As a glimpse into the field of digit recognition, this project aims to understand the underlying cogs and wheels on which neural networks spin. This paper elucidates a project that solves Sudoku puzzles drawn and written by hand. The paraphernalia for the project includes the programming language Python3; the libraries OpenCV, Numpy, and Keras; and the MNIST handwritten digit database. Digit recognition is a classical problem which introduces neurons, neural networks, connections, hidden layers, weights, biases, activation functions such as the sigmoid, back-propagation, and other related topics. The algorithm employed in the project to solve the Sudoku is also explored in this paper.
Category: Artificial Intelligence
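The Sudoku-solving step of such a project is commonly done with backtracking; a self-contained sketch (an assumption about the project's algorithm, not its actual code):

```python
def valid(board, r, c, d):
    # Digit d must not repeat in row r, column c, or the 3x3 box.
    if d in board[r]:
        return False
    if any(board[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[br + i][bc + j] != d for i in range(3) for j in range(3))

def solve(board):
    # Backtracking: find an empty cell (0), try each legal digit, recurse.
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for d in range(1, 10):
                    if valid(board, r, c, d):
                        board[r][c] = d
                        if solve(board):
                            return True
                        board[r][c] = 0
                return False
    return True  # no empty cell left: solved

puzzle = [
    [5, 3, 0, 0, 7, 0, 0, 0, 0],
    [6, 0, 0, 1, 9, 5, 0, 0, 0],
    [0, 9, 8, 0, 0, 0, 0, 6, 0],
    [8, 0, 0, 0, 6, 0, 0, 0, 3],
    [4, 0, 0, 8, 0, 3, 0, 0, 1],
    [7, 0, 0, 0, 2, 0, 0, 0, 6],
    [0, 6, 0, 0, 0, 0, 2, 8, 0],
    [0, 0, 0, 4, 1, 9, 0, 0, 5],
    [0, 0, 0, 0, 8, 0, 0, 7, 9],
]
if solve(puzzle):
    print(puzzle[0])
```

In the project described, the digits recognised by the neural network would populate `puzzle` before the solver runs.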
[1153] viXra:2105.0176 [pdf] submitted on 2021-05-31 12:17:35
Authors: Abdurrahim Yilmaz, Dilanur Bayraktar, Melih Akman, Cemre Sahinoglu, Huseyin Uvet
Comments: 3 Pages.
In this paper, a detailed study of gesture classification using a dataset from Kaggle, together with optimization of the dataset, is presented. The machine learning algorithms used to conduct the research are the SGD, kNN, SVM, MLP, Gaussian Naive Bayes, Random Forest, LightGBM, XGBoost, and CatBoost classifiers. The results are compared with each other to conclude which models perform best in gesture classification. Except for the Gaussian Naive Bayes classifier, all methods achieved high accuracy.
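One of the compared classifiers, kNN, is simple enough to sketch from scratch (an illustrative toy version on made-up 2-D gesture features, not the paper's pipeline):

```python
import math
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    # Vote among the k nearest training points by Euclidean distance.
    dists = sorted((math.dist(x, query), y) for x, y in zip(train_x, train_y))
    votes = [y for _, y in dists[:k]]
    return Counter(votes).most_common(1)[0][0]

# Two toy gesture clusters in a 2-D feature space (hypothetical labels)
train_x = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
train_y = ["fist", "fist", "fist", "palm", "palm", "palm"]
print(knn_predict(train_x, train_y, (0.5, 0.5)))   # fist
print(knn_predict(train_x, train_y, (5.5, 5.5)))   # palm
```

The real study would use scikit-learn's tuned implementations on the Kaggle feature vectors, but the nearest-neighbour voting principle is the same.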
Category: Artificial Intelligence
[1152] viXra:2105.0141 [pdf] submitted on 2021-05-24 21:37:09
Authors: Ruolin Jiu
Comments: 16 Pages.
This paper presents a completely new learning rule for neural networks, similar to the learning rule in the brain and completely different from gradient descent. This learning rule is the foundation and key of AI memory, and will open huge growth potential for Artificial Intelligence.
Category: Artificial Intelligence
[1151] viXra:2105.0138 [pdf] submitted on 2021-05-23 07:45:16
Authors: Jan Helm
Comments: 45 Pages.
This paper presents, in Part 1, the basic theory of neural networks and, building on the standard (global) backpropagation algorithm, introduces the local backpropagation algorithm: a layer-recurrent gradient algorithm with a layer-specific target vector. In Part 2, it presents calculated application examples for global backpropagation networks, local backpropagation networks, and evolving cross-mutated networks.
Category: Artificial Intelligence
[1150] viXra:2105.0095 [pdf] submitted on 2021-05-17 12:53:33
Authors: J Gerard Wolff
Comments: 32 Pages.
This article is about the origin, development, and benefits of the "SP System" (SPS), which means the "SP Theory of Intelligence" and its realisation in the "SP Computer Model" (SPCM). The SPS is radically different from deep neural networks (DNNs), with many advantages compared with DNNs. As will be described, the SPS provides a promising foundation for the development of human-like broad AI. The SPS was inspired in part by: evidence for the importance of information compression in human learning, perception, and cognition; and the concept of 'multiple sequence alignment' in biochemistry. That latter concept led to the development of the powerful concept of SP-multiple-alignment, a concept which is largely responsible for the intelligence-related versatility of the SPS. The main advantages of the SPS are: 1) The clear potential of the SPS to solve 19 problems in AI research; 2) Versatility of the SPS in aspects of intelligence, including unsupervised learning, and several forms of reasoning; 3) Versatility of the SPS in the representation and processing of knowledge; 4) Seamless integration of diverse aspects of intelligence and diverse forms of knowledge, in any combination, a kind of integration that appears to be necessary in any artificial system that aspires to the fluidity and adaptability of the human mind; 5) Several other potential benefits and applications of the SPS. It is envisaged that the SPCM will provide the basis for the development of a first version of the "SP Machine", with high levels of parallel processing and a user-friendly user interface. All software in the SP Machine would be open-source so that clones of the SP Machine may be created anywhere by individuals or groups, to facilitate further research and development of the SP System.
Category: Artificial Intelligence
[1149] viXra:2105.0084 [pdf] submitted on 2021-05-14 01:08:18
Authors: Milad Keramati
Comments: 5 Pages.
For an agent facing a problem, a situation can be categorized into different patterns, and action can be taken based on the available information (known as a method) as opposed to a simple value. Doing so decreases the variety of situations and actions and, as a result, simplifies the problem. Simple patterns and methods are generated at first, but by detecting important patterns and methods and creating similar ones, the agent becomes able to better recognize the situation it is in and to find better solutions for the patterns, systematically broadening its knowledge over time.
By memorizing feelings (or rewards) and action results (situations) in a pattern, it is possible to build a tree of possible outcomes of an action related to a pattern and to choose the action of the pattern that profits us the most, by predicting future feelings and calculating their value; the accuracy of the prediction is known from the similarity (or consistency) and the number of results (or confidence).
I also give my opinion and define some standards regarding artificial intelligence, reinforcement learning, and agent design in this paper.
Category: Artificial Intelligence
[1148] viXra:2105.0033 [pdf] submitted on 2021-05-07 10:36:30
Authors: Fuyuan Xiao
Comments: 5 Pages.
In this paper, CET is generalized to the quantum framework of Hilbert space in an open world, called generalized quantum evidence theory (GQET). Differing from classical GET, interference effects are involved in GQET. In particular, when a GQBBA reduces to a classical GBBA, the interference effects disappear, so that the GQB and GQP functions of GQET degenerate to the classical GBel and GPl functions of classical GET, respectively.
Category: Artificial Intelligence
[1147] viXra:2104.0145 [pdf] submitted on 2021-04-24 01:23:39
Authors: Xiangjun Mi, Chongru Huang, Bingyi Kang
Comments: 29 Pages.
How to obtain negation knowledge is a crucial topic, especially in the field of artificial intelligence. Limited work has been done on the negation of a basic probability assignment (BPA), even though negation itself has been studied in depth throughout the literature. In particular, the intensity level of negation enforcement has not yet been investigated. Moreover, the main characteristic of intelligent systems is flexibility, the ability to represent knowledge according to each situation, and researchers tend to express the need for cognitive range in negation. It would therefore be very useful to find a wide range of negations under intensity levels in a BPA. Based on these ideas, this paper first proposes a new approach for finding the negation of a BPA and gives a domain of intensities in which the negation is executed, called the negation space. We then investigate a number of desirable properties and explore their correlation with entropy. Numerical examples show the characteristics of the proposed negation solution. Finally, we validate the efficiency of the proposed method from the point of view of the Dempster-Shafer belief structure.
Category: Artificial Intelligence
[1146] viXra:2104.0111 [pdf] submitted on 2021-04-19 07:35:07
Authors: Lingge Zhou, Xiangjun Mi, Chongru Huang, Yanan Li, Bingyi Kang
Comments: 39 Pages.
Dempster-Shafer evidence theory (DST) is an effective tool for data fusion. In this theory, how to handle conflicts between evidences is still a significant and open issue. In this paper, the best-worst method (BWM) is extended to conflict management in DST. Firstly, a way to determine the best and worst basic probability assignment (BPA) is proposed. Secondly, a novel strategy for determining the optimal weights of BPAs using the BWM method is developed. Compared with traditional measure-based conflict management methods, the proposed method has three advantages: (1) A consistency ratio is considered for BPAs to check the reliability of the comparisons, producing more reliable results. (2) The final fusion result has less uncertainty, which is more conducive to improving the performance of decision making. (3) The number of BPA comparisons performed during operation (in conflict management) is reduced (especially matrix-based ones). A practical application in motor rotor fault diagnosis is used to illustrate the effectiveness and practicability of the proposed methodology.
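For background, the classical Dempster's rule of combination that underlies fusion in DST (standard theory, not the paper's BWM extension) can be implemented as:

```python
def dempster_combine(m1, m2):
    # Dempster's rule: intersect focal elements, then renormalise by
    # 1 - K, where K is the mass on conflicting (empty) intersections.
    combined = {}
    conflict = 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = frozenset(a) & frozenset(b)
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two BPAs over the frame {a, b}; focal elements given as tuples.
m1 = {("a",): 0.6, ("a", "b"): 0.4}
m2 = {("b",): 0.3, ("a", "b"): 0.7}
combined12 = dempster_combine(m1, m2)
print(combined12)
```

High conflict K is exactly the failure mode that conflict-management schemes such as the paper's BWM-based weighting are designed to mitigate before combination.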
Category: Artificial Intelligence
[1145] viXra:2104.0069 [pdf] submitted on 2021-04-12 12:15:16
Authors: Egger Mielberg
Comments: 14 Pages.
The truly transparent and predictable work of the artificial intelligence being created can significantly improve the quality of human life, as well as its safety.
In our opinion, the self-awareness of artificial intelligence is achievable only if it is independent in making any decision.
We present three basic laws of artificial intelligence focused primarily on the possibility of their practical implementation.
Category: Artificial Intelligence
[1144] viXra:2104.0005 [pdf] submitted on 2021-04-03 21:35:10
Authors: Tanvir Rahman, Rafia Akhter, Kehinde Lawal, Shamim Ahmed Mazumder, Tamanna Afroz, Ataur Rahman
Comments: 3 Pages.
Forecasting the price and trend of the stock market has been regarded as a challenging task because of its chaotic nature. The stock market is essentially a non-linear, non-parametric, noisy, and deterministically chaotic system because of liquid money, stock adequacy, human behavior, news related to the stock market, gambling, international money rates, and so on. In a country like Bangladesh, it is very difficult to find any prediction of the stock market, especially the Dhaka stock market, because its trends and forecasts depend on many factors. Understanding the patterns of the stock market and predicting their development and changes are research hotspots in academic and financial circles. Because financial data contain complex, incomplete, and fuzzy information, predicting their development trends is an extremely difficult challenge. Fluctuations in financial data depend on a myriad of correlated, constantly changing factors. In this paper, financial product price data are treated as a one-dimensional series generated by the projection of a chaotic system composed of multiple factors into the time dimension, and the price series is reconstructed using the time-series phase-space reconstruction (PSR) method. An RNN-based prediction model is designed based on the PSR method and long short-term memory networks (LSTMs) and used to predict stock prices; for predicting stock-market trends we use Facebook's open-source Prophet model. The proposed and several other prediction models are used to predict multiple stock indices for different periods. A comparison of the results shows that the proposed prediction model has higher prediction accuracy.
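The phase-space reconstruction (PSR) step is typically a time-delay embedding; a minimal sketch (illustrative dimensions and delays, not the paper's fitted parameters):

```python
def phase_space_reconstruct(series, dim=3, tau=1):
    # Takens-style time-delay embedding: each scalar observation x[t]
    # becomes the vector (x[t], x[t+tau], ..., x[t+(dim-1)*tau]).
    span = (dim - 1) * tau
    return [
        tuple(series[t + k * tau] for k in range(dim))
        for t in range(len(series) - span)
    ]

prices = [10, 11, 13, 12, 14, 15, 13]
embedded = phase_space_reconstruct(prices, dim=3, tau=2)
print(embedded)
```

The embedded vectors, rather than the raw scalar series, are what the LSTM-based model would consume as input windows.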
Category: Artificial Intelligence
[1143] viXra:2103.0194 [pdf] submitted on 2021-03-31 17:29:46
Authors: Tong Geng, Ang Li, Tianqi Wang, Chunshu Wu, Yanfei Li, Antonino Tumeo, Shuai Che, Steve Reinhardt, Martin Herbordt
Comments: 13 Pages.
The recent development of deep learning has mostly been focused on Euclidean data, such as images, videos, and audio. However, most real-world information and relationships are often expressed in graphs. Graph convolutional networks (GCNs) appear as a promising approach to efficiently learn from graph data structures, showing advantages in several practical applications such as social network analysis, knowledge discovery, 3D modeling, and motion capture. However, practical graphs are often extremely large and unbalanced, posing significant performance demands and design challenges on hardware dedicated to GCN inference.
In this paper, we propose an architecture design called Ultra-Workload-Balanced-GCN (UWB-GCN) to accelerate graph convolutional network inference. To tackle the major performance bottleneck of workload imbalance, we propose two techniques: dynamic local sharing and dynamic remote switching, both of which rely on hardware flexibility to achieve performance auto-tuning with negligible area or delay overhead. Specifically, UWB-GCN is able to effectively profile the sparse graph pattern while continuously adjusting the workload distribution among parallel processing elements (PEs). After converging, the ideal configuration is reused for the remaining iterations. To the best of our knowledge, this is the first accelerator design targeted at GCNs and the first work that auto-tunes workload balance in an accelerator at runtime through hardware, rather than software, approaches. Our methods can achieve near-ideal workload balance in processing sparse matrices. Experimental results show that UWB-GCN can finish inference on the Nell graph (66K vertices, 266K edges) in 8.1ms, corresponding to 199x, 16x, and 7.5x speedups, respectively, over the CPU, the GPU, and the baseline GCN design without workload auto-tuning.
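The workload-balancing idea, distributing sparse rows so that no PE is overloaded, can be illustrated in software with a greedy least-loaded assignment (a much-simplified software analogue of the hardware auto-tuning, not the UWB-GCN design):

```python
import heapq

def balance_rows(row_nnz, num_pes):
    # Greedy longest-processing-time scheduling: take rows in decreasing
    # nonzero count and always assign to the currently least-loaded PE.
    heap = [(0, pe) for pe in range(num_pes)]
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_pes)]
    for row, nnz in sorted(enumerate(row_nnz), key=lambda x: -x[1]):
        load, pe = heapq.heappop(heap)
        assignment[pe].append(row)
        heapq.heappush(heap, (load + nnz, pe))
    return assignment

# A skewed nonzero distribution, as in power-law graphs
nnz = [50, 3, 2, 40, 1, 1, 30, 2]
parts = balance_rows(nnz, num_pes=3)
loads = [sum(nnz[r] for r in p) for p in parts]
print(loads)   # [50, 40, 39]
```

In the paper's setting the analogous rebalancing happens dynamically in hardware, using local sharing and remote switching among neighbouring PEs rather than a global sort.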
Category: Artificial Intelligence
[1142] viXra:2103.0185 [pdf] submitted on 2021-03-29 02:32:14
Authors: Lifeng Gu
Comments: 5 Pages.
Most existing metric learning methods focus on learning a similarity or distance measure relying on similar and dissimilar relations between sample pairs. However, pairs of samples cannot simply be identified as similar or dissimilar in many real-world applications, e.g., multi-label learning and label distribution learning. To this end, the relation alignment metric learning (RAML) framework was proposed to handle the metric learning problem in those scenarios. But RAML learns a linear metric, which cannot model complex datasets. Combining deep learning with the RAML framework, we propose a hierarchical relationship alignment metric learning model, HRAML, which uses the concept of relationship alignment to model metric learning problems under multiple learning tasks, and makes full use of the consistency between the sample-pair relationship in the feature space and the sample-pair relationship in the label space. Further, we organize several experiments divided by learning task, and verify the better performance of HRAML against many popular methods and the RAML framework.
Category: Artificial Intelligence
[1141] viXra:2103.0184 [pdf] submitted on 2021-03-29 02:37:54
Authors: Lifeng Gu
Comments: 9 Pages.
In recent years, representation learning has become the research focus of the machine learning community. Large-scale pre-trained neural networks have become the first step towards general intelligence. The key to the success of neural networks lies in their abstract representation capabilities for data. Several learning fields are in fact discussing how to learn representations, yet a unified perspective is lacking. We convert the representation learning problem under multiple tasks into a ranking problem and, taking the ranking problem as a unified perspective, solve representation learning under different tasks by optimizing an approximate NDCG loss. Experiments under different learning tasks such as classification, retrieval, multi-label learning, regression, and self-supervised learning prove the superiority of the approximate NDCG loss. Further, under the self-supervised learning task, the training data is transformed by data augmentation to improve the performance of the approximate NDCG loss, which shows that the approximate NDCG loss can make full use of the information in unsupervised training data.
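The exact NDCG that the approximate loss targets is computed as follows (a standard formulation with the (2^rel - 1) gain; the paper's differentiable approximation is not reproduced here):

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: gain (2^rel - 1), discount log2(rank+1)
    # with ranks starting at 1 (hence rank + 2 for a 0-based enumerate).
    return sum(
        (2 ** rel - 1) / math.log2(rank + 2)
        for rank, rel in enumerate(relevances)
    )

def ndcg(relevances):
    # Normalise by the DCG of the ideal (descending-relevance) ordering.
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(ndcg([3, 2, 3, 0, 1]))   # imperfect ranking: strictly below 1
print(ndcg([3, 3, 2, 1, 0]))   # ideal ranking: 1.0
```

Because the sort inside NDCG is non-differentiable, training requires a smooth surrogate, which is the approximate NDCG loss the abstract optimises.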
Category: Artificial Intelligence
[1140] viXra:2103.0174 [pdf] submitted on 2021-03-28 21:30:36
Authors: Lifeng Gu
Comments: 11 Pages.
Science is used to discover the laws of the world; machine learning can be used to discover the laws of data. In recent years, there has been more and more research on interpretability in the machine learning community. We hope that machine learning methods are safe and interpretable, and that they can help us find meaningful patterns in data. In this paper, we focus on the interpretability of deep representations. We propose an interpretable method of representation based on mutual information, which summarizes the interpretation of a representation into three types of information between the input data and the representation. We further propose the MI-LR module, which can be inserted into a model to estimate the amount of information that explains the model's representation. Finally, we verify the method through visualization of a prototype network.
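Mutual information between two discrete variables, the quantity the proposed method builds on, can be computed exactly for small samples (an illustrative sketch, not the MI-LR estimator, which must approximate this for continuous deep representations):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    # I(X;Y) = sum over (x,y) of p(x,y) * log( p(x,y) / (p(x) p(y)) ),
    # in nats, from empirical joint and marginal frequencies.
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    return sum(
        (c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

labels = [0, 0, 1, 1, 0, 1]
print(mutual_information(labels, labels))    # identical: full entropy, ln 2
print(mutual_information(labels, [7] * 6))   # constant code: 0.0
```

A representation that preserves all label information attains the first value; one that discards it collapses to the second, which is the axis along which information-based interpretability methods reason.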
Category: Artificial Intelligence
[1139] viXra:2103.0148 [pdf] submitted on 2021-03-23 06:29:02
Authors: Yuanpeng He, Yong Deng
Comments: 32 Pages.
In real life, occurrences of a series of things are supposed to come in an order. Therefore, it is necessary to regard sequence as a crucial factor in managing different kinds of things in a fuzzy environment. However, few related researches have provided a reasonable solution to this demand, and how to measure the degree of uncertainty of ordinal fuzzy sets is still an open issue. To address this issue, a novel ordinal relative fuzzy entropy is proposed in this paper, taking the order of propositions into consideration when measuring the level of uncertainty in a fuzzy environment. Compared with previously proposed entropies, the effects on the degree of fuzzy uncertainty brought by the sequence of propositions are embodied in the values of measurement using the method proposed in this article. Moreover, some numerical examples are offered to verify the correctness and validity of the proposed entropy.
Category: Artificial Intelligence
[1138] viXra:2103.0135 [pdf] submitted on 2021-03-20 20:03:20
Authors: Narayanan Arvind, Saravanan Mugund, Avinash Kumar Singh
Comments: 6 Pages. Presented at Samudramanthan 2021, Indian Institute of Technology Kharagpur
Maritime digital KYC processes are susceptible to various face-spoofing attacks. When an unauthorized person tries to enter the authentication system by presenting a fraudulent image and/or video, it is termed a spoofing attack. Face anti-spoofing has typically been approached with texture-based models (e.g. Local Binary Patterns) combined with machine learning (e.g. KNN). The aim of this study is to build a robust face anti-spoofing system using deep convolutional neural networks for maritime digital KYC processes. The research is based on analyzing the features of genuine and fake images. We use the freely available NUAA photograph imposter database for our face anti-spoofing study. The database has 7500 labelled imposter and 5100 labelled client face images, respectively. We split the dataset into train and test sets with an 80%-20% split ratio using stratified sampling. 2D convolutional layers combined with 2D MaxPooling layers, followed by Flattening and Dense layers, are employed for our deep network architecture. The research is carried out using the scikit-learn and keras open-source libraries for Python. The training accuracy of the reported model is 100% and the testing accuracy is 99.92%. The accuracy of our deep learning approach surpasses that of all the models available in the literature.
Category: Artificial Intelligence
[1137] viXra:2103.0095 [pdf] submitted on 2021-03-15 20:31:15
Authors: Tanvir Rahman
Comments: 3 Pages.
Pneumonia is a life-threatening infectious disease affecting one or both lungs in humans, commonly caused by the bacterium Streptococcus pneumoniae. The present study aimed to examine the risk factors for death due to pneumonia in young children. One or more in three deaths in Asia is caused by pneumonia, as reported by the World Health Organization (WHO). Chest X-rays used to diagnose pneumonia need expert radiologists for evaluation. Thus, developing an automatic system for detecting pneumonia would be beneficial: it could save many lives and help treat and control the disease without delay, particularly in remote areas. Due to the success of deep learning algorithms in analyzing medical images, Convolutional Neural Networks (CNNs) have gained much attention for disease classification. In addition, features learned by CNN models pre-trained on large-scale datasets are very useful in image classification tasks. In this work, we appraise the functionality of pre-trained CNN models utilized as feature extractors followed by different classifiers for the classification of abnormal and normal chest X-rays, and analytically determine the optimal CNN model for the purpose. The statistical results obtained demonstrate that pre-trained CNN models employed along with supervised classifier algorithms can be very beneficial in analyzing chest X-ray images, specifically to detect pneumonia.
Category: Artificial Intelligence
[1136] viXra:2103.0056 [pdf] submitted on 2021-03-11 16:49:40
Authors: Khosnur Alam, Rima Akter
Comments: 8 Pages.
On December 31, 2019, a new virus started spreading in Wuhan, China; by April 2020 the world had seen the worst pandemic of the century. The World Health Organization urges everyone to test, but testing is scarce and costly for third-world countries. A cheap and easy testing method is now badly needed for countries like Bangladesh, so we want to develop a computer-based detection system that can identify COVID-19 patients in a fast and easy way. Chest X-ray images of COVID-19 patients are similar to those of pneumonia patients; the proposed system can separate COVID-19 X-ray images from pneumonia. The main objective of this research is to develop a system that can detect COVID-19 and pneumonia from X-ray images using a deep learning approach.
Category: Artificial Intelligence
[1135] viXra:2103.0045 [pdf] submitted on 2021-03-06 21:17:03
Authors: Chandan Maloo, Akhil Kaza
Comments: 4 Pages.
The popularity, cost-effectiveness, and ease of buying and selling that marketplaces like Craigslist and OfferUp offer to users have been plagued by a rising number of unsolicited spam listings and fraudulent transactions; in some extreme cases law enforcement also needs to be involved. Driven by the need to protect OfferUp users from this growing menace, research into spam and fraud-listing filtering/detection systems has been increasingly active in the last decade. However, the adaptive nature of scammers and fraudsters has often rendered most of these systems ineffective. While several spam detection models have been reported in the literature, their performance on out-of-sample test data shows room for improvement. Presented in this research is an improved spam detection model based on the Locality Sensitive Hashing algorithm, which to the best of our knowledge has received little attention in spam/fraud detection problems. Experimental results show that the proposed model outperforms earlier approaches across a wide range of evaluation metrics inside OfferUp.
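A MinHash-style Locality Sensitive Hashing sketch, the family of techniques the model builds on, can flag near-duplicate listings cheaply (an illustrative toy version with made-up listings, not the proposed model):

```python
import zlib

def minhash_signature(tokens, num_hashes=64):
    # One cheap hash family: crc32 of the token salted with the hash
    # index. (Python's hash() is randomized per process, so a stable
    # checksum is used instead.)
    return [
        min(zlib.crc32(f"{i}:{t}".encode()) for t in tokens)
        for i in range(num_hashes)
    ]

def estimated_jaccard(sig_a, sig_b):
    # The fraction of agreeing minhashes estimates Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

spam1 = "cheap iphone brand new best price call now".split()
spam2 = "cheap iphone brand new best price text now".split()
ham = "selling my old couch pickup only this weekend".split()

s1, s2, s3 = (minhash_signature(t) for t in (spam1, spam2, ham))
print(estimated_jaccard(s1, s2))   # high: near-duplicate spam listings
print(estimated_jaccard(s1, s3))   # low: unrelated listing
```

Because signatures are short and comparable in constant time, LSH scales to marketplace volumes where pairwise text comparison would not, which is its appeal for spam detection.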
Category: Artificial Intelligence
[1134] viXra:2102.0024 [pdf] submitted on 2021-02-04 01:42:15
Authors: Klevinda Fili, Kanishk Dwivedi
Comments: 6 Pages.
Patient pooling has been a major problem in the field of drug discovery and drug investigation. Even more daunting is providing a large-scale solution for the classification of diseases and finding the side effects of personalised or precision medicine by clustering the pool and finding similar investigations for pharmacovigilance, drug discovery and precision medicine. This can be solved by generating patterns through machine learning and deep learning models to find common pools of similar patterns and diagnoses from clusters, and distributing them via a mobile application for large-scale patient clustering. This method is presented for precision medicine, pharmacovigilance and drug discovery. Patients' raw data is processed for classification and for personalised medicine. Patients' collective information, stored in database warehouses for clustering with advanced machine learning models applied to it, will help in pharmacovigilance and provide early information regarding demographic disease epidemics. Clustering patients' diagnoses can help to find patterns for drug discovery with respect to geographical location and similar characteristics, which has been found effective and will reduce the time needed for drug discovery.
Category: Artificial Intelligence
[1133] viXra:2101.0168 [pdf] submitted on 2021-01-27 06:10:38
Authors: Arya Roy
Comments: 27 Pages.
The availability of large amounts of computer-readable textual data, and of hardware that can process the data, has shifted the focus of knowledge projects towards deep learning architectures. Natural Language Processing, particularly the task of Named Entity Recognition (NER), is no exception. The bulk of the learning methods that have produced state-of-the-art results have changed the deep learning model, the training method used, the training data itself or the encoding of the output of the NER system. In this paper, we review significant learning methods that have been employed for NER in the recent past and how they evolved from the linear learning methods that preceded them. We also cover the progress of related tasks that are upstream or downstream of NER, e.g. sequence tagging, entity linking, etc., wherever the processes in question have also improved NER results.
Category: Artificial Intelligence
[1132] viXra:2101.0163 [pdf] submitted on 2021-01-26 20:22:30
Authors: Tanvir Rahman, Rafia Akhter
Comments: 5 Pages.
The stock market is an emerging sector in every country in the world, and many people are directly involved in it. Stock market prediction is the act of trying to determine the future value of a company's stock or another financial instrument. When publicly traded companies issue shares of stock to investors, every one of those shares is assigned a monetary value, or price. Stock prices can go up or down depending on different factors, including volatility in the market, current economic conditions, and the popularity of the company. The successful prediction of a stock's future price could yield a significant profit. Along with the development of the stock market, forecasting has become an important topic. Since the finance market has become more and more competitive, stock price prediction has been a hot research topic in the past few decades. Predicting stock prices is regarded as a challenging task because the stock market is essentially a nonlinear, non-parametric, noisy, and chaotic system. The trend of a market depends on many things, such as liquid money, human behavior, and news related to the stock market; all of these together control the behavior of trends in a stock market. With the advancement of computing technology, we use machine learning techniques, such as Support Vector Regression, K-nearest neighbor, Linear Regression, and Random Forest Regression, for analyzing time-series data to predict stock prices. In this paper, we try to develop a forecasting model by stacking multiple methods to find the best forecast of the stock price.
Category: Artificial Intelligence
[1131] viXra:2101.0122 [pdf] submitted on 2021-01-20 07:03:55
Authors: Ayoola Olafenwa
Comments: 6 Pages. "Simplifying Object Segmentation with PixelLib Library" was accepted for poster presentation at Black IN AI Workshop(Neurips2020).
PixelLib is a library created to allow easy implementation of object segmentation in real-life applications. In this paper we discuss in detail how PixelLib makes it possible for developers to implement semantic segmentation, instance segmentation, and background editing in images and videos with great simplicity.
Category: Artificial Intelligence
[1130] viXra:2101.0115 [pdf] submitted on 2021-01-18 04:51:58
Authors: Durjoy Sen Maitra, Ujjwal Bhattacharya, SK Parui
Comments: 5 Pages. Paper published in ICDAR 2015
There are many scripts in the world, several of which are used by hundreds of millions of people. Handwritten character recognition studies of several of these scripts are found in the literature. Different hand-crafted feature sets have been used in these recognition studies. However, the convolutional neural network (CNN) has recently been used as an efficient unsupervised feature vector extractor. Although such a network can be used as a unified framework for both feature extraction and classification, it is more efficient as a feature extractor than as a classifier. In the present study, we performed a certain amount of training of a 5-layer CNN for a moderately large-class character recognition problem. We used this CNN, trained for a larger-class recognition problem, for feature extraction of samples of several smaller-class recognition problems. In each case, a distinct Support Vector Machine (SVM) was used as the corresponding classifier. In particular, the CNN of the present study is trained using samples of a standard 50-class Bangla basic character database, and features have been extracted for 5 different 10-class numeral recognition problems of English, Devanagari, Bangla, Telugu and Oriya, each of which is an official Indian script. Recognition accuracies are comparable with the state-of-the-art.
Category: Artificial Intelligence
[1129] viXra:2101.0089 [pdf] submitted on 2021-01-14 12:47:14
Authors: Andrew Holster
Comments: 36 Pages. [Corrections made by viXra Admin to conform with scholarly norm]
CAT4 is proposed as a general method for representing information, enabling a powerful programming method for large-scale information systems. It enables generalised machine learning, software automation and novel AI capabilities. It is based on a special type of relation called CAT4, which is interpreted to provide a semantic representation. This is Part 1 of a five-part introduction. The focus here is on defining the key mathematical structures first, and presenting the semantic-database application in subsequent Parts. We focus in Part 1 on general axioms for the structures, and introduce key concepts. Part 2 analyses the CAT2 sub-relation of CAT4 in more detail. The interpretation of fact networks is introduced in Part 3, where we turn to interpreting semantics. We start with examples of relational and graph databases, with methods to translate them into CAT3 networks, with the aim of retaining the meaning of information. The full application to semantic theory comes in Part 4, where we introduce general functions, including the language interpretation or linguistic functions. The representation of linear symbolic languages, including natural languages and formal symbolic languages, is a function that CAT4 is uniquely suited to. In Part 5, we turn to software design considerations, to show how files, indexes, functions and screens can be defined to implement a CAT4 system efficiently.
Category: Artificial Intelligence
[1128] viXra:2101.0088 [pdf] submitted on 2021-01-14 12:53:01
Authors: Andrew Holster
Comments: 56 Pages. [Corrections made by viXra Admin to conform with scholarly norm]
CAT4 is proposed as a general method for representing information, enabling a powerful programming method for large-scale information systems. It enables generalised machine learning, software automation and novel AI capabilities. It is based on a special type of relation called CAT4, which is interpreted to provide a semantic representation. This is Part 2 of a five-part introduction. The focus here is on defining key mathematical properties of CAT2, identifying the topology and defining essential functions over a coordinate system. The analysis is from first principles. This develops on from the axioms introduced in Part 1. The interpretation of fact networks is introduced in Part 3, and the full application to semantic theory comes in Part 4, where we introduce general functions, including the language interpretation or linguistic functions. In Part 5, we turn to software design considerations, to show how files, indexes, functions and screens can be defined to implement a CAT4 system efficiently.
Category: Artificial Intelligence
[1127] viXra:2012.0224 [pdf] submitted on 2020-12-31 11:23:18
Authors: Lipeng Pan, Xiaozhuan Gao, Yong Deng
Comments: 11 Pages.
Dempster's combination rule is widely used in many applications such as information fusion and decision making. However, the computational complexity of Dempster's combination rule increases exponentially with the size of the frame of discernment. To address this issue, we propose a quantum algorithm for Dempster's combination rule based on quantum theory. The algorithm not only realizes most of the functions of Dempster's combination rule, but also effectively reduces its computational complexity on future quantum computers. Meanwhile, we carried out a simulation experiment on IBM's quantum cloud platform, and the experimental results showed that the algorithm is reasonable.
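For reference, the classical (non-quantum) Dempster combination rule that the abstract above sets out to accelerate can be written in a few lines. This sketch uses frozensets as focal elements and made-up masses for illustration; its cost grows with the product of the numbers of focal elements, which is the exponential blow-up the paper targets:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal elements
    to masses) with Dempster's rule of combination."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:  # non-empty intersection contributes mass
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:      # empty intersection is conflicting mass
                conflict += ma * mb
    k = 1.0 - conflict  # normalization constant
    if k == 0.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {s: v / k for s, v in combined.items()}

A, B = frozenset("a"), frozenset("b")
AB = A | B
m1 = {A: 0.6, AB: 0.4}
m2 = {B: 0.7, AB: 0.3}
m = dempster_combine(m1, m2)
# Combined masses are renormalized to sum to 1.
assert abs(sum(m.values()) - 1.0) < 1e-12
```

With high-conflict evidences the normalization by `1 - conflict` is exactly where the counter-intuitive behaviour noted in the literature arises.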
Category: Artificial Intelligence
[1126] viXra:2012.0207 [pdf] submitted on 2020-12-28 04:20:19
Authors: Yuanpeng He, Fuyuan Xiao
Comments: 2 Pages.
To handle uncertainties and process complex information from different sources, the quantum mass function has been proposed as an efficient method to address these issues. On the basis of the quantum mass function, many methods have been designed to indicate the differences among quantum evidences. Nevertheless, they are developed within quantum evidence theory to process traditional quantum basic probability assignments (QBPAs) and are not applicable to measuring quaternion BPAs (QTBPAs). Therefore, in this paper, a specifically customized method, the quaternion evidential distance (QED), is proposed for the generalized form of the quantum mass function, namely the quaternion mass function, to accurately demonstrate the distances among disparate evidences given as QTBPAs. Moreover, it is a pioneering attempt to investigate the differences between pieces of evidence in the plane space of quaternions; it is reliable and strictly satisfies the axioms of distance. Besides, if QTBPAs degenerate into QBPAs, QED also degenerates into the quantum evidential distance, which indicates the consistency of this new standard of measuring distances. Consequently, QED is derived from the quantum evidential distance and possesses an extensive capability to indicate dissimilarities among QTBPAs. Several numerical examples are offered to check the validity and practical availability of QED.
Category: Artificial Intelligence
[1125] viXra:2012.0142 [pdf] submitted on 2020-12-19 11:21:13
Authors: Adrià Descals, Luis Alonso, Gustau Camps-Valls
Comments: 4 Pages.
This paper introduces a methodology for predicting the year of plantation (YOP) from remote sensing data. The application has important implications for forestry management and inventorying. We exploit hyperspectral and LiDAR data in combination with state-of-the-art machine learning classifiers. In particular, we present a complete processing chain to extract spectral, textural and morphological features from both data sources. Features are then combined and fed to a Gaussian Process Classifier (GPC) trained to predict YOP in a forest area in North Carolina (US). The GPC algorithm provides accurate YOP estimates, reports spatially explicit maps and associated confidence maps, and provides sensible feature rankings.
Category: Artificial Intelligence
[1124] viXra:2012.0141 [pdf] submitted on 2020-12-19 11:23:27
Authors: Pablo Morales, Adrián Pérez-Suay, Rafael Molina, Gustau Camps-Valls, Aggelos K. Katsaggelos
Comments: 5 Pages.
Passive Millimeter Wave Images (PMMWIs) are being increasingly used to identify and localize objects concealed under clothing. Taking into account the quality of these images and the unknown position, shape, and size of the hidden objects, large data sets are required to build successful classification/detection systems. Kernel methods, in particular Gaussian Processes (GPs), are sound, flexible, and popular techniques for addressing supervised learning problems. Unfortunately, their computational cost is known to be prohibitive for large-scale applications. In this work, we present a novel approach to PMMWI classification based on the use of Gaussian Processes for large data sets. The proposed methodology relies on linear approximations to kernel functions through random Fourier features. Model hyperparameters are learned within a variational Bayes inference scheme. Our proposal is well suited for real-time applications, since its computational cost at training and test times is much lower than that of the original GP formulation. The proposed approach is tested on a unique, large, and real PMMWI database containing a broad variety of sizes, types, and locations of hidden objects.
Category: Artificial Intelligence
[1123] viXra:2012.0092 [pdf] submitted on 2020-12-11 21:22:56
Authors: Saty Raghavachary
Comments: 10 Pages.
Regarding intelligence as a ‘considered response’ phenomenon is the key notion presented in this paper. Applied to human-level intelligence, it seems to be a useful definition that can lend clarity to the following related aspects as well: mind, self/I, awareness, self-awareness, consciousness, sentience, thoughts and feelings, free will, perception, attention, cognition, expectation, prediction, and learning. Embodiment is also argued to be an essential component of an AGI's agent architecture, in order for it to attain grounded cognition, a sense of self and social learning, via direct physical experience and mental processes, all based on considered response.
Category: Artificial Intelligence
[1122] viXra:2012.0064 [pdf] submitted on 2020-12-09 09:08:40
Authors: Junjae Lee
Comments: 8 Pages.
The Invertible Rescaling Net (IRN) models the downscaling and upscaling process using Invertible Neural Networks (INNs), instead of the upscaling used in traditional single-image super-resolution (SISR) methods. As a result, it showed significantly improved performance over previous methods. However, apart from its high performance, IRN requires a lot of computation. Hence, to improve this, we replace the existing dense block with a Pixel Attention Distillation Block (PADB). In addition, we use Charbonnier loss instead of Mean Absolute Error (MAE) for the reconstruction loss. Through these improvements, we trade off some of the performance of the existing architecture for speed, and achieve higher performance than lightweight SR models using the conventional method. In addition, by improving the perceptual loss and adversarial loss, we achieve more perceptually satisfactory results than the model using the IRN+ method.
Category: Artificial Intelligence
[1121] viXra:2012.0058 [pdf] submitted on 2020-12-08 19:58:30
Authors: Ashwin Rachha, Gaurav Vanmane
Comments: 7 Pages.
The internet today has become an unrivalled source of information, where people converse on content-based websites such as Quora, Reddit, StackOverflow and Twitter, asking doubts and sharing knowledge with the world. A major problem arising on such websites is the proliferation of toxic comments or instances of insincerity, wherein users, instead of maintaining a sincere motive, indulge in spreading toxic and divisive content. The straightforward course of action in confronting this situation is detecting such content beforehand and preventing it from subsisting online. In recent times, Transfer Learning in Natural Language Processing has seen unprecedented growth. Today, with the existence of transformers and various state-of-the-art innovations, tremendous progress has been made in various NLP domains. The introduction of BERT caused quite a stir in the NLP community: when published, BERT dominated performance benchmarks and thereby inspired many other authors to experiment with it and publish similar models. This led to the development of a whole BERT family, each member specialized for a different task. In this paper we solve the Insincere Questions Classification problem by fine-tuning four cutting-edge models, viz. BERT, RoBERTa, DistilBERT and ALBERT.
Category: Artificial Intelligence
[1120] viXra:2012.0051 [pdf] submitted on 2020-12-08 09:02:26
Authors: Ramesh Chandra Bagadi
Comments: 16 Pages.
In this research investigation, the authors present a detailed scheme of a theoretical model for an approximate one-step forecasting scheme. Firstly, the authors coin notions of Similarity and Dissimilarity. The authors then coin a notion of a causal one-step forecast for any given sequence. In parallel, the authors define concepts of Higher Order Sequences of Primes and an RL Normalization Scheme, based on which alternate, better formulae for a one-step forecast of any given sequence are derived.
Category: Artificial Intelligence
[1119] viXra:2012.0048 [pdf] submitted on 2020-12-08 08:11:02
Authors: Fatih Nar, Adrián Pérez-Suay, José Antonio Padrón, Gustau Camps-Valls
Comments: 4 Pages.
This work tackles the target detection problem through the well-known global RX method. The RX method models the clutter as a multivariate Gaussian distribution, and has been extended to nonlinear distributions using kernel methods. While the kernel RX can cope with complex clutters, it requires a considerable amount of computational resources as the number of clutter pixels grows. Here we propose random Fourier features to approximate the Gaussian kernel in kernel RX; consequently, our development keeps the accuracy of the nonlinearity while reducing the computational cost, which is now controlled by a hyperparameter. Results on both synthetic and real-world image target detection problems show the space and time efficiency of the proposed method while providing high detection performance.
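The random Fourier feature trick used in the abstract above (due to Rahimi and Recht) can be sketched in a few lines of pure Python. The dimensions, bandwidth and seed below are arbitrary illustrative choices; the point is that an explicit finite-dimensional feature map approximates the Gaussian kernel, so kernel-RX statistics can be computed at a cost linear in the number of pixels:

```python
import math
import random

random.seed(0)
d, D, sigma = 5, 2000, 1.0  # input dim, number of features, kernel bandwidth

# Random Fourier features: z(x)_i = sqrt(2/D) * cos(w_i . x + b_i), with
# w_i ~ N(0, I / sigma^2) and b_i ~ U[0, 2*pi]. Then z(x) . z(y) is an
# unbiased estimate of the Gaussian kernel exp(-||x - y||^2 / (2*sigma^2)).
W = [[random.gauss(0, 1.0 / sigma) for _ in range(d)] for _ in range(D)]
b = [random.uniform(0, 2 * math.pi) for _ in range(D)]

def features(x):
    return [math.sqrt(2.0 / D) *
            math.cos(sum(wj * xj for wj, xj in zip(w, x)) + bi)
            for w, bi in zip(W, b)]

x = [random.gauss(0, 1) for _ in range(d)]
y = [random.gauss(0, 1) for _ in range(d)]
exact = math.exp(-sum((xi - yi) ** 2 for xi, yi in zip(x, y)) / (2 * sigma ** 2))
approx = sum(zx * zy for zx, zy in zip(features(x), features(y)))
assert abs(exact - approx) < 0.15  # approximation error shrinks as D grows
```

With this map, the clutter mean and covariance are estimated once in the D-dimensional feature space, and the RX score becomes a Mahalanobis distance there, with D playing the role of the cost-controlling hyperparameter mentioned in the abstract.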
Category: Artificial Intelligence
[1118] viXra:2012.0025 [pdf] submitted on 2020-12-06 12:32:48
Authors: Shiyou Lian
Comments: 19 Pages.
Imprecise-information processing will play an indispensable role in intelligent systems, especially in anthropomorphic intelligent systems (such as human-machine dialogue and intelligent robots). Traditionally, fuzzy set theory is used to deal with imprecise information, but it has some important theoretical and technical problems that have not been solved very well. Recently, a new theoretical and technological system of imprecise-information processing has been founded (see literature [1]) which is different from fuzzy technology. The system stems from the formation principle of imprecise information and has solid mathematical and logical bases, so it has many advantages beyond fuzzy technology. The system provides a technological platform for relevant applications and lays a theoretical foundation for further research.
Category: Artificial Intelligence
[1117] viXra:2012.0023 [pdf] submitted on 2020-12-04 22:56:49
Authors: Saty Raghavachary, Lurong Lei
Comments: 9 Pages.
Computational modeling of natural cognition is a crucial step towards achieving the grand goal of human-level computational intelligence. Successful ideas from existing models, and possibly newer ones, could be assembled to create a unified computational framework (e.g. the Standard Model of the Mind, which attempts to unify three leading cognitive architectures); this would be of great use in AI, robotics, neuroscience and cognitive science. This short position paper proposes the following: a VR-based system provides the most expedient, scalable and visually verifiable way to implement, test and refine a cognitive mind model (which would always be embodied in a character in a virtual world). Such a setup is discussed in the paper, including its advantages and drawbacks over alternative implementations.
Category: Artificial Intelligence
[1116] viXra:2011.0190 [pdf] submitted on 2020-11-27 10:32:47
Authors: Deval Srivastava, Saim Shaikh, Priyank Shah
Comments: 8 Pages.
In our day and age, the number of cars on the road is rapidly increasing, causing traffic congestion, and drivers are becoming more reckless and carefree as the burden on the current human and automated systems grows. Drivers and bikers who wish to save a few minutes may run red lights or avoid wearing helmets, but these small actions can have a significant impact and can result in the loss of lives. We propose a system that will intelligently use deep learning-based object detection to identify traffic offenders and provide methods to penalize them by recognizing their number plates. Our system will be able to detect traffic light violators and bikers without helmets. It has been designed to be robust enough to work in drastic conditions and intelligent enough to reduce human dependence.
Category: Artificial Intelligence
[1115] viXra:2011.0179 [pdf] submitted on 2020-11-26 07:36:23
Authors: Chenchen Lin, Xiangjun Mi, Bingyi Kang
Comments: 17 Pages.
Conflict management is a key issue in D-S evidence theory (DST) and has been the focus of many related researchers. However, there has been a lack of discussion about whether the evidence should be fused at all. In this paper, in the frame of DST, inspired by the belief universal gravitation [1], we propose the concept of belief Coulomb force (BCF) to focus on whether or not the evidence should be fused. It aims to discuss the elimination of conflicts in the information fusion process from the perspective of electricity, which may provide a new idea for solving the problem of conflicting evidence. An application is used to show that conflict management is handled better by the proposed BCF than by previous methods.
Category: Artificial Intelligence
[1114] viXra:2011.0129 [pdf] submitted on 2020-11-16 18:11:19
Authors: Yannis Haralambous
Comments: 33 Pages.
In this paper we attempt to decrypt the sequence of digits given by Jonathan Safran Foer in his novel Extremely Loud & Incredibly Close. We create directed acyclic graphs that a human can follow to find potential solutions. Representations of these graphs are displayed in this paper. The Python code used to produce them is also provided, in the appendix.
Category: Artificial Intelligence
[1113] viXra:2011.0068 [pdf] submitted on 2020-11-10 10:09:21
Authors: Mostafa Khalaji
Comments: 11 Pages. 17th Iran Media Technology Exhibition and Conference, Tehran, Iran, November 2020
With the growing amount of data on the Internet, recommender systems have been able to predict users' preferences and offer related movies. Collaborative filtering is one of the most popular algorithms in these systems. The main purpose of collaborative filtering is to find similar users or items using the rating matrix. As the number of users and items increases, this algorithm suffers from a scalability problem. On the other hand, due to the unavailability of a large number of user preferences for different items, there is a cold-start problem for a new user or item, which has a significant impact on system performance. The purpose of this paper is to design a movie recommender system named TRSM-RS using users' demographic information (just users' gender) along with a new weighted similarity measure. By segmenting users based on their gender, the scalability problem is improved, and by considering the reliability of the users' similarity as the weight in the new similarity measure (Tanimoto Reliability Similarity Measure, TRSM), the effect of the cold-start problem is undermined and the performance of the system is improved. Experiments were performed on the MovieLens dataset and the system was evaluated using the mean absolute error (MAE), Accuracy, Precision and Recall metrics. The results of the experiments indicate improved performance (accuracy and precision) and error rate compared to other researchers' methods. The maximum improvement in the system's MAE for men and women is 5.5% and 13.8%, respectively.
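The reliability weighting in TRSM is specific to the paper, but the Tanimoto (extended Jaccard) similarity at its core is standard. A minimal sketch, assuming the common convention that only items co-rated by both users contribute and that 0 marks an unrated item (the rating vectors are invented for illustration):

```python
def tanimoto(a, b):
    """Tanimoto (extended Jaccard) similarity between two users' rating
    vectors: dot / (|a|^2 + |b|^2 - dot), computed over co-rated items only."""
    common = [(x, y) for x, y in zip(a, b) if x and y]
    if not common:
        return 0.0  # no co-rated items: no basis for similarity
    dot = sum(x * y for x, y in common)
    na = sum(x * x for x, _ in common)
    nb = sum(y * y for _, y in common)
    return dot / (na + nb - dot)

u = [5, 3, 0, 4]  # 0 = unrated
v = [5, 3, 2, 0]  # agrees with u on every co-rated item
w = [1, 5, 0, 2]  # disagrees with u on co-rated items

assert tanimoto(u, v) == 1.0       # identical on common items
assert 0.0 < tanimoto(u, w) < 1.0  # partial agreement
```

TRSM would then multiply a similarity like this by a reliability weight derived from how many items the pair co-rated, which is how the paper dampens the cold-start effect.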
Category: Artificial Intelligence
[1112] viXra:2010.0225 [pdf] submitted on 2020-10-28 07:50:55
Authors: Molokwu C. Reginald, Molokwu C. Bonaventure, Molokwu C. Victor, Okeke C. Ogochukwu
Comments: 8 Pages.
Convolutional Neural Networks (CNNs) have become state-of-the-art methods for image classification in recent times. CNNs have proven to be very productive in identifying objects and human faces, and in powering machine vision in robots as well as self-driving cars. At this point, they perform better than human subjects on a large number of image datasets, a large portion of which depend on the idea of solid classes. Hence, image classification has become an exciting and appealing domain in Artificial Intelligence (AI) research. In this paper, we propose a unique framework, FUSIONET, to aid in image classification. Our proposition utilizes the combination of two novel models in parallel (MainNET, a 3 x 3 architecture, and AuxNET, a 1 x 1 architecture). Successively, the feature maps extracted from this combination are fed as input features to a downstream classifier for classification tasks on the images in question. FUSIONET has been trained, tested, and evaluated on real-world datasets, achieving state-of-the-art results on the popular CINIC-10 dataset.
Category: Artificial Intelligence
[1111] viXra:2010.0220 [pdf] submitted on 2020-10-28 08:11:32
Authors: Md Monzur Morshed
Comments: 11 Pages. This is a research proposal [Correction made by viXra Admin]
The internet can broadly be divided into three parts: surface, deep and dark, among which the latter offers anonymity to its users and hosts [1]. The Deep Web refers to an encrypted network that is not indexed by search engines like Google; users must use Tor to visit sites on the dark web [2]. Ninety-six percent of the web is considered deep web because it is hidden. It is like an iceberg: people can see only a small portion above the surface, while the largest part is hidden under the sea [3, 4, 5]. Basic methods of graph theory and data mining that deal with social network analysis can be comprehensively used to understand and study the Deep Web and detect cyber threats [6]. Since the internet is rapidly evolving and it is nearly impossible to censor the deep web, there is a need to develop standard mechanisms and tools to monitor it. In this proposed study, our focus will be to develop a standard research mechanism for understanding the Deep Web which will support researchers, academicians and law enforcement agencies in strengthening social stability and ensuring peace locally and globally.
Category: Artificial Intelligence
[1110] viXra:2010.0147 [pdf] submitted on 2020-10-19 19:41:58
Authors: Eren Unlu
Comments: 4 Pages.
Fisher Discriminant Analysis (FDA), also known as Linear Discriminant Analysis (LDA), is a simple yet highly effective tool for classification across a wide range of datasets and settings. In this paper, we propose to leverage the discriminative potency of FDA for an unsupervised outlier detection algorithm. Unsupervised anomaly detection has been a topic of high interest in the literature due to its numerous practical applications and the fuzzy, subjective interpretation of success; it is therefore important to have different types of algorithms which can deliver distinct perspectives. The proposed method selects the subset of outlier points based on the maximization of the LDA distance between the outlier and non-outlier classes via a genetic algorithm.
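The Fisher criterion at the heart of this kind of approach, in its one-dimensional form, scores a candidate outlier subset by the ratio of between-class to within-class scatter; a genuine outlier subset yields a much larger ratio than an arbitrary one. A toy sketch (the data are invented, and the genetic-algorithm search over subsets from the paper is not shown):

```python
def fisher_ratio(group_a, group_b):
    """1-D Fisher discriminant ratio: squared distance between class means
    divided by the total within-class scatter."""
    ma = sum(group_a) / len(group_a)
    mb = sum(group_b) / len(group_b)
    scatter = (sum((x - ma) ** 2 for x in group_a) +
               sum((x - mb) ** 2 for x in group_b))
    return (ma - mb) ** 2 / scatter

inliers = [1.0, 1.1, 0.9, 1.05]
candidate_good = [5.0, 5.2]  # genuine outliers: well separated, large ratio
candidate_bad = [1.2, 0.8]   # ordinary points mislabeled as outliers: tiny ratio

assert fisher_ratio(inliers, candidate_good) > fisher_ratio(inliers, candidate_bad)
```

A genetic algorithm over binary membership vectors would use a score like this as its fitness function, evolving toward the subset that maximizes the separation.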
Category: Artificial Intelligence
[1109] viXra:2010.0078 [pdf] submitted on 2020-10-11 11:02:47
Authors: David M. W. Powers
Comments: 11 Pages. Accepted and presented at ConZealand2020. Rejected by arXiv as not in scope.
The history of robotics is older than the invention and exploitation of robots. The term ‘robot’ came from the Czech and was first used in a play a century ago. The term ‘robotics’ and the ethical considerations captured by ‘The Three Laws of Robotics’ come from a SciFi author born a century ago. SF leads the way! Similarly, the idea of Artificial Intelligence as a thinking machine goes back to the earliest days of computing, and in this paper we follow some of the key ideas through the work of the pioneers in the field.
We’ve come a long way since then, but are we there yet? Could we now build a conscious sentient thinking computer? What would it be like? Will it take over the world?
Category: Artificial Intelligence
[1108] viXra:2010.0060 [pdf] submitted on 2020-10-09 20:01:48
Authors: Eren Unlu
Comments: 5 Pages.
We propose an innovative, trivial yet effective unsupervised outlier detection algorithm called the Auto-Encoder Transposed Permutation Importance Outlier Detector (ATPI), which is based on the fusion of two machine learning concepts: autoencoders and permutation importance. As unsupervised anomaly detection is a subjective task, where the accuracy of results can vary with the demands of the application, we believe this kind of novel framework has great potential in this field.
Category: Artificial Intelligence
[1107] viXra:2009.0173 [pdf] submitted on 2020-09-25 20:04:51
Authors: Eren Unlu
Comments: 7 Pages.
We have used the FIFA19 video game's open dataset of soccer player attributes and the actual squad lists of the national teams that participated in World Cup 2018, which almost coincides in time with the game's release date. The rationale is that numerous expert game developers have spent a considerable amount of time assessing each individual player's attributes, so we can develop and test data science and machine learning tools to select national soccer teams in an attempt to assist coaches. The work provides detailed exploratory data analysis and state-of-the-art machine learning and interpretability measures.
Category: Artificial Intelligence
[1106] viXra:2009.0165 [pdf] submitted on 2020-09-23 13:48:26
Authors: Jixiang Deng, Yong Deng
Comments: 25 Pages.
Dempster-Shafer evidence theory (evidence theory) has been widely used for its great performance in dealing with uncertainty. Based on evidence theory, researchers have presented different methods to combine evidences. Dempster's rule is the most well-known combination method and has been applied in many fields. However, Dempster's rule may yield counter-intuitive results when evidences are in high conflict. To improve the performance of combining conflicting evidences, in this paper we present a new evidence combination method based on the Pearson correlation coefficient and a weighted graph. The proposed method can correctly identify the target with high accuracy. Besides, the proposed method converges better than other combination methods. In addition, the weighted graph generated by the proposed method can directly represent the relations between different evidences, which can help researchers determine the reliability of each evidence. Moreover, an experiment is presented to show the efficiency of the proposed method, and the results are analyzed and discussed.
Category: Artificial Intelligence
[1105] viXra:2009.0138 [pdf] submitted on 2020-09-19 20:25:44
Authors: Eren Unlu
Comments: 5 Pages.
We present a novel, intuitive graphical representation for daily stock prices, which we refer to as RGBSticks, a variation of classical candlesticks. This representation allows the usage of complex deep learning based techniques, such as deep convolutional autoencoders and deep convolutional generative adversarial networks, to produce insightful visualizations of the market's past and future states.
Category: Artificial Intelligence
[1104] viXra:2009.0061 [pdf] submitted on 2020-09-08 08:49:43
Authors: J. Gerard Wolff
Comments: 37 Pages. As of 2020-09-02, this document has been accepted for publication as a chapter in the book Interpretable Articial Intelligence: A Perspective of Granular Computing, to be published by Springer-Verlag and edited by Witold Pedrycz and Shyi-Ming Chen.
This chapter describes how the SP System, meaning the SP Theory of Intelligence, and its realisation as the SP Computer Model, may promote transparency and granularity in AI, and some other areas of application. The chapter describes how transparency in the workings and output of the SP Computer Model may be achieved via three routes: 1) the program provides a very full audit trail for such processes as recognition, reasoning, analysis of language, and so on. There is also an explicit audit trail for the unsupervised learning of new knowledge; 2) knowledge from the system is
likely to be granular and easy for people to understand; and 3) there are seven principles for the organisation of knowledge which are central in the workings of the SP System and also very familiar to people (e.g. chunking-with-codes, part-whole hierarchies, and class-inclusion hierarchies), and that kind of familiarity in the way knowledge is structured by the system is likely to be important in the interpretability, explainability, and transparency of that knowledge. Examples from the SP Computer Model are shown throughout the chapter.
Category: Artificial Intelligence
[1103] viXra:2009.0018 [pdf] submitted on 2020-09-03 10:37:12
Authors: J. Gerard Wolff
Comments: 31 Pages. This "technical report" is an adjunct to the paper "Problems in AI research ..." and should be treated as an integral part of that paper
This technical report, an adjunct to the paper "Problems in AI research ...", describes some problems in AI research and how the SP System (meaning the "SP Theory of Intelligence" and its realisation in the "SP Computer Model") may help to solve them. It also contains a fairly detailed outline of the SP System. Most of the problems considered in this report are described by leading researchers in AI in interviews with science writer Martin Ford, and presented in his book "Architects of Intelligence". Problems and their potential solutions that are described in this report are: the need for more emphasis in research on the use of top-down strategies is met by the way SP has been developed entirely within a top-down framework; the risk of accidents with self-driving vehicles may be minimised via the theory of generalisation within the SP System; the need for strong compositionality in the structure of knowledge is met by processes within the SP Computer Model for unsupervised learning and the organisation of knowledge; although commonsense reasoning and commonsense knowledge are challenges for all theories of AI, the SP System has some promising features; the SP programme of research is one of very few working to establish the key importance of information compression in AI research; likewise, the SP programme of research is one of relatively few AI-related research programmes attaching much importance to the biological foundations of intelligence; the SP System lends weight to 'localist' (as compared with 'distributed') views of how knowledge is stored in the brain; compared with deep neural networks, the SP System offers much more scope for adaptation and the representation of knowledge; and reasons are given for why the important subjects of motivations and emotions have not so far been considered in the SP programme of research.
Evidence in this report, and "Problems in AI research ...", suggests that ***the SP System provides a relatively promising foundation for the development of artificial general intelligence***.
Category: Artificial Intelligence
[1102] viXra:2009.0012 [pdf] submitted on 2020-09-02 20:05:43
Authors: J. Gerard Wolff
Comments: 31 Pages. Accepted for publication in the journal Complexity
This paper describes problems in AI research and how the SP System (described in sources referenced in the paper) may help to solve them. Most of the problems considered in the paper are described by leading researchers in AI in interviews with science writer Martin Ford, and reported by him in his book "Architects of Intelligence". These problems, each with potential solutions via SP, are: the divide between symbolic and non-symbolic kinds of knowledge and processing, and how the SP System may bridge the divide; the tendency of deep neural networks (DNNs) to make large and unexpected errors in recognition, something that does not happen with the SP System; in most AI research, unsupervised learning is regarded as a challenge, but unsupervised learning is central in how SP learns; in other AI research, generalisation, with under- and over-generalisation is seen as a problem, but it is a problem that has a coherent solution in the SP System; learning usable knowledge from a single exposure or experience is widely regarded as a problem, but it is a problem that is already solved in the SP System; transfer learning (incorporating old knowledge in new) is seen as an unsolved problem, but it is bedrock in how the SP System learns; there is clear potential for the SP System to solve problems that are prevalent in most AI systems: learning that is slow and greedy for large volumes of data and large computational resources; the SP System provides solutions to problems of transparency in DNNs, where it is difficult to interpret stored knowledge and how it is processed; although there have been successes with DNNs in the processing of natural language, the SP System has strengths in the representation and processing of natural languages which appear to be more in accord with how people process natural language, and these strengths in the SP System are well-integrated with other strengths of the system in aspects of intelligence; by contrast with DNNs, SP has strengths and 
potential in human-like probabilistic reasoning, and these are well integrated with strengths in other aspects of intelligence; unlike most DNNs, the SP System eliminates the problem of catastrophic forgetting (where new learning wipes out old learning); the SP System provides much of the generality across several aspects of AI which is missing from much research in AI. The strengths and potential of the SP System in comparison with alternatives suggest that ***the SP System provides a relatively promising foundation for the development of artificial general intelligence***.
Category: Artificial Intelligence
[1101] viXra:2008.0216 [pdf] submitted on 2020-08-30 09:55:52
Authors: Xiaozhuan Gao, Lipeng Pan, Yong Deng
Comments: 19 Pages.
Dempster-Shafer (D-S) evidence theory is an effective methodology for handling unknown and imprecise information, because it can assign probability to the power set of the frame of discernment. The quantum mass function (QM) is an extension of D-S evidence theory: it combines quantum theory with D-S evidence theory, extending the theory to the unit circle in the complex plane. QM therefore carries greater uncertainty in the framework of the complex plane. Recently, negation has been attracting more and more attention because it allows information to be analyzed from another point of view. Hence, this paper first proposes the negation of QM using the subtraction of vectors in the unit circle, which degenerates into the negation proposed by Yager in standard probability theory and the negation proposed by Yin et al. in D-S evidence theory. The paper then proposes quantum Pythagorean fuzzy evidence theory (QPFET), which is the first work to consider QPFET from the point of view of negation.
Category: Artificial Intelligence
[1100] viXra:2008.0163 [pdf] submitted on 2020-08-22 05:30:26
Authors: Shirui Tang
Comments: 12 Pages.
Perceptron model updating with back propagation has become the routine of deep learning. A continuous feed-forward procedure is required for backward propagation to function properly. Doubting the underlying physical interpretation of transformer-based models such as GPT brought about by this routine explanation, a new method of training is proposed in order to keep the physics self-consistent. The GPT model is treated as a space-time diagram, and the worldlines of signals are traced, identifying the possible paths of signals required for a self-attention event to occur. With a slight modification, self-attention can be viewed as an Ising-model interaction, which enables the goal to be expressed as the energy of the system. The target is treated as an external magnetic field inducing signals modeled as magnetic dipoles. A probability network is designed to pilot input signals travelling at constant speed through different routes, and a rule for updating the probabilities is designed to form constructive interference at target locations so that the instantaneous energy can be maximised. An experiment is conducted on a 4-class classification problem extracted from MNIST. The results exhibit interesting but expected behaviours, which do not exist in a network updated by back propagation, but resemble learning in a real human, especially in the few-shot scenario.
Category: Artificial Intelligence
[1099] viXra:2008.0130 [pdf] submitted on 2020-08-18 00:40:01
Authors: Vivek Verma
Comments: 2 Pages.
This paper describes the background and implementation of a project that uses NeuroEvolution of Augmenting Topologies (NEAT) to play Super Mario Bros. Its implementation differs from classic applications of NEAT in that the training process was heavily optimized using multithreading and downsampling. As a result, training can be run on underpowered CPUs without the help of an external GPU. The neural network successfully completed level 1-1 of the game.
Category: Artificial Intelligence
[1098] viXra:2007.0209 [pdf] submitted on 2020-07-27 06:26:13
Authors: Arshita Kalra, Arnav Bhavsar
Comments: 5 Pages.
Lunar landings by esteemed space agencies around the world have yielded an abundance of new scientific data on the Moon, which has helped scientists study our closest neighbour and hence provided evidence for understanding Earth's past and future. This paper addresses the HackerEarth challenge of classifying lunar rocks as small or large. Such tasks have historically been conducted by visual image inspection, thereby reducing the scope, reliability and accuracy of the retrieval. The competition was to build a machine learning model that reduces the human effort of performing a monotonous task. We built a Support Vector Machine model, widely used in classification problems, fed with features extracted from the images in the dataset using OpenCV, obtaining an accuracy of 99.41%. Our source code solving the challenge and the dataset are available in the GitHub repository https://github.com/ArshitaKalra/Lunar-Rock-classification.
Category: Artificial Intelligence
[1097] viXra:2007.0200 [pdf] submitted on 2020-07-24 19:27:40
Authors: J. Gerard Wolff
Comments: 34 Pages.
This paper, a companion to "Problems in AI research and how the SP System may help to solve them", describes problems in AI research and how the "SP System" (described in sources detailed in the paper) may help to solve them. Most of these problems are described by leading researchers in AI in interviews with science writer Martin Ford, and reported by him in his book "Architects of Intelligence". Problems and their potential solutions that are described in this paper are: the need to rebalance research towards top-down strategies; how to minimise the risk of accidents with self-driving vehicles; the need for strong compositionality in the structure of knowledge; the challenges of commonsense reasoning and commonsense knowledge; establishing the key importance of information compression in AI research; establishing the importance of biological validity in AI research; whether knowledge in the brain is represented in 'distributed' or 'localist' form; the limited scope for adaptation of deep neural networks; and reasons are given for why the important subjects of motivations and emotions have not so far been considered. The evidence in this paper and its companion paper suggests that ***the SP System provides a firmer foundation for the development of artificial general intelligence than any alternative***.
Category: Artificial Intelligence
[1096] viXra:2007.0110 [pdf] submitted on 2020-07-15 03:03:23
Authors: Orçun Oruç
Comments: 24 Pages.
Industrial manufacturing has become more interconnected through smart devices such as Internet-of-Things edge devices, tablets, manufacturing equipment, and smartphones. Smart factories have emerged and evolved with digital technologies and data science in manufacturing systems over the past few years. The complex data that smart factories produce enables digital manufacturing, smart supply-chain management, and enhanced assembly-line control. Nowadays, smart factories generate a large amount of data that needs to be comprehensible to the human operators and experts who make decisions. However, linked data is still hard for human operators to understand and interpret, so we need a system that translates linked data into natural language, or that summarizes large volumes of linked data by eliminating undesired results in the linked-data repository. In this study, we propose a semantic question-answering system for a restricted smart-factory domain, attached to various data sources. In the end, we perform a qualitative and quantitative evaluation of the semantic question answering, discuss our findings, and conclude the main points with regard to our research questions.
Category: Artificial Intelligence
[1095] viXra:2007.0085 [pdf] submitted on 2020-07-13 20:05:01
Authors: Zeyue Xia, Mohamad Nadim Barakat, Serri Matula, Zijun Hui, John Stravakrakis
Comments: 7 Pages. Computer Vision
In vivo confocal microscopy allows scientists to better understand eye health and systemic diseases. Microneuromas could play a role; however, monitoring their growth from a mosaic of images is error-prone and time-consuming. We used automated image stitching as a solution, focusing on the accuracy and computational speed of three feature detection algorithms: SIFT, SURF, and ORB. The results illustrated that SURF was computationally efficient on our data. Future investigation will aim at a global solution that can replace the need for manual image stitching in this application.
Category: Artificial Intelligence
[1094] viXra:2007.0084 [pdf] submitted on 2020-07-12 21:45:42
Authors: Yige Xue, Yong Deng
Comments: 16 Pages.
The belief entropy performs well in handling uncertain information; it is the extension of information entropy to Dempster-Shafer evidence theory. The Tsallis entropy is another extension of information entropy, and is a nonextensive entropy. However, how to apply the idea of belief entropy to improve the Tsallis entropy is still an open issue. This paper proposes the nonextensive belief entropy (NBE), which combines belief entropy and Tsallis entropy. If the extensive constant of the proposed model equals 1, the NBE degenerates into the classical belief entropy. Furthermore, when the basic probability assignment degenerates into a probability distribution, the proposed entropy degenerates into the classical Tsallis entropy. Meanwhile, if the NBE is applied to a probability distribution and the extensive constant equals 1, the NBE equals the classical information entropy. Numerical examples are applied to demonstrate the efficiency of the proposed entropy. The experimental results show that the proposed entropy combines the belief entropy and the Tsallis entropy effectively and successfully.
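The NBE formula itself is given in the paper; for orientation, the two quantities it combines can be computed directly. Below is a sketch of the classical Tsallis entropy of a probability distribution and of Deng's belief entropy of a basic probability assignment; the input values are illustrative only.

```python
import math

def tsallis_entropy(probs, q):
    """Classical Tsallis entropy S_q = (1 - sum p^q) / (q - 1), for q != 1."""
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

def deng_entropy(masses):
    """Deng's belief entropy: -sum m(A) * log2(m(A) / (2^|A| - 1)),
    where `masses` is a list of (|A|, m(A)) pairs over the focal elements."""
    return -sum(
        m * math.log2(m / (2 ** card - 1))
        for card, m in masses
        if m > 0
    )

print(tsallis_entropy([0.5, 0.5], q=2))    # 0.5
# On singletons only, Deng entropy reduces to Shannon entropy (1 bit here).
print(deng_entropy([(1, 0.5), (1, 0.5)]))  # 1.0
```

As q approaches 1, the Tsallis entropy recovers the Shannon entropy, which is the degeneration behaviour the abstract describes for the combined NBE.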
Category: Artificial Intelligence
[1093] viXra:2007.0040 [pdf] submitted on 2020-07-06 11:54:41
Authors: Aditi Singh, Raju Ranjan
Comments: 3 Pages.
When it comes to road safety, detection and monitoring of car speed is one of the major tasks. The use of a simple camera and image processing software has eliminated the primary tools of speed detection, such as the handheld radar gun. In these techniques, the speed is calculated as the car passes through the camera's field of view (FOV), by noting the time the car takes between entering and exiting the FOV. Some systems use individual cameras at the entry and exit FOVs; such systems do not calculate the speed within this interval. This paper proposes a technique to measure the speed of a car continuously, from the moment it enters the camera's FOV until it exits. Using a deep-learning Single Shot Detector (SSD) implemented with a Convolutional Neural Network (CNN), the cars entering the FOV are detected, and the speed of each car is calculated from the distance it travels in the FOV and the time taken to cover that distance.
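The final arithmetic is independent of the detector: once a detector yields a bounding-box position per frame, speed follows from distance over time. A minimal sketch, where the pixels-to-metres scale and frame rate are invented calibration values:

```python
def speed_kmh(x_start_px, x_end_px, frames_elapsed, fps, metres_per_px):
    """Estimate vehicle speed from bounding-box centres in two frames."""
    distance_m = abs(x_end_px - x_start_px) * metres_per_px
    time_s = frames_elapsed / fps
    return (distance_m / time_s) * 3.6  # convert m/s to km/h

# A car whose box centre moves 300 px over 30 frames at 30 fps,
# with a calibrated scale of 0.05 m per pixel: roughly 54 km/h.
print(speed_kmh(100, 400, frames_elapsed=30, fps=30, metres_per_px=0.05))
```

In practice the calibration (metres per pixel) varies across the image due to perspective, which is why per-frame tracking inside the FOV, as proposed here, gives finer estimates than entry/exit timing alone.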
Category: Artificial Intelligence
[1092] viXra:2007.0039 [pdf] submitted on 2020-07-06 20:06:00
Authors: Dhananjay Mewati, Jerald Nirmal Kumar
Comments: 3 Pages.
This paper proposes a technique that uses the movement of the eyes to control the movement of the cursor on a monitor screen, thereby creating new ways of Human Computer Interaction (HCI) and also helping physically handicapped people interact with computer devices more efficiently. Earlier eye-gaze optical mice comprised head gear with an attached eye-motion sensor and were largely hardware based; the input gathered through these sensors drove the cursor movement on screen. With advances in image processing techniques and artificial intelligence, a simple web camera attached to a computer can perform this task. In this paper, the pupil of the eye is detected, and the coordinates gathered by tracking pupil movement are mapped to the coordinates of the display monitor. Based on this mapping, the mouse cursor can be moved on the screen.
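The camera-to-screen mapping step is, at its simplest, a linear rescale from camera-frame coordinates to monitor coordinates. A hedged sketch, where the camera and screen resolutions are assumed values:

```python
def pupil_to_screen(px, py, cam_w, cam_h, screen_w, screen_h):
    """Linearly map a pupil position in the camera frame to a cursor position."""
    sx = px / cam_w * screen_w
    sy = py / cam_h * screen_h
    # Clamp so the cursor never leaves the monitor.
    return (min(max(sx, 0), screen_w - 1), min(max(sy, 0), screen_h - 1))

# A pupil at the centre of a 640x480 camera frame maps to the centre
# of a 1920x1080 monitor:
print(pupil_to_screen(320, 240, 640, 480, 1920, 1080))  # (960.0, 540.0)
```

A deployed system would first calibrate this mapping per user (the pupil only moves within a small region of the frame) and smooth the coordinates over time to suppress jitter.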
Category: Artificial Intelligence
[1091] viXra:2007.0034 [pdf] submitted on 2020-07-05 21:02:19
Authors: Michael Sgroi, Doug Jacobson
Comments: 23 Pages.
This paper discusses malware detection in personal computers. Current malware detection solutions are static: antiviruses rely on lists of malicious signatures that are then used in file scanning. These antiviruses are also very dependent on the operating system, requiring different solutions for different systems. This paper presents a solution that detects malware based on runtime attributes. These attributes are easily accessible and fairly generic, meaning that the solution functions across systems and without specialized information. The attributes feed a machine learning system, which keeps the solution flexible for retraining if necessary, but capable of handling new variants without modification. The system can also be run quickly, which allows detection to be achieved before the malware gets too far.
Category: Artificial Intelligence
[1090] viXra:2007.0033 [pdf] submitted on 2020-07-05 21:21:29
Authors: Qasim Nawaz
Comments: 33 Pages. N/A
Sentiment analysis is one of the primary areas of natural language processing and information retrieval being tackled by researchers to date, and for good reason: the internet. The internet is a mostly untapped source of rich amounts of data that can be used to gauge the opinions of people on any number of topics. Twitter is one such platform, designed for people to voice their opinions in the form of tweets about any topic they desire. My project sets out to investigate the best way to analyse the sentiment of these tweets using machine learning techniques. I will be training word-vector-based and paragraph-vector-based models on a dataset of 1.6 million tweets, in conjunction with various classifiers, in order to find the best-performing method for obtaining the sentiment of tweets.
Category: Artificial Intelligence
[1089] viXra:2007.0031 [pdf] submitted on 2020-07-06 04:09:56
Authors: Abhishek
Comments: 3 Pages.
One of the classical problems in computer vision, machine learning and, subsequently, deep learning is image classification. While deep learning overcomes difficult hurdles like feature extraction and provides better optimizers such as gradient descent and the Adam optimizer, most deep learning models still need a lot of raw computational power to train, whether on local Graphics Processing Units (GPUs) or Tensor Processing Units (TPUs) in the cloud. This computational power is not readily available in all environments and systems, so the concept of pre-trained models can reduce training time by a huge margin: initial models are trained on large arrays of GPUs and perform feature extraction, while the classification part is left for the end user to customize for the problem at hand and can be completed in very little time.
We tackled the multi-class botanical classification problem of identifying flowers of 5 types, namely Sunflower, Rose, Dandelion, Daisy, and Tulip. Feature extraction was done with Google's Inception-v3 model, and fully connected softmax layers were trained on a local machine with an Nvidia GeForce GTX 950 (with CUDA activated) within 30 minutes, using only 4000 total steps/epochs. The total number of training images is approximately 3,500. The finished model achieved a final test accuracy of 91.9% on new images (N=664).
Category: Artificial Intelligence
[1088] viXra:2007.0030 [pdf] submitted on 2020-07-06 04:28:45
Authors: Ritesh Kumar Bharadwaj
Comments: 5 Pages.
Text summarization as a phenomenon has always been present, and it has evolved with the advent of new technologies, both for data collection and for processing that data. One reason for using text summarization is the huge amount of data floating around the internet in the form of text files and comments, which is potent enough to be mined for useful information; but since the amount of text in these sources is too huge, the need for text summarization is justified by every argument. Some of the areas where text summarization is widely used are applications providing capsule information, such as compact news applications, websites providing academic notes for various examinations, news-in-short services, and micro-blogging websites.
This paper presents an auto text summarizer application which takes the URL of a web page as input, performs summarization on the selected elements, and presents the summarized text content on the front end of a web application. At the back end, the web-page content is scraped (if an HTTP URL is provided as input) using the Beautiful Soup library, or the provided text is read directly.
The scraped content, after being properly preprocessed, is summarized using a suitable library, in our case one of NLTK, spaCy, Gensim and Sumy. The summarized content is presented at the front end using the Flask framework for Python. The results produced using the different libraries are compared in the end in terms of the reading time of the summarized content.
The application uses an extractive text summarization technique to achieve its result, which is a compact summary of the textual data prepared from the keywords already present in the document.
Keywords: Auto Text Summarizer, URL, Flask, Web Scraping, NLTK, spaCy, Sumy, Gensim, Extractive Text Summarization
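The paper compares library-based summarizers; the extractive idea itself, scoring sentences by the frequency of their words and keeping the top scorers, can be sketched without any of those libraries. The toy text and the tiny stop-word list below are illustrative only.

```python
import re
from collections import Counter

STOP = {"the", "a", "is", "of", "and", "to", "in"}  # tiny illustrative stop list

def summarize(text, n_sentences=1):
    """Extractive summary: keep the n sentences with the highest
    word-frequency score, emitted in their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP]
    freq = Counter(words)

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z]+", sentence.lower()))

    chosen = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in chosen)

text = ("Summarization shortens text. Summarization keeps key text. "
        "Cats sleep a lot.")
print(summarize(text, 1))  # Summarization keeps key text.
```

Production libraries such as Gensim or Sumy refine this with better sentence segmentation, TF-IDF or graph-based scoring, and redundancy removal, but the extractive principle is the same.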
Category: Artificial Intelligence
[1087] viXra:2007.0029 [pdf] submitted on 2020-07-06 05:18:45
Authors: Mohammed Tahir
Comments: 3 Pages.
The recent surge of deep learning has led to breakthrough advancements in almost every field of its application. A particular deep learning architecture, arguably the most popular one, is the Convolutional Neural Network. Interest in convnets has increased exponentially due to their effectiveness and scalability. CNNs have become the go-to solution for image-data problems and have provided results that are on par with, if not better than, human standards. The simplicity of the CNN architecture is another big factor in its success. The image processing and classification capabilities of CNNs have found great usage in the medical field, making it possible to detect and classify diseases as severe as cancer effectively, for the sake of better care. In this project, I have undertaken an elaborate study of Convolutional Neural Networks, built multiple architectures from scratch, and furthered this understanding with the preparation of an elementary dog-cat CNN classifier model followed by a more extensive CNN model for the detection of lung cancer in a patient. The project is built on Google Colaboratory, Google's interactive and versatile cloud platform for AI development, using the open-source neural network library Keras for model development and libraries such as matplotlib and TensorBoard (TensorFlow) for result plotting and analysis. Data for training and testing the model was extracted from the LUNA 2016 medical image database. The model was tuned using grid search and achieved over 97% test accuracy in its final iterations. To culminate, I have listed some future-work prospects, such as de-convolution/translated convolution, implementing one or more named CNN networks like Inception or AlexNet, and testing the model on larger images.
Category: Artificial Intelligence
[1086] viXra:2006.0265 [pdf] submitted on 2020-06-29 13:57:50
Authors: Samuel Kopelowitz, Uday Reddy
Comments: 25 Pages.
Twitter data mining techniques have been used in the run-up to elections to predict their outcomes and perform analysis to explain results. Due to the popularity of the social media platform it is possible to collect large amounts of data, to which lexicon-based sentiment analysis has often been applied for these tasks, mostly because of its efficiency and simplicity. More recently, hybrid techniques, which in addition to calculating tweet sentiment also incorporate topic modelling methods to extract the main "topics" from a corpus of text, have been applied independently for both election prediction and analysis. It is possible to use hybrid methods to analyse different political issues (e.g. economic, social, etc.) and the public opinion of candidates with respect to them; and other hybrid methods have been shown to outperform baseline sentiment analysis approaches for election prediction. A mining solution which can accomplish both of these tasks non-exhaustively is desirable for better predictions and a greater understanding of election outcomes. This report presents a novel approach to mining Twitter data, Hybrid Topic-Based Sentiment Analysis with Issue Filtering (HTBSA*), which not only poses a potential improvement upon state-of-the-art techniques for election prediction, but can be abstracted to perform candidate analysis on any individual political issue, proposing a baseline methodology for doing so. This approach has effectively outperformed all of the well-established methods in the realm of lexicon-based election prediction, giving a mean average error as low as 2.20% from true vote share. The technique was performed on data collected in the run-up to the UK General Election 2019 and, in addition, it has successfully been black-box tested on an unseen dataset.
Based on the empirical evidence given by our results, HTBSA* can be relied upon to predict elections occurring in the future, but analysis results in respect to individual political issues may be inconsistent, suggesting further work is required. Lines of research that come as a result of this study have the potential to tackle election mining problems in new ways, which are more sophisticated than what has been done previously.
Category: Artificial Intelligence
[1085] viXra:2006.0237 [pdf] submitted on 2020-06-26 07:25:51
Authors: Andrei P. Kirilyuk
Comments: 67 pages, 43 eqs, 86 refs
While practical efforts in the field of artificial intelligence grow exponentially, the truly scientific and mathematically exact understanding of the underlying phenomena of intelligence and consciousness is still missing in the conventional science framework. The inevitably dominating empirical, trial-and-error approach has vanishing efficiency for those extremely complicated phenomena, ending up in fundamentally limited imitations of intelligent behaviour. We provide the first-principle analysis of unreduced many-body interaction process in the brain revealing its qualitatively new features, which give rise to rigorously defined chaotic, noncomputable, intelligent and conscious behaviour. Based on the obtained universal concepts of unreduced dynamic complexity, intelligence and consciousness, we derive the universal laws of intelligence applicable to any kind of intelligent system interacting with the environment. We finally show why and how these fundamentally substantiated and therefore practically efficient laws of intelligent system dynamics are indispensable for correct AI design and training, which is urgently needed in this time of critical global change towards the truly sustainable development.
Category: Artificial Intelligence
[1084] viXra:2006.0235 [pdf] submitted on 2020-06-25 11:22:05
Authors: Yige Xue, Yong Deng
Comments: 15 Pages.
The mass function vector is used to handle uncertainty, and the quaternion number is an extension of the real number. The mass function vector extends the mass function by combining it with a vector. In this paper, the mass function vector is extended by quaternion numbers, named the Quaternion Mass Function Vector (QMFV). The proposed QMFV has the advantage of dealing with uncertain information. When the quaternion number degenerates into a real number, the QMFV degenerates into the quaternion mass function. In addition, if the probability of multi-element subsets of the frame of discernment is not assigned to the single-element subsets, the mass function vector degenerates into the mass function of classical evidence theory. When the quaternion number degenerates into a real number, the combination rule for quaternion mass function vectors degenerates into the combination rule for mass function vectors. In the case when the probability of multi-element subsets of the frame of discernment is not assigned to the single-element subsets, the combination rule for mass function vectors degenerates into the generalized Dempster's rule of combination. Numerical examples are applied to demonstrate the efficiency of the proposed model. The experimental results show that the proposed model applies quaternion theory to the mass function vector effectively and successfully.
Category: Artificial Intelligence
[1083] viXra:2006.0210 [pdf] submitted on 2020-06-22 22:30:46
Authors: Yong Deng
Comments: 17 Pages.
The mass function is used to handle uncertainty, and the quaternion number is an extension of the complex number. In this paper, the classical mass function is extended by quaternion numbers, named the Quaternion Mass Function (QMF). The proposed QMF has the advantage of dealing with uncertain information. When the quaternion number degenerates into a complex number, the QMF degenerates into the complex mass function. In addition, if the complex mass function degenerates to real numbers, the QMF is the same as the mass function in classical evidence theory. In the case when the quaternion number degenerates into a real number and the QMF assigns mass only to the single-element subsets of the frame of discernment, the QMF is the same as the probability distribution in probability theory. A combination rule for combining two QMFs is also presented, as a generalization of Dempster's rule. In the case when the quaternion mass function degenerates to real numbers and assigns mass only to single-element subsets, the proposed combination rule degenerates into Bayesian updating in probability theory. Numerical examples are applied to demonstrate the efficiency of the proposed model. The experimental results show that the proposed model applies quaternion theory to the mass function effectively and successfully.
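The QMF construction is specific to the paper, but the quaternion arithmetic it builds on is standard. A sketch of the Hamilton product, which any quaternion-valued combination rule would rely on (pure illustration, not the paper's rule):

```python
def qmul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
print(qmul(i, j))  # (0, 0, 0, 1): i*j = k
print(qmul(j, i))  # (0, 0, 0, -1): quaternion multiplication is non-commutative
```

The non-commutativity shown here is one reason quaternion-valued mass functions can encode structure that real- or complex-valued ones cannot.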
Category: Artificial Intelligence
[1082] viXra:2006.0208 [pdf] submitted on 2020-06-23 10:49:01
Authors: Jeongik Cho
Comments: 3 Pages.
When a pre-trained generative model is given, the process of finding the latent vector that produces the data closest to the input data is called latent vector recovery. Latent vector recovery takes the difference between the input data and the data generated from the latent vector as a reconstruction loss and repeatedly performs gradient descent on the latent vector to find the optimal one.
In this paper, I propose a method to find a better latent vector by adding a latent restriction loss to the reconstruction loss during latent vector recovery. The latent restriction loss makes the latent vector follow the distribution of the latent vectors used when training the generative model. The distance between the distribution of latent vectors used in training the generative model and the latent vector during recovery becomes the latent restriction loss.
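The recovery loop described above can be sketched with a toy linear generator (an illustrative stand-in, not the paper's model). Here the latent restriction loss is instantiated as a squared-norm penalty, i.e. the log-prior of a standard-normal latent distribution; the paper's distributional distance is more general:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained generator: a fixed linear map G(z) = W @ z.
W = rng.normal(size=(8, 3))

def generator(z):
    return W @ z

# Input data produced by a known latent vector.
z_true = rng.normal(size=3)
x = generator(z_true)

def recover(x, lam=0.01, steps=2000, lr=0.01):
    """Gradient descent on reconstruction loss + latent restriction loss.

    The restriction term lam * ||z||^2 is one way to keep z close to the
    standard-normal latent distribution used at training time.
    """
    z = np.zeros(3)
    for _ in range(steps):
        r = generator(z) - x                # reconstruction residual
        grad = 2 * W.T @ r + 2 * lam * z    # d/dz [||Wz - x||^2 + lam*||z||^2]
        z -= lr * grad
    return z

z_rec = recover(x)
print(np.linalg.norm(generator(z_rec) - x))  # small reconstruction error
```

With a nonlinear generator the same loop applies, with the gradient supplied by automatic differentiation instead of the closed form above.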
Category: Artificial Intelligence
[1081] viXra:2006.0196 [pdf] submitted on 2020-06-20 21:40:35
Authors: Shaif Chowdhury, Soummyopriyo Chattopdhyay, Tapan Kumar Hazra
Comments: 10 Pages.
In this paper, we present a traffic surveillance system for the detection and classification of vehicles in large-scale videos. Vehicle detection is a crucial part of road safety, and many intelligent systems have been proposed for traffic surveillance. The system presented here is based on two steps: a Haar-like image descriptor and a convolutional neural network classifier. A cascade classifier is used to extract objects rapidly, and a neural network is used for the final classification of cars. For the Haar cascades, the system is trained on a set of positive images (vehicles) and negative images (non-vehicles), and tested on another set of scenes. For the second step, we have used the Faster R-CNN architecture. The cascade classifier gives faster processing time, while the neural network increases the detection rate.
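The Haar-like descriptor underlying the cascade step can be illustrated with the classic integral-image trick, which makes any rectangle sum an O(1) operation (a minimal sketch, independent of the paper's cascade training):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] from the integral image in O(1)."""
    p = np.pad(ii, ((1, 0), (1, 0)))  # 1-pixel zero border handles edge rects
    return p[y + h, x + w] - p[y, x + w] - p[y + h, x] + p[y, x]

def haar_two_rect(img, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half."""
    ii = integral_image(img)
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

# A vertical edge: dark left half, bright right half.
img = np.zeros((4, 4))
img[:, 2:] = 1.0
print(haar_two_rect(img, 0, 0, 4, 4))  # strongly negative: 0 - 8 = -8.0
```

A trained cascade evaluates thousands of such features per window, rejecting most windows after only a few cheap tests.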
Category: Artificial Intelligence
[1080] viXra:2006.0159 [pdf] submitted on 2020-06-18 06:23:41
Authors: Nazakat Ali
Comments: 10 Pages.
A chatbot is a technology used to mimic human behavior using natural language. Different types of chatbots can be used as conversational agents in various business domains to increase customer service and satisfaction. Any business domain requires a knowledge base to be built for that domain and an information retrieval based system designed to respond to the user with a piece of documentation or generated sentences. The core component of a chatbot is Natural Language Understanding (NLU), which has been impressively improved by deep learning methods. But we often lack such properly built NLU modules, and building one from scratch for high-quality conversations takes considerable time. This may encourage fresh learners to build a chatbot from scratch with a simple architecture and a small dataset, even with reduced functionality, rather than building high-quality data-driven methods. This research focuses on Named Entity Recognition (NER) and intent classification models which can be integrated into the NLU service of a chatbot. Named entities are inserted manually into the knowledge base and automatically detected in a given sentence. The NER model in the proposed architecture is based on an artificial neural network which is trained on manually created entities and evaluated using the CoNLL-2003 dataset.
Category: Artificial Intelligence
[1079] viXra:2006.0126 [pdf] submitted on 2020-06-14 13:57:01
Authors: Davide Zagami
Comments: 5 Pages.
We provide a rigorous analysis of AIXI's behaviour in repeated Newcomblike settings. In this context, a Newcomblike problem is a setting where an agent plays against an environment that contains a perfect predictor, whose predictions are used to determine the environment's outputs. Since AIXI lacks good convergence properties, we chose to focus the analysis on determining whether an environment appears computable to AIXI, that is, whether it maps actions to observations in a way that a computable program could achieve. It is in this sense that, it turns out, AIXI can learn to one-box in *repeated* Opaque Newcomb, and to smoke in *repeated* Smoking Lesion, but may fail all other Newcomblike problems, because we found no way to reduce them to a computable form. However, we still suspect that AIXI can succeed in the repeated settings.
Category: Artificial Intelligence
[1078] viXra:2006.0119 [pdf] submitted on 2020-06-14 03:23:52
Authors: Nirmal Tej Kumar
Comments: 11 Pages. Short Communication
[ A General Multi-disciplinary Thermal Mapping + Signal Processing System to Probe (Graphene Quantum Dots + Virus ) based Nano-Bio Sensor for COVID-19 BIO-CHEMICAL INFORMATION PROCESSING w.r.t Theory + Algorithms + Experimentation + Machine Learning as an interesting Suggestion ]
Category: Artificial Intelligence
[1077] viXra:2006.0110 [pdf] submitted on 2020-06-12 20:16:52
Authors: Dingbing Li, Yong Deng
Comments: 10 Pages.
Information quality is a concept that can be used to measure the information of a probability distribution. Dempster-Shafer evidence theory can describe uncertain information more reasonably than probability theory, so proposing an information quality applicable to evidence theory is a research hot spot. Recently, Deng proposed the concept of information volume based on Deng entropy. It is worth noting that, compared with Deng entropy, the information volume of Deng entropy contains more information, so it may be more reasonable to use it to represent uncertain information. Therefore, this article proposes a new information quality, which is based on the information volume of Deng entropy. In addition, when the basic probability assignment (BPA) degenerates into a probability distribution, the proposed information quality is consistent with the information quality proposed by Yager and Petry. Finally, several numerical examples illustrate the effectiveness of this new method.
Category: Artificial Intelligence
[1076] viXra:2006.0079 [pdf] submitted on 2020-06-08 21:46:11
Authors: Al-Akhir Nayan
Comments: 7 Pages.
Due to their simplicity and adaptability to our needs, robotics and automation are widely used in industry. This project builds an automatic vehicle that uses GPS and an onboard computer to generate its path coordinates. A GPS module is used to obtain GPS data. A mobile-phone camera detects obstacles, and a machine learning algorithm avoids them while performing real-time object detection. The vehicle we developed uses electric motors to drive the wheels and has full control of the throttle, steering and braking. An Arduino device controls the vehicle, following commands generated by the computer. Traffic has risen sharply, and the excessive number of vehicles causes accidents every day; driver error is also a serious problem. Our goal is to decrease the possibility of accidents and to ensure the safety of passengers. The vehicle can also be useful for blind and handicapped people, but our main target is to serve the military, where it can be helpful in times of danger. The vehicle contains sensors to observe the environment, and it can also be operated manually by a human.
Category: Artificial Intelligence
[1075] viXra:2006.0064 [pdf] submitted on 2020-06-08 09:33:39
Authors: Tao Wen, Yong Deng
Comments: 11 Pages.
How to measure uncertainty in the open world is a popular topic in recent study. Many entropy measures have been proposed to address this problem, but most have limitations. In this series of papers, a method for measuring the information volume of a mass function is presented. The fractal property of the maximum information volume is shown in this paper, which indicates the inherent physical meaning of Deng entropy from the perspective of statistics. The results show the multifractal property of this maximum information volume. Experimental results are given to support this perspective.
Category: Artificial Intelligence
[1074] viXra:2006.0062 [pdf] submitted on 2020-06-07 12:18:52
Authors: Xiaozhuang Gao, Yong Deng
Comments: 10 Pages.
Negation is an important operation on uncertain information. Based on the information volume of the mass function, a new negation of the basic probability assignment is presented. The results show that negating the mass function increases the information volume. The negation converges to the situation where Deng entropy is maximal, namely high-order Deng entropy. If the mass function degenerates into a probability distribution, the negation of the probability distribution likewise achieves the maximum information volume, where Shannon entropy is maximal. Another interesting result is that the situation of maximum Deng entropy has the same information volume as the totally uncertain environment.
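For the probability-distribution case mentioned here, one common concrete negation is Yager's (1 - p_i)/(n - 1); the sketch below assumes that form (the paper's BPA-level operator is more general) and shows repeated negation driving the distribution toward uniform, where Shannon entropy is maximal:

```python
import numpy as np

def negate(p):
    """Yager-style negation of a probability distribution: (1 - p_i)/(n - 1)."""
    return (1.0 - p) / (len(p) - 1)

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p = np.array([0.7, 0.2, 0.1])
for _ in range(20):
    p = negate(p)      # each negation moves p toward the uniform fixed point
print(p, shannon_entropy(p))
# p approaches [1/3, 1/3, 1/3], whose entropy log2(3) ~ 1.585 is the maximum.
```

The uniform distribution is the unique fixed point of this map, since the deviation from uniform shrinks by a factor of -1/(n - 1) per step.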
Category: Artificial Intelligence
[1073] viXra:2006.0061 [pdf] submitted on 2020-06-07 13:22:14
Authors: Lipeng Pan, Yong Deng
Comments: 12 Pages.
Dempster-Shafer evidence theory is an extension of probability theory which can describe uncertain information more reasonably. Divergence measures are an important concept in probability theory, so proposing a reasonable divergence measure has always been a research hot spot in evidence theory. Recently, Deng proposed the concept of information volume based on Deng entropy. It is interesting to note that, compared with the uncertainty measure of Deng entropy, the information volume of Deng entropy contains more information, so it may be more reasonable to use it to represent uncertain information. Based on this, in this paper we combine the characteristics of the non-specificity measurement of Deng entropy and propose a new divergence measure. The new divergence measure not only satisfies the axioms of a distance measure but also has some advantages that cannot be ignored. In addition, when the basic probability assignment (BPA) degenerates into a probability distribution, the result of the new divergence measure is the same as that of the traditional Jensen-Shannon divergence. If the mass function is assigned as a probability distribution, the proposed divergence degenerates into the Kullback-Leibler divergence. Finally, some numerical examples illustrate the efficiency of the proposed divergence measure based on information volume.
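The classical degenerate case mentioned in the abstract can be checked directly: the Jensen-Shannon divergence is built from two Kullback-Leibler terms against the mixture, which makes it symmetric and bounded:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def js(p, q):
    """Jensen-Shannon divergence: symmetrised KL against the mixture m."""
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])
print(js(p, q))   # symmetric: js(p, q) == js(q, p)
print(js(p, p))   # zero for identical distributions
```

Unlike raw KL, JS stays finite even when the two distributions have disjoint support, which is why it is a common baseline for evidence-theoretic divergences.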
Category: Artificial Intelligence
[1072] viXra:2006.0037 [pdf] submitted on 2020-06-04 13:35:02
Authors: Jixiang Deng, Yong Deng
Comments: 18 Pages.
In fuzzy set theory, the fuzzy membership function (MF) describes the membership degree of elements in the universe of discourse. Deng entropy is an important tool to measure the uncertainty of an uncertain set, and it has been widely applied in many fields.
In this paper, we first propose a method to measure the uncertainty of a fuzzy MF based on Deng entropy. Next, we define the information volume of the fuzzy MF. By continuously splitting the BPA of each element whose cardinality is larger than $1$ until convergence, the information volume of the fuzzy set can be calculated. When the hesitancy degree of a fuzzy MF is $0$, the information volume of the fuzzy membership function is identical to the Shannon entropy. In addition, several examples and figures are given to illustrate the proposed method and definition.
Category: Artificial Intelligence
[1071] viXra:2006.0035 [pdf] submitted on 2020-06-04 15:04:31
Authors: Tao Wen, Yong Deng
Comments: 11 Pages.
How to measure uncertainty in the open world is a popular topic in recent study. Many entropy measures have been proposed to address this problem, but most have limitations. In this series of papers, a method for measuring the information volume of a mass function is presented. The fractal property of the maximum information volume is shown in this paper, which indicates the inherent physical meaning of Deng entropy from the perspective of statistics. The results show a linear relationship between the maximum information volume and the probability scale. Experimental results are given to support this perspective.
Category: Artificial Intelligence
[1070] viXra:2006.0028 [pdf] submitted on 2020-06-03 16:12:01
Authors: Yong Deng
Comments: 14 Pages.
Given a probability distribution, its corresponding information volume is the Shannon entropy. However, how to determine the information volume of a given mass function is still an open issue. Based on Deng entropy, the information volume of a mass function is presented in this paper. Given a mass function, the corresponding information volume is larger than its uncertainty measured by Deng entropy. The so-called Deng distribution is defined as the BPA condition of the maximum Deng entropy.
The information volume of the Deng distribution is called the maximum information volume, which is larger than the maximum Deng entropy.
In addition, both the total-uncertainty case and the Deng distribution have the same information volume, namely the maximum information volume. Some numerical examples are illustrated to show the efficiency of the proposed information volume of the mass function.
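The quantities involved can be made concrete with the published formula for Deng entropy, E_d(m) = -Σ_A m(A) log2(m(A) / (2^|A| - 1)), and the Deng distribution m(A) ∝ 2^|A| - 1. The sketch below (construction of subsets is illustrative code, the formulas are standard) verifies that the Deng distribution attains the maximum value log2(3^n - 2^n):

```python
import numpy as np

def deng_entropy(masses):
    """Deng entropy of a mass function.

    `masses` maps each focal element (a frozenset) to its mass m(A):
        E_d = -sum_A m(A) * log2( m(A) / (2^|A| - 1) )
    For singleton-only BPAs this reduces to Shannon entropy.
    """
    e = 0.0
    for a, m in masses.items():
        if m > 0:
            e -= m * np.log2(m / (2 ** len(a) - 1))
    return e

def deng_distribution(frame):
    """BPA maximising Deng entropy: m(A) proportional to 2^|A| - 1."""
    items = sorted(frame)
    subsets = [frozenset(x for i, x in enumerate(items) if code >> i & 1)
               for code in range(1, 2 ** len(items))]
    total = sum(2 ** len(a) - 1 for a in subsets)
    return {a: (2 ** len(a) - 1) / total for a in subsets}

frame = {"a", "b"}
m = deng_distribution(frame)
print(deng_entropy(m))  # maximum Deng entropy: log2(3^2 - 2^2) = log2(5) ~ 2.32
```

The normaliser Σ_A (2^|A| - 1) over all nonempty subsets of an n-element frame equals 3^n - 2^n, which gives the closed form for the maximum.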
Category: Artificial Intelligence
[1069] viXra:2006.0025 [pdf] submitted on 2020-06-03 03:35:58
Authors: Nirmal Tej Kumar
Comments: 2 Pages. Short Communication
Probing cryo-Electron Microscopy Images Using Pyramid Representations in the Context of :
[ Image J/ImageJ_Pyramid_Plugin/JikesRVM - Research Virtual Machine(RVM)/JVM - Java Virtual Machine/JI Prolog - Java based Prolog/HPC-High Performance Computing ] for Next Generation Java based[ AI + Image Processing + Informatics ] R&D Test Platforms.
Category: Artificial Intelligence
[1068] viXra:2006.0002 [pdf] submitted on 2020-06-01 09:11:02
Authors: Kumar Dron Shrivastav, Neha Taneja, Priyadarshini Arambam, Vandana Bhatia, Shelly Batra, Harpreet Singh, Eyad H. Abed, Priya Ranjan, Rajiv Janardhanan
Comments: 22 Pages. Preprint!
Cervical cancer is a major public health challenge. Further mitigation of cervical cancer can greatly benefit from the development of innovative and disruptive technologies for its rapid screening and early detection. The primary objective of this study is to contribute to this aim through large-scale screening by developing Artificial Intelligence-enabled intelligent systems, as they can support human cancer experts in making more precise and timely diagnoses. Our current study is focused on the development of a robust and interactive algorithm for the analysis of colposcope-derived images and a diagnostic tool/scale, namely OM, the Onco-Meter. This tool was trained and tested on 300 Indian subjects/patients, yielding 77% accuracy with a sensitivity of 83.56% and a specificity of 59.25%. OM, the Onco-Meter, is capable of classifying cervigrams into cervical dysplasia, carcinoma in situ (CIS) and invasive cancer (IC). The programming language R has been used to implement and compute earth mover's distances (EMD) to computationally characterize the different disease labels associated with cervical cancer. Deployment of automated tools will facilitate early diagnosis in a noninvasive manner, leading to timely clinical intervention for cervical cancer patients upon detection at a Primary Health Care (PHC) centre. The tool developed in this study will aid clinicians in designing timely intervention strategies aimed at improving the clinical prognosis of patients.
Category: Artificial Intelligence
[1067] viXra:2005.0160 [pdf] submitted on 2020-05-14 16:19:23
Authors: Gokhan Cagrici
Comments: 8 Pages.
The tremendous achievement of reaching fairly high success-metric values on several NLI datasets has raised eyebrows about the real value of these metric numbers. Research papers have appeared with comprehensive analyses of what these models really learn and of how easy it is to force them to fail with small syntactic and semantic changes in the input. In particular, the ANLI benchmark is an example of a more challenging NLI task, intended to measure models' comprehension of deeper context.
The relative success of transformer-based models on ANLI benchmarks was already reported by Nie et al., 2019. Given the challenging nature of iterative dataset formation, individual models have more difficulty extracting the underlying relationship between the context-hypothesis pair and the target. Ensembles of these individual models may have a higher potential to achieve better performance numbers when individual performances fall that far short of the equivalent ones on the SNLI and MNLI tasks. On top of that, making controlled variations of the inputs and tracking the changes in the behavior of those models gives indications about the strength and robustness of the learning process.
Category: Artificial Intelligence
[1066] viXra:2005.0120 [pdf] submitted on 2020-05-10 13:03:10
Authors: Alexey Kutalev
Comments: 9 Pages.
Not so long ago, a method was discovered that successfully overcomes catastrophic forgetting in neural networks. Although there are known cases of using this method to preserve skills when adapting pre-trained networks to particular tasks, it has not yet seen widespread adoption. In this paper, we propose an alternative method of overcoming catastrophic forgetting based on the total absolute signal passing through each connection in the network. This method has a simple implementation and seems to us essentially close to the processes occurring in the brains of animals to preserve previously learned skills during subsequent learning. We hope that the ease of implementation of this method will encourage its wide application.
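A minimal sketch of the general idea, assuming (as in elastic-weight-consolidation-style methods) that the per-connection importance enters as a quadratic anchor on the old weights; the importance values below are stand-ins, not the paper's signal-based computation:

```python
import numpy as np

# After learning task A, each weight w_i gets an importance omega_i
# (the paper accumulates the total absolute signal through each connection).
# While training task B, a quadratic penalty anchors important weights:
#
#     L(w) = L_B(w) + (lam / 2) * sum_i omega_i * (w_i - wA_i)^2

def penalised_grad(grad_b, w, w_a, omega, lam):
    """Gradient of the task-B loss plus the importance-weighted anchor."""
    return grad_b + lam * omega * (w - w_a)

w_a = np.array([1.0, -2.0, 0.5])     # weights after task A
omega = np.array([5.0, 0.1, 0.1])    # first weight is important for task A
w = w_a.copy()
for _ in range(200):                  # task B pulls all weights toward 0
    grad_b = 2 * w                    # gradient of a toy task-B loss ||w||^2
    w -= 0.01 * penalised_grad(grad_b, w, w_a, omega, lam=1.0)
print(w)  # the important weight stays close to its task-A value;
          # the unimportant ones drift toward the task-B optimum near 0
```

Per coordinate the update converges to w* = omega * wA / (2 + omega), so large-importance weights are held near their old values while the rest are free to move.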
Category: Artificial Intelligence
[1065] viXra:2005.0100 [pdf] submitted on 2020-05-08 04:24:41
Authors: Parsa Rajabzadeh, Hamed Rahim
Comments: 11 Pages.
This paper proposes a novel MIMO control system that combines Distributed Control Systems (DCS) and Centralized Control Systems (CCS). Unlike DCS and CCS, which have several drawbacks such as cost and delay, the proposed system is designed to have local and global controllers simultaneously. This MIMO control system has significant advantages over the two traditional systems in implementation, computation-power reduction, cost, performance, and the problems that occur in addressing system connections in DCS for Wireless Sensor Networks and the Internet of Things. The proposed system is modeled as a Multi-Agent System (MAS), which is implemented in the osBrain MAS framework in Python.
Category: Artificial Intelligence
[1064] viXra:2005.0099 [pdf] submitted on 2020-05-08 07:28:57
Authors: Kalu Kelechi Gabriel
Comments: 7 Pages.
Control systems have been in existence for quite a long time now; the oldest and unarguably the best of these is the human brain. Some popular control methodologies are PID control, Bayesian control, neural networks, etc. Their common drawback is that all of them are based on either Boolean or multi-valued logic, which is no more than a threshold, or point-to-point, logic. This work introduces fuzzy logic control as a paradigm shift; this control method provides, to a large extent, a physical model of the human brain. The distinction is that it is not only a multi-valued logic but also a 'degree'-based logic. This paper gives a basic overview of a fuzzy control system and the considerations for its physical implementation.
Category: Artificial Intelligence
[1063] viXra:2005.0023 [pdf] submitted on 2020-05-02 08:30:28
Authors: Vamsi K, Ganeshan M
Comments: Pages.
The Internet is used in many situations: to send money to a friend, to receive money, or to buy something. We are used to using Visa cards, PayPal or other payment methods to transfer money, but these traditional methods are neither anonymous nor private. Therefore, we need different methods if we want to protect our privacy and anonymity. You are probably already thinking of cryptocurrencies, and that is correct: some cryptocurrencies are very secure and very anonymous. So, in this paper I discuss what cryptocurrencies are and how they work. We cover Bitcoin, since it is the most common cryptocurrency, and also a more private cryptocurrency, Monero. We discuss how to properly obtain these cryptocurrencies anonymously and privately, how to handle them in a secure and private manner, and how to send and receive them, again in a secure, private and anonymous manner, in scenarios such as transferring money to a friend or another person anonymously, paying for something anonymously and privately, or simply buying from a website that only accepts cryptocurrencies.
Category: Artificial Intelligence
[1062] viXra:2005.0003 [pdf] submitted on 2020-05-01 07:59:43
Authors: George Rajna
Comments: 38 Pages.
A paper published in Advanced Photonics "Enhanced light–matter interactions in dielectric nanostructures via machine-learning approach," suggests that machine-learning techniques can be used to enhance metasurfaces, optimizing them for nonlinear optics and optomechanics. [25]
Researchers have mathematically proven that a powerful classical machine learning algorithm should work on quantum computers. [24]
Researchers at Oregon State University have used deep learning to decipher which ribonucleic acids have the potential to encode proteins. [23]
Category: Artificial Intelligence
[1061] viXra:2005.0002 [pdf] submitted on 2020-05-01 08:20:36
Authors: George Rajna
Comments: 68 Pages.
Now, SLAC researchers have developed a new tool, using machine learning, that may make part of the tuning process five times faster compared to previous methods. [40] Compared with the previous method of data pre-processing, the new machine-learning-based method has quadrupled quality metrics for the identification of particles on the calorimeter. [39] From the data collected by the LHCb detector at the Large Hadron Collider, it appears that the particles known as charm mesons and their antimatter counterparts are not produced in perfectly equal proportions. The inclusion of short-range interactions in models of neutrinoless double-beta decay could impact the interpretation of experimental searches for the elusive decay. [34] The occasional decay of neutrons into dark matter particles could solve a long-standing discrepancy in neutron decay experiments. [33] The U.S. Department of Energy has approved funding and start of construction for the SuperCDMS SNOLAB experiment, which will begin operations in the early 2020s to hunt for hypothetical dark matter particles called weakly interacting massive particles, or WIMPs. [32] Thanks to low-noise superconducting quantum amplifiers invented at the University of California, Berkeley, physicists are now embarking on the most sensitive search yet for axions, one of today's top candidates for dark matter. [31]
Category: Artificial Intelligence
[1060] viXra:2004.0676 [pdf] submitted on 2020-04-29 17:09:39
Authors: Mohammed Hasan
Comments: 4 Pages.
Due to the rapid increase in the use of online technology, password security has become vital for users worldwide, who protect their sensitive data and accounts with a password known only to them. Over the years, as more data has come to be stored online, the variety of password strategies has also increased, and certain data may only be accessed through unique methods such as a fingerprint scan. One of the major types of service that requires users to protect their details is online banking, such as PayPal or NatWest, where users provide a stronger password than for an account of low value such as a mobile-phone game. This report goes in depth on the best practices and strategies of password security.
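One standard best practice the report's topic touches on is never storing passwords directly, but deriving a salted, slow hash instead. A minimal sketch with Python's standard library (the work factor is illustrative, not a recommendation from this report):

```python
import hashlib
import hmac
import os

ROUNDS = 100_000  # illustrative work factor; tune for your hardware

def hash_password(password, salt=None):
    """Derive a storable key from a password with PBKDF2-HMAC-SHA256."""
    salt = salt if salt is not None else os.urandom(16)  # unique per user
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ROUNDS)
    return salt, key

def verify_password(password, salt, key):
    """Re-derive and compare in constant time to avoid timing leaks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ROUNDS)
    return hmac.compare_digest(candidate, key)

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("tr0ub4dor&3", salt, key))                   # False
```

The random salt ensures identical passwords hash differently across users, and the iteration count makes brute-force guessing expensive.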
Category: Artificial Intelligence
[1059] viXra:2004.0675 [pdf] submitted on 2020-04-29 19:10:40
Authors: Shashank Jain, Amritesh Singh, Rahul Ranjan Singh
Comments: 7 Pages.
Many types of invoices containing tables exist in current systems, such as tables in native-text invoices, in image invoices (II), in handwritten invoices (HI), and so on. Nowadays these different types of invoices are processed manually. Our aim is to survey systems that can handle invoices containing tables automatically, using OCR (Optical Character Recognition) and deep learning technologies. Moreover, we discuss multiple technologies and suggest the best model as per our survey.
Category: Artificial Intelligence
[1058] viXra:2004.0611 [pdf] submitted on 2020-04-26 16:39:21
Authors: Dimiter Dobrev
Comments: 34 Pages. Bulgarian language
We will reduce the task of creating AI to the task of finding the right language for describing the world. This language will not be a programming language, because programming languages describe only computable functions, while this language will describe a slightly wider class of functions. Another feature of this language is that its descriptions can be divided into separate modules. This will allow us to search for the world description automatically, detecting it module by module. Our approach to creating this new language is to start from one particular world and write a description of that particular world. Our idea is that a language that can describe this particular world will be appropriate for describing an arbitrary world.
Category: Artificial Intelligence
[1057] viXra:2004.0580 [pdf] submitted on 2020-04-25 11:56:48
Authors: Satya Narayana
Comments: 3 Pages.
As everyone knows, sentiment analysis plays an important role these days, because many start-ups are built on user-driven content [1]. Merely transcribing the voice does not cover the real-world scenario, so finding the sentiment of the agent and the customer separately is an important research area in natural language processing. Natural language processing has a wide range of applications, such as voice recognition, machine translation, product reviews, aspect-oriented product analysis, sentiment analysis and text classification [2]. This process improves the business by analyzing the emotions of a conversation with respect to the customer's voice and the agent's voice separately. In this project the author performs speaker identification and analyzes the sentiment of the customer and the agent separately using Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to extract the content of the voice. Using speaker identification, the author can separate unstructured data such as audio, making it easy to analyze business performance. The system thus identifies the emotions of the conversation and outputs whether it is Positive, Negative, Neutral, or Mixed. The author uses AWS services because scaling resources is easier than with self-managed approaches such as a support vector machine (SVM).
The AWS services used are S3, an object data store; Transcribe, which converts audio to raw text; AWS Glue, an ETL service which extracts, transforms and loads the data from S3; AWS Comprehend, an NLP service used for finding the sentiment of audio; Lambda, a serverless environment where the author can run code; AWS Athena, an analysis tool which executes complex queries in less time; and finally QuickSight, a business-intelligence tool where the author can visualize the data of customers and agents.
Category: Artificial Intelligence
[1056] viXra:2004.0559 [pdf] submitted on 2020-04-23 19:33:20
Authors: Rajeev Kumar, Rajesh Budihal
Comments: 20 Pages.
The topic of credit card fraud detection has gained and developed interest among researchers, as fraudsters increase day by day and fraud appears frequently in varied and widespread applications within the many branches of information technology and engineering. For example, genetic algorithms, behavior-based techniques, and Hidden Markov models have been used to address the problem. Credit card fraud detection models are tested individually on transactions, and whichever is most effective is carried forward. This thesis aims to detect fraudulent transactions and to develop a method of generating test data. These algorithms take a predictive approach to solving highly complex computational problems; the genetic algorithm in particular is an adaptive, evolutionary technique that builds on inheritance and survival of the fittest. We discuss a new method that aims to detect fraud by filtering the above techniques to produce an improved result. Implementation of efficient credit card fraud detection systems is mandatory for all credit card issuing companies and their customers to reduce their losses.
Category: Artificial Intelligence
[1055] viXra:2004.0412 [pdf] submitted on 2020-04-17 08:01:03
Authors: George Rajna
Comments: 45 Pages.
Artificial intelligence (AI) can diagnose COVID-19 from CT scans, researchers in China claim. [26]
Researchers in Berlin and Heidelberg have now developed an intelligent neural network that can predict the functions of proteins in the human body. [25]
AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24]
According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23]
Category: Artificial Intelligence
[1054] viXra:2004.0371 [pdf] submitted on 2020-04-15 02:39:41
Authors: Jeongik Cho
Comments: 6 Pages.
In the field of deep learning, a traditional classifier takes input data and outputs predicted labels, while a conditional GAN receives a latent vector and a condition vector and generates data with the desired condition. In this paper, I propose an inverted generator classifier that predicts the label of input data by finding the condition vector and latent vector that can generate the input data, using the generator of a conditional GAN. The inverted generator classifier uses the trained generator of the conditional GAN as it is. To find the data closest to the input data, it treats the generator's latent vector for each condition as a variable and the model parameters as constants, and performs gradient descent repeatedly. Then, among the data generated for each condition, the condition vector of the data closest to the input data becomes the predicted label. The inverted generator classifier is slow at prediction time because it predicts via gradient descent, but its accuracy is high and it is very robust against adversarial attacks [1] such as noise.
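The prediction procedure can be sketched with toy per-class linear generators standing in for the conditional GAN (illustrative only; a real generator is nonlinear and the latent search would use automatic differentiation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "conditional generator": one fixed linear map per class label.
generators = {0: rng.normal(size=(6, 2)), 1: rng.normal(size=(6, 2))}

def invert(W, x, steps=1000, lr=0.02):
    """Gradient descent on z to minimise ||W z - x||^2; returns final loss."""
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        z -= lr * 2 * W.T @ (W @ z - x)
    return np.sum((W @ z - x) ** 2)

def classify(x):
    """Predict the label whose generator reconstructs x best."""
    losses = {c: invert(W, x) for c, W in generators.items()}
    return min(losses, key=losses.get)

x = generators[1] @ rng.normal(size=2)   # data generated by class 1
print(classify(x))  # should recover the generating label, here 1
```

Because the data lies in class 1's generator range but (generically) not in class 0's, the reconstruction loss separates the classes; this is exactly why gradient-based prediction is slow but robust to input noise.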
Category: Artificial Intelligence
[1053] viXra:2004.0363 [pdf] submitted on 2020-04-15 08:02:01
Authors: Amine Amyar, Romain Modzelewski, Su Ruan
Comments: 7 Pages.
The fast spread of the novel coronavirus COVID-19 has aroused worldwide interest and concern and has caused more than one and a half million confirmed cases to date. To combat this spread, medical imaging such as computed tomography (CT) can be used for diagnosis, and an automatic detection tool is necessary to help screen for COVID-19 pneumonia using chest CT imaging. In this work, we propose a multitask deep learning model to jointly identify COVID-19 patients and segment COVID-19 lesions from chest CT images. Our motivation is to leverage useful information contained in multiple related tasks to improve both segmentation and classification performance. Our architecture is composed of an encoder, two decoders for reconstruction and segmentation, and a multi-layer perceptron for classification. The proposed model is evaluated and compared with other image segmentation and classification techniques using a dataset of 1044 patients, including 449 patients with COVID-19, 100 normal ones, 98 with lung cancer and 397 with other kinds of pathology. The obtained results show very encouraging performance of our method, with a Dice coefficient higher than 0.78 for segmentation and an area under the ROC curve higher than 93% for classification.
Category: Artificial Intelligence
[1052] viXra:2004.0318 [pdf] submitted on 2020-04-12 21:21:53
Authors: Yuan Gao
Comments: 11 Pages.
One-stage object detectors like SSD and YOLO speed up existing two-stage detectors like Faster R-CNN by removing the object-proposal stage and making up for the lost performance in other ways. Nonetheless, the same approach is not easily transferable to the instance segmentation task. Current one-stage instance segmentation methods can be broadly classified into segmentation-based methods, which segment first and then cluster, and proposal-based methods, which detect first and then predict masks for each instance proposal. Proposal-based methods generally enjoy a better mAP; by contrast, segmentation-based methods are generally faster at inference time. In this work, we first propose a one-stage segmentation-based instance segmentation solution in which a pull loss and a push loss are used to differentiate instances. We then propose two post-processing methods, which provide a trade-off between accuracy and speed.
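The pull/push idea can be sketched on 1-D pixel embeddings in the style of discriminative embedding losses: a pull (variance) term draws pixels toward their instance mean, and a push term forces the means apart. The margins and weighting below are illustrative assumptions, not the paper's exact formulation.

```python
def pull_push_loss(embeddings, labels, delta_pull=0.5, delta_push=3.0):
    """Sketch of a discriminative-style loss on 1-D pixel embeddings.
    pull: hinged variance term drawing pixels toward their instance mean;
    push: hinge term penalizing instance means closer than delta_push."""
    instances = sorted(set(labels))
    means = {k: sum(e for e, l in zip(embeddings, labels) if l == k) /
                labels.count(k) for k in instances}
    # pull term: average hinged squared distance to the own-instance mean
    pull = 0.0
    for k in instances:
        members = [e for e, l in zip(embeddings, labels) if l == k]
        pull += sum(max(abs(e - means[k]) - delta_pull, 0.0) ** 2
                    for e in members) / len(members)
    pull /= len(instances)
    # push term: hinged penalty over all pairs of instance means
    push, pairs = 0.0, 0
    for i, a in enumerate(instances):
        for b in instances[i + 1:]:
            push += max(delta_push - abs(means[a] - means[b]), 0.0) ** 2
            pairs += 1
    push /= max(pairs, 1)
    return pull, push

emb    = [0.1, 0.2, 5.0, 5.1]   # two tight, well-separated instances
labels = [0, 0, 1, 1]
pull, push = pull_push_loss(emb, labels)
print(pull, push)  # both hinges inactive -> 0.0 0.0
```

After training, pixels are grouped by clustering the embeddings, which is where the accuracy/speed trade-off of the post-processing comes in.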
Category: Artificial Intelligence
[1051] viXra:2004.0293 [pdf] submitted on 2020-04-11 22:56:08
Authors: Abhishek.B.N
Comments: 4 Pages.
Security algorithms enable secure communication between two parties in the presence of a third party or snooper. They assure the recipient of the genuineness of the received message, protect the message content against unauthorized release by a third party, and ensure that only authorized users can access the data. MD5 and the Secure Hash Algorithm (SHA) are one-way cryptographic hash functions that are easy to compute but much harder to reverse: recovering the authentic message content would take on the order of millions of years. This research paper analyses the two hash algorithms, MD5 and SHA, using various key features, and highlights those features to give a clearer comparison of which algorithm has superseded the other.
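The one-way, tamper-evident behaviour compared in the paper is easy to observe with Python's standard `hashlib`: the same input always yields the same digest, while a one-character change produces a completely different one (MD5 gives 128-bit digests, SHA-256 gives 256-bit digests).

```python
import hashlib

msg      = b"transfer $100 to Alice"
tampered = b"transfer $900 to Alice"

d1 = hashlib.md5(msg).hexdigest()
d2 = hashlib.md5(tampered).hexdigest()

print(len(d1) * 4)                                # MD5 digest size: 128 bits
print(len(hashlib.sha256(msg).hexdigest()) * 4)   # SHA-256 digest size: 256 bits
print(d1 == hashlib.md5(msg).hexdigest())         # deterministic -> True
print(d1 == d2)                                   # one-character change -> False
```

Note that MD5 is no longer considered collision-resistant; the comparison here is about digest behaviour, not about which algorithm should be used today.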
Category: Artificial Intelligence
[1050] viXra:2004.0248 [pdf] submitted on 2020-04-10 16:17:02
Authors: Rajdeep Singh
Comments: 3 Pages.
The novel coronavirus - COVID-19 - has evolved into a global pandemic, so it is imperative that countries and medical facilities are equipped with the technology and resources to give every person the greatest chance of surviving. Even developed nations are beginning to run low on medical supplies such as hospital beds, masks, and respirators, and as cases grow in the United States, hospitals will continue to run out. It is therefore imperative that medical supplies be distributed first to those who need them most. This paper outlines a machine learning approach to predicting which patients are at the greatest risk of mortality given a confirmed positive diagnosis of coronavirus. The final results were too inconclusive to be implemented in a real-world scenario.
Category: Artificial Intelligence
[1049] viXra:2004.0222 [pdf] submitted on 2020-04-10 12:08:02
Authors: Xuezhe Ma, Xiang Kong, Shanghang Zhang, Eduard Hovy
Comments: 24 Pages. Preprint
In this work, we propose a new generative model that is capable of automatically decoupling global and local representations of images in an entirely unsupervised setting. The proposed model utilizes the variational auto-encoding framework to learn a (low-dimensional) vector of latent variables that captures the global information of an image, which is fed as a conditional input to a flow-based invertible decoder with an architecture borrowed from the style-transfer literature. Experimental results on standard image benchmarks demonstrate the effectiveness of our model in terms of density estimation, image generation and unsupervised representation learning. Importantly, this work demonstrates that, with only architectural inductive biases, a generative model with a plain log-likelihood objective is capable of learning decoupled representations, requiring no explicit supervision. The code for our model is available at https://github.com/XuezheMax/wolf.
Category: Artificial Intelligence
[1048] viXra:2004.0190 [pdf] submitted on 2020-04-08 01:37:34
Authors: George Rajna
Comments: 41 Pages.
A team of researchers at Google's DeepMind has developed an AI system that is able to predict the movement of glass molecules as the material transitions between liquid and solid states. [25] A research team centered at Osaka University, in collaboration with RIKEN, has developed a system that can overcome these difficulties by automatically searching for, focusing on, imaging, and tracking single molecules within living cells. [24] But researchers at Purdue University are working on a solution, combining quantum algorithms with classical computing on small-scale quantum computers to speed up database accessibility. [23] Researchers at the University of Twente, working with colleagues at the Technical Universities of Delft and Eindhoven, have successfully developed a new and interesting building block. [22] Researchers at the Institut d'Optique Graduate School at the CNRS and Université Paris-Saclay in France have used a laser-based technique to rearrange cold atoms one-by-one into fully ordered 3D patterns. [21] Reduced entropy in a three-dimensional lattice of super-cooled, laser-trapped atoms could help speed progress toward creating quantum computers. [20] Under certain conditions, an atom can cause other atoms to emit a flash of light. At TU Wien (Vienna), this quantum effect has now been measured. [19] A recent discovery by William & Mary and University of Michigan researchers transforms our understanding of one of the most important laws of modern physics. [18] Now, a team of physicists from The University of Queensland and the NÉEL Institute has shown that, as far as quantum physics is concerned, the chicken and the egg can both come first. [17]
Category: Artificial Intelligence
[1047] viXra:2004.0159 [pdf] submitted on 2020-04-07 03:45:06
Authors: Yang Zhang
Comments: 14 Pages.
Nature is structural instead of random, correlation is just an approximation of causality, and data is not science: the more we reveal, the more we revere nature on our voyage of unprecedented discovery. We argue that the soul(s) or exotic soul(s) of quotient Hypercomplex arbifold multiscale Spacetime (HyperSpacetime)'s corresponding manifold(s)/general (quotient and non-quotient) HyperSpacetime is the origin of super/general intelligence, and the metric of super/general intelligence is the complexity of quotient/general HyperSpacetime's corresponding generic polynomial. We also argue that the intersecting soul(s) and/or exotic soul(s) as varieties of quotient HyperSpacetime's corresponding manifold(s), when their maximal/minimum sectional curvatures approach positive infinity and/or negative infinity as singularities, is the origin of quantum entanglement. We further argue that the maximal/minimum sectional curvatures of the same intersecting soul(s) and/or exotic soul(s) is the origin of convergent evolution through conformal transformation. We derive even N-dimensional HyperSpacetime, an M-open (\begin{math} M = C_{_{I+N}}^{^I} \text{, } I, N, M \to \infty \end{math}) arbifold as generalized orbifold with the structure of an algebraic variety $\mathcal{A}$, without or with loop group action as $\mathcal{A}=[\mathcal{M}/\mathcal{LG}]$ ($\mathcal{M}$ as complex manifold, $\mathcal{LG}$ as loop group); it arises from an I-degree (power of 2) hypercomplex even N-degree generic polynomial continuous/discrete function/functor as nonlinear action functional in hypercomplex $\mathbb{HC}^{\infty}$, useful for generic neural networks: $\mathcal{F}(S_j,T_j)=\prod_{n=1}^{^{N}}(w_nS_n(T_n)+b_n+ \gamma \sum_{k=1}^{^{j}}\mathcal{F}(S_{k-1},T_{k-1}))$ where $j=1,\dots,N$, $S_{i}=s_0e_0+\sum_{i=1}^{^{{I-1}}}s_{i}e_{i}$, $T_{i}=t_0e_0+\sum_{i=1}^{^{{I-1}}}t_{i}e_{i}$ over a noncommutative nonassociative loop group. Its sectional curvature is \begin{math} \kappa = \frac{{\left| {\mathcal{F}''\left(X \right)} \right|}}{{{{\left( {1 + {{\left[ {\mathcal{F}'\left(X \right)} \right]}^2}} \right)}^{\frac{3}{2}}}}} \end{math} if $\mathcal{F}(X)$ is smooth, or \begin{math} \kappa = \kappa_{max}\kappa_{min} \end{math} if nonsmooth, by correlating general relativity with quantum mechanics via extension from 3+1 dimensional spacetime $\mathbb{R}^{4}$ to even N-dimensional HyperSpacetime $\mathbb{HC}^{\infty}$. By directly addressing multiscale, singularities, statefulness and nonlinearity instead of working via activation functions and backpropagation, HyperSpacetime, with its corresponding generic polynomial determining the complexity of the ANN, rigorously models curvature-based $2^{nd}$ order optimization in arbifold-equivalent neural networks, beyond the gradient-based $1^{st}$ order optimization in the manifold-approximated networks adopted in AI. We establish HyperSpacetime generic equivalence theory by synthesizing the Generalized Poincar\'{e} conjecture, the soul theorem, Galois theory, Fermat's last theorem, the Riemann hypothesis, the Hodge conjecture, Euler's theorem, Euclid's theorem and the universal approximation theorem. Our theory qualitatively and quantitatively tackles the black-box puzzle in AI, quantum entanglement and convergent evolution. Our future work includes HyperSpacetime refinement, complexity reduction and synthesis as our ongoing multiversal endeavor.
Category: Artificial Intelligence
[1046] viXra:2004.0106 [pdf] submitted on 2020-04-05 05:57:59
Authors: George Rajna
Comments: 51 Pages.
To this end, Ph.D. researcher Lars Banko, together with colleagues from the Interdisciplinary Centre for Advanced Materials Simulation at RUB, Icams for short, modified a so-called generative model. [30] Now, researchers have tested the first artificial intelligence model to identify and rank many causes in real-world problems without time-sequenced data, using a multi-nodal causal structure and Directed Acyclic Graphs. [29] A country that thinks its adversaries have or will get AI weapons will want to get them too. Wide use of AI-powered cyberattacks may still be some time away. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning-and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts-a finding that will help scientists further develop the quantum versions. 
[21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20]
Category: Artificial Intelligence
[1045] viXra:2004.0083 [pdf] submitted on 2020-04-04 01:50:26
Authors: Nirmal Tej Kumar
Comments: 3 Pages. Short Communication & Technical Notes
A Technical Communication on Understanding & Exploring [ Recommender Systems + Machine Learning (ML) + NLP + QRNG/mruby + Smart Devices + IoT/HPC (High-Performance Computing) ] in the Context of Advanced Scientific Imaging Algorithms towards Software R&D Using Ruby for [ Designing + Developing + Testing ] Heterogeneous Computing Environments.
{ https://www.semanticscholar.org/ - COVID-19 information is our inspiration }
Category: Artificial Intelligence
[1044] viXra:2004.0034 [pdf] submitted on 2020-04-02 04:56:48
Authors: George Rajna
Comments: 75 Pages.
Artificial intelligence (AI) may soon have a central role to play in the global battle against COVID-19. [42]
Simon Fraser University researchers will use their pioneering imaging technology—called Mango, for its bright colour— to develop coronavirus testing kits. [41]
According to the Centers for Disease Control and Prevention, common human coronaviruses usually cause mild to moderate upper-respiratory tract illnesses, like the common cold. [40]
Category: Artificial Intelligence
[1043] viXra:2004.0029 [pdf] submitted on 2020-04-02 08:48:20
Authors: George Rajna
Comments: 26 Pages.
For the first time, a team at the US Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) is using artificial intelligence (AI) to find patterns in neutron scattering data that can lead to an understanding of the physics inside quantum or complex magnetic materials. [15] "As far as we know, this is the first published work showing an application of super resolution to neutrons. We're at the forefront of an exciting new trend that will help other neutron scattering facilities improve their own data resolution as well," said Lin. [14] Coupled with SNS, the world's most powerful pulsed accelerator-based neutron source, VENUS will be the only open research facility platform in the US to provide time-of-flight neutron imaging capabilities to users from academia and industry. [13] A spallation neutron source has been used by physicists in Japan to search for possible violations of the inverse square law of gravity. [12] Physicists have proposed a way to test quantum gravity that, in principle, could be performed by a laser-based, table-top experiment using currently available technology. [11] Now however, a new type of materials, the so-called Weyl semimetals, similar to 3-D graphene, allow us to put the symmetry destructing quantum anomaly to work in everyday phenomena, such as the creation of electric current. [10] Physicist Professor Chunnong Zhao and his recent PhD students Haixing Miao and Yiqiu Ma are members of an international team that has created a particularly exciting new design for gravitational wave detectors. [9] A proposal for a gravitational-wave detector made of two space-based atomic clocks has been unveiled by physicists in the US. [8] The gravitational waves were detected by both of the twin Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors, located in Livingston, Louisiana, and Hanford, Washington, USA. 
[7] A team of researchers with the University of Lisbon has created simulations that indicate that the gravitational waves detected by researchers with the LIGO project, and which are believed to have come about due to two black holes colliding, could just have easily come from another object such as a gravaster (objects which are believed to have their insides made of dark energy) or even a wormhole. In their paper published in Physical Review Letters, the team describes the simulations they created, what was seen and what they are hoping to find in the future. [6] In a landmark discovery for physics and astronomy, international scientists said Thursday they have glimpsed the first direct evidence of gravitational waves, or ripples in space-time, which Albert Einstein predicted a century ago. [5] Scientists at the National Institute for Space Research in Brazil say an undiscovered type of matter could be found in neutron stars (illustration shown). Here matter is so dense that it could be 'squashed' into strange matter. This would create an entire 'strange star'-unlike anything we have seen. [4] The changing acceleration of the electrons explains the created negative electric field of the magnetic induction, the electromagnetic inertia, the changing relativistic mass and the Gravitational Force, giving a Unified Theory of the physical forces. Taking into account the Planck Distribution Law of the electromagnetic oscillators also, we can explain the electron/proton mass rate and the Weak and Strong Interactions.
Category: Artificial Intelligence
[1042] viXra:2004.0024 [pdf] submitted on 2020-04-01 10:54:54
Authors: George Rajna
Comments: 39 Pages.
Researchers at the Institute of Industrial Science, a part of The University of Tokyo, demonstrated a novel artificial intelligence system that can find and label 2-D materials in microscope images in the blink of an eye. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning-a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data-with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence
[1041] viXra:2004.0003 [pdf] submitted on 2020-04-01 09:41:22
Authors: George Rajna
Comments: 40 Pages.
Recently, a research team from Shanghai Institute of Optics and Fine Mechanics of the Chinese Academy of Sciences (CAS) proposed a three-dimensional damage localization method which was insensitive to the type of damage. [26]
A UCLA research team has devised a technique that extends the capabilities of fluorescence microscopy, which allows scientists to precisely label parts of living cells and tissue with dyes that glow under special lighting. [25]
Social, economic, environmental and health inequalities within cities can be detected using street imagery. [24]
Category: Artificial Intelligence
[1040] viXra:2003.0652 [pdf] submitted on 2020-03-30 07:45:49
Authors: George Rajna
Comments: 44 Pages.
Researchers from Tokyo Metropolitan University have used machine learning to analyze spin models, which are used in physics to study phase transitions. [26] We are still far off from achieving Quantum Advantage for machine learning-the point at which quantum computers surpass classical computers in their ability to perform AI algorithms. [25] Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning-a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data-with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence
[1039] viXra:2003.0583 [pdf] submitted on 2020-03-26 11:48:51
Authors: Egger Mielberg
Comments: 15 Pages.
Just as each neuron of the human brain may be connected to up to 10,000 other neurons, passing signals via as many as 1,000 trillion synaptic connections, Sense Theory allows for connecting over 1,000 trillion heterogeneous objects. An object in Sense Theory is like a neuron in the human brain. Properties of the object are like dendrites of the neuron. Changing an object by adding or deleting its properties is like forming new knowledge through the synaptic connection of two or more neurons. In Sense Theory, we introduced a mechanism for determining possible semantic relationships between objects by connecting and disconnecting different properties. This mechanism is the Sense Integral. In this article, we describe one of its instruments, the sense antiderivative, which sheds light on the nature of forming new knowledge in the field of Artificial Intelligence.
Category: Artificial Intelligence
[1038] viXra:2003.0565 [pdf] submitted on 2020-03-26 10:23:15
Authors: George Rajna
Comments: 51 Pages.
An Australian-German collaboration has demonstrated fully-autonomous SPM operation, applying artificial intelligence and deep learning to remove the need for constant human supervision. [30] Now, researchers have tested the first artificial intelligence model to identify and rank many causes in real-world problems without time-sequenced data, using a multi-nodal causal structure and Directed Acyclic Graphs. [29] A country that thinks its adversaries have or will get AI weapons will want to get them too. Wide use of AI-powered cyberattacks may still be some time away. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning-and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts-a finding that will help scientists further develop the quantum versions. 
[21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20]
Category: Artificial Intelligence
[1037] viXra:2003.0557 [pdf] submitted on 2020-03-25 19:23:46
Authors: Ayoub Abraich
Comments: 4 Pages. Code : https://github.com/abraich/COVID-19
In this article we present a naive model for predicting the number of COVID-19 infections, illustrated with real data on the evolution of COVID-19 in France.
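The abstract does not specify the model; one common "naive" baseline is an exponential growth curve fitted by ordinary least squares on log counts, sketched here on hypothetical cumulative counts (not the French data used in the paper).

```python
import math

# Hypothetical daily cumulative case counts (illustrative, not real data).
days  = [0, 1, 2, 3, 4, 5]
cases = [10, 16, 25, 40, 63, 100]

# Naive exponential model c(t) = c0 * exp(r * t); taking logs makes it
# linear in t, so we fit slope r and intercept log(c0) by least squares.
logs = [math.log(c) for c in cases]
n = len(days)
mean_t, mean_y = sum(days) / n, sum(logs) / n
r  = sum((t - mean_t) * (y - mean_y) for t, y in zip(days, logs)) / \
     sum((t - mean_t) ** 2 for t in days)
c0 = math.exp(mean_y - r * mean_t)

def predict(t):
    """Extrapolate the fitted curve to day t."""
    return c0 * math.exp(r * t)

print(round(predict(6)))  # one-day-ahead extrapolation
```

Such curves fit early outbreak data but overshoot once growth saturates, which is one reason the paper calls the model naive.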
Category: Artificial Intelligence
[1036] viXra:2003.0508 [pdf] submitted on 2020-03-24 09:40:36
Authors: George Rajna
Comments: 52 Pages.
In a paper published in Nature Nanotechnology on 23 March 2020, the researchers from the NUS Nanoscience and Nanotechnology Initiative (NUSNNI) reported the invention of a nanoscale device based on a unique material platform that can achieve optimal digital in-memory computing while being extremely energy efficient. [35]
University of Central Florida researchers are helping to close the gap separating human and machine minds. [34]
Brain-machine interfaces provide one way to connect with this puzzling organ system, including the brain. [33]
Category: Artificial Intelligence
[1035] viXra:2003.0484 [pdf] submitted on 2020-03-22 21:53:07
Authors: Qing Tian, Guangjun Tian
Comments: 4 Pages. in Chinese
This manuscript sketch first describes the effect of neural networks from the perspective of data-space transformation: transforming data from a complicated raw space into an easily (e.g. linearly) separable space. We use a simple paper-wrapping example to illustrate this point. In addition, the sketch discusses some similarities between neural networks and ensemble classification.
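The space-transformation view can be illustrated with XOR: the four labelled points are not linearly separable in the raw 2-D space, but a fixed two-unit ReLU layer (hand-picked for this toy, not taken from the manuscript) folds the space so that a single linear threshold separates them.

```python
def relu(v):
    return max(v, 0.0)

# XOR labels: not linearly separable in the raw 2-D input space.
points = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def hidden(x):
    """Fixed 2-unit ReLU layer: h1 = relu(x1 + x2), h2 = relu(x1 + x2 - 1).
    In the (h1, h2) space, XOR becomes linearly separable."""
    s = x[0] + x[1]
    return relu(s), relu(s - 1.0)

def linear_classify(h):
    # One linear unit in the transformed space: score = h1 - 2*h2.
    return 1 if h[0] - 2.0 * h[1] > 0.5 else 0

print(all(linear_classify(hidden(x)) == y for x, y in points))  # True
```

The hidden layer maps the inputs to scores 0, 1, 1, 0, so a single threshold at 0.5 suffices, which is exactly the "raw space to separable space" transformation the sketch describes.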
Category: Artificial Intelligence
[1034] viXra:2003.0419 [pdf] submitted on 2020-03-20 05:16:19
Authors: George Rajna
Comments: 51 Pages.
After testing prototype AI software on over 140 patients, a multinational team of researchers found that the algorithm showed very strong correlation with traditional pulmonary function tests. [32] A new artificial-intelligence tool captures strategies used by top players of an internet-based videogame to design new RNA molecules. [31] A team of EPFL scientists has now written a machine-learning program that can predict, in record time, how atoms will respond to an applied magnetic field. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes-he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning-and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. 
[23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts-a finding that will help scientists further develop the quantum versions. [21]
Category: Artificial Intelligence
[1033] viXra:2003.0378 [pdf] submitted on 2020-03-18 05:10:54
Authors: George Rajna
Comments: 51 Pages.
This capability will be crucial for ITER, the large international tokamak under construction in France to demonstrate the practicality of fusion energy. [32] An artificial intelligence (AI) algorithm can transform low-dose CT (LDCT) scans into high-quality exams that radiologists may even prefer over LDCT studies produced via commercial iterative reconstruction techniques. [31] A team of EPFL scientists has now written a machine-learning program that can predict, in record time, how atoms will respond to an applied magnetic field. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes-he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning-and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. 
[23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts-a finding that will help scientists further develop the quantum versions. [21]
Category: Artificial Intelligence
[1032] viXra:2003.0373 [pdf] submitted on 2020-03-18 06:16:14
Authors: George Rajna
Comments: 37 Pages.
Models based on artificial intelligence can significantly change the way we approach chemical syntheses. But we are still at the very beginning." [26] A new tool is drastically changing the face of chemical research-artificial intelligence. In a new paper published in Nature, researchers review the rapid progress in machine learning for the chemical sciences. [25] A new type of artificial-intelligence-driven chemistry could revolutionise the way molecules are discovered, scientists claim. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm-called MPLasso-that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses-so-called retrosyntheses-with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18]
Category: Artificial Intelligence
[1031] viXra:2003.0304 [pdf] submitted on 2020-03-14 01:21:27
Authors: Nirmal Tej Kumar
Comments: 7 Pages. Short Communication
A Short & Simple Technical Communication on Algorithms Design Using Python-Based [ Applied Physics + AI + Imaging Mathematics + Data Bases ] Image Processing Software R&D.
Category: Artificial Intelligence
[1030] viXra:2003.0193 [pdf] submitted on 2020-03-09 15:22:53
Authors: George Rajna
Comments: 54 Pages.
This study is part of a larger, coordinated effort across all the LHC experiments to use modern machine techniques to improve how the large data samples are recorded by the detectors and the subsequent data analysis. [28] Machine learning and automation technologies are gearing up to transform the radiation-therapy workflow while freeing specialist clinical and technical staff to dedicate more time to patient care. [27] Navid Borhani, a research-team member, says this machine learning approach is much simpler than other methods to reconstruct images passed through optical fibers, which require making a holographic measurement of the output. [26]
Category: Artificial Intelligence
[154] viXra:2411.0083 [pdf] replaced on 2024-11-19 20:43:09
Authors: Ait-Taleb nabil
Comments: 7 Pages.
In this paper, we will expose, for Gaussian multiple causation, a theorem relating causation to correlations. This theorem is based on another equality, which will also be proven.
Category: Artificial Intelligence
[153] viXra:2410.0049 [pdf] replaced on 2024-10-16 00:03:30
Authors: Ait-Taleb Nabil
Comments: 9 Pages.
In this paper, we will show, in a Gaussian context, how to obtain a causal relationship between an output variable and three input variables without any correlation between the output variable and the input variables. In other words, this paper demonstrates causation without correlations for Gaussian signals.
Category: Artificial Intelligence
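The situation the abstract describes can be illustrated numerically. The sketch below is not taken from the paper: it builds three equi-anticorrelated Gaussian inputs (pairwise correlation near -1/2) that jointly cause a noisy output, yet each input is almost uncorrelated with that output. All coefficients are illustrative assumptions.

```python
import math
import random

def pearson(a, b):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

random.seed(0)
n = 100_000
x1, x2, x3, y = [], [], [], []
for _ in range(n):
    g = [random.gauss(0, 1) for _ in range(3)]
    s = sum(g)
    xs = [gi - 0.3 * s for gi in g]                   # inputs with pairwise correlation near -1/2
    out = xs[0] + xs[1] + xs[2] + random.gauss(0, 1)  # y is caused by all three inputs
    x1.append(xs[0]); x2.append(xs[1]); x3.append(xs[2]); y.append(out)

print(pearson(x1, x2))   # strongly negative (about -0.49)
print(pearson(y, x1))    # near zero, despite the causal link
```

Because the inputs nearly cancel each other, their sum has tiny variance, so the output is dominated by its own noise and each input-output correlation almost vanishes even though the causal coefficients are all 1.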
[152] viXra:2408.0130 [pdf] replaced on 2024-09-11 13:41:25
Authors: Ait-Taleb nabil
Comments: 5 Pages.
In this paper, I will propose a topology that allows measuring a neighborhood for Bayesian networks. This topology will correspond to a Kullback-Leibler distance ratio and will make it possible to know the distance between a current Bayesian network and a Bayesian network having a chain rule. This topology applied to Bayesian networks will be normalized and will therefore vary from 0 to 1. The value 0 will correspond to a Bayesian network with a chain rule and the value 1 to a Bayesian network without edges.
Category: Artificial Intelligence
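The paper's exact distance is not reproduced here, but its stated endpoints (0 for a chain-rule network, 1 for an edgeless one) suggest a normalized Kullback-Leibler ratio. As a loose, hypothetical illustration over two binary variables, where the chain-rule network reproduces the joint exactly and the edgeless network is the product of marginals:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Joint P over two binary variables (x, y), flattened as [P00, P01, P10, P11].
P = [0.4, 0.1, 0.1, 0.4]
px = [P[0] + P[1], P[2] + P[3]]                          # marginal of x
py = [P[0] + P[2], P[1] + P[3]]                          # marginal of y
Q_empty = [px[i] * py[j] for i in (0, 1) for j in (0, 1)]  # edge-free network
Q_full = P[:]                                            # chain-rule network fits P exactly

def score(Q):
    # hypothetical normalized distance: 0 = chain-rule network, 1 = edge-free network
    return kl(P, Q) / kl(P, Q_empty)

print(score(Q_full), score(Q_empty))
```

The ratio lands at the two endpoints the abstract names; intermediate networks would score strictly between them.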
[151] viXra:2408.0087 [pdf] replaced on 2024-09-19 21:09:22
Authors: Dimiter Dobrev, Georgi Popov, Vladimir Tzanov
Comments: 14 Pages.
God created man in His own image, the Bible said millennia ago. Today we are headed toward creating Artificial Intelligence (AI) in our own image. The difference, however, is that God created a feeble and vulnerable being to take care of, while we are trying to create an almighty being who will be incomparably smarter than us and will take care of us. Thus, we are aiming to create our new god, and it matters a lot what kind of character this new god will have: kind and compassionate, or terribly stringent and overly demanding of us. Every human being has a character. Similarly, AI will have its own character. We will consider AI as a program with parameters which determine its character. The aim is to use these parameters in order to define the kind of character we want AI to have.
Category: Artificial Intelligence
[150] viXra:2407.0065 [pdf] replaced on 2024-09-20 08:45:15
Authors: Eugene Rulko
Comments: 8 Pages.
Training a relatively big neural network that has enough capacity for complex tasks is challenging. In real life, the process of task solving requires a system of knowledge, where more complex skills are built upon previously learned ones; in the same way, biological evolution builds new forms of life based on a previously achieved level of complexity. Inspired by that, this work proposes ways of increasing complexity: a way of training neural networks with smaller receptive fields and using their weights as prior knowledge for more complex successors through gradual involvement of some parts, and a way in which a smaller network works as a source of reward for a more complicated one. This allows better performance in a particular case of deep Q-learning in comparison with a situation where the model tries to use a complex receptive field from scratch.
Category: Artificial Intelligence
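One concrete way to use a small network's weights as prior knowledge for a larger successor, in the spirit of the abstract, is to copy a learned small convolution kernel into the centre of a larger kernel whose border starts neutral and is trained later. The kernel values and sizes below are illustrative assumptions, not the paper's.

```python
# hypothetical: a 3x3 kernel learned by the smaller predecessor network
small = [[1, 0, -1],
         [2, 0, -2],
         [1, 0, -1]]          # e.g. a learned Sobel-like edge filter

def grow_kernel(k, new_size):
    """Embed kernel k in the centre of a larger zero-initialized kernel."""
    n = len(k)
    pad = (new_size - n) // 2
    big = [[0.0] * new_size for _ in range(new_size)]
    for i in range(n):
        for j in range(n):
            big[pad + i][pad + j] = float(k[i][j])  # prior knowledge in the centre
    return big                                       # zero border: to be trained gradually

big = grow_kernel(small, 5)
print(big[1][1], big[0][0])
```

Initially the grown kernel behaves like the small one; gradient updates then gradually involve the new border weights, matching the "gradual involvement of some parts" idea.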
[149] viXra:2406.0161 [pdf] replaced on 2024-08-03 15:24:09
Authors: Ait-Taleb nabil
Comments: 5 Pages.
In this article, we will describe the mechanism that links the notion of causality to correlations. This article answers yes to the following question: Can we deduce a causal relationship from correlations?
Category: Artificial Intelligence
[148] viXra:2406.0161 [pdf] replaced on 2024-07-08 12:12:44
Authors: Ait-Taleb nabil
Comments: 7 Pages.
In this article, we will describe the mechanism that links the notion of causality to correlations. This article answers yes to the following question: Can we deduce a causal relationship from correlations?
Category: Artificial Intelligence
[147] viXra:2404.0075 [pdf] replaced on 2024-08-11 11:52:49
Authors: Dimiter Dobrev
Comments: 17 Pages. In Bulgarian
The purpose of AI is to predict the future and, based on that prediction, choose its next actions. AI tries to understand the world, which means finding a model that consists of an internal state and the function that drives transitions from one internal state to another. The model is needed to predict the next observation, that is, to predict the future. For AI to gain self-awareness, it must find the answer to the questions "Where am I?" and "What is going on?". The answer to these questions is hidden in the internal state of the world. An AI which does not endeavor to understand the world is weak AI. The path to creating a strong AI goes through describing the internal state of the world. If we are to create Artificial General Intelligence (AGI), it would not be sufficient just to learn how to describe the internal state of the world. We also need to move from single-step to multi-step reasoning. This means that we should be able to start from the current state of the world and mentally take several steps into the future, and thereby select the course of action that works best for us.
Category: Artificial Intelligence
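The multi-step reasoning described above can be sketched as a simple lookahead over a world model: start from the current state, mentally take several steps into the future, and pick the first action of the best trajectory. The toy transition table and rewards below are invented for illustration; the paper does not specify such a model.

```python
# toy deterministic world model: states, transitions, and rewards are assumptions
T = {('s0', 'a'): 's1', ('s0', 'b'): 's2',
     ('s1', 'a'): 's3', ('s1', 'b'): 's0',
     ('s2', 'a'): 's0', ('s2', 'b'): 's3',
     ('s3', 'a'): 's3', ('s3', 'b'): 's3'}
R = {'s0': 0, 's1': 1, 's2': -1, 's3': 5}

def lookahead(state, depth):
    """Return (best total reward, first action) over `depth` mental steps."""
    if depth == 0:
        return 0, None
    best = None
    for a in ('a', 'b'):
        nxt = T[(state, a)]
        future, _ = lookahead(nxt, depth - 1)
        total = R[nxt] + future
        if best is None or total > best[0]:
            best = (total, a)
    return best

print(lookahead('s0', 3))   # plan found by three steps of mental simulation
```

With one step of lookahead the agent only grabs the immediate reward; with three steps it discovers the route to the high-reward state, which is the point of moving from single-step to multi-step reasoning.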
[146] viXra:2312.0114 [pdf] replaced on 2024-08-07 12:21:40
Authors: Alexander Novikov
Comments: 283 Pages. Version 3 (15) with some additions
This Book (White Paper) proposes a Project Conception of Artificial Super Intelligence ASI, based on a (strong) system approach and a wide theoretical-methodological framework — Cybernetics, Synergetics, Semiotics, Mathematics, Cognitology and Artificial Intelligence. Contents: (*) IDEOLOGY & STRATEGY of the ASI Project (**) THEORY & METHODOLOGY of ASI Development (***) CONCEPTUAL MODEL of ASI System (****) PRE-PROJECT R&D Task Setting (*****) CONCLUSION & DISCUSSION, incl. AI Safety (******) APPENDICES with reviews of relevant scientific and R&D areas, incl. frontier AI Models. The Book may be useful and interesting for the staff of organizations & enterprises concerned with AI R&D and implementations in different areas, firstly — perspective AGI/ASI systems; in addition, for Customers, Investors and Sponsors of such R&Ds, private, public and state — their owners & officials; and, of course, for all intellectual, educated and ethical people with progressive worldviews who are interested in, or in any way concerned with, the problems presented above. Version 3 (15) with some additions: an overview of some interesting new (2024 H1) publications on R&D in the areas outlined in our Project, confirming the correctness of our conclusions and tasks for future work. See Appendix O and, briefly, Chapter 61.
Category: Artificial Intelligence
[145] viXra:2312.0114 [pdf] replaced on 2024-03-19 03:02:48
Authors: Alexander Novikov
Comments: 261 Pages. Version 2 (14) with some additions
This Book proposes a Project Conception of Artificial Super Intelligence ASI, based on a (strong) system approach and a wide theoretical-methodological framework — Cybernetics, Synergetics, Semiotics, Mathematics, Cognitology and Artificial Intelligence. Contents: (*) IDEOLOGY & STRATEGY of the ASI Project (**) THEORY & METHODOLOGY of ASI Development (***) CONCEPTUAL MODEL of ASI System (****) PRE-PROJECT R&D Task Setting (*****) CONCLUSION & DISCUSSION, incl. AI Safety (******) APPENDICES with reviews of relevant scientific and R&D areas, incl. frontier AI Models. The Book may be useful and interesting for the staff of organizations & enterprises concerned with AI R&D and implementations in different areas, firstly — perspective AGI/ASI systems; in addition, for Customers, Investors and Sponsors of such R&Ds, private, public and state — their owners & officials; and, of course, for all intellectual, educated and ethical people with progressive worldviews who are interested in, or in any way concerned with, the problems presented above.
Category: Artificial Intelligence
[144] viXra:2311.0021 [pdf] replaced on 2023-11-13 20:04:51
Authors: Dimiter Dobrev
Comments: 6 Pages.
Our generation is the one that will create the first Artificial Intelligence (AI). We are the ones who will set the rules by which this AI will operate. Once these rules are set, they will be there forever, hence our responsibility is huge. There will be no chance of a second AI, because the first one will take control and will not allow the creation of another AI. Our first and foremost concern is not to lose control of the first (and only) AI. Hopefully we will be reasonable enough and not let that happen. However, even if people retain control of AI, the question that comes next is who exactly those people will be. Should they enjoy the absolute power to issue whatever commands to AI they wish? Or should certain restrictions be embedded in AI at its very inception?
Category: Artificial Intelligence
[143] viXra:2310.0061 [pdf] replaced on 2024-07-18 02:28:48
Authors: Mohammadjavad Maheronnaghsh, Mohammad Mahdi Gheidi, Abolfazl Younesi, MohammadAmin Fazli
Comments: 11 Pages. I have uploaded other versions before; please remove the previous versions from viXra.
In the dynamic world of financial markets, accurate price predictions are essential for informed decision-making. This research proposal outlines a comprehensive study aimed at forecasting stock and currency prices using state-of-the-art Machine Learning (ML) techniques. By delving into the intricacies of models such as Transformers, LSTM, Simple RNN, NHits, and NBeats, we seek to contribute to the realm of financial forecasting, offering valuable insights for investors, financial analysts, and researchers. This article provides an in-depth overview of our methodology, data collection process, model implementations, evaluation metrics, and potential applications of our research findings. The research indicates that NBeats and NHits models exhibit superior performance in financial forecasting tasks, especially with limited data, while Transformers require more data to reach their full potential. Our findings offer insights into the strengths of different ML techniques for financial prediction, highlighting specialized models like NBeats and NHits as top performers, thus informing model selection for real-world applications.
Category: Artificial Intelligence
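Independent of the specific architectures named above, any such comparison rests on a common evaluation loop: produce one-step-ahead forecasts on held-out data and score them with an error metric. A minimal sketch with two naive baselines on a synthetic series; the data, split size, and window length are assumptions, not the paper's.

```python
import random

random.seed(1)
# synthetic random-walk "price" series standing in for real market data
series = [100.0]
for _ in range(300):
    series.append(series[-1] + random.gauss(0.1, 1.0))

split = 250                      # train/test boundary

def mae(pred, true):
    """Mean absolute error over paired forecasts and observations."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

naive_preds, ma_preds, truth = [], [], []
for t in range(split, len(series)):
    naive_preds.append(series[t - 1])             # naive: tomorrow == today
    ma_preds.append(sum(series[t - 5:t]) / 5)     # 5-step moving-average forecast
    truth.append(series[t])

print(mae(naive_preds, truth), mae(ma_preds, truth))
```

Replacing the two baseline forecasters with trained models (LSTM, NBeats, and so on) while keeping the loop and metric fixed is what makes scores like these comparable across architectures.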
[142] viXra:2309.0082 [pdf] replaced on 2023-11-12 12:05:00
Authors: Sheng-Ping Wu
Comments: 12 Pages.
A self-consistent Lorentz equation is proposed and solved for electrons and for the structures of particles and the atomic nucleus. The static properties and decays are derived, and all meet experimental data. The equation of general relativity purely with the electromagnetic field is discussed as the basis of this theory.
Category: Artificial Intelligence
[141] viXra:2308.0137 [pdf] replaced on 2023-09-30 22:42:32
Authors: Victor V. Senkevich
Comments: 16 Pages.
All magic and mystery disappear as soon as an obscure, mysterious concept gets a rigorous formal definition. In order to talk about the applicability of philosophical and cognitive concepts to the subject area of AI, it is necessary to "ground" these concepts by formulating rigorous formal definitions for them. The fundamental importance of such formal definitions is quite obvious, since any concepts applied to the field of Information Technology must be "codable", i.e. potentially implementable in program code. Thus, "codable" formal definitions of cognitive terms are the necessary basis on which alone it is possible to build the architecture of an AI technology able to embody these concepts in real software. The question of how adequately such definitions capture reality and how well they comply with existing, generally accepted philosophical theories is also very important and quite debatable, but it does not affect the priority and fundamental nature of the requirement to formulate "codable" formal definitions. The formulation of "codable" definitions for the concept of "consciousness" and related cognitive concepts, and, based on them, statements about their applicability to the subject area of AI, is the topic of this publication. Covered questions: Can AI have a Personality / Motivations / Free Will?
Category: Artificial Intelligence
[140] viXra:2308.0116 [pdf] replaced on 2023-11-12 21:54:59
Authors: Youming Zhao
Comments: 10 pages, fixed two mistakes
We present an alternating direction method of multipliers (ADMM) for a generic overlapping group lasso problem, where the groups can overlap in an arbitrary way. We also prove lower and upper bounds for both the $\ell_1$ sparse group lasso problem and the $\ell_0$ sparse group lasso problem, and we propose algorithms for computing these bounds.
Category: Artificial Intelligence
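The workhorse inside an ADMM solver for the (non-overlapping) group lasso penalty is the proximal operator of the group norm, i.e. block soft-thresholding; overlapping groups are commonly handled by duplicating shared variables before applying it. The sketch below shows that generic building block, not the paper's specific algorithm.

```python
import math

def group_soft_threshold(v, lam):
    """Proximal operator of lam * ||v||_2 for one group: block soft-thresholding.
    Shrinks the whole group toward zero; kills it entirely if its norm <= lam."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= lam:
        return [0.0] * len(v)
    scale = 1.0 - lam / norm
    return [scale * x for x in v]

print(group_soft_threshold([3.0, 4.0], 1.0))   # norm 5: scaled by 1 - 1/5
print(group_soft_threshold([0.3, 0.4], 1.0))   # norm 0.5: whole group zeroed
```

Inside ADMM this operator is the z-update applied group by group, which is what produces groupwise sparsity.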
[139] viXra:2307.0134 [pdf] replaced on 2023-08-14 07:32:41
Authors: Satish Gajawada
Comments: 5 Pages.
This paper is dedicated to everyone who is interested in Artificial Intelligence. In the past, researchers have explored the behavior of chromosomes, birds, fishes, ants, bacteria, bees and so on to create excellent optimization methods for solving complex optimization problems. The author proposes Human Optimization in this paper. Humans have progressed enormously. They help each other. There are so many plus points in Humans. In fact, all optimization algorithms based on other beings were created by Humans. There is so much to explore in the behavior of Humans for creating awesome optimization algorithms. Artificial fishes, birds, ants, bees etc. have solved optimization problems. Similarly, an optimization method based on Humans is expected to solve complex problems. This paper sets the trend for all optimization algorithms based on Humans that come in the future.
Category: Artificial Intelligence
[138] viXra:2307.0121 [pdf] replaced on 2024-03-20 22:59:08
Authors: Jeongik Cho
Comments: 16 Pages.
Class-conditional GAN generates class-conditional data from a continuous latent distribution and a categorical distribution. Typically, a class-conditional GAN can be trained only when the label, i.e. the conditional categorical distribution of the target data, is given. In this paper, we propose a novel GAN that allows the model to perform self-supervised class-conditional data generation and clustering without knowing labels, the optimal prior categorical probability, or a metric function. The proposed method uses a discriminator, a classifier, and a generator. The classifier is trained with cross-entropy loss to predict the conditional vector of the fake data. Also, the conditional vector of real data predicted by the classifier is used to train the class-conditional GAN. When training a class-conditional GAN with this classifier, the decision boundary of the classifier falls into local optima where the density of the data is minimized. The proposed method adds a classifier gradient penalty loss to the classifier loss to prevent the classifier's decision boundary from falling into a narrow range of local optima. It regulates the gradient of the classifier's output to prevent the gradient near the decision boundary from becoming too large. As the classifier gradient penalty loss weight increases, the decision boundary falls into a wider range of local optima. This means that the sensitivity of each class can be adjusted by the weight of the gradient penalty loss. Additionally, the proposed method updates the prior categorical probability with the categorical probability of real data predicted by the classifier. As training progresses, the entropy of the prior categorical probability decreases and converges according to the classifier gradient penalty loss weight.
Category: Artificial Intelligence
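The classifier gradient penalty described above regularizes the gradient of the classifier's output with respect to its input. For a toy logistic classifier f(x) = sigmoid(w·x + b) that gradient has the closed form f(1-f)·w, so the penalty can be sketched directly; the weights and sample points below are illustrative assumptions, and the paper applies this idea to a neural classifier inside GAN training rather than to this toy model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def input_gradient_penalty(w, b, xs):
    """Mean squared norm of df/dx for f(x) = sigmoid(w.x + b).
    Analytically, grad_x f = f * (1 - f) * w."""
    total = 0.0
    for x in xs:
        f = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        total += (f * (1 - f)) ** 2 * sum(wi * wi for wi in w)
    return total / len(xs)

# a steep boundary (large weights) is penalized far more than a gentle one
steep = input_gradient_penalty([10.0, 0.0], 0.0, [[0.0, 0.0]])
flat = input_gradient_penalty([1.0, 0.0], 0.0, [[0.0, 0.0]])
print(steep, flat)
```

Adding a weighted version of this term to the classification loss discourages steep decision boundaries, which is the mechanism the abstract uses to widen the range of acceptable local optima.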
[137] viXra:2307.0121 [pdf] replaced on 2023-10-23 23:26:22
Authors: Jeongik Cho
Comments: 14 Pages.
Class-conditional GAN is a conditional GAN that can generate class-conditional distribution. Among class-conditional GANs, class-conditional InfoGAN can generate class-conditional data through a self-supervised (unsupervised) method without a labeled dataset. Instead, class-conditional InfoGAN requires optimal categorical latent distribution to train the model. In this paper, we propose a novel GAN that allows the model to perform self-supervised class-conditional data generation and clustering without knowing the optimal categorical latent distribution (prior probability). The proposed model consists of a discriminator, a classifier, and a generator, and uses three losses. The first loss is the cross-entropy classification loss to predict the conditional vector of the fake data. The classifier is trained with the classification loss. The second loss is the CAGAN loss for class-conditional data generation. The conditional vector of the real data predicted by the classifier is used for CAGAN loss. The generator and discriminator are trained with CAGAN loss. The third loss is the classifier gradient penalty loss. The classifier gradient penalty loss regularizes the slope of the classifier's decision boundary so that the decision boundary converges to a local optimum over a wide region. Additionally, the proposed method updates the categorical latent distribution with a predicted conditional vector of real data. As training progresses, the entropy of the categorical latent distribution gradually decreases and converges to the appropriate value. The converged categorical latent distribution becomes appropriate to represent the discrete part of the data distribution. The proposed method does not require labeled data, optimal categorical latent distribution, and a good metric to measure the distance between data.
Category: Artificial Intelligence
[136] viXra:2307.0121 [pdf] replaced on 2023-08-21 03:13:35
Authors: Jeongik Cho
Comments: 11 Pages.
Class-conditional GAN is a conditional GAN that can generate class-conditional distribution. Among class-conditional GANs, InfoGAN with categorical latent distribution can generate class-conditional data through a self-supervised (unsupervised) method without a labeled dataset. Instead, InfoGAN requires optimal categorical latent distribution to train the model. In this paper, we propose a novel GAN that allows the model to perform self-supervised class-conditional data generation and clustering without knowing the optimal categorical latent distribution. The proposed method uses three losses. The first loss is the cross-entropy classification loss to predict the label of the fake data. The classifier is trained with the classification loss. The second loss is the CAGAN loss for class-conditional data generation. The virtual label of the real data predicted by the classifier is used for CAGAN loss. The generator and discriminator are trained with CAGAN loss. The third loss is the classifier gradient penalty loss. The classifier gradient penalty loss regularizes the slope of the classifier’s decision boundary so that the decision boundary converges to a local optimum over a wide region. Additionally, the proposed method updates the categorical latent distribution with the output distribution of the classifier on the real data. As training progresses, the entropy of the categorical latent distribution gradually decreases by the classifier gradient penalty loss and converges to the appropriate value. The converged categorical latent distribution becomes appropriate to represent the discrete part of the data distribution. The proposed method does not require labeled data, optimal categorical latent distribution, and a good metric to calculate the distance between data.
Category: Artificial Intelligence
[135] viXra:2307.0121 [pdf] replaced on 2023-08-07 13:45:40
Authors: Jeongik Cho
Comments: 11 Pages.
Class-conditional GAN is a conditional GAN that can generate class-conditional distribution. Among class-conditional GANs, InfoGAN with categorical latent distribution can generate class-conditional data through a self-supervised (unsupervised) method without a labeled dataset. Instead, InfoGAN requires optimal categorical latent distribution to train the model. In this paper, we propose a novel GAN that allows the model to perform self-supervised class-conditional data generation and clustering without knowing the optimal categorical latent distribution. The proposed method uses three different losses. The first loss is the cross-entropy classification loss to predict the label of the fake data. The classifier is trained with the classification loss. The second loss is the CAGAN loss for class-conditional data generation. The virtual label of the real data predicted by the classifier is used for CAGAN loss. The generator and discriminator are trained with CAGAN loss. The third loss is the classifier gradient penalty loss. The classifier gradient penalty loss regularizes the slope of the classifier's decision boundary so that the decision boundary converges to a better local optimum. Additionally, the proposed method updates the categorical latent distribution with the output distribution of the classifier on the real data. As training progresses, the entropy of the categorical latent distribution gradually decreases by the classifier gradient penalty loss and converges to the appropriate value. The converged categorical latent distribution becomes appropriate to represent the discrete part of the data distribution. The proposed method does not require labeled data, optimal categorical latent distribution, and a good metric to calculate the distance between data.
Category: Artificial Intelligence
[134] viXra:2306.0055 [pdf] replaced on 2023-10-10 01:20:34
Authors: Shaun Stoltz
Comments: 10 Pages.
There have been significant improvements in directing large language models (LLMs) to answer logic-based questions such as mathematical reasoning tasks. This has resulted in near-perfect performance on these types of problems, with accuracy levels in the mid-nineties using state-of-the-art models (GPT-4). Achieving this level of accuracy has previously required a multi-prompt approach to elicit better performance from LLMs. This paper introduces a new prompt paradigm termed "mega prompt" and further introduces Proteus, a state-of-the-art mega prompt that has been used to achieve a new level of accuracy of 97% on the GSM8K math data set.
Category: Artificial Intelligence
[133] viXra:2306.0003 [pdf] replaced on 2023-06-05 10:32:44
Authors: Essam El-Tobgi
Comments: 10 Pages.
Deep learning has become a powerful tool for solving a wide variety of problems, including those in physics. In this paper, we explore the use of deep learning for the detection of continuous gravitational waves. We propose two different approaches: one based on time-domain analysis and the other based on frequency-domain analysis. Both approaches achieve nearly the same performance, suggesting that deep learning is a promising technique for this task. The main purpose of this paper is to provide an overview of the potential of deep learning for physics problems. We do not provide a performance-measured solution, as this is beyond the scope of this paper. However, we believe that the results presented here are encouraging and suggest that deep learning is a valuable tool for physicists.
Category: Artificial Intelligence
[132] viXra:2305.0064 [pdf] replaced on 2023-08-10 14:46:30
Authors: Ait-taleb nabil
Comments: 14 Pages.
In this paper, I will introduce the causation's magnitude, which makes it possible to compute the importance of causes in the cause-and-effect relationship from the correlation matrix.
Category: Artificial Intelligence
[131] viXra:2304.0089 [pdf] replaced on 2023-06-09 00:50:28
Authors: Friedrich Sösemann
Comments: 12 pages english, 12 pages german
Information, knowledge and intelligence are defined as a hierarchy of relations: information as dependent properties, knowledge as dependent information, and intelligence as dependent knowledge. The same dependency measure applies to all three. Syntax, semantics and pragmatics of descriptions embody information, knowledge and intelligence. The precision and measurability of these terms should reduce vagueness and contradictions in their application.
Category: Artificial Intelligence
[130] viXra:2301.0076 [pdf] replaced on 2023-04-18 00:33:37
Authors: Fuyuan Xiao
Comments: 2 Pages.
In this paper, a new quantum model of generalized quantum evidence theory is proposed. Besides, a new quantum X-entropy is proposed to measure the uncertainty in generalized quantum evidence theory.
Category: Artificial Intelligence
[129] viXra:2212.0176 [pdf] replaced on 2023-02-14 09:34:24
Authors: Jeongik Cho
Comments: 10 Pages.
Dynamic latent scale GAN is a method for training an encoder that inverts the generator of a GAN with maximum likelihood estimation. In this paper, we propose a method to improve the performance of dynamic latent scale GAN by efficiently integrating a perceptual VAE loss into it. When a dynamic latent scale GAN is trained with a normal i.i.d. latent random variable and the latent encoder is integrated into the discriminator, the sum of the predicted latent random variable of real data and scaled normal noise follows a normal i.i.d. random variable. This random variable can be used for both VAE and GAN training. Considering the intermediate layer output of the discriminator as a feature encoder output, the generator can be trained to minimize the perceptual VAE loss. Also, inference and backpropagation for the perceptual VAE loss can be integrated into those for GAN training, so perceptual VAE training does not require additional computation. Moreover, the proposed method does not require a prior loss or variance estimation like VAE.
Category: Artificial Intelligence
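The identity the abstract relies on is that if the predicted latent has per-dimension scale s < 1, adding independent normal noise scaled by sqrt(1 - s^2) restores a unit-variance normal variable, which can then serve both VAE and GAN training. A quick numerical check of that identity (s = 0.6 is an arbitrary assumption, and the scaled Gaussian stands in for a real encoder's output):

```python
import random
import statistics

random.seed(2)
s = 0.6                                        # assumed latent scale after convergence
mixed = []
for _ in range(100_000):
    z_hat = s * random.gauss(0, 1)             # stand-in for the predicted latent
    noise = (1 - s * s) ** 0.5 * random.gauss(0, 1)  # complementary noise
    mixed.append(z_hat + noise)

# variances add for independent terms: s^2 + (1 - s^2) = 1
print(statistics.mean(mixed), statistics.pstdev(mixed))
```

The empirical mean is near 0 and the standard deviation near 1, confirming that the sum is again a standard normal variable regardless of the scale s.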
[128] viXra:2210.0120 [pdf] replaced on 2023-06-13 14:52:20
Authors: Dimiter Dobrev
Comments: 28 Pages. English and Bulgarian languages
We will consider all policies of the agent and will prove that one of them is the best performing policy. While that policy is not computable, computable policies do exist in its proximity. We will define AI as a computable policy which is sufficiently proximal to the best performing policy. Before we can define the agent’s best performing policy, we need a language for description of the world. We will also use this language to develop a program which satisfies the AI definition. The program will first understand the world by describing it in the selected language. The program will then use the description in order to predict the future and select the best possible move. While this program is extremely inefficient and practically unusable, it can be improved by refining both the language for description of the world and the algorithm used to predict the future. This can yield a program which is both efficient and consistent with the AI definition.
Category: Artificial Intelligence
[127] viXra:2210.0120 [pdf] replaced on 2023-04-18 06:06:19
Authors: Dimiter Dobrev
Comments: 25 Pages. English and Bulgarian languages
We will consider all policies of the agent and will prove that one of them is the best performing policy. While that policy is not computable, computable policies do exist in its proximity. We will define AI as a computable policy which is sufficiently proximal to the best performing policy. Before we can define the agent’s best performing policy, we need a language for description of the world. We will also use this language to develop a program which satisfies the AI definition. The program will first understand the world by describing it in the selected language. The program will then use the description in order to predict the future and select the best possible move. While this program is extremely inefficient and practically unusable, it can be improved by refining both the language for description of the world and the algorithm used to predict the future. This can yield a program which is both efficient and consistent with the AI definition.
Category: Artificial Intelligence
[126] viXra:2210.0120 [pdf] replaced on 2022-11-28 19:22:21
Authors: Dimiter Dobrev
Comments: 16 Pages.
We will consider all policies of the agent and will prove that one of them is the best performing policy. While that policy is not computable, computable policies do exist in its proximity. We will define AI as a computable policy which is sufficiently proximal to the best performing policy. Before we can define the agent's best performing policy, we need a language for description of the world. We will also use this language to develop a program which satisfies the AI definition. The program will first understand the world by describing it in the selected language. The program will then use the description in order to predict the future and select the best possible move. While this program is extremely inefficient and practically unusable, it can be improved by refining both the language for description of the world and the algorithm used to predict the future. This can yield a program which is both efficient and consistent with the AI definition.
Category: Artificial Intelligence
[125] viXra:2209.0069 [pdf] replaced on 2022-11-17 03:10:13
Authors: Ait-Taleb Nabil
Comments: 14 Pages.
In this paper, we will propose a method for learning signals related to a data frame $D_{1}$. The learning algorithm will be based on the biggest entropy variations of a Bayesian network. The method will make it possible to obtain an optimal Bayesian network having a high likelihood with respect to the signals $D_{1}$. From the learned optimal Bayesian network, we will show how to infer new signals $D_{2}$, and we will also introduce the prediction quality $\Delta_{CR}$, which evaluates the predictive quality of the inferred signals $D_{2}$. We will then infer a large number (10000) of candidate signals $D_{2}$ and select the predictive signals $D_{2}^{*}$ having the best prediction quality. Once the optimal signals $D_{2}^{*}$ are obtained, we will impose on the points of the signals $D_{2}^{*}$ the same order of scatter (computed from the Mahalanobis distance) as that of the signals $D_{1}$.
Category: Artificial Intelligence
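The Mahalanobis distance used above to order the scatter of signal points accounts for the covariance of the data, unlike the Euclidean distance. A minimal 2-D sketch with a hand-rolled 2x2 inverse; the covariance values are illustrative, not taken from the paper:

```python
import math

def mahalanobis_2d(point, mean, cov):
    """Mahalanobis distance of a 2-D point given a mean and 2x2 covariance."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]   # inverse covariance
    dx = [point[0] - mean[0], point[1] - mean[1]]
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return math.sqrt(q)

cov = [[2.0, 0.0], [0.0, 0.5]]
print(mahalanobis_2d([2.0, 0.0], [0.0, 0.0], cov))  # Euclidean distance 2
print(mahalanobis_2d([0.0, 1.0], [0.0, 0.0], cov))  # Euclidean distance 1
```

The two points have different Euclidean distances but the same Mahalanobis distance, because each lies equally far along its axis once variance is taken into account; this is why it is a natural measure for matching the scatter of one signal set to another.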
[124] viXra:2209.0069 [pdf] replaced on 2022-11-10 18:09:21
Authors: Ait-Taleb Nabil
Comments: 14 Pages.
In this paper, we will propose a method for learning signals related to a data frame $D_{1}$. The learning algorithm will be based on the biggest entropy variations of a Bayesian network. The method will make it possible to obtain an optimal Bayesian network having a high likelihood with respect to the signals $D_{1}$. From the learned optimal Bayesian network, we will show how to infer new signals $D_{2}$, and we will also introduce the prediction quality $\Delta_{CR}$, which evaluates the predictive quality of the inferred signals $D_{2}$. We will then infer a large number (10000) of candidate signals $D_{2}$ and select the predictive signals $D_{2}^{*}$ having the best prediction quality. Once the optimal signals $D_{2}^{*}$ are obtained, we will impose on the points of the signals $D_{2}^{*}$ the same order of scatter (computed from the Mahalanobis distance) as that of the signals $D_{1}$.
Category: Artificial Intelligence
[123] viXra:2207.0064 [pdf] replaced on 2022-07-22 00:19:25
Authors: Dimitrios Geromichalos
Comments: 10 Pages. Updated version
Based on hundreds of thousands of song lyrics from thousands of bands, Word2Vec models have been trained to quantitatively identify similarities between band texts and terms. Using prominent examples, this demonstrates, for the cases studied, that music bands can be assigned to a similarity network solely on the basis of their song lyrics, and that this network also corresponds to their musical style. Furthermore, using exemplary words, it is demonstrated that semantic term networks vary strongly from genre to genre. In addition, the semantic similarity matrices were studied using network analysis methods. As it turned out, term and band-text networks differ significantly: while the former resemble random networks, the latter partly exhibit power-law behavior. Both also exhibit threshold-dependent regimes.
Category: Artificial Intelligence
[122] viXra:2203.0055 [pdf] replaced on 2024-10-14 02:44:27
Authors: Carlino Christian Francesco
Comments: 30 Pages.
The following work deals with the realization of an experimental setup, called the Hydro-Magnetic Catalyst (HMC), which has provided early experimental results useful for the study of the origin of consciousness and subjective experience. The experimental results demonstrate that: (1) interactions between vicinal water and magnetic fields, oscillating with frequencies in the theta, alpha, beta, and gamma ranges, are fundamental for the qualitative perception of stimuli from the external environment (qualia); this perception can be measured by means of a new index called Quantillium (Ql); (2) neurons and neuronal networks are not the repositories of consciousness and subjective experience; they constitute the information-decoding hardware. This decoding is mostly digital, but passive properties of neuronal cells and their interconnections allow for a digital-to-analog reconversion of the signal. The analog signal could be responsible for broadcasting information to the aqueous environment and for the perception, during the waking state, of qualia and the stream of consciousness.
Category: Artificial Intelligence
[121] viXra:2202.0116 [pdf] replaced on 2022-04-12 05:22:42
Authors: Jeongik Cho
Comments: 8 Pages.
Dynamic latent scale GAN proposed a learning-based GAN inversion method with maximum likelihood estimation. In this paper, we propose a method for self-supervised out-of-distribution detection using the encoder of dynamic latent scale GAN. When the dynamic latent scale GAN has converged, since the entropy of the scaled latent random variable is optimal for representing in-distribution data, in-distribution data is densely mapped to latent codes with high likelihood. This enables the log-likelihood of the predicted latent code to be used for out-of-distribution detection. The proposed method does not require mutual information of in-distribution data or additional hyperparameters for prediction. The proposed method also showed better out-of-distribution detection performance than the previous state-of-the-art method.
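The decision rule described here, thresholding the log-likelihood of the encoder's predicted latent code, can be sketched under the common assumption of a standard normal latent prior (the threshold and codes below are illustrative, not from the paper):

```python
import numpy as np

def gaussian_log_likelihood(z):
    """Log-density of a latent code z under a standard normal prior."""
    d = z.shape[-1]
    return -0.5 * (np.sum(z ** 2, axis=-1) + d * np.log(2.0 * np.pi))

def is_out_of_distribution(z, threshold):
    """Flag codes whose prior log-likelihood falls below the threshold."""
    return gaussian_log_likelihood(z) < threshold

z_in = np.zeros(8)        # a code near the mode: high likelihood
z_out = np.full(8, 5.0)   # a code deep in the tail: low likelihood
threshold = gaussian_log_likelihood(np.full(8, 3.0))
```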
Category: Artificial Intelligence
[120] viXra:2202.0116 [pdf] replaced on 2022-02-22 15:03:45
Authors: Jeongik Cho
Comments: 4 Pages.
DLSGAN proposed a learning-based GAN inversion method with maximum likelihood estimation. In this paper, I propose a method for unsupervised out-of-distribution detection using the encoder of DLSGAN. When the DLSGAN has converged, since the entropy of the scaled latent random variable is optimal for expressing in-distribution data, in-distribution data is densely mapped to latent codes with high likelihood. This enables the log-likelihood of the predicted latent code to be used for out-of-distribution detection.
Category: Artificial Intelligence
[119] viXra:2202.0106 [pdf] replaced on 2022-06-04 10:23:09
Authors: Ait-Taleb Nabil
Comments: 26 Pages.
In this paper, we will present the BIC score expressed as a function of the Bayesian network's entropy. We will then use this BIC score to learn a Bayesian network from an example data frame.
Category: Artificial Intelligence
[118] viXra:2202.0106 [pdf] replaced on 2022-05-18 20:56:22
Authors: Ait-Taleb Nabil
Comments: 26 Pages.
In this paper, we will present the BIC score expressed as a function of the Bayesian network's entropy. We will then use this BIC score to learn a Bayesian network from an example data frame.
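For a concrete sense of the BIC score, a minimal univariate sketch (BIC = k·ln n − 2·ln L, with k = 2 parameters for a Gaussian; the paper formulates the score over a Bayesian network's entropy, so this shows only the basic ingredient):

```python
import numpy as np

def gaussian_bic(x):
    """BIC of a maximum-likelihood univariate Gaussian fit to sample x."""
    n = len(x)
    var = x.var()  # maximum-likelihood variance estimate
    log_likelihood = -0.5 * n * (np.log(2.0 * np.pi * var) + 1.0)
    k = 2  # parameters: mean and variance
    return k * np.log(n) - 2.0 * log_likelihood

rng = np.random.default_rng(1)
score = gaussian_bic(rng.normal(size=500))
```

Model selection then picks the structure with the lowest BIC, trading likelihood against the parameter-count penalty.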
Category: Artificial Intelligence
[117] viXra:2201.0144 [pdf] replaced on 2023-02-09 18:52:37
Authors: Dimiter Dobrev
Comments: 92 Pages.
Artificial Intelligence — What is it, how can we do it and what shall we do once we do it? This is a PhD thesis.
Category: Artificial Intelligence
[116] viXra:2201.0144 [pdf] replaced on 2022-11-05 01:55:34
Authors: Dimiter Dobrev
Comments: 109 Pages. In Bulgarian
Artificial Intelligence - What is it, how to do it and what will we do after we do it? This is a PhD thesis.
Category: Artificial Intelligence
[115] viXra:2112.0097 [pdf] replaced on 2022-01-18 17:08:15
Authors: Philip Naveen
Comments: 8 Pages. Critical errors fixed, and additional experiments performed
Deep-learning models estimate values using backpropagation. The activation function within hidden layers is a critical component in minimizing loss in deep neural networks. The Rectified Linear Unit (ReLU) has been the dominant activation function for the past decade. Swish and Mish are newer activation functions that have been shown to yield better results than ReLU under specific circumstances. Phish is a novel activation function proposed here. It is a composite function defined as f(x) = x·tanh(GELU(x)), with no discontinuities apparent in the differentiated graph on the domain observed. Generalized networks were constructed using different activation functions, with SoftMax as the output function. Using images from the MNIST and CIFAR-10 databanks, these networks were trained to minimize sparse categorical cross-entropy. A large-scale cross-validation was simulated using stochastic Markov chains to account for the law of large numbers for the probability values. Statistical tests support the research hypothesis that Phish could outperform other activation functions in classification. Future experiments would involve testing Phish in unsupervised learning algorithms and comparing it to more activation functions.
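The proposed composite activation is direct to implement; a sketch using the common tanh approximation of GELU:

```python
import numpy as np

def gelu(x):
    """tanh approximation of the Gaussian Error Linear Unit."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def phish(x):
    """Phish activation: f(x) = x * tanh(GELU(x))."""
    return x * np.tanh(gelu(x))

x = np.linspace(-5.0, 5.0, 11)
y = phish(x)
```

In a network, phish would be applied elementwise to each hidden layer's pre-activations; like Swish and Mish, it is near-identity for large positive inputs and near-zero for large negative inputs.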
Category: Artificial Intelligence
[114] viXra:2112.0095 [pdf] replaced on 2022-02-24 21:03:49
Authors: Long Yu, ZhiCong Luo, Deng Lin, HongZhu Li, HuanYong Liu, YaFeng Deng
Comments: 6 Pages.
Knowledge representation is a classic problem in knowledge graphs. Distance-based models have made great progress. The most significant recent developments in this direction have been RotatE[1] and PairRE[2], which focus on expressing relationships as projections of nodes. The TransX series of models (TransE[3], TransH[4], TransR[5]), however, expresses relationships as translations of nodes. To date, the combination of projection and translation has received scant attention in the research literature. Hence, we propose TripleRE, a method that models relationships by projections and translations. Compared with other knowledge representation models, we achieve the best results on the ogbl-wikikg2 dataset.
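A sketch of a distance-based score combining the two mechanisms named here: per-relation projections of head and tail entities (as in PairRE) plus a relation translation (as in TransE). Names and the exact parameterization are illustrative; the paper's scoring function may differ in detail.

```python
import numpy as np

def triple_score(h, r_head, r_mid, r_tail, t):
    """L1 distance of projected head + translation vs projected tail;
    a lower score means a more plausible triple."""
    return float(np.linalg.norm(h * r_head + r_mid - t * r_tail, ord=1))

rng = np.random.default_rng(2)
dim = 16
h = rng.normal(size=dim)
r_head, r_mid, r_tail = (rng.normal(size=dim) for _ in range(3))
# A tail embedding constructed to satisfy the relation exactly scores ~0.
t_perfect = (h * r_head + r_mid) / r_tail
```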
Category: Artificial Intelligence
[113] viXra:2112.0095 [pdf] replaced on 2021-12-25 21:44:48
Authors: Long Yu, ZhiCong Luo, Deng Lin, HuanYong Liu, YaFeng Deng
Comments: 6 Pages.
Knowledge representation is a classic problem in knowledge graphs. Distance-based models have made great progress. The most significant recent developments in this direction have been RotatE and PairRE, which focus on expressing relationships as projections of nodes. The TransX series of models (TransE, TransH, TransR), however, expresses relationships as translations of nodes. To date, the combination of projection and translation has received scant attention in the research literature. Hence, we propose TripleRE, a method that models relationships by projections and translations. Compared with other knowledge representation models, we achieve the best results on the ogbl-wikikg2 dataset.
Category: Artificial Intelligence
[112] viXra:2111.0170 [pdf] replaced on 2024-05-31 10:52:26
Authors: Victor Senkevich
Comments: 12 Pages. A slightly expanded version...
I believe that AGI (Artificial General Intelligence), unlike current AI models, must operate with meanings / knowledge. This is exactly what distinguishes it from neural-network-based AI. Current successful AI implementations (playing chess, self-driving, face recognition, etc.) in no way operate with knowledge about the objects being processed and do not recognize their meanings / cognitive structure. This is not necessary for them; they demonstrate good results based on pre-training. But for AGI, which imitates human thinking, the ability to operate with knowledge is crucial. Numerous attempts to define the concept of "meaning" have one very significant drawback: all such definitions are not rigorous or formalized, and therefore they cannot be programmed. The procedure of searching for meaning / knowledge should use a formalized determination of its existence and of the possible forms of its perception, which is usually multimodal. For the practical implementation of AGI, it is necessary to develop such "ready-to-code" formalized definitions of the cognitive concepts of "meaning", "knowledge", "intelligence" and others related to them. This article attempts to formalize the definitions of such concepts.
Category: Artificial Intelligence
[111] viXra:2111.0080 [pdf] replaced on 2021-11-24 17:45:45
Authors: Jeongik Cho
Comments: 5 Pages.
In Wasserstein GAN, it is important to regularize the discriminator so that it does not have a large Lipschitz constant. In this paper, I introduce discriminator variance regularization, which regularizes the discriminator of Wasserstein GAN to have a small Lipschitz constant. Discriminator variance regularization simply regularizes the variance of the discriminator's output to be small when the input is the real data distribution or the generated data distribution. Intuitively, a low variance of the discriminator output implies that the discriminator is more likely to have a low Lipschitz constant. Discriminator variance regularization does not explicitly regularize the Lipschitz constant of the discriminator through differentiation of the discriminator, but lowers the probability that the Lipschitz constant of the discriminator is high. Discriminator variance regularization is used in Wasserstein GAN together with R1 regularization, which reduces the oscillation of GAN training. Discriminator variance regularization requires very little additional computation.
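The regularizer itself is simple to state. A minimal numpy sketch (in practice this term would be added to the discriminator loss inside a deep-learning framework; the weight is a hyperparameter and the batches below are made up):

```python
import numpy as np

def discriminator_variance_regularization(d_real, d_fake, weight=1.0):
    """Penalize the variance of the discriminator's outputs on a batch of
    real data and a batch of generated data."""
    return weight * (np.var(d_real) + np.var(d_fake))

d_real = np.array([0.9, 1.1, 1.0])     # spread-out outputs: penalized
d_fake = np.array([-1.0, -1.0, -1.0])  # constant outputs: zero penalty
penalty = discriminator_variance_regularization(d_real, d_fake)
```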
Category: Artificial Intelligence
[110] viXra:2111.0014 [pdf] replaced on 2022-01-11 21:52:43
Authors: Jianqin Zhou, Sichun Yang, Xifeng Wang, Wanquan Liu
Comments: 16 Pages.
Concise granule descriptions for definable granules and approaching descriptions for indefinable granules are challenging and important issues in granular computing. The concept with only common attributes has been intensively studied. To investigate granules with some special needs, we propose a novel type of compound concept in this paper, i.e., the common-and-necessary concept. Based on the definitions of concept-forming operations, the logical formulas are derived for each of the following types of concepts: formal concept, object-induced three-way concept, object-oriented concept and common-and-necessary concept. Furthermore, by utilizing the logical relationship among various concepts, we have derived concise and unified equivalent conditions for definable granules and approaching descriptions for indefinable granules for all four kinds of concepts.
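The concept-forming operations this builds on can be illustrated with the classic derivation operator of formal concept analysis (toy context below; the compound concepts in the paper compose operators like this one):

```python
def common_attributes(objects, context):
    """Derivation operator of formal concept analysis: the attributes
    shared by every object in the given set."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

# Toy formal context: object -> set of attributes it has.
context = {
    "sparrow": {"flies", "feathers"},
    "penguin": {"feathers", "swims"},
    "bat":     {"flies"},
}
shared = common_attributes({"sparrow", "penguin"}, context)
```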
Category: Artificial Intelligence
[109] viXra:2110.0036 [pdf] replaced on 2021-12-30 11:44:46
Authors: Ait-Taleb Nabil
Comments: 29 Pages.
In this paper, we propose a directed dependency graph learned from a continuous data matrix in order to extract the hidden oriented dependencies from this matrix. To each of the dependency graph's nodes, we will assign a random variable as well as a conditioning percentage linking parent and child nodes of the graph. Among all the dependency graphs learned from the continuous data matrix, we will choose the one given by the highest successive conditionings method.
Category: Artificial Intelligence
[108] viXra:2110.0036 [pdf] replaced on 2021-12-23 09:35:56
Authors: Ait-Taleb Nabil
Comments: 29 Pages.
In this paper, we propose a directed dependency graph learned from a continuous data matrix in order to extract the hidden oriented dependencies from this matrix. To each of the dependency graph's nodes, we will assign a random variable as well as a conditioning percentage linking parent and child nodes of the graph. Among all the dependency graphs learned from the continuous data matrix, we will choose the one given by the highest successive conditionings method.
Category: Artificial Intelligence
[107] viXra:2110.0036 [pdf] replaced on 2021-10-20 13:40:03
Authors: Ait-Taleb Nabil
Comments: 29 Pages.
In this paper, we propose a directed dependency graph learned from a continuous data matrix in order to extract the hidden oriented dependencies from this matrix. To each of the dependency graph's nodes, we will assign a random variable as well as a conditioning percentage linking parent and child nodes of the graph. Among all the dependency graphs learned from the continuous data matrix, we will choose the one given by the highest successive conditionings method.
Category: Artificial Intelligence
[106] viXra:2109.0028 [pdf] replaced on 2022-03-30 15:11:58
Authors: Jeongik Cho
Comments: 22 Pages.
The generator of a generative adversarial network (GAN) maps a latent random variable into a data random variable. GAN inversion maps the data random variable back to the latent random variable by inverting the generator of the GAN.
When training the encoder for generator inversion, using the mean squared error causes the encoder not to converge, because there is information loss on the latent random variable in the generator. In other words, it is impossible to train an encoder that inverts the generator as is, because the generator may ignore some information in the latent random variable.
This paper introduces a dynamic latent scale GAN, a method for training a generator that does not lose information from the latent random variable, and an encoder that inverts the generator. When the latent random variable is an i.i.d. (independent and identically distributed) random variable, dynamic latent scale GAN dynamically scales each element of the latent random variable during GAN training to adjust the entropy of the latent random variable. As training progresses, the entropy of the latent random variable decreases until there is no information loss on the latent random variable in the generator. If there is no information loss on the latent random variable in the generator, the encoder can converge to invert the generator.
The scale of the latent random variable depends on the amount of information that the encoder can recover. It can be calculated from the element-wise variance of the predicted latent random variable from the encoder.
Since the scale of latent random variable changes dynamically in dynamic latent scale GAN, the encoder should be trained with a generator during GAN training. The encoder can be integrated with the discriminator, and the loss for the encoder is added to the generator loss for fast training. Also, dynamic latent scale GAN can be used for continuous attribute editing with InterFaceGAN.
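One illustrative reading of the scale computation (not the paper's exact formula): derive a per-element scale from the element-wise variance of the encoder's predicted latent codes, so that elements the encoder recovers well receive a larger share of the latent entropy.

```python
import numpy as np

def latent_scale_from_predicted_variance(z_pred):
    """Per-element scale proportional to the square root of the element-wise
    variance of predicted latent codes, normalized so the mean squared
    scale is 1."""
    var = np.var(z_pred, axis=0)
    return np.sqrt(var / var.sum() * z_pred.shape[1])

rng = np.random.default_rng(3)
# Pretend the encoder recovers two elements well and two barely at all.
z_pred = rng.normal(size=(1000, 4)) * np.array([1.0, 1.0, 0.1, 0.1])
scale = latent_scale_from_predicted_variance(z_pred)
```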
Category: Artificial Intelligence
[105] viXra:2109.0028 [pdf] replaced on 2021-09-16 12:02:21
Authors: Jeongik Cho
Comments: 20 Pages.
The generator of a generative adversarial network (GAN) maps a latent random variable into a data random variable. GAN inversion maps the data random variable back to the latent random variable by inverting the generator of the GAN.
When training the encoder for generator inversion, using the mean squared error causes the encoder not to converge, because there is information loss on the latent random variable in the generator. In other words, it is impossible to train an encoder that inverts the generator as is, because the generator may ignore some information in the latent random variable.
This paper introduces a dynamic latent scale GAN, a method for training a generator that does not lose information from the latent random variable, and an encoder that inverts the generator. When the latent random variable is a normal i.i.d. (independent and identically distributed) random variable, dynamic latent scale GAN dynamically scales each element of the latent random variable during GAN training to adjust the entropy of the latent random variable. As training progresses, the entropy of the latent random variable decreases until there is no information loss on the latent random variable in the generator. If there is no information loss on the latent random variable in the generator, the encoder can converge to invert the generator.
The scale of the latent random variable depends on the amount of information that the encoder can recover. It can be calculated from the element-wise variance of the predicted latent random variable from the encoder.
Since the scale of latent random variable changes dynamically in dynamic latent scale GAN, the encoder should be trained with a generator during GAN training. The encoder can be integrated with the discriminator, and the loss for the encoder is added to the generator loss for fast training.
Category: Artificial Intelligence
[104] viXra:2108.0029 [pdf] replaced on 2021-12-28 16:57:48
Authors: Ait-Taleb Nabil
Comments: 34 Pages.
In this paper, we propose a learning algorithm for a continuous data matrix based on entropy absorption of a Bayesian network. This method consists in losing a little bit of likelihood compared to a chain rule's best likelihood, in order to get a good idea of the higher conditionings that are taking place between the Bayesian network's nodes. We present the known results related to information theory, the multidimensional Gaussian probability, and the AIC and BIC scores for continuous data matrix learning from a Bayesian network, and we show the entropy absorption algorithm, using the Kullback-Leibler divergence, with an example of a continuous data matrix.
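The Kullback-Leibler divergence used by such algorithms has a closed form in the Gaussian setting; the univariate case is a useful reference point (the multidimensional version in the paper generalizes this with covariance matrices):

```python
import numpy as np

def gaussian_kl(mu0, var0, mu1, var1):
    """KL divergence D( N(mu0, var0) || N(mu1, var1) ) for univariate Gaussians."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

kl_same = gaussian_kl(0.0, 1.0, 0.0, 1.0)     # identical distributions
kl_shifted = gaussian_kl(0.0, 1.0, 2.0, 1.0)  # mean shifted by 2
```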
Category: Artificial Intelligence
[103] viXra:2108.0029 [pdf] replaced on 2021-09-16 10:28:57
Authors: Ait-Taleb Nabil
Comments: 34 Pages.
In this paper, we propose a learning algorithm for a continuous data matrix based on entropy absorption of a Bayesian network. This method consists in losing a little bit of likelihood compared to a chain rule's best likelihood, in order to get a good idea of the higher conditionings that are taking place between the Bayesian network's nodes. We present the known results related to information theory, the multidimensional Gaussian probability, and the AIC and BIC scores for continuous data matrix learning from a Bayesian network, and we show the entropy absorption algorithm, using the Kullback-Leibler divergence, with an example of a continuous data matrix.
Category: Artificial Intelligence
[102] viXra:2108.0029 [pdf] replaced on 2021-08-22 09:19:01
Authors: Ait-Taleb Nabil
Comments: 33 Pages.
In this article, we propose a learning algorithm for a continuous data matrix based on entropy absorption of a Bayesian network. This method consists in losing a little bit of likelihood compared to a chain rule's best likelihood, in order to get a good idea of the higher conditionings that are taking place between the Bayesian network's nodes. We present the known results related to information theory, the multidimensional Gaussian probability, and the AIC and BIC scores for continuous data matrix learning from a Bayesian network, and we show the entropy absorption algorithm, using the Kullback-Leibler divergence, with an example of a continuous data matrix.
Category: Artificial Intelligence
[101] viXra:2106.0084 [pdf] replaced on 2021-06-17 18:25:12
Authors: Souvik Sengupta
Comments: 6 Pages.
One year after the start of the COVID-19 pandemic in India, the country is now seeing a steady decay in the number of daily new cases and active cases. Although the vaccination process is about to start in mid-January 2021, it will not affect the number of daily cases for at least the next three to four months, for obvious reasons such as phase-wise implementation and the six to eight weeks required from the first dose to develop immunity. Therefore, the prime question is now: where will we be at the end of the first quarter of 2021, and what will the number of new cases and active cases be before the vaccination immunity starts working? This paper analyzes the growth and decay pattern of Indian COVID-19 cases with the help of SEIR epidemiological modeling, ARIMA statistical modeling, and time series analysis by LSTM. The models learn the parameter and hyper-parameter values that are best suited to describing the pattern of the COVID-19 pandemic in India, and then try to predict the numbers for India by the end of March 2021. It is forecast that the number of new cases would come down to near 5,000 per day, active cases to near 40,000, and the total number of infected may reach 11.1 million if the current pattern is followed.
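The SEIR component of such modeling can be sketched with a simple Euler integration of the compartmental equations (parameter values below are illustrative, not the fitted Indian values):

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt=1.0):
    """One Euler step of the SEIR model on population fractions:
    beta = transmission rate, sigma = incubation rate, gamma = recovery rate."""
    new_infections = beta * s * i
    s_next = s + dt * (-new_infections)
    e_next = e + dt * (new_infections - sigma * e)
    i_next = i + dt * (sigma * e - gamma * i)
    r_next = r + dt * (gamma * i)
    return s_next, e_next, i_next, r_next

s, e, i, r = 0.99, 0.005, 0.005, 0.0
for _ in range(100):  # simulate 100 days
    s, e, i, r = seir_step(s, e, i, r, beta=0.3, sigma=0.2, gamma=0.1)
```

Fitting beta, sigma and gamma to observed case counts is what turns this sketch into a forecasting model.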
Category: Artificial Intelligence
[100] viXra:2103.0194 [pdf] replaced on 2021-04-01 01:50:20
Authors: Tong Geng, Ang Li, Tianqi Wang, Chunshu Wu, Yanfei Li, Antonino Tumeo, Shuai Che, Steve Reinhardt, Martin Herbordt
Comments: 13 Pages.
In this paper, we propose an architecture design called Ultra-Workload-Balanced-GCN (UWB-GCN) to accelerate graph convolutional network inference. To tackle the major performance bottleneck of workload imbalance, we propose two techniques: dynamic local sharing and dynamic remote switching, both of which rely on hardware flexibility to achieve performance auto-tuning with negligible area or delay overhead. Specifically, UWB-GCN is able to effectively profile the sparse graph pattern while continuously adjusting the workload distribution among parallel processing elements (PEs). After converging, the ideal configuration is reused for the remaining iterations. To the best of our knowledge, this is the first accelerator design targeted at GCNs and the first work that auto-tunes workload balance in an accelerator at runtime through hardware, rather than software, approaches. Our methods can achieve near-ideal workload balance in processing sparse matrices. Experimental results show that UWB-GCN can finish the inference of the Nell graph (66K vertices, 266K edges) in 8.1 ms, corresponding to speedups of 199x, 16x, and 7.5x over the CPU, the GPU, and the baseline GCN design without workload rebalancing, respectively.
Category: Artificial Intelligence
[99] viXra:2101.0122 [pdf] replaced on 2021-07-14 12:55:15
Authors: Ayoola Olafenwa
Comments: 8 Pages.
PixelLib is a library created to allow easy implementation of object segmentation in real-life applications. In this paper we discuss in detail how PixelLib makes it possible for developers to implement semantic segmentation, instance segmentation, extraction of objects, and background editing in images and videos with great simplicity.
Category: Artificial Intelligence
[98] viXra:2012.0023 [pdf] replaced on 2020-12-16 03:11:21
Authors: Saty Raghavachary, Lurong Lei
Comments: 9 Pages.
Computational modeling of natural cognition is a crucial step towards achieving the grand goal of human-level computational intelligence. Successful ideas from existing models, and possibly newer ones, could be assembled to create a unified computational framework (eg. the Standard Model of the Mind, which attempts to unify three leading cognitive architectures) - this would be of great use in AI, robotics, neuroscience and cognitive science. This short position paper proposes the following: a VR-based system provides the most expedient, scalable and visually verifiable way to implement, test and refine a cognitive mind model (which would always be embodied in a character in a virtual world). Such a setup is discussed in the paper, including advantages and drawbacks over alternative implementations.
Category: Artificial Intelligence
[97] viXra:2010.0220 [pdf] replaced on 2020-11-01 02:24:46
Authors: Md Monzur Morshed
Comments: 11 Pages. This is a research proposal.
The internet can broadly be divided into three parts: surface, deep, and dark, of which the latter offers anonymity to its users and hosts [1]. The deep web refers to an encrypted network that is not detected by search engines like Google. Users must use Tor to visit sites on the dark web [2]. Ninety-six percent of the web is considered deep web because it is hidden. It is like an iceberg, in that people can see only a small portion above the surface, while the largest part is hidden under the sea [3, 4, 5]. Basic methods of graph theory and data mining that deal with social network analysis can be comprehensively used to understand and study the deep web and detect cyber threats [6]. Since the internet is rapidly evolving and it is nearly impossible to censor the deep web, there is a need to develop standard mechanisms and tools to monitor it. In this proposed study, our focus will be to develop a standard research mechanism for understanding the deep web, which will support researchers, academicians and law enforcement agencies in strengthening social stability and ensuring peace locally and globally.
Category: Artificial Intelligence
[96] viXra:2007.0085 [pdf] replaced on 2020-12-29 20:10:57
Authors: Zeyue Xia, Mohamad Nadim Barakat, Serri Matula, Zijun Hui, John Stavrakakis
Comments: 7 Pages. Computer Vision
In vivo confocal microscopy allows scientists to better understand eye health and systemic diseases. Microneuromas could play a role; however, monitoring their growth from a mosaic of images is error-prone and time-consuming. We used automated image stitching as a solution, focusing on the accuracy and computational speed of three different feature detection algorithms: SIFT, SURF, and ORB. The results illustrated that SURF was computationally efficient with our data. Future work is to create a global solution that can replace the need for manual image stitching in this application.
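ORB, one of the compared detectors, produces binary descriptors matched by Hamming distance; the matching step at the heart of a stitching pipeline can be sketched in pure numpy (in practice OpenCV's detectors and matchers would be used; the descriptors here are random stand-ins):

```python
import numpy as np

def hamming_match(queries, database):
    """For each binary descriptor in `queries` (rows of uint8 bytes), return
    the index of its nearest neighbour in `database` by Hamming distance."""
    xor = queries[:, None, :] ^ database[None, :, :]
    dist = np.unpackbits(xor, axis=2).sum(axis=2)  # popcount per pair
    return dist.argmin(axis=1)

rng = np.random.default_rng(4)
database = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)  # 5 ORB-like 256-bit descriptors
queries = database[[2, 0, 4]]  # queries identical to known database entries
matches = hamming_match(queries, database)
```

After matching, a homography is typically estimated from the matched keypoints (e.g. with RANSAC) and the images are warped into a common mosaic.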
Category: Artificial Intelligence
[95] viXra:2007.0085 [pdf] replaced on 2020-07-14 11:02:40
Authors: Zeyue Xia, Mohamad Nadim Barakat, Serri Matula, Zijun Hui, John Stravakrakis
Comments: 7 Pages. Computer Vision
In vivo confocal microscopy allows scientists to better understand eye health and systemic diseases. Microneuromas could play a role; however, monitoring their growth from a mosaic of images is error-prone and time-consuming. We used automated image stitching as a solution, focusing on the accuracy and computational speed of three different feature detection algorithms: SIFT, SURF, and ORB. The results illustrated that SURF was computationally efficient with our data. Future work is to create a global solution that can replace the need for manual image stitching in this application.
Category: Artificial Intelligence
[94] viXra:2006.0208 [pdf] replaced on 2020-12-10 03:50:56
Authors: Jeongik Cho
Comments: 17 Pages.
Finding a latent code that can generate specific data by inverting a generative model is called latent code recovery (or latent vector recovery). When performing gradient descent based latent recovery, the probability that the recovered latent code was sampled from a latent random variable can be very low. To prevent this, latent regulation losses or element resampling methods have been used in some papers.
In this paper, when the latent random variable is an IID (Independent and Identically Distributed) random variable and performing gradient descent-based latent code recovery, we propose statistical distance latent regulation loss to maximize the probability that the latent code was sampled from the latent random variable. The statistical distance latent regulation loss is the distance between the discrete uniform distribution, assuming each latent code element has the same probability and one-dimensional distribution that each element of the latent random variable follows in common. Since the statistical distance latent regulation loss considers all elements simultaneously, it maximizes the probability that the latent code was sampled from a latent random variable.
Also, we propose the latent distribution goodness of fit test, an additional test that verifies whether the latent code is sampled from the latent random variable. This additional test verifies whether all recovered latent codes’ elements’ distribution follows one-dimensional distribution that each element of the latent random variable follows in common when the latent random variable is an IID random variable. Passing the latent distribution goodness of fit test does not mean that the latent codes are recovered correctly, but when the latent codes are recovered correctly, the latent distribution goodness of fit test should be passed.
Compared with other latent regulation losses or element resampling methods, only latent code recovery using the statistical distance latent regulation loss could recover the correct latent code with high performance in the gradient descent-based latent code recovery.
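For equal-size one-dimensional samples, the Wasserstein-1 distance used in such a statistical distance regulation loss reduces to the mean absolute difference of the sorted samples. A minimal sketch (the latent codes below are synthetic):

```python
import numpy as np

def wasserstein_1d(a, b):
    """Wasserstein-1 distance between two equal-size 1-D samples:
    mean absolute difference after sorting both."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

rng = np.random.default_rng(5)
prior_sample = rng.normal(size=256)  # the distribution each latent element should follow
good_code = rng.normal(size=256)     # latent code consistent with the prior
bad_code = good_code + 3.0           # latent code shifted away from the prior
loss_good = wasserstein_1d(good_code, prior_sample)
loss_bad = wasserstein_1d(bad_code, prior_sample)
```

Used as a regulation loss, this term penalizes recovered latent codes whose element distribution drifts away from the latent prior.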
Category: Artificial Intelligence
[93] viXra:2006.0208 [pdf] replaced on 2020-08-12 22:33:53
Authors: Jeongik Cho
Comments: 18 Pages.
Finding a latent vector that can generate specific data by inverting a generative model is called latent vector recovery (or latent vector projection). When performing gradient descent based latent recovery, the latent vector being recovered may deviate from the train latent distribution. To prevent this, latent regulation loss or element resampling has been used in some papers.
In this paper, we propose a statistical distance latent regulation loss, which is a latent regulation loss that can be used when the generative model is trained with IID (Independent and Identically Distributed) random variables. The statistical distance latent regulation loss is the distance between the distribution followed by train latent random variables and the discrete uniform distribution, assuming that each element of the latent vector has the same probability. Since the statistical distance latent regulation loss considers the correlation between each element of the latent vector, better latent vector recovery is possible.
In addition, in this paper, when evaluating the performance of latent vector recovery, we propose latent distribution goodness of fit test, an additional test that checks whether the distribution of all elements of all recovered latent vectors follows the distribution of the train latent random variable. Passing the latent distribution goodness of fit test does not mean that the latent vector recovery is properly performed, but when the latent recovery is properly performed, the latent distribution goodness of fit test must be passed.
In this paper, the performance of the statistical distance latent regulation loss was compared with other latent regulation losses and element resampling methods.
In conclusion, the performance of the statistical distance latent regulation loss using Wasserstein distance or Energy distance was the best.
Category: Artificial Intelligence
[92] viXra:2006.0208 [pdf] replaced on 2020-07-28 13:22:36
Authors: Jeongik Cho
Comments: 9 Pages.
Finding a latent vector that can generate specific data by inverting the generative model is called latent vector recovery (or latent vector projection). When performing gradient descent based latent recovery, the latent vector being recovered may escape the train latent distribution. To prevent this, some papers have used latent regulation loss or resampling.
In this paper, assuming that the generative model is trained with IID (Independent and Identically Distributed) random variables, I propose statistical distance latent regulation loss, which uses the distance between distribution followed by train latent random variables, and discrete uniform distribution, which assumes that each element of the latent vector has the same probability, as a latent regulation loss. The statistical distance latent regulation loss considers the correlation between each element of the latent vector, so better latent vector recovery is possible.
In this paper, I compared the performances of latent regulation losses and resampling methods of other papers as well as statistical distance latent regulation losses using several statistical distances.
In conclusion, the performances of Wasserstein distance latent regulation loss and Energy distance latent regulation loss were the best.
Also, in this paper, when performing latent vector recovery with a generator trained with an IID random variable, I propose the latent distribution goodness of fit test, an additional test to check whether all elements of all recovered latent vectors follow the distribution of the train latent random variable.
Category: Artificial Intelligence
[91] viXra:2006.0208 [pdf] replaced on 2020-07-22 10:52:23
Authors: Jeongik Cho
Comments: 8 Pages.
Finding a latent vector that can generate specific data using a generative model is called latent vector recovery. When performing gradient-descent-based latent recovery, the latent vector being recovered may escape the training latent distribution. To prevent this, some papers have used a latent regulation loss or resampling.
In this paper, assuming that the generative model is trained with IID (independent and identically distributed) random variables, I propose a statistical distance latent regulation loss that treats the training latent distribution as a one-dimensional distribution, treats the latent vector as a sample distribution, and uses the distance between the two distributions as the regulation loss. Because the statistical distance latent regulation loss takes the correlation between the elements of the latent vector into account, better latent vector recovery is possible.
In addition, I compared the performance of the latent regulation losses and resampling methods of other papers with that of statistical distance latent regulation losses based on several statistical distances.
In conclusion, the Bhattacharyya latent regulation loss performed best when the training latent vector followed the normal distribution, and the Lukaszyk-Karmowski regulation loss performed best otherwise.
Category: Artificial Intelligence
[90] viXra:2006.0208 [pdf] replaced on 2020-07-15 07:54:53
Authors: Jeongik Cho
Comments: 5 Pages.
Finding a latent vector that can generate specific data using a generative model is called latent vector recovery. When performing gradient-descent-based latent recovery, the latent vector being recovered may escape the training latent distribution. To prevent this, a latent regulation loss has been used in many papers. In this paper, assuming that the generative model is trained with IID (independent and identically distributed) random variables, I propose a Wasserstein latent regulation loss to improve the performance of latent recovery. The proposed Wasserstein latent regulation loss is the Wasserstein distance between a sample distribution drawn from the training latent distribution and the latent vector being recovered. This paper compares the latent regulation losses of several papers, including the proposed Wasserstein latent regulation loss. In conclusion, the Wasserstein regulation loss and the log normal density function proposed in [1] showed the best performance.
Category: Artificial Intelligence
[89] viXra:2006.0208 [pdf] replaced on 2020-06-24 23:04:36
Authors: Jeongik Cho
Comments: 3 Pages.
Given a pre-trained generative model, the process of finding the latent vector that produces the data closest to the input data is called latent vector recovery. It takes the difference between the input data and the data generated from the latent vector as a reconstruction loss and repeatedly performs gradient descent on the latent vector to find the optimal one. In this paper, I propose a method to find a better latent vector by adding a latent restriction loss to the reconstruction loss during latent vector recovery. The latent restriction loss makes the latent vector follow the distribution of the latent vectors used when training the generative model: the distance between the distribution of the latent vectors used in training and the latent vector being recovered becomes the latent restriction loss.
Category: Artificial Intelligence
[88] viXra:2006.0079 [pdf] replaced on 2021-02-18 16:44:18
Authors: Al-Akhir Nayan, Md. Obaidur Rahman, Ahamad Nokib Mozumder, Mohammod Abul Kashem
Comments: 13 Pages. Published in Multidisciplinary Journal of European University of Bangladesh, 5(1), 2020 [Corrections made by viXra Admin to conform with the guidelines of viXra.org]
Robotics and automation are widely used in industry because of their simplicity and their ability to be adapted to our requirements. This project aims to assemble an automatic vehicle that uses GPS and depends on a computer to generate its path coordinates. A GPS module is used to collect GPS data. A mobile camera detects obstacles, and a machine learning algorithm performs real-time object detection and helps the vehicle avoid them. The automobile uses electric motors to spin the wheels and has full control of the throttle, steering, and braking. An Arduino device pilots the vehicle following instructions generated by the computer. Traffic has increased enormously, and the excessive number of vehicles leads to a large number of accidents every day; driver error is also a great difficulty. The ultimate goal of this work is to minimize the possibility of accidents and to ensure the safety of the passengers. The vehicle will thus be useful for blind and handicapped people, but serving the military is the main target, so that they can benefit in times of danger. The motorized vehicle includes sensors to observe its surroundings and can also be operated manually by a human.
Category: Artificial Intelligence
[87] viXra:2004.0611 [pdf] replaced on 2021-11-24 05:55:15
Authors: Dimiter Dobrev
Comments: 62 Pages. Bulgarian language
We will reduce the task of creating AI to the task of finding an appropriate language for describing the world. This will not be a programming language, because programming languages describe only computable functions, while our language will describe a somewhat broader class of functions. Another specificity of this language is that the description will consist of separate modules. This will enable us to search for the description of the world automatically, discovering it module by module. Our approach to the creation of this new language will be to start with a particular world and write the description of that particular world. The point is that a language which can describe this particular world will be appropriate for describing any world.
Category: Artificial Intelligence
[86] viXra:2004.0611 [pdf] replaced on 2020-10-12 12:19:27
Authors: Dimiter Dobrev
Comments: 38 Pages.
We will reduce the task of creating AI to the task of finding an appropriate language for describing the world. This will not be a programming language, because programming languages describe only computable functions, while our language will describe a somewhat broader class of functions. Another specificity of this language is that the description will consist of separate modules. This will enable us to search for the description of the world automatically, discovering it module by module. Our approach to the creation of this new language will be to start with a particular world and write the description of that particular world. The point is that a language which can describe this particular world will be appropriate for describing any world.
Category: Artificial Intelligence
[85] viXra:2004.0611 [pdf] replaced on 2020-06-14 07:46:50
Authors: Dimiter Dobrev
Comments: 38 Pages. Bulgarian language
We will reduce the task of creating AI to the task of finding the right language for describing the world. This language will not be a programming language, because programming languages describe only computable functions, while this language will describe a slightly wider class of functions. Another feature of this language is that the description can be divided into separate modules. This will allow us to search for the description of the world automatically, detecting it module by module. Our approach to creating this new language will be to start from one particular world and write a description of that particular world. Our idea is that a language that can describe this particular world will be appropriate for describing an arbitrary world.
Category: Artificial Intelligence
[84] viXra:2004.0371 [pdf] replaced on 2020-05-07 20:44:02
Authors: Jeongik Cho
Comments: 12 Pages.
A traditional deep neural network classifier receives input data and passes it through hidden layers to output a predicted label. In this paper, I propose the Inverted Conditional Generator Classifier, which uses a conditional generator to find the pair of condition vector and latent vector that can generate the data closest to the input data, and thereby predicts the label of the input data.
A conditional generator is a generative model that receives a latent vector and a condition vector and generates data with the desired conditions. The decoder of a conditional VAE [1] or the generator of a conditional GAN [2] can serve as the conditional generator.
The Inverted Conditional Generator Classifier uses a trained conditional generator as-is.
For each condition, it repeatedly performs gradient descent, taking the latent vector as the variable and the model parameters as constants, to find the data closest to the input data. Then, among the data generated for each condition, the condition vector of the data closest to the input data becomes the predicted label.
The Inverted Conditional Generator Classifier is slow at prediction because prediction is based on gradient descent, but it has high accuracy and is very robust against adversarial attacks [3] such as noise.
In addition, the Inverted Conditional Generator Classifier can measure the degree of out-of-class through the difference between the nearest generated data and the input data. A high degree of out-of-class means either that the input data lies apart from the cluster of each class or that the classifier has little confidence in its prediction. Through this, the Inverted Conditional Generator Classifier can classify the input data as out-of-class or defer classification due to a lack of confidence.
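The prediction procedure above can be sketched under heavy simplifying assumptions: the "conditional generator" below is a hand-made linear map with class-specific offsets (not a trained network), and a quadratic latent regulation term keeps the recovered latent small so that each class cannot reconstruct arbitrary inputs. The condition whose recovered generation lies closest to the input is the predicted label.

```python
import numpy as np

# Toy conditional "generator": a class-specific offset plus a linear map of
# the latent vector, standing in for a trained conditional GAN/VAE decoder.
mu = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])   # one offset per class
A = 0.5 * np.eye(2)

def generate(c, z):
    return mu[c] + A @ z

def recover_loss(c, x, steps=500, lr=0.1, lam=1.0):
    # Gradient descent on the latent vector only; generator weights are fixed.
    # lam * ||z||^2 is a simple latent regulation term.
    z = np.zeros(2)
    for _ in range(steps):
        grad = 2 * A.T @ (generate(c, z) - x) + 2 * lam * z
        z -= lr * grad
    return np.sum((generate(c, z) - x) ** 2)

def classify(x):
    # Predicted label = condition whose recovered data is closest to x.
    return int(np.argmin([recover_loss(c, x) for c in range(len(mu))]))

print(classify(np.array([4.8, 0.3])))   # → 1
```

Because every prediction runs an inner optimization loop per class, inference is slow, which matches the trade-off the abstract describes.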
Category: Artificial Intelligence
[83] viXra:2004.0371 [pdf] replaced on 2020-04-25 10:46:47
Authors: Jeongik Cho
Comments: 12 Pages.
A traditional deep neural network classifier receives input data and passes it through hidden layers to output a predicted label.
A conditional generator such as a conditional VAE [1] or conditional GAN [2] receives a latent vector and a condition vector and generates data with the desired conditions.
In this paper, I propose the Inverted Conditional Generator Classifier, which uses a conditional generator to find the pair of condition vector and latent vector that can generate the data closest to the input data, and thereby predicts the label of the input data. The Inverted Conditional Generator Classifier uses a trained conditional generator as-is.
For each condition, it repeatedly performs gradient descent, taking the latent vector as the variable and the model parameters as constants, to find the data closest to the input data. Then, among the data generated for each condition, the condition vector of the data closest to the input data becomes the predicted label.
The Inverted Conditional Generator Classifier is slow at prediction because prediction is based on gradient descent, but it has high accuracy and is very robust against adversarial attacks [3] such as noise. In addition, it can measure the degree of out-of-class through the difference between the nearest generated data and the input data.
Category: Artificial Intelligence
[82] viXra:2004.0371 [pdf] replaced on 2020-04-22 23:49:22
Authors: Jeongik Cho
Comments: 10 Pages.
A traditional deep neural network classifier receives input data and passes it through hidden layers to output a predicted label.
A conditional generator such as a conditional VAE [1] or conditional GAN [2] receives a latent vector and a condition vector and generates data with the desired conditions.
In this paper, I propose the Inverted Conditional Generator Classifier, which uses a conditional generator to find the pair of condition vector and latent vector that can generate the data closest to the input data, and thereby predicts the label of the input data.
The Inverted Conditional Generator Classifier uses a trained conditional generator as-is.
For each condition, it repeatedly performs gradient descent, taking the latent vector as the variable and the model parameters as constants, to find the data closest to the input data. Then, among the data generated for each condition, the condition vector of the data closest to the input data becomes the predicted label.
The Inverted Conditional Generator Classifier is slow at prediction because prediction is based on gradient descent, but it has high accuracy and is very robust against adversarial attacks [3] such as noise. In particular, it is not vulnerable to gradient-descent-based white-box attacks that assume a traditional deep neural network classifier, and it is also expected to defend well against black-box attacks that assume one.
In addition, the Inverted Conditional Generator Classifier can measure the degree of out-of-class through the difference between the nearest generated data and the input data.
Category: Artificial Intelligence
[81] viXra:2004.0371 [pdf] replaced on 2020-04-17 10:51:58
Authors: Jeongik Cho
Comments: 9 Pages.
In the field of deep learning, a traditional classifier receives input data and passes it through hidden layers to output a predicted label. Conditional generators such as the conditional VAE [1] and conditional GAN [2] receive a latent vector and a condition vector and generate data with the desired conditions.
In this paper, I propose the Inverted Generator Classifier, which uses a conditional generator to find the pair of condition vector and latent vector that generates the data closest to the input data, and thereby predicts the label of the input data. The Inverted Generator Classifier uses a trained conditional generator as-is. For each condition, it repeatedly performs gradient descent, taking the latent vector as the variable and the model parameters as constants, to find the data closest to the input data. Then, among the data generated for each condition, the condition vector of the data closest to the input data becomes the predicted label.
The Inverted Generator Classifier is slow at prediction because prediction is based on gradient descent, but it has high accuracy and is very robust against adversarial attacks [3] such as noise. It is also not subject to gradient-descent-based white-box attacks such as FGSM [4].
Category: Artificial Intelligence
[80] viXra:2004.0222 [pdf] replaced on 2021-03-15 16:08:56
Authors: Xuezhe Ma, Xiang Kong, Shanghang Zhang, Eduard Hovy
Comments: 23 Pages. Published in ICLR 2021
In this work, we propose a new generative model that is capable of automatically decoupling global and local representations of images in an entirely unsupervised setting, by embedding a generative flow in the VAE framework to model the decoder. Specifically, the proposed model utilizes the variational auto-encoding framework to learn a (low-dimensional) vector of latent variables to capture the global information of an image, which is fed as a conditional input to a flow-based invertible decoder with an architecture borrowed from the style transfer literature. Experimental results on standard image benchmarks demonstrate the effectiveness of our model in terms of density estimation, image generation and unsupervised representation learning. Importantly, this work demonstrates that with only architectural inductive biases, a generative model with a likelihood-based objective is capable of learning decoupled representations, requiring no explicit supervision. The code for our model is available at https://github.com/XuezheMax/wolf.
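The key architectural ingredient described here — a flow step whose parameters are conditioned on the global latent variable — can be illustrated with a toy conditional affine coupling layer. This is a hypothetical stand-in for the model's actual decoder (see the linked repository): the point is that the transform remains exactly invertible in the data for any value of the conditioning latent z.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy conditioner: maps (first half of x, global latent z) to scale/shift.
Wc = rng.normal(size=(4, 4)) * 0.5          # input: x1 (2) + z (2); output: s, t

def coupling_forward(x, z):
    # One conditional affine coupling step: x1 passes through unchanged,
    # x2 is scaled and shifted by a function of (x1, z) -- invertible in x.
    x1, x2 = x[:2], x[2:]
    h = np.tanh(Wc @ np.concatenate([x1, z]))
    s, t = h[:2], h[2:]
    return np.concatenate([x1, x2 * np.exp(s) + t])

def coupling_inverse(y, z):
    # Exact inverse, reusing the same conditioner on the untouched half.
    y1, y2 = y[:2], y[2:]
    h = np.tanh(Wc @ np.concatenate([y1, z]))
    s, t = h[:2], h[2:]
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

x = rng.normal(size=4)                       # "local" data variables
z = rng.normal(size=2)                       # global latent from the VAE encoder
y = coupling_forward(x, z)
print(np.allclose(coupling_inverse(y, z), x))   # → True
```

Because z enters only through the conditioner, the decoder stays a valid normalizing flow while the global latent steers its behavior.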
Category: Artificial Intelligence
[79] viXra:2004.0222 [pdf] replaced on 2020-04-11 22:06:49
Authors: Xuezhe Ma, Xiang Kong, Shanghang Zhang, Eduard Hovy
Comments: 22 Pages.
In this work, we propose a new generative model that is capable of automatically decoupling global and local representations of images in an entirely unsupervised setting. The proposed model utilizes the variational auto-encoding framework to learn a (low-dimensional) vector of latent variables to capture the global information of an image, which is fed as a conditional input to a flow-based invertible decoder with an architecture borrowed from the style transfer literature. Experimental results on standard image benchmarks demonstrate the effectiveness of our model in terms of density estimation, image generation and unsupervised representation learning. Importantly, this work demonstrates that with only architectural inductive biases, a generative model with a plain log-likelihood objective is capable of learning decoupled representations, requiring no explicit supervision. The code for our model is available at https://github.com/XuezheMax/wolf.
Category: Artificial Intelligence
[78] viXra:2003.0484 [pdf] replaced on 2023-03-17 02:54:10
Authors: Qing Tian, Guangjun Tian
Comments: 4 Pages. In Chinese
This short manuscript first describes the effect of neural networks from the perspective of data-space transformation: a network transforms data from a complicated raw space into an easily (e.g., linearly) separable space. We use a simple paper-wrapping example to illustrate this point. The manuscript also discusses some similarities between neural networks and ensemble classification.
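The space-transformation view can be illustrated with the classic XOR problem: a hand-picked hidden layer (weights chosen for illustration, not learned) maps the raw inputs, which are not linearly separable, into a feature space where a single linear threshold separates the classes.

```python
import numpy as np

# XOR data: not linearly separable in the raw input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# A hand-crafted hidden layer (thresholded linear units): it transforms the
# inputs into a space where the two classes ARE linearly separable.
W = np.array([[1.0, -1.0], [-1.0, 1.0]])
b = np.array([-0.5, -0.5])
H = (X @ W + b > 0).astype(float)     # hidden activations

# In the transformed space a single linear threshold classifies perfectly.
pred = (H.sum(axis=1) > 0).astype(int)
print((pred == y).all())              # → True
```

The hidden layer here plays exactly the role the manuscript describes: it changes the geometry of the data so that a trivial (linear) classifier suffices.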
Category: Artificial Intelligence
[77] viXra:2003.0484 [pdf] replaced on 2021-11-19 17:17:28
Authors: Qing Tian, Guangjun Tian
Comments: 4 Pages. In Chinese
This short manuscript first describes the effect of neural networks from the perspective of data-space transformation: a network transforms data from a complicated raw space into an easily (e.g., linearly) separable space. We use a simple paper-wrapping example to illustrate this point. The manuscript also discusses some similarities between neural networks and ensemble classification.
Category: Artificial Intelligence