Artificial Intelligence

2307 Submissions

[8] viXra:2307.0146 [pdf] submitted on 2023-07-27 14:20:08

Structural Embeddings of Tools for Large Language Models

Authors: Eren Unlu
Comments: 5 Pages.

It is evident that the current state of Large Language Models (LLMs) necessitates the incorporation of external tools. Their lack of straightforward algebraic and logical reasoning is well documented and has prompted researchers to develop frameworks that allow LLMs to operate via external tools. The ontological nature of tool utilization for a specific task can be well formulated with a Directed Acyclic Graph (DAG). The central aim of the paper is to highlight the importance of graph-based approaches to LLM-tool interaction in the near future. We propose an exemplary framework to guide the orchestration of exponentially increasing numbers of external tools with LLMs, where the objectives and functionalities of tools are graph-encoded hierarchically. Assuming that the textual segments of a Chain-of-Thought (CoT) can be regarded as tools as defined here, the graph-based framework can pave new avenues in that direction as well.
Category: Artificial Intelligence
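The DAG formulation the abstract describes can be sketched minimally in code. The tool names and dependencies below are illustrative assumptions, not the authors' framework; the point is only that a topological order of the graph yields a valid tool invocation sequence for the LLM.

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical tool dependency graph: each key maps to the tools it depends on.
tool_dag = {
    "summarize_report": {"fetch_document", "run_calculator"},
    "run_calculator": {"parse_query"},
    "fetch_document": {"parse_query"},
    "parse_query": set(),
}

def execution_order(dag):
    """Return one valid order in which the tools could be invoked."""
    return list(TopologicalSorter(dag).static_order())

order = execution_order(tool_dag)
print(order)
```

Any hierarchical encoding of tool objectives would layer on top of such a graph; the ordering itself guarantees each tool runs only after its prerequisites.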

[7] viXra:2307.0134 [pdf] replaced on 2023-08-14 07:32:41

The Human Optimization Method (PhD)

Authors: Satish Gajawada
Comments: 5 Pages.

This paper is dedicated to everyone who is interested in Artificial Intelligence. In the past, researchers have explored the behavior of chromosomes, birds, fish, ants, bacteria, bees, and so on to create excellent optimization methods for solving complex optimization problems. The author proposes the Human Optimization method in this paper. Humans have progressed remarkably, and they help each other; there are many strengths in human behavior. In fact, all optimization algorithms based on other beings were created by humans, so there is much to explore in human behavior for creating new optimization algorithms. Artificial fish, birds, ants, bees, and so on have solved optimization problems; similarly, an optimization method based on humans is expected to solve complex problems. This paper sets the trend for future optimization algorithms based on humans.
Category: Artificial Intelligence

[6] viXra:2307.0121 [pdf] replaced on 2024-03-20 22:59:08

Training Self-supervised Class-conditional GANs with Classifier Gradient Penalty and Dynamic Prior

Authors: Jeongik Cho
Comments: 16 Pages.

Class-conditional GAN generates class-conditional data from continuous latent distribution and categorical distribution. Typically, a class-conditional GAN can be trained only when the label, which is the conditional categorical distribution of the target data, is given. In this paper, we propose a novel GAN that allows the model to perform self-supervised class-conditional data generation and clustering without knowing labels, optimal prior categorical probability, or metric function. The proposed method uses a discriminator, a classifier, and a generator. The classifier is trained with cross-entropy loss to predict the conditional vector of the fake data. Also, the conditional vector of real data predicted by the classifier is used to train the class-conditional GAN. When training class-conditional GAN with this classifier, the decision boundary of the classifier falls to the local optima where the density of the data is minimized. The proposed method adds a classifier gradient penalty loss to the classifier loss to prevent the classifier's decision boundary from falling into a narrow range of local optima. It regulates the gradient of the classifier's output to prevent the gradient near the decision boundary from becoming too large. As the classifier gradient penalty loss weight increases, the decision boundary falls into a wider range of local optima. This means that the sensitivity of each class can be adjusted by the weight of the gradient penalty loss. Additionally, the proposed method updates the prior categorical probability with the categorical probability of real data predicted by the classifier. As training progresses, the entropy of the prior categorical probability decreases and converges according to the classifier gradient penalty loss weight.
Category: Artificial Intelligence
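The core idea of penalizing the classifier's input gradient can be illustrated on a toy 1-D logistic classifier. This is a minimal sketch, not the paper's full GAN: the closed-form gradient of the sigmoid stands in for automatic differentiation, and the penalty shows why a sharp decision boundary (large weight) incurs a larger loss than a smooth one.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def classifier(x, w, b):
    """Toy 1-D binary classifier: p(class = 1 | x)."""
    return sigmoid(w * x + b)

def gradient_penalty(xs, w, b):
    """Mean squared gradient of the classifier output w.r.t. its input.
    d/dx sigmoid(w*x + b) = w * s * (1 - s), so a steep decision
    boundary (large |w|) incurs a large penalty."""
    total = 0.0
    for x in xs:
        s = classifier(x, w, b)
        grad = w * s * (1.0 - s)
        total += grad * grad
    return total / len(xs)

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
sharp = gradient_penalty(xs, w=8.0, b=0.0)   # steep boundary
smooth = gradient_penalty(xs, w=1.0, b=0.0)  # gentle boundary
print(sharp > smooth)  # prints True
```

Adding such a term to the classifier loss, weighted as the abstract describes, pushes optimization toward the smoother boundary.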

[5] viXra:2307.0097 [pdf] submitted on 2023-07-19 03:24:07

Generative Pre-Trained Transformers, Natural Language Processing, and Artificial Intelligence and Machine Learning (AI/ML) in Software Vulnerability Management: Automations in the Software Bill of Materials (SBOM) and the Vulnerability-Exploitability Exchange

Authors: Petar Radanliev, David De Roure, Omar Santos
Comments: 7 Pages.

One of the most burning topics in cybersecurity in 2023 will undoubtedly be compliance with the Software Bill of Materials. Since the US President issued Executive Order 14028 on Improving the Nation’s Cybersecurity, software developers have prepared bills and transmitted them to vendors, customers, and users, but recipients often do not know what to do with the reports they are getting. In addition, since software developers have identified the value of the Software Bill of Materials, they have been using the reports extensively. This article presents an estimate of 270 million requests per month, just from one popular tool to one vulnerability index. This number is expected to double every year and a half. This simple estimate explains the urgency of automating the process. We propose solutions based on artificial intelligence and machine learning, and we base our tools on the existing FAIR principles (Findable, Accessible, Interoperable, and Reusable). This methodology is supported by case study research and Grounded Theory, for categorising data into axes and for verifying the value of the tools with experts in the field. We showcase how to create and share Vulnerability Exploitability eXchange data, and how to automate the Software Bill of Materials compliance process with AI models and a unified computational framework combining solutions for the following problems: (1) the data utilisation problem, (2) the automation and scaling problem, (3) the naming problem, (4) the alignment problem, (5) the pedigree and provenance problem, and many other problems that are top of mind for many security engineers at present. The uptake of these findings will depend on collaborations with government and industry, and on the availability and ease of use of automated tools.
Category: Artificial Intelligence
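The scaling estimate in the abstract (270 million requests per month, doubling every year and a half) is simple compound growth, which can be made explicit. This sketch only reproduces the stated arithmetic; the function name and parameters are illustrative.

```python
def projected_requests(initial, months, doubling_period=18):
    """Project monthly request volume forward, given the stated
    doubling every year and a half (18 months)."""
    return initial * 2 ** (months / doubling_period)

baseline = 270_000_000  # requests per month, as estimated in the article
print(round(projected_requests(baseline, 18)))  # prints 540000000
print(round(projected_requests(baseline, 36)))  # prints 1080000000
```

Three years out the volume quadruples, which is the kind of trajectory that makes manual SBOM/VEX handling untenable and motivates the proposed automation.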

[4] viXra:2307.0091 [pdf] submitted on 2023-07-17 07:14:00

Efficient Data Storage and Machine Learning

Authors: Mirzakhmet Syzdykov
Comments: 2 Pages.

In this work we present novel research on the efficiency of compression algorithms such as Lempel-Ziv-Welch (LZW) and Aho-Corasick trees. We use them to build a storage layer, effectively a file system, over separate or generalized streams of data. Such streams have not previously been adapted so that big data can be compressed and queried at a fast pace. We show that this is an efficient model for storing arrays of data on the server end of a file system. An efficient algorithm for machine learning on Aho-Corasick trees is also presented, which performs queries in linear time without the additional cost of models such as neural networks, which are very hardware-demanding nowadays. The trie data structure introduced by Turing Award winner Alfred V. Aho and by Margaret J. Corasick retains great potential at the present time and is subjected to extensive research in this work.
Category: Artificial Intelligence
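For readers unfamiliar with LZW, the textbook algorithm the abstract builds on can be shown compactly. This is a standard illustrative implementation, not the authors' storage system: it grows a dictionary of substrings and emits integer codes, and the decompressor rebuilds the same dictionary on the fly.

```python
def lzw_compress(data):
    """Textbook LZW: grow a dictionary of substrings, emit their codes."""
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    current = ""
    codes = []
    for ch in data:
        candidate = current + ch
        if candidate in dictionary:
            current = candidate
        else:
            codes.append(dictionary[current])
            dictionary[candidate] = next_code
            next_code += 1
            current = ch
    if current:
        codes.append(dictionary[current])
    return codes

def lzw_decompress(codes):
    """Inverse of lzw_compress, rebuilding the dictionary as it reads."""
    dictionary = {i: chr(i) for i in range(256)}
    next_code = 256
    previous = dictionary[codes[0]]
    result = [previous]
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        else:  # the "code emitted before it was defined" corner case
            entry = previous + previous[0]
        result.append(entry)
        dictionary[next_code] = previous + entry[0]
        next_code += 1
        previous = entry
    return "".join(result)

sample = "TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(sample)
assert lzw_decompress(codes) == sample
print(len(sample), len(codes))  # repeated substrings shrink the code stream
```

A compressed-and-queryable store of the kind the paper proposes would layer an index (such as an Aho-Corasick trie over the dictionary entries) on top of streams encoded this way.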

[3] viXra:2307.0087 [pdf] submitted on 2023-07-17 15:07:47

Artificial Intelligence for Complexity Theory

Authors: Mirzakhmet Syzdykov
Comments: 2 Pages.

In this continued series of work, we present theoretical and practical results towards reasoning with modern methods of Artificial Intelligence (AI). We justify our methodology with the help of illustrative examples from computer science, relying on the regular expression matching algorithm and applying the proposed solution to the task of checking file consistency against an unknown format. We also give several notable proofs of classical theorems which are in some sense coherent with notions such as AI and algorithmic complexity; nowadays, however, these problems are solved using huge amounts of hardware resources and specifically crafted hardware modules, and together they constitute a new formation in the modern age. We instead aim to represent the model in a more classical understanding, from the point of view of computational complexity, concise reasoning, and computer logic within classical models, theorems, and proofs, as a base approach for estimating the costs needed to build Artificial Neural Networks (ANN) or Machine Learning (ML) data.
Category: Artificial Intelligence

[2] viXra:2307.0024 [pdf] submitted on 2023-07-05 18:22:52

Fine-Tuning a BERT Model for Email Classification: Leveraging Personal Gmail Inbox

Authors: Rafael Costa da Silva
Comments: 8 Pages.

This study aims to develop an effective model for classifying emails as wanted or unwanted using fine-tuned BERT models. The process involved downloading the Gmail inbox through Google Takeout and converting the data to Parquet format. A frequency distribution analysis of sender ("From") addresses was conducted, and the emails were manually classified. A final dataset was created with the email subject, classification, and binary labels. The BERT-base-multilingual-cased model was fine-tuned using about 10,000 observations for each category. The resulting models achieved an accuracy of approximately 94.3% (0.9429). The models are publicly available in Hugging Face's model repository.
Category: Artificial Intelligence
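The frequency distribution step of the pipeline is straightforward to sketch with the standard library. The sample messages below are hypothetical stand-ins for a Google Takeout export; the point is that counting senders lets the most frequent addresses be labeled first, which is how a manual classification pass scales.

```python
import email
from collections import Counter

# Hypothetical raw messages standing in for a Google Takeout mbox export.
raw_messages = [
    "From: newsletter@example.com\nSubject: Weekly digest\n\nbody",
    "From: alice@example.org\nSubject: Lunch?\n\nbody",
    "From: newsletter@example.com\nSubject: Weekly digest #2\n\nbody",
]

def sender_frequencies(messages):
    """Count how often each From address appears, to prioritise labeling."""
    senders = [email.message_from_string(m)["From"] for m in messages]
    return Counter(senders)

freq = sender_frequencies(raw_messages)
print(freq.most_common(1))  # prints [('newsletter@example.com', 2)]
```

Labels assigned per sender then propagate to every message from that address, producing the subject/label pairs used for fine-tuning.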

[1] viXra:2307.0006 [pdf] submitted on 2023-07-02 22:26:43

Comparative Analysis for Predicting Shelf life of Fruits Using Advanced Deep Learning Approaches

Authors: Sanath Shenoy, Radhika Mishra, Ruchi Chaturvedi, Krushnakant Bhagwat
Comments: 7 Pages.

The food industry aims to reduce food waste and ensure the delivery of fresh produce to consumers, making it crucial to predict fruit shelf life accurately. Traditional approaches rely on expensive and time-consuming laboratory testing, which often involves destructive methods. However, recent studies suggest that advanced deep learning techniques can predict fruit shelf life accurately and efficiently. This paper presents a novel approach to predicting fruit shelf life using deep learning models. The study focuses on the application of these advanced techniques to forecast the shelf life of bananas, which can contribute significantly to achieving the food industry's objective. The study aims to develop accurate and efficient models that can predict the maturity of bananas based on their average shelf life and appearance. To achieve this objective, two object detection algorithms, Faster R-CNN and You Only Look Once (YOLO), are used and their performance is compared in the present research. The dataset was created by collecting images of the life cycle of bananas and segregating them based on their maturity. Various preprocessing and augmentation techniques were applied to enhance the features of the training dataset, which helps achieve better accuracy. The algorithms were trained on a dataset of Cavendish bananas and were able to predict the shelf life of bananas with good training accuracy. The YOLO algorithm, known for its efficiency, is compared with Faster R-CNN, which is well known for identifying very fine features. This study demonstrates the potential of deep learning algorithms in predicting the shelf life of bananas and can be extended to other fruits.
Category: Artificial Intelligence
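The final step of such a pipeline, turning a detector's ripeness class into a shelf-life estimate, is a simple lookup. The class names and day values below are hypothetical assumptions for illustration; in the paper they would come from the labeled maturity stages of the Cavendish dataset, and the (class, confidence) pairs from a YOLO or Faster R-CNN head.

```python
# Hypothetical mapping from a detector's ripeness class to remaining
# shelf life in days; real values would come from the labeled dataset.
SHELF_LIFE_DAYS = {
    "green": 7,
    "yellow": 4,
    "spotted": 2,
    "brown": 0,
}

def estimate_shelf_life(detections):
    """Given (class, confidence) pairs from an object detector,
    use the most confident detection to look up remaining days."""
    best_class, _ = max(detections, key=lambda d: d[1])
    return SHELF_LIFE_DAYS[best_class]

detections = [("yellow", 0.91), ("spotted", 0.34)]
print(estimate_shelf_life(detections))  # prints 4
```

Extending the approach to other fruits, as the abstract suggests, would mean retraining the detector and swapping in a fruit-specific lookup table.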