[13] viXra:2407.0178 [pdf] submitted on 2024-07-30 05:59:24
Authors: Jaehak Lee
Comments: 17 Pages.
Various macroscopic optical properties that are not observable in conventional homogeneous media can be realized in an optical metasurface by adjusting its sub-wavelength nanostructure. However, this requires precise and efficient structural design. Consequently, systematic design methodologies for nanophotonic structures have garnered significant interest in recent years. In this paper, we propose a fast and efficient deep-learning-based inverse design method for nanophotonic metasurface structures. A 10 × 10 plasmonic nanohole array perforated in an aluminum film was used to control both the amplitude and phase of the transmitted light with high contrast using a small number of structural variables. To identify the structure that induces a desired field distribution, we constructed deep neural network (DNN) models that interconnect the structural variables of the plasmonic nanohole array with those of the field distributions. The DNNs were trained on data obtained via finite-difference time-domain simulations. We then evaluated the performance of the proposed inverse design method on several targets, e.g., a rectangular grid with randomly assigned intensities on different cells. The results confirmed an average cosine similarity of 0.86 for a field distribution at a focal length of 2,000 nm on a 4 × 4 grid with randomly assigned intensities.
Category: Artificial Intelligence
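The cosine-similarity metric used above to compare a target field distribution against a simulated one can be sketched in pure Python; the 4 × 4 grid values below are made-up illustrative numbers, not data from the paper:

```python
import math

def cosine_similarity(target, predicted):
    """Cosine similarity between two flattened field-intensity grids."""
    dot = sum(t * p for t, p in zip(target, predicted))
    norm_t = math.sqrt(sum(t * t for t in target))
    norm_p = math.sqrt(sum(p * p for p in predicted))
    return dot / (norm_t * norm_p)

# Toy 4 x 4 intensity grids, flattened row by row (illustrative only).
target    = [0.9, 0.1, 0.0, 0.2,  0.8, 0.0, 0.1, 0.3,
             0.0, 0.7, 0.2, 0.1,  0.1, 0.0, 0.9, 0.4]
predicted = [0.8, 0.2, 0.1, 0.2,  0.7, 0.1, 0.1, 0.2,
             0.1, 0.6, 0.2, 0.2,  0.2, 0.1, 0.8, 0.3]
score = cosine_similarity(target, predicted)
```

A score near 1.0 indicates the simulated field closely matches the target pattern; identical grids score exactly 1.0.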
[12] viXra:2407.0152 [pdf] submitted on 2024-07-26 17:26:56
Authors: Agnij Moitra
Comments: 14 Pages. Preprint submitted to Economics Letters (Elsevier)
Boglehead investing, founded on the principles of John C. Bogle, is a classic time-tested, long-term, low-cost, passive investment strategy. This paper uses various machine learning methods and fundamental stock data to predict whether a stock will incur negative returns in the following year, and suggests a loss-averted Boglehead strategy of investing in all stocks that are expected not to give negative returns over the next year. Results reveal that XGBoost, out of the 44 models trained, has the highest classification metrics for this task. Furthermore, this paper uses various machine learning methods for exploratory data analysis, and SHAP values reveal that Net Income Margin, ROA, Gross Profit Margin, and EBIT are among the most important factors for this prediction. Based on the SHAP values, it is also interesting to note that the current year has a negligible contribution to the final prediction. Investors can use this as a heuristic guide for loss-averted long-term (1-year) stock portfolios.
Category: Artificial Intelligence
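The loss-averted selection rule described above — invest in every stock the classifier expects not to produce a negative return — can be sketched as a simple filter. The tickers and fundamentals below are hypothetical, and `predict_negative_return` is a stand-in for any trained classifier such as the paper's XGBoost model:

```python
def predict_negative_return(fundamentals):
    # Hypothetical stand-in for a trained classifier (e.g. XGBoost):
    # flag a stock as risky if its net income margin is negative.
    return fundamentals["net_income_margin"] < 0.0

def loss_averted_portfolio(universe):
    """Keep every stock not predicted to incur a negative return next year."""
    return [ticker for ticker, fundamentals in universe.items()
            if not predict_negative_return(fundamentals)]

# Hypothetical fundamentals, not real market data.
universe = {
    "AAA": {"net_income_margin": 0.12},
    "BBB": {"net_income_margin": -0.05},
    "CCC": {"net_income_margin": 0.03},
}
portfolio = loss_averted_portfolio(universe)
```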
[11] viXra:2407.0146 [pdf] submitted on 2024-07-24 20:19:30
Authors: Jong-Phil Sim, Song-Chun Pang, Son-Myong Hwang
Comments: 11 Pages.
In this paper, we propose a feature extraction algorithm based on linear embedding of out-of-sample data. The algorithm's formulation aims to minimize the pairwise distances between feature points. To enhance nonlinear feature learning, we also incorporate a neighborhood reconstruction error that preserves local topology structures. To enable the algorithm to extract local features from out-of-sample data, we further add a feature approximation error that correlates features with embedded features through a jointly learned feature extractor. The learned linear extractor can thus extract local features from new data efficiently by direct embedding. To optimize the proposed objective function, we use eigen-decomposition. Extensive simulation results verify the effectiveness of our algorithm compared with related feature learning techniques.
Category: Artificial Intelligence
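The objective above is optimized via eigen-decomposition. As a minimal illustration of that step (on a toy symmetric matrix, not the paper's actual objective matrix), power iteration in pure Python recovers the dominant eigenpair:

```python
import math

def power_iteration(matrix, steps=100):
    """Dominant eigenvalue/eigenvector of a square matrix via power iteration."""
    n = len(matrix)
    v = [1.0] * n
    for _ in range(steps):
        # Multiply, then renormalize; the iterate converges to the
        # eigenvector of the largest-magnitude eigenvalue.
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient of the converged vector gives the eigenvalue.
    eigenvalue = sum(v[i] * sum(matrix[i][j] * v[j] for j in range(n))
                     for i in range(n))
    return eigenvalue, v

# Symmetric toy matrix with eigenvalues 3 and 1.
value, vector = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```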
[10] viXra:2407.0100 [pdf] submitted on 2024-07-16 20:01:16
Authors: Anmolika Singh, Mojtaba Alfardan
Comments: 7 Pages.
Organizations are frequently overwhelmed by the sheer volume of alerts about vulnerabilities discovered within their systems. These alerts are typically prioritized based on severity levels categorized by Common Vulnerabilities and Exposures (CVE) [2], a standard glossary used in Vulnerability Management Systems. However, this severity classification often fails to consider the specific operational context of the systems, leading to misaligned priorities and the potential oversight of more critical vulnerabilities that demand immediate attention. This paper investigates whether Large Language Models (LLMs) [25] can offer a solution by integrating contextual awareness into the vulnerability management process, thus enhancing the efficiency and effectiveness of organizational responses to cybersecurity threats.
Category: Artificial Intelligence
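One way to add the operational context the paper argues for is to build the LLM prompt from both the CVE severity and the asset's role. The field names, identifier, and wording below are hypothetical illustrations, not taken from the paper:

```python
def build_triage_prompt(cve_id, cvss_score, asset_context):
    """Compose a context-aware vulnerability-triage prompt for an LLM."""
    return (
        f"Vulnerability {cve_id} has a CVSS base score of {cvss_score}.\n"
        f"Affected asset context: {asset_context}\n"
        "Considering the asset's exposure and business role, rate the "
        "remediation priority as LOW, MEDIUM, or HIGH and explain why."
    )

prompt = build_triage_prompt(
    "CVE-2024-0001",  # hypothetical identifier
    9.8,
    "internal build server, not internet-facing, no customer data",
)
```

The point of the sketch is that two alerts with the same CVSS score yield different prompts, and hence potentially different priorities, once asset context is included.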
[9] viXra:2407.0096 [pdf] submitted on 2024-07-15 20:56:41
Authors: Fei Ding
Comments: 5 Pages.
In the standard transformer architecture, increasing the number of model parameters leads to linear growth in computational cost and activation memory. To address this issue, we propose a novel Infinite Parameter Large Language Model (IP-LLM) architecture that decouples model size from computational cost and device memory. Existing large language models are all fixed-parameter models, while human knowledge is infinite and expands daily; finite parameters are inherently limited in their capacity to accommodate this boundless knowledge. Our IP-LLM architecture can potentially accommodate unbounded knowledge, resolving this issue and laying the foundation for realizing a truly omniscient and omnipotent artificial general intelligence in the future. Our architecture surpasses MoE in performance while requiring significantly less memory.
Category: Artificial Intelligence
[8] viXra:2407.0089 [pdf] submitted on 2024-07-13 20:32:56
Authors: B. Nandini
Comments: 8 Pages.
Image captioning is the process of creating descriptions for the events depicted in an image, and deep learning models can be used to accomplish it. Automatically generating a caption or explanation for an image in natural language is an extremely difficult problem: it takes techniques from computer vision to comprehend the image's content, and a language model from natural language processing to translate that comprehension into words in the correct sequence. Deep learning and natural language processing have advanced to the point where creating captions for given images is now straightforward. We use a pre-trained Convolutional Neural Network (CNN) to extract high-level features, such as objects, shapes, and textures, from images. These features are then fed into a Long Short-Term Memory (LSTM) network, a kind of Recurrent Neural Network (RNN) that can handle sequential input such as sentences.
Category: Artificial Intelligence
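The encoder-decoder pipeline described above — pre-trained CNN features feeding an LSTM that emits words one at a time — can be sketched with stub components. The vocabulary, feature extractor, and decoding rule here are made up for illustration; real models replace the stubs with trained networks:

```python
# Greedy decoding loop of a CNN -> LSTM captioner, with stub components.
VOCAB = ["<start>", "a", "dog", "runs", "<end>"]  # hypothetical vocabulary

def cnn_features(image):
    """Stand-in for a pre-trained CNN: map an image to a feature vector."""
    return [sum(image) / len(image)] * 4

def lstm_step(features, state, last_word):
    """Stand-in for one LSTM step: return (next_word, new_state)."""
    next_index = min(VOCAB.index(last_word) + 1, len(VOCAB) - 1)
    return VOCAB[next_index], state

def generate_caption(image, max_len=10):
    features, state, word, caption = cnn_features(image), None, "<start>", []
    for _ in range(max_len):
        word, state = lstm_step(features, state, word)
        if word == "<end>":
            break
        caption.append(word)
    return " ".join(caption)

caption = generate_caption([0.1, 0.5, 0.9])
```

The loop structure — encode once, then decode word by word until an end token — is the part that carries over to real captioning systems.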
[7] viXra:2407.0079 [pdf] submitted on 2024-07-11 20:35:04
Authors: Shuyang Gu
Comments: 12 Pages. https://cientgu.github.io/files/VisualSignalDecomposition.pdf
This paper does not propose any new algorithms but instead outlines various problems in the field of visual generation based on the author’s personal understanding. The core of these problems lies in how to decompose visual signals, with all other issues being closely related to this central problem and stemming from unsuitable approaches to signal decomposition. This paper aims to draw researchers’ attention to the significance of Visual Signal Decomposition.
Category: Artificial Intelligence
[6] viXra:2407.0075 [pdf] submitted on 2024-07-11 20:23:57
Authors: Fei Ding
Comments: 5 Pages.
Large Language Models (LLMs) have shown exceptional generative abilities across a variety of natural language tasks, achieving remarkable performance from just a few examples of natural language instructions and reducing the need for extensive feature engineering. However, LLMs remain relatively weak in reasoning and problem solving. We propose a new construction that addresses this insufficiency in mathematical and logical ability.
Category: Artificial Intelligence
[5] viXra:2407.0065 [pdf] replaced on 2024-09-20 08:45:15
Authors: Eugene Rulko
Comments: 8 Pages.
Training a relatively large neural network with enough capacity for complex tasks is challenging. In real life, task solving relies on a system of knowledge in which more complex skills are built upon previously learned ones, in the same way that biological evolution builds new forms of life on a previously achieved level of complexity. Inspired by this, this work proposes ways of increasing complexity: in particular, training neural networks with smaller receptive fields and using their weights as prior knowledge for more complex successors through the gradual involvement of some parts, and letting a smaller network serve as a source of reward for a more complicated one. This allows better performance in a particular case of deep Q-learning compared with a situation in which the model tries to use a complex receptive field from scratch.
Category: Artificial Intelligence
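The weight-transfer idea above — initialize a larger receptive field from a smaller trained one and involve the new parts gradually — can be sketched as follows; zero-initializing the new positions is an assumption for illustration, not necessarily the paper's exact scheme:

```python
def grow_receptive_field(small_weights, new_size):
    """Center a trained small kernel inside a larger one; the new slots start
    at zero so the grown network initially behaves like its predecessor."""
    assert new_size >= len(small_weights)
    pad = (new_size - len(small_weights)) // 2
    large = [0.0] * new_size
    for i, w in enumerate(small_weights):
        large[pad + i] = w
    return large

# A trained 3-tap kernel grown into a 7-tap one.
grown = grow_receptive_field([0.5, -0.2, 0.3], 7)
```

Because the padding starts at zero, the successor reproduces the smaller network's behavior at initialization and only departs from it as training updates the new positions.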
[4] viXra:2407.0052 [pdf] submitted on 2024-07-08 02:38:16
Authors: Ding Fei, Zhang Xu
Comments: 13 Pages.
Recent advancements in Large Language Models (LLMs) have showcased their remarkable capabilities in text understanding and generation. However, even strong LLMs are susceptible to acquiring erroneous or obsolete information from the training corpus, and direct secondary fine-tuning with data containing new knowledge may be ineffective at updating knowledge because of the conflict between old and new knowledge. In this paper, we propose a new fine-tuning paradigm called DFT (Delicate Fine-Tuning). This method uses parametric arithmetic to precisely pinpoint the location of knowledge and updates only the minimal set of relevant parameters. Experimental results on two publicly available datasets demonstrate that the proposed DFT clearly improves the knowledge-updating performance of full fine-tuning, while outperforming the existing baselines in most cases.
Category: Artificial Intelligence
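The parametric-arithmetic step described above — locate the parameters most affected by new knowledge and update only those — can be sketched as selecting the top-k coordinates by absolute parameter difference between two model snapshots. The vectors are toy values, and top-k selection is an illustrative assumption about how the minimal set is chosen:

```python
def delicate_update(old_params, tuned_params, k):
    """Apply only the k largest parameter changes between two snapshots."""
    deltas = [tuned - old for old, tuned in zip(old_params, tuned_params)]
    # Rank coordinates by the magnitude of their change.
    top_k = set(sorted(range(len(deltas)), key=lambda i: abs(deltas[i]),
                       reverse=True)[:k])
    return [p + deltas[i] if i in top_k else p
            for i, p in enumerate(old_params)]

old   = [0.10, 0.20, 0.30, 0.40]
tuned = [0.10, 0.90, 0.31, 0.05]   # toy fine-tuned snapshot
updated = delicate_update(old, tuned, k=2)
```

Only the two coordinates with the largest shifts are updated; the rest keep their original values, limiting interference with unrelated knowledge.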
[3] viXra:2407.0033 [pdf] submitted on 2024-07-04 21:15:38
Authors: Aurora Zeno
Comments: 13 Pages.
This paper explores the emerging synergy between quantum computing and artificial intelligence (AI), examining its potential to revolutionize our approach to global challenges. We present a comprehensive overview of quantum computing fundamentals and current AI capabilities, followed by an in-depth analysis of quantum-enhanced AI algorithms. The paper delves into specific applications in climate modeling, drug discovery, and resource optimization, providing quantitative estimates of potential improvements. We also address the challenges, limitations, and ethical considerations associated with this convergence. Our analysis suggests that the integration of quantum computing and AI could lead to unprecedented advancements in solving complex global problems, potentially offering orders of magnitude improvements in computational efficiency and accuracy. We conclude with a roadmap for future development and a call for increased research in this transformative field.
Category: Artificial Intelligence
[2] viXra:2407.0025 [pdf] replaced on 2025-03-26 09:23:35
Authors: Shuai Liu
Comments: 8 Pages.
In the past, the organization of society, including governments and corporations, relied solely on natural experience, lacking a robust mathematical and logical framework explaining how to structure and optimize these entities. This article draws parallels between the structure of social organizations and neural networks, illustrating that social structures emulate neural network architectures: social organizations can be seen as neural networks nested within humans. Using the same principles, one can optimize the structure of social organizations. The article also compares neural network algorithms with Darwin's theory of natural selection, highlighting their similarities.
Category: Artificial Intelligence
[1] viXra:2407.0020 [pdf] submitted on 2024-07-03 19:04:16
Authors: Satish Gajawada
Comments: 111 Pages. (Note by viXra Admin: Please do not use cartoon drawings in a scholarly article)
A new field titled "Very Highly Advanced Artificial Intelligence (VHAAI)" is coined in this article. VHAAI is a new field which is the collection of the following fields: 1) Out of the Box Artificial Intelligence (OBAI), 2) Artificial Intelligence Plus Plus (AI++), 3) Artificial Excellence (AE), 4) Artificial God Optimization (AGO), 5) Artificial Human Optimization (AHO), 6) Artificial Soul Optimization (ASO), 7) Twenty Second Century Artificial Intelligence (TSCAI), 8) Deep Loving (DL), 9) Nature Plus Plus Inspired Computing (N++IC), 10) Artificial Satisfaction (AS), 11) The Interesting and Complete Artificial Intelligence (ICAI), 12) Lord Rama Artificial Intelligence (LRAI), 13) Data Science Plus Plus (DS++), 14) Stories Inspired Optimization Algorithms (SIOA).
Category: Artificial Intelligence