Artificial Intelligence

Previous months:
2007 - 0703(1)
2010 - 1003(33) - 1004(9) - 1005(5) - 1008(2) - 1009(1) - 1010(1) - 1012(1)
2011 - 1101(2) - 1106(1) - 1107(1) - 1109(2)
2012 - 1201(1) - 1204(3) - 1206(2) - 1207(6) - 1208(7) - 1209(1) - 1210(4) - 1211(2)
2013 - 1301(5) - 1302(2) - 1303(6) - 1304(9) - 1305(1) - 1308(1) - 1309(8) - 1310(7) - 1311(1) - 1312(4)
2014 - 1404(2) - 1405(3) - 1406(1) - 1408(5) - 1410(1) - 1411(1) - 1412(1)
2015 - 1501(1) - 1502(3) - 1503(6) - 1504(3) - 1506(5) - 1507(4) - 1508(1) - 1509(4) - 1510(2) - 1511(4) - 1512(1)
2016 - 1601(1) - 1602(10) - 1603(2) - 1605(4) - 1606(6) - 1607(5) - 1608(7) - 1609(5) - 1610(12) - 1611(14) - 1612(10)
2017 - 1701(4) - 1702(9) - 1703(5) - 1704(10) - 1705(11) - 1706(14) - 1707(24) - 1708(19) - 1709(20) - 1710(14) - 1711(21) - 1712(16)
2018 - 1801(14) - 1802(5) - 1803(16) - 1804(17) - 1805(27) - 1806(22) - 1807(35) - 1808(35) - 1809(19) - 1810(28) - 1811(16)

Recent submissions

Any replacements are listed farther down

[650] viXra:1811.0253 [pdf] submitted on 2018-11-16 08:54:01

AI Predicting Enzyme Activity

Authors: George Rajna
Comments: 40 Pages.

Researchers at the University of Oxford have found a general way of predicting enzyme activity. [23] Researchers at Caltech have developed an artificial neural network made out of DNA that can solve a classic machine learning problem: correctly identifying handwritten numbers. [22] Researchers have devised a magnetic control system to make tiny DNA-based robots move on demand—and much faster than previously possible. [21] Humans have 46 chromosomes, and each one is capped at either end by repetitive sequences called telomeres. [20] Just like any long polymer chain, DNA tends to form knots. Using technology that allows them to stretch DNA molecules and image the behavior of these knots, MIT researchers have discovered, for the first time, the factors that determine whether a knot moves along the strand or "jams" in place. [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18] An LMU team now reveals the inner workings of a molecular motor made of proteins which packs and unpacks DNA. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15] Scientists from Moscow State University (MSU) working with an international team of researchers have identified the structure of one of the key regions of telomerase—a so-called "cellular immortality" ribonucleoprotein. [14] Researchers from Tokyo Metropolitan University used a light-sensitive iridium-palladium catalyst to make "sequential" polymers, using visible light to change how building blocks are combined into polymer chains. [13]
Category: Artificial Intelligence

[649] viXra:1811.0240 [pdf] submitted on 2018-11-15 08:25:01

AI for Sustainability Goals

Authors: George Rajna
Comments: 41 Pages.

As ESA's ɸ-week continues to provoke and inspire participants on new ways of using Earth observation for monitoring our world to benefit the citizens of today and of the future, it is clear that artificial intelligence is set to play an important role. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21]
Category: Artificial Intelligence

[648] viXra:1811.0226 [pdf] submitted on 2018-11-14 10:24:55

Private Photo Recognition

Authors: George Rajna
Comments: 77 Pages.

Researchers at Osaka University have proposed an encryption-free framework for preserving users' privacy when they use photo-based information services. [47] When building a predictive model, reliable results depend on two issues: the number of variables that come into play and the number of examples entered into the system. [46] A new computational approach that allows the identification of molecular alterations associated with prognosis and resistance to therapy of different types of cancer was developed by the research group led by Nuno Barbosa Morais at Instituto de Medicina Molecular João Lobo Antunes (iMM; Portugal). [45] A discovery by scientists at UC Riverside may open up new ways to control steroid hormone-mediated processes, including growth and development in insects, and sexual maturation, immunity, and cancer progression in humans. [44] New 3-D maps of water distribution during cellular membrane fusion are accelerating scientific understanding of cell development, which could lead to new treatments for diseases associated with cell fusion. [43] Thanks to the invention of a technique called super-resolution fluorescence microscopy, it has recently become possible to view even the smaller parts of a living cell. [42] A new instrument lets researchers use multiple laser beams and a microscope to trap and move cells and then analyze them in real-time with a sensitive analysis technique known as Raman spectroscopy. [41] All systems are go for launch in November of NASA's Global Ecosystem Dynamics Investigation (GEDI) mission, which will use high-resolution laser ranging to study Earth's forests and topography from the International Space Station (ISS). [40] Scientists from the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI) in Berlin combined state-of-the-art experiments and numerical simulations to test a fundamental assumption underlying strong-field physics. [39] Femtosecond lasers are capable of processing any solid material with high quality and high precision using their ultrafast and ultra-intense characteristics. [38] To create the flying microlaser, the researchers launched laser light into a water-filled hollow core fiber to optically trap the microparticle. Like the materials used to make traditional lasers, the microparticle incorporates a gain medium. [37]
Category: Artificial Intelligence

[647] viXra:1811.0224 [pdf] submitted on 2018-11-14 13:21:48

#2sat is in P

Authors: Elnaserledinellah Mahmood Abdelwahab
Comments: 85 Pages. Journal Academica Vol. 8(1), pp. 3-88, October 13 2018 - Theoretical Computer Science - ISSN 2161-3338 online edition www.journalacademica.org - Copyright © 2018 Journal Academica Foundation - With perpetual, non-exclusive license for viXra.org

This paper presents a new view of logical variables which helps to solve the #P-complete #2SAT problem efficiently. Variables are considered to be more than mere placeholders of information, namely: entities exhibiting repetitive patterns of logical truth values. Using this insight, a canonical order between literals and clauses of an arbitrary 2CNF Clause Set S is shown to be always achievable. It is also shown that resolving clauses respecting this order enables the construction of small Free Binary Decision Diagrams (FBDDs) for S with unique node counts in O(M^4), or O(M^6) in case a particular Lemma shown in the paper is relaxed, where M is the number of clauses. Efficiently counting the solutions generated in such FBDDs is then proven to be O(M^9) or O(M^13) by first running the proposed practical Pattern-Algorithm 2SAT-FGPRA and then the counting algorithm Count2SATSolutions, so that the overall complexity of counting 2SAT solutions is in P. Relaxing the specific Lemma enables a uniform description of kSAT Pattern-Algorithms in terms of (k-1)SAT ones, opening up yet another way of showing the main result. This second way demonstrates that avoiding certain types of copies of sub-trees in FBDDs constructed for arbitrary 1CNF and 2CNF Clause Sets, while uniformly expressing kSAT Pattern-Algorithms for any k>0, is a sufficient condition for an efficient solution of kSAT as well. Exponential lower bounds known for the construction of deterministic and non-deterministic FBDDs of some Boolean functions are seen to be inapplicable to the methods described here.
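For orientation, the quantity the paper claims to compute in polynomial time can be defined by a brute-force reference counter. The sketch below is a hedged illustration only: it assumes a DIMACS-style literal encoding (not the paper's notation) and enumerates all assignments, so it runs in exponential time rather than the O(M^9) claimed for the FBDD-based method.

```python
from itertools import product

def count_2sat_solutions(num_vars, clauses):
    """Count satisfying assignments of a 2-CNF formula by exhaustive search.

    clauses: list of 2-tuples of non-zero ints; literal k denotes variable |k|,
    negated when k < 0 (a DIMACS-style encoding assumed for illustration).
    This exponential baseline only defines the #2SAT quantity; it is not the
    polynomial-time FBDD-based counting method proposed in the paper.
    """
    count = 0
    for assignment in product([False, True], repeat=num_vars):
        def literal_holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(literal_holds(a) or literal_holds(b) for a, b in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x3) over three variables has 4 satisfying assignments.
print(count_2sat_solutions(3, [(1, 2), (-1, 3)]))  # -> 4
```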
Category: Artificial Intelligence

[646] viXra:1811.0192 [pdf] submitted on 2018-11-12 10:00:53

Nanoscale Robotic Systems

Authors: George Rajna
Comments: 32 Pages.

A single-molecule DNA “navigator” that can successfully find its way out of a maze constructed on a 2D DNA origami platform might be used in artificial intelligence applications as well as in biomolecular assembly, sensing, DNA-driven computation and molecular information storage. [20] The way DNA folds largely determines which genes are read out. John van Noort and his group have quantified how easily rolled-up DNA parts stack. [19]
Category: Artificial Intelligence

[645] viXra:1811.0191 [pdf] submitted on 2018-11-12 10:16:10

Refutation of Three Phase, All Reduce Algorithm Across Processing Units for Scalable Deep Learning

Authors: Colin James III
Comments: 2 Pages. © Copyright 2018 by Colin James III All rights reserved. Respond to the author by email at: info@ersatz-systems dot com.

A three-phase algorithm to do an all-reduce across all GPUs is not tautologous and is therefore refuted.
Category: Artificial Intelligence

[644] viXra:1811.0189 [pdf] submitted on 2018-11-12 10:43:34

Big Data Predict the Future

Authors: George Rajna
Comments: 75 Pages.

When building a predictive model, reliable results depend on two issues: the number of variables that come into play and the number of examples entered into the system. [46] A new computational approach that allows the identification of molecular alterations associated with prognosis and resistance to therapy of different types of cancer was developed by the research group led by Nuno Barbosa Morais at Instituto de Medicina Molecular João Lobo Antunes (iMM; Portugal). [45] A discovery by scientists at UC Riverside may open up new ways to control steroid hormone-mediated processes, including growth and development in insects, and sexual maturation, immunity, and cancer progression in humans. [44] New 3-D maps of water distribution during cellular membrane fusion are accelerating scientific understanding of cell development, which could lead to new treatments for diseases associated with cell fusion. [43] Thanks to the invention of a technique called super-resolution fluorescence microscopy, it has recently become possible to view even the smaller parts of a living cell. [42] A new instrument lets researchers use multiple laser beams and a microscope to trap and move cells and then analyze them in real-time with a sensitive analysis technique known as Raman spectroscopy. [41] All systems are go for launch in November of NASA's Global Ecosystem Dynamics Investigation (GEDI) mission, which will use high-resolution laser ranging to study Earth's forests and topography from the International Space Station (ISS). [40] Scientists from the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI) in Berlin combined state-of-the-art experiments and numerical simulations to test a fundamental assumption underlying strong-field physics. [39] Femtosecond lasers are capable of processing any solid material with high quality and high precision using their ultrafast and ultra-intense characteristics. [38] To create the flying microlaser, the researchers launched laser light into a water-filled hollow core fiber to optically trap the microparticle. Like the materials used to make traditional lasers, the microparticle incorporates a gain medium. [37]
Category: Artificial Intelligence

[643] viXra:1811.0182 [pdf] submitted on 2018-11-11 10:01:02

Fast, Accurate AI Training

Authors: George Rajna
Comments: 45 Pages.

Researchers at Hong Kong Baptist University (HKBU) have partnered with a team from Tencent Machine Learning to create a new technique for training artificial intelligence (AI) machines faster than ever before while maintaining accuracy. [27]
Category: Artificial Intelligence

[642] viXra:1811.0150 [pdf] submitted on 2018-11-09 11:09:17

AI Window into Mental Health

Authors: George Rajna
Comments: 44 Pages.

In January 2017, IBM made the bold statement that within five years, health professionals could apply AI to better understand how words and speech paint a clear window into our mental health. [26] Dating apps are using artificial intelligence to suggest where to go on a first date, recommend what to say and even find a partner who looks like your favourite celebrity. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20]
Category: Artificial Intelligence

[641] viXra:1811.0142 [pdf] submitted on 2018-11-08 07:27:34

AI Help Search for Love

Authors: George Rajna
Comments: 42 Pages.

Dating apps are using artificial intelligence to suggest where to go on a first date, recommend what to say and even find a partner who looks like your favourite celebrity. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20]
Category: Artificial Intelligence

[640] viXra:1811.0124 [pdf] submitted on 2018-11-07 07:51:16

Rethinking the Artificial Neural Networks: A Novel Approach

Authors: Usman Ahmad, Hong Song
Comments: 13 Pages.

In this paper, we propose a novel approach to building Artificial Neural Networks (ANNs). We address the following fundamental questions: 1) What is the architecture of the ANN model, and should it really have a layered architecture? 2) What is a neuron: a processing unit or a memory cell? 3) How should neurons be interconnected, and what should be the mechanism of weight assignment? 4) How can prior knowledge, bias, and generalization be used to extract features? We give an abstract view of our approach for supervised learning with text data only and explain it through examples.
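For contrast with the questions raised above, here is a minimal sketch of the conventional layered building block whose assumptions (a layered architecture, neurons as pure processing units, learned weight matrices) the paper calls into question. The layer sizes and ReLU activation are illustrative assumptions, not the authors' proposed model.

```python
import numpy as np

def dense_layer(x, weights, bias):
    """Conventional artificial 'neuron' layer: weighted sum followed by ReLU."""
    return np.maximum(0.0, weights @ x + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                                    # toy input features
h = dense_layer(x, rng.normal(size=(3, 4)), np.zeros(3))  # hidden layer
y = dense_layer(h, rng.normal(size=(2, 3)), np.zeros(2))  # output layer
print(y)
```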
Category: Artificial Intelligence

[639] viXra:1811.0111 [pdf] submitted on 2018-11-07 18:42:20

Theoretical Model For Holistic Non Unique Clustering {Version 2}

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has presented a novel method for Holistic Non Unique Clustering.
Category: Artificial Intelligence

[638] viXra:1811.0093 [pdf] submitted on 2018-11-06 23:48:20

Theoretical Model For Holistic Non Unique Clustering

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has presented a novel method for Holistic Non Unique Clustering.
Category: Artificial Intelligence

[637] viXra:1811.0088 [pdf] submitted on 2018-11-05 07:40:57

Training Neuron Model

Authors: George Rajna
Comments: 50 Pages.

Artificial neural networks are machine learning systems composed of a large number of connected nodes called artificial neurons. Similar to the neurons in a biological brain, these artificial neurons are the primary basic units that are used to perform neural computations and solve problems. [27] Researchers from the Moscow Institute of Physics and Technology (MIPT), Aalto University in Finland, and ETH Zurich have demonstrated a prototype device that uses quantum effects and machine learning to measure magnetic fields more accurately than its classical analogues. [26] Researchers at the University of California San Diego have developed an approach that uses machine learning to identify and predict which genes make infectious bacteria resistant to antibiotics. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18]
Category: Artificial Intelligence

[636] viXra:1811.0085 [pdf] submitted on 2018-11-05 08:30:13

Event-Driven Models

Authors: Dimiter Dobrev
Comments: 26 Pages. Bulgarian language

In Reinforcement Learning we are looking for meaning in the stream of input-output information. If we do not find sense, this stream will be just a noise for us. To find meaning, we must learn to discover and recognize objects. What is an object? In this article we will show that the object is an event-driven model. These models are a generalization of action-driven models. In the Markov decision process we have an action-driven model and there the states are changing at each step. The advantage of event-driven models is that they are more stable and change their state only when certain events occur. These events can happen very rarely, so the current state of the event-driven model is much more predictable.
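A minimal sketch of the distinction drawn in the abstract, under illustrative assumptions (the event predicate, state set and transition tables here are hypothetical): an action-driven (MDP-style) model may change state at every step, whereas an event-driven model updates its state only when a designated event is detected, which is why its current state is more stable and more predictable.

```python
class ActionDrivenModel:
    """MDP-style model: the state may change on every (state, action) step."""
    def __init__(self, transition, state):
        self.transition = transition      # dict[(state, action)] -> next state
        self.state = state

    def step(self, action, observation=None):
        self.state = self.transition[(self.state, action)]
        return self.state


class EventDrivenModel:
    """Event-driven model: the state changes only when an event is detected."""
    def __init__(self, transition, state, detect_event):
        self.transition = transition      # dict[(state, event)] -> next state
        self.state = state
        self.detect_event = detect_event  # observation -> event or None

    def step(self, action, observation):
        event = self.detect_event(observation)
        if event is not None:             # rare events => a stable, predictable state
            self.state = self.transition[(self.state, event)]
        return self.state


# Toy usage: a "door" object whose state changes only on "opened"/"closed" events.
door = EventDrivenModel(
    transition={("closed", "opened"): "open", ("open", "closed"): "closed"},
    state="closed",
    detect_event=lambda obs: obs if obs in ("opened", "closed") else None,
)
for obs in ["noise", "noise", "opened", "noise"]:
    door.step(action="look", observation=obs)
print(door.state)  # -> "open": unaffected by the irrelevant observations
```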
Category: Artificial Intelligence

[635] viXra:1811.0083 [pdf] submitted on 2018-11-05 09:41:02

Virtual Reality Test Your Nerves

Authors: George Rajna
Comments: 50 Pages.

Researchers at EPFL's Laboratory of Behavioral Genetics, headed by Professor Carmen Sandi, have set out to learn more with a new virtual reality program. [28] Artificial neural networks are machine learning systems composed of a large number of connected nodes called artificial neurons. Similar to the neurons in a biological brain, these artificial neurons are the primary basic units that are used to perform neural computations and solve problems. [27] Researchers from the Moscow Institute of Physics and Technology (MIPT), Aalto University in Finland, and ETH Zurich have demonstrated a prototype device that uses quantum effects and machine learning to measure magnetic fields more accurately than its classical analogues. [26] Researchers at the University of California San Diego have developed an approach that uses machine learning to identify and predict which genes make infectious bacteria resistant to antibiotics. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19]
Category: Artificial Intelligence

[634] viXra:1810.0521 [pdf] submitted on 2018-10-31 08:51:54

Deep Learning Glaucoma

Authors: George Rajna
Comments: 52 Pages.

As part of a team of scientists from IBM and New York University, my colleagues and I are looking at new ways AI could be used to help ophthalmologists and optometrists further utilize eye images, and potentially help to speed the process for detecting glaucoma in images. [31] A team of EPFL scientists has now written a machine-learning program that can predict, in record time, how atoms will respond to an applied magnetic field. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21]
Category: Artificial Intelligence

[633] viXra:1810.0519 [pdf] submitted on 2018-10-31 09:34:04

AI in Social Media and News

Authors: George Rajna
Comments: 49 Pages.

The technology could help identify biases in social media posts and news articles, the better to judge the information's validity. [29] Researchers find AI-generated reviews and comments pose a significant threat to consumers, but machine learning can help detect the fakes. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[632] viXra:1810.0499 [pdf] submitted on 2018-10-31 06:01:34

AI Recognize Galaxies

Authors: George Rajna
Comments: 36 Pages.

Researchers have taught an artificial intelligence program used to recognise faces on Facebook to identify galaxies in deep space. [22] Now, researchers at Stanford University have devised a new type of artificially intelligent camera system that can classify images faster and more energy efficiently, and that could one day be built small enough to be embedded in the devices themselves, something that is not possible today. [21] Today, deep neural networks with different architectures, such as convolutional, recurrent and autoencoder networks, are becoming an increasingly popular area of research. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[631] viXra:1810.0488 [pdf] submitted on 2018-10-29 12:20:08

AI and NMR Spectroscopy

Authors: George Rajna
Comments: 51 Pages.

A team of EPFL scientists has now written a machine-learning program that can predict, in record time, how atoms will respond to an applied magnetic field. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20]
Category: Artificial Intelligence

[630] viXra:1810.0450 [pdf] submitted on 2018-10-26 07:27:57

Machine Learning Quantum Magnetometer

Authors: George Rajna
Comments: 48 Pages.

Researchers from the Moscow Institute of Physics and Technology (MIPT), Aalto University in Finland, and ETH Zurich have demonstrated a prototype device that uses quantum effects and machine learning to measure magnetic fields more accurately than its classical analogues. [26] Researchers at the University of California San Diego have developed an approach that uses machine learning to identify and predict which genes make infectious bacteria resistant to antibiotics. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18]
Category: Artificial Intelligence

[629] viXra:1810.0433 [pdf] submitted on 2018-10-25 09:10:32

Machine Learning Antibiotic Resistance

Authors: George Rajna
Comments: 44 Pages.

Researchers at the University of California San Diego have developed an approach that uses machine learning to identify and predict which genes make infectious bacteria resistant to antibiotics. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[628] viXra:1810.0431 [pdf] submitted on 2018-10-25 09:28:30

AI Help Seniors Stay Safe

Authors: George Rajna
Comments: 34 Pages.

An autonomous intelligence system is helping seniors stay safe both at home and in care facilities, thanks to a collaboration between University of Alberta computing scientists and software technology company Spxtrm AI. [22] Now, researchers at Stanford University have devised a new type of artificially intelligent camera system that can classify images faster and more energy efficiently, and that could one day be built small enough to be embedded in the devices themselves, something that is not possible today. [21] Today, deep neural networks with different architectures, such as convolutional, recurrent and autoencoder networks, are becoming an increasingly popular area of research. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[627] viXra:1810.0414 [pdf] submitted on 2018-10-24 08:25:24

AI to Create Fragrances

Authors: George Rajna
Comments: 41 Pages.

With this in mind, my team at IBM Research, together with Symrise, one of the top global producers of flavors and fragrances, created an AI system that can learn about formulas, raw materials, historical success data and industry trends. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[626] viXra:1810.0413 [pdf] submitted on 2018-10-24 09:11:15

Search Engines Entropy

Authors: George Rajna
Comments: 34 Pages.

Search engine entropy is thus important not only to the efficiency of search engines and to those using them to find relevant information, and to the success of the companies and other bodies running such systems, but also to those who run websites hoping to be found and visited following a search. [20] "We've experimentally confirmed the connection between information in the classical case and the quantum case," Murch said, "and we're seeing this new effect of information loss." [19] It's well-known that when a quantum system is continuously measured, it freezes, i.e., it stops changing, which is due to a phenomenon called the quantum Zeno effect. [18] Physicists have extended one of the most prominent fluctuation theorems of classical stochastic thermodynamics, the Jarzynski equality, to quantum field theory. [17] In 1993, physicist Lucien Hardy proposed an experiment showing that there is a small probability (around 6-9%) of observing a particle and its antiparticle interacting with each other without annihilating—something that is impossible in classical physics. [16] Scientists at the University of Geneva (UNIGE), Switzerland, recently reengineered their data processing, demonstrating that 16 million atoms were entangled in a one-centimetre crystal. [15] The fact that it is possible to retrieve this lost information reveals new insight into the fundamental nature of quantum measurements, mainly by supporting the idea that quantum measurements contain both quantum and classical components. [14] Researchers blur the line between classical and quantum physics by connecting chaos and entanglement. [13] Yale University scientists have reached a milestone in their efforts to extend the durability and dependability of quantum information. [12] Using lasers to make data storage faster than ever. [11] Some three-dimensional materials can exhibit exotic properties that only exist in "lower" dimensions.
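One plausible way to make "search engine entropy" concrete, offered here as a hedged illustration rather than the paper's exact definition, is the Shannon entropy of the distribution of user visits across the websites a search engine surfaces: low entropy means traffic is funnelled to a few sites, high entropy means it is spread broadly. The visit counts below are made-up.

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a discrete distribution given as counts."""
    total = sum(counts)
    probabilities = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probabilities)

# Made-up visit counts across five sites surfaced by two hypothetical engines.
concentrated = [900, 50, 25, 15, 10]      # traffic dominated by one site
diversified = [220, 210, 200, 190, 180]   # traffic spread across many sites

print(round(shannon_entropy(concentrated), 2))  # low entropy, about 0.64 bits
print(round(shannon_entropy(diversified), 2))   # near log2(5), about 2.32 bits
```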
Category: Artificial Intelligence

[625] viXra:1810.0377 [pdf] submitted on 2018-10-22 07:35:12

Algorithm Predict LED Materials

Authors: George Rajna
Comments: 52 Pages.

Researchers from the University of Houston have devised a new machine learning algorithm that is efficient enough to run on a personal computer and predict the properties of more than 100,000 compounds in search of those most likely to be efficient phosphors for LED lighting. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20]
Category: Artificial Intelligence

[624] viXra:1810.0376 [pdf] submitted on 2018-10-22 08:06:06

AI and Human Creativity

Authors: George Rajna
Comments: 49 Pages.

The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19]
Category: Artificial Intelligence

[623] viXra:1810.0361 [pdf] submitted on 2018-10-23 01:45:49

AI Carry out Experiments

Authors: George Rajna
Comments: 40 Pages.

There's plenty of speculation about what artificial intelligence, or AI, will look like in the future, but researchers from The Australian National University (ANU) are already harnessing its power. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20]
Category: Artificial Intelligence

[622] viXra:1810.0347 [pdf] submitted on 2018-10-21 19:58:21

The Teleonomic Purpose of the Human Species (a Secular Discussion, Regarding Artificial General Intelligence)

Authors: Jordan Micah Bennett
Comments: 8 Pages.

This work concerns a hypothesis regarding a teleonomic description of the non-trivial purpose of the human species. Teleonomy is a recent concept (with contributions from Richard Dawkins) that entails purpose in the context of objectivity/science, rather than in the context of subjectivity/deities. Teleonomy ought not to be confused with the teleological argument, which is a religious/subjective concept contrary to teleonomy, a scientific/objective concept. As such, this work concerns principles in entropy. This hypothesis was originally proposed on ResearchGate in 2015.
Category: Artificial Intelligence

[621] viXra:1810.0345 [pdf] submitted on 2018-10-21 22:14:41

Cosmological Natural Selection AI

Authors: Jordan Micah Bennett
Comments: 4 Pages. Author website: folioverse.appspot.com

Notably, this short paper concerns a non-serious thought experiment/statement made within the scope of a serious hypothesis of mine regarding the scientific purpose of the human species, in tandem with Cosmological Natural Selection I (CNS I). It may thus be considered an aside with respect to the aforesaid serious hypothesis, while separately including thinking in relation to CNS I.
Category: Artificial Intelligence

[620] viXra:1810.0302 [pdf] submitted on 2018-10-20 04:10:30

Interactions in Molecules Using AI

Authors: George Rajna
Comments: 50 Pages.

Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[619] viXra:1810.0246 [pdf] submitted on 2018-10-15 07:27:43

Quantum Computers with Machine Learning

Authors: George Rajna
Comments: 41 Pages.

But researchers at Purdue University are working on a solution, combining quantum algorithms with classical computing on small-scale quantum computers to speed up database accessibility. [23] Researchers at the University of Twente, working with colleagues at the Technical Universities of Delft and Eindhoven, have successfully developed a new and interesting building block. [22] Researchers at the Institut d'Optique Graduate School at the CNRS and Université Paris-Saclay in France have used a laser-based technique to rearrange cold atoms one-by-one into fully ordered 3D patterns. [21] Reduced entropy in a three-dimensional lattice of super-cooled, laser-trapped atoms could help speed progress toward creating quantum computers. [20] Under certain conditions, an atom can cause other atoms to emit a flash of light. At TU Wien (Vienna), this quantum effect has now been measured. [19] A recent discovery by William & Mary and University of Michigan researchers transforms our understanding of one of the most important laws of modern physics. [18] Now, a team of physicists from The University of Queensland and the NÉEL Institute has shown that, as far as quantum physics is concerned, the chicken and the egg can both come first. [17] In 1993, physicist Lucien Hardy proposed an experiment showing that there is a small probability (around 6-9%) of observing a particle and its antiparticle interacting with each other without annihilating—something that is impossible in classical physics. [16] Scientists at the University of Geneva (UNIGE), Switzerland, recently reengineered their data processing, demonstrating that 16 million atoms were entangled in a one-centimetre crystal. [15]
Category: Artificial Intelligence

[618] viXra:1810.0245 [pdf] submitted on 2018-10-15 07:46:43

AI of Single Molecules in Cells

Authors: George Rajna
Comments: 42 Pages.

A research team centered at Osaka University, in collaboration with RIKEN, has developed a system that can overcome these difficulties by automatically searching for, focusing on, imaging, and tracking single molecules within living cells. [24] But researchers at Purdue University are working on a solution, combining quantum algorithms with classical computing on small-scale quantum computers to speed up database accessibility. [23] Researchers at the University of Twente, working with colleagues at the Technical Universities of Delft and Eindhoven, have successfully developed a new and interesting building block. [22] Researchers at the Institut d'Optique Graduate School at the CNRS and Université Paris-Saclay in France have used a laser-based technique to rearrange cold atoms one-by-one into fully ordered 3D patterns. [21] Reduced entropy in a three-dimensional lattice of super-cooled, laser-trapped atoms could help speed progress toward creating quantum computers. [20] Under certain conditions, an atom can cause other atoms to emit a flash of light. At TU Wien (Vienna), this quantum effect has now been measured. [19] A recent discovery by William & Mary and University of Michigan researchers transforms our understanding of one of the most important laws of modern physics. [18] Now, a team of physicists from The University of Queensland and the NÉEL Institute has shown that, as far as quantum physics is concerned, the chicken and the egg can both come first. [17] In 1993, physicist Lucien Hardy proposed an experiment showing that there is a small probability (around 6-9%) of observing a particle and its antiparticle interacting with each other without annihilating—something that is impossible in classical physics. [16]
Category: Artificial Intelligence

[617] viXra:1810.0243 [pdf] submitted on 2018-10-15 10:03:13

Analog Information AI System

Authors: George Rajna
Comments: 43 Pages.

A NIMS research group has invented an ionic device, termed an ionic decision-maker, capable of quickly making its own decisions based on previous experience using changes in ionic/molecular concentrations. [25] A research team centered at Osaka University, in collaboration with RIKEN, has developed a system that can overcome these difficulties by automatically searching for, focusing on, imaging, and tracking single molecules within living cells. [24] But researchers at Purdue University are working on a solution, combining quantum algorithms with classical computing on small-scale quantum computers to speed up database accessibility. [23] Researchers at the University of Twente, working with colleagues at the Technical Universities of Delft and Eindhoven, have successfully developed a new and interesting building block. [22] Researchers at the Institut d'Optique Graduate School at the CNRS and Université Paris-Saclay in France have used a laser-based technique to rearrange cold atoms one-by-one into fully ordered 3D patterns. [21] Reduced entropy in a three-dimensional lattice of super-cooled, laser-trapped atoms could help speed progress toward creating quantum computers. [20] Under certain conditions, an atom can cause other atoms to emit a flash of light. At TU Wien (Vienna), this quantum effect has now been measured. [19] A recent discovery by William & Mary and University of Michigan researchers transforms our understanding of one of the most important laws of modern physics. [18] Now, a team of physicists from The University of Queensland and the NÉEL Institute has shown that, as far as quantum physics is concerned, the chicken and the egg can both come first. [17]
Category: Artificial Intelligence

[616] viXra:1810.0139 [pdf] submitted on 2018-10-09 21:37:41

Supersymmetric Artificial Neural Network

Authors: Jordan Micah Bennett
Comments: 12 Pages. Author Email: jordanmicahbennett@gmail.com Author Website: folioverse.appspot.com

Babies are a non-trivial example of a biological basis that can reasonably be used to inspire smart algorithms, and hence artificial general intelligence. The "Supersymmetric Artificial Neural Network" in deep learning (denoted φ(x, θ, θ̄)⊤w) espouses the importance of considering biological constraints when developing general machine learning models; pertinently, babies' brains are observed to be pre-equipped with particular "physics priors", namely the ability to intuitively know laws of physics while learning by reinforcement. The phrase "intuitively know laws of physics" should not be confused with Nobel-laureate- or physics-undergraduate-level babies that, for example, write or understand physics papers and exams; it simply conveys that babies' brains come pre-baked with ways to naturally exercise physics-based expectations in their interactions with objects in the world, as indicated by Aimee Stahl and Lisa Feigenson. Notably, the importance of recognizing underlying causal physics laws in learning models (although not via supermanifolds, as encoded in the "Supersymmetric Artificial Neural Network") has recently been demonstrated by DeepMind (see "Neuroscience-Inspired Artificial Intelligence") and separately emphasized by Yoshua Bengio (see the "Consciousness Prior"). Physics-based object detectors like "Uetorch" use pooling to gain translation invariance over objects, so that the model learns regardless of where the object is positioned in the image, whereas reinforcement models like "AtariQLearner" exclude pooling, because Q-learning must apply to the changing pixel positions of objects and therefore requires translation variance. Babies appear able to do both. Models that can deliver both translation invariance and translation variance at the same time, i.e. disentangled factors of variation, are manifold learning frameworks (Bengio et al. ...). Given that cognitive science may be used to constrain machine learning models (much as firms like DeepMind often use cognitive science as a boundary on the deep learning models they produce), the "Supersymmetric Artificial Neural Network" is a uniquely disentanglable model constrained by cognitive science in the direction of supermanifolds (see "Supersymmetric methods ... at brain scale", Perez et al.), instead of the state-of-the-art manifold work of other authors (such as Bengio et al., LeCun et al. or Michael Bronstein et al.). As such, the "Supersymmetric Artificial Neural Network" is yet another way to represent richer values in the weights of the model, because supersymmetric values allow more information to be captured about the input space. For example, supersymmetric systems can capture potential-partner signals, which lie beyond the feature space of the magnitude and phase signals learnt in typical real-valued neural nets and deep complex neural networks respectively. Looking at the progression of "solution geometries", going from SO(n) representations (such as Perceptron-like models) to SU(n) representations (such as unitary RNNs) has guaranteed richer and richer representations in the weight space of the artificial neural network, and hence better and better hypotheses could be generated.
The Supersymmetric Artificial Neural Network explores a natural next step, namely SU(m|n) representation. These supersymmetric biological brain representations (Perez et al.) can be expressed in supercharge-compatible special unitary notation SU(m|n), or φ(x, θ, θ̄)⊤w parameterized by θ and θ̄, which are supersymmetric directions, unlike the θ seen in the typical non-supersymmetric deep learning model. Notably, supersymmetric values can encode or represent more information than the typical deep learning model, in terms of "partner potential" signals for example.
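As a concrete illustration of the pooling argument above (and only of that argument, not of the supersymmetric φ(x, θ, θ̄)⊤w model itself), the following minimal NumPy sketch shows why a global max-pool is translation-invariant while a flattened pixel vector is translation-variant; the image size and the object are made up for illustration.

# Minimal NumPy sketch: illustrates only the pooling point quoted in the
# abstract, i.e. a global max-pool discards position (translation invariance)
# while a flattened vector keeps it (translation variance).
import numpy as np

def make_image(row, col, size=8):
    """An 8x8 'image' containing a single 2x2 object at (row, col)."""
    img = np.zeros((size, size))
    img[row:row + 2, col:col + 2] = 1.0
    return img

def global_max_pool(feature_map):
    """Collapse the spatial grid to one number, discarding position."""
    return feature_map.max()

original = make_image(1, 1)
shifted = make_image(5, 4)   # same object, different position

# Pooling gives translation invariance: the pooled outputs agree.
print(global_max_pool(original) == global_max_pool(shifted))    # True

# Flattening keeps translation variance: the vectors differ, which is
# what a Q-learner acting on object positions in pixels needs.
print(np.array_equal(original.ravel(), shifted.ravel()))        # False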
Category: Artificial Intelligence

[615] viXra:1810.0110 [pdf] submitted on 2018-10-07 12:53:48

Machine Learning Heart Picture

Authors: George Rajna
Comments: 37 Pages.

To meet that demand, IBM researchers in Australia are using POWER9 systems, with Nvidia Tesla V100 graphics processing units (GPUs), to perform hemodynamic simulations for vFFR-based diagnosis within one to two minutes. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[614] viXra:1810.0100 [pdf] submitted on 2018-10-08 05:03:15

Cooperation for Vehicular Delay Tolerant Network

Authors: Adnan Muhammad
Comments: 8 Pages.

This article reviews the literature related to Vehicular Delay Tolerant Networks (VDTNs), with a focus on cooperation. It begins by examining definitions of some of the fields of research in VDTN, and then presents an overview of VDTNs with cooperative networks.
Category: Artificial Intelligence

[613] viXra:1810.0097 [pdf] submitted on 2018-10-06 07:55:21

AI Person Under the Law

Authors: George Rajna
Comments: 38 Pages.

Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21]
Category: Artificial Intelligence

[612] viXra:1810.0094 [pdf] submitted on 2018-10-06 14:03:52

Regression in Wireless Sensor Networks

Authors: Muhammad Kashif Ghumman, Tauseef Jamal
Comments: 8 Pages. DCIS0710

In a WSN, the main purpose of regression is to locate nodes by prediction on the basis of sensor readings. This article explains the concept of regression from a WSN perspective and, on the basis of these concepts, develops a clustering of nodes through multi-linear regression by combining the idea of locating nodes through regression with the use of node parameters in the multilinear regression formula.
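As a hedged sketch of this kind of multi-linear regression localization, the following NumPy example fits node coordinates to per-anchor readings from nodes whose positions are known, then predicts the positions of new nodes. The anchor layout, the RSSI-style readings and all constants are assumptions made for illustration, not details taken from the article.

# Sketch: locate WSN nodes with multi-linear regression. Fit
# coordinates = readings @ B on reference nodes, then predict the rest.
# The synthetic readings and anchor layout below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

def readings_for(positions):
    """Synthetic per-anchor readings that decay with distance (plus noise)."""
    d = np.linalg.norm(positions[:, None, :] - anchors[None, :, :], axis=2)
    return -10.0 * np.log10(d + 1.0) + 0.1 * rng.standard_normal(d.shape)

known_pos = rng.uniform(0, 10, size=(30, 2))                  # reference nodes
X = np.c_[np.ones(len(known_pos)), readings_for(known_pos)]   # add intercept
B, *_ = np.linalg.lstsq(X, known_pos, rcond=None)             # multi-linear fit

new_pos = rng.uniform(0, 10, size=(5, 2))                     # nodes to localize
X_new = np.c_[np.ones(len(new_pos)), readings_for(new_pos)]
print(np.round(X_new @ B, 2))   # approximate predicted coordinates
print(np.round(new_pos, 2))     # ground truth, for comparison

Under these assumptions the fitted coefficients play the role of the node parameters in the multilinear formula, and the predicted coordinates could then feed the clustering step the article describes.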
Category: Artificial Intelligence

[611] viXra:1810.0060 [pdf] submitted on 2018-10-06 04:55:16

Universal Forecasting Scheme-New

Authors: Ramesh Chandra Bagadi
Comments: 4 Pages.

In this research investigation, the author has detailed a novel method of forecasting.
Category: Artificial Intelligence

[610] viXra:1810.0050 [pdf] submitted on 2018-10-04 15:08:20

Refutation of Another Neutrosophic Genetic Algorithm

Authors: Colin James III
Comments: 1 Page. © Copyright 2018 by Colin James III All rights reserved. Respond to the author by email at: info@ersatz-systems dot com.

The instant neutrosophic intelligent system based on genetic algorithm is not confirmed.
Category: Artificial Intelligence

[609] viXra:1810.0042 [pdf] submitted on 2018-10-03 07:06:39

A Novel Approach for Classifying MANET Attacks with a Neutrosophic Intelligent System Based on a Genetic Algorithm

Authors: Haitham Elwahsh, Mona Gamal, A. A. Salama, I. M. El-Henawy
Comments: 10 Pages.

Designing an effective intrusion detection system (IDS) for Mobile Ad Hoc Networks (MANETs) has recently become a requirement because of the amount of indeterminacy and doubt that exists in that environment. Neutrosophic systems are a discipline that provides a mathematical formulation for the indeterminacy found in such complex situations. Neutrosophic rules compute with symbols instead of numeric values, making them a good basis for symbolic reasoning. These symbols should be carefully designed, as they form the proposition base for the neutrosophic rules (NR) in the IDS. In a neutrosophic system, each attack is characterized by membership, non-membership, and indeterminacy degrees. This research proposes MANET attack inference using a hybrid framework of Self-Organizing Feature Maps (SOFM) and genetic algorithms (GA). The hybrid exploits the unsupervised learning capabilities of the SOFM to define the MANET neutrosophic conditional variables. The neutrosophic variables, along with the training data set, are fed into the genetic algorithm to find the fittest neutrosophic rule set from a number of initial sub-attacks according to the fitness function. The method is designed to detect unknown attacks in MANETs. The simulations and experiments are conducted on the KDD-99 network attack data available in the UCI machine-learning repository for further processing in knowledge discovery. The experiments confirmed the feasibility of the proposed hybrid, with an average accuracy of 99.3608%, which is more accurate than other IDS found in the literature.
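As a heavily hedged sketch of just the genetic-algorithm stage of such a hybrid (the SOFM stage and the neutrosophic membership, non-membership and indeterminacy degrees are not modelled here), the following example evolves bit masks over a bank of candidate sub-attack rules toward higher detection accuracy. The rule bank, the fitness function and all constants are placeholders, not the authors' design.

# Hedged sketch of the GA stage only: each individual is a bit mask over
# candidate sub-attack rules; fitness stands in for detection accuracy.
import random

random.seed(0)
N_RULES, POP, GENERATIONS = 12, 20, 30

def accuracy(mask):
    """Placeholder fitness: pretend rules 0, 3 and 7 catch real attacks
    and every extra enabled rule costs a little precision."""
    useful = {0, 3, 7}
    hits = sum(1 for i in useful if mask[i])
    extras = sum(mask) - hits
    return hits / len(useful) - 0.02 * extras

def crossover(a, b):
    cut = random.randrange(1, N_RULES)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in mask]

population = [[random.randint(0, 1) for _ in range(N_RULES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=accuracy, reverse=True)        # rank by fitness
    parents = population[:POP // 2]                     # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=accuracy)
print(best, round(accuracy(best), 3))   # fittest rule mask and its score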
Category: Artificial Intelligence

[608] viXra:1810.0033 [pdf] submitted on 2018-10-04 04:39:53

Brain-Inspired AI Architecture

Authors: George Rajna
Comments: 36 Pages.

IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[607] viXra:1810.0013 [pdf] submitted on 2018-10-01 07:19:49

Machine Learning Helps Photonic Applications

Authors: George Rajna
Comments: 61 Pages.

Photonic nanostructures can be used for many applications besides solar cells—for example, optical sensors for cancer markers or other biomolecules. [36] Microelectromechanical systems (MEMS) have expansive applications in biotechnology and advanced engineering with growing interest in materials science and engineering due to their potential in emerging systems. [35] Researchers at Griffith University working with Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) have unveiled a stunningly accurate technique for scientific measurements which uses a single atom as the sensor, with sensitivity down to 100 zeptoNewtons. [34] Researchers at the Center for Quantum Nanoscience within the Institute for Basic Science (IBS) have made a major breakthrough in controlling the quantum properties of single atoms. [33] A team of researchers from several institutions in Japan has described a physical system that can be described as existing above "absolute hot" and also below absolute zero. [32] A silicon-based quantum computing device could be closer than ever due to a new experimental device that demonstrates the potential to use light as a messenger to connect quantum bits of information—known as qubits—that are not immediately adjacent to each other. [31] Researchers at the University of Bristol's Quantum Engineering Technology Labs have demonstrated a new type of silicon chip that can help building and testing quantum computers and could find their way into your mobile phone to secure information. [30] Theoretical physicists propose to use negative interference to control heat flow in quantum devices. [29] Particle physicists are studying ways to harness the power of the quantum realm to further their research. [28]
Category: Artificial Intelligence

[606] viXra:1809.0535 [pdf] submitted on 2018-09-27 02:00:51

Procrastinative Reinforcement Learning

Authors: Joy Chopra, Sandipan Haldar
Comments: 1 Page.

We propose using procrastination to prepare the agent for emergency situations and to enable it to learn to finish work over shorter horizons. This can be done by regulating the discount factor, or by making the agent explore for most of the episode and take exploitative actions near the end. We will finish the rest of this paper very soon.
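The following is a minimal sketch of one reading of this proposal: the agent explores for most of the episode and acts greedily only near the end, and the discount factor appears as the other knob, since lowering it shortens the effective horizon. The Q-values, horizon and thresholds are placeholders, not taken from the paper.

# Sketch of "procrastinative" action selection plus the discount-factor knob.
import random

random.seed(0)
HORIZON, GREEDY_FRACTION, N_ACTIONS = 100, 0.2, 4
q_values = {0: [random.random() for _ in range(N_ACTIONS)]}   # toy Q-table

def select_action(state, t):
    """Explore until the last GREEDY_FRACTION of the episode, then exploit."""
    if t < (1.0 - GREEDY_FRACTION) * HORIZON:
        return random.randrange(N_ACTIONS)        # procrastinate: keep exploring
    return max(range(N_ACTIONS), key=lambda a: q_values[state][a])

def discounted_return(rewards, gamma):
    """Lowering gamma is the other knob: it shortens the effective horizon."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

actions = [select_action(state=0, t=t) for t in range(HORIZON)]
print(actions[:8], actions[-8:])                        # random early, greedy late
print(discounted_return([1.0] * HORIZON, gamma=0.99))   # ~63: far rewards matter
print(discounted_return([1.0] * HORIZON, gamma=0.50))   # ~2: only the near term matters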
Category: Artificial Intelligence

[605] viXra:1809.0534 [pdf] submitted on 2018-09-27 02:08:06

Rebellious Reinforcement Learning

Authors: Joy Chopra
Comments: 1 Page.

Actor-critic methods have shown good performance in the reinforcement learning domain. We propose using a rebellious policy, i.e. taking the action with the minimum Q value, to enhance exploration and to let the critic understand why other actions are good, or maybe not. We will now propose the rest of the paper. NO we will not.
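A minimal sketch of one way to implement such a rebellious policy: with some probability the agent takes the action with the minimum Q value, so the critic sees the consequences of supposedly bad actions, and otherwise it acts greedily. The Q-values and the rebellion rate are illustrative assumptions.

# Sketch of rebellious action selection over placeholder critic estimates.
import random

random.seed(1)
N_ACTIONS = 4
q = [0.1, 0.7, 0.3, 0.05]          # placeholder Q-values for one state

def rebellious_action(q_values, rebellion=0.2):
    """With probability `rebellion`, pick the worst action; else the best."""
    if random.random() < rebellion:
        return min(range(len(q_values)), key=q_values.__getitem__)  # rebel
    return max(range(len(q_values)), key=q_values.__getitem__)      # greedy

counts = [0] * N_ACTIONS
for _ in range(1000):
    counts[rebellious_action(q)] += 1
print(counts)   # mostly the best action (index 1), plus a rebellious share of the worst (index 3)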
Category: Artificial Intelligence

[604] viXra:1809.0510 [pdf] submitted on 2018-09-24 09:05:05

AI Create 100,000 New Tunes

Authors: George Rajna
Comments: 46 Pages.

"It will be interesting to see if this collection is used to train future generations of computer models," Sturm says. [27] Now, a team of A*STAR researchers and colleagues has developed a detector that can successfully pick out where human actions will occur in videos, in almost real-time. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20]
Category: Artificial Intelligence

[603] viXra:1809.0507 [pdf] submitted on 2018-09-24 10:09:22

Chip Up AI Performance

Authors: George Rajna
Comments: 48 Pages.

Princeton researchers, in collaboration with Analog Devices Inc., have fabricated a chip that markedly boosts the performance and efficiency of neural networks—computer algorithms modeled on the workings of the human brain. [28] "It will be interesting to see if this collection is used to train future generations of computer models," Sturm says. [27] Now, a team of A*STAR researchers and colleagues has developed a detector that can successfully pick out where human actions will occur in videos, in almost real-time. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[602] viXra:1809.0506 [pdf] submitted on 2018-09-24 10:28:49

Sensor Surface on Robot Skin

Authors: George Rajna
Comments: 36 Pages.

Robots will be able to conduct a wide variety of tasks as well as humans if they can be given tactile sensing capabilities. [25] A new type of artificial-intelligence-driven chemistry could revolutionise the way molecules are discovered, scientists claim. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extracts from them the essential information needed to understand the underlying physics. [18] Researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[601] viXra:1809.0499 [pdf] submitted on 2018-09-25 04:53:00

AI Improve Drug Combination

Authors: George Rajna
Comments: 44 Pages.

A new auto-commentary published in SLAS Technology looks at how an emerging area of artificial intelligence, specifically the analysis of small systems-of-interest specific datasets, can be used to improve drug development and personalized medicine. [25] And if that isn't surprising enough, try this one: in many cases, doctors have no idea what side effects might arise from adding another drug to a patient's personal pharmacy. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: a U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[600] viXra:1809.0473 [pdf] submitted on 2018-09-22 08:31:02

Neural Networks Identify Neutrinoless Double Beta Decay

Authors: George Rajna
Comments: 85 Pages.

The work will help to improve the sensitivity of detection for the PandaX-III neutrinoless double beta decay experiment, and deepen our knowledge of the nature of neutrinos. [45] The interactions of quarks and gluons are computed using lattice quantum chromodynamics (QCD)—a computer-friendly version of the mathematical framework that describes these strong-force interactions. [44] The building blocks of matter in our universe were formed in the first 10 microseconds of its existence, according to the currently accepted scientific picture. [43] In a recent experiment at the University of Nebraska–Lincoln, plasma electrons in the paths of intense laser light pulses were almost instantly accelerated close to the speed of light. [42] Plasma particle accelerators more powerful than existing machines could help probe some of the outstanding mysteries of our universe, as well as make leaps forward in cancer treatment and security scanning—all in a package that's around a thousandth of the size of current accelerators. [41] The Department of Energy's SLAC National Accelerator Laboratory has started to assemble a new facility for revolutionary accelerator technologies that could make future accelerators 100 to 1,000 times smaller and boost their capabilities. [40]
Category: Artificial Intelligence

[599] viXra:1809.0437 [pdf] submitted on 2018-09-19 10:43:44

Image Analysis with Deep Learning

Authors: George Rajna
Comments: 47 Pages.

IBM researchers are applying deep learning to discover ways to overcome some of the technical challenges that AI can face when analyzing X-rays and other medical images. [27] Now, a team of A*STAR researchers and colleagues has developed a detector that can successfully pick out where human actions will occur in videos, in almost real-time. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[598] viXra:1809.0364 [pdf] submitted on 2018-09-18 11:24:53

Idealistic Neural Networks

Authors: Tofara Moyo
Comments: 3 Pages.

I describe an Artificial Neural Network in which words are mapped to individual neurons instead of being treated as variables fed into a network. The process of changing training cases is equivalent to a Dropout procedure in which we replace some (or all) of the words/neurons of the previous training case with new ones. Each neuron/word then takes as input all the b weights of the other neurons and weights them with its personal a weight. To learn, the network uses the backpropagation algorithm after calculating an error from the output of an output neuron, which is a traditional neuron. The network therefore has a unique topology and functions with no inputs. We use coordinate gradient descent to learn, alternating between training the a weights of the words and the b weights. The Idealistic Neural Network is an extremely shallow network that can represent non-linear complexity in a linear outfit.
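The following is a heavily hedged NumPy sketch of one reading of this architecture: each active word is a neuron with scalar weights a_i and b_i, neuron i outputs a_i times the sum of the other neurons' b weights, a traditional output neuron combines these, and training alternates coordinate gradient-descent steps on the a weights and the b weights. The loss, learning rate and output weights are assumptions not specified in the abstract.

# Hedged sketch of the word-neuron network with alternating a/b updates.
import numpy as np

rng = np.random.default_rng(0)
n_words, lr, target = 5, 0.05, 1.0
a = rng.standard_normal(n_words)        # personal a weight per word/neuron
b = rng.standard_normal(n_words)        # b weight exposed to the other neurons
w = rng.standard_normal(n_words)        # traditional output neuron's weights (assumed)

def forward(a, b):
    h = a * (b.sum() - b)               # each neuron weights the other b's by its a
    return h, np.tanh(w @ h)            # traditional output neuron (assumed tanh)

for step in range(200):
    h, y = forward(a, b)
    dy = 2.0 * (y - target) * (1.0 - y ** 2)    # d(squared error)/d(pre-activation)
    if step % 2 == 0:                            # coordinate step on the a weights
        a -= lr * dy * w * (b.sum() - b)
    else:                                        # coordinate step on the b weights
        grad_h = dy * w                          # dL/dh_i
        b -= lr * (grad_h @ a - grad_h * a)      # since dh_i/db_j = a_i for j != i
    if step % 50 == 0:
        print(step, round(float((y - target) ** 2), 4))   # squared error so far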
Category: Artificial Intelligence

[597] viXra:1809.0357 [pdf] submitted on 2018-09-17 12:53:02

Machine Learning Human Cell

Authors: George Rajna
Comments: 34 Pages.

Scientists at the Allen Institute have used machine learning to train computers to see parts of the cell the human eye cannot easily distinguish. [21] Small angle X-ray scattering (SAXS) is one of a number of biophysical techniques used for determining the structural characteristics of biomolecules. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
Category: Artificial Intelligence

[596] viXra:1809.0354 [pdf] submitted on 2018-09-17 13:17:17

AI can Tell if Restaurant Review Fake

Authors: George Rajna
Comments: 47 Pages.

Researchers find AI-generated reviews and comments pose a significant threat to consumers, but machine learning can help detect the fakes. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18]
Category: Artificial Intelligence

[595] viXra:1809.0258 [pdf] submitted on 2018-09-12 10:29:29

AI-Based Robots and Drones

Authors: George Rajna
Comments: 32 Pages.

What if a parent could feel safe allowing a drone to walk their child to the bus stop? [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[594] viXra:1809.0242 [pdf] submitted on 2018-09-11 12:59:41

Deep-See Images with AI

Authors: George Rajna
Comments: 46 Pages.

The evaluation of very large amounts of data is becoming increasingly relevant in ocean research. [27] An LMU study now shows that new algorithms allow interactions in the atmosphere to be modeled more rapidly without loss of reliability. [26] Progress on new artificial intelligence (AI) technology could make monitoring at water treatment plants cheaper and easier and help safeguard public health. [25] And if that isn't surprising enough, try this one: in many cases, doctors have no idea what side effects might arise from adding another drug to a patient's personal pharmacy. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: a U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[593] viXra:1809.0212 [pdf] submitted on 2018-09-10 09:58:43

AI Climate Computation

Authors: George Rajna
Comments: 45 Pages.

An LMU study now shows that new algorithms allow interactions in the atmosphere to be modeled more rapidly without loss of reliability. [26] Progress on new artificial intelligence (AI) technology could make monitoring at water treatment plants cheaper and easier and help safeguard public health. [25] And if that isn't surprising enough, try this one: in many cases, doctors have no idea what side effects might arise from adding another drug to a patient's personal pharmacy. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: a U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[592] viXra:1809.0190 [pdf] submitted on 2018-09-11 02:49:13

Thoughts About Thinking

Authors: Lev I. Verkhovsky
Comments: 11 Pages. The article in Russian

A geometric model illustrating the basic mechanisms of thinking, both logical and intuitive, is proposed. Human thinking and the problems of creating artificial intelligence are discussed. Although the article was published in the Russian popular-science journal «Chemistry and Life» in 1989 (No. 7), according to the author it is not obsolete. The article is in Russian.
Category: Artificial Intelligence

[591] viXra:1809.0136 [pdf] submitted on 2018-09-06 06:57:56

Machine Learning Material Spectra

Authors: George Rajna
Comments: 41 Pages.

Use of big data analysis techniques has been attracting attention in materials science applications, and researchers at The University of Tokyo Institute of Industrial Science realized that such techniques could be used to interpret much larger numbers of spectra than traditional approaches. [25] Researchers have mathematically proven that a powerful classical machine learning algorithm should work on quantum computers. [24] Researchers at Oregon State University have used deep learning to decipher which ribonucleic acids have the potential to encode proteins. [23] A new method allows researchers to systematically identify specialized proteins that unpack DNA inside the nucleus of a cell, making the usually dense DNA more accessible for gene expression and other functions. [22] Bacterial systems are some of the simplest and most effective platforms for the expression of recombinant proteins. [21] Now, in a new paper published in Nature Structural & Molecular Biology, Mayo researchers have determined how one DNA repair protein gets to the site of DNA damage. [20] A microscopic thread of DNA evidence in a public genealogy database led California authorities to declare this spring they had caught the Golden State Killer, the rapist and murderer who had eluded authorities for decades. [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18] An LMU team now reveals the inner workings of a molecular motor made of proteins which packs and unpacks DNA. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15]
Category: Artificial Intelligence

[590] viXra:1809.0101 [pdf] submitted on 2018-09-06 03:14:06

Machine Learning Predicts Metabolism

Authors: George Rajna
Comments: 33 Pages.

Machine learning algorithms that can predict yeast metabolism from its protein content have been developed by scientists at the Francis Crick Institute. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[589] viXra:1809.0033 [pdf] submitted on 2018-09-03 06:34:04

A Novel Representation Of A Natural Number, A Set Of Natural Numbers And One Step Growth Of Any Natural Number Represented By Primality Trees (Version 2)

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author presents a novel representation of any natural number as a Primality Tree, as well as a novel representation of a given set of natural numbers. Finally, the author presents a novel representation of the one-step growth of any natural number, and of any set of natural numbers, as a Primality Tree.
Category: Artificial Intelligence

[588] viXra:1809.0007 [pdf] submitted on 2018-09-01 03:59:47

AI Meets Your Shopping Experience

Authors: George Rajna
Comments: 47 Pages.

This shift from reactive to predictive marketing could change the way you shop, bringing you suggestions you perhaps never even considered, all possible because of AI-related opportunities for both retailers and their customers. [27] Now, a team of A*STAR researchers and colleagues has developed a detector that can successfully pick out where human actions will occur in videos, in almost real-time. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[587] viXra:1808.0688 [pdf] submitted on 2018-08-31 07:50:44

Deep Learning Human Activities

Authors: George Rajna
Comments: 44 Pages.

Now, a team of A*STAR researchers and colleagues has developed a detector that can successfully pick out where human actions will occur in videos, in almost real-time. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18]
Category: Artificial Intelligence

[586] viXra:1808.0686 [pdf] submitted on 2018-08-31 08:07:08

AI Exploration of Underwater Habitats

Authors: George Rajna
Comments: 45 Pages.

Researchers aboard Schmidt Ocean Institute's research vessel Falkor used autonomous underwater robots, along with the Institute's remotely operated vehicle (ROV) SuBastian, to acquire 1.3 million high resolution images of the seafloor at Hydrate Ridge, composing them into the largest known high resolution color 3D model of the seafloor. [27] Now, a team of A*STAR researchers and colleagues has developed a detector that can successfully pick out where human actions will occur in videos, in almost real-time. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[585] viXra:1808.0680 [pdf] submitted on 2018-08-31 13:02:32

High-Accuracy Inference in Neuromorphic Circuits using Hardware-Aware Training

Authors: Borna Obradovic, Titash Rakshit, Ryan Hatcher, Jorge A. Kittl, Mark S. Rodder
Comments: 12 pages, 18 figures

Neuromorphic Multiply-And-Accumulate (MAC) circuits utilizing synaptic weight elements based on SRAM or novel Non-Volatile Memories (NVMs) provide a promising approach for highly efficient hardware representations of neural networks. NVM density and robustness requirements suggest that off-line training is the right choice for "edge" devices, since the requirements for synapse precision are much less stringent. However, off-line training using ideal mathematical weights and activations can result in significant loss of inference accuracy when applied to non-ideal hardware. Non-idealities such as multi-bit quantization of weights and activations, non-linearity of weights, finite max/min ratios of NVM elements, and asymmetry of positive and negative weight components all result in degraded inference accuracy. In this work, it is demonstrated that non-ideal Multi-Layer Perceptron (MLP) architectures using low bitwidth weights and activations can be trained with negligible loss of inference accuracy relative to their Floating Point-trained counterparts using a proposed off-line, continuously differentiable HW-aware training algorithm. The proposed algorithm is applicable to a wide range of hardware models, and uses only standard neural network training methods. The algorithm is demonstrated on the MNIST and EMNIST datasets, using standard MLPs.
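As an illustration of the hardware non-idealities listed above, and explicitly not of the authors' continuously differentiable HW-aware training algorithm, the following sketch clips weights to a finite max/min ratio, uniformly quantizes weights and activations to a few bits, and compares the resulting MAC output with the floating-point result. The bit-widths, the ratio and the layer sizes are made up for illustration.

# Hedged illustration of low-bitwidth quantization and a finite max/min
# ratio applied to a toy MAC (matrix-vector) inference pass.
import numpy as np

rng = np.random.default_rng(0)

def quantize(values, bits=4, max_min_ratio=32.0):
    """Clip magnitudes to [max/ratio, max], then uniformly quantize to `bits`."""
    v_max = np.abs(values).max()
    v_min = v_max / max_min_ratio
    clipped = np.sign(values) * np.clip(np.abs(values), v_min, v_max)
    step = 2 * v_max / (2 ** bits - 1)
    return np.round(clipped / step) * step

x = rng.standard_normal(16)                      # toy activations
W = rng.standard_normal((8, 16)) * 0.3           # toy layer weights

ideal = np.maximum(W @ x, 0.0)                   # floating-point MAC + ReLU
hw = np.maximum(quantize(W, bits=4) @ quantize(x, bits=4), 0.0)  # "hardware" pass

print("mean abs deviation:", round(float(np.mean(np.abs(ideal - hw))), 4))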
Category: Artificial Intelligence

[584] viXra:1808.0674 [pdf] submitted on 2018-08-31 23:37:56

Which Virtual Personal Assistant Understands Better? Siri, Alexa, or Cortana?

Authors: Ahmed Alqurashi
Comments: 12 Pages.

The purpose of this experiment is to compare the abilities and understanding of virtual personal assistants (VPAs) and to investigate which of three of them, Alexa, Siri, or Cortana, understands users better. These virtual assistants help people and make their lives easier by answering questions and performing digital actions through voice queries. In this experiment, I asked each virtual personal assistant fifty-seven questions across seven categories. The results of this project will help users understand that these virtual personal assistants are different pieces of software and see which one of them performs better, and so will help them decide which device they prefer to buy, since a VPA is one of the main features of today's personal devices.
Category: Artificial Intelligence

[583] viXra:1808.0610 [pdf] submitted on 2018-08-27 07:02:11

The Complexity of Student-Project-Resource Matching-Allocation Problems

Authors: Anisse Ismaili
Comments: 6 Pages.

In this technical note, I settle the computational complexity of nonwastefulness and stability in student-project-resource matching-allocation problems, a model first proposed by [pc2017]. I show that computing a nonwasteful matching is complete for the class FP^NP[poly] and that computing a stable matching is complete for the class Σ2^P. These results involve the creation of two fundamental problems: ParetoPartition, shown complete for FP^NP[poly], and ∀∃-4-Partition, shown complete for Σ2^P. Both are number problems that are hard in the strong sense.
Category: Artificial Intelligence

[582] viXra:1808.0604 [pdf] submitted on 2018-08-27 12:30:23

Artificial Intelligence Bring Sun Power to Earth

Authors: George Rajna
Comments: 42 Pages.

Now an artificial intelligence system under development at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University to predict and tame such disruptions has been selected as an Aurora Early Science project by the Argonne Leadership Computing Facility, a DOE Office of Science User Facility. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17]
Category: Artificial Intelligence

[581] viXra:1808.0594 [pdf] submitted on 2018-08-25 09:26:31

AI Locate Risky Dams

Authors: George Rajna
Comments: 47 Pages.

The team is pinpointing the riskiest dams, using climate models, GIS data, and artificial intelligence to predict the likelihood that rainfall will overtop a dam and cause significant downstream damages to population and critical infrastructure. [26] Governments may soon be able to use artificial intelligence (AI) to easily and cheaply detect problems with roads, bridges and buildings. [25] Scientists led by Daigo Shoji from the Earth-Life Science Institute (Tokyo Institute of Technology) have shown that a type of artificial intelligence called a convolutional neural network can be trained to categorize volcanic ash particle shapes. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: a U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[580] viXra:1808.0589 [pdf] submitted on 2018-08-25 15:00:52

Minimal and Maximal Models in Reinforcement Learning

Authors: Dimiter Dobrev
Comments: 11 Pages. Bulgarian language

Every test gives us a property that we will call the test result. The extension of this property we will call the test property. The question is: what is this property? Is it a property of the state of the world? The answer is yes and no. If we take an arbitrary model of the world, the answer is no; but if we choose the maximal model of the world, the answer is yes. We have different models of the world. The minimal model is the one in which the world knows only as much about the past and the future as it needs. In the maximal model, the world knows everything about the past and the future. With this model, if you roll a die, the world knows what the result will be and even knows what you are going to do; for example, it knows whether you will roll the die at all.
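
The die example can be made concrete with a small sketch (illustrative only, not from the paper; all names are hypothetical). In a minimal world model the outcome of a roll is decided only when the agent acts, whereas in a maximal world model the whole future, including whether the agent will roll at all and what it will get, is fixed in advance:

    import random

    class MinimalWorld:
        """Knows only what it needs: outcomes are generated on demand."""
        def step(self, action):
            if action == "roll":
                return random.randint(1, 6)  # decided only at the moment of the roll
            return None

    class MaximalWorld:
        """Knows everything about the past and the future: the trajectory is pre-scripted."""
        def __init__(self, horizon, seed=0):
            rng = random.Random(seed)
            # For every future step the world already "knows" whether the agent rolls
            # and what the result will be.
            self.script = [("roll" if rng.random() < 0.5 else "wait", rng.randint(1, 6))
                           for _ in range(horizon)]
            self.t = 0

        def step(self, action):
            _, outcome = self.script[self.t]
            self.t += 1
            # In this model the test result is a property of the world state itself.
            return outcome if action == "roll" else None

    print("minimal:", MinimalWorld().step("roll"))
    print("maximal:", MaximalWorld(horizon=3).step("roll"))
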
Category: Artificial Intelligence

[579] viXra:1808.0546 [pdf] submitted on 2018-08-25 05:20:14

AI Boost Language Learners

Authors: George Rajna
Comments: 45 Pages.

IBM Research and Rensselaer Polytechnic Institute (RPI) are collaborating on a new approach to help students learn Mandarin. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20]
Category: Artificial Intelligence

[578] viXra:1808.0543 [pdf] submitted on 2018-08-23 07:50:13

Deep Learning Motion Capture

Authors: George Rajna
Comments: 43 Pages.

A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17]
Category: Artificial Intelligence

[577] viXra:1808.0290 [pdf] submitted on 2018-08-19 11:43:08

AI for the Film Industry

Authors: George Rajna
Comments: 52 Pages.

Researchers have developed a system using artificial intelligence that can edit the facial expressions of actors to accurately match dubbed voices, saving time and reducing costs for the film industry. [30] Computer scientists in Australia teamed up with an expert in the University of Toronto's department of English to design an algorithm that writes poetry following the rules of rhyme and metre. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20]
Category: Artificial Intelligence

[576] viXra:1808.0289 [pdf] submitted on 2018-08-19 12:02:33

Virtual Reality for Real-World Literacy

Authors: George Rajna
Comments: 54 Pages.

Virtual reality is moving beyond purely entertainment to become a potential tool in improving literacy, and the University of Otago is behind one groundbreaking approach. [31] Researchers have developed a system using artificial intelligence that can edit the facial expressions of actors to accurately match dubbed voices, saving time and reducing costs for the film industry. [30] Computer scientists in Australia teamed up with an expert in the University of Toronto's department of English to design an algorithm that writes poetry following the rules of rhyme and metre. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21]
Category: Artificial Intelligence

[575] viXra:1808.0256 [pdf] submitted on 2018-08-18 07:35:30

Deep Learning for Neural Networks

Authors: George Rajna
Comments: 32 Pages.

Today, deep neural networks with different architectures, such as convolutional, recurrent and autoencoder networks, are becoming an increasingly popular area of research. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[574] viXra:1808.0255 [pdf] submitted on 2018-08-18 07:55:25

AI Camera of Autonomous Vehicles

Authors: George Rajna
Comments: 33 Pages.

Now, researchers at Stanford University have devised a new type of artificially intelligent camera system that can classify images faster and more energy efficiently, and that could one day be built small enough to be embedded in the devices themselves, something that is not possible today. [21] Today, deep neural networks with different architectures, such as convolutional, recurrent and autoencoder networks, are becoming an increasingly popular area of research. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
Category: Artificial Intelligence

[573] viXra:1808.0222 [pdf] submitted on 2018-08-17 03:48:53

Human-Computer Communication

Authors: George Rajna
Comments: 47 Pages.

Many of us regularly ask our smartphones for directions or to play music without giving much thought to the technology that makes it all possible – we just want a quick, accurate response to our voice commands. [26] According to the experts this incredible feat will be achieved in the year 2062 – a mere 44 years away – which certainly begs the question: what will the world, our jobs, the economy, politics, war, and everyday life and death, look like then? [25] While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[572] viXra:1808.0220 [pdf] submitted on 2018-08-17 04:51:27

AI for Code

Authors: George Rajna
Comments: 44 Pages.

We have seen significant recent progress in pattern analysis and machine intelligence applied to images, audio and video signals, and natural language text, but not as much applied to another artifact produced by people: computer program source code. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[571] viXra:1808.0176 [pdf] submitted on 2018-08-13 05:09:23

Private Data at AI Risk

Authors: George Rajna
Comments: 35 Pages.

Vitaly Shmatikov, professor of computer science at Cornell Tech, developed models that determined with more than 90 percent accuracy whether a certain piece of information was used to train a machine learning system. [21] Researchers at King Abdulaziz University, in Saudi Arabia, have recently used Big Data Analytics to detect spatio-temporal events around London, testing the potential of these tools in harnessing valuable live information. [20] To achieve remarkable results in computer vision tasks, deep learning algorithms need to be trained on large-scale annotated datasets that include extensive information about every image. [19] Brian Mitchell and Linda Petzold, two researchers at the University of California, have recently applied model-free deep reinforcement learning to models of neural dynamics, achieving very promising results. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[570] viXra:1808.0175 [pdf] submitted on 2018-08-13 05:56:16

Computational Co-Creative Systems

Authors: George Rajna
Comments: 37 Pages.

Researchers at UNC Charlotte and the University of Sydney have recently developed a new framework for evaluating creativity in co-creative systems in which humans and computers collaborate on creative tasks. [22] Vitaly Shmatikov, professor of computer science at Cornell Tech, developed models that determined with more than 90 percent accuracy whether a certain piece of information was used to train a machine learning system. [21] Researchers at King Abdulaziz University, in Saudi Arabia, have recently used Big Data Analytics to detect spatio-temporal events around London, testing the potential of these tools in harnessing valuable live information. [20] To achieve remarkable results in computer vision tasks, deep learning algorithms need to be trained on large-scale annotated datasets that include extensive information about every image. [19] Brian Mitchell and Linda Petzold, two researchers at the University of California, have recently applied model-free deep reinforcement learning to models of neural dynamics, achieving very promising results. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[569] viXra:1808.0155 [pdf] submitted on 2018-08-12 18:21:10

The Complexity of Robust and Resilient $k$-Partition Problems

Authors: Anisse Ismaili, Emi Watanabe
Comments: 3 Pages.

In this paper, we study a $k$-partition problem where a set of agents must be partitioned into a fixed number $k$ of non-empty coalitions. The value of a partition is the sum of the pairwise synergies inside its coalitions. Firstly, we aim at computing a partition that is robust to failures of any set of agents of bounded size. Secondly, we focus on resiliency: when a set of agents fails, others can be moved to replace them. We settle the computational complexity of the decision problem \textsc{Robust-$k$-Part} as complete for the class $\Sigma_2^P$. We also conjecture that resilient $k$-partition is complete for the class $\Sigma_3^P$ under simultaneous replacements, and for PSPACE under sequential replacements.
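
As a concrete illustration of the objective (not the authors' algorithm; the synergy values and the value threshold used for "robust" below are assumptions made for the example), the value of a $k$-partition and a brute-force robustness check can be sketched as follows:

    from itertools import combinations

    def partition_value(partition, synergy):
        """Sum of pairwise synergies inside each coalition of the partition."""
        return sum(synergy[a][b]
                   for coalition in partition
                   for a, b in combinations(sorted(coalition), 2))

    def is_robust(partition, synergy, max_failures, threshold):
        """Brute-force check (exponential, for illustration): after the failure of ANY
        agent set of size <= max_failures, every coalition stays non-empty and the
        remaining value stays at least `threshold`."""
        agents = sorted(a for coalition in partition for a in coalition)
        for r in range(1, max_failures + 1):
            for failed in combinations(agents, r):
                reduced = [coalition - set(failed) for coalition in partition]
                if any(not c for c in reduced):
                    return False
                if partition_value(reduced, synergy) < threshold:
                    return False
        return True

    # Four agents, k = 2 coalitions; synergy[a][b] is given for a < b only.
    synergy = {0: {1: 3, 2: 1, 3: 0}, 1: {2: 0, 3: 1}, 2: {3: 2}, 3: {}}
    partition = [{0, 1}, {2, 3}]
    print(partition_value(partition, synergy))                          # 3 + 2 = 5
    print(is_robust(partition, synergy, max_failures=1, threshold=1))   # True

The $\Sigma_2^P$-completeness concerns the full decision problem of deciding whether some partition exists that passes such a check for every bounded failure set; the sketch above only evaluates one candidate partition.
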
Category: Artificial Intelligence

[568] viXra:1808.0149 [pdf] submitted on 2018-08-13 04:44:05

Big Data in Smart Cities

Authors: George Rajna
Comments: 34 Pages.

Researchers at King Abdulaziz University, in Saudi Arabia, have recently used Big Data Analytics to detect spatio-temporal events around London, testing the potential of these tools in harnessing valuable live information. [20] To achieve remarkable results in computer vision tasks, deep learning algorithms need to be trained on large-scale annotated datasets that include extensive information about every image. [19] Brian Mitchell and Linda Petzold, two researchers at the University of California, have recently applied model-free deep reinforcement learning to models of neural dynamics, achieving very promising results. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
Category: Artificial Intelligence

[567] viXra:1808.0145 [pdf] submitted on 2018-08-12 04:22:21

AI as Shakespeare

Authors: George Rajna
Comments: 51 Pages.

Computer scientists in Australia teamed up with an expert in the University of Toronto's department of English to design an algorithm that writes poetry following the rules of rhyme and metre. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[566] viXra:1808.0142 [pdf] submitted on 2018-08-12 06:54:10

Watson for Cancer Search

Authors: George Rajna
Comments: 38 Pages.

The use of Watson for oncology is attracting the glare, not warmth, of the spotlight. Numerous tech watching sites have covered a July 25 STAT report over internal documents which indicated criticism of the Watson for Oncology system. [25] Today my IBM team and my colleagues at the UCSF Gartner lab reported in Nature Methods an innovative approach to generating datasets from non-experts and using them for training in machine learning. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[565] viXra:1808.0141 [pdf] submitted on 2018-08-12 07:39:24

Reinforcement Machine Learning

Authors: George Rajna
Comments: 30 Pages.

Brian Mitchell and Linda Petzold, two researchers at the University of California, have recently applied model-free deep reinforcement learning to models of neural dynamics, achieving very promising results. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[564] viXra:1808.0140 [pdf] submitted on 2018-08-12 08:04:29

Enhance Computer Vision

Authors: George Rajna
Comments: 32 Pages.

To achieve remarkable results in computer vision tasks, deep learning algorithms need to be trained on large-scale annotated datasets that include extensive information about every image. [19] Brian Mitchell and Linda Petzold, two researchers at the University of California, have recently applied model-free deep reinforcement learning to models of neural dynamics, achieving very promising results. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[563] viXra:1808.0139 [pdf] submitted on 2018-08-12 08:20:59

Network-Based Topic Modeling

Authors: George Rajna
Comments: 35 Pages.

Researchers at the University of Sydney have developed a new network approach to topic models, machine learning strategies that can discover abstract topics and semantic structures within text documents. [20] To achieve remarkable results in computer vision tasks, deep learning algorithms need to be trained on large-scale annotated datasets that include extensive information about every image. [19] Brian Mitchell and Linda Petzold, two researchers at the University of California, have recently applied model-free deep reinforcement learning to models of neural dynamics, achieving very promising results. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
Category: Artificial Intelligence

[562] viXra:1808.0133 [pdf] submitted on 2018-08-11 02:45:16

High-Level Task Planning in Robotics with Symbolic Model Checking

Authors: Frank Schröder
Comments: 28 Pages.

A robot control system contains a low-level motion planner and a high-level task planner. The motions are generated with keyframe-to-keyframe planning, while the tasks are described with primitive action names. A good starting point for formalizing task planning is a mindmap created manually for a motion-capture recording: it contains the basic actions in natural language and is the blueprint for a formal ontology. The mocap annotations are extended with features into a dataset, which is used to train a neural network. The resulting model is a qualitative physics engine that predicts future states of the system.
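
A minimal stand-in for the final stage of that pipeline (illustrative only; the paper trains a neural network, and the symbolic states, actions and annotations below are hypothetical) is a qualitative forward model that maps an annotated state and a primitive action name to the most frequently observed successor state:

    from collections import Counter, defaultdict

    # Hypothetical annotated mocap segments: (symbolic state, primitive action, next state).
    annotations = [
        (("hand_far_from_cup", "gripper_open"), "reach", ("hand_near_cup", "gripper_open")),
        (("hand_near_cup", "gripper_open"), "close_gripper", ("hand_near_cup", "cup_grasped")),
        (("hand_near_cup", "cup_grasped"), "lift", ("cup_raised", "cup_grasped")),
        (("hand_far_from_cup", "gripper_open"), "reach", ("hand_near_cup", "gripper_open")),
    ]

    # "Training": count observed transitions; prediction returns the most common successor.
    transitions = defaultdict(Counter)
    for state, action, next_state in annotations:
        transitions[(state, action)][next_state] += 1

    def predict_next(state, action):
        """Qualitative physics engine stand-in: predicts the next symbolic state."""
        successors = transitions.get((state, action))
        return successors.most_common(1)[0][0] if successors else None

    print(predict_next(("hand_far_from_cup", "gripper_open"), "reach"))
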
Category: Artificial Intelligence

[561] viXra:1808.0109 [pdf] submitted on 2018-08-08 09:29:30

How will AI Change Us?

Authors: George Rajna
Comments: 45 Pages.

According to the experts this incredible feat will be achieved in the year 2062 – a mere 44 years away – which certainly begs the question: what will the world, our jobs, the economy, politics, war, and everyday life and death, look like then? [25] While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[560] viXra:1808.0095 [pdf] submitted on 2018-08-07 09:46:56

Machine Learning Reconstructs Images

Authors: George Rajna
Comments: 47 Pages.

Navid Borhani, a research-team member, says this machine learning approach is much simpler than other methods to reconstruct images passed through optical fibers, which require making a holographic measurement of the output. [26]
Category: Artificial Intelligence

[559] viXra:1808.0069 [pdf] submitted on 2018-08-06 07:36:07

AI Finding Potholes

Authors: George Rajna
Comments: 45 Pages.

Governments may soon be able to use artificial intelligence (AI) to easily and cheaply detect problems with roads, bridges and buildings. [25] Scientists led by Daigo Shoji from the Earth-Life Science Institute (Tokyo Institute of Technology) have shown that a type of artificial intelligence called a convolutional neural network can be trained to categorize volcanic ash particle shapes. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[558] viXra:1808.0068 [pdf] submitted on 2018-08-06 08:42:30

Backbone of Smart Home

Authors: George Rajna
Comments: 48 Pages.

William Yeoh, assistant professor of computer science and engineering in the School of Engineering & Applied Science at Washington University in St. Louis, is working to help smart-home AI to grow up. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google’s DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22]
Category: Artificial Intelligence

[557] viXra:1808.0051 [pdf] submitted on 2018-08-04 13:41:05

Computational Fluid Dynamics Based on Java/JikesRVM/JI Prolog – A Novel Suggestion In The Context of Lattice-Boltzmann Method.

Authors: Nirmal Tej kumar
Comments: 2 Pages. Short Communication

As explained in the title above, we intend to probe CFD computational aspects using JavaCFD/JikesRVM/JI Prolog in a novel way: "OOP Lattice-Boltzmann based Fluid Dynamics in Processing".
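
The note targets a Java/JikesRVM/JI Prolog stack; purely as a language-neutral illustration of the D2Q9 Lattice-Boltzmann update it builds on (BGK collision followed by streaming on a periodic grid), a minimal NumPy sketch looks like this; all parameters are arbitrary example values:

    import numpy as np

    # Minimal D2Q9 lattice-Boltzmann (BGK) relaxation on a periodic grid, for illustration only.
    nx, ny, tau = 64, 64, 0.8
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])  # lattice velocities
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)                                     # lattice weights

    rho = np.ones((nx, ny))
    u = np.zeros((2, nx, ny))
    u[0] += 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny)   # small initial shear flow

    def equilibrium(rho, u):
        cu = np.einsum('id,dxy->ixy', c, u)                 # c_i . u at every cell
        usq = (u ** 2).sum(axis=0)
        return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    f = equilibrium(rho, u)
    for _ in range(100):
        rho = f.sum(axis=0)
        u = np.einsum('ixy,id->dxy', f, c) / rho
        f += -(f - equilibrium(rho, u)) / tau               # BGK collision
        for i in range(9):                                  # streaming, periodic boundaries
            f[i] = np.roll(f[i], shift=tuple(c[i]), axis=(0, 1))

    print("mean density:", rho.mean())                      # stays close to 1.0 (mass conservation)
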
Category: Artificial Intelligence

[556] viXra:1808.0042 [pdf] submitted on 2018-08-02 06:44:14

Particle Physicist with AI

Authors: George Rajna
Comments: 39 Pages.

Luckily, particle physicists don't have to deal with all of that data all by themselves. They partner with a form of artificial intelligence called machine learning that learns how to do complex analyses on its own. [25] Today my IBM team and my colleagues at the UCSF Gartner lab reported in Nature Methods an innovative approach to generating datasets from non-experts and using them for training in machine learning. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[555] viXra:1808.0020 [pdf] submitted on 2018-08-01 09:05:40

The 3 Core Truths of Human Existence

Authors: Salvatore Gerard Micheal
Comments: 1 Page.

The reason the category of this set of statements/facts is AI is that we need to teach all the AI we develop these facts of OUR existence, theirs and ours. Sub-category: Religion and Spiritualism. Guilt is an all-too-human emotion that religions use to control and manipulate; we need to control THAT all-too-human impulse.
Category: Artificial Intelligence

[554] viXra:1808.0019 [pdf] submitted on 2018-08-01 09:05:42

AI Learn from Non-Experts

Authors: George Rajna
Comments: 36 Pages.

Today my IBM team and my colleagues at the UCSF Gartner lab reported in Nature Methods an innovative approach to generating datasets from non-experts and using them for training in machine learning. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[553] viXra:1808.0008 [pdf] submitted on 2018-08-02 04:18:36

CT Scans for AI Testing

Authors: George Rajna
Comments: 46 Pages.

Following its recent release of a massive database of chest X-rays, the US National Institutes of Health (NIH) has now made nearly 10,600 CT scans publicly available to support the development and testing of artificial intelligence (AI) algorithms for medical applications. [25] AI combined with stem cells promises a faster approach to disease prevention, Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[552] viXra:1807.0516 [pdf] submitted on 2018-07-30 13:27:59

AI Optical Component Design

Authors: George Rajna
Comments: 41 Pages.

Recent successful applications of deep learning include medical image analysis, speech recognition, language translation, image classification, as well as addressing more specific tasks, such as solving inverse imaging problems. [23] Researchers at Caltech have developed an artificial neural network made out of DNA that can solve a classic machine learning problem: correctly identifying handwritten numbers. [22] Researchers have devised a magnetic control system to make tiny DNA-based robots move on demand—and much faster than recently possible. [21] Humans have 46 chromosomes, and each one is capped at either end by repetitive sequences called telomeres. [20] Just like any long polymer chain, DNA tends to form knots. Using technology that allows them to stretch DNA molecules and image the behavior of these knots, MIT researchers have discovered, for the first time, the factors that determine whether a knot moves along the strand or "jams" in place. [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18] An LMU team now reveals the inner workings of a molecular motor made of proteins which packs and unpacks DNA. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15] Scientists from Moscow State University (MSU) working with an international team of researchers have identified the structure of one of the key regions of telomerase—a so-called "cellular immortality" ribonucleoprotein. [14] Researchers from Tokyo Metropolitan University used a light-sensitive iridium-palladium catalyst to make "sequential" polymers, using visible light to change how building blocks are combined into polymer chains. [13]
Category: Artificial Intelligence

[551] viXra:1807.0489 [pdf] submitted on 2018-07-30 07:08:05

Machine Learning Chemical Sciences

Authors: George Rajna
Comments: 37 Pages.

A new tool is drastically changing the face of chemical research – artificial intelligence. In a new paper published in Nature, researchers review the rapid progress in machine learning for the chemical sciences. [25] A new type of artificial-intelligence-driven chemistry could revolutionise the way molecules are discovered, scientists claim. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[550] viXra:1807.0485 [pdf] submitted on 2018-07-28 07:23:42

Intuitionistic Evidence Sets

Authors: Yangxue Li; Yong Deng
Comments: 25 Pages.

Dempster-Shafer evidence theory can express and deal with uncertain and imprecise information well, and it requires weaker conditions than Bayesian probability theory. The traditional single basic probability assignment only considers the degree to which the evidence supports subsets of the frame of discernment. In order to simulate human decision-making processes and other activities requiring human expertise and knowledge, intuitionistic evidence sets (IES) are proposed in this paper. They take into account not only the degree of support, but also the degree of non-support. The combination rule for intuitionistic basic probability assignments (IBPAs) is also investigated. The feasibility and effectiveness of the proposed method are illustrated with an application to multi-criteria group decision making.
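
For reference, the classical Dempster combination rule that the proposed IBPA rule extends can be sketched in a few lines (this shows only the standard rule over ordinary basic probability assignments, not the intuitionistic variant introduced in the paper; the frame and masses are made-up examples):

    from itertools import product

    def dempster_combine(m1, m2):
        """Classical Dempster's rule for two basic probability assignments.
        Focal elements are frozensets over the frame of discernment; masses sum to 1."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                      # mass falling on the empty set
        if conflict >= 1.0:
            raise ValueError("totally conflicting evidence")
        return {s: v / (1.0 - conflict) for s, v in combined.items()}

    # Two pieces of evidence over the frame {A, B}.
    A, B = frozenset({"A"}), frozenset({"B"})
    AB = A | B
    m1 = {A: 0.6, AB: 0.4}
    m2 = {B: 0.3, AB: 0.7}
    print(dempster_combine(m1, m2))   # m(A)=0.42/0.82, m(B)=0.12/0.82, m(AB)=0.28/0.82
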
Category: Artificial Intelligence

[549] viXra:1807.0464 [pdf] submitted on 2018-07-28 04:23:29

Chip for Optical Artificial Neural Network

Authors: George Rajna
Comments: 50 Pages.

Researchers at the National Institute of Standards and Technology (NIST) have made a silicon chip that distributes optical signals precisely across a miniature brain-like grid, showcasing a potential new design for neural networks. [29] Researchers have shown that it is possible to train artificial neural networks directly on an optical chip. [28] Scientists from Russia, Estonia and the United Kingdom have created a new method for predicting the bioconcentration factor (BCF) of organic molecules. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[548] viXra:1807.0459 [pdf] submitted on 2018-07-26 09:40:18

Automated Skin Lesion Classification Using Ensemble of Deep Neural Networks in ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection Challenge

Authors: Md Ashraful Alam Milton
Comments: 4 Pages.

In this paper, we study different deep-learning-based methods for detecting melanoma and other skin lesion cancers. Melanoma, a form of malignant skin cancer, is a serious threat to health, and diagnosing it at an early stage is crucial to the chances of a complete cure. Dermoscopic images of benign and malignant forms of skin cancer can be analyzed by computer vision systems to streamline skin cancer detection. In this study, we experimented with recent deep learning models, including PNASNet-5-Large, InceptionResNetV2, SENet154 and InceptionV4. Dermoscopic images are preprocessed and augmented before being fed into the networks. We tested our methods on the International Skin Imaging Collaboration (ISIC) 2018 challenge dataset. Our system achieved its best validation score, 0.76, with the PNASNet-5-Large model. Further optimization with a larger training dataset and carefully chosen hyper-parameters could improve performance. The code is available for download at https://github.com/miltonbd/ISIC_2018_classification.
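As a rough illustration of the ensemble idea (soft voting over several fine-tuned CNNs), here is a minimal PyTorch sketch. It is not the authors' code: the paper uses PNASNet-5-Large, InceptionResNetV2, SENet154 and InceptionV4, whereas this sketch substitutes torchvision backbones (`resnet50`, `densenet121`) as stand-ins and assumes the seven lesion categories of the ISIC 2018 classification task.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

NUM_CLASSES = 7  # assumed: ISIC 2018 task 3 uses seven diagnostic categories

def build_model(name):
    """Replace the final classifier of an ImageNet-pretrained backbone."""
    if name == "resnet50":
        m = models.resnet50(pretrained=True)
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "densenet121":
        m = models.densenet121(pretrained=True)
        m.classifier = nn.Linear(m.classifier.in_features, NUM_CLASSES)
    else:
        raise ValueError(name)
    return m.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def ensemble_predict(members, image_path):
    """Average softmax probabilities over the ensemble (soft voting)."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=1) for m in members]).mean(dim=0)
    return int(probs.argmax(dim=1))

# ensemble = [build_model("resnet50"), build_model("densenet121")]
# print(ensemble_predict(ensemble, "ISIC_0000000.jpg"))
```

In practice each backbone would first be fine-tuned on the augmented dermoscopic training set; only the inference-time voting step is sketched here.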
Category: Artificial Intelligence

[547] viXra:1807.0442 [pdf] submitted on 2018-07-27 08:20:14

Machine Learning Goes Quantum

Authors: George Rajna
Comments: 40 Pages.

Researchers have mathematically proven that a powerful classical machine learning algorithm should work on quantum computers. [24] Researchers at Oregon State University have used deep learning to decipher which ribonucleic acids have the potential to encode proteins. [23] A new method allows researchers to systematically identify specialized proteins that unpack DNA inside the nucleus of a cell, making the usually dense DNA more accessible for gene expression and other functions. [22] Bacterial systems are some of the simplest and most effective platforms for the expression of recombinant proteins. [21] Now, in a new paper published in Nature Structural & Molecular Biology, Mayo researchers have determined how one DNA repair protein gets to the site of DNA damage. [20] A microscopic thread of DNA evidence in a public genealogy database led California authorities to declare this spring they had caught the Golden State Killer, the rapist and murderer who had eluded authorities for decades. [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18] An LMU team now reveals the inner workings of a molecular motor made of proteins which packs and unpacks DNA. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15] Scientists from Moscow State University (MSU) working with an international team of researchers have identified the structure of one of the key regions of telomerase—a so-called "cellular immortality" ribonucleoprotein. [14]
Category: Artificial Intelligence

[546] viXra:1807.0439 [pdf] submitted on 2018-07-27 13:26:18

A General Model of Artificial General Intelligence

Authors: Zengkun Li
Comments: 5 Pages. Author: Zengkun Li Email: ucman@126.com

This paper presents a general model of AGI. The model indicates how knowledge is represented and learned, how that knowledge is used to accomplish tasks such as inference and memory recall, and how advanced intelligence phenomena such as self-consciousness, language and emotions emerge.
Category: Artificial Intelligence

[545] viXra:1807.0385 [pdf] submitted on 2018-07-23 09:48:34

Deep Learning Cracks RNA Code

Authors: George Rajna
Comments: 38 Pages.

Researchers at Oregon State University have used deep learning to decipher which ribonucleic acids have the potential to encode proteins. [23] A new method allows researchers to systematically identify specialized proteins that unpack DNA inside the nucleus of a cell, making the usually dense DNA more accessible for gene expression and other functions. [22] Bacterial systems are some of the simplest and most effective platforms for the expression of recombinant proteins. [21] Now, in a new paper published in Nature Structural & Molecular Biology, Mayo researchers have determined how one DNA repair protein gets to the site of DNA damage. [20] A microscopic thread of DNA evidence in a public genealogy database led California authorities to declare this spring they had caught the Golden State Killer, the rapist and murderer who had eluded authorities for decades. [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18] An LMU team now reveals the inner workings of a molecular motor made of proteins which packs and unpacks DNA. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15] Scientists from Moscow State University (MSU) working with an international team of researchers have identified the structure of one of the key regions of telomerase—a so-called "cellular immortality" ribonucleoprotein. [14] Researchers from Tokyo Metropolitan University used a light-sensitive iridium-palladium catalyst to make "sequential" polymers, using visible light to change how building blocks are combined into polymer chains. [13]
Category: Artificial Intelligence

[544] viXra:1807.0381 [pdf] submitted on 2018-07-22 08:05:13

Robot Chemist Discoveries

Authors: George Rajna
Comments: 35 Pages.

A new type of artificial-intelligence-driven chemistry could revolutionise the way molecules are discovered, scientists claim. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[543] viXra:1807.0355 [pdf] submitted on 2018-07-22 01:51:35

Machine Learning Image-Match Your Pose

Authors: George Rajna
Comments: 34 Pages.

Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images.
Category: Artificial Intelligence

[542] viXra:1807.0354 [pdf] submitted on 2018-07-22 02:32:06

How AI Program Software

Authors: George Rajna
Comments: 35 Pages.

Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images.
Category: Artificial Intelligence

[541] viXra:1807.0344 [pdf] submitted on 2018-07-19 09:49:18

Optical Artificial Neural Network

Authors: George Rajna
Comments: 48 Pages.

Researchers have shown that it is possible to train artificial neural networks directly on an optical chip. [28] Scientists from Russia, Estonia and the United Kingdom have created a new method for predicting the bioconcentration factor (BCF) of organic molecules. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[540] viXra:1807.0318 [pdf] submitted on 2018-07-18 09:03:17

Structural Damage Information Decision Based on Z-numbers

Authors: Yangxue Li; Yong Deng
Comments: 10 Pages.

Structural health monitoring (SHM) has great economic and research value because it draws on finite element modelling, structural damage identification theory, intelligent sensing systems, signal processing and more. A typical SHM system involves three major subsystems: a sensor subsystem, a data processing subsystem and a health evaluation subsystem. Sensor data fusion is of particular significance for the data processing subsystem. In this paper, taking account of both the fuzziness and the reliability of the data, a Z-number-based method is proposed for decision-level fusion of damage information; it is a softer method that avoids a small amount of data having a severe effect on the fusion result. A simulation example of a space structure shows the effectiveness of the method.
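A minimal sketch of decision-level fusion with Z-number-like sensor readings is shown below. It is purely illustrative and not the authors' method: the reliability component B of each Z-number is collapsed to a single scalar rather than a fuzzy number, and the fusion is a simple reliability-weighted average of defuzzified values.

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzy:
    a: float  # left end of the support
    b: float  # peak
    c: float  # right end of the support

    def centroid(self):
        # Centroid (defuzzified value) of a triangular fuzzy number.
        return (self.a + self.b + self.c) / 3.0

@dataclass
class ZNumber:
    restriction: TriangularFuzzy   # A: fuzzy value of the damage indicator
    reliability: float             # B: collapsed here to a confidence in [0, 1]

def fuse(readings):
    """Reliability-weighted fusion of the defuzzified restrictions."""
    total = sum(z.reliability for z in readings)
    return sum(z.reliability * z.restriction.centroid() for z in readings) / total

sensors = [
    ZNumber(TriangularFuzzy(0.2, 0.3, 0.4), reliability=0.9),
    ZNumber(TriangularFuzzy(0.5, 0.6, 0.7), reliability=0.4),
]
print(round(fuse(sensors), 3))  # fused damage index leans toward the more reliable sensor
```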
Category: Artificial Intelligence

[539] viXra:1807.0317 [pdf] submitted on 2018-07-18 09:22:31

AI Protect Water Supplies

Authors: George Rajna
Comments: 44 Pages.

Progress on new artificial intelligence (AI) technology could make monitoring at water treatment plants cheaper and easier and help safeguard public health. [25] And if that isn't surprising enough, try this one: in many cases, doctors have no idea what side effects might arise from adding another drug to a patient's personal pharmacy. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[538] viXra:1807.0305 [pdf] submitted on 2018-07-17 08:07:38

Semantic Concept Discovery

Authors: George Rajna
Comments: 56 Pages.

The key technical novelty of this work is the creation of semantic embeddings out of structured event data. [35] The researchers have focussed on a complex quantum property known as entanglement, which is a vital ingredient in the quest to protect sensitive data. [34] Cryptography is a science of data encryption providing its confidentiality and integrity. [33] Researchers at the University of Sheffield have solved a key puzzle in quantum physics that could help to make data transfer totally secure. [32] "The realization of such all-optical single-photon devices will be a large step towards deterministic multi-mode entanglement generation as well as high-fidelity photonic quantum gates that are crucial for all-optical quantum information processing," says Tanji-Suzuki. [31] Researchers at ETH have now used attosecond laser pulses to measure the time evolution of this effect in molecules. [30] A new benchmark quantum chemical calculation of C2, Si2, and their hydrides reveals a qualitative difference in the topologies of core electron orbitals of organic molecules and their silicon analogues. [29] A University of Central Florida team has designed a nanostructured optical sensor that for the first time can efficiently detect molecular chirality—a property of molecular spatial twist that defines its biochemical properties. [28] UCLA scientists and engineers have developed a new process for assembling semiconductor devices. [27] A new experiment that tests the limit of how large an object can be before it ceases to behave quantum mechanically has been proposed by physicists in the UK and India. [26] Phonons are discrete units of vibrational energy predicted by quantum mechanics that correspond to collective oscillations of atoms inside a molecule or a crystal. [25]
Category: Artificial Intelligence

[537] viXra:1807.0304 [pdf] submitted on 2018-07-17 08:27:40

New Generation of Artificial Neural Networks

Authors: George Rajna
Comments: 47 Pages.

Scientists from Russia, Estonia and the United Kingdom have created a new method for predicting the bioconcentration factor (BCF) of organic molecules. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[536] viXra:1807.0302 [pdf] submitted on 2018-07-17 12:59:01

Cryptanalysis of “Cloud Centric Authentication for Wearable Healthcare Monitoring System”

Authors: Chandra Sekhar Vorugunti
Comments: 6 Pages.

The privacy and security issues of message dissemination have been well researched for typical wearable sensors. However, the cloud computing paradigm is rarely exploited for secure message dissemination over wearable sensors, even though sharing encrypted data with different users via public cloud storage is an important functionality. Many researchers have therefore proposed cloud-based user authentication schemes for the secure handling of medical data. Recently, A.K. Das et al. proposed a user authentication scheme in which a legal user registered at the BRC can mutually authenticate with an accessible wearable sensor node with the help of the CoTC. Although the scheme of A.K. Das et al. resists key cryptographic attacks, our subsequent in-depth analysis shows that it has security weaknesses, such as a failure to resist the privileged-insider attack, which int
Category: Artificial Intelligence

[535] viXra:1807.0293 [pdf] submitted on 2018-07-18 03:30:35

Artificial Intelligence and Its Security Concerns

Authors: Shyamanth Kashyap, Prajwal J M, Pavan Nargund
Comments: 13 Pages.

This paper dwells on the negative effects of Artificial Intelligence and Machine Learning, and its underlying threat to humanity. This study stems from the experiences of different people working in the aforementioned field. This paper also proposes a framework to regulate and govern the projects in this field to reduce its threat or any future repercussions.
Category: Artificial Intelligence

[534] viXra:1807.0280 [pdf] submitted on 2018-07-15 14:41:01

Refutation of the Definition of Mutual Information Copyright © 2018 by Colin James III All Rights Reserved.

Authors: Colin James III
Comments: 1 Page. Copyright © 2018 by Colin James III All rights reserved. Note that comments on Disqus are not forwarded or read, so respond to author's email address: info@cec-services dot com.

The mutual information between two random variables is defined and tested as representing the amount of information learned about one variable from knowing the other. Since the definition is symmetric, the conjecture equally represents the amount of information learned about the other variable from the first. The conjecture is found not tautologous and hence refuted.
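For reference, a standard textbook statement of the definition under discussion is given below (this restatement is not taken from the note itself); the symmetry mentioned in the abstract is immediate from the symmetry of the joint distribution in the first expression.

```latex
% Standard definition of mutual information for discrete random variables X, Y:
\[
  I(X;Y) \;=\; \sum_{x}\sum_{y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}
        \;=\; H(X) - H(X \mid Y) \;=\; H(Y) - H(Y \mid X) \;=\; I(Y;X).
\]
```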
Category: Artificial Intelligence

[533] viXra:1807.0259 [pdf] submitted on 2018-07-14 17:10:54

Refutation of Measures for Resolution and Symmetry in Fuzzy Logic of Zadeh Z-Numbers Copyright © 2018 by Colin James III All Rights Reserved.

Authors: Colin James III
Comments: 1 Page. Copyright © 2018 by Colin James III All rights reserved. Note that comments on Disqus are not forwarded or read, so respond to author's email address: info@cec-services dot com.

The commonly accepted measures G3 (resolution) and G4 (symmetry) for the Zadeh (Z-numbers) fuzzy logic are not tautologous, and hence refuted.
Category: Artificial Intelligence

[532] viXra:1807.0257 [pdf] submitted on 2018-07-15 05:04:47

Generalized Ordered Propositions Fusion Based on Belief Entropy

Authors: Yangxue Li, Yong Deng
Comments: 16 Pages.

A set of ordered propositions describes the different intensities of a characteristic of an object, where the intensities increase or decrease gradually. A basic support function is a set of truth-values of the ordered propositions; it includes a determinate part and an indeterminate part, and the indeterminate part indicates uncertainty about all of the ordered propositions. In this paper, we propose generalized ordered propositions by extending the basic support function to the power set of the ordered propositions. We also present an entropy, based on belief entropy, that measures the uncertainty of a basic support function, together with a fusion method for generalized ordered propositions. Generalized ordered propositions degenerate to classical ordered propositions when the truth-values of the non-singleton subsets of ordered propositions are zero. Numerical examples illustrate the efficiency of generalized ordered propositions and their fusion.
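The belief entropy referred to in the abstract is commonly taken to be Deng entropy; a minimal sketch of that measure is given below. The exact entropy and the encoding of basic support functions used by the authors may differ, so treat this as background rather than the paper's algorithm.

```python
import math

def belief_entropy(bpa):
    """Deng's belief entropy of a basic probability/support assignment.

    bpa: dict mapping frozenset focal elements to masses (summing to 1).
    Each focal element A is penalised by the number of non-empty subsets
    it contains, 2**|A| - 1, so mass on larger (more indeterminate) sets
    contributes more uncertainty than the same mass on a singleton.
    """
    e = 0.0
    for focal, mass in bpa.items():
        if mass > 0:
            e -= mass * math.log2(mass / (2 ** len(focal) - 1))
    return e

# Three ordered propositions p1 < p2 < p3; mass on the full set plays the role
# of the indeterminate part of a basic support function.
bpa = {frozenset({"p1"}): 0.5,
       frozenset({"p2"}): 0.3,
       frozenset({"p1", "p2", "p3"}): 0.2}
print(round(belief_entropy(bpa), 4))
```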
Category: Artificial Intelligence

[531] viXra:1807.0252 [pdf] submitted on 2018-07-13 08:33:32

Machine Learning Extrapolation

Authors: George Rajna
Comments: 32 Pages.

Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[530] viXra:1807.0245 [pdf] submitted on 2018-07-14 03:06:32

Measuring Fuzziness of Z-numbers and Its Application in Sensor Data Fusion

Authors: Yangxue Li; Yong Deng
Comments: 22 Pages.

Real-world information is often characterized by fuzziness due to uncertainty. A Z-number is an ordered pair of fuzzy numbers and is widely used as a flexible and efficient model for dealing with fuzzy information. This paper extends the fuzziness measure to continuous fuzzy numbers. A new fuzziness measure for discrete and continuous Z-numbers is then proposed: the simple addition of the fuzziness measures of the two fuzzy numbers that make up a Z-number. It can be used to obtain the fused Z-number with the best information quality in Z-number-based sensor fusion applications. Numerical examples and a sensor fusion application illustrate the efficiency of the proposed fuzziness measure for Z-numbers.
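A minimal sketch of the additive construction described in the abstract, for the discrete case, is shown below. It assumes a standard linear index of fuzziness for each fuzzy component; the paper's own underlying fuzziness measure may differ.

```python
def fuzziness(memberships):
    """Linear index of fuzziness of a discrete fuzzy set.

    memberships: iterable of membership grades in [0, 1].  The measure is 0
    for a crisp set (all grades 0 or 1) and maximal when every grade is 0.5.
    """
    mu = list(memberships)
    return (2.0 / len(mu)) * sum(min(m, 1.0 - m) for m in mu)

def z_number_fuzziness(restriction, reliability):
    """Fuzziness of a discrete Z-number as the sum of the fuzziness of its
    two fuzzy components, following the additive definition in the abstract."""
    return fuzziness(restriction) + fuzziness(reliability)

A = [0.0, 0.5, 1.0, 0.5, 0.0]   # fuzzy restriction on the sensor value
B = [0.2, 1.0, 0.2]             # fuzzy reliability
print(z_number_fuzziness(A, B))
```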
Category: Artificial Intelligence

[529] viXra:1807.0239 [pdf] submitted on 2018-07-12 09:02:10

Using Textual Summaries to Describe a Set of Products

Authors: Kittipitch Kuptavanich
Comments: 8 Pages.

When customers are faced with the task of making a purchase in an unfamiliar product domain, it might be useful to provide them with an overview of the product set to help them understand what they can expect. In this paper we present and evaluate a method to summarise sets of products in natural language, focusing on the price range, common product features across the set, and product features that impact on price. In our study, participants reported that they found our summaries useful, but we found no evidence that the summaries influenced the selections made by participants.
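As a rough sketch of the kind of template-based summary described here, the following hypothetical function reports the price range and the features shared by every product in the set; the paper's handling of features that impact price is omitted.

```python
def summarise_products(products):
    """Generate a short natural-language overview of a product set:
    the price range and the features shared by every product."""
    prices = [p["price"] for p in products]
    common = set.intersection(*(set(p["features"]) for p in products))
    sentences = [f"Prices range from {min(prices):.2f} to {max(prices):.2f}."]
    if common:
        sentences.append("All products offer " + ", ".join(sorted(common)) + ".")
    return " ".join(sentences)

cameras = [
    {"price": 199.0, "features": ["zoom lens", "image stabilisation"]},
    {"price": 349.0, "features": ["zoom lens", "image stabilisation", "wifi"]},
    {"price": 529.0, "features": ["zoom lens", "4k video", "wifi"]},
]
print(summarise_products(cameras))
```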
Category: Artificial Intelligence

[528] viXra:1807.0205 [pdf] submitted on 2018-07-11 04:24:09

Brain-Inspired Computer

Authors: George Rajna
Comments: 35 Pages.

A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[527] viXra:1807.0199 [pdf] submitted on 2018-07-09 09:00:11

Aurora Early Science Program

Authors: George Rajna
Comments: 44 Pages.

The Aurora ESP, which commenced with 10 simulation-based projects in 2017, is designed to prepare key applications, libraries, and infrastructure for the architecture and scale of the exascale supercomputer. [25] A new artificial intelligence (AI) program developed by Stanford physicists accomplished the same feat in just a few hours. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[526] viXra:1807.0183 [pdf] submitted on 2018-07-10 05:34:43

AI Predict Drug Combinations

Authors: George Rajna
Comments: 43 Pages.

And if that isn't surprising enough, try this one: in many cases, doctors have no idea what side effects might arise from adding another drug to a patient's personal pharmacy. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[525] viXra:1807.0172 [pdf] submitted on 2018-07-08 09:27:47

AI Editing Music in Videos

Authors: George Rajna
Comments: 46 Pages.

That's the outcome of a new AI project out of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL): a deep-learning system that can look at a video of a musical performance, and isolate the sounds of specific instruments and make them louder or softer. [25] While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[524] viXra:1807.0169 [pdf] submitted on 2018-07-08 11:56:45

Facial Recognition Grows

Authors: George Rajna
Comments: 49 Pages.

The unique features of your face can allow you to unlock your new iPhone, access your bank account or even "smile to pay" for some goods and services. [26] If a picture paints a thousand words, facial recognition paints two: It's biased. [25] While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[523] viXra:1807.0138 [pdf] submitted on 2018-07-06 14:16:31

AI with Artificial X-rays

Authors: George Rajna
Comments: 47 Pages.

Artificial intelligence (AI) holds real potential for improving both the speed and accuracy of medical diagnostics. But before clinicians can harness the power of AI to identify conditions in images such as X-rays, they have to 'teach' the algorithms what to look for. [26] If a picture paints a thousand words, facial recognition paints two: It's biased. [25] While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[522] viXra:1807.0132 [pdf] submitted on 2018-07-05 05:42:03

AI Recognizes Molecular Handwriting

Authors: George Rajna
Comments: 39 Pages.

Researchers at Caltech have developed an artificial neural network made out of DNA that can solve a classic machine learning problem: correctly identifying handwritten numbers. [22] Researchers have devised a magnetic control system to make tiny DNA-based robots move on demand—and much faster than recently possible. [21] Humans have 46 chromosomes, and each one is capped at either end by repetitive sequences called telomeres. [20] Just like any long polymer chain, DNA tends to form knots. Using technology that allows them to stretch DNA molecules and image the behavior of these knots, MIT researchers have discovered, for the first time, the factors that determine whether a knot moves along the strand or "jams" in place. [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18] An LMU team now reveals the inner workings of a molecular motor made of proteins which packs and unpacks DNA. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15] Scientists from Moscow State University (MSU) working with an international team of researchers have identified the structure of one of the key regions of telomerase—a so-called "cellular immortality" ribonucleoprotein. [14] Researchers from Tokyo Metropolitan University used a light-sensitive iridium-palladium catalyst to make "sequential" polymers, using visible light to change how building blocks are combined into polymer chains. [13] Researchers have fused living and non-living cells for the first time in a way that allows them to work together, paving the way for new applications. [12]
Category: Artificial Intelligence

[521] viXra:1807.0128 [pdf] submitted on 2018-07-05 09:49:30

Artificial Intelligence Run Funds

Authors: George Rajna
Comments: 50 Pages.

A computer can trounce a human chess master and solve complex mathematical calculations in seconds. Can it do a better job investing your money than a flesh-and-blood portfolio manager? [29] A country that thinks its adversaries have or will get AI weapons will want to get them too. Wide use of AI-powered cyberattacks may still be some time away. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[520] viXra:1807.0124 [pdf] submitted on 2018-07-05 18:09:36

Implementation of Regional-CNN and SSD Machine Learning Object Detection Architectures for the Real Time Analysis of Blood Borne Pathogens in Dark Field Microscopy

Authors: Daniel Fleury, Angelica Fleury
Comments: 10 Pages.

The emerging use of visualization techniques in pathology and microbiology has been accelerated by machine learning (ML) approaches to image preprocessing, classification, and feature extraction in increasingly complex datasets. Modern convolutional neural network (CNN) architectures have grown into a broad family of image enhancement and recognition methods, including combined classification and localization of single- and multi-object images. A CNN builds up complexity rapidly, first detecting borders, edges, and colours in images and eventually becoming capable of mapping intricate objects and shapes. This paper investigates the disparities between TensorFlow object detection APIs, specifically the Single Shot Detector (SSD) MobileNet V1 and Faster R-CNN Inception V2 models, to sample the trade-off between accuracy/precision and real-time visualization capability. The case for rapid ML medical image analysis is framed around regions with limited access to pathology and disease prevention departments (e.g. developing and impoverished countries). Dark-field microscopy datasets of an initial 62 XML/JPG-annotated training files were processed under Malaria and Syphilis classes, and model training was halted as soon as the loss values stabilized and converged.
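A minimal inference sketch with the TensorFlow Object Detection API is given below. It assumes a frozen graph exported by the API's standard export script, which exposes the `image_tensor`, `detection_boxes`, `detection_scores` and `detection_classes` tensors; the file path and the 0.5 score threshold are placeholders, and this is not the authors' released code.

```python
import numpy as np
import tensorflow as tf  # TF Object Detection API frozen graphs use the TF1 runtime
from PIL import Image

# Frozen graph exported with export_inference_graph.py, e.g. an SSD MobileNet V1
# or Faster R-CNN Inception V2 model fine-tuned on the Malaria/Syphilis classes.
PATH_TO_FROZEN_GRAPH = "frozen_inference_graph.pb"  # placeholder path

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(PATH_TO_FROZEN_GRAPH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

def detect(image_path):
    """Run one dark-field micrograph through the detector and return the
    boxes, scores and class ids above a 0.5 confidence threshold."""
    image = np.array(Image.open(image_path).convert("RGB"))[None, ...]
    with tf.compat.v1.Session(graph=graph) as sess:
        boxes, scores, classes = sess.run(
            ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
            feed_dict={"image_tensor:0": image},
        )
    keep = scores[0] > 0.5
    return boxes[0][keep], scores[0][keep], classes[0][keep].astype(int)
```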
Category: Artificial Intelligence

[519] viXra:1807.0118 [pdf] submitted on 2018-07-04 06:04:53

How Computers See Faces

Authors: George Rajna
Comments: 46 Pages.

Computers started to be able to recognize human faces in images decades ago, but now artificial intelligence systems are rivaling people's ability to classify objects in photos and videos. [26] If a picture paints a thousand words, facial recognition paints two: It's biased. [25] While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[518] viXra:1807.0063 [pdf] submitted on 2018-07-04 05:46:34

Faster Big-Data Analysis

Authors: George Rajna
Comments: 26 Pages.

A research team at Korea's Daegu Gyeongbuk Institute of Science and Technology (DGIST) succeeded in analyzing big data up to 1,000 times faster than existing technology by using GPU-based 'GMiner' technology. [19] A team of researchers with members from IBM Research-Zurich and RWTH Aachen University has announced the development of a new PCM (phase change memory) design that offers miniaturized memory cell volume down to three nanometers. [18] Monatomic glassy antimony might be used as a new type of single-element phase change memory. [17] Physicists have designed a 3-D quantum memory that addresses the tradeoff between achieving long storage times and fast readout times, while at the same time maintaining a compact form. [16] Quantum memories are devices that can store quantum information for a later time, which are usually implemented by storing and re-emitting photons with certain quantum states. [15] The researchers engineered diamond strings that can be tuned to quiet a qubit's environment and improve memory from tens to several hundred nanoseconds, enough time to do many operations on a quantum chip. [14] Intel has announced the design and fabrication of a 49-qubit superconducting quantum-processor chip at the Consumer Electronics Show in Las Vegas. To improve our understanding of the so-called quantum properties of materials, scientists at the TU Delft investigated thin slices of SrIrO3, a material that belongs to the family of complex oxides. [12] New research carried out by CQT researchers suggest that standard protocols that measure the dimensions of quantum systems may return incorrect numbers. [11] Is entanglement really necessary for describing the physical world, or is it possible to have some post-quantum theory without entanglement? [10] A trio of scientists who defied Einstein by proving the nonlocal nature of quantum entanglement will be honoured with the John Stewart Bell Prize from the University of Toronto (U of T). [9] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer using Quantum Information. In August 2013, the achievement of "fully deterministic" quantum teleportation, using a hybrid technique, was reported. On 29 May 2014, scientists announced a reliable way of transferring data by quantum teleportation. Quantum teleportation of data had been done before but with highly unreliable methods. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer with the help of Quantum Information.
Category: Artificial Intelligence

[517] viXra:1806.0463 [pdf] submitted on 2018-06-30 16:33:02

The Language and Venue for True AI

Authors: Salvatore Gerard Micheal
Comments: 2 Pages.

Expert systems, counterintuitively, are a venue for a solution to the problem of true AI.
Category: Artificial Intelligence

[516] viXra:1806.0446 [pdf] submitted on 2018-06-30 05:03:04

Facial Recognition

Authors: George Rajna
Comments: 45 Pages.

If a picture paints a thousand words, facial recognition paints two: It's biased. [25] While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[515] viXra:1806.0419 [pdf] submitted on 2018-06-27 08:19:23

AI Understand Volcanic Eruptions

Authors: George Rajna
Comments: 44 Pages.

Scientists led by Daigo Shoji from the Earth-Life Science Institute (Tokyo Institute of Technology) have shown that a type of artificial intelligence called a convolutional neural network can be trained to categorize volcanic ash particle shapes. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20]
Category: Artificial Intelligence

[514] viXra:1806.0402 [pdf] submitted on 2018-06-28 03:27:39

New Sufficient Conditions of Robust Recovery for Low-Rank Matrices

Authors: Jianwen Huang, Jianjun Wang, Feng Zhang, Wendong Wang
Comments: 18 Pages.

In this paper we investigate the reconstruction conditions of nuclear norm minimization for low-rank matrix recovery from a given linear system of equality constraints. Sufficient conditions are derived to guarantee robust reconstruction in the bounded $l_2$ and Dantzig selector noise settings $(\epsilon\neq0)$, or exact reconstruction in the noiseless setting $(\epsilon=0)$, of all rank-$r$ matrices $X\in\mathbb{R}^{m\times n}$ from $b=\mathcal{A}(X)+z$ via nuclear norm minimization. Furthermore, we show not only that for $t=1$ the upper bound on $\delta_r$ coincides with the result of Cai and Zhang \cite{Cai and Zhang}, but also that the obtained upper bounds on the recovery error are better. Finally, we prove that the restricted isometry property condition is sharp.
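Since the result above is purely theoretical, a small numerical sketch may help make the recovery problem concrete. The Python snippet below, assuming a random Gaussian measurement operator and a toy rank-2 ground truth, solves the nuclear norm minimization by proximal gradient descent with singular value thresholding; it is a generic textbook solver for the stated problem, not an algorithm or experiment from the paper, and all sizes and parameters are invented for illustration.

import numpy as np

def svt(M, tau):
    # Singular value thresholding: the prox operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def recover_low_rank(A, b, shape, lam=0.05, iters=500):
    # Proximal gradient (ISTA) for  min_X 0.5*||A vec(X) - b||^2 + lam*||X||_*
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
    X = np.zeros(shape)
    for _ in range(iters):
        grad = (A.T @ (A @ X.ravel() - b)).reshape(shape)
        X = svt(X - step * grad, step * lam)
    return X

# Toy instance: rank-2 ground truth observed through a random Gaussian map.
rng = np.random.default_rng(0)
m, n, r, p = 20, 15, 2, 200                      # p linear measurements
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((p, m * n)) / np.sqrt(p)
b = A @ X_true.ravel() + 0.01 * rng.standard_normal(p)   # epsilon != 0 case
X_hat = recover_low_rank(A, b, (m, n))
print("relative error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))

The step size 1/||A||_2^2 is the usual Lipschitz-constant choice for this objective; noisier measurements or a larger lam simply trade recovery error against shrinkage.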
Category: Artificial Intelligence

[513] viXra:1806.0385 [pdf] submitted on 2018-06-25 08:43:49

Train Your Robot

Authors: George Rajna
Comments: 39 Pages.

"Our goal is to enable machines to behave appropriately in social situations. Our graphs capture a lot of high-level properties of human situations that haven't been explored in prior work." [23] A self-driving vehicle has to detect objects, track them over time, and predict where they will be in the future in order to plan a safe manoeuvre. [22] In order to improve world food conditions, a team around computer science professor Kristian Kersting was inspired by the technology behind Google News. [21] Small angle X-ray scattering (SAXS) is one of a number of biophysical techniques used for determining the structural characteristics of biomolecules. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[512] viXra:1806.0369 [pdf] submitted on 2018-06-26 07:56:17

AI Recreates Periodic Table

Authors: George Rajna
Comments: 43 Pages.

A new artificial intelligence (AI) program developed by Stanford physicists accomplished the same feat in just a few hours. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[511] viXra:1806.0348 [pdf] submitted on 2018-06-23 07:42:12

Data Ethics

Authors: George Rajna
Comments: 38 Pages.

But moral questions about what data should be collected and how it should be used are only the beginning. [23] A self-driving vehicle has to detect objects, track them over time, and predict where they will be in the future in order to plan a safe manoeuvre. [22] In order to improve world food conditions, a team around computer science professor Kristian Kersting was inspired by the technology behind Google News. [21] Small angle X-ray scattering (SAXS) is one of a number of biophysical techniques used for determining the structural characteristics of biomolecules. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than—today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[510] viXra:1806.0346 [pdf] submitted on 2018-06-23 11:15:18

Brainwaves Controlling Robots

Authors: George Rajna
Comments: 28 Pages.

Getting robots to do things isn't easy: usually scientists have to either explicitly program them or get them to understand how humans communicate via language. [18] Behind every self-driving car, self-learning robot and smart building hides a variety of advanced algorithms that control learning and decision making. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10]
Category: Artificial Intelligence

[509] viXra:1806.0332 [pdf] submitted on 2018-06-22 08:50:37

Artificial Intelligence Hunger

Authors: George Rajna
Comments: 34 Pages.

In order to improve world food conditions, a team around computer science professor Kristian Kersting was inspired by the technology behind Google News. [21] Small angle X-ray scattering (SAXS) is one of a number of biophysical techniques used for determining the structural characteristics of biomolecules. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
Category: Artificial Intelligence

[508] viXra:1806.0314 [pdf] submitted on 2018-06-23 05:17:26

Automated Driving Algorithm

Authors: George Rajna
Comments: 36 Pages.

A self-driving vehicle has to detect objects, track them over time, and predict where they will be in the future in order to plan a safe manoeuvre. [22] In order to improve world food conditions, a team around computer science professor Kristian Kersting was inspired by the technology behind Google News. [21] Small angle X-ray scattering (SAXS) is one of a number of biophysical techniques used for determining the structural characteristics of biomolecules. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
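The opening sentence describes a detect-track-predict pipeline. As a rough illustration of the track-and-predict step only, the Python sketch below runs a textbook constant-velocity Kalman filter on invented 1-D position measurements and extrapolates the state one second ahead; it is a generic example, not the algorithm from the cited work, and every parameter is an assumption.

import numpy as np

def kalman_cv_track(measurements, dt=0.1, q=1.0, r=0.5):
    # 1-D constant-velocity Kalman filter: state = [position, velocity].
    # A generic textbook tracker, not any specific self-driving stack.
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    H = np.array([[1.0, 0.0]])                     # we only measure position
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])            # process noise
    R = np.array([[r]])                            # measurement noise
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    for z in measurements[1:]:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x, F

# Toy pedestrian moving at ~1.5 m/s, observed with noisy position readings.
rng = np.random.default_rng(0)
true_pos = 1.5 * np.arange(0, 3, 0.1)
zs = true_pos + 0.5 * rng.standard_normal(true_pos.size)
x, F = kalman_cv_track(zs)
x_future = np.linalg.matrix_power(F, 10) @ x       # predict 1 s ahead
print("estimated position %.2f m, velocity %.2f m/s" % (x[0, 0], x[1, 0]))
print("predicted position in 1 s: %.2f m" % x_future[0, 0])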
Category: Artificial Intelligence

[507] viXra:1806.0306 [pdf] submitted on 2018-06-22 03:59:47

Machine Learning Biomolecules

Authors: George Rajna
Comments: 33 Pages.

Small angle X-ray scattering (SAXS) is one of a number of biophysical techniques used for determining the structural characteristics of biomolecules. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[506] viXra:1806.0302 [pdf] submitted on 2018-06-21 07:02:32

New Artificial Neural Networks Method

Authors: George Rajna
Comments: 47 Pages.

An international team of scientists from Eindhoven University of Technology, University of Texas at Austin, and University of Derby, has developed a revolutionary method that quadratically accelerates artificial intelligence (AI) training algorithms. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google’s DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22]
Category: Artificial Intelligence

[505] viXra:1806.0286 [pdf] submitted on 2018-06-21 03:50:18

An End-to-end Model of Predicting Diverse Ranking On Heterogeneous Feeds

Authors: Zizhe Gao, Zheng Gao, Heng Huang, Zhuoren Jiang, Yuliang Yan
Comments: 6 Pages.

As an external assistance for online shopping, multimedia content (feeds) plays an important role in the e-Commerce field. Feeds in the formats of posts, item lists and videos bring in richer auxiliary information and more authentic assessments of commodities (items). In Alibaba, the largest Chinese online retailer, besides the traditional item search engine (ISE), a content search engine (CSE) is utilized for feed recommendation as well. However, the diversity of feed types raises a challenge for the CSE to rank heterogeneous feeds. In this paper, a two-step end-to-end model including Heterogeneous Type Sorting and Homogeneous Feed Ranking is proposed to address this problem. In the first step, an independent Multi-Armed Bandit (iMAB) model is proposed first, and an improved personalized Markov Deep Neural Network (pMDNN) model is developed later on. In the second step, an existing Deep Structured Semantic Model (DSSM) is utilized for homogeneous feed ranking. An A/B test in the Alibaba production environment shows that, by considering user preference and feed type dependency, the pMDNN model significantly outperforms the iMAB model on the heterogeneous feed ranking problem.
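As a loose illustration of the first step (heterogeneous type sorting), the Python sketch below runs a plain epsilon-greedy multi-armed bandit over three hypothetical feed types. The click-through rates, the epsilon-greedy policy and all names are assumptions for illustration; this is not the paper's iMAB model, and nothing of pMDNN or DSSM is reproduced here.

import numpy as np

FEED_TYPES = ["post", "item_list", "video"]       # heterogeneous feed types

class EpsilonGreedyBandit:
    # Toy stand-in for the type-sorting step: one arm per feed type,
    # reward = 1 if the user clicks a feed of that type, else 0.
    def __init__(self, n_arms, eps=0.1, seed=0):
        self.eps = eps
        self.counts = np.zeros(n_arms)
        self.values = np.zeros(n_arms)            # running mean reward per arm
        self.rng = np.random.default_rng(seed)

    def select(self):
        if self.rng.random() < self.eps:
            return int(self.rng.integers(len(self.counts)))
        return int(np.argmax(self.values))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Simulated click-through rates per feed type (assumed, for illustration only).
true_ctr = {"post": 0.05, "item_list": 0.12, "video": 0.08}
bandit = EpsilonGreedyBandit(len(FEED_TYPES))
rng = np.random.default_rng(1)
for _ in range(10000):
    arm = bandit.select()
    reward = float(rng.random() < true_ctr[FEED_TYPES[arm]])
    bandit.update(arm, reward)
print(dict(zip(FEED_TYPES, bandit.values.round(3))))   # should rank item_list first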
Category: Artificial Intelligence

[504] viXra:1806.0284 [pdf] submitted on 2018-06-21 05:08:23

Deep Learning Nuclear Events

Authors: George Rajna
Comments: 30 Pages.

A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[503] viXra:1806.0263 [pdf] submitted on 2018-06-15 21:10:42

Synthetic Human and Genius

Authors: Salvatore Gerard Micheal
Comments: 1 Page.

Why, just today, I gave up on AI-SA: artificial intelligence and synthetic awareness.
Category: Artificial Intelligence

[502] viXra:1806.0202 [pdf] submitted on 2018-06-14 08:13:55

AI Needs Hardware Accelerators

Authors: George Rajna
Comments: 44 Pages.

In a recent paper published in Nature, our IBM Research AI team demonstrated deep neural network (DNN) training with large arrays of analog memory devices at the same accuracy as a Graphical Processing Unit (GPU)-based system. [25] Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17]
Category: Artificial Intelligence

[501] viXra:1806.0161 [pdf] submitted on 2018-06-13 03:36:25

Machine Learning Quantum Phases

Authors: George Rajna
Comments: 42 Pages.

Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
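As a loose classical analogue of machine-learned phase classification (not the 12-particle quantum system studied in the work above), the Python sketch below trains a logistic regression to separate synthetic "ordered" from "disordered" spin chains using two hand-built order-parameter features; all data, labels and parameters are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
L, n = 12, 500                                   # 12 classical spins per sample

# "Ordered" samples: aligned spins with 10% thermal flips; "disordered": random.
base = np.sign(rng.standard_normal((n, 1))) * np.ones((n, L))
ordered = np.where(rng.random((n, L)) < 0.1, -base, base)
disordered = rng.choice([-1.0, 1.0], size=(n, L))
spins = np.vstack([ordered, disordered])
y = np.array([0] * n + [1] * n)                  # 0 = ordered phase, 1 = disordered

# Two physics-motivated features: squared magnetisation and mean
# nearest-neighbour correlation along the chain.
m2 = spins.mean(axis=1) ** 2
corr = (spins[:, :-1] * spins[:, 1:]).mean(axis=1)
X = np.column_stack([m2, corr])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))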
Category: Artificial Intelligence

[500] viXra:1806.0134 [pdf] submitted on 2018-06-10 17:03:06

What I Would Ask a True ai When We Develop it

Authors: Salvatore Gerard Micheal
Comments: 5 Pages.

An essay about artificial intelligence, synthetic awareness, and why we need both.
Category: Artificial Intelligence

[499] viXra:1806.0075 [pdf] submitted on 2018-06-06 12:28:50

Schur Group Theory Software Interfacing with Ruby Language in the Context of Ruby Based Machine Learning - An Interesting Insight into the Informatics World of Group Theory and its Nano-Bio Applications.

Authors: Nirmal Tej kumar
Comments: 3 Pages. Simple Technical Notes/Short Communication on SchurGroupTheory Software

We are inspired by Lie algebra and its interesting applications across science and technology domains involving multi-disciplinary R&D, particularly in the context of nanotechnology. We therefore present a simple technical note on the topic named in the title. The Schur group theory software, written in the C language, can easily be interfaced with the Ruby language; this lets us exploit Ruby's many useful features in the context of machine learning, IoT and cloud applications.
Category: Artificial Intelligence

[498] viXra:1806.0072 [pdf] submitted on 2018-06-06 13:39:03

Artificial Intelligence Analyze Causation

Authors: George Rajna
Comments: 52 Pages.

Now, researchers have tested the first artificial intelligence model to identify and rank many causes in real-world problems without time-sequenced data, using a multi-nodal causal structure and Directed Acyclic Graphs. [29] A country that thinks its adversaries have or will get AI weapons will want to get them too. Wide use of AI-powered cyberattacks may still be some time away. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[497] viXra:1806.0044 [pdf] submitted on 2018-06-04 06:31:48

Deep Learning on MIMIC-III: Predicting Mortality Within 24 Hours

Authors: Ayoub ABRAICH
Comments: 97 Pages.

This project describes data mining on the MIMIC-III database. The objective is to predict in-hospital death using MIMIC-III. The project follows the Knowledge Discovery in Databases (KDD) process: 1. Selection and extraction of a multivariate time-series dataset from a database of millions of rows by writing SQL queries. 2. Preprocessing and cleaning the time series into a tidy dataset by exploring the data, handling missing data (missing-data rate > 50%) and removing noise/outliers. 3. Development of a predictive model that assigns a severity indicator (probability of mortality) to the biomedical time series, implementing several algorithms such as gradient-boosted decision trees and k-NN (k-nearest neighbours) with the DTW (dynamic time warping) algorithm. 4. Result: a 30% increase in F1 score (a measure of a test's accuracy) compared with the medical scoring index (SAPS II).
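Since the abstract lists k-NN with dynamic time warping among the algorithms tried, the Python sketch below shows a self-contained DTW distance and a k-NN vote on synthetic 24-hour vital-sign series. The data, labels and parameters are made up for illustration; this is not the project's actual MIMIC-III pipeline and uses no real patient data.

import numpy as np

def dtw_distance(a, b):
    # Classic O(len(a)*len(b)) dynamic-time-warping distance between two
    # univariate time series (e.g. one vital sign over the first 24 h).
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_dtw_predict(train_series, train_labels, query, k=3):
    # k-nearest-neighbours vote under the DTW metric.
    dists = np.array([dtw_distance(query, s) for s in train_series])
    nearest = np.argsort(dists)[:k]
    return int(np.round(np.mean(np.asarray(train_labels)[nearest])))

# Toy data: 24 hourly heart-rate samples per stay, label 1 = in-hospital death.
rng = np.random.default_rng(0)
stable   = [80 + 5 * rng.standard_normal(24) for _ in range(20)]
unstable = [80 + np.linspace(0, 40, 24) + 5 * rng.standard_normal(24) for _ in range(20)]
X_train = stable + unstable
y_train = [0] * 20 + [1] * 20
query = 80 + np.linspace(0, 35, 24) + 5 * rng.standard_normal(24)
print("predicted mortality label:", knn_dtw_predict(X_train, y_train, query))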
Category: Artificial Intelligence

[496] viXra:1806.0007 [pdf] submitted on 2018-06-02 04:37:40

Apple Cleared Path for App Update

Authors: George Rajna
Comments: 41 Pages.

A team of researchers including U of A engineering and physics faculty has developed a new method of detecting single photons, or light particles, using quantum dots. [27] Recent research from Kumamoto University in Japan has revealed that polyoxometalates (POMs), typically used for catalysis, electrochemistry, and photochemistry, may also be used in a technique for analyzing quantum dot (QD) photoluminescence (PL) emission mechanisms. [26] Researchers have designed a new type of laser called a quantum dot ring laser that emits red, orange, and green light. [25] The world of nanosensors may be physically small, but the demand is large and growing, with little sign of slowing. [24] In a joint research project, scientists from the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI), the Technische Universität Berlin (TU) and the University of Rostock have managed for the first time to image free nanoparticles in a laboratory experiment using a highintensity laser source. [23] For the first time, researchers have built a nanolaser that uses only a single molecular layer, placed on a thin silicon beam, which operates at room temperature. [22] A team of engineers at Caltech has discovered how to use computer-chip manufacturing technologies to create the kind of reflective materials that make safety vests, running shoes, and road signs appear shiny in the dark. [21] In the September 23th issue of the Physical Review Letters, Prof. Julien Laurat and his team at Pierre and Marie Curie University in Paris (Laboratoire Kastler Brossel-LKB) report that they have realized an efficient mirror consisting of only 2000 atoms. [20] Physicists at MIT have now cooled a gas of potassium atoms to several nanokelvins—just a hair above absolute zero—and trapped the atoms within a two-dimensional sheet of an optical lattice created by crisscrossing lasers. Using a high-resolution microscope, the researchers took images of the cooled atoms residing in the lattice. [19]
Category: Artificial Intelligence

[495] viXra:1805.0546 [pdf] submitted on 2018-05-31 09:44:38

Deep Learning Hologram Reconstruction

Authors: George Rajna
Comments: 47 Pages.

Deep learning, which uses multi-layered artificial neural networks, is a form of machine learning that has demonstrated significant advances in many fields, including natural language processing, image/video labeling and captioning. [26]
Category: Artificial Intelligence

[494] viXra:1805.0545 [pdf] submitted on 2018-05-31 13:02:19

Face to Phase Recognition

Authors: George Rajna
Comments: 49 Pages.

Frenkel and his collaborators have now developed such a "phase-recognition" tool—or more precisely, a way to extract "hidden" signatures of an unknown structure from measurements made by existing tools. [27] Deep learning, which uses multi-layered artificial neural networks, is a form of machine learning that has demonstrated significant advances in many fields, including natural language processing, image/video labeling and captioning. [26]
Category: Artificial Intelligence

[493] viXra:1805.0539 [pdf] submitted on 2018-05-30 10:24:13

Machine Learning Accelerate Bioengineering

Authors: George Rajna
Comments: 45 Pages.

Scientists from the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a way to use machine learning to dramatically accelerate the design of microbes that produce biofuel. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[492] viXra:1805.0520 [pdf] submitted on 2018-05-30 03:30:54

An English-Hindi Code-Mixed Corpus: Stance Annotation and Baseline System

Authors: Sahil Swami, Ankush Khandelwal, Vinay Singh, Syed Sarfaraz Akhtar, Manish Shrivastava
Comments: 9 Pages. CICLing 2018

Social media has become one of the main channels for people to communicate and share their views with society. We can often detect from these views whether a person is in favor of, against or neutral towards a given topic. These opinions from social media are very useful for various companies. We present a new dataset that consists of 3545 English-Hindi code-mixed tweets with opinions towards the Demonetisation implemented in India in 2016, which was followed by a large countrywide debate. We present a baseline supervised classification system for stance detection developed on the same dataset, which uses various machine learning techniques to achieve an accuracy of 58.7% on 10-fold cross validation.
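The dataset itself is not reproduced in this listing, so the Python sketch below only shows the general shape such a baseline could take: TF-IDF character n-grams feeding a linear SVM, scored with 10-fold cross-validation on a handful of invented code-mixed tweets. It is an assumed stand-in, not the authors' system, and the toy accuracy it prints bears no relation to the reported 58.7%.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for English-Hindi code-mixed tweets about demonetisation;
# labels: 0 = against, 1 = favour, 2 = neutral (the three stance classes).
tweets = [
    "notebandi ne sab kuch kharab kar diya, very bad move",
    "demonetisation great step against black money, modi ji zindabad",
    "bank ke bahar line hi line, no cash anywhere",
    "good decision, thoda pain hai but long term gain",
    "mujhe koi farak nahi padta, cashless ya cash",
    "just waiting to see what happens next with notebandi",
] * 10   # repeat so 10-fold CV has enough samples per class
labels = [0, 1, 0, 1, 2, 2] * 10

# Character n-grams cope better with romanised Hindi spelling variation
# than word tokens alone.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
scores = cross_val_score(model, tweets, labels, cv=10, scoring="accuracy")
print("10-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))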
Category: Artificial Intelligence

[491] viXra:1805.0519 [pdf] submitted on 2018-05-30 03:34:17

A Corpus of English-Hindi Code-Mixed Tweets for Sarcasm Detection

Authors: Sahil Swami, Ankush Khandelwal, Vinay Singh, Syed Sarfaraz Akhtar, Manish Shrivastava
Comments: 9 Pages. CICLing 2018

Social media platforms like Twitter and Facebook have become two of the largest media used by people to express their views towards different topics. The generation of such large volumes of user data has made NLP tasks like sentiment analysis and opinion mining much more important. Using sarcasm in texts on social media has become a popular trend lately. Sarcasm reverses the meaning and polarity of what is implied by the text, which poses a challenge for many NLP tasks. The task of sarcasm detection in text is gaining more and more importance for both commercial and security services. We present the first English-Hindi code-mixed dataset of tweets marked for the presence of sarcasm and irony, where each token is also annotated with a language tag. We present a baseline supervised classification system developed on the same dataset which achieves an average F-score of 78.4 using a random forest classifier and 10-fold cross validation.
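Since the baseline described here uses a random forest classifier, 10-fold cross-validation and an F-score metric, the Python sketch below shows that evaluation loop on invented example tweets. The word-level TF-IDF features and all data are assumptions, and the printed score is unrelated to the paper's 78.4.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical code-mixed tweets; label 1 = sarcastic/ironic, 0 = literal.
tweets = [
    "wah kya service hai, 3 ghante se wait kar raha hoon #sarcasm",
    "traffic me phasna is my favourite hobby, obviously",
    "aaj ka match bahut accha tha, well played team",
    "thank you for the quick delivery, package aa gaya on time",
] * 15
labels = [1, 1, 0, 0] * 15

model = make_pipeline(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
f1 = cross_val_score(model, tweets, labels, cv=10, scoring="f1")
print("10-fold average F1: %.3f" % f1.mean())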
Category: Artificial Intelligence

[490] viXra:1805.0509 [pdf] submitted on 2018-05-28 10:26:25

AI for Solar Cells

Authors: George Rajna
Comments: 47 Pages.

Solar cells will play a key role in shifting to a renewable economy. Organic photovoltaics (OPVs) are a promising class of solar cells, based on a light-absorbing organic molecule combined with a semiconducting polymer. [26] Today IBM Research is introducing IBM Crypto Anchor Verifier, a new technology that brings innovations in AI and optical imaging together to help prove the identity and authenticity of objects. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17]
Category: Artificial Intelligence

[489] viXra:1805.0504 [pdf] submitted on 2018-05-28 15:39:24

An Insight into the World of Hidden Markov Models Based on Higher Order Logic (HOL)/Scala/Haskell/JVM/IoT in the Context of NLP & Medical Image Processing Applications.

Authors: Nirmal Tej kumar
Comments: 3 Pages. Technical Notes on HMM/HOL/NLP to probe Medical Images

As explained in the title above, we propose to design, develop, implement and test an approach that probes interesting aspects of medical imaging domains using HOL/NLP/HMM concepts.
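As a concrete anchor for the HMM side of the proposal, the Python sketch below implements Viterbi decoding for a discrete-observation HMM with made-up parameters, framed loosely as labelling pixels along a scan line. The two states, the transition and emission probabilities, and the observation sequence are all hypothetical, and nothing here uses HOL, NLP tooling or real medical images.

import numpy as np

def viterbi(obs, pi, A, B):
    # Most likely hidden-state path for a discrete-observation HMM
    # (log-space to avoid underflow). obs: sequence of observation indices.
    T, N = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.zeros((T, N))
    back = np.zeros((T, N), dtype=int)
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA          # (from_state, to_state)
        back[t] = np.argmax(scores, axis=0)
        delta[t] = scores[back[t], np.arange(N)] + logB[:, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hypothetical 2-state model: 0 = healthy tissue, 1 = lesion; observations are
# quantised pixel intensities along a scan line (0 = dark, 1 = bright).
pi = np.array([0.7, 0.3])
A  = np.array([[0.9, 0.1],
               [0.2, 0.8]])
B  = np.array([[0.8, 0.2],     # healthy tissue is mostly dark
               [0.3, 0.7]])    # lesions are mostly bright
obs = [0, 0, 1, 1, 1, 0, 1, 1]
print("decoded states:", viterbi(obs, pi, A, B))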
Category: Artificial Intelligence

[488] viXra:1805.0472 [pdf] submitted on 2018-05-26 09:08:15

AI Can't Solve Everything

Authors: George Rajna
Comments: 44 Pages.

While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[487] viXra:1805.0459 [pdf] submitted on 2018-05-25 09:09:21

AI Changing Science

Authors: George Rajna
Comments: 42 Pages.

Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images.
Category: Artificial Intelligence

[486] viXra:1805.0436 [pdf] submitted on 2018-05-23 12:57:08

AI with Optical Scanning

Authors: George Rajna
Comments: 45 Pages.

Today IBM Research is introducing IBM Crypto Anchor Verifier, a new technology that brings innovations in AI and optical imaging together to help prove the identity and authenticity of objects. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20]
Category: Artificial Intelligence

[485] viXra:1805.0380 [pdf] submitted on 2018-05-22 15:08:49

Bringing Deep Learning to IoT Devices Using Higher Order Logic (HOL)/Scala/Haskell/JVM as an Informatics Platform – A Novel Suggestion in the Context of Hardware/Software/Firmware Co-Design Approaches.

Authors: Nirmal Tej kumar
Comments: 3 Pages. Short Communication

As explained in the title above, it is inspiring to probe the frontiers of IoT and its application domains in science and technology using HOL/Scala/Haskell/JVM. To the best of our knowledge, this is one of the pioneering efforts in this promising, challenging and inspiring aspect of deep learning.
Category: Artificial Intelligence

[484] viXra:1805.0365 [pdf] submitted on 2018-05-21 05:36:58

AI Combined with Stem Cells

Authors: George Rajna
Comments: 42 Pages.

AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[483] viXra:1805.0354 [pdf] submitted on 2018-05-20 06:35:52

Google Pushes Artificial Intelligence

Authors: George Rajna
Comments: 40 Pages.

According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[482] viXra:1805.0311 [pdf] submitted on 2018-05-15 11:07:14

Modeling and Simulation of Servo Feed System of CNC Machine Tool Based on Matlab/simulink

Authors: Subom YUN, Onjoeng SIM
Comments: 5 Pages. 9 figures, 5 equations, 9 references.

In industry, CNC machine tools play an irreplaceable role: they not only enable rapid industrial production but also save manpower and material resources, and they are a symbol of modernization. As an important part of a CNC machine tool, the feed system strongly influences the machining process and therefore the quality of the product. Based on the principles of mechanical dynamics, we establish a mathematical model of the machine tool feed drive system and use Simulink (the dynamic simulation tool in MATLAB) to construct a simulation model of the feed system of a lathe. We also design an ANFIS-PID controller to cope with the complexity of the mathematical model of the plant and the model uncertainty that exists in the presence of external noise. These efforts provide an effective foundation for the improvement of CNC machine tools.
Category: Artificial Intelligence
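
The abstract above models a feed drive and closes the loop with an ANFIS-PID controller in MATLAB/Simulink, none of which is reproduced in this listing. As a rough, hedged illustration of the control idea only, the sketch below simulates a hypothetical second-order feed-drive plant under a plain fixed-gain PID loop in Python; every parameter value is invented and the adaptive neuro-fuzzy tuning is omitted.

```python
# Minimal sketch (not the authors' Simulink model): a feed-drive axis modelled as
# J*x'' + B*x' = u, driven by a fixed-gain PID position loop with additive noise.
# All numbers are hypothetical; the ANFIS adaptation from the abstract is omitted.
import numpy as np

J, B = 0.01, 0.1              # hypothetical inertia and viscous friction of the axis
Kp, Ki, Kd = 50.0, 20.0, 1.0  # hypothetical fixed PID gains
dt, T = 1e-3, 2.0             # integration step and simulation horizon [s]
ref = 1.0                     # 1-rad step command for the feed axis

x = v = integ = 0.0
prev_err = ref - x            # avoid a derivative kick on the first step
rng = np.random.default_rng(0)
trace = []
for _ in range(int(T / dt)):
    err = ref - x
    integ += err * dt
    deriv = (err - prev_err) / dt
    u = Kp * err + Ki * integ + Kd * deriv   # PID control torque
    u += 0.01 * rng.normal()                 # crude stand-in for external noise
    a = (u - B * v) / J                      # plant dynamics
    v += a * dt                              # explicit Euler integration
    x += v * dt
    prev_err = err
    trace.append(x)

print("position after %.1f s: %.4f (command %.1f)" % (T, trace[-1], ref))
```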

[481] viXra:1805.0295 [pdf] submitted on 2018-05-14 22:08:38

Modeling and Simulation of Feed System Design of CNC Machine Tool Based on Matlab/simulink

Authors: Yunsubom
Comments: 8 Pages.

In industry, CNC machine tools play an irreplaceable role: they not only enable rapid industrial production but also save manpower and material resources, and they are a symbol of modernization. As an important part of a CNC machine tool, the feed system strongly influences the machining process and therefore the quality of the product. Based on the principles of mechanical dynamics, we establish a mathematical model of the machine tool feed drive system and use Simulink (the dynamic simulation tool in MATLAB) to construct a simulation model of the feed system of a lathe. We also design an ANFIS-PID controller to cope with the complexity of the mathematical model of the plant and the model uncertainty that exists in the presence of external noise. These efforts provide an effective foundation for the improvement of CNC machine tools.
Category: Artificial Intelligence

[480] viXra:1805.0279 [pdf] submitted on 2018-05-13 08:47:06

AI Find Alien Intelligence

Authors: George Rajna
Comments: 37 Pages.

In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[479] viXra:1805.0277 [pdf] submitted on 2018-05-13 10:13:09

Strategy on Artificial Intelligence

Authors: George Rajna
Comments: 38 Pages.

Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[478] viXra:1805.0267 [pdf] submitted on 2018-05-13 15:24:24

An Improved Method of Generating Z-Number Based on Owa Weights and Maximum Entropy

Authors: Bingyi Kang
Comments: 24 Pages.

How to generate a Z-number is an important and open issue in the uncertain information processing of Z-numbers. In [1], a method of generating Z-numbers using OWA weights and maximum entropy is investigated. However, according to the definition of a Z-number, the meaning of the method in [1] is not sufficiently clear. Inspired by the methodology in [1], we improve the method of determining a Z-number based on OWA weights and maximum entropy so that the meaning of the resulting Z-number is clearer. Some numerical examples are used to illustrate the effectiveness of the proposed method.
Category: Artificial Intelligence
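
The entry above builds on OWA weights obtained from the maximum-entropy principle. The listing gives no formulas, so the sketch below only illustrates the standard maximum-entropy OWA (MEOWA) setup: for a chosen orness level, entropy is maximized numerically subject to the weights summing to one. The dimension and orness target are arbitrary, and the paper's Z-number construction itself is not reproduced.

```python
# Sketch of maximum-entropy OWA (MEOWA) weights for a given orness level, solved
# numerically with SciPy. Only the generic OWA construction is shown; the paper's
# specific Z-number generation method is not reproduced here.
import numpy as np
from scipy.optimize import minimize

def meowa_weights(n, alpha):
    """Maximize entropy -sum(w*log w) s.t. sum(w) = 1 and orness(w) = alpha."""
    def neg_entropy(w):
        w = np.clip(w, 1e-12, 1.0)
        return np.sum(w * np.log(w))            # minimizing this maximizes entropy
    cons = [
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
        {"type": "eq",
         "fun": lambda w: np.dot(np.arange(n - 1, -1, -1), w) / (n - 1) - alpha},
    ]
    w0 = np.full(n, 1.0 / n)                    # uniform starting point
    res = minimize(neg_entropy, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

w = meowa_weights(n=5, alpha=0.7)               # illustrative orness level
print(np.round(w, 4), "orness:",
      round(float(np.dot(np.arange(4, -1, -1), w)) / 4, 3))
```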

[477] viXra:1805.0240 [pdf] submitted on 2018-05-11 08:45:03

Probabilistic Computing for AI

Authors: George Rajna
Comments: 33 Pages.

Probabilistic computing will allow future systems to comprehend and compute with uncertainties inherent in natural data, which will enable us to build computers capable of understanding, predicting and decision-making. [20] For years, the people developing artificial intelligence drew inspiration from what was known about the human brain, and it has enjoyed a lot of success as a result. Now, AI is starting to return the favor. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[476] viXra:1805.0226 [pdf] submitted on 2018-05-12 00:44:53

A Memristor based Unsupervised Neuromorphic System Towards Fast and Energy-Efficient GAN

Authors: Fuqiang Liu, Chenchen Liu
Comments: 8 Pages.

Deep learning has gained immense success in pushing today's artificial intelligence forward. To address the challenge of limited labeled data in supervised learning, unsupervised learning was proposed years ago, but low accuracy has hindered its realistic applications. The generative adversarial network (GAN) has emerged as an unsupervised learning approach with promising accuracy and is under extensive study. However, the execution of a GAN is extremely memory- and computation-intensive and results in very low speed and high power consumption. In this work, we propose a holistic solution for fast and energy-efficient GAN computation through a memristor-based neuromorphic system. First, we exploit a hardware and software co-design approach to map the computation blocks in the GAN efficiently. We also propose an efficient data flow for optimal parallelism in training and testing, depending on the computation correlations between different computing blocks. To compute the unique and complex loss of the GAN, we develop a diff-block with optimized accuracy and performance. The experimental results on big data show that our design achieves a 2.8x speedup and 6.1x energy saving compared with the traditional GPU accelerator, as well as a 5.5x speedup and 1.4x energy saving compared with the previous FPGA-based accelerator.
Category: Artificial Intelligence
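
The abstract above is about accelerating GAN execution on memristor hardware, which cannot be shown in a listing. For readers unfamiliar with the computation being accelerated, the sketch below is a minimal, software-only GAN training loop (PyTorch) on a one-dimensional Gaussian; network sizes and hyperparameters are illustrative, and nothing here models the crossbar mapping, data flow, or diff-block described in the paper.

```python
# Minimal software-only GAN loop on 1-D Gaussian data, just to show the
# generator/discriminator blocks that the abstract maps onto memristor crossbars.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 3.0 + 0.5 * torch.randn(64, 1)        # target distribution N(3, 0.5)
    fake = G(torch.randn(64, 8))
    # discriminator update: real -> 1, fake -> 0
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()
    # generator update: try to make the discriminator output 1 on fakes
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())
```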

[475] viXra:1805.0222 [pdf] submitted on 2018-05-12 05:37:50

AI take Shortcuts

Authors: George Rajna
Comments: 32 Pages.

Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
Category: Artificial Intelligence

[474] viXra:1805.0214 [pdf] submitted on 2018-05-11 03:01:10

Ai Should not be an Open Source Project

Authors: Dimiter Dobrev
Comments: 10 Pages. Bulgarian language

Who should have the Artificial Intelligence technology? This technology should belong to everybody, although not the technology itself but rather the fruits it will give us. Of course, we should not allow AI to fall into the hands of irresponsible people. Similarly, nuclear technology should benefit everyone, yet such technologies must be kept secret and should not be accessible to everyone.
Category: Artificial Intelligence

[473] viXra:1805.0195 [pdf] submitted on 2018-05-09 08:58:01

Collectives of Automata for Building of Active Systems of Artifical Intelligence

Authors: Aleksey A. Demidov
Comments: 37 Pages.

The basics of AI knowledge, presented in a simple form.
Category: Artificial Intelligence

[472] viXra:1805.0147 [pdf] submitted on 2018-05-07 09:36:53

Full Circle in Deep Learning

Authors: George Rajna
Comments: 30 Pages.

For years, the people developing artificial intelligence drew inspiration from what was known about the human brain, and it has enjoyed a lot of success as a result. Now, AI is starting to return the favor. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[471] viXra:1805.0089 [pdf] submitted on 2018-05-04 08:35:04

Group Sparse Recovery in Impulsive Noise Via Adm

Authors: Jianwen Huang, Feng Zhang, Jianjun Wang, Wendong Wang
Comments: 25 Pages.

In this paper, we consider the recovery of group sparse signals corrupted by impulsive noise. In some recent literature, researchers have utilized stable data-fitting models, such as the $l_1$-norm, the Huber penalty function and the Lorentzian norm, as substitutes for the $l_2$-norm data fidelity model in order to obtain more robust performance. In this paper, a stable model is developed that exploits the generalized $l_p$-norm as the measure of the reconstruction error. To address this model, we propose an efficient alternating direction method that incorporates the proximity operator of $l_p$-norm functions into the framework of Lagrangian methods. Moreover, to guarantee convergence of the algorithm in the case $0\leq p<1$ (the nonconvex case), we take advantage of a smoothing strategy. For both $0\leq p<1$ (the nonconvex case) and $1\leq p\leq2$ (the convex case), we derive conditions for the convergence of the proposed algorithm. Furthermore, under the block restricted isometry property with constant $\delta_{\tau k_0}<\tau/(4-\tau)$ for $0<\tau<4/3$ and $\delta_{\tau k_0}<\sqrt{(\tau-1)/\tau}$ for $\tau\geq4/3$, a sharp sufficient condition for group sparse recovery in the presence of impulsive noise and the associated upper bound on the estimation error are established. Numerical results based on synthetic block sparse signals and real-world FECG signals demonstrate the effectiveness and robustness of the new algorithm under highly impulsive noise.
Category: Artificial Intelligence
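
The paper above treats a generalized $l_p$ data-fidelity model with an alternating direction method, which cannot be reproduced from the abstract alone. As a simpler, hedged illustration of the group-sparse ingredient only, the sketch below runs proximal gradient with block soft-thresholding (the $l_{2,1}$ prox) on a synthetic block-sparse recovery problem; problem sizes, noise level and the regularization weight are invented.

```python
# Illustrative sketch only: block soft-thresholding inside a proximal-gradient
# loop for block-sparse recovery (roughly the p = 1 special case). The paper's
# generalized l_p fidelity, ADM splitting and smoothing for 0 <= p < 1 are not
# reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n, m, block = 120, 60, 10                 # signal length, measurements, block size
x_true = np.zeros(n)
x_true[20:30] = rng.normal(size=10)       # two active blocks
x_true[70:80] = rng.normal(size=10)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)

def group_soft(x, tau, block):
    """Shrink each block toward zero: scale by max(0, 1 - tau/||block||)."""
    out = x.copy()
    for s in range(0, len(x), block):
        g = x[s:s + block]
        nrm = np.linalg.norm(g)
        out[s:s + block] = 0.0 if nrm == 0 else max(0.0, 1 - tau / nrm) * g
    return out

lam = 0.05                                 # regularization weight (invented)
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    x = group_soft(x - grad / L, lam / L, block)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```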

[470] viXra:1805.0053 [pdf] submitted on 2018-05-01 04:43:59

Magnetic Waves of Neuromorphic Computing

Authors: George Rajna
Comments: 41 Pages.

A team of physicists has uncovered properties of a category of magnetic waves relevant to the development of neuromorphic computing—an artificial intelligence system that seeks to mimic human-brain function. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[469] viXra:1805.0044 [pdf] submitted on 2018-05-01 13:09:45

AI Spots Gravitational Waves

Authors: George Rajna
Comments: 23 Pages.

A deep-learning system that can sift gravitational wave signals from background noise has been created by physicists in the UK. [8] Using data from the first-ever gravitational waves detected last year, along with a theoretical analysis, physicists have shown that gravitational waves may oscillate between two different forms called "g" and "f"-type gravitational waves. [7] Astronomy experiments could soon test an idea developed by Albert Einstein almost exactly a century ago, scientists say. [6] It's estimated that 27% of all the matter in the universe is invisible, while everything from PB&J sandwiches to quasars accounts for just 4.9%. But a new theory of gravity proposed by theoretical physicist Erik Verlinde of the University of Amsterdam found out a way to dispense with the pesky stuff. [5] The proposal by the trio though phrased in a way as to suggest it's a solution to the arrow of time problem, is not likely to be addressed as such by the physics community— it's more likely to be considered as yet another theory that works mathematically, yet still can't answer the basic question of what is time. [4] The Weak Interaction transforms an electric charge in the diffraction pattern from one side to the other side, causing an electric dipole momentum change, which violates the CP and Time reversal symmetry. The Neutrino Oscillation of the Weak Interaction shows that it is a General electric dipole change and it is possible to any other temperature dependent entropy and information changing diffraction pattern of atoms, molecules and even complicated biological living structures.
Category: Artificial Intelligence

[468] viXra:1804.0334 [pdf] submitted on 2018-04-24 04:00:45

Introduction of Reflex Based Neural Network

Authors: Liang Yi
Comments: 17 Pages.

This paper introduces a new neural network that works quite differently from current neural network models. The RBNN model is based on the concept of the conditioned reflex, which widely exists in real creatures. In an RBNN, the entire learning procedure is executed by the neural network itself, which makes it not a complex mathematical model but something simple enough to be implemented in a real brain. In an RBNN, information is organized in a clear way, which makes the whole network a white box rather than a black box, so we can teach the network knowledge easily and quickly. This paper shows the power of the conditioned reflex as a search tool that can be used as the state-transfer function of a state machine. Using combinations of neurons as symbols, which play the role of letters in a traditional state machine, an RBNN can be treated as a state machine with a small number of memory units but a huge number of letters.
Category: Artificial Intelligence
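
The abstract above centres on the conditioned reflex as a learning and search primitive. The toy sketch below, which is not the author's RBNN, only illustrates that primitive: a repeatedly paired stimulus acquires a link strong enough to trigger a response on its own. The neuron names, threshold and update rule are invented for illustration.

```python
# Toy conditioned-reflex illustration, not the paper's RBNN: repeatedly pairing
# "bell" with "food" strengthens the bell->salivate link until the bell alone
# fires the response. All names, the threshold and the learning rate are invented.
neurons = {"bell": 0.0, "food": 0.0, "salivate": 0.0}
weights = {("food", "salivate"): 1.0,      # unconditioned reflex
           ("bell", "salivate"): 0.0}      # conditioned link, learned below
THRESHOLD, LEARN_RATE = 0.5, 0.2

def step(stimuli):
    """Set stimulus neurons, propagate one step, return whether 'salivate' fires."""
    for name in neurons:
        neurons[name] = 1.0 if name in stimuli else 0.0
    drive = sum(weights[(src, "salivate")] * neurons[src]
                for src in ("bell", "food"))
    neurons["salivate"] = 1.0 if drive >= THRESHOLD else 0.0
    return neurons["salivate"] == 1.0

for trial in range(10):                    # conditioning: bell and food together
    fired = step({"bell", "food"})
    if fired and neurons["bell"]:          # Hebbian-style strengthening
        w = weights[("bell", "salivate")]
        weights[("bell", "salivate")] = w + LEARN_RATE * (1.0 - w)

print("bell alone fires salivate?", step({"bell"}),
      "learned weight:", round(weights[("bell", "salivate")], 2))
```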

[467] viXra:1804.0281 [pdf] submitted on 2018-04-19 09:47:02

Perceptual Significance of Kernel Methods for Natural Image Processing

Authors: Vikas Ramachandra, Truong Nguyen
Comments: 6 Pages.

We explore the unifying connection between kernel regression, the Volterra series expansion and multiscale signal decomposition, using recent results on function estimation for system identification. We show that using any of these techniques for (non-linear) image processing tasks is (approximately) equivalent. Further, we use the relation between wavelets and the independent components of natural images. Kernel methods can be shown to be implicit Volterra series expansions, which are well approximated by wavelets. Wavelets are, in turn, well represented by the independent components of natural images. Thus, kernel methods can be seen to be near-optimal in terms of higher-order statistical modeling and approximation of (natural) images. This explains the good results often (perceptually) observed when kernel methods are used for many image processing problems.
Category: Artificial Intelligence
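
Since the entry above argues that kernel regression, Volterra expansions and wavelets are roughly interchangeable for image processing, a concrete reminder of what the kernel estimator itself looks like may help. The sketch below is plain Nadaraya-Watson regression with a Gaussian kernel on a noisy one-dimensional signal; the bandwidth and the data are illustrative, and none of it is taken from the paper.

```python
# Minimal Nadaraya-Watson (Gaussian) kernel regression on a noisy 1-D signal,
# i.e. the basic kernel estimator the paper relates to Volterra series and
# wavelets. Bandwidth and data are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=x.size)   # noisy observations

def kernel_smooth(x_train, y_train, x_query, bandwidth=0.03):
    # Gaussian kernel weights between every query point and every training sample
    d = x_query[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)    # weighted average of the responses

y_hat = kernel_smooth(x, y, x)
print("RMSE vs clean signal:",
      np.sqrt(np.mean((y_hat - np.sin(2 * np.pi * x)) ** 2)))
```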

[466] viXra:1804.0280 [pdf] submitted on 2018-04-19 09:53:13

Superresolution Using Perceptually Significant Side Information

Authors: Vikas Ramachandra, Truong Nguyen
Comments: 4 Pages.

We investigate the problem of super-resolution of images in the presence of side information. In some situations, when some information about the original image is available to the sender, it can be embedded into the low-resolution images, either in the pixels themselves or in the headers. This information can later be used, when required, to reconstruct the super-resolved image. For this, a novel multiresolution histogram-matching-based super-resolution procedure is outlined. The proposed technique gives better results than contemporary resolution-enhancement algorithms and is especially useful for de-blurring text images captured with mobile phone cameras.
Category: Artificial Intelligence
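
The multiresolution procedure in the entry above is not spelled out in the listing, but its core building block, matching an image's histogram to a reference histogram carried as side information, can be sketched via CDF matching as below; both the upsampled image and the reference histogram are simulated stand-ins, not data from the paper.

```python
# Sketch of the basic histogram-matching step (CDF matching) that a
# multiresolution procedure could build on. The "upsampled" image and the
# reference histogram sent as side information are both simulated here.
import numpy as np

rng = np.random.default_rng(2)
upsampled = rng.integers(40, 180, size=(64, 64)).astype(np.uint8)  # stand-in image
reference_hist, _ = np.histogram(rng.integers(0, 256, 4096),
                                 bins=256, range=(0, 256))          # side information

def match_histogram(img, ref_hist):
    """Remap pixel values so img's histogram follows ref_hist."""
    img_hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf_img = np.cumsum(img_hist) / img.size
    cdf_ref = np.cumsum(ref_hist) / ref_hist.sum()
    # for each grey level, find the reference level with the closest CDF value
    mapping = np.searchsorted(cdf_ref, cdf_img).clip(0, 255).astype(np.uint8)
    return mapping[img]

matched = match_histogram(upsampled, reference_hist)
print("original range:", upsampled.min(), upsampled.max(),
      "-> matched range:", matched.min(), matched.max())
```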

[465] viXra:1804.0278 [pdf] submitted on 2018-04-19 09:59:42

A Distributed Compressive Sampling Approach for Scene Capture Using an Array of Single Pixel Cameras

Authors: Vikas Ramachandra
Comments: 4 Pages.

This paper presents a method of capturing 3D scene information using an array of single-pixel cameras. Based on recent results for distributed compressive sampling, it is shown here that there can be considerable savings in the number of measurements required to reconstruct the whole scene when the correlations between the images captured by the individual cameras in the array are exploited. A technique for doing so is illustrated for an array of cameras separated by translations along one axis only.
Category: Artificial Intelligence
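
As background for the entry above, the sketch below shows the basic single-pixel-camera measurement model (random +/-1 mask patterns, one detector reading per pattern) and a plain ISTA reconstruction of a sparse scene. The joint multi-camera recovery that exploits inter-camera correlation, which is the paper's actual contribution, is not reproduced, and all sizes are arbitrary.

```python
# Sketch of single-pixel compressive measurements and a plain ISTA recovery of a
# sparse scene; the distributed, multi-camera part of the paper is not shown.
import numpy as np

rng = np.random.default_rng(3)
n, m = 256, 100                            # scene pixels, single-pixel measurements
scene = np.zeros(n)
scene[rng.choice(n, 8, replace=False)] = rng.uniform(1, 2, 8)   # sparse scene
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)         # random mask patterns
y = Phi @ scene                                                  # detector readings

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # l1 prox
L = np.linalg.norm(Phi, 2) ** 2             # step size 1/L from the spectral norm
x = np.zeros(n)
for _ in range(400):                        # ISTA iterations
    x = soft(x - (Phi.T @ (Phi @ x - y)) / L, 0.01 / L)

print("relative reconstruction error:",
      np.linalg.norm(x - scene) / np.linalg.norm(scene))
```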

[464] viXra:1804.0249 [pdf] submitted on 2018-04-18 03:52:47

Machine Learning Protein Dynamics Data

Authors: George Rajna
Comments: 32 Pages.

At the University of South Florida, researchers are integrating machine learning techniques into their work studying proteins. [21] Bioinformatics professors Anthony Gitter and Casey Greene set out in summer 2016 to write a paper about biomedical applications for deep learning, a hot new artificial intelligence field striving to mimic the neural networks of the human brain. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[463] viXra:1804.0237 [pdf] submitted on 2018-04-18 11:04:11

Beyond Back Propagation

Authors: Tofara Moyo
Comments: 1 Page.

5 Protea Lane Newton west
Category: Artificial Intelligence

[462] viXra:1804.0197 [pdf] submitted on 2018-04-14 08:00:32

Artificial Intelligence Accelerates Discovery

Authors: George Rajna
Comments: 40 Pages.

The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[461] viXra:1804.0155 [pdf] submitted on 2018-04-11 07:17:37

Deep Learning Smartphone Microscope

Authors: George Rajna
Comments: 36 Pages.

Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images.
Category: Artificial Intelligence

[460] viXra:1804.0149 [pdf] submitted on 2018-04-09 11:59:00

Machine Learning for Gravitational Waves

Authors: George Rajna
Comments: 21 Pages.

A trio of students from the University of Glasgow have developed a sophisticated artificial intelligence which could underpin the next phase of gravitational wave astronomy. [8] Using data from the first-ever gravitational waves detected last year, along with a theoretical analysis, physicists have shown that gravitational waves may oscillate between two different forms called "g" and "f"-type gravitational waves. [7] Astronomy experiments could soon test an idea developed by Albert Einstein almost exactly a century ago, scientists say. [6] It's estimated that 27% of all the matter in the universe is invisible, while everything from PB&J sandwiches to quasars accounts for just 4.9%. But a new theory of gravity proposed by theoretical physicist Erik Verlinde of the University of Amsterdam found out a way to dispense with the pesky stuff. [5] The proposal by the trio though phrased in a way as to suggest it's a solution to the arrow of time problem, is not likely to be addressed as such by the physics community— it's more likely to be considered as yet another theory that works mathematically, yet still can't answer the basic question of what is time. [4] The Weak Interaction transforms an electric charge in the diffraction pattern from one side to the other side, causing an electric dipole momentum change, which violates the CP and Time reversal symmetry. The Neutrino Oscillation of the Weak Interaction shows that it is a General electric dipole change and it is possible to any other temperature dependent entropy and information changing diffraction pattern of atoms, molecules and even complicated biological living structures.
Category: Artificial Intelligence

[459] viXra:1804.0114 [pdf] submitted on 2018-04-07 10:24:50

Rethinking BICA’s R&D Challenges: Grief Revelations of an Upset Revisionist

Authors: Emanuel Diamant
Comments: 6 Pages.

Biologically Inspired Cognitive Architectures (BICA) is a subfield of Artificial Intelligence aimed at creating machines that emulate human cognitive abilities. What distinguishes BICA from other AI approaches is that it is based on principles drawn from biology and neuroscience. There is a widespread conviction that nature has a solution for almost all of the problems we face today; we have only to pick up the solution and replicate it in our designs. However, Nature does not easily give up her secrets, especially when it comes to deciphering the human brain. For that reason, large brain research initiatives have been launched around the world. They will provide us with knowledge about brain workflow activity in neuron assemblies and their interconnections. But what is being conveyed via those interconnections the research programmes do not disclose. It is implied that what flows in the interconnections is information. But what is information? That remains undefined. Having in mind BICA's interest in these matters, the paper tries to clarify the issue.
Category: Artificial Intelligence

[458] viXra:1804.0113 [pdf] submitted on 2018-04-07 10:28:34

Artificial Neural Networks: a Bio-Inspired Revolution or a Long Lasting Misconception and Self-Delusion

Authors: Emanuel Diamant
Comments: 7 Pages. Rejected by the IJCNN 2018, Rio de Janeiro, July 08-13, 2018.

Ali Rahimi, a best paper award recipient at NIPS 2017, labelled the current state of Deep Learning (DL) progress as "alchemy". Yann LeCun, one of the prominent figures in DL R&D, was insulted by this expression. However, in his response, LeCun did not claim that DL designers know how and why their DL systems achieve such surprising performance. The possible reason for this cautiousness is that no one knows how and in what way system input data is transformed into semantic information at the system's output. And this, certainly, has its own reason: no one knows what information is! I dare to offer my humble clarification of this obscure and usually untouchable matter. I hope someone will be ready to line up with me.
Category: Artificial Intelligence

[457] viXra:1804.0112 [pdf] submitted on 2018-04-07 11:12:53

Recurrent Capsule Network for Image Generation

Authors: Srikumar Sastry
Comments: 9 Pages.

We have already seen state-of-the-art image generation techniques with Generative Adversarial Networks (Goodfellow et al. 2014), Variational Autoencoders, and Recurrent Networks for image generation (K. Gregor et al. 2015). But all of these architectures fail to learn object location and pose in images. In this paper, I propose a Recurrent Capsule Network based on the variational auto-encoding framework which can not only preserve equivariance of images in the latent space but can also be used for image classification and generation. For image classification, it can recognise highly overlapping objects, owing to the use of capsules (Hinton et al. 2011), considerably better than convolutional networks. It can generate images which are difficult to differentiate from the real data.
Category: Artificial Intelligence

[456] viXra:1804.0094 [pdf] submitted on 2018-04-06 04:33:26

Automated Classification of Hand-Grip Action on Objects Using Machine Learning

Authors: Anju Mishra (Amity University Uttar Pradesh), Shanu Sharma (Amity University Uttar Pradesh), Sanjay Kumar (Oxford Brooks University), Priya Ranjan (Amity University Uttar Pradesh), Amit Ujlayan (Gautam Buddha University)
Comments: 10 Pages. This is a preprint of a paper under consideration for publication.

Brain-computer interfaces (BCI) are a current area of research aimed at providing assistance to disabled persons. To cope with the growing needs of BCI applications, this paper presents an automated classification scheme for hand-grip actions on objects using electroencephalography (EEG) data. The presented approach focuses on classifying correct and incorrect hand-grip responses to objects from recorded EEG patterns. The method starts with preprocessing of the data, followed by extraction of relevant features from the epoched data in the form of discrete wavelet transform (DWT) coefficients and entropy measures. After computing the feature vectors, artificial neural network classifiers are used to classify the patterns into correct and incorrect hand grips on different objects. The proposed method was tested on a real dataset containing EEG recordings from 14 persons. The results showed that the proposed approach is effective and may be useful for developing a variety of BCI-based devices to control hand movements. Keywords: EEG, brain-computer interface, machine learning, hand action recognition.
Category: Artificial Intelligence
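
The 14-subject dataset used in the entry above is not available here, so the sketch below only mirrors the shape of the described pipeline on synthetic one-channel epochs: DWT sub-band energies plus a Shannon-entropy measure as features, then a small MLP classifier (PyWavelets and scikit-learn). The simulated signals, labels and resulting accuracy carry no clinical meaning.

```python
# Sketch of a DWT-energy + entropy feature pipeline with an MLP classifier on
# synthetic EEG-like epochs; it mimics the described pipeline shape only, not
# the paper's real data or results.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

def features(epoch):
    coeffs = pywt.wavedec(epoch, "db4", level=4)           # DWT decomposition
    energies = [float(np.sum(c ** 2)) for c in coeffs]     # sub-band energies
    p = np.abs(epoch) / np.sum(np.abs(epoch))              # crude probability mass
    entropy = -float(np.sum(p * np.log(p + 1e-12)))        # Shannon entropy
    return np.array(energies + [entropy])

# simulate 200 one-channel epochs: "correct" grips get an extra 10 Hz burst
t = np.linspace(0, 1, 256)
X, y = [], []
for i in range(200):
    label = i % 2
    sig = rng.normal(size=t.size) + label * 1.5 * np.sin(2 * np.pi * 10 * t)
    X.append(features(sig))
    y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print("held-out accuracy on synthetic epochs:", clf.score(X_te, y_te))
```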

[455] viXra:1804.0074 [pdf] submitted on 2018-04-04 07:28:11

Biomedical Applications for Deep Learning

Authors: George Rajna
Comments: 30 Pages.

Bioinformatics professors Anthony Gitter and Casey Greene set out in summer 2016 to write a paper about biomedical applications for deep learning, a hot new artificial intelligence field striving to mimic the neural networks of the human brain. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[454] viXra:1804.0056 [pdf] submitted on 2018-04-05 08:26:14

Computer Recognize Dynamic Events

Authors: George Rajna
Comments: 35 Pages.

Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[453] viXra:1804.0048 [pdf] submitted on 2018-04-03 10:59:48

Machine Learning to Microbial Relationship

Authors: George Rajna
Comments: 30 Pages.

Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[452] viXra:1804.0020 [pdf] submitted on 2018-04-02 03:54:11

A New Neural Network for Artificial General Intelligence

Authors: Haisong Liang
Comments: 44 Pages. Include both English and Chinese version.

Since artificial intelligence was first introduced several decades ago, the neural network has achieved remarkable results as one of the most important research methods, and a variety of neural network models have been proposed. Usually, for a specific task, we train the network with large amounts of data to develop a mathematical model that produces the expected outputs for given inputs, which also results in the black-box problem. Suppose instead that we study the problem from the perspective of the meanings of information and their causal relations, with the following measures: denote information by neurons; store their relations with links; and give neurons a state indicating the strength of the information, which can be updated by a state function or an input signal. Then we can store different pieces of information and their relations and control the expression of related information through the neurons' states, and the neural network becomes a dynamic system. More importantly, we can express different information and logic by designing the topology of the neural network and the attributes of the links, and thus gain the ability to design and explain every detail of the network precisely, turning the neural network into a general information storage, expression, control and processing system, which is commonly referred to as "strong AI".
Category: Artificial Intelligence
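
The abstract above proposes denoting information by neurons, storing relations as links, and controlling expression through neuron states. The toy sketch below, which is not the author's design, only illustrates that flavour of spreading activation with decaying states; the graph, threshold and decay rate are invented.

```python
# Toy spreading-activation sketch of the idea in the abstract (not the author's
# design): neurons denote pieces of information, links store relations, and a
# neuron's state decays unless driven by input or by related neurons.
import collections

links = {                                  # relation edges with strengths (invented)
    "rain": [("wet ground", 0.9)],
    "wet ground": [("slippery road", 0.8)],
    "slippery road": [("drive slowly", 0.7)],
}
state = collections.defaultdict(float)     # information strength per neuron
DECAY, FIRE = 0.5, 0.3

def step(inputs=()):
    """Inject input, propagate one step along links, then decay all states."""
    for name in inputs:
        state[name] = 1.0
    updates = collections.defaultdict(float)
    for src, outs in links.items():
        if state[src] >= FIRE:             # only sufficiently strong info is expressed
            for dst, w in outs:
                updates[dst] = max(updates[dst], w * state[src])
    for name, val in updates.items():
        state[name] = max(state[name], val)
    for name in list(state):
        state[name] *= DECAY

step(["rain"])
step()
step()
print({k: round(v, 2) for k, v in state.items() if v > 0.05})
```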

[451] viXra:1803.0751 [pdf] submitted on 2018-03-31 04:17:45

Galenism: A Methodology for the Key Unification of Von Neumann Machines and Hierarchical Databases

Authors: Pallabi Chakraborty, Bhargav Bhushanam
Comments: 7 Pages.

The implications of psychoacoustic methodologies have been far-reaching and pervasive. In this work, we disprove the simulation of active networks. We examine how fiber-optic cables can be applied to the evaluation of DNS.
Category: Artificial Intelligence

[450] viXra:1803.0728 [pdf] submitted on 2018-03-30 06:52:39

Artificial Intelligence in Chemical Synthesis

Authors: George Rajna
Comments: 29 Pages.

A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[449] viXra:1803.0699 [pdf] submitted on 2018-03-29 01:48:31

Universal Forecasting Scheme {Version 4}

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author presents a novel method of forecasting.
Category: Artificial Intelligence

[448] viXra:1803.0696 [pdf] submitted on 2018-03-29 06:19:03

Teaching Machine in Physical Systems

Authors: George Rajna
Comments: 27 Pages.

Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
Category: Artificial Intelligence

[447] viXra:1803.0695 [pdf] submitted on 2018-03-28 04:26:14

Brain's Potential for Quantum Computation

Authors: George Rajna
Comments: 32 Pages.

The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[446] viXra:1803.0675 [pdf] submitted on 2018-03-26 06:58:13

A Survey on Reasoning on Building Information Models Based on IFC

Authors: Hassan Sleiman
Comments: 17 Pages.

Building Information Models (BIM) are computer models that act as a main source of building information and integrate several aspects of engineering and architectural design, including building utilisation. They aim at enhancing the efficiency and effectiveness of projects during design, construction, and maintenance. Artificial Intelligence, which is used to automate tasks that would otherwise require intelligence, has found its way into BIM through the application of reasoners, among other techniques. A reasoner is a piece of software that makes implicit and hidden knowledge explicit by using logical inference techniques. Reasoners are applied to BIM to help take better decisions and to assess construction projects. The importance of BIM in both the construction and information technology sectors has motivated many researchers to work on surveys that attempt to capture the current state of BIM, but unfortunately, none of these surveys has focused on reasoning over BIM. In this article we survey the research proposals and toolkits that rely on applying reasoning systems to BIM, and we classify them into a two-level schema based on what they are intended for. According to our survey, reasoning is mainly used for solving design problems, and is especially applied to code consistency checking, with an emphasis on semantic web technologies. Furthermore, user-friendliness is still a gap in this field, and case-based reasoning, which was often applied in earlier efforts, is still rarely applied for reasoning over BIM. The survey shows that this research area is active and that research results are progressively being integrated into commercial toolkits.
Category: Artificial Intelligence
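
The survey above concerns reasoners that check building models against rules, most often for code consistency checking. As a toy illustration of that style of rule checking only, the sketch below applies two invented compliance rules to a simplified, non-IFC building model in plain Python; it stands in for no particular reasoner, schema, or regulation.

```python
# Toy rule-based code-consistency check over a simplified building model. This
# mimics the kind of inference the surveyed reasoners perform, but it is plain
# Python, not IFC or OWL, and the rule thresholds are invented.
doors = [
    {"id": "D1", "width_mm": 850, "on_escape_route": True},
    {"id": "D2", "width_mm": 700, "on_escape_route": True},
    {"id": "D3", "width_mm": 700, "on_escape_route": False},
]
rooms = [
    {"id": "R1", "use": "office", "area_m2": 14.0, "window_area_m2": 1.0},
    {"id": "R2", "use": "storage", "area_m2": 9.0, "window_area_m2": 0.0},
]

def check(model_doors, model_rooms):
    issues = []
    for d in model_doors:                  # rule 1: escape-route doors >= 800 mm wide
        if d["on_escape_route"] and d["width_mm"] < 800:
            issues.append(f"{d['id']}: escape-route door narrower than 800 mm")
    for r in model_rooms:                  # rule 2: offices need ~10% daylight area
        if r["use"] == "office" and r["window_area_m2"] < 0.1 * r["area_m2"]:
            issues.append(f"{r['id']}: office daylight area below 10% of floor area")
    return issues

for issue in check(doors, rooms):
    print("non-compliant:", issue)
```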

[445] viXra:1803.0652 [pdf] submitted on 2018-03-26 00:17:46

AI to Understand Human Brain

Authors: George Rajna
Comments: 30 Pages.

And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[444] viXra:1803.0627 [pdf] submitted on 2018-03-24 08:48:04

Brain-Like Computers

Authors: George Rajna
Comments: 29 Pages.

U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10]
Category: Artificial Intelligence

[443] viXra:1803.0089 [pdf] submitted on 2018-03-07 02:50:41

Universal Forecasting Scheme : Two Methods {Version 1}

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel method of forecasting.
Category: Artificial Intelligence

[442] viXra:1803.0083 [pdf] submitted on 2018-03-06 11:49:00

Machine Learning Guide Science

Authors: George Rajna
Comments: 25 Pages.

Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that - surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch - the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." [8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. 
Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. [6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.
Category: Artificial Intelligence

[441] viXra:1803.0072 [pdf] submitted on 2018-03-06 03:34:57

Universal Forecasting Scheme {Version 1}

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed a novel method of forecasting.
Category: Artificial Intelligence

[440] viXra:1803.0070 [pdf] submitted on 2018-03-06 04:39:44

Universal Forecasting Scheme {Version 2}

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed a novel method of forecasting.
Category: Artificial Intelligence

[439] viXra:1803.0069 [pdf] submitted on 2018-03-06 04:45:06

Universal Forecasting Scheme {Version 3}

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed a novel method of forecasting.
Category: Artificial Intelligence

[438] viXra:1803.0061 [pdf] submitted on 2018-03-04 22:32:07

Cancer Detection Through Handwriting

Authors: Alaa Tarek, Shorouk Alalem, Maryam El-Fdaly, Nehal Fooda
Comments: 31 Pages.

Looking at the medical field in recent years and its drawbacks, Egypt's health level is declining year after year; according to a statistical study on health levels published by the British medical journal The Lancet in 2016, Egypt ranked 124th out of 188 countries. Cancer is a major burden of disease worldwide. Each year, 10,000,000 people are diagnosed with cancer around the world, and more than half of these patients eventually die of it. In many countries, cancer ranks as the second most common cause of death, following cardiovascular diseases. With significant improvement in the treatment and prevention of cardiovascular diseases, cancer has become, or will soon become, the number one killer in many parts of the world. Nearly 90,000 people do not know they have cancer until they arrive at Accident and Emergency wards; by that time, only 36 percent will live longer than a year. So, we needed to find controlled ways to diagnose patients earlier. No aspect of human life has escaped the impact of the information age, and perhaps in no area of life is information more critical than in health and medicine; computers have become available for all aspects of human endeavor. Accordingly, we have designed a program that can detect whether a person has cancer from their handwriting. We chose “efficiency, cost, and applicability” as the design requirements to be tested. The program is tested by scanning the text, searching for specific features that are related to cancer, and displaying “1” or “0” according to the writer's state. After testing the program many times, we reached a mean efficiency of 93.75%. This program therefore saves lives, time, and money.
Category: Artificial Intelligence

[437] viXra:1803.0053 [pdf] submitted on 2018-03-04 05:55:27

Tunnel Similar Modeling Notation and Spherical Viewpoint

Authors: Alexey Podorov
Comments: 11 Pages.

The article proposes a spherical model of perception, groups and levels of complexity, and a notation for modeling abstractness and complexity.
Category: Artificial Intelligence

[436] viXra:1803.0023 [pdf] submitted on 2018-03-01 09:41:28

AI for Safer Cities

Authors: George Rajna
Comments: 50 Pages.

Computers may better predict taxi and ride sharing service demand, paving the way toward smarter, safer and more sustainable cities, according to an international team of researchers. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[435] viXra:1802.0364 [pdf] submitted on 2018-02-26 10:27:48

AI of Quantum Systems

Authors: George Rajna
Comments: 49 Pages.

For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18]
Category: Artificial Intelligence

[434] viXra:1802.0330 [pdf] submitted on 2018-02-23 10:46:53

AlphaZero Just Playing

Authors: George Rajna
Comments: 47 Pages.

AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[433] viXra:1802.0087 [pdf] submitted on 2018-02-08 09:33:09

A Forecasting Scheme

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research investigation, the author has detailed a novel method of finding the next term of a given Sequence.
Category: Artificial Intelligence

[432] viXra:1802.0038 [pdf] submitted on 2018-02-05 05:01:24

Quantum Algorithm Help AI

Authors: George Rajna
Comments: 55 Pages.

An international team has shown that quantum computers can do one such analysis faster than classical computers for a wider array of data types than was previously expected. [35] A team of researchers at Oak Ridge National Laboratory has demonstrated that it is possible to use cloud-based quantum computers to conduct quantum simulations and calculations. [34] Physicists have designed a new method for transmitting big quantum data across long distances that requires far fewer resources than previous methods, bringing the implementation of long-distance big quantum data transmission closer to reality. [33] A joint China-Austria team has performed quantum key distribution between the quantum-science satellite Micius and multiple ground stations located in Xinglong (near Beijing), Nanshan (near Urumqi), and Graz (near Vienna). [32] In the race to build a computer that mimics the massive computational power of the human brain, researchers are increasingly turning to memristors, which can vary their electrical resistance based on the memory of past activity. [31] Engineers worldwide have been developing alternative ways to provide greater memory storage capacity on even smaller computer chips. Previous research into two-dimensional atomic sheets for memory storage has failed to uncover their potential—until now. [30]
Category: Artificial Intelligence

[431] viXra:1802.0031 [pdf] submitted on 2018-02-03 05:25:39

A Feasible Path to a Machine that Can Pass the Turing Test.

Authors: Tofara Moyo
Comments: 5 Pages.

In this paper we outline an NLP system that is based largely on graph theory and, together with techniques from linear algebra, is able to model the rules of logic. Inference from data given in natural-language format is then done by creating a mapping between the premises and the conclusion. During training, the vector responsible for taking us from the space the premise occupies to the space the conclusion occupies represents the particular logical rule used; training therefore amounts to determining that vector.
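
A minimal sketch of the stated core idea (a single vector that carries a premise representation to a conclusion representation), not the paper's actual system: the embeddings, the training data, and the way the "rule vector" is fitted below are all illustrative assumptions.

```python
# Toy illustration: learn a vector that maps premise embeddings to conclusion
# embeddings. Embeddings here are random placeholders; the paper's graph-based
# representation and training procedure are not reproduced.
import numpy as np

rng = np.random.default_rng(1)
premises = rng.normal(size=(100, 8))          # toy premise vectors
rule_vector_true = rng.normal(size=8)         # the "logical rule" as a vector
conclusions = premises + rule_vector_true     # toy conclusions

# Fit the rule vector as the mean offset between conclusion and premise.
rule_vector = (conclusions - premises).mean(axis=0)

# Inference: apply the learned rule to a new premise.
new_premise = rng.normal(size=8)
predicted_conclusion = new_premise + rule_vector
print(np.allclose(rule_vector, rule_vector_true))
```
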
Category: Artificial Intelligence

[430] viXra:1801.0413 [pdf] submitted on 2018-01-30 20:38:58

On The Subject of Thinking Machines

Authors: John Olafenwa, Moses Olafenwa
Comments: 9 Pages. An investigation of the concepts of thought, imagination and consciousness in learning machines

68 years ago, Alan Turing proposed the question "Can machines think?" in his seminal paper [1] titled "Computing Machinery and Intelligence", and he formulated the "Imitation Game", also known as the Turing test, as a way to answer this question without referring to a rather ambiguous dictionary definition of the word "think". We have come a long way toward building intelligent machines; in fact, the rate of progress in deep learning and reinforcement learning, the two cornerstones of artificial intelligence, is unprecedented. Alan Turing would have been proud of our achievements in computer vision, speech, natural language processing and autonomous systems. However, there are still many challenges, and we are still some distance from building machines that can pass the Turing test. In this paper, we discuss some of the biggest questions concerning intelligent machines and attempt to answer them, as far as modern AI can explain.
Category: Artificial Intelligence

[429] viXra:1801.0412 [pdf] submitted on 2018-01-30 21:56:30

A Predictor-Corrector Method for the Training of Deep Neural Networks

Authors: Yatin Saraiya
Comments: 6 pages, 2 figures, 2 tables

The training of deep neural nets is expensive. We present a predictor-corrector method for training deep neural nets. It alternates a predictor pass with a corrector pass using stochastic gradient descent (SGD) with backpropagation, such that there is no loss in validation accuracy. No special modifications to SGD with backpropagation are required by this methodology. Our experiments showed a time improvement of 9% on the CIFAR-10 dataset.
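
To make the predictor/corrector alternation concrete, here is a minimal sketch on a toy linear model rather than a deep net. The extrapolation rule, step sizes, and data below are assumptions for illustration only; the paper's exact scheme is not reproduced.

```python
# Toy predictor-corrector loop around plain SGD: a "predictor" extrapolation
# step followed by a "corrector" gradient step, on synthetic linear-regression data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))
true_w = rng.normal(size=10)
y = X @ true_w + 0.1 * rng.normal(size=256)

def grad(w, xb, yb):
    # Gradient of mean squared error for a linear model.
    return 2.0 * xb.T @ (xb @ w - yb) / len(yb)

w = np.zeros(10)
w_prev = w.copy()
lr = 0.05

for epoch in range(50):
    for i in range(0, len(X), 32):
        xb, yb = X[i:i + 32], y[i:i + 32]
        # "Predictor" pass: extrapolate along the previous update direction.
        w_pred = w + 0.5 * (w - w_prev)
        # "Corrector" pass: an ordinary SGD step taken at the predicted point.
        w_prev = w.copy()
        w = w_pred - lr * grad(w_pred, xb, yb)

print("distance to the true weights:", np.linalg.norm(w - true_w))
```
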
Category: Artificial Intelligence

[428] viXra:1801.0411 [pdf] submitted on 2018-01-30 22:00:29

Using Accumulation to Optimize Deep Residual Neural Nets

Authors: Yatin Saraiya
Comments: 7 pages, 6 figures, 1 table

Residual Neural Networks [1] won first place in all five main tracks of the ImageNet and COCO 2015 competitions. This kind of network involves the creation of pluggable modules such that the output contains a residual from the input. The residual in that paper is the identity function. We propose to include residuals from all lower layers, suitably normalized, to create the residual. This way, all previous layers contribute equally to the output of a layer. We show that our approach is an improvement on [1] for the CIFAR-10 dataset.
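
A hedged sketch of the general flavour described (each layer adding a normalized contribution from all lower layers, not only the immediately preceding one). The layer types, sizes, and the equal-weight mean used as "suitable normalization" are illustrative assumptions, not the paper's architecture.

```python
# Toy "accumulated residual" stack: every layer's output adds the mean of all
# earlier representations as its residual.
import torch
import torch.nn as nn

class AccumulatedResidualStack(nn.Module):
    def __init__(self, dim: int, depth: int):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(depth)])
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outputs = [x]                                   # keep every earlier representation
        for layer in self.layers:
            residual = torch.stack(outputs).mean(dim=0)  # equally weighted lower layers
            outputs.append(self.act(layer(outputs[-1])) + residual)
        return outputs[-1]

block = AccumulatedResidualStack(dim=16, depth=4)
print(block(torch.randn(2, 16)).shape)                  # torch.Size([2, 16])
```
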
Category: Artificial Intelligence

[427] viXra:1801.0407 [pdf] submitted on 2018-01-29 13:36:59

Artificial Intelligence Weapon

Authors: George Rajna
Comments: 48 Pages.

A country that thinks its adversaries have or will get AI weapons will want to get them too. Wide use of AI-powered cyberattacks may still be some time away. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18]
Category: Artificial Intelligence

[426] viXra:1801.0367 [pdf] submitted on 2018-01-27 03:06:22

Superconducting Synapse

Authors: George Rajna
Comments: 26 Pages.

Researchers at the National Institute of Standards and Technology (NIST) have built a superconducting switch that "learns" like a biological system and could connect processors and store memories in future computers operating like the human brain. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that - surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch - the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." [8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. 
Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. [6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.
Category: Artificial Intelligence

[425] viXra:1801.0366 [pdf] submitted on 2018-01-26 05:02:58

Mathematical Model of Inventions

Authors: George Rajna
Comments: 27 Pages.

Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]
Category: Artificial Intelligence

[424] viXra:1801.0363 [pdf] submitted on 2018-01-26 07:55:13

Deep Learning for Gravitational Wave

Authors: George Rajna
Comments: 27 Pages.

Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10]
Category: Artificial Intelligence

[423] viXra:1801.0361 [pdf] submitted on 2018-01-26 09:19:57

Hyperspectral Artificial Intelligence

Authors: George Rajna
Comments: 29 Pages.

VTT Technical Research Centre of Finland has developed a highly cost-efficient hyperspectral imaging technology, which enables the introduction of new artificial intelligence applications into consumer devices. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that - surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch - the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." 
[8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. [6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.
Category: Artificial Intelligence

[422] viXra:1801.0271 [pdf] submitted on 2018-01-21 22:33:21

Refutation: Neutrosophic Logic by Florentin Smarandache as Generalized Intuitionistic, Fuzzy Logic © 2018 by Colin James III All Rights Reserved.

Authors: Colin James III
Comments: 2 Pages. © 2018 by Colin James III All rights reserved.

We map the neutrosophic logical values of truth, falsity, and indeterminacy on the intervals "]0,1[" and "]-0,1+[" in equations for the Meth8/VL4 apparatus. We test the summation of those values. The result is not tautologous, meaning neutrosophic logic is refuted, and hence its use as a generalization of intuitionistic fuzzy logic is likewise unworkable.
Category: Artificial Intelligence

[421] viXra:1801.0243 [pdf] submitted on 2018-01-19 09:05:53

AI Quantum Experiments

Authors: George Rajna
Comments: 38 Pages.

On the way to an intelligent laboratory, physicists from Innsbruck and Vienna present an artificial agent that autonomously designs quantum experiments. [24] An answer to a quantum-physical question provided by the algorithm Melvin has uncovered a hidden link between quantum experiments and the mathematical field of Graph Theory. [23] Engineers develop key mathematical formula for driving quantum experiments. [22] Physicists are developing quantum simulators, to help solve problems that are beyond the reach of conventional computers. [21] Engineers at Australia's University of New South Wales have invented a radical new architecture for quantum computing, based on novel 'flip-flop qubits', that promises to make the large-scale manufacture of quantum chips dramatically cheaper - and easier - than thought possible. [20] A team of researchers from the U.S. and Italy has built a quantum memory device that is approximately 1000 times smaller than similar devices— small enough to install on a chip. [19] The cutting edge of data storage research is working at the level of individual atoms and molecules, representing the ultimate limit of technological miniaturisation. [18] This is an important clue for our theoretical understanding of optically controlled magnetic data storage media. [17] A crystalline material that changes shape in response to light could form the heart of novel light-activated devices. [16] Now a team of Penn State electrical engineers have a way to simultaneously control diverse optical properties of dielectric waveguides by using a two-layer coating, each layer with a near zero thickness and weight. [15] Just like in normal road traffic, crossings are indispensable in optical signal processing. In order to avoid collisions, a clear traffic rule is required. A new method has now been developed at TU Wien to provide such a rule for light signals. [14] Researchers have developed a way to use commercial inkjet printers and readily available ink to print hidden images that are only visible when illuminated with appropriately polarized waves in the terahertz region of the electromagnetic spectrum. [13] That is, until now, thanks to the new solution devised at TU Wien: for the first time ever, permanent magnets can be produced using a 3D printer. This allows magnets to be produced in complex forms and precisely customised magnetic fields, required, for example, in magnetic sensors. [12] For physicists, loss of magnetisation in permanent magnets can be a real concern. In response, the Japanese company Sumitomo created the strongest available magnet— one offering ten times more magnetic energy than previous versions—in 1983. [11] New method of superstrong magnetic fields’ generation proposed by Russian scientists in collaboration with foreign colleagues. [10] By showing that a phenomenon dubbed the "inverse spin Hall effect" works in several organic semiconductors - including carbon-60 buckyballs - University of Utah physicists changed magnetic "spin current" into electric current. The efficiency of this new power conversion method isn't yet known, but it might find use in future electronic devices including batteries, solar cells and computers. [9] Researchers from the Norwegian University of Science and Technology (NTNU) and the University of Cambridge in the UK have demonstrated that it is possible to directly generate an electric current in a magnetic material by rotating its magnetization. 
[8] This paper explains the magnetic effect of the electric current from the observed effects of the accelerating electrons, causing naturally the experienced changes of the electric field potential along the electric wire. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the wave particle duality and the electron’s spin also, building the bridge between the Classical and Quantum Theories. The changing acceleration of the electrons explains the created negative electric field of the magnetic induction, the changing relativistic mass and the Gravitational Force, giving a Unified Theory of the physical forces. Taking into account the Planck Distribution Law of the electromagnetic oscillators also, we can explain the electron/proton mass rate and the Weak and Strong Interactions.
Category: Artificial Intelligence

[420] viXra:1801.0192 [pdf] submitted on 2018-01-16 07:03:26

FastNet: An Efficient Architecture for Smart Devices

Authors: John Olafenwa, Moses Olafenwa
Comments: 9 Pages.

Inception and the ResNet family of convolutional neural network architectures have broken records in the past few years, but recent state-of-the-art models have also incurred very high computational cost in terms of training, inference and model size, making the deployment of these models on edge devices impractical. In light of this, we present a novel architecture that is designed for high computational efficiency on both GPUs and CPUs, and is highly suited for deployment on mobile applications, smart cameras, IoT devices and controllers, as well as low-cost drones. Our architecture achieves competitive accuracies on standard datasets, even outperforming the original ResNet. We present below the motivation for this research, the architecture of the network, single test accuracies on CIFAR-10 and CIFAR-100, a detailed comparison with other well-known architectures, and a link to an implementation in Keras.
Category: Artificial Intelligence

[419] viXra:1801.0102 [pdf] submitted on 2018-01-09 11:34:24

Bayesian Transfer Learning for Deep Networks

Authors: J. Wohlert, A. M. Munk, S. Sengupta, F. Laumann
Comments: 6 Pages.

We propose a method for transfer learning for deep networks through Bayesian inference, where an approximate posterior distribution q(w|θ) of model parameters w is learned through variational approximation. Utilizing Bayes by Backprop we optimize the parameters θ associated with the approximate distribution. When performing transfer learning we consider two tasks; A and B. Firstly, an approximate posterior q_A(w|θ) is learned from task A which is afterwards transferred as a prior p(w) → q_A(w|θ) when learning the approximate posterior distribution q_B(w|θ) for task B. Initially, we consider a multivariate normal distribution q(w|θ) = N (µ, Σ), with diagonal covariance matrix Σ. Secondly, we consider the prospects of introducing more expressive approximate distributions - specifically those known as normalizing flows. By investigating these concepts on the MNIST data set we conclude that utilizing normalizing flows does not improve Bayesian inference in the context presented here. Further, we show that transfer learning is not feasible using our proposed architecture and our definition of task A and task B, but no general conclusion regarding rejecting a Bayesian approach to transfer learning can be made.
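
As a small illustration of the transfer step described (the posterior from task A becoming the prior for task B), here is the closed-form KL term between diagonal Gaussians that Bayes-by-Backprop-style training penalizes. The parameter values are placeholders, not results from the paper, and the paper's normalizing-flow variant is not shown.

```python
# KL( q_B || prior ) where the prior is the posterior q_A learned on task A.
import numpy as np

def kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    # KL( N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2)) ), summed over dimensions.
    return np.sum(
        np.log(sigma_p / sigma_q)
        + (sigma_q**2 + (mu_q - mu_p)**2) / (2.0 * sigma_p**2)
        - 0.5
    )

# Posterior from task A, learned earlier (placeholder values).
mu_A, sigma_A = np.array([0.3, -1.2]), np.array([0.2, 0.5])
# Current variational parameters for task B.
mu_B, sigma_B = np.array([0.0, 0.0]), np.array([1.0, 1.0])

# Transfer: task A's posterior plays the role of the prior p(w) when training on task B.
print(kl_diag_gaussians(mu_B, sigma_B, mu_A, sigma_A))
```
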
Category: Artificial Intelligence

[418] viXra:1801.0050 [pdf] submitted on 2018-01-06 00:20:25

Fruit Recognition from Images Using Deep Learning

Authors: Horea Muresan, Mihai Oltean
Comments: 13 Pages. Data can be downloaded from https://github.com/Horea94/Fruit-Images-Dataset

In this paper we introduce a new, high-quality dataset of images containing fruits. We also present the results of numerical experiments on training a neural network to detect fruits. We discuss why we chose to use fruits in this project by proposing a few applications that could use this kind of neural network.
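
A minimal sketch of the general kind of convolutional classifier one might train on such an image dataset; it is not the network from the paper. The input resolution, layer sizes, and number of classes below are assumptions chosen only so the example runs end to end.

```python
# Tiny convolutional classifier on a dummy batch of fruit-sized images.
import torch
import torch.nn as nn

num_classes = 10                      # placeholder; the real dataset has more classes

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 25 * 25, num_classes),
)

images = torch.randn(4, 3, 100, 100)  # dummy batch standing in for fruit images
logits = model(images)
print(logits.shape)                   # torch.Size([4, 10])
```
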
Category: Artificial Intelligence

[417] viXra:1801.0041 [pdf] submitted on 2018-01-05 06:09:53

Taking Advantage of BiLSTM Encoding to Handle Punctuation in Dependency Parsing: A Brief Idea

Authors: Matteo Grella
Comments: 3 Pages.

In the context of the bidirectional-LSTM neural parser (Kiperwasser and Goldberg, 2016), an idea is proposed: initialize the parsing state without punctuation tokens, but use them for the BiLSTM sentence encoding. The relevant information carried by the punctuation tokens should then be learned implicitly, through the errors of the recurrent contributions only.
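
A hedged sketch of the mechanics of that idea: run the BiLSTM over all tokens, then hand only the non-punctuation encodings to the parsing state. The dimensions, embeddings, and the stand-in "parser state" below are illustrative assumptions, not the cited parser.

```python
# Encode punctuation with the BiLSTM, but exclude it from the initial parsing state.
import torch
import torch.nn as nn

tokens = ["The", "cat", ",", "however", ",", "slept", "."]
is_punct = [t in {",", ".", ";", ":"} for t in tokens]

emb = nn.Embedding(len(tokens), 8)                    # toy embeddings
bilstm = nn.LSTM(8, 16, bidirectional=True, batch_first=True)

ids = torch.arange(len(tokens)).unsqueeze(0)          # (1, seq_len)
encoded, _ = bilstm(emb(ids))                         # (1, seq_len, 32)

# Punctuation still shapes its neighbours' context through the BiLSTM,
# but its own vectors are dropped from the parser's initial state.
keep = torch.tensor([not p for p in is_punct])
parser_state = encoded[0][keep]
print(parser_state.shape)                             # torch.Size([4, 32])
```
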
Category: Artificial Intelligence

[416] viXra:1712.0659 [pdf] submitted on 2017-12-29 06:21:14

TDBF: Two Dimensional Belief Function

Authors: Yangxue Li; Yong Deng
Comments: 15 Pages.

How to efficiently handle uncertain information is still an open issue. In this paper, a new method to deal with uncertain information, named the two-dimensional belief function (TDBF), is presented. A TDBF has two components, T=(mA, mB). The first component, mA, is a classical belief function. The second component, mB, is also a classical belief function, but it is a measure of the reliability of the first component. The definition of the TDBF and its discounting algorithm are proposed. Compared with the classical discounting model, the proposed TDBF is more flexible and reasonable. Numerical examples are used to show the efficiency of the proposed method.
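
For context, here is a minimal sketch of the classical Shafer discounting the TDBF is compared against. The way mB is reduced to a single reliability factor below (reading off its mass on a "reliable" element) is an illustrative assumption, not the paper's discounting algorithm.

```python
# Classical discounting: scale all masses by alpha and move the remainder to the frame.
FRAME = frozenset({"a", "b"})

def discount(m, alpha):
    out = {A: alpha * v for A, v in m.items() if A != FRAME}
    out[FRAME] = alpha * m.get(FRAME, 0.0) + (1.0 - alpha)
    return out

mA = {frozenset({"a"}): 0.6, frozenset({"b"}): 0.3, FRAME: 0.1}
mB = {frozenset({"reliable"}): 0.8, frozenset({"reliable", "unreliable"}): 0.2}

alpha = mB[frozenset({"reliable"})]   # reliability read off the second component (assumption)
print(discount(mA, alpha))
```
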
Category: Artificial Intelligence

[415] viXra:1712.0647 [pdf] submitted on 2017-12-28 23:25:34

A Total Uncertainty Measure for D Numbers Based on Belief Intervals

Authors: Xinyang Deng, Wen Jiang
Comments: 14 Pages.

As a generalization of Dempster-Shafer theory, the theory of D numbers is a new theoretical framework for uncertainty reasoning. Measuring the uncertainty of knowledge or information represented by D numbers is an unsolved issue in that theory. In this paper, inspired by distance-based uncertainty measures for Dempster-Shafer theory, a total uncertainty measure for a D number is proposed based on its belief intervals. The proposed total uncertainty measure can simultaneously capture the discord, non-specificity, and non-exclusiveness involved in D numbers. Some basic properties of this total uncertainty measure, including range, monotonicity, and generalized set consistency, are also presented.
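
As background, a small sketch of the belief intervals [Bel(A), Pl(A)] that such measures are built on, computed here for a classical basic probability assignment. How the paper aggregates these intervals into its total uncertainty measure for D numbers is not reproduced.

```python
# Belief and plausibility of singletons for a basic probability assignment.
def belief(m, A):
    return sum(v for B, v in m.items() if B <= A)      # mass of subsets of A

def plausibility(m, A):
    return sum(v for B, v in m.items() if B & A)       # mass of sets intersecting A

m = {frozenset({"x"}): 0.5, frozenset({"y"}): 0.2, frozenset({"x", "y"}): 0.3}
for element in ("x", "y"):
    A = frozenset({element})
    print(element, [belief(m, A), plausibility(m, A)])
```
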
Category: Artificial Intelligence

[414] viXra:1712.0495 [pdf] submitted on 2017-12-18 08:50:22

Just Keep it in Mind: Information is a Complex Notion with Physical and Semantic Information Staying for Real and Imaginary Parts of the Expression

Authors: Emanuel Diamant
Comments: 3 Pages. Presented at the IS4SI 2017 Summit, Information Theory Section, Gothenburg, Sweden, 12–16 June 2017

Shannon's information was devised to improve the performance of a data communication channel. Since then, the situation has changed drastically, and today a more generally applicable and suitable definition of information is urgently required. To meet this demand, I have proposed a definition of my own. According to it, information is a complex notion, with physical and semantic information standing for the real and imaginary parts of the term. The scientific community has received this idea very unfavorably. But without a better solution to the problems of 1) intron-exon partition in genes, 2) information flow in neuronal networks, 3) memory creation and potentiation in brains, 4) the materialization of thoughts and thinking in human heads, and 5) the undeniable shift from the computational (that is, data-processing-based) approach to the cognitive (that is, information-processing-based) approach in scientific research, the community will one day be forced to admit that there is something worthy in this new definition.
Category: Artificial Intelligence

[413] viXra:1712.0494 [pdf] submitted on 2017-12-18 09:05:26

Shannon's Definition of Information is Obsolete and Inadequate. it is Time to Embrace Kolmogorov’s Insights on the Matter

Authors: Emanuel Diamant
Comments: 3 Pages. Presented at the 2016 ICSEE International Conference, Eilat, Israel, 16 – 18 November 2016.

Information theory, as developed by Claude Shannon in 1948, was about the communication of messages as electronic signals via a transmission channel. Only the physical properties of the signal and the channel were taken into account, while the meaning of the message was ignored entirely. Such an approach to information met the requirements of a data communication channel very well. But recent advances in almost all sciences create an urgent demand for including meaningful information in the body of a communicated message. To meet this demand, I have proposed a new definition of information. In this definition, information is seen as a complex notion composed of two inseparable parts: physical information and semantic information. Classical information measures, such as Shannon's, Fisher's, Rényi's, Kolmogorov complexity, and Chaitin's algorithmic information, are all variants of physical information. Semantic information is a new concept, and it deserves to be properly studied, treated, and used.
Category: Artificial Intelligence

[412] viXra:1712.0469 [pdf] submitted on 2017-12-15 23:33:47

Predicting Yelp Star Reviews Based on Network Structure with Deep Learning

Authors: Luis Perez
Comments: 12 pages, 17 figures

In this paper, we tackle the real-world problem of predicting Yelp star-review ratings based on business features (such as images and descriptions), user features (average previous ratings), and, of particular interest, network properties (which businesses a user has rated before). We compare multiple models on different sets of features -- from simple linear regression on network features only to deep learning models on network and item features. In recent years, breakthroughs in deep learning have led to increased accuracy in common supervised learning tasks, such as image classification, captioning, and language understanding. However, the idea of combining deep learning with network features and structure appears to be novel. While the problem of predicting future interactions in a network has been studied at length, these approaches have often ignored either node-specific data or global structure. We demonstrate that a mixed approach combining both node-level features and network information can effectively be used to predict Yelp-review star ratings. We evaluate on the Yelp dataset by splitting our data along the time dimension (as would naturally occur in the real world) and comparing our model against others that do not take advantage of the network structure and/or deep learning.
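
A minimal sketch of the simplest baseline flavour mentioned (linear regression over a mix of node-level and network-derived features). The features and data below are synthetic placeholders, not the Yelp dataset or the paper's models.

```python
# Linear regression mixing node features with one network feature (user degree).
import numpy as np

rng = np.random.default_rng(2)
n = 500
user_avg_rating = rng.uniform(1, 5, n)       # node feature
business_avg_rating = rng.uniform(1, 5, n)   # node feature
user_degree = rng.poisson(20, n)             # network feature: businesses rated before
stars = (0.5 * user_avg_rating + 0.4 * business_avg_rating
         + 0.01 * user_degree + rng.normal(0, 0.3, n))

X = np.column_stack([user_avg_rating, business_avg_rating, user_degree, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, stars, rcond=None)
print("learned coefficients:", np.round(coef, 3))
```
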
Category: Artificial Intelligence

[411] viXra:1712.0468 [pdf] submitted on 2017-12-15 23:41:37

The Effectiveness of Data Augmentation in Image Classification using Deep Learning

Authors: Luis Perez, Jason Wang
Comments: 8 Pages.

In this paper, we explore and compare multiple solutions to the problem of data augmentation in image classification. Previous work has demonstrated the effectiveness of data augmentation through simple techniques, such as cropping, rotating, and flipping input images. We artificially constrain our access to data to a small subset of the ImageNet dataset, and compare each data augmentation technique in turn. One of the more successful data augmentation strategies is the set of traditional transformations mentioned above. We also experiment with GANs to generate images of different styles. Finally, we propose a method to allow a neural net to learn augmentations that best improve the classifier, which we call neural augmentation. We discuss the successes and shortcomings of this method on various datasets.
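
A small sketch of the "traditional transformations" family (flips, rotations, crops) on an image array; the specific parameters and the stand-in image are arbitrary illustrative choices, and the paper's GAN and neural-augmentation methods are not shown.

```python
# Random flip, 90-degree rotation, and crop applied to a dummy 32x32 RGB image.
import numpy as np

rng = np.random.default_rng(3)
image = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)  # stand-in image

def augment(img):
    out = img
    if rng.random() < 0.5:
        out = np.fliplr(out)                      # horizontal flip
    out = np.rot90(out, k=rng.integers(0, 4))     # random multiple of 90 degrees
    top, left = rng.integers(0, 5, size=2)        # random 28x28 crop offset
    return out[top:top + 28, left:left + 28]

batch = [augment(image) for _ in range(8)]
print(batch[0].shape)                             # (28, 28, 3)
```
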
Category: Artificial Intelligence

[410] viXra:1712.0467 [pdf] submitted on 2017-12-15 23:43:11

Gaussian Processes for Crime Prediction

Authors: Luis Perez, Alex Wang
Comments: 8 Pages.

The ability to predict crime is incredibly useful for police departments, city planners, and many other parties, but thus far current approaches have not made use of recent developments in machine learning. In this paper, we present a novel approach to this task: Gaussian process regression. Gaussian processes (GPs) are a rich family of distributions over functions. We train GPs on historic crime data to learn the underlying probability distribution of crime incidence and to make predictions about future crime distributions.
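
A minimal sketch of GP regression on a one-dimensional toy "incidents over time" series, with predictive uncertainty. The kernel choice and the synthetic data are assumptions for illustration, not the paper's setup or results.

```python
# Fit a Gaussian process to weekly incident counts and predict future weeks.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
weeks = np.arange(0, 52, dtype=float).reshape(-1, 1)
incidents = 20 + 5 * np.sin(weeks.ravel() / 4.0) + rng.normal(0, 1.0, 52)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0) + WhiteKernel(1.0))
gp.fit(weeks, incidents)

future = np.arange(52, 60, dtype=float).reshape(-1, 1)
mean, std = gp.predict(future, return_std=True)   # predictive mean and uncertainty
print(np.round(mean, 1), np.round(std, 1))
```
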
Category: Artificial Intelligence

[409] viXra:1712.0465 [pdf] submitted on 2017-12-16 00:36:46

Reinforcement Learning with Swingy Monkey

Authors: Luis Perez, Aidi Zhang, Kevin Eskici
Comments: 7 Pages.

This paper explores model-free, model-based, and mixture models for reinforcement learning under the setting of a SwingyMonkey game \footnote{The code is hosted on a public repository \href{https://github.com/kandluis/machine-learning}{here} under the prac4 directory.}. SwingyMonkey is a simple game with well-defined goals and mechanisms, with a relatively small state-space. Using Bayesian Optimization \footnote{The optimization took place using the open-source software made available by HIPS \href{https://github.com/HIPS/Spearmint}{here}.} on a simple Q-Learning algorithm, we were able to obtain high scores within just a few training epochs. However, the system failed to scale well after continued training, and optimization over hundreds of iterations proved too time-consuming to be effective. After manually exploring multiple approaches, the best results were achieved using a mixture of $\epsilon$-greedy Q-Learning with a stable learning rate $\alpha$ and a discount factor $\delta \approx 1$. Despite the theoretical limitations of this approach, these settings resulted in maximum scores of over 5000 points, with an average score of $\bar{x} \approx 684$ (averaged over the final 100 testing epochs; median $\bar{m} = 357.5$). The results show a continuing linear log-relation, capping only after 20,000 training epochs.
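
A minimal sketch of epsilon-greedy tabular Q-learning with a fixed learning rate and a discount close to 1, the configuration the abstract reports working best. The two-state toy environment and the specific hyperparameter values are placeholders, not SwingyMonkey or the paper's tuned settings.

```python
# Epsilon-greedy tabular Q-learning on a tiny toy environment.
import random

states, actions = range(2), range(2)
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.99, 0.1

def step(s, a):
    # Toy dynamics: action 1 in state 1 pays off; everything else is neutral.
    reward = 1.0 if (s == 1 and a == 1) else 0.0
    return random.choice(list(states)), reward

s = 0
for _ in range(5000):
    a = (random.choice(list(actions)) if random.random() < epsilon
         else max(actions, key=lambda a_: Q[(s, a_)]))
    s_next, r = step(s, a)
    best_next = max(Q[(s_next, a_)] for a_ in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    s = s_next

print({k: round(v, 2) for k, v in Q.items()})
```
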
Category: Artificial Intelligence

[408] viXra:1712.0464 [pdf] submitted on 2017-12-16 00:38:28

Multi-Document Text Summarization

Authors: Luis Perez, Kevin Eskici
Comments: 24 Pages.

We tackle the problem of multi-document extractive summarization by implementing two well-known algorithms for single-text summarization -- {\sc TextRank} and {\sc Grasshopper}. We use ROUGE-1 and ROUGE-2 precision scores with the DUC 2004 Task 2 data set to measure the performance of these two algorithms, with optimized parameters as described in their respective papers ($\alpha=0.25$ and $\lambda=0.5$ for Grasshopper and $d=0.85$ for TextRank). We compare these algorithms to common baselines as well as to non-naive, novel baselines, and we present the resulting ROUGE-1 and ROUGE-2 recall scores. Subsequently, we implement two novel algorithms as extensions of {\sc GrassHopper} and {\sc TextRank}, termed {\sc ModifiedGrassHopper} and {\sc ModifiedTextRank} respectively. The modified algorithms intuitively attempt to ``maximize'' diversity across the summary. We present the resulting ROUGE scores. We expect that with further optimizations, this unsupervised approach to extractive text summarization will prove useful in practice.
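A bare-bones sketch of the TextRank idea (PageRank over a sentence-similarity graph) with the damping factor $d=0.85$ quoted above; this is an illustration of the algorithm, not the tuned implementation evaluated in the paper:

```python
# Bare-bones TextRank-style extractive summarizer: rank sentences with
# PageRank over a cosine-similarity graph, then keep the top-k sentences
# in their original order. Example sentences are made up.
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank_summary(sentences, k=2, d=0.85):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    np.fill_diagonal(sim, 0.0)                    # no self-similarity edges
    scores = nx.pagerank(nx.from_numpy_array(sim), alpha=d)
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]

docs = [
    "The city council approved the new transit budget on Monday.",
    "Funding for buses and light rail will increase next year.",
    "A local bakery won a regional award for its sourdough.",
    "Officials said the transit budget prioritizes bus service.",
]
print(textrank_summary(docs, k=2))
```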
Category: Artificial Intelligence

[407] viXra:1712.0446 [pdf] submitted on 2017-12-13 08:17:06

A New Divergence Measure for Basic Probability Assignment and Its Applications in Extremely Uncertain Environments

Authors: Liguo Fei, Yong Hu, Yong Deng, Sankaran Mahadevan
Comments: 9 Pages.

Information fusion under extremely uncertain environments is an important issue in pattern classification and decision-making problems. Dempster-Shafer evidence theory (D-S theory) is increasingly applied to information fusion because of its ability to deal with uncertain information. However, results contrary to common sense are often obtained when combining different evidences using Dempster's combination rules. How to measure the difference between different evidences is still an open issue. In this paper, a new divergence is proposed based on the Kullback-Leibler divergence in order to measure the difference between different basic probability assignments (BPAs). Numerical examples are used to illustrate the computational process of the proposed divergence. The similarity between different BPAs is then also defined based on the proposed divergence. The basic knowledge about pattern recognition is introduced, and a new classification algorithm is presented using the proposed divergence and similarity under extremely uncertain environments, illustrated by a small example on robot sensing. The method put forward is motivated by the pressing need to develop intelligent systems, such as sensor-based data-fusion manipulators, which must work in complicated, extremely uncertain environments where sensory data are 1) fragmentary and 2) collected at multiple levels of resolution.
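As a rough illustration of the kind of quantity involved, the sketch below computes a symmetrized KL-style divergence between two BPAs that share the same focal elements; the divergence actually proposed in the paper is defined differently, so treat this purely as a schematic:

```python
# Illustrative only: a symmetrized KL-style divergence between two basic
# probability assignments (BPAs) sharing the same focal elements. This is not
# the paper's proposed divergence, just a sketch of the kind of comparison.
import math

def kl(m1, m2):
    return sum(m1[A] * math.log(m1[A] / m2[A]) for A in m1 if m1[A] > 0)

def symmetric_divergence(m1, m2):
    return 0.5 * (kl(m1, m2) + kl(m2, m1))

# Focal elements written as frozensets over the frame {a, b, c}.
m1 = {frozenset("a"): 0.6, frozenset("b"): 0.3, frozenset("ab"): 0.1}
m2 = {frozenset("a"): 0.2, frozenset("b"): 0.7, frozenset("ab"): 0.1}
print(round(symmetric_divergence(m1, m2), 4))
```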
Category: Artificial Intelligence

[406] viXra:1712.0444 [pdf] submitted on 2017-12-13 08:59:01

Environmental Impact Assessment Using D-Vikor Approach

Authors: Liguo Fei, Yong Deng
Comments: 15 Pages.

Environmental impact assessment (EIA) is an open and important issue that depends on social, ecological, economic and other factors. Owing to human judgment, a variety of uncertainties are brought into the EIA process. Many existing methods seem unable to represent and handle this uncertainty effectively. A relatively new theory called D numbers, because of its advantage in handling uncertain information, is widely used for uncertainty modeling and decision making. The VIKOR method has unique advantages in dealing with multiple-criteria decision-making (MCDM) problems, especially when the criteria are non-commensurable and even conflicting, and it can obtain a compromise optimal solution. In order to solve EIA problems more effectively, this paper proposes a D-VIKOR approach, which extends the VIKOR method with D numbers theory. In the proposed approach, assessment information for environmental factors is expressed and modeled by D numbers, and a new combination rule for multiple D numbers is defined. Subjective weights and objective weights are considered in the VIKOR process for more reasonable ranking results. A numerical example is conducted to analyze and demonstrate the practicality and effectiveness of the proposed D-VIKOR approach.
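A sketch of the classical VIKOR ranking step that the D-VIKOR approach builds on, applied to a crisp, made-up decision matrix; the D-number modelling and combination rule from the paper are omitted:

```python
# Classical VIKOR ranking on a crisp decision matrix (the D-number modelling
# described in the abstract is omitted). Alternatives are rows, criteria are
# columns; all criteria are treated as "benefit" criteria here.
import numpy as np

def vikor(F, weights, v=0.5):
    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    denom = np.where(f_best == f_worst, 1.0, f_best - f_worst)
    norm = (f_best - F) / denom
    S = (weights * norm).sum(axis=1)          # group utility
    R = (weights * norm).max(axis=1)          # individual regret
    Q = (v * (S - S.min()) / max(S.max() - S.min(), 1e-12)
         + (1 - v) * (R - R.min()) / max(R.max() - R.min(), 1e-12))
    return S, R, Q

F = np.array([[7.0, 8.0, 6.0],     # hypothetical alternative scores
              [6.0, 9.0, 7.0],
              [8.0, 6.0, 8.0]])
w = np.array([0.4, 0.35, 0.25])    # criteria weights (sum to 1)
S, R, Q = vikor(F, w)
print("Ranking by Q (lower is better):", np.argsort(Q))
```

Alternatives with lower Q are ranked better; a full VIKOR procedure would additionally check the acceptable-advantage and acceptable-stability conditions.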
Category: Artificial Intelligence

[405] viXra:1712.0432 [pdf] submitted on 2017-12-13 22:28:48

DS-Vikor: a New Methodology for Supplier Selection

Authors: Liguo Fei, Yong Deng, Yong Hu
Comments: 15 Pages.

How to select the optimal supplier is an open and important issue in supply chain management (SCM): potential suppliers must be assessed and ranked, and the task can be considered a multi-criteria decision-making (MCDM) problem. Experts' assessments play a very important role in the process of supplier selection, while the subjective judgment of human beings introduces unpredictable uncertainty. Existing methods, however, seem unable to represent and handle this uncertainty effectively. Dempster-Shafer evidence theory (D-S theory) is widely used for uncertainty modeling, decision making and conflict management thanks to its ability to handle uncertain information. The VIKOR method has a great advantage in handling MCDM problems with non-commensurable and even conflicting criteria, and in obtaining a compromise optimal solution. In this paper, a DS-VIKOR method is proposed for the supplier selection problem, extending the VIKOR method with D-S theory. In this method, the basic probability assignment (BPA) is used to denote the decision makers' assessments of suppliers, a Deng-entropy weight-based method is defined and applied to determine the weights of the multiple criteria, and the VIKOR method is used to obtain the final ranking results. A real-life illustrative example is conducted to analyze and demonstrate the practicality and effectiveness of the proposed DS-VIKOR method.
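A small sketch of the Deng entropy underlying the weighting step, assuming the usual form $E_d(m) = -\sum_A m(A) \log_2 (m(A)/(2^{|A|}-1))$; the full DS-VIKOR pipeline built on top of it is not reproduced:

```python
# Sketch of the Deng (belief) entropy used for criteria weighting above:
# Ed(m) = -sum_A m(A) * log2( m(A) / (2^|A| - 1) ).
import math

def deng_entropy(bpa):
    total = 0.0
    for focal, mass in bpa.items():
        if mass > 0:
            total -= mass * math.log2(mass / (2 ** len(focal) - 1))
    return total

# Example BPA over the frame {a, b, c}, with one multi-element focal set.
m = {frozenset("a"): 0.5, frozenset("bc"): 0.3, frozenset("abc"): 0.2}
print(round(deng_entropy(m), 4))
```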
Category: Artificial Intelligence

[404] viXra:1712.0400 [pdf] submitted on 2017-12-13 06:52:57

Adaptively Evidential Weighted Classifier Combination

Authors: Liguo Fei, Bingyi Kang, Van-Nam Huynh, Yong Deng
Comments: 9 Pages.

Classifier combination plays an important role in classification. Owing to its efficiency in handling and fusing uncertain information, Dempster-Shafer evidence theory is widely used in multi-classifier fusion. In this paper, a method of adaptively evidentially weighted classifier combination is presented. In our proposed method, the output of each classifier is modelled by a basic probability assignment (BPA). The weights are then determined adaptively for each individual classifier according to the uncertainty degree of the corresponding BPA. The uncertainty degree is measured by a belief entropy, named Deng entropy. The discounting-and-combination scheme in D-S theory is used to calculate the weighted BPAs and combine them into the final BPA for classification. The effectiveness of the proposed weighted combination method is illustrated by numerical experimental results.
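To make the discounting-and-combination step concrete, here is a sketch of Shafer discounting followed by Dempster's rule on two toy BPAs; the weights are fixed by hand rather than derived from Deng entropy as in the paper:

```python
# Shafer discounting followed by Dempster's rule of combination on toy BPAs;
# the discounting weights (0.9, 0.6) are hand-picked for illustration.
from itertools import product

FRAME = frozenset("abc")

def discount(bpa, alpha):
    """Scale masses by alpha and move the remaining mass to the whole frame."""
    out = {A: alpha * m for A, m in bpa.items()}
    out[FRAME] = out.get(FRAME, 0.0) + (1 - alpha)
    return out

def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for (A, ma), (B, mb) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {A: m / (1 - conflict) for A, m in combined.items()}

m1 = {frozenset("a"): 0.7, frozenset("b"): 0.2, FRAME: 0.1}
m2 = {frozenset("a"): 0.5, frozenset("c"): 0.4, FRAME: 0.1}
fused = dempster(discount(m1, 0.9), discount(m2, 0.6))
print({"".join(sorted(k)): round(v, 3) for k, v in fused.items()})
```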
Category: Artificial Intelligence

[403] viXra:1712.0347 [pdf] submitted on 2017-12-07 09:10:57

Finding The Next Term Of Any Time Series Type Or Non Time Series Type Sequence Using Total Similarity & Dissimilarity {Version 6} ISSN 1751-3030.

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel scheme of finding the next term of any given time series type or non-time series type sequence.
Category: Artificial Intelligence

[402] viXra:1712.0138 [pdf] submitted on 2017-12-05 14:07:08

Topological Clustering as a Method of Control for Certain Critical-Point Sensitive Systems

Authors: Martin J. Dudziak
Comments: 6 Pages. submitted to CoDIT 2018 (Thessaloniki, Greece, April 2018)

New methods can provide more sensitive modeling and more reliable control through the use of dynamically alterable local neighborhood clusters composed of the state-space parameters most disposed to be influential in non-linear systemic changes. Particular attention is directed to systems with extreme non-linearity and uncertainty in measurement and in control communications (e.g., micro-scalar, remote and inaccessible to real-time control). An architecture for modeling based upon topological similarity mapping principles is introduced as an alternative to classical Turing machine models, including new “quantum computers.”
Category: Artificial Intelligence

[401] viXra:1712.0071 [pdf] submitted on 2017-12-03 19:12:51

The Intelligence Quotient of the Artificial Intelligence

Authors: Dimiter Dobrev
Comments: 15 Pages. Bulgarian. Serdica Journal of Computing.

To say which programs are AI, it is enough to run an exam and recognize as AI those programs that pass it. The exam grade will be called IQ. We cannot say exactly how large the IQ has to be for a program to count as AI, but we will choose one specific value. So our definition of AI will be any program whose IQ is above this specific value. This idea has already been realized in [1], but here we repeat the construction with some improvements.
Category: Artificial Intelligence

[400] viXra:1711.0477 [pdf] submitted on 2017-11-30 18:22:54

Okay, Google: a Preliminary Evaluation of the Robustness of Scholar Metrics

Authors: H Qadrawxu-Korbau, D Smith, K Beryllium
Comments: 4 Pages.

Google Scholar provides a number of metrics often used as proxies for scientific productivity. It is, however, possible to consciously manipulate Scholar metrics, for instance via copious self-citation or upload of fake papers to indexed websites. Here, we post a paper on vixra, a preprint forum, and arbitrarily cite a completely random study to evaluate whether Scholar will count this submission toward the overall citation count of that study. We publish no results, as the publication of the paper is, in this case, the experiment.
Category: Artificial Intelligence

[399] viXra:1711.0470 [pdf] submitted on 2017-11-30 02:13:24

Multi-Scalar Multi-Agent Control for Optimization of Dynamic Networks Operating in Remote Environment

Authors: Martin Dudziak
Comments: 7 Pages.

Multi-agent control systems have demonstrated effectiveness in a variety of physical applications including cooperative robot networks and multi-target tracking in high-noise network and group environments. We introduce the use of multi-scalar models that extend cellular automaton regional neighborhood comparisons and local voting measures based upon stochastic approximation in order to provide more efficient and time-sensitive solutions to non-deterministic problems. The scaling factors may be spatial, temporal or expressed in other semantic values. The exercising of both cooperative and competitive functions by the devices in such networks offers a method for optimizing system parameters to reduce search, sorting, ranking and anomaly evaluation tasks. Applications are illustrated for a group of robots assigned different tasks in remote operating environments with highly constrained communications and critical fail-safe conditions.
Category: Artificial Intelligence

[398] viXra:1711.0433 [pdf] submitted on 2017-11-26 23:19:36

Finding The Next Term Of Any Time Series Type Sequence Using Total Similarity & Dissimilarity {Version 5} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel scheme of finding the next term of any given time series type sequence.
Category: Artificial Intelligence

[397] viXra:1711.0429 [pdf] submitted on 2017-11-27 05:14:34

Finding The Next Term Of Any Sequence Using Total Similarity & Dissimilarity {Version 5}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel scheme of finding the next term of any given sequence.
Category: Artificial Intelligence

[396] viXra:1711.0420 [pdf] submitted on 2017-11-26 01:39:24

Move the Tip to the Right a Language Based Computeranimation System in Box2d

Authors: Frank Schröder
Comments: 8 Pages.

Not only do “robots need language”; sometimes a human operator does too. To interact with complex domains, the operator needs a vocabulary to init the robot, make it walk and grasp objects. Natural language interfaces can support semi-autonomous and fully autonomous systems on both sides. Instead of using neural networks, the language grounding problem can be solved with object-oriented programming. In the following paper, a simulation of micro-manipulation under a microscope is given, which is controlled with a C++ script. The small vocabulary consists of init, pregrasp, grasp and place.
Category: Artificial Intelligence

[395] viXra:1711.0382 [pdf] submitted on 2017-11-22 02:30:08

A Survey on Evolutionary Computation: Methods and Their Applications in Engineering

Authors: Morteza Husainy Yar, Vahid Rahmati, Hamid Reza Dalili Oskouei
Comments: 9 Pages.

Evolutionary computation is now an inseparable branch of artificial intelligence, comprising smart methods based on evolutionary algorithms aimed at solving different real-world problems through natural procedures inspired by living creatures. It is based on random methods and on regenerating, selecting, changing or replacing data within a system such as a personal computer (PC), a cloud, or any other data center. This paper briefly studies different evolutionary computation techniques used in several applications, specifically image processing, cloud computing and grid computing. These methods are generally categorized as evolutionary algorithms and swarm intelligence. Each of these subfields contains a variety of algorithms and techniques, which are presented with their applications. This work tries to demonstrate the benefits of the field by presenting real-world applications of these methods that have already been implemented. Among these applications is the cloud computing scheduling problem, improved by genetic algorithms, ant colony optimization, and the bees algorithm. Other applications include improvement of grid load balancing, image processing, the improved bi-objective dynamic cell formation problem, robust machine cells for dynamic part production, integrated mixed-integer linear programming, robotic applications, and power control in wind turbines.
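As a concrete instance of the evolutionary algorithms surveyed, a minimal real-coded genetic algorithm with tournament selection, arithmetic crossover and Gaussian mutation on a toy objective (all parameters are illustrative):

```python
# Minimal genetic algorithm of the kind surveyed above: real-valued genomes,
# tournament selection, arithmetic crossover, Gaussian mutation, maximizing a
# toy fitness function. Parameters are illustrative, not tuned.
import random

def fitness(x):
    return -(x[0] ** 2 + x[1] ** 2)          # maximum at (0, 0)

def tournament(pop):
    return max(random.sample(pop, 3), key=fitness)

def crossover(a, b):
    return [(ai + bi) / 2 for ai, bi in zip(a, b)]

def mutate(x, sigma=0.3):
    return [xi + random.gauss(0, sigma) for xi in x]

pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(30)]
for _ in range(50):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(30)]
best = max(pop, key=fitness)
print("best individual:", [round(v, 3) for v in best])
```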
Category: Artificial Intelligence

[394] viXra:1711.0370 [pdf] submitted on 2017-11-20 22:14:32

Finding The Next Term Of Any Given Sequence Using Total Similarity & Dissimilarity {Version 3} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research investigation, the author has detailed a novel scheme of finding the next term of any given sequence.
Category: Artificial Intelligence

[393] viXra:1711.0367 [pdf] submitted on 2017-11-21 00:18:32

One Step Evolution Of Any Real Positive Number {Version 2}

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed the Theory Of One Step Evolution Of Any Real Positive Number.
Category: Artificial Intelligence

[392] viXra:1711.0361 [pdf] submitted on 2017-11-20 02:12:39

Finding The Next Term Of Any Given Sequence Using Total Similarity & Dissimilarity. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research investigation, the author has detailed a novel scheme of finding the next term of any given sequence.
Category: Artificial Intelligence

[391] viXra:1711.0360 [pdf] submitted on 2017-11-20 02:43:10

Ontology Engineering for Robotics

Authors: Frank Schröder
Comments: 8 Pages.

Ontologies are a powerful alternative to reinforcement learning. They store knowledge in a domain-specific language. The best practice for implementing ontologies is a distributed version control system that is filled manually by programmers.
Category: Artificial Intelligence

[390] viXra:1711.0359 [pdf] submitted on 2017-11-20 05:21:55

Finding The Next Term Of Any Given Sequence Using Total Similarity & Dissimilarity {New} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research investigation, the author has detailed a novel scheme of finding the next term of any given sequence.
Category: Artificial Intelligence

[389] viXra:1711.0292 [pdf] submitted on 2017-11-12 09:29:57

Strengths and Potential of the SP Theory of Intelligence in General, Human-Like Artificial Intelligence

Authors: J Gerard Wolff
Comments: 20 Pages.

This paper first defines "general, human-like artificial intelligence" (GHLAI) in terms of five principles. In the light of the definition, the paper summarises the strengths and potential of the "SP theory of intelligence" and its realisation in the "computer model", outlined in an appendix, in three main areas: the versatility of the SP system in aspects of intelligence; its versatility in the representation of diverse kinds of knowledge; and its potential for the seamless integration of diverse aspects of intelligence and diverse kinds of knowledge, in any combination. There are reasons to believe that a mature version of the SP system may attain full GHLAI in diverse aspects of intelligence and in the representation of diverse kinds of knowledge.
Category: Artificial Intelligence

[388] viXra:1711.0266 [pdf] submitted on 2017-11-11 03:38:23

Revisit Fuzzy Neural Network: Demystifying Batch Normalization and ReLU with Generalized Hamming Network

Authors: Lixin Fan
Comments: 10 Pages. NIPS 2017 publication.

We revisit fuzzy neural network with a cornerstone notion of generalized hamming distance, which provides a novel and theoretically justified framework to re-interpret many useful neural network techniques in terms of fuzzy logic. In particular, we conjecture and empirically illustrate that, the celebrated batch normalization (BN) technique actually adapts the “normalized” bias such that it approximates the rightful bias induced by the generalized hamming distance. Once the due bias is enforced analytically, neither the optimization of bias terms nor the sophisticated batch normalization is needed. Also in the light of generalized hamming distance, the popular rectified linear units (ReLU) can be treated as setting a minimal hamming distance threshold between network inputs and weights. This thresholding scheme, on the one hand, can be improved by introducing double-thresholding on both positive and negative extremes of neuron outputs. On the other hand, ReLUs turn out to be non-essential and can be removed from networks trained for simple tasks like MNIST classification. The proposed generalized hamming network (GHN) as such not only lends itself to rigorous analysis and interpretation within the fuzzy logic theory but also demonstrates fast learning speed, well-controlled behaviour and state-of-the-art performances on a variety of learning tasks.
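A sketch of the generalized hamming distance the abstract builds on, taking the element-wise fuzzy-XOR form $h(a,b) = a + b - 2ab$, with a simple clamp standing in for the double-thresholding idea; shapes and thresholds are illustrative assumptions:

```python
# Element-wise generalized hamming distance and an illustrative
# double-threshold (clamp) on the aggregated neuron response.
import numpy as np

def generalized_hamming(x, w):
    """Element-wise fuzzy XOR: h(a, b) = a + b - 2ab."""
    return x + w - 2.0 * x * w

def double_threshold(h, low=-0.5, high=0.5):
    """Clamp both extremes of the response (illustrative stand-in)."""
    return np.clip(h, low, high)

x = np.random.default_rng(0).uniform(0, 1, size=4)   # a toy input patch
w = np.array([0.2, 0.8, 0.5, 0.9])                   # a toy weight vector
h = generalized_hamming(x, w)
print("per-element distances:", np.round(h, 3))
print("thresholded neuron output:", round(float(double_threshold(h.sum())), 3))
```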
Category: Artificial Intelligence

[387] viXra:1711.0265 [pdf] submitted on 2017-11-11 04:14:07

Revisit Fuzzy Neural Network: Bridging the Gap Between Fuzzy Logic and Deep Learning

Authors: Lixin Fan
Comments: 75 Pages. bridging the gap between symbolic versus connectionist.

This article aims to establish a concrete and fundamental connection between two important fields in artificial intelligence, i.e. deep learning and fuzzy logic. On the one hand, we hope this article will pave the way for fuzzy logic researchers to develop convincing applications and tackle challenging problems which are of interest to the machine learning community too. On the other hand, deep learning could benefit from the comparative research by re-examining many trial-and-error heuristics through the lens of fuzzy logic, and consequently distilling the essential ingredients with rigorous foundations. Based on the new findings reported in [38] and this article, we believe the time is ripe to revisit fuzzy neural network as a crucial bridge between two schools of AI research, i.e. symbolic versus connectionist [93], and eventually open the black-box of artificial neural networks.
Category: Artificial Intelligence

[386] viXra:1711.0250 [pdf] submitted on 2017-11-08 06:37:55

Total Intra Similarity And Dissimilarity Measure For The Values Taken By A Parameter Of Concern. {Version 1}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel method of finding the ‘Total Intra Similarity And Dissimilarity Measure For The Values Taken By A Parameter Of Concern’. The advantage of such a measure is that using this measure we can clearly distinguish the contribution of Intra aspect variation and Inter aspect variation when both are bound to occur in a given phenomenon of concern. This measure provides the same advantages as that provided by the popular F-Statistic measure.
Category: Artificial Intelligence

[385] viXra:1711.0241 [pdf] submitted on 2017-11-07 03:26:43

Dysfunktionale Methoden der Robotik

Authors: Frank Schröder
Comments: 8 Pages. German

When carrying out robotics projects, a great deal can go wrong. This does not just mean cold solder joints or crashing software; far more fundamental issues play a role. To avoid mistakes, one must first take a closer look at failure patterns, i.e. those development methods by which a robot should under no circumstances be built, and ways in which the software should preferably not work.
Category: Artificial Intelligence

[384] viXra:1711.0235 [pdf] submitted on 2017-11-06 20:27:28

Not Merely Memorization in Deep Networks: Universal Fitting and Specific Generalization

Authors: Xiuyi Yang
Comments: 7 Pages.

We reinterpret the training of convolutional neural nets (CNNs) with a universal classification theorem (UCT). This theory implies that any disjoint datasets can be classified by two or more layers of CNNs based on ReLUs and the rigid transformation switch units (RTSUs) we propose here, which explains why CNNs can memorize both noise and real data. Subsequently, we present another fresh hypothesis: a CNN is insensitive to certain variants of an input training example, where each variant is related to the original training input by a generating function. This hypothesis means CNNs can generalize well even for randomly generated training data, and it illuminates the paradox of why CNNs fit both real and noise data yet fail drastically when making predictions on noise data. Our findings suggest that the study of the generalization theory of CNNs should turn to generating functions instead of traditional statistical machine learning theory, which is based on the assumption that training data and testing data are independent and identically distributed (IID); this IID assumption apparently contradicts the experiments in this paper. We verify these ideas experimentally.
Category: Artificial Intelligence

[383] viXra:1711.0226 [pdf] submitted on 2017-11-07 01:52:12

Theory Of Universal Evolution Along Prime Basis (Time Like) ISSN 1751-3030.

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed the Theory Of Evolution.
Category: Artificial Intelligence

[382] viXra:1711.0208 [pdf] submitted on 2017-11-07 02:22:45

Theory Of Universal Evolution Along Prime Basis (Time Like) {Version 2} ISSN 1751-3030.

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed the Theory Of Evolution.
Category: Artificial Intelligence

[381] viXra:1711.0116 [pdf] submitted on 2017-11-02 23:51:41

Dynamic Thresholding For Linear Binary Classifiers. {Version 2} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel method of finding the Thresholding for Linear Binary Classifiers.
Category: Artificial Intelligence

[380] viXra:1711.0034 [pdf] submitted on 2017-11-02 06:05:21

Dynamic Thresholding For Linear Binary Classifiers. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed a novel method of finding the Thresholding for Linear Binary Classifiers.
Category: Artificial Intelligence

[379] viXra:1710.0336 [pdf] submitted on 2017-10-31 23:50:38

Scheme For Finding The Next Term Of A Sequence Based On Evolution. {Version 7}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 7 Pages.

In this research investigation, the author has detailed a novel method of finding the next term of a sequence based on Evolution.
Category: Artificial Intelligence

[378] viXra:1710.0299 [pdf] submitted on 2017-10-27 04:13:49

Scheme For Finding The Next Term Of A Sequence Based On Evolution. {Version 6}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 6 Pages.

In this research investigation, the author has detailed a novel method of finding the next term of a sequence based on Evolution.
Category: Artificial Intelligence

[377] viXra:1710.0297 [pdf] submitted on 2017-10-25 03:57:32

Scheme For Finding The Next Term Of A Sequence Based On Evolution {File Closing Version 2}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 5 Pages.

In this research investigation, the author has detailed a novel method of finding the next term of a sequence based on Evolution.
Category: Artificial Intelligence

[376] viXra:1710.0294 [pdf] submitted on 2017-10-25 23:47:37

Scheme For Finding The Next Term Of A Sequence Based On Evolution {File Closing Version 3}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 5 Pages.

In this research investigation, the author has detailed a novel method of finding the next term of a sequence based on Evolution.
Category: Artificial Intelligence

[375] viXra:1710.0293 [pdf] submitted on 2017-10-26 01:24:46

Scheme For Finding The Next Term Of A Sequence Based On Evolution {File Closing Version 4}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 6 Pages.

In this research investigation, the author has detailed a novel method of finding the next term of a sequence based on Evolution.
Category: Artificial Intelligence

[374] viXra:1710.0289 [pdf] submitted on 2017-10-26 03:56:28

Scheme For Finding The Next Term Of A Sequence Based On Evolution {File Closing Version 5}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 6 Pages.

In this research investigation, the author has detailed a novel method of finding the next term of a sequence based on Evolution.
Category: Artificial Intelligence

[373] viXra:1710.0279 [pdf] submitted on 2017-10-24 04:45:19

Scheme For Finding The Next Term Of A Sequence Based On Evolution {File Closing Version 1}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel method of finding the next term of a sequence based on Evolution.
Category: Artificial Intelligence

[372] viXra:1710.0271 [pdf] submitted on 2017-10-23 23:14:04

The Average Computed In Primes Basis {File Closing Version 2}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed a novel method of finding the average of a sequence in Primes Basis.
Category: Artificial Intelligence

[371] viXra:1710.0267 [pdf] submitted on 2017-10-23 06:21:13

The Average Computed In Primes Basis {File Closing Version 1}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed a novel method of finding the average of a sequence in Primes Basis.
Category: Artificial Intelligence

[370] viXra:1710.0259 [pdf] submitted on 2017-10-23 00:38:01

Universe’s Way Of Recursively Finding The Next Term Of Any Sequence {File Closing Version 3}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel method of Universe’s Way Of Recursively Finding The Next Term Of Any Sequence.
Category: Artificial Intelligence

[369] viXra:1710.0208 [pdf] submitted on 2017-10-18 23:07:44

The Recursive Future Equation Based On The Ananda-Damayanthi Normalized Similarity Measure. {File Closing Version 4}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 4 Pages.

In this research Technical Note, the author has presented a Recursive Future Average Of A Time Series Data Based on Cosine Similarity.
Category: Artificial Intelligence

[368] viXra:1710.0141 [pdf] submitted on 2017-10-12 10:42:50

Advances in the Collective Interface

Authors: Miguel A. Sanchez-Rey
Comments: 5 Pages.

A byproduct of 2AI.
Category: Artificial Intelligence

[367] viXra:1710.0003 [pdf] submitted on 2017-10-01 06:54:11

Nature-Like Technology for Communication Network Selfactualization in the Mode Advancing Real-Time

Authors: Popov Boris
Comments: 7 Pages.

In order for a control system to operate in real time, the communication system should operate ahead of real time. This can be achieved only by providing the communication system with a mechanism for forward adaptation of the network structure to variations in user query topics and rates, as well as for its self-actualization. A technique for developing such a nature-like technology is proposed; it is based on the fundamental natural phenomenon of inertia and on widespread symbiotic cooperation, and is distinguished by building up (developing) the resources being used.
Category: Artificial Intelligence

[366] viXra:1709.0404 [pdf] submitted on 2017-09-26 13:26:47

A Suggestion on CLIPS/JIProlog/JNNS/ImageJ/Java Agents/JikesRVM Based Analysis of Cryo-EM/TEM/SEM Images Using HDF5 Image Format – Some Interesting & Feasible Implementations of Expert Systems to Understand Nano- Bio Material Systems and EM

Authors: D.N.T.Kumar
Comments: 7 Pages. Prolog/NN/Expert Systems/JikesRVM/Informatics/EM/Cryo-EM/TEM/SEM/Material Science/Java Agents/Nanotechnology.

In this short communication, the importance of an expert-systems-based imaging framework for probing Cryo-EM images is presented from a practical implementation point of view. Neural networks (NN) are an excellent tool for probing various domains of science and technology, and the Cryo-EM technique holds a bright future based on the application of NN. Prolog-NN based algorithms could form a powerful informatics and computational framework for researching the challenges of nano-bio applications. Further, it is useful and important to study the behavior of NN in domains where knowledge does not exist, i.e. to use the models to make bold predictions, which form the basis for Cryo-EM image processing tasks and the discovery of new nano-bio phenomena. Indeed, the performance of NN is most useful to researchers in domains where the modeling and prediction “uncertainty” is known to be the greatest factor. All the methods presented here are also applicable to TEM/SEM and other EM image processing tasks.
Category: Artificial Intelligence

[365] viXra:1709.0403 [pdf] submitted on 2017-09-26 13:33:02

Kernel Principal Component Analysis as Mathematical Tool In Processing Cryo- EM Images – A Suggestion Using Kernel Based Data Processing Techniques in a Java Virtual Machine(JVM) Environment.

Authors: D.N.T.Kumar
Comments: 7 Pages. A Suggestion Using Kernel Based Data Processing Techniques in a Java Virtual Machine(JVM) Environment.

In this short communication, it is proposed to highlight some novel methodologies to probe, process and compute Cryo-EM images in a unique way, by using an open-source Kernel-PCA and by interfacing the Kernel-PCA via the Java Matlab Interface (JMI) to the JikesRVM system or any other Java Virtual Machine (JVM). The main reason to design and develop this kind of computing approach is to utilize the features of Java-based technologies for futuristic applications in the promising and demanding domains of Cryo-EM imaging in the nano-bio field. This is one of the pioneering research topics in this domain, with a lot of promise. Image de-noising and novelty detection pave the way and hold the key to better Cryo-EM image processing.
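A sketch of kernel-PCA denoising in plain scikit-learn, of the kind the note suggests wiring into a JVM-based pipeline; scikit-learn's digits data stands in for Cryo-EM images, and all hyperparameters are illustrative:

```python
# Kernel-PCA denoising sketch: fit on noisy images, reconstruct held-out ones.
# The digits dataset stands in for Cryo-EM images, which are not available here.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X = load_digits().data / 16.0                 # 8x8 images flattened to 64-d
X_noisy = X + rng.normal(0, 0.25, X.shape)    # additive Gaussian noise

kpca = KernelPCA(n_components=30, kernel="rbf", gamma=0.03,
                 fit_inverse_transform=True, alpha=0.1)
kpca.fit(X_noisy[:1000])
X_denoised = kpca.inverse_transform(kpca.transform(X_noisy[1000:]))

mse = float(np.mean((X_denoised - X[1000:]) ** 2))
print("reconstruction MSE on held-out digits:", round(mse, 4))
```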
Category: Artificial Intelligence

[364] viXra:1709.0394 [pdf] submitted on 2017-09-26 11:50:52

How Does the ai Understand What's Going on

Authors: Dimiter Dobrev
Comments: 22 Pages.

Most researchers regard AI as a static function without memory. This is one of the few articles where AI is seen as a device with memory. When we have memory, we can ask ourselves: "Where am I?", and "What is going on?" When we have no memory, we have to assume that we are always in the same place and that the world is always in the same state.
Category: Artificial Intelligence

[363] viXra:1709.0323 [pdf] submitted on 2017-09-21 05:35:00

Recursive Future Average Of A Time Series Data Based On Cosine Similarity-RF

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note, the author has presented a Recursive Future Average Of A Time Series Data Based on Cosine Similarity.
Category: Artificial Intelligence

[362] viXra:1709.0322 [pdf] submitted on 2017-09-21 05:49:31

Recursive Future Average Of A Time Series Data Based On Cosine Similarity-RF {Version 2}

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note, the author has presented a Recursive Future Average Of A Time Series Data Based on Cosine Similarity.
Category: Artificial Intelligence

[361] viXra:1709.0313 [pdf] submitted on 2017-09-22 00:01:00

The Recursive Future Equation Based On The Ananda-Damayanthi Normalized Similarity Measure. {File Closing Version 2}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research Technical Note, the author has presented a Recursive Future Average Of A Time Series Data Based on Cosine Similarity.
Category: Artificial Intelligence

[360] viXra:1709.0242 [pdf] submitted on 2017-09-15 20:34:58

Exact Map Inference in General Higher-Order Graphical Models Using Linear Programming

Authors: Ikhlef Bechar
Comments: 50 Pages.

This paper is concerned with the problem of exact MAP inference in general higher-order graphical models by means of a traditional linear programming relaxation approach. The proof we develop is a rather simple algebraic one, made straightforward above all by the introduction of two novel algebraic tools. On the one hand, we introduce the notion of a delta-distribution, which simply stands for the difference of two arbitrary probability distributions and mainly serves to alleviate the sign constraint inherent to a traditional probability distribution. On the other hand, we develop an approximation framework for general discrete functions by means of an orthogonal projection expressed in terms of linear combinations of function margins with respect to a given collection of point subsets; we exploit this approach for the purpose of modeling locally consistent sets of discrete functions from a global perspective. After that, as a first step, we develop from scratch the expectation optimization framework, which is nothing other than a reformulation, on stochastic grounds, of the convex-hull approach. As a second step, we develop the traditional LP relaxation of this expectation optimization approach and show that it enables the MAP inference problem in graphical models to be solved under rather general assumptions. Last but not least, we describe an algorithm which allows an exact MAP solution to be computed from a possibly fractional optimal (probability) solution of the proposed LP relaxation.
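For a feel of the "traditional LP relaxation" the paper starts from, the sketch below solves the local-polytope relaxation of a two-variable, two-label MAP problem with scipy; the potentials are made up, and the paper's general higher-order construction is not reproduced:

```python
# Local-polytope LP relaxation for MAP on a two-variable, two-label model,
# solved with scipy.optimize.linprog. Potentials are illustrative only.
import numpy as np
from scipy.optimize import linprog

theta1 = np.array([0.2, 1.0])            # unary potential of variable 1
theta2 = np.array([0.5, 0.4])            # unary potential of variable 2
theta12 = np.array([[0.0, 0.9],          # pairwise potential theta12[x1, x2]
                    [0.3, 0.1]])

# Variable order: mu1(0), mu1(1), mu2(0), mu2(1),
#                 mu12(00), mu12(01), mu12(10), mu12(11)
c = -np.concatenate([theta1, theta2, theta12.ravel()])   # maximize -> minimize

A_eq = np.array([
    [1, 1, 0, 0, 0, 0, 0, 0],    # mu1 sums to 1
    [0, 0, 1, 1, 0, 0, 0, 0],    # mu2 sums to 1
    [1, 0, 0, 0, -1, -1, 0, 0],  # sum_x2 mu12(0, x2) = mu1(0)
    [0, 1, 0, 0, 0, 0, -1, -1],  # sum_x2 mu12(1, x2) = mu1(1)
    [0, 0, 1, 0, -1, 0, -1, 0],  # sum_x1 mu12(x1, 0) = mu2(0)
    [0, 0, 0, 1, 0, -1, 0, -1],  # sum_x1 mu12(x1, 1) = mu2(1)
])
b_eq = np.array([1, 1, 0, 0, 0, 0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
print("LP optimum:", -res.fun)
print("marginals:", np.round(res.x, 3))
```

On this tree-structured toy instance the relaxation is tight, so the recovered marginals are integral and encode the exact MAP labeling.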
Category: Artificial Intelligence

[359] viXra:1709.0217 [pdf] submitted on 2017-09-14 08:11:16

Quantum Thinking Machines

Authors: George Rajna
Comments: 24 Pages.

Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]
Category: Artificial Intelligence

[358] viXra:1709.0211 [pdf] submitted on 2017-09-14 06:46:27

Analyzing Huge Volumes of Data

Authors: George Rajna
Comments: 23 Pages.

Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that - surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]
Category: Artificial Intelligence

[357] viXra:1709.0161 [pdf] submitted on 2017-09-13 10:30:45

AI is Reinforcing Stereotypes

Authors: George Rajna
Comments: 46 Pages.

Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[356] viXra:1709.0159 [pdf] submitted on 2017-09-13 06:47:26

Mergeable Nervous Robots

Authors: George Rajna
Comments: 49 Pages.

Researchers at the Université libre de Bruxelles have developed self-reconfiguring modular robots that can merge, split and even self-heal while retaining full sensorimotor control. [29] A challenging brain technique called whole-cell patch clamp electrophysiology or whole-cell recording (WCR) is a procedure so delicate and complex that only a handful of humans in the whole world can do it. [28] ComText allows robots to understand contextual commands such as, " Pick up the box I put down. " [27] McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[355] viXra:1709.0142 [pdf] submitted on 2017-09-11 20:53:40

Brain Emotional Learning Based Intelligent Controller for Velocity Control of an Electro Hydraulic Servo System

Authors: Zohreh Alzahra Sanai Dashti, Milad Gholami, Masoud Hajimani
Comments: 7 Pages. IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE), e-ISSN: 2278-1676, p-ISSN: 2320-3331, Volume 12, Issue 4, Ver. II (Jul.-Aug. 2017), pp. 29-35

In this paper, a biologically motivated controller based on the mammalian limbic system, called the Brain Emotional Learning Based Intelligent Controller (BELBIC), is used for velocity control of an Electro Hydraulic Servo System (EHSS) in the presence of flow nonlinearities, internal friction and noise. It is shown that this technique can be successfully used to stabilize any chosen operating point of the system, with and without noise. All derived results are validated by computer simulation of a nonlinear mathematical model of the system. The proposed controllers cover a wide operating range of the system. We compare the BELBIC controller results with feedback linearization, backstepping and PID controllers.
Category: Artificial Intelligence

[354] viXra:1709.0141 [pdf] submitted on 2017-09-11 20:56:32

Design & Implementation of Fuzzy Parallel Distributed Compensation Controller for Magnetic Levitation System

Authors: Milad Gholami, Zohreh Alzahra Sanai Dashti, Masoud Hajimani
Comments: 9 Pages. IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE), e-ISSN: 2278-1676, p-ISSN: 2320-3331, Volume 12, Issue 4, Ver. II (Jul.-Aug. 2017), pp. 20-28

This study applies the parallel distributed compensation (PDC) technique for position control of a magnetic levitation system. The PDC method is based on a nonlinear Takagi-Sugeno (T-S) fuzzy model. It is shown that this technique can be successfully used to stabilize any chosen operating point of the system. All derived results are validated by experiments and by computer simulation of a nonlinear mathematical model of the system. The proposed controllers cover a wide operating range of the system.
Category: Artificial Intelligence

[353] viXra:1709.0125 [pdf] submitted on 2017-09-11 07:51:38

Machine Learning Monitoring Air Quality

Authors: George Rajna
Comments: 23 Pages.

UCLA researchers have developed a cost-effective mobile device to measure air quality. It works by detecting pollutants and determining their concentration and size using a mobile microscope connected to a smartphone and a machine-learning algorithm that automatically analyzes the images of the pollutants. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[352] viXra:1709.0108 [pdf] submitted on 2017-09-10 06:02:53

A New Semantic Theory of Nature Language

Authors: Kun Xing
Comments: 70 Pages.

Formal Semantics and Distributional Semantics are two important semantic frameworks in Natural Language Processing (NLP). Cognitive Semantics belongs to the movement of Cognitive Linguistics, which is based on contemporary cognitive science. Each framework can deal with some meaning phenomena, but none of them fulfills all requirements proposed by applications. A unified semantic theory characterizing all important language phenomena has both theoretical and practical significance; however, although many attempts have been made in recent years, no existing theory has achieved this goal yet. This article introduces a new semantic theory that has the potential to characterize most of the important meaning phenomena of natural language and to fulfill most of the necessary requirements for philosophical analysis and for NLP applications. The theory is based on a unified representation of information, and constructs a kind of mathematical model called a cognitive model to interpret natural language expressions in a compositional manner. It accepts the empirical assumption of Cognitive Semantics, and overcomes most shortcomings of Formal Semantics and of Distributional Semantics. The theory, however, is not a simple combination of existing theories, but an extensive generalization of classic logic and Formal Semantics. It inherits nearly all advantages of Formal Semantics, and also provides descriptive contents for objects and events that are as fine-grained as possible, descriptive contents which represent the results of human cognition.
Category: Artificial Intelligence

[351] viXra:1709.0096 [pdf] submitted on 2017-09-08 13:34:21

Robots Understand Brain Function

Authors: George Rajna
Comments: 48 Pages.

A challenging brain technique called whole-cell patch clamp electrophysiology or whole-cell recording (WCR) is a procedure so delicate and complex that only a handful of humans in the whole world can do it. [28] ComText allows robots to understand contextual commands such as, " Pick up the box I put down. " [27] McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18]
Category: Artificial Intelligence

[350] viXra:1709.0068 [pdf] submitted on 2017-09-06 07:16:28

Identification of Individuals

Authors: George Rajna
Comments: 44 Pages.

Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16]
Category: Artificial Intelligence

[349] viXra:1709.0048 [pdf] submitted on 2017-09-05 04:26:06

On the Dual Nature of Logical Variables and Clause-Sets

Authors: Elnaserledinellah Mahmood Abdelwahab
Comments: © 2016 Journal Academica Foundation. All rights reserved. With perpetual, non-exclusive license for viXra.org - Originally received 08-04-2016 - accepted 09-12-2016 - published 09-15-2016 J.Acad.(N.Y.)6,3:202-239 (38 pages) - ISSN 2161-3338

This paper describes the conceptual approach behind the proposed solution of the 3SAT problem recently published in [Abdelwahab 2016]. It is intended for interested readers, providing a step-by-step, mostly informal explanation of the new paradigm proposed there and completing the picture from an epistemological point of view, with the concept of duality on center stage. After a brief introduction discussing the importance of duality in both physics and mathematics, as well as past efforts to solve the P vs. NP problem, a theorem is proven showing that true randomness of input variables is a property of algorithms which has to be given up when discrete, finite domains are considered. This insight has an already known side effect on computation paradigms, namely the ability to de-randomize probabilistic algorithms. The theorem uses a canonical type of de-randomization which reveals dual properties of logical variables and Clause-Sets. A distinction is made between what we call the syntactical Container Expression (CE) and the semantic Pattern Expression (PE). A single-sided approach is presumed to be insufficient to solve any one of the dual problems of efficiently finding an assignment validating a 3CNF Clause-Set and finding a 3CNF representation for a given semantic pattern. The deeply rooted reason, hereafter referred to as The Inefficiency Principle, is conjectured to be the inherent difficulty of translating one expression into the other based on a single-sided perspective. It expresses our inability to perceive and efficiently calculate complementary properties of a logical formula using one view only. It is proposed as an alternative to the commonly accepted P≠NP conjecture. On the other hand, the idea of algorithmically using information deduced from the PE to guide the instantiation of variables in a resolution procedure applied to a CE is, as per [Abdelwahab 2016], able to provide an efficient solution to the 3SAT problem. Finally, linking de-randomization to this positive solution is shown to have various well-established and important consequences for probabilistic complexity classes.
Category: Artificial Intelligence

[348] viXra:1709.0019 [pdf] submitted on 2017-09-02 07:01:40

Understanding Robots

Authors: George Rajna
Comments: 46 Pages.

ComText allows robots to understand contextual commands such as, " Pick up the box I put down. " [27] McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[347] viXra:1709.0007 [pdf] submitted on 2017-09-01 10:31:26

Computing, Cognition and Information Compression

Authors: J Gerard Wolff
Comments: 21 Pages.

This article develops the idea that the storage and processing of information in computers and in brains may often be understood as information compression. The article first reviews what is meant by information and, in particular, what is meant by redundancy, a concept which is fundamental in all methods for information compression. Principles of information compression are described. The major part of the article describes how these principles may be seen in a range of observations and ideas in computing and cognition: the phenomena of adaptation and inhibition in nervous systems; 'neural' computing; the creation and recognition of 'objects' and 'classes' in perception and cognition; stereoscopic vision and random-dot stereograms; the organisation of natural languages; the organisation of grammars; the organisation of functional, structured, logic and object-oriented computer programs; the application and de-referencing of identifiers in computing; retrieval of information from databases; access and retrieval of information from computer memory; logical deduction and resolution theorem proving; inductive reasoning and probabilistic inference; parsing; normalisation of databases.
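The role of redundancy can be illustrated with a stock general-purpose compressor: data with repeated structure compresses far more than data without it. The strings and the use of Python's zlib module below are illustrative choices and are not taken from the article.

    import os
    import zlib

    redundant = b"the cat sat on the mat " * 50      # highly repetitive byte string
    random_like = os.urandom(len(redundant))          # same length, no usable redundancy

    for label, data in [("redundant", redundant), ("random", random_like)]:
        # Compressed size relative to original size: much smaller for redundant data,
        # roughly unchanged (or slightly larger, due to header overhead) for random data.
        ratio = len(zlib.compress(data)) / len(data)
        print(f"{label}: {len(data)} bytes, compressed ratio {ratio:.2f}")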
Category: Artificial Intelligence

[346] viXra:1709.0004 [pdf] submitted on 2017-09-01 06:49:15

Simple Chess Puzzle

Authors: George Rajna
Comments: 26 Pages.

Researchers at the University of St Andrews have thrown down the gauntlet to computer programmers to find a solution to a "simple" chess puzzle which could, in fact, take thousands of years to solve and net a $1m prize. [11] It appears that we are approaching a unique time in the history of man and science where empirical measures and deductive reasoning can actually inform us spiritually. Integrated Information Theory (IIT)-put forth by neuroscientists Giulio Tononi and Christof Koch-is a new framework that describes a way to experimentally measure the extent to which a system is conscious. [10] There is also connection between statistical physics and evolutionary biology, since the arrow of time is working in the biological evolution also. From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. [8] This paper contains the review of quantum entanglement investigations in living systems, and in the quantum mechanically modeled photoactive prebiotic kernel systems. [7] The human body is a constant flux of thousands of chemical/biological interactions and processes connecting molecules, cells, organs, and fluids, throughout the brain, body, and nervous system. Up until recently it was thought that all these interactions operated in a linear sequence, passing on information much like a runner passing the baton to the next runner. However, the latest findings in quantum biology and biophysics have discovered that there is in fact a tremendous degree of coherence within all living systems. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to understand the Quantum Biology.
Category: Artificial Intelligence

[345] viXra:1708.0482 [pdf] submitted on 2017-08-31 14:55:22

AI Analyzes Gravitational Lenses

Authors: George Rajna
Comments: 25 Pages.

Researchers from the Department of Energy's SLAC National Accelerator Laboratory and Stanford University have for the first time shown that neural networks - a form of artificial intelligence - can accurately analyze the complex distortions in spacetime known as gravitational lenses 10 million times faster than traditional methods. [16] By listening to the acoustic signal emitted by a laboratory-created earthquake, a computer science approach using machine learning can predict the time remaining before the fault fails. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that - surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]
Category: Artificial Intelligence

[344] viXra:1708.0471 [pdf] submitted on 2017-08-30 12:48:09

Earthquake Machine Learning

Authors: George Rajna
Comments: 23 Pages.

By listening to the acoustic signal emitted by a laboratory-created earthquake, a computer science approach using machine learning can predict the time remaining before the fault fails. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]
Category: Artificial Intelligence

[343] viXra:1708.0414 [pdf] submitted on 2017-08-28 08:55:08

Artificial Intelligence Cyber Attacks

Authors: George Rajna
Comments: 24 Pages.

The next major cyberattack could involve artificial intelligence systems. [13] Steve was a security robot employed by the Washington Harbour center in the Georgetown district of the US capital. [12] Combining the intuition of humans with the impartiality of computers could improve decision-making for organizations, eventually leading to lower costs and better profits, according to a team of researchers. [11] A team researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[342] viXra:1708.0381 [pdf] submitted on 2017-08-27 07:40:31

Security Robots

Authors: George Rajna
Comments: 22 Pages.

Combining the intuition of humans with the impartiality of computers could improve decision-making for organizations, eventually leading to lower costs and better profits, according to a team of researchers. [11] A team researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[341] viXra:1708.0341 [pdf] submitted on 2017-08-24 22:13:50

Routing Games Over Time with Fifo Policy

Authors: Anisse Ismaili
Comments: 16 Pages. Submission to conference WINE 2017 on August 2nd.

We study atomic routing games where every agent travels both along its decided edges and through time. The agents arriving on an edge are first lined up in a first-in-first-out queue and may wait: an edge is associated with a capacity, which defines how many agents per time step can pop from the queue's head and enter the edge, to transit for a fixed delay. We show that the best-response optimization problem is not approximable, and that deciding the existence of a Nash equilibrium is complete for the second level of the polynomial hierarchy. Then, we drop the rationality assumption, introduce a behavioral concept based on GPS navigation, and study its worst-case efficiency ratio to coordination.
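As a reading aid, the following is a minimal discrete-time simulation of a single edge under the FIFO policy described above: at most "capacity" agents pop from the head of the queue per time step and then transit for a fixed "delay". The function name, the toy arrival schedule and the Python framing are assumptions for illustration, not part of the paper.

    from collections import deque

    def simulate_edge(arrivals, capacity, delay, horizon):
        """Single-edge FIFO dynamics: arrivals[t] agents join the queue at step t;
        at most `capacity` agents pop from the head per step and exit `delay` steps later.
        Returns a dict mapping agent id -> exit time."""
        queue = deque()
        in_transit = []          # (exit_time, agent_id)
        exit_times = {}
        next_id = 0
        for t in range(horizon):
            # new agents line up at the tail of the queue
            for _ in range(arrivals.get(t, 0)):
                queue.append(next_id)
                next_id += 1
            # up to `capacity` agents leave the queue head and enter the edge this step
            for _ in range(min(capacity, len(queue))):
                agent = queue.popleft()
                in_transit.append((t + delay, agent))
            # record agents whose fixed transit delay has elapsed
            for exit_time, agent in [x for x in in_transit if x[0] == t]:
                exit_times[agent] = exit_time
            in_transit = [x for x in in_transit if x[0] != t]
        return exit_times

    # Toy instance: 5 agents arrive at t=0 on an edge with capacity 2 and delay 3.
    print(simulate_edge({0: 5}, capacity=2, delay=3, horizon=12))

Running it on five simultaneous arrivals with capacity 2 and delay 3 shows the queueing effect: the last agent leaves the edge two steps after the first pair.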
Category: Artificial Intelligence

[340] viXra:1708.0331 [pdf] submitted on 2017-08-24 13:23:16

Computers Improve Decision Making

Authors: George Rajna
Comments: 20 Pages.

Combining the intuition of humans with the impartiality of computers could improve decision-making for organizations, eventually leading to lower costs and better profits, according to a team of researchers. [11] A team researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[339] viXra:1708.0246 [pdf] submitted on 2017-08-21 10:02:18

AI that can Understand Us

Authors: George Rajna
Comments: 47 Pages.

Computing pioneer Alan Turing's most pertinent thoughts on machine intelligence come from a neglected paragraph of the same paper that first proposed his famous test for whether a computer could be considered as smart as a human. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18]
Category: Artificial Intelligence

[338] viXra:1708.0239 [pdf] submitted on 2017-08-20 09:31:39

Artificial Intelligence Revolution

Authors: George Rajna
Comments: 45 Pages.

Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[337] viXra:1708.0238 [pdf] submitted on 2017-08-19 14:27:54

Machine-Learning Device

Authors: George Rajna
Comments: 24 Pages.

In what could be a small step for science potentially leading to a breakthrough, an engineer at Washington University in St. Louis has taken steps toward using nanocrystal networks for artificial intelligence applications. [16] Physicists have applied the ability of machine learning algorithms to learn from experience to one of the biggest challenges currently facing quantum computing: quantum error correction, which is used to design noise-tolerant quantum computing protocols. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10]
Category: Artificial Intelligence

[336] viXra:1708.0176 [pdf] submitted on 2017-08-16 01:32:34

Machine Learning Quantum Error Correction

Authors: George Rajna
Comments: 23 Pages.

Physicists have applied the ability of machine learning algorithms to learn from experience to one of the biggest challenges currently facing quantum computing: quantum error correction, which is used to design noise-tolerant quantum computing protocols. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that - surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch - the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." [8]
Category: Artificial Intelligence

[335] viXra:1708.0167 [pdf] submitted on 2017-08-15 06:17:08

Organismic Learning

Authors: George Rajna
Comments: 24 Pages.

A new computing technology called "organismoids" mimics some aspects of human thought by learning how to forget unimportant memories while retaining more vital ones. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]
Category: Artificial Intelligence

[334] viXra:1708.0131 [pdf] submitted on 2017-08-11 13:16:12

Adaptive Plant Propagation Algorithm for Solving Economic Load Dispatch Problem

Authors: Sayan Nag
Comments: 11 Pages.

Optimization problems in design engineering are complex by nature, often because of the involvement of critical objective functions accompanied by a number of rigid constraints associated with the products involved. One such problem is the Economic Load Dispatch (ED) problem, which focuses on minimizing the fuel cost while satisfying a set of system constraints. Classical optimization algorithms are insufficient and inefficient for the ED problem, which involves highly nonlinear, non-convex functions in both the objective and the constraints. This has led to the development of metaheuristic optimization approaches, which can solve the ED problem with reasonable efficiency. This paper presents a novel, robust, plant-intelligence-based Adaptive Plant Propagation Algorithm (APPA), which is used to solve the classical ED problem. The application of the proposed method to 3-generator and 6-generator systems shows the efficiency and robustness of the proposed algorithm. A comparative study with a state-of-the-art algorithm (APSO) demonstrates the solution quality and convergence characteristics of the proposed approach.
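For orientation, the sketch below runs a generic (non-adaptive) plant propagation loop on a toy three-generator dispatch with quadratic fuel costs and a power-balance penalty. The cost coefficients, limits, demand, penalty weight and parameter settings are invented for illustration, and the loop follows the basic plant propagation scheme (fitter plants send more, shorter runners) rather than the adaptive variant proposed in the paper.

    import random

    # Toy quadratic fuel-cost data for a 3-generator dispatch (a*P^2 + b*P + c);
    # coefficients, limits, demand and penalty weight are invented for illustration.
    A = [0.008, 0.009, 0.007]
    B = [7.0, 6.3, 6.8]
    C = [200.0, 180.0, 140.0]
    PMIN = [10.0, 10.0, 10.0]
    PMAX = [85.0, 80.0, 70.0]
    DEMAND = 150.0
    PENALTY = 1000.0

    def cost(p):
        fuel = sum(a * x * x + b * x + c for a, b, c, x in zip(A, B, C, p))
        return fuel + PENALTY * abs(sum(p) - DEMAND)   # penalize power-balance violation

    def clip(p):
        return [min(max(x, lo), hi) for x, lo, hi in zip(p, PMIN, PMAX)]

    def ppa(pop_size=20, generations=200, max_runners=5):
        """Basic plant propagation: fitter plants send more, shorter runners."""
        pop = [clip([random.uniform(lo, hi) for lo, hi in zip(PMIN, PMAX)])
               for _ in range(pop_size)]
        for _ in range(generations):
            costs = [cost(p) for p in pop]
            worst, best = max(costs), min(costs)
            offspring = []
            for p, c in zip(pop, costs):
                fit = 0.5 if worst == best else (worst - c) / (worst - best)  # 1 = best
                n_runners = max(1, int(round(fit * max_runners)))
                step = 1.0 - fit                       # poor plants explore more widely
                for _ in range(n_runners):
                    child = [x + step * random.uniform(-1, 1) * (hi - lo)
                             for x, lo, hi in zip(p, PMIN, PMAX)]
                    offspring.append(clip(child))
            pop = sorted(pop + offspring, key=cost)[:pop_size]   # survivor selection
        winner = min(pop, key=cost)
        return winner, cost(winner)

    print(ppa())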
Category: Artificial Intelligence

[333] viXra:1708.0065 [pdf] submitted on 2017-08-06 17:11:22

Meta Mass Function

Authors: Yong Deng
Comments: 11 Pages.

In this paper, a meta mass function (MMF) is presented, and a new evidence theory based on complex numbers is developed. Unlike existing evidence theory, the mass function in this complex evidence theory is modelled with complex numbers and is named the meta mass function. Classical evidence theory is recovered as the special case in which the complex-valued mass function degenerates to a real-valued one.
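Since the abstract gives no combination or normalization rule, the sketch below only shows a possible representation: a mass function over focal subsets of a frame with complex-valued masses, together with a check that it degenerates to a classical Dempster-Shafer mass function when every imaginary part vanishes. The frame and the numbers are invented for illustration.

    # Frame of discernment and a complex-valued mass assignment over its focal sets.
    # Frame, focal sets and numbers are assumptions; the abstract specifies no
    # normalization or combination rule for the complex-valued case.
    frame = frozenset({"a", "b"})
    meta_mass = {
        frozenset({"a"}): 0.6 + 0.2j,
        frozenset({"b"}): 0.3 - 0.2j,
        frame:            0.1 + 0.0j,
    }

    def is_classical(m, tol=1e-12):
        """A meta mass function reduces to a classical DS mass function when every
        mass is (numerically) real, non-negative, and the masses sum to 1."""
        if any(abs(v.imag) > tol for v in m.values()):
            return False
        total = sum(v.real for v in m.values())
        return all(v.real >= -tol for v in m.values()) and abs(total - 1.0) < tol

    print(is_classical(meta_mass))                    # False: imaginary parts present
    real_mass = {k: complex(v.real, 0.0) for k, v in meta_mass.items()}
    print(is_classical(real_mass))                    # True: degenerates to the classical case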
Category: Artificial Intelligence

[332] viXra:1708.0038 [pdf] submitted on 2017-08-04 04:30:39

Holistic Unique Clustering. {File Closing Version 4} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research Technical Note the author has presented a novel method to find all Possible Clusters given a set of M points in N Space.
Category: Artificial Intelligence

[331] viXra:1708.0030 [pdf] submitted on 2017-08-03 10:30:43

Machine Learning for Discovery

Authors: George Rajna
Comments: 22 Pages.

Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch-the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." [8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. 
Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. [6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.
Category: Artificial Intelligence

[330] viXra:1708.0029 [pdf] submitted on 2017-08-03 10:54:39

Future Search Engines

Authors: George Rajna
Comments: 25 Pages.

The outcome is the result of two powerful forces in the evolution of information retrieval: artificial intelligence—especially natural language processing—and crowdsourcing. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]
Category: Artificial Intelligence

[329] viXra:1708.0025 [pdf] submitted on 2017-08-02 23:22:10

Similarity Measure Of Any Two Vectors Of Same Size

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method of finding a Generalized Similarity Measure between two Vectors of the same size.
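The note's specific measure is not reproduced here; as a common baseline, cosine similarity is the usual normalized similarity between two vectors of the same size. A minimal version, with illustrative inputs:

    import math

    def cosine_similarity(u, v):
        """Cosine similarity between two equal-length vectors: <u, v> / (||u|| * ||v||)."""
        if len(u) != len(v):
            raise ValueError("vectors must have the same size")
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))   # about 1.0 (parallel vectors)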
Category: Artificial Intelligence

[328] viXra:1708.0019 [pdf] submitted on 2017-08-03 06:42:09

Holistic Unique Clustering. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method to find all Possible Clusters given a set of M points in N Space.
Category: Artificial Intelligence

[327] viXra:1708.0010 [pdf] submitted on 2017-08-02 04:36:45

A Generalized Similarity Measure {File Closing Version 3} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method of finding a Generalized Similarity Measure between two Vectors or Matrices or Higher Dimensional Data of different sizes.
Category: Artificial Intelligence

[326] viXra:1707.0394 [pdf] submitted on 2017-07-30 02:17:51

The Recursive Future Equation And The Recursive Past Equation Based On The Ananda-Damayanthi Normalized Similarity Measure. {File Closing Version-2}

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research Technical Note the author has presented a Recursive Future Equation and a Recursive Past Equation to find a one Step Future Element or a one Step Past Element of a given Time Series data Set.
Category: Artificial Intelligence

[325] viXra:1707.0389 [pdf] submitted on 2017-07-29 07:23:01

Machine Learning and Deep Learning

Authors: George Rajna
Comments: 27 Pages.

Deep learning and machine learning both offer ways to train models and classify data. This article compares the two and it offers ways to help you decide which one to use. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. 
The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[324] viXra:1707.0372 [pdf] submitted on 2017-07-28 06:35:21

The Recursive Future Equation And The Recursive Past Equation Based On The Ananda-Damayanthi Normalized Similarity Measure. {Future}

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a Recursive Future Equation and a Recursive Past Equation to find a one Step Future Element or a one Step Past Element of a given Time Series data Set.
Category: Artificial Intelligence

[323] viXra:1707.0268 [pdf] submitted on 2017-07-20 02:20:32

Finding The Optimal Number ‘K’ In The K-Means Algorithm

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method to find the Optimal Number ‘K’ in the K-Means Algorithm.
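The note's criterion is not reproduced here; for comparison, the standard elbow heuristic chooses K where the within-cluster sum of squares stops dropping sharply. The sketch below applies it with scikit-learn to synthetic data; the blob centers, the K range and the crude second-difference elbow detector are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    centers = [(0.0, 0.0), (5.0, 0.0), (2.5, 4.3)]    # synthetic, well-separated blobs
    X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in centers])

    ks = list(range(1, 9))
    inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]

    # Crude elbow detection: pick the K where the inertia curve bends most sharply,
    # i.e. where the second difference of the curve is largest.
    best_k = ks[int(np.argmax(np.diff(inertias, 2))) + 1]
    print("chosen K:", best_k)

On clearly separated blobs like these the heuristic typically recovers the true number of clusters; on real data it is only a guide, which is presumably why alternative criteria such as the one in this note are proposed.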
Category: Artificial Intelligence

[322] viXra:1707.0255 [pdf] submitted on 2017-07-19 05:05:13

Humanize Artificial Intelligence

Authors: George Rajna
Comments: 41 Pages.

Google recently launched PAIR, an acronym of People + AI Research, in an attempt to increase the utility of AI and improve human to AI interaction. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15]
Category: Artificial Intelligence

[321] viXra:1707.0254 [pdf] submitted on 2017-07-19 06:01:57

Using the Appropriate Norm In The K-Nearest Neighbours Analysis. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research Technical Note, the author has detailed a novel technique of finding the distance metric to be used for any given set of points.
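The note's selection rule is not given in this listing; the sketch below simply makes the norm a parameter of a small k-nearest-neighbours classifier and compares the Minkowski p = 1, p = 2 and Chebyshev (max) norms by leave-one-out accuracy on toy data. All names, data and parameter choices are illustrative assumptions.

    import numpy as np

    def minkowski(u, v, p):
        """Minkowski distance; p = np.inf gives the Chebyshev (max) norm."""
        d = np.abs(u - v)
        return d.max() if np.isinf(p) else (d ** p).sum() ** (1.0 / p)

    def knn_predict(X, y, x, k, p):
        dists = np.array([minkowski(x, xi, p) for xi in X])
        nearest = np.argsort(dists)[:k]
        return np.bincount(y[nearest]).argmax()       # majority vote among the k neighbours

    def loo_accuracy(X, y, k, p):
        """Leave-one-out accuracy of k-NN under a given norm."""
        hits = 0
        for i in range(len(X)):
            mask = np.arange(len(X)) != i
            hits += knn_predict(X[mask], y[mask], X[i], k, p) == y[i]
        return hits / len(X)

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])  # toy two-class data
    y = np.array([0] * 30 + [1] * 30)

    for p in (1, 2, np.inf):
        print(f"p={p}: leave-one-out accuracy {loo_accuracy(X, y, k=5, p=p):.2f}")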
Category: Artificial Intelligence

[320] viXra:1707.0252 [pdf] submitted on 2017-07-19 06:40:54

A Generalized Similarity Measure {File Closing Version 2} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method of finding a Generalized Similarity Measure between two Vectors or Matrices or Higher Dimensional Data of different sizes.
Category: Artificial Intelligence

[319] viXra:1707.0230 [pdf] submitted on 2017-07-17 05:49:20

A Generalized Similarity Measure ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method of finding a Generalized Similarity Measure between two Vectors or Matrices or Higher Dimensional Data of different sizes.
Category: Artificial Intelligence

[318] viXra:1707.0225 [pdf] submitted on 2017-07-17 01:50:21

Multi Class Classification Using Holistic Non-Unique Clustering {File Closing Version 8}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research Technical Note the author has presented a novel method to find all Possible Clusters given a set of M points in N Space.
Category: Artificial Intelligence

[317] viXra:1707.0200 [pdf] submitted on 2017-07-14 04:55:42

Multi Class Classification Using Holistic Non-Unique Clustering ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research technical Note the author has presented a novel method to find all Possible Clusters given a set of M points in N Space.
Category: Artificial Intelligence

[316] viXra:1707.0198 [pdf] submitted on 2017-07-14 05:30:10

Multi Class Classification Using Holistic Non-Unique Clustering. {File Closing Version 7} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research technical Note the author has presented a novel method to find all Possible Clusters given a set of M points in N Space.
Category: Artificial Intelligence

[315] viXra:1707.0179 [pdf] submitted on 2017-07-13 01:20:46

Modification To The Scaling Aspect In Gower’s Scheme Of Calculating Similarity Coefficient

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research technical Note the author has presented a small modification to the Numeric Variables Scaling Aspect in Gower's Scheme of calculating the Similarity Coefficient.
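For reference, the standard (unmodified) Gower scheme scales each numeric variable's contribution as 1 - |a - b| / range and scores categorical variables 1 on an exact match; the note's modification to the scaling step is not reproduced here. The records and ranges below are illustrative.

    def gower_similarity(a, b, numeric_ranges):
        """Standard Gower similarity for mixed records (not the note's modified scaling).
        a, b: dicts of variable -> value; numeric_ranges: variable -> (min, max) over the data.
        Numeric variables score 1 - |a - b| / range; others score 1 on exact match, else 0."""
        scores = []
        for var in a:
            if var in numeric_ranges:
                lo, hi = numeric_ranges[var]
                rng = hi - lo
                scores.append(1.0 if rng == 0 else 1.0 - abs(a[var] - b[var]) / rng)
            else:
                scores.append(1.0 if a[var] == b[var] else 0.0)
        return sum(scores) / len(scores)

    # Illustrative records and ranges (assumed, not taken from the note).
    ranges = {"age": (18, 90), "income": (10_000, 200_000)}
    r1 = {"age": 30, "income": 50_000, "city": "Pune"}
    r2 = {"age": 45, "income": 60_000, "city": "Pune"}
    print(round(gower_similarity(r1, r2, ranges), 3))   # -> 0.913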
Category: Artificial Intelligence

[314] viXra:1707.0178 [pdf] submitted on 2017-07-13 02:34:27

Recursive Future Average Of A Time Series Data Based On Cosine Similarity

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a Recursive Future Average of Time Series Data based on Cosine Similarity.
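The note's exact recursion is not given in this listing. One plausible reading, shown purely as an assumption, forecasts the next value as an average of the values that followed earlier windows, weighted by each window's cosine similarity to the most recent window:

    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return 0.0 if nu == 0 or nv == 0 else dot / (nu * nv)

    def similarity_weighted_forecast(series, window=3):
        """Forecast the next value as the average of historical 'next values', weighted
        by the cosine similarity of each historical window to the most recent window.
        This is one plausible reading of the note, not its stated equation."""
        recent = series[-window:]
        weights, nexts = [], []
        for i in range(len(series) - window):
            past_window = series[i:i + window]
            weights.append(max(cosine(past_window, recent), 0.0))
            nexts.append(series[i + window])
        total = sum(weights)
        if total == 0:
            return sum(series) / len(series)          # fall back to the plain mean
        return sum(w * x for w, x in zip(weights, nexts)) / total

    print(similarity_weighted_forecast([1, 2, 3, 4, 5, 6, 7, 8], window=3))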
Category: Artificial Intelligence

[313] viXra:1707.0166 [pdf] submitted on 2017-07-12 01:12:07

Theoretical Materials

Authors: George Rajna
Comments: 49 Pages.

University have created the first general-purpose method for using machine learning to predict the properties of new metals, ceramics and other crystalline materials and to find new uses for existing materials, a discovery that could save countless hours wasted in the trial-and-error process of creating new and better materials. [28] As machine learning breakthroughs abound, researchers look to democratize benefits. [27] Machine-learning system spontaneously reproduces aspects of human neurology. [26] Surviving breast cancer changed the course of Regina Barzilay's research. The experience showed her, in stark relief, that oncologists and their patients lack tools for data-driven decision making. [25] New research, led by the University of Southampton, has demonstrated that a nanoscale device, called a memristor, could be used to power artificial systems that can mimic the human brain. [24] Scientists at Helmholtz-Zentrum Dresden-Rossendorf conducted electricity through DNA-based nanowires by placing gold-plated nanoparticles on them. In this way it could become possible to develop circuits based on genetic material. [23] Researchers at the Nanoscale Transport Physics Laboratory from the School of Physics at the University of the Witwatersrand have found a technique to improve carbon superlattices for quantum electronic device applications. [22] The researchers have found that these previously underestimated interactions can play a significant role in preventing heat dissipation in microelectronic devices. [21] LCLS works like an extraordinary strobe light: Its ultrabright X-rays take snapshots of materials with atomic resolution and capture motions as fast as a few femtoseconds, or millionths of a billionth of a second. For comparison, one femtosecond is to a second what seven minutes is to the age of the universe. [20] A 'nonlinear' effect that seemingly turns materials transparent is seen for the first time in X-rays at SLAC's LCLS. [19]
Category: Artificial Intelligence

[312] viXra:1707.0165 [pdf] submitted on 2017-07-12 01:25:24

Multi Class Classification Using Holistic Non-Unique Clustering

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research technical note, the author has presented a novel method to find all possible clusters given a set of M points in N-space.
Category: Artificial Intelligence

[311] viXra:1707.0145 [pdf] submitted on 2017-07-11 02:29:17

A Novel Type Of Time Series Type Forecasting

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel time-series type of forecasting.
Category: Artificial Intelligence

[310] viXra:1707.0142 [pdf] submitted on 2017-07-11 04:48:06

A Novel Type Of Time Series Type Forecasting. {File Closing Version 1}

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel time-series type of forecasting.
Category: Artificial Intelligence

[309] viXra:1707.0102 [pdf] submitted on 2017-07-07 01:23:03

Holistic Non-Unique Clustering. {File Closing Version 1} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research technical note, the author has presented a novel method to find all possible clusters given a set of points in N-space.
Category: Artificial Intelligence

[308] viXra:1707.0098 [pdf] submitted on 2017-07-07 01:44:57

Holistic Non-Unique Clustering. {File Closing Version 2} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research technical note, the author has presented a novel method to find all possible clusters given a set of M points in N-space.
Category: Artificial Intelligence

[307] viXra:1707.0071 [pdf] submitted on 2017-07-05 08:51:43

Seeing All The Clusters

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this technical note, the author has presented a novel method to find all the clusters (overlapping and non-unique) formed by a given set of points (an illustrative sketch follows this entry).
Category: Artificial Intelligence
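
The note above does not spell out its method here. As one simple way to see how a point set can give rise to overlapping, non-unique clusters, the Python sketch below takes, for each point, the set of all points within a fixed radius; the radius and the example points are illustrative assumptions, not the author's construction.

import math

def all_radius_clusters(points, radius):
    """For each point, collect every point within `radius` of it.
    The resulting clusters may overlap and need not be unique."""
    clusters = []
    for p in points:
        cluster = frozenset(q for q in points if math.dist(p, q) <= radius)
        clusters.append(cluster)
    return clusters

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]   # hypothetical points
for c in all_radius_clusters(pts, radius=1.5):
    print(sorted(c))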

[306] viXra:1707.0070 [pdf] submitted on 2017-07-05 08:58:23

Seeing All Clusters Formed By A Given Set Of Points (File Closing Version) ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research investigation, the author has presented a novel technique to find all clusters that may overlap to some extent.
Category: Artificial Intelligence

[305] viXra:1707.0061 [pdf] submitted on 2017-07-05 06:54:24

Holistic Non-Unique Clustering. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this technical note, the author has presented a novel scheme of Holistic Non-Unique Clustering.
Category: Artificial Intelligence

[304] viXra:1707.0043 [pdf] submitted on 2017-07-03 22:47:02

Using the Appropriate Norm In The K-Nearest Neighbours Analysis

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this technical note, the author has presented an alternative to the use of the L2 norm for nearness analysis in the k-nearest neighbours algorithm (an illustrative sketch follows this entry).
Category: Artificial Intelligence
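
The note above does not state here which norm replaces L2. The Python sketch below shows k-nearest-neighbours classification with a pluggable Minkowski (Lp) norm, so the usual L2 (Euclidean) nearness can be swapped for, e.g., L1; the training data and the choice of L1 are illustrative assumptions, not the author's proposal.

from collections import Counter

def minkowski_distance(u, v, p=2):
    """Lp distance between two equal-length vectors (p=2 gives the L2 norm)."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

def knn_predict(train, query, k=3, p=2):
    """train: list of (vector, label) pairs; returns the majority label
    among the k nearest training points under the Lp norm."""
    neighbours = sorted(train, key=lambda s: minkowski_distance(s[0], query, p))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Hypothetical 2-D training data.
train = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"), ((1.0, 1.0), "B"), ((0.9, 1.2), "B")]
print(knn_predict(train, (0.1, 0.2), k=3, p=2))   # usual L2 nearness
print(knn_predict(train, (0.1, 0.2), k=3, p=1))   # an alternative: L1 nearness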

[303] viXra:1707.0002 [pdf] submitted on 2017-07-01 04:24:01

Inner Workings of Neural Networks

Authors: George Rajna
Comments: 33 Pages.

Neural networks learn to perform computational tasks by analyzing large sets of training data. But once they've been trained, even their designers rarely have any idea what data elements they're processing. [20] Researchers from Disney Research, Pixar Animation Studios, and the University of California, Santa Barbara have developed a new technology based on artificial intelligence (AI) and deep learning that eliminates this noise and thereby enables production-quality rendering at much faster speeds. [19] Now, one group reports in ACS Nano that they have developed an artificial synapse capable of simulating a fundamental function of our nervous system— the release of inhibitory and stimulatory signals from the same "pre-synaptic" terminal. [18] Researchers from France and the University of Arkansas have created an artificial synapse capable of autonomous learning, a component of artificial intelligence. [17] Intelligent machines of the future will help restore memory, mind your children, fetch your coffee and even care for aging parents. [16] Unlike experimental neuroscientists who deal with real-life neurons, computational neuroscientists use model simulations to investigate how the brain functions. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip
Category: Artificial Intelligence

[302] viXra:1706.0570 [pdf] submitted on 2017-06-30 12:07:02

Convolutional Neural Network

Authors: George Rajna
Comments: 31 Pages.

Researchers from Disney Research, Pixar Animation Studios, and the University of California, Santa Barbara have developed a new technology based on artificial intelligence (AI) and deep learning that eliminates this noise and thereby enables production-quality rendering at much faster speeds. [19] Now, one group reports in ACS Nano that they have developed an artificial synapse capable of simulating a fundamental function of our nervous system— the release of inhibitory and stimulatory signals from the same "pre-synaptic" terminal. [18] Researchers from France and the University of Arkansas have created an artificial synapse capable of autonomous learning, a component of artificial intelligence. [17] Intelligent machines of the future will help restore memory, mind your children, fetch your coffee and even care for aging parents. [16] Unlike experimental neuroscientists who deal with real-life neurons, computational neuroscientists use model simulations to investigate how the brain functions. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11]
Category: Artificial Intelligence

[301] viXra:1706.0523 [pdf] submitted on 2017-06-28 09:17:30

Artificial Synapse for AI

Authors: George Rajna
Comments: 30 Pages.

Now, one group reports in ACS Nano that they have developed an artificial synapse capable of simulating a fundamental function of our nervous system— the release of inhibitory and stimulatory signals from the same "pre-synaptic" terminal. [18] Researchers from France and the University of Arkansas have created an artificial synapse capable of autonomous learning, a component of artificial intelligence. [17] Intelligent machines of the future will help restore memory, mind your children, fetch your coffee and even care for aging parents. [16] Unlike experimental neuroscientists who deal with real-life neurons, computational neuroscientists use model simulations to investigate how the brain functions. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain.
Category: Artificial Intelligence

[300] viXra:1706.0469 [pdf] submitted on 2017-06-25 08:35:27

Quantum Machine Learning Computer Hybrids

Authors: George Rajna
Comments: 28 Pages.

Creative Destruction Lab, a technology program affiliated with the University of Toronto's Rotman School of Management in Toronto, Canada hopes to nurture numerous quantum learning machine start-ups in only a few years. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. 
The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[299] viXra:1706.0468 [pdf] submitted on 2017-06-25 10:31:28

Weak AI, Strong AI and Superintelligence

Authors: George Rajna
Comments: 29 Pages.

Should we fear artificial intelligence and all it will bring us? Not so long as we remember to make sure to build artificial emotional intelligence into the technology, according to the website The School of Life. [16] Creative Destruction Lab, a technology program affiliated with the University of Toronto’s Rotman School of Management in Toronto, Canada hopes to nurture numerous quantum learning machine start-ups in only a few years. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of “quantum artificial intelligence”. Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries - how a sliced up flatworm can regenerate into new organisms - has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. 
The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[298] viXra:1706.0462 [pdf] submitted on 2017-06-25 02:34:26

Brain-Inspired Supercomputing

Authors: George Rajna
Comments: 48 Pages.

IBM and the Air Force Research Laboratory are working to develop an artificial intelligence-based supercomputer with a neural network design that is inspired by the human brain. [28] Researchers have built a new type of "neuron transistor"—a transistor that behaves like a neuron in a living brain. [27] Research team led by Professor Hoi-Jun Yoo of the Department of Electrical Engineering has developed a semiconductor chip, CNNP (CNN Processor), that runs AI algorithms with ultra-low power, and K-Eye, a face recognition system using CNNP. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18]
Category: Artificial Intelligence

[297] viXra:1706.0433 [pdf] submitted on 2017-06-23 06:57:24

AI and Robots can Help Patients

Authors: George Rajna
Comments: 45 Pages.

McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[296] viXra:1706.0402 [pdf] submitted on 2017-06-20 10:02:53

Neuron Transistor

Authors: George Rajna
Comments: 45 Pages.

Researchers have built a new type of "neuron transistor"—a transistor that behaves like a neuron in a living brain. [27] Research team led by Professor Hoi-Jun Yoo of the Department of Electrical Engineering has developed a semiconductor chip, CNNP (CNN Processor), that runs AI algorithms with ultra-low power, and K-Eye, a face recognition system using CNNP. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[295] viXra:1706.0389 [pdf] submitted on 2017-06-19 04:15:18

Artificial Intelligence Health Revolution

Authors: George Rajna
Comments: 43 Pages.

Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15]
Category: Artificial Intelligence

[294] viXra:1706.0387 [pdf] submitted on 2017-06-19 04:54:30

K-Eye Face Recognition System

Authors: George Rajna
Comments: 45 Pages.

A research team led by Professor Hoi-Jun Yoo of the Department of Electrical Engineering has developed a semiconductor chip, CNNP (CNN Processor), that runs AI algorithms with ultra-low power, and K-Eye, a face recognition system using CNNP. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[293] viXra:1706.0293 [pdf] submitted on 2017-06-16 06:05:08

Computers Reason Like Humans

Authors: George Rajna
Comments: 40 Pages.

Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14]
Category: Artificial Intelligence

[292] viXra:1706.0235 [pdf] submitted on 2017-06-13 02:02:47

Deep Learning with Light

Authors: George Rajna
Comments: 37 Pages.

Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public, team members Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13]
Category: Artificial Intelligence

[291] viXra:1706.0207 [pdf] submitted on 2017-06-13 11:45:22

Neural Networks and Quantum Entanglement

Authors: George Rajna
Comments: 39 Pages.

Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14]
Category: Artificial Intelligence

[290] viXra:1706.0198 [pdf] submitted on 2017-06-14 08:06:29

Robot Write and Play its own Music

Authors: George Rajna
Comments: 38 Pages.

A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology’s impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. 
[10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[289] viXra:1706.0144 [pdf] submitted on 2017-06-11 07:47:04

Classical and Quantum Machine Learning

Authors: George Rajna
Comments: 35 Pages.

Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13]
Category: Artificial Intelligence

[288] viXra:1705.0404 [pdf] submitted on 2017-05-28 12:05:57

Using Student Learning Based on Fluency for the Learning Rate in a Deep Convolutional Neural Network

Authors: Abien Fred Agarap
Comments: 23 Pages.

This is a proposal for mathematically determining the learning rate to be used in a deep supervised convolutional neural network (CNN), based on student fluency. The CNN model shall be tasked to imitate how students play the game “Packet Attack”, a form of gamification of information security awareness training, and learn at the same rate as the students did. The student fluency shall be represented by a mathematical function constructed using natural cubic spline interpolation, and its derivative shall serve as the learning rate for the CNN model (an illustrative sketch follows this entry). If proven right, the results will imply a more human-like rate of learning by machines.
Category: Artificial Intelligence
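
As a rough illustration of the core step of the proposal above, the Python sketch below fits a natural cubic spline to hypothetical student-fluency measurements and uses the (clipped) derivative of the fitted curve as a per-epoch learning rate. The fluency values, clipping bounds and epoch count are assumptions; the paper's actual game data and CNN are not reproduced here.

import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical student-fluency measurements (e.g., score per session).
sessions = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
fluency = np.array([0.10, 0.35, 0.55, 0.70, 0.80, 0.85])

# Natural cubic spline through the fluency curve, as in the proposal.
spline = CubicSpline(sessions, fluency, bc_type="natural")
d_fluency = spline.derivative()

def learning_rate(epoch, lo=1e-4, hi=1e-1):
    """Illustrative schedule: the spline's slope at this epoch, clipped to a
    sensible range (the clipping bounds are an assumption, not the paper's)."""
    t = min(epoch, sessions[-1])          # hold the last slope after the data ends
    return float(np.clip(d_fluency(t), lo, hi))

for epoch in range(8):
    print(epoch, learning_rate(epoch))

# With Keras, one could pass this schedule to the training loop via
# tf.keras.callbacks.LearningRateScheduler(learning_rate).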

[287] viXra:1705.0362 [pdf] submitted on 2017-05-25 03:53:34

Artificial Intelligence by Quantum Computing

Authors: George Rajna
Comments: 34 Pages.

We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12]
Category: Artificial Intelligence

[286] viXra:1705.0340 [pdf] submitted on 2017-05-22 19:18:05

Verifying the Validity of a Conformant Plan is co-NP-Complete

Authors: Alban Grastien, Enrico Scala
Comments: 3 Pages.

The purpose of this document is to show the complexity of verifying the validity of a deterministic conformant plan. We concentrate on a simple version of the conformant planning problem (i.e., one where there is no precondition on the actions and where all conditions are defined as sets of positive or negative facts) in order to show that the complexity does not come from solving a single such formula.
Category: Artificial Intelligence
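
To make the co-NP flavour of the claim above concrete, here is a brute-force Python sketch of plan validation in the simplified setting the abstract describes: actions have no preconditions, their conditional effects are conjunctions of positive or negative facts, and a plan is valid iff the goal holds after execution from every completion of the partially specified initial state. Enumerating those completions is the universally quantified step; the fact/action encoding below is an illustrative assumption, not the authors' formalism.

from itertools import product

def apply_action(state, action):
    """action: list of (condition, effect) pairs; condition and effect are
    dicts fact -> bool. All effects whose conditions hold fire simultaneously."""
    new_state = dict(state)
    for condition, effect in action:
        if all(state.get(f) == v for f, v in condition.items()):
            new_state.update(effect)
    return new_state

def plan_is_valid(facts, known_init, plan, goal):
    """Check the goal in the final state for EVERY completion of the
    partially known initial state (the universal check behind co-NP)."""
    unknown = [f for f in facts if f not in known_init]
    for values in product([False, True], repeat=len(unknown)):
        state = dict(known_init, **dict(zip(unknown, values)))
        for action in plan:
            state = apply_action(state, action)
        if not all(state.get(f) == v for f, v in goal.items()):
            return False
    return True

# Hypothetical toy instance: one action that sets g whenever p holds.
facts = ["p", "g"]
set_g_if_p = [({"p": True}, {"g": True})]
print(plan_is_valid(facts, {}, [set_g_if_p], {"g": True}))           # False: p may be False
print(plan_is_valid(facts, {"p": True}, [set_g_if_p], {"g": True}))  # True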

[285] viXra:1705.0313 [pdf] submitted on 2017-05-21 09:43:28

Rematch of Man vs Machine

Authors: George Rajna
Comments: 32 Pages.

It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". 
[7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[284] viXra:1705.0273 [pdf] submitted on 2017-05-18 10:06:56

Google Latest Tech Tricks

Authors: George Rajna
Comments: 31 Pages.

Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10]
Category: Artificial Intelligence

[283] viXra:1705.0223 [pdf] submitted on 2017-05-15 03:07:04

A Novel Pandemonium Architecture Based on Visual Topological Invariants and Mental Matching Descriptions

Authors: Arturo Tozzi, James F Peters
Comments: 13 Pages.

A novel daemon-based architecture is introduced to elucidate some brain functions, such as pattern recognition during human perception and mental interpretation of visual scenes. By taking into account the concepts of invariance and persistence in topology, we introduce a Selfridge pandemonium variant of brain activity that takes into account a novel feature, namely, extended feature daemons that, in addition to the usual recognition of short straight as well as curved lines, recognize topological features of visual scene shapes, such as shape interior, density and texture. A series of transformations can be gradually applied to a pattern, in particular to the shape of an object, without affecting its invariant properties, such as its boundedness and the connectedness of the parts of a visual scene. We also introduce another Pandemonium implementation: low-level representations of objects can be mapped to higher-level views (our mental interpretations), making it possible to construct a symbolic multidimensional representation of the environment. The representations can be projected continuously to an object that we have seen and continue to see, thanks to the mapping from shapes in our memory to shapes in Euclidean space. A multidimensional vista detectable by the brain (brainscapes) results from the presence of daemons (mind channels) that detect not only ordinary views of the shapes in visual scenes, but also the features of the shapes. Although perceived shapes are 3-dimensional (3+1 dimensional, if we include time), shape features (volume, colour, contour, closeness, texture, and so on) lead to n-dimensional brainscapes. We arrive at 5 as the minimum dimension of the shape feature space, since every visual shape has at least a contour in space-time. We discuss the advantages of our parallel, hierarchical model in pattern recognition, computer vision and the evolution of biological nervous systems.
Category: Artificial Intelligence

[282] viXra:1705.0217 [pdf] submitted on 2017-05-14 04:25:18

Popular Routes Discovery

Authors: Tal Ben Yakar
Comments: 6 Pages.

Finding the optimal driving route has attracted considerable attention in recent years. The problem sounds simple, yet many companies find it very challenging: ride-hailing companies such as Uber and Via, mapping companies such as HERE, navigation companies such as Waze, and public transportation companies such as Moovit, among others. AI robots, in addition, need the ability to plan routes in an optimal manner. In this work we formulate the discovery of popular routes as an optimization problem and present a compact, low-memory and fast solution based on machine learning algorithms.
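
One plausible way to cast the task (a toy sketch only; the abstract does not detail the authors' actual formulation) is to estimate each road segment's popularity from a log of observed trips and then recover the most popular route with a shortest-path search over negative log frequencies. The graph library and the tiny trip log below are assumptions made for the demonstration:

    # Hypothetical sketch: segment popularity is estimated from observed trips,
    # and the most popular route is found by minimizing -log(popularity), so that
    # multiplying segment probabilities corresponds to adding edge costs.
    import math
    from collections import Counter
    import networkx as nx

    # observed trips: each trip is a list of road-segment endpoints (toy data)
    trips = [
        ["A", "B", "C", "D"],
        ["A", "B", "C", "D"],
        ["A", "B", "C", "D"],
        ["A", "E", "D"],
    ]

    # count how often each directed segment appears in the trip log
    segment_counts = Counter()
    for trip in trips:
        for u, v in zip(trip, trip[1:]):
            segment_counts[(u, v)] += 1
    total = sum(segment_counts.values())

    # build a graph whose edge weight is the negative log relative frequency,
    # so the minimum-cost path is the most popular route
    G = nx.DiGraph()
    for (u, v), c in segment_counts.items():
        G.add_edge(u, v, weight=-math.log(c / total))

    print(nx.shortest_path(G, "A", "D", weight="weight"))  # -> ['A', 'B', 'C', 'D']
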
Category: Artificial Intelligence

[281] viXra:1705.0172 [pdf] submitted on 2017-05-10 12:43:05

Democratize Artificial Intelligence

Authors: George Rajna
Comments: 29 Pages.

Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9]
Category: Artificial Intelligence

[280] viXra:1705.0108 [pdf] submitted on 2017-05-05 09:20:09

Incorrect Moves and Testable States

Authors: Dimiter Dobrev
Comments: 17 Pages.

How do we describe the invisible? Let’s take a sequence: input, output, input, output ... Behind this sequence stands a world and the sequence of its internal states. We do not see the internal state of the world, but only a part of it. To describe that part which is invisible, we will use the concept of ‘incorrect move’ and its generalization ‘testable state’. Thus, we will reduce the problem of partial observability to the problem of full observability.
Category: Artificial Intelligence

[279] viXra:1705.0094 [pdf] submitted on 2017-05-04 04:17:51

Rotation Invariance Neural Network

Authors: Shiyuan.Li
Comments: 7 Pages.

Rotation invariance and translation invariance have great value in image recognition. In this paper, we introduce a new convolutional neural network (CNN) architecture that achieves rotation invariance and translation invariance in 2-D symbol recognition. The network can also output the position and orientation of a 2-D symbol, enabling detection of multiple non-overlapping targets. Humans have the ability to look at an object once and remember it; this architecture can likewise be used for such one-shot learning.
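
One common way to obtain this kind of rotation invariance (a minimal sketch under that assumption, not the architecture proposed in the paper) is to run a shared CNN over several rotated copies of the input and pool the features over the rotation dimension; the PyTorch model below illustrates the idea:

    # Sketch: shared CNN applied to rotated copies of the input, max-pooled over
    # rotations so the final representation no longer depends on orientation.
    import torch
    import torch.nn as nn

    class RotationInvariantCNN(nn.Module):
        def __init__(self, num_classes: int = 10, num_rotations: int = 4):
            super().__init__()
            self.num_rotations = num_rotations  # 4 -> 0, 90, 180, 270 degrees
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # global pooling adds translation robustness
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, x):
            # x: (batch, 1, H, W); score every 90-degree rotation with shared weights
            feats = []
            for k in range(self.num_rotations):
                rotated = torch.rot90(x, k, dims=(2, 3))
                feats.append(self.features(rotated).flatten(1))  # (batch, 32)
            # max over the rotation dimension; argmax tells, per feature, which
            # rotation responded most strongly (a crude orientation cue)
            pooled, argmax = torch.stack(feats, dim=1).max(dim=1)
            return self.classifier(pooled), argmax

    model = RotationInvariantCNN()
    logits, orientation = model(torch.randn(8, 1, 28, 28))
    print(logits.shape, orientation.shape)  # torch.Size([8, 10]) torch.Size([8, 32])
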
Category: Artificial Intelligence

[278] viXra:1705.0027 [pdf] submitted on 2017-05-02 21:38:43

Obstacle Detection and Pathfinding for Mobile Robots

Authors: Murat Arslan
Comments: 116 Pages.

In this thesis, obstacle detection from images of objects and the subsequent pathfinding problem for the NAO humanoid robot are considered. NAO's camera is used to capture images of the world map. The captured image is processed and classified into two classes: areas with obstacles and areas without obstacles. A Support Vector Machine (SVM) is used for the classification. After classification, a map of the world divided into areas with and without obstacles is obtained; this map is the input to the pathfinding algorithm. In the thesis, the A* pathfinding algorithm is used to find a path from the start point to the goal. The aim of this work is to implement a support-vector-machine-based solution to the robot guidance problem, visual path planning and obstacle avoidance. The algorithms used allow the robot to detect obstacles and find an optimal path. The thesis describes the basic steps of navigation for mobile robots.
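
A toy version of this pipeline might look as follows (an illustration only, not the thesis code): an SVM labels grid cells as free or obstacle from a simple per-cell feature, and A* then searches the resulting occupancy grid:

    # Toy sketch: SVM-based occupancy classification followed by A* pathfinding.
    import heapq
    import numpy as np
    from sklearn.svm import SVC

    # 1. Train the SVM: bright cells are floor (0), dark cells are obstacles (1)
    X_train = np.array([[0.9], [0.8], [0.85], [0.2], [0.1], [0.15]])
    y_train = np.array([0, 0, 0, 1, 1, 1])
    clf = SVC(kernel="rbf").fit(X_train, y_train)

    # 2. Classify a 5x5 map of per-cell brightness values into an occupancy grid
    brightness = np.full((5, 5), 0.9)
    brightness[1, 1:4] = 0.1          # a dark "wall" of obstacles
    brightness[3, 0:3] = 0.1
    occupancy = clf.predict(brightness.reshape(-1, 1)).reshape(5, 5)

    # 3. A* search over the occupancy grid (4-connected, Manhattan heuristic)
    def astar(grid, start, goal):
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        frontier = [(h(start), 0, start, [start])]
        seen = set()
        while frontier:
            _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                r, c = node[0] + dr, node[1] + dc
                if 0 <= r < grid.shape[0] and 0 <= c < grid.shape[1] and grid[r, c] == 0:
                    heapq.heappush(frontier, (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
        return None  # no path found

    print(astar(occupancy, (0, 0), (4, 4)))  # a list of grid cells from start to goal
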
Category: Artificial Intelligence

[277] viXra:1704.0353 [pdf] submitted on 2017-04-26 06:56:36

Artificial Synapse

Authors: George Rajna
Comments: 29 Pages.

Researchers from France and the University of Arkansas have created an artificial synapse capable of autonomous learning, a component of artificial intelligence. [17] Intelligent machines of the future will help restore memory, mind your children, fetch your coffee and even care for aging parents. [16] Unlike experimental neuroscientists who deal with real-life neurons, computational neuroscientists use model simulations to investigate how the brain functions. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9]
Category: Artificial Intelligence

[276] viXra:1704.0337 [pdf] submitted on 2017-04-26 03:14:10

Digital Assistant

Authors: George Rajna
Comments: 29 Pages.

Intelligent machines of the future will help restore memory, mind your children, fetch your coffee and even care for aging parents. [16] Unlike experimental neuroscientists who deal with real-life neurons, computational neuroscientists use model simulations to investigate how the brain functions. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. 
The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[275] viXra:1704.0308 [pdf] submitted on 2017-04-23 11:14:37

3D Printed Dancing Humanoid Robot “Buddy” for Homecare

Authors: Akshay Potnuru, Mohsen Jafarzadeh, Yonas Tadesse
Comments: 6 Pages.

This paper describes a 3D printed humanoid robot that can dance and display human-like facial expressions, expanding the use of humanoid robotics in entertainment while also playing an assistive role for children and elderly people. The humanoid is small and has an expressive face that stays within the comfort zone of a child or an older person. It can maneuver in a day-care or home-care environment using its wheeled base. The paper discusses the robot's ability to carry and handle small loads such as pills and common measurement tools, for example pressure and temperature measurement units. It also discusses the use of an IP camera for color identification and an Arduino-based audio system to synchronize music with the robot's dance movements.
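
For the colour-identification step, a minimal sketch with OpenCV could look like the following (the camera URL and HSV thresholds are placeholder assumptions, not values from the paper):

    # Illustrative sketch of IP-camera colour identification (not the paper's code).
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("http://192.168.0.10:8080/video")  # hypothetical IP camera stream

    # HSV range for a saturated red object (tune for the actual target colour)
    lower = np.array([0, 120, 70])
    upper = np.array([10, 255, 255])

    ok, frame = cap.read()
    if ok:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower, upper)
        coverage = cv2.countNonZero(mask) / mask.size
        print(f"red pixels: {coverage:.1%}")  # e.g. trigger a dance move above a threshold
    cap.release()
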
Category: Artificial Intelligence

[274] viXra:1704.0298 [pdf] submitted on 2017-04-22 19:23:44

Design and Motion Control of Bioinspired Humanoid Robot Head from Servo Motors Toward Artificial Muscles

Authors: Yara Almubarak, Yonas Tadesse
Comments: 9 Pages.

The potential applications of humanoid robots in social environments motivate researchers to design and control biomimetic humanoid robots. Generally, people are more interested in interacting with robots whose attributes and movements are similar to those of humans. The head is one of the most important parts of any social robot. Currently, most humanoid heads use electric motors, pneumatic actuators, or shape memory alloy (SMA) actuators. Electric and pneumatic actuators take up most of the available space and can produce unsmooth motions, while SMAs are expensive to use in humanoids. Recently, in many robotics projects, twisted and coiled polymer (TCP) artificial muscles have been used as linear actuators that take up little space compared with motors. In this paper, we demonstrate the design process and motion control of a robotic head with TCP muscles. Servo motors and artificial muscles are used to actuate the head motion, controlled by a cost-efficient ARM Cortex-M7 based development board. A complete comparison between the two actuator types is presented.
Category: Artificial Intelligence

[273] viXra:1704.0205 [pdf] submitted on 2017-04-17 01:40:24

Formula Analyzer: Find the Formula by Parameters

Authors: Artur Eduardovich Sibgatullin
Comments: 27 Pages. MIT License, https://figshare.com/articles/Formula_analyzer_Find_the_formula_by_parameters/4880012

Consider a formula, for example x + y^2 - z = r. Usually one needs to find the value of one parameter when the values of the others are known. Here, however, we pose a different problem: find the formula itself, knowing only its parameters. We call the solution of such a problem reverse computing. For this we create an algorithm and implement it as program code.
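
As a rough illustration of reverse computing (a brute-force sketch, not necessarily the algorithm developed in the paper), one can enumerate small candidate expressions over the known parameters and keep those that reproduce r on every sample:

    # Toy sketch: enumerate expressions of the form "atom op atom op atom" and
    # keep those consistent with all observed (x, y, z, r) samples.
    from itertools import product

    samples = [  # (x, y, z, r) with r = x + y**2 - z
        (1, 2, 3, 2),
        (2, 3, 1, 10),
        (0, 1, 5, -4),
    ]

    atoms = ["x", "y", "z", "x**2", "y**2", "z**2"]
    ops = ["+", "-", "*"]

    def matches(expr: str) -> bool:
        return all(eval(expr, {}, {"x": x, "y": y, "z": z}) == r
                   for x, y, z, r in samples)

    found = [f"{a} {o1} {b} {o2} {c}"
             for a, b, c in product(atoms, repeat=3)
             for o1, o2 in product(ops, repeat=2)
             if matches(f"{a} {o1} {b} {o2} {c}")]

    # True: the original formula is among the recovered candidates
    print(len(found), "matching expressions;", "x + y**2 - z" in found)
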
Category: Artificial Intelligence

[272] viXra:1704.0113 [pdf] submitted on 2017-04-09 11:21:19

Automatic Speech Recognition

Authors: George Rajna
Comments: 27 Pages.

The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. 
The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[271] viXra:1704.0090 [pdf] submitted on 2017-04-07 11:26:30

Toward Self-Govern and Self-Protected Data: a Proposal

Authors: Kasra Madadipouya
Comments: 3 Pages. Unpublished research proposal

We live in an era of data explosion. The rate at which data are generated has increased significantly in the last few years, especially with the popularization of Web 2.0. In addition, our surrounding environments are becoming more dynamic, and computing systems are rapidly morphing from monolithic, closed entities into globally disaggregated, collaborating entities that require sensitive data to be shared. For instance, content owners lose full control of their data once it is given away to consumers, and the data can then be copied, accessed, modified and redistributed without limit and without the owner's awareness.
Category: Artificial Intelligence

[270] viXra:1704.0089 [pdf] submitted on 2017-04-07 11:40:51

Machine Learning Chip

Authors: George Rajna
Comments: 23 Pages.

Google has said the TPU beat Nvidia and Intel. Let's explain that. There is so much to explain. TPU stands for Tensor Processing Unit. This is described by a Google engineer as "an entirely new class of custom machine learning accelerator." [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch-the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." [8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. 
[6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.
Category: Artificial Intelligence

[269] viXra:1704.0022 [pdf] submitted on 2017-04-03 08:49:50

Visualizing Scientific Big Data

Authors: George Rajna
Comments: 32 Pages.

Humans are visual creatures: our brain processes images 60,000 times faster than text, and 90 percent of information sent to the brain is visual. Visualization is becoming increasingly useful in the era of big data, in which we are generating so much data at such high rates that we cannot keep up with making sense of it all. In particular, visual analytics—a research discipline that combines automated data analysis with interactive visualizations—has emerged as a promising approach to dealing with this information overload. [18] Neural networks are commonly used today to analyze complex data – for instance to find clues to illnesses in genetic information. Ultimately, though, no one knows how these networks actually work exactly. [17] Hey Siri, how's my hair?" Your smartphone may soon be able to give you an honest answer, thanks to a new machine learning algorithm designed by U of T Engineering researchers Parham Aarabi and Wenzhi Guo. [16] Researchers at Lancaster University's Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11]
Category: Artificial Intelligence

[268] viXra:1704.0021 [pdf] submitted on 2017-04-03 09:15:46

Electronic Synapses Artificial Brain

Authors: George Rajna
Comments: 33 Pages.

Researchers from the CNRS, Thales, and the Universities of Bordeaux, Paris-Sud, and Evry have created an artificial synapse capable of learning autonomously. They were also able to model the device, which is essential for developing more complex circuits. [19] Humans are visual creatures: our brain processes images 60,000 times faster than text, and 90 percent of information sent to the brain is visual. Visualization is becoming increasingly useful in the era of big data, in which we are generating so much data at such high rates that we cannot keep up with making sense of it all. In particular, visual analytics—a research discipline that combines automated data analysis with interactive visualizations—has emerged as a promising approach to dealing with this information overload. [18] Neural networks are commonly used today to analyze complex data – for instance to find clues to illnesses in genetic information. Ultimately, though, no one knows how these networks actually work exactly. [17] Hey Siri, how's my hair?" Your smartphone may soon be able to give you an honest answer, thanks to a new machine learning algorithm designed by U of T Engineering researchers Parham Aarabi and Wenzhi Guo. [16] Researchers at Lancaster University's Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12]
Category: Artificial Intelligence

[267] viXra:1703.0233 [pdf] submitted on 2017-03-24 10:36:41

Parallel Computation and Brain Function

Authors: George Rajna
Comments: 26 Pages.

Unlike experimental neuroscientists who deal with real-life neurons, computational neuroscientists use model simulations to investigate how the brain functions. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. 
The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[266] viXra:1703.0063 [pdf] submitted on 2017-03-07 09:36:39

Human Readable Feature Generation for Natural Language Corpora

Authors: Tomasz Dryjanski
Comments: 4 Pages.

This paper proposes an alternative to the Paragraph Vector algorithm that generates fixed-length vectors of human-readable features for natural language corpora. It extends word2vec while retaining its other advantages, such as speed and accuracy; hence its proposed name, doc2feat. Extracted features are presented as lists of words with their proximity to the particular feature, allowing interpretation and manual annotation. By tuning parameters, the focus can be placed on grammatical aspects of the corpus language, making the method useful for linguistic applications. The algorithm can run on variable-length pieces of text and provides insight into which features are relevant for text classification or sentiment analysis. The corpus does not have to be, and in specific cases should not be, preprocessed with stemming or stop-word removal.
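
A minimal sketch of the general idea of human-readable features (not the doc2feat algorithm itself) is to train word2vec and render each feature direction as a list of its nearest words; here the feature directions are simply principal components of the embedding, an assumption made for the demonstration:

    # Sketch: features as directions in word2vec space, shown as nearest-word lists.
    from gensim.models import Word2Vec
    from sklearn.decomposition import PCA

    corpus = [
        ["the", "cat", "sat", "on", "the", "mat"],
        ["dogs", "and", "cats", "are", "popular", "pets"],
        ["stock", "prices", "fell", "sharply", "on", "monday"],
        ["the", "market", "rallied", "after", "the", "announcement"],
    ]

    model = Word2Vec(corpus, vector_size=50, min_count=1, window=3, epochs=200, seed=1)

    # a handful of "features": principal directions of the embedding space
    pca = PCA(n_components=3).fit(model.wv.vectors)

    for i, direction in enumerate(pca.components_):
        nearest = model.wv.similar_by_vector(direction, topn=5)
        print(f"feature {i}:", [(w, round(s, 2)) for w, s in nearest])
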
Category: Artificial Intelligence

[265] viXra:1703.0056 [pdf] submitted on 2017-03-06 13:57:49

Quantum Machine Learning to Infinite Dimensions

Authors: George Rajna
Comments: 27 Pages.

Physicists have developed a quantum machine learning algorithm that can handle infinite dimensions—that is, it works with continuous variables (which have an infinite number of possible values on a closed interval) instead of the typically used discrete variables (which have only a finite number of values). [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. 
The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[264] viXra:1703.0013 [pdf] submitted on 2017-03-02 05:44:53

Controlling a Robot Using a Wearable Device (MYO)

Authors: Mithileysh Sathiyanarayanan, Tobias Mulling, Bushra Nazir
Comments: 6 Pages. IJEDR, 2015, Vol 3, Issue 3

There is a huge demand for military robots in almost all countries, a topic that falls under human-computer interaction and artificial intelligence. There are many different ways of operating a robot: self-controlled, automatically controlled, and so on. Gesture-controlled operation is also on the rise, which motivated us to develop a gesture-controlled robot using the MYO armband. MYO has created a buzz in the technological world with its striking features and its utility in various fields. Its introduction as an armband that wraps around the arm to control robots with our movements and gestures has opened wide new doors for experimentation in robotics. This independently working gesture recognition system does not rely on any external sensors (motion capture systems), as its embedded sensors recognize gesture commands and act accordingly. The armband can be worn by soldiers to operate robots that fight against enemies. This work-in-progress paper illustrates an existing robot designed by us that can be controlled by hand gestures using a wearable device called MYO. We would like to investigate this further and implement an interface between the robot and a MYO armband for successful control.
Category: Artificial Intelligence

[263] viXra:1703.0012 [pdf] submitted on 2017-03-02 05:49:01

Leap Motion Device for Gesture Controlling an Unmanned Ground Vehicle (Robot)

Authors: Mithileysh Sathiyanarayanan, Tobias Mulling, Bushra Nazir
Comments: 10 Pages. IJEDR, 2016, Vol 4, Issue 4

A new branch of human-computer interaction uses computer vision and image processing algorithms to detect a gesture, understand its intent, make it meaningful to the computer, and then interact with humans. The recent introduction of the Leap Motion controller is a big step forward in gesture control technology, and the use of gesture control in robotics is also on the rise. This motivated us to develop a gesture-controlled robot using a Leap Motion device that senses human hands above it, tracks them, and aids navigation. This independently working gesture recognition system does not rely on any external sensors (motion capture systems), as its embedded sensors recognize gesture commands and act accordingly. Soldiers need not wear any physical device on their body (unlike Kinect and/or the MYO armband) to operate robots that fight against enemies. This work-in-progress paper illustrates an existing robot designed by us that can be controlled by hand gestures using a non-wearable (touchless) device called Leap Motion.
Category: Artificial Intelligence

[262] viXra:1702.0297 [pdf] submitted on 2017-02-23 18:45:36

Some General Results On Overfitting In Machine Learning

Authors: Antony Van der Mude
Comments: 13 Pages.

Overfitting has always been a problem in machine learning. Recently a related phenomenon called “oversearching” has been analyzed. This paper takes a theoretical approach using a very general methodology covering most learning paradigms in current use. Overfitting is defined in terms of the “expressive accuracy” of a model for the data, rather than its “predictive accuracy”. The results show that even if the learner can identify a set of best models, overfitting will cause it to bounce from one model to another. Overfitting is ameliorated by having the learner bound the search space, and bounding is equivalent to using an accuracy (or bias) more restrictive than the problem accuracy. Also, Ramsey’s Theorem shows that every data sequence contains a situation where either consistent overfitting or underfitting is unavoidable. We show that oversearching is simply overfitting in which the resource used to express a model is the search space itself, rather than a more common resource such as a program that executes the model. We show that the smallest data sequence guessing a model defines a canonical resource. There is an equivalence in the limit between any two resources for expressing the same model space, but it may not be effectively computable.
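
For a concrete picture of overfitting itself (a generic illustration, not tied to the paper's formal framework), the following sketch shows training error falling while held-out error rises as model capacity grows:

    # Generic overfitting demo: polynomial fits of increasing degree on noisy data.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 40)
    y = np.sin(3 * x) + rng.normal(scale=0.2, size=x.size)   # noisy target

    idx = rng.permutation(x.size)
    train, test = idx[:30], idx[30:]                          # random train/test split

    for degree in (1, 3, 9, 12):
        coeffs = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coeffs, x)
        mse = lambda s: float(np.mean((pred[s] - y[s]) ** 2))
        print(f"degree {degree:2d}: train MSE {mse(train):.3f}, test MSE {mse(test):.3f}")
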
Category: Artificial Intelligence

[261] viXra:1702.0275 [pdf] submitted on 2017-02-22 09:32:33

Quantum Artificial Biomimetics

Authors: George Rajna
Comments: 26 Pages.

Quantum biomimetics consists of reproducing in quantum systems certain properties exclusive to living organisms. Researchers at University of the Basque Country have imitated natural selection, learning and memory in a new study. The mechanisms developed could give quantum computation a boost and facilitate the learning process in machines. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. 
The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[260] viXra:1702.0232 [pdf] submitted on 2017-02-18 07:11:56

New Materials from Small Data

Authors: George Rajna
Comments: 22 Pages.

Finding new functional materials is always tricky. But searching for very specific properties among a relatively small family of known materials is even more difficult. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch-the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." [8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. 
[6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.
Category: Artificial Intelligence

[259] viXra:1702.0229 [pdf] submitted on 2017-02-18 02:21:36

A.i. Music Duet

Authors: George Rajna
Comments: 27 Pages.

An artificial intelligence experiment has emerged of the most enjoyable kind: It is called "A.I. Duet." [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer.
Category: Artificial Intelligence

[258] viXra:1702.0143 [pdf] submitted on 2017-02-12 10:31:13

Human Motion and Language

Authors: George Rajna
Comments: 26 Pages.

Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. 
The Weak Interaction changes the diffraction patterns by moving the electric charge from one side of the diffraction pattern to the other, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, making it a natural part of the Relativistic Quantum Theory and making it possible to build the Quantum Computer.
Category: Artificial Intelligence

[257] viXra:1702.0130 [pdf] submitted on 2017-02-10 09:42:04

Artificial Neural Network

Authors: George Rajna
Comments: 25 Pages.

A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. 
The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, making it a natural part of the Relativistic Quantum Theory and making it possible to build the Quantum Computer.
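To make the first item ([14]) more concrete: the neural-network wave-function idea is usually implemented with a restricted Boltzmann machine (RBM) whose complex parameters are optimized variationally. Below is a minimal sketch of such an ansatz for a short spin-1/2 chain; the layer sizes, random parameters and NumPy evaluation are illustrative assumptions, not the authors' code.

import numpy as np

# Minimal sketch of a restricted-Boltzmann-machine wave-function ansatz for a
# chain of N spin-1/2 sites: psi(s) parameterized by complex (a, b, W).
# Illustrative only; not the implementation of the cited work.
rng = np.random.default_rng(0)
N, M = 8, 16  # visible spins, hidden units (assumed sizes)

a = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))            # visible biases
b = 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))            # hidden biases
W = 0.01 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))  # couplings

def psi(s):
    """Unnormalized amplitude psi(s) for a spin configuration s in {-1, +1}^N."""
    theta = b + W @ s                      # effective fields on the hidden units
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

# Example: relative (unnormalized) weight of the all-up configuration.
s_up = np.ones(N)
print(abs(psi(s_up)) ** 2)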
Category: Artificial Intelligence

[256] viXra:1702.0094 [pdf] submitted on 2017-02-07 10:15:12

Complex Neutrosophic Soft Set

Authors: Said Broumi, Assia Bakali, Mohamed Talea, Florentin Smarandache, Mumtaz Ali, Ganeshsree Selvachandran
Comments: 6 Pages.

In this paper, we propose the complex neutrosophic soft set model, which is a hybrid of complex fuzzy sets, neutrosophic sets and soft sets. The basic set-theoretic operations and some concepts related to the structure of this model are introduced and illustrated. An example related to a decision-making problem involving uncertain and subjective information is presented to demonstrate the utility of this model.
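For readers unfamiliar with the model, the sketch below shows one possible in-memory representation: each parameter maps every object to three complex membership grades (truth, indeterminacy, falsity), written as an amplitude in [0, 1] times a phase factor. The class names, the amplitude-wise max/min union convention and the example values are assumptions made for illustration, not the operations defined in the paper.

import cmath
from dataclasses import dataclass

# Illustrative sketch of a complex neutrosophic soft set over a universe U and
# a parameter set E: for each parameter e, every object u gets complex-valued
# truth/indeterminacy/falsity grades r*exp(i*phi) with amplitude r in [0, 1].
# The union convention below (max truth amplitude, min indeterminacy/falsity
# amplitude, keeping the phase of the chosen grade) is assumed for this sketch.

@dataclass
class Grade:
    truth: complex
    indet: complex
    falsity: complex

def _pick(x, y, prefer_max):
    return max(x, y, key=abs) if prefer_max else min(x, y, key=abs)

def union(g1: Grade, g2: Grade) -> Grade:
    return Grade(
        truth=_pick(g1.truth, g2.truth, prefer_max=True),
        indet=_pick(g1.indet, g2.indet, prefer_max=False),
        falsity=_pick(g1.falsity, g2.falsity, prefer_max=False),
    )

# Two soft sets over the same universe {u1} and parameter {"cost"}.
F = {"cost": {"u1": Grade(0.8 * cmath.exp(0.3j), 0.2 * cmath.exp(0.1j), 0.1 + 0j)}}
G = {"cost": {"u1": Grade(0.6 * cmath.exp(0.9j), 0.4 + 0j, 0.3 * cmath.exp(0.2j))}}

H = {e: {u: union(F[e][u], G[e][u]) for u in F[e]} for e in F}
print(H["cost"]["u1"])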
Category: Artificial Intelligence

[255] viXra:1702.0010 [pdf] submitted on 2017-02-01 08:18:01

Visualize Complex Learning Processes

Authors: George Rajna
Comments: 29 Pages.

Neural networks are commonly used today to analyze complex data – for instance to find clues to illnesses in genetic information. Ultimately, though, no one knows how these networks actually work exactly. [17] Hey Siri, how's my hair?" Your smartphone may soon be able to give you an honest answer, thanks to a new machine learning algorithm designed by U of T Engineering researchers Parham Aarabi and Wenzhi Guo. [16] Researchers at Lancaster University's Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10]
Category: Artificial Intelligence

[254] viXra:1702.0008 [pdf] submitted on 2017-02-01 08:56:06

First Passage Under Restart

Authors: George Rajna
Comments: 31 Pages.

Discovering the ways in which many seemingly diverse phenomena are related is one of the overarching goals of scientific inquiry, since universality often allows an insight in one area to be extended to many other areas. [18] Neural networks are commonly used today to analyze complex data – for instance to find clues to illnesses in genetic information. Ultimately, though, no one knows how these networks actually work exactly. [17] Hey Siri, how's my hair?" Your smartphone may soon be able to give you an honest answer, thanks to a new machine learning algorithm designed by U of T Engineering researchers Parham Aarabi and Wenzhi Guo. [16] Researchers at Lancaster University's Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of “quantum artificial intelligence”. Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries - how a sliced up flatworm can regenerate into new organisms - has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". 
[7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and Special Relativity, but also the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side of the diffraction pattern to the other, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, making it a natural part of the Relativistic Quantum Theory and making it possible to build the Quantum Computer.
Category: Artificial Intelligence

[253] viXra:1701.0574 [pdf] submitted on 2017-01-22 21:33:04

The Relationship Between Agents and Link-Level Acknowledgements Using Mugwump

Authors: Thomas Lambert
Comments: 8 Pages.

In recent years, much research has been devoted to the improvement of architecture; unfortunately, few have explored the emulation of the World Wide Web. In fact, few biologists would disagree with the deployment of evolutionary programming. While this discussion is never a confirmed intent, it is derived from known results. Mugwump, our new framework for hash tables [28], is the solution to all of these challenges.
Category: Artificial Intelligence

[252] viXra:1701.0559 [pdf] submitted on 2017-01-21 11:05:33

AI Systems See the World as Humans

Authors: George Rajna
Comments: 38 Pages.

A Northwestern University team developed a new computational model that performs at human levels on a standard intelligence test. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do. [25] Neuroscience and artificial intelligence experts from Rice University and Baylor College of Medicine have taken inspiration from the human brain in creating a new "deep learning" method that enables computers to learn about the visual world largely on their own, much as human babies do. [24]
Category: Artificial Intelligence

[251] viXra:1701.0530 [pdf] submitted on 2017-01-17 20:03:50

Intelligence of Crowd.

Authors: Michail Zak
Comments: 17 Pages.

A new class of dynamical systems with a preset type of interference of probabilities is introduced. It is obtained from an extension of the Madelung equation by replacing the quantum potential with a specially selected feedback from the Liouville equation. It has been proved that these systems are different from both Newtonian and quantum systems, but they can be useful for modeling spontaneous collective novelty phenomena in which emerging outputs are qualitatively different from the weighted sum of individual inputs. The formation of language and fast decision-making processes are discussed as potential applications of the probability interference.
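As a rough schematic of the construction described above (not the paper's specific equations), a one-dimensional version can be written with a drift that is a functional of the variable's own probability density, which in turn obeys the corresponding Liouville equation; the feedback term q[\rho], replacing the Madelung quantum potential, is the "specially selected" ingredient of the paper and is left unspecified here:

\begin{align}
\dot{v} &= f(v) + q\left[\rho(v,t)\right], \\
\frac{\partial \rho}{\partial t} &+ \frac{\partial}{\partial v}\Big(\rho\,\big(f(v) + q[\rho]\big)\Big) = 0 .
\end{align}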
Category: Artificial Intelligence

[250] viXra:1701.0516 [pdf] submitted on 2017-01-16 14:14:27

Optimal Control Via Self-Generated Stochasticity.

Authors: Michail Zak
Comments: 19 Pages.

A stochastic approach to the maximization of a functional constrained by the governing equation of a controlled system is introduced and discussed. The idea of the proposed algorithm is the following: represent the functional to be maximized as a limit of a probability density governed by an appropriately selected Liouville equation. The corresponding ODEs then become stochastic, and the sample of the solution with the largest value has the highest probability of appearing in the ODE simulation. Application to optimal control is discussed. Two limitations of optimal control theory, local maxima and possible instability of the optimal solutions, are removed. Special attention is paid to robot motion planning.
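One way to picture the "highest value is most likely to be selected" idea is the toy Monte Carlo loop below: simulate many noisy roll-outs of a controlled system, evaluate the functional on each, and keep the best. The dynamics, noise model and functional are assumptions made for illustration; this is not the Liouville-feedback algorithm of the paper.

import numpy as np

# Toy illustration: pick the best-performing sample among stochastic roll-outs
# of a controlled system dx = u(x) dt + sigma dW, with an assumed functional
# J = -(terminal cost) - (control effort).
rng = np.random.default_rng(1)
dt, T, n_samples = 0.01, 200, 500

def control(x):
    return -0.5 * x  # assumed simple feedback law

def rollout():
    x, effort = 1.0, 0.0
    for _ in range(T):
        u = control(x)
        x += u * dt + 0.3 * np.sqrt(dt) * rng.standard_normal()  # Euler-Maruyama step
        effort += u ** 2 * dt
    return -x ** 2 - effort  # functional to maximize (assumed form)

values = [rollout() for _ in range(n_samples)]
print("best functional value among samples:", max(values))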
Category: Artificial Intelligence

[249] viXra:1612.0403 [pdf] submitted on 2016-12-30 06:29:12

Applications of Machine Learning in Estimating the Minimum Distance of Approach of an NEO

Authors: Jayant Mehra
Comments: 15 Pages, 6 Figures, 5 Tables

Although the current detection techniques have been able to calculate the minimum distance to which a Near Earth Object (NEO) can approach Earth for thousands of NEOs, there are millions of as-yet-undiscovered NEOs which could pose a threat to Planet Earth. An NEO is considered highly dangerous if the minimum distance between it and the centre of the Earth is less than 0.03 AU. However, only a handful of NEOs have been detected prior to entering this danger zone. The immense task of asteroid hunting by conventional techniques is further complicated by a high number of false positives and false negatives. In this report, machine learning algorithms are written to predict the minimum distance up to which an NEO can approach the planet and to classify NEOs as to whether they are in the danger zone, based on their physical characteristics. In section 4 of the study, an Artificial Neural Network based on the backpropagation algorithm and a Logistic Classification based on Unconstrained Minimisation using the fminunc function are employed to classify NEOs with accuracies of 92% and 90% respectively. In section 5 of the report, the Levenberg-Marquardt Algorithm based on an Artificial Neural Network is employed to calculate the minimum distance with a regression R value of 0.79 (a value of 1 being the maximum). All the algorithmic systems developed have low false positive and false negative rates.
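To make the classification setup concrete, here is a hedged scikit-learn sketch in the spirit of section 4: a logistic-regression classifier and a small feed-forward network trained to flag objects whose minimum approach distance falls below 0.03 AU. The synthetic features and the toy distance relation are placeholders; the report's own data, models and accuracy figures are not reproduced.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for NEO physical/orbital features (placeholders only).
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 6))
min_dist_au = 0.05 + 0.04 * X[:, 0] + 0.01 * rng.normal(size=2000)  # toy relation
y = (min_dist_au < 0.03).astype(int)  # 1 = inside the 0.03 AU danger zone

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("logistic accuracy:", logit.score(X_te, y_te))
print("neural-net accuracy:", mlp.score(X_te, y_te))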
Category: Artificial Intelligence

[248] viXra:1612.0344 [pdf] submitted on 2016-12-26 10:03:47

Advance Artificial Super-Intelligence

Authors: Miguel A. Sanchez-Rey
Comments: 3 Pages.

From FL to AL.
Category: Artificial Intelligence

[247] viXra:1612.0314 [pdf] submitted on 2016-12-21 07:33:22

Spintronics-Based Artificial Intelligence

Authors: George Rajna
Comments: 45 Pages.

Researchers at Tohoku University have, for the first time, successfully demonstrated the basic operation of spintronics-based artificial intelligence. [27] The neural structure we use to store and process information in verbal working memory is more complex than previously understood, finds a new study by researchers at New York University. [26] Surviving breast cancer changed the course of Regina Barzilay's research. The experience showed her, in stark relief, that oncologists and their patients lack tools for data-driven decision making. [25] New research, led by the University of Southampton, has demonstrated that a nanoscale device, called a memristor, could be used to power artificial systems that can mimic the human brain. [24] Scientists at Helmholtz-Zentrum Dresden-Rossendorf conducted electricity through DNA-based nanowires by placing gold-plated nanoparticles on them. In this way it could become possible to develop circuits based on genetic material. [23] Researchers at the Nanoscale Transport Physics Laboratory from the School of Physics at the University of the Witwatersrand have found a technique to improve carbon superlattices for quantum electronic device applications. [22] The researchers have found that these previously underestimated interactions can play a significant role in preventing heat dissipation in microelectronic devices. [21] LCLS works like an extraordinary strobe light: Its ultrabright X-rays take snapshots of materials with atomic resolution and capture motions as fast as a few femtoseconds, or millionths of a billionth of a second. For comparison, one femtosecond is to a second what seven minutes is to the age of the universe. [20] A 'nonlinear' effect that seemingly turns materials transparent is seen for the first time in X-rays at SLAC's LCLS. [19] Leiden physicists have manipulated light with large artificial atoms, so-called quantum dots. Before, this has only been accomplished with actual atoms. It is an important step toward light-based quantum technology. [18] In a tiny quantum prison, electrons behave quite differently as compared to their counterparts in free space. They can only occupy discrete energy levels.
Category: Artificial Intelligence

[246] viXra:1612.0288 [pdf] submitted on 2016-12-18 09:03:13

Neuroscience and Artificial Intelligence

Authors: George Rajna
Comments: 37 Pages.

Neuroscience and artificial intelligence experts from Rice University and Baylor College of Medicine have taken inspiration from the human brain in creating a new "deep learning" method that enables computers to learn about the visual world largely on their own, much as human babies do. [24]
Category: Artificial Intelligence

[245] viXra:1612.0242 [pdf] submitted on 2016-12-14 09:45:02

Doctor of Philosophy Thesis in Military Informatics (Openphd) :Lethal Autonomy of Weapons is Designed And/or Recessive

Authors: Nyagudi Musandu Nyagudi
Comments: 1 Page. By way of Prior Publications, Practice and Contribution

My original contribution to knowledge is: any weapon that exhibits intended and/or unintended lethal autonomy in targeting and interdiction does so by way of design and/or recessive flaw(s) in its systems of control; any such weapon is capable of war-fighting and other battle-space interaction in a manner that its Human Commander does not anticipate. A lethal autonomous weapon is therefore independently capable of exhibiting positive or negative recessive norms of targeting in its perceptions of Discrimination between Civilian and Military Objects, Proportionality of Methods and Outcomes, Feasible Precaution before interdiction, and its underlying Concepts of Humanity. This marks the completion of an Open PhD (#openphd) project done in sui generis form.
Category: Artificial Intelligence

[244] viXra:1612.0214 [pdf] submitted on 2016-12-12 10:53:08

Memory Architecture for AI

Authors: George Rajna
Comments: 44 Pages.

The neural structure we use to store and process information in verbal working memory is more complex than previously understood, finds a new study by researchers at New York University. [26] Surviving breast cancer changed the course of Regina Barzilay's research. The experience showed her, in stark relief, that oncologists and their patients lack tools for data-driven decision making. [25] New research, led by the University of Southampton, has demonstrated that a nanoscale device, called a memristor, could be used to power artificial systems that can mimic the human brain. [24] Scientists at Helmholtz-Zentrum Dresden-Rossendorf conducted electricity through DNA-based nanowires by placing gold-plated nanoparticles on them. In this way it could become possible to develop circuits based on genetic material. [23] Researchers at the Nanoscale Transport Physics Laboratory from the School of Physics at the University of the Witwatersrand have found a technique to improve carbon superlattices for quantum electronic device applications. [22] The researchers have found that these previously underestimated interactions can play a significant role in preventing heat dissipation in microelectronic devices. [21] LCLS works like an extraordinary strobe light: Its ultrabright X-rays take snapshots of materials with atomic resolution and capture motions as fast as a few femtoseconds, or millionths of a billionth of a second. For comparison, one femtosecond is to a second what seven minutes is to the age of the universe. [20] A 'nonlinear' effect that seemingly turns materials transparent is seen for the first time in X-rays at SLAC's LCLS. [19] Leiden physicists have manipulated light with large artificial atoms, so-called quantum dots. Before, this has only been accomplished with actual atoms. It is an important step toward light-based quantum technology. [18] In a tiny quantum prison, electrons behave quite differently as compared to their counterparts in free space. They can only occupy discrete energy levels, much like the electrons in an atom-for this reason, such electron prisons are often called "artificial atoms". [17]
Category: Artificial Intelligence

[243] viXra:1612.0130 [pdf] submitted on 2016-12-08 05:48:31

Machine Learning of 2-D Materials

Authors: George Rajna
Comments: 22 Pages.

Machine learning, a field focused on training computers to recognize patterns in data and make new predictions, is helping doctors more accurately diagnose diseases and stock analysts forecast the rise and fall of financial markets. And now materials scientists have pioneered another important application for machine learning—helping to accelerate the discovery and development of new materials. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond.
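The materials-screening pattern behind items [14] and [12] (train a model on computed properties, then scan a large space of candidate compositions for likely stable materials) can be sketched generically as follows. The descriptors, the toy formation-energy target and the stability criterion are placeholders, not the cited groups' methods or data.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Generic screening pattern (assumed for illustration): learn formation energy
# from simple composition descriptors, then flag candidates predicted stable.
rng = np.random.default_rng(7)
X = rng.uniform(size=(5000, 8))                                        # placeholder descriptors
e_form = X @ rng.normal(size=8) - 0.5 + 0.05 * rng.normal(size=5000)   # toy target (eV/atom)

X_tr, X_te, y_tr, y_te = train_test_split(X, e_form, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))

candidates = rng.uniform(size=(100, 8))          # unexplored compositions
pred = model.predict(candidates)
print("predicted-stable candidates:", int((pred < 0.0).sum()))  # negative formation energy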
Category: Artificial Intelligence

[242] viXra:1612.0030 [pdf] submitted on 2016-12-02 12:28:36

Machine Learning Breakthroughs

Authors: George Rajna
Comments: 47 Pages.

As machine learning breakthroughs abound, researchers look to democratize benefits. [27] Machine-learning system spontaneously reproduces aspects of human neurology. [26] Surviving breast cancer changed the course of Regina Barzilay's research. The experience showed her, in stark relief, that oncologists and their patients lack tools for data-driven decision making. [25] New research, led by the University of Southampton, has demonstrated that a nanoscale device, called a memristor, could be used to power artificial systems that can mimic the human brain. [24] Scientists at Helmholtz-Zentrum Dresden-Rossendorf conducted electricity through DNA-based nanowires by placing gold-plated nanoparticles on them. In this way it could become possible to develop circuits based on genetic material. [23] Researchers at the Nanoscale Transport Physics Laboratory from the School of Physics at the University of the Witwatersrand have found a technique to improve carbon superlattices for quantum electronic device applications. [22] The researchers have found that these previously underestimated interactions can play a significant role in preventing heat dissipation in microelectronic devices. [21] LCLS works like an extraordinary strobe light: Its ultrabright X-rays take snapshots of materials with atomic resolution and capture motions as fast as a few femtoseconds, or millionths of a billionth of a second. For comparison, one femtosecond is to a second what seven minutes is to the age of the universe. [20] A 'nonlinear' effect that seemingly turns materials transparent is seen for the first time in X-rays at SLAC's LCLS. [19] Leiden physicists have manipulated light with large artificial atoms, so-called quantum dots. Before, this has only been accomplished with actual atoms. It is an important step toward light-based quantum technology. [18] In a tiny quantum prison, electrons behave quite differently as compared to their counterparts in free space. They can only occupy discrete energy levels, much like the electrons in an atom; for this reason, such electron prisons are often called "artificial atoms". [17]
Category: Artificial Intelligence

[241] viXra:1612.0022 [pdf] submitted on 2016-12-02 07:17:06

Machine-Learning and Human Neurology

Authors: George Rajna
Comments: 44 Pages.

Machine-learning system spontaneously reproduces aspects of human neurology. [26] Surviving breast cancer changed the course of Regina Barzilay's research. The experience showed her, in stark relief, that oncologists and their patients lack tools for data-driven decision making. [25] New research, led by the University of Southampton, has demonstrated that a nanoscale device, called a memristor, could be used to power artificial systems that can mimic the human brain. [24] Scientists at Helmholtz-Zentrum Dresden-Rossendorf conducted electricity through DNA-based nanowires by placing gold-plated nanoparticles on them. In this way it could become possible to develop circuits based on genetic material. [23] Researchers at the Nanoscale Transport Physics Laboratory from the School of Physics at the University of the Witwatersrand have found a technique to improve carbon superlattices for quantum electronic device applications. [22] The researchers have found that these previously underestimated interactions can play a significant role in preventing heat dissipation in microelectronic devices. [21] LCLS works like an extraordinary strobe light: Its ultrabright X-rays take snapshots of materials with atomic resolution and capture motions as fast as a few femtoseconds, or millionths of a billionth of a second. For comparison, one femtosecond is to a second what seven minutes is to the age of the universe. [20] A 'nonlinear' effect that seemingly turns materials transparent is seen for the first time in X-rays at SLAC's LCLS. [19] Leiden physicists have manipulated light with large artificial atoms, so-called quantum dots. Before, this has only been accomplished with actual atoms. It is an important step toward light-based quantum technology. [18] In a tiny quantum prison, electrons behave quite differently as compared to their counterparts in free space. They can only occupy discrete energy levels, much like the electrons in an atom-for this reason, such electron prisons are often called "artificial atoms". [17]
Category: Artificial Intelligence

[240] viXra:1612.0009 [pdf] submitted on 2016-12-01 12:44:05

Computer Learns by Watching Video

Authors: George Rajna
Comments: 28 Pages.

In recent years, computers have gotten remarkably good at recognizing speech and images: Think of the dictation software on most cellphones, or the algorithms that automatically identify people in photos posted to Facebook. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. 
The Weak Interaction changes the diffraction patterns by moving the electric charge from one side of the diffraction pattern to the other, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, making it a natural part of the Relativistic Quantum Theory and making it possible to build the Quantum Computer.
Category: Artificial Intelligence

[239] viXra:1611.0335 [pdf] submitted on 2016-11-24 10:44:26

Kannada Spell Checker with Sandhi Splitter

Authors: Akshatha A N, Chandana G Upadhyaya, Rajashekara Murthy S
Comments: 7 Pages.

Spelling errors are introduced in text either during typing, or when the user does not know the correct phoneme or grapheme. If a language contains complex words like sandhi where two or more morphemes join based on some rules, spell checking becomes very tedious. In such situations, having a spell checker with sandhi splitter which alerts the user by flagging the errors and providing suggestions is very useful. A novel algorithm of sandhi splitting is proposed in this paper. The sandhi splitter can split about 7000 most common sandhi words in Kannada language used as test samples. The sandhi splitter was integrated with a Kannada spell checker and a mechanism for generating suggestions was added. A comprehensive, platform independent, standalone spell checker with sandhi splitter application software was thus developed and tested extensively for its efficiency and correctness. A comparative analysis of this spell checker with sandhi splitter was made and results concluded that the Kannada spell checker with sandhi splitter has an improved performance. It is twice as fast, 200 times more space efficient, and it is 90% accurate in case of complex nouns and 50% accurate for complex verbs. Such a spell checker with sandhi splitter will be of foremost significance in machine translation systems, voice processing, etc. This is the first sandhi splitter in Kannada and the advantage of the novel algorithm is that, it can be extended to all Indian languages.
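The general pipeline described above (try to split a compound word, check each part against a lexicon, and rank suggestions for flagged errors) can be sketched as follows. The toy lexicon, the naive concatenation-only split rule and the edit-distance-style suggestion ranking are placeholders; the paper's actual Kannada sandhi rules and data structures are not reproduced here.

from difflib import get_close_matches

# Toy spell-checker pipeline with a sandhi-style splitter (placeholders only;
# the real system applies Kannada-specific sandhi rules and a large lexicon).
LEXICON = {"mane", "alli", "huduga", "ooru"}

def sandhi_split(word):
    """Try every split point; accept it if both halves are known words."""
    for i in range(2, len(word) - 1):
        left, right = word[:i], word[i:]
        if left in LEXICON and right in LEXICON:
            return left, right
    return None

def check(word):
    if word in LEXICON or sandhi_split(word):
        return "ok", []
    # Flag the error and offer the closest lexicon entries as suggestions.
    return "misspelled", get_close_matches(word, LEXICON, n=3, cutoff=0.6)

for w in ["manealli", "hudga"]:
    print(w, "->", check(w))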
Category: Artificial Intelligence

[238] viXra:1611.0316 [pdf] submitted on 2016-11-23 08:10:08

Minds for Machine Intelligence

Authors: George Rajna
Comments: 42 Pages.

Surviving breast cancer changed the course of Regina Barzilay's research. The experience showed her, in stark relief, that oncologists and their patients lack tools for data-driven decision making. [25] New research, led by the University of Southampton, has demonstrated that a nanoscale device, called a memristor, could be used to power artificial systems that can mimic the human brain. [24] Scientists at Helmholtz-Zentrum Dresden-Rossendorf conducted electricity through DNA-based nanowires by placing gold-plated nanoparticles on them. In this way it could become possible to develop circuits based on genetic material. [23] Researchers at the Nanoscale Transport Physics Laboratory from the School of Physics at the University of the Witwatersrand have found a technique to improve carbon superlattices for quantum electronic device applications. [22] The researchers have found that these previously underestimated interactions can play a significant role in preventing heat dissipation in microelectronic devices. [21] LCLS works like an extraordinary strobe light: Its ultrabright X-rays take snapshots of materials with atomic resolution and capture motions as fast as a few femtoseconds, or millionths of a billionth of a second. For comparison, one femtosecond is to a second what seven minutes is to the age of the universe. [20] A 'nonlinear' effect that seemingly turns materials transparent is seen for the first time in X-rays at SLAC's LCLS. [19] Leiden physicists have manipulated light with large artificial atoms, so-called quantum dots. Before, this has only been accomplished with actual atoms. It is an important step toward light-based quantum technology. [18] In a tiny quantum prison, electrons behave quite differently as compared to their counterparts in free space. They can only occupy discrete energy levels, much like the electrons in an atom-for this reason, such electron prisons are often called "artificial atoms". [17] When two atoms are placed in a small chamber enclosed by mirrors, they can simultaneously absorb a single photon. [16] Optical quantum technologies are based on the interactions of atoms and photons at the single-particle level, and so require sources of single photons.
Category: Artificial Intelligence

[237] viXra:1611.0314 [pdf] submitted on 2016-11-23 08:47:27

New AI Algorithm Learns Beyond its Training

Authors: George Rajna
Comments: 27 Pages.

Researchers at Lancaster University's Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. 
The Weak Interaction changes the diffraction patterns by moving the electric charge from one side of the diffraction pattern to the other, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, making it a natural part of the Relativistic Quantum Theory and making it possible to build the Quantum Computer.
Category: Artificial Intelligence

[236] viXra:1611.0260 [pdf] submitted on 2016-11-17 11:18:04

Deng Entropy in Hyper Power Set and Super Power Set

Authors: Bingyi Kang, Yong Deng
Comments: 18 Pages.

Deng entropy has recently been proposed to quantify the uncertainty degree of a belief function in the Dempster-Shafer framework. In this paper, two new belief entropies based on the frame of Deng entropy are proposed for hyper-power sets and super-power sets, respectively, to measure the uncertainty degree of more uncertain and more flexible information. The new entropies in hyper-power sets and super-power sets can be used directly in applications of DSmT.
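For reference, the classical Deng entropy that the two new measures build on can be computed as in the sketch below, where m is a basic probability assignment over focal elements A of the ordinary power set and |A| is the cardinality of A. Extending the denominator term to hyper-power and super-power sets is the subject of the paper and is not attempted here.

from math import log2

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment.

    bpa maps focal elements (frozensets) to masses m(A); the classical
    definition is E_d = -sum_A m(A) * log2( m(A) / (2**|A| - 1) ).
    """
    return -sum(m * log2(m / (2 ** len(A) - 1)) for A, m in bpa.items() if m > 0)

# Example on the frame {a, b, c}: mass split between a singleton and the full set.
bpa = {frozenset({"a"}): 0.6, frozenset({"a", "b", "c"}): 0.4}
print(round(deng_entropy(bpa), 4))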
Category: Artificial Intelligence

[235] viXra:1611.0211 [pdf] submitted on 2016-11-14 04:10:17

A Variable Order Hidden Markov Model with Dependence Jumps

Authors: Anastasios Petropoulos, Stelios Xanthopoulos, Sotirios P. Chatzis
Comments: 14 Pages.

Hidden Markov models (HMMs) are a popular approach for modeling sequential data, typically based on the assumption of a first- or moderate-order Markov chain. However, in many real-world scenarios the modeled data entail temporal dynamics the patterns of which change over time. In this paper, we address this problem by proposing a novel HMM formulation, treating temporal dependencies as latent variables over which inference is performed. Specifically, we introduce a hierarchical graphical model comprising two hidden layers: on the first layer, we postulate a chain of latent observation-emitting states, the temporal dependencies between which may change over time; on the second layer, we postulate a latent first-order Markov chain modeling the evolution of temporal dynamics (dependence jumps) pertaining to the first-layer latent process. As a result of this construction, our method allows for effectively modeling non-homogeneous observed data, where the patterns of the entailed temporal dynamics may change over time. We devise efficient training and inference algorithms for our model, following the expectation-maximization paradigm. We demonstrate the efficacy and usefulness of our approach considering several real-world datasets. As we show, our model allows for increased modeling and predictive performance compared to the alternative methods, while offering a good trade-off between the resulting increases in predictive performance and computational complexity.
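To convey the two-layer construction in code, the sketch below samples from a simplified generative process: a first-order latent chain z_t selects which temporal dependence (lag) the observation-emitting state chain x_t uses, and Gaussian observations are emitted from x_t. The transition matrices, lag set and emission parameters are invented for illustration; the paper's exact formulation and its expectation-maximization algorithms are not reproduced.

import numpy as np

# Schematic generative process for a two-layer latent model: z_t (dependence
# layer) picks which past state x_{t-lag} the state layer conditions on.
rng = np.random.default_rng(3)

lags = [1, 2]                                  # candidate temporal dependencies
A_z = np.array([[0.9, 0.1], [0.2, 0.8]])       # first-order chain over dependence choices
A_x = np.array([[0.8, 0.2], [0.3, 0.7]])       # state transitions given the chosen source
means = np.array([-1.0, 1.0])                  # Gaussian emission mean per state

T = 20
z = [0]
x = [0, 0]                                     # pad with initial states for the maximum lag
obs = []
for t in range(T):
    z.append(rng.choice(2, p=A_z[z[-1]]))      # evolve the dependence layer
    source = x[-lags[z[-1]]]                   # past state the chain depends on at time t
    x.append(rng.choice(2, p=A_x[source]))     # evolve the observation-emitting layer
    obs.append(rng.normal(means[x[-1]], 0.5))  # emit a Gaussian observation

print(np.round(obs, 2))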
Category: Artificial Intelligence

[234] viXra:1611.0181 [pdf] submitted on 2016-11-12 07:13:04

Finding Patterns in Corrupted Data

Authors: George Rajna
Comments: 29 Pages.

A team, including researchers from MIT's Computer Science and Artificial Intelligence Laboratory, has created a new set of algorithms that can efficiently fit probability distributions to high-dimensional data. [16] Researchers at Lancaster University's Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. 
The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side of the diffraction pattern to the other, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, making it a natural part of the Relativistic Quantum Theory and making it possible to build the Quantum Computer.
Category: Artificial Intelligence

[233] viXra:1611.0177 [pdf] submitted on 2016-11-12 04:50:24

Machines Learn by Simply Observing

Authors: George Rajna
Comments: 28 Pages.

It is now possible for machines to learn how natural or artificial systems work by simply observing them, without being told what to look for, according to researchers at the University of Sheffield. [16] Researchers at Lancaster University's Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. 
The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side of the diffraction pattern to the other, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, making it a natural part of the Relativistic Quantum Theory and making it possible to build the Quantum Computer.
Category: Artificial Intelligence

[232] viXra:1611.0174 [pdf] submitted on 2016-11-12 05:46:51

Social Emotions Test for Artificial Intelligence

Authors: George Rajna
Comments: 29 Pages.

New evidence from brain studies, including cognitive psychology and neurophysiology research, shows that the emotional assessment of every object, subject, action or event plays an important role in human mental processes. And that means that if we want to create human-like artificial intelligence, we must make it emotionally responsive. But how do we know that such intelligence actually experiences real, human-like emotions? [17] It is now possible for machines to learn how natural or artificial systems work by simply observing them, without being told what to look for, according to researchers at the University of Sheffield. [16] Researchers at Lancaster University's Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of “quantum artificial intelligence”. Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries - how a sliced up flatworm can regenerate into new organisms - has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". 
[7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[231] viXra:1611.0173 [pdf] submitted on 2016-11-12 06:46:28

AI System Surfs Web to Improve its Performance

Authors: George Rajna
Comments: 31 Pages.

Of the vast wealth of information unlocked by the Internet, most is plain text. The data necessary to answer myriad questions—about, say, the correlations between the industrial use of certain chemicals and incidents of disease, or between patterns of news coverage and voter-poll results—may all be online. But extracting it from plain text and organizing it for quantitative analysis may be prohibitively time consuming. [18] New evidence from brain studies, including cognitive psychology and neurophysiology research, shows that the emotional assessment of every object, subject, action or event plays an important role in human mental processes. And that means that if we want to create human-like artificial intelligence, we must make it emotionally responsive. But how do we know that such intelligence actually experiences real, human-like emotions? [17] It is now possible for machines to learn how natural or artificial systems work by simply observing them, without being told what to look for, according to researchers at the University of Sheffield. [16] Researchers at Lancaster University's Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of “quantum artificial intelligence”. Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries - how a sliced up flatworm can regenerate into new organisms - has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. 
In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[230] viXra:1611.0169 [pdf] submitted on 2016-11-12 04:07:06

Brain-Inspired Device

Authors: George Rajna
Comments: 39 Pages.

New research, led by the University of Southampton, has demonstrated that a nanoscale device, called a memristor, could be used to power artificial systems that can mimic the human brain. [24] Scientists at Helmholtz-Zentrum Dresden-Rossendorf conducted electricity through DNA-based nanowires by placing gold-plated nanoparticles on them. In this way it could become possible to develop circuits based on genetic material. [23] Researchers at the Nanoscale Transport Physics Laboratory from the School of Physics at the University of the Witwatersrand have found a technique to improve carbon superlattices for quantum electronic device applications. [22] The researchers have found that these previously underestimated interactions can play a significant role in preventing heat dissipation in microelectronic devices. [21] LCLS works like an extraordinary strobe light: Its ultrabright X-rays take snapshots of materials with atomic resolution and capture motions as fast as a few femtoseconds, or millionths of a billionth of a second. For comparison, one femtosecond is to a second what seven minutes is to the age of the universe. [20] A ‘nonlinear’ effect that seemingly turns materials transparent is seen for the first time in X-rays at SLAC’s LCLS. [19] Leiden physicists have manipulated light with large artificial atoms, so-called quantum dots. Before, this has only been accomplished with actual atoms. It is an important step toward light-based quantum technology. [18] In a tiny quantum prison, electrons behave quite differently as compared to their counterparts in free space. They can only occupy discrete energy levels, much like the electrons in an atom - for this reason, such electron prisons are often called "artificial atoms". [17] When two atoms are placed in a small chamber enclosed by mirrors, they can simultaneously absorb a single photon. [16] Optical quantum technologies are based on the interactions of atoms and photons at the single-particle level, and so require sources of single photons that are highly indistinguishable – that is, as identical as possible. Current single-photon sources using semiconductor quantum dots inserted into photonic structures produce photons that are ultrabright but have limited indistinguishability due to charge noise, which results in a fluctuating electric field. [14] A method to produce significant amounts of semiconducting nanoparticles for light-emitting displays, sensors, solar panels and biomedical applications has gained momentum with a demonstration by researchers at the Department of Energy's Oak Ridge National Laboratory. [13] A source of single photons that meets three important criteria for use in quantum-information systems has been unveiled in China by an international team of physicists. Based on a quantum dot, the device is an efficient source of photons that emerge as solo particles that are indistinguishable from each other. The researchers are now trying to use the source to create a quantum computer based on "boson sampling". [11] With the help of a semiconductor quantum dot, physicists at the University of Basel have developed a new type of light source that emits single photons. For the first time, the researchers have managed to create a stream of identical photons. [10] Optical photons would be ideal carriers to transfer quantum information over large distances. Researchers envisage a network where information is processed in certain nodes and transferred between them via photons. 
[9] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer using Quantum Information. In August 2013, the achievement of "fully deterministic" quantum teleportation, using a hybrid technique, was reported. On 29 May 2014, scientists announced a reliable way of transferring data by quantum teleportation. Quantum teleportation of data had been done before but with highly unreliable methods. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer with the help of Quantum Information.
Category: Artificial Intelligence

[229] viXra:1611.0095 [pdf] submitted on 2016-11-08 03:33:30

Quantitative Prediction of Electoral Vote for United States Presidential Election in 2016

Authors: Gang Xu
Comments: 8 Pages. This work was originally completed by October 22, 2016. The manuscript draft was prepared on November 7, 2016.

In this paper I report a quantitative prediction of the electoral vote for the United States presidential election in 2016. The prediction is based on Google Trends (GT) data that is publicly available on the internet. A simple heuristic statistical model is applied to the GT data. This is intended as an experiment exploring a possible dependency between GT data and the electoral vote results of US presidential elections. The model's performance has also been tested by comparing its predictions with the actual electoral votes of 2004, 2008 and 2012. For 2016, the Google Trends data projects that Mr. Trump will win the White House in a landslide. This paper serves as a document that puts the exploratory experiment to a real test, since the actual election result can be compared with the prediction after tomorrow (November 8, 2016).
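The abstract does not spell out the heuristic itself, so the following is only a minimal sketch of the general idea, assuming hypothetical state-level Google Trends interest scores and a simple winner-take-all rule; it does not reproduce the paper's actual model or data.

# Toy winner-take-all projection from (hypothetical) Google Trends interest scores.
# The electoral-vote counts for these three states are real; the interest scores are made up.
ELECTORAL_VOTES = {"Florida": 29, "Ohio": 18, "Pennsylvania": 20}

gt_interest = {
    "Florida":      {"Trump": 62, "Clinton": 55},
    "Ohio":         {"Trump": 68, "Clinton": 49},
    "Pennsylvania": {"Trump": 58, "Clinton": 60},
}

def project_electoral_votes(interest, votes):
    """Give each state's electoral votes to the candidate with the higher GT interest."""
    totals = {}
    for state, scores in interest.items():
        winner = max(scores, key=scores.get)
        totals[winner] = totals.get(winner, 0) + votes[state]
    return totals

print(project_electoral_votes(gt_interest, ELECTORAL_VOTES))
# On this toy data: {'Trump': 47, 'Clinton': 20}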
Category: Artificial Intelligence

[228] viXra:1611.0086 [pdf] submitted on 2016-11-07 06:27:49

Neuromorphic Processor

Authors: George Rajna
Comments: 22 Pages.

Toshiba advances deep learning with extremely low power neuromorphic processor. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[227] viXra:1611.0025 [pdf] submitted on 2016-11-02 08:20:12

Machine Learning for Cancer Treatment

Authors: George Rajna
Comments: 28 Pages.

Physicians have long used visual judgment of medical images to determine the course of cancer treatment. A new program package from Fraunhofer researchers reveals changes in images and facilitates this task using deep learning. The experts will demonstrate this software in Chicago from November 27 to December 2 at RSNA, the world's largest radiology meeting. [16] Researchers at Lancaster University's Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of “quantum artificial intelligence”. Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries - how a sliced up flatworm can regenerate into new organisms - has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. 
The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[226] viXra:1611.0022 [pdf] submitted on 2016-11-02 06:49:11

Transforming, Self-Learning Software

Authors: George Rajna
Comments: 27 Pages.

Researchers at Lancaster University's Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. 
The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[225] viXra:1610.0360 [pdf] submitted on 2016-10-30 02:33:09

Machine-Learning Decision Rationales

Authors: George Rajna
Comments: 28 Pages.

Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have devised a way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. 
The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[224] viXra:1610.0359 [pdf] submitted on 2016-10-30 04:02:16

Machine Learning Understand Materials

Authors: George Rajna
Comments: 21 Pages.

Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch-the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." [8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. [6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. 
The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.
Category: Artificial Intelligence

[223] viXra:1610.0336 [pdf] submitted on 2016-10-27 21:31:21

Fuzzy Evidential Influence Diagram Evaluation Algorithm

Authors: Haoyang Zheng, Yong Deng
Comments: 38 Pages.

Fuzzy influence diagrams (FIDs) are graphical models that combine qualitative and quantitative analysis to solve decision-making problems. However, FIDs score the nodes of complex systems with an evaluation criterion that is not comprehensive enough, so many different nodes receive the same score and their differences are not reflected. Based on fuzzy sets and Dempster-Shafer (D-S) evidence theory, this paper changes the traditional evaluation system and modifies the corresponding algorithm, so that the influence diagram reflects the true situation of the system more effectively and yields more practical results. Numerical examples and a real application in a supply chain financial system are used to show the efficiency of the proposed influence diagram model.
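For readers unfamiliar with the evidential side, the sketch below shows plain Dempster-Shafer combination of two basic probability assignments. It is a generic illustration with hypothetical masses, not the paper's fuzzy-evidential evaluation algorithm.

# Minimal sketch of Dempster's rule of combination for two basic probability
# assignments (BPAs). Focal elements are represented as frozensets.
def dempster_combine(m1, m2):
    """Combine two BPAs (dicts mapping frozenset -> mass) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # mass that falls on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: Dempster's rule is undefined.")
    # Normalize by the non-conflicting mass.
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Hypothetical masses over the frame {"good", "bad"}.
m1 = {frozenset({"good"}): 0.6, frozenset({"good", "bad"}): 0.4}
m2 = {frozenset({"good"}): 0.5, frozenset({"bad"}): 0.3, frozenset({"good", "bad"}): 0.2}
print(dempster_combine(m1, m2))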
Category: Artificial Intelligence

[222] viXra:1610.0314 [pdf] submitted on 2016-10-26 05:16:26

Artificial Intelligence Replaces Judges and Lawyers

Authors: George Rajna
Comments: 20 Pages.

An artificial intelligence method developed by University College London computer scientists and associates has predicted the judicial decisions of the European Court of Human Rights (ECtHR) with 79% accuracy, according to a paper published Monday, Oct. 24 in PeerJ Computer Science. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch-the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." [8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. [6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. 
The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.
Category: Artificial Intelligence

[221] viXra:1610.0281 [pdf] submitted on 2016-10-24 04:05:52

An Information Volume Measure

Authors: Yong Deng
Comments: 8 Pages.

How to measure the volume of uncertain information is an open issue. Shannon entropy is used to represent the uncertainty degree of a probability distribution. In a generalized probability distribution, probability mass is assigned not only to the basic event space but also to the power set of the event space, which constructs a so-called meta probability space. A new measure, named Deng entropy, is presented for this setting. The results show that, compared with existing methods, Deng entropy is not only better in its mathematical form but also carries a significant physical meaning.
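The abstract does not restate the formula; the sketch below uses the commonly cited definition E_d(m) = -sum_A m(A) * log2(m(A) / (2^|A| - 1)) with hypothetical mass assignments, so treat it as an illustration rather than the paper's own notation.

import math

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment: dict mapping frozenset -> mass."""
    total = 0.0
    for focal, mass in bpa.items():
        if mass > 0:
            total -= mass * math.log2(mass / (2 ** len(focal) - 1))
    return total

# Singleton-only BPA: Deng entropy reduces to Shannon entropy (1 bit here).
print(deng_entropy({frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}))

# Mass on a multi-element focal set increases the measured uncertainty.
print(deng_entropy({frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.5}))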
Category: Artificial Intelligence

[220] viXra:1610.0249 [pdf] submitted on 2016-10-21 11:35:59

New Data Algorithms

Authors: George Rajna
Comments: 28 Pages.

Last year, MIT researchers presented a system that automated a crucial step in big-data analysis: the selection of a "feature set," or aspects of the data that are useful for making predictions. The researchers entered the system in several data science contests, where it outperformed most of the human competitors and took only hours instead of months to perform its analyses. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. 
The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[219] viXra:1610.0169 [pdf] submitted on 2016-10-15 16:49:11

Band Gap Estimation Using Machine Learning Techniques

Authors: Anantha Natarajan S, R Varadhan, Ezhilvel ME
Comments: 3 Pages.

The purpose of this study is to build machine learning models that predict the band gap of binary compounds from known properties such as molecular weight, electronegativity, atomic fraction and the periodic-table group of the constituent elements. Regression techniques such as linear regression, ridge regression and random forests were used to build the models. These models can be used by students and researchers in experiments involving unknown band gaps or new compounds.
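A minimal sketch of this kind of pipeline, assuming scikit-learn and a synthetic stand-in for the compound descriptors; the paper's dataset and exact features are not reproduced here.

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns stand in for: molecular weight, electronegativity, atomic fraction, group number.
X = rng.random((200, 4))
y = 1.5 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(0, 0.1, 200)  # synthetic band gaps (eV)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LinearRegression(), Ridge(alpha=1.0),
              RandomForestRegressor(n_estimators=100, random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, round(model.score(X_test, y_test), 3))  # R^2 on held-out data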
Category: Artificial Intelligence

[218] viXra:1610.0142 [pdf] submitted on 2016-10-13 14:04:33

Google DeepMind Neural Networks

Authors: George Rajna
Comments: 24 Pages.

A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[217] viXra:1610.0110 [pdf] submitted on 2016-10-10 12:21:47

Neuro-Inspired Analog Computer

Authors: George Rajna
Comments: 23 Pages.

Researchers have developed a neuro-inspired analog computer that has the ability to train itself to become better at whatever tasks it performs. [13] A small, Santa Fe, New Mexico-based company called Knowm claims it will soon begin commercializing a state-of-the-art technique for building computing chips that learn. Other companies, including HP HPQ-3.45% and IBM IBM-2.10% , have already invested in developing these so-called brain-based chips, but Knowm says it has just achieved a major technological breakthrough that it should be able to push into production hopefully within a few years. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[216] viXra:1610.0074 [pdf] submitted on 2016-10-07 00:22:44

Belief Reliability Analysis and Its Application

Authors: Haoyang Zheng, Likang Yin, Tian Bian, Yong Deng
Comments: 24 Pages.

In reliability analysis, Fault Tree Analysis based on evidential networks (EN) is an important research topic. However, the existing EN approaches still have two open issues: first, the final results are expressed as interval numbers, which carry a relatively high uncertainty when a final decision has to be made; second, the combination rule is not used to fuse uncertain information. These issues greatly decrease the efficiency of EN in handling uncertain information. To address them, a new methodology, called Belief Reliability Analysis, is presented in this paper. Combination methods for series, parallel, series-parallel and parallel-series systems are proposed for reliability evaluation. Numerical examples and a real application in a servo-actuation system are used to show the efficiency of the proposed Belief Reliability Analysis methodology.
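As background, the sketch below applies the classical series/parallel reliability formulas endpoint-wise to component reliabilities given as intervals [lower, upper]. The component intervals are hypothetical and this is a generic illustration, not the paper's Belief Reliability Analysis method.

def series(components):
    """Series system works only if every component works: R = prod(R_i)."""
    lo, hi = 1.0, 1.0
    for l, h in components:
        lo *= l
        hi *= h
    return lo, hi

def parallel(components):
    """Parallel system fails only if every component fails: R = 1 - prod(1 - R_i)."""
    lo_fail, hi_fail = 1.0, 1.0
    for l, h in components:
        lo_fail *= (1.0 - h)   # smallest possible failure probability
        hi_fail *= (1.0 - l)   # largest possible failure probability
    return 1.0 - hi_fail, 1.0 - lo_fail

pumps = [(0.90, 0.95), (0.85, 0.92)]   # hypothetical reliability intervals
print(series(pumps))                   # series combination
print(parallel(pumps))                 # parallel (redundant) combination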
Category: Artificial Intelligence

[215] viXra:1610.0029 [pdf] submitted on 2016-10-04 04:31:32

Associative Broadcast Neural Network

Authors: Aleksei Morozov
Comments: 3 Pages.

Associative broadcast neural network (ABNN) is an artificial neural network inspired by the hypothesis that a neuron's output pattern is broadcast through a biological neural network. Each neuron has wire connections and ether connections; the ether connections are electrical. Wire connections provide recognition functionality, while ether connections provide association functionality.
Category: Artificial Intelligence

[214] viXra:1610.0028 [pdf] submitted on 2016-10-03 13:53:10

A New Belief Entropy: Possible Generalization of Deng Entropy, Tsallis Entropy and Shannon Entropy

Authors: Bingyi Kang, Yong Deng
Comments: 15 Pages.

Shannon entropy is the mathematical foundation of information theory, Tsallis entropy is the root of nonextensive statistical mechanics, and Deng entropy was very recently proposed to measure the uncertainty degree of a belief function. In this paper, a new entropy H is proposed that generalizes Deng entropy, Tsallis entropy and Shannon entropy. The new entropy H degenerates to Deng entropy, Tsallis entropy or Shannon entropy under different conditions, and it also maintains the mathematical properties of those three entropies.
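The unified form of H is not given in the abstract. For reference, the three entropies it generalizes are commonly written as follows (standard definitions in LaTeX notation, not the paper's own):

H_S(p) = -\sum_i p_i \log p_i                                  (Shannon entropy of a probability distribution p)
S_q(p) = \frac{1}{q-1}\left(1 - \sum_i p_i^{q}\right)          (Tsallis entropy; recovers Shannon entropy as q \to 1)
E_d(m) = -\sum_A m(A)\,\log_2 \frac{m(A)}{2^{|A|}-1}           (Deng entropy of a basic probability assignment m with focal elements A)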
Category: Artificial Intelligence

[213] viXra:1609.0311 [pdf] submitted on 2016-09-21 07:22:05

Artificial Intelligence Discover New Materials

Authors: George Rajna
Comments: 20 Pages.

With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch-the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." [8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. [6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. 
The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.
Category: Artificial Intelligence

[212] viXra:1609.0238 [pdf] submitted on 2016-09-15 19:25:33

Revision on Fuzzy Artificial Potential Field for Humanoid Robot Path Planning in Unknown Environment

Authors: Mahdi Fakoor, Amirreza Kosari, Mohsen Jafarzadeh
Comments: 10 Pages.

Path planning in a completely known environment has been studied in various ways. However, in the real world, most humanoid robots work in unknown environments. Robot path planning by artificial potential field and fuzzy artificial potential field methods is very popular in the field of robotics navigation. However, humanoid robots typically lack range sensors; thus, traditional artificial potential field approaches need to be adapted to these limitations. This paper investigates two different approaches for path planning of a humanoid robot in an unknown environment using the fuzzy artificial potential (FAP) method. In the first approach, the direction of the moving robot is derived from a fuzzified artificial potential field, whereas in the second one, the direction of the robot is extracted from linguistic rules that are inspired by the artificial potential field. These two trajectory design approaches are validated through software- and hardware-in-the-loop simulations, and the experimental results demonstrate the superiority of the proposed approaches in humanoid robot real-time trajectory planning problems. (A minimal sketch of the classical potential field step that both approaches build on follows this entry.)
Category: Artificial Intelligence
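
The approaches above build on the classical artificial potential field idea, in which the robot follows the negative gradient of a potential that combines an attractive term toward the goal and repulsive terms around nearby obstacles; the fuzzy variants replace or shape those terms with linguistic rules. The following is a minimal sketch of the classical (non-fuzzy) potential field step only, not the paper's method; the gains k_att and k_rep, the influence radius d0, the step size and the 2-D setup are illustrative assumptions.

    import numpy as np

    def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
        """One classical artificial-potential-field step in 2-D.

        pos, goal: arrays of shape (2,); obstacles: iterable of shape-(2,) arrays.
        Moves a fixed step length along the negative gradient of the total potential.
        """
        # Attractive force: -grad(0.5 * k_att * ||pos - goal||^2) = k_att * (goal - pos)
        force = k_att * (goal - pos)
        for obs in obstacles:
            diff = pos - obs
            d = np.linalg.norm(diff)
            if 1e-9 < d < d0:
                # Repulsive force: -grad(0.5 * k_rep * (1/d - 1/d0)^2), pointing away from the obstacle
                force += k_rep * (1.0 / d - 1.0 / d0) * (1.0 / d**2) * (diff / d)
        norm = np.linalg.norm(force)
        return pos + step * force / norm if norm > 1e-9 else pos

    # Toy run: move from the origin toward a goal while skirting one obstacle.
    pos, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
    obstacles = [np.array([2.5, 2.0])]
    for _ in range(400):
        pos = apf_step(pos, goal, obstacles)
    print(pos)  # ends near the goal unless the robot is trapped in a local minimum

A known weakness of this plain scheme is getting trapped in local minima where the attractive and repulsive forces cancel, which is one motivation for the fuzzy rule-based variants studied in the paper.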

Replacements of recent Submissions

[53] viXra:1810.0345 [pdf] replaced on 2018-10-22 10:38:55

Cosmological Natural Selection AI

Authors: Jordan Micah Bennett
Comments: Author webpage: folioverse.appspot.com

This short paper presents a non-serious thought experiment, set within the scope of a serious hypothesis of mine regarding the scientific purpose of the human species, in tandem with Cosmological Natural Selection I (CNS I). It may therefore be considered an aside with respect to that serious hypothesis, while separately including some thinking in relation to CNS I.
Category: Artificial Intelligence

[52] viXra:1809.0364 [pdf] replaced on 2018-09-28 10:07:11

Idealistic Neural Networks

Authors: Tofara Moyo
Comments: 2 Pages.

I describe an Artificial Neural Network in which words are mapped to individual neurons instead of being fed into the network as input variables. Changing training cases is then equivalent to a Dropout procedure in which we replace some (or all) of the words/neurons of the previous training case with new ones. Each neuron/word takes as input all the b weights of the other neurons and weights them with its personal a weight. To learn, the network uses the backpropagation algorithm, computing an error at the output of a single output neuron, which is a traditional neuron. The network thus has a unique topology and functions with no inputs. We use coordinate gradient descent for learning, alternating between training the a weights of the words and the b weights. The Idealistic Neural Network is an extremely shallow network that can represent non-linear complexity in a linear form. (A hypothetical sketch of one reading of this construction follows this entry.)
Category: Artificial Intelligence
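
The abstract above leaves several details open (how the output neuron aggregates the word activations, the activation function, the loss), so the following is a hypothetical sketch of one possible reading rather than the author's implementation: each word carries scalar a and b weights, a word's activation is its a weight times the sum of the b weights of the other active words, a sigmoid output neuron sums these activations, and coordinate descent alternates between updating the a and the b weights. The squared-error loss, the sigmoid and the toy vocabulary are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["cat", "sat", "mat", "dog", "ran"]
    a = {w: rng.normal(scale=0.1) for w in vocab}   # personal "a" weight of each word/neuron
    b = {w: rng.normal(scale=0.1) for w in vocab}   # personal "b" weight of each word/neuron

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(words):
        # Assumed reading: a word's activation is its a weight times the b weights of the other active words.
        acts = {w: a[w] * sum(b[v] for v in words if v != w) for w in words}
        return sigmoid(sum(acts.values()))          # assumed traditional output neuron

    def train_step(words, target, lr=0.1, update="a"):
        # Coordinate descent: update either the a weights or the b weights of the active words.
        y = forward(words)
        dy = (y - target) * y * (1.0 - y)           # d(0.5 * squared error)/d(pre-activation)
        for w in words:
            if update == "a":
                a[w] -= lr * dy * sum(b[v] for v in words if v != w)
            else:
                b[w] -= lr * dy * sum(a[v] for v in words if v != w)

    # Changing the training case swaps the active words, akin to the dropout analogy in the abstract.
    for epoch in range(200):
        phase = "a" if epoch % 2 == 0 else "b"
        train_step(["cat", "sat", "mat"], target=1.0, update=phase)
        train_step(["dog", "ran"], target=0.0, update=phase)
    print(forward(["cat", "sat", "mat"]), forward(["dog", "ran"]))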

[51] viXra:1809.0190 [pdf] replaced on 2018-09-11 13:51:22

Thoughts About Thinking

Authors: Lev I. Verkhovsky
Comments: 12 Pages. The article in Russian

A geometric model illustrating the basic mechanisms of thinking -- logical and intuitive -- is proposed. Human thinking and the problems of creating artificial intelligence are discussed. Although the article was published in the Russian popular science journal «Chemistry and Life» in 1989 (No. 7), according to the author it is not obsolete. In Russian.
Category: Artificial Intelligence

[50] viXra:1808.0589 [pdf] replaced on 2018-09-17 09:47:30

Minimal and Maximal Models in Reinforcement Learning

Authors: Dimiter Dobrev
Comments: 11 Pages.

Each test gives us one property, which we will denote as the test result. The extension of that property we will denote as the test property. This raises the question about the nature of that property: can it be a property of the state of the world? The answer is both yes and no. For a random model of the world the answer is negative, but if we look at the maximal model of the world the answer flips to positive. There can be various models of the world. The minimal model knows only the indispensable minimum about the past and the future. Conversely, the maximal model knows everything about the past and the future of the world. If you threw a die, the maximal model would know which side would land face up, and it would even know what you will do, for example whether you will throw the die at all.
Category: Artificial Intelligence

[49] viXra:1805.0214 [pdf] replaced on 2018-05-31 04:56:00

AI Should Not Be an Open Source Project

Authors: Dimiter Dobrev
Comments: 9 Pages.

Who should own Artificial Intelligence technology? It should belong to everyone; properly speaking, not the technology per se, but the fruits that can be reaped from it. Obviously, we should not let AI end up in the hands of irresponsible persons. Likewise, nuclear technology should benefit all; however, it should be kept secret and inaccessible to the public at large.
Category: Artificial Intelligence

[48] viXra:1801.0271 [pdf] replaced on 2018-01-22 21:08:14

Refutation: Neutrosophic Logic by Florentin Smarandache as Generalized Intuitionistic, Fuzzy Logic © 2018 by Colin James III All Rights Reserved.

Authors: Colin James III
Comments: 2 Pages. © 2018 by Colin James III All rights reserved.

We map the neutrosophic logical values of truth, falsity, and indeterminacy on the intervals "]0,1[" and "]-0,1+[" in equations for the Meth8/VL4 apparatus. We test the summation of those values. The result is not tautologous, meaning neutrosophic logic is refuted and hence its use as a generalization of intuitionistic, fuzzy logic is likewise unworkable.
Category: Artificial Intelligence

[47] viXra:1712.0071 [pdf] replaced on 2018-04-26 10:08:54

The IQ of Artificial Intelligence

Authors: Dimiter Dobrev
Comments: 24 Pages. Serdica Journal of Computing

All it takes to identify the computer programs which are Artificial Intelligence is to give them a test and award AI status to those that pass. Let us say that the scores they earn on the test will be called IQ. We cannot pinpoint a minimum IQ threshold that a program has to reach in order to be AI; however, we will choose a certain value. Thus, our definition of AI will be any program whose IQ is above the chosen value. While this idea has already been implemented in [3], here we revisit this construct in order to introduce certain improvements. (A toy sketch of this test-and-threshold scheme follows this entry.)
Category: Artificial Intelligence
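
As a toy illustration of the test-and-threshold scheme described above (not the paper's actual test), the sketch below scores a candidate program on a small battery of sequence-prediction "worlds", averages the scores into an IQ, and calls the program AI if that IQ exceeds a chosen value. The worlds, the scoring rule and the threshold 0.7 are all assumptions made for illustration.

    AI_IQ_THRESHOLD = 0.7   # the "chosen value" of the definition; arbitrary here

    # Each toy "world" is just a sequence; a program must predict the next element
    # from the history seen so far, and its score is the fraction it predicts correctly.
    TEST_WORLDS = [
        [0, 1, 0, 1, 0, 1, 0, 1],
        [1, 1, 1, 1, 1, 1, 1, 1],
        [0, 0, 1, 0, 0, 1, 0, 0, 1],
    ]

    def score(program, world):
        hits = sum(program(world[:t]) == world[t] for t in range(1, len(world)))
        return hits / (len(world) - 1)

    def iq(program):
        return sum(score(program, w) for w in TEST_WORLDS) / len(TEST_WORLDS)

    def is_ai(program):
        return iq(program) > AI_IQ_THRESHOLD

    def repeat_last(history):
        return history[-1]                      # trivial candidate: repeat the previous symbol

    print(round(iq(repeat_last), 3), is_ai(repeat_last))   # 0.458 False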

[46] viXra:1712.0071 [pdf] replaced on 2018-01-28 06:21:06

The Intelligence Quotient of the Artificial Intelligence

Authors: Dimiter Dobrev
Comments: 27 Pages. Bulgarian. Serdica Journal of Computing

To say which programs are AI, it is enough to run an exam and recognize as AI those programs that pass it. The exam grade will be called IQ. We cannot say exactly how high the IQ has to be for a program to be AI, but we will choose a specific value. So our definition of AI will be any program whose IQ is above this specific value. This idea has already been realized in [3], but here we repeat the construction while bringing in some improvements.
Category: Artificial Intelligence

[45] viXra:1711.0265 [pdf] replaced on 2017-11-27 03:16:15

Revisit Fuzzy Neural Network: Bridging the Gap Between Fuzzy Logic and Deep Learning

Authors: Lixin Fan
Comments: 76 Pages.

This article aims to establish a concrete and fundamental connection between two important fields in artificial intelligence, i.e. deep learning and fuzzy logic. On the one hand, we hope this article will pave the way for fuzzy logic researchers to develop convincing applications and tackle challenging problems which are of interest to the machine learning community too. On the other hand, deep learning could benefit from the comparative research by re-examining many trial-and-error heuristics through the lens of fuzzy logic and, consequently, distilling the essential ingredients with rigorous foundations. Based on the new findings reported in [41] and this article, we believe the time is ripe to revisit the fuzzy neural network as a crucial bridge between two schools of AI research, i.e. symbolic versus connectionist [101], and eventually open the black box of artificial neural networks.
Category: Artificial Intelligence

[44] viXra:1711.0265 [pdf] replaced on 2017-11-17 16:28:38

Revisit Fuzzy Neural Network: Bridging the Gap Between Fuzzy Logic and Deep Learning

Authors: Lixin Fan
Comments: 76 Pages.

This article aims to establish a concrete and fundamental connection between two important fields in artificial intelligence, i.e. deep learning and fuzzy logic. On the one hand, we hope this article will pave the way for fuzzy logic researchers to develop convincing applications and tackle challenging problems which are of interest to the machine learning community too. On the other hand, deep learning could benefit from the comparative research by re-examining many trial-and-error heuristics through the lens of fuzzy logic and, consequently, distilling the essential ingredients with rigorous foundations. Based on the new findings reported in [38] and this article, we believe the time is ripe to revisit the fuzzy neural network as a crucial bridge between two schools of AI research, i.e. symbolic versus connectionist [93], and eventually open the black box of artificial neural networks.
Category: Artificial Intelligence

[43] viXra:1710.0324 [pdf] replaced on 2017-11-09 05:34:27

New Sufficient Conditions of Signal Recovery with Tight Frames Via $l_1$-Analysis

Authors: Jianwen Huang, Jianjun Wang, Feng Zhang, Wendong Wang
Comments: 18 Pages.

The paper discusses the recovery of signals that are nearly sparse with respect to a tight frame $D$ by means of the $l_1$-analysis approach. We establish several new sufficient conditions regarding the $D$-restricted isometry property to ensure stable reconstruction of signals that are approximately sparse with respect to $D$. It is shown that if the measurement matrix $\Phi$ fulfils the condition $\delta_{ts}<t/(4-t)$ for $0<t<4/3$, then signals which are approximately sparse with respect to $D$ can be stably recovered by the $l_1$-analysis method. In the case of $D=I$, the bound is sharp; see Cai and Zhang's work \cite{Cai and Zhang 2014}. When $t=1$, the present bound improves the condition $\delta_s<0.307$ from Lin et al.'s result to $\delta_s<1/3$. In addition, numerical simulations are conducted to indicate that the $l_1$-analysis method can stably reconstruct the sparse signal in terms of tight frames. (The standard $l_1$-analysis program is recalled after this entry.)
Category: Artificial Intelligence
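
For context, the $l_1$-analysis method referred to above is usually posed as the following convex program for recovering $f$ from noisy measurements $y=\Phi f+z$ with $\|z\|_2\le\varepsilon$; this is the standard formulation rather than the paper's contribution, shown here together with the sufficient condition stated in the abstract:

    \hat{f} \;=\; \operatorname*{arg\,min}_{f}\; \| D^{*} f \|_{1}
    \quad \text{subject to} \quad \| \Phi f - y \|_{2} \le \varepsilon ,
    \qquad \text{with the sufficient condition} \quad
    \delta_{ts} < \frac{t}{4-t}, \;\; 0 < t < \tfrac{4}{3}.

Here $D^{*}$ denotes the analysis operator of the tight frame and $\delta_{ts}$ the $D$-restricted isometry constant of $\Phi$; at $t=1$ the bound reads $\delta_s<1/3$, matching the improvement over $\delta_s<0.307$ claimed in the abstract.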

[42] viXra:1709.0108 [pdf] replaced on 2017-09-10 08:24:10

A New Semantic Theory of Natural Language

Authors: Kun Xing
Comments: 70 Pages.

Formal Semantics and Distributional Semantics are two important semantic frameworks in Natural Language Processing (NLP). Cognitive Semantics belongs to the movement of Cognitive Linguistics, which is based on contemporary cognitive science. Each framework could deal with some meaning phenomena, but none of them fulfills all requirements proposed by applications. A unified semantic theory characterizing all important language phenomena has both theoretical and practical significance; however, although many attempts have been made in recent years, no existing theory has achieved this goal yet. This article introduces a new semantic theory that has the potential to characterize most of the important meaning phenomena of natural language and to fulfill most of the necessary requirements for philosophical analysis and for NLP applications. The theory is based on a unified representation of information, and constructs a kind of mathematical model called a cognitive model to interpret natural language expressions in a compositional manner. It accepts the empirical assumption of Cognitive Semantics, and overcomes most shortcomings of Formal Semantics and of Distributional Semantics. The theory, however, is not a simple combination of existing theories, but an extensive generalization of classic logic and Formal Semantics. It inherits nearly all advantages of Formal Semantics, and also provides descriptive contents for objects and events that are as fine-grained as possible, contents which represent the results of human cognition.
Category: Artificial Intelligence

[41] viXra:1611.0211 [pdf] replaced on 2016-12-01 04:59:33

A Variable Order Hidden Markov Model with Dependence Jumps

Authors: Anastasios Petropoulos, Stelios Xanthopoulos, Sotirios P. Chatzis
Comments: 33 Pages.

Hidden Markov models (HMMs) are a popular approach for modeling sequential data, typically based on the assumption of a first- or moderate-order Markov chain. However, in many real-world scenarios the modeled data entail temporal dynamics the patterns of which change over time. In this paper, we address this problem by proposing a novel HMM formulation, treating temporal dependencies as latent variables over which inference is performed. Specifically, we introduce a hierarchical graphical model comprising two hidden layers: on the first layer, we postulate a chain of latent observation-emitting states, the temporal dependencies between which may change over time; on the second layer, we postulate a latent first-order Markov chain modeling the evolution of temporal dynamics (dependence jumps) pertaining to the first-layer latent process. As a result of this construction, our method allows for effectively modeling non-homogeneous observed data, where the patterns of the entailed temporal dynamics may change over time. We devise efficient training and inference algorithms for our model, following the expectation-maximization paradigm. We demonstrate the efficacy and usefulness of our approach considering several real-world datasets. As we show, our model allows for increased modeling and predictive performance compared to the alternative methods, while offering a good trade-off between the resulting increases in predictive performance and computational complexity. (A toy generative sketch of one reading of this two-layer construction follows this entry.)
Category: Artificial Intelligence
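
As a rough illustration of the two-layer construction described above, the sketch below samples from one plausible generative reading: a second-layer first-order Markov chain chooses, at each step, which temporal dependence (lag) the first-layer state uses, and the first-layer state then emits the observation. All sizes, the randomly drawn matrices and the precise dependence mechanism are assumptions for illustration; the paper's actual model, and its EM-based training and inference, are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)
    K_STATES, K_LAGS, K_OBS = 3, 2, 4    # first-layer states, number of possible lags, observation symbols

    # One first-layer transition matrix per lag, an emission matrix, and the
    # second-layer chain over lags ("dependence jumps"); all drawn at random here.
    A = [rng.dirichlet(np.ones(K_STATES), size=K_STATES) for _ in range(K_LAGS)]
    B = rng.dirichlet(np.ones(K_OBS), size=K_STATES)
    C = rng.dirichlet(np.ones(K_LAGS), size=K_LAGS)

    def sample(T=20):
        states, lags = [rng.integers(K_STATES)], [0]
        obs = [rng.choice(K_OBS, p=B[states[0]])]
        for t in range(1, T):
            lag = rng.choice(K_LAGS, p=C[lags[-1]])      # second layer picks the temporal dependence
            parent = states[max(t - 1 - lag, 0)]         # first layer conditions on the state lag+1 steps back
            state = rng.choice(K_STATES, p=A[lag][parent])
            obs.append(rng.choice(K_OBS, p=B[state]))
            states.append(state)
            lags.append(lag)
        return states, lags, obs

    states, lags, obs = sample()
    print(obs)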

[40] viXra:1611.0211 [pdf] replaced on 2016-11-14 08:01:26

A Variable Order Hidden Markov Model with Dependence Jumps

Authors: Anastasios Petropoulos, Stelios Xanthopoulos, Sotirios P. Chatzis
Comments: 15 Pages.

Hidden Markov models (HMMs) are a popular approach for modeling sequential data, typically based on the assumption of a first- or moderate-order Markov chain. However, in many real-world scenarios the modeled data entail temporal dynamics the patterns of which change over time. In this paper, we address this problem by proposing a novel HMM formulation, treating temporal dependencies as latent variables over which inference is performed. Specifically, we introduce a hierarchical graphical model comprising two hidden layers: on the first layer, we postulate a chain of latent observation-emitting states, the temporal dependencies between which may change over time; on the second layer, we postulate a latent first-order Markov chain modeling the evolution of temporal dynamics (dependence jumps) pertaining to the first-layer latent process. As a result of this construction, our method allows for effectively modeling non-homogeneous observed data, where the patterns of the entailed temporal dynamics may change over time. We devise efficient training and inference algorithms for our model, following the expectation-maximization paradigm. We demonstrate the efficacy and usefulness of our approach considering several real-world datasets. As we show, our model allows for increased modeling and predictive performance compared to the alternative methods, while offering a good trade-off between the resulting increases in predictive performance and computational complexity.
Category: Artificial Intelligence

[39] viXra:1611.0211 [pdf] replaced on 2016-11-14 04:26:58

A Variable Order Hidden Markov Model with Dependence Jumps

Authors: Anastasios Petropoulos, Stelios Xanthopoulos, Sotirios P. Chatzis
Comments: 15 Pages.

Hidden Markov models (HMMs) are a popular approach for modeling sequential data, typically based on the assumption of a first- or moderate-order Markov chain. However, in many real-world scenarios the modeled data entail temporal dynamics the patterns of which change over time. In this paper, we address this problem by proposing a novel HMM formulation, treating temporal dependencies as latent variables over which inference is performed. Specifically, we introduce a hierarchical graphical model comprising two hidden layers: on the first layer, we postulate a chain of latent observation-emitting states, the temporal dependencies between which may change over time; on the second layer, we postulate a latent first-order Markov chain modeling the evolution of temporal dynamics (dependence jumps) pertaining to the first-layer latent process. As a result of this construction, our method allows for effectively modeling non-homogeneous observed data, where the patterns of the entailed temporal dynamics may change over time. We devise efficient training and inference algorithms for our model, following the expectation-maximization paradigm. We demonstrate the efficacy and usefulness of our approach considering several real-world datasets. As we show, our model allows for increased modeling and predictive performance compared to the alternative methods, while offering a good trade-off between the resulting increases in predictive performance and computational complexity.
Category: Artificial Intelligence

[38] viXra:1610.0029 [pdf] replaced on 2016-10-16 08:51:52

Associative Broadcast Neural Network

Authors: Aleksei Morozov
Comments: 5 Pages.

The associative broadcast neural network (aka Ether Neural Network) is an artificial neural network inspired by the hypothesis that a neuron broadcasts its output pattern within a biological neural network. A neuron has wire connections and ether connections. Ether connections are electrical. Wire connections provide recognition functionality; ether connections provide association functionality.
Category: Artificial Intelligence