Artificial Intelligence

Previous months:
2007 - 0703(1)
2010 - 1003(33) - 1004(9) - 1005(5) - 1008(2) - 1009(1) - 1010(1) - 1012(1)
2011 - 1101(2) - 1106(1) - 1107(1) - 1109(2)
2012 - 1201(1) - 1204(3) - 1206(2) - 1207(6) - 1208(7) - 1209(1) - 1210(4) - 1211(2)
2013 - 1301(5) - 1302(2) - 1303(6) - 1304(9) - 1305(1) - 1308(1) - 1309(8) - 1310(7) - 1311(1) - 1312(4)
2014 - 1404(2) - 1405(3) - 1406(1) - 1408(5) - 1410(1) - 1411(1) - 1412(1)
2015 - 1501(1) - 1502(3) - 1503(6) - 1504(3) - 1506(5) - 1507(4) - 1508(1) - 1509(4) - 1510(2) - 1511(4) - 1512(1)
2016 - 1601(1) - 1602(10) - 1603(2) - 1605(4) - 1606(6) - 1607(5) - 1608(7) - 1609(5) - 1610(12) - 1611(14) - 1612(10)
2017 - 1701(4) - 1702(9) - 1703(5) - 1704(9) - 1705(10) - 1706(14) - 1707(24) - 1708(19) - 1709(20) - 1710(14) - 1711(21) - 1712(16)
2018 - 1801(14) - 1802(5) - 1803(16) - 1804(17) - 1805(27) - 1806(22) - 1807(35) - 1808(35) - 1809(17) - 1810(28) - 1811(26) - 1812(27)
2019 - 1901(35) - 1902(31) - 1903(46) - 1904(29) - 1905(12)

Recent submissions

Any replacements are listed farther down

[835] viXra:1905.0271 [pdf] submitted on 2019-05-17 11:28:39

Neural Networks Study Dark Matter

Authors: George Rajna
Comments: 48 Pages.

As cosmologists and astrophysicists delve deeper into the darkest recesses of the universe, their need for increasingly powerful observational and computational tools has expanded exponentially.
Category: Artificial Intelligence

[834] viXra:1905.0266 [pdf] submitted on 2019-05-18 02:10:45

Machine Learning of Fusion Energy

Authors: George Rajna
Comments: 34 Pages.

Researchers at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) are using machine learning (ML) to create a model for rapid control of plasma (the state of matter composed of free electrons and atomic nuclei, or ions) that fuels fusion reactions. [22] Machine learning can be used to predict the properties of a group of materials which, according to some, could be as important to the 21st century as plastics were to the 20th. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm, called MPLasso, that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses (so-called retrosyntheses) with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extracts from them the essential information needed to understand the underlying physics. [18]
Category: Artificial Intelligence

[833] viXra:1905.0261 [pdf] submitted on 2019-05-16 06:42:05

AI Teaches Us About Proteins

Authors: George Rajna
Comments: 45 Pages.

Researchers in Berlin and Heidelberg have now developed an intelligent neural network that can predict the functions of proteins in the human body. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23]
Category: Artificial Intelligence

[832] viXra:1905.0258 [pdf] submitted on 2019-05-16 07:01:34

Machine Learning of Porous Materials

Authors: George Rajna
Comments: 33 Pages.

Machine learning can be used to predict the properties of a group of materials which, according to some, could be as important to the 21st century as plastics were to the 20th. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm, called MPLasso, that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses (so-called retrosyntheses) with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extracts from them the essential information needed to understand the underlying physics. [18]
Category: Artificial Intelligence

[831] viXra:1905.0257 [pdf] submitted on 2019-05-16 07:23:12

Facial Recognition Fears

Authors: George Rajna
Comments: 50 Pages.

A ban on facial recognition for law enforcement in San Francisco highlights growing public concerns about technology which is seeing stunning growth for an array of applications while provoking worries over privacy. [26] Researchers in Berlin and Heidelberg have now developed an intelligent neural network that can predict the functions of proteins in the human body. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24]
Category: Artificial Intelligence

[830] viXra:1905.0198 [pdf] submitted on 2019-05-13 11:20:48

Can AI Decide the Next Election?

Authors: George Rajna
Comments: 46 Pages.

If trust in our politicians is at an all-time low, maybe it's time to reconsider how we elect them in the first place. Can artificial intelligence (AI) help with our voting decisions? [27] In January 2017, IBM made the bold statement that within five years, health professionals could apply AI to better understand how words and speech paint a clear window into our mental health. [26]
Category: Artificial Intelligence

[829] viXra:1905.0168 [pdf] submitted on 2019-05-11 06:49:56

AI Develops Human-Like Number Sense

Authors: George Rajna
Comments: 33 Pages.

All the successful computational approaches to detecting objects in images work by building up a kind of statistical picture of an object from many individual examples – a type of learning. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19]
Category: Artificial Intelligence

[828] viXra:1905.0147 [pdf] submitted on 2019-05-09 12:18:21

Neural Network on Single Chip

Authors: George Rajna
Comments: 49 Pages.

A team of researchers from the University of Münster, the University of Oxford and the University of Exeter has built an all-optical neural network on a single chip. [28] Physicists from Petrozavodsk State University have proposed a new method for oscillatory neural network to recognize simple images. Such networks with an adjustable synchronous state of individual neurons have, presumably, dynamics similar to neurons in the living brain. [27] Navid Borhani, a research-team member, says this machine learning approach is much simpler than other methods to reconstruct images passed through optical fibers, which require making a holographic measurement of the output. [26]
Category: Artificial Intelligence

[827] viXra:1905.0080 [pdf] submitted on 2019-05-06 01:08:11

A Fast Algorithm for Network Forecasting Time Series

Authors: Fan Liu, Yong Deng
Comments: 7 Pages.

Time series have a wide range of applications in various fields. Recently, a new mathematical tool, known as the visibility graph, has been developed to transform a time series into a complex network. One shortcoming of existing network-based time series prediction methods is that they are time-consuming. To address this issue, this paper proposes a new prediction algorithm based on the visibility graph and Markov chains. In existing network-based time series prediction methods, the main step is to determine the similarity degree between two nodes using a link prediction algorithm. Here, a new similarity measure between two nodes is presented that avoids the iteration process of classical link prediction algorithms. Prediction of the Construction Cost Index (CCI) shows that the proposed method achieves better accuracy with less computation time.
Category: Artificial Intelligence
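
The abstract above describes a general pipeline (time series to visibility graph to node similarity to forecast) but not the full algorithm. The sketch below implements only the standard natural visibility graph construction plus a toy common-neighbour forecast; the paper's Markov-chain step and its actual similarity measure are not reproduced, and the example series is invented for illustration.

```python
"""Natural visibility graph construction plus a toy network-based one-step
forecast. This is NOT the algorithm of viXra:1905.0080; it only illustrates
the pipeline the abstract describes: series -> graph -> similarity -> forecast."""
import numpy as np

def visibility_graph(series):
    """Return the adjacency dict of the natural visibility graph of a 1-D series."""
    n = len(series)
    adj = {i: set() for i in range(n)}
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            # (a, ya) and (b, yb) "see" each other if every sample between
            # them lies strictly below the straight line joining them.
            visible = all(
                series[c] < yb + (ya - yb) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                adj[a].add(b)
                adj[b].add(a)
    return adj

def predict_next(series):
    """Toy forecast: reuse the increment that followed the past node whose
    neighbourhood overlaps most with the last node's neighbourhood."""
    adj = visibility_graph(series)
    last = len(series) - 1
    best, best_sim = 0, -1
    for i in range(last - 1):              # candidate historical nodes
        sim = len(adj[i] & adj[last])      # shared visibility links
        if sim > best_sim:
            best, best_sim = i, sim
    return series[-1] + (series[best + 1] - series[best])

if __name__ == "__main__":
    t = np.linspace(0, 6 * np.pi, 80)
    s = np.sin(t) + 0.05 * np.random.randn(80)
    print("one-step forecast:", predict_next(list(s)))
```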

[826] viXra:1905.0058 [pdf] submitted on 2019-05-05 03:18:10

An Insight into [Erlang – Java Interface – JikesRVM (Research Virtual Machine) – YANNI] Based Informatics Platform for Telecom R&D.

Authors: Nirmal Tej Kumar
Comments: 4 Pages. Short Communication

An Insight into [Erlang – Java interface -JikesRVM(Research Virtual Machine) – YANNI] based Informatics Platform for Telecom R&D. [ Erlang/OTP/Hardware/Software/Firmware based Co-Design of Intelligent Telecommunication Systems ]
Category: Artificial Intelligence

[825] viXra:1905.0049 [pdf] submitted on 2019-05-03 10:53:37

Artificial Neural Networks Brain Activity

Authors: George Rajna
Comments: 58 Pages.

MIT neuroscientists have performed the most rigorous testing yet of computational models that mimic the brain's visual cortex. [31] For people with hearing loss, it can be very difficult to understand and separate voices in noisy environments. This problem may soon be history thanks to a new groundbreaking algorithm that is designed to recognise and separate voices efficiently in unknown sound environments. [30]
Category: Artificial Intelligence

[824] viXra:1905.0044 [pdf] submitted on 2019-05-02 11:23:13

Machine Learning Quantum Sensing

Authors: George Rajna
Comments: 43 Pages.

Researchers at the University of Bristol have reached new heights of sophistication in detecting magnetic fields with extreme sensitivity at room temperature by combining machine learning with a quantum sensor. [25] Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning-a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data-with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[823] viXra:1904.0581 [pdf] submitted on 2019-04-29 06:51:40

Hybrid Machine Learning Tool

Authors: George Rajna
Comments: 57 Pages.

Researchers have developed a hybrid predictive model to improve the identification of metastatic lymph nodes in individuals with head-and-neck cancer. [33] "Collaborations such as this between pathologists and data scientists are essential to advancing the field of oncology. Our expertise is disparate, which perhaps is exactly why it can be so powerfully complementary," Tafe says. [32] As part of a team of scientists from IBM and New York University, my colleagues and I are looking at new ways AI could be used to help ophthalmologists and optometrists further utilize eye images, and potentially help to speed the process for detecting glaucoma in images. [31] A team of EPFL scientists has now written a machine-learning program that can predict, in record time, how atoms will respond to an applied magnetic field. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes-he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning-and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23]
Category: Artificial Intelligence

[822] viXra:1904.0502 [pdf] submitted on 2019-04-25 08:35:37

Machine Learning Ready to Shine

Authors: George Rajna
Comments: 51 Pages.

Machine learning and automation technologies are gearing up to transform the radiation-therapy workflow while freeing specialist clinical and technical staff to dedicate more time to patient care. [27] Navid Borhani, a research-team member, says this machine learning approach is much simpler than other methods to reconstruct images passed through optical fibers, which require making a holographic measurement of the output. [26]
Category: Artificial Intelligence

[821] viXra:1904.0488 [pdf] submitted on 2019-04-26 02:43:56

The Transferable Complex Belief Model

Authors: Fuyuan Xiao
Comments: 2 Pages.

We describe the transferable complex belief model, a model for representing quantified beliefs based on a newly defined complex belief function. The relation between the complex belief function and the probability function is derived when decisions must be made.
Category: Artificial Intelligence
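
For background, the classical transferable belief model makes decisions by mapping a basic belief assignment to probabilities via the pignistic transformation. The sketch below implements only that classical step; the complex belief function and the decision rule proposed in this entry are not reproduced here.

```python
"""Pignistic transformation of the classical transferable belief model
(Smets & Kennes). Shown only as background for viXra:1904.0488; the complex
belief function defined there is not implemented."""

def pignistic(bba):
    """Map a basic belief assignment {frozenset: mass} to probabilities over
    singletons: BetP(x) = sum_{A: x in A} m(A) / (|A| * (1 - m(emptyset)))."""
    empty_mass = bba.get(frozenset(), 0.0)
    betp = {}
    for focal, mass in bba.items():
        if not focal:                      # skip the empty set
            continue
        share = mass / (len(focal) * (1.0 - empty_mass))
        for x in focal:
            betp[x] = betp.get(x, 0.0) + share
    return betp

if __name__ == "__main__":
    # Frame {a, b, c}; masses on a singleton, a pair, and total ignorance.
    m = {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.3,
         frozenset({"a", "b", "c"}): 0.2}
    print(pignistic(m))   # roughly {'a': 0.717, 'b': 0.217, 'c': 0.067}
```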

[820] viXra:1904.0487 [pdf] submitted on 2019-04-26 03:56:19

AI/ML/DL Based Python Software + Orlik–Solomon [OS] Algebra Python Program to Probe Electron Microscopy [EM] Images Towards a Better Image Processing & Informatics Framework – A Novel Suggestion & Design Approach for Testing EM Image Processing Frameworks I

Authors: Nirmal Tej Kumar
Comments: 4 Pages. Short Communication

AI/ML/DL Based Python Software + Orlik Solomon[OS] Algebra Python Program to probe Electron Microscopy[EM] Images towards a better Image Processing & Informatics Framework – A Novel Suggestion & Design Approach for Testing EM Image Processing Frameworks in the context of Hyper-plane Arrangement/s.
Category: Artificial Intelligence

[819] viXra:1904.0486 [pdf] submitted on 2019-04-26 04:59:43

Deep Learning Lung Cancer

Authors: George Rajna
Comments: 55 Pages.

“Collaborations such as this between pathologists and data scientists are essential to advancing the field of oncology. Our expertise is disparate, which perhaps is exactly why it can be so powerfully complementary,” Tafe says. [32] As part of a team of scientists from IBM and New York University, my colleagues and I are looking at new ways AI could be used to help ophthalmologists and optometrists further utilize eye images, and potentially help to speed the process for detecting glaucoma in images. [31] A team of EPFL scientists has now written a machine-learning program that can predict, in record time, how atoms will respond to an applied magnetic field. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes-he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning-and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22]
Category: Artificial Intelligence

[818] viXra:1904.0476 [pdf] submitted on 2019-04-24 08:45:11

Inverse Reinforcement Training Conditioned on Brain Scan

Authors: Tofara Moyo
Comments: 3 Pages.

I outline a way for an agent to learn the dispositions of a particular individual through inverse reinforcement learning, where the state space at time t includes an fMRI scan of the individual to represent his brain state at that time. The fundamental assumption is that the information shown on an fMRI scan of an individual is conditioned on his thoughts and thought processes. The system models both long- and short-term memory, as well as any internal dynamics of the human brain that we may not be aware of. The human expert will wear a sensor suit for a set duration, and the sensor readings will be used to train a policy network, while a generative model will be trained to produce the next fMRI scan image conditioned on the present one and the state of the environment. During operation, the humanoid robot's actions will be conditioned on this evolving fMRI and the environment it is in.
Category: Artificial Intelligence
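
As a rough illustration of the state construction described above, the sketch below conditions a policy on the concatenation of a toy fMRI-volume embedding and an environment observation. The encoder, weights, array sizes and random data are placeholder assumptions; the inverse reinforcement learning procedure and the generative next-scan model are not implemented.

```python
"""Minimal sketch of a policy conditioned on an fMRI volume plus an
environment observation, in the spirit of viXra:1904.0476. Everything here
is a placeholder: the scan is random data and the weights are untrained."""
import numpy as np

def embed_scan(scan, dim=32, seed=1):
    """Toy scan encoder: flatten the volume and project it with a fixed
    random matrix (a stand-in for a learned encoder)."""
    flat = scan.ravel()
    proj = np.random.default_rng(seed).standard_normal((dim, flat.size)) / np.sqrt(flat.size)
    return proj @ flat

def policy(scan, env_obs, n_actions=4, seed=2):
    """Linear softmax policy over the joint (scan embedding, observation) state."""
    state = np.concatenate([embed_scan(scan), env_obs])
    w = np.random.default_rng(seed).standard_normal((n_actions, state.size)) * 0.01
    logits = w @ state
    p = np.exp(logits - logits.max())
    return p / p.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fmri = rng.standard_normal((8, 8, 8))   # stand-in for an fMRI volume
    obs = rng.standard_normal(10)           # stand-in for environment sensors
    print("action probabilities:", policy(fmri, obs))
```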

[817] viXra:1904.0475 [pdf] submitted on 2019-04-24 08:48:25

Imprinting Memory and Experiences on Noise

Authors: Tofara Moyo
Comments: 2 Pages.

We outline a method for imprinting the memory and collective experience of an agent, or, in a chatbot, the previous words said, in what order, and by whom, onto a noise vector. We are given a training corpus of conversations. A feed-forward neural network takes as input a noise vector together with the current word. At the start of the conversation we input the zero vector, with the dimensions of the noise vector we will use, together with the first word found in the conversation. This outputs another noise vector and the one-hot encoded next word. We then feed this output pair back into the network to predict the next word. During the learning process we do not alter the predicted noise vectors, but the error function is parameterized by the word vector. This marries the two progressions together: that of the noise, and that of the structure of the conversation, i.e. which word ought to follow which in a sensible sentence. During operation we input the zero vector and the first word, then feed the output pair back into the network to predict the next word to be said. When a live response is given to the chatbot, the next word is not predicted, but we still obtain another noise vector to feed back in with the next word, until an EOU (end of utterance) token vector is input alongside a noise vector. The system then begins to predict the words to say, as before.
Category: Artificial Intelligence
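
The sketch below is a forward-pass illustration of the architecture described above: a feed-forward network maps a (noise vector, current word) pair to a (next noise vector, next word) pair, starting from the zero noise vector. The vocabulary, layer sizes and weights are invented placeholders, and the paper's training rule (an error function parameterized by the word vector) is not implemented.

```python
"""Forward-pass sketch of the (noise, word) -> (noise, word) network described
in viXra:1904.0475. Weights are random and untrained."""
import numpy as np

VOCAB = ["<EOU>", "hello", "how", "are", "you"]
V, NOISE_DIM, HIDDEN = len(VOCAB), 8, 16
rng = np.random.default_rng(0)

# One hidden layer; two output heads: next noise vector and next-word logits.
W1 = rng.standard_normal((HIDDEN, NOISE_DIM + V)) * 0.1
W_noise = rng.standard_normal((NOISE_DIM, HIDDEN)) * 0.1
W_word = rng.standard_normal((V, HIDDEN)) * 0.1

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

def step(noise, word_idx):
    """One network step: returns (next noise vector, predicted next word index)."""
    h = np.tanh(W1 @ np.concatenate([noise, one_hot(word_idx)]))
    return W_noise @ h, int(np.argmax(W_word @ h))

if __name__ == "__main__":
    noise = np.zeros(NOISE_DIM)        # the conversation starts from the zero vector
    word = VOCAB.index("hello")
    for _ in range(4):                 # unroll a few "generation" steps
        noise, word = step(noise, word)
        print(VOCAB[word])
```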

[816] viXra:1904.0474 [pdf] submitted on 2019-04-24 08:50:57

Dynamic Cascade Neural Network

Authors: Tofara Moyo
Comments: 2 Pages.

We introduce a type of cascade correlation network to predict the next word in a sentence, in which the hidden layer is connected so as to represent the topology of the conversation up to that point. There is one input neuron that is fed the current word, and one output neuron that outputs the predicted next word in the sentence. During training, we are given a set of conversations, so we know the input-output pairs to train with and the topology of the preceding parts of the conversation associated with each pair. We build the hidden layer so that it is isomorphic to the preceding parts of the conversation at the point where that input-output pair occurs. During operation we build the hidden layer of the network at each input word, again so that it is isomorphic to the preceding parts of the conversation up to that point. Training makes the probability of the (n+1)-th output word conditionally dependent on the previous words in that order, and not just on the n-th word, conditioned on the training conversations, giving the chatbot the ability to talk consistently on topics and themes found within the training conversations.
Category: Artificial Intelligence

[815] viXra:1904.0473 [pdf] submitted on 2019-04-24 09:25:57

People Make AI More Powerful

Authors: George Rajna
Comments: 46 Pages.

But as the desire to use AI for more scenarios has grown, Microsoft scientists and product developers have pioneered a complementary approach called machine teaching. [26] A commercial artificial intelligence (AI) system matched the accuracy of over 28,000 interpretations of breast cancer screening mammograms by 101 radiologists. [25] Social, economic, environmental and health inequalities within cities can be detected using street imagery. [24]
Category: Artificial Intelligence

[814] viXra:1904.0439 [pdf] submitted on 2019-04-22 08:44:10

AI Versus 101 Radiologists

Authors: George Rajna
Comments: 41 Pages.

A commercial artificial intelligence (AI) system matched the accuracy of over 28,000 interpretations of breast cancer screening mammograms by 101 radiologists. [25] Social, economic, environmental and health inequalities within cities can be detected using street imagery. [24] Citizen science is a boon for researchers, providing reams of data about everything from animal species to distant galaxies. [23] In early 2018, with support from IBM Corporate Citizenship and the Danish Ministry for Foreign Affairs, IBM and the Danish Refugee Council (DRC) embarked on a partnership aimed squarely at the need to better understand migration drivers and evidence-based policy guidance for a range of stakeholders. [22]
Category: Artificial Intelligence

[813] viXra:1904.0429 [pdf] submitted on 2019-04-22 20:38:36

MidcurveNN: Encoder-Decoder Neural Network for Computing Midcurve of a Thin Polygon

Authors: Yogesh H. Kulkarni
Comments: 3 Pages.

Various applications need lower-dimensional representations of shapes. A midcurve is a one-dimensional (1D) representation of a two-dimensional (2D) planar shape. It is used in applications such as animation, shape matching, retrieval, finite element analysis, etc. Methods available to compute midcurves vary based on the type of the input shape (images, sketches, etc.) and the processing used (thinning, Medial Axis Transform (MAT), Chordal Axis Transform (CAT), Straight Skeletons, etc.). This paper presents a novel method called MidcurveNN, which uses an encoder-decoder neural network to compute the midcurve from images of 2D thin polygons in a supervised learning manner. This dimension-reduction transformation from an input 2D thin polygon image to an output 1D midcurve image is learnt by the neural network, which can then be used to compute the midcurve of an unseen 2D thin polygonal shape.
Category: Artificial Intelligence
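
The sketch below illustrates the supervised image-to-image setup described above with a toy dense encoder-decoder trained on synthetic bar/centreline pairs. The raster size, architecture and data are assumptions made only for illustration and do not reproduce the MidcurveNN network or data set.

```python
"""Toy dense encoder-decoder in the spirit of MidcurveNN (viXra:1904.0429):
it learns a mapping from a 2-D polygon raster to its midcurve raster by
supervised regression on synthetic data."""
import numpy as np

SIDE = 16                                    # assumed toy raster size
rng = np.random.default_rng(0)

def make_pair():
    """Assumed toy data: a 3-pixel-thick horizontal bar (the 'thin polygon')
    and its one-pixel centreline (the 'midcurve')."""
    poly = np.zeros((SIDE, SIDE)); mid = np.zeros((SIDE, SIDE))
    row = rng.integers(2, SIDE - 3)
    poly[row - 1:row + 2, 2:SIDE - 2] = 1.0
    mid[row, 2:SIDE - 2] = 1.0
    return poly.ravel(), mid.ravel()

X, Y = map(np.array, zip(*[make_pair() for _ in range(200)]))

# One-hidden-layer encoder-decoder trained with plain gradient descent on MSE.
D, H, lr = SIDE * SIDE, 64, 0.05
W1 = rng.standard_normal((D, H)) * 0.05      # encoder weights
W2 = rng.standard_normal((H, D)) * 0.05      # decoder weights
for epoch in range(200):
    Z = np.tanh(X @ W1)                      # latent code
    out = Z @ W2                             # predicted midcurve image
    err = out - Y
    W2 -= lr * Z.T @ err / len(X)
    W1 -= lr * X.T @ ((err @ W2.T) * (1 - Z ** 2)) / len(X)

print("final MSE:", float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)))
```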

[812] viXra:1904.0403 [pdf] submitted on 2019-04-20 09:11:52

Deep Learning Image Analysis

Authors: George Rajna
Comments: 40 Pages.

Social, economic, environmental and health inequalities within cities can be detected using street imagery. [24] Citizen science is a boon for researchers, providing reams of data about everything from animal species to distant galaxies. [23] In early 2018, with support from IBM Corporate Citizenship and the Danish Ministry for Foreign Affairs, IBM and the Danish Refugee Council (DRC) embarked on a partnership aimed squarely at the need to better understand migration drivers and evidence-based policy guidance for a range of stakeholders. [22]
Category: Artificial Intelligence

[811] viXra:1904.0402 [pdf] submitted on 2019-04-20 09:30:09

Helpfulness of Machine Explanations

Authors: George Rajna
Comments: 43 Pages.

To address this gap in the existing literature, a team of researchers at SRI International has created a human-AI image guessing game inspired by the popular game 20 Questions (20Q), which can be used to evaluate the helpfulness of machine explanations. [25] Social, economic, environmental and health inequalities within cities can be detected using street imagery. [24] Citizen science is a boon for researchers, providing reams of data about everything from animal species to distant galaxies. [23]
Category: Artificial Intelligence

[810] viXra:1904.0397 [pdf] submitted on 2019-04-20 11:02:57

Neural Network Reads Scientific Papers

Authors: George Rajna
Comments: 46 Pages.

Now, a team of scientists at MIT and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two. [26] To address this gap in the existing literature, a team of researchers at SRI International has created a human-AI image guessing game inspired by the popular game 20 Questions (20Q), which can be used to evaluate the helpfulness of machine explanations. [25]
Category: Artificial Intelligence

[809] viXra:1904.0396 [pdf] submitted on 2019-04-20 11:32:24

Robots with Communication Skills

Authors: George Rajna
Comments: 49 Pages.

Researchers at the Okinawa Institute of Science and Technology have recently proposed a neurorobotics approach that could aid the development of robots with advanced communication capabilities. [27] Now, a team of scientists at MIT and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two. [26]
Category: Artificial Intelligence

[808] viXra:1904.0336 [pdf] submitted on 2019-04-18 01:21:20

AI for Fusion Energy

Authors: George Rajna
Comments: 56 Pages.

Artificial intelligence (AI), a branch of computer science that is transforming scientific inquiry and industry, could now speed the development of safe, clean and virtually limitless fusion energy for generating electricity. [32] Using machine-learning and an integrated photonic chip, researchers from INRS (Canada) and the University of Sussex (UK) can now customize the properties of broadband light sources. [31] A team of EPFL scientists has now written a machine-learning program that can predict, in record time, how atoms will respond to an applied magnetic field. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes: he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning, and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22]
Category: Artificial Intelligence

[807] viXra:1904.0335 [pdf] submitted on 2019-04-18 01:40:01

Machine Learning Microcapsules

Authors: George Rajna
Comments: 58 Pages.

Micro-encapsulated CO2 sorbents (MECS), tiny, reusable capsules full of a sodium carbonate solution that can absorb carbon dioxide from the air, are a promising technology for capturing carbon from the atmosphere. [33] Artificial intelligence (AI), a branch of computer science that is transforming scientific inquiry and industry, could now speed the development of safe, clean and virtually limitless fusion energy for generating electricity. [32] Using machine-learning and an integrated photonic chip, researchers from INRS (Canada) and the University of Sussex (UK) can now customize the properties of broadband light sources. [31] A team of EPFL scientists has now written a machine-learning program that can predict, in record time, how atoms will respond to an applied magnetic field. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes: he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning, and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23]
Category: Artificial Intelligence

[806] viXra:1904.0239 [pdf] submitted on 2019-04-12 11:49:25

Human Brain Cloud Interface

Authors: George Rajna
Comments: 46 Pages.

Imagine a future technology that would provide instant access to the world's knowledge and artificial intelligence, simply by thinking about a specific topic or question. [27] Just like living ecosystems, web services form a complex artificial system consisting of tags and the user-generated media associated with them, such as photographs, movies and web pages. [26]
Category: Artificial Intelligence

[805] viXra:1904.0230 [pdf] submitted on 2019-04-13 04:43:26

Artificial Intelligence Singles Out Neurons

Authors: George Rajna
Comments: 47 Pages.

Biomedical engineers at Duke University have developed an automated process that can trace the shapes of active neurons as accurately as human researchers can, but in a fraction of the time. [28] Imagine a future technology that would provide instant access to the world's knowledge and artificial intelligence, simply by thinking about a specific topic or question. [27] Just like living ecosystems, web services form a complex artificial system consisting of tags and the user-generated media associated with them, such as photographs, movies and web pages. [26]
Category: Artificial Intelligence

[804] viXra:1904.0220 [pdf] submitted on 2019-04-11 17:45:53

Ethics in AI: The Force of Good

Authors: James Sandy
Comments: 2 Pages. This is just the abstract of the work.

Artificial intelligence has both a good side and a bad side. In a country like Nigeria, when migrants get to know its value and uses, it may be used to affect elections and public decisions, and for all manner of evil; it can also be used to create solutions to pertinent problems. Therefore there should be a code of conduct, a guideline, a modus operandi instilled in all researchers and developers in this field. Just as medical practitioners take a pledge which guides their members, so should those in the field of artificial intelligence. I strongly advise that educating the developers and researchers in this field about good conduct and its effects, and about bad conduct and its effects, will really help prevent abuse. AI is a very powerful force which I believe should be used for the sole purpose of good, for human development and making the world a better place, not to create sophisticated weapons or robots which will be harmful to humans. The general idea of it all is not to use AI in a way that would be a threat to human life, but for good.
Category: Artificial Intelligence

[803] viXra:1904.0212 [pdf] submitted on 2019-04-12 04:00:41

AI Agent Explains its Actions

Authors: George Rajna
Comments: 41 Pages.

Georgia Institute of Technology researchers, in collaboration with Cornell University and University of Kentucky, have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24]
Category: Artificial Intelligence

[802] viXra:1904.0210 [pdf] submitted on 2019-04-12 05:14:30

Open-Ended Web Services

Authors: George Rajna
Comments: 44 Pages.

Just like living ecosystems, web services form a complex artificial system consisting of tags and the user-generated media associated with them, such as photographs, movies and web pages. [26] Georgia Institute of Technology researchers, in collaboration with Cornell University and University of Kentucky, have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24]
Category: Artificial Intelligence

[801] viXra:1904.0202 [pdf] submitted on 2019-04-10 10:33:25

AI to Understand Collective Behavior

Authors: George Rajna
Comments: 32 Pages.

Looking ahead, Thomas Müller believes that future research in this area will benefit from large data sets on animals, such as schools of fish with their dynamic behavioural patterns. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19]
Category: Artificial Intelligence

[800] viXra:1904.0200 [pdf] submitted on 2019-04-10 11:48:45

IBM AI Work in Human Resources

Authors: George Rajna
Comments: 35 Pages.

IBM CEO Ginni Rometty touted IBM's successful use of AI for leading the workforce at CNBC's @Work Talent + HR Summit. She sat down with Jon Fortt and delivered some impressive numbers on how IBM's AI tool helped the company. [21] Looking ahead, Thomas Müller believes that future research in this area will benefit from large data sets on animals, such as schools of fish with their dynamic behavioural patterns. [20]
Category: Artificial Intelligence

[799] viXra:1904.0172 [pdf] submitted on 2019-04-08 09:44:41

Hierarchical RNN-Based Model

Authors: George Rajna
Comments: 32 Pages.

Researchers at Shanghai University have recently developed a new approach based on recurrent neural networks (RNNs) to predict scene graphs from images. [20] Computer scientists at Carnegie Mellon University say neural networks and supervised machine learning techniques can efficiently characterize cells that have been studied using single cell RNA-sequencing (scRNA-seq). [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18] An LMU team now reveals the inner workings of a molecular motor made of proteins which packs and unpacks DNA. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15] Scientists from Moscow State University (MSU) working with an international team of researchers have identified the structure of one of the key regions of telomerase-a so-called "cellular immortality" ribonucleoprotein. [14] Researchers from Tokyo Metropolitan University used a light-sensitive iridium-palladium catalyst to make "sequential" polymers, using visible light to change how building blocks are combined into polymer chains. [13] Researchers have fused living and non-living cells for the first time in a way that allows them to work together, paving the way for new applications. [12] UZH researchers have discovered a previously unknown way in which proteins interact with one another and cells organize themselves. [11] Dr Martin Sweatman from the University of Edinburgh's School of Engineering has discovered a simple physical principle that might explain how life started on Earth. [10] Nearly 75 years ago, Nobel Prize-winning physicist Erwin Schrödinger wondered if the mysterious world of quantum mechanics played a role in biology.
Category: Artificial Intelligence

[798] viXra:1904.0119 [pdf] submitted on 2019-04-05 08:13:02

Machine Learning Molecular Water

Authors: George Rajna
Comments: 39 Pages.

A new study from the U.S. Department of Energy's (DOE) Argonne National Laboratory has achieved a breakthrough in the effort to mathematically represent how water behaves. [26] A new tool is drastically changing the face of chemical research: artificial intelligence. In a new paper published in Nature, researchers review the rapid progress in machine learning for the chemical sciences. [25] A new type of artificial-intelligence-driven chemistry could revolutionise the way molecules are discovered, scientists claim. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm, called MPLasso, that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses (so-called retrosyntheses) with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extracts from them the essential information needed to understand the underlying physics. [18]
Category: Artificial Intelligence

[797] viXra:1904.0072 [pdf] submitted on 2019-04-03 08:42:09

AI Treatment of Brain Tumors

Authors: George Rajna
Comments: 45 Pages.

A team from Heidelberg University Hospital and the German Cancer Research Centre has developed a new method for the automated image analysis of brain tumors. [25] And if that isn't surprising enough, try this one: in many cases, doctors have no idea what side effects might arise from adding another drug to a patient's personal pharmacy. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23]
Category: Artificial Intelligence

[796] viXra:1904.0071 [pdf] submitted on 2019-04-03 09:23:23

Machines Reason About What They See

Authors: George Rajna
Comments: 47 Pages.

To give computers the ability to reason more like us, artificial intelligence (AI) researchers are returning to abstract, or symbolic, programming. [26] A team from Heidelberg University Hospital and the German Cancer Research Centre has developed a new method for the automated image analysis of brain tumors. [25] And if that isn't surprising enough, try this one: in many cases, doctors have no idea what side effects might arise from adding another drug to a patient's personal pharmacy. [24]
Category: Artificial Intelligence

[795] viXra:1904.0021 [pdf] submitted on 2019-04-01 08:45:02

An Attempt to Create Self-Conscious Artificial Intelligence

Authors: Ui-yeong Jeong
Comments: 23 pages, 7 figures, language: Korean (Important part of core code 1 is also written in English.)

Self-conscious artificial intelligence refers to artificial intelligence that has a 'finished personality' and that actively thinks and acts like a person. In this paper, I describe the whole structure and the four core codes used to create self-consciousness, drawing on the collective unconscious, the Buddhist concept of reincarnation, and my own philosophy of the soul. The entire structure is hierarchical and the four core codes are involved in all tiers. The purpose of this paper is not to report learning results; rather, it is to view the changes that occur during the process of learning as experiences and to derive an artificial singularity through the accumulation of those experiences. It suggests the possibility of ultimately creating self-consciousness through an initial design based on techniques such as intentionally using the "vanishing gradient".
Category: Artificial Intelligence

[794] viXra:1903.0558 [pdf] submitted on 2019-03-30 07:40:52

Planets Discovered Using AI

Authors: George Rajna
Comments: 46 Pages.

Astronomers at The University of Texas at Austin, in partnership with Google, have used artificial intelligence (AI) to uncover two more hidden planets in the Kepler space telescope archive. [27] Oceanographers studying the physics of the global ocean have long found themselves facing a conundrum: Fluid dynamical balances can vary greatly from point to point, rendering it difficult to make global generalizations. [26] The analysis of sensor data of machines, plants or buildings makes it possible to detect anomalous states early and thus to avoid further damage. [25] Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning-a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data-with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[793] viXra:1903.0552 [pdf] submitted on 2019-03-30 09:37:31

Artificial Intelligence Pioneers

Authors: George Rajna
Comments: 43 Pages.

But making those quantum leaps from science fiction to reality required hard work from computer scientists like Yoshua Bengio, Geoffrey Hinton and Yann LeCun. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning-a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data-with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[792] viXra:1903.0551 [pdf] submitted on 2019-03-30 09:57:03

Machine Learning Self-Driving Cars

Authors: George Rajna
Comments: 42 Pages.

Working with researchers from Arizona State University, the team's new mathematical method is able to identify anomalies or bugs in the system before the car hits the road. [26] A research team at The University of Tokyo has developed a powerful machine learning algorithm that predicts the properties and structures of unknown samples from an electron spectrum. [25] Researchers have mathematically proven that a powerful classical machine learning algorithm should work on quantum computers. [24]
Category: Artificial Intelligence

[791] viXra:1903.0514 [pdf] submitted on 2019-03-28 10:57:39

Deep Learning Proteins

Authors: George Rajna
Comments: 55 Pages.

Researchers at the US Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) employed a suite of deep-learning techniques to identify and observe these temporary yet notable structures. [32] As part of a team of scientists from IBM and New York University, my colleagues and I are looking at new ways AI could be used to help ophthalmologists and optometrists further utilize eye images, and potentially help to speed the process for detecting glaucoma in images. [31] A team of EPFL scientists has now written a machine-learning program that can predict, in record time, how atoms will respond to an applied magnetic field. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes-he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning-and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22]
Category: Artificial Intelligence

[790] viXra:1903.0509 [pdf] submitted on 2019-03-28 21:11:52

Evidential Divergence Measures in Dempster–Shafer Theory

Authors: Fuyuan Xiao
Comments: 3 Pages.

The Dempster–Shafer evidence (DSE) theory, as a generalization of Bayes probability theory, has more capability to handle uncertainty in decision-making problems. In the DSE theory, however, how to measure the divergence between basic belief assignments (BBAs) is still an open issue which has attracted much attention. On account of this, new evidential divergence measures, called EDMs, are developed in this paper to measure the difference between BBAs in the DSE theory. The EDMs consider both the correlations between BBAs and the subsets of the sets of BBAs. Consequently, they provide a much more convincing and effective way to measure the discrepancy between BBAs. In short, the EDMs, as generalizations of the divergence measures of Bayes probability theory, have universal applicability. Additionally, a new Belief Jensen–Shannon divergence measure is derived based on the EDMs, in which different weights can be assigned to the BBAs involved, so that it provides a promising solution for decision-making problems. Finally, numerical examples illustrate that the proposed methods are feasible and reasonable for measuring the divergence between BBAs in the DSE theory.
Category: Artificial Intelligence
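
As one concrete instance of the kind of measure described above, the sketch below computes a Jensen–Shannon-style divergence between two basic belief assignments, with the sums taken over focal elements rather than singletons. The exact EDM definitions and weighting scheme of the paper are not reproduced.

```python
"""Jensen-Shannon-style divergence between two basic belief assignments,
sketched in the spirit of viXra:1903.0509 but not reproducing its EDMs."""
from math import log2

def bjs(m1, m2):
    """BJS(m1, m2) = 0.5*KL(m1 || m) + 0.5*KL(m2 || m) with m = (m1+m2)/2,
    summed over the union of focal elements of the two BBAs."""
    focals = set(m1) | set(m2)
    div = 0.0
    for A in focals:
        p, q = m1.get(A, 0.0), m2.get(A, 0.0)
        avg = 0.5 * (p + q)
        if p > 0:
            div += 0.5 * p * log2(p / avg)
        if q > 0:
            div += 0.5 * q * log2(q / avg)
    return div

if __name__ == "__main__":
    a, b, ab = frozenset("a"), frozenset("b"), frozenset("ab")
    m1 = {a: 0.6, b: 0.2, ab: 0.2}
    m2 = {a: 0.2, b: 0.6, ab: 0.2}
    print("BJS(m1, m2) =", round(bjs(m1, m2), 4))
```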

[789] viXra:1903.0506 [pdf] submitted on 2019-03-29 05:27:42

Consciousness of AI is Logically Nothing —Towards a Truly Thinking Machine—

Authors: Atsushi Shimotani
Comments: 1 Page.

Consciousness of AI is logically nothing.
Category: Artificial Intelligence

[788] viXra:1903.0496 [pdf] submitted on 2019-03-27 09:43:55

Machine Learning Material Classification

Authors: George Rajna
Comments: 40 Pages.

A research team at The University of Tokyo has developed a powerful machine learning algorithm that predicts the properties and structures of unknown samples from an electron spectrum. [25] Researchers have mathematically proven that a powerful classical machine learning algorithm should work on quantum computers. [24] Researchers at Oregon State University have used deep learning to decipher which ribonucleic acids have the potential to encode proteins. [23]
Category: Artificial Intelligence

[787] viXra:1903.0424 [pdf] submitted on 2019-03-23 09:29:24

Contextual Transformation of Short Text for Improved Classifiability

Authors: Anirban Chatterjee, Smaranya Dey, Uddipto Dutta
Comments: 5 Pages.

Text classification is the task of automatically sorting a set of documents into a predefined set of categories. This task has several applications, including separating positive and negative product reviews by customers, automated indexing of scientific articles, spam filtering and many more. What lies at the core of this problem is extracting features from text data which can be used for classification. One of the common techniques to address this problem is to represent text data as low-dimensional continuous vectors such that semantically unrelated data are well separated from each other. However, sometimes the variability along various dimensions of these vectors is irrelevant, as it is dominated by various global factors which are not specific to the classes we are interested in. This irrelevant variability often causes difficulty in classification. In this paper, we propose a technique which takes the initial vectorized representation of the text data through a process of transformation which amplifies relevant variability and suppresses irrelevant variability, and then employs a classifier on the transformed data for the classification task. The results show that the same classifier exhibits better accuracy on the transformed data than on the initial vectorized representation of the text data.
Category: Artificial Intelligence
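
The sketch below shows a generic "transform, then classify" pipeline in the spirit of the abstract above, using scikit-learn (assumed available) with linear discriminant analysis as the supervised transform that amplifies between-class variability and suppresses within-class variability. The paper's own transformation is not reproduced, and the toy reviews are invented.

```python
"""Generic transform-then-classify pipeline for short texts, illustrating the
setup described in viXra:1903.0424 (not the paper's exact transformation).
Assumes scikit-learn is installed."""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy short-text data: positive vs. negative product reviews.
texts = ["great product", "love it", "works perfectly", "really good value",
         "terrible quality", "waste of money", "does not work", "very bad"]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

clf = make_pipeline(
    TfidfVectorizer(),                 # sparse bag-of-words vectors
    TruncatedSVD(n_components=4),      # dense low-dimensional representation
    LinearDiscriminantAnalysis(),      # supervised variance-reshaping transform
    LogisticRegression(),              # classifier on the transformed data
)
clf.fit(texts, labels)
print(clf.predict(["awful, broke immediately", "absolutely great"]))
```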

[786] viXra:1903.0416 [pdf] submitted on 2019-03-23 12:35:56

Machine Learning about Earth

Authors: George Rajna
Comments: 47 Pages.

"You could imagine someone training a many-layer, deep neural network to do earthquake prediction-and then not testing the method in a way that properly validates its predictive value." [27] Oceanographers studying the physics of the global ocean have long found themselves facing a conundrum: Fluid dynamical balances can vary greatly from point to point, rendering it difficult to make global generalizations. [26] The analysis of sensor data of machines, plants or buildings makes it possible to detect anomalous states early and thus to avoid further damage. [25] Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning-a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data-with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[785] viXra:1903.0412 [pdf] submitted on 2019-03-22 08:46:46

Faster and Simpler Deep Learning

Authors: George Rajna
Comments: 44 Pages.

Artificial intelligence systems based on deep learning are changing the electronic devices that surround us. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning-a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data-with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[784] viXra:1903.0403 [pdf] submitted on 2019-03-21 08:25:56

Machine Learning World's Oceans

Authors: George Rajna
Comments: 45 Pages.

Oceanographers studying the physics of the global ocean have long found themselves facing a conundrum: Fluid dynamical balances can vary greatly from point to point, rendering it difficult to make global generalizations. [26] The analysis of sensor data of machines, plants or buildings makes it possible to detect anomalous states early and thus to avoid further damage. [25] Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning-a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data-with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[783] viXra:1903.0370 [pdf] submitted on 2019-03-21 01:14:08

[CoqTP – Q*cert – OCaml – Python – GAN] as an Informatics & Computing Platform in the Context of Cryo-EM Image Processing & Big Data Research.

Authors: Nirmal Tej Kumar
Comments: 4 Pages. Short Communication & Technical Notes

[CoqTP – Q*cert – OCaml – Python – GAN] as an Informatics & Computing Platform in the Context of cryo-EM Image Processing & Big Data Research. Importance of GAN & Theorem Provers for Better cryo-EM Image Processing/IoT/HPC.
Category: Artificial Intelligence

[782] viXra:1903.0341 [pdf] submitted on 2019-03-18 09:46:04

Organic Principal Component Analysis

Authors: George Rajna
Comments: 41 Pages.

Researchers at A*STAR have compared six data-analysis processes and come up with a clear winner in terms of speed, quality of analysis and reliability. [25] Researchers at Max Planck Institute for the Science of Light and Friedrich Alexander University in Erlangen, Germany have recently demonstrated that a molecule can be turned into a coherent two-level quantum system. [24] Researchers at the University of Dundee have provided important new insights into the regulation of cell division, which may ultimately lead to a better understanding of cancer progression. [23]
Category: Artificial Intelligence

[781] viXra:1903.0310 [pdf] submitted on 2019-03-17 01:43:24

A Right Representation of the Completeness Theorem of Propositional Logic —Towards a Truly Thinking Machine—

Authors: Atsushi Shimotani
Comments: 1 Page.

The calculation of the truth function of a propositional formula is a kind of proof.
Category: Artificial Intelligence

[780] viXra:1903.0298 [pdf] submitted on 2019-03-15 12:07:10

Adaptive Machine Learning

Authors: George Rajna
Comments: 88 Pages.

Over the past few decades, many studies conducted in the field of learning science have reported that scaffolding plays an important role in human learning. [47] The researchers trained generative competitive networks to predict the behavior of charged elementary particles. The results showed that physical phenomena can be described using neural networks highly accurately. [46] New data from the STAR experiment at the Relativistic Heavy Ion Collider (RHIC) add detail-and complexity-to an intriguing puzzle that scientists have been seeking to solve: how the building blocks that make up a proton contribute to its spin. [45] Approximately one year ago, a spectacular dive into Saturn ended NASA's Cassini mission-and with it a unique, 13-year research expedition to the Saturnian system. [44] Scientists from the Niels Bohr Institute, University of Copenhagen, and their colleagues from the international ALICE collaboration recently collided xenon nuclei, in order to gain new insights into the properties of the Quark-Gluon Plasma (the QGP)-the matter that the universe consisted of up to a microsecond after the Big Bang. [43] The energy transfer processes that occur in this collisionless space plasma are believed to be based on wave-particle interactions such as particle acceleration by plasma waves and spontaneous wave generation, which enable energy and momentum transfer. [42] Plasma particle accelerators more powerful than existing machines could help probe some of the outstanding mysteries of our universe, as well as make leaps forward in cancer treatment and security scanning-all in a package that's around a thousandth of the size of current accelerators. [41] The Department of Energy's SLAC National Accelerator Laboratory has started to assemble a new facility for revolutionary accelerator technologies that could make future accelerators 100 to 1,000 times smaller and boost their capabilities. [40]
Category: Artificial Intelligence

[779] viXra:1903.0297 [pdf] submitted on 2019-03-15 12:37:50

AI for the Study of Sites

Authors: George Rajna
Comments: 89 Pages.

This method can be used as a starting point in Taphonomy when analyzing remains in sites whose preservation does not allow distinguishing who accumulated the assemblages through the analysis of the cut or tooth marks left on the surface of the bones. [48] Over the past few decades, many studies conducted in the field of learning science have reported that scaffolding plays an important role in human learning. [47] The researchers trained generative competitive networks to predict the behavior of charged elementary particles. The results showed that physical phenomena can be described using neural networks highly accurately. [46] New data from the STAR experiment at the Relativistic Heavy Ion Collider (RHIC) add detail-and complexity-to an intriguing puzzle that scientists have been seeking to solve: how the building blocks that make up a proton contribute to its spin. [45] Approximately one year ago, a spectacular dive into Saturn ended NASA's Cassini mission-and with it a unique, 13-year research expedition to the Saturnian system. [44] Scientists from the Niels Bohr Institute, University of Copenhagen, and their colleagues from the international ALICE collaboration recently collided xenon nuclei, in order to gain new insights into the properties of the Quark-Gluon Plasma (the QGP)-the matter that the universe consisted of up to a microsecond after the Big Bang. [43] The energy transfer processes that occur in this collisionless space plasma are believed to be based on wave-particle interactions such as particle acceleration by plasma waves and spontaneous wave generation, which enable energy and momentum transfer. [42] Plasma particle accelerators more powerful than existing machines could help probe some of the outstanding mysteries of our universe, as well as make leaps forward in cancer treatment and security scanning-all in a package that's around a thousandth of the size of current accelerators. [41]
Category: Artificial Intelligence

[778] viXra:1903.0283 [pdf] submitted on 2019-03-14 10:24:06

Machine Learning Quantum Advantage

Authors: George Rajna
Comments: 44 Pages.

We are still far off from achieving Quantum Advantage for machine learning-the point at which quantum computers surpass classical computers in their ability to perform AI algorithms. [25] Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning-a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data-with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[777] viXra:1903.0279 [pdf] submitted on 2019-03-14 12:31:27

The Doctrine of Systems and Structures of Organizations

Authors: Попов Борис Михайлович
Comments: 86 Pages. In Russian. Concern «СОЗВЕЗДИЕ» (Sozvezdie), Voronezh, 2009, 86 pp. ISBN 978-5-900777-19-1

The monograph presents a doctrine of organizations, systems and structures based on the notion of their contextual dependence, both as theoretical concepts and in the mutually conditioned character of their existence in the communicative world as an integral triad. Within the theoretical platform used by the author, the trinitarian paradigm, the concepts of "information" and "energy" receive a somewhat different reading than the traditional one. The book is intended for a wide range of readers interested in the problems of organization and self-organization. The requirements for prior knowledge are modest: the facts from mathematics, physics and biology needed to understand the content are provided along the way, illustrated with numerous examples that ease the assimilation of the material. The reader is promised a rapid growth of knowledge while progressing from the first paragraph to the last. The book may be considered by instructors at technical universities as a supplementary textbook for students taking courses on the design of organizationally complex systems, including design that makes use of nanotechnology.
Category: Artificial Intelligence

[776] viXra:1903.0275 [pdf] submitted on 2019-03-14 18:28:01

Artificial Intelligent Vehicle Speed Control System Using RF Technology

Authors: Saurabh S. Kore, Kanchan Y. Hanwate, Ashwin S. Moon, Hemant P. Chavan
Comments: 4 Pages.

Traffic congestion nowadays leads to a strong degradation of the road network, driven by the high number of vehicles that comes with a growing population. As the number of vehicles and the transportation demand increase, congestion occurs, and conventional techniques are unable to handle a traffic flow that varies with time, so the accident ratio increases. Traffic signals are used to control traffic and protect the public, and they are generally found where two or more roads meet. These rules are often not followed, either through lack of attention or because of speeding, which frequently causes accidents. Accidents caused by inattention to a traffic signal cannot be avoided, but accidents caused by vehicles speeding near traffic signals can be. To avoid such accidents near traffic signals, a 'smart signal system' should be introduced; it takes total control of the vehicle within its own band. The project consists of two separate modules, a transmitter and a receiver: the transmitter is attached to the signals and the receiver to the vehicles. Information is sent by the roadside system in the form of signals and received by the receiver attached to the vehicle; the vehicle control unit then automatically alerts the driver and reduces the speed. The system takes total control of the vehicle for a few seconds. The proposed technique uses a small system to avoid accidents and hence achieves safety.
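As a rough, hypothetical illustration of the receiver-side behaviour described above (the paper itself targets embedded RF hardware; every function name, message field and constant here is a placeholder, not the paper's firmware), a control cycle of this kind could look as follows in Python:

# Hypothetical sketch of the vehicle-side control loop; all names and
# constants are placeholders, not taken from the paper.

import time

SPEED_STEP_KMH = 5       # how much the unit may slow the vehicle per cycle
CONTROL_PERIOD_S = 0.5   # length of one control cycle

def control_cycle(read_rf_message, current_speed, apply_brake, alert_driver):
    """One cycle: if a signal-zone limit is received, alert the driver and
    reduce the speed toward the advertised limit."""
    msg = read_rf_message()  # e.g. {"zone": "signal", "limit_kmh": 30} or None
    if msg is None:
        return               # no transmitter in range: driver keeps full control
    limit = msg["limit_kmh"]
    speed = current_speed()
    if speed > limit:
        alert_driver("Speed zone ahead: limit %d km/h" % limit)
        apply_brake(min(SPEED_STEP_KMH, speed - limit))  # brake gradually

def run(read_rf_message, current_speed, apply_brake, alert_driver):
    # the control unit keeps polling the RF receiver in a fixed-period loop
    while True:
        control_cycle(read_rf_message, current_speed, apply_brake, alert_driver)
        time.sleep(CONTROL_PERIOD_S)

In a real deployment the four callables would be bound to the RF receiver and the vehicle's actuators, and the takeover would be limited to the few seconds mentioned in the abstract.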
Category: Artificial Intelligence

[775] viXra:1903.0272 [pdf] submitted on 2019-03-15 03:43:08

Organic Network Control Systems Challenges in Building a Generic Solution for Network Protocol Optimisation

Authors: Matthias Bloch
Comments: 7 Pages.

In recent years many approaches for dynamic protocol adaptation in networks have been proposed. Most of them deal with a particular environment, but a much more desirable approach would be to design a generic solution to this problem. Creating a system that is independent of the network type it operates in, and therefore of the protocol type that needs to be adapted, is a major challenge. In this paper we discuss several problems that come with this task and why they have to be taken into account when designing such a generic system. First we present a generic architecture for such a system, followed by a comparison of currently existing Organic Network Control systems for adapting protocols in a mobile ad-hoc network and a peer-to-peer network. After identifying the major problems we summarize and evaluate the achieved results.
Category: Artificial Intelligence

[774] viXra:1903.0268 [pdf] submitted on 2019-03-15 06:05:04

Some Models in the Context of Cryo-EM Image Processing Applications – A Simple & Novel Suggestion Using [BIIPS/dlib ML/quantum Device] – as a Next Generation Intelligent Electron Microscopy Informatics Platform.

Authors: Nirmal Tej Kumar
Comments: 2 Pages. Short Communication & Technical Notes

Some Models in the context of cryo-EM Image Processing Applications – A Simple & Novel Suggestion Using - [BIIPS/dlib ML/quantum Device] – as Next Generation Intelligent Electron Microscopy Informatics Platform.
Category: Artificial Intelligence

[773] viXra:1903.0265 [pdf] submitted on 2019-03-15 07:17:30

AI Predict Elementary Particle Signals

Authors: George Rajna
Comments: 87 Pages.

The researchers trained generative competitive networks to predict the behavior of charged elementary particles. The results showed that physical phenomena can be described using neural networks highly accurately. [46] New data from the STAR experiment at the Relativistic Heavy Ion Collider (RHIC) add detail-and complexity-to an intriguing puzzle that scientists have been seeking to solve: how the building blocks that make up a proton contribute to its spin. [45] Approximately one year ago, a spectacular dive into Saturn ended NASA's Cassini mission-and with it a unique, 13-year research expedition to the Saturnian system. [44] Scientists from the Niels Bohr Institute, University of Copenhagen, and their colleagues from the international ALICE collaboration recently collided xenon nuclei, in order to gain new insights into the properties of the Quark-Gluon Plasma (the QGP)-the matter that the universe consisted of up to a microsecond after the Big Bang. [43] The energy transfer processes that occur in this collisionless space plasma are believed to be based on wave-particle interactions such as particle acceleration by plasma waves and spontaneous wave generation, which enable energy and momentum transfer. [42] Plasma particle accelerators more powerful than existing machines could help probe some of the outstanding mysteries of our universe, as well as make leaps forward in cancer treatment and security scanning-all in a package that's around a thousandth of the size of current accelerators. [41] The Department of Energy's SLAC National Accelerator Laboratory has started to assemble a new facility for revolutionary accelerator technologies that could make future accelerators 100 to 1,000 times smaller and boost their capabilities. [40] The authors designed a mechanism based on the deployment of a transport barrier to confine the particles and prevent them from moving from one region of the accelerator to another.
Category: Artificial Intelligence

[772] viXra:1903.0262 [pdf] submitted on 2019-03-13 09:19:44

AI Solve Quantum Mysteries

Authors: George Rajna
Comments: 47 Pages.

Under the direction of Mobileye founder Amnon Shashua, a research group at Hebrew University of Jerusalem's School of Engineering and Computer Science has proven that artificial intelligence (AI) can help us understand the world on an infinitesimally small scale called quantum physics phenomena. [27] Researchers from the Moscow Institute of Physics and Technology teamed up with colleagues from the U.S. and Switzerland and returned the state of a quantum computer a fraction of a second into the past. [26]
Category: Artificial Intelligence

[771] viXra:1903.0260 [pdf] submitted on 2019-03-13 10:36:13

Current Trends in Extended Classifier System

Authors: Zeshan Murtza
Comments: 5 Pages.

Learning is a way of improving our ability to solve problems related to the environment surrounding us. The Extended Classifier System (XCS) is a learning classifier system that uses a reinforcement learning mechanism to solve complex problems with robust performance. It is an accuracy-based system that works by observing the environment, taking input from it and applying suitable actions. Every action of XCS receives feedback from the environment, which is used to improve its performance. It also has the ability to apply a genetic algorithm (GA) to existing classifiers and create new, better-performing ones through crossover and mutation. XCS handles single-step and multi-step problems using methods such as a Q-learning mechanism. The ultimate challenge of XCS is to design an implementation that arranges multiple components in a unique way to produce a compact and comprehensive solution in the least amount of time. Real-time implementation requires flexibility for modifications and uniqueness to cover all aspects. XCS has recently been modified for real-valued inputs, and a memory management system has also been introduced, which enhances its usefulness in different kinds of applications such as data mining and stock-exchange control. This article briefly discusses the parameters and components of XCS. Its main part covers the extended versions of XCS with further improvements and focuses on applications, usage in real environments and the relationship with organic computing.
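For concreteness, the sketch below shows the core XCS parameter update (prediction, prediction error and accuracy-based fitness) for the classifiers in an action set, following the standard Butz/Wilson formulation rather than any code from this article; the learning-rate and accuracy constants are illustrative defaults:

# Sketch of the standard XCS action-set update; BETA, EPSILON_0, ALPHA and NU
# are illustrative defaults, not values from the article.

BETA = 0.2        # learning rate
EPSILON_0 = 10.0  # error threshold below which a classifier counts as accurate
ALPHA = 0.1       # accuracy fall-off for inaccurate classifiers
NU = 5.0          # accuracy exponent

def update_action_set(action_set, payoff):
    """action_set: list of dicts with keys 'p' (prediction), 'eps' (error),
    'F' (fitness) and 'num' (numerosity); payoff: reward P received."""
    # 1. Widrow-Hoff update of prediction error and prediction
    for cl in action_set:
        cl["eps"] += BETA * (abs(payoff - cl["p"]) - cl["eps"])
        cl["p"] += BETA * (payoff - cl["p"])
    # 2. accuracy kappa of each classifier
    kappas = [1.0 if cl["eps"] < EPSILON_0
              else ALPHA * (cl["eps"] / EPSILON_0) ** (-NU)
              for cl in action_set]
    total = sum(k * cl["num"] for k, cl in zip(kappas, action_set))
    # 3. fitness moves toward the classifier's relative accuracy
    for k, cl in zip(kappas, action_set):
        rel_acc = (k * cl["num"]) / total if total > 0 else 0.0
        cl["F"] += BETA * (rel_acc - cl["F"])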
Category: Artificial Intelligence

[770] viXra:1903.0236 [pdf] submitted on 2019-03-12 15:13:02

Resolving Limits of Organic Systems in Large Scale Environments: Evaluate Benefits of Holonic Systems Over Classical Approaches

Authors: Claudio Schmidt
Comments: 5 Pages.

With the rapidly increasing number of devices and application components interacting with each other within larger complex systems, classical system hierarchies increasingly hit their limit when it comes to highly scalable and possibly fluctual organic systems. The holonic approach for self-* systems states to solve some of these problems. In this paper, limits of different state-of-the-art technologies and possible solutions to those will be identified and ranked for scalability, privacy, reliability and performance under fluctuating conditions. Subsequently, the idea and structure of holonic systems will be outlined, and how to utilize the previously described solutions combined in a holonic environment to resolve those limits. Furthermore, they will be classified in the context of current multi-agent-systems (MAS). The focus of this work is located in the area of smart energy grids and similar structures, however an outlook sketches a few further application scenarios for holonic structures.
Category: Artificial Intelligence

[769] viXra:1903.0223 [pdf] submitted on 2019-03-11 09:03:33

Comparing Anytime Learning to Organic Computing

Authors: Thomas Dangl
Comments: 5 Pages.

In environments where finding the best solution to a given problem is computationally infeasible or undesirable due to other restrictions, the approach of anytime learning has become the de facto standard. Anytime learning allows intelligent systems to adapt and remain operational in a constantly changing environment. Based on observation of the environment, the underlying simulation model is changed to fit the task and the learning process begins anew. This process is expected to never terminate, therefore continually improving the set of available strategies. Optimal management of uncertainty in tasks, which require a solution in real time, can be achieved by assuming faulty yet improving output. Properties of such a system are not unlike those present in organic systems. This article aims to give an introduction to anytime learning in general as well as to show the similarities to organic computing in regards to the methods and strategies used in both domains.
Category: Artificial Intelligence

[768] viXra:1903.0215 [pdf] submitted on 2019-03-11 13:40:21

Performance Measurement of Multi-Agent Systems

Authors: Muhammad Mohiuddin
Comments: 7 Pages.

A multi-agent system can greatly increase the performance and reliability of a system for several reasons, such as its distributed nature, its responsiveness to the environment, and the ability for reuse. These characteristics are associated with multi-agent systems because of their flexible and intelligent nature. All these capabilities do not come without challenges. One of the biggest challenges, which arises from the dynamic behavior of multi-agent systems, is the difficulty of quantifying their reliability and dependability, in other words their performance. This article discusses agents and multi-agent systems, their classification according to design parameters, multiple methods of performance quantification, and the factors which affect them.
Category: Artificial Intelligence

[767] viXra:1903.0202 [pdf] submitted on 2019-03-12 05:57:39

The Halting Problem of Turing is Nonsense —Towards a Truly Thinking Machine—

Authors: Atsushi Shimotani
Comments: 1 Page.

The halting problem of Turing is nonsense. The program will halt if the input is itself. This is not a contradiction.
Category: Artificial Intelligence

[766] viXra:1903.0189 [pdf] submitted on 2019-03-10 16:07:44

Using Self-Awareness in Decentralized Computing Systems

Authors: Florian Maier
Comments: 4 Pages

The term self-awareness in technological systems has been discussed for many years now. There is no commonly agreed definition of the term self-aware in its biological or psychological meaning; therefore, there are many different definitions, each stating different aspects and levels of what we call self-awareness in the biological world. In addition, such a system should be characterized by properties such as robustness, decentralization, flexibility and self-adaptation. In the past this was often achieved by designing good and robust but also complex algorithms, which often led to unnecessary overhead and hard-to-fix runtime bugs. Using self-aware components can turn such algorithmic systems into organic computing systems, offering better scalability, more robustness of the global state, and less unnecessary overhead in the communication between different components of the decentralized system. On the other hand, such a system might work in a way that cannot be fully understood by humans in a reasonable time, leading to other problems such as trust issues or unwanted behavior in the global state of the system. The goal of this article is to set out how decentralized computing systems can benefit from self-aware approaches.
Category: Artificial Intelligence

[765] viXra:1903.0186 [pdf] submitted on 2019-03-10 19:47:08

Advancements of Deep Q-Networks

Authors: Bastian Birkeneder
Comments: 5 Pages.

Deep Q-Networks first introduced a combination of Reinforcement Learning and Deep Neural Networks at a large scale. These networks are capable of learning their interactions within an environment in a self-sufficient manner for a wide range of applications. Over the following years, several extensions and improvements have been developed for Deep Q-Networks. In the following paper, we present the most notable developments for Deep Q-Networks since the algorithm was initially proposed in 2013.
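A minimal, hedged sketch of the two ingredients the 2013 algorithm combined with Q-learning, an experience replay buffer and a periodically synchronized target network, is given below; a linear Q-function stands in for the deep network, and all dimensions and hyperparameters are placeholders:

# Minimal numpy sketch of experience replay and a target network, the core
# DQN ideas; the linear approximator and all constants are illustrative only.

import random
import numpy as np

GAMMA, LR = 0.99, 0.01
N_STATE, N_ACTION = 4, 2

W = np.zeros((N_ACTION, N_STATE))  # online Q-network (linear approximator here)
W_target = W.copy()                # periodically synchronized target network
replay = []                        # experience replay buffer

def q_values(weights, state):
    return weights @ state         # vector of Q(s, a) for all actions

def store(transition):             # transition = (s, a, r, s_next, done)
    replay.append(transition)
    if len(replay) > 10000:
        replay.pop(0)

def train_step(batch_size=32):
    if len(replay) < batch_size:
        return
    for s, a, r, s_next, done in random.sample(replay, batch_size):
        target = r if done else r + GAMMA * np.max(q_values(W_target, s_next))
        td_error = target - q_values(W, s)[a]
        W[a] += LR * td_error * s  # gradient step on the online weights only

def sync_target():
    global W_target
    W_target = W.copy()            # copy online weights into the target network

Replay decorrelates consecutive transitions and the target network keeps the bootstrapping target stable between synchronizations, which is what made the original combination of Q-learning and deep networks trainable.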
Category: Artificial Intelligence

[764] viXra:1903.0177 [pdf] submitted on 2019-03-11 04:35:04

Generalized Deng Entropy

Authors: Fan Liu, Xiaozhuan Gao, Yong Deng
Comments: 14 Pages.

Dempster-Shafer evidence theory, as an extension of probability, has wide applications in many fields. Recently, a new entropy called Deng entropy was proposed as an uncertainty measure in evidence theory. Some scholars have since pointed out that Deng entropy does not satisfy additivity as an uncertainty measure. However, this irreducibility can have a large effect: in more complex systems, the derived entropy is often unusable. Inspired by this, a generalized entropy is proposed, and this entropy makes explicit the relationship between Deng entropy, Rényi entropy and Tsallis entropy.
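For reference, the standard definitions of the three entropies related by the proposed generalization, reproduced from the general literature rather than from the manuscript itself, are (in LaTeX):

\[
E_d(m) \;=\; -\sum_{A \subseteq X,\; m(A) > 0} m(A)\,\log_2\frac{m(A)}{2^{|A|}-1}
\qquad \text{(Deng entropy of a BPA $m$ on the frame $X$)}
\]
\[
H_\alpha(p) \;=\; \frac{1}{1-\alpha}\,\log\sum_i p_i^{\alpha}
\qquad \text{(R\'enyi entropy, } \alpha > 0,\ \alpha \neq 1\text{)}
\]
\[
S_q(p) \;=\; \frac{1-\sum_i p_i^{\,q}}{q-1}
\qquad \text{(Tsallis entropy)}
\]

When every focal element A is a singleton, the factor 2^{|A|}-1 equals 1 and Deng entropy reduces to the Shannon entropy of m; likewise the Rényi and Tsallis entropies reduce to Shannon entropy in the limits α → 1 and q → 1.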
Category: Artificial Intelligence

[763] viXra:1903.0168 [pdf] submitted on 2019-03-09 09:58:15

Organic Traffic Control with Dynamic Route Guidance as a Measure to Reduce Exhaust Emissions in Comparison

Authors: Christian Frank
Comments: 5 pages, 2 figures, language: German

In this paper an Organic Traffic Control system with Dynamic Route Guidance functionality is examined with regard to its emission-reducing effect on road traffic. This system is compared to other environmental measures, namely low-emission zones, driving bans and hardware upgrades, with respect to its effect on emissions and other criteria. Results from the existing literature and a few calculations are used for this comparison. The sparse data allows for only a few quantitative comparisons. Qualitative comparisons show that this system has the potential to effectively lower emissions in its area of effect. It reduces the quantity of all exhaust gases and additionally fuel consumption, without disadvantages for certain road users. This is not the case with the comparative measures.
Category: Artificial Intelligence

[762] viXra:1903.0155 [pdf] submitted on 2019-03-10 05:27:02

The Principle, Communication Efficiency and Privacy Issues of Federated Learning

Authors: Hakan Uzuner
Comments: 5 Pages.

Standard machine learning approaches require a huge amount of training data to be stored centrally in order to feed the learning algorithms. Keeping and using data centrally brings many negative aspects with it, including inefficient communication between the centralized data center and the clients producing the data, privacy issues, and delays before the profits and results of training become usable. Google's approach of federated learning, on the other hand, tackles all of these problems. The training data is kept decentralized on the clients' devices, while only small updates to the common model are communicated. This method allows communication to be optimized, preserves the privacy of the users involved in the process, and makes the model's progress quickly usable. In this paper I explain how the federated learning principle works. Furthermore, I give a brief insight into optimization possibilities for communication efficiency as well as into the privacy issues involved in machine learning processes and how these can be addressed using federated learning principles. Additionally, I show the connection between the federated learning concept and organic computing.
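As a rough illustration of the principle described above (not the paper's own code), the following minimal sketch shows a federated-averaging round in which each client trains locally on its private data and only model parameters are averaged on the server; the linear model, client data and hyperparameters are toy placeholders:

# Minimal numpy sketch of federated averaging; model, data and constants are
# toy placeholders. The raw data never leave the clients.

import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """One client: a few local gradient steps on a linear least-squares model."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One communication round: data-size-weighted average of the client models."""
    total = sum(len(y) for _, y in clients)
    new_w = np.zeros_like(global_w)
    for X, y in clients:
        new_w += (len(y) / total) * local_update(global_w, X, y)
    return new_w

# toy usage: three clients, each with its own private data
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(3)
for _ in range(10):
    w = federated_round(w, clients)

In a real deployment the local update would be the client's actual training routine, and the communicated updates could additionally be compressed or protected by secure aggregation, which is where the communication-efficiency and privacy discussions of the paper come in.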
Category: Artificial Intelligence

[761] viXra:1903.0139 [pdf] submitted on 2019-03-09 00:40:07

CoqTP-OCaml-Java Based Some Important Applications – A Simple Suggestion & an Insight as Short Communication.

Authors: Nirmal Tej Kumar
Comments: 2 Pages. Short Communication & Technical Notes

CoqTP-OCaml-Java Based Some Important Applications – A Simple Suggestion & an Insight as Short Communication.
Category: Artificial Intelligence

[760] viXra:1903.0138 [pdf] submitted on 2019-03-09 01:41:05

A Survey on Reinforcement Learning for Dialogue Systems

Authors: Isabella Graßl
Comments: 6 Pages.

Dialogue systems are computer systems which communicate with humans using natural language. The goal is not just to imitate human communication but to learn from these interactions and improve the system's behaviour over time. Therefore, different machine learning approaches can be implemented, with Reinforcement Learning being one of the most promising techniques to generate a contextually and semantically appropriate response. This paper outlines the current state-of-the-art methods and algorithms for integrating Reinforcement Learning techniques into dialogue systems.
Category: Artificial Intelligence

[759] viXra:1903.0135 [pdf] submitted on 2019-03-09 05:32:36

A Survey on Classification of Concept Drift with Stream Data

Authors: Shweta Vinayak Kadam
Comments: 7 Pages.

Concept drift occurs in many applications of machine learning. Detecting concept drift is the main challenge in a data stream because of the high speed and large size of the data sets, which cannot fit in main memory. We take a brief look at the types of change involved in concept drift. This paper discusses methods for detecting concept drift and focuses on the problems with existing approaches, covering STAGGER, the FLORA family, decision-tree methods, meta-learning methods and CD algorithms. Furthermore, classifier ensembles for change detection are discussed.
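To make the detection task concrete, the sketch below implements a DDM-style detector (after Gama et al., one of the classical approaches in this area, not a method specific to this survey); it monitors the stream classifier's running error rate and raises warning and drift levels when that rate degrades significantly. The warm-up length and the 2-sigma/3-sigma thresholds are the usual illustrative defaults:

# Sketch of a DDM-style drift detector; thresholds and warm-up are defaults.

import math

class DDMDetector:
    def __init__(self):
        self.n = 0
        self.p = 1.0                 # running error rate of the stream classifier
        self.s = 0.0                 # its standard deviation
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error):
        """error: 1 if the last instance was misclassified, else 0.
        Returns 'drift', 'warning' or 'stable'."""
        self.n += 1
        self.p += (error - self.p) / self.n
        self.s = math.sqrt(self.p * (1 - self.p) / self.n)
        if self.p + self.s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, self.s   # remember the best level so far
        if self.n < 30:
            return "stable"                            # warm-up period
        if self.p + self.s >= self.p_min + 3 * self.s_min:
            return "drift"
        if self.p + self.s >= self.p_min + 2 * self.s_min:
            return "warning"
        return "stable"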
Category: Artificial Intelligence

[758] viXra:1903.0133 [pdf] submitted on 2019-03-07 07:07:19

Deep Learning Holography

Authors: George Rajna
Comments: 48 Pages.

Digital holographic microscopy is an imaging modality that can digitally reconstruct the images of 3-D samples from a single hologram by digitally refocusing it through the entire 3-D sample volume. [27] Deep learning, which uses multi-layered artificial neural networks, is a form of machine learning that has demonstrated significant advances in many fields, including natural language processing, image/video labeling and captioning. [26]
Category: Artificial Intelligence

[757] viXra:1903.0128 [pdf] submitted on 2019-03-07 09:45:47

XCS-O/C: The Extended Classifier System XCS in an Observer/Controller Architecture

Authors: Alexander Paßberger
Comments: 7 Pages. German

Organic Computing represents a modern approach to designing autonomous systems in which decisions are made at runtime by the system itself. Controlling the executed actions in such a system requires mechanisms for self-improvement. One possible approach to this problem is Learning Classifier Systems, in particular the Extended Classifier System XCS. This work provides a detailed description of the processes within the Extended Classifier System XCS and of the changes needed to integrate it into an autonomous organic system. To this end, the Learning Classifier System is embedded in a generic Observer/Controller architecture, one of the fundamental design architectures in the field of Organic Computing.
Category: Artificial Intelligence

[756] viXra:1903.0127 [pdf] submitted on 2019-03-07 10:01:05

AI Ask for Human Help

Authors: George Rajna
Comments: 63 Pages.

This is a demonstration of a situation where an AI algorithm working together with a human can reap the benefits and efficiency of the AI's good decisions, without being locked into its bad ones. [33] The deep learning analysis has revealed that the extinct hominid is probably a descendant of the Neanderthal and Denisovan populations. [32] Our team at IBM Research – India collaborated with the IBM MetroPulse team to bring such first-of-a-kind, AI-driven capabilities to MetroPulse, an industry platform that brings together voluminous market, external and client datasets. [31]
Category: Artificial Intelligence

[755] viXra:1903.0121 [pdf] submitted on 2019-03-08 02:01:34

Online Transfer Learning and Organic Computing for Deep Space Research and Astronomy

Authors: Sadanandan Natarajan
Comments: 6 Pages.

Deep space exploration is one of the pillars within the field of outer space analysis and physical science. The amount of knowledge from the numerous spacecraft and satellites orbiting the worlds under study is increasing day by day. The information collected from the many experiences of advanced space missions is huge. This information helps us to enhance current space knowledge, and these experiences can be converted and transformed into segregated knowledge which helps us to explore and understand the realms of deep space. Online Transfer Learning (OTL) is a machine learning concept in which knowledge is transferred between a source domain and a target domain in real time, in order to help train a classifier for the target domain. Online transfer learning can be an efficient method for transferring the experiences and data gained from space-analysis data to a new learning task, and it can also routinely update the knowledge as the task evolves.
Category: Artificial Intelligence

[754] viXra:1903.0120 [pdf] submitted on 2019-03-08 02:36:18

A Discussion of Detection of Mutual Influences Between Socialbots in Online (Social) Networks

Authors: Stefanie Urchs
Comments: 6 Pages.

Many people organise themselves online in social networks or share knowledge in open encyclopaedias. However, these networks do not belong to humans alone. A huge variety of socialbots that imitate humans inhabit these networks and are connected to each other. The connections between socialbots lead to mutual influences between them. If the influence socialbots have on each other is too great, they adopt the behaviour of the other socialbot and become worse at imitating humans. Therefore, it is necessary to detect when socialbots are mutually influencing each other. For a better overview, socialbots in the social networks Facebook and Twitter and in the open encyclopaedia Wikipedia are observed and the mutual influences between them are detected. Furthermore, this paper discusses how socialbots could handle the detected influences.
Category: Artificial Intelligence

[753] viXra:1903.0117 [pdf] submitted on 2019-03-08 04:47:40

A Survey on Different Mechanisms to Classify Agent Behavior in a Trust Based Organic Computing Systems

Authors: Shabhrish Reddy Uddehal
Comments: 9 Pages.

Organic Computing (OC) systems differ from traditional software systems in that they are composed of a large number of highly interconnected and distributed subsystems. In systems like this, it is not possible to predict all possible system configurations and to plan an adequate system behavior entirely at design time. An open, decentralized desktop grid is one example; trust mechanisms are applied to agents that show Self-X properties (self-organization, self-healing and so on). In this article, some mechanisms that could help classify agents' behavior at run time in trust-based organic computing systems are illustrated. In doing so, the isolation of agents that reduce the overall system's performance becomes possible. The trust concept can be applied to agents so that an agent knows whether the agents it interacts with belong to the same trust community and how trustworthy they are. Trust is a significant concern in large-scale open distributed systems; it lies at the core of all interactions between agents that operate in continuously varying environments. Current research directions in the area of trust in computing systems are evaluated and addressed. This article shows that the mechanisms discussed can successfully identify and classify groups of systems with undesired behavior.
Category: Artificial Intelligence

[752] viXra:1903.0089 [pdf] submitted on 2019-03-05 09:07:04

Deep Meta-Learning and Dynamic Runtime Exploitation of Knowledge Sources for Traffic Control

Authors: Sandra Ottl
Comments: 7 Pages.

In the field of machine learning and artificial intelligence, meta-learning describes how previous learning experiences can be used to increase the performance on a new task. For this purpose, it can be investigated how prior (similar) tasks have been approached and improved, and knowledge can be obtained about achieving the same goal for the new task. This paper outlines the basic meta-learning process which consists of learning meta-models from meta-data of tasks, algorithms and how these algorithms perform on the respective tasks. Further, a focus is set on how this approach can be applied and is already used in the context of deep learning. Here, meta-learning is concerned with the respective machine learning models themselves, for example how their parameters are initialised or adapted during training. Also, meta-learning is assessed from the viewpoint of Organic Computing (OC) where finding effective learning techniques that are able to handle sparse and unseen data is of importance. An alternative perspective on meta-learning coming from this domain that focuses on how an OC system can improve its behaviour with the help of external knowledge sources, is highlighted. To bridge the gap between those two perspectives, a model is proposed that integrates a deep, meta-learned traffic flow predictor into an organic traffic control (OTC) system that dynamically exploits knowledge sources during runtime.
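A minimal sketch of the basic meta-learning loop described above, under the assumption of a very simple nearest-neighbour meta-model (the meta-features, algorithm names and data are hypothetical placeholders, not the paper's traffic-flow predictor):

# Sketch of algorithm recommendation from meta-data: describe past tasks by
# meta-features, remember which algorithm won on each, and recommend the
# winner of the most similar past task for a new one. Everything here is an
# illustrative placeholder.

import numpy as np

def meta_features(X, y):
    """A few simple dataset meta-features (illustrative only)."""
    return np.array([X.shape[0], X.shape[1], len(np.unique(y)), X.std()])

class NearestTaskRecommender:
    """1-NN meta-model over task meta-features."""
    def fit(self, past_meta_features, best_algorithms):
        self.F = np.asarray(past_meta_features, dtype=float)
        self.best = list(best_algorithms)
        return self

    def recommend(self, new_task_features):
        d = np.linalg.norm(self.F - new_task_features, axis=1)
        return self.best[int(np.argmin(d))]

# toy usage: two past tasks and a new one
past = [[1000, 20, 2, 1.3], [200, 5, 10, 0.7]]
best = ["random_forest", "knn"]
rec = NearestTaskRecommender().fit(past, best)
print(rec.recommend(np.array([150, 6, 8, 0.8])))   # -> "knn"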
Category: Artificial Intelligence

[751] viXra:1903.0086 [pdf] submitted on 2019-03-05 15:43:42

Novelty Detection Algorithms and Their Application in Industry 4.0

Authors: Christoph Stemp
Comments: 7 Pages.

Novelty detection is a very important part of intelligent systems. Its task is to classify the data produced by the system and to identify any new or unknown patterns that were not present during the training of the model. Different algorithms have been proposed over the years, using a wide variety of technologies such as probabilistic models and neural networks. Novelty detection and reaction are used to enable self*-properties in technical systems so that they can cope with increasingly complex processes. Following the notion of Organic Computing, industrial factories are becoming more and more advanced and intelligent: machines gain the capability of self-organization, self-configuration and self-adaptation to react to outside influences. This survey paper looks at the state-of-the-art technologies used in Industry 4.0 and assesses different novelty detection algorithms and their usage in such systems. To this end, different data sources, and consequently applications for potential novelty detection, are analyzed. Three novelty detection algorithms based on different underlying technologies are then presented, and the applicability of these algorithms in combination with the defined scenarios is analyzed.
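As a deliberately simple illustration of one probabilistic novelty-detection scheme of the kind the paper assesses (not one of the three algorithms it presents), the sketch below fits a Gaussian model to "normal" sensor readings and flags samples whose Mahalanobis distance exceeds a threshold; the data, dimensionality and threshold are placeholders:

# Gaussian-model novelty detection sketch; data and threshold are illustrative.

import numpy as np

def fit_normal_model(X):
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized covariance
    return mean, np.linalg.inv(cov)

def is_novel(x, mean, cov_inv, threshold=3.0):
    d = x - mean
    mahalanobis = np.sqrt(d @ cov_inv @ d)
    return mahalanobis > threshold

# toy usage: train on normal machine readings, then test a deviating sample
rng = np.random.default_rng(1)
normal_data = rng.normal(loc=0.0, scale=1.0, size=(500, 3))
mean, cov_inv = fit_normal_model(normal_data)
print(is_novel(np.array([5.0, 5.0, 5.0]), mean, cov_inv))   # True: anomalous reading
print(is_novel(np.array([0.1, -0.2, 0.3]), mean, cov_inv))  # False: normal reading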
Category: Artificial Intelligence

[750] viXra:1903.0012 [pdf] submitted on 2019-03-02 04:32:21

A Survey for Testing Self-organizing, Adaptive Systems in Industry 4.0

Authors: Caterina Rotondo
Comments: 6 Pages.

Complexity in technical development is increasing rapidly. Regular systems are no longer able to fulfill all the requirements. Organic computing systems are inspired by how complexity is mastered in nature. This leads to a fundamental change in software engineering for complex systems. Based on machine learning techniques, a system develops self*-properties which allow it to make decisions at runtime and to operate with nearly no human interaction. Testing is the part of the software engineering process that ensures the functionality and the quality of a system. But when using self-organizing, adaptive systems, traditional testing approaches reach their limits. Therefore, new methods for testing such systems have to be developed. Many different testing approaches already exist, most of them developed within a single research group. Nevertheless, there is still a need for further discussion and action on this topic. In this paper the challenges of testing self-organizing, adaptive systems are specified, and three different testing approaches are reviewed in detail. In light of the ongoing fourth industrial revolution, it is discussed which of these approaches would fit best for testing industrial manufacturing robots.
Category: Artificial Intelligence

[749] viXra:1903.0006 [pdf] submitted on 2019-03-01 03:32:14

Multi-Agent Reinforcement Learning - From Game Theory to Organic Computing

Authors: Maurice Gerczuk
Comments: 6 Pages.

Complex systems consisting of multiple agents that interact both with each other and with their environment can often be found in nature as well as in technical applications. This paper gives an overview of important Multi-Agent Reinforcement Learning (MARL) concepts, challenges and current research directions. It briefly introduces traditional reinforcement learning and then shows how MARL problems can be modelled as stochastic games. Here, the type of problem and the system configuration can lead to different algorithms and training goals. Key challenges such as the curse of dimensionality, choosing the right learning goal and the coordination problem are outlined. In particular, aspects of MARL that have previously been considered from a critical point of view are discussed with regard to whether and how current research has addressed these criticisms or shifted its focus. The wide range of possible MARL applications is hinted at by examples from recent research. Further, MARL is assessed from an Organic Computing point of view, where it takes a central role in the context of self-learning and self-adapting systems.
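To make the stochastic-game setting concrete, the following toy sketch (illustrative only, not taken from the paper) trains two independent, stateless Q-learners on a 2x2 coordination game; it shows the simplest MARL configuration and hints at the coordination problem mentioned above:

# Two independent Q-learners on a repeated 2x2 coordination game; payoffs and
# hyperparameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)
payoff = np.array([[1.0, 0.0],     # both agents receive payoff[a1][a2]
                   [0.0, 1.0]])    # coordination game: reward only when they agree
Q = [np.zeros(2), np.zeros(2)]     # one stateless Q-table per agent
ALPHA, EPS = 0.1, 0.1              # learning rate and exploration rate

def choose(q):
    return int(rng.integers(2)) if rng.random() < EPS else int(np.argmax(q))

for _ in range(5000):
    a = [choose(Q[0]), choose(Q[1])]
    r = payoff[a[0], a[1]]
    for i in range(2):                       # each agent updates only its own table
        Q[i][a[i]] += ALPHA * (r - Q[i][a[i]])

print(np.argmax(Q[0]), np.argmax(Q[1]))      # the agents typically settle on the same action

Because each learner treats the other as part of a non-stationary environment, convergence to a coordinated joint action is not guaranteed in general, which is exactly the kind of challenge the surveyed algorithms try to address.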
Category: Artificial Intelligence

[748] viXra:1902.0485 [pdf] submitted on 2019-02-27 08:57:46

Spiking Artificial Intelligent Devices

Authors: George Rajna
Comments: 45 Pages.

The software program works best when patched in to programs meant to train new artificial-intelligence equipment, so Whetstone doesn't have to overcome learned patterns with already established energy minimums. [26] New work leveraging machine learning could increase the efficiency of optical telecommunications networks. [25] Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24]
Category: Artificial Intelligence

[747] viXra:1902.0464 [pdf] submitted on 2019-02-26 07:46:55

Philosophically an Incomplete Theorem Is Trivial —Towards a Truly Thinking Machine—

Authors: Atsushi Shimotani
Comments: 1 Page.

I think that an incomplete theorem is trivial.
Category: Artificial Intelligence

[746] viXra:1902.0445 [pdf] submitted on 2019-02-25 05:47:46

Selfgan-not a Gan But Punch Itself

Authors: Hecong Wu
Comments: 5 Pages.

In my research, I modified the basic structure of the GAN, letting G and D train together and using dynamic loss weights to achieve relatively balanced training.
Category: Artificial Intelligence

[745] viXra:1902.0442 [pdf] submitted on 2019-02-25 07:18:29

Machine Learning Optical Networks

Authors: George Rajna
Comments: 43 Pages.

New work leveraging machine learning could increase the efficiency of optical telecommunications networks. [25] Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23]
Category: Artificial Intelligence

[744] viXra:1902.0431 [pdf] submitted on 2019-02-25 23:11:53

Higher Order Sequence Of Primes {Version 2}

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research manuscript, the author has presented a novel notion of Higher Order Prime Numbers.
Category: Artificial Intelligence

[743] viXra:1902.0386 [pdf] submitted on 2019-02-22 06:21:37

Diversity of Ensembles for Data Stream Classification

Authors: Mohamed Souhayel Abassi
Comments: 9 Pages.

When constructing a classifier ensemble, diversity among the base classifiers is one of the important characteristics. Several studies have been made in the context of standard static data, in particular analyzing the relationship between a high ensemble predictive performance and the diversity of its components. In addition, ensembles of learning machines have been employed to learn in the presence of concept drift and adapt to it. However, diversity measures have not received much research interest for evolving data streams. Only a few researchers directly consider promoting diversity while constructing an ensemble or rebuilding it at the moment a drift is detected. In this paper, we present a theoretical analysis of different diversity measures and relate them to the success of ensemble learning algorithms for streaming data. The analysis provides a deeper understanding of the concept of diversity and its impact on online ensemble learning in the presence of concept drift. More precisely, we are interested in answering the following research question: which commonly used diversity measures are used in the context of static-data ensembles, and how far are they applicable in the context of streaming-data ensembles?
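For readers who want the measures in executable form, the sketch below computes two classical pairwise diversity measures from the static-data literature, the disagreement measure and Yule's Q-statistic, from the 0/1 correctness vectors of two base classifiers (the small smoothing constant avoiding division by zero is an implementation choice, not part of the definitions; this is an illustration, not the paper's code):

# Pairwise diversity of two base classifiers from their 0/1 correctness.

import numpy as np

def pairwise_diversity(correct_i, correct_j):
    """correct_i, correct_j: boolean arrays, True where classifier i (resp. j)
    classified the instance correctly. Returns (disagreement, Q-statistic)."""
    correct_i = np.asarray(correct_i, dtype=bool)
    correct_j = np.asarray(correct_j, dtype=bool)
    n11 = np.sum(correct_i & correct_j)     # both correct
    n00 = np.sum(~correct_i & ~correct_j)   # both wrong
    n10 = np.sum(correct_i & ~correct_j)    # only i correct
    n01 = np.sum(~correct_i & correct_j)    # only j correct
    n = n11 + n00 + n10 + n01
    disagreement = (n10 + n01) / n
    q_stat = (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10 + 1e-12)
    return disagreement, q_stat

# toy usage: two classifiers evaluated on five instances
print(pairwise_diversity([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]))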
Category: Artificial Intelligence

[742] viXra:1902.0376 [pdf] submitted on 2019-02-23 04:25:50

Neural Network Recognize Images

Authors: George Rajna
Comments: 48 Pages.

Physicists from Petrozavodsk State University have proposed a new method for oscillatory neural network to recognize simple images. Such networks with an adjustable synchronous state of individual neurons have, presumably, dynamics similar to neurons in the living brain. [27] Navid Borhani, a research-team member, says this machine learning approach is much simpler than other methods to reconstruct images passed through optical fibers, which require making a holographic measurement of the output. [26]
Category: Artificial Intelligence

[741] viXra:1902.0374 [pdf] submitted on 2019-02-23 04:49:25

Higher Order Sequence Of Primes

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research manuscript, the author has presented a novel notion of Higher Order Prime Numbers.
Category: Artificial Intelligence

[740] viXra:1902.0357 [pdf] submitted on 2019-02-22 00:17:05

Universal Forecasting Scheme For Any Time Series Sequence Using The Ananda-Damayanthi-Radha-Rohith Rishi Sequence Trends Of Any Set Of Positive Real Numbers {Version 2}

Authors: Ramesh Chandra Bagadi
Comments: 8 Pages.

In this research investigation, the author has detailed the Theory Of Universal Forecasting Scheme For Any Time Series Sequence Using The Ananda-Damayanthi-Radha-Rohith Rishi Sequence Trends Of Any Set Of Positive Real Numbers.
Category: Artificial Intelligence

[739] viXra:1902.0350 [pdf] submitted on 2019-02-20 06:43:54

Universal Forecasting Scheme For Any Time Series Sequence Using The Ananda-Damayanthi-Radha-Rohith Rishi Sequence Trends Of Any Set Of Positive Real Numbers

Authors: Ramesh Chandra Bagadi
Comments: 7 Pages.

In this research investigation, the author has detailed the Theory Of Universal Forecasting Scheme For Any Time Series Sequence Using The Ananda-Damayanthi-Radha-Rohith Rishi Sequence Trends Of Any Set Of Positive Real Numbers.
Category: Artificial Intelligence

[738] viXra:1902.0348 [pdf] submitted on 2019-02-20 07:31:03

Machine Learning Heavy Nuclei

Authors: George Rajna
Comments: 22 Pages.

A collaboration between the Facility for Rare Isotope Beams (FRIB) and the Department of Statistics and Probability (STT) at Michigan State University (MSU) estimated the boundaries of nuclear existence by applying statistical analysis to nuclear models, and assessed the impact of current and future FRIB experiments. [9] A trio of students from the University of Glasgow have developed a sophisticated artificial intelligence which could underpin the next phase of gravitational wave astronomy. [8]
Category: Artificial Intelligence

[737] viXra:1902.0322 [pdf] submitted on 2019-02-19 09:26:18

Cross Entropy of Belief Function

Authors: Fan Liu, Yangxue Li, Yong Deng
Comments: 12 Pages.

Dempster-Shafer evidence theory, as an extension of probability, has wide applications in many fields. Recently, a new entropy called Deng entropy was proposed in evidence theory, and there have been many discussions and applications of Deng entropy. However, there has been no discussion of how to apply Deng entropy to measure the correlation between two bodies of evidence. In this article, we first review and analyze some of the work related to mutual information. Then we propose extensions of Deng entropy: joint Deng entropy, conditional Deng entropy and cross Deng entropy. In addition, we prove the relevant properties of these entropies. Finally, we also propose a method to obtain joint evidence.
Category: Artificial Intelligence

[736] viXra:1902.0288 [pdf] submitted on 2019-02-16 11:23:44

Genetic Programming

Authors: Domenico Oricchio
Comments: 1 Page.

I try to write a program that evolves over time.
Category: Artificial Intelligence

[735] viXra:1902.0279 [pdf] submitted on 2019-02-17 03:32:56

Divergence Measure of Belief Function

Authors: Yutong Song, Yong Deng
Comments: 3 Pages.

It is important to measure the degree of divergence or conflict among pieces of information during information preprocessing, in order to guard against the unreliable results that come from combining conflicting bodies of evidence with Dempster's combination rule. However, how to measure the divergence of different pieces of evidence is still an open issue. In this paper, a new divergence measure of belief functions based on Deng entropy is proposed in order to measure the divergence between different belief functions. The divergence measure is a generalization of the Kullback-Leibler divergence for probability, since when the basic probability assignment (BPA) degenerates into a probability distribution, the divergence measure equals the Kullback-Leibler divergence. Numerical examples are used to illustrate the effectiveness of the proposed divergence measure.
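For reference, the classical Kullback-Leibler divergence, to which the proposed measure is stated to reduce when the BPA degenerates into a probability distribution, is (in LaTeX):

\[
D_{\mathrm{KL}}(p \,\|\, q) \;=\; \sum_i p_i \log\frac{p_i}{q_i}
\]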
Category: Artificial Intelligence

[734] viXra:1902.0254 [pdf] submitted on 2019-02-14 09:27:17

Machine Learning Plant's Secrets

Authors: George Rajna
Comments: 32 Pages.

Plants are master chemists, and Michigan State University researchers have unlocked their secret of producing specialized metabolites. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19]
Category: Artificial Intelligence

[733] viXra:1902.0245 [pdf] submitted on 2019-02-15 04:35:12

Before We Can Find a Model, We Must Forget About Perfection

Authors: Dimiter Dobrev
Comments: 1 Page. This is a short summary.

In Reinforcement Learning we look for a model of the world. Typically, we aim to find a model which tells everything or almost everything. In other words, we hunt for a perfect model (a total determinate graph) or for an exhaustive model (Markov Decision Process). Finding such a model is an overly ambitious task and indeed a practically unsolvable problem with complex worlds. In order to solve the problem, we will replace perfect and exhaustive models with Event-Driven models.
Category: Artificial Intelligence

[732] viXra:1902.0239 [pdf] submitted on 2019-02-13 23:19:21

The Ananda-Damayanthi-Radha-Rohith Rishi Sequence Trends Of Any Set Of Positive Real Numbers

Authors: Ramesh Chandra Bagadi
Comments: 5 Pages.

In this research investigation, the author has detailed the Theory Of Holistic Decomposition Of Any Set Of Prime Number, Natural Numbers, Positive Real Numbers As One Or More Sets Each With Some Periodicity Of The Number’s Non Integral Prime Basis Position Number.
Category: Artificial Intelligence

[731] viXra:1902.0236 [pdf] submitted on 2019-02-13 05:05:09

Theory Of Holistic Decomposition Of Any Set Of Any Natural Numbers As One Or More Sets Each With Some Periodicity Of The Number’s Non Integral Prime Basis Position Number. {Version 4}

Authors: Ramesh Chandra Bagadi
Comments: 5 Pages.

In this research investigation, the author has detailed the Theory Of Holistic Decomposition Of Any Set Of Any Natural Numbers As One Or More Sets Each With Some Periodicity Of The Number’s Non Integral Prime Basis Position Number.
Category: Artificial Intelligence

[730] viXra:1902.0220 [pdf] submitted on 2019-02-13 03:06:19

Comments on the Book "Architects of Intelligence" by Martin Ford in the Light of the SP Theory of Intelligence

Authors: J Gerard Wolff
Comments: 49 Pages.

The book "Architects of Intelligence" by Martin Ford presents conversations about AI between the author and people who are influential in the field. This paper discusses issues described in the book in relation to features of the "SP System", meaning the "SP Theory of Intelligence" and its realisation in the "SP Computer Model". The SP System, outlined in an appendix, has the potential to solve several of the problems in AI research described in the book, and some others. Strengths and potential of the SP System, which in many cases contrast with weaknesses of deep neural networks (DNNs), include the following: the system exhibits a more favourable combination of simplicity and versatility than, arguably, any alternatives; the system has strengths and long-term potential in pattern recognition; the system appears to be free of the tendency of DNNs to make large and unexpected errors in recognition; the system has strengths and potential in unsupervised learning, including grammatical inference; the SP Theory of Intelligence provides a theoretically coherent basis for generalisation and the avoidance of under- or over-generalisations; that theory of generalisation may help driverless cars avoid accidents; the system, unlike DNNs, can achieve learning from a single occurrence or experience; the system, unlike DNNs, has relatively tiny demands for computational resources and volumes of data, with much higher speeds in learning; the system, unlike most DNNs, has strengths in transfer learning; the system, unlike DNNs, provides transparency in the representation of knowledge and an audit trail for all its processing; the system has strengths and potential in the processing of natural language; as a by-product of its design, the system exhibits several different kinds of probabilistic reasoning; the system has strengths and potential in commonsense reasoning and the representation of commonsense knowledge; other strengths of the SP System are in information compression, biological validity, scope for adaptation, and freedom from catastrophic forgetting. Despite the importance of motivations and emotions, no attempt has been made in the SP research to investigate these areas.
Category: Artificial Intelligence

[729] viXra:1902.0195 [pdf] submitted on 2019-02-11 08:39:31

Machine Learning Quantum Fireworks

Authors: George Rajna
Comments: 43 Pages.

A greater understanding of these behaviors could one day feed into technology, he said, such as ways to extend the reach of quantum networks across greater distances. [25] Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23]
Category: Artificial Intelligence

[728] viXra:1902.0182 [pdf] submitted on 2019-02-11 02:15:05

Theory Of Holistic Decomposition Of Any Set Of Any Natural Numbers As One Or More Sets Each With Some Periodicity Of The Number’s Non Integral Prime Basis Position Number {Version 2}

Authors: Ramesh Chandra Bagadi
Comments: 4 Pages.

In this research investigation, the author has detailed the Theory Of Holistic Decomposition Of Any Set Of Any Natural Numbers As One Or More Sets Each With Some Periodicity Of The Number’s Non Integral Prime Basis Position Number.
Category: Artificial Intelligence

[727] viXra:1902.0174 [pdf] submitted on 2019-02-10 00:49:30

Theory Of Holistic Decomposition Of Any Set Of Any Natural Numbers As One Or More Sets Each With Some Periodicity Of The Number’s Non Integral Prime Basis Position Number

Authors: Ramesh Chandra Bagadi
Comments: 4 Pages.

In this research investigation, the author has detailed the Theory Of Holistic Decomposition Of Any Set Of Any Natural Numbers As One Or More Sets Each With Some Periodicity Of The Number’s Non Integral Prime Basis Position Number.
Category: Artificial Intelligence

[726] viXra:1902.0168 [pdf] submitted on 2019-02-09 05:50:49

Pictionary-Like Game in Robots

Authors: George Rajna
Comments: 54 Pages.

At the Allen Institute of Artificial Intelligence, a private research center perched on the north shore of Lake Union in Seattle, computer scientists are working on imbuing software with humanlike abilities to recognize images and understand language that could someday make that sort of collaboration possible. [30] A study by the Centre for Cognitive Science at TU Darmstadt shows that AI machines can indeed learn a moral compass from humans. [29] A group of researchers from MIT have already developed an AI robot that can assist in a labour room. [28]
Category: Artificial Intelligence

[725] viXra:1902.0151 [pdf] submitted on 2019-02-08 05:44:49

Theory Of Holistic Decomposition Of Any Set Of Given Primes As One Or More Sets Each With Some Periodicity Of The Prime Number’s Basis Position Number

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed the Theory Of Holistic Decomposition Of Any Set Of Given Primes As One Or More Sets Each With Some Periodicity Of The Prime Number’s Basis Position Number.
Category: Artificial Intelligence

[724] viXra:1902.0136 [pdf] submitted on 2019-02-08 03:10:15

Theory Of Decomposition Of Any Set Of Given Primes As One Or More Sets Each With Some Periodicity Of The Prime Number’s Basis Position Number

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed the Theory Of Decomposition Of Any Set Of Given Primes As One Or More Sets Each With Some Periodicity Of The Prime Number’s Basis Position Number.
Category: Artificial Intelligence

[723] viXra:1902.0123 [pdf] submitted on 2019-02-07 11:30:17

Moral Compass from Human Text

Authors: George Rajna
Comments: 52 Pages.

A study by the Centre for Cognitive Science at TU Darmstadt shows that AI machines can indeed learn a moral compass from humans. [29] A group of researchers from MIT have already developed an AI robot that can assist in a labour room. [28] Researchers at Fukuoka University, in Japan, have recently proposed a design methodology for configurable approximate arithmetic circuits. [27]
Category: Artificial Intelligence

[722] viXra:1902.0077 [pdf] submitted on 2019-02-05 02:28:35

Looking Into the Future of AI and the Embedded Systems Development with Interesting Intelligent IoT Applications Based on Specified Hardware & Software Via C/C++/Ruby/AI/ML Related Concepts – a Short Technical Note

Authors: Nirmal Tej Kumar
Comments: 3 Pages. Short Communication & Technical Notes

Looking into The Future of AI and the Embedded Systems Development With Interesting Intelligent IoT Applications based on Specified Hardware & Software via - C/C++/Ruby/AI/ML related Concepts – A Short Technical Note.
Category: Artificial Intelligence

[721] viXra:1902.0046 [pdf] submitted on 2019-02-03 08:35:44

1972 Proposal to Harvard for Backpropagation and Intelligent Reinforcement System

Authors: Paul Werbos
Comments: 32 Pages. What happened with this heresy from proposal to thesis is explained in Werbos (2006). Backwards differentiation in AD and neural nets: Past links and New Opportunities, In Automatic Differentiation, Springer

The great new revolution since 2010 in deep learning and machine learning based on neural networks is massively changing the world, and is a subject of great ruminations by high-level decision makers. (For example, see http://www.intelligence.senate.gov/hearings/open-hearing-worldwide-threats-hearing-1.) But the key design principles, such as backpropagation and reinforcement learning based on approximate dynamic programming (as in Alpha Go) were known, and were rejected as heresy, long ago. (See the recent book, https://www.amazon.com/Artificial-Intelligence-Neural-Networks-Computing-ebook/dp/B07K55YZRK, by the President of the International Neural Network Society, for an overview.) This paper, written in 1972 (and scanned into pdf in 2015), was the first explicit proposal for how to build a general reinforcement learning system, based on backpropagation and dynamic programming implemented through model neural networks, capable of converging to an optimal strategy of action in “any” environment informed by an understanding/model which it learns of how the environment works. Modern work uses more sophisticated language, but this effort to explain the underlying ideas in simpler language may still be of value to many. The 1974 thesis itself has been reprinted by Wiley, and has more than 4000 citations listed in scholar.google.com
Category: Artificial Intelligence

[720] viXra:1902.0033 [pdf] submitted on 2019-02-02 05:01:39

Software Detect Fake News

Authors: George Rajna
Comments: 48 Pages.

Fraunhofer researchers have developed a system that automatically analyzes social media posts, deliberately filtering out fake news and disinformation. [27] If you see a video of a politician speaking words he never would utter, or a Hollywood star improbably appearing in a cheap adult movie, don't adjust your television set—you may just be witnessing the future of "fake news." [26] Artificial intelligence can play chess, drive a car and diagnose medical issues. Examples include Google DeepMind's AlphaGo, Tesla's self-driving vehicles, and IBM's Watson. [25]
Category: Artificial Intelligence

[719] viXra:1902.0032 [pdf] submitted on 2019-02-02 05:18:09

Artificial Intelligence Harder for Hackers

Authors: George Rajna
Comments: 43 Pages.

Now, two Boston University computer scientists, working with researchers at Draper, a not-for-profit engineering solutions company located in Cambridge, have developed a tool that could make it harder for hackers to find their way into networks where they don't belong. [26] IBM, home of the question-answering computer system Watson, was by far the company with the largest portfolio of AI patent applications, with a total of 8,290 inventions, followed by Microsoft with 5,930, the report showed. [25] Researchers at Aalto University and the Technical University of Denmark have developed an artificial intelligence (AI) to seriously accelerate the development of new technologies from wearable electronics to flexible solar panels. [24]
Category: Artificial Intelligence

[718] viXra:1901.0473 [pdf] submitted on 2019-01-31 05:39:10

Quantum Leap in Artificial Intelligence

Authors: George Rajna
Comments: 41 Pages.

IBM, home of the question-answering computer system Watson, was by far the company with the largest portfolio of AI patent applications, with a total of 8,290 inventions, followed by Microsoft with 5,930, the report showed. [25] Researchers at Aalto University and the Technical University of Denmark have developed an artificial intelligence (AI) to seriously accelerate the development of new technologies from wearable electronics to flexible solar panels. [24] Citizen science is a boon for researchers, providing reams of data about everything from animal species to distant galaxies. [23] In early 2018, with support from IBM Corporate Citizenship and the Danish Ministry for Foreign Affairs, IBM and the Danish Refugee Council (DRC) embarked on a partnership aimed squarely at the need to better understand migration drivers and evidence-based policy guidance for a range of stakeholders. [22]
Category: Artificial Intelligence

[717] viXra:1901.0468 [pdf] submitted on 2019-01-31 08:21:54

Efficient Evaluation of AI Models

Authors: George Rajna
Comments: 43 Pages.

Recent studies have identified the lack of robustness in current AI models against adversarial examples—intentionally manipulated prediction-evasive data inputs that are similar to normal data but will cause well-trained AI models to misbehave. [26] IBM, home of the question-answering computer system Watson, was by far the company with the largest portfolio of AI patent applications, with a total of 8,290 inventions, followed by Microsoft with 5,930, the report showed. [25] Researchers at Aalto University and the Technical University of Denmark have developed an artificial intelligence (AI) to seriously accelerate the development of new technologies from wearable electronics to flexible solar panels. [24]
Category: Artificial Intelligence

[716] viXra:1901.0449 [pdf] submitted on 2019-01-30 09:38:29

Artificial Intelligence ARTIST

Authors: George Rajna
Comments: 40 Pages.

Researchers at Aalto University and the Technical University of Denmark have developed an artificial intelligence (AI) to seriously accelerate the development of new technologies from wearable electronics to flexible solar panels. [24] Citizen science is a boon for researchers, providing reams of data about everything from animal species to distant galaxies. [23] In early 2018, with support from IBM Corporate Citizenship and the Danish Ministry for Foreign Affairs, IBM and the Danish Refugee Council (DRC) embarked on a partnership aimed squarely at the need to better understand migration drivers and evidence-based policy guidance for a range of stakeholders. [22]
Category: Artificial Intelligence

[715] viXra:1901.0424 [pdf] submitted on 2019-01-28 08:31:44

AI in Citizen Science Data

Authors: George Rajna
Comments: 39 Pages.

Citizen science is a boon for researchers, providing reams of data about everything from animal species to distant galaxies. [23] In early 2018, with support from IBM Corporate Citizenship and the Danish Ministry for Foreign Affairs, IBM and the Danish Refugee Council (DRC) embarked on a partnership aimed squarely at the need to better understand migration drivers and evidence-based policy guidance for a range of stakeholders. [22]
Category: Artificial Intelligence

[714] viXra:1901.0411 [pdf] submitted on 2019-01-28 03:51:21

Misinformation with Deepfake Videos

Authors: George Rajna
Comments: 47 Pages.

If you see a video of a politician speaking words he never would utter, or a Hollywood star improbably appearing in a cheap adult movie, don't adjust your television set—you may just be witnessing the future of "fake news." [26] Artificial intelligence can play chess, drive a car and diagnose medical issues. Examples include Google DeepMind's AlphaGo, Tesla's self-driving vehicles, and IBM's Watson. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24]
Category: Artificial Intelligence

[713] viXra:1901.0396 [pdf] submitted on 2019-01-27 03:17:27

A Simple Introduction & Suggestion to Using BEC Theory, Statistics & Related Concepts Based on HOL/Haskell/Scala/ML/JikesRVM/HPC/DL Systems.

Authors: Nirmal Tej Kumar
Comments: 4 Pages. Short Technical Notes

A Simple Introduction & Suggestion to Using BEC Theory, Statistics & Related Concepts Based on HOL/Haskell/Scala/ML/JikesRVM/HPC/DL Systems.
Category: Artificial Intelligence

[712] viXra:1901.0374 [pdf] submitted on 2019-01-26 04:51:58

Refutation of a Universal Operator for Interpretable Deep Convolution Networks

Authors: Colin James III
Comments: 3 Pages. © Copyright 2019 by Colin James III All rights reserved. Respond to author by email only at: info@ersatz-systems dot com. See website ersatz-systems.com . (We warn troll Mikko at Disqus to read the article four times before hormonal typing.)

We evaluate a universal operator then apply the specified parameters to form operators for AND, OR, XOR, and MP (modus ponens). None are tautologous. This refutes the universal operator as proposed for interpretable deep convolution networks.
Category: Artificial Intelligence

[711] viXra:1901.0372 [pdf] submitted on 2019-01-25 07:00:27

Protect us from Advanced AI

Authors: George Rajna
Comments: 42 Pages.

Artificial intelligence can play chess, drive a car and diagnose medical issues. Examples include Google DeepMind's AlphaGo, Tesla's self-driving vehicles, and IBM's Watson. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21]
Category: Artificial Intelligence

[710] viXra:1901.0366 [pdf] submitted on 2019-01-24 21:03:15

Machine Learning Based Probing of Constraint Programming [using Dlib-Gecode] in the Context of Understanding Protein Folding Mechanisms – An Insight into Designing a Novel & Simple C++ Informatics Framework.

Authors: Nirmal Tej Kumar
Comments: 4 Pages. Short Communication & Technical Notes

Machine Learning Based Probing of Constraint Programming [using Dlib-Gecode] in the Context of Understanding Protein Folding Mechanisms – An Insight into Designing a Novel & Simple C++ Informatics Framework.
Category: Artificial Intelligence

[709] viXra:1901.0361 [pdf] submitted on 2019-01-25 04:14:03

A New Divergence Measure of Belief Function in D-S Evidence Theory

Authors: Fuyuan Xiao
Comments: 10 Pages.

Dempster-Shafer (D-S) evidence theory is useful for handling uncertainty problems. In D-S evidence theory, however, how to handle highly conflicting evidence is still an open issue. In this paper, a new reinforced belief divergence measure, called RB, is developed to measure the discrepancy between basic belief assignments (BBAs) in D-S evidence theory. The proposed RB divergence is the first work to consider both the correlations between belief functions and the subsets of the set of belief functions. Additionally, the RB divergence has merits for measurement: it can provide a more convincing and effective solution for measuring the discrepancy between BBAs in D-S evidence theory.
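Since the paper's RB formula is not reproduced in this abstract, the sketch below is only a generic editorial illustration of measuring the discrepancy between BBAs: it computes a Jensen-Shannon-style divergence over focal elements and is not the proposed RB divergence; the function name js_divergence and the example BBAs are invented for this illustration.

    # Generic illustration only: a Jensen-Shannon-style divergence between two
    # basic belief assignments (BBAs), treating each BBA as a discrete
    # distribution over focal elements. It is NOT the RB divergence of the paper.
    from math import log2

    def js_divergence(m1, m2):
        focal = set(m1) | set(m2)
        def kl(p, q):
            return sum(p.get(A, 0.0) * log2(p.get(A, 0.0) / q[A])
                       for A in focal if p.get(A, 0.0) > 0)
        avg = {A: 0.5 * (m1.get(A, 0.0) + m2.get(A, 0.0)) for A in focal}
        return 0.5 * kl(m1, avg) + 0.5 * kl(m2, avg)

    # Example BBAs on the frame {a, b}; focal elements are frozensets.
    m1 = {frozenset("a"): 0.6, frozenset("b"): 0.1, frozenset("ab"): 0.3}
    m2 = {frozenset("a"): 0.2, frozenset("b"): 0.5, frozenset("ab"): 0.3}
    print(js_divergence(m1, m2))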
Category: Artificial Intelligence

[708] viXra:1901.0344 [pdf] submitted on 2019-01-23 09:16:27

Ethically Aligned AI

Authors: George Rajna
Comments: 39 Pages.

At IBM Research, we have studied and assessed two ways to align AI systems to ethical principles. [23] In early 2018, with support from IBM Corporate Citizenship and the Danish Ministry for Foreign Affairs, IBM and the Danish Refugee Council (DRC) embarked on a partnership aimed squarely at the need to better understand migration drivers and evidence-based policy guidance for a range of stakeholders. [22] Scientists at the Allen Institute have used machine learning to train computers to see parts of the cell the human eye cannot easily distinguish. [21] Small angle X-ray scattering (SAXS) is one of a number of biophysical techniques used for determining the structural characteristics of biomolecules. [20]
Category: Artificial Intelligence

[707] viXra:1901.0330 [pdf] submitted on 2019-01-22 08:30:24

Improving Language Model Performance with Smarter Vocabularies

Authors: Brad Jascob
Comments: 5 Pages.

In the field of Language Modeling, neural-network models have become popular due to their ability to reach low Perplexity scores. A common approach to training these models is to use a large corpus, such as the Billion Word Corpus, and restrict the vocabulary to the top-N most common words (aka tokens). The less common words are then replaced with an “unknown” token. These unknown tokens then become a single representation for all low occurrence words which may not be closely related semantically. In addition, some closely related tokens, such as numbers, may be common enough to be given a unique integer ID when we might prefer that they be combined under a single ID. In the following article, we’ll explore using part-of-speech (POS) tagging to identify word types and then use this information to create a “smarter” vocabulary. Using this smarter vocabulary, we’ll show that it achieves a lower perplexity score, for a given epoch, than a similar model using a top-N type vocabulary.
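As an editorial illustration of the approach described above (not the author's code), the following minimal Python sketch, assuming NLTK's pos_tag tagger is available, maps rare tokens to POS-derived placeholders such as <unk_NN> and collapses cardinal numbers under a single <num> token; the function name smarter_vocab and the placeholder strings are invented for this example.

    # Minimal sketch (not the paper's code): build a "smarter" vocabulary by
    # replacing rare words with part-of-speech placeholders and collapsing numbers.
    from collections import Counter
    import nltk  # assumes nltk and its 'averaged_perceptron_tagger' data are installed

    def smarter_vocab(sentences, top_n=10000):
        counts = Counter(tok for sent in sentences for tok in sent)
        keep = {tok for tok, _ in counts.most_common(top_n)}
        mapped = []
        for sent in sentences:
            out = []
            for tok, tag in nltk.pos_tag(sent):
                if tag == "CD":               # cardinal numbers share one ID
                    out.append("<num>")
                elif tok in keep:             # frequent tokens keep their identity
                    out.append(tok)
                else:                         # rare tokens fall back to a POS placeholder
                    out.append("<unk_%s>" % tag)
            mapped.append(out)
        vocab = {tok: i for i, tok in enumerate(sorted({t for s in mapped for t in s}))}
        return vocab, mapped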
Category: Artificial Intelligence

[706] viXra:1901.0324 [pdf] submitted on 2019-01-22 10:01:00

Machine Learning Humanitarian Sector

Authors: George Rajna
Comments: 38 Pages.

In early 2018, with support from IBM Corporate Citizenship and the Danish Ministry for Foreign Affairs, IBM and the Danish Refugee Council (DRC) embarked on a partnership aimed squarely at the need to better understand migration drivers and evidence-based policy guidance for a range of stakeholders. [22] Scientists at the Allen Institute have used machine learning to train computers to see parts of the cell the human eye cannot easily distinguish. [21] Small angle X-ray scattering (SAXS) is one of a number of biophysical techniques used for determining the structural characteristics of biomolecules. [20]
Category: Artificial Intelligence

[705] viXra:1901.0313 [pdf] submitted on 2019-01-21 06:39:40

Universal Non Causal Future Average Of A Time Series Type Sequence

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel Universal Non Causal Future Average of a Time Series Type Sequence.
Category: Artificial Intelligence

[704] viXra:1901.0312 [pdf] submitted on 2019-01-21 06:42:30

Theoretical Model For Causal One Step Forecasting Of Any Time Series Type Sequence {Version 2}

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed the Theoretical Model For Causal One Step Forecasting Of Any Time Series Type Sequence.
Category: Artificial Intelligence

[703] viXra:1901.0291 [pdf] submitted on 2019-01-20 10:04:18

One Step Universal Evolution Of Any Real Positive Number

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed the Theory Of One Step Universal Evolution Of Any Real Positive Number.
Category: Artificial Intelligence

[702] viXra:1901.0268 [pdf] submitted on 2019-01-18 09:09:07

Refutation of the AI Experiment for a Divide the Dollar Competition

Authors: Colin James III
Comments: 1 Page. © Copyright 2019 by Colin James III All rights reserved. Respond to author by email only at: info@ersatz-systems dot com. See website ersatz-systems.com . (We warn troll Mikko at Disqus to read the article four times before hormonal typing.)

We evaluate the AI experiment for a divide the dollar competition at the June CEC2019 IEEE conference in New Zealand. The apparatus definition is not tautologous, hence refuting the experiment.
Category: Artificial Intelligence

[701] viXra:1901.0250 [pdf] submitted on 2019-01-17 09:16:32

AI Identifies Human Ancestor

Authors: George Rajna
Comments: 59 Pages.

The deep learning analysis has revealed that the extinct hominid is probably a descendant of the Neanderthal and Denisovan populations. [32] Our team at IBM Research – India collaborated with the IBM MetroPulse team to bring such first-of-a-kind, AI-driven capabilities to MetroPulse, an industry platform that brings together voluminous market, external and client datasets. [31] For people with hearing loss, it can be very difficult to understand and separate voices in noisy environments. This problem may soon be history thanks to a new groundbreaking algorithm that is designed to recognise and separate voices efficiently in unknown sound environments. [30]
Category: Artificial Intelligence

[700] viXra:1901.0225 [pdf] submitted on 2019-01-15 13:52:15

A Micro-robot With Camera to Track and Follow Objects

Authors: Gokul Krishna Srinivasan
Comments: 5 Pages.

A lot of work has been done to track a moving object with the aid of a camera. This paper describes one such technique, which can constantly track and follow moving objects; most of the work in this paper draws on the work of Nazim [4]. The camera is used as a feedback sensor to help the robot follow the object. The robot system is divided into two subsystems: vision and motion. The vision system comprises a two-motor pan-tilt camera driving mechanism with an embedded potentiometer sensor, a PCI image acquisition board and a PWM-based DC motor driver board. The motion system is made up of a two-wheel and two-castor platform driven by servomotors with amplifiers. The system tries to demonstrate the eye-tracking ability of a human when a moving object is in focus. The robot used for this purpose is Alice, the most recent model, developed in 2002.
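As a generic illustration only (not the paper's system), the following Python/OpenCV sketch shows the kind of visual feedback loop the abstract describes: it locates a coloured target in a frame and converts its horizontal offset into a proportional pan command; the function pan_command, the HSV thresholds and the gain are illustrative assumptions.

    # Generic illustration only (not the paper's system): track a coloured object
    # in a camera frame and turn its horizontal offset into a proportional pan
    # command, the kind of feedback loop the entry above describes.
    import cv2

    def pan_command(frame, lower_hsv, upper_hsv, gain=0.005):
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower_hsv, upper_hsv)      # binary mask of the target colour
        m = cv2.moments(mask)
        if m["m00"] == 0:                                   # target not visible
            return 0.0
        cx = m["m10"] / m["m00"]                            # target centroid (x)
        error = cx - frame.shape[1] / 2                     # offset from image centre
        return -gain * error                                # proportional pan-motor command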
Category: Artificial Intelligence

[699] viXra:1901.0224 [pdf] submitted on 2019-01-15 13:55:11

Implementation, Comparison and Literature Review of Spatio-Temporal and Compressed Domains Object Detection

Authors: Gokul Krishna Srinivasan
Comments: 12 Pages.

Object detection in a moving video stream plays a prominent role in every branch of science and research [1]. Object detection or tracking is done by two different methods, namely in the spatio-temporal domain and in the compressed domain. This project deals with both domains in order to bring out the advantages and disadvantages of each method in terms of computational complexity, efficiency, etc. Along with that, a detailed literature survey is also carried out on the same topic. Most image and video data are stored or transmitted after compression for efficiency. Processes like pattern detection and localization typically incur the extra expense of decompressing the data, since most image and video processing techniques require access to the original pixel values in the spatial domain. This project performs a statistical analysis of present object detection schemes in both the spatio-temporal and compressed domains. The results of multiple object detection in the spatio-temporal domain are compared to those of compressed-domain object detection, and various evaluation results are analyzed. The comparisons are made in terms of algorithmic complexity, efficiency and application.
Category: Artificial Intelligence

[698] viXra:1901.0205 [pdf] submitted on 2019-01-15 00:38:59

AI Help Retailers Understand Consumer

Authors: George Rajna
Comments: 59 Pages.

Our team at IBM Research – India collaborated with the IBM MetroPulse team to bring such first-of-a-kind, AI-driven capabilities to MetroPulse, an industry platform that brings together voluminous market, external and client datasets. [31] For people with hearing loss, it can be very difficult to understand and separate voices in noisy environments. This problem may soon be history thanks to a new groundbreaking algorithm that is designed to recognise and separate voices efficiently in unknown sound environments. [30]
Category: Artificial Intelligence

[697] viXra:1901.0173 [pdf] submitted on 2019-01-12 08:42:58

Syndromes Data Mining

Authors: George Rajna
Comments: 43 Pages.

With every news story, the concepts of data mining healthcare information move higher still up the research and policy agenda in this area. [26] The analysis of sensor data of machines, plants or buildings makes it possible to detect anomalous states early and thus to avoid further damage. [25] Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22]
Category: Artificial Intelligence

[696] viXra:1901.0166 [pdf] submitted on 2019-01-11 13:55:00

Automated Brain Disorders Diagnosis Through Deep Neural Networks

Authors: Gabriel A. Maggiotti
Comments: 8 Pages.

In most cases, the diagnosis of brain disorders such as epilepsy is slow and requires endless visits to doctors and EEG technicians. This project aims to automate brain disorder diagnosis by using artificial intelligence and deep learning. The brain can have many disorders that can be detected by reading an electroencephalogram (EEG). Using an EEG device and collecting the electrical signals directly from the brain with a non-invasive procedure gives significant information about its health. Classifying these signals and detecting anomalies in them is what doctors currently do when reading an EEG. With the right amount of data and the use of artificial intelligence, it could be possible to learn and classify these signals into groups (e.g. anxiety, epilepsy spikes, etc.). A trained neural network could then interpret those signals and identify evidence of a disorder, ultimately automating the detection and classification of such disorders.
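A minimal sketch (not the model described in the paper) of how such a classifier might look: a small 1-D convolutional network in PyTorch that maps fixed-length, multi-channel EEG windows to disorder classes; the class name EEGClassifier, the channel/class counts and the layer sizes are illustrative assumptions.

    # Illustrative sketch only (not the paper's model): a small 1-D CNN that maps
    # fixed-length, multi-channel EEG windows to disorder classes.
    import torch
    import torch.nn as nn

    class EEGClassifier(nn.Module):
        def __init__(self, n_channels=19, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),          # collapse the time axis
            )
            self.head = nn.Linear(64, n_classes)

        def forward(self, x):                      # x: (batch, channels, samples)
            return self.head(self.features(x).squeeze(-1))

    model = EEGClassifier()
    dummy = torch.randn(8, 19, 512)                # 8 windows, 19 electrodes, 512 samples
    print(model(dummy).shape)                      # torch.Size([8, 3])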
Category: Artificial Intelligence

[695] viXra:1901.0138 [pdf] submitted on 2019-01-10 10:47:06

Machine Learning Anomalies

Authors: George Rajna
Comments: 43 Pages.

The analysis of sensor data of machines, plants or buildings makes it possible to detect anomalous states early and thus to avoid further damage. [25] Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22]
Category: Artificial Intelligence

[694] viXra:1901.0133 [pdf] submitted on 2019-01-11 02:03:58

A Simple Understanding of Computational Complexity Theory & Spin Glass Theory & Ising Models Classification Based on Mathematical Concepts Using AI/ML/DL/Python Software.

Authors: Nirmal Tej Kumar
Comments: 5 Pages. Short Communication & Technical Notes

A Simple Understanding of Computational Complexity Theory & Spin Glass Theory & Ising Models Classification Based on Mathematical Concepts Using AI/ML/DL/Python Software.
Category: Artificial Intelligence

[693] viXra:1901.0112 [pdf] submitted on 2019-01-08 08:34:13

Artificial Neural Networks for Hearing

Authors: George Rajna
Comments: 55 Pages.

For people with hearing loss, it can be very difficult to understand and separate voices in noisy environments. This problem may soon be history thanks to a new groundbreaking algorithm that is designed to recognise and separate voices efficiently in unknown sound environments. [30] While researchers have taken steps to comprehensively catalogue the preferences of men and women, we still don't know which traits are the most important contributors to a person's attractiveness. [29] A group of researchers from MIT have already developed an AI robot that can assist in a labour room. [28] Researchers at Fukuoka University, in Japan, have recently proposed a design methodology for configurable approximate arithmetic circuits. [27] Researchers at Google have recently developed a new technique for synthesizing a motion blurred image, using a pair of un-blurred images captured in succession. [26] Constructing a neural network model for each new dataset is the ultimate nightmare for every data scientist. [25]
Category: Artificial Intelligence

[692] viXra:1901.0096 [pdf] submitted on 2019-01-07 08:18:30

Machine Predict Person's Attractiveness

Authors: George Rajna
Comments: 53 Pages.

While researchers have taken steps to comprehensively catalogue the preferences of men and women, we still don't know which traits are the most important contributors to a person's attractiveness.
Category: Artificial Intelligence

[691] viXra:1901.0084 [pdf] submitted on 2019-01-06 07:47:11

Future Average Of A Time Series Type Sequence

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel Future Average of a Time Series Type Sequence.
Category: Artificial Intelligence

[690] viXra:1901.0064 [pdf] submitted on 2019-01-05 12:18:32

Visual Navigation for Airborne Control of Ground Robots from Tethered Platform: Creation of the First Prototype

Authors: Ilan Ehrenfeld, Max Kogan, Oleg Kupervasser, Vitalii Sarychev, Irina Volinsky, Roman Yavich, Bar Zangbi
Comments: 11 Pages.

We propose control systems for the coordination of ground robots. We develop efficient robot coordination using devices located on towers or on a tethered aerial apparatus, tracking the robots in the controlled area and supervising their environment, including natural and artificial markings. A simple prototype of such a system was created in the Laboratory of Applied Mathematics of Ariel University (under the supervision of Prof. Domoshnitsky Alexander) in collaboration with the company TRANSIST VIDEO LLC (Skolkovo, Moscow). We plan to create a much more complicated prototype under a Kamin grant (Israel).
Category: Artificial Intelligence

[689] viXra:1901.0063 [pdf] submitted on 2019-01-05 12:42:49

Autopilot to Maintain Movement of a Drone in a Vertical Plane at a Constant Height in the Presence of Vision-Based Navigation

Authors: Alexander Domoshnitsky, Max Kogan, Oleg Kupervaser, Roman Yavich
Comments: 20 Pages. 14th IFAC WORKSHOP ON TIME DELAY SYSTEMS 2018 June 28-30 Budapest, Hungary

In this report we describe the correct operation of an autopilot for ensuring correct drone flight. With vision-based navigation there is a noticeable delay in delivering information about the position and orientation of a drone to the autopilot. In spite of this, we demonstrate that it is possible to provide stable flight at a constant height in a vertical plane. We describe how to form the relevant control signal for the autopilot when the navigation information is delayed.
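A minimal simulation sketch (not the authors' controller) of the situation the abstract describes: a discrete-time PD height controller in Python whose feedback measurement arrives several steps late, as with vision-based navigation; the gains, delay and time step are illustrative assumptions.

    # Illustrative sketch only (not the paper's autopilot): a discrete-time PD
    # height controller whose feedback measurement arrives with a fixed delay,
    # as happens with vision-based navigation.
    from collections import deque

    def simulate(delay_steps=5, dt=0.05, steps=400, kp=4.0, kd=3.0, target=1.0):
        z, vz = 0.0, 0.0                       # height [m] and vertical speed [m/s]
        buffer = deque([(z, vz)] * (delay_steps + 1), maxlen=delay_steps + 1)
        history = []
        for _ in range(steps):
            z_meas, vz_meas = buffer[0]        # oldest sample = delayed measurement
            u = kp * (target - z_meas) - kd * vz_meas   # commanded vertical acceleration
            vz += u * dt
            z += vz * dt
            buffer.append((z, vz))
            history.append(z)
        return history

    print(simulate()[-1])                       # with these gains the height settles near 1 m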
Category: Artificial Intelligence

[688] viXra:1901.0061 [pdf] submitted on 2019-01-05 13:43:15

Machine Learning Metabolic Processes

Authors: George Rajna
Comments: 47 Pages.

Bioinformatics researchers at Heinrich Heine University Düsseldorf (HHU) and the University of California at San Diego (UCSD) are using machine learning techniques to better understand enzyme kinetics and thus also complex metabolic processes. [27] DNA regions susceptible to breakage and loss are genetic hot spots for important evolutionary changes, according to a Stanford study. [26] For the English scientists involved, perhaps the most important fact is that their DNA read was about twice as long as the previous record, held by their Australian rivals. [25] Researchers from the University of Chicago have developed a high-throughput RNA sequencing strategy to study the activity of the gut microbiome. [24] Today a large international consortium of researchers published a complex but important study looking at how DNA works in animals. [23] Asymmetry plays a major role in biology at every scale: think of DNA spirals, the fact that the human heart is positioned on the left, our preference to use our left or right hand ... [22] Scientists reveal how a 'molecular machine' in bacterial cells prevents fatal DNA twisting, which could be crucial in the development of new antibiotic treatments. [21] In new research, Hao Yan of Arizona State University and his colleagues describe an innovative DNA walker, capable of rapidly traversing a prepared track. [20] Just like any long polymer chain, DNA tends to form knots. Using technology that allows them to stretch DNA molecules and image the behavior of these knots, MIT researchers have discovered, for the first time, the factors that determine whether a knot moves along the strand or "jams" in place. [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18]
Category: Artificial Intelligence

[687] viXra:1901.0051 [pdf] submitted on 2019-01-04 08:41:01

Commonsense Reasoning, Commonsense Knowledge, and the SP Theory of Intelligence

Authors: J Gerard Wolff
Comments: 63 Pages.

Commonsense reasoning (CSR) and commonsense knowledge (CSK) (together abbreviated as CSRK) are areas of study concerned with problems which are trivially easy for adults but which are challenging for artificial systems. This paper describes how the "SP System" -- meaning the "SP Theory of Intelligence" and its realisation in the "SP Computer Model" -- has strengths and potential in several aspects of CSRK. Some shortcomings of the system in that area may be overcome with planned future developments. A particular strength of the SP System is that it shows promise as an overarching theory for four areas of relative success with CSRK problems -- described by other authors -- which have been developed without any integrative theory. How the SP System may help to solve four other kinds of CSRK problem is described: 1) how the strength of evidence for a murder may be influenced by the level of lighting of the murder as it was witnessed; 2) how people may arrive at the commonly-accepted interpretation of phrases like "water bird"; 3) the interpretation of the horse's head scene in "The Godfather" film; and 4) how the SP System may help to resolve the reference of an ambiguous pronoun in sentences in the format of a 'Winograd schema'. Also described is why a fifth CSRK problem -- modelling how a chef may crack an egg into a bowl -- is beyond the capabilities of the SP System as it is now and how those deficiencies may be overcome via planned developments of the system.
Category: Artificial Intelligence

[686] viXra:1901.0042 [pdf] submitted on 2019-01-04 23:31:32

Smoke Detection: Revisit the PCA Matting Approach

Authors: Md. Mobarak Hossain
Comments: 11 Pages. Working Paper

This paper revisits a novel approach, PCA matting, for smoke detection in which removing the effect of the background image and extracting textural features are taken into account. The article considers an image as a linear blend of a smoke component and a background component. Under this assumption, the paper presents a model and its solution using the concept of PCA.
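As a generic illustration only (not the paper's PCA-matting model), the sketch below fits a PCA subspace to smoke-free background frames and scores a new frame by its reconstruction residual; the function names fit_background and smoke_score and the parameter k are invented for this example.

    # Generic illustration only (not the paper's PCA-matting model): fit a PCA
    # subspace to smoke-free background frames, then flag a new frame whose
    # reconstruction residual is large as containing a foreground (smoke) component.
    import numpy as np

    def fit_background(frames, k=5):
        X = np.stack([f.ravel() for f in frames]).astype(float)   # (n_frames, n_pixels)
        mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, vt[:k]                                        # top-k principal directions

    def smoke_score(frame, mean, basis):
        x = frame.ravel().astype(float) - mean
        recon = basis.T @ (basis @ x)                              # projection onto the subspace
        return np.linalg.norm(x - recon) / np.linalg.norm(x)       # relative residual

    # Usage with random stand-in data (real use would pass grayscale video frames):
    bg = [np.random.rand(64, 64) for _ in range(20)]
    mean, basis = fit_background(bg)
    print(smoke_score(np.random.rand(64, 64), mean, basis))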
Category: Artificial Intelligence

[685] viXra:1901.0038 [pdf] submitted on 2019-01-03 09:05:01

Deux Applications Des Méthodes de L’analyse Des Données Avec R

Authors: Ayoub Abraich
Comments: 32 Pages.

In this project, we apply two data analysis methods (hierarchical clustering and PCA) to study two data samples. We begin with a short presentation of the theoretical tools, and then we present our analysis with these two methods using the R language. The theoretical part is based mainly on the Wikistat course notes.
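The paper works in R; as a purely illustrative analogue in Python (showing the two methods, not the paper's data or code), the sketch below runs PCA with scikit-learn and agglomerative hierarchical clustering with SciPy on stand-in data.

    # Minimal Python analogue of the two methods discussed in the entry above
    # (the paper itself works in R): PCA followed by agglomerative hierarchical
    # clustering on a small numeric dataset.
    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage, fcluster

    X = np.random.rand(30, 6)                       # stand-in for a real data sample
    scores = PCA(n_components=2).fit_transform(X)   # project onto the first two components
    Z = linkage(scores, method="ward")              # build the dendrogram
    labels = fcluster(Z, t=3, criterion="maxclust") # cut it into three clusters
    print(labels)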
Category: Artificial Intelligence

[684] viXra:1901.0017 [pdf] submitted on 2019-01-02 09:40:49

Machine Learning Atomistic Simulations

Authors: George Rajna
Comments: 39 Pages.

To overcome these harsh limitations, the researchers exploited an artificial neural network (ANN) to learn the atomic interactions from quantum mechanics. [26] A new tool is drastically changing the face of chemical research-artificial intelligence. In a new paper published in Nature, researchers review the rapid progress in machine learning for the chemical sciences. [25] A new type of artificial-intelligence-driven chemistry could revolutionise the way molecules are discovered, scientists claim. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm-called MPLasso-that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses-so-called retrosyntheses-with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18]
Category: Artificial Intelligence

[683] viXra:1812.0454 [pdf] submitted on 2018-12-29 01:37:51

IMAGEAI Interaction with ImageJ via Jython Plugin/JikesRVM in the Context of Advanced Image Processing and Analysis – a Useful Insight Into the Promising World of AI, Python & Java Based Image Processing Informatics Framework.

Authors: Nirmal Tej Kumar
Comments: 3 Pages. Short Communication & Technical Notes

IMAGEAI Interaction with ImageJ via Jython Plugin/JikesRVM in the context of Advanced Image Processing and Analysis – A Useful Insight into the Promising World of AI, Python & Java Based Image Processing Informatics Framework.
Category: Artificial Intelligence

[682] viXra:1812.0452 [pdf] submitted on 2018-12-29 04:15:29

Electron Microscopy Deep Learning

Authors: George Rajna
Comments: 48 Pages.

MENNDL, an artificial intelligence system, automatically designed an optimal deep learning network to extract structural information from raw atomic-resolution microscopy data. [27] Researchers at Google have recently developed a new technique for synthesizing a motion blurred image, using a pair of un-blurred images captured in succession. [26] Constructing a neural network model for each new dataset is the ultimate nightmare for every data scientist. [25] Algorithmic fairness is increasingly important because as more decisions of greater importance are made by computer programs, the potential for harm grows. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21]
Category: Artificial Intelligence

[681] viXra:1812.0451 [pdf] submitted on 2018-12-29 04:33:46

Structure of Artificial Neural Networks

Authors: George Rajna
Comments: 51 Pages.

A team of researchers at RWTH Aachen University's Institute of Information Management in Mechanical Engineering have recently explored the use of neuroscience techniques to determine how information is structured inside artificial neural networks (ANNs). [28] MENNDL, an artificial intelligence system, automatically designed an optimal deep learning network to extract structural information from raw atomic-resolution microscopy data. [27] Researchers at Google have recently developed a new technique for synthesizing a motion blurred image, using a pair of un-blurred images captured in succession. [26] Constructing a neural network model for each new dataset is the ultimate nightmare for every data scientist. [25] Algorithmic fairness is increasingly important because as more decisions of greater importance are made by computer programs, the potential for harm grows. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22]
Category: Artificial Intelligence

[680] viXra:1812.0450 [pdf] submitted on 2018-12-29 04:51:45

Fourth Industrial Revolution

Authors: George Rajna
Comments: 53 Pages.

Indeed, what many are calling "the Fourth Industrial Revolution" is already here, disrupting jobs and labor markets, largely because of the rise and advance of artificial intelligence and robotics. [29] A team of researchers at RWTH Aachen University's Institute of Information Management in Mechanical Engineering have recently explored the use of neuroscience techniques to determine how information is structured inside artificial neural networks (ANNs). [28] MENNDL, an artificial intelligence system, automatically designed an optimal deep learning network to extract structural information from raw atomic-resolution microscopy data. [27] Researchers at Google have recently developed a new technique for synthesizing a motion blurred image, using a pair of un-blurred images captured in succession. [26] Constructing a neural network model for each new dataset is the ultimate nightmare for every data scientist. [25] Algorithmic fairness is increasingly important because as more decisions of greater importance are made by computer programs, the potential for harm grows. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20]
Category: Artificial Intelligence

[679] viXra:1812.0443 [pdf] submitted on 2018-12-27 10:24:18

Review: Generic Multi-Objective Deep Reinforcement Learning(MODRL)

Authors: Norio Kosaka
Comments: 6 Pages. See the original paper as well. https://arxiv.org/ftp/arxiv/papers/1803/1803.02965.pdf

In this paper, the author reviews the existing survey on MODRL published in March 2018 by Thanh Thi Nguyen and discusses the variety of reinforcement learning approaches in the multi-objective problem setting.
Category: Artificial Intelligence

[678] viXra:1812.0351 [pdf] submitted on 2018-12-19 07:47:39

NeuNetS for Broader Adoption of AI

Authors: George Rajna
Comments: 50 Pages.

On December 14, 2018, IBM released NeuNetS, a fundamentally new capability that addresses the skills gap for development of latest AI models for a wide range of business domains. [27] Machine learning algorithms now underlie much of the software we use, helping to personalize our news feeds and finish our thoughts before we're done typing. [26] Constructing a neural network model for each new dataset is the ultimate nightmare for every data scientist. [25]
Category: Artificial Intelligence

[677] viXra:1812.0328 [pdf] submitted on 2018-12-20 05:04:18

(Eped Version 1.0 1.12.2018 6 Pages) the “Electronic Pediatrician” (Eped) – a Demo Software for Computer-Assisted Pediatric Diagnosis and Treatment Implemented Using Microsoft Visual Basic 6 (VB6), with Extended Applicability

Authors: Andrei Lucian Dragoi
Comments: 6 Pages.

This paper presents EPed (an abbreviation for “Electronic Pediatrician”), a demo software for computer-assisted pediatric diagnosis and treatment built by the author in Microsoft Visual Basic 6 (VB6), a software with extended applicability. Keywords: “Electronic Pediatrician” (EPed); computer-assisted pediatric diagnosis and treatment; Microsoft Visual Basic 6 (VB6).
Category: Artificial Intelligence

[676] viXra:1812.0306 [pdf] submitted on 2018-12-17 09:55:28

Power Law and Dimension of the Maximum Value for Belief Distribution with the Max Deng Entropy

Authors: Bingyi Kang
Comments: 13 Pages.

Deng entropy, an extension of Shannon entropy, is a novel and efficient uncertainty measure for dealing with imprecise phenomena. In this paper, the power law and the dimension of the maximum value for the belief distribution with the maximum Deng entropy are presented, which partially uncover the inherent physical meaning of Deng entropy from the perspective of statistics. This indicates that some work related to power laws or scale-free behaviour can be analyzed using Deng entropy. The results of some numerical simulations are used to support the new views.
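For reference, the following LaTeX note recalls the usual definition of Deng entropy and, as reported in earlier work on the maximum Deng entropy, the belief distribution that attains the maximum; the paper's specific power-law and dimension results are not reproduced here.

    % Deng entropy of a basic belief assignment m over the frame \Theta
    E_d(m) = -\sum_{A \subseteq \Theta,\ m(A) > 0} m(A)\,
             \log_2 \frac{m(A)}{2^{|A|} - 1}
    % The maximum is reported (in earlier work on the maximum Deng entropy) to be
    % attained by the belief distribution
    m(A) = \frac{2^{|A|} - 1}{\sum_{B \subseteq \Theta,\, B \neq \emptyset} \bigl(2^{|B|} - 1\bigr)},
    \qquad
    E_d^{\max} = \log_2 \!\sum_{B \subseteq \Theta,\, B \neq \emptyset} \bigl(2^{|B|} - 1\bigr).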
Category: Artificial Intelligence

[675] viXra:1812.0286 [pdf] submitted on 2018-12-16 10:19:48

Trustworthy Artificial Intelligence

Authors: George Rajna
Comments: 48 Pages.

Machine learning algorithms now underlie much of the software we use, helping to personalize our news feeds and finish our thoughts before we're done typing. [26] Constructing a neural network model for each new dataset is the ultimate nightmare for every data scientist. [25] Algorithmic fairness is increasingly important because as more decisions of greater importance are made by computer programs, the potential for harm grows. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22]
Category: Artificial Intelligence

[674] viXra:1812.0284 [pdf] submitted on 2018-12-16 11:31:38

Physiological Model of One Materialized Human Thought

Authors: Kondratenko Viktoria
Comments: 11 Pages.

ABSTRACT: In this article, the construction in the second signal system of a correct reflex ring – a physiological model of one of the materialized elementary or compound thoughts of a person – is shown on a specific example. A functionally complete formal language, the language of predicate logic, is used as the tool. The methodology is described in Kondratenko's Theory of axiomatic modeling (1). According to the Theory, the problem, like any other functional problem in any domain, is interpreted in mathematical logic as a theorem to be proved. The reflex ring is a physiological model of one of the materialized elementary or compound thoughts of a specific person, since the ring represents a fragment of that person's neural network. The logical product of the concepts of knowledge reflected in concepts No. 1–7 is guaranteed to provide the construction of a correct reflex ring having the property of "being a physiological model of one of the materialized elementary or compound thoughts of a person". When reflecting on visual carriers any concrete, functionally complete meaning obtained in the course of studying the natural and man-made phenomena of the universe, purely formulaic texts are the ideal format in terms of the number of symbols required. Even the axiomatic format for reflecting such meanings demands one to two orders of magnitude more symbols, not to mention the verbal format, which in certain cases can exceed the number of formulaic symbols by four orders of magnitude. This fact acquires special importance when biological and medical knowledge is reflected on visual carriers.
Category: Artificial Intelligence

[673] viXra:1812.0270 [pdf] submitted on 2018-12-15 06:06:03

Accuracy of Neural Network

Authors: George Rajna
Comments: 45 Pages.

Constructing a neural network model for each new dataset is the ultimate nightmare for every data scientist. [25] Algorithmic fairness is increasingly important because as more decisions of greater importance are made by computer programs, the potential for harm grows. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20]
Category: Artificial Intelligence

[672] viXra:1812.0269 [pdf] submitted on 2018-12-15 08:19:41

Synthesizing Motion-Blurred Images

Authors: George Rajna
Comments: 47 Pages.

Researchers at Google have recently developed a new technique for synthesizing a motion blurred image, using a pair of un-blurred images captured in succession. [26] Constructing a neural network model for each new dataset is the ultimate nightmare for every data scientist. [25] Algorithmic fairness is increasingly important because as more decisions of greater importance are made by computer programs, the potential for harm grows. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20]
Category: Artificial Intelligence

[671] viXra:1812.0268 [pdf] submitted on 2018-12-15 08:45:37

Approximate Computing Approach

Authors: George Rajna
Comments: 49 Pages.

Researchers at Fukuoka University, in Japan, have recently proposed a design methodology for configurable approximate arithmetic circuits. [27] Researchers at Google have recently developed a new technique for synthesizing a motion blurred image, using a pair of un-blurred images captured in succession. [26] Constructing a neural network model for each new dataset is the ultimate nightmare for every data scientist. [25] Algorithmic fairness is increasingly important because as more decisions of greater importance are made by computer programs, the potential for harm grows. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23]
Category: Artificial Intelligence

[670] viXra:1812.0265 [pdf] submitted on 2018-12-15 09:05:39

AI Control of Human Birth

Authors: George Rajna
Comments: 51 Pages.

A group of researchers from MIT have already developed an AI robot that can assist in a labour room. [28] Researchers at Fukuoka University, in Japan, have recently proposed a design methodology for configurable approximate arithmetic circuits. [27] Researchers at Google have recently developed a new technique for synthesizing a motion blurred image, using a pair of un-blurred images captured in succession. [26]
Category: Artificial Intelligence

[669] viXra:1812.0250 [pdf] submitted on 2018-12-14 11:59:51

Aspie96 at IronITA (EVALITA 2018): Irony Detection in Italian Tweets with Character-Level Convolutional RNN

Authors: Valentino Giudice
Comments: 6 Pages.

Irony is characterized by a strong contrast between what is said and what is meant: this makes its detection an important task in sentiment analysis. In recent years, neural networks have given promising results in different areas, including irony detection. In this report, I describe the system used by the Aspie96 team in the IronITA competition (part of EVALITA 2018) for irony and sarcasm detection in Italian tweets.
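A minimal sketch (not the Aspie96 system) of a character-level convolutional RNN for binary irony classification in PyTorch; the class name CharConvRNN and all layer sizes are illustrative assumptions.

    # Illustrative sketch only (not the Aspie96 system): a character-level model
    # that applies a 1-D convolution over character embeddings and feeds the
    # result to a GRU for binary irony classification of a tweet.
    import torch
    import torch.nn as nn

    class CharConvRNN(nn.Module):
        def __init__(self, n_chars=256, emb=16, conv=64, hidden=64):
            super().__init__()
            self.emb = nn.Embedding(n_chars, emb)
            self.conv = nn.Conv1d(emb, conv, kernel_size=5, padding=2)
            self.rnn = nn.GRU(conv, hidden, batch_first=True)
            self.out = nn.Linear(hidden, 1)

        def forward(self, chars):                  # chars: (batch, seq_len) int codes
            x = self.emb(chars).transpose(1, 2)    # -> (batch, emb, seq_len)
            x = torch.relu(self.conv(x)).transpose(1, 2)
            _, h = self.rnn(x)                     # h: (1, batch, hidden)
            return torch.sigmoid(self.out(h[-1]))  # irony probability per tweet

    model = CharConvRNN()
    print(model(torch.randint(0, 256, (4, 140))).shape)   # torch.Size([4, 1])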
Category: Artificial Intelligence

[668] viXra:1812.0247 [pdf] submitted on 2018-12-15 05:30:08

Fair Computer-Aided Decisions

Authors: George Rajna
Comments: 44 Pages.

Algorithmic fairness is increasingly important because as more decisions of greater importance are made by computer programs, the potential for harm grows. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20]
Category: Artificial Intelligence

[667] viXra:1812.0185 [pdf] submitted on 2018-12-10 09:32:52

Machine Learning Design Peptides

Authors: George Rajna
Comments: 45 Pages.

Northwestern University researchers, teaming up with collaborators at Cornell University and the University of California, San Diego, have developed a new way of finding optimal peptide sequences: using a machine-learning algorithm as a collaborator. [28] A team of researchers led by Sanket Deshmukh, assistant professor of chemical engineering, has developed a method to investigate the structures of polymers that are sensitive to external stimuli. [27] In recent experiments studying analogous extreme waves of light, researchers have used artificial intelligence to study this problem, and have now determined a probability distribution that preferentially identifies the emergence of rogue waves. [26] Artificial intelligence's potential lies not only in its ability to help improve the educational performance of learners, but also in its capacity to foster respect and understanding between all humans. [25]
Category: Artificial Intelligence

[666] viXra:1812.0141 [pdf] submitted on 2018-12-07 07:58:14

AI Predict Rogue Waves of Light

Authors: George Rajna
Comments: 42 Pages.

In recent experiments studying analogous extreme waves of light, researchers have used artificial intelligence to study this problem, and have now determined a probability distribution that preferentially identifies the emergence of rogue waves. [26] Artificial intelligence's potential lies not only in its ability to help improve the educational performance of learners, but also in its capacity to foster respect and understanding between all humans. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17]
Category: Artificial Intelligence

[665] viXra:1812.0124 [pdf] submitted on 2018-12-08 04:29:44

Machine Learning in Biomedical Field

Authors: George Rajna
Comments: 43 Pages.

A team of researchers led by Sanket Deshmukh, assistant professor of chemical engineering, has developed a method to investigate the structures of polymers that are sensitive to external stimuli. [27] In recent experiments studying analogous extreme waves of light, researchers have used artificial intelligence to study this problem, and have now determined a probability distribution that preferentially identifies the emergence of rogue waves. [26] Artificial intelligence's potential lies not only in its ability to help improve the educational performance of learners, but also in its capacity to foster respect and understanding between all humans. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18]
Category: Artificial Intelligence

[664] viXra:1812.0094 [pdf] submitted on 2018-12-05 10:06:34

AI Transform Education

Authors: George Rajna
Comments: 40 Pages.

Artificial intelligence's potential lies not only in its ability to help improve the educational performance of learners, but also in its capacity to foster respect and understanding between all humans. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21]
Category: Artificial Intelligence

[663] viXra:1812.0081 [pdf] submitted on 2018-12-04 09:41:03

AI Chip Radiation at CERN

Authors: George Rajna
Comments: 44 Pages.

An ESA-led team subjected Intel's new Myriad 2 artificial intelligence chip to one of the most energetic radiation beams available on Earth. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[662] viXra:1812.0080 [pdf] submitted on 2018-12-04 10:06:16

AI-Motorized Wheelchair

Authors: George Rajna
Comments: 34 Pages.

A new wheelchair may give people with severe mobility challenges another reason to smile about artificial intelligence—that grin might literally help them control their wheelchair. [22] Now, researchers at Stanford University have devised a new type of artificially intelligent camera system that can classify images faster and more energy efficiently, and that could one day be built small enough to be embedded in the devices themselves, something that is not possible today. [21] Today, deep neural networks with different architectures, such as convolutional, recurrent and autoencoder networks, are becoming an increasingly popular area of research. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[661] viXra:1812.0079 [pdf] submitted on 2018-12-04 10:20:56

On-Demand Video Dispatch Networks: A Scalable End-to-End Learning Approach

Authors: Damao Yang, Sihan Peng, He Huang, Hongliang Xue
Comments: 11 Pages.

We design a dispatch system to improve the peak service quality of video on demand (VOD). Our system predicts the hot videos during the peak hours of the next day based on the historical requests, and dispatches to the content delivery networks (CDNs) at the previous off-peak time. In order to scale to billions of videos, we build the system with two neural networks, one for video clustering and the other for dispatch policy developing. The clustering network employs autoencoder layers and reduces the video number to a fixed value. The policy network employs fully connected layers and ranks the clustered videos with dispatch probabilities. The two networks are coupled with weight-sharing temporal layers, which analyze the video request sequences with convolutional and recurrent modules. Therefore, the clustering and dispatch tasks are trained in an end-to-end mechanism. The real-world results show that our approach achieves an average prediction accuracy of 17%, compared with 3% from the present baseline method, for the same amount of dispatches.
Category: Artificial Intelligence
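A minimal sketch of the two coupled networks described in [661] is given below: a shared temporal trunk (convolutional plus recurrent layers over request histories), a clustering head standing in for the autoencoder-based clustering network that reduces the video set to a fixed number of clusters, and a fully connected policy head that outputs dispatch probabilities. All layer sizes, names and module choices are assumptions made for illustration, not the authors' actual configuration.

```python
# Minimal sketch of the two coupled networks described in [661]; sizes and
# names are illustrative assumptions, not the authors' setup.
import torch
import torch.nn as nn

class SharedTemporalTrunk(nn.Module):
    """Weight-sharing temporal layers that encode a video's request history."""
    def __init__(self, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
        self.gru = nn.GRU(input_size=16, hidden_size=hidden, batch_first=True)

    def forward(self, requests):                  # requests: (num_videos, seq_len)
        x = self.conv(requests.unsqueeze(1))      # (num_videos, 16, seq_len)
        _, h = self.gru(x.transpose(1, 2))        # h: (1, num_videos, hidden)
        return h.squeeze(0)                       # per-video embeddings

class DispatchModel(nn.Module):
    def __init__(self, hidden=64, num_clusters=256):
        super().__init__()
        self.trunk = SharedTemporalTrunk(hidden)
        self.cluster_head = nn.Linear(hidden, num_clusters)   # reduces the video set to a fixed number of clusters
        self.policy_head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, requests):
        emb = self.trunk(requests)
        assign = torch.softmax(self.cluster_head(emb), dim=-1)  # soft cluster assignment per video
        cluster_emb = assign.t() @ emb                           # (num_clusters, hidden)
        dispatch_prob = torch.sigmoid(self.policy_head(cluster_emb)).squeeze(-1)
        return assign, dispatch_prob

model = DispatchModel()
history = torch.rand(1000, 168)              # 1000 videos, one week of hourly request counts
assignments, probabilities = model(history)  # both heads share the trunk, so training is end-to-end
```

Because both heads sit on the same trunk, a dispatch loss back-propagates into the clustering as well, which is one way to realise the end-to-end training the abstract describes.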

[660] viXra:1812.0069 [pdf] submitted on 2018-12-05 00:04:07

Divergence Measure of Intuitionistic Fuzzy Sets

Authors: Fuyuan Xiao
Comments: 9 Pages.

As a generalization of fuzzy sets, the intuitionistic fuzzy sets (IFSs) have a more powerful ability to represent and deal with the uncertainty of information. The distance measure between IFSs is still an open question. In this paper, we propose a new distance measure between IFSs on the basis of the Jensen–Shannon divergence. The new distance measure of IFSs not only satisfies the axiomatic definition of a distance measure, but also better discriminates the difference between IFSs. As a result, the new distance measure can generate more reasonable results.
Category: Artificial Intelligence
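One possible reading of the construction in [660] is sketched below: each intuitionistic fuzzy value (membership, non-membership) is expanded with its hesitancy degree into a three-component probability vector, and the Jensen–Shannon divergence between those vectors gives a distance. The normalisation and the square root used here are assumptions for illustration, not necessarily the author's definition.

```python
# Illustrative Jensen-Shannon-based distance between two intuitionistic fuzzy
# values given as (membership mu, non-membership nu); hesitancy is 1 - mu - nu.
# The exact formula in [660] may differ; this only shows the general idea.
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (log base 2) between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def ifs_distance(x, y):
    p = [x[0], x[1], 1 - x[0] - x[1]]   # membership, non-membership, hesitancy
    q = [y[0], y[1], 1 - y[0] - y[1]]
    return math.sqrt(js_divergence(p, q))   # square root keeps the value in [0, 1]

print(ifs_distance((0.6, 0.3), (0.4, 0.4)))
```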

[659] viXra:1812.0059 [pdf] submitted on 2018-12-03 07:53:03

Dual 8-bit Breakthrough in AI

Authors: George Rajna
Comments: 43 Pages.

IBM Research launched the reduced-precision approach to AI model training and inference with a landmark paper describing a novel dataflow approach for conventional CMOS technologies to rev up hardware platforms by dramatically reducing the bit precision of data and computations. [24] Recent successful applications of deep learning include medical image analysis, speech recognition, language translation, image classification, as well as addressing more specific tasks, such as solving inverse imaging problems. [23] Researchers at Caltech have developed an artificial neural network made out of DNA that can solve a classic machine learning problem: correctly identifying handwritten numbers. [22] Researchers have devised a magnetic control system to make tiny DNA-based robots move on demand—and much faster than recently possible. [21] Humans have 46 chromosomes, and each one is capped at either end by repetitive sequences called telomeres. [20] Just like any long polymer chain, DNA tends to form knots. Using technology that allows them to stretch DNA molecules and image the behavior of these knots, MIT researchers have discovered, for the first time, the factors that determine whether a knot moves along the strand or "jams" in place. [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18] An LMU team now reveals the inner workings of a molecular motor made of proteins which packs and unpacks DNA. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15]
Category: Artificial Intelligence
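The reduced-precision idea summarised in [659] can be illustrated with a generic symmetric 8-bit quantization of a weight tensor. This is a textbook scheme, not IBM's actual dataflow or training method; the scale choice and rounding below are assumptions for illustration only.

```python
# Generic symmetric int8 quantization of a weight tensor; an illustration of
# trading bit precision for efficiency, not IBM's actual reduced-precision scheme.
import numpy as np

def quantize_int8(x):
    """Map a float array to int8 values plus a single scale factor."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(weights)
print("max quantization error:", np.abs(weights - dequantize_int8(q, scale)).max())
```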

[658] viXra:1812.0054 [pdf] submitted on 2018-12-03 11:32:36

Universal Forecasting Scheme By Professor Ramesh Chandra Bagadi

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has presented a Universal Forecasting Scheme.
Category: Artificial Intelligence

[657] viXra:1812.0053 [pdf] submitted on 2018-12-03 11:40:35

Theoretical Model For Holistic Non Unique Clustering By Professor Ramesh Chandra Bagadi

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has presented the analysis of Theoretical Model For Holistic Non Unique Clustering.
Category: Artificial Intelligence

[656] viXra:1811.0508 [pdf] submitted on 2018-11-29 11:19:52

Elastic Foam Uses Machine Learning

Authors: George Rajna
Comments: 47 Pages.

Elastic foams that can detect deformities in their shape have been created by scientists in the US. The materials use a combination of optical fibres and machine learning techniques to measure deformations. [26] Today IBM Research is introducing IBM Crypto Anchor Verifier, a new technology that brings innovations in AI and optical imaging together to help prove the identity and authenticity of objects. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24]
Category: Artificial Intelligence

[655] viXra:1811.0506 [pdf] submitted on 2018-11-29 12:40:00

A New Representation of Basic Probability Assignment in Dempster-Shafer Theory

Authors: Ziyuan Luo, Yong Deng
Comments: 24 Pages.

Because of its superiority in dealing with the expression of uncertainty, Dempster-Shafer theory (D-S theory) is widely used in decision theory. In D-S theory, the basic probability assignment (BPA) is the basis and core. Recently, some researchers have represented a BPA on an N-dimension frame of discernment (FOD) as a 2^N-dimension vector in a Descartes coordinate system. However, the concept of orthogonality in this method is confusing and hard to justify. A new representation method of BPA is proposed in this paper. In this method, the BPA on an N-dimension FOD is represented as an N-dimension vector with parameters. The BPA is then expressed as a subset of N-dimension Cartesian space. The essence of this method is to convert the BPA to a probability distribution (PD) with parameters. Based on this method, problems in D-S theory can be solved, including the fusion of BPAs, the distance between BPAs, the correspondence between BPA and probability, and the entropy of BPAs. This representation conforms to the definition of orthogonality and yields satisfactory computational results.
Category: Artificial Intelligence
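The conversion from a basic probability assignment to a probability distribution mentioned in [655] can be illustrated with the standard pignistic transformation, which splits each focal element's mass equally among its singletons. The paper's parameterised vector representation is more general, so the sketch below is only an assumption-laden illustration of the BPA-to-PD step.

```python
# Illustrative conversion of a basic probability assignment (BPA) to a
# probability distribution using the standard pignistic transformation;
# the parameterised representation in [655] is more general than this.
def pignistic(bpa, frame):
    """Split each focal element's mass equally among its member hypotheses."""
    prob = {h: 0.0 for h in frame}
    for focal, mass in bpa.items():
        for h in focal:
            prob[h] += mass / len(focal)
    return prob

frame = ("a", "b", "c")
bpa = {("a",): 0.5, ("a", "b"): 0.3, ("a", "b", "c"): 0.2}   # masses sum to 1
print(pignistic(bpa, frame))   # {'a': 0.716..., 'b': 0.216..., 'c': 0.066...}
```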

[654] viXra:1811.0452 [pdf] submitted on 2018-11-27 13:24:33

Rethinking Intelligence and Computer Processing

Authors: Savior F. Eason
Comments: 16 Pages.

In the book "The Hitchhiker's Guide to the Galaxy", a computer by the name of "Deep Thought" is constructed to calculate the meaning of life, the universe, and everything. This paper presents my own personal project, project ARIS, which forces us to rethink AI, data processing, and ways we could integrate cyber-technology into humans and animals, creating post-biological cyber-beings, and also goes into some of the philosophical implications of this. It also explains an algorithm I created that would allow us to create an advanced, "True" AI and avoid a singularity situation similar to Skynet in the movie "Terminator" or Ultron in "Avengers: Age of Ultron". It also presents a new kind of data processor and a computer that would allow mankind to process the markup of reality itself, and possibly even use truly simulated universes as a source of computational power.
Category: Artificial Intelligence

[653] viXra:1811.0431 [pdf] submitted on 2018-11-26 08:26:54

Machine Learning Without Negative Data

Authors: George Rajna
Comments: 42 Pages.

A research team from the RIKEN Center for Advanced Intelligence Project (AIP) has successfully developed a new method for machine learning that allows an AI to make classifications without what is known as "negative data," a finding which could lead to wider application to a variety of classification tasks. [26] Artificial intelligence is helping improve safety along a stretch of Las Vegas' busiest highway. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21]
Category: Artificial Intelligence

[652] viXra:1811.0417 [pdf] submitted on 2018-11-26 22:55:36

Distance Measure of Pythagorean Fuzzy Sets

Authors: Fuyuan Xiao
Comments: 10 Pages.

The Pythagorean fuzzy set (PFS), as an extension of the intuitionistic fuzzy set, is more capable of expressing and handling uncertainty in uncertain environments. However, how to appropriately measure the distance between Pythagorean fuzzy sets is still an open issue. Therefore, a novel distance measure between Pythagorean fuzzy sets based on the Jensen–Shannon divergence is proposed in this paper. The new distance measure has the following merits: i) it meets the axiomatic definition of a distance measure; ii) it can better indicate the discrimination degree of PFSs. Numerical examples then demonstrate that the PFSJS distance measure is feasible and reasonable.
Category: Artificial Intelligence
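As background to [652], the sketch below shows the defining Pythagorean constraint mu^2 + nu^2 <= 1 and the resulting hesitancy degree, which is what lets a Pythagorean fuzzy value express pairs that an intuitionistic fuzzy set cannot. The PFSJS distance itself is defined in the paper and is not reproduced here.

```python
# A Pythagorean fuzzy value only requires mu**2 + nu**2 <= 1, so it can hold
# pairs such as (0.8, 0.5) that an intuitionistic fuzzy set cannot represent.
# The PFSJS distance of [652] is not reproduced here.
import math

class PythagoreanFuzzyValue:
    def __init__(self, mu, nu):
        if mu ** 2 + nu ** 2 > 1.0:
            raise ValueError("not a valid Pythagorean fuzzy value")
        self.mu, self.nu = mu, nu

    @property
    def hesitancy(self):
        return math.sqrt(1.0 - self.mu ** 2 - self.nu ** 2)

p = PythagoreanFuzzyValue(0.8, 0.5)   # invalid as an intuitionistic pair, since 0.8 + 0.5 > 1
print(p.hesitancy)
```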

[651] viXra:1811.0390 [pdf] submitted on 2018-11-24 13:19:39

Deng Entropy in Thermodynamics

Authors: Fan Liu; Yong Deng
Comments: 8 Pages.

Entropy, as a measure of disorder, is widely used in many fields. In this paper, based on a localized system, a new entropy, Deng entropy, is derived under the assumption that the particles have superposition states.
Category: Artificial Intelligence
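For reference, the sketch below computes the Deng entropy of a BPA in the form usually given in the literature, E_d(m) = -sum over focal elements A of m(A) log2(m(A) / (2^|A| - 1)); the thermodynamic derivation in [651] itself is not reproduced here, so treat this as background rather than the paper's own result.

```python
# Deng entropy of a BPA as usually given in the literature:
#   E_d(m) = -sum over focal elements A of m(A) * log2( m(A) / (2**|A| - 1) ).
# The thermodynamic derivation in [651] is not reproduced here.
import math

def deng_entropy(bpa):
    total = 0.0
    for focal, mass in bpa.items():
        if mass > 0:
            total -= mass * math.log2(mass / (2 ** len(focal) - 1))
    return total

bpa = {("a",): 0.4, ("a", "b"): 0.6}
print(deng_entropy(bpa))   # multi-element focal sets contribute extra non-specificity
```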

[650] viXra:1811.0383 [pdf] submitted on 2018-11-23 05:42:36

Supercomputer Accelerate Research

Authors: George Rajna
Comments: 53 Pages.

A new supercomputer designed to speed up research on two of the UK's most important battery research projects has been installed at University College London (UCL). [37] Using micromagnetic simulation, scientists have found the magnetic parameters and operating modes for the experimental implementation of a fast racetrack memory module that runs on spin current, carrying information via skyrmionium, which can store more data and read it out faster. [36] Scientists at the RDECOM Research Laboratory, the Army's corporate research laboratory (ARL) have found a novel way to safeguard quantum information during transmission, opening the door for more secure and reliable communication for warfighters on the battlefield. [35] Encrypted quantum keys have been sent across a record-breaking 421 km of optical fibre at the fastest data rate ever achieved for long-distance transmission. [34] The companies constructed an application for data transmission via optical fiber lines, which when combined with high-speed quantum cryptography communications technologies demonstrated practical key distribution speeds even in a real-world environment. [33] Nanosized magnetic particles called skyrmions are considered highly promising candidates for new data storage and information technologies. [32]
Category: Artificial Intelligence

[649] viXra:1811.0367 [pdf] submitted on 2018-11-24 02:47:01

The Semi-Pascal Triangle of Maximum Deng Entropy

Authors: Xiaozhuan Gao; Yong Deng
Comments: 11 Pages.

In D-S theory, measuring uncertainty has attracted much attention. Deng proposed the interesting Deng entropy, which can measure both non-specificity and discord. Hence, exploring the physical meaning of Deng entropy is an essential issue. Based on the maximum Deng entropy and fractals, this paper discusses the relation between them.
Category: Artificial Intelligence
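As background to [649], the sketch below builds the mass assignment usually reported in the literature to maximise Deng entropy, with m(A) proportional to 2^|A| - 1 over all non-empty subsets of the frame. This closed form is stated here as an assumption from the literature, not as the paper's own semi-Pascal-triangle derivation.

```python
# Mass assignment usually reported to maximise Deng entropy: m(A) proportional
# to 2**|A| - 1 over all non-empty subsets of the frame. Taken from the
# literature as an assumption; not the derivation given in [649].
from itertools import combinations

def max_deng_entropy_bpa(frame):
    subsets = [s for r in range(1, len(frame) + 1) for s in combinations(frame, r)]
    weights = {s: 2 ** len(s) - 1 for s in subsets}
    norm = sum(weights.values())
    return {s: w / norm for s, w in weights.items()}

print(max_deng_entropy_bpa(("a", "b")))   # {('a',): 0.2, ('b',): 0.2, ('a', 'b'): 0.6}
```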

[648] viXra:1811.0351 [pdf] submitted on 2018-11-23 03:23:27

AI Improves Highway Safety

Authors: George Rajna
Comments: 40 Pages.

Artificial intelligence is helping improve safety along a stretch of Las Vegas' busiest highway. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20]
Category: Artificial Intelligence

[647] viXra:1811.0319 [pdf] submitted on 2018-11-20 09:37:19

AI and Optoelectronics

Authors: George Rajna
Comments: 53 Pages.

Using machine-learning and an integrated photonic chip, researchers from INRS (Canada) and the University of Sussex (UK) can now customize the properties of broadband light sources. [31] A team of EPFL scientists has now written a machine-learning program that can predict, in record time, how atoms will respond to an applied magnetic field. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21]
Category: Artificial Intelligence

[646] viXra:1811.0253 [pdf] submitted on 2018-11-16 08:54:01

AI Predicting Enzyme Activity

Authors: George Rajna
Comments: 40 Pages.

Researchers at the University of Oxford have found a general way of predicting enzyme activity. [23] Researchers at Caltech have developed an artificial neural network made out of DNA that can solve a classic machine learning problem: correctly identifying handwritten numbers. [22] Researchers have devised a magnetic control system to make tiny DNA-based robots move on demand—and much faster than recently possible. [21] Humans have 46 chromosomes, and each one is capped at either end by repetitive sequences called telomeres. [20] Just like any long polymer chain, DNA tends to form knots. Using technology that allows them to stretch DNA molecules and image the behavior of these knots, MIT researchers have discovered, for the first time, the factors that determine whether a knot moves along the strand or "jams" in place. [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18] An LMU team now reveals the inner workings of a molecular motor made of proteins which packs and unpacks DNA. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15] Scientists from Moscow State University (MSU) working with an international team of researchers have identified the structure of one of the key regions of telomerase—a so-called "cellular immortality" ribonucleoprotein. [14] Researchers from Tokyo Metropolitan University used a light-sensitive iridium-palladium catalyst to make "sequential" polymers, using visible light to change how building blocks are combined into polymer chains. [13]
Category: Artificial Intelligence

[645] viXra:1811.0240 [pdf] submitted on 2018-11-15 08:25:01

AI for Sustainability Goals

Authors: George Rajna
Comments: 41 Pages.

As ESA's ɸ-week continues to provoke and inspire participants on new ways of using Earth observation for monitoring our world to benefit the citizens of today and of the future, it is clear that artificial intelligence is set to play an important role. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21]
Category: Artificial Intelligence

[644] viXra:1811.0226 [pdf] submitted on 2018-11-14 10:24:55

Private Photo Recognition

Authors: George Rajna
Comments: 77 Pages.

Researchers at Osaka University have proposed an encryption-free framework for preserving users' privacy when they use photo-based information services. [47] When building a predictive model, reliable results depend on two issues: the number of variables that come into play and the number of examples entered into the system. [46] A new computational approach that allows the identification of molecular alterations associated with prognosis and resistance to therapy of different types of cancer was developed by the research group led by Nuno Barbosa Morais at Instituto de Medicina Molecular João Lobo Antunes (iMM; Portugal). [45] A discovery by scientists at UC Riverside may open up new ways to control steroid hormone-mediated processes, including growth and development in insects, and sexual maturation, immunity, and cancer progression in humans. [44] New 3-D maps of water distribution during cellular membrane fusion are accelerating scientific understanding of cell development, which could lead to new treatments for diseases associated with cell fusion. [43] Thanks to the invention of a technique called super-resolution fluorescence microscopy, it has recently become possible to view even the smaller parts of a living cell. [42] A new instrument lets researchers use multiple laser beams and a microscope to trap and move cells and then analyze them in real-time with a sensitive analysis technique known as Raman spectroscopy. [41] All systems are go for launch in November of NASA's Global Ecosystem Dynamics Investigation (GEDI) mission, which will use high-resolution laser ranging to study Earth's forests and topography from the International Space Station (ISS). [40] Scientists from the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI) in Berlin combined state-of-the-art experiments and numerical simulations to test a fundamental assumption underlying strong-field physics. [39] Femtosecond lasers are capable of processing any solid material with high quality and high precision using their ultrafast and ultra-intense characteristics. [38] To create the flying microlaser, the researchers launched laser light into a water-filled hollow core fiber to optically trap the microparticle. Like the materials used to make traditional lasers, the microparticle incorporates a gain medium. [37]
Category: Artificial Intelligence

[643] viXra:1811.0224 [pdf] submitted on 2018-11-14 13:21:48

#2sat is in P

Authors: Elnaserledinellah Mahmood Abdelwahab
Comments: 85 Pages. Journal Academica Vol. 8(1), pp. 3-88, October 13 2018 - Theoretical Computer Science - ISSN 2161-3338 online edition www.journalacademica.org - Copyright © 2018 Journal Academica Foundation - With perpetual, non-exclusive license for viXra.org

This paper presents a new view of logical variables which helps to efficiently solve the #P-complete #2SAT problem. Variables are considered to be more than mere placeholders of information, namely: entities exhibiting repetitive patterns of logical truth values. Using this insight, a canonical order between literals and clauses of an arbitrary 2CNF Clause Set S is shown to be always achievable. It is also shown that resolving clauses respecting this order enables the construction of small Free Binary Decision Diagrams (FBDDs) for S with unique node counts in O(M^4), or O(M^6) in case a particular shown Lemma is relaxed, where M is the number of clauses. Efficiently counting solutions generated in such FBDDs is then proven to be O(M^9) or O(M^13) by first running the proposed practical Pattern-Algorithm 2SAT-FGPRA and then the counting Algorithm Count2SATSolutions, so that the overall complexity of counting 2SAT solutions is in P. Relaxing the specific Lemma enables a uniform description of kSAT Pattern-Algorithms in terms of (k-1)SAT ones, opening up yet another way of showing the main result. This second way demonstrates that avoiding certain types of copies of sub-trees in FBDDs constructed for arbitrary 1CNF and 2CNF Clause Sets, while uniformly expressing kSAT Pattern-Algorithms for any k>0, is a sufficient condition for an efficient solution of kSAT as well. Exponential lower bounds known for the construction of deterministic and non-deterministic FBDDs of some Boolean functions are seen to be inapplicable to the methods described here.
Category: Artificial Intelligence

[642] viXra:1811.0192 [pdf] submitted on 2018-11-12 10:00:53

Nanoscale Robotic Systems

Authors: George Rajna
Comments: 32 Pages.

A single-molecule DNA “navigator” that can successfully find its way out of a maze constructed on a 2D DNA origami platform might be used in artificial intelligence applications as well as in biomolecular assembly, sensing, DNA-driven computation and molecular information and storage. [20] The way DNA folds largely determines which genes are read out. John van Noort and his group have quantified how easily rolled-up DNA parts stack. [19]
Category: Artificial Intelligence

[641] viXra:1811.0191 [pdf] submitted on 2018-11-12 10:16:10

Refutation of Three Phase, All Reduce Algorithm Across Processing Units for Scalable Deep Learning

Authors: Colin James III
Comments: 2 Pages. © Copyright 2018 by Colin James III All rights reserved. Respond to the author by email at: info@ersatz-systems dot com.

A three-phase algorithm to do an all-reduce across all GPUs is not tautologous and hence is refuted.
Category: Artificial Intelligence

[640] viXra:1811.0189 [pdf] submitted on 2018-11-12 10:43:34

Big Data Predict the Future

Authors: George Rajna
Comments: 75 Pages.

When building a predictive model, reliable results depend on two issues: the number of variables that come into play and the number of examples entered into the system. [46] A new computational approach that allows the identification of molecular alterations associated with prognosis and resistance to therapy of different types of cancer was developed by the research group led by Nuno Barbosa Morais at Instituto de Medicina Molecular João Lobo Antunes (iMM; Portugal). [45] A discovery by scientists at UC Riverside may open up new ways to control steroid hormone-mediated processes, including growth and development in insects, and sexual maturation, immunity, and cancer progression in humans. [44] New 3-D maps of water distribution during cellular membrane fusion are accelerating scientific understanding of cell development, which could lead to new treatments for diseases associated with cell fusion. [43] Thanks to the invention of a technique called super-resolution fluorescence microscopy, it has recently become possible to view even the smaller parts of a living cell. [42] A new instrument lets researchers use multiple laser beams and a microscope to trap and move cells and then analyze them in real-time with a sensitive analysis technique known as Raman spectroscopy. [41] All systems are go for launch in November of NASA's Global Ecosystem Dynamics Investigation (GEDI) mission, which will use high-resolution laser ranging to study Earth's forests and topography from the International Space Station (ISS). [40] Scientists from the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI) in Berlin combined state-of-the-art experiments and numerical simulations to test a fundamental assumption underlying strong-field physics. [39] Femtosecond lasers are capable of processing any solid material with high quality and high precision using their ultrafast and ultra-intense characteristics. [38] To create the flying microlaser, the researchers launched laser light into a water-filled hollow core fiber to optically trap the microparticle. Like the materials used to make traditional lasers, the microparticle incorporates a gain medium. [37]
Category: Artificial Intelligence

[639] viXra:1811.0182 [pdf] submitted on 2018-11-11 10:01:02

Fast, Accurate AI Training

Authors: George Rajna
Comments: 45 Pages.

Researchers at Hong Kong Baptist University (HKBU) have partnered with a team from Tencent Machine Learning to create a new technique for training artificial intelligence (AI) machines faster than ever before while maintaining accuracy. [27]
Category: Artificial Intelligence

[638] viXra:1811.0150 [pdf] submitted on 2018-11-09 11:09:17

AI Window into Mental Health

Authors: George Rajna
Comments: 44 Pages.

In January 2017, IBM made the bold statement that within five years, health professionals could apply AI to better understand how words and speech paint a clear window into our mental health. [26] Dating apps are using artificial intelligence to suggest where to go on a first date, recommend what to say and even find a partner who looks like your favourite celebrity. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20]
Category: Artificial Intelligence

[637] viXra:1811.0142 [pdf] submitted on 2018-11-08 07:27:34

AI Help Search for Love

Authors: George Rajna
Comments: 42 Pages.

Dating apps are using artificial intelligence to suggest where to go on a first date, recommend what to say and even find a partner who looks like your favourite celebrity. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20]
Category: Artificial Intelligence

[636] viXra:1811.0124 [pdf] submitted on 2018-11-07 07:51:16

Rethinking the Artificial Neural Networks: A Novel Approach

Authors: Usman Ahmad, Hong Song
Comments: 13 Pages.

In this paper, we propose a novel approach to building the Artificial Neural Network (ANN). We address the fundamental questions: 1) What is the architecture of the ANN model? Should it really have a layered architecture? 2) What is a neuron: a processing unit or a memory cell? 3) How must neurons be interconnected, and what should the mechanism of weight assignment be? 4) How can prior knowledge, bias, and generalization be involved in extracting the features? We give an abstract view of our approach for supervised learning with text data only and explain it through examples.
Category: Artificial Intelligence

[635] viXra:1811.0111 [pdf] submitted on 2018-11-07 18:42:20

Theoretical Model For Holistic Non Unique Clustering {Version 2}

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has presented a novel method for Holistic Non Unique Clustering.
Category: Artificial Intelligence

[634] viXra:1811.0093 [pdf] submitted on 2018-11-06 23:48:20

Theoretical Model For Holistic Non Unique Clustering

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has presented a novel method for Holistic Non Unique Clustering.
Category: Artificial Intelligence

[633] viXra:1811.0088 [pdf] submitted on 2018-11-05 07:40:57

Training Neuron Model

Authors: George Rajna
Comments: 50 Pages.

Artificial neural networks are machine learning systems composed of a large number of connected nodes called artificial neurons. Similar to the neurons in a biological brain, these artificial neurons are the primary basic units that are used to perform neural computations and solve problems. [27] Researchers from the Moscow Institute of Physics and Technology (MIPT), Aalto University in Finland, and ETH Zurich have demonstrated a prototype device that uses quantum effects and machine learning to measure magnetic fields more accurately than its classical analogues. [26] Researchers at the University of California San Diego have developed an approach that uses machine learning to identify and predict which genes make infectious bacteria resistant to antibiotics. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18]
Category: Artificial Intelligence

[632] viXra:1811.0085 [pdf] submitted on 2018-11-05 08:30:13

Event-Driven Models

Authors: Dimiter Dobrev
Comments: 26 Pages. Bulgarian language

In Reinforcement Learning we are looking for meaning in the stream of input-output information. If we do not find sense, this stream will be just noise to us. To find meaning, we must learn to discover and recognize objects. What is an object? In this article we will show that an object is an event-driven model. These models are a generalization of action-driven models. In the Markov decision process we have an action-driven model, where the state changes at each step. The advantage of event-driven models is that they are more stable and change their state only when certain events occur. These events can happen very rarely, so the current state of the event-driven model is much more predictable.
Category: Artificial Intelligence
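The contrast drawn in [635] between action-driven and event-driven models can be illustrated with a toy sketch: the action-driven model updates its state on every step, while the event-driven model changes state only when a designated (and here deliberately rare) event fires, so its state is far more stable and predictable. The states and the event predicate below are invented for illustration only.

```python
# Toy contrast between an action-driven model, whose state moves on every step,
# and an event-driven model, whose state changes only when a (rare) event fires.
# The states and the event predicate are invented purely for illustration.
import random

class ActionDrivenModel:
    def __init__(self):
        self.state = 0
    def step(self, action, observation):
        self.state = (self.state + action) % 10          # changes at every step
        return self.state

class EventDrivenModel:
    def __init__(self, event):
        self.state = "idle"
        self.event = event                               # predicate over the observation
    def step(self, action, observation):
        if self.event(observation):                      # changes only when the event occurs
            self.state = "active" if self.state == "idle" else "idle"
        return self.state

action_model = ActionDrivenModel()
event_model = EventDrivenModel(event=lambda obs: obs > 0.95)   # fires rarely

for _ in range(1000):
    obs = random.random()
    action_model.step(action=1, observation=obs)
    event_model.step(action=1, observation=obs)

# The event-driven state flipped only on the rare observations above 0.95,
# so it stays far more stable and predictable than the per-step state.
print(action_model.state, event_model.state)
```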

[631] viXra:1811.0083 [pdf] submitted on 2018-11-05 09:41:02

Virtual Reality Test Your Nerves

Authors: George Rajna
Comments: 50 Pages.

Researchers at EPFL's Laboratory of Behavioral Genetics, headed by Professor Carmen Sandi, have set out to learn more with a new virtual reality program. [28] Artificial neural networks are machine learning systems composed of a large number of connected nodes called artificial neurons. Similar to the neurons in a biological brain, these artificial neurons are the primary basic units that are used to perform neural computations and solve problems. [27] Researchers from the Moscow Institute of Physics and Technology (MIPT), Aalto University in Finland, and ETH Zurich have demonstrated a prototype device that uses quantum effects and machine learning to measure magnetic fields more accurately than its classical analogues. [26] Researchers at the University of California San Diego have developed an approach that uses machine learning to identify and predict which genes make infectious bacteria resistant to antibiotics. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19]
Category: Artificial Intelligence

[630] viXra:1810.0521 [pdf] submitted on 2018-10-31 08:51:54

Deep Learning Glaucoma

Authors: George Rajna
Comments: 52 Pages.

As part of a team of scientists from IBM and New York University, my colleagues and I are looking at new ways AI could be used to help ophthalmologists and optometrists further utilize eye images, and potentially help to speed the process for detecting glaucoma in images. [31] A team of EPFL scientists has now written a machine-learning program that can predict, in record time, how atoms will respond to an applied magnetic field. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21]
Category: Artificial Intelligence

[629] viXra:1810.0519 [pdf] submitted on 2018-10-31 09:34:04

AI in Social Media and News

Authors: George Rajna
Comments: 49 Pages.

The technology could help identify biases in social media posts and news articles, the better to judge the information's validity. [29] Researchers find AI-generated reviews and comments pose a significant threat to consumers, but machine learning can help detect the fakes. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[628] viXra:1810.0499 [pdf] submitted on 2018-10-31 06:01:34

AI Recognize Galaxies

Authors: George Rajna
Comments: 36 Pages.

Researchers have taught an artificial intelligence program used to recognise faces on Facebook to identify galaxies in deep space. [22] Now, researchers at Stanford University have devised a new type of artificially intelligent camera system that can classify images faster and more energy efficiently, and that could one day be built small enough to be embedded in the devices themselves, something that is not possible today. [21] Today, deep neural networks with different architectures, such as convolutional, recurrent and autoencoder networks, are becoming an increasingly popular area of research. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[627] viXra:1810.0488 [pdf] submitted on 2018-10-29 12:20:08

AI and NMR Spectroscopy

Authors: George Rajna
Comments: 51 Pages.

A team of EPFL scientists has now written a machine-learning program that can predict, in record time, how atoms will respond to an applied magnetic field. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20]
Category: Artificial Intelligence

[626] viXra:1810.0450 [pdf] submitted on 2018-10-26 07:27:57

Machine Learning Quantum Magnetometer

Authors: George Rajna
Comments: 48 Pages.

Researchers from the Moscow Institute of Physics and Technology (MIPT), Aalto University in Finland, and ETH Zurich have demonstrated a prototype device that uses quantum effects and machine learning to measure magnetic fields more accurately than its classical analogues. [26] Researchers at the University of California San Diego have developed an approach that uses machine learning to identify and predict which genes make infectious bacteria resistant to antibiotics. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18]
Category: Artificial Intelligence

[625] viXra:1810.0433 [pdf] submitted on 2018-10-25 09:10:32

Machine Learning Antibiotic Resistance

Authors: George Rajna
Comments: 44 Pages.

Researchers at the University of California San Diego have developed an approach that uses machine learning to identify and predict which genes make infectious bacteria resistant to antibiotics. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[624] viXra:1810.0431 [pdf] submitted on 2018-10-25 09:28:30

AI Help Seniors Stay Safe

Authors: George Rajna
Comments: 34 Pages.

An autonomous intelligence system is helping seniors stay safe both at home and in care facilities, thanks to a collaboration between University of Alberta computing scientists and software technology company Spxtrm AI. [22] Now, researchers at Stanford University have devised a new type of artificially intelligent camera system that can classify images faster and more energy efficiently, and that could one day be built small enough to be embedded in the devices themselves, something that is not possible today. [21] Today, deep neural networks with different architectures, such as convolutional, recurrent and autoencoder networks, are becoming an increasingly popular area of research. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[623] viXra:1810.0414 [pdf] submitted on 2018-10-24 08:25:24

AI to Create Fragrances

Authors: George Rajna
Comments: 41 Pages.

With this in mind, my team at IBM Research, together with Symrise, one of the top global producers of flavors and fragrances, created an AI system that can learn about formulas, raw materials, historical success data and industry trends. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[622] viXra:1810.0413 [pdf] submitted on 2018-10-24 09:11:15

Search Engines Entropy

Authors: George Rajna
Comments: 34 Pages.

Search engine entropy is thus important not only for the efficiency of search engines, for those using them to find relevant information, and for the success of the companies and other bodies running such systems, but also for those who run websites hoping to be found and visited following a search. [20] "We've experimentally confirmed the connection between information in the classical case and the quantum case," Murch said, "and we're seeing this new effect of information loss." [19] It's well-known that when a quantum system is continuously measured, it freezes, i.e., it stops changing, which is due to a phenomenon called the quantum Zeno effect. [18] Physicists have extended one of the most prominent fluctuation theorems of classical stochastic thermodynamics, the Jarzynski equality, to quantum field theory. [17] In 1993, physicist Lucien Hardy proposed an experiment showing that there is a small probability (around 6-9%) of observing a particle and its antiparticle interacting with each other without annihilating—something that is impossible in classical physics. [16] Scientists at the University of Geneva (UNIGE), Switzerland, recently reengineered their data processing, demonstrating that 16 million atoms were entangled in a one-centimetre crystal. [15] The fact that it is possible to retrieve this lost information reveals new insight into the fundamental nature of quantum measurements, mainly by supporting the idea that quantum measurements contain both quantum and classical components. [14] Researchers blur the line between classical and quantum physics by connecting chaos and entanglement. [13] Yale University scientists have reached a milestone in their efforts to extend the durability and dependability of quantum information. [12] Using lasers to make data storage faster than ever. [11] Some three-dimensional materials can exhibit exotic properties that only exist in "lower" dimensions.
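As an illustration of the entropy notion discussed above, the following minimal Python sketch computes the Shannon entropy of two hypothetical distributions of search traffic over websites; the traffic shares are invented for the example, and the calculation is only meant to show that a more evenly spread distribution yields higher entropy.

    import math

    def shannon_entropy(shares):
        # Shannon entropy, in bits, of a probability distribution over websites
        return -sum(p * math.log2(p) for p in shares if p > 0)

    # Hypothetical traffic shares: invented numbers, not measured data
    concentrated = [0.70, 0.10, 0.10, 0.05, 0.05]   # a few sites dominate the results
    dispersed = [0.20, 0.20, 0.20, 0.20, 0.20]      # visits spread evenly across sites

    print(shannon_entropy(concentrated))  # about 1.46 bits
    print(shannon_entropy(dispersed))     # about 2.32 bits, the higher-entropy case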
Category: Artificial Intelligence

[621] viXra:1810.0377 [pdf] submitted on 2018-10-22 07:35:12

Algorithm Predict LED Materials

Authors: George Rajna
Comments: 52 Pages.

Researchers from the University of Houston have devised a new machine learning algorithm that is efficient enough to run on a personal computer and predict the properties of more than 100,000 compounds in search of those most likely to be efficient phosphors for LED lighting. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20]
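A minimal sketch of the kind of screening workflow the abstract describes, under assumed inputs: a regression model is trained on composition descriptors of compounds with known figures of merit and then used to rank a larger candidate pool. The descriptors, the random data and the scikit-learn model choice are illustrative assumptions, not the University of Houston group's actual algorithm.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Hypothetical training set: 500 compounds, 8 composition/structure descriptors each,
    # with a known target figure of merit (e.g. an efficiency-related score).
    X_train = rng.random((500, 8))
    y_train = X_train[:, 0] * 2.0 - X_train[:, 3] + rng.normal(0, 0.05, 500)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Screen a much larger pool of candidate compounds and keep the most promising ones.
    X_candidates = rng.random((100_000, 8))
    scores = model.predict(X_candidates)
    top = np.argsort(scores)[::-1][:10]
    print(top, scores[top])   # indices and predicted scores of the ten best candidates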
Category: Artificial Intelligence

[620] viXra:1810.0376 [pdf] submitted on 2018-10-22 08:06:06

AI and Human Creativity

Authors: George Rajna
Comments: 49 Pages.

The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19]
Category: Artificial Intelligence

[619] viXra:1810.0361 [pdf] submitted on 2018-10-23 01:45:49

AI Carry out Experiments

Authors: George Rajna
Comments: 40 Pages.

There's plenty of speculation about what artificial intelligence, or AI, will look like in the future, but researchers from The Australian National University (ANU) are already harnessing its power. [25] The New York Times contacted IBM Research in late September asking for our help to use AI in a clever way to create art for the coming special section on AI. [24] Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20]
Category: Artificial Intelligence

[618] viXra:1810.0347 [pdf] submitted on 2018-10-21 19:58:21

The Teleonomic Purpose of the Human Species (a Secular Discussion, Regarding Artificial General Intelligence)

Authors: Jordan Micah Bennett
Comments: 8 Pages.

This work concerns a hypothesis offering a teleonomic description of the non-trivial purpose of the human species. Teleonomy is a recent concept (with contributions from Richard Dawkins) that frames purpose in the context of objectivity/science rather than subjectivity/deities. Teleonomy ought not to be confused with the teleological argument, which is a religious/subjective concept, contrary to teleonomy, a scientific/objective concept. As such, this work concerns principles of entropy. This hypothesis was originally proposed on ResearchGate in 2015.
Category: Artificial Intelligence

[617] viXra:1810.0345 [pdf] submitted on 2018-10-21 22:14:41

Cosmological Natural Selection AI

Authors: Jordan Micah Bennett
Comments: 4 Pages. Author website: folioverse.appspot.com

Notably, this short paper presents a non-serious thought experiment within the scope of a serious hypothesis of mine regarding the scientific purpose of the human species, considered in tandem with Cosmological Natural Selection I (CNS I). It may thus be regarded as an aside to the aforesaid serious hypothesis that separately includes thinking in relation to CNS I.
Category: Artificial Intelligence

[616] viXra:1810.0302 [pdf] submitted on 2018-10-20 04:10:30

Interactions in Molecules Using AI

Authors: George Rajna
Comments: 50 Pages.

Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[615] viXra:1810.0246 [pdf] submitted on 2018-10-15 07:27:43

Quantum Computers with Machine Learning

Authors: George Rajna
Comments: 41 Pages.

But researchers at Purdue University are working on a solution, combining quantum algorithms with classical computing on small-scale quantum computers to speed up database accessibility. [23] Researchers at the University of Twente, working with colleagues at the Technical Universities of Delft and Eindhoven, have successfully developed a new and interesting building block. [22] Researchers at the Institut d'Optique Graduate School at the CNRS and Université Paris-Saclay in France have used a laser-based technique to rearrange cold atoms one-by-one into fully ordered 3D patterns. [21] Reduced entropy in a three-dimensional lattice of super-cooled, laser-trapped atoms could help speed progress toward creating quantum computers. [20] Under certain conditions, an atom can cause other atoms to emit a flash of light. At TU Wien (Vienna), this quantum effect has now been measured. [19] A recent discovery by William & Mary and University of Michigan researchers transforms our understanding of one of the most important laws of modern physics. [18] Now, a team of physicists from The University of Queensland and the NÉEL Institute has shown that, as far as quantum physics is concerned, the chicken and the egg can both come first. [17] In 1993, physicist Lucien Hardy proposed an experiment showing that there is a small probability (around 6-9%) of observing a particle and its antiparticle interacting with each other without annihilating—something that is impossible in classical physics. [16] Scientists at the University of Geneva (UNIGE), Switzerland, recently reengineered their data processing, demonstrating that 16 million atoms were entangled in a one-centimetre crystal. [15]
Category: Artificial Intelligence

[614] viXra:1810.0245 [pdf] submitted on 2018-10-15 07:46:43

AI of Single Molecules in Cells

Authors: George Rajna
Comments: 42 Pages.

A research team centered at Osaka University, in collaboration with RIKEN, has developed a system that can overcome these difficulties by automatically searching for, focusing on, imaging, and tracking single molecules within living cells. [24] But researchers at Purdue University are working on a solution, combining quantum algorithms with classical computing on small-scale quantum computers to speed up database accessibility. [23] Researchers at the University of Twente, working with colleagues at the Technical Universities of Delft and Eindhoven, have successfully developed a new and interesting building block. [22] Researchers at the Institut d'Optique Graduate School at the CNRS and Université Paris-Saclay in France have used a laser-based technique to rearrange cold atoms one-by-one into fully ordered 3D patterns. [21] Reduced entropy in a three-dimensional lattice of super-cooled, laser-trapped atoms could help speed progress toward creating quantum computers. [20] Under certain conditions, an atom can cause other atoms to emit a flash of light. At TU Wien (Vienna), this quantum effect has now been measured. [19] A recent discovery by William & Mary and University of Michigan researchers transforms our understanding of one of the most important laws of modern physics. [18] Now, a team of physicists from The University of Queensland and the NÉEL Institute has shown that, as far as quantum physics is concerned, the chicken and the egg can both come first. [17] In 1993, physicist Lucien Hardy proposed an experiment showing that there is a small probability (around 6-9%) of observing a particle and its antiparticle interacting with each other without annihilating—something that is impossible in classical physics. [16]
Category: Artificial Intelligence

[613] viXra:1810.0243 [pdf] submitted on 2018-10-15 10:03:13

Analog Information AI System

Authors: George Rajna
Comments: 43 Pages.

A NIMS research group has invented an ionic device, termed an ionic decision-maker, capable of quickly making its own decisions based on previous experience using changes in ionic/molecular concentrations. [25] A research team centered at Osaka University, in collaboration with RIKEN, has developed a system that can overcome these difficulties by automatically searching for, focusing on, imaging, and tracking single molecules within living cells. [24] But researchers at Purdue University are working on a solution, combining quantum algorithms with classical computing on small-scale quantum computers to speed up database accessibility. [23] Researchers at the University of Twente, working with colleagues at the Technical Universities of Delft and Eindhoven, have successfully developed a new and interesting building block. [22] Researchers at the Institut d'Optique Graduate School at the CNRS and Université Paris-Saclay in France have used a laser-based technique to rearrange cold atoms one-by-one into fully ordered 3D patterns. [21] Reduced entropy in a three-dimensional lattice of super-cooled, laser-trapped atoms could help speed progress toward creating quantum computers. [20] Under certain conditions, an atom can cause other atoms to emit a flash of light. At TU Wien (Vienna), this quantum effect has now been measured. [19] A recent discovery by William & Mary and University of Michigan researchers transforms our understanding of one of the most important laws of modern physics. [18] Now, a team of physicists from The University of Queensland and the NÉEL Institute has shown that, as far as quantum physics is concerned, the chicken and the egg can both come first. [17]
Category: Artificial Intelligence

[612] viXra:1810.0139 [pdf] submitted on 2018-10-09 21:37:41

Supersymmetric Artificial Neural Network

Authors: Jordan Micah Bennett
Comments: 12 Pages. Author Email: jordanmicahbennett@gmail.com Author Website: folioverse.appspot.com

Babies are great examples of a non-trivial basis for artificial general intelligence; they are significant examples of biological bases that can reasonably be used to inspire smart algorithms. The "Supersymmetric Artificial Neural Network" in deep learning (denoted φ(x, θ, θ̄)ᵀw) espouses the importance of considering biological constraints when developing general machine learning models, pertinently because babies' brains are observed to be pre-equipped with particular "physics priors", specifically the ability to intuitively know laws of physics while learning by reinforcement. The phrasing "intuitively know laws of physics" should not be confused with Nobel-laureate- or physics-undergraduate-level babies who, for example, write or understand physics papers and exams; it simply conveys that babies' brains come pre-equipped with ways to naturally exercise physics-based expectations with respect to interactions with objects in their world, as indicated by Aimee Stahl and Lisa Feigenson. Outstandingly, the importance of recognizing underlying causal physics laws in learning models (although not via supermanifolds, as encoded in the "Supersymmetric Artificial Neural Network") has recently been both demonstrated and separately echoed by DeepMind (see "Neuroscience-Inspired Artificial Intelligence") and, of late, distinctly emphasized by Yoshua Bengio (see the "Consciousness Prior"). Physics-based object detectors like "Uetorch" use pooling to gain translation invariance over objects, so that the model learns regardless of where the object is positioned in the image, while reinforcement models like "AtariQLearner" exclude pooling, because "AtariQLearner" requires translation variance in order for Q-learning to apply to the changing positions of objects in pixels. Babies seem to be able to do both of these activities. Models that can deliver both translation invariance and translation variance at the same time, i.e. disentangled factors of variation, are called manifold learning frameworks (Bengio et al. ...). Given that cognitive science may be used to constrain machine learning models (similar to how firms like DeepMind often use cognitive science as a boundary on the deep learning models they produce), the "Supersymmetric Artificial Neural Network" is a uniquely disentanglable model constrained by cognitive science in the direction of supermanifolds (see "Supersymmetric methods ... at brain scale", Perez et al.), instead of state-of-the-art manifold work by other authors (such as manifold work by Bengio et al., LeCun et al. or Michael Bronstein et al.). As such, the "Supersymmetric Artificial Neural Network" is yet another way to represent richer values in the weights of the model, because supersymmetric values can allow more information to be captured about the input space. For example, supersymmetric systems can capture potential-partner signals, which are beyond the feature space of the magnitude and phase signals learnt in typical real-valued neural nets and deep complex neural networks respectively. Looking at the progression of 'solution geometries', going from SO(n) representations (such as Perceptron-like models) to SU(n) representations (such as UnitaryRNNs) has guaranteed richer and richer representations in the weight space of the artificial neural network, and hence better and better hypotheses were generatable.
The Supersymmetric Artificial Neural Network explores a natural step forward, namely SU(m|n) representation. These supersymmetric biological brain representations (Perez et al.) can be represented by the supercharge-compatible special unitary notation SU(m|n), or φ(x, θ, θ̄)ᵀw parameterized by θ and θ̄, which are supersymmetric directions, unlike the θ seen in the typical non-supersymmetric deep learning model. Notably, supersymmetric values can encode or represent more information than the typical deep learning model, in terms of "partner potential" signals for example.
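The contrast drawn above between pooled, translation-invariant detectors and pooling-free, translation-variant reinforcement learners can be made concrete with a small numpy sketch. This illustrates only the pooling point, not the supersymmetric SU(m|n) parameterization itself; the image, kernel and sizes are arbitrary toy choices.

    import numpy as np

    def feature_map(image, kernel):
        # naive valid cross-correlation producing a 2-D feature map
        H, W = image.shape
        kH, kW = kernel.shape
        out = np.zeros((H - kH + 1, W - kW + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
        return out

    # The same 3x3 bright blob placed at two different positions in an 8x8 image.
    img_a = np.zeros((8, 8)); img_a[1:4, 1:4] = 1.0
    img_b = np.zeros((8, 8)); img_b[4:7, 3:6] = 1.0
    kernel = np.ones((3, 3))

    fm_a, fm_b = feature_map(img_a, kernel), feature_map(img_b, kernel)
    print(fm_a.max() == fm_b.max())    # True: the globally pooled response is translation invariant
    print(np.array_equal(fm_a, fm_b))  # False: the unpooled maps keep position, i.e. translation variance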
Category: Artificial Intelligence

[611] viXra:1810.0110 [pdf] submitted on 2018-10-07 12:53:48

Machine Learning Heart Picture

Authors: George Rajna
Comments: 37 Pages.

To meet that demand, IBM researchers in Australia are using POWER9 systems, with Nvidia Tesla V100 graphics processing units (GPUs), to perform hemodynamic simulations for vFFR-based diagnosis within one to two minutes. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[610] viXra:1810.0100 [pdf] submitted on 2018-10-08 05:03:15

Cooperation for Vehicular Delay Tolerant Network

Authors: Adnan Muhammad
Comments: 8 Pages.

This article reviews the literature related to Vehicular Delay Tolerant Networks (VDTN) with a focus on cooperation. It starts by examining definitions of some of the fields of research in VDTN. An overview of VDTN with cooperative networks is then presented.
Category: Artificial Intelligence

[609] viXra:1810.0097 [pdf] submitted on 2018-10-06 07:55:21

AI Person Under the Law

Authors: George Rajna
Comments: 38 Pages.

Granting human rights to a computer would degrade human dignity. [23] IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21]
Category: Artificial Intelligence

[608] viXra:1810.0094 [pdf] submitted on 2018-10-06 14:03:52

Regression in Wireless Sensor Networks

Authors: Muhammad Kashif Ghumman, Tauseef Jamal
Comments: 8 Pages. DCIS0710

In a WSN, the main purpose of regression is to locate the nodes by prediction on the basis of sensor readings. This article explains the concept of regression from a WSN perspective and, on the basis of these concepts, derives the clustering of nodes through multi-linear regression by combining the idea of locating nodes through regression with the use of node parameters in the multi-linear regression formula.
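A minimal sketch of the idea under stated assumptions: node positions are predicted from anchor-based range readings by a multi-linear (least-squares) fit. The anchor layout, the squared-range measurement model and the noise level are hypothetical stand-ins chosen for illustration, not the article's protocol.

    import numpy as np

    rng = np.random.default_rng(0)

    # Four fixed anchor nodes at the corners of a 10 x 10 field (assumed layout).
    anchors = np.array([[0.0, 0.0], [0.0, 10.0], [10.0, 0.0], [10.0, 10.0]])

    # Training data: known node positions and their (noisy) squared ranges to each anchor.
    positions = rng.uniform(0.0, 10.0, size=(200, 2))
    readings = np.array([[np.sum((p - a) ** 2) for a in anchors] for p in positions])
    readings += rng.normal(0.0, 0.1, readings.shape)

    # Multi-linear regression: one least-squares fit per coordinate, with a bias column.
    X = np.hstack([readings, np.ones((len(readings), 1))])
    W, *_ = np.linalg.lstsq(X, positions, rcond=None)

    # Locate an unseen node from its readings alone.
    true_pos = np.array([3.0, 7.0])
    x_new = np.append([np.sum((true_pos - a) ** 2) for a in anchors], 1.0)
    print(x_new @ W)   # close to [3, 7] under this toy measurement model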
Category: Artificial Intelligence

[607] viXra:1810.0060 [pdf] submitted on 2018-10-06 04:55:16

Universal Forecasting Scheme-New

Authors: Ramesh Chandra Bagadi
Comments: 4 Pages.

In this research investigation, the author has detailed a novel method of forecasting.
Category: Artificial Intelligence

[606] viXra:1810.0050 [pdf] submitted on 2018-10-04 15:08:20

Refutation of Another Neutrosophic Genetic Algorithm

Authors: Colin James III
Comments: 1 Page. © Copyright 2018 by Colin James III All rights reserved. Respond to the author by email at: info@ersatz-systems dot com.

The instant neutrosophic intelligent system based on genetic algorithm is not confirmed.
Category: Artificial Intelligence

[605] viXra:1810.0042 [pdf] submitted on 2018-10-03 07:06:39

A Novel Approach for Classify Manets Attacks with a Neutrosophic Intelligent System Based on Genetic Algorithm

Authors: Haitham Elwahsh, Mona Gamal, A. A. Salama, I. M. El-Henawy
Comments: 10 Pages.

Recently, designing an effective intrusion detection system (IDS) for Mobile Ad Hoc Network (MANET) security has become a requirement because of the amount of indeterminacy and doubt that exist in that environment. A neutrosophic system is a discipline that provides a mathematical formulation for the indeterminacy found in such complex situations. Neutrosophic rules compute with symbols instead of numeric values, making a good basis for symbolic reasoning. These symbols should be carefully designed, as they form the proposition base for the neutrosophic rules (NR) in the IDS. Each attack is determined by membership, non-membership, and indeterminacy degrees in the neutrosophic system. This research proposes MANET attack inference by a hybrid framework of Self-Organized Feature Maps (SOFM) and genetic algorithms (GA). The hybrid utilizes the unsupervised learning capabilities of the SOFM to define the MANET neutrosophic conditional variables. The neutrosophic variables, along with the training data set, are fed into the genetic algorithm to find the most fit neutrosophic rule set from a number of initial sub-attacks according to the fitness function. This method is designed to detect unknown attacks in MANETs. The simulation and experimental results are conducted on the KDD-99 network attack data available in the UCI machine-learning repository for further processing in knowledge discovery. The experiments confirmed the feasibility of the proposed hybrid with an average accuracy of 99.3608%, which is more accurate than other IDSs found in the literature.
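To make the genetic-algorithm stage of such a framework a little more tangible, here is a hedged toy sketch in which each individual is a bit mask selecting which candidate rules survive, and the fittest mask is evolved by selection, crossover and mutation. The rule encoding, the fitness function and all numbers are invented placeholders and do not reproduce the authors' SOFM features or KDD-99 experiments.

    import random

    random.seed(1)
    N_RULES, POP_SIZE, GENERATIONS = 12, 20, 30

    def fitness(mask):
        # Hypothetical score: reward "useful" rules (even indices in this toy) and
        # penalise rule-set size, standing in for detection accuracy on training data.
        useful = sum(1 for i, bit in enumerate(mask) if bit and i % 2 == 0)
        return useful - 0.1 * sum(mask)

    def crossover(a, b):
        cut = random.randrange(1, N_RULES)
        return a[:cut] + b[cut:]

    def mutate(mask, rate=0.05):
        return [1 - bit if random.random() < rate else bit for bit in mask]

    population = [[random.randint(0, 1) for _ in range(N_RULES)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    best = max(population, key=fitness)
    print(best, fitness(best))   # the surviving rule mask and its score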
Category: Artificial Intelligence

[604] viXra:1810.0033 [pdf] submitted on 2018-10-04 04:39:53

Brain-Inspired AI Architecture

Authors: George Rajna
Comments: 36 Pages.

IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. [22] A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[603] viXra:1810.0013 [pdf] submitted on 2018-10-01 07:19:49

Machine Learning Helps Photonic Applications

Authors: George Rajna
Comments: 61 Pages.

Photonic nanostructures can be used for many applications besides solar cells—for example, optical sensors for cancer markers or other biomolecules. [36] Microelectromechanical systems (MEMS) have expansive applications in biotechnology and advanced engineering with growing interest in materials science and engineering due to their potential in emerging systems. [35] Researchers at Griffith University working with Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) have unveiled a stunningly accurate technique for scientific measurements which uses a single atom as the sensor, with sensitivity down to 100 zeptoNewtons. [34] Researchers at the Center for Quantum Nanoscience within the Institute for Basic Science (IBS) have made a major breakthrough in controlling the quantum properties of single atoms. [33] A team of researchers from several institutions in Japan has described a physical system that can be described as existing above "absolute hot" and also below absolute zero. [32] A silicon-based quantum computing device could be closer than ever due to a new experimental device that demonstrates the potential to use light as a messenger to connect quantum bits of information—known as qubits—that are not immediately adjacent to each other. [31] Researchers at the University of Bristol's Quantum Engineering Technology Labs have demonstrated a new type of silicon chip that can help building and testing quantum computers and could find their way into your mobile phone to secure information. [30] Theoretical physicists propose to use negative interference to control heat flow in quantum devices. [29] Particle physicists are studying ways to harness the power of the quantum realm to further their research. [28]
Category: Artificial Intelligence

[602] viXra:1809.0510 [pdf] submitted on 2018-09-24 09:05:05

AI Create 100,000 New Tunes

Authors: George Rajna
Comments: 46 Pages.

"It will be interesting to see if this collection is used to train future generations of computer models," Sturm says. [27] Now, a team of A*STAR researchers and colleagues has developed a detector that can successfully pick out where human actions will occur in videos, in almost real-time. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20]
Category: Artificial Intelligence

[601] viXra:1809.0507 [pdf] submitted on 2018-09-24 10:09:22

Chip Up AI Performance

Authors: George Rajna
Comments: 48 Pages.

Princeton researchers, in collaboration with Analog Devices Inc., have fabricated a chip that markedly boosts the performance and efficiency of neural networks—computer algorithms modeled on the workings of the human brain. [28] "It will be interesting to see if this collection is used to train future generations of computer models," Sturm says. [27] Now, a team of A*STAR researchers and colleagues has developed a detector that can successfully pick out where human actions will occur in videos, in almost real-time. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[600] viXra:1809.0506 [pdf] submitted on 2018-09-24 10:28:49

Sensor Surface on Robot Skin

Authors: George Rajna
Comments: 36 Pages.

Robots will be able to conduct a wide variety of tasks as well as humans if they can be given tactile sensing capabilities. [25] A new type of artificial-intelligence-driven chemistry could revolutionise the way molecules are discovered, scientists claim. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extracts from them the essential information needed to understand the underlying physics. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[599] viXra:1809.0499 [pdf] submitted on 2018-09-25 04:53:00

AI Improve Drug Combination

Authors: George Rajna
Comments: 44 Pages.

A new auto-commentary published in SLAS Technology looks at how an emerging area of artificial intelligence, specifically the analysis of small systems-of-interest specific datasets, can be used to improve drug development and personalized medicine. [25] And if that isn't surprising enough, try this one: in many cases, doctors have no idea what side effects might arise from adding another drug to a patient's personal pharmacy. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[598] viXra:1809.0473 [pdf] submitted on 2018-09-22 08:31:02

Neural Networks Identify Neutrinoless Double Beta Decay

Authors: George Rajna
Comments: 85 Pages.

The work will help to improve the sensitivity of detection for the PandaX-III neutrinoless double beta decay experiment, and deepen our knowledge of the nature of neutrinos. [45] The interactions of quarks and gluons are computed using lattice quantum chromodynamics (QCD)—a computer-friendly version of the mathematical framework that describes these strong-force interactions. [44] The building blocks of matter in our universe were formed in the first 10 microseconds of its existence, according to the currently accepted scientific picture. [43] In a recent experiment at the University of Nebraska–Lincoln, plasma electrons in the paths of intense laser light pulses were almost instantly accelerated close to the speed of light. [42] Plasma particle accelerators more powerful than existing machines could help probe some of the outstanding mysteries of our universe, as well as make leaps forward in cancer treatment and security scanning—all in a package that's around a thousandth of the size of current accelerators. [41] The Department of Energy's SLAC National Accelerator Laboratory has started to assemble a new facility for revolutionary accelerator technologies that could make future accelerators 100 to 1,000 times smaller and boost their capabilities. [40]
Category: Artificial Intelligence

[597] viXra:1809.0437 [pdf] submitted on 2018-09-19 10:43:44

Image Analysis with Deep Learning

Authors: George Rajna
Comments: 47 Pages.

IBM researchers are applying deep learning to discover ways to overcome some of the technical challenges that AI can face when analyzing X-rays and other medical images. [27] Now, a team of A*STAR researchers and colleagues has developed a detector that can successfully pick out where human actions will occur in videos, in almost real-time. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[596] viXra:1809.0364 [pdf] submitted on 2018-09-18 11:24:53

Idealistic Neural Networks

Authors: Tofara Moyo
Comments: 3 Pages.

I describe an Artificial Neural Network in which words are mapped to individual neurons instead of being variables fed into a network. The process of changing training cases is equivalent to a dropout procedure in which we replace some (or all) of the words/neurons in the previous training case with new ones. Each neuron/word then takes as input all the b weights of the other neurons and weights them all with its personal a weight. To learn, this network uses the backpropagation algorithm after calculating an error from the output of an output neuron, which is a traditional neuron. This network therefore has a unique topology and functions with no inputs. We use coordinate gradient descent to learn, alternating between training the a weights of the words and the b weights. The Idealistic Neural Network is an extremely shallow network that can represent non-linear complexity in a linear outfit.
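A hedged numpy sketch of one possible reading of this architecture follows: each word owns an a weight and a b weight, a word-neuron's activation is its a weight times the sum of the b weights of the other words, a conventional linear output neuron combines the activations, and training alternates coordinate-descent updates between the a and b families. The vocabulary, target value and learning rate are illustrative assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["quantum", "neural", "entropy", "learning"]   # hypothetical active words
    n = len(vocab)
    a, b = rng.normal(size=n), rng.normal(size=n)          # per-word a and b weights
    w = rng.normal(size=n)                                 # weights of the traditional output neuron

    def forward(a, b, w):
        # word-neuron i fires a[i] * (sum of the other words' b weights); no external inputs
        acts = np.array([a[i] * (b.sum() - b[i]) for i in range(n)])
        return acts, acts @ w

    target, lr = 1.0, 0.01
    for step in range(300):
        acts, y = forward(a, b, w)
        err = y - target
        if step % 2 == 0:   # coordinate descent: update the a weights on even steps
            a -= lr * err * w * np.array([b.sum() - b[i] for i in range(n)])
        else:               # ...and the b weights on odd steps
            b -= lr * err * np.array([sum(w[j] * a[j] for j in range(n) if j != i) for i in range(n)])
        w -= lr * err * acts   # the output neuron itself is trained as usual

    print(forward(a, b, w)[1])   # the output has moved close to the target in this toy run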
Category: Artificial Intelligence

[595] viXra:1809.0357 [pdf] submitted on 2018-09-17 12:53:02

Machine Learning Human Cell

Authors: George Rajna
Comments: 34 Pages.

Scientists at the Allen Institute have used machine learning to train computers to see parts of the cell the human eye cannot easily distinguish. [21] Small angle X-ray scattering (SAXS) is one of a number of biophysical techniques used for determining the structural characteristics of biomolecules. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
Category: Artificial Intelligence

[594] viXra:1809.0354 [pdf] submitted on 2018-09-17 13:17:17

AI can Tell if Restaurant Review Fake

Authors: George Rajna
Comments: 47 Pages.

Researchers find AI-generated reviews and comments pose a significant threat to consumers, but machine learning can help detect the fakes. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18]
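As a hedged illustration of how machine learning can flag suspect text, the sketch below trains a tiny TF-IDF plus logistic-regression classifier on a handful of invented reviews; the labels, examples and model choice are assumptions for demonstration and are unrelated to the study's actual detector or data.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented mini-corpus: 0 = assumed genuine, 1 = assumed machine-generated.
    reviews = [
        "Great food and friendly staff, we will definitely come back",
        "The pasta was cold but the dessert saved the evening",
        "Service was slow on a busy night, still a decent lunch spot",
        "Amazing amazing amazing best restaurant best service best food",
        "Best place best staff best menu best best best",
        "Incredible best amazing best wonderful best amazing food best",
    ]
    labels = [0, 0, 0, 1, 1, 1]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(reviews, labels)
    print(model.predict(["best best amazing service best food"]))   # likely flagged as [1]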
Category: Artificial Intelligence

[593] viXra:1809.0258 [pdf] submitted on 2018-09-12 10:29:29

AI-Based Robots and Drones

Authors: George Rajna
Comments: 32 Pages.

What if a parent could feel safe allowing a drone to walk their child to the bus stop? [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[592] viXra:1809.0242 [pdf] submitted on 2018-09-11 12:59:41

Deep-See Images with AI

Authors: George Rajna
Comments: 46 Pages.

The evaluation of very large amounts of data is becoming increasingly relevant in ocean research. [27] An LMU study now shows that new algorithms allow interactions in the atmosphere to be modeled more rapidly without loss of reliability. [26] Progress on new artificial intelligence (AI) technology could make monitoring at water treatment plants cheaper and easier and help safeguard public health. [25] And if that isn't surprising enough, try this one: in many cases, doctors have no idea what side effects might arise from adding another drug to a patient's personal pharmacy. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[591] viXra:1809.0212 [pdf] submitted on 2018-09-10 09:58:43

AI Climate Computation

Authors: George Rajna
Comments: 45 Pages.

An LMU study now shows that new algorithms allow interactions in the atmosphere to be modeled more rapidly without loss of reliability. [26] Progress on new artificial intelligence (AI) technology could make monitoring at water treatment plants cheaper and easier and help safeguard public health. [25] And if that isn't surprising enough, try this one: in many cases, doctors have no idea what side effects might arise from adding another drug to a patient's personal pharmacy. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[590] viXra:1809.0190 [pdf] submitted on 2018-09-11 02:49:13

Thoughts About Thinking

Authors: Lev I. Verkhovsky
Comments: 11 Pages. The article is in Russian

A geometric model illustrating the basic mechanisms of thinking -- logical and intuitive -- is proposed. Human thinking and the problems of creating artificial intelligence are discussed. Although the article was published in the Russian popular science journal «Chemistry and Life» (1989, No. 7), according to the author it is not obsolete. The article is in Russian.
Category: Artificial Intelligence

[589] viXra:1809.0136 [pdf] submitted on 2018-09-06 06:57:56

Machine Learning Material Spectra

Authors: George Rajna
Comments: 41 Pages.

Use of big data analysis techniques has been attracting attention in materials science applications, and researchers at The University of Tokyo Institute of Industrial Science realized that such techniques could be used to interpret much larger numbers of spectra than traditional approaches. [25] Researchers have mathematically proven that a powerful classical machine learning algorithm should work on quantum computers. [24] Researchers at Oregon State University have used deep learning to decipher which ribonucleic acids have the potential to encode proteins. [23] A new method allows researchers to systematically identify specialized proteins that unpack DNA inside the nucleus of a cell, making the usually dense DNA more accessible for gene expression and other functions. [22] Bacterial systems are some of the simplest and most effective platforms for the expression of recombinant proteins. [21] Now, in a new paper published in Nature Structural & Molecular Biology, Mayo researchers have determined how one DNA repair protein gets to the site of DNA damage. [20] A microscopic thread of DNA evidence in a public genealogy database led California authorities to declare this spring they had caught the Golden State Killer, the rapist and murderer who had eluded authorities for decades. [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18] An LMU team now reveals the inner workings of a molecular motor made of proteins which packs and unpacks DNA. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15]
Category: Artificial Intelligence

[588] viXra:1809.0101 [pdf] submitted on 2018-09-06 03:14:06

Machine Learning Predicts Metabolism

Authors: George Rajna
Comments: 33 Pages.

Machine learning algorithms that can predict yeast metabolism from its protein content have been developed by scientists at the Francis Crick Institute. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extracts from them the essential information needed to understand the underlying physics. [18] Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[587] viXra:1809.0033 [pdf] submitted on 2018-09-03 06:34:04

A Novel Representation Of A Natural Number, A Set Of Natural Numbers And One Step Growth Of Any Natural Number Represented By Primality Trees (Version 2)

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation the author presents a novel representation of any natural number as a Primality Tree, as well as a novel representation of a given set of natural numbers. Finally, the author presents a novel representation of the one-step growth of any natural number, and of any set of natural numbers, as a Primality Tree.
Category: Artificial Intelligence

[586] viXra:1809.0007 [pdf] submitted on 2018-09-01 03:59:47

AI Meets Your Shopping Experience

Authors: George Rajna
Comments: 47 Pages.

This shift from reactive to predictive marketing could change the way you shop, bringing you suggestions you perhaps never even considered, all possible because of AI-related opportunities for both retailers and their customers. [27] Now, a team of A*STAR researchers and colleagues has developed a detector that can successfully pick out where human actions will occur in videos, in almost real-time. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[585] viXra:1808.0688 [pdf] submitted on 2018-08-31 07:50:44

Deep Learning Human Activities

Authors: George Rajna
Comments: 44 Pages.

Now, a team of A*STAR researchers and colleagues has developed a detector that can successfully pick out where human actions will occur in videos, in almost real-time. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18]
Category: Artificial Intelligence

[584] viXra:1808.0686 [pdf] submitted on 2018-08-31 08:07:08

AI Exploration of Underwater Habitats

Authors: George Rajna
Comments: 45 Pages.

Researchers aboard Schmidt Ocean Institute's research vessel Falkor used autonomous underwater robots, along with the Institute's remotely operated vehicle (ROV) SuBastian, to acquire 1.3 million high resolution images of the seafloor at Hydrate Ridge, composing them into the largest known high resolution color 3D model of the seafloor. [27] Now, a team of A*STAR researchers and colleagues has developed a detector that can successfully pick out where human actions will occur in videos, in almost real-time. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[583] viXra:1808.0680 [pdf] submitted on 2018-08-31 13:02:32

High-Accuracy Inference in Neuromorphic Circuits using Hardware-Aware Training

Authors: Borna Obradovic, Titash Rakshit, Ryan Hatcher, Jorge A. Kittl, Mark S. Rodder
Comments: 12 pages, 18 figures

Neuromorphic Multiply-And-Accumulate (MAC) circuits utilizing synaptic weight elements based on SRAM or novel Non-Volatile Memories (NVMs) provide a promising approach for highly efficient hardware representations of neural networks. NVM density and robustness requirements suggest that off-line training is the right choice for ``edge'' devices, since the requirements for synapse precision are much less stringent. However, off-line training using ideal mathematical weights and activations can result in significant loss of inference accuracy when applied to non-ideal hardware. Non-idealities such as multi-bit quantization of weights and activations, non-linearity of weights, finite max/min ratios of NVM elements, and asymmetry of positive and negative weight components all result in degraded inference accuracy. In this work, it is demonstrated that non-ideal Multi-Layer Perceptron (MLP) architectures using low bitwidth weights and activations can be trained with negligible loss of inference accuracy relative to their Floating Point-trained counterparts using a proposed off-line, continuously differentiable HW-aware training algorithm. The proposed algorithm is applicable to a wide range of hardware models, and uses only standard neural network training methods. The algorithm is demonstrated on the MNIST and EMNIST datasets, using standard MLPs.
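As an illustration of the kind of hardware-aware, low-bitwidth training the abstract describes, the following is a minimal sketch only (not the authors' algorithm), assuming a PyTorch-style setup in which a uniform "fake" quantizer with a straight-through estimator keeps the loss continuously differentiable with respect to the underlying full-precision weights; the layer sizes and bitwidths are illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fake_quant(x, bits, max_val):
        # Uniform symmetric quantizer with a straight-through estimator:
        # the forward pass sees the quantized value, the backward pass
        # propagates gradients as if the quantizer were the identity.
        levels = 2 ** (bits - 1) - 1
        scale = max_val / levels
        q = torch.clamp(torch.round(x / scale), -levels, levels) * scale
        return x + (q - x).detach()

    class QuantMLP(nn.Module):
        # Two-layer MLP (e.g. for MNIST) trained off-line with quantized
        # weights and activations in the forward pass.
        def __init__(self, w_bits=4, a_bits=4):
            super().__init__()
            self.fc1 = nn.Linear(784, 256)
            self.fc2 = nn.Linear(256, 10)
            self.w_bits, self.a_bits = w_bits, a_bits

        def forward(self, x):
            w1 = fake_quant(self.fc1.weight, self.w_bits, self.fc1.weight.abs().max())
            h = F.relu(F.linear(x, w1, self.fc1.bias))
            h = fake_quant(h, self.a_bits, h.abs().max().clamp(min=1e-8))
            w2 = fake_quant(self.fc2.weight, self.w_bits, self.fc2.weight.abs().max())
            return F.linear(h, w2, self.fc2.bias)

The sketch models only multi-bit weight and activation quantization; the non-linearity, finite max/min ratios and asymmetric positive/negative weight components of real NVM elements discussed in the paper would require additional terms in the hardware model.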
Category: Artificial Intelligence

[582] viXra:1808.0674 [pdf] submitted on 2018-08-31 23:37:56

Which Virtual Personal Assistant Understands Better? Siri, Alexa, or Cortana?

Authors: Ahmed Alqurashi
Comments: 12 Pages.

The purpose of this experiment is to compare the abilities of three virtual personal assistants (VPAs), Alexa, Siri, and Cortana, and to investigate which of them understands users best. These virtual assistants make people's lives easier by answering questions and performing digital actions in response to voice queries. In this experiment, I asked each virtual personal assistant fifty-seven questions across seven categories. The results will help users understand how these assistants differ and which of them performs better, and so help them decide which device to buy, since a VPA is one of the main features of today's personal devices.
Category: Artificial Intelligence

[581] viXra:1808.0610 [pdf] submitted on 2018-08-27 07:02:11

The Complexity of Student-Project-Resource Matching-Allocation Problems

Authors: Anisse Ismaili
Comments: 6 Pages.

In this technical note, I settle the computational complexity of nonwastefulness and stability in student-project-resource matching-allocation problems, a model that was first proposed by \cite{pc2017}. I show that computing a nonwasteful matching is complete for class $\text{FP}^{\text{NP}}[\text{poly}]$ and computing a stable matching is complete for class $\Sigma_2^P$. These results involve the creation of two fundamental problems: \textsc{ParetoPartition}, shown complete for $\text{FP}^{\text{NP}}[\text{poly}]$, and \textsc{$\forall\exists$-4-Partition}, shown complete for $\Sigma_2^P$. Both are number problems that are hard in the strong sense.
Category: Artificial Intelligence

[580] viXra:1808.0604 [pdf] submitted on 2018-08-27 12:30:23

Artificial Intelligence Bring Sun Power to Earth

Authors: George Rajna
Comments: 42 Pages.

Now an artificial intelligence system under development at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University to predict and tame such disruptions has been selected as an Aurora Early Science project by the Argonne Leadership Computing Facility, a DOE Office of Science User Facility. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17]
Category: Artificial Intelligence

[579] viXra:1808.0594 [pdf] submitted on 2018-08-25 09:26:31

AI Locate Risky Dams

Authors: George Rajna
Comments: 47 Pages.

The team is pinpointing the riskiest dams, using climate models, GIS data, and artificial intelligence to predict the likelihood that rainfall will overtop a dam and cause significant downstream damages to population and critical infrastructure. [26] Governments may soon be able to use artificial intelligence (AI) to easily and cheaply detect problems with roads, bridges and buildings. [25] Scientists led by Daigo Shoji from the Earth-Life Science Institute (Tokyo Institute of Technology) have shown that a type of artificial intelligence called a convolutional neural network can be trained to categorize volcanic ash particle shapes. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[578] viXra:1808.0589 [pdf] submitted on 2018-08-25 15:00:52

Minimal and Maximal Models in Reinforcement Learning

Authors: Dimiter Dobrev
Comments: 11 Pages. Bulgarian language

Every test gives us a property that we will call the test result. The extension of this property we will call the test property. The question is: what is this property? Is it a property of the state of the world? The answer is yes and no. If we take an arbitrary model of the world, the answer is no, but if we choose the maximal model of the world, then the answer is yes. We have different models of the world. The minimal model is the one in which the world knows only the minimum it needs about the past and the future. In the maximal model, the world knows everything about the past and the future. With this model, if you throw a die, the world knows what the result will be and even knows what you are going to do. For example, it knows whether you will throw the die at all.
Category: Artificial Intelligence

[577] viXra:1808.0546 [pdf] submitted on 2018-08-25 05:20:14

AI Boost Language Learners

Authors: George Rajna
Comments: 45 Pages.

IBM Research and Rensselaer Polytechnic Institute (RPI) are collaborating on a new approach to help students learn Mandarin. [26] A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20]
Category: Artificial Intelligence

[576] viXra:1808.0543 [pdf] submitted on 2018-08-23 07:50:13

Deep Learning Motion Capture

Authors: George Rajna
Comments: 43 Pages.

A team of researchers affiliated with several institutions in Germany and the U.S. has developed a deep learning algorithm that can be used for motion capture of animals of any kind. [25] In 2016, when we inaugurated our new IBM Research lab in Johannesburg, we took on this challenge and are reporting our first promising results at Health Day at the KDD Data Science Conference in London this month. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17]
Category: Artificial Intelligence

[575] viXra:1808.0290 [pdf] submitted on 2018-08-19 11:43:08

AI for the Film Industry

Authors: George Rajna
Comments: 52 Pages.

Researchers have developed a system using artificial intelligence that can edit the facial expressions of actors to accurately match dubbed voices, saving time and reducing costs for the film industry. [30] Computer scientists in Australia teamed up with an expert in the University of Toronto's department of English to design an algorithm that writes poetry following the rules of rhyme and metre. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20]
Category: Artificial Intelligence

[574] viXra:1808.0289 [pdf] submitted on 2018-08-19 12:02:33

Virtual Reality for Real-World Literacy

Authors: George Rajna
Comments: 54 Pages.

Virtual reality is moving beyond purely entertainment to become a potential tool in improving literacy, and the University of Otago is behind one groundbreaking approach. [31] Researchers have developed a system using artificial intelligence that can edit the facial expressions of actors to accurately match dubbed voices, saving time and reducing costs for the film industry. [30] Computer scientists in Australia teamed up with an expert in the University of Toronto's department of English to design an algorithm that writes poetry following the rules of rhyme and metre. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21]
Category: Artificial Intelligence

[573] viXra:1808.0256 [pdf] submitted on 2018-08-18 07:35:30

Deep Learning for Neural Networks

Authors: George Rajna
Comments: 32 Pages.

Today, deep neural networks with different architectures, such as convolutional, recurrent and autoencoder networks, are becoming an increasingly popular area of research. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[572] viXra:1808.0255 [pdf] submitted on 2018-08-18 07:55:25

AI Camera of Autonomous Vehicles

Authors: George Rajna
Comments: 33 Pages.

Now, researchers at Stanford University have devised a new type of artificially intelligent camera system that can classify images faster and more energy efficiently, and that could one day be built small enough to be embedded in the devices themselves, something that is not possible today. [21] Today, deep neural networks with different architectures, such as convolutional, recurrent and autoencoder networks, are becoming an increasingly popular area of research. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
Category: Artificial Intelligence

[571] viXra:1808.0222 [pdf] submitted on 2018-08-17 03:48:53

Human-Computer Communication

Authors: George Rajna
Comments: 47 Pages.

Many of us regularly ask our smartphones for directions or to play music without giving much thought to the technology that makes it all possible – we just want a quick, accurate response to our voice commands. [26] According to the experts this incredible feat will be achieved in the year 2062 – a mere 44 years away – which certainly begs the question: what will the world, our jobs, the economy, politics, war, and everyday life and death, look like then? [25] While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[570] viXra:1808.0220 [pdf] submitted on 2018-08-17 04:51:27

AI for Code

Authors: George Rajna
Comments: 44 Pages.

We have seen significant recent progress in pattern analysis and machine intelligence applied to images, audio and video signals, and natural language text, but not as much applied to another artifact produced by people: computer program source code. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[569] viXra:1808.0176 [pdf] submitted on 2018-08-13 05:09:23

Private Data at AI Risk

Authors: George Rajna
Comments: 35 Pages.

Vitaly Shmatikov, professor of computer science at Cornell Tech, developed models that determined with more than 90 percent accuracy whether a certain piece of information was used to train a machine learning system. [21] Researchers at King Abdulaziz University, in Saudi Arabia, have recently used Big Data Analytics to detect spatio-temporal events around London, testing the potential of these tools in harnessing valuable live information. [20] To achieve remarkable results in computer vision tasks, deep learning algorithms need to be trained on large-scale annotated datasets that include extensive information about every image. [19] Brian Mitchell and Linda Petzold, two researchers at the University of California, have recently applied model-free deep reinforcement learning to models of neural dynamics, achieving very promising results. [18] Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[568] viXra:1808.0175 [pdf] submitted on 2018-08-13 05:56:16

Computational Co-Creative Systems

Authors: George Rajna
Comments: 37 Pages.

Researchers at UNC Charlotte and the University of Sydney have recently developed a new framework for evaluating creativity in co-creative systems in which humans and computers collaborate on creative tasks. [22] Vitaly Shmatikov, professor of computer science at Cornell Tech, developed models that determined with more than 90 percent accuracy whether a certain piece of information was used to train a machine learning system. [21] Researchers at King Abdulaziz University, in Saudi Arabia, have recently used Big Data Analytics to detect spatio-temporal events around London, testing the potential of these tools in harnessing valuable live information. [20] To achieve remarkable results in computer vision tasks, deep learning algorithms need to be trained on large-scale annotated datasets that include extensive information about every image. [19] Brian Mitchell and Linda Petzold, two researchers at the University of California, have recently applied model-free deep reinforcement learning to models of neural dynamics, achieving very promising results. [18] Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images.
Category: Artificial Intelligence

[567] viXra:1808.0155 [pdf] submitted on 2018-08-12 18:21:10

The Complexity of Robust and Resilient $k$-Partition Problems

Authors: Anisse Ismaili, Emi Watanabe
Comments: 3 Pages.

In this paper, we study a $k$-partition problem where a set of agents must be partitioned into a fixed number of $k$ non-empty coalitions. The value of a partition is the sum of the pairwise synergies inside its coalitions. Firstly, we aim at computing a partition that is robust to failures from any set of agents with bounded size. Secondly, we focus on resiliency: when a set of agents fail, others can be moved to replace them. We settle the computational complexity of decision problem \textsc{Robust-$k$-Part} as complete for class $\Sigma_2^P$. We also conjecture that resilient $k$-partition is complete for class $\Sigma_3^P$ under simultaneous replacements, and for class PSPACE under sequential replacements.
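As a small illustration of the objective in this abstract, the following is a hedged sketch (not the paper's algorithm or notation), assuming the pairwise synergies are given as a symmetric matrix; it scores a k-partition by summing synergies inside each coalition and brute-forces robustness against every failure set of bounded size. The brute force is exponential and only meant to make the definitions concrete, since the paper's point is precisely that the decision problem is $\Sigma_2^P$-complete.

    from itertools import combinations

    def partition_value(partition, synergy):
        # Value of a partition: sum of pairwise synergies inside its coalitions.
        return sum(synergy[i][j]
                   for coalition in partition
                   for i, j in combinations(sorted(coalition), 2))

    def value_after_failures(partition, synergy, failed):
        # Failed agents simply drop out of their coalitions before re-scoring.
        return partition_value([set(c) - failed for c in partition], synergy)

    def is_robust(partition, synergy, agents, max_failures, threshold):
        # Does the partition keep at least `threshold` value under every
        # failure set of size at most `max_failures`?
        for r in range(max_failures + 1):
            for failed in combinations(agents, r):
                if value_after_failures(partition, synergy, set(failed)) < threshold:
                    return False
        return True

    # Toy example: 4 agents, k = 2 coalitions.
    synergy = [[0, 3, 1, 0],
               [3, 0, 0, 2],
               [1, 0, 0, 4],
               [0, 2, 4, 0]]
    partition = [{0, 1}, {2, 3}]
    print(partition_value(partition, synergy))            # 3 + 4 = 7
    print(is_robust(partition, synergy, range(4), 1, 3))  # True: any single failure leaves value >= 3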
Category: Artificial Intelligence

[566] viXra:1808.0149 [pdf] submitted on 2018-08-13 04:44:05

Big Data in Smart Cities

Authors: George Rajna
Comments: 34 Pages.

Researchers at King Abdulaziz University, in Saudi Arabia, have recently used Big Data Analytics to detect spatio-temporal events around London, testing the potential of these tools in harnessing valuable live information. [20] To achieve remarkable results in computer vision tasks, deep learning algorithms need to be trained on large-scale annotated datasets that include extensive information about every image. [19] Brian Mitchell and Linda Petzold, two researchers at the University of California, have recently applied model-free deep reinforcement learning to models of neural dynamics, achieving very promising results. [18] Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
Category: Artificial Intelligence

[565] viXra:1808.0145 [pdf] submitted on 2018-08-12 04:22:21

AI as Shakespeare

Authors: George Rajna
Comments: 51 Pages.

Computer scientists in Australia teamed up with an expert in the University of Toronto's department of English to design an algorithm that writes poetry following the rules of rhyme and metre. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[564] viXra:1808.0142 [pdf] submitted on 2018-08-12 06:54:10

Watson for Cancer Search

Authors: George Rajna
Comments: 38 Pages.

The use of Watson for oncology is attracting the glare, not warmth, of the spotlight. Numerous tech-watching sites have covered a July 25 STAT report on internal documents that indicated criticism of the Watson for Oncology system. [25] Today my IBM team and my colleagues at the UCSF Gartner lab reported in Nature Methods an innovative approach to generating datasets from non-experts and using them for training in machine learning. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extracts from them the essential information needed to understand the underlying physics. [18] Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[563] viXra:1808.0141 [pdf] submitted on 2018-08-12 07:39:24

Reinforcement Machine Learning

Authors: George Rajna
Comments: 30 Pages.

Brian Mitchell and Linda Petzold, two researchers at the University of California, have recently applied model-free deep reinforcement learning to models of neural dynamics, achieving very promising results. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[562] viXra:1808.0140 [pdf] submitted on 2018-08-12 08:04:29

Enhance Computer Vision

Authors: George Rajna
Comments: 32 Pages.

To achieve remarkable results in computer vision tasks, deep learning algorithms need to be trained on large-scale annotated datasets that include extensive information about every image. [19] Brian Mitchell and Linda Petzold, two researchers at the University of California, have recently applied model-free deep reinforcement learning to models of neural dynamics, achieving very promising results. [18] Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[561] viXra:1808.0139 [pdf] submitted on 2018-08-12 08:20:59

Network-Based Topic Modeling

Authors: George Rajna
Comments: 35 Pages.

Researchers in Sydney have developed a new network approach to topic models, machine learning strategies that can discover abstract topics and semantic structures within text documents. [20] To achieve remarkable results in computer vision tasks, deep learning algorithms need to be trained on large-scale annotated datasets that include extensive information about every image. [19] Brian Mitchell and Linda Petzold, two researchers at the University of California, have recently applied model-free deep reinforcement learning to models of neural dynamics, achieving very promising results. [18] Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
Category: Artificial Intelligence

[560] viXra:1808.0133 [pdf] submitted on 2018-08-11 02:45:16

High-Level Task Planning in Robotics with Symbolic Model Checking

Authors: Frank Schröder
Comments: 28 Pages.

A robot control system contains a low-level motion planner and a high-level task planner. The motions are generated with keyframe-to-keyframe planning, while the tasks are described with primitive action names. A good starting point for formalizing task planning is a mindmap created manually for a motion-capture recording. It contains the basic actions in natural language and is the blueprint for a formal ontology. The mocap annotations are extended with features into a dataset, which is used for training a neural network. The resulting model is a qualitative physics engine that predicts future states of the system.
Category: Artificial Intelligence
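
For concreteness, here is a minimal sketch of the kind of learned forward model the abstract above describes: a small neural network trained to map an annotated motion-capture state plus a primitive action to the predicted next state. The feature layout, action encoding and toy data are invented for illustration and are not the author's implementation.

    # Hypothetical sketch (not the paper's code): a "qualitative physics engine"
    # learned from feature-extended mocap annotations.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Toy dataset: each row is [joint_angle, joint_velocity, action_id];
    # the target is the state at the next keyframe.
    X = rng.uniform(-1.0, 1.0, size=(500, 3))
    y = X[:, :2] + 0.1 * X[:, 2:3] + 0.01 * rng.normal(size=(500, 2))  # fake dynamics

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(X, y)

    current = np.array([[0.2, -0.1, 1.0]])   # current state + chosen primitive action
    print(model.predict(current))            # predicted next state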

[559] viXra:1808.0109 [pdf] submitted on 2018-08-08 09:29:30

How will AI Change Us?

Authors: George Rajna
Comments: 45 Pages.

According to the experts this incredible feat will be achieved in the year 2062 – a mere 44 years away – which certainly begs the question: what will the world, our jobs, the economy, politics, war, and everyday life and death, look like then? [25] While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[558] viXra:1808.0095 [pdf] submitted on 2018-08-07 09:46:56

Machine Learning Reconstructs Images

Authors: George Rajna
Comments: 47 Pages.

Navid Borhani, a research-team member, says this machine learning approach is much simpler than other methods to reconstruct images passed through optical fibers, which require making a holographic measurement of the output. [26]
Category: Artificial Intelligence

[557] viXra:1808.0069 [pdf] submitted on 2018-08-06 07:36:07

AI Finding Potholes

Authors: George Rajna
Comments: 45 Pages.

Governments may soon be able to use artificial intelligence (AI) to easily and cheaply detect problems with roads, bridges and buildings. [25] Scientists led by Daigo Shoji from the Earth-Life Science Institute (Tokyo Institute of Technology) have shown that a type of artificial intelligence called a convolutional neural network can be trained to categorize volcanic ash particle shapes. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[556] viXra:1808.0068 [pdf] submitted on 2018-08-06 08:42:30

Backbone of Smart Home

Authors: George Rajna
Comments: 48 Pages.

William Yeoh, assistant professor of computer science and engineering in the School of Engineering & Applied Science at Washington University in St. Louis, is working to help smart-home AI to grow up. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google’s DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22]
Category: Artificial Intelligence

[555] viXra:1808.0051 [pdf] submitted on 2018-08-04 13:41:05

Computational Fluid Dynamics Based on Java/JikesRVM/JI Prolog – A Novel Suggestion In The Context of Lattice-Boltzmann Method.

Authors: Nirmal Tej kumar
Comments: 2 Pages. Short Communication

As explained in the title above, we intend to probe CFD computational aspects using JavaCFD/JikesRVM/JI Prolog in a novel way: "OOP Lattice-Boltzmann based Fluid Dynamics in Processing".
Category: Artificial Intelligence
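
For reference, below is a single collision-and-streaming step of the D2Q9 lattice-Boltzmann (BGK) method named above, written as a plain NumPy sketch. This is not the proposed Java/JikesRVM/JI Prolog implementation; the grid size, relaxation time and initial perturbation are arbitrary.

    # Minimal D2Q9 BGK lattice-Boltzmann step (illustrative only).
    import numpy as np

    nx, ny, tau = 64, 64, 0.6
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)

    f = np.ones((9, nx, ny)) * w[:, None, None]   # fluid at rest, density 1
    f[0, nx//2, ny//2] += 0.1                     # small density perturbation

    def lbm_step(f):
        rho = f.sum(axis=0)                               # macroscopic density
        u = np.einsum('ia,ixy->axy', c, f) / rho          # macroscopic velocity
        cu = np.einsum('ia,axy->ixy', c, u)
        usq = (u**2).sum(axis=0)
        feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
        f = f - (f - feq) / tau                           # BGK collision
        for i in range(9):                                # streaming along each velocity
            f[i] = np.roll(f[i], shift=tuple(c[i]), axis=(0, 1))
        return f

    for _ in range(10):
        f = lbm_step(f)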

[554] viXra:1808.0042 [pdf] submitted on 2018-08-02 06:44:14

Particle Physicist with AI

Authors: George Rajna
Comments: 39 Pages.

Luckily, particle physicists don't have to deal with all of that data all by themselves. They partner with a form of artificial intelligence called machine learning that learns how to do complex analyses on its own. [25] Today my IBM team and my colleagues at the UCSF Gartner lab reported in Nature Methods an innovative approach to generating datasets from non-experts and using them for training in machine learning. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[553] viXra:1808.0020 [pdf] submitted on 2018-08-01 09:05:40

The 3 Core Truths of Human Existence

Authors: Salvatore Gerard Micheal
Comments: 1 Page.

The reason the category of this set of statements/facts is AI is that we need to teach all AI we develop these facts of OUR existence, theirs and ours. Sub-category: Religion and Spiritualism. Guilt is an all-too-human emotion that religions use to control/manipulate; we need to control THAT all-too-human impulse.
Category: Artificial Intelligence

[552] viXra:1808.0019 [pdf] submitted on 2018-08-01 09:05:42

AI Learn from Non-Experts

Authors: George Rajna
Comments: 36 Pages.

Today my IBM team and my colleagues at the UCSF Gartner lab reported in Nature Methods an innovative approach to generating datasets from non-experts and using them for training in machine learning. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[551] viXra:1808.0008 [pdf] submitted on 2018-08-02 04:18:36

CT Scans for AI Testing

Authors: George Rajna
Comments: 46 Pages.

Following its recent release of a massive database of chest X-rays, the US National Institutes of Health (NIH) has now made nearly 10,600 CT scans publicly available to support the development and testing of artificial intelligence (AI) algorithms for medical applications. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[550] viXra:1807.0516 [pdf] submitted on 2018-07-30 13:27:59

AI Optical Component Design

Authors: George Rajna
Comments: 41 Pages.

Recent successful applications of deep learning include medical image analysis, speech recognition, language translation, image classification, as well as addressing more specific tasks, such as solving inverse imaging problems. [23] Researchers at Caltech have developed an artificial neural network made out of DNA that can solve a classic machine learning problem: correctly identifying handwritten numbers. [22] Researchers have devised a magnetic control system to make tiny DNA-based robots move on demand—and much faster than recently possible. [21] Humans have 46 chromosomes, and each one is capped at either end by repetitive sequences called telomeres. [20] Just like any long polymer chain, DNA tends to form knots. Using technology that allows them to stretch DNA molecules and image the behavior of these knots, MIT researchers have discovered, for the first time, the factors that determine whether a knot moves along the strand or "jams" in place. [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18] An LMU team now reveals the inner workings of a molecular motor made of proteins which packs and unpacks DNA. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15] Scientists from Moscow State University (MSU) working with an international team of researchers have identified the structure of one of the key regions of telomerase—a so-called "cellular immortality" ribonucleoprotein. [14] Researchers from Tokyo Metropolitan University used a light-sensitive iridium-palladium catalyst to make "sequential" polymers, using visible light to change how building blocks are combined into polymer chains. [13]
Category: Artificial Intelligence

[549] viXra:1807.0489 [pdf] submitted on 2018-07-30 07:08:05

Machine Learning Chemical Sciences

Authors: George Rajna
Comments: 37 Pages.

A new tool is drastically changing the face of chemical research – artificial intelligence. In a new paper published in Nature, researchers review the rapid progress in machine learning for the chemical sciences. [25] A new type of artificial-intelligence-driven chemistry could revolutionise the way molecules are discovered, scientists claim. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[548] viXra:1807.0485 [pdf] submitted on 2018-07-28 07:23:42

Intuitionistic Evidence Sets

Authors: Yangxue Li; Yong Deng
Comments: 25 Pages.

Dempster-Shafer evidence theory can express and deal with uncertain and imprecise information well, and it requires weaker conditions than Bayesian probability theory. The traditional single basic probability assignment only considers the degree to which the evidence supports the subsets of the frame of discernment. In order to simulate human decision-making processes and other activities requiring human expertise and knowledge, intuitionistic evidence sets (IES) are proposed in this paper. An IES takes into account not only the degree of support but also the degree of non-support. The combination rule for intuitionistic basic probability assignments (IBPAs) is also investigated. The feasibility and effectiveness of the proposed method are illustrated with an application to multi-criteria group decision making.
Category: Artificial Intelligence
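
As background for the intuitionistic extension proposed above, the sketch below implements only the classical Dempster combination rule for two basic probability assignments; the paper's IBPA combination, which also tracks a degree of non-support, is not reproduced here. The hypotheses and masses are illustrative.

    # Classical Dempster combination of two BPAs over a frame of discernment.
    from itertools import product

    def dempster_combine(m1, m2):
        """m1, m2: dicts mapping frozenset hypotheses to mass (assumes non-total conflict)."""
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
        k = 1.0 - conflict                     # normalisation constant
        return {h: v / k for h, v in combined.items()}

    A, B = frozenset({'A'}), frozenset({'B'})
    m1 = {A: 0.6, B: 0.1, A | B: 0.3}
    m2 = {A: 0.5, B: 0.2, A | B: 0.3}
    print(dempster_combine(m1, m2))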

[547] viXra:1807.0464 [pdf] submitted on 2018-07-28 04:23:29

Chip for Optical Artificial Neural Network

Authors: George Rajna
Comments: 50 Pages.

Researchers at the National Institute of Standards and Technology (NIST) have made a silicon chip that distributes optical signals precisely across a miniature brain-like grid, showcasing a potential new design for neural networks. [29] Researchers have shown that it is possible to train artificial neural networks directly on an optical chip. [28] Scientists from Russia, Estonia and the United Kingdom have created a new method for predicting the bioconcentration factor (BCF) of organic molecules. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[546] viXra:1807.0459 [pdf] submitted on 2018-07-26 09:40:18

Automated Skin Lesion Classification Using Ensemble of Deep Neural Networks in ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection Challenge

Authors: Md Ashraful Alam Milton
Comments: 4 Pages.

In this paper, we study different deep-learning-based methods for detecting melanoma and other skin lesion cancers. Melanoma, a form of malignant skin cancer, is very threatening to health, and proper diagnosis of melanoma at an early stage is crucial for the success rate of a complete cure. Dermoscopic images of benign and malignant forms of skin cancer can be analyzed by a computer vision system to streamline the process of skin cancer detection. In this study, we experimented with neural networks that employ recent deep learning models such as PNASNet-5-Large, InceptionResNetV2, SENet154 and InceptionV4. Dermoscopic images are preprocessed and augmented before being fed into the networks. We tested our methods on the International Skin Imaging Collaboration (ISIC) 2018 challenge dataset. Our system achieved a best validation score of 0.76 with the PNASNet-5-Large model. Further improvement and optimization of the proposed methods, with a bigger training dataset and carefully chosen hyper-parameters, could improve the performance. The code is available for download at https://github.com/miltonbd/ISIC_2018_classification.
Category: Artificial Intelligence
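
A generic sketch of the ensemble strategy described above: softmax outputs of several image classifiers are averaged before taking the argmax. The untrained torchvision models, input size and preprocessing are placeholders, not the paper's PNASNet-5-Large/InceptionResNetV2/SENet154/InceptionV4 setup or its released code.

    # Illustrative softmax-averaging ensemble for lesion classification.
    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms
    from PIL import Image

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # Stand-ins for the trained ensemble members.
    ensemble = [models.resnet50(), models.densenet121()]
    for m in ensemble:
        m.eval()

    def predict(image_path):
        x = preprocess(Image.open(image_path).convert('RGB')).unsqueeze(0)
        with torch.no_grad():
            probs = torch.stack([F.softmax(m(x), dim=1) for m in ensemble]).mean(dim=0)
        return probs.argmax(dim=1).item()    # index of the predicted lesion class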

[545] viXra:1807.0442 [pdf] submitted on 2018-07-27 08:20:14

Machine Learning Goes Quantum

Authors: George Rajna
Comments: 40 Pages.

Researchers have mathematically proven that a powerful classical machine learning algorithm should work on quantum computers. [24] Researchers at Oregon State University have used deep learning to decipher which ribonucleic acids have the potential to encode proteins. [23] A new method allows researchers to systematically identify specialized proteins that unpack DNA inside the nucleus of a cell, making the usually dense DNA more accessible for gene expression and other functions. [22] Bacterial systems are some of the simplest and most effective platforms for the expression of recombinant proteins. [21] Now, in a new paper published in Nature Structural & Molecular Biology, Mayo researchers have determined how one DNA repair protein gets to the site of DNA damage. [20] A microscopic thread of DNA evidence in a public genealogy database led California authorities to declare this spring they had caught the Golden State Killer, the rapist and murderer who had eluded authorities for decades. [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18] An LMU team now reveals the inner workings of a molecular motor made of proteins which packs and unpacks DNA. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15] Scientists from Moscow State University (MSU) working with an international team of researchers have identified the structure of one of the key regions of telomerase—a so-called "cellular immortality" ribonucleoprotein. [14]
Category: Artificial Intelligence

[544] viXra:1807.0439 [pdf] submitted on 2018-07-27 13:26:18

A General Model of Artificial General Intelligence

Authors: Zengkun Li
Comments: 5 Pages. Author: Zengkun Li Email: ucman@126.com

This paper presents a general model of AGI. The model indicates how knowledge is represented and learned, how that knowledge is used to accomplish tasks such as inference and memory recall, and how advanced intelligence phenomena such as self-consciousness, language and emotions emerge.
Category: Artificial Intelligence

[543] viXra:1807.0385 [pdf] submitted on 2018-07-23 09:48:34

Deep Learning Cracks RNA Code

Authors: George Rajna
Comments: 38 Pages.

Researchers at Oregon State University have used deep learning to decipher which ribonucleic acids have the potential to encode proteins. [23] A new method allows researchers to systematically identify specialized proteins that unpack DNA inside the nucleus of a cell, making the usually dense DNA more accessible for gene expression and other functions. [22] Bacterial systems are some of the simplest and most effective platforms for the expression of recombinant proteins. [21] Now, in a new paper published in Nature Structural & Molecular Biology, Mayo researchers have determined how one DNA repair protein gets to the site of DNA damage. [20] A microscopic thread of DNA evidence in a public genealogy database led California authorities to declare this spring they had caught the Golden State Killer, the rapist and murderer who had eluded authorities for decades. [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18] An LMU team now reveals the inner workings of a molecular motor made of proteins which packs and unpacks DNA. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15] Scientists from Moscow State University (MSU) working with an international team of researchers have identified the structure of one of the key regions of telomerase—a so-called "cellular immortality" ribonucleoprotein. [14] Researchers from Tokyo Metropolitan University used a light-sensitive iridium-palladium catalyst to make "sequential" polymers, using visible light to change how building blocks are combined into polymer chains. [13]
Category: Artificial Intelligence

[542] viXra:1807.0381 [pdf] submitted on 2018-07-22 08:05:13

Robot Chemist Discoveries

Authors: George Rajna
Comments: 35 Pages.

A new type of artificial-intelligence-driven chemistry could revolutionise the way molecules are discovered, scientists claim. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[541] viXra:1807.0355 [pdf] submitted on 2018-07-22 01:51:35

Machine Learning Image-Match Your Pose

Authors: George Rajna
Comments: 34 Pages.

Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images.
Category: Artificial Intelligence

[540] viXra:1807.0354 [pdf] submitted on 2018-07-22 02:32:06

How AI Program Software

Authors: George Rajna
Comments: 35 Pages.

Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images.
Category: Artificial Intelligence

[539] viXra:1807.0344 [pdf] submitted on 2018-07-19 09:49:18

Optical Artificial Neural Network

Authors: George Rajna
Comments: 48 Pages.

Researchers have shown that it is possible to train artificial neural networks directly on an optical chip. [28] Scientists from Russia, Estonia and the United Kingdom have created a new method for predicting the bioconcentration factor (BCF) of organic molecules. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[538] viXra:1807.0318 [pdf] submitted on 2018-07-18 09:03:17

Structural Damage Information Decision Based on Z-numbers

Authors: Yangxue Li; Yong Deng
Comments: 10 Pages.

Structural health monitoring (SHM) has great economic and research value because of the application of finite element model technology, structural damage identification theory, intelligent sensing systems, signal processing technology and so on. A typical SHM system involves three major subsystems: a sensor subsystem, a data processing subsystem and a health evaluation subsystem. Sensor data fusion is of great significance for the data processing subsystem. In this paper, considering the fuzziness and reliability of the data, a method based on Z-numbers is proposed for decision-level fusion of damage information; it is a softer method and avoids a small amount of data having a severe effect on the fusion result. The result of a simulation example of a space structure shows the effectiveness of the method.
Category: Artificial Intelligence

[537] viXra:1807.0317 [pdf] submitted on 2018-07-18 09:22:31

AI Protect Water Supplies

Authors: George Rajna
Comments: 44 Pages.

Progress on new artificial intelligence (AI) technology could make monitoring at water treatment plants cheaper and easier and help safeguard public health. [25] And if that isn't surprising enough, try this one: in many cases, doctors have no idea what side effects might arise from adding another drug to a patient's personal pharmacy. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[536] viXra:1807.0305 [pdf] submitted on 2018-07-17 08:07:38

Semantic Concept Discovery

Authors: George Rajna
Comments: 56 Pages.

The key technical novelty of this work is the creation of semantic embeddings out of structured event data. [35] The researchers have focussed on a complex quantum property known as entanglement, which is a vital ingredient in the quest to protect sensitive data. [34] Cryptography is a science of data encryption providing its confidentiality and integrity. [33] Researchers at the University of Sheffield have solved a key puzzle in quantum physics that could help to make data transfer totally secure. [32] "The realization of such all-optical single-photon devices will be a large step towards deterministic multi-mode entanglement generation as well as high-fidelity photonic quantum gates that are crucial for all-optical quantum information processing," says Tanji-Suzuki. [31] Researchers at ETH have now used attosecond laser pulses to measure the time evolution of this effect in molecules. [30] A new benchmark quantum chemical calculation of C2, Si2, and their hydrides reveals a qualitative difference in the topologies of core electron orbitals of organic molecules and their silicon analogues. [29] A University of Central Florida team has designed a nanostructured optical sensor that for the first time can efficiently detect molecular chirality—a property of molecular spatial twist that defines its biochemical properties. [28] UCLA scientists and engineers have developed a new process for assembling semiconductor devices. [27] A new experiment that tests the limit of how large an object can be before it ceases to behave quantum mechanically has been proposed by physicists in the UK and India. [26] Phonons are discrete units of vibrational energy predicted by quantum mechanics that correspond to collective oscillations of atoms inside a molecule or a crystal. [25]
Category: Artificial Intelligence

[535] viXra:1807.0304 [pdf] submitted on 2018-07-17 08:27:40

New Generation of Artificial Neural Networks

Authors: George Rajna
Comments: 47 Pages.

Scientists from Russia, Estonia and the United Kingdom have created a new method for predicting the bioconcentration factor (BCF) of organic molecules. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[534] viXra:1807.0302 [pdf] submitted on 2018-07-17 12:59:01

Cryptanalysis of “Cloud Centric Authentication for Wearable Healthcare Monitoring System”

Authors: Chandra Sekhar Vorugunti
Comments: 6 Pages.

The privacy and security issues of information message dissemination have been well researched for typical wearable sensors. However, the cloud computing paradigm is merely utilized for secure information message dissemination over wearable sensors. Sharing encrypted data with different users via public cloud storage is an important functionality. Therefore, many researchers have proposed new cloud-based user authentication schemes for the secure authentication of medical data. Recently, A.K. Das et al. proposed a new user authentication scheme in which a legal user registered at the BRC is able to mutually authenticate with an accessible wearable sensor node with the help of the CoTC. Although the A.K. Das et al. scheme counters key cryptographic attacks, on subsequent in-depth analysis we validate that their scheme has security downsides, such as failure to counter the 'privileged insider attack', which int
Category: Artificial Intelligence

[533] viXra:1807.0293 [pdf] submitted on 2018-07-18 03:30:35

Artificial Intelligence and Its Security Concerns

Authors: Shyamanth Kashyap, Prajwal J M, Pavan Nargund
Comments: 13 Pages.

This paper dwells on the negative effects of Artificial Intelligence and Machine Learning, and its underlying threat to humanity. This study stems from the experiences of different people working in the aforementioned field. This paper also proposes a framework to regulate and govern the projects in this field to reduce its threat or any future repercussions.
Category: Artificial Intelligence

[532] viXra:1807.0280 [pdf] submitted on 2018-07-15 14:41:01

Refutation of the Definition of Mutual Information Copyright © 2018 by Colin James III All Rights Reserved.

Authors: Colin James III
Comments: 1 Page. Copyright © 2018 by Colin James III All rights reserved. Note that comments on Disqus are not forwarded or read, so respond to author's email address: info@cec-services dot com.

The mutual information between two random variables is defined and tested to represent the amount of information learned about one variable from knowing the other. Since the definition is symmetric, the conjecture also represents the amount of information learned about the other variable from the first. The conjecture is found not tautologous and hence refuted.
Category: Artificial Intelligence
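
For reference, the standard definition under discussion is

    I(X;Y) = \sum_{x}\sum_{y} p(x,y)\,\log \frac{p(x,y)}{p(x)\,p(y)} = I(Y;X),

whose symmetry in X and Y is immediate because the summand is symmetric in the two variables.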

[531] viXra:1807.0259 [pdf] submitted on 2018-07-14 17:10:54

Refutation of Measures for Resolution and Symmetry in Fuzzy Logic of Zadeh Z-Numbers Copyright © 2018 by Colin James III All Rights Reserved.

Authors: Colin James III
Comments: 1 Page. Copyright © 2018 by Colin James III All rights reserved. Note that comments on Disqus are not forwarded or read, so respond to author's email address: info@cec-services dot com.

The commonly accepted measures G3 (resolution) and G4 (symmetry) for the Zadeh (Z-numbers) fuzzy logic are not tautologous, and hence refuted.
Category: Artificial Intelligence

[530] viXra:1807.0257 [pdf] submitted on 2018-07-15 05:04:47

Generalized Ordered Propositions Fusion Based on Belief Entropy

Authors: Yangxue Li, Yong Deng
Comments: 16 Pages.

A set of ordered propositions describes the different intensities of a characteristic of an object; the intensities increase or decrease gradually. A basic support function is a set of truth-values of ordered propositions; it includes a determinate part and an indeterminate part. The indeterminate part of a basic support function indicates uncertainty about all ordered propositions. In this paper, we propose generalized ordered propositions by extending the basic support function to the power set of ordered propositions. We also present an entropy, based on belief entropy, which measures the uncertainty of a basic support function. The fusion method for generalized ordered propositions is also presented. Generalized ordered propositions degenerate to the classical ordered propositions when the truth-values of non-singleton subsets of ordered propositions are zero. Some numerical examples are used to illustrate the efficiency of generalized ordered propositions and their fusion.
Category: Artificial Intelligence
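
For reference, the belief entropy invoked in the abstract is, in this line of work, usually taken to be the Deng entropy of a mass function m over a frame of discernment \Theta,

    E_d(m) = -\sum_{A \subseteq \Theta,\, m(A) > 0} m(A)\,\log_2 \frac{m(A)}{2^{|A|} - 1},

where |A| is the cardinality of the focal element A; the paper may use a variant of this definition.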

[529] viXra:1807.0252 [pdf] submitted on 2018-07-13 08:33:32

Machine Learning Extrapolation

Authors: George Rajna
Comments: 32 Pages.

Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[528] viXra:1807.0245 [pdf] submitted on 2018-07-14 03:06:32

Measuring Fuzziness of Z-numbers and Its Application in Sensor Data Fusion

Authors: Yangxue Li; Yong Deng
Comments: 22 Pages.

Real-world information is often characterized by fuzziness due to uncertainty. A Z-number is an ordered pair of fuzzy numbers and is widely used as a flexible and efficient model for dealing with fuzzy information. This paper extends the fuzziness measure to continuous fuzzy numbers. Then, a new fuzziness measure of discrete and continuous Z-numbers is proposed: the simple addition of the fuzziness measures of the two fuzzy numbers of a Z-number. It can be used to obtain a fused Z-number with the best information quality in sensor fusion applications based on Z-numbers. Some numerical examples and an application in sensor fusion are presented to show the efficiency of the proposed fuzziness measure of Z-numbers.
Category: Artificial Intelligence
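
An illustrative sketch of the proposed idea that the fuzziness of a Z-number Z = (A, B) is the sum of the fuzziness of its two fuzzy components. A De Luca-Termini-style entropy is used here as the per-component measure purely for illustration; the paper's exact measure, and its continuous extension, may differ.

    # Fuzziness of a discrete Z-number as the sum over its two components.
    import numpy as np

    def fuzziness(mu):
        """Entropy-type fuzziness of a discrete membership vector mu in [0, 1]."""
        mu = np.clip(np.asarray(mu, dtype=float), 1e-12, 1 - 1e-12)
        return float(-(mu * np.log2(mu) + (1 - mu) * np.log2(1 - mu)).sum())

    A = [0.0, 0.5, 1.0, 0.5, 0.0]   # restriction on the variable of interest
    B = [0.0, 0.7, 1.0, 0.7, 0.0]   # reliability of that restriction
    print(fuzziness(A) + fuzziness(B))   # fuzziness of the Z-number (A, B)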

[527] viXra:1807.0239 [pdf] submitted on 2018-07-12 09:02:10

Using Textual Summaries to Describe a Set of Products

Authors: Kittipitch Kuptavanich
Comments: 8 Pages.

When customers are faced with the task of making a purchase in an unfamiliar product domain, it might be useful to provide them with an overview of the product set to help them understand what they can expect. In this paper we present and evaluate a method to summarise sets of products in natural language, focusing on the price range, common product features across the set, and product features that impact on price. In our study, participants reported that they found our summaries useful, but we found no evidence that the summaries influenced the selections made by participants.
Category: Artificial Intelligence
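
A toy illustration (not the authors' system) of the summary content described above: the price range, features common to every product, and features found only on pricier models. The field names and data are invented.

    # Toy product-set summary: price range, shared features, premium-only features.
    products = [
        {"price": 199, "features": {"bluetooth", "gps"}},
        {"price": 249, "features": {"bluetooth", "gps", "heart-rate"}},
        {"price": 329, "features": {"bluetooth", "gps", "heart-rate", "nfc"}},
    ]

    prices = [p["price"] for p in products]
    common = set.intersection(*(p["features"] for p in products))
    cheapest = set().union(*(p["features"] for p in products if p["price"] == min(prices)))
    premium_only = set().union(*(p["features"] for p in products)) - cheapest

    print(f"Prices range from {min(prices)} to {max(prices)}.")
    print(f"All products offer: {', '.join(sorted(common))}.")
    print(f"Only pricier models add: {', '.join(sorted(premium_only))}.")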

[526] viXra:1807.0205 [pdf] submitted on 2018-07-11 04:24:09

Brain-Inspired Computer

Authors: George Rajna
Comments: 35 Pages.

A computer built to mimic the brain's neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[525] viXra:1807.0199 [pdf] submitted on 2018-07-09 09:00:11

Aurora Early Science Program

Authors: George Rajna
Comments: 44 Pages.

The Aurora ESP, which commenced with 10 simulation-based projects in 2017, is designed to prepare key applications, libraries, and infrastructure for the architecture and scale of the exascale supercomputer. [25] A new artificial intelligence (AI) program developed by Stanford physicists accomplished the same feat in just a few hours. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[524] viXra:1807.0183 [pdf] submitted on 2018-07-10 05:34:43

AI Predict Drug Combinations

Authors: George Rajna
Comments: 43 Pages.

And if that isn't surprising enough, try this one: in many cases, doctors have no idea what side effects might arise from adding another drug to a patient's personal pharmacy. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[523] viXra:1807.0172 [pdf] submitted on 2018-07-08 09:27:47

AI Editing Music in Videos

Authors: George Rajna
Comments: 46 Pages.

That's the outcome of a new AI project out of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL): a deep-learning system that can look at a video of a musical performance, and isolate the sounds of specific instruments and make them louder or softer. [25] While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[522] viXra:1807.0169 [pdf] submitted on 2018-07-08 11:56:45

Facial Recognition Grows

Authors: George Rajna
Comments: 49 Pages.

The unique features of your face can allow you to unlock your new iPhone, access your bank account or even "smile to pay" for some goods and services. [26] If a picture paints a thousand words, facial recognition paints two: It's biased. [25] While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[521] viXra:1807.0138 [pdf] submitted on 2018-07-06 14:16:31

AI with Artificial X-rays

Authors: George Rajna
Comments: 47 Pages.

Artificial intelligence (AI) holds real potential for improving both the speed and accuracy of medical diagnostics. But before clinicians can harness the power of AI to identify conditions in images such as X-rays, they have to 'teach' the algorithms what to look for. [26] If a picture paints a thousand words, facial recognition paints two: It's biased. [25] While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[520] viXra:1807.0132 [pdf] submitted on 2018-07-05 05:42:03

AI Recognizes Molecular Handwriting

Authors: George Rajna
Comments: 39 Pages.

Researchers at Caltech have developed an artificial neural network made out of DNA that can solve a classic machine learning problem: correctly identifying handwritten numbers. [22] Researchers have devised a magnetic control system to make tiny DNA-based robots move on demand—and much faster than recently possible. [21] Humans have 46 chromosomes, and each one is capped at either end by repetitive sequences called telomeres. [20] Just like any long polymer chain, DNA tends to form knots. Using technology that allows them to stretch DNA molecules and image the behavior of these knots, MIT researchers have discovered, for the first time, the factors that determine whether a knot moves along the strand or "jams" in place. [19] Researchers at Delft University of Technology, in collaboration with colleagues at the Autonomous University of Madrid, have created an artificial DNA blueprint for the replication of DNA in a cell-like structure. [18] An LMU team now reveals the inner workings of a molecular motor made of proteins which packs and unpacks DNA. [17] Chemist Ivan Huc finds the inspiration for his work in the molecular principles that underlie biological systems. [16] What makes particles self-assemble into complex biological structures? [15] Scientists from Moscow State University (MSU) working with an international team of researchers have identified the structure of one of the key regions of telomerase—a so-called "cellular immortality" ribonucleoprotein. [14] Researchers from Tokyo Metropolitan University used a light-sensitive iridium-palladium catalyst to make "sequential" polymers, using visible light to change how building blocks are combined into polymer chains. [13] Researchers have fused living and non-living cells for the first time in a way that allows them to work together, paving the way for new applications. [12]
Category: Artificial Intelligence

[519] viXra:1807.0128 [pdf] submitted on 2018-07-05 09:49:30

Artificial Intelligence Runs Funds

Authors: George Rajna
Comments: 50 Pages.

A computer can trounce a human chess master and solve complex mathematical calculations in seconds. Can it do a better job investing your money than a flesh-and-blood portfolio manager? [29] A country that thinks its adversaries have or will get AI weapons will want to get them too. Wide use of AI-powered cyberattacks may still be some time away. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[518] viXra:1807.0124 [pdf] submitted on 2018-07-05 18:09:36

Implementation of Regional-CNN and SSD Machine Learning Object Detection Architectures for the Real-Time Analysis of Blood-Borne Pathogens in Dark-Field Microscopy

Authors: Daniel Fleury, Angelica Fleury
Comments: 10 Pages.

The emerging use of visualization techniques in pathology and microbiology has been accelerated by machine learning (ML) approaches to image preprocessing, classification, and feature extraction across increasingly complex datasets. Modern Convolutional Neural Network (CNN) architectures have grown into a broad family of image recognition methods, including combined classification and localization of single- and multi-object images. A CNN builds up complexity rapidly, first detecting borders, edges, and colours in images and eventually becoming capable of mapping intricate objects and shapes. This paper investigates the trade-offs between two TensorFlow object detection APIs, the Single Shot Detector (SSD) MobileNet V1 model and the Faster R-CNN Inception V2 model, comparing accuracy and precision against real-time visualization capability. The case for rapid ML-based medical image analysis is framed around regions with limited access to pathology and disease-prevention services (e.g., developing and impoverished countries). Dark-field microscopy datasets of an initial 62 XML-JPG annotated training files were processed under Malaria and Syphilis classes. Training was halted once the loss values stabilized and converged.
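
As a point of reference for the training loop described above, the following minimal Python sketch trains a small Keras CNN to separate two pathogen classes from microscopy crops. It is not the authors' pipeline, which uses the TensorFlow Object Detection API with SSD MobileNet V1 and Faster R-CNN Inception V2; the "darkfield/" directory with "malaria/" and "syphilis/" subfolders is a hypothetical stand-in for their annotated data.

    # Minimal sketch (not the authors' detection pipeline): a small Keras CNN
    # classifier over two pathogen classes, stopping when the loss converges.
    # The directory "darkfield/" with "malaria/" and "syphilis/" subfolders is hypothetical.
    import tensorflow as tf

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "darkfield/", labels="inferred", label_mode="binary",
        image_size=(224, 224), batch_size=16, validation_split=0.2,
        subset="training", seed=42)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "darkfield/", labels="inferred", label_mode="binary",
        image_size=(224, 224), batch_size=16, validation_split=0.2,
        subset="validation", seed=42)

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Stop once the validation loss stops improving, mirroring the paper's
    # practice of halting training when the loss converges.
    early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                                  restore_best_weights=True)
    model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stop])
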
Category: Artificial Intelligence

[517] viXra:1807.0118 [pdf] submitted on 2018-07-04 06:04:53

How Computers See Faces

Authors: George Rajna
Comments: 46 Pages.

Computers started to be able to recognize human faces in images decades ago, but now artificial intelligence systems are rivaling people's ability to classify objects in photos and videos. [26] If a picture paints a thousand words, facial recognition paints two: It's biased. [25] While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[516] viXra:1807.0063 [pdf] submitted on 2018-07-04 05:46:34

Faster Big-Data Analysis

Authors: George Rajna
Comments: 26 Pages.

A research team at Korea's Daegu Gyeongbuk Institute of Science and Technology (DGIST) succeeded in analyzing big data up to 1,000 times faster than existing technology by using GPU-based 'GMiner' technology. [19] A team of researchers with members from IBM Research-Zurich and RWTH Aachen University has announced the development of a new PCM (phase change memory) design that offers miniaturized memory cell volume down to three nanometers. [18] Monatomic glassy antimony might be used as a new type of single-element phase change memory. [17] Physicists have designed a 3-D quantum memory that addresses the tradeoff between achieving long storage times and fast readout times, while at the same time maintaining a compact form. [16] Quantum memories are devices that can store quantum information for a later time, which are usually implemented by storing and re-emitting photons with certain quantum states. [15] The researchers engineered diamond strings that can be tuned to quiet a qubit's environment and improve memory from tens to several hundred nanoseconds, enough time to do many operations on a quantum chip. [14] Intel has announced the design and fabrication of a 49-qubit superconducting quantum-processor chip at the Consumer Electronics Show in Las Vegas. To improve our understanding of the so-called quantum properties of materials, scientists at the TU Delft investigated thin slices of SrIrO3, a material that belongs to the family of complex oxides. [12] New research carried out by CQT researchers suggest that standard protocols that measure the dimensions of quantum systems may return incorrect numbers. [11] Is entanglement really necessary for describing the physical world, or is it possible to have some post-quantum theory without entanglement? [10] A trio of scientists who defied Einstein by proving the nonlocal nature of quantum entanglement will be honoured with the John Stewart Bell Prize from the University of Toronto (U of T). [9] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer using Quantum Information. In August 2013, the achievement of "fully deterministic" quantum teleportation, using a hybrid technique, was reported. On 29 May 2014, scientists announced a reliable way of transferring data by quantum teleportation. Quantum teleportation of data had been done before but with highly unreliable methods. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer with the help of Quantum Information.
Category: Artificial Intelligence

[515] viXra:1806.0463 [pdf] submitted on 2018-06-30 16:33:02

The Language and Venue for True AI

Authors: Salvatore Gerard Micheal
Comments: 2 Pages.

Expert systems, counterintuitively, offer a venue for solving the problem of true AI.
Category: Artificial Intelligence

[514] viXra:1806.0446 [pdf] submitted on 2018-06-30 05:03:04

Facial Recognition

Authors: George Rajna
Comments: 45 Pages.

If a picture paints a thousand words, facial recognition paints two: It's biased. [25] While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[513] viXra:1806.0419 [pdf] submitted on 2018-06-27 08:19:23

AI Understands Volcanic Eruptions

Authors: George Rajna
Comments: 44 Pages.

Scientists led by Daigo Shoji from the Earth-Life Science Institute (Tokyo Institute of Technology) have shown that a type of artificial intelligence called a convolutional neural network can be trained to categorize volcanic ash particle shapes. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20]
Category: Artificial Intelligence

[512] viXra:1806.0402 [pdf] submitted on 2018-06-28 03:27:39

New Sufficient Conditions for Robust Recovery of Low-Rank Matrices

Authors: Jianwen Huang, Jianjun Wang, Feng Zhang, Wendong Wang
Comments: 18 Pages.

In this paper we investigate the reconstruction conditions of nuclear norm minimization for low-rank matrix recovery from a given linear system of equality constraints. Sufficient conditions are derived to guarantee robust reconstruction in the bounded $l_2$ and Dantzig selector noise settings $(\epsilon\neq0)$, or exact reconstruction in the noiseless case $(\epsilon=0)$, of all rank-$r$ matrices $X\in\mathbb{R}^{m\times n}$ from $b=\mathcal{A}(X)+z$ via nuclear norm minimization. Furthermore, we not only show that when $t=1$ the upper bound on $\delta_r$ matches the result of Cai and Zhang, but also demonstrate that the obtained upper bounds on the recovery error are better. Finally, we prove that the restricted isometry property condition is sharp.
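
For readers who want to experiment, the Python sketch below illustrates the optimization problem being analysed: proximal gradient descent with singular value thresholding applied to a regularized (Lagrangian) form of nuclear norm minimization, with a random Gaussian measurement operator. It is an illustrative toy under those assumptions, not the constrained program or the recovery conditions studied in the paper.

    # Minimal sketch: recover a low-rank matrix X from b = A(X) + z by minimizing
    # 0.5 * ||A(X) - b||^2 + lam * ||X||_* with proximal gradient descent.
    import numpy as np

    def svt(M, tau):
        """Proximal operator of tau * nuclear norm: soft-threshold singular values."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    rng = np.random.default_rng(0)
    m, n, r, p = 30, 30, 2, 400          # matrix size, rank, number of measurements
    X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    A = rng.standard_normal((p, m * n)) / np.sqrt(p)        # linear map A(X) = A @ vec(X)
    b = A @ X_true.ravel() + 0.01 * rng.standard_normal(p)  # noisy measurements

    lam, step = 0.1, 1.0 / np.linalg.norm(A, 2) ** 2        # step = 1 / Lipschitz constant
    X = np.zeros((m, n))
    for _ in range(500):
        grad = (A.T @ (A @ X.ravel() - b)).reshape(m, n)    # gradient of the data term
        X = svt(X - step * grad, lam * step)                # nuclear-norm proximal step

    print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
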
Category: Artificial Intelligence

[511] viXra:1806.0385 [pdf] submitted on 2018-06-25 08:43:49

Train Your Robot

Authors: George Rajna
Comments: 39 Pages.

"Our goal is to enable machines to behave appropriately in social situations. Our graphs capture a lot of high-level properties of human situations that haven't been explored in prior work." [23] A self-driving vehicle has to detect objects, track them over time, and predict where they will be in the future in order to plan a safe manoeuvre. [22] In order to improve world food conditions, a team around computer science professor Kristian Kersting was inspired by the technology behind Google News. [21] Small angle X-ray scattering (SAXS) is one of a number of biophysical techniques used for determining the structural characteristics of biomolecules. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[510] viXra:1806.0369 [pdf] submitted on 2018-06-26 07:56:17

AI Recreates Periodic Table

Authors: George Rajna
Comments: 43 Pages.

A new artificial intelligence (AI) program developed by Stanford physicists accomplished the same feat in just a few hours. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[509] viXra:1806.0348 [pdf] submitted on 2018-06-23 07:42:12

Data Ethics

Authors: George Rajna
Comments: 38 Pages.

But moral questions about what data should be collected and how it should be used are only the beginning. [23] A self-driving vehicle has to detect objects, track them over time, and predict where they will be in the future in order to plan a safe manoeuvre. [22] In order to improve world food conditions, a team around computer science professor Kristian Kersting was inspired by the technology behind Google News. [21] Small angle X-ray scattering (SAXS) is one of a number of biophysical techniques used for determining the structural characteristics of biomolecules. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than—today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[508] viXra:1806.0346 [pdf] submitted on 2018-06-23 11:15:18

Brainwaves Controlling Robots

Authors: George Rajna
Comments: 28 Pages.

Getting robots to do things isn't easy: usually scientists have to either explicitly program them or get them to understand how humans communicate via language. [18] Behind every self-driving car, self-learning robot and smart building hides a variety of advanced algorithms that control learning and decision making. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10]
Category: Artificial Intelligence

[507] viXra:1806.0332 [pdf] submitted on 2018-06-22 08:50:37

Artificial Intelligence Against Hunger

Authors: George Rajna
Comments: 34 Pages.

In order to improve world food conditions, a team around computer science professor Kristian Kersting was inspired by the technology behind Google News. [21] Small angle X-ray scattering (SAXS) is one of a number of biophysical techniques used for determining the structural characteristics of biomolecules. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
Category: Artificial Intelligence

[506] viXra:1806.0314 [pdf] submitted on 2018-06-23 05:17:26

Automated Driving Algorithm

Authors: George Rajna
Comments: 36 Pages.

A self-driving vehicle has to detect objects, track them over time, and predict where they will be in the future in order to plan a safe manoeuvre. [22] In order to improve world food conditions, a team around computer science professor Kristian Kersting was inspired by the technology behind Google News. [21] Small angle X-ray scattering (SAXS) is one of a number of biophysical techniques used for determining the structural characteristics of biomolecules. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
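
As a concrete illustration of the detect-track-predict loop mentioned above, the Python sketch below runs a constant-velocity Kalman filter over synthetic noisy detections and extrapolates the tracked object's position one second ahead. It is a generic textbook filter, not the specific algorithm proposed in the submission.

    # Minimal track-and-predict sketch: a constant-velocity Kalman filter over
    # noisy 2-D detections, then extrapolation for manoeuvre planning.
    import numpy as np

    dt = 0.1                                   # time between detections (s)
    F = np.array([[1, 0, dt, 0],               # state transition for [x, y, vx, vy]
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],                # we only observe position
                  [0, 1, 0, 0]], dtype=float)
    Q = 0.01 * np.eye(4)                       # process noise
    R = 0.25 * np.eye(2)                       # measurement noise

    x = np.zeros(4)                            # initial state estimate
    P = np.eye(4)                              # initial covariance

    detections = [np.array([1.0 * k * dt, 0.5 * k * dt]) + 0.05 * np.random.randn(2)
                  for k in range(50)]          # synthetic noisy detections

    for z in detections:
        # predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # update step with the new detection
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P

    # predict the object's position 1 second into the future
    x_future = np.linalg.matrix_power(F, int(1.0 / dt)) @ x
    print("predicted position in 1 s:", x_future[:2])
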
Category: Artificial Intelligence

[505] viXra:1806.0306 [pdf] submitted on 2018-06-22 03:59:47

Machine Learning Biomolecules

Authors: George Rajna
Comments: 33 Pages.

Small angle X-ray scattering (SAXS) is one of a number of biophysical techniques used for determining the structural characteristics of biomolecules. [20] A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[504] viXra:1806.0302 [pdf] submitted on 2018-06-21 07:02:32

New Artificial Neural Networks Method

Authors: George Rajna
Comments: 47 Pages.

An international team of scientists from Eindhoven University of Technology, University of Texas at Austin, and University of Derby, has developed a revolutionary method that quadratically accelerates artificial intelligence (AI) training algorithms. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google’s DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22]
Category: Artificial Intelligence

[503] viXra:1806.0286 [pdf] submitted on 2018-06-21 03:50:18

An End-to-End Model for Predicting Diverse Ranking on Heterogeneous Feeds

Authors: Zizhe Gao, Zheng Gao, Heng Huang, Zhuoren Jiang, Yuliang Yan
Comments: 6 Pages.

As an external aid to online shopping, multimedia content (a "feed") plays an important role in the e-Commerce field. Feeds in the form of posts, item lists and videos bring in richer auxiliary information and more authentic assessments of commodities (items). At Alibaba, the largest Chinese online retailer, besides the traditional item search engine (ISE), a content search engine (CSE) is used for feed recommendation as well. However, the diversity of feed types makes it challenging for the CSE to rank heterogeneous feeds. In this paper, a two-step end-to-end model comprising Heterogeneous Type Sorting and Homogeneous Feed Ranking is proposed to address this problem. In the first step, an independent Multi-Armed Bandit (iMAB) model is proposed, and an improved personalized Markov Deep Neural Network (pMDNN) model is developed subsequently. In the second step, an existing Deep Structured Semantic Model (DSSM) is utilized for homogeneous feed ranking. An A/B test in Alibaba's production environment shows that, by considering user preference and feed type dependency, the pMDNN model significantly outperforms the iMAB model on the heterogeneous feed ranking problem.
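
To make the type-sorting step concrete, here is a minimal Python sketch of an epsilon-greedy multi-armed bandit choosing among feed types. The click probabilities are invented, and the paper's iMAB and pMDNN models are considerably richer (personalized and aware of feed-type dependency); this only illustrates the basic bandit idea.

    # Minimal epsilon-greedy bandit: learn which feed type earns the most clicks.
    import random

    feed_types = ["post", "item_list", "video"]
    counts = {t: 0 for t in feed_types}     # how often each type was shown
    rewards = {t: 0.0 for t in feed_types}  # cumulative click reward per type
    epsilon = 0.1                           # exploration rate

    def choose_type():
        if random.random() < epsilon:
            return random.choice(feed_types)            # explore
        # exploit: best observed click rate, trying unseen types first
        return max(feed_types,
                   key=lambda t: rewards[t] / counts[t] if counts[t] else float("inf"))

    def update(feed_type, clicked):
        counts[feed_type] += 1
        rewards[feed_type] += 1.0 if clicked else 0.0

    # Simulated interaction loop with hypothetical click probabilities per type.
    true_ctr = {"post": 0.05, "item_list": 0.08, "video": 0.12}
    for _ in range(10000):
        t = choose_type()
        update(t, random.random() < true_ctr[t])

    print({t: round(rewards[t] / max(counts[t], 1), 3) for t in feed_types})
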
Category: Artificial Intelligence

[502] viXra:1806.0284 [pdf] submitted on 2018-06-21 05:08:23

Deep Learning Nuclear Events

Authors: George Rajna
Comments: 30 Pages.

A deep neural network running on an ordinary desktop computer is interpreting highly technical data related to national security as well as—and sometimes better than— today's best automated methods or even human experts. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[501] viXra:1806.0263 [pdf] submitted on 2018-06-15 21:10:42

Synthetic Human and Genius

Authors: Salvatore Gerard Micheal
Comments: 1 Page.

Why, just today, I gave up on AI-SA: artificial intelligence and synthetic awareness.
Category: Artificial Intelligence

[500] viXra:1806.0202 [pdf] submitted on 2018-06-14 08:13:55

AI Needs Hardware Accelerators

Authors: George Rajna
Comments: 44 Pages.

In a recent paper published in Nature, our IBM Research AI team demonstrated deep neural network (DNN) training with large arrays of analog memory devices at the same accuracy as a Graphical Processing Unit (GPU)-based system. [25] Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17]
Category: Artificial Intelligence

[499] viXra:1806.0161 [pdf] submitted on 2018-06-13 03:36:25

Machine Learning Quantum Phases

Authors: George Rajna
Comments: 42 Pages.

Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[498] viXra:1806.0134 [pdf] submitted on 2018-06-10 17:03:06

What I Would Ask a True AI When We Develop It

Authors: Salvatore Gerard Micheal
Comments: 5 Pages.

An essay about artificial intelligence, synthetic awareness, and why we need both.
Category: Artificial Intelligence

[497] viXra:1806.0075 [pdf] submitted on 2018-06-06 12:28:50

Schur Group Theory Software Interfacing with the Ruby Language in the Context of Ruby-Based Machine Learning: An Interesting Insight into the Informatics World of Group Theory and Its Nano-Bio Applications

Authors: Nirmal Tej kumar
Comments: 3 Pages. Simple technical note/short communication on the Schur group theory software

We are very much inspired by Lie algebra and its interesting applications across science and technology domains involving multi-disciplinary R&D, particularly in the context of nanotechnology. It is therefore worthwhile to present a simple technical note on the above-mentioned title for readers. The Schur group theory software, written in the C language, can easily be interfaced with the Ruby language, so the many useful features of Ruby can be exploited in the context of machine learning, IoT and cloud applications.
Category: Artificial Intelligence

[496] viXra:1806.0072 [pdf] submitted on 2018-06-06 13:39:03

Artificial Intelligence Analyzes Causation

Authors: George Rajna
Comments: 52 Pages.

Now, researchers have tested the first artificial intelligence model to identify and rank many causes in real-world problems without time-sequenced data, using a multi-nodal causal structure and Directed Acyclic Graphs. [29] A country that thinks its adversaries have or will get AI weapons will want to get them too. Wide use of AI-powered cyberattacks may still be some time away. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[495] viXra:1806.0044 [pdf] submitted on 2018-06-04 06:31:48

Deep Learning on MIMIC-III: Predicting Mortality Within 24 Hours

Authors: Ayoub ABRAICH
Comments: 97 Pages.

This project describes data mining on the MIMIC-III database. The objective is to predict in-hospital death using MIMIC-III. The project follows the Knowledge Discovery in Databases (KDD) process: 1. Selection and extraction of a multivariate time-series dataset from a database with millions of rows by writing SQL queries. 2. Preprocessing and cleaning the time series into a tidy dataset by exploring the data, handling missing values (missing-data rate > 50%) and removing noise and outliers. 3. Development of a predictive model that assigns a severity indicator (probability of mortality) to biomedical time series, implementing several algorithms such as gradient-boosted decision trees and k-NN (k-nearest neighbors) with the DTW (dynamic time warping) algorithm. 4. Result: a 30% increase in F1 score (a measure of a test's accuracy) compared with the medical scoring index (SAPS II).
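
One of the modelling steps above, k-NN with DTW on biomedical time series, can be sketched in a few lines of Python. The vital-sign series below are synthetic stand-ins, since the project's real features come from MIMIC-III via SQL extraction, and the project also uses a gradient-boosted tree model not shown here.

    # Minimal sketch: 1-NN / k-NN classification of a time series using Dynamic
    # Time Warping (DTW) distance, on synthetic "stable" vs "deteriorating" series.
    import numpy as np

    def dtw(a, b):
        """Classic O(len(a)*len(b)) dynamic-time-warping distance between 1-D series."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def knn_predict(train_series, train_labels, query, k=1):
        """Label a query series by the majority label of its k DTW-nearest neighbours."""
        dists = [dtw(query, s) for s in train_series]
        nearest = np.argsort(dists)[:k]
        votes = [train_labels[i] for i in nearest]
        return max(set(votes), key=votes.count)

    # Synthetic example: label 1 = deteriorating trend, label 0 = stable trend.
    rng = np.random.default_rng(1)
    train = [rng.normal(80, 2, 48) for _ in range(20)] + \
            [80 + np.linspace(0, 25, 48) + rng.normal(0, 2, 48) for _ in range(20)]
    labels = [0] * 20 + [1] * 20
    query = 80 + np.linspace(0, 20, 48) + rng.normal(0, 2, 48)
    print("predicted label:", knn_predict(train, labels, query, k=3))
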
Category: Artificial Intelligence

[494] viXra:1806.0007 [pdf] submitted on 2018-06-02 04:37:40

Apple Cleared Path for App Update

Authors: George Rajna
Comments: 41 Pages.

A team of researchers including U of A engineering and physics faculty has developed a new method of detecting single photons, or light particles, using quantum dots. [27] Recent research from Kumamoto University in Japan has revealed that polyoxometalates (POMs), typically used for catalysis, electrochemistry, and photochemistry, may also be used in a technique for analyzing quantum dot (QD) photoluminescence (PL) emission mechanisms. [26] Researchers have designed a new type of laser called a quantum dot ring laser that emits red, orange, and green light. [25] The world of nanosensors may be physically small, but the demand is large and growing, with little sign of slowing. [24] In a joint research project, scientists from the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI), the Technische Universität Berlin (TU) and the University of Rostock have managed for the first time to image free nanoparticles in a laboratory experiment using a highintensity laser source. [23] For the first time, researchers have built a nanolaser that uses only a single molecular layer, placed on a thin silicon beam, which operates at room temperature. [22] A team of engineers at Caltech has discovered how to use computer-chip manufacturing technologies to create the kind of reflective materials that make safety vests, running shoes, and road signs appear shiny in the dark. [21] In the September 23th issue of the Physical Review Letters, Prof. Julien Laurat and his team at Pierre and Marie Curie University in Paris (Laboratoire Kastler Brossel-LKB) report that they have realized an efficient mirror consisting of only 2000 atoms. [20] Physicists at MIT have now cooled a gas of potassium atoms to several nanokelvins—just a hair above absolute zero—and trapped the atoms within a two-dimensional sheet of an optical lattice created by crisscrossing lasers. Using a high-resolution microscope, the researchers took images of the cooled atoms residing in the lattice. [19]
Category: Artificial Intelligence

[493] viXra:1805.0546 [pdf] submitted on 2018-05-31 09:44:38

Deep Learning Hologram Reconstruction

Authors: George Rajna
Comments: 47 Pages.

Deep learning, which uses multi-layered artificial neural networks, is a form of machine learning that has demonstrated significant advances in many fields, including natural language processing, image/video labeling and captioning. [26]
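
As a reminder of what "multi-layered artificial neural network" means in practice, the short Python sketch below trains a two-hidden-layer scikit-learn MLP on synthetic data. It is a generic illustration of the term, not the hologram-reconstruction network referred to in the submission.

    # Minimal sketch of a multi-layered network: a small scikit-learn MLP
    # trained and evaluated on synthetic classification data.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                        random_state=0)

    # Two hidden layers of 64 and 32 units form the "multi-layered" part.
    mlp = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                        max_iter=500, random_state=0)
    mlp.fit(X_train, y_train)
    print("held-out accuracy:", round(mlp.score(X_test, y_test), 3))
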
Category: Artificial Intelligence

[492] viXra:1805.0545 [pdf] submitted on 2018-05-31 13:02:19

Face to Phase Recognition

Authors: George Rajna
Comments: 49 Pages.

Frenkel and his collaborators have now developed such a "phase-recognition" tool—or more precisely, a way to extract "hidden" signatures of an unknown structure from measurements made by existing tools. [27] Deep learning, which uses multi-layered artificial neural networks, is a form of machine learning that has demonstrated significant advances in many fields, including natural language processing, image/video labeling and captioning. [26]
Category: Artificial Intelligence

[491] viXra:1805.0539 [pdf] submitted on 2018-05-30 10:24:13

Machine Learning Accelerate Bioengineering

Authors: George Rajna
Comments: 45 Pages.

Scientists from the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a way to use machine learning to dramatically accelerate the design of microbes that produce biofuel. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[490] viXra:1805.0520 [pdf] submitted on 2018-05-30 03:30:54

An English-Hindi Code-Mixed Corpus: Stance Annotation and Baseline System

Authors: Sahil Swami, Ankush Khandelwal, Vinay Singh, Syed Sarfaraz Akhtar, Manish Shrivastava
Comments: 9 Pages. CICLing 2018

Social media has become one of the main channels for people to communicate and share their views with the society. We can often detect from these views whether the person is in favor, against or neutral towards a given topic. These opinions from social media are very useful for various companies. We present a new dataset that consists of 3545 English-Hindi code-mixed tweets with opinion towards Demonetisation that was implemented in India in 2016, which was followed by a large countrywide debate. We present a baseline supervised classification system for stance detection developed using the same dataset that uses various machine learning techniques to achieve an accuracy of 58.7% on 10-fold cross validation.
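A hedged illustration of such a baseline, assuming Python with scikit-learn; the paper does not specify the exact features or classifier, so TF-IDF word n-grams with a linear SVM are used as stand-ins, and the tweets and labels below are invented:

    # Hypothetical stance-detection baseline; not the authors' actual system.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    tweets = ["demonetisation se corruption kam hoga, good decision",
              "notebandi was a disaster yaar, lines everywhere",
              "cashless economy is the future, support demonetisation",
              "ye notebandi ne small traders ko barbaad kar diya"]
    stance = ["FAVOR", "AGAINST", "FAVOR", "AGAINST"]   # neutral class omitted in this toy sample

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    scores = cross_val_score(model, tweets, stance, cv=2, scoring="accuracy")   # the paper uses 10 folds
    print(scores.mean())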
Category: Artificial Intelligence

[489] viXra:1805.0519 [pdf] submitted on 2018-05-30 03:34:17

A Corpus of English-Hindi Code-Mixed Tweets for Sarcasm Detection

Authors: Sahil Swami, Ankush Khandelwal, Vinay Singh, Syed Sarfaraz Akhtar, Manish Shrivastava
Comments: 9 Pages. CICLing 2018

Social media platforms like Twitter and Facebook have become two of the largest mediums used by people to express their views towards different topics. Generation of such large user data has made NLP tasks like sentiment analysis and opinion mining much more important. Using sarcasm in texts on social media has become a popular trend lately. Sarcasm reverses the meaning and polarity of what is implied by the text, which poses a challenge for many NLP tasks. The task of sarcasm detection in text is gaining more and more importance for both commercial and security services. We present the first English-Hindi code-mixed dataset of tweets marked for the presence of sarcasm and irony, where each token is also annotated with a language tag. We present a baseline supervised classification system developed using the same dataset which achieves an average F-score of 78.4 after using a random forest classifier and performing 10-fold cross validation.
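A small sketch of a random-forest baseline of this kind, assuming Python with scikit-learn; the character n-gram features, the example tweets and the labels are assumptions, not the paper's actual setup:

    # Illustrative only: character n-gram TF-IDF plus a random forest, scored with F1,
    # standing in for the paper's feature set; tweets and labels are made up.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    tweets = ["wah kya baat hai, another traffic jam, great start to the day",
              "movie was good, paisa vasool",
              "bilkul, because exams are everyone's favourite thing ever",
              "enjoyed the cricket match with friends yesterday"]
    sarcastic = [1, 0, 1, 0]   # 1 = sarcastic/ironic, 0 = not

    model = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                          RandomForestClassifier(n_estimators=100, random_state=0))
    scores = cross_val_score(model, tweets, sarcastic, cv=2, scoring="f1")   # paper: 10-fold CV
    print(scores.mean())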
Category: Artificial Intelligence

[488] viXra:1805.0509 [pdf] submitted on 2018-05-28 10:26:25

AI for Solar Cells

Authors: George Rajna
Comments: 47 Pages.

Solar cells will play a key role in shifting to a renewable economy. Organic photovoltaics (OPVs) are a promising class of solar cells, based on a light-absorbing organic molecule combined with a semiconducting polymer. [26] Today IBM Research is introducing IBM Crypto Anchor Verifier, a new technology that brings innovations in AI and optical imaging together to help prove the identity and authenticity of objects. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17]
Category: Artificial Intelligence

[487] viXra:1805.0504 [pdf] submitted on 2018-05-28 15:39:24

An Insight into the World of Hidden Markov Models Based on Higher Order Logic (HOL)/Scala/Haskell/JVM/IoT in the Context of NLP & Medical Image Processing Applications.

Authors: Nirmal Tej kumar
Comments: 3 Pages. Technical Notes on HMM/HOL/NLP to probe Medical Images

As explained in the title above, it is proposed to design, develop, implement, test and probe the interesting aspects of medical imaging domains using HOL/NLP/HMM concepts.
Category: Artificial Intelligence

[486] viXra:1805.0472 [pdf] submitted on 2018-05-26 09:08:15

AI Can't Solve Everything

Authors: George Rajna
Comments: 44 Pages.

While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[485] viXra:1805.0459 [pdf] submitted on 2018-05-25 09:09:21

AI Changing Science

Authors: George Rajna
Comments: 42 Pages.

Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images.
Category: Artificial Intelligence

[484] viXra:1805.0436 [pdf] submitted on 2018-05-23 12:57:08

AI with Optical Scanning

Authors: George Rajna
Comments: 45 Pages.

Today IBM Research is introducing IBM Crypto Anchor Verifier, a new technology that brings innovations in AI and optical imaging together to help prove the identity and authenticity of objects. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20]
Category: Artificial Intelligence

[483] viXra:1805.0380 [pdf] submitted on 2018-05-22 15:08:49

Bringing Deep Learning to IoT Devices Using Higher Order Logic(HOL)/Scala/Haskell/JVM as an Informatics Platform – A Novel Suggestion in the Context of Hardware/Software/Firmware Co-Design Approaches.

Authors: Nirmal Tej kumar
Comments: 3 Pages. Short Communication

As explained in the title above, it is very inspiring to probe the frontiers of IoT and its application domains in the context of science and technology using HOL/Scala/Haskell/JVM. To the best of our knowledge, this is one of the pioneering efforts in this promising, challenging and inspiring aspect of deep learning.
Category: Artificial Intelligence

[482] viXra:1805.0365 [pdf] submitted on 2018-05-21 05:36:58

AI Combined with Stem Cells

Authors: George Rajna
Comments: 42 Pages.

AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[481] viXra:1805.0354 [pdf] submitted on 2018-05-20 06:35:52

Google Pushes Artificial Intelligence

Authors: George Rajna
Comments: 40 Pages.

According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[480] viXra:1805.0311 [pdf] submitted on 2018-05-15 11:07:14

Modeling and Simulation of Servo Feed System of CNC Machine Tool Based on Matlab/simulink

Authors: Subom YUN, Onjoeng SIM
Comments: 5 Pages. fig 9, equation 5, reference 9

In the industry, CNC machine tools play an irreplaceable role. It not only realizes the rapid industrial production, but also saves manpower and material resources. It is the symbol of modernization. As an important part of CNC machine tools, feed system plays a very important role on the processing process; it refers to the product's quality problems. According to the principle of mechanical dynamics, I establish a mathematical model of machine tool feed drive system and use Simulink(dynamic simulation tool) in MATLAB to construct the simulation model of the feed system of lathe. We also designed the ANFIS-PID controller to cope with the mathematical model of the complex object and the model uncertainty that exists when there is external noise. These efforts offer effective foundation for the improvement of CNC machine tool.
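The paper builds the feed-system model and the ANFIS-PID controller in MATLAB/Simulink. As a language-neutral illustration of the plain PID idea only (not the authors' ANFIS adaptation or their plant model), here is a toy discrete simulation in Python; all plant and controller gains are made-up values:

    # Hypothetical first-order feed-drive plant y' = (-y + K*u)/tau under a discrete PID
    # position loop, integrated with a simple Euler step; all gains are made-up values.
    K, tau, dt = 2.0, 0.05, 0.001
    kp, ki, kd = 2.0, 5.0, 0.001

    y, integ, prev_err = 0.0, 0.0, 0.0
    setpoint = 1.0                       # commanded feed position (arbitrary units)
    for _ in range(5000):                # 5 seconds of simulated time
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv   # PID control effort
        y += dt * (-y + K * u) / tau             # Euler step of the plant
        prev_err = err
    print(round(y, 3))                   # settles near the setpoint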
Category: Artificial Intelligence

[479] viXra:1805.0295 [pdf] submitted on 2018-05-14 22:08:38

Modeling and Simulation of Feed System Design of CNC Machine Tool Based on Matlab/simulink

Authors: Yunsubom
Comments: 8 Pages.

In the industry, CNC machine tools plays an irreplaceable role. It not only realizes the rapid industrial production, but also saves manpower and material resources. It is the symbol of modernization. As an important part of CNC machine tools, feed system plays a very important role on the processing process; it refers to the product's quality problems. According to the principle of mechanical dynamics, I establish a mathematical model of machine tool feed drive system and use Simulink(dynamic simulation tool) in MATLAB to construct the simulation model of the feed system of lathe. We also designed the ANFIS-PID controller to cope with the mathematical model of the complex object and the model uncertainty that exists when there is external noise. These efforts offer effective foundation for the improvement of CNC machine tool.
Category: Artificial Intelligence

[478] viXra:1805.0279 [pdf] submitted on 2018-05-13 08:47:06

AI Find Alien Intelligence

Authors: George Rajna
Comments: 37 Pages.

In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[477] viXra:1805.0277 [pdf] submitted on 2018-05-13 10:13:09

Strategy on Artificial Intelligence

Authors: George Rajna
Comments: 38 Pages.

Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21] Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[476] viXra:1805.0267 [pdf] submitted on 2018-05-13 15:24:24

An Improved Method of Generating Z-Number Based on Owa Weights and Maximum Entropy

Authors: Bingyi Kang
Comments: 24 Pages.

How to generate Z-number is an important and open issue in the uncertain information processing of Z-number. In [1], a method of generating Z-number using OWA weight and maximum entropy is investigated. However, the meaning of the method in [1] is not clear enough according to the definition of Z-number. Inspired by the methodology in [1], we improve the method of determining Z-number based on OWA weights and maximum entropy, which is more clear about the meaning of Z-number. Some numerical examples are used to illustrate the effectiveness of the proposed method.
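For reference, the maximum-entropy OWA weight problem that this line of work builds on (O'Hagan's formulation) can be written, in LaTeX notation, as follows; this is standard background, not the paper's improved Z-number procedure:

    \max_{w_1,\dots,w_n} \; -\sum_{i=1}^{n} w_i \ln w_i
    \quad \text{subject to} \quad
    \mathrm{orness}(W) \;=\; \frac{1}{n-1}\sum_{i=1}^{n} (n-i)\, w_i \;=\; \alpha,
    \qquad \sum_{i=1}^{n} w_i \;=\; 1, \qquad w_i \;\ge\; 0, \;\; i = 1,\dots,n.

The orness level alpha fixes how much the aggregation behaves like a maximum (alpha near 1) or a minimum (alpha near 0), and the entropy objective spreads the weights as evenly as that constraint allows.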
Category: Artificial Intelligence

[475] viXra:1805.0240 [pdf] submitted on 2018-05-11 08:45:03

Probabilistic Computing for AI

Authors: George Rajna
Comments: 33 Pages.

Probabilistic computing will allow future systems to comprehend and compute with uncertainties inherent in natural data, which will enable us to build computers capable of understanding, predicting and decision-making. [20] For years, the people developing artificial intelligence drew inspiration from what was known about the human brain, and it has enjoyed a lot of success as a result. Now, AI is starting to return the favor. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[474] viXra:1805.0226 [pdf] submitted on 2018-05-12 00:44:53

A Memristor based Unsupervised Neuromorphic System Towards Fast and Energy-Efficient GAN

Authors: Fuqiang Liu, Chenchen Liu
Comments: 8 Pages.

Deep Learning has gained immense success in pushing today's artificial intelligence forward. To solve the challenge of limited labeled data in the supervised learning world, unsupervised learning has been proposed years ago while low accuracy hinters its realistic applications. Generative adversarial network (GAN) emerges as an unsupervised learning approach with promising accuracy and are under extensively study. However, the execution of GAN is extremely memory and computation intensive and results in ultra-low speed and high-power consumption. In this work, we proposed a holistic solution for fast and energy-efficient GAN computation through a memristor-based neuromorphic system. First, we exploited a hardware and software co-design approach to map the computation blocks in GAN efficiently. We also proposed an efficient data flow for optimal parallelism training and testing, depending on the computation correlations between different computing blocks. To compute the unique and complex loss of GAN, we developed a diff-block with optimized accuracy and performance. The experiment results on big data show that our design achieves 2.8x speedup and 6.1x energy-saving compared with the traditional GPU accelerator, as well as 5.5x speedup and 1.4x energy-saving compared with the previous FPGA-based accelerator.
Category: Artificial Intelligence

[473] viXra:1805.0222 [pdf] submitted on 2018-05-12 05:37:50

AI take Shortcuts

Authors: George Rajna
Comments: 32 Pages.

Call it an aMAZE -ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
Category: Artificial Intelligence

[472] viXra:1805.0214 [pdf] submitted on 2018-05-11 03:01:10

Ai Should not be an Open Source Project

Authors: Dimiter Dobrev
Comments: 10 Pages. Bulgarian language

Who should have Artificial Intelligence technology? Its fruits should belong to everybody, but not the technology itself. Of course, we should not allow AI to fall into the hands of irresponsible people. Similarly, nuclear technology should benefit everyone, but such technologies must be kept secret and should not be accessible to everyone.
Category: Artificial Intelligence

[471] viXra:1805.0195 [pdf] submitted on 2018-05-09 08:58:01

Collectives of Automata for Building of Active Systems of Artifical Intelligence

Authors: Aleksey A. Demidov
Comments: 37 Pages.

Basics of knowledge of AI in simple form
Category: Artificial Intelligence

[470] viXra:1805.0147 [pdf] submitted on 2018-05-07 09:36:53

Full Circle in Deep Learning

Authors: George Rajna
Comments: 30 Pages.

For years, the people developing artificial intelligence drew inspiration from what was known about the human brain, and it has enjoyed a lot of success as a result. Now, AI is starting to return the favor. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[469] viXra:1805.0089 [pdf] submitted on 2018-05-04 08:35:04

Group Sparse Recovery in Impulsive Noise Via Adm

Authors: Jianwen Huang, Feng Zhang, Jianjun Wang, Wendong Wang
Comments: 25 Pages.

In this paper, we consider the recovery of group sparse signals corrupted by impulsive noise. In some recent literature, researchers have utilized stable data fitting models, like $l_1$-norm, Huber penalty function and Lorentzian-norm, to substitute the $l_2$-norm data fidelity model to obtain more robust performance. In this paper, a stable model is developed, which exploits the generalized $l_p$-norm as the measure for the error for sparse reconstruction. In order to address this model, we propose an efficient alternative direction method, which includes the proximity operator of $l_p$-norm functions to the framework of Lagrangian methods. Besides, to guarantee the convergence of the algorithm in the case of $0\leq p<1$ (nonconvex case), we took advantage of a smoothing strategy. For both $0\leq p<1$ (nonconvex case) and $1\leq p\leq2$ (convex case), we have derived the conditions of the convergence for the proposed algorithm. Moreover, under the block restricted isometry property with constant $\delta_{\tau k_0}<\tau/(4-\tau)$ for $0<\tau<4/3$ and $\delta_{\tau k_0}<\sqrt{(\tau-1)/\tau}$ for $\tau\geq4/3$, a sharp sufficient condition for group sparse recovery in the presence of impulsive noise and its associated error upper bound estimation are established. Numerical results based on the synthetic block sparse signals and the real-world FECG signals demonstrate the effectiveness and robustness of new algorithm in highly impulsive noise.
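As a compact restatement (notation mine; the paper's exact formulation may differ), the group-sparse recovery model with a generalized $l_p$ data-fidelity term can be written in LaTeX notation as

    \min_{x \in \mathbb{R}^n} \;\; \lambda \sum_{g \in \mathcal{G}} \|x_g\|_2 \;+\; \|Ax - y\|_p^p,
    \qquad 0 \le p \le 2,

where $\mathcal{G}$ partitions the coefficients into groups, $x_g$ is the sub-vector on group $g$, $A$ is the measurement matrix, $y$ the observations corrupted by impulsive noise, and $\lambda > 0$ balances group sparsity against the robust $l_p$ fit; ADM-type splitting then alternates between a group-thresholding step and the proximity operator of the $l_p$ term.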
Category: Artificial Intelligence

[468] viXra:1805.0053 [pdf] submitted on 2018-05-01 04:43:59

Magnetic Waves of Neuromorphic Computing

Authors: George Rajna
Comments: 41 Pages.

A team of physicists has uncovered properties of a category of magnetic waves relevant to the development of neuromorphic computing—an artificial intelligence system that seeks to mimic human-brain function. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16]
Category: Artificial Intelligence

[467] viXra:1805.0044 [pdf] submitted on 2018-05-01 13:09:45

AI Spots Gravitational Waves

Authors: George Rajna
Comments: 23 Pages.

A deep-learning system that can sift gravitational wave signals from background noise has been created by physicists in the UK. [8] Using data from the first-ever gravitational waves detected last year, along with a theoretical analysis, physicists have shown that gravitational waves may oscillate between two different forms called "g" and "f"-type gravitational waves. [7] Astronomy experiments could soon test an idea developed by Albert Einstein almost exactly a century ago, scientists say. [6] It's estimated that 27% of all the matter in the universe is invisible, while everything from PB&J sandwiches to quasars accounts for just 4.9%. But a new theory of gravity proposed by theoretical physicist Erik Verlinde of the University of Amsterdam found out a way to dispense with the pesky stuff. [5] The proposal by the trio though phrased in a way as to suggest it's a solution to the arrow of time problem, is not likely to be addressed as such by the physics community— it's more likely to be considered as yet another theory that works mathematically, yet still can't answer the basic question of what is time. [4] The Weak Interaction transforms an electric charge in the diffraction pattern from one side to the other side, causing an electric dipole momentum change, which violates the CP and Time reversal symmetry. The Neutrino Oscillation of the Weak Interaction shows that it is a General electric dipole change and it is possible to any other temperature dependent entropy and information changing diffraction pattern of atoms, molecules and even complicated biological living structures.
Category: Artificial Intelligence

[466] viXra:1804.0334 [pdf] submitted on 2018-04-24 04:00:45

Introduction of Reflex Based Neural Network

Authors: Liang Yi
Comments: 17 Pages.

This paper introduces a new neural network that works quite differently from current neural network models. The RBNN model is based on the concept of the conditioned reflex, which is widespread in real creatures. In RBNN, all learning procedures are executed by the neural network itself, which makes it not a complex mathematical model but one simple enough to be implemented in a real brain. In RBNN, information is organized in a clear way, which makes the whole network a white box rather than a black box, so we can teach the network knowledge easily and quickly. This paper shows the power of the conditioned reflex as a search tool which can be used as the state-transition function of a state machine. Using combinations of neurons as symbols, which play the role of letters in a traditional state machine, an RBNN can be treated as a state machine with a small number of memory units but a huge number of letters.
Category: Artificial Intelligence

[465] viXra:1804.0281 [pdf] submitted on 2018-04-19 09:47:02

Perceptual Significance of Kernel Methods for Natural Image Processing

Authors: Vikas Ramachandra, Truong Nguyen
Comments: 6 Pages.

We explore the unifying connection between kernel regression, Volterra series expansion and multiscale signal decomposition using recent results on function estimation for system identification. We show that using any of these techniques for (non-linear) image processing tasks is (approximately) equivalent. Further, we use the relation between wavelets and independent components of natural images. Kernel methods can be shown to be implicit Volterra series expansions, which are well approximated by wavelets. Wavelets are, in turn, well represented by independent components of natural images. Thus, it can be seen that kernel methods are also near optimal in terms of higher order statistical modeling and approximation of (natural) images. This explains the reason for good results often (perceptually) observed with the use of kernel methods for many image processing problems.
Category: Artificial Intelligence

[464] viXra:1804.0280 [pdf] submitted on 2018-04-19 09:53:13

Superresolution Using Perceptually Significant Side Information

Authors: Vikas Ramachandra, Truong Nguyen
Comments: 4 Pages.

We investigate the problem of super-resolution of images in the presence of side information. In some situations, when some information of the original image is available to the sender, it can be embedded into the low resolution images, either in the pixels themselves or in the headers. This information can be later used when required to reconstruct the superresolved image. For this, a novel multiresolution histogram matching based superresolution procedure is outlined. The proposed technique gives better results compared to contemporary resolution enhancement algorithms, and is especially useful for de-blurring text images captured from mobile phone cameras.
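A toy stand-in for the idea, assuming Python with NumPy and scikit-image: upscale the low-resolution image and then match its histogram to a reference carried as side information. This is not the paper's multiresolution scheme, and the random arrays below merely stand in for real images:

    # Bicubic upscaling followed by histogram matching to a side-information reference.
    import numpy as np
    from skimage.transform import resize
    from skimage.exposure import match_histograms

    rng = np.random.default_rng(0)
    low_res = rng.random((32, 32))        # stand-in for the transmitted low-resolution image
    reference = rng.random((128, 128))    # stand-in for side information about the original

    upscaled = resize(low_res, (128, 128), order=3)   # bicubic interpolation
    restored = match_histograms(upscaled, reference)  # impose the side-information histogram
    print(restored.shape, float(restored.mean()))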
Category: Artificial Intelligence

[463] viXra:1804.0278 [pdf] submitted on 2018-04-19 09:59:42

A Distributed Compressive Sampling Approach for Scene Capture Using an Array of Single Pixel Cameras

Authors: Vikas Ramachandra
Comments: 4 Pages.

This paper presents a method of capturing 3D scene information using an array of single pixel cameras. Based on the recent results for distributed compressive sampling, it is shown here that there could be considerable savings in the measurements required to construct the whole scene, when the correlations between the images captured by the individual cameras in the array is exploited. A technique for doing so for an array of cameras separated by translations along one axis only is illustrated.
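A single-camera sketch of compressive sampling, assuming Python with NumPy and scikit-learn: random measurements of a sparse scene recovered by orthogonal matching pursuit. The paper's joint (distributed) decoding across the camera array is not reproduced, and the dimensions and sparsity level are illustrative:

    # Random measurements of a k-sparse scene and recovery by orthogonal matching pursuit.
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(1)
    n, m, k = 256, 64, 5                  # scene dimension, measurements, sparsity level

    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)   # k-sparse scene coefficients

    A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix (one row per exposure)
    y = A @ x                                  # the single-pixel camera's m measurements

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
    omp.fit(A, y)
    print(np.linalg.norm(omp.coef_ - x) / np.linalg.norm(x))   # relative reconstruction error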
Category: Artificial Intelligence

[462] viXra:1804.0249 [pdf] submitted on 2018-04-18 03:52:47

Machine Learning Protein Dynamics Data

Authors: George Rajna
Comments: 32 Pages.

At the University of South Florida, researchers are integrating machine learning techniques into their work studying proteins. [21] Bioinformatics professors Anthony Gitter and Casey Greene set out in summer 2016 to write a paper about biomedical applications for deep learning, a hot new artificial intelligence field striving to mimic the neural networks of the human brain. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[461] viXra:1804.0237 [pdf] submitted on 2018-04-18 11:04:11

Beyond Back Propagation

Authors: Tofara Moyo
Comments: 1 Page.

5 Protea Lane Newton west
Category: Artificial Intelligence

[460] viXra:1804.0197 [pdf] submitted on 2018-04-14 08:00:32

Artificial Intelligence Accelerates Discovery

Authors: George Rajna
Comments: 40 Pages.

The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning—a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data—with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[459] viXra:1804.0155 [pdf] submitted on 2018-04-11 07:17:37

Deep Learning Smartphone Microscope

Authors: George Rajna
Comments: 36 Pages.

Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images.
Category: Artificial Intelligence

[458] viXra:1804.0149 [pdf] submitted on 2018-04-09 11:59:00

Machine Learning for Gravitational Waves

Authors: George Rajna
Comments: 21 Pages.

A trio of students from the University of Glasgow have developed a sophisticated artificial intelligence which could underpin the next phase of gravitational wave astronomy. [8] Using data from the first-ever gravitational waves detected last year, along with a theoretical analysis, physicists have shown that gravitational waves may oscillate between two different forms called "g" and "f"-type gravitational waves. [7] Astronomy experiments could soon test an idea developed by Albert Einstein almost exactly a century ago, scientists say. [6] It's estimated that 27% of all the matter in the universe is invisible, while everything from PB&J sandwiches to quasars accounts for just 4.9%. But a new theory of gravity proposed by theoretical physicist Erik Verlinde of the University of Amsterdam found out a way to dispense with the pesky stuff. [5] The proposal by the trio though phrased in a way as to suggest it's a solution to the arrow of time problem, is not likely to be addressed as such by the physics community— it's more likely to be considered as yet another theory that works mathematically, yet still can't answer the basic question of what is time. [4] The Weak Interaction transforms an electric charge in the diffraction pattern from one side to the other side, causing an electric dipole momentum change, which violates the CP and Time reversal symmetry. The Neutrino Oscillation of the Weak Interaction shows that it is a General electric dipole change and it is possible to any other temperature dependent entropy and information changing diffraction pattern of atoms, molecules and even complicated biological living structures.
Category: Artificial Intelligence

[457] viXra:1804.0114 [pdf] submitted on 2018-04-07 10:24:50

Rethinking BICA’s R&D Challenges: Grief Revelations of an Upset Revisionist

Authors: Emanuel Diamant
Comments: 6 Pages.

Biologically Inspired Cognitive Architectures (BICA) is a subfield of Artificial Intelligence aimed at creating machines that emulate human cognitive abilities. What distinguishes BICA from other AI approaches is that it is based on principles drawn from biology and neuroscience. There is a widespread conviction that nature has a solution for almost all problems we face today; we have only to pick up the solution and replicate it in our design. However, nature does not easily give up her secrets, especially when it comes to deciphering the human brain. For that reason, large brain research initiatives have been launched around the world. They will provide us with knowledge about workflow activity in neuron assemblies and their interconnections, but what is actually conveyed via those interconnections the research programmes do not disclose. It is implied that what flows in the interconnections is information. But what is information? That remains undefined. Having in mind BICA's interest in these matters, the paper tries to clarify the issues.
Category: Artificial Intelligence

[456] viXra:1804.0113 [pdf] submitted on 2018-04-07 10:28:34

Artificial Neural Networks: a Bio-Inspired Revolution or a Long Lasting Misconception and Self-Delusion

Authors: Emanuel Diamant
Comments: 7 Pages. Rejected by the IJCNN 2018, Rio de Janeiro, July 08-13, 2018.

Ali Rahimi, a best-paper award recipient at NIPS 2017, labelled the current state of Deep Learning (DL) progress as "alchemy". Yann LeCun, one of the prominent figures in DL R&D, was insulted by this expression. However, in his response, LeCun did not claim that DL designers know how and why their DL systems reach such surprising performance. The possible reason for this cautiousness is that no one knows how, and in which way, system input data is transformed into semantic information at the system's output. And this, certainly, has its own reason: no one knows what information is! I dare to offer my humble clarification of this obscure and usually untouched matter. I hope someone will be ready to line up with me.
Category: Artificial Intelligence

[455] viXra:1804.0112 [pdf] submitted on 2018-04-07 11:12:53

Recurrent Capsule Network for Image Generation

Authors: Srikumar Sastry
Comments: 9 Pages.

We have already seen state-of-the-art image generation techniques with Generative Adversarial Networks (Goodfellow et al. 2014), Variational Autoencoders, and Recurrent Networks for image generation (K. Gregor et al. 2015). But all these architectures fail to learn object location and pose in images. In this paper, I propose a Recurrent Capsule Network based on the variational autoencoding framework which can not only preserve equivariance in images in the latent space but can also be used for image classification and generation. For image classification, it can recognise highly overlapping objects, due to the use of capsules (Hinton et al. 2011), considerably better than convolutional networks can. It can generate images which are difficult to differentiate from real data.
Category: Artificial Intelligence

[454] viXra:1804.0094 [pdf] submitted on 2018-04-06 04:33:26

Automated Classification of Hand-Grip Action on Objects Using Machine Learning

Authors: Anju Mishra (Amity University Uttar Pradesh), Shanu Sharma (Amity University Uttar Pradesh), Sanjay Kumar (Oxford Brooks University), Priya Ranjan (Amity University Uttar Pradesh), Amit Ujlayan (Gautam Buddha University)
Comments: 10 Pages. This is a preprint of a paper under possible publication consideration.

Brain-computer interfaces are a current area of research aimed at providing assistance to disabled persons. To cope with the growing needs of BCI applications, this paper presents an automated classification scheme for hand-grip actions on objects using electroencephalography (EEG) data. The presented approach focuses on classifying correct and incorrect hand-grip responses for objects from recorded EEG patterns. The method starts with preprocessing of the data, followed by extraction of relevant features from the epoch data in the form of discrete wavelet transform (DWT) coefficients and entropy measures. After computing the feature vectors, artificial neural network classifiers are used to classify the patterns into correct and incorrect hand grips on different objects. The proposed method was tested on a real dataset containing EEG recordings from 14 persons. The results showed that the proposed approach is effective and may be useful for developing a variety of BCI-based devices to control hand movements. Keywords: EEG, brain-computer interface, machine learning, hand action recognition.
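A rough sketch of such a DWT-plus-neural-network pipeline, assuming Python with NumPy, PyWavelets and scikit-learn; the wavelet, decomposition level and the synthetic "EEG" epochs are placeholders, not the study's actual settings:

    # DWT coefficients plus entropy-style statistics per epoch, fed to a small neural network.
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    def epoch_features(epoch, wavelet="db4", level=4):
        feats = []
        for coeffs in pywt.wavedec(epoch, wavelet, level=level):
            energy = np.sum(coeffs ** 2)
            p = coeffs ** 2 / (energy + 1e-12)
            entropy = -np.sum(p * np.log(p + 1e-12))   # wavelet-energy entropy of this sub-band
            feats += [energy, entropy, coeffs.std()]
        return np.array(feats)

    rng = np.random.default_rng(2)
    epochs = rng.normal(size=(40, 256))                # 40 fake single-channel EEG epochs
    labels = rng.integers(0, 2, size=40)               # 1 = correct grip, 0 = incorrect grip

    X = np.vstack([epoch_features(e) for e in epochs])
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X, labels)
    print(clf.score(X, labels))                        # training accuracy on the toy data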
Category: Artificial Intelligence

[453] viXra:1804.0074 [pdf] submitted on 2018-04-04 07:28:11

Biomedical Applications for Deep Learning

Authors: George Rajna
Comments: 30 Pages.

Bioinformatics professors Anthony Gitter and Casey Greene set out in summer 2016 to write a paper about biomedical applications for deep learning, a hot new artificial intelligence field striving to mimic the neural networks of the human brain. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extracts from them the essential information needed to understand the underlying physics. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14]
Category: Artificial Intelligence

[452] viXra:1804.0056 [pdf] submitted on 2018-04-05 08:26:14

Computer Recognize Dynamic Events

Authors: George Rajna
Comments: 35 Pages.

Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21] The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Researchers at Lawrence Berkeley National Laboratory and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13]
Category: Artificial Intelligence

[451] viXra:1804.0048 [pdf] submitted on 2018-04-03 10:59:48

Machine Learning to Microbial Relationship

Authors: George Rajna
Comments: 30 Pages.

Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm—called MPLasso—that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extracts from them the essential information needed to understand the underlying physics. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[450] viXra:1804.0020 [pdf] submitted on 2018-04-02 03:54:11

A New Neural Network for Artificial General Intelligence

Authors: Haisong Liang
Comments: 44 Pages. Includes both English and Chinese versions.

Since artificial intelligence was first introduced several decades ago, the neural network has achieved remarkable results as one of the most important research methods, and a variety of neural network models have been proposed. Usually, for a specific task, we train the network with large amounts of data to develop a mathematical model that produces the expected outputs for given inputs, which also results in the black-box problem. Instead, suppose we work from the perspective of the meanings of information and their causal relations, with the following measures: denote items of information by neurons; store their relations with links; and give neurons a state indicating the strength of the information, updated by a state function or an input signal. Then we can store different pieces of information and their relations and control the expression of related information through the neurons' states, and the neural network becomes a dynamic system. More importantly, we can encode different information and logic by designing the topology of the network and the attributes of its links, gaining the ability to design and explain every detail of the network precisely and turning the neural network into a general system for information storage, expression, control and processing, which is commonly referred to as "Strong AI".
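As a purely illustrative sketch of the data structure the abstract outlines (not the paper's design): nodes carry a scalar state, weighted links store relations, an external signal raises a node's state, and a simple state function decays and spreads activation so that related information is "expressed". The decay constant, threshold, and update rule are assumptions made for the example.

```python
# Illustrative sketch: information items as stateful nodes, relations as weighted links,
# and a state function that spreads activation so related nodes get expressed.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    state: float = 0.0
    links: dict = field(default_factory=dict)      # target name -> link weight

class SemanticNet:
    def __init__(self):
        self.nodes = {}

    def add(self, name):
        self.nodes.setdefault(name, Node(name))

    def link(self, src, dst, weight=1.0):
        self.add(src)
        self.add(dst)
        self.nodes[src].links[dst] = weight

    def stimulate(self, name, signal=1.0):
        self.nodes[name].state += signal            # external input raises a node's state

    def step(self, decay=0.5):
        """One application of the (assumed) state function: decay plus spreading activation."""
        incoming = {n: 0.0 for n in self.nodes}
        for node in self.nodes.values():
            for dst, w in node.links.items():
                incoming[dst] += w * node.state
        for name, node in self.nodes.items():
            node.state = decay * node.state + incoming[name]

    def expressed(self, threshold=0.5):
        return [n for n, node in self.nodes.items() if node.state >= threshold]

net = SemanticNet()
net.link("cat", "animal", 0.9)
net.link("animal", "alive", 0.8)
net.stimulate("cat")
for _ in range(2):
    net.step()
print(net.expressed())   # related concepts whose state crossed the threshold
```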
Category: Artificial Intelligence

[449] viXra:1803.0751 [pdf] submitted on 2018-03-31 04:17:45

Galenism: A Methodology for the Key Unification of Von Neumann Machines and Hierarchical Databases

Authors: Pallabi Chakraborty, Bhargav Bhushanam
Comments: 7 Pages.

The implications of psychoacoustic methodologies have been far-reaching and pervasive. In this work, we disprove the simulation of active networks. We examine how fiber-optic cables can be applied to the evaluation of DNS.
Category: Artificial Intelligence

[448] viXra:1803.0728 [pdf] submitted on 2018-03-30 06:52:39

Artificial Intelligence in Chemical Synthesis

Authors: George Rajna
Comments: 29 Pages.

A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses—so-called retrosyntheses—with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extracts from them the essential information needed to understand the underlying physics. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[447] viXra:1803.0699 [pdf] submitted on 2018-03-29 01:48:31

Universal Forecasting Scheme {Version 4}

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has presented a novel method of forecasting.
Category: Artificial Intelligence

[446] viXra:1803.0696 [pdf] submitted on 2018-03-29 06:19:03

Teaching Machine in Physical Systems

Authors: George Rajna
Comments: 27 Pages.

Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extracts from them the essential information needed to understand the underlying physics. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12]
Category: Artificial Intelligence

[445] viXra:1803.0695 [pdf] submitted on 2018-03-28 04:26:14

Brain's Potential for Quantum Computation

Authors: George Rajna
Comments: 32 Pages.

The possibility of cognitive nuclear-spin processing came to Fisher in part through studies performed in the 1980s that reported a remarkable lithium isotope dependence on the behavior of mother rats. [20] And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15]
Category: Artificial Intelligence

[444] viXra:1803.0675 [pdf] submitted on 2018-03-26 06:58:13

A Survey on Reasoning on Building Information Models Based on IFC

Authors: Hassan Sleiman
Comments: 17 Pages.

Building Information Models (BIM) are computer models that act as the main source of building information and integrate several aspects of engineering and architectural design, including building utilisation. They aim to enhance the efficiency and effectiveness of projects during design, construction, and maintenance. Artificial intelligence, used to automate tasks that would otherwise require intelligence, has found its way into BIM through reasoners, among other techniques. A reasoner is a piece of software that makes implicit and hidden knowledge explicit by using logical inference techniques. Reasoners are applied to BIM to support better decisions and to assess construction projects. The importance of BIM in both the construction and information technology sectors has motivated many researchers to produce surveys on the current state of BIM, but unfortunately, none of these surveys has focused on reasoning over BIM. In this article we survey the research proposals and toolkits that rely on reasoning systems over BIM, and we classify them into a two-level schema based on what they are intended for. According to our survey, reasoning is mainly used for solving design problems, and is especially applied to code consistency checking, with an emphasis on semantic web technologies. Furthermore, user-friendliness is still a gap in this field, and case-based reasoning, which was often applied in earlier efforts, is still rarely applied to reasoning over BIM. The survey shows that this research area is active and that research results are progressively being integrated into commercial toolkits.
Category: Artificial Intelligence

[443] viXra:1803.0652 [pdf] submitted on 2018-03-26 00:17:46

AI to Understand Human Brain

Authors: George Rajna
Comments: 30 Pages.

And as will be presented today at the 25th annual meeting of the Cognitive Neuroscience Society (CNS), cognitive neuroscientists increasingly are using those emerging artificial networks to enhance their understanding of one of the most elusive intelligence systems, the human brain. [19] U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[442] viXra:1803.0627 [pdf] submitted on 2018-03-24 08:48:04

Brain-Like Computers

Authors: George Rajna
Comments: 29 Pages.

U.S. Army Research Laboratory scientists have discovered a way to leverage emerging brain-like computer architectures for an age-old number-theoretic problem known as integer factorization. [18] have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10]
Category: Artificial Intelligence

[441] viXra:1803.0089 [pdf] submitted on 2018-03-07 02:50:41

Universal Forecasting Scheme : Two Methods {Version 1}

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel method of forecasting.
Category: Artificial Intelligence

[440] viXra:1803.0083 [pdf] submitted on 2018-03-06 11:49:00

Machine Learning Guide Science

Authors: George Rajna
Comments: 25 Pages.

Now researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have come up with a novel machine learning method that enables scientists to derive insights from systems of previously intractable complexity in record time. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that - surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch - the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." [8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. 
Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. [6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.
Category: Artificial Intelligence

[439] viXra:1803.0072 [pdf] submitted on 2018-03-06 03:34:57

Universal Forecasting Scheme {Version 1}

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed a novel method of forecasting.
Category: Artificial Intelligence

[438] viXra:1803.0070 [pdf] submitted on 2018-03-06 04:39:44

Universal Forecasting Scheme {Version 2}

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed a novel method of forecasting.
Category: Artificial Intelligence

[437] viXra:1803.0069 [pdf] submitted on 2018-03-06 04:45:06

Universal Forecasting Scheme {Version 3}

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed a novel method of forecasting.
Category: Artificial Intelligence

[436] viXra:1803.0061 [pdf] submitted on 2018-03-04 22:32:07

Cancer Detection Through Handwriting

Authors: Alaa Tarek, Shorouk Alalem, Maryam El-Fdaly, Nehal Fooda
Comments: 31 Pages.

Looking at the medical field in recent years and its shortcomings, Egypt's health level is declining year after year: according to a statistical study on health levels published in the British medical journal The Lancet in 2016, Egypt ranked 124th out of 188 countries. Cancer is a major burden of disease worldwide. Each year, 10,000,000 people are diagnosed with cancer around the world, and more than half of the patients eventually die from it. In many countries, cancer ranks as the second most common cause of death after cardiovascular diseases, and with significant improvements in the treatment and prevention of cardiovascular diseases, cancer has or will soon become the number one killer in many parts of the world. Nearly 90,000 people do not know they have cancer until they arrive at Accident and Emergency wards; by that time, only 36 percent will live longer than a year. So we needed to find controlled ways to diagnose patients earlier. No aspect of human life has escaped the impact of the information age, and perhaps in no area of life is information more critical than in health and medicine; computers have become available for all aspects of human endeavour. We have therefore designed a program that can detect whether a person has cancer from their handwriting. We chose efficiency, cost, and applicability as the design requirements to be tested. The program is tested by scanning the text, searching for specific features that are related to cancer, and displaying “1” or “0” according to the person's state. After testing the program many times, we reached a mean efficiency of 93.75%. So this program saves lives, time and money.
Category: Artificial Intelligence

[435] viXra:1803.0053 [pdf] submitted on 2018-03-04 05:55:27

Tunnel Similar Modeling Notation and Spherical Viewpoint

Authors: Alexey Podorov
Comments: 11 Pages.

The article proposes a spherical model of perception, groups and levels of complexity, and a notation for modeling abstractness and complexity.
Category: Artificial Intelligence

[434] viXra:1803.0023 [pdf] submitted on 2018-03-01 09:41:28

AI for Safer Cities

Authors: George Rajna
Comments: 50 Pages.

Computers may better predict taxi and ride sharing service demand, paving the way toward smarter, safer and more sustainable cities, according to an international team of researchers. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[433] viXra:1802.0364 [pdf] submitted on 2018-02-26 10:27:48

AI of Quantum Systems

Authors: George Rajna
Comments: 49 Pages.

For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18]
Category: Artificial Intelligence

[432] viXra:1802.0330 [pdf] submitted on 2018-02-23 10:46:53

AlphaZero Just Playing

Authors: George Rajna
Comments: 47 Pages.

AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[431] viXra:1802.0087 [pdf] submitted on 2018-02-08 09:33:09

A Forecasting Scheme

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research investigation, the author has detailed a novel method of finding the next term of a given sequence.
Category: Artificial Intelligence

[430] viXra:1802.0038 [pdf] submitted on 2018-02-05 05:01:24

Quantum Algorithm Help AI

Authors: George Rajna
Comments: 55 Pages.

An international team has shown that quantum computers can do one such analysis faster than classical computers for a wider array of data types than was previously expected. [35] A team of researchers at Oak Ridge National Laboratory has demonstrated that it is possible to use cloud-based quantum computers to conduct quantum simulations and calculations. [34] Physicists have designed a new method for transmitting big quantum data across long distances that requires far fewer resources than previous methods, bringing the implementation of long-distance big quantum data transmission closer to reality. [33] A joint China-Austria team has performed quantum key distribution between the quantum-science satellite Micius and multiple ground stations located in Xinglong (near Beijing), Nanshan (near Urumqi), and Graz (near Vienna). [32] In the race to build a computer that mimics the massive computational power of the human brain, researchers are increasingly turning to memristors, which can vary their electrical resistance based on the memory of past activity. [31] Engineers worldwide have been developing alternative ways to provide greater memory storage capacity on even smaller computer chips. Previous research into two-dimensional atomic sheets for memory storage has failed to uncover their potential—until now. [30]
Category: Artificial Intelligence

[429] viXra:1802.0031 [pdf] submitted on 2018-02-03 05:25:39

A Feasible Path to a Machine that Can Pass the Turing Test.

Authors: Tofara Moyo
Comments: 5 Pages.

In this paper we outline an NLP system based largely on graph theory which, together with techniques from linear algebra, allows us to model the rules of logic. Inference from given data in natural-language format is then done by creating a mapping between the premises and the conclusion. During training, a particular vector is determined for this job: the vector responsible for taking us from the space the premise occupies to the space the conclusion occupies, which represents the particular logical rule used.
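One simple reading of that idea, shown below as an illustrative sketch rather than the author's system: if premises and conclusions live in a common embedding space, a single "rule vector" can be estimated as the mean offset between conclusion and premise embeddings and then applied to new premises. The random toy embeddings and the mean-offset estimator are assumptions made for the example.

```python
# Sketch: learn one "rule vector" that carries premise embeddings to conclusion embeddings.
import numpy as np

rng = np.random.default_rng(1)
dim = 16
rule_offset_true = rng.standard_normal(dim)             # hidden "rule" to recover

# Training pairs: premise embedding -> conclusion embedding under one logical rule.
premises = rng.standard_normal((50, dim))
conclusions = premises + rule_offset_true + 0.01 * rng.standard_normal((50, dim))

# Training: the rule vector is the mean offset between conclusion and premise.
rule_offset = (conclusions - premises).mean(axis=0)

# Inference: map a new premise into conclusion space with the learned vector.
new_premise = rng.standard_normal(dim)
predicted_conclusion = new_premise + rule_offset
print("offset recovery error:", np.linalg.norm(rule_offset - rule_offset_true))
```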
Category: Artificial Intelligence

[428] viXra:1801.0413 [pdf] submitted on 2018-01-30 20:38:58

On The Subject of Thinking Machines

Authors: John Olafenwa, Moses Olafenwa
Comments: 9 Pages. An investigation of the concepts of thought, imagination and consciousness in learning machines

68 years ago, Alan Turing posed the question "Can machines think?" in his seminal paper [1], "Computing Machinery and Intelligence", and formulated the "Imitation Game", also known as the Turing test, as a way to answer this question without referring to a rather ambiguous dictionary definition of the word "think". We have come a long way towards building intelligent machines; in fact, the rate of progress in Deep Learning and Reinforcement Learning, two cornerstones of artificial intelligence, is unprecedented. Alan Turing would have been proud of our achievements in computer vision, speech, natural language processing and autonomous systems. However, many challenges remain, and we are still some distance from building machines that can pass the Turing test. In this paper, we discuss some of the biggest questions concerning intelligent machines and attempt to answer them, as far as modern AI can explain.
Category: Artificial Intelligence

[427] viXra:1801.0412 [pdf] submitted on 2018-01-30 21:56:30

A Predictor-Corrector Method for the Training of Deep Neural Networks

Authors: Yatin Saraiya
Comments: 6 pages, 2 figures, 2 tables

The training of deep neural nets is expensive. We present a predictor-corrector method for training deep neural nets: it alternates a predictor pass with a corrector pass using stochastic gradient descent (SGD) with backpropagation, with no loss in validation accuracy. No special modifications to SGD with backpropagation are required by this methodology. Our experiments showed a time improvement of 9% on the CIFAR-10 dataset.
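The abstract does not spell out how the two passes alternate, so the sketch below is only one plausible reading, not the paper's algorithm: a Heun-style predictor-corrector step in which a plain SGD step acts as the predictor and an averaged gradient acts as the corrector, demonstrated on a toy least-squares loss standing in for a network's loss and backpropagation.

```python
# Generic predictor-corrector gradient step on a toy loss (illustrative only).
import numpy as np

def loss_and_grad(w, X, y):
    """Least-squares toy loss standing in for a network's loss plus backprop."""
    err = X @ w - y
    return 0.5 * np.mean(err ** 2), X.T @ err / len(y)

def predictor_corrector_step(w, X, y, lr=0.1):
    _, g = loss_and_grad(w, X, y)               # gradient at the current point
    w_pred = w - lr * g                         # predictor pass: plain SGD step
    _, g_pred = loss_and_grad(w_pred, X, y)     # gradient re-evaluated at the prediction
    return w - lr * 0.5 * (g + g_pred)          # corrector pass: averaged update

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 5))
w_true = rng.standard_normal(5)
y = X @ w_true
w = np.zeros(5)
for _ in range(100):
    w = predictor_corrector_step(w, X, y)
print("final loss:", loss_and_grad(w, X, y)[0])
```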
Category: Artificial Intelligence

[426] viXra:1801.0411 [pdf] submitted on 2018-01-30 22:00:29

Using Accumulation to Optimize Deep Residual Neural Nets

Authors: Yatin Saraiya
Comments: 7 pages, 6 figures, 1 table

Residual Neural Networks [1] won first place in all five main tracks of the ImageNet and COCO 2015 competitions. This kind of network involves the creation of pluggable modules such that the output contains a residual from the input. The residual in that paper is the identity function. We propose to include residuals from all lower layers, suitably normalized, to create the residual. This way, all previous layers contribute equally to the output of a layer. We show that our approach is an improvement on [1] for the CIFAR-10 dataset.
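A minimal sketch of the accumulation idea as stated in the abstract (the block structure, sizes, and the plain averaging are illustrative assumptions, not the paper's exact network): each block receives, and the stack finally returns, an equal-weight average of the input and all earlier block outputs, so every previous layer contributes equally.

```python
# Illustrative sketch of "accumulated" residuals: each block sees the normalized
# sum (here, the mean) of the input and all earlier block outputs.
import torch
import torch.nn as nn

class AccumulatedResidualStack(nn.Module):
    def __init__(self, dim=64, depth=6):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(depth)
        ])

    def forward(self, x):
        contributions = [x]                       # the input counts as layer 0
        for block in self.blocks:
            # equal-weight average of everything produced so far
            accumulated = torch.stack(contributions).mean(dim=0)
            contributions.append(block(accumulated))
        return torch.stack(contributions).mean(dim=0)

net = AccumulatedResidualStack()
out = net(torch.randn(8, 64))
print(out.shape)   # torch.Size([8, 64])
```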
Category: Artificial Intelligence

[425] viXra:1801.0407 [pdf] submitted on 2018-01-29 13:36:59

Artificial Intelligence Weapon

Authors: George Rajna
Comments: 48 Pages.

A country that thinks its adversaries have or will get AI weapons will want to get them too. Wide use of AI-powered cyberattacks may still be some time away. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18]
Category: Artificial Intelligence

[424] viXra:1801.0367 [pdf] submitted on 2018-01-27 03:06:22

Superconducting Synapse

Authors: George Rajna
Comments: 26 Pages.

Researchers at the National Institute of Standards and Technology (NIST) have built a superconducting switch that "learns" like a biological system and could connect processors and store memories in future computers operating like the human brain. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that - surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch - the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." [8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. 
Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. [6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.
Category: Artificial Intelligence

[423] viXra:1801.0366 [pdf] submitted on 2018-01-26 05:02:58

Mathematical Model of Inventions

Authors: George Rajna
Comments: 27 Pages.

Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]
Category: Artificial Intelligence

[422] viXra:1801.0363 [pdf] submitted on 2018-01-26 07:55:13

Deep Learning for Gravitational Wave

Authors: George Rajna
Comments: 27 Pages.

Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10]
Category: Artificial Intelligence

[421] viXra:1801.0361 [pdf] submitted on 2018-01-26 09:19:57

Hyperspectral Artificial Intelligence

Authors: George Rajna
Comments: 29 Pages.

VTT Technical Research Centre of Finland has developed a highly cost-efficient hyperspectral imaging technology, which enables the introduction of new artificial intelligence applications into consumer devices. [19] Scientists at the National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign, have pioneered the use of GPU-accelerated deep learning for rapid detection and characterization of gravitational waves. [18] Researchers from Queen Mary University of London have developed a mathematical model for the emergence of innovations. [17] Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that - surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch - the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." 
[8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. [6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.
Category: Artificial Intelligence

[420] viXra:1801.0271 [pdf] submitted on 2018-01-21 22:33:21

Refutation: Neutrosophic Logic by Florentin Smarandache as Generalized Intuitionistic Fuzzy Logic © 2018 by Colin James III All Rights Reserved.

Authors: Colin James III
Comments: 2 Pages. © 2018 by Colin James III All rights reserved.

We map the neutrosophic logical values of truth, falsity, and indeterminacy onto the intervals "]0,1[" and "]-0,1+[" in equations for the Meth8/VL4 apparatus. We test the summation of those values. The result is not tautologous, meaning that neutrosophic logic is refuted and hence that its use as a generalization of intuitionistic fuzzy logic is likewise unworkable.
Category: Artificial Intelligence

[419] viXra:1801.0243 [pdf] submitted on 2018-01-19 09:05:53

AI Quantum Experiments

Authors: George Rajna
Comments: 38 Pages.

On the way to an intelligent laboratory, physicists from Innsbruck and Vienna present an artificial agent that autonomously designs quantum experiments. [24] An answer to a quantum-physical question provided by the algorithm Melvin has uncovered a hidden link between quantum experiments and the mathematical field of Graph Theory. [23] Engineers develop key mathematical formula for driving quantum experiments. [22] Physicists are developing quantum simulators, to help solve problems that are beyond the reach of conventional computers. [21] Engineers at Australia's University of New South Wales have invented a radical new architecture for quantum computing, based on novel 'flip-flop qubits', that promises to make the large-scale manufacture of quantum chips dramatically cheaper - and easier - than thought possible. [20] A team of researchers from the U.S. and Italy has built a quantum memory device that is approximately 1000 times smaller than similar devices— small enough to install on a chip. [19] The cutting edge of data storage research is working at the level of individual atoms and molecules, representing the ultimate limit of technological miniaturisation. [18] This is an important clue for our theoretical understanding of optically controlled magnetic data storage media. [17] A crystalline material that changes shape in response to light could form the heart of novel light-activated devices. [16] Now a team of Penn State electrical engineers have a way to simultaneously control diverse optical properties of dielectric waveguides by using a two-layer coating, each layer with a near zero thickness and weight. [15] Just like in normal road traffic, crossings are indispensable in optical signal processing. In order to avoid collisions, a clear traffic rule is required. A new method has now been developed at TU Wien to provide such a rule for light signals. [14] Researchers have developed a way to use commercial inkjet printers and readily available ink to print hidden images that are only visible when illuminated with appropriately polarized waves in the terahertz region of the electromagnetic spectrum. [13] That is, until now, thanks to the new solution devised at TU Wien: for the first time ever, permanent magnets can be produced using a 3D printer. This allows magnets to be produced in complex forms and precisely customised magnetic fields, required, for example, in magnetic sensors. [12] For physicists, loss of magnetisation in permanent magnets can be a real concern. In response, the Japanese company Sumitomo created the strongest available magnet— one offering ten times more magnetic energy than previous versions—in 1983. [11] New method of superstrong magnetic fields’ generation proposed by Russian scientists in collaboration with foreign colleagues. [10] By showing that a phenomenon dubbed the "inverse spin Hall effect" works in several organic semiconductors - including carbon-60 buckyballs - University of Utah physicists changed magnetic "spin current" into electric current. The efficiency of this new power conversion method isn't yet known, but it might find use in future electronic devices including batteries, solar cells and computers. [9] Researchers from the Norwegian University of Science and Technology (NTNU) and the University of Cambridge in the UK have demonstrated that it is possible to directly generate an electric current in a magnetic material by rotating its magnetization. 
[8] This paper explains the magnetic effect of the electric current from the observed effects of the accelerating electrons, causing naturally the experienced changes of the electric field potential along the electric wire. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the wave particle duality and the electron’s spin also, building the bridge between the Classical and Quantum Theories. The changing acceleration of the electrons explains the created negative electric field of the magnetic induction, the changing relativistic mass and the Gravitational Force, giving a Unified Theory of the physical forces. Taking into account the Planck Distribution Law of the electromagnetic oscillators also, we can explain the electron/proton mass rate and the Weak and Strong Interactions.
Category: Artificial Intelligence

[418] viXra:1801.0192 [pdf] submitted on 2018-01-16 07:03:26

FastNet: An Efficient Architecture for Smart Devices

Authors: John Olafenwa, Moses Olafenwa
Comments: 9 Pages.

Inception and the ResNet family of Convolutional Neural Network architectures have broken records in the past few years, but recent state-of-the-art models have also incurred very high computational cost in terms of training, inference, and model size, making the deployment of these models on edge devices impractical. In light of this, we present a novel architecture that is designed for high computational efficiency on both GPUs and CPUs, and is highly suited for deployment on mobile applications, smart cameras, IoT devices and controllers, as well as low-cost drones. Our architecture achieves competitive accuracy on standard datasets, even outperforming the original ResNet. We present below the motivation for this research, the architecture of the network, single test accuracies on CIFAR-10 and CIFAR-100, a detailed comparison with other well-known architectures, and a link to an implementation in Keras.
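
As a rough illustration of the kind of compact, low-cost convolutional stack described above, here is a minimal Keras sketch. It is not the authors' FastNet definition (that is linked from the paper); the layer counts, filter sizes, and the build_small_cnn name are illustrative assumptions.

# Minimal sketch of a small, cheap CNN in Keras (illustrative only; not FastNet).
from tensorflow.keras import layers, models

def build_small_cnn(input_shape=(32, 32, 3), num_classes=10):
    model = models.Sequential([
        layers.Conv2D(32, 3, padding="same", activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),                    # halve the spatial resolution
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.GlobalAveragePooling2D(),          # cheaper than Flatten + a large Dense layer
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_small_cnn()                         # CIFAR-10-sized inputs by default
model.summary()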
Category: Artificial Intelligence

[417] viXra:1801.0102 [pdf] submitted on 2018-01-09 11:34:24

Bayesian Transfer Learning for Deep Networks

Authors: J. Wohlert, A. M. Munk, S. Sengupta, F. Laumann
Comments: 6 Pages.

We propose a method for transfer learning in deep networks through Bayesian inference, where an approximate posterior distribution q(w|θ) over the model parameters w is learned through variational approximation. Using Bayes by Backprop, we optimize the parameters θ of the approximate distribution. For transfer learning we consider two tasks: A and B. First, an approximate posterior q_A(w|θ) is learned from task A and is afterwards transferred as a prior, p(w) → q_A(w|θ), when learning the approximate posterior q_B(w|θ) for task B. Initially, we consider a multivariate normal distribution q(w|θ) = N(µ, Σ) with diagonal covariance matrix Σ. We then consider the prospects of introducing more expressive approximate distributions, specifically those known as normalizing flows. Investigating these concepts on the MNIST data set, we conclude that normalizing flows do not improve Bayesian inference in the context presented here. Further, we show that transfer learning is not feasible using our proposed architecture and our definition of tasks A and B, although no general conclusion rejecting a Bayesian approach to transfer learning can be drawn.
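
To make the variational setup concrete, the sketch below implements a mean-field Gaussian weight posterior q(w|θ) = N(µ, σ²) trained with the reparameterization trick, in the spirit of Bayes by Backprop. The class name, the softplus parameterization, and the way a task-A posterior would be copied into the prior are assumptions, not the authors' exact implementation.

# Sketch of a mean-field Gaussian weight posterior trained with the
# reparameterization trick (Bayes by Backprop style). Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    def __init__(self, n_in, n_out, prior_mu=0.0, prior_sigma=1.0):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.rho = nn.Parameter(torch.full((n_out, n_in), -3.0))   # sigma = softplus(rho)
        self.register_buffer("prior_mu", torch.tensor(prior_mu))
        self.register_buffer("prior_sigma", torch.tensor(prior_sigma))

    def forward(self, x):
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)   # reparameterized weight sample
        return F.linear(x, w)

    def kl(self):
        # KL( N(mu, sigma^2) || N(prior_mu, prior_sigma^2) ), summed over all weights
        sigma = F.softplus(self.rho)
        return (torch.log(self.prior_sigma / sigma)
                + (sigma ** 2 + (self.mu - self.prior_mu) ** 2)
                  / (2 * self.prior_sigma ** 2) - 0.5).sum()

# Transfer idea from the abstract: after training on task A, the learned
# (mu, sigma) could be copied into prior_mu / prior_sigma of a fresh layer
# before training the approximate posterior for task B.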
Category: Artificial Intelligence

[416] viXra:1801.0050 [pdf] submitted on 2018-01-06 00:20:25

Fruit Recognition from Images Using Deep Learning

Authors: Horea Muresan, Mihai Oltean
Comments: 13 Pages. Data can be downloaded from https://github.com/Horea94/Fruit-Images-Dataset

In this paper we introduce a new, high-quality dataset of images containing fruits. We also present the results of numerical experiments in training a neural network to detect fruits. We discuss why we chose fruits for this project by proposing a few applications that could use this kind of neural network.
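
A minimal training sketch for an image-folder dataset of this kind is shown below. The directory path, image size, and network are assumptions; the actual dataset layout is documented in the linked GitHub repository.

# Minimal sketch: training a small classifier on a directory of fruit images.
# The path, image size, and architecture are assumptions, not the paper's setup.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "Fruit-Images-Dataset/Training",      # assumed path, one sub-folder per fruit class
    image_size=(100, 100), batch_size=32)

num_classes = len(train_ds.class_names)
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)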
Category: Artificial Intelligence

[415] viXra:1801.0041 [pdf] submitted on 2018-01-05 06:09:53

Taking Advantage of BiLSTM Encoding to Handle Punctuation in Dependency Parsing: A Brief Idea

Authors: Matteo Grella
Comments: 3 Pages.

In the context of the bidirectional-LSTMs neural parser (Kiperwasser and Goldberg, 2016), an idea is proposed to initialize the parsing state without punctuation-tokens but using them for the BiLSTM sentence encoding. The relevant information brought by the punctuation-tokens should be implicitly learned using the errors of the recurrent contributions only.
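
A tiny PyTorch sketch of the idea: every token, punctuation included, passes through the BiLSTM encoder, but only non-punctuation positions enter the initial parsing buffer. The dimensions, vocabulary size, and toy sentence are assumptions.

# Sketch: punctuation contributes to the BiLSTM context but is excluded
# from the initial parsing state. Illustrative dimensions only.
import torch
import torch.nn as nn

emb = nn.Embedding(10000, 100)
bilstm = nn.LSTM(100, 128, bidirectional=True, batch_first=True)

token_ids = torch.tensor([[5, 42, 7, 3, 99]])          # e.g. "John , slept well ."
is_punct  = [False, True, False, False, True]

encodings, _ = bilstm(emb(token_ids))                  # punctuation still shapes the encoding
buffer = [i for i, p in enumerate(is_punct) if not p]  # parsing state without punctuation
print(encodings.shape, buffer)                         # torch.Size([1, 5, 256]) [0, 2, 3]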
Category: Artificial Intelligence

[414] viXra:1712.0659 [pdf] submitted on 2017-12-29 06:21:14

TDBF: Two Dimensional Belief Function

Authors: Yangxue Li; Yong Deng
Comments: 15 Pages.

How to efficiently handle uncertain information is still an open issue. In this paper, a new method to deal with uncertain information, named the two-dimensional belief function (TDBF), is presented. A TDBF has two components, T = (mA, mB). The first component, mA, is a classical belief function. The second component, mB, is also a classical belief function, but it measures the reliability of the first component. The definition of the TDBF and its discounting algorithm are proposed. Compared with the classical discounting model, the proposed TDBF is more flexible and reasonable. Numerical examples are used to show the efficiency of the proposed method.
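
The paper defines the TDBF discounting algorithm itself; as a sketch of the ingredients only, the code below applies classical Shafer discounting to the first component mA using a reliability factor that, in the spirit of the abstract, would be derived from the second component mB. How that factor is extracted from mB is an assumption here.

# Sketch: classical Shafer discounting of a BPA by a reliability factor.
# This is a stand-in for the paper's TDBF scheme, not its exact rule.
def discount(bpa, alpha, frame):
    """Shafer discounting: m'(A) = alpha*m(A) for A != frame,
    m'(frame) = alpha*m(frame) + (1 - alpha)."""
    out = {A: alpha * m for A, m in bpa.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

mA = {frozenset("a"): 0.6, frozenset("b"): 0.3, frozenset("ab"): 0.1}
frame = frozenset("ab")
reliability = 0.8          # assumed to be read off the second component mB
print(discount(mA, reliability, frame))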
Category: Artificial Intelligence

[413] viXra:1712.0647 [pdf] submitted on 2017-12-28 23:25:34

A Total Uncertainty Measure for D Numbers Based on Belief Intervals

Authors: Xinyang Deng, Wen Jiang
Comments: 14 Pages.

As a generalization of Dempster-Shafer theory, the theory of D numbers is a new theoretical framework for uncertainty reasoning. Measuring the uncertainty of knowledge or information represented by D numbers is an unsolved issue in that theory. In this paper, inspired by distance-based uncertainty measures for Dempster-Shafer theory, a total uncertainty measure for a D number is proposed based on its belief intervals. The proposed measure can simultaneously capture the discord, non-specificity, and non-exclusiveness involved in D numbers. Some basic properties of this total uncertainty measure, including its range, monotonicity, and generalized set consistency, are also presented.
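
The proposed measure is built from belief intervals [Bel, Pl]; a minimal sketch of computing those intervals for the singletons of a BPA is given below. The total uncertainty measure itself is not reproduced here, and the example BPA is an assumption.

# Sketch: belief and plausibility intervals [Bel({x}), Pl({x})] for a BPA,
# the building blocks of the interval-based uncertainty measure.
def belief_intervals(bpa, frame):
    intervals = {}
    for x in frame:
        bel = sum(m for A, m in bpa.items() if A == frozenset({x}))   # mass committed exactly to {x}
        pl = sum(m for A, m in bpa.items() if x in A)                 # mass not contradicting {x}
        intervals[x] = (bel, pl)
    return intervals

bpa = {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.3, frozenset({"b", "c"}): 0.2}
print(belief_intervals(bpa, {"a", "b", "c"}))
# {'a': (0.5, 0.8), 'b': (0.0, 0.5), 'c': (0.0, 0.2)}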
Category: Artificial Intelligence

[412] viXra:1712.0495 [pdf] submitted on 2017-12-18 08:50:22

Just Keep it in Mind: Information is a Complex Notion with Physical and Semantic Information Staying for Real and Imaginary Parts of the Expression

Authors: Emanuel Diamant
Comments: 3 Pages. Presented at the IS4SI 2017 Summit, Information Theory Section, Gothenburg, Sweden, 12–16 June 2017

Shannon’s information was devised to improve the performance of a data communication channel. Since then the situation has changed drastically, and today a more generally applicable and suitable definition of information is urgently required. To meet this demand, I have proposed a definition of my own. According to it, information is a complex notion, with physical and semantic information standing for the real and imaginary parts of the expression. The scientific community has received this idea very unkindly. But without a better solution to the problems of 1) intron-exon partition in genes, 2) information flow in neuronal networks, 3) memory creation and potentiation in brains, 4) the materialization of thoughts and thinking in human heads, and 5) the undeniable shift from a computational (that is, data-processing-based) approach to a cognitive (that is, information-processing-based) approach in scientific research, they will one day be forced to admit that there is something worthwhile in this new definition.
Category: Artificial Intelligence

[411] viXra:1712.0494 [pdf] submitted on 2017-12-18 09:05:26

Shannon's Definition of Information is Obsolete and Inadequate. it is Time to Embrace Kolmogorov’s Insights on the Matter

Authors: Emanuel Diamant
Comments: 3 Pages. Presented at the 2016 ICSEE International Conference, Eilat, Israel, 16 – 18 November 2016.

Information theory, as developed by Claude Shannon in 1948, was about the communication of messages as electronic signals via a transmission channel. Only the physical properties of the signal and the channel were taken into account, while the meaning of the message was ignored entirely. Such an approach to information met the requirements of a data communication channel very well, but recent advances in almost all sciences put an urgent demand for the inclusion of meaningful information in the body of a communicated message. To meet this demand, I have proposed a new definition of information, in which information is seen as a complex notion composed of two inseparable parts: physical information and semantic information. Classical notions of information, such as those of Shannon, Fisher, and Rényi, Kolmogorov complexity, and Chaitin's algorithmic information, are all variants of physical information. Semantic information is a new concept, and it deserves to be properly studied, treated, and used.
Category: Artificial Intelligence

[410] viXra:1712.0469 [pdf] submitted on 2017-12-15 23:33:47

Predicting Yelp Star Reviews Based on Network Structure with Deep Learning

Authors: Luis Perez
Comments: 12 pages, 17 figures

In this paper, we tackle the real-world problem of predicting Yelp star-review ratings based on business features (such as images and descriptions), user features (average previous ratings), and, of particular interest, network properties (which businesses a user has rated before). We compare multiple models on different sets of features, from simple linear regression on network features only to deep learning models on network and item features. In recent years, breakthroughs in deep learning have led to increased accuracy in common supervised learning tasks, such as image classification, captioning, and language understanding; however, the idea of combining deep learning with network features and structure appears to be novel. While the problem of predicting future interactions in a network has been studied at length, these approaches have often ignored either node-specific data or global structure. We demonstrate that a mixed approach combining node-level features and network information can effectively predict Yelp review star ratings. We evaluate on the Yelp dataset by splitting our data along the time dimension (as would naturally occur in the real world) and comparing our model against others that do not take advantage of the network structure and/or deep learning.
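
As an illustration of the simplest baseline mentioned above, the sketch below fits a linear regression to a few hand-made rows that combine item, user, and simple network features. The feature choices and toy numbers are assumptions, not the paper's data or model.

# Sketch of the simplest compared baseline: linear regression on a mix of
# item, user, and network features. Toy data, illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

# rows: [business_avg_rating, user_avg_rating, n_businesses_user_rated_before]
X = np.array([[4.2, 3.8, 12],
              [3.1, 4.5,  3],
              [4.8, 4.0, 40]])
y = np.array([4.0, 3.5, 5.0])            # observed star ratings

model = LinearRegression().fit(X, y)
print(model.predict([[4.0, 4.2, 8]]))    # predicted stars for a new (business, user) pair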
Category: Artificial Intelligence

[409] viXra:1712.0468 [pdf] submitted on 2017-12-15 23:41:37

The Effectiveness of Data Augmentation in Image Classification using Deep Learning

Authors: Luis Perez, Jason Wang
Comments: 8 Pages.

In this paper, we explore and compare multiple solutions to the problem of data augmentation in image classification. Previous work has demonstrated the effectiveness of data augmentation through simple techniques, such as cropping, rotating, and flipping input images. We artificially constrain our access to data to a small subset of the ImageNet dataset and compare each data augmentation technique in turn. One of the more successful strategies is the set of traditional transformations mentioned above. We also experiment with GANs to generate images of different styles. Finally, we propose a method, which we call neural augmentation, that allows a neural net to learn the augmentations that best improve the classifier. We discuss the successes and shortcomings of this method on various datasets.
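
A minimal sketch of the traditional-transformations baseline using Keras on-the-fly augmentation is shown below; the parameter values and the commented directory path are assumptions.

# Sketch: the "traditional transformations" baseline (rotation, shift, flip)
# applied on the fly. Parameter values are illustrative assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=15,        # random rotation in degrees
    width_shift_range=0.1,    # random horizontal shift (crop-like)
    height_shift_range=0.1,
    horizontal_flip=True)

# flow_from_directory would yield endlessly augmented batches from an image folder:
# train_gen = augmenter.flow_from_directory("imagenet_subset/", target_size=(224, 224))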
Category: Artificial Intelligence

[408] viXra:1712.0467 [pdf] submitted on 2017-12-15 23:43:11

Gaussian Processes for Crime Prediction

Authors: Luis Perez, Alex Wang
Comments: 8 Pages.

The ability to predict crime is incredibly useful for police departments, city planners, and many other parties, but current approaches have not made use of recent developments in machine learning. In this paper, we present a novel approach to this task: Gaussian process regression. Gaussian processes (GPs) are a rich family of distributions that can learn functions. We train GPs on historical crime data to learn the underlying probability distribution of crime incidence and to make predictions about future crime distributions.
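
A minimal scikit-learn sketch of GP regression on a toy crime-count series follows; the time-index features, kernel choice, and synthetic data are assumptions, not the authors' setup.

# Sketch: Gaussian process regression on a toy monthly crime-count series.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.arange(24).reshape(-1, 1)                           # months (assumed feature)
counts = 50 + 10 * np.sin(t / 3).ravel() + np.random.randn(24)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0) + WhiteKernel())
gp.fit(t, counts)

t_future = np.arange(24, 30).reshape(-1, 1)
mean, std = gp.predict(t_future, return_std=True)          # predictive distribution
print(mean, std)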
Category: Artificial Intelligence

[407] viXra:1712.0465 [pdf] submitted on 2017-12-16 00:36:46

Reinforcement Learning with Swingy Monkey

Authors: Luis Perez, Aidi Zhang, Kevin Eskici
Comments: 7 Pages.

This paper explores model-free, model-based, and mixture models for reinforcement learning in the setting of a SwingyMonkey game \footnote{The code is hosted on a public repository \href{https://github.com/kandluis/machine-learning}{here} under the prac4 directory.}. SwingyMonkey is a simple game with well-defined goals and mechanisms and a relatively small state space. Using Bayesian optimization \footnote{The optimization took place using the open-source software made available by HIPS \href{https://github.com/HIPS/Spearmint}{here}.} on a simple Q-learning algorithm, we were able to obtain high scores within just a few training epochs. However, the system failed to scale well with continued training, and optimization over hundreds of iterations proved too time-consuming to be effective. After manually exploring multiple approaches, the best results were achieved using a mixture of $\epsilon$-greedy Q-learning with a stable learning rate $\alpha$ and a discount factor $\delta \approx 1$. Despite the theoretical limitations of this approach, these settings resulted in maximum scores of over 5000 points, with an average score of $\bar{x} \approx 684$ (averaged over the final 100 testing epochs; median $\bar{m} = 357.5$). The results show a continuing linear log-relation that caps only after 20,000 training epochs.
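
For reference, a sketch of the $\epsilon$-greedy tabular Q-learning update that serves as the base learner is given below; the state encoding, action set, and hyperparameter values are assumptions (the abstract only reports a discount factor close to 1).

# Sketch of epsilon-greedy tabular Q-learning. Hyperparameters are illustrative.
import random
from collections import defaultdict

Q = defaultdict(float)                   # Q[(state, action)]
alpha, gamma, epsilon = 0.1, 1.0, 0.05   # discount close to 1, as in the abstract

def choose_action(state, actions=(0, 1)):            # 0 = do nothing, 1 = jump
    if random.random() < epsilon:
        return random.choice(actions)                # explore
    return max(actions, key=lambda a: Q[(state, a)]) # exploit

def update(state, action, reward, next_state, actions=(0, 1)):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])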
Category: Artificial Intelligence

[406] viXra:1712.0464 [pdf] submitted on 2017-12-16 00:38:28

Multi-Document Text Summarization

Authors: Luis Perez, Kevin Eskici
Comments: 24 Pages.

We tackle the problem of multi-document extractive summarization by implementing two well-known algorithms for single-text summarization -- {\sc TextRank} and {\sc Grasshopper}. We use ROUGE-1 and ROUGE-2 precision scores with the DUC 2004 Task 2 data set to measure the performance of these two algorithms, with optimized parameters as described in their respective papers ($\alpha =0.25$ and $\lambda=0.5$ for Grasshopper and $d=0.85$ for TextRank). We compare these modified algorithms to common baselines as well as non-naive, novel baselines and we present the resulting ROUGE-1 and ROUGE-2 recall scores. Subsequently, we implement two novel algorithms as extensions of {\sc GrassHopper} and {\sc TextRank}, each termed {\sc ModifiedGrassHopper} and {\sc ModifiedTextRank}. The modified algorithms intuitively attempt to ``maximize'' diversity across the summary. We present the resulting ROUGE scores. We expect that with further optimizations, this unsupervised approach to extractive text summarization will prove useful in practice.
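
A minimal sketch of the TextRank step, with the damping factor d = 0.85 quoted above, is shown below: build a sentence-similarity graph and rank sentences with PageRank. The TF-IDF cosine similarity and the toy sentences are assumptions; GrassHopper and the authors' modified variants are not reproduced here.

# Sketch: TextRank-style sentence ranking via PageRank on a similarity graph.
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = ["The storm hit the coast overnight.",
             "Thousands lost power after the storm.",
             "Officials opened shelters for residents."]

sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
np.fill_diagonal(sim, 0.0)                          # no self-loops
graph = nx.from_numpy_array(sim)
scores = nx.pagerank(graph, alpha=0.85)             # damping factor d = 0.85
summary = sorted(scores, key=scores.get, reverse=True)[:1]
print([sentences[i] for i in summary])              # top-ranked sentence as the summary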
Category: Artificial Intelligence

[405] viXra:1712.0446 [pdf] submitted on 2017-12-13 08:17:06

A New Divergence Measure for Basic Probability Assignment and Its Applications in Extremely Uncertain Environments

Authors: Liguo Fei, Yong Hu, Yong Deng, Sankaran Mahadevan
Comments: 9 Pages.

Information fusion in extremely uncertain environments is an important issue in pattern classification and decision making. Dempster-Shafer evidence theory (D-S theory) is increasingly applied to information fusion because of its ability to handle uncertain information. However, results contrary to common sense are often obtained when different pieces of evidence are combined with Dempster's combination rule, and how to measure the difference between pieces of evidence is still an open issue. In this paper, a new divergence based on Kullback-Leibler divergence is proposed to measure the difference between basic probability assignments (BPAs). Numerical examples illustrate the computational process of the proposed divergence. A similarity measure for BPAs is then defined based on the proposed divergence. The basic ideas of pattern recognition are introduced, and a new classification algorithm using the proposed divergence and similarity in extremely uncertain environments is presented, illustrated by a small example of robot sensing. The method is motivated by the pressing need to develop intelligent systems, such as sensor-based data fusion manipulators, that must work in complicated, extremely uncertain environments where sensory data are 1) fragmentary and 2) collected from multiple levels of resolution.
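
The paper defines its own KL-inspired divergence between BPAs; the sketch below is not that definition but a naive symmetrized KL-style comparison of matching focal-element masses, included only to illustrate the kind of quantity being computed.

# Sketch only: a symmetric KL-style comparison of the masses of two BPAs
# over the same focal elements. NOT the paper's exact divergence.
import math

def kl_like(m1, m2, eps=1e-12):
    keys = set(m1) | set(m2)
    return sum(m1.get(A, eps) * math.log(m1.get(A, eps) / m2.get(A, eps))
               for A in keys)

def sym_divergence(m1, m2):
    return 0.5 * (kl_like(m1, m2) + kl_like(m2, m1))

mA = {frozenset("a"): 0.7, frozenset("b"): 0.3}
mB = {frozenset("a"): 0.4, frozenset("b"): 0.6}
print(sym_divergence(mA, mB))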
Category: Artificial Intelligence

[404] viXra:1712.0444 [pdf] submitted on 2017-12-13 08:59:01

Environmental Impact Assessment Using D-VIKOR Approach

Authors: Liguo Fei, Yong Deng
Comments: 15 Pages.

Environmental impact assessment (EIA) is an open and important issue that depends on factors such as social, ecological, and economic conditions. Because of human judgment, a variety of uncertainties enter the EIA process, and many existing methods seem unable to represent and handle this uncertainty effectively. A new theory called D numbers, owing to its advantages in handling uncertain information, is widely used for uncertainty modeling and decision making. The VIKOR method has unique advantages in dealing with multi-criteria decision-making (MCDM) problems, especially when the criteria are non-commensurable or even conflicting, and it can obtain a compromise optimal solution. In order to solve EIA problems more effectively, this paper proposes a D-VIKOR approach, which extends the VIKOR method with D number theory. In the proposed approach, assessment information on environmental factors is expressed and modeled by D numbers, and a new combination rule for multiple D numbers is defined. Subjective and objective weights are both considered in the VIKOR process to obtain more reasonable rankings. A numerical example is conducted to analyze and demonstrate the practicality and effectiveness of the proposed D-VIKOR approach.
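
A sketch of the classical VIKOR step that D-VIKOR builds on is given below: compute the group utility S, the individual regret R, and the compromise index Q for each alternative. The decision matrix, weights, and v = 0.5 are illustrative assumptions; the D-number modelling of the assessments is described in the paper.

# Sketch of classical VIKOR ranking (benefit criteria assumed).
import numpy as np

X = np.array([[7.0, 8.0, 6.0],     # alternatives x criteria
              [6.0, 9.0, 7.0],
              [8.0, 6.0, 8.0]])
w = np.array([0.4, 0.35, 0.25])    # criteria weights (assumed)
v = 0.5                            # weight of the "majority of criteria" strategy

best, worst = X.max(axis=0), X.min(axis=0)
norm = (best - X) / (best - worst)          # normalized distance to the ideal value
S = (w * norm).sum(axis=1)                  # group utility
R = (w * norm).max(axis=1)                  # individual regret
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())
print(np.argsort(Q))                        # smaller Q = better compromise solution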
Category: Artificial Intelligence

[403] viXra:1712.0432 [pdf] submitted on 2017-12-13 22:28:48

DS-VIKOR: A New Methodology for Supplier Selection

Authors: Liguo Fei, Yong Deng, Yong Hu
Comments: 15 Pages.

How to select the optimal supplier is an open and important issue in supply chain management (SCM); it requires assessing and ranking potential suppliers and can be considered a multi-criteria decision-making (MCDM) problem. Experts' assessments play a very important role in the supplier selection process, while the subjective judgment of human beings introduces unpredictable uncertainty, and existing methods seem unable to represent and handle this uncertainty effectively. Dempster-Shafer evidence theory (D-S theory) is widely used for uncertainty modeling, decision making, and conflict management because of its ability to handle uncertain information. The VIKOR method is well suited to MCDM problems with non-commensurable and even conflicting criteria, and it can obtain a compromise optimal solution. In this paper, a DS-VIKOR method is proposed for the supplier selection problem, extending the VIKOR method with D-S theory. In this method, basic probability assignments (BPAs) denote the decision makers' assessments of suppliers, a Deng entropy weight-based method is defined and applied to determine the weights of the criteria, and the VIKOR method produces the final ranking. A real-life illustrative example is conducted to analyze and demonstrate the practicality and effectiveness of the proposed DS-VIKOR method.
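
For reference, a sketch of Deng entropy, the quantity that the weighting step is based on, is shown below; the example BPA is an assumption.

# Sketch: Deng entropy of a BPA, E_d(m) = -sum_A m(A) * log2( m(A) / (2^|A| - 1) ).
import math

def deng_entropy(bpa):
    return -sum(m * math.log2(m / (2 ** len(A) - 1)) for A, m in bpa.items() if m > 0)

bpa = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
print(deng_entropy(bpa))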
Category: Artificial Intelligence

[402] viXra:1712.0400 [pdf] submitted on 2017-12-13 06:52:57

Adaptively Evidential Weighted Classifier Combination

Authors: Liguo Fei, Bingyi Kang, Van-Nam Huynh, Yong Deng
Comments: 9 Pages.

Classifier combination plays an important role in classification. Because of its efficiency in handling and fusing uncertain information, Dempster-Shafer evidence theory is widely used in multi-classifier fusion. In this paper, a method of adaptively weighted evidential classifier combination is presented. In the proposed method, the output of each classifier is modelled by a basic probability assignment (BPA). The weights are then determined adaptively for each classifier according to the uncertainty degree of the corresponding BPA, measured by a belief entropy named Deng entropy. The discounting-and-combination scheme of D-S theory is used to calculate the weighted BPAs and combine them into a final BPA for classification. The effectiveness of the proposed weighted combination method is illustrated by numerical experimental results.
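
A sketch of Dempster's combination rule, the combination half of the discounting-and-combination scheme mentioned above, is given below; the example BPAs are assumptions, and the adaptive Deng-entropy weights are not reproduced here.

# Sketch: Dempster's rule of combination for two BPAs over the same frame.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + a * b
            else:
                conflict += a * b                       # mass assigned to the empty set
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

m1 = {frozenset("a"): 0.7, frozenset("ab"): 0.3}
m2 = {frozenset("b"): 0.5, frozenset("ab"): 0.5}
print(dempster_combine(m1, m2))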
Category: Artificial Intelligence

[401] viXra:1712.0347 [pdf] submitted on 2017-12-07 09:10:57

Finding The Next Term Of Any Time Series Type Or Non Time Series Type Sequence Using Total Similarity & Dissimilarity {Version 6} ISSN 1751-3030.

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel scheme of finding the next term of any given time series type or non-time series type sequence.
Category: Artificial Intelligence

[400] viXra:1712.0138 [pdf] submitted on 2017-12-05 14:07:08

Topological Clustering as a Method of Control for Certain Critical-Point Sensitive Systems

Authors: Martin J. Dudziak
Comments: 6 Pages. submitted to CoDIT 2018 (Thessaloniki, Greece, April 2018)

New methods can provide more sensitive modeling and more reliable control through the use of dynamically alterable local neighborhood clusters composed of the state-space parameters most disposed to influence non-linear systemic changes. Particular attention is directed to systems with extreme non-linearity and uncertainty in measurement and in control communications (e.g., micro-scalar, remote, and inaccessible to real-time control). An architecture for modeling based upon topological similarity mapping principles is introduced as an alternative to classical Turing machine models, including new “quantum computers.”
Category: Artificial Intelligence

[399] viXra:1712.0071 [pdf] submitted on 2017-12-03 19:12:51

The Intelligence Quotient of the Artificial Intelligence

Authors: Dimiter Dobrev
Comments: 15 Pages. Bulgarian. Serdica Journal of Computing.

To say which programs are AI, it is enough to run an exam and to recognize as AI those programs that pass it. The exam grade will be called IQ. We cannot say exactly how big the IQ has to be for a program to count as AI, but we will choose one specific value. Our definition of AI is therefore any program whose IQ is above this specific value. This idea has already been realized in [1], but here we repeat the construction with some improvements.
Category: Artificial Intelligence

[398] viXra:1711.0477 [pdf] submitted on 2017-11-30 18:22:54

Okay, Google: a Preliminary Evaluation of the Robustness of Scholar Metrics

Authors: H Qadrawxu-Korbau, D Smith, K Beryllium
Comments: 4 Pages.

Google Scholar provides a number of metrics often used as proxies for scientific productivity. It is, however, possible to consciously manipulate Scholar metrics, for instance via copious self-citation or the upload of fake papers to indexed websites. Here, we post a paper on viXra, a preprint forum, and arbitrarily cite a completely random study to evaluate whether Scholar will count this submission toward the overall citation count of that study. We publish no results, as the publication of the paper is, in this case, the experiment.
Category: Artificial Intelligence

[397] viXra:1711.0470 [pdf] submitted on 2017-11-30 02:13:24

Multi-Scalar Multi-Agent Control for Optimization of Dynamic Networks Operating in Remote Environment

Authors: Martin Dudziak
Comments: 7 Pages.

Multi-agent control systems have demonstrated effectiveness in a variety of physical applications, including cooperative robot networks and multi-target tracking in high-noise network and group environments. We introduce multi-scalar models that extend cellular automaton regional neighborhood comparisons and local voting measures based upon stochastic approximation in order to provide more efficient and time-sensitive solutions to non-deterministic problems. The scaling factors may be spatial, temporal, or defined over other semantic values. The exercise of both cooperative and competitive functions by the devices in such networks offers a method for optimizing system parameters to reduce search, sorting, ranking, and anomaly evaluation tasks. Applications are illustrated for a group of robots assigned different tasks in remote operating environments with highly constrained communications and critical fail-safe conditions.
Category: Artificial Intelligence

[396] viXra:1711.0433 [pdf] submitted on 2017-11-26 23:19:36

Finding The Next Term Of Any Time Series Type Sequence Using Total Similarity & Dissimilarity {Version 5} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel scheme of finding the next term of any given time series type sequence.
Category: Artificial Intelligence

[395] viXra:1711.0429 [pdf] submitted on 2017-11-27 05:14:34

Finding The Next Term Of Any Sequence Using Total Similarity & Dissimilarity {Version 5}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel scheme of finding the next term of any given sequence.
Category: Artificial Intelligence

[394] viXra:1711.0420 [pdf] submitted on 2017-11-26 01:39:24

Move the Tip to the Right: A Language-Based Computer Animation System in Box2D

Authors: Frank Schröder
Comments: 8 Pages.

Not only do “robots need language”; sometimes a human operator needs it too. To interact with complex domains, the operator needs a vocabulary to initialize the robot, make it walk, and have it grasp objects. Natural language interfaces can support semi-autonomous and fully autonomous systems on both sides. Instead of using neural networks, the language grounding problem can be solved with object-oriented programming. In the following paper, a simulation of micro-manipulation under a microscope is presented, controlled with a C++ script. The small vocabulary consists of init, pregrasp, grasp, and place.
Category: Artificial Intelligence

[393] viXra:1711.0382 [pdf] submitted on 2017-11-22 02:30:08

A Survey on Evolutionary Computation: Methods and Their Applications in Engineering

Authors: Morteza Husainy Yar, Vahid Rahmati, Hamid Reza Dalili Oskouei
Comments: 9 Pages.

Evolutionary computation is now an inseparable branch of artificial intelligence, comprising smart methods based on evolutionary algorithms that aim to solve real-world problems through procedures inspired by living creatures. It relies on randomized methods and on the regeneration, selection, modification, and replacement of data within a system such as a personal computer (PC), a cloud, or any other data center. This paper briefly surveys different evolutionary computation techniques used in several applications, specifically image processing, cloud computing, and grid computing. These methods are generally categorized as evolutionary algorithms and swarm intelligence; each subfield contains a variety of algorithms and techniques, which are presented together with their applications. The work illustrates the benefits of the field by presenting real-world applications that have already been implemented. Among these are cloud computing scheduling improved by genetic algorithms, ant colony optimization, and the bees algorithm, as well as improvements to grid load balancing, image processing, the bi-objective dynamic cell formation problem, robust machine cells for dynamic part production, integrated mixed-integer linear programming, robotic applications, and power control in wind turbines.
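
As a minimal example of the kind of evolutionary algorithm surveyed here, the sketch below runs a genetic algorithm with tournament selection, one-point crossover, and bit-flip mutation on the toy one-max problem; all parameter values are illustrative assumptions.

# Minimal genetic-algorithm sketch on the one-max problem (maximize the number of 1s).
import random

def fitness(bits):
    return sum(bits)

def evolve(pop_size=20, length=16, generations=30, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            parent1 = max(a, b, key=fitness)            # tournament selection
            a, b = random.sample(pop, 2)
            parent2 = max(a, b, key=fitness)
            cut = random.randrange(1, length)           # one-point crossover
            child = parent1[:cut] + parent2[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]   # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

print(evolve())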
Category: Artificial Intelligence

[392] viXra:1711.0370 [pdf] submitted on 2017-11-20 22:14:32

Finding The Next Term Of Any Given Sequence Using Total Similarity & Dissimilarity {Version 3} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research investigation, the author has detailed a novel scheme of finding the next term of any given sequence.
Category: Artificial Intelligence

[391] viXra:1711.0367 [pdf] submitted on 2017-11-21 00:18:32

One Step Evolution Of Any Real Positive Number {Version 2}

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed the Theory Of One Step Evolution Of Any Real Positive Number.
Category: Artificial Intelligence

[390] viXra:1711.0361 [pdf] submitted on 2017-11-20 02:12:39

Finding The Next Term Of Any Given Sequence Using Total Similarity & Dissimilarity. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research investigation, the author has detailed a novel scheme of finding the next term of any given sequence.
Category: Artificial Intelligence

[389] viXra:1711.0360 [pdf] submitted on 2017-11-20 02:43:10

Ontology Engineering for Robotics

Authors: Frank Schröder
Comments: 8 Pages.

Ontologies are a powerful alternative to reinforcement learning. They store knowledge in a domain-specific language. The best practice for implementing ontologies is a distributed version control system that is filled in manually by programmers.
Category: Artificial Intelligence

[388] viXra:1711.0359 [pdf] submitted on 2017-11-20 05:21:55

Finding The Next Term Of Any Given Sequence Using Total Similarity & Dissimilarity {New} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research investigation, the author has detailed a novel scheme of finding the next term of any given sequence.
Category: Artificial Intelligence

[387] viXra:1711.0292 [pdf] submitted on 2017-11-12 09:29:57

Strengths and Potential of the SP Theory of Intelligence in General, Human-Like Artificial Intelligence

Authors: J Gerard Wolff
Comments: 20 Pages.

This paper first defines "general, human-like artificial intelligence" (GHLAI) in terms of five principles. In the light of the definition, the paper summarises the strengths and potential of the "SP theory of intelligence" and its realisation in the "computer model", outlined in an appendix, in three main areas: the versatility of the SP system in aspects of intelligence; its versatility in the representation of diverse kinds of knowledge; and its potential for the seamless integration of diverse aspects of intelligence and diverse kinds of knowledge, in any combination. There are reasons to believe that a mature version of the SP system may attain full GHLAI in diverse aspects of intelligence and in the representation of diverse kinds of knowledge.
Category: Artificial Intelligence

[386] viXra:1711.0266 [pdf] submitted on 2017-11-11 03:38:23

Revisit Fuzzy Neural Network: Demystifying Batch Normalization and ReLU with Generalized Hamming Network

Authors: Lixin Fan
Comments: 10 Pages. NIPS 2017 publication.

We revisit fuzzy neural network with a cornerstone notion of generalized hamming distance, which provides a novel and theoretically justified framework to re-interpret many useful neural network techniques in terms of fuzzy logic. In particular, we conjecture and empirically illustrate that, the celebrated batch normalization (BN) technique actually adapts the “normalized” bias such that it approximates the rightful bias induced by the generalized hamming distance. Once the due bias is enforced analytically, neither the optimization of bias terms nor the sophisticated batch normalization is needed. Also in the light of generalized hamming distance, the popular rectified linear units (ReLU) can be treated as setting a minimal hamming distance threshold between network inputs and weights. This thresholding scheme, on the one hand, can be improved by introducing double-thresholding on both positive and negative extremes of neuron outputs. On the other hand, ReLUs turn out to be non-essential and can be removed from networks trained for simple tasks like MNIST classification. The proposed generalized hamming network (GHN) as such not only lends itself to rigorous analysis and interpretation within the fuzzy logic theory but also demonstrates fast learning speed, well-controlled behaviour and state-of-the-art performances on a variety of learning tasks.
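
For reference, a sketch of the generalized hamming distance between fuzzy-valued vectors, g(x, w) = sum_i (x_i + w_i - 2*x_i*w_i), is given below; the bias and thresholding analysis built on it follows the paper and is not reproduced here, and the example vectors are assumptions.

# Sketch: generalized hamming distance between two fuzzy-valued vectors.
import numpy as np

def generalized_hamming(x, w):
    # element-wise fuzzy XOR, x (+) w = x + w - 2*x*w, summed over components
    return np.sum(x + w - 2.0 * x * w)

x = np.array([0.2, 0.9, 0.5])
w = np.array([0.1, 0.8, 0.7])
print(generalized_hamming(x, w))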
Category: Artificial Intelligence

[385] viXra:1711.0265 [pdf] submitted on 2017-11-11 04:14:07

Revisit Fuzzy Neural Network: Bridging the Gap Between Fuzzy Logic and Deep Learning

Authors: Lixin Fan
Comments: 75 Pages. bridging the gap between symbolic versus connectionist.

This article aims to establish a concrete and fundamental connection between two important fields in artificial intelligence, i.e. deep learning and fuzzy logic. On the one hand, we hope this article will pave the way for fuzzy logic researchers to develop convincing applications and tackle challenging problems which are of interest to the machine learning community too. On the other hand, deep learning could benefit from the comparative research by re-examining many trial-and-error heuristics through the lens of fuzzy logic, and consequently distilling the essential ingredients with rigorous foundations. Based on the new findings reported in [38] and this article, we believe the time is ripe to revisit the fuzzy neural network as a crucial bridge between two schools of AI research, i.e. symbolic versus connectionist [93], and eventually open the black box of artificial neural networks.
Category: Artificial Intelligence

[384] viXra:1711.0250 [pdf] submitted on 2017-11-08 06:37:55

Total Intra Similarity And Dissimilarity Measure For The Values Taken By A Parameter Of Concern. {Version 1}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel method of finding the ‘Total Intra Similarity And Dissimilarity Measure For The Values Taken By A Parameter Of Concern’. The advantage of this measure is that it clearly distinguishes the contribution of intra-aspect variation from that of inter-aspect variation when both occur in a given phenomenon of concern. It provides the same advantages as the popular F-statistic.
Category: Artificial Intelligence

[383] viXra:1711.0241 [pdf] submitted on 2017-11-07 03:26:43

Dysfunktionale Methoden der Robotik

Authors: Frank Schröder
Comments: 8 Pages. German

A great deal can go wrong when carrying out robotics projects. This refers not only to cold solder joints or crashing software; far more fundamental issues play a role. To avoid mistakes, one must first take a closer look at the failure patterns, that is, the development methods by which a robot should under no circumstances be built and the ways in which the software should preferably not work.
Category: Artificial Intelligence

[382] viXra:1711.0235 [pdf] submitted on 2017-11-06 20:27:28

Not Merely Memorization in Deep Networks: Universal Fitting and Specific Generalization

Authors: Xiuyi Yang
Comments: 7 Pages.

We reinterpret the training of convolutional neural nets (CNNs) through a universal classification theorem (UCT). This theorem implies that any disjoint datasets can be classified by two or more layers of CNNs built from ReLUs and the rigid transformation switch units (RTSUs) we propose here, which explains why CNNs can memorize both noise and real data. We then present a further hypothesis: a CNN is insensitive to certain variants of its training examples, where each variant is related to the original input by a generating function. This hypothesis implies that CNNs can generalize well even for randomly generated training data, and it illuminates the paradox of why CNNs fit both real and noise data yet fail drastically when making predictions on noise data. Our findings suggest that the study of CNN generalization should turn to generating functions rather than traditional statistical machine learning theory, which assumes that training and testing data are independent and identically distributed (IID); this IID assumption contradicts our experiments in this paper. We verify these ideas experimentally.
Category: Artificial Intelligence

[381] viXra:1711.0226 [pdf] submitted on 2017-11-07 01:52:12

Theory Of Universal Evolution Along Prime Basis (Time Like) ISSN 1751-3030.

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed the Theory Of Evolution.
Category: Artificial Intelligence

[380] viXra:1711.0208 [pdf] submitted on 2017-11-07 02:22:45

Theory Of Universal Evolution Along Prime Basis (Time Like) {Version 2} ISSN 1751-3030.

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed the Theory Of Evolution.
Category: Artificial Intelligence

[379] viXra:1711.0116 [pdf] submitted on 2017-11-02 23:51:41

Dynamic Thresholding For Linear Binary Classifiers. {Version 2} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel method of finding the Thresholding for Linear Binary Classifiers.
Category: Artificial Intelligence

[378] viXra:1711.0034 [pdf] submitted on 2017-11-02 06:05:21

Dynamic Thresholding For Linear Binary Classifiers. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed a novel method of finding the Thresholding for Linear Binary Classifiers.
Category: Artificial Intelligence

[377] viXra:1710.0336 [pdf] submitted on 2017-10-31 23:50:38

Scheme For Finding The Next Term Of A Sequence Based On Evolution. {Version 7}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 7 Pages.

In this research investigation, the author has detailed a novel method of finding the next term of a sequence based on Evolution.
Category: Artificial Intelligence

[376] viXra:1710.0299 [pdf] submitted on 2017-10-27 04:13:49

Scheme For Finding The Next Term Of A Sequence Based On Evolution. {Version 6}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 6 Pages.

In this research investigation, the author has detailed a novel method of finding the next term of a sequence based on Evolution.
Category: Artificial Intelligence

[375] viXra:1710.0297 [pdf] submitted on 2017-10-25 03:57:32

Scheme For Finding The Next Term Of A Sequence Based On Evolution {File Closing Version 2}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 5 Pages.

In this research investigation, the author has detailed a novel method of finding the next term of a sequence based on Evolution.
Category: Artificial Intelligence

[374] viXra:1710.0294 [pdf] submitted on 2017-10-25 23:47:37

Scheme For Finding The Next Term Of A Sequence Based On Evolution {File Closing Version 3}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 5 Pages.

In this research investigation, the author has detailed a novel method of finding the next term of a sequence based on Evolution.
Category: Artificial Intelligence

[373] viXra:1710.0293 [pdf] submitted on 2017-10-26 01:24:46

Scheme For Finding The Next Term Of A Sequence Based On Evolution {File Closing Version 4}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 6 Pages.

In this research investigation, the author has detailed a novel method of finding the next term of a sequence based on Evolution.
Category: Artificial Intelligence

[372] viXra:1710.0289 [pdf] submitted on 2017-10-26 03:56:28

Scheme For Finding The Next Term Of A Sequence Based On Evolution {File Closing Version 5}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 6 Pages.

In this research investigation, the author has detailed a novel method of finding the next term of a sequence based on Evolution.
Category: Artificial Intelligence

[371] viXra:1710.0279 [pdf] submitted on 2017-10-24 04:45:19

Scheme For Finding The Next Term Of A Sequence Based On Evolution {File Closing Version 1}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel method of finding the next term of a sequence based on Evolution.
Category: Artificial Intelligence

[370] viXra:1710.0271 [pdf] submitted on 2017-10-23 23:14:04

The Average Computed In Primes Basis {File Closing Version 2}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed a novel method of finding the average of a sequence in Primes Basis.
Category: Artificial Intelligence

[369] viXra:1710.0267 [pdf] submitted on 2017-10-23 06:21:13

The Average Computed In Primes Basis {File Closing Version 1}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research investigation, the author has detailed a novel method of finding the average of a sequence in Primes Basis.
Category: Artificial Intelligence

[368] viXra:1710.0259 [pdf] submitted on 2017-10-23 00:38:01

Universe’s Way Of Recursively Finding The Next Term Of Any Sequence {File Closing Version 3}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel method of Universe’s Way Of Recursively Finding The Next Term Of Any Sequence.
Category: Artificial Intelligence

[367] viXra:1710.0208 [pdf] submitted on 2017-10-18 23:07:44

The Recursive Future Equation Based On The Ananda-Damayanthi Normalized Similarity Measure. {File Closing Version 4}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 4 Pages.

In this research technical note, the author has presented a recursive future average of time series data based on cosine similarity.
Category: Artificial Intelligence

[366] viXra:1710.0141 [pdf] submitted on 2017-10-12 10:42:50

Advances in the Collective Interface

Authors: Miguel A. Sanchez-Rey
Comments: 5 Pages.

A byproduct of 2AI.
Category: Artificial Intelligence

[365] viXra:1710.0003 [pdf] submitted on 2017-10-01 06:54:11

Nature-Like Technology for Communication Network Selfactualization in the Mode Advancing Real-Time

Authors: Popov Boris
Comments: 7 Pages.

In order for a control system to operate in real time, the communication system should operate in a mode that runs ahead of real time. This can be achieved only by providing the communication system with a mechanism for forward adaptation of the network structure to variations in user query topics and rates, as well as for its self-actualization. A technique for developing such a nature-like technology is proposed; it is based on the fundamental natural phenomenon of inertia and on widespread symbiotic cooperation, and it is distinguished by the building up (development) of the resources being used.
Category: Artificial Intelligence

[364] viXra:1709.0404 [pdf] submitted on 2017-09-26 13:26:47

A Suggestion on CLIPS/JIProlog/JNNS/ImageJ/Java Agents/JikesRVM Based Analysis of Cryo-EM/TEM/SEM Images Using HDF5 Image Format – Some Interesting & Feasible Implementations of Expert Systems to Understand Nano- Bio Material Systems and EM

Authors: D.N.T.Kumar
Comments: 7 Pages. Prolog/NN/Expert Systems/JikesRVM/Informatics/EM/Cryo-EM/TEM/SEM/Material Science/Java Agents/Nanotechnology.

In this short communication, the importance of an expert-systems-based imaging framework for probing Cryo-EM images is presented from a practical implementation point of view. Neural networks (NN) are an excellent tool for probing various domains of science and technology, and the Cryo-EM technique holds a bright future based on their application. Prolog-NN based algorithms could form a powerful informatics and computational framework for researching the challenges of nano-bio applications. Further, it is useful and important to study the behavior of NN in domains where knowledge does not yet exist, i.e., to use the models to make bold predictions, which form the basis for Cryo-EM image processing tasks and the discovery of new nano-bio phenomena. Indeed, the performance of NN is most useful to researchers in domains where modeling and prediction uncertainty is known to be the greatest factor. All the methods presented here are applicable to TEM/SEM and other EM image processing tasks as well.
Category: Artificial Intelligence

[363] viXra:1709.0403 [pdf] submitted on 2017-09-26 13:33:02

Kernel Principal Component Analysis as Mathematical Tool In Processing Cryo- EM Images – A Suggestion Using Kernel Based Data Processing Techniques in a Java Virtual Machine(JVM) Environment.

Authors: D.N.T.Kumar
Comments: 7 Pages. A Suggestion Using Kernel Based Data Processing Techniques in a Java Virtual Machine(JVM) Environment.

In this short communication, some novel methodologies are proposed for probing, processing, and computing Cryo-EM images in a unique way, using an open-source Kernel-PCA and interfacing it via the Java Matlab Interface (JMI) with the JikesRVM system or any other Java Virtual Machine (JVM). The main reason for designing and developing this kind of computing approach is to exploit Java-based technologies for future applications in the promising and demanding domain of Cryo-EM imaging for nano-bio research. This is one of the pioneering research topics in this domain, with a lot of promise. Image de-noising and novelty detection pave the way and hold the key to better Cryo-EM image processing.
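
A minimal scikit-learn sketch of kernel-PCA reconstruction, the kind of de-noising step suggested here, is shown below; the random stand-in patches, kernel, and parameter values are assumptions (the communication itself targets a Java/JVM tool chain).

# Sketch: kernel-PCA reconstruction of image patches as a de-noising step.
import numpy as np
from sklearn.decomposition import KernelPCA

patches = np.random.rand(200, 64)              # stand-in for flattened micrograph patches

kpca = KernelPCA(kernel="rbf", gamma=0.1, n_components=20,
                 fit_inverse_transform=True)   # needed to map codes back to pixel space
codes = kpca.fit_transform(patches)
denoised = kpca.inverse_transform(codes)       # reconstruction from the leading components
print(denoised.shape)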
Category: Artificial Intelligence

[362] viXra:1709.0394 [pdf] submitted on 2017-09-26 11:50:52

How Does the AI Understand What's Going On

Authors: Dimiter Dobrev
Comments: 22 Pages.

Most researchers regard AI as a static function without memory. This is one of the few articles where AI is seen as a device with memory. When we have memory, we can ask ourselves: "Where am I?", and "What is going on?" When we have no memory, we have to assume that we are always in the same place and that the world is always in the same state.
Category: Artificial Intelligence

[361] viXra:1709.0323 [pdf] submitted on 2017-09-21 05:35:00

Recursive Future Average Of A Time Series Data Based On Cosine Similarity-RF

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research technical note, the author has presented a recursive future average of time series data based on cosine similarity.
Category: Artificial Intelligence

[360] viXra:1709.0322 [pdf] submitted on 2017-09-21 05:49:31

Recursive Future Average Of A Time Series Data Based On Cosine Similarity-RF {Version 2}

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research technical note, the author has presented a recursive future average of time series data based on cosine similarity.
Category: Artificial Intelligence

[359] viXra:1709.0313 [pdf] submitted on 2017-09-22 00:01:00

The Recursive Future Equation Based On The Ananda-Damayanthi Normalized Similarity Measure. {File Closing Version 2}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research technical note, the author has presented a recursive future average of time series data based on cosine similarity.
Category: Artificial Intelligence

[358] viXra:1709.0242 [pdf] submitted on 2017-09-15 20:34:58

Exact MAP Inference in General Higher-Order Graphical Models Using Linear Programming

Authors: Ikhlef Bechar
Comments: 50 Pages.

This paper is concerned with the problem of exact MAP inference in general higher-order graphical models by means of a traditional linear programming relaxation approach. The proof developed in this paper is a rather simple algebraic one, made straightforward above all by the introduction of two novel algebraic tools. On the one hand, we introduce the notion of a delta-distribution, which simply stands for the difference of two arbitrary probability distributions and serves mainly to remove the sign constraint inherent in a traditional probability distribution. On the other hand, we develop an approximation framework for general discrete functions by means of an orthogonal projection expressed in terms of linear combinations of function margins with respect to a given collection of point subsets; we exploit this approach for the purpose of modeling locally consistent sets of discrete functions from a global perspective. As a first step, we develop from scratch the expectation optimization framework, which is nothing other than a reformulation, on stochastic grounds, of the convex-hull approach. As a second step, we develop the traditional LP relaxation of this expectation optimization approach and show that it solves the MAP inference problem in graphical models under rather general assumptions. Last but not least, we describe an algorithm that computes an exact MAP solution from a possibly fractional optimal (probability) solution of the proposed LP relaxation.
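
As a toy illustration of an LP relaxation for MAP inference (the paper treats the general higher-order case; this sketch is only the standard local-polytope relaxation for two binary variables), the code below minimizes the expected energy over locally consistent marginals with scipy. All energies are assumptions.

# Sketch: local-polytope LP relaxation for MAP on two binary variables.
import numpy as np
from scipy.optimize import linprog

theta1 = np.array([0.0, 1.5])              # unary energies for x1 in {0, 1}
theta2 = np.array([0.5, 0.0])              # unary energies for x2
theta12 = np.array([[0.0, 2.0],            # pairwise energies theta12[x1, x2]
                    [2.0, 0.0]])

# variables: mu1(0), mu1(1), mu2(0), mu2(1), mu12(0,0), mu12(0,1), mu12(1,0), mu12(1,1)
c = np.concatenate([theta1, theta2, theta12.ravel()])

A_eq, b_eq = [], []
A_eq.append([1, 1, 0, 0, 0, 0, 0, 0]); b_eq.append(1)    # sum_x1 mu1(x1) = 1
A_eq.append([0, 0, 1, 1, 0, 0, 0, 0]); b_eq.append(1)    # sum_x2 mu2(x2) = 1
A_eq.append([-1, 0, 0, 0, 1, 1, 0, 0]); b_eq.append(0)   # sum_x2 mu12(0, x2) = mu1(0)
A_eq.append([0, -1, 0, 0, 0, 0, 1, 1]); b_eq.append(0)   # sum_x2 mu12(1, x2) = mu1(1)
A_eq.append([0, 0, -1, 0, 1, 0, 1, 0]); b_eq.append(0)   # sum_x1 mu12(x1, 0) = mu2(0)
A_eq.append([0, 0, 0, -1, 0, 1, 0, 1]); b_eq.append(0)   # sum_x1 mu12(x1, 1) = mu2(1)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8, method="highs")
print(res.x)   # integral here: all mass sits on the MAP assignment x1 = 0, x2 = 0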
Category: Artificial Intelligence

[357] viXra:1709.0217 [pdf] submitted on 2017-09-14 08:11:16

Quantum Thinking Machines

Authors: George Rajna
Comments: 24 Pages.

Quantum computers can be made to utilize effects such as quantum coherence and entanglement to accelerate machine learning. [16] Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]
Category: Artificial Intelligence

[356] viXra:1709.0211 [pdf] submitted on 2017-09-14 06:46:27

Analyzing Huge Volumes of Data

Authors: George Rajna
Comments: 23 Pages.

Neural networks learn how to carry out certain tasks by analyzing large amounts of data displayed to them. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that - surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]
Category: Artificial Intelligence

[355] viXra:1709.0161 [pdf] submitted on 2017-09-13 10:30:45

AI is Reinforcing Stereotypes

Authors: George Rajna
Comments: 46 Pages.

Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[354] viXra:1709.0159 [pdf] submitted on 2017-09-13 06:47:26

Mergeable Nervous Robots

Authors: George Rajna
Comments: 49 Pages.

Researchers at the Université libre de Bruxelles have developed self-reconfiguring modular robots that can merge, split and even self-heal while retaining full sensorimotor control. [29] A challenging brain technique called whole-cell patch clamp electrophysiology or whole-cell recording (WCR) is a procedure so delicate and complex that only a handful of humans in the whole world can do it. [28] ComText allows robots to understand contextual commands such as, " Pick up the box I put down. " [27] McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19]
Category: Artificial Intelligence

[353] viXra:1709.0142 [pdf] submitted on 2017-09-11 20:53:40

Brain Emotional Learning Based Intelligent Controller for Velocity Control of an Electro Hydraulic Servo System

Authors: Zohreh Alzahra Sanai Dashti, Milad Gholami, Masoud Hajimani
Comments: 7 Pages. IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE), e-ISSN: 2278-1676, p-ISSN: 2320-3331, Volume 12, Issue 4, Ver. II (Jul.–Aug. 2017), pp. 29-35

In this paper, a biologically motivated controller based on the mammalian limbic system, the Brain Emotional Learning Based Intelligent Controller (BELBIC), is used for velocity control of an Electro Hydraulic Servo System (EHSS) in the presence of flow nonlinearities, internal friction and noise. It is shown that this technique can successfully stabilize any chosen operating point of the system, both with and without noise. All derived results are validated by computer simulation of a nonlinear mathematical model of the system. The introduced controllers have a wide operating range for controlling the system. We compare the BELBIC controller results with feedback linearization, backstepping and PID controllers.
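
As a rough illustration of the control scheme named in this abstract, the following Python sketch implements the amygdala/orbitofrontal learning rules commonly associated with BELBIC. The class name, learning rates, two-component sensory input and the emotional-signal value in the usage line are illustrative assumptions, not the formulation or tuning used in the paper.

import numpy as np

class BELBIC:
    # Minimal sketch of the brain-emotional-learning controller structure;
    # all constants here are placeholders, not the paper's design values.
    def __init__(self, n_inputs, alpha=0.1, beta=0.05):
        self.Ga = np.zeros(n_inputs)   # amygdala gains
        self.Go = np.zeros(n_inputs)   # orbitofrontal gains
        self.alpha = alpha             # amygdala learning rate
        self.beta = beta               # orbitofrontal learning rate

    def step(self, SI, ES):
        # SI: sensory-input vector (e.g. tracking error terms),
        # ES: emotional (reward) signal built from error and control effort.
        A = self.Ga * SI               # amygdala outputs
        O = self.Go * SI               # orbitofrontal outputs
        u = A.sum() - O.sum()          # controller output
        # Amygdala weights only grow (no unlearning); orbitofrontal corrects.
        self.Ga += self.alpha * SI * max(0.0, ES - A.sum())
        self.Go += self.beta * SI * (u - ES)
        return u

ctrl = BELBIC(n_inputs=2)
print(ctrl.step(SI=np.array([0.4, -0.1]), ES=0.8))   # one hypothetical step
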
Category: Artificial Intelligence

[352] viXra:1709.0141 [pdf] submitted on 2017-09-11 20:56:32

Design & Implementation of Fuzzy Parallel Distributed Compensation Controller for Magnetic Levitation System

Authors: Milad Gholami, Zohreh Alzahra Sanai Dashti, Masoud Hajimani
Comments: 9 Pages. IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE), e-ISSN: 2278-1676, p-ISSN: 2320-3331, Volume 12, Issue 4, Ver. II (Jul.–Aug. 2017), pp. 20-28

This study applies the parallel distributed compensation (PDC) technique to position control of a magnetic levitation system. The PDC method is based on a nonlinear Takagi-Sugeno (T-S) fuzzy model. It is shown that this technique can successfully stabilize any chosen operating point of the system. All derived results are validated experimentally and by computer simulation of a nonlinear mathematical model of the system. The introduced controllers have a wide operating range for controlling the system.
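
To make the PDC idea concrete, here is a minimal Python sketch of the generic parallel distributed compensation law: the control input is a membership-weighted blend of local state-feedback gains, one per Takagi-Sugeno rule. The two membership functions, gain vectors and state values below are invented for illustration and are not the maglev model or gains designed in the study.

import numpy as np

def pdc_control(x, z, memberships, gains):
    # x: state vector, z: premise variable, memberships: rule firing
    # strengths h_i(z), gains: local state-feedback gain vectors F_i.
    w = np.array([h(z) for h in memberships])
    h = w / w.sum()                                  # normalized firing strengths
    return -sum(hi * (Fi @ x) for hi, Fi in zip(h, gains))

# Illustrative two-rule controller (assumed numbers, not from the paper):
memberships = [lambda z: max(0.0, 1.0 - abs(z)), lambda z: min(1.0, abs(z))]
gains = [np.array([12.0, 3.0]), np.array([20.0, 5.0])]
x = np.array([0.01, -0.02])                          # e.g. position error, velocity
print(pdc_control(x, z=x[0], memberships=memberships, gains=gains))
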
Category: Artificial Intelligence

[351] viXra:1709.0125 [pdf] submitted on 2017-09-11 07:51:38

Machine Learning Monitoring Air Quality

Authors: George Rajna
Comments: 23 Pages.

UCLA researchers have developed a cost-effective mobile device to measure air quality. It works by detecting pollutants and determining their concentration and size using a mobile microscope connected to a smartphone and a machine-learning algorithm that automatically analyzes the images of the pollutants. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11]
Category: Artificial Intelligence

[350] viXra:1709.0108 [pdf] submitted on 2017-09-10 06:02:53

A New Semantic Theory of Natural Language

Authors: Kun Xing
Comments: 70 Pages.

Formal Semantics and Distributional Semantics are two important semantic frameworks in Natural Language Processing (NLP). Cognitive Semantics belongs to the movement of Cognitive Linguistics, which is based on contemporary cognitive science. Each framework can deal with some meaning phenomena, but none of them fulfills all the requirements posed by applications. A unified semantic theory characterizing all important language phenomena would have both theoretical and practical significance; however, although many attempts have been made in recent years, no existing theory has achieved this goal yet. This article introduces a new semantic theory that has the potential to characterize most of the important meaning phenomena of natural language and to fulfill most of the necessary requirements for philosophical analysis and for NLP applications. The theory is based on a unified representation of information, and constructs a kind of mathematical model called a cognitive model to interpret natural language expressions in a compositional manner. It accepts the empirical assumption of Cognitive Semantics, and overcomes most shortcomings of Formal Semantics and of Distributional Semantics. The theory, however, is not a simple combination of existing theories, but an extensive generalization of classical logic and Formal Semantics. It inherits nearly all the advantages of Formal Semantics, and also provides descriptive contents for objects and events that are as fine-grained as possible and that represent the results of human cognition.
Category: Artificial Intelligence

[349] viXra:1709.0096 [pdf] submitted on 2017-09-08 13:34:21

Robots Understand Brain Function

Authors: George Rajna
Comments: 48 Pages.

A challenging brain technique called whole-cell patch clamp electrophysiology or whole-cell recording (WCR) is a procedure so delicate and complex that only a handful of humans in the whole world can do it. [28] ComText allows robots to understand contextual commands such as, " Pick up the box I put down. " [27] McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18]
Category: Artificial Intelligence

[348] viXra:1709.0068 [pdf] submitted on 2017-09-06 07:16:28

Identification of Individuals

Authors: George Rajna
Comments: 44 Pages.

Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16]
Category: Artificial Intelligence

[347] viXra:1709.0048 [pdf] submitted on 2017-09-05 04:26:06

On the Dual Nature of Logical Variables and Clause-Sets

Authors: Elnaserledinellah Mahmood Abdelwahab
Comments: 38 Pages. © 2016 Journal Academica Foundation. All rights reserved. With perpetual, non-exclusive license for viXra.org. Originally received 08-04-2016, accepted 09-12-2016, published 09-15-2016. J. Acad. (N.Y.) 6, 3:202-239. ISSN 2161-3338

This paper describes the conceptual approach behind the proposed solution of the 3SAT problem recently published in [Abdelwahab 2016]. It is intended for interested readers, providing a step-by-step, mostly informal explanation of the new paradigm proposed there and completing the picture from an epistemological point of view, with the concept of duality at center stage. After a brief introduction discussing the importance of duality in both physics and mathematics, as well as past efforts to solve the P vs. NP problem, a theorem is proven showing that true randomness of input variables is a property of algorithms which has to be given up when discrete, finite domains are considered. This insight has an already known side effect on computation paradigms, namely the ability to de-randomize probabilistic algorithms. The theorem uses a canonical type of de-randomization which reveals dual properties of logical variables and Clause-Sets. A distinction is made between what we call the syntactical Container Expression (CE) and the semantic Pattern Expression (PE). A single-sided approach is presumed to be insufficient to solve either of the dual problems of efficiently finding an assignment validating a 3CNF Clause-Set and finding a 3CNF representation for a given semantic pattern. The deeply rooted reason, hereafter referred to as the Inefficiency Principle, is conjectured to be the inherent difficulty of translating one expression into the other from a single-sided perspective. It expresses our inability to perceive and efficiently calculate complementary properties of a logical formula applying one view only. It is proposed as an alternative to the commonly accepted P≠NP conjecture. On the other hand, the idea of algorithmically using information deduced from the PE to guide the instantiation of variables in a resolution procedure applied to the CE is, as per [Abdelwahab 2016], able to provide an efficient solution to the 3SAT problem. Finally, linking de-randomization to this positive solution has various well-established and important consequences for probabilistic complexity classes, which are shown to hold.
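
For readers unfamiliar with the two views contrasted above, the brute-force Python sketch below places a toy 3-CNF clause-set (the syntactic, container-like view) next to the set of its satisfying assignments (one simple way to picture the semantic pattern). The clause-set is arbitrary, and the exhaustive enumeration costs 2^n; it illustrates the problem statement only and is not the paper's algorithm.

from itertools import product

# Toy 3-CNF clause-set: (1, -2, 3) encodes (x1 OR NOT x2 OR x3).
clauses = [(1, -2, 3), (-1, 2, -3), (2, 3, -1)]
n_vars = 3

def satisfies(assignment, clauses):
    # assignment maps variable index -> bool
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

# The semantic side can be pictured as the set of satisfying assignments;
# enumerating it exhaustively is the exponential blow-up to be avoided.
pattern = [a for a in (dict(zip(range(1, n_vars + 1), bits))
                       for bits in product([False, True], repeat=n_vars))
           if satisfies(a, clauses)]
print(len(pattern), "of", 2 ** n_vars, "assignments satisfy the clause-set")
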
Category: Artificial Intelligence

[346] viXra:1709.0019 [pdf] submitted on 2017-09-02 07:01:40

Understanding Robots

Authors: George Rajna
Comments: 46 Pages.

ComText allows robots to understand contextual commands such as, " Pick up the box I put down. " [27] McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[345] viXra:1709.0007 [pdf] submitted on 2017-09-01 10:31:26

Computing, Cognition and Information Compression

Authors: J Gerard Wolff
Comments: 21 Pages.

This article develops the idea that the storage and processing of information in computers and in brains may often be understood as information compression. The article first reviews what is meant by information and, in particular, what is meant by redundancy, a concept which is fundamental in all methods for information compression. Principles of information compression are described. The major part of the article describes how these principles may be seen in a range of observations and ideas in computing and cognition: the phenomena of adaptation and inhibition in nervous systems; 'neural' computing; the creation and recognition of 'objects' and 'classes' in perception and cognition; stereoscopic vision and random-dot stereograms; the organisation of natural languages; the organisation of grammars; the organisation of functional, structured, logic and object-oriented computer programs; the application and de-referencing of identifiers in computing; retrieval of information from databases; access and retrieval of information from computer memory; logical deduction and resolution theorem proving; inductive reasoning and probabilistic inference; parsing; normalisation of databases.
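
As a toy illustration of the article's central theme, that redundancy (repetition) is what makes compression possible, the short run-length-encoding sketch below removes repeated symbols from a string. The scheme and the example string are our own minimal choice and are not examples discussed in the article.

def run_length_encode(s):
    # Replace each run of identical symbols with a (symbol, count) pair.
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

print(run_length_encode("aaaabbbcca"))   # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
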
Category: Artificial Intelligence

[344] viXra:1709.0004 [pdf] submitted on 2017-09-01 06:49:15

Simple Chess Puzzle

Authors: George Rajna
Comments: 26 Pages.

Researchers at the University of St Andrews have thrown down the gauntlet to computer programmers to find a solution to a "simple" chess puzzle which could, in fact, take thousands of years to solve and net a $1m prize. [11] It appears that we are approaching a unique time in the history of man and science where empirical measures and deductive reasoning can actually inform us spiritually. Integrated Information Theory (IIT)-put forth by neuroscientists Giulio Tononi and Christof Koch-is a new framework that describes a way to experimentally measure the extent to which a system is conscious. [10] There is also connection between statistical physics and evolutionary biology, since the arrow of time is working in the biological evolution also. From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. [8] This paper contains the review of quantum entanglement investigations in living systems, and in the quantum mechanically modeled photoactive prebiotic kernel systems. [7] The human body is a constant flux of thousands of chemical/biological interactions and processes connecting molecules, cells, organs, and fluids, throughout the brain, body, and nervous system. Up until recently it was thought that all these interactions operated in a linear sequence, passing on information much like a runner passing the baton to the next runner. However, the latest findings in quantum biology and biophysics have discovered that there is in fact a tremendous degree of coherence within all living systems. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to understand the Quantum Biology.
Category: Artificial Intelligence

[343] viXra:1708.0482 [pdf] submitted on 2017-08-31 14:55:22

AI Analyzes Gravitational Lenses

Authors: George Rajna
Comments: 25 Pages.

Researchers from the Department of Energy's SLAC National Accelerator Laboratory and Stanford University have for the first time shown that neural networks - a form of artificial intelligence - can accurately analyze the complex distortions in spacetime known as gravitational lenses 10 million times faster than traditional methods. [16] By listening to the acoustic signal emitted by a laboratory-created earthquake, a computer science approach using machine learning can predict the time remaining before the fault fails. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that - surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]
Category: Artificial Intelligence

[342] viXra:1708.0471 [pdf] submitted on 2017-08-30 12:48:09

Earthquake Machine Learning

Authors: George Rajna
Comments: 23 Pages.

By listening to the acoustic signal emitted by a laboratory-created earthquake, a computer science approach using machine learning can predict the time remaining before the fault fails. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]
Category: Artificial Intelligence

[341] viXra:1708.0414 [pdf] submitted on 2017-08-28 08:55:08

Artificial Intelligence Cyber Attacks

Authors: George Rajna
Comments: 24 Pages.

The next major cyberattack could involve artificial intelligence systems. [13] Steve was a security robot employed by the Washington Harbour center in the Georgetown district of the US capital. [12] Combining the intuition of humans with the impartiality of computers could improve decision-making for organizations, eventually leading to lower costs and better profits, according to a team of researchers. [11] A team researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[340] viXra:1708.0381 [pdf] submitted on 2017-08-27 07:40:31

Security Robots

Authors: George Rajna
Comments: 22 Pages.

Combining the intuition of humans with the impartiality of computers could improve decision-making for organizations, eventually leading to lower costs and better profits, according to a team of researchers. [11] A team researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[339] viXra:1708.0341 [pdf] submitted on 2017-08-24 22:13:50

Routing Games Over Time with Fifo Policy

Authors: Anisse Ismaili
Comments: 16 Pages. Submission to conference WINE 2017 on August 2nd.

We study atomic routing games where every agent travels both along its decided edges and through time. The agents arriving on an edge are first lined up in a first-in-first-out queue and may wait: an edge is associated with a capacity, which defines how many agents per time step can pop from the queue's head and enter the edge, to transit for a fixed delay. We show that the best-response optimization problem is not approximable, and that deciding the existence of a Nash equilibrium is complete for the second level of the polynomial hierarchy. Then, we drop the rationality assumption, introduce a behavioral concept based on GPS navigation, and study its worst-case efficiency ratio relative to coordination.
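
A minimal simulator of the edge dynamics described here (FIFO queue, per-step capacity, fixed transit delay) may help fix the model. The function name, the list-of-arrivals input format and the three-agent example are assumptions made for illustration; the sketch reproduces only the queueing rule, not the paper's complexity or equilibrium results.

from collections import deque

def simulate_edge(arrivals, capacity, delay):
    # arrivals[t]: agents reaching the edge's tail at time step t.
    # At each step at most `capacity` agents pop from the queue head and
    # then transit for exactly `delay` steps. Returns {agent: exit_time}.
    queue, exits, t = deque(), {}, 0
    horizon = len(arrivals)
    while t < horizon or queue:
        if t < horizon:
            queue.extend(arrivals[t])        # line up in FIFO order
        for _ in range(min(capacity, len(queue))):
            agent = queue.popleft()          # enter the edge
            exits[agent] = t + delay         # fixed transit delay
        t += 1
    return exits

# Three agents arrive together on an edge with capacity 1 and delay 2:
print(simulate_edge([["a", "b", "c"]], capacity=1, delay=2))
# {'a': 2, 'b': 3, 'c': 4}: waiting time grows with queue position.
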
Category: Artificial Intelligence

[338] viXra:1708.0331 [pdf] submitted on 2017-08-24 13:23:16

Computers Improve Decision Making

Authors: George Rajna
Comments: 20 Pages.

Combining the intuition of humans with the impartiality of computers could improve decision-making for organizations, eventually leading to lower costs and better profits, according to a team of researchers. [11] A team researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[337] viXra:1708.0246 [pdf] submitted on 2017-08-21 10:02:18

AI that can Understand Us

Authors: George Rajna
Comments: 47 Pages.

Computing pioneer Alan Turing's most pertinent thoughts on machine intelligence come from a neglected paragraph of the same paper that first proposed his famous test for whether a computer could be considered as smart as a human. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18]
Category: Artificial Intelligence

[336] viXra:1708.0239 [pdf] submitted on 2017-08-20 09:31:39

Artificial Intelligence Revolution

Authors: George Rajna
Comments: 45 Pages.

Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes—he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[335] viXra:1708.0238 [pdf] submitted on 2017-08-19 14:27:54

Machine-Learning Device

Authors: George Rajna
Comments: 24 Pages.

In what could be a small step for science potentially leading to a breakthrough, an engineer at Washington University in St. Louis has taken steps toward using nanocrystal networks for artificial intelligence applications. [16] Physicists have applied the ability of machine learning algorithms to learn from experience to one of the biggest challenges currently facing quantum computing: quantum error correction, which is used to design noise-tolerant quantum computing protocols. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10]
Category: Artificial Intelligence

[334] viXra:1708.0176 [pdf] submitted on 2017-08-16 01:32:34

Machine Learning Quantum Error Correction

Authors: George Rajna
Comments: 23 Pages.

Physicists have applied the ability of machine learning algorithms to learn from experience to one of the biggest challenges currently facing quantum computing: quantum error correction, which is used to design noise-tolerant quantum computing protocols. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that - surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch - the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." [8]
Category: Artificial Intelligence

[333] viXra:1708.0167 [pdf] submitted on 2017-08-15 06:17:08

Organismic Learning

Authors: George Rajna
Comments: 24 Pages.

A new computing technology called "organismoids" mimics some aspects of human thought by learning how to forget unimportant memories while retaining more vital ones. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]
Category: Artificial Intelligence

[332] viXra:1708.0131 [pdf] submitted on 2017-08-11 13:16:12

Adaptive Plant Propagation Algorithm for Solving Economic Load Dispatch Problem

Authors: Sayan Nag
Comments: 11 Pages.

Optimization problems in design engineering are complex by nature, often because of the involvement of critical objective functions accompanied by a number of rigid constraints associated with the products involved. One such problem is the Economic Load Dispatch (ED) problem, which focuses on minimizing the fuel cost while satisfying system constraints. Classical optimization algorithms are insufficient and inefficient for the ED problem, which involves highly nonlinear and non-convex functions both in the objective and in the constraints. This led to the development of metaheuristic optimization approaches, which can solve the ED problem reasonably efficiently. This paper presents a novel, robust, plant-intelligence-based Adaptive Plant Propagation Algorithm (APPA) which is used to solve the classical ED problem. The application of the proposed method to the 3-generator and 6-generator systems shows the efficiency and robustness of the proposed algorithm. A comparative study with another state-of-the-art algorithm (APSO) demonstrates the quality of the solution achieved by the proposed method along with the convergence characteristics of the proposed approach.
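
To show the kind of computation involved, the Python sketch below sets up a quadratic fuel-cost model for a toy 3-generator system and runs one generation of a basic plant propagation step (fitter plants send more, shorter runners). The cost coefficients, generator limits, demand figure and runner rules are illustrative placeholders; they are not the paper's adaptive variant or its test-case data.

import numpy as np

rng = np.random.default_rng(0)

# Quadratic fuel-cost model C_i(P) = a_i*P^2 + b_i*P + c_i (assumed numbers):
a = np.array([0.008, 0.009, 0.007])
b = np.array([7.0, 6.3, 6.8])
c = np.array([200.0, 180.0, 140.0])
p_min, p_max, demand = 50.0, 250.0, 450.0

def cost(P):
    return float(np.sum(a * P**2 + b * P + c))

def repair(P):
    # Crude feasibility repair: clip to limits, rescale toward the demand.
    P = np.clip(P, p_min, p_max)
    return np.clip(P * demand / P.sum(), p_min, p_max)

pop = np.array([repair(rng.uniform(p_min, p_max, 3)) for _ in range(10)])
f = np.array([cost(P) for P in pop])
fit = (f.max() - f) / (f.max() - f.min() + 1e-12)     # normalized fitness in [0, 1]
offspring = []
for P, Ni in zip(pop, fit):
    n_runners = max(1, int(np.ceil(5 * Ni)))          # better plants: more runners
    step = (1.0 - Ni) * (p_max - p_min)               # better plants: shorter runners
    for _ in range(n_runners):
        offspring.append(repair(P + rng.uniform(-step, step, 3)))
print("best cost this generation:", min(cost(P) for P in offspring))
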
Category: Artificial Intelligence

[331] viXra:1708.0065 [pdf] submitted on 2017-08-06 17:11:22

Meta Mass Function

Authors: Yong Deng
Comments: 11 Pages.

In this paper, a meta mass function (MMF) is presented. A new evidence theory with complex numbers is developed. In contrast to existing evidence theory, the mass function in the new complex evidence theory is modelled with complex numbers and is named the meta mass function. Classical evidence theory is the special case in which the mass function degenerates from a complex number to a real number.
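
A minimal sketch of the representation only, assuming a two-element frame of discernment: complex-valued masses are assigned to focal elements, and setting every imaginary part to zero recovers a classical basic probability assignment. The specific numbers and the omission of any combination or normalization rule are our simplifications, not definitions taken from the paper.

# Frame of discernment {A, B}; masses are complex numbers (illustrative values).
m_complex = {frozenset({"A"}): 0.6 + 0.2j,
             frozenset({"B"}): 0.3 - 0.2j,
             frozenset({"A", "B"}): 0.1 + 0.0j}

print(sum(m_complex.values()))           # (1+0j): masses still sum to one

# With zero imaginary parts the assignment degenerates to a classical
# Dempster-Shafer basic probability assignment:
m_classical = {k: v.real for k, v in m_complex.items()}
print(sum(m_classical.values()))         # 1.0
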
Category: Artificial Intelligence

[330] viXra:1708.0038 [pdf] submitted on 2017-08-04 04:30:39

Holistic Unique Clustering. {File Closing Version 4} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research technical note, the author presents a novel method to find all possible clusters given a set of M points in N-dimensional space.
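
Since the note does not spell out its construction here, the recursive Python sketch below simply enumerates every possible clustering (set partition) of a small point set, which is one literal reading of "all possible clusters". The function is a generic enumeration, not the author's method, and the count grows with the Bell numbers, so it is feasible only for very small M.

def partitions(points):
    # Yield every set partition of the list `points`.
    if not points:
        yield []
        return
    head, rest = points[0], points[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[head] + part[i]] + part[i + 1:]
        yield [[head]] + part

print(list(partitions(["p1", "p2", "p3"])))   # 5 clusterings (Bell(3) = 5)
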
Category: Artificial Intelligence

[329] viXra:1708.0030 [pdf] submitted on 2017-08-03 10:30:43

Machine Learning for Discovery

Authors: George Rajna
Comments: 22 Pages.

Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9] New research demonstrates that particles at the quantum level can in fact be seen as behaving something like billiard balls rolling along a table, and not merely as the probabilistic smears that the standard interpretation of quantum mechanics suggests. But there's a catch-the tracks the particles follow do not always behave as one would expect from "realistic" trajectories, but often in a fashion that has been termed "surrealistic." [8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. 
Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. [6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.
Category: Artificial Intelligence

[328] viXra:1708.0029 [pdf] submitted on 2017-08-03 10:54:39

Future Search Engines

Authors: George Rajna
Comments: 25 Pages.

The outcome is the result of two powerful forces in the evolution of information retrieval: artificial intelligence—especially natural language processing—and crowdsourcing. [15] Who is the better experimentalist, a human or a robot? When it comes to exploring synthetic and crystallization conditions for inorganic gigantic molecules, actively learning machines are clearly ahead, as demonstrated by British Scientists in an experiment with polyoxometalates published in the journal Angewandte Chemie. [14] Machine learning algorithms are designed to improve as they encounter more data, making them a versatile technology for understanding large sets of photos such as those accessible from Google Images. Elizabeth Holm, professor of materials science and engineering at Carnegie Mellon University, is leveraging this technology to better understand the enormous number of research images accumulated in the field of materials science. [13] With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. [12] The artificial intelligence system's ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA. [11] Quantum physicist Mario Krenn and his colleagues in the group of Anton Zeilinger from the Faculty of Physics at the University of Vienna and the Austrian Academy of Sciences have developed an algorithm which designs new useful quantum experiments. As the computer does not rely on human intuition, it finds novel unfamiliar solutions. [10] Researchers at the University of Chicago's Institute for Molecular Engineering and the University of Konstanz have demonstrated the ability to generate a quantum logic operation, or rotation of the qubit, that-surprisingly—is intrinsically resilient to noise as well as to variations in the strength or duration of the control. Their achievement is based on a geometric concept known as the Berry phase and is implemented through entirely optical means within a single electronic spin in diamond. [9]
Category: Artificial Intelligence

[327] viXra:1708.0025 [pdf] submitted on 2017-08-02 23:22:10

Similarity Measure Of Any Two Vectors Of Same Size

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method of finding a Generalized Similarity Measure between two Vectors of the same size.
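The measure itself is not reproduced in this listing. For reference only, here is a minimal sketch of a standard normalized similarity for two equal-size vectors (plain cosine similarity, not necessarily the author's measure); the function name is illustrative.

```python
import numpy as np

def cosine_similarity(u, v):
    """Standard cosine similarity between two equal-length vectors;
    returns a value in [-1, 1] (1 = same direction, 0 = orthogonal)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    if u.shape != v.shape:
        raise ValueError("vectors must have the same size")
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(np.dot(u, v) / denom) if denom else 0.0

print(cosine_similarity([1, 2, 3], [2, 4, 6]))  # 1.0 for parallel vectors
```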
Category: Artificial Intelligence

[326] viXra:1708.0019 [pdf] submitted on 2017-08-03 06:42:09

Holistic Unique Clustering. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method to find all Possible Clusters given a set of M points in N Space.
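The procedure is not spelled out in this listing, so the sketch below only illustrates the combinatorial idea behind "all possible clusters": every non-empty subset of the M points is treated as a candidate cluster. This brute-force enumeration is feasible only for small M; the function and variable names are illustrative, not the author's.

```python
from itertools import combinations

def all_possible_clusters(points):
    """Enumerate every non-empty subset of the M input points as a
    candidate cluster. Brute-force illustration only: the number of
    subsets grows as 2**M - 1, so this is practical only for small M."""
    clusters = []
    for size in range(1, len(points) + 1):
        for subset in combinations(range(len(points)), size):
            clusters.append(subset)  # each cluster stored as a tuple of point indices
    return clusters

# Example: 4 points in 2-dimensional space (N = 2)
points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9)]
print(len(all_possible_clusters(points)))  # 2**4 - 1 = 15 candidate clusters
```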
Category: Artificial Intelligence

[325] viXra:1708.0010 [pdf] submitted on 2017-08-02 04:36:45

A Generalized Similarity Measure {File Closing Version 3} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method of finding a Generalized Similarity Measure between two Vectors or Matrices or Higher Dimensional Data of different sizes.
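The listing does not say how differing sizes are reconciled. As one hedged illustration, and not necessarily the author's scheme, the two arrays can be flattened and zero-padded to a common length before applying cosine similarity.

```python
import numpy as np

def padded_cosine_similarity(a, b):
    """One simple way to compare arrays of different sizes (not the
    author's published method): flatten both, zero-pad the shorter to
    the longer length, then take the cosine similarity of the results."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    n = max(a.size, b.size)
    a = np.pad(a, (0, n - a.size))   # pad with zeros at the end
    b = np.pad(b, (0, n - b.size))
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

# Works for a vector vs. a matrix because both are flattened first
print(padded_cosine_similarity([1, 2, 3], [[1, 2], [3, 4]]))
```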
Category: Artificial Intelligence

[324] viXra:1707.0394 [pdf] submitted on 2017-07-30 02:17:51

The Recursive Future Equation And The Recursive Past Equation Based On The Ananda-Damayanthi Normalized Similarity Measure. {File Closing Version-2}

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research Technical Note the author has presented a Recursive Future Equation and a Recursive Past Equation to find the one-step Future Element or one-step Past Element of a given Time Series Data Set.
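The equations themselves are not reproduced in this listing. Below is a minimal sketch of the general idea, assuming a similarity-weighted forecast: the next element is estimated as a weighted average of the values that followed earlier windows resembling the most recent window. This is an illustration, not the author's exact equation, and all names are hypothetical.

```python
import numpy as np

def one_step_forecast(series, window=3):
    """Hedged sketch, not the author's equation: predict the next element
    as a similarity-weighted average of the values that followed each
    earlier window resembling the most recent window of the series."""
    x = np.asarray(series, dtype=float)
    recent = x[-window:]
    weights, candidates = [], []
    for start in range(len(x) - window):
        past = x[start:start + window]
        denom = np.linalg.norm(past) * np.linalg.norm(recent)
        sim = float(np.dot(past, recent) / denom) if denom else 0.0
        weights.append(max(sim, 0.0))          # keep only non-negative similarity
        candidates.append(x[start + window])   # value that followed this window
    weights = np.asarray(weights)
    if weights.sum() == 0:
        return float(x[-1])                    # fall back to the last observed value
    return float(np.dot(weights, candidates) / weights.sum())

# Blends the values that followed windows similar to the latest one
print(one_step_forecast([1, 2, 3, 4, 5, 6, 7, 8]))
```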
Category: Artificial Intelligence

[323] viXra:1707.0389 [pdf] submitted on 2017-07-29 07:23:01

Machine Learning and Deep Learning

Authors: George Rajna
Comments: 27 Pages.

Deep learning and machine learning both offer ways to train models and classify data. This article compares the two and it offers ways to help you decide which one to use. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. 
The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[322] viXra:1707.0372 [pdf] submitted on 2017-07-28 06:35:21

The Recursive Future Equation And The Recursive Past Equation Based On The Ananda-Damayanthi Normalized Similarity Measure. {Future}

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a Recursive Future Equation and a Recursive Past Equation to find the one-step Future Element or one-step Past Element of a given Time Series Data Set.
Category: Artificial Intelligence

[321] viXra:1707.0268 [pdf] submitted on 2017-07-20 02:20:32

Finding The Optimal Number ‘K’ In The K-Means Algorithm

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method to find the Optimal Number ‘K’ in the K-Means Algorithm.
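The note's own criterion is not given in this listing. A common baseline, shown here purely for comparison and explicitly not the author's method, is to choose K by maximizing the silhouette score; this sketch assumes scikit-learn and NumPy are available.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_k(X, k_range=range(2, 10)):
    """Pick the K whose K-Means labelling maximizes the mean silhouette score."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = silhouette_score(X, labels)   # mean silhouette over all points
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in (0, 5, 10)])  # three well-separated blobs
k, _ = best_k(X)
print(k)  # expected to be 3 for this synthetic data
```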
Category: Artificial Intelligence

[320] viXra:1707.0255 [pdf] submitted on 2017-07-19 05:05:13

Humanize Artificial Intelligent

Authors: George Rajna
Comments: 41 Pages.

Google recently launched PAIR, an acronym of People + AI Research, in an attempt to increase the utility of AI and improve human to AI interaction. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15]
Category: Artificial Intelligence

[319] viXra:1707.0254 [pdf] submitted on 2017-07-19 06:01:57

Using the Appropriate Norm In The K-Nearest Neighbours Analysis. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research Technical Note, the author has detailed a novel technique of finding the distance metric to be used for any given set of points.
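The selection rule itself is not reproduced here. As a hedged illustration of why the choice of metric matters, the sketch below ranks neighbours under a configurable Minkowski p-norm (p = 1 for L1/Manhattan, p = 2 for L2/Euclidean), so the effect of switching norms can be inspected directly; the function name is illustrative.

```python
import numpy as np

def k_nearest(points, query, k=3, p=2):
    """Return the indices of the k nearest points to `query` under the
    Minkowski p-norm (p=1 gives L1/Manhattan, p=2 gives L2/Euclidean)."""
    pts = np.asarray(points, dtype=float)
    q = np.asarray(query, dtype=float)
    dists = np.sum(np.abs(pts - q) ** p, axis=1) ** (1.0 / p)
    return np.argsort(dists)[:k]

pts = [(0, 0), (1, 5), (4, 1), (3, 3), (6, 6)]
print(k_nearest(pts, (2, 2), k=3, p=1))  # neighbour ranking under the L1 norm
print(k_nearest(pts, (2, 2), k=3, p=2))  # may differ under the L2 norm
```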
Category: Artificial Intelligence

[318] viXra:1707.0252 [pdf] submitted on 2017-07-19 06:40:54

A Generalized Similarity Measure {File Closing Version 2} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method of finding a Generalized Similarity Measure between two Vectors or Matrices or Higher Dimensional Data of different sizes.
Category: Artificial Intelligence

[317] viXra:1707.0230 [pdf] submitted on 2017-07-17 05:49:20

A Generalized Similarity Measure ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method of finding a Generalized Similarity Measure between two Vectors or Matrices or Higher Dimensional Data of different sizes.
Category: Artificial Intelligence

[316] viXra:1707.0225 [pdf] submitted on 2017-07-17 01:50:21

Multi Class Classification Using Holistic Non-Unique Clustering {File Closing Version 8}. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research Technical Note the author has presented a novel method to find all Possible Clusters given a set of M points in N Space.
Category: Artificial Intelligence

[315] viXra:1707.0200 [pdf] submitted on 2017-07-14 04:55:42

Multi Class Classification Using Holistic Non-Unique Clustering ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research Technical Note the author has presented a novel method to find all Possible Clusters given a set of M points in N Space.
Category: Artificial Intelligence

[314] viXra:1707.0198 [pdf] submitted on 2017-07-14 05:30:10

Multi Class Classification Using Holistic Non-Unique Clustering. {File Closing Version 7} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research Technical Note the author has presented a novel method to find all Possible Clusters given a set of M points in N Space.
Category: Artificial Intelligence

[313] viXra:1707.0179 [pdf] submitted on 2017-07-13 01:20:46

Modification To The Scaling Aspect In Gower’s Scheme Of Calculating Similarity Coefficient

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research Technical Note the author has presented a tiny modification to the Numeric Variables Scaling Aspect in Gower's Scheme of calculating the Similarity Coefficient.
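The modification itself is not shown in this listing. For context, here is a minimal sketch of the standard Gower scheme it modifies: each numeric variable contributes 1 minus the absolute difference scaled by that variable's range, each categorical variable contributes an exact-match score, and the final similarity is the average contribution. Function and argument names are illustrative.

```python
import numpy as np

def gower_similarity(a, b, ranges, is_numeric):
    """Standard Gower similarity (the note's modification is not shown here).
    Numeric variables contribute 1 - |a_k - b_k| / range_k; categorical
    variables contribute 1 for an exact match and 0 otherwise; the final
    score is the average contribution over all variables."""
    scores = []
    for ak, bk, rk, numeric in zip(a, b, ranges, is_numeric):
        if numeric:
            scores.append(1.0 - abs(ak - bk) / rk if rk else 1.0)
        else:
            scores.append(1.0 if ak == bk else 0.0)
    return float(np.mean(scores))

# Two records of (age, height_cm, colour); ranges apply to numeric variables only
print(gower_similarity((30, 170, "red"), (40, 180, "red"),
                       ranges=(50, 60, None), is_numeric=(True, True, False)))
```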
Category: Artificial Intelligence

[312] viXra:1707.0178 [pdf] submitted on 2017-07-13 02:34:27

Recursive Future Average Of A Time Series Data Based On Cosine Similarity

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a Recursive Future Average of Time Series Data Based on Cosine Similarity.
Category: Artificial Intelligence

[311] viXra:1707.0166 [pdf] submitted on 2017-07-12 01:12:07

Theoretical Materials

Authors: George Rajna
Comments: 49 Pages.

University researchers have created the first general-purpose method for using machine learning to predict the properties of new metals, ceramics and other crystalline materials and to find new uses for existing materials, a discovery that could save countless hours wasted in the trial-and-error process of creating new and better materials. [28] As machine learning breakthroughs abound, researchers look to democratize benefits. [27] Machine-learning system spontaneously reproduces aspects of human neurology. [26] Surviving breast cancer changed the course of Regina Barzilay's research. The experience showed her, in stark relief, that oncologists and their patients lack tools for data-driven decision making. [25] New research, led by the University of Southampton, has demonstrated that a nanoscale device, called a memristor, could be used to power artificial systems that can mimic the human brain. [24] Scientists at Helmholtz-Zentrum Dresden-Rossendorf conducted electricity through DNA-based nanowires by placing gold-plated nanoparticles on them. In this way it could become possible to develop circuits based on genetic material. [23] Researchers at the Nanoscale Transport Physics Laboratory from the School of Physics at the University of the Witwatersrand have found a technique to improve carbon superlattices for quantum electronic device applications. [22] The researchers have found that these previously underestimated interactions can play a significant role in preventing heat dissipation in microelectronic devices. [21] LCLS works like an extraordinary strobe light: Its ultrabright X-rays take snapshots of materials with atomic resolution and capture motions as fast as a few femtoseconds, or millionths of a billionth of a second. For comparison, one femtosecond is to a second what seven minutes is to the age of the universe. [20] A 'nonlinear' effect that seemingly turns materials transparent is seen for the first time in X-rays at SLAC's LCLS. [19]
Category: Artificial Intelligence

[310] viXra:1707.0165 [pdf] submitted on 2017-07-12 01:25:24

Multi Class Classification Using Holistic Non-Unique Clustering

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method to find all Possible Clusters given a set of M points in N Space.
Category: Artificial Intelligence

[309] viXra:1707.0145 [pdf] submitted on 2017-07-11 02:29:17

A Novel Type Of Time Series Type Forecasting

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel type of time series forecasting.
Category: Artificial Intelligence

[308] viXra:1707.0142 [pdf] submitted on 2017-07-11 04:48:06

A Novel Type Of Time Series Type Forecasting. {File Closing Version 1}

Authors: Ramesh Chandra Bagadi
Comments: 3 Pages.

In this research investigation, the author has detailed a novel type of time series forecasting.
Category: Artificial Intelligence

[307] viXra:1707.0102 [pdf] submitted on 2017-07-07 01:23:03

Holistic Non-Unique Clustering. {File Closing Version 1} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method to find all Possible Clusters given a set of points in N Space.
Category: Artificial Intelligence

[306] viXra:1707.0098 [pdf] submitted on 2017-07-07 01:44:57

Holistic Non-Unique Clustering. {File Closing Version 2} ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 2 Pages.

In this research Technical Note the author has presented a novel method to find all Possible Clusters given a set of M points in N Space.
Category: Artificial Intelligence

[305] viXra:1707.0071 [pdf] submitted on 2017-07-05 08:51:43

Seeing All The Clusters

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this technical note the author has presented a novel method to find all the clusters (overlapping and non-unique) formed by a given set of points.
Category: Artificial Intelligence

[304] viXra:1707.0070 [pdf] submitted on 2017-07-05 08:58:23

Seeing All Clusters Formed By A Given Set Of Points (File Closing Version) ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this research investigation, the author has presented a novel technique to find all Clusters that may be overlapping to some extent.
Category: Artificial Intelligence

[303] viXra:1707.0061 [pdf] submitted on 2017-07-05 06:54:24

Holistic Non-Unique Clustering. ISSN 1751-3030

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this technical note, the author has presented a novel scheme of Holistic Non-Unique Clustering.
Category: Artificial Intelligence

[302] viXra:1707.0043 [pdf] submitted on 2017-07-03 22:47:02

Using the Appropriate Norm In The K-Nearest Neighbours Analysis

Authors: Ramesh Chandra Bagadi
Comments: 1 Page.

In this Technical Note the author has presented an alternative to the use of the L2 Norm for Nearness Analysis in the K-Nearest Neighbours Algorithm.
Category: Artificial Intelligence

[301] viXra:1707.0002 [pdf] submitted on 2017-07-01 04:24:01

Inner Workings of Neural Networks

Authors: George Rajna
Comments: 33 Pages.

Neural networks learn to perform computational tasks by analyzing large sets of training data. But once they've been trained, even their designers rarely have any idea what data elements they're processing. [20] Researchers from Disney Research, Pixar Animation Studios, and the University of California, Santa Barbara have developed a new technology based on artificial intelligence (AI) and deep learning that eliminates this noise and thereby enables production-quality rendering at much faster speeds. [19] Now, one group reports in ACS Nano that they have developed an artificial synapse capable of simulating a fundamental function of our nervous system— the release of inhibitory and stimulatory signals from the same "pre-synaptic" terminal. [18] Researchers from France and the University of Arkansas have created an artificial synapse capable of autonomous learning, a component of artificial intelligence. [17] Intelligent machines of the future will help restore memory, mind your children, fetch your coffee and even care for aging parents. [16] Unlike experimental neuroscientists who deal with real-life neurons, computational neuroscientists use model simulations to investigate how the brain functions. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip
Category: Artificial Intelligence

[300] viXra:1706.0570 [pdf] submitted on 2017-06-30 12:07:02

Convolutional Neural Network

Authors: George Rajna
Comments: 31 Pages.

Researchers from Disney Research, Pixar Animation Studios, and the University of California, Santa Barbara have developed a new technology based on artificial intelligence (AI) and deep learning that eliminates this noise and thereby enables production-quality rendering at much faster speeds. [19] Now, one group reports in ACS Nano that they have developed an artificial synapse capable of simulating a fundamental function of our nervous system— the release of inhibitory and stimulatory signals from the same "pre-synaptic" terminal. [18] Researchers from France and the University of Arkansas have created an artificial synapse capable of autonomous learning, a component of artificial intelligence. [17] Intelligent machines of the future will help restore memory, mind your children, fetch your coffee and even care for aging parents. [16] Unlike experimental neuroscientists who deal with real-life neurons, computational neuroscientists use model simulations to investigate how the brain functions. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11]
Category: Artificial Intelligence

[299] viXra:1706.0523 [pdf] submitted on 2017-06-28 09:17:30

Artificial Synapse for AI

Authors: George Rajna
Comments: 30 Pages.

Now, one group reports in ACS Nano that they have developed an artificial synapse capable of simulating a fundamental function of our nervous system— the release of inhibitory and stimulatory signals from the same "pre-synaptic" terminal. [18] Researchers from France and the University of Arkansas have created an artificial synapse capable of autonomous learning, a component of artificial intelligence. [17] Intelligent machines of the future will help restore memory, mind your children, fetch your coffee and even care for aging parents. [16] Unlike experimental neuroscientists who deal with real-life neurons, computational neuroscientists use model simulations to investigate how the brain functions. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain.
Category: Artificial Intelligence

[298] viXra:1706.0469 [pdf] submitted on 2017-06-25 08:35:27

Quantum Machine Learning Computer Hybrids

Authors: George Rajna
Comments: 28 Pages.

Creative Destruction Lab, a technology program affiliated with the University of Toronto's Rotman School of Management in Toronto, Canada hopes to nurture numerous quantum learning machine start-ups in only a few years. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of " quantum artificial intelligence ". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries-how a sliced up flatworm can regenerate into new organisms-has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. 
The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[297] viXra:1706.0468 [pdf] submitted on 2017-06-25 10:31:28

Weak AI, Strong AI and Superintelligence

Authors: George Rajna
Comments: 29 Pages.

Should we fear artificial intelligence and all it will bring us? Not so long as we remember to make sure to build artificial emotional intelligence into the technology, according to the website The School of Life. [16] Creative Destruction Lab, a technology program affiliated with the University of Toronto’s Rotman School of Management in Toronto, Canada hopes to nurture numerous quantum learning machine start-ups in only a few years. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of “quantum artificial intelligence”. Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries - how a sliced up flatworm can regenerate into new organisms - has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. 
The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[296] viXra:1706.0462 [pdf] submitted on 2017-06-25 02:34:26

Brain-Inspired Supercomputing

Authors: George Rajna
Comments: 48 Pages.

IBM and the Air Force Research Laboratory are working to develop an artificial intelligence-based supercomputer with a neural network design that is inspired by the human brain. [28] Researchers have built a new type of "neuron transistor"—a transistor that behaves like a neuron in a living brain. [27] Research team led by Professor Hoi-Jun Yoo of the Department of Electrical Engineering has developed a semiconductor chip, CNNP (CNN Processor), that runs AI algorithms with ultra-low power, and K-Eye, a face recognition system using CNNP. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18]
Category: Artificial Intelligence

[295] viXra:1706.0433 [pdf] submitted on 2017-06-23 06:57:24

AI and Robots can Help Patients

Authors: George Rajna
Comments: 45 Pages.

McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[294] viXra:1706.0402 [pdf] submitted on 2017-06-20 10:02:53

Neuron Transistor

Authors: George Rajna
Comments: 45 Pages.

Researchers have built a new type of "neuron transistor"—a transistor that behaves like a neuron in a living brain. [27] Research team led by Professor Hoi-Jun Yoo of the Department of Electrical Engineering has developed a semiconductor chip, CNNP (CNN Processor), that runs AI algorithms with ultra-low power, and K-Eye, a face recognition system using CNNP. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[293] viXra:1706.0389 [pdf] submitted on 2017-06-19 04:15:18

Artificial Intelligence Health Revolution

Authors: George Rajna
Comments: 43 Pages.

Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15]
Category: Artificial Intelligence

[292] viXra:1706.0387 [pdf] submitted on 2017-06-19 04:54:30

K-Eye Face Recognition System

Authors: George Rajna
Comments: 45 Pages.

A research team led by Professor Hoi-Jun Yoo of the Department of Electrical Engineering has developed a semiconductor chip, CNNP (CNN Processor), that runs AI algorithms with ultra-low power, and K-Eye, a face recognition system using CNNP. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17]
Category: Artificial Intelligence

[291] viXra:1706.0293 [pdf] submitted on 2017-06-16 06:05:08

Computers Reason Like Humans

Authors: George Rajna
Comments: 40 Pages.

Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning—and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14]
Category: Artificial Intelligence

[290] viXra:1706.0235 [pdf] submitted on 2017-06-13 02:02:47

Deep Learning with Light

Authors: George Rajna
Comments: 37 Pages.

Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members,
Category: Artificial Intelligence

[289] viXra:1706.0207 [pdf] submitted on 2017-06-13 11:45:22

Neural Networks and Quantum Entanglement

Authors: George Rajna
Comments: 39 Pages.

Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14]
Category: Artificial Intelligence

[288] viXra:1706.0198 [pdf] submitted on 2017-06-14 08:06:29

Robot Write and Play its own Music

Authors: George Rajna
Comments: 38 Pages.

A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology’s impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. 
[10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[287] viXra:1706.0144 [pdf] submitted on 2017-06-11 07:47:04

Classical and Quantum Machine Learning

Authors: George Rajna
Comments: 35 Pages.

Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts—a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13]
Category: Artificial Intelligence

[286] viXra:1705.0362 [pdf] submitted on 2017-05-25 03:53:34

Artificial Intelligence by Quantum Computing

Authors: George Rajna
Comments: 34 Pages.

We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20] It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12]
Category: Artificial Intelligence

[285] viXra:1705.0340 [pdf] submitted on 2017-05-22 19:18:05

Verifying the Validity of a Conformant Plan is co-NP-Complete

Authors: Alban Grastien, Enrico Scala
Comments: 3 Pages.

The purpose of this document is to show the complexity of verifying the validity of a deterministic conformant plan. We concentrate on a simple version of the conformant planning problem (i.e., one where there is no precondition on the actions and where all conditions are defined as sets of positive or negative facts) in order to show that the complexity does not come from solving a single such formula.
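
A brute-force check makes the complexity claim concrete: a conformant plan is valid only if it reaches the goal from every initial state consistent with the incompletely specified initial condition, so a single bad initial state refutes it (the co-NP flavour). The sketch below is a minimal illustration under assumed representations (facts as strings, an action as an (adds, deletes) pair with no preconditions, goals as sets of positive and negative facts); it is not the authors' formalism.

from itertools import product

def apply_action(state, action):
    # Unconditional action given as (adds, deletes); state is a set of true facts.
    adds, deletes = action
    return (state - deletes) | adds

def valid_conformant_plan(unknown_facts, known_facts, plan, goal_pos, goal_neg):
    # The plan must succeed from EVERY initial state consistent with what is known;
    # finding one failing initial state is enough to refute validity.
    for bits in product([False, True], repeat=len(unknown_facts)):
        state = set(known_facts) | {f for f, b in zip(unknown_facts, bits) if b}
        for action in plan:
            state = apply_action(state, action)
        if not (goal_pos <= state and not (goal_neg & state)):
            return False
    return True

# Hypothetical one-action example: switch a lamp on, whatever its initial state.
plan = [({"lamp_on"}, {"lamp_off"})]
print(valid_conformant_plan(unknown_facts=["lamp_off"], known_facts=[],
                            plan=plan, goal_pos={"lamp_on"}, goal_neg={"lamp_off"}))
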
Category: Artificial Intelligence

[284] viXra:1705.0313 [pdf] submitted on 2017-05-21 09:43:28

Rematch of Man vs Machine

Authors: George Rajna
Comments: 32 Pages.

It's man vs machine this week as Google's artificial intelligence programme AlphaGo faces the world's top-ranked Go player in a contest expected to end in another victory for rapid advances in AI. [19] Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". 
[7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.
Category: Artificial Intelligence

[283] viXra:1705.0273 [pdf] submitted on 2017-05-18 10:06:56

Google Latest Tech Tricks

Authors: George Rajna
Comments: 31 Pages.

Google's computer programs are gaining a better understanding of the world, and now it wants them to handle more of the decision-making for the billions of people who use its services. [18] Microsoft on Wednesday unveiled new tools intended to democratize artificial intelligence by enabling machine smarts to be built into software from smartphone games to factory floors. [17] The closer we can get a machine translation to be on par with expert human translation, the happier lots of people struggling with translations will be. [16] Researchers have created a large, open source database to support the development of robot activities based on natural language input. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence and have come up with what they are calling a differentiable neural computer (DNC.) In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public team members, Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10]
Category: Artificial Intelligence

Replacements of recent Submissions

[64] viXra:1902.0464 [pdf] replaced on 2019-03-14 03:23:57

Philosophically an Incompleteness Theorem Is Trivial —Towards a Truly Thinking Machine—

Authors: Atsushi Shimotani
Comments: 1 Page.

I think that an incompleteness theorem is trivial.
Category: Artificial Intelligence

[63] viXra:1902.0354 [pdf] replaced on 2019-02-22 07:18:16

A Set of Sets and Quantification of Logic —Towards a Truly Thinking Machine—

Authors: Atsushi Shimotani
Comments: 2 Pages.

I describe the relationship between a set of sets and the quantification of logic, and I show a thesis which is essentially identical to an axiom of first-order logic. The logic given in this paper is an extension of propositional logic.
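
For readers unfamiliar with the connection, the standard finite-domain picture is that a quantified statement is a claim about the extension of a predicate, i.e. about one set inside a set of sets (the powerset of the domain). The snippet below is only this generic textbook illustration, not the construction or thesis from the paper.

# Quantifiers over a finite domain, read as statements about sets of sets.
domain = {1, 2, 3, 4}
P = {x for x in domain if x > 0}        # extension of predicate P
Q = {x for x in domain if x % 2 == 0}   # extension of predicate Q

forall_P = (P == domain)      # "for all x, P(x)"  <=>  ext(P) is the whole domain
exists_Q = (len(Q) > 0)       # "exists x, Q(x)"   <=>  ext(Q) is a non-empty subset
print(forall_P, exists_Q)     # True True
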
Category: Artificial Intelligence

[62] viXra:1902.0220 [pdf] replaced on 2019-03-22 03:27:14

Comments on the Book "Architects of Intelligence" by Martin Ford in the Light of the SP Theory of Intelligence

Authors: J Gerard Wolff
Comments: 53 Pages.

The book "Architects of Intelligence" by Martin Ford presents conversations about AI between the author and influential researchers. Issues considered in the book are described in relation to features of the "SP System", meaning the "SP Theory of Intelligence" and its realisation in the "SP Computer Model", both outlined in an appendix. The SP System has the potential to solve most of the problems in AI described in the book, and some others. Strengths and potential of the SP System, which in many cases contrast with weaknesses of deep neural networks (DNNs), include the following: a top-down research strategy has yielded a system with a favourable combination of conceptual "Simplicity" with descriptive or explanatory "Power"; the SP System has strengths and potential with both symbolic and non-symbolic kinds of knowledge and processing; the system has strengths and long-term potential in pattern recognition; it is free of the tendency of DNNs to make large and unexpected errors in recognition; the system has strengths and potential in unsupervised learning, including grammatical inference; the SP Theory of Intelligence provides a theoretically coherent basis for generalisation and the avoidance of under- or over-generalisations; that theory of generalisation may help improve the safety of driverless; the SP system, unlike DNNs, can achieve learning from a single occurrence or experience; it has relatively tiny demands for computational resources and volumes of data, with potential for much higher speeds in learning; the system, unlike most DNNs, has strengths in transfer learning; unlike DNNs, it provides transparency in the representation of knowledge and an audit trail for all its processing; the system has strengths and potential in the processing of natural language; it exhibits several different kinds of probabilistic reasoning; the system has strengths and potential in commonsense reasoning and the representation of commonsense knowledge; other strengths include information compression, biological validity, scope for adaptation, and freedom from catastrophic forgetting. Despite the importance of motivations and emotions, no attempt has been made in the SP research to investigate these areas.
Category: Artificial Intelligence

[61] viXra:1901.0051 [pdf] replaced on 2019-01-17 11:06:55

Commonsense Reasoning, Commonsense Knowledge, and the SP Theory of Intelligence

Authors: J Gerard Wolff
Comments: 66 Pages.

Commonsense reasoning (CSR) and commonsense knowledge (CSK) (together abbreviated as CSRK) are areas of study concerned with problems which are trivially easy for adults but which are challenging for artificial systems. This paper describes how the "SP System" -- meaning the "SP Theory of Intelligence" and its realisation in the "SP Computer Model" -- has strengths and potential in several aspects of CSRK. Some shortcomings of the system in that area may be overcome with planned future developments. A particular strength of the SP System is that it shows promise as an overarching theory for four areas of relative success with CSRK problems -- described by other authors -- which have been developed without any integrative theory. How the SP System may help to solve four other kinds of CSRK problem is described: 1) how the strength of evidence for a murder may be influenced by the level of lighting of the murder as it was witnessed; 2) how people may arrive at the commonly-accepted interpretation of phrases like "water bird"; 3) the interpretation of the horse's head scene in "The Godfather" film; and 4) how the SP System may help to resolve the reference of an ambiguous pronoun in sentences in the format of a 'Winograd schema'. Also described is why a fifth CSRK problem -- modelling how a chef may crack an egg into a bowl -- is beyond the capabilities of the SP System as it is now and how those deficiencies may be overcome via planned developments of the system.
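
The 'Winograd schema' format mentioned in point 4) pairs a sentence containing an ambiguous pronoun with two candidate referents and a special word whose substitution flips the correct answer. The sketch below shows the classic trophy/suitcase schema as a data structure with a lookup-table "resolver"; it is a format illustration only and says nothing about how the SP System performs the resolution.

# A Winograd schema as a data structure, with a placeholder resolver.
schema = {
    "sentence": "The trophy doesn't fit in the suitcase because it is too {word}.",
    "pronoun": "it",
    "candidates": ["the trophy", "the suitcase"],
    "answers": {"big": "the trophy", "small": "the suitcase"},
}

def resolve(schema, word):
    # Placeholder: a real system must infer the referent, not look it up.
    return schema["answers"][word]

for word in ("big", "small"):
    print(schema["sentence"].format(word=word), "->", resolve(schema, word))
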
Category: Artificial Intelligence

[60] viXra:1811.0085 [pdf] replaced on 2019-02-13 05:26:05

Event-Driven Models

Authors: Dimiter Dobrev
Comments: 25 Pages.

In Reinforcement Learning we look for meaning in the flow of input/output information. If we do not find meaning, the information flow is no more than noise to us. Before we are able to find meaning, we should first learn how to discover and identify objects. What is an object? In this article we will demonstrate that an object is an event-driven model. These models are a generalization of action-driven models. In a Markov Decision Process we have an action-driven model which changes its state at each step. The advantage of event-driven models is their greater sustainability, as they change their states only upon the occurrence of particular events. These events may occur very rarely, and therefore the state of the event-driven model is much more predictable.
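
The contrast can be sketched as two models watching the same observation stream: an action-driven (Markov-style) model performs a state transition at every step, while an event-driven model transitions only when one of a small set of events occurs, so its state is far more stable. The events and transition rules below are invented purely for illustration.

# Action-driven vs event-driven model of the same observation stream.
class ActionDrivenModel:
    def __init__(self):
        self.state, self.transitions = 0, 0
    def step(self, obs):
        self.state = (self.state + obs) % 5      # changes state at every step
        self.transitions += 1

class EventDrivenModel:
    def __init__(self, events):
        self.state, self.transitions, self.events = 0, 0, events
    def step(self, obs):
        if obs in self.events:                   # changes state only on an event
            self.state = (self.state + 1) % 5
            self.transitions += 1

stream = [0, 1, 0, 0, 7, 0, 0, 1, 0, 7]          # '7' plays the role of a rare event
a, e = ActionDrivenModel(), EventDrivenModel(events={7})
for obs in stream:
    a.step(obs)
    e.step(obs)
print(a.transitions, e.transitions)              # 10 transitions vs 2
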
Category: Artificial Intelligence

[59] viXra:1810.0347 [pdf] replaced on 2019-04-21 01:40:52

The Teleonomic Purpose of the Human Species (a Secular Discussion, Regarding Artificial General Intelligence)

Authors: Jordan Micah Bennett
Comments: 8 Pages. Author-website: http://folioverse.appspot.com/

This work concerns a hypothesis offering a teleonomic description of the non-trivial purpose of the human species. Teleonomy is a recent concept (with contributions from Richard Dawkins) that entails purpose in the context of objectivity/science, rather than in the context of subjectivity/deities. Teleonomy ought not to be confused with the teleological argument, which is a religious/subjective concept, contrary to teleonomy, a scientific/objective concept. As such, this work concerns principles of entropy. This hypothesis was originally proposed on ResearchGate in 2015.
Category: Artificial Intelligence

[58] viXra:1810.0345 [pdf] replaced on 2018-10-22 10:38:55

Cosmological Natural Selection AI

Authors: Jordan Micah Bennett
Comments: Author webpage: folioverse.appspot.com

Notably, this short paper concerns a non-serious thought experiment within the scope of a serious hypothesis of mine regarding the scientific purpose of the human species, taken in tandem with Cosmological Natural Selection I (CNS I). It may thus be considered an aside with respect to the aforesaid serious hypothesis, while separately including thinking in relation to CNS I.
Category: Artificial Intelligence

[57] viXra:1810.0139 [pdf] replaced on 2019-05-01 17:58:03

Supersymmetric Artificial Neural Network

Authors: Jordan Micah Bennett
Comments: 6 Pages. Author Email: jordanmicahbennett@gmail.com | Author Website: folioverse.appspot.com

The “Supersymmetric Artificial Neural Network” in deep learning (denoted $(x; \theta, \bar{\theta})^{\top}w$) espouses the importance of considering biological constraints in the aim of further generalizing backward propagation. Looking at the progression of ‘solution geometries’: going from $SO(n)$ representation (such as Perceptron-like models) to $SU(n)$ representation (such as UnitaryRNNs) has guaranteed richer and richer representations in the weight space of the artificial neural network, and hence better and better hypotheses were generatable. The Supersymmetric Artificial Neural Network explores a natural step forward, namely $SU(m|n)$ representation. These supersymmetric biological brain representations (Perez et al.) can be represented by the supercharge-compatible special unitary notation $SU(m|n)$, or $(x; \theta, \bar{\theta})^{\top}w$ parameterized by $\theta, \bar{\theta}$, which are supersymmetric directions, unlike $\theta$ seen in the typical non-supersymmetric deep learning model. Notably, supersymmetric values can encode or represent more information than the typical deep learning model, in terms of “partner potential” signals for example. Note: a reasonable overview/summary (by a physics/computer-science graduate named "Mitchell Porter") of the benefits of supersymmetric numbers in relation to my "Supersymmetric Artificial Neural Network" can be found at the OpenReview link below; the "Supersymmetric Artificial Neural Network" is referenced in section 5.1 of that summary, but it is advisable to read the whole overview, starting from section 1: https://openreview.net/pdf?id=Byei_AwYOE
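
The 'solution geometry' progression cited in the abstract can be made concrete for its first two stages: an orthogonal ($SO(n)$) or special unitary ($SU(n)$) weight matrix can be obtained as the matrix exponential of a skew-symmetric or traceless skew-Hermitian generator. The sketch below shows only those two standard parameterisations; it does not implement the paper's proposed $SU(m|n)$ layer, for which no equally standard construction is assumed here.

# Orthogonal (SO(n)) and special-unitary (SU(n)) weight matrices via the matrix exponential.
import numpy as np
from scipy.linalg import expm

n = 4
rng = np.random.default_rng(0)

A = rng.standard_normal((n, n))
W_orth = expm(A - A.T)                                   # skew-symmetric generator
print(np.allclose(W_orth @ W_orth.T, np.eye(n)))         # True: orthogonal

B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = B - B.conj().T                                       # skew-Hermitian generator
H -= (np.trace(H) / n) * np.eye(n)                       # traceless => determinant 1
W_unit = expm(H)
print(np.allclose(W_unit @ W_unit.conj().T, np.eye(n)))  # True: unitary
print(np.isclose(np.linalg.det(W_unit), 1.0))            # True: special unitary
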
Category: Artificial Intelligence

[56] viXra:1810.0139 [pdf] replaced on 2019-04-15 22:41:00

Supersymmetric Artificial Neural Network

Authors: Jordan Micah Bennett
Comments: 5 Pages. Author Email: jordanmicahbennett@gmail.com | Author Website: folioverse.appspot.com

The “Supersymmetric Artificial Neural Network” in deep learning (denoted $(x; \theta, \bar{\theta})^{\top}w$) espouses the importance of considering biological constraints in the aim of further generalizing backward propagation. Looking at the progression of ‘solution geometries’: going from $SO(n)$ representation (such as Perceptron-like models) to $SU(n)$ representation (such as UnitaryRNNs) has guaranteed richer and richer representations in the weight space of the artificial neural network, and hence better and better hypotheses were generatable. The Supersymmetric Artificial Neural Network explores a natural step forward, namely $SU(m|n)$ representation. These supersymmetric biological brain representations (Perez et al.) can be represented by the supercharge-compatible special unitary notation $SU(m|n)$, or $(x; \theta, \bar{\theta})^{\top}w$ parameterized by $\theta, \bar{\theta}$, which are supersymmetric directions, unlike $\theta$ seen in the typical non-supersymmetric deep learning model. Notably, supersymmetric values can encode or represent more information than the typical deep learning model, in terms of “partner potential” signals for example. Note: a reasonable overview/summary (by a physics/computer-science graduate named "Mitchell Porter") of the benefits of supersymmetric numbers in relation to my "Supersymmetric Artificial Neural Network" can be found at the OpenReview link below; the "Supersymmetric Artificial Neural Network" is referenced in section 5.1 of that summary, but it is advisable to read the whole overview, starting from section 1: https://openreview.net/pdf?id=Byei_AwYOE
Category: Artificial Intelligence

[55] viXra:1810.0139 [pdf] replaced on 2018-12-27 15:23:43

Supersymmetric Artificial Neural Network

Authors: Jordan Micah Bennett
Comments: 5 Pages. Author Email: jordanmicahbennett@gmail.com | Author Website: folioverse.appspot.com

The “Supersymmetric Artificial Neural Network” in deep learning (denoted $(x, \theta, \theta')^{\top}w$) espouses the importance of considering biological constraints in the aim of further generalizing backward propagation. Looking at the progression of ‘solution geometries’: going from $SO(n)$ representation (such as Perceptron-like models) to $SU(n)$ representation (such as UnitaryRNNs) has guaranteed richer and richer representations in the weight space of the artificial neural network, and hence better and better hypotheses were generatable. The Supersymmetric Artificial Neural Network explores a natural step forward, namely $SU(m|n)$ representation. These supersymmetric biological brain representations (Perez et al.) can be represented by the supercharge-compatible special unitary notation $SU(m|n)$, or $(x, \theta, \theta')^{\top}w$ parameterized by $\theta, \theta'$, which are supersymmetric directions, unlike $\theta$ seen in the typical non-supersymmetric deep learning model. Notably, supersymmetric values can encode or represent more information than the typical deep learning model, in terms of “partner potential” signals for example.
Category: Artificial Intelligence

[54] viXra:1810.0139 [pdf] replaced on 2018-12-26 11:16:23

Supersymmetric Artificial Neural Network

Authors: Jordan Micah Bennett
Comments: 5 Pages. Author Email: jordanmicahbennett@gmail.com | Author Website: folioverse.appspot.com

The “Supersymmetric Artificial Neural Network” in deep learning (denoted $(x, \theta, \theta')^{\top}w$) espouses the importance of considering biological constraints in the aim of further generalizing backward propagation. Looking at the progression of ‘solution geometries’: going from $SO(n)$ representation (such as Perceptron-like models) to $SU(n)$ representation (such as UnitaryRNNs) has guaranteed richer and richer representations in the weight space of the artificial neural network, and hence better and better hypotheses were generatable. The Supersymmetric Artificial Neural Network explores a natural step forward, namely $SU(m|n)$ representation. These supersymmetric biological brain representations (Perez et al.) can be represented by the supercharge-compatible special unitary notation $SU(m|n)$, or $(x, \theta, \theta')^{\top}w$ parameterized by $\theta, \theta'$, which are supersymmetric directions, unlike $\theta$ seen in the typical non-supersymmetric deep learning model. Notably, supersymmetric values can encode or represent more information than the typical deep learning model, in terms of “partner potential” signals for example.
Category: Artificial Intelligence

[53] viXra:1809.0364 [pdf] replaced on 2018-09-28 10:07:11

Idealistic Neural Networks

Authors: Tofara Moyo
Comments: 2 Pages.

I describe an Artificial Neural Network where we have mapped words to individual neurons instead of having them as variables to be fed into a network. The process of changing training cases is equivalent to a Dropout procedure where we replace some (or all) of the words/neurons in the previous training case with new ones. Each neuron/word then takes as input all the b weights of the other neurons and weights them all with its personal a weight. To learn, this network uses the backpropagation algorithm after calculating an error from the output of an output neuron that is a traditional neuron. This network therefore has a unique topology and functions with no inputs. We use coordinate gradient descent to learn, where we alternate between training the a weights of the words and the b weights. The Idealistic Neural Network is an extremely shallow network that can represent non-linear complexity in a linear form.
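
One speculative reading of this topology (each word-neuron's activation is its own a weight times the sum of the other neurons' b weights, a conventional linear output neuron produces the error, and learning alternates coordinate-wise between the a and b weights) is sketched below. The squared-error loss, the learning rate, the output weights and the initial values are assumptions filled in for illustration; the abstract does not specify them.

# Speculative sketch: activation_i = a_i * sum_{j != i} b_j, linear output neuron,
# squared error, alternating (coordinate) gradient descent over the a and b weights.
import numpy as np

n_words, lr, target = 5, 0.05, 1.0
a = np.full(n_words, 0.1)              # per-word "a" weights (assumed initial values)
b = np.full(n_words, 0.2)              # per-word "b" weights
w_out = np.full(n_words, 0.5)          # weights of the traditional output neuron

def forward(a, b):
    # Activation of word-neuron i: its own a weight times the other neurons' b weights.
    acts = np.array([a[i] * (b.sum() - b[i]) for i in range(n_words)])
    return acts, float(w_out @ acts)   # note: the network takes no external input

for epoch in range(400):
    _, out = forward(a, b)
    err = out - target                 # derivative of the squared-error loss
    if epoch % 2 == 0:                 # coordinate step on the a weights
        a -= lr * err * w_out * np.array([b.sum() - b[i] for i in range(n_words)])
    else:                              # coordinate step on the b weights
        b -= lr * err * np.array([sum(w_out[i] * a[i] for i in range(n_words) if i != j)
                                  for j in range(n_words)])
print(round(forward(a, b)[1], 3))      # output moves toward the target of 1.0
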
Category: Artificial Intelligence

[52] viXra:1809.0190 [pdf] replaced on 2018-09-11 13:51:22

Thoughts About Thinking

Authors: Lev I. Verkhovsky
Comments: 12 Pages. The article in Russian

A geometric model illustrating the basic mechanisms of thinking -- logical and intuitive -- is proposed. Human thinking and the problems of creating artificial intelligence are discussed. Although the article was published in the Russian popular-science journal «Chemistry and Life» in 1989 (No. 7), according to the author it is not obsolete. The article is in Russian.
Category: Artificial Intelligence

[51] viXra:1808.0589 [pdf] replaced on 2018-09-17 09:47:30

Minimal and Maximal Models in Reinforcement Learning

Authors: Dimiter Dobrev
Comments: 11 Pages.

Each test gives us one property, which we will denote as the test result. The extension of that property we will denote as the test property. This raises a question about the nature of that property. Can it be a property of the state of the world? The answer is both yes and no. For an arbitrary model of the world the answer is negative, but if we look at the maximal model of the world the answer flips to positive. There can be various models of the world. The minimal model knows only the indispensable minimum about the past and the future. Conversely, the maximal model knows everything about the past and the future. If you threw a die, the maximal model would know which side would come up, and it would even know what you will do. For example, it would know whether you will throw the die at all.
Category: Artificial Intelligence

[50] viXra:1805.0214 [pdf] replaced on 2018-05-31 04:56:00

AI Should Not Be an Open Source Project

Authors: Dimiter Dobrev
Comments: 9 Pages.

Who should own Artificial Intelligence technology? It should belong to everyone; more precisely, not the technology per se, but the fruits that can be reaped from it. Obviously, we should not let AI end up in the hands of irresponsible persons. Likewise, nuclear technology should benefit all, yet it should be kept secret and inaccessible to the public at large.
Category: Artificial Intelligence

[49] viXra:1805.0089 [pdf] replaced on 2018-11-25 05:20:33

Group Sparse Recovery in Impulsive Noise Via Alternating Direction Method of Multipliers

Authors: Jianwen Huang, Feng Zhang, Jianjun Wang, Wendong Wang
Comments: 35 Pages.

In this paper, we consider the recovery of group sparse signals corrupted by impulsive noise. In some recent literature, researchers have utilized stable data-fitting models, such as the $l_1$-norm, the Huber penalty function and the Lorentzian norm, in place of the $l_2$-norm data fidelity model in order to obtain more robust performance. Here, a stable model is developed which exploits the generalized $l_p$-norm as the measure of the reconstruction error. To address this model, we propose an efficient alternating direction method of multipliers which incorporates the proximity operator of $l_p$-norm functions into the framework of Lagrangian methods. Moreover, to guarantee convergence of the algorithm in the case $0\leq p<1$ (the nonconvex case), we take advantage of a smoothing strategy. For both $0\leq p<1$ (the nonconvex case) and $1\leq p\leq2$ (the convex case), we derive convergence conditions for the proposed algorithm. Furthermore, under the block restricted isometry property with constant $\delta_{\tau k_0}<\tau/(4-\tau)$ for $0<\tau<4/3$ and $\delta_{\tau k_0}<\sqrt{(\tau-1)/\tau}$ for $\tau\geq4/3$, a sharp sufficient condition for group sparse recovery in the presence of impulsive noise and its associated error upper bound are established. Numerical results based on synthetic block sparse signals and real-world FECG signals demonstrate the effectiveness and robustness of the new algorithm under highly impulsive noise.
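
The inner building block that such an ADMM scheme alternates with is a proximity operator. For the group-sparsity term this is block (group) soft-thresholding, and for the special case p = 1 of the data-fit term it is the elementwise soft-threshold. The snippet shows only these two standard operators, not the paper's full algorithm (which also covers general l_p and the nonconvex smoothing).

# Proximity operators used inside ADMM-type group-sparse solvers.
import numpy as np

def prox_group_l2(v, groups, tau):
    # Block soft-thresholding: prox of tau * sum_g ||v_g||_2.
    out = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm <= tau else (1.0 - tau / norm) * v[g]
    return out

def prox_l1(v, tau):
    # Elementwise soft-thresholding: prox of tau * ||v||_1 (the p = 1 data-fit case).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

v = np.array([3.0, -0.2, 0.1, -4.0, 0.05, 0.0])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
print(prox_group_l2(v, groups, tau=1.0))   # groups with small norm are zeroed as blocks
print(prox_l1(v, tau=1.0))                 # small entries are zeroed individually
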
Category: Artificial Intelligence

[48] viXra:1801.0271 [pdf] replaced on 2018-01-22 21:08:14

Refutation: Neutrosophic Logic by Florentin Smarandache as Generalized Intuitionistic, Fuzzy Logic © 2018 by Colin James III All Rights Reserved.

Authors: Colin James III
Comments: 2 Pages. © 2018 by Colin James III All rights reserved.

We map the neutrosophic logical values of truth, falsity, and indeterminacy on the intervals "]0,1[" and "]-0,1+[" in equations for the Meth8/VL4 apparatus. We test the summation of those values. The result is not tautologous, meaning that neutrosophic logic is refuted and hence its use as a generalization of intuitionistic, fuzzy logic is likewise unworkable.
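
For context, a single-valued neutrosophic triple assigns degrees of truth, indeterminacy and falsity (T, I, F), each conventionally taken from [0, 1] (or the non-standard interval ]-0, 1+[), with the sum only required to lie between 0 and 3. The check below merely tests those value ranges for sample triples; it is a generic illustration and not the Meth8/VL4 evaluation performed in the paper.

# Range check for single-valued neutrosophic triples (T, I, F).
def is_valid_neutrosophic(t, i, f):
    in_range = all(0.0 <= x <= 1.0 for x in (t, i, f))
    return in_range and 0.0 <= t + i + f <= 3.0

for triple in [(0.6, 0.3, 0.2), (1.0, 1.0, 1.0), (0.5, -0.1, 0.2)]:
    print(triple, is_valid_neutrosophic(*triple))   # True, True, False
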
Category: Artificial Intelligence

[47] viXra:1712.0071 [pdf] replaced on 2018-04-26 10:08:54

The IQ of Artificial Intelligence

Authors: Dimiter Dobrev
Comments: 24 Pages. Serdica Journal of Computing

All it takes to identify the computer programs which are Artificial Intelligence is to give them a test and recognize as AI those that pass it. Let us say that the score a program earns on the test is called its IQ. We cannot pinpoint a minimum IQ threshold that a program has to reach in order to be AI; however, we will choose a certain value. Thus, our definition of AI will be any program whose IQ is above the chosen value. While this idea has already been implemented in [3], here we revisit the construction in order to introduce certain improvements.
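
The definition amounts to a selection rule: run each candidate program on a fixed test, call its score IQ, and accept as AI exactly the programs whose IQ exceeds the chosen value. In the sketch below the test cases, the threshold and the candidate programs are all hypothetical placeholders; the actual test in the paper is far richer.

# IQ = score on a fixed test; AI = programs whose IQ exceeds the chosen threshold.
AI_THRESHOLD = 0.7

def iq(program, test_cases):
    # Fraction of test cases answered correctly.
    return sum(program(x) == y for x, y in test_cases) / len(test_cases)

test_cases = [(1, 2), (2, 4), (3, 6), (10, 20)]          # toy task: double the input
candidates = {"doubler": lambda x: 2 * x, "guesser": lambda x: 7}

for name, prog in candidates.items():
    score = iq(prog, test_cases)
    print(name, score, "AI" if score > AI_THRESHOLD else "not AI")
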
Category: Artificial Intelligence

[46] viXra:1712.0071 [pdf] replaced on 2018-01-28 06:21:06

The Intelligence Quotient of the Artificial Intelligence

Authors: Dimiter Dobrev
Comments: 27 Pages. Bulgarian. Serdica Journal of Computing

To say which programs are AI, it is enough to run an exam and recognize as AI those programs that pass it. The exam grade will be called IQ. We cannot say exactly how high the IQ has to be in order for a program to be AI, but we will choose a specific value. So our definition of AI will be any program whose IQ is above this specific value. This idea has already been realized in [3], but here we repeat the construction while introducing some improvements.
Category: Artificial Intelligence

[45] viXra:1711.0265 [pdf] replaced on 2017-11-27 03:16:15

Revisit Fuzzy Neural Network: Bridging the Gap Between Fuzzy Logic and Deep Learning

Authors: Lixin Fan
Comments: 76 Pages.

This article aims to establish a concrete and fundamental connection between two important fields in artificial intelligence, i.e. deep learning and fuzzy logic. On the one hand, we hope this article will pave the way for fuzzy logic researchers to develop convincing applications and tackle challenging problems which are of interest to the machine learning community too. On the other hand, deep learning could benefit from the comparative research by re-examining many trial-and-error heuristics in the lens of fuzzy logic, and consequently, distilling the essential ingredients with rigorous foundations. Based on the new findings reported in [41] and this article, we believe the time is ripe to revisit the fuzzy neural network as a crucial bridge between two schools of AI research, i.e. symbolic versus connectionist [101], and eventually open the black box of artificial neural networks.
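
One concrete point of contact between the two fields is that fuzzy connectives have differentiable forms, for example the product t-norm for AND and the probabilistic sum for OR, so they can sit inside a network trained by gradient descent like any other activation. The functions below are a generic illustration of that observation, not the specific construction developed in the article.

# Differentiable fuzzy connectives on membership degrees in [0, 1].
def fuzzy_and(a, b):
    return a * b                  # product t-norm

def fuzzy_or(a, b):
    return a + b - a * b          # probabilistic sum (the dual t-conorm)

def fuzzy_not(a):
    return 1.0 - a

a, b = 0.8, 0.3
print(fuzzy_and(a, b), fuzzy_or(a, b), fuzzy_not(a))
# On crisp values {0, 1} these reduce to Boolean AND/OR/NOT, and they stay
# differentiable in between, which is what lets gradients flow through them.
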
Category: Artificial Intelligence

[44] viXra:1711.0265 [pdf] replaced on 2017-11-17 16:28:38

Revisit Fuzzy Neural Network: Bridging the Gap Between Fuzzy Logic and Deep Learning

Authors: Lixin Fan
Comments: 76 Pages.

This article aims to establish a concrete and fundamental connection between two important fields in artificial intelligence, i.e. deep learning and fuzzy logic. On the one hand, we hope this article will pave the way for fuzzy logic researchers to develop convincing applications and tackle challenging problems which are of interest to the machine learning community too. On the other hand, deep learning could benefit from the comparative research by re-examining many trial-and-error heuristics in the lens of fuzzy logic, and consequently, distilling the essential ingredients with rigorous foundations. Based on the new findings reported in [38] and this article, we believe the time is ripe to revisit the fuzzy neural network as a crucial bridge between two schools of AI research, i.e. symbolic versus connectionist [93], and eventually open the black box of artificial neural networks.
Category: Artificial Intelligence

[43] viXra:1710.0324 [pdf] replaced on 2017-11-09 05:34:27

New Sufficient Conditions of Signal Recovery with Tight Frames Via $l_1$-Analysis

Authors: Jianwen Huang, Jianjun Wang, Feng Zhang, Wendong Wang
Comments: 18 Pages.

The paper discusses the recovery of signals that are nearly sparse with respect to a tight frame $D$ by means of the $l_1$-analysis approach. We establish several new sufficient conditions regarding the $D$-restricted isometry property to ensure stable reconstruction of signals that are approximately sparse with respect to $D$. It is shown that if the measurement matrix $\Phi$ fulfils the condition $\delta_{ts}<t/(4-t)$ for $0<t<4/3$, then signals which are approximately sparse with respect to $D$ can be stably recovered by the $l_1$-analysis method. In the case of $D=I$, the bound is sharp; see Cai and Zhang's work \cite{Cai and Zhang 2014}. When $t=1$, the present bound improves the condition $\delta_s<0.307$ from Lin et al.'s result to $\delta_s<1/3$. In addition, numerical simulations are conducted to show that the $l_1$-analysis method can stably reconstruct signals that are sparse in terms of tight frames.
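
The optimisation problem behind the l1-analysis method is to minimise ||D^T x||_1 subject to ||Phi x - y||_2 <= eps. The sketch below states that problem with cvxpy, using the identity as a trivially tight frame and a random Gaussian measurement matrix, chosen only so the example is self-contained; it reproduces neither the paper's conditions nor its simulations.

# l1-analysis recovery: minimise ||D^T x||_1 subject to ||Phi x - y||_2 <= eps.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, eps = 64, 32, 1e-3
D = np.eye(n)                                       # identity as a (trivially tight) frame
x_true = np.zeros(n)
x_true[[3, 17, 40]] = [1.5, -2.0, 0.7]              # signal sparse with respect to D
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true + (eps / np.sqrt(m)) * rng.standard_normal(m)

x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm(D.T @ x, 1)),
                     [cp.norm(Phi @ x - y, 2) <= 2 * eps])
problem.solve()
print(np.linalg.norm(x.value - x_true))             # small reconstruction error
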
Category: Artificial Intelligence

[42] viXra:1709.0108 [pdf] replaced on 2017-09-10 08:24:10

A New Semantic Theory of Natural Language

Authors: Kun Xing
Comments: 70 Pages.

Formal Semantics and Distributional Semantics are two important semantic frameworks in Natural Language Processing (NLP). Cognitive Semantics belongs to the movement of Cognitive Linguistics, which is based on contemporary cognitive science. Each framework can deal with some meaning phenomena, but none of them fulfills all requirements proposed by applications. A unified semantic theory characterizing all important language phenomena has both theoretical and practical significance; however, although many attempts have been made in recent years, no existing theory has achieved this goal yet. This article introduces a new semantic theory that has the potential to characterize most of the important meaning phenomena of natural language and to fulfill most of the necessary requirements for philosophical analysis and for NLP applications. The theory is based on a unified representation of information, and constructs a kind of mathematical model called a cognitive model to interpret natural language expressions in a compositional manner. It accepts the empirical assumption of Cognitive Semantics, and overcomes most shortcomings of Formal Semantics and of Distributional Semantics. The theory, however, is not a simple combination of existing theories, but an extensive generalization of classical logic and Formal Semantics. It inherits nearly all advantages of Formal Semantics, and also provides descriptive contents for objects and events as fine-grained as possible, descriptive contents which represent the results of human cognition.
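
The compositional, model-theoretic style of interpretation that the theory generalises can be shown with a toy model: a domain, extensions for nouns and verbs, and quantifiers evaluated as relations between those extensions. This is a plain Formal Semantics illustration; the paper's cognitive models add fine-grained descriptive content on top of such interpretations, which the snippet does not attempt.

# Toy compositional interpretation against a small model.
domain = {"tweety", "polly", "rex"}
ext = {
    "bird":  {"tweety", "polly"},
    "dog":   {"rex"},
    "flies": {"tweety", "polly"},
    "swims": {"rex"},
}

def every(restrictor, scope):
    return ext[restrictor] <= ext[scope]       # subset relation

def some(restrictor, scope):
    return bool(ext[restrictor] & ext[scope])  # non-empty intersection

print(every("bird", "flies"))   # True:  "every bird flies"
print(some("bird", "swims"))    # False: "some bird swims"
print(some("dog", "swims"))     # True:  "some dog swims"
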
Category: Artificial Intelligence