Artificial Intelligence

1801 Submissions

[5] viXra:1801.0243 [pdf] submitted on 2018-01-19 09:05:53

AI Quantum Experiments

Authors: George Rajna
Comments: 38 Pages.

On the way to an intelligent laboratory, physicists from Innsbruck and Vienna present an artificial agent that autonomously designs quantum experiments. [24] An answer to a quantum-physical question provided by the algorithm Melvin has uncovered a hidden link between quantum experiments and the mathematical field of graph theory. [23] Engineers have developed a key mathematical formula for driving quantum experiments. [22] Physicists are developing quantum simulators to help solve problems that are beyond the reach of conventional computers. [21] Engineers at Australia's University of New South Wales have invented a radical new architecture for quantum computing, based on novel 'flip-flop qubits', that promises to make the large-scale manufacture of quantum chips dramatically cheaper, and easier, than thought possible. [20] A team of researchers from the U.S. and Italy has built a quantum memory device that is approximately 1000 times smaller than similar devices, small enough to install on a chip. [19] The cutting edge of data storage research is working at the level of individual atoms and molecules, representing the ultimate limit of technological miniaturisation. [18] This is an important clue for our theoretical understanding of optically controlled magnetic data storage media. [17] A crystalline material that changes shape in response to light could form the heart of novel light-activated devices. [16] A team of Penn State electrical engineers now has a way to simultaneously control diverse optical properties of dielectric waveguides by using a two-layer coating, each layer with near-zero thickness and weight. [15] Just as in normal road traffic, crossings are indispensable in optical signal processing, and a clear traffic rule is required to avoid collisions. A new method has now been developed at TU Wien to provide such a rule for light signals. [14] Researchers have developed a way to use commercial inkjet printers and readily available ink to print hidden images that are only visible when illuminated with appropriately polarized waves in the terahertz region of the electromagnetic spectrum. [13] Thanks to a new solution devised at TU Wien, permanent magnets can now, for the first time, be produced using a 3D printer. This allows magnets to be produced in complex forms with precisely customised magnetic fields, as required, for example, in magnetic sensors. [12] For physicists, loss of magnetisation in permanent magnets can be a real concern. In response, the Japanese company Sumitomo created the strongest available magnet in 1983, one offering ten times more magnetic energy than previous versions. [11] A new method of generating superstrong magnetic fields has been proposed by Russian scientists in collaboration with foreign colleagues. [10] By showing that a phenomenon dubbed the "inverse spin Hall effect" works in several organic semiconductors, including carbon-60 buckyballs, University of Utah physicists changed magnetic "spin current" into electric current. The efficiency of this new power conversion method isn't yet known, but it might find use in future electronic devices including batteries, solar cells and computers. [9] Researchers from the Norwegian University of Science and Technology (NTNU) and the University of Cambridge in the UK have demonstrated that it is possible to directly generate an electric current in a magnetic material by rotating its magnetization. [8] This paper explains the magnetic effect of the electric current from the observed effects of the accelerating electrons, which naturally cause the experienced changes of the electric field potential along the electric wire.
The accelerating electrons explain not only the Maxwell Equations and Special Relativity, but also the Heisenberg Uncertainty Relation, wave-particle duality and the electron's spin, building a bridge between the Classical and Quantum Theories. The changing acceleration of the electrons explains the created negative electric field of the magnetic induction, the changing relativistic mass and the Gravitational Force, giving a Unified Theory of the physical forces. Taking into account the Planck Distribution Law of the electromagnetic oscillators as well, we can explain the electron/proton mass rate and the Weak and Strong Interactions.
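The abstract's argument invokes the Planck Distribution Law of the electromagnetic oscillators. For reference, the sketch below numerically evaluates the standard black-body form of Planck's law (textbook physics only; it is not the paper's derivation, and the temperature chosen is an arbitrary example):

```python
import math

# Physical constants (SI units, CODATA exact values)
H = 6.62607015e-34   # Planck constant, J*s
K = 1.380649e-23     # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def planck(nu, T):
    """Spectral radiance of a black body at frequency nu (Hz) and
    temperature T (K): B = 2 h nu^3 / c^2 * 1 / (exp(h nu / k T) - 1)."""
    return (2 * H * nu**3 / C**2) / math.expm1(H * nu / (K * T))

# Wien's displacement law: the spectrum peaks near nu = 2.821 k T / h.
T = 5778.0  # roughly the Sun's surface temperature, as an example
nu_peak = 2.821 * K * T / H
print(planck(nu_peak, T))
```

Using `math.expm1` avoids the loss of precision that `exp(x) - 1` suffers at low frequencies, where h*nu is much smaller than k*T.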
Category: Artificial Intelligence

[4] viXra:1801.0192 [pdf] submitted on 2018-01-16 07:03:26

FastNet: An Efficient Architecture for Smart Devices

Authors: John Olafenwa, Moses Olafenwa
Comments: 9 Pages.

Inception and the ResNet family of Convolutional Neural Network architectures have broken records in the past few years, but recent state-of-the-art models have also incurred very high computational cost in terms of training, inference and model size, making the deployment of these models on edge devices impractical. In light of this, we present a novel architecture designed for high computational efficiency on both GPUs and CPUs, and highly suited for deployment on mobile applications, smart cameras, IoT devices and controllers, as well as low-cost drones. Our architecture achieves competitive accuracies on standard datasets, even outperforming the original ResNet. We present below the motivation for this research, the architecture of the network, single-model test accuracies on CIFAR-10 and CIFAR-100, a detailed comparison with other well-known architectures, and a link to an implementation in Keras.
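The abstract does not say how FastNet cuts its computational cost, but one way to see why architecture choices dominate edge-deployment cost is simply to count parameters. The sketch below compares a standard 3x3 convolution with a depthwise-separable one, a common efficiency technique in mobile architectures; it illustrates the general trade-off only and is not the FastNet design:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias terms ignored)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1x1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

standard = conv_params(128, 256, 3)             # 294912 parameters
separable = separable_conv_params(128, 256, 3)  # 33920 parameters
print(standard, separable, round(standard / separable, 1))  # ratio ~8.7x
```

The same counting applies layer by layer, which is why separable or otherwise factorized convolutions shrink both model size and inference cost roughly in proportion.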
Category: Artificial Intelligence

[3] viXra:1801.0102 [pdf] submitted on 2018-01-09 11:34:24

Bayesian Transfer Learning for Deep Networks

Authors: J. Wohlert, A. M. Munk, S. Sengupta, F. Laumann
Comments: 6 Pages.

We propose a method for transfer learning for deep networks through Bayesian inference, where an approximate posterior distribution q(w|θ) of model parameters w is learned through variational approximation. Utilizing Bayes by Backprop, we optimize the parameters θ associated with the approximate distribution. When performing transfer learning we consider two tasks, A and B. Firstly, an approximate posterior q_A(w|θ) is learned from task A, which is afterwards transferred as a prior p(w) → q_A(w|θ) when learning the approximate posterior distribution q_B(w|θ) for task B. Initially, we consider a multivariate normal distribution q(w|θ) = N(µ, Σ) with diagonal covariance matrix Σ. Secondly, we consider the prospects of introducing more expressive approximate distributions, specifically those known as normalizing flows. By investigating these concepts on the MNIST dataset we conclude that utilizing normalizing flows does not improve Bayesian inference in the context presented here. Further, we show that transfer learning is not feasible using our proposed architecture and our definition of tasks A and B, though no general conclusion about rejecting a Bayesian approach to transfer learning can be drawn.
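When q_A(w|θ) is transferred as the prior p(w) for task B, the KL term in the variational objective is between two diagonal Gaussians and has a closed form. A minimal sketch of that term (our notation and function names, not the paper's code), summed over the independent parameter dimensions:

```python
import math

def kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2)) ).

    With a diagonal covariance the divergence factorizes over dimensions:
    KL = sum_i [ log(sp_i/sq_i) + (sq_i^2 + (mq_i - mp_i)^2) / (2 sp_i^2) - 1/2 ].
    """
    kl = 0.0
    for mq, sq, mp, sp in zip(mu_q, sigma_q, mu_p, sigma_p):
        kl += math.log(sp / sq) + (sq**2 + (mq - mp)**2) / (2 * sp**2) - 0.5
    return kl

# Identical distributions give zero divergence.
print(kl_diag_gaussians([0.0, 1.0], [1.0, 0.5], [0.0, 1.0], [1.0, 0.5]))  # 0.0
```

In the transfer setting, (mu_p, sigma_p) would be the frozen parameters learned on task A and (mu_q, sigma_q) the parameters being optimized for task B.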
Category: Artificial Intelligence

[2] viXra:1801.0050 [pdf] submitted on 2018-01-06 00:20:25

Fruit Recognition from Images Using Deep Learning

Authors: Horea Muresan, Mihai Oltean
Comments: 13 Pages. Data can be downloaded from https://github.com/Horea94/Fruit-Images-Dataset

In this paper we introduce a new, high-quality dataset of images containing fruits. We also present the results of some numerical experiments on training a neural network to detect fruits. We discuss the reasons we chose fruits for this project and propose a few applications that could use this kind of neural network.
Category: Artificial Intelligence

[1] viXra:1801.0041 [pdf] submitted on 2018-01-05 06:09:53

Taking Advantage of BiLSTM Encoding to Handle Punctuation in Dependency Parsing: A Brief Idea

Authors: Matteo Grella
Comments: 3 Pages.

In the context of the bidirectional-LSTM neural parser (Kiperwasser and Goldberg, 2016), an idea is proposed to initialize the parsing state without punctuation tokens while still using them for the BiLSTM sentence encoding. The relevant information carried by the punctuation tokens should then be learned implicitly, through the errors of the recurrent contributions only.
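The mechanism can be mimicked with a toy sketch: encode the full token sequence, punctuation included, then build the initial parser buffer from the non-punctuation positions only. The "encoder" below is a stand-in for a real BiLSTM (each vector just records its neighbours, so punctuation still influences surrounding encodings); all names are ours, not the paper's:

```python
PUNCT = {".", ",", ";", ":", "!", "?"}

def encode(tokens):
    """Stand-in for a BiLSTM: each token's 'vector' sees its neighbours,
    so punctuation still shapes the context of surrounding tokens."""
    return [(tokens[max(i - 1, 0)], t, tokens[min(i + 1, len(tokens) - 1)])
            for i, t in enumerate(tokens)]

def initial_parser_state(tokens):
    encodings = encode(tokens)  # computed over ALL tokens, punctuation included
    # The buffer keeps only non-punctuation positions, as in the proposed idea.
    return [(i, encodings[i]) for i, t in enumerate(tokens) if t not in PUNCT]

tokens = ["John", ",", "who", "left", ",", "smiled", "."]
state = initial_parser_state(tokens)
print([tokens[i] for i, _ in state])  # ['John', 'who', 'left', 'smiled']
```

Note that the encoding kept for "who" still contains the comma to its left, which is exactly the point: punctuation informs the encodings even though the parser never has to attach it.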
Category: Artificial Intelligence