Digital Signal Processing

1705 Submissions

[7] viXra:1705.0208 [pdf] submitted on 2017-05-13 04:18:00

Physics Solutions for Computational Problems

Authors: George Rajna
Comments: 17 Pages.

Researchers from the University of Central Florida and Boston University have developed a novel approach to solving difficult computational problems more quickly. [29] By precisely measuring the entropy of a cerium copper gold alloy with baffling electronic properties, cooled to nearly absolute zero, physicists in Germany and the United States have gleaned new evidence about the possible causes of high-temperature superconductivity and similar phenomena. [28] Physicists have theoretically shown that a superconducting current of electrons can be induced to flow by a new kind of transport mechanism: the potential flow of information. [27] This paper explains the magnetic effect of the superconductive current through the observed effects of accelerating electrons, which naturally cause the experienced changes of the electric field potential along the electric wire. The accelerating electrons explain not only the Maxwell Equations and Special Relativity, but also the Heisenberg Uncertainty Relation, the wave-particle duality and the electron's spin, building a bridge between the Classical and Quantum Theories. The changing acceleration of the electrons explains the negative electric field created by magnetic induction, the Higgs Field, the changing Relativistic Mass and the Gravitational Force, giving a Unified Theory of the physical forces. Taking into account the Planck Distribution Law of the electromagnetic oscillators as well, we can explain the electron/proton mass ratio and the Weak and Strong Interactions. Since superconductivity is fundamentally a quantum mechanical phenomenon, made possible in specific materials by entangled particles such as Cooper Pairs in strongly correlated materials and exciton-mediated electron pairing, we can say that the secret of superconductivity is quantum entanglement.
Category: Digital Signal Processing

[6] viXra:1705.0194 [pdf] submitted on 2017-05-12 04:58:46

Robots' Sense of Touch

Authors: George Rajna
Comments: 31 Pages.

Engineering researchers at the University of Minnesota have developed a revolutionary process for 3D printing stretchable electronic sensory devices that could give robots the ability to feel their environment. The discovery is also a major step forward in printing electronics on real human skin. [18] Researchers from France and the University of Arkansas have created an artificial synapse capable of autonomous learning, a component of artificial intelligence. [17] Intelligent machines of the future will help restore memory, mind your children, fetch your coffee and even care for aging parents. [16] Unlike experimental neuroscientists who deal with real-life neurons, computational neuroscientists use model simulations to investigate how the brain functions. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence, and has come up with what they are calling a differentiable neural computer (DNC). In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public, team members Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system,
Category: Digital Signal Processing

[5] viXra:1705.0189 [pdf] submitted on 2017-05-11 21:28:54

Introduction to Logplex Encoding

Authors: Russell Leidich
Comments: 5 Pages.

Logplex codes are universal codes, that is, bitstrings which map one-to-one to the whole numbers, regardless of the bits which follow them in memory. The codes are dense, in the sense that there is no finite series of bits which does not map to at least one whole number. Their asymptotic efficiency (size out divided by size in) is one, as with Elias omega codes[1], but they have some convenient features absent in the latter. Given whole numbers M and N, if M < N then logplex(M) < logplex(N); this provides for more efficient searching and sorting, as such tasks can be done without the need to allocate separate memory for the corresponding decoded whole numbers. For all nonzero M, M itself is encoded verbatim in the high bits of its logplex. In all cases, the high (last) bit of a logplex is one. The representations of all subparts of logplexes are bitwise little endian, in contrast to Elias omega codes, the endianness of whose subparts is opposite to the expansion direction. Finally, logplexes are scale-agnostic: there is no need to assume that log2(M) has any particular maximum value. This feature stems from their recursive structure, which is analogous to that of Elias omega codes.
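
The logplex construction itself is specified in the paper; for context, here is a minimal sketch of the Elias omega code[1] that the abstract measures against (the standard construction, with function names of our choosing). Per the abstract, omega codes lack the convenient logplex properties listed above: little-endian subparts, order preservation, and verbatim embedding of M.

```python
def elias_omega_encode(n):
    """Encode a positive integer as an Elias omega codeword (MSB-first string)."""
    code = "0"                 # terminating zero bit
    while n > 1:
        group = bin(n)[2:]     # binary representation of n, leading 1 included
        code = group + code    # prepend the group
        n = len(group) - 1     # next group encodes this group's length
    return code

def elias_omega_decode(bits):
    """Decode one Elias omega codeword from an MSB-first bit string."""
    n, i = 1, 0
    while bits[i] == "1":      # a 0 bit terminates the code
        n, i = int(bits[i:i + n + 1], 2), i + n + 1
    return n
```

For example, elias_omega_encode(16) yields "10100100000", and elias_omega_decode recovers 16 from it; note the recursive length-of-length structure that the abstract says logplexes share.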
Category: Digital Signal Processing

[4] viXra:1705.0188 [pdf] submitted on 2017-05-11 21:39:13

Introduction to Agnentropy

Authors: Russell Leidich
Comments: 28 Pages.

Claude Shannon[1] devised a way to quantify the information entropy[2] of a finite integer set, given the probabilities of finding each integer in the set. Information entropy, hereinafter simply "entropy", refers to the number of bits required to encode some such set in a given numerical base (usually binary). Unfortunately, his formula for the "Shannon entropy" seems to have been widely misappropriated as a means of measuring the entropy of such sets by supplanting the probability coefficients (which are generally unknowable) with the normalized frequencies of the integers as they actually occur in the set. This practice is so common that Shannon entropy is often defined in precisely this manner, and indeed this is how we define it here. However, the inaccuracy induced by this compromise may lead to erroneous conclusions, especially where very short or faint signals are concerned. To make matters worse, the numerical behavior of the Shannon entropy formula is rather unstable over large sets, where it would otherwise be more accurate. Herein we introduce the concept of agnentropy, short for "agnostic entropy", in the sense of an entropy metric which begins with almost no assumptions about the set under analysis. (Technically, it's a "divergence" -- essentially a Kullback-Leibler divergence[3] without the implicit singularities -- because it fails the triangle inequality. We refer to it as a "metric" only in the qualitative sense that it measures something.) This stands in stark contrast to the (compromised) Shannon entropy, which presupposes that the frequencies of integers within a given set are already known. In addition to being more accurate when used appropriately, agnentropy is also more numerically stable and faster to compute than Shannon entropy. To be precise, Shannon entropy does not measure the number of bits in an invertibly compressed code; it is, more accurately, an underestimate of that value. Unfortunately, the margin of underestimation is not straightforwardly computable, and is O(Z), where Z is the number of unique integers in the set, assuming that said integers are of predetermined maximum size. By contrast, agnentropy underestimates that bit count by no more than 2 plus the size of 2 logplexes. (Logplexes are universal (affine) codes introduced in [8].) In practice, this overhead amounts to tens of bits, as opposed to potentially thousands of bits for Shannon entropy. This difference has meaningful ramifications for the optimization of both lossless and lossy compression algorithms.
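
A minimal sketch may make the abstract's definition of the "compromised" Shannon entropy concrete: the unknowable probabilities are supplanted by the normalized frequencies actually observed in the set. Agnentropy itself is specified in the paper, not here, and the function name is ours.

```python
import math
from collections import Counter

def shannon_entropy_bits(values):
    # "Compromised" Shannon entropy in bits per symbol: each integer's
    # probability is replaced by its normalized frequency in the set.
    n = len(values)
    counts = Counter(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# The implied (under)estimate of the bits needed to encode the whole set:
data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
print(len(data) * shannon_entropy_bits(data))
```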
Category: Digital Signal Processing

[3] viXra:1705.0187 [pdf] submitted on 2017-05-11 21:51:26

Introduction to Entropy Transforms

Authors: Russell Leidich
Comments: 34 Pages.

We have at our disposal a wide variety of discrete transforms for the discovery of "interesting" signals in discrete data sets in any number of dimensions, which are of particular utility when the default assumption is that the set is mundane. SETI, the Search for Extraterrestrial Intelligence, is the archetypical case, although problems in drug discovery, malware detection, financial arbitrage, geologic exploration, forensic analysis, and other diverse fields are perpetual clients of such tools. Fundamentally, these include the Fourier, wavelet, curvelet, wave atom, contourlet, brushlet, and other transforms which have been churned out by math departments with increasing frequency since the days of Joseph Fourier. A mountain of optimized applications has been built on top of them, for example the Fastest Fourier Transform in the West[1] and the Wave Atom Toolbox[2]. Such transforms excel at discovering particular classes of signals, so much so that the return on investment in new math would appear to be approaching zero. What's missing, however, is efficiency: the question must be asked as to when such transforms are computationally justifiable. Herein we investigate a preprocessing technique, abstractly known as an "entropy transform", which, in a wide variety of practical applications, can discern in essentially real time whether or not an "interesting" signal exists within a particular data set. (Entropy transforms say nothing as to the nature of the signal, but merely how interesting a particular subset of the data appears to be.) Entropy transforms have the added advantage that they can also be tuned to behave as crude classifiers -- not as good as their deep learning counterparts, but requiring orders of magnitude less processing power. In applications where identifying many targets with moderate accuracy is more important than identifying a few targets with excellent accuracy, entropy transforms could bridge the gap to product viability. It would be fair to say that in the realm of signal detection, discrete transforms should be the tool of choice because they tend to produce the most accurate and well characterized results. But processor power and execution time are not free! Particularly when, as in the case of SETI, the bottleneck is the rate at which newly acquired data can be processed, a more productive approach would be to use cheap but reasonably accurate O(N) transforms to filter out all but the most surprising subsets of the data. This would reserve processing capacity for those rare weird cases more deserving of closer inspection. I published Agnentro[3], an open-source toolkit for signal search and comparison, first and foremost to support these broad and rather unintuitive assertions with numerical evidence. The goal of this paper is to formalize the underlying math.
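
The abstract does not spell out the transform itself, so as a minimal illustration of an entropy-transform-style detector, here is a sliding-window Shannon-entropy scan with O(1) incremental updates, giving the O(N) cost cited above. This is our illustrative assumption, not Agnentro's actual algorithm; the window size and names are ours.

```python
import math
from collections import Counter

def xlog2x(c):
    # Contribution c*log2(c) of one symbol count; 0*log2(0) is taken as 0.
    return c * math.log2(c) if c > 0 else 0.0

def sliding_entropy(data, window):
    """Sliding-window Shannon entropy in bits/symbol, one value per window.

    The running sum s = sum over counts c of c*log2(c) is updated in O(1)
    per step, so the whole scan is O(N): H(window) = log2(W) - s/W.
    """
    counts = Counter(data[:window])
    s = sum(xlog2x(c) for c in counts.values())
    out = [math.log2(window) - s / window]
    for i in range(window, len(data)):
        for sym, delta in ((data[i - window], -1), (data[i], +1)):
            s -= xlog2x(counts[sym])    # retract this symbol's old term
            counts[sym] += delta
            s += xlog2x(counts[sym])    # apply its new term
        out.append(math.log2(window) - s / window)
    return out

# Windows covering the embedded low-entropy run score near zero bits/symbol:
print(sliding_entropy("ACGTACGTAAAAAAAATCGA", 8))
```

Thresholding the output then flags "surprising" subsets for the expensive transforms, which is exactly the filtering role described above.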
Category: Digital Signal Processing

[2] viXra:1705.0157 [pdf] submitted on 2017-05-10 04:48:19

OPRA Technique for M-QAM over Nakagami-m Fading Channel with Imperfect CSI

Authors: Bhargabjyoti Saikia, Rupaban Subadar
Comments: 12 Pages.

Analysis of an Optimum Power and Rate Adaptation (OPRA) technique has been carried out for Multilevel Quadrature Amplitude Modulation (M-QAM) over Nakagami-m flat fading channels, considering imperfect channel estimation at the receiver. The optimal solution has been derived for continuous adaptation; it involves a specific bound function and cannot be expressed in closed mathematical form. Therefore, a sub-optimal solution is derived for the continuous adaptation, and it has been observed that it tends to the optimal solution as the correlation coefficient between the true channel gain and its estimate tends to one. It has also been observed that the receiver performance degrades with an increase in estimation error.
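
The paper's imperfect-CSI solutions are derived in the text itself; for orientation, the classical perfect-CSI OPRA (water-filling) rule of Goldsmith and Varaiya can be evaluated numerically for a Nakagami-m channel, whose received SNR is Gamma-distributed. The sketch below is that textbook baseline, not the paper's result, and the function names are ours.

```python
import numpy as np
from scipy import integrate, optimize, stats

def opra_spectral_efficiency(m, gamma_bar):
    """Perfect-CSI OPRA (water-filling) spectral efficiency in bits/s/Hz
    over Nakagami-m fading: received SNR gamma ~ Gamma(m, gamma_bar/m)."""
    snr = stats.gamma(a=m, scale=gamma_bar / m)

    def power_constraint(g0):
        # Average-power constraint: E[1/g0 - 1/gamma; gamma >= g0] = 1.
        val, _ = integrate.quad(lambda g: (1 / g0 - 1 / g) * snr.pdf(g),
                                g0, np.inf)
        return val - 1.0

    # The cutoff SNR g0 always lies in (0, 1], so bracket the root there.
    g0 = optimize.brentq(power_constraint, 1e-6, 1.0)
    cap, _ = integrate.quad(lambda g: np.log2(g / g0) * snr.pdf(g),
                            g0, np.inf)
    return cap

print(opra_spectral_efficiency(m=2.0, gamma_bar=10.0))  # m=2, mean SNR 10
```

Below the cutoff g0 transmission is suspended; above it, power and rate grow with the channel gain, which is the adaptation policy the OPRA acronym refers to.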
Category: Digital Signal Processing

[1] viXra:1705.0006 [pdf] submitted on 2017-05-01 07:33:55

Cloud Storage Services

Authors: George Rajna
Comments: 33 Pages.

Adding to strong recent demonstrations that particles of light perform what Einstein called "spooky action at a distance," in which two separated objects can have a connection that exceeds everyday experience, physicists at the National Institute of Standards and Technology (NIST) have confirmed that particles of matter can act really spooky too. [17] How fast will a quantum computer be able to calculate? While fully functional versions of these long-sought technological marvels have yet to be built, one theorist at the National Institute of Standards and Technology (NIST) has shown that, if they can be realized, there may be fewer limits to their speed than previously put forth. [16] Unlike experimental neuroscientists who deal with real-life neurons, computational neuroscientists use model simulations to investigate how the brain functions. [15] A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14] A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by combining aspects of data processing and artificial intelligence, and has come up with what they are calling a differentiable neural computer (DNC). In their paper published in the journal Nature, they describe the work they are doing and where they believe it is headed. To make the work more accessible to the public, team members Alexander Graves and Greg Wayne have posted an explanatory page on the DeepMind website. [13] Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics. [12]
Category: Digital Signal Processing