**Previous months:**

2007 - 0702(58) - 0703(50) - 0704(5) - 0705(1) - 0706(8) - 0707(2) - 0708(3) - 0709(3) - 0710(1) - 0711(6) - 0712(3)

2008 - 0801(4) - 0802(4) - 0803(2) - 0804(9) - 0805(4) - 0806(1) - 0807(12) - 0808(6) - 0809(3) - 0810(16) - 0811(5) - 0812(9)

2009 - 0901(3) - 0902(7) - 0903(6) - 0904(5) - 0907(48) - 0908(109) - 0909(61) - 0910(66) - 0911(64) - 0912(54)

2010 - 1001(46) - 1002(51) - 1003(265) - 1004(138) - 1005(110) - 1006(67) - 1007(55) - 1008(91) - 1009(71) - 1010(61) - 1011(76) - 1012(52)

2011 - 1101(100) - 1102(55) - 1103(120) - 1104(82) - 1105(40) - 1106(60) - 1107(52) - 1108(49) - 1109(59) - 1110(69) - 1111(112) - 1112(86)

2012 - 1201(99) - 1202(79) - 1203(90) - 1204(93) - 1205(111) - 1206(89) - 1207(98) - 1208(222) - 1209(95) - 1210(160) - 1211(136) - 1212(139)

2013 - 1301(171) - 1302(145) - 1303(196) - 1304(147) - 1305(182) - 1306(204) - 1307(151) - 1308(138) - 1309(187) - 1310(238) - 1311(161) - 1312(198)

2014 - 1401(183) - 1402(140) - 1403(841) - 1404(249) - 1405(324) - 1406(174) - 1407(203) - 1408(216) - 1409(219) - 1410(183) - 1411(478) - 1412(237)

2015 - 1501(224) - 1502(215) - 1503(238) - 1504(212) - 1505(210) - 1506(200) - 1507(199) - 1508(350) - 1509(259) - 1510(452) - 1511(271) - 1512(438)

2016 - 1601(343) - 1602(346) - 1603(368) - 1604(326) - 1605(291) - 1606(311) - 1607(481) - 1608(407) - 1609(396) - 1610(363) - 1611(392) - 1612(399)

2017 - 1701(435)

Any replacements are listed further down

[17439] **viXra:1701.0553 [pdf]**
*submitted on 2017-01-20 10:05:13*

**Authors:** George Rajna

**Comments:** 28 Pages.

Physicists have proposed that violations of energy conservation in the early universe, as predicted by certain modified theories of quantum mechanics and quantum gravity, may explain the cosmological constant problem, which is sometimes referred to as "the worst theoretical prediction in the history of physics." [20] Scientists from The University of Manchester working on a revolutionary telescope project have harnessed the power of distributed computing from the UK's GridPP collaboration to tackle one of the Universe's biggest mysteries – the nature of dark matter and dark energy. [18] In the search for the mysterious dark matter, physicists have used elaborate computer calculations to come up with an outline of the particles of this unknown form of matter. [17] Unlike x-rays that the naked eye can't see but equipment can measure, scientists have yet to detect dark matter after three decades of searching, even with the world's most sensitive instruments. [16] Scientists have lost their latest round of hide-and-seek with dark matter, but they're not out of the game. [15] A new study is providing evidence for the presence of dark matter in the innermost part of the Milky Way, including in our own cosmic neighborhood and the Earth's location. The study demonstrates that large amounts of dark matter exist around us, and also between us and the Galactic center. The result constitutes a fundamental step forward in the quest for the nature of dark matter. [14] Researchers may have uncovered a way to observe dark matter thanks to a discovery involving X-ray emissions. [13] Between 2009 and 2013, the Planck satellite observed relic radiation, sometimes called cosmic microwave background (CMB) radiation. Today, with a full analysis of the data, the quality of the map is now such that the imprints left by dark matter and relic neutrinos are clearly visible. 
[12] The gravitational force attracts matter, concentrating it in a small space and leaving much of space with a low matter concentration: dark matter and dark energy. The asymmetry between the masses of the electric charges (for example, the proton and the electron) can be understood through the asymmetrical Planck Distribution Law. This temperature-dependent energy distribution is asymmetric around the maximum intensity, where the annihilation of matter and antimatter is a high-probability event. The asymmetric sides create different frequencies of electromagnetic radiation at the same intensity level, compensating each other. One of these compensating ratios is the electron-proton mass ratio. The lower-energy side has no compensating intensity level; it is the dark energy, and the corresponding matter is the dark matter. The Weak Interaction changes the temperature-dependent Planck Distribution of the electromagnetic oscillations, changing the non-compensated dark matter rate and assigning this role to the sterile neutrino.
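
The abstract above leans on the claim that the Planck distribution is asymmetric about its maximum. That much is standard textbook physics and can be checked numerically; the sketch below evaluates the black-body spectral radiance at equal multiplicative offsets on either side of the Wien peak (it illustrates only this asymmetry, not the paper's further inferences):

```python
import math

# Check: the black-body (Planck) spectral radiance is not symmetric about
# its maximum.  Evaluate B_nu at equal multiplicative offsets around the
# Wien peak frequency and compare the two flanks.
h = 6.62607015e-34   # Planck constant, J*s
k = 1.380649e-23     # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s

def planck(nu, T):
    """Spectral radiance B_nu(nu, T), in W / (sr * m^2 * Hz)."""
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

T = 5000.0                        # temperature, K
nu_peak = 2.821439 * k * T / h    # Wien displacement law, frequency form

peak = planck(nu_peak, T)
left = planck(0.5 * nu_peak, T)   # same offset factor below the peak...
right = planck(1.5 * nu_peak, T)  # ...and above it

print(f"peak  {peak:.3e}")
print(f"left  {left:.3e}")        # the two flanks differ: the curve is
print(f"right {right:.3e}")       # asymmetric about its maximum
```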

**Category:** Astrophysics

[17438] **viXra:1701.0552 [pdf]**
*submitted on 2017-01-20 11:16:46*

**Authors:** George Rajna

**Comments:** 17 Pages.

Researchers have found a way to trigger the innate, but previously hidden, ability of graphene to act as a superconductor, meaning that it can be made to carry an electrical current with zero resistance. [28] Researchers in Japan have found a way to make the 'wonder material' graphene superconductive, which means electricity can flow through it with zero resistance. The new property adds to graphene's already impressive list of attributes, like the fact that it's stronger than steel, harder than diamond, and incredibly flexible. [27] Superconductivity is a rare physical state in which matter is able to conduct electricity (maintain a flow of electrons) without any resistance. It can only be found in certain materials, and even then it can only be achieved under controlled conditions of low temperatures and high pressures. New research from a team including Carnegie's Elissaios Stavrou, Xiao-Jia Chen, and Alexander Goncharov homes in on the structural changes underlying superconductivity in iron arsenide compounds, those containing iron and arsenic. [26] This paper explains the magnetic effect of the superconductive current from the observed effects of the accelerating electrons, naturally causing the experienced changes of the electric field potential along the electric wire. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but also the Heisenberg Uncertainty Relation, the wave-particle duality and the electron's spin, building the bridge between the Classical and Quantum Theories. The changing acceleration of the electrons explains the created negative electric field of the magnetic induction, the Higgs Field, the changing Relativistic Mass and the Gravitational Force, giving a Unified Theory of the physical forces. Taking into account the Planck Distribution Law of the electromagnetic oscillators also, we can explain the electron/proton mass rate and the Weak and Strong Interactions.

**Category:** Condensed Matter

[17437] **viXra:1701.0551 [pdf]**
*submitted on 2017-01-20 04:29:57*

**Authors:** Valentin Danci

**Comments:** 23 Pages.

Since 1905, when Einstein introduced the Special Relativity Theory, various researchers independently observed that his theory contains at least one more postulate besides the two postulates stated by him explicitly. Putting together all those observations about the different additional postulates, we will describe here how the Special Relativity Theory was unfortunately based on nineteen postulates, and how most of them were implied and used in Einstein's 1905 article, later in his article of 1910, and also further in his manuscript written between 1912 and 1914.

**Category:** Relativity and Cosmology

[17436] **viXra:1701.0550 [pdf]**
*submitted on 2017-01-20 06:31:09*

**Authors:** George Rajna

**Comments:** 14 Pages.

Molecules vibrate in many different ways, like tiny musical instruments. [8] For centuries, scientists believed that light, like all waves, couldn't be focused down smaller than its wavelength, just under a millionth of a metre. Now, researchers led by the University of Cambridge have created the world's smallest magnifying glass, which focuses light a billion times more tightly, down to the scale of single atoms. [7] A Purdue University physicist has observed a butterfly Rydberg molecule, a weak pairing of two highly excitable atoms that he predicted would exist more than a decade ago. [6] In a scientific first, a team of researchers from Macquarie University and the University of Vienna has developed a new technique to measure molecular properties, forming the basis for improvements in scientific instruments like telescopes, and with the potential to speed up the development of pharmaceuticals. [5] In the quantum world, physicists study the tiny particles that make up our classical world (neutrons, electrons, photons) either one at a time or in small numbers, because the behaviour of the particles is completely different on such a small scale. If you increase the number of particles being studied, eventually there will be enough particles that they no longer act quantum mechanically and must be identified as classical, just like our everyday world. But where is the line between the quantum world and the classical world? A group of scientists from the Okinawa Institute of Science and Technology Graduate University (OIST) explored this question by showing that what was thought to be a quantum phenomenon can be explained classically. [4] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but also the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin, building the Bridge between the Classical and Quantum Theories.
The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry.

**Category:** Quantum Physics

[17435] **viXra:1701.0549 [pdf]**
*submitted on 2017-01-20 07:23:35*

**Authors:** George Rajna

**Comments:** 33 Pages.

ORNL researchers have discovered a new type of quantum critical point, a new way in which materials change from one state of matter to another. [22] New research conducted at the University of Chicago has confirmed a decades-old theory describing the dynamics of continuous phase transitions. [21] No matter whether it is acoustic waves, quantum matter waves or optical waves of a laser—all kinds of waves can be in different states of oscillation, corresponding to different frequencies. Calculating these frequencies is part of the tools of the trade in theoretical physics. Recently, however, a special class of systems has caught the attention of the scientific community, forcing physicists to abandon well-established rules. [20] Until quite recently, creating a hologram of a single photon was believed to be impossible due to fundamental laws of physics. However, scientists at the Faculty of Physics, University of Warsaw, have successfully applied concepts of classical holography to the world of quantum phenomena. A new measurement technique has enabled them to register the first-ever hologram of a single light particle, thereby shedding new light on the foundations of quantum mechanics. [19] A combined team of researchers from Columbia University in the U.S. and the University of Warsaw in Poland has found that there appear to be flaws in traditional theory that describe how photodissociation works. [18] Ultra-peripheral collisions of lead nuclei at the LHC accelerator can lead to elastic collisions of photons with photons. [17] Physicists from Trinity College Dublin's School of Physics and the CRANN Institute, Trinity College, have discovered a new form of light, which will impact our understanding of the fundamental nature of light. [16] Light from an optical fiber illuminates the metasurface, is scattered in four different directions, and the intensities are measured by the four detectors. From this measurement the state of polarization of light is detected. 
[15] Converting a single photon from one color, or frequency, to another is an essential tool in quantum communication, which harnesses the subtle correlations between the subatomic properties of photons (particles of light) to securely store and transmit information. Scientists at the National Institute of Standards and Technology (NIST) have now developed a miniaturized version of a frequency converter, using technology similar to that used to make computer chips. [14]

**Category:** Quantum Physics

[17434] **viXra:1701.0548 [pdf]**
*submitted on 2017-01-20 01:16:07*

**Authors:** Elias Khalil

**Comments:** 2 Pages.

A new technology based on nanobubbles, developed and patented by the Spanish company Jeanologia, is known as e-flow. The e-flow process ‘breaks up’ the surface of the garment, achieving a soft hand feel and controlling shrinkage. A minimal quantity of water is needed and there is zero discharge from the process. Air from the atmosphere is introduced into an electro-flow reactor and subjected to an electromechanical shock, creating nanobubbles and a flow of wet air. The nanobubble mix is then transported into a rotating tumbler containing the denim garments and, on contact with them, produces a soft and natural hand feel. The garments are then dried in the same tumbler. When treating indigo-dyed garments with this technology, some indigo cross-contamination may occur; this can be eliminated by a dry ozone treatment.

**Category:** Chemistry

[17433] **viXra:1701.0546 [pdf]**
*submitted on 2017-01-19 12:04:24*

**Authors:** George Rajna

**Comments:** 22 Pages.

Symmetry is the essential basis of nature, which gives rise to conservation laws. By contrast, the breaking of symmetry is also indispensable for many phase transitions and nonreciprocal processes. Among various symmetry breaking phenomena, spontaneous symmetry breaking lies at the heart of many fascinating and fundamental properties of nature. [16] One of the biggest challenges in physics is to understand why everything we see in our universe seems to be formed only of matter, whereas the Big Bang should have created equal amounts of matter and antimatter. CERN's LHCb experiment is one of the best hopes for physicists looking to solve this longstanding mystery. [15] Imperial physicists have discovered how to create matter from light, a feat thought impossible when the idea was first theorized 80 years ago. [14] How can the LHC experiments prove that they have produced dark matter? They can't… not alone, anyway. [13] The race for the discovery of dark matter is on. Several experiments worldwide are searching for the mysterious substance and pushing the limits on the properties it may have. [12] Dark energy is a mysterious force that pervades all space, acting as a "push" to accelerate the universe's expansion. Despite making up 70 percent of the universe, dark energy was only discovered in 1998 by two teams observing Type Ia supernovae. A Type Ia supernova is a cataclysmic explosion of a white dwarf star. The best way of measuring dark energy just got better, thanks to a new study of Type Ia supernovae. [11] Newly published research reveals that dark matter is being swallowed up by dark energy, offering novel insight into the nature of dark matter and dark energy and what the future of our Universe might be. [10] The gravitational force attracts matter, concentrating it in a small space and leaving much of space with a low matter concentration: dark matter and dark energy.
There is an asymmetry between the masses of the electric charges (for example, the proton and the electron) that can be understood through the asymmetrical Planck Distribution Law. This temperature-dependent energy distribution is asymmetric around the maximum intensity, where the annihilation of matter and antimatter is a high-probability event. The asymmetric sides create different frequencies of electromagnetic radiation at the same intensity level, compensating each other. One of these compensating ratios is the electron-proton mass ratio. The lower-energy side has no compensating intensity level; it is the dark energy, and the corresponding matter is the dark matter.

**Category:** High Energy Particle Physics

[17432] **viXra:1701.0545 [pdf]**
*submitted on 2017-01-19 12:19:53*

**Authors:** Andrew Beckwith

**Comments:** 11 Pages. submitted to JHEPGC for review

We examine the role of particle nucleation in the initial universe, and argue that there is a small effect due to particle nucleation in terms of lowering the initial temperature, in tandem with energy density and scale factor contributions. If such scaling exists as a major-order effect, then quenching of temperature proportional to a vacuum nucleation at or before the electroweak era is heavily influenced by a number, n, which is either a quantum number (quantum cosmology) or a particle count before the electroweak era. If the supposition is for a particle count, say of gravitons from a prior universe to today's universe, then by comparing a thermodynamic argument against a modified Heisenberg uncertainty principle as to what this says about particle-count information, we have a richer cosmological picture to contend with. We close with a speculation as to how a quantum teleportation picture for Pre-Planckian space-time physics may influence our physics discussion.

**Category:** Quantum Gravity and String Theory

[17431] **viXra:1701.0544 [pdf]**
*submitted on 2017-01-19 12:56:31*

**Authors:** Thomas Görnitz

**Comments:** 40 Pages.

Based on the simplest possible quantum structures, that is, the abstract free-of-meaning quantum information (AQI) bits establishing the fundamental substance referred to as protyposis, it is shown, using just three plausible postulates, how a cosmological model can be derived that describes the observational data better than the "flat ΛCDM" standard model. The postulates are the Planck relation, E = hc/λ, the existence of a distinguished velocity, i.e. the velocity of light in vacuum, and the first law of thermodynamics. Assumptions concerning inexplicable fictitious entities, such as "inflation" or "dark energy", can be dispensed with. The model solves the "cosmological problems".
Einstein's equations result by requiring that the cosmic relation between the radius of curvature and the energy density can be transferred to local density variations within the cosmos. General Relativity is shown to be a classical approximation of the quantum cosmology. This clarifies, in principle, the relations between quantum theory and gravity theory.
The AQI concept allows for a simple derivation of black hole entropies and, moreover, establishes a rationalization of the gauge groups associated with the three fundamental forces. Relativistic particles with and without rest mass can be constructed from the AQI bits, and thus all objects described in the natural sciences. In living beings, the AQI can manifest both in the material body and in meaningful quantum information of the psyche, eventually closing the "explanatory gap" between "body and mind".

**Category:** Relativity and Cosmology

[17430] **viXra:1701.0543 [pdf]**
*submitted on 2017-01-19 13:07:10*

**Authors:** George Rajna

**Comments:** 31 Pages.

A team of researchers from several institutions in Israel has, for the first time, identified a molecule that phages use to communicate with one another. [17] Molecules that change colour can be used to follow in real-time how bacteria form a protective biofilm around themselves. This new method, which has been developed in collaboration between researchers at Linköping University and Karolinska Institutet in Sweden, may in the future become significant both in medical care and the food industry, where bacterial biofilms are a problem. [16] Researchers led by Carnegie Mellon University physicist Markus Deserno and University of Konstanz (Germany) chemist Christine Peter have developed a computer simulation that crushes viral capsids. By allowing researchers to see how the tough shells break apart, the simulation provides a computational window for looking at how viruses and proteins assemble. [15] IBM scientists have developed a new lab-on-a-chip technology that can, for the first time, separate biological particles at the nanoscale and could enable physicians to detect diseases such as cancer before symptoms appear. [14] Scientists work toward storing digital information in DNA. [13] Leiden theoretical physicists have proven that DNA mechanics, in addition to genetic information in DNA, determines who we are. Helmut Schiessel and his group simulated many DNA sequences and found a correlation between mechanical cues and the way DNA is folded. They have published their results in PLoS One. [12] We model the electron clouds of nucleic acids in DNA as a chain of coupled quantum harmonic oscillators with dipole-dipole interaction between nearest neighbours resulting in a van der Waals type bonding. [11] Scientists have discovered a secret second code hiding within DNA which instructs cells on how genes are controlled. The amazing discovery is expected to open new doors to the diagnosis and treatment of diseases, according to a new study. 
[10] There is also a connection between statistical physics and evolutionary biology, since the arrow of time operates in biological evolution as well. From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: the former tend to be much better at capturing energy from their environment and dissipating that energy as heat. [8] This paper contains a review of quantum entanglement investigations in living systems, and in the quantum mechanically modeled photoactive prebiotic kernel systems. [7] The human body is a constant flux of thousands of chemical/biological interactions and processes connecting molecules, cells, organs, and fluids throughout the brain, body, and nervous system. Until recently it was thought that all these interactions operated in a linear sequence, passing on information much like a runner passing the baton to the next runner. However, the latest findings in quantum biology and biophysics have discovered that there is in fact a tremendous degree of coherence within all living systems. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but also the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making it possible to understand Quantum Biology.

**Category:** Physics of Biology

[17429] **viXra:1701.0542 [pdf]**
*submitted on 2017-01-19 08:52:21*

**Authors:** George Rajna

**Comments:** 31 Pages.

A strong, short light pulse can record data on a magnetic layer of yttrium iron garnet doped with Co-ions. This was discovered by researchers from Radboud University in the Netherlands and Bialystok University in Poland. The novel mechanism outperforms existing alternatives, allowing the fastest read-write magnetic recording accompanied by unprecedentedly low heat load. [22]
It goes by the unwieldy acronym STT-MRAM, which stands for spin-transfer torque magnetic random access memory. [21]
Memory chips are among the most basic components in computers. The random access memory is where processors temporarily store their data, which is a crucial function. Researchers from Dresden and Basel have now managed to lay the foundation for a new memory chip concept. [20]
Researchers have built a record energy-efficient switch, which uses the interplay of electricity and a liquid form of light, in semiconductor microchips. The device could form the foundation of future signal processing and information technologies, making electronics even more efficient. [19]
The magnetic structure of a skyrmion is symmetrical around its core; arrows indicate the direction of spin. [18]
According to current estimates, dozens of zettabytes of information will be stored electronically by 2020, which will rely on physical principles that facilitate the use of single atoms or molecules as basic memory cells. [17]
EPFL scientists have developed a new perovskite material with unique properties that can be used to build next-generation hard drives. [16]
Scientists have fabricated a superlattice of single-atom magnets on graphene with a density of 115 terabits per square inch, suggesting that the configuration could lead to next-generation storage media. [15]
Now a researcher and his team at Tyndall National Institute in Cork have made a 'quantum leap' by developing a technical step that could enable the use of quantum computers sooner than expected. [14]
A method to produce significant amounts of semiconducting nanoparticles for light-emitting displays, sensors, solar panels and biomedical applications has gained momentum with a demonstration by researchers at the Department of Energy's Oak Ridge National Laboratory. [13]
A source of single photons that meets three important criteria for use in quantum-information systems has been unveiled in China by an international team of physicists. Based on a quantum dot, the device is an efficient source of photons that emerge as solo particles that are indistinguishable from each other. The researchers are now trying to use the source to create a quantum computer based on "boson sampling". [11]
With the help of a semiconductor quantum dot, physicists at the University of Basel have developed a new type of light source that emits single photons. For the first time, the researchers have managed to create a stream of identical photons. [10]
Optical photons would be ideal carriers to transfer quantum information over large distances. Researchers envisage a network where information is processed in certain nodes and transferred between them via photons. [9]
While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer using Quantum Information.
In August 2013, the achievement of "fully deterministic" quantum teleportation, using a hybrid technique, was reported. On 29 May 2014, scientists announced a reliable way of transferring data by quantum teleportation. Quantum teleportation of data had been done before but with highly unreliable methods.
The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories.
The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry.
The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making it possible to build the Quantum Computer with the help of Quantum Information.

**Category:** Digital Signal Processing

[17428] **viXra:1701.0541 [pdf]**
*submitted on 2017-01-19 07:15:53*

**Authors:** Manik Dawar

**Comments:** 7 Pages. Contact information: manikdawar@live.com

The entirety of this document assumes the existence of a maximum speed with which any entity in the universe can travel from a set of points in space to any other set of points in space. The consequences on the motion of the constituents of a typical system of particles, when the system is travelling at a speed which is close to the speed limit of the universe, are initially subjected to a qualitative analysis, the conclusions of which hint at a mechanical definition of time. A quantitative analysis of the same reveals the Lorentz Transformation Factor. The fact that the Lorentz transformation factor is derived on applying the definition of time, which was hinted from the qualitative analysis, supports that definition. The quantitative analysis, however, also revealed a different value (transformation factor*). Both the transformation factors are combined to form one transformation factor, which, given that n (the number of spatial dimensions in the universe through which any moving object traverses) is large enough, approximately equates to the Lorentz Transformation Factor. Thus, using the results derived here, the value of n might be revealed.
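
For reference, the textbook Lorentz transformation factor that the analysis above aims to recover (the paper's n-dependent variant is not reproduced here) is:

```latex
\gamma(v) = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
t' = \gamma\left(t - \frac{v\,x}{c^{2}}\right), \quad
x' = \gamma\,(x - v\,t)
```

As v approaches the universal speed limit c, γ diverges, which is the behaviour the qualitative analysis in the abstract appeals to.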

**Category:** Classical Physics

[17427] **viXra:1701.0540 [pdf]**
*submitted on 2017-01-18 19:08:15*

**Authors:** Michail Zak

**Comments:** 7 Pages.

A new physical principle for the simulation of PDEs is introduced. It is based upon replacing the PDE to be solved with a system of ODEs for which that PDE is the corresponding Liouville equation. The proposed approach has polynomial (rather than exponential) algorithmic complexity, and it is applicable to nonlinear parabolic, hyperbolic, and elliptic PDEs.
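
The correspondence the abstract invokes, that a PDE can be the Liouville (continuity) equation of an underlying ODE system, can be illustrated in its simplest textbook case: constant-coefficient advection, where transporting samples along the characteristic ODE reproduces the PDE solution exactly. This is a toy sketch of that correspondence, not the author's general scheme:

```python
import math

# Simplest instance of the Liouville/ODE correspondence: the density
# rho(x, t) carried along trajectories of the ODE dx/dt = a obeys the
# Liouville (continuity) equation rho_t + (a * rho)_x = 0, which for
# constant a is the advection PDE.  So the PDE can be solved by
# integrating the characteristic ODE instead of discretizing the PDE.
a = 1.5        # constant advection speed (toy value)
t_final = 2.0

def rho0(x):
    """Initial density profile."""
    return math.exp(-x * x)

def solve_via_ode(x, t, steps=1000):
    """Integrate dx/dt = a backwards from (x, t) to t = 0 with Euler
    steps, then evaluate the initial density at the foot of the
    characteristic."""
    dt = t / steps
    for _ in range(steps):
        x -= a * dt
    return rho0(x)

x_query = 3.2
ode_val = solve_via_ode(x_query, t_final)
exact = rho0(x_query - a * t_final)   # exact PDE solution: rho0(x - a*t)
print(ode_val, exact)                 # the two agree
```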

**Category:** Mathematical Physics

[17426] **viXra:1701.0539 [pdf]**
*submitted on 2017-01-18 22:00:17*

**Authors:** Rochelle Forrester

**Comments:** 11 Pages.

Quantum physicists have made many attempts to solve the quantum measurement problem, but no solution seems to have received widespread acceptance. The time has come for a new approach. In Sense Perception and Reality: A Theory of Perceptual Relativity, Quantum Mechanics and the Observer Dependent Universe and in a new paper, The End of Realism, I suggest the quantum measurement problem is caused by a failure to understand that each species has its own sensory world, and that when we say the wave function collapses and brings a particle into existence, we mean the particle is brought into existence in the human sensory world by the combined operation of the human sensory apparatus, particle detectors, and the experimental set-up. This is similar to the Copenhagen Interpretation suggested by Niels Bohr and others, but the understanding that the collapse of the wave function brings a particle into existence in the human sensory world removes the need for a dividing line between the quantum world and the macro world. The same rules can apply to both worlds, and the ideas stated in this paper considerably strengthen the Copenhagen Interpretation of quantum mechanics.

**Category:** Quantum Physics

[17425] **viXra:1701.0538 [pdf]**
*submitted on 2017-01-19 02:17:57*

**Authors:** George Rajna

**Comments:** 12 Pages.

Usha Mallik and her team used a grant from the U.S. Department of Energy to help build a sub-detector at the Large Hadron Collider, the world's largest and most powerful particle accelerator, located in Switzerland. They're running experiments on the sub-detector to search for a pair of bottom quarks—subatomic yin-and-yang particles that should be produced about 60 percent of the time a Higgs boson decays. [8]
A new way of measuring how the Higgs boson couples to other fundamental particles has been proposed by physicists in France, Israel and the US. Their technique would involve comparing the spectra of several different isotopes of the same atom to see how the Higgs force between the atom's electrons and its nucleus affects the atomic energy levels. [7]
The magnetic induction creates a negative electric field, causing an electromagnetic inertia responsible for the relativistic mass change; it is the mysterious Higgs Field giving mass to the particles. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate by the diffraction patterns. The accelerating charges explain not only the Maxwell Equations and the Special Relativity, but also the Heisenberg Uncertainty Relation, the wave-particle duality and the electron’s spin, building the bridge between the Classical and Relativistic Quantum Theories. The self-maintained electric potential of the accelerating charges is equivalent to the space-time curvature of General Relativity, and since this holds on the quantum level as well, it gives the basis of Quantum Gravity. The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, making it a natural part of the relativistic quantum theory.

**Category:** High Energy Particle Physics

[17424] **viXra:1701.0537 [pdf]**
*submitted on 2017-01-18 13:20:25*

**Authors:** Gary R. Prok

**Comments:** 2 Pages.

There has been disagreement about the validity of Landauer's Principle, which places a limitation on computational energy efficiency. The Principle is predicated on a finite entropy increase associated with every erasure of a memory register. Existence of a reversible memory register reduces Landauer's Principle to a disproven conjecture.
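For reference, the bound the abstract disputes is easy to state numerically: Landauer's Principle says erasing one bit dissipates at least k_B·T·ln 2 of heat. A minimal sketch of that bound (my illustration, not part of the paper):

```python
import math

def landauer_bound(temperature_kelvin: float, bits: int = 1) -> float:
    """Minimum heat (joules) dissipated by erasing `bits` bits,
    per Landauer's Principle: E >= N * k_B * T * ln 2."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
    return bits * k_B * temperature_kelvin * math.log(2)

# At room temperature (300 K) the bound is about 2.87e-21 J per bit.
print(landauer_bound(300.0))
```

The bound scales linearly with both temperature and the number of erased bits, which is why it only matters near the thermodynamic limit of computation.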

**Category:** Classical Physics

[17423] **viXra:1701.0536 [pdf]**
*submitted on 2017-01-18 13:23:54*

**Authors:** Gary R. Prok

**Comments:** 9 Pages.

Maxwell’s demon challenges our interpretation of thermodynamics and our understanding of the Second Law of thermodynamics. The Szilard engine is a gedanken instantiation of Maxwell’s Demon that is amenable to standard thermodynamic analysis. The paradox of Maxwell’s demon as presented by the Szilard engine is considered to have been solved by Landauer’s principle. A classical analysis of the Szilard engine, presented here, shows that Landauer’s principle is not needed to resolve the paradox of the demon. Classical thermodynamics is all that is needed.
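The classical accounting the abstract appeals to can be checked numerically: a one-molecule ideal gas expanding isothermally from V/2 to V delivers W = ∫P dV = k_B·T·ln 2, the same quantity that appears in Landauer's bound. A sketch under the ideal-gas assumption (my illustration, not the paper's analysis):

```python
import math

def isothermal_work(temperature: float, v_initial: float, v_final: float,
                    n_molecules: int = 1, steps: int = 100000) -> float:
    """Work extracted from an ideal gas expanding isothermally,
    by midpoint-rule integration of P dV with P = N * k_B * T / V."""
    k_B = 1.380649e-23  # J/K
    dv = (v_final - v_initial) / steps
    work = 0.0
    for i in range(steps):
        v_mid = v_initial + (i + 0.5) * dv
        work += (n_molecules * k_B * temperature / v_mid) * dv
    return work

T = 300.0
w = isothermal_work(T, 0.5, 1.0)        # one molecule, V/2 -> V
exact = 1.380649e-23 * T * math.log(2)  # k_B * T * ln 2
print(w, exact)  # the numerical integral matches the closed form
```

The single-molecule expansion step of the Szilard cycle is exactly this integral, which is why the demon's apparent gain is k_B·T·ln 2 per cycle.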

**Category:** Classical Physics

[17422] **viXra:1701.0535 [pdf]**
*submitted on 2017-01-18 13:49:36*

**Authors:** Michail Zak

**Comments:** 13 Pages.

This work is inspired by the discovery of a new class of dynamical systems described by ODEs coupled with their Liouville equation. These systems are called self-controlled, since the role of actuators is played by the probability produced by the Liouville equation. Following the Madelung equation, which belongs to this class, non-Newtonian properties such as randomness, entanglement, and probability interference, typical for quantum systems, have been described. Special attention was paid to the capability to violate the second law of thermodynamics, which makes these systems neither Newtonian nor quantum. It has been shown that self-controlled dynamical systems can be linked to mathematical models of living systems. The discovery of isolated dynamical systems that can decrease entropy in violation of the second law of thermodynamics, and the resemblance of these systems to living systems, implies that Life can slow down the heat death of the Universe, and that this can be associated with the Purpose of Life.
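A loose numerical caricature of the density feedback (my construction, not the paper's model): replace the exact Liouville density with a Gaussian fitted to an ensemble of trajectories, and let each trajectory drift up the gradient of the log-density. The feedback contracts the ensemble, so its entropy (which grows under any ordinary diffusion) decreases:

```python
import math
import random
import statistics

def step(ensemble, drift_gain=1.0, dt=0.01):
    """One Euler step of dx/dt = D * d(ln rho)/dx, with rho a Gaussian
    fitted to the current ensemble (a crude stand-in for Liouville
    feedback).  For a Gaussian, d(ln rho)/dx = -(x - mu) / sigma^2."""
    mu = statistics.fmean(ensemble)
    var = statistics.pvariance(ensemble, mu)
    return [x + dt * drift_gain * (-(x - mu) / var) for x in ensemble]

random.seed(0)
ensemble = [random.gauss(0.0, 2.0) for _ in range(2000)]
std0 = statistics.pstdev(ensemble)
for _ in range(150):          # integrate to t = 1.5
    ensemble = step(ensemble)
std1 = statistics.pstdev(ensemble)
# The ensemble contracts: for a Gaussian, entropy ~ ln(sigma) falls.
print(std0, std1)  # std1 < std0
```

For this parametric density the drift gives dσ²/dt = −2D, so the spread shrinks at a constant rate; that deterministic contraction is the toy analogue of the entropy decrease discussed in the abstract.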

**Category:** Astrophysics

[17421] **viXra:1701.0534 [pdf]**
*submitted on 2017-01-18 12:41:29*

**Authors:** George Rajna

**Comments:** 17 Pages.

An important step towards a completely new experimental access to quantum physics has been made at University of Konstanz. The team of scientists headed by Professor Alfred Leitenstorfer has now shown how to manipulate the electric vacuum field and thus generate deviations from the ground state of empty space which can only be understood in the context of the quantum theory of light. [10] Physicists at the National Institute of Standards and Technology (NIST) have cooled a mechanical object to a temperature lower than previously thought possible, below the so-called "quantum limit." [9] For the past 100 years, physicists have been studying the weird features of quantum physics, and now they're trying to put these features to good use. One prominent example is that quantum superposition (also known as quantum coherence)—which is the property that allows an object to be in two states at the same time—has been identified as a useful resource for quantum communication technologies. [8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. [6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.

**Category:** Quantum Physics

[17420] **viXra:1701.0533 [pdf]**
*submitted on 2017-01-18 05:02:17*

**Authors:** J. Dunning-Davies, J. P. Dunning-Davies

**Comments:** 7 Pages.

Ever since Oliver Heaviside's suggestion of the possible existence of a set of equations, analogous to Maxwell's equations for the electromagnetic field, to describe the gravitational field, others have considered and built on the original notion. However, if such equations do exist and really are analogous to Maxwell's electromagnetic equations, new problems could arise related to presently accepted notions concerning special relativity. This note, as well as offering a translation of a highly relevant paper by Carstoiu, addresses these concerns in the same manner as similar concerns regarding Maxwell's equations were addressed.

**Category:** Mathematical Physics

[17419] **viXra:1701.0532 [pdf]**
*submitted on 2017-01-18 07:32:42*

**Authors:** George Rajna

**Comments:** 17 Pages.

One of the deepest mysteries of physics today is why we seem to live in a world composed only of matter, while the Big Bang should have created equal amounts of matter and antimatter. [13] A precise measurement of absolute beam intensity is a key parameter to monitor any losses in a beam and to calibrate the absolute number of particles delivered to the experiments. [12] In a paper published today in the journal Science, the ASACUSA experiment at CERN reported a new precision measurement of the mass of the antiproton relative to that of the electron. [11] When two protons approaching each other pass close enough together, they can "feel" each other, similar to the way that two magnets can be drawn closely together without necessarily sticking together. According to the Standard Model, at this grazing distance, the protons can produce a pair of W bosons. [10] The fact that the neutron is slightly more massive than the proton is the reason why atomic nuclei have exactly those properties that make our world and ultimately our existence possible. Eighty years after the discovery of the neutron, a team of physicists from France, Germany, and Hungary headed by Zoltán Fodor, a researcher from Wuppertal, has finally calculated the tiny neutron-proton mass difference. [9] Taking into account the Planck Distribution Law of the electromagnetic oscillators, we can explain the electron/proton mass rate and the Weak and Strong Interactions. Lattice QCD gives the same results as the diffraction patterns of the electromagnetic oscillators, explaining the color confinement and the asymptotic freedom of the Strong Interactions.

**Category:** High Energy Particle Physics

[17418] **viXra:1701.0531 [pdf]**
*submitted on 2017-01-17 17:36:56*

**Authors:** Fu Yuhua

**Comments:** 6 Pages.

As No. 3 of the comparative physics series papers, this paper mainly discusses the comparative studies between the original law of conservation of energy and the Computer Information Library Clusters; based on the multiform laws of conservation of energy, the concept of "law clusters of conservation of generalized energy" is presented, in which any physical quantity can be regarded as "generalized energy", and any physical formula and equation can be transformed into a law of conservation; therefore all the physical laws as well as formulas and equations can be classified as "physical law clusters of conservation of generalized energy" (sometimes simplified to "law clusters of conservation of generalized energy"). Within the law clusters of conservation of generalized energy there are some source laws. According to a source law, some related laws as well as formulas and equations can be derived; for example, the law of gravity and Newton's second law can be derived from the law of conservation of energy; thus "law clusters of conservation of generalized energy" can be simplified to "law clusters of physical source law". As the number of source laws in the law clusters is reduced to some degree, all the laws of physics are able to be written on a T-shirt in the form of "the simplest law clusters of physical source law". In order to deal with practical problems, a "variational principle of the simplest law clusters of physical source law" can be established.

**Category:** Thermodynamics and Energy

[17417] **viXra:1701.0530 [pdf]**
*submitted on 2017-01-17 20:03:50*

**Authors:** Michail Zak

**Comments:** 17 Pages.

A new class of dynamical systems with a preset type of interference of probabilities is introduced. It is obtained from the extension of the Madelung equation by replacing the quantum potential with a specially selected feedback from the Liouville equation. It has been proved that these systems are different from both Newtonian and quantum systems, but they can be useful for modeling spontaneous collective novelty phenomena when emerging outputs are qualitatively different from the weighted sum of individual inputs. Formation of language and fast decision-making processes are discussed as potential applications of the probability interference.

**Category:** Artificial Intelligence

[17416] **viXra:1701.0529 [pdf]**
*submitted on 2017-01-17 20:26:55*

**Authors:** Michail Zak

**Comments:** 22 Pages.

The n-body problem is the problem of predicting the individual motions of a group of objects interacting with each other via conservative forces. These forces can be of gravitational origin (celestial mechanics), inter-molecular origin (molecular dynamics), or representing the Coulomb potential (structural biology). In the most common version, the trajectories of the objects are determined by numerically solving Newton's equations of motion for a system of interacting particles. The non-conservative version of the interaction forces becomes important in the case of the n-body problem that incorporates the effects of radiation pressure, Poynting-Robertson (P-R) drag, and solar wind drag. The general method of numerical solution of the corresponding system of ODEs was originally conceived within theoretical physics in the late 1950s [1,2], but is applied today mostly in chemical physics, materials science and the modeling of biomolecules. The most significant "side effect" of the existing numerical methods for n-body problems is chaos, when different numerical runs with the same initial conditions result in different trajectories. Although numerical errors can contribute to chaos, the primary origin of chaos is physical instability [3]. In this work, a general approach to a probabilistic description of chaos in the n-body problem with conservative forces is introduced.
The n-body problem is a classic astronomical and physical problem that naturally follows from the two-body problem first solved by Newton in his Principia in 1687. The efforts of many famous mathematicians have been devoted to this difficult problem, including Euler and Lagrange (1772), Jacobi (1836), Hill (1878), Poincaré (1899), Levi-Civita (1905), and Birkhoff (1915). However, despite centuries of exploration, there is no clear structure of the solution of the general n- or even three-body problem, as there are no coordinate transformations that can simplify the problem, and there is more and more evidence that, in general, the solutions of n-body problems are chaotic. Failure to find a general analytical structure of the solution shifted the effort towards numerical methods, and many ODE solvers offer a variety of advanced numerical methods for the solution.
2. Chaos in classical dynamics
We start this section by revisiting the mathematical formalism of chaos in a non-traditional way that is based upon the concept of orbital instability.
The concept of randomness entered Newtonian dynamics almost a century ago: in 1926, J. Synge introduced a new type of instability, orbital instability, in classical mechanics [4], which can be considered a precursor of chaos, formulated a couple of decades later [5]. The theory of chaos was inspired by the fact that in recent years, in many different domains of science (physics, chemistry, biology, engineering), systems with a similar strange behavior were frequently encountered, displaying the irregular and unpredictable behavior called chaotic. Currently the theory of chaos that describes such systems is well established. However, two unsolved problems remain: prediction of chaos (without numerical runs), and analytical description of chaos in terms of the probability density that would formally follow from the original ODE. This paper proposes a contribution to the solution of these problems, illustrated by chaos in inertial systems.
a. Orbital instability as a precursor of chaos.
Chaos is a special type of instability in which the system does not have an alternative stable state and displays an irregular aperiodic motion. Obviously, this kind of instability can be associated only with ignorable variables, i.e. with variables that do not contribute to the energy of the system. In order to demonstrate this kind of instability, consider an inertial motion of a particle M of unit mass on a smooth pseudosphere S having a constant negative curvature G0 (Fig. 1).
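The standard numerical route for n-body trajectories mentioned in the abstract can be sketched with the simplest symplectic scheme. Everything below (G = 1, unit masses, an equal-mass binary on a circular orbit) is an assumed toy configuration, not from the paper; the point is that a leapfrog (kick-drift-kick) integrator keeps the energy drift tiny over many steps:

```python
import math

def accelerations(pos, masses, G=1.0):
    """Pairwise Newtonian gravitational accelerations in 2D."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def energy(pos, vel, masses, G=1.0):
    """Total mechanical energy: kinetic plus pairwise potential."""
    kin = sum(0.5 * m * (v[0] ** 2 + v[1] ** 2) for m, v in zip(masses, vel))
    pot = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            pot -= G * masses[i] * masses[j] / math.dist(pos[i], pos[j])
    return kin + pot

# Equal-mass binary on a circular orbit: separation 1, speed sqrt(2)/2 each.
masses = [1.0, 1.0]
pos = [[-0.5, 0.0], [0.5, 0.0]]
v = math.sqrt(2.0) / 2.0
vel = [[0.0, -v], [0.0, v]]
e0, dt = energy(pos, vel, masses), 0.001

for _ in range(10000):  # leapfrog: half kick, drift, half kick
    acc = accelerations(pos, masses)
    vel = [[vx + 0.5 * dt * ax, vy + 0.5 * dt * ay]
           for (vx, vy), (ax, ay) in zip(vel, acc)]
    pos = [[x + dt * vx, y + dt * vy] for (x, y), (vx, vy) in zip(pos, vel)]
    acc = accelerations(pos, masses)
    vel = [[vx + 0.5 * dt * ax, vy + 0.5 * dt * ay]
           for (vx, vy), (ax, ay) in zip(vel, acc)]

print(e0, energy(pos, vel, masses))  # energy drift stays well below 1e-5
```

With three or more bodies the same code exhibits the sensitivity the abstract describes: two runs differing by a tiny perturbation of the initial conditions separate exponentially, even though each run conserves energy well.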

**Category:** Classical Physics

[17415] **viXra:1701.0528 [pdf]**
*submitted on 2017-01-17 23:40:30*

**Authors:** Yannan Yang

**Comments:** 2 Pages.

Concerning the paradox of magnetic interaction between two parallel moving charged particle beams, there has been much discussion. In this paper, an experimental design is proposed by which we can verify whether there really is a magnetic interaction between the two charged particle beams.

**Category:** Relativity and Cosmology

[17414] **viXra:1701.0527 [pdf]**
*submitted on 2017-01-17 15:18:42*

**Authors:** George Rajna

**Comments:** 26 Pages.

Technion researchers have demonstrated, for the first time, that laser emissions can be created through the interaction of light and water waves. This "water-wave laser" could someday be used in tiny sensors that combine light waves, sound and water waves, or as a feature on microfluidic "lab-on-a-chip" devices used to study cell biology and to test new drug therapies. [18] Researchers led by EPFL have built ultra-high quality optical cavities for the elusive mid-infrared spectral region, paving the way for new chemical and biological sensors, as well as promising technologies. [17] The research team led by Professor Hele Savin has developed a new light detector that can capture more than 96 percent of the photons covering visible, ultraviolet and infrared wavelengths. [16] A promising route to smaller, powerful cameras built into smartphones and other devices is to design optical elements that manipulate light by diffraction-the bending of light around obstacles or through small gaps-instead of refraction. [15] Converting a single photon from one color, or frequency, to another is an essential tool in quantum communication, which harnesses the subtle correlations between the subatomic properties of photons (particles of light) to securely store and transmit information. Scientists at the National Institute of Standards and Technology (NIST) have now developed a miniaturized version of a frequency converter, using technology similar to that used to make computer chips. [14] Harnessing the power of the sun and creating light-harvesting or light-sensing devices requires a material that both absorbs light efficiently and converts the energy to highly mobile electrical current. Finding the ideal mix of properties in a single material is a challenge, so scientists have been experimenting with ways to combine different materials to create "hybrids" with enhanced features. [13] Condensed-matter physicists often turn to particle-like entities called quasiparticles—such as excitons, plasmons, magnons—to explain complex phenomena. Now Gil Refael from the California Institute of Technology in Pasadena and colleagues report the theoretical concept of the topological polariton, or "topolariton": a hybrid half-light, half-matter quasiparticle that has special topological properties and might be used in devices to transport light in one direction. [12]

**Category:** Condensed Matter

[17413] **viXra:1701.0526 [pdf]**
*submitted on 2017-01-17 12:05:57*

**Authors:** Mark M. Grinshtein

**Comments:** 8 Pages. in Russian

The article reviews the author's concept of information-wave medicine (IWM). It is shown that IWM is not connected with any special powers of the author, but is a branch of medical science. It is also shown that biolocation is likewise a science, whose mechanism has not yet been understood.

**Category:** Physics of Biology

[17412] **viXra:1701.0525 [pdf]**
*submitted on 2017-01-17 07:37:05*

**Authors:** Sjaak Uitterdijk

**Comments:** 4 Pages.

The article shows that the present worldwide production of sustainable energy is negligible relative to the worldwide need for energy. As a result, increasing the production of sustainable energy, in order to try to reduce CO2 emissions, will not have any significant effect. Only one measure will do; however, such a measure will not be received as a popular one.

**Category:** Climate Research

[17411] **viXra:1701.0524 [pdf]**
*submitted on 2017-01-17 04:12:19*

**Authors:** Terrence J. McMahon

**Comments:** 32 Pages.

Unification of the physical constants is announced, where gravity, quantum theory, and general relativity are linked via new physics. Unification involves a new, combined ‘gravito-electromagnetic’ constant, linked via Pi and Phi (the golden mean). All constants, most of which are found to run at high energies, are related via this expression. Energy, mass, and the gravitational constant are explained in those terms. The photon constant runs inversely to the gravitational constant, while both are united via a running fine-structure constant. Mass is not conserved, running with energy. Energy however is conserved. Space is a superconductor, where photons have mass. The Hubble constant is redefined, providing an alternative cause for red-shifting of photon wavelengths. A brief discussion of these findings offers an explanation via a new cosmological model that does not require inflation, singularities, dark energy, exotic dark matter, or supersymmetry. Anomalies in the Standard Model are explained. Suitable candidates are described for the cosmological constant, and mass density parameter. The Universe is found to be closed. Planck units run, and the Planck constant is calculated from theory, differing by 0.2%, as is the von Klitzing constant. Magnetic permeability, electric permittivity, and wave impedance are calculated from theory here, differing from accepted values (defined by convention) by just 0.2%. The fine structure, gravitational, and Hubble constants are defined, with accuracy for the latter two improved to 10 significant figures. These data described are all in excellent agreement with the Planck survey (2015) results. New, related constants are discussed. A novel explanation is introduced to explain the mass ratio between an electron and a proton. Predictions are made for future values of the principal running constants. These discoveries have substantial consequences for the Standard Model.

**Category:** Quantum Gravity and String Theory

[17410] **viXra:1701.0523 [pdf]**
*submitted on 2017-01-17 04:41:39*

**Authors:** Grushka Ya.I.

**Comments:** 158 Pages. Mathematics Subject Classification: 03E75; 70A05; 83A05; 47B99

This work lays the foundations of the theory of kinematic changeable sets ("abstract kinematics"). The theory of kinematic changeable sets is based on the theory of changeable sets. From an intuitive point of view, changeable sets are sets of objects which, unlike elements of ordinary (static) sets, may be in the process of continuous transformation, and which may change properties depending on the point of view on them (that is, depending on the reference frame). From a philosophical and imaginative point of view, changeable sets may look like "worlds" in which evolution obeys arbitrary laws.
Kinematic changeable sets are mathematical objects consisting of changeable sets equipped with different geometrical or topological structures (namely metric, topological, linear, Banach, Hilbert and other spaces). In the author's opinion, the theories of changeable and kinematic changeable sets (in the process of their development and improvement) may become tools for solving the sixth Hilbert problem, at least for the physics of the macrocosm. Investigations in this direction may be interesting for astrophysics, because there exists the hypothesis that on the large scale of the Universe, physical laws (in particular, the laws of kinematics) may be different from the laws acting in the neighborhood of our Solar System. These investigations may also be applied to the construction of mathematical foundations of tachyon kinematics.
We believe that the theories of changeable and kinematic changeable sets may be interesting not only for theoretical physics but also for other fields of science, as a new mathematical apparatus for the description of the evolution of complex systems.

**Category:** Mathematical Physics

[17409] **viXra:1701.0522 [pdf]**
*submitted on 2017-01-16 16:29:56*

**Authors:** Michail Zak

**Comments:** 10 Pages.

The concept of randomness entered Newtonian dynamics almost a century ago: in 1926, J. Synge introduced a new type of instability, orbital instability, in classical mechanics [1], which can be considered a precursor of chaos, formulated a couple of decades later [2]. The theory of chaos was inspired by the fact that in recent years, in many different domains of science (physics, chemistry, biology, engineering), systems with a similar strange behavior were frequently encountered, displaying the irregular and unpredictable behavior called chaotic. Currently the theory of chaos that describes such systems is well established. However, two unsolved problems remain: prediction of chaos (without numerical runs), and analytical description of chaos in terms of the probability density that would formally follow from the original ODE. This paper proposes a contribution to the solution of these problems.
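The standard numerical diagnostic for the instability discussed here is the largest Lyapunov exponent: positive means nearby orbits diverge exponentially. As a generic stand-in (a discrete map of my choosing, not the paper's ODE setting), the logistic map at r = 4 is known to have exponent ln 2, which a short orbit average recovers:

```python
import math

def lyapunov_logistic(r: float, x0: float = 0.3,
                      transient: int = 1000, samples: int = 100000) -> float:
    """Largest Lyapunov exponent of x -> r*x*(1-x), estimated as the
    long-run average of ln|f'(x)| = ln|r*(1 - 2x)| along one orbit."""
    x = x0
    for _ in range(transient):  # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(samples):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / samples

lam = lyapunov_logistic(4.0)
print(lam)  # close to ln 2 ~ 0.693: orbital instability, hence chaos
```

This is exactly the kind of "numerical run" the abstract would like to avoid; predicting the sign of this exponent without integrating the orbit is the open problem being addressed.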

**Category:** Classical Physics

[17408] **viXra:1701.0521 [pdf]**
*submitted on 2017-01-16 20:05:38*

**Authors:** J. P. Lestone

**Comments:** 3 Pages. 3 pg, 1 figure.

Virtual photons, with a reduced wavelength of ƛ, are assumed to interact with isolated charged leptons with a cross section of πƛ². This interaction is assumed to generate stimulated virtual photon emissions that are capable of being exchanged with other particles. This exchange of virtual photons is assumed to define the strength of electromagnetism. With the inclusion of near-field effects, the model choices presented give a calculated fundamental unit of charge of 1.60218×10⁻¹⁹ C. If these choices are corroborated by detailed calculations then an understanding of the numerical value of the fine structure constant may emerge.

**Category:** Quantum Physics

[17407] **viXra:1701.0520 [pdf]**
*submitted on 2017-01-16 23:28:52*

**Authors:** Yibing Qiu

**Comments:** 1 Page.

Abstract: presents a viewpoint with regard to the mechanism connecting black holes and the disks of galaxies.

**Category:** Astrophysics

[17406] **viXra:1701.0519 [pdf]**
*submitted on 2017-01-17 00:26:31*

**Authors:** W. B. Vasantha Kandasamy, K. Ilanthenral, Florentin Smarandache

**Comments:** 278 Pages.

In this book the authors for the first time develop the notion of MOD natural neutrosophic subset special type of topological spaces, using MOD natural neutrosophic dual numbers, MOD natural neutrosophic finite complex numbers, MOD natural neutrosophic-neutrosophic numbers and so on to build their respective MOD semigroups. Later they extend this concept to MOD interval subset semigroups and MOD interval neutrosophic subset semigroups. Using these MOD interval semigroups and MOD interval natural neutrosophic subset semigroups, special type of subset topological spaces are built. Further, using these MOD subsets, we build MOD interval subset matrix semigroups and MOD interval subset special type of matrix topological spaces. Likewise, using MOD interval natural neutrosophic subset matrix semigroups, we can build MOD interval natural neutrosophic matrix subset special type of topological spaces. We also build MOD subset coefficient polynomial special type of topological spaces. The final chapter mainly proposes several open conjectures about the validity of Kakutani’s fixed point theorem for all MOD special type of subset topological spaces.

**Category:** Topology

[17405] **viXra:1701.0518 [pdf]**
*submitted on 2017-01-16 13:27:51*

**Authors:** George Rajna

**Comments:** 33 Pages.

Gold is prized for its preciousness and as a conductor in electronics, but it is also important in scientific experimentation. [23] When the temperature of the material changes, both the electronic and the magnetic properties of the materials change with it. [22] In a proof-of-concept study published in Nature Physics, researchers drew magnetic squares in a nonmagnetic material with an electrified pen and then "read" this magnetic doodle with X-rays. [21] Researchers have brought electrides into the nanoregime by synthesizing the first 2D electride material. Electrides are ionic compounds, which are made of negative and positive ions. But in electrides, the negative "ions" are simply electrons, with no nucleus. [20] Microelectromechanical systems, or MEMS, are tiny machines fabricated using equipment and processes developed for the production of electronic chips and devices. [19] Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences have made the world's smallest radio receiver-built out of an assembly of atomic-scale defects in pink diamonds. [18] Smart phones have shiny flat AMOLED displays. Behind each single pixel of these displays hide at least two silicon transistors which were mass-manufactured using laser annealing technologies. [17] Bumpy surfaces with graphene between would help dissipate heat in next-generation microelectronic devices, according to Rice University scientists. [16] Scientists at The University of Manchester and Karlsruhe Institute of Technology have demonstrated a method to chemically modify small regions of graphene with high precision, leading to extreme miniaturisation of chemical and biological sensors. [15] A new method for producing conductive cotton fabrics using graphene-based inks opens up new possibilities for flexible and wearable electronics, without the use of expensive and toxic processing steps. [14]

**Category:** Condensed Matter

[17404] **viXra:1701.0517 [pdf]**
*submitted on 2017-01-16 13:55:40*

**Authors:** Michail Zak

**Comments:** 22 Pages.

This paper presents a non-traditional approach to the theory of turbulence. Its objective is to prove that Newtonian mechanics is fully equipped for the description of turbulent motions without the help of experimentally obtained closures. Turbulence is one of the most fundamental problems in theoretical physics that is still unsolved. The term “unsolved” here means that turbulence cannot be properly formulated, i.e. reduced to a standard mathematical procedure such as solving differential equations. In other words, it is not just a computational problem: prior to computations, a consistent mathematical model must be found. Although the applicability of the Navier-Stokes equations as a model for fluid mechanics is not in question, the instability of their solutions for flows with supercritical Reynolds numbers raises a more general question: is Newtonian mechanics complete?
The problem of turbulence (stressed later by the discovery of chaos) demonstrated that Newton’s world is far more complex than that represented by classical models. It appears that the Lagrangian or Hamiltonian formulations do not suggest any tools for treating post-instability motions, and this is a major flaw of the classical approach to Newtonian mechanics. An explanation of that limitation is proposed in this paper: the classical formalism based upon Newton’s laws exploits additional mathematical restrictions (such as space–time differentiability, and the Lipschitz conditions) that are not required by Newton’s laws. The only purpose of these restrictions is to apply the powerful technique of classical mathematical analysis. However, in many cases such restrictions are incompatible with physical reality, and the most obvious case of such incompatibility is Euler’s model of inviscid fluid, in which the absence of shear stresses is not compensated by a release of additional degrees of freedom as required by the principles of mechanics.
It has recently been demonstrated [3] that, according to the principle of release of constraints, the absence of shear stresses in the Euler equations must be compensated by additional degrees of freedom, and that led to Reynolds-type enlarged Euler equations (EE equations) with a double-valued velocity field that do not require any closures. In the first part of the paper, the theory is applied to turbulent mixing and illustrated by the propagation of a mixing zone triggered by a tangential jump of velocity. A comparison of the proposed solution with Prandtl’s solution is performed and discussed. In the second part of the paper, a semi-viscous version of the Navier-Stokes equations is introduced. The model does not require any closures since the number of equations is equal to the number of unknowns.

**Category:** Classical Physics

[17403] **viXra:1701.0516 [pdf]**
*submitted on 2017-01-16 14:14:27*

**Authors:** Michail Zak

**Comments:** 19 Pages.

A stochastic approach to maximization of a functional constrained by the governing equation of a controlled system is introduced and discussed. The idea of the proposed algorithm is the following: represent the functional to be maximized as the limit of a probability density governed by an appropriately selected Liouville equation. The corresponding ODEs then become stochastic, and the sample of the solution with the largest value has the highest probability of appearing in the ODE simulation. Application to optimal control is discussed. Two limitations of optimal control theory - local maxima and possible instability of the optimal solutions - are removed. Special attention is paid to robot motion planning.
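The selection idea in the abstract - simulate many stochastic realizations and keep the sample with the largest functional value, so that random restarts sidestep local maxima - can be illustrated with a minimal Monte Carlo stand-in. This sketch is not Zak's Liouville-equation construction; the objective `f` and all names here are hypothetical illustrations.

```python
import random

def sample_based_maximize(f, lo, hi, n=20000, seed=0):
    """Illustrative Monte Carlo stand-in (not the paper's Liouville-based
    algorithm): draw many candidate solutions, evaluate the functional on
    each, and keep the sample with the largest value -- the sample that, in
    the paper's language, has the highest probability of mattering. Random
    sampling over the whole domain sidesteps local maxima."""
    rng = random.Random(seed)
    best_x = max((rng.uniform(lo, hi) for _ in range(n)), key=f)
    return best_x, f(best_x)

# Hypothetical multimodal objective: local maximum near x = 4 (value -0.5),
# global maximum at x = 1 (value 0); a greedy local method started near 4
# would get stuck, the sampling approach does not.
f = lambda x: -(x - 1.0) ** 2 if x < 2.5 else -0.5 - (x - 4.0) ** 2
x_star, v_star = sample_based_maximize(f, -5.0, 5.0)
```

With 20,000 samples over [-5, 5] the returned point lands essentially on the global maximizer x = 1 rather than the local one at x = 4.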

**Category:** Artificial Intelligence

[17402] **viXra:1701.0515 [pdf]**
*submitted on 2017-01-16 10:26:57*

**Authors:** George Rajna

**Comments:** 26 Pages.

A widely held understanding of electromagnetic radiation has been challenged in newly published research led at the University of Strathclyde. [19] Technion researchers have demonstrated, for the first time, that laser emissions can be created through the interaction of light and water waves. This "water-wave laser" could someday be used in tiny sensors that combine light waves, sound and water waves, or as a feature on microfluidic "lab-on-a-chip" devices used to study cell biology and to test new drug therapies. [18] Researchers led by EPFL have built ultra-high quality optical cavities for the elusive mid-infrared spectral region, paving the way for new chemical and biological sensors, as well as promising technologies. [17] The research team led by Professor Hele Savin has developed a new light detector that can capture more than 96 percent of the photons covering visible, ultraviolet and infrared wavelengths. [16] A promising route to smaller, powerful cameras built into smartphones and other devices is to design optical elements that manipulate light by diffraction - the bending of light around obstacles or through small gaps - instead of refraction. [15] Converting a single photon from one color, or frequency, to another is an essential tool in quantum communication, which harnesses the subtle correlations between the subatomic properties of photons (particles of light) to securely store and transmit information. Scientists at the National Institute of Standards and Technology (NIST) have now developed a miniaturized version of a frequency converter, using technology similar to that used to make computer chips. [14] Harnessing the power of the sun and creating light-harvesting or light-sensing devices requires a material that both absorbs light efficiently and converts the energy to highly mobile electrical current.
Finding the ideal mix of properties in a single material is a challenge, so scientists have been experimenting with ways to combine different materials to create "hybrids" with enhanced features. [13] Condensed-matter physicists often turn to particle-like entities called quasiparticles—such as excitons, plasmons, magnons—to explain complex phenomena.

**Category:** Quantum Physics

[17401] **viXra:1701.0514 [pdf]**
*submitted on 2017-01-16 12:46:36*

**Authors:** Terubumi Honjou

**Comments:** 4 Pages.

The inflationary model has now become the leading framework of cosmology. Inflationary cosmology supposes that, at the moment of its birth, space was the size of an elementary particle, and applies particle physics in an attempt to understand it. For now, however, innumerable inflationary cosmologies have been proposed, and no single one can be identified as correct. With the introduction of concepts such as expansion faster than the speed of light, and of infinitely many other spaces existing besides the space in which we live, the field is in confusion.

**Category:** Astrophysics

[17400] **viXra:1701.0513 [pdf]**
*submitted on 2017-01-16 06:25:21*

**Authors:** Carlos Castro

**Comments:** 14 Pages. Submitted to the IJGMMP.

Starting with the study of the geometry on the cotangent bundle (phase space), it is shown that the maximal proper force condition, in the case of a uniformly accelerated observer of mass $m$ along the $x$ axis, leads to a minimum value of $x$ lying $inside$ the Rindler wedge and given by the black hole horizon radius $ 2Gm$. Whereas in the uniform circular motion case, we find that the maximal proper force condition implies that the radius of the circle cannot exceed the value of the horizon radius $2Gm$. A correspondence is found between the black hole horizon radius and a singularity in the curvature of momentum space. The fact that the geometry (metric) in phase spaces is observer dependent (on the momentum of the massive particle/observer) indicates further that the matter stress energy tensor and vacuum energy in the underlying spacetime may admit an interpretation in terms of the curvature in momentum spaces. Consequently, phase space geometry seems to be the proper arena for a space-time-matter unification.

**Category:** Quantum Gravity and String Theory

[17399] **viXra:1701.0512 [pdf]**
*submitted on 2017-01-16 07:18:40*

**Authors:** George Rajna

**Comments:** 32 Pages.

When the temperature of the material changes, both the electronic and the magnetic properties of the materials change with it. [22] In a proof-of-concept study published in Nature Physics, researchers drew magnetic squares in a nonmagnetic material with an electrified pen and then "read" this magnetic doodle with X-rays. [21] Researchers have brought electrides into the nanoregime by synthesizing the first 2D electride material. Electrides are ionic compounds, which are made of negative and positive ions. But in electrides, the negative "ions" are simply electrons, with no nucleus. [20] Microelectromechanical systems, or MEMS, are tiny machines fabricated using equipment and processes developed for the production of electronic chips and devices. [19] Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences have made the world's smallest radio receiver - built out of an assembly of atomic-scale defects in pink diamonds. [18] Smart phones have shiny flat AMOLED displays. Behind each single pixel of these displays hide at least two silicon transistors which were mass-manufactured using laser annealing technologies. [17] Bumpy surfaces with graphene between would help dissipate heat in next-generation microelectronic devices, according to Rice University scientists. [16] Scientists at The University of Manchester and Karlsruhe Institute of Technology have demonstrated a method to chemically modify small regions of graphene with high precision, leading to extreme miniaturisation of chemical and biological sensors. [15] A new method for producing conductive cotton fabrics using graphene-based inks opens up new possibilities for flexible and wearable electronics, without the use of expensive and toxic processing steps.
[14] A device made of bilayer graphene, an atomically thin hexagonal arrangement of carbon atoms, provides experimental proof of the ability to control the momentum of electrons and offers a path to electronics that could require less energy and give off less heat than standard silicon-based transistors. It is one step forward in a new field of physics called valleytronics. [13]

**Category:** Condensed Matter

[17398] **viXra:1701.0511 [pdf]**
*submitted on 2017-01-15 16:19:44*

**Authors:** Stephen C. Pearson.

**Comments:** 42 Pages.

This particular submission contains a copy [PART 5/6] of the author's original paper and is therefore a continuation of his previous submission, namely - "An Introduction to Functions of a Quaternion Hypercomplex Variable - PART 4/6", which has been published under the 'VIXRA' Mathematics subheading:- 'Functions and Analysis'.

**Category:** Functions and Analysis

[17397] **viXra:1701.0510 [pdf]**
*submitted on 2017-01-15 17:01:34*

**Authors:** Stephen C. Pearson.

**Comments:** 24 Pages.

This particular submission contains a copy [PART 6/6] of the author's original paper and is therefore a continuation of his previous submission, namely - "An Introduction to Functions of a Quaternion Hypercomplex Variable - PART 5/6", which has been published under the 'VIXRA' Mathematics subheading:- 'Functions and Analysis'.

**Category:** Functions and Analysis

[17396] **viXra:1701.0509 [pdf]**
*submitted on 2017-01-15 22:03:42*

**Authors:** Georgina Woodward

**Comments:** 25 Pages.

Starting with the premise that the differences between substantial objects and images are not unimportant - though they may bear the same object name, they are not equivalent - consideration is then given to methods of making measurements of distance, followed by an examination of the distance measurement methods used in “On the Electrodynamics of Moving Bodies”. The category error of not differentiating between objects and images is identified within Einstein’s paper. It is postulated that this category error has led to a misunderstanding of the physics of relativity and is the cause of the associated paradoxes. Having clarified the categorical difference between material Object reality and Image reality, the product of information processing, the paradoxes of relativity are reconsidered making use of that differentiation. A caution regarding magic related to the “what you see is all there is” bias is given.

**Category:** History and Philosophy of Physics

[17395] **viXra:1701.0508 [pdf]**
*submitted on 2017-01-16 01:10:24*

**Authors:** Nikitin V. N., Nikitin I.V.

**Comments:** 2 Pages.

Our Universe is one of the galaxies of the Multiverse (“Multivselennaya”), bounded by a gravitational envelope, with a Black Hole at its center. The White Hole turned black after letting out its galactic “tears”, and today we see what we must see!

**Category:** Astrophysics

[17394] **viXra:1701.0507 [pdf]**
*submitted on 2017-01-16 01:15:27*

**Authors:** Nikitin V. N., Nikitin I.V.

**Comments:** 1 Page.

Once, the tail of an unknown comet “covered” Mars with a red “blanket”. The Martian “blueberries” are hail that arose from the torn-off tail of that unknown comet.

**Category:** Astrophysics

[17393] **viXra:1701.0506 [pdf]**
*submitted on 2017-01-15 13:43:49*

**Authors:** George Rajna

**Comments:** 37 Pages.

Stem cell therapies hold great promise for restoring function in a variety of degenerative conditions, but one of the logistical hurdles is how to ensure the cells survive in the body long enough to work. [21] A surprising new finding about gene expression could increase our understanding of the aging process. Gene expression is the process by which the information contained within a gene becomes a useful product. [20] Scientists at The Scripps Research Institute (TSRI) have discovered a protein that fine-tunes the cellular clock involved in aging. [19] Genes tell cells what to do—for example, when to repair DNA mistakes or when to die—and can be turned on or off like a light switch. Knowing which genes are switched on, or expressed, is important for the treatment and monitoring of disease. Now, for the first time, Caltech scientists have developed a simple way to visualize gene expression in cells deep inside the body using a common imaging technology. [18] Researchers at The University of Manchester have discovered that a potential new drug reduces the number of brain cells destroyed by stroke and then helps to repair the damage. [17] Researchers at the University of Connecticut have uncovered new information about how particles behave in our bloodstream, an important advancement that could help pharmaceutical scientists develop more effective cancer drugs. [16] For the past 15 years, the big data techniques pioneered by NASA's Jet Propulsion Laboratory in Pasadena, California, have been revolutionizing biomedical research. On Sept. 6, 2016, JPL and the National Cancer Institute (NCI), part of the National Institutes of Health, renewed a research partnership through 2021, extending the development of data science that originated in space exploration and is now supporting new cancer discoveries. 
[15] IBM scientists have developed a new lab-on-a-chip technology that can, for the first time, separate biological particles at the nanoscale and could enable physicians to detect diseases such as cancer before symptoms appear. [14] Scientists work toward storing digital information in DNA. [13]

**Category:** Physics of Biology

[17392] **viXra:1701.0505 [pdf]**
*submitted on 2017-01-15 14:34:25*

**Authors:** Stephen C. Pearson.

**Comments:** 42 Pages.

This particular submission contains a copy [PART 3/6] of the author's original paper and is therefore a continuation of his previous submission, namely - "An Introduction to Functions of a Quaternion Hypercomplex Variable - PART 2/6", which has been published under the 'VIXRA' Mathematics subheading:- 'Functions and Analysis'.

**Category:** Functions and Analysis

[17391] **viXra:1701.0504 [pdf]**
*submitted on 2017-01-15 15:44:13*

**Authors:** Stephen C. Pearson.

**Comments:** 42 Pages.

This particular submission contains a copy [PART 4/6] of the author's original paper and is therefore a continuation of his previous submission, namely - "An Introduction to Functions of a Quaternion Hypercomplex Variable - PART 3/6", which has been published under the 'VIXRA' Mathematics subheading:- 'Functions and Analysis'.

**Category:** Functions and Analysis

[17390] **viXra:1701.0503 [pdf]**
*submitted on 2017-01-15 10:10:27*

**Authors:** Gene H. Barbee

**Comments:** 32 Pages. Please contact Gene Barbee genebarbee@msn.com

Abstract:
The cosmic web is a filament-like structure that connects galaxies. It has been imaged by gravitational lensing and is thought to be composed mainly of dark matter, since it is not visible in the electromagnetic spectrum. Computer simulations of the web show that galaxies are often nodes for multiple branches; view the simulations at https://www.youtube.com/watch?v=ivymdduulFU. WMAP, PLANCK and other background-radiation anisotropy teams have concluded that dark matter is 5 times more prevalent than normal matter. Scientists are trying to identify dark matter, and the unexpected web-like structure adds to the list of cosmological unknowns.
This document proposes that dark matter consists of neutron waves or neutrons (wave/particle duality) contained by a gravitational field. Dark matter density would be the same as normal matter density, but a neutron wave might have a radius of only 1.53e-15 meters (the wavelength of a neutron). This means it could be very elongated (e.g. 5e16 meters). It may coil into a small volume unless stretched by gravity. The neutron wave's location along the long filament is probabilistic, but each filament contains 939 MeV (1.675e-27 kg). A diffuse structure and the absence of electromagnetic features will make it difficult to detect. Originally, dark and normal matter are mixed, and both fall into massive structures like galaxies over time. The residual dark matter probably forms the aligned, stretched filaments we see as the cosmic web. It would attract some normal matter and be gravitationally stretched between galaxies. Dark matter has only gravitational interactions. As it moves into galaxies it forms halos and explains anomalous galactic velocity observations.
The author will present a re-analysis of the baryon/photon ratio (critical to residual deuterium abundance data) and will review the WMAP data that led scientists to conclude that dark matter is 5 times more prevalent than normal matter. A detailed model from matter equality to decoupling will be presented. The features of interest are the waves that cause temperature variations in the background radiation. A model that predicts the temperature of the hot spots will be presented. Based on a re-analysis of limiting considerations, it will be shown that half of all matter is baryonic and the other half is dark matter.
Most scientists use a time ratio to predict expansion, i.e. R = R0*(time/time0)^(2/3). The author developed fundamental equations that allow expansion to be modeled with forces and energy. Surprisingly, very little energy is required to produce late-stage expansion of free protons. A proposal for dark energy is made based on this understanding.
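The matter-dominated scaling law quoted in the abstract, R = R0*(time/time0)^(2/3), is easy to evaluate directly. A minimal sketch (the function name and the illustrative epochs - matter-radiation equality at roughly 50 kyr, the present at roughly 13.8 Gyr - are this editor's assumptions, not the paper's):

```python
def scale_factor_ratio(t, t0):
    """Standard matter-dominated expansion law quoted in the abstract:
    R / R0 = (t / t0) ** (2/3)."""
    return (t / t0) ** (2.0 / 3.0)

# Illustrative check: from matter-radiation equality (~50 kyr) to today
# (~13.8 Gyr) the pure matter-dominated law gives a growth factor of a few
# thousand; real growth differs somewhat because late expansion is not
# purely matter-dominated.
growth = scale_factor_ratio(13.8e9, 50e3)
```

As a sanity check, `scale_factor_ratio(8, 1)` returns exactly 4, since 8^(2/3) = 4.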

**Category:** Relativity and Cosmology

[17389] **viXra:1701.0502 [pdf]**
*submitted on 2017-01-15 11:15:18*

**Authors:** Stephen C. Pearson.

**Comments:** 42 Pages.

This particular submission contains (inter alia) a copy [PART 1/6] of the author's original paper, which was completed on 31st March 1984 and thus comprises a total of 161 handwritten foolscap pages. Subsequently, its purpose is to enunciate various definitions and theorems, which pertain to the following topics, i.e. (a) the algebra of quaternion hypercomplex numbers; (b) functions of a single quaternion hypercomplex variable; (c) the concepts of limit and continuity applied to such functions; (d) the elementary principles of differentiation and integration applied to quaternion hypercomplex functions. Many of the concepts presented therein are analogous to well established notions from real and complex variable analysis with any divergent results being due to the non-commutativity of quaternion products.

**Category:** Functions and Analysis

[17388] **viXra:1701.0501 [pdf]**
*submitted on 2017-01-15 12:58:04*

**Authors:** Stephen C. Pearson.

**Comments:** 42 Pages.

This particular submission contains a copy [PART 2/6] of the author's original paper and is therefore a continuation of his previous submission, namely - "An Introduction to Functions of a Quaternion Hypercomplex Variable - PART 1/6", which has been published under the 'VIXRA' Mathematics subheading:- 'Functions and Analysis'.

**Category:** Functions and Analysis

[17387] **viXra:1701.0500 [pdf]**
*submitted on 2017-01-15 12:36:59*

**Authors:** George Rajna

**Comments:** 34 Pages.

Scientists at The Scripps Research Institute (TSRI) have discovered a protein that fine-tunes the cellular clock involved in aging. [19] Genes tell cells what to do—for example, when to repair DNA mistakes or when to die—and can be turned on or off like a light switch. Knowing which genes are switched on, or expressed, is important for the treatment and monitoring of disease. Now, for the first time, Caltech scientists have developed a simple way to visualize gene expression in cells deep inside the body using a common imaging technology. [18] Researchers at The University of Manchester have discovered that a potential new drug reduces the number of brain cells destroyed by stroke and then helps to repair the damage. [17] Researchers at the University of Connecticut have uncovered new information about how particles behave in our bloodstream, an important advancement that could help pharmaceutical scientists develop more effective cancer drugs. [16] For the past 15 years, the big data techniques pioneered by NASA's Jet Propulsion Laboratory in Pasadena, California, have been revolutionizing biomedical research. On Sept. 6, 2016, JPL and the National Cancer Institute (NCI), part of the National Institutes of Health, renewed a research partnership through 2021, extending the development of data science that originated in space exploration and is now supporting new cancer discoveries. [15] IBM scientists have developed a new lab-on-a-chip technology that can, for the first time, separate biological particles at the nanoscale and could enable physicians to detect diseases such as cancer before symptoms appear. [14] Scientists work toward storing digital information in DNA. [13] Leiden theoretical physicists have proven that DNA mechanics, in addition to genetic information in DNA, determines who we are. Helmut Schiessel and his group simulated many DNA sequences and found a correlation between mechanical cues and the way DNA is folded. 
They have published their results in PLoS One. [12] We model the electron clouds of nucleic acids in DNA as a chain of coupled quantum harmonic oscillators with dipole-dipole interaction between nearest neighbours resulting in a van der Waals type bonding. [11]

**Category:** Physics of Biology

[17386] **viXra:1701.0499 [pdf]**
*submitted on 2017-01-15 13:09:15*

**Authors:** George Rajna

**Comments:** 36 Pages.

A surprising new finding about gene expression could increase our understanding of the aging process. Gene expression is the process by which the information contained within a gene becomes a useful product. [20] Scientists at The Scripps Research Institute (TSRI) have discovered a protein that fine-tunes the cellular clock involved in aging. [19] Genes tell cells what to do—for example, when to repair DNA mistakes or when to die—and can be turned on or off like a light switch. Knowing which genes are switched on, or expressed, is important for the treatment and monitoring of disease. Now, for the first time, Caltech scientists have developed a simple way to visualize gene expression in cells deep inside the body using a common imaging technology. [18] Researchers at The University of Manchester have discovered that a potential new drug reduces the number of brain cells destroyed by stroke and then helps to repair the damage. [17] Researchers at the University of Connecticut have uncovered new information about how particles behave in our bloodstream, an important advancement that could help pharmaceutical scientists develop more effective cancer drugs. [16] For the past 15 years, the big data techniques pioneered by NASA's Jet Propulsion Laboratory in Pasadena, California, have been revolutionizing biomedical research. On Sept. 6, 2016, JPL and the National Cancer Institute (NCI), part of the National Institutes of Health, renewed a research partnership through 2021, extending the development of data science that originated in space exploration and is now supporting new cancer discoveries. [15] IBM scientists have developed a new lab-on-a-chip technology that can, for the first time, separate biological particles at the nanoscale and could enable physicians to detect diseases such as cancer before symptoms appear. [14] Scientists work toward storing digital information in DNA. 
[13] Leiden theoretical physicists have proven that DNA mechanics, in addition to genetic information in DNA, determines who we are. Helmut Schiessel and his group simulated many DNA sequences and found a correlation between mechanical cues and the way DNA is folded. They have published their results in PLoS One. [12]

**Category:** Physics of Biology

[17385] **viXra:1701.0498 [pdf]**
*submitted on 2017-01-15 09:24:28*

**Authors:** Sylwester Kornowski

**Comments:** 2 Pages.

The Scale-Symmetric Theory (SST) shows that quantum entanglement fixes the speed of light in “vacuum”, c, in relation to its source or to the last-interaction object (which can be a detector). As a result, the spatial distances to galaxies differ from the time distances (the light travel time) - this is the duality of relativity. The duality of relativity leads to a running Hubble constant. According to SST, for the nearest Universe the time Hubble constant is 70.52. SST gives a mean time Hubble constant of 64.01 - this should be the mean observed Hubble constant when we apply General Relativity (GR) to the whole observed Universe. If we neglect some part of the distant Universe, then the GR/observed time Hubble constant should lie in the interval <64.01, 70.52>. But we emphasize that the real mean spatial Hubble constant calculated within SST is 45.24. It leads to an age of the Universe of 21.614 ± 0.096 Gyr, but the time distance to the most distant observed part of the Universe cannot be longer than 13.866 ± 0.096 Gyr. SST shows that the evolution of galaxies accelerated about 13.1 Gyr ago - this leads to the illusion that cosmic objects are not older than 13.1 Gyr.
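The age figures in the abstract appear to be Hubble times, 1/H0, for the quoted Hubble constants. A quick numerical check (the conversion constants are standard; the physical claims remain the author's):

```python
# Consistency check of the abstract's numbers: the quoted ages are the
# Hubble time 1/H0 for the quoted Hubble constants in km/s/Mpc.
KM_PER_MPC = 3.0857e19     # kilometres in one megaparsec
SEC_PER_GYR = 3.1557e16    # seconds in one gigayear (Julian year)

def hubble_time_gyr(h0_km_s_mpc):
    """Return the Hubble time 1/H0 in Gyr for H0 given in km/s/Mpc."""
    h0_per_sec = h0_km_s_mpc / KM_PER_MPC
    return 1.0 / h0_per_sec / SEC_PER_GYR

age = hubble_time_gyr(45.24)       # ~21.6 Gyr, the abstract's 21.614
horizon = hubble_time_gyr(70.52)   # ~13.9 Gyr, the abstract's 13.866
```

Both quoted values (21.614 Gyr and 13.866 Gyr) are reproduced to three decimal places as 1/H0 for 45.24 and 70.52 km/s/Mpc respectively.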

**Category:** Quantum Gravity and String Theory

[17384] **viXra:1701.0497 [pdf]**
*submitted on 2017-01-14 17:39:14*

**Authors:** Espen Gaarder Haug

**Comments:** 7 Pages.

In this paper we combine Heisenberg’s uncertainty principle with Haug’s new insight on the maximum velocity for anything with rest-mass; see [1, 2, 3]. This leads to a new and exact boundary condition on Heisenberg’s uncertainty principle. The uncertainty in position at the potential maximum momentum for subatomic particles, as derived from the maximum velocity, is half of the Planck length.
Perhaps Einstein was right after all when he stated, “God does not play dice.” Or at least the dice may have a stricter boundary on possible outcomes than we have previously thought.
We also show how this new boundary condition seems to make big G consistent with Heisenberg’s uncertainty principle. We obtain a mathematical expression for big G that is fully in line with empirical observations.
Hopefully our analysis can be a small step in better understanding Heisenberg’s uncertainty principle and its interpretations and by extension, the broader implications for the quantum world.
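The half-Planck-length bound stated above follows from the textbook uncertainty relation once the maximum momentum is identified with the Planck momentum m_p c (that identification, via the maximum velocity, is the paper's; the algebra below is standard):

```latex
\Delta x \,\Delta p \ge \frac{\hbar}{2}
\quad\Longrightarrow\quad
\Delta x \ge \frac{\hbar}{2\,p_{\max}}
= \frac{\hbar}{2\,m_p c}
= \frac{\hbar}{2}\sqrt{\frac{G}{\hbar c^3}}
= \frac{l_p}{2},
\qquad
m_p = \sqrt{\frac{\hbar c}{G}}, \quad
l_p = \sqrt{\frac{\hbar G}{c^3}}.
```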

**Category:** Quantum Physics

[17383] **viXra:1701.0496 [pdf]**
*submitted on 2017-01-14 20:54:47*

**Authors:** Frank Dodd Tony Smith Jr

**Comments:** 13 Pages.

This paper is intended to be only a rough semi-popular overview of how the 240 Root Vectors of E8 can be used to construct a useful Lagrangian describing Gravity and Dark Energy plus the Standard Model. For details and references, see viXra/1602.0319. The 240 Root Vectors of E8 represent the physical forces, particles, and spacetime that make up the construction of a realistic Lagrangian describing the Octonionic Inflation Era followed by a Quaternionic M4 x CP2 Kaluza-Klein Era in which the Higgs emerges by the Mayer mechanism and 2nd and 3rd Generation Fermions appear. By generalizations of the Nambu-Jona-Lasinio models, the Higgs is seen to be a Truth Quark-AntiQuark Condensate giving 3 Mass States of the Higgs and 3 Mass States of the Truth Quark. My analysis of Fermilab and LHC observational data indicates that Fermilab has observed the 3 Truth Quark Mass States and the LHC has observed the 3 Higgs Mass States. The Lagrangian, which is fundamentally classical, is constructed from E8 only, and E8 lives in Cl(16) = Cl(8) x Cl(8), which corresponds to two copies of an E8 Lattice. A separate paper discusses using a third copy of an E8 Lattice in connection with the construction of a realistic Algebraic Quantum Field Theory related to the Leech Lattice.

**Category:** High Energy Particle Physics

[17382] **viXra:1701.0495 [pdf]**
*submitted on 2017-01-14 22:39:49*

**Authors:** Frank Dodd Tony Smith Jr

**Comments:** 30 Pages.

This paper is intended to be only a rough semi-popular overview of how the 240 Root Vectors of E8 can be used to construct a useful Lagrangian and Algebraic Quantum Field Theory (AQFT) in which the Bohm Quantum Potential emerges from a 26D String Theory with Strings = World-Lines = Path Integral Paths and the Massless Spin 2 State interpreted as the Bohm Quantum Potential. For details and references, see viXra/1602.0319. The 240 Root Vectors of E8 represent the physical forces, particles, and spacetime that make up the construction of a realistic Lagrangian describing the Octonionic Inflation Era. The Octonionic Lagrangian can be embedded into a Cl(1,25) Clifford Algebra which, with 8-Periodicity, gives an AQFT. The Massless Spin 2 State of 26D String Theory gives the Bohm Quantum Potential. The Quantum Code of the AQFT is the Tensor Product Quantum Reed-Muller code. A Single Cell of the 26D String Theory model has the symmetry of the Monster Group. Quantum Processes produce Schwinger Sources with size about 10^(-24) cm. Microtubule Structure related to E8 and Clifford Algebra enables Penrose-Hameroff Quantum Consciousness. E8 and Cl(8) may have been encoded in the Great Pyramid. A separate paper discusses using the Quaternionic M4 x CP2 Kaluza-Klein version of the Lagrangian to produce the Higgs and 2nd and 3rd Generation Fermions and a Higgs - Truth Quark System with 3 Mass States for the Higgs and Truth Quark.

**Category:** High Energy Particle Physics

[17381] **viXra:1701.0494 [pdf]**
*submitted on 2017-01-15 00:36:12*

**Authors:** Andrew Walcott Beckwith, Stepan Moshkaliuk

**Comments:** 16 Pages. Final version of a paper cleared by referees at the Ukrainian Journal of Physics; subsequently accepted for publication after one year of vetting.

We examine conditions under which energy flows in the early universe are modeled as a quantum Hamilton-Jacobi set of equations. Subsequently, we manage to use the Heisenberg Uncertainty Principle for metric tensors based upon our Geometrodynamics treatment of the problem.

**Category:** Quantum Gravity and String Theory

[17380] **viXra:1701.0493 [pdf]**
*submitted on 2017-01-14 15:13:07*

**Authors:** George Rajna

**Comments:** 28 Pages.

Scientists at the University of Sydney have demonstrated the ability to "see" the future of quantum systems, and used that knowledge to preempt their demise, in a major achievement that could help bring the strange and powerful world of quantum technology closer to reality. [16] New method allows for quick, precise measurement of quantum states. [15] The fact that it is possible to retrieve this lost information reveals new insight into the fundamental nature of quantum measurements, mainly by supporting the idea that quantum measurements contain both quantum and classical components. [14] Researchers blur the line between classical and quantum physics by connecting chaos and entanglement. [13] Yale University scientists have reached a milestone in their efforts to extend the durability and dependability of quantum information. [12] Using lasers to make data storage faster than ever. [11] Some three-dimensional materials can exhibit exotic properties that only exist in "lower" dimensions. For example, in one-dimensional chains of atoms that emerge within a bulk sample, electrons can separate into three distinct entities, each carrying information about just one aspect of the electron's identity—spin, charge, or orbit. The spinon, the entity that carries information about electron spin, has been known to control magnetism in certain insulating materials whose electron spins can point in any direction and easily flip direction. Now, a new study just published in Science reveals that spinons are also present in a metallic material in which the orbital movement of electrons around the atomic nucleus is the driving force behind the material's strong magnetism. [10] Currently studying entanglement in condensed matter systems is of great interest. This interest stems from the fact that some behaviors of such systems can only be explained with the aid of entanglement. 
[9] Researchers from the Norwegian University of Science and Technology (NTNU) and the University of Cambridge in the UK have demonstrated that it is possible to directly generate an electric current in a magnetic material by rotating its magnetization. [8] This paper explains the magnetic effect of the electric current from the observed effects of the accelerating electrons, causing naturally the experienced changes of the electric field potential along the electric wire. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the wave particle duality and the electron's spin also, building the bridge between the Classical and Quantum Theories. The changing acceleration of the electrons explains the created negative electric field of the magnetic induction, the changing relativistic mass and the Gravitational Force, giving a Unified Theory of the physical forces. Taking into account the Planck Distribution Law of the electromagnetic oscillators also, we can explain the electron/proton mass rate and the Weak and Strong Interactions.

**Category:** Quantum Physics

[17379] **viXra:1701.0492 [pdf]**
*submitted on 2017-01-14 08:20:23*

**Authors:** George Rajna

**Comments:** 31 Pages.

In a proof-of-concept study published in Nature Physics, researchers drew magnetic squares in a nonmagnetic material with an electrified pen and then "read" this magnetic doodle with X-rays. [21] Researchers have brought electrides into the nanoregime by synthesizing the first 2D electride material. Electrides are ionic compounds, which are made of negative and positive ions. But in electrides, the negative "ions" are simply electrons, with no nucleus. [20] Microelectromechanical systems, or MEMS, are tiny machines fabricated using equipment and processes developed for the production of electronic chips and devices. [19] Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences have made the world's smallest radio receiver, built out of an assembly of atomic-scale defects in pink diamonds. [18] Smart phones have shiny flat AMOLED displays. Behind each single pixel of these displays hide at least two silicon transistors which were mass-manufactured using laser annealing technologies. [17] Bumpy surfaces with graphene between would help dissipate heat in next-generation microelectronic devices, according to Rice University scientists. [16] Scientists at The University of Manchester and Karlsruhe Institute of Technology have demonstrated a method to chemically modify small regions of graphene with high precision, leading to extreme miniaturisation of chemical and biological sensors. [15] A new method for producing conductive cotton fabrics using graphene-based inks opens up new possibilities for flexible and wearable electronics, without the use of expensive and toxic processing steps. [14] A device made of bilayer graphene, an atomically thin hexagonal arrangement of carbon atoms, provides experimental proof of the ability to control the momentum of electrons and offers a path to electronics that could require less energy and give off less heat than standard silicon-based transistors.
It is one step forward in a new field of physics called valleytronics. [13] In our computer chips, information is transported in the form of electrical charge. Electrons or other charge carriers have to be moved from one place to another. For years scientists have been working on elements that take advantage of the electrons' angular momentum (their spin) rather than their electrical charge.

**Category:** Condensed Matter

[17378] **viXra:1701.0491 [pdf]**
*submitted on 2017-01-14 08:35:24*

**Authors:** Luke Kenneth Casson Leighton

**Comments:** 6 Pages.

This paper continues prior work based on the insight
that Rishon ultracoloured triplets (electron, up, neutrino in left and
right forms) might simply be elliptically-polarised "mobius light". The
important first step is therefore to identify the twelve (24 including
both left and right handed forms) phases, the correct topology, and then
to perform transformations (mirroring, rotation, time-reversal) to double-check
which "particles" are identical to each other and which are anti-particle
opposites.
Ultimately, a brute-force systematic analysis will allow a formal
mathematical group to be dropped seamlessly on top of the twelve (24)
particles.

**Category:** High Energy Particle Physics

[17377] **viXra:1701.0490 [pdf]**
*submitted on 2017-01-14 08:36:16*

**Authors:** Manuel Simões F., Ricardo Gobato

**Comments:** 1 Page. Panel presented at the XV Week of Physics of the State University of Londrina, Paraná, Brazil, September 2010.

The anisotropic viscosity of liquid crystals (LCs) is one of the most challenging properties of these materials. It was discovered in 1935 by Miesowicz, who showed that LCs are non-Newtonian fluids, exhibiting viscosities that are direction dependent when subjected to an external field. Since then, a tremendous amount of experimental and theoretical research has been devoted to the subject, but a satisfactory microscopic theory has never been found. The kinetic approach of Doi was for some time the most accepted microscopic theory of nematic viscosity; but even though it has the great merit of producing an expression free of adjustable parameters, which captures the essence of the phenomena and provides a semi-microscopic explanation for the origin of its anisotropy, there are well-documented divergences from the experimental data, and it is unable to describe essential aspects of the phenomenology observed in these systems, especially across the range of the nematic phase.
The objective of this work is to study the contribution of the characteristic geometry of the nematic micelle/molecule to the viscosity of nematic liquids. Throughout this work, we use the phrase geometry of the nematic grain, or simply geometry of the grain, to designate the geometry that a nematic micelle/molecule acquires under thermal vibration. This concept does not appear to be common in the theory of NLCs, but arises naturally from de Gennes's order-parameter theory of NLCs. In addition, to introduce the contribution of grain geometry to nematic viscosity we will use the conforming approach of Hess and Balls to formulate the fundamentals of nematic viscosity.

**Category:** Condensed Matter

[17376] **viXra:1701.0489 [pdf]**
*submitted on 2017-01-14 08:44:58*

**Authors:** Desire Francine Gobato, Ricardo Gobato, Jonas Liasch

**Comments:** 1 Page. Panel presented at the XVI Week of Physics of the State University of Londrina, Paraná, Brazil, September 2011.

One of the most unusual V/STOL military aircraft programs was the Avro VZ-9 "Avrocar". Designed to be a true flying saucer, the Avrocar was one of the few V/STOL aircraft to be developed in complete secrecy. Despite significant design changes during flight trials, the Avrocar was unable to achieve its objectives, and the program was eventually canceled after spending $10 million between 1954 and 1961. This work gathers data and information related to the Avrocar project, which was directly linked to advances in the aircraft built after it, and correlates the data obtained with the turbofan engines used today.

**Category:** General Science and Philosophy

[17375] **viXra:1701.0488 [pdf]**
*submitted on 2017-01-14 09:15:15*

**Authors:** Viktor S.Dolgikh

**Comments:** 16 Pages. In English and Russian.

I present this article as part of my work "HE": the beginning.
It describes a real picture of the creation of the primary, composing elements of matter and the results of their interaction.
The introductory and advertising part is omitted because of the expected "sarcasm", which will disappear by the end of the article.
Many years of a practical approach to thinking are the main substance of the presented work.
It is given in a very condensed form, without a "tiring" description of the presented picture.
Its final statement is given on page 12.
The following extended explanations and main descriptions of the basic directions:
- the variety of particles as the result of their division, disintegration and unnatural creation;
- the structure of atoms and molecules of matter in the classification of their states;
- the frame structure of the "live" part of this kind of matter, with its diversity;
- the evolution of the development of matter,
which are constantly a work in progress, will depend on the interest in this article and will be presented in subsequent publications.
To clarify the text, I am sending the original.

**Category:** Nuclear and Atomic Physics

[17374] **viXra:1701.0487 [pdf]**
*submitted on 2017-01-14 05:07:46*

**Authors:** George Rajna

**Comments:** 19 Pages.

Diffraction-based analytical methods are widely used in laboratories, but they struggle to study samples that are smaller than a micrometer in size. [13] In an electron microscope, electrons are emitted by pointy metal tips, so they can be steered and controlled with high precision. Recently, such metal tips have also been used as high-precision electron sources for generating X-rays. [12] In some chemical reactions both electrons and protons move together. When they transfer, they can move concertedly or in separate steps. Light-induced reactions of this sort are particularly relevant to biological systems, such as Photosystem II, where plants use photons from the sun to convert water into oxygen. [11] EPFL researchers have found that water molecules are 10,000 times more sensitive to ions than previously thought. [10] Working with colleagues at the Harvard-MIT Center for Ultracold Atoms, a group led by Harvard Professor of Physics Mikhail Lukin and MIT Professor of Physics Vladan Vuletic have managed to coax photons into binding together to form molecules – a state of matter that, until recently, had been purely theoretical. The work is described in a September 25 paper in Nature. New ideas for interactions and particles: this paper examines the possibility of deriving the Spontaneously Broken Symmetries from the Planck Distribution Law. This way we get a Unification of the Strong, Electromagnetic, and Weak Interactions from the interference occurrences of oscillators. Understanding that the relativistic mass change is the result of the magnetic induction, we arrive at the conclusion that the Gravitational Force is also based on the electromagnetic forces, getting a Unified Relativistic Quantum Theory of all 4 Interactions.

**Category:** Quantum Physics

[17373] **viXra:1701.0486 [pdf]**
*submitted on 2017-01-14 06:34:49*

**Authors:** George Rajna

**Comments:** 16 Pages.

In accordance with the rules of quantum mechanics, the atomic nucleus has discrete energy levels. [13] Research conducted at the National Superconducting Cyclotron Laboratory at Michigan State University has shed new light on the structure of the nucleus, that tiny congregation of protons and neutrons found at the core of every atom. [12] The work elucidates the interplay between collective and single-particle excitations in nuclei and proposes a quantitative theoretical explanation. As such, it has great potential to advance our understanding of nuclear structure. [11] When two protons approaching each other pass close enough together, they can "feel" each other, similar to the way that two magnets can be drawn closely together without necessarily sticking together. According to the Standard Model, at this grazing distance, the protons can produce a pair of W bosons. [10] The fact that the neutron is slightly more massive than the proton is the reason why atomic nuclei have exactly those properties that make our world and ultimately our existence possible. Eighty years after the discovery of the neutron, a team of physicists from France, Germany, and Hungary headed by Zoltán Fodor, a researcher from Wuppertal, has finally calculated the tiny neutron-proton mass difference. [9] Taking into account the Planck Distribution Law of the electromagnetic oscillators, we can explain the electron/proton mass rate and the Weak and Strong Interactions. Lattice QCD gives the same results as the diffraction patterns of the electromagnetic oscillators, explaining the color confinement and the asymptotic freedom of the Strong Interactions.

**Category:** Nuclear and Atomic Physics

[17372] **viXra:1701.0485 [pdf]**
*submitted on 2017-01-13 17:18:21*

**Authors:** Dongchan Lee

**Comments:** 54 Pages. 1st draft in PPT based pdf form

The recent releases of the PISA 2015 and TIMSS 2015 math results showed clearly that math-education growth in most developed countries has stagnated or collapsed. Lee presents the overall math score history of both TIMSS and PISA over 1995-2015, what the stagnations mean for the future economies of these nations, and concrete alternatives for overcoming the past one to two decades of math-education stagnation.

**Category:** Economics and Finance

[17371] **viXra:1701.0484 [pdf]**
*submitted on 2017-01-14 01:22:09*

**Authors:** Ernesto Lopez Gonzalez

**Comments:** 87 pages. (In Spanish)

A new theory of matter and energy is proposed. The main postulate is this: all matter and energy are composed of vibrations of space-time, which is formed by a single 5-brane extended in the three spatial dimensions and compacted in two additional dimensions of order 10^-6 m. It also postulates the existence of a central hole in the plane of the compacted dimensions. The substance forming this 5-brane is considered to have properties similar to a liquid crystal. It is further postulated that all interactions originate from the modification of space-time caused by the vibrations that form matter and energy. In particular, three mechanisms are analyzed: drag, deformation, and the modification of the refractive index of space-time. With these postulates, and by solving the wave equation, we can deduce the de Broglie wavelength, the uncertainty principle, the electron's charge and mass from its mass alone, the origin of inertia, the centrifugal force, the electric forces, the gravitational forces, the hydrogen atom orbitals, and the existence of a system of elementary particles formed by the three known neutrinos, the electron, and four partons formed by combining the previous four with surface waves in the hypothetical central hole in the plane of the compacted dimensions. The masses of these particles and the strengths of their interactions are estimated. It is then possible to propose a system for hadrons that allows one to estimate their masses, magnetic moments, internal charge distributions, and the Reid potential for the residual nuclear force. Finally, an intuitive explanation of the spin of the particles is provided.

**Category:** Quantum Physics

[17370] **viXra:1701.0483 [pdf]**
*submitted on 2017-01-13 13:46:54*

**Authors:** Reuven Tint

**Comments:** 4 Pages. Original paper in Russian.

Annotation: Section 1 presents a theorem and its proof, complementing the classical formulation of the ABC conjecture; Section 2 addresses the connection between Frey's elliptic curve and Fermat's "Great" Theorem (Fermat's Last Theorem).

**Category:** Number Theory

[17369] **viXra:1701.0482 [pdf]**
*submitted on 2017-01-13 09:00:42*

**Authors:** guilhem CICOLELLA

**Comments:** 4 Pages.

The only consecutive powers being 8 and 9, the problem consisted in demonstrating that the quantity of prime numbers below one billion depends on a single equation, based on two different methods of calculation with congruent results. The ultimate purpose is to prove the existence of an algorithm capable of determining two intricate values more quickly than a computer (rapid mathematical system, R.M.S.).

**Category:** Number Theory

[17368] **viXra:1701.0481 [pdf]**
*submitted on 2017-01-13 09:07:07*

**Authors:** Yannan Yang

**Comments:** 4 Pages.

Mistakes are found in the theoretical derivation in which the magnetic force is explained as a relativistic side effect of the Coulomb force. As a result, serious paradoxes become inevitable if we accept the notion that magnetism is a relativistic side effect of electrostatics.

**Category:** Relativity and Cosmology

[17367] **viXra:1701.0480 [pdf]**
*submitted on 2017-01-13 09:35:22*

**Authors:** Andrew Beckwith

**Comments:** 7 Pages.

The magnetic field for relic graviton production is linked to the initial strength of the inflaton.

**Category:** Quantum Gravity and String Theory

[17366] **viXra:1701.0479 [pdf]**
*submitted on 2017-01-13 06:52:10*

**Authors:** George Rajna

**Comments:** 29 Pages.

Researchers have brought electrides into the nanoregime by synthesizing the first 2D electride material. Electrides are ionic compounds, which are made of negative and positive ions. But in electrides, the negative "ions" are simply electrons, with no nucleus. [20] Microelectromechanical systems, or MEMS, are tiny machines fabricated using equipment and processes developed for the production of electronic chips and devices. [19] Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences have made the world's smallest radio receiver, built out of an assembly of atomic-scale defects in pink diamonds. [18] Smart phones have shiny flat AMOLED displays. Behind each single pixel of these displays hide at least two silicon transistors which were mass-manufactured using laser annealing technologies. [17] Bumpy surfaces with graphene between would help dissipate heat in next-generation microelectronic devices, according to Rice University scientists. [16] Scientists at The University of Manchester and Karlsruhe Institute of Technology have demonstrated a method to chemically modify small regions of graphene with high precision, leading to extreme miniaturisation of chemical and biological sensors. [15] A new method for producing conductive cotton fabrics using graphene-based inks opens up new possibilities for flexible and wearable electronics, without the use of expensive and toxic processing steps. [14] A device made of bilayer graphene, an atomically thin hexagonal arrangement of carbon atoms, provides experimental proof of the ability to control the momentum of electrons and offers a path to electronics that could require less energy and give off less heat than standard silicon-based transistors. It is one step forward in a new field of physics called valleytronics. [13] In our computer chips, information is transported in the form of electrical charge. Electrons or other charge carriers have to be moved from one place to another.
For years scientists have been working on elements that take advantage of the electrons' angular momentum (their spin) rather than their electrical charge. This new approach, called "spintronics", has major advantages compared to common electronics. It can operate with much less energy. [12]

**Category:** Condensed Matter

[17365] **viXra:1701.0478 [pdf]**
*submitted on 2017-01-12 13:25:43*

**Authors:** Tom Masterson

**Comments:** 1 Page. © 1965 by Tom Masterson

A number theory query related to Fermat's last theorem in higher dimensions.

**Category:** Number Theory

[17364] **viXra:1701.0477 [pdf]**
*submitted on 2017-01-12 14:18:40*

**Authors:** George Rajna

**Comments:** 22 Pages.

In roughly four billion years, the Milky Way will be no more. Indeed, our home galaxy is on course to collide and unite with the Andromeda Galaxy, at present some two million light years away. [16] A simulation of the powerful jets generated by supermassive black holes at the centers of the largest galaxies explains why some burst forth as bright beacons visible across the universe, while others fall apart and never pierce the halo of the galaxy. [15] Astronomers from Chalmers University of Technology have used the giant telescope Alma to reveal an extremely powerful magnetic field very close to a supermassive black hole in a distant galaxy. The results appear in the 17 April 2015 issue of the journal Science. [14] Quasars, even those that are billions of light years away, are some of the "brightest beacons" in the universe. Yet how can quasars radiate so much energy that they can be seen from Earth? One explanation is that at each quasar's center is a growing supermassive black hole (SMBH). [13] If dark matter comes in both matter and antimatter varieties, it might accumulate inside dense stars to create black holes. [12] For a long time, there were two main theories related to how our universe would end. These were the Big Freeze and the Big Crunch. In short, the Big Crunch claimed that the universe would eventually stop expanding and collapse in on itself. This collapse would result in…well…a big crunch (for lack of a better term). Think "the Big Bang", except just the opposite. That's essentially what the Big Crunch is. On the other hand, the Big Freeze claimed that the universe would continue expanding forever, until the cosmos becomes a frozen wasteland. This theory asserts that stars will get farther and farther apart, burn out, and (since no more stars are being born) the universe will grow entirely cold and eternally black.
[11] Newly published research reveals that dark matter is being swallowed up by dark energy, offering novel insight into the nature of dark matter and dark energy and what the future of our Universe might be. [10] The gravitational force attracts matter, concentrating it in a small space and leaving much space with low matter concentration: dark matter and energy. The asymmetry between the masses of the electric charges, for example the proton and electron, can be understood through the asymmetric Planck Distribution Law. This temperature-dependent energy distribution is asymmetric around the maximum intensity, where the annihilation of matter and antimatter is a high-probability event. The asymmetric sides create different frequencies of electromagnetic radiation at the same intensity level, compensating each other. One of these compensating ratios is the electron–proton mass ratio. The lower-energy side has no compensating intensity level; it is the dark energy, and the corresponding matter is the dark matter.

**Category:** Astrophysics

[17363] **viXra:1701.0476 [pdf]**
*submitted on 2017-01-12 16:01:36*

**Authors:** Declan Traill

**Comments:** 7 Pages.

The original mathematical treatment used in the analysis of the Fizeau experiment of 1851, which measured the relative speed of light in a moving medium, assumes that light travels through the water in a smooth continuous flow, at a speed less than the speed of light in a vacuum (relative to the water). Thus it assumes that the water's velocity vector can simply be added to that of the light. However, light is transmitted through optical media, such as water, by a continuous process of absorption and re-emission by the water molecules, but travels between them at the full speed of light (in a vacuum). Thus the mathematics describing the process of Fresnel dragging must be formulated differently, and it can then be explained by classical physics.
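For context, the standard first-order result that Fizeau's measurement confirmed (and that any reformulation must reproduce) is u ≈ c/n + v(1 − 1/n²): only the Fresnel fraction (1 − 1/n²) of the water's velocity is imparted to the light. The sketch below is not the author's derivation; the refractive index and flow speed are illustrative assumed values.

```python
# Sketch (illustrative values, not the paper's treatment): compare naive
# velocity addition with the standard first-order Fresnel drag result.
C = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.333     # assumed refractive index of water

def naive_speed(v):
    """Light speed in moving water if the flow velocity simply added on."""
    return C / N_WATER + v

def fresnel_speed(v):
    """First-order Fresnel drag: only the fraction (1 - 1/n^2) of the
    medium's velocity is imparted to the light."""
    return C / N_WATER + v * (1.0 - 1.0 / N_WATER**2)

v = 7.0  # m/s, an assumed flow speed of the order Fizeau used
print(fresnel_speed(v) - C / N_WATER)    # drag contribution, about 3.06 m/s
print(naive_speed(v) - fresnel_speed(v)) # excess that naive addition predicts
```

The drag fraction for water is about 0.437, which is why Fizeau's fringe shifts were roughly 44% of what naive velocity addition would give.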

**Category:** Classical Physics

[17362] **viXra:1701.0475 [pdf]**
*submitted on 2017-01-12 10:27:06*

**Authors:** Nikolay Dementev

**Comments:** 5 Pages.

Based on the observation of randomly chosen primes, it is conjectured that the sum of the digits of any prime number yields either an even number or another prime number. The conjecture was successfully tested for the first 100 primes.
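The stated check is easy to mechanize. A minimal sketch (not the author's code) that verifies the conjecture for the first 100 primes, and then continues the same search past the tested range:

```python
# Sketch: test the conjecture that the digit sum of every prime is
# either even or itself prime.
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def digit_sum(n):
    return sum(int(c) for c in str(n))

def holds(p):
    s = digit_sum(p)
    return s % 2 == 0 or is_prime(s)

# The 100th prime is 541, so range(2, 550) suffices.
primes = [n for n in range(2, 550) if is_prime(n)][:100]
print(all(holds(p) for p in primes))  # True: holds for the first 100 primes

# Continuing the search: 997 is prime, but its digit sum 25 is odd and
# composite, so the conjecture fails just beyond the tested range.
counterexample = next(n for n in range(2, 2000) if is_prime(n) and not holds(n))
print(counterexample)  # 997
```

Note that an odd composite digit sum divisible by 3 (9, 15, 21, 27, …) forces the number itself to be divisible by 3, so the smallest possible failing digit sum is 25, first reached by the prime 997.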

**Category:** Number Theory

[17361] **viXra:1701.0474 [pdf]**
*submitted on 2017-01-12 10:57:20*

**Authors:** George Rajna

**Comments:** 26 Pages.

New method allows for quick, precise measurement of quantum states. [15] The fact that it is possible to retrieve this lost information reveals new insight into the fundamental nature of quantum measurements, mainly by supporting the idea that quantum measurements contain both quantum and classical components. [14] Researchers blur the line between classical and quantum physics by connecting chaos and entanglement. [13] Yale University scientists have reached a milestone in their efforts to extend the durability and dependability of quantum information. [12] Using lasers to make data storage faster than ever. [11] Some three-dimensional materials can exhibit exotic properties that only exist in "lower" dimensions. For example, in one-dimensional chains of atoms that emerge within a bulk sample, electrons can separate into three distinct entities, each carrying information about just one aspect of the electron's identity—spin, charge, or orbit. The spinon, the entity that carries information about electron spin, has been known to control magnetism in certain insulating materials whose electron spins can point in any direction and easily flip direction. Now, a new study just published in Science reveals that spinons are also present in a metallic material in which the orbital movement of electrons around the atomic nucleus is the driving force behind the material's strong magnetism. [10] Currently studying entanglement in condensed matter systems is of great interest. This interest stems from the fact that some behaviors of such systems can only be explained with the aid of entanglement. [9] Researchers from the Norwegian University of Science and Technology (NTNU) and the University of Cambridge in the UK have demonstrated that it is possible to directly generate an electric current in a magnetic material by rotating its magnetization. 
[8] This paper explains the magnetic effect of the electric current from the observed effects of the accelerating electrons, which naturally cause the experienced changes of the electric field potential along the electric wire. The accelerating electrons explain not only the Maxwell Equations and Special Relativity, but also the Heisenberg Uncertainty Relation, wave-particle duality and the electron's spin, building a bridge between the Classical and Quantum Theories. The changing acceleration of the electrons explains the created negative electric field of the magnetic induction, the changing relativistic mass and the Gravitational Force, giving a Unified Theory of the physical forces. Taking into account also the Planck Distribution Law of the electromagnetic oscillators, we can explain the electron/proton mass rate and the Weak and Strong Interactions.

**Category:** Quantum Physics

[17360] **viXra:1701.0473 [pdf]**
*submitted on 2017-01-12 12:04:19*

**Authors:** Nikola Perkovic

**Comments:** 4 Pages.

This paper makes new claims regarding the fine structure constant. The specific value of the electromagnetic coupling constant, that is, the fine structure constant, is explained as a consequence of mass-energy equivalence. Special Relativity and Quantum Electrodynamics are used to obtain the mass-energy equivalence equation, after which a new, quantized equation of mass-energy equivalence is postulated and tested. A new way is presented to determine the mass of the neutron using the strong nuclear coupling constant and that of the proton using the fine structure constant.

**Category:** High Energy Particle Physics

[17359] **viXra:1701.0472 [pdf]**
*submitted on 2017-01-12 07:47:17*

**Authors:** Jeffrey S. Keen

**Comments:** 15 Pages, 17 Figures, 3 Tables

This paper was presented at the 2016 SSE European conference in Sweden, and addresses two fundamental areas of physics and cosmology that involve a “universal consciousness”. (a) It shows where Einstein was incorrect: it is not only possible to communicate information faster than the speed of light, but this can be instantaneous. (b) The main challenge in physics today is unifying quantum theory with gravity: it is demonstrated that the extended mind is involved in solving this problem.
The author has spent over 30 years researching the mind’s interaction with the laws of physics, subtle fields, and the cosmos. This has been achieved by quantifying sensed data and discovering formulae and universal constants. A technique, developed by the author, involving a singularity is explained for noetically studying subtle fields and abstract geometry. This has produced some ground-breaking and fundamental findings, demonstrating that the mind is very sensitive to geometry and both local and astronomical forces.
The most exciting aspects are the quantified results and graphs that have been obtained from a specified subtle energy beam length (L) measured over the last eight years. For example, during the course of a day a sinusoidal curve is obtained with maxima at sunset and minima at sunrise, even if measurements are made in a darkened room on a cloudy day.
Another example is that the mind can detect a lower gravitational force on Earth when the sun's and moon's gravity pull in opposite directions at full moon, resulting in a peak in L. Likewise, a higher gravitational force, when the sun's and moon's gravity pull in the same direction at new moon, results in counter-intuitively shorter lengths of L.
The mind also detects changes in the Newtonian gravitational force, Fg, as the earth orbits the sun. Over the course of a year, a plot of L produces the equation L = 6×10^105 · Fg^(−δ), which has a very high correlation coefficient, R² = 0.9745. The power index δ equals Feigenbaum's constant to within 0.013% error. This is another example of the mind's ability to interact with gravity and produce a universal constant, suggesting that consciousness is intimately connected to the fabric of the universe and chaos theory.
Any three objects in alignment, be they three grains of sand, three trees, three coins, three stones, three abstract circles drawn on paper, or even three objects in the solar system, form a strong subtle energy beam that has experimentally been perceived to extend endlessly. In particular, this beam has been measured during alignments across the solar system. These have ranged from eclipses of the sun and moon to a transit of Neptune by the moon. The data was analysed weeks after the events. In all cases L peaked before the predicted time of the occlusion. This time was always identical to the time it takes light to reach an observer on Earth from the furthest of the three planets in alignment on the day of the experiment. This demonstrates that the mind can communicate not only faster than light, but instantaneously across the solar system, and that the structure of the universe is such as to enable this to happen. It also suggests that macro entanglement is possible.
The findings in this paper significantly impact cosmology, and in particular show that Inflation Theory just after the big bang is unnecessary to explain the current structure of the universe.
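The quoted relation L = 6×10^105 · Fg^(−δ) is a power law, so the fit behind the quoted R² amounts to a least-squares line in log-log space whose slope is −δ. The sketch below illustrates only that method: the data are synthetic, generated from the claimed exponent (the Feigenbaum constant) plus small noise, and reproduce nothing from the paper itself.

```python
# Illustrative only: recover a power-law exponent by linear regression in
# log-log space, as one would to fit L = a * Fg**(-delta).
# Synthetic data; no measurements from the abstract are reproduced here.
import math
import random

FEIGENBAUM_DELTA = 4.669201609  # first Feigenbaum constant

random.seed(0)
fg = [0.95 + 0.01 * i for i in range(11)]  # relative gravitational force
L = [f ** (-FEIGENBAUM_DELTA) * (1 + random.uniform(-0.01, 0.01)) for f in fg]

# Least-squares slope of log L against log Fg estimates -delta.
x = [math.log(f) for f in fg]
y = [math.log(v) for v in L]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
print(-slope)  # close to the exponent used to generate the data
```

A high R² on such a fit shows only that the data follow some power law over the sampled range; it does not by itself single out the Feigenbaum constant.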

**Category:** Relativity and Cosmology

[17358] **viXra:1701.0471 [pdf]**
*submitted on 2017-01-12 05:04:13*

**Authors:** George Rajna

**Comments:** 15 Pages.

Physicists at the National Institute of Standards and Technology (NIST) have cooled a mechanical object to a temperature lower than previously thought possible, below the so-called "quantum limit." [9] For the past 100 years, physicists have been studying the weird features of quantum physics, and now they're trying to put these features to good use. One prominent example is that quantum superposition (also known as quantum coherence)—which is the property that allows an object to be in two states at the same time—has been identified as a useful resource for quantum communication technologies. [8] Quantum entanglement—which occurs when two or more particles are correlated in such a way that they can influence each other even across large distances—is not an all-or-nothing phenomenon, but occurs in various degrees. The more a quantum state is entangled with its partner, the better the states will perform in quantum information applications. Unfortunately, quantifying entanglement is a difficult process involving complex optimization problems that give even physicists headaches. [7] A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others. [6] The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. 
The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side of the diffraction pattern to the other, which violates CP and time-reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, making it a natural part of the relativistic quantum theory.

**Category:** Quantum Physics

[17357] **viXra:1701.0470 [pdf]**
*submitted on 2017-01-11 17:54:52*

**Authors:** Dongchan Lee

**Comments:** 9 Pages. 1st draft

In this short paper, I demonstrate why the top 5–6 original oil-richest countries need to be excluded from most socio-economic vs. cognitive-skill regressions: they remain outliers far outside the otherwise very reliable and stable regression growth coefficients and explanatory powers of the models involved. I include some simple linear regression charts in which these countries sit far out in the north-west corners of the regression lines; their GDP per capita had already reached the top tier of the world by the 1970s with minimal cognitive-skill and education inputs. I provide their relative economic strength compared to the economic miracle powers of East Asia (the 4 Asian Tigers and China), so that one can see that their super-rapid rises were all due to their oil-based economies, together with their top-6 shares of natural resource rents as a percentage of GDP over the past 40 years. I believe these 4 key factors allow anyone serious about socio-economic regressions to exclude these 5–6 countries from their analysis. Finally, I make a brief comment about the polar opposite of these countries: those with poor economies but rapid gains in cognitive skills.

**Category:** Economics and Finance

[17356] **viXra:1701.0468 [pdf]**
*submitted on 2017-01-11 23:10:52*

**Authors:** Stephen I. Ternyik

**Comments:** 5 Pages.

The teleology of economic knowledge and causality is discussed, in terms of mathematical and temporal aspects.

**Category:** General Science and Philosophy

[17355] **viXra:1701.0467 [pdf]**
*submitted on 2017-01-12 02:38:07*

**Authors:** Nikitin V. N., Nikitin I.V.

**Comments:** 1 Page.

Hydrogen and helium are the subsequent products of the synthesis of elements in stars.

**Category:** Astrophysics

[17354] **viXra:1701.0465 [pdf]**
*submitted on 2017-01-11 12:53:58*

**Authors:** Sylwester Kornowski

**Comments:** 7 Pages.

Here, applying the Scale-Symmetric Theory (SST), we answer the following question: what is the origin of cosmic reionization? The scenario presented here differs radically from that described within mainstream cosmology. Most important are the masses of massive galaxies/quasars and the decays of large cosmic structures. The highest rate of reionization of hydrogen should occur at redshift z(H,max) = 11.18, whereas complete reionization should occur at z(H,end) = 7.10. For the reionization of helium we obtain, respectively, z(He,max) = 3.63 and z(He,end) = 2.70. The theoretical results are consistent with observational data. We show that the number and energy of the created photons were sufficient to ionize the intergalactic medium. We also answer a second very important question: why did supermassive black holes appear so quickly?

**Category:** Quantum Gravity and String Theory

[17353] **viXra:1701.0463 [pdf]**
*submitted on 2017-01-11 09:46:23*

**Authors:** U. Kayser-Herold

**Comments:** 3 Pages.

By oblique reflection of circularly polarized photons on a rotating cylindrical mirror, the frequency of the reflected photons is shifted against the frequency of the incident photons by nearly twice the rotational frequency $n$ of the mirror: $\Delta \nu = 2\, n \sin \alpha$, where $\alpha$ is the axial angle of incidence. $\Delta \nu$ can be substantially enhanced by multiple reflections between counter-rotating coaxial mirrors.
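A quick numeric check of the stated shift (the rotation rate, incidence angle and reflection count below are illustrative values, not taken from the paper):

```python
import math

def frequency_shift(n_rot, alpha_deg, reflections=1):
    """Shift per the abstract's formula, Δν = 2 n sin(α), accumulated
    over multiple reflections between counter-rotating coaxial mirrors
    (linear accumulation is assumed here for illustration)."""
    return reflections * 2.0 * n_rot * math.sin(math.radians(alpha_deg))

# e.g. 1 kHz rotation, 30° axial incidence, 50 reflections
print(frequency_shift(1000.0, 30.0, reflections=50))  # about 50 kHz
```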

**Category:** Quantum Physics

[17352] **viXra:1701.0462 [pdf]**
*submitted on 2017-01-11 05:45:26*

**Authors:** Octavian Cira, Florentin Smarandache

**Comments:** 75 Pages.

We pose the problem of determining the sets of integers in base b ≥ 2 that generate primes using a function.

**Category:** General Mathematics

[17351] **viXra:1701.0457 [pdf]**
*submitted on 2017-01-11 05:51:19*

**Authors:** Florentin Smarandache

**Comments:** 12 Pages.

Soft set theory is a general mathematical tool for dealing with uncertain, fuzzy, not clearly defined objects. In this paper we introduce soft mixed neutrosophic N-algebraic structures and discuss some of their characteristics. We also introduce soft mixed dual neutrosophic N-algebraic structures, soft weak mixed neutrosophic N-algebraic structures, soft Lagrange mixed neutrosophic N-algebraic structures, soft weak Lagrange mixed neutrosophic and soft Lagrange free mixed neutrosophic N-algebraic structures, as well as the so-called soft strong neutrosophic loop, which is of pure neutrosophic character. We also introduce some new notions and basic properties of these newly born soft mixed neutrosophic N-structures related to neutrosophic theory.

**Category:** General Mathematics

[17350] **viXra:1701.0452 [pdf]**
*submitted on 2017-01-11 05:55:50*

**Authors:** Said Broumi, Irfan Deli, Florentin Smarandache

**Comments:** 15 Pages.

In this paper, we first give the cartesian product of two neutrosophic multi sets (NMS). Then, we define relations on neutrosophic multi sets to extend intuitionistic fuzzy multi relations to neutrosophic multi relations. These relations allow the composition of two neutrosophic multi sets. Also, various properties like reflexivity, symmetry and transitivity are studied.

**Category:** General Mathematics

[17349] **viXra:1701.0451 [pdf]**
*submitted on 2017-01-11 05:57:23*

**Authors:** Muhammad Akram, Sundas Shahzadi, Florentin Smarandache

**Comments:** 21 Pages.

The concept of intuitionistic neutrosophic soft sets can be utilized as a mathematical tool to deal with imprecise and unspecified information. In this paper, we apply the concept of intuitionistic neutrosophic soft sets to graphs. We introduce the concept of intuitionistic neutrosophic soft graphs, and present applications of intuitionistic neutrosophic soft graphs in multiple-attribute decision-making problems. We also present an algorithm for our proposed method.

**Category:** General Mathematics

[17348] **viXra:1701.0449 [pdf]**
*submitted on 2017-01-11 06:00:01*

**Authors:** Luige Vlădăreanu, Florentin Smarandache, Mumtaz Ali, Victor Vlădăreanu, Mingcong Deng

**Comments:** 6 Pages.

The paper presents automated estimation techniques for robot parameters through system identification, for both PID control and future implementation of intelligent control laws, with the aim of designing the experimental model in a 3D virtual reality for testing and validating control laws in the joints of NAO humanoid robots.

**Category:** General Mathematics

[17347] **viXra:1701.0448 [pdf]**
*submitted on 2017-01-11 06:02:38*

**Authors:** Nguyen Xuan Thao, Bui Cong Cuong, Florentin Smarandache

**Comments:** 20 Pages.

A rough fuzzy set is the result of approximation of a fuzzy set with respect to a crisp approximation space. It is a mathematical tool for the knowledge discovery in the fuzzy information systems. In this paper, we introduce the concepts of rough standard neutrosophic sets, standard neutrosophic information system and give some results of the knowledge discovery on standard neutrosophic information system based on rough standard neutrosophic sets.

**Category:** General Mathematics

[17346] **viXra:1701.0447 [pdf]**
*submitted on 2017-01-11 06:03:45*

**Authors:** Said Broumi, Mohamed Talea, Assia Bakali, Florentin Smarandache, Mumtaz Ali

**Comments:** 8 Pages.

The main purpose of this paper is to develop an algorithm to find the shortest path on a network in which the weights of the edges are represented by bipolar neutrosophic numbers. Finally, a numerical example is provided to illustrate the proposed approach.

**Category:** General Mathematics

[17345] **viXra:1701.0446 [pdf]**
*submitted on 2017-01-11 06:04:50*

**Authors:** Said Broumi, Mohamed Talea, Assia Bakali, Florentin Smarandache, Luige VlĂdĂreanu

**Comments:** 6 Pages.

In this paper, we develop a new approach to deal with the neutrosophic shortest path problem in a network in which each edge weight (or length) is represented as a triangular fuzzy neutrosophic number. The proposed algorithm also gives the shortest path length from the source node to the destination node using a ranking function. Finally, an illustrative example is included to demonstrate our proposed approach.
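The abstract does not specify its ranking function. One score function commonly used in the neutrosophic literature for a single valued neutrosophic number (T, I, F) is s = (2 + T − I − F)/3; the sketch below uses it purely as an assumed illustration of how candidate path lengths could be compared (the paper's actual ranking function may differ):

```python
def score(T, I, F):
    """Score of a single valued neutrosophic number (T, I, F).
    Assumed ranking function for illustration only."""
    return (2.0 + T - I - F) / 3.0

# Compare two candidate edge lengths by their scores:
a = score(0.8, 0.2, 0.1)   # mostly "true" length estimate
b = score(0.4, 0.5, 0.6)   # more indeterminate/false estimate
print(a > b)  # a ranks higher, so the first length is preferred
```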

**Category:** General Mathematics

[17344] **viXra:1701.0445 [pdf]**
*submitted on 2017-01-11 06:05:51*

**Authors:** Said Broumi, Mohamed Talea, Assia Bakali, Florentin Smarandache

**Comments:** 7 Pages.

In this study, we propose an approach to determine the shortest path length between a pair of specified nodes s and t on a network whose edge weights are represented by trapezoidal neutrosophic numbers. Finally, an illustrative example is provided to show the applicability and effectiveness of the proposed approach.

**Category:** General Mathematics

[17343] **viXra:1701.0444 [pdf]**
*submitted on 2017-01-11 06:06:52*

**Authors:** Kenta Takaya, Toshinori Asai, Valeri Kroumov, Florentin Smarandache

**Comments:** 6 Pages.

In the process of developing a control strategy for mobile robots, simulation is important for testing the software components, robot behavior and control algorithms in different surrounding environments. In this paper we introduce a simulation environment for mobile robots based on ROS and Gazebo. We show that after properly creating the robot models under Gazebo, the code developed for the simulation process can be directly implemented in the real robot without modifications. Autonomous navigation tasks and 3D-mapping simulation using control programs under ROS are presented. Both the simulation and experimental results agree very well and show the usability of the developed environment.

**Category:** General Mathematics

[17342] **viXra:1701.0439 [pdf]**
*submitted on 2017-01-11 06:10:57*

**Authors:** Said Broumi, Florentin Smarandache

**Comments:** 21 Pages.

Multi-attribute decision making (MADM) plays an important role in many applications. Due to their efficiency in handling indeterminate and inconsistent information, single valued neutrosophic sets are widely used to model indeterminate information.

**Category:** General Mathematics

[17341] **viXra:1701.0438 [pdf]**
*submitted on 2017-01-11 06:11:57*

**Authors:** Florentin Smarandache, Mircea Eugen Şelariu

**Comments:** 13 Pages.

The trilobes are ex-centric circular supermathematics functions (EC-SMF) of angular excentricity.

**Category:** General Mathematics

[17340] **viXra:1701.0435 [pdf]**
*submitted on 2017-01-11 06:15:54*

**Authors:** Marcel Migdalovici, Luige Vladareanu, Gabriela Vladeanu, Said broumi, Daniela Baran, Florentin Smarandache

**Comments:** 6 Pages.

The paper surveys some of the authors' concepts on the stability regions of dynamic systems, in the general case of dynamic systems that depend on parameters. The separation of stable regions in the free-parameter domain is assumed in the paper as an important property of the environment, and it is carried over to the specific case of the walking robot analyzed in the paper.

**Category:** General Mathematics

[17339] **viXra:1701.0433 [pdf]**
*submitted on 2017-01-11 06:17:59*

**Authors:** W.B. Vasantha Kandasamy, Ilanthenral Kandasamy, Florentin Smarandache

**Comments:** 10 Pages.

The Collatz conjecture is an open conjecture in mathematics, named after Lothar Collatz who proposed it in 1937. It is also known as the 3n + 1 conjecture, the Ulam conjecture (after Stanislaw Ulam), Kakutani's problem (after Shizuo Kakutani) and so on. Several generalizations of the Collatz conjecture have been carried out.
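As a reminder, the standard 3n + 1 map that such generalizations start from (this is the textbook iteration, not the paper's generalization) can be iterated as:

```python
def collatz_steps(n):
    """Count iterations of the 3n + 1 map until n reaches 1
    (assuming, as the conjecture asserts, that it does)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(6))   # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1: 8 steps
print(collatz_steps(27))  # a famously long trajectory: 111 steps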

**Category:** General Mathematics

[17338] **viXra:1701.0424 [pdf]**
*submitted on 2017-01-11 06:27:41*

**Authors:** Florentin Smarandache, Mircea Eugen Șelariu

**Comments:** 13 Pages.

The trilobes are ex-centric circular supermathematics functions (EC-SMF) of angular excentricity.

**Category:** General Mathematics

[17337] **viXra:1701.0423 [pdf]**
*submitted on 2017-01-11 06:28:41*

**Authors:** Ilanthenral Kandasamy, Florentin Smarandache

**Comments:** 8 Pages.

Personality tests are most commonly of objective type, where the users rate their own behaviour. Instead of being given a single forced choice, they can be provided with more options. A person may not, in general, be capable of judging his/her behaviour very precisely and of categorizing it into a single category. Since it is self-rating, there are a lot of uncertain and indeterminate feelings involved. The results of the test depend a lot on the circumstances under which the test is taken, the amount of time that is spent, the past experience of the person, the emotion the person is feeling, the person's self-image at that time, and so on.

**Category:** General Mathematics

[17336] **viXra:1701.0422 [pdf]**
*submitted on 2017-01-11 06:29:23*

**Authors:** Ovidiu Ilie Şandru, Florentin Smarandache

**Comments:** 3 Pages.

In this paper we present a procedure for algorithmizing the operations needed to automatically move a predefined object in a given video image into a target region of that image, intended to facilitate the development of software applications specialized in solving this kind of problem.

**Category:** General Mathematics

[17335] **viXra:1701.0421 [pdf]**
*submitted on 2017-01-11 03:24:38*

**Authors:** Igor Chistiukhin

**Comments:** 18 Pages. Russian language

The article deals with the problems associated with the scientific study of the relationship of the Orthodox Church to the system of the ancient spectacles that existed at the time of the birth of Christianity

**Category:** Social Science

[17334] **viXra:1701.0420 [pdf]**
*submitted on 2017-01-10 13:45:13*

**Authors:** Nikhil Shaw

**Comments:** 8 Pages.

In computer science, a selection algorithm is an algorithm for finding the kth smallest number in a list or array; such a number is called the kth order statistic. This includes the cases of finding the minimum, maximum, and median elements. There are O(n) (worst-case linear time) selection algorithms, and sublinear performance is possible for structured data; in the extreme, O(1) for an array of sorted data. Selection is a subproblem of more complex problems like the nearest neighbor and shortest path problems. Many selection algorithms are derived by generalizing a sorting algorithm, and conversely some sorting algorithms can be derived as repeated application of selection. Although this new algorithm has a worst case of O(n^2), its average case is near-linear time for an unsorted list.
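The O(n^2)-worst-case / linear-average trade-off described above is the same one exhibited by the classic quickselect routine, sketched below for comparison (this is the textbook algorithm, not the paper's new one):

```python
import random

def quickselect(arr, k):
    """Return the k-th smallest element (1-indexed) of arr.
    Average-case O(n); worst case O(n^2)."""
    a = list(arr)
    lo, hi = 0, len(a) - 1
    target = k - 1
    while True:
        if lo == hi:
            return a[lo]
        # A random pivot keeps the expected cost linear.
        pivot = a[random.randint(lo, hi)]
        i, j = lo, hi
        while i <= j:  # Hoare-style partition around the pivot
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        if target <= j:      # answer lies in the left part
            hi = j
        elif target >= i:    # answer lies in the right part
            lo = i
        else:                # answer equals the pivot value
            return a[target]

print(quickselect([7, 1, 5, 3, 9], 3))  # median of the list -> 5
```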

**Category:** Statistics

[17333] **viXra:1701.0419 [pdf]**
*submitted on 2017-01-10 10:38:19*

**Authors:** Azeddine ELHASSOUNY, Florentin SMARANDACHE

**Comments:** 17 Pages.

The purpose of this paper is to present an extension and alternative of the hybrid approach combining Saaty's Analytical Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method (AHP-TOPSIS), which is based on the AHP and its use of pairwise comparisons, to a new method called α-D MCDM-TOPSIS (α-Discounting Method for Multi-Criteria Decision Making-TOPSIS). The proposed method works not only for preferences that are pairwise comparisons of criteria, as AHP does, but for preferences of any n-wise (with n ≥ 2) comparisons of criteria. Finally, the α-D MCDM-TOPSIS methodology is verified by some examples that demonstrate how it might be applied to different types of matrices and how it handles consistent, inconsistent, weakly inconsistent, and strongly inconsistent problems.

**Category:** General Mathematics

[17332] **viXra:1701.0418 [pdf]**
*submitted on 2017-01-10 10:41:59*

**Authors:** Ilanthenral Kandasamy, Florentin Smarandache

**Comments:** 15 Pages.

Double Refined Indeterminacy Neutrosophic Set (DRINS) is an inclusive case of the refined neutrosophic set, defined by Smarandache [1], which provides the additional possibility to represent with sensitivity and accuracy the uncertain, imprecise, incomplete, and inconsistent information which are available in the real world.

**Category:** General Mathematics

[17331] **viXra:1701.0417 [pdf]**
*submitted on 2017-01-10 10:48:31*

**Authors:** Muhammad Gulistan, Majid Khan, Young Bae Jun, Florentin Smarandache, Naveed Yaqoob

**Comments:** 17 Pages.

We generalize the concepts of fuzzy point, intuitionistic fuzzy point, and cubic point by introducing the concept of the neutrosophic cubic point.

**Category:** General Mathematics

[17330] **viXra:1701.0416 [pdf]**
*submitted on 2017-01-10 12:01:39*

**Authors:** Jerrold Thacker

**Comments:** 12 Pages.

A review of parallax measurements of stars indicates there are major discrepancies in the distance measurements to red giant stars. There is very strong evidence that the nearest star is Betelgeuse, only a few light-weeks away and actually a dwarf star. It is possible that this dwarf star may be the missing Planet 9.

**Category:** Astrophysics

[17329] **viXra:1701.0415 [pdf]**
*submitted on 2017-01-10 12:06:01*

**Authors:** Gao Jian, Xinhan Huang, Min Wang, Xinde Li

**Comments:** 7 Pages.

Several neutrosophic combination rules based on the Dempster-Shafer Theory and Dezert-Smarandache Theory are presented in the study.

**Category:** General Mathematics

[17328] **viXra:1701.0413 [pdf]**
*submitted on 2017-01-10 12:08:23*

**Authors:** Akbar Rezaei, Arsham Borumand Saeid, Florentin Smarandache

**Comments:** 15 Pages.

In this paper, we introduce the notion of (implicative) neutrosophic filters in BE-algebras. The relation between implicative neutrosophic filters and neutrosophic filters is investigated, and we show that in self-distributive BE-algebras these notions are equivalent.

**Category:** General Mathematics

[17327] **viXra:1701.0412 [pdf]**
*submitted on 2017-01-10 12:09:13*

**Authors:** Florentin Smarandache, Ștefan Vlăduțescu

**Comments:** 6 Pages.

The study highlights persuasive-fictional inductions that are recorded in journalistic discourse. Subsequently, it constitutes an application of Neutrosophy to journalistic communication. The theoretical premise is that journalism is impregnated with persuasion.

**Category:** General Mathematics

[17326] **viXra:1701.0411 [pdf]**
*submitted on 2017-01-10 12:10:09*

**Authors:** Florentin Smarandache, Gheorghe Săvoiu

**Comments:** 36 Pages.

Neutrosophic numbers easily allow modeling the uncertainties of the universe of prices, thus justifying the growing interest in the theoretical and practical aspects of the arithmetic generated by some special numbers in our work. At the beginning of this paper, we reconsider the importance in applied research of instrumental discernment, viewed as the main support of the final measurement validity.

**Category:** General Mathematics

[17325] **viXra:1701.0410 [pdf]**
*submitted on 2017-01-10 12:10:53*

**Authors:** Florentin Smarandache

**Comments:** 11 Pages.

We introduce now for the first time the neutrosophic modal logic. The Neutrosophic Modal Logic includes the neutrosophic operators that express the modalities. It is an extension of neutrosophic predicate logic, and of neutrosophic propositional logic.

**Category:** General Mathematics

[17324] **viXra:1701.0407 [pdf]**
*submitted on 2017-01-10 12:14:36*

**Authors:** Mumtaz Ali, Florentin Smarandache, Luige Vladareanu

**Comments:** 28 Pages.

Neutrosophic sets and logic play a significant role in approximation theory. They are a generalization of fuzzy sets and intuitionistic fuzzy sets. A neutrosophic set is based on the neutrosophic philosophy, in which every idea Z has an opposite, denoted anti(Z), and a neutral, denoted neut(Z). This is the main feature of neutrosophic sets and logic.

**Category:** General Mathematics

[17323] **viXra:1701.0406 [pdf]**
*submitted on 2017-01-10 12:15:32*

**Authors:** Madad Khan, Florentin Smarandache, Sania Afzal

**Comments:** 24 Pages.

In this paper we have defined neutrosophic ideals, neutrosophic interior ideals, neutrosophic quasi-ideals and neutrosophic bi-ideals (neutrosophic generalized bi-ideals) and proved some results related to them.

**Category:** General Mathematics

[17322] **viXra:1701.0404 [pdf]**
*submitted on 2017-01-10 12:19:28*

**Authors:** Florentin Smarandache

**Comments:** 8 Pages.

This paper is an extension of the applicability of (t,i,f)-Neutrosophic Structures, in which a new type of structure was introduced for the first time.

**Category:** General Mathematics

[17321] **viXra:1701.0403 [pdf]**
*submitted on 2017-01-10 12:20:21*

**Authors:** Mumtaz Ali, Florentin Smarandache

**Comments:** 10 Pages.

The theory of soluble groups and nilpotent groups is old, and hence a generalized one. In this paper, we introduce neutrosophic soluble groups and neutrosophic nilpotent groups, which carry some kind of indeterminacy. These notions generalize the classical notions of soluble groups and nilpotent groups. We also derive new types of series, which lead to new notions of soluble and nilpotent groups: mixed neutrosophic soluble groups and mixed neutrosophic nilpotent groups, as well as strong neutrosophic soluble groups and strong neutrosophic nilpotent groups.

**Category:** General Mathematics

[17320] **viXra:1701.0402 [pdf]**
*submitted on 2017-01-10 12:21:37*

**Authors:** Florentin Smarandache, Mumtaz Ali, Muhammad Shabir

**Comments:** 11 Pages.

In this paper, for the first time, the authors introduce the notion of the neutrosophic triplet group, which is completely different from the classical group. In the neutrosophic triplet group, we apply the fundamental law of neutrosophy that for an idea A, we have neut(A) and anti(A), and we capture the picture of neutrosophy in algebraic structures.

**Category:** General Mathematics

[17319] **viXra:1701.0400 [pdf]**
*submitted on 2017-01-10 12:26:05*

**Authors:** Mumtaz Ali, Florentin Smarandache, W. B. Vasantha Kandasamy

**Comments:** 13 Pages.

Algebraic codes play a significant role in minimising data corruption caused by defects such as interference, noisy channels, crosstalk, and packet loss. In this paper, we introduce soft codes (soft linear codes) through the application of soft sets, which are approximated collections of codes.

**Category:** General Mathematics

[17318] **viXra:1701.0398 [pdf]**
*submitted on 2017-01-10 07:30:58*

**Authors:** Yibing Qiu

**Comments:** 1 Page.

A viewpoint with regard to black holes, pulsars and neutron stars is presented.

**Category:** Astrophysics

[17317] **viXra:1701.0397 [pdf]**
*submitted on 2017-01-10 07:35:16*

**Authors:** Quang Nguyen Van

**Comments:** 1 Page.

We have found a solution of FLT for n = 3, which would mean FLT is wrong. In this paper, we give a counterexample (an integer solution of the equation x^3 + y^3 = z^3 only). It is very large (18 digits).

**Category:** Number Theory

[17316] **viXra:1701.0394 [pdf]**
*submitted on 2017-01-10 07:43:01*

**Authors:** Victor Christianto, Florentin Smarandache

**Comments:** 11 Pages.

Hyman Minsky pioneered the idea of the financial instability hypothesis to explain how swings between robustness and fragility in financial markets generate business cycles in the economic system. Therefore, in his model business cycles and instability are endogenous. The problem now is how to put his idea of financial instability into a working model which can be tested with empirical data. Such a Minskyan model is quite rare, though some economists have tried to achieve one. For example, Toichiro Asada suggested generalized Lotka-Volterra nonlinear systems of equations as a model for Minskyan cycles.

**Category:** General Mathematics

[17315] **viXra:1701.0393 [pdf]**
*submitted on 2017-01-10 07:44:18*

**Authors:** Qiang Guo, You He, Yong Deng, Tao Jian, Florentin Smarandache

**Comments:** 35 Pages.

To obtain effective fusion results from multi-source evidences with different importances, an evidence fusion method with importance discounting factors, based on neutrosophic probability analysis in the DSmT framework, is proposed. First, the reasonable evidence sources are selected based on the statistical analysis of the pignistic probability functions of single focal elements.

**Category:** General Mathematics

[17314] **viXra:1701.0392 [pdf]**
*submitted on 2017-01-10 07:45:51*

**Authors:** W.B. Vasantha Kandasamy, Ilanthenral Kandasamy, Florentin Smarandache

**Comments:** 9 Pages.

The Collatz conjecture is an open conjecture in mathematics, named after Lothar Collatz who proposed it in 1937. It is also known as the 3n + 1 conjecture, the Ulam conjecture (after Stanislaw Ulam), Kakutani's problem (after Shizuo Kakutani) and so on.

**Category:** General Mathematics

[17313] **viXra:1701.0391 [pdf]**
*submitted on 2017-01-10 07:47:02*

**Authors:** Mumtaz Ali, Florentin Smarandache

**Comments:** 12 Pages.

Algebraic codes play a significant role in minimising data corruption caused by defects such as interference, noisy channels, crosstalk, and packet loss. In this paper, we introduce soft codes (soft linear codes) through the application of soft sets, which are approximated collections of codes. We also discuss several types of soft codes, such as type-1 soft codes, complete soft codes, etc. Further, we construct the soft generator matrix and soft parity check matrix for the soft linear codes. Moreover, we develop two techniques for the decoding of soft codes.

**Category:** General Mathematics

[17312] **viXra:1701.0390 [pdf]**
*submitted on 2017-01-10 07:48:31*

**Authors:** Mehmet Şahin, Necati Olgun, Vakkas Uluçay, Abdullah Kargın, Florentin Smarandache

**Comments:** 28 Pages.

In this paper, we propose transformations based on the centroid points between single valued neutrosophic values. We introduce these transformations according to truth, indeterminacy and falsity value of single valued neutrosophic values. We propose a new similarity measure based on falsity value between single valued neutrosophic sets.

**Category:** General Mathematics

[17311] **viXra:1701.0384 [pdf]**
*submitted on 2017-01-10 08:29:52*

**Authors:** Said Broumi, Florentin Smarandache, Mohamed Talea, Assia Bakali

**Comments:** 8 Pages.

In this paper, we first define the concept of bipolar single valued neutrosophic graphs as a generalization of bipolar fuzzy graphs, N-graphs, intuitionistic fuzzy graphs, single valued neutrosophic graphs and bipolar intuitionistic fuzzy graphs.

**Category:** General Mathematics

[17310] **viXra:1701.0383 [pdf]**
*submitted on 2017-01-10 08:31:27*

**Authors:** Florentin Smarandache, Ștefan VlĂduȚescu, Ioan Constantin Dima, Dan Valeriu Voinea

**Comments:** 8 Pages.

The paper aims to explain the technology of the emergence of information. Our research proves that information, as a communicational product, is the result of processing the meanings of informational material within some operations, actions, mechanisms and strategies. Eight computational-communicative operations of building information are determined.

**Category:** General Mathematics

[17309] **viXra:1701.0381 [pdf]**
*submitted on 2017-01-10 08:33:59*

**Authors:** Said Broumi, Mohamed Talea, Assia Bakali, Florentin Smarandache

**Comments:** 6 Pages.

In this paper, the authors propose an extended version of Dijkstra's algorithm for finding the shortest path on a network where the edge weights are characterized by interval valued neutrosophic numbers. Finally, a numerical example is given to explain the proposed algorithm.

**Category:** General Mathematics

[17308] **viXra:1701.0380 [pdf]**
*submitted on 2017-01-10 08:43:21*

**Authors:** Kalyan Mondal, Surapati Pramanik, Florentin Smarandache

**Comments:** 16 Pages.

This paper presents some similarity measures between complex neutrosophic sets. A complex neutrosophic set is a generalization of neutrosophic set whose complex-valued truth membership function, complex-valued indeterminacy membership function, and complex valued falsity membership functions are the combinations of real-valued truth amplitude term in association with phase term, real-valued indeterminate amplitude term with phase term, and real-valued false amplitude term with phase term respectively. In the present study, we have proposed complex cosine, Dice and Jaccard similarity measures and investigated some of their properties. Finally, complex neutrosophic cosine, Dice and Jaccard similarity measures have been applied to a medical diagnosis problem with complex neutrosophic information.

**Category:** General Mathematics

[17307] **viXra:1701.0378 [pdf]**
*submitted on 2017-01-10 08:44:47*

**Authors:** George Rajna

**Comments:** 20 Pages.

Controlled direct acceleration of electrons in very strong laser fields can offer a path towards ultra-compact accelerators. [13] In an electron microscope, electrons are emitted by pointy metal tips, so they can be steered and controlled with high precision. Recently, such metal tips have also been used as high precision electron sources for generating X-rays. [12] In some chemical reactions both electrons and protons move together. When they transfer, they can move concertedly or in separate steps. Light-induced reactions of this sort are particularly relevant to biological systems, such as Photosystem II where plants use photons from the sun to convert water into oxygen. [11] EPFL researchers have found that water molecules are 10,000 times more sensitive to ions than previously thought. [10] Working with colleagues at the Harvard-MIT Center for Ultracold Atoms, a group led by Harvard Professor of Physics Mikhail Lukin and MIT Professor of Physics Vladan Vuletic have managed to coax photons into binding together to form molecules – a state of matter that, until recently, had been purely theoretical. The work is described in a September 25 paper in Nature. New ideas for interactions and particles: This paper examines the possibility to origin the Spontaneously Broken Symmetries from the Planck Distribution Law. This way we get a Unification of the Strong, Electromagnetic, and Weak Interactions from the interference occurrences of oscillators. Understanding that the relativistic mass change is the result of the magnetic induction we arrive to the conclusion that the Gravitational Force is also based on the electromagnetic forces, getting a Unified Relativistic Quantum Theory of all 4 Interactions.

**Category:** High Energy Particle Physics

[17306] **viXra:1701.0377 [pdf]**
*submitted on 2017-01-10 08:51:57*

**Authors:** Said Broumi, Mohamed Talea, Assia Bakali, Florentin Smarandache

**Comments:** 5 Pages.

The shortest path problem is one of the classic problems in graph theory. In the literature, many algorithms have been developed to provide a solution for the shortest path problem in a network. One of the common algorithms for solving the shortest path problem is Dijkstra's algorithm. In this paper, Dijkstra's algorithm has been redesigned to handle the case in which most parameters of a network are uncertain and given in terms of neutrosophic numbers. Finally, a numerical example is given to explain the proposed algorithm.
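For reference, the crisp Dijkstra's algorithm that such papers redesign looks like this (standard textbook version over numeric weights; the neutrosophic variant would replace the numeric comparisons with a ranking of neutrosophic numbers, which is not reproduced here):

```python
import heapq

def dijkstra(graph, source):
    """Classic Dijkstra over non-negative crisp weights.

    graph: {node: [(neighbour, weight), ...]}
    Returns {node: shortest distance from source}.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical 4-node network for illustration
g = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("t", 4)], "b": [("t", 1)]}
print(dijkstra(g, "s")["t"])  # s -> a -> b -> t, total length 4
```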

**Category:** General Mathematics

[17305] **viXra:1701.0376 [pdf]**
*submitted on 2017-01-10 08:54:44*

**Authors:** GUO Qiang, HE You, LI Xian, Florentin Smarandache, XU Shi-you

**Comments:** 12 Pages.

Aiming to solve the problem that evidence information based on the Dezert-Smarandache (DSm) model cannot be effectively conditionally reasoned in a multi-source heterogeneous network, which leads to a low rate of situation assessment, a situation assessment method in a Conditional Evidential Network based on DSm-Proportional Conflict Redistribution No. 5 (PCR5) is proposed. First, the conditional reasoning formula in the Conditional Evidential Network based on the DSm model is given. Then, the Disjunctive Rule of Combination (DRC) based on DSm-PCR5 is proposed, and the Generalized Bayesian Theorem (GBT) for multiple intersection sets of focal elements can be obtained on the premise that the conditional mass assignment functions of focal elements in the refinement of the hyper-power set are known. Finally, through the results of simulation experiments on situation assessment, the effectiveness of the proposed method is verified.

**Category:** General Mathematics

[17304] **viXra:1701.0374 [pdf]**
*submitted on 2017-01-10 09:16:25*

**Authors:** Florentin Smarandache

**Comments:** 9 Pages.

In this paper we make distinctions between Classical Logic (where the propositions are 100% true or 100% false) and Neutrosophic Logic (where one deals with partially true, partially indeterminate and partially false propositions) in order to respond to K. Georgiev's [1] criticism. We recall that if an axiom is true in a classical logic system, it does not necessarily follow that the axiom is valid in a modern (fuzzy, intuitionistic fuzzy, neutrosophic, etc.) logic system.

**Category:** General Mathematics

[17303] **viXra:1701.0373 [pdf]**
*submitted on 2017-01-10 09:19:44*

**Authors:** Ilanthenral Kandasamy, Florentin Smarandache

**Comments:** 10 Pages.

Triple Refined Indeterminate Neutrosophic Set (TRINS), a case of the refined neutrosophic set, was introduced. It provides the additional possibility of representing with sensitivity and accuracy the uncertain, imprecise, incomplete, and inconsistent information that is available in the real world.

**Category:** General Mathematics

[17302] **viXra:1701.0371 [pdf]**
*submitted on 2017-01-10 09:22:27*

**Authors:** Ilanthenral Kandasamy, Florentin Smarandache

**Comments:** 16 Pages.

Triple Refined Indeterminate Neutrosophic Set (TRINS), a case of the refined neutrosophic set, was introduced in [8]. The uncertain and inconsistent information available in the real world is represented with sensitivity and accuracy by TRINS.

**Category:** General Mathematics

[17301] **viXra:1701.0368 [pdf]**
*submitted on 2017-01-10 09:28:40*

**Authors:** Kalyan Mondal, Mumtaz Ali, Surapati Pramanik, Florentin Smarandache

**Comments:** 27 Pages.

This paper presents some similarity measures between complex neutrosophic sets. A complex neutrosophic set is a generalization of the neutrosophic set whose complex-valued truth membership function, complex-valued indeterminacy membership function, and complex-valued falsity membership function are the combinations of a real-valued truth amplitude term in association with a phase term, a real-valued indeterminate amplitude term with a phase term, and a real-valued false amplitude term with a phase term, respectively.

**Category:** General Mathematics

[17300] **viXra:1701.0366 [pdf]**
*submitted on 2017-01-10 09:32:05*

**Authors:** Said Broumi, Mohamed Talea, Assia Bakali, Florentin Smarandache

**Comments:** 6 Pages.

In this work, a neutrosophic network method is proposed for finding the shortest path length with single valued trapezoidal neutrosophic number. The proposed algorithm gives the shortest path length using score function from source node to destination node.

**Category:** General Mathematics

[17299] **viXra:1701.0364 [pdf]**
*submitted on 2017-01-10 09:37:38*

**Authors:** Luige Vladareanu, Mihaiela Iliescu, Hongbo Wang, Feng Yongfei, Victor Vladareanu, Hongnian Yu, Florentin Smarandache

**Comments:** 6 Pages.

This paper presents relevant aspects of the idea of using digital medicine in cancer care, so as to shape a viable strategy for creating and implementing an interactive digital platform, NEO-VIP, which should be the basic support for designing the strategy for integration of basic, clinical and environmental research on neoplasia progression to cancer.

**Category:** General Mathematics

[17298] **viXra:1701.0363 [pdf]**
*submitted on 2017-01-10 09:39:32*

**Authors:** Florentin Smarandache, Mircea Eugen Şelariu

**Comments:** 6 Pages.

The eccentric beta functions of eccentric variable bexθ and of centric variable Bexα form the foundation of the edifice of the eccentric circular supermathematical functions (FSM−CE).

**Category:** General Mathematics

[17297] **viXra:1701.0362 [pdf]**
*submitted on 2017-01-10 09:42:09*

**Authors:** Said Broumi, Mohamed Talea, Assia Bakali, Florentin Smarandache

**Comments:** 7 Pages.

In this article, we extend the concept of neutrosophic graph-based multicriteria decision making method (NGMCDM) to the case of interval valued neutrosophic graph theory. The new concept is called interval valued neutrosophic graph-based multicriteria decision making method (IVNGMCDM for short).

**Category:** General Mathematics

[17296] **viXra:1701.0358 [pdf]**
*submitted on 2017-01-10 09:49:30*

**Authors:** Madad Khan, Florentin Smarandache, Muhammad Gulistan, Naveed Yaqoob

**Comments:** 24 Pages.

The main theme of this paper is to explore the structural properties of Hv-LA-semigroups with respect to generalized cubic relations. Here, some generalized cubic equivalence relations in Hv-LA-semigroups have been investigated.

**Category:** General Mathematics

[17295] **viXra:1701.0356 [pdf]**
*submitted on 2017-01-10 09:54:12*

**Authors:** Irfan Deli, Yusuf Şubaş, Florentin Smarandache, Mumtaz Ali

**Comments:** 8 Pages.

Interval valued bipolar neutrosophic set (IVBN-set) is a new generalization of the fuzzy set, bipolar fuzzy set, neutrosophic set and bipolar neutrosophic set, so that it can handle uncertain information more flexibly in the process of decision making.

**Category:** General Mathematics

[17294] **viXra:1701.0354 [pdf]**
*submitted on 2017-01-10 09:56:56*

**Authors:** Said Broumi, Florentin Smarandache

**Comments:** 13 Pages.

We first defined interval-valued neutrosophic soft rough sets (IVN-soft rough sets for short), which combine interval-valued neutrosophic soft sets and rough sets, and studied some of their basic properties. This concept is an extension of interval-valued intuitionistic fuzzy soft rough sets (IVIF-soft rough sets).

**Category:** General Mathematics

[17293] **viXra:1701.0351 [pdf]**
*submitted on 2017-01-10 10:04:37*

**Authors:** Octavian Cira, F. Smarandache

**Comments:** 26 Pages.

In this article we try to answer questions regarding the set of L.

**Category:** General Mathematics

[17292] **viXra:1701.0346 [pdf]**
*submitted on 2017-01-10 05:36:59*

**Authors:** H.S. Dhaliwal

**Comments:** 19 Pages.

I have compared WMAP to Planck, and there appear to be explosion-like phenomena occurring in the CMB. In the before-and-after comparisons, a hot spot disappears, or explodes, after some time. There are also observations of hot spots appearing where once there was a cold region. One may say this is noise in the data, to preserve the big bang theory, but there is a significant chance it is not, as these changes are too large. If they are explosions, these hot spots may leave black holes behind and eject matter outwards, similar to a supernova event but on a larger scale. In some observations you will notice a round, hollowish dark dot left after the explosion-like event. This observation, if correct, means we cannot rely on the redshift-distance interpretation as we know it when it comes to the distance between us and the CMB. If in fact these were explosions, they should differ significantly in their frequency. There is also evidence from simulations on the CMB of numerous concentric circles existing in the map. These circles may be the after-effects of these CMB hot spots exploding. The midpoints of these concentric rings do not have any hot spots in them, because the hot spot at the midpoint may have exploded in the past, turning into a cold spot, with the concentric rings as the after-effect. A superimposition of the concentric circles on the CMB is also included in the paper. Note that some concentric rings have unique dark spots at the midpoint; a black circle is placed over these interesting observations (Obs. 17).

**Category:** Astrophysics

[17291] **viXra:1701.0345 [pdf]**
*submitted on 2017-01-10 07:18:39*

**Authors:** Zhang ChengGang

**Comments:** 2 Pages.

The time-independent Schrödinger equation is derived mathematically and physically.

**Category:** Quantum Physics

[17290] **viXra:1701.0344 [pdf]**
*submitted on 2017-01-09 16:56:48*

**Authors:** Gerges Francis Tawdrous

**Comments:** 73 Pages. The study concerns the Tabernacle geometry only.

This study concerns the geometrical details of the Tabernacle, whose geometrical data is not completely specified in the Holy Bible (Book of Exodus), so many geometrical details had to be deduced logically. For that reason the study is interesting, because it depends fundamentally on geometrical logic.

**Category:** General Science and Philosophy

[17289] **viXra:1701.0343 [pdf]**
*submitted on 2017-01-09 18:11:54*

**Authors:** Jerrold Thacker

**Comments:** 1 Page.

There is evidence that the attractive force of gravity is logarithmic instead of linear. If this is true, then the motion of stars in spiral galaxies is easily explained and there is no need for dark matter.
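One common reading of this idea is a logarithmic gravitational potential, whose force falls off as 1/r rather than 1/r²; under that assumption the circular-orbit speed v = √(F·r/m) becomes independent of radius, i.e. a flat rotation curve. A rough numerical sketch (the galaxy mass and calibration radius are assumed figures for illustration, not the paper's):

```python
import math

G = 6.674e-11            # gravitational constant, SI units
M = 1.989e41             # ~1e11 solar masses, an assumed galaxy mass in kg
r0 = 3.086e20            # 10 kpc in metres, assumed calibration radius
k = G * M / r0           # chosen so both force laws agree at r0

for r_kpc in (5, 10, 20, 40):
    r = r_kpc * 3.086e19                 # kpc -> m
    v_newton = math.sqrt(G * M / r)      # F = GMm/r^2: Keplerian falloff
    v_flat = math.sqrt(k)                # F = km/r (log potential): flat
    print(f"{r_kpc:3d} kpc  Newton {v_newton/1e3:6.1f} km/s  "
          f"log-potential {v_flat/1e3:6.1f} km/s")
```

The Newtonian speed halves every factor of four in radius, while the 1/r-force speed stays constant, which is the qualitative behavior of observed galactic rotation curves.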

**Category:** Astrophysics

[17288] **viXra:1701.0342 [pdf]**
*submitted on 2017-01-10 00:56:43*

**Authors:** Mahendra Kumar Trivedi, Alice Branton, Dahryn Trivedi, Gopal Nayak, William Dean Plikerd, Peter L. Surguy, Robert John Kock, Rolando Baptista Piedad, Russell Phillip Callas, Sakina A. Ansari, Sandra Lee Barrett, Sara Friedman, Steven Lee Christie

**Comments:** 10 Pages.

With the increasing popularity of herbomineral preparations in healthcare, a new proprietary herbomineral formulation was formulated with ashwagandha root extract and three minerals viz. zinc, magnesium, and selenium. The aim of the study was to evaluate the immunomodulatory potential of Biofield Energy Healing (The Trivedi Effect®) on the herbomineral formulation using murine splenocyte cells. The test formulation was divided into two parts. One was the control without the Biofield Energy Treatment. The other part was labelled the Biofield Energy Treated sample, which received the Biofield Energy Healing Treatment remotely by twenty renowned Biofield Energy Healers. Through the MTT assay, all the test formulation concentrations from 0.00001053 to 10.53 µg/mL were found to be safe with cell viability ranging from 102.61% to 194.57% using splenocyte cells. The Biofield Treated test formulation showed a significant (p≤0.01) inhibition of TNF-α expression by 15.87%, 20.64%, 18.65%, and 20.34% at 0.00001053, 0.0001053, 0.01053, and 0.1053 µg/mL, respectively as compared to the vehicle control (VC) group. The level of TNF-α was reduced by 8.73%, 19.54%, and 14.19% at 0.001053, 0.01053, and 0.1053 µg/mL, respectively in the Biofield Treated test formulation compared to the untreated test formulation. The expression of IL-1β reduced by 22.08%, 23.69%, 23.00%, 16.33%, 25.76%, 16.10%, and 23.69% at 0.00001053, 0.0001053, 0.001053, 0.01053, 0.1053, 1.053, and 10.53 µg/mL, respectively compared to the VC. Additionally, the expression of MIP-1α significantly (p≤0.001) reduced by 13.35%, 22.96%, 25.11%, 22.71%, and 21.83% at 0.00001053, 0.0001053, 0.01053, 1.053, and 10.53 µg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation significantly down-regulated the MIP-1α expression by 10.75%, 9.53%, 9.57%, and 10.87% at 0.00001053, 0.01053, 0.1053, and 1.053 µg/mL, respectively compared to the untreated test formulation.
The results showed the IFN-γ expression was also significantly (p≤0.001) reduced by 39.16%, 40.34%, 27.57%, 26.06%, 42.53%, and 48.91% at 0.0001053, 0.001053, 0.01053, 0.1053, 1.053, and 10.53 µg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation showed better suppression of IFN-γ expression by 15.46%, 13.78%, 17.14%, and 13.11% at concentrations 0.001053, 0.01053, 0.1053, and 10.53 µg/mL, respectively compared to the untreated test formulation. Overall, the results demonstrated that The Trivedi Effect®- Biofield Energy Healing (TEBEH) has the capacity to potentiate the immunomodulatory and anti-inflammatory activity of the test formulation. Biofield Energy may also be useful in organ transplants, anti-aging, and stress management by improving overall health and quality of life.

**Category:** Biochemistry

[17287] **viXra:1701.0341 [pdf]**
*submitted on 2017-01-10 02:57:03*

**Authors:** Alexey V. Komkov

**Comments:** 2 Pages.

This work contains certificates for Van der Waerden numbers, found using a SAT solver. These certificates establish the best currently known lower bounds for the Van der Waerden numbers W(7, 3), W(8, 3), W(9, 3), W(10, 3), and W(11, 3).
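A lower-bound certificate for W(r, k) is simply an r-coloring of {1, …, n} containing no monochromatic k-term arithmetic progression, which proves W(r, k) > n. Verifying such a certificate is straightforward; a minimal sketch (the toy coloring below is a standard witness related to W(2, 3) = 9, not one of the paper's certificates):

```python
def is_valid_certificate(coloring, k):
    """Check that a coloring (list of color ids over positions 1..n)
    contains no monochromatic arithmetic progression of length k.
    If it passes, it certifies W(r, k) > n for r = #colors used."""
    n = len(coloring)
    for start in range(n):
        # largest usable common difference for a k-term AP from `start`
        for step in range(1, (n - start - 1) // (k - 1) + 1):
            idxs = [start + i * step for i in range(k)]
            if all(coloring[j] == coloring[idxs[0]] for j in idxs):
                return False  # found a monochromatic k-term AP
    return True

# Known AP-3-free 2-coloring of {1..8}; consistent with W(2, 3) = 9.
cert = [0, 0, 1, 1, 0, 0, 1, 1]
print(is_valid_certificate(cert, 3))  # True
```

SAT solvers find such colorings by encoding "position i has color c" as Boolean variables and forbidding each monochromatic progression as a clause; the checker above is the independent verification step.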

**Category:** Combinatorics and Graph Theory

[17286] **viXra:1701.0340 [pdf]**
*submitted on 2017-01-10 03:55:52*

**Authors:** Ahsan Amin

**Comments:** 32 Pages.

Our goal is to give a very simple, effective and intuitive algorithm for the solution of the initial value problem of ODEs of 1st and arbitrary higher order with general i.e. constant, variable or nonlinear coefficients, and of systems of these ordinary differential equations. We find an expansion of the differential equation/function to get an infinite series containing iterated integrals evaluated solely at initial values of the dependent variables in the ordinary differential equation. Our series represents the true series expansion of the closed form solution of the ordinary differential equation. The method can also be used easily for general 2nd and higher order ordinary differential equations. We explain with examples the steps to the solution of initial value problems of 1st order ordinary differential equations and then follow with more examples for linear and non-linear 2nd order and third order ODEs. We have given Mathematica code for the solution of nth order ordinary differential equations and show with examples how to use the general code on 1st, 2nd and third order ordinary differential equations. We also give Mathematica code for the solution of systems of a large number of general ordinary differential equations, each of arbitrary order. We give an example showing the ease with which our method calculates the solution of a system of three non-linear second order ordinary differential equations.
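The expansion into iterated integrals evaluated at the initial values is in the same family as classical Picard iteration, where each pass integrates the previous approximation from the initial condition. A minimal numeric sketch for y′ = y, y(0) = 1 (function names are illustrative, not the author's Mathematica code); the resulting coefficients converge to the 1/n! Taylor coefficients of eˣ:

```python
def integrate_poly(c):
    """Antiderivative of a polynomial given as coefficients c[i] of x**i,
    with zero constant term."""
    return [0.0] + [ci / (i + 1) for i, ci in enumerate(c)]

def picard(y0, order):
    """Picard iteration for y' = y, y(0) = y0:
    y_{n+1}(x) = y0 + integral_0^x y_n(t) dt.
    Returns Taylor coefficients of the approximation up to x**order."""
    y = [float(y0)]
    for _ in range(order):
        y = integrate_poly(y)
        y[0] = float(y0)        # re-impose the initial condition
        y = y[:order + 1]       # truncate to the working order
    return y

print(picard(1, 5))  # coefficients approach 1, 1, 1/2, 1/6, 1/24, 1/120
```

Each iteration fixes one more Taylor coefficient exactly, which mirrors how a series of iterated integrals at the initial values builds the closed-form solution term by term.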

**Category:** General Mathematics

[17285] **viXra:1701.0339 [pdf]**
*submitted on 2017-01-09 15:20:14*

**Authors:** Desire Francine Gobato

**Comments:** 95 Pages. Portuguese.

Flight safety is one of the main concerns of current aviation, and through prevention countless incidents and accidents are avoided. The objective of this work is to demonstrate, through norms, standards and documents, that a safe way of performing acrobatic maneuvers exists. The focus of the work is operational safety in acrobatic maneuvers involving current acrobatic aircraft inside Brazilian airspace. A bibliographical survey was carried out through virtual libraries, covering books related to flight safety and acrobatic flight, as well as the norms and standards of ANAC. It was thus verified that there are several ways of performing a flight with acrobatics safely, and for that purpose several norms were developed that must be followed in aerial demonstrations, air shows, or any event in which maneuvers or acrobatics are executed in a risky way, where the aircraft is exposed to its own limits.

**Category:** Classical Physics

[17284] **viXra:1701.0338 [pdf]**
*submitted on 2017-01-09 10:48:26*

**Authors:** Igor Hrncic

**Comments:** 4 Pages.

This letter puts forward the proposition that particle probability functions must vanish outside causal horizons. When applied to accelerated objects in a universe with a particle horizon, the conclusion is that there is a minimal acceleration with which a particle can accelerate, thus possibly providing a theoretical reason for MOND.

**Category:** Astrophysics

[17283] **viXra:1701.0337 [pdf]**
*submitted on 2017-01-09 12:02:34*

**Authors:** Desire Francine Gobato, Ricardo Gobato

**Comments:** 1 Page. Panel presented in the THIRD INTERNATIONAL SATELLITE CONFERENCE ON MATHEMATICAL METHODS IN PHYSICS, Londrina - PR (Brazil), October 21 - 26, 2013.

One of the most unusual V/STOL aircraft programs was the Avro VZ-9 “Avrocar”. Designed to be a true flying saucer, the Avrocar was one of the
few V/STOL aircraft to be developed in complete secrecy. Despite significant design changes during flight test, the Avrocar was unable to achieve
its objectives and the program was ultimately canceled after the expenditure of over $10 million (1954-61).
The concept of a lift fan driven by a turbojet engine did not die either, and lives on today as a
key component of the Lockheed X-35 Joint Strike Fighter contender. While the Avrocar was
under development, Peter Kappus of General Electric independently developed a lift fan
propulsion system which evolved into the GE/Ryan VZ-11 (later XV-5) “Vertifan”.

**Category:** General Mathematics

[17282] **viXra:1701.0336 [pdf]**
*submitted on 2017-01-09 12:13:24*

**Authors:** Desire Francine Gobato Fedrigo, Ricardo Gobato

**Comments:** 1 Page. Portuguese. Panel presented in the XVII Physics Week of the State University of Londrina, October 22 to 26, 2012.

To begin a study of High Speed Theory, it is necessary to know that aircraft are classified as subsonic, transonic, supersonic or hypersonic. Directly related to this are the so-called pressure waves: concentric waves imprinted on the air by any object that produces sound or travels in the Earth's atmosphere; these propagate at a speed of 340 m/s, or 1224 km/h, at mean sea level. An aircraft in flight produces these waves, which form around it and move 360° around it. In this study, the nose of the aircraft and the area of greatest curvature on the extrados of the wing are considered as expansion wave formers, in order to study the shock wave, the Mach number and the critical Mach number.

**Category:** Classical Physics

[17281] **viXra:1701.0335 [pdf]**
*submitted on 2017-01-08 16:44:39*

**Authors:** Jérémy Kerneis

**Comments:** 17 Pages.

We use three postulates: P1, P2a/b and P3.
Combining P1 and P2a with "Sommerfeld's quantum rules" corresponds to the original quantum theory of Hydrogen, which produces the correct relativistic energy levels for atoms (Sommerfeld's and Dirac's theories of matter produce the same energy levels, and Schrodinger's theory produces an approximation of those energy levels). P3 can be found in Schrodinger's famous paper introducing his equation, P3 being his first assumption (a second assumption, suppressed here, is required to deduce his equation). P3 implies that the wavefunction is a solution of both Schrodinger's and Klein-Gordon's equations in the non-interacting case while, in the interacting case, it immediately implies "Sommerfeld's quantum rules": P1, P2a, and P3 then produce the correct relativistic energy levels of atoms, and we check that the required degeneracy is justified by pure deduction, without any other assumption (Schrodinger's theory only justifies one half of the degeneracy).
We observe that the introduction of an interaction in P1 is equivalent to a modification of the metric inside the wavefunction in P3, such that the equation of motion of a system can be deduced with two different methods, with or without the metric. Replacing the electromagnetic potential P2a by the suggested gravitational potential P2b, the equation of motion (deduced in two ways) is equivalent to the equation of motion of General Relativity in the low-field approximation (with accuracy 10^-6 at the surface of the Sun). We have no coordinate singularity inside the metric. Other motions can be obtained by modifying P2b; the theory is adaptable.
First of all, we discuss classical Kepler problems (the Newtonian motion of the Earth around the Sun), explain the link between Kepler's law of periods (1619) and Planck's law (1900), and observe the links between all historical models of atoms (Bohr, Sommerfeld, Pauli, Schrodinger, Dirac, Fock). This being done, we introduce P1, P2a/b, and P3 to describe electromagnetism and gravitation in the same formalism.

**Category:** Quantum Gravity and String Theory

[17280] **viXra:1701.0334 [pdf]**
*submitted on 2017-01-08 18:10:20*

**Authors:** Ricardo Gobato

**Comments:** 14 Pages. Parana Journal of Science and Education, v.2, n.3, March 10, 2016. PJSE, ISSN 2447-6153, © 2015-2016

The work is the result of a philosophical study of several passages of the Holy Bible, with regard to faith. We analyzed verses that include mustard seed parables. The study discusses the various concepts of faith as belief and faith as a form of energy. In this concept of faith as energy, we made a connection and this matter. We approach the gravitational ﬁeld using the Law of Universal Gravitation and the equation of equivalence between energy and matter not to relativistic effects. Of Scriptures, we focus on Matthew 17:20, and according to the concept of faith as a form of energy, we calculate the energy needed to raise a mountain, for the conversion of matter to energy in a mustard seed and we compare a massive iron mountain, Mount Everest and Mount Sinai. We conclude with these concepts and considerations that energy ”faith” can move a mountain.
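The comparison at the heart of the paper can be sketched with E = mc² set against the gravitational work needed to raise a mountain. All figures below (seed mass, mountain size, density) are illustrative assumptions, not the paper's exact numbers:

```python
import math

# Mass-energy of a mustard seed vs. energy to lift a mountain.
# All numeric inputs are illustrative assumptions.
c = 2.998e8          # speed of light, m/s
g = 9.81             # surface gravity, m/s^2

seed_mass = 2e-6     # ~2 mg mustard seed, in kg
E_seed = seed_mass * c**2
print(f"E = mc^2 for the seed: {E_seed:.3e} J")  # ~1.8e11 J

# Crude mountain model: a rock cone, 1 km high, 2 km base radius
rho = 2700           # rock density, kg/m^3
h, r = 1000.0, 2000.0
m_mountain = rho * math.pi * r**2 * h / 3

# Work to raise the whole mountain by 1 km against gravity
E_lift = m_mountain * g * 1000.0
print(f"Lifting energy: {E_lift:.3e} J")
print(f"Ratio E_lift / E_seed: {E_lift / E_seed:.1f}")
```

Under these assumed figures, the full mass-energy of the seed is several orders of magnitude short of lifting the modeled mountain, which is the kind of quantitative contrast the paper's calculation explores.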

**Category:** General Science and Philosophy

[17279] **viXra:1701.0333 [pdf]**
*submitted on 2017-01-08 22:27:43*

**Authors:** Andrew Beckwith

**Comments:** 6 Pages.

We are looking at what happens if the initial cosmological constant is due to the variation of the time component of the metric tensor in Pre-Planckian space-time up to the Planckian space-time initial values. Assuming an initial inflaton value, as well as employing Non-Linear Electrodynamics applied to the scale factor, the upshot is an expression for an initial inflaton value squared which supports Corda's assumptions in the 'Gravity's Breath' Electronic Journal of Theoretical Physics article. We close with an idea, to be worked out in further detail, as to density matrices and how they may relate to gravitons traversing from a Pre-Planckian to a Planckian space-time regime, an idea we will write up in far greater detail in a future publication.

**Category:** Quantum Gravity and String Theory

[17278] **viXra:1701.0332 [pdf]**
*submitted on 2017-01-08 22:39:04*

**Authors:** Hadi Oqaibi, Anas Fattouh

**Comments:** 10 Pages. International Journal of Innovative Research in Computer and Communication Engineering

Steady-state visual evoked potential (SSVEP) is a well-established paradigm of brain-computer interface (BCI) where the interaction between the user and a controlled device is achieved via brainwave activities and visual stimuli. Although SSVEP-based BCIs are known to have a high information transfer rate (ITR), wrong feedback reduces the performance of these applications. In this paper, we investigate the possibility of enhancing SSVEP-based BCI applications by incorporating the user's emotions. To this end, an SSVEP-based BCI application is designed and implemented where the user has to steer a simulated car moving through a maze to reach a target position. Using standard flickering checkerboards, the user has to select one of two commands, turn right or turn left. After each selection, a visual virtual feedback is shown and the emotional state of the user is estimated from recorded electroencephalogram (EEG) brain activities. This estimated emotion could be used to automatically confirm or cancel the selected command and therefore improve the quality of executed commands.

**Category:** Digital Signal Processing

[17277] **viXra:1701.0331 [pdf]**
*submitted on 2017-01-08 14:31:48*

**Authors:** Gerges Francis Tawdrous

**Comments:** 279 Pages.

This study is devoted to the geometries of the Tabernacle and the Great Pyramids, in addition to analyzing the church icons. The study also offers an interpretation of the Tabernacle, with some deep philosophy of dualism. The study is in the Arabic language. (Part Two)

**Category:** General Science and Philosophy

[17276] **viXra:1701.0330 [pdf]**
*submitted on 2017-01-08 16:08:20*

**Authors:** Ramzi Suleiman

**Comments:** 70 Pages.

We propose a simple, axiom-free modification of Galileo-Newton's dynamics of moving bodies, termed Information Relativity theory. We claim that the theory is capable of unifying physics. The claimed unification is supported by the fact that the same derived set of simple and beautiful transformations applies successfully to predicting and explaining many phenomena and findings in cosmology, quantum mechanics, and more. Our modification of classical physics is done simply by accounting for the travel time of information about a physical measurement, from the reference frame at which the measurement was taken to an observer in another reference frame, which is in motion relative to the first frame. This minor modification of classical physics turns out to be sufficient for unifying all the dynamics of moving bodies, regardless of their size and mass. Since the theory's transformations and predictions are expressed only in terms of observable physical entities, its testing should be simple and straightforward.
For quantum mechanics, the special version of the theory for translational inertial motion predicts and explains matter-wave duality, quantum phase transition, quantum criticality, entanglement, the diffraction of single particles in the double-slit experiment, and the quantum nature of the hydrogen atom. For cosmology, the theory constructs a relativistic quantum cosmology, which provides plausible and testable explanations of dark matter and dark energy, as well as predictions of the mass of the Higgs boson, the GZK cutoff phenomenon, the Schwarzschild radius of black holes (without interior singularity), and the timeline of ionization of chemical elements along the history of the universe.
The general version of the theory for gravitational and electrostatic fields, also detailed in the paper, is shown to be successful in predicting and explaining the strong force, quantum confinement, and asymptotic freedom.

**Category:** Relativity and Cosmology

[17275] **viXra:1701.0329 [pdf]**
*submitted on 2017-01-08 11:02:17*

**Authors:** Marius Coman

**Comments:** 4 Pages.

In this paper I make the following conjecture: For any pair of consecutive primes [p1, p2], p2 > p1 > 43, p1 and p2 having the same number of digits, there exists a prime q, 5 < q < p1, such that the number n obtained by concatenating (from left to right) q with p2, then with p1, then again with q is prime. Example: for [p1, p2] = [961748941, 961748947] there exists q = 19 such that n = 1996174894796174894119 is prime. Note that the least values of q that satisfy this conjecture for twenty consecutive pairs of consecutive primes with 9 digits are 19, 17, 107, 23, 131, 47, 83, 79, 61, 277, 163, 7, 41, 13, 181, 19, 7, 37, 29 and 23 (all twenty primes lower than 300!), the corresponding primes n obtained having 20 to 24 digits! This method appears to be a good way to obtain big primes with a high degree of ease and certainty.
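The conjecture is easy to test computationally. Below is a sketch using a deterministic Miller-Rabin test, which is valid for the 20-24 digit numbers involved (the first twelve prime bases are deterministic below roughly 3.3×10²⁴); the helper names are illustrative, not the author's:

```python
def is_prime(n):
    """Deterministic Miller-Rabin for n < ~3.3e24 (first 12 prime bases)."""
    if n < 2:
        return False
    small = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is composite
    return True

def find_q(p1, p2, limit=300):
    """Least prime q with 5 < q < p1 (searched up to `limit`, per the
    abstract's observation) making int(str(q)+str(p2)+str(p1)+str(q)) prime."""
    q = 7
    while q < min(p1, limit):
        if is_prime(q) and is_prime(int(f"{q}{p2}{p1}{q}")):
            return q
        q += 2
    return None

print(find_q(961748941, 961748947))  # the abstract reports q = 19 for this pair
```

The concatenation uses string formatting rather than arithmetic, matching the left-to-right construction described in the conjecture.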

**Category:** Number Theory

[17274] **viXra:1701.0328 [pdf]**
*submitted on 2017-01-07 23:13:35*

**Authors:** Roger Granet

**Comments:** 3 Pages.

The Russell Paradox (1) considers the set, R, of all sets that are not members of themselves. On its surface, it seems that R belongs to itself only if it doesn't belong to itself. This is where the paradox comes from. Here, a solution is proposed that is similar to Russell's method based on his theory of types (1,2) but is instead based on the definition of why things exist as described in previous work (3). In that work, it was proposed that a thing exists if it is a grouping defining what is contained within. A corollary is that a thing, such as a set, does not exist until what is contained within is defined. A second corollary is that after a grouping defining what is contained within is present, and the thing exists, if one then alters the definition of what is contained within, the first existent entity is destroyed and a different existent entity is created. Based on this, set R of the Russell Paradox does not even exist until after the list of the elements it contains (e.g., the list of all sets that aren't members of themselves) is defined. Once this list of elements is completely defined, R then springs into existence. Therefore, because it doesn't exist until after its list of elements is defined, R obviously can't be in this list of elements and, thus, cannot be a member of itself; so, the paradox is resolved. Additionally, one can't then put R back into its list of elements after the fact, because if this were done, it would be a different list of elements, and it would no longer be the original set R, but some new set. This same type of reasoning is then applied to the Gödel Incompleteness Theorem, which roughly states that there will always be some statements within a formal system of arithmetic (system P) that are true but that can't be proven to be true. Briefly, this reasoning suggests that arguments such as the Gödel sentence and diagonalization arguments confuse references to future, not-yet-existent statements with a current and existent statement saying that the future statements are unprovable. Current and existent statements are different existent entities than future, not-yet-existent statements and should not be conflated. In conclusion, a new resolution of the Russell Paradox and some issues with the Gödel Incompleteness Theorem are described.

**Category:** Set Theory and Logic

[17273] **viXra:1701.0327 [pdf]**
*submitted on 2017-01-08 01:29:36*

**Authors:** Sergey G. Fedosin

**Comments:** 69 pages. Journal of Fundamental and Applied Sciences, Vol. 9, No. 1, pp. 411-467 (2017). http://dx.doi.org/10.4314/jfas.v9i1.25

It is shown that the angular frequency of the photon is nothing else than the averaged angular frequency of revolution of the electron cloud’s center during emission and quantum transition between two energy levels in an atom. On the assumption that the photon consists of charged particles of the vacuum field (praons), a substantial model of the photon is constructed. Praons move inside the photon in the same way as they must move in the electromagnetic field of the emitting electron, while an internal periodic wave structure is formed inside the photon. The properties of praons, including their mass, charge and speed, are derived in the framework of the theory of infinite nesting of matter. At the same time, praons are part of nucleons and leptons, just as nucleons are the basis of neutron stars and the matter of ordinary stars and planets. With the help of the Lorentz transformations, which correlate the laboratory reference frame and the reference frame co-moving with the praons inside the photon, transformation of the electromagnetic field components is performed. This allows us to calculate the longitudinal magnetic field and magnetic dipole moment of the photon, and to understand the relation between the transverse components of the electric and magnetic fields, connected by a coefficient in the form of the speed of light. The total rest mass of the particles making up the photon is found; it turns out to be inversely proportional to the nuclear charge number of the hydrogen-like atom which emits the photon. In the presented picture, the photon composed of praons moves at a speed less than the speed of light, and it loses the right to be called an elementary particle due to its complex structure.
**Category:** Classical Physics

[17272] **viXra:1701.0326 [pdf]**
*submitted on 2017-01-08 03:54:31*

**Authors:** Radhakrishnamurty Padyala

**Comments:** 6 Pages.

The concept of ‘thermal heating efficiency’, G, considered as a dual of the Carnot efficiency, offers a suitable method to test the validity of the second law of thermodynamics. This concept is claimed to offer many practical (and therefore experimentally testable) advantages, specifically economy in heating houses and cooking, among others. For example, if one unit of fuel when burnt inside the house gives Q joules of heat, the thermodynamic method based on this concept offers as much as 10 Q joules for the same one unit of fuel, giving a 10-fold economy in heating houses. We show in this article that the claimed economy is a myth and that we can get no more heat into the house using this method than we get by burning the fuel inside the house. We propose the concept of thermal heating efficiency as a suitable method to test the validity of the second law of thermodynamics.
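The ten-fold figure mentioned above matches the ideal (Carnot-limit) coefficient of performance of a heat pump, COP = T_hot / (T_hot − T_cold); a minimal sketch with illustrative temperatures:

```python
def carnot_cop_heating(t_hot_k, t_cold_k):
    """Ideal (Carnot) coefficient of performance of a heat pump heating a
    house at t_hot_k while drawing heat from outside air at t_cold_k.
    Temperatures in kelvin; t_hot_k must exceed t_cold_k."""
    return t_hot_k / (t_hot_k - t_cold_k)

# House at 300 K (~27 C), outdoors at 270 K (~-3 C): ideal COP = 10,
# the kind of ten-fold figure the abstract refers to.
print(carnot_cop_heating(300.0, 270.0))  # 10.0
```

The COP is an upper bound for an ideal reversible cycle; real heat pumps fall well short of it, which is part of why the claimed economy is contested in the paper.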

**Category:** Thermodynamics and Energy

[17271] **viXra:1701.0325 [pdf]**
*submitted on 2017-01-08 03:55:27*

**Authors:** Ilija Barukčić

**Comments:** 29 Pages. Copyright © 2017 by Ilija Barukčić, Jever, Germany. Published by:

Epstein-Barr Virus (EBV) has been widely proposed as a candidate virus for the viral etiology of human breast cancer, still the most common malignancy affecting females worldwide. Due to possible problems with PCR analyses (contamination), the lack of uniformity in study design and the insufficient mathematical/statistical methods used by different authors, the findings of several EBV polymerase chain reaction (PCR) studies contradict each other, making it difficult to determine an EBV etiology for breast cancer. In the present study, we re-investigate some of the known studies. Placed in context, our results support the hypothesis that EBV is a cause of human breast cancer.

**Category:** Statistics

[17270] **viXra:1701.0324 [pdf]**
*submitted on 2017-01-08 04:19:38*

**Authors:** Adam Chmaj

**Comments:** 6 Pages. Original 2014 version of the result is posted here. Some minor corrections are left to the reader.

The existence of traveling waves for the fractional Burgers equation is established, using an operator splitting trick. This solves a 1998 open problem.

**Category:** Functions and Analysis

[17269] **viXra:1701.0322 [pdf]**
*submitted on 2017-01-07 10:21:42*

**Authors:** Adrian Ferent

**Comments:** 51 Pages. © 2016 Adrian Ferent

“Dark Energy is Gravitational Waves”
Adrian Ferent
“The momentum of the graviton is negative: p = −m × v”
Adrian Ferent
“Because the momentum is negative, the relativistic mass −m of the graviton is negative!”
Adrian Ferent
The causes of the faster expansion of the Planck universe:
- The Ferent universe, with its supermassive black holes, is speeding up the expansion of the Planck universe.
- The negative pressure created by gravitons inside the Planck universe is speeding up its expansion.
- It is not dark energy that speeds up the expansion of the Planck universe, because dark energy does not exist.
“Dark Energy is Gravitons”
Adrian Ferent
The title is ‘Dark Energy is Gravitational Waves, Dark Energy is Gravitons’ because my gravitation theory is completely different from Einstein’s gravitation theory, in which gravitational waves are ripples in the curvature of spacetime that propagate at the speed of light.

**Category:** Quantum Gravity and String Theory

[17268] **viXra:1701.0321 [pdf]**
*submitted on 2017-01-07 12:02:26*

**Authors:** Ricardo Gobato, Alekssander Gobato, Desire Francine G. Fedrigo

**Comments:** 1 Page. Portuguese. Panel presented at the XVIII Physics Week of the State University of Londrina, from September 9 to 13, 2013.

Argemone mexicana L., popularly known as the Mexican poppy, thorny Mexican poppy, thistle or cardo santo, is a species of poppy found in Mexico and widespread in many parts of the world. It is an extremely resistant plant, tolerant of drought and poor soils, and is often the only vegetation cover present on the soil. It has bright yellow latex, and although toxic to grazing animals, it is rarely ingested. It belongs to the Papaveraceae, informally known as the poppy family, an important ethno-pharmacological family of 44 genera and about 760 species of flowering plants. The plant is the source of several types of chemical compounds, such as flavonoids, although alkaloids are the most commonly found. In addition to pharmaceutical efficacy, certain parts of the plant also show toxic effects. It is used in different parts of the world for the treatment of various diseases including tumors, warts, skin diseases, rheumatism, inflammation, jaundice, leprosy, microbial infections and malaria, and as a larvicide against Aedes aegypti, the dengue vector.

**Category:** Physics of Biology

[17267] **viXra:1701.0320 [pdf]**
*submitted on 2017-01-07 12:05:30*

**Authors:** Marius Coman

**Comments:** 3 Pages.

In this paper I make the following conjecture: For any pair of twin primes [p, p + 2], p > 5, there exists a prime q, 5 < q < p, such that the number n obtained by concatenating (from left to right) q with p + 2, then with p, then again with q is prime. Example: for [p, p + 2] = [18408287, 18408289] there exists q = 37 such that n = 37184082891840828737 is prime. Note that the least values of q that satisfy this conjecture for twenty consecutive pairs of twins with 8 digits are 19, 7, 19, 11, 23, 23, 47, 7, 47, 17, 13, 17, 17, 37, 83, 19, 13, 13, 59 and 97 (all twenty primes lower than 100!), the corresponding primes n obtained having 20 digits! This method appears to be a good way to obtain big primes with a high degree of ease and certainty.
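The conjecture is easy to probe computationally. A minimal Python sketch (the function names are ours) using a deterministic Miller-Rabin test, valid for all n below roughly 3.3 × 10^24, reproducing the example quoted above:

```python
def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin, valid for all n < 3.3e24."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:  # these witness bases suffice below 3.3e24
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def concat_is_prime(p: int, q: int) -> bool:
    """Is the concatenation q | p+2 | p | q (left to right) prime?"""
    return is_prime(int(f"{q}{p + 2}{p}{q}"))

# The paper's example: twins [18408287, 18408289] with q = 37.
print(concat_is_prime(18408287, 37))  # True, per the paper's example
```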

**Category:** Number Theory

[17266] **viXra:1701.0319 [pdf]**
*submitted on 2017-01-07 04:17:29*

**Authors:** Espen Gaarder Haug

**Comments:** 6 Pages.

In this paper we look at the ultimate limits of a photon propulsion rocket. The maximum velocity of a photon propulsion rocket is just below the speed of light and is a function of the reduced Compton wavelength of the heaviest subatomic particles in the rocket. We are essentially combining the relativistic rocket equation with Haug’s new insight into the maximum velocity of anything with rest mass; see [1, 2, 3].
An interesting new finding is that in order to accelerate any subatomic “fundamental” particle to its maximum velocity, the particle rocket basically needs two Planck masses of initial load. This might sound illogical until one understands that subatomic particles with different masses have different maximum velocities. This result can be generalized to large rockets and gives us the maximum theoretical velocity of a fully efficient and ideal rocket. Further, no additional fuel is needed to accelerate a Planck mass particle to its maximum velocity; this too might sound absurd, but it has a very simple and logical explanation, given in this paper.
This paper is Classified!
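The two-Planck-mass claim can be checked numerically under two assumptions of ours: the standard photon-rocket relation v/c = (R² − 1)/(R² + 1) for initial-to-final mass ratio R, and a maximum velocity of the form v_max = c·√(1 − (m/m_P)²) for a particle of rest mass m (our reading of the cited insight, not the paper's own derivation). A Python sketch for an electron payload:

```python
M_PLANCK = 2.176434e-8        # Planck mass, kg (CODATA)
M_ELECTRON = 9.1093837015e-31  # electron rest mass, kg

def photon_rocket_deficit(mass_ratio: float) -> float:
    """1 - v/c for an ideal photon rocket with mass ratio R = m0/m1."""
    return 2.0 / (mass_ratio**2 + 1.0)

def max_velocity_deficit(m: float) -> float:
    """1 - v_max/c ~ (m/m_P)^2 / 2 for m << m_P (series expansion)."""
    x = m / M_PLANCK
    return x * x / 2.0

# Payload = one electron, fuel = two Planck masses (the paper's claim).
R = (M_ELECTRON + 2.0 * M_PLANCK) / M_ELECTRON
print(photon_rocket_deficit(R))      # ~8.8e-46
print(max_velocity_deficit(M_ELECTRON))  # ~8.8e-46: the two agree
```

The agreement is no accident: with R ≈ 2 m_P/m, both deficits reduce to m²/(2 m_P²).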

**Category:** Relativity and Cosmology

[7392] **viXra:1701.0520 [pdf]**
*replaced on 2017-01-18 00:39:26*

**Authors:** Yibing Qiu

**Comments:** 1 Page.

Abstract: shows a viewpoint with regard to the mechanism connecting black holes and the disks of galaxies.

**Category:** Astrophysics

[7391] **viXra:1701.0498 [pdf]**
*replaced on 2017-01-19 10:50:18*

**Authors:** Sylwester Kornowski

**Comments:** 4 Pages.

The Scale-Symmetric Theory (SST) shows that quantum entanglement fixes the speed of light in “vacuum”, c, in relation to its source or to a last-interaction object (which can be a detector). As a result, the spatial distances to galaxies differ from the time distances (the light travel time); this is the duality of relativity. The duality of relativity leads to a running time Hubble constant that creates an illusion of accelerating expansion of the Universe. According to SST, for the nearest Universe the time Hubble constant is 70.52 km/s/Mpc. SST gives a mean time Hubble constant of 64.01; this should be the mean observed Hubble constant when we apply General Relativity (GR) to the whole observed Universe. If we neglect some part of the distant Universe, then the GR/observed time Hubble constant should lie in the interval [64.01, 70.52]. We emphasize, however, that the real mean spatial Hubble constant calculated within SST is 45.24. This leads to an age of the Universe of 21.614 +- 0.096 Gyr, but the time distance to the most distant observed objects cannot be longer than 13.866 +- 0.096 Gyr. SST shows that the evolution of galaxies accelerated about 13.1 Gyr ago, which leads to the illusion that cosmic objects are not older than 13.1 Gyr.
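The quoted ages are consistent with simply taking the Hubble time 1/H0 for each quoted constant in km/s/Mpc (our check of the arithmetic, not spelled out in the abstract). A Python sketch of the unit conversion:

```python
KM_PER_MPC = 3.0856775814913673e19  # kilometres in one megaparsec
SEC_PER_GYR = 3.1556952e16          # seconds in one gigayear (Julian years)

def hubble_time_gyr(h0_km_s_mpc: float) -> float:
    """Hubble time 1/H0 in Gyr, for H0 given in km/s/Mpc."""
    h0_per_sec = h0_km_s_mpc / KM_PER_MPC  # convert H0 to 1/s
    return 1.0 / h0_per_sec / SEC_PER_GYR

print(hubble_time_gyr(45.24))  # ~21.61 Gyr, matching the quoted age
print(hubble_time_gyr(70.52))  # ~13.87 Gyr, matching the quoted time distance
```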

**Category:** Quantum Gravity and String Theory

[7390] **viXra:1701.0498 [pdf]**
*replaced on 2017-01-18 05:30:33*

**Authors:** Sylwester Kornowski

**Comments:** 3 Pages.

The Scale-Symmetric Theory (SST) shows that quantum entanglement fixes the speed of light in “vacuum”, c, in relation to its source or to a last-interaction object (which can be a detector). As a result, the spatial distances to galaxies differ from the time distances (the light travel time); this is the duality of relativity. The duality of relativity leads to a running time Hubble constant that creates an illusion of accelerating expansion of the Universe. According to SST, for the nearest Universe the time Hubble constant is 70.52 km/s/Mpc. SST gives a mean time Hubble constant of 64.01; this should be the mean observed Hubble constant when we apply General Relativity (GR) to the whole observed Universe. If we neglect some part of the distant Universe, then the GR/observed time Hubble constant should lie in the interval [64.01, 70.52]. We emphasize, however, that the real mean spatial Hubble constant calculated within SST is 45.24. This leads to an age of the Universe of 21.614 +- 0.096 Gyr, but the time distance to the most distant observed objects cannot be longer than 13.866 +- 0.096 Gyr. SST shows that the evolution of galaxies accelerated about 13.1 Gyr ago, which leads to the illusion that cosmic objects are not older than 13.1 Gyr.

**Category:** Quantum Gravity and String Theory

[7389] **viXra:1701.0497 [pdf]**
*replaced on 2017-01-15 05:04:48*

**Authors:** Espen Gaarder Haug

**Comments:** 7 Pages.

In this paper we combine Heisenberg’s uncertainty principle with Haug’s new insight into the maximum velocity of anything with rest mass; see [1, 2, 3]. This leads to a new and exact boundary condition on Heisenberg’s uncertainty principle: the uncertainty in position at the potential maximum momentum for subatomic particles, as derived from the maximum velocity, is half the Planck length.
Perhaps Einstein was right after all when he stated, “God does not play dice.” Or at least the dice may have a stricter boundary on possible outcomes than previously thought.
We also show how this new boundary condition seems to make big G consistent with Heisenberg’s uncertainty principle. We obtain a mathematical expression for big G that is fully in line with empirical observations.
Hopefully our analysis can be a small step towards better understanding Heisenberg’s uncertainty principle and its interpretations and, by extension, the broader implications for the quantum world.
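The half-Planck-length figure follows from inserting the Planck momentum p = m_P·c into Δx·Δp ≥ ħ/2, since ħ/(2 m_P c) = l_P/2 holds identically; this is our reading of the abstract, not a statement of the paper's own derivation. A numerical check in Python:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0       # speed of light, m/s

planck_mass = math.sqrt(HBAR * C / G)       # ~2.18e-8 kg
planck_length = math.sqrt(HBAR * G / C**3)  # ~1.62e-35 m

# Minimum position uncertainty at the Planck momentum p = m_P * c:
dx_min = HBAR / (2.0 * planck_mass * C)
print(dx_min / (planck_length / 2.0))  # ~1.0: exactly half the Planck length
```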

**Category:** Quantum Physics

[7388] **viXra:1701.0491 [pdf]**
*replaced on 2017-01-17 04:56:58*

**Authors:** Luke Kenneth Casson Leighton

**Comments:** 7 Pages.

This paper continues prior work based on the insight that Rishon ultracoloured triplets (electron, up, neutrino, in left and right forms) might simply be elliptically-polarised "mobius light". The important first step is therefore to identify the twelve (24, including both left- and right-handed forms) phases and the correct topology, and then to perform transformations (mirroring, rotation, time-reversal) to double-check which "particles" are identical to each other and which are anti-particle opposites.
Ultimately, a brute-force systematic analysis will allow a formal mathematical group to be dropped seamlessly on top of the twelve (24) particles.

**Category:** High Energy Particle Physics

[7387] **viXra:1701.0491 [pdf]**
*replaced on 2017-01-16 03:30:53*

**Authors:** Luke Kenneth Casson Leighton

**Comments:** 6 Pages.

This paper continues prior work based on the insight that Rishon ultracoloured triplets (electron, up, neutrino, in left and right forms) might simply be elliptically-polarised "mobius light". The important first step is therefore to identify the twelve (24, including both left- and right-handed forms) phases and the correct topology, and then to perform transformations (mirroring, rotation, time-reversal) to double-check which "particles" are identical to each other and which are anti-particle opposites.
Ultimately, a brute-force systematic analysis will allow a formal mathematical group to be dropped seamlessly on top of the twelve (24) particles.

**Category:** High Energy Particle Physics

[7386] **viXra:1701.0465 [pdf]**
*replaced on 2017-01-14 01:44:02*

**Authors:** Sylwester Kornowski

**Comments:** 10 Pages.

Here, applying the Scale-Symmetric Theory (SST), we answer the following question: what is the origin of cosmic reionization? The scenario presented here differs radically from the one described in mainstream cosmology. Most important are the masses of massive galaxies/quasars and the decays of large cosmic structures due to the stepwise decays of the earliest photons (such decays of photons mimic an acceleration of the expansion of the Universe). The highest rate of reionization of hydrogen should occur at redshift z(H,start) = 11.18, whereas complete reionization should occur at z(H,end) = 7.10. For the reionization of helium we obtain, respectively, z(He,start) = 3.63 and z(He,end) = 2.70. The theoretical results are consistent with observational data. This leads to the conclusion that the General Theory of Relativity (GR) correctly describes the regions of reionization. We show that the number and energy of the created photons were sufficient to ionize the intergalactic medium. We also answer a second very important question: why did supermassive black holes appear so quickly? We show here as well that there was an acceleration of the evolution of clusters of galaxies (not an acceleration of the expansion of spacetime!) from about 13.8 down to 13 Gyr ago and from 6.5 down to 5 Gyr ago.

**Category:** Quantum Gravity and String Theory

[7385] **viXra:1701.0465 [pdf]**
*replaced on 2017-01-13 04:32:25*

**Authors:** Sylwester Kornowski

**Comments:** 8 Pages.

Here, applying the Scale-Symmetric Theory (SST), we answer the following question: what is the origin of cosmic reionization? The scenario presented here differs radically from the one described in mainstream cosmology. Most important are the masses of massive galaxies/quasars and the decays of large cosmic structures due to the stepwise decays of the earliest photons (such decays of photons mimic an acceleration of the expansion of the Universe). The highest rate of reionization of hydrogen should occur at redshift z(H,max) = 11.18, whereas complete reionization should occur at z(H,end) = 7.10. For the reionization of helium we obtain, respectively, z(He,max) = 3.63 and z(He,end) = 2.70. The theoretical results are consistent with observational data. This leads to the conclusion that the General Theory of Relativity (GR) correctly describes the regions of reionization. We show that the number and energy of the created photons were sufficient to ionize the intergalactic medium. We also answer a second very important question: why did supermassive black holes appear so quickly? We show here as well that there was an acceleration of the evolution of clusters of galaxies (not an acceleration of the expansion of spacetime!) from about 13.8 down to 13 Gyr ago and from 6.5 down to 5 Gyr ago.

**Category:** Quantum Gravity and String Theory

[7384] **viXra:1701.0398 [pdf]**
*replaced on 2017-01-10 21:35:24*

**Authors:** Yibing Qiu

**Comments:** 1 Page.

Abstract: shows a viewpoint regarding black holes, pulsars and neutron stars.

**Category:** Astrophysics

[7383] **viXra:1701.0345 [pdf]**
*replaced on 2017-01-10 22:26:14*

**Authors:** Zhang ChengGang

**Comments:** 2 Pages.

The time-independent Schrödinger equation is derived mathematically and physically.

**Category:** Quantum Physics

[7382] **viXra:1701.0345 [pdf]**
*replaced on 2017-01-10 08:23:13*

**Authors:** Zhang ChengGang

**Comments:** 2 Pages.

The time-independent Schrödinger equation is derived mathematically and physically.

**Category:** Quantum Physics

[7381] **viXra:1701.0338 [pdf]**
*replaced on 2017-01-12 08:54:27*

**Authors:** Igor Hrncic

**Comments:** 5 Pages.

This letter puts forward the proposition that particle probability functions must vanish outside causal horizons. When applied to accelerated objects in a universe with a particle horizon, the conclusion is that there is a minimal acceleration with which a particle can accelerate, possibly providing a theoretical reason for MOND.

**Category:** Astrophysics

[7380] **viXra:1701.0338 [pdf]**
*replaced on 2017-01-09 13:47:17*

**Authors:** Igor Hrncic

**Comments:** 5 Pages.

This letter puts forward the proposition that particle probability functions must vanish outside causal horizons. When applied to accelerated objects in a universe with a particle horizon, the conclusion is that there is a minimal acceleration with which a particle can accelerate, possibly providing a theoretical reason for MOND.

**Category:** Astrophysics

[7379] **viXra:1701.0319 [pdf]**
*replaced on 2017-01-11 05:41:16*

**Authors:** Espen Gaarder Haug

**Comments:** 7 Pages.

In this paper we look at the ultimate limits of a photon propulsion rocket. The maximum velocity of a photon propulsion rocket is just below the speed of light and is a function of the reduced Compton wavelength of the heaviest subatomic particles in the rocket. We are essentially combining the relativistic rocket equation with Haug’s new insight into the maximum velocity of anything with rest mass; see [1, 2, 3]. An interesting new finding is that in order to accelerate any subatomic “fundamental” particle to its maximum velocity, the particle rocket basically needs two Planck masses of initial load. This might sound illogical until one understands that subatomic particles with different masses have different maximum velocities. This result can be generalized to large rockets and gives us the maximum theoretical velocity of a fully efficient and ideal rocket. Further, no additional fuel is needed to accelerate a Planck mass particle to its maximum velocity; this too might sound absurd, but it has a very simple and logical explanation, given in this paper.

**Category:** Relativity and Cosmology

[7378] **viXra:1701.0319 [pdf]**
*replaced on 2017-01-10 17:28:19*

**Authors:** Espen Gaarder Haug

**Comments:** 7 Pages.

In this paper we look at the ultimate limits of a photon propulsion rocket. The maximum velocity of a photon propulsion rocket is just below the speed of light and is a function of the reduced Compton wavelength of the heaviest subatomic particles in the rocket. We are essentially combining the relativistic rocket equation with Haug’s new insight into the maximum velocity of anything with rest mass; see [1, 2, 3]. An interesting new finding is that in order to accelerate any subatomic “fundamental” particle to its maximum velocity, the particle rocket basically needs two Planck masses of initial load. This might sound illogical until one understands that subatomic particles with different masses have different maximum velocities. This result can be generalized to large rockets and gives us the maximum theoretical velocity of a fully efficient and ideal rocket. Further, no additional fuel is needed to accelerate a Planck mass particle to its maximum velocity; this too might sound absurd, but it has a very simple and logical explanation, given in this paper. This paper is Classified!

**Category:** Relativity and Cosmology