[171] **viXra:1301.0197 [pdf]**
*submitted on 2013-01-31 20:00:46*

**Authors:** Roger N. Weller

**Comments:** 10 Pages.

A closer review of particle decay schemes was undertaken in order to deduce the structure of quarks, leptons, and baryons. For simplicity only the u-, d-, and s-quarks were considered along with the most common particles. In spite of some ambiguities, many interesting results emerged: multiple particles with the same mass, a genetic relationship between leptons and baryons, and insight into the nature of u-, d-, s-quarks. Results from this study suggest the need for major modifications in the Standard Model.

**Category:** High Energy Particle Physics

[170] **viXra:1301.0196 [pdf]**
*submitted on 2013-01-31 10:16:28*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 1 Page.

Scientific definitions of "peer review" and "refereed journal" are given so that the reader can clarify the real meaning of these terms and their real purposes.

**Category:** Social Science

[169] **viXra:1301.0195 [pdf]**
*submitted on 2013-01-31 03:09:33*

**Authors:** Marius Coman

**Comments:** 4 Pages. I discovered some of these polynomials myself and submitted a few to OEIS; I know the other ones from articles available on the Internet.

A simple list of such known polynomials, indexed by the value of their discriminants, containing no analysis but introducing the notion of a "root prime generating polynomial".

**Category:** Number Theory

[168] **viXra:1301.0194 [pdf]**
*submitted on 2013-01-31 03:37:46*

**Authors:** Pandey Nitesh Vinodbhai

**Comments:** 29 Pages.

Parasites often alter the behavior of their host to facilitate transmission. Sexually transmitted infections in humans may offer an opportunity to explore this area further, as the causative agents are under rigorous selection pressure to evolve traits such as host sexual behavior manipulation. Very recently, sexually transmitted pathogens like HIV and Herpes simplex have been speculated to induce such changes in the human host. HPV, the leading cause of cervical and genital cancers, may be one of the best candidates for studying such manipulations in humans. Limited modes of transmission, and cultural constraints in orthodox societies where sexuality is still a taboo, may put the virus under immense selection pressure to manipulate host sexual behavior, and the large number of mutational variants available within the host may actually support the entire process. HPV spreading through oral sex has recently been confirmed as the major culprit behind rising oral cancer cases in the Western world. HPV has also been linked to breast and prostate cancer. HPV has evolved to infect even neurons, as it has been detected in the majority of retinoblastoma cases in India. Infection of the brain provides ample opportunity to manipulate neuronal circuits that may influence sexual behavior. Considering the versatility of HPV in colonizing various sites in the human body, and most recently the human brain, I hypothesize that HPV can manipulate sexual behavior in humans when the chances of transmission are very thin, even over the entire lifetime of the host.

**Category:** Mind Science

[167] **viXra:1301.0193 [pdf]**
*submitted on 2013-01-31 04:10:38*

**Authors:** Russell Bagdoo

**Comments:** 10 Pages. French version of « THE ENERGY IN VIRTUE OF THE PRINCIPLE OF COMPENSATION » on viXra

The theory of Relation is based on the existence of two structures going in opposite directions, and on a real transformation, throughout cosmological time, of the negative energy of the structure of expansion into the positive energy of the structure of condensation. The negative energy-mass, inverted by virtue of the principle of Compensation, gives a positive energy-mass. The theory explores the situation of the quantum "vacuum", which contains a minimum energy at the surface of an ocean of negative energy, fueling the positive ordinary matter above. By virtue of the principle of Compensation, the energy fluctuations of the vacuum spontaneously materialize not only into numerous virtual pairs but also into real ones. There would be, superimposed on the vacuum, a "gravicolored" field that would provide the gravitational energy density needed to separate the virtual particle pairs and materialize them. Two sorts of mass and energy would thus be connected. The process of this conversion throughout the expansion involves an irreversible arrow of time.

**Category:** Quantum Physics

[166] **viXra:1301.0192 [pdf]**
*replaced on 2013-02-03 08:45:07*

**Authors:** Russell Bagdoo

**Comments:** 7 Pages. This is the French version of «THE PHARAO/ACES MISSION AND THE ALLAIS EFFECT» on viXra.

Einstein's theory of relativity will be put to the test in a micro-gravitational environment by ACES/PHARAO. A cold-atom clock will be installed in 2013 on the exterior of the Columbus space module and will return data with far higher accuracy than is attainable under Earth's gravity. Our wish is that ACES include the Allais effect in its application domain. This effect is related to an unexplained deviation of the plane of oscillation of a pendulum during solar eclipses. Pharao would provide a unique way to search, in an eclipse reference frame, for a possible variation of the constant G. Frequency comparisons between the space clock and those available on Earth would make it possible to discover whether the gravitational potential and velocity of the ground stations are still constant, and whether the equivalence principle contains a flaw.

**Category:** Relativity and Cosmology

[165] **viXra:1301.0190 [pdf]**
*replaced on 2013-02-11 22:27:07*

**Authors:** Rodney Bartlett

**Comments:** 25 Pages.

(When I wrote each of the previous versions of this article, I was absolutely convinced it was complete. But more and more supportive details and connections keep occurring to me, both in mathematics and physics.)
A paragraph explaining the way this article is written – the great French mathematician Jules Henri Poincaré strongly favoured the use of intuition. Albert Einstein also said that the most important things in science are intuition, imagination and curiosity. He said that things like logic and mathematical equations are necessary and valuable companions to scientific endeavour, but he was sorry to see these companions taking the place of intuition and imagination. He thought science would suffer, and lose its way, as a result. Today, his fears seem to have manifested in reality. Modern science believes logic and maths are much more than valuable and necessary companions – it tends to treat them as the only path to knowledge, while tending to ridicule intuition and imagination.
The “Pioneer anomaly” compels me to propose a refinement of gravitational physics – which explains dark energy and dark matter, quantum phenomena, Kepler’s laws of planetary motion (no refinement would be necessary if Newton and Einstein could have used the data from today’s experiments and space probes). I redefine warping as 2.3 times General Relativity's value - deflection of starlight by the sun is still at 1.75 arcseconds since 57% of the light is diverted into solar wave packets (my ideas owe part of their inspiration to the MUH or Mathematical Universe Hypothesis formulated by MIT's Professor Max Tegmark).
I didn’t originally intend to write about tides, falling bodies, Earth’s orbit, and Greek philosophers. But if someone is attempting to explain the Pioneer slowdown etc. by a new interpretation of space-time warping (and this warping is what gravitation is), it’s a good idea – even an essential one – not to write solely about General Relativity and the spacecraft launched 40 years ago. Ideas from centuries ago, before General Relativity and Pioneer, must also be analysed – including those of Newton, Kepler, Galileo, Aristotle, Parmenides and Zeno. So must interpretations of the Möbius loop and figure-8 Klein bottle.
Very near its ending, the article mentions the two ways to view both infinity and reality, time travel into the past as well as the future, and the elimination of distances in space (enabling overtaking of the Pioneer and Voyager spacecraft – and leaving them in a galaxy far, far away). It actually ends by saying this article is offered as a starting point for humanity’s construction of this universe we live in (that sounds absolutely crazy and totally unscientific but, if it’s possible, try to restrain your emotional reactions and remain open-mindedly objective long enough to absorb the details).
PS The Law of Conservation of Matter-Energy is not overlooked. Using the word “create” or “produce” is simply a matter of convenience, like speaking of “sunrise” and “sunset” when we know the world rotates. And the article points out that 3 items already support the science-fiction-like idea of the electronic binary digits of 1 and 0 being the building blocks of our universe – the Kabbalah, the WMAP space probe’s data, and Einstein’s famous E=mc^2.

**Category:** Quantum Gravity and String Theory

[164] **viXra:1301.0189 [pdf]**
*submitted on 2013-01-30 13:38:30*

**Authors:** Gabor Schmera, Laszlo B. Kish

**Comments:** 5 Pages. Submitted for publication. US Navy - TAMU patent pending.

We present a new method for detecting and identifying bacteria by measuring impedance fluctuations (impedance noise) caused by ion release by the bacteria during phage infestation. This new method significantly increases the measured signal strength and reduces the negative effects of drift, material aging, surface imperfections, 1/f potential fluctuations, thermal noise, and amplifier noise.
Comparing BIPIF with another well-known method, bacteria detection by SEnsing of Phage Triggered Ion Cascades (SEPTIC), we find that the BIPIF algorithm is easier to implement, more stable and significantly more sensitive (by several orders of magnitude). We project that by using the BIPIF method detection of a single bacterium will be possible.

**Category:** Classical Physics

[163] **viXra:1301.0188 [pdf]**
*submitted on 2013-01-30 14:05:38*

**Authors:** Nige Cook

**Comments:** 3 Pages.

A previous paper (http://vixra.org/abs/1111.0111) makes some calculations from a quantum gravity theory and sketches a framework for further predictions. This paper defends in detail the lagrangian for quantum gravity, based on the theory in our earlier paper, by examining the simple physical dynamics behind general relativity and gauge theory.

**Category:** Quantum Gravity and String Theory

[162] **viXra:1301.0187 [pdf]**
*submitted on 2013-01-30 14:51:48*

**Authors:** Nige Cook

**Comments:** 3 Pages.

Maxwell’s equations of electromagnetism describe three-dimensional electric and magnetic field line divergence and curl (rank-1 tensors, or vector calculus), but were compressed by Einstein by including those rank-1 equations as components of rank-2 tensors. However, Einstein did not express the electromagnetic force in terms of a rank-2 spacetime curvature. In order to unify or even compare the equations for two forces (gravity and electromagnetism), you need first to have them expressed in terms of similarly physical descriptions: either rank-1 field lines for both, or spacetime curvature for both.

**Category:** Quantum Gravity and String Theory

[161] **viXra:1301.0186 [pdf]**
*submitted on 2013-01-30 11:29:40*

**Authors:** Russell Bagdoo

**Comments:** 7 Pages. «LA MISSION PHARAO/ACES ET L’EFFET ALLAIS» is the French version of «THE PHARAO/ACES MISSION AND THE ALLAIS EFFECT» on viXra.

Einstein's theory of relativity will be put to the test in a micro-gravitational environment by the Atomic Clock Ensemble in Space (ACES), a European Space Agency (ESA) mission developed by French scientific laboratories under the prime contractorship of the Centre National d'Études Spatiales (CNES). The Projet d'Horloge Atomique par Refroidissement d'Atomes en Orbite (PHARAO) is the central element of the European ACES mission, which consists of several atomic clocks. The cold-atom clock will be installed in 2013, for 18 months, on the exterior of the European Columbus module of the International Space Station (ISS) and will return data with far higher accuracy than is attainable under Earth's gravity. Theoretical physics, metrology and atomic clock design all stand to benefit from this partnership. Our wish is that ACES include the Allais eclipse effect in its application program. This effect is related to an exceptional and unexplained deviation of the plane of oscillation of a pendulum during solar eclipses. Pharao would provide a unique way to search, in an eclipse reference frame, for a possible variation of the constant G. Comparison between the space clock and those available on Earth, which operate on different transitions and depend in different ways on the fundamental constants, would make it possible to discover, during the eclipse time interval, whether the gravitational potential and velocity of the ground stations are still constant, and whether the equivalence principle contains a flaw.

**Category:** Relativity and Cosmology

[160] **viXra:1301.0185 [pdf]**
*submitted on 2013-01-30 10:32:49*

**Authors:** Russell Bagdoo

**Comments:** 6 Pages.

Einstein's theory of relativity will be put to the test in a micro-gravitational environment by the Atomic Clock Ensemble in Space (ACES), a European Space Agency (ESA) mission developed in cooperation with the French space agency (CNES). It is scheduled to be installed in 2013 on board the International Space Station (ISS). Space will be studied by a new kind of atomic clock (Pharao), which will yield data with accuracy much higher than what is attainable under Earth's gravitation. Theoretical physics, metrology and atomic clock design all stand to benefit from this joint venture. Our wish is that ACES include the Allais eclipse effect in its application domain. This effect is related to an unexplained deviation of the plane of oscillation of a pendulum during solar eclipses. Pharao, called a "cesium fountain clock", would provide a unique way to search for a possible variation of the constant G in an eclipse framework. Frequency comparisons between distant clocks, both space-to-ground and ground-to-ground, would allow us to discover, during the eclipse time interval, whether the gravitational potential and velocity of the ground stations are still constant, and whether the equivalence principle contains a flaw.

**Category:** Relativity and Cosmology

[159] **viXra:1301.0184 [pdf]**
*submitted on 2013-01-30 10:16:53*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 1 Page. 1 illustration

It is hypothesized that the separation of geophysics from astrophysics is arbitrary. Reasoning is provided.

**Category:** Linguistics

[158] **viXra:1301.0183 [pdf]**
*submitted on 2013-01-29 17:59:27*

**Authors:** Yu Zhou

**Comments:** 3 Pages.

Cloud computing offers the potential to help scientists access the massive computing resources often required in machine learning applications such as computer vision problems. This proposal shows which benefits can be obtained from the cloud in order to help medical image analysis users (including scientists, clinicians, and research institutes). As the security and privacy of algorithms are important to most algorithm inventors, these algorithms can be hidden in a cloud, allowing users to use them as a package without any access to see or change their internals. In other words, on the user side, users send their images to the cloud and configure the algorithm via an interface; on the cloud side, the algorithms are applied to the images and the results are returned to the user.
My proposal has two parts: (1) investigate the potential of cloud computing for computer vision problems, and (2) study the components of a proposed cloud-based framework for medical image analysis applications and develop them (depending on the length of the internship). The investigation part will involve a study of several aspects of the problem, including security, usability (for medical end users of the service), appropriate programming abstractions for vision problems, scalability, and resource requirements. In the second part of this proposal I will thoroughly study the proposed framework components and their relations, and develop them. The proposed cloud-based framework includes an integrated environment that enables scientists and clinicians to access previous and current medical image analysis algorithms through a convenient user interface, without any access to the algorithm code and procedures.

**Category:** Artificial Intelligence

[157] **viXra:1301.0182 [pdf]**
*submitted on 2013-01-29 19:13:37*

**Authors:** Sabiou Inoua

**Comments:** 30 Pages.

The standard approach to economic growth and development consists of simplifying the products and inputs of an economy to aggregate variables (GDP, labor and capital), thus overlooking the complexity of modern economies. Recently, two authors, Hausmann and Hidalgo, initiated an alternative framework in which complexity is precisely the key concept, for it is identified as the driving force behind economic development: rich countries have complex economies, and they make products that reflect this complexity. The aim of this paper is threefold. First, we discuss some conceptual and empirical limitations of the standard theories, in particular the aggregation problem they suffer from. Second, we make a succinct presentation of the complexity approach as an alternative account of economic development. Finally, we use a simple model to explain the phenomenon of divergence and the poverty trap, building on ideas developed by these authors. This model allows for more: it provides a rationale for interpreting ECI as a measure of the productive knowledge embedded in an economy (i.e. the number of capabilities it has), and its confrontation with the data will make it possible to compute, for each country, the number of capabilities compatible with its observed ECI.

[156] **viXra:1301.0180 [pdf]**
*submitted on 2013-01-29 15:41:11*

**Authors:** Russell Bagdoo

**Comments:** 10 Pages.

The theory of Relation is based on the existence of two structures going in opposite directions and, throughout cosmological time, on a real conversion of the negative energy of the structure of expansion into the positive energy of the structure of condensation. The negative energy-mass, inverted by virtue of the principle of Compensation, gives the positive energy-mass. The theory explores the situation of the quantum vacuum, which contains a minimum energy at the surface of an ocean of negative energy, fueling the ordinary positive matter above. Under the principle of Compensation, the energy fluctuations of the vacuum spontaneously materialize not only into numerous short-lived virtual particles but also into real particles. There would be a "gravicolored" field, superimposed on the vacuum, which would provide the density of gravitational energy necessary to materialize the virtual state. Two sorts of mass and energy are thus connected. The process of this transformation throughout the expansion involves an irreversible arrow of time.

**Category:** Quantum Physics

[155] **viXra:1301.0179 [pdf]**
*submitted on 2013-01-29 12:08:33*

**Authors:** Andrew Nassif

**Comments:** 3 Pages.

Chemistry requires much conceptual thinking as well as analytical action. Chemistry is a science of measurement. Like physics, it also involves many mathematical sequences. Chemistry is the bridge of science, because it is a sub-subject of every major field of science, including physics and geology. Boyle describes chemistry as the subject of major bodies. Chang describes chemistry as the study of matter and the changes it undergoes; Chang's definition is the most commonly used today. Chemistry is often thought to have been created by the Greeks (atomism); however, it was created hundreds of years before, in the way ancient Egyptians used herbs to create remedies, or used the science of proper mechanics to build the pyramids with a special chemical combination as the glue that held the bricks together. Cleopatra herself was an alchemist.

**Category:** Chemistry

[154] **viXra:1301.0178 [pdf]**
*submitted on 2013-01-29 09:42:30*

**Authors:** Andrew Nassif

**Comments:** 5 Pages.

The Stanford Encyclopedia of Philosophy recognizes Quantum Field Theory as the mathematical and conceptual framework that describes elementary particles in particle physics. This also establishes the theory as a sub-subject of both quantum physics and particle physics. Stanford University uses these facts to explain QFT as a widely discussed subject in the fields of science and mathematics. In comparison to other theories on the composition of our universe and matter itself, there is no canonical definition of QFT; this is noted in the first paragraph of section one of the Stanford Encyclopedia. So far, there are different ways to approach QFT in its relation to special relativity theory and general relativity theory, as well as in its relation to statistical and solid-state physics. Though we may not know the complete composition of the universe, we know that every living thing on this planet has a cellular structure and that we are all composed of atoms. However, the theory that those atoms combine as strands and strings in order to compose us relates more to the topic of superstring theory.

**Category:** Quantum Physics

[153] **viXra:1301.0177 [pdf]**
*submitted on 2013-01-28 14:28:07*

**Authors:** Sierra Rayne, Kaya Forest

**Comments:** 22 Pages.

Gas to aqueous phase standard state (1 atm to 1 mol/L; 298.15 K) free energies of solvation (ΔG°solv) were calculated for a range of neutral and ionic inorganic and organic compounds using various levels and combinations of Hartree-Fock and density functional theory (DFT) and composite methods (CBS-Q//B3, G4MP2, and G4) with the IEFPCM-UFF, CPCM, and SMD solvation models in Gaussian 09 (G09). For a subset of highly polar and generally polyfunctional neutral organic compounds previously identified as problematic for prior solvation models, we find significantly reduced ΔG°solv errors using the revised solvent models in G09. The use of composite methods for these compounds also substantially reduces their apparent ΔG°solv errors. In contrast, no general level of theory effects between the B3LYP/6-31+G** and G4 methods were observed on a suite of simpler neutral, anionic, and cationic molecules commonly used to benchmark solvation models. Further investigations on mono- and polyhalogenated short chain alkanes and alkenes and other possibly difficult functional groups also revealed significant ΔG°solv error reductions by increasing the level of theory from DFT to G4. Future solvent model benchmarking efforts should include high level composite method calculations to allow better discrimination of potential error sources between the levels of theory and the solvation models.

**Category:** Chemistry

[152] **viXra:1301.0176 [pdf]**
*submitted on 2013-01-28 10:26:05*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 2 Pages. 1 illustration

A definition for time is given.

**Category:** General Science and Philosophy

[151] **viXra:1301.0175 [pdf]**
*submitted on 2013-01-28 04:43:38*

**Authors:** Martin Erik Horn

**Comments:** 25 Pages. 6 Figures.

Quarks are described mathematically by (3 x 3) matrices. To include these quarkonian mathematical structures in Geometric Algebra it is helpful to restate Geometric Algebra in the mathematical language of (3 x 3) matrices. It will be shown in this paper how (3 x 3) permutation matrices can be interpreted as unit vectors. Special emphasis will be given to the definition of some wedge products which fit this algebra of (3 x 3) matrices better than the usual Geometric Algebra wedge product. And as S3 permutation symmetry is flavour symmetry, a unified flavour picture of Geometric Algebra will emerge.

**Category:** Algebra

[150] **viXra:1301.0174 [pdf]**
*replaced on 2015-12-09 12:37:40*

**Authors:** Sylwester Kornowski

**Comments:** 3 Pages.

Within the Scale-Symmetric Theory (SST) we calculate the electron and muon radii of the proton. We obtain an electron radius of the proton of 0.87673 fm, consistent with the central value obtained in experiments. The calculated muon radius of the proton is 0.84077 fm, and this result is also consistent with experimental data. The two different radii follow from the atom-like structure of protons described within SST.

**Category:** High Energy Particle Physics

[149] **viXra:1301.0173 [pdf]**
*replaced on 2013-06-17 02:39:42*

**Authors:** Guo Chenxi

**Comments:** 10 pages; 1 figure; 11 formulas

The concept of matter is examined from a plain view of nature; the identity of mass and energy is emphasized; the intrinsic property of matter is electromagnetic; and the principle of the constancy of the velocity of light is ubiquitous. From this understanding of matter, the concept of force is reconsidered, and thereby the fundamental relations in mechanics between energy, inertial mass and gravitational mass, force, and momentum are consolidated.

**Category:** Relativity and Cosmology

[148] **viXra:1301.0172 [pdf]**
*submitted on 2013-01-27 13:45:13*

**Authors:** Leonardo Rubino

**Comments:** 40 Pages.

It is well known that Restricted (or Special) Relativity is held to be a theory. Well, this is a typical example of a lack of knowledge of what one has in one's hands, and typical of official science, or system science, if you like. All this is truer today than ever, after the recent and awkward news on tachyonic (or superluminal, if you like) neutrinos between CERN and OPERA. It is not awkward for the experimental scientists who carried out that experiment, but rather for those theorists (very well known, in most cases) who welcomed and applauded that extravagant news without rejecting it, as I did, on the contrary, on all the blogs etc.

**Category:** Relativity and Cosmology

[147] **viXra:1301.0171 [pdf]**
*replaced on 2013-02-09 07:58:44*

**Authors:** sangwha Yi

**Comments:** 6 Pages.

In the general theory of relativity, using Einstein's gravitational field equation, we discover the spherical solution of quantum gravity. The important point is that this theory is different from the other quantum theories: it is derived from Einstein's field equation.

**Category:** Quantum Gravity and String Theory

[146] **viXra:1301.0167 [pdf]**
*submitted on 2013-01-27 21:57:34*

**Authors:** Robert Louis Kemp

**Comments:** 42 Pages. Copyright © 2013 - Super Principia Mathematica - The Rage to Master Conceptual & Mathematical Physics

This paper describes a new algorithm for a generalized mathematical formalism of a Spherically Symmetric Metric that describes the Euclidean, Minkowski, Einstein, or Schwarzschild metric using three (3) metric components and three (3) metric coefficients, and likewise a general algorithm composed of two (2) metric components and two (2) metric coefficients.
The paper gives a general introduction to basic mathematical concepts for the geometric description of Euclidean "flat-space" geometry and non-Euclidean "curved-space" geometry, and to the Spherically Symmetric Metric equations used to describe the causality and motion of the gravitational interaction between mass and vacuum energy space, and between mass and mass.
The paper provides a conceptual and mathematical description of the differential geometry of flat and curved space, space-time, and gravitational fields, using the "metric theory" mathematics of the Euclidean, Minkowski, Einstein, and Schwarzschild spherically symmetric metrics and geodesic line elements.

**Category:** Relativity and Cosmology

[145] **viXra:1301.0166 [pdf]**
*submitted on 2013-01-27 10:18:18*

**Authors:** Peter Hickman

**Comments:** 2 Pages.

Further to the paper [1], a more careful analysis shows that the calculation of the cosmological density ratios there is wrong; this resulted from an incorrect Lagrangian for the quaternion spinor. The Lagrangian presented in this paper is more in line with the standard formalism. With the release of the WMAP 9-year results [2], the predicted (WMAP) ratios are: dark matter 0.238 (0.236), dark energy 0.714 (0.7181) and baryonic matter 0.0476 (0.0461).

**Category:** High Energy Particle Physics

[144] **viXra:1301.0164 [pdf]**
*submitted on 2013-01-27 09:12:36*

**Authors:** Andrew Nassif, Mark Simpson, Aditya Kumar, Neil Bates, String Theory Development Research Group

**Comments:** 2 Pages.

Some of the most popular topics in the String Theory Development Facebook group, which is one of the few Facebook groups committed to doing research in the subject of theoretical physics.

**Category:** Relativity and Cosmology

[143] **viXra:1301.0163 [pdf]**
*submitted on 2013-01-27 08:04:08*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 1 Page.

It is common sense that nature does not have paradoxes; it is humans that invent them, based on our limited understanding of the natural world. This means that Olbers' Paradox is not a real paradox and can be disregarded as arbitrary using modern realizations. The reader is encouraged to find as many "astronomical paradoxes" as possible and fix them, because the majority of them are rooted in failed mathematical assumptions.

**Category:** Astrophysics

[142] **viXra:1301.0162 [pdf]**
*submitted on 2013-01-26 18:05:43*

**Authors:** David E. Rutherford

**Comments:** 4 Pages.

We introduce a law that we believe is a natural companion to the Biot-Savart Law of classical electrodynamics. The forces resulting from these two laws complement one another: the force due to the Biot-Savart Law changes the direction of the velocity of a test particle, but not its magnitude; the force due to the companion law changes the magnitude of the velocity, but not its direction.

**Category:** Classical Physics

[141] **viXra:1301.0161 [pdf]**
*replaced on 2013-02-02 12:04:41*

**Authors:** Steven Kenneth Kauffmann

**Comments:** 9 Pages.

The quantum mechanics status of the probability vector current density has long seemed to be marginal. On one hand no systematic prescription for its construction is provided, and the special examples of it that are obtained for particular types of Hamiltonian operator could conceivably be attributed to happenstance. On the other hand this concept's key physical interpretation as local average particle flux, which flows from the equation of continuity that it is supposed to satisfy in conjunction with the probability scalar density, has been claimed to breach the uncertainty principle. Given the dispiriting impact of that claim, we straightaway point out that the subtle directional nature of the uncertainty principle makes it consistent with the measurement of local average particle flux. We next focus on the fact that the unique closed-form linear-superposition quantization of any classical Hamiltonian function yields in tandem the corresponding unique linear-superposition closed-form divergence of the probability vector current density. Because the probability vector current density is linked to the quantum physics only through the occurrence of its divergence in the equation of continuity, it is theoretically most appropriate to construct this vector field exclusively from its divergence -- analysis of the best-known "textbook" special example of a probability vector current density shows that it is thus constructed. That special example in fact leads to the physically interesting "Ehrenfest subclass" of probability vector current densities, which are closely related to their classical peers.

**Category:** Quantum Physics

[140] **viXra:1301.0160 [pdf]**
*replaced on 2013-01-28 23:09:07*

**Authors:** David E. Rutherford

**Comments:** 8 Pages.

We present a model that offers a resolution to the Horizon Problem of cosmology and eliminates the need for Inflation. It also suggests a possible new origin for the Cosmic Microwave Background Radiation. In addition, this model eliminates the need to invoke Dark Energy and Dark Matter to explain the accelerated expansion of the universe. In essence, it implies that there is no accelerated expansion, by fitting the model to Type Ia Supernovae and Gamma Ray Burst data with a reduced chi-square (goodness-of-fit) of 0.99, using only the Hubble constant as a parameter.

**Category:** Relativity and Cosmology

[139] **viXra:1301.0159 [pdf]**
*submitted on 2013-01-26 12:22:35*

**Authors:** David E. Rutherford

**Comments:** 6 Pages.

We derive the equations of relativistic quantum mechanics from a modified version of classical electrodynamics, where probability is replaced by potential. As a result, a particle is not a localized entity, in the classical sense, but has a localized energy extremum. The particle/wave aspect of matter is inherent in the particle/wave equation describing elementary particles. Furthermore, the Heisenberg uncertainty and Planck-Einstein-de Broglie relations, and the Klein-Gordon, Dirac and Proca equations follow naturally from the particle/wave equation. In addition, we incorporate a new and more physical interpretation of spin angular momentum.

**Category:** Quantum Physics

[138] **viXra:1301.0158 [pdf]**
*submitted on 2013-01-26 07:14:47*

**Authors:** Dan Visser

**Comments:** 6 Pages.

The Double Torus, a new hypothesis for the universe, has been put in perspective and related to other theories and hypotheses. This ‘paper’ could be used by the press. The Double Torus hypothesis is grounded in theoretical and mathematical physics. Examples of evidence might already be available.

**Category:** Mathematical Physics

[137] **viXra:1301.0157 [pdf]**
*submitted on 2013-01-26 04:29:00*

**Authors:** Philip Maulion

**Comments:** 3 Pages.

In "A World in 'Presence' II", I propose to strengthen the validity of the article "A World in 'Presence'" (viXra 1211.0149v1), which I submitted on 11/26/2012.

**Category:** Quantum Physics

[136] **viXra:1301.0156 [pdf]**
*submitted on 2013-01-25 21:15:46*

**Authors:** David E. Rutherford

**Comments:** 4 Pages.

According to Newton's Third Law, in a collision between two isolated particles `action equals reaction'. However, in classical electrodynamics, this law is violated. In general, in a collision between two isolated charged particles, the momentum of the particles is not conserved. Typically, it is necessary to combine the field momentum with the particle momentum in order to `balance the scales'. A paradox arises from the fact that, generally, particle momentum is conserved in the center of mass (cm) frame, but not in the lab frame. Here, we offer a resolution to this paradox in which the Third Law remains valid for collisions between charged particles, in all situations and in all frames, without the need to invoke the momentum of the field.

**Category:** Classical Physics

[135] **viXra:1301.0155 [pdf]**
*submitted on 2013-01-25 21:44:30*

**Authors:** Rodney Bartlett

**Comments:** 10 Pages.

Sergei Kopeikin, professor of physics and astronomy at the University of Missouri, thinks the previous explanation for the so-called Pioneer Anomaly [1] was only able to account for 15 to 20% of the observed deceleration. He devised a new set of calculations that included the universe's expansion, and the way expansion affects the speed of photons which compose the light and radio waves.
[1] In a paper published on June 12 in Physical Review Letters (“Support for the Thermal Origin of the Pioneer Anomaly”, Phys. Rev. Lett. 108, 241101 (2012), 5 pages), Slava G. Turyshev, Viktor T. Toth, Gary Kinsella, Siu-Chun Lee, Shing M. Lok, and Jordan Ellis write: “We investigate the possibility that the anomalous acceleration of the Pioneer 10 and 11 spacecraft is due to the recoil force associated with an anisotropic emission of thermal radiation off the vehicles” and “We ... conclude that, once the thermal recoil force is properly accounted for, no anomalous acceleration remains.”
Both the “thermal recoil” and “universal expansion” theories regarding Pioneer are extremely interesting. However, I suspect the emission of thermal radiation doesn’t have a large enough effect, just as Sergei Kopeikin states. I also suspect the speed of photons in the vacuum of space is, as Relativity states, constant and always appears constant - and that universal expansion therefore doesn’t have enough effect either. I'd therefore like to propose a refinement of gravitational physics. I redefine warping as 2.3 times General Relativity's value - deflection of starlight by the sun is still at 1.75 arcseconds since 57% of the light is diverted into solar wave packets (my ideas owe part of their inspiration to the MUH or Mathematical Universe Hypothesis formulated by MIT's Professor Max Tegmark).
I didn’t originally intend to write about tides, falling bodies, Earth’s orbit, and Greek philosophers. But if someone is attempting to explain the Pioneer slowdown by a new interpretation of space-time warping (and this warping is what gravitation is), it’s a good idea – even an essential one – to not solely write about General Relativity and the spacecraft launched 40 years ago. Ideas from centuries ago – including those of Newton, Kepler, Galileo, Aristotle, Parmenides, Zeno – must also be analysed, as must original interpretations of the Mobius loop and figure-8 Klein bottle.

**Category:** Quantum Gravity and String Theory

[134] **viXra:1301.0154 [pdf]**
*submitted on 2013-01-25 11:38:23*

**Authors:** Jeffrey S. Keen

**Comments:** 8 Pages.

This paper scientifically explores mind-matter interactions by utilising the well-documented phenomenon of mind-created psi-lines. Experimentation shows how the perceived length of a standard yardstick, or the aura of any object, is affected by the presence of psi-lines. Three distinct effects have been discovered. At a psi-line’s nodes, the perceived aura of any object, or any attempted measurement of length, decreases to zero! Measurements increase significantly when taken in the direction of flow of the psi-line, whilst the same measurements taken against the flow decrease. In contrast, transverse measurements produce a sine-like curve, but the equation is not a simple sine wave or a standing wave. The format of the equation involves a square root and is of the form L = A * √sin(l * π/d). The conclusions demonstrate not only that mind-mind and mind-matter interactions exist, but also that psi-lines and nodes detected intuitively actually exist and obey the laws of physics with significant enhancements, and that the mind interacts with the structure of the universe down to the Planck level.

**Category:** Quantum Gravity and String Theory

[133] **viXra:1301.0153 [pdf]**
*submitted on 2013-01-25 10:26:57*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 2 Pages.

It is taught by the Big Bang Mathematical Religion that Earth is the center of the Universe. This is incorrect. It is known by real scientists that Earth is the center of what we observe in the universe. This means that it is the center of the observable universe. An elaboration of the differences is provided.

**Category:** Relativity and Cosmology

[132] **viXra:1301.0152 [pdf]**
*submitted on 2013-01-24 15:39:18*

**Authors:** Richard A Peters

**Comments:** 11 Pages.

The phenomenon of time dilation demands the existence of a field that supports the propagation of photons. Historical references identify this field as the luminiferous ether. I will call it the temporal-inertial field (TI field), because I may attribute properties to this field not obtained by classical versions of the luminiferous ether. Time dilation occurs when an ongoing process moves relative to space, relative to this ether, relative to this TI field. The greater the velocity of the process relative to space the greater is the time dilation experienced by that process. The rate at which a process is slowed or accelerated is intrinsic, absolute and depends solely on the velocity of the process relative to space. If space has no properties other than dimensionality, motion relative to that space is undefined and meaningless and can have no influence on any ongoing process. Accordingly I assert the existence of the so-called TI field that supports the propagation of photons and occupies and permeates all of space, including the space of atoms. The entire thrust of this paper is that geometry does not govern the physics of time dilation; motion relative to the TI field of space does.

**Category:** Relativity and Cosmology

[131] **viXra:1301.0151 [pdf]**
*submitted on 2013-01-24 22:11:00*

**Authors:** Sierra Rayne, Kaya Forest

**Comments:** 4 Pages.

Historical climate trends in southwestern Saskatchewan, Canada were analyzed using parametric linear regression and non-parametric Mann-Kendall trend detection approaches over various timeframes between 1886 and 2010. We find substantial variability for this region in the significance and magnitude of any temporal trends for daily maximum, minimum, and mean temperatures on an annual basis - as well as during the winter, spring, summer, and autumn periods - that is dependent on the time period chosen. Similar results are obtained for precipitation data in the study area. The results demonstrate that temperature and precipitation trends in southwestern Saskatchewan tell a complex long-term climate change story, containing substantial temporal trend heterogeneity, thereby necessitating caution when interpreting long-term climatic data - particularly in the context of larger-scale regional or global observations and predictions.
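The two trend-detection approaches named above can be illustrated with a minimal sketch. This is a generic Python example on a hypothetical toy annual-temperature series, not the study's Saskatchewan data or code: a non-parametric Mann-Kendall S statistic with its normal-approximation Z score (no tie correction, for simplicity), alongside a parametric least-squares trend slope.

```python
import itertools
import math

def mann_kendall_s(series):
    """Mann-Kendall S statistic: sum of sign(x_j - x_i) over all ordered pairs i < j."""
    return sum(
        (x_j > x_i) - (x_j < x_i)
        for x_i, x_j in itertools.combinations(series, 2)
    )

def mann_kendall_z(series):
    """Normal-approximation Z score for the trend test (no tie correction)."""
    n = len(series)
    s = mann_kendall_s(series)
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

def ols_slope(years, values):
    """Parametric counterpart: ordinary least-squares trend slope."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    den = sum((x - mean_x) ** 2 for x in years)
    return num / den

# Hypothetical toy series: mean annual temperature (deg C).
years = list(range(1990, 2000))
temps = [5.1, 5.3, 4.9, 5.6, 5.4, 5.8, 5.7, 6.0, 5.9, 6.2]

print(mann_kendall_s(temps), round(mann_kendall_z(temps), 2), round(ols_slope(years, temps), 3))
```

A |Z| above 1.96 flags a trend at the 5% level; as the abstract notes, on real records the verdict can change with the time window chosen.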

**Category:** Climate Research

[130] **viXra:1301.0150 [pdf]**
*replaced on 2013-10-01 20:58:17*

**Authors:** Frank Dodd Tony Smith Jr

**Comments:** 40 Pages.

The E8 Physics Model of viXra 1108.0027 begins with an 8-dim Spacetime. The 120 Root Vectors of the 4-dim 600-cell correspond to half of the 240 Root Vectors of E8. The 600-cell lives in a 3-dim sphere inside 4-dim space. Projected to flat 3-dim space, the 120 Root Vectors of the 600-cell can be represented in terms of an Icosidodecahedron and Rhombic Triacontahedra. The E8 Physics Model can be described in terms of Rhombic Triacontahedra in 3-dim space, which have natural QuasiCrystal structure and also are related to tilings of 3-dim flat space by Truncated Octahedra. V2 adds details about E8 lattices. V3 revises and expands about E8 lattices. V4 discusses Kerr-Newman fermion clouds.

**Category:** High Energy Particle Physics

[129] **viXra:1301.0149 [pdf]**
*submitted on 2013-01-24 13:15:06*

**Authors:** Andrew Nassif

**Comments:** 5 Pages.

Chemistry is a branch of physical science concerned with the study of the composition of matter and of everything composed of matter, built on the basis of atomism, fluid dynamics, and particle physics, as well as on the measurements applied in stoichiometry. Chemistry is basically the central science; though it may be a branch of physical science, it is very distinct from physics. The word chemistry itself comes from the word alchemy, which is derived from لكيمياء, in turn derived from the Greek word χημεία, meaning the study of material, later translated as the study of everything composed of matter. The ancient Egyptians used synthetic chemistry in the study of herbs, as well as using this branch of chemistry to help them plant crops in the Fertile Crescent. Democritus's theory of atomism became the foundation of chemistry and elementary physics itself. Eventually atomism brought forth models of the atoms around us, and many generations later the Quantum Mechanical Model was born. This is how the fundamentals of chemistry came to be.

**Category:** Chemistry

[128] **viXra:1301.0148 [pdf]**
*submitted on 2013-01-24 01:29:28*

**Authors:** David E. Rutherford

**Comments:** 3 Pages.

We show that the energy density of a continuous charge distribution must be twice the conventionally accepted value. This conclusion is qualified through logical argument and quantified using conventional mathematical methods.

**Category:** Classical Physics

[127] **viXra:1301.0147 [pdf]**
*submitted on 2013-01-24 01:37:21*

**Authors:** David E. Rutherford

**Comments:** 3 Pages.

The ``4/3 problem'' of electrodynamics arose from the attempt to describe the mass of the electron as entirely electromagnetic in origin. Unfortunately, in the conventional treatment, there has been no acceptable solution offered. We present, here, a simple resolution to the `problem' and show, by considering a previously overlooked part of the `electromagnetic' field, that the mass of the electron is entirely `electromagnetic' in origin.

**Category:** Classical Physics

[126] **viXra:1301.0146 [pdf]**
*submitted on 2013-01-23 18:36:44*

**Authors:** Friedwardt Winterberg

**Comments:** 8 Pages.

As Newton's mysterious action at a distance law of gravity was explained as a Riemannian geometry by Einstein, it is proposed that the likewise mysterious non-local quantum mechanics is explained by the analytic continuation below the Planck length into a complex Teichmuller space. Newton's theory worked extremely well, as does quantum mechanics, but no satisfactory explanation has been given for quantum mechanics. In one space dimension, sufficient to explain the EPR paradox, the Teichmuller space is reduced to a space of complex Riemann surfaces. Einstein's curved space-time theory of gravity was confirmed by a tiny departure from Newton's theory in the motion of the planet Mercury, and an experiment is proposed to demonstrate the possible existence of a Teichmuller space below the Planck length.

**Category:** Quantum Physics

[125] **viXra:1301.0145 [pdf]**
*submitted on 2013-01-23 19:01:31*

**Authors:** Friedwardt Winterberg

**Comments:** 16 Pages.

To reduce the radiation hazard for manned missions to Mars and beyond, a high specific impulse-high thrust system is needed, with a nuclear bomb propulsion system the preferred candidate. Propulsion with small fission bombs is excluded because the critical mass requirement leads to extravagantly small fission burn-up rates. This leaves open propulsion with non-fission-ignited thermonuclear micro-explosions, with a compact fusion micro-explosion igniter (driver) and no large radiator. It should not depend on the rare He3 isotope, and should require only a small amount of tritium. This excludes lasers for ignition. With multi-mega-ampere-gigavolt proton beams and a small amount of tritium, cylindrical deuterium targets can be ignited. The proton beams are generated by discharging the entire spacecraft as a magnetically insulated gigavolt capacitor. To avoid a large radiator, needed to remove the heat from the absorption of the fast neutrons in the spacecraft, the micro-explosion is surrounded by a thick layer of liquid hydrogen, stopping the neutrons and heating the hydrogen to a temperature of ~10^5 K, which as a fully ionized plasma can be repelled from the spacecraft by a magnetic mirror.

**Category:** Astrophysics

[124] **viXra:1301.0144 [pdf]**
*submitted on 2013-01-23 23:20:48*

**Authors:** V.A.I.Menon

**Comments:** 13 Pages.

The author after introducing the concept of the vean (vacuum energy absorption) process shows that it not only crystallizes the progressive nature of time but also causes gravitation and the red shift of light emitted by far off galaxies [1]. He now shows that the principle of equivalence on which the general theory of relativity is based can be understood in terms of the thermodynamics of the primary gas. The curvature of the space-time is seen to emerge from the anisotropy of the vacuum fluctuations background in the neighborhood of a massive body arising from the vean process. According to him unlike the currently accepted interpretation of general relativity, the gravitational field based on the vean process does not exhibit non-linearity.

**Category:** Quantum Physics

[123] **viXra:1301.0143 [pdf]**
*submitted on 2013-01-23 10:28:29*

**Authors:** Andrew Nassif

**Comments:** 10 Pages.

Linear perspective allows one to work by representing light passing through a scene onto a rectangular base; this method is often used in paintings and modern-day sketches.

**Category:** Geometry

[122] **viXra:1301.0142 [pdf]**
*submitted on 2013-01-22 20:57:45*

**Authors:** Andrew Nassif, Nasir Germain

**Comments:** 4 Pages.

Research on the Higgs Boson can lead to early discoveries of unknown subatomic particles in our universe or new forms of cosmic radiation. If this is the case, then the discovery would be one of the most revolutionary discoveries of our time.

**Category:** Relativity and Cosmology

[121] **viXra:1301.0141 [pdf]**
*submitted on 2013-01-22 21:20:57*

**Authors:** Andrew Nassif, Talal Khalaf

**Comments:** 2 Pages. This document is a commentary rather than a research report. All of the items in the paper are opinions provided by real scientists.

The relationship between weather forecasting and the age of the Earth is considered in terms of the Earth's gravity. I think that the Earth will change speed and spin around itself less due to the change in the angle of rotation of the Earth on its axis; any increase in the angle could mean the occurrence of a major disaster on the ground in the coming years, or rather in 2013. Have disasters previously occurred on the ground, and is there evidence to prove it?

**Category:** General Science and Philosophy

[120] **viXra:1301.0140 [pdf]**
*submitted on 2013-01-22 10:24:17*

**Authors:** Andrew Nassif

**Comments:** 5 Pages.

Radiation can often be found in devices we use every day such as cell phones, wrist watches, laptops, and televisions. My research will include all of these, plus ways to avoid them. I will also make sure to talk about nuclear power, its harms and benefits, and designs that can reduce the usage and causes of radiation. The largest source of radiation is radioactive material found in soil. The second largest source of radiation is man-made radiation, which is emitted in our everyday lives. "It is said that we literally live in a sea of radiation," says Dr. Dade W. Moeller. It is said that the largest sources of man-made radiation are the United States, Russia, China, Japan, and the UK. This ranks the United States as the highest source of radiation emitters, meaning that we alone cause much of the man-made radiation in public.

**Category:** General Science and Philosophy

[119] **viXra:1301.0139 [pdf]**
*replaced on 2015-05-31 14:58:54*

**Authors:** Branko Zivlak

**Comments:** 8 Pages. 25 formulas

The aim of this article is to determine dimensionless physical constants through mathematical constants and other dimensionless physical constants.

**Category:** Mathematical Physics

[118] **viXra:1301.0138 [pdf]**
*replaced on 2013-02-02 08:12:53*

**Authors:** Chun-Xuan Jiang

**Comments:** 10 Pages.

The periodic table is wrong.

**Category:** Nuclear and Atomic Physics

[117] **viXra:1301.0137 [pdf]**
*submitted on 2013-01-21 18:17:39*

**Authors:** Igor Elkin

**Comments:** 4 Pages.

It is not understood why physicists have not, until now, paid attention to the elementary solution of the question of the formation of inertial mass and the elementary explanation of the force of gravity. As soon as the velocity-addition formula was obtained by Einstein, it became clear that the derivative of the total velocity with respect to one of the velocities gives two summands. It was also clear that there can be a situation where the additional speeds for some object occur with plus and with minus at once. In this case the two results for the total velocity can be averaged, and it would be possible to obtain a summand with a concrete figure instead of a zero result that depends on the direction of the varying velocity. This improvidence of scholars means that for the last one hundred years scholars have been working meaninglessly, not for the development of science in this sphere.

**Category:** Quantum Gravity and String Theory

[116] **viXra:1301.0136 [pdf]**
*submitted on 2013-01-21 21:18:22*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 1 Page.

An Ockham’s Razor definition of matter is hypothesized so that science can be saved from mathematical fantasies and the collective insanity of those who teach that math is the language of some invisible being in the sky that created all matter via the Big Bang.

**Category:** Relativity and Cosmology

[115] **viXra:1301.0134 [pdf]**
*replaced on 2013-01-22 16:47:14*

**Authors:** Robert Louis Kemp

**Comments:** 44 Pages. Copyright © 2013 - Super Principia Mathematica - The Rage to Master Conceptual & Mathematical Physics

This paper gives a general introduction to basic concepts for the geometric description of Euclidean “Flat-Space” Geometry and Non-Euclidean “Curved-Space” Geometry, and to the Spherically Symmetric Metric equations used for describing the causality and motion of the “Gravitational” interaction between mass and vacuum energy space, and the interaction of mass with mass.
This paper gives a conceptual and mathematical description of the differential geometry, of flat and curved space, space-time, or gravitational fields, using the “metric theory” mathematics of Euclidean, Minkowski, Einstein, and Schwarzschild, Spherically Symmetric metrics, and geodesic line elements.
This paper postulates a “Vacuum Energy Perfect Fluid” model and a “Dark Matter Force and Pressure” associated with the Non-Euclidean Spherically Symmetric metric equations, and also gives a conceptual and mathematical description and rationale, for selecting the Schwarzschild Metric over the Einstein Metric, as a physical description of the gradient gravitational, field surrounding a localized net inertial mass/matter source.
This paper also gives a new generalized mathematical formalism for describing “Non-Euclidean” Spherically Symmetric Metrics, of space, space-time, or the gravitational field, using a generalized “Metric Curvature Coefficient”.

**Category:** Relativity and Cosmology

[114] **viXra:1301.0133 [pdf]**
*replaced on 2013-03-08 19:46:55*

**Authors:** Robert Louis Kemp

**Comments:** 44 Pages. Copyright © 2013 - Super Principia Mathematica – The Rage to Master Conceptual & Mathematical Physics

This paper postulates a “Dark Matter Force and Pressure” and also gives a conceptual and mathematical description of the reason for choosing a “Vacuum Energy Perfect Fluid” model, and for using the Schwarzschild Metric over the Einstein Metric, based on the concept of whether there is “Zero Pressure” impressed upon the surface of the Black Hole Event Horizon; and likewise, whether the “Volume Mass Density” and the curvature of space, space-time, or the gravitational field surrounding a matter source is normal throughout the gradient of a gravitational field, or whether it is rarefied/condensed through the gradient of a gravitational field and eventually becomes normal far away from the matter source. This paper gives a general introduction to the basic concepts of a “Perfect Fluid” gravitation theory, which argues for the necessity of the Non-Euclidean “Curved-Space” Geometry and Spherically Symmetric Metrics used for describing causality in the “Gravitational” interaction of mass with space or “isotropic aether” space-time, and the interaction of mass with mass.

**Category:** Relativity and Cosmology

[113] **viXra:1301.0132 [pdf]**
*submitted on 2013-01-21 13:18:47*

**Authors:** Andrew Nassif, Thomas Zolotor

**Comments:** 3 Pages.

Faint Hubble Galaxies, as named by Tom Zolotor, are galaxies that are too far away for the Hubble Telescope to take a perfect, non-blurry image. The reason they should be put into a new class of galaxies is that you can't really tell whether a galaxy is spiral, round, etc., because it is too far away for the Hubble telescope to see perfectly.

**Category:** Astrophysics

[112] **viXra:1301.0131 [pdf]**
*replaced on 2014-03-15 05:24:07*

**Authors:** Barry Foster

**Comments:** 4 Pages.

This is a short proof(?) of Fermat's Last Theorem only using mathematical methods known to boys in a 1950's English grammar school.

**Category:** Number Theory

[111] **viXra:1301.0130 [pdf]**
*replaced on 2013-04-29 09:14:13*

**Authors:** Mario Everaldo de Souza

**Comments:** 10 Pages. Accepted for publication in Frontiers in Science

Considering that each quark is composed of two prequarks, called primons, it is shown that the recently found neutral Higgs-like boson belongs to a triplet constituted of a neutral boson and two charged bosons. The quantum numbers of these bosons are calculated and shown to be associated to a new kind of hypercharge which is directly related to the weak decays of hadrons and to the CKM matrix elements. Solutions to the proton spin puzzle and to other problems of particle physics are presented.

**Category:** High Energy Particle Physics

[110] **viXra:1301.0129 [pdf]**
*replaced on 2013-02-01 09:02:23*

**Authors:** Liu Ran

**Comments:** 11 Pages.

This paper proves that the Goldbach conjecture is false using set theory and higher mathematics. A computer program is used to verify a preliminary theorem. A prediction becomes the evidence for verifying whether the finding is true or false.
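For context, the kind of computer verification mentioned can be sketched as follows. This is a generic brute-force check in Python, not the author's program: for each even number in a range it searches for a pair of primes summing to it, using simple trial division. Such direct checks can only probe finitely many cases, so they cannot by themselves settle the conjecture either way.

```python
def is_prime(n):
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def goldbach_pair(n):
    """Return the first pair of primes (p, n - p) summing to even n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Verify that every even number in a small range has a prime decomposition.
for n in range(4, 1000, 2):
    assert goldbach_pair(n) is not None

print(goldbach_pair(100))  # prints (3, 97)
```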

**Category:** Number Theory

[109] **viXra:1301.0128 [pdf]**
*submitted on 2013-01-20 16:21:31*

**Authors:** Andrew Nassif, Nasir Germain

**Comments:** 6 Pages.

Andrew Nassif uses the following equations to discuss Germain’s composition theories of condensed matter, through physics equations proven in the past by his research as well as research from other great scientists. My take is that Germain’s theory supports the idea that everything that exists in the arrow of time is made of matter, including light itself. If light is made of matter, as are extremely light subatomic particles, then can subatomic particles such as taus and neutrinos consist of lighter weight? Does this give them the possibility of traveling faster than the speed of light itself? These particles are also in different places in space, meaning there must be different universes consisting of them, as seen in Germain’s theory.

**Category:** High Energy Particle Physics

[108] **viXra:1301.0127 [pdf]**
*submitted on 2013-01-20 20:09:18*

**Authors:** Nasir Germain

**Comments:** 2 Pages.

This paper explains my newest theories on the matter of particle physics, inertia, and totes of matter.

**Category:** High Energy Particle Physics

[107] **viXra:1301.0126 [pdf]**
*replaced on 2013-01-21 20:33:11*

**Authors:** Adam G. Freeman

**Comments:** 9 Pages.

Many of the profound ideas in nature manifest themselves as symmetries. Everything in physics that has been observed to date has a symmetrical opposite but equal property except for gravitational attraction. If antimatter is revealed to be gravitationally repulsive to matter through experimentation, then the equal and opposite of gravitational attraction or anti-gravity will be established. This paper reveals that if antimatter is in fact repulsive to matter as it should be according to a CPT transformation of the Einstein field equations for general relativity, then it has a pseudo-spherical volume mathematically. This paper first discusses the familiar Schwarzschild solution to emphasize how that leads to Newton’s law of gravitational attraction. Then we discuss the CPT transformation of the Einstein field equations in the context of the Schwarzschild solution and how this leads to a repulsion between matter and anti-matter and how this repulsion is in sync with a pseudo-spherical geometry. Finally we discuss antimatter to antimatter attraction in general relativity and conclude with how this new derivation that is analogous to the Schwarzschild solution is important for physics.

**Category:** Relativity and Cosmology

[106] **viXra:1301.0125 [pdf]**
*replaced on 2013-01-24 00:00:07*

**Authors:** V.A.I.Menon

**Comments:** 15 Pages.

The author after clarifying the concepts of the imaginary time, reversible time and the progressive time based on the vean (vacuum energy absorption) process now shows that the same process will also result in a gradient in the energy of the vacuum near a massive body resulting in a force field which could be identified with the gravitational field. According to the author, the accumulation of the mass by a particle by the vean process would be so small that an electron’s rest mass would have increased by only 10% over a period of 1 billion years. It is shown that part of the red shift observed in distant galaxies could be attributed to the reduced mass of electrons in the distant past.

**Category:** Quantum Physics

[105] **viXra:1301.0124 [pdf]**
*submitted on 2013-01-20 15:35:38*

**Authors:** sangwha Yi

**Comments:** 6 Pages.

In the theory of general relativity, we discover a new system that is related to Rindler coordinate theory. The new system uses the tetrad in a new method, and we discover the new inverse-coordinate transformation of the new system.

**Category:** Relativity and Cosmology

[104] **viXra:1301.0123 [pdf]**
*submitted on 2013-01-20 15:36:59*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 2 Pages. 2 illustrations

It is now common knowledge that the entire standard model of particle physics is wrong. Any Wikipedia page or scientific article that strives to explain anything with particles is probably wrong and can be disregarded as arbitrary.

**Category:** General Science and Philosophy

[103] **viXra:1301.0122 [pdf]**
*replaced on 2013-03-05 08:44:14*

**Authors:** Risto Raitio

**Comments:** 6 Pages. Section 2 rewritten. An Appendix added.

A preon model for standard model particles is proposed based on spin 1/2 fermion and spin 0 boson constituents. The preons are quantum mechanical strings. Implications to the number of generations, heavy boson states, and dark matter are briefly considered.

**Category:** Quantum Gravity and String Theory

[102] **viXra:1301.0121 [pdf]**
*submitted on 2013-01-20 12:53:49*

**Authors:** Andrew Nassif

**Comments:** 7 Pages.

The answers to some of the greatest and biggest unanswered problems in physics itself.

**Category:** Relativity and Cosmology

[101] **viXra:1301.0120 [pdf]**
*replaced on 2014-01-22 09:47:55*

**Authors:** Mohamed Elgendi, Flavien Picon, Nadia Magnenat-Thalmann, Derek Abbott

**Comments:** 21 Pages.

Many clinical studies have shown that the arm movement of patients with neurological injury is often slow. In this paper, the speed analysis of arm movement is presented, with the aim of evaluating arm movement automatically using a Kinect camera. The consideration of arm movement appears trivial at first glance, but in reality it is a very complex neural and biomechanical process that can potentially be used for detecting a neurological disorder. This is a preliminary study, on healthy subjects, which investigates three different arm-movement speeds: fast, medium and slow. With a sample size of 27 subjects, our developed algorithm is able to classify the three different speed classes (slow, normal, and fast) with an overall error of 5.43% for interclass speed classification and 0.49% for intraclass classification. This is the first step towards enabling future studies that investigate abnormality in arm movement via use of a Kinect camera.

**Category:** Digital Signal Processing

[100] **viXra:1301.0117 [pdf]**
*submitted on 2013-01-20 07:38:55*

**Authors:** Dhananjay P. Mehendale

**Comments:** 40 Pages

In this paper we propose a new algorithm for linear programming, based on treating the objective function as a parameter. We form a matrix from the coefficients of the system of equations consisting of the objective equation and the equations obtained from the constraint inequalities by introducing slack/surplus variables. We bring this matrix to reduced row echelon form, containing only one unknown parameter, namely the objective function itself. We analyze this matrix in reduced row echelon form and develop a clear-cut method to find the optimal solution for the problem at hand, if and when it exists. The entire optimization process can be developed through proper analysis of this matrix. From that analysis it becomes clear that, in order to find the optimal solution, we may need to carry out certain operations, such as rearranging the constraint equations in a particular way and/or performing appropriate elementary row transformations on the matrix in reduced row echelon form. These operations are mainly aimed at achieving nonnegativity of all the entries in the columns corresponding to nonbasic variables in this matrix, or in the submatrix obtained by collecting certain of its rows (the submatrix whose rows have a negative coefficient for the parameter d, which stands for the objective function, in a maximization problem, and the submatrix whose rows have a positive coefficient for d in a minimization problem). Care must be taken that the new matrix, arrived at by rearranging the constraint equations and/or by carrying out suitable elementary row transformations, remains equivalent to the original matrix, in the sense that the feasible solution sets for the problem variables obtained for the different possible values of d are the same for the original and transformed matrices. We then show that this idea extends naturally to nonlinear and integer programming problems. For nonlinear problems we use the technique of Gröbner bases (a Gröbner basis being the equivalent of reduced row echelon form for a system of nonlinear equations), and for integer problems the methods of solving linear Diophantine equations (since the integer programming problem demands an optimal integer solution).
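To make the construction concrete, here is a minimal sketch of the matrix setup the abstract describes, on a hypothetical toy problem (maximize d = 3x + 2y subject to x + y ≤ 4, x + 3y ≤ 6, x, y ≥ 0). The objective value d is carried as an extra unknown column and the matrix is reduced to row echelon form; this illustrates only the setup, not the paper's full pivoting procedure.

```python
from sympy import Matrix

# Objective equation 3x + 2y - d = 0 plus constraints with slacks s1, s2.
# Column order: x, y, s1, s2, d | rhs
M = Matrix([
    [3, 2, 0, 0, -1, 0],   # objective: 3x + 2y - d = 0
    [1, 1, 1, 0,  0, 4],   # x +  y + s1 = 4
    [1, 3, 0, 1,  0, 6],   # x + 3y + s2 = 6
])
R, pivots = M.rref()
print(pivots)   # pivot columns (0, 1, 2) -> basic variables x, y, s1
print(R)        # each row expresses a basic variable via s2 and d
```

Each row of the reduced matrix expresses one basic variable in terms of the nonbasic variable s2 and the parameter d; requiring all variables to remain nonnegative then bounds d, and for this toy problem that analysis yields the optimum d = 12 at (x, y) = (4, 0).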

**Category:** Data Structures and Algorithms

[99] **viXra:1301.0116 [pdf]**
*submitted on 2013-01-20 06:20:11*

**Authors:** Xianzhao Zhong

**Comments:** 9 Pages.

In this paper, the generalized differential wave equation for the free electromagnetic field is transformed and formulated by means of matrices. Then the Maxwell wave equation and the second form of the wave equation are deduced by matrix transformation. The solutions of the wave equations are discussed. Finally, two differential equations of vibration are established and their solutions are discussed.

**Category:** Classical Physics

[98] **viXra:1301.0115 [pdf]**
*replaced on 2013-02-24 16:11:42*

**Authors:** Michael J. Burns

**Comments:** 2 pages, mburns92003@yahoo.com

“Not even false” is a saying that is part of the lore of academic physics. Metaphysics is said to be meaningless. But there are compelling arguments, originating from the work of the rationalist philosopher Spinoza and from the work of Gödel, Church and Turing in the foundations of computer science, which relate physics to metaphysics. The principle of possibility from Spinoza and the understanding of metamathematics by computer scientists do apply to physics.

**Category:** History and Philosophy of Physics

[97] **viXra:1301.0114 [pdf]**
*submitted on 2013-01-19 08:21:57*

**Authors:** Antoine Acke

**Comments:** 47 Pages.

The "theory of informatons" explains the electromagnetic interactions by the hypothesis that "e-information" is the substance of E.M. fields. The constituent element of that substance is called "an informaton".
The theory starts from the idea that any material object manifests itself in space by the emission of informatons: granular entities without mass or energy, rushing away at the speed of light and carrying information about the position, the velocity and the electrical status of the emitter.
In this article the E.M. field at a point is characterized as the macroscopic manifestation of the presence of a cloud of informatons near that point; Maxwell's laws are mathematically deduced from the dynamics of the informatons; the electromagnetic interactions are explained as the effect of the tendency of an electrically charged object to become blind to flows of e-information generated by other charged objects; and photons are identified as informatons carrying a quantum of energy, which helps us understand the strange behaviour of light as described by QEM.

**Category:** Classical Physics

[96] **viXra:1301.0113 [pdf]**
*submitted on 2013-01-18 18:27:46*

**Authors:** Sergio Arciniegas-Alarcón, Carlos Tadeu dos Santos Dias

**Comments:** 14 Pages. In portuguese

A common problem in multienvironment trials is missing genotype-environment combinations. Recently, Bergamo proposed a distribution-free multiple imputation method for the interaction matrix. The purpose of this paper is to evaluate this new development and compare it with methodologies that have been successful in genotype-environment trials with missing data, such as alternating least squares (ALS) and robust estimates, using the Additive Main effects and Multiplicative Interaction (AMMI) models. A simulation study based on real data was carried out, deleting values at random at different percentages, imputing the observations, and comparing the methodologies through three criteria: the square root of the mean predictive difference, the Procrustes statistic, and Spearman's rank correlation coefficient. It was concluded that multiple imputation is not better than imputation based on an additive model without interaction, and that the best results for the variance are obtained with robust submodels. All methods considered in this study show a high correlation between the true and the imputed missing values.
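One of the three comparison criteria mentioned above, Spearman's rank correlation between true and imputed values, is easy to sketch from scratch. A minimal illustration (assuming no tied values; a full version would assign average ranks to ties):

```python
import math

def spearman_rho(x, y):
    """Spearman's rank correlation: the Pearson correlation of the ranks.
    Assumes no ties; ties would require average ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical true vs. imputed values with identical ranking:
true_vals    = [2.1, 3.7, 1.4, 5.0]
imputed_vals = [2.3, 3.5, 1.6, 4.8]
print(spearman_rho(true_vals, imputed_vals))  # 1.0
```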

**Category:** Statistics

[95] **viXra:1301.0112 [pdf]**
*submitted on 2013-01-19 00:13:09*

**Authors:** David E. Rutherford

**Comments:** 25 Pages. Several applications of this article can be found at http://www.softcom.net/users/der555/

In special relativity, spacetime can be described as Minkowskian. We intend to show that spacetime, as well as the laws of electromagnetism, can be described using a four-dimensional Euclidean metric as a foundation. In order to formulate these laws successfully, however, it is necessary to extend the laws of electromagnetism by replacing the Maxwell tensor with an electric field four-vector. In addition, to assure the covariance of the new laws, we introduce equations that completely replace the Lorentz transformation equations and the Lorentz group. These replacements, we believe, lead naturally to a unification of the electromagnetic field with the gravitational and nuclear fields. We also introduce a new mathematical formalism which facilitates the presentation of our laws.

**Category:** Relativity and Cosmology

[94] **viXra:1301.0111 [pdf]**
*replaced on 2013-01-21 23:22:21*

**Authors:** V.A.I.Menon

**Comments:** 11 Pages.

The author, after clarifying the physical implications of the imaginary time approach and the reversible real time approach, goes on to explain how they differ from the progressive time which is experienced by all macroscopic systems. He proposes that the progressive nature of time is a direct result of the increase in entropy at the sub-quantum level. According to him, just as the interactions with the vacuum fluctuations create the confined helical wave (CH wave) which is the basic structure of a particle, a small part of the energy gets converted into its jiggling motion. It is proposed that this random motion arising from the absorption of the vacuum energy contributes an infinitesimal increase in the rest mass, which results in an increase in entropy right at the level of the elementary particle. He calls the process by which the particles absorb energy from the vacuum “the vean process”. He proposes that this increase in entropy at the level of the structure of the elementary particles results in time acquiring its progressive nature. With this interpretation of the progressive nature of time, the problem of the collapse of the wave function is resolved without invoking the presence of a conscious observer. Even the process of entanglement appears to have space-time limitations. He suggests that the existence of the gravitational field and the expansion of the universe may be direct outcomes of the proposed vean process.

**Category:** Quantum Physics

[93] **viXra:1301.0110 [pdf]**
*submitted on 2013-01-18 10:51:52*

**Authors:** Helmut Söllinger

**Comments:** Pages. e-mail address of the author: 64.soellinger@aon.at

The scope of the work described in this paper is a systematic investigation as to whether or not the mass of the proton and the electron can be represented by other fundamental constants. The author arrives at the conclusion that the mass of the proton and the electron can be expressed by a combination of five constants that occur in nature, namely e, ε₀, h, c, G, plus a time-variable parameter. In this context, the author has studied more than 37,000 options with electronic support, raising the fundamental constants only to natural-number powers.
The simplest and most convincing formula the author has found is:
me³ · mp³ = (e² h / (4π ε₀ c G R))²
This equation results in the exact value of the mass of the proton and the electron. The beauty and simplicity of this equation give rise to the following question: What, if not this formula, is able to represent the mass of the two most important particles?
The author’s conclusion is that either the electron and proton masses themselves are natural constants that cannot be represented by other constants of nature, or that – as shown in this paper – they can be perfectly well represented by five other fundamental constants, in addition to a time-variable parameter.
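For orientation, the displayed relation can be inverted to see what magnitude the time-variable parameter R must take. The numerical check below uses CODATA constants; the identification of R with any particular cosmological length is not stated in the abstract and is left open here — this is only a sketch.

```python
import math

# CODATA 2018 values, SI units
e    = 1.602176634e-19     # elementary charge (C)
h    = 6.62607015e-34      # Planck constant (J s)
eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)
c    = 2.99792458e8        # speed of light (m/s)
G    = 6.67430e-11         # gravitational constant (m^3 kg^-1 s^-2)
m_e  = 9.1093837015e-31    # electron mass (kg)
m_p  = 1.67262192369e-27   # proton mass (kg)

# Invert  me^3 * mp^3 = (e^2 h / (4 pi eps0 c G R))^2  for R:
R = e**2 * h / (4 * math.pi * eps0 * c * G * math.sqrt(m_e**3 * m_p**3))
print(f"R = {R:.3e} m")
```

The result is of order 10²⁶ m, comparable in magnitude to the Hubble radius, which is presumably why the parameter is described as time-variable.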

**Category:** Mathematical Physics

[92] **viXra:1301.0109 [pdf]**
*replaced on 2013-12-16 10:14:11*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 3 Pages. 4 illustrations, 3 references

Young stars like the Sun are hollow. Three illustrations are provided. The theory that explains what will happen to stars as they gravitationally collapse is called stellar metamorphosis. The Sun is a very young planet.

**Category:** Astrophysics

[91] **viXra:1301.0108 [pdf]**
*replaced on 2013-01-20 19:53:04*

**Authors:** V.Skorobogaotov

**Comments:** 8 Pages.

A description is given of some basic notions of quantum mechanics in the framework of the model of a 4D medium ("4D ether"): the Heisenberg uncertainty relation, the energy spectrum of the hydrogen atom, the wave function, the de Broglie wave, and some others.

**Category:** Quantum Physics

[90] **viXra:1301.0107 [pdf]**
*replaced on 2013-01-20 20:57:23*

**Authors:** Nikzad Babaii Rizvandi, Javid Taheri, Reza Moraveji, Albert Y. Zomaya

**Comments:** 19 Pages.

In this paper, we study CPU utilization time patterns of several MapReduce applications. After extracting the running patterns of several applications, the patterns along with their statistical information are saved in a reference database, to be used later to tweak system parameters so as to efficiently execute future unknown applications. To achieve this goal, the CPU utilization pattern of a new application, along with its statistical information, is compared with the already known ones in the reference database to find/predict its most probable execution pattern. Because of different pattern lengths, Dynamic Time Warping (DTW) is utilized for this comparison; a statistical analysis is then applied to the DTW outcomes to select the most suitable candidates. Furthermore, under a hypothesis, we also propose another algorithm to classify applications with similar CPU utilization patterns. Finally, the dependency between the minimum distance/maximum similarity of applications and their scalability (in both input size and number of virtual nodes) is studied. Here, we used widely deployed applications (WordCount, Distributed Grep, and Terasort) as well as an Exim Mainlog parsing application to evaluate our hypothesis of automatically tweaking MapReduce configuration parameters when executing similar applications, scalable in both the size of input data and the number of virtual nodes. Results are very promising and show the effectiveness of our approach on a private cloud with up to 25 virtual nodes.
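The comparison step above relies on Dynamic Time Warping to match CPU-utilization sequences of different lengths. A minimal sketch of the classic dynamic-programming DTW distance (a generic illustration, not the authors' implementation):

```python
def dtw_distance(a, b):
    """Classic DTW distance between two sequences of possibly different
    lengths, with absolute difference as local cost. O(len(a)*len(b))."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = DTW distance between prefixes a[:i] and b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# The same utilization shape at two different speeds aligns well under DTW:
fast = [0, 2, 4, 2, 0]
slow = [0, 1, 2, 3, 4, 3, 2, 1, 0]
print(dtw_distance(fast, fast))  # 0.0
print(dtw_distance(fast, slow))  # 4.0
```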

**Category:** Artificial Intelligence

[89] **viXra:1301.0106 [pdf]**
*submitted on 2013-01-17 22:09:09*

**Authors:** Rodney Bartlett

**Comments:** 9 Pages.

I start by pointing out “There is already support for the idea of, as a previous post puts it, "the electronic mechanism of binary digits" - in the Kabbalah (an interpretation of the Scriptures used by some Jews and Christians that seeks to discover mysteries by using special methods of interpretation). (Besides the Kabbalah, my thanks also go to cosmologist Max Tegmark and his MUH or Mathematical Universe Hypothesis.) We proceed to supersymmetry - then proposals of antiphotons, antigravitons and negative quantum spin (ideas that may seem preposterous but turn out to be possible, even logical, in a cosmos based on maths) are mentioned. From there, an original suggestion is proposed regarding the nature of magnetism. At the intended end, I mention that string theory is vital to a mathematical universe – which leads to explaining that the “Pioneer anomaly” is due to the warping of space-time by 2.3 times Einstein’s estimate, as is the “flyby anomaly” measured for several spacecraft. String theory supports the idea that mathematics itself influences the nature of space-time warps spacecraft travel in. We must not make the mistake of assuming maths’ production of matter-forming wave packets is the only way maths influences those travels.

**Category:** Quantum Gravity and String Theory

[88] **viXra:1301.0105 [pdf]**
*submitted on 2013-01-17 11:04:54*

**Authors:** Andrew Nassif

**Comments:** 5 Pages.

A paper on the symptoms of acid reflux as well as a guide to relieving it. It also provides an informative biochemical and organic description of the chemicals that can be used to treat acid reflux disease.

**Category:** Biochemistry

[87] **viXra:1301.0104 [pdf]**
*submitted on 2013-01-17 12:26:27*

**Authors:** Andrew Nassif

**Comments:** 7 Pages.

A research paper on some of the biggest controversies in the field of science and theoretical physics. This includes attempts at finding new scientific discoveries, research on the field itself, and an explanation of what Quantum Field Theory is.

**Category:** Condensed Matter

[86] **viXra:1301.0103 [pdf]**
*submitted on 2013-01-17 06:05:08*

**Authors:** V.A.I.Menon

**Comments:** 10 Pages.

The author shows that for each quantum mechanical property of a micro-system there is a corresponding thermodynamic one in the primary eigen gas approach. He further shows that the basic postulates of quantum mechanics have equivalents in the primary eigen gas approach, provided time is accorded directional symmetry. The interference pattern obtained in Young's double slit experiment is explained in terms of the primary eigen gas approach using the directional symmetry of time.

**Category:** Quantum Physics

[85] **viXra:1301.0102 [pdf]**
*submitted on 2013-01-17 10:45:04*

**Authors:** Andrea Gregori

**Comments:** 94 Pages.

We investigate the spectrum of elementary particles and fields arising from the superposition of string configurations weighted according to their entropy in the string phase space. We find that this superposition describes a universe with a physical content phenomenologically compatible with the experimental observations and measurements. Masses and couplings are determined as functions of the age of the universe, with no room for freely-adjustable parameters. They depend on time, with a scaling allowing this scenario to pass the tests provided by cosmology and the constraints imposed by the physics of the primordial universe.

**Category:** Quantum Gravity and String Theory

[84] **viXra:1301.0101 [pdf]**
*submitted on 2013-01-16 20:58:08*

**Authors:** B. C. Chanyal

**Comments:** 287 Pages. I would like to express my sincere gratitude to my supervisor Prof. O. P. S. Negi for suggesting problems, valuable guidance and encouragement throughout the period of my research.

Historical developments of the standard model and of physics beyond the standard model are summarized in this thesis, in order to understand the behavior of monopoles and dyons in current grand unified theories and the quark confinement problems relevant to their production and detection. The various roles of the four division algebras (namely the algebras of real numbers, complex numbers, quaternions and octonions) in different branches of physics and mathematics are also summarized, followed by a summary of the work done in the different chapters of the present thesis.

**Category:** High Energy Particle Physics

[83] **viXra:1301.0100 [pdf]**
*submitted on 2013-01-16 15:35:58*

**Authors:** Alexander Bolonkin

**Comments:** 8 Pages.

Previously [1], this author developed a theory which allows derivation of the unknown relations between main parameters in a given field of nature. Using this theory, the derived formulas for estimating some values of our Universe uncovered both well-known and new relations. That paper [1], which should be considered part 1 of this series, offers possibly valid relations between time, matter, volume, distance, and energy. The net picture derived is that there exists in the Universe ONLY one primary factor – ENERGY. Time, matter, volume and fields are all manifestations of energy and can be transformed into one another, as in the famous formula E = mc².
In this paper, part 2 of that series, the author shows that the parameters of space (volume, distance) and time have limits (maximal values). Volume (distance) and time contract (collapse) to a point at specific densities of energy, matter, pressure, frequency, temperature, or intensity of electric, magnetic, or acceleration fields. The maximal temperature and force are independent of other conditions.

**Category:** Classical Physics

[82] **viXra:1301.0098 [pdf]**
*replaced on 2013-01-16 12:35:32*

**Authors:** Glenn A. Baxter

**Comments:** Fourteen Pages

Dr. D. Sasso of Italy has just published two new papers: THE STABILITY OF ELECTRODYNAMIC PARTICLES: THE DELTA RADIATION and THE PHYSICAL NATURE OF MESONS AND THE PRINCIPLE OF DECAY IN THE NON-STANDARD MODEL. Dr. Sasso introduces some interesting new ideas, including the Non-Standard Model. We are also joined this month by Harry H. Ricker, as well as by papers by your Editor, Glenn A. Baxter, and another by Robert McCoy regarding the Higgs Boson, plus a guest editorial by Sue Lange regarding the NPA, the Natural Philosophy Alliance.

**Category:** Relativity and Cosmology

[81] **viXra:1301.0097 [pdf]**
*submitted on 2013-01-16 10:33:50*

**Authors:** Andrew Nassif

**Comments:** 2 Pages.

What is life, and more importantly how do we live it? This is a paper written in poetic form about the subject of life.

**Category:** Religion and Spiritualism

[80] **viXra:1301.0096 [pdf]**
*submitted on 2013-01-16 01:42:36*

**Authors:** Michael Emerson

**Comments:** 3 Pages.

A number of papers and scientists have said that in the multiverse there are multiple copies of you, playing out every possible cosmic history. This paper is an attempt to show that this is unlikely.

**Category:** Relativity and Cosmology

[79] **viXra:1301.0095 [pdf]**
*replaced on 2014-05-21 14:34:04*

**Authors:** John Shim

**Comments:** 4 Pages. published in the Hadronic Journal, 36(3) 2013 pp345-348

Einstein gave two conflicting interpretations of the Lorentz transformation for time, τ = t√(1 − v²/c²), applied to a moving clock. The first was as a coordinate transformation, which was the basis of its derivation. The second was as a physical slowing effect on the moving clock caused solely by its motion relative to a stationary reference clock. These interpretations are not independent. That is, the Lorentz coordinate transformation cannot be applied during the clock’s time of motion without correcting for the lack of synchronization between the moving and stationary clocks resulting from the slowing of the moving clock. Otherwise, the Lorentz transformation gives an incorrect result. In addition, the interpretation as a physical effect has seemingly insurmountable logical difficulties, as it subjects the moving clock to a physical slowing dependent upon an arbitrary inertial reference frame, and which is therefore indeterminable. This interpretation is supported by questionable experimental evidence.
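For reference, the transformation at issue, τ = t√(1 − v²/c²), is straightforward to evaluate numerically; a quick illustration:

```python
import math

def dilated_time(t, v, c=299_792_458.0):
    """Elapsed time tau = t * sqrt(1 - v^2/c^2) shown by a clock
    moving at speed v, per the Lorentz transformation discussed above."""
    return t * math.sqrt(1.0 - (v / c) ** 2)

# At v = 0.6c, one second of coordinate time corresponds to 0.8 s
# on the moving clock.
tau = dilated_time(1.0, 0.6 * 299_792_458.0)
print(tau)  # ≈ 0.8
```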

**Category:** Relativity and Cosmology

[78] **viXra:1301.0094 [pdf]**
*replaced on 2013-01-16 09:57:01*

**Authors:** Sierra Rayne, Kaya Forest

**Comments:** 5 Pages.

In 2008, the Canadian province of British Columbia introduced a carbon tax starting at CAD$10 per tonne of carbon dioxide equivalent (CO2e) and rising by CAD$5/tonne CO2e/year to a 2012-2013 value of CAD$30/tonne CO2e. In the current work, we find no clear evidence over the short post-tax period of record that unequivocally links British Columbia's carbon tax to significant reductions in provincial greenhouse gas emissions. There are indications the implementation of this tax may have negatively impacted British Columbia's economic performance relative to the rest of Canada. A longer post-tax period of record is likely necessary in order to reliably determine what, if any, economic and environmental effects have been generated from British Columbia's carbon tax.

**Category:** Economics and Finance

[77] **viXra:1301.0093 [pdf]**
*submitted on 2013-01-16 00:35:23*

**Authors:** V.A.I.Menon

**Comments:** 18 Pages.

The author discusses the similarity between the expression for the state function of the primary eigen gas representing a particle and that of the wave function. It is observed that the only difference between these two expressions is that in the former time appears as a real function while in the latter it appears as an imaginary function. He shows that the primary eigen gas approach, which treats time as real, and the quantum mechanical approach, which treats time as imaginary, are two ways of representing the same reality, and points to a new symmetry called the Wick symmetry. He shows that the probability postulate of quantum mechanics can be understood in a very simple and natural manner based on the primary eigen gas representation of the particle. It is shown that the zero point energy of quantum mechanics is nothing but the energy of the thermal bath formed by the vacuum fluctuations. The author shows that quantum mechanics is nothing but the thermodynamics of the primary eigen gas where time has not lost its directional symmetry.

**Category:** Quantum Physics

[76] **viXra:1301.0092 [pdf]**
*submitted on 2013-01-15 13:34:33*

**Authors:** Andrew Nassif

**Comments:** 3 Pages.

Renewable energy comes from natural resources such as sunlight, water, or geothermal heat. One of the most common uses of renewable energy is hydroelectricity. Hydroelectricity can be used in power plants such as the Hoover Dam power plant, which supplies this country with 1/3 of its electricity, making it the biggest power plant in the country. The three largest power plants in the world are the Three Gorges Dam, the Itaipu Dam, and the Guri Dam.
Hydropower does

**Category:** General Science and Philosophy

[75] **viXra:1301.0091 [pdf]**
*submitted on 2013-01-15 12:59:57*

**Authors:** Andrew Nassif

**Comments:** 5 Pages.

This is a guide to alternatives in biomedical technology as well as biochemistry. This includes using skin cells to replace nervous tissue, using artificial blood cells, and alternatives to stem cell research as well as possibly cures for aids or cancer.

**Category:** Biochemistry

[74] **viXra:1301.0089 [pdf]**
*submitted on 2013-01-15 10:51:24*

**Authors:** V.A.I.Menon

**Comments:** 21 Pages.

The author introduces the concept of the primary eigen gas which is an abstract gas where the microstates are occupied successively in time unlike in the case of a real gas where the microstates are occupied simultaneously. He shows that the energy-momentum eigen state of a particle represented by a plane wave can be treated as a primary eigen gas which makes it possible to understand the dynamics of a particle in terms of the thermodynamics of such a gas. In this approach, time and space turn out to be the intrinsic properties of the primary eigen gas representing a particle and the quantum nature of time and space emerges from it in a natural manner. It is shown that the action (with a negative sign) of a particle can be identified with the entropy of the primary eigen gas and the principle of least action is nothing but the second law of thermodynamics. Besides, it is shown that the uncertainty relations of quantum mechanics can be derived directly from the equation for the fluctuations.

**Category:** Quantum Physics

[73] **viXra:1301.0088 [pdf]**
*replaced on 2014-02-18 04:35:56*

**Authors:** Leo Vuyk

**Comments:** 16 Pages.

TWO or THREE Large Quasar Groups (LQGs) located at the start of two or three Lyman Alpha systems and a part of the Raspberry Multiverse?
According to Quantum FFF Theory, the FORM and MICROSTRUCTURE of elementary particles is supposed to be the origin of the FUNCTIONAL differences between Higgs, Graviton, Photon and Fermion particles.
As a result, a NEW splitting, accelerating and pairing MASSLESS dual black hole system seems to be able to convert vacuum energy (ZPE) efficiently into real electric energy, and even into H2 production at the black hole horizon, by entropy decrease, and is supposed to be responsible for all dark matter and galaxy formation in the universe.
As a second consequence, the Lyman Alpha structure of the universe is also supposed to originate from this splitting and pairing process of black holes, which should have started at the beginning of the universe. Thus the big bang could have been a splitting and evaporation process of one or more central “new paradigm” black holes, creating huge black-hole-based Quasars at the start of the Lyman alpha forest process.
Now I suggest that we can observe a strong signal of such a black hole splitting event as the start of a secondary branch of the Lyman Alpha system, at redshift Z = 1.27, through the recent observation of TWO or even THREE Huge LQGs presented by R. Clowes et al.
As a consequence of the redshift (Z = 1.27), it must be emphasized that this Huge LQG event can only be the start of one of the many branches of the Lyman Alpha forest in the big bang process, and not the start and centre of the raspberry multiverse as depicted here as an example of black hole splitting.

**Category:** Astrophysics

[72] **viXra:1301.0087 [pdf]**
*submitted on 2013-01-14 14:38:41*

**Authors:** Bernard Riley

**Comments:** 7 Pages.

4-branes that wrap the long and short fundamental 1-cycles of a rectangular toroidal orbifold T²/(Z₂)³, which has Planck-scale compactification radii, intersect after 43 and 109 wrappings, respectively, in a 3-brane upon which the scale factor, equal to a power of the inverse of compactification length, has precisely the same value when measured parallel to either of the fundamental 1-cycles. The scale, 5.12 MeV, of the 3-brane is reduced from Planck scale by a factor 137⁻¹⁰. The distance of the 5.12 MeV intersection from the Planck brane in the orbifold covering space is precisely equal to the length of 40 cycles of the orbifold diagonal, and measures 137 Planck units. A geometric ‘Sequence of Scales’ extends from 5.12 MeV to both larger and smaller scales, and is associated with the intersections of two 4-branes, with wrapping numbers (1, 2) and (1, -2). All fields may be confined to 3-branes. Weak, strong and electromagnetic 3-branes are identified; all three are associated with ~integer wrappings of the orbifold diagonal. The electromagnetic 3-brane is located adjacent to the 5.12 MeV 3-brane, at a distance of 137.0359 Planck units from the Planck brane in the orbifold covering space. The scale on the electromagnetic 3-brane, 4.927 MeV, is related to the scale on the weak 3-brane, 91.1876 GeV, by a factor 137⁻².

**Category:** High Energy Particle Physics

[71] **viXra:1301.0086 [pdf]**
*submitted on 2013-01-14 23:28:06*

**Authors:** John G. Hartnett

**Comments:** 4 Pages. Published in New Advances in Physics 2(2):115-123, 2008

Electromagnetism is analyzed in a 5D expanding universe. Compared to the usual 4D description of electrodynamics it can be viewed as adding effective charge and current densities to the universe that are static in time. These lead to effective polarization and magnetization of the vacuum, which is most significant at high redshift. Electromagnetic waves propagate but group and phase velocities are dispersive. This introduces a new energy scale to the cosmos. And as a result electromagnetic waves propagate with superluminal speeds but no energy is transmitted faster than c.

**Category:** Relativity and Cosmology

[70] **viXra:1301.0084 [pdf]**
*submitted on 2013-01-14 11:41:16*

**Authors:** Andrew Nassif

**Comments:** 5 Pages.

This is a research paper on acid rain, how it is caused, and what environmental regulations can be put out to stop the harmful effects of acid rain. This also includes some safety regulations we can put out when acid rain is being produced.

**Category:** Climate Research

[69] **viXra:1301.0083 [pdf]**
*submitted on 2013-01-14 10:26:00*

**Authors:** Andrew Nassif

**Comments:** 3 Pages.

A guide on what are colligative properties and how they help us in the understanding of chemistry as well as semi-conductor physics. This will also give us the ability to look at modern research and understanding in the field of chemistry through a conceptual thinking to the dimensional analysis behind it.

**Category:** Condensed Matter

[68] **viXra:1301.0082 [pdf]**
*submitted on 2013-01-14 09:26:09*

**Authors:** Martin Erik Horn

**Comments:** 8 Pages.

It is a historical accident that we describe Pauli matrices as (2 x 2) matrices and Dirac matrices as (4 x 4) matrices. As it will be shown in this paper we can use (3 x 3) matrices or (9 x 9) matrices for this purpose as well. This hopefully will enable us one day to construct a unified geometric algebra picture which includes Gell-Mann matrices in an appropriate manner.

**Category:** Mathematical Physics

[67] **viXra:1301.0078 [pdf]**
*replaced on 2013-02-08 08:52:48*

**Authors:** Jose Javier Garcia

**Comments:** 11 Pages.

We study the relation between the Gutzwiller trace formula for a dynamical system and the Riemann-Weil trace formula for the Riemann zeros. Using the Bohr-Sommerfeld quantization condition and the fractional calculus, we obtain a method to implicitly define a potential; we apply this method to define a Hamiltonian whose energies are the squares of the Riemann zeros (imaginary parts). We also show that for big ‘x’ the potential is very close to an exponential function. In this paper, for simplicity, we use natural units. Keywords: Riemann Hypothesis, WKB semiclassical approximation, Gutzwiller trace formula, Bohr-Sommerfeld quantization, exponential potential.
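The Bohr-Sommerfeld condition invoked above can be illustrated on a simple case. The sketch below is a generic illustration with a harmonic potential V = x²/2 in units m = ħ = 1 (not the paper's Riemann-zero Hamiltonian): it solves the quantization condition ∮p dx = 2π(n + 1/2) numerically and recovers the exact oscillator spectrum E_n = n + 1/2.

```python
import math

def action(E, V=lambda x: 0.5 * x * x, n_steps=10_000):
    """Phase-space action: 2 * integral of sqrt(2(E - V(x))) between the
    classical turning points of V(x) = x^2/2, via the midpoint rule."""
    xt = math.sqrt(2.0 * E)           # turning points at +/- sqrt(2E)
    h = 2.0 * xt / n_steps
    total = 0.0
    for k in range(n_steps):
        x = -xt + (k + 0.5) * h
        total += math.sqrt(max(0.0, 2.0 * (E - V(x))))
    return 2.0 * total * h

def bohr_sommerfeld_level(n):
    """Solve action(E) = 2*pi*(n + 1/2) for E by bisection
    (the action is monotone increasing in E)."""
    target = 2.0 * math.pi * (n + 0.5)
    lo, hi = 1e-9, 10.0 * (n + 1)
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if action(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

levels = [round(bohr_sommerfeld_level(n), 4) for n in range(3)]
print(levels)  # [0.5, 1.5, 2.5] -- the exact oscillator spectrum
```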

**Category:** Mathematical Physics

[66] **viXra:1301.0077 [pdf]**
*submitted on 2013-01-14 04:36:58*

**Authors:** Khrapko R. I.

**Comments:** 12 Pages. The material is submitted to www.cleoconference.org

We consider two different types of angular momentum of electromagnetic radiation: 1) the moment of linear momentum, which we regard as orbital angular momentum, and 2) spin, which is not a moment of momentum; its origin is circular polarization. We show that a circularly polarized light beam with a plane phase front carries angular momentum of both types, spin and orbital angular momentum, contrary to standard electrodynamics. Because of the conservation laws of momentum and total angular momentum, spin and moment of momentum take concrete values. These two types of angular momentum are spatially separated: the flux of spin and the flux of moment of momentum act on an absorber independently. An experiment is described which can verify this supposition.

**Category:** Classical Physics

[65] **viXra:1301.0076 [pdf]**
*replaced on 2014-07-12 06:05:18*

**Authors:** Sergey A. Kamenshchikov

**Comments:** 9 Pages. Journal of Chaos, Volume 2014, Article ID 292096. Author: ru.linkedin.com/pub/sergey-kamenshchikov/60/8b1/21a/

The goal of this investigation was to derive strictly new properties of chaotic systems and their mutual relations. The generalized Fokker-Planck equation with a non-stationary diffusion has been derived and used for chaos analysis. Anomalous transport turned out to be a natural property of this equation. A nonlinear dispersion of the considered motion allowed us to find a principal consequence: a chaotic system with uniform dynamic properties tends toward unstable clustering. Small fluctuations of particle density increase over time and form attractors and stochastic islands even if the initial transport properties have a uniform distribution. It was shown that an instability of phase trajectories leads to the nonlinear dispersion law and consequently to a space instability. A fixed-boundary system was considered using a standard Fokker-Planck equation. We derived that such dynamic systems have discrete diffusion and energy spectra. It was shown that phase space diffusion is the only parameter that defines dynamic accuracy in this case. The uncertainty relations have been obtained for conjugate phase space variables with account of transport properties. The given results can be used in the modelling of chaotic systems and the investigation of turbulence. Key words: clustering, anomalous transport, Fokker-Planck equation, uncertainty relations.

**Category:** Mathematical Physics

[64] **viXra:1301.0075 [pdf]**
*replaced on 2013-04-12 18:27:46*

**Authors:** Jay R. Yablon

**Comments:** 25 Pages. Version 4 is the final paper which will appear in the Journal of Modern Physics, in their April 2013 "Special Issue on High Energy Physics."

Based on the thesis that baryons, including protons and neutrons, are Yang-Mills magnetic monopoles, which the author has previously developed and which has been confirmed by over half a dozen empirically-accurate predictions, we develop a GUT that is rooted in the SU(4) subgroups for the proton/electron and neutron/neutrino which were used as the basis for these predictions. The SU(8) GUT group so developed leads, after three stages of symmetry breaking, to all known phenomenology, including a neutrino that behaves differently from other fermions, lepto-quark separation, replication of fermions into exactly three generations, the Cabibbo mixing of those generations, weak interactions which are left-chiral, and all four of the gravitational, strong, weak, and electromagnetic interactions. The next steps based on this development will be to calculate the masses and energies associated with the vacuum terms of the Lagrangian, to see if additional empirical confirmations can be achieved, especially for the proton, neutron, and fermion rest masses.

**Category:** High Energy Particle Physics

[63] **viXra:1301.0074 [pdf]**
*replaced on 2013-06-14 03:11:05*

**Authors:** Dirk J. Pons, Arion D. Pons, Aiden J. Pons

**Comments:** Pages. Published as: Pons, D.J., Pons, A.D., and Pons, A.J. (2013) Time: An emergent property of matter. Applied Physics Research 5, 23-47. DOI: http://dx.doi.org/10.5539/apr.v5n6p23

A non-local hidden-variable (NLHV) design called the Cordus conjecture is applied to address the ontological question: What is time? This NLHV theory, which has been successfully applied to other phenomena, includes a specific design for the internal structure of particules and their externalised discrete fields. In this specific area it provides a novel multi-level concept for time, and proposes candidate solutions to the problem of what time is and how its arrow arises. According to this theory, time at the fundamental level consists of the frequency oscillations of matter particules, and thus time is locally generated and a property of matter. At the next level up, that of the assembly of matter particles via bonds and fields, the interconnectedness creates a patchwork of temporal cause-and-effect, and hence a coarser time. A phenomenon that occurs in one volume is communicated via photons, or massy particules, or fields, to other matter around it. Thus time is also universal and relative. According to this Cordus theory, entropy, classical mechanics, and our perception of time all arise at the boundary between coherence and decoherence, and the theory explains how. The arrow is applied to time where irreversibility arises, i.e. at the assembly level rather than the fundamental level. Time at the macroscopic level is therefore a series of delayed irreversible interactions (temporal ratchets) between sub-microscopic domains of matter, not a dimension that can be traversed in both directions. The theory extends to time at the level of organic life. It explains how the human perception of time arises at the cognitive level, and why we perceive time as universal. This theory suggests that time is all of particle-based vs. spacetime, relative vs. absolute, local vs. universal, depending on the level of assembly being considered. However, it is also none of those things individually.
This paper shows that questions about time can be answered at the next deeper level of physics, and gives an example of what that physics might look like and its implications for time.

**Category:** History and Philosophy of Physics

[62] **viXra:1301.0073 [pdf]**
*submitted on 2013-01-13 13:57:22*

**Authors:** Sierra Rayne, Kaya Forest

**Comments:** 8 Pages.

Supplies of per capita renewable internal freshwater resources are declining at alarming rates around the globe, necessitating efforts to better manage population growth and the use and distribution of freshwater. All major geographic regions saw substantial reductions in per capita renewable internal freshwater supplies between 1962 and 2011. Over this period, the global per capita freshwater stock declined by 54%, with decreases of 75% in Sub-Saharan Africa, 71% in the Middle East and North Africa, 64% in South Asia, 61% in Latin America and the Caribbean, 52% in East Asia and the Pacific, and 41% in North America. At current rates of depletion, global per capita renewable internal freshwater resources are projected to decline by 65% compared to 1962 values before stabilizing, with regional variation ranging from 60% in East Asia and the Pacific to 86% in the Middle East and North Africa. Sub-Saharan Africa is predicted to reach a negative per capita renewable internal freshwater balance by the year 2120. Per capita renewable internal freshwater resources are declining more rapidly in low-income countries than in their middle- and high-income counterparts. All countries except Hungary and Bulgaria experienced declines in their per capita renewable internal freshwater supply between 1962 and 2011. Most countries (55%) experienced a decline of between 60% and 80% in per capita renewable internal freshwater resources over this period. The majority of nations are projected to maintain positive per capita renewable internal freshwater balances under steady-state conditions, although overall declines of between 80% and almost 100% from 1962 levels are dominant (~52% of all countries). A group of 28 nations is projected to reach zero per capita internal freshwater resources within the near future. African countries dominate the list of nations projected to reach zero per capita internal freshwater resources, comprising 16 of the 28 countries, of which six are landlocked.
A further group of 25 nations have data records that are too short, and recent population dynamics that are generally too complex, for reliable trend extrapolation. Close attention will need to be paid to the per capita renewable internal freshwater resource trends for these countries over the coming decades in order to obtain a better understanding of their resource depletion rates.
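The arithmetic behind such per capita trends can be illustrated with a toy model (all numbers hypothetical, not the paper's data): with a fixed renewable stock and steady population growth, the per capita share declines geometrically.

```python
# Illustrative only: hypothetical stock and growth rate, not the paper's data.
stock_m3 = 1.0e12        # fixed renewable internal freshwater stock (m3)
pop_start = 1.0e8        # population at the start of the record
growth = 0.02            # assumed 2% annual population growth

def per_capita(years_elapsed):
    """Per capita freshwater (m3/person) after the given number of years."""
    return stock_m3 / (pop_start * (1.0 + growth) ** years_elapsed)

# decline over a 49-year record length (1962-2011, as in the paper)
decline = 1.0 - per_capita(49) / per_capita(0)
print(f"{decline:.1%} per capita decline over 49 years")
```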

**Category:** General Science and Philosophy

[61] **viXra:1301.0072 [pdf]**
*submitted on 2013-01-13 12:07:28*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 1 Page. 1 illustration

Marklund Convection is the process of chemically separating plasma based on the ionization potentials of the material.

**Category:** Astrophysics

[60] **viXra:1301.0071 [pdf]**
*replaced on 2013-12-19 04:28:19*

**Authors:** Joerg Schiller

**Comments:** 18 Pages.

The notion of the form of a set.

**Category:** General Mathematics

[59] **viXra:1301.0069 [pdf]**
*submitted on 2013-01-12 12:39:20*

**Authors:** Sierra Rayne, Kaya Forest

**Comments:** 2 Pages.

The partitioning behavior of disparlure ((7R,8S)-7,8-epoxy-2-methyloctadecane) - a sex pheromone of the gypsy moth, Lymantria dispar - between aqueous solutions and the organic solvents chloroform and n-heptane has been re-evaluated. Prior estimates from the literature of the aqueous-organic solvent partitioning coefficients (log P) for disparlure in these two solvent systems appear to have been underestimated by about 5-6 orders of magnitude. In the current work, we provide corrected log P(chloroform/water) and log P(heptane/water) values for disparlure of 9.87 and 9.15, respectively.
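For context, log P is the base-10 logarithm of the organic-to-aqueous concentration ratio, so an underestimate of 5-6 log units corresponds to a partition ratio that is 10^5 to 10^6 times too small. A minimal illustration using the corrected values:

```python
def partition_ratio(log_p):
    """Concentration ratio (organic phase / aqueous phase) from log P."""
    return 10.0 ** log_p

# corrected values reported in the abstract
r_chloroform = partition_ratio(9.87)   # chloroform/water
r_heptane = partition_ratio(9.15)      # heptane/water

# an underestimate of 5 log units shrinks the apparent ratio 100,000-fold
print(partition_ratio(9.87) / partition_ratio(9.87 - 5.0))
```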

**Category:** Chemistry

[58] **viXra:1301.0068 [pdf]**
*submitted on 2013-01-12 15:15:01*

**Authors:** Sierra Rayne, Kaya Forest

**Comments:** 3 Pages.

Between 2007/2008 and 2012/2013, inflation adjusted undergraduate tuition fees for full-time Canadian students increased significantly in all disciplines. All disciplines except dentistry also exhibited substantial increases in inflation adjusted graduate tuition fees for full-time Canadian students over this period. In contrast to prior claims in the literature, we show that low tuition rates in the Canadian post-secondary system do not redistribute wealth from the poor to the rich. For each dollar of taxpayer derived financial support going into the Canadian college and university system, the wealthiest families paid almost the entire amount. Consequently, it appears that regardless of current or proposed tuition rates, the Canadian post-secondary system is a wealth transfer from the rich to the poor.

**Category:** Social Science

[57] **viXra:1301.0066 [pdf]**
*replaced on 2014-01-27 14:29:03*

**Authors:** Jeffrey N. Cook

**Comments:** 74 Pages. Fixed a couple of minor typos.

A Riemann operator is constructed in which sequential elements are removed from a decaying set by means of prime factorization, leading to a form of exponential decay with zero degeneration, referred to as the root of exponential decay. A proportionate operator is then constructed in a similar manner in terms of the non-trivial zeros of the Riemann zeta function, extending proportionately, mapping expectedly always to zero, which imposes a ratio of the primes to said zeta roots. Thirdly, a statistical oscillation function is constructed algebraically into an expression of the Laplace transform that links the two operators and binds the roots of the functions in such a manner that the period of the oscillation is defined (and derived) by the eigenvalues of one and the elements of another. A proof then of the Riemann hypothesis is obtained with a set of algebraic paradoxes that unmanageably occur for the single incident of any non-trivial real part greater or less than a rational one half.

**Category:** Number Theory

[56] **viXra:1301.0065 [pdf]**
*submitted on 2013-01-12 06:19:36*

**Authors:** Dan Visser

**Comments:** 6 Pages.

I present the mathematics of a new 'Dark Energy Force', revisiting my former papers. The reason is that particles that feel weak gravity, and anti-particles that may feel anti-gravity, in the particle cosmology I use with negative mass, have never been exposed experimentally to General Relativity in order to prove that a real anti-gravity exists. However, the mathematics in my framework theoretically proves that only dark matter-mass could have negative mass. This is in contradiction with a new theory of Entropy-Gravity, which theoretically proved that gravity is not fundamental but caused by entropy. My framework is also in contradiction with the Elementary Process theory, which likewise predicts that gravity is not fundamental, but that anti-matter with positive mass will cause anti-gravity. Both of these frameworks consider their theory in a Big Bang cosmology. So I revisit my mathematics to highlight again that a new dark matter-force, embedded in a new cosmology named the Double Torus Universe, is the only one that could cause real anti-gravity. My framework is developed independently of institutions and is based on two extra time arrows from below the Planck scale. Additionally, and for the first time, I use a Feynman diagram to express this dark matter-force in order to illustrate the existence of real anti-gravity theoretically.

**Category:** Mathematical Physics

[55] **viXra:1301.0064 [pdf]**
*submitted on 2013-01-12 05:31:16*

**Authors:** Elkin Igor

**Comments:** 10 Pages.

Poincaré and Einstein supposed that it is practically impossible to determine the one-way speed of light, which is why the speed of light in different directions may differ. They also supposed that, as long as there is no experiment that depends on the value of the one-way speed of light, it is possible to consider all one-way speeds of light equal to the two-way speed of light. An explanation is offered of the experiment connected with the "Red shift" with the aid of the one-way speed of light, thereby showing a dependence on the magnitude of the one-way speed of light. A mechanics is set up based on unidirectional speeds of light; for example, the Michelson-Morley experiment is explained with the aid of this mechanics. The cause of the FitzGerald contraction is also explained.

**Category:** Relativity and Cosmology

[54] **viXra:1301.0062 [pdf]**
*submitted on 2013-01-11 10:49:37*

**Authors:** Andrew Nassif

**Comments:** 6 pages not including title and sources

Hydrostatic equilibrium is the condition in which a volume of fluid is at rest or moving at constant velocity. This term is very important in physics, especially in fluid mechanics. This presentation is intended as an easy guide to research in the field.
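The balance underlying this condition is that the vertical pressure gradient supports the weight of the fluid, dP/dz = -ρg, which for an incompressible fluid integrates to P = P0 + ρgh. A minimal sketch with illustrative values:

```python
def hydrostatic_pressure(depth_m, rho=1000.0, g=9.81, p0=101325.0):
    """Pressure (Pa) at a given depth in an incompressible fluid at rest.
    Defaults are illustrative: water density, sea-level air pressure."""
    return p0 + rho * g * depth_m

print(hydrostatic_pressure(10.0))  # roughly 2 atm at 10 m of water
```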

**Category:** General Science and Philosophy

[53] **viXra:1301.0061 [pdf]**
*replaced on 2014-07-02 01:17:55*

**Authors:** Golden Gadzirayi Nyambuya

**Comments:** 10 Pages.

Prevailing and conventional wisdom, as drawn from both Professor Einstein's Special Theory of Relativity and our palpable experience, holds that photons are massless particles and that every particle that travels at the speed of light must, accordingly, be massless. Amongst other important but now resolved problems in physics, this assumption led to the Neutrino Mass Problem, namely: ``Do neutrinos have mass?'' Neutrinos appear very strongly to travel at the speed of light and, according to the afore-stated, they must be massless. Massless neutrinos have a problem in that one is unable to explain the phenomenon of neutrino oscillations, because this requires massive neutrinos. Experiments appear to strongly suggest that neutrinos most certainly are massive particles. While this solves the problem of neutrino oscillation, it directly leads to another problem, namely: ``How can a massive particle travel at the speed of light? Is this speed not the preserve and prerogative of only massless particles?'' We argue herein that, in principle, it is possible for massive particles to travel at the speed of light. In presenting the present letter, our hope is that it may aid or contribute significantly to solving the said problem of how massive particles can travel at the speed of light.

**Category:** Classical Physics

[52] **viXra:1301.0060 [pdf]**
*replaced on 2013-05-07 08:59:30*

**Authors:** Miroslaw J. Kubiak

**Comments:** 9 Pages.

It is well known from classical mechanics that there exists a relation between the metric tensor and the effective mass tensor (EMT) of a body. We have introduced the concept of the EMT in General Relativity and found that a similar relation exists between the metric tensor and the EMT for a body moving in a weak gravitational field. We propose an experimental verification of our considerations.
We compared a few physical features of the space-time curvature with the physical features of the EMT; the results of this comparison are presented in the form of a table in this paper.

**Category:** Relativity and Cosmology

[51] **viXra:1301.0058 [pdf]**
*replaced on 2014-01-07 22:13:04*

**Authors:** Mohamed Elgendi, Bjoern Eskofier, Socrates Dokos, Derek Abbott

**Comments:** 46 Pages. The paper is published in PLoS ONE and its citation is Elgendi M, Eskofier B, Dokos S, Abbott D (2014) Revisiting QRS Detection Methodologies for Portable, Wearable, Battery-Operated, and Wireless ECG Systems. PLoS ONE 9(1): e84018.

Cardiovascular diseases are the number one cause of death worldwide. Currently, portable battery-operated systems such as mobile phones with wireless ECG sensors have the potential to be used in continuous cardiac function assessment that can be easily integrated into daily life. These portable point-of-care diagnostic systems can therefore help unveil and treat cardiovascular diseases. The basis for ECG analysis is a robust detection of the prominent QRS complex, as well as other ECG signal characteristics. However, it is not clear from the literature which ECG analysis algorithms are suited for an implementation on a mobile device. We investigate current QRS detection algorithms based on three assessment criteria: 1) robustness to noise, 2) parameter choice, and 3) numerical efficiency, in order to target a universal fast-robust detector. Furthermore, existing QRS detection algorithms may provide an acceptable solution only on small segments of ECG signals, within a certain amplitude range, or amid particular types of arrhythmia and/or noise. These issues are discussed in the context of a comparison with the most conventional algorithms, followed by future recommendations for developing reliable QRS detection schemes suitable for implementation on battery-operated mobile devices.
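The two-moving-average idea referenced in this line of work can be sketched as follows; the window lengths, offset, and peak-picking below are illustrative assumptions, not the calibrated values from the paper. A short moving average tracks QRS-width events, a long one tracks the beat-level baseline, and regions where the short average exceeds the long one are taken as candidate QRS blocks:

```python
import numpy as np

def detect_qrs(ecg, fs, w_qrs=0.12, w_beat=0.6):
    """Two-moving-average QRS detector sketch; window lengths in
    seconds are illustrative, not the paper's calibrated values."""
    sq = np.square(ecg - np.mean(ecg))  # emphasize large deflections

    def mov_avg(x, w):
        n = max(1, int(w * fs))
        return np.convolve(x, np.ones(n) / n, mode="same")

    ma_qrs = mov_avg(sq, w_qrs)    # follows QRS-width events
    ma_beat = mov_avg(sq, w_beat)  # follows the beat-level baseline
    offset = 0.08 * np.mean(sq)    # small ad hoc bias against noise
    blocks = ma_qrs > ma_beat + offset

    # report the index of the largest sample inside each candidate block
    peaks, start = [], None
    for i, b in enumerate(blocks):
        if b and start is None:
            start = i
        elif not b and start is not None:
            peaks.append(start + int(np.argmax(sq[start:i])))
            start = None
    if start is not None:
        peaks.append(start + int(np.argmax(sq[start:])))
    return peaks

# synthetic check: unit impulses once per second at 250 Hz
fs = 250
ecg = np.zeros(5 * fs)
ecg[::fs] = 1.0
print(detect_qrs(ecg, fs))  # one detection at each impulse
```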

**Category:** Digital Signal Processing

[50] **viXra:1301.0057 [pdf]**
*replaced on 2013-09-16 17:50:46*

**Authors:** Mohamed Elgendi

**Comments:** 37 Pages. The paper is published in PLoS ONE and its citation is: Elgendi M (2013) Fast QRS Detection with an Optimized Knowledge-Based Method: Evaluation on 11 Standard ECG Databases. PLoS ONE 8(9): e73557

The current state-of-the-art in automatic QRS detection methods shows high robustness and almost negligible error rates. In return, the methods are usually based on machine-learning approaches that require sufficient computational resources. However, simple, fast methods can also achieve high detection rates. There is a need to develop numerically efficient algorithms to accommodate the new trend towards battery-driven ECG devices and to analyze long-term recorded signals in a time-efficient manner. A typical QRS detection method has been reduced to a basic approach consisting of two moving averages that are calibrated by a knowledge base using only two parameters. In contrast to high-accuracy methods, the proposed method can be easily implemented in a digital filter design.

**Category:** Digital Signal Processing

[49] **viXra:1301.0056 [pdf]**
*replaced on 2015-07-21 12:13:10*

**Authors:** Mohamed Elgendi, Bjoern Eskofier, Derek Abbott

**Comments:** 33 Pages. The paper is published in Sensors and its citation Elgendi M, Eskofier B, Abbott D (2015) Fast T Wave Detection Calibrated by Clinical Knowledge with Annotation of P and T Waves. Sensors 15(7): 17693.

Background: There are limited studies on the automatic detection of T waves in arrhythmic electrocardiogram (ECG) signals. This is perhaps because there is no available arrhythmia dataset with annotated T waves. There is a growing need to develop numerically-efficient algorithms that can accommodate the new trend of battery-driven ECG devices. Moreover, there is also a need to analyze long-term recorded signals in a reliable and time-efficient manner, therefore improving the diagnostic ability of mobile devices and point-of-care technologies.
Methods: Here, the T wave annotation of the well-known MIT-BIH arrhythmia database is discussed and provided. Moreover, a simple fast method for detecting T waves is introduced. A typical T wave detection method has been reduced to a basic approach consisting of two moving averages and dynamic thresholds. The dynamic thresholds were calibrated using four clinically known types of sinus node response to atrial premature depolarization (compensation, reset, interpolation, and reentry).
Results: The determination of T wave peaks is performed and the proposed algorithm is evaluated on two well-known databases, the QT and MIT-BIH Arrhythmia databases. The detector obtained a sensitivity of 97.14% and a positive predictivity of 99.29% over the first lead of the validation databases (total of 221,186 beats). Conclusions: We present a simple yet very reliable T wave detection algorithm that can be potentially implemented on mobile battery-driven devices. In contrast to complex methods, it can be easily implemented in a digital filter design.

**Category:** Digital Signal Processing

[48] **viXra:1301.0055 [pdf]**
*replaced on 2016-04-13 14:01:29*

**Authors:** Mohamed Elgendi, Ian Norton, Matt Brearley, Socrates Dokos, Derek Abbott, Dale Schuurmans

**Comments:** 23 Pages.

To date, there have been no studies that investigate the independent use of the photoplethysmogram (PPG) signal to determine heart rate variability (HRV). However, researchers have demonstrated that PPG signals offer an alternative way of measuring HRV when electrocardiogram (ECG) and PPG signals are collected simultaneously. Based on these findings, we take the use of PPGs to the next step and investigate a different approach to show the potential independent use of short 20-second PPG signals collected from healthy subjects after exercise in a hot environment to measure HRV. Our hypothesis is that if the PPG-HRV indices are negatively correlated with age, then short PPG signals are appropriate measurements for extracting HRV parameters. The PPGs of 27 healthy male volunteers at rest and after exercise were used to determine the HRV indices: the standard deviation of heartbeat intervals (SDNN) and the root-mean square of the differences of successive heartbeats (RMSSD). The results indicate that the use of the aa interval, derived from the acceleration of PPG signals, is promising in determining the HRV statistical indices SDNN and RMSSD over 20-second PPG recordings. Moreover, the post-exercise SDNN index shows a negative correlation with age. There tends to be a decrease of the PPG-SDNN index with increasing age, whether at rest or after exercise. This new outcome validates the negative relationship between HRV in general and age, and consequently provides further evidence that short PPG signals have the potential to be used in heart rate analysis without the need to measure lengthy sequences of either ECG or PPG signals.
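The two HRV indices named in the abstract are standard time-domain statistics over the inter-beat interval series; a minimal sketch (interval values hypothetical):

```python
import numpy as np

def sdnn(intervals_ms):
    """Standard deviation of inter-beat intervals (sample std, ddof=1)."""
    return float(np.std(intervals_ms, ddof=1))

def rmssd(intervals_ms):
    """Root mean square of successive inter-beat interval differences."""
    diffs = np.diff(intervals_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# hypothetical aa/RR intervals in milliseconds
intervals = np.array([800.0, 810.0, 790.0, 805.0, 795.0])
print(round(sdnn(intervals), 2), round(rmssd(intervals), 2))  # prints: 7.91 14.36
```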

**Category:** Digital Signal Processing

[47] **viXra:1301.0054 [pdf]**
*replaced on 2014-08-12 12:05:33*

**Authors:** Mohamed Elgendi

**Comments:** 33 Pages. The paper is published in Computer Methods and Programs in Biomedicine and its citation is: Elgendi M Detection of c, d, and e waves in theacceleration photoplethysmogram. Computer Methods and Programs in Biomedicine. DOI: 10.1016/j.cmpb.2014.08.001

Analyzing the acceleration photoplethysmogram (APG) is becoming increasingly important for diagnosis. However, processing an APG signal is challenging, especially if the goal is to detect its small components (c, d, and e waves). Accurate detection of c, d, and e waves is an important first step for any clinical analysis of APG signals. In this paper, a novel algorithm was developed that can simultaneously detect c, d, and e waves in APG signals of healthy subjects that have low amplitude waves, contain fast-rhythm heart beats, and suffer from non-stationary effects. The performance of the proposed method was tested on 27 records collected during rest, resulting in 97.39% sensitivity and 99.82% positive predictivity.

**Category:** Digital Signal Processing

[46] **viXra:1301.0053 [pdf]**
*replaced on 2014-10-22 17:21:02*

**Authors:** Mohamed Elgendi, Ian Norton, Matt Brearley, Derek Abbott, Dale Schuurmans

**Comments:** 26 Pages. The paper is published in Biomedical Engineering Online and its citation is: Elgendi M, Norton I, Brearley M, Abbott D, Schuurmans D (2014) Detection of a and b waves in the acceleration photoplethysmogram. Biomedical Engineering Online 13: 139.

Background: Analyzing acceleration photoplethysmogram (APG) signals measured after exercise is challenging. In this paper, a novel algorithm that can detect a waves and consequently b waves under these conditions is proposed. Accurate a and b wave detection is an important first step for the assessment of arterial stiffness and other cardiovascular parameters. Methods: Nine algorithms based on fixed thresholding are compared, and a new algorithm is introduced to improve the detection rate using a testing set of heat-stressed APG signals containing a total of 1,540 heart beats. Results: The new a detection algorithm demonstrates the highest overall detection accuracy (99.78% sensitivity, 100% positive predictivity) over signals that suffer from 1) non-stationary effects, 2) irregular heartbeats, and 3) low amplitude waves. In addition, the proposed b detection algorithm achieved an overall sensitivity of 99.78% and a positive predictivity of 99.95%. Conclusions: The proposed algorithm presents an advantage for real-time applications by avoiding human intervention in threshold determination.

**Category:** Digital Signal Processing

[45] **viXra:1301.0052 [pdf]**
*submitted on 2013-01-10 13:53:27*

**Authors:** Andrew Nassif

**Comments:** 5 Pages.

Nuclear Physics is a major subject in physics, as well as a sub-subject of chemistry, that deals with the study of radioactive elements and materials, how they work, and how they can be used. Its main field of study is the interactions that happen in the nuclei of a compound or of an element. Nuclear Physics is also a major field of particle physics, and it includes the use of radioactive elements in fields such as medicine and technology. It can also focus on emission processes such as Neutron Emission, Positron Emission, and Proton Emission. Henri Becquerel discovered radioactivity in 1896 and is thought to be the father of nuclear physics; his experiment was the investigation of phosphorescence in uranium salts. A year later J.J. Thomson discovered the electron, the second major discovery in nuclear physics after Becquerel's. Thomson's and Becquerel's discoveries led to more major scientific discoveries in their time, including the discovery of alpha, beta, and gamma ray radiation. Nuclear Physics is also known as the study of high energy processes and of the nucleosynthesis that takes place between elements. The two most important forms of nucleosynthesis are Stellar and Supernova Nucleosynthesis.

**Category:** Nuclear and Atomic Physics

[44] **viXra:1301.0050 [pdf]**
*replaced on 2013-06-28 05:06:10*

**Authors:** Leo Vuyk

**Comments:** 19 Pages.

According to Quantum FFF Theory, the FORM and MICROSTRUCTURE of elementary particles, is supposed to be the origin of FUNCTIONAL differences between Higgs- Graviton- Photon- and Fermion particles.
As a result, a NEW splitting, accelerating and pairing MASSLESS dual Black Hole system seems to be able to convert vacuum energy (ZPE) efficiently into real electric energy by entropy decrease (nuclear electric potential) and is responsible for all dark matter in the universe.
The electric energy production of Stellar Anchor Black Holes (SABHs) and Galaxy Anchor Black Holes (GABHs) seems to be able to explain quick star and galaxy formation, also in the early universe, as well as Birkeland Currents, Alfven Circuits and Herbig-Haro Objects. If galaxies merge, then in contrast the Galaxy Anchor Black Holes do not merge, but remain apart as neighbouring GABHs outside both sides of the equatorial disc. The Higgs vacuum oscillation energy in between such neighbouring GABHs will decrease and become polarized and charged by Birkeland-like currents. As a consequence, intergalactic gas and even stars will become concentrated in the middle of these GABHs and are supposed to form a Dwarf Galaxy.
Examples for Galaxy Anchor Black Hole effects for H-alpha streamers and electron jets are observed in the centre of Galaxy cluster and around merger galaxies.

**Category:** Astrophysics

[43] **viXra:1301.0049 [pdf]**
*submitted on 2013-01-10 09:26:05*

**Authors:** W. B. Vasantha Kandasamy, Florentin Smarandache

**Comments:** 152 Pages.

In this book, the authors introduce the notion of quasi set topological vector subspaces. The advantage of such study is that given a vector subspace we can have only one topological space associated with the collection of all subspaces. However, we can have several quasi set topological vector subspaces of a given vector space. Further, we have defined topological spaces for set vector spaces, semigroup vector spaces and group vector spaces.

**Category:** Algebra

[42] **viXra:1301.0048 [pdf]**
*submitted on 2013-01-09 17:14:20*

**Authors:** Andrew Nassif

**Comments:** 2 Pages.

Stoichiometry is one of the most important branches of chemistry; it deals with the relative qualities and quantities of reactants and products in chemical reactions, and is based on the Law of Conservation of Mass. Stoichiometry is broken into the subjects of Reaction Stoichiometry and Composition Stoichiometry. Reaction Stoichiometry describes the relationships between substances during a chemical reaction, while Composition Stoichiometry describes the quantitative mass relationships among elements and their compounds. There is also Gas Stoichiometry, a part of Reaction Stoichiometry, which deals with reactions involving gases, such as steam or burning magnesium. The term Stoichiometry itself derives from the Greek words "stoicheion metron", meaning element measure or measurement of an element. Stoichiometry relies on the scientific laws of chemistry, and its main use is to balance the equations of chemical reactions.
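The balancing task mentioned above can be treated as linear algebra: each element contributes one conservation equation, and the balanced coefficients form a null vector of the element-count matrix. A small sketch for methane combustion (an example chosen here for illustration, not drawn from the paper):

```python
import numpy as np

# Balance CH4 + O2 -> CO2 + H2O.
# Columns: CH4, O2, CO2, H2O (reactants +, products -); rows: C, H, O.
A = np.array([
    [1, 0, -1,  0],   # carbon balance
    [4, 0,  0, -2],   # hydrogen balance
    [0, 2, -2, -1],   # oxygen balance
], dtype=float)

# the one-dimensional null space of A holds the balanced coefficients
_, _, vt = np.linalg.svd(A)
null_vec = vt[-1]
coeffs = np.round(null_vec / null_vec[0]).astype(int)
print(coeffs.tolist())  # [1, 2, 1, 2]: CH4 + 2 O2 -> CO2 + 2 H2O
```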

**Category:** General Science and Philosophy

[41] **viXra:1301.0047 [pdf]**
*submitted on 2013-01-09 19:17:34*

**Authors:** U.V.S. Seshavatharam, S. Lakshminarayana, B.V.S.T. Sai

**Comments:** 10 Pages.

In this paper an attempt is made to understand the basic unified concepts of gravity, electromagnetism, nuclear charge radius, cosmic geometry, cosmic mass density, cosmic thermal energy density and cosmic redshift. The four key assumptions are: 1) Planck's constant increases with cosmic time. 2) Being a primordial evolving black hole with angular velocity H_t, the universe is always rotating at light speed. 3) The atomic gravitational constant is the squared Avogadro number times the classical gravitational constant. 4) The Avogadro number is discrete, and hence the atomic gravitational constant is discrete; this may be the root cause of the discrete nature of the revolving electron's potential energy. Finally it is suggested that current cosmological changes may be reflected in any existing atom.

**Category:** Relativity and Cosmology

[40] **viXra:1301.0046 [pdf]**
*submitted on 2013-01-09 10:16:25*

**Authors:** Bo He, Jin He

**Comments:** 6 Pages. 3 Figures, 1 Table, http://en.wikipedia.org/wiki/Laurent_Nottale

Friends: Whenever feeling happy, sad, or in danger, remember two things. First, scientific experiments continue to show that human experience is derived from the creation of natural structure. For example, love, disease, and mental worries have their origin in the creation of microscopic structure. Second, the products of human creation are based on the natural one. That is, the products are a second creation on top of the natural one. For example, the toys which children are fond of are a creation based on Earth's resources and deposits, while the meaning the toys transfer originates from human feelings for the motion of natural structure. However, scientists do not know what the origin of the natural creation is. Laurent Nottale is the first person in human history to give a fundamental explanation of the natural creation of the Solar system, based on his theory of Scale Relativity. This paper is a study of the natural creation of galaxies. For a planar distribution of matter, Jin He and Bo He defined Darwin curves on the plane such that the ratio of the matter densities at the two sides of the curve is constant along the curve. Therefore, the arms of ordinary spiral galaxies are Darwin curves. Now an important question facing humans is: Are the arms of barred spiral galaxies Darwin curves too? Fortunately, Dr. Jin He made a piece of Galaxy Anatomy Graphic Software (www.galaxyanatomy.com). With the software, people can not only simulate the stellar density distribution of barred spiral galaxies but also draw the Darwin curves of the simulated galaxy structure. Therefore, if Dr. Jin He's idea is true then people all over the world will witness the evidence that the arms of barred spiral galaxies are identical to the corresponding Darwin curves. This paper shows partial evidence that the arms of galaxy NGC 5921 follow Darwin curves.

**Category:** Astrophysics

[39] **viXra:1301.0045 [pdf]**
*submitted on 2013-01-09 08:19:24*

**Authors:** David Brown

**Comments:** 6 Pages.

Why does time exist? Why does space exist? Why does energy exist? Consider the following conjectures (A), (B), (C):
CONJECTURE (A): Time exists because 2^46 divides the order of the monster group.
CONJECTURE (B): Space exists because 3^20 divides the order of the monster group.
CONJECTURE (C): Energy exists because the monster group and the six pariah groups allow D-brane gravitation and D-brane charge-based force to provide symmetries for a stable, oscillating multiverse that runs on a synchronized big-bang cycle of 81.6 billion years (± 1.7 billion years).
Are the 3 preceding conjectures complete nonsense? If nature is infinite, then the 3 conjectures are wrong. However, my guess is that the following conjectures (D), (E), (F) are valid:
CONJECTURE (D): Each superstring has 24 D-brane charges in a higher-dimensional superfluid with 3 energy-density levels vibrating with respect to 3 distinct copies of the Leech lattice.
CONJECTURE (E): AdS = CFT has a physical interpretation consisting of a 72-ball that undergoes vibrations and oscillations with respect to a nonmeasurable superstring time.
CONJECTURE (F): Nature is finite and digital. (This hypothesis is due to Konrad Zuse and Edward Fredkin.)
Is there a decisive test for conjectures (D), (E), (F)? I claim that the alleged Fernández-Rañada-Milgrom effect is such a test. This hypothetical effect states that the -1/2 in the standard form of Einstein’s field equations should be replaced by -1/2 + dark-matter-compensation-constant, where this constant is approximately sqrt((60±10)/4) * 10^-5.
Motl has stated, “One may construct infinitely many observables. For each observable, we need a different device to measure it. Quantum mechanics is able to predict the probabilities that we get any result for any variable but these calculations cannot be reduced to any algorithm respecting a classical framework simply because Nature isn’t classical, stupid.” ‘t Hooft replied in part, “The usual objections against my CA theories are based on Bell’s inequalities: these objections are erroneous because of what they say on their page one, line one: their assumptions.”
Is there an empirical test of Motl’s view versus ‘t Hooft’s view? I say that the alleged Fernández-Rañada-Milgrom effect is the decisive test. If nature is infinite, then there are in principle infinitely many quantum observables and, therefore, a cellular automaton simulation of quantum reality is likely to be either wrong or irrelevant. If nature is finite and digital, then the equivalence principle is likely to fail and, also, supersymmetry might be merely an approximation. I suggest that if string theory is empirically valid then the following idea is valid: Nature is finite if and only if supersymmetry is merely an approximation with weirdness that can only result from the failure of supersymmetry due to CA phenomenon. Am I wrong here? Perhaps so, but I suggest that the majority of string theorists are wrong about ‘t Hooft’s CA work. Have string theorists underestimated Milgrom and ‘t Hooft?

**Category:** Quantum Gravity and String Theory

[38] **viXra:1301.0044 [pdf]**
*submitted on 2013-01-08 13:14:05*

**Authors:** A.Garcés Doz

**Comments:** 4 Pages.

In this paper, we present several equations that generate the ratio of the sum of the roots of the masses of the quarks to the mass of the electron, and likewise the ratio of the sum of the masses of the quarks to the mass of the electron. Both equations depend exclusively, and in a very simple and logical way, on dimensionless quantum lengths derived from the fine structure constant at zero momentum; on lengths in seven dimensions; on the number of quarks; and on the three color charges of the group SU(3).

**Category:** Quantum Physics

[37] **viXra:1301.0043 [pdf]**
*submitted on 2013-01-08 09:21:47*

**Authors:** Alexander Bolonkin

**Comments:** 17 Pages.

At the present time, rocket launch systems, passenger air transport and ground passenger systems have reached their peak of development. In the last 30 years there has been no increase in speed or reduction in the cost of trips and space launches. The space launch, air and ground transportation industries need revolutionary ideas which allow a jump in speed and delivery capability, and a dramatic drop in space launch and trip prices. This idea (kinetic aviation and space launch) was offered and developed in a series of the author's researches [1]-[7], but an important facet of this method - the ground electric hypersonic engine - was insufficiently developed. The rail-gun idea was unfit because of its low acceleration and long rails: all the energy spent in creating a powerful magnetic field produces a strong flash when the apparatus is disconnected from the rails, and when the rail length is increased, the efficiency of a low-speed railgun engine approaches zero.
The main idea of the offered ground hypersonic electric engine is segmentation of the acceleration track into small special closed-loop sections (12.5 - 100 m) and a system of special switches which allow the magnetic energy to be returned to the system by transferring it to the apparatus's motion. This increases the efficiency of the hypersonic engine up to 0.9, avoids burning the rails, and allows the engine to be used for long periods of time. The same idea may be used in a conventional rail gun.
The author designed and computed the feasibility and practicability of this invention, which is intended for use as a space launcher for astronauts and space loads, as a method for hypersonic long-distance aviation, and as a method for supersonic passenger ground rail transportation. The offered system will be significantly cheaper than the currently used MagLev (Magnetic Levitation) systems, because the vehicle employs conventional wings for levitation and the hypersonic engine is very simple. The offered system may also be used for the mass launch of projectiles in war.

**Category:** Classical Physics

[36] **viXra:1301.0041 [pdf]**
*submitted on 2013-01-08 10:46:36*

**Authors:** Andrew Nassif

**Comments:** 5 Pages.

Water fluoridation is the process of adding fluoride to public water supplies in order to reduce the possibility of tooth decay. Its use began in 1945, with a study of children and the effects of fluoride in their drinking water. The experiment was a success; however, the use of fluoride in water didn't increase dramatically until 1994, when a world health committee proposed adding 0.8 ml of fluoride per liter of water. The idea then went to Congress and passed. Today, over 400 million houses have fluoride in their water.

**Category:** Biochemistry

[35] **viXra:1301.0040 [pdf]**
*submitted on 2013-01-08 00:47:32*

**Authors:** Rodney Bartlett

**Comments:** 5 Pages.

Over 30 years of thinking, plus the insights and mistakes in my viXra articles, reveal the basic blueprint for making this universe. This article continues from where previous articles finished (throughout, I’ve provided links to prior contributions). I begin with an explanation of quantum particles, forces and spin in terms of the positioning of Mobius loops and the flow of the loops’ binary digits, accounting for the interference between gravitation and electromagnetism – together with a link supporting the idea of an electronics-based universe and addressing the topics of hidden variables, quantum fluctuation and virtual particles. The next link speaks of the inverse-square law and infinity. I give Dr. Carl Sagan credit where credit is due - and conclude that, being years ahead of his time, he saw a fundamental truth about the universe’s nature which he decided to include in his book “Contact”. Then comes time travel into the past (via matrices and the figure-8 Klein bottle), before putting it all together and indulging in some speculation about how to make this universe we’re living in.
I think it’s too simple to say “We don’t need to make the universe … it’s already here”. That statement relies on time being strictly linear (like a straight line, rectilinear). We know it isn’t, but is curvilinear and warped. It’s better to say the universe is here now because our future civilization did the following in the past –

**Category:** Mathematical Physics

[34] **viXra:1301.0039 [pdf]**
*submitted on 2013-01-07 12:08:49*

**Authors:** Andrew Nassif

**Comments:** 4 Pages.

An element must be heated to produce emitted light; this is due to a chemical reaction driven by absorbed energy, which is required in order to emit light. This is why elements must be heated in order to emit light. Sometimes absorbed energy can cause electromagnetic radiation; these spectra consist of different wavelengths of light corresponding to the different energy levels of a chemical reaction.

**Category:** Chemistry

[33] **viXra:1301.0037 [pdf]**
*submitted on 2013-01-07 10:25:14*

**Authors:** Wes Hansen

**Comments:** 25 Pages.

Extrospection agents are committed to a universal ontology which, at its most elemental, is mathematics. This ontology is highly efficient in that there is no room for interpretive disagreement; differential calculus is differential calculus, there is no community context. Introspection agents are also committed to a universal ontology which, at its most elemental, is myth and metaphor. This ontology, however, suffers from inefficiencies due to interpretive disagreements emerging as a result of community context. In addition, introspective perceptions are often dismissed by extrospection agents simply because they’re formulated in the mystical; they are fluid rather than philosophically rigid. I propose the development of a new universal ontology of introspection based on cybernetic theory and structured according to the bootstrap method: a cybernetic monomyth.

**Category:** Religion and Spiritualism

[32] **viXra:1301.0036 [pdf]**
*submitted on 2013-01-07 01:37:06*

**Authors:** Hamdy I. Abdel-Gawad, Nasser S. Elazab, Mohamed Osman

**Comments:** 6 Pages. IOSR Journals

Abstract: Recently, the unified method for finding traveling wave solutions of non-linear evolution equations was proposed by one of the authors. It was shown that this method unifies all the methods being used to find these solutions. In this paper, we extend this method to find a class of formal exact solutions to the Korteweg-de Vries (KdV) equation with space-dependent coefficients. A new class of multiple-soliton or wave trains is obtained.
Keywords: Exact solution, Extended unified method, Korteweg-de Vries equation, variable coefficients
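For orientation (background equations only, not the paper's new solutions), the constant-coefficient KdV equation and a variable-coefficient generalization of the kind the abstract refers to can be written as:

```latex
% Standard KdV and a space-dependent-coefficient generalization
u_t + 6\,u\,u_x + u_{xxx} = 0
\qquad\longrightarrow\qquad
u_t + a(x)\,u\,u_x + b(x)\,u_{xxx} = 0 .
```

For the constant-coefficient case the one-soliton solution is $u(x,t) = 2k^2\,\mathrm{sech}^2\!\big(k(x - 4k^2 t - x_0)\big)$; the extension described in the abstract seeks analogous multiple-soliton wave trains when the coefficients depend on $x$.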

**Category:** Functions and Analysis

[31] **viXra:1301.0034 [pdf]**
*submitted on 2013-01-06 14:42:03*

**Authors:** David Brown

**Comments:** 4 Pages.

In 1966 Greisen and the team of Zatsepin and Kuzmin independently computed a limit of 5 * 10^19 eV for the energy of extragalactic cosmic rays detected on planet Earth, provided that the cosmic rays travel over 50 megaparsecs from their place of origin. However, there have been reports of the detection of cosmic rays with energies above the GZK limit. What might explain the GZK paradox? Presumably, string theorists hope that the answer might be string theory, provided that the GZK paradox is a true phenomenon. According to Witten, the three most important predictions of string theory are gravity, gauge/gravity duality, and supersymmetry. Of the three preceding predictions, supersymmetry might be the best bet for explaining the GZK paradox. This communication argues that supersymmetry with the infinite nature hypothesis is unlikely to explain the GZK paradox, while supersymmetry with the finite nature hypothesis might explain the GZK paradox within the context of ‘t Hooft’s work on the cellular automaton interpretation of quantum theory. The Milgrom Denial Hypothesis states that the main problem with string theory is that string theorists fail to realize that Milgrom is the Kepler of contemporary cosmology. The Fernández-Rañada-Milgrom effect is that the -1/2 in the standard form of Einstein’s field equations should be replaced by -1/2 + dark-compensation-constant. An argument is presented for the hypothesis that ‘t Hooft is correct about the foundations of quantum theory if and only if the Fernández-Rañada-Milgrom effect is empirically valid. If this argument is valid, then there is a decisive test for ‘t Hooft’s ideas.

**Category:** Quantum Gravity and String Theory

[30] **viXra:1301.0033 [pdf]**
*submitted on 2013-01-06 17:09:15*

**Authors:** John A. Gowan

**Comments:** 8 Pages. part 2 of 3

The charges of matter are the symmetry debts of light (Noether's Theorem). These debts must be paid in full to satisfy energy, symmetry, and charge conservation (as through matter-antimatter charge annihilation or its equivalent). The function of local gauge symmetry, as effected by the field vectors of the four forces, is to ensure, protect, and maintain charge invariance (serving charge and symmetry conservation) and the invariance of the "Interval" and velocity c ("Lorentz Invariance" serving causality and energy conservation), during and after the transformation of light (free electromagnetic energy) to matter (bound electromagnetic energy), as in the "Big Bang" (or in any subsequent interconversion between bound and free energy forms). Conservation must be observed in the "local" realm of matter no less than in the "global" realm of light - and in their compound domain of historical spacetime.

**Category:** High Energy Particle Physics

[29] **viXra:1301.0032 [pdf]**
*replaced on 2013-07-24 09:13:03*

**Authors:** Jonathan Tooker

**Comments:** 4 Pages. 1 color figure, fixed sign in equation 27, misc edits, quote possibly misattributed to Einstein

The modified cosmological model (MCM) is explored in the context of general relativity. A flaw in the ADM positive-definiteness theorem is identified. We present an exposition of the relationship between Einstein's equations and the precessing classical oscillator. Kaluza theory is applied to the MCM and we find a logical motivation for the cylinder condition which leads to a simple mechanism for AdS/CFT.

**Category:** Relativity and Cosmology

[28] **viXra:1301.0031 [pdf]**
*replaced on 2013-03-06 20:13:47*

**Authors:** Dimiter Tsvetkov, Lyubomir Hristov, Ralitsa Angelova-Slavova

**Comments:** 14 Pages.

In this paper we consider Markov chains associated with the Metropolis-Hastings algorithm. We propose conditions under which the sequence of the successive densities of such a chain converges to the target density according to the total variation distance for any choice of the initial density. In particular, we prove that the positiveness of the target and the proposal densities is enough for the chain to converge.
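As background, the Metropolis-Hastings chain the paper analyzes can be sketched in pure Python (a generic random-walk sampler targeting a standard normal; both target and proposal densities here are positive everywhere, matching the paper's sufficient condition):

```python
import math
import random

def metropolis_hastings(log_target, steps, x0=0.0, step_size=1.0, seed=0):
    """Random-walk Metropolis-Hastings: with a symmetric Gaussian proposal,
    the Hastings ratio reduces to target(x') / target(x)."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, step_size)
        # Accept with probability min(1, p(x')/p(x)); computed in logs for stability.
        if rng.random() < math.exp(min(0.0, log_target(proposal) - log_target(x))):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal density up to a constant (log p(x) = -x^2/2).
log_std_normal = lambda x: -0.5 * x * x

samples = metropolis_hastings(log_std_normal, steps=50000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

After enough steps the empirical mean and variance approach 0 and 1, illustrating the total-variation convergence to the target density that the paper establishes conditions for.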

**Category:** Statistics

[27] **viXra:1301.0030 [pdf]**
*replaced on 2013-01-25 06:38:29*

**Authors:** Henok Tadesse

**Comments:** 6 Pages.

Galilean (and Einstein’s) invariance principle gives significance to the relative motion of reference frames (observers). It states that the laws of physics do not vary in reference frames that are in relative uniform-rectilinear motion and vary if the reference frames are accelerated. Therefore all inertial frames are in uniform-rectilinear motion relative to each other. This paper presents a new principle that gives significance to the relative motion of physical systems (e.g. the solar system) and not to reference frames. It shifts the focus from the motion of reference frames (observers) to the motion of the physical systems to be observed. It states that the laws of mechanics and gravity (in their simplest or complex forms) are the same in physical systems that are at rest relative to each other and vary if there is relative motion between the two systems. This implies the validity of absolute reference frames in which the laws of mechanics are in their simplest forms. The laws of mechanics (and gravity) are independent of the choice or relative motion of reference frames. Therefore, we can use any reference frame provided that we know its motion relative to the physical system or relative to an absolute reference, with the same result for all reference frames; however, we can use reference frames attached to the physical systems for convenience (for example, attached to the centre of the sun for the solar system). An observer should not attempt to apply the laws of mechanics before knowing his/her state of motion relative to an absolute reference or relative to the physical system to be observed. The relative motion of physical systems has a kinetic effect, whereas the relative motion of reference frames (observers) has only a kinematic effect. Inertial frames may be redefined as follows: all inertial frames are at rest relative to each other. Thus a frame which is in motion relative to an inertial (absolute) frame is not an inertial frame.
All real motion has a cause and hence we can differentiate between real and illusionary motions. Motion without a cause is not a real motion. This is obviously an opposing view to relativity.
A thought experiment with two identical solar systems and two observers is presented to show that the laws of mechanics and gravity are not the same in physical systems that are in relative motion.

**Category:** Relativity and Cosmology

[26] **viXra:1301.0029 [pdf]**
*submitted on 2013-01-05 19:49:08*

**Authors:** Sierra Rayne, Kaya Forest

**Comments:** 8 Pages.

The Sinosphere is in ascension towards global economic dominance on both total and per capita bases, marking a fundamental shift in the balance of global economic and military power that is taking place absent any robust structural democratic and human rights reforms in this region. In contrast to comparisons during the 1980s of Japan potentially overtaking the United States as the world's largest economy, both purchasing power parity (PPP) and current United States dollar GDP metrics consistently project that China's gross domestic product (GDP) will exceed that of the United States sometime between 2015 and 2020. The Sinosphere's GDP-PPP passed that of The Commonwealth (including India) in 2011, The Commonwealth (excluding India) in 2005, the Francosphere member states in 2003, the Francosphere member and observer states in 2009 - subsequently widening the gap in all cases - and is predicted to surpass that of the Anglosphere by the early 2020s. China's military spending now exceeds that of all other nations bordering the East and South China Seas combined and the gap is widening rapidly. At current rates of increase, China's military expenditures may surpass those of the United States within the next decade. On a per capita basis, China's GDP-PPP is expected to overtake that of the United States and Canada by the early to mid-2030s, whereas Russia and the EU are projected to be surpassed by China in per capita GDP-PPP by the late 2020s.

**Category:** Social Science

[25] **viXra:1301.0028 [pdf]**
*replaced on 2014-07-25 20:04:15*

**Authors:** John Shim

**Comments:** 5 Pages. This paper has been superseded by "On the Lorentz Transformations"

The Lorentz transformations are not true coordinate transformations as Einstein derives them. That is, they do not represent a one-to-one mapping of a single set of coordinate values in one coordinate system onto another set in an identical coordinate system moving at a constant velocity relative to the first. Rather, they represent a mapping of an average of two sets of coordinate values from the first coordinate system onto a single set of values in the second. If the measurement system used in an experiment is inconsistent with Einstein’s averaging method, then the Lorentz transformations will give incorrect results. A simple example is given of a photon-emitting clock moving at a constant velocity, v, in a straight line between two photon detectors. The time of travel of the clock between detectors measured by the front detector would be t = τ, and by the rear detector would be t = τ(1 - v/(c+v)). The Lorentz transformation gives a value of t = τ/√(1 - v^2/c^2) for both detectors. It is also noted that the Lorentz transformations give results inconsistent with the coordinates of photons in a light pulse of the form c^2t^2 - x^2 - y^2 - z^2 = 0, when measured in an inertial reference frame different from that of the source.
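The two expressions the abstract contrasts are easy to evaluate numerically (an illustrative check at v = 0.5c; this merely tabulates the abstract's formulas and takes no side on the argument):

```python
import math

def rear_detector_time(tau, beta):
    """The abstract's rear-detector value t = tau * (1 - v/(c + v)), with beta = v/c."""
    return tau * (1.0 - beta / (1.0 + beta))

def lorentz_time(tau, beta):
    """The Lorentz-transformation value t = tau / sqrt(1 - v^2/c^2)."""
    return tau / math.sqrt(1.0 - beta * beta)

tau, beta = 1.0, 0.5
print(rear_detector_time(tau, beta))  # 2/3: the rear-detector expression
print(lorentz_time(tau, beta))        # ~1.1547: the Lorentz expression
```

At v = 0.5c the two expressions differ by nearly a factor of two, which is the discrepancy the abstract's example is built around.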

**Category:** Relativity and Cosmology

[24] **viXra:1301.0026 [pdf]**
*submitted on 2013-01-05 06:56:34*

**Authors:** Giorgio Fabretti

**Comments:** 11 Pages. "Chrono Logic" is a conceptual link of General Science and Philosophy, that some scholars required and would appreciate - in English - to "The Time and The All", printed, in Italian, on General Physics, on Vixra as well.

The Chrono Logic is the operational 'gravitational' abstract logical force that shapes time, conceived as an oriented dimensional field.
To help imagine such a concept, let us imagine an unpaved, muddy country road, lined with the footprints of animals, people and carts that have stepped into the muddy clay at different times.
We can read in the dried, materialized footprints the rhythmic cycles of walking people or rolling carts.
Investigative police could even find relevant evidence for a court by analyzing the dried footprints, just as a video camera records the robbers of the night before at the nearby Seven Eleven market.
A visionary philosopher of science, Giorgio Fabretti, even conceived the whole material universe as an immense footprint left by Logic in the Memory of Time, that is, in the moldable clay of Time, whose Memory consists in its capacity to dry up and keep witness of what happened.
But why Time would need to materialize in dried footprints?
The answer is "Logic".
In fact, Logic is a "Concept that wants to become Reality".
It is a biblical meaning, much more ancient than Greek classical philosophy. Having the creational definition of implementing a design, Logic tends at least to confirm itself by becoming dynamic, and so generating its time dimension.
In the Greek pagan Olympus, the leading gods had Chronos as a specialized god to serve and implement their divine designs on a mundane scale.
The most advanced experimental physicists, who invented the atomic bomb and the computerized network to defend themselves from its destruction, have not developed a conception of the universe radically different from that of their Biblical or Greek ancestors.
In contemporary natural sciences, the material reality is still a footprint of Logos on the Memory of Time.
Now that you have heard this concept, it might seem as obvious as the password of your credit card. But ask a thief to guess your password! How long would it take him to guess? You might be waiting as long as it would take your dog to type the Divine Comedy by chance on your computer.
It may have taken centuries before a philosopher and scientist, Giorgio Fabretti, could climb on the shoulders of many scientists and discoveries, and see the 'ethical theory' of Synchronic Materialism (including Chrono Logic), theoretically leading to such a password definition of Logic and Time: where Time (Operational Chrono) serves the material purposes (Memory) of an excited logical field (Logic) oriented (Ethics) toward (Dynamic Time) a pristine equilibrium (Design).
It is an elaborate cognitive, historical and anthropological equivalent of the 'restored symmetry in the Big Bang vision' of theoretical physics.
At the same time it is an elaboration of a computational vision of the material universe as a 'classroom blackboard', working as a flash memory for the teacher, who develops algebraic equations for his students until the final result.
After the lesson ends, we would have the creative mind of the teacher marking the blackboard with chalk and leaving a material storyboard of the cognitive stages of the equation - a sort of logical world map - that, through intrinsic logical energy, leads to the final result: whose design finally implements the pristine creative idea of the teacher, who then feels calm and quiet, his duty having been accomplished.
These are very simple visionary metaphors of how 'Reality's Clockwork' can be represented in the 3rd millennium.
At the same time those metaphors help a wide non-specialized audience to understand the Chrono Logic concepts. As Kuhn explained, it is 'a revolutionary cognitive jump', useful to stimulate new ways in empirical research.
The major achievement is 'the link between Logic and Time', before physical concepts like Energy, Matter, Mass, Space are involved.
Such a 'privileged position' of Logic and Time has been consolidated by the limits theoretical physics ran into in the 20th century. For instance, Einstein's 'spacetime' theory ended in a blind alley and could not realistically explain the dimensions and the timing of the universe, leaving the door open to all kinds of literary physics, even by well-seasoned academics like Stephen Hawking.
If a Sistine Chapel portrait of the universe has no better technique than painting, then it is better to call a complex artist like Michelangelo, while humankind waits for a Galileo Galilei to give a more coherent portrait.
If a new Copernican world map was not yet ready, a Christopher Columbus was welcome to tell the 'tale of the western way to the Indies', as long as he could get the 'discovery' of the Americas financed by the Spanish Queen.
History has not changed much since 1500. It is just a span of 500 years, in a world where many people - like Prof. Fabretti - have been living in an intact eight-hundred-year-old apartment in Viterbo, or wake up in Rome facing a two-thousand-year-old stadium called the Colosseum.
The nickname 'Coliseum' was first attributed to the Flavian Amphitheater by an erudite English monk of the 8th century, called the 'Venerable Bede', who also stated that "the world will last as long as the Coliseum stands".
Indeed he had to come to Rome to get a 'worldvision', because Rome was, and still is, a place "where Time happens", and the Colosseum still symbolizes today, to millions of tourists, 'The Temple of All Times'.
Not by chance Prof. Fabretti - who used, when he was a child, to play 'hide-and-seek' and football inside the Colosseum - wrote an essay entitled "The Chronosseum", to add his 'chronologic' nickname to that of the English monk Bede in the 8th century.
"Time is where space happens. And it is time that shapes space, in the history of the universe, whose movie in the 'present' is just a still frame": this is how Fabretti puts it in his 'Chrono Logic vision'.

**Category:** General Science and Philosophy

[23] **viXra:1301.0025 [pdf]**
*submitted on 2013-01-05 07:49:05*

**Authors:** A.H. Abdelrahman, M. D. Abdella, Mahgoub Salih

**Comments:** 6 Pages.

In this work the plasma hydrodynamical equations are exploited to explain the physical constraints under which amplification takes place. It is shown that lasing takes place in plasma on condition that the electron concentration is less than the equilibrium concentration. In addition, the amplification transpires when the internal field is stronger than the external applied field.

**Category:** Nuclear and Atomic Physics

[22] **viXra:1301.0024 [pdf]**
*submitted on 2013-01-05 10:07:02*

**Authors:** F. Ozgur Catak, M. Erdal Balaban

**Comments:** 13 Pages.

In conventional distributed machine learning methods, distributed support vector machine (SVM) algorithms are trained over pre-configured intranet/internet environments to find an optimal classifier. These methods are very complicated and costly for large datasets. Hence, we propose a method referred to as the Cloud SVM training mechanism (CloudSVM), set in a cloud computing environment with the MapReduce technique for distributed machine learning applications. Accordingly, (i) the SVM algorithm is trained in distributed cloud storage servers that work concurrently; (ii) all support vectors are merged in every trained cloud node; and (iii) these two steps are iterated until the SVM converges to the optimal classifier function. A single computer is incapable of training the SVM algorithm with large-scale data sets. The results of this study are important for the training of large-scale data sets for machine learning applications. We show that iterative training of the split data set in a cloud computing environment using SVM will converge to a global optimal classifier in a finite number of iterations.

**Category:** Artificial Intelligence

[21] **viXra:1301.0023 [pdf]**
*submitted on 2013-01-05 12:51:18*

**Authors:** Moishe Garfinkle

**Comments:** 16 Pages.

Cosmologists such as Sakharov, Alfvén, Klein, Weizsäcker, Gamow and Harrison all disregarded the distribution of baryons and antibaryons immediately prior to freeze-out in trying to elucidate the circumstances that explained hadron distribution in the early universe. They simply accepted a uniform distribution: each baryon paired with an antibaryon. Their acceptance of this assumption resulted in theoretical difficulties that could not be overcome. This essay discards this assumption of homogeneity or uniformity. Although this essay does deal with early-universe matters, it is not meant to indicate any involvement in energy distribution functions nor in any symmetry-asymmetry controversies. Cluster formation is strictly geometric. This essay has value with regard to the problems early cosmologists faced, and should also complete the historical record.

**Category:** Relativity and Cosmology

[20] **viXra:1301.0022 [pdf]**
*submitted on 2013-01-04 19:34:14*

**Authors:** Rodney Bartlett

**Comments:** 6 Pages.

This suggests Lavoisier was correct to include heat and light in his list of the known elements. They aren't matter but they contribute to the formation of matter, according to quantum mechanics and the rewriting of Einstein's famous equation as m=E/c^2 (with our understanding of space-time being increasingly dominated by the Theory of Everything, it's important not to limit investigations to the material world but to consider matter's relation to energy ... and to the 4 fundamental forces).
As we'll see, this more integrated way of viewing the universe leaves no room for the Standard Model's version of the Higgs field and boson. As well, it requires us to take another look at cosmology's Steady State theory and to reconsider 1) electroweak unification, and 2) quarks (Stephen Hawking and Leonard Mlodinow wrote on p.49 of their book "The Grand Design", "It is certainly possible that some alien beings ... would make the same experimental observations that we do, but describe them without quarks." So I’ll try to become a Little Green Man and describe quarks, and everything from quantum physics to the origin of life to cosmology, in a way that agrees with science’s observations but is also “alien”.)
The words “Supplementary Material” in the text refer to material which is in no way essential to this article but merely additional, non-required, reading. It’s my earlier viXra submission “How the Pioneer anomaly refines Einstein's gravitation / space-time; and how equations he developed in 1919 show that the space warping in General Relativity extends to subatomic particles …”

**Category:** General Science and Philosophy

[19] **viXra:1301.0021 [pdf]**
*submitted on 2013-01-05 01:53:52*

**Authors:** Pith Xie

**Comments:** 22 Pages.

The reference \cite{Ref1} denotes number systems with a logical calculus, but the form of the natural numbers is not consistent in these number systems. So we rewrite the number systems to correct the defect.

**Category:** Number Theory

[18] **viXra:1301.0020 [pdf]**
*submitted on 2013-01-04 09:45:59*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 2 Pages. 1 diagram, 1 reference

Since the fusion model for the Sun has been falsified, as has the Big Bang, an actual location for fusion reactions is hypothesized based on observations rather than mathematical fantasy rooted in vast arrays of ad hoc particle physics.

**Category:** Astrophysics

[17] **viXra:1301.0019 [pdf]**
*submitted on 2013-01-04 05:12:57*

**Authors:** Vladimir I. Rogozhin

**Comments:** 9 Pages. Essay was presented at the Contest FQXi Essay 2012

An essential analysis of the changing ideas of Space and Time over the period since the beginning of "Archimedes' Second Revolution" is carried out to overcome the ontological groundlessness of Knowledge and to expand its borders. A synthetic model of Triune (absolute) 12-dimensional Space-Time is built on the basis of the Ontological construction method, the Superaxiom and the Superprinciple: the Absolute Generating Structure («Structure-mother»). The nature of time is determined as a memory of material structure at a certain level of its holistic being.

**Category:** History and Philosophy of Physics

[16] **viXra:1301.0017 [pdf]**
*submitted on 2013-01-03 17:41:06*

**Authors:** Nikzad Babaii Rizvandi

**Comments:** 47 Pages.

This is a presentation of my PhD thesis on using statistical machine learning techniques to model and provision the performance of MapReduce, and on energy-efficient slack reclamation in distributed computing systems.

**Category:** Artificial Intelligence

[15] **viXra:1301.0016 [pdf]**
*submitted on 2013-01-03 17:52:51*

**Authors:** Nikzad Babaii Rizvandi

**Comments:** 1 Page.

After an overview of the forward/inverse Prestack Kirchhoff Time Migration (PKTM) algorithm, we explain our proposed approach to fitting this algorithm to Google's MapReduce framework. Toward the end, we analyse the relation between MapReduce-based PKTM completion time and the number of map/reduce tasks in pseudo-distributed MapReduce mode.

**Category:** Artificial Intelligence

[14] **viXra:1301.0015 [pdf]**
*submitted on 2013-01-03 14:27:44*

**Authors:** A. Garcés Doz

**Comments:** 3 Pages.

In this paper we present very simple formulas that generate the quark masses as direct functions of the sine and cosine of the Cabibbo angle.
The accuracy of the results is very high relative to the latest experimental values.

**Category:** Quantum Physics

[13] **viXra:1301.0014 [pdf]**
*submitted on 2013-01-03 14:50:02*

**Authors:** David Brown

**Comments:** 7 Pages.

Are the explanations for dark energy and dark matter closely related and highly dependent upon Milgrom’s acceleration law for gravitational accelerations that have a small magnitude? Is superstring determinism the key to understanding Milgrom’s acceleration law? According to ‘t Hooft in his article “Dimensional Reduction in Quantum Gravity”, “The requirement that physical phenomena associated with gravitational collapse should be duly reconciled with the postulates of quantum mechanics implies that at a Planckian scale our world is not 3+1 dimensional. Rather, the observable degrees of freedom can best be described as if they were Boolean observables defined on a two-dimensional lattice, evolving over time.” What might ‘t Hooft’s hypothesis imply in terms of superstring theory? Does the superstring multiverse have two main components: (1) an interior consisting entirely of virtual energy and (2) a boundary consisting of alternate universes that contain real energy and also virtual energy shared with the interior? In this communication, the idea of superstring snapping is considered as an explanation for dark energy (i.e. the space roar profile prediction) and as an explanation for dark matter (i.e. the Fernández-Rañada-Milgrom effect). In addition, the concept of a “combined sfermion” that has spin 1/2 and travels at the speed of light is suggested as the explanation for the GZK paradox. One might say that there are four main ideas presented here:
(1) Milgrom denial hypothesis: The main problem with string theory is that string theorists fail to realize that Milgrom is the Kepler of contemporary cosmology.
(2) Nature is finite and digital with superstring determinism running on a cycle of 81.6 billion years (±1.7 billion years). The cycle runs by transferring gravitational energy from the boundary to the interior of the multiverse in the expansion phase of the synchronized big bangs. During the synchronized big stops to the big bangs, all the gravitational energy lost by the boundary is regained. During the synchronized cosmological inflation of the big bangs, all of the cosmological inflation occurs during one Planck time interval.
(3) Dark energy is direct evidence for superstring snapping, i.e., D-brane noise accompanying transfer of gravitational energy from the boundary to the interior of the multiverse. Because superstrings are under enormous tension, they sometimes snap and cool off, thus creating an excess of quantum vacuum and a deficiency of gravitational attraction in each particular universe.
(4) Dark matter is indirect evidence for superstring snapping, i.e., D-brane reinforcement of the gravitational signal in the form of excess gravitational redshift. The dark energy of all the alternate universes causes dark matter to be observed in each particular universe.

**Category:** Quantum Gravity and String Theory

[12] **viXra:1301.0013 [pdf]**
*submitted on 2013-01-03 15:05:36*

**Authors:** Andrew Nassif, Thomas Zolotor

**Comments:** 5 Pages. Typical classifications, group theory, and physics theories in this paper are provided by Andrew Nassif. FHB classifications, and candals surveys are provided by Thomas Zolotor

FHB galaxies are faint Hubble blob galaxies, so named because of their unusually large distance from the Milky Way and from the Hubble telescope's reach of view.

**Category:** Relativity and Cosmology

[11] **viXra:1301.0012 [pdf]**
*submitted on 2013-01-05 04:45:26*

**Authors:** Amine Benachour

**Comments:** 3 Pages.

Based on the Euclidean concepts of distance and velocity, we propose a thought experiment which shows that if the clocks carried by two observers in uniform linear motion don't indicate the same time, their relative velocities will necessarily be different.

**Category:** Relativity and Cosmology

[10] **viXra:1301.0010 [pdf]**
*submitted on 2013-01-02 18:58:54*

**Authors:** Andrew Nassif

**Comments:** 2 Pages.

For many years there has existed a problem called P vs NP. The question is to find the number of factorial possibilities of its orders. An example of this is finding the possibilities, and comparing the improbabilities, of picking 100 students out of 400 students. According to Lardner's theorem, the number of known atoms in the universe is less than the number of combinations of possible orders and combinations in the answer to the P vs NP problems. Finding the equation for the number of different orders a group of 400 people can be put into, and subtracting the 300 different people that couldn't get picked, is equal to ((400!)-(100!*3)). My project represents this data through algorithms and different diagrams. When looking at my project you will see how I found a solution and the importance of it. My project will include all the required schematics and graphs that coordinate with this answer. It will also include data showing different possibilities between P and NP, as well as the combination where P can equal NP and N equals 1, or the possibilities where P doesn't equal NP and N isn't 1. P and NP are believed to stand for the number of possibilities and impossibilities.
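As a point of reference for the student-selection example above, the standard count of ways to choose 100 students from 400, ignoring order, is the binomial coefficient C(400, 100) = 400!/(100!·300!), which Python evaluates exactly with integer arithmetic:

```python
import math

# Binomial coefficient: ways to choose 100 of 400 students, order ignored
n_choose_k = math.comb(400, 100)

# The same value written in factorial form: 400! / (100! * 300!)
factorial_form = math.factorial(400) // (math.factorial(100) * math.factorial(300))

assert n_choose_k == factorial_form
print(f"C(400, 100) has {len(str(n_choose_k))} digits")  # a 97-digit number
```

Even this single coefficient has 97 digits, which illustrates why exhaustive enumeration of orderings is intractable; this sketch states the standard combinatorial count only and does not reproduce the author's own expression.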

**Category:** Functions and Analysis

[9] **viXra:1301.0009 [pdf]**
*replaced on 2014-01-11 21:42:04*

**Authors:** Kenneth Dalton

**Comments:** 9 Pages. Journal Ref: Hadronic J. 36(5), 555-563 (2013)

The gravitational energy, momentum, and stress are calculated for the Robertson-Walker metric. The principle of energy conservation is applied in conjunction with the Friedmann equations. Together, they show that the cosmological constant is non-zero, the curvature index k = 0, and the acceleration is positive. It is shown that the gravitational field accounts for two-thirds of the energy in the Universe.
Keywords: dark energy = gravitational energy

**Category:** Relativity and Cosmology

[8] **viXra:1301.0008 [pdf]**
*submitted on 2013-01-03 08:01:07*

**Authors:** Marcelo Carvalho; Alexandre Lyra de Oliveira

**Comments:** 31 Pages.

We present two models combining some aspects of Galilei relativity and Special Relativity that lead to a unification of both relativities. This unification is founded on a reinterpretation of the absolute time of Galilei relativity, which is considered as a quantity in its own right and not as a mere reinterpretation of the time of Special Relativity in the limit of low velocity. In the first model, Galilei relativity plays a prominent role in the sense that the basic kinematical laws of Special Relativity, e.g. the Lorentz transformation and the velocity law, follow from the corresponding Galilei transformations for position and velocity. This first model also provides a new way of conceiving the nature of relativistic spacetime, where the Lorentz transformation is induced by the Galilei transformation through an embedding of 3-dimensional Euclidean space into hyperplanes of 4-dimensional Euclidean space. This idea provides the starting point for the development of a second model that leads to a generalization of the Lorentz transformation, which includes, as particular cases, the standard Lorentz transformation and transformations that apply to the case of superluminal frames.

**Category:** Relativity and Cosmology

[7] **viXra:1301.0007 [pdf]**
*submitted on 2013-01-02 10:30:20*

**Authors:** Jeffrey Joseph Wolynski

**Comments:** 1 Page. 1 illustration

An actual understanding of humans is hypothesized: that they are mostly psychopathic and delusional.

**Category:** Mind Science

[6] **viXra:1301.0006 [pdf]**
*submitted on 2013-01-02 03:16:23*

**Authors:** Anatoly V. Belyakov

**Comments:** 16 pages, including 9 figures and 2 tables. Accepted for publication in "Progress in Physics"

The proposed model is based on J. Wheeler's geometrodynamic concept, in which the space continuum is considered as a topologically non-unitary coherent surface admitting the existence of transitions of the input-output kind between distant regions of the space in an additional dimension. The existence of closed structures (macrocontours) formed at the expense of the interbalance of gravitational, electric, magnetic and inertial forces has been substantiated. It is such macrocontours that have been demonstrated to form — independently of their material basis — the essential structure of stellar objects (SO) and to determine the position of these objects on the Hertzsprung-Russell diagram. Models of the characteristic types of stellar objects, stars and the compact bodies emerging at the end of stellar evolution, have been presented, and their standard parameters at different stages of evolution have been calculated. The existence of the Hertzsprung-Russell diagram has been substantiated, and its computational analogue has been given. Parallels between stellar and microcosmic objects are drawn.

**Category:** Astrophysics

[5] **viXra:1301.0005 [pdf]**
*submitted on 2013-01-01 14:14:21*

**Authors:** Bo He, Jin He

**Comments:** 6 Pages. 3 Figures, 1 Table

It may be true that mankind's hope is the identification of the living meaning of natural structures. However, scientists, including physicists, chemists, and biologists, have not found any evidence of that meaning. In the natural world there exists one kind of structure which is beyond the scope of laboratory experiment: the structure of galaxies. Spiral galaxies are flat and disk-shaped. There are two types of spiral galaxies: those with a bar-shaped pattern are called barred spirals, and those without the pattern are called ordinary spirals. Longer-wavelength galaxy images (infrared, for example) show that ordinary spiral galaxies are basically an axi-symmetric disk, called an exponential disk. For a planar distribution of matter, Jin He and Bo He defined Darwin curves on the plane as curves along which the ratio of the matter densities at the two sides of the curve is constant. Therefore, the arms of ordinary spiral galaxies are Darwin curves. An important question now facing humans is: are the arms of barred spiral galaxies Darwin curves too? Fortunately, Dr. Jin He made a piece of Galaxy Anatomy Graphic Software (www.galaxyanatomy.com). With the software, people can not only simulate the stellar density distribution of barred spiral galaxies but also draw the Darwin curves of the simulated galaxy structure. Therefore, if Dr. Jin He's idea is true, then people all over the world will witness the evidence that the arms of barred spiral galaxies are identical to the corresponding Darwin curves. This paper shows partial evidence that the arms of galaxy NGC 4548 follow Darwin curves. Note: Dr. Philip Gibbs is the founder of viXra.org. Jin He has been jobless and denied any possibility of a postdoc position since 2005. Jin He has been rejected by arXiv.org and even by PhysicsForums.com since 2006 and 2007 respectively. Philip Gibbs's eprint archive is the only channel by which Jin He can connect to the human world.

**Category:** Astrophysics

[4] **viXra:1301.0004 [pdf]**
*submitted on 2013-01-01 06:46:12*

**Authors:** Golden Gadzirayi Nyambuya, W. Simango

**Comments:** 30 Pages.

The paramount British-led May 29, 1919 solar eclipse result of Eddington et al. has had a tremendous, if not arcane, effect in persuading scientists, philosophers and the general public to accept Einstein's esoteric General Theory of Relativity (GTR), thereby "deserting" Newtonian gravitation altogether, especially in physical domains of extreme gravitation where Einstein's GTR is thought or believed to reign supreme. The all-crucial factor "2" predicted by Einstein's GTR has been "verified" by subsequent measurements, most impressively by the precision modern technology of VLBA measurements using cosmological radio waves, to within 99.998% accuracy. Working from within the most well-accepted provinces, confines and domains of Newtonian gravitational theory, we demonstrate herein that if the gravitational to inertial mass ratio of photons is retained in a Newtonian gravitational theory where the identities of the inertial and gravitational mass are preserved, the resulting theory is very much compatible with all measurements made of the gravitational bending of light. Actually, this approach posits that these measurements of the gravitational bending of light not only confirm the gravitational bending of electromagnetic waves but, on a much subtler level and rather clandestinely, are in actual fact a measurement of the gravitational to inertial mass ratio of photons. According to the present thesis, the significant 20% scatter seen in the measurements where white starlight is used implies that the gravitational to inertial ratio of photons may very well be a variable quantity, such that for radio waves this quantity must, to within 99.998% accuracy, be unity. We strongly believe that the findings of the present reading demonstrate, or hint at, a much deeper reality: that the gravitational and inertial mass may, after all, not be equal as we have come to strongly believe.
With great prudence, it is safe to say that this rather disturbing (perhaps exciting) conclusion, if correct, may direct us to closely re-examine the validity of Einstein's central tenet, the embellished Equivalence Principle (EP), which stands as the strongest and most complete embodiment of the foundational basis of Einstein's beautiful and celebrated GTR.

**Category:** Astrophysics

[3] **viXra:1301.0003 [pdf]**
*submitted on 2013-01-01 08:17:11*

**Authors:** M. Pitkanen

**Comments:** 13 Pages.

The recent progress in the understanding of preferred extremals of Kaehler action leads to the conclusion that they satisfy Einstein-Maxwell equations with a cosmological term, with Newton's constant and the cosmological constant predicted to have a spectrum. One particular implication is that preferred extremals have a constant value of the Ricci scalar. The implications of this are expected to be very powerful, since it is known that D>2-dimensional manifolds allow a constant curvature metric, with volume and other geometric invariants serving as topological invariants. Also, the possibly discrete generalization of the Ricci flow, which plays a key role in manifold topology, to a Maxwell flow is very natural, and the connections with the geometric description of dissipation, self-organization, and transition to chaos, and also with coupling constant evolution, are highly suggestive. A further fascinating possibility inspired by quantum classical correspondence is quantum ergodicity (QE): the statistical geometric properties of preferred extremals code for various correlation functions of zero energy states defined as their superpositions, so that any preferred extremal in the superposition would serve as a representative of the zero energy state. QE would make it possible to deduce correlation functions and the S-matrix from the properties of a single preferred extremal.

**Category:** Quantum Gravity and String Theory

[2] **viXra:1301.0002 [pdf]**
*submitted on 2013-01-01 08:19:54*

**Authors:** M. Pitkanen

**Comments:** 13 Pages.

The existence of Higgs and its identification have been a continual source of headache in the TGD framework. The vision that looks most plausible at this moment is rather conservative in the sense that it assumes that the standard description of massivation using Higgs in the QFT framework is the only possible one: if TGD has a QFT limit, then Higgs provides a phenomenological parametrization of particle masses, providing a mimicry of the microscopic description relying on p-adic thermodynamics. The anomalies related to Higgs are however still there. A new explanatory piece in the puzzle is M_{89} hadron physics. The gamma ray background from the decays of M_{89} pions could explain the anomalous decay rate to gamma pairs and the problems related to the determination of the Higgs mass. It could also explain the production of highly correlated charged particle pairs, observed first at RHIC for colliding heavy ions and two years ago at LHC for proton heavy-ion collisions, as decay products of string-like objects of M_{89} hadron physics; the observations of the Fermi satellite; and maybe even the latest Christmas rumour suggesting the existence of charge 2 states decaying to lepton pairs, by identifying them as leptomesons formed from two color octet muons and produced via intermediate parallel gluon pairs in the decay of M_{89} mesonic strings to ordinary hadrons and leptons.

**Category:** High Energy Particle Physics

[1] **viXra:1301.0001 [pdf]**
*submitted on 2013-01-01 08:33:11*

**Authors:** M. Pitkanen

**Comments:** 7 Pages.

Whether right-handed neutrinos generate a supersymmetry in TGD has been a long-standing open question. N=1 SUSY is certainly excluded by fermion number conservation, but already N=2, defining a "complexification" of N=1 SUSY, is possible and could generate the right-handed neutrino and its antiparticle. These states should however possess a non-vanishing light-like momentum, since the fully covariantly constant right-handed neutrino generates zero norm states. So-called massless extremals (MEs) allow massless solutions of the modified Dirac equation for the right-handed neutrino in the interior of the space-time surface, and this seems to be the case quite generally in Minkowskian signature for preferred extremals. This suggests that a particle represented as a magnetic flux tube structure with two wormhole contacts sliced between two MEs could serve as a starting point in attempts to understand the role of right-handed neutrinos and how N=2 or N=4 SYM emerges at the level of space-time geometry. The following arguments, inspired by the article of Nima Arkani-Hamed et al. about twistorial scattering amplitudes, suggest a more detailed physical interpretation of the possible SUSY associated with right-handed neutrinos. The fact that right-handed neutrinos have only gravitational interactions suggests a radical re-interpretation of SUSY: no SUSY breaking is needed, since it is very difficult to distinguish between mass-degenerate spartners of ordinary particles. In order to distinguish between different spartners one must be able to compare the gravitomagnetic energies of spartners in a slowly varying external gravitomagnetic field: this effect is extremely small.

**Category:** Quantum Gravity and String Theory