All Submission Categories

1111 Submissions

[109] viXra:1111.0118 [pdf] submitted on 2011-11-30 16:08:52

More on Tachyon decay

Authors: Paul Karl Hoiland
Comments: 4 Pages.

A closer look at tachyon-decay-engineered vacuum state changes via reheating and inflation.
Category: Quantum Physics

[108] viXra:1111.0117 [pdf] replaced on 2013-05-28 13:23:22

Superluminal Consequences of Quantum Diffusion on Special Relativity

Authors: John L. Haller Jr.
Comments: 4 Pages.

With superluminal neutrinos being observed in 2011 and then refuted in 2012, we present an argument, based on Jensen's inequality, that they might still exist, only with a smaller excess speed than initially thought. Specifically, the quantity measured by OPERA and ICARUS is the average of the length of the displacement over time, which is greater than the length of the average velocity, the quantity that is determined by, and does not break, Lorentz invariance. We examine quantum diffusion to explain the physics behind the particle's variance, resulting in an excess average velocity of ℏ⁄(mc^2 (t+τ)), where t is the baseline and τ is the coherence time. We examine the experimental setup at OPERA to estimate the excess velocity and show it is within the error observed in the tighter re-run. We also show consistency with Fermilab 1979 and supernova 1987A. In conclusion, we comment on arguments refuting superluminal neutrinos and note a similar consequence of quantum mechanics: that conservation of energy can be violated, if only for a short time.
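As a rough numeric illustration of the quoted formula ℏ⁄(mc^2 (t+τ)), the fractional excess velocity can be evaluated for an assumed ~2 eV neutrino rest energy and a negligible coherence time; neither value is taken from the paper, and the baseline time is simply the OPERA flight distance of ~730 km expressed as a light travel time:

```python
# Illustrative evaluation of delta_v/c = hbar / (m c^2 (t + tau)).
# The neutrino mass scale and coherence time are assumptions for this sketch.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
EV   = 1.602176634e-19   # joules per electron volt

mc2 = 2.0 * EV           # assumed neutrino rest energy of ~2 eV (illustrative)
t   = 730.0e3 / 2.998e8  # OPERA baseline ~730 km as a light travel time, in s
tau = 0.0                # coherence time, assumed negligible here

excess = HBAR / (mc2 * (t + tau))
print(f"fractional excess velocity ~ {excess:.2e}")  # ~1.4e-13 with these inputs
```

With these illustrative inputs the excess is of order 10^-13, many orders of magnitude below the ~10^-6 sensitivity of the re-run, consistent with the abstract's claim that any residual effect would hide within the experimental error.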
Category: Quantum Physics

[107] viXra:1111.0116 [pdf] submitted on 2011-11-30 06:00:57

Special Relativity Ultimate Test

Authors: M. Hernando-Guevara
Comments: 28 Pages.

From the hypothesis of the validity of Newton's dynamics and of space as a privileged reference frame, the null results of Michelson-Morley type experiments are experimentally reinterpreted and invalidated as a proof of the non-existence of the luminous ether. Other recent experiments to test Special Relativity, such as Brillet-Hall's, Cedarholm-Townes', etc., are also epistemologically revisited, and an alternative explanation of their null results is given. Finally, using two-beam interferometry techniques, two conclusive experiments to test the validity of the Special Theory of Relativity under this new perspective are proposed.
Category: Relativity and Cosmology

[106] viXra:1111.0115 [pdf] replaced on 2012-01-30 07:27:22

Sheldrake's Morphic Fields and TGD View about Quantum Biology

Authors: Matti Pitkänen
Comments: 11 Pages.

This article is inspired by the study of two books by Rupert Sheldrake. What makes the study of Sheldrake's books so rewarding is that he starts from problems of the existing paradigm, analyzes them thoroughly, and proposes solutions in the framework provided by his vision. There is no need to accept Sheldrake's views; just reading his arguments teaches a lot about the fundamental ideas and dogmas underlying present-day biology and forces the reader to realize how little we really know, not only about biology but even about so-called established areas of physics such as condensed matter physics. These books are precious gems for anyone trying to build an overall view.

The idea that Nature would have habits just as we do is probably one of those aspects of Sheldrake's work that generates the most irritation in physicalists, who believe that Nature is governed by deterministic laws, with classical determinism replaced by quantum statistical determinism. Sheldrake is one of those very few scientists able to see the reality rather than only the model of reality. Morphic resonance would make it possible to establish the habits of Nature, and the past would determine the present to a high extent, but in an organic manner and in a totally different sense than in the world of the physicalist.

In this article I propose an interpretation of Sheldrake's vision based on zero energy ontology and the TGD based view about geometric time and experienced time, which forces one to accept the notions of a 4-dimensional brain and society. In this framework the problem is to understand why our sensory perception is 3-dimensional, whereas the standard problems related to memory disappear, since memory corresponds to 4-D aspects of perception and of conscious experience and memory storage is 4-dimensional. The vision about gene expression as something to some extent analogous to a democratic decision of a 4-D society looks rather natural in this framework and would explain some still poorly understood aspects of gene expression known from the days of Mendel. Therefore the term "the presence of the past" appearing in the title of one of Sheldrake's books has quite a concrete meaning in the TGD Universe.

Category: Physics of Biology

[105] viXra:1111.0114 [pdf] replaced on 2012-01-30 07:30:08

Quantum Model for Remote Replication

Authors: Matti Pitkänen, Peter Gariaev
Comments: 10 Pages.

A model for remote replication of DNA is proposed. The motivating experimental discoveries are phantom DNA, the evidence for remote gene activation by laser light scattered from a similar genome, and the recent findings of Montagnier's and Gariaev's groups suggesting remote DNA replication.

Phantom DNA is identified as dark nucleon sequences predicted by quantum TGD, with dark nucleons naturally defining the analogs of DNA, RNA, tRNA, and amino acids and a realization of the vertebrate genetic code. An essential element of the model is the notion of the magnetic body, defining a hierarchy of flux quanta realized as flux tubes connecting DNA nucleotides, contained inside flux tubes connecting DNA codons, and condensed at flux sheets connecting DNA strands. Dark photons with a large value of Planck constant, coming as an integer multiple of the ordinary Planck constant, propagate along the flux quanta connecting biomolecules: this realizes the idea of wave DNA. Biomolecules act as quantum antennas, and those with common antenna frequencies interact resonantly.

Biomolecules interacting strongly, in particular DNA nucleotides, would be characterized by the same frequency. An additional coding is needed to distinguish between nucleotides: in the model for DNA as a topological quantum computer, quarks (u,d) and their antiquarks coding for the nucleotides A, T, C, and G would take care of this. The proposed role of quarks in biophysics of course makes sense only if one accepts the new physics predicted by quantum TGD. DNA codons (nucleotide triplets) would be coded by different frequencies, which correspond to different values of Planck constant for photons with the same photon energy propagating along the corresponding flux tubes. This allows one to interpret the previously proposed TGD based realization of the so-called divisor code proposed by Khrennikov and Nilsson in terms of the quantum antenna mechanism.

In this framework the remote replication of DNA can be understood. DNA nucleotides interact resonantly with the DNA strand and attach to the ends of the flux tubes emerging from the DNA strand and organized on 2-D flux sheets. In Montagnier's experiment the interaction between test tubes A and B would be mediated by dark photons exchanged between DNA and dark nucleon sequences, which would amplify the dark photon beam; this in turn would induce remote replication. In the experiment of Gariaev, scattered laser light would help to achieve the same purpose. Dark nucleon sequences would be generated in Montagnier's experiment by the homeopathic treatment of test tube B.

Dark nucleon sequences could characterize the magnetic body of any polar molecule in water and give it a "name" written in terms of genetic codons, so that the genetic code would be much more general than usually thought. The dark nucleon sequence would be most naturally assigned with the hydrogen bonds between the molecule and the surrounding ordered water, being perhaps generated when this layer of ordered water melts as the molecule becomes biologically active. Water memory and the basic mechanism of homeopathy would be due to the "dropping" of the magnetic bodies of polar molecules as the water is treated homeopathically, and the dark nucleon sequences could define an independent life form evolving during the sequence of repeated dilutions and mechanical agitations, which take the role of environmental catastrophes as the driving force of evolution. The association of DNA, RNA, and amino-acid sequences with the corresponding dark nucleon sequences would be automatic, since they too are polar molecules surrounded by ordered water layers.

The transcription of the dark nucleon sequences associated with the polar invader molecule to ordinary DNA sequences, in turn coding for proteins attaching to the invader molecules by the quantum antenna mechanism, could define the basic mechanism for the functioning and evolution of the immune system.

Category: Physics of Biology

[104] viXra:1111.0112 [pdf] submitted on 2011-11-29 19:04:45

A Relook At Tachyons

Authors: Paul Karl Hoiland
Comments: 5 Pages.

A look at the mathematics and the modern theory behind tachyons shows that, in spite of common belief, the tachyon does have a place in our current best models.
Category: Quantum Physics

[103] viXra:1111.0111 [pdf] submitted on 2011-11-29 20:40:03

U(1) × SU(2) × SU(3) Quantum Gravity Successes

Authors: Nige Cook
Comments: 63 Pages.

See paper for technical abstract. The paper covers checked predictions for a theory which modifies the Standard Model's electroweak group representations to include quantum gravity, replacing the Higgs mechanism with checkable predictions. The model correctly predicted the cosmological acceleration in 1996. Full references, analysis, and feedback from peer-reviewed, string-theory-dominated journals are included.
Category: Quantum Gravity and String Theory

[102] viXra:1111.0110 [pdf] replaced on 2012-01-30 07:39:22

Oil Droplets as a Primitive Life Form?

Authors: Matti Pitkänen
Comments: 12 Pages.

The origin of life is one of the most fascinating problems of biology. The classic Miller-Urey experiment was carried out almost 60 years ago. In the experiment, sparks were shot through a primordial atmosphere consisting of methane, ammonia, hydrogen, and water, and the outcome was many of the amino acids essential for life. The findings raised optimism that the key to understanding the origins of life had been found. After the death of Miller in 2007, scientists re-examined sealed test tubes from the experiment using modern methods and found that well over 20 amino acids, more than the 20 occurring in life, were produced in the experiments.

The Urey-Miller experiments have also yielded another surprise: the black tar, consisting mostly of hydrogen cyanide polymer, produced in the experiments has turned out to be much more interesting than originally thought and suggests a direction where candidates for precursors of living cells might be found. In earlier experiments, nitrobenzene droplets doped with oleic anhydride exhibited some signatures of life. The droplets were capable of metabolism, using oleic anhydride as "fuel" that makes the droplet move. Droplets can move along chemical gradients, sense each other's presence and react to it, and have also demonstrated rudimentary memory. Droplets can even "solve" a maze having "food" at its other end.

The basic objection against the identification as a primitive life form is that the droplets have no genetic code and do not replicate. The model for dark nucleons however predicts that the states of the nucleon are in one-to-one correspondence with DNA, RNA, tRNA, and amino acid molecules and that the vertebrate genetic code is naturally realized. The question is whether the realization of the genetic code in terms of dark nuclear strings might provide the system with a genetic code and whether the replication could occur at the level of dark nucleon strings. In this article a model for oil droplets as a primitive life form is developed on the basis of the TGD inspired quantum model of biology. In particular, a proposal for how dark genes could couple to the chemistry of oil droplets is developed.

Category: Physics of Biology

[101] viXra:1111.0109 [pdf] replaced on 2012-01-30 07:40:52

Generalization of Thermodynamics Allowing Negentropic Entanglement and a Model for Conscious Information Processing

Authors: Matti Pitkänen
Comments: 8 Pages.

Costa de Beauregard considers a model for information processing by a computer based on an analogy with Carnot's heat engine. As such, Beauregard's model for the computer does not look convincing as a model for what happens in biological information processing.

Combined with the TGD based vision about living matter, the model however inspires a model for how conscious information is generated and how the second law of thermodynamics must be modified in the TGD framework. The basic formulas of thermodynamics remain as such, since the modification means only the replacement S → S - N, where S is the thermodynamical entropy and N the negentropy associated with negentropic entanglement. This allows one to circumvent the basic objections against the application of Beauregard's model to living systems. One can also understand why living matter is such an effective entropy producer compared to inanimate matter, and also the characteristic decomposition of living systems into highly negentropic and entropic parts as a consequence of the generalized second law.

Category: Physics of Biology

[100] viXra:1111.0108 [pdf] replaced on 2012-01-30 08:08:08

DNA Waves and Water

Authors: Matti Pitkänen
Comments: 11 Pages.

The group of HIV Nobelist L. Montagnier has published two articles challenging the standard views about the genetic code and providing strong support for the notion of water memory. Already the results of the first article suggested implicitly the existence of a new kind of nano-scale representation of the genetic code, and the recent article makes this claim explicitly. The TGD based model for the findings was based on the notion of the magnetic body representing biologically relevant aspects of molecules in terms of cyclotron frequencies. The model also involved the realization of the genetic code using electromagnetic field patterns and as dark nucleon strings, and led to a proposal that the analogs of transcription and translation are realized for the dark variants of DNA, RNA, tRNA, and amino acids represented in terms of dark nucleon strings. Also processes transcribing ordinary and dark variants of the biomolecules to each other were proposed. This would make possible R&D-like controlled evolution based on experimentation using dark representations of biomolecules defining a kind of virtual world.

The recent findings of the group of Montagnier allow a more detailed formulation of the model and suggest a general mechanism for generalized transcription and translation processes based on the reconnection of magnetic flux tubes between the molecules in question. A new element is the proposed role of ordered water and hydrogen bonds in the formation of water memories. These representations would result from the dropping of the magnetic bodies of molecules as the hydrogen bonds connecting the molecule to water molecules of the ordered water layer around it, analogous to an ice layer, are split during the mechanical agitation. A similar process occurs quite generally when external energy feed excites the resting state of the cell and induces protein folding and its reversal and the formation of protein aggregates. Good metaphors for the resting and excited states are cellular winter and summer. The necessity of a repeated dilution and mechanical agitation could be understood if agitation provides metabolic energy for the replication of the magnetic bodies filling the diluted water volume and gives rise to a series of "environmental catastrophes" inducing evolutionary leaps increasing the typical value of Planck constant associated with the magnetic bodies until the energy E = hf of 7 Hz dark photons exceeds the thermal energy at room temperature.

Category: Physics of Biology

[99] viXra:1111.0107 [pdf] replaced on 2012-01-30 08:16:59

Model for the Findings about Hologram Generating Properties of DNA

Authors: Peter Gariaev, Matti Pitkänen
Comments: 27 Pages.

A TGD inspired model is proposed for the strange replica structures observed when a DNA sample is irradiated by red, IR, and UV light using two methods by Peter Gariaev and collaborators. The first method produces what is tentatively interpreted as replica images of either the DNA sample or of the five red lamps used to irradiate the sample. The second method produces a replica image of the environment, with replication in the horizontal direction but only at the right hand side of the apparatus. Also a white phantom variant of the replica trajectory observed in the first experiment is observed and has, in the vertical direction, the size scale of the apparatus.

The model is developed in order to explain the characteristic features of the replica patterns. The basic notions are the magnetic body, the massless extremal (topological light ray), the existence of Bose-Einstein condensates of Cooper pairs at magnetic flux tubes, and dark photons with a large value of Planck constant, for which macroscopic quantum coherence is possible. The hypothesis is that the first method makes part of the magnetic body of the DNA sample visible, whereas method II would produce a replica hologram of the environment using dark photons and also produce a phantom image of the magnetic tubes becoming visible by method I. Replicas would result from a mirror hall effect in the sense that the dark photons would move back and forth between the part of the magnetic body becoming visible by method I and serving as a mirror, and the objects of the environment serving also as mirrors. What is however required is that not only the outer boundaries of objects visible via ordinary reflection act as mirrors, but also the parts of the outer boundary not usually visible perform the mirror function, so that an essentially 3-D vision providing information about the geometry of the entire object would be in question. Many-sheeted space-time allows this.

The presence of the hologram image for method II requires the self-sustainment of the reference beam only whereas the presence of phantom DNA image for method I requires the self-sustainment of both beams. Non-linear dynamics for the energy feed from DNA to the magnetic body could make possible self-sustainment for both beams simultaneously. Non-linear dynamics for beams themselves could allow for the self-sustainment of reference beam and/or reflected beam. The latter option is favored by data.

Category: Physics of Biology

[98] viXra:1111.0106 [pdf] replaced on 2013-11-10 15:33:06

A New Force Smaller Than The Smallest Gravity

Authors: Dan Visser
Comments: 7 Pages.

In this version 3 the formulations are given for the existence of a force smaller than the smallest gravity. This is a new dark energy force, which affects neutrinos differently than is assumed according to current physics. The formulations also imply a different look at the Higgs mass and dark matter mass. A deeper analysis became important, because a new cosmological hypothesis is involved. The CERN experiments on these issues are not beyond criticism. The sets of equations presented in my papers "A New Dark Energy Force Theoretically Calculates Faster-than-light-neutrinos" and "Duonistic Neutrinos Violate Relativity" reveal such a criticism. However, until now my formulations have withstood a hurricane, even after a director of the OPERA project had to resign. My set of equations theoretically shows that the faster-than-light neutrino experiments had to be investigated to the bottom.
Category: Mathematical Physics

[97] viXra:1111.0104 [pdf] replaced on 2014-02-14 08:10:00

Work, Motion and Energy

Authors: Nainan K. Varghese
Comments: 5 Pages.

Energy, an undefined entity derived from work, is generally equated with motion. This has necessitated the introduction of certain motions of physical entities wherever energy is envisaged. All actions are results of work done rather than of energy. Although energy has no definite form, structure, or existence, it has gradually come to usurp the rightful status of work about a physical entity. The author proposes an alternative concept that may restore work, motion, and energy to their fair and logical status.
Category: Classical Physics

[96] viXra:1111.0102 [pdf] submitted on 26 Nov 2011

Expanding Relative Theory to Include the Super-C-Neutrino

Authors: Sheng-Ping Wu
Comments: 2 Pages.

This article expands the classical velocity to surpass that of light, without varying the formulas of Relative Theory, to construct a theory that well explains the current measurements, such as the velocity and energy of the neutrinos tested between Gran Sasso and CERN.
Category: Relativity and Cosmology

[95] viXra:1111.0101 [pdf] submitted on 2011-11-25 19:45:10

On The Zero Point Field

Authors: Paul Karl Hoiland
Comments: 5 Pages.

Via a look at the Ricci and Weyl curvature tensors, I show that from a quantum perspective the Zero Point Field, or ZPF, is the origin of both inertia and gravity. I also point out that the accelerated expansion of the cosmos and an observational slowdown of c should have been expected.
Category: Quantum Gravity and String Theory

[94] viXra:1111.0100 [pdf] submitted on 2011-11-26 10:28:01

Two Faces Myth Figure in Chess

Authors: Martiros Khurshudyan
Comments: 1 Page.

A new version of the game of Chess is proposed, where in the general case a figure (or figures) has "two faces": white and black. Depending on the position of the figure on the board, it can change its color (face) and be used by the opponent. Which figures initially have "two faces", as well as the positions of the colored cells, will be defined by the players before the game starts.
Category: General Mathematics

[93] viXra:1111.0099 [pdf] submitted on 2011-11-28 13:34:42


Authors: Meir Amiram
Comments: 12 Pages.

I indicate that the key factor in the mechanism of inertia is the proximity of any elementary particle to itself, and consequently show that Newton's laws of motion are derivatives of Newton's inverse square law of gravity. Inertia originates in the microscopic realm, at the scale of the particle's diameter, and is the response of an elementary particle to its own gravitational field, nothing more or less. Experimental evidence and several consequences of the discovery are discussed.
Category: Quantum Gravity and String Theory

[92] viXra:1111.0098 [pdf] replaced on 2013-02-15 23:21:50

Mathematical Objects: Past, Present, Future

Authors: Vyacheslav Telnin
Comments: 4 Pages.

There are three main mathematical operations: [1] – addition, [2] – multiplication, [3] – raising to a power. There are also three kinds of operands: A – numbers, B – vectors, C – vector spaces. This paper shows how to define new kinds of operands (D, E, F, …) and new operations ([4], [5], [6], …; [0], [-1], [-2], [-3], …). The study of inverse operations for the new operations can open new classes of operands, as it did for the operations [1], [2], [3]. There is also a way to the description of a field with spin = 1/3.
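The ladder of operations [1], [2], [3], [4], … described in the abstract matches the standard hyperoperation recursion, in which each operation is repeated application of the previous one; a minimal illustrative sketch (an illustration of that recursion, not the author's construction):

```python
def hyper(n, a, b):
    """Hyperoperation [n] on numbers: [1] addition, [2] multiplication,
    [3] raising to a power, [4] the next rung (tetration), and so on."""
    if n == 1:
        return a + b
    if b == 1:
        return a  # a [n] 1 = a for every n >= 2
    # operation [n] applied b times is operation [n-1] folded over itself
    return hyper(n - 1, a, hyper(n, a, b - 1))

print(hyper(1, 2, 3), hyper(2, 2, 3), hyper(3, 2, 3), hyper(4, 2, 3))  # 5 6 8 16
```

Here [4] applied to (2, 3) gives 2^(2^2) = 16, showing how each new operation is generated from the one below it.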
Category: General Mathematics

[91] viXra:1111.0097 [pdf] replaced on 2018-10-31 09:50:59

Understanding Superconductivity: a New Approach

Authors: Kunwar Jagdish Narain
Comments: 34 Pages. 5 Figures

All electrons, nucleons, and other particles undergo a persistent spin motion without possessing any infinite energy source; therefore, they should have a unique structure that maintains their spinning and provides all the properties that they display. Additionally, because nothing in nature occurs without a reason or purpose, there should be an explanation for their persistent spinning motion. Therefore, the unique structures of electrons and nucleons, and the purpose of their persistent spinning motion, have been determined. The results of these determinations provide knowledge of a new force possessing the characteristics of the nuclear force, with both attractive and repulsive components, and a very clear and complete explanation of: 1) all the phenomena; 2) all the properties and effects of their systems; and 3) the structures of their systems, e.g., deuterons, alpha particles, and nuclei, that are generated due to these particles. The present study focuses on providing an understanding of how the resistance-less (superconducting) state and the numerous properties and effects, e.g., the Meissner effect, levitation of a magnet above the superconductor, and Josephson tunnelling, that substances exhibit at their transition temperature are generated.
Category: Condensed Matter

[90] viXra:1111.0096 [pdf] submitted on 2011-11-29 10:58:44

Reconciliation of QM and GR and the Need for a Pulsating Entangled CPT Symmetric Raspberry Shaped Multiverse.

Authors: Leo Vuyk
Comments: 14 Pages.

In Quantum Function Follows Form (Q-FFF) theory, the Higgs particle is interpreted as a massless but energetic oscillating transformer particle, equipped with a complex internal structure and able to create the universe by transformation of its shape, after real mechanical collision and merging with other transformed Higgs particles into knots called quarks and leptons. The best place to create Higgs-particle-based plasma out of the "nothingness" of the oscillating super dense Higgs vacuum lattice seems to be not only directly after the black hole splitting Big Bang of the multiverse, but even more at the event horizons of smaller "new paradigm" black holes proliferated through the multiverse. This multiverse seems to have a raspberry shape equipped with a restricted number of CPT (Charge Parity Time) symmetrical universes as berries or lobes. One of these lobes is supposed to be OUR material universe, which is supposed to be entangled down to the Planck scale (thus even at the human scale) with at least one opposing anti-material mirror universe of the raspberry. The raspberry multiverse is supposed to be pulsating, due to the existence of a process of evaporation of the Higgs vacuum particles (dark energy) during the big bang inflation process and, in succession, by the consumption of the same Higgs vacuum by individual black holes (dark matter) created at all scales, from supernova black holes down to sunspots, comets, micro comets, and even ball lightning.
Category: Astrophysics

[89] viXra:1111.0095 [pdf] submitted on 25 Nov 2011

2011 New Mexico Book Award for Science & Math

Authors: Mircea Eugen Selariu
Comments: 4 pages.

Prof. W. B. Vasantha Kandasamy, from Indian Institute of Technology, Chennai - India and Prof. Florentin Smarandache, from the University of New Mexico - USA, have received the 2011 New Mexico Book Award at the category Science and Mathematics for their book "Algebraic Structures Using Natural Class of Intervals", published by the Education Publ. Hse. from Columbus, in 2011.
Category: History and Philosophy of Physics

[88] viXra:1111.0094 [pdf] submitted on 25 Nov 2011

Another CERN Solution

Authors: Paul Karl Hoiland
Comments: 2 pages.

The CERN problem is examined from another perspective.
Category: Relativity and Cosmology

[87] viXra:1111.0092 [pdf] submitted on 24 Nov 2011

Smooth Infinitesimal Analysis Based Model of Multidimensional Geometry

Authors: Alexander Egoyan
Comments: 6 pages.

In this work a new approach to multidimensional geometry based on smooth infinitesimal analysis (SIA) is proposed. An embedded surface in this multidimensional geometry will look different for external and internal observers: from the outside it will look like a composition of infinitesimal segments, while from the inside like a set of points equipped with a metric. The geometry is elastic. Embedded surfaces possess a dual metric: internal and external. They can change their form in the bulk without changing the internal metric.
Category: Geometry

[86] viXra:1111.0091 [pdf] replaced on 2012-01-30 08:18:27

Langlands Conjectures in TGD Framework

Authors: Matti Pitkänen
Comments: 24 Pages.

The arguments of this article support the view that in TGD Universe number theoretic and geometric Langlands conjectures could be understood very naturally. The basic notions are following.

  1. Zero energy ontology (ZEO) and the related notion of the causal diamond CD (CD is shorthand for the cartesian product of a causal diamond of M4 and of CP2). ZEO leads to the notion of partonic 2-surfaces at the light-like boundaries of CD and to the notion of the string world sheet. These notions are central in the recent view about TGD. One can assign to the partonic 2-surfaces a conformal moduli space having as additional coordinates the positions of braid strand ends (punctures). By electric-magnetic duality this moduli space must correspond closely to the moduli space of string world sheets.

  2. Electric-magnetic duality realized in terms of string world sheets and partonic 2-surfaces. The group G and its Langlands dual LG would correspond to the time-like and space-like braidings. Duality predicts that the moduli space of string world sheets is very closely related to that for the partonic 2-surfaces. The strong form of 4-D general coordinate invariance implying electric-magnetic duality and S-duality as well as strong form of holography indeed predicts that the collection of string world sheets is fixed once the collection of partonic 2-surfaces at light-like boundaries of CD and its sub-CDs is known.

  3. The proposal is that finite measurement resolution is realized in terms of inclusions of hyperfinite factors of type II1 at the quantum level and represented in terms of a confining effective gauge group. This effective gauge group could be some associate of G: the gauge group, the Kac-Moody group or its quantum counterpart, or the so-called twisted quantum Yangian strongly suggested by twistor considerations. At the space-time level the finite measurement resolution would be represented in terms of braids, which come in two varieties: braids assignable to space-like surfaces at the two light-like boundaries of CD, and braids assignable to light-like 3-surfaces at which the signature of the induced metric changes and which are identified as orbits of partonic 2-surfaces connecting the future and past boundaries of CDs.

    There are several steps leading from G to its twisted quantum Yangian. The first step replaces point-like particles with partonic 2-surfaces: this brings in the Kac-Moody character. The second step brings in finite measurement resolution, meaning that the Kac-Moody type algebra is replaced with its quantum version. The third step brings in zero energy ontology: one cannot treat a single partonic surface or string world sheet as an independent unit; the collection of partonic 2-surfaces and the corresponding string world sheets always defines the geometric structure, so that multilocality, and therefore a quantum Yangian algebra with multilocal generators, is unavoidable.

    In finite measurement resolution the geometric and number theoretic Langlands dualities are very closely related, since the partonic 2-surface is effectively replaced with the punctures representing the ends of braid strands, and the orbit of this set under a discrete subgroup of G defines effectively a collection of "rational" 2-surfaces. The number of these "rational" surfaces in the geometric Langlands conjecture replaces the number of rational points of the partonic 2-surface in its number theoretic variant. The ability to compute both of these numbers is very relevant for quantum TGD.

  4. The natural identification of the associate of G is as the quantum Yangian of a Kac-Moody type group associated with a Minkowskian open string model assignable to the string world sheet, representing a string moving in the moduli space of partonic 2-surfaces. The dual group corresponds to a Euclidian string model with the partonic 2-surface representing a string orbit in the moduli space of string world sheets. The Kac-Moody algebra assigned with simply laced G is obtained using the standard tachyonic free field representation, with generators obtained as ordered exponentials of Cartan algebra generators identified as the transversal parts of M4 coordinates for the braid strands. The importance of the free field representation, generalizing also to non-simply laced groups, in the realization of finite measurement resolution in terms of Kac-Moody algebra cannot be over-emphasized.

  5. Langlands duality involves besides the harmonic analysis side also the number theoretic side: Galois groups (collections of them) defined by infinite primes and integers have a representation as symplectic flows defining braidings. I have earlier proposed that the hierarchy of these Galois groups defines what might be regarded as a non-commutative homology and cohomology. Also G has this kind of representation, which explains why the representations of these two kinds of groups are so intimately related. This relationship could be seen as a generalization of the McKay correspondence between finite subgroups of SU(2) and simply laced Lie groups.

  6. The symplectic group of the light-cone boundary acts as isometries of the WCW geometry, allowing one to represent projectively both Galois groups and symmetry groups as symplectic flows, so that the non-commutative cohomology would have a braided representation. This leads to braided counterparts for both the Galois group and the effective symmetry group.

  7. The moduli space for Higgs bundles, playing a central role in the approach of Witten and Kapustin to the geometric Langlands program, is in the TGD framework replaced with the conformal moduli space for partonic 2-surfaces. It is however not possible to speak about a Higgs field, although the moduli define the analog of a Higgs vacuum expectation value. Note that in the TGD Universe the most natural assumption is that all Higgs like states are "eaten" by gauge bosons so that also the photon and gluons become massive. This mechanism would be very general and mean that massless representations of the Poincare group organize into massive ones via the formation of bound states. It might however be possible to see the contribution of p-adic thermodynamics, depending on genus, as analogous to the Higgs contribution, since the conformal moduli are analogous to the vacuum expectation of a Higgs field.

Category: Mathematical Physics

[85] viXra:1111.0090 [pdf] replaced on 2012-01-30 08:22:30

How Infinite Primes Relate to Other Views About Mathematical Infinity?

Authors: Matti Pitkänen
Comments: 16 Pages.

The notion of infinite primes is purely TGD-inspired. The notion of infinity involved is number theoretical, and infinite primes have well defined divisibility properties. One can partially order them by the real norm. p-Adic norms of infinite primes are well defined and finite. The construction of infinite primes is a hierarchical procedure structurally equivalent to a repeated second quantization of a supersymmetric arithmetic quantum field theory. At the lowest level bosons and fermions are labelled by ordinary primes. At the next level one obtains free Fock states plus states having an interpretation as bound many-particle states. The many-particle states of a given level become the single-particle states of the next level, and one can repeat the construction ad infinitum. The analogy with quantum theory is intriguing, and I have proposed that the quantum states in the TGD Universe correspond to octonionic generalizations of infinite primes. It is interesting to compare infinite primes (and integers) to the Cantorian view about infinite ordinals and cardinals. The basic problems of Cantor's approach relate to the axiom of choice, the continuum hypothesis, and Russell's antinomy: all of these problems stem from the definition of ordinals as sets. In the TGD framework infinite primes, integers, and rationals are defined purely algebraically so that these problems are avoided. It is not surprising that the two approaches are not equivalent. For instance, sum and product for Cantorian ordinals are not commutative, unlike for infinite integers defined in terms of infinite primes.

Set theory defines the foundations of modern mathematics. Set theory relies strongly on classical physics, and the obvious question is whether one should reconsider the foundations of mathematics in light of quantum physics. Is set theory really the correct approach to axiomatization?

  1. Quantum view about consciousness and cognition leads to a proposal that p-adic physics serves as a correlate for cognition. Together with the notion of infinite primes this suggests that number theory should play a key role in the axiomatics.
  2. Algebraic geometry allows an algebraization of set theory, and this kind of approach suggests itself strongly in a physics-inspired approach to the foundations of mathematics. This means powerful limitations on the notion of set.
  3. Finite measurement resolution and finite resolution of cognition could have implications also for the foundations of mathematics and relate directly to the fact that all numerical approaches reduce to an approximation using rationals with a cutoff on the number of binary digits.
  4. The TGD inspired vision about consciousness implies evolution by quantum jumps, meaning that mathematics itself also evolves, so that no fixed system of axioms can ever catch all the mathematical truths, for the simple reason that mathematicians themselves evolve with mathematics.
I will discuss the possible impact of these observations on the foundations of physical mathematics, assuming that one accepts the TGD inspired view about infinity, about the notion of number, and about the restrictions on the notion of set suggested by classical TGD.

Category: Mathematical Physics

[84] viXra:1111.0089 [pdf] replaced on 2012-01-30 08:24:00

Motives and Infinite Primes

Authors: Matti Pitkänen
Comments: 80 Pages.

In this article the goal is to find whether the general mathematical structures associated with the twistor approach, superstring models and M-theory could have a generalization or a modification in the TGD framework. The contents of the chapter are an outcome of a rather spontaneous process, and represent rather unexpected new insights about TGD resulting from these comparisons.

1. Infinite primes, Galois groups, algebraic geometry, and TGD

In algebraic geometry the notion of variety defined by algebraic equations is very general: all number fields are allowed. One of the challenges is to define the counterparts of homology and cohomology groups for them. The notion of cohomology, giving rise also to homology if Poincare duality holds true, is central. The number of various cohomology theories has inflated, and one of the basic challenges is to find a sufficiently general approach allowing one to interpret the various cohomology theories as variations of the same motive, as Grothendieck, the pioneer of the field responsible for many of its basic notions and visions, expressed it.

Cohomology requires a definition of the integral of forms for all number fields. In the p-adic context the lack of well-ordering of p-adic numbers implies difficulties both in homology and in cohomology, since the notion of boundary does not exist in the topological sense. The notion of definite integral is problematic for the same reason. This has led to a proposal of reducing integration to Fourier analysis, working for symmetric spaces but requiring algebraic extensions of p-adic numbers and an appropriate definition of the p-adic symmetric space. The definition is not unique, and the interpretation is in terms of varying measurement resolution.

The notion of infinite primes has gradually turned out to be more and more important for quantum TGD. Infinite primes, integers, and rationals form a hierarchy completely analogous to a hierarchy of second quantizations of a supersymmetric arithmetic quantum field theory. The simplest infinite primes representing elementary particles at a given level are in one-one correspondence with many-particle states of the previous level. More complex infinite primes have an interpretation in terms of bound states.

  1. What makes infinite primes interesting from the point of view of algebraic geometry is that infinite primes, integers and rationals at the n:th level of the hierarchy are in 1-1 correspondence with rational functions of n arguments. One can solve the roots of the associated polynomials and perform a root decomposition of infinite primes at the various levels of the hierarchy, and assign to them Galois groups acting as automorphisms of the field extensions defined by the roots of the polynomials coming as restrictions of the basic polynomial to the planes x_n=0, x_n=x_(n-1)=0, etc...

  2. These Galois groups are suggested to define a non-commutative generalization of homotopy and homology theories and a non-linear boundary operation, for which a geometric interpretation in terms of the restriction to a lower-dimensional plane is proposed. The Galois group G_k would be analogous to the homology group relative to the plane x_(k-1)=0 representing the boundary, and makes sense for all number fields also geometrically. One can ask whether the invariance of the complex of groups under permutations of the order of variables in the reduction process is necessary. Physical interpretation suggests that this is not the case and that all the groups obtained by the permutations are needed for a full description.

  3. The algebraic counterpart of the boundary map would map the elements of G_k, identified as the analog of a homotopy group, to the commutator group [G_(k-2),G_(k-2)] and therefore to the unit element of the abelianized group defining the cohomology group. In order to obtain something analogous to the ordinary homology and cohomology groups one must however replace the Galois groups by their group algebras with values in some field or ring. This allows one to define the analogs of homotopy and homology groups as their abelianizations. Cohomotopy and cohomology would emerge as duals of homotopy and homology in the dual of the group algebra.

  4. That the algebraic representation of the boundary operation is not expected to be unique turns into a blessing when one keeps the vision of TGD as an almost topological QFT as the guideline. One can include all boundary homomorphisms subject to the condition that the anticommutator δ_i^k δ_j^(k-1) + δ_j^k δ_i^(k-1) maps to the group algebra of the commutator group [G_(k-2),G_(k-2)]. By adding dual generators one obtains what looks like a generalization of an anticommutative fermionic algebra, and what comes to mind is the spectrum of quantum states of a SUSY algebra spanned by bosonic states realized as group algebra elements and fermionic states realized in terms of homotopy and cohomotopy, and in the abelianized version in terms of homology and cohomology. The Galois group action allows one to organize quantum states into multiplets of Galois groups acting as symmetry groups of physics. Poincare duality would map the analogs of fermionic creation operators to annihilation operators and vice versa, and the counterpart of the pairing of the k:th and n-k:th homology groups would be an inner product analogous to that given by Grassmann integration. The interpretation in terms of fermions turns out, however, to be wrong, and the more appropriate interpretation is in terms of Dolbeault cohomology applying to forms with holomorphic and antiholomorphic indices.

  5. The intuitive idea that the Galois group is analogous to the 1-D homotopy group, which is the only non-commutative homotopy group, the structure of infinite primes analogous to a braids-of-braids-of-braids-of-... structure, the fact that the Galois group is a subgroup of the permutation group, and the possibility to lift the permutation group to a braid group suggest a representation as flows of a 2-D plane with punctures, giving a direct connection with topological quantum field theories for braids, knots and links. The natural assumption is that the flows are induced from transformations of the symplectic group acting on δM4+/-× CP2 representing quantum fluctuating degrees of freedom associated with WCW ("world of classical worlds"). Discretization of WCW and a cutoff in the number of modes would be due to the finite measurement resolution. The outcome would be rather far reaching: finite measurement resolution would allow one to construct WCW spinor fields explicitly using the machinery of number theory and algebraic geometry.

  6. A connection with operads is highly suggestive. What is nice from the TGD perspective is that the non-commutative generalization of homology and homotopy has a direct connection to the basic structure of quantum TGD as an almost topological quantum theory, where braids are basic objects, and also to hyper-finite factors of type II1. This notion of Galois group makes sense only for the algebraic varieties for which the coefficient field is an algebraic extension of some number field. The braid group approach however allows one to generalize the approach to completely general polynomials, since the braid group makes sense also when the end points of the braid are not algebraic points (roots of the polynomial).
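
The lifting of the permutation group to the braid group invoked in point 5 can be sketched concretely: bubble-sorting a permutation with adjacent swaps records one Artin generator per swap, giving one possible (non-unique) braid word that projects back onto the permutation. A minimal illustration; the function names are hypothetical, and choosing a positive crossing for every swap is an arbitrary convention, since a generator and its inverse project to the same transposition.

```python
def permutation_to_braid_word(perm):
    """Lift a permutation (tuple of images of 0..n-1) to a positive braid word.

    Bubble-sorting the permutation with adjacent swaps records one Artin
    generator sigma_i per swap; the resulting word projects back onto the
    permutation under the braid group -> symmetric group homomorphism.
    """
    p = list(perm)
    word = []
    for _ in range(len(p)):
        for i in range(len(p) - 1):
            if p[i] > p[i + 1]:
                p[i], p[i + 1] = p[i + 1], p[i]
                word.append(i + 1)   # sigma_{i+1} braids strands i+1 and i+2
    return word                      # e.g. (1, 0, 2) -> [1], a single crossing

def braid_word_to_permutation(word, n):
    """Project a braid word on n strands back to the permutation it represents."""
    p = list(range(n))
    for i in reversed(word):         # undo the recorded swaps in reverse order
        p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)
```

The lift is far from unique: any braid word differing by crossing signs, or by relations of the braid group, projects to the same permutation, which is exactly the extra topological information the braid representation carries.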

This construction would realize the number theoretical, algebraic geometrical, and topological content in the construction of quantum states in TGD framework in accordance with TGD as almost TQFT philosophy, TGD as infinite-D geometry, and TGD as generalized number theory visions.

2. p-Adic integration and cohomology

This picture leads also to a proposal how p-adic integrals could be defined in TGD framework.

  1. The calculation of twistorial amplitudes reduces to multi-dimensional residue calculus. Motivic integration gives excellent hopes for the p-adic existence of this calculus, and the braid representation would give a space-time representation for the residue integrals in terms of the braid points representing poles of the integrand: this would conform with quantum classical correspondence. The power of 2π appearing in the multiple residue integral is problematic unless it disappears from the scattering amplitudes. Otherwise one must allow an extension of p-adic numbers to a ring containing powers of 2π.

  2. Weak form of electric-magnetic duality and the general solution ansatz for preferred extremals reduce the Kähler action defining the Kähler function for WCW to the integral of the Chern-Simons 3-form. Hence the reduction to cohomology takes place at the space-time level, and since p-adic cohomology exists, there are excellent hopes about the existence of a p-adic variant of Kähler action. The existence of the exponent of the Kähler function gives additional powerful constraints on the value of the Kähler function in the intersection of real and p-adic worlds, consisting of algebraic partonic 2-surfaces, and allows one to guess the general form of the Kähler action in the p-adic context.

  3. One should also define p-adic integration for the vacuum functional at the level of WCW. p-Adic thermodynamics serves as a guideline, leading to the condition that in the p-adic sector the exponent of Kähler action is of the form (m/n)^r, where m/n is divisible by a positive power of the p-adic prime p. This implies that one has a sum over contributions coming as powers of p, and the challenge is to calculate the integral over K=constant surfaces using the integration measure defined by an infinite power of the Kähler form of WCW, reducing the integral to cohomology, which should make sense also p-adically. The p-adicization of the WCW integrals has been discussed already earlier using an approach based on harmonic analysis in symmetric spaces, and these two approaches should be equivalent. One could also consider a more general quantization of Kähler action as a sum K=K1+K2, where K1=r log(m/n) and K2=n, with n divisible by p, since exp(n) exists p-adically in this case and one has exp(K)=(m/n)^r × exp(n). Also transcendental extensions of p-adic numbers involving n+p-2 powers of e^(1/n) can be considered.

  4. If the Galois group algebras indeed define a representation for WCW spinor fields in finite measurement resolution, also WCW integration would reduce to summations over the Galois groups involved so that integrals would be well-defined in all number fields.
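
The p-adic existence of exp(n) for n divisible by p, invoked in point 3 above, can be checked term by term: the p-adic valuations of the series terms n^k/k! grow without bound, so the terms tend to zero p-adically and the series converges (for p=2 one needs 4|n; the helper names below are illustrative).

```python
from fractions import Fraction
from math import factorial

def vp(x, p):
    """p-adic valuation of a non-zero rational number x."""
    x = Fraction(x)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def exp_term_valuations(n, p, kmax):
    """Valuations v_p(n^k / k!) of the terms of the exponential series for exp(n)."""
    return [vp(Fraction(n ** k, factorial(k)), p) for k in range(1, kmax + 1)]
```

For n=p=3 the valuations come out as 1, 2, 2, 3, 4, 4, ... : since v_p(k!) grows only like k/(p-1), the valuation k·v_p(n) - v_p(k!) tends to infinity whenever v_p(n) ≥ 1 and p > 2, which is exactly the convergence criterion used in the text.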

3. Floer homology, Gromov-Witten invariants, and TGD

Floer homology defines a generalization of Morse theory allowing one to deduce symplectic homology groups by studying Morse theory in the loop space of the symplectic manifold. Since the symplectic transformations of the boundary of δM4+/-× CP2 define the isometry group of WCW, it is very natural to expect that Kähler action defines a generalization of Floer homology allowing one to understand the symplectic aspects of quantum TGD. The hierarchy of Planck constants implied by the one-to-many correspondence between canonical momentum densities and time derivatives of the imbedding space coordinates leads naturally to singular coverings of the imbedding space, and the resulting symplectic Morse theory could characterize the homology of these coverings.

One ends up with a more precise definition of the vacuum functional: Kähler action reduces to Chern-Simons terms (imaginary in Minkowskian regions and real in Euclidian regions) so that the exponent has both a phase and a real part, which makes the functional integral well-defined. Both the phase factor and its conjugate must be allowed, and the resulting degeneracy of the ground state could allow one to understand qualitatively the delicacies of CP breaking and its sensitivity to the parameters of the system. The critical points with respect to zero modes correspond to those of the Kähler function. Critical points with respect to the complex coordinates associated with quantum fluctuating degrees of freedom are not allowed by the positive definiteness of the Kähler metric of WCW. One can say that Kähler and Morse functions define the real and imaginary parts of the exponent of the vacuum functional.

The generalization of Floer homology inspires several new insights. In particular, the space-time surface as a hyper-quaternionic surface could define the 4-D counterpart of the pseudo-holomorphic 2-surfaces of Floer homology. Holomorphic partonic 2-surfaces could in turn correspond to the extrema of the Kähler function with respect to zero modes, and holomorphy would be accompanied by super-symmetry.

Gromov-Witten invariants appear in Floer homology and topological string theories and this inspires the attempt to build an overall view about their role in TGD. Generalization of topological string theories of type A and B to TGD framework is proposed. The TGD counterpart of the mirror symmetry would be the equivalence of formulations of TGD in H=M4× CP2 and in CP3× CP3 with space-time surfaces replaced with 6-D sphere bundles.

4. K-theory, branes, and TGD

K-theory and its generalizations play a fundamental role in super-string models and M-theory since they allow a topological classification of branes. After presenting some physical objections against the notion of brane, more technical problems of this approach are discussed briefly, and it is proposed how TGD allows one to overcome these problems. A more precise formulation of the weak form of electric-magnetic duality emerges: the original formulation was not quite correct for space-time regions with Euclidian signature of the induced metric. The question about possible TGD counterparts of R-R and NS-NS fields and of S, T, and U dualities is discussed.

5. p-Adic space-time sheets as correlates for Boolean cognition

p-Adic physics is interpreted as a physical correlate for cognition. The so called Stone spaces are in one-one correspondence with Boolean algebras and have typically 2-adic topologies. A generalization to the p-adic case, with the interpretation of the pinary digits of p as physically representable Boolean statements of a Boolean algebra with 2^n > p > 2^(n-1) statements, is encouraged by the p-adic length scale hypothesis. Stone spaces are synonymous with profinite spaces, of which both finite and infinite Galois groups are basic examples. This provides strong support for the connection between Boolean cognition and p-adic space-time physics. The Stone space character of Galois groups suggests also a deep connection between number theory and cognition, and some arguments providing support for this vision are discussed.
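
The correspondence between pinary digits and Boolean statements can be illustrated in the simplest 2-adic case: the n lowest binary digits of an integer encode the truth values of n independent statements, and the operations of the Boolean algebra become bitwise arithmetic (a toy sketch with illustrative function names, not the profinite construction itself).

```python
def pinary_truth_values(x, n):
    """Read the n lowest pinary (base-2) digits of x as Boolean truth values."""
    return [bool((x >> i) & 1) for i in range(n)]

# Boolean algebra operations on encoded statement collections are bitwise ops:
def b_and(x, y):
    return x & y                     # digit-wise AND

def b_or(x, y):
    return x | y                     # digit-wise OR

def b_not(x, n):
    return x ^ ((1 << n) - 1)        # complement within the n-statement algebra
```

Each integer thus names one element of the finite Boolean algebra with 2^n elements, the finite quotient that a profinite (Stone-space) limit is built from.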

Category: Mathematical Physics

[83] viXra:1111.0088 [pdf] replaced on 2012-01-30 08:34:19

Could One Generalize Braid Invariant Defined by Vacuum Expectation of Wilson Loop to an Invariant of Braid Cobordisms and of 2-Knots?

Authors: Matti Pitkänen
Comments: 17 Pages.

Witten was awarded the Fields medal for a construction recipe for the Jones polynomial based on a topological QFT assigned to braids and on Chern-Simons action. Recently Witten has been working on an attempt to understand in terms of quantum theory the so called Khovanov polynomial associated with a much more abstract link invariant, whose interpretation and real understanding remain still open.

The attempts to understand Witten's thoughts lead to a series of questions unavoidably culminating in the frustrating "Why do I not have the brain of Witten, which might make it possible to answer these questions?". This one must just accept. In this article I summarize some thoughts inspired by the associations of Witten's talk with quantum TGD and with the model of DNA as a topological quantum computer. In my own childish manner I dare believe that these associations are interesting, and I dare also hope that some more brainy individual might take them seriously.

An idea inspired by the TGD approach, which also a main-streamer might find interesting, is that the Jones invariant defined as the vacuum expectation for a Wilson loop in 2+1-D space-time generalizes to a vacuum expectation for a collection of Wilson loops in 2+2-D space-time and could define an invariant for 2-D knots and for cobordisms of braids analogous to the Jones polynomial. As a matter of fact, it turns out that a generalization of the gauge field known as a gerbe is needed and that in the TGD framework classical color gauge fields define the gauge potentials of this field. Also topological string theory in 4-D space-time could define this kind of invariants. Of course, it might well be that this kind of ideas have already been discussed in the literature.

Category: Mathematical Physics

[82] viXra:1111.0087 [pdf] submitted on 1 Nov 2011

Could the Notion of Hyperdeterminant be Useful in TGD Framework?

Authors: Matti Pitkänen
Comments: 4 pages.

The vanishing of an ordinary determinant tells that a group of linear equations possesses non-trivial solutions. The hyperdeterminant generalizes this notion to a situation in which one has homogeneous multilinear equations. The notion has applications to the description of quantum entanglement and has stimulated interest in physics blogs. The hyperdeterminant applies to hyper-matrices with n matrix indices, defined for an n-fold tensor power of a vector space - or more generally - for a tensor product of vector spaces with varying dimensions. The hyperdeterminant is an n-linear function of the arguments in the tensor factors with the property that all partial derivatives of the hyperdeterminant vanish at a point which corresponds to a non-trivial solution of the equations. A simple example is a potential function of n arguments linear in each argument.
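
The simplest non-trivial case is Cayley's hyperdeterminant of a 2×2×2 hypermatrix, which can be sketched as follows (an illustration of the finite-dimensional notion only, not of the infinite-dimensional generalization the article is after; the discriminant form of the formula is used):

```python
def cayley_hyperdet(a):
    """Cayley's hyperdeterminant of a 2x2x2 hypermatrix a[i][j][k].

    Computed as the discriminant of det(x*A0 + y*A1), where A0 and A1 are the
    two 2x2 slices; it vanishes exactly when the associated trilinear form
    has a non-trivial stationary point, i.e. when the homogeneous
    multilinear equations possess a non-trivial solution.
    """
    c = (a[0][0][0] * a[1][1][1] + a[0][1][1] * a[1][0][0]
         - a[0][0][1] * a[1][1][0] - a[0][1][0] * a[1][0][1])
    det0 = a[0][0][0] * a[0][1][1] - a[0][0][1] * a[0][1][0]
    det1 = a[1][0][0] * a[1][1][1] - a[1][0][1] * a[1][1][0]
    return c * c - 4 * det0 * det1

def trilinear_gradient_vanishes(a, x, y, z):
    """Check that all partial derivatives of f = sum a_ijk x_i y_j z_k vanish."""
    gx = [sum(a[i][j][k] * y[j] * z[k] for j in range(2) for k in range(2)) for i in range(2)]
    gy = [sum(a[i][j][k] * x[i] * z[k] for i in range(2) for k in range(2)) for j in range(2)]
    gz = [sum(a[i][j][k] * x[i] * y[j] for i in range(2) for j in range(2)) for k in range(2)]
    return all(g == 0 for g in gx + gy + gz)
```

For the degenerate hypermatrix with only a_000 = 1 the hyperdeterminant vanishes, and the trilinear form f = x0 y0 z0 indeed has the non-trivial stationary point x = y = z = (0, 1), in line with the criterion described above.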

Why the notion of hyperdeterminant - or rather its infinite-dimensional generalization - might be interesting in the TGD framework relates to the quantum criticality of TGD, stating that the TGD Universe involves a fractal hierarchy of criticalities: phase transitions inside phase transitions inside... At the classical level the lowest order criticality means that the extremal of Kähler action possesses non-trivial second variations for which the action is not affected: the system is critical. The vanishing of the so called Gaussian (or functional) determinant associated with the second variations is the condition for the existence of critical deformations. In the QFT context this situation corresponds to the presence of zero modes.

The simplest physical model for a critical system is the cusp catastrophe defined by a potential function V(x) which is a fourth order polynomial. At the edges of the cusp a stable and an unstable extremum of the potential function coincide, and the rank of the matrix of second derivatives defined by the potential function drops: its determinant vanishes. At the tip of the cusp also the third derivative of the potential function vanishes. This situation is however not describable in terms of a hyperdeterminant since it is genuinely non-linear rather than only multilinear.
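
The cusp catastrophe described above can be made concrete with the standard quartic normal form V(x) = x^4/4 + a x^2/2 + b x; the parameter values in the checks below are chosen purely for illustration.

```python
def V(x, a, b):
    """Cusp catastrophe normal form: V(x) = x^4/4 + a*x^2/2 + b*x."""
    return x ** 4 / 4 + a * x ** 2 / 2 + b * x

def dV(x, a, b):
    return x ** 3 + a * x + b        # V': zero at equilibria

def d2V(x, a, b):
    return 3 * x ** 2 + a            # V'': zero where equilibria merge

def d3V(x, a, b):
    return 6 * x                     # V''': zero only at the tip of the cusp

def cusp_discriminant(a, b):
    """Discriminant of V'; vanishes on the fold edges of the cusp,
    where a stable and an unstable equilibrium coincide (V' = V'' = 0)."""
    return -4 * a ** 3 - 27 * b ** 2
```

On the edge (a, b) = (-3, 2) the equilibria x^3 - 3x + 2 = (x-1)^2 (x+2) have a double root at x = 1, where V' and V'' vanish but V''' does not; at the tip (a, b) = (0, 0) also V''' vanishes, which is the genuinely non-linear (quartic) degeneracy the text points to.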

In complete analogy, one can consider also the vanishing of the n:th variations in the TGD framework as higher order criticality, so that the vanishing of the hyperdeterminant might serve as a criterion for a higher order critical point and the occurrence of a phase transition. That multilinearity might replace non-linearity in the TGD framework could be due to non-locality. Multilinearity with respect to the imbedding space coordinates at different space-time points would imply also the vanishing of the standard local divergences of quantum field theory, known to be absent in the TGD framework on the basis of very general arguments. In this article an attempt to concretize this idea is made. The challenge is highly non-trivial since even in finite measurement resolution one must work with an infinite-dimensional system.

Category: Mathematical Physics

[81] viXra:1111.0086 [pdf] replaced on 2012-01-30 08:45:30

Yangian Symmetry, Twistors, and TGD

Authors: Matti Pitkänen
Comments: 61 Pages.

There have been impressive steps in the understanding of N=4 maximally supersymmetric YM theory possessing 4-D super-conformal symmetry. This theory is related by AdS/CFT duality to a certain string theory in the AdS5× S5 background. A second stringy representation was discovered by Witten and is based on a 6-D Calabi-Yau manifold defined by twistors. The unifying proposal is that the so called Yangian symmetry is behind the mathematical miracles involved.

In the following I will discuss briefly the notion of Yangian symmetry and suggest its generalization in the TGD framework by replacing the conformal algebra with appropriate super-conformal algebras. Also a possible realization of the twistor approach and of the construction of scattering amplitudes in terms of Yangian invariants defined by Grassmannian integrals is considered in the TGD framework, based on the idea that in zero energy ontology one can represent massive states as bound states of massless particles. There is also a proposal for a physical interpretation of the Cartan algebra of the Yangian algebra, allowing one to understand at the fundamental level how the mass spectrum of n-particle bound states could be understood in terms of the n-local charges of the Yangian algebra.

Twistors were originally introduced by Penrose to characterize the solutions of Maxwell's equations. Kähler action is Maxwell action for the induced Kähler form of CP2. The preferred extremals allow a very concrete interpretation in terms of modes of a massless non-linear field. Both conformally compactified Minkowski space, identifiable as the so called causal diamond, and CP2 allow a description in terms of twistors. These observations inspire the proposal that a generalization of Witten's twistor string theory, relying on the identification of twistor string world sheets with certain holomorphic surfaces assigned with Feynman diagrams, could allow a formulation of quantum TGD in terms of 3-dimensional holomorphic surfaces of CP3× CP3 mapped to 6-surfaces in the dual CP3× CP3, which are sphere bundles so that they are projected in a natural manner to 4-D space-time surfaces. Very general physical and mathematical arguments lead to a highly unique proposal for the holomorphic differential equations defining the complex 3-surfaces conjectured to correspond to the preferred extremals of Kähler action.

Category: Mathematical Physics

[80] viXra:1111.0085 [pdf] replaced on 2012-03-16 03:44:46

A Possible Explanation for Shnoll Effect

Authors: Matti Pitkänen
Comments: 17 Pages.

Shnoll and collaborators have discovered strange repeating patterns in the random fluctuations of physical observables such as the number n of nuclear decays in a given time interval. Periodically occurring peaks in the distribution N(n) of the number of measurements producing n events in a series of measurements, as a function of n, are observed instead of a single peak. The positions of the peaks are not random, and the patterns depend on position and time, varying periodically in time scales possibly assignable to the Earth-Sun and Earth-Moon gravitational interactions.

These observations suggest a modification of the expected probability distributions, but it is very difficult to imagine any physical mechanism for this in the standard physics framework. Rather, a universal deformation of the predicted probability distributions would be in question, requiring something analogous to the transition from classical physics to quantum physics.

The hint about the nature of the modification comes from TGD inspired quantum measurement theory, proposing a description of the notion of finite measurement resolution in terms of inclusions of so called hyper-finite factors of type II1 (HFFs) and closely related quantum groups. Also p-adic physics - another key element of TGD - is expected to be involved. A modification of a given probability distribution P(n | λi) for a positive integer valued variable n, characterized by rational-valued parameters λi, is obtained by replacing n and the integers characterizing λi with so called quantum integers depending on the quantum phase qm=exp(i2π/m). The quantum integer nq must be defined as the product of the quantum counterparts pq of the primes p appearing in the prime decomposition of n. One has pq=sin(2πp/m)/sin(2π/m) for p≠P and pq=P for p=P. Here m must satisfy m≥3, m≠p, and m≠2p.
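
The quantum integers just defined admit a direct numerical sketch (the function names are illustrative; the constraints m ≥ 3, m ≠ p, m ≠ 2p are assumed for the primes involved):

```python
from math import sin, pi

def prime_factors(n):
    """Prime factorization of n > 1, with multiplicity."""
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

def quantum_integer(n, m, P):
    """Quantum integer n_q for the quantum phase q_m = exp(i*2*pi/m):
    the product of quantum primes p_q = sin(2*pi*p/m)/sin(2*pi/m),
    except p_q = P for the distinguished p-adic prime p = P."""
    nq = 1.0
    for p in prime_factors(n):
        nq *= P if p == P else sin(2 * pi * p / m) / sin(2 * pi / m)
    return nq
```

The definition is multiplicative by construction, and the quantum counterparts can indeed be negative, e.g. 2_q = sin(4π/3)/sin(2π/3) = -1 for m = 3, which is what forces the detour through a p-adic valued distribution described next.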

The quantum counterparts of positive integers can be negative. Therefore the quantum distribution is defined first as a p-adic valued distribution and then mapped by the so called canonical identification I to a real distribution, by the map taking p-adic -1 to P and powers P^n to P^(-n) and other quantum primes to themselves, and by requiring that the mean value of n is the same for the distribution and its quantum variant. The map I satisfies I(∑ P^n)=∑ I(P^n). The resulting distribution has peaks located periodically with periods coming as powers of P. Also periodicities with peaks corresponding to n=n+ + n-, with (n+)q>0 and fixed (n-)q<0, are predicted. These predictions are universal and easily testable. The prime P and the integer m characterizing the quantum variant of the distribution can be identified from data. The shapes of the distributions obtained are qualitatively consistent with the findings of Shnoll, but detailed tests are required to see whether the number theoretic predictions are correct.
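
The canonical identification I can be sketched for finite p-adic expansions; the infinite expansion of p-adic -1, with every digit equal to P-1, then sums to P in the limit, matching the statement above (a minimal sketch with hypothetical function names).

```python
def canonical_identification(n, P):
    """Canonical identification I: sum a_k P^k -> sum a_k P^(-k),
    applied to a non-negative integer n via its base-P digits a_k."""
    x, k = 0.0, 0
    while n > 0:
        x += (n % P) * P ** (-k)
        n //= P
        k += 1
    return x

def minus_one_truncation(P, digits):
    """Image under I of a truncation of p-adic -1, whose expansion has
    every digit equal to P-1; tends to P as the number of digits grows."""
    return sum((P - 1) * P ** (-j) for j in range(digits))
```

Since I compresses large p-adic scales into small real ones, images of numbers differing by high powers of P land close to each other on the real axis, which is the mechanism behind the periodically placed peaks with periods coming as powers of P.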

The periodic dependence of the distributions would be most naturally assignable to the gravitational interaction of Earth with the Sun and Moon and therefore to the periodic variation of the Earth-Sun and Earth-Moon distances. The TGD inspired proposal is that the p-adic prime P and the integer m characterizing the quantum distribution are determined by a process analogous to a state function reduction, and their most probable values depend on the variation of the distance R through the formulas Δp/p ≈ kp ΔR/R and Δm/m ≈ km ΔR/R. The p-adic primes assignable to elementary particles are very large, unlike the primes which could characterize the empirical distributions. The hierarchy of Planck constants allows the gravitational Planck constant assignable to the space-time sheets mediating gravitational interactions to have gigantic values, and this allows p-adicity with small values of the p-adic prime P.

Category: Mathematical Physics

[79] viXra:1111.0084 [pdf] replaced on 2011-11-30 16:01:56

Does the Opera Experiment Reveal a Systematic Error in the Satellite Ephemeris of the Global Positioning System?

Authors: Yves-Henri Sanejouand
Comments: 3 Pages.

With respect to the speed of light, the speed excess of the neutrinos (7.2 ± 0.6 km.s−1) measured in the OPERA experiment is observed to be close, if not exactly equal, to two times the orbital velocity of the GPS satellites (≈ 3.9 km.s−1), strongly suggesting that this anomaly is due to an error made on some of the GPS-based measurements involved in the OPERA experiment. Moreover, when this error is assumed to arise from a systematic error made on the measurements of GPS satellite velocities, the origin of the factor two becomes obvious. So, it seems likely that the OPERA experiment, instead of revealing a new, unexpected and challenging aspect of the physics of neutrinos, has demonstrated that the Global Positioning System still suffers from a rather important error, which had remained unnoticed until now, probably as a consequence of its systematic nature.
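
The numerical coincidence the abstract rests on can be checked in a few lines. A Python sketch using the standard gravitational parameter of the Earth and the nominal GPS semi-major axis (about 26,560 km); both constants are standard values, not taken from the paper:

```python
from math import sqrt

GM_EARTH = 3.986004418e14  # m^3/s^2, standard gravitational parameter of Earth
A_GPS = 2.656e7            # m, nominal GPS semi-major axis (~26,560 km)

# Circular-orbit speed of a GPS satellite: v = sqrt(GM/a).
v_gps = sqrt(GM_EARTH / A_GPS)  # roughly 3.87 km/s

# Claimed OPERA speed excess and its quoted uncertainty, in m/s.
excess, sigma = 7.2e3, 0.6e3
```

Twice the orbital speed comes out near 7.75 km/s, inside the quoted 7.2 ± 0.6 km/s interval, which is the "close, if not exactly equal" statement above.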
Category: High Energy Particle Physics

[78] viXra:1111.0083 [pdf] submitted on 23 Nov 2011

Three Paths To Superluminal Travel

Authors: Paul Karl Hoiland
Comments: 21 pages. A redo of an article I did.

A short look at some of the paths to superluminal travel.
Category: Relativity and Cosmology

[77] viXra:1111.0082 [pdf] replaced on 2012-04-23 01:46:03

Does the Weak Equivalence Principle Hold?

Authors: Golden Gadzirayi Nyambuya
Comments: 17 Pages. Significant improvements made. Paper is in its final form before it is reviewed.

We take -- albeit with an all-important and subtle difference -- a closer and meticulous look at the motion of light in a Newtonian gravitational field, in exactly the same manner as has been conducted by past researchers, leading them to conclude that for light grazing the limb of the Sun, its path must suffer a deflection of 0.875 arcsec from its otherwise straight path. The difference between our approach and that of past researchers is that, at the outset of the derivation of the resultant equations of motion, we do not assume the equality of gravitational (m_g) and inertial (m_i) mass. The ratio of the gravitational to inertial mass (gamma = m_g/m_i) is persistent in the equations; it does not cancel out or disappear. Eventually, this ratio emerges in the final equations of motion. When these resultant equations of motion are inspected, it is seen that the factor-two difference needed to bring Newtonian gravitation into harmony with observations can be attributed to a photon's gravitational to inertial mass ratio. This leads us directly to question the validity of the equivalence principle. This finding, we believe, demonstrates or at least hints at a much deeper reality: that the gravitational and inertial mass may -- after all -- not be equal as we have come to strongly believe. This rather disturbing (perhaps exciting) conclusion, if correct, may direct us to closely re-examine the validity of Einstein's central tenet -- the Equivalence Principle -- which stands as the strong foundational basis of his beautiful and celebrated General Theory of Relativity (GTR).
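
The 0.875 arcsec figure quoted above is the standard Newtonian deflection delta = 2GM/(c^2 R) for a ray grazing the solar limb (with gamma = m_g/m_i set to 1). A quick numerical check in Python, using standard solar constants (not taken from the paper):

```python
from math import pi

GM_SUN = 1.32712440018e20  # m^3/s^2, solar gravitational parameter
R_SUN = 6.957e8            # m, solar radius
C = 2.99792458e8           # m/s, speed of light
RAD_TO_ARCSEC = 180.0 / pi * 3600.0

# Newtonian deflection of light grazing the solar limb.
delta = 2 * GM_SUN / (C**2 * R_SUN) * RAD_TO_ARCSEC
```

The result is about 0.875 arcsec; General Relativity doubles this to the observed ~1.75 arcsec, which is the factor-two difference the abstract discusses.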
Category: Relativity and Cosmology

[76] viXra:1111.0081 [pdf] submitted on 23 Nov 2011

On Artificially Creating Solar Eclipses

Authors: Dhananjay P. Mehendale
Comments: 12 pages

In this paper we report on the partial solar eclipse we created artificially. It aims at inviting those who are interested in the study of solar eclipses to set up their own laboratory to artificially create and study solar eclipses at any time of the day and at any convenient spot on the Earth. Anybody interested in the study of solar eclipses can set up such a laboratory without much expenditure. What is essentially required is a small piece of land exposed to sunlight on which to arrange the equipment; the equipment consists of a telescope, some spherical objects of appropriate size, a mechanical arrangement to hold and move a chosen spherical object, appropriate filters to protect the eyes, and a good camera to take photographs of the artificially eclipsed Sun. We report here on our initial efforts at artificially creating solar eclipses of any kind. Towards the end of the paper we provide two sample photographs of an artificially created partial solar eclipse, taken using an orange fruit in the role of the Moon, and a photograph of a naturally occurring partial solar eclipse, for the sake of demonstrating their similarity. We propose here a way to artificially create eclipses of all types, namely total, partial, or annular, in the laboratory at will. We discuss how to create solar eclipses at any location on Earth, at any daytime, and at any location of the Sun on its daytime trajectory. These artificially formed eclipses will be the same in every respect as naturally occurring eclipses due to the perfect alignment of Earth, Moon, and Sun along a straight line. The only difference between naturally occurring and artificially created solar eclipses lies in replacing the Moon by any spherical body of appropriate size, which works as an artificial moon obstructing the sunrays.
We may use any spherical body in place of the Moon, with a diameter matching that of the parallel sun beam entering the telescope, to hide the real image of the Sun behind this artificial moon.
Category: Astrophysics

[75] viXra:1111.0080 [pdf] replaced on 2011-11-26 17:12:33

Dark Matter, Dark Energy, and Elementary Particles and Forces

Authors: Thomas J. Buckholtz
Comments: 41 Pages.

Patterns link properties of six quarks and three leptons, the set of fundamental forces, and possible properties of dark matter and dark energy.
Category: Quantum Gravity and String Theory

[74] viXra:1111.0079 [pdf] submitted on 22 Nov 2011

Condensation States And Landscaping With The Theory Of Abstraction

Authors: Subhajit Ganguly
Comments: 11 pages

The Abstraction theory is applied in landscaping. A collection of objects may be made to be vast or meager depending upon the scale of observation. This idea may be developed to unite the worlds of the great vastness of the universe and the minuteness of the sub-atomic realm. Keeping a scaling ratio constant for both worlds, these may actually be converted into two self-same representatives with respect to scaling. The Laws of Physical Transactions are made use of to study Bose-Einstein condensation. As the packing density of the concerned constituents increases to a certain critical value, there may be an evolution of energy from the system.
Category: Quantum Gravity and String Theory

[73] viXra:1111.0078 [pdf] submitted on 22 Nov 2011

Finite Neutrosophic Complex Numbers

Authors: W. B. Vasantha Kandasamy, Florentin Smarandache
Comments: 222 pages

In this book for the first time the authors introduce the notion of real neutrosophic complex numbers.
Category: Algebra

[72] viXra:1111.0077 [pdf] submitted on 22 Nov 2011

DSm Super Vector Space of Refined Labels

Authors: W. B. Vasantha Kandasamy, Florentin Smarandache
Comments: 299 pages

In this book the authors for the first time introduce the notion of supermatrices of refined labels. The authors prove that super row matrices of refined labels form a group under addition. However, super row matrices of refined labels do not form a group under product; they only form a semigroup under multiplication. In this book super column matrices of refined labels and m x n matrices of refined labels are introduced and studied.
Category: Algebra

[71] viXra:1111.0076 [pdf] submitted on 22 Nov 2011

Neutrosophic Interval Bialgebraic Structures

Authors: W. B. Vasantha Kandasamy, Florentin Smarandache
Comments: 197 pages

In this book the authors for the first time introduce the notion of neutrosophic intervals and study the algebraic structures using them. Concepts like groups and fields using neutrosophic intervals are not possible. Pure neutrosophic intervals and mixed neutrosophic intervals are introduced and by the very structure of the interval one can understand the category to which it belongs.
Category: Algebra

[70] viXra:1111.0075 [pdf] replaced on 2011-12-07 13:22:35

On the Origin of Mass

Authors: Ir J.A.J. (Hans) van Leunen
Comments: 11 Pages. This paper is part of the Hilbert Book Model

Mass is caused by fields of elementary particles that are capable of creating cavities at their center. Another cause is the presence of a different geometric anomaly, such as a black hole.
Category: Quantum Physics

[69] viXra:1111.0074 [pdf] replaced on 2011-12-07 13:31:31

On the Origin of Physical Fields

Authors: Ir J.A.J. (Hans) van Leunen
Comments: 3 Pages. This paper is part of the Hilbert Book Model

Physical fields form the solution of nature for the problem that the set of observations is overwhelming the set of underlying variables.
Category: Quantum Physics

[68] viXra:1111.0073 [pdf] replaced on 2012-06-15 13:36:51

Kullback-Leibler Simplex

Authors: Popon Kangpenkae
Comments: 12 Pages.

This technical reference presents the functional structure and the algorithmic implementation of the KL (Kullback-Leibler) simplex. It details the simplex approximation and fusion. The KL simplex is a fundamental, robust, adaptive informatics agent for computational research in economics, finance, games, and mechanisms. From this perspective the study provides comprehensive results to facilitate future work in such areas.
Category: Statistics

[67] viXra:1111.0072 [pdf] replaced on 2018-10-31 09:59:19

Understanding Electromagnetism: a New Approach

Authors: Kunwar Jagdish Narain
Comments: 27 Pages. 7 Figures

Because all electrons, nucleons, and other particles undergo a persistent spin motion without having any source of infinite energy, they should have a unique structure that keeps them persistently spinning and provides all the properties that they display. In addition, there should be some reason or purpose why they show a persistent spin motion, because in nature nothing occurs without a reason or purpose. Therefore, the unique structures of electrons and nucleons, their properties, and the purpose of their persistent spin motion have been determined. The results of these determinations provide knowledge of a new force possessing the characteristics of the nuclear force, with both attractive and repulsive components, and very clear and complete explanations of: 1) all the phenomena; 2) all the properties and effects of their systems; and 3) the structures of their systems, e.g., deuterons, alpha particles, and nuclei, that are generated by these particles. The present study focuses on providing an understanding of: 1) how electrons are bound together in their beams against the repulsive Coulomb force generated between them due to their like charges; 2) how electromagnetism is generated in electron beams and current-carrying substances; 3) which type of magnetism the generated electromagnetism happens to be; 4) how a magnetic field, which possesses direction and occurs in a plane perpendicular to the direction of flow of electrons, is generated; 5) how, in electron orbits and current-carrying closed loops, magnetic north and south poles are created and behave as magnetic dipoles. The present study also includes speculation about two possible very important effects generated in electric-current-carrying closed loops.
Category: Condensed Matter

[66] viXra:1111.0071 [pdf] replaced on 2012-01-30 08:54:50

Do We Really Understand the Solar System?

Authors: Matti Pitkänen
Comments: 20 Pages.

The recent experimental findings have shown that our understanding of the solar system is surprisingly fragmentary. As a matter of fact, so fragmentary that even new physics might find a place in the description of phenomena like the precession of equinoxes and the recent discoveries about the bullet-like shape of the heliosphere and the strong magnetic fields near its boundary, bringing to mind incompressible fluid flow around an obstacle. The TGD inspired model is based on the heuristic idea that stars are like pearls in a necklace defined by long magnetic flux tubes carrying dark matter and a strong magnetic field responsible for dark energy, possibly accompanied by the analog of the solar wind. The heliosphere would be like a bubble in the flow defined by the magnetic field inside the flux tube, inducing its local thickening. A possible interpretation is as a bubble of ordinary and dark matter in the flux tube containing dark energy. This would provide a beautiful overall view about the emergence of stars and their heliospheres as a phase transition transforming dark energy into dark and visible matter. Among other things, the magnetic walls surrounding the solar system would shield it from cosmic rays.
Category: Astrophysics

[65] viXra:1111.0070 [pdf] replaced on 2012-01-30 08:57:24

TGD Inspired Vision About Entropic Gravitation

Authors: Matti Pitkänen
Comments: 21 Pages.

Entropic gravity (EG), introduced by Verlinde, has stimulated great interest. One of the most interesting reactions is the commentary of Sabine Hossenfelder. The article of Kobakhidze relies on experiments supporting the existence of Schrödinger amplitudes of the neutron in the gravitational field of Earth and develops an argument suggesting that the EG hypothesis, in the form in which it excludes gravitons, is wrong. Indeed, the mere existence of gravitational bound states suggests strongly the existence of transitions between them by graviton emission. The following arguments represent the TGD inspired view about what entropic gravity (EG) could be if one throws out the unnecessary assumptions such as emergent dimensions and the absence of gravitons. Also the GRT limit of TGD is discussed, leading to rather strong implications concerning the TGD counterparts of blackholes.

  1. If one does not believe in TGD, one could start from the idea that stochastic quantization or something analogous to it might imply something analogous to entropic gravity (EG). What is required is the replacement of the path integral with a functional integral. More precisely, one has a functional integral in which the real contribution to the Kähler action of the preferred extremal from the Euclidian regions of the space-time surface to the exponent represents the Kähler function, and the imaginary contribution from the Minkowskian regions serves as a Morse function, so that the counterpart of Morse theory in WCW is obtained in the stationary phase approximation, in accordance with the vision about TGD as almost topological QFT. The exponent of the Kähler function is the new element making the functional integral well-defined, and the presence of the phase factor gives rise to the interference effects characteristic of quantum field theories, although one does not integrate over all space-time surfaces. In zero energy ontology one however has pairs of 3-surfaces at the opposite light-like boundaries of CD, so that something very much analogous to a path integral is obtained.

  2. Holography requires that everything reduces to the level of 3-metrics and, more generally, to the level of 3-D field configurations. Something like this happens if one can approximate the path integral with the integral over small deformations of the minima of the action. This also happens in completely integrable quantum field theories.

    The basic vision behind quantum TGD is that this approximation is much nearer to reality than the original theory. In other words, holography is realized in the sense that to a given 3-surface the metric of WCW assigns a unique space-time, and this space-time serves as the analog of a Bohr orbit and allows one to realize 4-D general coordinate invariance in the space of 3-surfaces, so that the classical theory becomes an exact part of the quantum theory. This point of view will be adopted in the following also in the framework of general relativity, where one considers abstract 4-geometries instead of 4-surfaces: the functional integral should be over 3-geometries, with the definition of the Kähler metric assigning to a 3-geometry a unique 4-geometry.

  3. A powerful constraint is that the functional integral is free of divergences. Both 4-D path integral and stochastic quantization for gravitation fail in this respect due to local divergences (in super-gravity the situation might be different). The TGD inspired approach, reducing quantum TGD to almost topological QFT with a Chern-Simons term and a constraint term depending on the metric associated with preferred 3-surfaces, allows one to circumvent this difficulty. This picture will be applied to the quantization of GRT, and one could see the resulting theory as a guess for what the GRT limit of TGD could be. The first guess that the Kähler function corresponds to the Einstein-Maxwell action for this kind of preferred extremal turns out to be correct. An essential and radically new element of TGD is the possibility of space-time regions with Euclidian signature of the induced metric replacing the interiors of blackholes: this element will be assumed also now. The condition that CP2 represents an extremal of EYM action requires a cosmological constant in the Euclidian regions determined by the constant curvature of CP2, and one can ask whether the average value of the cosmological constant over 3-space could correspond to the cosmological constant explaining the accelerating cosmic expansion.

  4. Entropic gravity is generalized in TGD framework so that all interactions are entropic: the reason is that in zero energy ontology (ZEO) the S-matrix is replaced with M-matrix defining a square root of thermodynamics in a well defined sense.

Category: Relativity and Cosmology

[64] viXra:1111.0069 [pdf] submitted on 19 Nov 2011

Gravity and Mass not Fundamental?

Authors: Paul Karl Hoiland
Comments: 4 pages

In this article a suggestion is raised whereby gravity and mass are both emergent, not fundamental.
Category: Relativity and Cosmology

[63] viXra:1111.0068 [pdf] submitted on 19 Nov 2011

The Cause of Gravitation

Authors: Ir J.A.J. van Leunen
Comments: 2 pages

The interpretation of the causality of the gravitation field can be changed considerably.
Category: Quantum Physics

[62] viXra:1111.0067 [pdf] submitted on 19 Nov 2011

Essentials of Quantum Movement

Authors: Ir J.A.J. van Leunen
Comments: 2 pages

This is a concise list of the main points of “Continuity Equation for Quaternionic Quantum Fields”.
Category: Quantum Physics

[61] viXra:1111.0066 [pdf] submitted on 19 Nov 2011

Hilbert Book Model Essentials

Authors: Ir J.A.J. van Leunen
Comments: 3 pages

The essentials of the Hilbert book model are listed.
Category: Quantum Physics

[60] viXra:1111.0065 [pdf] replaced on 2011-12-07 14:50:02

Continuity Equation for Quaternionic Quantum Fields

Authors: Ir J.A.J. (Hans) van Leunen
Comments: 26 Pages. This paper is part of the Hilbert Book Model

The continuity equation is specified in quaternionic format. This means that the density and current of the considered “charge” are combined in a quaternionic probability amplitude distribution (QPAD). Next, the Dirac equation is also put in quaternionic format. It is shown that it is a special form of the continuity equation, and that two other quaternionic continuity equations can be derived from the quaternionic Dirac equation. Further, a whole series of equivalent equations of motion is derived from the possible sign flavor couplings. The corresponding particles are identified as members of the standard model. The coupling constant of the particles can be computed from their fields. In this way all known particles in the standard model can be identified.
Category: Quantum Physics

[59] viXra:1111.0064 [pdf] submitted on 18 Nov 2011

Femtotechnology: Stability of AB-needles. Fantastic Properties and Application

Authors: A.A. Bolonkin
Comments: 19 pages

In the article “Femtotechnology: Nuclear AB-Matter with Fantastic Properties” [1], American Journal of Engineering and Applied Sciences, 2 (2), 2009, p. 501-514, the author offered and considered possible super-strong nuclear matter. But many readers asked about the stability of this nuclear matter. It is well known that conventional nuclear matter having more than 92 protons or more than 238 nucleons becomes unstable. In the given work the author shows that special artificial forms of nuclear AB-matter make it stable and give it fantastic properties. For example, with the offered AB-needle you can pierce any body without any damage, support a motionless satellite, reach another planet, or research the Earth's interior. These forms of nuclear matter do not exist in Nature now, but nanotubes also do not exist in Nature. That is, artificial matter is made by man. AB-matter does not exist yet either, but research into its possibility, stability, and properties is necessary for creating it.
Category: Nuclear and Atomic Physics

[58] viXra:1111.0063 [pdf] replaced on 2016-02-18 08:19:25

Advancements Over a Geometrodynamical Model

Authors: Sandro Antonelli
Comments: 15 Pages.

This article is mainly conceived to attract more interest to a recent trustworthy development. The dynamics of the relativistic Space-Time structure, as discussed in the model, exhibits unforeseen analogies with the electromagnetic theory. As a direct continuation of the analysis of gravitational wave propagation in free space, one should realize (unlike the Lorentz gauge in General Relativity) that the polarization state is in general a mixture of six independent modes, as many as the independent components of the Riemann tensor determining the tidal forces, although one can always recover two polarizations for particular symmetry conditions on the direction of propagation and observation. Actually, in this gravitational framework, at least for one polarization state, transverse waves are expected to propagate causing equal in-phase deformation displacements for a symmetric source, not in counterphase as in General Relativity. To this aim a new interferometry methodology is designed. Calculation of gravitational power losses for the Keplerian system PSR 1913+16 in the solution by approximations of the inhomogeneous problem is carried out to first order, which allows the assessment of a second gravitational constant.
Category: Relativity and Cosmology

[57] viXra:1111.0062 [pdf] replaced on 2011-11-28 16:33:33

A New Koide Triplet: Strange, Charm, Bottom.

Authors: Alejandro Rivero
Comments: 5 Pages.

With the negative sign for $\sqrt m_s$, the quarks strange, charm, and bottom make a Koide tuple. It continues the c-b-t tuple recently found by Rodejohann and Zhang and, more peculiarly, it is quasi-orthogonal to the original charged lepton triplet.
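
The Koide relation in question is K = (m1 + m2 + m3)/(√m1 + √m2 + √m3)^2 = 2/3. A small Python check, with PDG-style masses in MeV assumed here purely for illustration, including the negative sign on √m_s that the abstract prescribes:

```python
from math import sqrt

def koide(m1, m2, m3, signs=(1, 1, 1)):
    """K = (m1+m2+m3) / (s1*sqrt(m1) + s2*sqrt(m2) + s3*sqrt(m3))**2."""
    num = m1 + m2 + m3
    den = sum(s * sqrt(m) for s, m in zip(signs, (m1, m2, m3))) ** 2
    return num / den

# Charged leptons (MeV): the original Koide triplet, K = 2/3 to high accuracy.
k_leptons = koide(0.510999, 105.6584, 1776.86)

# Strange, charm, bottom (MeV, illustrative running-mass values),
# with the negative sign taken for sqrt(m_s) as in the abstract.
k_scb = koide(93.4, 1270.0, 4180.0, signs=(-1, 1, 1))
```

With these illustrative masses the lepton triplet reproduces 2/3 almost exactly, while the s-c-b combination lands within about a percent of it; how close the quark tuple really is depends on which running masses one adopts.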
Category: High Energy Particle Physics

[56] viXra:1111.0061 [pdf] submitted on 15 Nov 2011

Black Hole Horizon Curvature Dependent Balance Between Plasma Creation and e-e+ Annihilation in Quantum FFF Theory.

Authors: Leo Vuyk
Comments: 39 pages

In particle physics it is an interesting challenge to postulate that the FORM and structure of elementary particles are the origin of the different FUNCTIONS of these particles. In former papers we presented possible solutions based on complex 3-D ring shaped particles, which are equipped with three point-like hinges and one splitting point, all four points divided equally over the ring surface. The 3-D ring itself is postulated to represent the “Virgin Mother” of all other particles and is coined the Higgs particle, supplied with the 3 hinges coded (OOO), which gives the particle the opportunity to transform, after some sort of mechanical collision with other particles, into a different shape with a different function. Thus in the (Q-FFF) Quantum Function Follows Form theory, the Higgs is interpreted as a massless transformer particle able to create the universe by transformation of its shape after real mechanical collision and merging with other shaped particles into complex and compound knots like Quarks. The best place to create such plasma out of the “nothingness” of the Higgs vacuum seems to be not only directly after the black hole splitting big bang, but even more at the event horizon of new paradigm black holes. However, the balance between e-e+ annihilation and plasma creation seems to depend on the curvature of the black hole event horizon, or better, the size of the black hole. Smaller black holes have stronger horizon curvature relative to the Higgs vacuum structure (or Planck scale) and as a result a better balance between Quark production and e-e+ annihilation. The Tarantula and Eagle nebulae seem to show us this difference.
Category: Astrophysics

[55] viXra:1111.0059 [pdf] submitted on 17 Nov 2011

The New Prime Theorems (1291)-(1340)

Authors: Chun-Xuan Jiang
Comments: 90 pages

Using Jiang function we are able to prove almost all prime problems in prime distribution. This is the Book proof. No great mathematicians study prime problems and prove Riemann hypothesis in AIM, CLAYMA, IAS, THES, MPIM, MSRI. In this paper using Jiang function J2 (ω) we prove that the new prime theorems (1291)-(1340) contain infinitely many prime solutions and no prime solutions. From (6) we are able to find the smallest solution πk(N0,2) ≥ 1. This is the Book theorem.
Category: Number Theory

[54] viXra:1111.0058 [pdf] replaced on 2012-03-16 03:43:33

Quantum Arithmetics and the Relationship Between Real and P-Adic Physics

Authors: Matti Pitkänen
Comments: 37 Pages.

p-Adic physics involves two only partially understood questions.

  1. Is there a duality between real and p-adic physics? What is its precise mathematical formulation? In particular, what is the concrete map taking p-adic physics at long scales (in the real sense) to real physics at short scales? Can one find a rigorous mathematical formulation of the canonical identification induced by the map p → 1/p in the pinary expansion of a p-adic number, such that it is both continuous and respects symmetries?

  2. What is the origin of the p-adic length scale hypothesis suggesting that primes near powers of two are physically preferred? Why are Mersenne primes especially important?

A possible answer to these questions relies on the following ideas inspired by the model of the Shnoll effect. The first piece of the puzzle is the notion of quantum arithmetics, formulated in a non-rigorous manner already in the model of the Shnoll effect.

  1. Quantum arithmetics is induced by the map of primes to quantum primes by the standard formula. A quantum integer is obtained by mapping the primes in the prime decomposition of an integer to quantum primes. The quantum sum is induced by the ordinary sum by requiring that also the sum commutes with the quantization.

  2. The construction is especially interesting if the integer defining the quantum phase is prime. One can introduce the notion of quantum rational, defined as a series in powers of the preferred prime defining the quantum phase. The coefficients of the series are quantum rationals for which neither numerator nor denominator is divisible by the preferred prime.

  3. p-Adic--real duality can be identified as the analog of the canonical identification induced by the map p → 1/p in the pinary expansion of a quantum rational. This map takes p-adic and real physics to each other, and real long distances to short ones and vice versa. This map is especially interesting as a map defining cognitive representations.
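
The canonical identification in point 3 can be made concrete for ordinary integers; a minimal Python sketch of the map sum a_k p^k → sum a_k p^(-k) on pinary digits (the quantum-rational refinement of the abstract is left aside):

```python
def canonical_identification(n, p):
    """Map n = sum a_k p^k (pinary digits a_k) to the real number sum a_k p^(-k)."""
    assert n >= 0 and p >= 2
    result, weight = 0.0, 1.0
    while n > 0:
        result += (n % p) * weight  # lowest remaining pinary digit a_k
        n //= p
        weight /= p                 # p^k is replaced by p^(-k)
    return result
```

For example, with p = 2 the integer 4 = 2^2 maps to 2^(-2) = 0.25: large powers of p go to small ones, which is the "long distances to short ones" behavior described above.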

Quantum arithmetics inspires the notion of quantum matrix group as a counterpart of a quantum group for which the matrix elements are ordinary numbers. Quantum classical correspondence and the notion of finite measurement resolution, realized at the classical level in terms of discretization, suggest that these two views about quantum groups are closely related. The preferred prime p defining the quantum matrix group is identified as the p-adic prime, and the canonical identification p → 1/p is a group homomorphism so that symmetries are respected.

  1. The quantum counterparts of the special linear groups SL(n,F) exist always. For the covering group SL(2,C) of SO(3,1) this is the case, so that 4-dimensional Minkowski space is in a very special position. For orthogonal, unitary, and symplectic groups the quantum counterpart exists only if quantum arithmetics is characterized by a prime rather than a general integer and when the number of powers of p for the generating elements of the quantum matrix group satisfies an upper bound characterizing the matrix group.

  2. For the quantum counterparts of SO(3) (SU(2)/SU(3)) the orthogonality conditions state that at least some multiples of the prime characterizing quantum arithmetics are sums of three (four/six) squares. For SO(3) this condition is strongest and is satisfied for all integers which are not of the form n = 2^(2r)(8k+7). The number r3(n) of representations as a sum of squares is known, and r3(n) is invariant under the scalings n → 2^(2r)n. This means scaling by 2 for the integers appearing in the square sum representation.

  3. r3(n) is proportional to the so-called class number function h(-n) telling how many non-equivalent decompositions algebraic integers have in the quadratic algebraic extension generated by (-n)^(1/2).
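
The statements about r3(n) and the 2^(2r)(8k+7) exclusion (Legendre's three-square theorem) are easy to test by brute force. A short Python sketch counting ordered, signed integer triples; the choice of test integers is illustrative:

```python
from math import isqrt

def r3(n):
    """Number of representations n = x^2 + y^2 + z^2 over ordered integer triples."""
    count, bound = 0, isqrt(n)
    for x in range(-bound, bound + 1):
        for y in range(-bound, bound + 1):
            z2 = n - x * x - y * y
            if z2 < 0:
                continue
            z = isqrt(z2)
            if z * z == z2:
                count += 2 if z > 0 else 1  # z and -z, or z = 0 once
    return count

def excluded(n):
    """True iff n = 4^r * (8k + 7), i.e. r3(n) = 0 by Legendre's three-square theorem."""
    while n % 4 == 0:
        n //= 4
    return n % 8 == 7
```

For instance r3(7) = 0 while r3(6) = 24, and r3(4n) = r3(n), which is the scaling invariance under n → 2^(2r)n quoted above.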

The findings about quantum SO(3) suggest a possible explanation for p-adic length scale hypothesis and preferred p-adic primes.

  1. The basic idea is that the quantum matrix group, which is discrete, is very large for preferred p-adic primes. If cognitive representations correspond to the representations of the quantum matrix group, the representational capacity of cognitive representations is high, and such primes are survivors in the algebraic evolution leading to algebraic extensions of increasing dimension.

  2. The preferred primes correspond to a large value of r3(n). It is enough that some of their multiples do so (the 2^(2r) multiples of these do so automatically). Indeed, for Mersenne primes and Mersenne integers one has r3(n) = 0, which was in conflict with the original expectations. For integers n = 2M_m, however, r3(n) is a local maximum, at least for the small integers studied numerically.

  3. The requirement that the notion of quantum integer applies also to algebraic integers in quadratic extensions of rationals requires that the preferred primes (p-adic primes) satisfy p = 8k+7. Quite generally, for the integers n = 2^(2r)(8k+7) not representable as a sum of three squares, the decomposition of ordinary integers into algebraic primes in the quadratic extensions defined by (-n)^(1/2) is unique. Therefore also the corresponding quantum algebraic integers are unique for a preferred ordinary prime if it is prime also in the algebraic extension. If this were not the case, two different decompositions of one and the same integer would be mapped to different quantum integers. Therefore the generalization of quantum arithmetics defined by any preferred ordinary prime, which does not split into a product of algebraic primes, is well-defined for p = 8k+7.

  4. This argument was for quadratic extensions, but also more complex extensions defined by higher polynomials exist. The allowed extensions should allow unique decomposition of integers into algebraic primes. The prime defining the quantum arithmetics should not decompose into algebraic primes. If the algebraic evolution leads to algebraic extensions of increasing dimension, it gradually selects preferred primes as survivors.

Category: Quantum Gravity and String Theory

[53] viXra:1111.0057 [pdf] replaced on 2012-01-30 09:17:38

Is Kähler Action Expressible in Terms of Areas of Minimal Surfaces?

Authors: Matti Pitkänen
Comments: 5 Pages.

The general form of ansatz for preferred extremals implies that the Coulombic term in Kähler action vanishes so that it reduces to 3-dimensional surface terms in accordance with general coordinate invariance and holography. The weak form of electric-magnetic duality in turn reduces this term to Chern-Simons terms.

The strong form of General Coordinate Invariance implies effective 2-dimensionality (holding true in finite measurement resolution) so that a strong form of holography also emerges. The expectation is that the Chern-Simons terms in turn reduce to 2-dimensional surface terms.

The only physically interesting possibility is that these 2-D surface terms correspond to the areas of minimal surfaces defined by string world sheets and partonic 2-surfaces appearing in the solution ansatz for the preferred extremals. String world sheets would give to the Kähler action an imaginary contribution having an interpretation as a Morse function. This contribution would be proportional to their total area and assignable to the Minkowskian regions of the space-time surface. A similar but real string world sheet contribution defining the Kähler function comes from the Euclidian space-time regions and should be equal to the contribution of the partonic 2-surfaces. A natural conjecture is that the absolute values of all three areas are identical: this would realize the duality between string world sheets and partonic 2-surfaces and the duality between Euclidian and Minkowskian space-time regions.

Zero energy ontology combined with the TGD analog of the large Nc expansion inspires an educated guess about the coefficient of the minimal surface terms, and a beautiful connection with p-adic physics and with the notion of finite measurement resolution emerges. The 't Hooft coupling λ should be proportional to the p-adic prime p characterizing the particle. This means extremely fast convergence of the counterpart of the large Nc expansion in TGD since it becomes completely analogous to the pinary expansion of the partition function in p-adic thermodynamics. Also the twistor description and its dual have a nice interpretation in terms of zero energy ontology. This duality permutes massive wormhole contacts, which can be off mass shell, with wormhole throats, which are always massless (also for the internal lines of the generalized Feynman graphs).

Category: Quantum Gravity and String Theory

[52] viXra:1111.0056 [pdf] replaced on 2012-01-30 09:29:29

An Attempt to Understand Preferred Extremals of Kähler Action

Authors: Matti Pitkänen
Comments: 23 Pages.

There are pressing motivations for understanding the preferred extremals of Kähler action. For instance, the conformal invariance of string models naturally generalizes to a 4-D invariance defined by the quantum Yangian of a quantum affine algebra (Kac-Moody type algebra) characterized by two complex coordinates, therefore explaining naturally the effective 2-dimensionality. One problem is how to assign a complex coordinate to the string world sheet having a Minkowskian signature of metric. One can hope that the understanding of preferred extremals could allow one to identify two preferred complex coordinates, whose existence is also suggested by the number theoretical vision giving a preferred role to the rational points of partonic 2-surfaces in preferred coordinates. The best one could hope for is a general solution of field equations in accordance with the hints that TGD is an integrable quantum theory.

A lot is known about the properties of preferred extremals, and just by trying to integrate all this understanding one might gain new visions. The problem is that all these arguments are heuristic and rely heavily on physical intuition. The following considerations relate to the space-time regions having a Minkowskian signature of the induced metric. The attempt to generalize the construction also to Euclidian regions could be very rewarding. Only a humble attempt to combine various ideas into a more coherent picture is in question.

The core observations and visions are the following.

  1. Hamilton-Jacobi coordinates for M4 define natural preferred coordinates for the Minkowskian space-time sheet and might allow one to identify the string world sheets for X4 as those for M4. Hamilton-Jacobi coordinates consist of a light-like coordinate m and its dual defining a local 2-plane M2 ⊂ M4, and transversal complex coordinates (w, w*) for a plane E2(x) orthogonal to M2(x) at each point of M4. Clearly, hyper-complex analyticity and complex analyticity are in question.

  2. Space-time sheets allow a slicing by string world sheets (partonic 2-surfaces) labelled by partonic 2-surfaces (string world sheets).

  3. The quaternionic planes of the octonion space containing a preferred hyper-complex plane are labelled by CP2, which might be called CP2mod. The identification CP2=CP2mod motivates the notion of M8-M4×CP2 duality. It also inspires a concrete solution ansatz assuming the equivalence of two different identifications of the quaternionic tangent space of the space-time sheet and implying that string world sheets can be regarded as strings in the 6-D coset space G2/SU(3). The group G2 of octonion automorphisms has already appeared earlier in the TGD framework.

  4. The duality between partonic 2-surfaces and string world sheets in turn suggests that the CP2=CP2mod conditions reduce to string model for partonic 2-surfaces in CP2=SU(3)/U(2). String model in both cases could mean just hypercomplex/complex analyticity for the coordinates of the coset space as functions of hyper-complex/complex coordinate of string world sheet/partonic 2-surface.

The considerations of this section lead to a revival of an old very ambitious and very romantic number theoretic idea.

  1. To begin with, express octonions in the form o = q1 + Iq2, where the qi are quaternions and I is an octonionic imaginary unit in the complement of a fixed quaternionic sub-space of the octonions. Map the preferred coordinates of H=M4×CP2 to the octonionic coordinate, form an arbitrary octonion analytic function having an expansion with real Taylor or Laurent coefficients to avoid problems due to non-commutativity and non-associativity, and map the outcome to a point of H to get a map H → H. This procedure is nothing but a generalization of the Wick rotation to get an 8-D generalization of an analytic map.

  2. Identify the preferred extremals of Kähler action as surfaces obtained by requiring the vanishing of the imaginary part of an octonion analytic function. Partonic 2-surfaces and string world sheets would correspond to commutative sub-manifolds of the space-time surface and of the imbedding space and would emerge naturally. The ends of braid strands at the partonic 2-surface would naturally correspond to the poles of the octonion analytic function. This would mean a huge generalization of the conformal invariance of string models to octonionic conformal invariance and an exact solution of the field equations of TGD, and presumably of quantum TGD itself.
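The caution in point 1 about non-commutativity and non-associativity is easy to exhibit with the Cayley-Dickson construction (a generic sketch, not the paper's formalism): quaternions anticommute, octonions fail associativity, yet powers of a single octonion still associate, which is why restricting to real Taylor or Laurent coefficients avoids ordering ambiguities.

```python
import numpy as np

def cd_mult(a, b):
    """Cayley-Dickson product on arrays of length 2^n: reals -> complex ->
    quaternions -> octonions.  Convention: (p, q)(r, s) = (pr - s*q, sp + qr*)."""
    if len(a) == 1:
        return a * b
    h = len(a) // 2
    p, q, r, s = a[:h], a[h:], b[:h], b[h:]
    conj = lambda x: np.concatenate([x[:1], -x[1:]])  # negate imaginary parts
    return np.concatenate([
        cd_mult(p, r) - cd_mult(conj(s), q),
        cd_mult(s, p) + cd_mult(q, conj(r)),
    ])

e = np.eye(8)                                  # octonion basis units e0..e7
assert np.allclose(cd_mult(e[1], e[2]), -cd_mult(e[2], e[1]))   # non-commutative
lhs = cd_mult(cd_mult(e[1], e[2]), e[4])
rhs = cd_mult(e[1], cd_mult(e[2], e[4]))
assert not np.allclose(lhs, rhs)                                # non-associative
o = np.arange(1.0, 9.0)                        # a generic octonion
assert np.allclose(cd_mult(cd_mult(o, o), o),
                   cd_mult(o, cd_mult(o, o)))  # but powers of one octonion associate
```

Because powers of a single octonionic coordinate are unambiguous, a series with real coefficients defines a well-posed "octonion analytic" map in the sense the abstract invokes.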

Category: Quantum Gravity and String Theory

[51] viXra:1111.0055 [pdf] replaced on 2012-01-30 09:31:36

The Master Formula for the U-Matrix Finally Found?

Authors: Matti Pitkänen
Comments: 11 Pages.

In zero energy ontology the U-matrix replaces the S-matrix as the fundamental object characterizing the predictions of the theory. The U-matrix is defined between zero energy states, and its orthogonal rows define what I call M-matrices, which are analogous to the thermal S-matrices of thermal QFTs. The M-matrix defines the time-like entanglement coefficients between the positive and negative energy parts of the zero energy state. M-matrices are identifiable as hermitian square roots of density matrices. In this article it is shown that M-matrices form in a natural manner a generalization of a Kac-Moody type algebra acting as symmetries of M-matrices and the U-matrix, and that the space of zero energy states therefore has a Lie algebra structure, so that quantum states act as their own symmetries. The generators of this algebra are multilocal with respect to partonic 2-surfaces, just as Yangian algebras are multilocal with respect to points of Minkowski space, and they therefore define a generalization of the Yangian algebra appearing in the Grassmannian twistor approach to N=4 SUSY.
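The defining property invoked here — an M-matrix as a hermitian square root of a density matrix — can be illustrated with generic linear algebra (nothing TGD-specific; the matrices below are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random density matrix: hermitian, positive semi-definite, unit trace.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Hermitian square root via eigendecomposition: M = U sqrt(D) U^dagger.
w, U = np.linalg.eigh(rho)
M = (U * np.sqrt(np.clip(w, 0, None))) @ U.conj().T

assert np.allclose(M, M.conj().T)           # M is hermitian
assert np.allclose(M @ M, rho)              # M squares to the density matrix
assert np.isclose(np.trace(rho).real, 1.0)  # rho has unit trace
```

Any unitary V also gives a (non-hermitian) square root M V, which is why the hermiticity condition is the non-trivial part of the definition.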
Category: Quantum Gravity and String Theory

[50] viXra:1111.0054 [pdf] replaced on 2011-11-29 10:06:32

The Turn-up in the Differential GCR Proton Energy Spectrum Below 100 Mev

Authors: Henry D. May
Comments: 6 Pages. Deleted first term from Eqs. 1 and 2 because it is zero.

The high energy portion of the galactic cosmic ray proton spectrum in the vicinity of Earth, above about 500 MeV per nucleon, can be well approximated by the “force field” model, whose only formal parameter is the modulation potential. Here I show that the entire spectrum can be well approximated by the force field model when the force field is treated as an electric field. The analysis also explains the origin of the anomalously energetic ions in the low energy tail of the solar wind.
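The "force field" model referred to here is the standard Gleeson-Axford approximation: the modulated flux is the local interstellar spectrum (LIS) shifted by the modulation potential and scaled by a phase-space factor. A sketch under stated assumptions — the power-law LIS (its normalization and index) is a toy of mine, not the paper's fit:

```python
E0 = 938.272  # proton rest energy, MeV

def j_lis(E):
    """Assumed toy LIS: power law in total energy (illustration only)."""
    return 1.9e4 * (E + E0) ** -2.78

def j_modulated(E, phi):
    """Force-field approximation: j(E) = j_LIS(E + Phi) * E(E + 2E0) /
    ((E + Phi)(E + Phi + 2E0)), with Phi = Z e phi / A (= phi in MeV for protons).
    E is kinetic energy in MeV, phi the modulation potential in MV."""
    Phi = phi
    return j_lis(E + Phi) * E * (E + 2 * E0) / ((E + Phi) * (E + Phi + 2 * E0))

# Modulation suppresses the low-energy flux far more than the high-energy flux.
assert j_modulated(100.0, 500.0) < j_lis(100.0)
ratio_low = j_modulated(100.0, 500.0) / j_lis(100.0)
ratio_high = j_modulated(10000.0, 500.0) / j_lis(10000.0)
assert ratio_low < ratio_high
```

The single parameter phi is what the abstract calls the modulation potential; the paper's point is about reinterpreting it below ~500 MeV, which this generic sketch does not attempt.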
Category: Astrophysics

[49] viXra:1111.0053 [pdf] submitted on 14 Nov 2011

Absurdity of Relativity and Root of Its Success

Authors: Ji Qi, YinLing Jiang
Comments: 49 pages

There has existed the focus of debate between the viewpoint of space-time of classical physics and that of relativity for almost a hundred years. Which is more reasonable on earth? The fundamental principle of the theory of relativity and its basic transformation will be discussed in detail in this study. By discussion, I hope we can see the essence of the theory of relativity clearly and make people profoundly understand the fundamental conception of physics on time and space. I wish we can return a sunny sky to physics.
Category: Relativity and Cosmology

[48] viXra:1111.0052 [pdf] submitted on 14 Nov 2011

Universe Invariant and Universe Specific Concepts in the Niv Bible

Authors: James R. Akerlund
Comments: 4 pages

The point of this paper is for you to see universe invariant (UI) and universe specific (US) aspects in your own life. We do this by showing verses from the Bible that seem to show how the world is divided into UI and US realms. This will broaden your understanding of UI and US. We give specific examples from 22 verses and suggest 6 more verses with an additional verse as a counterexample to the previous 28 verses. The author uses verses from the New International Version (NIV) Bible and does not attempt to interpret the verses from their original languages.
Category: Religion and Spiritualism

[47] viXra:1111.0050 [pdf] replaced on 2013-04-17 10:51:37

Oreka Particle Theory: Explanations of Dark Energy, Dark Matter, and Gravity Mysteries.

Authors: Nikolas S Lewis
Comments: 12 Pages.

This article describes Oreka Particle Theory, in which gravity acting on particles without mass creates an energy imbalance, which results in the creation of an energy called oreka particles. Oreka theory requires that massless particles do not create a gravitational pull, and gives evidence for this, similar to what disproved the Tired Light Theory. In Oreka Theory, dark energy is composed of oreka particles, and they are also responsible for why spiral galaxies spin the way they do. Oreka particles are also not a fluid.
Category: Relativity and Cosmology

[46] viXra:1111.0049 [pdf] submitted on 12 Nov 2011

Positive Energy Solution to Exotic Energy Requirement of Any Generic Warp Drive Metric

Authors: Paul Karl Hoiland
Comments: 4 pages

In this article I look at some of the math behind replacing the exotic energy of any warp metric with an inflation field with a focus on a simple generic solution to the frame switch in the recent CERN superluminal neutrino detection to that of a Newtonian metric.
Category: Relativity and Cosmology

[45] viXra:1111.0048 [pdf] submitted on 12 Nov 2011

Quantum Origins of the Question of God.

Authors: Paul Karl Hoiland
Comments: 6 pages

By taking a closer look at true quantum theory stemming from the idea of Hyperspace and its effects on lightcones versus our observational state, I show that the question of whether there is a God cannot honestly be eliminated from what is possible. I also briefly discuss the possibility of the human soul or spirit from the aspect of Observation and Quantum Theory in the first place.
Category: Religion and Spiritualism

[44] viXra:1111.0047 [pdf] submitted on 12 Nov 2011

Problems with Warp Drive Examined

Authors: Paul Karl Hoiland
Comments: 11 pages

In a short examination of some of the major problems raised as objections to Doctor Alcubierre's original proposal of warp drive within General Relativity(1) by many authors, in both peer-reviewed publications and archive articles, one discovers that solutions to these problems do exist if one is willing to consider a modified version of that original proposal. It is the finding of this author that warp drive cannot properly be ruled out at this time, at least as a possible future method of sub-light propulsion, with the possible added benefit of working as a superluminal field propulsion drive.
Category: Relativity and Cosmology

[43] viXra:1111.0046 [pdf] submitted on 12 Nov 2011

The Anthropic Principle and Intelligent Design Continued

Authors: Paul Karl Hoiland
Comments: 3 pages

In this short article I continue to discuss the idea of could the Universe or even multiverse be a product of Intelligent Design. I avoid the assumptions used by Christian writers on this subject and simply point out an alternative venue under which this whole idea could be studied in science. I also supply a possible solution to the origin of life and how life started here.
Category: History and Philosophy of Physics

[42] viXra:1111.0045 [pdf] submitted on 12 Nov 2011

The Anthropic Principle and Intelligent Design

Authors: Paul Karl Hoiland
Comments: 6 pages

In this short article I discuss the Anthropic Principle, some of the possible solutions, and focus on the idea of could the Universe or even multiverse be a product of Intelligent Design. I avoid the assumptions used by Christian writers on this subject and simply point out an alternative venue under which this whole idea could be studied in science.
Category: History and Philosophy of Physics

[41] viXra:1111.0044 [pdf] submitted on 11 Nov 2011

Towards a More Realistic Gravitomagnetic Displacement Drive

Authors: Paul Karl Hoiland
Comments: 47 pages

I have been at both physics and Electrical Engineering going on about 32 years now. But I have had two major interests all along. One is to better understand the Cosmos we live in. The other stems back to two events in my life several years removed from each other. The first event happened in Texas back in 1973 while taking a short trip across east Texas with my folks. The event was witnessing something that would be classified as a close encounter of the first type. What my folks and I saw out in east Texas was a very brilliant glowing object, circular in shape, due south of the road we were on, above a cattle field. My folks and I remember pulling over to look at it and we remember driving away afterwards. But we do not remember anything in between. The second event took place back in 1983 in Tucson, Arizona while working for the Military. One evening I and several others had gone out into the desert northeast of Tucson to cook out and have a good time. What we did not know was that several unidentified objects had been spotted south of Tucson by workers up on Kitt Peak, in the area of Ryan Field, a small local airport south of Tucson. These objects took a slow flight path out across Tucson towards the direction we were at. I ended up being about 100 yards from one of these as it progressed across the valley. Close enough to see a lot of detail, and to get a good idea, by its general shape, size and flight aspects, that this was not anything our Military had at the time. I never saw any aliens or little green men. What I witnessed was intelligently controlled, had some motive power different from anything our planet uses, and could have been a robotic probe similar to ones we launch at present into space. I also learned later that the Military on Davis-Monthan had tracked these same objects that evening. These two events sparked keen interests in space propulsion, which later got utilized during the era of Alcubierre Warp Drive research with the group ESAA.
Category: Relativity and Cosmology

[40] viXra:1111.0043 [pdf] submitted on 11 Nov 2011

Another Alternative To Superluminal Propulsion

Authors: Paul Karl Hoiland
Comments: 7 pages

In this article as based upon an alternative answer to the measured superluminal velocity of Neutrinos at CERN I propose an alternative approach to superluminal propulsion that on the surface does not suffer from some of the problems of the more standard Alcubierre Drive.
Category: Relativity and Cosmology

[39] viXra:1111.0042 [pdf] submitted on 10 Nov 2011

Quantum Classical Correspondence of Dirac Equation

Authors: Pranaav Sinz
Comments: 9 pages

It has been shown that unphysical results of the quantum-classical correspondence pertaining to the Dirac equation have their roots in an unjustified calculation of commutation relations involving 4x4 component operators acting on a single-component, non-normalized wave function. Zitterbewegung-like unphysical phenomena have been completely eliminated, as the treatment establishes a seamless connection between operators and observables in both the relativistic and non-relativistic limits. The intrinsic nature of spin angular momentum has been found questionable.
Category: Quantum Physics

[38] viXra:1111.0041 [pdf] submitted on 10 Nov 2011

Explanation of Apparent Superluminal Neutrino Velocity in the CERN-OPERA Experiment

Authors: Tim Joslin
Comments: 10 pages

The CERN-OPERA neutrino experiment at the Gran Sasso Laboratory obtained a measurement, vn, of the muon neutrino velocity with respect to the speed of light, c, of (vn-c)/c = (2.48 ± 0.28 (stat.) ± 0.30 (sys.)) × 10^-5. The neutrino flight path from CERN to OPERA was established using distances and timings based on “round-trip” light speed signals. These are incommensurate with the reference frame dependent “one-way” flight times of neutrinos over the same path. We perform a Lorentz transformation to demonstrate the frame-dependence of the result. We conclude that an Earth system (ES) reference frame defined by a timing system which assumes isotropic light speed, such as the UTC, is not able to support experiments requiring accurate one-way light speed measurement. We hypothesise that vn = c and consider the 2.7K CMB as a possible candidate for the isotropic frame of reference where round-trip and one-way light speeds are equal. On this basis we find that the CERN-OPERA experiment would be expected to measure deviations in neutrino arrival times compared to the expected light speed transmission of up to ±~2 ns/km of neutrino flight path, but usually of less magnitude and with a bias towards early arrival. Only the N-S component (relative to the Earth’s axis) of the motion of the neutrino flight path relative to the isotropic frame would be statistically significant in the CERN-OPERA experiment. Assuming no bias in the mean of the other components of the experiment’s motion against the isotropic frame in the neutrino timing, because of the Earth’s rotation and orbit, we find a mean early neutrino arrival time of ~113 ns would be expected were the CMB the isotropic frame. That is, the potential error is of the same order as the early arrival time of the neutrinos of (60.7 ± 6.9 (stat.) ± 7.4 (sys.)) ns, suggesting further analysis of possible sources of deviation from our theoretical estimate may be worthwhile.
We propose further statistical methods to test the hypotheses that vn = c and that the CMB represents the isotropic frame, using the existing OPERA neutrino velocity measurement data.
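The scale of the quoted ±~2 ns/km can be sanity-checked: to first order, a one-way timing anisotropy in a frame moving at speed v relative to a preferred frame is Δt/L ≈ v∥/c², where v∥ is the component along the flight path. A sketch using the standard ~370 km/s solar-system speed relative to the CMB dipole (the paper's geometric projection onto the N-S component, which brings the number down, is not reproduced here):

```python
C = 299_792_458.0   # speed of light, m/s
V_CMB = 370e3       # m/s, approximate solar-system speed relative to the CMB dipole

def one_way_shift_ns_per_km(v_parallel):
    """First-order one-way timing anisotropy, dt/L = v_parallel / c^2,
    converted to nanoseconds per kilometre of flight path."""
    return v_parallel / C**2 * 1e9 * 1e3

max_shift = one_way_shift_ns_per_km(V_CMB)   # upper bound, path aligned with dipole
assert 3.5 < max_shift < 4.5                 # about 4.1 ns/km
```

The full-dipole bound of ~4 ns/km is the same order as the abstract's ±~2 ns/km once only the statistically significant N-S component is kept.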
Category: High Energy Particle Physics

[37] viXra:1111.0040 [pdf] submitted on 10 Nov 2011

The New Prime Theorems (1241)-(1290)

Authors: Chun-Xuan Jiang
Comments: 90 pages

Using the Jiang function we are able to prove almost all prime problems in prime distribution. This is the Book proof. No great mathematicians study prime problems and prove the Riemann hypothesis in AIM, CLAYMA, IAS, IHES, MPIM, MSRI. In this paper, using the Jiang function J2(ω), we prove that the new prime theorems (1241)-(1290) contain infinitely many prime solutions and no prime solutions. From (6) we are able to find the smallest solution πk(N0,2) ≥ 1. This is the Book theorem.
Category: Number Theory

[36] viXra:1111.0039 [pdf] submitted on 10 Nov 2011

Decay Modes of Excited 4he Below the Fragmentation Levels

Authors: A. Meulenberg, K P Sinha
Comments: 10 pages

Three reasons are given to dispute the claims of numerous experimenters that higher-than-expected heat and radiation are obtained from nuclear fusion of deuterium atoms at room temperature: 1) the inability of two low-energy protons or deuterons to penetrate the mutual Coulomb barrier; 2) the production of heat in excess of that possible for the measured particulate radiation, and 3) the high levels of 4He measured (much beyond that permitted by present nuclear physics models). The first has been addressed earlier. This paper discusses the second and how it leads to an understanding of a critical mechanism behind low-energy nuclear reactions.
Category: Nuclear and Atomic Physics

[35] viXra:1111.0038 [pdf] submitted on 10 Nov 2011

On a Strengthened Hardy-Hilbert’s Type Inequality

Authors: Guangsheng Chen
Comments: 8 pages

In this paper, by using the Euler-Maclaurin expansion for the zeta function and estimating the weight function effectively, we derive a strengthening of a Hardy-Hilbert type inequality proved by W.Y. Zhong. As applications, some particular results are considered.
Category: Number Theory

[34] viXra:1111.0037 [pdf] replaced on 14 Nov 2011

The Reaction Force Does no Work?: a Comment on Physica Scripta 84 (2011) 055004

Authors: Ron Bourgoin
Comments: 3 pages

Newton’s third law will not produce correct results as long as the work of the reacting force is not considered. We have always only focused on the work of the acting force. We have never considered the work of the reacting force, but to obtain correct results at the microscopic level we have no choice but to admit that the reacting force performs work.
Category: Classical Physics

[33] viXra:1111.0036 [pdf] submitted on 9 Nov 2011

Not Linear Element, Cosmological Redshift and Deflection of Light in the Gravitational Field

Authors: Daniele Sasso
Comments: 18 pages

Starting from some questions on General Relativity we make a few critical considerations in concordance with the Theory of Reference Frames (TR). TR represents firstly a critical viewpoint and secondly an alternative solution with regard to both Special Relativity and General Relativity; moreover it represents a new answer to problems of the dynamics of motion. The new definition of the not linear element is the most important concept expressed in this article, and in particular we consider the physical explanation of the change of the linear element into a not linear, curved element when it is in a gravitational field. In the absence of a gravitational field, the geodesic (the trajectory along which the work carried out by a force is smallest) coincides with the linear element, and inside the gravitational field it changes into the curved element. We prove that in a gravitational field this change is caused by energy reasons and not by the kinematic warp of space and time. A few classical experiments, like the cosmological redshift and the deflection of light, are finally considered and a new interpretation is given outside General Relativity.
Category: Relativity and Cosmology

[32] viXra:1111.0035 [pdf] replaced on 2014-11-28 02:49:34

Asymmetrical Genesis by Remanufacture of Antielectrons

Authors: D. Pons, A.D. Pons, A.J. Pons
Comments: 19 Pages. Published as Pons, D. J., Pons, A. D., & Pons, A. J. (2014). Asymmetrical genesis by remanufacture of antielectrons. Journal of Modern Physics, 5, 1980-1994. doi:

Problem- The asymmetrical genesis problem concerns why the universe should have an abundance of matter over antimatter. Purpose- This paper shows how the baryogenesis and leptogenesis asymmetries may both be resolved. Approach- Design methods were used to develop a conceptual mechanics for the remanufacturing processes that transform particles in the decay processes. This was based on the structures for the photon, electron, antielectron, proton and antineutrino as previously identified as logical necessities for the beta decay process, and represented as a non-local hidden-variable design with discrete fields. Findings- The solution is given in terms of a mechanics that defines the transformation of discrete field structures in particles. The genesis problem is shown to be solvable. The mechanics describes pair production of an electron and antielectron from two initial photons, and subsequent remanufacture of the antielectrons into protons. It is predicted that two antineutrinos would be emitted, which is testable and falsifiable. The theory identifies that the role of the antineutrinos is to remove the antimatter handed field structures. The original electron and proton may bond to form a simple hydrogen atom, or combine via electron capture to form a neutron and hence heavier nuclides. The subsequent preponderance of the matter pathways in the genesis production sequence is also addressed and is explained as domain warfare between the matter and antimatter species. Originality- The concept of remanufacture of antielectron into proton, with emission of antineutrinos, is novel. Extensions of the theory explain the nuclides. Consequently the theory explains from pair production up to nuclear structure, which also is original.
Category: Relativity and Cosmology

[31] viXra:1111.0034 [pdf] submitted on 8 Nov 2011

The Errors of Statistical Hypotheses and Scientific Theories

Authors: Stephen P. Smith
Comments: 16 pages

The process of error recognition is explored first in statistics, and then in science. The Type II error found in statistical hypothesis testing is found analogous to Karl Popper’s “logical probability” that is intended to measure the likelihood that a scientific theory can avoid its refutation. Nevertheless, Popper’s reliance on deductive thinking is found detracting from his demarcation that separates science and metaphysics. An improved critical logic for science is presented that permits error recognition more broadly: for induction by Popper’s falsification principle; but also for deduction and emotionality. The reality of induction creates a limitation for a science that has not accommodated a fuller menu of error recognition. The reality of induction places limits on what can be known from empiricism, and this has philosophical implications.
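The Type II error the abstract builds its analogy on has a standard closed form for a one-sided z-test: β = Φ(z_α − δ√n/σ), where δ is the true effect size. A small check (the particular δ, σ and n are illustrative, not from the paper):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def type_ii_error(delta, sigma, n, alpha_z=1.645):
    """beta for a one-sided z-test of H0: mu = 0 at level alpha = 0.05
    when the true mean is delta and the per-observation s.d. is sigma."""
    return norm_cdf(alpha_z - delta * sqrt(n) / sigma)

# With no true effect, beta = 1 - alpha = 0.95; more data shrinks beta.
assert abs(type_ii_error(0.0, 1.0, 25) - 0.95) < 0.01
assert type_ii_error(0.5, 1.0, 100) < type_ii_error(0.5, 1.0, 25)
```

The qualitative point carried over into the Popper analogy is visible here: β measures how easily a false null survives the test, just as "logical probability" measures how easily a theory avoids refutation.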
Category: General Science and Philosophy

[30] viXra:1111.0033 [pdf] submitted on 7 Nov 2011

Gravity Without the Equivalence Principle

Authors: Andrew Downing
Comments: 17 pages, dedicated under the CC0 Public Domain Dedication

A simple method is presented to account for the macroscopic effects of potential unknown attractive and repulsive forces that obey the inverse square law. This method is implemented in an n-body simulation. Graphs and screenshots from the simulation are then used to show that practically any quantum mechanical big bang theory with many arbitrary types of particles and fundamental forces would necessitate cosmic inflation, structure formation in the early universe, Hubble's law, the cosmological principle, slightly accelerated expansion of the universe, and, in specific cases (such as protoplanetary disks), the equivalence principle, regardless of what the particles and forces in the theory are.
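The setup described — an n-body simulation with unknown attractive and repulsive inverse-square forces — can be sketched generically. This is not the author's implementation; the softening parameter, the "charge" sign convention, and the leapfrog integrator are my assumptions:

```python
import numpy as np

def accelerations(pos, charge, mass, k=1.0, soft=1e-2):
    """Pairwise inverse-square forces: the sign of charge[i]*charge[j]
    selects repulsion (>0) vs attraction (<0), standing in for any
    unknown fundamental force obeying the inverse square law."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                       # vectors from body i to the others
        r2 = (d ** 2).sum(axis=1) + soft ** 2  # softened squared distances
        f = k * charge[i] * charge / r2        # signed force magnitudes
        f[i] = 0.0                             # no self-force
        acc[i] = -(f[:, None] * d / np.sqrt(r2)[:, None]).sum(axis=0) / mass[i]
    return acc

def leapfrog(pos, vel, charge, mass, dt, steps):
    """Kick-drift-kick leapfrog integration."""
    acc = accelerations(pos, charge, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, charge, mass)
        vel += 0.5 * dt * acc
    return pos, vel

# Two bodies with opposite "charges" attract: their separation shrinks.
pos = np.array([[-1.0, 0.0], [1.0, 0.0]])
vel = np.zeros_like(pos)
p, _ = leapfrog(pos, vel, np.array([1.0, -1.0]), np.ones(2), dt=0.01, steps=100)
assert np.linalg.norm(p[1] - p[0]) < 2.0
```

Flipping one charge sign turns the pair repulsive and the separation grows instead, which is the only ingredient the abstract's macroscopic argument needs from each arbitrary force.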
Category: Relativity and Cosmology

[29] viXra:1111.0032 [pdf] submitted on 6 Nov 2011

The Black Hole Catastrophe: A Short Reply to J. J. Sharples

Authors: Stephen J. Crothers
Comments: This paper was published in The Hadronic Journal, 34, 197-224 (2011) and is an abridged version of the paper, ‘The Black Hole Catastrophe: A Reply to J. J. Sharples',

A recent Letter to the Editor (Sharples J. J., Coordinate transformations and metric extension: a rebuttal to the relativistic claims of Stephen J. Crothers, Progress in Physics, v.1, 2010) has analysed a number of my papers. Dr. Sharples has committed errors in both mathematics and physics. His notion that r = 0 in the so-called “Schwarzschild solution” marks the point at the centre of the related manifold is false, as is his related claim that Schwarzschild’s actual solution describes a manifold that is extendible. His post hoc introduction of Newtonian concepts and related mathematical expressions into the “Schwarzschild solution” is invalid; for instance, Newtonian two-body relations into what is alleged to be a one-body problem. Each of the objections is treated in turn and its invalidity fully demonstrated. Black hole theory is riddled with contradictions. This article provides definitive proof that black holes do not exist.
Category: Relativity and Cosmology

[28] viXra:1111.0031 [pdf] submitted on 4 Nov 2011

Explain the Phenomenon of Neutrino Superluminal

Authors: Ji Qi, Wei Feng Ni, Yinling Jiang
Comments: 8 Pages.

An interesting experimental phenomenon attracted the whole world's attention. European researchers found that neutrinos traveled faster than the speed of light, which was difficult to explain. This result was reported on the website of the British journal "Nature" on September 22, 2011. Meanwhile, many strange phenomena in the Foucault pendulum experiment were found by us. In 1921, Miller conducted an experiment and found that light presented a drift motion relative to the earth by an amount of 10 km/s. These experiments implied that there exists another state of substance in nature, and it might account for the motion laws of all objects. In this paper, an objective interpretation of the superluminal phenomenon of neutrinos is presented, as well as a reasonable prediction for this kind of experiment.
Category: Classical Physics

[27] viXra:1111.0030 [pdf] submitted on 4 Nov 2011

The Strange Phenomena of Foucault Pendulum and Movement Laws of Celestial Bodies

Authors: Ji Qi, Sheng Wang, Yinling Jiang
Comments: 18 Pages.

Does the "ether" exist? It may be said that this is a very important question in physics! A strange phenomenon was found by studying the Foucault pendulum again and again: when the swing ball is aligned in the north-south direction at the initial time, the rotation angular velocity of the swing plane is relatively large, while when the ball is aligned in the east-west direction at the initial time, the angular velocity is much smaller, or there is even almost no rotation. Moreover, when the ball swings north-south, the swing state itself can be distorted into a clockwise swing; however, when the ball swings east-west, the swing state is hardly changed, or is even slightly distorted in the counter-clockwise direction. This experimental phenomenon contradicts the classical theory. The experimental results can prove the existence of another substance in nature, which is the No-Shape-Substance. At the same time, we can better understand the peculiar regularity of stellar motion.
Category: Classical Physics

[26] viXra:1111.0029 [pdf] submitted on 4 Nov 2011

Physical space of No-Shape-Substance

Authors: Ji Qi, Yinling Jiang
Comments: 21 Pages.

People used to establish physical laws on the mathematical frame directly, without considering the existence of the No-Shape-Substance. We believe such physical laws are separated from their nature. Since we have learned something about the No-Shape-Substance, we must review physical laws on the ground of objectivity and reality. We will discuss more profoundly space, universal gravitation, the Coulomb force, the magnetic force, the theory of relativity, and the like. We are trying to make basic physical laws more exact and more clear. There is a general presentation of the No-Shape-Substance in this paper, as well as an elementary calculation of the density and volume modulus of the No-Shape-Substance. On the ground of this calculation, we can gain a profound understanding of the influence of a medium on the velocity of light and that of temperature on the refractive index.
Category: Classical Physics

[25] viXra:1111.0028 [pdf] submitted on 4 Nov 2011

Do The Observations of Superluminal Neutrinos Lead to The Model Where Light Speed Increases Over Time?

Authors: Mark Zilberman
Comments: 3 Pages.

In recent research the OPERA collaboration reported the observation of superluminal neutrinos. They did not state what exact value they used for the speed of light c, but we can safely assume that, in accordance with the SI system, it was 299,792,458 m/s. In subsequent research A. G. Cohen and S. L. Glashow showed that "superluminal neutrinos would lose energy rapidly via the bremsstrahlung of electron-positron pairs" and that "most of the neutrinos would have suffered several pair emissions en route". This apparent paradox between experiment and theory can easily be resolved if the speed of light is slowly increasing and is now (or at least was during the experiment) higher than in 1970-1980, when the value 299,792,458 m/s was measured. In this case the speed of the neutrinos in the OPERA experiment can be higher than 299,792,458 m/s, yet at the same time lower than or equal to the current c. Without subscribing to the model where c increases over time, it would still be a good idea to measure the speed of light c during the replication of the experiment. In addition, if a slow increase of c were proven, it might also explain the red shift of distant galaxies without the big-bang theory: the more distant and earlier the period of time we observe, the slower the light speed there, and the lower the energy of the photons emitted there, which appears to the current observer as a red shift in the spectrum.
Category: Relativity and Cosmology
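The entry above turns on how large the OPERA excess actually was. As a hedged back-of-envelope check, the fractional excess speed implied by an early arrival over the baseline can be computed directly; the ~730 km CERN-Gran Sasso distance and ~60 ns early arrival used below are the commonly quoted OPERA figures, assumed here for illustration rather than taken from the abstract.

```python
c = 299_792_458.0          # SI value of the speed of light, m/s (the value discussed above)
baseline_m = 730_000.0     # approximate CERN-to-Gran Sasso baseline (assumed)
early_s = 60e-9            # approximate reported early arrival of the neutrinos (assumed)

t_light = baseline_m / c                           # light travel time over the baseline
fractional_excess = early_s / (t_light - early_s)  # (v - c) / c for the neutrinos

print(f"light travel time ≈ {t_light * 1e3:.2f} ms")
print(f"(v - c)/c ≈ {fractional_excess:.2e}")     # on the order of a few times 1e-5
```

A sub-part-per-million change in c between 1980 and 2011 would therefore not cover an excess of this size; the check makes the magnitudes in the discussion concrete.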

[24] viXra:1111.0026 [pdf] replaced on 2016-04-07 15:44:21

Symmetrical Electron from Quark-Like Magnetic Monopole

Authors: Malcolm Macleod
Comments: 2 Pages.

In this article I propose a geometrical structure for a magnetic monopole constructed from the Planck time, the elementary charge e, the speed of light c, and the fine structure constant alpha. The frequency of the electron can be solved in terms of this monopole and time. As the monopole carries a 1/3 part of the electron charge, it is analogous to the quark. The electron formulas suggest a Planck unit theory in which the frequencies of the Planck units are dictated by the frequency of the electron.
Category: Mathematical Physics

[23] viXra:1111.0025 [pdf] submitted on 4 Nov 2011

The Velocity of Neutrino in the 4D Medium Model

Authors: V. Skorobogatov
Comments: 3 Pages.

A simple explanation of the neutrino velocity anomaly is presented in the frame of the 4D medium model. It is shown that there is no faster-than-light motion in our Universe. The detection of neutrinos before the light from supernova SN1987A is also discussed.
Category: High Energy Particle Physics

[22] viXra:1111.0024 [pdf] replaced on 2012-05-23 05:04:54

Non-Redundant and Natural Variables Definition of Heat Valid for Open Systems

Authors: Juan Ramón González Álvarez
Comments: 13 Pages. Accepted for publication in the "International Journal of Thermodynamics", with minor changes

Although an unambiguous definition of heat is available for closed systems, the question of how best to define heat in open systems is not yet settled. After introducing a set of physical requirements for the definition of heat, this article reviews the non-equivalent definitions of heat for open systems used by Callen, Casas-Vázquez, DeGroot, Fox, Haase, Jou, Kondepudi, Lebon, Mazur, Misner, Prigogine, Smith, Thorne, and Wheeler, emphasizing which physical requirements are not met. A subsequent section deals with the main objective of this article and introduces a new definition of heat that avoids the difficulties of the existent definitions, providing (i) a complete distinction between open and closed systems, (ii) non-redundancy, (iii) natural variables for the thermodynamic potentials, and (iv) a sound, complete, and intuitive generalization of classical thermodynamic expressions.
Category: Classical Physics

[21] viXra:1111.0023 [pdf] replaced on 2014-12-17 03:28:54

Weak Interaction and the Mechanisms for Neutron Stability and Decay

Authors: Dirk J. Pons, Arion D. Pons, Aiden J. Pons
Comments: 15 Pages. Citation: Pons, D. J., Pons, A. D., and Pons, A. J., Weak interaction and the mechanisms for neutron stability and decay Applied Physics Research, 2015. 7(1): p. 1-11. DOI:

Purpose – The decay of the neutron is well known from the perspective of empirical quantification, but the ontological explanations are lacking for why the neutron should be stable within the nucleus and unstable outside. A theory is developed to explain the reasons for decay of the free neutron and stability of the bonded neutron. Method – The Cordus theory, a type of non-local hidden-variable (NLHV) design, provided the mathematical formalism of the principles for manipulating discrete forces and transforming one type of particule into another. This was used to determine the structures of the W and Z bosons, and the causes of neutron decay within this framework. Findings - The stability of the neutron inside the nucleus arises from the formation of a complementary bound state of discrete forces with the proton. The neutron is an intermediary between the protons, as the discrete forces of the protons are otherwise incompatible. This bond also gives a full complement of discrete forces to the neutron, hence its stability within the nucleus. The instability of the free neutron arises because its own discrete field structures are incomplete. Consequently it is vulnerable to external perturbation. The theory predicts the free neutron has two separate decay paths, which are mixed together in the β- process, the first determined by the local density of the fabric, and the second by the number of neutrinos encountered. The exponential life is recovered. The internal structures of the W bosons are determined. Implications – The W bosons are by-products from the weak decay process, and do not cause the decay. The weak decay is shown to be in the same class of phenomenon as annihilation, and is not a fundamental interaction. Originality – A novel theory has been constructed for the decay process, using a NLHV mechanics that is deeper than quantum theory. 
This new theory explains the stability-instability of the neutron and is consistent with the new theory for the stability of the nuclides.
Category: Nuclear and Atomic Physics

[20] viXra:1111.0022 [pdf] replaced on 2014-04-17 18:36:11

Beta Decays and the Inner Structures of the Neutrino in a NLHV Design

Authors: D.J. Pons, A.D. Pons, A.J. Pons
Comments: 21 Pages. Pons, D. J., Pons, A., D., & Pons, A., J. (2014). Beta decays and the inner structures of the neutrino in a NLHV design. Applied Physics Research, 6(3), 50-63. doi: (alternative

A novel conceptual theory is developed for the beta decay and electron capture processes, based on the specific non-local hidden-variable (NLHV) design provided by the Cordus theory. A new mechanics is sketched out for the interactions of particules through their discrete forces, and is a deeper level representation of Feynman diagrams. The new mechanics is able to correctly predict the outcomes of the decay processes, beta minus, beta plus, electron capture. It predicts specific NLHV structures for the neutrino and antineutrino. The velocity and unique spins of the neutrino species may then be explained as a consequence of the hidden structures.
Category: Nuclear and Atomic Physics

[19] viXra:1111.0021 [pdf] replaced on 2012-01-30 09:33:04

Do X and Y Mesons Provide Evidence for Color Excited Quarks or Squarks?

Authors: Matti Pitkänen
Comments: 12 Pages.

This article was motivated by a blog posting in Quantum Diaries with the title "Who ordered that?! An X-traordinary particle?". I learned that the spectroscopy of ccbar type mesons is understood, except for some troublesome mesons christened with the letters X and Y. X(3872) is the first-discovered troublemaker, and what is known about it can be found in the blog posting and also in the Particle Data Tables. The problems are the following.

  1. These mesons should not be there.
  2. Their decay widths seem too narrow, taking into account their masses.
  3. Their decay characteristics are strange: in particular, the kinematically allowed decay to DDbar, which dominates the decays of Ψ(3770) with a branching ratio of 93 per cent, has not been observed, whereas the decay to DDbar π0 occurs with a branching fraction >3.2×10^-3. Why is the pion needed?
  4. X(3872) should decay to a photon and a charmonium state in a predictable way, but it does not.

One of the basic predictions of TGD is that both leptons and quarks should have color excitations. In the case of leptons there is considerable support in the form of carefully buried anomalies: the first ones come from the seventies. But in the case of quarks such anomalies have been lacking. Could these mysterious X's and Y's provide the first signatures of the existence of color excited quarks?
  1. The first basic objection is that the decay widths of the intermediate gauge bosons do not allow new light particles. This objection is encountered already in the model of leptohadrons. The solution is that the light exotic states are possible only if they are dark in the TGD sense, therefore having a non-standard value of Planck constant and behaving as dark matter. The value of Planck constant is only effective and has a purely geometric interpretation in the TGD framework.
  2. The second basic objection is that light quarks do not seem to have such excitations. The answer is that gluon exchange transforms an exotic quark pair to an ordinary one and vice versa, and considerable mixing of the ordinary and exotic mesons takes place. At low energies, where the color coupling strength becomes very large, this gives rise to a mass squared matrix with a very large non-diagonal component; the second eigenstate of mass squared is a tachyon and therefore drops from the spectrum. For heavy quarks the situation is different, and one expects that charmonium states also have exotic counterparts.
  3. The selection rules can also be understood. The decays to DDbar involve at least two gluon emissions decaying to quark pairs and producing an additional pion, unlike the decays of an ordinary charmonium state, which involve only the emission of a single gluon decaying to a quark pair so that DDbar results.
  4. The decay of the lightest X to a photon and charmonium is not possible in the lowest order, since at least one gluon exchange is needed to transform the exotic quark pair to an ordinary one. Exotic charmonia can however transform to exotic charmonia. Therefore the basic constraints seem to be satisfied.
The above arguments apply with minimal modifications also to the squark option, and at this moment I am not able to distinguish between these options. The SUSY option is however favored by the fact that it would explain why SUSY has not been observed at the LHC, in terms of shadronization and subsequent decay to hadrons by gluino exchanges, so that jets plus missing energy would not serve as a signature of SUSY. Note that the decay of a gluon to a dark squark pair would require a phase transition to a dark gluon first.
Category: High Energy Particle Physics

[18] viXra:1111.0020 [pdf] replaced on 2012-01-30 09:34:16

Are Neutrinos Superluminal?

Authors: Matti Pitkänen
Comments: 9 Pages.

The OPERA collaboration at CERN has reported that neutrinos travelling from CERN to Gran Sasso in Italy move with a super-luminal speed. There also exists earlier evidence for the super-luminality of neutrinos: for instance, the neutrinos from SN1987A arrived a few hours earlier than the photons. A standard model based on tachyonic neutrinos is formally possible but breaks causality and is unable to explain all the results. The TGD based explanation relies on sub-manifold geometry replacing abstract manifold geometry as the space-time geometry. The notion of many-sheeted space-time predicts this kind of effect, plus many other effects for which evidence exists in the form of various anomalies that have not been taken seriously by mainstream theorists. In this article the TGD based model is discussed in some detail.
Category: High Energy Particle Physics
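For scale, the SN1987A timing mentioned in the entry above can be turned into a naive bound on any neutrino speed excess, simply by dividing the arrival lead by the total travel time. The figures below (a distance of roughly 168,000 light-years and a lead of roughly 3 hours) are commonly quoted values assumed here for illustration; they are not taken from the abstract.

```python
# Naive kinematic reading of the SN1987A arrival lead (assumed figures).
distance_yr = 168_000.0    # light travel time from SN1987A, in years (assumed)
early_h = 3.0              # early arrival of the neutrino burst, in hours (assumed)

travel_h = distance_yr * 365.25 * 24.0   # total travel time, in hours
fractional_excess = early_h / travel_h   # (v - c)/c if the lead were purely kinematic

print(f"(v - c)/c ≲ {fractional_excess:.1e}")   # roughly a couple of parts in 1e9
```

This is about four orders of magnitude below the excess OPERA reported, which is why SN1987A is usually cited as a constraint on, rather than support for, superluminal neutrinos.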

[17] viXra:1111.0019 [pdf] replaced on 2012-01-30 10:18:58

First Evidence for M_89 Hadron Physics

Authors: Matti Pitkänen
Comments: 19 Pages.

The p-adic length scale hypothesis strongly suggests a fractal hierarchy of copies of hadron physics labelled by Mersenne primes. M89 hadron physics, whose mass scale relates by a factor of 512 to that of ordinary M107 hadron physics, was predicted already 15 years ago, but only now has the TeV energy region been reached at the LHC, making it possible to test the prediction. Pions of any hadron physics are produced copiously in hadronic reactions, and their detection is the most probable manner in which the new hadron physics will be discovered, if Nature has realized it. Neutral pions produce monochromatic gamma pairs, whereas heavy charged pions decay to a W boson and a gluon pair or a quark pair. The first evidence - or should we say indication - for the existence of M89 hadron physics has now emerged from CDF, which more than two years ago provided evidence also for the colored excitations of the tau lepton and for leptohadron physics. What CDF has observed is evidence for the production of quark-antiquark pairs in association with W bosons, and the following arguments demonstrate that the interpretation in terms of M89 hadron physics might make sense.
Category: High Energy Particle Physics
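The factor of 512 quoted in the entry above follows from simple arithmetic under the p-adic length scale hypothesis, in which a Mersenne prime M_k = 2^k - 1 is associated with a mass scale proportional to 2^(k/2); a minimal check of the M89/M107 ratio:

```python
# Mass-scale ratio between M_89 and M_107 hadron physics under the
# p-adic length scale hypothesis: mass scale ∝ 2^(k/2).
ratio = 2 ** ((107 - 89) / 2)
print(ratio)  # 512.0
```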

[16] viXra:1111.0018 [pdf] replaced on 2012-01-30 21:24:49

Explanation for the Soft Photon Excess in Hadron Production

Authors: Matti Pitkänen
Comments: 5 Pages.

There is a quite recent article entitled "Study of the Dependence of Direct Soft Photon Production on the Jet Characteristics in Hadronic Z0 Decays" discussing one particular manifestation of an anomaly of hadron physics known for two decades: the soft photon production rate in hadronic reactions is by an average factor of about four higher than expected. In the article the soft photons are assignable to the decays of Z0 to quark-antiquark pairs. This anomaly has not reached the attention of particle physicists, which seems to be the fate of anomalies quite generally nowadays: large extra dimensions and blackholes at the LHC are much sexier topics of study than the anomalies about which both existing and speculative theories must remain silent.
TGD leads to an explanation of the anomaly in terms of the basic differences between TGD and QCD.

  1. The first difference is due to the induced gauge field concept: both the classical color gauge fields and the U(1) part of the electromagnetic field are proportional to the induced Kähler form. The second difference is topological field quantization, meaning that electric and magnetic fluxes are associated with flux tubes. Taken together this means that for neutral hadrons the color flux tubes and electric flux tubes can be, and will be, assumed to be one and the same thing. In the case of charged hadrons the em flux tubes must connect different hadrons: this is essential for understanding why neutral hadrons seem to contribute much more effectively to the bremsstrahlung than charged hadrons - which is just the opposite of the prediction of the hadronic inner bremsstrahlung model, in which only charged hadrons contribute. Now both sea and valence quarks of neutral hadrons contribute, but in the case of charged hadrons only the valence quarks do so.
  2. The sea quarks of neutral hadrons seem to give the largest contribution to the bremsstrahlung. The p-adic length scale hypothesis, predicting that quarks can appear in several mass scales, represents the third difference, and the experimental findings suggest that sea quarks are by a factor of 1/2 lighter than valence quarks, implying that the bremsstrahlung for a given sea quark is by a factor of 4 more intense than for the corresponding valence quark.

Category: High Energy Particle Physics

[15] viXra:1111.0017 [pdf] replaced on 2012-01-30 21:27:20

The Incredibly Shrinking Proton

Authors: Matti Pitkänen
Comments: 13 Pages.

The recent discovery that the charge radius of the proton deduced from the quantum average of the nuclear charge density in the muonic version of the hydrogen atom is 4 per cent smaller than the radius deduced from the hydrogen atom challenges either QED or the view about the proton, or both. In the TGD framework topological quantization leads to the notion of the field body as a characteristic of any system. The field body is expected to contain substructures with sizes given by at least the primary and secondary p-adic length scales. u and d quarks would have field bodies with a size much larger than the proton itself. In the muonic atom the p-adic size scale of the field body of the u quark, having a mass of 2 MeV according to the latest estimates, would be roughly twice the Bohr radius, so that the anomaly might be understood as a signature of the field body.
Category: High Energy Particle Physics

[14] viXra:1111.0016 [pdf] replaced on 2012-01-30 21:29:18

Could Neutrinos Appear in Several Mass Scales?

Authors: Matti Pitkänen
Comments: 5 Pages.

There are some indications from neutrino oscillations that neutrinos can appear in several mass scales. These oscillations can be classified into vacuum oscillations and solar neutrino oscillations, believed to be due to the so-called MSW effect in the dense matter of the Sun. There are also indications that the mixing is different for neutrinos and antineutrinos. In the following, the possibility that the p-adic length scale hypothesis might explain these findings is discussed.
Category: High Energy Particle Physics

[13] viXra:1111.0015 [pdf] replaced on 5 Nov 2011

Gravity as a Result of Quantum Vacuum Energy Density

Authors: Amrit Sorli
Comments: 2 Pages.

A recent report from CERN does not give much chance to the existence of the Higgs boson. An alternative solution for mass generation is a change in the density of the quantum vacuum.
Category: Quantum Physics

[12] viXra:1111.0013 [pdf] replaced on 2012-09-20 06:39:57

Special Relativity is Theorem Based

Authors: Bassera Hamid
Comments: 7 Pages.

In this work I show that special relativity is a mathematical theorem based on just the Chasles relation in Euclidean space. So special relativity is simply a direct consequence of Euclidean geometry, no more, no less. I then show that, definitively, there is no reason to doubt special relativity, and it must be engraved in marble.
Category: Relativity and Cosmology

[11] viXra:1111.0012 [pdf] submitted on 2 Nov 2011

How the 17 Gev Opera Superluminal Neutrino from Cern Arrived at Gran Sasso Without Desintegration??: it Was Carried Out by a Natario Warp Drive. Explanation for the Results Obtained by Glashow-Cohen and Gonzalez-Mestres

Authors: Fernando Loup
Comments: 20 Pages.

Recently superluminal neutrinos have been observed in the OPERA experiment at CERN. Since the neutrino possesses a non-zero rest mass, then according to the Standard Model, Relativity and Lorentz Invariance this superluminal speed would be impossible to achieve. This superluminal OPERA result seems to be confirmed and cannot be explained by errors in the measurements or break-ups in the Standard Model, Relativity or Lorentz Invariance. In order to reconcile the Standard Model, Relativity and Lorentz Invariance with the OPERA superluminal neutrino, we propose a different approach. Some years ago Gauthier, Gravel and Melanson introduced the idea of the micro warp drive: microscopic particle-sized warp bubbles carrying sub-atomic particles inside them at superluminal speeds. These micro warp bubbles, according to them, may have formed spontaneously in the early Universe after the Big Bang, and they used the Alcubierre warp drive geometry in their mathematical model. We propose exactly the same idea of Gauthier, Gravel and Melanson to explain the superluminal neutrino at OPERA, however using the Natario warp drive geometry. Our point of view can be summarized in the following statement: in a process that modern science still needs to understand, the OPERA experiment generated a micro Natario warp bubble around the neutrino that pushed it beyond the light speed barrier. Inside the warp bubble the neutrino is below the light speed, and no break-ups of the Standard Model, Relativity or Lorentz Invariance occur, but outside the warp bubble the neutrino would be seen at superluminal speeds. Remember that the CERN particle accelerators were constructed to reproduce on a laboratory scale the physical conditions we believe may have existed in the early Universe, so these micro warp bubbles generated after the Big Bang may perhaps be re-created or reproduced inside particle accelerators.
We believe that our idea, borrowed from Gauthier, Gravel and Melanson, can explain what really happened with the neutrinos in the OPERA experiment. We also explain here the results obtained by Glashow-Cohen and Gonzalez-Mestres.
Category: Relativity and Cosmology

[10] viXra:1111.0011 [pdf] replaced on 2012-02-29 10:30:49

Question of Planck’s Constant in Dark Matter Direct Detection Experiments

Authors: Joseph F. Messina
Comments: 4 Pages. Updated to match version submitted to "Papers in Physics"

Recent astronomical observations have revealed important new clues regarding dark matter's behavior. However, the fact remains that all experimental efforts to detect dark matter directly, in a laboratory setting, have failed. A natural explanation for these failed efforts may be possible by postulating that dark matter's behavior is governed by a non-Planckian "action." It is pointed out, as a preliminary to advancing this possibility, that no purely dark matter measurement of Planck's constant exists. The resulting hypothesis advocates the existence of a new, experimentally verifiable, dark matter candidate. An extension of this hypothesis to the cosmological realm suggests that dark matter may have come into existence 10^-44 seconds after the big bang, an order of magnitude prior to the Planck era.
Category: Relativity and Cosmology
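For context on the "10^-44 seconds ... prior to the Planck era" comparison in the entry above, the Planck time follows from the standard definition t_P = sqrt(ħG/c^5). The constants below are the usual CODATA values, and the check is purely illustrative:

```python
import math

# Standard constants (CODATA values) for computing the Planck time.
hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
G = 6.674_30e-11           # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0          # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)
print(f"t_P ≈ {t_planck:.2e} s")   # about 5.4e-44 s, so 1e-44 s is indeed earlier
```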

[9] viXra:1111.0010 [pdf] submitted on 2 Nov 2011

Photon-Neutrino Symmetry and the OPERA Anomaly

Authors: Ervin Goldfain
Comments: 9 Pages.

The OPERA collaboration has recently claimed discovery of superluminal propagation of neutrino beams. Excluding the possibility of unaccounted measurement errors, the most natural interpretation of OPERA anomaly is that, sufficiently far from the source, long-range neutrinos and photons may be regarded as components of the same field. In particular, we suggest that it is possible to construct a neutrino-photon doublet where the two components behave as dual entities. We examine conditions that enable the symmetry between neutrinos and photons to be unbroken. The benefit of this interpretation is that Lorentz invariance stays valid regardless of the relative velocity of neutrinos and their mean energy.
Category: High Energy Particle Physics

[8] viXra:1111.0009 [pdf] replaced on 2019-02-01 05:02:37

3: Programming Atomic and Gravitational Orbitals in a Simulation Hypothesis

Authors: Malcolm Macleod
Comments: 5 Pages.

This article proposes a method for programming orbitals at the Planck level using a geometrical approach applicable to the Simulation Hypothesis. Mathematical probability orbitals are replaced with `physical' units of `orbit momentum', the orbital regions derived from geometrical imperatives rather than abstract forces. In this approach the electron does not orbit around a nucleus but rather is maintained within an orbital region by the confines of the geometry of the orbital `orbit momentum', the action of rotation around a center resulting from the incompressibility of momentum, the Bohr radius thus being slightly less than the sum of its component wavelengths. There is no electron transition between orbitals; rather the existing orbital (orbit momentum) is exchanged for the new orbital by the momentum of the incoming photon, the Rydberg formula describing this as a 2-stage process of wave addition followed by wave subtraction. As the electron is physically linked to the nucleus by the orbital, an electric force is not required; instead a charged momentum (the sqrt of Planck momentum) is presumed. A gravitational orbit is the sum of individual gravitational orbitals, also physical links of orbit momentum; thus a gravitational force is not required, instead mass is replaced by units of Planck mass, with the orbitals linking these mass units. Thus the moon is not orbiting the earth; instead it is propelled by these orbital momenta, its path the sum of this momentum. As orbitals have different momentum densities, movement between orbitals requires a change in momentum, an orbital (momentum) buoyancy. Nuclear binding energy, ionization energy and escape velocity are measures of the momentum required to completely erase the orbitals.
Category: Mathematical Physics
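The entry above reinterprets the Rydberg formula as a two-stage wave process; for reference, the standard textbook relation it refers to, 1/λ = R(1/n1² - 1/n2²), is easy to evaluate. Nothing here comes from the paper itself beyond that formula:

```python
R = 1.097_373_156_8e7   # Rydberg constant for hydrogen, m^-1 (standard value)

def rydberg_wavelength_nm(n1: int, n2: int) -> float:
    """Vacuum wavelength in nm of the photon for a transition n2 -> n1 (n2 > n1)."""
    inv_lambda = R * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inv_lambda

# Balmer-alpha (n = 3 -> n = 2), the familiar red hydrogen line:
print(f"{rydberg_wavelength_nm(2, 3):.1f} nm")  # ≈ 656.1 nm
```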

[7] viXra:1111.0008 [pdf] submitted on 2 Nov 2011

Is the Self-Destruction of Mankind Inevitable?

Authors: Mark Sverdlov
Comments: 6 Pages.

It has been shown that men cannot objectively perceive and cognize the world, but are organically connected with it and feel its total unity. This unity includes a living-unliving synthesis. The uncorrelation between mankind and the world during the epoch of civilization prevents people from feeling this and from forming a common human unity, leading to confrontation between peoples and to the destruction of nature. Escalation of these processes has led mankind to the verge of self-destruction. Science is directed toward objective cognition of the world, which is inaccessible to men because of the subjectivity of their perception and thinking. That is why science cannot help us understand the present situation. A new science is necessary, one taking into account the nature of human perception and thinking. It is shown that men can restore their natural feeling of the world and, on this foundation, form a united mankind organically connected with it.
Category: Social Science

[6] viXra:1111.0007 [pdf] submitted on 2 Nov 2011

United Humanity ... Is It Possible?

Authors: Mark Sverdlov
Comments: 2 Pages.

It is shown that the human perception of the world was divided into two principally different representations, Eastern and Western. The Eastern school of thought attempted to deepen its perception and viewed its knowledge in this context. The Western school of thought was based on subjectively logical modeling of the information obtained from perception, assuming that in this way it was learning about the world. It is shown that the Western understanding was the basis on which the human being created its own world during the time of civilization. This took place not from the federative structure that is natural for the makeup of the world, but rather from a system of mono-governments, built on the repression of weak people by strong ones, as well as on the exhaustion of natural resources by human beings. It is shown that the development of humanity in this way has led to the almost complete extinction of natural resources and has put humanity on the border of self-destruction. It is also shown that the many-centuries-long development of humanity in this way did not ruin the federative structures that are the initial representation of human nature and the basis of the world; this was shown in the formation of federative governments as well as the international market system in the 19th and 20th centuries. It is assumed to be possible, given appropriate, fundamental, qualified efforts, to form on a federative basis a unified all-humanity system, correlated with the makeup of the world and, because of this, retaining the ability to live.
Category: Social Science

[5] viXra:1111.0006 [pdf] submitted on 2 Nov 2011

Mark Sverdlov - To know the World ... Is It Possible?

Authors: Mark Sverdlov
Comments: 2 Pages.

The world represents a unified system of many soliton formations of different kinds. These formations distort vibrational signals as they pass through their borders. The size of these solitons ranges from microsolitons to galaxies, universes and so forth. The human being lives inside one of these solitons. Since the signals passing through the borders of solitons are distorted, the human being has the possibility of obtaining relatively adequate information only in the area where he resides, not about the whole world. The degree to which a person can obtain knowledge about the world in this situation is the problem discussed in this paper. It is concluded that humans possess only a limited ability of modeling, and that a fundamentally new physics must be created.
Category: History and Philosophy of Physics

[4] viXra:1111.0004 [pdf] submitted on 2 Nov 2011

A No-Shape-Substance is the Foundation All Physics Laws Depend on - The Second Part of New Physics

Authors: Ji Qi, Yinling Jiang
Comments: 21 Pages.

People used to establish physical laws directly on a mathematical frame, without considering the existence of the No-Shape-Substance. Such physical laws are separated from nature. By analyzing the interaction between a body and the No-Shape-Substance, we gain a newer and better understanding of physical laws and concepts such as inertial mass, Newton's Second Law, the kinetic energy equation, the mass-energy equation and momentum. We are now going to uncover the essence of these physical laws.
Category: Classical Physics

[3] viXra:1111.0003 [pdf] submitted on 2 Nov 2011

A No-Shape-Substance is the Propagating Medium of Light - The First Part of New Physics

Authors: Ji Qi, Yinling Jiang
Comments: 19 Pages.

Through analyzing a variety of physical phenomena, the author proposes that there exists a special kind of substance, the No-Shape-Substance. The author believes that this matter is the medium through which light propagates and the foundation on which all laws of motion can be built. On the same foundation a great many physical experiments can be explained as well. The No-Shape-Substance is an actual substance, with mass, in another state. It is a more profound element of nature, which can enable people to know nature more thoroughly, as well as make physics more objective, more natural and more logical.
Category: Classical Physics

[2] viXra:1111.0002 [pdf] submitted on 1 Nov 2011

The New Prime Theorems (1191)-(1240)

Authors: Chun-Xuan Jiang
Comments: 90 Pages.

Using the Jiang function we are able to prove almost all prime problems in prime distribution. This is the Book proof. No great mathematicians study prime problems and prove the Riemann hypothesis at AIM, CLAYMA, IAS, THES, MPIM, MSRI. In this paper, using the Jiang function J2(ω), we prove that the new prime theorems (1191)-(1240) contain either infinitely many prime solutions or no prime solutions. From (6) we are able to find the smallest solution πk(N0,2) ≥ 1. This is the Book theorem.
Category: Number Theory

[1] viXra:1111.0001 [pdf] submitted on 1 Nov 2011

A Modified Special Relativity Theory in the Light of Breaking the Speed of Light

Authors: Azzam AlMosallami
Comments: 22 Pages.

In the OPERA experiment the neutrino broke the speed of light: it moved with a speed greater than the highest speed in the universe (the speed of light in vacuum) according to special relativity [32]. This experiment, if it is confirmed, will contradict the main basis on which special relativity is built, namely the constancy of the speed of light and the principle that no particle or electromagnetic wave can exceed this speed [37]. In 2000, the NEC Research Institute in Princeton claimed to have achieved propagation speeds of 310c (c = speed of light) by quantum tunneling [34].
Category: Relativity and Cosmology