[237] **viXra:1412.0284 [pdf]**
*submitted on 2014-12-31 21:53:15*

**Authors:** V.B. Smolenskii

**Comments:** 20 Pages.

Within the PI-Theory of fundamental physical constants, the integer dimensionality 3 of space is substantiated and proved. For example, the theoretical derivations of the fine-structure constant and of the anomalous magnetic moment of the electron succeed only in the case of a three-dimensional space; the constants have these numerical values and do not change over time. Complete analytical derivations are presented, high-precision theoretical calculations are performed and their results shown, and the theoretical calculations are compared with experimental data.

**Category:** Nuclear and Atomic Physics

[236] **viXra:1412.0283 [pdf]**
*submitted on 2014-12-31 19:44:21*

**Authors:** George R. Briggs

**Comments:** 1 Page.

In 1803 the L'Aigle meteor fall was studied in France by the physicist Jean-Baptiste Biot, who concluded that it was of extraterrestrial origin, but by 1930 meteorites were generally thought to be of terrestrial volcanic origin. Similarly, in the 1960s group theory and symmetry were thought to be of overriding importance in developing new physical theories, but this view has fallen out of favor and has resulted in the present-day disbelief that our universe existed before the big bang and could be governed by symmetry of any sort.

**Category:** Relativity and Cosmology

[235] **viXra:1412.0281 [pdf]**
*replaced on 2017-01-04 19:39:58*

**Authors:** Victor Paromov

**Comments:** 22 pages, 3 figures

It has long been expected that a quantum field theory (QFT) “beyond” the Standard Model (SM) will eventually unify gravitation with particle interactions. Unfortunately, this “Theory of Everything” remains elusive and problematic. An alternative way is to explain all four types of physical interactions with the geometry of a spacetime extended by unseen extra dimensions. The General Principle of Interaction (GPI) presents a philosophical concept that extends the Einsteinian understanding of geometrically deformed vacuum (spacetime) in order to explain particle interactions and establish a consistent basis for full unification. Surprisingly, this simple concept has remained undeveloped to this day. The GPI assumes that the extended spacetime includes one time dimension and three subspaces with seven spatial dimensions in total: ordinary spacetime (OST), an “electromagnetic space” (EMS) with one extra dimension, and a “nuclear space” (NS) with three extra dimensions; each type of interaction is governed by the geometry of one of these subspaces. The subspaces are closed and separated by their background curvatures: OST is flat (or almost flat), EMS is more curved, and NS is highly curved and compactified. Thus, the gravitational, electromagnetic and strong interactions are governed by spacetime deformations originating separately in these three subspaces (OST, EMS or NS, respectively), and the weak interaction is a type of electromagnetic interaction.
Additionally, it is hypothesized that all elementary particles can be described as quantized wave-like vacuum deformations originating in the extra dimensions (NS and/or EMS), with certain secondary effects measurable in the OST (mass, electromagnetic and strong fields). It is expected that the GPI-based unified theory will be able to describe all four types of interaction with the “pure geometry” of the 8D spacetime. The theory should be developed as a minimal extension of the Einstein-Cartan (EC) theory accessing both curvature (via the 8D metric) and torsion (via the 8D torsion tensor) in the three subspaces. Unlike gravitation, the electromagnetic and strong interactions (i.e. EMS and NS deformations) cannot be described by a classical field theory, as the extra coordinates (NS or EMS) are immeasurable within the OST subspace. Hence, the future theory should accept quantum field methodology (at least to a certain extent), while rejecting the gauge transformation principle and relying solely on the spacetime geometry; this ensures background independence and avoids any virtual particles or gauge bosons. This concept promotes a full unification based on the vacuum (spacetime) geometry and drastically simplifies the descriptions of particle interactions by reducing the elementary particle set and the total number of interacting fields.

**Category:** General Science and Philosophy

[234] **viXra:1412.0280 [pdf]**
*replaced on 2015-01-01 15:59:47*

**Authors:** R. Garner-Tetlow

**Comments:** 3 Pages.

The concept of vacuum energy challenges some basic assumptions of astronomy.

**Category:** Relativity and Cosmology

[233] **viXra:1412.0279 [pdf]**
*submitted on 2014-12-31 16:05:15*

**Authors:** Dirk J. Pons, Arion D. Pons, Aiden J. Pons

**Comments:** Published as: Pons, D. J., Pons, A. D., & Pons, A. J. (2015). Asymmetrical neutrino induced decay of nucleons. Applied Physics Research, 7(2), 1-13. http://dx.doi.org/10.5539/apr.v7n2p1

Problem- The operation of neutrino detectors shows that nuclide decay rates can be affected by loading of neutrino species. However, the underlying principles of this are poorly understood. Purpose- This paper develops a conceptual solution for the neutrino-species interactions with single-nucleon decay processes. Approach- The starting point was the non-local hidden-variable (NLHV) solution provided by the Cordus theory, specifically its mechanics for the manipulation of discrete forces and the remanufacture of particle identities. This mechanics was applied to the inverse beta decays and electron capture processes for nucleons. These are situations where the neutrino or antineutrino is supplied as an input, as opposed to being an output as in the conventional decays. Findings- Inverse decays are predicted to be differentially susceptible to inducement by neutrino species. The inverse β- neutron decay is predicted to be susceptible to neutrino inducement (but not to the antineutrino). Correspondingly, β+ proton decay is predicted to be induced by input of energetic antineutrinos, but not neutrinos. Hence a species asymmetry is predicted. The inverse electron capture is predicted to be induced by pre-supply of either a neutrino or an antineutrino, with different energy threshold requirements in each situation. The neutrino-induced channel is predicted to have a greater energy barrier than the antineutrino channel. All the nucleon decay processes (forward, inverse, and induced) may be represented in a single unified decay equation, with transfers across the decay equality resulting in inversion of the matter-antimatter species (hand). Implications- The theory predicts the existence of a number of induced decays with asymmetrical susceptibility to neutrino species. The results imply that detectors measuring β- outcomes are measuring neutrinos, and those measuring β+ outcomes, antineutrinos.
Originality- A new methodology is demonstrated for predicting the outcomes of decays and particle transformations. A unified decay relationship is proposed that expresses all the conventional and induced decay processes. A novel prediction is made that neutrino species induce decay of nucleons, and that the interaction is asymmetrical; hence also that different decay types are affected differently by the input of energy and neutrino species. A detailed explanation is provided of how this occurs at the level of the internal structures of the particles. This is an unorthodox outcome, and it is testable and falsifiable.

**Category:** Nuclear and Atomic Physics

[232] **viXra:1412.0278 [pdf]**
*submitted on 2014-12-31 14:46:19*

**Authors:** Ivan L Zhogin

**Comments:** 6 Pages. Essay; 2 figures (sent to friedmann-2015.org)

There is a unique (compatible, second-order) 5D equation of Absolute Parallelism (D is fixed) whose solutions avoid singularities. Three of the fifteen polarization modes are linearly unstable; they can carry digital information, topological charges and quasi-charges, but not the five-momentum. Phenomenological models accounting for these (quasi-)charges (i.e., particles) emerge, including a relativistically expanding S^3-shell cosmology and a Lagrangian fourth-order gravity. A radial longitudinal wave can form a shell of huge size L (in the co-moving system) along the extra dimension.
The correction to Newton's law here behaves as 1/r on large scales, r > L, so the Dark Matter hypothesis looks unnecessary. Generation of gravitational waves (GW) in this theory differs by the factor (\lambda/L)^2 from the General Relativity result (for the GW amplitude; \lambda is the GW wavelength), so short GWs should be much weaker than in GR.
A new interpretation of Quantum Mechanics (closely related to the huge and "undeveloped" extra dimension) is briefly discussed.

**Category:** Mathematical Physics

[231] **viXra:1412.0277 [pdf]**
*submitted on 2014-12-31 04:23:04*

**Authors:** Anand Mankodia, Satish K. Shah

**Comments:** 11 Pages. This paper is based on Image and Video Processing

Content-based video indexing and retrieval has its foundations in the analysis of the primary temporal structures of video. Thus, technologies for video segmentation have become important for the development of such digital video systems. Dividing a video sequence into shots is the first step towards VCA and content-based video browsing and retrieval. This paper presents an analysis of histogram-based techniques on compressed video features. A graphical user interface is also designed in MATLAB to demonstrate the performance using common performance parameters such as precision, recall and F1.
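The histogram comparison underlying such shot-boundary detection is straightforward to sketch. The following is an illustrative Python/NumPy version only (the paper's implementation is in MATLAB, and the function names and threshold value here are placeholders, not the paper's):

```python
import numpy as np

def hist_diff(frame_a, frame_b, bins=64):
    """Normalized absolute histogram difference between two grayscale frames."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    return np.abs(ha - hb).sum() / frame_a.size

def detect_shot_boundaries(frames, threshold=0.5):
    """Indices i where the histogram change from frame i-1 to i exceeds a threshold."""
    return [i for i in range(1, len(frames))
            if hist_diff(frames[i - 1], frames[i]) > threshold]
```

On an abrupt cut the difference spikes, so a fixed threshold already separates shots; gradual transitions are what make the precision/recall/F1 evaluation in the paper necessary.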

**Category:** Digital Signal Processing

[230] **viXra:1412.0276 [pdf]**
*replaced on 2016-06-13 09:23:33*

**Authors:** Jianwen Huang, Jianjun Wang

**Comments:** 18 Pages.

In this paper, with optimal normalized constants, the asymptotic expansions of the distribution and density of the normalized maxima from the generalized Maxwell distribution are derived. For the distributional expansion, we show that the rate of convergence of the normalized maxima to the Gumbel extreme value distribution is proportional to $1/\log n$. For the density expansion, the main result is applied, on the one hand, to establish the rate of convergence of the density of the extreme to its limit and, on the other hand, to obtain the asymptotic expansion of the moment of the maximum.

**Category:** Statistics

[229] **viXra:1412.0275 [pdf]**
*submitted on 2014-12-31 01:42:41*

**Authors:** Jianwen Huang, Yanmin Liu

**Comments:** 12 Pages.

In this paper, the higher-order asymptotic expansion of the moment of the extreme from the generalized Maxwell distribution is obtained, from which we establish the rate of convergence of the moment of the normalized partial maximum to the moment of the associated Gumbel extreme value distribution.

**Category:** Statistics

[228] **viXra:1412.0274 [pdf]**
*submitted on 2014-12-30 15:56:02*

**Authors:** Miloje M. Rakocevic

**Comments:** 8 Pages. New facts in connection with the paper published in JTB, Volume 229 (2004) 221-234

This paper presents a chemically meaningful splitting of codons according to pyrimidine and purine distinctions; a splitting that is accompanied by a balance of the number of atoms in the set of 61 amino acid molecules. In doing so, the number of atoms increases or decreases in quantities of decimal units, which can be understood as analogous to the filling of orbitals within atoms.

**Category:** Quantitative Biology

[227] **viXra:1412.0273 [pdf]**
*submitted on 2014-12-30 14:23:47*

**Authors:** Prashanth R. Rao

**Comments:** 2 Pages.

The Collatz conjecture states that if we start with any positive integer n, divide it by 2 when it is even or multiply it by 3 and add 1 when it is odd, and repeat this rule on each successive answer, we will ultimately reach 1. Here we prove this conjecture for a special class of integers: the product of any prime number p greater than three with a positive odd integer x derived using Fermat's little theorem, and therefore unique to each prime. Thus we prove the Collatz conjecture for a small fraction of positive integers px, which would be expected to represent roughly the same proportion of integers as the prime numbers.
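The iteration the conjecture describes can be written down directly. The sketch below only illustrates the rule itself, not the paper's proof:

```python
def collatz_steps(n):
    """Count applications of the Collatz rule (n/2 if even, 3n+1 if odd) until n reaches 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 27 is a classic example; it reaches 1 after 111 steps
```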

**Category:** Number Theory

[226] **viXra:1412.0272 [pdf]**
*submitted on 2014-12-30 09:47:02*

**Authors:** George Rajna

**Comments:** 17 Pages.

Prospects of developing computing and communication technologies based on quantum properties of light and matter may have taken a major step forward thanks to research by City College of New York physicists led by Dr. Vinod Menon. [12]
Solitons are localized wave disturbances that propagate without changing shape, a result of a nonlinear interaction that compensates for wave packet dispersion. Individual solitons may collide, but a defining feature is that they pass through one another and emerge from the collision unaltered in shape, amplitude, or velocity, but with a new trajectory reflecting a discontinuous jump.
Working with colleagues at the Harvard-MIT Center for Ultracold Atoms, a group led by Harvard Professor of Physics Mikhail Lukin and MIT Professor of Physics Vladan Vuletic have managed to coax photons into binding together to form molecules – a state of matter that, until recently, had been purely theoretical. The work is described in a September 25 paper in Nature.
New ideas for interactions and particles: this paper examines the possibility of deriving the spontaneously broken symmetries from the Planck Distribution Law. In this way we obtain a unification of the strong, electromagnetic, and weak interactions from the interference occurrences of oscillators. Understanding that the relativistic mass change is the result of magnetic induction, we arrive at the conclusion that the gravitational force is also based on electromagnetic forces, giving a unified relativistic quantum theory of all four interactions.

**Category:** Quantum Physics

[225] **viXra:1412.0271 [pdf]**
*submitted on 2014-12-30 09:50:23*

**Authors:** Radwan M. Kassir

**Comments:** 8 pages

Analysis of the derivation of Einstein’s Special Relativity equations, outlined from his 1905 paper "On the Electrodynamics of Moving Bodies," revealed several contradictions. The derivation imposed, through the speed-of-light principle, particular values on the space and time coordinates that, when used explicitly in Einstein’s own equation substitutions, led to fundamental contradictions. Furthermore, the space and time coordinates used in the derived transformation equations to obtain the time dilation and length contraction predictions were found to be incompatible with the method used in the derivation to perform the time calculations, so that no such predictions are feasible. Special Relativity was hence found to be self-refuting.

**Category:** Relativity and Cosmology

[224] **viXra:1412.0270 [pdf]**
*replaced on 2015-01-22 15:03:37*

**Authors:** Steven L Coleman

**Comments:** 8 Pages. Removed institution.

This paper attempts to show that General Relativity (GR) is incomplete as a physical theory of the gravitational process because of the first law of thermodynamics. A theoretical modification to General Relativity is offered to correct this deficit, based on the first principles of thermodynamics. This new theoretical framework allows GR not only to be compatible with Quantum Mechanics (QM); it is now interdependent with both QM and Special Relativity. The new framework predicts most if not all of our cosmological observations that had previously necessitated the creation of both Dark Matter and Dark Energy. Several other previously unexplained gravitational phenomena are also discussed in this new theoretical context.

**Category:** Relativity and Cosmology

[223] **viXra:1412.0269 [pdf]**
*submitted on 2014-12-29 20:01:52*

**Authors:** Jaykov Foukzon

**Comments:** 29 Pages.

In this paper a paraconsistent first-order logic LP^# with an infinite hierarchy of levels of contradiction is proposed. A corresponding paraconsistent set theory KSth^# is proposed. The axiomatic system HST^#, as an inconsistent generalization of Hrbacek set theory HST, is considered.

**Category:** Set Theory and Logic

[222] **viXra:1412.0268 [pdf]**
*submitted on 2014-12-29 20:50:11*

**Authors:** Syed Afsar Abbas

**Comments:** 6 Pages.

We obtain a new mathematical duality relating the Jacobi Identity and the Lie Algebra.
The duality is between two independent but simultaneously existing mathematical structures
related to the fundamental relationship between the Lie algebra of a Lie group and the corresponding
Jacobi Identity. We show that this new mathematical duality has a physical counterpart within the
Eightfold-way model and the SU(3) Lie group.

**Category:** High Energy Particle Physics

[221] **viXra:1412.0267 [pdf]**
*submitted on 2014-12-29 21:38:56*

**Authors:** Jay R. Yablon

**Comments:** 64 Pages.

The purpose of this paper is to explain the pattern of fill factors observed in the Fractional Quantum Hall Effect (FQHE) to be restricted to odd-integer denominators as well as the sole even-integer denominator of 2. The method is to use the mathematics of gauge theory to fully develop Dirac monopoles without strings as originally taught by Wu and Yang, while accounting for topological orientation-entanglement and related “twistor” relationships between spinors and their environment in the physical space of spacetime. We find that the odd-integer denominators are permitted and the even-integer denominators excluded if FQHE only displays electrons of identical orientation-entanglement “version,” i.e., only electrons separated by 4π not 2π. We also find that the even-integer denominator of 2 is permitted if entangled electrons can pair into boson states, and that all other even-integer denominators are excluded because bosons are not subject to the same Exclusion statistics as are fermions. Because this proposed relation between the Dirac monopoles and the FQHE presupposes an electric / magnetic duality near 0K, and because magnetic monopoles are certainly not observed at higher temperatures, we also find how to break this duality symmetry with the consequence that the low-temperature Dirac monopoles are replaced by a “thermal residue” at higher temperatures. We conclude that the observed FQHE fill factor pattern can be fundamentally explained using nothing other than the mathematics of gauge theory in view of orientation, entanglement and twist, with proper breaking of the low-temperature electric / magnetic duality. An unanticipated bonus is that the quantum topology emerging from this analysis appears to map precisely to the electronic orbital structure of atoms. This provides the basis for proposed experiments to closely observe the FQHE quasiparticles to seek correlations to the angular momentum observed in atomic electron shells, and to boson spin states.

**Category:** Condensed Matter

[220] **viXra:1412.0266 [pdf]**
*replaced on 2015-01-03 06:06:35*

**Authors:** Hasmukh K. Tank

**Comments:** Seven-page letter

Quantum entanglement of a pair of particles is now an established fact, as described in [1], but its theoretical explanation is yet to be found. This letter therefore attempts to propose one possible explanation, based on previous works. In a paper titled “Some conjectures on the nature of energy and matter” [2], and its latest version titled “On the emergence of physical world from the ultimate reality” [3], it was proposed that ‘space’ or ‘vacuum’ can be viewed as a ‘super flexible continuum’ (SFC), and ‘particles’ of ‘matter’ as ‘spherical standing wave patterns’ of fluctuations generated in the SFC. Einstein tried to do away with the ether in his special theory of relativity, but when he had to assign curvature to space he said: "Well, in that sense, there exists an ether." The majority of scientists, however, still hold to the emptiness of space and face difficulty understanding observations such as quantum entanglement and the collapse of the quantum-mechanical wave-function. In this manuscript space is assumed to be a super-flexible continuum, and it is shown that this assumption helps in gaining insight into quantum entanglement and the collapse of the q-m wave-function. In a continuum, when a labeled dot moves from point A to A’, as shown in Figure 1, the point B behind it has to move from B to B’, and this chain of displacements has to complete a closed circular path, whose radius can be as small as a few nanometers or as large as a few thousand kilometers. Since there is no preferred axis about which point A should orbit, it moves partly about the x-axis, partly about the y-axis and partly about the z-axis, forming a small circle on the spherical shell. This motion of point A, completing a small circle on the surface of a spherical shell, takes some time t, giving rise to a wave in the radial directions.
The propagation of this wave in the radial directions is at the speed of light, whereas the displacements of points on the surface of the spherical shell are instantaneous and simultaneous, because of the continuum nature of ‘space’. When such a wave of fluctuations moves in a radial direction, its mirror-wave, of the same frequency and amplitude, has to move in the radially opposite direction, giving rise to a pair of pulses moving apart in radial directions. Since the spin and polarization of ‘photons’ are perpendicular to the direction of the photon’s motion, they lie in the tangential direction of the spherical shell, on which displacements of points take place instantaneously. Therefore, when the spin, or polarization, of one photon is measured, the spin and polarization of the mirror-particle are instantaneously affected, because of the instantaneous displacement of points on the surface of the spherical shell. Similarly, when a photon is absorbed by one atom on the spherical shell, the wave collapses, because of the simultaneous displacement of points on the spherical shell. Thus, the observed entanglement of photons, and of other particles, implies the presence of an underlying ‘super flexible continuum’ (SFC), as was anticipated in [2] in the year 1988.

**Category:** Quantum Physics

[219] **viXra:1412.0265 [pdf]**
*submitted on 2014-12-29 10:23:07*

**Authors:** Vladimir S. Netchitailo

**Comments:** 15 pages, 41 references

World – Universe Model is based on three primary assumptions:
1) The World is finite and is expanding inside the Universe with a speed equal to the electrodynamic constant c. The Universe serves as an unlimited source of energy that continuously enters the World from the boundary.
2) The Medium of the World, consisting of protons, electrons, photons, neutrinos, and dark matter particles, is an active agent in all physical phenomena in the World.
3) Two fundamental parameters, in various rational exponents, define all macro and micro features of the World: the fine-structure constant α and the dimensionless quantity Q. While α is constant, Q increases with time and is in fact a measure of the size and the age of the World.
In this paper, we introduce Cosmic Large Grains whose mass approximately equals the Planck mass and whose temperature is in the neighborhood of 29 K. These grains are Bose – Einstein condensates of cosmic dineutrinos and are indeed responsible for the far-infrared background radiation.

**Category:** Relativity and Cosmology

[218] **viXra:1412.0263 [pdf]**
*replaced on 2015-01-04 11:21:05*

**Authors:** Valdir Monteiro dos Santos Godoi

**Comments:** 11 Pages.

We calculate the angle of deflection suffered by light passing near a large mass M, the Sun, first using Newtonian Mechanics and then General Relativity. We find that, with the solution obtained in General Relativity for the motion of the light, a collision of the light with the Sun occurs before it grazes the Sun, which does not happen with Newton's theory. We have also seen that for large eccentricities, in a hyperbolic motion, the total energy E of the photon according to Newtonian Mechanics tends to Einstein's famous equation E = mc^2. We suggest further experimental checks of the starlight deflection angle, both over a period of 6 months without any eclipse and by comparing the positions of the stars during an eclipse with their positions at an earlier date.
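For orientation, the standard textbook small-angle values for a ray grazing the solar radius can be computed directly. The sketch below uses the conventional classical corpuscular result 2GM/(c^2 R) and the General Relativity result 4GM/(c^2 R); these are the reference values, not the corrections the paper argues for:

```python
import math

# Reference constants in SI units (not taken from the paper)
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30     # solar mass, kg
c = 2.998e8      # speed of light, m/s
R = 6.957e8      # solar radius, m (impact parameter of a grazing ray)

newton = 2 * G * M / (c**2 * R)   # classical corpuscular deflection, radians
gr = 4 * G * M / (c**2 * R)       # General Relativity deflection, radians

to_arcsec = math.degrees(1) * 3600
print(newton * to_arcsec)  # about 0.88 arcsec
print(gr * to_arcsec)      # about 1.75 arcsec, twice the classical value
```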

**Category:** Classical Physics

[217] **viXra:1412.0261 [pdf]**
*replaced on 2016-01-02 10:39:34*

**Authors:** Sylwester Kornowski

**Comments:** 3 Pages.

The Scale-Symmetric Theory (SST) shows that quantum entanglement fixes the speed of light in “vacuum”, c, as the relative radial speed of photons in relation to their sources or to a last-interaction object (which can be a detector); such is the correct interpretation of the Michelson-Morley experiment. As a result, in cosmology the spatial distances to galaxies generally differ from the time distances associated with the speed c; this is the duality of relativity. The duality of relativity causes two different Hubble constants to appear: the real Hubble constant, which is 45.24, and the Special-Relativity Hubble constant, which is 70.52. Moreover, at the beginning of the expansion of the Universe there appeared cascades of protuberances of the dark matter and dark energy carrying the protogalaxies. The dampened protuberances led to redshifts higher, and much higher, than the mean redshift z = 0.6415 characteristic of the front of the expanding baryonic matter. SST shows that the protuberant redshift behaves similarly to the gravitational redshift. Notice as well that the protuberances led to the observed untypical radial motions of groups of protogalaxies; these motions do not follow from gravitational attraction by some mass external to the expanding Universe. SST shows that spacetime does not expand; rather, the dark matter and dark energy expand. The acceleration of the expansion of the Universe is an illusion that follows from the duality of relativity.

**Category:** Quantum Gravity and String Theory

[216] **viXra:1412.0260 [pdf]**
*submitted on 2014-12-27 15:29:14*

**Authors:** Klee Irwin

**Comments:** 13 Pages.

A first-principles theory of everything has never been achieved. An E8-derived code of quantized spacetime could meet the following suggested requirements: (1) a first-principles explanation of time dilation, inertia, the magnitude of the Planck constant and the speed of light; (2) a first-principles explanation of conservation laws and gauge transformation symmetry; (3) it must be fundamentally relativistic, with nothing that is invariant being absolute; (4) pursuant to the deduction that reality is fundamentally information-theoretic, all information must be generated by observation/measurement at the simplest Planck scale of the code/language; (5) it must be non-deterministic; (6) it must be computationally efficient; (7) it must be a code describing a "jagged" (quantized) waveform, i.e. a waveform language; (8) it must have a first-principles explanation for preferred chirality in nature.

**Category:** Quantum Gravity and String Theory

[215] **viXra:1412.0259 [pdf]**
*submitted on 2014-12-27 18:41:56*

**Authors:** J. S. Markovitch

**Comments:** 9 Pages.

In 2007 a single mathematical model encompassing both quark and lepton mixing was described. This model exploited the fact that when a 3 x 3 rotation matrix whose elements are squared is subtracted from its transpose, a matrix is produced whose non-diagonal elements have a common absolute value, where this value is an intrinsic property of the rotation matrix. For the traditional CKM quark mixing matrix with its second and third rows interchanged (i.e., c - t interchange), this value equals one-third the corresponding value for the leptonic matrix (roughly, 0.05 versus 0.15). The model was distinguished by three such constraints on mixing. As seven years have elapsed since its introduction, it is timely to assess the model's accuracy. Despite large conflicts with experiment at the time of its introduction, and significant improvements in experimental accuracy since then, the model's six angles collectively fit experiment well; one angle, however, was incorrectly forecast and did require toggling (in 2012) the sign of an integer exponent. The model's mixing angles in degrees are 45, 33.210911, 8.034394 (the angle affected) for leptons; and 12.920966, 2.367442, 0.190986 for quarks.
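The matrix property the model exploits is easy to verify numerically: elementwise squaring of a rotation matrix gives a doubly stochastic matrix, so subtracting its transpose yields an antisymmetric matrix with zero row sums, which in 3 x 3 forces one common absolute value off the diagonal. A sketch (the Euler-angle construction and the particular angles are arbitrary choices for illustration):

```python
import numpy as np

def rotation(a, b, c):
    """3x3 rotation matrix as a product of rotations about the z, y and x axes."""
    ca, sa, cb, sb, cc, sc = np.cos(a), np.sin(a), np.cos(b), np.sin(b), np.cos(c), np.sin(c)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cc, -sc], [0, sc, cc]])
    return Rz @ Ry @ Rx

R = rotation(0.3, 1.1, -0.7)        # arbitrary angles
S = R**2                            # square the ELEMENTS, not the matrix
A = S - S.T                         # antisymmetric, with zero row sums
off = np.abs(A[np.triu_indices(3, k=1)])
print(np.allclose(off, off[0]))     # True: one common absolute value
```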

**Category:** High Energy Particle Physics

[214] **viXra:1412.0258 [pdf]**
*replaced on 2015-07-08 23:45:53*

**Authors:** Kenneth Dalton

**Comments:** 10 Pages. Journal-Ref: Hadronic J. 39(3), 303-313 (2016)

The linear field equations are solved for the metrical component $g_{00}$.
The solution is applied to the question of gravitational energy transport.
The Hulse-Taylor binary pulsar is treated in terms of
the new theory. Finally, the detection of gravitational waves is discussed.

**Category:** Relativity and Cosmology

[213] **viXra:1412.0257 [pdf]**
*replaced on 2016-08-08 16:52:34*

**Authors:** Gary D. Simpson

**Comments:** 21 Pages.

Definitions are presented for "quaternion functions" of a quaternion. Polynomial and exponential quaternion functions are presented. Derivatives and integrals of these quaternion functions are developed. It is shown that quaternion multiplication can be represented by matrix multiplication provided the matrix has a specific type of structure. It is also shown that differentiation and integration are similar to their non-quaternion applications.
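The representation of quaternion multiplication by matrix multiplication corresponds to the standard 4 x 4 left-multiplication matrix. A minimal numerical check, with quaternions stored as [w, x, y, z] (the function names are illustrative, not the paper's):

```python
import numpy as np

def left_mult_matrix(q):
    """4x4 matrix L(q) such that L(q) @ p equals the Hamilton product q*p."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def quat_mul(q, p):
    """Direct Hamilton product, for comparison."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

q = np.array([1.0, 2.0, 3.0, 4.0])
p = np.array([0.5, -1.0, 2.0, 0.0])
print(np.allclose(left_mult_matrix(q) @ p, quat_mul(q, p)))  # True
```

Note the specific structure of L(q): skew-symmetric off the diagonal apart from signs fixed by the quaternion units, which is the kind of structural constraint the paper refers to.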

**Category:** Mathematical Physics

[212] **viXra:1412.0256 [pdf]**
*submitted on 2014-12-27 22:42:35*

**Authors:** Gary D. Simpson

**Comments:** 9 Pages.

This text demonstrates that how we think about both Mathematics and Physics can be influenced by the mathematical tools that are available to us. The author attempts to predict what Newton might have thought and done if he had known of the works of Euler and Hamilton and had been familiar with the matrix methods of Linear Algebra. The author shows that Newton would have come very close to Special Relativity.

**Category:** General Mathematics

[211] **viXra:1412.0255 [pdf]**
*submitted on 2014-12-28 00:13:01*

**Authors:** Bertrand Wong

**Comments:** 3 Pages.

The phenomenon of quantum entanglement involving two particles has puzzled us for a long time. This article presents some possible solutions.

**Category:** Relativity and Cosmology

[210] **viXra:1412.0254 [pdf]**
*submitted on 2014-12-28 01:09:54*

**Authors:** Temur Z. Kalanov

**Comments:** 11 Pages.

A critical analysis of the foundations of the theory of negative numbers is proposed. The unity of formal logic and of rational dialectics is the methodological basis of the analysis. It is shown that the foundations of the theory of negative numbers contradict practice and contain formal-logical errors. The main results are as follows: (a) the concept of “number sign” is inadmissible because it represents a formal-logical error; (b) all numbers are neutral because the number “zero” is neutral; (c) the signs “plus” and “minus” are only symbols of mathematical operations. The obtained results are sufficient reason for the following statement: the existence of logical errors in the theory of negative numbers determines the essence of the theory, namely that the theory is a false one.

**Category:** General Mathematics

[209] **viXra:1412.0253 [pdf]**
*replaced on 2015-06-14 16:16:42*

**Authors:** J. S. Markovitch

**Comments:** 5 Pages.

A nonstandard cubic equation is shown to have an unusually economical solution, where this solution incorporates an angle that serves as the equation's discriminant.

**Category:** Algebra

[208] **viXra:1412.0252 [pdf]**
*submitted on 2014-12-27 10:24:17*

**Authors:** J. S. Markovitch

**Comments:** 3 Pages.

A substitution map applied to the simplest algebraic identities is shown to yield second- and third-order equations that share an interesting property at the minimum 137.036.

**Category:** Algebra

[207] **viXra:1412.0251 [pdf]**
*replaced on 2014-12-27 23:52:10*

**Authors:** Yibing Qiu

**Comments:** 1 Page.

Based on the background of their proposal and establishment, this note analyzes and points out the historical limitations of the Quark Model and Quantum Chromodynamics (QCD).

**Category:** Nuclear and Atomic Physics

[206] **viXra:1412.0250 [pdf]**
*submitted on 2014-12-27 04:44:46*

**Authors:** Nicolae Bratu, Adina Cretan

**Comments:** 4 Pages.

The present paper is a revised fragment of the work [3], published only in Romanian. Using a new function, the “cubic combination”, we can solve various problems. The novelty of this work consists in the deduction of an infinite number of third-degree Ramanujan identities.

**Category:** Number Theory

[205] **viXra:1412.0249 [pdf]**
*submitted on 2014-12-27 05:10:48*

**Authors:** F. F. Mende

**Comments:** 8 Pages.

The article examines a liquid-drop model of the electron and the atom, which assumes that the electron exists both as a ball-shaped formation and as a liquid. This model is built on the same principles as the liquid-drop model of the nucleus proposed by Bohr and Weizsäcker.

**Category:** Classical Physics

[204] **viXra:1412.0248 [pdf]**
*submitted on 2014-12-27 05:22:19*

**Authors:** Nicolae Bratu

**Comments:** 8 Pages.

In the work “Disquisitiones Diophanticae”, published in 2006 in Romanian, I gathered succinctly and schematized the content of the 1983 “Memorandum to the Romanian Academy” concerning Fermat’s Last Theorem. This paper demonstrates a lemma that completes the algebraic method we proposed to prove Fermat’s Theorem.

**Category:** Number Theory

[203] **viXra:1412.0247 [pdf]**
*submitted on 2014-12-26 15:30:26*

**Authors:** Sergio Arciniegas-Alarcón, Marisol García-Peña, Wojtek Krzanowski, Carlos Tadeu dos Santos Dias

**Comments:** 14 Pages.

A common problem in multi-environment trials arises when some genotype-by-environment combinations are missing. In Arciniegas-Alarcón et al. (2010) we outlined a method of data imputation to estimate the missing values, the computational algorithm for which was a mixture of regression and lower-rank approximation of a matrix based on its singular value decomposition (SVD). In the present paper we provide two extensions to this methodology, by including weights chosen by cross-validation and allowing multiple as well as simple imputation. The three methods are assessed and compared in a simulation study, using a complete set of real data in which values are deleted randomly at different rates. The quality of the imputations is evaluated using three measures: the Procrustes statistic, the squared correlation between matrices, and the normalised root mean squared error between these estimates and the true observed values. None of the methods makes any distributional or structural assumptions, and all of them can be used for any pattern or mechanism of the missing values.
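
As a minimal illustration of the underlying idea (not the authors' algorithm, which mixes regression with the SVD and cross-validated weights), missing entries can be imputed by iterating a low-rank SVD approximation; the rank, starting values and tolerance below are assumptions made for the sketch:

```python
import numpy as np

def svd_impute(X, rank=2, n_iter=100, tol=1e-8):
    """Fill missing entries (NaN) of X by iterating a rank-`rank`
    SVD approximation: a simple EM-style imputation scheme."""
    X = X.astype(float)
    mask = np.isnan(X)
    # Start from column means, then alternate: low-rank fit / refill.
    filled = np.where(mask, np.nanmean(X, axis=0), X)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        new = np.where(mask, approx, X)   # observed values stay fixed
        if np.max(np.abs(new - filled)) < tol:
            filled = new
            break
        filled = new
    return filled
```

On an exactly rank-1 matrix with one deleted entry, the iteration recovers the deleted value; on real genotype-by-environment tables the choice of rank would matter, which is where the authors' cross-validated weighting comes in.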

**Category:** Statistics

[202] **viXra:1412.0246 [pdf]**
*submitted on 2014-12-26 16:11:05*

**Authors:** Nicolae BRATU, Adina CRETAN

**Comments:** 6 Pages.

This paper has been updated and completed thanks to suggestions and criticisms from Dr. Mike Hirschhorn of the University of New South Wales. We want to express our highest gratitude.
The paper appeared in an abbreviated form [6]. The present work is the complete form.
For the homogeneous Diophantine equation x^2 + by^2 + cz^2 = w^2, the literature contains solutions only for particular values of the parameters b and c. These solutions were found by Euler, Carmichael and Mordell. They proposed a particular solution for this equation in [3]. This paper presents the general solution of this equation as functions of the rational parameters b, c and their divisors. As a consequence, we obtain the theorem that every positive integer can be represented as a sum of three squares, with at most one of them duplicated, which improves on the Fermat-Lagrange theorem.

**Category:** Number Theory

[201] **viXra:1412.0245 [pdf]**
*replaced on 2015-01-25 04:36:33*

**Authors:** Giuliano Bettini

**Comments:** 20 pages. Drawings added.

A biological model of elementary particles with many resemblances to DNA; more precisely, to circular supercoiled DNA, or to supercoiled rods.
The model is isomorphic to the quark model.
It accounts for all the elementary particles, and only these. A particle is a closed wire, a single strand, a helix. Quarks are twisted pieces of this helix, each having its own charge Q, as well as isospin projection I3 and hypercharge Y.
I put charge Q, isospin projection I3 and hypercharge Y in one-to-one correspondence with a physical model in which each quark, each piece of helix, has its own linking number Lk, twist Tw and writhe Wr. As a consequence, any elementary particle is modelled by a closed wire with its own internal twist, writhe and linking number (charge).
For any closed cord Lk is invariant, so an alteration of the twist will be absorbed as writhe, and vice versa. Each pair Tw/Wr corresponds to a different conformation.
The lowest-energy conformation will have the lowest mass, and the others will have much more mass.
As far as I know, the model is just “picturesque”, but it may lead to some ideas.

**Category:** Nuclear and Atomic Physics

[200] **viXra:1412.0244 [pdf]**
*submitted on 2014-12-26 05:16:06*

**Authors:** Thomas Colignatus

**Comments:** 23 Pages.

Professor Casey's book opposes the historical evidence for Jesus to the mythical origin of the story. Historicism is generally accepted in academic New Testament Studies; mythicism is often adhered to by non-scholars on the internet. The review uses the analogy of Santa Claus to bring forth a point that may have been missed both by professor Casey and by the mythicists whom he wishes to expose. For Santa Claus there is the historical bishop Nicolas of Myra (Turkey), but it would be inaccurate to call him the "historical Santa Claus", since the origin of the story is rather the neolithic myth of the Norse god Wodan, who rides the sky on the back of his horse Sleipnir. The Church imposed the story of Nicolas on the ancient myth in order to control the heresy. If the historical Jesus was a mere man, he couldn't have walked on water or risen from death, and the story of the resurrection is reminiscent of many similar mythical stories from prehistoric times. For Jesus the religious meaning and the resurrection are the defining issue, for otherwise why tell the story from generation to generation? If there was a historical preacher, healer and exorcist who got associated with already existing ancient myths of resurrection, then it becomes awkward to speak about a historical Jesus, just as with the "historical Santa Claus", because such a historical Jesus is at a distance from what defines him in the story that people consider relevant to relate. The review looks into the historical method, Crossley's and Casey's dating of Mark to 40 CE, the value of the evidence from the Aramaic language, and some aspects of professor Casey's rejection of the mythicist argument. The review is by an outsider to ancient history and New Testament Studies, as the author is an econometrician and teacher of mathematics. His interest is his proposal for the development of a multidisciplinary course on Jesus and the origin of Christianity, explained in his book The Simple Mathematics of Jesus (2012).

**Category:** Religion and Spiritualism

[199] **viXra:1412.0243 [pdf]**
*submitted on 2014-12-26 05:20:07*

**Authors:** Thomas Colignatus

**Comments:** 44 Pages. Not quite a book review

Lendering (2014) "Israël verdeeld" (Israel divided), henceforth JLIV, claims to present a history of the Jewish world in 180 BC - 70 AD. The book has mixed features of a scholarly book and a book that popularises historical findings for a general audience. JLIV frames Jesus as a historical figure. JLIV also discusses the historical method, but not the criticism of its application to Jesus. The author adheres to the motto: "Relevance is the enemy of history" (J.P. Meier). The focus on the Jewish world in JLIV implies an emphasis on Jesus being a Jew, and thus marginal to the Greek and Roman world. The implied argument is: why would Greeks and Romans worship a Jew as their God? JLIV does not explicitly discuss scenarios other than a historical Jesus. JLIV basically neglects the arguments of serious authors who argue that Jesus did not exist as a historical figure, and thus is not even a legend but a myth. There are various scenarios for how a Jesus as a mere idea could have come about. One possibility is that a sect of Hellenised Jews came upon the creation themselves. Another possibility is deliberate deception. JLIV does not put the Greek and Roman conquest of Israel and Judea at center stage, which could explain this deception. The creed around Jesus might have been created by the Romans to pacify the religiously fanatical Jews. It is only after some centuries, and by further processes, that the Roman Empire eventually adopted the creed as its own, as a twist of history. For some readers it may matter whether Jesus really existed. For scientists it doesn't matter, but for them science and truthfulness matter. The relevance that Meier and Lendering refer to is that people feel cheated, and scientists feel distressed, by religious authorities and 'scholars' who distort the truth. To understand JLIV on the historical method and JLIV's response to the criticism of the historical Jesus, the readers of JLIV are not well served by JLIV itself.
For this, one must look at other texts by Lendering. While Lendering may present JLIV as his position and answer, that position and answer are not there, while what is presented elsewhere fails. Potential readers of JLIV are advised to wait for a second, revised edition. Science can progress when authors are free to develop their arguments, but it is part of the process to respond to criticism.

**Category:** Religion and Spiritualism

[198] **viXra:1412.0242 [pdf]**
*submitted on 2014-12-26 05:33:24*

**Authors:** Hasmukh K. Tank

**Comments:** Seven page paper

As soon as any author proposes an alternative interpretation of the cosmological red-shift, it is opposed on the grounds that tired-light explanations are not compatible with the observed time-dilation of supernova light-curves. So this author had shown [1] that any mechanism which can cause the ‘cosmological red-shift’ will also cause ‘time-dilation of supernova light-curves’. Still the alternative interpretations are not taken seriously. So this author is constrained to show some limitations of the current big-bang cosmology, such as: (i) the general theory of relativity predicts ‘expansion of space’ between the galaxies, but the space within a galaxy is not expanding, because a galaxy is a gravitationally-bound structure. The question raised here is: if so, then what happens at the edge of a galaxy, whose external space is expanding but whose internal space is not? Is there a smooth transition from expanding to non-expanding space? If expanding space can stretch the wavelength of a cosmologically red-shifting photon, then the less and less expanding space at the boundary of the galaxy should shrink the wavelength back to its original length, shouldn't it? (ii) According to general relativity the planets, like the Earth, orbit around the Sun because the space around the Sun has become curved, and the planets are in inertial motion, traveling along the geodesic path. Now the question raised here is: inertial motion of a body can be at any speed. Can the planets travel along the geodesic path at any speed they like? Can they take a coffee-break and then proceed further? (iii) According to general relativity there is a radial distance at which the rate of expansion of space equals the speed of light, so the wave-front of light beyond this radius is not able to enter the sphere of the observable universe. This means that the speed of light is the same, 3 × 10^8 meters per second.
The question raised here is: since the speed of light is the same in expanding as well as non-expanding space, and f · λ = c, i.e. the product of frequency (f) and wavelength (λ) is always equal to the speed of light (c), the wavelength (λ) can increase only when the frequency (f) gets reduced, and not because of expansion of space. Then a new self-gravitational-acceleration-based explanation for the cosmological red-shift is proposed. And it is shown that the reduction in energy of ‘cosmologically red-shifting photons’ matches strikingly with this new explanation.

**Category:** Relativity and Cosmology

[197] **viXra:1412.0239 [pdf]**
*replaced on 2014-12-31 05:28:32*

**Authors:** E.P.J. de Haas

**Comments:** 18 Pages.

This paper is a sequel to ``Doppler Boosting a de Broglie Electron from a Free Fall Grid Into a Stationary Field of Gravity''. We Doppler boost a de Broglie particle from a free fall grid onto a stationary field of gravity. This results in an identification of the two Doppler boost options with an electron energy double-valuedness similar to electron spin. It seems that, within the limitations of our approach to gravity, we have found a bottom-up version of a possible theory of quantum gravity, one that connects the de Broglie hypothesis to gravity. This paper finishes and adapts ``Towards a 4-D Extension of the Quantum Helicity Rotator with a Hyperbolic Rotation Angle of Gravitational Nature'' for the quantum gravity part. We try to boost the de Broglie particle's quantum wave equation from the free fall grid to the stationary grid. We find that this is impossible on the Klein-Gordon level, the Pauli level and the Dirac level. But when we double the Dirac level, and thus realize a kind of Yang-Mills doublet level, we can formulate a doublet version of the Weyl-Dirac equation that can be Doppler boosted from the free fall grid into the stationary grid in a central field of gravity. In the end we add a quantitative prediction for the gravitational Doppler shift of the matter wave, or probability density, of an electron-positron pair. Our free-fall-grid-to-stationary-grid approach is ad hoc and does not present a fundamental theory; it is a pragmatic attempt to formulate quantum mechanics outside the Poincaré group environment and beyond Lorentz symmetry.

**Category:** Quantum Gravity and String Theory

[196] **viXra:1412.0238 [pdf]**
*submitted on 2014-12-24 20:31:38*

**Authors:** U.V.S. Seshavatharam, S.Lakshiminarayana

**Comments:** 6 Pages. Happy Christmas 25 Dec 2014

During evolution, the cosmic thermal energy density is always directly proportional to the critical mass-energy density. The product of the cosmic ‘critical density’ and the ‘critical Hubble volume’ can be called the ‘critical mass’ of the evolving universe. With reference to Mach’s principle, the cosmic ‘critical density’, ‘critical volume’ and ‘critical mass’ can be considered the quantified background dynamic properties of the evolving universe. By considering the Planck mass as the critical mass connected with the big bang, a Planck-scale Hubble constant and critical density can be defined. The observed redshift can be reinterpreted as a cosmological light-emission phenomenon connected with a cosmologically reinforcing or strengthening hydrogen atom. Supernova dimming can also be understood in this way. To understand the ground reality of the cosmic rate of expansion, the accuracy of the current methods of estimating the magnitudes of the current Hubble constant and the current CMBR temperature must be improved.

**Category:** Relativity and Cosmology

[195] **viXra:1412.0237 [pdf]**
*submitted on 2014-12-24 22:54:52*

**Authors:** Edwin Eugene Klingman

**Comments:** 8 Pages.

Bell oversimplified his model by confusing a provisional precession eigenvalue equation with Dirac's fundamental helicity eigenvalue equation. I derive a local classical model based on the energy-exchange physics that Bell intentionally suppressed, and I show that Bell's constraints determine whether the model is local or non-local. The physical theory upon which the model is based can be tested experimentally; if it is valid, Bell's claims of non-locality will be proved wrong.

**Category:** Quantum Physics

[194] **viXra:1412.0236 [pdf]**
*submitted on 2014-12-25 04:38:19*

**Authors:** Predrag Terzic

**Comments:** 1 Page.

A conjectured polynomial-time compositeness test for numbers of the form 2*3^n - 1 is introduced.

**Category:** Number Theory

[193] **viXra:1412.0235 [pdf]**
*replaced on 2015-07-28 04:51:49*

**Authors:** Thomas Colignatus

**Comments:** 2 Pages. The paper refers to the book FMNAI that supersedes the paper

Paul of Venice (1369-1429) provides a consistency condition that resolves Russell's Paradox in naive set theory without using a theory of types. It allows a set of all sets. It also blocks the (diagonal) general proof of Cantor's Theorem (in Russell's form, for the power set). The Zermelo-Fraenkel Axiom-of-Choice (ZFC) axioms for set theory appear to be inconsistent: they are still too lax on the notion of a well-defined set. The transfinites of ZFC may be a mirage, a consequence of the still-imperfect axiomatics in ZFC for the foundations of set theory. For the amendment of ZFC two alternatives are mentioned: ZFC-PV (amendment of the Axiom of Separation) or BST (Basic Set Theory).

**Category:** Set Theory and Logic

[192] **viXra:1412.0234 [pdf]**
*replaced on 2015-07-28 05:05:15*

**Authors:** Thomas Colignatus

**Comments:** 2 Pages. The paper refers to the book FMNAI that supersedes the paper

> Context • In the philosophy of mathematics there is the distinction between platonism (realism), formalism, and constructivism. There seems to be no distinguishing or decisive experiment to determine which approach is best according to non-trivial and self-evident criteria. As an alternative approach it is suggested here that philosophy finds a sounding board in the didactics of mathematics rather than in mathematics itself. Philosophers can go astray when they don’t realise the distinction between mathematics (possibly pure modeling) and the didactics of mathematics (an empirical science). The approach also requires that the didactics of mathematics be cleansed of its current errors. Mathematicians are trained for abstract thought, but in class they meet with real-world students. Traditional mathematicians resolve their cognitive dissonance by relying on tradition. That tradition, however, is not targeted at didactic clarity and empirical relevance with respect to psychology. The mathematical curriculum is a mess. Mathematical education requires a (constructivist) re-engineering. Better mathematical concepts will also be crucial in other areas, such as brain research. > Problem • Aristotle distinguished between the potential and the actual infinite, Cantor proposed the transfinites, and Occam would want to reject those transfinites if they aren’t really necessary. My book “A Logic of Exceptions” already refuted ‘the’ general proof of Cantor's Conjecture on the power set, so that the latter holds only for finite sets but not for ‘any’ set. There still remains Cantor’s diagonal argument on the real numbers. > Results • There is a bijection by abstraction between N and R. Potential and actual infinity are two faces of the same coin. Potential infinity is associated with counting, actual infinity with the continuum, but they would be ‘equally large’. The notion of a limit in R cannot be defined independently of the construction of R itself. Occam’s razor eliminates Cantor’s transfinites. > Constructivist content • Constructive steps S1, ..., S5 are identified, while S6 gives non-constructivism (possibly the transfinites). Here S3 gives potential infinity and S4 actual infinity. The latter is taken as ‘proper constructivism with abstraction'. The confusions about S6 derive from logic rather than from infinity.

**Category:** Set Theory and Logic

[191] **viXra:1412.0233 [pdf]**
*submitted on 2014-12-25 05:58:23*

**Authors:** Thomas Colignatus

**Comments:** 10 Pages. Paper of 2007, written in Mathematica

Adding some reasonable properties to the Gödelian system of Peano Arithmetic creates a new system for which Gödel's incompleteness theorems collapse and the Gödeliar becomes the Liar paradox again. Rejecting those properties is difficult since they are reasonable. Three-valued logic is a better option for dealing with the Liar and its variants.

**Category:** Set Theory and Logic

[190] **viXra:1412.0232 [pdf]**
*submitted on 2014-12-24 08:25:50*

**Authors:** Hasmukh K. Tank

**Comments:** Four-page letter

This letter proposes an explanation for the century-old puzzle of the wave-particle duality of light, which thousands of physicists, including Einstein, Planck and Feynman, have been trying to resolve. Since a ‘particle’ is localized in space, it is mathematically characterized here as an impulse-function in space. It is then argued that if this ‘particle’ has anything to do with waves, we can find out by taking the Fourier transform of the impulse-function in space. When Fourier-transformed into the wave-number domain, we find that a ‘particle’ should contain a wide ‘set’ of waves, not just a single frequency. We then show that in the experiments performed so far [1] the red lasers had significantly wide line-widths, meaning the sources have been producing a wide set of waves, not a pure single frequency. Similarly, in the single-particle interference experiments, incandescent filament lamps were used with green filters inserted to isolate single photons; but very narrow-band filters are not yet technically feasible at the frequencies of light, so the green filters used allowed a significantly wide band of waves. So in the experiments performed so far we got ‘particles’ localized in space. And in the double-slit interference experiments this wide band of waves passed through both slits, interfered like waves, and whenever and wherever the waves added coherently, a ‘particle’ called a ‘photon’ got detected. Atoms emit short-duration pulses of radiation, so their Fourier transform contains a wide band of frequencies. And at the time of detection, an atom needs only a small piece of continuous ‘wave’, of suitable duration, amplitude and frequency, to eject an electron. So it is concluded here that the emission and detection of light is in the form of ‘photons’, whereas its propagation in space is in the form of a wide band of ‘waves’.
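
The abstract's central claim, that a point-like 'particle' Fourier-transforms into a maximally wide band of waves, can be checked numerically; this discrete-FFT sketch is an editor's illustration of the argument, not the author's analysis:

```python
import numpy as np

# A 'particle' perfectly localized in space is an impulse; its Fourier
# transform is flat: every wave number appears with equal magnitude.
n = 256
impulse = np.zeros(n)
impulse[n // 2] = 1.0
spectrum = np.abs(np.fft.fft(impulse))    # flat: every bin has magnitude 1

# Conversely, a single pure frequency (8 cycles over the window) is fully
# delocalized in space but concentrated in one spectral bin.
wave = np.cos(2 * np.pi * 8 * np.arange(n) / n)
wave_spectrum = np.abs(np.fft.fft(wave))  # peaked at bins 8 and n-8
```

The two cases are the extremes of the uncertainty trade-off the letter invokes: a finite line-width source sits in between, emitting a band of waves rather than a single frequency.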

**Category:** Quantum Physics

[189] **viXra:1412.0231 [pdf]**
*submitted on 2014-12-23 18:59:03*

**Authors:** Rodney Bartlett

**Comments:** 8 Pages.

R136a1 is a monstrous-sized star 165,000 light years away in the Large Magellanic Cloud, one of our Milky Way’s satellite galaxies. It currently has 265 times the mass of the Sun and may have been 320 solar masses when it first formed. It’s the most massive and most luminous star ever found, being 10 million times brighter than the Sun. "Owing to the rarity of these monsters, I think it is unlikely that this new record will be broken any time soon," said (English astrophysicist Paul) Crowther.
The primary purpose of this article is not the description of R136a1, or of stellar mass. These are merely tools employed to clarify how the Unified Field permits a mathematical route from any idea conceived by the brain to that idea’s fulfilment in reality. In other words, to reconcile the anthropic principle with unified theories in physics. And to show a strong version of that principle - that a direct link exists between human existence and the actual form of the laws of nature. Incidentally, the article concludes that there cannot be a multiverse of many universes since the universe as a whole (not the observable cosmos) is infinite and eternal.

**Category:** Relativity and Cosmology

[188] **viXra:1412.0230 [pdf]**
*submitted on 2014-12-23 23:01:47*

**Authors:** Jia Yongxin, Jia Xiangyun, Jia Yuxuan

**Comments:** 67 Pages.

In current optical theory, the moving mass of the light quantum is usually studied as the basic mass where photon energy is concerned. In the paper titled "The Relationship between light translation and fluctuation and ether energy and momentum", the relations between the ether photon, the energy of the ether field and its momentum can also be considered in light of the manifestation of mass and momentum. This can show the situation of the light quantum based on energy divergence, which has a certain significance for an accurate understanding of ether. It can be seen that this method ignores the systemic character and independent space of the ether photon itself, so that only the overall relative manifested state of the light quantum, not its essential state, is revealed. In this context, it is difficult to understand the relationship between the real photon mass and the moving mass, which is quite unacceptable, just like the principle of qualitative change in the theory of relativity. From the perspective of the quantum property of light-quantum momentum, we analyze the relationship between the real mass of the photon, the moving mass and the motion itself, as well as the relationship between the ether photon particle group and the motion of astral bodies, and hence the relationship with frequency. Our discussion can accurately reflect the real conditions of the mass, momentum and energy of the light quantum. Many results conforming to observation are obtained, which can provide explanations for the following relations: the relations between light radiation and the motion of a pulsating star, volume expansion and the motion of an astral body, nova eruption and motion, the transverse Doppler effect, and the frequency difference in solar spherical radiation.
This work can also be read as a further development of classical particle physics: by changing the mechanics of a single mass point to the mechanics of mass-point groups, and to the manifested group mechanics due to the energy expansion of the single mass point, it becomes a compressible fluid mechanics characterized by the failure of universal gravitation, no strength, non-viscosity and continuous media. By bypassing the too-general concept of energy in the theory of relativity, energy is defined as the quantity of mass points and momentum points, so that the relationship between energy and momentum is analyzed more clearly.

**Category:** Quantum Physics

[187] **viXra:1412.0229 [pdf]**
*submitted on 2014-12-23 23:07:16*

**Authors:** Jia Dong, Jia Xiangyun

**Comments:** 72 Pages.

In the article titled "On the formation of the Earth's gravitational field", the concepts of the magneton line field and the gravitational line field of the primitive Earth, and the process of closure by bending, are explained. The gravitational space of the astral body is transformed into a realistic linear mass system. This provides the basis for a substantial mass field to address the problem of universal gravitation. In this situation, the concepts of the graviton line and the gravitational line field are proposed. Because of the cutting of the moving gravitational lines and magneton lines of other astral bodies, the precession of the graviton line and magneton line in a zigzag contraction manner occurs. From the perspective of the quantization of gravity, the source of universal gravitation is given. Thus, universal gravitation is related to the mass and movement of substance. This article proposes a method for the theoretical calculation of the universal gravitational constant. The theoretical quantized value of the Earth's universal gravitational constant calculated by this method is ΔG = 2.281853214 × 10^-30 kg m/s^2. The inverse-square dependence on distance is explained in terms of the efficiency of quantized momentum and the ratio of effective to ineffective transmission amount. Compared with Kepler's theorem, it is essentially the ratio of the area swept by each magnetic line due to cutting to the area swept by the progression converted from such movement after amplification over the distance. We compare universal gravitation with the Coulomb force, for the first time, in essence. Universal gravitation and the Coulomb force are unified. We find convincing proof that universal gravitation and the Coulomb force have an identical principle of action.

**Category:** Quantum Gravity and String Theory

[186] **viXra:1412.0228 [pdf]**
*submitted on 2014-12-24 01:42:47*

**Authors:** Marius Coman

**Comments:** 2 Pages.

In this paper I make an observation about an interesting formula based on the lesser prime p from a pair of twin primes, id est N = p^3 + 3*p^2 + 4*p + 1, which sometimes leads to the result N = q*r, where q, r are primes such that r - q + 1 = p, and sometimes to the result N = q*r, where at least one of q, r (or both) is composite such that r - q + 1 = p.
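
The stated formula is easy to check for small twin primes; the sketch below (an editor's illustration, not the author's method) factors N and tests the property r - q + 1 = p:

```python
def is_prime(n):
    """Trial-division primality check, adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def check_coman(p):
    """For the lesser prime p of a twin pair, compute
    N = p^3 + 3p^2 + 4p + 1, find its smallest factorization N = q*r,
    and report whether r - q + 1 = p holds."""
    assert is_prime(p) and is_prime(p + 2), "p must be a lesser twin prime"
    N = p**3 + 3*p**2 + 4*p + 1
    for q in range(2, int(N**0.5) + 1):
        if N % q == 0:
            r = N // q
            return N, q, r, r - q + 1 == p
    return N, None, None, False      # N itself is prime
```

For p = 3 the value N = 67 is prime, while p = 5 gives 221 = 13 * 17 with 17 - 13 + 1 = 5 and p = 11 gives 1739 = 37 * 47 with 47 - 37 + 1 = 11, matching the "sometimes" in the abstract.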

**Category:** Number Theory

[185] **viXra:1412.0227 [pdf]**
*submitted on 2014-12-24 03:09:23*

**Authors:** John Frederick Sweeney

**Comments:** 19 Pages. Some charts and diagrams

Sir Roger Penrose, Nobel Laureate, has described them as “useless” for physics, yet these objects, “the old uncle locked up in the attic,” as John Baez describes them, play a fundamental role in the physics of the Universe, according to Vedic Physics theory. This paper explains why.

**Category:** Mathematical Physics

[184] **viXra:1412.0225 [pdf]**
*submitted on 2014-12-23 07:47:34*

**Authors:** Herbert Weidner

**Comments:** 10 Pages.

High resolution spectra of 74 SG stations were calculated with quadruple precision in order to reduce the numerical noise. The product spectrum shows 13 previously unknown spectral lines between 42 µHz and 70 µHz. Some of them may belong to the long-sought Slichter triplet.
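
The product-spectrum idea, multiplying normalized amplitude spectra from many stations so that only lines common to all records survive, can be illustrated generically; the signal, noise level, record length and record count below are invented for the sketch and have nothing to do with the actual SG data or Weidner's pipeline:

```python
import numpy as np

# Illustrative only: several noisy records share one weak spectral line.
# Multiplying their normalized amplitude spectra amplifies the common line
# and suppresses noise peaks, which fall on different bins in each record.
rng = np.random.default_rng(0)
n, n_records, line_bin = 4096, 8, 200

product = np.ones(n // 2 + 1)
for _ in range(n_records):
    t = np.arange(n)
    x = 0.1 * np.sin(2 * np.pi * line_bin * t / n) + rng.normal(size=n)
    amp = np.abs(np.fft.rfft(x))
    product *= amp / amp.mean()      # normalize each spectrum, then multiply

peak_bin = int(np.argmax(product))   # the shared line dominates the product
```

A line far below the single-record noise floor becomes the dominant feature of the product, which is why combining many stations can reveal lines invisible in any one spectrum.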

**Category:** Geophysics

[183] **viXra:1412.0224 [pdf]**
*replaced on 2015-02-04 12:06:05*

**Authors:** Solomon I. Khmelnik

**Comments:** 7 Pages.

Different variants of electromagnetic induction are considered. The type of induction caused by changes in the flux of electromagnetic induction is singled out. The dependence of this induction emf on the density of the electromagnetic energy flow and on the parameters of the wire is explored. We discuss the mechanism by which an energy flow arises that enters the wire and compensates the heat loss.

**Category:** Classical Physics

[182] **viXra:1412.0223 [pdf]**
*replaced on 2015-01-02 23:09:53*

**Authors:** Martin Schlueter

**Comments:** 1 Page. This document is licensed under a Creative Commons (CC BY-NC-ND)

An (assumed) new relationship between the harmonic series $H_{n}$ and the natural logarithm $log(n)$ is presented.
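
The abstract does not state the new relationship itself; as numerical context only, the classical connection between the two quantities, H_n - log(n) tending to the Euler-Mascheroni constant γ, can be checked directly:

```python
import math

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

# Classical background (not the paper's new relation): H_n - log(n)
# decreases toward gamma ~= 0.5772156649, with the gap behaving like 1/(2n).
diffs = {n: harmonic(n) - math.log(n) for n in (10, 1000, 100000)}
```

Any proposed new relation between H_n and log(n) would presumably refine this classical asymptotic.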

**Category:** Number Theory

[181] **viXra:1412.0222 [pdf]**
*submitted on 2014-12-23 04:21:43*

**Authors:** C. Burton, H. Isaaks

**Comments:** 6 Pages.

A new variation on the Copenhagen interpretation of quantum mechanics is introduced, and its effects on the evolution of the Universe are reviewed. It is demonstrated that this modified form of quantum mechanics will produce a habitable Universe with no required tuning of the parameters, and without requiring multiple Universes or external creators.

**Category:** Quantum Physics

[180] **viXra:1412.0221 [pdf]**
*submitted on 2014-12-23 04:44:40*

**Authors:** Robert Rhodes

**Comments:** 7 Pages.

In this article we discuss the possibility that an extraterrestrial species could be hostile to humanity, and present estimates for the probability that visitors to the Earth will be aggressive. For this purpose we develop a generic model of multiple civilizations which are permitted to interact, and using randomized parameters we simulate thousands of potential worlds through several millennia of their development. By reviewing the species which survive the simulation, we can estimate the fraction of species which are hostile and the fraction which are supportive of other cultures.
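
The abstract describes the approach but not the model's internals; the skeleton below is a purely hypothetical stand-in showing the Monte Carlo structure (randomized parameters, many simulated worlds, tabulation of survivors), with every rule and constant invented by the editor:

```python
import random

def simulate_world(rng):
    """One hypothetical 'world': a handful of civilizations with random
    aggression and strength interact over epochs; an aggressive encounter
    eliminates the weaker side. All rules here are illustrative only."""
    civs = [{"aggression": rng.random(), "strength": rng.random()}
            for _ in range(rng.randint(2, 6))]
    for _ in range(50):                       # epochs of development
        if len(civs) < 2:
            break
        a, b = rng.sample(civs, 2)
        if rng.random() < a["aggression"]:    # a chooses to attack b
            civs.remove(a if a["strength"] < b["strength"] else b)
    return civs

rng = random.Random(42)
survivors = [c for _ in range(2000) for c in simulate_world(rng)]
hostile_share = sum(c["aggression"] > 0.5 for c in survivors) / len(survivors)
```

The point is the estimation procedure, not the numbers: with randomized inputs, the distribution of traits among surviving species is what the simulation lets one tabulate.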

**Category:** Astrophysics

[179] **viXra:1412.0219 [pdf]**
*replaced on 2014-12-23 07:27:17*

**Authors:** Ervin Goldfain

**Comments:** 6 Pages. Work in progress.

The minimal fractal manifold (MFM) defines a space-time continuum endowed with arbitrarily small deviations from four dimensions. It was recently shown that the MFM is a natural consequence of the Renormalization Group and that it brings up a series of unforeseen solutions to the challenges raised by the Standard Model. In this brief report we argue that the MFM may be treated as an asymptotic manifestation of Non-Commutative (NC) Field Theory near the electroweak scale. Our provisional findings may be further expanded to bridge the gap between the MFM and NC Field Theory.

**Category:** High Energy Particle Physics

[178] **viXra:1412.0218 [pdf]**
*submitted on 2014-12-22 18:39:37*

**Authors:** Ramzi Suleiman, Yuval Samid

**Comments:** 37 Pages.

Theoretical and experimental research underscores the role of punishment in the evolution of cooperation between humans. Experiments using the public goods game have repeatedly shown that in cooperative social environments, punishment makes cooperation flourish, and that withholding punishment makes cooperation collapse. In less cooperative social environments, where antisocial punishment has been detected, punishment was detrimental to cooperation. The success of punishment in enhancing cooperation is explained as deterrence of free riders by cooperative strong reciprocators, who were willing to pay the cost of punishing them, whereas in environments in which punishment diminished cooperation, antisocial punishment was explained as revenge by low cooperators against high cooperators suspected of punishing them in previous rounds.
The present paper reconsiders the generality of both explanations. Using data from a novel public good experiment with punishment and from 16 public goods experiments from countries around the world, we report results showing that revenge alone does not drive antisocial punishment of cooperators, and that such punishment is predominantly part of an upward and downward punishment strategy, presumably aimed at punishing those who deviate from the punisher’s aspired cooperation norm. More interestingly, we show that the effect of punishment on the emergence of cooperation is mainly due to contributors increasing their cooperation, more than free riders being deterred. We also show that the anticipation of being punished is more effective in enhancing cooperation than the actual punishment itself, and that the ratio of strong reciprocators in a given social group is a potent predictor of the group’s level of cooperation and success in providing public goods.
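For reference, the payoff structure of the linear public goods game with costly punishment used in such experiments can be sketched as follows. The endowment, multiplier, and cost/fine values are illustrative, not the authors' experimental parameters:

```python
def public_goods_payoffs(contributions, endowment=20, multiplier=2.0):
    """Standard linear public goods game: each player keeps what they
    do not contribute, and the pooled contributions are multiplied and
    shared equally among all players."""
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [endowment - c + share for c in contributions]

def punish(payoffs, punisher, target, cost=1, fine=3):
    """Costly punishment: the punisher pays `cost` to impose `fine`
    on the target (the cost/fine ratio is illustrative)."""
    out = list(payoffs)
    out[punisher] -= cost
    out[target] -= fine
    return out

# A free rider out-earns full contributors...
print(public_goods_payoffs([20, 20, 20, 0]))   # → [30.0, 30.0, 30.0, 50.0]
# ...although universal cooperation beats universal free riding:
print(public_goods_payoffs([20, 20, 20, 20]))  # → [40.0, 40.0, 40.0, 40.0]
print(public_goods_payoffs([0, 0, 0, 0]))      # → [20.0, 20.0, 20.0, 20.0]
# Costly punishment of the free rider narrows the gap:
print(punish([30.0, 30.0, 30.0, 50.0], punisher=0, target=3))
```

This dilemma structure (free riding dominates individually, cooperation dominates collectively) is what makes punishment, and the anticipation of punishment, decisive for group outcomes.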

**Category:** Economics and Finance

[177] **viXra:1412.0217 [pdf]**
*submitted on 2014-12-22 14:58:01*

**Authors:** E.P.J. de Haas

**Comments:** 17 Pages.

This paper is a sequel to ``Frequency Gauged Clocks on a Free Fall Grid and Some Gravitational Phenomena''. We Doppler boost a de Broglie particle from a free fall grid onto a stationary field of gravity. First we do this for a photon and then for a particle with non-zero rest-mass. This results in an identification of the two Doppler boost options with electron spin or with electron energy double-valuedness. It seems that, within the limitations of our approach to gravity, we have found a bottom-up version of a possible theory of Quantum Gravity, one that connects the de Broglie hypothesis to gravity. This paper realizes the connection between our papers ``Frequency Gauged Clocks on a Free Fall Grid and Some Gravitational Phenomena'' and ``Towards a 4-D Extension of the Quantum Helicity Rotator with a Hyperbolic Rotation Angle of Gravitational Nature''.

**Category:** Quantum Gravity and String Theory

[176] **viXra:1412.0216 [pdf]**
*submitted on 2014-12-22 11:21:32*

**Authors:** Aref Yazdani

**Comments:** 9 Pages. This article was published in Advances in High Energy Physics, Vol. 2014, 349659, 9 pages.

We study a noncommutative theory of gravity in the framework of torsional spacetime. The theory is based on a Lagrangian obtained by applying the technique of dimensional reduction to noncommutative gauge theory; the resulting diffeomorphism-invariant field theory can be made equivalent to a teleparallel formulation of gravity. Field equations are derived in the framework of teleparallel gravity through Weitzenbock geometry. We solve these field equations by considering a mass distributed spherically symmetrically in a stationary static spacetime in order to obtain a noncommutative line element. This new line element interestingly reaffirms the coherent state theory for a noncommutative Schwarzschild black hole. For the first time, we derive the Newtonian gravitational force equation in the commutative relativity framework; this result could make it possible to investigate examples in various topics in quantum and ordinary theories of gravity.

**Category:** Quantum Gravity and String Theory

[175] **viXra:1412.0215 [pdf]**
*submitted on 2014-12-21 20:55:19*

**Authors:** Michael J. Burns

**Comments:** 6 Pages.

This is a presentation style outline of some of my criticisms of the state of academic physics. It is more concrete, and it has a few specific explanations and diagrams.

**Category:** History and Philosophy of Physics

[174] **viXra:1412.0214 [pdf]**
*replaced on 2015-02-04 13:21:36*

**Authors:** Solomon I. Khmelnik

**Comments:** 7 Pages.

Variants of electromagnetic induction are examined. Particular attention is given to the induction caused by a change in the flow of electromagnetic energy. The dependence of this induction's EMF on the flux density of electromagnetic energy and on the parameters of the wire is found. We discuss the mechanism by which an energy flow arises that enters the wire and compensates for the heat loss.

**Category:** Classical Physics

[173] **viXra:1412.0213 [pdf]**
*submitted on 2014-12-20 16:48:16*

**Authors:** E.P.J. de Haas

**Comments:** 16 Pages.

Using frequency gauged clocks on a free fall grid, we look at gravitational phenomena as they appear to observers on a stationary grid in a central field of gravity. With an approach based on Special Relativity, the Weak Equivalence Principle and Newton's gravitational potential, we derive first-order-correct expressions for the gravitational redshift of stationary clocks and of satellites. We also derive first-order-correct expressions for the geodetic precession, the basis of the Shapiro delay, and the gravitational index of refraction, i.e., phenomena connected to the curvature of the metric. Our approach is pragmatic and inherently limited but, due to its simplicity, it might be useful as an intermediate between SR and GR.

**Category:** Relativity and Cosmology

[172] **viXra:1412.0211 [pdf]**
*submitted on 2014-12-20 07:21:38*

**Authors:** Dan Visser

**Comments:** 13 Pages.

The subtitle of this paper is: “An overwhelming amount of evidence is available for the non-existence of the Big Bang”. Instead, a new proposal for the shape and dynamics of the universe is described: an eternally rotating Time-Torus Universe. In it, particles and observers orbit inside the time-density of a sub-time-torus, which is part of the larger Time-Torus Universe; the whole universe rotates forever. This is the Double Torus Universe described in the Double Torus Theory (DTT), which has no beginning or end. Such a universe is full of dark matter in synergy with new dark energy, from which the visible material world as we know it emerges. The power for this synergy is encapsulated in events happening at 2 dimensions of time smaller than the conformal time of the Big Bang Universe. The theory is called ‘double’ because it uses this refined time, which is 2D-time smaller than the conformal Planck-time. General Relativity (GRT) and Quantum Theory (QT) remain valid in the DTT. In the DTT a new force ensures that the torus does not expand without limit as in the Big Bang universe. Observers inside the DTT experience a combination of a quantum Newton-force and a sub-quantum dark-matter force at the edge of any sub-time-torus. This leads to a sometimes shared rotating cosmological horizon in the present. Space is no longer the issue in the DTT; conformal and refined time are. An overview explains, by means of 14 handwritten sheets, how to visualize this mechanism mathematically and dimensionally. The underlying formulations are already described in my former viXra articles [1]; popular information can also be found on my website [a]. Finally, a description is given of the window of visible matter: a new formula, independent of the cosmological constant, that calculates this ‘window’. In addition, several examples of institutional evidence are shown to support the DTT formulations.
My opinion: it is unbelievable how science, anno 2014, wants to hold on to the Big Bang. Forget the Big Bang!

**Category:** Mathematical Physics

[171] **viXra:1412.0210 [pdf]**
*submitted on 2014-12-20 04:36:42*

**Authors:** Subhajit Ganguly

**Comments:** 34 Pages.

Taking abstraction as the starting point, we build a complex, self-organizing fuzzy logic system. Such a system, being built on top of abstraction as the base, turns out to be just a special outcome of the laws of abstraction. As the system is self-organizing, it runs automatically towards optimization. Using such a system in neural networks, we may come as close as possible to the workings of the human brain. The abstract fuzzy optimization is seen to follow a Gaussian distribution.

**Category:** Artificial Intelligence

[170] **viXra:1412.0209 [pdf]**
*submitted on 2014-12-19 16:55:20*

**Authors:** Bill Porter

**Comments:** Pages.

Some cosmological theories, such as many versions of eternal inflation and ΛCDM, involve creation processes which continue indefinitely with no defined termination. Such processes can only occur in a temporally unbounded but finite universe. This requirement imposes serious constraints on many theories, but the issue is often ignored. I propose an eternal steady-state cosmological model in which past- or future-incomplete processes with no defined beginning or end are not permitted. Much well-regarded theory is incompatible with this model; however, there are viable alternatives.

**Category:** Relativity and Cosmology

[169] **viXra:1412.0208 [pdf]**
*submitted on 2014-12-19 19:56:48*

**Authors:** V.B. Smolenskii

**Comments:** 5 Pages.

This article, within the PI-Theory of fundamental physical constants, presents the theoretical basis and experimental confirmation of the author's perspective on the dynamics of the evolution of the Universe and a criterion for the existence of protein life on the planets of the solar system.

**Category:** Relativity and Cosmology

[168] **viXra:1412.0207 [pdf]**
*submitted on 2014-12-19 12:44:53*

**Authors:** George Rajna

**Comments:** 11 Pages.

Patrick Coles, Jedrzej Kaniewski, and Stephanie Wehner made the breakthrough while at the Centre for Quantum Technologies at the National University of Singapore. They found that 'wave-particle duality' is simply the quantum 'uncertainty principle' in disguise, reducing two mysteries to one. [4]
The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron’s spin also, building the Bridge between the Classical and Quantum Theories.
The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry.

**Category:** Quantum Physics

[167] **viXra:1412.0206 [pdf]**
*submitted on 2014-12-19 06:45:24*

**Authors:** N.N.Leonov

**Comments:** 19 Pages. English and russian texts

This paper describes the role that methods of the theory of non-linear oscillations played in gaining a detailed and adequate understanding of the structure of the objects of the material world. Using these methods, simple identification techniques and Mandelstam-Andronov's applied scientific methodology, a stubborn researcher managed to accomplish in thirty years what the entire global physics elite could not achieve in more than a century.

**Category:** Nuclear and Atomic Physics

[166] **viXra:1412.0205 [pdf]**
*submitted on 2014-12-19 07:14:35*

**Authors:** Ștefan Vlăduțescu, Mirela Teodorescu

**Comments:** 7 Pages.

“Communicative Action, Deliberative and Restorative Justice – Socio-juridical perspective on mediational averment” by Antonio Sandu and Elena Unguru, published by TRITONIC in 2014, is a high-level transdisciplinary lesson about transactional justice, restorative justice and the deliberative alternative to classical justice (retributive and distributive). Antonio Sandu is Professor at the "Ştefan cel Mare" University of Suceava and a researcher at the Lumen Centre for Socio-Human Research in Iasi (Romania). His main interests include ethics, bioethics, social assistance and social philosophy. He is the author of five books in Social Philosophy and Applied Ethics, more than 8 articles in scientific journals indexed by Thomson Reuters, and over 20 other scientific articles. Elena Unguru is a researcher in the fields of law, social work, sociology, communication and appreciative inquiry at the Lumen Socio-Human Research Centre in Iasi, Romania. We are led, naturally and professionally, along the sinuous road from conflict to communication, marked by the establishment of the public sphere as a social reality born of the meeting and acceptance of otherness, of individuality asserting itself in public space, postulating the universality of human nature as human rights. Starting from Habermas' idea of communicative power as a form of expression in contemporary society, the author believes that communicative action encodes a power strategy based on consensus. Power is soft and seductive, and mediates linguistic and cultural relations. The theme chosen by the authors analyzes a communication mediation model based on the values of social justice, equity and charity, assuming an exercise in the integration of otherness, in perceiving the other as a partner.

**Category:** Social Science

[165] **viXra:1412.0204 [pdf]**
*submitted on 2014-12-19 07:21:37*

**Authors:** Mirela Teodorescu, Adrian Nicolescu, Jozef Novak-Marcincin

**Comments:** 5 Pages.

Information from Theory towards Science, Professor Stefan Vlăduţescu's book from the University of Craiova, is a confirmation of the author's high level of intelligence and cognitive propensity. From various semiotic materials (words, images, gestures, drawings, etc.), following certain principles, under different procedures (operations, actions, movements, maneuvers, mechanisms, strategies), using means (languages, codes, subcodes) and specific tools (knowledge, concepts, categories) adapted between earth (with autocorrection by feedback) and firmament (as anticipation by feed-forward), rises an imposing edifice, a cognitive construction: this is information. It has a systemic and procedural character and is organized along four coordinates: metric, semantic, structural and pragmatic.

**Category:** General Science and Philosophy

[164] **viXra:1412.0203 [pdf]**
*submitted on 2014-12-19 04:10:22*

**Authors:** George Rajna

**Comments:** 8 Pages.

Physicists’ greatest hope for 2015, then, is that one of these experiments will show where Einstein got off track, so someone else can jump in and get closer to his long-sought “theory of everything.”
This article is part of our annual "Year In Ideas" package, which looks forward to the most important science stories we can expect in the coming year. It was originally published in the January 2015 issue of Popular Science. [4]
The self maintained electric potential of the accelerating charges equivalent with the General Relativity space-time curvature, and since it is true on the quantum level also, gives the base of the Quantum Gravity.
The magnetic induction creates a negative electric field, causing an electromagnetic inertia responsible for the relativistic mass change; it is the mysterious Higgs Field giving mass to the particles.
The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate by the diffraction patterns. The accelerating charges explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the wave particle duality and the electron’s spin also, building the bridge between the Classical and Relativistic Quantum Theories.

**Category:** Relativity and Cosmology

[163] **viXra:1412.0202 [pdf]**
*submitted on 2014-12-18 20:25:59*

**Authors:** VT Padmanabhan, R Ramesh, Joseph Makkolil

**Comments:** 6 Pages.

Finland’s parliament has recently approved a joint venture with Russia to build a VVER 1200 MWe pressurized water reactor, design AES-2006, which ‘complies with the IAEA and EUR requirements’ of a generation-III (Gen-III) design. The AES-2006 design has not undergone the Gen-III assessment process. Its parent design, AES-92, which was certified as Gen-III in 2007, is missing from the genealogy of AES-2006 given in a presentation by the vendor, Rosatom. We propose that there are strong reasons to believe that this disappearance is due to the dismal performance of the AES-92 reactors at Kudankulam (KK) in India and the controversy surrounding their Gen-III certification. The KK reactor took about 12 years to construct and failed its commissioning tests seven times. AES-92 received Gen-III certification in 2007 on the basis of fictitious and fabricated data. Rosatom's practice of selling an un-assessed design as Gen-III compliant has implications for nuclear safety globally, as this design is being considered in many countries such as Belarus, Bulgaria, Finland, South Africa, Bangladesh and Vietnam. In this article, we chart the genealogy of the AES-2006 reactor, the history of the EUR (European Utility Requirement) certification process of AES-92, the performance of the real AES-92 reactor at the Kudankulam Nuclear Power Plant (KKNPP) in India, and the attributes of its predecessor, the AES-91 reactor, under operation in China since 2007. We demonstrate that the reason for the deletion of the AES-92 design from the AES-2006 genealogy is the real-world under-performance of the only AES-92 reactor, at KKNPP.

**Category:** Nuclear and Atomic Physics

[162] **viXra:1412.0201 [pdf]**
*replaced on 2015-01-27 21:14:44*

**Authors:** Karan Doshi

**Comments:** 11 Pages.

In this paper the author submits a proof using the Power Set relation for the existence of a transfinite cardinal strictly smaller than Aleph Zero, the cardinality of the Naturals. Further, it can be established taking these arguments to their logical conclusion that even smaller transfinite cardinals exist. In addition, as a lemma using these new found and revolutionary concepts, the author conjectures that some outstanding unresolved problems in number theory can be brought to heel. Specifically, a proof of the twin prime conjecture is given.

**Category:** Set Theory and Logic

[161] **viXra:1412.0200 [pdf]**
*replaced on 2014-12-20 07:56:36*

**Authors:** Hasmukh K. Tank

**Comments:** A seven-page letter

This paper raises some questions regarding the general theory of relativity, such as: (i) the theory predicts ‘expansion of space’ between the galaxies, but the space within a galaxy is not expanding, because a galaxy is a gravitationally-bound structure. The question raised here is: if so, then what happens at the edge of a galaxy whose external space is expanding but whose interior space is not? Is there a smooth transition from expanding to non-expanding space? And what happens to the cosmologically red-shifted inter-galactic photons when they enter our Milky Way galaxy, passing from expanding outer space to the less-and-less expanding space within our galaxy? (ii) According to general relativity the planets, like the earth, orbit around the Sun because the space around the Sun is curved, and the planets are in inertial motion traveling along geodesic paths. The question raised here is: inertial motion of a body can be at any speed. Can the planets travel along the geodesic path at any speed they like? Can they take a coffee-break and then proceed further? (iii) According to general relativity there is a distance at which the rate of expansion of space equals the speed of light; and the speed of light is always the same, 3 x 10^8 meters per second. The question raised here is: since the speed of light is the same in expanding as well as non-expanding space, and f·λ = c, i.e. the product of frequency (f) and wavelength (λ) is always equal to the speed of light (c), the wavelength (λ) can increase only when the frequency (f) gets reduced, not because of expansion of space. In the second part of the paper it is shown that the reduction in energy of ‘cosmologically red-shifting photons’ is strikingly equal to (G me mp / e^2) times the reduction in electrostatic potential energy of an electron at the same distance D.
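The dimensionless constant G me mp / e^2 appearing in the second part is, in SI form G·me·mp/(k·e²), the familiar ratio of gravitational to electrostatic attraction between an electron and a proton, about 4.4 × 10^-40. A quick check with rounded standard values (my illustration, not the author's calculation):

```python
# Rounded CODATA-style SI values
G  = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
me = 9.109e-31    # electron mass, kg
mp = 1.673e-27    # proton mass, kg
e  = 1.602e-19    # elementary charge, C
k  = 8.988e9      # Coulomb constant 1/(4*pi*eps0), N m^2 C^-2

# Gravitational vs. electrostatic attraction between an electron and
# a proton; the common 1/r^2 factor cancels, leaving a pure number.
ratio = (G * me * mp) / (k * e**2)
print(f"{ratio:.3e}")  # ~ 4.4e-40
```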

**Category:** Relativity and Cosmology

[160] **viXra:1412.0199 [pdf]**
*replaced on 2016-01-13 12:00:58*

**Authors:** Sylwester Kornowski

**Comments:** 3 Pages.

According to the Scale-Symmetric Theory (SST), the very early Universe was a double loop composed of protogalaxies built of neutron black holes. Due to the succeeding inflows of dark matter and dark energy, the double loop exited the black-hole state and transformed into an expanding sphere. The correct interpretation of the Michelson-Morley experiment leads to the conclusion that we cannot see directly the first stage of the evolution of the early Universe. But due to some phenomena, the early binary systems and clusters of protogalaxies are imprinted on the CMB. Due to the mergers of the protogalaxies during the unseen period of evolution, the number of galaxy clusters observed today should be smaller than follows from the CMB, and this conclusion is consistent with the observational facts. Here we answer the following question: why can we not see directly the evolution of the early Universe, whereas we can see the CMB with the encoded initial number of clusters of protogalaxies?

**Category:** Quantum Gravity and String Theory

[159] **viXra:1412.0198 [pdf]**
*submitted on 2014-12-18 11:36:00*

**Authors:** Leonov N.N.

**Comments:** 10 Pages. English and russian texts

The paper describes a schematic structure of UFO photon propulsion system and explains how the structure is used for generating electric current.

**Category:** Nuclear and Atomic Physics

[158] **viXra:1412.0197 [pdf]**
*replaced on 2015-01-25 00:05:57*

**Authors:** Yu.A. Spirichev

**Comments:** 20 Pages. In Russian.

The article is devoted to the development of the classical theory of electromagnetic fields (EMF) and the elimination of "white spots" in the theory of electromagnetic energy and electromagnetic forces. On the basis of the axioms of a four-dimensional vector potential and current density, the deductive method yields the 4-tensors and the equations of conservation of electromagnetic energy for the system "EMF – 4-current density". A third-rank 4-tensor of electromagnetic forces is obtained, whose 64 components comprise all kinds of static, stationary and dynamic electromagnetic forces, including the forces of Coulomb, Ampere (Lorentz) and Solunin-Nikolaev. The equations of balance of all the electromagnetic forces are derived, as are the 4-tensors and the equations of self-consistent motion of electric charges, including the wave equation of plasma turbulence. From the 4-tensor of electromagnetic energy, the 4-tensor of mechanical energy-momentum, the equation of conservation of mechanical energy-momentum, and the canonical relativistic Lagrangian for a free particle are obtained, thereby linking electrodynamics and mechanics. The present work is a theoretical foundation on which the electrodynamics of material media and plasma physics can be supplemented and refined.

**Category:** Classical Physics

[157] **viXra:1412.0195 [pdf]**
*replaced on 2015-03-24 17:26:46*

**Authors:** Michael John Sarnowski

**Comments:** 6 Pages.

This paper shows, building on other papers by Michael John Sarnowski, that the Universe is potentially composed of smaller nested spheres: Planck Spinning Spheres, composed of smaller spheres called Kaluza Spinning Spheres, composed in turn of smaller spheres called Klein Spinning Spheres, and so on until we reach a basic cuboctahedron structure.
In “How can the Particles and Universe be Modeled as a Hollow Sphere” [1] it was shown that discontinuities of packing could account for the mass of the Universe. In “New Evidence for the Eddington Number, the Large Number Hypothesis, and the Number of Particles in the Universe” [2] it was shown that the number of particles of mass in the Hubble Sphere Universe is related to the number of particles of mass in the Planck Spinning Sphere. This relationship, if extended all the way down through the layers of the universe, yields the basic unit of the universe, a unit cuboctahedron. This paper goes through the math to show how this is possible.

**Category:** Quantum Gravity and String Theory

[156] **viXra:1412.0194 [pdf]**
*submitted on 2014-12-17 19:54:59*

**Authors:** Carlos Oscar Rodríguez Leal

**Comments:** 35 Pages. The paper is written in Spanish. In the future I will translate it into English together with Part II.

This paper generalizes the six basic arithmetic operations in terms of the n-th operation, the n-operation (where n∈ℤ), and many interesting properties of the n-operations are discovered. These new operations are also applied to calculus, thereby developing many new laws in modern mathematics (derivatives, integrals, differential equations).

**Category:** General Mathematics

[155] **viXra:1412.0192 [pdf]**
*submitted on 2014-12-17 09:25:38*

**Authors:** Mirela Teodorescu, Petre Bosun, Bianca Teodorescu

**Comments:** 12 Pages.

The global economy is about to exceed the Earth's limits of endurance, pushing civilization in the early phase of the XXI century closer to a possible crash than ever. In our concern for quarterly profit growth from one year to the next, we lose sight of the magnitude of human activity in relation to the earth's resources. A century ago the annual growth of the global economy was measured in billions; today it is measured in trillions. As a result, renewable resources are being consumed faster than they can regenerate. Forests are shrinking, grasslands are deteriorating, water tables are falling, fisheries are collapsing and soils are eroding. Oil is being depleted at a pace that leaves little time for planning what will come after its peak. And we unload into the atmosphere greenhouse gases faster than nature can absorb them, thus reaching a stage where the increase in the earth's temperature is significantly higher than it has ever been since the beginning of agriculture (Hawkins, 2006). The civilization of this XXI century is not the first to follow the path of an economy that cannot be sustained by the environment. Many previous civilizations ran into trouble with their environment. As Jared Diamond (2005) noted, some could change course and avoid economic decline; others could not.
The point is that the world today is in what ecologists call “overshoot-and-collapse” mode. In the past, demand has many times exceeded the sustainable production of natural systems locally. In this context, major companies are seeking ways to develop projects that stop the degradation of the planet and then restore its resources. One of the large projects is CSR, acting in areas such as environmental change, resources, waste, social change, pollution, energy, eco-efficiency, water, fauna, forests, fossil fuels, contaminants, GHG (greenhouse gases), GM (genetically modified organisms) and desertification, to name only a few objectives. It is also the task of governments and NGOs to create structures that facilitate and control this activity.

**Category:** Linguistics

[154] **viXra:1412.0191 [pdf]**
*replaced on 2015-01-06 15:49:59*

**Authors:** Ștefan Vlăduțescu

**Comments:** 8 Pages.

The study identifies, supports and explores some of the uncertainties surrounding the core of the concept of the communicative instance. It examines the concept's articulations and tries to articulate it as a useful tool for understanding, describing and explaining the phenomenon of communication. The method is meta-analytical. The main conclusion that emerges, once the halo of doubt and uncertainty is drained away, is the principle of the communicative instance: within each communicative event there occurs and remains an instance of communication, understood as a mutual computational and decisional human device for the co-building, co-organization, co-implementation and co-management of communication processes.

**Category:** Social Science

[153] **viXra:1412.0190 [pdf]**
*submitted on 2014-12-17 11:24:55*

**Authors:** Grzegorz Ileczko

**Comments:** 59 Pages. eBook (Part 1)

This book is composed of two parts. Both parts combine into a single theory, but for easier description of the phenomena they are presented independently of each other. The book is an attempt to show the Theory of Relativity in a “different” light; that is, so to speak, physics without relativism. Each experiment described in the book comprises visual, mathematical and numerical analyses. All possible cases of positioning the light source on board a very fast vehicle are analysed. The conclusions are indeed surprising.
Part_1
Two experiments, designed to find and precisely define absolute vehicle velocity, are described. The condition of a total lack of contact between the vehicle's interior and the outside world is fulfilled in both cases. According to the Theory of Relativity no such experiment can exist and the vehicle's absolute velocity cannot be determined.

**Category:** Relativity and Cosmology

[152] **viXra:1412.0189 [pdf]**
*submitted on 2014-12-17 11:33:01*

**Authors:** Grzegorz Ileczko

**Comments:** 64 Pages. eBook (Part 2)

This book is composed of two parts. Both parts combine into a single theory, but for easier description of the phenomena they are presented independently of each other. The book is an attempt to show the Theory of Relativity in a “different” light; that is, so to speak, physics without relativism. Each experiment described in the book comprises visual, mathematical and numerical analyses. All possible cases of positioning the light source on board a very fast vehicle are analysed. The conclusions are indeed surprising.
Part_2
This is a modified version of the well-known light-clock experiment, improved relative to the original: the optical clock is replaced with a laser. The laser beam may leave the laser's interior, thus becoming observable (not just in theory). Experiment L is designed as a “broad” angular analysis: various laser positions on board the vehicle are thoroughly examined, and one can literally say that the laser's beam is analysed from every angle. As a result of the analyses performed, the “true” nature of time is discovered, and a mathematical proof that time has an absolute nature is presented.

**Category:** Relativity and Cosmology

[151] **viXra:1412.0188 [pdf]**
*submitted on 2014-12-17 07:16:00*

**Authors:** N.N. Leonov

**Comments:** 11 Pages. English and russian texts

The study focuses on one possible scenario of the contemporary Universe’s formation, in which the Universe originates as a “black hole”.

**Category:** Nuclear and Atomic Physics

[150] **viXra:1412.0187 [pdf]**
*submitted on 2014-12-17 09:11:07*

**Authors:** George Rajna

**Comments:** 10 Pages.

In a paper published in Physical Review D, the researchers suggest that scientists at CERN's Large Hadron Collider (LHC), where the Higgs was discovered, look for a specific kind of Higgs decay when the collider starts up again in 2015. The details of that decay could tell them whether or not the Higgs has a say in the matter-antimatter imbalance. [7]
The magnetic induction creates a negative electric field, causing an electromagnetic inertia responsible for the relativistic mass change; this is the mysterious Higgs Field giving mass to the particles. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass ratio by the diffraction patterns. The accelerating charges explain not only the Maxwell Equations and Special Relativity, but also the Heisenberg Uncertainty Relation, wave-particle duality and the electron’s spin, building the bridge between the Classical and Relativistic Quantum Theories. The self-maintained electric potential of the accelerating charges is equivalent to the space-time curvature of General Relativity, and since this holds on the quantum level as well, it provides the basis of Quantum Gravity. The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, giving it as a natural part of the relativistic quantum theory.

**Category:** High Energy Particle Physics

[149] **viXra:1412.0186 [pdf]**
*submitted on 2014-12-16 06:23:13*

**Authors:** David Brown

**Comments:** 3 Pages.

Based upon the space roar and speculative ideas of Fredkin and Wolfram, I have suggested that the Big Bang recurs every (81.6 ± 1.7) billion years. (81.6 billion years)/(Planck time) = 4.773 * 10^61, where (Fredkin-Wolfram constant) * (Planck time) is approximately 81.6 billion years. G * ((Planck mass)/(Planck length)^2) / (1.165 * 10^-10 meter/second^2) = 4.773 * 10^61, where G is Newton’s gravitational constant. Note that in Milgrom’s MOND (MOdified Newtonian Dynamics) the fundamental MOND acceleration constant a0 is 1.2 * 10^-10 m/s^2. Newtonian-Einstein string theory needs to be replaced by Milgromian string theory. Why? There are two basic alternatives: Alternative (1), Milgrom is the greatest astrophysicist of his generation; Alternative (2), Milgrom’s acceleration law is empirically wrong. Alternative (2) is not a real possibility, because if MOND were wrong then Milgrom could never have convinced McGaugh and Kroupa. This brief communication considers ideas concerning Milgromian string theory and Wolframian string theory.
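As a sanity check, both quoted ratios can be recomputed directly. The Planck-time, Planck-mass and Planck-length values below are standard reference figures assumed here, not values taken from the paper:

```python
# Sanity check of the two quoted ratios from the abstract.
year_s = 365.25 * 24 * 3600            # seconds in a Julian year
t_planck = 5.391e-44                   # Planck time, s (reference value)
ratio_time = (81.6e9 * year_s) / t_planck

G = 6.674e-11                          # m^3 kg^-1 s^-2
m_planck = 2.176e-8                    # Planck mass, kg
l_planck = 1.616e-35                   # Planck length, m
a_fw = 1.165e-10                       # m/s^2, acceleration from the abstract
ratio_accel = G * (m_planck / l_planck**2) / a_fw

print(f"{ratio_time:.3e} {ratio_accel:.3e}")   # both come out near 4.77e61
```

Both expressions indeed land near 4.77 * 10^61, so the abstract's arithmetic is internally consistent.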

**Category:** Quantum Gravity and String Theory

[148] **viXra:1412.0185 [pdf]**
*submitted on 2014-12-16 06:53:46*

**Authors:** Aloysius Sebastian

**Comments:** 6 Pages.

Our present concept of gravity is related to the mass and distance of the object, and we are still following the Newtonian concepts of gravity to measure the gravity of objects. Albert Einstein did great work on the concept of gravity after Newton, but we are still trying to learn more about it. In my view, gravity is a property of energy, and one has to explain gravity on the basis of energy. I agree that mass is also a form of energy, but I prefer to find gravity in accordance with the total energy of a system. Here I am making an attempt in this field, and I am sure that I am on the right track. I am not introducing any new equations here; I am using the existing equations of physics to find gravity through my reasoning. The way I use these equations may seem strange: sometimes I think in reverse order. First I find gravity from the concept of energy; for that I apply a constant “A”, and I obtain the value of “A” from the last part of the equations, because the value of gravity still rests on Newtonian concepts.

**Category:** Quantum Gravity and String Theory

[147] **viXra:1412.0184 [pdf]**
*replaced on 2016-01-13 11:50:22*

**Authors:** Sylwester Kornowski

**Comments:** 3 Pages.

Ludwig et al. (2009) derived solar ages from 1.7 to 22.3 Gyr. The applied Th/Eu ratio is most credible, but the upper limit of the obtained interval is inconsistent with the mainstream-cosmology age of the Universe, about 13.8 Gyr. The upper limit is very close to the age of the Universe obtained within the Scale-Symmetric Theory (SST), about 21.614 +- 0.096 Gyr. G. Hasinger et al. (2002) found that the Fe/O abundance in a high-redshift quasar is significantly higher than solar (Fe/O = 2 - 5). This result as well is inconsistent with mainstream cosmology, and suggests that the age of the Universe is greater than assumed or that there is an unknown mechanism for the production of iron in the very early Universe. The incorrect age of the expanding Universe follows from the fact that the front of the CMB has radial speed equal to the speed of light, c, whereas the front of the baryonic matter, dark matter and dark energy (the dark matter of the Universe is entangled with the baryonic matter) has radial speed equal to 0.6415c, i.e. the most distant galaxies are already 7.75 Gyr old. The time distance to the most distant galaxies calculated within SST is 13.866 +- 0.096 Gyr. Due to the cascade protuberances of the dark matter at the beginning of the expansion of the Universe, protogalaxies appeared with redshift higher than z = 1, but such protuberances were very quickly damped. Due to the last-scattering spheres, we can see the spectra of “superluminal” galaxies. In reality, the era of quasars lasted about 10 Gyr, but we can see only the end of this era, i.e. the last 2.5 Gyr.

**Category:** Quantum Gravity and String Theory

[146] **viXra:1412.0183 [pdf]**
*replaced on 2014-12-19 19:39:41*

**Authors:** ChengGang.Zhang

**Comments:** 5 Pages.

This paper considers that microscopic particles do not have a wave nature; diffraction experiments with microscopic particles should instead indirectly and objectively reflect the existence of a force that produces the particle’s diffraction phenomenon. This force belongs to a deeper theory underlying quantum mechanics, and it is argued here that it is related to the electrostatic force.

**Category:** Quantum Physics

[145] **viXra:1412.0181 [pdf]**
*replaced on 2015-01-09 08:42:42*

**Authors:** A. Antipin

**Comments:** 4 Pages. This article is in RUSSIAN. The English version of the article: http://vixra.org/abs/1501.0106

Combinatorial formulas describing random walks on a simple cubic grid are obtained.
For the 2-dimensional case the formula is exact and simple.
For the 3-dimensional case it is exact but, unfortunately, not compact.
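The paper's own formulas are not quoted in this abstract. As an illustration of the kind of exact 2-dimensional result involved, the classical identity that the number of 2n-step walks on the square grid returning to the origin equals C(2n, n)² can be confirmed by brute-force enumeration for small n (this is the standard identity, not the author's formula):

```python
from itertools import product
from math import comb

def closed_walks_2d(steps):
    # brute force: count walks on the square lattice that end at the origin
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    count = 0
    for walk in product(moves, repeat=steps):
        if sum(dx for dx, _ in walk) == 0 and sum(dy for _, dy in walk) == 0:
            count += 1
    return count

for n in range(1, 4):
    # exact closed form: C(2n, n)^2 walks of length 2n return to the origin
    print(2 * n, closed_walks_2d(2 * n), comb(2 * n, n) ** 2)
```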

**Category:** Combinatorics and Graph Theory

[144] **viXra:1412.0180 [pdf]**
*submitted on 2014-12-15 09:18:25*

**Authors:** A.Antipin

**Comments:** 8 Pages. Russian-language version. The English version: http://vixra.org/abs/1408.0023

In the course of searching for objective (computational) methods of analysing and forecasting stock-exchange price movements, the fundamental role of the parameter “the maximum deviation of the price from the Open at the current moment of the session” (named the “Base”) was discovered.
Using the Base, a session can be represented as an object with a developed geometrical structure. The structure’s elements are localised in space fairly precisely, and the structure as a whole keeps its configuration for at least several months.
The object is a set of attractors towards which the price is drawn, together with “avoidance areas” for the price.
The object as a whole is four-dimensional: it is represented in three “spatial” variables plus colour coding. The variables are: the transaction price, the Base, the current position within the session, and the probability that the transaction occurs before the end of the session. All variables are unambiguously numerical and thus strictly objective. The spatial variables set the conditions for a transaction; the colour coding shows the transaction’s probability.
The article gives the algorithm for constructing the object and describes the basic idea of the technology’s practical use.
At present the technology works within the current session, i.e. it is so far applicable to day trading. Using the technology to observe the market has shown a high degree of agreement with reality, and has made it possible to propose a qualitative model of price movement.

**Category:** Economics and Finance

[143] **viXra:1412.0179 [pdf]**
*submitted on 2014-12-15 04:04:18*

**Authors:** Francis M. Sanchez

**Comments:** 2 Pages.

It is shown how Occam’s Razor favors the one-parameter Coherent Cosmology over the six-parameter ΛCDM model, exhibiting dramatic faults in the development of Modern Cosmology. This confirms that the Cosmical Immergence, which deduces the atomic dimension from an elementary cosmical calculation, must be taken seriously. The universality of Intelligent Life follows immediately.

**Category:** History and Philosophy of Physics

[142] **viXra:1412.0178 [pdf]**
*submitted on 2014-12-15 05:30:02*

**Authors:** Grzegorz Ileczko

**Comments:** 13 Pages.

The Riemann hypothesis has remained unproven for more than 150 years. In this paper I present a new approach to the problem. I found a new trigonometric form of Riemann’s zeta function for negative numbers (n). This new form of zeta provides an opportunity to prove the Riemann hypothesis, and the presented proof is not complicated for the trigonometric form of the zeta function.
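The paper's new trigonometric form is not reproduced in this abstract. For context, the classical functional equation already expresses the zeta function at negative arguments through a trigonometric factor; the sketch below evaluates that standard result (not the author's new form) numerically:

```python
import math

def zeta_series(s, terms=200_000):
    # partial sum of the Dirichlet series; adequate for s >= 2
    return sum(n ** -s for n in range(1, terms + 1))

def zeta_via_functional_eq(s):
    # classical functional equation with its trigonometric factor:
    # zeta(s) = 2^s * pi^(s-1) * sin(pi*s/2) * Gamma(1-s) * zeta(1-s)
    return (2 ** s * math.pi ** (s - 1) * math.sin(math.pi * s / 2)
            * math.gamma(1 - s) * zeta_series(1 - s))

print(zeta_via_functional_eq(-1))   # approx -1/12
print(zeta_via_functional_eq(-3))   # approx  1/120
```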

**Category:** Number Theory

[141] **viXra:1412.0177 [pdf]**
*submitted on 2014-12-15 05:33:15*

**Authors:** Omer Dickstein

**Comments:** 2 Pages.

A discussion of the limits of no-communication theorems.

**Category:** Quantum Physics

[140] **viXra:1412.0176 [pdf]**
*submitted on 2014-12-15 05:33:53*

**Authors:** Grzegorz Ileczko

**Comments:** 13 Pages.

This article demonstrates a general solution for problems of the class (P vs. NP), particularly for problems of the class (P=NP). The presented solution is quite simple and can be applied in many areas of science. In general, (P=NP) is a class of problems of an algorithmic nature; the algorithms should contain one or more logical operations, such as an (if...then) instruction or Boolean operations. A proof of this thesis, with a new formula, is presented, along with one example of a (P=NP) problem. There exist many problems for which P-class problems are equivalent to NP problems (P=NP); millions, I think.
For example, I discovered an extremely effective algorithm for the “Hamiltonian Path Problem”. The algorithm can find the solution for 100 cities in a very short time; the solution time on an old laptop is less than two seconds. A classical solution for that problem exists, but it is extremely difficult and its computing time is huge. The algorithm for the Hamiltonian problem will be presented in a separate article (it needs more space).
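The author's algorithm is not given in this abstract. For reference, the classical exact approach to the Hamiltonian Path Problem that the abstract contrasts itself with is backtracking search, exponential in the worst case; a minimal sketch on a small hypothetical graph:

```python
def hamiltonian_path(adj):
    # exhaustive backtracking: returns a path visiting every vertex exactly
    # once, or None if no Hamiltonian path exists
    n = len(adj)

    def extend(path, visited):
        if len(path) == n:
            return path
        for nxt in adj[path[-1]]:
            if nxt not in visited:
                found = extend(path + [nxt], visited | {nxt})
                if found:
                    return found
        return None

    for start in adj:
        found = extend([start], {start})
        if found:
            return found
    return None

# hypothetical 4-vertex cycle graph: a Hamiltonian path exists
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(hamiltonian_path(g))
```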

**Category:** Data Structures and Algorithms

[139] **viXra:1412.0175 [pdf]**
*submitted on 2014-12-15 02:58:50*

**Authors:** John Frederick Sweeney

**Comments:** 47 Pages. Some diagrams

The Tai Xuan Jing (T’ai Hsuan Ching) or Classic of the Great Mystery, although attributed to Chinese writers and preserved in China for two thousand years, probably originated in Vedic India. The Tai Xuan Jing pertains to the dynamic Raja form of 9 x 9 matter, which is a fundamental aspect of Vedic Physics. This paper argues for Vedic origins of the Tai Xuan Jing and provides supporting material for that argument, based on the mathematical argument proposed by Frank “Tony” Smith on his website.

**Category:** Mathematical Physics

[138] **viXra:1412.0174 [pdf]**
*replaced on 2015-06-18 14:32:31*

**Authors:** Zhiliang Cao, Henry Gu Cao, Wenan Qiang

**Comments:** 6 Pages.

The paper "Unified Field Theory and the Configuration of Particles" opened a new chapter of physics. One of the predictions of that paper is that a proton has an octahedron shape. As physics progresses, it focuses more on invisible particles and the unreachable grand universe, while visible matter is studied theoretically and experimentally. The shape of the invisible proton has a great impact on the topology of the atom. Electron orbits, electron binding energy, the Madelung rules, and Zeeman splitting are associated with the proton’s octahedron shape and its three nuclear structural axes. An element is chemically stable if the outermost s and p clouds have eight electrons, which makes the atom a symmetrical cube.

**Category:** Nuclear and Atomic Physics

[137] **viXra:1412.0173 [pdf]**
*submitted on 2014-12-13 12:16:49*

**Authors:** VT Padmanabhan, R Ramesh, Joseph Makkolil

**Comments:** 10 Pages.

Finland’s parliament has recently approved a joint venture with Russia to build in northern Finland a VVER 1200 MWe pressurized water reactor, design AES-2006, which according to the vendor ‘complies with the IAEA and EUR requirements’ of a generation-III (Gen-III) reactor. However, design AES-92, which was certified as Gen-III in 2007, is missing from the genealogy of AES-2006 given in a presentation by the vendor, Rosatom. We propose that there are strong reasons to believe that this disappearance is due to the dismal performance of the AES-92 reactors at Kudankulam (KK) in India. KK Reactor I took about 12 years to construct and failed its commissioning tests seven times. AES-92 received Gen-III certification in 2007 on the basis of fictitious and fabricated data. Since neither AES-2006 nor any other design in the pedigree has undergone the Gen-III compliance test, the vendor’s claims are contentious. Considering the substandard performance of the AES-92 reactor at Kudankulam since its grid connection in October 2013, its Gen-III certification needs to be canceled. The vendor must also prove the Gen-III compliance of AES-2006 before going ahead with the ongoing deals in India, Finland, Vietnam and Bulgaria.

**Category:** Nuclear and Atomic Physics

[136] **viXra:1412.0172 [pdf]**
*submitted on 2014-12-13 08:55:27*

**Authors:** George Rajna

**Comments:** 14 Pages.

Researchers may have uncovered a way to observe dark matter thanks to a discovery involving X-ray emissions. [13]
Between 2009 and 2013, the Planck satellite observed relic radiation, sometimes called cosmic microwave background (CMB) radiation. Today, with a full analysis of the data, the quality of the map is now such that the imprints left by dark matter and relic neutrinos are clearly visible. [12]
The gravitational force attracts matter, concentrating it in a small space and leaving much space with low matter concentration: dark matter and dark energy.
There is an asymmetry between the masses of the electric charges, for example the proton and the electron, that can be understood by the asymmetrical Planck Distribution Law. This temperature-dependent energy distribution is asymmetric around the maximum intensity, where the annihilation of matter and antimatter is a high-probability event. The asymmetric sides create different frequencies of electromagnetic radiation at the same intensity level, compensating each other. One of these compensating ratios is the electron-proton mass ratio. The lower-energy side has no compensating intensity level; it is the dark energy, and the corresponding matter is the dark matter.
The Weak Interaction changes the temperature-dependent Planck Distribution of the electromagnetic oscillations and changes the non-compensated dark matter rate, giving the responsibility to the sterile neutrino.

**Category:** Astrophysics

[135] **viXra:1412.0171 [pdf]**
*submitted on 2014-12-13 09:07:49*

**Authors:** Radwan M. Kassir

**Comments:** 3 pages

An original thought experiment “interlinking” time between relatively moving frames through belt-drive clocks offers concrete evidence of the unviability of the time dilation predicted by Special Relativity. It also shows that the Special Relativity length contraction gives contradictory time results.

**Category:** Relativity and Cosmology

[134] **viXra:1412.0170 [pdf]**
*submitted on 2014-12-12 13:27:46*

**Authors:** Alfredo G. Oliveira

**Comments:** 25 pages, 10 figures

The first model of the past hot Earth’s climate consistently indicated by isotopic and biologic data is established here. This model, named the Evolving Climate Model (ECM), accurately matches a 3 Gy long compilation of isotope ^{18}O data. An important consequence of the model is the fast increase of the atmospheric oxygen level between 2 and 1 Ga (Gy ago); this is a well-known but until now mysterious occurrence, the Great Oxygenation Event. A solution is presented for the two-centuries-old “dolomite problem”, and new explanations arise for a number of long-standing problems, such as the origin of petroleum or of proto-continents. Unlike the usual climate scenarios, the ECM presents ideal conditions for the massive production of long organic molecules. Critical occurrences of life evolution, such as the Cambrian explosion, are explained and fitted by the ECM, exposing a previously unknown connection between the evolution of life and climate. The most likely cause for this hot past is the expansion of orbits; it is verified that this phenomenon can explain the ECM, the receding of the Moon and the water on early Mars for the same value of *H*_{0} = 48 km s^{-1} Mpc^{-1}, which, if not a coincidence, is a non-negligible result.

**Category:** Relativity and Cosmology

[133] **viXra:1412.0167 [pdf]**
*submitted on 2014-12-11 18:54:14*

**Authors:** Richard A Peters

**Comments:** 9 Pages.

The geometric interpretation of time dilation concludes that the geometric relation between moving objects is the cause of time dilation between the objects. The thesis of this paper is that the imposing edifice of the geometric interpretation rests on a flawed foundation. That foundation denies the existence of any fundamental (natural, physical, real) frame of reference for motion. Therefore the position, velocity and acceleration of an object may be reckoned to an arbitrary frame of reference. If space has no properties other than dimensionality, motion relative to that space is undefined and meaningless and can have no influence on any ongoing process. Accordingly I propose a model in which a field of particles occupies and permeates all of space, including the space of atoms. In this model the phenomenon of time dilation demands the existence of a field that supports the propagation of photons. I label this field the temporal-inertial field (TI field). Time dilation occurs when an ongoing process moves relative to space, relative to this TI field. The greater the velocity of the process relative to the TI field the greater is the time dilation experienced by that process. The rate at which a process is slowed or accelerated is intrinsic, absolute and depends solely on the velocity of the process relative to the TI field.

**Category:** Relativity and Cosmology

[132] **viXra:1412.0166 [pdf]**
*submitted on 2014-12-11 22:19:56*

**Authors:** James Conor O'Brien

**Comments:** 111 Pages. A blog itemizing excerpts from this paper is viewable at http://revolutionmatter.blogspot.com.au/2014/07/an-introduction-to-revolution-of-matter.html

It is proposed, in accordance with the Feynman-Stückelberg Interpretation, that out of the quantum vacuum the anti-particles of virtual pairs travel backwards along the axis of Time to reflect off the boundaries of a quantum potential of a scalar field at the start of the universe. The scalar field is subject to quantum fluctuations that adiabatically shift the boundaries of the potential, which acts as the moving mirror of the Dynamical Casimir effect, allowing the production of matter-antimatter pairs. A theorem is proposed that, via a mechanism founded in the Wick Rotation, the virtual particles undergo a quantal adiabatic, geometric phase reflection; as a consequence of the Pauli Exclusion principle, this shift in phase nonholonomically conflates virtual particles under Lorentz Invariance into real particles. It is proposed that this model is consistent with the Hartle-Hawking state and leads directly to Guth's Inflationary model; a mechanism for a modified gravitational field (MOND) is given; and finally the results are shown to be consistent with the Sakharov conditions for the Big Bang.

**Category:** Astrophysics

[131] **viXra:1412.0165 [pdf]**
*submitted on 2014-12-11 23:12:22*

**Authors:** Amir H. Fatollahi

**Comments:** 12 pages, 2 figs, 1 table

The energy spectrum of two 0-branes for fixed angular momentum in 2+1 dimensions is calculated by the Rayleigh-Ritz method. The basis function used for each angular momentum consists of 80 eigenstates of the harmonic oscillator problem on the corresponding space. It is seen that the spectrum exhibits a definite linear Regge trajectory behavior. It is argued how this behavior, together with other pieces of evidence, suggests the picture by which the bound-states of quarks and QCD-strings are governed by the quantum mechanics of matrix coordinates.

**Category:** High Energy Particle Physics

[130] **viXra:1412.0164 [pdf]**
*submitted on 2014-12-11 15:54:12*

**Authors:** Stephen Marshall

**Comments:** 5 Pages.

This paper presents a complete rebuttal of the paper Vixra 1408.0195v2 posted by Matthias Lesch on 13 September 2014. This rebuttal is in response to Vixra 1408.0195v2 where Matthias Lesch erroneously attempted to disprove six papers I published proving several conjectures in Number Theory. Specifically, these were papers Vixra:1408.0169, 1408.0174, 1408.0201, 1408.0209, and 1408.0212. This rebuttal paper is presented in the same format as Vixra 1408.0195v2 with necessary quotes from paper Vixra 1408.0195v2 to clarify rebuttals.

**Category:** Number Theory

[129] **viXra:1412.0163 [pdf]**
*submitted on 2014-12-11 08:00:39*

**Authors:** Joseph I. Thomas

**Comments:** 40 Pages.

In 1801, Thomas Young devised what is now known as the Classical Double Slit Experiment. In this experiment, light waves emanating from two separate sources, interfere to form a pattern of alternating bright and dark fringes on a distant screen. By measuring the position of individual fringe centers, the fringe widths and the variation of average light intensity on the screen, it is possible to compute the wavelength of light itself. The original theoretical analysis of the experiment employs a set of geometric assumptions which are collectively referred to here as the Parallel Ray Approximation. Accordingly, any two rays of light arising from either source and convergent on an arbitrary point on the screen, are considered very nearly parallel to each other in the vicinity of the sources. This approximation holds true only when the screen to source distance is very large and the inter-source distance is much larger than the wavelength of light. The predictions that naturally follow are valid only for fringes located near the center of the screen (e.g. the equal spacing of fringes). But for those fringes located further away from the screen center, the precision of these predictions rapidly wanes. Also when the screen to source distance is comparable to the inter-source separation or when the inter-source distance is comparable to the wavelength of light, the original analysis is no longer applicable.
In this paper, the theoretical foundations of Young’s experiment are re-formulated using a newly derived analytical equation of a hyperbola, which forms the locus of the points of intersections of two expanding circular wavefronts (with sources located at the respective centers of expansion). The ensuing predictions of the new analysis are compared with those of the old. And finally, it is shown that the latter approach is just a special instance of the former, when the Parallel Ray Approximation can be said to hold true.
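The contrast described above can be illustrated numerically: a bright fringe of order m lies exactly on the hyperbola |r1 − r2| = mλ with the two sources as foci, while the Parallel Ray Approximation gives y_m ≈ mλL/d. The wavelength, source separation and screen distance below are illustrative assumptions, not values from the paper:

```python
import math

def fringe_exact(m, wl, d, L):
    # bright fringe m lies on the hyperbola |r1 - r2| = m*wl whose foci are
    # the two sources (separation d); the screen is the line at distance L.
    # From y^2/a^2 - x^2/b^2 = 1 with 2a = m*wl, 2c = d, b^2 = c^2 - a^2:
    a = m * wl / 2.0
    b2 = (d / 2.0) ** 2 - a ** 2
    return a * math.sqrt(1.0 + L ** 2 / b2)

def fringe_parallel_ray(m, wl, d, L):
    # standard approximation: y_m = m * wl * L / d
    return m * wl * L / d

wl, d, L = 500e-9, 1e-4, 1.0   # illustrative: 500 nm light, 0.1 mm, 1 m
for m in (1, 10, 50):
    ye, ya = fringe_exact(m, wl, d, L), fringe_parallel_ray(m, wl, d, L)
    print(m, ye, ya, (ye - ya) / ya)
```

With these values the two agree to better than a part in 10^4 at m = 1, but by m = 50 the exact position lies roughly 3% beyond the approximate one, consistent with the abstract's point that the approximation's precision wanes away from the screen center.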

**Category:** Classical Physics

[128] **viXra:1412.0162 [pdf]**
*replaced on 2014-12-16 15:08:54*

**Authors:** J.A.J. van Leunen

**Comments:** 11 Pages.

The dynamics of reality is well regulated. What is the mechanism that controls dynamics, and what rules this mechanism? The models of contemporary physics do not answer these questions.
In the realm of elementary particles the special habits of quaternions appear to play an essential role. In the past, physics had a choice between the Maxwell-based approach and the quaternionic approach, and that choice has a significant influence on how physics equations look. Einstein selected the Maxwell approach, and in this way physics inherited the spacetime view of the universe in which we live.

**Category:** Mathematical Physics

[127] **viXra:1412.0161 [pdf]**
*submitted on 2014-12-11 04:38:10*

**Authors:** George Rajna

**Comments:** 18 Pages.

There is also a connection between statistical physics and evolutionary biology, since the arrow of time operates in biological evolution as well.
From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. [8]
This paper contains the review of quantum entanglement investigations in living systems, and in the quantum mechanically modeled photoactive prebiotic kernel systems. [7]
The human body is a constant flux of thousands of chemical/biological interactions and processes connecting molecules, cells, organs, and fluids, throughout the brain, body, and nervous system. Up until recently it was thought that all these interactions operated in a linear sequence, passing on information much like a runner passing the baton to the next runner. However, the latest findings in quantum biology and biophysics have discovered that there is in fact a tremendous degree of coherence within all living systems.
The accelerating electrons explain not only the Maxwell Equations and Special Relativity, but also the Heisenberg Uncertainty Relation, Wave-Particle Duality and the electron’s spin, building the bridge between the Classical and Quantum Theories.
The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass ratio and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side of the diffraction pattern to the other, which violates CP and time-reversal symmetry.
The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making it possible to understand Quantum Biology.

**Category:** Quantum Physics

[126] **viXra:1412.0160 [pdf]**
*replaced on 2014-12-17 22:33:38*

**Authors:** Jay R. Yablon

**Comments:** 41 Pages. Sections 6 and 7 are new in v2.

The purpose of this paper is to explain the pattern of fill factors observed in the Fractional Quantum Hall Effect (FQHE), which appears to be restricted to odd-integer denominators as well as the sole even-integer denominator of 2. The method is to use the mathematics of gauge theory to develop Dirac monopoles without strings as originally taught by Wu and Yang, while accounting for orientation / entanglement and related “twistor” relationships between spinors and their environment in the physical space of spacetime. We find that the odd-integer denominators are included and the even-integer denominators are excluded if we regard two fermions as equivalent only if both their orientation and their entanglement are the same, i.e., only if they are separated by 4π not 2π. We also find that the even integer denominator of 2 is permitted because unit charges can pair into boson states which do not have the same entanglement considerations as fermions, and that all other even-integer denominators are excluded because only integer charges, and not fractional charges, can be so-paired. We conclude that the observed FQHE fill factor pattern can be fundamentally explained using nothing other than the mathematics of gauge theory in view of how orientation / entanglement / twist applies to fermions but not to bosons, while restricting all but unfractionalized fermions from pairing into bosons.

**Category:** High Energy Particle Physics

[125] **viXra:1412.0159 [pdf]**
*submitted on 2014-12-11 01:06:10*

**Authors:** Hideki Mutoh

**Comments:** 7 Pages.

We found new current-field equations including charge creation-annihilation fields. Although it is difficult to treat the creation and annihilation of charge pairs with Maxwell's equations, the new equations treat them easily. The equations cause the confinement of charge creation and annihilation centers, which means charge conservation in this model. The equations can treat not only the electromagnetic field but also the weak and strong force fields. The weak gravitational field can also be treated by the equations, where the four-current means energy and momentum. It is shown that the Klein-Gordon and Schrödinger equations and the gauge transformation can be directly derived from the equations, where the wave function is defined as a complex exponential function of the energy creation-annihilation field.

**Category:** High Energy Particle Physics

[124] **viXra:1412.0158 [pdf]**
*submitted on 2014-12-10 13:56:23*

**Authors:** Florentin Smarandache

**Comments:** 334 Pages.

This volume includes 37 papers of mathematics or applied mathematics written by the author alone or in collaboration with the following co-authors: Cătălin Barbu, Mihály Bencze, Octavian Cira, Marian Niţu, Ion Patraşcu, Mircea E. Şelariu, Rajan Alex, Xingsen Li, Tudor Paroiu, Luige Vladareanu, Victor Vladareanu, Ştefan Vladuţescu, Yingjie Tian, Mohd Anasri, Lucian Capitanu, Valeri Kroumov, Kimihiro Okuyama, Gabriela Tonţ, A. A. Adewara, Manoj K. Chaudhary, Mukesh Kumar, Sachin Malik, Alka Mittal, Neetish Sharma, Rakesh K. Shukla, Ashish K. Singh, Jayant Singh, Rajesh Singh,V.V. Singh, Hansraj Yadav, Amit Bhaghel, Dipti Chauhan, V. Christianto, Priti Singh, and Dmitri Rabounski.
They were written during the years 2010-2014, about the hyperbolic Menelaus theorem in the Poincaré disc of hyperbolic geometry, and the Menelaus theorem for quadrilaterals in hyperbolic geometry, about some properties of the harmonic quadrilateral related to triangle symmedians and to Apollonius circles, about Luhn prime numbers, and also about the correspondences of the eccentric mathematics of cardinal and integral functions and centric mathematics, or ordinary mathematics; there are some notes on Crittenden and Vanden Eynden's conjecture, or on new transformations, previously non-existent in traditional mathematics, that we call centric mathematics (CM), but that became possible due to the new-born eccentric mathematics, and, implicitly, to the supermathematics (SM); also, about extenics, in general, and the extension innovation model and knowledge management, in particular, about advanced methods for solving contradictory problems of hybrid position-force control of the movement of walking robots by applying a 2D Extension Set, or about the notion of point-set position indicator and that of point-two-sets position indicator, and the navigation of mobile robots in non-stationary and non-structured environments; about applications in statistics, such as estimators based on geometric and harmonic mean for estimating population mean using information; about Gödel’s incompleteness theorem(s) and plausible implications to artificial intelligence/life and the human mind, and many more.

**Category:** General Mathematics

[123] **viXra:1412.0157 [pdf]**
*submitted on 2014-12-10 10:35:30*

**Authors:** Hasmukh K. Tank

**Comments:** Four-page letter

When an electron in an atom falls from a higher orbit to a lower one, a photon is emitted. Since the electrostatic potential-energy of the electron is negative, because of the attractive force between the proton and the electron, the fall makes its potential-energy more negative. So it is argued here that the energy of the emitted photon is a chunk of positive potential-energy; and since the photon is electrically neutral, it can feel only the gravitational force. Therefore it is proposed here that the photon emitted by an atom may feel a repulsive gravitational force, so that it always moves away from the emitting atom. As the photon moves away from the atom, its potential-energy keeps reducing. It is then shown here that the loss of energy of the cosmologically red-shifted photon is indeed equal to (G me mp / e^2) times the loss in electrostatic potential-energy of the electron at the same distance D.
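The dimensionless ratio quoted in the letter, G me mp / e^2 (Gaussian form; in SI units the denominator is e^2/4πε0), is the familiar gravitational-to-electrostatic force-strength ratio for an electron-proton pair. A minimal check (the constant values below are standard CODATA figures, not taken from the letter):

```python
# Gravitational vs electrostatic force strength for an electron-proton pair.
G  = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
me = 9.109e-31   # electron mass, kg
mp = 1.673e-27   # proton mass, kg
k  = 8.988e9     # Coulomb constant 1/(4 pi eps0), N m^2 C^-2
e  = 1.602e-19   # elementary charge, C

# Both forces scale as 1/r^2, so the ratio is independent of distance.
ratio = (G * me * mp) / (k * e**2)
print(f"G me mp / (e^2 / 4 pi eps0) = {ratio:.3e}")  # ~ 4.4e-40
```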

**Category:** Astrophysics

[122] **viXra:1412.0156 [pdf]**
*submitted on 2014-12-10 06:43:55*

**Authors:** George Rajna

**Comments:** 17 Pages.

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. [8]
This paper contains the review of quantum entanglement investigations in living systems, and in the quantum mechanically modeled photoactive prebiotic kernel systems. [7]
The human body is a constant flux of thousands of chemical/biological interactions and processes connecting molecules, cells, organs, and fluids, throughout the brain, body, and nervous system. Up until recently it was thought that all these interactions operated in a linear sequence, passing on information much like a runner passing the baton to the next runner. However, the latest findings in quantum biology and biophysics have discovered that there is in fact a tremendous degree of coherence within all living systems.
The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but also the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin, building the Bridge between the Classical and Quantum Theories.
The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry.
The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making it possible to understand the Quantum Biology.

**Category:** Quantum Physics

[121] **viXra:1412.0155 [pdf]**
*replaced on 2015-01-30 13:43:13*

**Authors:** Florentin Smarandache

**Comments:** 480 Pages.

Neutrosophic Theory means Neutrosophy applied in many fields in order to solve problems related to indeterminacy.
Neutrosophy considers every entity &lt;A&gt; together with its opposite or negation &lt;antiA&gt;, and with their spectrum of neutralities &lt;neutA&gt; in between them (i.e. entities supporting neither &lt;A&gt; nor &lt;antiA&gt;).

**Category:**

[120] **viXra:1412.0154 [pdf]**
*submitted on 2014-12-09 21:03:03*

**Authors:** Michael Starks

**Comments:** 17 Pages.

This book is invaluable as a synopsis of some of the work of one of the greatest philosophers of recent times. There is much value in analyzing his responses to the basic confusions of philosophy, and in the generally excellent attempts to connect classical Chinese thought to modern philosophy. I take a modern Wittgensteinian view to place it in perspective.

**Category:** General Science and Philosophy

[119] **viXra:1412.0153 [pdf]**
*submitted on 2014-12-09 21:06:10*

**Authors:** Michael Starks

**Comments:** 13 Pages.

I give a detailed review of 'The Outer Limits of Reason' by Noson Yanofsky (403 pp., 2013) from a unified perspective of Wittgenstein and evolutionary psychology. I indicate that the difficulties with such issues as paradox in language and math, incompleteness, undecidability, computability, the brain and the universe as computers, etc., all arise from the failure to look carefully at our use of language in the appropriate context, and hence the failure to separate issues of scientific fact from issues of how language works. I discuss Wittgenstein's views on incompleteness, paraconsistency and undecidability, and the work of Wolpert on the limits to computation.

**Category:** General Science and Philosophy

[118] **viXra:1412.0151 [pdf]**
*replaced on 2015-02-08 10:13:16*

**Authors:** Dmitri Martila

**Comments:** 3 Pages.

It is often said in the media that an Earth-bound observer never sees a body B fall into a black hole; the reason is time dilation. But a researcher A with a rocket has full control of the situation: he can approach the black hole almost to contact and observe everything. The distance between A and B may therefore be zero, and this does not depend on how long ago the body B fell toward the black hole. Therefore, the black-hole horizon is the site of one big collision of bodies.

**Category:** Relativity and Cosmology

[117] **viXra:1412.0150 [pdf]**
*replaced on 2015-04-08 05:00:13*

**Authors:** Marius Coman

**Comments:** 62 Pages.

In this book I define a function which allows the reduction of any non-null positive integer to one of the digits 1, 2, 3, 4, 5, 6, 7, 8 or 9. The utility of such a tool is well known in arithmetic. The function defined here differs apparently insignificantly, but perhaps essentially, from the function modulo 9: it is not defined on 0, and it cannot take the value 0. Essentially, the mar reduced form of a non-null positive integer is the digital root of that number, but with the important distinction that it is defined as a function, so that it can be easily used in various applications (divisibility problems, Diophantine equations); it is defined only for the operations of addition and multiplication, not for subtraction and division. Among the results obtained with this tool are a proof of Fermat's Last Theorem for the cases n = 3 and n = 4, using just integers and no complex numbers, and a Diophantine analysis of perfect numbers.
Note: in this book, the numbers denoted by "abc" are understood as numbers whose digits are a, b, c, while the numbers denoted by "a*b*c" are the products of the numbers a, b, c.
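The "mar reduced form" described above is the classical digital root, and it obeys the standard closed form 1 + (n − 1) mod 9. A minimal sketch (the function name `mar` borrows the author's term; the closed-form identity and the property checks are standard facts, not taken from the book):

```python
def mar(n: int) -> int:
    """Digital root of a positive integer: repeatedly sum the decimal
    digits until a single digit 1..9 remains. Defined only for n >= 1."""
    if n < 1:
        raise ValueError("defined only on non-null positive integers")
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

# Closed form: the digital root equals 1 + (n - 1) % 9.
assert all(mar(n) == 1 + (n - 1) % 9 for n in range(1, 2000))

# Compatibility with addition and multiplication (the two operations
# on which the book says the function is defined):
a, b = 86, 75
assert mar(a + b) == mar(mar(a) + mar(b))
assert mar(a * b) == mar(mar(a) * mar(b))
```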

**Category:** Number Theory

[116] **viXra:1412.0149 [pdf]**
*replaced on 2015-01-21 14:18:15*

**Authors:** Joel M Williams

**Comments:** 15 Pages.

This paper demonstrates with figures that the Nazca lines, whether straight and narrow or broad like rectangles and trapezoids, are simply the application of basic human engineering knowledge; no extraterrestrial intervention was necessary. Why the Nazca Desert even exists is demonstrated with a satellite image of the area and contour lines: a colossal landslide performed a strip-mining dump into the previous valley, giving pre-Inca miners a "rock-milled" ore source. How the nature figures were created with significant precision without being viewable has been a source of conjecture. This paper illustrates that they could simply have been created by citizens who staged them on a well-defined, viewable grid and then celebrated by marching into the open desert and taking up corresponding target locations. Some of the Nazca nature figures are "staged" on the Mandala Grid in the paper.

**Category:** Archaeology

[115] **viXra:1412.0148 [pdf]**
*replaced on 2014-12-28 21:22:59*

**Authors:** Alejandro A. Torassa

**Comments:** 11 Pages.

This paper presents an alternative classical mechanics which is invariant under transformations between reference frames and which can be applied in any reference frame without the necessity of introducing fictitious forces. Additionally, a new principle of conservation of energy is also presented.

**Category:** Classical Physics

[114] **viXra:1412.0147 [pdf]**
*submitted on 2014-12-08 16:29:25*

**Authors:** Ramzi Suleiman

**Comments:** 35 Pages.

We propose a novel model of aspiration levels for interactive games, termed economic harmony. The model posits that individuals' levels of outcome satisfaction are proportional to their actual outcomes relative to their aspired outcomes. We define a state of harmony as a state in which the interacting players' levels of outcome satisfaction are equal, and underscore the necessary condition for the manifestation and sustenance of harmony situations at the behavioral level. We utilize the proposed model to predict the transfer decisions in a class of two-person ultimatum games, including the standard ultimatum game, an ultimatum game with asymmetric information about the size of the "pie", and an ultimatum game with varying veto power. We also apply the model to predicting behavior in a three-person ultimatum game with uncertainty regarding the identity of the recipient, and in a sequential common pool resource game. For all the aforementioned games, we show that the model accords well with large sets of experimental data and compares favorably with the predictions of game theory and other relevant models of interactive situations. Strikingly, for the standard ultimatum game, the model predicts that the allocator should transfer a proportion of the entire sum equal to 1 − Φ, where Φ is the famous Golden Ratio, best known for its aesthetically pleasing properties.
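Reading Φ as the "small" golden ratio (√5 − 1)/2 ≈ 0.618 (the interpretation under which 1 − Φ is a valid proportion), the predicted transfer works out to about 0.382 of the pie, so the allocator keeps the golden-section share. A quick evaluation of just the stated formula (the model itself is not reproduced here):

```python
import math

phi = (math.sqrt(5) - 1) / 2   # "small" golden ratio, ~0.6180
transfer = 1 - phi             # predicted proportion transferred
keep = phi                     # proportion the allocator keeps

print(f"transfer = {transfer:.4f}, keep = {keep:.4f}")  # 0.3820 / 0.6180

# Golden-section property: keep/total equals transfer/keep,
# since phi satisfies phi**2 = 1 - phi.
assert math.isclose(keep / 1.0, transfer / keep, rel_tol=1e-12)
```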

**Category:** Economics and Finance

[113] **viXra:1412.0146 [pdf]**
*replaced on 2014-12-19 14:45:12*

**Authors:** Rodolfo A. Frino

**Comments:** 9 Pages.

The present paper is concerned with the derivation of Einstein's formula of equivalence of mass and energy, E = mc^2, from the universal uncertainty relations. These relations are a generalization of the original uncertainty relations developed by Werner Heisenberg in 1927. Thus, this approach unifies two of the most important laws of physics and provides proof of (a) the quantum-mechanical nature of the above formula, and (b) the correctness of the universal uncertainty relations that I found in 2012.

**Category:** Quantum Physics

[112] **viXra:1412.0144 [pdf]**
*submitted on 2014-12-08 11:09:38*

**Authors:** Fran De Aquino

**Comments:** 7 Pages.

When photons hit a material surface they exert a pressure on it. It has been shown that this pressure has a negative component (opposite to the direction of propagation of the photons) due to the negative linear momentum transported by the photons. Here we show that, in the photoelectric effect, the electrons are ejected by the action of this negative component of the momentum transported by the light photons. It is further shown that the gravitational interaction also results from the action of this negative component of the momentum transported by specific photons.

**Category:** Quantum Physics

[111] **viXra:1412.0141 [pdf]**
*submitted on 2014-12-08 05:42:35*

**Authors:** Valdir Monteiro dos Santos Godoi

**Comments:** Pages.

Einstein's 1916 calculation of the deflection of light by the Sun, with experimental suggestions.

**Category:** History and Philosophy of Physics

[110] **viXra:1412.0139 [pdf]**
*submitted on 2014-12-08 02:12:52*

**Authors:** Miloje M. Rakocevic

**Comments:** 28 Pages. An extended version of Addendum 4 in our book "Genetic Code as a Unique System", presented at our web site.

There are many approaches to the investigation of consciousness. In this paper we will show that it makes sense to speak of consciousness as the comprehension of something; furthermore, to speak of a universal consciousness as the universal comprehension of the universal code: comprehension by different investigators, in different creative works, through different epochs.

**Category:** General Science and Philosophy

[109] **viXra:1412.0138 [pdf]**
*submitted on 2014-12-07 13:34:35*

**Authors:** Dmitri Martila

**Comments:** 13 Pages.

According to a distant observer, a Fly escaping towards a black hole never reaches the event horizon. Let "Fly" be a synonym for a space ship with food, while "Bat" stands for a space ship with starving astronauts. The question then arises: how long does the Bat have to wait before reaching the Fly?

**Category:** Relativity and Cosmology

[108] **viXra:1412.0137 [pdf]**
*submitted on 2014-12-06 21:52:23*

**Authors:** Syed Afsar Abbas

**Comments:** 7 Pages.

The origin of the liquid-drop character of the nucleus has been a source of puzzlement since the birth of nuclear physics. We provide an explanation by reviving the very first, "original" charge-exchange interaction model suggested by Yukawa in 1935.

**Category:** Nuclear and Atomic Physics

[107] **viXra:1412.0136 [pdf]**
*submitted on 2014-12-07 03:30:07*

**Authors:** Pingyuan Zhou

**Comments:** 16 Pages. This paper has been submitted to mathematical journal.

In this paper we give a proof of the strong Goldbach conjecture by studying the limit status of the original continuous Goldbach natural number sequence generated by the original continuous odd prime number sequence; it implies the weak Goldbach conjecture. If a prime p is defined as a Goldbach prime when GNL = p, then a Goldbach prime is the larger member of a twin prime pair, from which we give a proof of the twin prime conjecture.
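The paper's argument is analytic, but the statement of the strong conjecture itself is easy to spot-check by brute force for small even numbers. An illustrative sketch (mine, not the paper's method), verifying it up to 10,000:

```python
def primes_up_to(n: int) -> list[int]:
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

N = 10_000
prime_set = set(primes_up_to(N))

def goldbach_pair(n: int):
    """Return some pair of primes summing to n, or None."""
    for p in prime_set:
        if (n - p) in prime_set:
            return (p, n - p)
    return None

# Every even n >= 4 should decompose as a sum of two primes.
assert all(goldbach_pair(n) is not None for n in range(4, N + 1, 2))
print(f"strong Goldbach verified for all even n in [4, {N}]")
```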

**Category:** Number Theory

[106] **viXra:1412.0135 [pdf]**
*submitted on 2014-12-07 04:31:04*

**Authors:** Nikolay Leonov

**Comments:** 17 Pages. English and russian texts

The analysis conducted covers conditions in which electrons and neutrons can be formed from individual ethereal elements as well as conditions in which electrons and neutrons disintegrate into individual ethereal elements.

**Category:** Nuclear and Atomic Physics

[105] **viXra:1412.0134 [pdf]**
*submitted on 2014-12-07 05:34:40*

**Authors:** Stanislav Podosenov, Jaykov Foukzon, Alexander Potapov, Elena Men’kova

**Comments:** 19 Pages.

On the basis of the theory of bound charges, the motion of a charged particle in the Coulomb field formed by a spherical source of bound charges is calculated. Such motion is possible in Riemannian space-time. A comparison is carried out with the general relativity theory (GRT) and special relativity theory (SRT) results in the Schwarzschild field, for a particle falling onto the Schwarzschild and Coulomb centres. It is shown that the proton and electron can form a stable bound state with dimensions of the order of the classical electron radius. The perihelion shift of the electron orbit in the proton field is calculated; this shift is five times greater than in SRT and, with a corresponding substitution of the constants, is 5/6 of the GRT value. By means of the quantization of adiabatic invariants, following a method close to that of Bohr and Sommerfeld and without the Dirac equation, the correction to the energy for the fine level splitting is obtained. It is shown that the Caplan stable orbits in the hydrogen atom coincide with the Bohr orbits.

**Category:** Relativity and Cosmology

[104] **viXra:1412.0133 [pdf]**
*submitted on 2014-12-07 06:22:31*

**Authors:** Padmanabhan Murali

**Comments:** 9 Pages.

Observations of recurrent evolution have long been noted, suggesting that evolution prefers certain pathways. At the phenotype and genotype level it appears that evolution tends to arrive at certain solutions that suit the environmental conditions present. Further, adaptive laboratory evolution studies on single-celled organisms show that evolution is consistent and predictable when organisms are subjected to the same environmental input. This paper presents a case from a different perspective, taking these observations as the starting point for a theory that evolution on the whole is predictable: only certain organisms exist as solutions in evolution space, only certain change pathways are possible, and evolution necessarily proceeds along them. The presence of many regulatory networks in the organism is common knowledge, supplemented by the emerging understanding of gene regulatory networks; networks exist that control an organism through birth, development, sustenance and survival. Extending this further, hidden regulatory networks are proposed to exist that activate and control organism change (speciation) as a response to environmental change, and evidence (possible cases) for such networks is proposed from the literature. A key implication of this hypothesis is that it eliminates the need for natural selection as a role player in evolution; instead, the proposed hidden networks control speciation, and beneficial mutations are the result of the action of these hidden networks to initiate phenotype change (speciation).

**Category:** Physics of Biology

[103] **viXra:1412.0132 [pdf]**
*submitted on 2014-12-06 16:13:48*

**Authors:** Steven Kenneth Kauffmann

**Comments:** 7 Pages.

A fundamental theorem underpinning Einstein's gravity theory is that the contraction of a tensor is itself a tensor of lower rank. However this theorem is not an identity; its demonstration cannot be extended beyond space-time points where the space-time transformation in question has a Jacobian matrix with exclusively finite components and that matrix' inverse also has exclusively finite components. Space-time transformations therefore cannot be regarded as physical except at such points; indeed in classical theoretical physics nonfinite entities don't even make sense. This, taken together with the Principle of Equivalence, implies that metric tensors can be physical only at space-time points where they and their inverses have finite components exclusively, and as well have signatures which are identical to the Minkowski metric tensor's signature. For metric-tensor solutions of the Einstein equation there can exist space-time points where these physical constraints on the solution are flouted, just as there exist well-known solutions of the Maxwell and Schroedinger equations which also defy physical constraints -- and therefore are always discarded. Instances of unphysical solutions of the Maxwell or Schroedinger or Einstein field-theoretic equations can usually be traced to subtly unphysical initial inputs or assumptions.

**Category:** Relativity and Cosmology

[102] **viXra:1412.0131 [pdf]**
*submitted on 2014-12-06 14:12:09*

**Authors:** Lubomir Vlcek

**Comments:** 39 Pages. Wave - particle duality elegantly incorporates into kinetic energy in direction of movement as particle, and kinetic energy against directions of movement as wave, in relations for kinetic energy.

The speeds of electrons and protons in atoms are small: for example, an electron moving at a speed ve = 0.003c creates the spectral line Hα. This confirms Doppler's principle in hydrogen for the Balmer line Hα. The accompanying activity of the reaction to the movement of stable particles in the transmission medium is waves. Wave-particle duality incorporates elegantly into the relations for kinetic energy: kinetic energy in the direction of movement as particle, and kinetic energy against the direction of movement as wave. Calculations are given for the neutron, the β electron and gamma rays.
Stable electrons moving at speeds of 0.99c – c create the leptons (μ−, τ−), the neutrinos (νe, νμ, ντ) and the bosons W+, W−, Z (= β electrons). Weak interactions are caused by stable electrons, which create the leptons (μ−, τ−) (= particles = electrons at different speeds), the neutrinos νe, νμ, ντ (= waves), the bosons W+, W−, Z (= particles = β electrons moving at nearly the speed of light) and gamma rays (= waves of extremely high frequency, >10^19 Hz).
Stable particles (p+, n0, D, He-3, α) moving at speeds of 0.3c – 0.99c create the baryons and mesons. The strong interactions are caused by these stable particles (p+, n0, D, He-3, α), which create the baryons and mesons.

**Category:** Classical Physics

[101] **viXra:1412.0130 [pdf]**
*replaced on 2015-02-27 12:54:44*

**Authors:** Jaykov Foukzon

**Comments:** 8 Pages. DOI: 10.11648/j.pamj.s.2015040101.12

In 1942 Haskell B. Curry presented what is now called Curry's paradox, which can arise in a logic independently of its stance on negation. In recent years there has been revitalised interest in non-classical solutions to the semantic paradoxes. In this article a non-classical resolution of Curry's paradox and Shaw-Kwei's paradox is proposed, without rejecting any contraction postulate.

**Category:** Set Theory and Logic

[100] **viXra:1412.0128 [pdf]**
*replaced on 2016-12-28 09:46:26*

**Authors:** Kabir Adinoyi Umar

**Comments:** 13 Pages. Presented at the 6th Annual Conference of the Astronomical Society of Nigeria and at the 39th Annual Conference of the Nigerian Institute of Physics.

A cosmological constant (Λ) dark energy capable of early-universe inflation is described within the framework of the Rotating Universe interpretation of Time and Energy (RUTE). RUTE is a model of spacetime consistent with entropic gravity and has a key dimensional symmetry that doubles the number of spacetime dimensions with microscopic dimensional partners. Its energy-density constraint provides a spill model of dark energy sensitive to the presence of matter and radiation, and a Gravitational Wave Reheating (GWR) mechanism. Recent observational constraints on a gravitational wave and its gamma-ray counterpart are consistent with RUTE's GWR, a shake-spill mechanism of vacuum-energy extraction. In this scenario, gravitational waves of a given frequency with strain above a threshold are predicted to release Standard Model (SM) photons from vacuum energy. Verifiable predictions are briefly discussed.

**Category:** Relativity and Cosmology

[99] **viXra:1412.0127 [pdf]**
*submitted on 2014-12-06 05:16:47*

**Authors:** N.N. Leonov

**Comments:** 6 Pages. English and russian texts

The structure of the superfluid neutron component of neutron stars is identified, and similarities are revealed between the structures and properties of the superfluid components in liquid helium and in neutron superfluid liquid.

**Category:** Nuclear and Atomic Physics

[98] **viXra:1412.0126 [pdf]**
*submitted on 2014-12-06 05:43:30*

**Authors:** S.A. Podosenov

**Comments:** 25 Pages.

An exact solution for the field of a charge in a uniformly accelerated noninertial frame of reference (NFR), together with the "Equivalent Situation Postulate", allows one to find the space-time structure, as well as the fields from arbitrarily shaped charged conductors, without using Einstein's equations. In particular, the space-time metric over a charged plane can be related to the metric obtained from an exact solution of the Einstein-Maxwell equations; this solution describes an equilibrium of charged dust in parallel electric and gravitational fields. The field and metric outside a conducting ball have been found. The proposed method eliminates the divergence of the proper energy and makes classical electrodynamics consistent at arbitrarily small distances. An experiment is proposed to verify the suggested approach.

**Category:** Relativity and Cosmology

[97] **viXra:1412.0125 [pdf]**
*submitted on 2014-12-06 02:24:29*

**Authors:** Lubomir Vlcek

**Comments:** 28 Pages. The main differences between Einstein's theory[1] and the latest knowledge[2]are: 1.Form of Intensity of the Moving Charge Electric Field is asymmetrical, 2. Form of the interference field is non-linear, 3. Kinetic energy of a charge moving at the v

The Lorentz relationship is derived from the asymmetrical form of the intensity of the moving charge. To derive it we do not need the Lorentz transformation equations; that is, we do not need SPACE-TIME.
We do not need local time, covariant equations, a physical definition of simultaneity, or an invariant interval. In other words, in physics we do not need Einstein's theory of relativity. From the asymmetrical form of the intensity of the moving charge we can derive Gauss's law and Faraday's law, and derive the 4th Maxwell equation, which Maxwell postulated rather than derived. The kinetic energy of a charge moving at velocity v has two different values: in the direction of motion it is the charge's own kinetic energy, while against the direction of motion it represents the wave energy which the charge creates in the transmission medium.

**Category:** High Energy Particle Physics

[96] **viXra:1412.0124 [pdf]**
*submitted on 2014-12-06 03:01:39*

**Authors:** Barar Stelian Liviu

**Comments:** 20 Pages.

This paper is protected by copyright from 16.07.2012. I want to present the paper to viXra.

**Category:** Number Theory

[95] **viXra:1412.0123 [pdf]**
*submitted on 2014-12-06 03:30:57*

**Authors:** Hasmukh K. Tank

**Comments:** three-page letter

This letter reports interesting correlations between the cosmological red-shift and the strength-ratio of gravitational and electric forces, which may prove to be a clue to a deeper understanding of gravitation and cosmology. The cosmological red-shift, for values smaller than unity, is zc = Δλ/λ0 = [G mp^2 / h c] times the luminosity-distance measured in units of a wavelength [D / λC], where λC = h / (mp c) is the Compton wavelength of a fundamental particle, the pi-meson. Also, the energy lost by a cosmologically red-shifted photon can be viewed as due to a deceleration experienced by the photon: the energy lost equals the mass times the acceleration (H0 c) times the luminosity-distance D, and the rate of deceleration (H0 c) turns out to be equal to the accelerated expansion of the universe. Thus this letter provides some interesting correlations for the experts to think about further.
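Taking mp to be the charged-pion mass, as the letter's identification of the fundamental particle suggests, the dimensionless factor G mp^2/(h c) and the distance D at which zc reaches unity can be evaluated directly. A sketch (constant values are standard figures, not taken from the letter; the pion-mass reading of mp is my assumption):

```python
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
h    = 6.626e-34   # Planck constant, J s
c    = 2.998e8     # speed of light, m/s
m_pi = 2.488e-28   # charged pion mass (~139.6 MeV/c^2), kg

strength = G * m_pi**2 / (h * c)   # dimensionless factor in the letter
lam_C = h / (m_pi * c)             # pion Compton wavelength, m

# Distance at which zc = strength * (D / lam_C) reaches unity:
D_unit = lam_C / strength
print(f"strength = {strength:.2e}, lambda_C = {lam_C:.2e} m, "
      f"D(zc=1) = {D_unit:.2e} m")
```

D(zc = 1) comes out of order 10^26 m, i.e. of the order of the Hubble length, which is presumably the scale coincidence the letter points to.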

**Category:** Astrophysics

[94] **viXra:1412.0122 [pdf]**
*replaced on 2015-03-02 20:01:48*

**Authors:** Morio Kikuchi

**Comments:** 8 Pages.

We protect an aircraft from an anti-aircraft missile by placing a flying object between them.

**Category:** General Science and Philosophy

[93] **viXra:1412.0120 [pdf]**
*submitted on 2014-12-05 13:05:54*

**Authors:** Huping Hu, Maoxin Wu

**Comments:** 69 Pages. Appeared in Prespacetime Journal 5(11): 1042-1110 (Nov. 2014)

This article is a continuation of the Principle of Existence. A premomentumenergy model of elementary particles, four forces and human consciousness is formulated, which illustrates how the self-referential hierarchical spin structure of the premomentumenergy provides a foundation for creating, sustaining and causing evolution of elementary particles through matrixing processes embedded in said premomentumenergy. This model generates elementary particles and their governing matrix laws for a dual universe (quantum frame) comprised of an external momentum-energy space and an internal momentum-energy space. In contrast, the prespacetime model described previously generates elementary particles and their governing matrix laws for a dual universe (quantum frame) comprised of an external spacetime and an internal spacetime. These quantum frames and their metamorphoses are interconnected through quantum jumps as demonstrated in forthcoming articles.
The premomentumenergy model reveals the creation, sustenance and evolution of fermions, bosons and spinless entities each of which is comprised of an external wave function or external object in the external momentum-energy space and an internal wave function or internal object in the internal momentum-energy space. The model provides a unified causal structure in said dual universe (quantum frame) for weak interaction, strong interaction, electromagnetic interaction, gravitational interaction, quantum entanglement, human consciousness. Further, the model provides a unique tool for teaching, demonstration, rendering, and experimentation related to subatomic and atomic structures and interactions, quantum entanglement generation, gravitational mechanisms in cosmology, structures and mechanisms of human consciousness.

**Category:** High Energy Particle Physics

[92] **viXra:1412.0119 [pdf]**
*submitted on 2014-12-05 13:09:14*

**Authors:** Huping Hu, Maoxin Wu

**Comments:** 32 Pages. Appeared in Prespacetime Journal 5(11): 1111-1142 (Nov. 2014)

This work is a continuation of the premomentumenergy model described recently. Here we show how in this model premomentumenergy generates: (1) the time, position, & intrinsic-proper-time relation from the transcendental Law of One, (2) the self-referential matrix law with the time, position and intrinsic-proper-time relation as the determinant, (3) the dual-universe Law of Zero, and (4) the immanent Law of Conservation in the external/internal momentum-energy space, which may be violated in certain processes. We further show how premomentumenergy generates, sustains and evolves elementary particles and composite particles, incorporating the genesis of the self-referential matrix law. In addition, we discuss the ontology and mathematics of ether in this model. Illustratively, in the beginning there was premomentumenergy by itself, e^{i0} = 1, materially empty, and it began to imagine through primordial self-referential spin 1 = e^{i0} = e^{i0}e^{i0} = e^{+iL-iL}e^{+iM-iM} = e^{+iL}e^{-iM}e^{+iL}e^{-iM} = e^{+iL}e^{+iM}/e^{+iL}e^{+iM} … such that it created the self-referential matrix law, the external object to be observed and the internal object as observed, separated them into the external momentum-energy space and the internal momentum-energy space, caused them to interact through said matrix law, and thus gave birth to the dual universe (quantum frame) comprised of the external momentum-energy space and the internal momentum-energy space, which it has since sustained and made to evolve.

**Category:** High Energy Particle Physics

[91] **viXra:1412.0118 [pdf]**
*submitted on 2014-12-05 13:11:59*

**Authors:** Huping Hu, Maoxin Wu

**Comments:** 21 Pages. Appeared in Prespacetime Journal 5(11): 1143-1163 (Nov. 2014)

Some modeling methods based on premomentumenergy model are stated. The methods relate to presenting and modeling generation, sustenance and evolution of elementary particles through self-referential hierarchical spin structures of premomentumenergy. In particular, stated are methods for generating, sustaining and causing evolution of fermions, bosons and spinless particles in a dual universe (quantum frame) comprised of an external momentumenergy space and an internal momentumenergy space. Further, methods for modeling weak interaction, strong interaction, electromagnetic interaction, gravitational interaction, quantum entanglement and brain function in said dual universe are also stated.
Some additional modeling methods based on premomentumenergy model are also stated. The additional methods relate to presenting and modeling time, position & intrinsic-proper-time relation, self-referential matrix rules, elementary particles and composite particles through self-referential hierarchical spin in premomentumenergy. In particular, methods for generating time, position & intrinsic-proper-time relation, self-referential matrix rules, elementary particles and composite particles in aforesaid dual universe are stated.

**Category:** High Energy Particle Physics

[90] **viXra:1412.0117 [pdf]**
*submitted on 2014-12-05 13:15:34*

**Authors:** Huping Hu, Maoxin Wu

**Comments:** 80 Pages. Appeared in Prespacetime Journal 5(12): 1164-1243 (Nov. 2014)

This article is a continuation of the Principle of Existence. A prespacetime-premomentumenergy model of elementary particles, four forces and human consciousness is formulated, which illustrates how the self-referential hierarchical spin structure of the prespacetime-premomentumenergy may provide a foundation for creating, sustaining and causing evolution of elementary particles through matrixing processes embedded in said prespacetime-premomentumenergy. This model generates elementary particles and their governing matrix laws for a dual universe (quantum frame) comprised of an external spacetime and an internal energy-momentum space. In contrast, the prespacetime model described previously generates elementary particles and their governing matrix laws for a dual universe (quantum frame) comprised of an external spacetime and an internal spacetime. Then, the premomentumenergy model described recently generates elementary particles and their governing matrix laws for a dual universe (quantum frame) comprised of an external momentum-energy space and an internal momentum-energy space. These quantum frames and their metamorphoses may be interconnected through quantum jumps, as demonstrated in forthcoming articles.
The prespacetime-premomentumenergy model may reveal the creation, sustenance and evolution of fermions, bosons and spinless entities each of which is comprised of an external wave function or external object in the external spacetime and an internal wave function or internal object in the internal momentum-energy space. The model may provide a unified causal structure in said dual universe (quantum frame) for weak interaction, strong interaction, electromagnetic interaction, gravitational interaction, quantum entanglement, human consciousness. The model may also provide a unique tool for teaching, demonstration, rendering, and experimentation related to subatomic and atomic structures and interactions, quantum entanglement generation, gravitational mechanisms in cosmology, structures and mechanisms of human consciousness.

**Category:** High Energy Particle Physics

[89] **viXra:1412.0116 [pdf]**
*submitted on 2014-12-05 13:18:02*

**Authors:** Huping Hu, Maoxin Wu

**Comments:** 51 Pages. Appeared in Prespacetime Journal 5(12): 1244-1294 (Nov. 2014)

This work is a continuation of the prespacetime-premomentumenergy model described recently. Here we show how, in this model, prespacetime-premomentumenergy generates: (1) the four-momentum and four-position relation as the transcendental Law of One, (2) the self-referential matrix law with the four-momentum and four-position relation as its determinant, and (3) the Law of Zero, in a dual universe comprised of an external spacetime and an internal momentum-energy space. We further show how prespacetime-premomentumenergy may generate, sustain and cause the evolution of elementary particles and composite particles, incorporating the genesis of the self-referential matrix law. In addition, we discuss the ontology and mathematics of ether in this model. Illustratively, in the beginning there was prespacetime-premomentumenergy by itself, $e^{i0} = 1$, materially empty, and it began to imagine through primordial self-referential spin $1 = e^{i0} = e^{i0}e^{i0} = e^{iL-iL}e^{iM-iM} = e^{iL}e^{iM}e^{-iL}e^{-iM} = e^{-iL}e^{-iM}/(e^{-iL}e^{-iM}) = e^{iL}e^{iM}/(e^{iL}e^{iM})\ldots$ such that it created the self-referential matrix law, the external object to be observed and the internal object as observed, separated them into external spacetime and internal momentum-energy space, caused them to interact through said matrix law, and thus gave birth to the dual universe which it has since sustained and made to evolve.

**Category:** High Energy Particle Physics

[88] **viXra:1412.0115 [pdf]**
*submitted on 2014-12-05 13:20:29*

**Authors:** Huping Hu, Maoxin Wu

**Comments:** 42 Pages. Appeared in Prespacetime Journal 5(12): 1295-1337 (Nov. 2014)

Some modeling methods based on the prespacetime-premomentumenergy model are stated. The methods relate to presenting and modeling the generation, sustenance and evolution of elementary particles through self-referential hierarchical spin structures of prespacetime-premomentumenergy. In particular, methods are stated for generating, sustaining and causing the evolution of fermions, bosons and spinless particles in a dual universe (quantum frame) comprised of an external spacetime and an internal momentumenergy space, and vice versa. Further, methods for modeling the weak interaction, strong interaction, electromagnetic interaction, gravitational interaction, quantum entanglement and brain function in said dual universe are also stated.
Some additional methods based on the prespacetime-premomentumenergy model are also stated. The additional methods relate to presenting and modeling the four-momentum & four-position relation, self-referential matrix rules, elementary particles and composite particles through self-referential hierarchical spin in prespacetime-premomentumenergy. In particular, methods for modeling the generation of the four-momentum & four-position relation, self-referential matrix rules, elementary particles and composite particles in the aforesaid dual universe are stated.

**Category:** High Energy Particle Physics

[87] **viXra:1412.0114 [pdf]**
*replaced on 2016-01-13 06:58:45*

**Authors:** Sylwester Kornowski

**Comments:** 4 Pages.

To eliminate the useless mathematical expressions from perturbative string theory, we must assume that spacetime is 10-dimensional (9 spatial dimensions and 1 time dimension). Why is string theory still fruitless? To describe the position, shape and motions of a spinning physical circle/closed string (it has non-zero thickness), we need 10 degrees of freedom. Six of the ten degrees of freedom lead to circles, i.e. to the "compactified" spatial degrees of freedom/dimensions. In an effective theory, the ten-degrees-of-freedom spacetime transforms into 4-dimensional spacetime (the physical circle transforms into a mathematical point). To describe within the Scale-Symmetric Theory (SST) the position, internal structure and motions of a neutrino (it is a torus with a condensate at its centre, both composed of the spinning physical circles), we need 26 degrees of freedom. In the SST, the stability of the neutrinos follows from the interactions (due to dynamic viscosity) of the closed strings with the superluminal non-gravitating Higgs field. Such a Higgs field is the radion field, and it is the six-degrees-of-freedom subspacetime. On the other hand, in M-theory there appears the extra eleventh compact dimension associated with a radion field. Due to the succeeding phase transitions of the modified Higgs fields, more and more complex structures appear and the number of degrees of freedom increases, following the series 6, 10, 26, 58 and 122. New radion/scalar fields with very different properties appear as well. This means that a complete description of Nature within one equation is impossible. We can partially unify all the main partial theories only via the theory of the succeeding phase transitions of the superluminal non-gravitating Higgs field, and this is the missing part of the Theory of Everything.

**Category:** Quantum Gravity and String Theory

[86] **viXra:1412.0113 [pdf]**
*submitted on 2014-12-05 11:11:38*

**Authors:** George Rajna

**Comments:** 13 Pages.

Between 2009 and 2013, the Planck satellite observed relic radiation, sometimes called cosmic microwave background (CMB) radiation. Today, with a full analysis of the data, the quality of the map is such that the imprints left by dark matter and relic neutrinos are clearly visible. [12]
The gravitational force attracts matter, causing concentration of matter in a small space and leaving much space with low matter concentration: dark matter and dark energy.
The asymmetry between the masses of the electric charges, for example the proton and the electron, can be understood through the asymmetrical Planck Distribution Law. This temperature-dependent energy distribution is asymmetric around the maximum intensity, where the annihilation of matter and antimatter is a high-probability event. The asymmetric sides create different frequencies of electromagnetic radiation at the same intensity level, compensating each other. One of these compensating ratios is the electron–proton mass ratio. The lower-energy side has no compensating intensity level; it is the dark energy, and the corresponding matter is the dark matter.
The Weak Interaction changes the temperature-dependent Planck Distribution of the electromagnetic oscillations, changing the non-compensated dark matter rate and giving the responsibility to the sterile neutrino.

**Category:** Astrophysics

[85] **viXra:1412.0112 [pdf]**
*submitted on 2014-12-05 07:49:40*

**Authors:** Radwan M. Kassir

**Comments:** 16 pages

In a recent research study entitled "Test of Time Dilation Using Stored Li+ Ions as Clocks at Relativistic Speed" (Phys. Rev. Lett. 113, 120405 – published 16 September 2014), an Ives–Stilwell type experiment, it was claimed that a time dilation experiment using the relativistic Doppler effect on the Li+ ion resonance frequencies had verified, with greatly increased precision, the relativistic frequency shift formula derived in Special Relativity from the Lorentz Transformation, thus indirectly proving the time dilation predicted by Special Relativity. The test was based on the validation of an algebraic equality relating a set of measured frequencies, deduced from the relativistic Doppler equations. In this study, it is shown that this algebraic equality, used as a validation criterion, does not uniquely imply the validity of the relativistic Doppler equations. In fact, using an approach in line with the referenced study, it is revealed that an infinite number of frequency shift equations would satisfy the employed validation criterion. Nonetheless, it is shown that even if that claim were hypothetically accepted, the experiment would prove nothing but a contradiction in the Special Relativity prediction. Indeed, it is demonstrated that the relativistic blue shift is the consequence of a time contraction, determined via the light speed postulate, leading to the relativistic Doppler formula in the case of an approaching light source. The experiment would then be confirming a relativistic time contraction. It is also shown that classical relativity results in perceived time alterations leading to the classical Doppler effect equations. The referenced study's result could be attributed to the classical Doppler shift to within a 10% difference.

**Category:** Relativity and Cosmology

[84] **viXra:1412.0109 [pdf]**
*submitted on 2014-12-04 21:12:41*

**Authors:** Robert Bennett

**Comments:** 25 Pages.

The belief that the Sagnac test of 1913 applied only to rotational motion was discounted when Ruyong Wang found the same results for linear motion in 2004. The Sagnac result has never been credibly explained, despite its wide application in modern technology. In turn, the Wang paper has been virtually ignored for the last ten years; this is remedied by the present paper, which establishes the test as a critical signal of striking new concepts.
Kinematic and dynamic motions are carefully distinguished here, and the neglected topic of covariance is reviewed and applied to the Galilean dynamical law of velocity addition.
Analysis of Wang's result in the conveyor and lab frames under the premise of aether drag logically leads to identification of preferred motion in an absolute frame of reference: the earth-bound laboratory frame!
That light speed in the lab frame will be the same as for the conveyor is a testable prediction of this paper, the same as the Dufour–Prunier test.
Discovery of the absolute lab reference frame and a flexible aether (the ALFA model) refutes relativity and its alleged consequences, such as both postulates of special relativity, general covariance in general relativity, Lorentz transformations, Minkowski space, length contraction, and time dilation: all disproven by ALFA via the Wang and Sagnac tests.

**Category:** Relativity and Cosmology

[83] **viXra:1412.0107 [pdf]**
*submitted on 2014-12-04 09:44:40*

**Authors:** Casper Spanggaard

**Comments:** 3 Pages. Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

More effort is needed to arrive at a complete theory of consciousness that provides a satisfactory explanation of conscious experience. Some considerations that may inspire and contribute to such a theory are described. The hypothesis that consciousness can result in evolutionary advantages only as a consequence of its participation in and its effects on decision making is considered. The hypotheses that consciousness is an intrinsic property of reality and that encoded information gives rise to a unique conscious experience are also considered, as are the questions of whether evolutionary processes shape the consciousness/system interface and of an abstract information-based reality. It is suggested that implementation of the non-event-based conscious decision-making strategy may be the only possibility for creating artificial intelligent systems that have free will.

**Category:** Mind Science

[82] **viXra:1412.0106 [pdf]**
*submitted on 2014-12-03 19:14:19*

**Authors:** Sidharth Ghoshal

**Comments:** 2 Pages. Copyright Sidharth Ghoshal

A high-performance file compression algorithm.

**Category:** Data Structures and Algorithms

[81] **viXra:1412.0105 [pdf]**
*submitted on 2014-12-03 23:55:25*

**Authors:** Ksawery Krenc, Adam Kawalec

**Comments:** 8 Pages.

The aim of this paper is to propose an ontology framework for preselected sensors, driven by sensor networks' needs regarding a specific task such as target threat recognition. The problem will be solved methodologically, taking into account in particular the non-deterministic nature of the functions assigning the concept and relation sets to the concept and relation lexicon sets, respectively, and vice versa. This may effectively enhance the efficiency of the information fusion performed in sensor networks.

**Category:** General Science and Philosophy

[80] **viXra:1412.0104 [pdf]**
*submitted on 2014-12-04 00:07:18*

**Authors:** Mihai Cristian Florea, Eloie Bosse

**Comments:** 7 Pages.

The aim of this paper is to investigate how to improve the process of information combination using the Dempster-Shafer Theory (DST). In the presence of an overload of information and an unknown environment, the reliability of the sources of information or the sensors is usually unknown and thus cannot be used to refine the fusion process. In a previous paper [1], the authors investigated different techniques to evaluate contextual knowledge from a set of mass functions (membership of a BPA to a set of BPAs, relative reliabilities of BPAs, credibility degrees, etc.). The purpose of this paper is to investigate how to use this contextual knowledge in order to improve the fusion process.
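The DST combination step that such contextual-knowledge refinements build on is Dempster's rule. A minimal sketch, with a hypothetical two-source example over an illustrative frame (the labels and masses are not from the paper):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on empty intersections
    # Normalize by 1 - K, where K is the total conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical sources over the frame {'friend', 'hostile'}
F, H = frozenset({'friend'}), frozenset({'hostile'})
FH = F | H  # ignorance (the whole frame)
m1 = {F: 0.6, FH: 0.4}
m2 = {F: 0.5, H: 0.3, FH: 0.2}
m12 = dempster_combine(m1, m2)
```

The normalization step discards the conflicting mass K, which is precisely where reliability-aware refinements such as discounting intervene.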

**Category:** General Science and Philosophy

[79] **viXra:1412.0103 [pdf]**
*submitted on 2014-12-04 00:09:19*

**Authors:** Pierre Valin, Pascal Djiknavorian, Dominic Grenier

**Comments:** 7 Pages.

Electronic Support Measures consist of passive receivers which can identify emitters coming from a small bearing angle, which, in turn, can be related to platforms that belong to 3 classes: Friend, Neutral, or Hostile. Decision makers prefer results presented in STANAG 1241 allegiance form, which adds 2 new classes: Assumed Friend and Suspect. Dezert-Smarandache (DSm) theory is particularly suited to this problem, since it allows for intersections between the original 3 classes. Results are presented showing that the theory can be successfully applied to the problem of associating ESM reports to established tracks, and that its results identify when misassociations have occurred and to what extent. Results are also compared to Dempster-Shafer theory, which can only reason on the original 3 classes. Decision makers are thus offered STANAG 1241 allegiance results in a timely manner, with quick allegiance change when appropriate and stability in allegiance declaration otherwise.

**Category:** General Science and Philosophy

[78] **viXra:1412.0102 [pdf]**
*submitted on 2014-12-04 00:12:12*

**Authors:** Benjamin Pannetier, Jean Dezert

**Comments:** 8 Pages.

In this paper, we propose a new approach to track multiple ground targets with GMTI (Ground Moving Target Indicator) and IMINT (IMagery INTelligence) reports. This tracking algorithm takes into account road network information and is adapted to the out-of-sequence measurement problem. The scope of the paper is to fuse the attribute-type information given by heterogeneous sensors with DSmT (Dezert-Smarandache Theory) and to introduce the type results into the tracking process. We show the ground target tracking improvement obtained, due to better target discrimination and efficient conflicting-information management, on a realistic scenario.

**Category:** General Science and Philosophy

[77] **viXra:1412.0101 [pdf]**
*submitted on 2014-12-04 00:14:01*

**Authors:** Bart Kahler, Erik Blasch

**Comments:** 8 Pages.

Airborne radar tracking in moving ground vehicle scenarios is impacted by sensor, target, and environmental dynamics. Moving targets can be assessed with 1-D High Range Resolution (HRR) Radar profiles when sufficient signal-to-noise ratio (SNR) is present; these profiles contain enough feature information to discern one target from another, helping to maintain track or to identify the vehicle.

**Category:** General Science and Philosophy

[76] **viXra:1412.0100 [pdf]**
*submitted on 2014-12-04 00:16:56*

**Authors:** Erik Blasch, Pierre Valin, Eloi Bosse, Maria Nilsson, Joeri van Laere, Elisa Shahbazian

**Comments:** 8 Pages.

Information Fusion coordinates large-volume data processing machines to address user needs. Users expect a situational picture to extend their ability to sense events, movements, and activities. Typically, data is collected and processed for object location (e.g. target identification) and movement (e.g. tracking); however, high-level reasoning or situational understanding depends on spatial, cultural, and political effects. In this paper, we explore opportunities where information fusion can aid in the selection and processing of the data for enhanced tacit knowledge understanding by (1) display fusion for data presentation (e.g. cultural segmentation), (2) interactive fusion to allow the user to inject a priori knowledge (e.g. cultural values), and (3) associated metrics of predictive capabilities (e.g. cultural networks). In a simple scenario for target identification with deception, the impact of cultural information on situational understanding is demonstrated using the Technology-Emotion-Culture-Knowledge (TECK) attributes of the Observe-Orient-Decide-Act (OODA) model.

**Category:** General Science and Philosophy

[75] **viXra:1412.0099 [pdf]**
*submitted on 2014-12-04 00:19:12*

**Authors:** Frederic Dambreville

**Comments:** 8 Pages.

This paper defines a new concept and framework for constructing fusion rules for evidences. This framework is based on a referee function, which performs a decisional arbitration conditional on the basic decisions provided by the several sources of information. A simple sampling method is derived from this framework. The purpose of this sampling approach is to avoid the combinatorics inherent in the definition of evidence fusion rules.

**Category:** General Science and Philosophy

[74] **viXra:1412.0098 [pdf]**
*submitted on 2014-12-04 00:21:30*

**Authors:** Saiedeh N. Razavi, Carl T. Haas, Philippe Vanheeghe, Emmanuel Duflos

**Comments:** 8 Pages.

Dislocations of construction materials on large sites represent critical state changes. The ability to detect dislocations automatically for tens of thousands of items can ultimately improve project performance significantly. A belief-function-based data fusion algorithm was developed to estimate material locations and detect dislocations. Dislocation is defined as the change between discrete sequential locations of critical materials, such as special valves or fabricated items, on a large construction project.

**Category:** General Science and Philosophy

[73] **viXra:1412.0097 [pdf]**
*submitted on 2014-12-04 00:23:19*

**Authors:** Arnaud Martin

**Comments:** 8 Pages.

This paper presents a point of view for addressing an application with the theory of belief functions from a global approach. Indeed, in a belief application, the definition of the basic belief assignments and the tasks of reducing the number of focal elements, discounting, combination and decision must be thought out at the same time. Moreover, these tasks can be seen as a general process of belief transfer.

**Category:** General Science and Philosophy

[72] **viXra:1412.0096 [pdf]**
*submitted on 2014-12-04 00:35:56*

**Authors:** Quirin Hamp, Leonhard Reindl

**Comments:** 9 Pages.

Association of spatial information about targets is conventionally based on measures such as the Euclidean or the Mahalanobis distance. These approaches produce satisfactory results when targets are more distant than the resolution of the employed sensing principle, but are limited if targets lie closer. This paper describes an association method combined with classification that enhances performance. The method considers not only spatial distance but also information about class membership during a post-processing step. Association of measurements that cannot be uniquely associated to only one estimate, but to multiple estimates, is achieved under the constraint of minimizing the conflict of the combination of mutual class memberships.
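The Mahalanobis distance mentioned above generalizes the Euclidean distance by weighting the displacement with the inverse covariance of the estimate. A minimal 2-D sketch (the points and covariance are illustrative, not from the paper):

```python
import math

def mahalanobis(x, mu, cov_inv):
    """Mahalanobis distance between 2-D point x and mean mu, with the
    inverse covariance given as a 2x2 nested list."""
    d = [x[0] - mu[0], x[1] - mu[1]]
    # quadratic form d^T * cov_inv * d
    q = (d[0] * (cov_inv[0][0] * d[0] + cov_inv[0][1] * d[1])
         + d[1] * (cov_inv[1][0] * d[0] + cov_inv[1][1] * d[1]))
    return math.sqrt(q)

# With an identity covariance, the Mahalanobis distance reduces to Euclidean
identity = [[1.0, 0.0], [0.0, 1.0]]
print(mahalanobis((3.0, 4.0), (0.0, 0.0), identity))  # -> 5.0
```

A non-identity inverse covariance stretches the metric along the uncertain directions, which is what makes it preferable to the plain Euclidean distance for track association.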

**Category:** General Science and Philosophy

[71] **viXra:1412.0095 [pdf]**
*submitted on 2014-12-04 00:37:42*

**Authors:** Bart Kahler, Erik Blasch

**Comments:** 16 Pages.

Airborne radar tracking in moving ground vehicle scenarios is impacted by sensor, target, and environmental dynamics. Moving targets can be characterized by 1-D High Range Resolution (HRR) Radar profiles with sufficient Signal-to-Noise Ratio (SNR). The amplitude feature information for each range bin of the HRR profile is used to discern one target from another, to help maintain track or to identify a vehicle. Typical radar clutter suppression algorithms developed for processing moving ground target data remove not only the surrounding clutter but also a portion of the target signature. Enhanced clutter suppression can be achieved using a Multi-channel Signal Subspace (MSS) algorithm, which preserves target features. In this paper, we (1) exploit extra feature information from enhanced clutter suppression for Automatic Target Recognition (ATR), (2) present a Decision-Level Fusion (DLF) gain comparison using Displaced Phase Center Antenna (DPCA) and MSS clutter-suppressed HRR data, and (3) develop a confusion-matrix identity fusion result for Simultaneous Tracking and Identification (STID). The results show that using more channels for MSS increases identification over DPCA, results in a slightly noisier clutter-suppressed image, and preserves more target features after clutter cancellation. The paper's contributions include extending a two-channel MSS clutter cancellation technique to three channels, verifying that MSS is superior to the DPCA technique for target identification, and comparing these techniques in a novel multi-look confusion-matrix decision-level fusion process.

**Category:** General Science and Philosophy

[70] **viXra:1412.0093 [pdf]**
*submitted on 2014-12-04 00:40:54*

**Authors:** Pierre Valin, Pascal Djiknavorian, Eloi Bosse

**Comments:** 8 Pages.

This article addresses the performance of Dempster-Shafer (DS) theory when it is slightly modified to prevent it from becoming too certain of its decision upon accumulation of supporting evidence. Since this is done by requiring that the ignorance never becomes too small, one can refer to this variant of DS theory as Thresholded-DS. In doing so, one ensures that DS can respond quickly to a consistent change in the evidence that it fuses. Only realistic data is fused, where realism is discussed in terms of data certainty and data accuracy, thereby avoiding Zadeh's paradox. Performance measures of Thresholded-DS are provided for various thresholds in terms of sensor data certainty and fusion accuracy, to help designers assess beforehand, by varying the threshold appropriately, the achievable performance in terms of the estimated certainty and accuracy of the data that must be fused. The performance measures are twofold: first, stability when the fused data are consistent; and second, the latency in the response time when an abrupt change occurs in the data to be fused. These two performance measures must be traded off against each other, which is why the performance curves will be very helpful for designers of multi-source information fusion systems using Thresholded-DS.
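The thresholding idea (keeping the ignorance mass from ever dropping below a floor) can be sketched as a post-processing step applied after each fusion. The mechanism and floor value below are assumptions for illustration, not the authors' implementation:

```python
def threshold_ignorance(m, frame, floor=0.1):
    """Force m(frame) >= floor by shrinking the other focal masses
    proportionally (a sketch of the 'Thresholded-DS' idea; `floor`
    is an assumed design parameter)."""
    ign = m.get(frame, 0.0)
    if ign >= floor:
        return dict(m)
    scale = (1.0 - floor) / (1.0 - ign)  # shrink the non-ignorance mass
    out = {k: v * scale for k, v in m.items() if k != frame}
    out[frame] = floor
    return out

frame = frozenset({'friend', 'neutral', 'hostile'})
m = {frozenset({'friend'}): 0.95, frame: 0.05}
mt = threshold_ignorance(m, frame, floor=0.2)
```

Keeping a reserve of ignorance mass is what lets the fused belief swing quickly when the incoming evidence changes consistently, at the cost of never reaching full certainty.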

**Category:** General Science and Philosophy

[69] **viXra:1412.0092 [pdf]**
*submitted on 2014-12-04 00:42:38*

**Authors:** Stefan Arnborg, Kungliga Tekniska Hogskolan

**Comments:** 15 Pages.

We are interested in understanding the relationship between Bayesian inference and evidence theory. The concept of a set of probability distributions is central both in robust Bayesian analysis and in some versions of Dempster-Shafer's evidence theory. We interpret imprecise probabilities as imprecise posteriors obtainable from imprecise likelihoods and priors, both of which are convex sets that can be considered as evidence and represented with, e.g., DS-structures. Likelihoods and priors are combined in Bayesian analysis with Laplace's parallel composition. The natural and simple robust combination operator makes all pairwise combinations of elements from the two sets representing prior and likelihood. Our proposed combination operator is unique, and it has interesting normative and factual properties. We compare its behavior with other proposed fusion rules and with earlier efforts to reconcile Bayesian analysis and evidence theory. The behavior of the robust rule is consistent with the behavior of Fixsen/Mahler's modified Dempster's (MDS) rule, but not with Dempster's rule. The Bayesian framework is liberal in allowing all significant uncertainty concepts to be modeled and taken care of, and is therefore a viable, but probably not the only, unifying structure that can be economically taught and in which alternative solutions can be modeled, compared and explained.
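The robust combination operator described (all pairwise combinations of elements from the prior set and the likelihood set, each pair combined with Laplace's parallel composition, i.e. pointwise product plus renormalization) can be sketched for discrete distributions; the example sets are hypothetical:

```python
def bayes_combine(prior, likelihood):
    """Laplace's parallel composition: pointwise product, renormalized.
    Assumes both dicts share the same hypothesis keys."""
    raw = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(raw.values())
    return {h: v / z for h, v in raw.items()}

def robust_combine(priors, likelihoods):
    """All pairwise combinations of an imprecise prior set and an
    imprecise likelihood set, yielding the set of posteriors."""
    return [bayes_combine(p, l) for p in priors for l in likelihoods]

# Hypothetical imprecise prior (two candidate distributions), one likelihood
priors = [{'a': 0.5, 'b': 0.5}, {'a': 0.7, 'b': 0.3}]
likelihoods = [{'a': 0.9, 'b': 0.1}]
posteriors = robust_combine(priors, likelihoods)
```

The resulting set of posteriors is the imprecise posterior; in practice one would track its convex hull rather than enumerate every element.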

**Category:** General Science and Philosophy

[68] **viXra:1412.0089 [pdf]**
*submitted on 2014-12-04 00:49:22*

**Authors:** Pascal Djiknavorian, Pierre Valin, Dominic Grenier

**Comments:** 8 Pages.

Electronic Support Measures consist of passive receivers which can identify emitters which, in turn, can be related to platforms that belong to 3 classes: Friend, Neutral, or Hostile. Decision makers prefer results presented in STANAG 1241 allegiance form, which adds 2 new classes: Assumed Friend and Suspect. Dezert-Smarandache (DSm) theory is particularly suited to this problem, since it allows for intersections between the original 3 classes. However, the DSm hybrid combination rule is highly complex to execute and requires large amounts of resources. We have applied and studied a Matlab implementation of Tessem's k-l-x, Lowrance's summarization and Simard's approximation techniques in DSm theory for the fusion of ESM reports. Results are presented showing that we can improve on the execution time while maintaining, or in some cases improving, the rate of good decisions.

**Category:** General Science and Philosophy

[67] **viXra:1412.0088 [pdf]**
*submitted on 2014-12-04 00:51:14*

**Authors:** Deqiang Han, Jean Dezert, Chongzhao Han, Yi Yang

**Comments:** 7 Pages.

In the Dempster-Shafer Theory (DST) of evidence and the transferable belief model (TBM), the probability transformation is necessary and crucial for decision-making. The evaluation of the quality of a probability transformation is usually based on entropy or probabilistic information content (PIC) measures, which are questioned in this paper. An alternative probability transformation approach based on uncertainty minimization is proposed in order to verify the rationality of entropy or PIC as evaluation criteria for the probability transformation. Based on experimental comparisons among different probability transformation approaches, the rationality of using entropy or PIC measures to evaluate probability transformation approaches is analyzed and discussed.
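The pignistic transformation (BetP), the classical baseline against which such probability transformations are usually evaluated, distributes each focal mass uniformly over its elements. A minimal sketch:

```python
def pignistic(m):
    """Pignistic probability transformation:
    BetP(x) = sum over focal sets A containing x of m(A) / |A|
    (assumes no mass on the empty set)."""
    betp = {}
    for focal, mass in m.items():
        share = mass / len(focal)  # spread the focal mass uniformly
        for x in focal:
            betp[x] = betp.get(x, 0.0) + share
    return betp

m = {frozenset({'a'}): 0.4, frozenset({'a', 'b'}): 0.6}
p = pignistic(m)  # 'a' receives 0.4 plus half of 0.6, 'b' the other half
```

A PIC measure would then score how concentrated the resulting probability distribution is, which is exactly the evaluation practice the paper questions.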

**Category:** General Science and Philosophy

[66] **viXra:1412.0087 [pdf]**
*submitted on 2014-12-04 00:53:00*

**Authors:** Jean Dezert, Benjamin Pannetier

**Comments:** 8 Pages.

In this paper we show how to correct and improve the Belief Interacting Multiple Model filter (BIMM), proposed in 2009 by Nassreddine et al., for tracking maneuvering targets. Our improved algorithm, called PCR-BIMM, is based on results developed in the DSmT (Dezert-Smarandache Theory) framework and concerns two main steps of BIMM: 1) the update of the basic belief assignment of modes, which is done by the Proportional Conflict Redistribution rule no. 5 rather than Smets' rule (the conjunctive rule); 2) the global target state estimation, which is obtained from the DSmP probabilistic transformation rather than the commonly used pignistic transformation. Monte-Carlo simulation results are presented to show the performance of this PCR-BIMM filter with respect to classical IMM and BIMM filters on a very simple maneuvering target tracking scenario.
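The Proportional Conflict Redistribution rule no. 5 used in the mode-update step redistributes each partial conflict back to the two focal elements involved, proportionally to the masses they contributed. A two-source sketch (the mode labels and masses are illustrative, not from the paper):

```python
from itertools import product

def pcr5_combine(m1, m2):
    """PCR5 rule for two sources: conjunctive combination, then each
    partial conflict m1(A)*m2(B) with A and B disjoint is redistributed
    to A and B proportionally to m1(A) and m2(B)."""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + wa * wb
        else:
            # redistribute the partial conflict wa*wb back to a and b
            out[a] = out.get(a, 0.0) + wa * wa * wb / (wa + wb)
            out[b] = out.get(b, 0.0) + wb * wb * wa / (wa + wb)
    return out

A, B = frozenset({'mode1'}), frozenset({'mode2'})
m1 = {A: 0.6, B: 0.4}
m2 = {A: 0.7, B: 0.3}
m12 = pcr5_combine(m1, m2)
```

Unlike Dempster's rule, no mass is discarded by normalization, so the combined masses still sum to one without rescaling.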

**Category:** General Science and Philosophy

[65] **viXra:1412.0086 [pdf]**
*submitted on 2014-12-04 00:54:28*

**Authors:** Ksawery Krenc, Adam Kawalec

**Comments:** 6 Pages.

This paper discusses the problem of applying logical operators when defining posterior hypotheses, that is, hypotheses which are not created directly from the sensor data. In the authors' opinion, the application of the logical sentence operators is constrained to some specific cases, where the set operators may be applied as well. On the other hand, the set operators make it possible to provide much more adequate posterior hypotheses, which results in higher precision of the final fusion decision. To demonstrate this, an analysis has been made and some examples related to attribute information fusion in C2 systems are provided.

**Category:** General Science and Philosophy

[64] **viXra:1412.0085 [pdf]**
*submitted on 2014-12-04 01:35:40*

**Authors:** George Rajna

**Comments:** 13 Pages.

Although the idea of heavy photons has been around for almost 30 years, it gained new interest just a few years ago when theorists suggested that it could explain why several experiments detected more high-energy positrons—the antimatter partners of electrons—than scientists had expected in the cosmic radiation of space. Data from the PAMELA satellite experiment; the AMS instrument aboard the International Space Station; the LAT experiment of the Fermi Gamma-ray Space Telescope and others have all reported finding an excess of positrons. [13]
Hidden photons are predicted in some extensions of the Standard Model of particle physics, and unlike WIMPs they would interact electromagnetically with normal matter.
In particle physics and astrophysics, weakly interacting massive particles, or WIMPs, are among the leading hypothetical particle physics candidates for dark matter.
The gravitational force attracts matter, causing concentration of matter in a small space and leaving much space with low matter concentration: dark matter and dark energy.
The asymmetry between the masses of the electric charges, for example the proton and the electron, can be understood through the asymmetrical Planck Distribution Law. This temperature-dependent energy distribution is asymmetric around the maximum intensity, where the annihilation of matter and antimatter is a high-probability event. The asymmetric sides create different frequencies of electromagnetic radiation at the same intensity level, compensating each other. One of these compensating ratios is the electron–proton mass ratio. The lower-energy side has no compensating intensity level; it is the dark energy, and the corresponding matter is the dark matter.

**Category:** Quantum Physics

[63] **viXra:1412.0084 [pdf]**
*submitted on 2014-12-04 01:39:21*

**Authors:** Frederic Dambreville

**Comments:** 8 Pages.

We propose a solution to the Vehicle-Borne Improvised Explosive Device problem. This solution is based on a modelling by belief functions and involves the construction of a combination rule dedicated to this problem. The construction of the combination rule is made possible by a tool developed in previous works: a generic framework dedicated to the construction of combination rules. This tool implies a tripartite architecture, with respective parts implementing the logical framework, the combination definition (referee function) and the computation processes. Referee functions perform decisional arbitration conditional on the basic decisions provided by the sources of information, and allow rule definitions at a logical level adapted to the application. We construct a referee function for the Vehicle-Borne Improvised Explosive Device problem and compare it to reference combination rules.

**Category:** General Science and Philosophy

[62] **viXra:1412.0083 [pdf]**
*submitted on 2014-12-04 01:46:56*

**Authors:** Zhun-ga Liu, Jean Dezert, Gregoire Mercier, Quan Pan, Yong-mei Cheng

**Comments:** 8 Pages.

Theories of evidence have already been applied more or less successfully to the fusion of remote sensing images. In classical evidential reasoning, all the sources of evidence and their fusion results are related to the same invariable (static) frame of discernment. Nevertheless, changes can occur through multi-temporal remote sensing images, and in some applications these changes need to be detected efficiently. The invariable frame of classical evidential reasoning cannot efficiently represent or detect change occurrences in heterogeneous remote sensing images. To overcome this limitation, Dynamical Evidential Reasoning (DER) is proposed for the sequential fusion of multi-temporal images. A new state transition frame is defined in DER, and change occurrences can be precisely represented by introducing a state transition operator. The belief functions used in DER are defined similarly to those of Dempster-Shafer Theory (DST). Two kinds of dynamical combination rules, working in the free model and the constrained model, are proposed in this new framework to deal with the different cases. Finally, an experiment using three real satellite images acquired before and after an earthquake is provided to show the interest of the new approach.

**Category:** General Science and Philosophy

[61] **viXra:1412.0081 [pdf]**
*submitted on 2014-12-04 01:50:52*

**Authors:** Jean Dezert, Zhun-ga Liu, Gregoire Mercier

**Comments:** 8 Pages.

In this paper, we present a non-supervised methodology for edge detection in color images based on belief functions and their combination. Our algorithm is based on the fusion of the results of local edge detectors, expressed as basic belief assignments thanks to a flexible modeling, and on the proportional conflict redistribution rule developed in the DSmT framework. The application of this new belief-based edge detector is tested both on the original (noise-free) Lena picture and on a modified image including artificial pixel noise, to show the ability of our algorithm to work on noisy images too.

**Category:** General Science and Philosophy

[60] **viXra:1412.0080 [pdf]**
*submitted on 2014-12-04 01:52:22*

**Authors:** Gavin Powell, Matthew Roberts

**Comments:** 8 Pages.

Generally, there are problems with any form of recursive fusion based on belief functions. An open world is often required, but under conjunctive combination the empty set can become greedy, so that over time all of the mass becomes assigned to it. With disjunctive combination, all of the mass moves toward the ignorant set over time. Real-world applications often require an open world, but due to these limitations the problem is forced into a closed-world solution. GRP1 works iteratively in an open world in a temporally conscious fashion, allowing more recent measurements to have more of an impact. This approach makes it ideal for fusing and classifying streaming data.

**Category:** General Science and Philosophy

[59] **viXra:1412.0079 [pdf]**
*submitted on 2014-12-04 01:54:17*

**Authors:** Ronald Mahler

**Comments:** 8 Pages.

Data fusion algorithms must typically address not only kinematic issues (that is, target tracking) but also nonkinematics, for example target identification, threat estimation, and intent assessment. Whereas kinematics involves traditional measurements such as radar detections, nonkinematics typically involves nontraditional measurements such as quantized data, attributes, features, natural-language statements, and inference rules. The kinematic vs. nonkinematic chasm is often bridged by grafting some expert-system approach (fuzzy logic, Dempster-Shafer, rule-based inference) into a single- or multi-hypothesis multitarget tracking algorithm, using ad hoc methods. The purpose of this paper is to show that conventional measurement-to-track association theory can be directly extended to nontraditional measurements in a Bayesian manner. Concepts such as association likelihood, association distance, hypothesis probability, and global nearest-neighbor distance are defined, and explicit formulas are derived for specific kinds of nontraditional evidence.

**Category:** General Science and Philosophy

[58] **viXra:1412.0078 [pdf]**
*submitted on 2014-12-04 01:56:02*

**Authors:** Deqiang Han, Jean Dezert, Chongzhao Han, Yi Yang

**Comments:** 7 Pages.

The dissimilarity of evidence, which represents the degree of dissimilarity between bodies of evidence (BOEs), has attracted more and more research interest and has been used in many applications based on evidence theory. In this paper, some novel dissimilarities of evidence are proposed using fuzzy set theory (FST). The basic belief assignments (bba's) are first transformed into measures in FST, and then, using the dissimilarity (or similarity) measures of FST, the dissimilarities between bba's are defined. Some numerical examples are provided to verify the rationality of the proposed dissimilarities of evidence.

**Category:** General Science and Philosophy

[57] **viXra:1412.0075 [pdf]**
*submitted on 2014-12-04 02:03:27*

**Authors:** Deqiang Han, Jean Dezert, Jean-Marc Tacnet, Chongzhao Han

**Comments:** 8 Pages.

Multi-criteria decision making (MCDM) consists in making decisions in the presence of multiple criteria. To make a decision in the framework of MCDM under uncertainty, a novel Fuzzy-Cautious OWA with Evidential Reasoning (FCOWA-ER) approach is proposed in this paper. A payoff matrix and belief functions of the states of nature are used to generate the expected payoffs, from which two Fuzzy Membership Functions (FMFs), representing optimistic and pessimistic attitudes respectively, can be obtained. Two basic belief assignments (bba's) are then generated from the two FMFs. By evidence combination, a combined bba is obtained, which can be used to make the decision. There is no problem of weight selection in FCOWA-ER as in traditional OWA. Compared with other evidential reasoning-based OWA approaches such as COWA-ER, FCOWA-ER has lower computational cost and clearer physical meaning. Some experiments and related analyses are provided to justify our proposed FCOWA-ER.

**Category:** General Science and Philosophy

[56] **viXra:1412.0074 [pdf]**
*submitted on 2014-12-04 02:05:50*

**Authors:** Deqiang Han, Jean Dezert, Zhun-ga Liu, Jean-Marc Tacnet

**Comments:** 8 Pages.

Dempster-Shafer evidence theory is widely used for approximate reasoning under uncertainty; however, decision-making is more intuitive and easier to justify when made in a probabilistic context. Thus the transformation approximating a belief function by a probability measure is crucial and important for decision-making in the evidence theory framework. In this paper we present a new transformation of any general basic belief assignment (bba) into a Bayesian belief assignment (or subjective probability measure) based on a new proportional and hierarchical principle of uncertainty reduction. Some examples are provided to show the rationality and efficiency of our proposed probability transformation approach.
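The paper's own transformation is not reproduced in this abstract. As context, the classical baseline against which such probability transformations are usually compared is the pignistic transformation BetP, which splits each focal element's mass equally among its singletons. A minimal illustrative sketch in Python (the representation of bba's as dicts keyed by frozensets is an assumption of this sketch, not the paper's notation):

```python
def pignistic(bba):
    """Classical pignistic transformation BetP: the mass of each
    focal element is divided equally among its singleton elements."""
    betp = {}
    for focal, mass in bba.items():
        for element in focal:
            betp[element] = betp.get(element, 0.0) + mass / len(focal)
    return betp

# Example bba on the frame {a, b, c}:
# m({a}) = 0.4, m({a,b}) = 0.4, m({a,b,c}) = 0.2
bba = {frozenset({'a'}): 0.4,
       frozenset({'a', 'b'}): 0.4,
       frozenset({'a', 'b', 'c'}): 0.2}
print(pignistic(bba))  # BetP(a) = 0.4 + 0.4/2 + 0.2/3
```

The result is a genuine probability distribution (the singleton values sum to 1), which is what makes such transformations usable for decision-making.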

**Category:** General Science and Philosophy

[55] **viXra:1412.0072 [pdf]**
*submitted on 2014-12-04 02:13:15*

**Authors:** Deqiang Han, Jean Dezert, Chongzhao Han

**Comments:** 8 Pages.

The theory of belief functions, also called Dempster-Shafer evidence theory, has proved to be a very useful representation scheme for expert and other knowledge-based systems. However, the computational complexity of evidence combination becomes large as the cardinality of the frame of discernment increases. To reduce the computational cost of evidence combination, the idea of basic belief assignment (bba) approximation was proposed, which can reduce the complexity of the given bba's. To realize a good bba approximation, the approximated bba should be similar (in some sense) to the original bba. In this paper, we use the distance of evidence together with the difference between the uncertainty degrees of the approximated bba and the original one to construct a comprehensive measure representing the similarity between the approximated bba and the original one. By using this comprehensive measure as the objective function and by designing some constraints, bba approximation is converted into an optimization problem. Comparative experiments are provided to show the rationality of the construction of the comprehensive similarity measure and of the designed constraints.

**Category:** General Science and Philosophy

[54] **viXra:1412.0071 [pdf]**
*submitted on 2014-12-04 02:15:34*

**Authors:** Zhun-ga Liu, Jean Dezert, Quan Pan, Yong-mei Cheng

**Comments:** 8 Pages.

Data clustering methods integrating information fusion techniques have recently been developed in the framework of belief functions. More precisely, the evidential version of fuzzy c-means (ECM) has been proposed to deal with the clustering of proximity data, based on an extension of the popular fuzzy c-means (FCM) clustering method. In fact, ECM does not perform very well for proximity data because it is based only on the distance between the object and the clusters' centers to determine the mass of belief of the object's commitment. As a result, different clusters can overlap with close centers, which is not very efficient for data clustering. To overcome this problem, we propose a new clustering method called belief functions c-means (BFCM) in this work. In BFCM, both the distance between the object and the imprecise cluster's center and the distances between the object and the centers of the involved specific clusters are taken into account for the mass determination. The object is considered to belong to a specific cluster if it is very close to this cluster's center, to an imprecise cluster if it lies in the middle (overlapping zone) of some specific clusters, or to the outlier cluster if it is too far from the data set. Pignistic probability can be applied for hard decision-making support in BFCM. Several examples are given to illustrate how BFCM works and to show how it outperforms ECM and FCM for proximity data.

**Category:** General Science and Philosophy

[53] **viXra:1412.0070 [pdf]**
*submitted on 2014-12-04 02:18:33*

**Authors:** Jean Dezert, Jean-Marc Tacnet

**Comments:** 8 Pages.

Most decision problems can be described as the choice, ranking or sorting of a set of alternatives. The classical ELECTRE TRI (ET) method is a multicriteria-based outranking sorting method which assigns alternatives to a set of predetermined categories. The ET method deals with situations where indifference is not transitive and solutions can sometimes appear incomparable. ET suffers from two main drawbacks: 1) it requires an arbitrary choice of cut step to perform the outranking of alternatives versus the profiles of categories, and 2) an arbitrary choice of attitude for the final assignment of alternatives to the categories. ET finally gives a binary (hard) assignment of alternatives to categories. In this paper we develop a soft version of the ET method based on belief functions which circumvents the aforementioned drawbacks of ET and yields both a soft (probabilistic) assignment of alternatives to categories and an indicator of the consistency of the soft solution. This Soft-ET approach is applied to a concrete example to show how it works and to compare it with the classical ET method.

**Category:** General Science and Philosophy

[52] **viXra:1412.0069 [pdf]**
*submitted on 2014-12-04 02:20:01*

**Authors:** Jean Dezert, Pei Wang, Albena Tchamova

**Comments:** 6 Pages.

We challenge the validity of Dempster-Shafer Theory by using an emblematic example to show that the DS rule produces counter-intuitive results. Further analysis reveals that the result comes from an understanding of evidence pooling which goes against the common expectation of this process. Although DS theory has attracted some interest from the scientific community working in information fusion and artificial intelligence, its validity for solving practical problems is problematic, because it is not applicable to evidence combination in general, but only to a certain type of situation which still needs to be clearly identified.
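The abstract does not spell out the emblematic example. For context, the best-known illustration of this counter-intuitive behavior is Zadeh's example of two highly conflicting sources; the Python sketch below (frame labels and function names are illustrative, not taken from the paper) applies Dempster's rule, i.e. conjunctive combination followed by normalization:

```python
def dempster(m1, m2):
    """Dempster's rule for two bba's given as dicts keyed by frozensets:
    conjunctive combination, then normalization by the non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict  # total non-conflicting mass
    if k == 0.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {f: v / k for f, v in combined.items()}, conflict

# Zadeh-style example: two doctors, frame {meningitis M, concussion C, tumor T}
m1 = {frozenset({'M'}): 0.99, frozenset({'T'}): 0.01}
m2 = {frozenset({'C'}): 0.99, frozenset({'T'}): 0.01}
fused, conflict = dempster(m1, m2)
# Although both sources give 'T' only 1% support, after normalizing
# away the ~0.9999 conflict, all remaining mass lands on 'T'.
print(fused, conflict)
```

This is the kind of high-conflict behavior that motivates the alternative rules discussed throughout this collection.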

**Category:** General Science and Philosophy

[51] **viXra:1412.0067 [pdf]**
*submitted on 2014-12-04 02:24:19*

**Authors:** Florentin Smarandache, Jean Dezert

**Comments:** 8 Pages.

Since the development of belief function theory, introduced by Shafer in the seventies, many combination rules have been proposed in the literature to combine belief functions, especially (but not only) in highly conflicting situations, because the emblematic Dempster's rule generates counter-intuitive and unacceptable results in practical applications. Many attempts have been made during the last thirty years to propose better combination rules based on different frameworks and justifications. Recently, in the DSmT (Dezert-Smarandache Theory) framework, two interesting and sophisticated rules (PCR5 and PCR6) have been proposed based on the Proportional Conflict Redistribution (PCR) principle. These two rules coincide for the combination of two basic belief assignments, but they differ in general as soon as three or more sources have to be combined, because the redistribution principles used in PCR5 and PCR6 are different. In this paper we show why PCR6 is better than PCR5 for combining three or more sources of evidence, and we prove the coherence of PCR6 with the simple averaging rule classically used to estimate probability under the frequentist interpretation of the probability measure. We show that such a probability estimate cannot be obtained using the Dempster-Shafer (DS) rule, nor the PCR5 rule.
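As context for the two-source case in which PCR5 and PCR6 coincide: each partial conflict m1(X)m2(Y), with X and Y disjoint, is redistributed back to X and Y proportionally to the masses involved, instead of being normalized away. A minimal illustrative sketch (bba's as dicts keyed by frozensets is an assumption of this sketch, not the authors' code):

```python
def pcr5_two_sources(m1, m2):
    """PCR5 (= PCR6) for two sources: conjunctive combination, with each
    partial conflict m1(X)*m2(Y), X & Y disjoint, redistributed to X and Y
    proportionally to m1(X) and m2(Y)."""
    result = {}
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                result[inter] = result.get(inter, 0.0) + ma * mb
            else:
                total = ma + mb
                result[a] = result.get(a, 0.0) + ma * (ma * mb) / total
                result[b] = result.get(b, 0.0) + mb * (ma * mb) / total
    return result

A, B = frozenset({'A'}), frozenset({'B'})
fused = pcr5_two_sources({A: 0.6, B: 0.4}, {A: 0.2, B: 0.8})
print(fused)  # masses still sum to 1: conflict is redistributed, not discarded
```

Unlike Dempster's rule, no mass is lost to normalization, so the combined masses always sum to one even under heavy conflict.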

**Category:** General Science and Philosophy

[50] **viXra:1412.0066 [pdf]**
*submitted on 2014-12-04 02:26:49*

**Authors:** Deqiang Han, X. Rong Li, Shaoyi Liang

**Comments:** 8 Pages.

The technique of Multiple Classifier Systems (MCSs), a kind of decision-level information fusion, has fast become popular among researchers for fusing multiple classification outputs to obtain better classification accuracy. In MCSs there exist various kinds of uncertainty, such as the ambiguity of the output of an individual member classifier and the inconsistency among the outputs of member classifiers. In this paper, we model the uncertainties in MCSs based on the theory of belief functions. The outputs of member classifiers are modeled using belief functions. A new measure of diversity among member classifiers is established using the distance of evidence, and the fusion rule adopted for MCSs is Dempster's rule of combination. The construction of MCSs based on the proposed diversity measure is a dynamic procedure and can achieve better performance than existing diversity measures. Experimental results and related analyses show that our proposed measure and approach are rational and effective.

**Category:** General Science and Philosophy

[49] **viXra:1412.0065 [pdf]**
*submitted on 2014-12-04 02:32:20*

**Authors:** Nassim Abbas, Youcef Chibani, Zineb Belhadi, Mehdia Hedir

**Comments:** 8 Pages.

This paper presents a new combination scheme for reducing the number of focal elements to manipulate, in order to reduce the complexity of the combination process in the multiclass framework. The basic idea consists in using p sources of information involved in the global scheme, providing p kinds of complementary information to feed each set of p one-class support vector machine classifiers independently of each other, designed to detect the outliers of the same target class; the outputs issued from this set of classifiers are then combined through the plausible and paradoxical reasoning theory for each target class. The main objective of this approach is to deliver calibrated outputs even when less complementary responses are encountered. An inspired version of Appriou's model for estimating the generalized basic belief assignments is presented in this paper. The proposed methodology decomposes an n-class problem into a series of n combinations, while providing n calibrated outputs in the multi-class framework. The effectiveness of the proposed combination scheme with the proportional conflict redistribution algorithm is validated on a digit recognition application and is compared with existing statistical, learning, and evidence theory based combination algorithms.

**Category:** General Science and Philosophy

[48] **viXra:1412.0064 [pdf]**
*submitted on 2014-12-04 02:53:47*

**Authors:** Faouzi Sebbak, Farid Benhammadi, Aicha Mokhtari, Abdelghani Chibani, Yacine Amirat

**Comments:** 8 Pages.

The evidence theory and its variants are mathematical formalisms used to represent uncertain as well as ambiguous data. The evidence combination rules proposed in these formalisms agree with Bayesian probability calculus in special cases, but not in general. To better reconcile belief function theory with Bayesian probability calculus, this work proposes a new way of combining beliefs to estimate combined evidence. The approach is based on Constraint Satisfaction Problem modeling; we then combine all the solutions of these constraint problems using Dempster's rule. This mathematical formalism is tested using information system security risk simulations. The results show that our model produces intuitive results and agrees with Bayesian probability calculus.

**Category:** General Science and Philosophy

[47] **viXra:1412.0063 [pdf]**
*submitted on 2014-12-04 03:18:45*

**Authors:** Deqiang Han, Jean Dezert, Shicheng Li, Chongzhao Han, Yi Yang

**Comments:** 8 Pages.

Image registration is a crucial and necessary step before image fusion. It aims to achieve the optimal match between two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. In the procedure of image registration, several types of uncertainty are encountered, e.g., in the selection of control points and in the distance or dissimilarity measures used for image matching. In this paper, we model these uncertainties in image registration using the theory of belief functions. By jointly using pixel-level and feature-level information, more effective image registrations are accomplished. Experimental results, comparisons and related analyses illustrate the effectiveness of our evidential reasoning based image registration approach.

**Category:** General Science and Philosophy

[46] **viXra:1412.0062 [pdf]**
*submitted on 2014-12-04 03:20:48*

**Authors:** Georgios Ioannou, Panos Louvieris, Natalie Clewley, Gavin Powell

**Comments:** 8 Pages.

eXfiltration Advanced Persistent Threats (XAPTs) increasingly account for incidents concerned with intelligence information gathering by malicious adversaries. This research exploits the multi-phase nature of an XAPT, mapping its phases onto a cyber attack kill chain. A novel Markov Multi-Phase Transferable Belief Model (MM-TBM) is proposed and demonstrated for fusing incoming evidence from a variety of sources, taking conflicting information into account. The MM-TBM algorithm predicts a cyber attacker's actions against a computer network and provides a visual representation of their footsteps.

**Category:** General Science and Philosophy

[45] **viXra:1412.0061 [pdf]**
*submitted on 2014-12-04 03:22:40*

**Authors:** Faouzi Sebbak, Farid Benhammadi, Abdelghani Chibani, Yacine Amirat, Aicha Mokhtari

**Comments:** 7 Pages.

The evidence theory and its proportional conflict redistribution variant rules are mathematical formalisms used to represent uncertain as well as ambiguous data. The evidence combination rules proposed in these formalisms do not satisfy the idempotence property. However, in a variety of applications it is desirable that the evidence combination rules satisfy this property. In response to this challenge, the present work proposes a new formalism for reasoning under uncertainty based on new consensus and conflict of evidence concepts. This mathematical formalism is evaluated using a real-world activity recognition problem in a smart home environment. The results show that one rule of our formalism respects the idempotence property and improves the accuracy of activity recognition.

**Category:** General Science and Philosophy

[44] **viXra:1412.0060 [pdf]**
*submitted on 2014-12-04 03:24:20*

**Authors:** Deqiang Han, Jean Dezert, Chongzhao Han, Yi Yang

**Comments:** 8 Pages.

Neighborhood-based classifiers are commonly used in pattern classification applications. However, in the implementation of neighborhood-based classifiers there always exist problems of uncertainty. For example, when one uses a k-NN classifier, the parameter k must be determined, and it can be big or small; thus an uncertainty problem caused by the value of k occurs in the classification. Furthermore, for the nearest neighbor (NN) classifier, one can use either the nearest neighbor or the nearest centroid of all the classes, so different classification results can be obtained. This is a type of uncertainty caused by using local or global information, respectively. In this paper, we use the theory of belief functions to model and manage these two types of uncertainty. Evidential reasoning based neighborhood classifiers are proposed. It is experimentally verified that our proposed approach can deal efficiently with the uncertainty in neighborhood classifiers.

**Category:** General Science and Philosophy

[43] **viXra:1412.0059 [pdf]**
*submitted on 2014-12-04 03:26:44*

**Authors:** Erik Blasch, Kathryn B. Laskey, Anne-Laure Jousselme, Valentina Dragos, Jean Dezert, Paulo C. G. Costa

**Comments:** 8 Pages.

For many operational information fusion systems, both reliability and credibility are evaluation criteria for collected information. The Uncertainty Representation and Reasoning Evaluation Framework (URREF) is a comprehensive ontology that represents measures of uncertainty. URREF supports standards such as the NATO Standardization Agreement (STANAG) 2511, which incorporates categories of reliability and credibility. Reliability has traditionally been assessed for physical machines to support failure analysis; the source reliability of a human can also be assessed. Credibility is associated with a machine process or human assessment of collected evidence for information content. Other related constructs in URREF are data relevance and completeness. In this paper, we seek to develop a mathematical relation for the weight of evidence using credibility and reliability as criteria for characterizing uncertainty in information fusion systems.

**Category:** General Science and Philosophy

[42] **viXra:1412.0058 [pdf]**
*submitted on 2014-12-04 03:28:54*

**Authors:** Jean Dezert, Albena Tchamova, Deqiang Han, Jean-Marc Tacnet

**Comments:** 8 Pages.

In this paper, we analyze the Bayes fusion rule in detail from a fusion standpoint, as well as the emblematic Dempster's rule of combination introduced by Shafer in his Mathematical Theory of Evidence based on belief functions. We propose a new interesting formulation of Bayes' rule and point out some of its properties. A deep analysis of the compatibility of Dempster's fusion rule with the Bayes fusion rule is carried out. We show that Dempster's rule is compatible with the Bayes fusion rule only in the very particular case where the basic belief assignments (bba's) to combine are Bayesian, and when the prior information is modeled either by a uniform probability measure or by a vacuous bba. We show clearly that Dempster's rule becomes incompatible with Bayes' rule in the more general case where the prior is truly informative (neither uniform nor vacuous). Consequently, this paper proves that Dempster's rule is not a generalization of the Bayes fusion rule.
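As a minimal illustration of the compatibility case described above (an illustrative sketch, not the paper's derivation): when both bba's are Bayesian, i.e. all focal elements are singletons, Dempster's rule reduces to a normalized pointwise product of the two distributions, which is Bayes' rule under a uniform prior.

```python
def normalized_product(p1, p2):
    """Fuse two discrete distributions over the same frame by pointwise
    product and normalization. For Bayesian bba's, Dempster's rule
    reduces to exactly this (Bayes' rule with a uniform prior)."""
    raw = {x: p1[x] * p2[x] for x in p1}
    z = sum(raw.values())  # total agreement mass (1 - conflict)
    return {x: v / z for x, v in raw.items()}

p1 = {'a': 0.5, 'b': 0.3, 'c': 0.2}
p2 = {'a': 0.2, 'b': 0.5, 'c': 0.3}
print(normalized_product(p1, p2))
```

With an informative (non-uniform) prior, Bayes' rule weights the product by the prior, while Dempster's rule does not, which is the source of the incompatibility the paper analyzes.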

**Category:** General Science and Philosophy

[41] **viXra:1412.0057 [pdf]**
*submitted on 2014-12-04 03:31:08*

**Authors:** Faouzi Sebbak, Farid Benhammadi, M’hamed Mataoui, Sofiane Bouznadx, Yacine Amirat

**Comments:** 8 Pages.

In order to obtain normal behavior in the combination of bodies of evidence, this paper proposes a new combination rule. This rule includes the cardinality of focal set elements in the conjunctive operation and in the conflict redistribution at all steps. Based on the focal set cardinalities, the conflict redistribution is assigned using weighting factors computed from the original and the conjunctive masses assigned to each focal element. This strategy forces the conflict redistribution in favor of the more committed hypothesis. Our method is evaluated and compared on numerical examples reported in the literature. As a result, this rule redistributes the conflict in favor of the more committed hypothesis and gives an intuitive interpretation for combining multiple information sources, with coherent results.

**Category:** General Science and Philosophy

[40] **viXra:1412.0055 [pdf]**
*submitted on 2014-12-04 03:34:29*

**Authors:** Jean Dezert, Albena Tchamova, Deqiang Han, Jean-Marc Tacnet

**Comments:** 8 Pages.

In this paper, we provide a deep examination of the main bases of Subjective Logic (SL) and reveal serious problems with them. A new and interesting alternative way of building a normal coarsened basic belief assignment from a refined one is also proposed. The defects in the SL fusion rule and the problems in the link between opinions and Beta probability density functions are also analyzed. Some numerical examples and related analyses are provided to justify our viewpoints.

**Category:** General Science and Philosophy

[39] **viXra:1412.0054 [pdf]**
*submitted on 2014-12-04 03:36:21*

**Authors:** Anne-Laure Jousselme, Anne-Claire Boury-Brisset, Benoît Debaque, Donald Prévost

**Comments:** 8 Pages.

Physical sensors (hard sources) and humans (soft sources) have complementary features in terms of perception, reasoning and memory. It is thus natural to combine their associated information for a wider coverage of the diversity of the available information, and thus to provide enhanced situation awareness for the decision maker. While the fusion domain mainly (although not only) considers the processing and combination of information from hard sources, conciliating these two broad areas is gaining more and more interest in the domain of hard and soft fusion. In order to better understand the diversity and specificity of sources of information, we propose a functional model of a source of information, and a structured list of dimensions along which a source of information can be qualified. We illustrate some of these properties on real data gathered from an experiment on light detection in a fog chamber involving both automatic and human detectors.

**Category:** General Science and Philosophy

[38] **viXra:1412.0052 [pdf]**
*submitted on 2014-12-04 03:39:39*

**Authors:** Deqiang Han, Jean Dezert, Yi Yang

**Comments:** 7 Pages.

The theory of belief functions is one of the most important tools in information fusion and uncertainty reasoning. Dempster's rule of combination and its related modified versions are used to combine independent pieces of evidence. However, until now there have been no solid evaluation criteria or methods for these combination rules. In this paper, we view evidence combination as a procedure of estimation, and we propose a set of criteria to evaluate the sensitivity and divergence of different combination rules, taking as references the mean square error (MSE), the bias and the variance. Numerical examples and simulations are used to illustrate our proposed evaluation criteria. Related analyses are also provided.

**Category:** General Science and Philosophy

[37] **viXra:1412.0051 [pdf]**
*submitted on 2014-12-04 03:42:30*

**Authors:** Zhun-ga Liu, Quan Pan, Jean Dezert, Gregoire Mercier, Yong Liu

**Comments:** 8 Pages.

Information fusion techniques like evidence theory have been widely applied in data classification to improve classifier performance. A new fuzzy-belief K-nearest neighbor (FBK-NN) classifier is proposed based on evidential reasoning for dealing with uncertain data. In FBK-NN, each labeled sample is assigned a fuzzy membership to each class according to its neighborhood. For each input object to classify, K basic belief assignments (BBAs) are determined from the distances between the object and its K nearest neighbors, taking the neighbors' memberships into account. The K BBAs are fused by a new method, and the fusion results are used to decide the class of the query object. The FBK-NN method works with credal classification and discriminates among specific classes, meta-classes and the ignorant class. Meta-classes are defined by the disjunction of several specific classes and allow the partial imprecision of object classification to be well modeled; introducing meta-classes in the classification procedure reduces misclassification errors. The ignorant class is employed for outlier detection. The effectiveness of FBK-NN is illustrated through several experiments, with a comparative analysis with respect to other classical methods.

**Category:** General Science and Philosophy

[36] **viXra:1412.0050 [pdf]**
*submitted on 2014-12-04 03:44:18*

**Authors:** Joachim Biermann, Jesus Garcia, Ksawery Krenc

**Comments:** 8 Pages.

Driven by the underlying need for a yet-to-be-developed framework for fusing heterogeneous data and information at different semantic levels, coming from both sensory and human sources, we present some results of the research being conducted within the NATO Research Task Group IST-106 / RTG-051 on "Information Filtering and Multi Source Information Fusion". As part of this ongoing effort, we discuss here a first outcome of our investigation of multi-level fusion. It deals with removing the first hurdle between data/information sources and processes at different levels: representation. Our contention is that a common representation and description framework is the premise for enabling processing that spans different semantic levels. To this end, we discuss the use of the Battle Management Language (BML) as a "lingua franca" to encode sensory data and a priori and contextual knowledge, both as hard and soft data.

**Category:** General Science and Philosophy

[35] **viXra:1412.0047 [pdf]**
*submitted on 2014-12-03 14:37:40*

**Authors:** Valeriy V. Dvoeglazov

**Comments:** 13 Pages. Prepared for the special issue of the "Adv. Appl. Clifford Algebras" - Proceedings of the ICCA10 Conference - Tartu, Estonia, Aug. 2014. Also presented by my students at the LVII Congreso Nacional de Fisica, Mazatlan, Sinaloa, Mexico, Oct. 2014

Recently, several discussions on the possible observability of 4-vector fields have been published in the literature. Furthermore, several authors have recently claimed the existence of a helicity-0 fundamental field. We re-examine the theory of antisymmetric tensor fields and 4-vector potentials, and study the massless limits. A theoretical motivation for this venture comes from the old papers of Ogievetskii and Polubarinov, Hayashi, and Kalb and Ramond, who proposed the concept of the notoph, whose helicity properties are complementary to those of the photon. We analyze the quantum field theory taking into account the mass dimensions of the notoph and the photon. We also derive equations for the symmetric tensor of the second rank on the basis of the Bargmann-Wigner formalism; they are consistent with general relativity. Particular attention is paid to the correct definitions of the energy-momentum tensor and other Noether currents. We estimate possible interactions: fermion-notoph, graviton-notoph, photon-notoph. PACS numbers: 03.65.Pm, 04.50.-h, 11.30.Cp

**Category:** Mathematical Physics

[34] **viXra:1412.0046 [pdf]**
*submitted on 2014-12-03 04:34:06*

**Authors:** Marius Coman

**Comments:** 2 Pages.

In this paper I make two conjectures about two types of possibly infinite sequences of primes: one obtained starting from any given prime that is the lesser term of a pair of twin primes, for a possibly infinite set of positive integers not of the form 3*k – 1; the other obtained starting from any given positive integer not of the form 3*k – 1, for a possibly infinite set of lesser terms of pairs of twin primes.

**Category:** Number Theory

[33] **viXra:1412.0045 [pdf]**
*submitted on 2014-12-02 20:53:48*

**Authors:** Akito Takahashi

**Comments:** 28 Pages. Preprint of Proceedings paper to JCF15

For explaining the experimentally claimed anomalous excess heat phenomena in metal-D(H) systems, the condensed cluster fusion (CCF) theory has been proposed and elaborated since 1989. This paper reviews the latest status of CCF theory development. The paper explains the following key aspects: classical mechanics and free particle fusion, fusion rate theory for trapped D(H) particles, strong interaction rate, condensation dynamics of D(H)-clusters, final state interaction and nuclear products, and sites for Platonic D(H) cluster formation on/in condensed matter.

**Category:** Condensed Matter

[32] **viXra:1412.0044 [pdf]**
*submitted on 2014-12-02 22:18:18*

**Authors:** Zhang Tianshu

**Comments:** 26 Pages.

First, we classify A, B and C according to their respective parity, and rule out two kinds of combinations from A^X + B^Y = C^Z. Then we confirm, by concrete examples, that A^X + B^Y = C^Z can hold when A, B and C have a common prime factor. After that, we prove A^X + B^Y ≠ C^Z in the case where A, B and C have no common prime factor, by mathematical induction with the aid of the symmetric law of odd numbers after the decomposition of the inequality. Finally, we reach the conclusion that Beal's conjecture holds, after comparing A^X + B^Y = C^Z and A^X + B^Y ≠ C^Z under the given requirements.
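As a quick sanity check of the conjecture's premise (an illustration, not part of the paper's proof), one can enumerate small solutions of A^X + B^Y = C^Z with all exponents at least 3 and verify that every one of them shares a common prime factor:

```python
from math import gcd

def small_beal_solutions(limit=20, max_exp=5):
    """All (A, X, B, Y, C, Z) with A^X + B^Y = C^Z, exponents in [3, max_exp], bases < limit."""
    powers = {}  # value -> list of (base, exp) producing it
    for c in range(2, limit):
        for z in range(3, max_exp + 1):
            powers.setdefault(c ** z, []).append((c, z))
    sols = []
    for a in range(2, limit):
        for x in range(3, max_exp + 1):
            for b in range(a, limit):          # b >= a avoids mirrored duplicates
                for y in range(3, max_exp + 1):
                    for c, z in powers.get(a ** x + b ** y, []):
                        sols.append((a, x, b, y, c, z))
    return sols

solutions = small_beal_solutions()
assert solutions, "small solutions exist, e.g. 3^3 + 6^3 = 3^5"
# every solution found in this range shares a common prime factor
assert all(gcd(gcd(a, b), c) > 1 for a, x, b, y, c, z in solutions)
print(len(solutions), "solutions, all with a common prime factor")
```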

**Category:** Number Theory

[31] **viXra:1412.0043 [pdf]**
*submitted on 2014-12-02 23:37:39*

**Authors:** John Frederick Sweeney

**Comments:** 20 Pages. charts and diagrams

The work of Robert de Marrais intersects with Vedic Physics at the point of Lissajous Figures (Bowditch Twirls). That is to say that modern mathematical physics meets ancient Vedic Science at a point of nuclear physics anticipated by but not yet articulated in western science – the Purushka. In Vedic Physics, Lissajous Figures of combinatorial waves meet at the crossroads of three types of matter. This paper explains the differences between the Electromagnetic and Weak forces from the point of view of Vedic Nuclear Physics, with its three states of matter: Satvic, Rajic and Thaamasic.

**Category:** Nuclear and Atomic Physics

[30] **viXra:1412.0042 [pdf]**
*submitted on 2014-12-03 02:29:23*

**Authors:** Marius Coman

**Comments:** 3 Pages.

In this paper I make eight conjectures about a type of numbers which I defined in a previous paper, “The notion of chameleonic numbers, a set of composites that «hide» in their inner structure an easy way to obtain primes”, in the following way: the non-null positive composite squarefree integer C not divisible by 2, 3 or 5 is such a number if the absolute value of the number P – d + 1 is always a prime or a power of a prime, where d is one of the prime factors of C and P is the product of all prime factors of C but d.
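The defining property is easy to check mechanically. The sketch below is my own illustration of the definition, not code from the paper; it lists the smallest numbers passing the test:

```python
def prime_factors(n):
    """Distinct prime factors of n (trial division)."""
    fs, p = [], 2
    while p * p <= n:
        if n % p == 0:
            fs.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        fs.append(n)
    return fs

def is_prime_power(n):
    """True if n = p^k for a prime p and k >= 1."""
    if n < 2:
        return False
    p = prime_factors(n)[0]
    while n % p == 0:
        n //= p
    return n == 1

def is_chameleonic(c):
    # must not be divisible by 2, 3 or 5
    if c % 2 == 0 or c % 3 == 0 or c % 5 == 0:
        return False
    fs = prime_factors(c)
    prod = 1
    for p in fs:
        prod *= p
    if len(fs) < 2 or prod != c:      # must be a squarefree composite
        return False
    # |P - d + 1| must be a prime or a prime power for every prime factor d,
    # where P = c // d is the product of the other prime factors
    return all(is_prime_power(abs(c // d - d + 1)) for d in fs)

print([c for c in range(7, 250) if is_chameleonic(c)])
```

For example, 77 = 7·11 passes: 11 − 7 + 1 = 5 and |7 − 11 + 1| = 3 are both prime; 143 = 11·13 fails because |11 − 13 + 1| = 1.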

**Category:** Number Theory

[29] **viXra:1412.0041 [pdf]**
*replaced on 2014-12-03 08:13:58*

**Authors:** Predrag Terzic

**Comments:** 2 Pages.

A conjectured polynomial-time primality test for a specific class of numbers of the form k*2^n - 1 is introduced.

**Category:** Number Theory

[28] **viXra:1412.0040 [pdf]**
*submitted on 2014-12-02 12:28:34*

**Authors:** Markos Georgallides

**Comments:** 18 Pages. The Energy Space nature of the particles

Analysing the principles of mechanics and physics clearly shows their common origin, which is the unique essence of Space (m) and Energy (force: push, pull, rotate), where the arbitrary number t = time is defined as the conversion factor between time units (seconds) and length units (meters). By contrast, the inertial frame [R] is the frame of reference which describes Space-Energy homogeneously and isotropically.
The content of this article is:
The geometrical reasoning of the Planck length being a quaternion.
The binomial nature of monads in a monad, because of their intrinsic property in the inner system, which is a stationary wave on a norm of wavelength λ; thus Poinsot's ellipsoid becomes a cycloidal ellipsoid, or cycloidal wave motion in monads, and also the two parts of the spin of particles, i.e. the plane electromagnetic waves forming the spherical standing waves.
The STPL line as the unique passage of particles from the absolute frame [O] to all other inertial frames [R] (parallel between them), which is thus the navel cord (string) of galaxies.
The geometrical expression of spaces as the geometries related to the Euclidean one by the Lorentz factor γ, and thus the common essence of mechanics and geometry.
The isochronous motion of spaces, shown to be the geometrical expression of the Lorentz factor γ, which is such that velocities in an affine frame (system) can be the projection of one constant velocity of the absolute frame [O] onto the relative frames [R], where simultaneity exists.
By considering breakages as masses, belonging to an absolute system [O] and under the constant velocity c as action, the known formulas of GR for masses and energy are produced geometrically without any set restrictions, from the relation sec φ = γ only. Relativity is then placed as a part of the whole of Euclidean geometry, because all rotating reference systems [R] are related to the absolute system [O], which is that of the gravity field.
The origin of particles from the collision of velocity-vector breakages in a common circle.
The origin of color forces from the retardation and the birefringence of spaces.
The structure of the Space-Energy universe and the boundaries in it of general relativity.
Because breakages travel in the [R] system (frame) with the constant velocity c, and so are in rectilinear motion between them, they have zero acceleration, and their measurements can be converted from one frame to another by simple Galilean transformations, because physical laws take the same form in all inertial systems.

**Category:** Mathematical Physics

[27] **viXra:1412.0039 [pdf]**
*submitted on 2014-12-02 12:33:58*

**Authors:** Marius Coman

**Comments:** 4 Pages.

In this paper I make four conjectures about a certain type of semiprimes which I defined in a previous paper, “Two exciting classes of odd composites defined by a relation between their prime factors”, in the following way: Coman semiprimes of the first kind are the semiprimes p1*q1 with the property that q1 – p1 + 1 = p2*q2, where the semiprime p2*q2 has the property that q2 – p2 + 1 = p3*q3, also a semiprime, and the operation is iterated until eventually qk – pk + 1 is a prime. I also defined Coman semiprimes of the second kind as the semiprimes p1*q1 with the property that q1 + p1 – 1 = p2*q2, where the semiprime p2*q2 has the property that q2 + p2 – 1 = p3*q3, also a semiprime, and the operation is iterated until eventually qk + pk – 1 is a prime.
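The iteration is easy to mechanize. Below is a minimal sketch of my own (not the paper's code); it treats a chain that reaches a prime at any step as successful, and a chain that leaves the semiprimes as failed:

```python
def prime_factors_mult(n):
    """Prime factors of n with multiplicity, e.g. 12 -> [2, 2, 3]."""
    fs, p = [], 2
    while p * p <= n:
        while n % p == 0:
            fs.append(p)
            n //= p
        p += 1
    if n > 1:
        fs.append(n)
    return fs

def is_coman_semiprime(n, first_kind=True):
    """Follow the chain q - p + 1 (first kind) or q + p - 1 (second kind)."""
    while True:
        fs = prime_factors_mult(n)
        if len(fs) != 2:
            return False              # chain left the semiprimes before hitting a prime
        p, q = fs
        n = (q - p + 1) if first_kind else (q + p - 1)
        if len(prime_factors_mult(n)) == 1:
            return True               # chain ended in a prime

print(is_coman_semiprime(91))   # True:  91 = 7*13 and 13 - 7 + 1 = 7 is prime
print(is_coman_semiprime(51))   # True:  51 = 3*17 -> 15 = 3*5 -> 5 - 3 + 1 = 3 is prime
print(is_coman_semiprime(33))   # False: 33 = 3*11 -> 9 = 3*3 -> 1, chain dies
```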

**Category:** Number Theory

[26] **viXra:1412.0038 [pdf]**
*replaced on 2014-12-05 12:05:55*

**Authors:** Akhmat Gemuev

**Comments:** 2 Pages.

My research includes the study of time as a physical phenomenon, the management of time, and the possibility of physical impact on living and nonliving objects remotely and through time. My research also provides an understanding of the general structure of the world in philosophical and physical senses, and the identification and explanation of hidden processes occurring in the world.

**Category:** Quantum Gravity and String Theory

[25] **viXra:1412.0037 [pdf]**
*submitted on 2014-12-02 13:34:33*

**Authors:** Omer Zvi Dickstein

**Comments:** 4 Pages.

It has been stated time and again that, to date, superluminal communication is beyond our reach. But is superluminal communication an impossibility or an improbability? Examining current no-communication theorems (NCT), one must conclude that it is a mere improbability, and that such communication may be possible in the future.

**Category:** Quantum Physics

[24] **viXra:1412.0036 [pdf]**
*submitted on 2014-12-02 10:12:25*

**Authors:** Marius Coman

**Comments:** 2 Pages.

There exist a few distinct generalizations of Fermat numbers, for instance numbers of the form F(k) = a^(2^k) + 1, where a > 2, or F(k) = a^(2^k) + b^(2^k), or the Smarandache generalized Fermat numbers, which are the numbers of the form F(k) = a^(b^k) + c, where a, b are integers greater than or equal to 2 and c is an integer such that (a, c) = 1. In this paper I observe two formulas based on a new type of generalized Fermat numbers, which are the numbers of the form F(k) = (a^(b^k) ± c)/d, where a, b are integers greater than or equal to 2 and c, d are positive non-null integers such that F(k) is an integer.
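For illustration (my sketch, not from the paper), the new form reduces to the classic Fermat numbers at a = b = 2, c = d = 1, and admits nontrivial divisors d:

```python
def gen_fermat(a, b, c, d, k, plus=True):
    """F(k) = (a**(b**k) ± c) / d, or None when the quotient is not an integer."""
    n = a ** (b ** k) + (c if plus else -c)
    return n // d if n % d == 0 else None

# classic Fermat numbers: a = b = 2, c = d = 1
print([gen_fermat(2, 2, 1, 1, k) for k in range(5)])               # [3, 5, 17, 257, 65537]
# an example with a nontrivial divisor: (3^(2^k) - 1)/2
print([gen_fermat(3, 2, 1, 2, k, plus=False) for k in range(4)])   # [1, 4, 40, 3280]
```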

**Category:** Number Theory

[23] **viXra:1412.0035 [pdf]**
*submitted on 2014-12-02 11:16:57*

**Authors:** Mark Krinker

**Comments:** 5 Pages.

The author proposes a method of detecting subterranean water that employs the electric field generated by the contracting leg muscles of a walking operator. The presence of a water lens under the soil deflects the electric lines of force of the field generated by the walking operator, which can be detected by field-measuring instruments. This operator-produced field mechanism can also be a component of the physical basis of dowsing, because its operators inevitably walk during the search process.

**Category:** Classical Physics

[22] **viXra:1412.0034 [pdf]**
*submitted on 2014-12-02 04:52:40*

**Authors:** Sorin Nabadan

**Comments:** 100 Pages.

Papers by various scientists on mathematics and computer science, presented at The International Symposium “Research and Education in Innovation Era”, 5th Edition, Arad, Romania, November 5–7, 2014 (ISREIE 2014).

**Category:** General Mathematics

[21] **viXra:1412.0033 [pdf]**
*submitted on 2014-12-02 00:36:39*

**Authors:** M. Khodabandeh, A. Mohammad-Shahri

**Comments:** 11 Pages.

The purpose of this paper is uncertainty evaluation in a target differentiation problem in which ultrasonic data fusion is applied using Dezert-Smarandache theory (DSmT). Besides presenting a scheme for target differentiation using ultrasonic sensors, the paper evaluates the DSmT-based fused results from an uncertainty point of view. The study obtains patterns of data for targets using a set of two ultrasonic sensors and applies a neural network as a target classifier to categorize the data of each sensor. The results are then fused by DSmT to make the final decision. The Generalized Aggregated Uncertainty measure named GAU2, an extension of the Aggregated Uncertainty (AU), is applied to evaluate the DSmT-based fused results. GAU2, unlike AU, is applicable to measuring uncertainty in DSmT frameworks and can deal with continuous problems. GAU2 is therefore an efficient measure that helps the decision maker evaluate more accurate results, and smoother final decisions are made by DSmT in comparison to DST.

**Category:** General Science and Philosophy

[20] **viXra:1412.0029 [pdf]**
*submitted on 2014-12-02 00:41:46*

**Authors:** Delfim F. M. Torres, Viorica Teca

**Comments:** 7 Pages.

We use the Maple system to check the investigations of S. S. Gupta regarding the Smarandache consecutive and the reversed Smarandache sequences of triangular numbers [Smarandache Notions Journal, Vol. 14, 2004, pp. 366–368]. Furthermore, we extend previous investigations to the mirror and symmetric Smarandache sequences of triangular numbers.
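The sequences checked here are easy to reproduce outside Maple. A quick sketch of my own of the consecutive and reversed constructions, concatenating the triangular numbers t_n = n(n+1)/2 (the mirror and symmetric variants are analogous):

```python
def triangular(n):
    """n-th triangular number: 1, 3, 6, 10, 15, ..."""
    return n * (n + 1) // 2

def smarandache_consecutive(n):
    # concatenate t_1, t_2, ..., t_n: 1, 13, 136, 13610, ...
    return int("".join(str(triangular(i)) for i in range(1, n + 1)))

def smarandache_reversed(n):
    # concatenate t_n, t_{n-1}, ..., t_1: 1, 31, 631, 10631, ...
    return int("".join(str(triangular(i)) for i in range(n, 0, -1)))

print(smarandache_consecutive(4))   # 13610
print(smarandache_reversed(4))      # 10631
```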

**Category:** General Mathematics

[19] **viXra:1412.0028 [pdf]**
*submitted on 2014-12-02 00:46:41*

**Authors:** F. Aliniaeifard, M. Behboodi, E. Mehdi-Nezhad, Amir M. Rahimi

**Comments:** 11 Pages.

Let R be a commutative ring with 1 ≠ 0 and A(R) be the set of ideals with nonzero annihilators.

**Category:** General Mathematics

[18] **viXra:1412.0025 [pdf]**
*submitted on 2014-12-02 00:52:11*

**Authors:** Mladen Vassilev-Missana, Krassimir Atanassov

**Comments:** 20 Pages.

The eighth problem is the following.

**Category:** General Mathematics

[17] **viXra:1412.0024 [pdf]**
*submitted on 2014-12-02 00:53:23*

**Authors:** Jonathan Sondow

**Comments:** 7 Pages.

The proof leads in section 3 to a new measure of irrationality for e, that is, a lower bound on the distance from e to a given rational number, as a function of its denominator. A connection with the greatest prime factor of a number is discussed in section 4. In section 5 we compare the new irrationality measure for e with a known one, and state a number-theoretic conjecture that implies the known measure is almost always stronger. The new measure is applied in section 6 to prove a special case of a result from [24], leading to another conjecture. Finally, in section 7 we recall a theorem of G. Cantor that can be proved by a similar construction.

**Category:** General Mathematics

[16] **viXra:1412.0023 [pdf]**
*submitted on 2014-12-02 00:55:18*

**Authors:** Catalin Barbu

**Comments:** 8 Pages.

In this study, we present (i) a proof of Menelaus's theorem for quadrilaterals in hyperbolic geometry, (ii) a proof of the transversal theorem for triangles, and (iii) Menelaus's theorem for n-gons.

**Category:** General Mathematics

[15] **viXra:1412.0022 [pdf]**
*submitted on 2014-12-02 00:58:02*

**Authors:** Prem Kumar Singh, Ch. Aswani Kumar

**Comments:** 20 Pages.

Fuzzy Formal Concept Analysis (FCA) is a mathematical tool for the effective representation of imprecise and vague knowledge. However, with a large number of formal concepts derived from a fuzzy context, the task of knowledge representation becomes complex. Hence, knowledge reduction is an important issue in FCA with a fuzzy setting. The purpose of the current study is to address this issue by proposing a method that computes the corresponding crisp order for the fuzzy relation in a given fuzzy formal context. The formal context obtained using the proposed method provides fewer concepts compared to the original fuzzy context. The resulting lattice structure is a reduced form of its corresponding fuzzy concept lattice and preserves the specialized and generalized concepts, as well as stability. The study also gives a step-by-step demonstration of the proposed method and its application.

**Category:** General Mathematics

[14] **viXra:1412.0020 [pdf]**
*submitted on 2014-12-02 01:00:38*

**Authors:** A. Ahadpanah, L. Torkzadeh, A. Borumand Saeid

**Comments:** 4 Pages.

In this paper we define the Smarandache residuated lattice, Smarandache filter, Smarandache implicative filter and Smarandache positive implicative filter, and obtain some related results. We then determine relationships between Smarandache filters in Smarandache residuated lattices.

**Category:** General Mathematics

[13] **viXra:1412.0019 [pdf]**
*submitted on 2014-12-02 01:02:17*

**Authors:** Kadhum Mohammed

**Comments:** 10 Pages.

In this paper we discuss Smarandache semigroups, Smarandache normal subgroups and Smarandache Lagrange semigroups, and prove some results about them.

**Category:** General Mathematics

[12] **viXra:1412.0018 [pdf]**
*submitted on 2014-12-02 01:03:54*

**Authors:** P. A. Hummadi, A. K. Muhammad

**Comments:** 11 Pages.

In this paper, we study tripotent elements and Smarandache triple tripotents (S-T. tripotents) in Zn.
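Plain tripotents, the elements x with x³ ≡ x (mod n), can be enumerated directly; the Smarandache variants add structure the abstract does not spell out, so this sketch (mine, not the paper's) illustrates only the base notion:

```python
def tripotents(n):
    """Elements x of Z_n with x**3 congruent to x modulo n."""
    return [x for x in range(n) if pow(x, 3, n) == x]

print(tripotents(10))   # [0, 1, 4, 5, 6, 9]
print(tripotents(7))    # [0, 1, 6]
```

For a prime p > 2 the tripotents are exactly the roots of x(x − 1)(x + 1), i.e. 0, 1 and p − 1; composite moduli can have more, as Z_10 shows.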

**Category:** General Mathematics

[11] **viXra:1412.0015 [pdf]**
*submitted on 2014-12-02 02:32:37*

**Authors:** Rıdvan şahin, Muhammed Yiğider

**Comments:** 10 Pages.

The process of multiple criteria decision making (MCDM) consists of determining the best choice among all probable alternatives. The problem of supplier selection, in which the decision maker usually has vague and imprecise knowledge, is a typical example of a multi-criteria group decision-making problem. Conventional crisp techniques are not very effective for solving MCDM problems because of the imprecise or fuzzy nature of the linguistic assessments. Finding exact values for MCDM problems is difficult or impossible in many real-world cases, so it is more reasonable to represent the values of alternatives, according to the criteria, as single valued neutrosophic sets (SVNS). This paper deals with the technique for order preference by similarity to ideal solution (TOPSIS) approach and extends the TOPSIS method to MCDM problems with single valued neutrosophic information. The value of each alternative and the weight of each criterion are characterized by single valued neutrosophic numbers. Here, the importance of criteria and alternatives is identified by aggregating individual opinions of the decision makers (DMs) via a single valued neutrosophic weighted averaging operator. The proposed method is easy to use, precise, and practical for solving MCDM problems with single valued neutrosophic data. Finally, to show the applicability of the developed method, a numerical experiment on supplier choice is given as an application of the single valued neutrosophic TOPSIS method at the end of this paper.
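A minimal numerical sketch of the idea, not the paper's exact formulas: each supplier is scored per criterion with a (T, I, F) triple, distances to the positive and negative ideal solutions use a weighted normalized Euclidean metric (one common choice in the SVNS literature), and suppliers are ranked by relative closeness. All data below are hypothetical.

```python
import math

# each row is one supplier; each entry is a (T, I, F) triple for one criterion
A = [
    [(0.7, 0.2, 0.1), (0.6, 0.3, 0.2)],   # supplier 1
    [(0.5, 0.4, 0.3), (0.8, 0.1, 0.1)],   # supplier 2
    [(0.4, 0.5, 0.4), (0.5, 0.4, 0.5)],   # supplier 3
]
w = [0.6, 0.4]                             # criterion weights
m = len(A[0])

# positive/negative ideal solutions per criterion (benefit criteria):
# best = highest truth, lowest indeterminacy and falsity; worst = the opposite
pis = [(max(a[j][0] for a in A), min(a[j][1] for a in A), min(a[j][2] for a in A))
       for j in range(m)]
nis = [(min(a[j][0] for a in A), max(a[j][1] for a in A), max(a[j][2] for a in A))
       for j in range(m)]

def dist(row, ideal):
    """Weighted normalized Euclidean distance between two rows of SVNNs."""
    s = sum(wj * ((t1 - t2) ** 2 + (i1 - i2) ** 2 + (f1 - f2) ** 2)
            for wj, (t1, i1, f1), (t2, i2, f2) in zip(w, row, ideal))
    return math.sqrt(s / (3 * m))

# relative closeness: larger means nearer the positive ideal
closeness = [dist(a, nis) / (dist(a, pis) + dist(a, nis)) for a in A]
best = max(range(len(A)), key=lambda i: closeness[i])
print(closeness, "best supplier index:", best)
```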

**Category:** General Mathematics

[10] **viXra:1412.0013 [pdf]**
*submitted on 2014-12-02 02:34:57*

**Authors:** Faruk Karaaslan

**Comments:** 17 Pages.

In this paper, we define the concept of a possibility neutrosophic soft set and investigate its related properties. We then construct a decision making method, called the possibility neutrosophic soft decision making method (PNS decision making method), which can be applied to decision making problems involving uncertainty. We finally give a numerical example to show that the method can be successfully applied to such problems.

**Category:** General Mathematics

[9] **viXra:1412.0012 [pdf]**
*submitted on 2014-12-02 02:37:05*

**Authors:** Irfan Deli, Yusuf Subas

**Comments:** 13 Pages.

In this paper, we first introduce single valued neutrosophic numbers, a generalization of fuzzy numbers and intuitionistic fuzzy numbers. A single valued neutrosophic number is simply an ordinary number whose precise value is somewhat uncertain from a philosophical point of view. We then discuss two special forms of single valued neutrosophic numbers: single valued trapezoidal neutrosophic numbers and single valued triangular neutrosophic numbers, and give some operations on both. Finally, we give a single valued trapezoidal neutrosophic weighted aggregation operator (SVTNWAO) and apply it to a multicriteria decision making problem.

**Category:** General Mathematics

[8] **viXra:1412.0011 [pdf]**
*submitted on 2014-12-02 02:38:51*

**Authors:** Bibhas C. Giri

**Comments:** 32 Pages.

A single-valued neutrosophic set is a special case of a neutrosophic set. It has been proposed as a generalization of crisp sets, fuzzy sets, and intuitionistic fuzzy sets in order to deal with incomplete information. In this paper, a new approach for multi-attribute group decision making problems is proposed.

**Category:** General Mathematics

[7] **viXra:1412.0010 [pdf]**
*submitted on 2014-12-02 02:40:04*

**Authors:** Adrian Nicolescu, Mirela Teodorescu

**Comments:** 12 Pages.

Paradoxism is an avant-garde movement in literature, art, philosophy and science, based on the excessive use of antitheses, antinomies, contradictions, parables, odds and paradoxes in creations. It was set up and has been led by the writer Florentin Smarandache since the 1980s, who said: "The goal is to enlargement of the artistic sphere through non-artistic elements. But especially the counter-time, counter-sense creation. Also, to experiment." Paradoxism = paradox + ism, meaning the theory and school of using paradoxes in literary, artistic, philosophical and scientific creations.

**Category:** General Mathematics

[6] **viXra:1412.0008 [pdf]**
*replaced on 2016-01-12 14:38:33*

**Authors:** Sylwester Kornowski

**Comments:** 7 Pages.

Here, within the Scale-Symmetric Theory (SST), the correct interpretation of the Kaluza-Klein theory (KK theory) is presented. In SST the charges are spinning tori, whereas in KK theory they are masses moving along a circle-like fifth dimension. The most incredible fact is that when we abandon the international system of units, the product of the mass of the torus/charge of the proton in SST and the speed of light (i.e. the fifth momentum) is very close to the value of the electric charge of the proton or electron, i.e. the interpretation is possible that electric charge is the motion of mass in the fifth dimension. Moreover, the fifth momenta for quantum entanglement and for the electric charge of the proton have the same values, but the physical meanings and densities of the corresponding radion fields are different. A spinning torus collapses to a spinning circle. In reality, the fifth dimension is an additional degree of freedom which follows from the real structure of charges. The fifth-dimension simplification causes us to lose information about the internal structure of charges, i.e. the KK theory is an effective theory of the SST. Radions are the components of the fifth dimensions. The SST shows that gravitational radions are superluminal whereas electromagnetic radions are luminal, i.e. we cannot unify the two different radion fields within the same methods, and this concerns the generalizations of the KK theory as well (for example, the Yang-Mills theories). The SST shows that the cylinder condition in the KK theory follows from the fact that there are spinning tori/charges and spinning loops. The luminal electromagnetic radion field leads to photons, but the superluminal gravitational radion field does not lead to gravitons and gravitational waves. Emission of gravitational energy, i.e. a decrease in the inertial-mass density of the superluminal non-gravitating Higgs field, is due to an increase in the gravitational-mass density of a system or due to emission of gravitational mass which carries a non-gravitating gravitational field.

**Category:** Quantum Gravity and String Theory

[5] **viXra:1412.0007 [pdf]**
*replaced on 2015-11-08 17:19:45*

**Authors:** Pierre Réal Gosselin

**Comments:** 27 pages, Français & English, web site: http://phrenocarpe.org/cgi-bin/zhp/fra/0_0_couverture.pl,

We lay down the fundamental hypothesis that any electromagnetic radiation transforms progressively, evolving towards and finally reaching, after an appropriate distance, the wavelength of the cosmic microwave background radiation, 1.873 mm. In this way we explain the cosmic redshift Z of faraway galaxies using only Maxwell's equations and the energy quantum principle of the photon. Hubble's law emerges naturally as the consequence of this transformation. According to this hypothesis we compute the constant H0 (84.3 km/s/Mpc) using data from the Pioneer satellite, thereby deciphering the enigma of its anomalous behaviour. The hypothesis is confirmed by solving some cases that remain enigmatic for standard cosmology. We review the distance modulus formula and comment on the limits of cosmological observations.

**Category:** Relativity and Cosmology

[4] **viXra:1412.0006 [pdf]**
*submitted on 2014-12-01 09:42:44*

**Authors:** Vasile Patrascu

**Comments:** 10 Pages.

The paper presents some steps for the multi-valued representation of neutrosophic information. These steps are provided in the framework of multi-valued logics using the following logical values: true, false, neutral, unknown and saturated. This approach also provides some calculus formulae for the following neutrosophic features: truth, falsity, neutrality, ignorance, under-definedness, over-definedness, saturation and entropy. In addition, net truth, definedness and neutrosophic score are defined.

**Category:** Set Theory and Logic

[3] **viXra:1412.0003 [pdf]**
*submitted on 2014-12-01 04:45:04*

**Authors:** Marisol García-Peña, Sergio Arciniegas-Alarcón, Décio Barbin

**Comments:** 10 Pages.

A common problem in climate data is missing information. Recently, four methods based on the singular value decomposition of a matrix (SVD) have been developed. The aim of this paper is to evaluate these new developments through a comparison by means of a simulation study based on two complete matrices of real data: one corresponds to the historical precipitation of Piracicaba/SP, Brazil, and the other to multivariate meteorological characteristics of the same city from 1997 to 2012. In the study, values were deleted randomly at different percentages and subsequently imputed, and the methodologies were compared by three criteria: the normalized root mean squared error, the Procrustes similarity statistic, and the Spearman correlation coefficient. It was concluded that the SVD should be used only when multivariate matrices are analyzed; when matrices of precipitation are used, the monthly mean outperforms the methods based on the SVD.
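The general idea behind SVD-based imputation can be sketched without the paper's four specific variants: fill missing cells with column means, fit a low-rank (here rank-1) approximation via the SVD, refill the missing cells from the fit, and iterate. A pure-Python toy of my own construction, not the authors' code:

```python
import math

def rank1_approx(M, iters=200):
    """Best rank-1 approximation of M via power iteration on the dominant singular triple."""
    rows, cols = len(M), len(M[0])
    v = [1.0] * cols
    s, u = 1.0, [0.0] * rows
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        nu = math.sqrt(sum(x * x for x in u)) or 1.0
        u = [x / nu for x in u]
        v = [sum(M[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        s = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / s for x in v]
    return [[s * u[i] * v[j] for j in range(cols)] for i in range(rows)]

def svd_impute(M, missing, rounds=50):
    """Iterative rank-1 SVD imputation; `missing` is a set of (row, col) positions."""
    rows, cols = len(M), len(M[0])
    # start from column means over the observed cells
    means = [sum(M[i][j] for i in range(rows) if (i, j) not in missing) /
             max(1, sum((i, j) not in missing for i in range(rows)))
             for j in range(cols)]
    X = [[means[j] if (i, j) in missing else M[i][j] for j in range(cols)]
         for i in range(rows)]
    for _ in range(rounds):
        A = rank1_approx(X)
        for (i, j) in missing:
            X[i][j] = A[i][j]       # refill only the missing cells
    return X

# rank-1 demo matrix: outer product of (1, 2, 3) and (4, 5, 6); hide (0, 0), true value 4
M = [[None, 5, 6], [8, 10, 12], [12, 15, 18]]
X = svd_impute(M, {(0, 0)})
print(round(X[0][0], 2))
```

On this noiseless rank-1 example the iteration recovers the hidden entry; the methods compared in the paper differ in rank selection and in how the refitting is regularized.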

**Category:** Statistics

[2] **viXra:1412.0002 [pdf]**
*submitted on 2014-12-01 05:34:38*

**Authors:** Bernard Riley

**Comments:** 15 Pages.

Symmetries are broken on the mass levels of three geometric sequences that descend from Planck scale and correspond with an extra-dimensional geometry. Partners resulting from broken symmetry take up a symmetrical arrangement about the mass level on which the symmetry is broken. The quark doublets lie in symmetrical arrangement about special mass levels, while isospin doublets are arranged symmetrically about sublevels. The pseudoscalar and vector mesons of the SU(5) multiplets lie in symmetrical arrangement with fundamental fermions of lower mass about mass levels. Such fermions include the valence quarks, charged leptons and a tower of ‘level-states’ that partner the short-lived isospin singlet and triplet mesons. Some level-states carry flavour and charge, and are identified with sea quarks.

**Category:** High Energy Particle Physics

[1] **viXra:1412.0001 [pdf]**
*replaced on 2015-12-28 21:16:51*

**Authors:** Pith Xie

**Comments:** 28 Pages.

The reference [2] constructs the Operator axioms to deduce number systems. In this paper, we slightly improve the syntax of the Operator axioms and construct a semantics for them. Then, on the basis of the improved Operator axioms, we define two fundamental operator functions to study the analytic properties of the axioms. Finally, we prove two theorems about the fundamental operator functions and pose some conjectures. Real operators can give new equations and inequalities so as to precisely describe the relations of mathematical or scientific objects.

**Category:** Functions and Analysis