viXra.org e-prints: preprints from the viXra.org site
http://viXra.org/
Sat Mar 28 21:52:17 GMT 2015<![CDATA[The Intrinsic Magnetic Field of Magnetic Materials and Gravitomagnetization]]>
http://viXra.org/abs/1503.0233
2015-03-28 14:41:27Condensed Matter reference: viXra:1503.0233v1 title: The Intrinsic Magnetic Field of Magnetic Materials and Gravitomagnetization authors: Fran De Aquino category: Condensed Matter type: submission date: 2015-03-28 14:41:27 abstract:
Magnetic materials are composed of microscopic regions called magnetic domains that act like tiny permanent magnets. Before an external magnetic field is applied to the material, the domains' magnetic fields are oriented randomly; most of them cancel each other out, so the resultant magnetic field is very small. Here we derive the expression for this intrinsic magnetic field, which can be used to calculate the magnitude of the Earth's magnetic field at the center of the Earth's inner core. In addition, we also describe a magnetization process that uses gravity. This is a gravitational magnetization (or gravitomagnetization) process, since the magnetization is produced starting from gravity. It is absolutely new and unprecedented in the literature.
]]> <![CDATA[Fractal Geometry a Possible Explanation to the Accelerating Expansion of the Universe and Other Standard ΛCDM Model Anomalies]]>
http://viXra.org/abs/1503.0232
2015-03-28 14:40:24Relativity and Cosmology reference: viXra:1503.0232v1 title: Fractal Geometry a Possible Explanation to the Accelerating Expansion of the Universe and Other Standard ΛCDM Model Anomalies authors: Blair D. Macdonald category: Relativity and Cosmology type: submission date: 2015-03-28 14:40:24 abstract:
One of the great questions in modern cosmology today is what causes the accelerating expansion of the universe – the so-called dark energy. It has recently been discovered that this property is not unique to the universe: trees also exhibit it, and trees are fractals. Do fractals offer insight into the accelerating expansion of the universe, and more?
In this investigation a simple experiment was undertaken on the classical (Koch snowflake) fractal. It was inverted to model and record observations from within an iterating fractal set, as if from a static (measured) position. New triangle sizes were held constant, allowing earlier triangles in the set to expand as the set iterated.
Velocities and accelerations were calculated, using classical kinematic equations, both for the area of the total fractal and for the distances between points within the fractal set. The inverted fractal was also tested for Hubble's Law.
It was discovered that the areas expanded exponentially and, as a consequence, the distances between points – from any location within the set – receded from the observer at exponentially increasing velocities and accelerations. The model was consistent with the standard ΛCDM model of cosmology and demonstrated: a singularity Big Bang beginning; infinite beginnings; homogeneous, isotropic expansion consistent with the CMB; an expansion rate capable of explaining the early inflation epoch; Hubble's Law – with a Hubble diagram and Hubble's constant; and accelerating expansion with a 'cosmological' constant. It was concluded that the universe behaves as a general fractal object. Though the findings have obvious relevance to the study of cosmology, they may also give insight into the recently discovered accelerating growth rate of trees, the empty quantum-like nature of the atom, and possibly our perception of the value of events with the passage of time.
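The inverted-fractal bookkeeping lends itself to a minimal numerical sketch (our reading of the experiment described above, not the author's code): holding the newest triangles at unit area forces every earlier generation to rescale by a factor of 9 per iteration, and the total area then grows geometrically.

```python
# Sketch of the inverted Koch snowflake model (assumptions ours): the
# newest triangles are held at unit area, so all existing area rescales
# by 9 each iteration (a Koch child triangle has 1/9 the area of its
# parent), while 3 * 4**(k-1) new unit triangles are added at step k.

def inverted_koch_area(iterations):
    """Total snowflake area measured in units of the newest triangle."""
    area = 1.0       # founding triangle, in current units
    new_count = 3    # triangles added in the first iteration
    for _ in range(iterations):
        area *= 9.0        # every existing triangle expands ninefold
        area += new_count  # unit-size triangles join the perimeter
        new_count *= 4     # each new side spawns four sides next time
    return area

areas = [inverted_koch_area(n) for n in range(1, 8)]
ratios = [b / a for a, b in zip(areas, areas[1:])]
print(areas[:3])   # [12.0, 120.0, 1128.0]
print(ratios)      # step ratios fall toward 9: exponential growth
```

In units of the fixed newest triangle, the step-to-step area ratio tends to 9, i.e. exponential expansion – the behavior the abstract reports.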
]]> <![CDATA[Observed Galaxy Distribution Transition with Increasing Redshift a Property of the Fractal]]>
http://viXra.org/abs/1503.0231
2015-03-28 14:50:37Relativity and Cosmology reference: viXra:1503.0231v1 title: Observed Galaxy Distribution Transition with Increasing Redshift a Property of the Fractal authors: Blair D. Macdonald category: Relativity and Cosmology type: submission date: 2015-03-28 14:50:37 abstract:
Is the universe a fractal? This is one of the great – though not often discussed – questions in cosmology. In my earlier publication, where I inverted the (Koch snowflake) fractal, I showed that the fractal demonstrated Hubble's Law, accelerating expansion, and a singularity beginning. Surveys of the universe – the most recent and largest being the 2012 WiggleZ Dark Energy Survey – show galaxy distribution on small scales to be fractal, while on large scales homogeneity holds. There appears to be a new anomaly to explain: a galaxy distribution transition from rough to smooth with cosmic distance. From my model I derived a Fractal-Hubble diagram. On this diagram, measurement points along the curve are clustered near the origin. This clustering was not addressed in the discussion or conclusion of my earlier experiment. Can this clustering of points account for the observed galaxy distribution transition? Could this transition be another property of fractals, and therefore could the universe itself be fractal? It was found that it can. Clustering of measurement points (and of galaxies) is a result of the observation position within the fractal. On small scales – relative to large scales – the cosmic surveys show what one would expect to see when viewing from within an iterating – growing – fractal. If trees – natural fractals that have also been found to grow at accelerating rates – are used to demonstrate this: the large-scale smoothness may be akin to a tree's trunk, and the rough (fractal) small scales to its branches. This discovery unifies the anomalies associated with the standard cosmological model: through the mechanics of the fractal, they are inextricably linked.
]]> <![CDATA[Fractal Geometry a Possible Explanation to the Accelerating Growth Rate of Trees]]>
http://viXra.org/abs/1503.0230
2015-03-28 14:56:48Physics of Biology reference: viXra:1503.0230v1 title: Fractal Geometry a Possible Explanation to the Accelerating Growth Rate of Trees authors: Blair D. Macdonald category: Physics of Biology type: submission date: 2015-03-28 14:56:48 abstract:
In a recent publication it was discovered that trees' growth rate accelerates with age. Trees are described as clear examples of natural fractals. Do fractals offer insight into this accelerating growth?
In this investigation the classical (Koch snowflake) fractal was inverted to model the growth of a fractal seen from a fixed – new growth – perspective. New triangle areas represented new branch volume; these new triangles were held constant, allowing earlier triangles in the set to expand as the fractal set iterated (grew) through time.
Velocities and accelerations were calculated, using classical kinematic equations, both for the area of the total fractal and for the distances between points within the fractal set.
It was discovered that the areas of earlier triangles expanded exponentially and, as a consequence, the total snowflake area grew exponentially. Distances between points (nodes) – from any location within the fractal set – receded at exponentially increasing velocities and accelerations. For trees, if the new-growth branch volume remains constant through time, the volumes of the supporting branches will grow exponentially to support their mass. This property of fractals may account for the accelerating volumetric growth rates of trees. A tree's age can be measured not only by its annual (growth ring) age, but also by its iteration age: the number of iterations from trunk to new-growth branch.
Though the findings have obvious relevance to the study of trees directly, they may also offer insight into the recently discovered accelerating growth rate of the universe.
]]> <![CDATA[An Explanation of the Mass Failure in the Market: the Internet – the Creator of Public Goods]]>
http://viXra.org/abs/1503.0229
2015-03-28 15:00:22Economics and Finance reference: viXra:1503.0229v1 title: An Explanation of the Mass Failure in the Market: the Internet – the Creator of Public Goods authors: Blair D. Macdonald category: Economics and Finance type: submission date: 2015-03-28 15:00:22 abstract:
Based on the economic model used to classify 'goods' in an economy, private goods – such as those found in the entertainment/media industries, or any item on the internet subject to file sharing or digital copying in any form, including, at the extreme, the human genome, 3D-printed solid objects, and even money in the form of bitcoins – are slowly being repositioned from what are termed 'private' or 'club'/'congestion' goods to their extreme opposite, public goods. The 'free rider problem' of public goods has become the 'free copy problem'. Public goods fail in the market and are therefore provided by government: is this the destiny of internet goods?
]]> <![CDATA[The Comprehensive Split Octonions and their Fano Planes]]>
http://viXra.org/abs/1503.0228
2015-03-28 15:32:12Combinatorics and Graph Theory reference: viXra:1503.0228v1 title: The Comprehensive Split Octonions and their Fano Planes authors: J Gregory Moxness category: Combinatorics and Graph Theory type: submission date: 2015-03-28 15:32:12 abstract:
For each of the 480 unique octonion Fano plane mnemonic multiplication tables, there are 7 split octonions (one for each of the 7 triads in the parent octonion). This PDF is a comprehensive list of all 3840 = 480 + 3360 octonions and split octonions, their Fano planes, and their multiplication tables. They are organized in pairs from 240 parent octonions = (8-bit sign mask) × (30 canonical sets of 7 triads); the pairs of parent octonions are created by flipping (reversing) the first triad (the center circular line), creating a unique Fano plane mnemonic.
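The counting above can be checked arithmetically; reading the "8-bit sign mask" as 8 sign choices per canonical set is our assumption.

```python
# Arithmetic check of the counts quoted in the abstract (bookkeeping
# sketch only; the sign-mask interpretation is our assumption).
canonical_triad_sets = 30                          # canonical sets of 7 triads
sign_masks = 8                                     # sign choices per set (assumed)
pair_members = canonical_triad_sets * sign_masks   # 240 parents per pair half
parent_octonions = 2 * pair_members                # flipping the first triad -> 480
split_octonions = 7 * parent_octonions             # one per triad of each parent
print(parent_octonions, split_octonions, parent_octonions + split_octonions)
# 480 3360 3840
```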
]]> <![CDATA[The Notions of C-Reached Prime and M-Reached Prime]]>
http://viXra.org/abs/1503.0227
2015-03-28 15:35:18Number Theory reference: viXra:1503.0227v1 title: The Notions of C-Reached Prime and M-Reached Prime authors: Marius Coman category: Number Theory type: submission date: 2015-03-28 15:35:18 abstract:
Despite the fact that I have written seven papers on the notions (defined by myself) of c-primes, m-primes, c-composites and m-composites (see my paper "Conjecture that states that any Carmichael number is a cm-composite" for the definitions of all these notions), I had not thought until now to look for a connection between such an odd composite n and the prime p that is reached after a few iterative operations on n, beside the connection that, of course, defines them. This is what I try to do in this paper, and also to give a name to this prime p, namely "reached prime" – and, to distinguish the cases (because a number can be at the same time c-prime and m-prime, respectively c-composite and m-composite), "c-reached prime" or "m-reached prime".
]]> <![CDATA[Simplified ToE Summary (w/124.443…GeV Higgs Mass Prediction)]]>
http://viXra.org/abs/1503.0226
2015-03-28 11:45:41Quantum Gravity and String Theory reference: viXra:1503.0226v1 title: Simplified ToE Summary (w/124.443…GeV Higgs Mass Prediction) authors: J Gregory Moxness category: Quantum Gravity and String Theory type: submission date: 2015-03-28 11:45:41 abstract:
This is a Mathematica notebook saved as PDF which theorizes a relationship between fundamental constants (c, Planck, gravitational, Hubble, fine-structure) and computes, within current experimental bounds, a (Higgs) particle mass of 124.443...GeV.
]]> <![CDATA[A New Perspective in Thermodynamics: The Energy-Entropy Principle]]>
http://viXra.org/abs/1503.0225
2015-03-28 12:23:31Thermodynamics and Energy reference: viXra:1503.0225v1 title: A New Perspective in Thermodynamics: The Energy-Entropy Principle authors: Rodrigo de Abreu category: Thermodynamics and Energy type: submission date: 2015-03-28 12:23:31 abstract:
In this paper, through a critique of the paradigmatic view of thermodynamics, we aim to show the new perspective attained in this matter. The generalization of heat as internal energy (a generalization of the kinetic-energy concept of heat) permits a generalization of the Kelvin postulate: "It is impossible, without another effect, to convert internal energy into work" (with no reference to heat or to a heat reservoir).
]]> <![CDATA[Pi Day 3/14/15 Journée de pi Marseille]]>
http://viXra.org/abs/1503.0223
2015-03-28 12:56:08Number Theory reference: viXra:1503.0223v1 title: Pi Day 3/14/15 Journée de pi Marseille authors: Simon Plouffe category: Number Theory type: submission date: 2015-03-28 12:56:08 abstract:
Conference given in Marseille for Pi Day, 3/14/15.
]]> <![CDATA[Journée de Pi, 3/14/15 pi Day]]>
http://viXra.org/abs/1503.0222
2015-03-28 13:05:45Number Theory reference: viXra:1503.0222v1 title: Journée de Pi, 3/14/15 pi Day authors: Simon Plouffe category: Number Theory type: submission date: 2015-03-28 13:05:45 abstract:
Conference for Pi Day, 3/14/15, in Marseille.
]]> <![CDATA[Consider Uncertain Parameters based on Sensitivity Matrix]]>
http://viXra.org/abs/1503.0221
2015-03-28 08:41:06Digital Signal Processing reference: viXra:1503.0221v1 title: Consider Uncertain Parameters based on Sensitivity Matrix authors: Taishan Lou category: Digital Signal Processing type: submission date: 2015-03-28 08:41:06 abstract:
Uncertain parameters of state-space models have always been a considerable problem. The consider Kalman filter (CKF) and the desensitized Kalman filter (DKF) are two methods to solve this problem. Based on the sensitivity matrix with respect to the uncertain parameter vector, a special DKF with an analytical gain is given and a new form of the CKF is derived. The mathematical equivalence between the special DKF and the CKF is demonstrated when the sensitivity-weighting matrix is set to the covariance of the uncertain parameters, which also solves the problem of how to select and obtain the sensitivity-weighting matrix in the DKF.
]]> <![CDATA[Geometry Inspired Algorithms for Linear Programming]]>
http://viXra.org/abs/1503.0220
2015-03-28 06:17:24Data Structures and Algorithms reference: viXra:1503.0220v1 title: Geometry Inspired Algorithms for Linear Programming authors: Dhananjay P. Mehendale category: Data Structures and Algorithms type: submission date: 2015-03-28 06:17:24 abstract:
In this paper we discuss some novel algorithms for linear programming inspired by geometrical considerations, using simple mathematics related to finding intersections of lines and planes. All these algorithms share a common aim: they try to approach closer and closer to the "centroid", or some "centrally located interior point", to speed up the process of reaching an optimal solution. Imagine the "line" parallel to the vector C, where C^T x denotes the objective function to be optimized, and suppose further that this "line" also passes through the "point" representing the optimal solution. The new algorithms proposed in this paper essentially try, in successive steps, to reach a feasible interior point in the close vicinity of this "line". Once one arrives at a point belonging to a small neighborhood of some point on this "line", then by moving from this point parallel to the vector C one can reach a point belonging to a sufficiently small neighborhood of the "point" representing the optimal solution.
]]> <![CDATA[Equipotency of E# with FGH_w]]>
http://viXra.org/abs/1503.0219
2015-03-27 19:39:02Number Theory reference: viXra:1503.0219v1 title: Equipotency of E# with FGH_w authors: Sbiis Saibian category: Number Theory type: submission date: 2015-03-27 19:39:02 abstract:
The goal of this article is to demonstrate that E# is indeed on the order of ω. Formally this means that for every member of FGH_ω there is a function in E# with at least the same growth rate, and that f_ω(n) is the smallest member of FGH which eventually dominates all functions within E#.
It will be demonstrated that a certain family of functions of order-type ω in E# dominates the corresponding members of FGH_ω, thus showing that for every function in FGH_ω there is a function in E# which grows at least as fast. It will then be shown how f_ω(n) diagonalizes over this family of functions and must eventually dominate every member of it.
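For reference, the diagonalization mentioned above is the standard fast-growing hierarchy construction (standard definitions, not notation taken from E# itself):

```latex
f_0(n) = n + 1, \qquad
f_{k+1}(n) = f_k^{\,n}(n) \quad (\text{$n$-fold iteration}), \qquad
f_\omega(n) = f_n(n).
```

Because f_ω steps up a level as its argument grows, it eventually dominates every fixed member of an order-type-ω family, which is the mechanism exploited here.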
]]> <![CDATA[Fusion D'images Par la Théorie de Dezert-Smarandache (DSmT) en Vue D'applications en Télédétection (Thèse Doctorale)]]>
http://viXra.org/abs/1503.0218
2015-03-27 19:59:36Data Structures and Algorithms reference: viXra:1503.0218v1 title: Fusion D'images Par la Théorie de Dezert-Smarandache (DSmT) en Vue D'applications en Télédétection (Thèse Doctorale) authors: Azeddine Elhassouny category: Data Structures and Algorithms type: submission date: 2015-03-27 19:59:36 abstract:
Thesis supervised by Prof. Driss Mammass, prepared at the Laboratoire Image et Reconnaissance de Formes-Systèmes Intelligents et Communicants (IRF-SIC), defended on 22 June 2013 in Agadir, Morocco.
The main objective of this thesis is to provide remote sensing with automatic tools for classification and for change detection of land cover, useful for many purposes. In this context, we have developed two general fusion methods, used for image classification and change detection, that jointly use the spatial information obtained by supervised ICM classification and the Dezert-Smarandache theory (DSmT), with new decision rules to overcome the limitations of the decision rules existing in the literature.
All programs of this thesis have been implemented in MATLAB and C, and preprocessing and visualization of results were carried out in ENVI 4.0; this allowed an accurate validation of the results in concrete cases. Both approaches are evaluated on LANDSAT ETM+ and FORMOSAT-2 images and the results are promising.
]]> <![CDATA[An Intuitive Conceptualization of n! and Its Application to Derive a Well Known Result]]>
http://viXra.org/abs/1503.0217
2015-03-28 01:40:00Number Theory reference: viXra:1503.0217v1 title: An Intuitive Conceptualization of n! and Its Application to Derive a Well Known Result authors: Prashanth R. Rao category: Number Theory type: submission date: 2015-03-28 01:40:00 abstract:
n! is defined as the product 1·2·3·…·n, and it popularly represents the number of ways of seating n people on n chairs. We conceptualize another way of describing n!, using sequential cuts of an imaginary circle, and use it to derive the following well-known result.
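The seating interpretation of n! is easy to confirm by brute force (illustration only; the paper's circle-cutting construction and the derived result are not reproduced here):

```python
# Brute-force check that n! counts the seatings of n people on n chairs.
from itertools import permutations
from math import factorial

for n in range(1, 7):
    # each permutation of the n people is one distinct seating
    assert sum(1 for _ in permutations(range(n))) == factorial(n)
print("n! matches the permutation count for n = 1..6")
```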
]]> <![CDATA[Three Conjectures on Twin Primes Involving the Sum of Their Digits]]>
http://viXra.org/abs/1503.0216
2015-03-28 02:39:54Number Theory reference: viXra:1503.0216v1 title: Three Conjectures on Twin Primes Involving the Sum of Their Digits authors: Marius Coman category: Number Theory type: submission date: 2015-03-28 02:39:54 abstract:
Observing the sum of the digits of the terms of twin-prime pairs, I make in this paper the following three conjectures: (1) for any m, the lesser term of a pair of twin primes having an odd digit sum, there exist infinitely many lesser terms n of pairs of twin primes having an even digit sum such that m + n + 1 is prime; (2) for any m, the lesser term of a pair of twin primes having an even digit sum, there exist infinitely many lesser terms n of pairs of twin primes having an odd digit sum such that m + n + 1 is prime; and (3) if a, b, c, d are four distinct terms of the sequence of lesser terms of twin-prime pairs and a + b + 1 = c + d + 1 = x, then x is a semiprime, the product of twin primes.
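A finite search can only illustrate, not prove, conjectures of this kind. The sketch below (bounds and the choice m = 5 are ours) exhibits witnesses for conjecture (1); note that m = 3 is degenerate, since every lesser twin n > 3 satisfies n ≡ 2 (mod 3), so 3 + n + 1 is divisible by 3.

```python
# Finite illustration of conjecture (1); the bound 2000 and m = 5 are ours.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def digit_sum(n):
    return sum(int(c) for c in str(n))

lesser_twins = [p for p in range(3, 2000) if is_prime(p) and is_prime(p + 2)]
m = 5   # lesser twin term with odd digit sum
witnesses = [n for n in lesser_twins
             if digit_sum(n) % 2 == 0 and is_prime(m + n + 1)]
print(witnesses[:5])   # [11, 17, 101, 107, 347]
```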
]]> <![CDATA[A Locally Linear Element-Wise Transformations Based Forecasting Model for Dynamic State Systems]]>
http://viXra.org/abs/1503.0215
2015-03-28 04:22:11General Mathematics reference: viXra:1503.0215v1 title: A Locally Linear Element-Wise Transformations Based Forecasting Model for Dynamic State Systems authors: Ramesh Chandra Bagadi, Hussain Bahia, Jeffrey Russell, Tao Han, John A. Hoopes , Roderic S. Lakes, Paul Terwilliger, Amir Assadi category: General Mathematics type: submission date: 2015-03-28 04:22:11 abstract:
In this research paper a one-step (at any instant) forecasting scheme for Dynamic State Systems with a large number of parameters is presented.
]]> <![CDATA[An Interesting Recurrent Sequence Whose First 150 Terms Are Either Primes, Powers of Primes or Products of Two Prime Factors]]>
http://viXra.org/abs/1503.0214
2015-03-27 15:12:46Number Theory reference: viXra:1503.0214v1 title: An Interesting Recurrent Sequence Whose First 150 Terms Are Either Primes, Powers of Primes or Products of Two Prime Factors authors: Marius Coman category: Number Theory type: submission date: 2015-03-27 15:12:46 abstract:
I started this paper with the idea of presenting the recurrence relation defined as follows: the first term, a(0), is 13; then the n-th term is defined as a(n) = a(n-1) + 6 if n is odd and as a(n) = a(n-1) + 24 if n is even. This recurrence formula produces many primes and odd numbers having very few prime factors: the first 150 terms of the sequence it generates are either primes, powers of primes, or products of two prime factors. But then I easily discovered even more interesting formulas, for instance a(0) = 13, a(n) = a(n-1) + 10 if n is odd and a(n) = a(n-1) + 80 if n is even (which produces 16 primes in the first 20 terms!). What seems to matter, in order for such a recurrently defined formula a(0) = 13, a(n) = a(n-1) + x if n is odd, a(n) = a(n-1) + y if n is even, to generate primes, is that x + y be a multiple of 30 (probably the choice of the first term does not matter much either, but I like the number 13).
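The recurrence is straightforward to generate for inspection (a sketch; the 150-term factorization claim is the author's):

```python
# The first recurrence from the abstract: a(0) = 13, a(n) = a(n-1) + 6
# for odd n and a(n) = a(n-1) + 24 for even n (note 6 + 24 = 30).
def sequence(terms, x=6, y=24, start=13):
    a = [start]
    for n in range(1, terms):
        a.append(a[-1] + (x if n % 2 == 1 else y))
    return a

print(sequence(10))   # [13, 19, 43, 49, 73, 79, 103, 109, 133, 139]
# 49 = 7**2 and 133 = 7 * 19; the other eight of these ten terms are prime.
```

The parameters x and y can be swapped for the other variants mentioned above, e.g. `sequence(20, x=10, y=80)`.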
]]> <![CDATA[Two Conjectures on Squares of Primes, Involving Twin Primes and Pairs of Primes p, q, Where Q=p+4]]>
http://viXra.org/abs/1503.0213
2015-03-27 11:14:09Number Theory reference: viXra:1503.0213v1 title: Two Conjectures on Squares of Primes, Involving Twin Primes and Pairs of Primes p, q, Where Q=p+4 authors: Marius Coman category: Number Theory type: submission date: 2015-03-27 11:14:09 abstract:
In this paper I make a conjecture stating that there exist infinitely many squares of primes that can be written as p + q + 13, where p and q are twin primes, and also a conjecture that there exist infinitely many squares of primes that can be written as 3*q - p - 1, where p and q are primes and q = p + 4.
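A finite search (ours) turns up initial instances of the first conjecture; it illustrates but cannot prove the claimed infinitude.

```python
# Squares of primes r**2 expressible as p + q + 13 with twin primes q = p + 2.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

examples = []
for r in range(2, 100):
    if not is_prime(r):
        continue
    p = (r * r - 15) // 2                  # solve p + (p + 2) + 13 = r**2
    if 2 * p + 15 == r * r and is_prime(p) and is_prime(p + 2):
        examples.append((r * r, p, p + 2))
print(examples[:3])   # [(25, 5, 7), (49, 17, 19), (289, 137, 139)]
```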
]]> <![CDATA[Lifetimes of Higgs, W and Z Bosons in the Scale-Symmetric Physics]]>
http://viXra.org/abs/1503.0212
2015-03-27 14:57:03Quantum Gravity and String Theory reference: viXra:1503.0212v2 title: Lifetimes of Higgs, W and Z Bosons in the Scale-Symmetric Physics authors: Sylwester Kornowski category: Quantum Gravity and String Theory type: replacement date: 2015-03-27 14:57:03 abstract:
Here, within the Scale-Symmetric Physics, we calculate the rigorous lifetimes of the Higgs (H), W and Z bosons, expressed in yoctoseconds [ys] (yocto = 10^-24): for H it is 0.282 ys, for W 0.438 ys, and for Z 0.386 ys. These are upper limits for the experimental data, and the experimental data should be close to such limits. The decay width of the H boson (about 3.4 GeV) gives 0.194 ys, the decay width of the W boson (about 2.1 GeV) gives 0.316 ys, and the decay width of the Z boson (about 2.5 GeV) gives 0.264 ys. The theoretical results calculated here are consistent with the experimental data. The lifetime of the Higgs boson predicted within the Standard Model (SM), 0.156 zeptoseconds [zs] (zepto = 10^-21), is inconsistent with the experimental data – it suggests that the SM is at least incomplete or partially incorrect. The main method of determining the lifetimes of particles follows from measurement of the decay width, but using this widely accepted method we obtain a lifetime for the Higgs boson about one thousand times shorter than the value predicted by the Standard Model. This causes some "scientists" to try to change the widely accepted method so as to obtain the SM value from the experimental data. As usual when a mainstream theory fails, to fit theoretical results to experimental data "scientists" apply approximations, mathematical tricks and free parameters. The truth is obvious: the experimental lifetime of the sham Higgs boson with a mass of 125 GeV is inconsistent with the SM prediction. Photons inside strong fields behave as gluons.
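The width-to-lifetime conversion quoted above is the standard relation τ = ħ/Γ. The sketch below reproduces the abstract's numbers; the W and Z widths are taken as the PDG values that the abstract rounds, while the 3.4 GeV Higgs width is the paper's own figure.

```python
# tau = hbar / Gamma, expressed in yoctoseconds (1 ys = 1e-24 s).
HBAR_GEV_S = 6.582119569e-25   # hbar in GeV*s

def lifetime_ys(width_gev):
    return HBAR_GEV_S / width_gev / 1e-24

# Widths in GeV: H as quoted in the abstract; W and Z as PDG values.
widths = {"H": 3.4, "W": 2.085, "Z": 2.4952}
for boson, gamma in widths.items():
    print(boson, round(lifetime_ys(gamma), 3))
# H 0.194
# W 0.316
# Z 0.264
```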
]]> <![CDATA[Lifetimes of Higgs, W and Z Bosons in the Scale-Symmetric Physics]]>
http://viXra.org/abs/1503.0212
2015-03-27 11:36:38Quantum Gravity and String Theory reference: viXra:1503.0212v1 title: Lifetimes of Higgs, W and Z Bosons in the Scale-Symmetric Physics authors: Sylwester Kornowski category: Quantum Gravity and String Theory type: submission date: 2015-03-27 11:36:38 abstract:
Here, within the Scale-Symmetric Physics, we calculate the rigorous lifetimes of the Higgs (H), W and Z bosons, expressed in yoctoseconds [ys] (yocto = 10^-24): for H it is 0.282 ys, for W 0.438 ys, and for Z 0.386 ys. These are upper limits for the experimental data, and the experimental data should be close to such limits. The decay width of the H boson (about 3.4 GeV) gives 0.194 ys, the decay width of the W boson (about 2.1 GeV) gives 0.316 ys, and the decay width of the Z boson (about 2.5 GeV) gives 0.264 ys. The theoretical results calculated here are consistent with the experimental data. The lifetime of the Higgs boson predicted within the Standard Model (SM), 0.156 zeptoseconds [zs] (zepto = 10^-21), is inconsistent with the experimental data – it suggests that the SM is at least incomplete or partially incorrect. The main method of determining the lifetimes of particles follows from measurement of the decay width, but using this widely accepted method we obtain a lifetime for the Higgs boson about one thousand times shorter than the value predicted by the Standard Model. This causes some "scientists" to try to change the widely accepted method so as to obtain the SM value from the experimental data. As usual when a mainstream theory fails, to fit theoretical results to experimental data "scientists" apply approximations, mathematical tricks and free parameters. The truth is obvious: the experimental lifetime of the sham Higgs boson with a mass of 125 GeV is inconsistent with the SM prediction. Photons inside strong fields behave as gluons.
]]> <![CDATA[Micro-Thermonuclear Plasma Tunneling by Rock Melting]]>
http://viXra.org/abs/1503.0211
2015-03-27 13:23:11Nuclear and Atomic Physics reference: viXra:1503.0211v1 title: Micro-Thermonuclear Plasma Tunneling by Rock Melting authors: Alexander Bolonkin, Joseph Friedlander, Shmuel Neumann, Zarek Newman category: Nuclear and Atomic Physics type: submission date: 2015-03-27 13:23:11 abstract:
Standard drilling has limits: at some depth the pressures and temperatures force the drilled opening shut when the drill is lifted. This paper proposes a reliable and rapid method of penetrating rock masses by melting all or part of the rock face, penetrating therein, and cooling the resulting glassy tube into a stabilized liner. The methods proposed for heating the tip of the melting element include heat generated by a micro-thermonuclear reaction. High rates of advance are sustainable because only heat and cooling water must be delivered to the tunnel head. The equipment is simple, without need for unduly high-pressure lithofracturing, and it may be regularly removed and switched out to avoid time- and personnel-intensive breakdowns in place. This method can achieve depths heretofore unreachable, to access deep gas and oil or to create an airtight and waterproof shaft for geothermal energy.
]]> <![CDATA[Inverse (Inner) and Outer Electric Field of Electrically Charged Particles as Fourth and Fifth Space Deformation]]>
http://viXra.org/abs/1503.0210
2015-03-27 07:24:05Quantum Gravity and String Theory reference: viXra:1503.0210v1 title: Inverse (Inner) and Outer Electric Field of Electrically Charged Particles as Fourth and Fifth Space Deformation authors: Michael Tzoumpas category: Quantum Gravity and String Theory type: submission date: 2015-03-27 07:24:05 abstract:
At http://viXra.org/abs/1410.0040 the first and second space deformations are described, while at http://viXra.org/abs/1502.0097 the third space deformation is described. The fourth and fifth space deformations are a consequence of the presence of a proton and an electron in the dynamic space, after the inevitable end of the primary neutron (beta decay). Thus an electrostatic induction of positive and negative units of the surrounding space is caused, and an inverse electric field of the proton (nucleus) is created, with a reduction of the space cohesive pressure. The nuclear force is now interpreted as an electric force, 100 times stronger than the corresponding force of the outer electric field that extends beyond the potential barrier.
The Universal and the particulate (see http://viXra.org/abs/1501.0111) antigravity forces are complemented by the stronger nuclear antigravity force, since at the lower nuclear field the reduction of the cohesive pressure is rapid and contributes to the architecture of nuclear structure. Moreover, the reduction of cohesive pressure at the lower nuclear field is the cause of the neutron mass deficit, while protons do not undergo a mass deficit, as will be described below.
]]> <![CDATA[The Distribution of Prime Number in a Given Interval]]>
http://viXra.org/abs/1503.0209
2015-03-27 07:30:52Number Theory reference: viXra:1503.0209v1 title: The Distribution of Prime Number in a Given Interval authors: Jian Ye category: Number Theory type: submission date: 2015-03-27 07:30:52 abstract:
The Goldbach theorem and the twin prime theorem are homologous. Starting from the origin of the primes, the paper derives the equations of the twin prime theorem and the Goldbach theorem, and reveals the equivalence between the Goldbach theorem and the generalized twin prime theorem.
]]> <![CDATA[Seven Conjectures on the Triplets of Primes p, q, R Where Q=p+4 and R=p+6]]>
http://viXra.org/abs/1503.0208
2015-03-27 07:37:51Number Theory reference: viXra:1503.0208v1 title: Seven Conjectures on the Triplets of Primes p, q, R Where Q=p+4 and R=p+6 authors: Marius Coman category: Number Theory type: submission date: 2015-03-27 07:37:51 abstract:
In this paper I make seven conjectures on the triplets of primes [p, q, r], where q = p + 4 and r = p + 6, conjectures involving primes, squares of primes, c-primes, m-primes, c-composites and m-composites (the last four notions are defined in previous papers; see for instance the paper "Conjecture that states that any Carmichael number is a cm-composite").
]]> <![CDATA[Two Conjectures on Squares of Primes Involving the Sum of Consecutive Primes]]>
http://viXra.org/abs/1503.0207
2015-03-27 09:13:43Number Theory reference: viXra:1503.0207v1 title: Two Conjectures on Squares of Primes Involving the Sum of Consecutive Primes authors: Marius Coman category: Number Theory type: submission date: 2015-03-27 09:13:43 abstract:
In this paper I make a conjecture which states that there exists an infinity of squares of primes of the form 6*k - 1 that can be written as a sum of two consecutive primes plus one, and also a conjecture that states that the sequence of partial sums of odd primes contains an infinity of terms which are squares of primes of the form 6*k + 1.
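As an illustration only (not part of the paper), the smallest instances of the first conjecture can be checked numerically: 5^2 = 11 + 13 + 1 and 11^2 = 59 + 61 + 1. A minimal Go sketch, with hypothetical helpers isPrime and nextPrime:

```go
package main

import "fmt"

// isPrime checks primality by trial division (fine for small n).
func isPrime(n int) bool {
	if n < 2 {
		return false
	}
	for d := 2; d*d <= n; d++ {
		if n%d == 0 {
			return false
		}
	}
	return true
}

// nextPrime returns the smallest prime greater than n.
func nextPrime(n int) int {
	for m := n + 1; ; m++ {
		if isPrime(m) {
			return m
		}
	}
}

func main() {
	// Scan primes q of the form 6k - 1 and test whether q*q can be
	// written as a sum of two consecutive primes plus one.
	for q := 5; q <= 40; q++ {
		if !isPrime(q) || q%6 != 5 {
			continue
		}
		target := q*q - 1 // need p + nextPrime(p) == target
		for p := 2; p < target; p = nextPrime(p) {
			if p+nextPrime(p) == target {
				fmt.Printf("%d^2 = %d + %d + 1\n", q, p, nextPrime(p))
				break
			}
		}
	}
}
```

For q up to 40 this finds representations for q = 5, 11, 17 and 29, while q = 23 has none, consistent with the conjecture claiming infinitely many such squares rather than all of them.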
]]> <![CDATA[Initially Problems in Observing 0S0]]>
http://viXra.org/abs/1503.0206
2015-03-27 05:31:52Geophysics reference: viXra:1503.0206v1 title: Initially Problems in Observing 0S0 authors: Herbert Weidner category: Geophysics type: submission date: 2015-03-27 05:31:52 abstract:
Superconducting gravimeters are overloaded by strong earthquakes and stop measuring. Here it is shown that, in the first hours after restart, most SGs deliver erroneous data, which prevents an accurate frequency measurement. By avoiding this disturbed region, the frequency can be determined much more accurately than before.
]]> <![CDATA[Strong Hypothesis for Dark Matter and the Controlled Nuclear Fusion]]>
http://viXra.org/abs/1503.0205
2015-03-26 19:18:30Astrophysics reference: viXra:1503.0205v1 title: Strong Hypothesis for Dark Matter and the Controlled Nuclear Fusion authors: Valdir Monteiro dos Santos Godoi category: Astrophysics type: submission date: 2015-03-26 19:18:30 abstract:
Gravitational absorption appears to be the cause of dark matter. Nuclear fusion can be obtained in a much more highly targeted way. The LHC will not need to use energies much higher than those used to find the Higgs boson in order to find "dark matter".
]]> <![CDATA[Purely Mechanical Memristors and the Missing Memristor]]>
http://viXra.org/abs/1503.0204
2015-03-26 23:03:36Classical Physics reference: viXra:1503.0204v1 title: Purely Mechanical Memristors and the Missing Memristor authors: Sascha Vongehr category: Classical Physics type: submission date: 2015-03-26 23:03:36 abstract:
This work has been suppressed since 2010 and led to established scientists being banned from the arXiv. We thank viXra for keeping critical work like this available to relatively uncorrupted parts of the scientific community. We define a mechanical analog of the basic electrical circuit element M = dφ/dQ: the ideal mechanical memristance M = dp/dx, with p being momentum. A never-before-described mechanical memory resistor M(x) is independent of velocity v and has a pinched hysteresis loop that collapses at high frequency in the v-versus-p plot: a perfect memristor. However, its memristance does not crucially involve inert mass, and the mechanical system helps clarify that memristor devices hypothesized on grounds of physical symmetries require more. The missing mechanical perfect memristor needs to be crucially mass-involving (MI), precisely as the memristor device implied in 1971 needs magnetism. Discussing novel MI memristive systems clarifies why such perfect MI memristors and EM memristors have not been discovered and may be impossible.
]]> <![CDATA[Energy and Forces in Economics]]>
http://viXra.org/abs/1503.0203
2015-03-26 23:28:11Economics and Finance reference: viXra:1503.0203v1 title: Energy and Forces in Economics authors: Wan-Jiung HU category: Economics and Finance type: submission date: 2015-03-26 23:28:11 abstract:
Besides the gravity model of trade, many other physical force and energy laws can be applied to explain economic phenomena. There should be a Coulomb force, a magnetic force, and an impelity force to explain the business activity between two nations, industries, or companies. Energy concepts can also be applied to economic activity for the same purpose.
]]> <![CDATA[Conservation of Money]]>
http://viXra.org/abs/1503.0202
2015-03-26 23:31:22Economics and Finance reference: viXra:1503.0202v1 title: Conservation of Money authors: Wan-Jiung Hu category: Economics and Finance type: submission date: 2015-03-26 23:31:22 abstract:
The law of conservation is a key concept in physics. We can also apply conservation laws to economic phenomena. Here, I use the conservation of money to explain stock market behavior. A continuity equation is provided for this model.
]]> <![CDATA[Lunar Drift Explains Lunar Eccentricity Rate]]>
http://viXra.org/abs/1503.0201
2015-03-27 02:09:55Astrophysics reference: viXra:1503.0201v1 title: Lunar Drift Explains Lunar Eccentricity Rate authors: Golden Gadzirayi Nyambuya category: Astrophysics type: submission date: 2015-03-27 02:09:55 abstract:
In this short letter, we argue that the observed +38 mm/yr secular lunar drift from the Earth does explain, to an admirable degree of agreement between theory and observation, the observed secular increase in the lunar eccentricity. At present, the recession of the Moon from the Earth is no longer considered an anomaly, as it is believed to be well explained by the conventional physics of Lunar-Earth tides. However, the same is not true of the observed increase in the lunar eccentricity, which is considered an anomaly requiring an explanation of its cause. We not only demonstrate an intimate connection between these two seemingly unrelated phenomena, but show that the relationship we deduce fits observations so well that logic dictates the lunar drift must surely be the cause of the secular increase in the lunar eccentricity.
]]> <![CDATA[Dark Matter is Non-Interactive]]>
http://viXra.org/abs/1503.0200
2015-03-27 03:29:24Astrophysics reference: viXra:1503.0200v1 title: Dark Matter is Non-Interactive authors: George Rajna category: Astrophysics type: submission date: 2015-03-27 03:29:24 abstract:
A new study of colliding galaxy clusters has found that dark matter doesn't even interact with itself. [10]
The gravitational force attracts matter, causing concentration of the matter in a small space and leaving much space with low matter concentration: dark matter and dark energy.
The asymmetry between the masses of the electric charges, for example the proton and the electron, can be understood through the asymmetric Planck Distribution Law. This temperature-dependent energy distribution is asymmetric around the maximum intensity, where the annihilation of matter and antimatter is a high-probability event. The asymmetric sides create different frequencies of electromagnetic radiation at the same intensity level, compensating each other. One of these compensating ratios is the electron-proton mass ratio. The lower energy side has no compensating intensity level; it is the dark energy, and the corresponding matter is the dark matter.
]]> <![CDATA[Implementing an Intelligent Version of the Classical Sliding-Puzzle Game for Unix Terminals Using Golang's Concurrency Primitives]]>
http://viXra.org/abs/1503.0199
2015-03-27 03:58:41Artificial Intelligence reference: viXra:1503.0199v1 title: Implementing an Intelligent Version of the Classical Sliding-Puzzle Game for Unix Terminals Using Golang's Concurrency Primitives authors: Pravendra Singh category: Artificial Intelligence type: submission date: 2015-03-27 03:58:41 abstract:
A smarter version of the sliding-puzzle game is developed using the Go programming language.
The game runs in the computer's terminal. It was developed mainly for UNIX-type systems, but
because of the cross-platform compatibility of the programming language used, it works well on
nearly all operating systems.
The game uses Go's concurrency primitives to simplify most of the heavy parts of the game.
Real-time notification functionality is also built using the language's concurrency support.
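The notification mechanism the abstract describes can be sketched with a goroutine feeding a channel that the game loop polls without blocking. This is an illustrative reconstruction, not the paper's actual code; the ticker interval and message text are invented:

```go
package main

import (
	"fmt"
	"time"
)

// notifier periodically pushes notification messages to the game loop
// over a channel, so the main loop never blocks waiting for them.
func notifier(msgs chan<- string, done <-chan struct{}) {
	ticker := time.NewTicker(50 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			msgs <- "hint: slide a tile next to the blank"
		case <-done:
			return
		}
	}
}

func main() {
	msgs := make(chan string)
	done := make(chan struct{})
	go notifier(msgs, done)

	// Game loop: handle a notification if one is pending, otherwise
	// carry on rendering the frame after a short timeout.
	for frame := 0; frame < 3; frame++ {
		select {
		case m := <-msgs:
			fmt.Println(m)
		case <-time.After(100 * time.Millisecond):
			fmt.Println("no notification this frame")
		}
	}
	close(done)
}
```

The design point is that `select` lets the game loop multiplex user input, rendering, and notifications over channels instead of sharing state behind locks.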
]]> <![CDATA[Entangle 3000 Atoms Using a Single Photon]]>
http://viXra.org/abs/1503.0198
2015-03-26 11:57:31Quantum Physics reference: viXra:1503.0198v1 title: Entangle 3000 Atoms Using a Single Photon authors: George Rajna category: Quantum Physics type: submission date: 2015-03-26 11:57:31 abstract:
Physicists in the US and Serbia have created an entangled quantum state of nearly 3000 ultracold atoms using just one photon. This is the largest number of atoms ever to be entangled in the lab, and the researchers say that the technique could be used to boost the precision of atomic clocks. [10]
The accelerating electrons explain not only the Maxwell equations and Special Relativity, but also the Heisenberg uncertainty relation, wave-particle duality and the electron's spin, building the bridge between the classical and quantum theories.
The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass ratio and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side of the diffraction pattern to the other, which violates CP and time-reversal symmetry.
The diffraction patterns and the locality of the self-maintaining electromagnetic potential also explain Quantum Entanglement, making it a natural part of relativistic quantum theory.
The asymmetric sides create different frequencies of electromagnetic radiation at the same intensity level, compensating each other. One of these compensating ratios is the electron-proton mass ratio. The lower energy side has no compensating intensity level; it is the dark energy, and the corresponding matter is the dark matter.
]]> <![CDATA[Special Relativity Replacement]]>
http://viXra.org/abs/1503.0197
2015-03-26 07:46:04Relativity and Cosmology reference: viXra:1503.0197v1 title: Special Relativity Replacement authors: Glenn A. Baxter category: Relativity and Cosmology type: submission date: 2015-03-26 07:46:04 abstract:
This paper details a replacement for Dr. Einstein's theory of Special Relativity, which, along with quantum theory, is a pillar of 21st-century physics. Special Relativity deals with light and other forms of radiation, such as radio, X, gamma, and delta radiation, and with the constant nature of the velocity of light as distinguished from the non-constant nature of the relative velocity of light.
]]> <![CDATA[Munns' Field Equations]]>
http://viXra.org/abs/1503.0196
2015-03-26 08:11:52Quantum Physics reference: viXra:1503.0196v1 title: Munns' Field Equations authors: Christina Munns category: Quantum Physics type: submission date: 2015-03-26 08:11:52 abstract:
ABSTRACT
This paper serves to explain how the four tensors within Einstein's field equations (EFE), namely the scalar curvature, the metric tensor, the Ricci curvature tensor and the stress-energy tensor, can each be exactly correlated to the phenomena of unitary symmetry groups, quantum numbers, dark phenomena, 5D regular polytopes and universal number. The paper also explains how these four EFE tensors find expression in the Unified Standard Model, which is the graphical depiction of Unified Field Theory. The conclusion is that the four tensors of Einstein's field equations can now be applied to real-life phenomena such as space, light, gravity and time.
]]> <![CDATA[How Light Bends & Curves]]>
http://viXra.org/abs/1503.0195
2015-03-26 08:24:14Classical Physics reference: viXra:1503.0195v1 title: How Light Bends & Curves authors: Christina Munns category: Classical Physics type: submission date: 2015-03-26 08:24:14 abstract:
The aim of this paper is to explain the mechanism by which light both bends and curves, and to discuss these findings in the light of the current understanding of the properties of a gravitational field. This paper explains how gravity does not commute with the phenomenon of light, since the gravitational force is a conserved vector field and is path independent. Also, gravity is irrotational and thus has zero curl, and so could not cause light to curve. The essential argument of this paper is that the bending of light arises from a difference in the refractive index between two identical reference frames travelling at different velocities, and that the curvature of light arises as a direct result of the rotational symmetry of time which is intrinsic to the phenomenon of acceleration. Therefore the deviation of light from its path is not due to the presence of the gravitational field, as expressed in Einstein's Theory of General Relativity, but is instead due to the CP-violating and distorting effect of the phenomenon of time which is intrinsic to all material states.
]]> <![CDATA[Black Holes - The Real Story]]>
http://viXra.org/abs/1503.0194
2015-03-26 08:27:40Astrophysics reference: viXra:1503.0194v1 title: Black Holes - The Real Story authors: Christina Munns category: Astrophysics type: submission date: 2015-03-26 08:27:40 abstract:
This paper addresses the properties of black holes as they relate to unitary symmetry groups. The explanation is given that black holes belong to SU(2) and that they are therefore magnetic in nature, not gravitational. The nature of the Schwarzschild radius in relation to unitary symmetry is explained, along with an explanation of accretion disks and why information is not actually lost in a black hole. The three types of black holes, Schwarzschild, Reissner-Nordström and Kerr, are discussed in the light of unitary symmetry groups. The proposal that black holes operate via the process of magnetic induction is also examined.
]]> <![CDATA[The Riemann Theory - Proof of the Riemann Hypothesis]]>
http://viXra.org/abs/1503.0193
2015-03-27 02:52:50Functions and Analysis reference: viXra:1503.0193v2 title: The Riemann Theory - Proof of the Riemann Hypothesis authors: Christina Munns category: Functions and Analysis type: replacement date: 2015-03-27 02:52:50 abstract:
The Riemann Hypothesis is a proposal given by Bernhard Riemann in 1859 that the non-trivial zeros of the Riemann zeta function all have real part 1/2. The conjecture is that all the non-trivial zeros lie along the critical line.
]]> <![CDATA[The Riemann Theory - Proof of the Riemann Hypothesis]]>
http://viXra.org/abs/1503.0193
2015-03-26 08:37:02Functions and Analysis reference: viXra:1503.0193v1 title: The Riemann Theory - Proof of the Riemann Hypothesis authors: Christina Munns category: Functions and Analysis type: submission date: 2015-03-26 08:37:02 abstract:
The Riemann Hypothesis is a proposal given by Bernhard Riemann in 1859 that the non-trivial zeros of the Riemann zeta function all have real part 1/2. The conjecture is that all the non-trivial zeros lie along the critical line.
]]> <![CDATA[On the Importance of Symmetry on the Photonic Environment]]>
http://viXra.org/abs/1503.0192
2015-03-26 08:44:34High Energy Particle Physics reference: viXra:1503.0192v1 title: On the Importance of Symmetry on the Photonic Environment authors: Christina Munns category: High Energy Particle Physics type: submission date: 2015-03-26 08:44:34 abstract:
Consideration is given to the relevance of unitary symmetry to the environment in which photonic research takes place. Both a U(1) and an SU(2) environment are considered and the results compared. A direct polarity is found between these two unitary symmetry groups with regard to both photonic behaviour and research results, leading to the conclusion that environmental symmetry directly affects photonic activity as well as research outcomes.
]]> <![CDATA[Virtual Proper Time in the Problems of Eliminating the Infrared Catastrophe and of the Field Origin of the Electron Mass and Self-Energy in Classical Electrodynamics]]>
http://viXra.org/abs/1503.0191
2015-03-26 06:24:19Quantum Physics reference: viXra:1503.0191v1 title: Virtual Proper Time in the Problems of Eliminating the Infrared Catastrophe and of the Field Origin of the Electron Mass and Self-Energy in Classical Electrodynamics authors: Nikolai V. Volkov category: Quantum Physics type: submission date: 2015-03-26 06:24:19 abstract:
With inclusion of the virtual proper time in the metric of the physical Minkowski
space we pass to a four-dimensional bimetric space-time. Now a complete description
of the occurring physical processes includes both physical (observable) and virtual
(unobservable) objects that enter into the physical expressions. In classical electrodynamics
this conversion leads to the appearance of a virtual scalar-electric field that complements
the physical electromagnetic field and is massive in the presence of sources. This makes it possible
to eliminate the infrared catastrophe and to prove the field origin of the virtual (bare)
electron mass and self-energy. With inclusion of the virtual proper time in the classical
quantum theory we obtain the single-particle wave Dirac equation for which the electron
wave function retains the simple probabilistic interpretation. In the single-particle Dirac
theory the virtual scalar-electric field shifts the physical energy levels for the hydrogen
atom in an external field, and this leads to two additional corrections.
]]> <![CDATA[E8_Particle_Assignment_Symmetry]]>
http://viXra.org/abs/1503.0190
2015-03-25 19:06:38Quantum Physics reference: viXra:1503.0190v1 title: E8_Particle_Assignment_Symmetry authors: J Gregory Moxness category: Quantum Physics type: submission date: 2015-03-25 19:06:38 abstract:
Constructing an E8 Based Standard Model (SM): An approach to a Theory of Everything (ToE)
]]> <![CDATA[]]>
http://viXra.org/abs/1503.0189
2015-03-26 06:31:19Quantum Gravity and String Theory reference: viXra:1503.0189v2 title: authors: category: Quantum Gravity and String Theory type: withdrawal date: 2015-03-26 06:31:19 abstract:
]]> <![CDATA[The Quantum Levels of the Event Horizon of a Black Hole]]>
http://viXra.org/abs/1503.0189
2015-03-26 01:44:38Quantum Gravity and String Theory reference: viXra:1503.0189v1 title: The Quantum Levels of the Event Horizon of a Black Hole authors: Kuyukov Vitaly category: Quantum Gravity and String Theory type: submission date: 2015-03-26 01:44:38 abstract:
I present a simple explanation of the Hawking radiation on the basis of my earlier idea (Model of Hawking Radiation, 05.11.2013, http://vixra.org/pdf/1311.0042v1.pdf) of quantization of the black hole surface. The event horizon of a black hole is a two-dimensional surface in the form of a sphere. The sphere is considered here purely as a geometric object with properties similar to a membrane. The membrane may fluctuate, and elastic waves arise on it.
]]> <![CDATA[Formulation of Porosity Calculation for Three-Dimension Granular Materials in the Case of Spherical Particles]]>
http://viXra.org/abs/1503.0188
2015-03-26 02:19:25Condensed Matter reference: viXra:1503.0188v1 title: Formulation of Porosity Calculation for Three-Dimension Granular Materials in the Case of Spherical Particles authors: Sparisoma Viridi, Suprijadi, Reza Rendian Septiawan category: Condensed Matter type: submission date: 2015-03-26 02:19:25 abstract:
A derivation of a formulation for calculating the porosity of three-dimensional granular materials is presented in this work, where the granular particles are assumed to be spherical. The overlapping-area problem is solved in two dimensions using the geometry of two overlapping circles. The three-dimensional overlap is formulated through numerical integration of the two-dimensional overlap.
]]> <![CDATA[Falsification of Einstein Theories of Relativity Second Revised Edition]]>
http://viXra.org/abs/1503.0187
2015-03-26 05:52:13Relativity and Cosmology reference: viXra:1503.0187v1 title: Falsification of Einstein Theories of Relativity Second Revised Edition authors: Lutz Kayser category: Relativity and Cosmology type: submission date: 2015-03-26 05:52:13 abstract:
The Einstein postulates of Special Relativity (SR), namely the invariance of the speed of light c relative to the observer, the symmetry of relative velocities, and the Galilean Principle independent of velocity and gravitational potential, are falsified. The replacement law is: there exists an absolute universal velocity reference (Cosmic Velocity Reference, CVR). The velocity of light c is invariant and isotropic only relative to the absolute universal space CVR. In honor of the discoverer, we propose to call it Smoot's Law. Experimental evidence from the anisotropy of the cosmic microwave background (CMB) and from one-way measurements of the speed of light is given. From the new laws it follows (in vector notation) that c_rel = c - c_CVR. This results in the elimination from physics of the Minkowski four-vector spacetime symmetry, time dilation, length contraction, the velocity- and acceleration-symmetrical Lorentz transformation, Einstein velocity addition, covariance, the invisible and unphysical net of monolithic worldlines, and other weird mathematical constructs without physical meaning resulting from Special Relativity (SR) and General Relativity (GR). The increase of particle mass with speed by the so-called Lorentz factor γ = (1 - v^2/c^2)^(-1/2) is often cited by relativists as empirical proof of SR. γ was fraudulently smuggled into SR without mathematical proof by applying it to relative velocities, which gives wrong results. We show that the Lorentz factor is a simple part of the system of classical dynamic equations and the mass-energy conservation law, but that it is only valid with the absolute cosmic velocity v_CVR. The increases of mass, momentum, and energy with an object's velocity are correct, but they are not part of, or caused by, SR. This is true also for the change of clock rate as a function of velocity and Newtonian gravitational potential. PIPS welcomes discussions and proposals for improvement of this work for the sake of the physics community and the wider public.
For the general public, we have made an effort to explain important facts with everyday examples. Periodic revisions will be published. (Second Revised Edition 3/15/2015).
]]> <![CDATA[Primordial Dark Matter Black Holes Outside Galaxies Responsible for the Creation and Contraction of a Cyclic Universe?]]>
http://viXra.org/abs/1503.0186
2015-03-26 10:21:45Astrophysics reference: viXra:1503.0186v2 title: Primordial Dark Matter Black Holes Outside Galaxies Responsible for the Creation and Contraction of a Cyclic Universe? authors: Leo Vuyk category: Astrophysics type: replacement date: 2015-03-26 10:21:45 abstract:
For years, Stephen Hawking (and Hartle) has applied the No Boundary Proposal, the idea that the universe is "closed" and can have a beginning and an end, yet still exist forever. I will call this a closed cyclic universe and, for symmetry reasons, a cyclic entangled raspberry-shaped multiverse, based on my proposal for the Quantum Function Follows Form (FFF) model. The form and microstructure of elementary particles are supposed to be the origin of the functional differences between Higgs, graviton, photon and fermion particles. As a consequence, a new paradigm of splitting, accelerating and pairing massless Dark Matter Black Holes (without graviton production) is able to convert vacuum energy (ZPE) into real energy by entropy decrease and quark, electron and positron production at the black hole horizon. Secondly, a chiral dark energy Higgs vacuum lattice is able to explain quick gas and dust production without large annihilation at the black hole horizon, as well as galaxy and star formation. Even microscopic black holes like sunspots, (micro) comets, lightning bolts, sprite fireballs and ball lightning seem to produce gas. However, only the largest primordial Galaxy Anchor Black Holes (GABHs), located far outside galaxies, are supposed to stop gas and dust production; by merging with other black holes they should be the ultimate origin of universal contraction into a Big Crunch and a cyclic multiverse. The Hubble redshift, in contrast with the mainstream, is supposed to originate from local vacuum lattice dilution created by local dark matter black holes. All black holes eat the Higgs vacuum lattice, with a local Planck length extension as the result, and this is the basis for the frequency decrease of a passing photon. As a result, GABHs are only observable by their background lensing effect and as quasi-soft x-ray sources, to be found in particular around merger galaxies, due to swirling gas and dust expelled from the galaxies.
]]>