**Previous months:**

2007 - 0702(59) - 0703(50) - 0704(5) - 0705(1) - 0706(8) - 0707(2) - 0708(3) - 0709(3) - 0710(1) - 0711(6) - 0712(3)

2008 - 0801(4) - 0802(4) - 0803(2) - 0804(9) - 0805(4) - 0806(1) - 0807(12) - 0808(6) - 0809(3) - 0810(16) - 0811(5) - 0812(9)

2009 - 0901(3) - 0902(7) - 0903(6) - 0904(5) - 0907(50) - 0908(109) - 0909(61) - 0910(66) - 0911(64) - 0912(54)

2010 - 1001(46) - 1002(51) - 1003(265) - 1004(138) - 1005(110) - 1006(68) - 1007(55) - 1008(91) - 1009(71) - 1010(62) - 1011(76) - 1012(52)

2011 - 1101(100) - 1102(55) - 1103(122) - 1104(82) - 1105(41) - 1106(61) - 1107(53) - 1108(49) - 1109(58) - 1110(71) - 1111(114) - 1112(86)

2012 - 1201(101) - 1202(84) - 1203(93) - 1204(95) - 1205(111) - 1206(91) - 1207(99) - 1208(223) - 1209(96) - 1210(164) - 1211(137) - 1212(143)

2013 - 1301(175) - 1302(156) - 1303(197) - 1304(153) - 1305(183) - 1306(210) - 1307(152) - 1308(141) - 1309(191) - 1310(242) - 1311(168) - 1312(203)

2014 - 1401(190) - 1402(149) - 1403(842) - 1404(262) - 1405(328) - 1406(179) - 1407(203) - 1408(221) - 1409(224) - 1410(189) - 1411(493) - 1412(252)

2015 - 1501(233) - 1502(233) - 1503(255) - 1504(182)

Any replacements are listed further down

[10454] **viXra:1504.0193 [pdf]**
*submitted on 2015-04-24 10:29:18*

**Authors:** Robert James Johnson

**Comments:** 19 Pages. Submission ref. PHYSSCR-102611; submitted 6 Feb 2015; awaiting referee comments 24.4.15

Neither the collisional hydrodynamic nor the collisionless kinetic models have yet been able to fully explain the acceleration of the fast solar wind without ad hoc assumptions of additional energy input or suprathermal electron populations at the base of the models in the lower corona. Separate research has shown that plasma naturally forms a Current Free Double Layer when expanding into a lower-density region, the effect of which is to generate a suprathermal electron population and beams of fast ions on the low potential side. It is suggested that the expansion of the dense plasma in the body of the Sun will form a stationary Current Free Double Layer below the photosphere and thus provide initial ion acceleration together with the type of electron velocity distribution function that kinetic models require as boundary conditions. The turbulence generated by the outflowing particle beams colliding with the low-density plasma in and above the upper chromosphere may contribute to the additional wave energy which the hydrodynamic models require. The implications of the present model for the coronal heating problem are also explored.

**Category:** Astrophysics

[10453] **viXra:1504.0192 [pdf]**
*submitted on 2015-04-23 20:54:28*

**Authors:** Madad Khan, Florentin Smarandache, Tariq Aziz

**Comments:** 82 Pages.

In this book, we introduce the concepts of (∈, ∈∨q_k)-fuzzy ideals and (∈_γ, ∈_γ∨q_δ)-fuzzy ideals in a non-associative algebraic structure called an Abel Grassmann’s groupoid (AG-groupoid), discuss several important features of a regular AG-groupoid, and investigate some characterizations of regular and intra-regular AG-groupoids, among other results.

**Category:** Algebra

[10452] **viXra:1504.0191 [pdf]**
*submitted on 2015-04-23 23:26:34*

**Authors:** Akito Takahashi

**Comments:** 19 Pages. Preprint of ICCF19 Proceedings paper, to be published by J. Condensed Matter Nucl. Sci.

Condensed matter nuclear reactions (CMNR) are thought to occur for trapped H(D) particles within some chemical (electromagnetic) potential well with a finite lifetime. As the lifetime is much longer than the collision time of a two-body interaction of free particles, CMNR reaction rates are significantly enhanced (by 19-20 orders of magnitude) compared with reaction rates estimated by the two-body collision formula. The basis of CMNR rate theory is reviewed in this paper by extracting the essence of the TSC theory tools developed so far. The derivation of Fermi’s golden rule with a nuclear optical potential, rate formulas by Born-Oppenheimer wave-function separation, estimation of the bracket integral of the inter-nuclear strong interaction rate, estimation of the time-dependent barrier penetration probability by the HMEQPET method for the dynamic D(H)-cluster condensation/collapse process, and DD fusion power levels as functions of inter-nuclear d-d distance and effective existing (life) time are given. A DD fusion power level of 10 kW/mol of d-d pairs is possible for a 1 pm inter-nuclear d-d distance with a 10 attosecond lifetime. A rate of 2.8 nano-mol of 4D/TSC formations per second may release 10 kW of neutron-free heat power with 4He ash.

**Category:** Condensed Matter

[10451] **viXra:1504.0190 [pdf]**
*submitted on 2015-04-24 01:53:40*

**Authors:** Cheng Tianren

**Comments:** 9 Pages.

In these essays, we depict the school life of Jeff, selecting more than 20 of his stories to explore our thoughts on the most popular social events. In this paper, we publish essays one to four as the first part of this work.

**Category:** Social Science

[10450] **viXra:1504.0189 [pdf]**
*submitted on 2015-04-24 03:39:05*

**Authors:** S.Kalimuthu

**Comments:** 4 Pages. If there is a flaw in the proof, I welcome it.Thank you.

The reputed Austrian-American mathematician Kurt Gödel formulated two extraordinary propositions in mathematical logic. Accepted by all mathematicians, they have revolutionized mathematics, showing that mathematical truth is more than logic and computation. These two groundbreaking theorems changed mathematics, logic, and even the way we look at our Universe. The cognitive scientist Douglas Hofstadter described Gödel’s first incompleteness theorem as stating that in a formal axiomatic mathematical system there are propositions that can neither be proven nor disproven. The logician and mathematician Jean van Heijenoort summarizes that there are formulas that are neither provable nor disprovable. According to Peter Suber, in a formal mathematical system there are undecidable statements. S. M. Srivatsava formulates that formulations of number theory include undecidable propositions. And Miles Mathis describes Gödel’s first incompleteness theorem as stating that in a formal axiomatic mathematical system we can construct a statement which is neither true nor false (a mathematical variant of the liar’s paradox). In this short work, the author attempts to reach these propositions, equivalent to Gödel’s incompleteness theorems, by applying elementary arithmetic operations, algebra and hyperbolic geometry. [1-6]

**Category:** Geometry

[10449] **viXra:1504.0188 [pdf]**
*submitted on 2015-04-24 04:40:30*

**Authors:** Alberto Bononi, Ottmar Beucher, Paolo Serena

**Comments:** 1 Page.

We correct a typo in the key equation (20) of reference [Opt.Express 21(26), 32254–32268 (2013)] that shows an upper bound on the cross-channel interference nonlinear coefficient in coherent optical links for which the Gaussian Noise model applies.

**Category:** Digital Signal Processing

[10448] **viXra:1504.0187 [pdf]**
*submitted on 2015-04-24 07:56:32*

**Authors:** Rodrigo de Abreu

**Comments:** 9 Pages. Técnica 1, 53-61 (1994).

Two processes have been chosen to show the difficulty of attributing a physical significance to the first law - dU=dW+dQ, since it is not possible to separate the energetic exchange between two subsystems, dividing it into work - dW, and heat - dQ, with an energetic significance (attributed to each one of these terms), even if a "quasi-static" transformation is assumed. By analysing these processes we have shown that the First Law does not possess the significance commonly attributed to it. The analysis developed herein completes one recently published [3].

**Category:** Thermodynamics and Energy

[10447] **viXra:1504.0186 [pdf]**
*submitted on 2015-04-24 07:37:04*

**Authors:** George Rajna

**Comments:** 20 Pages.

Scientists have discovered a secret second code hiding within DNA which instructs cells on how genes are controlled. The amazing discovery is expected to open new doors to the diagnosis and treatment of diseases, according to a new study. [10]
There is also a connection between statistical physics and evolutionary biology, since the arrow of time operates in biological evolution as well.
From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. [8]
This paper contains the review of quantum entanglement investigations in living systems, and in the quantum mechanically modeled photoactive prebiotic kernel systems. [7]
The human body is a constant flux of thousands of chemical/biological interactions and processes connecting molecules, cells, organs, and fluids, throughout the brain, body, and nervous system. Up until recently it was thought that all these interactions operated in a linear sequence, passing on information much like a runner passing the baton to the next runner. However, the latest findings in quantum biology and biophysics have discovered that there is in fact a tremendous degree of coherence within all living systems.

**Category:** Physics of Biology

[10446] **viXra:1504.0185 [pdf]**
*submitted on 2015-04-23 14:37:58*

**Authors:** John R. Cipolla

**Comments:** 9 Pages.

A series of computational fluid dynamics (CFD) analyses and experiments is discussed that probes the interaction between multiple shock waves and the adjacent boundary layer at high supersonic speed. For a cone-cylinder-flare configuration the viscous boundary layer and its interaction with the outer flow field is computationally determined and then compared to an experimental technique using holographic interferometry. Ultimately, the base flow region of a projectile can be modeled using CFD and its results compared to experimental interferometric data during the validation stage of code development.

**Category:** Classical Physics

[10445] **viXra:1504.0184 [pdf]**
*submitted on 2015-04-23 06:15:56*

**Authors:** C. A. Laforet

**Comments:** 4 Pages.

This paper considers a thought experiment in which an idealized version of the double slit experiment is carried out in a gravitational field. It is argued that the interference pattern observed will be modified from the pattern observed in the identical experiment performed in free space as a result of the blue/red shift of the photons fired at the screen caused by the gravitational field. The results of this are then interpreted in the context of an atom falling in a gravitational field.

**Category:** Quantum Gravity and String Theory

[10444] **viXra:1504.0183 [pdf]**
*submitted on 2015-04-22 15:30:54*

**Authors:** Laszlo B. Kish

**Comments:** 6 Pages.

There is a longstanding debate about the zero-point term in the Johnson noise voltage of a resistor: Is it indeed there, or is it only an experimental artifact due to the uncertainty principle for phase-sensitive amplifiers? We show that, if the zero-point term is measured via the mean energy and force in a shunting capacitor, and if these measurements confirm its existence, two types of perpetual motion machines could be constructed. Therefore an exact quantum theory of the Johnson noise must also include the measurement system used to evaluate the observed quantities. The results also have implications for phenomena in advanced nanotechnology.

**Category:** Quantum Physics

[10443] **viXra:1504.0182 [pdf]**
*submitted on 2015-04-22 16:24:54*

**Authors:** Frederik Vantomme

**Comments:** 24 Pages. The original paper is written in LaTex format.

We explore the possibility that black holes and Space could be the geometrically Compactified Transverse Slices ("CTS"s) of their higher (+1) dimensional space. Our hypothesis is that we might live somewhere in between partially compressed regions of space, namely 4d_{L+R} hyperspace compactified to its 3d transverse slice, and fully compressed dark regions, i.e. black holes, still containing all _{L}d432-1-234d_{R} dimensional fields. This places the DGP, ADD, Kaluza-Klein, Randall-Sundrum, Holographic and Vanishing Dimensions theories in a different perspective.

We first postulate that a black hole could be the result of the compactification (fibration) of a 3d burned up S^{2} star to its 2d transverse slice; the 2d dimensional discus itself further spiralling down into a bundle of one-dimensional fibres.

Similarly, Space could be the compactified transverse slice (fibration) of its higher 4d_{L+R} S^{3} hyper-sphere to its 3d transverse slice, the latter adopting the topology of a closed and flat left+right handed trefoil knot. By further extending these two ideas, we might consider that the Universe in its initial state was a "Matroska" 4d_{L+R} hyperspace compactified, in cascading order, to a bundle of one-dimensional fibres. The Big Bang could be an explosion from within that broke the cascadingly compressed Universe open.

[10442] **viXra:1504.0181 [pdf]**
*submitted on 2015-04-22 16:57:30*

**Authors:** Vedat Tanriverdi

**Comments:** 8 Pages. spin, quantum mechanics

The historical development of spin and the Stern-Gerlach experiment is summarized, and some questions on spin are then stated.

**Category:** Quantum Physics

[10441] **viXra:1504.0180 [pdf]**
*submitted on 2015-04-23 04:48:18*

**Authors:** V.B. Smolenskii

**Comments:** 6 Pages.

Apparently, in May of this year the values of the fundamental physical constants (FPC) recommended by CODATA for international use (CODATA 2014) will be published. This article presents the predictive results of the author's original theoretical research on determining the numerical values of the most significant FPC, obtained using the analytical method of the PI-theory of the fundamental physical constants (PI-Theory). Final formulas and high-precision results of analytical calculations for 22 FPC are given, together with a table comparing the results of the calculations with the CODATA 2010 data.

**Category:** Nuclear and Atomic Physics

[10440] **viXra:1504.0179 [pdf]**
*submitted on 2015-04-22 15:24:54*

**Authors:** Ernesto Lopez Gonzalez

**Comments:** 22 pages, in spanish

Background: In previous papers it was set out that matter could be considered to be formed by gravitational pulsations in a six-dimensional space with anisotropic curvature, since solutions to Einstein's field equations then presented all of the characteristics of a particle.
Results: Four solutions to the gravitomagnetic wave equation have been found. These solutions can be assimilated to four neutrinos and complement the previous solution identified with the electron. Since this set of solutions does not allow the existence of hadrons, the existence of a central hole in the plane of the compacted dimensions is postulated. By assuming this postulate we can obtain complementary solutions formed by a surface wave plus any of the other five solutions. These solutions are called glutinos. Linear combinations of these solutions can explain the huge variety of known particles, allowing us not only to identify their different charges but also to justify the existence of a multilinear system for hadron masses, as advocated by Palazzi. The proposed system also predicts the size of mesons and baryons, and the internal distribution of charges. Regarding interactions, they occur via three non-linear mechanisms: by changing the refractive index of, deforming, or dragging on the propagation medium (space-time). No other interaction is possible. The first two are the source of the gravitational interaction, the residual nuclear force and the London interaction, while the last is the origin of interactions similar to the electromagnetic interaction. These interactions have been called the electrostrong, electromagnetic and electroweak interactions. We can obtain these interactions mathematically from the probability density of the wavefunction or from the wavefunction gradient.

**Category:** Quantum Physics

[10439] **viXra:1504.0178 [pdf]**
*submitted on 2015-04-22 07:50:04*

**Authors:** Emmanuel Kanambaye

**Comments:** 5 Pages. The document is in french

In this document, after presenting what we mean by "smallest length having physical sense", we propose the resulting theorem.

**Category:** Relativity and Cosmology

[10438] **viXra:1504.0177 [pdf]**
*submitted on 2015-04-22 08:23:05*

**Authors:** George Rajna

**Comments:** 14 Pages.

The main feature of dark matter is that it remains undetectable (invisible) to telescopes. But that doesn’t mean that dark matter can’t sometimes intermingle with light. Scientists have now studied the prospect that dark matter scatters starlight, creating a potentially visible luminosity around galaxies. [12]
Astronomers believe they might have observed the first potential signs of dark matter interacting with a force other than gravity. [11]
A new study of colliding galaxy clusters has found that dark matter doesn't even interact with itself. [10]
The gravitational force attracts matter, causing concentration of matter in a small space and leaving much space with low matter concentration: dark matter and energy.
There is an asymmetry between the masses of the electric charges, for example the proton and the electron, which can be understood via the asymmetrical Planck Distribution Law. This temperature-dependent energy distribution is asymmetric around the maximum intensity, where the annihilation of matter and antimatter is a high-probability event. The asymmetric sides create different frequencies of electromagnetic radiation at the same intensity level, compensating each other. One of these compensating ratios is the electron–proton mass ratio. The lower-energy side has no compensating intensity level; it is the dark energy, and the corresponding matter is the dark matter.

**Category:** Astrophysics

[10437] **viXra:1504.0176 [pdf]**
*submitted on 2015-04-22 04:41:40*

**Authors:** Centre for Democracy and Development

**Comments:** 4 Pages.

An act to regulate the acceptance and utilization of financial/material contribution of donor agencies to voluntary organisations.

**Category:** Social Science

[10436] **viXra:1504.0175 [pdf]**
*submitted on 2015-04-22 04:48:45*

**Authors:** Centre for Democracy and Development

**Comments:** 4 Pages.

Matters arising from the voting phase during the Nigeria 2015 Governorship and State House of Assembly Elections.

**Category:** Social Science

[10435] **viXra:1504.0174 [pdf]**
*submitted on 2015-04-22 05:54:02*

**Authors:** Centre for Democracy and Development

**Comments:** 17 Pages.

Understanding women, youth and other marginalised groups in political party activities in Nigeria.

**Category:** Social Science

[10434] **viXra:1504.0173 [pdf]**
*submitted on 2015-04-21 18:06:25*

**Authors:** editor Florentin Smarandache

**Comments:** 640 Pages.

Internet humor folklore, collected, selected and edited by Florentin Smarandache.
Jokes, images, and folklore in general.

**Category:** Linguistics

[10433] **viXra:1504.0172 [pdf]**
*submitted on 2015-04-22 02:41:47*

**Authors:** Rodrigo de Abreu

**Comments:** 9 Pages. Luzboa A arte da Luz em Lisboa, 14-19, Extra]muros[ (2004).

The traditional presentation of special relativity is made from a rupture with previous ideas, such as the notion of absolute motion, emphasizing the antagonism between Lorentz’s views and Einstein’s ideas. However, a weaker formulation of the postulates allows one to recover all the results of Einstein’s special relativity and reveals that both viewpoints are merely different perspectives on one and the same theory.

**Category:** Relativity and Cosmology

[10432] **viXra:1504.0171 [pdf]**
*submitted on 2015-04-21 12:33:27*

**Authors:** George Rajna

**Comments:** 17 Pages.

How can the LHC experiments prove that they have produced dark matter? They can’t… not alone, anyway. [13]
The race for the discovery of dark matter is on. Several experiments worldwide are searching for the mysterious substance and pushing the limits on the properties it may have. [12]
Dark energy is a mysterious force that pervades all space, acting as a "push" to accelerate the universe's expansion. Despite being 70 percent of the universe, dark energy was only discovered in 1998 by two teams observing Type Ia supernovae. A Type Ia supernova is a cataclysmic explosion of a white dwarf star. The best way of measuring dark energy just got better, thanks to a new study of Type Ia supernovae. [11]
Newly published research reveals that dark matter is being swallowed up by dark energy, offering novel insight into the nature of dark matter and dark energy and what the future of our Universe might be. [10]
The gravitational force attracts matter, causing concentration of matter in a small space and leaving much space with low matter concentration: dark matter and energy.
There is an asymmetry between the masses of the electric charges, for example the proton and the electron, which can be understood via the asymmetrical Planck Distribution Law. This temperature-dependent energy distribution is asymmetric around the maximum intensity, where the annihilation of matter and antimatter is a high-probability event. The asymmetric sides create different frequencies of electromagnetic radiation at the same intensity level, compensating each other. One of these compensating ratios is the electron–proton mass ratio. The lower-energy side has no compensating intensity level; it is the dark energy, and the corresponding matter is the dark matter.

**Category:** Astrophysics

[10431] **viXra:1504.0170 [pdf]**
*submitted on 2015-04-21 15:17:26*

**Authors:** Solomon I. Khmelnik

**Comments:** 11 Pages.

The structure of a wire carrying a constant current is considered.

**Category:** Classical Physics

[10430] **viXra:1504.0169 [pdf]**
*submitted on 2015-04-21 15:21:19*

**Authors:** Solomon I. Khmelnik

**Comments:** 10 Pages. in Russian

The question of the source of energy in a dust whirl is considered. Atmospheric conditions cannot be the sole source of energy, since such dust whirls also exist on Mars, where the atmosphere is absent. Below we show that the source of energy for the dust whirl is the energy of the gravitational field. A mathematical model of the dust whirl, using a system of Maxwell-like gravitational equations, is proposed. Some properties of the dust whirl are explained: the preservation of its vertical cylindrical shape, its oscillations, the chaotic trajectory of the motion of the whirl as a whole, and the expansion of the body of the whirl.

**Category:** Classical Physics

[10429] **viXra:1504.0168 [pdf]**
*submitted on 2015-04-21 09:37:32*

**Authors:** V. Purushothaman, Naveen M., Prem Kumar D., P. Manimaran, K. S. Sivaraman

**Comments:** 8 Pages.

Because frequency components interact nonlinearly with each other inside the cochlea, the loudness growth of tones is relatively simple in comparison to the loudness growth of complex sounds. The term suppression refers to a reduction in the response growth of one tone in the presence of a second tone. Suppression is a salient feature of normal cochlear processing and contributes to psychophysical masking. Suppression is evident in many measurements of cochlear function in subjects with normal hearing, including distortion-product otoacoustic emissions (DPOAEs). Suppression is also evident, to a lesser extent, in subjects with mild-to-moderate hearing loss. This paper describes a hearing-aid signal processing strategy that aims to restore both loudness growth and two-tone suppression in hearing-impaired listeners. The prescription of gain for this strategy is based on measurements of loudness by a method known as categorical loudness scaling. The proposed signal-processing strategy reproduces measured DPOAE suppression tuning curves and generalizes to any number of frequency components. The restoration of both normal suppression and normal loudness has the potential to improve hearing-aid performance.

**Category:** Digital Signal Processing

[10428] **viXra:1504.0167 [pdf]**
*submitted on 2015-04-21 08:40:29*

**Authors:** Peter J Carroll

**Comments:** 15 Pages. A rough outline of this hypothesis has already attracted over 300,000 reads on my website. One of your contributing physicists suggested that I tidy it up and offer it to viXra. Regards, Pete.

The author proposes that a reinterpretation of cosmological redshift as arising from the small positive spacetime curvature of a 4-rotating Hyperspherical Universe of constant size can eliminate the current requirement for spacetime singularities, cosmic inflation, dark matter, and dark energy from cosmological models.

**Category:** Relativity and Cosmology

[10427] **viXra:1504.0166 [pdf]**
*submitted on 2015-04-21 04:36:41*

**Authors:** Daniele Sasso

**Comments:** 13 Pages.

In the previous paper "Thermodynamics of elementary particles" we analysed the thermodynamic behavior of single elementary particles within a continuous paradigm. We already know that in electrodynamics elementary particles have a few quantum features, above all regarding the emission of electromagnetic energy when they are accelerated. We now want to specify this quantum behavior better, making use of particular mathematical functions and subsequently extending this study from electrodynamic phenomena to thermodynamics.

**Category:** High Energy Particle Physics

[10426] **viXra:1504.0165 [pdf]**
*submitted on 2015-04-21 04:39:04*

**Authors:** Blair D. Macdonald

**Comments:** 34 Pages.

Climate science's fundamental premise – assumed by all parties in the great climate debate – says the greenhouse gases – constituting less than 2% of Earth’s atmosphere; first derived by John Tyndall in his 1859 thermopile experiment, and demonstrated graphically today by infrared (IR) spectroscopy – are special because of their IR (heat) absorbing property. From this, it is – paradoxically – assumed the (remaining 98%) non-greenhouse gases N2 nitrogen and O2 oxygen are non-heat-absorbent. This paper reveals, by elementary physics, the (deceptive) role thermopiles play in this paradox. It was found that for a special group of substances – all sharing (at least one) electric dipole moment, i.e. CO2 and the other greenhouse gases – thermopiles, via the thermoelectric (Seebeck) effect, generate electricity from the radiated IR. Devices using the thermopile as a detector (e.g. IR spectrographs) discriminate, and have misinterpreted as IR absorption what are anomalies of electricity production between the sample gases and a control heat source. N2 and O2 were found to have (as all substances do) predicted vibrational modes – derived from the Schrödinger equation – at 1556 cm^-1 and 2330 cm^-1 respectively, well within the IR range of the EM spectrum, and these are clearly observed, as expected, with Raman spectroscopy, IR spectroscopy’s complementary instrument. The non-greenhouse gases N2 and O2 are thereby reclassified as greenhouse gases, and Earth’s atmospheric thermoelectric spectrum (formerly the IR spectrum) was produced and augmented with the Raman observations. It was concluded that the said greenhouse gases are not special but typical, and that all substances have thermal absorption properties, as measured by their respective heat capacities.

**Category:** Climate Research

[10425] **viXra:1504.0164 [pdf]**
*submitted on 2015-04-20 12:46:56*

**Authors:** Marius Coman

**Comments:** 4 Pages.

In the Addenda to my previous paper “On the special relation between the numbers of the form 505+1008k and the squares of primes” I defined the notions of c/m-integers and g/s-integers and showed some of their applications. In a previous paper I conjectured that, besides a few definable exceptions, the Fermat pseudoprimes to base 2 with two prime factors are c/m-primes, but I did not define those “definable exceptions”. However, in this paper I confirm one of my constant beliefs, namely that the relations between the two prime factors of a 2-Poulet number are definable without exceptions, and I make a conjecture about a generic formula for these numbers, namely that most of them are s-primes and that the exceptions must satisfy a given Diophantine equation.

**Category:** Number Theory

[10424] **viXra:1504.0163 [pdf]**
*submitted on 2015-04-20 10:19:08*

**Authors:** Rodrigo de Abreu

**Comments:** 9 Pages. Nada vezes Nove, edição extra]muros[, 25-35, "Lisboa Capital do Nada" (2001)

I was reading Richard Feynman's "The Meaning of It All" when I received an invitation to write about "Nothing": "The Meaning of Nothing".

**Category:** Education and Didactics

[10423] **viXra:1504.0162 [pdf]**
*submitted on 2015-04-20 10:32:48*

**Authors:** Randy Sorokowski

**Comments:** 1 Page.

This natural units table shows the relationship between measured properties. A description as to how to use this grid will be in the next submission.

**Category:** Nuclear and Atomic Physics

[10422] **viXra:1504.0161 [pdf]**
*submitted on 2015-04-20 06:39:55*

**Authors:** Rodrigo de Abreu

**Comments:** 18 Pages. Ciência & Tecnologia dos Materiais, Vol. 14, Nº2/3, 65-72 (2002).

In this article we establish the equations that relate the tendency towards equilibrium, obtained by applying Newton's second law, to the second law of thermodynamics. This analysis makes it possible, in a simple and direct way, to relate the final equilibrium condition corresponding to zero acceleration and velocity (static mechanical equilibrium) to the stationarity of entropy (thermodynamic equilibrium). Through the introduction of the concept of dynamic pressure, the origin, meaning and conditions of validity of some approximations of thermodynamics are determined.

**Category:** Thermodynamics and Energy

[10421] **viXra:1504.0160 [pdf]**
*submitted on 2015-04-20 08:14:03*

**Authors:** Wei-Xiong Huang

**Comments:** 5 Pages.

A substance's mass is infinitely divisible. The infinitesimally small objects of substance mass are called magnetons. The magneton's mass tends to zero but never reaches zero. Magnetism and spin are the magneton's innate qualities.
Magnetic force enables multiple magnetons to bond together in an orderly fashion, forming magneton groups. Magnetons bond into small, simple magneton groups; small, simple magneton groups bond into large, complex magneton groups. The various independent magneton groups construct the various forms of matter.
Magnetons collide with each other, transforming their spin speed and linear speed into one another. Magnetons bonded into a group all retain their original characteristics, so a magneton group has the same characteristics as the magneton. When its speed reaches the speed of light and its mass is about 10^-35 kg, such an independent magneton group is a photon.

**Category:** Astrophysics

[10420] **viXra:1504.0159 [pdf]**
*submitted on 2015-04-20 08:40:39*

**Authors:** Marius Coman

**Comments:** 6 Pages.

The study of the powers of primes has been a constant for me, probably since I first encountered “Fermat’s last theorem”. The desire to find numbers with special properties, such as, say, the Hardy-Ramanujan number, was another constant. In this paper I present a class of numbers, i.e. the numbers of the form n = 505 + 1008*k, where k is a positive integer, which, despite the fact that they do not seem, prima facie, to be “special”, appear to have a strong connection with the powers of primes: for many values of k (I show in this paper that for nine of the first twelve, and I conjecture that for infinitely many values of k), there exist primes p and q such that p^2 – q^2 + 1 = n. The special nature of the numbers of the form 505 + 1008*k is also highlighted by the fact that they are (all of the first twelve of them, as far as I checked) primes or g/s-integers or c/m-integers (I define the two new notions mentioned in the Addenda to this paper).

**Category:** Number Theory

[10419] **viXra:1504.0158 [pdf]**
*submitted on 2015-04-20 09:13:30*

**Authors:** Radwan M. Kassir

**Comments:** 5 journal pages

For relatively moving inertial frames, the constancy of the speed of light principle physically leads to time dilation in the transverse direction. This time dilation is irreconcilable in the longitudinal direction unless a length contraction in the relative motion direction is postulated. However, time dilation is contradictorily coupled with length expansion, a fact erroneously twisted in special relativity and related textbooks, as demonstrated in this paper. The typical physical demonstration of the length contraction is shown to be inconsistent and to contradict its derivation from the Lorentz transformation. The misinterpretation of the Lorentz transformation in predicting the length contraction is revealed. The constancy of the speed of light is consequently unviable.

**Category:** Relativity and Cosmology

[10418] **viXra:1504.0157 [pdf]**
*submitted on 2015-04-20 09:40:13*

**Authors:** Xinyang Deng, Yong Deng, Zhen Wang, Qi Liu

**Comments:** 22 pages.

Quantum game theory is a new interdisciplinary field between game theory and physical research. In this paper, we extend the classical inspection game into a quantum game version by quantizing the strategy space and importing entanglement between players. The quantum inspection game has various Nash equilibria depending on the initial quantum state of the game. Our results also show that quantization can help each player individually to increase his own payoff, but cannot simultaneously improve the collective payoff in the quantum inspection game.

**Category:** General Science and Philosophy

[10417] **viXra:1504.0156 [pdf]**
*submitted on 2015-04-20 04:15:13*

**Authors:** Sylwester Kornowski

**Comments:** 3 Pages.

Here, within the Scale-Symmetric Physics that leads to the atom-like structure of baryons, the antiproton-to-proton ratio in energy coordinates and a qualitative and partially quantitative description of the proton flux as a function of rigidity are presented. The obtained results are consistent with the AMS data.

**Category:** Quantum Gravity and String Theory

[10416] **viXra:1504.0155 [pdf]**
*submitted on 2015-04-20 05:30:11*

**Authors:** Rodrigo de Abreu

**Comments:** 11 Pages. Ciência & Tecnologia dos Materiais, Vol. 14, Nº 4, 36-40 (2002).

We analyse the acceleration of a mass with a simple structure, taking Thermodynamics into account. Two situations are analysed: the first, the application of a localized force to a point of the mass; the second, the application of a force to the entire mass. The two situations are not equivalent. In the first situation we have an increase in the temperature of the mass, resulting from an internal damping, during a transient.

**Category:** Thermodynamics and Energy

[10415] **viXra:1504.0154 [pdf]**
*submitted on 2015-04-19 16:53:37*

**Authors:** Ramzi Suleiman

**Comments:** 30 Pages.

Einstein's theory of special relativity (SR) dictates, as a force majeure, an ontic view, according to which relativity is a true state of nature. For example, the theory's solution to the famous twin paradox prescribes that the "traveling" twin returns truly and verifiably younger than the "staying" twin, thereby implying that the "traveling" twin returns to the future.
Here I propose an epistemic view of relativity, according to which relativity results from a difference in knowledge about Nature between observers who are in motion relative to each other. Utilizing this postulation, together with SR's first axiom, I construct an epistemic relativity theory (ER) for the dynamics of moving bodies in inertial systems. I show that ER solves the twin paradox such that the rejoining twins have aged equally. Tests of the theory's time transformation show that although the theory contradicts the Lorentz invariance principle (LI), it accords nicely with the results of the famous Michelson-Morley experiment and the well-known Frisch and Smith muon decay experiment. More importantly, the theory accounts for the linear Sagnac effect, which starkly disobeys both LI and SR. It also precisely predicts the results of several recent neutrino velocity experiments conducted by OPERA and other collaborations. I explain why the experimental setups of the linear Sagnac and neutrino velocity experiments qualify them as stringent falsification tests for LI and SR.
In another paper cited here I show that the application of ER to cosmology and astrophysics proves quite potent in providing plausible answers to key cosmological questions, including dark matter and dark energy, and the evolutionary timeline of the nucleosynthesis of the chemical elements.
The theory's prediction concerning the kinetic energy density as a function of velocity reveals that for approaching bodies the dependence of energy on velocity is similar, although not identical, to the one prescribed by SR. However, for departing bodies the theory predicts that the kinetic energy density will monotonically increase with velocity up to a maximal value, after which it will monotonically decrease more steeply, reaching zero at a velocity equal to the velocity of light. Most strikingly, the breakdown of the acknowledged relationship between energy and velocity is predicted to occur at velocity v = Φc, where c is the velocity of light and Φ is the famous Golden Ratio (≈ 0.618). This result echoes a recent finding demonstrating that the quantum criticality of cobalt niobate atoms exhibits Golden Ratio symmetry. Another peculiar Golden Ratio symmetry of the predicted energy density indicates that at v = −Φc (approaching bodies), the energy density is equal to 1 + Φ (= 1/Φ) ≈ 1.618. No less surprising, we find that the maximal energy density at v = Φc, relative to the rest-frame energy density, is Φ^5 ≈ 0.09016994, which precisely equals Hardy's probability of entanglement. The emergence of these results from a deterministic relativity theory based on SR's first axiom plus an axiom specifying light as the information carrier is puzzling. One possible explanation is to attribute their emergence to mere coincidence. However, given the many confirmed predictions of the theory, including in cosmology, and the unlikelihood of such a coincidence actually happening, this explanation is highly improbable. Another possibility worth pursuing is that ER reveals more than one thread for a possible connection between an epistemic view of relativity and quantum mechanics, with the Golden Ratio symmetry playing a key role.

**Category:** Relativity and Cosmology

[10414] **viXra:1504.0153 [pdf]**
*submitted on 2015-04-19 19:18:27*

**Authors:** Marius Coman

**Comments:** 4 Pages.

In this paper I present the observation that the formula p^2 – q^2 + 1, where p and q are primes with the special property that the sums of their digits are equal, often leads to primes (having, of course, a digital root equal to 1, since p and q, having the same digital sum, implicitly have the same digital root) or to special kinds of semiprimes: some of them named by me, in a few previous papers, c/m-primes, and some of them named by me, in this paper, g-primes and s-primes respectively. Note that I chose the names “g/s-primes” instead of “g/s-semiprimes” to avoid confusion with the names “g/s-composites”, which I intend to define and use in further papers.

**Category:** Number Theory

[10413] **viXra:1504.0152 [pdf]**
*submitted on 2015-04-20 02:37:56*

**Authors:** Rodrigo de Abreu

**Comments:** 5 Pages. Ciência & Tecnologia dos Materiais, Vol. 13, Nº 1, 44-48 (2001).

We consider a system composed of two subsystems separated by a movable adiabatic wall. Each subsystem i (i = A, B) consists of Ni gas molecules. The final equilibrium condition corresponds to the equality of the pressures and temperatures of A and B. However, this result has been questioned and has generated controversy in articles and books. We show the origin of this controversy and how to resolve it: the conditions dQA = 0 and dQB = 0, imposed on the equations obtained from the first law of thermodynamics and based on the adiabaticity of the wall, are incompatible with the condition of global entropy increase in the spontaneous transformation produced by the motion of the wall until the pressures and temperatures become equal, at which point the equilibrium condition dST = 0 holds. We draw conclusions that diverge from those of a recently published article (Brogueira, P. and Dias de Deus, J., Gazeta de Física, vol. 18, Fasc. 1, 19 (1995)).

**Category:** Thermodynamics and Energy

[10412] **viXra:1504.0151 [pdf]**
*submitted on 2015-04-20 03:29:27*

**Authors:** JinHua Fei

**Comments:** 14 Pages.

Using the methods of Reference [1], this paper obtains a good upper bound for the exceptional real zero of the Dirichlet L-function.

**Category:** Number Theory

[10411] **viXra:1504.0150 [pdf]**
*submitted on 2015-04-19 12:23:22*

**Authors:** Rodrigo de Abreu

**Comments:** 7 Pages. Técnica 3, 15-21 (1994), (Int. Conf. on Phys. Ed. "Light and Information", Univ. do Minho, Braga, Portugal (1993).

The fundamental aim of this article is to show that, for an ideal gas defined through p = αu, this relation between force and energy contains the whole thermodynamic information about the system. As a matter of fact, we show that there is no need for an a priori introduction of the variables temperature or entropy, since they result from the above relation and from the Energy Conservation Principle. Previous tautological treatments are thus eliminated, and the equations p = αu and pV = BT are related with generality. The theory is general since the class of ideal gases considered includes the photon gas, which, of course, is ever present.

**Category:** Thermodynamics and Energy

[10410] **viXra:1504.0149 [pdf]**
*submitted on 2015-04-19 13:56:45*

**Authors:** Marius Coman

**Comments:** 2 Pages.

In this paper I present the following observation: concatenating the number p^2 – 1, where p is a prime of the form 6*k – 1, with the digit 1 to the right often yields a prime or a c-prime; likewise, concatenating the number p^2 – 1, where p is a prime of the form 6*k + 1, with the digit 1 to the right often yields a prime or an m-prime.
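
The construction is easy to test. A minimal sketch that forms the concatenation and checks only ordinary primality (the author's c-prime/m-prime classifications, defined in his other papers, are not implemented; `append_one` is my name for the helper):

```python
def is_prime(m):
    """Trial-division primality test, adequate for small m."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

def append_one(p):
    """Concatenate the digit 1 to the right of p**2 - 1."""
    return int(str(p * p - 1) + "1")
```

For p = 5 (of the form 6k – 1) this gives 241, a prime; for p = 11 it gives 1201, also a prime; for p = 13 it gives 1681 = 41^2, which is not prime.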

**Category:** Number Theory

[10409] **viXra:1504.0148 [pdf]**
*submitted on 2015-04-19 10:31:57*

**Authors:** Julian Borchardt

**Comments:** 45 Pages.

We show that the quarterly updates about the risk of PML during natalizumab therapy, while in principle helpful, underestimate the real incidences systematically and significantly. Calculating the PML incidences using an appropriate method and on realistic assumptions, we obtain estimates that are up to 80% higher. In fact, with the recent paper [Plavina et al 2014], our approximate incidences are up to ten times as high. The present article describes the shortcomings of the methods used in [Bloomgren et al 2012] and by Plavina et al for computing incidences, and demonstrates how to properly estimate the true (prospective) risk of developing PML during natalizumab treatment. One application is that the newest data concerning the advances in risk mitigation through the extension of dosing intervals, although characterised as not quite statistically significant, are in fact significant. Lastly, we discuss why the established risk-stratification algorithms, even when the PML incidences are assessed correctly, are no longer state-of-the-art; in the light of all the progress that has been made so far, already today it is possible to reliably identify over 95% of patients in whom (a personalised regimen of) natalizumab should be very safe.

**Category:** Quantitative Biology

[10408] **viXra:1504.0147 [pdf]**
*submitted on 2015-04-19 11:41:04*

**Authors:** Koji Nagata, Tadao Nakamura

**Comments:** 8 Pages.

We study the relation between hidden variables theories and quantum computation. We discuss an inconsistency between a hidden variables theory and the controllability of quantum computation. To derive the inconsistency, we use the maximum value of the square of an expected value. We propose a solution of the problem by using a new hidden variables theory. We also discuss an inconsistency between hidden variables theories and the double-slit experiment, the most basic experiment in quantum mechanics. This experiment can serve as a simple detector of a Pauli observable. We cannot accept hidden variables theories to simulate the double-slit experiment in a specific case. Hidden variables theories may not depict a quantum detector. This is a profound problem in quantum measurement theory.

**Category:** Quantum Physics

[10407] **viXra:1504.0146 [pdf]**
*submitted on 2015-04-19 06:45:46*

**Authors:** Ramesh Chandra Bagadi, Roderic S. Lakes

**Comments:** 3 Pages.

In this research monograph, some additional constraints on the recursive phonetic expression of any alphabet in terms of one or more alphabets are presented.

**Category:** General Mathematics

[10406] **viXra:1504.0145 [pdf]**
*submitted on 2015-04-19 07:15:51*

**Authors:** George Rajna

**Comments:** 7 Pages.

Unambiguous detection of individual gravitons, though not prohibited by any fundamental law, is impossible with any physically reasonable detector. The reason is the extremely low cross section for the interaction of gravitons with matter. [5]
The changing acceleration of the electrons explains the created negative electric field of the magnetic induction, the electromagnetic inertia, the changing relativistic mass, and the Gravitational Force, giving a Unified Theory of the physical forces. Taking into account the Planck Distribution Law of the electromagnetic oscillators also, we can explain the electron/proton mass rate and the Weak and Strong Interactions. Since the gravitational force is basically a magnetic force, the matter-antimatter gravitational repulsion makes sense.

**Category:** Quantum Gravity and String Theory

[10405] **viXra:1504.0144 [pdf]**
*submitted on 2015-04-19 08:05:27*

**Authors:** Ervin Goldfain

**Comments:** 6 Pages. Under construction (first draft). References not included.

There are several instances where non-analytic functions and non-integrable operators are deliberately excluded from perturbative Quantum Field Theory (QFT) and Renormalization Group (RG) to maintain internal consistency of both frameworks. Here we briefly review these instances and suggest that they may be a portal to an improved understanding of the asymptotic sectors of QFT and the Standard Model of particle physics (SM).

**Category:** High Energy Particle Physics

[10404] **viXra:1504.0143 [pdf]**
*submitted on 2015-04-18 17:04:29*

**Authors:** R.J. Tang

**Comments:** 2 Pages.

There is a profound principle in the universe that says there is no central entity or notion anywhere, and that nothing has any more significance than anything else in physical terms. This principle dispelled the ‘Earth-centric’ idea and, later, the Newtonian absolute space-time concept. It is a universally accepted principle in modern science. If math and physics are intertwined inextricably, then it seems natural numbers ought to have an equal standing with any other numbers: irrational, complex, or even numbers yet to be invented.
Is there any underlying physical reason for the natural numbers’ special status? Or are the natural numbers just a convenient way for people to count, invented by macro intelligent beings like us?
Since all natural numbers are mere derivatives of the number ‘1’, let us look closely at what this number one really means. There are two broad meanings of the number one, corresponding to different mental constructs used to define ‘1’. First, it registers a definitive state of some physical attribute, such as ‘presence’ or ‘non-presence’. We find this application in information theory, statistical physics, counting, and so on. The second interpretation of the number one is that it denotes the ‘wholeness’ of an entity. Yet another definition arises from set theory, and still another from the order of one element in a sequence relative to the other elements. Remarkably, the concept of natural numbers can come from many different constructs, just as it is remarkable that natural numbers arise in many different domains of the physical world.
In physics, natural numbers had virtually no sacred place prior to the establishment of quantum mechanics. After all, we do not need any natural numbers in our gravity equations or in Maxwell's electromagnetic wave equations. Some sharp observers would argue that the ‘R squared’ contains the natural number 2. However, on close examination, that 2 is merely a mathematical notation for a number multiplied by itself, and it has no corresponding physical object or attribute. The fact that there is no natural number in the formulae represents the idea that space-time is fundamentally smooth. For instance, there is no law in physics that requires 7 bodies (non-quantum-mechanical) to form a system in equilibrium.
Had we obtained the capability of calculus before we could count our fingers, we probably would have been more familiar with the number e than with 1-2-3. We might have used e/2.718 to represent mundane singletons. There is no logical requirement that we couldn't or shouldn't do it. It is all due to the accident that people happened to need to count their fingers earlier than the invention of calculus. There is no physical evidence that the number ‘2’ is more significant than any other number in the natural world.
However, with the standard model of quantum mechanics, energy is quantized; that is, it can only take natural-number values. This idea profoundly altered the status of natural numbers in physics and directly contradicts the ‘no center in the universe’ principle. In this sense it is far more unorthodox than the two relativity theories combined, because the latter in fact reinforce the ‘no center in the universe’ law. Why does the quantum have to be an integer multiple of a certain energy level, and not an irrational number like the square root of 17, or the quantity e? Does it really mean there are aristocrats in the number world, where some numbers are nobler than others? Were the ancient Greek mathematicians right after all, who worshiped the sacredness of the natural numbers and even threw the discoverer of irrational numbers into the sea?
From this standpoint we can almost say that quantum theory has some bad taste among all branches of natural science.
Even before quantum theory germinated, people should have noticed the unusual role natural numbers play in rudimentary chemistry. For instance, why do two hydrogen atoms, and not five, combine with one oxygen atom to form a water molecule? Had scientists been sharp enough back then, they ought to have been alarmed by the oddity underlying the strange status of natural numbers. It could almost have been an indirect way to deduce the quantized nature of electrons.
Fundamentally, if natural numbers indeed play a very unusual role in nature, then nature resembles a codebook, and not just as a coarse analogy. It is the ultimate codebook, filled with rules for a limited number of building-block codes. The DNA code is an excellent example.
If it is a codebook, we are inevitably led to surmise whether information itself is the ultimate being in the universe. It is probably not electrons, strings, quarks, or whatever ‘entities’ people have claimed. It is information that is the only tangible and verifiable entity out there. Everything else is a mirage or a manifestation of some underlying information, the codebook.
In this sense physics has somewhat gone awry by focusing on the wrong things, the ‘attributes’ such as momentum, position, and so on. Instead, information is what contemporary physicists should talk about and experiment with. Otherwise, physicists would have no right to laugh at the medieval scholars who based their intellectual work on the measurement of the distance between a subject and God's throne.
Nature has revealed her latest hand of cards to us. It looks like the final hand, but no one can be sure, of course.

**Category:** Number Theory

[10403] **viXra:1504.0142 [pdf]**
*submitted on 2015-04-18 17:06:16*

**Authors:** R.J. Tang

**Comments:** 2 Pages.

Simplicity means that relatively few theories and mathematical models can explain many phenomena, while complexity is the opposite, where there seems to be an unending need to invent new theories. By this definition, physics and astronomy are in the former camp, and social science and biology belong to the latter.
Why is the universe even understandable? This itself is hard to understand, according to Einstein. I propose a line of reasoning here. Simplicity is the result of long-term evolution in a closed system. The resulting equilibrium gives rise to simplicity. The infinite possibilities open to any member of the system have been largely reduced to a highly confined set of options. Most of the possibilities are prohibited by forces that were cancelled out long ago during the long evolution. Because of this simplicity, there appear to be causal effects. In other words, causality is a direct product of simplicity. Take our universe as an example: the universe is, by and large, in equilibrium. Only a handful of forces remain. Because there are relatively few forces and laws, the universe appears orderly, and thereby allows mathematics to even exist and work. Mathematics owes its existence to the equilibrium of the universe. Equilibrium brings orderliness and slowness of change. Just imagine: if one puts one stone next to another stone, and, because the stones decay so fast, by the end of this action of moving them together one counts zero stones, then the law of addition would be forever different from what we know today. In this sense, math and physics have a ‘this-worldliness’ feature, and are knowledge localized to this universe at this phase of equilibrium. They could be vastly different in other possible states of the universe, or in other universes.
A corollary is that the rules governing a simple universe are discoverable and free of controversy. The simple nature of the rules makes it hard to miss the mark, so to speak. Once the framework of the rules has been tested true numerous times, what is left is the refining of details. Contradictions between the rules and reality should be worked away over time. This is good news for scientists, because it dissolves the age-old anxiety of finding all theories invalid one morning.
One notable exception to the simplicity of the universe is the complexity of the biosphere. Because the biosphere is inherently expansive and interactive, we cannot reduce its theories to a few laws and mathematical models. The biosphere is NOT an equilibrium system. Therefore it is very hard to apply causal reasoning to explain human society, for instance. It is very hard to generalize theories or apply mathematics in the biosphere or human society, as we are able to in cosmology.
The human brain is wired to understand simple things, not complex things. We seek patterns and generalize. This skill helped tremendously in our evolutionary past. For instance, our eyes are adept at figuring out patterns like straight lines, and are especially good at spotting a moving object against a static background. The predisposition to seek simplicity gave humans a survival advantage in their evolutionary history. We appreciate simplicity over complexity. Humans possess limited computational power, and it is most efficient to apply that limited resource to a fast algorithm; the design principle of a fast algorithm is simplicity. There is also an aesthetic appreciation of simplicity in human eyes, whether for a new physics theory or for the design of a gadget. The propensity for seeking simplicity is a very human-specific trait, and has nothing to do with whether the world is in reality simple or complex.
The coincidence of the simplicity of the universe and humans' preference for simplicity is fortunate and fruitful. Specifically, in math and physics the coincidence has yielded amazing results. There is no reason to doubt that more amazing discoveries will surface in the future. However, a grain of salt must be added, so that we remain conscious that there is no mysterious process or agent involved in the coincidence; this article has hopefully explained why. In fact, if we are blindly led by our pursuit of simplicity, we might fall into the traps of natural complexity. Any attempt to gain simplistic insight into a complex system is not a good idea. Our brain comes to mind as an example of a complex system: there are so many facets to this single object that no one can claim a brain can be modeled with a finite number of rules.
However, the simplicity on the surface of the natural world might be just a disguise over a chaotic and unpredictable reality. The equilibrium masks much of the chaos, and most noises or complex nuisances cancel each other out. What is left is a skin of poetic simplicity. Underneath the skin, things might not be so smooth, elegant, or simple after all. It is a possibility. We will probably see some hints as we get more refined data, better models, and more powerful observation tools.
Another aspect of simplicity is that it indicates the death process of the universe toward infinite entropy. By the second law of thermodynamics, our universe is slipping into this final death of maximum entropy. An accompanying result of this process is that the universe becomes simpler and simpler. Imagine a universe where homogeneity rules and any imaginable infinitesimal particles and forces are distributed absolutely uniformly and cannot be changed a bit. This would be the simplest state, and would require the simplest mathematics or physics. If we are slipping in that direction, which I think we are, then we should not be surprised that the physical laws describing the universe are becoming simpler. Our current simple physical forces and laws hint at that.

**Category:** History and Philosophy of Physics

[10402] **viXra:1504.0141 [pdf]**
*submitted on 2015-04-18 17:21:12*

**Authors:** Marius Coman

**Comments:** 8 Pages.

In this paper I define the “Smarandache-Coman sequences” as “all the sequences of primes obtained from the Smarandache concatenated sequences using basic arithmetical operations between the terms of such a sequence, like, for instance, the sum or the difference of two consecutive terms plus or minus a fixed positive integer, the partial sums, any other possible basic operations between terms like a(n) + a(n+2) – a(n+1), or on a term like a(n) + S(a(n)), where S(a(n)) is the sum of the digits of the term a(n), etc.”, and I also present a few such sequences.

**Category:** Number Theory

[10401] **viXra:1504.0140 [pdf]**
*submitted on 2015-04-18 21:01:02*

**Authors:** Marius Coman

**Comments:** 1 Page.

In this paper I conjecture that there exists an infinity of primes of the form 2*p^2 – p – 2, where p is a Sophie Germain prime, and I list the first few terms from this set together with a few larger ones.
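
The first few terms are straightforward to enumerate. A minimal sketch (the function name `conjecture_terms` is mine, not the author's):

```python
def is_prime(m):
    """Trial-division primality test, adequate for small m."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

def conjecture_terms(limit):
    """Primes of the form 2*p**2 - p - 2, for Sophie Germain primes p < limit."""
    terms = []
    for p in range(2, limit):
        # p is a Sophie Germain prime when both p and 2p + 1 are prime
        if is_prime(p) and is_prime(2 * p + 1):
            n = 2 * p * p - p - 2
            if is_prime(n):
                terms.append(n)
    return terms
```

For p < 30 this yields 13, 43, 229, 1033 (p = 29 is a Sophie Germain prime, but 2·29^2 – 29 – 2 = 1651 = 13·127 is composite, so not every Sophie Germain prime contributes a term).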

**Category:** Number Theory

[10400] **viXra:1504.0139 [pdf]**
*submitted on 2015-04-18 23:52:50*

**Authors:** Ramesh Chandra Bagadi, Roderic S. Lakes

**Comments:** 2 Pages.

In this research monograph, a novel type of inner product and outer product is introduced.

**Category:** General Mathematics

[10399] **viXra:1504.0138 [pdf]**
*submitted on 2015-04-19 02:33:16*

**Authors:** Marius Coman

**Comments:** 2 Pages.

In this paper I conjecture that there exist an infinity of squares of primes of the form 109 + 420*k, also an infinity of primes of this form, and an infinity of semiprimes p*q of this form such that q – p = 60.
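
Instances of all three forms can be checked mechanically. A minimal sketch classifying a number n = 109 + 420*k (the function name and labels are mine, not the author's):

```python
import math

def is_prime(m):
    """Trial-division primality test, adequate for small m."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

def classify(n):
    """Label n as a prime, a square of a prime, or a semiprime p*q with q - p = 60."""
    if is_prime(n):
        return "prime"
    r = math.isqrt(n)
    if r * r == n and is_prime(r):
        return "p^2"
    for p in range(2, r + 1):
        if n % p == 0:
            q = n // p
            if is_prime(p) and is_prime(q) and q - p == 60:
                return "p*q, q-p=60"
            break  # smallest divisor found; n is not of the claimed forms
    return None
```

For k = 0, 1, 2 this gives 109 (prime), 529 = 23^2 (square of a prime), and 949 = 13·73 (semiprime with 73 – 13 = 60).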

**Category:** Number Theory

[10398] **viXra:1504.0137 [pdf]**
*submitted on 2015-04-17 21:44:22*

**Authors:** Brian B.K. Min

**Comments:** 8 Pages.

Our space-time is postulated to have the following characteristics: (1) the space is an ocean filled with “Gamma elements” having energy and mass and of a certain size; (2) both time and distance are discretized by the process of light propagation from one Gamma element to the next in some process of relativistic boost of the internal energy. These postulates provide us with a theoretical basis to explain why the speed of light, c, should remain constant in all inertial reference frames. The discrete process of light propagation leads us to a set of natural units. As a result, new physically based “Planck element units” may be defined, with the new mass scale being ~7.37 × 10^-51 kg (~4.14 × 10^-15 eV/c^2). The length scale is estimated from the wavelength of the highest-energy gamma rays, in the range of 1 × 10^-19 m – 1 × 10^-25 m, with the new time scale then being in the range of 3.34 × 10^-28 s – 3.34 × 10^-34 s. The Planck element units are shown to relate to the fundamental constants c (speed of light), G (gravitational constant), and h (Planck constant) with the same dimensional relationship as the conventional Planck units, but the length and time units are larger than those of the latter by factors of 10^9 – 10^16, while the mass is smaller by a whopping factor of 10^43.

**Category:** Relativity and Cosmology

[10397] **viXra:1504.0136 [pdf]**
*submitted on 2015-04-17 21:50:01*

**Authors:** Brian B.K. Min

**Comments:** 13 Pages.

We postulate that our space is filled by “Gamma elements” having energy and mass, with the size approximately given by the wavelength of the highest-energy gamma rays, and that time and distance are both discretized by the process of light propagation from one Gamma element to the next. These postulates provide us with a theoretical ground to explain why the speed of light, c, should remain constant to all observers regardless of their inertial frames of reference. On the cosmological scale, the energy of the Gamma elements filling space has been equated to the energy represented by the cosmological constant, i.e., the dark energy. When applied to the expanding universe, the EST model brings an additional 25% of the total mass simply as the relativistic correction to the non-relativistic results; hence the dark matter is closely identified as merely the relativistic correction to the dark energy predicted by the Friedmann equation. The agreement with the observed magnitudes convincingly supports this interpretation of dark energy and dark matter.

**Category:** Relativity and Cosmology

[10396] **viXra:1504.0135 [pdf]**
*submitted on 2015-04-17 21:55:26*

**Authors:** Brian B.K. Min

**Comments:** 9 Pages.

A new relativistic quantum wave equation has been derived by applying the quantum prescription to the momentum and the kinetic energy rather than to the momentum and the total energy, since after all it is the kinetic energy that generates the momentum. The resulting equation reduces to the Schrödinger equation in the nonrelativistic limit and to the Klein-Gordon equation for “massless particles” in the relativistic limit, i.e., if the velocity of the particle approaches that of light, c. For massive particles in general, the new equation deviates from the Klein-Gordon equation. The same equation is shown to decouple according to the Dirac formalism, yielding a modified form of the Dirac equation. When applied to a rest particle, the modified Dirac equation is shown to avoid a negative energy solution and instead include a constant solution. The other, the time-dependent particle solution of the modified Dirac equation, has the characteristic frequency Mc^2/(ħ/2), i.e., twice that of the Dirac solutions, Mc^2/ħ.

**Category:** Quantum Physics

[10395] **viXra:1504.0134 [pdf]**
*submitted on 2015-04-17 08:19:42*

**Authors:** Bishnu Charan Behera

**Comments:** 2 Pages.

This is an algorithm with the same O(n) time complexity as linear search, but with better execution time in practice. Let A[ ] be an array of some size N. If the element we want to find is at any position before N/2, MY-SEARCH and linear search have the same execution time; the advantage appears when the search element is after position N/2. Suppose the element we want is at the Nth position: linear search finds it only after N iterations, but MY-SEARCH finds it after the first iteration itself.
When the size is something like 10 or 15 this hardly matters, but consider the case when the size is 100,000,000 or larger. With linear search, the loop may have to run 100,000,000 times, whereas with MY-SEARCH the desired element can be returned after just one iteration.
This shows how much wasted work MY-SEARCH can prevent.
Thank you.
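
The abstract does not spell out the mechanism, but the stated behaviour (an element at the last position found on the first iteration, worst case still O(n)) matches a scan from both ends of the array moving inward; the sketch below is a guess at that scheme in Python, not the author's code.

```python
def my_search(a, target):
    """Bidirectional linear search: compare elements from both ends,
    moving inward.  Worst case remains O(n), but an element near the
    end of the array is found after only a few iterations."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        if a[lo] == target:
            return lo
        if a[hi] == target:
            return hi
        lo += 1
        hi -= 1
    return -1  # not found

# An element at the last position is found on the first iteration,
# whereas a left-to-right scan would need len(a) iterations.
print(my_search([3, 1, 4, 1, 5, 9, 2, 6], 6))  # → 7
```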

**Category:** Data Structures and Algorithms

[10394] **viXra:1504.0133 [pdf]**
*submitted on 2015-04-17 09:31:52*

**Authors:** You-Bang Zhan

**Comments:** 8 Pages.

The discrimination of quantum operations is an important subject in quantum information processing. For local discrimination, existing research has pointed out that, since any operation performed on a quantum system must be compatible with the no-signaling constraint, local discrimination between quantum operations of two spacelike separated parties cannot be realized. We found, however, that local discrimination of quantum measurements may not be restricted by no-signaling if more multi-qubit entanglement and selective measurements are employed. In this paper we report that local quantum measurement discrimination (LQMD) can be completed via selective projective measurements and numerous seven-qubit GHZ states without the help of classical communication, provided the two observers agree in advance that one of them should measure her/his qubits before an appointed time. As an application, it is shown that teleportation can be completed via the LQMD without classical information. This means that superluminal communication could be realized by using the LQMD.

**Category:** Quantum Physics

[10393] **viXra:1504.0132 [pdf]**
*submitted on 2015-04-16 23:25:44*

**Authors:** V.B. Smolenskii

**Comments:** 1 Page.

We prove that negative length and negative mass are absent in nature if there are negative charges with factors, in the form of dimensionless fundamental constants, not equal to one.

**Category:** Relativity and Cosmology

[10392] **viXra:1504.0131 [pdf]**
*submitted on 2015-04-17 00:40:25*

**Authors:** Bojan Vasiljević

**Comments:** 1 Page.

Here we present a very slight improvement of Fermat's factorization method: instead of looking for odd N, we look for odd or even B.
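
For context, the classic Fermat method the abstract builds on searches for a and b with N = a² − b² = (a − b)(a + b); the paper's odd/even-B refinement is not reproduced here. A minimal sketch of the baseline method:

```python
import math

def fermat_factor(n):
    """Classic Fermat factorization for odd n > 1: find a, b with
    n = a^2 - b^2, giving the factors (a - b) and (a + b)."""
    assert n % 2 == 1 and n > 1
    a = math.isqrt(n)
    if a * a < n:
        a += 1  # start at ceil(sqrt(n))
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:  # b2 is a perfect square
            return a - b, a + b
        a += 1

print(fermat_factor(5959))  # → (59, 101)
```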

**Category:** General Mathematics

[10391] **viXra:1504.0130 [pdf]**
*submitted on 2015-04-17 01:53:05*

**Authors:** Branko Kozulic, Matti Leisola

**Comments:** 21 Pages.

Over two millennia ago Socrates was pondering whether our Universe and all things in it are governed by randomness or by a regulating intelligence. This philosophical question has been alive till the present day, since the
proponents of neither side have been able to convince their opponents. Scientists seldom express or recognize clearly their philosophical presuppositions and many think that there is no room for philosophy in science. Our view is that although science cannot determine which philosophical view is correct, it can show which one is wrong. Here we critically review the experimental results obtained during the past twenty years by Jack W. Szostak and his co-workers relating to functional information among random RNA and protein sequences. We
explain in detail why their experiments with random or partially randomized protein sequences do not mimic the processes that take place in natural populations. Simple calculations show that in the laboratory scientists have searched a much larger sequence space than could have been searched by random natural processes. We further argue that the discovery of singletons and of protein-protein-interaction networks has removed the randomness concept from biochemistry, and that the neo-Darwinian view of the living world is false. We see faulty Hegelian logic as a major reason for the survival of the illusion that evolution is true, and the same logic is misleading many scientists into accepting empty phrases like “intrinsically disordered proteins” as existentially meaningful.

**Category:** Biochemistry

[10390] **viXra:1504.0129 [pdf]**
*submitted on 2015-04-17 05:21:54*

**Authors:** George Rajna

**Comments:** 19 Pages.

Astronomers from Chalmers University of Technology have used the giant telescope Alma to reveal an extremely powerful magnetic field very close to a supermassive black hole in a distant galaxy. The results appear in the 17 April 2015 issue of the journal Science. [14]
Quasars, even those that are billions of light years away, are some of the “brightest beacons” in the universe. Yet how can quasars radiate so much energy that they can be seen from Earth? One explanation is that at each quasar’s center is a growing supermassive black hole (SMBH). [13]
If dark matter comes in both matter and antimatter varieties, it might accumulate inside dense stars to create black holes. [12]
For a long time, there were two main theories related to how our universe would end. These were the Big Freeze and the Big Crunch. In short, the Big Crunch claimed that the universe would eventually stop expanding and collapse in on itself. This collapse would result in…well…a big crunch (for lack of a better term). Think “the Big Bang”, except just the opposite. That's essentially what the Big Crunch is. On the other hand, the Big Freeze claimed that the universe would continue expanding forever, until the cosmos becomes a frozen wasteland. This theory asserts that stars will get farther and farther apart, burn out, and (since there are no more stars being born) the universe will grow entirely cold and eternally black. [11]
Newly published research reveals that dark matter is being swallowed up by dark energy, offering novel insight into the nature of dark matter and dark energy and what the future of our Universe might be. [10]
The gravitational force attracting the matter, causing concentration of the matter in a small space and leaving much space with low matter concentration: dark matter and energy.
The asymmetry between the masses of the electric charges, for example the proton and the electron, can be understood via the asymmetric Planck distribution law. This temperature-dependent energy distribution is asymmetric around the maximum intensity, where the annihilation of matter and antimatter is a high-probability event. The asymmetric sides create electromagnetic radiation of different frequencies at the same intensity level, compensating each other. One of these compensating ratios is the electron – proton mass ratio. The lower-energy side has no compensating intensity level; it is the dark energy, and the corresponding matter is the dark matter.

**Category:** Astrophysics

[10389] **viXra:1504.0128 [pdf]**
*submitted on 2015-04-16 05:04:10*

**Authors:** Hervé Le Cornec

**Comments:** 5 Pages.

Any mobile whose velocity is the sum of a rotation velocity and a translation velocity, both of constant modulus, follows a trajectory that respects Kepler's three laws. This article demonstrates this theorem and discusses it. An important result is that the mathematical structure of Newton's acceleration of attraction is forecast not as a prior assumption but as a consequence: the centripetal acceleration due to the rotation velocity.

**Category:** Classical Physics

[10388] **viXra:1504.0127 [pdf]**
*submitted on 2015-04-15 23:49:17*

**Authors:** Reginald T Cahill

**Comments:** 17 Pages.

Experiments have repeatedly revealed the existence of a dynamical structured fractal 3-space, with a speed relative to the Earth
of some 500km/s from a southerly direction. Experiments have ranged from optical light speed anisotropy interferometers to
Zener diode quantum detectors. This dynamical space has been missing from theories from the beginning of physics. This
dynamical space generates a growing universe, and gravity when included in a generalised Schrodinger equation, and light
bending when included in generalised Maxwell equations. Here we review ongoing attempts to construct a deeper theory of
the dynamical space starting from a stochastic pattern generating model that appears to result in 3-dimensional geometrical
elements, “gebits”, and accompanying quantum behaviour. The essential concept is that reality is a process, and geometrical
models for space and time are inadequate.

**Category:** Relativity and Cosmology

[10387] **viXra:1504.0126 [pdf]**
*submitted on 2015-04-15 23:56:04*

**Authors:** Reginald t Cahill

**Comments:** 11 Pages.

During the last decade the existence of space as a quantum-dynamical system was discovered, being first indicated by the measured anisotropy of the speed of EM radiation. The dynamical theory for space has been under development during that period, and has now been successfully tested against experiment and astronomical observations, explaining, in particular, the observed characteristics of galactic black holes. The dynamics involves G and alpha, the fine-structure constant. Applied to the Earth, this theory gives two observed predictions: (i) the bore hole g anomaly, and (ii) the space-inflow effect. The bore hole anomaly is caused by a black hole (a dynamical space in-flow effect) at the centre of the Earth. This black hole will be associated with space-flow turbulence, which, it is suggested, may lead to the generation of new matter, just as such turbulence created matter in the earliest moments of the universe. This process may offer a dynamical mechanism for the observed expanding Earth.

**Category:** Geophysics

[10386] **viXra:1504.0125 [pdf]**
*submitted on 2015-04-16 00:00:48*

**Authors:** Reginald T Cahill

**Comments:** 23 Pages.

The anisotropy of the velocity of EM radiation has been repeatedly detected, including the Michelson-Morley
experiment of 1887, using a variety of techniques. The experiments reveal the existence of a dynamical space
that has a velocity of some 500km/s from a southerly direction. These consistent experiments contradict the assumptions of
neo-Lorentz Relativity. The existence
of the dynamical space has been missed by physics since its beginnings. Novel and checkable phenomena then
follow from including this space in Quantum Theory, EM Theory, Cosmology, etc, including the derivation of
a more general theory of gravity as a quantum wave refraction effect. The corrected Schrodinger equation has
resulted in a very simple and robust quantum detector, which easily measures the speed and direction of the
dynamical space. This report reviews the key experimental evidence.

**Category:** Relativity and Cosmology

[10385] **viXra:1504.0124 [pdf]**
*submitted on 2015-04-16 00:03:56*

**Authors:** Reginald T Cahill

**Comments:** 8 Pages.

In 2014 Jiapei Dai reported evidence of anisotropic Brownian motion of a toluidine blue
colloid solution in water. In 2015 Felix Scholkmann analysed the Dai data and detected
a sidereal time dependence, indicative of a process driving the preferred Brownian motion
diffusion direction to a star-based preferred direction. Here we further analyse the
Dai data and extract the RA and Dec of that preferred direction, and relate the data
to previous determinations from NASA Spacecraft Earth-flyby Doppler shift data, and
other determinations. It is shown that the anisotropic Brownian motion is an anisotropic
“heating” generated by 3-space fluctuations: gravitational waves, an effect previously
detected in correlations between ocean temperature fluctuations and solar flare counts,
with the latter being shown to be a proxy for 3-space fluctuations. The dynamical 3-
space does not have a measure of energy content, but can generate energy in matter
systems, which amounts to a violation of the 1st Law of Thermodynamics.

**Category:** Relativity and Cosmology

[10384] **viXra:1504.0123 [pdf]**
*submitted on 2015-04-16 03:31:23*

**Authors:** Sylwester Kornowski

**Comments:** 5 Pages.

A single equation within a Theory of Everything would be infinitely complex, so we should instead formulate a fractal skeletal theory leading to much simpler partial theories. In such a theory no free parameters or indeterminate mathematical forms should appear. The Scale-Symmetric Theory (S-ST) is such a skeletal theory; its structure looks like a Christmas tree. Here, within a model dual to the structure of baryons, applying the S-ST, we calculate the median effective radius of the Type 1 cosmological voids in observed redshift coordinates, the number of such voids in the Universe, the quantized median effective radii of such voids, the radius of the WMAP Cold Spot, and the Cosmological Ruler. The obtained results are consistent with observational facts. Moreover, the expected void abundance is calculated. The theoretical results presented here suggest that the picture of the high-redshift Universe obtained within mainstream cosmology is misshapen.

**Category:** Quantum Gravity and String Theory

[10383] **viXra:1504.0122 [pdf]**
*submitted on 2015-04-15 14:35:31*

**Authors:** Rodolfo A. Frino

**Comments:** 6 Pages.

In this paper I derive the lepto-baryonic formula for the electric charge. The formula is based on the lepto-baryonic formula for the fine-structure constant that I published recently. This paper shows that the electric charge is a function of the ratio of the mass difference between the two lightest charged leptons (the electron and the electrino) to the mass difference between the two lightest baryons (the proton and the neutron). Thus the formula for the elementary charge is a function of the masses of four elementary particles. Two of these particles (the electron and the electrino) control the sign of the electric charge. This allows us to derive the electric charge of the positron from the electric charge of the electron by interpreting the positron, as Feynman did, as an electron of negative energy travelling backward in time.

**Category:** Quantum Physics

[10382] **viXra:1504.0121 [pdf]**
*submitted on 2015-04-15 11:13:03*

**Authors:** Th. Guyer

**Comments:** 8 Pages.

The nicest possible ABC formula in mathematics.

**Category:** Number Theory

[10381] **viXra:1504.0120 [pdf]**
*submitted on 2015-04-15 08:01:04*

**Authors:** George Rajna

**Comments:** 13 Pages.

Astronomers believe they might have observed the first potential signs of dark matter interacting with a force other than gravity. [11]
A new study of colliding galaxy clusters has found that dark matter doesn't even interact with itself. [10]
The gravitational force attracting the matter, causing concentration of the matter in a small space and leaving much space with low matter concentration: dark matter and energy.
The asymmetry between the masses of the electric charges, for example the proton and the electron, can be understood via the asymmetric Planck distribution law. This temperature-dependent energy distribution is asymmetric around the maximum intensity, where the annihilation of matter and antimatter is a high-probability event. The asymmetric sides create electromagnetic radiation of different frequencies at the same intensity level, compensating each other. One of these compensating ratios is the electron – proton mass ratio. The lower-energy side has no compensating intensity level; it is the dark energy, and the corresponding matter is the dark matter.

**Category:** Astrophysics

[10380] **viXra:1504.0119 [pdf]**
*submitted on 2015-04-15 06:14:42*

**Authors:** Rodrigo de Abreu

**Comments:** 4 Pages. Técnica 2, 100-104 (1985)

The <

**Category:** Thermodynamics and Energy

[10379] **viXra:1504.0118 [pdf]**
*submitted on 2015-04-15 02:45:33*

**Authors:** Cheng Tianren

**Comments:** 16 Pages.

In this paper we solve a previously formulated conjecture for the cocommutative irreducible coalgebra C. We also introduce the Azumaya coalgebras over a cocommutative coalgebra R. The computation of the Brauer group BM of modified supergroup algebras over a field is performed, yielding the computation of the Brauer group of all finite-dimensional triangular Hopf algebras. We then address the question of whether the Cartan matrix of a block of a finite group cannot be arranged as a direct sum of smaller matrices. The aim of our work is to show that the topological Hochschild homology of a discrete ring R and the MacLane homology of R are isomorphic.

**Category:** Algebra

[10378] **viXra:1504.0117 [pdf]**
*submitted on 2015-04-14 13:54:17*

**Authors:** Jiri Soucek

**Comments:** 11 Pages.

In this article we consider a variant of quantum mechanics (QM) based on non-realism. There exists the theory of modified QM introduced in [1] and [2], which is based on non-realism but also contains other changes with respect to standard QM (stQM). We introduce here another non-realistic modification of QM (n-rQM) which contains minimal changes with respect to stQM. The change consists in replacing von Neumann's axiom (ensembles which are in a pure state are homogeneous) by the anti-von Neumann's axiom (any two different individual states must be orthogonal). This introduces non-realism into n-rQM. We show that the experimental consequences of n-rQM are the same as in stQM, but the two theories are substantially different. In n-rQM it is not possible to derive (using locality) the Bell inequalities. Thus n-rQM does not imply non-locality (in contrast with stQM). Because of this, locality in n-rQM can be restored. The main purpose of this article is to show what could be the minimal modification of QM based on non-realism, i.e. that the realism of stQM is completely contained in von Neumann's axiom.

**Category:** Quantum Physics

[10377] **viXra:1504.0116 [pdf]**
*submitted on 2015-04-14 11:10:18*

**Authors:** M.pooja, S.k Manigandan

**Comments:** 7 Pages.

This income tax project deals with computerizing the process of tax payment. The entire process of tax payment will be maintained in an automated way. The main objective of this project is to reduce time consumption. The income tax system has been categorized into three groups according to the mode of payment: to the central government, the state government, and the municipality. The online tax payment system will be helpful for paying money from anywhere and at any time. Earlier it was impossible to pay the money online using a debit or credit card; the main objective of our system is to allow payment by debit or credit card. Our project includes the concept of paying money through a card number provided by the bank. It is very secure and easy to reimburse. The card security code provides secure money transactions in the system; on the other hand, the user can pay the tax through an account number and bank name. The user can view their tax calculation and money-transaction status, i.e. whether the payment succeeded or not, and can monitor their entire tax calculation through the tax-view module in the system. The admin login is used to log in on the admin side, which ensures the security of money transactions and the confidentiality of user information. The admin provides security to the users; the admin view is used to view the tax payments of the logged-in user, and the admin monitors user activities through the admin-view module.

**Category:** Data Structures and Algorithms

[10376] **viXra:1504.0115 [pdf]**
*submitted on 2015-04-14 12:14:30*

**Authors:** Herbert Weidner

**Comments:** 6 Pages.

Precision measurements of the 0S0 frequency after eleven earthquakes in the years 2001 to 2011 by fifteen superconducting gravimeters show that this resonance frequency of the Earth has temporarily changed greatly. In addition to a slow drift there are also wide frequency jumps.

**Category:** Geophysics

[10375] **viXra:1504.0114 [pdf]**
*submitted on 2015-04-14 06:42:07*

**Authors:** Rodrigo de Abreu

**Comments:** 13 Pages. Portuguese

We consider a mass immersed in an infinite atmosphere consisting of a classical ideal gas in the presence of a constant gravitational field. The mass moves from an initial height, where it is at rest, to a final equilibrium height. The entropy variation of the atmosphere due to the motion of the mass between the initial and final heights is determined by two methods: through the relations that result from considering the energy of the atmosphere as a function of the height of the mass and of the entropy, and through Maxwell-Boltzmann statistics. The entropy variation is calculated for the irreversible transformation resulting from the change of position of the mass. The interpretation of the entropy-variation calculation via Maxwell-Boltzmann statistics is compared with the calculation by the other method. The consistency of the two analyses is shown and interpreted physically. As a particular case, we consider a part of the atmosphere confined to a cylinder fitted with a piston. The analogy between this model and the one considered previously allows us to illustrate, through a concrete model, the unified treatment of the interaction between subsystems in irreversible transformations, the origin of a well-known controversy.

**Category:** Thermodynamics and Energy

[10374] **viXra:1504.0112 [pdf]**
*submitted on 2015-04-14 07:11:26*

**Authors:** Sakineh Jahangirzadeh, Ebrahim Farshidiy

**Comments:** 10 Pages.

Data-weighted averaging (DWA) algorithms work well for relatively low quantization levels, but they begin to present significant problems when internal quantization levels are extended further. Each additional bit of internal quantization causes an exponential increase in the complexity, size, and power dissipation of the DWA logic and DAC. This is because DWA algorithms work with unit-element DACs: the DAC must have 2^N − 1 elements (where N is the number of bits of internal quantization), and the DWA logic must deal with the control signals feeding those 2^N − 1 unit elements. This paper discusses the prospect of using a segmented feedback path with coarse and fine signals to reduce DWA complexity for modulators with large internal quantizers. However, segmentation also creates additional problems. A mathematical analysis of the problems involved with segmenting the digital word in a ΣΔ ADC feedback path is presented, along with a potential solution that frequency-shapes this mismatch error. A potential circuit design for the frequency-shaping method is presented in detail. Mathematical analysis and behavioral simulation results are presented.
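
For readers unfamiliar with DWA, the element-rotation idea the abstract builds on can be sketched as follows; this is the textbook unit-element rotation (each code selects the next run of elements in circular order, so every element is used equally often and mismatch error is first-order noise-shaped), not the paper's segmented scheme.

```python
def dwa_select(codes, n_elements):
    """Data-weighted averaging: for each input code, select that many
    unit DAC elements starting at a rotating pointer; the pointer
    advances by the code each sample, cycling through all elements."""
    ptr = 0
    selections = []
    for code in codes:
        sel = [(ptr + i) % n_elements for i in range(code)]
        selections.append(sel)
        ptr = (ptr + code) % n_elements
    return selections

# 7 unit elements, as for a 3-bit internal quantizer (2**3 - 1).
print(dwa_select([3, 5, 2], 7))
# → [[0, 1, 2], [3, 4, 5, 6, 0], [1, 2]]
```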

**Category:** Digital Signal Processing

[10373] **viXra:1504.0111 [pdf]**
*submitted on 2015-04-14 07:13:44*

**Authors:** Yadvendra Singh, Nirbhow Jap Singhy, Ravinder Aggarwalz

**Comments:** 15 Pages.

Prosthetics is a branch of biomedical engineering that deals with replacing missing human body parts with artificial ones. An SEMG-powered prosthetic requires SEMG signals, and SEMG is a common method of measuring muscle activity. The analysis of SEMG signals depends on a number of factors, such as amplitude as well as time- and frequency-domain properties.
In the present work, SEMG signals are studied at different locations, below the elbow and at the biceps brachii muscles, for two hand operations: gripping different weights and lifting different weights. SEMG signals are extracted using a single-channel SEMG amplifier, and a Biokit Datascope is used to acquire the SEMG signals from the hardware. After acquiring the data from the two selected locations, analysis is done to estimate the parameters of the SEMG signal using LabVIEW 2012 (evaluation copy). An interpretation of grip/lift operations using time-domain features, namely the root mean square (rms) value, zero-crossing rate, mean absolute value, and integrated value of the EMG signal, is carried out. For this study 30 university students (12 female, 18 male) served as subjects, which will be very helpful for research toward understanding the behavior of SEMG for the development of a prosthetic hand.
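
The four time-domain features named in the abstract have standard definitions, sketched here in plain Python; the sample values are hypothetical, not data from the study.

```python
import math

def semg_features(x):
    """Standard time-domain SEMG features: root mean square (rms),
    mean absolute value (mav), integrated EMG (iemg), and the
    zero-crossing count (zc) of the signal."""
    n = len(x)
    rms = math.sqrt(sum(v * v for v in x) / n)
    mav = sum(abs(v) for v in x) / n
    iemg = sum(abs(v) for v in x)
    zc = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
    return {"rms": rms, "mav": mav, "iemg": iemg, "zc": zc}

# Hypothetical signal window
print(semg_features([0.1, -0.2, 0.3, -0.1, 0.2]))
```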

**Category:** Digital Signal Processing

[10372] **viXra:1504.0110 [pdf]**
*submitted on 2015-04-14 07:14:54*

**Authors:** Sajad Sarajian

**Comments:** 15 Pages.

This paper presents the design and analysis of an LCL-based voltage source converter used for delivering power from a distributed generation source to the power utility and a local load. The LCL filter at the output of the converter is designed analytically, and its different transfer functions are obtained to assess the elimination of any probable parallel resonance in the power system. The power converter uses a controller system to work in two modes of operation, stand-alone and grid-connected, and also has a seamless transfer between these two modes of operation. Furthermore, a fast semiconductor-based protection system is designed for the power converter. The performance of the designed grid-interface converter is evaluated using an 85 kVA industrial setup.

**Category:** Digital Signal Processing

[10371] **viXra:1504.0109 [pdf]**
*submitted on 2015-04-14 07:17:27*

**Authors:** S.C.Swain, Susmita Panday, Priyanka Karz

**Comments:** 14 Pages.

Power-system stability improvement by a static synchronous series compensator (SSSC)-based damping controller, considering dynamic power-system load, is thoroughly investigated in this paper. Only a remote input signal is used as input to the SSSC-based controller. For the controller design, the Firefly algorithm is used to find the optimal controller parameters. To check the robustness and effectiveness of the proposed controller, the system is subjected to various disturbances for both a single-machine infinite-bus power system and a multi-machine power system. A detailed analysis regarding dynamic load is done taking practical power-system loads into consideration. Simulation results are presented.

**Category:** Digital Signal Processing

[10370] **viXra:1504.0108 [pdf]**
*submitted on 2015-04-14 07:18:42*

**Authors:** Yung-Chung Chang, Chai-Chee Kong, Chien-Yi Chen, Jyun-Ting Lu, Tien- Shun Chan

**Comments:** 20 Pages.

This study used linear regression analysis, a neural network, and a genetic neural network to build the coefficient of performance (COP) model of a chiller before the condenser was cleaned. The data were collected after the condenser was cleaned. The model was used to simulate the COP before the condenser was cleaned, and the simulation results and improvement efficiency of the three methods were analyzed and compared under the same benchmark. The neural network used a backpropagation network, whereas the genetic neural network designed an appropriate fitness function according to the simulation result of the backpropagation network to search for the optimum weight and bias values. This study used two cases for simulation comparison. The results showed that the COP of the chiller in Case 1 increased by 3.82% on average, and the COP of the chiller in Case 2 increased by 3.78% on average. Generally speaking, the accuracy of simulation by the neural network was very high. The genetic neural network searched for the optimum weight and bias values according to the designed conditions, so as to achieve the optimized simulation result.
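
As an illustration of the simplest of the three methods, the linear regression baseline, here is an ordinary least-squares fit of a one-variable COP model; the load and COP data points are hypothetical, not taken from the paper.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

load = [40, 60, 80, 100]     # chiller load, % (hypothetical)
cop = [3.0, 3.4, 3.8, 4.2]   # measured COP (hypothetical)
a, b = fit_line(load, cop)
print(round(a, 3), round(b, 3))  # → 0.02 2.2
```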

**Category:** Digital Signal Processing

[10369] **viXra:1504.0107 [pdf]**
*submitted on 2015-04-14 07:19:42*

**Authors:** Sajad Sarajian

**Comments:** 19 Pages.

This paper deals with the design and implementation of a grid-interfaced voltage source converter which uses an LCL passive filter at its output terminal and a Proportional Resonant (PR) controller to deliver power from a distributed generation source to the power utility and a local load. The LCL filter at the output of the converter is designed analytically, and its different transfer functions are obtained to assess the elimination of any probable parallel resonance in the power system. The power converter uses a controller system to work in two modes of operation, stand-alone and grid-connected, and also has a seamless transfer between these two modes of operation. Furthermore, a fast semiconductor-based protection system is designed for the power converter. The performance of the designed grid-interface converter is evaluated using an 85 kVA industrial setup.

**Category:** Digital Signal Processing

[10368] **viXra:1504.0106 [pdf]**
*submitted on 2015-04-14 07:21:07*

**Authors:** Anand P. Mankodia, Satish K. Shah

**Comments:** 11 Pages.

Content-based video indexing and retrieval has its foundations in the analysis of the prime video temporal structures. Thus, technologies for video segmentation have become important for the development of such digital video systems. Dividing a video sequence into shots is the first step towards VCA and content-based video browsing and retrieval. This paper presents an analysis of histogram-based techniques on compressed video features. A graphical user interface is also designed in MATLAB to demonstrate the performance using the common performance parameters: precision, recall and F1.
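
A minimal sketch of histogram-based shot-boundary detection and the named performance parameters, with toy histograms standing in for real compressed-video features; the threshold value is an arbitrary choice, not one from the paper.

```python
def hist_diff(h1, h2):
    """Sum of absolute bin differences, normalised to [0, 1]
    for two histograms of equal total count."""
    total = sum(h1)
    return sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * total)

def detect_shots(histograms, threshold=0.5):
    """Flag a shot boundary wherever the consecutive-frame
    histogram difference exceeds the threshold."""
    return [i + 1
            for i, (h1, h2) in enumerate(zip(histograms, histograms[1:]))
            if hist_diff(h1, h2) > threshold]

def precision_recall_f1(detected, truth):
    """Compare detected boundary indices against ground truth."""
    tp = len(set(detected) & set(truth))
    p = tp / len(detected) if detected else 0.0
    r = tp / len(truth) if truth else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

frames = [[8, 0, 0], [7, 1, 0], [0, 1, 7], [0, 2, 6]]  # toy 3-bin histograms
cuts = detect_shots(frames, threshold=0.5)
print(cuts)  # → [2]
```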

**Category:** Digital Signal Processing

[10367] **viXra:1504.0105 [pdf]**
*submitted on 2015-04-14 07:22:14*

**Authors:** Hardik A.Shah, Satish K.Shah, Ami D. Patel

**Comments:** 11 Pages.

The paper presents discrete time sliding mode Position control of d.c.motor using MROF.
Discrete state space model is obtained from continuous time system of d.c.motor. Discrete state
variables and control inputs are used for sliding mode controller design using Multirate Output
Feed back approach(MROF) with fast output sampling. In this system output is sampled at a
faster rate as compared to control input. This approach does not use present output or input.
In this paper simulations are carried out for separately excited d.c.motor position control.

**Category:** Digital Signal Processing

[10366] **viXra:1504.0104 [pdf]**
*submitted on 2015-04-14 04:14:30*

**Authors:** Ramesh Chandra Bagadi, Roderic S. Lakes

**Comments:** 4 Pages.

In this research monograph, Areal Asymmetricity, Volume Asymmetricity, N-HyperSphere Volume Asymmetricity, Natural Metric Based N Hyper-Sphere Volume Asymmetricity, Penultimate Natural Metric Based N Hyper-Sphere Volume Asymmetricity, kth Penultimate Natural Metric Based N Hyper-Sphere Volume Asymmetricity are presented.

**Category:** General Mathematics

[10365] **viXra:1504.0103 [pdf]**
*submitted on 2015-04-14 04:56:09*

**Authors:** Dmitri Martila

**Comments:** 4 Pages.

We present strong arguments against Black Hole production via two-particle collisions. Secondly, we discuss the strong curvature of a Black Hole at the event horizon.

**Category:** High Energy Particle Physics

[10364] **viXra:1504.0102 [pdf]**
*submitted on 2015-04-14 00:32:12*

**Authors:** Richard D. Gill

**Comments:** 16 Pages.

This paper describes the first and second versions of Joy Christian's model for the singlet correlations, working through the mathematical core of two of Christian's shortest, least technical, and most accessible works. The aim of the paper is to show that from the start, the model depended both on a conceptual error and on an algebraic error. For this purpose we start by giving an introduction to geometric algebra using the fact that the basic geometric algebra of 3D geometry is actually isomorphic to the algebra of the complex two-by-two matrices over the real numbers. Thus the reader who is already familiar with the Pauli spin matrices will find him- or herself in a completely familiar environment. This helps avoid the kind of beginner's errors which plague Christian's opus, and gives rapid access to (and understanding of) the so-called bivector algebra: the even subalgebra of Cl_{3, 0}(R), itself isomorphic to the quaternions.
Getting the basic facts of geometric algebra out front and crystal clear helps demystify Christian's project and hopefully is useful in its own right. We will see how Christian apparently realised, if only at a subconscious level, that there was a major gap in his first, 2007, paper, and attempted to patch this in 2011, making things, however, only worse.
Apart from providing a quick-start guide to geometric algebra, and a hopefully very accessible post-mortem analysis of Christian's project, the purpose of the paper is to discuss the psychology and sociology of Bell deniers: how can very clever people make such elementary mistakes, and persist so long in maintaining their illusion that they have created a major breakthrough?

**Category:** Quantum Physics
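
The isomorphism the abstract appeals to, that the bivectors of the even subalgebra of Cl_{3,0}(R) satisfy the quaternion relations, can be checked numerically in the Pauli-matrix representation. A minimal sketch (the sign convention mapping bivectors onto the quaternion units is one of several possible choices, and NumPy is used purely for illustration):

```python
import numpy as np

# Pauli matrices: the 2x2 matrix representation of the Cl_{3,0} basis vectors.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Bivectors of the even subalgebra, chosen so they map onto
# the quaternion units i, j, k.
qi, qj, qk = -1j * s1, -1j * s2, -1j * s3
I2 = np.eye(2)

# Hamilton's relations: i^2 = j^2 = k^2 = ijk = -1, and ij = k.
assert np.allclose(qi @ qi, -I2)
assert np.allclose(qj @ qj, -I2)
assert np.allclose(qk @ qk, -I2)
assert np.allclose(qi @ qj @ qk, -I2)
assert np.allclose(qi @ qj, qk)
print("quaternion relations verified")
```

All five assertions pass, confirming that this even subalgebra is a faithful copy of the quaternions, which is the "completely familiar environment" the abstract promises to readers who know the Pauli matrices.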

[4706] **viXra:1504.0183 [pdf]**
*replaced on 2015-04-23 14:48:05*

**Authors:** Laszlo B. Kish

**Comments:** 6 Pages. date of version and vixra link added

There is a longstanding debate about the zero-point term in the Johnson noise voltage of a resistor: is it indeed there, or is it only an experimental artifact due to the uncertainty principle for phase-sensitive amplifiers? We show that, if the zero-point term is measured via the mean energy and force in a shunting capacitor, and if these measurements confirm its existence, then two types of perpetual motion machine could be constructed. Therefore an exact quantum theory of the Johnson noise must also include the measurement system used to evaluate the observed quantities. The results also have implications for phenomena in advanced nanotechnology.

**Category:** Quantum Physics
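
For context (a standard textbook result, not taken from the paper itself), the disputed zero-point term appears in the Callen-Welton quantum generalization of the Nyquist formula for the voltage noise spectral density of a resistor R:

```latex
S_u(f) \;=\; 4R\,hf\left[\frac{1}{e^{hf/kT}-1}+\frac{1}{2}\right]
       \;=\; 2R\,hf\,\coth\!\left(\frac{hf}{2kT}\right),
```

where the 1/2 inside the bracket is the zero-point contribution that survives as T goes to 0; whether this term is physically present in the measured noise is precisely the question the abstract addresses.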

[4705] **viXra:1504.0173 [pdf]**
*replaced on 2015-04-22 10:24:09*

**Authors:** Florentin Smarandache (editor)

**Comments:** 640 Pages.

International internet humour folklore, collected, selected, and edited by Florentin Smarandache.
Jokes, images, and folklore in general.

**Category:** Linguistics

[4704] **viXra:1504.0165 [pdf]**
*replaced on 2015-04-22 08:25:00*

**Authors:** Blair D. Macdonald

**Comments:** 32 Pages.

Climate science's fundamental premise, assumed by all parties in the great climate debate, is that the greenhouse gases (constituting less than 2% of Earth's atmosphere), first identified by John Tyndall in his 1859 thermopile experiment and demonstrated graphically today by infrared (IR) spectroscopy, are special because of their IR (heat) absorbing property. From this it is, paradoxically, assumed that the remaining 98% of non-greenhouse gases, nitrogen (N2) and oxygen (O2), are non-heat-absorbent. This paper reveals, by elementary physics, the deceptive role thermopiles play in this paradox. It was found that for a special group of substances, all sharing at least one electric dipole moment (i.e. CO2 and the other greenhouse gases), thermopiles generate electricity from the radiated IR via the thermoelectric (Seebeck) effect. Devices using the thermopile as a detector (e.g. IR spectrographs) discriminate between the sample gases and a control heat source, and have misinterpreted anomalies of electricity production as IR absorption. N2 and O2 were found to have, as all substances do, predicted vibrational modes (derived from the Schrodinger quantum equation) at 1556 cm-1 and 2330 cm-1 respectively, well within the IR range of the EM spectrum, and these are clearly observed, as expected, with Raman spectroscopy, IR spectroscopy's complementary instrument. The non-greenhouse gases N2 and O2 are thus relegated to greenhouse gases; Earth's atmospheric thermoelectric spectrum (formerly the IR spectrum) was produced and augmented with the Raman observations. It was concluded that the said greenhouse gases are not special but typical, and that all substances have thermal absorption properties, as measured by their respective heat capacities.

**Category:** Climate Research

[4703] **viXra:1504.0148 [pdf]**
*replaced on 2015-04-19 14:13:15*

**Authors:** Julian Borchardt

**Comments:** 45 Pages.

We show that the quarterly updates about the risk of PML during natalizumab therapy, while in principle helpful, underestimate the real incidences systematically and significantly. Calculating the PML incidences using an appropriate method and realistic assumptions, we obtain estimates that are up to 80% higher. In fact, with the recent paper [Plavina et al 2014], our approximate incidences are up to ten times as high. The present article describes the shortcomings of the methods used in [Bloomgren et al 2012] and by Plavina et al for computing incidences, and demonstrates how to properly estimate the true (prospective) risk of developing PML during natalizumab treatment. One application is that the newest data concerning the advances in risk mitigation through the extension of dosing intervals, although characterised as not quite statistically significant, are in fact significant. Lastly, we discuss why the established risk-stratification algorithms, even when assessing the PML incidences correctly, are no longer state-of-the-art; in the light of all the progress that has been made so far, it is already possible today to reliably identify over 95% of patients in whom (a personalised regimen of) natalizumab should be very safe.

**Category:** Quantitative Biology

[4702] **viXra:1504.0144 [pdf]**
*replaced on 2015-04-19 11:34:34*

**Authors:** Ervin Goldfain

**Comments:** 6 Pages. Under construction (first draft). References not included.

There are several instances where non-analytic functions and non-integrable operators are deliberately excluded from perturbative Quantum Field Theory (QFT) and Renormalization Group (RG) to maintain internal consistency of both frameworks. Here we briefly review these instances and suggest that they may be a portal to an improved understanding of the asymptotic sectors of QFT and the Standard Model of particle physics (SM).

**Category:** High Energy Particle Physics

[4701] **viXra:1504.0133 [pdf]**
*replaced on 2015-04-18 06:52:35*

**Authors:** You-Bang Zhan

**Comments:** 8 Pages.

The discrimination of quantum operations is an important subject of quantum information processing. For local distinction, existing research has pointed out that, since any operation performed on a quantum system must be compatible with the no-signaling constraint, local discrimination between the quantum operations of two spacelike separated parties cannot be realized. We found, however, that local discrimination of quantum measurements may not be restricted by no-signaling if multi-qubit entanglement and selective measurements are employed. In this paper we report that local quantum measurement discrimination (LQMD) can be completed via selective projective measurements and numerous seven-qubit GHZ states, without the help of classical communication, if the two observers agree in advance that one of them should measure her/his qubits before an appointed time. As an application, it is shown that teleportation can be completed via the LQMD without classical information. This means that superluminal communication could be realized by using the LQMD.

**Category:** Quantum Physics

[4700] **viXra:1504.0118 [pdf]**
*replaced on 2015-04-15 22:28:05*

**Authors:** Cheng Tianren

**Comments:** 16 Pages.

In this paper we solve a previously formulated conjecture for the cocommutative irreducible coalgebra C. We also introduce Azumaya coalgebras over a cocommutative coalgebra R. The computation of the Brauer group BM of modified supergroup algebras over a field is performed, yielding the computation of the Brauer group of all finite-dimensional triangular Hopf algebras. Then we address the question of whether the Cartan matrix of a block of a finite group cannot be arranged as a direct sum of smaller matrices. The aim of our work is to show that the topological Hochschild homology of a discrete ring R and the Mac Lane homology of R are isomorphic.

**Category:** Algebra

[4699] **viXra:1504.0118 [pdf]**
*replaced on 2015-04-15 04:24:38*

**Authors:** Cheng Tianren

**Comments:** 16 Pages.

In this paper we solve a previously formulated conjecture for the cocommutative irreducible coalgebra C. We also introduce Azumaya coalgebras over a cocommutative coalgebra R. The computation of the Brauer group BM of modified supergroup algebras over a field is performed, yielding the computation of the Brauer group of all finite-dimensional triangular Hopf algebras. Then we address the question of whether the Cartan matrix of a block of a finite group cannot be arranged as a direct sum of smaller matrices. The aim of our work is to show that the topological Hochschild homology of a discrete ring R and the Mac Lane homology of R are isomorphic.

**Category:** Algebra