**Previous months:**

2009 - 0908(1)

2010 - 1003(2) - 1004(2) - 1008(1)

2011 - 1101(3) - 1106(3) - 1108(1) - 1109(1) - 1112(2)

2012 - 1202(1) - 1208(3) - 1210(2) - 1211(1) - 1212(3)

2013 - 1301(1) - 1302(2) - 1303(6) - 1305(2) - 1306(6) - 1308(1) - 1309(1) - 1310(4) - 1311(1) - 1312(1)

2014 - 1403(3) - 1404(3) - 1405(25) - 1406(2) - 1407(2) - 1408(3) - 1409(3) - 1410(3) - 1411(1) - 1412(2)

2015 - 1501(2) - 1502(4) - 1503(3) - 1504(4) - 1505(2) - 1506(1) - 1507(1) - 1508(1) - 1509(5) - 1510(6) - 1511(1)

2016 - 1601(12) - 1602(4) - 1603(7) - 1604(1) - 1605(8) - 1606(6) - 1607(7) - 1608(4) - 1609(4) - 1610(2) - 1611(3) - 1612(4)

2017 - 1701(4) - 1702(4) - 1703(1) - 1704(1)

Any replacements are listed further down

[194] **viXra:1704.0175 [pdf]**
*submitted on 2017-04-13 07:02:31*

**Authors:** Q.P.Wimblik

**Comments:** 2 Pages.

Vertex coloring can be reduced to a set of 2-color vertex coloring problems. This is achieved by utilizing the ability to account for every positive integer with a unique pair of smaller integers.
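The abstract does not spell out the reduction itself, but the 2-color subproblem it targets is exactly bipartiteness testing, which is decidable in linear time by breadth-first search. A minimal sketch of that building block (function and variable names are ours, not the paper's):

```python
from collections import deque

def two_color(adj):
    """Try to 2-color an undirected graph given as {vertex: set_of_neighbors}.

    Returns a {vertex: 0 or 1} coloring, or None if no 2-coloring exists
    (i.e. the graph contains an odd cycle and is not bipartite).
    """
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None  # odd cycle: 2 colors are not enough
    return color

# A 4-cycle is 2-colorable; a triangle is not.
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
```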

**Category:** Data Structures and Algorithms

[193] **viXra:1703.0105 [pdf]**
*submitted on 2017-03-12 04:14:12*

**Authors:** Dhara Joshi, Krishna Dalsaniya, Chintan Patel

**Comments:** 5 Pages.

With rapid advances in networking, nearly everything can now be done over the web. Remote user authentication is an essential mechanism in networked systems for verifying the identity of a remote user over a public channel. In this authentication procedure, the server checks the user's credentials to determine whether the user is legitimate. To that end, the server and user mutually authenticate each other and establish a shared session key for encrypting subsequent conversations. There are two types of authentication: single-server and multi-server. To overcome the drawback of single-server authentication (remembering a separate id and password for each server), the multi-server concept was introduced: the user first registers with a registration center (RC), and can then access every server registered under the RC with a single id and password. Here we review the US patent [US 9264425 B1] scheme, which is based on multi-server authentication, provide a mathematical analysis of it, and describe some attacks found on it.

**Category:** Data Structures and Algorithms

[192] **viXra:1702.0321 [pdf]**
*submitted on 2017-02-26 11:24:32*

**Authors:** Michail Zak

**Comments:** 11 Pages.

The challenge of this paper is to relate quantum-inspired dynamics represented by a self-supervised system to solutions of noncomputable problems. In the self-supervised systems, the role of actuators is played by the probability produced by the corresponding Liouville equation. Following the Madelung equation that belongs to this class, non-Newtonian properties such as randomness, entanglement, and probability interference typical for quantum systems have been described in [1]. It has been demonstrated there that such systems exist in the mathematical world: they are presented by ODE coupled with their Liouville equation, but they belong neither to Newtonian nor to quantum physics. The central point of this paper is the application of the self-supervised systems to solving the traveling salesman problem.

**Category:** Data Structures and Algorithms

[191] **viXra:1702.0261 [pdf]**
*submitted on 2017-02-20 21:15:53*

**Authors:** Michail Zak

**Comments:** 11 Pages.

The challenge of this paper is to relate quantum-inspired dynamics represented by a self-supervised system to solutions of noncomputable problems. In the self-supervised systems, the role of actuators is played by the probability produced by the corresponding Liouville equation. Following the Madelung equation that belongs to this class, non-Newtonian properties such as randomness, entanglement, and probability interference typical for quantum systems have been described in [1]. It has been demonstrated there that such systems exist in the mathematical world: they are presented by ODE coupled with their Liouville equation, but they belong neither to Newtonian nor to quantum physics. The central point of this paper is the application of the self-supervised systems to finding the global maximum of functions that are nowhere differentiable but everywhere continuous (such as Weierstrass functions).

**Category:** Data Structures and Algorithms

[190] **viXra:1702.0060 [pdf]**
*submitted on 2017-02-04 06:12:29*

**Authors:** George Rajna

**Comments:** 36 Pages.

The researchers, in their paper published in Science Advances, say this freedom allows quantum computers to store many different states of the system being simulated in different superpositions, using less memory overall than in a classical computer. [26] The advancement of quantum computing faces a tremendous challenge in improving the reproducibility and robustness of quantum circuits. One of the biggest problems in this field is the presence of noise intrinsic to all these devices, the origin of which has puzzled scientists for many decades. [25] Characterising quantum channels with non-separable states of classical light, the researchers demonstrate the startling result that sometimes Nature cannot tell the difference between particular types of laser beams and quantum entangled photons. [24] Physicists at Princeton University have revealed a device they've created that will allow a single electron to transfer its quantum information to a photon. [23] A strong, short light pulse can record data on a magnetic layer of yttrium iron garnet doped with Co-ions. This was discovered by researchers from Radboud University in the Netherlands and Bialystok University in Poland. The novel mechanism outperforms existing alternatives, allowing the fastest read-write magnetic recording accompanied by unprecedentedly low heat load. [22] It goes by the unwieldy acronym STT-MRAM, which stands for spin-transfer torque magnetic random access memory. [21] Memory chips are among the most basic components in computers. The random access memory is where processors temporarily store their data, which is a crucial function. Researchers from Dresden and Basel have now managed to lay the foundation for a new memory chip concept. [20] Researchers have built a record energy-efficient switch, which uses the interplay of electricity and a liquid form of light, in semiconductor microchips. The device could form the foundation of future signal processing and information technologies, making electronics even more efficient. [19] The magnetic structure of a skyrmion is symmetrical around its core; arrows indicate the direction of spin. [18]

**Category:** Data Structures and Algorithms

[189] **viXra:1701.0668 [pdf]**
*submitted on 2017-01-30 09:22:38*

**Authors:** Ameet Sharma

**Comments:** 11 Pages.

We propose developing an XML-based system to enhance scientific papers and articles: a system whereby the premises of arguments are made explicit in XML tags. These tags provide links between papers to more clearly exhibit deductive knowledge dependencies. The tags allow us to construct deductive networks, which are a visual representation of deductive knowledge dependencies. A deductive network (DN) is a kind of Bayesian network, but without probabilities.

**Category:** Data Structures and Algorithms

[188] **viXra:1701.0573 [pdf]**
*submitted on 2017-01-22 21:38:03*

**Authors:** Mildred Bennet, Timothy Sato, Frank West

**Comments:** 6 Pages.

Many end-users would agree that, had it not been for systems, the improvement of fiber-optic cables might never have occurred. Given the current status of self-learning symmetries, physicists clearly desire the deployment of courseware, which embodies the compelling principles of unstable operating systems. We construct a novel methodology for the evaluation of hash tables, which we call MOP.

**Category:** Data Structures and Algorithms

[187] **viXra:1701.0572 [pdf]**
*submitted on 2017-01-22 21:57:46*

**Authors:** R. Salvato, G. Casey

**Comments:** 6 Pages.

Many experts would agree that, had it not been for the study of context-free grammar, the understanding of the UNIVAC computer might never have occurred. This is crucial to the success of our work. In fact, few analysts would disagree with the visualization of spreadsheets, which embodies the important principles of software engineering. In order to realize this intent, we describe new robust modalities (Destrer), which we use to validate that architecture and wide-area networks can collude to realize this intent.

**Category:** Data Structures and Algorithms

[186] **viXra:1701.0089 [pdf]**
*submitted on 2017-01-03 10:03:01*

**Authors:** George Rajna

**Comments:** 28 Pages.

Memory chips are among the most basic components in computers. The random access memory is where processors temporarily store their data, which is a crucial function. Researchers from Dresden and Basel have now managed to lay the foundation for a new memory chip concept. [20] Researchers have built a record energy-efficient switch, which uses the interplay of electricity and a liquid form of light, in semiconductor microchips. The device could form the foundation of future signal processing and information technologies, making electronics even more efficient. [19] The magnetic structure of a skyrmion is symmetrical around its core; arrows indicate the direction of spin. [18] According to current estimates, dozens of zettabytes of information will be stored electronically by 2020, which will rely on physical principles that facilitate the use of single atoms or molecules as basic memory cells. [17] EPFL scientists have developed a new perovskite material with unique properties that can be used to build next-generation hard drives. [16] Scientists have fabricated a superlattice of single-atom magnets on graphene with a density of 115 terabits per square inch, suggesting that the configuration could lead to next-generation storage media. [15] Now a researcher and his team at Tyndall National Institute in Cork have made a 'quantum leap' by developing a technical step that could enable the use of quantum computers sooner than expected. [14] A method to produce significant amounts of semiconducting nanoparticles for light-emitting displays, sensors, solar panels and biomedical applications has gained momentum with a demonstration by researchers at the Department of Energy's Oak Ridge National Laboratory. [13] A source of single photons that meets three important criteria for use in quantum-information systems has been unveiled in China by an international team of physicists. Based on a quantum dot, the device is an efficient source of photons that emerge as solo particles that are indistinguishable from each other. The researchers are now trying to use the source to create a quantum computer based on "boson sampling". [11] With the help of a semiconductor quantum dot, physicists at the University of Basel have developed a new type of light source that emits single photons.

**Category:** Data Structures and Algorithms

[185] **viXra:1612.0368 [pdf]**
*submitted on 2016-12-29 05:24:47*

**Authors:** Domenico Oricchio

**Comments:** 1 Page.

A server can distribute signed files using the Pretty Good Privacy program, via a universal standard client that has a known public key and a known private key.

**Category:** Data Structures and Algorithms

[184] **viXra:1612.0185 [pdf]**
*submitted on 2016-12-09 20:47:27*

**Authors:** Taha Sochi

**Comments:** 17 Pages.

Area detectors are used in many scientific and technological applications, such as particle and radiation physics. Thanks to recent technological developments, radiation sources are becoming increasingly bright and detectors faster and more efficient. The result is a sharp increase in the size of the data collected in a typical experiment. This situation imposes a bottleneck on data processing capabilities and could pose a real challenge to scientific research in certain areas. This article proposes a number of simple techniques to facilitate rapid and efficient extraction of the data obtained from these detectors. These techniques are successfully implemented and tested in a computer program that deals with the extraction of X-ray diffraction patterns from EDF image files obtained from CCD detectors.

**Category:** Data Structures and Algorithms

[183] **viXra:1612.0179 [pdf]**
*submitted on 2016-12-09 21:07:59*

**Authors:** Taha Sochi

**Comments:** 17 Pages.

In this article we discuss general strategies and computer algorithms to test the connectivity of unstructured networks which consist of a number of segments connected through randomly distributed nodes.
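The abstract gives no algorithmic detail; one standard way to test whether such randomly distributed nodes are all connected by the segments is union-find over the segment endpoints. A sketch under that assumption (not necessarily the paper's method):

```python
def is_connected(num_nodes, segments):
    """Return True if the segments (pairs of node indices) join all
    num_nodes nodes into a single connected component.

    Uses union-find with path compression.
    """
    parent = list(range(num_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in segments:
        parent[find(a)] = find(b)  # union the two components

    return len({find(x) for x in range(num_nodes)}) == 1
```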

**Category:** Data Structures and Algorithms

[182] **viXra:1612.0079 [pdf]**
*submitted on 2016-12-06 17:35:29*

**Authors:** Yuly Shipilevsky

**Comments:** 9 Pages. This is a new paper and I changed the title. Thanks.

A polynomial-time algorithm for integer factorization, wherein integer factorization is reduced to a polynomial-time integer minimization problem over the integer points of a two-dimensional polyhedron.

**Category:** Data Structures and Algorithms

[181] **viXra:1611.0352 [pdf]**
*submitted on 2016-11-26 05:11:34*

**Authors:** Robert Deloin

**Comments:** 16 Pages.

Collatz' conjecture (stated in 1937 by Collatz and also named the Thwaites conjecture, or the Syracuse, 3n+1 or oneness problem) can be described as follows: take any positive whole number N. If N is even, divide it by 2. If it is odd, multiply it by 3 and add 1. Repeat this process on the result over and over again. Collatz' conjecture is the supposition that for any positive integer N, the sequence will invariably reach the value 1. The main contribution of this paper is to present a new approach to Collatz' conjecture. The key idea of this new approach is to clearly differentiate the role of the division by two from the role of what we will name here the jump: a = 3n + 1. With this approach, the proof of the conjecture is given, as well as generalizations for jumps of the form qn + r and for jumps being polynomials of degree m > 1.
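For reference, the process the conjecture describes is straightforward to state in code; this illustrates the statement only, not the paper's proof:

```python
def collatz_steps(n):
    """Iterate the Collatz map from a positive integer n and return the
    number of steps taken to reach 1 (terminates for every n for which
    the conjecture holds)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1  # the "jump" a = 3n + 1
        steps += 1
    return steps
```

For example, starting from 6 the sequence is 6, 3, 10, 5, 16, 8, 4, 2, 1: eight steps.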

**Category:** Data Structures and Algorithms

[180] **viXra:1611.0328 [pdf]**
*submitted on 2016-11-24 06:35:11*

**Authors:** George Rajna

**Comments:** 22 Pages.

EPFL scientists have developed a new perovskite material with unique properties that can be used to build next-generation hard drives. [16] Scientists have fabricated a superlattice of single-atom magnets on graphene with a density of 115 terabits per square inch, suggesting that the configuration could lead to next-generation storage media. [15] Now a researcher and his team at Tyndall National Institute in Cork have made a 'quantum leap' by developing a technical step that could enable the use of quantum computers sooner than expected. [14] A method to produce significant amounts of semiconducting nanoparticles for light-emitting displays, sensors, solar panels and biomedical applications has gained momentum with a demonstration by researchers at the Department of Energy's Oak Ridge National Laboratory. [13] A source of single photons that meets three important criteria for use in quantum-information systems has been unveiled in China by an international team of physicists. Based on a quantum dot, the device is an efficient source of photons that emerge as solo particles that are indistinguishable from each other. The researchers are now trying to use the source to create a quantum computer based on "boson sampling". [11] With the help of a semiconductor quantum dot, physicists at the University of Basel have developed a new type of light source that emits single photons. For the first time, the researchers have managed to create a stream of identical photons. [10] Optical photons would be ideal carriers to transfer quantum information over large distances. Researchers envisage a network where information is processed in certain nodes and transferred between them via photons. [9] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer using Quantum Information. In August 2013, the achievement of "fully deterministic" quantum teleportation, using a hybrid technique, was reported. On 29 May 2014, scientists announced a reliable way of transferring data by quantum teleportation. Quantum teleportation of data had been done before but with highly unreliable methods. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer with the help of Quantum Information.

**Category:** Data Structures and Algorithms

[179] **viXra:1611.0088 [pdf]**
*submitted on 2016-11-07 07:25:12*

**Authors:** George Rajna

**Comments:** 29 Pages.

Dynamic programming is a technique that can yield relatively efficient solutions to computational problems in economics, genomic analysis, and other fields. But adapting it to computer chips with multiple "cores," or processing units, requires a level of programming expertise that few economists and biologists have. [16] Researchers at Lancaster University's Data Science Institute have developed a software system that can for the first time rapidly self-assemble into the most efficient form without needing humans to tell it what to do. [15] Physicists have shown that quantum effects have the potential to significantly improve a variety of interactive learning tasks in machine learning. [14] A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of "quantum artificial intelligence". Physicists have long claimed that quantum computers have the potential to dramatically outperform the most powerful conventional processors. The secret sauce at work here is the strange quantum phenomenon of superposition, where a quantum object can exist in two states at the same time. [13] One of biology's biggest mysteries—how a sliced up flatworm can regenerate into new organisms—has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help. [12] A team of researchers working at the University of California (and one from Stony Brook University) has for the first time created a neural-network chip that was built using just memristors. In their paper published in the journal Nature, the team describes how they built their chip and what capabilities it has. [11] A team of researchers used a promising new material to build more functional memristors, bringing us closer to brain-like computing. Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons. [10] Cambridge Quantum Computing Limited (CQCL) has built a new Fastest Operating System aimed at running the futuristic superfast quantum computers. [9] IBM scientists today unveiled two critical advances towards the realization of a practical quantum computer. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions. [8] Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat the researchers used a method that seems to function as well in the quantum world as it does for us people: teamwork. The results have now been published in the "Physical Review Letters". [7] While physicists are continually looking for ways to unify the theory of relativity, which describes large-scale phenomena, with quantum theory, which describes small-scale phenomena, computer scientists are searching for technologies to build the quantum computer. The accelerating electrons explain not only the Maxwell Equations and the Special Relativity, but the Heisenberg Uncertainty Relation, the Wave-Particle Duality and the electron's spin also, building the Bridge between the Classical and Quantum Theories. The Planck Distribution Law of the electromagnetic oscillators explains the electron/proton mass rate and the Weak and Strong Interactions by the diffraction patterns. The Weak Interaction changes the diffraction patterns by moving the electric charge from one side to the other side of the diffraction pattern, which violates the CP and Time reversal symmetry. The diffraction patterns and the locality of the self-maintaining electromagnetic potential explains also the Quantum Entanglement, giving it as a natural part of the Relativistic Quantum Theory and making possible to build the Quantum Computer.

**Category:** Data Structures and Algorithms

[178] **viXra:1610.0351 [pdf]**
*submitted on 2016-10-29 09:23:20*

**Authors:** George Rajna

**Comments:** 30 Pages.

A revolutionary and emerging class of energy-harvesting computer systems require neither a battery nor a power outlet to operate, instead operating by harvesting energy from their environment. [18]
In 1959 renowned physicist Richard Feynman, in his talk "Plenty of Room at the Bottom," spoke of a future in which tiny machines could perform huge feats. Like many forward-looking concepts, his molecule and atom-sized world remained for years in the realm of science fiction. [17]
The race towards quantum computing is heating up. Faster, brighter, more exacting – these are all terms that could be applied as much to the actual science as to the research effort going on in labs around the globe. [16]
For the first time, scientists now have succeeded in placing a complete quantum optical structure on a chip, as outlined Nature Photonics. This fulfills one condition for the use of photonic circuits in optical quantum computers. [15]
The intricately sculpted device made by Paul Barclay and his team of physicists is so tiny it can only be seen under a microscope. But their diamond microdisk could lead to huge advances in computing, telecommunications, and other fields. [14]
Researchers from the Institute for Quantum Computing at the University of Waterloo and the National Research Council of Canada (NRC) have, for the first time, converted the color and bandwidth of ultrafast single photons using a room-temperature quantum memory in diamond. [13]
One promising approach for scalable quantum computing is to use an all-optical architecture, in which the qubits are represented by photons and manipulated by mirrors and beam splitters. So far, researchers have demonstrated this method, called Linear Optical Quantum Computing, on a very small scale by performing operations using just a few photons. In an attempt to scale up this method to larger numbers of photons, researchers in a new study have developed a way to fully integrate single-photon sources inside optical circuits, creating integrated quantum circuits that may allow for scalable optical quantum computation. [12]
Spin-momentum locking might be applied to spin photonics, which could hypothetically harness the spin of photons in devices and circuits. Whereas microchips use electrons to perform computations and process information, photons are limited primarily to communications, transmitting data over optical fiber. However, using the spin of light waves could make possible devices that integrate electrons and photons to perform logic and memory operations. [11]

**Category:** Data Structures and Algorithms

[177] **viXra:1610.0326 [pdf]**
*submitted on 2016-10-27 07:48:22*

**Authors:** Miaomiaomiao

**Comments:** 47 Pages. I AM NOT THE AUTHOR

Notes on matrix multiplication.
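The note itself is not reproduced here; the operation it covers is the standard one, where entry (i, j) of the product is the dot product of row i of the first matrix with column j of the second. A textbook O(nmp) implementation for illustration:

```python
def mat_mul(a, b):
    """Multiply matrix a (n x m) by matrix b (m x p), both given as
    lists of rows; returns the n x p product."""
    n, m, p = len(a), len(b), len(b[0])
    assert all(len(row) == m for row in a), "inner dimensions must match"
    # result[i][j] = sum over k of a[i][k] * b[k][j]
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]
```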

**Category:** Data Structures and Algorithms

[176] **viXra:1609.0421 [pdf]**
*submitted on 2016-09-29 08:00:08*

**Authors:** Emshanov Dima

**Comments:** 15 Pages.

This article describes how to represent a 3-SAT logical formula as a conjunction of two polynomial logical formulas.
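The abstract does not define the two polynomial formulas; for context, a 3-SAT instance and its (exponential-time, brute-force) satisfiability check can be written as follows, using DIMACS-style integer literals (the representation is ours, not the article's):

```python
from itertools import product

def sat3(clauses, num_vars):
    """Brute-force satisfiability check for a 3-CNF formula.

    Each clause is a tuple of nonzero ints: literal k means variable k
    is true, -k means variable k is false (DIMACS convention).
    """
    for bits in product([False, True], repeat=num_vars):
        # a clause is satisfied if any of its literals holds
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False
```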

**Category:** Data Structures and Algorithms

[175] **viXra:1609.0370 [pdf]**
*submitted on 2016-09-26 06:59:01*

**Authors:** Trung Kien Vu, Sungoh Kwon

**Comments:** Preprint submitted to Computer Networks, 10 pages, 15 figures

In this paper, we propose an ad-hoc on-demand distance vector routing algorithm for mobile ad-hoc networks that takes node mobility into account. The changeable topology of such mobile ad-hoc networks provokes overhead messages in order to search for available routes and maintain found routes. The overhead messages impede data delivery from sources to destinations and deteriorate network performance. To overcome this challenge, our proposed algorithm estimates link duration based on neighboring node mobility and chooses the most reliable route. The proposed algorithm also applies the estimate to route maintenance to lessen the number of overhead messages. Via simulations, the proposed algorithm is verified in various mobile environments. In the low-mobility environment, by reducing route maintenance messages, the proposed algorithm significantly improves network performance such as packet data rate and end-to-end delay. In the high-mobility environment, the reduction of route discovery messages enhances network performance since the proposed algorithm provides more reliable routes.
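The abstract says link duration is estimated from neighboring node mobility but gives no formula. Under the common simplifying assumption of constant-velocity, straight-line motion, the time until two nodes drift out of radio range has a closed form; the sketch below illustrates that idea and is not necessarily the paper's estimator:

```python
import math

def link_duration(pos_a, vel_a, pos_b, vel_b, radio_range):
    """Estimate how long two nodes (2-D positions and velocities) remain
    within radio_range of each other, assuming constant velocities and
    that the nodes start within range.

    Solves |(pa - pb) + t*(va - vb)| = radio_range for the exit time
    t >= 0; returns math.inf if they never separate beyond the range.
    """
    dx, dy = pos_a[0] - pos_b[0], pos_a[1] - pos_b[1]
    dvx, dvy = vel_a[0] - vel_b[0], vel_a[1] - vel_b[1]
    # quadratic a*t^2 + b*t + c = 0 in the exit time t
    a = dvx * dvx + dvy * dvy
    b = 2 * (dx * dvx + dy * dvy)
    c = dx * dx + dy * dy - radio_range ** 2
    if a == 0:  # no relative motion: in range forever, or never
        return math.inf if c <= 0 else 0.0
    disc = b * b - 4 * a * c
    if disc < 0:
        return 0.0  # the nodes are never within range
    return (-b + math.sqrt(disc)) / (2 * a)
```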

**Category:** Data Structures and Algorithms

[174] **viXra:1609.0144 [pdf]**
*submitted on 2016-09-11 15:00:52*

**Authors:** A. A. Salama, Ibrahim El-Henawy, M.S.Bondok

**Comments:** 11 Pages.

In business scenarios where some of the data or the business attributes are neutrosophic, it may be useful to construct a warehouse that can support the analysis of neutrosophic data. In this paper, a neutrosophic data warehouse modelling approach is presented to support the neutrosophic analysis of a publishing house for books, allowing the integration of neutrosophic concepts in dimensions and facts without affecting the core of a classical data warehouse. We also describe a method, including guidelines, that can be used to convert a classical data warehouse into the neutrosophic domain.

**Category:** Data Structures and Algorithms

[173] **viXra:1609.0044 [pdf]**
*submitted on 2016-09-03 16:15:57*

**Authors:** Brian Beckman

**Comments:** 7 Pages.

This paper fills in some blanks left between part 1 of this series, Kalman Folding (http://vixra.org/abs/1606.0328), and the rest of the papers in the series. In part 1, we present basic Kalman filtering as a functional fold, highlighting the advantages of this form for hardening code in a test environment. In that paper, we motivated the Kalman filter as a natural extension of the running average and variance, writing both as functional folds computed in constant memory. We expressed the running statistics as recurrence relations, where the new statistic is the old statistic plus a correction. We write the correction as a gain factor times some transform of a residual. The residual is the difference between the current (old) statistic and the incoming (new) observation. In both expressions, for brevity, we left derivations to the reader. Here, we present those derivations in full “school-level” detail, along with some basic explanation of the programming language that mechanizes the computations.
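The recurrence described above, new statistic equals old statistic plus a gain times a residual, instantiates for the running average as a fold computed in constant memory. A sketch in Python for illustration (the series itself mechanizes the derivations in a different language):

```python
from functools import reduce

def mean_step(acc, observation):
    """One fold step of the running average.

    acc is (current mean, count); the update is
    new mean = old mean + gain * residual, with gain 1/(n+1)
    and residual = observation - old mean.
    """
    mean, n = acc
    residual = observation - mean
    gain = 1.0 / (n + 1)
    return (mean + gain * residual, n + 1)

def running_mean(observations):
    """Fold mean_step over the observations; only (mean, count) is kept."""
    mean, _ = reduce(mean_step, observations, (0.0, 0))
    return mean
```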

**Category:** Data Structures and Algorithms

[172] **viXra:1608.0230 [pdf]**
*submitted on 2016-08-21 11:20:24*

**Authors:** Fu Yuhua

**Comments:** 5 Pages.

Based on creating a generalized and hybrid set and library with neutrosophy and the quad-stage method, this paper presents the concept of "computer information library clusters" (CILC). There are various ways and means to form a CILC. For example, a CILC can be considered as the "total-library" and consist of several "sub-libraries". As another example, in a CILC, a "total-library" can be set up, with a number of "sub-libraries" side by side with the "total-library". In particular, for a CILC, operation functions can be added; for example, according to the "natural science computer information library clusters" (natural science CILC), and applying the "variation principle of library (or sub-library)", a "partial and temporary unified theory of natural science so far" with different degrees can be established. Referring to the concept of "natural science CILC", the concepts of "social science CILC", "natural science and social science CILC", and the like can be presented. Likewise, referring to the concept of "computer information library clusters", the concepts of "computer and non-computer information library clusters", "earth information library clusters", "solar system information library clusters", "Milky Way galaxy information library clusters", "universe information library clusters", and the like can be presented.

**Category:** Data Structures and Algorithms

[171] **viXra:1608.0098 [pdf]**
*submitted on 2016-08-09 15:04:41*

**Authors:** Leorge Takeuchi

**Comments:** 16 Pages.

Quicksort, invented by Tony Hoare in 1959, is one of the fastest sorting algorithms. However, conventional implementations have some weak points, including the following: swaps to exchange two elements are redundant, deep recursive calls may encounter stack overflow, and the case of many repeated elements in the input data is a well-known issue. This paper improves quicksort to make it more secure and faster, using new or known ideas, in the C language.
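The paper's in-place C implementation is not reproduced in the abstract; the repeated-elements issue it mentions is conventionally addressed with three-way partitioning, sketched here at a high level (not in place, and not the paper's code):

```python
def quicksort3(xs):
    """Quicksort with three-way partitioning: elements equal to the
    pivot are grouped once and never recursed on, so inputs with many
    repeated elements no longer trigger quadratic behavior."""
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort3(less) + equal + quicksort3(greater)
```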

**Category:** Data Structures and Algorithms

[170] **viXra:1608.0044 [pdf]**
*submitted on 2016-08-04 22:35:27*

**Authors:** Sidharth Ghoshal

**Comments:** 10 Pages.

Documented is an algorithm that is intimately related to a question about piecewise linear cobordisms. If the conjecture is true, then this algorithm runs in polynomial time. If it is not, then this algorithm might still, but probably does not. Contact a local topologist for updates on this computational crisis.

**Category:** Data Structures and Algorithms

[169] **viXra:1607.0457 [pdf]**
*submitted on 2016-07-24 21:42:26*

**Authors:** Martin Dudziak

**Comments:** 22 Pages.

We address the topic of internet and communications integrity and continuity during times of social unrest and disturbance where a variety of actions can lead to short-term or long-term disruption of conventional, public and private internet and wireless networks. The internet disruptions connected with WikiLeaks in 2010, those in Egypt and Libya during protests and revolution commencing in January of 2011, and long-standing controls upon internet access and content imposed within China and other nations, are considered as specific and contemporary examples. We examine alternatives that have been proposed by which large numbers of individuals can maintain “connectivity without borders.” We review the strengths and weaknesses of such alternatives, the countermeasures that can be employed against such connectivity, and a number of innovative measures that can be used to overcome such countermeasures.

**Category:** Data Structures and Algorithms

[168] **viXra:1607.0432 [pdf]**
*submitted on 2016-07-23 09:09:27*

**Authors:** Hemant Pandey

**Comments:** 13 Pages. Withdrawn paper due to technical reasons.

P vs NP is possibly one of the most crucial problems of our era, owing to the fact that it directly affects one of the most basic things of our modern-day survival, Internet security. The proof will surely be a big blow to RSA ciphering-deciphering technology, but that is the way it is! Genuine apologies for P = NP. As far as mathematical gain is concerned, it is a result that opens a search for solutions to those 300-plus NP-complete problems and much more. The present proof resolves P = NP by the solution of the NP-complete Hamiltonian path problem in polynomial time. The proof uses topology and simple geometry. Hence P = NP, solved for the Hamiltonian path problem, or the Traveling Salesman Problem as it is also called. The NP-complete Hamiltonian path problem has a polynomial-time solution, i.e. P = CN^4 for HPP.

**Category:** Data Structures and Algorithms

[167] **viXra:1607.0141 [pdf]**
*submitted on 2016-07-10 15:52:42*

**Authors:** Brian Beckman

**Comments:** 11 Pages.

In Kalman Folding, Part 1, we present basic, static Kalman filtering
as a functional fold, highlighting the unique advantages of this form for
deploying test-hardened code verbatim in harsh, mission-critical environments.
In that paper, all examples folded over arrays in memory for convenience and
repeatability. That is an example of developing filters in a friendly
environment.
Here, we prototype a couple of less friendly environments and demonstrate
exactly the same Kalman accumulator function at work. These less friendly
environments are
- lazy streams, where new observations are computed on demand but never fully
realized in memory, thus not available for inspection in a debugger
- asynchronous observables, where new observations are delivered at arbitrary
times from an external source, thus not available for replay once consumed by
the filter
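The central claim above, that one and the same accumulator folds over a friendly in-memory array and an unfriendly lazy stream, can be sketched in Python. This is an illustrative stand-in, not the paper's code; the scalar constant-state model and noise value are assumptions.

```python
from functools import reduce

def kalman_step(state, z, R=1.0):
    """One static, scalar Kalman update; `state` is (estimate, variance).
    The constant-state model and observation noise R are toy assumptions."""
    x, P = state
    K = P / (P + R)                       # Kalman gain
    return (x + K * (z - x), (1.0 - K) * P)

def lazy_observations():
    """Observations computed on demand, never fully realized in memory."""
    for z in (0.9, 1.1, 1.05, 0.95):
        yield z

# Exactly the same accumulator, two environments:
est_array, _ = reduce(kalman_step, [0.9, 1.1, 1.05, 0.95], (0.0, 1e6))
est_lazy, _ = reduce(kalman_step, lazy_observations(), (0.0, 1e6))
```

With a diffuse prior (variance 1e6) and equal noise per observation, the estimate converges to the sample mean, and the two folds agree exactly.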

**Category:** Data Structures and Algorithms

[166] **viXra:1607.0109 [pdf]**
*submitted on 2016-07-09 08:03:25*

**Authors:** Z. Vosika, G. Lazović

**Comments:** 7 Pages.

In this paper we develop a new physical-mathematical time-scale kinetic approach/model applied to the motion of organic and non-organic particles. Concretely, this new research approach is based on results from enzyme particle dynamics. To begin, a time scale is defined to be an arbitrary closed subset of the real numbers R, with the standard inherited topology. Mathematical examples of time scales include the real numbers R, the natural numbers N, the integers Z, the Cantor set (i.e. fractals), and any finite union of closed intervals of R. Calculus on time scales (TSC) was established in 1988 by Stefan Hilger. TSC, by construction, is used to describe complex processes. The method may be used to describe physical (classical mechanics), material (crystal growth kinetics, physical-chemistry kinetics, for example the kinetics of barium-titanate synthesis), (bio)chemical, and similar systems, and represents a major challenge for contemporary scientists. In this sense, the Michaelis-Menten (MM) mechanism is one of the best-known and simplest nonlinear biochemical networks and deserves appropriate attention. Generally speaking, such processes may be described on a discrete time scale, and it is reasonable to assume that such a scenario is possible for the MM mechanism. In this work, discrete-time MM kinetics (dtMM) with a varying time step h is investigated; instead of the first derivative with respect to time, the first backward difference with step h is used. The physical basis for the new time-scale approach is a new statistical thermodynamics, a natural generalization of Tsallis non-extensive or similar thermodynamics. A reliable new algorithm for a novel difference transformation method, namely the multi-step difference transformation method (MSDETM), for solving systems of nonlinear ordinary difference equations is proposed. As h tends to zero, MSDETM transforms into the multi-step differential transformation method (MSDTM). In the spirit of TSC, MSDETM is the discrete analogue of MSDTM.
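As a minimal sketch of discrete-time MM kinetics with step h (using a simple explicit difference step rather than the paper's backward-difference scheme, and with assumed constants Vmax and Km), substrate decay under the Michaelis-Menten rate can be iterated directly:

```python
def mm_rate(S, Vmax=1.0, Km=0.5):
    """Michaelis-Menten consumption rate v = Vmax*S/(Km + S)."""
    return Vmax * S / (Km + S)

def mm_discrete(S0, h, steps, Vmax=1.0, Km=0.5):
    """Discrete-time MM kinetics: replace dS/dt = -v by a difference
    step of size h. Toy parameters; an illustrative sketch only."""
    traj = [S0]
    S = S0
    for _ in range(steps):
        S = S - h * mm_rate(S, Vmax, Km)   # one difference step
        traj.append(S)
    return traj
```

For small h the trajectory decreases monotonically toward zero, mirroring the continuous-time behaviour.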

**Category:** Data Structures and Algorithms

[165] **viXra:1607.0084 [pdf]**
*submitted on 2016-07-07 09:50:50*

**Authors:** Brian Beckman

**Comments:** 11 Pages.

We exhibit a foldable Extended Kalman Filter that internally integrates
non-linear equations of motion with a nested fold of generic
integrators over lazy streams in constant memory.
Functional form allows us to switch integrators easily and to diagnose filter
divergence accurately, achieving orders of magnitude better speed than
the source example from the literature. As with all Kalman folds, we can move
the vetted code verbatim, without even recompilation, from the lab to the field.
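The nested-fold structure described above, an outer fold over observations whose step internally runs an inner fold of generic integrator steps, can be sketched in Python. Everything concrete here is an assumption for illustration: the scalar dynamics dx/dt = -x, the Euler integrator, and the noise constants are not the paper's example.

```python
from functools import reduce

def integrate(f, x, h, n):
    """Inner fold: n explicit Euler substeps of dx/dt = f(x)."""
    return reduce(lambda x_, _: x_ + h * f(x_), range(n), x)

def filter_step(state, z, h=0.01, n=10, Q=0.01, R=1.0):
    """Outer fold: propagate the estimate between observations with the
    nested integrator fold, then apply a scalar measurement update.
    Toy linear dynamics stand in for the EKF's non-linear model."""
    x, P = state
    x = integrate(lambda x_: -x_, x, h, n)   # swap integrators here freely
    P = P + Q                                 # crude process-noise inflation
    K = P / (P + R)
    return (x + K * (z - x), (1.0 - K) * P)

x_hat, P_hat = reduce(filter_step, [0.8, 0.7, 0.6], (1.0, 1.0))
```

Because the integrator is itself a fold parameter, switching from Euler to a higher-order scheme changes one lambda, which is the "switch integrators easily" property the abstract claims.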

**Category:** Data Structures and Algorithms

[164] **viXra:1607.0083 [pdf]**
*submitted on 2016-07-07 09:52:55*

**Authors:** Brian Beckman

**Comments:** 9 Pages.

In Kalman Folding 5: Non-Linear Models and the EKF, we present an
Extended Kalman Filter as a fold over a lazy stream of observations that uses a
nested fold over a lazy stream of states to integrate non-linear equations of
motion. In Kalman Folding 4: Streams and Observables, we present a
handful of stream operators, just enough to demonstrate Kalman folding over
observables.
In this paper, we enrich the collection of operators, adding takeUntil,
last, and map. We then show how to use them to integrate differential
equations in state-space form in two different ways and to generate test cases
for the non-linear EKF from paper 5.
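Minimal generator-based Python analogues of the three operators named above (hypothetical stand-ins for the paper's stream operators, not its code) are enough to integrate a differential equation lazily:

```python
def smap(f, stream):
    """map over a lazy stream."""
    for x in stream:
        yield f(x)

def take_until(pred, stream):
    """Yield items up to and including the first one satisfying pred."""
    for x in stream:
        yield x
        if pred(x):
            return

def last(stream):
    """Force the stream and return its final element."""
    item = None
    for item in stream:
        pass
    return item

def euler(f, x0, t0, h):
    """Infinite lazy stream of Euler steps for dx/dt = f(t, x)."""
    t, x = t0, x0
    while True:
        yield t, x
        t, x = t + h, x + h * f(t, x)

# Integrate dx/dt = -x from x(0) = 1 until t reaches roughly 1.
t_end, x_end = last(take_until(lambda s: s[0] >= 0.95,
                               euler(lambda t, x: -x, 1.0, 0.0, 0.1)))
```

The Euler stream is infinite; `take_until` truncates it on demand and `last` forces it, so nothing is ever held in memory, matching the lazy-stream setting of the earlier papers.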

**Category:** Data Structures and Algorithms

[163] **viXra:1607.0059 [pdf]**
*submitted on 2016-07-05 23:28:11*

**Authors:** Brian Beckman

**Comments:** 14 Pages.

In Kalman Folding, Part 1, we present basic, static Kalman filtering
as a functional fold, highlighting the unique advantages of this form for
deploying test-hardened code verbatim in harsh, mission-critical environments.
The examples in that paper are all static, meaning that the states of the model
do not depend on the independent variable, often physical time.
Here, we present mathematical derivations of the basic, static filter. These are
semi-formal sketches that leave many details to the reader, but highlight all
important points that must be rigorously proved. These derivations have several
novel arguments and we strive for much higher clarity and simplicity than is
found in most treatments of the topic.

**Category:** Data Structures and Algorithms

[162] **viXra:1606.0348 [pdf]**
*submitted on 2016-06-30 20:27:15*

**Authors:** Brian Beckman

**Comments:** 7 Pages.

In Kalman Folding, Part 1, we present basic, static Kalman filtering
as a functional fold, highlighting the unique advantages of this form for
deploying test-hardened code verbatim in harsh, mission-critical environments.
The examples in that paper are all static, meaning that the states of the model
do not depend on the independent variable, often physical time.
Here, we present a dynamic Kalman filter in the same, functional form. This
filter can handle many dynamic, time-evolving applications including some
tracking and navigation problems, and is easily extended to nonlinear and
non-Gaussian forms, the Extended Kalman Filter (EKF) and Unscented Kalman Filter
(UKF) respectively. Those are subjects of other papers in this Kalman-folding
series. Here, we reproduce a tracking example from a well known reference, but
in functional form, highlighting the advantages of that form.

**Category:** Data Structures and Algorithms

[161] **viXra:1606.0328 [pdf]**
*submitted on 2016-06-29 14:21:33*

**Authors:** Brian Beckman

**Comments:** 19 Pages.

Kalman filtering is commonplace in engineering, but less familiar to software
developers. It is the central tool for estimating states of a model, one
observation at a time. It runs fast in constant memory. It is the mainstay of
tracking and navigation, but it is equally applicable to econometrics,
recommendations, control: any application where we update models over time.
By writing a Kalman filter as a functional fold, we can test code in friendly
environments and then deploy identical code with confidence in unfriendly
environments. In friendly environments, data are deterministic, static, and
present in memory. In unfriendly, real-world environments,
data are unpredictable, dynamic, and arrive asynchronously.
The flexibility to deploy exactly the code that was tested is especially
important for numerical code like filters. Detecting, diagnosing and correcting
numerical issues without repeatable data sequences is impractical. Once code is
hardened, it can be critical to deploy exactly the same code, to the binary
level, in production, because of numerical brittleness. Functional form makes it
easy to test and deploy exactly the same code because it minimizes the coupling
between code and environment.

**Category:** Data Structures and Algorithms

[160] **viXra:1606.0182 [pdf]**
*submitted on 2016-06-17 22:40:41*

**Authors:** Ramesh Chandra Bagadi

**Comments:** 18 Pages.

In this research investigation, the author has presented a theory of ‘Universal
Relative Metric That Generates A Field Super-Set To The Fields Generated By
Various Distinct Relative Metrics’.

**Category:** Data Structures and Algorithms

[159] **viXra:1606.0157 [pdf]**
*submitted on 2016-06-15 07:29:20*

**Authors:** Ramesh Chandra Bagadi

**Comments:** 14 Pages.

In this research investigation, the author has presented a theory of ‘The Universal
Irreducible Any Field Generating Metric’.

**Category:** Data Structures and Algorithms

[158] **viXra:1606.0156 [pdf]**
*submitted on 2016-06-15 07:30:06*

**Authors:** Ramesh Chandra Bagadi

**Comments:** 14 Pages.

In this research investigation, the author has presented a theory of ‘The Universal
Irreducible Any Field Generating Metric’.

**Category:** Data Structures and Algorithms

[157] **viXra:1606.0147 [pdf]**
*submitted on 2016-06-15 00:16:12*

**Authors:** Ramesh Chandra Bagadi

**Comments:** 14 Pages.

In this research investigation, the author has presented a theory of ‘Universal Natural Memory Embedding’.

**Category:** Data Structures and Algorithms

[156] **viXra:1605.0235 [pdf]**
*submitted on 2016-05-22 18:39:09*

**Authors:** Edwin Eugene Klingman

**Comments:** 8 Pages. Embedded Systems Programming

FPGAs and microprocessors are more similar than you may think. Here's a primer on how to program an FPGA and some reasons why you'd want to.
Small processors are, by far, the largest selling class of computers and form the basis of many embedded systems. The first single-chip microprocessors contained approximately 10,000 gates of logic and 10,000 bits of memory. Today, field programmable gate arrays (FPGAs) provide single chips approaching 10 million gates of logic and 10 million bits of memory...

**Category:** Data Structures and Algorithms

[155] **viXra:1605.0234 [pdf]**
*submitted on 2016-05-22 18:44:43*

**Authors:** Edwin Eugene Klingman

**Comments:** 10 Pages. Embedded Systems Programming

FPGAs enable everyone to be a chip designer. This installment shows how to design the bus interface for a generic peripheral chip.
When designing with an embedded microprocessor, you always have to take into account, if not begin with, the actual pinout of the device. Each pin on a given microprocessor is uniquely defined by the manufacturer and must be used in a specific manner to achieve a specific function. Part of learning to design with embedded processors is learning the pin definitions. In contrast, field programmable gate array (FPGA) devices come to the design with pins completely undefined (except for power and ground). You have to define the FPGA's pins yourself. This gives you incredible flexibility but also forces you to think through the use of each pin...

**Category:** Data Structures and Algorithms

[154] **viXra:1605.0152 [pdf]**
*submitted on 2016-05-14 06:12:52*

**Authors:** Hossein Vahabi, Paul Lagree, Claire Vernade, Olivier Cappe

**Comments:** 19 Pages.

In many web applications, a recommendation is not a single item suggested to a user but a list of possibly interesting contents that may be ranked in some contexts. The combinatorial bandit problem has been studied quite extensively these last two years and many theoretical results now exist: lower bounds on the regret or asymptotically optimal algorithms. However, because of the variety of situations that can be considered, results are designed to solve the problem for a specific reward structure such as the Cascade Model. The present work focuses on the problem of ranking items when the user is allowed to click on several items while scanning the list from top to bottom.

**Category:** Data Structures and Algorithms

[153] **viXra:1605.0109 [pdf]**
*submitted on 2016-05-11 02:37:54*

**Authors:** Robert A. Martin

**Comments:** 21 Pages.

We discuss the problem of finding an optimum linear seating arrangement for a small social network, i.e. approaching the problem put forth in XKCD comic 173: for a small social network, how can one determine the seating order in a row (e.g. at the cinema) that corresponds to maximum enjoyment? We begin by improving the graphical notation of the network, and then propose a method through which the total enjoyment for a particular seating arrangement can be quantified. We then discuss genetic programming, and implement a first-principles genetic algorithm in Python, in order to find an optimal arrangement. While the method did produce acceptable results, outputting an optimal arrangement for the XKCD network, it was noted that genetic algorithms may not be the best way to find such an arrangement. The results of this investigation may have tangible applications in the organising of social functions such as weddings.
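The quantify-then-evolve approach described above can be sketched as follows. This is a simplified, mutation-only evolutionary search (an assumed stand-in for the paper's full genetic algorithm, which may use crossover as well); the enjoyment matrix E scores each pair of neighbours, and fitness sums the scores of adjacent seats.

```python
import random

def enjoyment(order, E):
    """Total enjoyment: sum of pairwise scores over adjacent seats."""
    return sum(E[a][b] for a, b in zip(order, order[1:]))

def evolve_seating(E, generations=200, pop_size=30, seed=0):
    """Mutation-only evolutionary search over seating permutations:
    keep the fittest half each generation, and breed children from
    survivors by a single random swap mutation."""
    rng = random.Random(seed)
    n = len(E)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: enjoyment(o, E), reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]   # swap mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda o: enjoyment(o, E))
```

Keeping survivors unchanged (elitism) guarantees the best arrangement found is never lost between generations.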

**Category:** Data Structures and Algorithms

[152] **viXra:1605.0033 [pdf]**
*submitted on 2016-05-04 01:44:47*

**Authors:** Marian Dragoi, Ciprian Palaghianu

**Comments:** 8 pages, 4 figures, language: Romanian (abstract in English)

Group decision-making process: an analytic hierarchy approach.
The paper deals with a step-wise analytic hierarchy process (AHP) applied by a group of decision makers in which nobody has a dominant position and it is unlikely that agreement will be reached on either the weights of different objectives or the expected utilities of different alternatives. One of the AHP outcomes, the consistency index, is computed for each decision maker, for all other decision makers except that one, and for the whole group. In doing so, the group is able to assess the extent to which each decision maker alters the group consistency index, and a better consistency index can be achieved if the assessment procedure is resumed by the most influential decision maker in terms of consistency.
The main contribution of the new approach is the algorithm, presented as a flow chart, where the condition to stop the process may be either a threshold value for the consistency index or a given number of iterations for the group or decision maker, depending on the degree to which the targeted goal has been decomposed into conflicting objectives.

**Category:** Data Structures and Algorithms

[151] **viXra:1605.0018 [pdf]**
*submitted on 2016-05-02 12:46:50*

**Authors:** Slim hannachi

**Comments:** 133 Pages. Cloud Computer

IAAS

**Category:** Data Structures and Algorithms

[150] **viXra:1605.0016 [pdf]**
*submitted on 2016-05-02 07:29:37*

**Authors:** A.A.Salama, Mohamed Eisa, Hewayda ElGhawalby, A.E.Fawzy

**Comments:** 6 Pages.

The aim of this paper is to present texture features for images embedded in the neutrosophic domain with Hesitancy degree. Hesitancy degree is the fourth component of Neutrosophic set. The goal is to extract a set of features to represent the content of each image in the training database to be used for the purpose of retrieving images from the database similar to the image under consideration.

**Category:** Data Structures and Algorithms

[149] **viXra:1605.0014 [pdf]**
*submitted on 2016-05-02 04:38:44*

**Authors:** A.A.Salama, Mohamed Eisa, Hewayda ElGhawalby, A.E.Fawzy

**Comments:** 6 Pages.

The goal of an Image Retrieval System is to retrieve images that are relevant to the user's request from a large image collection. In this paper we present texture features for images embedded in the neutrosophic domain. The aim is to extract a set of features to represent the content of each image in the training database to be used for the purpose of retrieving images from the database similar to the image under consideration.

**Category:** Data Structures and Algorithms

[148] **viXra:1604.0366 [pdf]**
*submitted on 2016-04-28 09:08:34*

**Authors:** Mai Ben-Adar Bessos, Simon Birnbach, Amir Herzberg, Ivan Martinovic

**Comments:** 1 Page. Technical report of the original paper "E-bots vs. P-bots: Cooperative Eavesdropping in (partial) Silence".

We study the trade-off between the benefits obtained by communication, vs. the exposure of the location of the transmitter.

**Category:** Data Structures and Algorithms

[147] **viXra:1603.0386 [pdf]**
*submitted on 2016-03-28 13:36:36*

**Authors:** Carreño ED, Diener M, Cruz EHM, Navaux POA

**Comments:** 24 Pages.

One of the most important aspects that influences the performance of parallel applications is the speed of communication between their tasks. To optimize communication, tasks that exchange lots of data should be mapped to processing units that have a high network performance. This technique is called communication-aware task mapping and requires detailed information about the underlying network topology for an accurate mapping. Previous work on task mapping focuses on network clusters or shared memory architectures, in which the topology can be determined directly from the hardware environment. Cloud computing adds significant challenges to task mapping, since information about network topologies is not available to end users. Furthermore, the communication performance might change due to external factors, such as different usage patterns of other users.
In this paper, we present a novel solution to perform communication-aware task mapping in the context of commercial cloud environments with multiple instances. Our proposal consists of a short profiling phase to discover the network topology and speed between cloud instances. The profiling can be executed before each application start as it causes only a negligible overhead. This information is then used together with the communication pattern of the parallel application to group tasks based on the amount of communication and to map groups with a lot of communication between them to cloud instances with a high network performance. In this way, application performance is increased, and data traffic between instances is reduced. We evaluated our proposal in a public cloud with a variety of MPI-based parallel benchmarks from the HPC domain, as well as a large scientific application. In the experiments, we observed substantial performance improvements (up to 11 times faster) compared to the default scheduling policies.
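The grouping step, placing tasks that communicate heavily into the same group so they land on the same or well-connected instances, can be sketched as a greedy heuristic. This is an assumed simplification for illustration; the paper's actual mapping algorithm may differ.

```python
def group_tasks(comm, group_size):
    """Greedy communication-aware grouping: seed each group with the
    unplaced task that communicates the most overall, then repeatedly
    add the unplaced task with the most communication to the group.
    `comm[i][j]` is the communication volume between tasks i and j."""
    n = len(comm)
    unplaced = set(range(n))
    groups = []
    while unplaced:
        seed = max(unplaced, key=lambda t: sum(comm[t][u] for u in range(n)))
        group = [seed]
        unplaced.remove(seed)
        while len(group) < group_size and unplaced:
            best = max(unplaced, key=lambda t: sum(comm[t][g] for g in group))
            group.append(best)
            unplaced.remove(best)
        groups.append(sorted(group))
    return groups
```

Each group is then mapped to one cloud instance, so the heavy columns of the communication matrix become intra-instance traffic.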

**Category:** Data Structures and Algorithms

[146] **viXra:1603.0107 [pdf]**
*submitted on 2016-03-07 05:49:11*

**Authors:** Valdir Monteiro dos Santos Godoi

**Comments:** 3 Pages.

An original proof that P is not equal to NP.

**Category:** Data Structures and Algorithms

[145] **viXra:1603.0074 [pdf]**
*submitted on 2016-03-04 18:18:57*

**Authors:** Valdir Monteiro dos Santos Godoi

**Comments:** 7 Pages. Article written around 5 years ago. See also viXra 1603.0107: "Languages Varying in Time and the Problem P x NP".

We prove that P ≠ NP by exhibiting two problems that are executed in constant time O(1) by a nondeterministic algorithm, but in time exponential in the size of the input by a deterministic algorithm. The algorithms are essentially simple, so that they cannot undergo any significant reduction in their complexity, which could invalidate the proofs presented here.

**Category:** Data Structures and Algorithms

[144] **viXra:1603.0072 [pdf]**
*submitted on 2016-03-04 18:32:49*

**Authors:** Valdir Monteiro dos Santos Godoi

**Comments:** 7 Pages. Article written around 5 years ago. See also viXra 1603.0107: "Languages Varying in Time and the Problem P x NP".

It is proved that P ≠ NP by showing two problems that are executed in constant time O(1) by a nondeterministic algorithm, but in time exponential in the length of the input by a deterministic algorithm. These algorithms are essentially simple, so they cannot undergo a significant reduction in their complexity, which could invalidate the proofs shown here.

**Category:** Data Structures and Algorithms

[143] **viXra:1603.0071 [pdf]**
*submitted on 2016-03-04 18:45:49*

**Authors:** Valdir Monteiro dos Santos Godoi

**Comments:** 7 Pages. Article written around 5 years ago. It is an enhancement over version 1. See also viXra 1603.0107: "Languages Varying in Time and the Problem P x NP".

We prove that P ≠ NP by exhibiting two problems that are executed in constant time O(1) by a nondeterministic algorithm, but in time exponential in the size of the input by a deterministic algorithm. The algorithms are essentially simple, so that they cannot undergo any significant reduction in their complexity, which could invalidate the proofs presented here.

**Category:** Data Structures and Algorithms

[142] **viXra:1603.0070 [pdf]**
*submitted on 2016-03-04 18:59:21*

**Authors:** Valdir Monteiro dos Santos Godoi

**Comments:** 10 Pages. Article written about 5 years ago. Subject and formalism will be reviewed.

We prove that P ≠ NP by exhibiting two problems that are executed in polynomial time by a nondeterministic algorithm, but in time exponential in the size of the input by a deterministic algorithm. The algorithms are essentially simple, so that they cannot undergo any significant reduction in their complexity, which could invalidate the proofs presented here.

**Category:** Data Structures and Algorithms

[141] **viXra:1603.0069 [pdf]**
*submitted on 2016-03-04 19:18:09*

**Authors:** Valdir Monteiro dos Santos Godoi

**Comments:** 10 Pages. Article written about 5 years ago. Subject and formalism will be re-examined. It is an enhancement over version 3.

We prove that P ≠ NP by exhibiting two problems that are executed in polynomial time by a nondeterministic algorithm, but in time exponential in the size of the input by a deterministic algorithm. The algorithms are essentially simple, so that they cannot undergo any significant reduction in their complexity, which could invalidate the proofs presented here.

**Category:** Data Structures and Algorithms

[140] **viXra:1602.0349 [pdf]**
*submitted on 2016-02-27 10:52:47*

**Authors:** A. A. Salama, M.M.Eisa, S.A.EL-Hafeez, M.M. Lotfy

**Comments:** 28 Pages.

e-Learning has become a necessity for everyone, as it enables continuous and life-long education. Learners are social by nature: they want to connect to others who share the same interests. Online communities are important to help and encourage learners to continue their education, and through social capabilities learners can share different experiences. Social networks are a cornerstone of e-Learning. However, the alternatives are many, and learners might get lost in the tremendous learning resources that are available. It is the role of recommender systems to help learners find their own way through e-Learning. We present a review of the different recommender-system algorithms that are utilized in social-network-based e-Learning systems. Future research will include our proposed e-Learning system that utilizes a recommender system and a social network.

**Category:** Data Structures and Algorithms

[139] **viXra:1602.0250 [pdf]**
*submitted on 2016-02-20 09:00:12*

**Authors:** J.A.J. van Leunen

**Comments:** 3 Pages.

Modular programming is a very efficient way of system creation. By reducing the number of relevant relations the method diminishes the complexity of configuring and supporting modular systems. The method uses the available resources in an optimal way.
The current way of software generation uses an object oriented way of system construction that does not encapsulate the objects, such that their internals are effectively hidden and guarded against obstructive access by external objects. This paper introduces a new way of notation that enforces this encapsulation. Many programming languages already implement part of the required methodology. An example is the razor language. This paper extends these ideas to a modular way of programming.
The approach turns every relational database into a modular database and every file system into a modular file system. It turns every communication service into a modular communication service, and it standardizes programming so that reuse can be optimized in a global way. It will improve the robustness and reliability of software and enables largely automated system configuration.

**Category:** Data Structures and Algorithms

[138] **viXra:1602.0033 [pdf]**
*submitted on 2016-02-02 18:19:36*

**Authors:** Funkenstein the Dwarf

**Comments:** 4 Pages. Published Oct. 2014

We outline here the design considerations and implementation of woodcoin, in particular those
which separate it from other cryptocurrencies. Woodcoin is a cryptocurrency very much like
bitcoin. However the design of bitcoin explicitly models a non-renewable resource: gold. For woodcoin we more closely model a sustainable resource. In particular woodcoin avoids the time asymmetries of the bitcoin release model, maximizing the incentive to participate and the longevity of the coin at the same time. Our solution is logarithmic growth of the money supply. In addition, we outline the design considerations behind two other changes to the core protocol: mining with the Skein hash function and securing digital ownership with the X9_prime256v1 curve using ECDSA.

**Category:** Data Structures and Algorithms

[137] **viXra:1602.0011 [pdf]**
*submitted on 2016-02-01 10:52:44*

**Authors:** Michail Zak

**Comments:** 11 Pages.

A new kind of dynamics for simulations based upon quantum-classical hybrid is discussed. The model is represented by a modified Madelung equation in which the quantum potential is replaced by different, specially chosen potentials. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum- inspired information processing. In this paper, the retrieval of stored items from an exponentially large unsorted database is performed by quantum-inspired resonance using polynomial resources due to quantum-like superposition effect.

**Category:** Data Structures and Algorithms

[136] **viXra:1601.0264 [pdf]**
*submitted on 2016-01-24 10:49:06*

**Authors:** Pavlova Sobakevich

**Comments:** 2 Pages.

We show that Albert Camus was right when depicting modern people's work as the work of Sisyphus.
Data structures and algorithms can be used in different directions. One direction is to create an imitation of "work", thereby raising everyone's suffering and noise. Another direction is not to imitate any work, because no one can walk against the river for too long, but rather to go straight to the point: lessening suffering and then raising pleasure.
In other words, the first thing should always come first: pamper yourself and then help others.
It took several thousand years until this was proved mathematically, so that there is no one now who can create so much chaotic noise in people's minds and vision of reality that no one sees anything anymore.
This article is a cleaning instrument that helps to overcome this confusion, which happened due to a mischief.
Dr. Watson (recall the computer project called Watson) can learn from Sherlock and not sit in prostration anymore.
If there are any questions, Dr. Watson, the address of Sherlock is B 221.

**Category:** Data Structures and Algorithms

[135] **viXra:1601.0196 [pdf]**
*submitted on 2016-01-17 18:07:24*

**Authors:** Janis Belov

**Comments:** 5 Pages.

Richard P. Feynman, in his book "Six Easy Pieces", discusses energy supplies in nature and finally writes: "Therefore it is up to the physicists to figure out how to liberate us from the need for having energy. It can be done."
This article suggests a way toward such liberation.
The opposite is well known: the delta of information is always money, used for only one goal, to create more money. "Those who own the delta of information own the World."
**Category:** Data Structures and Algorithms

[134] **viXra:1601.0098 [pdf]**
*submitted on 2016-01-09 22:08:59*

**Authors:** Mrs. Prachi Karandikar, Sachin Deshpande

**Comments:** 8 Pages.

Data mining, the extraction of hidden predictive information from large databases, is nothing but discovering hidden value in the data warehouse. Because of the increasing ability to trace and collect large amounts of personal information, privacy preservation in data mining applications has become an important concern. Data distortion is one of the well-known techniques for privacy-preserving data mining. The objective of these data perturbation techniques is to distort the individual data values while preserving the underlying statistical distribution properties. These techniques are usually assessed in terms of both their privacy parameters and their associated utility measures. In this paper, we study the use of non-negative matrix factorization (NMF) with sparseness constraints for data distortion.
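The technique can be sketched with the standard multiplicative-update rules for NMF, with an L1 penalty on H as a simple sparseness constraint. This is an illustrative, stdlib-only sketch under assumed update rules and toy sizes, not the paper's implementation; the distorted data released would be the reconstruction W @ H rather than the original V.

```python
import random

def nmf_sparse(V, r, iters=200, lam=0.001, seed=0):
    """Multiplicative-update NMF with an L1 (sparseness) penalty on H:
    factor nonnegative V (m x n) as W (m x r) @ H (r x n)."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]
    eps = 1e-9

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def T(A):
        return [list(row) for row in zip(*A)]

    for _ in range(iters):
        WH = matmul(W, H)
        # H <- H * (W^T V) / (W^T W H + lam); lam drives H toward sparsity.
        WtV, WtWH = matmul(T(W), V), matmul(T(W), WH)
        for i in range(r):
            for j in range(n):
                H[i][j] *= WtV[i][j] / (WtWH[i][j] + lam + eps)
        WH = matmul(W, H)
        # W <- W * (V H^T) / (W H H^T), the standard Lee-Seung rule.
        VHt, WHHt = matmul(V, T(H)), matmul(WH, T(H))
        for i in range(m):
            for k in range(r):
                W[i][k] *= VHt[i][k] / (WHHt[i][k] + eps)
    return W, H
```

Multiplicative updates keep every entry nonnegative by construction, so the distorted matrix stays in the same domain as the original data.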

**Category:** Data Structures and Algorithms

[133] **viXra:1601.0097 [pdf]**
*submitted on 2016-01-09 22:10:00*

**Authors:** F. Emily Manoz, Priya, P.s. Ramesh, B.shanthi

**Comments:** 5 Pages.

Wireless Sensor Networks (WSNs) are a new class of networking technology. When we use a sensor network in a harsh environment, security is the most important concern. The technology may face various attacks, which produce vulnerabilities against authentication, confidentiality and trustworthiness. This paper introduces an adaptive method for securing the transmission of messages in wireless sensor networks in harsh environments. Lightweight protocols are highly suitable for achieving authentication. An efficient matching algorithm is used to perform packet matching, and it also detects malicious attacks efficiently within the transmission of data. Finally, an encryption/decryption algorithm secures the original data.

**Category:** Data Structures and Algorithms

[132] **viXra:1601.0096 [pdf]**
*submitted on 2016-01-09 22:12:21*

**Authors:** Saida Ibnyaich, Raefat Jalila El Bakouchi, Samira Chabaa, Abdelilah Ghammaz, Moha M’rabet Hassani

**Comments:** 8 Pages.

With the current expansion and the anticipated further increase in the use of cellular telephones and other wireless communication devices, considerable research effort is devoted to investigations of interactions between antennas on handsets and the human body. This interaction significantly changes the antenna characteristics from those in free space, or even on the device (handset, laptop). In this paper, in order to study this problem, a planar inverted-F antenna (PIFA) was first designed and simulated to operate at the frequency 2.45 GHz; then the influence of the human head on the return loss and on the radiation efficiency of the antenna was studied.

**Category:** Data Structures and Algorithms

[131] **viXra:1601.0095 [pdf]**
*submitted on 2016-01-09 22:13:17*

**Authors:** Sharma Shelja, Kumar Suresh, Rathy R. K.

**Comments:** 13 Pages.

Ad hoc networks are infrastructure-less collections of mobile nodes, characterized by wireless links, dynamic topology and ease of deployment. Proactive routing protocols maintain network topology information in the form of routing tables by periodically exchanging routing information. Mobility of nodes leads to frequent link breaks, resulting in loss of communication, so the information in the tables may become stale after some time. The DSDV routing protocol follows the proactive approach and uses stale routes in case of a link break, which is the major cause of its low performance as mobility increases. We focus on two variants of DSDV, namely Eff-DSDV and I-DSDV, which deal with broken-link reconstruction, and discuss the route reconstruction process these protocols use when links break. To analyze this mechanism, we used a terrain of size 700 m × 800 m with 8 nodes placed randomly. Analysis shows that both Eff-DSDV and I-DSDV perform better than DSDV in packet delivery ratio and packet loss, with a slight increase in routing overhead.

**Category:** Data Structures and Algorithms

[130] **viXra:1601.0094 [pdf]**
*submitted on 2016-01-09 22:14:17*

**Authors:** Mohammed Bsiss, Amami Benaissa

**Comments:** 8 Pages.

Security systems are currently gaining ground in many areas, not only through new programmable-circuit technologies that make it possible to realize very complex systems on a single chip, but also thanks to a common and coherent organization of the various safety standards. This paper describes the implementation of a safety fuzzy logic controller (SFLC) on the basis of the safety standard IEC 61508. The SFLC is programmed in the hardware description language VHDL and implemented on an FPGA.

**Category:** Data Structures and Algorithms

[129] **viXra:1601.0093 [pdf]**
*submitted on 2016-01-09 22:15:09*

**Authors:** Vasanth H, Dr.A.R.Aswath

**Comments:** 8 Pages.

An interrupt controller is designed with priority-based selection of the peripherals that require immediate attention or service. Here the AHB is optimized to interface with the VIC to initiate data transfers on the AHB. Both read and write cycles are designed for the AHB bus.

**Category:** Data Structures and Algorithms

[128] **viXra:1601.0092 [pdf]**
*submitted on 2016-01-09 22:15:59*

**Authors:** Sarita Rani, Sanju Saini, Sanjeeta Rani

**Comments:** 6 Pages.

Temperature is a very important parameter in industrial production. Recently, many studies have investigated temperature control systems based on various control strategies. This paper presents a comparison of GA-PID, fuzzy and PID control for temperature control of a water-bath system. The different control schemes, namely PID, PID tuned using genetic algorithms (GA-PID), and fuzzy logic control, are compared through experimental studies with respect to set-point regulation, influence of impulse noise, and sum of absolute error. The new GA-PID algorithm improves the performance of the system and is also suited to complicated variable-temperature control systems. The simulation results show that the proposed strategy controls temperature more effectively.
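For readers unfamiliar with the PID baseline the paper compares against, a minimal discrete PID loop driving a toy first-order water-bath model can be sketched as follows; the plant constants and controller gains are assumed for illustration and are not the paper's:

```python
def simulate_pid(setpoint=50.0, t_ambient=20.0, steps=200, dt=1.0,
                 kp=2.0, ki=0.1, kd=0.0):
    """Discrete PID driving a first-order thermal plant (illustrative model)."""
    temp = t_ambient          # water-bath temperature
    integral = 0.0
    prev_err = setpoint - temp
    for _ in range(steps):
        err = setpoint - temp
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # heater command
        prev_err = err
        # First-order plant: Newtonian cooling plus heater input.
        temp += dt * (-0.1 * (temp - t_ambient) + 0.5 * u)
    return temp

print(round(simulate_pid(), 2))  # settles near the 50-degree set-point
```

A GA-PID scheme like the one compared in the paper would search over (kp, ki, kd) with a genetic algorithm, scoring each candidate by a criterion such as the sum of absolute error over the run.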

**Category:** Data Structures and Algorithms

[127] **viXra:1601.0091 [pdf]**
*submitted on 2016-01-09 22:17:06*

**Authors:** S. Anupama Kumar, Vijayalakshmi M.N

**Comments:** 7 Pages.

Educational data mining is an emerging technology concerned with developing methods for exploring the unique data that exist in educational settings and using them to understand students as well as the domain in which they learn. The educational domain contains a great deal of data related to students, teachers and other learning strategies. Classification algorithms can be applied to various educational data to mine academic records, and can be used to predict a student's outcome based on previous academic performance. Predictive algorithms such as C4.5 and Random Tree are applied to students' previous academic results to predict their outcomes in the university examination. The prediction helps the tutor understand the progress and attitude of each student towards their studies. It also helps identify students who are constantly improving, so they can be helped to achieve a higher percentage, as well as underperformers, so that extra effort can be taken to achieve a better result. The algorithms are analyzed based on their accuracy in predicting the result, and on recall and precision values. Accuracy is assessed by comparing the output generated by the algorithm with the actual results obtained by the students in the university examination.
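The evaluation measures the abstract names (accuracy, precision, recall) can be computed from predicted versus actual results as follows; this is a generic stdlib sketch, not the authors' code, and the pass/fail labels are assumed for illustration:

```python
def evaluate(predicted, actual, positive="pass"):
    """Accuracy, precision and recall of predictions against actual results."""
    tp = sum(p == positive and a == positive for p, a in zip(predicted, actual))
    fp = sum(p == positive and a != positive for p, a in zip(predicted, actual))
    fn = sum(p != positive and a == positive for p, a in zip(predicted, actual))
    correct = sum(p == a for p, a in zip(predicted, actual))
    accuracy = correct / len(actual)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

pred = ["pass", "pass", "fail", "pass", "fail"]
act  = ["pass", "fail", "fail", "pass", "pass"]
print(evaluate(pred, act))  # (0.6, 0.666..., 0.666...)
```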

**Category:** Data Structures and Algorithms

[126] **viXra:1601.0090 [pdf]**
*submitted on 2016-01-09 22:17:56*

**Authors:** Affum Emmanuel, Edward Ansong

**Comments:** 14 Pages.

Ultra-wideband (UWB) technology is one of the promising solutions for future short-range communication and has recently received great attention from many researchers. However, interest in UWB devices prior to 2001 was primarily limited to radar systems, mainly for military applications, because bandwidth resources were becoming increasingly scarce and because of interference with other communication networks. This work provides a performance analysis of a multiband orthogonal frequency division multiplexing (MB-OFDM) UWB MIMO system in the presence of binary phase-shift keying time-hopping (BPSK-TH) UWB or BPSK-DS UWB interfering transmissions, under Nakagami-m and Lognormal fading channels and various modulation schemes, using MATLAB simulations. The results indicate that the performance of the UWB system in the Lognormal channel cannot be reliably predicted.

**Category:** Data Structures and Algorithms

[125] **viXra:1601.0089 [pdf]**
*submitted on 2016-01-09 22:32:00*

**Authors:** Rajashree Sukla, Chinmaya Kumar Nayak

**Comments:** 8 Pages.

In this paper we introduce a new interconnection network, the Fault Tolerant Hierarchical interconnection network for parallel computers, denoted FTH(k, 1). This network has a fault-tolerant hierarchical structure that improves on the fault-tolerance properties of the Extended Hypercube (EH). It has low diameter, constant degree connectivity and low message-traffic density in comparison with other hypercube-type networks such as the extended hypercube and the hypercube. We propose a fault-tolerant algorithm for node faults and also introduce a Hamiltonian circuit for the proposed network FTH(k,2).

**Category:** Data Structures and Algorithms

[124] **viXra:1511.0207 [pdf]**
*submitted on 2015-11-21 13:52:54*

**Authors:** Andrew Nassif

**Comments:** 6 Pages.

Computer engineering requires you to know a vast array of programming languages, as well as how to use different technologies to design hardware or manage databases. It can be described as the cross between information technology and electrical engineering. What I learned is that you do not only have to know C and C++; you will also be required to learn more, especially when working on the hardware, software and database sides. Software you need to be familiar with includes Visual Studio and sometimes open-source technologies. All in all, I learned a great deal from the people with whom I talked. I learned that computer engineering and related fields have an impact on technological advancement, as well as making the world an easier place to live. I learned the overall power of different subjects in the field, such as UML, blockchain technology, JavaScript, Python, and the power of Linux; some of these I present throughout this paper. The purpose of this paper is to inform the average reader about the field, what computer engineers do, and the powerful research and impact of the field. By the end of this paper, I hope you have a beginner's understanding of the implications of this widely known field.

**Category:** Data Structures and Algorithms

[123] **viXra:1510.0487 [pdf]**
*submitted on 2015-10-28 20:24:58*

**Authors:** Sai Venkatesh Balasubramanian

**Comments:** 10 Pages.

A chaos-based embedding process for textual data, offering high capacity and high security simultaneously, is designed and implemented. A chaotic image, obtained using a frequency-dependent driven chaotic system, is used as the data carrier in which textual data are embedded. The decryption and subsequent performance analyses reveal high fidelity, with a mean square error of around 0.0009 percent, and a compression ratio that increases nonlinearly with text size, with ratios of more than 150:1 obtained for significantly large texts. Moreover, a very high level of security is observed, with mean square error values of up to 60 percent even for a 1 percent misalignment in the decryption process. The extreme simplicity of implementation, coupled with the twin advantages of high compression ratios and high security, forms the highlight of the present work.

**Category:** Data Structures and Algorithms

[122] **viXra:1510.0478 [pdf]**
*submitted on 2015-10-28 20:36:00*

**Authors:** Sai Venkatesh Balasubramanian

**Comments:** 7 Pages.

Efficient techniques for genome data handling and storage are the need of the hour in the present genetic-engineering era. The present work presents the design and implementation of a genome sequence data compression technique that uses neither references nor lookup tables. This is achieved by first generating a digital chaotic bit stream, formed by performing XOR operations on three square waves with mismatched frequencies. The generated bit stream is XORed with the genome sequence bit stream after the necessary data conditioning, and the result is stored as a 2D array (image). The PNG format is chosen owing to its inherent lossless properties. The perfectly reversible operations of compression and decompression achieve compression ratios of around 2.6-3.5 with absolutely zero error. The use of digital chaos provides an additional layer of security, since the frequencies of the input square-wave signals form a secret key which, when mismatched during decompression even by 1 percent, can result in error rates of up to 60 percent.
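The keystream part of the pipeline described above can be sketched in a few lines: XOR three square waves with mismatched periods, then XOR the result with the data bits (the PNG packing step is omitted). The specific period values are assumed for illustration; the paper's frequencies are not given here:

```python
def square_wave_bit(t, period):
    """Bit of a 50%-duty square wave with the given half-period."""
    return (t // period) % 2

def keystream_bit(t, periods):
    """XOR of three square waves with mismatched periods."""
    b = 0
    for p in periods:
        b ^= square_wave_bit(t, p)
    return b

def xor_stream(bits, periods):
    """XOR the data bit stream with the chaotic-style keystream."""
    return [b ^ keystream_bit(t, periods) for t, b in enumerate(bits)]

key = (3, 5, 7)                       # square-wave periods act as the secret key
data = [1, 0, 1, 1, 0, 0, 1, 0] * 8  # stand-in for a conditioned genome bit stream
masked = xor_stream(data, key)
assert xor_stream(masked, key) == data   # XOR is perfectly reversible
wrong = xor_stream(masked, (3, 5, 8))    # a mismatched key corrupts some bits
print(sum(d != w for d, w in zip(data, wrong)), "bit errors with the wrong key")
```

The perfect reversibility and the sensitivity to a mismatched key are exactly the two properties the abstract highlights.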

**Category:** Data Structures and Algorithms

[121] **viXra:1510.0473 [pdf]**
*submitted on 2015-10-29 03:15:36*

**Authors:** Kurt Mehlhorn, Sanjeev Saxena

**Comments:** 16 Pages. Also as arXiv:1510.03339 [cs.DS]

Linear programming is now included in undergraduate and postgraduate algorithms courses for computer science majors. We show that it is possible to teach interior-point methods directly to students with just minimal knowledge of linear algebra.
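The central-path flavor of interior-point methods can be conveyed with a toy 1-D LP, minimize x subject to 0 ≤ x ≤ 1, using only single-variable calculus. This sketch (not the paper's presentation) minimizes the log-barrier objective by bisection on its gradient and follows the path as the barrier weight t grows:

```python
def barrier_minimizer(t, lo=1e-12, hi=1.0 - 1e-12, iters=200):
    """Minimize t*x - ln(x) - ln(1-x) on (0,1) by bisection on the gradient."""
    def grad(x):
        return t - 1.0 / x + 1.0 / (1.0 - x)   # strictly increasing in x
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if grad(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Central path for the LP "minimize x subject to 0 <= x <= 1":
# as the barrier weight t grows, the minimizer approaches the LP optimum x = 0.
path = [barrier_minimizer(t) for t in (1.0, 10.0, 100.0, 1000.0)]
print(path)
```

Real interior-point codes do the same thing in many dimensions, replacing bisection with Newton steps on the barrier-augmented objective.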

**Category:** Data Structures and Algorithms

[120] **viXra:1510.0417 [pdf]**
*submitted on 2015-10-27 09:23:57*

**Authors:** Sai Venkatesh Balasubramanian

**Comments:** 15 Pages.

A chaos-based compression technique offering high capacity and high security simultaneously is designed and implemented. A chaotic image, obtained by reshaping the signal representing a frequency-dependent driven chaotic system, is used as the data carrier in which data from the file to be compressed are embedded. The algorithm is implemented on the MATLAB and Python platforms for various file types such as txt, png, pdf, mp3, 3gp and rar. A comparative performance analysis reveals high fidelity, with mean square errors of less than 0.0009 percent, as well as a relatively high compression ratio of 5-6. A very high level of security is observed, with mean square error values of up to 60 percent even for a 1 percent misalignment in the decryption process. The execution times of the implementations are reasonable, at around 5 seconds. A new technique termed 'supercompression', consisting of repeated application of the compression technique, is proposed; a proof-of-concept implementation achieved extremely high compression ratios of around 40000. The extreme simplicity of implementation, coupled with the twin advantages of high compression ratios and high security, forms the highlight of the present work.

**Category:** Data Structures and Algorithms

[119] **viXra:1510.0360 [pdf]**
*submitted on 2015-10-23 09:24:11*

**Authors:** Sai Venkatesh Balasubramanian, T. Venkata Subba Reddy, B. Madhava Reddy

**Comments:** 14 Pages.

The current era of data explosion demands high efficiency in terms of both data capacity and data security. This scenario of Big Data inevitably leads to the technology of the Internet of Things (IoT) in the future.
The present project concerns the effective harnessing of nonlinear signal-processing principles to enhance the security of data without compromising capacity. The advantage of nonlinear signal processing lies in the fact that the nonlinearity of a single NMOS transistor can provide robust security through the generation of chaotic signals, resulting in low power dissipation and simple circuitry. The enhanced secure communication techniques are then studied, giving importance to phase variations in the signal, and applied to real-world information systems. The possibility of introducing such techniques into conventional big-data systems such as RDBMSs and Hadoop is also considered.
After demonstrating the capabilities of the nonlinear signal-processing approach in terms of fidelity, capacity and robustness, the techniques are extended further to an Internet of Things (IoT) environment, and their application to IoT-based systems such as RFID is explored. At the final stage, the change in managerial perspective required to handle an IoT-dominated environment is discussed, along with the business-level implications of such a technology shift. This study of IoT is termed 'Management of Things' (MoT).
The principal aim of this project is to provide a feasible, efficient, innovative yet cost-effective solution to the biggest problems of the telecommunication world today: data capacity and data security. The project thus follows the motto 'Transformation through Information' and leads us gently towards becoming effective citizens of a smarter planet.

**Category:** Data Structures and Algorithms

[118] **viXra:1510.0325 [pdf]**
*submitted on 2015-10-18 16:01:11*

**Authors:** J. Read, L. Martino, J. Hollmén

**Comments:** 26 Pages.

The number of methods available for classification of multi-label data has increased rapidly over recent years, yet relatively few links have been made with the related task of classification of sequential data. If label indices are considered as time indices, the problems can often be seen as equivalent. In this paper we detect and elaborate on connections between multi-label methods and Markovian models, and study the suitability of multi-label methods for prediction in sequential data. From this study we draw upon the most suitable techniques from the area and develop two novel competitive approaches which can be applied to either kind of data. We carry out an empirical evaluation on real-world sequential-prediction tasks: electricity demand and route prediction. As well as showing that several popular multi-label algorithms are easily applicable to sequencing tasks, our novel approaches, which benefit from a unified view of these areas, prove very competitive against established methods.

**Category:** Data Structures and Algorithms

[117] **viXra:1509.0259 [pdf]**
*submitted on 2015-09-27 17:00:53*

**Authors:** Laszlo B. Kish, Claes-Goran Granqvist

**Comments:** 8 Pages. first version

We introduce two new Kirchhoff-law-Johnson-noise (KLJN) secure key distribution schemes, which are generalizations of the original KLJN version. The first, the Random-Resistor (RR-) KLJN scheme, uses random resistors chosen from a quasi-continuum set of resistance values. It has been well known since the creation of the KLJN concept that such a system could work, because Alice and Bob can calculate the unknown resistance value from measurements; however, it has not been addressed in publications, as it was considered impractical. The reason for discussing it here is the second scheme, the Random-Resistor-Random-Temperature (RRRT-) KLJN key exchanger, inspired by a recent paper of Vadai, Mingesz and Gingl where security was maintained at non-zero power flow. In the RRRT-KLJN scheme, both the resistances and their temperatures are continuum random variables. We prove that the security of the RRRT-KLJN system can be maintained at non-zero power flow; thus the physical law guaranteeing the security is not the Second Law of Thermodynamics but the Fluctuation-Dissipation Theorem. Knowing their own resistance and temperature values, Alice and Bob can calculate the resistance and temperature values at the other end from the measured voltage, current and power-flow data in the wire. Eve cannot determine these values because, for her, there are four unknown quantities while she can set up only three equations. The RRRT-KLJN scheme has several advantages and renders all the existing former attacks invalid or incomplete.

**Category:** Data Structures and Algorithms

[116] **viXra:1509.0162 [pdf]**
*submitted on 2015-09-18 04:02:48*

**Authors:** Ms. K. Sathya Sundari

**Comments:** 9 Pages. Figures: 4, Tables: 0. IJCAT.org, Volume 2, Issue 8, August 2015

This paper presents a job-shop scheduling approach using ACO (Ant Colony Optimization). Different heuristic information is discussed and three different ant algorithms are presented, together with their state transition rules and pheromone-updating methods. The concept of the new strategy is highlighted and a template for the ACO approach is presented.
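The two ACO ingredients the abstract names, the state transition rule and the pheromone update, can be sketched generically as follows (this is the textbook form, not the paper's specific algorithms; the parameter values α, β, ρ are assumed):

```python
import random

def transition_probs(tau, eta, alpha=1.0, beta=2.0):
    """ACO state transition rule: p_j is proportional to tau_j^alpha * eta_j^beta."""
    weights = [t ** alpha * e ** beta for t, e in zip(tau, eta)]
    total = sum(weights)
    return [w / total for w in weights]

def update_pheromone(tau, chosen, quality, rho=0.5):
    """Evaporate all trails, then deposit on the chosen move."""
    tau = [(1.0 - rho) * t for t in tau]
    tau[chosen] += quality
    return tau

tau = [1.0, 1.0, 1.0]    # pheromone on three candidate jobs
eta = [0.5, 1.0, 0.25]   # heuristic desirability (e.g. 1 / processing time)
probs = transition_probs(tau, eta)
random.seed(0)
choice = random.choices(range(3), weights=probs)[0]
tau = update_pheromone(tau, choice, quality=1.0)
print(probs, choice, tau)
```

Evaporation shrinks trails on unused moves while the deposit reinforces the chosen one, which is what steers later ants toward good schedules.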

**Category:** Data Structures and Algorithms

[115] **viXra:1509.0154 [pdf]**
*submitted on 2015-09-18 03:45:27*

**Authors:** Akhila G.S, Prasanth R.S

**Comments:** 7 Pages.

Personalized Web Search (PWS) can improve the quality of search results on the Internet. The existing UPS-based personalized web search framework has several drawbacks. First, there is a chance of eavesdropping when the generalized profile is forwarded to the server. Second, the web server is vulnerable to web attacks such as URL-manipulation attacks, whose impact can affect users' personal information. We therefore introduce a new framework called UPES, in which the data stored on the server side and the requests from users are in encrypted form; Fully Homomorphic Encryption over the Integers (FHEI) is used for encrypting the data. Experimental results show that this framework functions in the best possible manner with the least waste of time and effort.

**Category:** Data Structures and Algorithms

[114] **viXra:1509.0152 [pdf]**
*submitted on 2015-09-18 03:56:22*

**Authors:** Rizal; Fadlisyah; Muhathir; Al Muammar Akfal

**Comments:** 8 Pages. Figures: 10, Tables: 1. IJCAT.org, Volume 2, Issue 8, August 2015

The Quran is the Muslim holy book, written in Arabic. Reading the Quran correctly requires knowledge of the tajwid (recitation) rules, and in everyday practice people find recitation difficult. A tajwid detection system is therefore needed to help users with recitation of the Quran. In this study, the Bray-Curtis distance method is used to detect recitation marks in images of the Holy Quran. The test results show that the accuracy of the system is 60% to 90%. This detection rate shows that the Bray-Curtis method can be used as one approach to detecting recitation marks in images of the Holy Quran. The system has the drawback of a high false-positive rate, with an error probability of about 40%. The performance of the detection system could be improved by further training with more, and more varied, training data. Nevertheless, this detection system does not deny the importance of teachers in learning how to read in accordance with the correct rules of recitation.
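The Bray-Curtis dissimilarity the study relies on has a one-line definition; the sketch below applies it to toy feature vectors (in the study it would compare image features of recitation marks, which are not reproduced here):

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity: sum |u_i - v_i| / sum |u_i + v_i|."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(abs(a + b) for a, b in zip(u, v))
    return num / den

# Identical feature vectors give 0; disjoint non-negative ones give 1.
print(bray_curtis([1, 2, 3], [1, 2, 3]))   # 0.0
print(bray_curtis([1, 0], [0, 1]))         # 1.0
```

For non-negative features the value lies in [0, 1], so a detection threshold on it is easy to calibrate.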

**Category:** Data Structures and Algorithms

[113] **viXra:1509.0104 [pdf]**
*submitted on 2015-09-10 08:53:01*

**Authors:** Arundale Ramanathan

**Comments:** 6 Pages. License: CC 4.0 Attribution

The arithmetic coding process involves recalculating intervals for each symbol to be encoded. This article presents a formula-based approach for calculating compressed codes and provides a proof deriving the formula from the usual approach. A spreadsheet is also provided for verification of the approach. The similarities between arithmetic coding and Huffman coding are also visually illustrated.
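For reference, the usual interval-recalculation baseline that the article's formula replaces looks like this (a minimal float-precision sketch with an assumed fixed three-symbol model, not the article's method):

```python
# Cumulative intervals for a fixed symbol model (probabilities assumed).
PROBS = {"a": 0.5, "b": 0.3, "c": 0.2}
CUM = {}
_low = 0.0
for s, p in PROBS.items():
    CUM[s] = (_low, _low + p)
    _low += p

def encode(message):
    """Narrow [low, low + width) once per symbol; any point inside encodes it."""
    low, width = 0.0, 1.0
    for s in message:
        c_lo, c_hi = CUM[s]
        low += width * c_lo
        width *= (c_hi - c_lo)
    return low + width / 2          # midpoint of the final interval

def decode(code, length):
    """Invert the interval narrowing, one symbol at a time."""
    out = []
    for _ in range(length):
        for s, (c_lo, c_hi) in CUM.items():
            if c_lo <= code < c_hi:
                out.append(s)
                code = (code - c_lo) / (c_hi - c_lo)
                break
    return "".join(out)

msg = "abcab"
print(decode(encode(msg), len(msg)))  # round-trips to "abcab"
```

Production coders replace the floats with integer arithmetic and renormalization; the float version is only viable for short messages, but it shows the per-symbol interval recalculation the article sets out to avoid.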

**Category:** Data Structures and Algorithms

[112] **viXra:1508.0186 [pdf]**
*submitted on 2015-08-23 01:20:59*

**Authors:** Sparisoma Viridi, Tito Waluyo Purboyo

**Comments:** 18 pages, 2 figures, 5 tables, supported by RIK-ITB b-II 2015

Solving problems in C++ that require dynamically sized variables can be made easier using the STL vector class. This work traces how to reproduce the statuses from the article "An Improved Algorithm for Generation of Attack Graph Based on Virtual Performance Node" by implementing the vector class. The C++ random function rand() is also used to determine the IP addresses of the attacker and the target, imitating the attacker's guessing.

**Category:** Data Structures and Algorithms

[111] **viXra:1507.0080 [pdf]**
*submitted on 2015-07-12 11:47:40*

**Authors:** S. Viridi, A. Suroso, F. T. Akbar, Novitrian, T. D. K. Wungu, S. Pramuditya, D. Irwanto, N. Asiah, A. Pramutadi, K. Basar, F. D. E. Latief, S. Permana, I. D. Aditya, H. Mahardika, A. H. Aimon, A. Waris, Khairurrijal

**Comments:** 8 pages, 1 figure, 2 tables, technical report

In preparation for the 6th Asian Physics Symposium on 19-20 August 2015 in Bandung, Indonesia, a conference management system (CMS) known as SeminarPress is used. This CMS already has many features, but cannot generate the Book of Abstracts (BoA) directly. To support the CMS, a shell script named mkboa.sh was developed, and the results of executing it are discussed in this work. Some limitations due to LaTeX restrictions on the use of certain characters are also emphasized.

**Category:** Data Structures and Algorithms

[110] **viXra:1506.0119 [pdf]**
*submitted on 2015-06-15 09:45:39*

**Authors:** L. Martino, J. Read, F. Louzada

**Comments:** 11 Pages.

Multi-dimensional classification (MDC, also known variously as multi-target, multi-objective, and multi-output classification) is the supervised learning problem where an instance is associated with several qualitative discrete variables (a.k.a. labels), rather than with a single class as in traditional classification problems. Since these classes are often strongly correlated, modeling the dependencies between them allows MDC methods to improve their performance, at the expense of increased computational cost.
A popular method for multi-label classification is classifier chains (CC), in which the predictions of individual classifiers are cascaded along a chain, thus taking into account inter-label dependencies. Different variants of CC methods have been introduced, and many of them perform very competitively across a wide range of benchmark datasets. However, scalability limitations become apparent on larger datasets when modeling a fully cascaded chain. In this work, we present an alternative model structure among the labels, such that Bayesian optimal inference becomes computationally feasible; the inference is performed efficiently using a Viterbi-type algorithm.
As an additional contribution to the literature we analyze the relative advantages and interaction of three aspects of classifier-chain design with regard to predictive performance versus efficiency: finding a good chain structure vs. a random structure, carrying out complete inference vs. approximate or greedy inference, and a linear vs. a non-linear base classifier. We show that our Viterbi CC can perform best on a range of real-world datasets.

**Category:** Data Structures and Algorithms

[109] **viXra:1505.0218 [pdf]**
*submitted on 2015-05-29 01:56:46*

**Authors:** Grzegorz Ileczko

**Comments:** 14 Pages.

This article is a short demonstration of the computational possibilities of an extremely effective algorithm for the Hamilton problem. In fact, the algorithm can quickly solve a few similar problems, well known in the literature as:
the Hamiltonian path,
the Hamiltonian cycle,
and
the longest Hamiltonian path,
the longest Hamiltonian cycle.

**Category:** Data Structures and Algorithms

[108] **viXra:1505.0169 [pdf]**
*submitted on 2015-05-23 18:53:26*

**Authors:** Yuly Shipilevsky

**Comments:** 10 Pages.

A polynomial-time algorithm for integer factorization, wherein integer factorization is reduced to a convex polynomial-time integer minimization problem.

**Category:** Data Structures and Algorithms

[107] **viXra:1504.0227 [pdf]**
*submitted on 2015-04-28 12:22:52*

**Authors:** Suraj Kumar

**Comments:** 4 Pages.

In this paper, an attempt is made to provide insight into the information system of the Universe as a whole, comparing it with the information system in our local reference frame of observables. With the conservation of information carried out by the SU(1) gauge symmetry group of the Universe, it explains how the same information is decoded in two different ways by the respective information systems mentioned above. It also provides an introduction to the different information-processing methodologies of the Universe, and to how there is loss of information through various dynamical changes in the Universe, including redshift.

**Category:** Data Structures and Algorithms

[106] **viXra:1504.0134 [pdf]**
*submitted on 2015-04-17 08:19:42*

**Authors:** Bishnu Charan Behera

**Comments:** 2 Pages.

This is an algorithm with the same time complexity as linear search, O(n), but it is still better than linear search in terms of execution time. Let A[] be an array of some size N. If the element we want to find is at any position before N/2, then MY-SEARCH and linear search have the same execution time, but the difference appears when the sought element is after position N/2. Suppose the element is at the Nth position: linear search finds it after N iterations, but MY-SEARCH finds it after the first iteration.
When the size is something like 10 or 15 this hardly matters, but consider an array of size 100,000,000 or so. With linear search, the cost of continuing the loop 100,000,000 times is considerable, whereas MY-SEARCH finds the element after just one iteration. This shows how a large amount of work can be avoided with MY-SEARCH.
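The abstract does not spell out MY-SEARCH's mechanism; a scan from both ends at once is one construction consistent with the claimed behavior (still O(n), but an element at the last position is found on the first iteration). A hedged sketch:

```python
def two_ended_search(a, target):
    """Scan from both ends at once; returns (index, iterations), or (-1, n/2)."""
    n = len(a)
    for k in range((n + 1) // 2):
        if a[k] == target:               # check from the front ...
            return k, k + 1
        j = n - 1 - k
        if j != k and a[j] == target:    # ... and from the back, same iteration
            return j, k + 1
    return -1, (n + 1) // 2

data = list(range(1, 11))
print(two_ended_search(data, 10))  # found at index 9 on iteration 1
print(two_ended_search(data, 1))   # found at index 0 on iteration 1
```

The worst case (element near the middle, or absent) still takes about N/2 iterations, which is why the overall complexity stays O(n).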

**Category:** Data Structures and Algorithms

[105] **viXra:1504.0116 [pdf]**
*submitted on 2015-04-14 11:10:18*

**Authors:** M. Pooja, S. K. Manigandan

**Comments:** 7 Pages.

This income tax project deals with computerizing the process of tax payment; the entire process is maintained in an automated way. The main objective of the project is to reduce time consumption. The income tax system is categorized into three groups according to the mode of payment: to the central government, the state government, and the municipality. The online tax payment system is helpful for paying from anywhere and at any time. Earlier it was impossible to pay online using a debit or credit card; the main objective of our system is to allow payment by debit or credit card. Our project includes the concept of paying through a card number provided by the bank, which is very secure and easy to use. The card security code provides secure money transactions in the system; alternatively, the user can pay the tax using an account number and bank name. The user can view their tax calculation and monitor the transaction status, whether the payment succeeds or not, through the tax-view module. The admin login is used on the administration side, which provides security for money transactions and confidentiality of user information. The admin view is used to view the tax payments of the logged-in user, and the admin monitors user activities through the admin-view module.

**Category:** Data Structures and Algorithms

[104] **viXra:1504.0072 [pdf]**
*submitted on 2015-04-09 09:40:48*

**Authors:** Funkenstein the Dwarf

**Comments:** 4 Pages.

About a year after Ittay Eyal published two papers claiming vulnerabilities in the Bitcoin mining protocol, the network is still strong (it has grown in hashpower many times over) and is unaffected by the supposed problems. I show here the biggest reasons the two vulnerability analyses were flawed. The attacks appear to hinder competing miners; however, both attacks harm the attacker's own bottom line more than any harm to the competitors can emerge as profit for the attacker.

**Category:** Data Structures and Algorithms

[103] **viXra:1503.0220 [pdf]**
*submitted on 2015-03-28 06:17:24*

**Authors:** Dhananjay P. Mehendale

**Comments:** 17 pages.

In this paper we discuss some novel algorithms for linear programming inspired by geometrical considerations, using simple mathematics related to finding intersections of lines and planes. All these algorithms share a common aim: they try to approach closer and closer to the centroid, or some other centrally located interior point, in order to speed up the process of reaching an optimal solution. Imagine the line parallel to the vector C, where CTx denotes the objective function to be optimized, that passes through the point representing the optimal solution. The algorithms proposed in this paper essentially try to reach, in successive steps, a feasible interior point in the close vicinity of this line. Once a point in a small neighborhood of some point on this line has been reached, moving from it parallel to the vector C leads to a point in a sufficiently small neighborhood of the point representing the optimal solution.

**Category:** Data Structures and Algorithms

[102] **viXra:1503.0218 [pdf]**
*submitted on 2015-03-27 19:59:36*

**Authors:** Azeddine Elhassouny

**Comments:** 120 Pages.

Thesis supervised by Prof. Driss Mammass, prepared at the Laboratoire Image et Reconnaissance de Formes-Systèmes Intelligents et Communicants (IRF-SIC), defended on 22 June 2013, Agadir, Morocco.
The main objective of this thesis is to provide remote sensing with automatic tools for classification and for change detection of land cover, useful for many purposes. In this context, we have developed two general fusion methods, used for image classification and change detection, that jointly use the spatial information obtained by supervised ICM classification and Dezert-Smarandache theory (DSmT) with new decision rules to overcome the limitations of the decision rules existing in the literature. All programs of this thesis have been implemented in MATLAB and C, and the preprocessing and visualization of results were carried out in ENVI 4.0, which allowed the results to be validated accurately and in concrete cases. Both approaches are evaluated on LANDSAT ETM+ and FORMOSAT-2 images and the results are promising.
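For readers unfamiliar with DSmT, the PCR5 combination rule that underlies such fusion methods can be sketched for two sources over a two-hypothesis frame {A, B} (a minimal illustration; the thesis itself works with richer frames and image data):

```python
def pcr5_two_sources(m1, m2):
    """Combine two basic belief assignments over {A, B} with PCR5:
    conjunctive combination, then proportional redistribution of each
    partial conflict back to the elements that produced it."""
    mA = m1["A"] * m2["A"]
    mB = m1["B"] * m2["B"]
    # partial conflict m1(A)m2(B): split between A and B
    # proportionally to m1(A) and m2(B)
    if m1["A"] * m2["B"] > 0:
        mA += m1["A"] ** 2 * m2["B"] / (m1["A"] + m2["B"])
        mB += m2["B"] ** 2 * m1["A"] / (m1["A"] + m2["B"])
    # partial conflict m1(B)m2(A), handled symmetrically
    if m1["B"] * m2["A"] > 0:
        mB += m1["B"] ** 2 * m2["A"] / (m1["B"] + m2["A"])
        mA += m2["A"] ** 2 * m1["B"] / (m1["B"] + m2["A"])
    return {"A": mA, "B": mB}

m = pcr5_two_sources({"A": 0.6, "B": 0.4}, {"A": 0.7, "B": 0.3})
# masses still sum to 1, and A (supported by both sources) dominates
```

Unlike Dempster's rule, no conflicting mass is discarded by global renormalization; each partial conflict is returned to its own contributors.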

**Category:** Data Structures and Algorithms

[101] **viXra:1503.0018 [pdf]**
*submitted on 2015-03-02 20:41:52*

**Authors:** editors Florentin Smarandache, Jean Dezert

**Comments:** 504 Pages.

The fourth volume on Advances and Applications of Dezert-Smarandache Theory (DSmT) for information fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics. The contributions (see List of Articles published in this book, at the end of the volume) have been published or presented after disseminating the third volume (2009, http://fs.gallup.unm.edu/DSmT-book3.pdf) in international conferences, seminars, workshops and journals.
The first part of this book presents the theoretical advancement of DSmT, dealing with belief functions, conditioning and deconditioning, the Analytic Hierarchy Process, decision making, multi-criteria analysis, evidence theory, combination rules, evidence distance, conflicting belief, sources of evidence with different importances and reliabilities, importance of sources, the pignistic probability transformation, qualitative reasoning under uncertainty, imprecise belief structures, the 2-tuple linguistic label, the Electre Tri method, hierarchical proportional redistribution, basic belief assignment, subjective probability measures, the Smarandache codification, neutrosophic logic, outranking methods, Dempster-Shafer theory, the Bayes fusion rule, frequentist probability, mean square error, controlling factors, optimal assignment solutions, data association, the Transferable Belief Model, and others.
More applications of DSmT have emerged in the years since the appearance of the third DSmT book in 2009. Accordingly, the second part of this volume covers applications of DSmT in connection with Electronic Support Measures, belief functions, sensor networks, ground moving target and multiple-target tracking, vehicle-borne improvised explosive devices, the Belief Interacting Multiple Model filter, seismic and acoustic sensors, Support Vector Machines, alarm classification, the ability of the human visual system, the Uncertainty Representation and Reasoning Evaluation Framework, threat assessment, handwritten signature verification, automatic aircraft recognition, dynamic data-driven application systems, adjustment of secure communication trust analysis, and so on.
Finally, the third part presents a list of references related to DSmT, published or presented over the years since its inception in 2004, chronologically ordered.

**Category:** Data Structures and Algorithms

[100] **viXra:1502.0231 [pdf]**
*submitted on 2015-02-26 04:57:01*

**Authors:** Jan A. Bergstra

**Comments:** 23 Pages.

After 15 years of development of instruction sequence theory (IST), a SWOT analysis of that project is long overdue. The paper provides a comprehensive SWOT analysis of IST, based on a recent proposal concerning the terminology for the theory and applications of instruction sequences.

**Category:** Data Structures and Algorithms

[99] **viXra:1502.0228 [pdf]**
*submitted on 2015-02-25 17:47:05*

**Authors:** Jan A. Bergstra

**Comments:** 19 Pages.

Instruction sequences play a key role in computing and have the potential to become more important in the conceptual development of informatics, in addition to their existing role in computer technology and machine architectures. After 15 years of development of instruction sequence theory, a more robust and outreaching terminology is needed to support further development. Instruction sequencing is the central concept around which a new family of terms and phrases is developed.

**Category:** Data Structures and Algorithms

[98] **viXra:1502.0047 [pdf]**
*submitted on 2015-02-05 23:42:58*

**Authors:** Phil Ascio

**Comments:** 1 Page.

We shall reassess the simplex algorithm by observing an injective semi-separable morphism. Recent interest in affine, geometric functionals has centered on studying linearly n-dimensional, minimal random variables in NP. In contrast, we shall show that there exists a combinatorially Cauchy projective set acting algebraically on P to demonstrate that P=NP.

**Category:** Data Structures and Algorithms

[97] **viXra:1502.0003 [pdf]**
*submitted on 2015-02-01 04:19:35*

**Authors:** Wenming Zhang

**Comments:** 4 Pages. This is a short and interesting paper.

We discuss the P versus NP problem from the perspective of
addition operation about polynomial functions. Two contradictory propositions for the addition operation are presented.
With the proposition that the sum of k (k<=n)
polynomial functions on n always yields a polynomial function, we
prove that P=NP, considering the maximum clique problem. However,
we also get a contradiction if we accept the proposition. So, we
conclude that the sum of k polynomial functions may yield an
exponential function. Accepting this proposition, we prove that
P!=NP by constructing an abstract decision problem.
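The tension between the two propositions can be made concrete with a one-line bound (my illustration, not taken from the paper):

```latex
\sum_{i=1}^{k} p_i(n) \;\le\; k \cdot \max_{i} p_i(n) \;=\; O\!\left(n^{d+1}\right)
\qquad \text{for } k \le n \text{ and } \deg p_i \le d,
```

so a sum of at most n polynomial terms is again polynomial, whereas a sum over exponentially many bounded terms, e.g. \( \sum_{i=1}^{2^n} 1 = 2^n \), need not be; which regime applies depends on how many terms the construction actually generates.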

**Category:** Data Structures and Algorithms

[96] **viXra:1501.0203 [pdf]**
*submitted on 2015-01-21 18:32:38*

**Authors:** Jan A. Bergstra

**Comments:** 23 Pages.

Algebraic algorithmics, a phrase taken from G.E. Tseitlin, is given a specific interpretation for the line of work in the tradition of program algebra and thread algebra. An application to algebraic algorithmics of preservationist paraconsistent reasoning in the style of chunk and permeate is suggested and discussed.
In the first appendix, nopreprint is coined as a tag for a new publication category, and a rationale for its use is given. In the second appendix, some rationale is provided for the affiliation from which the paper is written and posted.

**Category:** Data Structures and Algorithms

[95] **viXra:1501.0022 [pdf]**
*submitted on 2015-01-02 05:15:50*

**Authors:** Anatolij K. Prykarpatski

**Comments:** 7 Pages. a new approach to constructing a priori integrable discretizations of nonlinear Lax type integrable dynamical systems

The Calogero-type matrix discretization scheme is applied to constructing Lax-type integrable discretizations of a wide class of nonlinear integrable dynamical systems on functional manifolds. Their Lie-algebraic structure and complete integrability, related to co-adjoint orbits on the Markov co-algebras, are discussed. It is shown that a set of conservation laws and the associated Poisson structure ensue as a byproduct of the approach devised. Based on the quasi-representation property of Lie algebras, the limiting procedure of finding the nonlinear dynamical systems on the corresponding functional spaces is demonstrated.

**Category:** Data Structures and Algorithms

[94] **viXra:1412.0176 [pdf]**
*submitted on 2014-12-15 05:33:53*

**Authors:** Grzegorz Ileczko

**Comments:** 13 Pages.

This article demonstrates a general approach to the problems of the class P vs. NP, in particular to problems for which P = NP. The presented solution is quite simple and can be applied in many areas of science. In general, P = NP concerns a class of problems of an algorithmic nature, whose algorithms contain one or more logical operations such as if-then instructions or Boolean operations. A proof of this thesis, with a new formula, is presented, together with a worked example for the case P = NP. There exist many problems, likely millions, for which P-class problems are equivalent to NP problems.
For example, I discovered an extremely effective algorithm for the Hamiltonian path problem: it can find a solution for 100 cities in a very short time, less than two seconds on an old laptop. A classical solution to this problem exists, but it is extremely difficult and its computation time is huge. The algorithm for the Hamiltonian problem will be presented in a separate article.
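For context (this is the classical exponential-time method, not the author's unpublished algorithm): Hamiltonian path existence can be decided exactly by the Held-Karp dynamic program over vertex subsets, which is why a fast exact solver for 100 cities would be remarkable.

```python
def has_hamiltonian_path(adj):
    """Exact check via the Held-Karp dynamic program, O(2^n * n^2).
    adj is an n x n 0/1 adjacency matrix."""
    n = len(adj)
    # dp[mask][v]: True if the vertex set `mask` can be covered by a
    # simple path that ends at vertex v
    dp = [[False] * n for _ in range(1 << n)]
    for v in range(n):
        dp[1 << v][v] = True
    for mask in range(1 << n):
        for v in range(n):
            if not dp[mask][v]:
                continue
            for u in range(n):
                if adj[v][u] and not (mask >> u) & 1:
                    dp[mask | (1 << u)][u] = True
    full = (1 << n) - 1
    return any(dp[full][v] for v in range(n))
```

At n = 100 the table alone has 2^100 rows, so this approach is hopeless there; only heuristics or a genuinely new algorithm could reach that size.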

**Category:** Data Structures and Algorithms

[93] **viXra:1412.0106 [pdf]**
*submitted on 2014-12-03 19:14:19*

**Authors:** Sidharth Ghoshal

**Comments:** 2 Pages. Copyright Sidharth Ghoshal

A high-performance file compression algorithm.

**Category:** Data Structures and Algorithms

[92] **viXra:1411.0592 [pdf]**
*submitted on 2014-11-29 06:43:39*

**Authors:** Sanjeev Saxena

**Comments:** 8 Pages.

Linear programming is now included in undergraduate and postgraduate algorithms courses for Computer Science majors. It is possible to teach interior-point methods directly with just minimal knowledge of algebra and matrices.
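A sketch of the kind of minimal-prerequisite example such a course could open with (my toy illustration, not taken from the paper): a one-variable log-barrier method for minimizing x over 0 ≤ x ≤ 1, needing only derivatives and Newton's method.

```python
def barrier_minimize(c, mu, x=0.5, iters=100):
    """Newton's method on f(x) = c*x - mu*(log(x) + log(1 - x)),
    the log-barrier objective for the constraints 0 < x < 1."""
    for _ in range(iters):
        g = c - mu / x + mu / (1 - x)        # f'(x)
        h = mu / x**2 + mu / (1 - x)**2      # f''(x) > 0
        step = g / h
        while not (0.0 < x - step < 1.0):    # damp the step to stay interior
            step /= 2.0
        x -= step
    return x

# Follow the central path: shrink mu, warm-starting from the last solution.
x = 0.5
for mu in [1.0, 0.1, 0.01, 0.001]:
    x = barrier_minimize(1.0, mu, x)
# x approaches the true minimizer x* = 0 of: minimize x over [0, 1]
```

The same pattern, barrier plus Newton plus a decreasing mu schedule, is the skeleton of full interior-point LP solvers, with the scalar division replaced by a linear system solve.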

**Category:** Data Structures and Algorithms

[91] **viXra:1410.0193 [pdf]**
*submitted on 2014-10-29 16:10:28*

**Authors:** Manar Jammal, Ali Kanso, Abdallah Shami

**Comments:** 7 Pages.

Cloud computing is continuously growing as a
business model for hosting information and communication technology
applications. Although on-demand resource consumption
and faster deployment time make this model appealing for
the enterprise, other concerns arise regarding the quality of
service offered by the cloud. One major concern is the high
availability of applications hosted in the cloud. This paper
demonstrates the tremendous effect that the placement strategy
for virtual machines hosting applications has on the high
availability of the services provided by these applications. In
addition, a novel scheduling technique is presented that takes
into consideration the interdependencies between application
components and other constraints such as communication delay
tolerance and resource utilization. The problem is formulated
as a linear programming multi-constraint optimization model.
The evaluation results demonstrate that the proposed solution
improves the availability of the scheduled components compared
to the OpenStack Nova scheduler.

**Category:** Data Structures and Algorithms

[90] **viXra:1410.0134 [pdf]**
*submitted on 2014-10-22 17:29:34*

**Authors:** A. A. Salama, Mohamed Abdelfattah, Mohamed Eisa

**Comments:** 6 Pages.

Since the world is full of indeterminacy, the neutrosophics found their place in contemporary research. In this paper we introduce distances between neutrosophic sets: the Hamming distance, the normalized Hamming distance, the Euclidean distance and the normalized Euclidean distance. We extend these distance concepts to the case of the neutrosophic hesitancy degree. In addition, this paper suggests how to enrich intuitionistic fuzzy querying by the use of neutrosophic values.
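Under the common single-valued formulation, where each element carries a truth/indeterminacy/falsity triple (T, I, F), the listed distances can be sketched as follows (the normalization constants are one common convention and are my assumption, not necessarily the paper's exact ones):

```python
import math

def hamming(A, B):
    """Hamming distance between neutrosophic sets given as lists of
    (T, I, F) triples: sum of componentwise absolute differences."""
    return sum(abs(t1 - t2) + abs(i1 - i2) + abs(f1 - f2)
               for (t1, i1, f1), (t2, i2, f2) in zip(A, B))

def normalized_hamming(A, B):
    # divide by 3n so the distance lies in [0, 1] (assumed convention)
    return hamming(A, B) / (3 * len(A))

def euclidean(A, B):
    return math.sqrt(sum((t1 - t2) ** 2 + (i1 - i2) ** 2 + (f1 - f2) ** 2
                         for (t1, i1, f1), (t2, i2, f2) in zip(A, B)))

def normalized_euclidean(A, B):
    return euclidean(A, B) / math.sqrt(3 * len(A))

A = [(1.0, 0.0, 0.0), (0.5, 0.2, 0.3)]
B = [(0.0, 0.0, 1.0), (0.5, 0.2, 0.3)]
```

The hesitancy-degree extension the abstract mentions would add a fourth component per element in the same pattern.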

**Category:** Data Structures and Algorithms

[89] **viXra:1410.0122 [pdf]**
*submitted on 2014-10-21 10:02:06*

**Authors:** Laszlo B. Kish, Zoltan Gingl, Robert Mingesz, Gergely Vadai, Janusz Smulko, Claes-Goran Granqvist

**Comments:** 7 Pages. first draft

After briefly summarizing our general theoretical arguments, we show that the strong information leak observed in the Gunn-Allison-Abbott attack [Scientific Reports 4 (2014) 6461] against the Kirchhoff-law-Johnson-noise (KLJN) secure key exchange scheme resulted from a serious design flaw of the system: the attenuator broke the single Kirchhoff loop into two coupled loops. This is an illegal operation, because the single loop is essential for the security, so the observed leak is to be expected. We demonstrate this by cracking the system with an elementary current-comparison attack yielding a success probability close to 1 for Eve, even without averaging within a sub-correlation-time measurement window. A fully defended KLJN system would not be able to function at all, due to its built-in current-comparison defense against active (invasive) attacks.

**Category:** Data Structures and Algorithms

[88] **viXra:1409.0180 [pdf]**
*submitted on 2014-09-26 05:40:42*

**Authors:** Samit Kumar

**Comments:** 3 Pages.

Purposeful information can be represented in a hierarchical manner using basic data originating from digitally connected sources. Such hierarchically represented data highlights the precarious state.

**Category:** Data Structures and Algorithms

[87] **viXra:1409.0150 [pdf]**
*submitted on 2014-09-20 13:41:49*

**Authors:** X. Cao, Y. Saez, G. Pesti, L.B. Kish

**Comments:** 13 Pages. Submitted for publication to Fluct. Noise Lett. on September 20, 2014

In a former paper [Fluct. Noise Lett., 13 (2014) 1450020] we introduced a vehicular communication system with unconditionally secure key exchange based on the Kirchhoff-Law-Johnson-Noise (KLJN) key distribution scheme. In this paper, we address the secure KLJN key donation to vehicles and give an upper limit for the lifetime of this key.

**Category:** Data Structures and Algorithms

[86] **viXra:1409.0071 [pdf]**
*submitted on 2014-09-10 14:26:01*

**Authors:** Panos Sakkos, Dimitrios Kotsakos, Ioannis Katakis, Dimitrios Gunopoulos

**Comments:** 4 Pages.

We present a Software Keyboard for smart touchscreen devices that learns its owner’s unique dictionary in order to produce personalized typing predictions. The learning process is accelerated by analysing the user’s past typed communication. Moreover, personal temporal user behaviour is captured and exploited in the prediction engine. Computational and storage issues are addressed by dynamically forgetting words that the user no longer types. A prototype implementation is available at the Google Play Store.
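The "dynamically forgetting" idea can be sketched with a decaying frequency dictionary (an illustration of the stated idea; the decay rate, drop threshold and count-based ranking are my assumptions, not the authors' implementation):

```python
class ForgettingDictionary:
    """Personalized word store: counts typed words, ranks prefix
    completions by count, and periodically decays counts so that
    words the user no longer types are eventually dropped."""

    def __init__(self, decay=0.5, drop_below=0.1):
        self.counts = {}
        self.decay = decay
        self.drop_below = drop_below

    def observe(self, word):
        self.counts[word] = self.counts.get(word, 0.0) + 1.0

    def tick(self):
        # called periodically: fade all counts, forget negligible words
        self.counts = {w: c * self.decay for w, c in self.counts.items()
                       if c * self.decay >= self.drop_below}

    def predict(self, prefix, k=3):
        matches = [w for w in self.counts if w.startswith(prefix)]
        return sorted(matches, key=lambda w: -self.counts[w])[:k]

d = ForgettingDictionary()
for _ in range(5):
    d.observe("hello")
d.observe("help")
# "hello" outranks "help" for the prefix "hel"; after enough idle
# decay ticks, both words are forgotten and storage shrinks
```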

**Category:** Data Structures and Algorithms

[85] **viXra:1408.0145 [pdf]**
*submitted on 2014-08-21 18:55:42*

**Authors:** Laszlo B. Kish

**Comments:** 3 Pages. submitted for publication

Unconditionally secure physical key distribution is very slow whenever it is undoubtedly secure. Thus it is practically impossible to use a one-time-pad-based cipher to guarantee perfect security, because using the key bits more than once gives out statistical information, for example via known-plaintext attacks or by utilizing known components of the protocol and language statistics. Here we outline a protocol that seems to reduce this problem and allows near-one-time-pad-based communication with an unconditionally secure physical key of finite length. The unconditionally secure physical key is not used for communication itself; it is used to secure the communication that generates and shares a new software-based key without a known-plaintext component, such as keys shared via the Diffie-Hellman-Merkle protocol. This combined physical/software key distribution looks favorable compared to purely physical key based communication when the physical key distribution is much slower than the software-based key distribution. The security proof of this scheme is still an open problem.
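The second stage described here, generating a fresh software-based key, could be a Diffie-Hellman(-Merkle) exchange. A toy sketch with demonstration-size parameters (chosen for illustration only; real deployments use vetted groups, and the authentication by the slow physical key is not modeled here):

```python
import secrets

# toy group parameters: a Mersenne prime modulus and a small generator
# (for illustration only -- NOT cryptographically vetted)
P = 2**127 - 1
G = 3

def dh_keypair():
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)
    return private, public

a_priv, a_pub = dh_keypair()   # Alice
b_priv, b_pub = dh_keypair()   # Bob
# each side combines its own private key with the other's public value
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
# shared_a == shared_b: a fresh key with no known-plaintext component,
# which the slow unconditionally secure physical key would authenticate
```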

**Category:** Data Structures and Algorithms

[84] **viXra:1408.0123 [pdf]**
*submitted on 2014-08-18 13:06:44*

**Authors:** Laszlo B. Kish, Derek Abbott, Claes-Goran Granqvist, He Wen

**Comments:** 4 Pages. In: Proceedings of the first conference on Hot Topics in Physical Informatics (HoTPI, 2013 November). Paper is in press at International Journal of Modern Physics: Conference Series (2014).

This paper deals with the Kirchhoff-law-Johnson-noise (KLJN) classical statistical physical key exchange method and surveys criticism, often stemming from a lack of understanding of its underlying premises or from other errors, together with our responses to these often unphysical claims. Some of the attacks are valid; however, an extended KLJN system remains protected against all of them, implying that its unconditional security is not impacted.

**Category:** Data Structures and Algorithms

[83] **viXra:1408.0048 [pdf]**
*submitted on 2014-08-08 23:27:58*

**Authors:** Alexander Fix, Misha Collins

**Comments:** 3 Pages.

A trillion by trillion matrix is almost unimaginably huge, and finding its inverse seems to be a truly impossible task. However, given current trends in computing, it may actually be possible to achieve such a task around 2040, if we were willing to devote the entirety of human computing resources to a single computation. Why would we want to do this? Perhaps, as Mallory said of Everest: “Because it’s there”.

**Category:** Data Structures and Algorithms

[82] **viXra:1407.0063 [pdf]**
*submitted on 2014-07-08 22:02:18*

**Authors:** Yuly Shipilevsky

**Comments:** 10 Pages.

A polynomial-time algorithm for integer factorization, wherein integer factorization is reduced to a convex polynomial-time integer minimization problem.

**Category:** Data Structures and Algorithms

[81] **viXra:1407.0010 [pdf]**
*submitted on 2014-07-01 21:16:24*

**Authors:** Samuel C. Hsieh

**Comments:** 13 Pages.

We establish a lower bound of 2^n conditional jumps for deciding the satisfiability of the conjunction of any two Boolean formulas from a set called a full representation of Boolean functions of n variables - a set containing a Boolean formula to represent each Boolean function of n variables. The contradiction proof first assumes that there exists a RAM program that correctly decides the satisfiability of the conjunction of any two Boolean formulas from such a set by following an execution path that includes fewer than 2^n conditional jumps. By using multiple runs of this program, with one run for each Boolean function of n variables, the proof derives a contradiction by showing that this program is unable to correctly decide the satisfiability of the conjunction of at least one pair of Boolean formulas from a full representation of n-variable Boolean functions if the program executes fewer than 2^n conditional jumps. This lower bound of 2^n conditional jumps holds for any full representation of Boolean functions of n variables, even if a full representation consists solely of minimized Boolean formulas derived by a Boolean minimization method. We discuss why the lower bound fails to hold for satisfiability of certain restricted formulas, such as 2CNF satisfiability, XOR-SAT, and HORN-SAT. We also relate the lower bound to 3CNF satisfiability.

**Category:** Data Structures and Algorithms

[80] **viXra:1406.0105 [pdf]**
*submitted on 2014-06-16 18:39:38*

**Authors:** Michail Zak

**Comments:** 46 Pages.

One of the fundamental objectives of mathematical modeling is to interpret the past and present and, based upon this interpretation, to predict the future. The use at time t of available observations from a time series to forecast its value at some future time t+l can provide a basis for 1) model reconstruction, 2) model verification, 3) anomaly detection, 4) data monitoring, and 5) adjustment of the underlying physical process. A forecast is usually needed over a period known as the lead time, which is problem specific. For instance, the lead time can be associated with the period during which training data are available. The accuracy of the forecast may be expressed by calculating probability limits on either side of each forecast. These limits may be calculated for any convenient set of probabilities, for example 50% and 90%. They are such that the realized value of the time series, when it eventually occurs, will be included within these limits with the stated probability.
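Probability limits of this kind are commonly computed from the forecast error spread under a normality assumption. A minimal sketch for a naive "last value" forecast (illustrative only, not the paper's model; the 1.645 quantile corresponds to two-sided 90% limits for normal errors):

```python
import statistics

def forecast_with_limits(series, z90=1.645):
    """Naive one-step forecast: predict the last observed value, and
    derive 90% probability limits from the spread of past one-step
    changes, assuming roughly normal forecast errors."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    sigma = statistics.pstdev(diffs)       # std. dev. of one-step errors
    forecast = series[-1]
    return forecast - z90 * sigma, forecast, forecast + z90 * sigma

lo, f, hi = forecast_with_limits([10.0, 10.5, 9.8, 10.2, 10.1])
# the realized next value should fall inside (lo, hi) about 90% of the time
```

For an l-step lead time the error variance grows with l, so real models widen the limits accordingly.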

**Category:** Data Structures and Algorithms

[79] **viXra:1405.0352 [pdf]**
*submitted on 2014-05-29 05:46:55*

**Authors:** José Francisco García Juliá

**Comments:** 3 Pages.

Information hiding is not programming hiding. It is the hiding of changeable information inside programming modules.

**Category:** Data Structures and Algorithms

[78] **viXra:1405.0101 [pdf]**
*submitted on 2014-05-07 03:58:11*

**Authors:** Trilok Kumar Pathak, Prabha Singh, L.P.Purohit

**Comments:** 10 Pages.

ZnO thin films with a thickness of about 15 nm were prepared on (0001) substrates by pulsed laser deposition. X-ray photoelectron spectroscopy indicated that both the as-grown and the annealed ZnO thin films were oxygen rich. Hydrogen (H2) sensing measurements indicated that the conductivity type of both the unannealed and the annealed ZnO films converted from p-type to n-type as the operating temperature increased; however, the two films showed different conversion temperatures. The origin of the p-type conductivity in the unannealed and annealed ZnO films should be attributed to oxygen-related defects and zinc-vacancy-related defects, respectively. The conversion of the conductivity type was due to the annealing out of the correlated defects. Moreover, p-type ZnO films can work at lower temperatures than n-type ZnO films without obvious sensitivity loss.

**Category:** Data Structures and Algorithms

[77] **viXra:1405.0099 [pdf]**
*submitted on 2014-05-07 04:02:40*

**Authors:** Kumar Pardeep

**Comments:** 15 Pages.

In flow networks, it is assumed that a reliability model representing telecommunications networks is independent of topological information but depends on traffic path attributes such as delay, reliability and capacity. The performance of such networks, from a quality-of-service point of view, is measured by the flow capacity that can satisfy customer demand. To design such flow networks, an approach based on hierarchical importance indices is proposed for reliability redundancy optimization, using a composite performance measure integrating reliability and capacity. The method utilizes cardinality and other criteria based on hierarchical importance indices in selecting flow paths and backup paths to optimize them. The algorithm is reasonably efficient, due to reduced computation work, even for large telecommunication networks.

**Category:** Data Structures and Algorithms

[76] **viXra:1405.0057 [pdf]**
*submitted on 2014-05-06 23:30:29*

**Authors:** Rahul Sinha, A. Sonika

**Comments:** 10 Pages.

This paper presents the design considerations and simulation of an interpolator with an OSR of 128. The proposed structure uses half-band filters and a Comb/Sinc filter. Experimental results show that the proposed interpolator achieves the design specification and also has good noise-rejection capabilities. The interpolator accepts input at 44.1 kHz for applications such as CD and DVD audio, and the interpolation filter can be applied to a delta-sigma DAC. The related work was done with the MATLAB and XILINX ISE simulators. The maximum operating frequency achieved is 34.584 MHz.

**Category:** Data Structures and Algorithms

[75] **viXra:1405.0056 [pdf]**
*submitted on 2014-05-06 23:35:59*

**Authors:** Saman Kaedi, Ebrahim Farshidii

**Comments:** 15 Pages.

In this paper a discrete-time sigma-delta ADC with new assumptions in the optimization of the noise transfer function (NTF) is presented, which improves the SNR and accuracy of the ADC. The zeros and poles of the sigma-delta loop filter are optimized and placed by a genetic algorithm under the assumption of loop-filter stability, so that the final quantization noise density of the modulator decreases significantly. With the quantization noise density taken as the default objective of the optimization, and without the need for an additional circuit or filter, the noise folded into the pass band due to downsampling is minimized, so the SNR increases further. The circuit is designed and implemented using MATLAB. The simulation results of the sigma-delta ADC demonstrate that this methodology gives a 7 dB (equivalent to more than 1 bit) improvement in SNR.

**Category:** Data Structures and Algorithms

[74] **viXra:1405.0055 [pdf]**
*submitted on 2014-05-06 23:37:04*

**Authors:** Gopalkrishna Joshi, Narasimha H Ayachit, Kamakshi Prasad

**Comments:** 13 Pages.

The increasing complexity of processes and their distributed nature in enterprises is resulting in the generation of data that is both huge and complex, and data quality plays an important role because decision making in enterprises depends on this data. Data quality is a multidimensional concept; however, there is no commonly accepted set of dimensions and analysis of data quality in the literature. Further, not all the dimensions available in the literature are relevant in a particular information-system context, and not all of them enjoy the same importance in a given context. Practitioners in the field choose dimensions of data quality based on intuitive understanding, industrial experience or literature review; there is no rigorously defined mechanism for choosing appropriate dimensions for an information system under consideration in a particular context. In this paper, the authors propose a novel method of choosing appropriate dimensions of data quality for an information system that brings in the perspective of the data consumer. The method is based on the Analytic Hierarchy Process (AHP), popularly used in multi-criterion decision making, and is demonstrated in the context of distributed information systems.
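The AHP step such a method relies on, turning pairwise comparisons of candidate dimensions into priority weights, can be sketched with the standard geometric-mean approximation of the principal eigenvector (the example comparison matrix and dimension names are invented, not the authors'):

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority vector: geometric mean of each row of the
    pairwise comparison matrix, normalized to sum to 1."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# invented judgments: "accuracy" 3x as important as "timeliness" and
# 5x as important as "completeness"; "timeliness" 2x "completeness"
weights = ahp_weights([
    [1.0,     3.0, 5.0],
    [1.0/3.0, 1.0, 2.0],
    [1.0/5.0, 0.5, 1.0],
])
# weights come out in decreasing order of importance and sum to 1
```

A full AHP analysis would also check the consistency ratio of the comparison matrix before trusting the weights.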

**Category:** Data Structures and Algorithms

[73] **viXra:1405.0054 [pdf]**
*submitted on 2014-05-06 23:38:30*

**Authors:** Hamid Mohseni Pour, Ebrahim Farshidi

**Comments:** 10 Pages.

The adaptive noise cancellation (ANC) technique can remove thermal noise and shaped wideband quantization noise from the output of a sigma-delta modulator and improve the SNR and SFDR ratios. However, the ANC filter also passes, without any suppression, harmonics of the input signal caused by analog elements such as the operational amplifier of the integrator, and this issue limits the increase in SNR and SFDR of the analog-to-digital converter. This paper presents a technique that addresses this issue by adding an adaptive harmonic-canceller filter in front of the ANC filter, considerably improving the performance of the ADC. The simulation results demonstrate the effectiveness of this combined technique in a first-order sigma-delta converter.
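The ANC principle referenced here, an adaptive filter that learns the noise path from a reference input and subtracts its estimate, can be sketched with a single-tap LMS update (a textbook illustration, not the authors' circuit; the signal, noise gain and step size are invented):

```python
import math
import random

random.seed(1)

w, mu = 0.0, 0.05          # adaptive weight and LMS step size
gain = 0.8                 # unknown noise path the filter must learn
errors = []
for k in range(5000):
    s = 0.1 * math.sin(0.01 * k)      # weak desired signal
    n = random.uniform(-1.0, 1.0)     # reference noise input
    d = s + gain * n                  # primary input: signal + shaped noise
    y = w * n                         # filter's estimate of the noise
    e = d - y                         # canceller output (ideally just s)
    w += mu * e * n                   # LMS weight update
    errors.append(e)
# w converges near `gain`, so the output e approximates the clean signal s
```

The harmonic-canceller stage the abstract proposes would add further adaptive taps driven by harmonics of the input, so that they too are subtracted before the ANC stage.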

**Category:** Data Structures and Algorithms

[72] **viXra:1405.0051 [pdf]**
*submitted on 2014-05-07 01:24:58*

**Authors:** Pecimuthu Gopalasamy, Zulkefli Mansor

**Comments:** 12 Pages.

In many organizations, project management is no longer a separately identified function but is entrenched in the overall management of the business. The typical project management environment has become multi-project: most project decisions require consideration of the schedule, resource and cost implications for other project work, necessitating the review and evaluation of multi-project data. Without good standard project management practices, an organization will find it very hard to reach its targets. The research problem of this study is to assess how IT organizations are using standard project management practices. The research method employed was to first identify the best practices of project management, focusing on generally accepted standards and practices that are particularly effective in helping an organization achieve its objectives. This also requires the ability to manage projects in today’s complex, fast-changing organizations, whose people, processes and operating systems all work together in a collaborative, integrated fashion.

**Category:** Data Structures and Algorithms

[71] **viXra:1405.0050 [pdf]**
*submitted on 2014-05-07 01:31:32*

**Authors:** S.A.Quadri, Othman Sidek

**Comments:** 28 Pages.

The decreasing cost of sensors is resulting in an increase in the use of wireless sensor networks for structural health monitoring. In most applications, nodes are deployed once and are supposed to operate unattended for a long period of time. Due to the deployment of a large number of sensor nodes, it is not uncommon for sensor nodes to become faulty and unreliable. Faults may arise from hardware or software failure. Software failure causes non-deterministic behavior of the node, thus resulting in the acquisition of inaccurate data. Consequently, there exists a need to modify the system software and correct the faults in a wireless sensor node (WSN) network. Once the nodes are deployed, it is impractical at best to reach each individual node. Moreover, it is highly cumbersome to detach the sensor node and attach data transfer cables for software updates. Over-the-air programming is a fundamental service that serves this purpose. This paper discusses maintenance issues related to software for sensor nodes deployed for monitoring structural health and provides a comparison of various protocols developed for reprogramming.

**Category:** Data Structures and Algorithms

[70] **viXra:1405.0049 [pdf]**
*submitted on 2014-05-07 01:32:37*

**Authors:** Mohammed Ali Hussain

**Comments:** 9 Pages.

Electronic commerce (e-commerce) refers to the buying and selling of goods and services via electronic channels, primarily the Internet. The applications of e-commerce include online book stores, e-banking, online ticket reservation (railway, airway, movie, etc.), buying and selling goods, online funds transfer and so on. During e-commerce transactions, confidential information is stored in databases as well as communicated through network channels, so security is the main concern in e-commerce. E-commerce applications are vulnerable to various security threats, which results in the loss of consumer confidence, so we need security tools to counter such threats. This paper presents an overview of security threats to e-commerce applications and the technologies to counter them.

**Category:** Data Structures and Algorithms

[69] **viXra:1405.0048 [pdf]**
*submitted on 2014-05-07 01:34:19*

**Authors:** A.saisudheer, V. Murali Praveen, S.jhansi Lakshmi

**Comments:** 6 Pages.

In this paper, a low-power pulse-triggered flip-flop (FF) is designed, and a simple two-transistor AND gate is designed to reduce the circuit complexity. Second, a conditional pulse-enhancement technique is devised to speed up the discharge along the critical path only when needed. As a result, transistor sizes in the delay inverter and pulse-generation circuit can be reduced for power saving. Various post-layout simulation results based on UMC 50-nm CMOS technology reveal that the proposed design features the best power-delay-product performance among several FF designs under comparison. Its maximum power saving against rival designs is up to 18.2%, and the average leakage power consumption is also reduced by a factor of 1.52.

**Category:** Data Structures and Algorithms

[68] **viXra:1405.0047 [pdf]**
*submitted on 2014-05-07 01:35:18*

**Authors:** V.Sankaraiah, V.Murali Praveen

**Comments:** 6 Pages.

As technology scaling drives the number of processors upward, current on-chip routers consume substantial portions of chip area, performance, cost and power budgets. Recent work proposes to apply a well-known routing technique, bufferless deflection routing, which eliminates buffers, and hence buffer power (static and dynamic), at the cost of some misrouting, or deflection. While bufferless NoC designs have shown promising area and power reductions and offer performance similar to conventional buffered designs for many workloads, such designs provide lower throughput, unnecessary network hops and wasted power at high network loads.
To address this issue we propose an innovative NoC router design called the Single Side Buffered Deflection (SSBD) router. Compared to previous bufferless deflection routers, SSBD contributes (i) a router microarchitecture with a double-width ejection path and enhanced arbitration with in-router prioritization, and (ii) small side buffers to hold some traffic that would otherwise have been deflected.

**Category:** Data Structures and Algorithms

[67] **viXra:1405.0046 [pdf]**
*submitted on 2014-05-07 01:36:50*

**Authors:** Vinay Kumar, Abhishek Bansal

**Comments:** 9 Pages.

The development level of a society is a measure of how efficiently the society is harnessing the benefits of the various developmental and welfare programs initiated by the government of the day. Tribals in India have been deprived of opportunities because of many factors, one of the most important being the unavailability of suitable infrastructure for development plans to reach them. It is widely acknowledged that Information and Communication Technologies (ICTs) have the potential to play a vital role in social development. Several projects have attempted to adopt these technologies to improve reach and enhance the coverage base by minimizing processing costs and shortening the traditional cycles of output deliverables. ICTs can be used to strengthen and develop the information systems of development plans exclusively for tribals, thereby improving effective monitoring of implementation. The paper attempts to highlight the effectiveness of ICT in improving the livelihood of tribals in India.

**Category:** Data Structures and Algorithms

[66] **viXra:1405.0045 [pdf]**
*submitted on 2014-05-07 01:37:39*

**Authors:** T. Rupalatha, G. Rajesh, K. Nandakumar

**Comments:** 7 Pages.

Edge detection is one of the basic operations carried out in image processing and object identification. In this paper, we present a distributed Canny edge detection algorithm that results in significantly reduced memory requirements, decreased latency and increased throughput with no loss in edge detection performance as compared to the original Canny algorithm. The new algorithm uses a low-complexity 8-bin non-uniform gradient magnitude histogram to compute block-based hysteresis thresholds that are used by the Canny edge detector. Furthermore, an FPGA-based hardware architecture of the proposed algorithm is presented and synthesized on the Xilinx Spartan-3E FPGA. Simulation results illustrate the performance of the proposed distributed Canny edge detector; the FPGA simulation results show that a 512×512 image can be processed in 0.28 ms at a clock rate of 100 MHz.
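
As a rough illustration of the block-based threshold idea described above, the following Python sketch builds an 8-bin non-uniform gradient-magnitude histogram for one block and derives hysteresis thresholds from it. The gradient operator, the quadratic bin edges, the percentile and the low/high ratio are all illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def block_hysteresis_thresholds(block, pct_non_edge=0.8):
    # simple gradient stage (a stand-in for the paper's gradient computation)
    gy, gx = np.gradient(block.astype(float))
    mag = np.hypot(gx, gy)
    gmax = mag.max() if mag.max() > 0 else 1.0
    # 8-bin non-uniform histogram: finer bins at low magnitudes,
    # where most non-edge pixels fall (bin edges are illustrative)
    edges = gmax * np.array([0, .125, .25, .375, .5, .625, .75, .875, 1.0]) ** 2
    hist, _ = np.histogram(mag, bins=edges)
    # high threshold: smallest bin edge with >= pct_non_edge of pixels below it
    cum = np.cumsum(hist) / mag.size
    k = int(np.searchsorted(cum, pct_non_edge))
    high = edges[min(k + 1, len(edges) - 1)]
    low = 0.4 * high  # common Canny low/high ratio; the paper's may differ
    return low, high
```

Each block then runs ordinary Canny hysteresis with its own `(low, high)` pair, which is what makes the per-block distribution of work possible.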

**Category:** Data Structures and Algorithms

[65] **viXra:1405.0044 [pdf]**
*submitted on 2014-05-07 01:38:30*

**Authors:** N. Nallammal, V. Radha

**Comments:** 11 Pages.

Face recognition is one of the most frequently used biometrics in both commercial and law enforcement applications. What distinguishes facial recognition from other biometric techniques is that it can be used for surveillance purposes, as in searching for wanted criminals, suspected terrorists and missing children. The steps in face recognition are preprocessing (image enhancement), feature extraction and finally recognition. This paper identifies techniques in each step of the recognition process that improve the overall performance of face recognition. The proposed face recognition model combines an enhanced 2DPCA algorithm, LDA and ICA with wavelet packets and curvelets, and experimental results prove that the combination of these techniques increases the efficiency of the recognition process and improves on existing systems.

**Category:** Data Structures and Algorithms

[64] **viXra:1405.0043 [pdf]**
*submitted on 2014-05-07 01:39:33*

**Authors:** Ch. Pallavi, V. Swathi

**Comments:** 7 Pages.

The design of area-, speed- and power-efficient data-path logic systems forms one of the largest areas of research in VLSI system design. In digital adders, the speed of addition is limited by the time required to propagate a carry through the adder. The Carry Select Adder (CSLA) is one of the fastest adders used in many data-processing processors to perform fast arithmetic functions. From the structure of the CSLA, it is clear that there is scope for reducing its area and delay. This work uses a simple and efficient gate-level modification (in the regular structure) which drastically reduces the area and delay of the CSLA. Based on this modification, 8-, 16-, 32- and 64-bit square-root Carry Select Adder (SQRT CSLA) architectures have been developed and compared with the regular SQRT CSLA architecture. The proposed design reduces area and delay to a great extent when compared with the regular SQRT CSLA. The performance of the proposed designs is estimated against the regular designs in terms of delay and area, with synthesis and implementation on a Xilinx FPGA. The analysis shows that the proposed SQRT CSLA structure is better than the regular SQRT CSLA.
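
The carry-select principle behind the CSLA can be illustrated with a small bit-level simulation: each group computes its sum twice, once for each possible incoming carry, and a multiplexer picks the right result when the actual carry arrives. The 16-bit width and the group sizes below are illustrative of the square-root grouping; the paper's gate-level modification itself is not modeled:

```python
def ripple_add(a_bits, b_bits, cin):
    # full-adder chain over little-endian bit lists
    out, c = [], cin
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ c)
        c = (a & b) | (c & (a ^ b))
    return out, c

def csla_add(a, b, width=16, groups=(2, 2, 3, 4, 5)):
    # square-root CSLA grouping: block sizes grow so that the mux chain
    # delay balances the in-block ripple delay (sizes are illustrative)
    ab = [(a >> i) & 1 for i in range(width)]
    bb = [(b >> i) & 1 for i in range(width)]
    out, carry, pos = [], 0, 0
    for g in groups:
        sa, sb = ab[pos:pos + g], bb[pos:pos + g]
        s0, c0 = ripple_add(sa, sb, 0)   # precompute for carry-in = 0
        s1, c1 = ripple_add(sa, sb, 1)   # precompute for carry-in = 1
        s, carry = (s1, c1) if carry else (s0, c0)  # carry-select mux
        out += s
        pos += g
    return sum(bit << i for i, bit in enumerate(out)) + (carry << width)
```

The speed-up in hardware comes from all groups rippling in parallel; only the short mux chain waits on the carries.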

**Category:** Data Structures and Algorithms

[63] **viXra:1405.0042 [pdf]**
*submitted on 2014-05-07 01:40:47*

**Authors:** Megha Sharma, Rashmi Kuamri

**Comments:** 12 Pages.

Image compression is a growing research area with real-world applications, spreading day by day with the explosive growth of image transmission and storage. This paper presents an algorithm for gray-scale image compression using a self-organizing map (SOM) and the discrete wavelet transform (DWT). The SOM network is trained with input patterns in the form of vectors, yielding code vectors (the weight matrix) and index values as output. The discrete wavelet transform is applied to the code vectors, and only the approximation coefficients (LL) and the index values obtained from the SOM are stored. The results show a better compression ratio as well as a better peak signal-to-noise ratio (PSNR) in comparison with existing techniques.
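
A minimal sketch of the pipeline described above, assuming a 1-D SOM, 4-element input vectors and a one-level Haar transform; these are all illustrative choices, not the paper's exact configuration:

```python
import numpy as np

def train_som(vectors, n_codes=8, epochs=20, lr=0.5, rng=None):
    # 1-D self-organizing map: code vectors compete for input patterns,
    # winners and their neighbors move toward the input
    rng = rng or np.random.default_rng(0)
    codes = rng.standard_normal((n_codes, vectors.shape[1]))
    for t in range(epochs):
        sigma = max(n_codes / 2 * (1 - t / epochs), 0.5)  # shrinking neighborhood
        for v in vectors:
            w = np.argmin(np.linalg.norm(codes - v, axis=1))      # winner index
            h = np.exp(-((np.arange(n_codes) - w) ** 2) / (2 * sigma ** 2))
            codes += (lr * (1 - t / epochs)) * h[:, None] * (v - codes)
    return codes

def haar_approx(codes):
    # one-level Haar DWT along each code vector; keep only the
    # approximation (LL-like) half, halving the stored codebook
    return (codes[:, 0::2] + codes[:, 1::2]) / np.sqrt(2)

def compress(vectors, codes):
    # store per-vector winner indices plus the reduced codebook
    idx = np.array([np.argmin(np.linalg.norm(codes - v, axis=1)) for v in vectors])
    return idx, haar_approx(codes)
```

The compressed representation is just the index stream plus the wavelet-reduced codebook, which is where the compression-ratio gain over plain vector quantization comes from.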

**Category:** Data Structures and Algorithms

[62] **viXra:1405.0041 [pdf]**
*submitted on 2014-05-07 01:42:04*

**Authors:** J. V. Shiral, J. S. Zade, K. R. Bhakare, N. Gandhewar

**Comments:** 15 Pages.

A wireless sensor network consists of a group of sensors, or nodes, that are linked by a wireless medium to perform distributed sensing tasks. The sensors are assumed to have a fixed communication range and a fixed sensing range, which can vary significantly depending on the type of sensing performed. The duty cycle is the ratio of active time, i.e., the time during which a particular set of nodes is active, to the whole scheduling time. With duty cycling, each node alternates between active and sleeping states, leaving its radio powered off most of the time and turning it on only periodically for short periods. In this paper, an ADB protocol is used to manage and control duty cycles as well as to regulate and monitor ongoing traffic among the nodes using adaptive scheduling. Thus congestion and delay can be controlled, and the efficiency and performance of the overall network can be improved.

**Category:** Data Structures and Algorithms

[61] **viXra:1405.0040 [pdf]**
*submitted on 2014-05-07 01:43:45*

**Authors:** A. Saisudheer

**Comments:** 12 Pages.

The finite impulse response (FIR) digital filter is widely used in signal processing and image processing applications. Distributed arithmetic (DA)-based computation is popular for its potential for efficient memory-based implementation of FIR filters, where the filter outputs are computed as inner products of input-sample vectors and the filter-coefficient vector. In this paper, however, we show that the look-up-table (LUT)-multiplier-based approach, where the memory elements store all the possible values of products of the filter coefficients, can be an area-efficient alternative to DA-based design of FIR filters with the same implementation throughput.
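
The LUT-multiplier idea can be sketched as follows: for every coefficient, a table stores all possible products with an input sample, so each filter tap becomes a memory read instead of a multiply. The 8-bit unsigned sample width below is an assumption for illustration:

```python
def build_luts(coeffs, sample_bits=8):
    # one table per coefficient: all possible products h[k] * x
    levels = 1 << sample_bits
    return [[h * x for x in range(levels)] for h in coeffs]

def fir_lut(samples, luts):
    # multiplier-free inner product: table lookups replace multiplies
    taps = len(luts)
    out = []
    for n in range(taps - 1, len(samples)):
        acc = 0
        for k in range(taps):
            acc += luts[k][samples[n - k]]   # product via memory read
        out.append(acc)
    return out
```

For example, `fir_lut([1, 2, 3, 4], build_luts([1, 2, 3]))` computes the same valid convolution outputs as a conventional multiply-accumulate FIR, here `[10, 16]`.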

**Category:** Data Structures and Algorithms

[60] **viXra:1405.0039 [pdf]**
*submitted on 2014-05-07 01:44:44*

**Authors:** Lakshmi Pujitha Dachuri

**Comments:** 16 Pages.

In many applications, retransmission of lost packets is not permitted. OFDM is a multi-carrier modulation scheme with excellent performance that allows overlapping in the frequency domain, and it handles multipath with relatively simple DSP algorithms. In this paper, an image frame is compressed using the DWT, and the compressed data is arranged in data vectors, each with an equal number of coefficients. These vectors are quantized and binary coded to obtain the bit streams, which are then packetized and intelligently mapped to the OFDM system. Based on one-bit channel state information at the transmitter, the descriptions, in order of descending priority, are assigned to the currently good channels, so that poorer sub-channels can only affect the less important data vectors. We consider only one-bit channel state information available at the transmitter, indicating only whether each sub-channel is good or bad: for a good sub-channel, the instantaneous received power should be greater than a threshold Pth; otherwise, the sub-channel is in a fading state and considered bad for that batch of coefficients. To reduce system power consumption, the descriptions mapped onto bad sub-channels are dropped at the transmitter. The binary channel state information thus gives an opportunity to map the bit streams intelligently and to save a reasonable amount of power. Using MATLAB simulation we analyze the performance of the proposed scheme in terms of system energy saving without compromising the received quality, measured by peak signal-to-noise ratio.
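
In skeleton form, the mapping step described above assigns descriptions in priority order to whichever sub-channels are currently reported good, and drops the remainder to save power. A toy sketch (the data structures are illustrative, not the paper's):

```python
def map_descriptions(descriptions, good_subchannels):
    # descriptions: data vectors in descending priority order
    # good_subchannels: indices whose instantaneous power exceeded Pth
    mapping = {}
    for desc, ch in zip(descriptions, good_subchannels):
        mapping[ch] = desc            # most important data on good channels
    dropped = descriptions[len(good_subchannels):]  # rest dropped at the transmitter
    return mapping, dropped
```

With only one feedback bit per sub-channel, this is all the transmitter can do, which is why the scheme's power saving comes from dropping rather than from adaptive modulation.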

**Category:** Data Structures and Algorithms

[59] **viXra:1405.0038 [pdf]**
*submitted on 2014-05-07 01:53:06*

**Authors:** A. Saisudheer

**Comments:** 9 Pages.

Object tracking is an important task in computer vision applications. One of the crucial challenges is the real time speed requirement. In this paper we implement an object tracking system in reconfigurable hardware using an efficient parallel architecture. In our implementation, we adopt a background subtraction based algorithm. The designed object tracker exploits hardware parallelism to achieve high system speed. We also propose a dual object region search technique to further boost the performance of our system under complex tracking conditions. For our hardware implementation we use the Altera Stratix III EP3SL340H1152C2 FPGA device. We compare the proposed FPGA-based implementation with the software implementation running on a 2.2 GHz processor. The observed speedup can reach more than 100X for complex video inputs.
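
A much-simplified software sketch of background-subtraction tracking as described above; the paper's FPGA architecture and dual-region search are not modeled, and the running-average background model and thresholds are illustrative:

```python
import numpy as np

def track(frames, alpha=0.05, thresh=30):
    # running-average background model with per-frame bounding-box output
    background = frames[0].astype(float)
    boxes = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(float) - background)
        mask = diff > thresh                      # foreground pixels
        ys, xs = np.nonzero(mask)
        if ys.size:
            boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))
        else:
            boxes.append(None)
        background = (1 - alpha) * background + alpha * frame  # slow update
    return boxes
```

In the hardware version, the per-pixel subtraction, thresholding and box accumulation are exactly the operations that parallelize well, which is where the reported 100X speedup over software comes from.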

**Category:** Data Structures and Algorithms

[58] **viXra:1405.0037 [pdf]**
*submitted on 2014-05-07 01:55:43*

**Authors:** A. Saisudheer

**Comments:** 4 Pages.

Nowadays, online banking security mechanisms focus on safe authentication, but these mechanisms are rendered useless if we are unable to ensure the integrity of the transactions made. Of late a new threat has emerged, known as the Man-in-the-Browser (MITB) attack; it is capable of modifying a transaction in real time without the user's notice, after the user has successfully logged in using safe authentication mechanisms. In this paper we analyze the Man-in-the-Browser attack and propose a solution based upon digitally signing a transaction and using mobile phones as a software token for digital-signature code generation. Two-factor authentication solutions such as smartcards, hardware tokens, one-time passwords or PKI have long been considered sufficient protection against identity-theft techniques. However, since the MITB attack piggybacks on authenticated sessions rather than trying to steal or impersonate an identity, most authentication technologies are incapable of preventing its success. We take a brief look at how the MITB attack takes place and how it is capable of modifying an online transaction. A digital signature is known to ensure the authenticity and integrity of a transaction, and since mobile phones have become a daily part of our lives, the mobile phone can serve as the software token that generates the digital-signature code.

**Category:** Data Structures and Algorithms

[57] **viXra:1405.0036 [pdf]**
*submitted on 2014-05-07 01:56:28*

**Authors:** A. Saisudheer

**Comments:** 7 Pages.

Heightened concerns about the treatment of individuals during interviews and interrogations have stimulated efforts to develop "non-intrusive" technologies for rapidly assessing the credibility of statements by individuals in a variety of sensitive environments. Methods or processes that have the potential to precisely focus investigative resources will advance operational excellence and improve investigative capabilities. Facial expressions have the ability to communicate emotion and regulate interpersonal behavior. Over the past 30 years, scientists have developed human-observer based methods that can be used to classify and correlate facial expressions with human emotion. However, these methods have proven to be labor intensive, qualitative, and difficult to standardize. The Facial Action Coding System (FACS) developed by Paul Ekman and Wallace V. Friesen is the most widely used and validated method for measuring and describing facial behaviors. The Automated Facial Expression Recognition System (AFERS) automates the manual practice of FACS, leveraging the research and technology behind the CMU/PITT Automated Facial Image Analysis System (AFA) system developed by Dr. Jeffery Cohn and his colleagues at the Robotics Institute of Carnegie Mellon University. This portable, near real-time system will detect the seven universal expressions of emotion providing investigators with indicators of the presence of deception during the interview process. In addition, the system will include features such as full video support, snapshot generation, and case management utilities, enabling users to re-evaluate interviews in detail at a later date.

**Category:** Data Structures and Algorithms

[56] **viXra:1405.0035 [pdf]**
*submitted on 2014-05-07 01:58:08*

**Authors:** R. Obula Konda Reddy, B. Eswara Reddy, E. Keshava Reddy

**Comments:** 16 Pages.

Texture is one of the basic features in visual search and computational vision, and a general property of any surface exhibiting ambiguity. This paper presents a novel texture classification system with high tolerance to illumination variation. A Gray Level Co-occurrence Matrix (GLCM) and binary-pattern-based automated similarity identification and defect detection model is presented. Different features are calculated from both the GLCM and binary patterns (LBP, LLBP and SLBP). Then a new rotation-invariant, scale-invariant steerable decomposition filter is applied to filter the four orientation sub-bands of the image. The experimental results are evaluated and a comparative analysis has been performed for the four different feature types. Finally, the texture is classified by different classifiers (PNN, KNN and SVM) and the classification performance of each classifier is compared. The experimental results show that the proposed method produces higher accuracy and a better classification rate than other methods.

**Category:** Data Structures and Algorithms

[55] **viXra:1404.0081 [pdf]**
*submitted on 2014-04-10 23:52:54*

**Authors:** Hsien-Pu Chen, Laszlo B. Kish, Claes-Goran Granqvist, Gabor Schmera

**Comments:** 4 Pages. first draft

Recently Gunn, Allison and Abbott (GAA) [http://arxiv.org/pdf/1402.2709v2.pdf] proposed a new scheme to utilize electromagnetic waves for eavesdropping on the Kirchhoff-law–Johnson-noise (KLJN) secure key distribution. In a former paper [http://vixra.org/pdf/1403.0964v4.pdf], we proved that GAA's wave-based attack is unphysical. Here we address their experimental results regarding this attack. Our analysis shows that GAA virtually claim they can identify, within a few correlation times, which of two zero-mean Gaussian distributions is wider when their relative width difference is < 10^-4; normally, such a decision would require millions of correlation times of observation. We identify the experimental artifact causing this situation: an existing DC current and/or ground loop (yielding slow deterministic currents) in the system. It is important to note that, while GAA's cracking scheme, experiments and analysis are invalid, there is an important benefit of their attempt: our analysis implies that, in practical KLJN systems, DC currents, ground loops or any other mechanisms carrying a deterministic current/voltage component must be taken care of to avoid information leakage about the key.

**Category:** Data Structures and Algorithms

[54] **viXra:1404.0069 [pdf]**
*submitted on 2014-04-09 06:17:49*

**Authors:** D.V. Lande

**Comments:** 5 Pages. Russian language

A technique for building networks of term hierarchies based on the analysis of chosen text corpora is offered. The technique is based on the methodology of horizontal visibility graphs. A language network formed on the basis of arXiv electronic preprints on information retrieval topics is constructed and investigated.

**Category:** Data Structures and Algorithms

[53] **viXra:1404.0054 [pdf]**
*submitted on 2014-04-07 14:21:35*

**Authors:** Y.Saez, X. Cao, L.B. Kish, G. Pesti

**Comments:** 13 Pages. Paper submitted for publication

We review the security requirements for a vehicle communication network. We also provide a critical assessment of the security communication architectures and perform an analysis of the keys to design an efficient and secure vehicular network. We propose a novel unconditionally secure vehicular communication architecture that utilizes the Kirchhoff-Law-Johnson-Noise (KLJN) key distribution scheme.

**Category:** Data Structures and Algorithms

[52] **viXra:1403.0957 [pdf]**
*submitted on 2014-03-28 08:51:39*

**Authors:** A. A. Salama

**Comments:** 7 Pages.

A mobile ad hoc network (MANET) is a special kind of wireless network: a collection of mobile nodes without the aid of established infrastructure. A mobile ad hoc network is much more vulnerable to attacks than a wired network due to its limited physical security. Securing temporal networks like MANETs has received a great amount of attention recently, though a perfectly secured scheme has not yet been accomplished. MANETs have other features and characteristics that together make them a difficult environment to secure. The bandwidth of a MANET is another challenge, because it is undesirable to consume bandwidth on security mechanisms rather than data traffic. This paper proposes a security scheme based on Public Key Infrastructure (PKI) for distributing session keys between nodes, with the length of those keys decided using intuitionistic fuzzy logic. The proposed security-model algorithm is an adaptive intuitionistic-fuzzy-logic-based algorithm that can adapt itself to the dynamic conditions of mobile hosts. Finally, experimental results show that using intuitionistic-fuzzy-based security can enhance the security of MANETs.

**Category:** Data Structures and Algorithms

[51] **viXra:1403.0956 [pdf]**
*submitted on 2014-03-28 09:16:01*

**Authors:** A. A. Salama

**Comments:** 13 Pages.

The fundamental concepts of the neutrosophic set were introduced by Smarandache in [9, 10] and Salama et al. in [4, 6, 7, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. In this paper we introduce and study new types of neutrosophic concepts: cut levels, normal neutrosophic sets and convex neutrosophic sets. In addition, we begin with a definition of the neutrosophic relation, then define the various operations and study their main properties. Some types of neutrosophic relations are given. Finally, we introduce and study the neutrosophic database (NDB for short), and some neutrosophic queries to a neutrosophic database are given.

**Category:** Data Structures and Algorithms

[50] **viXra:1403.0940 [pdf]**
*submitted on 2014-03-26 11:19:44*

**Authors:** Kailash Ch. Dash, Umakant Mishra

**Comments:** 13 Pages.

Although Information Systems and Information Technology (IS & IT) have become a major driving force for many present-day organizations, NGOs have not been able to utilize their benefits to a satisfactory level. Most organizations use standard office tools to manage huge amounts of field data and never feel the need for a central repository of data. While many people argue that an NGO should not spend too much money on information management, organizing information in fact requires more of a mindset and organized behavior than a huge financial investment.

**Category:** Data Structures and Algorithms

[73] **viXra:1702.0156 [pdf]**
*replaced on 2017-02-20 14:49:06*

**Authors:** Stephen P Smith

**Comments:** 12 Pages.

This paper describes the backward differentiation of the Cholesky decomposition by the bordering method. The backward differentiation of the Cholesky decomposition by the inner product form and the outer product form have been described elsewhere. It is found that the resulting algorithm can be adapted to vector processing, as is also true of the algorithms developed from the inner product form and outer product form. The three approaches can also be fashioned to treat sparse matrices, but this is done by enforcing the same sparse structure found for the Cholesky decomposition on a secondary work space.
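
For orientation, the bordering method computes the Cholesky factor one row at a time: each new row is obtained by solving a triangular system against the factor built so far, followed by a scalar square root. The sketch below shows the factorization itself; the paper's actual subject, its backward differentiation, is not modeled here:

```python
import numpy as np

def cholesky_bordering(A):
    # builds L one bordered row at a time: given the k-by-k factor,
    # the new row solves a triangular system, then a scalar sqrt
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for k in range(n):
        # forward-substitute L[:k,:k] @ r = A[k,:k] for the new row r
        for j in range(k):
            L[k, j] = (A[k, j] - L[k, :j] @ L[j, :j]) / L[j, j]
        L[k, k] = np.sqrt(A[k, k] - L[k, :k] @ L[k, :k])
    return L
```

The inner products over `L[k, :j]` are the vectorizable operations the paper refers to, and the sparse variant constrains each row to the sparsity pattern of the factor.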

**Category:** Data Structures and Algorithms

[72] **viXra:1701.0668 [pdf]**
*replaced on 2017-01-30 10:23:21*

**Authors:** Ameet Sharma

**Comments:** 11 Pages.

We propose developing an XML-based system to enhance scientific papers and articles, whereby the premises of arguments are made explicit in XML tags. These tags provide links between papers that exhibit deductive knowledge dependencies more clearly, and allow us to construct deductive networks, which are visual representations of deductive knowledge dependencies. A deductive network (DN) is a kind of Bayesian network, but without probabilities.

**Category:** Data Structures and Algorithms

[71] **viXra:1608.0250 [pdf]**
*replaced on 2016-12-03 19:20:21*

**Authors:** Atul Mehta

**Comments:** 5 Pages.

In this paper, we explore the connections between graphs and Turing machines. A method to construct Turing machines from a general undirected graph is provided. Determining whether a Hamiltonian cycle exists is then shown to be equivalent to solving the halting problem. A modified version of the Turing machine is developed to solve certain classes of computational problems.

**Category:** Data Structures and Algorithms

[70] **viXra:1608.0098 [pdf]**
*replaced on 2016-08-11 23:42:15*

**Authors:** Leorge Takeuchi

**Comments:** 16 Pages. Two link addresses (URI) were wrong.

Quicksort, invented by Tony Hoare in 1959, is one of the fastest sorting algorithms. However, conventional implementations have some weak points, including the following: swaps to exchange two elements are redundant, deep recursive calls may encounter stack overflow, and input data with many repeated elements is a well-known problem case. This paper improves quicksort to make it more secure and faster, using new and known ideas, in the C language.

**Category:** Data Structures and Algorithms

[69] **viXra:1607.0059 [pdf]**
*replaced on 2016-07-06 11:01:24*

**Authors:** Brian Beckman

**Comments:** 14 Pages. Minor corrections to original version

In Kalman Folding, Part 1, we present basic, static Kalman filtering as a functional fold, highlighting the unique advantages of this form for deploying test-hardened code verbatim in harsh, mission-critical environments. The examples in that paper are all static, meaning that the states of the model do not depend on the independent variable, often physical time. Here, we present mathematical derivations of the basic, static filter. These are semi-formal sketches that leave many details to the reader, but highlight all important points that must be rigorously proved. These derivations have several novel arguments and we strive for much higher clarity and simplicity than is found in most treatments of the topic.
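
The "filter as a fold" formulation can be sketched in a few lines of Python: the accumulator is the pair (estimate, covariance), and each observation updates it through the standard static-filter equations. The 1-by-1 matrix shapes in the test scenario are illustrative, not taken from the paper:

```python
import numpy as np
from functools import reduce

def kalman_step(state, obs):
    # one accumulator update: fold((x, P), (A, z, Z)) -> (x', P')
    x, P = state
    A, z, Z = obs                       # observation matrix, value, noise covariance
    D = Z + A @ P @ A.T                 # innovation covariance
    K = P @ A.T @ np.linalg.inv(D)      # Kalman gain
    x = x + K @ (z - A @ x)             # correct the estimate
    P = P - K @ A @ P                   # shrink the covariance
    return x, P

def kalman_fold(x0, P0, observations):
    # the whole static filter is literally a fold over the observation stream
    return reduce(kalman_step, observations, (x0, P0))
```

Because `kalman_step` is a pure function of its accumulator and one observation, the same code runs unchanged over a list, a lazy stream, or an embedded event loop, which is the deployment advantage the paper emphasizes.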

**Category:** Data Structures and Algorithms

[68] **viXra:1606.0348 [pdf]**
*replaced on 2016-07-06 16:57:41*

**Authors:** Brian Beckman

**Comments:** 7 Pages.

In Kalman Folding, Part 1, we present basic, static Kalman filtering as a functional fold, highlighting the unique advantages of this form for deploying test-hardened code verbatim in harsh, mission-critical environments. The examples in that paper are all static, meaning that the states of the model do not depend on the independent variable, often physical time. Here, we present a dynamic Kalman filter in the same, functional form. This filter can handle many dynamic, time-evolving applications including some tracking and navigation problems, and is easily extended to nonlinear and non-Gaussian forms, the Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) respectively. Those are subjects of other papers in this Kalman-folding series. Here, we reproduce a tracking example from a well known reference, but in functional form, highlighting the advantages of that form.

**Category:** Data Structures and Algorithms

[67] **viXra:1604.0366 [pdf]**
*replaced on 2016-05-15 21:49:27*

**Authors:** Mai Ben-Adar Bessos, Simon Birnbach, Amir Herzberg, Ivan Martinovic

**Comments:** 6 Pages. Technical report of the original paper "E-bots vs. P-bots: Cooperative Eavesdropping in (Partial) Silence"

We study the trade-off between the benefits obtained by communication, vs. the exposure of the location of the transmitter.

**Category:** Data Structures and Algorithms

[66] **viXra:1604.0366 [pdf]**
*replaced on 2016-05-09 19:41:31*

**Authors:** Mai Ben-Adar Bessos, Simon Birnbach, Amir Herzberg, Ivan Martinovic

**Comments:** 3 Pages. Technical report of the original paper "E-bots vs. P-bots: Cooperative Eavesdropping in (Partial) Silence"

We study the trade-off between the benefits obtained by communication, vs. the exposure of the location of the transmitter.

**Category:** Data Structures and Algorithms

[65] **viXra:1603.0107 [pdf]**
*replaced on 2016-03-15 08:00:02*

**Authors:** Valdir Monteiro dos Santos Godoi

**Comments:** 8 Pages. In english and portuguese. Published in Transactions on Mathematics (TM) Vol. 3, No. 1, January 2017, pp. 34-37

An original proof of P is not equal to NP.

**Category:** Data Structures and Algorithms

[64] **viXra:1603.0107 [pdf]**
*replaced on 2016-03-09 10:07:13*

**Authors:** Valdir Monteiro dos Santos Godoi

**Comments:** 6 Pages. An original proof of P <> NP. In english and portuguese.

An original proof of P is not equal to NP.

**Category:** Data Structures and Algorithms

[63] **viXra:1603.0107 [pdf]**
*replaced on 2016-03-08 06:32:39*

**Authors:** Valdir Monteiro dos Santos Godoi

**Comments:** 6 Pages. An original proof of P is not equal to NP.

An original proof of P is not equal to NP.

**Category:** Data Structures and Algorithms

[62] **viXra:1603.0070 [pdf]**
*replaced on 2016-03-05 21:50:57*

**Authors:** Valdir Monteiro dos Santos Godoi

**Comments:** 10 Pages.

We prove that P ≠ NP by exhibiting two problems that are executed in polynomial time by a non-deterministic algorithm, but in time exponential in the input size by a deterministic algorithm. The algorithms are essentially simple, so that no further significant reduction of their complexity, which could invalidate the proofs presented here, is to be expected.

**Category:** Data Structures and Algorithms

[61] **viXra:1603.0069 [pdf]**
*replaced on 2016-03-07 10:35:28*

**Authors:** Valdir Monteiro dos Santos Godoi

**Comments:** 11 Pages.

**Category:** Data Structures and Algorithms

[60] **viXra:1603.0069 [pdf]**
*replaced on 2016-03-05 23:49:34*

**Authors:** Valdir Monteiro dos Santos Godoi

**Comments:** 11 Pages.

**Category:** Data Structures and Algorithms

[59] **viXra:1511.0207 [pdf]**
*replaced on 2015-12-03 16:02:24*

**Authors:** Andrew Nassif

**Comments:** 8 Pages.

Computer Engineering requires you to know a vast array of programming languages, as well as how to utilize different technologies to design hardware or manage databases. It can often be identified as the cross between Information Technology and Electrical Engineering. What I learned is that you don't have to know only C and C++; you will be required to learn more, especially when working on the hardware, software and database sides. Software you need to be familiar with includes Visual Studio and sometimes open-source technologies. All in all, I learned a great deal from the people I talked to. I learned that Computer Engineering and related fields have an impact on technological advancement, as well as on making the world an easier place to live. I learned the overall power of different subjects in the field, such as the UML language, blockchain technology, JavaScript, Python and Linux; some of these I present throughout this paper. The purpose of this paper is to inform the average user about the field, what computer engineers do, and the powerful research and impact of the field. By the end of this paper, I hope you have a beginner's understanding of the implications of this widely known field.

**Category:** Data Structures and Algorithms

[58] **viXra:1511.0207 [pdf]**
*replaced on 2015-11-30 12:35:23*

**Authors:** Andrew Nassif

**Comments:** 6 Pages.

Computer Engineering requires you to know a vast array of programming languages, as well as how to utilize different technologies to design hardware or manage databases. It can often be identified as the cross between Information Technology and Electrical Engineering. What I learned is that you don't have to know only C and C++; you will be required to learn more, especially when working on the hardware, software and database sides. Software you need to be familiar with includes Visual Studio and sometimes open-source technologies. All in all, I learned a great deal from the people I talked to. I learned that Computer Engineering and related fields have an impact on technological advancement, as well as on making the world an easier place to live. I learned the overall power of different subjects in the field, such as the UML language, blockchain technology, JavaScript, Python and Linux; some of these I present throughout this paper. The purpose of this paper is to inform the average user about the field, what computer engineers do, and the powerful research and impact of the field. By the end of this paper, I hope you have a beginner's understanding of the implications of this widely known field.

**Category:** Data Structures and Algorithms

[57] **viXra:1510.0473 [pdf]**
*replaced on 2016-06-15 06:17:06*

**Authors:** Kurt Mehlhorn, Sanjeev Saxena

**Comments:** 18 Pages.

Linear programming is now included in undergraduate and postgraduate algorithm courses for computer science majors. We give a self-contained treatment of an interior-point method which is particularly tailored to the typical mathematical background of CS students. In particular, only limited knowledge of linear algebra and calculus is assumed.

**Category:** Data Structures and Algorithms

[56] **viXra:1510.0325 [pdf]**
*replaced on 2016-09-26 07:00:37*

**Authors:** Jesse Read, Luca Martino, Jaakko Hollmén

**Comments:** 29 Pages. (accepted: to appear) Pattern Recognition

The number of methods available for classification of multi-label data has increased rapidly over recent years, yet relatively few links have been made with the related task of classification of sequential data. If label indices are considered as time indices, the problems can often be seen as equivalent. In this paper we detect and elaborate on connections between multi-label methods and Markovian models, and study the suitability of multi-label methods for prediction in sequential data. From this study we draw upon the most suitable techniques from the area and develop two novel competitive approaches which can be applied to either kind of data. We carry out an empirical evaluation investigating performance on real-world sequential-prediction tasks: electricity demand, and route prediction. As well as showing that several popular multi-label algorithms are in fact easily applicable to sequencing tasks, our novel approaches, which benefit from a unified view of these areas, prove very competitive against established methods.
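One way to see the label-index/time-index equivalence described in the abstract is a classifier chain whose j-th model conditions on the previously predicted labels, which is exactly the structure of a Markov predictor over time steps. A minimal sketch with a lookup-table base learner (an illustration of the construction, not the paper's methods):

```python
from collections import Counter, defaultdict

def train_chain(X, Y):
    """Train one lookup-table classifier per label position j.  Each model
    sees the j-th feature plus the labels at positions < j, so reading j
    as a time index turns the chain into a Markov-style sequence model."""
    L = len(Y[0])
    models = []
    for j in range(L):
        votes = defaultdict(Counter)
        for x, y in zip(X, Y):
            votes[(x[j], tuple(y[:j]))][y[j]] += 1
        # majority label for each observed (feature, label-prefix) context
        models.append({k: c.most_common(1)[0][0] for k, c in votes.items()})
    return models

def predict_chain(models, x):
    """Predict labels left to right, feeding earlier predictions forward."""
    y = []
    for j, m in enumerate(models):
        y.append(m.get((x[j], tuple(y)), 0))  # default 0 for unseen context
    return y
```

On a toy task where each label is the running XOR of the inputs (so label j truly depends on label j-1), the chain recovers the sequence exactly.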

**Category:** Data Structures and Algorithms

[55] **viXra:1509.0259 [pdf]**
*replaced on 2015-10-01 16:55:57*

**Authors:** Laszlo B. Kish, Claes G. Granqvist

**Comments:** 8 Pages. submitted for journal publication

We introduce two new Kirchhoff-law–Johnson-noise (KLJN) secure key distribution schemes which are generalizations of the original KLJN scheme. The first of these, the Random-Resistor (RR–) KLJN scheme, uses random resistors with values chosen from a quasi-continuum set. It is well-known since the creation of the KLJN concept that such a system could work in cryptography, because Alice and Bob can calculate the unknown resistance value from measurements, but the RR–KLJN system has not been addressed in prior publications since it was considered impractical. The reason for discussing it now is the second scheme, the Random-Resistor–Random-Temperature (RRRT–) KLJN key exchange, inspired by a recent paper of Vadai, Mingesz and Gingl, wherein security was shown to be maintained at non-zero power flow. In the RRRT–KLJN secure key exchange scheme, both the resistances and their temperatures are continuum random variables. We prove that the security of the RRRT–KLJN scheme can prevail at non-zero power flow, and thus the physical law guaranteeing security is not the Second Law of Thermodynamics but the Fluctuation–Dissipation Theorem. Alice and Bob know their own resistances and temperatures and can calculate the resistance and temperature values at the other end of the communication channel from measured voltage, current and power-flow data in the wire. However, Eve cannot determine these values because, for her, there are four unknown quantities while she can set up only three equations. The RRRT–KLJN scheme has several advantages and makes all former attacks on the KLJN scheme invalid or incomplete.
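The four-unknowns-versus-three-equations argument can be written out explicitly. Using the standard Johnson-noise expressions for the wire quantities Eve can measure (notation and bandwidth factors assumed here for illustration, not quoted from the paper), the voltage spectrum, current spectrum and mean power flow in the channel are

```latex
S_u(f) = \frac{4k\left(T_A R_A R_B^2 + T_B R_B R_A^2\right)}{(R_A+R_B)^2}, \qquad
S_i(f) = \frac{4k\left(T_A R_A + T_B R_B\right)}{(R_A+R_B)^2}, \qquad
P_{A\to B} \propto \frac{4k\,(T_A - T_B)\,R_A R_B}{(R_A+R_B)^2}.
```

Three measured quantities cannot determine the four unknowns $R_A, R_B, T_A, T_B$, whereas Alice and Bob each know their own resistance and temperature and so face only two unknowns apiece.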

**Category:** Data Structures and Algorithms

[54] **viXra:1504.0072 [pdf]**
*replaced on 2015-04-09 12:56:05*

**Authors:** Funkenstein the Dwarf

**Comments:** 4 Pages. Couple of typos and a simplification

About a year after Ittay Eyal published two papers claiming vulnerabilities in the bitcoin mining protocol, we have seen that the network is still strong (it has grown in hashpower many times over) and is unaffected by the supposed problems. I show here the biggest reasons the two vulnerability analyses were flawed. The attacks appear to hinder other miners who are competitors. However, both of the attacks harm the attacker's bottom line more than any harm to the competitors can emerge as profits for the attacker.

**Category:** Data Structures and Algorithms

[53] **viXra:1503.0018 [pdf]**
*replaced on 2015-03-03 12:52:40*

**Authors:** Florentin Smarandache, Jean Dezert (editors)

**Comments:** 504 Pages.

The fourth volume on Advances and Applications of Dezert-Smarandache Theory (DSmT) for information fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics. The contributions (see the list of articles published in this book, at the end of the volume) have been published or presented, after the dissemination of the third volume (2009, http://fs.gallup.unm.edu/DSmT-book3.pdf), in international conferences, seminars, workshops and journals.
The first part of this book presents the theoretical advancement of DSmT, dealing with belief functions, conditioning and deconditioning, the Analytic Hierarchy Process, decision making, multi-criteria analysis, evidence theory, combination rules, evidence distance, conflicting belief, sources of evidence with different importances and reliabilities, pignistic probability transformation, qualitative reasoning under uncertainty, imprecise belief structures, 2-tuple linguistic labels, the Electre Tri method, hierarchical proportional redistribution, basic belief assignment, subjective probability measures, Smarandache codification, neutrosophic logic, outranking methods, Dempster-Shafer theory, the Bayes fusion rule, frequentist probability, mean square error, controlling factors, optimal assignment solutions, data association, the Transferable Belief Model, and others.
More applications of DSmT have emerged in the years since the appearance of the third DSmT book in 2009. Accordingly, the second part of this volume is about applications of DSmT in connection with Electronic Support Measures, belief functions, sensor networks, ground moving target and multiple-target tracking, vehicle-borne improvised explosive devices, the Belief Interacting Multiple Model filter, seismic and acoustic sensors, Support Vector Machines, alarm classification, the ability of the human visual system, the Uncertainty Representation and Reasoning Evaluation Framework, threat assessment, handwritten signature verification, automatic aircraft recognition, Dynamic Data-Driven Application Systems, adjustment of secure-communication trust analysis, and so on.
Finally, the third part presents a list of references related to DSmT, published or presented over the years since its inception in 2004, chronologically ordered.

**Category:** Data Structures and Algorithms

[52] **viXra:1502.0003 [pdf]**
*replaced on 2015-02-07 06:24:59*

**Authors:** Wenming Zhang

**Comments:** 5 Pages. This is a short and interesting paper.

We discuss the P versus NP problem from the perspective of the addition operation on polynomial functions. Two contradictory propositions for the addition operation are presented. With the proposition that the sum of k (k<=n+1) polynomial functions in n always yields a polynomial function, we prove that P=NP, considering the maximum clique problem. And with the proposition that the sum of k polynomial functions may yield an exponential function, we prove that P!=NP by constructing an abstract decision problem. Furthermore, we conclude that P=NP and P!=NP if and only if the above propositions hold, respectively.
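The tension between the two propositions comes down to whether the number of summands k is fixed (or polynomially bounded) or is itself allowed to grow with n. A toy illustration of that distinction (not the paper's construction):

```python
def poly(n):
    return n * n          # a fixed polynomial in n

def sum_of_k(n, k):
    """Sum of k copies of poly(n), i.e. k * n**2."""
    return sum(poly(n) for _ in range(k))

# With k bounded (say k <= n + 1) the sum k * n**2 stays polynomial in n;
# if k is allowed to grow like 2**n, the very same sum is exponential in n.
```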

**Category:** Data Structures and Algorithms

[51] **viXra:1411.0592 [pdf]**
*replaced on 2015-10-29 07:41:07*

**Authors:** Sanjeev Saxena

**Comments:** 10 Pages. Corrected Arithmetic Errors. A full/more complete version of this is viXra:1510.0473

Linear Programming is now included in Algorithm undergraduate and postgraduate courses for Computer Science majors. It is possible to teach interior-point methods directly with just minimal knowledge of Algebra and Matrices.

**Category:** Data Structures and Algorithms

[50] **viXra:1411.0592 [pdf]**
*replaced on 2015-01-07 02:16:04*

**Authors:** Sanjeev Saxena

**Comments:** 10 Pages. Section on initialisation, rewritten

Linear Programming is now included in Algorithm undergraduate and postgraduate courses for Computer Science majors. It is possible to teach interior-point methods directly with just minimal knowledge of Algebra and Matrices.

**Category:** Data Structures and Algorithms

[49] **viXra:1411.0592 [pdf]**
*replaced on 2014-12-01 02:00:15*

**Authors:** Sanjeev Saxena

**Comments:** 8 Pages. Corrected some typos

Linear Programming is now included in Algorithm undergraduate and postgraduate courses for Computer Science majors. It is possible to teach interior-point methods directly with just minimal knowledge of Algebra and Matrices.

**Category:** Data Structures and Algorithms

[48] **viXra:1410.0122 [pdf]**
*replaced on 2014-11-04 01:53:29*

**Authors:** Laszlo B. Kish, Zoltan Gingl, Robert Mingesz, Gergely Vadai, Janusz Smulko, Claes-Goran Granqvist

**Comments:** 9 Pages. Accepted for Publication in Fluctuation and Noise Letters (November 3, 2014)

A recent paper by Gunn–Allison–Abbott (GAA) [L.J. Gunn et al., Scientific Reports 4 (2014) 6461] argued that the Kirchhoff-law–Johnson-noise (KLJN) secure key exchange system could experience a severe information leak. Here we refute their results and demonstrate that GAA’s arguments ensue from a serious design flaw in their system. Specifically, an attenuator broke the single Kirchhoff-loop into two coupled loops, which is an incorrect operation since the single loop is essential for the security in the KLJN system, and hence GAA’s asserted information leak is trivial. Another consequence is that a fully defended KLJN system would not be able to function due to its built-in current-comparison defense against active (invasive) attacks. In this paper we crack GAA’s scheme via an elementary current comparison attack which yields negligible error probability for Eve even without averaging over the correlation time of the noise.
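The elementary current-comparison check exploits the fact that a single unbroken Kirchhoff loop carries the same instantaneous current everywhere, so the two ends can cross-check their current samples; a device such as GAA's attenuator, which splits the loop into two coupled loops, makes the samples disagree. A minimal sketch (tolerances and names illustrative, not the paper's implementation):

```python
import numpy as np

def loop_intact(i_alice, i_bob, tol=1e-9):
    """Return True if the current samples recorded at the two ends agree,
    as they must in a single unbroken Kirchhoff loop; a mismatch flags an
    invasive modification of the channel (toy model)."""
    diff = np.abs(np.asarray(i_alice) - np.asarray(i_bob))
    return bool(np.max(diff) < tol)
```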

**Category:** Data Structures and Algorithms

[47] **viXra:1410.0122 [pdf]**
*replaced on 2014-10-25 05:44:46*

**Authors:** Laszlo B. Kish, Zoltan Gingl, Robert Mingesz, Gergely Vadai, Janusz Smulko, Claes-Goran Granqvist

**Comments:** 9 Pages. Equation double-number fixed. In editorial process at a journal.

A recent paper by Gunn–Allison–Abbott (GAA) [L.J. Gunn et al., Scientific Reports 4 (2014) 6461] argued that the Kirchhoff-law–Johnson-noise (KLJN) secure key exchange system could experience a severe information leak. Here we refute their results and demonstrate that GAA’s arguments ensue from a serious design flaw in their system. Specifically, an attenuator broke the single Kirchhoff-loop into two coupled loops, which is an incorrect operation since the single loop is essential for the security in the KLJN system, and hence GAA’s asserted information leak is trivial. Another consequence is that a fully defended KLJN system would not be able to function due to its built-in current-comparison defense against active (invasive) attacks. In this paper we crack GAA’s scheme via an elementary current comparison attack which yields negligible error probability for Eve even without averaging over the correlation time of the noise.

**Category:** Data Structures and Algorithms

[46] **viXra:1410.0122 [pdf]**
*replaced on 2014-10-23 09:34:22*

**Authors:** Laszlo B. Kish, Zoltan Gingl, Robert Mingesz, Gergely Vadai, Janusz Smulko, Claes-Goran Granqvist

**Comments:** 9 Pages. Polished and many typos fixed. Submitted for publication

A recent paper by Gunn–Allison–Abbott (GAA) [L.J. Gunn et al., Scientific Reports 4 (2014) 6461] argued that the Kirchhoff-law–Johnson-noise (KLJN) secure key exchange system could experience a severe information leak. Here we refute their results and demonstrate that GAA’s arguments ensue from a serious design flaw in their system. Specifically, an attenuator broke the single Kirchhoff-loop into two coupled loops, which is an incorrect operation since the single loop is essential for the security in the KLJN system, and hence GAA’s asserted information leak is trivial. Another consequence is that a fully defended KLJN system would not be able to function due to its built-in current-comparison defense against active (invasive) attacks. In this paper we crack GAA’s scheme via an elementary current comparison attack which yields negligible error probability for Eve even without averaging over the correlation time of the noise.

**Category:** Data Structures and Algorithms

[45] **viXra:1409.0150 [pdf]**
*replaced on 2015-01-07 15:20:02*

**Authors:** X. Cao, Y. Saez, G. Pesti, L.B. Kish

**Comments:** 13 Pages. Accepted for Publication at Fluctuation and Noise Letters

In a former paper [Fluct. Noise Lett., 13 (2014) 1450020] we introduced a vehicular communication system with unconditionally secure key exchange based on the Kirchhoff-Law-Johnson-Noise (KLJN) key distribution scheme. In this paper, we address the secure KLJN key donation to vehicles. This KLJN key donation solution is performed lane-by-lane by using roadside key provider equipment embedded in the pavement. A method to compute the lifetime of the KLJN key is also given. This key lifetime depends on the car density and gives an upper limit of the lifetime of the KLJN key for vehicular communication networks.

**Category:** Data Structures and Algorithms

[44] **viXra:1407.0010 [pdf]**
*replaced on 2014-07-04 14:06:07*

**Authors:** Samuel C. Hsieh

**Comments:** 13 Pages. This version corrects a few typing errors found in the previous version.

We establish a lower bound of 2^n conditional jumps for deciding the satisfiability of the conjunction of any two Boolean formulas from a set called a full representation of Boolean functions of n variables - a set containing a Boolean formula to represent each Boolean function of n variables. The contradiction proof first assumes that there exists a RAM program that correctly decides the satisfiability of the conjunction of any two Boolean formulas from such a set by following an execution path that includes fewer than 2^n conditional jumps. By using multiple runs of this program, with one run for each Boolean function of n variables, the proof derives a contradiction by showing that this program is unable to correctly decide the satisfiability of the conjunction of at least one pair of Boolean formulas from a full representation of n-variable Boolean functions if the program executes fewer than 2^n conditional jumps. This lower bound of 2^n conditional jumps holds for any full representation of Boolean functions of n variables, even if a full representation consists solely of minimized Boolean formulas derived by a Boolean minimization method. We discuss why the lower bound fails to hold for satisfiability of certain restricted formulas, such as 2CNF satisfiability, XOR-SAT, and HORN-SAT. We also relate the lower bound to 3CNF satisfiability.
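The notion of a "full representation" (one formula per Boolean function of n variables) can be made concrete by letting truth tables stand in for the syntactic formulas of the paper; the enumeration has 2^(2^n) members and is feasible only for tiny n:

```python
from itertools import product

def full_representation(n):
    """Enumerate all 2**(2**n) Boolean functions of n variables as truth
    tables, one 'formula' per function (a toy stand-in for the syntactic
    Boolean formulas considered in the paper)."""
    points = list(product((0, 1), repeat=n))
    return [dict(zip(points, bits))
            for bits in product((0, 1), repeat=2 ** n)]

def conj_satisfiable(f, g):
    """Is the conjunction of two represented functions satisfiable,
    i.e. is there an assignment on which both evaluate to 1?"""
    return any(f[p] and g[p] for p in f)
```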

**Category:** Data Structures and Algorithms

[43] **viXra:1406.0124 [pdf]**
*replaced on 2014-09-27 23:38:47*

**Authors:** Laszlo B. Kish, Claes-Goran Granqvist

**Comments:** 9 Pages. Accepted for publication in Entropy (open access)

We introduce the so far most efficient attack against the Kirchhoff-law–Johnson-noise (KLJN) secure key exchange system. This attack utilizes the lack of exact thermal equilibrium in practical applications and is based on cable resistance losses and the fact that the Second Law of Thermodynamics cannot provide full security when such losses are present. The new attack does not challenge the unconditional security of the KLJN scheme, but it puts more stringent demands on the security/privacy enhancing protocol than any earlier attack. In this paper we present a simple defense protocol to fully eliminate this new attack by increasing the noise-temperature at the side with the smaller resistance value over the noise-temperature at the side with the greater resistance. It is shown that this simple protocol totally removes Eve's information not only for the new attack but also for the old Bergou-Scheuer-Yariv attack. The presently most efficient attacks against the KLJN scheme are thereby completely nullified.

**Category:** Data Structures and Algorithms

[42] **viXra:1406.0124 [pdf]**
*replaced on 2014-06-20 01:40:38*

**Authors:** Laszlo B. Kish, Claes-Goran Granqvist

**Comments:** 4 Pages. vixra hyperlink added

We introduce the so far most efficient attack against the Kirchhoff-law-Johnson-noise (KLJN) secure key exchanger. The attack utilizes the lack of exact thermal equilibrium in practical applications due to cable resistance loss, so the Second Law of Thermodynamics cannot provide full security. While the new attack does not challenge the unconditional security of the KLJN scheme, its more favorable properties for Eve impose stricter requirements on the security/privacy enhancing protocol than any earlier attack. We create a simple defense protocol that fully eliminates this attack by increasing the noise-temperature at the side with the lower resistance value. We show that this simple defense protocol totally eliminates Eve's information not only in this attack but also in the old (Bergou-)Scheuer-Yariv attack. Thus the hitherto most efficient attack methods become useless against the KLJN scheme.

**Category:** Data Structures and Algorithms

[41] **viXra:1405.0352 [pdf]**
*replaced on 2014-06-06 08:25:56*

**Authors:** José Francisco García Juliá

**Comments:** 3 Pages.

Information hiding is not programming hiding. It is the hiding of changeable information inside programming modules.
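The distinction the note draws, hiding changeable design decisions rather than hiding the code itself, is the classic Parnas criterion for modularization. A minimal sketch, with the data representation as the hidden, changeable information:

```python
class Stack:
    """Clients use only push/pop; the representation (here a Python list)
    is the hidden, changeable information and could be swapped, e.g. for
    a linked list, without touching any client code."""

    def __init__(self):
        self._items = []          # the hidden design decision

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()
```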

**Category:** Data Structures and Algorithms

[40] **viXra:1405.0312 [pdf]**
*replaced on 2014-09-21 05:47:47*

**Authors:** Sergey A. Kamenshchikov

**Comments:** 12 Pages. Journal of Chaos, Volume 2014, Article ID 346743. Author: ru.linkedin.com/pub/sergey-kamenshchikov/60/8b1/21a/

The goal of this investigation was to overcome limitations of the persistency analysis introduced by Benoit Mandelbrot for monofractal Brownian processes: nondifferentiability, the Brownian nature of the process, and a linear memory measure. We have extended the sense of the Hurst factor by considering a phase-diffusion power law. It is shown that pre-catastrophic stabilization, as an indicator of bifurcation, leads to a new minimum of momentary phase diffusion, while bifurcation causes an increase of momentary transport. The efficiency of the diffusive analysis has been compared experimentally to the application of the Reynolds stability model. An extended Reynolds parameter has been introduced as an indicator of phase transition. A combination of diffusive and Reynolds analysis has been applied to describe the time series of Dow Jones Industrial weekly prices during the world financial crisis of 2007-2009. The diffusive and Reynolds parameters showed extreme values in October 2008, when the mortgage crisis was registered. The combined R/D description allowed short-memory and long-memory shifts of the market evolution to be distinguished. It is concluded that a systematic large-scale failure of the financial system began in October 2008 and started fading in February 2009.
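The power-law view of persistency that underlies the abstract can be illustrated by estimating a Hurst-type exponent H from the growth of the mean squared displacement of a series (a generic diffusion diagnostic for intuition only, not the paper's extended R/D measure):

```python
import numpy as np

def hurst_diffusion(x, lags=range(2, 20)):
    """Estimate a Hurst-type exponent H from the power-law growth of the
    mean squared displacement <|x(t+tau) - x(t)|**2> ~ tau**(2H):
    H = 0.5 for ordinary Brownian motion, H > 0.5 for persistent series."""
    taus = np.array(list(lags))
    msd = np.array([np.mean((x[t:] - x[:-t]) ** 2) for t in taus])
    slope, _ = np.polyfit(np.log(taus), np.log(msd), 1)  # log-log fit
    return slope / 2.0
```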

**Category:** Data Structures and Algorithms

[39] **viXra:1404.0081 [pdf]**
*replaced on 2014-05-15 23:12:47*

**Authors:** Hsien-Pu Chen, Laszlo B. Kish, Claes-Göran Granqvist, Gabor Schmera

**Comments:** 11 Pages. missing/incorrect abstract fixed; extended (second) version

Recently, Gunn, Allison and Abbott (GAA) [http://arxiv.org/pdf/1402.2709v2.pdf] proposed a new scheme to utilize electromagnetic waves for eavesdropping on the Kirchhoff-law–Johnson-noise (KLJN) secure key distribution. We proved in a former paper [http://arxiv.org/pdf/1404.4664] that GAA's mathematical model is unphysical. Here we analyze GAA's cracking scheme and show that in the cable-loss-free case it yields less eavesdropping information than the old mean-square-based attack, while in the loss-dominated case it offers no information. We also investigate GAA's experimental claim to be capable of distinguishing, with poor statistics over a few correlation times, the distributions of two Gaussian noises with a relative variance difference of less than 10^-8. Normally such distinctions would require hundreds of millions of correlation times to be observable. We identify several experimental artifacts due to poor design that can lead to GAA's assertions: deterministic currents due to spurious harmonic components, ground loops and DC offset; aliasing; non-Gaussian features including non-linearities and other non-idealities in the generators; and the time-derivative nature of their scheme, which enhances all these aspects.

**Category:** Data Structures and Algorithms

[38] **viXra:1404.0081 [pdf]**
*replaced on 2014-05-15 06:52:11*

**Authors:** Hsien P. Chen, Laszlo B. Kish, Claes-Goran Granqvist, Gabor Schmera

**Comments:** 11 Pages. second draft

Recently Gunn, Allison and Abbott (GAA) [1] proposed a new scheme to utilize electromagnetic waves for eavesdropping on the Kirchhoff-law–Johnson-noise (KLJN) secure key distribution. In a former paper [2], we proved that the wave claims in GAA's attack are heavily unphysical, since the quasi-static limit holds for the KLJN scheme, implying that physical waves do not exist in the wire channel. The assumption of existing wave modes in the short cable at the low-frequency limit violates a number of laws of physics, including the Second Law of Thermodynamics. One aspect of the mistake is that in electrical-engineering jargon all oscillating and propagating time functions are called waves, while in physics the corresponding retarded potentials can be of wave type or of non-wave type. Physical waves involve two dual energy forms that regenerate each other during propagation, as the electric and magnetic fields do (similarly, kinetic and potential energy in elastic waves); non-wave-type retarded-potential effects in the quasi-static regime, such as in KLJN, have negligible crosstalk between these energy forms, and the energy exchange takes place between them and the generators [2].
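The quasi-static argument reduces to a simple scale comparison: the shortest wavelength in the operating band must greatly exceed the cable length for wave modes to be absent. A back-of-the-envelope check (the margin factor and propagation speed are assumptions for illustration):

```python
def quasi_static(f_max_hz, cable_len_m, v=3e8, margin=10.0):
    """True if the minimum wavelength v / f_max exceeds the cable length
    by the given safety margin, i.e. the no-wave (quasi-static) limit of
    the KLJN channel applies (illustrative criterion)."""
    return v / f_max_hz > margin * cable_len_m
```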

**Category:** Data Structures and Algorithms

[37] **viXra:1404.0081 [pdf]**
*replaced on 2014-04-11 08:37:35*

**Authors:** Hsien-Pu Chen, Laszlo B. Kish, Claes-Goran Granqvist, Gabor Schmera

**Comments:** 4 Pages. second draft

Recently Gunn, Allison and Abbott (GAA) [http://arxiv.org/pdf/1402.2709v2.pdf] proposed a new scheme to utilize electromagnetic waves for eavesdropping on the Kirchhoff-law–Johnson-noise (KLJN) secure key distribution. In a former paper [http://vixra.org/pdf/1403.0964v4.pdf], we proved that GAA's wave-based attack is unphysical. Here we address their experimental results regarding this attack. Our analysis shows that GAA effectively claim they can identify, within a few correlation times, which of two zero-mean Gaussian distributions is wider when their relative width difference is <10^-4. Normally, such a decision would require millions of correlation times to observe. We identify the experimental artifact causing this situation: an existing DC current and/or ground loop (yielding slow deterministic currents) in the system. It is important to note that, while GAA's cracking scheme, experiments and analysis are invalid, there is an important benefit of their attempt: our analysis implies that, in practical KLJN systems, DC currents, ground loops and any other mechanisms carrying a deterministic current/voltage component must be taken care of to avoid information leak about the key.

**Category:** Data Structures and Algorithms

[36] **viXra:1404.0054 [pdf]**
*replaced on 2014-07-08 09:47:54*

**Authors:** Y. Saez, X. Cao, L.B. Kish, G. Pesti

**Comments:** 12 Pages. Paper accepted for publication at FNL on May 19, 2014

We review the security requirements for vehicular communication networks and provide a critical assessment of some typical communication security solutions. We also propose a novel unconditionally secure vehicular communication architecture that utilizes the Kirchhoff-law–Johnson-noise (KLJN) key distribution scheme.

**Category:** Data Structures and Algorithms

[35] **viXra:1403.0964 [pdf]**
*replaced on 2014-04-07 13:23:57*

**Authors:** Hsien-Pu Chen, Laszlo B. Kish, Claes-Goran Granqvist, Gabor Schmera

**Comments:** 13 Pages. Accepted for publication in Fluctuation and Noise Letters

We refute a physical model, recently proposed by Gunn, Allison and Abbott (GAA) [http://arxiv.org/pdf/1402.2709v2.pdf], to utilize electromagnetic waves for eavesdropping on the Kirchhoff-law–Johnson-noise (KLJN) secure key distribution. Their model, and its theoretical underpinnings, is found to be fundamentally flawed because their assumption of electromagnetic waves violates not only the wave equation but also the Second Law of Thermodynamics, the Principle of Detailed Balance, Boltzmann’s Energy Equipartition Theorem, and Planck’s formula by implying infinitely strong blackbody radiation. We deduce the correct mathematical model of the GAA scheme, which is based on impedances at the quasi-static limit. Mathematical analysis and simulation results confirm our approach and prove that GAA’s experimental interpretation is incorrect too.

**Category:** Data Structures and Algorithms

[34] **viXra:1403.0964 [pdf]**
*replaced on 2014-04-02 10:24:11*

**Authors:** Hsien-Pu Chen, Laszlo B. Kish, Claes-Goran Granqvist, Gabor Schmera

**Comments:** 12 Pages. author's name corrected; link added

We refute a physical model, recently proposed by Gunn, Allison and Abbott (GAA) [http://arxiv.org/pdf/1402.2709v2.pdf], to utilize electromagnetic waves for eavesdropping on the Kirchhoff-law–Johnson-noise (KLJN) secure key distribution. Their model, and its theoretical underpinnings, is found to be fundamentally flawed because their assumption of electromagnetic waves violates not only the wave equations but also the Second Law of Thermodynamics, the Principle of Detailed Balance, Boltzmann’s Energy Equipartition Theorem, and Planck’s formula by implying infinitely strong blackbody radiation. We deduce the correct mathematical model of the GAA scheme, which is impedance-based. Mathematical analysis and simulation results confirm our approach and prove that GAA’s experimental interpretation is incorrect too.

**Category:** Data Structures and Algorithms

[33] **viXra:1403.0964 [pdf]**
*replaced on 2014-03-31 13:41:15*

**Authors:** H.P. Chen, L.B. Kish, C.G. Granqvist, G. Schmera

**Comments:** 12 Pages. revised

We refute a physical model, recently proposed by Gunn, Allison and Abbott (GAA) [http://arxiv.org/pdf/1402.2709v2.pdf], to utilize electromagnetic waves for eavesdropping on the Kirchhoff-law–Johnson-noise (KLJN) secure key distribution. Their model, and its theoretical underpinnings, is found to be fundamentally flawed because their assumption of electromagnetic waves violates not only the wave equations but also the Second Law of Thermodynamics, the Principle of Detailed Balance, Boltzmann’s Energy Equipartition Theorem, and Planck’s formula by implying infinitely strong blackbody radiation. We deduce the correct mathematical model of the GAA scheme, which is impedance-based. Mathematical analysis and simulation results confirm our approach and prove that GAA’s experimental interpretation is incorrect too.

**Category:** Data Structures and Algorithms