Data Structures and Algorithms

Previous months:
2009 - 0908(1)
2010 - 1003(2) - 1004(2) - 1008(1)
2011 - 1101(3) - 1106(3) - 1108(1) - 1109(1) - 1112(2)
2012 - 1202(1) - 1207(1) - 1208(3) - 1210(2) - 1211(1) - 1212(3)
2013 - 1301(1) - 1302(2) - 1303(7) - 1305(2) - 1306(6) - 1308(1) - 1309(1) - 1310(4) - 1311(1) - 1312(1)
2014 - 1403(3) - 1404(2) - 1405(26) - 1406(3) - 1407(3) - 1408(3) - 1409(4) - 1410(2)

Recent submissions

Any replacements are listed further down

[96] viXra:1410.0134 [pdf] submitted on 2014-10-22 17:29:34

Distances, Hesitancy Degree and Flexible Querying via Neutrosophic Sets

Authors: A. A. Salama, Mohamed Abdelfattah, Mohamed Eisa
Comments: 6 Pages.

Since the world is full of indeterminacy, the neutrosophics found their place in contemporary research. In this paper we introduce distances between neutrosophic sets: the Hamming distance, the normalized Hamming distance, the Euclidean distance and the normalized Euclidean distance. We will extend the concepts of distances to the case of neutrosophic hesitancy degree. In addition, this paper suggests how to enrich intuitionistic fuzzy querying by the use of neutrosophic values.
Category: Data Structures and Algorithms
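The distances named in [96] are straightforward to state concretely. The following is a minimal sketch, not taken from the paper: it assumes each neutrosophic set over an n-element universe is stored as an (n, 3) array of (truth, indeterminacy, falsity) degrees, and that the normalization constants are 3n and sqrt(3n) (conventions vary between authors).

    import numpy as np

    # Each neutrosophic set over an n-element universe: one (T, I, F) triple per element.
    A = np.array([[0.7, 0.2, 0.1], [0.4, 0.5, 0.3], [0.9, 0.1, 0.0]])
    B = np.array([[0.6, 0.3, 0.2], [0.5, 0.4, 0.4], [0.8, 0.2, 0.1]])

    diff = np.abs(A - B)
    n = A.shape[0]
    hamming = diff.sum()                         # sum of |dT| + |dI| + |dF| over all elements
    norm_hamming = hamming / (3 * n)             # normalization constant is an assumption here
    euclidean = np.sqrt((diff ** 2).sum())
    norm_euclidean = euclidean / np.sqrt(3 * n)
    print(hamming, norm_hamming, euclidean, norm_euclidean)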

[95] viXra:1410.0122 [pdf] submitted on 2014-10-21 10:02:06

Analysis of the Attenuator-Artifact in the Experimental Attack of Gunn-Allison-Abbott Against the KLJN System

Authors: Laszlo B. Kish, Zoltan Gingl, Robert Mingesz, Gergely Vadai, Janusz Smulko, Claes-Goran Granqvist
Comments: 7 Pages. first draft

After briefly summarizing our general theoretical arguments, we show that the strong information leak experienced in the Gunn-Allison-Abbott attack [Scientific Reports 4 (2014) 6461] against the Kirchhoff-law-Johnson-noise (KLJN) secure key exchange scheme resulted from a serious design flaw of the system. The attenuator broke the single Kirchhoff loop into two coupled loops. This is an illegal operation, because the single loop is essential for the security, thus the observed leak is unsurprising. We demonstrate this by cracking the system with an elementary current comparison attack yielding a success probability for Eve close to 1, even without averaging within a sub-correlation-time measurement window. A fully defended KLJN system would not be able to function at all, due to its built-in current-comparison defense against active (invasive) attacks.
Category: Data Structures and Algorithms

[94] viXra:1409.0235 [pdf] submitted on 2014-09-29 21:02:39

Cellular Automaton Graphics(4)

Authors: Morio Kikuchi
Comments: 406 Pages.

We fill three-dimensional space up regularly using painting algorithms.
Category: Data Structures and Algorithms

[93] viXra:1409.0180 [pdf] submitted on 2014-09-26 05:40:42

MetaData Visualization

Authors: Samit Kumar
Comments: 3 Pages.

Purposeful information can be represented in a hierarchical manner using basic data originating from digitally connected sources. Such hierarchically represented data highlights the precarious state.
Category: Data Structures and Algorithms

[92] viXra:1409.0150 [pdf] submitted on 2014-09-20 13:41:49

On KLJN-Based Secure Key Distribution in Vehicular Communication Networks

Authors: X. Cao, Y. Saez, G. Pesti, L.B. Kish
Comments: 13 Pages. Submitted for publication to Fluct. Noise Lett. on September 20, 2014

In a former paper [Fluct. Noise Lett., 13 (2014) 1450020] we introduced a vehicular communication system with unconditionally secure key exchange based on the Kirchhoff-Law-Johnson-Noise (KLJN) key distribution scheme. In this paper, we address the secure KLJN key donation to vehicles and give an upper limit for the lifetime of this key.
Category: Data Structures and Algorithms

[91] viXra:1409.0071 [pdf] submitted on 2014-09-10 14:26:01

Anima: Adaptive Personalized Software Keyboard

Authors: Panos Sakkos, Dimitrios Kotsakos, Ioannis Katakis, Dimitrios Gunopoulos
Comments: 4 Pages.

We present a Software Keyboard for smart touchscreen devices that learns its owner’s unique dictionary in order to produce personalized typing predictions. The learning process is accelerated by analysing the user’s past typed communication. Moreover, personal temporal user behaviour is captured and exploited in the prediction engine. Computational and storage issues are addressed by dynamically forgetting words that the user no longer types. A prototype implementation is available at the Google Play Store.
Category: Data Structures and Algorithms
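The "forgetting" mechanism mentioned in [91] can be illustrated in a few lines of code. This is a hypothetical sketch, not the Anima implementation: word scores are incremented on use, periodically decayed, and dropped once below a floor, which bounds storage while keeping predictions personal.

    from collections import defaultdict

    class ForgettingDictionary:
        """Toy per-user word model: scores decay so unused words are eventually dropped."""

        def __init__(self, decay=0.95, floor=0.5):
            self.scores = defaultdict(float)
            self.decay = decay   # multiplicative decay applied at each maintenance pass
            self.floor = floor   # words whose score falls below this are forgotten

        def observe(self, word):
            self.scores[word] += 1.0

        def maintain(self):
            for w in list(self.scores):
                self.scores[w] *= self.decay
                if self.scores[w] < self.floor:
                    del self.scores[w]           # bounds memory on the device

        def predict(self, prefix, k=3):
            ranked = sorted(self.scores.items(), key=lambda kv: -kv[1])
            return [w for w, _ in ranked if w.startswith(prefix)][:k]

    d = ForgettingDictionary()
    for w in "the meeting is at the main office before the meeting".split():
        d.observe(w)
    print(d.predict("me"))   # ['meeting']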

[90] viXra:1408.0145 [pdf] submitted on 2014-08-21 18:55:42

Enhanced Usage of Keys Obtained by Physical, Unconditionally Secure Distributions

Authors: Laszlo B. Kish
Comments: 3 Pages. submitted for publication

Unconditionally secure physical key distribution is very slow whenever it is undoubtedly secure. Thus it is practically impossible to use a one-time-pad-based cipher to guarantee perfect security, because using the key bits more than once gives out statistical information, for example via the known-plain-text attack or by utilizing known components of the protocol and language statistics. Here we outline a protocol that seems to reduce this problem and allows a near-to-one-time-pad-based communication with an unconditionally secure physical key of finite length. The unconditionally secure physical key is not used for communication; it is used for secure communication to generate and share a new software-based key without a known-plain-text component, such as keys shared via the Diffie-Hellman-Merkle protocol. This combined physical/software key distribution based communication looks favorable compared to the physical-key-based communication when the speed of the physical key distribution is much slower than that of the software-based key distribution. The security proof of this scheme is still an open problem.
Category: Data Structures and Algorithms

[89] viXra:1408.0123 [pdf] submitted on 2014-08-18 13:06:44

Facts, Myths and Fights About the KLJN Classical Physical Key Exchanger

Authors: Laszlo B. Kish, Derek Abbott, Claes-Goran Granqvist, He Wen
Comments: 4 Pages. In: Proceedings of the first conference on Hot Topics in Physical Informatics (HoTPI, 2013 November). Paper is in press at International Journal of Modern Physics: Conference Series (2014).

This paper deals with the Kirchhoff-law-Johnson-noise (KLJN) classical statistical physical key exchange method and surveys criticism - often stemming from a lack of understanding of its underlying premises or from other errors - and our related responses against these, often unphysical, claims. Some of the attacks are valid; however, an extended KLJN system remains protected against all of them, implying that its unconditional security is not impacted.
Category: Data Structures and Algorithms

[88] viXra:1408.0048 [pdf] submitted on 2014-08-08 23:27:58

Trillion by Trillion Matrix Inverse: not Actually that Crazy

Authors: Alexander Fix, Misha Collins
Comments: 3 Pages.

A trillion by trillion matrix is almost unimaginably huge, and finding its inverse seems to be a truly impossible task. However, given current trends in computing, it may actually be possible to achieve such a task around 2040 — if we were willing to devote the entirety of human computing resources to a single computation. Why would we want to do this? Perhaps, as Mallory said of Everest: “Because it’s there”.
Category: Data Structures and Algorithms
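For scale, the arithmetic behind a feasibility estimate like the one in [88] is easy to reproduce. The numbers below are illustrative assumptions, not taken from the paper (which extrapolates growth trends in total human computing power to reach its 2040 estimate):

    n = 1e12                     # matrix dimension
    flops = 2 * n ** 3           # rough flop count for dense, LU-based inversion
    storage = 8 * n ** 2         # bytes for one double-precision copy of the matrix

    rate = 1e21                  # assumed aggregate compute rate in flop/s (hypothetical)
    years = flops / rate / 3.15e7
    print(f"{flops:.1e} flops, {storage:.1e} bytes, {years:.1e} years at {rate:.0e} flop/s")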

[87] viXra:1407.0222 [pdf] submitted on 2014-07-30 21:22:50

Cellular Automaton Graphics(3)

Authors: Morio Kikuchi
Comments: 92 Pages.

We fill a plane up regularly using painting algorithms(3).
Category: Data Structures and Algorithms

[86] viXra:1407.0063 [pdf] submitted on 2014-07-08 22:02:18

Polynomial Time Integer Factorization

Authors: Yuly Shipilevsky
Comments: 10 Pages.

A polynomial-time algorithm for integer factorization is presented, wherein integer factorization is reduced to a convex polynomial-time integer minimization problem.
Category: Data Structures and Algorithms

[85] viXra:1407.0010 [pdf] submitted on 2014-07-01 21:16:24

A Lower Bound of 2^n Conditional Jumps for Boolean Satisfiability on A Random Access Machine

Authors: Samuel C. Hsieh
Comments: 13 Pages.

We establish a lower bound of 2^n conditional jumps for deciding the satisfiability of the conjunction of any two Boolean formulas from a set called a full representation of Boolean functions of n variables - a set containing a Boolean formula to represent each Boolean function of n variables. The contradiction proof first assumes that there exists a RAM program that correctly decides the satisfiability of the conjunction of any two Boolean formulas from such a set by following an execution path that includes fewer than 2^n conditional jumps. By using multiple runs of this program, with one run for each Boolean function of n variables, the proof derives a contradiction by showing that this program is unable to correctly decide the satisfiability of the conjunction of at least one pair of Boolean formulas from a full representation of n-variable Boolean functions if the program executes fewer than 2^n conditional jumps. This lower bound of 2^n conditional jumps holds for any full representation of Boolean functions of n variables, even if a full representation consists solely of minimized Boolean formulas derived by a Boolean minimization method. We discuss why the lower bound fails to hold for satisfiability of certain restricted formulas, such as 2CNF satisfiability, XOR-SAT, and HORN-SAT. We also relate the lower bound to 3CNF satisfiability.
Category: Data Structures and Algorithms

[84] viXra:1406.0105 [pdf] submitted on 2014-06-16 18:39:38

Methodology for Sensor Data Forecast.

Authors: Michail Zak
Comments: 46 Pages.

One of the fundamental objectives of mathematical modeling is to interpret the past and present and, based upon this interpretation, to predict the future. The use at time t of available observations from a time series to forecast its value at some future time t+l can provide a basis for 1) model reconstruction, 2) model verification, 3) anomaly detection, 4) data monitoring, and 5) adjustment of the underlying physical process. A forecast is usually needed over a period known as the lead time, which is problem specific. For instance, the lead time can be associated with the period during which training data are available. The accuracy of the forecast may be expressed by calculating probability limits on either side of each forecast. These limits may be calculated for any convenient set of probabilities, for example 50% and 90%. They are such that the realized value of the time series, when it eventually occurs, will be included within these limits with the stated probability.
Category: Data Structures and Algorithms
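Probability limits of the kind described in [84] are commonly computed from the spread of past forecast errors. The following is a generic sketch under assumed Gaussian errors and a naive persistence forecast, not the methodology of the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    y = np.cumsum(rng.normal(0.1, 1.0, 200))      # synthetic sensor series

    resid = np.diff(y)                            # one-step errors of a persistence forecast
    sigma = resid.std(ddof=1)
    forecast = y[-1]                              # forecast for lead time l = 1

    for prob, z in [(50, 0.674), (90, 1.645)]:    # two-sided Gaussian quantiles
        print(f"{prob}% limits: [{forecast - z * sigma:.2f}, {forecast + z * sigma:.2f}]")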

[83] viXra:1406.0044 [pdf] submitted on 2014-06-07 20:34:41

Cellular Automaton Graphics(2)

Authors: Morio Kikuchi
Comments: 120 Pages.

We fill a plane up regularly using painting algorithms(2).
Category: Data Structures and Algorithms

[82] viXra:1405.0352 [pdf] submitted on 2014-05-29 05:46:55

On Information Hiding

Authors: José Francisco García Juliá
Comments: 3 Pages.

Information hiding is not programming hiding. It is the hiding of changeable information within programming modules.
Category: Data Structures and Algorithms

[81] viXra:1405.0101 [pdf] submitted on 2014-05-07 03:58:11

Conversion of P-Type to N-Type Conductivity in ZnO Thin Films by Increasing Temperature

Authors: Trilok Kumar Pathak, Prabha Singh, L.P.Purohit
Comments: 10 Pages.

ZnO thin films with a thickness of about 15 nm were prepared on (0001) substrates by pulsed laser deposition. X-ray photoelectron spectroscopy indicated that both the as-grown and the annealed ZnO thin films were oxygen rich. Hydrogen (H2) sensing measurements of the films indicated that the conductivity type of both the unannealed and annealed ZnO films converted from p-type to n-type as the operating temperature was increased. However, the two films showed different conversion temperatures. The origin of the p-type conductivity in the unannealed and annealed ZnO films should be attributed to oxygen-related defects and zinc-vacancy-related defects, respectively. The conversion of the conductivity type was due to the annealing out of the correlated defects. Moreover, p-type ZnO films can work at lower temperature than n-type ZnO films without obvious sensitivity loss.
Category: Data Structures and Algorithms

[80] viXra:1405.0099 [pdf] submitted on 2014-05-07 04:02:40

Hierarchical Importance Indices Based Approach for Reliability Redundancy Optimization of Flow Networks

Authors: Kumar Pardeep
Comments: 15 Pages.

In flow networks, it is assumed that a reliability model representing telecommunications networks is independent of topological information but depends on traffic path attributes like delay, reliability and capacity. The performance of such networks from a quality-of-service point of view is the measure of the flow capacity which can satisfy the customers' demand. To design such flow networks, a hierarchical importance indices based approach for reliability redundancy optimization using a composite performance measure integrating reliability and capacity has been proposed. The method utilizes cardinality and other hierarchical importance indices based criteria in selecting flow paths and backup paths to optimize them. The algorithm is reasonably efficient due to reduced computation work even for large telecommunication networks.
Category: Data Structures and Algorithms

[79] viXra:1405.0057 [pdf] submitted on 2014-05-06 23:30:29

Design & Simulation of 128x Interpolator Filter

Authors: Rahul Sinha, A. Sonika
Comments: 10 Pages.

This paper presents the design considerations and simulation of an interpolator with an OSR of 128. The proposed structure uses half-band filters and a comb/sinc filter. Experimental results show that the proposed interpolator achieves the design specification and also has good noise rejection capabilities. The interpolator accepts input at 44.1 kHz for applications like CD and DVD audio. The interpolation filter can be applied to a delta-sigma DAC. The related work is done with the MATLAB and XILINX ISE simulators. The maximum operating frequency achieved is 34.584 MHz.
Category: Data Structures and Algorithms

[78] viXra:1405.0056 [pdf] submitted on 2014-05-06 23:35:59

A New Optimization of Noise Transfer Function of Sigma-delta-modulator with Supposition Loop Filter Stability

Authors: Saman Kaedi, Ebrahim Farshidii
Comments: 15 Pages.

In this paper a discrete-time sigma-delta ADC with new assumptions in the optimization of the noise transfer function (NTF) is presented, which improve the SNR and accuracy of the ADC. The zeros and poles of the sigma-delta loop filter are optimized and placed by a genetic algorithm under the assumption of loop-filter stability, and the final quantization noise density of the modulator is significantly decreased. With the density of the quantization noise included in the optimization, the noise folded into the pass band due to down-sampling is minimized without the need for an additional circuit or filter, so the SNR increases further. The circuit is designed and implemented using MATLAB. The simulation results of the sigma-delta ADC demonstrate that this methodology gives a 7 dB (equivalent to more than 1 bit) improvement in SNR.
Category: Data Structures and Algorithms

[77] viXra:1405.0055 [pdf] submitted on 2014-05-06 23:37:04

Analysis of Relative Importance of Data Quality Dimensions for Distributed Systems

Authors: Gopalkrishna Joshi, Narasimha H Ayachit, Kamakshi Prasad
Comments: 13 Pages.

The increasing complexity of processes and their distributed nature in enterprises is resulting in the generation of data that is both huge and complex, and data quality is playing an important role, as decision making in enterprises depends on this data. Data quality is a multidimensional concept. However, there does not exist a commonly accepted set of dimensions and analysis of data quality in the literature. Further, not all the dimensions available in the literature may be of relevance in a particular context of an information system, and not all of these dimensions may enjoy the same importance in a context. Practitioners in the field choose dimensions of data quality based on intuitive understanding, industrial experience or literature review. There does not exist a rigorously defined mechanism for choosing appropriate dimensions for an information system under consideration in a particular context. In this paper, the authors propose a novel method of choosing appropriate dimensions of data quality for an information system, bringing in the perspective of the data consumer. This method is based on the Analytic Hierarchy Process (AHP), popularly used in multi-criterion decision making, and it is demonstrated in the context of distributed information systems.
Category: Data Structures and Algorithms
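AHP itself, as used in [77], is mechanical once the pairwise comparison matrix is given: weights come from the principal eigenvector and a consistency ratio checks the judgements. A minimal sketch follows; the three data-quality dimensions and the comparison values are hypothetical, and the random-index table is the standard Saaty one.

    import numpy as np

    def ahp_weights(M):
        """Principal-eigenvector weights and consistency ratio for a pairwise matrix M."""
        vals, vecs = np.linalg.eig(M)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()
        n = M.shape[0]
        ci = (vals[k].real - n) / (n - 1)                    # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]                  # Saaty random-index values
        return w, ci / ri

    # Hypothetical pairwise judgements over accuracy, completeness, timeliness.
    M = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])
    w, cr = ahp_weights(M)
    print(w, cr)    # weights sum to 1; a CR below about 0.1 indicates consistent judgements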

[76] viXra:1405.0054 [pdf] submitted on 2014-05-06 23:38:30

Concurrent Adaptive Cancellation of Quantization Noise and Harmonic Distortion in Sigma–Delta Converter

Authors: Hamid Mohseni Pour, Ebrahim Farshidi
Comments: 10 Pages.

The adaptive noise cancellation (ANC) technique can remove thermal and shaped wideband quantization noise from the output of a sigma-delta modulator and improve the SNR and SFDR ratios. However, besides the desired signal, the ANC filter passes harmonics of the input signal, caused by analog elements such as the operational amplifier of the integrator, without any suppression, and this issue limits the increase in SNR and SFDR of the analog-to-digital converter. This paper presents a technique that adds an adaptive harmonic canceller filter in front of the ANC filter to address this issue and considerably improve the performance of the ADC. The simulation results demonstrate the effectiveness of this combined technique in a first-order sigma-delta converter.
Category: Data Structures and Algorithms

[75] viXra:1405.0051 [pdf] submitted on 2014-05-07 01:24:58

An Investigation on Project Management Standard Practices in IT Organization

Authors: Pecimuthu Gopalasamy, Zulkefli Mansor
Comments: 12 Pages.

In many organizations, project management is no longer a separately identified function but is entrenched in the overall management of the business. The typical project management environment has become multi-project. Most project decisions require consideration of schedule, resource and cost concerns on other project work, necessitating the review and evaluation of multi-project data. Without good project management standard practices, it is very hard for an organization to reach its targets. The research problem of this study is to assess how IT organizations are using project management standard practices. The research method employed was to first identify the best practices of project management, by focusing on generally accepted standards and practices that are particularly effective in helping an organization achieve its objectives. It also requires the ability to manage projects in today’s complex, fast-changing organizations, where people, processes and operating systems all work together in a collaborative, integrated fashion.
Category: Data Structures and Algorithms

[74] viXra:1405.0050 [pdf] submitted on 2014-05-07 01:31:32

Software Maintenance of Deployed Wireless Sensor Nodes for Structural Health Monitoring Systems

Authors: S.A.Quadri, Othman Sidek
Comments: 28 Pages.

The decreasing cost of sensors is resulting in an increase in the use of wireless sensor networks for structural health monitoring. In most applications, nodes are deployed once and are supposed to operate unattended for a long period of time. Due to the deployment of a large number of sensor nodes, it is not uncommon for sensor nodes to become faulty and unreliable. Faults may arise from hardware or software failure. Software failure causes non-deterministic behavior of the node, thus resulting in the acquisition of inaccurate data. Consequently, there exists a need to modify the system software and correct the faults in a wireless sensor node (WSN) network. Once the nodes are deployed, it is impractical at best to reach each individual node. Moreover, it is highly cumbersome to detach the sensor node and attach data transfer cables for software updates. Over-the-air programming is a fundamental service that serves this purpose. This paper discusses maintenance issues related to software for sensor nodes deployed for monitoring structural health and provides a comparison of various protocols developed for reprogramming.
Category: Data Structures and Algorithms

[73] viXra:1405.0049 [pdf] submitted on 2014-05-07 01:32:37

A Study of Information Security in E-Commerce Applications

Authors: Mohammed Ali Hussain
Comments: 9 Pages.

Electronic Commerce (E-commerce) refers to the buying and selling of goods and services via electronic channels, primarily the Internet. The applications of E-commerce include online book stores, e-banking, online ticket reservation (railway, airway, movie, etc.), buying and selling goods, online funds transfer and so on. During E-commerce transactions, confidential information is stored in databases as well as communicated through network channels. So security is the main concern in E-commerce. E-commerce applications are vulnerable to various security threats. This results in the loss of consumer confidence. So we need security tools to counter such security threats. This paper presents an overview of security threats to E-commerce applications and the technologies to counter them.
Category: Data Structures and Algorithms

[72] viXra:1405.0048 [pdf] submitted on 2014-05-07 01:34:19

Minimizing Clock Power Wastage By Using Conditional Pulse Enhancement Scheme

Authors: A. Saisudheer, V. Murali Praveen, S. Jhansi Lakshmi
Comments: 6 Pages.

In this paper, a low-power pulse-triggered flip-flop (FF) is designed, and a simple two-transistor AND gate is used to reduce the circuit complexity. Second, a conditional pulse-enhancement technique is devised to speed up the discharge along the critical path only when needed. As a result, transistor sizes in the delay inverter and pulse-generation circuit can be reduced for power saving. Various post-layout simulation results based on UMC CMOS 50-nm technology reveal that the proposed design features the best power-delay-product performance among several FF designs under comparison. Its maximum power saving against rival designs is up to 18.2%, and the average leakage power consumption is also reduced by a factor of 1.52.
Category: Data Structures and Algorithms

[71] viXra:1405.0047 [pdf] submitted on 2014-05-07 01:35:18

SSBD: Single Side Buffered Deflection Router for On-Chip Networks

Authors: V.Sankaraiah, V.Murali Praveen
Comments: 6 Pages.

As technology scaling drives the number of processors upward, current on-chip routers consume substantial portions of chip area, performance, cost and power budgets. Recent work proposes to apply a well-known routing technique, bufferless deflection routing, which eliminates buffers and hence buffer power (static and dynamic) at the cost of some misrouting, or deflection. While bufferless NoC designs have shown promising area and power reductions and offer performance similar to conventional buffered designs for many workloads, such designs provide lower throughput, unnecessary network hops and wasted power at high network loads. To address this issue we propose an innovative NoC router design called the Single Side Buffered Deflection (SSBD) router. Compared to previous bufferless deflection routers, SSBD contributes (i) a router microarchitecture with a double-width ejection path and enhanced arbitration with in-router prioritization, and (ii) small side buffers to hold some traffic that would otherwise have been deflected.
Category: Data Structures and Algorithms

[70] viXra:1405.0046 [pdf] submitted on 2014-05-07 01:36:50

Information & Communication Technology for Improving Livelihoods of Tribal Community in India

Authors: Vinay Kumar, Abhishek Bansal
Comments: 9 Pages.

The development level of a society is a measure of how efficiently the society is harnessing the benefits of different developmental and welfare programs initiated by the government of the day. Tribals in India have been deprived of opportunities because of many factors. One of the important factors is the unavailability of suitable infrastructure for development plans to reach them. It is widely acknowledged that Information and Communication Technologies (ICTs) have the potential to play a vital role in social development. Several projects have attempted to adopt these technologies to improve the reach and enhance the coverage base by minimizing processing costs and reducing the traditional cycles of output deliverables. ICTs can be used to strengthen and develop the information systems of development plans exclusively for tribals and thereby improve effective monitoring of implementation. The paper attempts to highlight the effectiveness of ICT in improving the livelihoods of tribals in India.
Category: Data Structures and Algorithms

[69] viXra:1405.0045 [pdf] submitted on 2014-05-07 01:37:39

Implementation of Distributed Canny Edge Detector on FPGA

Authors: T.Rupalatha, G.Rajesh, K.Nandakumar
Comments: 7 Pages.

Edge detection is one of the basic operations carried out in image processing and object identification. In this paper, we present a distributed Canny edge detection algorithm that results in significantly reduced memory requirements, decreased latency and increased throughput with no loss in edge detection performance as compared to the original Canny algorithm. The new algorithm uses a low-complexity 8-bin non-uniform gradient magnitude histogram to compute block-based hysteresis thresholds that are used by the Canny edge detector. Furthermore, an FPGA-based hardware architecture of our proposed algorithm is presented in this paper and the architecture is synthesized on the Xilinx Spartan-3E FPGA. Simulation results are presented to illustrate the performance of the proposed distributed Canny edge detector. The FPGA simulation results show that we can process a 512×512 image in 0.28 ms at a clock rate of 100 MHz.
Category: Data Structures and Algorithms

[68] viXra:1405.0044 [pdf] submitted on 2014-05-07 01:38:30

Enhanced Face Recognition System Combining PCA, LDA, ICA with Wavelet Packets and Curvelets

Authors: N.Nallammal, V.Radha
Comments: 11 Pages.

Face recognition is one of the most frequently used biometrics in both commercial and law enforcement applications. What distinguishes facial recognition from other biometric techniques is that it can be used for surveillance purposes, as in searching for wanted criminals, suspected terrorists, and missing children. The steps in face recognition are preprocessing (image enhancement), feature extraction and finally recognition. This paper identifies techniques in each step of the recognition process to improve the overall performance of face recognition. The proposed face recognition model combines an enhanced 2DPCA algorithm, LDA and ICA with wavelet packets and curvelets, and experimental results prove that the combination of these techniques increases the efficiency of the recognition process and improves on existing systems.
Category: Data Structures and Algorithms
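The full pipeline of [68] (enhanced 2DPCA, LDA and ICA combined with wavelet packets and curvelets) is not reproduced here; the sketch below only illustrates the generic subspace-projection stage such systems build on, using placeholder data instead of a face database.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    # Stand-in data: 40 flattened 32x32 "face" images, 8 subjects with 5 images each.
    rng = np.random.default_rng(0)
    X = rng.random((40, 32 * 32))
    y = np.repeat(np.arange(8), 5)

    pca = PCA(n_components=20)                   # eigenface-style projection (PCA stage only)
    Z = pca.fit_transform(X)

    clf = KNeighborsClassifier(n_neighbors=1).fit(Z, y)   # nearest-neighbour recognition
    print(clf.predict(pca.transform(X[:3])))     # recognises the first three training images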

[67] viXra:1405.0043 [pdf] submitted on 2014-05-07 01:39:33

An Efficient Carry Select Adder with Reduced Area Application

Authors: Ch. Pallavi, V. Swathi
Comments: 7 Pages.

The design of area-efficient, high-speed and power-efficient data path logic systems forms one of the largest areas of research in VLSI system design. In digital adders, the speed of addition is limited by the time required to transmit a carry through the adder. The Carry Select Adder (CSLA) is one of the fastest adders used in many data-processing processors to perform fast arithmetic functions. From the structure of the CSLA, it is clear that there is scope for reducing its area and delay. This work uses a simple and efficient gate-level modification (in the regular structure) which drastically reduces the area and delay of the CSLA. Based on this modification, 8-, 16-, 32- and 64-bit square-root Carry Select Adder (SQRT CSLA) architectures have been developed and compared with the regular SQRT CSLA architecture. The proposed design has reduced area and delay to a great extent when compared with the regular SQRT CSLA. This work evaluates the performance of the proposed designs against the regular designs in terms of delay and area; the designs are synthesized and implemented in a Xilinx FPGA. The result analysis shows that the proposed SQRT CSLA structure is better than the regular SQRT CSLA.
Category: Data Structures and Algorithms

[66] viXra:1405.0042 [pdf] submitted on 2014-05-07 01:40:47

A Technique of Image Compression Based on Discrete Wavelet Image Decomposition and Self Organizing Map

Authors: Megha Sharma, Rashmi Kuamri
Comments: 12 Pages.

Image compression is a growing research area for real-world applications, driven by the explosive growth of image transmission and storage. This paper presents an algorithm for gray-scale image compression using a self-organizing map (SOM) and the discrete wavelet transform (DWT). The self-organizing map network is trained with input patterns in the form of vectors, which gives code vectors (the weight matrix) and index values as the output. The discrete wavelet transform is applied to the code vectors, and only the approximation coefficients (LL) and the index values obtained from the self-organizing map are stored. The results obtained show a better compression ratio as well as a better peak signal-to-noise ratio (PSNR) in comparison with existing techniques.
Category: Data Structures and Algorithms

[65] viXra:1405.0041 [pdf] submitted on 2014-05-07 01:42:04

Adaptive Duty-Cycle-Aware Using Multihopping in WSN

Authors: J. V. Shiral, J. S. Zade, K. R. Bhakare, N. Gandhewar
Comments: 15 Pages.

A wireless sensor network consists of a group of sensors, or nodes, that are linked by a wireless medium to perform distributed sensing tasks. The sensors are assumed to have a fixed communication and a fixed sensing range, which can vary significantly depending on the type of sensing performed. The duty cycle is the ratio of active time, i.e. the time during which a particular set of nodes is active, to the whole scheduling time. With duty cycling, each node alternates between active and sleeping states, leaving its radio powered off most of the time and turning it on only periodically for short periods of time. In this paper, the ADB protocol is used to manage and control duty cycles as well as to regulate and monitor ongoing traffic among the nodes by using adaptive scheduling. Thus congestion and delay can be controlled, and the efficiency and performance of the overall network can be improved.
Category: Data Structures and Algorithms

[64] viXra:1405.0040 [pdf] submitted on 2014-05-07 01:43:45

Memory Centered Recognition of FIR Numerical Filter by LUT Optimization

Authors: A. Saisudheer
Comments: 12 Pages.

The finite impulse response (FIR) digital filter is widely used in signal processing and image processing applications. Distributed arithmetic (DA)-based computation is popular for its potential for efficient memory-based implementation of FIR filters, where the filter outputs are computed as the inner product of input-sample vectors and the filter-coefficient vector. In this paper, however, we show that the look-up-table (LUT)-multiplier-based approach, where the memory elements store all the possible values of products of the filter coefficients, could be an area-efficient alternative to the DA-based design of FIR filters with the same implementation throughput.
Category: Data Structures and Algorithms
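A rough sketch of the LUT-multiplier idea described in [64], not the paper's hardware design: every possible product of a coefficient with a B-bit input value is precomputed, so each output sample needs only table lookups and additions.

    import numpy as np

    h = np.array([0.25, 0.5, 0.25])    # filter coefficients (assumed example values)
    B = 8                              # input sample word length in bits

    # One table row per coefficient, holding h[k] * v for every possible input value v.
    lut = np.array([[hk * v for v in range(2 ** B)] for hk in h])

    def fir_lut(x):
        """y[n] = sum_k h[k] * x[n-k], with each multiplication replaced by a lookup."""
        y = np.zeros(len(x))
        for n in range(len(x)):
            for k in range(len(h)):
                if n - k >= 0:
                    y[n] += lut[k, x[n - k]]
        return y

    x = [0, 255, 255, 0, 128]          # unsigned 8-bit input samples
    print(fir_lut(x))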

[63] viXra:1405.0039 [pdf] submitted on 2014-05-07 01:44:44

Transmission of Image using DWT-OFDM System with Channel State Feedback

Authors: Lakshmi Pujitha Dachuri
Comments: 16 Pages.

In many applications retransmissions of lost packets are not permitted. OFDM is a multi-carrier modulation scheme with excellent performance which allows overlapping in the frequency domain. With OFDM there is a simple way of dealing with multipath using relatively simple DSP algorithms. In this paper, an image frame is compressed using the DWT, and the compressed data is arranged in data vectors, each with an equal number of coefficients. These vectors are quantized and binary coded to get the bit streams, which are then packetized and intelligently mapped to the OFDM system. Based on one-bit channel state information at the transmitter, the descriptions in order of descending priority are assigned to the currently good channels, such that poorer sub-channels can only affect the less important data vectors. We consider only one-bit channel state information available at the transmitter, indicating only whether the sub-channels are good or bad. For a good sub-channel, the instantaneous received power should be greater than a threshold Pth. Otherwise, the sub-channel is in a fading state and considered bad for that batch of coefficients. In order to reduce the system power consumption, the descriptions mapped onto the bad sub-channels are dropped at the transmitter. The binary channel state information gives an opportunity to map the bit streams intelligently and to save a reasonable amount of power. Using MATLAB simulation we analyze the performance of our proposed scheme in terms of system energy saving without compromising the received quality in terms of peak signal-to-noise ratio.
Category: Data Structures and Algorithms

[62] viXra:1405.0038 [pdf] submitted on 2014-05-07 01:53:06

Object Tracking System Using Stratix FPGA

Authors: A. Saisudheer
Comments: 9 Pages.

Object tracking is an important task in computer vision applications. One of the crucial challenges is the real time speed requirement. In this paper we implement an object tracking system in reconfigurable hardware using an efficient parallel architecture. In our implementation, we adopt a background subtraction based algorithm. The designed object tracker exploits hardware parallelism to achieve high system speed. We also propose a dual object region search technique to further boost the performance of our system under complex tracking conditions. For our hardware implementation we use the Altera Stratix III EP3SL340H1152C2 FPGA device. We compare the proposed FPGA-based implementation with the software implementation running on a 2.2 GHz processor. The observed speedup can reach more than 100X for complex video inputs.
Category: Data Structures and Algorithms
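The FPGA architecture of [62] is not reproduced here; the following is a minimal software sketch of the background-subtraction step that tracking of this kind rests on, using a synthetic moving blob and a median background estimate (an assumption, since the abstract does not detail the background model).

    import numpy as np

    # Synthetic 64x64 frames: a bright 5x5 object moving right over a flat background.
    frames = np.full((10, 64, 64), 20, dtype=np.uint8)
    for t in range(10):
        frames[t, 30:35, 5 + 4 * t: 10 + 4 * t] = 200

    background = np.median(frames, axis=0)                       # static background estimate
    for t, frame in enumerate(frames):
        mask = np.abs(frame.astype(float) - background) > 30     # foreground pixels
        ys, xs = np.nonzero(mask)
        if xs.size:
            print(t, (ys.mean(), xs.mean()))                     # object centroid per frame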

[61] viXra:1405.0037 [pdf] submitted on 2014-05-07 01:55:43

Smart Phone as Software Token for Generating Digital Signature Code for Signing In Online Banking Transaction

Authors: A. Saisudheer
Comments: 4 Pages.

Nowadays, online banking security mechanisms focus on safe authentication mechanisms, but all these mechanisms are rendered useless if we are unable to ensure the integrity of the transactions made. Of late a new threat has emerged, known as the Man in the Browser (MITB) attack, which is capable of modifying a transaction in real time without the user’s notice, after the user has successfully logged in using safe authentication mechanisms. In this paper we analyze the Man in the Browser attack and propose a solution based upon digitally signing a transaction and using the mobile phone as a software token for Digital Signature code generation. Two-factor authentication solutions like smartcards, hardware tokens, One Time Passwords or PKI have long been considered sufficient protection against identity theft techniques. However, since the MITB attack piggybacks on authenticated sessions rather than trying to steal or impersonate an identity, most authentication technologies are incapable of preventing its success. In this paper we take a brief look into how the MITB attack takes place and how it is capable of modifying an online transaction. We propose a solution based on using mobile phones as a software token for Digital Signature code generation. A digital signature is known to ensure the authenticity and integrity of a transaction. Mobile phones have become a daily part of our life; thus we can use the mobile phone as a software token to generate the Digital Signature code.
Category: Data Structures and Algorithms

[60] viXra:1405.0036 [pdf] submitted on 2014-05-07 01:56:28

Facial Expression Recognition System by Using AFERS System

Authors: A. Saisudheer
Comments: 7 Pages.

Heightened concerns about the treatment of individuals during interviews and interrogations have stimulated efforts to develop "non-intrusive" technologies for rapidly assessing the credibility of statements by individuals in a variety of sensitive environments. Methods or processes that have the potential to precisely focus investigative resources will advance operational excellence and improve investigative capabilities. Facial expressions have the ability to communicate emotion and regulate interpersonal behavior. Over the past 30 years, scientists have developed human-observer based methods that can be used to classify and correlate facial expressions with human emotion. However, these methods have proven to be labor intensive, qualitative, and difficult to standardize. The Facial Action Coding System (FACS) developed by Paul Ekman and Wallace V. Friesen is the most widely used and validated method for measuring and describing facial behaviors. The Automated Facial Expression Recognition System (AFERS) automates the manual practice of FACS, leveraging the research and technology behind the CMU/PITT Automated Facial Image Analysis System (AFA) system developed by Dr. Jeffery Cohn and his colleagues at the Robotics Institute of Carnegie Mellon University. This portable, near real-time system will detect the seven universal expressions of emotion providing investigators with indicators of the presence of deception during the interview process. In addition, the system will include features such as full video support, snapshot generation, and case management utilities, enabling users to re-evaluate interviews in detail at a later date.
Category: Data Structures and Algorithms

[59] viXra:1405.0035 [pdf] submitted on 2014-05-07 01:58:08

An Effective GLCM and Binary Pattern Schemes Based Classification for Rotation Invariant Fabric Textures

Authors: R. Obula Konda Reddy, B. Eswara Reddy, E. Keshava Reddy
Comments: 16 Pages.

Texture is one of the basic features in visual search and computational vision, and a general property of any surface having ambiguity. This paper presents a novel texture classification system which has a high tolerance against illumination variation. A Gray Level Co-occurrence Matrix (GLCM) and binary pattern based automated similarity identification and defect detection model is presented. Different features are calculated from both the GLCM and the binary patterns (LBP, LLBP, and SLBP). Then a new rotation-invariant, scale-invariant steerable decomposition filter is applied to filter the four orientation sub-bands of the image. The experimental results are evaluated and a comparative analysis has been performed for the four different feature types. Finally, the texture is classified by different classifiers (PNN, KNN and SVM) and the classification performance of each classifier is compared. The experimental results show that the proposed method produces higher accuracy and a better classification rate than other methods.
Category: Data Structures and Algorithms
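A minimal sketch of the GLCM and binary-pattern feature extraction that a classifier like the one in [59] would consume, assuming scikit-image is available (graycomatrix/graycoprops are spelled greycomatrix/greycoprops in versions before 0.19); the paper's LLBP/SLBP variants and steerable filtering are not reproduced:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

    rng = np.random.default_rng(0)
    img = rng.integers(0, 8, (64, 64)).astype(np.uint8)   # toy 8-level texture patch

    # GLCM features at one distance and four angles; averaging over angles gives
    # a simple form of rotation tolerance.
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=8, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]

    # Rotation-invariant uniform LBP histogram as the binary-pattern feature block.
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=int(lbp.max()) + 1, density=True)

    feature_vector = np.concatenate([glcm_feats, hist])   # input to a PNN/KNN/SVM classifier
    print(feature_vector.shape)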

[58] viXra:1405.0021 [pdf] submitted on 2014-05-03 20:37:05

Cellular Automaton Graphics

Authors: Morio Kikuchi
Comments: 387 Pages.

We fill a plane up regularly using painting algorithm.
Category: Data Structures and Algorithms

[57] viXra:1404.0081 [pdf] submitted on 2014-04-10 23:52:54

On the "Cracking" Experiments in Gunn, Allison, Abbott, "A Directional Coupler Attack Against the Kish Key Distribution System"

Authors: Hsien-Pu Chen, Laszlo B. Kish, Claes-Goran Granqvist, Gabor Schmera
Comments: 4 Pages. first draft

Recently Gunn, Allison and Abbott (GAA) [http://arxiv.org/pdf/1402.2709v2.pdf] proposed a new scheme to utilize electromagnetic waves for eavesdropping on the Kirchhoff-law–Johnson-noise (KLJN) secure key distribution. In a former paper [http://vixra.org/pdf/1403.0964v4.pdf], we proved that GAA's wave-based attack is unphysical. Here we address their experimental results regarding this attack. Our analysis shows that GAA effectively claim that they can identify, within a few correlation times, which of two Gaussian distributions with zero mean is wider when their relative width difference is less than 10^-4. Normally, such a decision would require millions of correlation times to observe. We identify the experimental artifact causing this situation: an existing DC current and/or ground loop (yielding slow deterministic currents) in the system. It is important to note that, while GAA's cracking scheme, experiments and analysis are invalid, there is an important benefit of their attempt: our analysis implies that, in practical KLJN systems, DC currents, ground loops or any other mechanisms carrying a deterministic current/voltage component must be taken care of to avoid an information leak about the key.
Category: Data Structures and Algorithms

[56] viXra:1404.0069 [pdf] submitted on 2014-04-09 06:17:49

Building of Networks of Natural Hierarchies of Terms Based on Analysis of Texts Corpora

Authors: D.V. Lande
Comments: 5 Pages. Russian language

A technique for building networks of natural hierarchies of terms based on the analysis of chosen text corpora is offered. The technique is based on the methodology of horizontal visibility graphs. A language network formed on the basis of arXiv electronic preprints on information retrieval topics is constructed and investigated.
Category: Data Structures and Algorithms
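For reference, the horizontal visibility graph construction that [56] builds on links two positions of a numeric sequence whenever every value strictly between them is lower than both endpoints. A naive O(n^2) sketch (the series here is an arbitrary toy example, not corpus data):

    def horizontal_visibility_graph(series):
        """Edge list of the horizontal visibility graph of a numeric sequence."""
        edges = []
        for i in range(len(series)):
            for j in range(i + 1, len(series)):
                # i and j see each other if every intermediate value lies below both.
                if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                    edges.append((i, j))
        return edges

    print(horizontal_visibility_graph([3, 1, 2, 5, 2, 4]))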

[55] viXra:1404.0054 [pdf] submitted on 2014-04-07 14:21:35

Securing Vehicle Communication Systems by the KLJN Key Exchange Protocol

Authors: Y.Saez, X. Cao, L.B. Kish, G. Pesti
Comments: 13 Pages. Paper submitted for publication

We review the security requirements for a vehicle communication network. We also provide a critical assessment of the security communication architectures and perform an analysis of the keys to design an efficient and secure vehicular network. We propose a novel unconditionally secure vehicular communication architecture that utilizes the Kirchhoff-Law-Johnson-Noise (KLJN) key distribution scheme.
Category: Data Structures and Algorithms

[54] viXra:1403.0957 [pdf] submitted on 2014-03-28 08:51:39

A Novel Model for Implementing Security over Mobile Ad-hoc Networks using Intuitionistic Fuzzy Function

Authors: A. A. Salama
Comments: 7 Pages.

A mobile ad-hoc network is a special kind of wireless network: a collection of mobile nodes without the aid of established infrastructure. A mobile ad-hoc network is much more vulnerable to attacks than a wired network due to its limited physical security. Securing temporary networks like Mobile Ad-hoc Networks (MANETs) has been given a great amount of attention recently, though a perfectly secure scheme has not been accomplished yet. MANETs have some other features and characteristics that together make them a difficult environment to secure. The bandwidth of a MANET is another challenge, because it is undesirable to consume the bandwidth with security mechanisms rather than data traffic. This paper proposes a security scheme based on Public Key Infrastructure (PKI) for distributing session keys between nodes. The length of those keys is decided using intuitionistic fuzzy logic manipulation. The proposed security-model algorithm is an adaptive intuitionistic fuzzy logic based algorithm that can adapt itself according to the dynamic conditions of mobile hosts. Finally, the experimental results show that the use of intuitionistic fuzzy based security can enhance the security of MANETs.
Category: Data Structures and Algorithms

[53] viXra:1403.0956 [pdf] submitted on 2014-03-28 09:16:01

Neutrosophic Relations Database

Authors: A. A. Salama
Comments: 13 Pages.

The fundamental concepts of neutrosophic sets were introduced by Smarandache in [9, 10] and Salama et al. in [4, 6, 7, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. In this paper we introduce and study new types of neutrosophic concepts: cut levels, normal neutrosophic sets and convex neutrosophic sets. In addition, we begin with a definition of the neutrosophic relation and then define the various operations and study its main properties. Some types of neutrosophic relations and neutrosophic databases are given. Finally we introduce and study the neutrosophic database (NDB for short). Some neutrosophic queries against a neutrosophic database are given.
Category: Data Structures and Algorithms

[52] viXra:1403.0940 [pdf] submitted on 2014-03-26 11:19:44

Critical Considerations for Developing MIS for NGOs

Authors: Kailash Ch. Dash, Umakant Mishra
Comments: 13 Pages.

Although Information Systems and Information Technology (IS & IT) have become a major driving force for many present-day organizations, NGOs have not been able to utilize the benefits to a satisfactory level. Most organizations use standard office tools to manage huge amounts of field data and never feel the need for a central repository of data. While many people argue that an NGO should not spend too much money on information management, it is a fact that organizing the information requires more of a mindset and organized behavior than a huge financial investment.
Category: Data Structures and Algorithms

[51] viXra:1312.0007 [pdf] submitted on 2013-12-01 20:47:05

Times in Noise-Based Logic: Increased Dimensions of Logic Hyperspace

Authors: Laszlo B. Kish
Comments: 4 Pages. first draft

Time shifts beyond the correlation time of the logic and reference signals create new elements that are orthogonal to the original components. This fact can be utilized to increase the number of dimensions of the logic space while keeping the number of reference noises fixed. Using just a single noise and time shifts can realize exponentially large hyperspaces with large numbers of dimensions. Other, independent applications of time shifts include holographic noise-based logic systems and changing commutative operations into non-commuting ones. For the sake of simplicity, these ideas are illustrated by deterministic time shifts, even though random timing and random time shifts would yield the most robust systems.
Category: Data Structures and Algorithms

[50] viXra:1311.0177 [pdf] submitted on 2013-11-26 18:09:43

Solution of Extreme Transcendental Differential Equations

Authors: S J Nettleton
Comments: 6 Pages.

Extreme transcendental differential equations are found in many applications including geophysical climate change models. Solution of these systems in continuous time has only been feasible with the recent development of Runge-Kutta sampling transcendental differential equation solvers with Chebyshev function output such as Mathematica 9's NDSolve function. This paper presents the challenges and means of solving the widely used DICE 2007 integrated assessment model in continuous time. Application of the solution technique in a mobile policy tool is discussed.
Category: Data Structures and Algorithms

[49] viXra:1310.0226 [pdf] submitted on 2013-10-25 09:15:15

Learning Markov Networks with Context-Specific Independences

Authors: Alejandro Edera, Federico Schlüter, Facundo Bromberg
Comments: 8 Pages.

Learning the Markov network structure from data is a problem that has received considerable attention in machine learning and in many other application fields. This work focuses on a particular approach for this purpose called independence-based learning. Such an approach guarantees learning the correct structure efficiently whenever the data is sufficient for representing the underlying distribution. However, an important issue of this approach is that the learned structures are encoded in an undirected graph. The problem with graphs is that they cannot encode some types of independence relations, such as context-specific independences. These are a particular case of conditional independences that are true only for a certain assignment of their conditioning set, in contrast to conditional independences, which must hold for all assignments. In this work we present CSPC, an independence-based algorithm for learning structures that encode context-specific independences, encoding them in a log-linear model instead of a graph. The central idea of CSPC is combining the theoretical guarantees provided by the independence-based approach with the benefits of representing complex structures by using features in a log-linear model. We present experiments in a synthetic case, showing that CSPC is more accurate than the state-of-the-art IB algorithms when the underlying distribution contains CSIs.
Category: Data Structures and Algorithms

[48] viXra:1310.0217 [pdf] submitted on 2013-10-24 17:13:30

Neutrosophic Relations

Authors: A. A. Salama, Mohamed Eisa, S.A. Albolwi, Florentin Smarandache
Comments: 2 Pages.

In this paper we will introduce and study neutrosophic relations, which can be discussed as generalization of fuzzy relations and intuitionistic fuzzy relations. We will begin with a definition of neutrosophic relation and then define the various operations and will study the main properties. In addition, we will discuss reflexive, symmetric and transitive neutrosophic relations. Possible applications to database systems are touched upon.
Category: Data Structures and Algorithms

[47] viXra:1310.0030 [pdf] submitted on 2013-10-05 21:36:42

An Extension Collaborative Innovation Model in the Context of Big Data

Authors: Xingsen Li, Yingjie Tian, Haolan Zhang, Florentin Smarandache
Comments: Pages.

The process of generating innovative solutions mostly relies on skilled experts, who are usually unavailable, and involves uncertainty. Computer science and information technology are changing the innovation environment and accumulating big data from which a lot of knowledge is discovered. However, this is a rather nebulous area, and several challenging problems remain in effectively integrating multiple sources of information and rough knowledge to support the process of innovation. Based on the new cross-discipline Extenics, we present a collaborative innovation model in the context of big data. The model transforms collected data into a knowledge base in a uniform basic-element format, and then we explore the innovation paths and their solutions with a formularized model based on Extenics. Finally we score and select all possible solutions by a 2D dependent function. The model allows different departments to collaborate in putting forward innovation solutions with the support of big data. The model is proved useful by a practical innovation case in management.
Category: Data Structures and Algorithms

[46] viXra:1310.0028 [pdf] submitted on 2013-10-05 22:00:31

Impact of Social Media on Youth Activism and Nation Building in Pervasive Social Computing Using Neutrosophic Cognitive Maps (NCMS)

Authors: A.Victor Devadoss, M. Clement Joe Anand
Comments: 6 Pages.

Youth are the major asset of a nation; we need to channel their energy accordingly and dissipate it appropriately for the benefit of the nation and humanity as a whole. Social media have now become indispensable in our societies. Most of the major social media are dominated by the youth, who exploit them for one purpose or another. In this paper we analyze the role of social media and how youth could use it constructively to build a nation and achieve a promising future, not only for themselves but equally for the upcoming generations, using Neutrosophic Cognitive Maps. This paper has four sections. In section one we give an introduction to pervasive social media; in section two we recall the definition of Neutrosophic Cognitive Maps (NCMs); section three deals with the methods of finding the hidden pattern in NCMs and the analysis of features or characteristics of youth and youth activism. In the final section we give the conclusion based on our study.
Category: Data Structures and Algorithms

[45] viXra:1309.0106 [pdf] submitted on 2013-09-16 15:52:30

On the Security of the Kirchhoff-Law-Johnson-Noise (KLJN) Communicator

Authors: L.B. Kish, C.G. Granqvist
Comments: 4 Pages. submitted for publication

A simple and general proof is given for the information theoretic (unconditional) security of the Kirchhoff-law-Johnson-noise (KLJN) key exchange system under practical conditions. The unconditional security for ideal circumstances, which is based on the Second Law of Thermodynamics, is found to prevail even under slightly non-ideal conditions. This security level is guaranteed by the continuity of functions describing classical physical linear, as well as stable non-linear, systems. Even without privacy amplification, Eve's probability for successful bit-guessing is found to converge towards 0.5 - i.e., the perfect security level - when ideal conditions are approached.
Category: Data Structures and Algorithms

[44] viXra:1308.0113 [pdf] submitted on 2013-08-21 14:33:49

Current and Voltage Based Bit Errors and Their Combined Mitigation for the Kirchhoff-Law–Johnson-Noise Secure Key Exchange

Authors: Yessica Saez, Laszlo B. Kish, Robert Mingesz, Zoltan Gingl, Claes G. Granqvist
Comments: 9 Pages.

We classify and analyze bit errors in the current measurement mode of the Kirchhoff-law–Johnson-noise (KLJN) key distribution. The error probability decays exponentially with increasing bit exchange period and fixed bandwidth, which is similar to the error probability decay in the voltage measurement mode. We also analyze the combination of voltage and current modes for error removal. In this combination method, the error probability is still an exponential function that decays with the duration of the bit exchange period, but it has superior fidelity than in the former schemes.
Category: Data Structures and Algorithms

[43] viXra:1306.0213 [pdf] submitted on 2013-06-26 04:41:30

Information Hiding and Modula

Authors: José Francisco García Juliá
Comments: 2 Pages.

The information hiding principle can be applied completely using the Modula language.
Category: Data Structures and Algorithms

[42] viXra:1306.0193 [pdf] submitted on 2013-06-22 15:24:38

Log N Algorithm for Search from Unstructured List

Authors: Dhananjay P. Mehendale
Comments: 4 pages

The unstructured search problem asks for the search of some predefined number, called the target, in a given unstructured list of numbers. In this paper we propose a novel classical algorithm with complexity ~O(log N) for searching for the target in an unstructured list of numbers. We propose a new algorithm, which achieves an improvement of exponential order over existing algorithms. Suppose N is the largest number in the list; then we consider an N-dimensional vector space with the Euclidean basis. With each of the numbers in the given unstructured list we associate the unique basis vector among the vectors that together form the Euclidean basis. For example, suppose j is a number in the list; then we associate with this number j the unique basis vector in the above-mentioned N-dimensional vector space, namely |j> = transpose(0, 0, 0, … , 0, 0, 1, 0, 0, … , 0, 0, 0), where there is an entry 1 only at the j-th place and an entry 0 everywhere else. We then divide the given list of numbers into two roughly equal parts (i.e. we divide the given bag containing scrambled numbers into two roughly equal parts and put them in two separate bags, Bag 1 and Bag 2). We represent the list of numbers in Bag 1 (Bag 2) as a single state formed by the equally weighted superposition of the orthonormal Euclidean basis states corresponding to the numbers in Bag 1 (Bag 2), namely |Psi-1> (|Psi-2>). Let t be the target number, represented as |t>. We then find the value of the scalar product of the target state |t> with |Psi-1> (or |Psi-2>). This reveals whether t belongs to Bag 1 (or Bag 2), which essentially enables us to carry out a binary search and to achieve the above-mentioned ~O(log N) complexity.
Category: Data Structures and Algorithms
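A direct transcription of the procedure described in [42] into code, offered only as an illustration of the abstract (note that building each superposition in this classical sketch still touches every element of the corresponding bag):

    import numpy as np

    def contains(bag, target, N):
        """Scalar-product membership test: <t|psi> is nonzero iff target is in the bag."""
        psi = np.zeros(N + 1)
        psi[list(bag)] = 1.0          # equally weighted superposition of basis vectors |j>
        t = np.zeros(N + 1)
        t[target] = 1.0               # basis vector |t>
        return t @ psi > 0

    def search(numbers, target):
        """Binary search over halved bags, driven by the membership test."""
        N = max(numbers)
        bag = list(numbers)
        while len(bag) > 1:
            half = len(bag) // 2
            bag = bag[:half] if contains(bag[:half], target, N) else bag[half:]
        return bag[0] == target

    print(search([7, 3, 12, 9, 5, 1], 9))   # True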

[41] viXra:1306.0128 [pdf] submitted on 2013-06-17 01:49:37

Analysis of Point Clouds Using Conformal Geometric Algebra

Authors: Dietmar Hildenbrand, Eckhard Hitzer
Comments: 6 Pages. 6 figures, 1 table. In Braz, J. (ed.), GRAPP 2008, 3rd Int. Conf. on Computer Graphics Theory and Applications. Proc.: Funchal, Madeira, Portugal, January 22-25, 2008, Porto: INSTICC Press, pp. 99-106 (2008). DOI: 10.1.1.151.7539

This paper presents some basics for the analysis of point clouds using the geometrically intuitive mathematical framework of conformal geometric algebra. In this framework it is easy to compute with osculating circles for the description of local curvature. Also methods for the fitting of spheres as well as bounding spheres are presented. In a nutshell, this paper provides a starting point for shape analysis based on this new, geometrically intuitive and promising technology. Keywords: geometric algebra, geometric computing, point clouds, osculating circle, fitting of spheres, bounding spheres.
Category: Data Structures and Algorithms
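
Editor's illustration: the sphere fitting mentioned above can be shown, outside the conformal-geometric-algebra formalism, with the standard algebraic least-squares fit. Writing the sphere as |p|^2 = 2 c.p + (r^2 - |c|^2) makes the problem linear in the center c and the auxiliary unknown d = r^2 - |c|^2. The Python sketch below uses this plain linear-algebra formulation and synthetic data; it is not the authors' geometric-algebra method.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.
    Model: x^2 + y^2 + z^2 = 2*c.p + d  with  d = r^2 - |c|^2,
    which is linear in the unknowns (c_x, c_y, c_z, d)."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = np.sum(P**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + center @ center)
    return center, radius

# Noisy samples from a sphere of radius 2 centred at (1, -1, 3).
rng = np.random.default_rng(1)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, -1.0, 3.0]) + 2.0 * dirs + 0.01 * rng.normal(size=(500, 3))
print(fit_sphere(pts))   # approximately (array([ 1., -1.,  3.]), 2.0)
```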

[40] viXra:1306.0120 [pdf] submitted on 2013-06-17 03:19:30

The GeometricAlgebra Java Package – Novel Structure Implementation of 5D Geometric Algebra R_4,1 for Object Oriented Euclidean Geometry, Space-Time Physics and Object Oriented Computer Algebra

Authors: Eckhard Hitzer, Ginanjar Utama
Comments: 13 Pages. 3 figures, 5 tables. Mem. Fac. Eng. Univ. Fukui 53(1), pp. 47-59 (2005).

This paper first briefly reviews the algebraic background of the conformal (homogeneous) model of Euclidean space in Clifford geometric algebra R_4,1= Cl(4,1), concentrating on the subalgebra structure. The subalgebras include space-time algebra (STA), Dirac and Pauli algebras, as well as real and complex quaternion algebras, etc. The concept of the Horosphere is introduced along with the definition of subspaces that intuitively correspond to three dimensional Euclidean geometric objects. Algebraic expressions for the motions of these objects and their set theoretic operations are given. It is shown how 3D Euclidean information on positions, orientations and radii can be extracted. The second main part of the paper concentrates on the GeometricAlgebra Java package implementation of the Clifford geometric algebra R_4,1 = Cl(4,1) and the homogeneous model of 3D Euclidean space. Details are exemplified by looking at the structure and code of the basic MultiVector class and of the 3D Euclidean object model class Sphere. Finally code optimization issues and the ongoing open source project implementation are discussed.
Category: Data Structures and Algorithms
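
Editor's illustration: a small numerical sketch of the conformal (homogeneous) model referred to above. A Euclidean point x is embedded as X = x + (1/2)|x|^2 e_inf + e_0, and with the null-basis inner product (e_inf.e_0 = -1, e_inf^2 = e_0^2 = 0) the inner product of two embedded points encodes their squared Euclidean distance, X1.X2 = -(1/2)|x1 - x2|^2. The coordinate layout below is an illustrative Python stand-in, not the package's MultiVector class.

```python
import numpy as np

def embed(x):
    """Conformal embedding of a 3D point, stored as
    (x, y, z, coefficient of e_inf, coefficient of e_0)."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, [0.5 * (x @ x), 1.0]])

def conformal_inner(X, Y):
    """Inner product in R_{4,1} with the null basis {e_inf, e_0}:
    Euclidean part is the usual dot product, and e_inf . e_0 = -1."""
    return X[:3] @ Y[:3] - X[3] * Y[4] - X[4] * Y[3]

p, q = np.array([1.0, 2.0, 2.0]), np.array([4.0, 6.0, 2.0])
X, Y = embed(p), embed(q)
# X . Y = -0.5 * |p - q|^2 = -12.5 for these points
print(conformal_inner(X, Y), -0.5 * np.sum((p - q) ** 2))
```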

[39] viXra:1306.0068 [pdf] submitted on 2013-06-11 02:23:10

Understanding the LivMach Framework

Authors: Shreyak Chakraborty
Comments: 4 Pages.

We introduce the alpha version of a C++ computational framework to simulate life processes in the body of a living multicellular organism by virtually replicating the data flow of the actual living being in real time. The LivMach Framework is an open source project on Sourceforge.net. We use various data structures to effectively simulate all components of a living organism's body. Due to the absence of a Graphical User Interface (GUI), we use special indicator statements to display the flow of data between various parts of the virtual body. Using this code, one can simulate the complete physical, mental and psychological behaviour of simple and complex multicellular organisms on low-cost machines.
Category: Data Structures and Algorithms

[38] viXra:1306.0058 [pdf] submitted on 2013-06-09 11:36:09

Cracking the Bennett-Riedel Secure Scheme and a Critical Analysis of Their Claims About the Kirchhoff-Law-Johnson-Noise System

Authors: Laszlo B. Kish, Derek Abbott, Claes-Goran Granqvist
Comments: 32 Pages. First draft; to be disseminated at seminar at Uppsala University, Sweden

Recently, Bennett and Riedel (BR) (http://arxiv.org/abs/1303.7435) have claimed that, in the Kirchhoff-law-Johnson-noise (KLJN) classical statistical physical key exchange method, thermodynamics (statistical physics) is not essential and that the KLJN scheme provides no security. They attempt to prove the no-thermodynamics view by proposing a dissipation-free deterministic key exchange method with two batteries and two switches. After showing that the BR scheme is unphysical and that some elements of its assumptions violate basic protocols of secure communication, we crack their system by passive attacks in eight different ways, with 100% success probability, and show that the same cracking methods do not work against the KLJN scheme due to Johnson noise and the Second Law of Thermodynamics. We critically analyze the other claims of BR; among others, we prove that their equations (1-3) describing zero security are incorrect for the KLJN scheme. We give mathematical security proofs for each BR attack type and conclude that the information theoretic (unconditional) security of the KLJN method has not been successfully challenged.
Category: Data Structures and Algorithms

[37] viXra:1305.0068 [pdf] submitted on 2013-05-11 21:47:07

Physical Uncloneable Function Hardware Keys Utilizing Kirchhoff-Law-Johnson-Noise Secure Key Exchange and Noise-Based Logic

Authors: Laszlo B. Kish, Chiman Kwan
Comments: 8 Pages.

A weak physical uncloneable function (PUF) encryption key means that the manufacturer of the hardware can clone the key but anybody else is unable to do so. A strong PUF encryption key means that even the manufacturer of the hardware is unable to clone the key. In this paper we first introduce an "ultra"-strong PUF with intrinsic dynamical randomness, which is not only uncloneable but also gets renewed to an independent key (with fresh randomness) during each use via the unconditionally secure key exchange. The solution utilizes the Kirchhoff-law-Johnson-noise (KLJN) method for dynamical key renewal and a one-time-pad secure key for the challenge/response process. The secure key is stored in a flash memory on the chip to provide tamper resistance and non-volatile storage with zero power requirements in standby mode. Simplified PUF keys are also shown: a strong PUF utilizing the KLJN protocol during the first run and the noise-based logic (NBL) hyperspace vector string verification method for the challenge/response during the rest of its life or until it is re-initialized. Finally, the simplest PUF utilizes NBL without KLJN; thus it can be cloned by the manufacturer but not by anybody else.
Category: Data Structures and Algorithms

[36] viXra:1303.0106 [pdf] submitted on 2013-03-15 02:32:27

Lecture Notes On Recursive Algorithm

Authors: Cheng Tianren
Comments: 50 Pages.

This is the first volume of a primer on algorithms. As is well known, algorithms are now at the center of computer science. In these lecture notes I focus on one class of algorithms, the recursive algorithms. In these lectures we take an understandable viewpoint on the problems met in class that students find hard to accept, and we select multiple examples of recursive algorithms to explain together, which makes the teaching more convenient. However, I must say that if you do not have any basic knowledge of a computer language, this lecture may not be easy for you, just as with other algorithm books. But if you read this lecture carefully, it will help you to study other materials and you will find them easier to accept.
Category: Data Structures and Algorithms

[35] viXra:1303.0094 [pdf] submitted on 2013-03-12 20:53:43

Information Theoretically Secure, Enhanced Johnson Noise Based Key Distribution Over the Smart Grid with Switched Filters

Authors: Elias Gonzalez, Laszlo B. Kish, Robert Balog, Prasad Enjeti
Comments: 22 Pages.

We introduce a protocol with a reconfigurable filter system to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional grids (chain-like power lines). The speed of the protocol (the number of steps needed) versus grid size is analyzed. When fully developed, such a system has the potential to achieve unconditionally secure key distribution over a smart power grid of arbitrary dimensions.
Category: Data Structures and Algorithms

[34] viXra:1303.0067 [pdf] submitted on 2013-03-09 10:56:31

Correlation Coefficients of Neutrosophic Sets by Centroid Method

Authors: I. M. Hanafy, A. A. Salama, K. M. Mahfouz
Comments: 4 Pages.

In this paper, we propose another method to calculate the correlation coefficient of neutrosophic sets. The value obtained from this method tells us the strength of the relationship between the neutrosophic sets and whether they are positively or negatively related. Finally, we give some propositions and examples.
Category: Data Structures and Algorithms

[33] viXra:1303.0065 [pdf] submitted on 2013-03-09 11:18:30

Neutrosophic Filters

Authors: A. A. Salama, H. Alagamy
Comments: 6 Pages.

In this paper we introduce the notion of filters on neutrosophic sets, which is considered a generalization of the fuzzy filters studied in [6]; the important neutrosophic filters are given. Several relations between different neutrosophic filters and neutrosophic topologies are also studied here. Possible applications to computer science are touched upon.
Category: Data Structures and Algorithms

[32] viXra:1303.0045 [pdf] submitted on 2013-03-07 08:50:04

On the P-Untestability of Combinational Faults

Authors: Suresh k Devanathan, Michael L Bushnell
Comments: 4 Pages.

We describe the p-untestability of faults in combinational circuits. These faults are similar to redundant faults, but are defined probabilistically. A p-untestable fault is a fault that is not detected after N random-pattern simulations, or a fault that FAN either proves to be redundant or aborts on after K backtracks. We chose N to be about 1000000 and K to be about 1000. We provide a p-untestability detection algorithm that works in about 85% of the cases, with an average of about 14% false negatives. The algorithm is a simple hack to FAN, uses structural information and can be easily implemented. The algorithm does not prove redundancy completely but establishes a fault as probabilistically redundant, meaning a fault with a low probability of detection or no detection.
Category: Data Structures and Algorithms

[31] viXra:1303.0043 [pdf] submitted on 2013-03-07 07:08:19

PROR Compaction Scheme for Larger Circuits and Longer Vectors with Deterministic ATPG

Authors: Suresh k Devanathan, Michael L Bushnell
Comments: 5 Pages.

Reverse order restoration (ROR) techniques have found great use in sequential automatic test pattern generation (ATPG), especially in spectral and perturbation-based ATPG. This paper deals with improving ROR for that purpose. We introduce parallel-fault multipass 2-level polynomial reverse order restoration (PROR) algorithms with constant complexity of the form H(n)G(n) + c, where H(n) is the number of vectors to be released in this iteration and G(n) is the attenuation factor. In PROR, H(n) = nk and G(n) is 1.
Category: Data Structures and Algorithms

[30] viXra:1303.0037 [pdf] submitted on 2013-03-06 16:32:05

fQuantum: A Quantum Computing Fault Simulator

Authors: Suresh kumar Devanathan, Michael L Bushnell
Comments: 2 Pages.

We introduce fQuantum, a quantum computing fault simulator, and a new quantum computing fault model based on Hadamard, Pauli-X, Pauli-Y and Pauli-Z gates and on the traditional stuck-at-1 (SA1) and stuck-at-0 (SA0) faults. We had close to 100% fault coverage on most circuits. The problem of lower coverage comes from function gates, which we will deal with in future versions of this paper.
Category: Data Structures and Algorithms

[29] viXra:1302.0120 [pdf] submitted on 2013-02-18 09:23:54

An Electronic Library As An Environment Of Adaptive Aggregation Of Information

Authors: Dmitry Lande, Olga Barkova
Comments: 7 Pages. Ukrainian language

The generalized schema of operation of an electronic library network is proposed, based on the phenomenon of the confluence of the two main functions of a library – serving customers and gathering collections. Some parameters of the electronic library network have been examined. An estimate of the growth rate of the collections of an electronic library operating as part of a peer-to-peer network has been performed.
Category: Data Structures and Algorithms

[28] viXra:1301.0117 [pdf] submitted on 2013-01-20 07:38:55

New Algorithms: Linear, Nonlinear, and Integer Programming

Authors: Dhananjay P. Mehendale
Comments: 40 Pages

In this paper we propose a new algorithm for linear programming. This new algorithm is based on treating the objective function as a parameter. We form a matrix from the coefficients of the system of equations consisting of the objective equation and the equations obtained from the inequalities defining the constraints by introducing slack/surplus variables. We obtain the reduced row echelon form of this matrix, which contains only one variable, namely the objective function itself, as an unknown parameter. We analyze this matrix in reduced row echelon form and develop a clear-cut method to find the optimal solution for the problem at hand, if and when it exists. We see that the entire optimization process can be developed through the proper analysis of the said matrix in reduced row echelon form. From this analysis it will be clear that, in order to find the optimal solution, we may need to carry out certain processes such as rearranging the constraint equations in a particular way and/or performing appropriate elementary row transformations on this matrix in reduced row echelon form. These operations are mainly aimed at achieving nonnegativity of all the entries in the columns corresponding to nonbasic variables in this matrix, or in its submatrix obtained by collecting certain rows of this matrix (i.e., the submatrix with rows having a negative coefficient for the parameter d, which stands for the objective function, in a maximization problem, and the submatrix with rows having a positive coefficient for d in a minimization problem). Care is to be taken so that the new matrix arrived at by rearranging the constraint equations and/or by carrying out suitable elementary row transformations is equivalent to the original matrix, in the sense that the feasible solution sets for the problem variables obtained for the different possible values of d with the original matrix and with the transformed matrix are the same. We then proceed to show that this idea naturally extends to nonlinear and integer programming problems. For nonlinear and integer programming problems we use the technique of Grobner bases (since a Grobner basis is the equivalent of the reduced row echelon form for a system of nonlinear equations) and the methods of solving linear Diophantine equations (since the integer programming problem demands an optimal integer solution), respectively.
Category: Data Structures and Algorithms
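
Editor's illustration: a purely illustrative rendering of the construction described above, using a tiny made-up maximization problem. Slack variables are introduced, the objective equation is appended with the objective value d kept as a symbolic parameter, and the reduced row echelon form of the resulting matrix is printed; the subsequent analysis steps of the paper are not reproduced here.

```python
import sympy as sp

# Example: maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
x, y, s1, s2, d = sp.symbols('x y s1 s2 d')

equations = [
    sp.Eq(x + y + s1, 4),      # x +  y + s1 = 4   (slack variable s1)
    sp.Eq(x + 3*y + s2, 6),    # x + 3y + s2 = 6   (slack variable s2)
    sp.Eq(3*x + 2*y, d),       # objective equation; d is treated as a parameter
]

# Build the matrix of coefficients over the unknowns (x, y, s1, s2); the
# parameter d stays symbolic on the right-hand side.
A, b = sp.linear_eq_to_matrix(equations, [x, y, s1, s2])
M = A.row_join(b)
rref_M, pivots = M.rref()
sp.pprint(rref_M)   # the last column expresses the basic variables in terms of d
```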

[27] viXra:1212.0136 [pdf] submitted on 2012-12-22 16:30:46

Brain-Like Operation System Concept

Authors: Vaclav Kosar
Comments: 6 Pages. my name with special characters in latex form: V\' aclav Ko\v sa\v r

This article should be easy for anybody to understand and is meant to prove that the new kind of operating system I propose, based on a wiki-like or graph-like structure, is (1) more natural and thus easier to learn, (2) more efficient on existing tasks in terms of human time spent, and (3) able to handle new kinds of tasks. A computer task is a data transformation. It is improbable that the current paper-like handling of data is the best way. I would like to show that current computers can provide a much more natural and useful way of handling all-purpose data. The main idea is that one should store information in a structure as natural as possible, so that the user does not have to spend effort transforming the information (express more, search naturally, write once and then just reference, ...). I am not sure whether I can claim any authorship of the following ideas, since one can never be sure whether an idea existed before and what actually helped one to come up with it. The only purpose of this paper is thus the pure desire to make progress of thought, by starting the discussion and construction of a crowd-sourced operating system based entirely on the idea of graph databases. I cannot provide the reader with infinite detail and precision, so I leave some uncertainties to be cleared up by the reader himself, for pleasure. My main inspirations for this more natural operating system were: graph databases, Wikipedia, the brain, Lisp, mind-mapping, the QED manifesto, CSS 3 and WikiOS.
Category: Data Structures and Algorithms

[26] viXra:1212.0109 [pdf] submitted on 2012-12-17 07:58:51

Polynomial 3-SAT-Solver

Authors: Matthias Mueller (aka Louis Coder)
Comments: 10 Pages. Algorithm has been well (!) tested!

In this document I introduce and explain an algorithm that determines the solvability state (solvable or unsatisfiable) of any exact-3-SAT formula in polynomial time. It is certain that the algorithm has polynomial runtime, even in the worst case, as the runtime is artificially limited. The only question is whether the algorithm always outputs the correct result. I suppose it does, owing to a proof of correctness that is given in this document and to the evidence that an implementation of the algorithm has solved millions of test formulae without any error. Furthermore, this document provides a download link to a (Windows) demo solver program (including source code) that you can try out instantly.
Category: Data Structures and Algorithms

[25] viXra:1212.0077 [pdf] submitted on 2012-12-11 09:05:35

A New Algorithm for Linear Programming

Authors: Dhananjay P. Mehendale
Comments: 6 Pages. Presented and Published in the Proceedings of International Conference on Perspectives of Computer Confluence with Sciences 2012, ICPCCS 12.

In this paper we propose a new algorithm for linear programming. This new algorithm is based on treating the objective function as a parameter. We transform the matrix of coefficients representing this system of equations into reduced row echelon form containing only one variable, namely the objective function itself, as a parameter whose optimal value is to be determined. We analyze this matrix and develop a clear method to find the optimal value for the objective function treated as a parameter. We see that the entire optimization process evolves through the proper analysis of the said matrix in reduced row echelon form. It will be seen that the optimal value can be obtained (1) by solving a certain subsystem of this system of equations, with a proper justification for this step, or (2) by making appropriate and legal row transformations on this matrix in reduced row echelon form so that all the entries become nonnegative in the submatrix of this matrix obtained by collecting the rows containing the coefficient of the so-called unknown parameter d, whose optimal value is to be determined; this new matrix must be equivalent to the original matrix in the sense that the solution set of the matrix equation with the original matrix and that of the matrix equation with the transformed matrix are the same. We then proceed to show that this idea naturally extends to nonlinear and integer programming problems. For nonlinear and integer programming problems we use the technique of Grobner bases, since a Grobner basis is the equivalent of the reduced row echelon form for a system of nonlinear equations, and the methods of solving linear Diophantine equations, respectively.
Category: Data Structures and Algorithms

[24] viXra:1211.0138 [pdf] submitted on 2012-11-23 14:13:49

On the Intelligent Smart Grid Network

Authors: David Grace, Tony Marshall, Xiaodong Hu
Comments: 7 Pages.

Smart Grid Networks present a modern solution for network automation and digital communication in order to improve the efficiency, sustainability and reliability of electricity distribution. The development of Smart Grid networks is not a simple matter: electronic grids consist of a large number of systems, intelligent and regular electronic devices, substations, switching stations, dispatching centers and many other elements. This paper analyses the technological requirements of an intelligent Smart Grid network with a focus on networking aspects, standards and protocols.
Category: Data Structures and Algorithms

[23] viXra:1210.0126 [pdf] submitted on 2012-10-22 21:46:31

Solving Hub Location Problems for Networks

Authors: Richard Smith, Chenwen Zheng, Frederic Launois
Comments: 11 Pages.

This work developed a heuristic algorithm to find a solution for the CSAHLP problem. Two formulations were tested: CSAHLP-C and CSAHLP-N. For CSAHLP-C only three problem sizes were tested: 10, 20 and 25 nodes; for problems with more nodes the CPU time was very large. For CSAHLP-N six problem sizes were tested: 10, 20, 25, 40, 50 and 100 nodes. The CPU times found are interesting and the gaps are small in most cases.
Category: Data Structures and Algorithms

[22] viXra:1210.0122 [pdf] submitted on 2012-10-22 13:02:50

Extension Communication for Solving the Ontological Contradiction Between Communication and Information

Authors: Florentin Smarandache, Stefan Vladutescu
Comments: 12 Pages.

The study falls within the interdisciplinary area between information theory and Extenics, in its capacity as the science of solving contradictory problems. Within this space, the central problem of the ontology of information is addressed: the contradictory relationship between communication and information. The core of the research is the fact that the scientific investigation of the communication-information relationship has reached a dead end. The bivalent communication-information, information-communication relationship has become contradictory, and the two concepts block each other. Given that Extenics is a science of solving contradictory problems, "extensical procedures" will be used to resolve the contradiction.
Category: Data Structures and Algorithms

[21] viXra:1208.0226 [pdf] submitted on 2012-08-28 09:40:25

Complex Noise-Bits and Large-Scale Instantaneous Parallel Operations with Low Complexity

Authors: Laszlo B. Kish, He Wen, Andreas Klappenecker
Comments: 10 Pages. physical informatics is the exact topic

We introduce the complex noise-bit as an information carrier, which requires noise signals in two parallel wires instead of the single-wire representations of noise-based logic discussed so far. The immediate advantage of this new scheme is that, when we use random telegraph waves as the noise carrier, the superposition of the first 2^N integer numbers (obtained by the Achilles heel operation) yields non-zero values. We introduce basic instantaneous operations, with O(1) time and hardware complexity, including bit-value measurements in product states and single-bit and two-bit noise gates (universality exists) that can instantaneously operate over large superpositions with full parallelism. We envision the possibility of implementing instantaneously running quantum algorithms on classical computers, using a number of classical bits similar to the number of quantum bits emulated, without the necessity of error correction. Mathematical analysis and proofs are given.
Category: Data Structures and Algorithms
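
Editor's illustration: the carriers mentioned above are random telegraph waves (RTWs), signals that take only the values +1 and -1 and are redrawn independently with probability 1/2 in each clock step, so that the product of two independent RTWs is again an RTW. The sketch below only generates such carriers and checks their basic statistics; it does not reproduce the complex noise-bit algebra of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def rtw(n_steps):
    """Random telegraph wave: an independent +/-1 value in each clock step."""
    return rng.choice([-1.0, 1.0], size=n_steps)

n = 100_000
a, b = rtw(n), rtw(n)

print(np.mean(a))          # ~0: a single RTW time-averages to zero
print(np.mean(a * b))      # ~0: independent RTWs are (nearly) orthogonal
print(np.unique(a * b))    # [-1.  1.]: the product is again a +/-1-valued RTW
```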

[20] viXra:1208.0204 [pdf] submitted on 2012-08-20 21:28:38

Fuzzy Grids-Based Intrusion Detection in Neural Networks

Authors: Izani Islam, Tahir Ahmad, Ali H. Murid
Comments: 18 Pages.

The proposed system is developed in two main phases plus a supplementary optimization stage. In the first phase, the most important features are selected using fuzzy association rules mining (FARM) to reduce the dimension of the input features to the misuse detector. In the second phase, a fuzzy adaptive resonance theory-based neural network (ARTMAP) is used as a misuse detector. The accuracy of the proposed approach depends strongly on the precision of the parameters of the FARM module and of the fuzzy ARTMAP neural classifier, so a genetic algorithm (GA) is incorporated into the proposed method to optimize the parameters of these modules. Classification rate (CR) results show the important role of the GA in improving the performance of the proposed intrusion detection system (IDS). The performance of the proposed system is investigated in terms of detection rate (DR), false alarm rate (FAR) and cost per example (CPE).
Category: Data Structures and Algorithms

[19] viXra:1208.0146 [pdf] submitted on 2012-08-18 13:05:14

A Novel Merge sort

Authors: D.Abhyankar, M.Ingle
Comments: 6 Pages.

Sorting is one of the most frequently needed computing tasks. Mergesort is one of the most elegant algorithms for solving the sorting problem. We present a novel sorting algorithm of the Mergesort family that is more efficient than other Mergesort algorithms. Mathematical analysis of the proposed algorithm shows that it reduces the data move operations considerably. Profiling was done to verify the impact of the proposed algorithm on time spent. Profiling results show that the proposed algorithm gives a considerable improvement over conventional Mergesort in the case of large records. Also, in the case of small records, the proposed algorithm is faster than the classical Mergesort. Moreover, the proposed algorithm is found to be more adaptive than classical Mergesort.
Category: Data Structures and Algorithms
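
For reference, a plain top-down Mergesort, the baseline that the abstract compares against, is sketched below in Python; the paper's move-reducing variant is not described in enough detail in the abstract to reproduce here.

```python
def merge_sort(a):
    """Classical top-down Mergesort; returns a new sorted list."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step: take the smaller head
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])                   # at most one of these is non-empty
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]
```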

[18] viXra:1207.0033 [pdf] submitted on 2012-07-09 08:07:02

Qualitative Analysis of Academic Group and Discussion Forum on Facebook

Authors: Hannah Arendt, Ivan Matic, Lin Zhu
Comments: 10 Pages.

In the present study, data was triangulated and two methods of data analysis were used. Qualitative analysis was undertaken of free-text data from students’ reflective essays to extract socially-related themes. Heuristic evaluation was conducted by expert evaluators, who investigated forum contributions and discourse in line with contemporary learning theory and considered the social culture of participation. Findings of the qualitative analysis of students’ perceptions and results of the heuristic evaluation of forum participation confirmed each other, indicating a warm social climate and a conducive, well-facilitated environment that supported individual styles of participation. It fostered interpersonal relationships between distance learners, as well as study-related benefits enhanced by peer teaching and insights acquired in a culture of social negotiation. The environment was effectively moderated, while supporting student-initiative. The mixed-methods approach of evaluating essays and discussions showed a virtual community where most participants experienced a sound balance of social- and study-related benefits, but with a stronger focus on academic matters.
Category: Data Structures and Algorithms

[17] viXra:1202.0036 [pdf] submitted on 2012-02-13 09:49:02

Multiplication of Any Number Using Left Shift

Authors: Baldha Prashantkumar Mansukhbhai
Comments: 4 Pages.

A new algorithm for multiplication is presented. In some cases, the proposed algorithm is the best among multiplication algorithms.
Category: Data Structures and Algorithms
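
Editor's illustration: the abstract does not spell out the proposed algorithm, so for context the classical left-shift-and-add multiplication that such schemes build on is sketched below; it is not the author's method.

```python
def shift_add_multiply(a, b):
    """Multiply two non-negative integers using only shifts and additions:
    scan the bits of b and, whenever a bit is set, add the copy of a
    shifted left by that bit position."""
    result, shift = 0, 0
    while b:
        if b & 1:                 # current bit of b is 1
            result += a << shift  # add a shifted left by the bit position
        b >>= 1
        shift += 1
    return result

print(shift_add_multiply(1234, 5678), 1234 * 5678)   # both print 7006652
```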

[16] viXra:1112.0029 [pdf] submitted on 2011-12-07 18:58:12

Design of High Speed 64 Bits Multiplier by Square Function

Authors: Wu Sheng-Ping
Comments: 2 Pages.

This article proposes a new Booth multiplier design in which the Booth expansion is rearranged into square terms: \[ ab=((a+b)^2-a^2-b^2)/2 \] If the code length of $a,b$ is $n$, the multiplier on the right has size $2^{2n}$, while the multiplier on the left has size $2^n$.
Category: Data Structures and Algorithms
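
Editor's illustration: the identity above lets a multiplier be built from squarers (or a table of squares) plus adders and a shift. A minimal software check of the identity follows; the table size is chosen only for the example and the sketch is not the proposed hardware design.

```python
# Multiplication via the identity ab = ((a + b)^2 - a^2 - b^2) / 2,
# using a precomputed table of squares as a stand-in for a hardware squarer.
N_BITS = 8
SQUARES = [k * k for k in range(2 ** (N_BITS + 1))]   # a + b needs n + 1 bits

def square_multiply(a, b):
    return (SQUARES[a + b] - SQUARES[a] - SQUARES[b]) >> 1

print(square_multiply(200, 123), 200 * 123)   # both print 24600
```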

[15] viXra:1112.0028 [pdf] submitted on 2011-12-07 19:00:52

Evolutionary Computation Hybrids With Monte Carlo Method For Differential Equation

Authors: Sheng-Ping Wu
Comments: 2 Pages.

This article uses a hybrid of the evolutionary method and the Monte Carlo method to solve differential equations; the example treated in this article is the Schrodinger equation for an atom.
Category: Data Structures and Algorithms

[14] viXra:1109.0036 [pdf] submitted on 16 Sep 2011

An OpenCL Fast Fourier Transformation Implementation Strategy

Authors: Sven De Smet
Comments: 9 pages

This paper describes an implementation strategy in preparation for an implementation of an OpenCL FFT. The two most essential factors (memory bandwidth and locality) that are crucial to obtain high performance on a GPU for an FFT implementation are highlighted. Theoretical upper bounds for performance in terms of the locality factor are derived. An implementation strategy is proposed that takes these factors into consideration so that the resulting implementation has the potential to obtain high performance.
Category: Data Structures and Algorithms

[13] viXra:1108.0028 [pdf] submitted on 22 Aug 2011

One More Step Towards Generalized Graph-Based Weakly Relational Domains

Authors: Sven de Smet
Comments: 21 pages. This paper is a slightly modified version of a draft paper that was submitted to ParCo 2011 and is very preliminary. Since I do not have the resources to complete this paper by increasing its clarity, adding examples, adding an experimental evaluation and adding a section on related work, I'm making it available so that it may be useful to others.

This paper proposes to extend graph-based weakly relational domains to a generalized relational context. Using a new definition of coherence, we show that the definition of a normal form for this domain is simplified. A transitive closure algorithm for combined relations is constructed and a proof of its correctness is given. Using the observed similarity between transitive closure of a combined relation and the normal form closure of a graph-based weakly relational domain, we extract a mathematical property that a relational abstract domain must satisfy in order to allow us to use an algorithm with the same form as the transitive closure algorithm to compute the normal form of a graph-based weakly relational domain.
Category: Data Structures and Algorithms
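
Editor's illustration: the role of transitive closure in the construction above can be seen on the textbook Warshall closure of a plain binary relation, sketched below; the paper's algorithm for combined relations generalizes this pattern and is not reproduced here.

```python
def transitive_closure(n, edges):
    """Warshall-style transitive closure of a binary relation on {0, ..., n-1}.
    `edges` is an iterable of (i, j) pairs; returns the closed relation as a
    boolean matrix."""
    reach = [[False] * n for _ in range(n)]
    for i, j in edges:
        reach[i][j] = True
    for k in range(n):                 # allow k as an intermediate element
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

closure = transitive_closure(4, [(0, 1), (1, 2), (2, 3)])
print(closure[0][3])   # True: 0 reaches 3 through 1 and 2
```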

[12] viXra:1106.0033 [pdf] submitted on 15 Jun 2011

Towards a Group Theoretical Model for Algorithm Optimization

Authors: Sven de Smet
Comments: 4 Pages.

This paper proposes to use a group theoretical model for the optimization of algorithms. We first investigate some of the fundamental properties that are required in order to allow the optimization of parallelism and communication. Next, we explore how a group theoretical model of computations can satisfy these requirements. As an application example, we demonstrate how this group theoretical model can uncover new optimization possibilities in the polyhedral model.
Category: Data Structures and Algorithms

[11] viXra:1106.0023 [pdf] submitted on 12 Jun 2011

A Lattice Model for the Optimization of Communication in Parallel Algorithms

Authors: Sven de Smet
Comments: 12 pages. This paper is a slightly modified version of a draft paper that was submitted to ParCo 2011 and is very preliminary. Since I do not have the resources to complete this paper by increasing its clarity, extending the experimental evaluation and adding a section on related work, I'm making it available so that it may be useful to others.

This paper describes a unified model for the optimization of communication in parallel algorithms and architectures. Based on a property that provides a unified view of locality in space and time, an algorithm is constructed that generates a parallel architecture that is optimized for communication for a given computation. The optimization algorithm is constructed using the lattice algebraic properties of congruence relations and is therefore applicable in a general context. An application to a bio-informatics algorithm demonstrates the value of the model and optimization algorithm.
Category: Data Structures and Algorithms

[10] viXra:1106.0022 [pdf] submitted on 12 Jun 2011

Parallelisation with a Grid Abstraction

Authors: Sven de Smet
Comments: 16 pages, This paper is a slightly modified version of a draft paper that was submitted to ParCo 2011 (with added proofs) and is very preliminary. Since I do not have the resources to complete this paper by increasing its clarity, adding examples, adding an experimental evaluation and adding a section on related work, I'm making it available so that it may be useful to others.

This paper describes a new technique for automatic parallelisation in the Z-polyhedral model. The presented technique is applicable to arbitrarily nested loopnests with iteration spaces that can be represented as unions of Z-polyhedra and affine modular data-access functions. The technique partitions both iteration and data spaces of the computation. The maximal amount of parallelism that can be represented using grid partitions is extracted.
Category: Data Structures and Algorithms

[9] viXra:1101.0082 [pdf] submitted on 24 Jan 2011

A More Rational Way to Program Storage and Communication of Objects

Authors: Ir J.A.J. van Leunen
Comments: 3 pages

A C# class library is described that offers an efficient and secure way of object oriented data transfer and data storage. The classes convert a relational database into an effective object oriented database and a file system into an object oriented data storage and transfer system.
Category: Data Structures and Algorithms

[8] viXra:1101.0062 [pdf] submitted on 19 Jan 2011

Managing the Software Generation Process

Authors: Ir J.A.J. van Leunen
Comments: 20 pages

The current software generation process is rotten. This paper analyses why that is the case and what can be done about it.
Category: Data Structures and Algorithms

[7] viXra:1101.0061 [pdf] submitted on 19 Jan 2011

Story of a War Against Software Complexity

Authors: Ir J.A.J. van Leunen
Comments: 6 pages

This is the account of the course of a project that had the aim to improve the efficiency of embedded software generation with several orders of magnitude. All factors that determined the success of the project are treated honestly and in detail.
Category: Data Structures and Algorithms

[6] viXra:1008.0032 [pdf] submitted on 11 Aug 2010

A Unit Based Crashing Pert Network for Optimization of Software Project Cost

Authors: Priti Singh, Florentin Smarandache, Dipti Chauhan, Amit Bhaghel
Comments: 10 pages

Crashing is a process of expediting a project schedule by compressing the total project duration. It is helpful when managers want to avoid an incoming bad weather season. However, the downside is that more resources are needed to speed up a part of a project, even if resources may be withdrawn from one facet of the project and used to speed up the section that is lagging behind. Moreover, that may also depend on what slack is available in a non-critical activity, so that resources can be reassigned to a critical project activity. Hence, utmost care should be taken to make sure that appropriate activities are being crashed and that diverted resources are not causing needless risk or harming project scope integrity. In this paper we present a technique called "Unit Crashing" to reduce the total cost of the project. Unit Crashing means crashing the project duration by one unit (day) instead of crashing it completely. This technique uses an iterative approach to perform unit crashing until all activities along the critical path are crashed by the desired amount. The output of this method will reduce the cost of the project, and it is useful where cost is a major consideration. Crashing PERT networks can save a significant amount of money in crashing and overrun costs for a company. Even if there are no direct costs in the form of penalties for late completion of projects, there are likely to be intangible costs because of reputation damage.
Category: Data Structures and Algorithms

[5] viXra:1004.0015 [pdf] submitted on 8 Mar 2010

Neutrosophic Relational Data Model

Authors: Haibin Wang, Rajshekhar Sunderraman, Florentin Smarandache, André Rogatko
Comments: 25 pages

In this paper, we present a generalization of the relational data model based on interval neutrosophic sets [1]. Our data model is capable of manipulating incomplete as well as inconsistent information. Fuzzy relations or intuitionistic fuzzy relations can only handle incomplete information. Associated with each relation are two membership functions: one is the truth-membership function T, which keeps track of the extent to which we believe the tuple is in the relation; the other is the falsity-membership function F, which keeps track of the extent to which we believe that it is not in the relation. A neutrosophic relation is inconsistent if there exists a tuple α such that T(α) + F(α) > 1. In order to handle the inconsistent situation, we propose an operator called "split" to transform inconsistent neutrosophic relations into pseudo-consistent neutrosophic relations, perform the set-theoretic and relation-theoretic operations on them, and finally use another operator called "combine" to transform the result back into a neutrosophic relation. For this data model, we define algebraic operators that are generalizations of the usual operators, such as intersection, union, selection and join, on fuzzy relations. Our data model can underlie any database and knowledge-base management system that deals with incomplete and inconsistent information.
Category: Data Structures and Algorithms

[4] viXra:1004.0007 [pdf] submitted on 8 Mar 2010

Algebraic Generalization of Venn Diagram

Authors: Florentin Smarandache
Comments: 3 pages

It is easy to deal with a Venn diagram for 1 ≤ n ≤ 3 sets. When n gets larger, the picture becomes more complicated, which is why we thought of the following codification. We propose an easy and systematic algebraic way of dealing with the representation of intersections and unions of many sets.
Category: Data Structures and Algorithms
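
Editor's illustration: one natural codification of the kind hinted at above labels each of the 2^n - 1 non-empty regions of an n-set Venn diagram by the string of indices of the sets containing it. The Python sketch below uses that labelling (an illustrative convention that may differ in notation from the paper) to enumerate the regions and to express a union as a set of region codes; it assumes single-digit set indices.

```python
from itertools import combinations

def region_codes(n):
    """All non-empty Venn regions of sets 1..n, coded by the indices of the
    sets that contain the region (e.g. '13' = inside sets 1 and 3 only)."""
    codes = []
    for size in range(1, n + 1):
        for combo in combinations(range(1, n + 1), size):
            codes.append("".join(map(str, combo)))
    return codes

def union_codes(sets, n):
    """Codes of the regions covered by the union of the given sets."""
    return [c for c in region_codes(n) if any(str(s) in c for s in sets)]

print(region_codes(3))         # ['1', '2', '3', '12', '13', '23', '123']
print(union_codes([1, 2], 3))  # every region whose code mentions 1 or 2
```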

[3] viXra:1003.0135 [pdf] submitted on 6 Mar 2010

Optimal Plant Layout Design for Process-focused Systems

Authors: M. Khoshnevisan, Sukanto Bhattacharya, Florentin Smarandache
Comments: 13 pages

In this paper we have proposed a semi-heuristic optimization algorithm for designing optimal plant layouts in process-focused manufacturing/service facilities. Our proposed algorithm marries the well-known CRAFT (Computerized Relative Allocation of Facilities Technique) with the Hungarian assignment algorithm. Being a semi-heuristic search, our algorithm is likely to be more efficient in terms of computer CPU engagement time as it tends to converge on the global optimum faster than the traditional CRAFT algorithm - a pure heuristic. We also present a numerical illustration of our algorithm.
Category: Data Structures and Algorithms
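
Editor's illustration: the Hungarian-assignment half of the hybrid described above is available off the shelf. The sketch below shows a single assignment step with scipy on a made-up cost matrix; the CRAFT-style improvement loop of the paper is not included.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative cost matrix: cost[i][j] = material-handling cost of placing
# department i at location j.
cost = np.array([
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
])

rows, cols = linear_sum_assignment(cost)   # Hungarian (optimal assignment)
for dept, loc in zip(rows, cols):
    print(f"department {dept} -> location {loc}")
print("total cost:", cost[rows, cols].sum())   # 5 for this example
```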

[2] viXra:1003.0134 [pdf] submitted on 6 Mar 2010

Definitions Derived from Neutrosophics

Authors: Florentin Smarandache
Comments: 15 pages

Thirty-three new definitions are presented, derived from neutrosophic set, neutrosophic probability, neutrosophic statistics, and neutrosophic logic. Each one is independent, short, with references and cross references like in a dictionary style.
Category: Data Structures and Algorithms

[1] viXra:0908.0052 [pdf] submitted on 10 Aug 2009

Odd Algebraic Bases, and Their Utility in Computer

Authors: Hamid V. Ansari
Comments: 3 pages

It is shown that we can represent all numbers in odd bases such that only about half of the digits required by the current method are needed, provided that we introduce a negative mark for each digit. Most probably this method will have various applications in computer technology.
Category: Data Structures and Algorithms
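
Editor's illustration: one concrete scheme matching this description is the balanced (signed-digit) representation in an odd base b, where digits run from -(b-1)/2 to (b-1)/2, so only about half as many digit symbols are needed together with a negation mark. The conversion sketched below is an illustration consistent with the abstract, not necessarily the author's exact construction.

```python
def to_balanced_odd_base(n, base):
    """Signed-digit representation of the integer n in an odd base.
    Digits lie in {-(base-1)//2, ..., (base-1)//2}; most significant first."""
    assert base % 2 == 1 and base >= 3
    half = (base - 1) // 2
    digits = []
    while n != 0:
        r = n % base
        if r > half:            # fold large residues into negative digits
            r -= base
        digits.append(r)
        n = (n - r) // base
    return list(reversed(digits)) or [0]

def from_digits(digits, base):
    value = 0
    for d in digits:
        value = value * base + d
    return value

d = to_balanced_odd_base(100, 5)
print(d, from_digits(d, 5))   # [1, -1, 0, 0] 100
```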

Replacements of recent Submissions

[43] viXra:1410.0122 [pdf] replaced on 2014-10-25 05:44:46

Analysis of an Attenuator Artifact in an Experimental Attack by Gunn–Allison–Abbott Against the Kirchhoff-Law–Johnson-Noise (KLJN) Secure Key Exchange System

Authors: Laszlo B. Kish, Zoltan Gingl, Robert Mingesz, Gergely Vadai, Janusz Smulko, Claes-Goran Granqvist
Comments: 9 Pages. Equation double-number fixed. In editorial process at a journal.

A recent paper by Gunn–Allison–Abbott (GAA) [L.J. Gunn et al., Scientific Reports 4 (2014) 6461] argued that the Kirchhoff-law–Johnson-noise (KLJN) secure key exchange system could experience a severe information leak. Here we refute their results and demonstrate that GAA’s arguments ensue from a serious design flaw in their system. Specifically, an attenuator broke the single Kirchhoff-loop into two coupled loops, which is an incorrect operation since the single loop is essential for the security in the KLJN system, and hence GAA’s asserted information leak is trivial. Another consequence is that a fully defended KLJN system would not be able to function due to its built-in current-comparison defense against active (invasive) attacks. In this paper we crack GAA’s scheme via an elementary current comparison attack which yields negligible error probability for Eve even without averaging over the correlation time of the noise.
Category: Data Structures and Algorithms

[42] viXra:1410.0122 [pdf] replaced on 2014-10-23 09:34:22

Analysis of an Attenuator Artifact in an Experimental Attack by Gunn–Allison–Abbott Against the Kirchhoff-Law–Johnson-Noise (KLJN) Secure Key Exchange System

Authors: Laszlo B. Kish, Zoltan Gingl, Robert Mingesz, Gergely Vadai, Janusz Smulko, Claes-Goran Granqvist
Comments: 9 Pages. Polished and many typos fixed. Submitted for publication

A recent paper by Gunn–Allison–Abbott (GAA) [L.J. Gunn et al., Scientific Reports 4 (2014) 6461] argued that the Kirchhoff-law–Johnson-noise (KLJN) secure key exchange system could experience a severe information leak. Here we refute their results and demonstrate that GAA’s arguments ensue from a serious design flaw in their system. Specifically, an attenuator broke the single Kirchhoff-loop into two coupled loops, which is an incorrect operation since the single loop is essential for the security in the KLJN system, and hence GAA’s asserted information leak is trivial. Another consequence is that a fully defended KLJN system would not be able to function due to its built-in current-comparison defense against active (invasive) attacks. In this paper we crack GAA’s scheme via an elementary current comparison attack which yields negligible error probability for Eve even without averaging over the correlation time of the noise.
Category: Data Structures and Algorithms

[41] viXra:1409.0235 [pdf] replaced on 2014-10-16 21:23:43

Cellular Automaton Graphics(4)

Authors: Morio Kikuchi
Comments: 407 Pages.

We fill three-dimensional space up regularly using painting algorithms.
Category: Data Structures and Algorithms

[40] viXra:1407.0010 [pdf] replaced on 2014-07-04 14:06:07

A Lower Bound of 2^n Conditional Jumps for Boolean Satisfiability on A Random Access Machine

Authors: Samuel C. Hsieh
Comments: 13 Pages. This version corrects a few typing errors found in the previous version.

We establish a lower bound of 2^n conditional jumps for deciding the satisfiability of the conjunction of any two Boolean formulas from a set called a full representation of Boolean functions of n variables - a set containing a Boolean formula to represent each Boolean function of n variables. The contradiction proof first assumes that there exists a RAM program that correctly decides the satisfiability of the conjunction of any two Boolean formulas from such a set by following an execution path that includes fewer than 2^n conditional jumps. By using multiple runs of this program, with one run for each Boolean function of n variables, the proof derives a contradiction by showing that this program is unable to correctly decide the satisfiability of the conjunction of at least one pair of Boolean formulas from a full representation of n-variable Boolean functions if the program executes fewer than 2^n conditional jumps. This lower bound of 2^n conditional jumps holds for any full representation of Boolean functions of n variables, even if a full representation consists solely of minimized Boolean formulas derived by a Boolean minimization method. We discuss why the lower bound fails to hold for satisfiability of certain restricted formulas, such as 2CNF satisfiability, XOR-SAT, and HORN-SAT. We also relate the lower bound to 3CNF satisfiability.
Category: Data Structures and Algorithms

[39] viXra:1406.0124 [pdf] replaced on 2014-09-27 23:38:47

Elimination of a Second-Law-Attack, and All Cable-Resistance-Based Attacks, in the Kirchhoff-Law–Johnson-Noise (KLJN) Secure Key Exchange System

Authors: Laszlo B. Kish, Claes-Goran Granqvist
Comments: 9 Pages. Accepted for publication in Entropy (open access)

We introduce the so far most efficient attack against the Kirchhoff-law–Johnson-noise (KLJN) secure key exchange system. This attack utilizes the lack of exact thermal equilibrium in practical applications and is based on cable resistance losses and the fact that the Second Law of Thermodynamics cannot provide full security when such losses are present. The new attack does not challenge the unconditional security of the KLJN scheme, but it puts more stringent demands on the security/privacy enhancing protocol than for any earlier attack. In this paper we present a simple defense protocol to fully eliminate this new attack by increasing the noise-temperature at the side of the smaller resistance value over the noise-temperature at the side with the greater resistance. It is shown that this simple protocol totally removes Eve’s information not only for the new attack but also for the old Bergou-Scheuer-Yariv attack. The presently most efficient attacks against the KLJN scheme are thereby completely nullified.
Category: Data Structures and Algorithms

[38] viXra:1406.0124 [pdf] replaced on 2014-06-20 01:40:38

Second-Law-Attack, and Eliminating All Cable Resistance Attacks in the Johnson Noise Based Secure Scheme

Authors: Laszlo B. Kish, Claes-Goran Granqvist
Comments: 4 Pages. vixra hyperlink added

We introduce the so far most efficient attack against the Kirchhoff-law-Johnson-noise (KLJN) secure key exchanger. The attack utilizes the lack of exact thermal equilibrium in practical applications due to the cable resistance loss; thus the Second Law of Thermodynamics cannot provide full security. While the new attack does not challenge the unconditional security of the KLJN scheme, due to its more favorable properties for Eve it puts higher demands on the security/privacy enhancing protocol than any earlier version. We create a simple defense protocol to fully eliminate this attack by increasing the noise-temperature at the side of the lower resistance value. We show that this simple defense protocol totally eliminates Eve's information not only in this but also in the old (Bergou)-Scheuer-Yariv attack. Thus the so far most efficient attack methods become useless against the KLJN scheme.
Category: Data Structures and Algorithms

[37] viXra:1406.0044 [pdf] replaced on 2014-09-13 19:56:25

Cellular Automaton Graphics(2)

Authors: Morio Kikuchi
Comments: 129 Pages.

We fill a plane up regularly using painting algorithms(2).
Category: Data Structures and Algorithms

[36] viXra:1405.0352 [pdf] replaced on 2014-06-06 08:25:56

On Information Hiding

Authors: José Francisco García Juliá
Comments: 3 Pages.

Information hiding is not programming hiding. It is the hiding of changeable information into programming modules.
Category: Data Structures and Algorithms

[35] viXra:1405.0312 [pdf] replaced on 2014-09-21 05:47:47

Transport Catastrophe Analysis as an Alternative to a Monofractal Description: Theory and Application to Financial Time Series

Authors: Sergey A. Kamenshchikov
Comments: 12 Pages. Journal of Chaos, Volume 2014, Article ID 346743. Author: ru.linkedin.com/pub/sergey-kamenshchikov/60/8b1/21a/

The goal of this investigation was to overcome limitations of the persistency analysis introduced by Benoit Mandelbrot for monofractal Brownian processes: non-differentiability, the Brownian nature of the process and a linear memory measure. We have extended the sense of the Hurst factor by considering a phase diffusion power law. It was shown that pre-catastrophic stabilization, as an indicator of bifurcation, leads to a new minimum of momentary phase diffusion, while bifurcation causes an increase of the momentary transport. The efficiency of the diffusive analysis has been experimentally compared to the application of the Reynolds stability model. An extended Reynolds parameter has been introduced as an indicator of phase transition. A combination of diffusive and Reynolds analysis has been applied to the description of a time series of Dow Jones Industrial weekly prices for the world financial crisis of 2007-2009. The diffusive and Reynolds parameters showed extreme values in October 2008, when the mortgage crisis was registered. The combined R/D description allowed short-memory and long-memory shifts of the market evolution to be distinguished. It was stated that a systematic large-scale failure of the financial system began in October 2008 and started fading in February 2009.
Category: Data Structures and Algorithms

[34] viXra:1405.0021 [pdf] replaced on 2014-10-16 21:20:59

Cellular Automaton Graphics

Authors: Morio Kikuchi
Comments: 402 Pages.

We fill a plane up regularly using painting algorithms.
Category: Data Structures and Algorithms

[33] viXra:1404.0081 [pdf] replaced on 2014-05-15 23:12:47

On the “cracking” Scheme in the Paper “A Directional Coupler Attack Against the Kish Key Distribution System” by Gunn, Allison and Abbott

Authors: Hsien-Pu Chen, Laszlo B. Kish, Claes-Göran Granqvist, Gabor Schmera
Comments: 11 Pages. missing/incorrect abstract fixed; extended (second) version

Recently, Gunn, Allison and Abbott (GAA) [http://arxiv.org/pdf/1402.2709v2.pdf] proposed a new scheme to utilize electromagnetic waves for eavesdropping on the Kirchhoff-law–Johnson-noise (KLJN) secure key distribution. We proved in a former paper [http://arxiv.org/pdf/1404.4664] that GAA’s mathematical model is unphysical. Here we analyze GAA’s cracking scheme and show that in the cable-loss-free case it provides less eavesdropping information than the old mean-square-based attack, while in the loss-dominated case it offers no information. We also investigate GAA's experimental claim to be capable of distinguishing, with poor statistics over a few correlation times, the distributions of two Gaussian noises with a relative variance difference of less than 10^–8. Normally such distinctions would require hundreds of millions of correlation times to be observable. We identify several experimental artifacts due to poor design that can lead to GAA’s assertions: deterministic currents due to spurious harmonic components, ground loops and DC offset; aliasing; non-Gaussian features, including non-linearities and other non-idealities in the generators; and the time-derivative nature of their scheme, which enhances all these aspects.
Category: Data Structures and Algorithms

[32] viXra:1404.0081 [pdf] replaced on 2014-05-15 06:52:11

On the “cracking” Scheme in the Paper “A Directional Coupler Attack Against the Kish Key Distribution System” by Gunn, Allison and Abbott

Authors: Hsien P. Chen, Laszlo B. Kish, Claes-Goran Granqvist, Gabor Schmera
Comments: 11 Pages. second draft

Recently Gunn, Allison and Abbott (GAA) [1] proposed a new scheme to utilize electromagnetic waves for eavesdropping on the Kirchhoff-law–Johnson-noise (KLJN) secure key distribution. In a former paper [2], we proved that the wave claims in GAA’s attack are heavily unphysical, since the quasi-static limit holds for the KLJN scheme, implying that physical waves do not exist in the wire channel. The assumption of existing wave modes in the short cable at the low-frequency limit violates a number of laws of physics, including the Second Law of Thermodynamics. One aspect of the mistakes is that in electrical-engineering jargon all oscillating and propagating time functions are called waves, while in physics the corresponding retarded potentials can be of wave type or of non-wave type. Physical waves involve two dual energy forms that regenerate each other during the propagation, as the electrical and magnetic fields do (and similarly kinetic and potential energy in elastic waves); non-wave-type retarded-potential effects in the quasi-static regime, such as in KLJN, have negligible crosstalk between these energy forms, and the energy exchange takes place between them and the generators [2].
Category: Data Structures and Algorithms

[31] viXra:1404.0081 [pdf] replaced on 2014-04-11 08:37:35

On the "Cracking" Experiments in Gunn, Allison, Abbott, "A Directional Coupler Attack Against the Kish Key Distribution System"

Authors: Hsien-Pu Chen, Laszlo B. Kish, Claes-Goran Granqvist, Gabor Schmera
Comments: 4 Pages. second draft

Recently Gunn, Allison and Abbott (GAA) [http://arxiv.org/pdf/1402.2709v2.pdf] proposed a new scheme to utilize electromagnetic waves for eavesdropping on the Kirchhoff-law–Johnson-noise (KLJN) secure key distribution. In a former paper [http://vixra.org/pdf/1403.0964v4.pdf], we proved that GAA's wave-based attack is unphysical. Here we address their experimental results regarding this attack. Our analysis shows that GAA virtually claim that they can identify, within a few correlation times, which of two zero-mean Gaussian distributions is wider when their relative width difference is <10^-4. Normally, such a decision would need millions of correlation times to observe. We identify the experimental artifact causing this situation: an existing DC current and/or ground loop (yielding slow deterministic currents) in the system. It is important to note that, while GAA's cracking scheme, experiments and analysis are invalid, there is an important benefit of their attempt: our analysis implies that, in practical KLJN systems, DC currents, ground loops or any other mechanisms carrying a deterministic current/voltage component must be taken care of to avoid information leakage about the key.
Category: Data Structures and Algorithms

[30] viXra:1404.0054 [pdf] replaced on 2014-07-08 09:47:54

Securing Vehicle Communication Systems by the KLJN Key Exchange Protocol

Authors: Y. Saez, X. Cao, L.b. Kish, G. Pesti
Comments: 12 Pages. Paper accepted for publication at FNL on May 19, 2014

We review the security requirements for vehicular communication networks and provide a critical assessment of some typical communication security solutions. We also propose a novel unconditionally secure vehicular communication architecture that utilizes the Kirchhoff-law–Johnson-noise (KLJN) key distribution scheme.
Category: Data Structures and Algorithms

[29] viXra:1403.0964 [pdf] replaced on 2014-04-07 13:23:57

Do Electromagnetic Waves Exist in a Short Cable at Low Frequencies? What Does Physics Say?

Authors: Hsien-Pu Chen, Laszlo B. Kish, Claes-Goran Granqvist, Gabor Schmera
Comments: 13 Pages. Accepted for publication in Fluctuation and Noise Letters

We refute a physical model, recently proposed by Gunn, Allison and Abbott (GAA) [http://arxiv.org/pdf/1402.2709v2.pdf], to utilize electromagnetic waves for eavesdropping on the Kirchhoff-law–Johnson-noise (KLJN) secure key distribution. Their model, and its theoretical underpinnings, is found to be fundamentally flawed because their assumption of electromagnetic waves violates not only the wave equation but also the Second Law of Thermodynamics, the Principle of Detailed Balance, Boltzmann’s Energy Equipartition Theorem, and Planck’s formula by implying infinitely strong blackbody radiation. We deduce the correct mathematical model of the GAA scheme, which is based on impedances at the quasi-static limit. Mathematical analysis and simulation results confirm our approach and prove that GAA’s experimental interpretation is incorrect too.
Category: Data Structures and Algorithms

[28] viXra:1403.0964 [pdf] replaced on 2014-04-02 10:24:11

Do Electromagnetic Waves Exist in a Short Cable at Low Frequencies? What Does Physics Say?

Authors: Hsien-Pu Chen, Laszlo B. Kish, Claes-Goran Granqvist, Gabor Schmera
Comments: 12 Pages. author's name corrected; link added

We refute a physical model, recently proposed by Gunn, Allison and Abbott (GAA) [http://arxiv.org/pdf/1402.2709v2.pdf], to utilize electromagnetic waves for eavesdropping on the Kirchhoff-law–Johnson-noise (KLJN) secure key distribution. Their model, and its theoretical underpinnings, is found to be fundamentally flawed because their assumption of electromagnetic waves violates not only the wave equations but also the Second Law of Thermodynamics, the Principle of Detailed Balance, Boltzmann’s Energy Equipartition Theorem, and Planck’s formula by implying infinitely strong blackbody radiation. We deduce the correct mathematical model of the GAA scheme, which is impedance-based. Mathematical analysis and simulation results confirm our approach and prove that GAA’s experimental interpretation is incorrect too.
Category: Data Structures and Algorithms

[27] viXra:1403.0964 [pdf] replaced on 2014-03-31 13:41:15

Do Electromagnetic Waves Exist in a Short Cable at Low Frequencies? What Does Physics Say?

Authors: H.P. Chan, L.B. Kish, C.G. Granqvist, G. Schmera
Comments: 12 Pages. revised

We refute a physical model, recently proposed by Gunn, Allison and Abbott (GAA) [http://arxiv.org/pdf/1402.2709v2.pdf], to utilize electromagnetic waves for eavesdropping on the Kirchhoff-law–Johnson-noise (KLJN) secure key distribution. Their model, and its theoretical underpinnings, is found to be fundamentally flawed because their assumption of electromagnetic waves violates not only the wave equations but also the Second Law of Thermodynamics, the Principle of Detailed Balance, Boltzmann’s Energy Equipartition Theorem, and Planck’s formula by implying infinitely strong blackbody radiation. We deduce the correct mathematical model of the GAA scheme, which is impedance-based. Mathematical analysis and simulation results confirm our approach and prove that GAA’s experimental interpretation is incorrect too.
Category: Data Structures and Algorithms

[26] viXra:1308.0113 [pdf] replaced on 2013-10-14 13:37:44

Current and Voltage Based Bit Errors and Their Combined Mitigation for the Kirchhoff-Law–Johnson-Noise Secure Key Exchange

Authors: Yessica Saez, Laszlo B. Kish, Robert Mingesz, Zoltan Gingl, Claes G. Granqvist
Comments: 9 pages

We classify and analyze bit errors in the current measurement mode of the Kirchhoff-law–Johnson-noise (KLJN) key distribution. The error probability decays exponentially with increasing bit exchange period at fixed bandwidth, which is similar to the error probability decay in the voltage measurement mode. We also analyze the combination of voltage and current modes for error removal. In this combination method, the error probability is still an exponential function that decays with the duration of the bit exchange period, but it has superior fidelity to the former schemes.
Category: Data Structures and Algorithms
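
A toy Monte Carlo illustration of the exponential decay described above (a simple variance-threshold decision with assumed noise levels, not the paper's exact error model):

import numpy as np

# Decide from n noise samples whether the measured noise level corresponds to a "low"
# or "high" variance state; the misreading probability falls roughly exponentially as
# the bit-exchange window (number of samples per bit) grows.
rng = np.random.default_rng(1)
sigma_lo, sigma_hi = 1.0, 1.3                 # assumed illustrative noise levels
threshold = np.sqrt(sigma_lo * sigma_hi)      # geometric-mean decision threshold
trials = 20_000

for n in (25, 50, 100, 200, 400):             # samples per bit window ~ bandwidth * duration
    x = rng.normal(0.0, sigma_lo, (trials, n))
    errors = np.mean(x.std(axis=1) > threshold)   # "low" state misread as "high"
    print(f"n={n:4d}  error probability ~ {errors:.4f}")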

[25] viXra:1308.0113 [pdf] replaced on 2013-09-10 10:04:57

Current and Voltage Based Bit Errors and Their Combined Mitigation for the Kirchhoff-Law–Johnson-Noise Secure Key Exchange

Authors: Yessica Saez, Laszlo B. Kish, Robert Mingesz, Zoltan Gingl, Claes G. Granqvist
Comments: 9 pages, submitted for publication

We classify and analyze bit errors in the current measurement mode of the Kirchhoff-law–Johnson-noise (KLJN) key distribution. The error probability decays exponentially with increasing bit exchange period and fixed bandwidth, which is similar to the error probability decay in the voltage measurement mode. We also analyze the combination of voltage and current modes for error removal. In this combination method, the error probability is still an exponential function that decays with the duration of the bit exchange period, but it has superior fidelity to the former schemes.
Category: Data Structures and Algorithms

[24] viXra:1308.0113 [pdf] replaced on 2013-08-22 11:49:06

Current and Voltage Based Bit Errors and Their Combined Mitigation for the Kirchhoff-Law–Johnson-Noise Secure Key Exchange

Authors: Yessica Saez, Laszlo B. Kish, Robert Mingesz, Zoltan Gingl, Claes G. Granqvist
Comments: 9 pages, submitted for publication

We classify and analyze bit errors in the current measurement mode of the Kirchhoff-law–Johnson-noise (KLJN) key distribution. The error probability decays exponentially with increasing bit exchange period and fixed bandwidth, which is similar to the error probability decay in the voltage measurement mode. We also analyze the combination of voltage and current modes for error removal. In this combination method, the error probability is still an exponential function that decays with the duration of the bit exchange period, but it has superior fidelity to the former schemes.
Category: Data Structures and Algorithms

[23] viXra:1306.0193 [pdf] replaced on 2013-06-28 01:21:51

Log N Algorithm for Search from Unstructured List

Authors: Dhananjay P. Mehendale
Comments: 4 pages. Sorting algorithm is added.

The unstructured search problem asks for the search of some predefined number, called the target, in a given unstructured list of numbers. In this paper we propose a novel classical algorithm with complexity ~O(Log N) for searching for the target in an unstructured list of numbers, achieving an improvement of exponential order over existing algorithms. Suppose N is the largest number in the list; we then consider the N-dimensional vector space with the Euclidean basis. With each number in the given unstructured list we associate the unique corresponding basis vector: for example, if j is a number in the list, we associate with it the basis vector |j> = transpose(0, 0, 0, … , 0, 0, 1, 0, 0, … , 0, 0, 0), which has the entry 1 only at the j-th place and the entry 0 everywhere else. We then divide the given list of numbers into two roughly equal parts (i.e. we divide the given bag containing scrambled numbers into two roughly equal parts and put them into two separate bags, Bag 1 and Bag 2). We represent the list of numbers in Bag 1 (Bag 2) as a single state formed by the equally weighted superposition of the basis vectors corresponding to the numbers in that bag, namely |Psi-1> (|Psi-2>). Let t be the target number, represented as |t>. We then compute the scalar product of the target state |t> with |Psi-1> (or |Psi-2>). This reveals whether t belongs to Bag 1 (or Bag 2), which enables us to carry out a binary search and to achieve the above-mentioned ~O(Log N) complexity. Moreover, representing the list as a superposition provides sorting of the numbers instantly: one reads the vector from left to right to obtain the desired sorted list.
Category: Data Structures and Algorithms
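
A literal, small-scale sketch of the construction described above (an illustration of the representation only; the helper names superposition and locate are not from the paper):

import numpy as np

def superposition(numbers, N):
    """Equally weighted sum of basis vectors |j> for j in the bag (numbers are 1-indexed)."""
    psi = np.zeros(N)
    psi[np.asarray(numbers) - 1] = 1.0
    return psi / np.sqrt(len(numbers))

def locate(target, numbers, N):
    """Binary search over bags, using <t|Psi_bag> != 0 as the membership test."""
    bag = list(numbers)
    while len(bag) > 1:
        half = len(bag) // 2
        bag1, bag2 = bag[:half], bag[half:]
        t_vec = np.zeros(N); t_vec[target - 1] = 1.0
        bag = bag1 if t_vec @ superposition(bag1, N) > 0 else bag2
    return bag[0] if bag[0] == target else None

unstructured = [7, 3, 9, 1, 12, 5]
print(locate(9, unstructured, N=max(unstructured)))   # -> 9
# Note: in this literal sketch, forming |Psi> and taking the scalar product already touch
# every element of the bag, so it illustrates the representation rather than a complexity bound.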

[22] viXra:1306.0058 [pdf] replaced on 2013-10-20 14:40:22

Critical Analysis of the Bennett–Riedel Attack on the Secure Cryptographic Key Distributions Via the Kirchhoff-Law–Johnson-Noise Scheme

Authors: Laszlo B. Kish, Derek Abbott, Claes-Goran Granqvist
Comments: 33 Pages. Accepted for publication at PLOS ONE

Recently, Bennett and Riedel (BR) (http://arxiv.org/abs/1303.7435v1) argued that thermodynamics is not essential in the Kirchhoff-law–Johnson-noise (KLJN) classical physical cryptographic exchange method in an effort to disprove the security of the KLJN scheme. They attempted to demonstrate this by introducing a dissipation-free deterministic key exchange method with two batteries and two switches. In the present paper, we first show that BR’s scheme is unphysical and that some elements of its assumptions violate basic protocols of secure communication. All our analyses are based on a technically-unlimited Eve with infinitely accurate and fast measurements limited only by the laws of physics and statistics. For non-ideal situations and under active (invasive) attacks, the uncertainty principle between measurement duration and statistical errors makes it impossible for Eve to extract the key regardless of the accuracy or speed of her measurements. To show that thermodynamics and noise are essential for the security, we crack the BR system with 100% success via passive attacks, in ten different ways, and demonstrate that the same cracking methods do not function for the KLJN scheme that employs Johnson noise to provide security underpinned by the Second Law of Thermodynamics. We also present a critical analysis of some other claims by BR; for example, we prove that their equations for describing zero security do not apply to the KLJN scheme. Finally we give mathematical security proofs for each BR-attack against the KLJN scheme and conclude that the information theoretic (unconditional) security of the KLJN method has not been successfully challenged.
Category: Data Structures and Algorithms
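
An idealized numerical sketch of the KLJN principle referred to above: with a zero-length, lossless wire and equal noise temperatures (assumptions for illustration, not the paper's analysis of the BR scheme), the wire-voltage and loop-current statistics are symmetric in the two resistors, so passive measurements cannot tell the HL and LH states apart.

import numpy as np

# Each party connects a resistor R with its own Johnson-like noise source whose variance
# is proportional to R (equal "temperature"). Eve sees the wire voltage and loop current.
rng = np.random.default_rng(2)
R_L, R_H, n = 1.0, 10.0, 2_000_000

def eve_view(R_A, R_B):
    u_A = rng.normal(0.0, np.sqrt(R_A), n)     # thermal noise amplitude ~ sqrt(R), units dropped
    u_B = rng.normal(0.0, np.sqrt(R_B), n)
    u_wire = (u_A * R_B + u_B * R_A) / (R_A + R_B)
    i_loop = (u_A - u_B) / (R_A + R_B)
    return np.var(u_wire), np.var(i_loop)

print("HL:", eve_view(R_H, R_L))   # ~ (R_H*R_L/(R_H+R_L), 1/(R_H+R_L)) = (0.909, 0.0909)
print("LH:", eve_view(R_L, R_H))   # statistically identical to the HL case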

[21] viXra:1306.0058 [pdf] replaced on 2013-10-14 10:27:03

Critical Analysis of the Bennett–Riedel Attack on the Secure Cryptographic Key Distributions Via the Kirchhoff-Law–Johnson-Noise Scheme

Authors: Laszlo B. Kish, Derek Abbott, Claes-Goran Granqvist
Comments: 33 Pages. expanded, in response to Charles Bennett: sec. 1.1.4

Recently, Bennett and Riedel (BR) (http://arxiv.org/abs/1303.7435v1) argued that thermodynamics is not essential in the Kirchhoff-law–Johnson-noise (KLJN) classical physical cryptographic exchange method in an effort to disprove the security of the KLJN scheme. They attempted to demonstrate this by introducing a dissipation-free deterministic key exchange method with two batteries and two switches. In the present paper, we first show that BR’s scheme is unphysical and that some elements of its assumptions violate basic protocols of secure communication. All our analyses are based on a technically-unlimited Eve with infinitely accurate and fast measurements limited only by the laws of physics and statistics. For non-ideal situations and under active (invasive) attacks, the uncertainty principle between measurement duration and statistical errors makes it impossible for Eve to extract the key regardless of the accuracy or speed of her measurements. To show that thermodynamics and noise are essential for the security, we crack the BR system with 100% success via passive attacks, in ten different ways, and demonstrate that the same cracking methods do not function for the KLJN scheme that employs Johnson noise to provide security underpinned by the Second Law of Thermodynamics. We also present a critical analysis of some other claims by BR; for example, we prove that their equations for describing zero security do not apply to the KLJN scheme. Finally we give mathematical security proofs for each BR-attack against the KLJN scheme and conclude that the information theoretic (unconditional) security of the KLJN method has not been successfully challenged.
Category: Data Structures and Algorithms

[20] viXra:1306.0058 [pdf] replaced on 2013-09-08 16:02:35

Critical Analysis of the Bennett–Riedel Attack on Secure Cryptographic Key Distributions Via the Kirchhoff-Law–Johnson-Noise Scheme

Authors: Laszlo B. Kish, Derek Abbott, Claes-Goran Granqvist
Comments: 31 Pages. small but important corrections

Recently, Bennett and Riedel (BR) (http://arxiv.org/abs/1303.7435v1) argued that thermodynamics is not essential in the Kirchhoff-law–Johnson-noise (KLJN) classical physical cryptographic exchange method in an effort to disprove the security of the KLJN scheme. They attempted to prove this by introducing a dissipation-free deterministic key exchange method with two batteries and two switches. In the present paper, we first show that BR’s scheme is unphysical and that some elements of its assumptions violate basic protocols of secure communication. Furthermore we crack the BR system with 100% success via passive attacks, in ten different ways, and demonstrate that the same cracking methods do not function for the KLJN scheme that employs Johnson noise to provide security underpinned by the Second Law of Thermodynamics. We also present a critical analysis of some other claims by BR; for example, we prove that their equations for describing zero security do not apply to the KLJN scheme. Finally we give mathematical security proofs for each BR-attack against the KLJN scheme and conclude that the information theoretic (unconditional) security of the KLJN method has not been successfully challenged.
Category: Data Structures and Algorithms

[19] viXra:1306.0058 [pdf] replaced on 2013-08-10 22:03:20

Critical Analysis of the Bennett–Riedel Attack on Secure Cryptographic Key Distributions Via the Kirchhoff-Law–Johnson-Noise Scheme

Authors: Laszlo B. Kish, Derek Abbott, Claes-Goran Granqvist
Comments: 31 Pages. some typos fixed

Recently, Bennett and Riedel (BR) (http://arxiv.org/abs/1303.7435v1) argued that thermodynamics is not essential in the Kirchhoff-law–Johnson-noise (KLJN) classical physical cryptographic exchange method in an effort to disprove the security of the KLJN scheme. They attempted to prove this by introducing a dissipation-free deterministic key exchange method with two batteries and two switches. In the present paper, we first show that BR’s scheme is unphysical and that some elements of its assumptions violate basic protocols of secure communication. Furthermore we crack the BR system with 100% success via passive attacks, in ten different ways, and demonstrate that the same cracking methods do not function for the KLJN scheme that employs Johnson noise to provide security underpinned by the Second Law of Thermodynamics. We also present a critical analysis of some other claims by BR; for example, we prove that their equations for describing zero security do not apply to the KLJN scheme. Finally we give mathematical security proofs for each BR-attack against the KLJN scheme and conclude that the information theoretic (unconditional) security of the KLJN method has not been successfully challenged.
Category: Data Structures and Algorithms

[18] viXra:1306.0058 [pdf] replaced on 2013-07-02 02:11:52

Critical Analysis of the Bennett–Riedel Attack on the Secure Cryptographic Key Distributions Via the Kirchhoff-Law–Johnson-Noise Scheme

Authors: Laszlo B. Kish, Derek Abbott, Claes-Goran Granqvist
Comments: 31 Pages. typo in abstract corrected

Recently, Bennett and Riedel (BR) (http://arxiv.org/abs/1303.7435v1) argued that thermodynamics is not essential in the Kirchhoff-law–Johnson-noise (KLJN) classical physical cryptographic exchange method in an effort to disprove the security of the KLJN scheme. They attempted to prove this by introducing a dissipation-free deterministic key exchange method with two batteries and two switches. In the present paper, we first show that BR’s scheme is unphysical and that some elements of its assumptions violate basic protocols of secure communication. Furthermore we crack the BR system with 100% success via passive attacks, in ten different ways, and demonstrate that the same cracking methods do not function for the KLJN scheme that employs Johnson noise to provide security underpinned by the Second Law of Thermodynamics. We also present a critical analysis of some other claims by BR; for example, we prove that their equations for describing zero security do not apply to the KLJN scheme. Finally we give mathematical security proofs for each BR-attack against the KLJN scheme and conclude that the information theoretic (unconditional) security of the KLJN method has not been successfully challenged.
Category: Data Structures and Algorithms

[17] viXra:1306.0058 [pdf] replaced on 2013-06-17 11:43:33

Cracking the Bennett-Riedel “secure” Scheme and Critical Analysis of Their Claims About the Kirchhoff-Law-Johnson-Noise System

Authors: Laszlo B. Kish, Derek Abbott, Claes-Goran Granqvist
Comments: 34 Pages. corrected, expanded

Recently, Bennett and Riedel (BR) (http://arxiv.org/abs/1303.7435v1) claimed that thermodynamics (statistical physics) is not essential in the Kirchhoff-law-Johnson-noise (KLJN) classical statistical physical key exchange method, and they also asserted that the KLJN scheme does not provide security. They attempted to prove the no-thermodynamics view by proposing a dissipation-free deterministic key exchange method with two batteries and two switches (a scheme that was earlier patented by Davide Antilli). In the present paper, we first show that the BR scheme is unphysical and that some elements of its assumptions violate basic protocols of secure communications. Furthermore we crack the BR system with 100% success by passive attacks in ten different ways and demonstrate that the same cracking methods do not function for the KLJN scheme, which is based on Johnson noise and the Second Law of Thermodynamics. We also provide a critical analysis of some other claims by BR; for example, we prove that their equations for describing zero security do not apply for the KLJN scheme. Finally we provide mathematical security proofs for each of the attacks on the BR scheme and conclude that the information theoretic (unconditional) security of the KLJN method has not been successfully challenged.
Category: Data Structures and Algorithms

[16] viXra:1305.0126 [pdf] replaced on 2013-10-20 15:31:49

Errors and Their Mitigation at the Kirchhoff-Law-Johnson-Noise Secure Key Exchange

Authors: Yessica Saez, Laszlo B. Kish
Comments: 19 Pages. Accepted for publication at PLOS ONE

A method to quantify the error probability at the Kirchhoff-law-Johnson-noise (KLJN) secure key exchange is introduced. The types of errors due to statistical inaccuracies in noise voltage measurements are classified and the error probability is calculated. The most interesting finding is that the error probability decays exponentially with the duration of the time window of single bit exchange. The results indicate that it is feasible to achieve such small error probabilities for the exchanged bits that error correction algorithms are not required. The results are demonstrated with practical considerations.
Category: Data Structures and Algorithms

[15] viXra:1305.0126 [pdf] replaced on 2013-05-21 06:44:51

Errors and Their Mitigation at the Kirchhoff-Law-Johnson-Noise Secure Key Exchange

Authors: Yessica Saez, Laszlo B. Kish
Comments: 18 Pages. submitted for publication

A method to quantify the error probability at the Kirchhoff-law-Johnson-noise (KLJN) secure key exchange is introduced. The types of errors due to statistical inaccuracies in noise voltage measurements are classified and the error probability is calculated. The results are demonstrated with practical considerations.
Category: Data Structures and Algorithms

[14] viXra:1305.0068 [pdf] replaced on 2013-07-26 18:31:54

Physical Uncloneable Function Hardware Keys Utilizing Kirchhoff-Law-Johnson-Noise Secure Key Exchange and Noise-Based Logic

Authors: Laszlo B. Kish, Chiman Kwan
Comments: 9 Pages. clarifications/enhancements; in publication process

A weak uncloneable function (PUF) encryption key means that the manufacturer of the hardware can clone the key but anybody else is unable to do so. A strong uncloneable function (PUF) encryption key means that even the manufacturer of the hardware is unable to clone the key. In this paper, we first introduce an "ultra"-strong PUF with intrinsic dynamical randomness, which not only is uncloneable but also gets renewed to an independent key (with fresh randomness) during each use via the unconditionally secure key exchange. The solution utilizes the Kirchhoff-law-Johnson-noise (KLJN) method for dynamical key renewal and a one-time-pad secure key for the challenge/response process. The secure key is stored in a flash memory on the chip to provide tamper resistance and non-volatile storage with zero power requirements in standby mode. Simplified PUF keys are also shown: a strong PUF utilizing the KLJN protocol during the first run and the noise-based logic (NBL) hyperspace vector string verification method for the challenge/response during the rest of its life or until it is re-initialized. Finally, the simplest PUF utilizes NBL without KLJN; thus it can be cloned by the manufacturer but not by anybody else.
Category: Data Structures and Algorithms
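
A toy sketch of the one-time-pad style challenge/response idea mentioned above, in which each authentication round consumes a segment of the shared secret key. This is an illustration only, not the paper's KLJN/NBL hardware construction; the class PufToken is hypothetical.

import secrets

KEY_BYTES = 16

class PufToken:
    def __init__(self, shared_key_stream):
        self._key = shared_key_stream          # pre-shared secret, e.g. refreshed by KLJN exchanges

    def respond(self, challenge):
        pad = self._key[:len(challenge)]
        del self._key[:len(challenge)]         # never reuse key material (one-time-pad style)
        return bytes(c ^ p for c, p in zip(challenge, pad))

shared = bytearray(secrets.token_bytes(64))
device, verifier_copy = PufToken(bytearray(shared)), PufToken(bytearray(shared))

challenge = secrets.token_bytes(KEY_BYTES)
print(device.respond(challenge) == verifier_copy.respond(challenge))   # True: responses match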

[13] viXra:1305.0068 [pdf] replaced on 2013-05-21 04:40:40

Physical Uncloneable Function Hardware Keys Utilizing Kirchhoff-Law-Johnson-Noise Secure Key Exchange and Noise-Based Logic

Authors: Laszlo B. Kish, Chiman Kwan
Comments: 8 Pages. submitted for publication

A weak physical uncloneable function (WPUF) encryption key means that the manufacturer of the hardware can clone the key but anybody else is unable to do so. A strong physical uncloneable function (SPUF) encryption key means that even the manufacturer of the hardware is unable to clone the key. In this paper, we first introduce an "ultra"-strong PUF with intrinsic dynamical randomness, which not only is uncloneable but also gets renewed to an independent key (with fresh randomness) during each use via the unconditionally secure key exchange. The solution utilizes the Kirchhoff-law-Johnson-noise (KLJN) method for dynamical key renewal and a one-time-pad secure key for the challenge/response process. The secure key is stored in a flash memory on the chip to provide tamper resistance and non-volatile storage with zero power requirements in standby mode. Simplified PUF keys are also shown: a strong PUF utilizing the KLJN protocol during the first run and the noise-based logic (NBL) hyperspace vector string verification method for the challenge/response during the rest of its life or until it is re-initialized. Finally, the simplest PUF utilizes NBL without KLJN; thus it can be cloned by the manufacturer but not by anybody else.
Category: Data Structures and Algorithms

[12] viXra:1303.0106 [pdf] replaced on 2013-03-15 20:58:37

Lecture Notes On Recursive Algorithm

Authors: Cheng Tianren
Comments: 50 Pages.

This is the first volume of a primer on algorithms. As is well known, algorithms have become a central topic of computer science. In these lecture notes I focus on one family of algorithms, recursive algorithms. The lectures take an accessible viewpoint on problems met in class that students do not easily accept, and they collect multiple examples of recursive algorithms to be explained together, which makes the teaching more convenient. I must say, however, that without basic knowledge of a programming language these lectures may be no easier for you than other algorithm books. If you read them carefully, though, they will help you study other materials and make them easier to accept.
Category: Data Structures and Algorithms

[11] viXra:1303.0094 [pdf] replaced on 2013-06-15 11:16:34

Information Theoretically Secure, Enhanced Johnson Noise Based Key Distribution Over the Smart Grid with Switched Filters

Authors: Elias Gonzalez, Laszlo B. Kish, Robert S. Balog, Prasad Enjeti
Comments: 24 Pages. updated, polished

We introduce a protocol with a reconfigurable filter system to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional radial networks (chain-like power line) which are typical of the electricity distribution network between the utility and the customer. The speed of the protocol (the number of steps needed) versus grid size is analyzed. When properly generalized, such a system has the potential to achieve unconditionally secure key distribution over the smart power grid of arbitrary geometrical dimensions.
Category: Data Structures and Algorithms
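
A toy scheduling sketch loosely related to the step-count question above, under two assumptions that are not taken from the paper: every pair of nodes on the chain must eventually share a key, and loops active in the same step must not overlap on the line.

from itertools import combinations

def steps_to_cover_all_pairs(n):
    """A KLJN loop between nodes i and j occupies the whole segment [i, j] of the chain;
    greedily pack disjoint segments into each step until every pair has exchanged a key."""
    remaining = set(combinations(range(n), 2))   # every (i, j) loop still to be run
    steps = 0
    while remaining:
        steps += 1
        last_end = -1
        for i, j in sorted(remaining, key=lambda seg: seg[1]):  # earliest-finishing segment first
            if i > last_end:                     # disjoint from segments already picked this step
                remaining.remove((i, j))
                last_end = j
    return steps

for n in (4, 8, 16, 32):
    print(f"chain of {n:2d} nodes -> {steps_to_cover_all_pairs(n)} steps")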

[10] viXra:1303.0094 [pdf] replaced on 2013-03-19 12:11:30

Information Theoretically Secure, Enhanced Johnson Noise Based Key Distribution Over the Smart Grid with Switched Filters

Authors: Elias Gonzalez, Laszlo B. Kish, Robert Balog, Prasad Enjeti
Comments: 22 Pages. draft

We introduce a protocol with a reconfigurable filter system to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional grids (chain-like power line). The speed of the protocol (the number of steps needed) versus grid size is analyzed. When fully developed, such a system has the potential to achieve unconditionally secure key distribution over the smart power grid of arbitrary dimensions.
Category: Data Structures and Algorithms

[9] viXra:1302.0055 [pdf] replaced on 2013-06-15 11:29:55

Enhanced Secure Key Exchange Systems Based on the Johnson-Noise Scheme

Authors: Laszlo B. Kish
Comments: published in: Metrology and Measurement Systems. Volume XX, Issue 2, Pages 191–204 (open access)

We introduce seven new versions of the Kirchhoff-Law-Johnson-(like)-Noise (KLJN) classical physical secure key exchange scheme and a new transient protocol for practically-perfect security. While these practical improvements offer progressively enhanced security and/or speed for the non-ideal conditions, the fundamental physical laws providing the security remain the same. In the "intelligent" KLJN (iKLJN) scheme, Alice and Bob utilize the fact that they exactly know not only their own resistor value but also the stochastic time function of their own noise, which they generate before feeding it into the loop. By using this extra information, they can reduce the duration of exchanging a single bit and in this way they achieve not only higher speed but also an enhanced security because Eve's information will be significantly reduced due to smaller statistics. In the "multiple" KLJN (MKLJN) system, Alice and Bob have publicly known identical sets of different resistors with a proper, publicly known truth table about the bit-interpretation of their combination. In this new situation, for Eve to succeed, it is not enough to find out which end has the higher resistor. Eve must exactly identify the actual resistor values at both sides. In the "keyed" KLJN (KKLJN) system, by using secure communication with a formerly shared key, Alice and Bob share a proper time-dependent truth table for the bit-interpretation of the resistor situation for each secure bit exchange step during generating the next key. In this new situation, for Eve to succeed, it is not enough to find out the resistor values at the two ends. Eve must also know the former key. The remaining four KLJN schemes are the combinations of the above protocols to synergically enhance the security properties. These are: the "intelligent-multiple" (iMKLJN), the "intelligent-keyed" (iKKLJN), the "keyed-multiple" (KMKLJN) and the "intelligent-keyed-multiple" (iKMKLJN) KLJN key exchange systems. Finally, we introduce a new transient-protocol offering practically-perfect security without privacy amplification, which is not needed in practical applications but is shown for the sake of ongoing discussions.
Category: Data Structures and Algorithms
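
A toy rendering of the "keyed" interpretation idea described above (illustration only, not the full KKLJN protocol): a formerly shared key decides, per exchange step, which resistor arrangement encodes bit 0 and which encodes bit 1.

import secrets

def interpret(arrangement_bit, former_key_bit):
    """arrangement_bit: 0 if Alice holds R_H, 1 if Bob holds R_H (toy encoding)."""
    return arrangement_bit ^ former_key_bit

former_key = [secrets.randbelow(2) for _ in range(8)]      # previously shared secret bits
arrangements = [secrets.randbelow(2) for _ in range(8)]    # outcomes of 8 KLJN exchanges

alice_bits = [interpret(a, k) for a, k in zip(arrangements, former_key)]
bob_bits   = [interpret(a, k) for a, k in zip(arrangements, former_key)]
print(alice_bits == bob_bits)   # True: both ends derive the same new key bits
# An eavesdropper who learns only `arrangements` sees each new bit as equally likely 0 or 1.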

[8] viXra:1302.0055 [pdf] replaced on 2013-04-12 10:41:55

Enhanced Secure Key Exchange Systems Based on the Johnson-Noise Scheme

Authors: Laszlo B. Kish
Comments: 14 Pages. accepted for publication

We introduce seven new versions of the Kirchhoff-Law-Johnson-(like)-Noise (KLJN) classical physical secure key exchange scheme and a new transient protocol for practically-perfect security. While these practical improvements offer progressively enhanced security and/or speed for the non-ideal conditions, the fundamental physical laws providing the security remain the same. In the "intelligent" KLJN (iKLJN) scheme, Alice and Bob utilize the fact that they exactly know not only their own resistor value but also the stochastic time function of their own noise, which they generate before feeding it into the loop. By using this extra information, they can reduce the duration of exchanging a single bit and in this way they achieve not only higher speed but also an enhanced security because Eve's information will significantly be reduced due to smaller statistics. In the "multiple" KLJN (MKLJN) system, Alice and Bob have publicly known identical sets of different resistors with a proper, publicly known truth table about the bit-interpretation of their combination. In this new situation, for Eve to succeed, it is not enough to find out which end has the higher resistor. Eve must exactly identify the actual resistor values at both sides. In the "keyed" KLJN (KKLJN) system, by using secure communication with a formerly shared key, Alice and Bob share a proper time-dependent truth table for the bit-interpretation of the resistor situation for each secure bit exchange step during generating the next key. In this new situation, for Eve to succeed, it is not enough to find out the resistor values at the two ends. Eve must also know the former key. The remaining four KLJN schemes are the combinations of the above protocols to synergically enhance the security properties. These are: the "intelligent-multiple" (iMKLJN), the "intelligent-keyed" (iKKLJN), the "keyed-multiple" (KMKLJN) and the "intelligent-keyed-multiple" (iKMKLJN) KLJN key exchange systems. Finally, we introduce a new transient-protocol offering practically-perfect security without privacy amplification, which is not needed at practical applications but it is shown for the sake of ongoing discussions.
Category: Data Structures and Algorithms

[7] viXra:1302.0055 [pdf] replaced on 2013-02-14 20:30:55

Enhanced Secure Key Exchange Systems Based on the Johnson-Noise Scheme

Authors: Laszlo B. Kish
Comments: 13 Pages. This version is submitted for publication

We introduce seven new versions of the Kirchhoff-Law-Johnson-(like)-Noise (KLJN) classical physical secure key exchange scheme. While these practical improvements offer progressively enhanced security and/or speed for the non-ideal conditions, the fundamental physical laws providing the security remain the same. In the "intelligent" KLJN (iKLJN) scheme, Alice and Bob utilize the fact that they exactly know not only their own resistor value but also the stochastic time function of their own noise, which they generate before feeding it into the loop. By using this extra information, they can reduce the duration of exchanging a single bit and in this way they achieve not only higher speed but also an enhanced security because Eve's information will significantly be reduced due to smaller statistics. In the "multiple" KLJN (MKLJN) system, Alice and Bob have publicly known identical sets of different resistors with a proper, publicly known truth table about the bit-interpretation of their combination. In this new situation, for Eve to succeed, it is not enough to find out which end has the higher resistor. Eve must exactly identify the actual resistor values at both sides. In the "keyed" KLJN (KKLJN) system, by using secure communication with a formerly shared key, Alice and Bob share a proper time-dependent truth table for the bit-interpretation of the resistor situation for each secure bit exchange step during generating the next key. The remaining four KLJN schemes are the combinations of the above protocols to synergically enhance the security properties. These are: the "intelligent-multiple" (iMKLJN), the "intelligent-keyed" (iKKLJN), the "keyed-multiple" (KMKLJN) and the "intelligent-keyed-multiple" (iKMKLJN) KLJN key exchange systems.
Category: Data Structures and Algorithms

[6] viXra:1302.0055 [pdf] replaced on 2013-02-12 14:12:34

Enhanced Secure Key Exchange Systems Based on the Johnson-Noise Scheme

Authors: Laszlo B. Kish
Comments: 12 Pages.

We introduce seven new versions of the Kirchhoff-Law-Johnson-(like)-Noise (KLJN) classical physical secure key exchange scheme. While these practical improvements offer progressively enhanced security and/or speed for the non-ideal conditions, the fundamental physical laws providing the security remain the same. In the "intelligent" KLJN (iKLJN) scheme, Alice and Bob utilize the fact that they exactly know not only their own resistor value but also the stochastic time function of their own noise, which they generate before feeding it into the loop. By using this extra information, they can reduce the duration of exchanging a single bit and in this way they achieve not only higher speed but also an enhanced security because Eve's information will significantly be reduced due to smaller statistics. In the "multiple" KLJN (MKLJN) system, Alice and Bob have publicly known identical sets of different resistors with a proper, publicly known truth table about the bit-interpretation of their combination. In this new situation, for Eve to succeed, it is not enough to find out which end has the higher resistor. Eve must exactly identify the actual resistor values at both sides. In the "keyed" KLJN (KKLJN) system, by using secure communication with a formerly shared key, Alice and Bob share a proper time-dependent truth table for the bit-interpretation of the resistor situation for each secure bit exchange step during generating the next key. The remaining four KLJN schemes are the combinations of the above protocols to synergically enhance the security properties. These are: the "intelligent-multiple" (iMKLJN), the "intelligent-keyed" (iKKLJN), the "keyed-multiple" (KMKLJN) and the "intelligent-keyed-multiple" (iKMKLJN) KLJN key exchange systems.
Category: Data Structures and Algorithms

[5] viXra:1212.0109 [pdf] replaced on 2013-12-13 07:21:02

Polynomial 3-SAT-Solver

Authors: Matthias Mueller
Comments: 26 Pages. Algorithm has been well tested.

This document describes an algorithm that is supposed to decide in polynomial time and space whether an exact 2- or 3-SAT CNF has a solution. To verify the algorithm's correctness, it has been implemented as a computer program, which successfully determined the solvability of more than 1 million exact-3-SAT formulas. The solver program (Windows binary & source code) can be downloaded via a link in the document. With small changes and recompiling, the solver program should also run on Linux.
Category: Data Structures and Algorithms
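
The abstract above mentions checking the solver against more than a million exact-3-SAT instances; a minimal brute-force cross-check of the kind one could use on small instances (not the author's polynomial algorithm) might look like this:

from itertools import product

def brute_force_sat(clauses, num_vars):
    """A clause is a tuple of non-zero ints: 3 means variable x3, -3 means NOT x3."""
    for assignment in product((False, True), repeat=num_vars):
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(lit_true(l) for l in clause) for clause in clauses):
            return True
    return False

# (x1 or x2 or x3) and (not x1 or x2 or not x3) and (not x2 or x3 or not x1)
cnf = [(1, 2, 3), (-1, 2, -3), (-2, 3, -1)]
print(brute_force_sat(cnf, num_vars=3))   # True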

[4] viXra:1208.0226 [pdf] replaced on 2012-12-04 09:25:42

Complex Noise-Bits and Large-Scale Instantaneous Parallel Operations with Low Complexity

Authors: He Wen, Laszlo B. Kish, Andreas Klappenecker
Comments: 10 Pages. In press at Fluctuation and Noise Letters

We introduce the complex noise-bit as an information carrier, which requires noise signals in two parallel wires instead of the single-wire representations of noise-based logic discussed so far. The immediate advantage of this new scheme is that, when we use random telegraph waves as the noise carrier, the superposition of the first 2^N integer numbers (obtained by the Achilles heel operation) yields non-zero values. We introduce basic instantaneous operations, with O(2^0) time and hardware complexity, including bit-value measurements in product states and single-bit and two-bit noise gates (universality exists) that can instantaneously operate over large superpositions with full parallelism. We envision the possibility of implementing instantaneously running quantum algorithms on classical computers while using a similar number of classical bits as the number of quantum bits emulated, without the necessity of error corrections. Mathematical analysis and proofs are given.
Category: Data Structures and Algorithms
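
A toy statistical warm-up for the noise carriers mentioned above: random telegraph waves taking the values +1 or -1 independently in each clock cycle, a real single-wire superposition formed as their sum, and time-averaged correlation used to test membership. This is only an illustration of the statistics; it is not the paper's two-wire complex noise-bit scheme or its instantaneous operations.

import numpy as np

rng = np.random.default_rng(3)
num_refs, clock_cycles = 8, 50_000
refs = rng.choice((-1.0, 1.0), size=(num_refs, clock_cycles))   # reference RTWs, mutually uncorrelated

chosen = [0, 3, 5]
superposition = refs[chosen].sum(axis=0)        # sum of the chosen reference waves

correlations = refs @ superposition / clock_cycles   # ~1 for included refs, ~0 otherwise
for k, c in enumerate(correlations):
    print(f"ref {k}: correlation {c:+.3f}  ->", "present" if c > 0.5 else "absent")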

[3] viXra:1208.0226 [pdf] replaced on 2012-09-06 17:25:19

Complex Noise-Bits and Large-Scale Instantaneous Parallel Operations with Low Complexity

Authors: He Wen, Laszlo B. Kish, Andreas Klappenecker
Comments: 10 Pages.

We introduce the complex noise-bit as information carrier, which requires noise signals in two parallel wires instead of the single-wire representations of noise-based logic discussed so far. The immediate advantage of this new scheme is that, when we use random telegraph waves as noise carrier, the superposition of the first 2^N integer numbers (obtained by the Achilles heel operation) yields non-zero values. We introduce basic instantaneous operations, with O(1) time and hardware complexity, including bit-value measurements in product states, single-bit and two-bit noise gates (universality exists) that can instantaneously operate over large superpositions with full parallelism. We envision the possibility of implementing instantaneously running quantum algorithms on classical computers while using similar number of classical bits as the number of quantum bits emulated without the necessity of error corrections. Mathematical analysis and proofs are given.
Category: Data Structures and Algorithms

[2] viXra:1109.0036 [pdf] replaced on 19 Sep 2011

An OpenCL Fast Fourier Transformation Implementation Strategy

Authors: Sven De Smet
Comments: 9 pages

This paper describes an implementation strategy in preparation for an OpenCL FFT implementation. The two factors that are most crucial for obtaining high performance in a GPU FFT implementation (memory bandwidth and locality) are highlighted. Theoretical upper bounds on performance in terms of the locality factor are derived. An implementation strategy is proposed that takes these factors into account so that the resulting implementation has the potential to achieve high performance.
Category: Data Structures and Algorithms
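
A hedged roofline-style estimate of the bandwidth/locality trade-off highlighted above, using assumed GPU numbers and a simple "passes over the data" stand-in for locality (the paper's locality factor and bounds may be defined differently):

import math

def fft_gflops_bound(n, passes, peak_gflops=1000.0, bandwidth_gbs=200.0):
    """A complex single-precision radix-2 FFT performs roughly 5*N*log2(N) flops;
    each pass over the data reads and writes 8-byte complex values."""
    flops = 5.0 * n * math.log2(n)
    bytes_moved = passes * 2 * 8 * n
    intensity = flops / bytes_moved             # flop / byte
    return min(peak_gflops, bandwidth_gbs * intensity)

n = 1 << 20
for passes in (1, 2, 5, 20):                    # fewer passes over the data = better locality
    print(f"passes={passes:2d}  attainable ~ {fft_gflops_bound(n, passes):7.1f} GFLOP/s")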

[1] viXra:1004.0007 [pdf] replaced on 12 Apr 2010

Algebraic Generalization of Venn Diagram

Authors: Florentin Smarandache
Comments: 3 pages

It is easy to deal with a Venn diagram for 1 ≤ n ≤ 3 sets. When n gets larger, the picture becomes more complicated, which is why we thought of the following codification: an easy and systematic algebraic way of dealing with the representation of intersections and unions of many sets.
Category: Data Structures and Algorithms
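
One possible codification in the spirit of the abstract above (the paper's exact notation may differ): code each atomic Venn region by the string of indices of the sets containing it; unions and intersections then become operations on sets of region codes.

from itertools import chain, combinations

def regions(n):
    """All non-empty atomic regions of sets S1..Sn, coded as index strings, e.g. '13'
    means inside S1 and S3 and outside every other set (single-digit indices assumed)."""
    idx = range(1, n + 1)
    subsets = chain.from_iterable(combinations(idx, r) for r in range(1, n + 1))
    return {"".join(map(str, s)) for s in subsets}

def code_of_set(i, n):
    """All atomic regions that lie inside S_i."""
    return {r for r in regions(n) if str(i) in r}

n = 3
S1, S2, S3 = (code_of_set(i, n) for i in (1, 2, 3))
print(sorted(S1 & S2))          # regions in S1 ∩ S2: ['12', '123']
print(sorted(S1 | S3))          # regions in S1 ∪ S3
print(sorted(S2 - S1))          # regions in S2 but not in S1: ['2', '23']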