[4] viXra:2502.0128 [pdf] replaced on 2025-03-13 21:14:31
Authors: Samuel Bonaya Buya
Comments: 34 Pages.
This research introduces a novel classification system for composite numbers based on their least common prime factor (LCPF). The goal is to develop an efficient sieve for distinguishing prime numbers from composite numbers. A mathematical framework is presented to define logical formulae for different subsets of composite numbers. Additionally, a new formula for estimating the number of primes up to a given value is proposed. The paper also explores a graphical approach to factorization, providing an alternative method for decomposing composite numbers into their prime components. Several examples are presented to illustrate the classification system in action. Finally, the new classification system of composite numbers is used to prove the Binary Goldbach conjecture.
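For illustration, here is a minimal Python sketch, not taken from the paper: it reads "least common prime factor" as the smallest prime factor of n, buckets composites by that factor using a standard sieve, and recovers the primes as a by-product. All names (e.g. `classify_by_lpf`) are ours.

```python
# Hedged sketch: bucket each composite n <= limit by its least prime factor
# (one plausible reading of the paper's LCPF classification), via a sieve.
from collections import defaultdict

def classify_by_lpf(limit):
    """Return (primes <= limit, {least prime factor: composites with that LPF})."""
    lpf = list(range(limit + 1))            # lpf[n] == n means n is unmarked so far
    for p in range(2, int(limit**0.5) + 1):
        if lpf[p] == p:                     # p is prime
            for multiple in range(p * p, limit + 1, p):
                if lpf[multiple] == multiple:
                    lpf[multiple] = p       # first prime to hit n is its LPF
    classes = defaultdict(list)
    primes = []
    for n in range(2, limit + 1):
        if lpf[n] == n:
            primes.append(n)                # never marked, hence prime
        else:
            classes[lpf[n]].append(n)       # composite, bucketed by its LPF
    return primes, dict(classes)

primes, classes = classify_by_lpf(30)
print(primes)       # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(classes[3])   # composites whose least prime factor is 3: [9, 15, 21, 27]
print(len(primes))  # a direct count of primes up to 30 (pi(30) = 10)
```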
Category: Data Structures and Algorithms
[3] viXra:2502.0044 [pdf] replaced on 2025-03-13 08:36:57
Authors: Masataka Ohta
Comments: 4 Pages.
Errors in computations depend on their arguments and, in general, differ from argument to argument. Though Shor used a simple error model in which a qubit is disturbed/decohered locally only by its environment state, he overlooked the fact that the initial environment state around a qubit is output from a QEC (Quantum Error Correction) encoder and is affected by, and entangled with, all the argument qubits used to compute the qubit, which makes the errors depend on the argument qubits. Because the linear superposition used for quantum parallelism keeps these errors distinct, usual QEC schemes simply do not work. It is demonstrated that the Shor code fails to correct a single-qubit error if an input qubit to the encoder is entangled with an external qubit. Quantum block codes cannot correct errors on qubits of a block that are entangled with qubits outside the block. Though it may be possible to construct an improved QEC circuit that corrects the N different errors arising in N quantum parallel computations, detecting and correcting N different errors requires O(N) information, making the hardware complexity of the circuit O(N). This is no better than N classical parallel computations on N parallel hardware units, which means quantum supremacy with QEC is denied.
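The entangled-input scenario the abstract turns on can be written out in standard notation. The following is a sketch in our notation, with hypothetical external states |e_0>, |e_1>; it sets up the scenario only and does not verify the paper's conclusion.

```latex
% Sketch of the entangled-input setup (notation ours, not the paper's).
% An input qubit already entangled with an external qubit E:
\[
  \lvert \Psi \rangle
  = \alpha\,\lvert 0\rangle_{\mathrm{in}}\lvert e_0\rangle_{E}
  + \beta\,\lvert 1\rangle_{\mathrm{in}}\lvert e_1\rangle_{E}.
\]
% A linear QEC encoder $U_{\mathrm{enc}}$ (e.g. the Shor code) acts only on
% the input register, so the entanglement with E survives encoding:
\[
  (U_{\mathrm{enc}} \otimes I_{E})\,\lvert \Psi \rangle
  = \alpha\,\lvert \bar{0}\rangle\lvert e_0\rangle_{E}
  + \beta\,\lvert \bar{1}\rangle\lvert e_1\rangle_{E},
\]
% where $\lvert \bar{0}\rangle, \lvert \bar{1}\rangle$ are the logical code
% words. The paper's claim is that errors correlated with the state of E
% then fall outside the error model the code's syndrome measurements assume.
```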
Category: Data Structures and Algorithms
[2] viXra:2502.0043 [pdf] replaced on 2025-03-12 23:17:27
Authors: Sanjeev Saxena
Comments: 6 Pages. Expanded
There are several notions of duality between lines and points. In this note, it is shown that all of these can be studied in a unified way. Most interesting properties are independent of the specific choice. It is also shown that a dual mapping can either be its own inverse or preserve relative order, but not both. Generalisation to higher dimensions is also discussed. An elementary and very intuitive treatment of the relationship between arrangements in d+1 dimensions and k-nearest-neighbour searching in d dimensions is also given.
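The stated dichotomy can be checked concretely. Below is a minimal Python sketch, ours rather than the note's, using two classical planar dualities (a point is a pair (a, b); a non-vertical line y = m*x + c is the pair (m, c)): mapping A is an involution but does not preserve relative order, while mapping B preserves relative order but is not an involution.

```python
# Hedged sketch: two classical point-line dualities, checked on random inputs,
# illustrating the dichotomy: self-inverse or order-preserving, never both.
import random

def above(pt, ln):
    """True if point pt = (x, y) lies strictly above line ln: y = m*x + c."""
    (x, y), (m, c) = pt, ln
    return y > m * x + c

# Duality A: point (a, b) <-> line y = a*x - b.
def dualA_point(p):
    a, b = p
    return (a, -b)          # interpreted as line coefficients (m, c)

def dualA_line(l):
    m, c = l
    return (m, -c)          # interpreted as a point

# Duality B: point (a, b) -> line y = a*x + b; line (m, c) -> point (-m, c).
def dualB_point(p):
    return p                # the pair is reused directly as line coefficients

def dualB_line(l):
    m, c = l
    return (-m, c)

random.seed(0)
for _ in range(1000):
    p = (random.uniform(-5, 5), random.uniform(-5, 5))
    l = (random.uniform(-5, 5), random.uniform(-5, 5))
    # A is an involution on points ...
    assert dualA_line(dualA_point(p)) == p
    # ... but flips which object is above: p above l <=> l's dual above p's dual.
    assert above(p, l) == above(dualA_line(l), dualA_point(p))
    # B preserves relative order (up to exact incidence, measure zero here) ...
    assert above(p, l) == (not above(dualB_line(l), dualB_point(p)))
    # ... but is not an involution: applying it twice reflects the x-coordinate.
    assert dualB_line(dualB_point(p)) == (-p[0], p[1])
print("both dualities behave as the note describes")
```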
Category: Data Structures and Algorithms
[1] viXra:2502.0010 [pdf] submitted on 2025-02-01 21:09:04
Authors: Donald Mortvedt
Comments: 3 Pages. (Note by viXra Admin: the abstract should appear after the article title; scientific references should be cited and listed; and AI-assisted articles are in general not acceptable)
The P vs. NP problem is one of the most fundamental open questions in computational complexity. This paper presents a Prime Mover Proof, a self-verifying argument that establishes P ≠ NP. The proof asserts that proving P ≠ NP is itself an NP problem, meaning its difficulty serves as direct empirical evidence that NP is distinct from P. To reinforce this result, we present three supporting mathematical proofs: 1. Set-Theoretic Proof: establishing the fundamental separation between P and NP. 2. Constructive Proof: demonstrating that proving P ≠ NP is an NP problem. 3. Reductio ad Absurdum Proof: showing the contradiction that follows if P = NP were assumed. We introduce a computational framework based on Origin, Approach Space, and Destination Space, providing a structured model for decision problems. Additionally, we clarify how truth tables extend to NP problems, including weighted solution spaces such as knapsack-style problems. By combining logical elegance with mathematical rigor, this proof offers a compelling case for P ≠ NP that is direct, self-verifying, and independent of reductionist assumptions. We welcome further analysis and discussion from the computational complexity community.
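The truth-table view of knapsack-style problems the abstract mentions can be made concrete. The following Python sketch is a standard textbook illustration, not the paper's framework: the "truth table" of candidate subsets has 2^n rows, yet any single certificate is verifiable in polynomial time, which is the verify/search asymmetry that P vs. NP formalises. The instance data below are made up for illustration.

```python
# Hedged sketch: exhaustive search over the weighted "truth table" of a tiny
# knapsack instance vs. polynomial-time verification of one certificate.
from itertools import product

weights, values = [3, 4, 5, 6], [4, 5, 7, 8]   # hypothetical instance
capacity, target = 10, 12

def verify(certificate):
    """Polynomial-time check of one candidate subset (a 0/1 vector)."""
    w = sum(wi for wi, bit in zip(weights, certificate) if bit)
    v = sum(vi for vi, bit in zip(values, certificate) if bit)
    return w <= capacity and v >= target

# Exhaustive search walks all 2**n rows of the solution space.
rows = list(product((0, 1), repeat=len(weights)))
solutions = [c for c in rows if verify(c)]
print(len(rows))    # 16 candidate rows for n = 4 items
print(solutions)    # each accepted certificate was checked in O(n) time
```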
Category: Data Structures and Algorithms