Beyond the Honest Majority: Securing Decentralized Computation
New research demonstrates a path to reliable, large-scale decentralized computing even when malicious actors control the majority of nodes.
Researchers have developed a graph-based method for verifying the accuracy of Single Transferable Vote elections, bolstering confidence in fair and secure outcomes.
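
The summary does not detail the graph construction, but the object being verified is the round-by-round elimination logic of an STV count. As a point of reference, here is a minimal single-winner instant-runoff count (the one-seat special case of STV) in Python; the paper's graph-based verification method is not reproduced here, and the ballots are invented for illustration.

```python
from collections import Counter

def irv_winner(ballots):
    """Single-winner instant-runoff count: the one-seat special case of STV.

    Each ballot ranks candidates in preference order. The candidate with
    the fewest first-choice votes is eliminated until someone holds a
    majority among the surviving candidates.
    """
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        tallies = Counter({c: 0 for c in candidates})
        for ballot in ballots:
            for c in ballot:
                if c in candidates:       # highest-ranked surviving choice
                    tallies[c] += 1
                    break
        leader, votes = tallies.most_common(1)[0]
        if 2 * votes > sum(tallies.values()):
            return leader
        candidates.remove(min(tallies, key=tallies.get))

# Invented ballots: "B" is eliminated first, and its support flows to "C".
ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
print(irv_winner(ballots))  # C
```
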
![The study demonstrates that the calculated barrier heights for the $PO_3^{3-} + H_2O$ reaction are sensitive to the size of the virtual orbital space included in the active space calculation, with results converging towards full-space coupled cluster methods (CCSD, UCCSD(4), and CCSD(T)) and exhibiting distinctions between interacting (i-) and composite (c-) UCCSD(4)/MP2 approaches utilizing either canonical or natural orbitals.](https://arxiv.org/html/2602.04783v1/x6.png)
Researchers have developed a method to significantly reduce the computational demands of accurate electronic structure calculations without sacrificing precision.
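
The figure hints at the mechanism: shrinking the virtual orbital space, with natural orbitals compressing it more effectively than canonical ones. A standard technique in this spirit is frozen-natural-orbital truncation: diagonalize an approximate (e.g., MP2) virtual-virtual density matrix and keep only the natural orbitals with significant occupation. The NumPy sketch below shows that truncation step alone, with a random low-rank matrix standing in for a real MP2 density; it is not the paper's i-/c-UCCSD(4) composite scheme.

```python
import numpy as np

def truncate_virtual_space(d_vv, keep_fraction=0.99):
    """Frozen-natural-orbital style truncation of a virtual orbital space.

    d_vv : approximate virtual-virtual one-particle density matrix
           (e.g., from MP2); its eigenvalues act as natural-orbital
           occupation numbers.
    Returns the transformation whose columns span the retained
    natural virtual orbitals, plus their occupations.
    """
    occ, vecs = np.linalg.eigh(d_vv)
    order = np.argsort(occ)[::-1]          # largest occupations first
    occ, vecs = occ[order], vecs[:, order]
    # Keep enough natural orbitals to recover `keep_fraction` of the
    # total occupation carried by the approximate density.
    cum = np.cumsum(occ) / occ.sum()
    n_keep = int(np.searchsorted(cum, keep_fraction)) + 1
    return vecs[:, :n_keep], occ[:n_keep]

# Stand-in for a real MP2 density over 50 virtual orbitals (hypothetical).
rng = np.random.default_rng(0)
a = rng.normal(size=(50, 8))
d_vv = a @ a.T / 50.0                      # low-rank, positive semidefinite
basis, occ = truncate_virtual_space(d_vv)
print(f"kept {basis.shape[1]} of 50 virtual orbitals")
```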

A new analysis reveals that smart contracts generated by artificial intelligence models often harbor significant security flaws, demanding greater scrutiny before deployment.
Researchers are exploring a unique method of concealing data within numerical sequences, leveraging the mathematical properties of semigroups.
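
Assuming the construction rests on numerical semigroups (subsets of the nonnegative integers containing 0 and closed under addition), the key property is that coprime generators leave only finitely many "gaps". The Python sketch below tests membership and then demonstrates a toy embedding of our own invention, labeled hypothetical; it is not the paper's scheme.

```python
def semigroup_members(generators, limit):
    """Membership table for the numerical semigroup <generators> up to limit.

    A numerical semigroup contains 0, is closed under addition, and (for
    coprime generators) misses only finitely many integers -- the 'gaps'.
    """
    reachable = [False] * (limit + 1)
    reachable[0] = True
    for n in range(1, limit + 1):
        reachable[n] = any(g <= n and reachable[n - g] for g in generators)
    return reachable

# The semigroup generated by 3 and 5: its gaps are {1, 2, 4, 7}.
member = semigroup_members((3, 5), 20)
print([n for n in range(21) if not member[n]])  # [1, 2, 4, 7]

# Hypothetical toy encoding: hide one bit per value of a cover sequence by
# nudging each value upward until it is a member (bit 1) or a gap (bit 0).
# Because gaps are finite (the largest here is 7), bit-0 positions only
# work for small cover values -- a limitation of this toy, not the paper.
def embed(bits, cover, member):
    return [next(v for v in range(c, len(member))
                 if member[v] == bool(b)) for b, c in zip(bits, cover)]

print(embed([1, 0, 1, 0], [1, 1, 3, 6], member))  # [3, 1, 3, 7]
```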

A new FPGA-based co-processor efficiently unifies multiple cryptographic algorithms to deliver significant performance and energy gains for resource-constrained devices.
A new approach to authentication replaces traditional certificate-based systems with a streamlined, identity-based encryption scheme designed for the demands of modern 5G networks and cloud infrastructure.
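
The defining feature of identity-based encryption is that any string (a phone's IMSI, a service name) serves directly as a public key, with a trusted key-generation centre extracting the matching private key, so no certificate lookup or PKI chain is needed. Below is a minimal interface sketch, assuming a Boneh-Franklin-style structure; no cryptography is implemented, and the names are ours, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class MasterKeys:
    public_params: bytes   # published to everyone
    master_secret: bytes   # held only by the key generation centre (KGC)

# Hypothetical interface only. In Boneh-Franklin-style IBE these four
# operations are realized with bilinear pairings on elliptic curves;
# this sketch just fixes the message flow.

def setup() -> MasterKeys:
    """Run once by the KGC to create system-wide parameters."""

def extract(master_secret: bytes, identity: str) -> bytes:
    """KGC derives the private key for an identity, e.g. 'imsi:00101...'."""

def encrypt(public_params: bytes, identity: str, msg: bytes) -> bytes:
    """Anyone can encrypt to a bare identity string -- no certificate."""

def decrypt(private_key: bytes, ciphertext: bytes) -> bytes:
    """Only the holder of the extracted private key can decrypt."""
```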

Researchers have developed ZOR filters, a novel approach to representing sets of data that offers a compelling trade-off between speed and memory usage.
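
The summary does not describe the ZOR construction itself. For context, the classic Bloom filter below is the standard baseline such filters trade off against: no false negatives, a tunable false-positive rate, and a few hash probes per query. This is explicitly the Bloom construction, not ZOR.

```python
import hashlib

class BloomFilter:
    """Classic Bloom filter for approximate set membership.

    Shown only as the familiar baseline that newer filter structures
    (XOR filters, and presumably ZOR filters) are measured against.
    """

    def __init__(self, num_bits, num_hashes):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray((num_bits + 7) // 8)

    def _positions(self, item):
        # Derive k independent bit positions via personalized BLAKE2b.
        for i in range(self.num_hashes):
            digest = hashlib.blake2b(item.encode(),
                                     person=bytes([i]) * 16).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(item))

bf = BloomFilter(num_bits=1024, num_hashes=3)
bf.add("alice")
print("alice" in bf, "mallory" in bf)  # True, (almost certainly) False
```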

A new method dramatically improves the performance of large language models when using extremely low-precision weights, opening the door to more accessible and efficient AI.
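
The paper's technique is not spelled out in the summary, but a generic round-to-nearest, symmetric per-channel quantizer makes concrete what "extremely low-precision weights" means: at 2 bits each weight collapses to one of four values, which is why naive quantization degrades accuracy. A minimal NumPy sketch of that baseline, not the paper's method:

```python
import numpy as np

def quantize_per_channel(w, bits=2):
    """Symmetric per-output-channel round-to-nearest quantization.

    Each row of the weight matrix gets its own scale so that the
    largest-magnitude weight maps to the edge of the signed integer grid.
    """
    qmax = 2 ** (bits - 1) - 1                       # e.g. 1 for 2-bit
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 16)).astype(np.float32)
q, scale = quantize_per_channel(w, bits=2)
w_hat = q * scale                                    # dequantize
print("mean abs error:", float(np.abs(w - w_hat).mean()))
```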

A new method previews future data to significantly improve the compression of large language models without sacrificing accuracy.
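
"Previewing future data" reads like calibration-aware compression: rather than fitting quantization parameters to the weights alone, the compressor evaluates candidate settings against sample inputs the model will actually see. The sketch below searches clipping scales that minimize output error on calibration data; it is a generic illustration of the idea, not the paper's algorithm.

```python
import numpy as np

def calibrated_scale(w, x_calib, bits=4, grid=50):
    """Pick a quantization scale by previewing calibration data.

    Instead of scaling to the weight range alone, search candidate
    clipping scales and keep the one minimizing mean output error on
    the sample inputs `x_calib`.
    """
    qmax = 2 ** (bits - 1) - 1
    y_ref = x_calib @ w.T                        # full-precision outputs
    best_scale, best_err = None, np.inf
    for frac in np.linspace(0.3, 1.0, grid):     # shrink the clipping range
        scale = frac * np.abs(w).max() / qmax
        q = np.clip(np.round(w / scale), -qmax - 1, qmax)
        err = np.abs(x_calib @ (q * scale).T - y_ref).mean()
        if err < best_err:
            best_scale, best_err = scale, err
    return best_scale

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 32))
x = rng.normal(size=(64, 32))
print("chosen scale:", calibrated_scale(w, x))
```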