Securing Finance in a Quantum World
A new protocol aims to safeguard financial transactions against the looming threat of quantum computing by combining advanced cryptography and privacy-enhancing technologies.

Researchers demonstrate a new method for embedding backdoors in federated learning systems by subtly manipulating model structures.

Researchers are exploring the potential of geometrically inspired error correction to overcome the inherent noise challenges of analog in-memory computing systems.

A new perspective demonstrates a fundamental connection between orthogonal projection methods and the Feshbach-Schur projection, offering a powerful tool for accurately describing systems governed by the Pauli exclusion principle.

A new analysis of muon-electron conversion offers stringent limits on potential violations of fundamental symmetries and opens the door to even more sensitive searches with upcoming experiments.

A new system, Mica, enables confidential computing by enforcing user-defined policies that govern communication between secure enclaves, even when the underlying components aren’t fully trusted.

A novel quality scoring system aims to ensure consistent and trustworthy results when running large language models across decentralized networks.

New research reveals fundamental tradeoffs between query complexity and error rates for a class of codes designed for efficient data retrieval.

Researchers demonstrate a practical implementation of a non-orthogonal HARQ-CC scheme using software-defined radio to enhance spectral efficiency and reduce latency for future 6G networks.

![The study demonstrates that increasing model size, measured in billions of parameters on a logarithmic ($\log_{10}$) scale, generally correlates with improved robustness against diverse perturbations, including mathematical errors, extraneous steps, unit-conversion issues, skipped steps, and susceptibility to sycophancy, though the precise nature of this relationship differs depending on the specific type of perturbation applied.](https://arxiv.org/html/2603.03332v1/2603.03332v1/plots/accuracy_vs_model_size/Sycophancy.png)
New research reveals that even the most powerful language models are surprisingly vulnerable to subtle disruptions in their reasoning processes.