Smoothing the Path to Post-Quantum Security
New research establishes tighter bounds on code smoothing, a critical process for building secure cryptographic systems resilient to attacks from future quantum computers.

![The study demonstrates how variations in Borel parameters influence hadronic coupling constants, specifically the relationship between [latex]G_{\chi_{c1}\rho Z_{c}^{-}}[/latex] and [latex]G_{\chi_{c0}\pi Z_{c}^{+}}[/latex], suggesting that even fundamental constants may be subject to shifts dependent on the framework used to observe them.](https://arxiv.org/html/2603.18877v1/x1.png)
New calculations using QCD sum rules offer theoretical predictions for the strong decays of exotic hidden-charm tetraquark states, paving the way for experimental validation.

New research reveals that the pattern of uncertainty during an AI’s thought process is a more accurate indicator of a correct answer than the overall level of doubt.

Researchers have developed a refined differential cryptanalysis technique targeting the SIMON32 algorithm, improving the efficiency of security assessments for resource-constrained devices.

![CORE dissects input features into components aligned with classification ([latex]z_{\parallel}[/latex]) and orthogonal residuals ([latex]z_{\perp}[/latex]); the latter encodes a membership signal demonstrably more discerning than simple confidence metrics and inherently undetectable by methods relying solely on logits.](https://arxiv.org/html/2603.18290v1/x5.png)
Researchers have developed a novel method for reliably detecting when artificial intelligence systems encounter data outside of their training, improving safety and trustworthiness.

A new framework uses fuzzy logic to intelligently embed hidden data within images, balancing imperceptibility and resilience against detection.

![The study demonstrates a nuanced correspondence between exact diagonalization and an annealing-based neural network ansatz, specifically for the two-site Hubbard model at [latex]U=3[/latex], [latex]\beta=5[/latex], [latex]\mu=2[/latex], and with [latex]N_t=20[/latex] states, suggesting the network accurately captures the system’s underlying correlations despite the inherent complexity of fermionic systems.](https://arxiv.org/html/2603.18205v1/x5.png)
Researchers are employing deep generative models to overcome long-standing challenges in accurately simulating complex quantum materials at finite density.

Researchers have developed a process-control framework designed to significantly improve the trustworthiness of large language models and reduce the risk of fabricated responses.

A new fuzzing framework, TenSure, systematically uncovers errors in the increasingly complex process of compiling code for sparse tensors.

New research reveals that high accuracy in phishing detection does not necessarily translate to real-world security, as attackers can bypass defenses through low-cost manipulation of the features detectors rely on.