Pinpointing Nodes: The Limits of Graph Positioning
![On random 3-regular graphs, a phase-transition-like behavior emerges where localization error diminishes not with increasing anchor count, but rather with a budget ratio [latex]\rho_{eng}[/latex] that consistently organizes the transition across varying spectral dimensions, indicating the collapse of observation-map fibers and a system governed more by resource allocation than sheer connectivity.](https://arxiv.org/html/2603.25030v1/x1.png)
New research establishes fundamental limits on how accurately nodes can be located within a graph using a blend of distance and structural information.

New research focuses on detecting and classifying errors in how large language models arrive at answers, moving beyond simple content moderation.

As fog computing expands, machine learning-powered resource provisioning becomes increasingly vulnerable, and this research details a novel approach to proactively fortify these systems.
As Large Language Models become increasingly integrated into software, traditional testing methods struggle to keep pace, necessitating innovative approaches to ensure reliable performance.
As organizations increasingly leverage the power of multiple large language models, optimizing how those models are queried becomes critical for both performance and cost-effectiveness.
Researchers are developing new systems that ground AI decision-making in verified knowledge and robust safety protocols, moving beyond simple prompt engineering.
![GlowQ establishes a framework where diffusion models aren't limited to generating images from noise, but can instead be conditioned on any arbitrary data to produce highly customizable outputs, effectively transforming the diffusion process into a programmable function governed by [latex]x_t = f(\epsilon, x_t)[/latex].](https://arxiv.org/html/2603.25385v1/x1.png)
Researchers have developed a technique to dramatically reduce the size of large language models without sacrificing performance, paving the way for wider accessibility and deployment.
![This research demonstrates a selective cost reduction and error control mechanism (SCoRE) that optimizes predictions in both drug discovery and clinical applications. In drug discovery, SCoRE minimizes the average cost [latex]L_{n+1}\mathds{1}\{Y_{n+1}\leq c\}[/latex] among selected compounds while maintaining activity below a threshold of [latex]\alpha=1[/latex]; in clinical prediction, it identifies health outcome predictions with minimal error [latex]f(X_{n+1})\approx Y_{n+1}[/latex] and controls total squared error during deployment, as validated through simulations on a semi-synthetic dataset.](https://arxiv.org/html/2603.24704v1/x1.png)
New research highlights how unconscious biases and limited representation hinder women’s success in artificial intelligence and proposes solutions for a more equitable future.
A new system verifies existing digital certificates on-chain using zero-knowledge proofs, enabling privacy-preserving and legally sound decentralized identity.
![The partition sum [latex]\mathcal{Z}_{\alpha,s}[/latex] for Ising spins on the dual square lattice of a surface code defines a relationship between reference error strings [latex]\mathcal{C}_{z}^{\mathrm{ref}}[/latex] and the configuration of Ising interactions [latex]\eta_{\bm{r}\bm{r}^{\prime}}[/latex], a connection mathematically expressed as a product of transfer matrices detailed in Eq. (13).](https://arxiv.org/html/2603.25665v1/Figures/RBIM.png)
Researchers have developed a theoretical framework to map the performance of quantum error correction in surface codes, revealing distinct operational regimes for reliable quantum computation.