Satellite Signals: Authenticating Identity from Orbit
![The system establishes secure authentication by challenging a claimant to demonstrate consistent orbital mechanics across a series of randomized temporal intervals [latex]t_1, t_2, \dots, t_N[/latex] within a satellite visibility window, distinguishing legitimate claimants, which exhibit predictable kinematic features derived from ephemeris data, from malicious actors whose forged responses reveal inconsistencies in their claimed trajectory.](https://arxiv.org/html/2603.25576v1/Figure/proposed_CRA.jpg)
A new approach to securing low Earth orbit (LEO) satellite communications leverages the predictable nature of orbital mechanics to verify transmitter authenticity.
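The figure caption describes the core idea: a verifier samples random times in the satellite's visibility window, asks the claimant for its kinematic state at those times, and checks each answer against the state predicted from public ephemeris data. A minimal sketch of that challenge-response loop, with a toy one-dimensional "ephemeris" standing in for a real orbit propagator (all names and numbers below are illustrative, not from the paper):

```python
import random

def ephemeris_position(t):
    """Toy 1-D ephemeris: predicted along-track position (km) at time t (s).
    A real verifier would propagate published orbital elements instead."""
    orbital_speed_km_s = 7.5  # typical LEO ground-track speed, held constant here
    return orbital_speed_km_s * t

def authenticate(claimant_response, window=(0.0, 600.0), n_challenges=8, tol_km=1.0):
    """Challenge the claimant at random times; accept only if every claimed
    position matches the ephemeris prediction within tolerance."""
    times = [random.uniform(*window) for _ in range(n_challenges)]
    for t in times:
        claimed = claimant_response(t)
        expected = ephemeris_position(t)
        if abs(claimed - expected) > tol_km:
            return False  # forged trajectory is inconsistent with the ephemeris
    return True

# A legitimate satellite tracks the ephemeris; a spoofer's claimed
# trajectory is offset and fails every challenge.
legit = lambda t: ephemeris_position(t)
spoofer = lambda t: ephemeris_position(t) + 50.0

print(authenticate(legit))    # True
print(authenticate(spoofer))  # False
```

Randomizing the challenge times is what prevents an attacker from precomputing answers; the paper's version works on physical-layer kinematic features rather than reported positions.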
![On random 33-regular graphs, a phase-transition-like behavior emerges where localization error diminishes not with increasing anchor count, but rather with a budget ratio [latex]\rho_{eng}[/latex] that consistently organizes the transition across varying spectral dimensions, indicating the collapse of observation-map fibers and a system governed more by resource allocation than sheer connectivity.](https://arxiv.org/html/2603.25030v1/x1.png)
New research establishes fundamental limits on how accurately nodes can be located within a graph using a blend of distance and structural information.

New research focuses on detecting and classifying errors in how large language models arrive at answers, moving beyond simple content moderation.

As fog computing expands, machine learning-powered resource provisioning becomes increasingly vulnerable, and this research details a novel approach to proactively fortify these systems.
As Large Language Models become increasingly integrated into software, traditional testing methods struggle to keep pace, necessitating innovative approaches to ensure reliable performance.
As organizations increasingly leverage the power of multiple large language models, optimizing how those models are queried becomes critical for both performance and cost-effectiveness.
Researchers are developing new systems that ground AI decision-making in verified knowledge and robust safety protocols, moving beyond simple prompt engineering.
![GlowQ establishes a framework where diffusion models aren't limited to generating images from noise, but can instead be conditioned on arbitrary data to produce highly customizable outputs, effectively transforming the diffusion process into a programmable function governed by [latex]x_t = f(\epsilon, t)[/latex].](https://arxiv.org/html/2603.25385v1/x1.png)
Researchers have developed a technique to dramatically reduce the size of large language models without sacrificing performance, paving the way for wider accessibility and deployment.
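Reducing model size without sacrificing performance typically belongs to the post-training quantization family: weights are mapped to low-bit integers plus a per-tensor scale. The sketch below shows a generic symmetric int8 scheme to illustrate the idea; it is not the paper's specific method.

```python
# Generic symmetric int8 quantization: store int8 values plus one float scale.
# Memory drops roughly 4x versus float32 at a small, bounded precision cost.

def quantize_int8(weights):
    """Map float weights to int8 codes and a single per-tensor scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [x * scale for x in q]

weights = [0.02, -1.27, 0.64, 0.005]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored value lies within half a quantization step of the original.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, restored))
```

The rounding error is bounded by half the scale, which is why accuracy often survives quantization; production methods add refinements such as per-channel scales and outlier handling.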
![This research demonstrates a selective cost reduction and error control mechanism, SCoRE, that optimizes predictions in both drug discovery and clinical applications; in drug discovery, SCoRE minimizes average cost [latex]L_{n+1}\mathds{1}\{Y_{n+1}\leq c\}[/latex] among selected compounds while maintaining activity below a threshold of [latex]\alpha=1[/latex], and in clinical prediction, it identifies health outcome predictions with minimal error [latex]f(X_{n+1})\approx Y_{n+1}[/latex] and controls total squared error during deployment, as validated through simulations on a semi-synthetic dataset.](https://arxiv.org/html/2603.24704v1/x1.png)
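The caption's notion of selecting predictions while controlling total error can be sketched with a simple greedy rule: keep the predictions with the smallest estimated errors until an error budget would be exceeded. This illustrates only the flavor of the objective; SCoRE's actual calibration procedure (a conformal-style method with formal guarantees) is different.

```python
# Hypothetical sketch: greedy selective prediction under a total-error budget.
# `est_sq_errors` would come from a calibrated error model in practice.

def select_predictions(preds, est_sq_errors, budget):
    """Keep lowest-estimated-error predictions while staying within `budget`."""
    order = sorted(range(len(preds)), key=lambda i: est_sq_errors[i])
    kept, total = [], 0.0
    for i in order:
        if total + est_sq_errors[i] > budget:
            break  # adding this prediction would exceed the error budget
        kept.append(i)
        total += est_sq_errors[i]
    return kept, total

preds = [1.2, 0.4, 2.9, 0.8]
errs = [0.10, 0.02, 0.50, 0.05]
kept, total = select_predictions(preds, errs, budget=0.2)
# Indices are kept in order of estimated error until the budget binds.
```

Here indices 1, 3, and 0 are kept (cumulative estimated error 0.17), and index 2 is rejected because it would push the total past the budget.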
New research highlights how unconscious biases and limited representation hinder women’s success in artificial intelligence and proposes solutions for a more equitable future.
A new system verifies existing digital certificates on-chain using zero-knowledge proofs, enabling privacy-preserving and legally sound decentralized identity.