When AI Assistants Spill the Beans: A New Privacy Threat

The increasing use of AI agents to access external tools creates a significant privacy risk, as sensitive data can be inadvertently leaked through tool interactions.

New research reveals that imperfect alignment of light’s temporal modes can significantly reduce the secure key rate in continuous-variable quantum key distribution systems.

Researchers have developed NgCaptcha, a novel system designed to effectively mitigate automated abuse by blending computational challenges with AI-resistant image recognition.

A new machine learning approach leverages the inherent geometry of quantum states to improve the efficiency and interpretability of quantum state tomography.

A new distributed system, Lotus, tackles performance limitations in disaggregated memory architectures by radically rethinking how transactions and locking are handled.

Researchers have developed a robust algorithm for reliable data transmission in challenging underwater environments using light, even with faint signals and imprecise timing.

New research reveals that even minor data corruption can severely compromise the performance of promising state-space model architectures like Mamba.

Researchers have developed an automated method to create stronger safeguards against malicious prompts that can hijack large language models.

A new analysis shows that a surprisingly straightforward approach to identifying bug-inducing code changes can match, and even outperform, complex spectrum-based techniques.

A new study determines the absolute minimum key size needed to guarantee secure aggregation of data across a network, even with malicious collaborators and varying privacy demands.