Factoring the Unfactorable: A New Polynomial Approach

Researchers have developed a streamlined method for factoring polynomials over finite rings, offering a significant advance for cryptographic code construction.

New research reveals that deterministic sparse FFT algorithms, while promising speedups, can be surprisingly vulnerable to carefully crafted attacks, demanding robust safety guarantees.

A new quantization technique significantly reduces the memory demands of key-value caches, enabling more efficient deployment of large language models.
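To give a sense of why quantizing a key-value cache saves memory, here is a minimal, illustrative sketch using simple per-channel symmetric int8 quantization. This is a generic baseline, not the paper's actual method; the array shapes and the quantization scheme are assumptions for illustration.

```python
import numpy as np

def quantize_int8(x):
    # Per-channel symmetric int8 quantization (illustrative baseline,
    # not the technique from the paper).
    scale = np.abs(x).max(axis=-1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Reconstruct an approximation of the original float32 values.
    return q.astype(np.float32) * scale

# A hypothetical float32 KV-cache entry: (heads, seq_len, head_dim).
kv = np.random.randn(8, 1024, 64).astype(np.float32)
q, scale = quantize_int8(kv)

# int8 storage is 4x smaller than float32, plus a small per-channel
# overhead for the scales.
print(kv.nbytes, q.nbytes)
```

The trade-off is reconstruction error bounded by half a quantization step per element, which is why more sophisticated schemes (per-token scales, outlier handling) are typically needed at lower bit widths.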

Researchers have developed a post-quantum cryptographic scheme to fortify the rapidly expanding world of flying ad-hoc networks against emerging threats.
A new machine-checked proof establishes a universal foundation for verifying the effectiveness of masking countermeasures that protect post-quantum cryptographic implementations against side-channel attacks.
As the Internet of Things expands, this review explores how decentralized security models are reshaping trust, privacy, scalability, and resilience at the network edge.
![The mean transfer characteristic ratio (TCR) fluctuates with bit position, varying consistently across all five concurrency levels, a pattern underscored by the standard deviation observed across those levels.](https://arxiv.org/html/2604.17249v1/x1.png)
New research reveals a critical vulnerability in how large language models manage memory, potentially allowing subtle data corruption to alter outputs without detection.
![Within the LoRaQ framework on the PixArt-Σ architecture, applying a low-rank approximation with MXFP8e4 quantization for the low-rank branch and activations, and MXFP4e2 for the residual branch, measurably impacts generative quality, as corroborated by quantitative results presented elsewhere.](https://arxiv.org/html/2604.18117v1/sections/04_experiments/figures/table4_fp8/sample_7/r128.png)
A new method dramatically reduces the computational demands of large AI models without sacrificing performance by intelligently compressing their core components.