Hunting Exotic Diquarks at the High-Luminosity LHC
![The study establishes a 95% confidence level upper limit on the signal strength multiplier $\mu$ times the machine learning event yield $S_{ev}$ for the fully hadronic decay channel of $Su \rightarrow u\chi \rightarrow u(Wb)$, specifically when the mass of $\chi$ is 2 TeV and the parameter $D$ equals 0.9.](https://arxiv.org/html/2601.11181v1/x4.png)
A new study explores the potential to uncover ultraheavy diquarks decaying into multijet final states at the upcoming High-Luminosity Large Hadron Collider.
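For orientation, in the standard counting-experiment language behind such limit plots (a generic sketch, not necessarily the paper's exact statistical model), the limit scales the expected signal yield on top of the background:

```latex
N_{\mathrm{exp}} = B + \mu\, S_{ev}, \qquad \mu_{95}:\ \mathrm{CL}_s(\mu_{95}) = 0.05
```

The quoted quantity $\mu_{95} \times S_{ev}$ is then the largest number of signal events compatible with the observation at 95% confidence.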

New research reveals a significant reduction in the energy loss experienced by fast-moving particles as they traverse the ultra-hot quark-gluon plasma created in heavy-ion collisions.
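As a point of reference (standard theory background, not the paper's own result), radiative parton energy loss in the BDMPS-Z picture grows with the medium's jet-quenching parameter $\hat{q}$ and the squared path length:

```latex
\langle \Delta E \rangle \;\propto\; \alpha_s\, C_R\, \hat{q}\, L^2
```

so any suppression of $\hat{q}$ in the hottest stage of the plasma translates directly into weaker jet quenching.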

A new study details how a future circular collider could dramatically improve our understanding of the Higgs boson by precisely measuring its decay into tau leptons.
![Attention mechanisms are being refined to address the escalating costs of key-value (KV) caching: Multi-Head Attention's independent projections ($2LHd_h$) give way to compression in Multi-Latent Attention ($Ld_c$, where $d_c \ll d$), then to shared projections in Multi-Query and Grouped-Query Attention, culminating in a novel Low-Rank KV approach that, by maintaining full-rank projections alongside low-rank residual updates ($2L(d_h + Hr)$), achieves a balance between head diversity and caching efficiency comparable to Multi-Latent Attention.](https://arxiv.org/html/2601.11471v1/figures/LowRankKVCache.png)
A new attention mechanism dramatically reduces the memory footprint of large language models without sacrificing performance.
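The per-token cache counts in the figure make the trade-off concrete. Below is a minimal sketch comparing them numerically; the layer count, head count, head dimension, latent dimension, and rank are illustrative assumptions, not the paper's configuration:

```python
# Cached values per token implied by the figure's formulas (illustrative sizes).
# L: layers, H: heads, d_h: head dim, d_c: latent dim, r: residual rank.
L, H, d_h, d_c, r = 32, 32, 128, 512, 8  # assumed values for illustration

mha = 2 * L * H * d_h                # MHA: independent K and V per head
mla = L * d_c                        # MLA: one compressed latent per layer
low_rank_kv = 2 * L * (d_h + H * r)  # shared full-rank K/V plus per-head low-rank residuals

for name, entries in [("MHA", mha), ("MLA", mla), ("Low-Rank KV", low_rank_kv)]:
    print(f"{name:12s} {entries:8,d} values per token")
```

With these numbers, MHA caches 262,144 values per token while MLA and Low-Rank KV land at 16,384 and 24,576 respectively, which is the "comparable efficiency" the figure refers to.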
New research demonstrates how Interval CVaR-based regression models can maintain accuracy even when faced with noisy or corrupted datasets.
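For background, CVaR-based regression typically minimizes the conditional value-at-risk of the per-sample losses via the Rockafellar-Uryasev reformulation; here is a minimal sketch of that plain-CVaR idea on synthetic data (the paper's interval-valued variant is not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

alpha = 0.9  # CVaR focuses on the worst (1 - alpha) fraction of losses

def cvar_objective(params):
    # params = (weights..., t), where t is the auxiliary VaR threshold.
    w, t = params[:-1], params[-1]
    losses = (y - X @ w) ** 2
    # Rockafellar-Uryasev: CVaR_alpha = min_t  t + E[(loss - t)_+] / (1 - alpha)
    return t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)

res = minimize(cvar_objective, x0=np.zeros(X.shape[1] + 1), method="Nelder-Mead")
print("recovered weights:", res.x[:-1].round(2))
```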
![The study reveals how subtle changes in spectator, participant, and stretched-bond orbital energies, specifically the $E^{SIC}[n_i]$, along reaction pathways connecting reactants, transition states, and products directly correlate with the magnitude of the SIC correction to forward and reverse reaction barriers.](https://arxiv.org/html/2601.11454v1/x1.png)
A new analysis reveals the persistent impact of self-interaction errors on the accuracy of density functional theory calculations for reaction barriers, even with modern functionals like SCAN.
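For reference, the per-orbital quantity in the figure follows the Perdew-Zunger form of the self-interaction correction (assuming the standard convention), in which each occupied orbital density $n_i$ removes its own spurious self-Coulomb and self-exchange-correlation energy:

```latex
E^{\mathrm{SIC}} = E^{\mathrm{DFA}}[n_\uparrow, n_\downarrow]
  - \sum_{i}^{\mathrm{occ}} \big( E_{H}[n_i] + E_{xc}[n_i, 0] \big)
```

The shift in a forward barrier is then the difference of this correction between the transition state and the reactants, which is why tracking $E^{SIC}[n_i]$ along the pathway exposes where the barrier error arises.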

A new system intelligently manages memory to accelerate long-form AI interactions and reduce response times.

New research reveals a subtle attack vector where AI agents, designed to leverage external tools, can be tricked into endlessly looping interactions, silently consuming significant computing resources.
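The natural mitigation such a finding suggests is a hard budget on agent-tool round trips. A minimal, hypothetical sketch (the `call_model` and `run_tool` callables are placeholders, not any particular framework's API):

```python
MAX_TOOL_CALLS = 20  # hard budget that breaks otherwise unbounded tool loops

def run_agent(prompt, call_model, run_tool):
    """Drive a tool-using agent loop under an explicit iteration budget."""
    history = [prompt]
    for _ in range(MAX_TOOL_CALLS):
        action = call_model(history)        # model decides: answer, or call a tool
        if action["type"] == "final_answer":
            return action["text"]
        result = run_tool(action["tool"], action["args"])
        history.append(result)              # feed tool output back to the model
    # Budget exhausted: fail loudly instead of silently burning compute.
    raise RuntimeError(f"tool-call budget of {MAX_TOOL_CALLS} exceeded")
```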

A new system embeds schema-aware watermarks directly into assessment documents to proactively deter AI-assisted cheating and establish clear authorship signals.
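To illustrate the general idea (a hypothetical sketch, not the paper's scheme), an authorship signal can be bound to a document by embedding a keyed hash of its normalized content in a schema-level metadata field:

```python
import hashlib
import hmac
import json

def watermark_document(doc: dict, key: bytes) -> dict:
    """Attach a keyed authorship tag as a schema-level field (hypothetical scheme)."""
    canonical = json.dumps(doc, sort_keys=True).encode()  # normalize the body
    tag = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return {**doc, "_watermark": tag}

def verify_document(doc: dict, key: bytes) -> bool:
    """Check that the embedded tag matches the document body."""
    body = {k: v for k, v in doc.items() if k != "_watermark"}
    canonical = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, doc.get("_watermark", ""))
```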

Researchers have developed an adaptive architecture for systematically evaluating and improving the safety of complex AI agents against adversarial attacks.