Squeezing Speech: Adaptive Quantization for Robust ASR
![Dynamic quantization in encoder-decoder automatic speech recognition models addresses error propagation through a novel calibration method: layer-wise scaling factors [latex]\alpha_{\ell}[/latex], computed from error indicators, correct the update direction. This refines standard post-training quantization [latex]Eq.(1)[/latex], which calibrates the encoder with audio data and the decoder with text and quantized encoder outputs, as defined in [latex]Eq.(9)[/latex].](https://arxiv.org/html/2601.02455v1/x3.png)
New research tackles the challenges of compressing automatic speech recognition models without sacrificing accuracy, focusing on how errors accumulate during quantization.
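To make the error-accumulation problem concrete, here is a minimal sketch of symmetric per-tensor post-training quantization: weights are rounded onto an integer grid defined by a single scale factor, and the rounding residue is the per-layer error that compounds through a deep encoder-decoder stack. This is a generic illustration, not the paper's calibration method; the function name and bit width are illustrative.

```python
import numpy as np

def quantize_dequantize(w, num_bits=8):
    """Symmetric per-tensor post-training quantization (illustrative sketch)."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(w).max() / qmax          # one scale factor per tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                         # dequantized weights with rounding error

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
w_hat = quantize_dequantize(w)
# Per-element error is bounded by half the quantization step (scale / 2),
# but in a deep network these small residues propagate layer to layer.
max_err = np.abs(w - w_hat).max()
```

Calibration methods like the one summarized above aim to choose scales so that these residues do not reinforce each other across layers.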

Researchers explore how quantum-enhanced neural networks can optimize contextual bandit algorithms, potentially offering performance gains with reduced computational demands.

New research reveals that quantum neural networks can match or exceed the robustness of traditional methods in noisy healthcare speech applications.
Researchers have discovered a powerful connection between shifted Yangians and the critical cohomology of quiver varieties, offering new insights into representation theory and quantum geometry.
![Current wireless systems rely on layered error correction ([latex]HARQ[/latex] at the MAC layer and [latex]ARQ[/latex] at the RLC layer), both dependent on feedback loops. Forward erasure correction, such as network coding, offers a path toward significantly reduced latency by preemptively addressing potential errors rather than reacting to them.](https://arxiv.org/html/2601.01645v1/Figures/Final/intro_simplified.png)
A new look at error correction techniques reveals how network coding can dramatically improve latency and efficiency in next-generation wireless networks.
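The latency advantage of forward erasure correction comes from sending redundancy proactively instead of waiting for feedback. A minimal sketch of the idea (not the paper's scheme) is a single XOR parity packet over a block of equal-length packets: if any one packet is lost, the receiver recovers it immediately from the survivors, with no retransmission round trip.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Sender: transmit three data packets plus one coded repair packet.
packets = [b"pkt0data", b"pkt1data", b"pkt2data"]
parity = reduce(xor_bytes, packets)

# Receiver: packet 1 is lost in transit; XOR the parity with the
# surviving packets to reconstruct it without any feedback loop.
received = [packets[0], None, packets[2]]
recovered = reduce(xor_bytes,
                   [p for p in received if p is not None] + [parity])
# recovered now equals the lost packet, b"pkt1data"
```

Practical network-coding schemes generalize this to random linear combinations over a finite field, which can repair multiple losses per block at the cost of a little extra bandwidth.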
Researchers have developed Bithoven, a formally verified language designed to make Bitcoin smart contracts both safer and easier to develop.

A new framework uses artificial intelligence to mimic human reasoning and detect fraudulent activity in audio and text conversations with improved speed and accuracy.
![The computational scaling of the WISE algorithm reveals a performance trade-off: while total solution time and the identification of divergent Weinberg eigenvalues scale quadratically with the number of scattering channels [latex] \propto N^{2} [/latex], convergence of the Born series for both the regularized source vector and eigenvectors achieves linear scaling [latex] \propto N [/latex], suggesting a fundamental limit to efficiency as system complexity increases.](https://arxiv.org/html/2601.01159v1/x4.png)
Researchers have developed a new algorithm that dramatically improves the efficiency of quantum scattering calculations, potentially unlocking simulations of more complex molecular collisions.
Researchers are exploring a surprising new application for SHA-256 ASICs: harnessing their inherent timing variations to build energy-efficient physical reservoir computing systems.
Researchers have developed a new QEMU plugin, NQC2, that significantly accelerates code coverage analysis for embedded systems while keeping instrumentation overhead low.