arXiv Research Digest

November 01, 2025 • 45 papers across 5 interests
🔬

Efficient ML / Edge AI

🟢 Applied

Inference-Cost-Aware Dynamic Tree Construction for Efficient Inference in Large Language Models

💡 This research speeds up LLM inference with cost-aware speculative decoding trees.
Large Language Models (LLMs) face significant inference latency challenges stemming from their autoregressive design and large size. To address this, speculative decoding enables the simultaneous generation and validation of multiple tokens.
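As a toy illustration of the draft-and-verify loop underlying speculative decoding, here is a minimal Python sketch. The drafter, the verifier, and the acceptance rule are all stand-ins, not the paper's tree-construction method:

```python
import random

random.seed(0)

def draft_model(prefix, k=4):
    # Hypothetical cheap drafter: proposes k candidate tokens.
    return [random.choice("abcd") for _ in range(k)]

def target_accepts(prefix, token):
    # Hypothetical verifier: stands in for checking the draft token
    # against the large model's own distribution.
    return token in "ab"

def speculative_step(prefix):
    """One draft-and-verify round: accept the longest agreeing prefix
    of the draft, then let the target model supply one correction."""
    draft = draft_model(prefix)
    accepted = []
    for tok in draft:
        if target_accepts(prefix + "".join(accepted), tok):
            accepted.append(tok)
        else:
            break
    # The target model always contributes at least one token, so each
    # round yields len(accepted) + 1 tokens per verification pass.
    correction = "a"
    return "".join(accepted) + correction

out = speculative_step("x")
print(len(out) >= 1)  # True: at least one token per round
```

The win comes from verifying all drafted tokens in a single batched forward pass of the large model instead of one pass per token.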
🟢 Applied

Distilling Multilingual Vision-Language Models: When Smaller Models Stay Multilingual

💡 This research distills multilingual vision-language models into smaller ones that stay multilingual.
Knowledge distillation (KD) demonstrates promising results in transferring knowledge from larger to smaller VLMs, but applying KD in multilingual settings remains underexplored. We study five distillation formulations across CLIP and SigLIP2.
🟢 Applied

STAR: A Privacy-Preserving, Energy-Efficient Edge AI Framework for Human Activity Recognition via Wi-Fi CSI in Mobile and Pervasive Computing Environments

💡 This research presents a privacy-preserving, energy-efficient edge framework for Wi-Fi-based activity recognition.
Human Activity Recognition (HAR) via Wi-Fi Channel State Information (CSI) presents a privacy-preserving, contactless sensing approach suitable for smart homes, healthcare monitoring, and mobile IoT systems.
🟢 Applied

Do Students Debias Like Teachers? On the Distillability of Bias Mitigation Methods

💡 This research examines whether debiasing transfers from teacher to student models under knowledge distillation.
Knowledge distillation (KD) is an effective method for model compression and for transferring knowledge between models. However, its effect on a model's robustness against spurious correlations, which degrade performance on out-of-distribution data, remains underexplored. This study investigates the effect of knowledge distillation on the transferability of ``debiasing'' capabilities from teacher models to student models.
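For readers unfamiliar with the distillation objective being studied, here is the standard Hinton-style soft-label loss in plain Python. This is the generic KD loss, not this paper's debiasing setup:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    the standard distillation objective (scaled by T^2 so gradients
    stay comparable across temperatures)."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (T ** 2) * kl

# Identical logits give zero loss; diverging logits give positive loss.
print(kd_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # ~0.0
print(kd_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0)  # True
```

The question the paper asks is whether a student trained this way also inherits the teacher's robustness to spurious correlations, which the loss itself does not explicitly target.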
🟢 Applied

An Agentic Framework for Rapid Deployment of Edge AI Solutions in Industry 5.0

💡 This research simplifies the deployment of edge AI solutions in industrial settings.
We present a novel framework for Industry 5.0 that simplifies the deployment of AI models on edge devices in various industrial settings. The design reduces latency and avoids external data transfer by enabling local inference and real-time processing.
🟢 Applied

Energy-Efficient Autonomous Driving with Adaptive Perception and Robust Decision

💡 This research cuts the energy cost of autonomous-driving perception and decision-making.
Autonomous driving is an emerging technology expected to bring significant social, economic, and environmental benefits. However, these benefits come with rising energy consumption by computation engines, limiting the driving range of vehicles, especially electric ones. Perception computing is typically the most power-intensive component, as it relies on deep learning models to extract environmental features. To address these challenges, we propose an energy-efficient autonomous driving framework called EneAD.
🟢 Applied

Resource-Efficient and Robust Inference of Deep and Bayesian Neural Networks on Embedded and Analog Computing Platforms

💡 This research makes neural network inference efficient and robust on embedded and analog hardware.
While machine learning has transformed numerous application domains, its growing computational demands increasingly constrain scalability and efficiency. In practice, neural networks must not only operate efficiently but also provide reliable predictions under distributional shifts or unseen data. This work advances resource-efficient and robust inference for both conventional and Bayesian neural networks.
🟢 Applied

UHKD: A Unified Framework for Heterogeneous Knowledge Distillation via Frequency-Domain Representations

💡 This research unifies knowledge distillation across heterogeneous vision models via frequency-domain representations.
Knowledge distillation (KD) is an effective model compression technique that transfers knowledge from a high-performance teacher to a lightweight student, reducing cost while maintaining accuracy. In visual applications, where large-scale image models are widely used, KD enables efficient deployment.
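A minimal sketch of what "frequency-domain" feature matching can look like: transform teacher and student feature maps with a 2-D FFT and penalize the spectral difference. This is a plausible illustration of the idea, not UHKD's actual objective:

```python
import numpy as np

def freq_kd_loss(f_teacher, f_student):
    """Mean squared error between teacher and student feature maps
    in the Fourier domain. Comparing spectra rather than raw activations
    sidesteps spatial misalignment between heterogeneous architectures."""
    Ft = np.fft.fft2(f_teacher)
    Fs = np.fft.fft2(f_student)
    return float(np.mean(np.abs(Ft - Fs) ** 2))

rng = np.random.default_rng(0)
t = rng.normal(size=(8, 8))          # stand-in teacher feature map
print(freq_kd_loss(t, t))            # 0.0 for identical features
print(freq_kd_loss(t, t + 0.1) > 0)  # True once features diverge
```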
🟢 Applied

A Survey on Efficient Vision-Language-Action Models

💡 This research surveys efficiency techniques for vision-language-action models.
Vision-Language-Action models (VLAs) represent a significant frontier in embodied intelligence, aiming to bridge digital knowledge with physical-world interaction. While these models have demonstrated remarkable generalist capabilities, deployment is severely hampered by substantial computational and data requirements.
🟢 Applied

Rethinking Inference Placement for Deep Learning across Edge and Cloud Platforms: A Multi-Objective Optimization Perspective and Future Directions

💡 This research rethinks where to place deep learning inference across edge and cloud platforms.
Edge intelligent applications like VR/AR and language-model-based chatbots have become widespread with the rapid expansion of IoT and mobile devices. But constrained edge devices often cannot serve increasingly large and complex deep learning (DL) models. Research aims to balance accuracy, computation delay, transmission delay, and privacy concerns.
🔬

Privacy-Preserving ML

🟢 Applied

Non-Convex Over-the-Air Heterogeneous Federated Learning: A Bias-Variance Trade-off

💡 This research analyzes a bias-variance trade-off in over-the-air federated learning.
Over-the-air (OTA) federated learning (FL) has been well recognized as a scalable paradigm that exploits the waveform superposition of the wireless multiple-access channel. We develop novel OTA-FL SGD updates that allow a structured, time-invariant model bias while facilitating reduced-variance updates.
🟡 Advanced

On Purely Private Covariance Estimation

💡 This research presents a simple mechanism for covariance estimation under pure differential privacy.
We present a simple perturbation mechanism for the release of $d$-dimensional covariance matrices under pure differential privacy. For large datasets with at least $n \geq d^2/\varepsilon$ elements, our mechanism recovers the provably optimal Frobenius norm error guarantees of \cite{nikolov2023private}.
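As a generic sketch of the perturbation-mechanism idea (pure differential privacy calls for Laplace rather than Gaussian noise), the following adds symmetric entrywise Laplace noise to an empirical second-moment matrix. The sensitivity bound and row-norm assumption are illustrative, not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def private_covariance(X, eps):
    """Entrywise-Laplace release of the empirical second-moment matrix.
    A generic pure-DP sketch, not necessarily the paper's mechanism.
    Assumes each row of X has L2 norm at most 1."""
    n, d = X.shape
    C = X.T @ X / n
    # With unit-norm rows, swapping one row changes each entry of C by
    # at most 2/n, so the L1 sensitivity over the d(d+1)/2 free entries
    # is bounded by d * (d + 1) / n (illustrative, not tight).
    sens = d * (d + 1) / n
    noise = np.triu(rng.laplace(scale=sens / eps, size=(d, d)))
    noise = noise + np.triu(noise, 1).T  # mirror to keep output symmetric
    return C + noise

X = rng.normal(size=(10000, 3))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # clip rows
C_priv = private_covariance(X, eps=1.0)
print(C_priv.shape)  # (3, 3)
```

Perturbing only the upper triangle and mirroring keeps the released matrix symmetric without spending extra privacy budget on redundant entries.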
🟢 Applied

Tight Differentially Private PCA via Matrix Coherence

💡 This research gives tight error bounds for differentially private PCA via matrix coherence.
We revisit the task of computing the span of the top $r$ singular vectors $u_1, \ldots, u_r$ of a matrix under differential privacy. We show that a simple and efficient algorithm -- based on singular value decomposition and standard perturbation mechanisms -- returns a private rank-$r$ approximation whose error depends only on the coherence of the input.
🟢 Applied

UnifiedFL: A Dynamic Unified Learning Framework for Equitable Federation

💡 This research enables equitable federated learning across heterogeneous clients.
Federated learning (FL) has emerged as a key paradigm for collaborative model training across multiple clients without sharing raw data. We propose UnifiedFL, a dynamic federated learning framework that represents heterogeneous local networks as nodes and edges in a directed model graph optimized by a shared graph neural network.
🟢 Applied

PEEL: A Poisoning-Exposing Encoding Theoretical Framework for Local Differential Privacy

💡 This research exposes poisoning attacks against local differential privacy by re-encoding perturbed data.
Local Differential Privacy (LDP) is a widely adopted privacy-protection model in the Internet of Things. However, existing defenses either incur prohibitive resource overheads or rely on domain-specific prior knowledge. We propose PEEL, a Poisoning-Exposing Encoding theoretical framework for LDP that departs from resource- or prior-dependent countermeasures. PEEL amplifies stealthy poisoning effects by re-encoding LDP-perturbed data via sparsification, normalization, and low-rank projection.
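The three re-encoding steps the abstract names can be sketched as a small numpy pipeline. Parameter names and values below are illustrative, not PEEL's actual construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def reencode(reports, k_sparse=8, rank=2):
    """Toy pipeline in the spirit of the abstract: sparsify each report,
    normalize it, then project onto a low-rank subspace."""
    X = np.asarray(reports, dtype=float)
    # 1) Sparsification: keep only the k largest-magnitude coordinates.
    idx = np.argsort(-np.abs(X), axis=1)[:, :k_sparse]
    S = np.zeros_like(X)
    np.put_along_axis(S, idx, np.take_along_axis(X, idx, axis=1), axis=1)
    # 2) Normalization: unit L2 norm per report.
    norms = np.linalg.norm(S, axis=1, keepdims=True)
    S = S / np.where(norms == 0, 1.0, norms)
    # 3) Low-rank projection via the batch's top singular vectors.
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]  # projector onto the top-`rank` subspace
    return S @ P

reports = rng.laplace(size=(50, 20))  # stand-in for LDP-perturbed reports
Z = reencode(reports)
print(Z.shape)  # (50, 20), each row now lies in a rank-2 subspace
```

The intuition is that honest noisy reports concentrate in a low-dimensional structure, so poisoned reports that deviate from it become easier to expose after re-encoding.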
🔬

Creative AI / Emotion

🟢 Applied

Contribution-Guided Asymmetric Learning for Robust Multimodal Fusion under Imbalance and Noise

💡 This research improves multimodal fusion under modality imbalance and noise.
Contribution-Guided Asymmetric Learning (CAL) aims to enhance the contribution of high-contribution modalities while compressing weak modalities to increase their contribution. CAL has shown outstanding performance in imbalanced fusion tasks and noise robustness tests. It is based on a modality contribution metric W^m combining the information quantity I(m) and the confidence D(m).
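A toy version of such a per-modality score, combining an information term with a confidence term. The weighted-sum combination, the entropy-based I(m), and alpha are all assumptions for illustration; the paper defines its own W^m:

```python
import math

def entropy(p):
    # Shannon entropy of a probability vector.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def contribution(probs_m, alpha=0.5):
    """Toy modality-contribution score. I(m) measures how peaked the
    modality's class distribution is (normalized to [0, 1]); D(m) is
    its top-class confidence. The combination is an assumption."""
    n = len(probs_m)
    info = 1.0 - entropy(probs_m) / math.log(n)  # I(m)
    conf = max(probs_m)                          # D(m)
    return alpha * info + (1 - alpha) * conf

sharp = [0.9, 0.05, 0.05]   # confident, informative modality
flat = [0.34, 0.33, 0.33]   # near-uninformative modality
print(contribution(sharp) > contribution(flat))  # True
```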
🟢 Applied

Lost in Phonation: Voice Quality Variation as an Evaluation Dimension for Speech Foundation Models

💡 This research evaluates speech foundation models on voice-quality variation.
Recent advances in speech foundation models (SFMs) have enabled the direct processing of spoken language from raw audio. This capability exposes SFMs to rich paralinguistic variation embedded in the input speech signal. One under-explored dimension of this variation is voice quality, encompassing phonation types such as creaky and breathy voice.
🟢 Applied

Evaluating Emotion Recognition in Spoken Language Models on Emotionally Incongruent Speech

💡 This research tests whether spoken language models recognize emotion from speech or only from words.
Advances in spoken language processing have driven the development of spoken language models (SLMs). We evaluate four SLMs on the task of speech emotion recognition using a dataset of emotionally incongruent speech samples. Results indicate that SLMs rely predominantly on textual semantics rather than speech emotion to perform the task.
🟢 Applied

MCIHN: A Hybrid Network Model Based on Multi-path Cross-modal Interaction for Multimodal Emotion Recognition

💡 This research proposes a hybrid cross-modal interaction network for multimodal emotion recognition.
Multimodal emotion recognition is crucial for future human-computer interaction. However, accurate emotion recognition still faces significant challenges due to differences between modalities and the difficulty of characterizing unimodal emotional information. A hybrid network model based on multi-path cross-modal interaction (MCIHN) is proposed.
🟢 Applied

Emotion-Coherent Reasoning for Multimodal LLMs via Emotional Rationale Verifier

💡 This research aligns multimodal LLM emotion predictions with their explanations.
The authors propose two novel components for emotion prediction: an Emotional Rationale Verifier (ERV) and an Explanation Reward. Their method significantly improves explanation-prediction consistency and explanation emotion accuracy.
🟢 Applied

Emotion Recognition with Minimal Wearable Sensing: Multi-domain Feature, Hybrid Feature Selection, and Personalized vs. Generalized Ensemble Model Analysis

💡 This research recognizes emotions from minimal wearable sensing on resource-constrained devices.
Negative emotions are linked to the onset of neurodegenerative diseases and dementia. Physiological signals from wearable devices offer a promising noninvasive method for continuous emotion monitoring. The method is designed for deployment in resource-constrained systems, such as Internet of Things devices.
🟢 Applied

LUNA: Efficient and Topology-Agnostic Foundation Model for EEG Signal Analysis

💡 This research builds an efficient, topology-agnostic foundation model for EEG analysis.
LUNA (Latent Unified Network Architecture) is a self-supervised foundation model that reconciles disparate electrode geometries while scaling linearly -- not quadratically -- with channel count. LUNA compresses multi-channel EEG into a fixed-size, topology-agnostic latent space via learned queries and cross-attention. It demonstrates highly competitive performance across several benchmarks.
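The learned-query compression idea can be shown in a few lines: a fixed set of query vectors cross-attends over however many channels the montage provides, always yielding the same latent shape. Single-head attention and the shapes below are illustrative, not LUNA's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_attend(eeg, queries):
    """Compress a (channels, features) EEG array into a fixed-size
    (num_queries, features) latent via single-head cross-attention.
    Cost is linear in the channel count."""
    scores = queries @ eeg.T / np.sqrt(eeg.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)     # softmax over channels
    return w @ eeg

K, D = 4, 16
queries = rng.normal(size=(K, D))         # learned in the real model
for channels in (19, 64, 128):            # disparate electrode montages
    latent = cross_attend(rng.normal(size=(channels, D)), queries)
    print(latent.shape)  # always (4, 16), independent of channel count
```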
🟢 Applied

Multi-dataset Joint Pre-training of Emotional EEG Enables Generalizable Affective Computing

💡 This research pre-trains jointly on multiple emotional EEG datasets for generalizable affective computing.
The method outperforms state-of-the-art large-scale EEG models by an average of 4.57% in AUROC for few-shot emotion recognition and 11.92% in accuracy for zero-shot generalization to a new dataset.
🟢 Applied

SentiMaithili: A Benchmark Dataset for Sentiment and Reason Generation for the Low-Resource Maithili Language

💡 This research introduces a sentiment benchmark for the low-resource Maithili language.
Maithili is an Indo-Aryan language spoken by more than 13 million people in the Purvanchal region of India. It is valued for its rich linguistic structure and cultural significance.
🟢 Applied

REVE: A Foundation Model for EEG -- Adapting to Any Setup with Large-Scale Pretraining on 25,000 Subjects

💡 This research pretrains an EEG foundation model that adapts to any recording setup.
Foundation models have transformed AI by reducing reliance on task-specific data through large-scale pretraining. While successful in language and vision, their adoption in EEG has lagged due to the heterogeneity of public datasets. Existing EEG foundation models struggle to generalize across these variations, often restricting pretraining to a single setup. We present REVE (Representation for EEG with Versatile Embeddings), a pretrained model.
🔬

Lightweight Systems

🟢 Applied

Vectorized Context-Aware Embeddings for GAT-Based Collaborative Filtering

💡 This research enhances graph-attention collaborative filtering with context-aware embeddings.
Recommender systems often struggle with data sparsity and cold-start scenarios. This paper presents a Graph Attention Network (GAT) based Collaborative Filtering (CF) framework enhanced with context-aware embeddings.
🟡 Advanced

On neighborhoods of embedded toroidal and Hopf manifolds and their foliations

💡 This work studies neighborhoods of embedded toroidal and Hopf manifolds and their foliations.
In this article, we give completely new examples of embedded complex manifolds whose germ of neighborhood is holomorphically equivalent to that of the zero section in the normal bundle. The first set of examples consists of connected abelian complex Lie groups embedded in some complex manifold $M$. The second set is $n$-dimensional Hopf manifolds embedded as hypersurfaces.
🟢 Applied

Scales++: Compute Efficient Evaluation Subset Selection with Cognitive Scales Embeddings

💡 This research selects small, representative benchmark subsets for efficient LLM evaluation.
The prohibitive cost of evaluating large language models (LLMs) on comprehensive benchmarks necessitates small yet representative data subsets that enable efficient assessment while retaining predictive fidelity. Current methods operate under a model-centric paradigm, selecting benchmark items based on the collective performance of existing models. Such approaches are limited by large upfront costs, an inability to immediately handle new benchmarks (`cold-start'), and the fragile assumption that future models will share the failure patterns of their predecessors.
🟡 Advanced

From Embedding to Control: Representations for Stochastic Multi-Object Systems

💡 This research learns representations for modeling and controlling stochastic multi-object systems.
This paper studies how to achieve accurate modeling and effective control in stochastic nonlinear dynamics with multiple interacting objects. Non-uniform interactions and random topologies make this task challenging.
🔴 Theory-Heavy

Sharp embeddings and existence results for Logarithmic $p$-Laplacian equations with critical growth

💡 This work proves sharp embeddings and existence results for logarithmic $p$-Laplacian equations with critical growth.
In this paper, we derive a new $p$-logarithmic Sobolev inequality and optimal continuous and compact embeddings into Orlicz-type spaces of the function space associated with the logarithmic $p$-Laplacian. By employing the method of the Nehari manifold, we prove the existence of a nontrivial weak solution. We also conduct an asymptotic analysis of a weighted nonlocal, nonlinear problem governed by the fractional $p$-Laplacian.
🟢 Applied

Accretion rates of stellar-mass compact objects embedded in AGN discs

💡 This work models accretion onto stellar-mass compact objects embedded in AGN discs.
Stellar-mass compact objects (COs) embedded in active galactic nucleus (AGN) discs are commonly assumed to accrete via Bondi or Bondi-Hoyle-Lyttleton prescriptions. We show that differential rotation in AGN discs can impart non-negligible angular momentum, in which case accretion proceeds through a viscous disc rather than Bondi/BHL flow.
🟡 Advanced

An explicit formula of the limit of the heat kernel measures on the spheres embedded in $\R^\infty$

💡 This work gives an explicit formula for the limit of heat kernel measures on spheres embedded in $\R^\infty$.
We show that the heat kernel measures based at the north pole of the spheres converge to a Gaussian measure in $\R^\infty$. We also find an explicit formula for this measure.
🟡 Advanced

Tight Spherical Embeddings (Updated Version)

💡 This work shows that compact isoparametric hypersurfaces in spheres are tightly embedded.
This is an updated version of a paper that appeared in the proceedings of the 1979 Berlin Colloquium on Global Differential Geometry. The main result is that every compact isoparametric hypersurface $M^n \subset S^{n+1}$ is tight.
🟢 Applied

Enhanced quality factors at resonance in acoustofluidic cavities embedded in matched elastic metamaterials

💡 This work enhances resonance quality factors in acoustofluidic cavities embedded in matched elastic metamaterials.
We show that by embedding liquid-filled acoustofluidic cavities in a metamaterial, the quality factor of the cavity at selected acoustic resonance modes can be enhanced by 2 to 3 orders of magnitude.
🟢 Applied

Hierarchical Physics-Embedded Learning for Spatiotemporal Dynamical Systems

💡 This research embeds physics hierarchically into learning of spatiotemporal dynamical systems.
Modeling complex spatiotemporal dynamics, particularly in far-from-equilibrium systems, remains a challenge in science. The governing partial differential equations (PDEs) for these systems are often intractable to derive from first principles.
🔬

Offline-First / Local AI

🟢 Applied

SBASH: a Framework for Designing and Evaluating RAG vs. Prompt-Tuned LLM Honeypots

💡 This research designs and evaluates RAG versus prompt-tuned LLM honeypots.
Honeypots are decoy systems used for gathering valuable threat intelligence, and maximising attacker engagement is essential to their utility. Research has highlighted that context-awareness is necessary to increase engagement, and Large Language Models (LLMs) have been shown to be one approach to increasing it.
🟢 Applied

REx86: A Local Large Language Model for Assisting in x86 Assembly Reverse Engineering

💡 This research fine-tunes a local LLM to assist x86 assembly reverse engineering.
Large Language Models offer potential for improving reverse engineering (RE) efficiency through automated comprehension and commenting, but cloud-hosted, closed-weight models pose privacy and security risks and cannot be used in closed-network facilities. REx86 reduces test-set cross-entropy loss by 64.2% and improves semantic cosine similarity against ground truth by 20.3% over its base model.
🟢 Applied

CORE: Reducing UI Exposure in Mobile Agents via Collaboration Between Cloud and Local LLMs

💡 This research reduces UI exposure by splitting mobile-agent work between cloud and local LLMs.
Mobile agents rely on Large Language Models (LLMs) to plan and execute tasks on smartphone user interfaces. While cloud-based LLMs achieve high task accuracy, they require uploading the full UI state at every step. In contrast, local LLMs avoid UI uploads but suffer from limited capacity, resulting in lower task success rates. CORE comprises three key components: (1) layout-aware block partitioning, (2) co-planning, and (3) co-decision-making.
🟢 Applied

LLM-guided Hierarchical Retrieval

💡 This research lets an LLM reason over and navigate hierarchical retrieval at scale.
Modern IR systems are increasingly tasked with answering complex, multi-faceted queries that require deep reasoning. We introduce LATTICE, a hierarchical retrieval framework that enables an LLM to reason over and navigate large corpora with logarithmic search complexity.
🟢 Applied

COSTAR-A: A prompting framework for enhancing Large Language Model performance on Point-of-View questions

💡 This research extends the COSTAR prompting framework for point-of-view questions.
COSTAR-A is a novel prompt engineering framework that enhances the existing COSTAR method. COSTAR stands for Context, Objective, Style, Tone, Audience, and Response; COSTAR-A adds an 'Answer' component at the end.
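The seven sections can be assembled mechanically. The section order follows the acronym; the header wording and example content below are illustrative, not taken from the paper:

```python
def costar_a_prompt(context, objective, style, tone, audience, response, answer):
    """Assemble a COSTAR-A prompt: the six COSTAR sections in acronym
    order, followed by the 'Answer' section that COSTAR-A appends."""
    sections = [
        ("Context", context),
        ("Objective", objective),
        ("Style", style),
        ("Tone", tone),
        ("Audience", audience),
        ("Response", response),
        ("Answer", answer),  # the extra 'A'
    ]
    return "\n".join(f"# {name}\n{body}" for name, body in sections)

prompt = costar_a_prompt(
    context="A city council debate on a new bike lane.",
    objective="State the likely point of view of local shop owners.",
    style="Concise and concrete.",
    tone="Neutral.",
    audience="Policy analysts.",
    response="Three bullet points.",
    answer="Begin with a one-sentence direct answer.",
)
print(prompt.splitlines()[0])  # "# Context"
```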
🟢 Applied

Bridging Semantics & Structure for Software Vulnerability Detection using Hybrid Network Models

💡 This research combines graph structure with lightweight local LLMs for software vulnerability detection.
Software vulnerabilities remain a persistent risk, yet static and dynamic analyses often overlook structural dependencies that shape insecure behaviors. Viewing programs as heterogeneous graphs, we capture control- and data-flow relations as complex interaction networks. Our hybrid framework combines these graph representations with lightweight (<4B) local LLMs.
🟢 Applied

Open WebUI: An Open, Extensible, and Usable Interface for AI Interaction

💡 This research presents an open, extensible, and usable interface for AI interaction.
The toolkit is designed to be open (open-source and local) and extensible (plugin support, with users able to interact with multiple models). Extensibility is enabled through a two-pronged plugin architecture and a community platform for sharing, importing, and adapting extensions.
🟢 Applied

DualTune: Decoupled Fine-Tuning for On-Device Agentic Systems

💡 This research fine-tunes decoupled LoRA adapters for on-device tool calling.
Local Large Language Models (LLMs) consistently underperform compared to frontier models in tool-calling scenarios. We propose "decoupled fine-tuning" to create dedicated LoRA adapters for tool selection and tool-specific argument generation, using separate loss masking for each subtask. DualTune is an inference framework that leverages the LoRA adapters created using decoupled fine-tuning to perform efficient agent orchestration with local models.
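The orchestration idea, that one adapter picks the tool and a second fills in its arguments, can be sketched as a two-stage dispatcher. The adapter names, the stand-in generator, and the toy outputs are illustrative, not DualTune's actual API:

```python
def run_with_adapter(adapter, prompt):
    # Stand-in for loading a LoRA adapter onto the local base model and
    # generating; in practice this would call the serving runtime.
    return f"[{adapter}] {prompt}"

def handle_tool_call(user_request):
    """Two-stage orchestration matching the 'decoupled' idea: one
    adapter selects the tool, a second generates its arguments."""
    tool = run_with_adapter("lora-tool-selection", user_request)
    args = run_with_adapter("lora-arg-generation",
                            f"tool={tool!r} request={user_request!r}")
    return tool, args

tool, args = handle_tool_call("set an alarm for 7am")
print(tool.startswith("[lora-tool-selection]"))  # True
```

Keeping the two subtasks on separate adapters lets each be trained with its own loss masking while sharing one base model on device.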
🟢 Applied

SecureFixAgent: A Hybrid LLM Agent for Automated Python Static Vulnerability Repair

💡 This research automatically repairs Python vulnerabilities with a hybrid local-LLM agent.
Static analysis tools like Bandit are effective at vulnerability detection but suffer from high false positives and lack repair capabilities. Large Language Models (LLMs) can suggest fixes but often hallucinate changes and lack self-validation. We present SecureFixAgent, a hybrid repair framework integrating Bandit with lightweight local LLMs in an iterative detect-repair-validate loop.
🟢 Applied

PrivWeb: Unobtrusive and Content-aware Privacy Protection For Web Agents

💡 This research protects private information from web agents according to user preferences.
PrivWeb is a trusted add-on for web agents that anonymizes private information on interfaces according to user preferences. It features privacy categorization and adaptive notifications that selectively pause tasks, giving users control over the collection of highly sensitive information. PrivWeb reduces perceived privacy risks with no associated increase in cognitive effort, and results in higher overall satisfaction.