From 94f300c7bb6766966fa37dc0c1cfbbcbcb9a001e Mon Sep 17 00:00:00 2001
From: Amirhossein Rostami
Date: Fri, 11 Jul 2025 23:30:37 +0200
Subject: [PATCH] Update data.yaml

Add NLU
---
 Papers/Others/data.yaml | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/Papers/Others/data.yaml b/Papers/Others/data.yaml
index 7aee689..079ac15 100644
--- a/Papers/Others/data.yaml
+++ b/Papers/Others/data.yaml
@@ -103,4 +103,15 @@
   abstract: >
     Fueled by the availability of more data and computing power, recent breakthroughs in cloud-based machine learning (ML) have transformed every aspect of our lives from face recognition and medical diagnosis to natural language processing. However, classical ML exerts severe demands in terms of energy, memory and computing resources, limiting their adoption for resource constrained edge devices. The new breed of intelligent devices and high-stake applications (drones, augmented/virtual reality, autonomous systems, etc.), requires a novel paradigm change calling for distributed, low-latency and reliable ML at the wireless network edge (referred to as edge ML). In edge ML, training data is unevenly distributed over a large number of edge nodes, which have access to a tiny fraction of the data. Moreover training and inference is carried out collectively over wireless links, where edge devices communicate and exchange their learned models (not their private data). In a first of its kind, this article explores key building blocks of edge ML, different neural network architectural splits and their inherent tradeoffs, as well as theoretical and technical enablers stemming from a wide range of mathematical disciplines. Finally, several case studies pertaining to various high-stake applications are presented demonstrating the effectiveness of edge ML in unlocking the full potential of 5G and beyond.
+-
+  name: >
+    NLU: An Adaptive, Small-Footprint, Low-Power Neural Learning Unit for Edge and IoT Applications
+  url: https://ieeexplore.ieee.org/document/10904478/
+  date: 2025/02/26
+  conference: IEEE Open Journal of Circuits and Systems
+  code:
+  authors: Rostami, et al.
+  abstract: >
+    Presents a tiny, low-power, and flexible hardware accelerator (NLU) enabling real-time neural network training on edge devices such as wearables and sensors.
+...
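
A minimal sketch of how the updated Papers/Others/data.yaml could be sanity-checked after applying this patch. It assumes PyYAML is installed and that the file is a top-level list of paper entries, as the hunk above suggests; the REQUIRED_FIELDS list and the script itself are illustrative and not part of the repository.

    # Illustrative check only: load the YAML list of papers and report entries
    # that are missing any of the fields used by this patch's new entry.
    import yaml

    REQUIRED_FIELDS = ["name", "url", "date", "conference", "authors", "abstract"]

    with open("Papers/Others/data.yaml", encoding="utf-8") as f:
        papers = yaml.safe_load(f)  # assumed: a top-level list of dicts

    for i, paper in enumerate(papers):
        missing = [k for k in REQUIRED_FIELDS if not paper.get(k)]
        if missing:
            name = (paper.get("name") or "?").strip()
            print(f"entry {i} ({name}): missing {missing}")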