Insights

Edge AI use cases: Real-world applications across industries 

Mike Sales

Principal Consultant

Discover how Edge AI is transforming real-world applications, from predicting equipment failures on factory floors to powering personalised medical devices and smart consumer products. Learn how 42T applies embedded AI at the edge to deliver faster, safer and more intelligent solutions across MedTech, industry and consumer tech. 

Why Edge AI matters: Benefits and timing for deployment  

Artificial intelligence has shifted from a futuristic concept to an everyday expectation. But as AI applications proliferate, so do the limitations of conventional cloud-based architectures. Sending data back and forth to the cloud introduces latency, consumes bandwidth and often raises serious privacy concerns. Enter Edge AI: a decentralised, real-time approach where AI models are run directly on local devices like sensors, smartphones, cameras or microcontrollers. 

Edge AI isn’t just about convenience; it’s about necessity. For applications that demand fast decisions, offline functionality or strict data sovereignty, edge computing is the only viable path. At 42T, we specialise in building real-world-ready AI systems that don’t just theorise the edge, they live on it. 

Edge AI also represents a deeper shift in engineering thinking. While traditional AI has certainly been deployed in real-world settings, much of its development has relied on structured datasets, stable environments and powerful infrastructure. Edge AI challenges all of that. It must function under tighter hardware and power constraints, in dynamic environments and often with limited connectivity. This demands greater robustness and more rigorous testing, as updates and maintenance may not be straightforward once deployed. This is much more than a new deployment target; it's a fundamentally different mindset, focused on resilience and real-time reliability.

Edge AI vs Cloud AI: Architecture, performance and cost comparison

Before diving into applications, it's critical to understand what sets Edge AI apart from its cloud counterpart. While both are powerful, their optimal use cases are determined by their fundamental architectural differences, which impact performance, security and cost.

Edge AI performs computations locally on devices like sensors, smartphones, IoT devices, microcontrollers, wearables or industrial machines, keeping processing close to the data source. Cloud AI, conversely, sends data to centralised, remote servers for processing. 

Here’s a breakdown of how they compare: 
Architecture

Edge AI: Computation occurs locally on the device ("at the edge" of the network). AI models run on-site without needing an external server connection, and devices can operate offline, though they must be individually updated or maintained with model improvements or software updates.

Cloud AI: Processing is performed on a centralised, remote cloud platform or server. Data is sent to the cloud for processing and results are then sent back to the user or device. This setup relies on a stable and continuous internet connection.

Performance & scalability

Edge AI: Offers lower latency and faster response times because data doesn't need to travel to a remote server, making it ideal for real-time and time-sensitive applications like autonomous vehicles or medical monitoring. However, its scale is constrained by the hardware capabilities of the local device, often requiring optimised, lightweight AI models.

Cloud AI: Provides the vast computing power of large data centres, enabling virtually unlimited scalability. It's perfect for tasks requiring heavy computation, such as large-scale data analytics or initial model training. However, latency and response times can vary with network quality and bandwidth. This reliance on continuous connectivity introduces unpredictability, making cloud AI less suited to safety-critical or real-time applications, even if other performance metrics like accuracy remain strong.

Privacy & security

Edge AI: Offers significant advantages in privacy and data security. Since data is processed locally on the device, sensitive information (e.g. biometric data or surveillance footage) doesn't need to be transmitted over the internet, reducing the risk of interception or leakage in transit. However, edge devices face their own security challenges: being remotely deployed, they are more exposed to physical tampering, side-channel attacks and other hardware-level vulnerabilities that require robust protection measures.

Cloud AI: Requires data to be sent off-site to remote servers, where it's often stored. This can increase the risk of cyberattacks or data breaches. While cloud providers implement robust security protocols, transferring and storing personal data in the cloud can still pose privacy challenges, even with encryption and compliance certifications.

Cost

Edge AI: Requires an upfront investment in capable hardware, such as embedded computing boards (e.g. NVIDIA Jetson, Synaptics Astra) or custom ASICs, particularly for high-performance applications. However, many use cases can be supported by lower-cost microcontrollers, with libraries like Arduino's Neurona enabling AI on devices costing as little as £2, and in some cases existing deployed hardware can gain AI capabilities via software updates. Classic ML models often demand modest compute, making them suitable for ultra-low-cost devices, while more intensive deep learning applications may require powerful platforms like Synaptics Astra or specialised chips like Innatera's Spiking Neural Processor, which offer highly efficient performance for specific workloads at the edge. Ultimately, the hardware requirements, and associated costs, depend entirely on the specific application and performance needs.

Cloud AI: Typically operates on a subscription or consumption-based pricing model. While initial deployment can be cost-effective, ongoing expenses for data processing, transfer, storage and bandwidth can accumulate quickly, especially for applications with high data volumes. By contrast, Edge AI avoids much of this overhead, though hardware requirements vary widely.

At 42T, we deploy both cloud and edge architectures, choosing the right mix based on application, constraints and commercial goals. Edge AI shouldn't be viewed as a replacement but as a powerful complement that delivers performance where it matters most. This can include a hybrid approach where AI runs at the edge and only the data required for the application is sent to a cloud platform for remote access. In healthcare, this could mean identifying a critical anomaly in a vital sign in milliseconds. In industrial settings, it could mean detecting a fault in a conveyor belt motor before an entire production line goes down.

Real-world applications of Edge AI 

Edge AI in industrial & manufacturing: Predictive maintenance and quality control  

Manufacturing environments demand speed, reliability and minimal downtime. Cloud reliance introduces risk. Edge AI empowers systems to act instantly, directly on the factory floor. 

Use cases 

Predictive maintenance: Identify anomalies in vibration or current data to predict equipment failure before it happens. 

Visual inspection: Edge cameras detect defects or misalignments in real-time, without halting production. 

Safety systems: Detect worker proximity to dangerous zones and trigger alerts autonomously. 
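To make the predictive maintenance pattern concrete, here is a minimal Python sketch (assuming NumPy) that flags outliers in a simulated motor-current trace against a rolling statistical baseline. The signal, window size and threshold are invented for illustration; production systems typically use richer features and learned models.

```python
import numpy as np

def detect_anomalies(signal: np.ndarray, window: int = 50,
                     threshold: float = 4.0) -> np.ndarray:
    """Flag samples that deviate sharply from a rolling baseline.

    A statistical check this light can run on a microcontroller
    sampling vibration or motor-current data in real time.
    """
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        baseline = signal[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(signal[i] - mu) > threshold * sigma:
            flags[i] = True
    return flags

# Simulated motor-current trace with one injected fault transient
rng = np.random.default_rng(0)
trace = rng.normal(1.0, 0.02, 1000)
trace[700] += 0.5  # sudden current surge
flags = detect_anomalies(trace)
print(flags[700])  # → True
```

A fixed 4-sigma threshold is a deliberate simplification; in practice the baseline statistics and threshold would be tuned to the machine's normal operating envelope.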

42T in action: In collaboration with pharma industry partners (Balluff, Synaptics & Arcturus Networks), we created a real-time anomaly detection and line clearance system running entirely on the production line with no cloud required. This embedded solution uses edge-based vision and sensing to automate checks, trigger alerts, log incidents and support compliance with minimal latency, enabling faster and safer changeovers. 

For another project, a machine manufacturer challenged us to recognise and categorise their product range using low-cost electronics. We needed to detect the exact colour of each item to classify it accurately. The solution we devised used a number of coloured and IR LEDs coupled with a small number of wideband photodetectors. The readings from each photodetector were normalised to place them in a multidimensional space, and a K-Nearest Neighbours machine learning algorithm was trained to categorise the products into around 100 different groups using only a small number of examples of each type. This Edge AI approach provided a remarkably cost-effective and accurate solution to the client's brief.
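The pipeline described above can be sketched in a few lines of Python: normalise the multi-channel detector readings, then classify with K-Nearest Neighbours. The three toy product classes and four detector channels below are invented for illustration, not the client's actual data.

```python
import numpy as np

def normalise(readings: np.ndarray) -> np.ndarray:
    """Scale each reading vector to unit length so classification depends
    on the ratio of detector responses rather than overall brightness."""
    return readings / np.linalg.norm(readings, axis=-1, keepdims=True)

def knn_classify(sample: np.ndarray, train_x: np.ndarray,
                 train_y: np.ndarray, k: int = 3) -> int:
    """Label a sample by majority vote among its k nearest training points."""
    dists = np.linalg.norm(train_x - sample, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return int(labels[np.argmax(counts)])

# Invented data: three product classes, four detector channels, five
# noisy training examples per class.
prototypes = np.array([[1.0, 0.2, 0.1, 0.05],
                       [0.1, 1.0, 0.3, 0.20],
                       [0.2, 0.1, 0.9, 0.80]])
rng = np.random.default_rng(1)
train_x = normalise(np.repeat(prototypes, 5, axis=0)
                    + rng.normal(0, 0.01, (15, 4)))
train_y = np.repeat(np.arange(3), 5)

sample = normalise(prototypes[2] + rng.normal(0, 0.01, 4))
print(knn_classify(sample, train_x, train_y))  # → 2
```

KNN suits this kind of task because it needs no iterative training, handles many classes naturally and runs comfortably on a low-cost microcontroller when the example set is small.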

We also worked on the TripleOhm project, a system developed to monitor single- or three-phase electrical supplies. By using shunt resistors rather than current transformers, we achieved a more cost-effective and physically smaller solution and, more importantly, were able to capture more detailed, high-frequency information about electricity usage. Machine learning allowed us to extract useful insights from this data, such as which devices are being used in the home. The volume of data crossing a network and its potentially sensitive nature make it desirable to perform this processing at the edge. Work is ongoing to predict maintenance requirements, such as issues with a fridge compressor or a washing machine motor.

Edge AI in healthcare: On-device diagnostics and monitoring  

Medical devices often operate in privacy-sensitive and mission-critical environments. Edge AI enables fast, secure and localised decision-making. 

Use cases

Wearable health monitoring: Track vitals or movement to detect seizures, cardiac events or falls. 

Portable diagnostics: Use AI vision to detect cataracts or skin lesions directly on-device. 

Assistive technologies: Smart pill dispensers, mobility aids or elder-care monitors that work in real time.  

Digital biomarkers are emerging as a key frontier in diagnostic innovation and Edge AI is enabling their collection and interpretation directly from the source. 

  • Neurodegenerative disease: Passive monitoring of gait, speech and motor function supports early detection of conditions like Parkinson’s, Alzheimer’s and MS, where traditional diagnostics often arrive too late. 
  • Mental health: Edge-processed data such as speech cadence, sleep patterns and device interactions are unlocking earlier detection and better treatment pathways for anxiety, depression and cognitive decline. 
  • Cardiometabolic health: Continuous monitoring from wearables enables dynamic, real-time risk scoring for conditions like hypertension, arrhythmias and diabetes, bringing longitudinal context to what were once point-in-time metrics. 
  • Clinical trials: Pharma companies are adopting digital phenotyping to improve endpoint measurement, monitor adherence and reduce trial costs, gaining real-world insight at scale through embedded Edge AI systems. 

42T in action: We developed a handheld IR camera for neonatal eye screening, capable of operating in remote or low-connectivity areas. A YOLO-based deep learning model identifies the eyes and an EfficientNet CNN flags anomalies like cataracts. Initially trained in the cloud, the model is being optimised for full edge deployment, ensuring data privacy and portability worldwide. This is currently being trialled in the UK with a target of gathering images from 130,000 babies.

Technical challenges of Edge AI: Constraints and engineering demands

Running AI at the edge is about engineering for constraint. Devices vary, from powerful embedded platforms to microcontrollers with kilobytes of memory and tight power budgets. That variation forces sharp trade-offs. At the lightweight end, success can mean hitting tight latency targets with minimal compute. It demands efficiency, precise optimisation and the discipline to make AI deliver under real-world constraints. To achieve performance:

  • Model pruning: Reduces model size by eliminating low-impact parameters. This is generally achieved by inducing sparsity, removing low-value weights during training, though this may impact accuracy.
  • Quantisation: Converts weights from floating point (typically FP32) to 8-bit integers to reduce memory usage. This significantly lowers computational demand but can introduce quantisation noise and inaccuracies that arise from rounding floating-point values to fixed-point approximations. These effects must be carefully managed to maintain model accuracy.  
  • Weight sharing: Further reduces model size by clustering weights into buckets and using shared values. This technique limits the number of distinct weight values the model uses, which reduces memory overhead but requires careful calibration to avoid degradation in precision, especially in vision or speech tasks. 

These methods are often used in combination, to produce high-efficiency models that still perform reliably under edge constraints.  
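As a hedged illustration of two of these techniques, the NumPy sketch below applies magnitude pruning and symmetric int8 quantisation to a toy weight matrix. Real deployments would use a framework's quantisation tooling and calibrate against validation data rather than this hand-rolled version.

```python
import numpy as np

def prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Magnitude pruning: zero out the smallest-|w| fraction of weights."""
    cutoff = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < cutoff, 0.0, weights)

def quantise_int8(weights: np.ndarray):
    """Symmetric post-training quantisation: map FP32 weights to int8
    using a single per-tensor scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

# Toy FP32 weight matrix standing in for one layer of a trained model
rng = np.random.default_rng(42)
w = rng.normal(0, 0.1, (64, 64)).astype(np.float32)

w_pruned = prune(w, sparsity=0.5)          # half the weights become zero
q, scale = quantise_int8(w_pruned)         # 4x smaller than FP32 storage
w_restored = q.astype(np.float32) * scale  # dequantise to check the error

print(float((w_pruned == 0).mean()))  # → 0.5
```

The quantisation noise mentioned above is visible here as the gap between `w_restored` and `w_pruned`: it is bounded by half the scale factor, which is why keeping the weight range tight matters so much for int8 accuracy.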

Knowledge distillation teaches smaller models by having them mimic the behaviour of larger, pre-trained models, effectively compressing intelligence without significant accuracy loss. It’s particularly useful when model size and power consumption are constrained. Techniques like quantisation and weight sharing are often paired with distillation to further reduce memory footprint and computational load. 
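The soft-target part of distillation can be shown in isolation. The sketch below, with invented logits and an arbitrary temperature, computes the KL-divergence loss a student would minimise in order to mimic a teacher's softened outputs.

```python
import numpy as np

def softmax(logits: np.ndarray, T: float = 1.0) -> np.ndarray:
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits: np.ndarray,
                      teacher_logits: np.ndarray, T: float = 4.0) -> float:
    """Soft-target term of knowledge distillation: KL divergence between
    the temperature-softened teacher and student distributions, scaled by
    T**2 so its magnitude stays comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((T ** 2) * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

# Invented logits: a student that tracks the teacher scores a lower
# loss than one that disagrees with it.
teacher = np.array([[8.0, 2.0, 1.0]])
good_student = np.array([[7.5, 2.2, 1.1]])
bad_student = np.array([[1.0, 8.0, 2.0]])
print(distillation_loss(good_student, teacher)
      < distillation_loss(bad_student, teacher))  # → True
```

In a full training loop this term would typically be blended with the ordinary hard-label loss; only the soft-target component is sketched here.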

Applied carefully, sparsity and weight sharing reduce complexity without sacrificing performance.

At 42T, we combine embedded engineering with machine learning to deliver edge AI that works under real-world constraints. We optimise models for specific hardware targets, from low-power microcontrollers to custom silicon, and validate performance through rigorous system-level testing. 

In many of our projects, the bottleneck isn’t model training, it’s engineering for real-time, on-device inferencing. That’s where our systems engineering DNA gives clients an edge. 

42T’s Edge AI development process: From prototyping to deployment   

Edge AI is not just about training a model. It’s about building a system that works reliably, repeatedly and commercially. Our approach spans the full product lifecycle: 

Early-stage sandboxing 

We prototype rapidly, validate ideas using simulation tools and iterate quickly. This reduces risk early and surfaces integration challenges. 

Critical data evaluation 

Good AI starts with good data. We design data collection protocols, clean datasets and assess bias or noise before model development. 

Sensor design and fusion 

We create the sensing hardware: optical, electrochemical, acoustic or multi-modal. We then combine sensor streams for richer interpretation.  

System integration 

Our cross-functional teams design the entire system from electronics and firmware to mechanicals, UI and AI, all tailored to the final use case. While some machine learning models may operate as functional black boxes, we prioritise explainability wherever possible and ensure that system behaviour is well understood, tested and documented end to end. 

Deployment and validation 

We package models for embedded environments, validate under edge conditions and deliver production-grade solutions with robust fallback and update mechanisms. 

Commercial readiness 

We factor in manufacturing scale, supply chain constraints, certification and long-term maintenance to ensure what we build doesn’t just work once, it works at scale. 

What’s next: Future trends in Edge AI 

Edge AI is still early in its adoption curve, but the pace is accelerating. 

Emerging trends: 

Neuromorphic processors: Brain-inspired chips that consume microwatts. 

Federated learning: Models trained across devices without sharing raw data. 

Edge-cloud orchestration: Smart coordination between local and cloud resources, not just for inference but also for distributed training workflows.

Standardisation: ONNX and other frameworks are making deployment easier across platforms. 

Multimodal AI at the edge: Combining vision, audio and sensor data to enable richer, more context-aware local decision-making. 
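The federated learning trend can be made concrete with a toy FedAvg round in NumPy: each simulated device takes one local gradient step on its private data, and the server averages only the returned weights. The linear model, device datasets and learning rate below are invented for illustration.

```python
import numpy as np

def local_step(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
               lr: float = 0.1) -> np.ndarray:
    """One gradient step of linear regression on a device's private data.
    Only the updated weights -- never the raw data -- leave the device."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_w: np.ndarray, devices) -> np.ndarray:
    """One FedAvg round: each device trains locally, then the server
    averages the returned weights, weighted by local dataset size."""
    updates = [local_step(global_w.copy(), X, y) for X, y in devices]
    sizes = [len(y) for _, y in devices]
    return np.average(updates, axis=0, weights=sizes)

# Invented setup: three edge devices holding uneven amounts of private
# data, all drawn from the same underlying linear relationship.
rng = np.random.default_rng(7)
true_w = np.array([2.0, -1.0])
devices = []
for n in (20, 50, 30):
    X = rng.normal(size=(n, 2))
    devices.append((X, X @ true_w + rng.normal(0, 0.01, n)))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, devices)
print(np.round(w, 2))  # converges towards true_w = [2., -1.]
```

The privacy property is structural: the server sees weight vectors, never samples, which is what makes the technique attractive for fleets of edge devices handling sensitive data.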

42T is actively working with partners to explore next-generation architectures that make intelligent systems even more efficient, robust and commercially viable. This includes our strategic collaboration with Arcturus Networks, whose expertise in secure, embedded computing aligns closely with the needs of Edge AI. Their platform enables tighter integration of hardware, firmware and AI workloads, supporting rapid development and deployment of intelligent products at the edge. 

Why choose 42T for Edge AI: Full-system engineering expertise

Edge AI holds transformative potential. But unlocking it requires more than just model training. It takes: 

  • Application-first thinking 
  • System-level integration 
  • Deep cross-functional expertise 

That’s where 42T excels. 

Whether you’re developing a diagnostic device, an industrial sensor or a connected consumer product, we understand that AI is just one part of a complex puzzle. Our teams bring together electronics, embedded software, data science, optics, compliance and manufacturing know-how to deliver intelligent systems that work reliably, repeatedly and at scale. 

We help you build engineered solutions that solve real problems.  

Let us help you

We excel in deep innovation and technical breakthroughs, from early-stage exploration to end-to-end development and manufacturing.