Whitepaper

Building Trustworthiness in Distributed AI Systems

OBJECTIVES

In the past decade, we have witnessed prolific growth and adoption of the Internet of Things (IoT), not only in the consumer sector, such as health tech and consumer goods, but also in the industrial sector, such as manufacturing, transport, energy, and the built environment. These IoT devices generate massive amounts of data, useful for different machine learning (ML) / artificial intelligence (AI) services. Typically, organisations are averse to sharing private, sensitive data with a central server due to the risks of data breaches, privacy violations, and legal liabilities. Federated learning (FL, see Fig. 1) offers an alternative to traditional centralised ML methods by enabling on-device ML, negating the need to transfer sensitive end-user data to a central server. Each client trains a local model on its own data in a private setting and sends only model updates to the server, which aggregates them into a global update. Despite its privacy advantages, FL is not completely immune to various security and privacy attacks, as we discuss below.
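For readers less familiar with this round structure, a minimal sketch is given below. It assumes a FedAvg-style weighted average over flattened model weights; the function names (local_update, federated_average, train_fn) are illustrative and do not refer to any specific framework.

import numpy as np

def local_update(global_weights, local_data, train_fn):
    # Client-side step: start from the global weights, train on private local
    # data, and return only the updated weights; raw data never leaves the device.
    return train_fn(global_weights, local_data)

def federated_average(client_weights, client_sizes):
    # Server-side step: combine client updates into a new global model,
    # weighting each client by its number of local samples (FedAvg-style).
    stacked = np.stack([np.asarray(w) for w in client_weights])
    weights = np.array(client_sizes, dtype=float)
    return np.average(stacked, axis=0, weights=weights)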

Security attacks occur when a single malicious client, or a group of them, either prevents the global model from converging by boosting their local updates (Byzantine attack) or causes convergence to a wrong model (data poisoning attack). In privacy attacks, client-supplied models or updates can leak attributes of a client’s private, sensitive data. Defence mechanisms that tackle security or privacy attacks independently can still severely degrade the performance of the ML models, exposing the system to major operational and financial risks.
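Purely for illustration, the toy sketch below shows how these two attack classes could be expressed on a malicious client; the boosting factor and the label-flipping rule are arbitrary choices, not a reconstruction of the specific attacks evaluated in this work.

import numpy as np

def byzantine_update(honest_update, boost_factor=10.0):
    # Byzantine-style attack: the malicious client scales (boosts) its local
    # update so the aggregated global model is pushed away from convergence.
    return boost_factor * np.asarray(honest_update)

def flip_labels(labels, n_classes):
    # Simple data-poisoning (label-flipping) attack: the client trains on
    # deliberately mislabelled data, steering the global model towards a
    # wrong decision rule.
    return (np.asarray(labels) + 1) % n_classes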

Users frequently seek explanations for AI solutions rather than treating them as inscrutable ‘black boxes’, in order to build confidence in the results. Within the FL framework, multiple parties demand explainability: servers may need to clarify overall model behaviour, while clients might want insight into specific predictions. However, FL scenarios bring extra challenges. For instance, localised client data can yield inconsistent explainer results, and edge devices may lack the computational resources to support more accurate, resource-heavy explainers.

 

Fig 1: FL framework with security (set at server side) and privacy (ε-DP added at clients’ side) mitigation.

With the increasing adoption of ML models across various sectors, new risks linked to ethical principles and negative social impacts are emerging alongside. Hence, there is an urgent need to implement edge MLOps systems governed by industry-standard auditability protocols to mitigate privacy and security risks throughout the system’s lifecycle and to demonstrate observance of AI and data principles, as described, e.g., in the UK/EU GDPR, the Data Protection Act (DPA 2018), and the currently debated Data Protection and Digital Information Bill.

OctaiPipe’s distributed ML platform combines FL, automated ML, and Edge MLOps to automate the entire ML lifecycle on connected devices. It is therefore essential that we mitigate possible security and privacy attacks, follow established auditability protocols, and provide proper explainability, both to earn clients’ trust and to deliver the best results that maximise clients’ economic and other value. The goal of this project is to lay the technological and algorithmic foundation for a trustworthy AI solution that enhances OctaiPipe as an industrial-grade Edge MLOps platform, in collaboration with key partners. Below, we outline the key findings from our pilot implementation of security, privacy, explainability, and auditability in an FL setting.

 

Key findings

Adversarial Fortification: A data poisoning attack by a group of malicious clients (>40%) appears to be the most detrimental to model performance, reducing overall accuracy and increasing the fraction of wrong predictions (Fig. 2). In contrast, a Byzantine attack by the same group of malicious clients has a considerably more limited effect.

Fig. 2: Training trends for accuracy and fraction of wrong predictions under adversarial attacks. Trimmed averaging with a low trimming fraction is most effective against combined security attacks (data poisoning + Byzantine) with 40% malicious clients.

Mitigations are achieved by using different FL aggregation strategies, such as replacing the average with the median or averaging after trimming a varying fraction of the local updates. The effectiveness of different aggregation techniques largely depends on the task at hand and the complexity of the data, as sketched below.
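The sketch below shows the two robust aggregation rules mentioned above, assuming client updates are flattened parameter vectors; scipy.stats.trim_mean is one convenient implementation of the coordinate-wise trimmed mean, and the trimming fraction is a tunable parameter.

import numpy as np
from scipy.stats import trim_mean

def median_aggregate(client_updates):
    # Coordinate-wise median: robust to a minority of clients submitting
    # extreme (boosted) parameter values.
    return np.median(np.stack(client_updates), axis=0)

def trimmed_mean_aggregate(client_updates, trim_fraction=0.1):
    # Coordinate-wise trimmed mean: discard the largest and smallest
    # trim_fraction of values for each parameter before averaging.
    return trim_mean(np.stack(client_updates), trim_fraction, axis=0)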

Privacy-accuracy trade-off: We have chosen differential privacy (DP) as the defence mechanism against privacy attacks because it places bounds on the amount of privacy loss incurred when releasing information. DP distorts each client update with noise controlled by the parameters (ε, δ) so that no unequivocal inference can be made from a client’s individual updates. Although DP can be applied efficiently at the client side, the privacy comes at the cost of a modest impact on model accuracy. A small ε (≈1, Fig. 3) is often recommended to maximise privacy, but in practice ε is tuned based on data heterogeneity and the acceptable accuracy range for the task.
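A minimal sketch of client-side (ε, δ)-DP on a model update is given below, assuming the common clip-then-add-Gaussian-noise recipe; the clipping norm is an illustrative parameter and the noise scale uses the textbook Gaussian-mechanism calibration, whereas a production system would typically rely on a vetted DP library and a privacy accountant.

import numpy as np

def privatise_update(update, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=None):
    # Clip the update to bound its L2 sensitivity, then add Gaussian noise
    # calibrated to (epsilon, delta). A smaller epsilon means more noise:
    # stronger privacy, but a larger hit to model accuracy.
    rng = rng or np.random.default_rng()
    update = np.asarray(update, dtype=float)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon  # Gaussian mechanism
    return clipped + rng.normal(0.0, sigma, size=clipped.shape)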

 

Fig 3: A high ε retains good model accuracy but offers less protection, and vice versa.

Fig 4: Performance of local & global model explainers. Distribution of feature importance over clients shown for features 5-8. Features 1-4 are unimportant.

Explainable Edge MLOps: For a trained model, post-hoc explainers can provide local explanations of the model’s prediction on a single data instance and global explanations of the model’s general behaviour. Model-agnostic explainers often require fitting with a background dataset; in edge computing, results from locally fitted explainers can be inconsistent due to the differing distributions of local data. Using the KernelSHAP and LIME explainers, we find such inconsistency in local explanations, while global explanations are consistent and sensible, assigning the largest feature importance to the informative features (Fig 4). The performance of the FL server explainer remains comparable to that of a centralised model explainer.
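As one concrete illustration, the sketch below fits a KernelSHAP explainer with the open-source shap library on a synthetic stand-in for a single client’s local model and data (the real experiments use the FL-trained models); the background sample drawn from the client’s local distribution is precisely what makes local explanations vary between clients.

import numpy as np
import shap                                   # pip install shap
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for one edge client's local data and model: only
# features 5-8 carry signal, loosely mirroring the feature setup of Fig 4.
rng = np.random.default_rng(0)
X_local = rng.normal(size=(200, 8))
y_local = (X_local[:, 4:].sum(axis=1) > 0).astype(int)
model = LogisticRegression().fit(X_local, y_local)

def predict_fn(X):
    return model.predict_proba(X)[:, 1]      # scalar output per row

# KernelSHAP needs a background dataset; at the edge this comes from the
# client's local distribution, which is why locally fitted explainers can
# disagree across clients.
background = shap.sample(X_local, 50)
explainer = shap.KernelExplainer(predict_fn, background)

# Local explanation: per-feature contributions for a single prediction.
local_values = explainer.shap_values(X_local[:1])

# Global explanation: mean absolute contribution over a sample of instances.
global_importance = np.abs(explainer.shap_values(X_local[:50])).mean(axis=0)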

Auditable Edge MLOps: Existing governance frameworks (e.g., the inter-governmental Supreme Audit Institutions whitepaper) are geared towards general ML algorithms in the context of cloud computing. Frameworks covering edge/IoT computing and ML/FL do exist (e.g., ETSI EN 303 645) but are less directly applicable. Edge MLOps poses unique auditability challenges due to the heterogeneity of devices and data and the complexity of the infrastructure. The key auditability elements identified in this context are security, responsible AI, compliance with data protection rules such as the UK/EU GDPR and the UK’s Data Protection Act, and achieving business objectives. This understanding forms a crucial foundation for the Phase 2 implementation.

Contribution from Consortium Partners: Health, Safety and Environment (HSE) concerns, especially safety-critical events and near misses, are paramount in industry. Beyond merely identifying such events, it is crucial to predict their occurrence and assess the associated risk. AI systems combining computer vision (CV) for event recognition and ML for prediction can serve this need. Yet the infrequency of these events necessitates large-scale observation, often beyond the reach of a single corporation. In such multiparty situations, the privacy of individuals and corporations becomes crucial. A solution lies in pairing privacy-focused CV systems with FL for IoT, enabling learning in edge environments across entities. This addresses a real-world challenge for large corporations that aim to improve HSE but are constrained by data-sharing issues. Moreover, safety applications demand high explainability and security in AI systems, for both safety and auditability, so that necessary adjustments can be made in case of faults.


OctaiPipe is the lead partner in bringing together a world-class consortium to develop trustworthy federated edge AI for IoT technology. In Phase 2, we will bring together a large corporate partner, other large corporate collaborators, and specialist SME partners, not only to solve this HSE use case of great economic and societal benefit, but also to use its unique qualities to develop best-in-class technology for securing, explaining, and auditing privacy-preserving federated edge AI systems.

 

Summary and Outlook

Edge ML systems need to be resilient to large-scale attacks: We will develop and implement practical countermeasures based on realistic scenarios of malicious activities identified in Phase 2.

Security-privacy-accuracy trade-off: Differential privacy protects the privacy of clients, but the distortion added to the updates can severely degrade the accuracy of the global model. On the other hand, distorted updates may have adverse effects on the security mitigations in place. This trilemma is one of the grand challenges we will tackle in Phase 2, building on existing best practices in DevOps.

Explainability on edge devices: To ensure consistent model explanations across edge devices, we will standardise and benchmark model explainability methods, which are task-, data-, and compute-resource-dependent.

Auditable Edge MLOps: To formalise a global-standard governance framework for Edge MLOps, we will leverage existing frameworks for ML/AI and IoT, working hand-in-glove with our consortium partners, collaborators, and industry leaders.

Navigate the Complex Landscape of Security, Privacy, and Explainability in Federated Learning Systems

Click below to download the whitepaper and redefine trust in distributed AI