Framework for Auditable Edge MLOps


Table of Contents

  • Edge MLOPS governance
    • Abstract
    • 1. Machine learning governance
    • 2. Auditable information in a machine learning lifecycle
    • 3. Outlook
    • References

Edge MLOPS governance

Abstract

We establish an edge MLOPS governance framework. Existing governance frameworks are mostly geared towards general machine learning algorithms and cloud computing. Those tailored to edge/Internet-of-Things computing and federated learning, while present, are much less explored. The data localisation, different infrastructure, and scale of edge computing with federated learning present new, unique challenges to its governance. In Phase one, we establish an edge MLOPS governance framework and, in particular, identify processes and records specific to edge MLOPS and federated learning that are required to fulfil AI governance principles. Implementing the framework in OctaiPipe in Phase two will make it much easier for its users to fulfil AI governance principles.

 

1. Machine learning governance

AI governance is indispensable to providing trustworthiness and accountability to an AI solution. An AI solution must be demonstrated to fulfil at least the following governance principles:

  1. Compliance with existing standards, such as those for software and cyber security
  2. Responsible AI and data protection principles, as stipulated in the UK GDPR, the UK’s Data Protection Act, the EU GDPR, and their international counterparts, which can be summarised in the following figure:

Figure 1 Summary of AI and data protection principles

  3. Accountability to stakeholders: the business objectives and scope for using an AI solution must be clearly formulated, and there must exist quantifiable and verifiable evidence that the AI solution fulfils them and is worthy of investment

 

To meet these governance principles, one must implement processes and provide the corresponding verifiable records for auditability. General auditability frameworks for machine learning (ML) algorithms have been established by various governmental and intergovernmental efforts, such as a whitepaper by the Supreme Audit Institutions of Finland, Germany, the Netherlands, Norway and the UK. A machine learning governance framework can be organised according to the steps involved in a typical machine learning lifecycle, as illustrated in the following figure.

Figure 2 A machine learning lifecycle. Model training and inference can be done in the cloud or on the edge (the current focus). Federated learning can be deployed to the edge for secure, distributed model aggregation.

Auditing is a way for regulatory bodies and stakeholders to verify that a machine learning solution fulfils AI governance. To meet audit requirements, an ML solution must be accompanied by records, documents, and code that are verifiable, well annotated, and peer reviewed, can be readily inspected, and provide the information necessary to reproduce any experimental results.
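As an illustrative sketch only (not taken from any of the cited standards, nor from OctaiPipe), the following Python snippet shows one way a training run could capture the reproducibility metadata an auditor would expect: the random seed, hyperparameters, a hash of the training data, and a snapshot of the software environment. All function and field names are hypothetical.

```python
import hashlib
import json
import platform
import random
from datetime import datetime, timezone
from importlib import metadata


def data_fingerprint(path: str) -> str:
    """Hash the raw training data so an auditor can confirm exactly which dataset was used."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    return sha.hexdigest()


def build_audit_record(data_path: str, seed: int, params: dict) -> dict:
    """Assemble a reproducibility record for one training run (hypothetical schema)."""
    random.seed(seed)  # fix the seed so the run can be repeated exactly
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "random_seed": seed,
        "hyperparameters": params,
        "data_sha256": data_fingerprint(data_path),
        "python_version": platform.python_version(),
        # snapshot of the installed packages, so the environment can be rebuilt
        "packages": {dist.metadata["Name"]: dist.version for dist in metadata.distributions()},
    }


if __name__ == "__main__":
    # "train.csv" and the hyperparameters are placeholders for a real training run
    record = build_audit_record("train.csv", seed=42, params={"max_depth": 5})
    with open("training_audit_record.json", "w") as f:
        json.dump(record, f, indent=2)
```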

The auditable processes and records of each step of a lifecycle are related to each other. For example, a machine learning model must be accompanied by a record evaluating how well it fulfils the business use cases identified and documented in the initial phase of the lifecycle. An AI solution is often, in turn, only one component of a larger IT infrastructure; the interconnectedness of its components must be taken into consideration when designing an auditability framework.
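To illustrate this interlinking, the hypothetical sketch below shows a model evaluation record that carries an explicit reference to the business-case record created in the initial phase, so that an auditor can traverse the chain from business objective to evaluated model. The schema and field names are assumptions for illustration, not part of any cited framework.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class BusinessCaseRecord:
    """Documented in the initial phase of the lifecycle (hypothetical schema)."""
    case_id: str
    objective: str
    success_metric: str   # e.g. "recall"
    target_value: float   # value the model must reach to be worth the investment


@dataclass
class ModelEvaluationRecord:
    """Produced after model evaluation; links back to the business case it must satisfy."""
    model_id: str
    business_case_id: str  # cross-reference that keeps the audit trail traversable
    metric_values: dict
    meets_target: bool


def evaluate_against_case(model_id: str, metrics: dict, case: BusinessCaseRecord) -> ModelEvaluationRecord:
    achieved = metrics.get(case.success_metric, 0.0)
    return ModelEvaluationRecord(
        model_id=model_id,
        business_case_id=case.case_id,
        metric_values=metrics,
        meets_target=achieved >= case.target_value,
    )


case = BusinessCaseRecord("BC-001", "Detect pump failures early", "recall", 0.90)
record = evaluate_against_case("model-v3", {"recall": 0.93, "precision": 0.88}, case)
print(json.dumps(asdict(record), indent=2))
```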

A machine learning solution may be developed, trained, and deployed in a centralised (cloud) environment, or in a distributed manner on edge/IoT devices. For the latter, federated learning (FL) may be employed. While there are established compliance programs for cloud computing, fewer such standards (e.g. ETSI EN 303 645) exist for edge computing and federated learning. The different context of edge ML-/FL-OPS, the localisation of data, the scale of the number of edge devices, and the increased, or at least different, infrastructure complexity necessitate an additional set of processes and records to comprehensively fulfil the AI governance principles.

2. Auditable information in a machine learning lifecycle

We identify the information in each step of the machine learning lifecycle necessary for meeting the AI governance principles enumerated in the last section, as summarised in Figure 3 below.

For edge computing, additional considerations apply, such as demonstrating the efficiency of deploying the pipeline at scale and implementing measures to verify that edge devices are indeed performing the prescribed ML tasks.
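As one possible (hypothetical) way to verify that a device is performing the prescribed task, the sketch below has each edge device send a signed heartbeat containing the task identifier and a digest of the deployed model, which the coordinating server checks against what was prescribed. The key handling and message format are simplified assumptions, not OctaiPipe functionality.

```python
import hashlib
import hmac
import json
import time

# In practice each device would have its own key, provisioned securely at enrolment.
SHARED_KEY = b"per-device secret provisioned at enrolment"


def heartbeat(device_id: str, task_id: str, model_bytes: bytes) -> dict:
    """Edge side: report which task is running and a digest of the deployed model."""
    payload = {
        "device_id": device_id,
        "task_id": task_id,
        "model_sha256": hashlib.sha256(model_bytes).hexdigest(),
        "timestamp": int(time.time()),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return payload


def verify(report: dict, expected_task: str, expected_model_sha: str) -> bool:
    """Server side: check the signature, then check the device runs the prescribed task and model."""
    report = dict(report)  # avoid mutating the caller's copy
    signature = report.pop("signature")
    body = json.dumps(report, sort_keys=True).encode()
    expected_sig = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected_sig):
        return False
    return report["task_id"] == expected_task and report["model_sha256"] == expected_model_sha


model = b"...serialised model bytes..."
report = heartbeat("edge-007", "anomaly-detection-v2", model)
assert verify(report, "anomaly-detection-v2", hashlib.sha256(model).hexdigest())
```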

For federated learning, which has only recently started gaining traction in industry, the discussion of AI trustworthiness and governance, in both academia and industry, is correspondingly recent. We have surveyed the literature on its latest developments and identified some of its unique auditable elements. For example, it is important to keep a registry of the clients in the federation and to be able to demonstrate the fairness of their participation in the model aggregation. Fairness metrics can be incorporated into the model aggregation strategy to improve the intrinsic fairness of the global model. One also needs to obtain consistent results for model performance, explanations, and fairness within the federated learning framework; following the findings of our Explainable Edge MLOPS studies, this can be a challenge that needs to be overcome.
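As a toy illustration of incorporating a fairness metric into the aggregation strategy, the sketch below blends the usual sample-count weighting of federated averaging with a per-client disparity score, so that clients with fairer local models contribute more to the global model. This is a simplified, assumption-laden example, not the method of the cited works.

```python
import numpy as np


def fairness_weighted_average(client_updates, sample_counts, disparity_scores, alpha=0.5):
    """Aggregate client parameter vectors, blending the usual data-volume weighting
    with a term that favours clients whose local models show lower group disparity.
    Illustrative only; published strategies (e.g. reference 5) treat this as a
    multi-objective optimisation problem."""
    counts = np.asarray(sample_counts, dtype=float)
    disparity = np.asarray(disparity_scores, dtype=float)

    data_w = counts / counts.sum()   # classic FedAvg weighting by sample count
    fair_w = 1.0 - disparity         # lower disparity -> higher weight
    fair_w = fair_w / fair_w.sum()

    weights = (1 - alpha) * data_w + alpha * fair_w
    weights = weights / weights.sum()

    stacked = np.stack([np.asarray(u, dtype=float) for u in client_updates])
    return np.average(stacked, axis=0, weights=weights)


# Three clients, each contributing a flattened parameter vector after local training.
updates = [[0.9, 1.1], [1.0, 1.0], [1.4, 0.6]]
global_params = fairness_weighted_average(
    updates, sample_counts=[500, 300, 200], disparity_scores=[0.05, 0.10, 0.40]
)
print(global_params)
```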

Figure 3 AI governance information required in each step of a machine learning lifecycle, fulfilled by implementing the necessary processes and keeping the corresponding records. Additional information for edge MLOPS and federated learning is separately enumerated.

3. Outlook

In Phase one, we have established a framework for Edge ML-/FL-OPS governance, enumerating the necessary information for meeting the AI governance principles. In the next phase, we will implement, and improve on, the framework in a real-life industrial setting. The implementation will be simplified with the help of OctaiPipe, a platform that enables edge MLOPS and federated learning at scale.

 

References
  1. Supreme Audit Institutions of Finland, Germany, the Netherlands, Norway and the UK. Auditing machine learning algorithms – A white paper for public auditors. https://auditingalgorithms.net/Introduction.html
  2. Information Commissioner’s Office. Guidance on AI and data protection. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
  3. U.S. Government Accountability Office. Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities https://www.gao.gov/products/gao-21-519sp
  4. ETSI. Consumer IoT security https://www.etsi.org/technologies/consumer-iot-security
  5. Mehrabi et al. Towards Multi-Objective Statistically Fair Federated Learning https://arxiv.org/abs/2201.09917
  6. Sánchez et al. FederatedTrust: A Solution for Trustworthy Federated Learning https://arxiv.org/abs/2302.09844
  7. Zhang and Yu. Towards Verifiable Federated Learning https://arxiv.org/abs/2202.08310
