OctaiPipe v2.2 is now live!


OctaiPipe, the Federated Edge AI company, has announced general availability of version 2.2 of its platform. Targeted at Critical Infrastructure and Industrial IoT operators, the OctaiPipe platform lets data scientists build and orchestrate networks of intelligent Edge devices, streamlining the entire lifecycle from network setup, data analysis and training to deployment, AI model fine-tuning and continuous learning.

Version 2.2 continues that journey, adding federated learning capability for more popular model types and reducing CPU, memory and storage usage so that workloads can run on increasingly constrained edge devices.

Critical Infrastructure companies across energy, telecoms, civil engineering and security are aware of the many benefits AI and connected IoT devices can offer – from improving operational efficiency and sustainability to monitoring asset performance and ensuring network resilience. Critical Infrastructure environments are typically data-rich and highly secure. As such, intensive cloud data requirements, security concerns and network dependency have limited the sector’s utilisation of Cloud AI and connected devices.

Federated Learning Operations (FL-Ops) enables the deployment of AI to the edge and the management of distributed learning across a network of intelligent devices. Rather than move data from Edge devices to the cloud to train AI algorithms – with all the data storage costs and security concerns that entails – Federated Learning (FL) instead trains algorithms on-device at the Edge, with model updates rather than raw data shared between devices in a decentralised network for continuous, distributed learning.
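To make the idea concrete, here is a minimal sketch of federated averaging (FedAvg), the classic FL scheme: each device trains on its own data and sends back only model weights, which the server averages. This is a generic illustration of the technique, not OctaiPipe's implementation; the function names and the simple linear model are ours.

```python
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=20):
    """One client's local update: a few gradient-descent steps on a
    linear model, using only that client's own data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(client_updates, client_sizes):
    """Server step: weighted average of client models (FedAvg).
    Raw data never leaves a device -- only model weights do."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_updates, client_sizes))

# Simulate three edge devices whose local data come from the same model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(10):  # ten federated rounds
    updates = [local_train(w, X, y) for X, y in clients]
    w = fedavg(updates, [len(y) for _, y in clients])

print(np.round(w, 2))  # converges toward true_w = [2.0, -1.0]
```

In a real deployment the "clients" are edge devices and the averaging happens on an FL server, but the flow – local training, small update upload, aggregation, redistribution – is the same.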


For an explanation of Federated Learning for Edge AI in IoT, watch OctaiPipe’s latest demo below!

Changes in Version 2.2

Federated XGBoost

XGBoost (Extreme Gradient Boosting) is a powerful and popular machine learning algorithm that often outperforms deep learning approaches in both speed and accuracy while using fewer resources and producing more explainable results.

OctaiPipe V2.2 introduces a novel approach to federated training of XGBoost models that minimises the number of interactions between edge devices and the Federated Learning server, improving robustness on intermittent networks.
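OctaiPipe's exact protocol isn't described here, but the general histogram-aggregation idea behind federated gradient boosting can be sketched: each client bins its gradients into a small histogram, and the server sums those histograms once to search for the best split – one exchange instead of one per data point. Everything below (`grad_hist`, `best_split`, the single-feature setup) is an illustrative assumption, not the product's API.

```python
import numpy as np

def grad_hist(x, residual, bins):
    """Client side: bucket each sample's gradient into shared feature bins.
    Only these small histograms leave the device, never the raw data."""
    g = -residual                              # gradient of squared loss
    idx = np.clip(np.digitize(x, bins) - 1, 0, len(bins) - 2)
    hist_g = np.zeros(len(bins) - 1)
    hist_n = np.zeros(len(bins) - 1)
    np.add.at(hist_g, idx, g)
    np.add.at(hist_n, idx, 1)
    return hist_g, hist_n

def best_split(hists):
    """Server side: sum the client histograms, then score every candidate
    split by its reduction in squared error (a simplified split gain)."""
    G = sum(h[0] for h in hists)
    N = sum(h[1] for h in hists)
    tg, tn = G.sum(), N.sum()
    best_i, best_gain, gl, nl = -1, 0.0, 0.0, 0.0
    for i in range(len(G) - 1):
        gl, nl = gl + G[i], nl + N[i]
        gr, nr = tg - gl, tn - nl
        if nl and nr:
            gain = gl**2 / nl + gr**2 / nr - tg**2 / tn
            if gain > best_gain:
                best_i, best_gain = i, gain
    return best_i, best_gain

# Two simulated devices whose target steps from 0 to 1 at x = 0.5.
rng = np.random.default_rng(0)
bins = np.linspace(0.0, 1.0, 9)                # shared bin edges
hists = []
for n in (800, 1200):
    x = rng.uniform(0.0, 1.0, n)
    residual = (x > 0.5).astype(float) - 0.5   # label minus base score
    hists.append(grad_hist(x, residual, bins))

split, gain = best_split(hists)
print(bins[split + 1])  # the chosen split edge sits at the true breakpoint
```

Because each round exchanges only fixed-size histograms, a dropped connection costs one round's worth of aggregation rather than a stream of per-sample messages – the property that matters on intermittent networks.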

WebAssembly support for inference

OctaiPipe V2.2 introduces an alternative container for inference workloads – OctaiOxide. Instead of using Python, these containers use WebAssembly binaries to run model inference at native speed and can run with as little as 25MB of RAM. The storage requirement for this image is also much lower – about 200MB. OctaiOxide is the first step towards running inference and federated learning workloads on Android, RTOS and embedded Linux devices, so stay tuned for updates from us!

Improvements to model monitoring and continuous learning

The MLOps capabilities added in V1.6, which enabled the definition of monitoring policies for models deployed to edge devices, have been updated to better detect drift, so that models can be retrained to maintain performance over time. Take a look at our latest demo video above to see this in real time.
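One common way such a monitoring policy can quantify input drift – shown here as a generic sketch, not OctaiPipe's implementation – is to compare a recent window of data against a training-time reference using a statistic like the Population Stability Index (PSI), retraining when it crosses a threshold:

```python
import numpy as np

def drift_score(reference, window, n_bins=10):
    """Population Stability Index between a reference sample and a recent
    window: higher means the input distribution has moved further."""
    edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf     # catch out-of-range values
    ref = np.histogram(reference, edges)[0] / len(reference)
    win = np.histogram(window, edges)[0] / len(window)
    ref = np.clip(ref, 1e-6, None)            # avoid log(0)
    win = np.clip(win, 1e-6, None)
    return float(np.sum((win - ref) * np.log(win / ref)))

rng = np.random.default_rng(1)
reference = rng.normal(0, 1, 5000)    # data the model was trained on
stable = rng.normal(0, 1, 1000)       # fresh data, same distribution
shifted = rng.normal(1.5, 1, 1000)    # fresh data after sensor drift

# A typical policy: flag drift and retrain when PSI exceeds ~0.2
print(drift_score(reference, stable) < 0.2)    # stable data passes
print(drift_score(reference, shifted) > 0.2)   # drifted data triggers
```

The 0.2 threshold is a common rule of thumb, not an OctaiPipe default; in practice the threshold and the statistic are exactly the kind of knobs a monitoring policy exposes.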

Support for self-service installation of OctaiPipe from the Azure Marketplace

The OctaiPipe portal can be installed in customer Azure or AWS cloud environments. In V2.2 the Azure installation process has been updated so that new customers can self-install OctaiPipe into their Azure subscription without hands-on support. This allows organisations to rapidly evaluate how OctaiPipe can reduce costs and improve efficiency when training, deploying and managing machine learning solutions on edge devices.

Continuous improvements to platform reliability, security and costs

During every release cycle, OctaiPipe is updated to fix bugs, reduce complexity, increase security and reduce running costs. A comprehensive set of security and vulnerability tests is completed to ensure new capabilities do not degrade performance in this area. In V2.2, we have:

  • Removed the requirement for a separate VPN connection to access Jupyter notebooks, as all OctaiPipe access is now secured with a single sign-on authority.
  • Moved the Portal to run in Kubernetes and optimised node sizes, improving cloud agnosticism and further reducing running costs.
  • Implemented TLS certificates for FL communication rounds.

Ready to embrace the future of Edge AI? Talk to our experts and see how our solution can enhance your operations.