How Federated Learning is Going to Revolutionize AI

A new framework has emerged in AI which has the capability to compute across millions of devices and consolidate those results to provide better predictions for enhancing user experience. Welcome to the era of federated (decentralized) Machine Learning.

Article By Ashwani Gupta, Senior Data Scientist, Publicis Sapient

This year, we observed an amazing astronomical phenomenon – the first-ever picture of a black hole. But did you know that this black hole is more than 50 million light-years away? To capture this picture, scientists would have needed a single-dish telescope as big as the Earth itself! Since it was practically impossible to build such a telescope, they brought together a network of telescopes from across the world. The Event Horizon Telescope thus created was, in effect, a large computational telescope with an aperture the diameter of the Earth.

This is an excellent example of decentralized computation, and it shows how the power of decentralized learning can be exploited in other fields as well.

Built on the same principles, a new framework has emerged in AI that can compute across millions of devices and consolidate the results to provide better predictions for enhancing user experience. Welcome to the era of federated (decentralized) Machine Learning.

What’s federated (decentralized) Machine Learning?

We’ll get there, in a bit. But first, we need to understand what traditional or centralized Machine Learning is.

Centralized Machine Learning

With billions of mobile devices in the world, this is an era of enormous computing power. As computing power gets cheaper, we already have mobile phones with hardware capacity equivalent to laptops. It won’t be long before your pocket devices have GPUs (Graphical Processing Units) and are able to train deep neural networks easily.

Because almost everyone in the world now owns a personal device, we are witnessing a surge in the volume of data generated - something never observed in the past - and it is growing at an exponential rate. Data generated at this ever-increasing pace opens up new possibilities for building more accurate and personalized ML models that enhance customer experience and help users make decisions.

Centralized Machine Learning is all about creating an algorithm using ‘training data’ - a sample of data - to identify patterns and trends in it. The machine then uses the algorithm to ‘learn’ these patterns and identify them in bigger chunks of data similar to the sample.

Let’s go into specifics now. There are five steps involved in this process:

  1. Identification of the problem
  2. Data preparation for solving the problem
  3. ‘Training’ an ML algorithm on a centralized server or machine
  4. Sending the trained model to client systems (or providing an ML service that exposes the API)
  5. Commencement of result prediction on unseen data
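
To make these steps concrete, here is a minimal sketch of steps 2-5 using scikit-learn; the dataset, model, and train/test split are illustrative assumptions rather than a prescription:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Step 2: all training data is gathered and prepared on one central server.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 3: the model is trained centrally, where the data lives.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Steps 4-5: the fitted model (or an API wrapping it) is shipped to clients,
# which use it to predict on data the model has never seen.
print(model.predict(X_test[:5]))
```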

Hence, in the current ML world, the approach to model training is centralized. Centralized training requires data to be stored at a central location or a data server, thereby limiting access and also raising security concerns (what if this data is hacked?).

Ever wondered how Google Maps suggests alternate routes at just about the right moment? Three words: Real-time Computation.

Google collects data on its server from hundreds of vehicles that have already passed through the same route you are taking, computes the best route chosen by most and passes on this information to you - making your life much easier (you’re welcome).
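
As a toy illustration of that kind of centralized aggregation (the route names below are made up), the server simply counts the routes reported by vehicles and suggests the most common one:

```python
from collections import Counter

# Hypothetical route reports collected on a central server from passing vehicles.
reported_routes = ["NH-48", "NH-48", "Ring Road", "NH-48", "Ring Road"]

# The server recommends the route most vehicles actually took.
best_route, count = Counter(reported_routes).most_common(1)[0]
print(f"Suggested route: {best_route} ({count} of {len(reported_routes)} vehicles)")
```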

Limitations of centralized learning

But these amazing conveniences come at a cost that most people don’t even realize. Storing data at a central server not only leads to a violation of user privacy but also poses a risk of exposing other personal data as well. Most of the time, user data is stored on a cloud owned by big corporations without users even knowing it. In effect, users trade in their privacy to get applications that are better personalized to their needs.

Governments in various countries have taken heed of these privacy concerns and have come up with strict measures to ensure data privacy. Some of these are HIPAA - the Health Insurance Portability and Accountability Act - in the healthcare sector and GDPR - the General Data Protection Regulation. These restrict any organization’s access to user data unless explicitly permitted (often in writing) by the user.

So what do organizations that thrive on personal data do? It is getting more and more difficult for start-ups and companies to build applications that provide well-personalized results to users. All ML applications work on a simple logic: the more data you feed them, the more accurate they get, and the better and more personalized the results they return. Applications not trained on large amounts of user data often produce poor, non-personalized results, which leads to lower adoption of new applications by the user community.

These problems for both the user and for organizations can be addressed with the help of Federated Learning.

Back to the original question: What is Federated Learning and how will it help?

Federated Learning is a new branch in AI that has opened the doors for a new era of Machine Learning.

It can exploit both the decentralized data (data that stays on user devices instead of sitting in one central, vulnerable location) and the decentralized computing power available in the modern world to provide a more personalized experience without compromising user privacy.

It is now possible to share information between a client and a server without compromising user privacy, thanks to homomorphic encryption (see this article by Andreas Poyiatzis). In simple terms, homomorphic encryption makes it possible to perform computations on encrypted data (no privacy violations) at the remote server. The computation results, which are also encrypted, are then sent back to the clients, and each client can decrypt its personalized results without worrying about compromising its privacy.
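
As a rough illustration of the idea (and nothing more), here is a toy version of the Paillier cryptosystem, an additively homomorphic scheme, with deliberately tiny hard-coded primes. The server can multiply the ciphertexts it receives and obtain an encryption of the sum of the clients’ values without ever seeing any individual value; real systems use vetted libraries (for example, python-paillier) with full-size keys:

```python
import math
import random

# Toy Paillier keys with tiny primes, for illustration only.
p, q = 293, 433
n = p * q
n_sq = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                                # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    x = pow(c, lam, n_sq)
    return (((x - 1) // n) * mu) % n

# Clients encrypt their local values; the server adds them without decrypting.
updates = [7, 12, 5]                        # hypothetical per-client values
aggregate = 1
for c in (encrypt(u) for u in updates):
    aggregate = (aggregate * c) % n_sq      # product of ciphertexts = sum of plaintexts
print(decrypt(aggregate))                   # prints 24, yet the server saw only ciphertexts
```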

How does it work?

Don’t get bogged down by the apparent complexity. Here’s what happens:

A typical Federated Learning solution starts by training a generic Machine Learning model on a centrally located server. This model is not personalized, but acts as a baseline to start with. Next, the server sends this model to user devices (Step 1), also known as ‘clients’ (these can range from hundreds to millions, depending on the user base of the application). As client devices generate data, the local models (on the respective user devices) learn and get better with time.

Periodically, all clients send their learnings to the central server without ever exposing the user’s personal data to it (Step 2). This is done with the help of homomorphic encryption. The server then aggregates the new learnings from the clients and continues to improve the shared model (Step 3). The new shared model is sent back to the client devices and the same cycle repeats. With each passing cycle, the shared model located at the central server gets better and becomes more personalized.
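
Below is a minimal sketch of this loop in the spirit of Federated Averaging, using NumPy and a toy linear model. The data, learning rate, and number of rounds are arbitrary assumptions, and the encryption of the updates described above is omitted for brevity:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training; only the updated weights leave the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)    # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Hypothetical private datasets held by three clients (never sent to the server).
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)                       # the generic baseline model on the server
for _ in range(20):                          # repeated communication rounds
    # Step 1: the server sends global_w to every client.
    # Step 2: each client trains locally on its own data.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # Step 3: the server averages the client models into a new shared model.
    global_w = np.mean(local_ws, axis=0)

print(global_w)                              # converges toward [2, -1] without pooling raw data
```

Only the model weights travel between the clients and the server; the raw data held by each client never leaves its own device (simulated here by each entry in `clients`).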

This art of learning from the user’s personal data, without any threat of exposing it, has a lot of potential to open up new possibilities in the future.

Future of Federated Learning 

Self-driving connected cars can leverage Federated Learning to drive more safely. Instead of avoiding a pothole based only on a predetermined set of algorithms and rules, a self-driving car that utilizes the information from all the cars that crossed the same pothole in the last hour will be able to make a better decision for the safety and comfort of its passengers.

The next 5 years are going to be very interesting for Federated Learning. We will see a plethora of new applications taking advantage of Federated Learning, enhancing user experience in a way that was not possible before. A lot of companies will come forward and provide platforms for developing Federated Learning applications quickly. We will see an era where users are rewarded for sharing their local learnings with big companies.

Google has already shared its Federated Learning platform in the form of TensorFlow Federated. Although still in its nascent stage, it’s a good platform to start learning with. Upcoming releases will come with new features that will enable users to build end-to-end, scalable Federated Machine Learning models.

OpenMined is an open-source community that has already started some serious work in this area. Their approach ensures full data protection along with rewards to clients for sharing their learnings. I recommend visiting their website if you want to explore this field further.
