Called Watson Services for Core ML, the program lets employees using equipped MobileFirst apps analyze images, classify visual content and train models using Watson Services, according to Apple. Watson's Visual Recognition delivers pre-trained machine learning models that support image analysis for recognizing scenes, objects, faces, colors, food and other content. Importantly, image classifiers can be customized to suit client needs.
Integrating Watson tech into iOS is a fairly straightforward workflow. Clients first build a machine learning model with Watson, which taps into an offsite data repository. The model is converted into Core ML, implemented in a custom app, then distributed through IBM's MobileFirst platform.
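In practice, the on-device half of that workflow reduces to a few lines of Swift. The sketch below is illustrative only: it assumes a Watson-trained classifier has already been converted and bundled with the app under the hypothetical file name "WatsonVisualClassifier.mlmodelc", then loads the model through Apple's Core ML and Vision frameworks and classifies an image with no network connection.

import CoreML
import UIKit
import Vision

// Minimal sketch of the on-device classification step, assuming a Watson-trained
// model has already been converted to Core ML and bundled with the app under the
// hypothetical name "WatsonVisualClassifier.mlmodelc".
func classify(_ image: UIImage, completion: @escaping (String?) -> Void) {
    guard
        let cgImage = image.cgImage,
        let modelURL = Bundle.main.url(forResource: "WatsonVisualClassifier",
                                       withExtension: "mlmodelc"),
        let coreMLModel = try? MLModel(contentsOf: modelURL),
        let visionModel = try? VNCoreMLModel(for: coreMLModel)
    else {
        completion(nil)
        return
    }

    // Vision scales the image to the model's expected input size, then runs
    // the Core ML model entirely on the device.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top.map { "\($0.identifier): \($0.confidence)" })
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}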
Introduced at the Worldwide Developers Conference last year, Core ML is a framework that facilitates the integration of trained neural network models, built with third-party tools, into iOS apps. The framework is part of Apple's push into machine learning, which began in earnest with iOS 11 and the A11 Bionic chip.
"Apple developers need a way to quickly and easily build these apps and leverage the cloud where it's delivered," said Mahmoud Naghshineh, IBM's general manager, Apple partnership.
"That's the beauty of this combination. As you run the application, it's real time and you don't need to be connected to Watson, but as you classify different parts
Apple and IBM first partnered on the MobileFirst enterprise initiative in 2014. Under terms of the agreement, IBM handles hardware leasing, device management, security, analytics, mobile integration and on-site repairs, while Apple aids in software development and customer support through AppleCare.
IBM added Watson technology to the service in 2016, granting customers access to in-house APIs like Natural Language Processing and Watson Conversation. Today's machine learning capabilities are an extension of those efforts.