JFrog Integrates with Qwak for Streamlined ML Application Delivery

Uniting JFrog Artifactory and Xray with Qwak’s ML Platform brings ML apps alongside all other software development components in a modern DevSecOps and MLOps workflow

SMEStreet Edit Desk
Gal Marder, Executive Vice President of Strategy, JFrog

JFrog Ltd., the Liquid Software company and creators of the JFrog Software Supply Chain Platform, today announced a new technology integration with Qwak, a fully managed ML platform, that brings machine learning (ML) models alongside traditional software development processes to streamline, accelerate, and scale the secure delivery of ML applications.

“Currently, data scientists and ML engineers are using a myriad of disparate tools, which are mostly disconnected from standard DevOps processes within the organization, to mature models to release. This slows MLOps processes down, compromises security, and increases the cost of building AI-powered applications,” said Gal Marder, Executive Vice President of Strategy, JFrog. “The combination of the JFrog Platform – with Artifactory and Xray at its core – plus Qwak provides users with a complete MLSecOps solution that brings ML models in line with other software development processes, creating a single source of truth for all software components across Engineering, MLOps, DevOps, and DevSecOps teams so they can build and release AI applications faster, with minimal risk and less cost.”

Uniting JFrog Artifactory and Xray with Qwak’s ML Platform brings ML apps alongside all other software development components in a modern DevSecOps and MLOps workflow, enabling data scientists, ML engineers, developers, security, and DevOps teams to build ML apps quickly, securely, and in compliance with all regulatory guidelines. The native Artifactory integration connects JFrog’s universal ML model registry with a centralized MLOps platform so users can easily build, train, and deploy models with greater visibility, governance, versioning, and security. Using a centralized platform for ML model deployment also allows users to focus less on infrastructure and more on their core data science tasks.

IDC research indicates that while AI/ML adoption is on the rise, the cost of implementing and training models, shortage of trained talent, and absence of solidified software development life-cycle processes for AI/ML are among the top three inhibitors to realizing the full benefits of AI/ML at scale.[1]

"Building ML pipelines can be complicated, time-consuming, and costly for organizations looking to scale their MLOps capabilities. These homegrown solutions are not equipped to manage and protect the process of building, training, and tuning ML models at scale, with little to no auditability," said Jim Mercer, Program Vice President, Software Development, DevOps, and DevSecOps, IDC. "Having a single system of record that can help automate the development and security of ML models, while providing a documented chain of provenance alongside all other software components, offers a compelling alternative for optimizing the ML process while injecting more model security and compliance.”

Without the right infrastructure, platform, and processes needed for ML operations (MLOps), it’s challenging to build, manage, and scale complex ML infrastructure, deploy models quickly, and secure them without incurring excessive costs. Companies often struggle to manage this infrastructure complexity, which leads to expensive and time-consuming authentication and security protocols between various development environments.

“AI and ML have recently transformed from being a distant future prospect to a ubiquitous reality. Building ML models is a complex and time-intensive process, which is why many data scientists are still struggling to turn their ideas into production-ready models,” said Alon Lev, CEO, Qwak. “While there are plenty of open-source tools on the market, putting all of those together to build a comprehensive ML pipeline isn’t easy, which is why we’re thrilled to work with JFrog on a solution for automating ML artifacts and releases in the same, secure way customers manage their software supply chain with JFrog Artifactory and Xray.”

The importance of secure, end-to-end MLOps processes was further underscored by the JFrog Security Research team’s discovery of malicious ML models in Hugging Face, a widely used AI model repository. Their research found that several malicious ML models hosted on Hugging Face could enable code execution by threat actors, potentially leading to data breaches, system compromise, or other malicious actions.

For a deeper look at the integration between the JFrog Platform and Qwak and how it works, read this blog or view this video. You can also register to join JFrog and Qwak for an informative webinar detailing best practices for introducing ML model use and development into secure software supply chain and development processes, on Tuesday, April 2, 2024 at 9 a.m. PST/5 p.m. UTC.

 

JFrog ML Applications