Helping companies apply AI models more responsibly | MIT News

Today, companies are incorporating artificial intelligence into every corner of their business. The trend is expected to continue until machine learning models are embedded in most of the products and services we interact with every day.

As these models become a larger part of our lives, ensuring their integrity becomes more important. That’s the mission of Verta, a startup that grew out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Verta’s platform helps companies deploy, monitor and manage machine learning models securely and at scale. Data scientists and engineers can use Verta’s tools to track different versions of models, check them for bias, test them before deployment, and monitor their performance in the real world.

“Everything we do is to enable more products to be built with AI, and to do it safely,” says Verta founder and CEO Manasi Vartak SM ’14, PhD ’18. “We’re already seeing with ChatGPT how AI can be used to create data and artifacts that look right but aren’t. There needs to be more governance and control over how AI is used, especially for enterprises providing AI solutions.”

Verta currently works with major healthcare, finance and insurance companies to help them understand and audit their model recommendations and predictions. It also works with a number of high-growth technology companies that want to accelerate the deployment of new, AI-driven solutions while ensuring the appropriate use of those solutions.

Vartak says the company has been able to reduce the time it takes customers to deploy AI models by orders of magnitude, while ensuring those models are explainable and fair, which is especially important for companies in highly regulated industries.

Healthcare companies, for example, can use Verta to improve AI-powered patient monitoring and treatment recommendations. Such systems should be thoroughly checked for errors and biases before they are used on patients.

“Whether it’s bias or fairness or explainability, it’s about our philosophy of model management and governance,” Vartak says. “We think of it as a pre-flight checklist. Before the plane takes off, there are a number of checks you need to make before you get your plane off the ground. It’s the same with AI models. You’ve got to make sure you’ve done your bias checks, you’ve got to make sure there’s some level of explainability, you’ve got to make sure your model is reproducible. We help with all that.”
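The “pre-flight checklist” idea can be sketched in a few lines of code. This is a hypothetical illustration of the concept, not Verta’s actual API; the field names (`bias_report`, `explainability_report`, `training_data_hash`) are made up for the example.

```python
# A minimal sketch of a pre-flight checklist for a model release.
# All names here are hypothetical illustrations, not Verta's actual API.

def preflight_checks(model_card: dict) -> list:
    """Return a list of failed checks; an empty list means cleared for deployment."""
    failures = []
    if not model_card.get("bias_report"):
        failures.append("missing bias check")
    if not model_card.get("explainability_report"):
        failures.append("missing explainability report")
    if not model_card.get("training_data_hash"):
        failures.append("model is not reproducible (no data fingerprint)")
    return failures

card = {"bias_report": "bias_v1.html", "training_data_hash": "sha256:ab12..."}
print(preflight_checks(card))  # -> ['missing explainability report']
```

As with an aircraft checklist, the point is that deployment is blocked until every item passes, rather than relying on each data scientist to remember the checks.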

From project to product

Before coming to MIT, Vartak worked as a data scientist at a social media company. On one project, after spending weeks setting up machine learning models that processed content for people’s feeds, she learned that a former employee had already done the same work. Unfortunately, there was no record of what they did or how it affected the models.

For her PhD at MIT, Vartak set out to build tools to help data scientists develop, test, and iterate on machine learning models. Working in CSAIL’s database group, Vartak recruited a team of graduate students and participants in MIT’s Undergraduate Research Opportunities Program (UROP).

“Verta would not exist without my work at MIT and the MIT ecosystem,” says Vartak. “MIT brings together people at the forefront of technology and helps us build the next generation of tools.”

The team worked with data scientists from the CSAIL Alliances program to determine what features to build and iterate on based on feedback from those early adopters. Vartak says the resulting project, called ModelDB, was the first open source model management system.

Vartak also took several business classes at the MIT Sloan School of Management during her PhD and worked with classmates on startups offering apparel and health tracking, spending countless hours at the Martin Trust Center for MIT Entrepreneurship and participating in the center’s delta v summer accelerator.

“What MIT allows you to do is take risks and fail in a safe environment,” Vartak says. “MIT allowed me to have those entrepreneurial forays and showed me how to go about building products and finding the first customers, so when Verta came along, I was doing it on a smaller scale.”

ModelDB helped data scientists train and track models, but Vartak quickly saw that the stakes were higher when the models were run at scale. At that point, trying to improve (or accidentally break) the models could have huge implications for companies and society. That insight led Vartak to start building Verta.

“At Verta, we help manage the models, we help launch the models and make sure they’re working as expected, which we call model monitoring,” explains Vartak. “All of those pieces trace their roots back to MIT and my thesis work. Verta really evolved from my PhD project at MIT.”

Verta’s platform helps companies deploy models faster, ensure they continue to work as intended over time, and manage models for compliance and governance. Data scientists can use Verta to trace different versions of models and understand how they were built, answering questions such as how the data was used and what explainability or bias checks were performed. They can also verify them by running them through deployment checklists and security scans.
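The version-tracing idea described above can be illustrated with a toy registry that records who built each model, what data it saw, and which checks were run. This is an assumption-laden sketch of the general technique, not Verta’s actual client library; the class and field names are invented for the example.

```python
# Hypothetical sketch of a model registry with audit metadata.
# Illustrates version tracing in general, not Verta's actual client.
import hashlib
import time

class ModelRegistry:
    def __init__(self):
        self._versions = []

    def register(self, name: str, weights: bytes, metadata: dict) -> dict:
        """Record a new model version with a fingerprint and audit metadata."""
        entry = {
            "name": name,
            "version": len(self._versions) + 1,
            "weights_sha256": hashlib.sha256(weights).hexdigest(),
            "metadata": metadata,  # e.g. data sources, bias checks performed
            "registered_at": time.time(),
        }
        self._versions.append(entry)
        return entry

    def history(self, name: str) -> list:
        """Return all recorded versions of a model, oldest first."""
        return [v for v in self._versions if v["name"] == name]

registry = ModelRegistry()
registry.register("churn-model", b"\x00\x01", {"data": "2023-Q1", "bias_check": True})
registry.register("churn-model", b"\x00\x02", {"data": "2023-Q2", "bias_check": True})
print([v["version"] for v in registry.history("churn-model")])  # -> [1, 2]
```

Hashing the weights gives each version an immutable fingerprint, so an auditor can later verify exactly which artifact was deployed and what metadata accompanied it.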

“Verta’s platform takes a data science model and adds half a dozen layers on top of it to turn it into something you can use for, say, an entire recommendation system on your website,” Vartak says. “That includes performance optimization, scaling and cycle time, which is how quickly you can take a model and turn it into a valuable product, as well as management.”

Supporting the AI pipeline

Vartak says large companies often use thousands of different models that affect almost every part of their operations.

“An insurance company, for example, will use models for everything from underwriting to claims to back-office processing to marketing and sales,” says Vartak. “So the variety of models is really big, the volume of them is big, and the level of scrutiny and compliance companies need around those models is very high. They need to know things like: Did you use the data you should have used? Who were the people who checked it? Have you run explainability checks? Have you done bias checks?”

Vartak says companies that don’t embrace AI will be left behind. Meanwhile, companies that deploy AI successfully will need well-defined processes to manage their ever-growing roster of models.

“In the next 10 years, every device we interact with will have artificial intelligence, whether it’s your toaster or your email programs, and it’s going to make your life much, much easier,” Vartak says. “What will enable that intelligence are better models and software like Verta to help you integrate AI into all these applications very quickly.”
