Underspecification in machine-learning models

A group of 40 Google researchers has identified a major cause of a common failure mode in machine-learning models. It’s called “underspecification,” and it means we can’t tell whether the machine-learning models we use today will actually work in the real world. That’s a real problem.

The researchers looked at a range of AI applications, from image recognition to natural language processing (NLP) to disease prediction. They found underspecification in all of them: models that performed equally well in training and testing could behave very differently once deployed. The problem lies in the way machine-learning models are trained and tested, and there’s no easy fix.
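A toy sketch (ours, not from the paper) shows the core idea with NumPy: when a problem is underdetermined, many models fit the training data equally well yet disagree on new inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined problem: more features than training examples,
# so infinitely many weight vectors fit the training data exactly.
n_samples, n_features = 5, 10
X = rng.normal(size=(n_samples, n_features))
y = X @ rng.normal(size=n_features)

# Model 1: the minimum-norm solution via the pseudoinverse.
w1 = np.linalg.pinv(X) @ y

# Model 2: the same solution shifted along a direction in the
# null space of X (X @ null_dir is ~0), so it fits the training
# data just as well.
null_dir = np.linalg.svd(X)[2][-1]
w2 = w1 + 3.0 * null_dir

# Both models match the training labels...
print(np.allclose(X @ w1, y), np.allclose(X @ w2, y))  # True True

# ...but on an input aligned with the null direction, their
# predictions diverge by 3.0 (since null_dir has unit norm).
gap = float(null_dir @ w2 - null_dir @ w1)
print(gap)
```

Training and test metrics alone cannot distinguish `w1` from `w2`; only data from outside the training distribution reveals the difference, which is why the article calls for much more stress testing.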

We need to be doing a lot more testing, but that won’t be easy.

Read the full article here:

If you are as excited as we are about disrupting the M&A insurance market, follow our story!

The Seal Deal Team
