Machine learning and its life cycle differ from traditional programming and the software development life cycle. Because ML is newer and different, we do not yet understand it as well as traditional software.
Before embarking on building ML applications, studying why ML projects fail can improve the chances of success. Consider these findings:
Jan 2019: Gartner predicted that through 2020, 80% of AI projects will remain alchemy, run by wizards, and that through 2022, only 20% of analytic insights will deliver business outcomes.
May 2019: A Dimensional Research survey for Alegion reported that 78% of AI/ML projects stall at some stage before deployment, and 81% of respondents admit that training AI with data is more difficult than they expected.
July 2019: VentureBeat reported 87% of data science projects never make it into production.
July 2019: International Data Corporation (IDC) survey found that a quarter of organizations reported up to 50% AI project failure rate.
Failure is inevitable when attempting anything new and hard, but these rates are alarmingly high; so high, in fact, that some skepticism is reasonable.
I can’t help but recall similar claims about software development in the 1990s. The Standish Group’s CHAOS Report claimed in 1994 that only 16% of software projects succeeded. Such a high failure rate was doubted, and over the years the CHAOS Report has become more nuanced. Still, the 2015 report classified only 36% of projects as successful, 45% as challenged, and 19% as failed.
Though the exact percentages were disputed, there was broad agreement that the software project failure rate was unacceptably high. That consensus drove the evolution of product management and development processes to improve the success rate.
High ML Project Failure Rate is Real
Even if the ML project failure rate is not ~80% (failing fast at the Proof of Concept stage, for example, is a good outcome), the “real” failure rate is quite likely still very high.
This story repeats far too often:
A data scientist is brought in to do ML on the data being collected.
She discovers that the data is unusable. Either nobody knows what it is, or it is incomplete and unreliable.
Somehow she manages to clean the data, experiment, and build a model. It all lives in her Jupyter notebook.
Management considers it done and ready to deploy, only to learn that significant work is needed to take it to production.
Disappointed management says, “okay, do it.” The data scientist replies, “I can’t; engineers have to do it.” And the engineers go, “who, me? this math?”
Nobody yet realizes that it is NOT done even after deployment. The model must be monitored for data drift and retrained.
Nobody is happy in the end. Management thinks ML is a hoax. The data scientist thinks management doesn’t get it.
ML project failures can happen due to:
Lack of ownership: waterfall-like “thrown over the wall” handoffs between teams
Poor problem formulation: solving the wrong problem, optimizing wrong metrics
Data access, insufficiency, quality, collection, and curation issues
Infeasibility or cost of deploying a model
Lack of model monitoring and maintenance
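The last cause, missing monitoring, is often the least understood. As a minimal sketch of what monitoring for data drift can mean (the function name and threshold are my own; production systems typically use per-feature statistical tests such as KS or PSI, often via a dedicated tool), the check below flags a feature whose mean in recent production traffic deviates from the training baseline by more than a few standard errors:

```python
import statistics


def drift_detected(baseline: list[float], current: list[float],
                   threshold: float = 3.0) -> bool:
    """Flag drift when the current mean deviates from the baseline mean
    by more than `threshold` standard errors (a crude z-test on the mean)."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    if base_std == 0:
        # Constant baseline feature: any change at all counts as drift.
        return statistics.mean(current) != base_mean
    std_err = base_std / (len(current) ** 0.5)
    z = abs(statistics.mean(current) - base_mean) / std_err
    return z > threshold
```

Running such a check per feature on a schedule, and retraining when it fires, is the kind of post-deployment work that the story above shows nobody planning for.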
How to Improve Success Rate
We can apply lessons learned in software development:
Consolidate Ownership: Cross-functional team responsible for the end-to-end project.
Integrate Early: Implement a simple (rule-based) model and develop product features around it.
Iterate Often: Build better models and replace the simple model, monitor, and repeat.
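The "integrate early, iterate often" loop hinges on putting the simple model behind the same interface that later, learned models will implement, so the product never has to change when the model does. A minimal sketch of that pattern (the churn use case, class names, rule, and weights are all hypothetical):

```python
from typing import Protocol


class ChurnModel(Protocol):
    """Interface the product integrates against; every iteration satisfies it."""
    def predict(self, customer: dict) -> bool: ...


class RuleBasedChurnModel:
    """Iteration 1: a hand-written heuristic, good enough to ship the feature."""
    def predict(self, customer: dict) -> bool:
        # Hypothetical rule: monthly-plan customers inactive for 30+ days.
        return customer["days_inactive"] >= 30 and customer["plan"] == "monthly"


class TrainedChurnModel:
    """Iteration 2+: a learned model drops in behind the same interface."""
    def __init__(self, weights: dict, threshold: float = 0.5):
        self.weights = weights
        self.threshold = threshold

    def predict(self, customer: dict) -> bool:
        # Linear score over numeric features; weights come from training.
        score = sum(w * customer.get(name, 0.0) for name, w in self.weights.items())
        return score >= self.threshold


def churn_alert(model: ChurnModel, customer: dict) -> str:
    """Product code depends only on the interface, so models can be swapped."""
    return "send retention offer" if model.predict(customer) else "no action"
```

Because `churn_alert` is written against the `ChurnModel` protocol, replacing the rule-based baseline with a trained model is a one-line change at the call site, which is exactly what makes frequent iteration cheap.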
Additionally, since data is the foundation for Machine Learning:
Begin with data collection: implement a data strategy
Solve simple problems first: walk before you run
This is beautifully captured in The Data Science / Machine Learning / AI Hierarchy of Needs:
Source: The AI Hierarchy of Needs by Monica Rogati.
With this context, we will follow the Data Science Hierarchy of Needs in future issues:
Identifying problems suitable for Machine Learning
Machine Learning product design
Data collection instrumentation and pipelines
Common machine learning models
Model training and selection
Deployment and monitoring
ML4Devs Newsletter - Issue 08, published on 7 July 2022.