Sparse modelling with small datasets

Takashi Someda, CTO at Hacarus, on the advantages of sparse modelling AI tools

With the recent rise in interest in artificial intelligence for computer vision applications, a lot of attention has been given to the potential benefits that AI can bring – promises of more accurate quality control inspection with fewer false alarms and lower cost.

However, when deployed, these goals often aren’t met – in fact, 85 per cent of AI projects fail. Several factors lie behind these numbers: most efforts to deploy AI to date assume that large amounts of data are available to train the model, ignore the importance of being able to explain how the algorithm arrived at its conclusion, and lack consideration for compute resource requirements.

Often, in today’s complex manufacturing processes, there aren’t enough examples of defects to create an accurate model. Furthermore, and perhaps more importantly, commonly used methods such as deep learning are black boxes, able to provide results but not to show how a conclusion was reached. Without context for its classifications or decisions, such a system offers little guidance on how to improve production processes to reduce defects and errors.

Does this mean that we should abandon AI for visual inspection? No! Luckily, there are other approaches that can be used to reap the benefits of AI. One of these is sparse modelling, a technique that remedies some of the issues inherent in deep learning.

Essentially, sparse modelling is a data modelling approach that focuses on identifying the unique characteristics of a dataset. A relatively recent example is its use in creating the first ever image of a black hole, published by researchers on the Event Horizon Telescope (EHT) project. Sparse modelling, however, is not new: it has been an active area of academic research for more than 20 years, and has steadily grown as a tool in the statistical analysis and machine learning community.

Unlike deep learning, there is no single concrete algorithm, so the exact origins are difficult to pinpoint. However, Robert Tibshirani’s 1996 paper ‘Regression Shrinkage and Selection via the Lasso’ can be considered an early work using the technique. Sparse modelling can be seen as a data analysis toolbox that aims to understand and isolate the factors that have a meaningful impact on a given outcome.
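The Lasso illustrates the core idea: its L1 penalty drives most model coefficients to exactly zero, leaving only the features that genuinely matter. A minimal sketch using scikit-learn on synthetic data – the dataset, features and regularisation strength here are illustrative assumptions, not taken from the article:

```python
# Lasso regression on a small synthetic dataset: only two of the
# twenty features actually influence the target, and the L1 penalty
# shrinks the irrelevant coefficients to exactly zero.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))        # small dataset: 50 samples, 20 features
true_coef = np.zeros(20)
true_coef[[2, 7]] = [3.0, -2.0]      # only features 2 and 7 matter
y = X @ true_coef + 0.1 * rng.normal(size=50)

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)   # indices of non-zero coefficients
print(selected)
```

Because the fitted model keeps only a handful of non-zero coefficients, inspecting `selected` shows directly which factors drove the prediction – this is the interpretability the article contrasts with deep learning.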

Sparse modelling-based inspection

When applying sparse modelling to visual inspection there are three main steps. The first is pre-processing the data, where it is harmonised using techniques such as patching, cropping and sampling. This is followed by feature extraction, which depends on the dataset. Feature classification is the final step; for well-designed features, even standard classification methods such as tree-based models can work well. Sparse modelling works with small datasets, and can provide accurate models with as few as 50 images.
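The three steps can be sketched as follows. Everything here – the synthetic image generator, the patch size, the hand-designed statistics and the choice of a random forest classifier – is a hypothetical illustration of the pipeline shape, not Hacarus’s actual implementation:

```python
# Sketch of a three-step inspection pipeline on synthetic "images":
# 1) pre-processing (patching), 2) feature extraction, 3) classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

def make_image(defective):
    """Synthetic 64x64 grey image; defective ones get a bright blob."""
    img = rng.normal(0.5, 0.05, size=(64, 64))
    if defective:
        r, c = rng.integers(8, 48, size=2)
        img[r:r + 8, c:c + 8] += 0.5
    return img

# Step 1: pre-processing -- crop each image into fixed-size patches
def patches(img, size=16):
    return [img[r:r + size, c:c + size]
            for r in range(0, 64, size) for c in range(0, 64, size)]

# Step 2: feature extraction -- a few hand-designed statistics per patch
def features(img):
    return np.array([stat for p in patches(img)
                     for stat in (p.mean(), p.std(), p.max())])

# Small dataset: 50 labelled images, as the article suggests can suffice
labels = rng.integers(0, 2, size=50)
X = np.stack([features(make_image(bool(y))) for y in labels])

# Step 3: classification with a standard tree-based model
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.score(X, labels))
```

Because each feature is a named statistic of a known patch, a flagged image can be traced back to the patch and statistic responsible – unlike a deep network’s opaque activations.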

The benefits of this approach are inherent in its design: because the model is built from a small set of key features abstracted from the input, the user can see how the technique arrived at its conclusion. What’s more, since few features are involved, the compute power needed is far lower than for deep learning. Sparse models can therefore be deployed even on edge devices, removing the need for external cloud connectivity for processing.

Hacarus, founded in Kyoto in 2014, is an AI start-up applying sparse modelling technology. Takashi Someda spoke at the Embedded Vision Europe event in Stuttgart, Germany, in October.
