Tony McGrail: Caveat Emptor! (Let the buyer beware…)

It is not enough for an algorithm to do some cute analysis – it also has to do no harm.

The phrase “caveat emptor” is an old Latin aphorism meaning “Let the buyer beware.” But how can you guarantee that what you are buying will do what it is supposed to do? And if it does not perform as expected, who is at fault? Machine Learning and Artificial Intelligence (ML/AI) systems often make great claims about the effectiveness and capabilities of algorithms and applications – from self-driving cars to investment analyses. But we need to look beyond the hype and the successes to set and measure expectations.

As journalist John Naughton noted in an Observer article on September 6th, imagine if tech companies were held to the same standards as pharmaceutical companies. Today, by contrast, a tech product can be dreamed up, shown to produce some dramatic results in trials and then launched onto the market. Who would accept a COVID-19 vaccine that prevents the disease but may cause harm in other ways?

For ML/AI there is an instructive analogy in the history of food and drug regulation in the United States. The Pure Food and Drug Act of 1906 was enacted to ensure that product ingredients were of adequate quality, but the later Federal Food, Drug, and Cosmetic Act of 1938 went further, requiring that safety be demonstrated before products could be sold. With ML/AI systems we are in a similar position to 1906: while we may be able to show that the algorithms are well constructed and offer possible benefits, can we show that they do no harm?

Applying advanced ML/AI systems requires care and forethought – what are the expectations? After all, we are using historical data to try to predict the future. Does this even make sense when more than 90% of the benefits of ML can be extracted through data cleansing and standard statistical tools? Any ML/AI algorithm could act in unexpected ways – and if the algorithms are not clear and transparent, the reasons for anomalous performance may be impossible to identify and correct. Doble’s Asset Health Index system, for example, relies on standards and guidelines for diagnostics and failure mode identification, with benchmarking against millions of test results to identify true anomalies and outliers.

To quote Warren Buffett from 2009, discussing models used in the financial crisis:

“Constructed by a nerdy-sounding priesthood using esoteric terms such as beta, gamma, sigma and the like, these models tend to look impressive. Too often, though, investors forget to examine the assumptions behind the models. Beware of geeks bearing formulas.”

When considering ML/AI systems: set expectations, look for transparency and logic, ensure that safety is demonstrable, and be skeptical of grand claims. Remember that Latin phrase, “caveat emptor.”

Dr. Tony McGrail, Doble Engineering Company

Dr. McGrail provides condition, criticality and risk analysis for substation owner/operators. Previously, Dr. McGrail spent over 10 years with National Grid in the UK and the US as a substation equipment specialist, with a focus on power transformers, circuit breakers and integrated condition monitoring, and has also taken on the role of substation asset manager, identifying risks and opportunities for investment in an ageing infrastructure. Dr. McGrail is a Fellow of the IET, past-Chairman of the IET Council, a member of the IEEE, ASTM, ISO, CIGRE and the IAM, and a contributor to SFRA and other standards.