While focused on medical devices, this proposal is an interesting read for folks interested in applying machine learning and artificial intelligence to other regulated areas, such as manufacturing.
We are seeing the early stages of consensus building around the concept of Good Machine Learning Practices (GMLP): the idea of applying quality system practices to the unique challenges of machine learning.
My day 2 at WCQI is Day 1 of the conference proper. I’m going to try to live blog.
Today’s morning keynote is the same futurist as at the LSS conference in Phoenix last month, Patrick Schwerdtfeger, and not only was I dismayed that it was the exact same talk, I was reminded yet again how much I dislike futurists. I’m all for thinking about the future, but futurists seem to be particularly bad at it. It is all woo-woo and bro-slapping, and never a serious consideration of the impact of technology. Futurists are grifters.
These grifters profit by obscuring facts for personal gain.
They are working an angle, all of them: the health gurus and the life hackers
peddling easy solutions to difficult problems, the futurists who basically
state current trends as revelations. They are all trying to pull off the ultimate
con – persuading people they really matter.
They are selling themselves: their books, their podcasts,
their websites, their supplements, their claims to some secret knowledge about
how the world works. But I fundamentally doubt that anyone who gave 40 talks in
the last year has the bandwidth to do anything that really matters. It is all
snake-oil. And as quality professionals, individuals who are dedicated to
process and transparency and continuous improvement, we deserve better.
I’m not sure how these keynotes are selected, but I think we need to think holistically about what we want to be as a society and the pillars we want for our conferences.
Anyone know a good article that compares futurists and life hackers with the prosperity gospel? They seem to be coming from a similar place in the American psyche.
Ooh, artificial intelligence. Don’t get me wrong, there is real potential (maybe not the potential people feel like there is), but most discussion of artificial intelligence is hype and bluster, and this presentation is no different. Autonomous vehicles, blockchains. Hype and bluster.
There are definitely people thinking about this seriously, offering real insights and tools. We’re just not there. The speaker admits he gets 90% of his income from speaking. Pretty sure he isn’t actually doing that much. He might be an aggregator (most of his slides with real content were attributed), but I keep struggling to see value here.
Gratuitous Steve Jobs picture.
After this there is some white space for vendor stuff.
At least I could multi-task and did some work.
Quality 4.0 Talks
I didn’t attend many of these last year because they were all standing-room only and had a thrown-together feeling. This year the ASQ seems to have upped its game. Shorter sessions can be good if the presentation is tight; at 15 minutes, that is a hard bar to clear.
First up we have Nicole Radziwill on “Mapping Quality Problems to Machine Learning Solutions.” Nicole’s very active in the software section, which under the new membership model I’ll start paying a lot more attention to.
In this short presentation Nicole hits the high points of her 2018 Quality Progress article, framing Quality 4.0’s path from Taylorism as “discovery & learning.”
Hit on big-data hubris, the importance of statistics and analysis, and the importance of defining models before we use them.
Covers machine learning problem types at a high level, such as pattern identification (clustering).
Her high-level recommendations were domain expertise, statistical expertise, data quality, and awareness of human bias.
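To make “pattern identification (clustering)” concrete, here is a minimal k-means sketch in plain Python. The measurements, the choice of k, and the deterministic initialization are all invented for illustration; a real analysis would use a proper library and the domain and statistical expertise Nicole recommends.

```python
from statistics import mean

# Toy sketch of clustering as pattern identification: group process
# measurements (e.g., fill weight vs. seal strength) to surface patterns.
def kmeans(points, k, iterations=20):
    centers = points[:k]  # deterministic init: first k points (toy choice)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2
                                                + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [(mean(p[0] for p in c), mean(p[1] for p in c)) if c
                   else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

# Two visible groups: in-spec runs near (10, 5), a drifted batch near (14, 9).
data = [(10.1, 5.0), (9.9, 5.2), (10.0, 4.8),
        (14.2, 9.1), (13.8, 8.9), (14.0, 9.0)]
centers, clusters = kmeans(data, k=2)
```

On this toy data the algorithm separates the in-spec runs from the drifted batch, which is exactly the kind of pattern you would then investigate with domain knowledge.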
Next up is Beverly Daniels on “Risk and Industry 4.0.”
It is telling that so many of the speakers make Millennial jokes. As a Gen-Xer I am both annoyed, because no one ever makes Gen-X jokes (the Boomers never even noticed we were at the conference), AND frustrated, because this says something telling about the graying of the ASQ.
Quick review of risk as more than probability. Hit on human beings as eternally optimistic and thus horrible at estimating probability. As this was a quick talk it left more questions than answers. Removing probability from risk is something I need to think about more, but her points align with my thoughts on uncertainty.
Focusing on impact and mitigation is interesting. I liked the line “All it does is make your management feel good about not making a decision.”
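One way to picture scoring risk on impact and mitigation while deliberately leaving probability out — a sketch of the idea, not anything from the talk. The scales, the scoring formula, and the risk names are all invented for illustration.

```python
# Sketch: rank risks by impact and by how well they are already mitigated,
# with probability deliberately left out of the score.
RISKS = [
    # (name, impact 1-5, mitigation strength 1-5; higher = better controlled)
    ("mislabeled product reaches patient", 5, 1),
    ("batch record typo", 1, 4),
    ("sensor drift goes undetected", 4, 1),
]

def priority(impact, mitigation):
    # Worst-case impact, discounted by how well it is already controlled.
    return impact * (6 - mitigation)

ranked = sorted(RISKS, key=lambda r: priority(r[1], r[2]), reverse=True)
```

The point of the exercise: the ranking forces a decision about controls rather than an argument about likelihoods, which is where Daniels’ “feel good about not making a decision” line bites.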
The future is now. Industry 4.0 probably means you have algorithms in your process. For example, if you aren’t using algorithms to analyze deviations, you probably soon will be.
And with those algorithms come a whole host of questions on how to validate them and how to ensure they work properly over time. The FDA has indicated that “we want to get an understanding of your general idea for model maintenance,” and it also wants to know the “trigger” for updating the model, the criteria for recalibration, and the level of validation of the model.
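A model-maintenance “trigger” of the kind the FDA is asking about could be as simple as watching recent performance and flagging recalibration when it degrades. This is a minimal sketch; the class name, window size, and tolerance are invented placeholders, not regulatory values.

```python
from collections import deque
from statistics import mean

class RecalibrationTrigger:
    """Flag recalibration when rolling accuracy drops below baseline."""

    def __init__(self, baseline_accuracy, window=50, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.window = deque(maxlen=window)   # rolling record of outcomes
        self.tolerance = tolerance

    def record(self, prediction_correct):
        """Log one outcome; return True when recalibration is triggered."""
        self.window.append(1.0 if prediction_correct else 0.0)
        if len(self.window) < self.window.maxlen:
            return False  # not enough evidence yet
        return mean(self.window) < self.baseline - self.tolerance

trigger = RecalibrationTrigger(baseline_accuracy=0.95, window=10)
```

The value of writing the trigger down explicitly is that it becomes an auditable part of the quality system rather than an ad hoc judgment call.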
Kate Crawford at Microsoft speaks about “data fundamentalism” – the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. It shouldn’t take much to see why this trap can produce some very bad decision making. Our algorithms have biases, just as human beings have biases. They are dependent on the data models used to build and refine them.
Based on reported FDA thinking, and given where European regulators are in other areas, it is very clear we need to be able to explain and justify our algorithmic decisions. Machine learning is here now and will only grow more important.
Ask an Interesting Question
The first step is to be very clear on why there is a need for this system and what problem it is trying to solve. Alignment across all the stakeholders is key to ensuring the entire team is working toward the same purpose. Here we start building a framework.
Get the Data
The solution will only be as good as what it learns from. As the saying goes, “garbage in, garbage out”: the problem is usually not the machine learning tool itself, it lies in how the tool has been trained and what data it is learning from.
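“Garbage in, garbage out” can be made concrete with up-front data checks before anything is trained. A minimal sketch — the field names and plausibility limits are invented for illustration, not from any standard.

```python
# Sketch: reject or flag records before they ever reach the training set.
def check_record(record):
    """Return a list of data-quality problems for one record."""
    problems = []
    weight = record.get("fill_weight_g")
    if weight is None:
        problems.append("missing fill_weight_g")
    elif not (0 < weight < 1000):
        problems.append("fill_weight_g out of plausible range")
    if record.get("batch_id", "") == "":
        problems.append("missing batch_id")
    return problems

clean = {"batch_id": "B-001", "fill_weight_g": 101.3}
dirty = {"batch_id": "", "fill_weight_g": -4.0}
```

Checks like these are cheap, and they turn “data quality” from an aspiration into a gate the pipeline actually enforces.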
Explore the Data
Look at the raw data. Look at data summary. Visualize the data. Do it all again a different way. Notice things. Do it again. Probably get more data. Design experiments with the data.
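The look-summarize-look-again loop above might start something like this sketch, using only the standard library. The measurement series is invented; the point is the habit of noticing, not the numbers.

```python
from statistics import mean, median, stdev

# Toy measurement series with one value that sticks out.
data = [10.1, 9.9, 10.0, 10.2, 9.8, 14.9]

# Look at a data summary.
summary = {"n": len(data), "mean": mean(data),
           "median": median(data), "stdev": stdev(data)}

# Notice things: the mean sits well above the median, so go back to the
# raw data. A robust check (median absolute deviation) flags the stray value.
med = median(data)
mad = median(abs(x - med) for x in data)
outliers = [x for x in data if abs(x - med) > 5 * mad]
```

The robust check matters here: with an outlier this large, a naive 3-sigma rule on the raw standard deviation would miss it, because the outlier inflates the standard deviation itself.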
Model the Data
The only true way to validate a model is to observe, iterate, and audit. If we take a traditional computer system validation (CSV) model to machine learning, we are in for a lot of hurt. We need to take the framework we built and validate to it, and ensure there are mechanisms to observe against this framework and audit performance over time.
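The observe-and-audit mechanism can be as simple as logging every model decision with its inputs and model version, so performance can later be audited against the validation framework. A sketch under invented names — the log structure, field names, and toy model are all illustrative.

```python
import datetime

# Sketch: an append-only audit trail of model decisions.
AUDIT_LOG = []

def predict_and_log(model_version, inputs, predict_fn):
    """Run the model and record inputs, output, and version for audit."""
    prediction = predict_fn(inputs)
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "prediction": prediction,
    })
    return prediction

# Toy "model": classify a deviation as major above an invented threshold.
flag = predict_and_log("v1.2", {"severity_score": 7},
                       lambda x: "major" if x["severity_score"] >= 5 else "minor")
```

With the version recorded on every decision, a periodic audit can compare each model version's observed performance against the framework it was validated to — which is the ongoing oversight a static CSV approach never provides.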