“What do you do for a living?” is often the first question I ask people I meet.
For a good reason.
What you do for a living often strongly correlates with how you think and how you approach problems.
Not so long ago, a dog bit me. Fortunately, nothing bad happened, just a small bite wound and a big bruise. As I told friends and family about the incident, I got different reactions, and they matched quite well with their professions:
Medical field → How did you treat the wound?
HR → How did you work it out with the dog owners?
Law enforcement → Have you considered a lawsuit?
Their reactions match well with this famous quote by Abraham Maslow:
If the only tool you have is a hammer, it is tempting to treat everything as if it were a nail.
The quote usually comes with a negative interpretation: you're a fool, because the next problem will be a screw and you'll still reach for the hammer.
But there's a positive side: having a hammer lets you become great at detecting nails, and even at transforming problems into nail-shaped ones.
What if the hammer is supervised machine learning?
Seeing predictions everywhere
I wrote in Modeling Mindsets about supervised learning:
Modeling requires translation into a prediction problem, strict evaluation with ground truth, and optimization.
The change for me: I now see many, many problems as prediction or learning problems.
Prediction means inferring unavailable data from available data.
This definition is very broad and covers many common tasks:
Time series forecasting: The past is available, but the future is not.
Image classification: The pixels are available, but the label is not.
Text-to-text generation: The prompt is available, but the answer is not.
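In code, the shared shape of these examples looks something like this rough sketch (the task descriptions are just placeholders for illustration, not a real API):

```python
# Rough sketch: each task is just a different choice of what counts as
# "available" data and what counts as "unavailable" data to be inferred.
tasks = {
    # task                      (available,      unavailable)
    "time series forecasting": ("past values",   "future values"),
    "image classification":    ("pixel values",  "class label"),
    "text-to-text generation": ("prompt text",   "answer text"),
}

for task, (available, unavailable) in tasks.items():
    print(f"{task}: infer {unavailable} from {available}")
```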
The prediction definition matches many situations and shapes how I see the world.
For example, I have become critical of TV pundits who make predictions with high confidence, usually with few repercussions when those predictions turn out to be wrong. Often you can't evaluate their predictions anyway because they were too vague in the first place.
In general, I view professions more favorably when they have a feedback loop: does the person who makes a decision (based on a prediction) actually see the ground truth later on? For example, I have more trust in surgeons who follow up with their patients and therefore see the results of their decision to operate.
When I read about analyses or theories, I ask myself: if they are true, what do they predict?
But my training in supervised ML also changed me in other ways:
For fun, I make predictions about what will happen next in movies and TV series.
To improve my palate and cooking skills, I try to guess the ingredients of dishes made by others.
How has your worldview changed with your background in machine learning?
In other news
The paperback of Interpreting Machine Learning Models with SHAP is almost done. I already got the author's proof, fixed some issues, and am now waiting for the updated proof. If the new proof is good to go, I’ll hit publish. Hopefully next week 🤞