Machine Learning Needs to Start Unlearning, For Our Own Good

March 9, 2017

Last week wasn’t a good one, and over the weekend I wanted to watch a comedy – one of those mind-numbing movies that generates mindless laughter. I opened Netflix (we don’t have cable, just Netflix and Amazon Prime) and started looking for such a movie. After about half an hour of frustration, my wife recommended a title after searching on her smartphone for “good comedies on Netflix”.

 

Five minutes of additional frustration later, we found a movie that fit the mind-numbing criterion and made us laugh about five times in 90 minutes.

 

 

Naturally, you might wonder why it was so hard for me to find a mind-numbing comedy on Netflix, when one can find virtually any kind of film on that platform.

 

Netflix uses machine learning to learn our preferences and to provide us with choices that fit those preferences.

 

In my case, mind-numbing comedies that generate mindless laughter are not within my natural preferences. My wife and I are rather high on intellectual curiosity (openness to experience) and enjoy crime movies and series, nature documentaries, historical documentaries, World War II films, and the occasional sitcom for weekend breakfast. Generally, we tend to avoid films that have little meaning, are filled with sensationalism, or are overly commercial.

 

Over the past 3-4 years, Netflix’s algorithm has learned our preferences rather well and delivers suggestions that fit them.

 

The problem is that it delivers (almost) exclusively options that fit those preferences. And this lack of quasi-random options is not specific to Netflix alone; it is an issue with all machine-learning-based services.

 

Most of my YouTube usage is for music and, once again, its machine-learning algorithm has learned my preference for different variations of metal and serves up my favorite songs and bands. I very much enjoy the tunes I’ve been listening to for the past 5-20 years.

 

Occasionally, however, I’d like to get out of my comfort bubble and experience something new. Maybe not something radically different from what I’ve always liked, but something sufficiently novel that I don’t experience déjà vu all the time.

 

 

Dear machine-learning algorithms, please be so kind as to allow me to experience some authentic novelty and, pretty please, don’t lock me in a self-sustaining, sound-proof echo chamber.

 

Until now I have mentioned rather benign downsides of machine-learning over-use. In areas other than entertainment, things can be even worse.

 

Social media feeds and search results are tailored by machine-learning algorithms to give each of us information that fits our interests, preferences, and personality. This sounds very good and, I guess, most of the time it’s OK.

 

When it comes to news and points of view, however, this is simply counterproductive because it locks each person in a psychologically comfortable echo chamber. We can (and often do) become blind to others’ perspectives and to what kind of information others are consuming.

 

For our own good as societies, machine-learning algorithms need to start doing some unlearning.  

 

Machine-learning algorithms need to incorporate some randomness or quasi-randomness. For example, such an algorithm could serve 90% options that fit existing preferences and 10% quasi-random options – things that only remotely fit those preferences.
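To make that concrete, here is a minimal sketch of such a 90/10 mix in Python, written in the spirit of an epsilon-greedy recommender. Everything in it is hypothetical: the recommend function, its parameters, and the item lists are my own illustrative assumptions, not how Netflix or YouTube actually build their suggestions.

import random

def recommend(preferred_items, loosely_related_items, n=10, explore_share=0.1, seed=None):
    # Fill ~90% of the slots with items the preference model already ranks highly,
    # and ~10% with quasi-random picks drawn from items that only loosely match
    # the user's taste (adjacent genres), not from the whole catalog at random.
    rng = random.Random(seed)
    n_explore = max(1, round(n * explore_share))
    n_exploit = n - n_explore
    picks = list(preferred_items[:n_exploit])
    picks += rng.sample(loosely_related_items, min(n_explore, len(loosely_related_items)))
    rng.shuffle(picks)  # interleave, so the novel items aren't buried at the bottom of the list
    return picks

# Illustrative use: a metal listener gets mostly metal plus an adjacent-genre track or two.
metal = ["Iron Maiden - The Trooper", "Metallica - One", "Gojira - Flying Whales",
         "Opeth - Ghost of Perdition", "Mastodon - Oblivion", "Tool - Lateralus",
         "Slayer - Raining Blood", "Megadeth - Holy Wars", "Pantera - Walk"]
adjacent = ["Deftones - Change", "Alice in Chains - Nutshell", "Nine Inch Nails - Hurt"]
print(recommend(metal, adjacent, n=10, seed=7))

The design choice that matters here is where the 10% comes from: sampling from loosely related items rather than from the entire catalog is what keeps the novelty acceptable to the user, as argued below.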

 

I suggest quasi-random options (and not purely random ones) because users, most likely, will reject, not consider, or not even notice options that are at odds with their preferences. For example, on Netflix I’d never watch soap operas; on YouTube, I’d never listen to boy-band love songs.

 

Similarly, when it comes to news feeds on social media, a liberal wouldn’t be likely to engage with (click on) a post that expresses ultra-conservative views such as “we must defend ourselves from Satan’s trap of legalizing same-sex marriages”.

 

However, a liberal would (at least) not reject or disengage from a post that expresses more moderate views, such as “Residents of Centerville are concerned about rising crime in immigrant-dominated neighborhoods”.

 

Don’t blame the user

 

It’s easy to say that people should “like” pages and/or posts that don’t fit their views (preferences) so that the machine-learning algorithm learns to give them more of “that kind” of content. But people naturally avoid “energy loss”, and it is very easy to do nothing and stay comfortable.

 

From a design perspective, the problem lies with the algorithm, not with the user.

 

I agree that machine learning algorithms, most often, are a good thing, but I also believe that there can be “too much of a good thing”.

 

 

 

 

 
