Teaching machines to understand…is it that simple?


For those of you who don't know the term "machine learning": machine learning is a family of algorithms that take data as input, learn patterns from it, and use that learned knowledge for different tasks. It's part of what we call "artificial intelligence".

I have been doing some machine learning work at my office. I have been studying the different algorithms: their learning mechanisms, the kind of data they need, their accuracy, their tuning parameters, and so on.

I wrote a few test classifiers, and they worked well with my test data. The programs I wrote were able to predict accurately, but after going through them and learning how they work inside, it wasn't a satisfying experience.
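To give a sense of what such a test classifier looks like, here is a minimal sketch of a nearest-neighbour classifier on an invented toy dataset (this is an illustration, not the actual code from my office):

```python
import math

def nearest_neighbor_predict(train, sample):
    """Predict the label of `sample` as the label of the closest training point."""
    best_label, best_dist = None, float("inf")
    for features, label in train:
        dist = math.dist(features, sample)  # Euclidean distance (Python 3.8+)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy training set of (features, label) pairs -- invented for illustration.
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((5.0, 5.0), "B"), ((4.8, 5.2), "B")]

print(nearest_neighbor_predict(train, (1.1, 0.9)))  # lands in the "A" cluster
print(nearest_neighbor_predict(train, (5.1, 4.9)))  # lands in the "B" cluster
```

It predicts "correctly" on unseen points, yet all it ever does is measure distances. That is exactly the hollow feeling I am describing.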

I was talking to the Google Assistant on my phone. I told it I was getting bored, and it quickly replied, "Ooh, let's play a game." I liked its response, but somewhere in my mind I knew that this is just what it has been taught. It doesn't get my feelings. It doesn't understand what it means to be bored…

What I mean is this: it was fine that, after going through all the training data and learning from it, the model was able to make predictions on unseen data (data other than the training set), but it was a purely mathematical approach, an equation fitted accurately to the training set.
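To make "an equation fitted to the training set" concrete, here is a minimal least-squares fit of a straight line, on made-up numbers:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]  # toy "training data" lying exactly on y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)  # recovers slope 2 and intercept 1
```

The fit is perfect, and yet nothing here "knows" what x or y mean. That, at its core, is what training does.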

Today's algorithms don't understand anything they do. Generating sentences after looking at other sentences, or predicting situations from previous data just by fitting curves… I don't think that means much. They don't really understand.

I think all of our current methods, surfacing important words with TF-IDF, training classifiers by shaping the data to our needs, fitting those polynomial curves, are not going to get us there. Because 70-80% of what we learn as humans is completely unsupervised. Although it seems that we learn by training ourselves on lots and lots of samples, one thing we forget is that we also understand the meaning or context of every single thing. That's the key to our adaptive nature: our unsupervised learning.
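TF-IDF itself is a good example of how mechanical this "surfacing of important words" is. A minimal sketch, on an invented three-document corpus (real pipelines would use something like scikit-learn's TfidfVectorizer):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Score each term in each document by term frequency * inverse document frequency."""
    n = len(docs)
    # Document frequency: in how many documents does each term appear?
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({term: (count / len(doc)) * math.log(n / df[term])
                       for term, count in tf.items()})
    return scores

# Toy corpus, already tokenized -- invented for illustration.
docs = [["machine", "learning", "is", "fun"],
        ["machine", "learning", "needs", "data"],
        ["cats", "are", "fun"]]
scores = tf_idf(docs)
```

In the third document, "cats" outscores "fun" simply because it is rarer across the corpus. The word is "important" purely by counting; no meaning is involved anywhere.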

Our machines won't be able to understand anything until they grasp context and meaning. Once that happens, all the other problems are just miles away.

One thing I don't understand, though, is why we are trying to teach machines with our language, something that only we can understand.

Why don't we show the algorithm the color RED and use its own response to it as its measure of RED, instead of telling it "this is called RED" through our training data? I always think of our current methods as showing it the moon and saying, "this is a circle." That's what our current algorithms are learning, and I don't see much advantage in teaching them with huge amounts of data.

Learning, for a machine, shouldn't only be about using vast amounts of data to build relationships with…

In order to create context and meaning for the things they say or predict, we need sophisticated methods that build their own understanding of what they learn, rather than us hammering the relationships into them; something that unsupervised learning does. But we haven't gone very deep into unsupervised learning as far as the current world of ML is concerned… We are more focused on making it a talking parrot.
