Artificial Intelligence for starters (in non-geeky terms)

Anurag A V
5 min readJul 1, 2019

You’re planning a road trip for an upcoming long weekend. You start to think about what all you’d need to pack and if you need to order something.

You’ve packed what you think you’ll need, but suddenly your voice assistant alerts you to the weather forecast at your destination and prompts you to pack rainwear and extra clothes.

It also asks if it could order raincoats and umbrellas from the e-commerce store, as it hasn’t detected any such items in your smart wardrobe.

You say yes and they get delivered to your doorstep in an hour by delivery drones.

You start out on the trip, and your smart car recommends the best possible route to your destination after anticipating traffic and other road conditions. It suggests great Points of Interest along the way and alerts you if you are sleepy or distracted behind the wheel. The car self-adjusts its dynamics to offer you and your passengers the best comfort and safety it can.

Sounds like a story?

Well, this story may be closer to reality than we think.

There are already intelligent voice and home assistants in the market that take on the role of your own personal secretaries or butlers.

E-commerce giants like Amazon and Alibaba are testing aerial drones that can promise last-mile delivery.

Many automotive manufacturers are researching autonomous vehicles and connected cars that can reduce manual driving efforts and make mobility easier.

What empowers these so-called ‘smart’ devices, objects, tools and machines to think and even make suggestions to their users?

What makes them different from conventional computers or electronics?

It is the ability of smart devices to think and act like humans, making their users’ lives easier, that sets them apart.

This ability is often termed ‘Artificial Intelligence’ (AI), amongst the term’s various other definitions.

Alan Turing, a pioneer of theoretical computer science, described a test in 1950, now called the Turing Test, to determine whether a machine is intelligent.

The test involves a human interrogator asking questions that require human intelligence of two players, A and B. One of them is a machine or computer and the other is a real human. Which player is which is not revealed to the interrogator.

The interrogator has to seek answers from the players and determine which player is a machine and which is human.

If the interrogator can’t differentiate between the answers, it can be inferred that the machine which was answering is intelligent and can probably, in some way, think on its own.

How does AI mimic, at least to some extent, the thinking ability that we humans possess?

There are different models, developed by real humans, that rely on insights and strategies derived from historical and current sets of data, input-output pairs and user behavior.

The models analyze those sets and form rules of thumb or optimal solution strategies that can be used the next time a similar problem comes up for analysis.

Did that go over your head?

Don’t worry, let’s understand this better with the example of facial recognition, and how AI helps identify people by their faces.

Facial recognition models basically take photographs as input data: say, a test subject’s photos of their face from different angles at a particular point in time.

When the same subject is seen on a camera the next day, but in different attire or with glasses on, the model compares the input (in this case, the live video or photograph of the person) with its analyzed data (the previous photos from different angles) and looks for similarities to see if there is a match.

To determine whether it’s a match, the similarity between the input and the analyzed data must cross a certain threshold (say 90%). If it does, the AI model recognizes the person as the same subject from the previous day.
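For the curious, the threshold idea above can be sketched in a few lines of Python. This is a toy illustration, not a real face-recognition library: the faces are stand-in lists of numbers (often called ‘embeddings’), and the names and values are made up for the example.

```python
# Toy sketch: compare a new "face" against stored ones using a similarity score.
# The vectors and the 90% threshold are illustrative, matching the example above.
import math

def cosine_similarity(a, b):
    """Similarity between two face 'embeddings' (lists of numbers), 0 to 1."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.90  # the "say 90%" cut-off from the text

def is_same_person(new_face, stored_faces):
    # It's a match if the new face is similar enough to any stored angle.
    return any(cosine_similarity(new_face, s) >= THRESHOLD for s in stored_faces)

stored = [[0.9, 0.1, 0.4], [0.85, 0.15, 0.42]]  # yesterday's photos, as vectors
today = [0.88, 0.12, 0.41]                      # today's camera frame
print(is_same_person(today, stored))
```

Real systems compute much richer embeddings from the raw pixels, but the final ‘does the similarity cross the threshold?’ decision works just like this.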

What’s astounding is that these models possess the ability to learn from their mistakes and be better equipped for the future. If, in the above case, the model had failed to provide the correct answer, its human teacher could point the mistake out, and the model won’t repeat it the next time. This (in technical terms) is called ‘Supervised Learning’.
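The teacher-corrects-the-model loop can be sketched as a toy, too. This is a deliberately simplified illustration (real supervised learning adjusts numeric model parameters rather than memorizing answers), with all names invented for the example:

```python
# Toy sketch of supervised learning: a "teacher" supplies the correct label
# whenever the model gets it wrong, and the model remembers the correction.

class ToyClassifier:
    def __init__(self):
        self.memory = {}  # input -> correct label, learned from the teacher

    def predict(self, x):
        return self.memory.get(x, "unknown")

    def learn(self, x, correct_label):
        # The teacher points out the mistake; the model won't repeat it.
        self.memory[x] = correct_label

model = ToyClassifier()
print(model.predict("photo_with_glasses"))        # a mistake: "unknown"
model.learn("photo_with_glasses", "same person")  # teacher corrects it
print(model.predict("photo_with_glasses"))        # now "same person"
```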

There are AI models called Neural Networks that mimic the human brain and the firing of neurons. They make decisions by ‘firing’ their artificial neurons.
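A single artificial neuron is surprisingly simple. The sketch below shows one, with made-up weights: it ‘fires’ (outputs a value near 1) when the weighted sum of its inputs is large enough. Networks of thousands or millions of these make up a neural network.

```python
# One artificial neuron: weighted sum of inputs plus a bias,
# squashed through a sigmoid so the output lands between 0 (quiet) and 1 (firing).
import math

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

out = neuron([1.0, 0.5], [0.8, -0.2], 0.1)  # illustrative inputs and weights
print(round(out, 2))
```

‘Learning’ in a neural network means nudging those weights until the right neurons fire for the right inputs.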

It is because of these superior learning abilities and the ability to think like humans that AI has come to the forefront of problem-solving across many applications and industries.

That was all the bright side, but what criticism is AI facing?

Well, one is the job scenario and the other is bad AI.

Let’s talk about jobs first.

We humans have not always embraced technological advancements readily. Be it the shift from stone to steel and then to automated tools, or the shift from animal-powered mobility to the internal combustion engine, adoption happened gradually.

In my personal opinion, gradual change is good, as people learn more about the benefits the change can create and the harm that can be prevented by accepting it.

The same is going to happen with AI. It may be a disruptor, but acceptance will be in steps.

Like previous technological advancements, AI is going to create more jobs than it makes obsolete, and the jobs it displaces will mostly be mundane and less skill-based.

Machines and algorithms in the workplace are expected to create 133 million new roles but displace 75 million jobs by 2022, according to a report from the World Economic Forum (WEF) called “The Future of Jobs 2018.” This means that the growth of artificial intelligence could create 58 million net new jobs in the next few years (source).

Job seekers and professionals need to re-skill and keep learning to extract the best out of AI. After all, the aim with which it was created was, and is, to improve our human lives, both personally and professionally.

What about bad AI? Is it like Skynet?

Yes, AI can become a hindrance and possibly a threat if it falls into the wrong hands.

But isn’t it the same with all the technological changes and leaps we have experienced till now? Every coin has two sides.

Currently, AI is still in its infancy as far as the ability to think is concerned. This could change in the future as better algorithms and models are created, along with better hardware like IPUs (Intelligence Processing Units) and quantum computing.

The advantages of AI far outweigh the disadvantages and that is why it is a step in the direction of a better future for us.

There could be a day when AI will actually help us in making better decisions and ultimately improve our lives.

AI will make us more human than machine and not the inverse.


Anurag A V

A Technology and Management enthusiast with an undergraduate degree in Computer Engineering. Currently pursuing a master’s degree in management.