
Language and Artificial Intelligence

Artificial Intelligence is revolutionizing the world. Soon our native language will be the new interface, and everyone who can talk will be able to benefit from technology efficiently.

Yiğit Kulabaş, PhD*

yigit@redesignbiz.com

Artificial Intelligence is getting ready to redefine our relationship with the world, machines and technology. I mentioned in my previous article that 2017 will be the year of AI, and that it is not only about robots, as everyone assumes. AI is finally ready to step into our daily lives, because technology has made huge progress with language. AI now listens, reads, writes, understands, translates, talks and acts.

Restoring Factory Settings

We had to develop artificial languages, menus, systems, devices and interfaces to communicate with technology, because computers could not understand our language. As a result, efficient use of technology was very limited. Then we started to think in human-oriented terms and freed ourselves from that artificiality. We rediscovered touch with smartphones and tablets and eliminated the mouse. Suddenly everyone, from a two-year-old kid to a ninety-year-old granny, could benefit from technology.

Now it is time to shed the remaining artificialities. We no longer need to understand the language of technology, because technology has started to understand our language. Language and technology have always had a close relationship, and in the last few years that relationship has grown even closer; advances in Natural Language Processing, for example, are quite impressive. The technical term is "conversational interfaces". Let me clarify what that means: we are now able to communicate with all kinds of devices and objects by talking or writing to them. And this technology will be available not only to scientists, engineers and professionals, but to everyone who can talk and write.

Language Is Important

As you can guess, these new technologies are first tested in English and then applied to other languages. Therefore, everyone should look after their native language so that it benefits from these technological advancements and can be used fully and naturally. We all have to make sure our native language is processed efficiently. Let's examine the relationship between language and technology: after all, we have to make huge progress to adapt this new technology to our daily lives, and thereby to our language. To achieve this, a device should first hear us, then listen to us and understand us; only then is it time to talk and reach an understanding.

1) #ToHear: One of the most challenging tasks for AI is to distinguish voices. Technology goes deaf, especially once it steps out of the lab and into the real world. Enhanced voice services separate the audio into layers: the initial goal is to get rid of background noise, so the system separates the voices coming from different sources and enhances the quality of the one that matters. Another sub-task of hearing is recognizing the voice. It is not only about telling the sound of a car from a bird's; Elsa and Anna don't sound the same either. We can tell who we are talking to from their voice alone, without even seeing them. Although this feels perfectly normal to us, it is quite challenging for a machine, and this is where voice biometrics steps onto the stage.
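
To make "separating the audio into layers" concrete, here is a minimal sketch of a spectral noise gate in Python (using NumPy and SciPy). The function name and threshold are my own illustration, not how any particular product works; real voice-separation and voice-biometric systems are far more sophisticated.

```python
import numpy as np
from scipy.signal import stft, istft

def noise_gate(audio, sample_rate, noise_floor_db=-40.0):
    """Crude spectral noise gate: zero out time-frequency bins whose
    energy falls below a threshold, keeping the louder (hopefully
    foreground) voice layer."""
    # Short-time Fourier transform: split the signal into
    # time-frequency bins.
    freqs, times, spectrum = stft(audio, fs=sample_rate, nperseg=512)
    magnitude = np.abs(spectrum)
    # Convert the threshold from dB (relative to the peak) to amplitude.
    threshold = magnitude.max() * (10 ** (noise_floor_db / 20.0))
    # Keep only bins above the threshold; everything else is "noise".
    gated = np.where(magnitude > threshold, spectrum, 0)
    _, cleaned = istft(gated, fs=sample_rate, nperseg=512)
    return cleaned

# Toy usage: a 440 Hz "voice" buried in random noise.
sample_rate = 16000
t = np.linspace(0, 1.0, sample_rate, endpoint=False)
voice = np.sin(2 * np.pi * 440 * t)
noisy = voice + 0.05 * np.random.randn(sample_rate)
cleaned = noise_gate(noisy, sample_rate)
```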

2) #ToListen: To move from the hearing phase to the listening phase, speech has to be converted into text, and there are many techniques for doing so. Today most devices can be activated by a voice command, yet that is not enough. In our dreams, we have devices that listen continuously and reply only when needed.
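
Here is a minimal sketch of that "listen continuously, reply when needed" idea, assuming the third-party SpeechRecognition package and a working microphone; the wake word "assistant" is an arbitrary example, not a real product's trigger.

```python
# Always-listening loop with a wake word (pip install SpeechRecognition).
import speech_recognition as sr

recognizer = sr.Recognizer()
WAKE_WORD = "assistant"  # arbitrary example wake word

with sr.Microphone() as source:
    # Calibrate for ambient noise once, then listen continuously.
    recognizer.adjust_for_ambient_noise(source)
    while True:
        audio = recognizer.listen(source)
        try:
            # Speech-to-text: turn the raw audio into words.
            text = recognizer.recognize_google(audio).lower()
        except (sr.UnknownValueError, sr.RequestError):
            continue  # Could not transcribe; keep listening.
        # Reply only when addressed: the gap between hearing and listening.
        if WAKE_WORD in text:
            command = text.split(WAKE_WORD, 1)[1].strip()
            print("Heard command:", command)
```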

3) #ToUnderstand: Another challenging task for AI is making sense of things. Most of the time it is not enough to know the words: to fully understand something that is said, it is essential to see the bigger picture and relate it to the previous sentences. The same sentence can have completely different meanings depending on gestures, stressed words and body language. Processing natural language is already a challenge, and it is even harder in some languages because of their flexibility in meaning. Voice emotion recognition and meaning analysis are important steps toward AI understanding natural language: with voice emotion recognition, it is possible to tell whether what is said is meant to be positive, negative or neutral.
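
As a toy illustration of classifying an utterance as positive, negative or neutral, here is a tiny lexicon-based scorer. The word lists are invented for this sketch; real systems use far richer models and also draw on context, stress and voice emotion.

```python
# Toy lexicon-based sentiment scorer: counts positive vs. negative words.
POSITIVE = {"great", "good", "love", "wonderful", "thanks"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "broken"}

def sentiment(utterance: str) -> str:
    words = utterance.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this assistant"))  # positive
print(sentiment("The speaker is broken"))  # negative
print(sentiment("Turn on the lights"))     # neutral
```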

4) #ToTalk: You heard, you listened, and now it is time to talk. It sounds so easy, right? Yet it is a huge world: from constructing a sentence to improvising, from resembling a human voice to expressing emotion through voice alone, from accents to stress, from meaning to expression, from whispering to shouting. The number of devices we can talk to increases every single day; smartphones, applications, cars and navigation systems are the first that pop into my mind. Soon, talking to devices is expected to become even more common thanks to smart home assistants. Amazon Echo has already joined families in English-speaking countries, and Arçelik has recently launched the first Turkish-speaking home assistant.
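
A minimal text-to-speech sketch, assuming the third-party pyttsx3 package, which drives the operating system's built-in speech engines; the rate and volume values are arbitrary examples of the knobs described above, from stress to whispering and shouting.

```python
# Text-to-speech via the OS speech engine (pip install pyttsx3).
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)    # speaking speed, words per minute
engine.setProperty("volume", 0.9)  # 0.0 (whisper) to 1.0 (shout)
engine.say("Hello, I am your home assistant.")
engine.runAndWait()  # block until the utterance finishes
```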

5) #ToUnderstandEachOther: We heard, listened, understood and talked, and now comes the last step: understanding each other. It is a broad term that covers dialogue, compromise, speaking the same language and reaching agreement. In order to act, we first need to understand the other party. When we don't speak the same language as the other person, device or object, the only solution is translation, and translation is one of the important areas where AI is used. Google Translate has made major improvements recently: it started using AI for translations between eight languages and noticeably enhanced the translation quality.
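
Machine translation is typically consumed as a service. The sketch below shows the shape of such a call; the endpoint URL and response field are hypothetical placeholders, standing in for a real API such as Google's, which differs in authentication and schema.

```python
# Translation as a service call; endpoint and response shape are hypothetical.
import requests

def translate(text: str, source: str, target: str) -> str:
    response = requests.post(
        "https://example.com/api/translate",  # hypothetical endpoint
        json={"q": text, "source": source, "target": target},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["translatedText"]  # hypothetical field

print(translate("Merhaba dünya", source="tr", target="en"))
# -> "Hello world"
```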

*Yiğit Kulabaş is a businessman, academic and author. He is the CEO and founding partner of RE/Design Business. This article of his was translated from Capital, where it was published in February 2017.
