NLP Software, Human Speech, and New Challenges for Developers
As artificial intelligence has become part of the daily lives of millions of consumers around the world, the challenges that software developers face in ensuring AI software functions accurately and efficiently often go unnoticed. Some of the most common applications of artificial intelligence are voice assistants such as Apple’s Siri and Amazon’s Alexa, along with the simpler chatbots that consumers encounter in customer service. Because these programs are built on Natural Language Processing, or NLP for short, a variety of challenges can arise during their development. Common issues that software developers face when creating NLP-based programs include homonyms, contextual words and phrases, synonyms, sarcasm and irony, ambiguity, and general errors in speech or text.
Homonyms, contextual words, and phrases
Because words in the English language can share a pronunciation yet have completely different meanings, one major challenge developers face when building chatbots or speech-to-text applications is homonyms and contextual words and phrases. For example, many people struggle with the appropriate usage of the words their, there, and they’re. While these three words are pronounced the same, they carry three completely different meanings in written or spoken language. Since even human beings sometimes struggle to use words in the appropriate context, a software program built from scratch will inevitably face these challenges as well.
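The difficulty can be illustrated with a deliberately simplistic sketch. The rules and the tiny word list below are invented for illustration only; real speech-to-text systems learn these distinctions statistically from large amounts of context rather than from hand-written rules.

```python
# Toy disambiguation of the homophones "their" / "there" / "they're".
# The lexicon and rules are illustrative assumptions, not a real approach.

def pick_homophone(next_word: str) -> str:
    """Guess which homophone fits before `next_word` using crude grammatical cues."""
    nouns = {"car", "house", "dog", "idea", "order"}  # tiny stand-in lexicon
    if next_word in nouns:
        return "their"      # possessive before a noun: "their car"
    if next_word.endswith("ing"):
        return "they're"    # contraction before a verb: "they're going"
    return "there"          # default adverb/pronoun: "there is", "over there"

print(pick_homophone("car"))     # their
print(pick_homophone("going"))   # they're
print(pick_homophone("is"))      # there
```

Even this toy version shows the core problem: the correct choice depends entirely on the surrounding words, and any context the rules do not cover will produce the wrong spelling.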
Synonyms
Synonyms can also present challenges for the machine learning models on which NLP relies. Because many different words can describe or illustrate the same idea, a model that has not been trained on all the relevant synonyms for a particular word or phrase will struggle to recognize them when deployed in a chatbot or speech-to-text application. What’s more, many synonyms denote varying degrees of intensity, leading to additional issues. For instance, large, substantial, enormous, and massive can all describe the size of a particular object or person, depending on the context in which they are used within a conversation.
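One simple way to picture the problem is a lookup table that maps size words onto a shared concept plus a rough intensity score. The word list and scores below are invented for illustration; modern systems instead use learned word embeddings, which place related words near one another in a vector space.

```python
# A minimal sketch of synonym normalization: several size words map to one
# canonical concept with a rough intensity score. Values are invented.

SIZE_SYNONYMS = {
    "large":       ("size", 0.5),
    "substantial": ("size", 0.6),
    "enormous":    ("size", 0.9),
    "massive":     ("size", 1.0),
}

def normalize(word: str):
    """Return (concept, intensity) if the word is a known size synonym, else None."""
    return SIZE_SYNONYMS.get(word.lower())

print(normalize("Enormous"))  # ('size', 0.9)
print(normalize("tiny"))      # None -> an unseen synonym the model would miss
```

The `None` result for an unseen word is exactly the failure mode described above: any synonym missing from the training data simply falls through the cracks.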
Irony and sarcasm
The concepts of irony and sarcasm can be a conundrum for software developers building NLP machine learning algorithms. When speaking with a person in real time, people can often recognize sarcasm or irony through the tone of the speaker’s voice or the context of the conversation at hand. Because software programs lack access to these nonverbal cues, they can struggle with words that carry a positive or negative meaning on the surface but are in fact being used to convey the opposite. For example, after receiving bad customer service at a restaurant, a customer might sarcastically remark that the service was top-notch. Without the context of the bad service, however, a software program would have no way to recognize the sarcasm in such a statement.
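A surface-level sentiment score makes the failure concrete. The tiny word lists below are illustrative assumptions rather than a real sentiment lexicon, but they show how a purely word-based model labels the sarcastic remark as praise while missing the context that reverses it.

```python
# Toy surface-level sentiment scoring: count positive words minus negative
# words, ignoring context entirely. Word lists are invented for illustration.

POSITIVE = {"top-notch", "great", "excellent"}
NEGATIVE = {"bad", "slow", "rude", "cold"}

def surface_sentiment(text: str) -> int:
    """Score a sentence by counting positive and negative words, with no context."""
    words = text.lower().replace(".", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# The sarcastic remark scores as positive praise on the surface:
print(surface_sentiment("The service was top-notch."))            # 1
# ...even though the actual experience it refers to scores negative:
print(surface_sentiment("The food was cold and the staff rude"))  # -2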
Ambiguity
Because language has developed over thousands of years of human interaction, culture, and communication, every spoken and written language contains some degree of ambiguity. For example, a simple sentence such as “I saw the boy in the street with my binoculars” can have two different meanings. On one reading, the speaker viewed the boy in the street through their binoculars. On another, the speaker saw a boy in the street who was physically holding the speaker’s binoculars. Ambiguity of this kind can also cause issues for speech-to-text applications and chatbots, as the software has no inherent awareness of how ambiguous English or any other language can be.
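The binoculars sentence is a classic case of prepositional-phrase attachment ambiguity, which can be sketched by enumerating where the phrase “with my binoculars” attaches. The function below is purely illustrative; real parsers score many candidate parse trees statistically and pick the most probable attachment rather than listing them.

```python
# A sketch of prepositional-phrase (PP) attachment ambiguity: the same PP can
# attach to the verb or to the noun, yielding two readings. Illustrative only.

def attachment_readings(verb: str, noun: str, pp: str):
    """List the two readings produced by attaching the PP to the verb or the noun."""
    return [
        f"[{verb} {pp}] {noun}",   # instrument reading: the speaker used the binoculars
        f"{verb} [{noun} {pp}]",   # modifier reading: the boy has the binoculars
    ]

for reading in attachment_readings("saw", "the boy in the street", "with my binoculars"):
    print(reading)
```

Both bracketings are grammatically valid, which is why the software cannot resolve the sentence from its words alone.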
Errors in speech or text
Finally, simple errors in speech or text can also create challenges for speech-to-text applications. A common example is the misuse of the words effect and affect. While these two words sound alike and even have related meanings, confusing them can completely alter the meaning of a phrase or sentence. For example, a person might intend to say that the music they listen to figuratively affects them, but use the word effect instead. A human listener would likely grasp the intended sentiment despite the error, whereas a software program might take the same sentence to mean that the music has had a physical effect on the speaker.
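A crude confusable-word check illustrates both the idea and its limits. The heuristic below (flag “affect” after a determiner, where a noun is expected, and “effect” after a subject-like word, where a verb is expected) and its word lists are deliberately simplistic assumptions; real grammar checkers rely on trained language models rather than hand-picked rules.

```python
# A toy check for the effect/affect confusion based on the preceding word.
# The rules and word sets are invented for illustration only.

DETERMINERS = {"the", "an", "a", "no", "this", "that"}
SUBJECT_WORDS = {"they", "songs", "lyrics", "music"}  # illustrative stand-ins

def suspicious_usage(prev_word: str, word: str) -> bool:
    """Flag 'affect' where a noun is expected, or 'effect' where a verb is expected."""
    if word == "affect" and prev_word in DETERMINERS:
        return True   # "the affect" is usually meant to be "the effect"
    if word == "effect" and prev_word in SUBJECT_WORDS:
        return True   # "the songs effect me" likely means "affect"
    return False

print(suspicious_usage("the", "affect"))    # True
print(suspicious_usage("songs", "effect"))  # True
print(suspicious_usage("the", "effect"))    # False
```

Any sentence pattern outside these hand-picked lists slips through unnoticed, which is why rule-based error correction scales so poorly compared with learned models.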
While many human beings take written and spoken language for granted, scientists around the world are still working to understand the mental functions of the brain that allow people to communicate at even the most basic level. Consequently, creating software programs capable of understanding and responding to written and spoken language can prove extremely challenging. From synonyms to errors in speech or text, numerous obstacles can lower the effectiveness of a speech-to-text program or application. While popular voice assistants such as Siri and Alexa are very much modern marvels of technology, more effective speech-to-text programs will inevitably be developed in the future, as the machine learning algorithms and language models that support such programs only continue to improve.