Advantages and disadvantages of Google Translate
Google Translate (GT) is the world’s most widely used translation service. It supports 103 languages and more than 10,000 language pairs, and processes about 500 million translation requests every day. Experts expect that GT’s neural system will soon be able to process not only text but also audio and video files, so we should expect rapid progress in machine translation. The first steps in this direction have already been taken: algorithms capable of analysing video and audio are being actively developed.
In 2016, Google introduced the Google Neural Machine Translation system (GNMT). Based on an artificial neural network, it was meant to improve translation quality dramatically. Three years have passed since then, so we can now evaluate its effectiveness. Did translation quality really improve, and what would it take to make it even better?
How does GT’s algorithm work?
The neural model of machine translation improves on the standard methods that preceded it. Before the advent of neural networks, translation was usually done in a word-for-word fashion: the system simply translated separate words and phrases, taking only basic grammar rules into account. As a result, the quality of translation left much to be desired.
In a modern neural system, the smallest unit is not a word but a word fragment. Thanks to that, the machine’s computational power is focused not on word forms but on the context and meaning of the sentence: the software translates the whole sentence, taking its context into account. It does not store hundreds of translation variants in memory; instead, it operates on the semantics of the text, dividing sentences into dictionary segments.
As of today, GNMT uses about 32,000 such fragments. Using special decoders, it determines the significance of each segment in the text, then weighs the possible meanings and translation options. The last step is to combine the translated segments in accordance with grammar. According to the developers, this approach ensures high translation speed and accuracy without consuming excessive computational power.
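The splitting of words into known fragments can be sketched with a few lines of code. This is a toy illustration of the general subword idea, not Google’s actual wordpiece algorithm; the tiny vocabulary below is invented for the example, whereas GNMT’s real inventory of roughly 32,000 fragments is learned from data.

```python
# Toy greedy longest-match segmenter, illustrating the general
# subword idea: unknown words are broken into known fragments.
def segment(word, vocab):
    """Split a word into the longest fragments found in the vocabulary."""
    pieces = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):     # try the longest slice first
            piece = word[i:j]
            if piece in vocab or j == i + 1:  # single letters always allowed
                pieces.append(piece)
                i = j
                break
    return pieces

vocab = {"trans", "lation", "un", "break", "able"}
print(segment("translation", vocab))   # → ['trans', 'lation']
print(segment("unbreakable", vocab))   # → ['un', 'break', 'able']
```

Because single letters are always permitted as a fallback, the segmenter never fails on an unseen word; it simply produces a finer-grained split.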
Because languages differ in semantics and grammar, proper translation has traditionally required completely different algorithms for each language, implemented in some programs as separate modules and dictionaries. A neural network, by contrast, can work with many pairs of languages, including pairs that were not involved in the initial training. For example, if a system was trained to translate between English and Japanese, and between English and Korean, it can also translate from Japanese to Korean without using English as an intermediate language.
Over the past few years, artificial intelligence (AI) has developed so much that it has become capable of translating between languages for which it was not originally trained. This became possible because the network developed its own artificial language, which acts as an intermediary in the translation process. This universal computer language, which has been called an interlingua, is entirely unsuitable for communication between people.
The translation method implemented by Google’s developers is called zero-shot translation: a more sophisticated technology that relies on this intermediate artificial language. The area is developing very rapidly, and such systems may well become the primary means of automatic translation in the near future. The system’s self-learning ability allows a neural network to translate even slang, jargon, and neologisms that are absent from popular dictionaries. In addition, a neural network can operate on the individual letters that words are made of, which is necessary when transliterating proper nouns from one alphabet to another.
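Published descriptions of multilingual neural translation mention one simple convention that makes zero-shot pairs possible: a single shared model serves all language pairs, and a special token prepended to the input selects the target language. Here is a minimal sketch of that convention; the `<2xx>` token format and the helper function are assumptions for illustration, not Google’s actual interface.

```python
# One shared model serves every language pair; a token prepended to the
# input tells it which target language to produce. (Token format and
# helper are illustrative, not Google's real interface.)
def prepare_input(text, target_lang):
    """Prefix a source sentence with a target-language token."""
    return f"<2{target_lang}> {text}"

# A model trained on English<->Japanese and English<->Korean data can be
# asked directly for Japanese->Korean output with the same mechanism:
print(prepare_input("Hello, world!", "ko"))   # → <2ko> Hello, world!
```

The design choice matters: because the target language is just another input token rather than a separate model, the network can be prompted for pairs it never saw together during training.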
The GNMT system has improved translation of the two most used language pairs, Spanish-English and French-English, raising translation accuracy to 85%. In 2017, Google conducted large-scale surveys among regular GT users, asking them to evaluate three translation options: statistical machine translation, neural translation, and human translation. The results were impressive: in some language pairs, translation that relied on neural networks turned out to be nearly perfect. Below is a table that uses a 6-point scale for evaluating translation quality, where 6 is the maximum score and 0 is the minimum.
| Statistical model | Neural network | Human translation |
| --- | --- | --- |
As you can see, the quality of translation in the Spanish-English and French-English pairs is very close to that of human translation. This is not surprising, since these language pairs were used for deep learning of GT’s algorithms. With other language pairs the situation is not as good, although large-scale research into them is ongoing. While neural translation works quite well for structurally similar languages, for radically different language systems, for example Japanese-Finnish, computer translation is noticeably inferior.
What are GT’s drawbacks?
The practical benefits of GT and similar technologies are hard to overestimate. However, something is still missing from this machine approach, and it can be described in one word: understanding. Computer translation has never been focused on machine understanding. Software developers have instead tried to improve decoding methods, that is, to cope with the translation task using the analytical powers of the machine without genuine comprehension.
It is worth noting that maximum translation accuracy was not the GNMT developers’ primary goal. At the current level of technology, any computer translator that operated on complex language constructs would be significantly slower. The GNMT developers therefore tried to strike a balance between accuracy and speed of translation.
Let’s use GT to translate the following phrase into French:
In their house, every family member has personal things. There is his big car and her small car, his slippers and her slippers, and his books and hers.
Here is what GT came up with:
Dans leur maison, chaque membre de la famille a des objets personnels. Il y a sa grosse voiture et sa petite voiture, ses pantoufles et ses pantoufles, ses livres et les siens.
If we use the French-English translator again, we get this sentence:
In their home, each member of the family has personal belongings. There is his big car and his little car, his slippers and his slippers, his books and his own.
The problem is that in French, as in other Romance languages, nouns have genders, and possessives like “his” and “her” agree with the gender of the thing possessed, not of its owner. GT did not grasp the meaning and translated the sentence accordingly. It is clear to any person that the sentence is about a family in which each member has personal belongings, yet GT used “sa” for both cars and “ses” for both pairs of slippers, so we can no longer tell whose car is big and whose is small. As a result, GT’s translation failed to convey the original meaning.
The software simply ignored the most vital information in the sentence. People understand such subtleties, but GT only processes strings of words and letters. It quickly processes pieces of text without understanding their meaning. Therefore, even a translation system relying on advanced AI technology can be inaccurate or outright erroneous.
The Eliza effect
For any machine, computing device, or piece of software, words are just data; machines do not grasp the deeper meaning behind them. Back in the 1960s, a computer program called ELIZA was designed that manipulated a set of canned answers to questions, creating the impression of intelligent conversation. Since then, the tendency to assume that a machine thinks and understands like a human has been called the Eliza effect.
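The trick ELIZA relied on can be reproduced in a few lines of pattern matching. The rules below are invented for illustration; the point is that the program transforms text mechanically, with no model of meaning behind it.

```python
import re

# A minimal ELIZA-style responder: canned pattern-response rules create
# an illusion of understanding without any model of meaning.
RULES = [
    (r"\bI am (.*)", "Why do you say you are {0}?"),
    (r"\bI feel (.*)", "How long have you felt {0}?"),
    (r".*", "Please tell me more."),          # catch-all fallback
]

def respond(utterance):
    """Return the first matching canned response."""
    text = utterance.rstrip(".!?")            # drop trailing punctuation
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())

print(respond("I am worried about machine translation."))
# → Why do you say you are worried about machine translation?
```

The responses can sound eerily attentive, yet the program has merely echoed the user’s own words back inside a template, which is exactly the effect described above.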
For decades, software developers and AI researchers have been influenced by the Eliza effect. Most GT users assume that this software is able, at least sometimes, to understand the meaning of words. However, it is not true — GT simply bypasses the issue of language understanding. Of course, GT can sometimes come up with sentences that sound pretty good.
It may even happen that a paragraph or two are translated perfectly, creating an illusion that GT understands the meaning of the text. However, we should not forget that GT cannot think like a human and is only able to process texts in a certain way. A computer program has no memory, no imagination, and no grasp of the hidden meaning of the words it operates on so quickly.
However, there is no reason to think that computer devices will not be able to think like humans in the future. Perhaps, they will even be able to do excellent translations between several different languages. It’s quite likely that they will successfully translate jokes, puns, novels, poems, and essays. After all, modern technology is developing very fast.