Linguist in the Machine – A closer look at Machine Translation

An increasing demand

As the age of globalization and worldwide electronic communication unfolds, companies are in a constant race to fulfill their ever-growing need for accurate and, more importantly, fast localization into a wide array of languages. The global translation industry continues to grow, standing at the forefront of this trend as businesses seek to break the constricting barriers of language and offer their services and products worldwide. The problem is that, even with new and innovative technologies to aid the work, translation remains a complex, time-consuming, and highly specialized task that has, until now, been performed almost solely by human linguists.

One proposed solution for far quicker and more cost-effective localization is, of course, machine translation (MT).

While tools that employ some form of machine-assisted translation have been around for many years, effectively helping translators streamline their work, machine translation promises to revolutionize the field with the aid of AI. Technology companies constantly announce breakthroughs in the development of AI algorithms, and some have begun to envision a future in which MT might even replace human translators.

And yet, as hopeful and exciting as this future sounds, how realistic is the expectation of entirely replacing human translators, and what will the next five years look like in this rapidly evolving field?

A brief history

Machine translation is by no means a modern concept. Procedurally interpreting and translating text has been a topic of research since the 1950s, and techniques for systematic language translation used by modern software can be traced as far back as the 9th century.

Early machine translation systems, developed for use during the Cold War, relied on rule-based methods, applying rules written by humans or sourced from dictionaries to produce rudimentary translations.
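The rule-based idea can be sketched as a toy dictionary-plus-rules pipeline. This is a minimal illustration, not any historical system; the English–French vocabulary and the single reordering rule are invented for the example:

```python
# Toy rule-based translator: word-for-word dictionary lookup plus one
# hand-written rule (swap adjective-noun pairs into French noun-adjective
# order). Vocabulary and rule are invented for illustration.

LEXICON = {
    "the": "le", "red": "rouge", "car": "voiture", "is": "est", "fast": "rapide",
}
ADJECTIVES = {"red", "fast"}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    reordered = []
    i = 0
    while i < len(words):
        # Rule: adjective followed by a known word -> emit noun first.
        if i + 1 < len(words) and words[i] in ADJECTIVES and words[i + 1] in LEXICON:
            reordered += [words[i + 1], words[i]]
            i += 2
        else:
            reordered.append(words[i])
            i += 1
    # Dictionary lookup; unknown words pass through untranslated.
    return " ".join(LEXICON.get(w, w) for w in reordered)

print(translate("the red car is fast"))  # -> "le voiture rouge est rapide"
```

Note the output itself shows the approach's brittleness: the lexicon has no notion of grammatical gender, so it produces "le voiture" instead of "la voiture" — exactly the kind of error that had to be patched with ever more hand-written rules.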

The 1990s brought significant improvements to the field with the advent of revolutionary computing technologies. Companies such as IBM introduced statistical models, and machine translation shifted away from the previous rule-based engines. However, early adopters immediately had to contend with numerous flaws and errors. Because these engines operated on a finite database of previous translations, the unpredictability of language, with its wide range of nuances and meanings, often eluded them, causing translations to depart drastically from the meaning of the source text.
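The core of the statistical approach was learning word-translation probabilities from parallel text. Below is a minimal sketch of the expectation-maximization idea behind IBM's early lexical models (in the spirit of IBM Model 1); the four-sentence corpus is invented for illustration:

```python
from collections import defaultdict

# Tiny parallel corpus (invented). The model sees only sentence pairs,
# never word alignments, and must infer t(f|e) = P(French word f
# translates English word e) via expectation-maximization.
corpus = [
    ("the house", "la maison"),
    ("the book", "le livre"),
    ("a house", "une maison"),
    ("a book", "un livre"),
]
pairs = [(e.split(), f.split()) for e, f in corpus]
f_vocab = {f for _, fs in pairs for f in fs}

# Start from uniform translation probabilities.
t = defaultdict(lambda: 1.0 / len(f_vocab))

for _ in range(30):  # EM iterations
    count = defaultdict(float)
    total = defaultdict(float)
    for es, fs in pairs:
        for f in fs:
            norm = sum(t[(f, e)] for e in es)  # E-step: soft alignment
            for e in es:
                c = t[(f, e)] / norm
                count[(f, e)] += c
                total[e] += c
    for (f, e), c in count.items():  # M-step: re-estimate t(f|e)
        t[(f, e)] = c / total[e]

# "maison" co-occurs with "house" in two sentence pairs, so the
# probability mass concentrates on it.
best = max(f_vocab, key=lambda f: t.get((f, "house"), 0.0))
print(best)  # -> maison
```

The same counting logic also exposes the weakness described above: a word never seen in the training corpus, or seen only in misleading contexts, gets a wrong or undefined probability, which is why these engines drifted badly on unfamiliar input.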

As a result, companies began to develop hybrid methods that brought rules and statistical analysis together in an effort to raise the quality of their automated translations, with varied results.

By far the most impressive technological leap in machine translation, however, came with the introduction of neural networks in the mid-2010s. Neural Machine Translation (NMT) seeks to replicate the thought process of a translator and work its way toward the result much as its human counterpart would. In doing so, NMT addresses crucial flaws of previous generations of machine translation, such as unnaturally structured, almost unreadable sentences and critical incompatibility with certain languages. Trained on vast amounts of data with little or no human guidance on decision making, the neural network follows a complex and often opaque path of forming its own associations, learning from its previous errors and constantly refining itself into a more effective translator. Promising as this sounds, NMT has not fully left its developmental stage and is still being rolled out by machine translation vendors.

A wrench in the cogs of the translating machine

The idea of complex AI neural networks performing linguistic tasks in unparalleled spans of time, across fields ranging from everyday customer experience to specialized domains that require specific terminology, has raised the question of whether human translators could become obsolete in the near future.

As it currently stands, most specialists in the field would say no. Neural networks are still an emerging technology, and their capacities, though exciting, remain a vast amount of research away from being comprehensively understood and accurately measured.

Even with the latest technology and immense processing power at hand, machine translation has fallen short in many regards. First, its effectiveness decreases significantly in highly specialized areas, where the AI fails to pick up and contextualize specific terminology and acronyms. Second, in more creative areas, language tends to confuse the AI with its inherent complexity of meaning. All in all, to produce high-quality, error-free translations, machines still need to be closely monitored and their output revised by human professionals.

But human translation versus machine translation may be a false dichotomy. As tempting as it is to imagine a world of autonomous AI, a more realistic view is to envision a future in which AI becomes an effective aid for the professional translator, collaborating seamlessly to reduce the manual workload and, with it, the cost of labor. The machine could provide a rough rendition for its human colleagues to edit and polish into a finished product.

The effectiveness of machine translation is not expected to improve to the point of autonomy anytime soon; perhaps it never will. But the good news is that it might not need to. Its true power lies not in delivering polished, correct translations on its own, but in serving as a fast and useful tool for professional translators – together, human and machine can weed out each other's inherent flaws and deliver higher quality in less time.
