Facebook announced this morning that it had completed its move to neural machine translation — a complicated way of saying that Facebook is now using convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to automatically translate content across the platform.
Google, Microsoft and Facebook have been making the move to neural machine translation for some time now, rapidly leaving old-school phrase-based statistical machine translation behind. There are a lot of reasons why neural approaches show more promise than phrase-based approaches, but the bottom line is that they produce more accurate translations.
Traditional machine translation is a fairly explicit process. Phrase-based systems break a sentence into chunks of words, translate each chunk using a memorized table of matched phrases, and then probabilistically score the candidates to settle on a final translation. You can think of this in a similar light as using the Rosetta Stone (identical phrases in multiple languages) to translate text.
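To make that lookup-and-score process concrete, here is a minimal, purely illustrative sketch in Python. The phrase table, the phrases and their probabilities are invented for the example; a real phrase-based system searches over many possible segmentations and also weighs in a language model, which is omitted here for brevity.

```python
# Toy sketch of the phrase-based idea: a memorized phrase table maps source
# chunks to candidate translations with probabilities, and the system picks
# the highest-scoring combination. All entries here are made up.
from itertools import product

PHRASE_TABLE = {
    "guten morgen": [("good morning", 0.9), ("good day", 0.1)],
    "wie geht's":   [("how are you", 0.8), ("how goes it", 0.2)],
}

def translate(sentence):
    # Split the input into known phrases (a real system searches over many
    # possible segmentations; we assume a single obvious one here).
    chunks = [c.strip() for c in sentence.split(",")]
    candidates = [PHRASE_TABLE[c] for c in chunks]

    # Score every combination of phrase translations and keep the best one.
    best, best_score = None, 0.0
    for combo in product(*candidates):
        score = 1.0
        for _, p in combo:
            score *= p
        if score > best_score:
            best = " ".join(t for t, _ in combo)
            best_score = score
    return best, best_score

print(translate("guten morgen, wie geht's"))  # ('good morning how are you', ~0.72)
```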
In contrast, neural models deal in a higher level of abstraction. The interpretation of a sentence is encoded as a multi-dimensional vector representation, which really just means the system translates based on some semblance of "context" rather than isolated phrases.
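The sketch below shows that "sentence in, vector out, translation out" shape in PyTorch. This is not Facebook's actual model (FAIR's production system is a convolutional sequence-to-sequence network); it's a tiny, untrained RNN encoder-decoder used only because it keeps the illustration short, with made-up vocabulary sizes and random token IDs.

```python
# Minimal, untrained encoder-decoder sketch: the encoder compresses the source
# sentence into a single context vector, and the decoder generates scores over
# target-language words conditioned on that vector.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HIDDEN = 100, 100, 32, 64  # toy sizes

class TinySeq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, EMB)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HIDDEN, batch_first=True)
        self.decoder = nn.GRU(EMB, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, TGT_VOCAB)

    def forward(self, src_ids, tgt_ids):
        # Encode the whole source sentence into one hidden vector -- the
        # multi-dimensional "context" the article refers to.
        _, context = self.encoder(self.src_emb(src_ids))
        # Decode target words conditioned on that context vector.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), context)
        return self.out(dec_out)  # scores over the target vocabulary

model = TinySeq2Seq()
src = torch.randint(0, SRC_VOCAB, (1, 6))  # a fake 6-token source sentence
tgt = torch.randint(0, TGT_VOCAB, (1, 5))  # fake target tokens fed to the decoder
print(model(src, tgt).shape)               # torch.Size([1, 5, 100])
```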

It’s not a perfect process, and researchers are still tinkering with how to handle long-term dependencies (i.e., retaining understanding and accuracy throughout a long text), but the approach is incredibly promising and has produced great results so far for those implementing it.
Google announced the first stage of its move to neural machine translation in September 2016, and Microsoft made a similar announcement two months later. Facebook has been working on its conversion for about a year, and it’s now fully deployed. Facebook AI Research (FAIR) published its own research on the topic back in May and open-sourced its CNN models on GitHub.
“Our problem is different than that of most of the standard places, mostly because of the type of language we see at Facebook,” Necip Fazil Ayan, engineering manager in Facebook’s language technologies group, explained to me in an interview. “We see a lot of informal language and slang acronyms. The style of language is very different.”
Facebook has seen about a 10 percent jump in translation quality, and you can read more about the improvement in FAIR’s research. The results are particularly striking for languages that lack large amounts of training data in the form of translation pairs.