How a cloud computing leader translates its software documentation faster and cheaper with “hybrid translation”
A leading cloud computing and virtualization provider needed to translate its software documentation faster.
Even its “light post-editing” quality tier wasn’t fast enough to translate the docs for 60+ products into the top 8 languages for every release.
About the buyer
The client is a leading cloud computing and virtualization provider to businesses around the world.
It grew to billions in revenue by launching innovative enterprise products and by acquisitions.
99 Fortune 100 customers
400K business customers around the world
14 languages and 25 locales
XTM with custom Google Translate
By integrating the ModelFront translation risk prediction API, they’re skipping human post-editing for 75% of segments - with no loss in final quality!
Human translation is slow. With over 60 constantly updating technology products on the market, waiting for human translation into 8 languages causes product and feature launch delays.
And all that human translation isn’t cheap. Even though most segments were “perfect MT” - the professional human translators just ended up confirming them as-is - paying per word for post-editing means paying for all of them.
Even “light post-editing” was slow and expensive. And as human translators rushed their work more, final quality suffered.
The first question was about the fit of machine translation for the content and workflow.
“How many of your machine pre-translations are ‘perfect MT’?”
It turned out that, across major target languages, professional human translators were approving more than 80% of the machine pre-translated segments untouched - without editing a single character!
Technical documentation is often dry, and it has many segments that don’t even need to be translated, like code snippets. That means even generic machine translation can do well.
The next question was about the end result.
“What is the final human quality today?”
The client ran a human evaluation of its final, human-translated output.
It turned out that the final human quality was not perfect. The main cause of errors was human translators accidentally approving bad machine translations - because they were rushing.
That human evaluation also confirmed that more than 80% of the machine translated segments are good.
The client heard about a way to automatically score each machine translation. This core technology opens up a fundamentally new approach, as revolutionary as the shift to translation memories or machine translation post-editing. The leading provider of this core technology is ModelFront.
In a hybrid translation workflow, only the potentially bad machine translations are sent for human post-editing. All the good machine translations are automatically confirmed and ready to be published - just like a translation memory match.
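The routing step above can be sketched in a few lines of Python. This is a minimal illustration, not the actual integration: `predict_risk` is a hypothetical placeholder standing in for the ModelFront risk prediction API, and the threshold value is an assumed tuning parameter.

```python
# Hybrid translation routing: auto-approve low-risk machine translations,
# send the rest to human post-editing.

RISK_THRESHOLD = 0.25  # assumed tuning value, set per client and language


def predict_risk(source: str, translation: str) -> float:
    """Hypothetical stand-in for the risk prediction API.

    Returns a risk score in [0, 1], where higher means riskier.
    The placeholder heuristic below is for illustration only - real
    risk prediction is done by a trained model behind an API.
    """
    if source.strip() == translation.strip():
        return 0.05  # e.g. code snippets often pass through unchanged
    return 0.5


def route_segments(segments):
    """Split (source, mt) pairs into auto-approved and post-edit queues."""
    approved, post_edit = [], []
    for source, mt in segments:
        if predict_risk(source, mt) < RISK_THRESHOLD:
            approved.append((source, mt))   # confirmed like a TM match
        else:
            post_edit.append((source, mt))  # sent to a human post-editor
    return approved, post_edit
```

In a real deployment the risk scores come from the API call, and the threshold is calibrated so that the auto-approved portion meets the same quality bar as post-edited output.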
The client ordered a custom machine translation risk prediction model from ModelFront. The model was trained on the client’s human translation data and deployed via a secure, scalable cloud API. One model supports all language pairs.
The API can instantly predict if a machine translation is good or bad - with the same accuracy as humans.
But for this to work, the predictions needed to be accurate, so that final quality - the share of good translations - stayed the same.
The custom model was accuracy tested on the client data - using the client’s own human evaluation - to show that there would be no change in final quality.
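One way such an accuracy test can be framed (a sketch on made-up data, not the client’s actual evaluation protocol): for each evaluated segment, pair the model’s verdict with the human judgment, then check the quality of the portion the model would auto-approve.

```python
def auto_approve_quality(records):
    """records: list of (model_says_good, human_says_good) booleans.

    Returns the fraction of auto-approved segments that humans also
    judged good - i.e. the final quality of the automated portion.
    """
    approved = [human for model, human in records if model]
    if not approved:
        return None
    return sum(approved) / len(approved)


# Toy evaluation data, for illustration only.
records = [
    (True, True), (True, True), (True, True), (True, False),
    (False, False), (False, True),
]
```

If this fraction matches the measured quality of the existing post-editing workflow, auto-approving those segments changes nothing for the reader.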
XTM is known as an extensible translation management system (TMS) - it’s relatively easy to integrate third-party APIs and to update the status of each segment. The client’s developers did the integration themselves - without any support from XTM.
The hybrid translation workflow is much faster and much cheaper. Importantly, it is customized and set up so that the final quality is the same as with the light post-editing workflow it is replacing.
4x faster and cheaper translation
The same final quality as light human post-editing