Archive for the ‘Translation and localisation’ Category

STAR at the MT Summit 2025: trends, talking points and innovations

Posted on: July 8th, 2025 by Frank Wöhrle

This year’s MT Summit was held in Geneva, Switzerland, and featured a diverse programme of tutorials, workshops and inspiring presentations on the topics of machine translation (MT) and large language models (LLMs).

As a platinum sponsor of the event, STAR AG was on site together with three experts from the company’s Development, Support and Sales teams. STAR’s very own Language Technology Consultant, Julian Hamm, also attended the week-long conference to represent the company and took away new ideas and food for thought from research and industry.

While outside the temperatures were soaring, inside the very hottest trends were being presented – by technology providers and representatives from notable companies and institutions in a series of lectures and poster sessions. The dedicated organisation team from the University of Geneva put together a varied programme of events, while also setting the scene for valuable discussions.

Human in the cockpit – man and machine, a skilful combination

Despite staggering progress in the field of generative AI, this MT Summit made one thing clear: it simply doesn’t work without people!

This general philosophy was also key to our sponsored talk, entitled “Human in the Cockpit – How GenAI is shaping the localisation industry and what it means for technology and business strategies”. In their presentation, Diana Ballard and Julian Hamm demonstrated the influence that generative AI is exerting on the localisation industry and highlighted the use cases where it adds particular value.

As a longstanding technology and translation partner, STAR understands the precise requirements of users and continuously optimises its own tools and solutions to make them future-proof by means of integrating smart features.

Visitors to the STAR stand were able to get a hands-on experience through live demonstrations, alongside opportunities to speak to our experts about various aspects of AI in practice.
In addition to the integration of big-name LLM systems such as ChatGPT, the team demonstrated work on smaller local models, including TermFusion, a project optimised for terminology work that does not call for a dedicated GPU and can therefore be operated with very few resources. Local models will be used to facilitate term extraction from bilingual data records, for instance, or for the intelligent correction of terminology specifications. Using this approach as a basis, other models are currently in development to make working in the translation tool even more efficient.
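To make the idea more concrete, here is a minimal sketch of how a small, locally run language model might be prompted to extract term candidates from a single bilingual segment pair. It is illustrative only: the query_local_model helper, the prompt wording and the JSON output format are assumptions made for the example, not a description of TermFusion itself.

# Illustrative sketch: bilingual term-candidate extraction with a small local
# language model. "query_local_model" is a hypothetical stand-in for whatever
# local inference runtime is actually used (it returns a canned answer here
# so that the example runs end to end).

import json

def query_local_model(prompt: str) -> str:
    # Replace this stub with a call to your local model runtime.
    return '[{"source_term": "locking screw", "target_term": "Sicherungsschraube"}]'

def extract_term_candidates(source: str, target: str) -> list:
    """Ask the model for aligned term pairs found in one bilingual segment."""
    prompt = (
        "Extract domain-specific term pairs from this bilingual segment.\n"
        f"English: {source}\n"
        f"German: {target}\n"
        'Answer as a JSON list of objects with "source_term" and "target_term".'
    )
    answer = query_local_model(prompt)
    try:
        return json.loads(answer)
    except json.JSONDecodeError:
        return []  # skip segments where the model output is not valid JSON

print(extract_term_candidates(
    "Tighten the locking screw before operating the valve.",
    "Ziehen Sie die Sicherungsschraube an, bevor Sie das Ventil betätigen.",
))

In practice, the extracted candidates would then be reviewed by a terminologist before being added to the term database.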

Artificial intelligence in localisation: it’s here to stay!

Current statistics on the use of AI in companies confirm that these developments are not just a fringe phenomenon. Customer contact, marketing and communication in particular are promising areas of application that are already being served intensively.

Survey: Application of generative artificial intelligence in companies in 2025
Published by the Statista Research Department, 20th May 2025

 

Even though the use of AI in localisation still varies widely, one thing is plain to see: there is no one-size-fits-all solution. After all, only those who are familiar with the use case and can clearly define its requirements will know how the technology can be used wisely and sustainably.

After five days of in-depth discussions with representatives from research and industry, we are taking seven important insights away with us:

  • Neural machine translation (NMT) remains the most widely used language technology in localisation processes. In parallel, LLMs are increasingly being used to optimise NMT output, and in the research sector NMT is steadily being displaced by LLMs.
     
  • Systems and workflows are increasingly geared towards seamless interplay between different translation resources. Translation memories (TMs) and terminology databases provide important translation-relevant information and can be scaled up or down to produce better and more consistent translations. Another method establishing itself is retrieval-augmented generation (RAG), whereby smaller databases can be used as a reference point for text creation or translation (see the sketch after this list).
     
  • In certain use cases, generic AI models outperform open-source models. Customisation in the form of translation rules or automatic terminology adjustments is making its way into many commercial solutions. In the medium to long term, this approach looks set to overtake the earlier method of dedicated training for NMT systems.
     
  • Growing translation volumes, alongside the general squeeze on prices, call for the use of intelligent analysis tools to evaluate the added value of using AI and automating processes over the long term. The integration of models for MT quality estimation and the evaluation of translations using suitable metrics, in some cases assisted by an LLM, are particularly relevant at the moment.
     
  • Not all tasks necessarily have to be performed by an LLM, however. There is still a place for conventional rule-based approaches, such as the use of regular expressions in quality assurance, and in some instances, these can actually prove more efficient than LLM-based mechanisms.
     
  • LLMs are already capable of analysing texts at document level and identifying distant connections. In CAT tools, however, translation is almost always performed at segment level. Does the technology need rethinking here? It is evident that content creation systems and translation resources are increasingly being merged, and this calls for new and innovative approaches to handling translation resources and AI systems.
     
  • More and more content is being created or translated by generative AI. The impact of this is felt in our culture, language and social life, for example through heightened media consumption via social media platforms or the gradual suppression of minority languages. Researchers are currently studying the effects of generative AI on our communication behaviour.
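The retrieval-augmented generation approach mentioned above can be sketched in a few lines: for a new source sentence, retrieve the most similar translation memory entries and hand them to the language model as reference material. The tiny in-memory TM, the difflib-based similarity measure and the prompt layout are simplifying assumptions made for illustration, not a description of any particular product.

# Minimal RAG-style retrieval over a translation memory (illustrative only).
# Real systems use fuzzy matching or embeddings; difflib keeps the sketch
# dependency-free.

from difflib import SequenceMatcher

translation_memory = [
    ("Press the start button.", "Drücken Sie die Starttaste."),
    ("Release the locking screw.", "Lösen Sie die Sicherungsschraube."),
    ("Check the oil level weekly.", "Prüfen Sie wöchentlich den Ölstand."),
]

def retrieve_matches(source, k=2):
    """Return the k TM entries whose source side is most similar to `source`."""
    return sorted(
        translation_memory,
        key=lambda pair: SequenceMatcher(None, source.lower(), pair[0].lower()).ratio(),
        reverse=True,
    )[:k]

def build_prompt(source):
    """Assemble an LLM prompt that grounds the translation in TM matches."""
    examples = "\n".join(f"EN: {s}\nDE: {t}" for s, t in retrieve_matches(source))
    return (
        "Translate the sentence into German, staying consistent with these "
        f"reference translations:\n{examples}\n\nEN: {source}\nDE:"
    )

print(build_prompt("Press the stop button."))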

 

Did you miss the MT Summit 2025 and want to find out more about the latest trends?

Watch our webinar recordings now and discover how you can improve translation, terminology and content creation over the long term.

Transit NXT: The underestimated CAT tool that has the professionals convinced

Posted on: July 2nd, 2025 by Frank Wöhrle

Anyone who regularly works with CAT tools (computer-aided translation software) probably thinks of Trados Studio, memoQ or Across first. One name is often – and unfairly – overlooked: Transit NXT, the underestimated CAT tool from the STAR Group. It is a genuine powerhouse for anyone who wants their work to be structured, consistent and terminology-focused.

What actually is Transit NXT?

Transit is a professional CAT tool that has been on the market since the 1990s. It combines classic segmentation with a project-orientated working method – incorporating translation memory, terminology management, preview options, quality checks and various functions designed specifically for technical documentation.
The extensive and growing portfolio of AI features, which is demonstrated in a series of short videos on our YouTube channel, is not to be missed.

5 reasons why so many professionals have put their trust in Transit NXT for years

1. Up-to-date and contextualised terminology

Transit works seamlessly with TermStar. Live terminology entries are displayed to translators within the editor itself – including the definition, context and source of the term. This extensive integration is a clear benefit over those tools where terminology is relegated to the sidelines.

2. Project structure, not file chaos

Unlike other CAT tools, Transit thinks in terms of projects with a clear-cut file structure. This takes the hard work out of managing big or lengthy translation projects – especially when it comes to regular updates or complex workflows.

3. Need technical formats? No problem with Transit.

Whether DITA, XML, FrameMaker, InDesign or XLIFF – Transit leads the way when it comes to the variety of natively supported file formats. Many other tools need extra modules or conversions to handle these files.

4. Local installation – full data sovereignty

Transit NXT works entirely locally – without any cloud obligations. For companies that have high data protection requirements, this is a crucial advantage over cloud-based solutions.

5. Quality assurance at the highest level

With automated checks, an in-context preview function and variant check, Transit NXT offers precise quality management for an impressive level of efficiency that is especially beneficial for those handling technical content.

 

The Transit software user interface

Who is Transit most suited to?

  • Technical translators working with complex formats.
  • Public authorities, industrial companies and service providers who need to keep sensitive data locally.
  • Freelancers who attach great importance to a reliably maintained terminology.
  • Translation agencies that want an efficient tool for managing large structured projects.

Sound good?

Transit NXT is no entry-level tool – but that is precisely what makes it a great option for anyone who values structure, terminology and format variety.

If you want to see for yourself how Transit works, simply request a non-binding trial version now.

Aspect in Slavic languages – a small difference with a big impact

Posted on: June 24th, 2025 by Frank Wöhrle

If you have ever had a text translated into Polish, translated it yourself or have had anything else to do with Slavic languages, you may have come across a linguistic phenomenon that we are unfamiliar with in English – aspect. In Polish and other Slavic languages, a verb not only states what happens, but also whether the action is already completed or is still ongoing.
This difference is crucial when it comes to translating – because it can determine whether a sentence achieves the intended effect or is misleading.

Imperfective aspect – when the action is ongoing

The “imperfective aspect” describes an action that is either happening right now, is regularly repeated or is of a general, open-ended nature. It doesn’t matter whether the action has already been completed; the focus is on the process, duration or repetition. This is often a challenge because, in other languages, this nuance is primarily expressed by tenses or by additional adverbs such as “regularly”, “right now”, “usually” or similar.

Example in Polish:

  • czytać (to read – imperfective aspect)
    • Czytałem książkę. (I read/have read a book. /I was reading a book. – The action was in progress or repeated; it is not mentioned whether you are already finished or the end of the book was actually reached. It could also mean “I only started reading but didn’t finish the book”.)
    • Codziennie czytam gazety. (I read newspapers every day. – This is a habit; something that happens repeatedly, irrespective of whether the action is fully completed each time.)

The imperfective aspect may also express unfinished or failed actions, where the focus is on the attempt.

Perfective aspect – when the action is completed

In contrast, the perfective aspect signals that an action was completed and a result has been reached. In this case, the focus is on the completion of an action and an objective or a state being achieved. It’s a one-off, completed action that has reached an end.

Example in Polish:

  • przeczytać (to read through – perfective aspect)
    • Przeczytałem książkę. (I have read through/finished reading the book. – The action has been finished, the end of the book has been reached and there is an outcome.)

In narratives, this means that it’s clear which events have already finished and the story moves forwards. In instructions, reports and legal texts, this aspect can change the tone, focus and even the overall message.

One verb – two faces: Paired aspects

Almost every verb in Polish and other Slavic languages leads a kind of “double life” because it exists in imperfective and perfective forms that each express a certain course of action.
Paired aspects are not formed according to a fixed rule; they are instead based on different morphological units. This often requires people to learn pairs, rather than relying on rigid rules. For each verb in Slavic languages, such as Polish, you need to learn not just one, but two pieces of vocabulary.

Common methods for forming paired aspects include:

  • Prefixation: Adding a prefix to the imperfective stem to express perfectivity. This is one of the most common methods, e.g. robić (to do – imperfective) → zrobić (to do/get done – perfective)
    • Robiłem obiad. (I was cooking lunch. – The action was ongoing; I was in the process of preparing the food.)
    • Zrobiłem obiad. (I have prepared/cooked lunch. – The action is finished, lunch is ready and can be served.)

  • Suffixation: Adding a suffix or modifying the stem. This can often bring subtler nuances to the meaning.
    Example: zamykać (to close, imperfective) → zamknąć (to close, perfective)

  • Changes to the stem: Changing the vowels or consonants in the stem, often accompanied by a prefix.
    Example: brać (to take, imperfective) → wziąć (to take, perfective)
    → Complete change of stem: bra- vs. wzi-

  • Suppletive forms: In some cases, completely different stems are used for the imperfective and perfective forms.
    Example: mówić (to say, imperfective) → powiedzieć (to say, perfective)
    → Different stems: mówi- vs. powiedz-

Why aspect is crucial for translations

If you’re translating into Polish, you need to know more than just the right word. You need to understand the perspective of the action – is it currently ongoing, is it completed or is it repeated?

This linguistic phenomenon enables the author of a text to emphasise exactly the part of the action that is to be communicated – whether it be the process itself or the result achieved. This means that Slavic languages are often very precise in what they can express. However, they require non-native speakers to rethink their perception of actions and time when translating and interpreting.

What this means for you

When working with Slavic languages – whether it be for international locations, customers or target markets – aspect is a good example of how complex language is. It also demonstrates how machine translation is often not enough to capture the right tone.

Good translation means not only translating “word for word”, but also conveying the right focus, considering the course of the action and adopting a change in perspective.

Conclusion

Aspect in Polish (and other Slavic languages) is much more than just grammar – it’s a key way of creating meaning. Without correctly applying the aspect, sentences may be misleading or even falsely interpreted.

As a translation agency, it therefore goes without saying that we need to not only be familiar with these linguistic subtleties, but also to actively incorporate them into our work – so that your texts are understood as they are intended in the target country.

Would you like to know whether your Polish communication is finding the right tone?
We’re happy to assist you.

The Korean language – navigating layers of politeness

Posted on: April 24th, 2025 by Frank Wöhrle

As a professional language service provider, we encounter the challenges and subtleties of a wide variety of languages each and every day. One language that has been attracting more and more attention in recent years due to economic, cultural and political developments is Korean. Whether through K-pop, South Korean technology companies, or trade relations, interest in the Korean language is growing rapidly. But what makes Korean so special, especially when compared to English?

One of the most striking and complex features of the Korean language is the system of politeness and formality levels. This is where Korean differs fundamentally from English.

In Korean, the social standing of the person you are speaking to must be taken into account at all times. The relevant factors include:

  • Age
  • Professional position
  • Familiarity/closeness with the person
  • Social hierarchy

 

The appropriate politeness level must be selected for each situation. There are several levels, but the most common are:

  1. Informal (low register) (반말 / banmal) – used with friends, family and those with whom you have a close relationship, as well as with children.
  2. Polite (neutral) (존댓말 / jondaetmal) – the standard level of politeness used in most professional and everyday contexts.
  3. Formal (high register) (격식체 / gyeoksikche) – particularly polite, often used in presentations, and when communicating with customers or superiors.

While modern English has only one word for “you”, whether speaking to one person or a group of people, from commoners to kings, the Korean language is far more complex! The status and demographic of the person you are addressing affect not only the personal pronoun, but the entire sentence structure, vocabulary and verb conjugations, including suffix formation.

For example: The verb “to eat” in different levels of politeness:

  • Informal: 먹어 (meogeo)
  • Polite: 먹어요 (meogeoyo)
  • Formal: 먹습니다 (meokseumnida)
  • Honorific (e.g. showing respect towards elders): 드십니다 (deusimnida)

 

For companies communicating with Korean business partners, choosing the correct level of politeness is not only a linguistic issue, but also a cultural non-negotiable. An incorrect form of address can instantly come across as impolite or disrespectful.
There are also important differences in non-verbal communication: While people in the Anglosphere greet each other with a handshake or a hug, in Korea, the bow is used as a sign of respect.
So, these distinct levels of politeness are not to be taken lightly and once again clearly demonstrate that language is often a mirror of society.

Alphabet and writing system: “Hangul” – simple and ingenious

One of the most fundamental differences between English and Korean is the alphabet. While English is based on the Latin alphabet, Korean uses the so-called “Hangul” or “Hangeul” (한글) alphabet. This writing system was introduced in the 15th century by King Sejong the Great in order to facilitate the general population’s access to the written language, with great success.

Hangul consists of 14 consonants and 10 vowels, which are combined into syllable blocks. This results in a system that is both easy to learn and extremely effective. In contrast to English spelling, which often appears haphazard (compare “cough”, “through” and “bough”, for example), Hangul is largely phonetic: In most cases, the words are pronounced exactly as they are written.
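This compositional structure is so regular that it is even mirrored in Unicode, where every modern Hangul syllable is encoded by a simple formula over its lead consonant, vowel and optional final consonant. (Unicode works with the extended sets of 19 lead consonants, 21 vowels and 27 finals that are built from the 14 basic consonants and 10 basic vowels.) A short, purely illustrative sketch using that standard formula:

# Decompose a precomposed Hangul syllable into its jamo using the standard
# Unicode arithmetic: syllables start at U+AC00 and are ordered by
# (lead consonant, vowel, optional final consonant).

LEADS = [chr(0x1100 + i) for i in range(19)]          # ᄀ, ᄁ, ᄂ, ...
VOWELS = [chr(0x1161 + i) for i in range(21)]         # ᅡ, ᅢ, ᅣ, ...
TAILS = [""] + [chr(0x11A8 + i) for i in range(27)]   # none, ᆨ, ᆩ, ...

def decompose(syllable):
    index = ord(syllable) - 0xAC00
    lead, rest = divmod(index, 21 * 28)
    vowel, tail = divmod(rest, 28)
    return LEADS[lead], VOWELS[vowel], TAILS[tail]

print(decompose("먹"))  # stem of 먹다 "to eat": m + eo + k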

For our work as a language service provider, and also for the many people in Europe who are learning Korean, this means that deciphering Korean characters is not a major hurdle compared to many other non-Latin writing systems. Nevertheless, the correct translation and interpretation depends on the context – especially when it comes to the politeness levels.

A tricky number system – “Hangul” vs. “Hanja”

When Hangul was declared Korea’s official writing system, it replaced the previously used script, Hanja. Hanja uses Chinese characters and their pronunciation to express the Korean language. It was mainly used in academic circles, and Sino-Korean characters can still be found in official documents, such as those used to pass laws. At that time, Hangul was mainly used by the lower classes and by women, who often did not enjoy the education of the upper classes and intellectuals, who favoured Hanja. When Korea was annexed by the Japanese Empire (1910–1945), Hangul temporarily fell out of favour, with the Japanese imposing their own language and culture.

As a result, Japanese influences can be found alongside Chinese in the Korean language today, and Hanja continues to be an important building block. There are two number systems in Korea: the “pure” (native) Korean number system and the Sino-Korean number system. For example, when taking a photo of someone, you would count with pure Korean numbers: “hana, dul, set!”. To arrange an exact time for a meeting, you would use Sino-Korean numbers for the minutes, but give the hours in pure Korean: 12:30 would be “yeol-du” (12; pure Korean) “shi” (hour) “sam-ship” (30; Sino-Korean) “bun” (minute). So Sino-Korean elements are still an integral part of the language. And it gets even better: if you want to order one bowl of “bulgogi”, for example, a classic Korean meat dish, you use pure Korean numbers. When ordering two portions of “tteokbokki” – a popular Korean snack made from rice cakes – you switch back to Sino-Korean numbers. This can get pretty confusing!

Another point of focus – sentence structure, grammar & spelling

A fundamental difference from English lies in the sentence structure. While English usually follows a subject-verb-object pattern (e.g. “I see the dog”), Korean typically uses the subject-object-verb structure (e.g. “I the dog see” – 나는 개를 본다).

In addition, there are no articles in Korean, which means a lot of information in Korean is implicit and depends on the context. For translators, this requires a thorough understanding of both languages in order to achieve coherent results, both culturally and in terms of content.

In principle, male and female pronouns do exist, but apart from a few exceptions, such as in antiquated poetry, they are rarely used. Consequently, you must always pay close attention to the subject in Korean. Once a name is mentioned, it can be assumed that the person will also be the subject of the sentences that follow.

As a logical consequence of the fact that pronouns are not specifically labelled as masculine or feminine, unlike in English, there is no gender debate, at least not a linguistic one. Instead, context and social factors are used to signal gender and other social roles.

Furthermore, Korean verbs and nouns do not have a grammatical number. Whether something is singular or plural is simply not considered that important in Korean; the plural is only used explicitly if this is important in a given situation and should be emphasised.

It is hard to imagine the English language without agreement between the nouns, pronouns and verbs to indicate number!

Vocabulary and loan words – old and new combined

The Korean vocabulary combines native terms, Sino-Korean words (borrowed from Chinese) and modern loan words, mainly from English. Just like English, words in Korean have varied origins.

There is one difference: Many loan words are phonetically adapted in Korean – for example, “computer” becomes 컴퓨터 (keompyuteo). And everyday terms such as 커피 (keopi) for “coffee” or 핸드폰 (haendeupon) for “mobile phone” (“hand-phone”), are also common.

For professional translations, it is essential to know the origin and usage of a term. Especially in technical, legal or medical texts, seemingly small differences can instantly make a big difference in meaning.

Context is everything – subject and object are overrated

Another difference to English is the importance of context-based communication. In Korean sentences, the subject or object is often simply omitted if this is clear from the context.

For example:

  • “I’m eating now” – 이제 먹어요 (ije meogeoyo), literally: “Now eating.”
  • “Do you like coffee?” – 커피 좋아해요? (keopi joahaeyo?), literally: “Coffee like?”

 

In English, such constructions would immediately be perceived as incomplete. In Korean, on the other hand, they are considered completely natural. This type of communication requires a keen sense of the cultural and situational context when translating.

In summary: Korean – more than just a language

Korean is a deeply expressive language with a distinctive writing system. Anyone who learns the language, which is spoken by more than 81 million native speakers, will also gain a deep insight into the culture, history and traditions of the country. The differences to English range from grammar and sentence structure to its distinct layers of politeness.
For language service providers and companies with business relations to Korea, this means that successful translations and language training not only require in-depth linguistic knowledge, but also intercultural expertise.

We are happy to support you in your professional context – simply get in touch!

Translate more efficiently with AI – but how?

Posted on: February 26th, 2025 by Frank Wöhrle

AI: What started as a buzzword, and then became an established term in everyday language, is now a basic requirement for many applications and processes. And this technology is not stopping at the language industry either. Since the launch of ChatGPT, we have known that translation can also be completely interactive. Large language models, also known as LLMs, are now flooding the market in chatbot form. It feels as though a new model pops up every week, announcing its intention to outdo its competitors in terms of efficiency, quality and reliability. Neural machine translation (NMT) doesn’t seem that old – and yet we are already discussing when this technology will disappear from the market and be replaced by generative AI.

The key question is: I want to translate more efficiently with AI – but how?

AI for targeted optimisation of translation quality

Even though the technology has made significant progress over the last five years, the results of the commonly used and established NMT systems are not always good enough. This can have a variety of causes:

  • The desired language combination has not been trained with sufficient material or goes via a pivot language (often English). This can lead to structural problems or errors in meaning.
  • The MT system does not recognise specialist or customer-specific terminology.
  • The MT system was used for content in which style is extremely important or the translation needs to be targeted towards a specific target group.


Manuals, marketing texts or content with high customer visibility therefore often do not achieve the desired levels of quality through machine translation alone. Language professionals then optimise the machine-generated texts as part of a post-editing process. Machine translations are carefully checked, compared with the source text and corrected if necessary.

As a central translation platform, the CAT tool enables users to work efficiently and offers targeted support for quality assurance thanks to a range of automated features. But where exactly is AI being used here? LLMs such as ChatGPT from OpenAI are perfectly capable of producing translations that, like DeepL or Google Translate, provide a good starting point for further processing, depending on the intended use.

A significant leap in quality can, however, be achieved by improving the translation requests through the targeted use of prompts and the addition of reference files. In addition to well thought-out prompt engineering, validated translation resources in the form of translation memories and terminology databases are a fundamental prerequisite for this.
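As a rough illustration of what such a request might look like, the sketch below injects validated terminology and a translation memory match into a single prompt sent to a chat model. The glossary content, the TM match, the prompt wording and the model name are assumptions made for the example; any comparable chat-completion API could be used in the same way.

# Illustrative prompt design: validated terminology and a TM match are passed
# along with the source text so the model can respect them. Assumes the
# "openai" Python package and an API key in the OPENAI_API_KEY variable.

from openai import OpenAI

client = OpenAI()

glossary = {"locking screw": "Sicherungsschraube", "valve": "Ventil"}
tm_match = ("Release the locking screw.", "Lösen Sie die Sicherungsschraube.")
source_text = "Tighten the locking screw before operating the valve."

term_block = "\n".join(f"{en} = {de}" for en, de in glossary.items())
prompt = (
    "Translate the text into German.\n"
    f"Use this terminology:\n{term_block}\n"
    f"Reference translation:\nEN: {tm_match[0]}\nDE: {tm_match[1]}\n\n"
    f"Text:\n{source_text}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever model is available to you
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)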

 

AI for better translation resources

As with any new technology, a question often arises: What can AI do for me?

However, if you want to integrate AI into your language processes in the long term, you should first ask yourself: What can I do for AI?

Well-maintained translation resources make a significant contribution to improving the results of your AI solution. Take the topic of terminology, for example. If you use a generic system such as DeepL for your translation processes, you will receive translations that do not match your company terminology – unless you integrate a glossary.
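For illustration, this is roughly how a glossary can be attached when using the official deepl Python client; the glossary entries, texts and language pair are placeholders, and a valid API key is assumed.

# Illustrative glossary integration with the official "deepl" Python client.
# The glossary entries and texts are placeholders; a valid API key is assumed.

import deepl

translator = deepl.Translator("YOUR_DEEPL_API_KEY")

glossary = translator.create_glossary(
    "Company terminology (EN-DE)",
    source_lang="EN",
    target_lang="DE",
    entries={"locking screw": "Sicherungsschraube", "valve": "Ventil"},
)

result = translator.translate_text(
    "Tighten the locking screw before operating the valve.",
    source_lang="EN",   # the source language must be given when a glossary is used
    target_lang="DE",
    glossary=glossary,
)
print(result.text)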

Are you only at the stage of establishing your terminology but don’t want to miss out on the benefits of MT? Use language models to extract potential terminology from your monolingual or multilingual documents. You can also use AI to check your translation memory databases, for example to find inconsistent translations or to automate clean-up or correction across large data sets. Use these resources consistently to increase the translation quality of your language model or improve the output of NMT systems.
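One of the clean-up checks mentioned here – finding source segments that have been translated in more than one way – does not even need AI and can be sketched in a few lines; the mini translation memory below is, of course, just placeholder data.

# Illustrative consistency check: list source segments that occur in the
# translation memory with more than one distinct target translation.

from collections import defaultdict

tm_entries = [
    ("Close the valve.", "Schließen Sie das Ventil."),
    ("Close the valve.", "Ventil schließen."),
    ("Press the start button.", "Drücken Sie die Starttaste."),
]

targets_by_source = defaultdict(set)
for source, target in tm_entries:
    targets_by_source[source.strip()].add(target.strip())

for source, targets in sorted(targets_by_source.items()):
    if len(targets) > 1:
        print(f"Inconsistent translations for: {source}")
        for t in sorted(targets):
            print(f"  - {t}")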

AI as co-pilot? Reach your destination safely with the new STAR webinar series

As you can see, we are extremely enthusiastic about the topic of AI – and we don’t claim to be reinventing the wheel. However, the technology offers a lot of potential for optimisation if it is used efficiently and sustainably.

Of course, we would like to share our enthusiasm with you and invite you to our webinar series “AI as co-pilot: Forging new paths to smart language processes” starting in March. All the webinars will be held in German only.


How exactly does generative AI actually work? What advantage does it offer for the translation? How can I use language models to create product texts? Can I train my own AI? And what actually happens to my data?

Our Language Technology Consultant Julian Hamm answers these and many other questions and discusses the many different uses of generative AI, including translation, terminology management, content creation and content delivery. You can expect the following content in the first group of topics:

  • Was ist generative KI, und wofür kann ich sie einsetzen? (What is generative AI and what can I use it for?)
  • Wie kann KI bei der Übersetzung unterstützen? (How can AI support the translation process?)
  • Wie kann ich KI für die Terminologiearbeit einsetzen? (How can I use AI for terminology work?)
  • Welche Vorteile bietet KI für die Content-Erstellung? (What advantages does AI offer for content creation?)


Further information on the events and the registration form can be found here.

We look forward to you joining us!

2024 – the year of AI: Important developments and lessons learned

Posted on: December 16th, 2024 by Frank Wöhrle

Another year is drawing to a close, and we can hardly believe how fast the time has flown by. Now is a good opportunity to take a look back at all of the important developments that 2024 – the year of AI – has brought us, and give you an insight into what next year has in store for us.

AI has been a hot topic ever since OpenAI stunned the whole world with ChatGPT. Companies are increasingly insisting on using AI wherever this seems possible. From many discussions and exciting customer projects over the course of the year, we have identified key lessons learned and trends in this field.

Five key trends relating to the use of AI in the context of translation

  • Expectations for generative AI remain very high.
    However, the purposes for which people want to use it differ greatly, especially in language processes: they range from the fanciful idea of a wonder machine that produces, translates and optimises texts to perfection, through to a clever tool that provides targeted assistance with specific tasks that are still largely performed manually today. The increasing integration of large language models into translation processes makes exactly the latter possible, providing targeted and modular support – from the bilingual extraction of terminology and the post-editing of machine-translated content, through to assessing the quality of multilingual documents.
  • If you want to use the technology efficiently and sustainably, you also need high-quality, well-structured language resources in order to supply the language models with relevant information.
    This means that years of working with translation memory and terminology management systems now offers double the benefits. If this data is prepared in a structured and sustainable manner, language models can use it to optimise machine-translated content, for instance in the form of retrieval-augmented generation (RAG).
  • The topic of data protection continues to generate extreme uncertainty despite the adoption of the EU AI Act in May 2024.
    Many companies are looking for ways to use AI in the most secure possible way in order to protect their precious data against misuse.
  • A lot of businesses are experiencing issues with the scalability of AI solutions, whether this concerns the IT infrastructure, financial resources or further training of staff.
  • Human in the cockpit. People will increasingly return to the centre of the AI-based translation workflow.
    While translators were previously responsible for, among other tasks, the post-editing of predefined machine-translated content as part of the human-in-the-loop concept, the new human-in-the-cockpit principle sees translators using modern language technologies – even interactively – to exert their own influence on the output and to design efficient processes.
    The technological transformation is also resulting in changing requirements for current and future language experts. The relevant universities have also recognised these developments and are revising the degrees and courses they offer accordingly. For instance, prompt engineering, language technologies and information management are important focal topics that will feature more often on the curriculum in future.

Are you interested in this subject? Then don’t miss our STAR webinar, which is scheduled for early 2025. There, we will be sharing information about current trends and our latest technological developments.

Certified processes: The foundations for AI integration you can trust

Posted on: November 25th, 2024 by Frank Wöhrle

This year, STAR Deutschland GmbH once again welcomed its independent certification partner LinquaCert to its Sindelfingen office for the ISO 18587:2017 surveillance audit (“Post-editing of machine translation output”) shortly before the annual tekom conference. We are pleased to confirm our successful recertification in line with this standard, which relates explicitly to quality assurance in the production of machine translations.

Spotlight on terminology integration and automation in quality assurance when incorporating AI into translation processes

As well as active discussions on qualifications, training measures and quality measures, there was once again a real need to discuss the integration of generative AI into translation processes.
The spotlight was shone primarily on terminology integration and automation in quality assurance – measures that provide more tailored support to the linguists delivering MT post-editing projects and are designed to reduce processing effort.
As a longstanding technology partner and language service provider, we embrace current trends and give our translators the expertise they need to be able to work efficiently and in a future-oriented way.

Missed tekom, but want to know more about AI? Our STAR webinar exploring “Augmented Translation” offers a quick insight into the latest developments in language technologies. Register here to be sent the webinar recording: https://www.star-deutschland.net/en/language-management-consulting/machine-translation-and-post-editing/star-webinar-augmented-translation/ 

Want to get the most out of modern language technologies and are committed to delivering high-quality translations? We have the right service for you: https://www.star-deutschland.net/en/language-management-consulting/machine-translation-and-post-editing/

AI for voices, voice recordings and voice-over translations

Posted on: October 28th, 2024 by Frank Wöhrle

Can AI help to create high-quality content in any language while adhering to corporate language and specific rules?

Today we’re interviewing David Heider, the owner of a STAR partner sound studio in the Czech Republic, to shed light on this fascinating question – can artificial intelligence be effectively used in the area of video and audio productions?

STAR: David, when did you start offering professional audio productions?

Our recording studio has been providing its services since 1999 and we’ve specialised in the spoken word. We cover two different areas. Firstly, the “corporate world”, with recordings of material for internal purposes, such as e-learning. This also includes localisation of internal company systems and software. This can be either training material or various web-based platforms with voice output or automatic operators on your phone, sat nav, etc. – in short, various applications where we often have to cut the sound word by word or even syllable by syllable and where everything is then put together by a system into sentences and whole messages.

The second area is more artistic in nature and covers advertising and promotional videos, among other content. This area differs from the “corporate world” previously mentioned in that it’s not just about conveying content, but rather about a form that appeals to listeners and attracts them. So we need professionals who can express themselves artistically and use their voice skilfully. To summarise, you might say that our first area of activity is to provide information. This is about content where users, to put it plainly, don’t have much choice, as they generally have to listen. In contrast, artistic productions aim to seduce the “audience” in some way, not only through their content but also through their form.

The recording studio

STAR: This inevitably leads me on to the next question – can AI be used in your work?

AI is an amazing tool and offers numerous advantages. For example, we don’t need to contact a voice-over artist and make an appointment; the AI is always available.

STAR: Are you already using AI?

Yes. We use AI in some cases for preparing and producing audio material. But there’s also a downside. In most languages, the AI voice seems artificial or boring, especially after listening to it for a long time.

STAR: Can’t AI intonate?

Intonation in itself isn’t usually a problem, but the AI delivers it with unnatural inflections, which is really inconvenient. Often it doesn’t emphasise the core message, which a person would normally express through a particular stress. And when you listen to an AI recording, you get this unnatural inflection on repeat, which starts to get annoying after a while, because you can’t shake the feeling that it’s actually just “copy-paste”. In comparison, I find English much better than other languages: there, the AI can work with variable intonation and make the voice sound very natural and lively. But in all the other languages, we still have a long way to go before we reach that point. At the moment, they still sound very “plastic”.

STAR: Are there any other disadvantages to AI voices?

There’s a second point that I think is more serious, especially with e-learning. As with any AI, the quality of the output depends on the quality of the input. You also always have to prepare the content correctly for AI voices. Perhaps the AI doesn’t read all the abbreviations correctly, e.g. in the same way as you would read them in a specific corporate culture. Every company has its own corporate jargon and the AI won’t take this into account. This also applies to different product names, place names and foreign words. For example, if French names appear in an English text, should they be read in French or in English?

STAR: How can this be explained?

Only the employees at a company are really familiar with the corporate language and know why a certain linguistic rule can sometimes be ignored for internal company content or marketing reasons. And the listeners are insiders, i.e. they usually know what the content’s about. Companies also have to be consistent, otherwise it will sound strange to their ears. Sometimes, of course, a term or abbreviation can be misunderstood, either phonetically or for names, but that’s just the way it’s done at the company and we should respect it.

STAR: What other challenges are there?

Abbreviations and other specific features are a major challenge for AI. They usually need a lot of adjustments and corrections, which can result in the final price being similar to that of a traditional voice-over. We need to create pronunciation tips or edit the text so that it’s easy for the AI to read. This is very time-consuming – so AI makes little sense for a one-off project. In addition, we also “proof-listen”, i.e. do a listen-through to check, after the AI.

STAR: Don’t you “proof-listen” for human speakers too?

If there are two of us in addition to the speaker during the recording, we don’t do this any more because we can hear and check everything during the recording. The exceptions are languages that we don’t understand, such as Asian languages. But, in the case of AI, we don’t know beforehand what it knows and what it can read. I’ll give you an example. Let’s take the unit of a “megapascal”. This term has the abbreviation “MPa”, and the AI can read it as “em-pee-ay”, which is complete nonsense to a technical expert. So we’ve got to figure out how to get the AI to read it correctly as “megapascal”.

Sometimes we go through the recording and it seems right to us, but then the customer finds something that doesn’t fit their corporate culture. That’s why, while I think AI is a useful tool in certain informational texts that can make work faster and cheaper, and I’m happy to recommend it, in the hands of an inexperienced user, AI can behave unpredictably, and the end product will cause more disappointment than enthusiasm about the resources saved.

STAR: Is there a financial difference?

Yes, using AI reduces the budget to around half or two-thirds, as the work is mainly done by a machine and no voice professionals are involved in the process.

STAR: What do you do if a recording isn’t suitable for AI?

We are the guarantor of quality, and if we have serious and justified doubts about whether AI will lead to the right result, we’ll inform the customer. But customers also want to have personal experiences of this. I then try to point this out first by saying, “don’t be disappointed, but I don’t think AI is suitable for this particular project.” When I feel that I’ve outlined everything, I leave the decision up to them. But in some cases, customers themselves are unsure and are grateful for our support.

STAR: Thank you, David, for this very interesting discussion about AI in audio recordings.

Photo of David Heider

AI voices aren’t yet perfect, and human voices are still winning the race. They’re able to convey emotions and leave a strong impression. However, AI voices are an inexpensive alternative. Please feel free to contact us for our advice.

David Heider,
owner of a STAR partner sound studio in the Czech Republic

How translations can be processed faster with COTI Level 3

Posted on: August 1st, 2024 by Frank Wöhrle

In the fast-paced world of the translation and localisation industry, efficiency is the key to success. One solution that can play an important role in delivering this efficiency is the Common Translation Interface (COTI) standard, particularly in its highly developed form – COTI Level 3. But what exactly does this standard entail and how can it speed up translation processes?

What is the COTI standard?

The Common Translation Interface (COTI) standard was developed specifically for the translation and localisation industry to improve interoperability between different software tools and systems. The COTI standard defines a manufacturer-independent format for exchanging data between translation memory systems (TMS) and editorial systems, such as content management systems (CMS) and other tools used in the industry.

Higher COTI level, more automation

COTI levels build on each other and offer varying degrees of integration and automation:

  • Level 1 – core features: Translation data is saved in a defined structure, compressed as a ZIP file with the extension .coti and enhanced with meta information. The data is transferred manually, but the meta information and fixed structure make it easy for the receiving system to interpret the packages (a simplified packaging sketch follows this list).
  • Level 2 – extended features: At this level, the transfer of COTI data packets becomes automated. The editorial system generates a package that is automatically recognised and imported by a TMS as soon as it is placed in a shared transfer folder (hotfolder) that is constantly monitored. Meta information enables the receiving system to create an automated order system, for example.
  • Level 3 – expert features: The highest level of integration offers fully automated data transfer between the systems. This removes the need to create or monitor packages manually. Instead, translation data and meta information is transferred via an API between the editing system and the TMS. Not only translation data, but also status information such as translation progress can be transmitted.
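To make Level 1 a little more tangible, the sketch below zips a translation file together with a metadata file into a package with the .coti extension. The metadata fields, file names and folder layout are purely illustrative placeholders; the actual structure is defined by the COTI specification.

# Illustrative sketch of a COTI Level 1 style package: translation files plus
# a metadata file, zipped with the .coti extension. The metadata fields, file
# names and folder layout are placeholders; the real structure is defined by
# the COTI specification.

import zipfile
from pathlib import Path

def build_coti_package(files, package):
    metadata = (
        "<?xml version='1.0' encoding='UTF-8'?>\n"
        "<package>\n"
        "  <project>Manual_2025</project>\n"            # placeholder project name
        "  <source-language>de-DE</source-language>\n"
        "  <target-language>en-GB</target-language>\n"
        "</package>\n"
    )
    with zipfile.ZipFile(package, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("meta/metadata.xml", metadata)       # placeholder file name
        for f in files:
            zf.write(f, arcname=f"documents/{Path(f).name}")

source_files = [p for p in [Path("chapter1.xml")] if p.exists()]
build_coti_package(source_files, "Manual_2025.coti")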

 

Diagram: the COTI workflow between customer and language service provider – on the left, the customer’s CMS, PIM and ERP systems; on the right, the language service provider’s translation, terminology and review; in the centre, a double arrow showing data transfer at COTI Levels 1 to 3.

Benefits of full automation with COTI Level 3

The implementation of COTI Level 3 brings with it several benefits that can dramatically improve the translation process:

  • Fast data transfer: Thanks to the fully automated API, translation data is transferred seamlessly between systems without any delay.
  • Increased efficiency: Large and complex translation projects can be processed more efficiently, since the workflow no longer has to include any manual steps.
  • Round-the-clock operation: Automation facilitates continuous operation without human intervention, resulting in round-the-clock availability of translation data.
  • Security: By eliminating manual steps, the risk of human error is minimised, which in turn ensures data transfer is more secure.
  • Time and cost savings: Full automation leads to significant time savings, while also reducing the operational effort and costs involved in translation projects.

Conclusion

The introduction of COTI Level 3 signalled a major advancement in the translation industry; one which not only increases efficiency, but also improves the quality and reliability of translation processes. Through seamless integration and automated data transfer, companies are able to expand their global reach while also saving time and resources.

The following editorial systems can currently use COTI packages of various levels:

    • TIM – Fischer
    • AEM – Adobe
    • and many more

 

With our translation memory system STAR Transit NXT and our workflow solution STAR CLM, we provide interfaces at all three levels – transferring data efficiently, securely and quickly to speed up translation processes.

We process your COTI packages automatically using STAR CLM! 

Contact us for tailored advice

Top language service provider STAR Group – Top spot in the DACH region

Posted on: May 7th, 2024 by Frank Wöhrle

According to the recently published 2024 Slator and Nimdzi indices, the STAR Group is one of the top 25 language service providers in the world. In the DACH region (Germany, Austria and Switzerland), STAR takes the top spot in terms of audited turnover!

STAR honoured as a “Super Agency”

The “Super Agency” award recognises STAR’s comprehensive range of language solutions and translation services. The STAR Group’s independence and its turnover of more than USD 200 million are also criteria for this important categorisation.

Slator and Nimdzi Rankings essential for top language service providers

The Slator and Nimdzi indices list the most important companies in the language industry around the world in the fields of translation, localisation, interpreting and language technology.

The STAR Group sets itself apart thanks to its successful business model, excellent customer relationships and unrivalled expertise – all of which is recognised in this magnificent ranking.

With two branches in Germany and over 100 employees, STAR Deutschland is a unique partner for your corporate communications.

 

Are you looking for a top language service provider to partner with you on your translation projects?
If so, please get in touch – we’re here to support you.