Posted on: July 28th, 2025 by Frank Wöhrle
In less than 4 months, the next tekom annual conference is set to open in Stuttgart. The world’s biggest conference for technical communication will be held in Stuttgart from 11th to 13th November. Come along to find out more about our language services, enterprise technologies and all the latest developments.
STAR’s AI workshop
On 12th November, come along to our STAR workshop entitled “AI as co-pilot?! Successfully navigating language and translation processes with AI assistance” to find out how you can use NMT and LLM technologies efficiently and sustainably for language and translation processes. (Please note that this workshop will be held in German only.)
Posted on: July 8th, 2025 by Frank Wöhrle
This year’s MT Summit was held in Geneva, Switzerland, and featured a diverse programme of tutorials, workshops and inspiring presentations on the topics of machine translation (MT) and large language models (LLMs).
As a platinum sponsor of the event, STAR AG was on site together with three experts from the company’s Development, Support and Sales teams. STAR’s very own Language Technology Consultant, Julian Hamm, also attended the week-long conference to represent the company and took away new ideas and food for thought from research and industry.
While outside the temperatures were soaring, inside the very hottest trends were being presented – by technology providers and representatives from notable companies and institutions in a series of lectures and poster sessions. The dedicated organisation team from the University of Geneva put together a varied programme of events, while also setting the scene for valuable discussions.
Human in the cockpit – man and machine, a skilful combination
Despite staggering progress in the field of generative AI, this MT Summit made one thing clear: it simply doesn’t work without people!
This general philosophy was also key to our sponsored talk, entitled Human in the Cockpit – How GenAI is shaping the localisation industry and what it means for technology and business strategies. In their presentation, Diana Ballard and Julian Hamm demonstrated the influence that generative AI is exerting on the localisation industry, highlighting use cases of particular relevance for the use of AI.
As a longstanding technology and translation partner, STAR understands the precise requirements of users and continuously optimises its own tools and solutions to make them future-proof by integrating smart features.
Visitors to the STAR stand were able to get hands-on experience through live demonstrations, alongside opportunities to speak to our experts about various aspects of AI in practice. In addition to the integration of big-name LLM systems, such as ChatGPT, the team demonstrated work on smaller local models, including TermFusion, a project optimised for terminology work that does not call for a dedicated GPU and can therefore be operated with very few resources. Local models will be used to facilitate term extraction from bilingual data records, for instance, or for the intelligent correction of terminology specifications. Using this approach as a basis, other models are currently in development to make working in the translation tool even more efficient.
Artificial intelligence in localisation: it’s here to stay!
Current statistics on the use of AI in companies confirm that these developments are more than just a fringe phenomenon. Customer contact, marketing and communication in particular are promising fields of application that are already being served intensively.
Survey: Application of generative artificial intelligence in companies in 2025. Published by the Statista Research Department, 20th May 2025.
Even though the use of AI in localisation still varies widely, one thing is plain to see: there is no one-size-fits-all solution. After all, only those who are familiar with the use case and can clearly define the requirements will understand how the technology can be used wisely and sustainably.
After five days of in-depth discussions with representatives from research and industry, we are taking seven important insights away with us:
Neural machine translation (NMT) remains the most widely used language technology in localisation processes, and LLMs are increasingly being used in parallel to optimise NMT output. In research, however, NMT is increasingly being displaced by LLMs.
Systems and workflows are increasingly geared towards seamless interplay between different translation resources. Translation memories (TM) and terminology databases provide important translation-relevant information and can be scaled up or down to produce better and more consistent translations. Another method establishing itself is retrieval augmented generation (RAG), whereby smaller databases can be used as a reference point for text creation or translation.
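The retrieval step in such a RAG-style setup can be sketched in a few lines. The TM entries and segment below are invented examples, and a production system would use proper fuzzy-match scoring rather than a simple `difflib` ratio:

```python
from difflib import SequenceMatcher

# Sketch of the retrieval step in a RAG-style workflow: the closest
# translation memory entries are looked up for a new segment and would
# then be passed to a model as reference context.

def top_matches(segment: str, tm: list[tuple[str, str]], k: int = 2):
    """Return the k TM entries whose source is most similar to the segment."""
    score = lambda entry: SequenceMatcher(None, segment, entry[0]).ratio()
    return sorted(tm, key=score, reverse=True)[:k]

tm = [
    ("Replace the air filter.", "Ersetzen Sie den Luftfilter."),
    ("Replace the oil filter.", "Ersetzen Sie den Ölfilter."),
    ("Switch off the engine.", "Schalten Sie den Motor aus."),
]
matches = top_matches("Replace the fuel filter.", tm)
```

The two filter-related entries would be retrieved here, while the unrelated engine segment is left out of the model's context.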
In certain use cases, generic AI models outperform open-source models. Customisation in the form of translation rules or automatic terminology adjustments is making its way into many commercial solutions. In the medium to long term, this approach looks set to overtake the earlier method of dedicated training for NMT systems.
Growing translation volumes alongside the overall squeezing of prices call for the use of intelligent analysis tools to evaluate the added value of using AI and automating processes for the long term. The integration of models for MT quality estimation and the evaluation of translations using suitable metrics, in some cases assisted by an LLM, are particularly relevant at the moment.
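Quality estimation in production typically relies on trained models, but the simplest analysis of post-editing effort can be computed after the fact from edit distance. A minimal, purely illustrative sketch (not a substitute for proper QE metrics):

```python
from difflib import SequenceMatcher

def post_edit_effort(mt_output: str, post_edited: str) -> float:
    """Rough post-edit effort: 0.0 = MT kept verbatim, 1.0 = fully rewritten."""
    return 1.0 - SequenceMatcher(None, mt_output, post_edited).ratio()

# A segment the post-editor left untouched scores 0.0 ...
untouched = post_edit_effort("Schalten Sie das Gerät aus.",
                             "Schalten Sie das Gerät aus.")
# ... while heavier edits push the score towards 1.0.
edited = post_edit_effort("Das Gerät ausmachen.",
                          "Schalten Sie das Gerät aus.")
```

Aggregated over a project, even a crude score like this helps quantify how much value MT actually added per content type.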
Not all tasks necessarily have to be performed by an LLM, however. There is still a place for conventional rule-based approaches, such as the use of regular expressions in quality assurance, and in some instances, these can actually prove more efficient than LLM-based mechanisms.
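To make this concrete, here is a minimal sketch of such rule-based checks in Python. The rules and segments are invented examples, not an actual QA configuration:

```python
import re

NUMBER = re.compile(r"\d+(?:[.,]\d+)?")
PLACEHOLDER = re.compile(r"\{\w+\}")  # e.g. {product_name}

def qa_check(source: str, target: str) -> list[str]:
    """Return rule-based QA findings for one source/target segment pair."""
    findings = []
    if sorted(NUMBER.findall(source)) != sorted(NUMBER.findall(target)):
        findings.append("number mismatch")
    if set(PLACEHOLDER.findall(source)) != set(PLACEHOLDER.findall(target)):
        findings.append("placeholder mismatch")
    if re.search(r"  +", target):
        findings.append("double space in target")
    return findings

ok = qa_check("Tighten to 12 Nm.", "Mit 12 Nm anziehen.")        # no findings
bad = qa_check("Set {name} to 5 V.", "Auf 50 V einstellen.")     # two findings
```

Checks like these run deterministically in milliseconds, which is exactly why they can outperform an LLM on this kind of task.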
LLMs are already capable of analysing texts at a document level and identifying distant connections. In CAT tools, however, translation is almost always performed at a segment level. Does the technology need rethinking here? While it is evident that creation systems and translation resources are increasingly being merged, this calls for new and innovative approaches for handling translation resources and AI systems.
More and more content is being created or translated by generative AI. The impact of this is felt in our culture, language and social life, for example through heightened media consumption via social media platforms or the gradual suppression of minority languages. Researchers are currently studying the effects of generative AI on our communication behaviour.
Did you miss the MT Summit 2025 and want to find out more about the latest trends?
Watch our webinar recordings now and discover how you can improve translation, terminology and content creation over the long term.
Posted on: July 2nd, 2025 by Frank Wöhrle
Anyone who regularly works with CAT tools (computer-aided translation software) probably thinks of Trados Studio, memoQ or Across first. One name, however, is often overlooked – and unfairly so: Transit NXT, the underestimated CAT tool from the STAR Group. It is a genuine powerhouse for anyone who wants their work to be structured, consistent and terminology-focused.
What actually is Transit NXT?
Transit is a professional CAT tool that has been on the market since the 1990s. It combines classic segmentation with a project-orientated working method – incorporating translation memory, terminology management, preview options, quality checks and various functions designed specifically for technical documentation. The extensive and growing portfolio of AI features, demonstrated in a series of short videos on our YouTube channel, is not to be missed.
5 reasons why so many professionals have put their trust in Transit NXT for years
1. Up-to-date and contextualised terminology
Transit works seamlessly with TermStar. Live terminology entries are displayed to translators within the editor itself – including the definition, context and source of the term. This extensive integration is a clear benefit over tools where terminology often remains on the sidelines.
2. Project structure, not file chaos
Unlike other CAT tools, Transit thinks in terms of projects with a clear-cut file structure. This takes the hard work out of managing big or lengthy translation projects – especially when it comes to regular updates or complex workflows.
3. Need technical formats? No problem with Transit.
Whether DITA, XML, FrameMaker, InDesign or XLIFF – Transit leads the way when it comes to the variety of natively supported file formats. Many other tools need extra modules or conversions to handle these files.
4. Local installation – full data sovereignty
Transit NXT works entirely locally – without any cloud obligations. For companies that have high data protection requirements, this is a crucial advantage over cloud-based solutions.
5. Quality assurance at the highest level
With automated checks, an in-context preview function and a variant check, Transit NXT offers precise and impressively efficient quality management that is especially beneficial for those handling technical content.
Who is Transit most suited to?
Technical translators working with complex formats.
Public authorities, industrial companies and service providers who need to keep sensitive data locally.
Freelancers who attach great importance to a reliably maintained terminology.
Translation agencies that want an efficient tool for managing large structured projects.
Sound good?
Transit NXT is no entry-level tool – but that is precisely what makes it a great option for anyone who values structure, terminology and format variety.
Posted on: June 24th, 2025 by Frank Wöhrle
If you have ever had a text translated into Polish, translated it yourself or have had anything else to do with Slavic languages, you may have come across a linguistic phenomenon that we are unfamiliar with in English – aspect. In Polish and other Slavic languages, a verb not only states what happens, but also whether the action is already completed or is still ongoing. This difference is crucial when it comes to translating – because it can determine whether a sentence achieves the intended effect or is misleading.
Imperfective aspect – when the action is ongoing
The “imperfective aspect” describes an action that is either happening right now, is regularly repeated or is of a general, unlimited nature. It doesn’t matter whether the action is already completed; the focus is on the process, duration or repetition. This is often a challenge because, in other languages, this nuance is primarily expressed by tenses or additional adverbs such as “regularly”, “right now”, “usually” or similar.
Example in Polish:
czytać (to read – imperfective aspect)
Czytałem książkę. (I read/have read a book. /I was reading a book. – The action was in progress or repeated; it is not mentioned whether you are already finished or the end of the book was actually reached. It could also mean “I only started reading but didn’t finish the book”.)
Codziennie czytam gazety. (I read newspapers every day. – This is a habit; something that happens repeatedly, irrespective of whether the action is fully completed each time.)
The imperfective aspect may also express unfinished or failed actions, where the focus is on the attempt.
Perfective aspect – when the action is completed
In contrast, the perfective aspect signals that an action was completed and a result has been reached. In this case, the focus is on the completion of an action and an objective or a state being achieved. It’s a one-off, completed action that has reached an end.
Example in Polish:
przeczytać (to read through – perfective aspect)
Przeczytałem książkę. (I have read through/finished reading the book. – The action has been finished, the end of the book has been reached and there is an outcome.)
In narratives, this means that it’s clear which events have already finished and the story moves forwards. In instructions, reports and legal texts, this aspect can change the tone, focus and even the overall message.
One verb – two faces: Paired aspects
Almost every verb in Polish and other Slavic languages leads a kind of “double life” because it exists in imperfective and perfective forms that each express a certain course of action. Paired aspects are not formed according to a fixed rule; they are instead based on different morphological units. This often requires people to learn pairs, rather than relying on rigid rules. For each verb in Slavic languages, such as Polish, you need to learn not just one, but two pieces of vocabulary.
Common methods for forming paired aspects include:
Prefixation: Adding a prefix to the imperfective stem to express perfectivity. This is one of the most common methods, e.g. robić (to do – imperfective) → zrobić (to do – perfective)
Robiłem obiad. (I was cooking lunch. – The action was ongoing; I was in the process of preparing the food.)
Zrobiłem obiad. (I have prepared/cooked lunch. – The action is finished, lunch is ready and can be served.)
Suffixation: Adding a suffix or modifying the stem. This can often bring subtler nuances to the meaning. Example: zamykać (to close, imperfective) → zamknąć (to close, perfective)
Changes to the stem: Changing the vowels or consonants in the stem, often accompanied by a prefix. Example: brać (to take, imperfective) → wziąć (to take, perfective) -> Complete change to the stem: bra- → wzi-
Suppletive forms: In some cases, there are completely different stems for the imperfective and perfective form. Example: mówić (to say, imperfective) → powiedzieć (to say, perfective) -> Different stems: mów- vs. powiedz-
Why aspect is crucial for translations
If you’re translating into Polish, you need to know more than just the right word. You need to understand the perspective of the action – is it currently ongoing, is it completed or is it repeated?
This linguistic phenomenon enables the author of a text to emphasise exactly the part of the action that is to be communicated – whether it be the process itself or the result achieved. This means that Slavic languages are often very precise in what they can express. However, they require non-native speakers to rethink their perception of actions and time when translating and interpreting.
What this means for you
When working with Slavic languages – whether it be for international locations, customers or target markets – aspect is a good example of how complex language is. It also demonstrates how machine translation is often not enough to capture the right tone.
Good translation is not only translating “word for word” but also conveying the right focus, considering the course of action and adopting a change in perspective.
Conclusion
Aspect in Polish (and other Slavic languages) is much more than just grammar – it’s a key way of creating meaning. Without correctly applying the aspect, sentences may be misleading or even falsely interpreted.
As a translation agency, it therefore goes without saying that we need to not only be familiar with these linguistic subtleties, but also to actively incorporate them into our work – so that your texts are understood as they are intended in the target country.
Would you like to know whether your Polish communication is finding the right tone? We’re happy to assist you.
Posted on: May 30th, 2025 by Frank Wöhrle
We have successfully completed the certification training for the SCHEMA ST4 content management system. STAR Deutschland is now an official certified translation service provider for SCHEMA ST4.
What is SCHEMA ST4?
SCHEMA ST4 is a professional content management system that more and more companies are turning to when producing technical documentation. It assists users in the creation, management and publication of multilingual product documentation (manuals, instructions, catalogues, online guides, etc.).
SCHEMA ST4 is an XML-based editing system that separates the layout from the textual content. In technical documentation, this is very beneficial when reusing text fragments and when managing multiple languages and versions.
SCHEMA ST4 finds application in a broad spectrum of industries, e.g. in the automotive sector, in mechanical and plant engineering or in pharmaceuticals. One major benefit of this system lies in the extensive optimisation of the translation process, which in turn reduces costs.
Training content and key training topics
The “Translation Management” training programme covers the various steps of the translation process, namely:
Selecting the right text fragments
Exporting the text content for translation, if necessary using COTI
The subsequent import of the translated content into SCHEMA ST4
The training also offers insights into potential challenges that may be encountered, both in terms of the editing and the translation.
Translation process for SCHEMA ST4 content
The SCHEMA ST4 content management system is one of the most frequently implemented solutions in technical editing among STAR’s customers.
Let us assist you with our in-depth knowledge of the SCHEMA ST4 translation interface and the related processes.
Posted on: April 24th, 2025 by Frank Wöhrle
As a professional language service provider, we encounter the challenges and subtleties of a wide variety of languages each and every day. One language that has been attracting more and more attention in recent years due to economic, cultural and political developments is Korean. Whether through K-pop, South Korean technology companies, or trade relations, interest in the Korean language is growing rapidly. But what makes Korean so special, especially when compared to English?
One of the most striking and complex features of the Korean language is the system of politeness and formality levels. This is where Korean differs fundamentally from English.
In Korean, the social status of the people you are speaking to must be taken into account at all times. The relevant factors include:
Age
Professional position
Familiarity/closeness with the person
Social hierarchy
The appropriate politeness level must be selected for each situation. There are several levels, but the most common are:
Informal (low register) (반말 / banmal) – used with friends, family and those with whom you have a close relationship, as well as with children.
Polite (neutral) (존댓말 / jondaetmal) – the standard level of politeness used in most professional and everyday contexts.
Formal (high register) (격식체 / gyeoksikche) – particularly polite, often used in presentations, and when communicating with customers or superiors.
While in modern English we only have one term for “you”, whether speaking to one person or a group of people, from commoners to kings, the Korean language is far more complex! The person’s status and demographic affect not only the personal pronoun, but even the entire sentence structure, vocabulary and verb conjugations, including suffix formation.
For example: The verb “to eat” in different levels of politeness:
Informal: 먹어 (meogeo)
Polite: 먹어요 (meogeoyo)
Formal: 먹습니다 (meokseumnida)
Honorific (e.g. showing respect towards elders): 드십니다 (deusimnida)
For companies communicating with Korean business partners, choosing the correct level of politeness is not only a linguistic issue, but also a cultural non-negotiable. An incorrect form of address can instantly come across as impolite or disrespectful. There are also important differences in non-verbal communication: While people in the Anglosphere greet each other with a handshake or a hug, in Korea, the bow is used as a sign of respect. So, these distinct levels of politeness are not to be taken lightly and once again clearly demonstrate that language is often a mirror of society.
Alphabet and writing system: “Hangul” – simple and ingenious
One of the most fundamental differences between English and Korean is the alphabet. While English is based on the Latin alphabet, Korean uses the so-called “Hangul” or “Hangeul” (한글) alphabet. This writing system was introduced in the 15th century by King Sejong the Great in order to facilitate the general population’s access to the written language, with great success.
Hangul consists of 14 consonants and 10 vowels, which are combined into syllable blocks. This results in a system that is both easy to learn and extremely effective. In contrast to English spelling, which often appears haphazard (compare “cough”, “through” and “bough”, for example), Hangul is largely phonetic: In most cases, the words are pronounced exactly as they are written.
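This systematic design is even visible in Unicode, where every Hangul syllable block is computed arithmetically from its letters (jamo). A small illustration:

```python
# Every precomposed Hangul syllable starts at U+AC00 and encodes
# initial consonant, vowel and optional final consonant positionally.

INITIALS = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")          # 19 initials
VOWELS   = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")      # 21 vowels
FINALS   = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 finals + none

def decompose(syllable: str) -> tuple[str, str, str]:
    """Split one Hangul syllable block into its component jamo."""
    index = ord(syllable) - 0xAC00          # offset into the syllable block
    initial, rest = divmod(index, 21 * 28)  # 21 vowels x 28 finals per initial
    vowel, final = divmod(rest, 28)
    return INITIALS[initial], VOWELS[vowel], FINALS[final]

print(decompose("한"))  # ('ㅎ', 'ㅏ', 'ㄴ')
```

The fact that a three-line function can fully decompose the script underlines how regular Hangul is compared with, say, English spelling.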
For our work as a language service provider, and also for the many people in Europe who are learning Korean, this means that deciphering Korean characters is not a major hurdle compared to many other non-Latin writing systems. Nevertheless, the correct translation and interpretation depends on the context – especially when it comes to the politeness levels.
A tricky number system – “Hangul” vs. “Hanja”
When “Hangul” was declared the official writing system of Korea, it replaced the previously used writing system, called “Hanja”. Hanja uses Chinese characters and pronunciation to express the Korean language. It was mainly used in academic circles, and Sino-Korean characters can still be found in official documents, such as those used to pass laws. At that time, Hangul was mainly used by the lower classes and women, who often did not enjoy the education of the upper classes and intellectuals, who favoured Hanja. When Korea was annexed by the Japanese Empire (1910–1945), Hangul temporarily fell out of favour, with the Japanese imposing their own language and culture.
As a result, Japanese influences can be found alongside Chinese in the Korean language today, and Hanja continues to be an important building block. There are two number systems in Korea: the “pure” (native) Korean number system and the Sino-Korean number system. For example, when taking a photo of someone, you would count with the native Korean numbers: “hana, dul, set!”. To arrange an exact time for a meeting, you would use the Sino-Korean numbers for the minutes, but give the hours in native Korean: 12:30 would be “yeol-du” (12; native Korean) “shi” (hour) “sam-ship” (30; Sino-Korean) “bun” (minute). And it gets even better: if you want to order one bowl of “bulgogi”, for example, a classic Korean meat dish, you must use the native Korean numbers. When ordering two portions of “tteokbokki” – a popular Korean snack made from rice cakes – you must switch back to the Sino-Korean numbers. This can get pretty confusing!
Another point of focus – sentence structure, grammar & spelling
A fundamental difference from English lies in the sentence structure. While English usually follows a subject-verb-object pattern (e.g. “I see the dog”), Korean typically uses the subject-object-verb structure (e.g. “I the dog see” – 나는 개를 본다).
In addition, there are no articles in Korean, which means a lot of information in Korean is implicit and depends on the context. For translators, this requires a thorough understanding of both languages in order to achieve coherent results, both culturally and in terms of content.
In principle, male and female pronouns do exist, but apart from a few exceptions, such as in antiquated poetry, they are rarely used. Consequently, you must always pay close attention to the subject in Korean. Once a name is mentioned, it can be assumed that the person will also be the subject of the sentences that follow.
As a logical consequence of the fact that pronouns are not specifically labelled as masculine or feminine, unlike in English, there is no gender debate, at least not a linguistic one. Instead, context and social factors are used to signal gender and other social roles.
Furthermore, Korean verbs and nouns do not have a grammatical number. Whether something is singular or plural is simply not considered that important in Korean; the plural is only used explicitly if this is important in a given situation and should be emphasised.
It is hard to imagine the English language without agreement between the nouns, pronouns and verbs to indicate number!
Vocabulary and loan words – old and new combined
The Korean vocabulary combines native terms, Sino-Korean words (borrowed from Chinese) and modern loan words, mainly from English. Just like English, words in Korean have varied origins.
There is one difference: Many loan words are phonetically adapted in Korean – for example, “computer” becomes 컴퓨터 (keompyuteo). And everyday terms such as 커피 (keopi) for “coffee” or 핸드폰 (haendeupon) for “mobile phone” (“hand-phone”), are also common.
For professional translations, it is essential to know the origin and usage of a term. Especially in technical, legal or medical texts, seemingly small differences can instantly make a big difference in meaning.
Context is everything – subject and object are overrated
Another difference to English is the importance of context-based communication. In Korean sentences, the subject or object is often simply omitted if this is clear from the context.
“Do you like coffee?” – 커피 좋아해요? (keopi joahaeyo?), literally: “Coffee like?”
In English, such constructions would immediately be perceived as incomplete. In Korean, on the other hand, they are considered completely natural. This type of communication requires a keen sense of the cultural and situational context when translating.
In summary: Korean – more than just a language
Korean is a deeply expressive language with a distinctive writing system. Anyone who learns the language, which is spoken by more than 81 million native speakers, will also gain a deep insight into the culture, history and traditions of the country. The differences to English range from grammar and sentence structure to its distinct layers of politeness. For language service providers and companies with business relations to Korea, this means that successful translations and language training not only require in-depth linguistic knowledge, but also intercultural expertise.
Posted on: March 31st, 2025 by Frank Wöhrle
We are pleased to announce that we are listed in the new Slator Index 2025 among the top 10 language service providers in the world. This ranking as one of the largest international translation service providers confirms our focus on customer-orientated solutions and excellence.
A huge thank you to our dedicated teams worldwide and to our customers for putting their trust in us and for our successful collaboration!
STAR once again honoured as a “Super Agency”
Slator’s ranking includes almost 300 service providers. The “Super Agency” award recognises STAR’s comprehensive range of language solutions and translation services. The STAR Group’s independence and its turnover of more than USD 200 million are also criteria for this important categorisation.
The industry is currently undergoing rapid change with dynamic competition – challenging conditions for the STAR Group. By focusing on our core business, pursuing our own further developments in the areas of AI, machine translation and LLMs, and resolutely striving to automate our processes further, we were able to maintain our leading international position.
Slator Ranking essential for top language service providers
The Slator Index ranks the world’s largest providers of translation, localisation, interpreting services and language technology by revenue and is considered an important information platform for language industry stakeholders.
Are you looking for a top language service provider and system supplier as a partner for your translation projects? We can help you – simply get in touch.
Posted on: February 26th, 2025 by Frank Wöhrle
AI: What started as a buzzword, and then became an established term in everyday language, is now a basic requirement for many applications and processes. And this technology is not stopping at the language industry either. Since the launch of ChatGPT, we have known that translating can also be completely interactive. Large language models, also known as LLMs, in chatbot form are now flooding the market. It feels as though a new model pops up every week, announcing its intention to outdo its competitors in terms of efficiency, quality and reliability. Neural machine translation (NMT) doesn’t seem that old – and yet we are already discussing when this technology will disappear from the market and be replaced by generative AI.
The key question is: I want to translate more efficiently with AI – but how?
AI for targeted optimisation of translation quality
Even though the technology has made significant progress over the last five years, the results of the commonly used and established NMT systems are not always good enough. This can have a variety of causes:
The desired language combination has not been trained with sufficient material or goes via a pivot language (often English). This can lead to structural problems or errors in meaning.
The MT system does not recognise specialist or customer-specific terminology.
The MT system was used for content in which style is extremely important or the translation needs to be targeted towards a specific target group.
Manuals, marketing texts or content with high customer visibility therefore often do not achieve the desired levels of quality through machine translation alone. Language professionals then optimise the machine-generated texts as part of a post-editing process. Machine translations are carefully checked, compared with the source text and corrected if necessary.
As a central translation platform, the CAT tool enables users to work efficiently and offers targeted support for quality assurance thanks to a range of automated features. But where exactly is AI being used here? LLMs such as ChatGPT from OpenAI are perfectly capable of producing translations that, like DeepL or Google Translate, provide a good starting point for further processing, depending on how it is to be used.
However, a significant leap in quality can be achieved by improving translation requests through the targeted use of prompts and the addition of reference files. This requires not only well thought-out prompt engineering, but also validated translation resources in the form of translation memory and terminology databases.
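As a rough sketch of what such an enriched request can look like – the segment, TM match and terminology entry below are invented examples, and real prompt designs vary by model and use case:

```python
# Assemble a translation prompt that embeds validated resources:
# fuzzy matches from the translation memory plus binding terminology.

def build_prompt(source: str, tm_matches: list[tuple[str, str]],
                 terms: dict[str, str]) -> str:
    lines = ["Translate the following segment from English into German.",
             "Reference translations from the translation memory:"]
    lines += [f"- EN: {s} -> DE: {t}" for s, t in tm_matches]
    lines.append("Use this terminology without exception:")
    lines += [f"- {s} = {t}" for s, t in terms.items()]
    lines.append(f"Segment: {source}")
    return "\n".join(lines)

prompt = build_prompt(
    "Check the control unit before start-up.",
    tm_matches=[("Check the cable before start-up.",
                 "Prüfen Sie das Kabel vor der Inbetriebnahme.")],
    terms={"control unit": "Steuergerät"},
)
```

The model then sees the approved wording and terminology alongside the segment, instead of translating it in a vacuum.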
AI for better translation resources
As with any new technology, a question often arises: What can AI do for me?
However, if you want to integrate AI into your language processes in the long term, you should first ask yourself: What can I do for AI?
Well-maintained translation resources make a significant contribution to improving the results of your AI solution. Take the topic of terminology, for example. If you use a generic system such as DeepL for your translation processes, you will receive translations that do not match your company terminology – unless you integrate a glossary.
Are you only at the stage of establishing your terminology but don’t want to miss out on the benefits of MT? Use language models to extract potential terminology from your monolingual or multilingual documents. You can also use AI to check your translation memory databases, for example to find inconsistent translations or to automate clean-up or correction across large data sets. Use these resources consistently to increase the translation quality of your language model or improve the output of NMT systems.
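Even before any AI is involved, a simple scan can surface candidates for such a clean-up, for example source segments that have acquired several different target translations. A minimal sketch with invented TM entries:

```python
from collections import defaultdict

def inconsistent_entries(tm: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Map each source segment to its targets, keeping only ambiguous ones."""
    targets = defaultdict(set)
    for source, target in tm:
        targets[source].add(target)
    return {src: tgts for src, tgts in targets.items() if len(tgts) > 1}

tm = [
    ("Press the button.", "Drücken Sie die Taste."),
    ("Press the button.", "Drücken Sie den Knopf."),
    ("Close the valve.", "Schließen Sie das Ventil."),
]
flagged = inconsistent_entries(tm)
```

A language model (or a human reviewer) would then decide which of the flagged variants to keep; a real clean-up would also normalise case, punctuation and context before flagging.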
AI as co-pilot? Reach your destination safely with the new STAR webinar series
As you can see, we are extremely enthusiastic about the topic of AI – and we don’t claim to be reinventing the wheel. However, the technology offers a lot of potential for optimisation if it is used efficiently and sustainably.
Of course, we would like to share our enthusiasm with you and invite you to our webinar series “AI as co-pilot: Forging new paths to smart language processes” starting in March. All the webinars will be held in German only.
How exactly does generative AI actually work? What advantages does it offer for translation? How can I use language models to create product texts? Can I train my own AI? And what actually happens to my data?
Our Language Technology Consultant Julian Hamm answers these and many other questions and discusses the many different uses of generative AI, including translation, terminology management, content creation and content delivery. You can expect the following content in the first group of topics:
Was ist generative KI, und wofür kann ich sie einsetzen? (What is generative AI and what can I use it for?)
Wie kann KI bei der Übersetzung unterstützen? (How can AI support translation?)
Wie kann ich KI für die Terminologiearbeit einsetzen? (How can I use AI for terminology work?)
Welche Vorteile bietet KI für die Content-Erstellung? (What advantages does AI offer for content creation?)
Further information on the events and the registration form can be found here.
Posted on: January 31st, 2025 by Frank Wöhrle No Comments
With the latest Transit NXT Service Pack, you can benefit from a host of new features to speed up your processes.
New file formats
Transit NXT has once again been expanded to include the very latest file formats with Service Pack 17. Documents from InDesign 2025 and drawings up to AutoCAD 2025 can now also be translated as optional file versions. Documents from the Google Docs Editors suite are now also officially supported as another file format.
Machine translation
Integration of Amazon Translate.
With the addition of another MT system, translations can now also be requested via Amazon Translate – interactively directly in the translation editor, or as an option automatically when importing the project. New functions are also available for DeepL, Systran and Textshuttle.
Project management
Professional support in the editor thanks to integrated MS Word grammar check, AutoCorrect and AutoComplete functions.
TermStar
For TermStar, this Service Pack focuses on terminology export: TBX version 3 is now officially supported, and multimedia files (e.g. graphics or videos) can now be exported from dictionaries to certain formats.
New editor functions
Translators can look forward to additional helpful editor functions:
The AutoComplete function makes it quicker to enter words and phrases with project-specific suggestions from dictionaries and translation memories.
AutoCorrect corrects typical typos, typographically converts quotation marks and makes it possible to use shortcuts to enter special characters and frequently used phrases. Date and number formats as well as alphanumeric strings can now be adapted to the target language format with a simple mouse click.
For quality assurance, the translation can now also be checked for correct grammar and corrected interactively. What’s more, Russian for Kazakhstan is now available as an additional working language.
Posted on: December 16th, 2024 by Frank Wöhrle No Comments
Another year is drawing to a close, and we can hardly believe how fast the time has flown by. Now is a good opportunity to take a look back at all of the important developments that 2024 – the year of AI – has brought us, and give you an insight into what next year has in store for us.
AI has been a hot topic ever since OpenAI stunned the whole world with ChatGPT. Companies are increasingly insisting on using AI wherever this seems possible. From many discussions and exciting customer projects over the course of the year, we have identified key lessons learned and trends in this field.
Five key trends relating to the use of AI in the context of translation
Expectations for generative AI remain very high. However, the purposes for which people want to use it differ greatly, especially in language processes: from the fanciful idea of a wonder machine that produces, translates and optimises texts to perfection, through to a clever tool that provides targeted assistance with specific tasks that are still largely performed manually today. The increasing integration of large language models into translation processes makes exactly this possible by supporting those processes in a targeted and modular way. This support ranges from bilingual terminology extraction and the post-editing of machine-translated content through to assessing the quality of multilingual documents.
If you want to use the technology efficiently and sustainably, you also need high-quality, well-structured language resources in order to supply the language models with relevant information. This means that years of working with translation memory and terminology management systems now pay off twice over. If this data is prepared in a structured and sustainable manner, language models can use it to optimise machine-translated content, for instance in the form of retrieval-augmented generation (RAG).
The topic of data protection continues to generate extreme uncertainty despite the adoption of the EU AI Act in May 2024. Many companies are looking for ways to use AI in the most secure possible way in order to protect their precious data against misuse.
A lot of businesses are experiencing issues with the scalability of AI solutions, whether this concerns the IT infrastructure, financial resources or further training of staff.
Human in the cockpit. People will increasingly return to the centre of the AI-based translation workflow. While translators were previously responsible, among other tasks, for post-editing predefined machine-translated content as part of the "human in the loop" concept, the new "human in the cockpit" principle aims for translators to use modern language technologies interactively in order to exert their own influence on the output and to design efficient processes. This technological transformation is also changing the requirements for current and future language experts. Universities have recognised these developments and are revising their degrees and courses accordingly: prompt engineering, language technologies and information management are important focal topics that will feature more often on curricula in future.
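The retrieval step behind the RAG idea mentioned above can be sketched very simply: rank existing translation memory entries by their similarity to the new source segment and hand the best matches to the language model as context. The word-overlap similarity used here is a deliberately naive stand-in for real retrieval (e.g. embeddings or fuzzy matching):

```python
def _overlap(a, b):
    """Jaccard word overlap between two segments (toy similarity measure)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def retrieve_tm_context(source_text, tm_entries, top_k=2):
    """Rank TM entries by similarity to the new source segment and
    return the best matches as context for the language model."""
    ranked = sorted(tm_entries, key=lambda e: _overlap(source_text, e[0]),
                    reverse=True)
    return ranked[:top_k]

# Illustrative English-German TM; the new segment has no exact match
tm = [
    ("Replace the air filter.", "Luftfilter austauschen."),
    ("Close the valve.", "Ventil schließen."),
    ("Replace the oil filter.", "Ölfilter austauschen."),
]
context = retrieve_tm_context("Replace the fuel filter.", tm)
```

The retrieved pairs would then be embedded in the translation prompt, so the model's output follows the style and terminology of the validated data rather than generic training material.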
Are you interested in this subject? Then don’t miss our STAR webinar, which is scheduled for early 2025. There, we will be sharing information about current trends and our latest technological developments.