Author Archive

Quick wins for technical writing – how local LLMs can speed up your daily work

Posted on: February 27th, 2026 by Frank Wöhrle

Find out why local large language models (LLMs) are considered an insider tip for technical writing and how you can use AI to automate routine tasks.

Is your company yet to discover knowledge-based chatbots? Do you want to ensure that your sensitive data does not leave the computer? If so, we have a host of practical tips to get you off to a good start.

Why a local LLM makes sense for technical writers

Whether product changes, API updates or new features – technical documentation often needs to be adapted at short notice. This is precisely where local LLMs such as Ollama come into play:

  • Data sovereignty: Your content remains in-house.
  • Offline capability: It can be used even without an internet connection.
  • Cost-effectiveness: No ongoing cloud fees.
     

Please note: Ensure that you collaborate with your IT security department with regard to installation and configuration – safety first!

A number of suitable local LLM environments are now available that can be used to perform simple tasks in the technical writing office. We took a closer look at the Ollama platform.

 


Ollama at a glance – the local LLM platform

Screenshot: Ollama software

 

Who is behind it?

Ollama is an open-source platform for running large language models (LLMs) locally. It was developed by Jeffrey Morgan (CEO) and Michael Chiang, the brains behind Kitematic, now part of Docker.

Over 156,000 GitHub stars (as of 2025) show that the project is growing rapidly. Supported by Y Combinator, Ollama remains community-driven. The platform enables you to run various LLMs locally on your computer (Windows, macOS, Linux). Unlike cloud-based services, your data remains under your control.

Advantages of Ollama

  • No cloud uploads of confidential data
  • Integration of existing reference documents
  • Rapid, repeatable text generation for manuals, API documentation or help files
     

Recommended models

  • Llama3 / Llama3.1 – good balance of size and speed
  • Mistral – ideal for short, precise sections of text
  • Gemma3 – strong at text suggestions and summaries
     

Ollama is easy to install using an installer, and the application can then be launched via its user interface or from the CMD/PowerShell console. The first time a model is run, it is downloaded and stored locally.

 


Get your local LLM ready to go in just a few steps

1. Installation

Download Ollama at https://ollama.com/download and follow the installation instructions. Afterwards, the loveable llama will welcome you.

Screenshot: Ollama software

 

2. Download a suitable model

After installation, you can download a local model that is suitable for technical documentation or use a model that is already installed.

Open Ollama, click on “Download” and select, for example, llama3.2, gemma3:4b or mistral.

Screenshot: Ollama software

Alternatively, you can download additional models via the console (CMD or PowerShell on Windows):

To do this, use the command: ollama pull llama3:8b

Screenshot: Ollama software

Wait until the model is fully downloaded – and you’re all set.
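Beyond the console commands, a downloaded model can also be queried from a script via Ollama's local REST API, which listens on port 11434 by default. The following Python sketch is a minimal, hedged example under that assumption; the model name and prompt are placeholders, and the final call requires a running Ollama instance:

```python
import json
import urllib.request

# Ollama's documented local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Builds a non-streaming generate request for the local Ollama API."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask_ollama(model: str, prompt: str) -> str:
    """Sends the request and returns the generated text (requires Ollama running)."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Example (uncomment once Ollama is running and llama3:8b has been pulled):
# print(ask_ollama("llama3:8b", "Summarise what a translation memory does."))
```

Scripting the API in this way makes repeated documentation tasks reproducible, instead of retyping prompts in the chat window.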

 


Hands-on: How to create a new section of text based on a reference document

Starting point

  • A reference manual (e.g. Word or PDF) containing an older version of the documentation is available.
  • New features or modified technology require the document to be revised or reissued.
     

How it works: Update your manual efficiently – with local support

  • Content reuse: Analysis of existing reference documents (PDF, DOCX) with LLM
  • Structuring: Automatic generation of headings, lists and paragraphs
  • Terminology harmonisation: Matching of technical terminology with targeted prompts

 

Practical tips: Prompt engineering for local LLMs made easy

The key to success lies not in the model itself, but in something known as “prompt engineering”. Since we are using a relatively small local model compared to GPT-4, we need to be very precise here. For our example scenario, we would like to create a section for the new “Comments at segment level” feature based on an existing manual.

Prompt for creating a chapter

We came up with the following prompt for our enquiry. It is important to refer to the reference document, which you can insert using drag and drop.

Screenshot: Ollama software

And the likeable llama replies:

Screenshot: Ollama software

 

Example prompts for further useful queries

# Summary of content
“Summarise the contents of this file in 10 bullet points.”

# Creation of a new section
llama3 model: “Create a new section for the ‘Batch Export’ feature in the style of the reference document. Use short, clear sentences and a level 3 heading. Focus on step-by-step instructions.”

# Formatting adjustments
“Convert all step lists in the following text into numbered lists. Keep the technical terms, but simplify the wording slightly.”

Our tip: Always provide the reference document – this ensures that style and structure remain consistent.
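Prompts like these can be kept in a small, reusable template collection so that tone and constraints stay consistent across queries. A minimal sketch of that idea (the template texts and names are illustrative, not a fixed standard):

```python
# A tiny prompt-template helper: each template states the task, the style
# constraints, and a reminder not to invent technical details.
TEMPLATES = {
    "summary": "Summarise the contents of the reference document in {n} bullet points.",
    "new_section": (
        "Create a new section for the '{feature}' feature in the style of the "
        "reference document. Use short, clear sentences and a level {level} heading. "
        "Do not invent technical specifications."
    ),
}

def render_prompt(name: str, **params) -> str:
    """Fills a named template; raises KeyError for unknown templates or params."""
    return TEMPLATES[name].format(**params)

print(render_prompt("new_section", feature="Batch Export", level=3))
```

Keeping the constraints inside the template means every query automatically carries the same guard rails, such as the instruction not to invent specifications.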

 

Saving and post-processing

Even good AI needs editorial fine tuning:

  • Check the technical accuracy of the generated content.
  • Adapt the wording to your editorial style.
  • Add screenshots or diagrams if necessary.
  • Test the steps described to ensure they are correct.
  • Try out different models that suit your requirements.

 

Summary: Data protection meets productivity

Local LLMs such as Ollama open up new possibilities for creating technical documentation – without any dependence on the cloud. With precise prompt engineering and targeted post-processing, you can achieve significant time savings without compromising data protection and data sovereignty. Give it a try: you’ll be surprised at how quickly you achieve initial results.

 


More quick wins for technical writing

Scope of application | Description
Automatic structuring | Long passages of continuous text are separated into clear headings, lists and paragraphs.
Template creation | A reference document is used to create a standardised template (e.g. consistent structure for “Function description”, “Prerequisites”, “Examples”).
Linguistic harmonisation | All sections are harmonised to ensure a consistent style and tone (e.g. informal vs formal, passive vs active).

 


What you should bear in mind with local LLMs

Technical limitations

Context window:

  • Problem: Large documents (> 50 pages) → Possible loss of information during processing by the LLM
  • Solution: Chapter-by-chapter processing
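The chapter-by-chapter workaround can be scripted: split the source text at its headings and send each chunk to the model separately, so no single request exceeds the context window. A minimal sketch, assuming Markdown-style "## " headings as chapter markers:

```python
def split_by_chapter(text: str, marker: str = "## ") -> list[str]:
    """Splits a document into chapters at lines starting with the heading marker.

    Text before the first heading becomes its own chunk so nothing is lost.
    """
    chunks, current = [], []
    for line in text.splitlines():
        if line.startswith(marker) and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

doc = "Intro\n## Installation\nSteps...\n## Usage\nMore steps..."
for chunk in split_by_chapter(doc):
    print(chunk.splitlines()[0])  # each chunk would be prompted separately
```

Each chunk can then be processed with the same prompt, and the results reassembled in chapter order.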
     

Risk of “hallucinations”:

  • Problem: Fictitious technical details in the generated content
  • Solution: Prompt modification with restrictions, e.g. “Only change the explicitly highlighted area,” “Do not invent technical specifications.”

 

Compliance

  • GDPR: Compliance guaranteed through on-premises operation
  • Security audit: Have your IT security team conduct risk analysis prior to implementation
     

 


Local LLMs: Limits today – prospects for tomorrow

When local systems reach their limits – for example, when it comes to team-wide collaboration or larger volumes of documentation – the next step is clear: an integrated, cloud-based solution.

Get in touch with us – we’d be happy to show you how to optimally integrate AI into your technical writing processes.

If a translation is produced solely by AI, who is liable?

Posted on: January 27th, 2026 by Frank Wöhrle

Simple, fast, cost-effective: AI tools can now translate entire documents in mere seconds. But the crucial question that must be asked is: Who is liable for AI translations if something goes wrong?

In a corporate environment in particular, translation is more than just language – it involves both legal and economic responsibility. Incorrect translations can distort contracts, render product instructions unusable, damage a brand’s reputation or even, in the very worst case, lead to personal injury. And in liability cases? It is not the AI, but you as the user, who bears responsibility.

The risk: Liability for AI translations

Many companies now use AI tools on the fly and independently of existing processes. However, these tools do not assume any responsibility, nor do they offer any warranties.

Most providers will clearly state “Liability excluded” in their terms and conditions. If incorrect or misleading translations result in financial losses, these have to be borne by the company that used the AI.

Added to this is the issue of data protection. Anyone who uploads internal or confidential documents to a public AI system is, in effect, passing this data on to third parties. This may not only violate internal security policies, but also the European General Data Protection Regulation, or GDPR for short.

In short, the combination of liability gaps and data protection risks means that using uncontrolled AI translations can pose a genuine risk – especially in regulated industries.

Professional alternatives: Responsibility included

As a certified language service provider, we see AI as a tool – not as a substitute for professional service. Our service promise:
Technological efficiency, human validation and legal responsibility.

We work in accordance with ISO 17100 and use structured processes to ensure that every translation is technically accurate and consistent as well as being legally compliant.
The key difference is that we take responsibility for the results and vouch for them because our translations are documented, checked and approved.

Human-in-the-loop: The balance of man and machine

Our answer to the question of liability for AI translations is simple: keeping humans in the loop.

We use AI where it makes sense – in pretranslation, terminology work or quality assurance – but never without final verification carried out by a human.
Every translation is checked, revised and approved by experienced specialist translators.

This allows our customers to benefit from increased efficiency without compromising quality, data protection or legal security.

How we intelligently integrate AI

AI is an integral part of our workflow – but only within protected, certified systems. We use AI for:

  • Machine pretranslation combined with post-editing by our specialist translators.
  • Terminology extraction to automatically recognise relevant terms.
  • Terminology replacement and terminology checking to ensure consistent wording.
  • Harmonisation, i.e. implementing style guidelines and layout specifications.

This approach saves time and maintains quality whilst also ensuring complete traceability. AI provides support – but humans make the decisions.

CAT tool or AI tool? A crucial difference

Many people confuse computer-assisted translation tools (or CAT tools for short) with AI tools. But these tools differ in quite significant ways.

Aspect | CAT tool | AI tool (free or web-based)
Purpose | Support for professional translation processes | Instant translations for end users
Data sovereignty | Local or GDPR-compliant environment | Cloud-based, unknown data paths
Translation memory (TM) | Customer-specific, well-maintained language archive | No long-term storage
Terminology management | Use of approved technical terminology | No checks or approval processes
Quality assurance | Integrated checking tools, human-in-the-loop | No checks or liability
Liability | Service provider assumes responsibility | Excluded in the terms and conditions

Translation memory as a repository of quality work

A tool such as TransitNXT is more than just software – it is a process anchor. It stores verified translations in the translation memory (TM), links them to approved terminology and allows machine-translation suggestions to be integrated in a controlled manner.

This enables us to achieve consistent terminology, unified corporate language and efficiency in the long term.
In the context of TM, AI suggestions are always checked, harmonised and adapted as required.

This ensures that every project remains consistent, traceable and legally compliant. AI systems, on the other hand, operate in isolation: They simply produce texts – they don’t take responsibility for the resulting translations.

Legal reality: AI is not liable

The legal situation is also unequivocal. An AI cannot be a contractual partner or assume responsibility. If an incorrect translation causes damage – for example, through incorrect safety information, faulty assembly instructions or ambiguous commercial clauses – the company that used the AI remains liable.

A professional provider, on the other hand, offers documented quality assurance, version history, traceable testing processes and is also covered by public liability insurance should this ever be required.
This is the only way to ensure legal and liability security – as guaranteed by ISO-certified language service providers such as STAR Deutschland.

AI does not replace human discernment

Modern AI systems can imitate syntax and style, but cannot develop an awareness of context, culture or intention.
Our translators recognise when terms have different legal or technical meanings, when irony in marketing texts must be preserved, or when something needs to be adapted to the culture of the target readership.

This form of language is not the product of pure data processing – it is the result of professional experience.
That is the difference between a generated text and a responsible translation.

Summary: Checks create liability security

AI-assisted translation presents an opportunity – but offers no guarantees.
Anyone who produces translations in a fully automatic process must understand that AI systems bear absolutely no responsibility for the output.

Our approach combines the best of both worlds:

  • AI for speed and efficiency,
  • CAT tools for consistency and traceability,
  • Humans for quality, responsibility and cultural understanding.

This is how we ensure that every translation is legally compliant, linguistically accurate and fully in line with data protection regulations.

Because our motto is: AI supports – people safeguard.
This is the only way to achieve real certainty with regard to liability for AI translations.

Untranslatable terms in technical translation – precision meets comprehension

Posted on: December 18th, 2025 by Frank Wöhrle

Language as a tool for precise communication

Technical language thrives on precision. In documentation, service information and product descriptions, every single word matters – a single nuance in meaning can make the difference between correct operation and malfunction.

However, translators sometimes encounter terms that appear to be untranslatable: namely, words or phrases that cannot be translated directly into another language because they are too deeply rooted in a cultural or technical context. Because English serves as the lingua franca in many industries, English terms often reign supreme – you’re as likely to see the words “workaround” and “influencer” in a German text as in a fully English one. For the translation professional, the question remains: should these be explained for the benefit of lay readers who have little or no understanding of English?

Untranslatable technical terms – a challenge and a mark of quality

Glance at any German-English dictionary and the top match for “Anschlag” would be “stop”. And you certainly wouldn’t be wrong to translate it as such. But if end customers read an instruction to “open the drawer to the stop”, they could be forgiven for scratching their heads and wondering “what kind of stop?”. In reality, the best rendering for this sentence would be “open the drawer fully” – i.e. until it can be pulled out no further!

Such cases show that untranslatable technical terms are not a shortcoming of language, but rather evidence of its precision. A good translation therefore does not have to be word for word, but rather meaningful, functionally correct and technically comprehensible.

Strategies for translating technical language

Professional translators in the technical sector use various strategies to deal with terms that are difficult to translate:

  1. Paraphrase: If there is no direct equivalent, the function or application is described.
    Example: “Verschlimmbessern” → the (horribly unusable) literal translation of this German term is “worse bettering”, but in reality this needs to be rendered “unintentionally making a situation worse while trying to improve it”.
  2. Subject-specific terminology work: Terminology databases and glossaries ensure that all terms remain consistent, even in international projects.
  3. Transcreation in a technical context: Marketing texts or product brochures are not only translated, but also creatively localised as necessary in order to achieve the same impact for the target audience.
    Example: Depending on the target market, “excessive play”, relating to the tightness of a technical component, may be translated as “too loose”, if that best suits the desired tone and the intended readership.

This results in communication that accurately reflects the technical language and takes into account the mentality of the target audience.

Man and machine – precision in harmony

AI engines are capable of astonishing feats today, especially in the field of technical translation. However, when specialist knowledge, experience and contextual understanding are required, AI often reaches its limits. If an AI tool automatically translates “Schnecke” as “snail”, it takes a human to know that it should be rendered “auger” in the context of construction. And when it comes to the German “Mutter”, unfortunately it’s far too common to see “mother” being used to secure a bolt in place rather than a “nut”.

That’s why modern translation service providers combine the efficiency of a machine with the precision of a human – through post-editing and specialist revision. This is the only way to classify untranslatable terms meaningfully and integrate them into the company’s terminology system.

Language changes – and with it, technology

With every new product and every innovation, new terms are also created. They reflect not only technological development, but also the current ways of thinking. A translation service provider specialising in mechanical engineering therefore operates at the intersection of language, technology and international standards.

Whether it’s assembly instructions, CE-compliant documentation or product catalogues, the objective remains the same: comprehensibility across language barriers. And it is precisely where words reach their limits that the translator’s real work begins.

Summary: Technical language and sensitivity unite

Untranslatable terms remind us that language is not a rigid system – especially in the technical field. Translating means making complex concepts tangible without losing their precision.

As a language service provider with many years of experience, we know that every single term is important. That’s why we don’t just translate words, we understand what they mean in practice – for designers, engineers and anyone who works with precision.

Untranslatable? But by no means insurmountable. Please contact us.

Making e-learning effective worldwide: Achieve real learning success with professional localisation

Posted on: November 27th, 2025 by Frank Wöhrle

E-learning is considered a central pillar of continuing professional development in many companies – from global onboarding courses to complex product training. At the same time, projects repeatedly face similar challenges: Content that works extremely well in the country of origin loses its impact in other markets, is misunderstood or simply not used. The reason for this rarely lies in the didactic concept itself, but rather in the type and quality of localisation.

Why companies rely on e-learning

From a business perspective, numerous factors speak in favour of digital learning formats. Employees can learn flexibly – regardless of location, time zone and device, which suits geographically dispersed teams.

E-learning supports independent learning at any time: Content is available on demand, without having to rely on specific training dates. Thanks to their modular setup, learning units can be clearly structured, specifically combined and, if necessary, updated gradually.

Another advantage is that the learning pace is down to each individual: Employees can pause, repeat or delve deeper into complex content without disrupting anyone else’s flow. In addition, digital training makes it possible to create customised learning content – tailored to specific roles, regions or target groups within the company.

Multimedia elements such as videos, animations, interactive exercises and quizzes create a rich learning experience and increase engagement. Offering content in multiple languages contributes significantly to accessibility and, for international workforces, makes real headway in terms of removing barriers to learning.

Ultimately, when these factors are successfully implemented, they lead to increased learning success – measurable in terms of knowledge transfer, application in daily work and reduced error rates.

Complexity of modern e-learning formats

In practice, it quickly becomes apparent that e-learning courses are significantly more complex than traditional training materials. A typical module includes slides or screen recordings, embedded videos, spoken commentary, subtitles and interactive elements such as quizzes, conversation simulations and the like.

When it comes to localisation, this means the content to be translated is not contained in a single file or format. It is distributed across a variety of authoring tools such as Adobe Captivate, Articulate Storyline, Articulate Rise, iSpring, Elucidat or Lectora, across SCORM packages, video and audio scripts, and sometimes external sources (e.g. course descriptions or files linked to the course). Video and audio scripts may even need to be transcribed before localisation can begin. In addition, there are technical requirements – such as support for character sets, space restrictions in buttons, and synchronising subtitles and voiceovers.

Underestimating the complexity of this process often leads to problems during the project: Missing text exports, texts with similar content in different formats, untranslated UI elements or videos that have to be edited retrospectively at great expense. For a localisation process to run smoothly, therefore, a structured approach that takes all components into account from the outset is crucial.

Learning in your native language: an efficiency factor

From a didactic perspective, it is well documented that learning content is best internalised when delivered in one’s own mother tongue. Learners then need to expend less cognitive effort in understanding the language and can concentrate more on content, context and application.

This is particularly important when dealing with complex, security-related or legal issues in order to avoid misunderstandings and misinterpretations. Emotional access also plays a role: Language can influence how credible, esteemed and motivating a training course is perceived to be.

For companies, this means that even employees with good foreign language skills benefit from training in their native language – namely, by making faster and more stable progress in their learning. Those who systematically utilise these positive consequences are able to significantly increase the effectiveness of global learning programmes and, at the same time, justify the investment in localisation.

 

Photo: An editor with a headset sits smiling at a desk in a modern office.

The role of professional specialist translations in e-learning localisation

In order for e-learning courses in other languages to achieve the same learning objectives as the original, a basic word-for-word translation simply cannot do the job.
Native-speaking specialist translators combine linguistic competence with industry knowledge and are familiar with the terminology and common phrases used in their respective fields of expertise.

They ensure that technical terms are used consistently, instructions are clear and action-oriented, and didactic subtleties are preserved.
At the same time, they adapt examples, metaphors or references if these are not readily translatable with regard to either culture or context.

Professional translation therefore makes a significant contribution to learning objectives being achieved quickly: Content is easier to understand, easier to remember and more likely to be put into practice.
A clearly defined terminology and review process also supports company-wide consistency, both in terms of corporate documentation and individual learning outcomes. This is especially the case when dealing with a vast number of courses and a multitude of languages.

The importance of professionally localised audio and video

Audio and video are key carriers of information and sources of motivation in modern e-learning courses – and pose particular challenges for localisation.
Voiceover texts must be translated in such a way that they match the visual material in terms of tone, length and rhythm, while at the same time being technically accurate.

For voiceovers, you also need to select suitable narrators or satisfactory AI software to suit your corporate image and target audience.
In addition to the voice, elements such as the narrator’s gender, age, pronunciation quality, accent and/or dialect, and any background sound such as music, etc., are crucial in order to avoid misunderstandings and convey an entirely professional feel.

The client’s specifications regarding desired pronunciation, use of abbreviations, accessible (i.e. gender-neutral) language, and so on, are essential, as the client’s satisfaction will be strongly tied to how well these requests are implemented.

Subtitles, on the other hand, must be precise, easy to read and synchronised with the spoken word. The wording must be concise as well as complete, with some rephrasing required.
Last but not least, visual elements – such as text overlays or UI screens – may need to be adapted to ensure that they remain understandable in the target language and fit in with the design in terms of form.
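One aspect of subtitle quality can even be checked mechanically: reading speed. A common rule of thumb is to keep subtitles below roughly 17 characters per second; the sketch below assumes that threshold and a simple start/end timing in seconds, neither of which is a formal studio standard:

```python
def reading_speed(text: str, start_s: float, end_s: float) -> float:
    """Characters per second for one subtitle cue (spaces ignored)."""
    duration = end_s - start_s
    if duration <= 0:
        raise ValueError("cue must have a positive duration")
    return len(text.replace(" ", "")) / duration

# Flag cues whose reading speed exceeds the chosen limit.
cues = [
    ("Open the drawer fully.", 0.0, 2.0),
    ("Comments can now be added at segment level in one click.", 2.0, 4.5),
]
for text, start, end in cues:
    speed = reading_speed(text, start, end)
    if speed > 17.0:
        print(f"Too fast ({speed:.1f} cps): {text}")
```

Cues flagged this way are candidates for the rephrasing and condensing described above.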

Licence for e-learning

Last but not least, when using human voices for voiceovers, the customer must also clarify the intended use and reach. Are the e-learning courses with voiceover to be used exclusively internally or also publicly? Are there any plans to sell the courses commercially to third parties?
Depending on the type of transmission and the type of media/rights usage, the recording studio involved in the project may charge a licence fee per voice artist used. In most cases, these are flat fees with indefinite validity.

Summary: Localisation as an integral part of the e-learning strategy

E-learning can only reach its full potential when content is tailored to the specific language and culture.
Companies wishing to roll out e-learning courses internationally would do well to consider localisation as an integral part of the conceptualisation process from the outset – rather than as a downstream translation step.

The combination of didactically excellent courses, native-language specialist translation and professionally localised audio and video elements forms the basis for true learning success in multiple languages.
This makes global training programmes consistent, efficient and effective – and fulfils the requirement to make knowledge accessible across the globe without ever compromising on quality.

Get in touch to find out how we can help in ensuring your learning content achieves exactly the international impact you want – we speak your language!

Terminology in the translation process: Why it’s more important than ever in the age of AI

Posted on: October 30th, 2025 by Frank Wöhrle

Terminology work has always been the key to consistent, high-quality translations. The advent of AI and large language models (LLMs) is fundamentally changing translation processes. But this is precisely what makes terminology even more important.

“AI now translates everything perfectly – so why do we still need terminology management?”
As a language service provider, we are increasingly hearing this question being posed. But anyone who has ever seen how a single mistranslated technical term can distort a product description, manual or marketing message knows that terminology is not a side issue – it is the foundation of high-quality translation.

In the age of LLMs and generative AI, the how of handling terminology is changing. The why, however, remains unchanged.

Why terminology is the backbone of every translation

Terminology is much more than just a dictionary. It defines how a company talks about its products, services and values.

Whether we’re talking about “controllers”, “control modules” or “control units”, the right term ensures recognition, trust and legal certainty.

Without consistent terminology, inconsistencies are bound to arise. In practice, this leads to:

  • different translations for the same term,
  • increased correction effort,
  • unnecessary additional costs, multiplied by the number of target languages,
  • inconsistent brand communication,
  • misunderstandings among customers or users.


Consistent terminology management is crucial for ensuring consistency and precision in the translation process, especially when dealing with high-volume, multilingual content from the fields of technical documentation or marketing.
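In practice, even a small script can catch the first kind of inconsistency listed above: map deprecated variants to the approved term and scan the text for them. A minimal sketch with invented example terms (a real check would run inside the CAT tool or TMS):

```python
import re

# Hypothetical term base: deprecated variant -> approved term.
TERM_BASE = {
    "control module": "control unit",
    "controller": "control unit",
}

def find_term_violations(text: str) -> list[tuple[str, str]]:
    """Returns (found variant, approved term) pairs for each deprecated hit."""
    hits = []
    for variant, approved in TERM_BASE.items():
        if re.search(rf"\b{re.escape(variant)}\b", text, re.IGNORECASE):
            hits.append((variant, approved))
    return hits

print(find_term_violations("Connect the controller to the control unit."))
```

Each reported pair tells the reviewer which variant to replace, keeping the approved term consistent across all target languages.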

From glossaries to integrated solutions: Successful translation processes thanks to terminology

In the past, terminology work was often handled outside of the actual translation process – in the form of Excel spreadsheets or static lists. Today, it can be seamlessly integrated into translation management systems (TMS).
This enables:

  • automatic terminology suggestions directly in the CAT tool,
  • terminology checks during translation,
  • centralised maintenance and approval processes.


This makes terminology a living part of the workflow, not just an afterthought during quality control.

How AI and LLMs are changing terminology work

AI systems and LLMs open up new possibilities for maintaining terminology in a more dynamic and intelligent way. Some specific applications may include:

  • AI terminology extraction:
    AI can quickly analyse multilingual texts to automatically recognise relevant technical terms and suggest them as term candidates. This saves time during the creation phase and helps to identify terminology that has not been taken into account previously. However, final validation remains the task of human experts.
  • Building a terminology database:
    If translations or a defined structure have yet to be established, generative AI can support the creation of a terminology database. This allows variants and synonyms to be clustered efficiently, while metadata such as context, grammatical information or suggested definitions are generated automatically. However, the final review and validation stages are still handled by humans.
  • Terminology checks by AI:
    Terminology errors identified by a rule-based check are sent to the AI, where they are evaluated and corrected in their overall context, taking into account additional terminological information.
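Term-candidate extraction of the kind described above can even be approximated without an LLM, for example by counting recurring capitalised multi-word sequences. The sketch below is deliberately naive – a real extractor would use linguistic analysis rather than a regular expression:

```python
import re
from collections import Counter

def term_candidates(text: str, min_count: int = 2) -> list[str]:
    """Collects capitalised two-word sequences that recur in the text."""
    pairs = re.findall(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", text)
    counts = Counter(pairs)
    return [term for term, n in counts.most_common() if n >= min_count]

sample = (
    "Open the Batch Export dialog. "
    "Batch Export settings are stored per user."
)
print(term_candidates(sample))
```

As with the LLM-based approach, the output is only a list of candidates; validation against the corporate term base remains a human task.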


These new approaches make terminology work faster, more scalable and more data-driven. At the same time, it remains dependent on human validation – because AI does not automatically understand corporate language or brand values.

Limitations and risks: When AI ‘invents’ terms

As powerful as LLMs are, they also pose a real risk. This is because an AI model can:

  • ‘hallucinate’ terms – i.e. create plausible but incorrect terms,
  • overlook customer-specific requirements if these are not clearly specified in the prompt or system,
  • expose confidential terminology data if it is fed into publicly accessible systems.


The conclusion? AI can support, but not decide. Human expertise remains indispensable when deciding whether a term is terminologically correct, brand-compliant and contextually appropriate.

Best practices: How we combine human expertise with the power of AI

As a language service provider, we see the added value in using technology sensibly – not in automating everything blindly. Successful terminology work in the age of AI is based on four principles:

  • Centralisation:
    All terminology data belongs in a central database – not in miscellaneous lists scattered far and wide.
  • Integration:
    Terminology must be directly linked to CAT tools so that translators can access it in real time.
  • AI as a support, not a substitute:
    AI tools can assist with research, extraction and checks – but final validation remains in human hands.
  • Security awareness:
    Sensitive terminology data should only be processed in data protection-compliant, controlled systems.
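As a minimal illustration of the centralisation and integration principles, a CAT-side component might load term pairs from a central term base exported in TBX (TermBase eXchange) format. The snippet and entry below are invented and drastically simplified – real exports carry much richer metadata such as definitions, status and usage notes:

```python
import xml.etree.ElementTree as ET

# Invented, minimal TBX-style snippet with a single concept entry.
TBX = """<tbx>
  <termEntry id="c1">
    <langSet xml:lang="de"><tig><term>Schutzleiter</term></tig></langSet>
    <langSet xml:lang="en"><tig><term>protective earth conductor</term></tig></langSet>
  </termEntry>
</tbx>"""

# ElementTree maps the predefined xml: prefix to this namespace URI.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def load_terms(tbx_xml: str, src: str = "de", tgt: str = "en") -> dict[str, str]:
    """Build a source->target lookup from a TBX-like document."""
    pairs = {}
    for entry in ET.fromstring(tbx_xml).iter("termEntry"):
        terms = {ls.get(XML_LANG): ls.findtext(".//term")
                 for ls in entry.iter("langSet")}
        if src in terms and tgt in terms:
            pairs[terms[src]] = terms[tgt]
    return pairs

print(load_terms(TBX))  # → {'Schutzleiter': 'protective earth conductor'}
```

Keeping the term base central and exchanging it via a standard format is what lets every tool in the workflow – CAT editor, QA check or AI assistant – work from the same approved terminology.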

 

Terminology remains strategic corporate knowledge

Artificial intelligence and large language models are fundamentally changing how we work with language – but they are no substitute for terminology management. When used correctly, they actually make it more efficient and intelligent. Terminology is a company’s linguistic memory.
Particularly in the age of generative AI, clear and well-defined terms are crucial to ensure that humans and machines truly speak the same language.

Contact us if you want to build your terminology efficiently, maintain it consistently and optimise it with AI support – we will support you every step of the way.

Learn more about our services in combination with AI for efficient terminology management

tekom annual conference 2025

Posted on: October 7th, 2025 by Frank Wöhrle No Comments

We offer you a warm welcome!

The world’s biggest conference for technical communication will be held in Stuttgart from 11th to 13th November.
Visit us at tekom in the foyer at stand 21 to find out more about our language services, enterprise technologies and all the latest developments.

Your free ticket to the tekom trade fair

We would like to invite you to the tekom annual conference. Simply fill out this form and we will send you your personal trade fair code with which you can register straight away.

Please note:
The trade fair code is only valid for visiting the trade fair. The trade fair ticket is not valid for attending the conference.

STAR AI workshop at tekom (in German)

On 12th November at 4.30 p.m., come along to our STAR workshop entitled “AI as co-pilot?! Successfully navigating language and translation processes with AI assistance” to find out how you can use NMT and LLM technologies efficiently and sustainably for language and translation processes. (Please note that this workshop will be held in German only.)

You can register here

STAR expert demos at our stand (Foyer 21):

Discover live at our stand how you can save time with our solutions – our free demos bring everything to life!

  • Translation services with AI assistance
    Tuesday 11/11, 11.00 a.m. and Wednesday 12/11, 1.00 p.m.
  • Workflow automation and connectivity
    Tuesday 11/11, 4.00 p.m., Wednesday 12/11, 4.00 p.m. and Thursday 13/11, 11.00 a.m.
  • Personalised content delivery
    Tuesday 11/11, 1.00 p.m., Wednesday 12/11, 11.00 a.m. and Thursday 13/11, 1.00 p.m.

Party at our stand on 11/11 from 6 p.m. onwards – drinks, snacks and good conversation!

We cordially invite you to join us for a party at our stand on Tuesday 11th November, starting at 6 p.m. Just drop by – we look forward to spending time with you as the trade fair day draws to a close!

 

We’re looking forward to exchanging interesting ideas with you!

Transit NXT Service Pack 18: Smart functions for translation workflows – now available!

Posted on: September 9th, 2025 by Frank Wöhrle No Comments

With Transit NXT Service Pack 18, the STAR Group is introducing a powerful extension to its translation memory system – a real game changer for professional translators and project managers!

What’s new? – Highlights at a glance

  • Various improvements: The update enhances existing functionality to make translation even more efficient, and a variety of user requests have been implemented.
  • Translation of variables in InDesign documents: A particularly exciting addition is that variables in InDesign documents can now be translated directly. This gives translators additional flexibility when dealing with complex layout files.

Why is it worth upgrading to Service Pack 18?

  • Machine translation: DeepL Pro now also supports language variants in glossaries (e.g. for English, Portuguese and Chinese). For Textshuttle, you can now control whether terminology from project dictionaries is transferred to Textshuttle or not.
  • Project exchange: Transit NXT now supports Phrase projects. Users can unpack MXLIFF files directly, translate them and import them back into Phrase.
  • Optimised web search: In the integrated web search, the prioritisation of services has been optimised in order to obtain initial results even faster.

Which stakeholders benefit most from SP 18?

  • Professional translators who frequently work with DTP tools such as InDesign and want to edit the content of variables efficiently.
  • Project managers who want to equip their teams with an even more powerful CAT environment.
  • Companies that strive for high quality, speed and flexibility in localisation.

Follow this link to download the Service Pack

You can find more information at: https://www.star-deutschland.net/en/technology-and-software/software-products/

tekom annual conference 2025 preview

Posted on: July 28th, 2025 by Frank Wöhrle No Comments

In less than 4 months, the next tekom annual conference is set to open in Stuttgart.
The world’s biggest conference for technical communication will be held in Stuttgart from 11th to 13th November.
Come along to find out more about our language services, enterprise technologies and all the latest developments.

STAR’s AI workshop

On 12th November, come along to our STAR workshop entitled “AI as co-pilot?! Successfully navigating language and translation processes with AI assistance” to find out how you can use NMT and LLM technologies efficiently and sustainably for language and translation processes. (Please note that this workshop will be held in German only.)

You can find out more at: https://tcworldconference.tekom.de/conference-program

Secure your ticket now

Secure your ticket for tekom 2025 today at: https://tcworldconference.tekom.de/tickets/buy-ticket

We can’t wait for you to join us for some exciting presentations. We are looking forward to exchanging ideas with you!

STAR at the MT Summit 2025: trends, talking points and innovations

Posted on: July 8th, 2025 by Frank Wöhrle No Comments

This year’s MT Summit was held in Geneva, Switzerland, and featured a diverse programme of tutorials, workshops and inspiring presentations on the topics of machine translation (MT) and large language models (LLMs).

As a platinum sponsor of the event, STAR AG was on site together with three experts from the company’s Development, Support and Sales teams. STAR’s very own Language Technology Consultant, Julian Hamm, also attended the week-long conference to represent the company and took away new ideas and food for thought from research and industry.

While outside the temperatures were soaring, inside the very hottest trends were being presented – by technology providers and representatives from notable companies and institutions in a series of lectures and poster sessions. The dedicated organisation team from the University of Geneva put together a varied programme of events, while also setting the scene for valuable discussions.

Human in the cockpit – man and machine, a skilful combination

Despite staggering progress in the field of generative AI, this MT Summit made one thing clear: it simply doesn’t work without people!

This general philosophy was also key to our sponsored talk, entitled Human in the Cockpit – How GenAI is shaping the localisation industry and what it means for technology and business strategies. In their presentation, Diana Ballard and Julian Hamm demonstrated the influence that generative AI is exerting on the localisation industry, highlighting use cases of particular relevance for the use of AI.

As a longstanding technology and translation partner, STAR understands the precise requirements of users and continuously optimises its own tools and solutions to make them future-proof by means of integrating smart features.

Visitors to the STAR stand were able to get a hands-on experience through live demonstrations, alongside opportunities to speak to our experts about various aspects of AI in practice.
In addition to the integration of big-name LLM systems such as ChatGPT, the team demonstrated work on smaller local models, including TermFusion, a project optimised for terminology work that does not call for a dedicated GPU and can therefore be operated with very few resources. Local models will be used, for instance, to facilitate term extraction from bilingual data records or to intelligently correct terminology specifications. Building on this approach, other models are currently in development to make working in the translation tool even more efficient.

Artificial intelligence in localisation: it’s here to stay!

Current statistics on the use of AI in companies confirm that these developments are more than a passing trend. Customer contact, marketing and communication in particular are promising areas of application that are already being used intensively.

Survey: Application of generative artificial intelligence in companies in 2025
Published by the Statista Research Department, 20th May 2025

 

Even though the use of AI in localisation still varies widely, one thing is plain to see: there is no one-size-fits-all solution. After all, only those who are familiar with the use case and can clearly define its requirements will understand how the technology can be used wisely and sustainably.

After five days of in-depth discussions with representatives from research and industry, we are taking seven important insights away with us:

  • Neural machine translation (NMT) remains the most widely used language technology in localisation processes. In parallel, LLMs are increasingly being used to optimise NMT output, while in research NMT technology is steadily being displaced by LLMs.
     
  • Systems and workflows are increasingly geared towards seamless interplay between different translation resources. Translation memories (TM) and terminology databases provide important translation-relevant information and can be scaled up or down to produce better and more consistent translations. Another method establishing itself is retrieval augmented generation (RAG), whereby smaller databases can be used as a reference point for text creation or translation.
     
  • In certain use cases, generic AI models outperform open-source models. Customisation in the form of translation rules or automatic terminology adjustments is making its way into many commercial solutions. In the medium to long term, this approach looks set to overtake the earlier method of dedicated training for NMT systems.
     
  • Growing translation volumes alongside the overall squeezing of prices call for the use of intelligent analysis tools to evaluate the added value of using AI and automating processes for the long term. The integration of models for MT quality estimation and the evaluation of translations using suitable metrics, in some cases assisted by an LLM, are particularly relevant at the moment.
     
  • Not all tasks necessarily have to be performed by an LLM, however. There is still a place for conventional rule-based approaches, such as the use of regular expressions in quality assurance, and in some instances, these can actually prove more efficient than LLM-based mechanisms.
     
  • LLMs are already capable of analysing texts at a document level and identifying distant connections. In CAT tools, however, translation is almost always performed at a segment level. Does the technology need rethinking here? While it is evident that creation systems and translation resources are increasingly being merged, this calls for new and innovative approaches for handling translation resources and AI systems.
     
  • More and more content is being created or translated by generative AI. The impact of this is felt in our culture, language and social life, for example through heightened media consumption via social media platforms or the gradual suppression of minority languages. Researchers are currently studying the effects of generative AI on our communication behaviour.
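The point about conventional rule-based approaches is easy to make concrete: a simple regular-expression check – here, verifying that every number in the source segment reappears in the target – needs no LLM at all. The example segments are invented for illustration:

```python
import re

# Matches integers and decimals with either a comma or a point separator.
NUM = re.compile(r"\d+(?:[.,]\d+)?")

def numbers_match(source: str, target: str) -> bool:
    """QA check: every number in the source must reappear in the target.

    Decimal separators are normalised so that '3,5' (DE) matches '3.5' (EN).
    """
    def norm(text: str) -> list[str]:
        return sorted(n.replace(",", ".") for n in NUM.findall(text))
    return norm(source) == norm(target)

print(numbers_match("Ziehen Sie die Schraube mit 3,5 Nm an.",
                    "Tighten the screw to 3.5 Nm."))   # → True
print(numbers_match("Kapitel 4, Seite 12",
                    "Chapter 4, page 21"))             # → False
```

Checks like this run deterministically on every segment in milliseconds – a good reminder that the right tool is not always the largest model.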

 

Did you miss the MT Summit 2025 and want to find out more about the latest trends?

Watch our webinar recordings now and discover how you can improve translation, terminology and content creation over the long term.

Transit NXT: The underestimated CAT tool that has the professionals convinced

Posted on: July 2nd, 2025 by Frank Wöhrle No Comments

Anyone who regularly works with CAT tools (computer-aided translation software) probably thinks of Trados Studio, memoQ or Across first. One name is often overlooked – and unfairly so: Transit NXT, the underestimated CAT tool from the STAR Group. It is a genuine powerhouse for anyone who wants their work to be structured, consistent and terminology-focused.

What actually is Transit NXT?

Transit is a professional CAT tool that has been on the market since the 1990s. It combines classic segmentation with a project-orientated working method – incorporating translation memory, terminology management, preview options, quality checks and various functions designed specifically for technical documentation.
The extensive and growing portfolio of AI features, which is demonstrated in a series of short videos on our YouTube channel, is not to be missed.

5 reasons why so many professionals have put their trust in Transit NXT for years

1. Up-to-date and contextualised terminology

Transit works seamlessly with TermStar. Live terminology entries are displayed to translators within the editor itself – including the definition, context and source of the term. This deep integration is a clear benefit over tools where terminology is relegated to the sidelines.

2. Project structure, not file chaos

Unlike other CAT tools, Transit thinks in terms of projects with a clear-cut file structure. This takes the hard work out of managing big or lengthy translation projects – especially when it comes to regular updates or complex workflows.

3. Need technical formats? No problem with Transit.

Whether DITA, XML, FrameMaker, InDesign or XLIFF – Transit leads the way when it comes to the variety of natively supported file formats. Many other tools need extra modules or conversions to handle these files.

4. Local installation – full data sovereignty

Transit NXT works entirely locally – without any cloud obligations. For companies that have high data protection requirements, this is a crucial advantage over cloud-based solutions.

5. Quality assurance at the highest level

With automated checks, an in-context preview function and variant check, Transit NXT offers precise quality management for an impressive level of efficiency that is especially beneficial for those handling technical content.

 

Transit software user interface (screenshot)

Who is Transit most suited to?

  • Technical translators working with complex formats.
  • Public authorities, industrial companies and service providers who need to keep sensitive data locally.
  • Freelancers who attach great importance to a reliably maintained terminology.
  • Translation agencies that want an efficient tool for managing large structured projects.

Sound good?

Transit NXT is no entry-level tool – but that is precisely what makes it a great option for anyone who values structure, terminology and format variety.

If you want to see for yourself how Transit works, simply request a non-binding trial version now.