SubComm Infographics – Different Types of Subtitles

By Hannah Silvester

This infographic goes hand-in-hand with the infographic Who watches subtitles? and offers an overview of some of the different kinds of subtitles that may be available to people when accessing audiovisual content. The list of different types of subtitles is not exhaustive, but includes some of the most common ones.

Interlingual subtitles

These subtitles involve a move from one language to another. They offer a written translation of the dialogue, narration, some song lyrics and usually other content that is relevant to the plot (such as signs, text messages or banners that may appear on screen). For the most part, these subtitles are produced for hearing viewers, and therefore don’t include sound effects or speaker identification, as the viewer can still access these through the soundtrack. Interlingual subtitling of dialogue has been described as ‘diagonal translation’, because we’re moving not only from one language to another (for example from French into English), but also from the oral to the written. This is interesting because it’s quite common to hear things said aloud that you would not generally see in writing, particularly slang, less formal language or swearing. One example, offered by Díaz Cintas and Remael (2021, p. 185) from The Grapes of Wrath, is the following:

Dialogue: I’d’av walked if my dogs wasn’t pooped out.

French subtitle: J’aurais marché si j’étais pas crevé.

English gloss: [I’d have walked if I wasn’t exhausted]

Jorge Díaz Cintas and Aline Remael’s book Subtitling: Concepts and Practices is a key reference work on subtitling in general. It offers a good insight into the general requirements for interlingual subtitles, and examines some of the aspects of subtitling that can present a challenge, such as language variation and humour. You can also find more resources about subtitles in general on SUBTLE’s website, under Resources.

Automatically generated subtitles

Photo by Collabstr on Unsplash

These subtitles are produced using software that performs ‘automatic speech recognition’, meaning it produces a written version of the speech it receives as input. As there is no human intervention, there may be mistakes where the software has ‘misheard’ words and the output does not match the input. The accuracy of speech recognition software varies depending on the language involved, and it generally works better for bigger, more widely spoken languages, such as English. Sometimes, a machine is then used to translate these subtitles into another language. These would be interlingual subtitles, as above, but the same issues regarding mistakes, such as misheard words, apply, since the accuracy of the text to be translated has not been checked by a human, and neither has the resulting output. Automatically generated subtitles are most commonly found online, for example on YouTube, where no human translation is available. You can read more about using automatically generated subtitles on YouTube here, and about accuracy in automatic vs human live captions in English here (Romero-Fresco and Fresno, 2023).
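One simple way to quantify the kind of accuracy discussed above is word error rate (WER): the minimum number of word substitutions, insertions and deletions needed to turn the automatic transcript into a human reference transcript, divided by the length of the reference. (Researchers such as Romero-Fresco use richer models that weigh how serious each error is; the sketch below is only the basic WER calculation, and the example sentences are invented for illustration.)

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Minimum word-level edit distance, divided by the reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i              # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j              # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1   # substitution
            dp[i][j] = min(dp[i - 1][j] + 1,              # deletion
                           dp[i][j - 1] + 1,              # insertion
                           dp[i - 1][j - 1] + cost)
    return dp[len(ref)][len(hyp)] / len(ref)

# One 'misheard' word out of five gives a WER of 0.2, i.e. 20%.
print(word_error_rate("turn left at the lights",
                      "turn left at the light"))
```

In practice, a low WER does not guarantee usable subtitles, since a single misheard content word can change the meaning of a whole line, which is why severity-weighted measures exist.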

Watch a scene from Good Will Hunting (Van Sant, 1997) with automatically generated subtitles on YouTube.

Live Subtitles

These subtitles are produced in real time, for live broadcasts such as news or current affairs programmes. They are created by humans using ‘voice recognition software’: the subtitler speaks their subtitles aloud, including punctuation, and the software transcribes what they have said into subtitles. The subtitler usually then has a short delay in which to check their subtitles before they are streamed live. You can see a video of how respeaking is done at the BBC here. Live subtitling is also used at some live events, including conferences, to improve accessibility, and is also referred to as speech-to-text interpreting (STTI). Due to their nature, live subtitles often appear on screen in a different pattern to interlingual subtitles and subtitles for the d/Deaf and hard-of-hearing: they sometimes appear in smaller chunks of words or phrases, rather than as full subtitles of one or two lines that appear and disappear in regular succession.
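The chunked display pattern described above can be pictured with a small sketch: as recognised words arrive, they are flushed to the screen in short groups rather than as pre-timed one- or two-line blocks. This is purely illustrative; real live subtitling systems are far more sophisticated, and the chunk size here is an arbitrary assumption.

```python
def chunk_live_words(words, max_chunk=4):
    """Group an incoming word stream into the short display chunks
    typical of live captions, rather than full pre-timed subtitles."""
    chunks, current = [], []
    for word in words:
        current.append(word)
        # Flush at a natural break (punctuation) or when the chunk gets long.
        if word.endswith((".", ",", "?", "!")) or len(current) >= max_chunk:
            chunks.append(" ".join(current))
            current = []
    if current:
        chunks.append(" ".join(current))   # flush whatever is left
    return chunks

stream = "Good evening, here are tonight's headlines.".split()
print(chunk_live_words(stream))
```

A consequence of this word-by-word delivery, together with the respeaking and checking steps, is the short lag between what is said on air and what appears on screen.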

Photo by Zhifei Zhou on Unsplash

Subtitles for the d/Deaf and Hard-of-Hearing (SDH), Closed Captions (CC)

SDH, or Subtitles for the d/Deaf and Hard-of-Hearing, include as much of the speech as possible. Depending on how much text can be included in the subtitles whilst keeping them easy to read, the speech might be included word-for-word, or it might be slightly condensed. As it can’t be assumed that people using these subtitles (for more on audiences, see SubComm Infographics – Who Watches Subtitles?) can access other aspects of the soundtrack, we usually find elements such as sound effects included. For example, if a door slams, this may signal that another character has entered, or that someone has left the scene, and this information might be really important for the plot. This article gives some insight into the SDH for Stranger Things, which were very well received; you can also see some examples of the subtitles in the article. SDH also usually indicate who is speaking a particular line, and there are a number of ways this can be done. Sometimes the speaker’s name is included in the subtitle text; at other times, different colours, or the position of the subtitle on the screen, can be used to show who is saying the line. Josélia Neves’ article ‘10 Fallacies about Subtitling for the d/Deaf and the hard of hearing’ is recommended reading. Another good reference work on this topic is the book Captioning and Subtitling for d/Deaf and Hard of Hearing Audiences, by Soledad Zárate.
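As a rough illustration of these conventions, the hypothetical helper below builds a single SDH cue in the widely used SubRip (.srt) file format, adding a bracketed sound effect and the speaker’s name to the subtitle text. The function, its parameters and the cue content are invented for illustration, not taken from any real subtitle file or tool.

```python
def format_sdh_cue(index, start, end, text, speaker=None, sound_effect=None):
    """Build one SDH cue in SRT format: cue index, timecodes, then the
    subtitle text, optionally with a sound effect and speaker identification."""
    lines = []
    if sound_effect:
        lines.append(f"[{sound_effect}]")            # e.g. [door slams]
    if speaker:
        lines.append(f"{speaker.upper()}: {text}")   # name the speaker in the text
    else:
        lines.append(text)
    return f"{index}\n{start} --> {end}\n" + "\n".join(lines) + "\n"

print(format_sdh_cue(1, "00:01:04,000", "00:01:06,500",
                     "Who's there?", speaker="JOYCE", sound_effect="door slams"))
```

In real SDH files, speaker identification might instead be conveyed by colour or on-screen position, which SRT itself cannot encode; richer formats or broadcaster-specific systems are used for that.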


Fansubs

These translations are made by fans, who are not paid. Fansubs are usually created online, by people who like the TV series or film they are translating. They were originally produced for anime, when there weren’t any subtitles in the languages through which fans wanted to access content. They have since grown in popularity across languages, although the advent of streaming platforms and the ‘US+24’ subtitling model (where content is expected to be available in translation within 24 hours of its US release) has led to a shift in motivation for fansubbing. Official subtitles are now available more quickly for a wider range of content, but fansubbers might work to subtitle content which is not accessible in a given language or context due to factors such as censorship. Read about Dingkun Wang and Xiaochun Zhang’s research on activist fansubbing in China – ‘Fansubbing in China: Technology-facilitated activism in translation’. Fansubs are legally contentious, although it is generally the sharing of copyrighted content, rather than the creation of the subtitles themselves, that is regarded as illegal. Fans usually work together online, using free software, to create the subs for an episode or film, and then share these subs online. Fansubs differ from professional interlingual subtitles in that, depending on which fansubbing group creates them, they may not be required to meet the same rules as other subtitles (i.e. those outlined in the infographic ‘Subtitles: A Balancing Act’). This means that fansubs sometimes include extra information about culture-specific elements, or are more experimental in terms of placement, font, colours, reading speed, etc. Max Deryagin shares some examples of creative fansubbing techniques in this blog post.


Further Resources

AVT Masterclass offers a range of paid and free webinars on topics related to these types of subtitles, including SDH and starting out as a subtitler.

The Audiovisual Division of the American Translators Association also has many interesting blog posts.

The Journal of Audiovisual Translation (JAT) publishes research articles and practice reports on relevant topics.

Are there further open access resources you would recommend on this topic? Please comment below with links!
