
A guide to using subtitles, captions and transcripts for accessibility

Zoe Portlock
Learn how to make your videos more accessible with subtitles, closed captions and transcripts. We explain the difference between each and how they help people with various access needs.

When it comes to making video content accessible, these are terms we’ve all heard and used at times. But if you’re unsure which term refers to which practice, you are not alone. Subtitles and captions are often used interchangeably, even by large streaming and media platforms like YouTube and Netflix. But each format is distinct and offers different benefits for disabled users.

Let’s take a look at the definitions for each, why they’re used and who benefits from them.

The difference between subtitles and captions

  • Subtitles translate the speech on-screen to text. They are generally designed for viewers who can hear but do not understand the language in the video.
  • Captions transcribe both speech and additional audio cues like “knock on the door.” They are designed for viewers who cannot hear the audio in the video.

Both subtitles and captions appear as words onscreen in the video player and are synchronised with the audio.
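As a concrete illustration, subtitle and caption tracks are commonly authored in a timed-text format such as WebVTT, where each cue pairs a time range with the text to display. A minimal sketch (the timings and dialogue here are made up for illustration):

```
WEBVTT

00:00:01.000 --> 00:00:04.000
Welcome to camp, everyone!

00:00:05.500 --> 00:00:08.000
[cheering and applause]
```

Note that a caption cue can carry non-speech information like “[cheering and applause]”, while a subtitle track would typically contain only the spoken dialogue.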


Judith Heumann interviewed in the Netflix documentary Crip Camp: A Disability Revolution, shown with English subtitles.

Subtitle text displays within the video player itself. Subtitles translate the spoken audio on the screen but generally assume that the viewer can hear. This means they only cover the spoken dialogue, often missing out sounds like “knock on the door.”

Crip Camp shown with Polish subtitles.

Who they benefit

Subtitles are helpful when the video is in a language foreign to the viewer. They also help users who have difficulty processing auditory information, such as people with autism or dyspraxia.

But because they transcribe only speech and leave out all other audio, subtitles are not fully accessible to people who are deaf or hearing-impaired.


Captions

Captions come in two forms: open and closed. Both are often confusingly described as subtitles.

Closed captions such as “cheers and applause” provide extra context to onscreen audio.

Captions appear in the video player itself, but as well as the dialogue, they include additional audio cues. Because they are created for deaf and hearing-impaired viewers, they also include all other audio information necessary to understand the video content.

These non-speech sounds (such as a doorbell or fire alarm sounding) and indications of tone of voice (such as sarcasm) are essential for understanding. Closed captions give viewers access to context and nuance beyond the spoken dialogue.

Closed captions

With closed captions, viewers can choose to turn them on or off.

Closed captions are the most common form of captioning. Some providers also offer customisation options, such as a choice of font, colour, size and opacity of the background behind the captions.
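On the web, one common way to attach closed captions is the HTML `<track>` element, which points the video player at a caption file the viewer can toggle. A minimal sketch, assuming hypothetical file names:

```html
<video controls>
  <source src="interview.mp4" type="video/mp4" />
  <!-- kind="captions" marks this track as captions (speech plus
       audio cues) rather than subtitles; the browser's player
       lets the viewer turn it on or off -->
  <track kind="captions" src="interview-captions.en.vtt"
         srclang="en" label="English captions" />
</video>
```

Adding further `<track>` elements with `kind="subtitles"` and different `srclang` values is how a player can offer both captions and translated subtitles side by side.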

Open captions

A person may add open captions during the video production stage. The viewer cannot turn them on or off. Open captions are commonly used on social media feeds, especially Facebook, where 85% of videos are watched without sound.

Scope’s Support to Work video with open captions, which are added during production.

Remember that captions alone do not provide a full-text alternative to the video. Captions assume that the user is able to, or wants to, engage with the video player itself.

Supplying both captions and subtitles gives your users a choice in how they want to consume your video content, whether they have access needs or not.


Example of a film transcript beneath a video by Scope on the social model of disability. The transcript is clearly visible on the web page beneath the video player.

Transcripts

Transcripts provide access to your video content in a different format and medium.

Types of transcript

  • Basic transcripts simply relay the spoken dialogue within the video. They are used by people who are deaf, hearing-impaired or have difficulty processing auditory information.
  • Descriptive transcripts include all the audio and visual information a non-disabled user would get from watching the video. They are specifically designed for people who are both deaf and blind.

Think of them as a detailed video script that includes all the necessary information to understand what the video is about. They can either be accessed in a new webpage or as part of an existing page where the video is embedded (like an accordion element, for example).
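The accordion pattern mentioned above can be implemented in plain HTML with a `<details>` element, which keeps the transcript on the same page as the video without requiring the video player. A sketch with made-up transcript content:

```html
<details>
  <summary>Read the video transcript</summary>
  <!-- Visual information is described alongside the dialogue -->
  <p>[On-screen title: The social model of disability]</p>
  <p>Narrator: The social model says people are disabled by
     barriers in society, not by their impairment.</p>
</details>
```

Because `<details>` is native HTML, the expand/collapse behaviour works for keyboard and screen reader users without any extra scripting.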

Transcripts also describe titles and visual cues that a non-disabled viewer would get from watching the video itself. The main difference is that a transcript is separate to the video player, and does not require the user to engage with the video player itself.

Contextualising the dialogue when you create a transcript is important. If your video introduces the speakers with their name and job title in writing, that information must be introduced in the transcript too.

If there’s a graph, chart or infographic in the video, you must also include a description of these. A transcript which only has the dialogue is not an accessible or complete transcript.

It’s useful to distinguish between transcripts and transcription, as the two terms are easy to confuse. Transcription is the act of converting speech and other audio into written text. The transcript is the end product.

Who they benefit

Transcripts are the only way to make audio-only content, like podcasts or radio programmes, accessible to people who are deaf or have a hearing impairment.

Transcripts are the only way a deafblind person (someone who is deaf and blind) can access your video content independently. They can do this by using a tactile device such as a refreshable braille display.

Some people may want to keep their own pace and speed of consuming the content. For example, someone who prefers to skim read may not want to pause their reading flow by watching a video. This can be useful for screen reader users, too.

Providing a transcript as well as captions is a great way to offer all your users choice. It gives people the option to skip the media player altogether (for accessibility reasons or otherwise) without missing out on the content.

Search engine optimisation and transcripts

As well as improving accessibility, transcripts offer powerful search engine optimisation (SEO) benefits. A transcript exposes your video’s content as text, letting search engines like Google effectively “read” the video so it can be indexed and ranked. This can lead to greater visibility in search results, directing more users to your website.
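One way to make a transcript explicit to search engines is structured data: schema.org’s VideoObject type includes a transcript property. A sketch with placeholder values (the name, dates and text here are invented for illustration):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "The social model of disability",
  "description": "An explainer on the social model of disability.",
  "uploadDate": "2020-01-01",
  "transcript": "Narrator: The social model says people are disabled by barriers in society, not by their impairment."
}
</script>
```

Even without structured data, simply publishing the transcript as visible text on the page gives search engines the same content to index.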

Providing accessible alternatives like subtitles, captions and transcripts, gives each user the choice of how they interact with your content. This allows the broadest possible group of users to access and enjoy your work.

Captions, subtitles and transcripts each do a different job; none of them fully covers the others. Providing a combination of each is the best way to serve all your users’ needs.

More resources on video accessibility

Making audio and video media accessible (Web Accessibility Initiative)
Captions, Transcripts and Audio Description (WebAIM)

Contributor: Zoe Portlock
Organisation: Freelance Writer, The Big Hack


