All non-text content that is presented to the user has a text alternative that serves the equivalent purpose.
Any information conveyed by means other than text needs to be provided in text as well. The text should provide equivalent information. Examples of non-text content are images, graphs, media, animations, CAPTCHA, and audio alerts. Short and long text alternatives can be used as needed to convey the equivalent of the non-text content. The most common short text alternative is the alt attribute on images. Examples of long alternatives include descriptions for graphs, charts, or complex images.
Exceptions: There are situations where providing a text equivalent is either not possible or not desirable. Exceptions include controls, inputs, tests, specific sensory experiences, and CAPTCHA. Content that is decorative, used to provide formatting, or visually hidden does not require a text alternative.
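For HTML content, the short text alternative is the img element's alt attribute. As a rough sketch (the file names and the AltChecker helper are invented for illustration), a small audit that flags images with no text alternative might look like this:

```python
from html.parser import HTMLParser

# Hypothetical helper: collect <img> tags that have no alt attribute at all.
# Note that an empty alt ("") is valid for purely decorative images.
class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing_alt.append(attrs.get("src", "(no src)"))

html = """
<img src="chart.png" alt="Sales rose 12% in Q3">   <!-- short text alternative -->
<img src="divider.png" alt="">                     <!-- decorative: empty alt -->
<img src="logo.png">                               <!-- failure: no alt at all -->
"""

checker = AltChecker()
checker.feed(html)
print(checker.missing_alt)  # images with no text alternative
```

A real audit would also need to catch images whose alt text fails to serve the equivalent purpose, which no automated check can fully verify.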
Audio-Only And Video-Only (Prerecorded)
For prerecorded audio-only or video-only media, an alternative provides equivalent information.
This requirement ensures that when information is presented in a playable medium, the user has an alternative way of consuming it.
For audio-only information, such as a recording of a speech, provide a transcript. A transcript is also the preferred alternative for video-only information, such as a soundless demonstration video. This is because a transcript – which in the case of a video describes all important visual information — can be presented to the user in many different ways by assistive technology. The alternative for video-only material could also be provided as an audio track.
Exceptions: This requirement does not apply when the audio-only or video-only content is actually the media alternative for text content, and is labeled as such. This alternative equivalent should provide no more information than is available in the text content.
Captions (Prerecorded)
Captions are provided for all prerecorded audio content in synchronized media.
This requirement ensures that captions are provided as text equivalent for audio content. They are synchronized to appear on screen with the relevant audio information, such as dialogue, music, and sound effects.
Captions are of the following two types:
Open captions cannot be turned off by the user; they are typically burned into the image so that the user cannot adjust their appearance.
Closed captions can be turned on or off by the user. They are provided in a separate data stream that is synchronized with the multimedia. The user has the potential to alter their size, typeface, and color.
The source material for captions can also serve as a transcript or be transformed by assistive technology into other formats.
Exceptions: Captions are not needed when the media itself is an alternate presentation of text information. For example, if information on a page is accompanied by a synchronized media presentation that presents no more information than is already presented in text, but is easier for people with cognitive, language, or learning disabilities to understand, then the media would not need to be captioned. Such media should be labeled as a media alternative.
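On the web, closed captions are often delivered as a WebVTT file synchronized with the media. The following sketch (the cue timings and dialogue are invented) shows the basic cue structure and a naive parse of it:

```python
# A minimal WebVTT caption file (timings and dialogue are invented examples).
# Each cue pairs a start/end timestamp with the caption text, including
# non-speech information such as sound effects and speaker identification.
webvtt = """WEBVTT

00:00:01.000 --> 00:00:04.000
[door slams]

00:00:04.500 --> 00:00:07.000
ANNA: We need to talk about the launch.
"""

# Naive cue parser: collect (start, end, text) triples.
cues = []
blocks = webvtt.strip().split("\n\n")[1:]  # skip the WEBVTT header
for block in blocks:
    timing, *text = block.split("\n")
    start, end = timing.split(" --> ")
    cues.append((start, end, " ".join(text)))

print(len(cues))  # number of synchronized cues
```

Because each cue carries its own timing, a player can render closed captions in the user's preferred size and color, and the same source text can be assembled into a transcript.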
Audio Description Or Media Alternative (Prerecorded)
An alternative for time-based media or audio description of the prerecorded video content is provided for synchronized media.
This requirement ensures that people who are blind or visually impaired have access to the visual information that is provided in multimedia. Anything that has a play button is typically “time based media,” which includes anything that has duration and plays to the user over time. Video, audio, and animation are all time based media.
The visual information is described using one of the following:
An audio description (also called video description or descriptive narration) is either added to the existing audio track during pauses in the existing audio content or on an alternate audio track. A narrator describes the important visual details that are not already explained in the soundtrack, including information about actions, text information not verbally described, who is speaking, facial expressions, scene changes, and so on.
A full text alternative is a transcript that includes descriptions of all of the visual details, including the visual context, actions and expressions of the actors, and any other visual content, as well as the auditory information—the dialogue, who is speaking, sounds (as would be contained in captions). The goal is to give a person who is blind or visually impaired the same information a sighted user would get from the media. People who are deaf, hard of hearing, or who have trouble understanding audio information also benefit because they can read the text alternative as a transcript of the audio or video information.
Exceptions: When the media is a media alternative for text and is clearly labeled as such.
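In an HTML-based video player, an audio description is commonly attached as a track element whose kind is "descriptions", alongside the captions track. A minimal sketch (file names are invented placeholders) that lists the track kinds a video declares:

```python
from html.parser import HTMLParser

# Sketch: collect the kind of each <track> attached to a video
# (the .vtt file names below are invented placeholders).
class TrackKinds(HTMLParser):
    def __init__(self):
        super().__init__()
        self.kinds = []

    def handle_starttag(self, tag, attrs):
        if tag == "track":
            self.kinds.append(dict(attrs).get("kind"))

html = """
<video src="demo.mp4">
  <track kind="captions" src="demo.en.vtt" srclang="en">
  <track kind="descriptions" src="demo.desc.vtt" srclang="en">
</video>
"""

t = TrackKinds()
t.feed(html)
print("descriptions" in t.kinds)
```

A check like this only confirms that a description track is declared, not that the narration itself covers all important visual details.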
Captions (Live)
Captions are provided for all live audio content in synchronized media.
This requirement ensures that people who are deaf or hearing-impaired, who work in noisy environments, or who turn off sounds to avoid disturbing others have access to the auditory information in real-time (live) presentations. Live captions do this by identifying who is speaking, displaying the dialogue, and noting non-speech information conveyed through sound, such as sound effects, music, laughter, and location, as it occurs.
Live captions must be synchronized with the presentation and be as accurate as possible, typically accomplished through Communication Access Real-time Translation (CART) captioning.
As an added benefit, captioning a live presentation can often produce a transcript that anyone who missed the live presentation can use for review.
This requirement also allows for equivalent facilitation via a separate communication channel when open or closed captions cannot be provided within the live video content, such as a third-party transcription service or having meeting participants use a group chat to transcribe the discussion.
Audio Description (Prerecorded)
Audio description is provided for all prerecorded video content in synchronized media.
This requirement ensures that users who cannot see have access to the visual information in a media presentation through an audio description, also sometimes called video descriptions or descriptive narration, to augment the audio portion of a presentation. This audio description is synchronized with the content; during existing pauses in the main soundtrack, the user hears an audio description of actions, characters, scene changes, and on-screen text that are important to understand the presentation. Secondary or alternate audio tracks are commonly used for this purpose.
Note: This requirement overlaps with requirement 1.2.3: Audio Description or Media Alternative (Prerecorded). Compliance requirements for 1.2.3 allow either an audio description or a text alternative. However, this requirement, 1.2.5, requires an audio description to comply.
Exceptions: If all important visual information is already announced in the audio track, or there is no important information in the video track, such as a “talking head” where a person is talking in front of an unchanging background, no audio description is necessary.
Info And Relationships
Information, structure, and relationships conveyed through presentation can be programmatically determined or are available in text.
This requirement ensures that Assistive Technologies (AT) can programmatically gather the information, meaningful structure, and relationships that are displayed visually, so that they can be rendered to the AT user. ATs such as screen readers and magnifiers speak the content or change the presentation layout to meet the user's needs.
Each visual cue must be implemented in the standard way supported by the platform or technology. Common visual cues to the semantics of web content are supported programmatically through markup, and include headings, labels, forms, tables with associated row and column headers, images with and without captions, lists, emphasis such as bold and italics, hyperlinks, and paragraphs.
In cases where there is no support for programmatic identification of the content, an alternative must be provided in the text. For example, where identification of emphasis is required but no markup for emphasis is available, additional characters such as a double asterisk (**) may be used.
Note: The requirement is not limited only to visual cues, although they are the most common. For example, if audio cues are used to indicate required content, markup or textual identification must be additionally provided.
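As a rough illustration of checking one such programmatic relationship (the markup and the LabelAudit helper are invented), the following flags input elements whose id is not referenced by any label's for attribute, meaning the label/control relationship is only visual:

```python
from html.parser import HTMLParser

# Hypothetical audit: every <input> id should be referenced by a
# <label for="..."> so the relationship is programmatically determinable.
class LabelAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.input_ids, self.label_fors = [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and "id" in attrs:
            self.input_ids.append(attrs["id"])
        if tag == "label" and "for" in attrs:
            self.label_fors.append(attrs["for"])

html = """
<label for="email">Email address</label>
<input id="email" type="text">
<input id="phone" type="text">  <!-- failure: no programmatic label -->
"""

audit = LabelAudit()
audit.feed(html)
unlabeled = [i for i in audit.input_ids if i not in audit.label_fors]
print(unlabeled)
```

The same idea extends to other relationships named above, such as verifying that data tables associate cells with row and column headers.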
Meaningful Sequence
When the sequence in which content is presented affects its meaning, a correct reading sequence can be programmatically determined.
This requirement ensures that if the visual order of the information on the page is important to its meaning, then that sequence, such as the reading order, is available programmatically.
Teams need to do two things to achieve the goal of this requirement:
Determine if any information being presented has a meaningful reading order.
Ensure that the meaningful order is available to assistive technologies.
An example of meaningful sequence is text in a two-column article. A user must read the lines of text in the first column sequentially, then move to the second column and do the same. If an assistive technology reads across both columns (e.g., reads the first line of the first column and then the first line of the second column) before proceeding to the next line, the user will obviously not understand the meaning. Other potential failures of meaningful sequence can occur when using CSS, layout tables, or white space to position content.
It is important not to confuse the reading order with the navigation order. The ability to navigate (i.e., move by keyboard between interactive controls) in a way that preserves meaning is covered by a separate requirement. Meaningful Sequence is concerned solely with the reading order, and so cannot normally be tested without assistive technology. Screen readers render content in a serialized manner.
Note: Sequence is not important for some content. For example, it does not normally affect meaning whether side navigation is read before or after the main content. So while matching the visual and reading order is a way to ensure this requirement is met, a difference in visual order and reading order is not a failure where the sequence does not affect the meaning.
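Because screen readers render content serially, the reading order follows the source (DOM) order regardless of visual layout. A small sketch (the class names are invented, and the CSS that would visually position the columns is omitted) showing text extracted in source order:

```python
from html.parser import HTMLParser

# Sketch: assistive technology reads content serially in source order,
# whatever the visual layout does. If CSS visually swapped these two
# columns, the reading order below would no longer match what sighted
# users see, a potential meaningful-sequence failure.
class TextInOrder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

html = """
<div class="column-left">First column, read first.</div>
<div class="column-right">Second column, read second.</div>
"""

reader = TextInOrder()
reader.feed(html)
print(reader.chunks)
```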
Sensory Characteristics
Instructions provided for understanding and operating content do not rely solely on sensory characteristics of components such as shape, size, visual location, orientation, or sound.
This requirement ensures instructions to the user that use sensory characteristics such as shape or location to provide context are not the only means of conveying information. When elements are described using visual cues such as shape, size, or physical location (e.g., “select the button on the right”), it can make it easier for a sighted user and some people who have cognitive disabilities to locate the elements. But, someone with a vision disability may not be able to perceive those visual cues. The element must also be described in another non-visual way, such as by its label (e.g., “select the Delete button on the right”).
It is important to understand that this requirement is entirely focused on referencing sensory characteristics in instructions. The instructions need to remain understandable even if the content is reflowed or transformed; for example, when a screen is displayed without images, or when the display is reflowed on a mobile device, changing the positioning of the content. The simplest way to prevent failures of sensory characteristics is to always have a visible text label and reference that label in the instructions.
Orientation
Content does not restrict its view and operation to a single display orientation, such as portrait or landscape.
This requirement ensures that content does not get locked to either portrait or landscape presentation mode. The key intended beneficiaries of this requirement are users unable to modify the orientation of devices. If designers force content to only display in portrait (or landscape) mode, it deprives users of the option to consume the content in the perspective that they need. If a user has a device affixed to a wheelchair or is otherwise unable to reorient the device to match the design-imposed orientation, the content can become unusable.
Note: This requirement bans developer-controlled techniques to limit the display orientation of an application on the device. Where a user activates any system-level orientation lock, applications will be subjected to this system-level setting without any action required on the developer’s part. System or hardware locks are user-imposed preferences and are not contradictory to this requirement.
Essential Exception: This requirement does not apply where a specific display orientation is essential. WCAG defines essential as: "if removed, would fundamentally change the information or functionality of the content, and information and functionality cannot be achieved in another way that would conform." Such a situation will be rare; most applications should be able to be usable when displayed in either orientation, although one orientation may be less optimal.
An example of an application whose orientation would be considered essential is a piano keyboard emulator. Since such an app mimics a physical piano keyboard, which is heavily biased to horizontal operation, the few keys available in a portrait orientation would make the application’s intended function unusable.