
Dr Scott Hollier - Digital Access Specialist Posts

TellMe TV audio description service: hands-on

In December 2016 the TellMe TV online subscription service launched to provide audio described video content. The service is designed to meet the needs of people who are blind or vision impaired and offers Netflix-style online streaming of content. As an Australian with very few audio described content options, I decided to give it a go.

The service was established by Canadian accessibility advocate Kevin Shaw, who stated in the press release that “TellMe TV is an exciting new destination where 100 per cent of the on-demand programming, including a diverse portfolio of movies, television shows and documentaries, [are provided] in fully described video.”

As a vision impaired person located outside North America, I wasn’t sure if I’d be able to access it at all: it’s not unusual for such services to be geoblocked or content hidden from international viewers. After signing up I navigated my way through the content and found a documentary. On the plus side, it worked without getting any geoblocking message.  The downside however is the notable lack of modern content.

As promised, the service does indeed provide 100% audio described titles, meaning that for people with vision-related disabilities the more visual aspects of the videos are described by a narrator. While the interface was a little cumbersome to use in my high contrast colour scheme, the screen reader picked things up well and I was able to play the selected video. I was pleasantly surprised to see the video start playing given I’m based in Australia, and both video and audio played with no problems.

However, the biggest issue with this service at the moment is the content. It was hard to determine whether the lack of modern content was due to the good stuff being hidden from me or whether there is simply a very limited number of titles at this time, but most of the movies, documentaries and series I could play were quite old and presumably available as public domain titles. That said, the interface works quite well, so if the service is able to secure the rights to big movies and TV shows, and if it continues to be available internationally, it has the potential to revolutionise the way in which people who are blind or vision impaired watch TV. As such, it’s my hope the big media companies will step up to support this initiative, especially as audio description is hard to come by through traditional media sources. For now though, the best option here in Australia remains Netflix and its small but growing catalogue of audio described titles.

More information on the service can be found at the TellMe TV website. The service offers a seven-day free trial.

Dr Scott Hollier to give W4A2017 William Loughborough address

It is with much excitement that I can share some fantastic news: I’ve been given the great honour and privilege of delivering the William Loughborough After-Dinner address at the Web for All (W4A) 2017 conference.

My topic, ‘Technology, education and access: a ‘fair go’ for people with disabilities’, will focus on my personal journey and on the broader benefits that education and technology can provide. In particular, the address will examine how key accessibility developments help to provide a ‘fair go’: the inclusion of assistive technologies now built in to mainstream devices, the future implications of accessibility, and the wonderful dedication and hard work of accessibility professionals in their support of people with disabilities.

Details regarding the after-dinner address and additional conference information can be found at the W4A2017 website.

Microsoft adds Braille and mono audio to Windows 10 insider preview

Microsoft has announced accessibility improvements to its latest Windows 10 build including the addition of Braille support to the Narrator screen reader and the inclusion of a mono audio option.

In a blog post, Microsoft’s Dona Sarkar explained that developers who are part of the Insider Preview program can update their copy of Windows 10 to build 15025, which contains the new accessibility features.

The blog post stated that “We love getting feedback from our visually-impaired Insiders and implementing features to support your needs. It’s so important that we keep our diverse customers in mind as we co-create with you. Today, we are excited to announce braille support for Narrator. This experience is currently in beta.” To enable the feature, Microsoft has provided the following instructions:

  • Ensure Narrator is running. Then go to Settings > Ease of Access (WIN + U) and under the Narrator settings, activate the “Download Braille” button. You will be prompted to install braille support.
  • Under Settings > Ease of Access, activate the “Enable braille” button and add a braille display. Note that USB and serial connections for the display are supported.
  • Under Settings > Ease of Access, choose the language and braille table you want to use. NOTE: There are coexistence issues with braille support and third party screen readers. Until the documentation is available, we recommend that braille be enabled for Narrator only on PCs that do not also have a third-party screen reader configured to use a braille display.

The new mono audio feature is another great accessibility feature and assists both hearing and vision impaired users. People with a hearing impairment in one ear can benefit by ensuring that information pushed to only one audio channel is available in both channels, while vision impaired users who use a screen reader with a single earpiece can also receive audio sent to both channels. This feature can be found in the Ease of Access section once users have updated to the developer build.
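The idea behind a mono downmix can be sketched in a few lines of Python. This is a minimal illustration only, assuming floating-point stereo samples rather than anything about Windows’ actual audio pipeline:

```python
# Minimal sketch of a mono downmix, assuming a stereo signal held as a list
# of (left, right) sample pairs in the range -1.0 to 1.0. Averaging the two
# channels ensures audio panned to one side is still heard in both ears.

def stereo_to_mono(samples):
    """Downmix stereo samples to mono by averaging the two channels."""
    return [(left + right) / 2 for left, right in samples]

# A sound panned hard left still reaches both ears after the downmix.
panned_left = [(0.8, 0.0), (0.6, 0.0), (0.4, 0.0)]
print(stereo_to_mono(panned_left))  # [0.4, 0.3, 0.2]
```

Averaging (rather than summing) the channels avoids clipping when the same signal appears in both.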

Microsoft has not confirmed when the features will be available in the standard Windows 10 builds, but based on development cycles they are likely to arrive on most consumer devices before the end of the year.

Mobile World Congress 2017 highlights: VR, printing and translation

The 2017 Mobile World Congress (MWC), generally considered the world’s largest mobile technology event, has recently wrapped up in Barcelona. While there was lots of great mobile tech on display, a few things really jumped out in terms of access potential, so here’s a round-up of some of the key products and announcements.

Virtual Reality display at the 2017 Mobile World Congress. Image ©2017 The Verge

Virtual Reality and 3D Printing

Some of the most exciting product announcements weren’t so much new products as the way in which familiar products were joined together. The latest HTC Vive Virtual Reality (VR) system was used in partnership with a 3D printer to demonstrate how a 3D object can be created in VR, then immediately printed out on a 3D printer. I’ve recently been involved in discussions with projects associated with accessibility and the Arts, and demonstrations like this clearly show how applicable such technologies can be. For example, if a blind person wanted to experience a sculpture, the model could be worked up in VR then printed as an accessible tactile version. In a related announcement, the ability to create 3D objects virtually and to convert 2D into 3D continues to become commonplace, with Microsoft providing details that the next update to Windows 10 will add 3D functionality to the built-in Paint feature. The preview of the new version of Paint with 3D can be downloaded from the Windows Store now by Windows 10 users.

Universal Translator

The second thing that really struck me as potentially significant is the VoxOx Universal Translator. As with many people who have enjoyed watching Star Trek TV shows over the years, I find the idea that you can easily understand anyone in any language very appealing, especially if, like me, you can’t see print or hand gestures very well. The device can currently translate messages such as SMS or social media posts between a handful of languages in real time. While we’re not quite at the Star Trek stage yet, the idea that communicating with someone in another language can now be as simple as posting a message on social media, with confidence in both what is received and what comes back, is very exciting.

Google Assistant

While VR printing and universal translation may take some time to arrive in our homes, one update that’s on its way now is the rollout of the Google Assistant to more Android smartphones. Previously only available on Google’s own Pixel smartphone, the digital assistant is now rolling out to smartphones running Android 6.0 Marshmallow or later. Google stated that:

 “The Google Assistant will begin rolling out this week to English users in the U.S., followed by English in Australia, Canada and the United Kingdom, as well as German speakers in Germany. We’ll continue to add more languages over the coming year.”

Smartphones that receive the update will be able to long-press the home button or use the ‘OK Google’ command to interact directly with the assistant.

This is fantastic news for people with disabilities. In recent times digital assistants such as Siri and Cortana in our computers and smartphones have become more useful in performing basic commands and web searches. While Android-based smartphones already have some limited Assistant-like functionality, the addition of Google’s digital assistant to a large range of smartphones provides more functionality, choice and affordability for people with disabilities, such as those with vision or mobility impairments.

These are just a few of the highlights from MWC for people with disabilities this year. Full details on all the announcements can be found at the Mobile World Congress website.

WCAG 2.1 draft: reflections on the new guidelines and success criteria

For people who work in the web accessibility area, today’s news that the World Wide Web Consortium (W3C) has made the first public draft of the Web Content Accessibility Guidelines (WCAG) 2.1 available is very exciting. In order for people with disabilities to use computers and Internet-related technologies, two things need to happen: first, people with disabilities need the tools on the device of their choice to assist them to access content; second, content needs to be designed in a way that works with those tools. This discussion looks at the second part of that requirement: what developers need to do to make sure content is designed in a way that supports the needs of people with disabilities and the assistive technologies they use.

For the benefit of people new to accessibility, the current definitive world standard is the Web Content Accessibility Guidelines (WCAG) 2.0, published in December 2008. However, the world was very different nine years ago in terms of technology support for people with disabilities. For example, the first iPhone that people who are blind or vision impaired could use didn’t come out until 2009. As such, the standard needed updating, and the first part of that update is WCAG 2.1.

As noted in the W3C Web Accessibility Initiative (WAI) landing page, “This first draft includes 28 new Success Criteria, three of which have been formally accepted by the Working Group and the remainder included as proposals to provide an opportunity for early feedback.” 

So with today marking our first chance to look at WCAG 2.1, it’s worth considering two questions: what is the current thinking of the Accessibility Guidelines Working Group (AG WG), and what new success criteria are being proposed?

WCAG 2.1 – improved inclusivity for people and devices

The first paragraph of the WCAG 2.1 abstract answers the first question, and it’s very much in line with what has been called for in recent years – a greater inclusion of cognitive-related disability support and specific guidance on a range of devices including the specific naming of mobiles and tablets. To quote the abstract:

“Web Content Accessibility Guidelines (WCAG) 2.1 covers a wide range of recommendations for making Web content more accessible. Following these guidelines will make content accessible to a wider range of people with disabilities, including blindness and low vision, deafness and hearing loss, learning disabilities, cognitive limitations, limited movement, speech disabilities, photosensitivity, and combinations of these. These guidelines address accessibility of web content on desktops, laptops, tablets, and mobile devices. Following these guidelines will also often make your Web content more usable to users in general.”

The last point is a particularly good addition. It’s often argued that accessibility is not just helpful to people with disabilities but in fact helpful to everyone, and it’s great to see that point made in the draft.

Before moving on to discussion of the specific Success Criteria (SC), there are a few important points to note about the approach of the AG WG:

  1. They are not currently changing anything in WCAG 2.0.  While this means there’s overlap and redundancy, they are first focusing on the things that WCAG 2.0 does not cover before adjusting the terminology and guidance of the current standard.
  2. Only three of the 28 proposed SC have been adopted by the AG WG.  As such there’s still a lot of room to move and this provides a fantastic opportunity for public feedback.
  3. Compliance with WCAG 2.1 will also result in compliance with WCAG 2.0.  This is referenced in a few places and provides confidence that the final version of WCAG 2.1 will be both effective and compatible with current policy frameworks.

In my opinion the development path is a sensible one.  It makes sense to plug the holes of WCAG 2.0 first, and then renovate the existing standard later.  As with all W3C working groups there’s a lot of moving parts when work is being developed so things can change quickly, and often in exciting ways.

Approved new success criteria proposals

There are currently three SC that have been approved by the AG WG.  They are:

  • 1.4.11 Resize content (Level A): Content can be resized to 400% without loss of content or functionality, and without requiring two-dimensional scrolling except for parts of the content where fixed spatial layout is necessary to use or meaning
  • 1.4.12 Graphics Contrast (Level AA): The visual presentation of graphical objects that are essential for understanding the content or functionality have a contrast ratio of at least 4.5:1 against the adjacent color(s), except for the following:
    • Thicker
    • Sensory
    • Logotypes
    • Essential
  • 2.2.8 Interruptions (minimum) (Level AA): There is an easily available mechanism to postpone and suppress interruptions and changes in content unless they are initiated by the user or involve an emergency.

The first of these takes into account a common issue on mobiles whereby making content bigger has a habit of breaking the website as even now there’s an assumption that people are viewing websites on desktops with large screens.  With responsive design not being around much in 2008 it’s great to see an SC highlighting the need to ensure that if text is increased it won’t break things. It also addresses the presence of unwieldy scroll bars which become particularly challenging if you are using screen magnification tools on a mobile device.

Graphics contrast is also a great addition, clarifying a long-standing issue with WCAG 2.0: the 4.5:1 Level AA contrast requirement is quite clear, but how it specifically relates to graphics is not. This is now addressed, along with important exceptions such as logos, where images must use specific colours or content is lost. My only concern relates to the ‘essential’ point, which could be a loophole allowing people to put anything they like on a website by arguing the colours have to be that way, but perhaps this will be further clarified during the review process. Each of the bullet points for this criterion has additional information which can be viewed at the linked resource.
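For readers wanting to check their own graphics against the 4.5:1 threshold, the ratio can be computed from WCAG 2.0’s relative luminance definition, which this SC reuses. Here is a minimal sketch in Python; the sample colours are illustrative only:

```python
# Sketch of the WCAG contrast-ratio calculation, per the relative luminance
# and contrast ratio definitions in WCAG 2.0. Colours are 8-bit sRGB tuples.

def _linearise(channel):
    """Convert an 8-bit sRGB channel (0-255) to its linear-light value."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """Relative luminance of an sRGB colour, per the WCAG 2.0 formula."""
    r, g, b = (_linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    """WCAG contrast ratio between two colours, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))      # 21.0
print(contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5)   # True
```

Black on white gives the maximum 21:1, while a mid-grey such as (118, 118, 118) on white sits just above the 4.5:1 line, which is why automated checkers flag anything lighter.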

The final point is one for which I cheer. With ARIA support becoming more common and developers having a greater ability to take charge of assistive technologies, there are many ways assistive technology such as a screen reader can be interrupted. This SC is a logical progression of existing SC relating to auto-updates, and I hope it remains largely unchanged.

For the remaining 25 SC that are proposed but not yet approved by the AG WG, I’ve just noted a few thoughts. You can read more about the specific SC by following the respective links.

Proposed success criteria

Guideline 1.3 additions

There’s one new proposed SC for guideline 1.3:

I agree with the logic of separating this out from the broader sensory characteristics, but in my opinion more information is needed to explain the scenarios and why WCAG 2.0 doesn’t already address this.

Guideline 1.4 additions

Addressing issues relating to seeing and hearing content featured highly in the update.

Starting with linearization, I’m a big fan of this one.  It essentially proposes that content can be viewed as a single column.  In the era of responsive design and mobile use as mentioned earlier, this would be absolutely fantastic and I hope it gets up.

This second SC states that ‘If content can printed’ (I think the word ‘be’ is missing) then you can have some flexibility in how the content is presented. I can appreciate why this is here, but personally I don’t think printing is such a critical issue that it needs to be in WCAG. I’m conscious that the more SC are added, the more work there will be for developers, and I don’t see printing as a priority.

The next SC looks at specific contrast requirements for user interface elements. I can certainly see the logic and importance, but I’d have thought this was already largely covered in WCAG 2.0. It would be good to see some additional information on the context.

The Adapting Text SC would be a fantastic addition. As a high contrast colour scheme user, I often find that websites don’t account for user-defined colours, and you end up in situations where text gets garbled or you can end up with, for example, black text on a black background. The specifics need some work, but I’m a big fan of the principle.

I’m not entirely sure about the last SC on the list. The idea is that there’s more control around On Hover and On Focus. It seems like a logical improvement to On Focus in WCAG 2.0 rather than a standalone SC.

Guideline 2.1 additions

There’s only one new proposed change to keyboard accessibility SC, but it’s a big one.

This SC adds a requirement that speech input is not obstructed. This is a great addition that reflects the changing ways people with disabilities are interacting with their devices through features such as digital assistants, and there are clear Internet of Things implications here. In the long run I suspect the whole guideline’s terminology will need to change from ‘keyboard accessible’ to something broader, but this SC is a welcome one.

Guideline 2.2 additions

There are two proposed SC relating to timing:

When I saw the timeout criterion I wanted to leap from my chair and punch the air in celebration. Few things are more frustrating than having a website time out when you’re trying to complete an online task. While developers have often tried to address the issue, there’s been little guidance from WCAG as to best practice – until now. I’m not sure about the one-week data retention period, as I can’t see the basis for that specific length of time, but I’m very excited about this SC being in there and really looking forward to its refinement as the WCAG 2.1 process continues. The second SC seems relatively minor by comparison, and perhaps it will be folded into an existing WCAG 2.0 SC.

Guideline 2.4 additions

There’s one proposed SC about helping users navigate and find content:

The statement for this SC is ‘Single-character shortcuts are not the only way activate a control, unless a mechanism is available to turn them off or remap them to shortcuts with two or more characters.’ If I understand the concept correctly it seems like a good idea, but the language here seems a bit clunky and it would be good to tidy up the wording.

New proposed guidelines 2.5 and 2.6: Pointer Accessible and Additional Sensor Inputs

In the current WCAG 2.1 draft, two new guidelines are proposed to provide specific guidance for mobile content, making it easier for users to operate pointer functionality and touchscreen interfaces. The SC for 2.5 include:

These four SC essentially explain how touch interfaces should work, what size area should be allocated for touch to be accessible, how that varies depending on the pointer devices and the accessibility of specific gestures in the content itself, separate from the browser or device interface.  While all these things are important and really highlight why WCAG 2.1 is needed, the standout point for me is touch with assistive technology. 

It’s remarkable how often apps work well before I turn on the screen reader on my phone, and how completely inaccessible they become once the screen reader is enabled. While the other SC are quite specific, it’s this broad requirement of AT compatibility that I suspect will be one of the greatest arguments put forward for moving to WCAG 2.1, and it’s my hope that it is adopted by the AG WG as soon as possible.

As for Guideline 2.6, there’s not much detail yet about how the guideline is defined, but the two related SC are as follows:

I like the second SC, as it’s amazing how often an app on a smartphone can break if you try to use it in a different orientation to the one expected, especially in the location and use of buttons, some of which disappear completely when the orientation is changed.

Guideline 3.1 additions

There are three proposed updates to the use of language:

There are two things I really like about these proposed SC. Firstly, they strike a good compromise: the essential things are made clear, such as how to structure instructions and where common language is needed, without restricting the actual language of a website. Secondly, they bring cognitive accessibility to Level A and AA, which is long overdue. In my opinion the focus on improving language and structure in content, with some exceptions, has the right balance, and I’m looking forward to seeing this progressed further.

Guideline 3.2 additions

There are three updates for helping content to work in predictable ways:

All three of these SC seem like common-sense requirements to me, ensuring consistent and expected operation and addressing likely sources of confusion in a mobile environment such as accidental activation. It will be interesting to see the specifics as the SC evolve.

Guideline 3.3 additions

To finish off the current round of updates, we see a number of proposed SC to help users avoid and correct mistakes.

I’m particularly excited to see the last two. The ability to go back and undo something, or to repair data in a straightforward manner, is a great addition. I also like the idea that help information is provided, although I’d prefer to see this as a Level A requirement.

Final thoughts on WCAG 2.1

Overall it’s been fantastic to see such a great first step by the AG WG in its development of the first WCAG 2.1 public draft.  Many of the new SC are revolutionary and while I’m sure there’s still a lot of work to go, it’s off to a flying start.  On a personal note as a person with a disability, it’s a wonderful thing to see pretty much everything on my wishlist appear here.

If you want to contribute to the WCAG 2.1 development, the AG WG are accepting public comment by e-mailing public-agwg-comments@w3.org.  Comments close 31 March 2017.