What we learned from doing research with people using assistive technology
At dxw, we focus on making the services and products we deliver as accessible and inclusive as they can be. Recently, I ran a series of remote usability testing sessions on a live service with participants who use different types of assistive technology. In this post, I’m going to talk through what I’ve learned about some forms of assistive technology and about doing research with people who use them.
There are many types of assistive technology
Assistive technology varies greatly, which influences how a user interacts with your service. This is important because some types of assistive technology worked well with some aspects of our service, while others did not.
For example, most forms of assistive technology found our links with ease. However, one type, Dragon, struggled to find them. This is a common issue with Dragon that goes wider than the service we were testing. The finding was still useful because it made us consider how we space text on the screen when a user has to rely on Dragon’s numerical navigation grid to move their cursor to the right information.
The key learning from this is that just because your service works well with one form of assistive technology, or even a few, it does not mean it is accessible for all users. Testing your service with a range of assistive technologies will significantly improve your findings.
Assistive technology is used in different ways
As well as there being many different types of assistive technology, participants used the same assistive technology in different ways and in combination with other devices.
For example, we had a couple of participants who both used ZoomText. One participant used ZoomText to read content and navigate around the screen. They told us that too much white space was a problem because they’d find it hard to trace content across a line. The other participant used ZoomText in combination with JAWS (a combination called Fusion). For them, ZoomText was more of a supporting feature, as they mainly used JAWS to navigate through pages and have text read aloud.
The key learning for me was the need to test our service with multiple users of the same type of assistive technology, because each user can use the same tool in different ways.
In remote sessions, we sometimes don’t see what the participant sees
A participant who was thinking through how the remote sessions would work highlighted this problem to me before the research started. When the participant used ZoomText and shared their screen, the screen share did not show me the zoom level they were using. Their screen was zoomed in, but to me the share appeared at a standard 100% zoom.
To get around this problem, I asked the participants it affected to take screenshots of certain pages and save them in a document to send to me afterwards (credit to Steph Troeth for the idea). This did break up the flow of the sessions a bit, as participants had to navigate to a different place to paste the screenshots. However, they were all happy to do this, and it enabled us to see what they were seeing.
The key learning is to be clear with participants beforehand about how the session will run, and to get their feedback on how well they think it will work. There are always going to be things we don’t think of!
Allow time for each test activity
Testing with different assistive technologies introduces many variables that can affect how a session runs. For example, as a researcher, I was more familiar with some assistive technologies than others. This meant the time it took a participant to explain how they were using their technology, and the effect our service was having on them, varied.
Some participants were also less familiar with video conferencing, so at the start of the session we had to make sure the assistive technology they were using would work with it. Additionally, the assistive technology some participants used helped them complete tasks much more quickly than others.
These variables (my knowledge of the participant’s assistive technology, the participant’s familiarity with video conferencing, and how quickly the participant completes tasks) are not an exhaustive list of everything that could affect the session. My learning from this is to allow plenty of time for each task. Including too many tasks means that things can be missed. There are always future research rounds to add other tasks and to discover new things!
Findings can sometimes conflict with each other
We found that participants wanted or needed changes to the service that were sometimes in conflict with each other. For example, one participant who was using Dragon wanted more space between content to make it easier to navigate using the numbered grid. However, another participant, who could only see a small part of the screen at a time, needed the content to be close together so they could follow it along a line.
We did 2 things when we were in this situation. The first was to prioritise the findings based on their severity for the user: did the finding mean that participants could not achieve what they wanted to achieve? Was it a want or a need?
The second was to go back to the pain point and turn it into a ‘How might we’ statement to generate ideas. Often, some lateral thinking can enable us to improve the situation for users.
An important learning I took from this is to have clear prioritisation criteria for findings during synthesis. It makes sure that we are delivering the most value against the most important needs.
Learn about the assistive technology before the session
With there being so many different types of assistive technology, it helps to familiarise yourself with what the participant is going to be using. Doing some desk research and speaking to an accessibility specialist (credit to Calum Ryan) helped me a lot when designing the research sessions.
It’s helpful to ask participants in the recruitment screener what they’ll be using. If you’re still unsure, I found that participants were happy to give me a bit more information about their setup before the session.
The main takeaway here is to have a basic understanding of the tools the participant is going to be using before the session. It will help at every stage of the research, from designing the session through to synthesising the data.
I hope you found this post helpful. Please share your thoughts, experiences, and any further tips! If you want to talk further about this or have any questions, please contact me at chris.sutton@dxw.com