#blogtober
This list is a part of a series of blogtober posts for October 2023. I very frequently overthink what I write and end up not writing it. Maybe if I make myself write and post something every day, I can help get past that.
Not everything needs to be an Epic. Sometimes a nice couplet can be revelatory.
1 of 31 (hopefully)
The first step in getting good at accessibility is questioning what you know
When I first started in accessibility, the biggest way we made something accessible was ensuring it was keyboard accessible. The place I worked focused on keyboard interaction as the best way to speed up workflows. Less moving hands between mouse and keys means faster workflows.
Clicks were the work of the devil. Keyboards are where it’s at. Get that function in the tab order. Give it a shortcut! That’s the ticket.
That worked for our users. It would work for all users, no?
“If it’s important, it’s in the tab order” was the early accessibility mantra. We spent weeks assessing key workflows and building out a meta layer between our software and the pre-MSAA language our software used.
Cringe, right, at the date range that implies? IYKYK.
It was, naturally, really freaking cool. Our in-between layer would pull out whatever had system focus and feed it into an alert API that would be interpreted by assistive technology. It was kind of like wrapping the concept of “focus” in a live region. You tab, it spits that text out to the user who needs the audio version.
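In modern web terms, the idea was roughly something like this. This is a loose sketch only, not the actual system (which was a pre-MSAA desktop layer, not a web page), and names like `describeElement` are my own stand-ins:

```ts
// A rough, modern-web approximation of that in-between layer: mirror whatever
// has focus into a visually hidden live region so assistive tech announces it.
const liveRegion = document.createElement("div");
liveRegion.setAttribute("aria-live", "assertive");
// Visually hide the region without removing it from the accessibility tree.
Object.assign(liveRegion.style, {
  position: "absolute",
  width: "1px",
  height: "1px",
  overflow: "hidden",
  clipPath: "inset(50%)",
});
document.body.appendChild(liveRegion);

// Hypothetical helper: pull whatever text best describes the focused element.
function describeElement(el: Element): string {
  const label = el.getAttribute("aria-label");
  if (label) return label;
  if (el instanceof HTMLInputElement) return el.value || el.name;
  return el.textContent?.trim() ?? "";
}

// Every time focus moves (e.g. the user tabs), announce the new element.
document.addEventListener("focusin", (event) => {
  liveRegion.textContent = describeElement(event.target as Element);
});
```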
I am still genuinely impressed at how well that worked.
I am still genuinely astounded that we thought taking complex screens, healthcare data entry screens, and linearizing them was the best possible accessibility we could provide.
At the time, it maybe was?
The thing is, all the while we were architecting these enormous systems to take bits of text and turn them into audio, we never stopped to ask one important question:
Is that how assistive tech users actually interact with software?
Tabbing through an interface feels like an arcane secret when you figure it out. Suddenly, you can race through forms and fields to get to only the stuff you want to get to. The stuff that you can see on the screen and know where it is.
What we didn’t think of is that the complexity of the screens meant listening to all that text over and over and over… just to get to the three fields you really care about.
Now, it’s an exaggeration to write that users would listen to all the text every time they interact with the screen. No, they’ll probably memorize exactly how many tabs they need to press to get where they need to go and just… gird themselves to listen to the half-started words the computer would holler at them as they did it.
Assuming that all users interact the same with software is… vaguely positive in that it doesn’t come from a place of thinking users are incapable of using the software. And, generally, it is true that users interact with software the same way: everyone wants to tab through fields quickly so they can fill in the stuff they care about.
What they don’t want is to have to tab through all the text too.
So, like, we were half right but ultimately very misguided, and the end result was usable software (and still a pretty cool interface layer), but it was hardly an accessible interface.
Question assumptions even if they seem positive or even just benign on the surface. Had we done so, maybe we wouldn’t have committed all those tabindexes (indices?) to an eternity of moderately frustrating half-said phrases for users just trying to do their jobs well. Maybe we still would have…
But if we had just asked people, real people who use screen readers or braille keyboards or switch controls, what would make sense, maybe we would have figured that lesson out sooner.
Published on October 1, 2023