
Besides building accessibility into frontend/React component toolkits, how do we automate testing for accessibility? I've turned on text dictation and tested apps with a blindfold, but that doesn't scale and I'm not even sure if it's how people really use an app without sight.


After years of trying, I've still not found a reliable way to automate accessibility testing. The only really workable way to manage it currently is: bake it into your entire dev process.

When designing an application, forget the visuals: design the flow of information and the interactions. This is surprisingly close to mobile-first thinking, as it follows similar principles: in both cases you have a restricted amount of information to display, and you have to design around that constraint.

Once you've got the information flow, step from there to visual elements, and ensure that as you build, you're baking in ARIA support and your testers are interacting with it using VoiceOver/JAWS.
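
To make "baking in ARIA as you build" concrete, here's a rough React/TypeScript sketch (the Disclosure component and its panel id are made up for illustration). The point is just that the open/closed state sighted users infer visually is also exposed to VoiceOver/JAWS through aria-expanded and aria-controls, rather than bolted on later:

    import { useState } from 'react';
    import type { ReactNode } from 'react';

    // Hypothetical disclosure widget: the open/closed state is exposed to
    // screen readers, not just drawn on screen.
    export function Disclosure({ label, children }: { label: string; children: ReactNode }) {
      const [open, setOpen] = useState(false);
      return (
        <>
          <button
            aria-expanded={open}             // announced as "expanded"/"collapsed"
            aria-controls="disclosure-panel" // ties the button to the panel it toggles
            onClick={() => setOpen(!open)}
          >
            {label}
          </button>
          <div id="disclosure-panel" hidden={!open}>
            {children}
          </div>
        </>
      );
    }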

At the end you won't have anything perfect, but you'll have something better than the majority of sites out there. Perfection is impossible, but if you bake inclusive thinking into your app from the get-go it's pretty straightforward, and you usually end up with an application that's less confusing and less overloaded with information for your sighted users too.

If you leave it as something to slap on at the end, it's almost always impossible.


All good points there, and agreed about automated testing. I think the most you can hope for in that department is linting-level testing (color contrast, valid HTML, labels associated with form controls, etc.).
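
For that linting level, one option (assuming a React + Jest + Testing Library setup) is to run axe-core over rendered components via jest-axe; MyForm here is a stand-in for whatever component you're testing:

    import { render } from '@testing-library/react';
    import { axe, toHaveNoViolations } from 'jest-axe';
    import { MyForm } from './MyForm'; // hypothetical component under test

    expect.extend(toHaveNoViolations);

    test('MyForm has no detectable axe violations', async () => {
      const { container } = render(<MyForm />);
      // Runs axe-core's rule set (labels, roles, landmarks, ...) against the rendered DOM.
      const results = await axe(container);
      expect(results).toHaveNoViolations();
    });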

The hard things, like focus control, require manual testing, ideally by a skilled user of assistive technology (AT).


Tangent:

I think you should really have someone who hasn't seen the app test with the blindfold.

Is that double blind, or just single blind plus literally blind?


In a medical context, double blind means neither the patient nor the doctor knows whether the patient is receiving the drug being tested or a placebo.

I'm not sure how that would work for software, but it sounds like a much larger experiment than is currently customary.


> how do we automate testing for accessibility

Have you looked into pa11y and its CI integration [1]? It's a good start but it cannot replace properly testing your UI with accessibility in mind.

[1] https://github.com/pa11y/pa11y-ci
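
Not a full pa11y-ci setup (that's usually driven by a .pa11yci config listing URLs), just a minimal sketch of the underlying pa11y library's Node API; the URL and the exit-code policy are placeholders, and it assumes the app is already being served locally:

    import pa11y from 'pa11y';

    async function main() {
      const results = await pa11y('http://localhost:3000/', {
        standard: 'WCAG2AA', // ruleset to report against
      });
      for (const issue of results.issues) {
        console.error(`${issue.code}: ${issue.message} (${issue.selector})`);
      }
      // Non-zero exit fails the CI job when anything is flagged.
      process.exit(results.issues.length > 0 ? 1 : 0);
    }

    main();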


I’d think regression testing is easier than with a GUI. Just interpose between the app and the screen reader, and check for expected strings in the output.
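
You can get part of the way there without literally hooking the screen reader: Testing Library's role/name queries read from the same accessibility tree the reader consumes, so expected announcement strings can be pinned in a regression test. A hedged sketch, with SignupForm and its labels invented for the example:

    import '@testing-library/jest-dom';        // provides toBeInTheDocument
    import { render, screen } from '@testing-library/react';
    import { SignupForm } from './SignupForm'; // hypothetical component

    test('exposes the expected accessible names', () => {
      render(<SignupForm />);
      // These fail if a label, role, or accessible name regresses.
      expect(screen.getByRole('heading', { name: 'Create an account' })).toBeInTheDocument();
      expect(screen.getByRole('textbox', { name: 'Email address' })).toBeInTheDocument();
      expect(screen.getByRole('button', { name: 'Sign up' })).toBeInTheDocument();
    });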



