
So much this! I happen to be a blind software developer who has had just this sort of experience in years gone by. Web apps mean that you are at the mercy of the developers. Something can work one day and break the next. This is even more true for blind people than it is for the general public. Even if there is accessibility testing, I doubt that it covers my particular toolstack. I'm on Linux. So I'm doubly a niche user.

The web (and web apps) are all about providing an experience. I don't want an experience, I want a reliable tool.




Oh man, makes me so happy to see the accessibility concerns at the top of this thread. I hate Slack so much. Nothing has made me say "is 10 AM too early for a beer?" quite so much as that absolute pile of uselessness. I thought they'd actually improved their accessibility story when my screen reader read various elements as buttons. Later I discovered that, while they'd likely added the correct ARIA role to a <div/>, they didn't bother adding the expected keyboard behaviors. I'm fortunate enough to work with co-ops, and the company I'm founding hosts its own tools specifically because those I can control, and I can pick the more accessible open source chat solution. But I can't count how many times I've had to be some company's special snowflake because I can't use Slack, can't use Toggl, can only use parts of Basecamp, and as such can't participate in a bunch of their processes. Now I'll encourage companies further away from Slack than I already do. Forget not touching it with a 10-foot pole. The 100-footer is coming out for this one. I'm sorry to post such an unproductive comment, but if you're working for a Silicon Valley company and not doing accessibility then you're doing it wrong, and you can pay me or any number of other talented blind developers with some of that investor capital if you want us to show you how to do it right. There is no excuse for being so exclusionary.


As a developer who should probably pay more attention to this than I do: can you recommend some reading material about how to make an app accessible, and how to make sure it stays accessible (i.e., is there a good way to test this in CI)?


The only way I'm aware of today is to learn to use assistive technologies and use them on the right combinations of browser/OS/version. These are recommendations for common combinations. [0]

I've given the CI deal a good amount of thought. You'd have to go through the trouble of:

- Provisioning a Windows VM with specific versions of browsers (e.g. IE11) and AT (e.g. JAWS 17; behavior differs quite significantly between versions)

- Writing an automation suite that can control the browser and AT (Selenium probably does fine for the browser) but, crucially, can also interpret the feedback from the assistive tool to check for correctness. This is tremendously hard: either using debugging APIs, if any exist in the various assistive tools, or reading memory / reverse engineering with IDA, or capturing the audio output to the sound card and running it through speech recognition to figure out whether what the screen reader said is what you'd expect. With something like Dragon Dictate you'd have to figure out how to trigger voice commands.

- Exposing the VM through an API that you can call from your test suite

- `expect(jawsOutput).toBe("Type in two or more characters for results.")`
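
To make that last expectation concrete, here is a rough sketch of what the test side could look like. Only the Selenium calls are real; the VM address, the element ID, and read_screen_reader_output() are hypothetical stand-ins for whatever API the VM ends up exposing:

  # Hypothetical sketch: drive the browser inside the Windows VM with
  # Selenium, then ask the VM what the screen reader actually said.
  from selenium import webdriver
  from selenium.webdriver.common.by import By

  def read_screen_reader_output():
      # Stand-in for the VM API from the step above: would return captured
      # speech (via an AT debug API, or audio piped through recognition).
      raise NotImplementedError

  def test_search_hint_is_announced():
      driver = webdriver.Remote(
          command_executor="http://windows-vm:4444/wd/hub",  # hypothetical address
          options=webdriver.IeOptions(),
      )
      try:
          driver.get("http://app-under-test.example/")
          driver.find_element(By.ID, "search").click()  # hypothetical element ID
          assert "Type in two or more characters for results." in read_screen_reader_output()
      finally:
          driver.quit()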

That's a potentially tremendously profitable SaaS offering (to the right companies), if someone can build it.

[0]: https://accessibility.blog.gov.uk/2016/11/01/results-of-the-...


I wouldn't recommend using JAWS and IE for CI. For this purpose, I think it would be much better to use NVDA (https://www.nvaccess.org/) with any browser that can be controlled by a test framework like Selenium. (NVDA supports all the major browsers now.) Then, to feed the text-to-speech output back into your test framework, you can write a simple TTS driver for NVDA in Python.
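
For what it's worth, a capture driver along those lines could look roughly like the sketch below. I'm going from NVDA's synth driver interface as I understand it (a module in the synthDrivers package exposing a SynthDriver class); treat the exact names and signatures as assumptions and check the NVDA developer guide:

  # Sketch of a "capture" TTS driver for NVDA; names assumed, verify against
  # the NVDA developer guide. Drop the module into NVDA's synthDrivers package.
  import synthDriverHandler

  class SynthDriver(synthDriverHandler.SynthDriver):
      name = "capture"
      description = "Logs speech to a file for automated tests"

      @classmethod
      def check(cls):
          return True  # no real speech engine needed, so always available

      def speak(self, speechSequence):
          # The sequence mixes text with speech command objects (pitch,
          # index, etc.); keep just the text and append it to a transcript
          # the test framework can read back.
          text = " ".join(s for s in speechSequence if isinstance(s, str))
          with open(r"C:\at-tests\nvda-transcript.log", "a", encoding="utf-8") as log:
              log.write(text + "\n")

      def cancel(self):
          pass  # nothing to interrupt; speech is only logged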


That would be a lot easier. I've assumed that NVDA would be the easiest to plug into for obvious reasons but have not looked into it specifically.

I used JAWS and Windows IE11 as a specific example because that's a popular combination with screen reader users. If something works well in NVDA and Firefox on Linux, it does not follow that it will work in other combinations, at least in my own testing with things I've worked on in the past. Targeting the low-hanging fruit is how I'd start too if I were building something like this in earnest, but ideally I'd want to automate testing with all the popular combinations that I expect users to have.


For guidelines on making an app accessible, check out the W3C's WAI-ARIA Authoring Practices: https://www.w3.org/TR/wai-aria-practices-1.1/


I also dislike Slack. Slack is just IRC, but reinvented with one centralized provider of everything and clunky, inaccessible UIs that they can change around however they want whenever they want.

(My accessibility issue is much smaller: I merely avoid using the mouse cursor, because the keyboard is much lighter on my wrists and hands than the mouse, trackpad, or trackball.)


>"I hate Slack so much. Nothing has made me say "is 10 AM too early for a beer?"

Thank you for this, this made me laugh. You are not alone in this reaction.


Go host your own messaging tool: Relay is an alternative to Slack. Relay is open source and built on top of Mattermost, which means you can host Relay yourself. https://relay-chat.com/


I see the comparison to Slack, but how does it compare to Mattermost?


Mattermost is open core, so I guess that means Mattermost has lots of paid features that Relay will have to reimplement. And I wonder if those will be available in the self-hosted version. https://about.mattermost.com/pricing/


Relay is built on Mattermost Team Edition, so it has all the Team Edition features. It plans to add new features based on user feedback, which will be contributed upstream to Mattermost.


Relay is actually hosted Mattermost. You'll get the benefits of Mattermost, except with us taking care of the hosting :).


> I can pick the more accessible open source chat solution.

I'd love to hear more about this (the good/bad/ugly). My guess would be that IRC is head and shoulders above anything else, due to the established standard plus a myriad of solid clients.

But what have you found so far?


I couldn't agree more!


> Oh man, makes me so happy to see the accessibility concerns at the top of this thread

Being “able-bodied” is only temporary, for everyone. Any dev who doesn’t realize this will eventually come to regret it as they age.


Besides building accessibility into frontend/React component toolkits, how do we automate testing for accessibility? I've turned on text dictation and tested apps with a blindfold, but that doesn't scale, and I'm not even sure that's how people really use an app without sight.


After years of trying, I've still not found a reliable way to automate accessibility testing. The only really workable way to manage it currently is to bake it into your entire dev process.

When designing an application, forget the visuals: design the flow of information, and the interactions. This parallels mobile-first thinking surprisingly well, as it follows similar principles: in both cases, you have a restricted amount of information to display, and have to design to deal with that.

Once you've got the information flow, step from there to visual elements, and ensure that as you build, you're baking in ARIA support and your testers are interacting with it using VoiceOver/JAWS.

In the end, you won't have anything perfect, but you'll have something better than the majority of sites out there. Perfection is impossible, but if you bake inclusive thinking into your app from the get-go, it's pretty straightforward, and you usually end up with an application that is less confusing and less overloaded with information for your sighted users too.

If you leave it as something to slap on at the end, it's almost always impossible.


All good points there, and agreed about automated testing. I think the most you can hope for in that department is linting-level testing (color contrast, valid HTML, associated labels and form controls, etc.).

The hard things like focus control require manual testing, ideally by a skilled user of AT.


Tangent:

I think you should really have someone who hasn't seen the app test with the blindfold.

Is that double blind, or just single blind plus literally blind?


In a medical context, double blind means neither the patient nor the doctor knows whether the patient is receiving the drug being tested or a placebo.

I'm not sure how that would work for software, but it sounds like a much larger experiment than is currently customary.


> how do we automate testing for accessibility

Have you looked into pa11y and its CI integration [1]? It's a good start, but it can't replace properly testing your UI with accessibility in mind.

[1] https://github.com/pa11y/pa11y-ci
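
For reference, a minimal .pa11yci config (the JSON file pa11y-ci reads) might look like the following; the URLs are placeholders for your own routes, and the README covers the rest of the options:

  {
    "defaults": {
      "timeout": 10000
    },
    "urls": [
      "http://localhost:3000/",
      "http://localhost:3000/settings"
    ]
  }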


I’d think regression testing is easier than with a GUI. Just interpose between the app and the screen reader, and check for expected strings in the output.
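
Combined with something like the capture driver sketched upthread, the check itself really is just string matching against a transcript. A minimal sketch, assuming the interposed layer appends each utterance to a file:

  # Minimal sketch: assert against whatever the interposed layer recorded.
  def assert_spoken(transcript_path, expected):
      with open(transcript_path, encoding="utf-8") as f:
          spoken = f.read()
      assert expected in spoken, f"screen reader never said {expected!r}"

  # e.g. assert_spoken("nvda-transcript.log", "Search, edit text")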


Just curious — how do you effectively program blind? Seems to me like a really difficult problem because coding is about jumping around so quickly and needing to be able to scroll and grok at high speeds. You also have the issue of all kinds of specialized characters that are difficult for any kind of text-to-speech. Are there specialized Braille displays for this kind of stuff? How do you go back and forth between keyboard and such a thing effortlessly?


Not the OP and not blind, but I've worked with a blind programmer before. You move your cursor in the code and it reads you the line. The screen readers can be adjusted so that the speed of reading is really fast. To someone who is not used to it, it sounds like gibberish, but it's pretty amazing how fast the speech can be. After that, it depends on the editor. My colleague used vi (this was a long time ago, before there was a vim) and was at least as productive as me. The main thing is that you have to remember the code.

I've occasionally tried to set up a workable system so that I could program blind. I have vision problems where I get ocular migraines unless I have my system set up with a huge font and very high contrast anyway, so I often think that it would be nice to program without looking at the screen. However, I have yet to get my system set up in any way that works. Accessibility has a long way to go. Every time I've tried to set things up I wonder how a blind person can possibly get to the point where they can even start. It's so frustrating.

Actually if anyone in the know is reading this, I'd appreciate a pointer to the easiest to set up Linux system. I wouldn't mind giving it a try again.


> You move your cursor in the code and it reads you the line.

That's somewhat similar to how ed works. You choose a line number or range and print those lines to the screen.
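
For anyone who has never used ed, a session looks roughly like this (the annotations are mine; ed itself prints only the byte count and the requested lines, and the file name and addresses are just examples):

  $ ed main.c
  1042          <- ed reports the size of the file it read
  5,10p         <- print lines 5 through 10
  /main/        <- search forward and print the next line matching "main"
  .,+3p         <- print the current line and the three after it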


Since you mentioned ed, I know of a blind programmer who actually likes and uses ed (or did last time I heard from him). In fact, he wrote his own version of ed that also includes a web browser, and called it edbrowse. To be sure, he's in the minority even among blind programmers. But for what it's worth, you can find an article that he wrote about his approach here: http://www.eklhad.net/philosophy.html


I am not blind but edbrowse is far and away the best non-GUI web browser I've ever used (better than elinks, lynx, etc). I highly recommend that sighted folk crack open the user manual and give it a try.


I love edbrowse! I keep a copy handy; it's the only web browser I know of that is distributed as a single statically-linked executable. Great for getting through wifi login portals before installing packages.

http://edbrowse.org/


But how does it sanely pronounce things with abbreviations or even something like:

NSDictionary *myCompoundedWord = @{@“key: [NSNumber numberWithInt: 7] };

And know that it’s missing the terminal “ in the string and has an extra space after the ]?

Seems very difficult. Would be great if it could understand the language enough to verbalize it at a higher level.


With the punctuation level set to all, the NVDA screen reader for Windows reads your code snippet like this:

n s dictionary star my compounded word equals at left brace at left quote key colon [pause] left bracket n s number number with int colon [pause] 7 right bracket right brace semi

It's a lot to absorb, but people do program productively this way. For example, the NVDA screen reader is itself developed primarily by blind people.


I think it would be much better if the screen reader could use sounds for punctuation: the sound of a typewriter keystroke to indicate a dot, say, and some meep-like sound whose frequency goes up for an opening parenthesis and down for a closing one.


I liked Urbit's mapping from symbols to syllables: https://github.com/urbit/docs/blob/master/docs/hoon/syntax.m...


That idea is as old as Victor Borge...


Interesting, how do blind developers feel about minimalist languages like Lisp? On one hand it seems like it would read very well in some circumstances (+ 1 2), but the scoping could be a real pain. COBOL seems like another language that might be well suited to them.


I'm not aware of any correlation between blindness and programming language preference, even when blind programmers work on their own projects. I used to think blind programmers wouldn't like Python because it has significant indentation. (Note: I'm visually impaired, but I program visually, not with a screen reader.) But as it turns out, I know blind programmers who love Python and can deal with the indentation just fine. The NVDA screen reader is written in Python, and that project was started by a blind programmer who could choose any language he pleased.

Some projects developed exclusively or primarily by blind programmers do make odd indentation choices. A couple of my blind programmer friends prefer single-space indentation, or at least they did the last time I worked with them (using Python). NVDA uses tabs for indentation, which breaks with the Python convention of four spaces per indentation level. But blind programmers are perfectly capable of following the usual indentation conventions when working with sighted programmers.

Finally, I don't know of any blind programmers who like COBOL. I'm sure there are some, probably working at banks like their sighted counterparts; I just don't happen to know them.


Emacspeak[0] is one of the more popular voice-oriented IDEs. I have yet to get it working, but I think you can do things like have it read visual regions and sections between matching parens, etc. Ideally this is what I want to use, but it has resisted my efforts so far. Maybe I'll give it another try this weekend.

[0] - http://emacspeak.sourceforge.net/


"parens parens parens parens parens parens parens some code here parens parens parens"


The regularity of Lisp's syntax suggests an interesting way to render it in speech, at least for blind people who happen to have a good ear for music. Set the TTS engine to monotone (i.e. no attempt at natural intonation), and increase the pitch for each level of parenthesis nesting. So it would basically be singing the code, going up a note or two for each level of nesting. It would sound weird, but I think it could work for some people, myself included.
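
As a toy illustration of that idea, here's a short Python sketch that walks a Lisp expression, tracks paren depth, and assigns each token a pitch (the units and the two-steps-per-level choice are arbitrary; think of a 0-100 TTS pitch setting):

  # Toy sketch of the "sing the nesting" idea: emit (token, pitch) pairs,
  # raising the pitch a couple of steps for each level of parens.
  BASE_PITCH = 50

  def pitched_tokens(code, steps_per_level=2):
      depth = 0
      for token in code.replace("(", " ( ").replace(")", " ) ").split():
          if token == "(":
              depth += 1      # the rise in pitch itself marks the open paren
          elif token == ")":
              depth -= 1      # and the drop marks the close
          else:
              yield token, BASE_PITCH + steps_per_level * depth

  for token, pitch in pitched_tokens("(defun add1 (x) (+ x 1))"):
      print(f"{token:6s} pitch={pitch}")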


I like that direction, but it also sounds like it might be hard to keep track of the reference points. I wonder if it'd be easier to distinguish levels if you used musical notes in conjunction, where the octave/note/chord/scale is mapped to the indentation?

Even better would be tools that are aware of indentation, know that you can't see it, and help you debug problems without having to make it so explicit all the time. It could get really weird/grinding to have to listen to monotone speech that's constantly changing pitch.


What if, instead of just the pitch, it said "do re mi fa so la ti do" every time you went up/down a level? If I ever lost my sight, I doubt my tone deafness would go away.


Ugh, that won't do. I need my: brace bracket paren asterisk some code paren bracket brace semicolon.


Screen readers usually break it down into chunks they can pronounce, or spell it out character by character, with added cues to indicate punctuation and some other things. It's not as slow as it sounds, though: most blind people have their screen readers set to read at a speed that is totally incomprehensible if you're not used to it. It requires a very good memory to manage something like programming, but blind people get really (almost unbelievably) good at that sort of thing simply because they practice a lot by necessity.


I imagine that if you can comprehend speech at such a high speed, it gets easier to keep the line in your head, since you can store and recall the characters from very short-term memory. I don't know if this phenomenon exists, but if I've heard the entire line in, say, 0.5 seconds, I think I'd be able to construct a mental image of it and code.

Another point is that I imagine it takes complete focus to listen to and comprehend single characters at such speeds, so you'd be super-focused on the task while writing code.

We, as sighted programmers, can read code without getting anything out of it if we're not focused.



> coding is about jumping around so quickly and needing to be able to scroll and grok at high speeds

I mean, that might just be how you code, and the GP does not code that way...


If you work on a large production code base, I don't see how you can avoid having to search through and grok lots of code written by other people...


Since you mentioned braille displays, some blind programmers do use those. They're expensive though ($1000 or more). Computer braille has 8 dots per cell rather than 6, giving 256 possible patterns, which is a good fit for ASCII's 128 characters.


Depending on how your punctuation is set up with any screen reader, the characters in code are read off nicely. And no special braille display is needed for this; any normal one will do. Then again, braille displays come in a rather wide selection. With the keyboard, we can move back and forth nearly as fast as, if not sometimes faster than, our sighted counterparts.


Out of curiosity, what is your toolstack?


I use Emacs with Emacspeak for programming and a good many other things. For pure terminal interaction outside of Emacs I use a console-based screen reader called Speakup. For graphical applications, I use a screen reader called Orca. I don't use a whole lot of graphical applications, but I need Firefox for most of the "modern" web. I've also used Chrome with ChromeVox over the years.

Honestly I prefer text-mode browsers when I can use them, but that ship has mostly sailed. I've been involved with the development of edbrowse; the author is a friend of mine.


Do you need a ridiculously good memory and visualization skills for that? I can't imagine writing code without looking at it.


I do have a very good memory, but I cannot really visualize. I've never had any sight. I'm so bad at visualization that I'm baffled by the concept of a picture. How do people manage to cram three dimensions of reality into two? It must be very lossy. Anyway I do have a knack for understanding how all the pieces fit together and keeping it in my head.



