
> Here's a thought: how about you write a native app for each platform? I can guarantee that the hundreds, if not thousands, of engineers working on AppKit and Windows APIs are a lot better at getting this to work than your team.

Not just that, but it took them months to implement some (mind you, still not all) of the features useful to blind users that someone had already built in a userscript in a few days. So yeah, I take this promise with some skepticism.

So this is either a lack of prioritization and disrespect toward part of their user base, or some level of incompetence.

I might sound harsh about this, but imagine being a blind software dev who's supposed to work with Slack to participate in a team. Every day you sign on, it's possible that the Slack devs have broken something and you can't function. And now they've closed the escape hatch.




So much this! I happen to be a blind software developer who has had just this sort of experience in years gone by. Web apps mean that you are at the mercy of the developers. Something can work one day and break the next. This is even more true for blind people than it is for the general public. Even if there is accessibility testing, I doubt that it covers my particular toolstack. I'm on Linux. So I'm doubly a niche user.

The web (and web apps) are all about providing an experience. I don't want an experience, I want a reliable tool.


Oh man, makes me so happy to see the accessibility concerns at the top of this thread. I hate Slack so much. Nothing has made me say "is 10 AM too early for a beer?" quite so much as that absolute pile of uselessness. I thought they'd actually improved their accessibility story when my screen reader read various elements as buttons. Later I discovered that, while they'd likely added the correct ARIA role to a <div/>, they didn't bother adding the expected keyboard behaviors.

I'm fortunate enough to work with co-ops, and the company I'm founding hosts its own tools specifically because those I can control, and I can pick the more accessible open source chat solution. But I can't count how many times I've had to be some company's special snowflake because I can't use Slack, can't use Toggl, can only use parts of Basecamp, and as such can't participate in a bunch of their processes. Now I'll encourage companies further away from Slack than I already do. Forget not touching it with a 10-foot pole. The 100-footer is coming out for this one.

I'm sorry to post such an unproductive comment, but if you're working for a Silicon Valley company and not doing accessibility then you're doing it wrong, and you can pay me or any number of other talented blind developers with some of that investor capital if you want us to show you how to do it right. There is no excuse for being so exclusionary.


As a developer who should probably pay more attention to this than I do, can you recommend some reading material about how to make an app accessible, and how to make sure it stays accessible (i.e. is there a good way to CI test this?).


The only way I'm aware of today is to learn to use assistive technologies yourself and use them on the right combinations of browser/OS/version. See [0] for recommendations on common combinations.

I've given the CI deal a good amount of thought. You'd have to go through the trouble of:

- Provisioning a Windows VM with specific versions of browsers (e.g. IE11) and AT (e.g. JAWS 17, the versions differ quite significantly)

- Writing an automation suite that can control the browser and the AT (Selenium probably does fine for the browser), but, crucially, can also interpret the feedback from the assistive tool to check for correctness. This is tremendously hard: you'd either use debugging APIs (if any exist in the various assistive tools), read memory / reverse engineer with IDA, or capture the audio output to the sound card and run it through speech recognition to figure out whether what the screen reader said is what you'd expect. With something like Dragon Dictate you'd have to figure out how to trigger voice commands.

- Expose the VM using an API that you can call from your test suite

- `expect(jawsOutput).toBe("Type in two or more characters for results.")`
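
Fleshed out, that last assertion might look roughly like this in Python. Purely a sketch: the VM endpoint and `screen_reader_output()` are hypothetical stand-ins for the hard interception work described above; only the Selenium and requests calls are real API.

```
# Sketch only. Assumes a Windows VM running a Selenium server that also
# exposes a hypothetical HTTP endpoint returning the screen reader's last
# utterance (the hard part described in the list above).
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

VM = "http://at-vm.internal:8000"  # hypothetical test VM

def screen_reader_output():
    # Hypothetical API: however the AT feedback was intercepted, surface it here.
    return requests.get(f"{VM}/at/last-utterance").json()["text"]

driver = webdriver.Remote(command_executor=f"{VM}/wd/hub",
                          options=webdriver.IeOptions())
try:
    driver.get("https://example.com/search")
    # Focus the search box so the screen reader announces its hint text.
    driver.find_element(By.ID, "search").click()
    assert screen_reader_output() == "Type in two or more characters for results."
finally:
    driver.quit()
```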

That's a potentially tremendously profitable SaaS offering (to the right companies), if someone can build it.

[0]: https://accessibility.blog.gov.uk/2016/11/01/results-of-the-...


I wouldn't recommend using JAWS and IE for CI. For this purpose, I think it would be much better to use NVDA (https://www.nvaccess.org/) with any browser that can be controlled by a test framework like Selenium. (NVDA supports all the major browsers now.) Then, to feed the text-to-speech output back into your test framework, you can write a simple TTS driver for NVDA, in Python.
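
For the curious, the skeleton of such a driver might look roughly like this. A sketch only, written from memory of NVDA's synth driver interface (which changes between releases; check the NVDA developer guide for the real contract). Instead of speaking, it forwards the text to a local socket that a test harness can read:

```
# Sketch of an NVDA "synth driver" (synthDrivers/testtap.py) that forwards
# speech to a test harness instead of a TTS engine. Interface names are from
# memory; verify against the NVDA developer guide.
import socket
import synthDriverHandler  # NVDA module, available inside NVDA only

class SynthDriver(synthDriverHandler.SynthDriver):
    name = "testtap"
    description = "Forward speech to a test harness"

    @classmethod
    def check(cls):
        return True  # always report the driver as available

    def __init__(self):
        super().__init__()
        self._sock = socket.create_connection(("127.0.0.1", 7357))

    def speak(self, speechSequence):
        # The sequence mixes plain strings with command objects; keep the text.
        text = " ".join(s for s in speechSequence if isinstance(s, str))
        self._sock.sendall(text.encode("utf-8") + b"\n")

    def cancel(self):
        pass  # nothing is queued, so nothing to cancel

    def terminate(self):
        self._sock.close()
```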


That would be a lot easier. I've assumed that NVDA would be the easiest to plug into for obvious reasons but have not looked into it specifically.

I used JAWS and Windows IE11 as a specific example because that's a popular combination among screen reader users. If something works well with NVDA and Firefox, it does not follow that it will work in other combinations, at least in my own testing on things I've worked on in the past. Targeting the low-hanging fruit to begin with is how I'd also start if I were building something for this in earnest, but ideally I'd want to automate testing with all the popular combinations I expect users to have.


For guidelines on making an app accessible, check out the W3C's WAI-ARIA Authoring Practices: https://www.w3.org/TR/wai-aria-practices-1.1/


I also dislike Slack. Slack is just IRC, but reinvented with one centralized provider of everything and clunky, inaccessible UIs that they can change around however they want whenever they want.

(My accessibility issue is much smaller: I merely avoid using the mouse cursor, because the keyboard is much lighter on my wrists and hands than the mouse, trackpad, or trackball.)


>"I hate Slack so much. Nothing has made me say "is 10 AM too early for a beer?"

Thank you for this, this made me laugh. You are not alone in this reaction.


Go host your own messaging tool: Relay is an alternative to Slack. Relay is open source and built on top of Mattermost, which means you can host it yourself. https://relay-chat.com/


I see the comparison to Slack, but how does it compare to Mattermost?


Mattermost is open core, so I guess that means Mattermost has lots of paid features that Relay will have to reimplement. And I wonder if those will be available in the self-hosted version. https://about.mattermost.com/pricing/


Relay is built on Mattermost Team Edition, so it has all the Team Edition features. It plans to add new features based on user feedback, which will be contributed upstream to Mattermost.


Relay is actually hosted Mattermost. You'll get the benefits of Mattermost, except with us taking care of the hosting :).


> I can pick the more accessible open source chat solution.

I'd love to hear more about this (the good/bad/ugly). My guess would be that IRC is head and shoulders above anything else, due to its established standard plus a myriad of solid clients.

But what have you found so far?


I couldn't agree more!


> Oh man, makes me so happy to see the accessibility concerns at the top of this thread

Being “able-bodied” is only temporary, for everyone. Any dev that doesn’t realize this will eventually come to regret it as they age.


Besides building accessibility into frontend/React component toolkits, how do we automate testing for accessibility? I've turned on text dictation and tested apps with a blindfold, but that doesn't scale and I'm not even sure if it's how people really use an app without sight.


After years of trying, I've still not found a reliable way to automate accessibility testing. The only really workable way to manage it currently is: bake it into your entire dev process.

When designing an application, forget the visuals: design the flow of information, and the interactions. This is a surprisingly good facsimile for mobile-first thinking, as it follows similar principles: in both cases, you have a restricted amount of information to display, and have to design to deal with that.

Once you've got the information flow, step from there to visual elements, and ensure that as you build, you're baking in ARIA support and your testers are interacting with it using VoiceOver/JAWS.

At the end, the fact is you won't have anything perfect, but you'll have something better than the majority of sites out there. The reality is that perfection is impossible, but if you bake inclusive thinking into your app from the get-go, it's pretty straightforward, and you usually end up with an application that is less confusing and less overloaded with information for your visual users too.

If you leave it as something to slap on at the end, it's almost always impossible.


All good points there, and agreed about automated testing. I think the most you can hope for in that department is linting-level testing (color contrast, valid HTML, associated labels and form controls, etc.).

The hard things like focus control require manual testing, ideally by a skilled user of AT.


Tangent:

I think you should really have someone who hasn't seen the app test with the blindfold.

Is that double blind, or just single blind plus literally blind?


In a medical context, double blind means neither the patient nor the doctor knows if the patient is receiving the drug being tested or a placebo.

I'm not sure how that would work for software, but it sounds like a much larger experiment than is currently customary.


> how do we automate testing for accessibility

Have you looked into pa11y and its CI integration [1]? It's a good start but it cannot replace properly testing your UI with accessibility in mind.

[1] https://github.com/pa11y/pa11y-ci
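
For reference, pa11y-ci is driven by a JSON config (a `.pa11yci` file in the project root) listing the URLs to test; per the project README it looks something like this (the URLs here are placeholders):

```
{
  "defaults": {
    "timeout": 10000
  },
  "urls": [
    "https://example.com/",
    "https://example.com/search",
    "https://example.com/settings"
  ]
}
```

Running `pa11y-ci` in CI then fails the build when a page violates the configured accessibility standard. But as said above, it's linting-level checking, not a substitute for testing with real AT.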


I'd think regression testing this would be easier than with a GUI. Just interpose between the app and the screen reader, and check for expected strings in the output.


Just curious: how do you effectively program blind? It seems like a really difficult problem to me, because coding involves jumping around so quickly and needing to scroll and grok at high speed. You also have the issue of all kinds of specialized characters that are difficult for any kind of text-to-speech. Are there specialized braille displays for this kind of stuff? How do you go back and forth between the keyboard and such a thing effortlessly?


Not the OP and not blind, but I've worked with a blind programmer before. You move your cursor in the code and it reads you the line. The screen readers can be adjusted so that the speed of reading is really fast. To someone who is not used to it, it sounds like gibberish. But it's pretty amazing how fast the speech can be. After that, it depends on the editor. My colleague used vi (this is a long time ago -- before there was a vim) and was at least as productive as me. The main thing is that you have to remember the code.

I've occasionally tried to set up a workable system so that I could program blind. I have vision problems where I get ocular migraines unless I have my system set up with a huge font and very high contrast anyway, so I often think that it would be nice to program without looking at the screen. However, I have yet to get my system set up in any way that works. Accessibility has a long way to go. Every time I've tried to set things up I wonder how a blind person can possibly get to the point where they can even start. It's so frustrating.

Actually if anyone in the know is reading this, I'd appreciate a pointer to the easiest to set up Linux system. I wouldn't mind giving it a try again.


> You move your cursor in the code and it reads you the line.

That's somewhat similar to how ed works. You choose a line number or range and print those lines to the screen.


Since you mentioned ed, I know of a blind programmer who actually likes and uses ed (or did last time I heard from him). In fact, he wrote his own version of ed that also includes a web browser, and called it edbrowse. To be sure, he's in the minority even among blind programmers. But for what it's worth, you can find an article that he wrote about his approach here: http://www.eklhad.net/philosophy.html


I am not blind but edbrowse is far and away the best non-GUI web browser I've ever used (better than elinks, lynx, etc). I highly recommend that sighted folk crack open the user manual and give it a try.


I love edbrowse! I keep a copy handy; it's the only web browser I know of that is distributed as a single statically-linked executable. Great for getting through wifi login portals before installing packages.

http://edbrowse.org/


But how does it sanely pronounce things with abbreviations or even something like:

NSDictionary *myCompoundedWord = @{@“key: [NSNumber numberWithInt: 7] };

And know that it’s missing the terminal “ in the string and has an extra space after the ]?

Seems very difficult. Would be great if it could understand the language enough to verbalize it at a higher level.


With the punctuation level set to all, the NVDA screen reader for Windows reads your code snippet like this:

n s dictionary star my compounded word equals at left brace at left quote key colon [pause] left bracket n s number number with int colon [pause] 7 right bracket right brace semi

It's a lot to absorb, but people do program productively this way. For example, the NVDA screen reader is itself developed primarily by blind people.


I think it would be much better if the screen reader could use sounds for punctuation, like the sound of a typewriter to indicate a dot, and some meep-like sound whose frequency goes up for an opening parenthesis and down for a closing one.


I liked Urbit's mapping from symbols to syllables: https://github.com/urbit/docs/blob/master/docs/hoon/syntax.m...


That idea is as old as Victor Borge...


Interesting. How do blind developers feel about minimalist languages like Lisp? On one hand it seems like it would read very well in some circumstances (+ 1 2), but the scoping could be a real pain. COBOL seems like another language that might be well suited to them.


I'm not aware of any correlation between blindness and programming language preference, even when blind programmers work on their own projects. I used to think blind programmers wouldn't like Python because it has significant indentation. (Note: I'm visually impaired, but I program visually, not with a screen reader.) But as it turns out, I know blind programmers who love Python and can deal with the indentation just fine. The NVDA screen reader is written in Python, and that project was started by a blind programmer who could choose any language he pleased.

Some projects developed exclusively or primarily by blind programmers do make odd indentation choices. A couple of my blind programmer friends prefer single-space indentation, or at least they did the last time I worked with them (using Python). NVDA uses tabs for indentation, which breaks with the Python convention of four spaces per indentation level. But blind programmers are perfectly capable of following the usual indentation conventions when working with sighted programmers.

Finally, I don't know of any blind programmers who like COBOL. I'm sure there are some, probably working at banks like their sighted counterparts; I just don't happen to know them.


Emacspeak[0] is one of the more popular voice-oriented IDEs. I have yet to get it working, but I think you can do things like have it read visual regions and sections between matching parens, etc. Ideally this is what I want to use, but it has resisted my efforts so far. Maybe I'll give it another try this weekend.

[0] - http://emacspeak.sourceforge.net/


"parens parens parens parens parens parens parens some code here parens parens parens"


The regularity of Lisp's syntax suggests an interesting way to render it in speech, at least for blind people who happen to have a good ear for music. Set the TTS engine to monotone (i.e. no attempt at natural intonation), and increase the pitch for each level of parenthesis nesting. So it would basically be singing the code, going up a note or two for each level of nesting. It would sound weird, but I think it could work for some people, myself included.
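
A toy sketch of the idea in Python, assuming you could feed per-word pitch offsets to a TTS engine (here they're just printed): walk the source, track paren depth, and pitch each token accordingly.

```
# Toy sketch: map each Lisp token to a pitch offset derived from its
# parenthesis-nesting depth. A real version would emit TTS pitch commands
# instead of printing.
import re

def pitched_tokens(source, semitones_per_level=2):
    depth = 0
    for token in re.findall(r"[()]|[^()\s]+", source):
        if token == "(":
            depth += 1      # deeper nesting, higher pitch
        elif token == ")":
            depth -= 1
        else:
            yield token, depth * semitones_per_level

for word, offset in pitched_tokens("(defun add1 (x) (+ x 1))"):
    print(f"{word!r} at +{offset} semitones")
```

The parens themselves never need to be spoken: the pitch carries the structure.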


I like that direction, but it also sounds like it might be hard to know the reference points. I wonder if it'd be easier to tell levels apart if you used musical notes in conjunction, mapping the octave/note/chord/scale to the indentation?

Even better would be tools that are aware of indentation, know that you can't see it, and help you debug problems without having to make it so explicit all the time. It could get really weird / grinding to have to listen to monotone speech that's constantly changing pitch.


What if instead of just the pitch it said "do re mi fa so la ti do" every time you went up/down a level? If I ever lost my sight I doubt my tone deafness would go away.


Ugh, that won't do. I need my: brace bracket paren asterisk some code paren bracket brace semicolon.


Screen readers usually break it down into chunks they can pronounce, or spell it out character by character, with added cues to indicate punctuation and some other things. It's not as slow as it sounds, though: most blind people have their screen readers set to read at a speed that is totally incomprehensible if you're not used to it. It requires a very good memory to manage something like programming, but blind people get really (almost unbelievably) good at that sort of thing simply because they practice a lot by necessity.


I imagine that if you can comprehend at a very fast speed, it gets easier to keep the line in your head, as you can store and recall the characters from very short-term memory. I don't know if this phenomenon exists, but if I've heard the entire line in, for example, 0.5 seconds, I think I'd be able to construct a mental image of it and code.

Another point is that I imagine it takes your complete focus to listen and comprehend single characters at such speeds, so you will be super-focused on the task when you're writing code.

We, as programmers with sight, can read code without getting anything out of it if we're not focused.



> coding is about jumping around so quickly and needing to be able to scroll and grok at high speeds

I mean that might just be how you code, and GP does not code that way...


If you work on a large production code base, I don’t see how you can’t end up having to search and grok lots of code written by other people...


Since you mentioned braille displays, some blind programmers do use those. They're expensive though ($1000 or more). Computer braille has 8 dots per cell rather than 6. That's a good fit for ASCII.


Depending on how our punctuation is set up with any screen reader, the characters in code are read off nicely. And no special braille display is needed for this; any normal one will do. Then again, braille displays come in a rather wide selection. With the keyboard, we can move back and forth nearly as fast as, if not sometimes faster than, our sighted counterparts.


Out of curiosity, what is your toolstack?


I use emacs with emacspeak for programming and a good many other things. For pure terminal interaction outside of emacs I use a console-based screen reader called Speakup. For graphical applications, I use a screenreader called Orca. I don't use a whole lot of graphical applications, but I need Firefox for most of the "modern" web. I've also used Chrome with Chromevox over the years.

Honestly I prefer text-mode browsers when I can use them, but that ship has mostly sailed. I've been involved with the development of edbrowse; the author is a friend of mine.


Do you need a ridiculously good memory and visualization skills for that? I can't imagine writing code without looking at it.


I do have a very good memory, but I cannot really visualize. I've never had any sight. I'm so bad at visualization that I'm baffled by the concept of a picture. How do people manage to cram three dimensions of reality into two? It must be very lossy. Anyway I do have a knack for understanding how all the pieces fit together and keeping it in my head.


> it took them months to implement some (mind you, still not all) features that are useful for blind users that someone already did in a userscript in a few days

A userscript hammered out in a few days is not really that comparable to incorporating accessibility in a flexible and sound way across a codebase.

Where one is dependent on the current representation and types of features in the app, the other touches pretty much everywhere in a code base that might be split across different people or teams that have other business goals to accomplish.

The two scales of work are not really as comparable as they may seem at first glance.

So, contrary to what you said about lack of priority and disrespect, I think it's admirable that they take the time to add these necessary accommodations in a way that ensures that they'll be appropriately maintained and present with future iterations.


The scale of work would be a lot, lot, lot smaller if they had made native apps to start, and could use the built in accessibility stuff instead of having to reinvent the wheel.


Is there a reason Electron apps can't take advantage of the accessibility features built into Chromium? Having separate platform apps runs into the issue of the user settings page being accessible in the Windows app, but unusable with a screen-reader in the Mac app because of a bug, etc etc.


Assuming this claim is valid, that it took a "few days" to implement what took them "months", they could rewrite the entire userscript from scratch every time a change is made, and this could be repeated dozens of times; assuming major UI changes are made once every few months, that would stretch over several years.


But, but, but, then I'd have to touch the same code twice... /s

There is definitely a poison in our profession. I have to fight the urge to make sure no future changes will break something, instead of just budgeting time to fix breaking changes later. Especially since no one seems to remember when we all agreed something didn't need to be bulletproof. Just today there was an expression of disbelief when I reminded people I'd built a temperamental UI for some internal tool. Never mind that it was a conscious decision to prioritize a better UI later if we found the tool useful and found the UI was causing problems.


These are not unrelated observations! The same people who expressed disbelief that your UI is temperamental will react the same way if Slack releases a temperamental UI.

The difference is that they don't work in the same office as Slack. They'll never hear about all the important, completely justified reasons Slack decided to release a bad UI. They'll just notice it happened, and conclude that Slack must hire incompetent UI developers who didn't realize it was bad. Any product with a large customer base has to be pathologically averse to things like this.


What is a temperamental UI? It must have something to do with UX, I'm sure, but I can't find anything on the first page of Google about it, aside from one page that mentions avoiding temperamental UIs. Firstly, that was all that was said, it seems; and secondly, the forum it was posted on was, ironically, completely broken on mobile, such that the text could not be seen. The page had a mobile navigation bar and the zoom locked to mobile, unchangeable, but the content is wide like on a regular monitor and cannot be scrolled into view; additionally, there is a sidebar, so only the first few letters of each line of text are visible. It was about the worst thing I have ever seen on mobile. Obviously they have wrapped a forum solution in their own templates, and their templates are responsive, but the forum content is not. So I guess the people who run that site don't browse their own forum in a mobile browser. Anyway, I digress.


Here it's not a named concept; it's just "temperamental" + "UI". In this case, temperamental means "something that does not behave or work reliably."



Any extra engineering time spent on something beyond that required to make sure it fulfils its purpose is wasted. It's like the old quote about Formula 1 cars: The perfect Formula 1 car crosses the finish line in first place, then falls apart. If it doesn't cross the finish line in first place, it's too light. If it doesn't fall apart the moment it crosses the finish line, it's too heavy.

(Note that 'purpose' might be 'allow us to process this one batch of files' or it might be 'provide a stable, maintainable infrastructure for our product for the next 20 years'. It's just important not to lose sight of that purpose either way!)


Not to mention that Chromium takes a performance hit when accessibility is on – that's why it's off by default. But both Safari and native Mac apps are always accessible.


Well, in this case I was quite glad that they targeted the web platform. At least that allows me to code my own stopgap solutions using userscripts and the like. That's harder to do with native apps.


With native apps, you don't need to. Good accessibility solutions are the default.


As a developer community, we need to get to the point where accessibility is not an afterthought, not even something that has to be considered at all; it just is. I'm stating this from experience; I'm blind myself, so I know exactly what is being referred to. My group used to use Slack, and we stopped using it for this very reason. It's not hard to fill out the accessible label field. If it's present in the framework, then it should be taught and enforced.


What do you use instead of Slack? I'm a blind developer and find it usable enough, but I'm also on a fairly small team without a ton of Slack traffic.


We use Microsoft Teams, and for our public interface, Discord. Though I just set up a team on Keybase as well. Look for OpenCAD on Discord and StormlightTech on Keybase if interested.


I hear you.

I worked with two blind systems people for close to 5 years - we were all working remote, so initially I had no idea they were blind - and subsequently learned from them about their struggles and frustrations dealing with shitty or nonexistent accessibility features.

And with assistive devices’ drivers that were broken, or not updated since Windows State of the Ark version, or not available on Linux or Mac, and so on.

These two people dramatically improved the accessibility features of the smartphone product that the company sells, by reporting the issues they found while dogfooding it. They raised the awareness of many people, including me, of the challenges of the blind, particularly in technology settings.

As a result, I learned ‘dot’ (graphviz) pretty well, and became much more text-centric in other ways (e.g. using markdown, avoiding images when possible, adding alt text).

Slack has done the community a disservice by dropping support for open protocols like IRC and XMPP, which support text-based interfaces that work well with screen readers.


It might be only tangentially related to your point, but there are Slack API-based clients for [emacs][1] and [weechat][2].

So screen-reader usability is still a thing. The fact it's not using a proper standard open protocol is a problem.

[1]: https://github.com/yuya373/emacs-slack

[2]: https://github.com/wee-slack/wee-slack


As someone who doesn't use Slack: why did we ever move away from chat programs and protocols that worked fine? I don't know why I need to use Slack, Hangouts, Discord, etc., when they're just reinventing IRC and/or the garden-variety instant messaging platforms that already exist.


  I don't know why I need to use slack [...] just
  reinventing irc
Features Slack has that IRC lacks:

* User authentication

* Support for multiple concurrent logins by one user

* Persistent, searchable history

* (Ad-free) file and image sharing built in

* Simple integrations, like webhooks, built in.

In other words, Slack is like IRC+NickServ+Irssi+Screen+Imgur, except easier to use, in the sense that you don't need to know key combos like Ctrl+A+D or Ctrl+Alt+2, you don't have to figure out how to send such combos from your phone's terminal emulator, and you don't need access to an always-on server to run your screen session.

Of course, it's not all good; Slack has a bunch of opinionated design choices, like a channel it's impossible to leave, no ability to block users, no off-the-record option, and suchlike.


You are trying to tell people what they should prefer. This is about openness and choice. If I've been using screen and irssi/xchat/etc. for decades, I don't want to learn anything new. I don't want a huge app shoved down my throat that's not nearly as customizable and integratable into my workflow as all those tools I already know. The Slack app is just a horrible tool designed to get in your way and interrupt your work. Thank god we didn't jump on that shittrain at my current job.


Not to mention the webhooks. It's trivial to implement pushing data into slack.

They even give you a hello world sample curl when you opt to add the webhook. At the simplest you can just replace the hello world text and bam -- you're sending to slack. Just takes a very simple json input.
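
For anyone who hasn't seen it, the whole integration really is one JSON POST to the webhook URL Slack generates for you. A minimal sketch (the hook URL below is a placeholder):

```
# Post a message to a Slack incoming webhook using only the standard library.
import json
import urllib.request

HOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

req = urllib.request.Request(
    HOOK_URL,
    data=json.dumps({"text": "Build #42 passed."}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # Slack replies 200 with the body "ok"
```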


Discord is amazing; there really isn't a good replacement right now. Before it, there was a mess/mix of IRC/Skype/Teamspeak/WhatsApp; now you can combine all that in one great client from a company that actually seems to care about its users. It's my favorite monthly PayPal charge!


Not self-hostable. Also, why is it a problem to use different tools for different use cases?

Chances it will be around in 10 years? I would say 25%.


I replaced Discord with Mumble [0][1] / Murmur (self-hosted). It scales really well: on a tiny VM I could handle thousands of people. That said, it isn't quite as happy-clicky-frictionless as Discord. They are working on that aspect of it.

[0] - https://github.com/mumble-voip/mumble

[1] - https://wiki.mumble.info/wiki/Main_Page


When I combine the feature set of IRC/Skype/Teamspeak/WhatsApp, I come up with text chat + voice/video conferencing, which e.g. Skype already provides. Is the difference that the client is great?


Yeah, a feature set doesn't matter if the features are bad. I would never want to use Skype as a platform for an ongoing text chat. (Does Skype even have persistent channels?)

Also now that Discord exists, I would never do a voice chat in Skype either. A substantial portion of every Skype call I've ever been on was people apologizing to each other for the bad audio. Discord apparently just has better signal processing.


> Does Skype even have persistent channels?

Skype for Business does. But... not the Azure/cloud version; you have to host it on-site, and MS is rapidly replacing Skype with the less feature-rich (if that's even possible!) 'MS Teams'.


Ever try to get Skype for Business (née Lync) working on Linux? With video/voice?


Discord's inability to separate identities is the deal breaker. I don't want to be logged into work and play at the same time. I'd also like to be able to engage in some communities pseudonymously and others not.

None of the chat apps ticks all boxes, which is why we need a universal client that puts the user back in control like in the Trillian/Adium days. And no, matrix+bridges is not that solution.


What's the problem with Matrix plus bridges? I am uninformed, so don't take this question to imply there are no problems.


As someone also relatively uninformed: when my team moved to Slack, I was hoping to get a Matrix integration going. But I don't have admin rights to install the needed integrations on the Slack side (and I think we're at max integrations anyway, somehow; why is that a thing...). Recently, though, I found a different type of Slack-Matrix bridge that works via user-puppeting, https://github.com/matrix-hacks/matrix-puppet-slack so no action is needed on the Slack end. Unfortunately it requires you to set up your own homeserver... One day I'd like to have a one-client solution to all these things again, like I used to with Trillian/Pidgin. Matrix gets me a lot of the way there, and with a little more effort (like my own homeserver) possibly all the way there.


One solution is for matrix.org to provide a hosted instance of matrix-puppet-slack - although we (matrix.org) are not very comfortable doing so because we'd start gathering everyone's slack credentials, which is quite a lot of responsibility. It'd be much better if everyone could run their own and have responsibility for their own bridges. In practice we haven't had much bandwidth for bridge work over the last year but hopefully this will change soon.


Can Slack plausibly cause ADA compliance problems re: visually impaired people? Or would it end up as "use the browser client with a screen reader and be glad you can do that"?


The ADA only applies to certain types of "public" private businesses, like grocery stores, bakeries, hotels, etc. I have never heard of it applying to any private software (that isn't government related). You can always use some other chat software or none at all.

However, I would imagine that as a company, if you require employees to use specific software as a condition of employment and no accommodation can be made, you might run into trouble as an employer.




Search: