The blind user in TFA might be surprised at your assertion.
Blind users typically rely on screen readers, including tools such as Emacspeak (which relies on Emacs's built-in eww browser, w3m (on which I believe eww is based), lynx, etc.).
The ability to rely on console-based tools with text-to-speech output, driven by typed input, is fairly widespread.
The requirement that interactive content be rendered directly to speech is key.
I am blind. I know or have known at least 20 other blind people well enough to know what browsers they use. None of them used Links. One of them used Edbrowse. The rest (including myself) use Firefox, Chrome, or Safari. I have at least one personal project (not public) which uses React heavily. Saying that Links is necessary is an outdated view, so much so that things like the accessibility object model [0] are in progress, which may go so far as to support use cases like remote desktop connections in the browser by injecting fake screen-reader-only nodes into the accessibility tree.
In general, even the terminal itself is not so great. There are efforts like Emacspeak which require learning what is essentially a second desktop environment, but outside that it turns out that exposing semantics (which only non-text browsers and apps can do) is useful: for example, knowing whether or not the cursor is in a text editor, so that deletions are significant, or whether a run of text is a table.
The idea that JS is bad for screen readers--or indeed that we use text-based browsers--is a consistent misconception that is no longer true. It was true 10 or 15 years ago, if not longer, but everything AT has come a very long way since then.
> The idea that JS is bad for screen readers--or indeed that we use text-based browsers--is a consistent misconception that is no longer true.
To be perfectly blunt, I feel this misconception is pushed mainly by people with an "anti-javascript" agenda.
If one can no longer argue that "supporting non-javascript clients is the only way to support accessibility", one is only left with "if you break support for non-javascript clients, you will only be excluding people who deliberately disable javascript". And at that point, the amount of effort to support non-javascript vs. the return on investment shifts heavily in favor of not caring about users who intentionally disable javascript. This is an argument I've had in every shop I've worked at and at the end of the day in every instance we decided it was simply not worth the hassle to support people who intentionally disable javascript.
In fact, I'm pretty sure any competitive search engine these days has to have a very complex crawler that is more than able to deal with javascript-rendered pages. If they didn't, they'd be leaving a ton of content out of their indexes--not a good look for a search engine. So even the "you have to support text-only browsers to please google" argument has most likely fallen out of favor.
I see this like supporting DOS in 2019 or somesuch. There might be an esoteric reason to do so but when 99% of the userbase has left and the old thing can't support new technologies, saying that we need to support the old thing forever because a tiny subset of users still use it stops meaningful progress. If there weren't plenty of good, modern options I would be all for Links but there are, so I'm not. At some point it's on the user for choosing not to leave their little island of familiarity, especially when the user is technical enough to be using Links.
Firefox and Chrome both have mature accessibility API implementations at this point. Edge is also at least okay. Internet Explorer has worked forever. You then couple those with a screen reader--most commonly Jaws or NVDA--and you get something that very much resembles Emacs or what have you: there's around 50 or 60 keystrokes I use on a regular basis. It's like a local client-server model (indeed documentation on this topic uses those terms). You couple something implementing the server with a client, i.e. a screen reader, a one-switch controller, speech recognition, what have you, and those consume exposed semantic information. Browsers then map web pages to the platform's accessibility model for consumption.
NVDA offers scriptability for the web and otherwise in Python as well, so anything it can't do can probably be added. For instance, there's an add-on for using your local screen reader to control a remote machine, provided that both run it (not the most applicable to accessibility, but a good example of how far you can take NVDA's scripting). Jaws also does much of this but is much more proprietary, including an only half-documented scripting language.
iOS is also good. Unfortunately, Apple very much dropped the ball on OS X and hasn't picked it up again, but my brother (also blind; it's genetic) did an entire business degree on an iPhone because he didn't want to be bothered learning a laptop. That's a loss in efficiency, but even the lesser options are now good enough that a non-programmer can pick them up and go get a college degree.
There is an idea that goes something like "Obviously screen readers have to struggle to present information, therefore dedicated text-based browsers are better". That was true in 1995 when we didn't even have MSAA. I know people from that era and they had to hook APIs in other processes at runtime. But in actuality, once you expose the accessibility tree and hand it over to the people who want to use it, good things happen.
Ah, I'm sorry, I misunderstood what you meant. You're talking about screen reader compatibility only.
I was interested in hearing about browsers that do what Lynx does, but are better. Unfortunately, the browsers you mention are graphical, and so are not Lynx replacements.
From my perspective there is very little difference. The interface I get out of Firefox is exposed as if it were a text-based browser for lack of a better analogy (it's not quite the same, but the differences are subtle and not obvious at first glance). But I also get the ability to do all the non-text-based-browser things with that interface instead of being limited to what a text-based browser supports, and those things can be made accessible to me. But the really big advantage is that my skills at driving Firefox also work with Chrome, IE, and Edge, and any web view on the platform. Plus there is a large common subset that is shared with all the desktop apps as well.
I'm not the right person if you're looking for someone who shares enthusiasm for text-based browsers, in other words. In general I would like it if people would stop using blindness as a point in their arguments that they're necessary because it shows a massive misunderstanding of what the world of accessibility is like.
> In fact, I'm pretty sure any competitive search engine these days has to have a very complex crawler that is more than able to deal with javascript-rendered pages.
Save for the region-specific engines, which likely lag behind, Google [0] and Bing [1] both support crawling javascript, and Bing is generally the index of choice for most other search engines like Yahoo, DDG (at least for now; I occasionally get crawls from duckduckgobot), etc.
I admit I have an anti-javascript agenda since most JavaScript on the web is used against me, to track me, show ads, autoplay videos, popups, exploits etc. I don't trust you, I don't want you to run code on my computer.
You're already visiting web pages that are directing your computer to access servers somewhat arbitrarily. You're running quite a bit of other people's code on your computer.
I appreciate your assistance but I can check spelling; a simple "Did you know that it's Lynx" would have sufficed. Good to know there's two text-based browsers. I didn't, but I and everyone else I know will go on not using either.
There's also a difference between those who acquire sensory limitations (sight, hearing, also motor control, etc.) later in life, whether through accident, injury, illness, or degeneration, and those who have limitations from birth or a young age. Having to learn some (admittedly arcane) interface such as emacs late in life, with diminished and often still-declining cognitive capacity, is difficult.
And yes, mainstream commercial software and OS offerings are improving. Slightly. (Most are still abysmally poor.)
I'm hard-pressed, though, to see how an increased dependence on dynamic and programmatic web design elements improves accessibility. Especially when wielded by technologists, managers, and clients with little awareness or concern for such access.
Again: Google should be much better positioned to grasp this than most. They clearly don't.
Google themselves are your example. Leaving aside some horrible accessibility keybindings in Docs, both Docs and Sheets are basically fully accessible. In fact, Sheets is the best spreadsheet program I've used. It's not as powerful as Excel, but Excel is laggy for a variety of technical reasons that shouldn't exist, at least with NVDA. I can also read presentations in Slides, and I might be able to make one. I've never tried; web or not, making slides just isn't something super feasible for a blind person if it's going to look any good.
I have gripes about ARIA. It's definitely possible to abuse this stuff and end up with an inaccessible mess, but overall we have been trending toward a more accessible internet, and things like the aforementioned do exist.
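To make the kind of abuse concrete (a hypothetical sketch of my own, not anyone's production code): the classic mistake is re-implementing a native control with ARIA bolted on, where forgetting any one attribute or handler breaks it for keyboard and screen reader users, versus just using the native element.

```javascript
// A common ARIA pitfall: a div dressed up as a button needs role,
// tabindex, AND keyboard handling -- omit any one of them and it
// silently breaks for keyboard and screen-reader users.
const fakeButton =
  '<div role="button" tabindex="0" onclick="save()" ' +
  'onkeydown="if(event.key===\'Enter\'||event.key===\' \')save()">Save</div>';

// The native element gives focus, role, and key handling for free.
const realButton = '<button onclick="save()">Save</button>';
```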
I've been blind since birth. I started on a device called the Braille 'N Speak 2000, which functioned very much like Emacs. I don't use Emacs because Emacspeak requires a Linux desktop and adds a ton of extra complexity on top for very little gain. Linux dropped the ball big time on accessibility and audio in general, and never really recovered. Obviously this is opinionated, but I feel like you're implying that I lost my vision later in life and am forming my opinion around that perspective. You might additionally want to look into Jaws and NVDA. Learning those is about as bad as learning Emacs and the like; knowledge from when you were sighted doesn't transfer in the slightest, and the interface is much more arcane than you probably imagine it to be.
> Docs and Sheets are basically fully accessible. In fact, Sheets is the best spreadsheet program I've used.
This is off topic, and I don't want to distract from the current conversation, but speaking of sheets -- as a web developer, I often build SVG charts with d3, and I've been racking my brain lately trying to figure out how to make them more accessible to blind users beyond just linking to tables of data.
If you're using Sheets, are you also regularly consuming charts as well? Is there a common auditory shorthand for representing something like a pie chart?
Sadly no. Making charts accessible is an unsolved problem. There have been some efforts for accessible graphing calculators that work more or less, but it's not trivial to make a generic one-size-fits-all solution.
For Sheets, the underlying stuff that runs it is quite complicated. They ended up doing something akin to an offscreen model with HTML to make it work because afaik they use a canvas of some sort to draw everything. In fact, unless you turn on braille mode, both products actually have a built-in screen reader that talks via aria live regions. That's terrible practice, but to their credit they got ahead of what the internet was providing for accessibility and didn't have a choice in that regard.
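For anyone curious what "talking via aria live regions" means in practice, here's a minimal sketch (my own illustration, not Google's actual code): any text written into a live region is announced by the screen reader without the page stealing focus.

```javascript
// Minimal sketch of the live-region technique (my illustration, not
// Docs' actual implementation). Script writes into this region and
// the screen reader speaks it without focus moving anywhere.
function liveRegionHTML(message) {
  // aria-live="polite": announce when the user is idle;
  // aria-atomic="true": re-read the whole region on each update.
  return '<div aria-live="polite" aria-atomic="true">' + message + '</div>';
}
```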
For something you can practically implement without a huge project, I suggest text descriptions of the data. If you want to do a bit better, make it an HTML table--that'll give some convenient navigability for free.
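Building on the table suggestion, here's a sketch (a hypothetical helper of my own, assuming simple label/value data like you'd feed a pie chart, not a d3 API): the `<caption>` and `scope` attributes are what give a screen reader the free header announcements while navigating cells.

```javascript
// Sketch: render a chart's underlying data as an accessible HTML
// table. <caption> names the table, and scope="col"/"row" lets a
// screen reader announce the right headers as you move between cells.
function dataTable(caption, rows) {
  const body = rows
    .map(r => `<tr><th scope="row">${r.label}</th><td>${r.value}</td></tr>`)
    .join('');
  return `<table><caption>${caption}</caption>` +
         '<thead><tr><th scope="col">Category</th>' +
         '<th scope="col">Value</th></tr></thead>' +
         `<tbody>${body}</tbody></table>`;
}

// Example: the same data you'd hand to d3's pie layout.
const html = dataTable('Browser share', [
  { label: 'Firefox', value: 35 },
  { label: 'Chrome',  value: 55 },
]);
```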