I think about this whenever someone posts about yet another new immediate-mode UI library, or a custom layout engine built on top of HTML canvas. Generally, all these kinds of approaches make your app invisible to screen readers.
The amount of work Apple, Google and others have put into making all this work is staggering, and life-changing for people who are blind. Please don’t throw all that work away in your apps just because you think some UI toolkit is shiny.
Honestly the amount of work people have put in over the years to NOT just build an app with the native tooling is staggering. It really is not that expensive to duplicate some code across multiple target platforms, compared to the compatibility, UX and accessibility headaches.
Facebook and Twitter have both spent dozens, if not hundreds, of man-years trying to make HTML lists (tweets, posts) scroll as fast as native. Google is spending IDK how many hours painstakingly redrawing all the native components for both iOS and Android into Flutter, having to catch up every time the platforms change.
Just use the platform like it was intended to be used. You're not special. You're not smarter. You're forgetting most of what the authors have already solved.
And, if you're at the scale of the Facebooks and co, you have the engineers to build it. You have the engineers to rebuild your whole app from scratch a dozen times over. And if you don't, it'll be trivial to find and hire them at the rates you're paying.
Flutter is probably meant to be the native toolkit of a platform. I don’t think there is anything wrong with code-once toolkits that don’t use native widgets if they spend the effort to be accessible and have good UX.
I’m personally coming around to the belief that we’ve fooled ourselves regarding UI: that one single UI library could actually be the ideal choice for every possible app, and that we just need to keep trying until the absolutely perfect set of tradeoffs appears. I think this is enticing because UI is very hard and full of problems you would have to solve over and over if you had multiple UI frameworks, whereas these things could be amortized if there were fewer.
I don’t think so. I do think the UI ecosystem, as it slows down, could become more modular and share more core components. I expect to see “base” rendering libraries, like Skia or Pathfinder, powering next-generation UIs, and hopefully someone will also make a cross-platform library to provide accessibility and IME primitives for Windows TSF/macOS/etc. I think smaller bits like that can clearly be minmax’d to ideal conditions for given use cases. But if you were hoping for a future of “just use the native toolkit!” I think it’s not likely, at least not until software slows down a lot more. Even then, lowest-common-denominator libraries like wxWidgets often end up being disliked by users and developers alike, due to the compromises needed for them to accomplish their goals. I’ll admit the story has gotten better with React Native and its success on mobile, but even that is not a perfect story: performance on Android was apparently not good enough for Discord, which may in fact be a pretty good signal for why something like Flutter is a good idea anyways: it can mature over time on existing platforms and then possibly even become the first-class toolkit for something.
I do agree that most developers should prefer native over vanity, but it’s a false dichotomy, as there are plenty of valid reasons not to go native. Nowhere is this clearer than on Windows, where virtually nobody does “native” anymore. (It can still be OK, but it’s not great. Having true HWNDs for every component is a great way to have things flicker whenever you delete/create widgets.)
> I do think the UI ecosystem, as it slows down, could become more modular and share more core components
I think things like Flutter will take another few years to become performant even compared to Electron/Tauri/WebView+JS.
Yes, it is sad that we redesign UI (even web UIs) every so often, but it is an unforgiving world. I am not sure whether fickle users or product managers are to blame, but we seek fancier and fancier UI with every update. There are exceptions, like the craigslist website, but those are rare.
The new design puts content first. There are no superfluous borders, backgrounds, or gradients. Every visual element exists for a good reason. This is good, functional, accessible design.
And the YouTube of 2006 is a conservative example; we (well, most of us) can remember the UX/UI atrocities that existed back then.
> hopefully someone will also make a cross-platform library to provide accessibility and IME primitives for Windows TSF/macOS/etc.
I'm planning to do this for accessibility. I don't think I have the expertise to cover IME though. Do you think they both need to be done in one library, or can they be decoupled?
Certainly on Windows, IME and accessibility are separate; IME is done through TSF as you mentioned, whereas accessibility is done through UI Automation.
> Just use the platform like it was intended to be used. You're not special. You're not smarter.
If you assume that they are not less smart than the average engineer, then a reasonable assumption is that they weighed the trade-offs and went with what worked best for them, and you're missing some context (or they give different weight to the importance of native widgets, which comes at a cost).
There are pros and cons to using native widgets or cross-platform libraries/UIs that go back to when UIs and platforms became a thing. I know of the SWT vs. Swing debate in the Java world some decades ago - there are likely many precedents before then. All I know is there is no right answer, just trade-offs you have to weigh.
Or they have evaluated their potential solutions without understanding the choice they are making. For example, they have only considered how something looks, not how it works (for both people using accessibility tech and those who don't).
If you look at a screen produced by, say, Flutter and compare it to Apple's native toolkit, you might conclude that you can produce the "same" thing with Flutter as you'd get with Apple's native toolkit. To boot, you can do it in less time.
The thing to consider is: Did you really produce the same thing? Maybe you produced two things that look the same, but aren't even close to equivalent in many other important ways.
> For example, they have only considered how something looks, not how it works (for both people using accessibility tech and those who don't).
When Sun or Google created their own, non-native toolkits designed to run on Mac OS/iOS, they were fully aware of what doesn't work. However, they balanced that con against the pros of the ability to write cross-platform code once (ground-floor engineers), and on a strategic level, they wanted to commoditize Mac OS/iOS into a dumb pipe (one of many other dumb pipes to deliver code/content to), rather than a platform with inherent value; they consciously considered this to be more important than users' griping at the weird scroll-speed curves. One can create a shim for native widgets like Qt does, but you'll be at the platform owner's mercy when it comes to release cadence.
It's good business practice to commoditize your complement - seen in that light, the decisions are far from "not smart". Not great for some users, for sure, but they come from deliberate decision-making about control.
This is the answer. This is always what happens. You use the fastest thing that will get the feature done or the bug fixed. To do that, you choose the components that provide the things you need to match your task's requirements. If the requirements don't list accessibility, then you do not care. Your job isn't design, it's implementation. The senior people are supposed to think of that stuff. You have enough work just keeping up with all the things you have to do at that scale.
In the initial meetings where you talk about making new component libraries, someone who worked on the old one says, "We reinvented the wheel last time, then standards changed, and we couldn't keep up. We need to take current standards into account before doing anything new, including accessibility."
And the new boss says, "That's a good call-out, but accessibility isn't a problem if nobody can even load the component list and scroll to the thing they want, and with the current UX and stats, we're getting close to using most of the available memory on devices just to scroll our content. We don't want to change the content we can scroll, we just want to make scrolling more efficient. Everyone agrees the current design is at its limits. We need to start over at the beginning. Accessibility is step 3. We're at step 0. Let's focus and move forward." And while they are right, they need to demo an app to an able-bodied boss, so accessibility gets pushed farther back as technical things take priority.
Often, this leads to teams over time producing lots of things without accessibility, and you build new things from old things, and it just snowballs.
Suddenly a user appears, and they aren't served, and now a whole new bunch of juniors are scrambling to add accessibility to old things without breaking current usage.
Engineers like it when everything works the same. It is also easier for product marketing: the new features will be available everywhere, and they’ll work the same. Less to document and keep in your head.
The advantage is mostly that it is easier to manage for the organization building the app. There is no real benefit for the users, who in this case most likely get slower, larger, more resource-intensive apps that avoid platform integration.
Also, the organization is ceding control to another party when it doesn't use its own stuff. That doesn't sit well when the needs of that other party get in the way of profits.
I assume most of the time it's the higher-ups deciding these things, not the engineers. The boss decides they don't like something about the native UI and asks the team to build something new, without understanding the deeper implications.
The real problem with Facebook at least is build times -- using the native stack incurs something insane like several-hundred-hour build times, IIRC. So the native stack is just a non-starter once you get to a certain (ludicrous) codebase size.
Are you saying that the client includes all of the Facebook logic or something?? Surely displaying forms (essentially what a post is) and the ability to scroll through a list, click on a button, and view/upload a photo is the essence of the app?
Where's the magical bit that means they can't use a native UI toolkit?
The magical bit is that they have hundreds and hundreds of cooks in the kitchen, who all need to justify their existence by creating tons and tons of little one-off UI screens for obscure features that hardly anyone sees or uses.
This kind of team structure also causes other strange-looking technology choices, such as CSS-in-JS: they have to do stuff like that because they literally cannot prevent different members of the team from writing CSS class names that collide.
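To illustrate the trick for anyone unfamiliar: CSS-in-JS libraries derive the class name from the style rules themselves, so independently authored styles can never collide on a shared name. A minimal sketch in TypeScript (illustrative only, not Facebook's actual implementation):

    // Each call returns a class name derived from a hash of the rules,
    // so two teams' styles cannot collide on ".button" etc.
    const injected = new Set<string>();

    function css(rules: string): string {
      let h = 0;
      for (const c of rules) h = (h * 31 + c.charCodeAt(0)) >>> 0;
      const className = `css-${h.toString(36)}`;
      if (!injected.has(className)) {
        injected.add(className);
        const style = document.createElement("style");
        style.textContent = `.${className} { ${rules} }`;
        document.head.appendChild(style);
      }
      return className;
    }

    // A team gets back a generated name like "css-1x3k9" instead of
    // fighting hundreds of other engineers over a global ".button".
    const primaryButton = css("background: blue; color: white;");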
I think the sheer number of classes is the main issue. It's in the many, many millions.
Also, the app has sooooooo much more functionality than people realize. Plus, there is a lot of functionality that ships with the app but is behind a feature gate, meaning that it is turned off for the majority of users. There are lots of internal tools for adjusting feature gates on a per-user, per-region, per-whatever basis, in the name of user research as well as just safely deploying new features at scale.
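For anyone who hasn't worked with feature gating at this scale, here's a toy sketch of the idea in TypeScript (the gate name, region, and rollout logic are made up for illustration; this is not Facebook's actual system):

    interface GateContext { userId: string; region: string; }

    // Deterministically bucket a user into 0-99 so rollouts are sticky.
    function hashToPercent(id: string): number {
      let h = 0;
      for (const c of id) h = (h * 31 + c.charCodeAt(0)) >>> 0;
      return h % 100;
    }

    const gates: Record<string, (ctx: GateContext) => boolean> = {
      // Hypothetical feature: on for 1% of users, in one test region only.
      new_composer: (ctx) =>
        ctx.region === "NZ" && hashToPercent(ctx.userId) < 1,
    };

    function gateCheck(name: string, ctx: GateContext): boolean {
      return gates[name]?.(ctx) ?? false; // everything is off by default
    }

Note that all the gated-off code still ships in the binary either way, even though most users never see it, which is the point being made above about app size.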
Speaking as someone focused on code maintenance, this isn't a good thing. If the highest complexity of your product exists at a level where users can't interact with it, then you're spending a lot of dev time inefficiently.
But I don't believe you're correct. Honestly, feature-flag rollout, even among insanely specific cohorts, isn't actually a hard problem, and it's essentially been solved at this point. It isn't easy, not by any measure, but the components and how they interact are rather easy to comprehend. Facebook's main complexity (IMO) comes from trying to build essentially a full OS on top of the browser from which to serve a variety of integration apps into their platform, and, while I'm not a board member of Facebook or at all familiar with their earnings, that particular goal seems to have been, essentially, a dud.
Being able to play Farmville with your friends may have been a decent income source once upon a time, but it's very far from what their core competencies now are. Facebook now primarily uses API hooks into other native standalone apps for that particular class of data collection, but their pure web-based collection seems to be where they really get value. The fact that they can see precisely where users are going on the web is, IMO, their main value proposition at this point; the social-network stuff needs to exist to support that and ease the process of identifying users, but anything related to games seems utterly unnecessary, and that core platform they have could be vastly simplified while still delivering the same value to the company.
That all said, they've invested a whole lot into their existing platform so I can see why business would be very very hesitant to try anything that might rock the boat, if they can sustain their platform being fast and responsive they can minimize their corporate risk.
They aren't saying that the feature-gating code adds to app size, but rather that there's a lot of code behind feature gates (for tests/staged rollouts/locale-specific features/etc.) that most users won't see but which still adds to app size.
Fail fast and KISS are pretty celebrated virtues of sustainable project development. I understand that at the scale of Facebook you have issues with project management, but if there is a significant amount of code that is locked to specific cohorts of users, aren't you opening the door to unprofitable levels of complexity and long-running poor investments?
I'm sure a lot of developers on here try and minimize their use of integration branches in the day-to-day (they are necessary for some things but keep them short and sweet) and try and get in-progress features into master ASAP - that's largely due to the fact that maintaining multiple copies of the same basic logic can quickly become extremely difficult to manage.
Localization is a really big exception to this, but that's why, whenever possible, you'll see game companies limit localization to strings only - including logical statements in the realm of information to be localized can make security issues extremely fun to track down, along with causing frequent usability breaks in less-used localizations.
I don't know - whatever the reasons for it, and no matter the resources FB has - this stuff increases in cost exponentially, and if they do have a really fragmented codebase, it's likely that the majority of their labour goes into process definition and QA to make sure they don't break the Swahili-language version of the landing page for China when they change their "contact us" link.
Fail fast works on the web, where you can redeploy the app with the next page refresh.
It works poorly on mobile, where users are not keen to reinstall an app every few days, and some do not update for months and years, because of lack of space, scarce bandwidth, old hardware, or just neglect.
It doesn't just apply to the web though - it can even apply to OS design. On the web it's usually (ab)used to leverage users as the metric of whether something is failing, but in theory fail fast is just about learning that something isn't working through any means - whether that be user reports, automated tests or proofs of concept.
Additionally, at least where Facebook is concerned and IIRC, they actually do heavily utilize out-of-app data in their mobile app. There is a good deal of code, but a lot of the UI ends up being tweaked by data that's being served to the client.
I don't disagree with the sentiment - but I still think the app is poorly tuned if it's been allowed to accrue this much complexity in the relatively non-essential UX that we all interact with through the web. My point is that the complexity of FB's landing page is unmaintainable and misaligned with business goals - if we view those goals to be about data collection and not actually running a social network then the misalignment is actually worse.
Literally, Facebook reverse-engineered Android's VM to monkey-patch in a longer method table instead of reducing the method count. I have no idea why, but their massive amount of client code seems very important to them.
Part of it is FB’s company culture — generally the ethos has been to enable engineers, technical precedent be damned, and to never say no to a product idea. It is really impressive and a major part of why FB can move so fast despite being a very large corporation.
Yup. Browsers used to render all pages incrementally, so you could simply keep the HTTP connection open and add more HTML text to the page, and it would just get tacked onto the bottom. Then you had a CGI endpoint in a separate footer frame (sort of like an iframe, but arranged in a grid-like layout) with a "Read More..." form button for requesting more data. It worked really well and didn't even need to use JS.
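A rough sketch of the same idea for anyone who hasn't seen it (Node/TypeScript here purely for illustration; the original would have been a CGI script behind the web server):

    import { createServer } from "node:http";

    // Incremental rendering: keep the connection open and keep writing
    // HTML; the browser tacks each chunk onto the bottom of the page.
    createServer((req, res) => {
      res.writeHead(200, { "Content-Type": "text/html" });
      res.write("<html><body><h1>Feed</h1>");
      let count = 0;
      const timer = setInterval(() => {
        res.write(`<p>Item ${++count}</p>`); // appears immediately, no JS
        if (count === 5) {
          clearInterval(timer);
          res.end("</body></html>");
        }
      }, 1000);
    }).listen(8080);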
You can still do this. I wrote a chess game that checks arg 0 for "cgi" (otherwise it uses a VT interface) and renders a new board in the browser every time the other player makes a move. It works with absolutely no JavaScript.
> Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community.
> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
> When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
You know, maybe sass isn't appreciated on this site, but it's pretty pathetic to imagine someone could unironically imply any part of UI was a solved problem in the '90s and not face a challenge here. UI engineering is a discipline whose complexity most software engineers grossly underestimate.
Lol, you are very naive if you think all Facebook does is show posts and messages. You need to track the users, collect sensitive data, and all the other malicious stuff.
The problem they're trying to solve isn't "Writing apps in Swift is too hard". The problem is "Writing your apps twice is expensive and throttled by Apple's ecosystem". Most app purveyors will target iOS first because iOS users tend to be more affluent and willing to spend money. Google obviously doesn't like that for their own competitive reasons, but the rest of the market is also loath to give Apple so much power over the app market that they can charge monopolistic fees. And since Apple has no interest in making it easy on everyone to work with their competitors, dev tool makers are forced to use hacks like react-native to meet their needs.
I'm not visually impaired beyond garden-variety nearsightedness, but I have a strong appreciation for "boring" websites with a very utilitarian "header, subheader, body of text, etc." structure. They work well with text-only browsers, but are also trivial to make work with various text-to-speech tools, which I find occasionally useful; sometimes it's just easier for me to understand something if it's being read along to me while reading it, particularly if the font is small.
They also typically load faster because they don't need megabytes of JavaScript to "display" the content or offer pointless fading in/out of text and lazy-loading of images.
All of which means less CPU, fewer network requests, and less power used, which means a longer-lasting planet.
At some level I understand that it's kind of a form of artistic expression, and I understand and appreciate that. If people like making pretty and super-interactive websites because it's a way to make themselves happy, by all means they should do it. It's not radically different from someone getting into painting or sculpting, in my mind.
I think I mostly get annoyed with stuff like news websites or personal blogs having all this extra crap added to them. At the end of the day, I view these things as somewhat utilitarian; I don't go to a blog to be wowed by a million JavaScript effects, I go there to read text, and maybe some pictures to help elaborate on stuff.
> sometimes it's just easier for me to understand something if it's being read along to me while reading it
Doing exactly this right now. I use a bookmarklet to make anything I click start speaking, on any website. It even shows progress by fading out words as they're read. I've been doing this for at least 7 years, always using the Alex voice from macOS, at 1.3x. I used to have it work on PDFs as well, but that went away when Chrome's PDF viewer was changed from PDF.js.
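(The bookmarklet itself isn't reproduced here, but for the curious, the core of one can be surprisingly small using the standard Web Speech API; the word-by-word fading would take more work, via the utterance's "boundary" events.)

    // Click-to-speak sketch: speaks the text of whatever element you click.
    document.addEventListener("click", (event) => {
      const text = (event.target as HTMLElement).innerText?.trim();
      if (!text) return;
      const utterance = new SpeechSynthesisUtterance(text);
      utterance.rate = 1.3; // the 1.3x mentioned above
      speechSynthesis.cancel(); // stop any previous speech
      speechSynthesis.speak(utterance);
    });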
I was recently shopping for cross-platform desktop GUI toolkits, and discovered that there are really very few options that work well with screen readers. The only ones that seemed at all viable were Electron and JavaFX.
.NET MAUI also seems very promising, and I like that they're actively blogging about it[1], but it hasn't been officially released yet.
Anyway, it seemed to me that basically all the others either aren't accessible at all, or are only accessible on one or two of the target platforms, or have long-standing accessibility bugs that seem like they would render the UI unusable.
According to the information I was able to gather, it seemed that the answer to that question depends on whether you ask Qt, or a visually impaired user of Qt applications.
I decided the latter opinion was the one I cared more about.
I'm admittedly trying to play it very safe. I have zero proficiency with screen readers, so I'm not really able to independently verify any of this. All I know is that the GUI toolkit is one of the most expensive things to have to change later, so I want to be as close as I can possibly be to 100% sure that I won't end up in a situation where my projects have accessibility issues that I can't fix without changing the GUI toolkit.
Thank you very much for taking the time to research toolkit accessibility and being willing to make it a deciding factor. Which toolkit did you go with?
From what I've read, JavaFX apparently has its own accessibility problems; for instance, I vaguely remember reading that if you used Alt+Tab to switch to or from a JavaFX window, JavaFX would automatically activate the menu bar. I've never used a real JavaFX-based app though.
As much as some of us might dislike it, I think the safest choice is Electron, or just making it a web app if that's an option.
I chose Electron, largely because there seemed to be a lot more good documentation on how to maintain a decent level of accessibility.
Electron does have its downsides. On the upside, one nice thing about making screen reader support non-negotiable is that it took so many options off the table. So I was at no risk of analysis paralysis, and I don't have to worry about second-guessing my decision.
Neither have I; from what I can tell, the only place that JavaFX seems to have a foothold still is "intro to software engineering" classes in colleges.
The basics work; however, for example, screen readers do not announce context menus under Windows, treeviews are not providing all the information they could (https://bugreports.qt.io/browse/QTBUG-81874?jql=text%20~%20%...), etc. But in the end, if you need somewhat more advanced widgets (tables etc.) and you do not want to implement the GUI on each platform or use Electron or similar, you have no choice.
This as of now also applies to most Rust UI libs, hence why I don't use them. At least in that community each UI kit has an open "accessibility" issue, though, so hopefully improvements are coming.
Apple and Google have done all that work because of accessibility laws at the US Federal and state (particularly California) levels. They have done it because accessibility laws are Civil Rights laws.
They have done all that work to avoid problems in the lucrative market that is the US.
Microsoft has done similar work for the same reasons. Business reasons. Bottom line reasons.
While that is true, many of the devs also do care greatly about accessibility. As someone who has worked on some a11y stuff, I do it because I think it's the right thing to do, not because of the regulations. The regulations just set a baseline.
People often forget there are humans on the other end developing things, not just faceless corporations.
Companies don't make decisions just out of thin air. A lot of the accessibility features on display aren't just meeting the baseline; they go above and beyond. That's done because people legitimately care about this stuff, in addition to it being beneficial to the company.
I don’t see anything above or beyond. Just an effort to comply with civil rights law. I mean Steve Jobs was infamous for parking his Benz in handicap parking spaces just for convenience.
The companies are just as committed on principle as a real estate developer who puts in curb ramps to stay out of court.
That doesn’t mean the designer who drew the plans and the building inspector who approved them don’t care about the plight of the disabled. But the ramps are there because of the law not goodwill.
They're there for both reasons. Laws never compel fully moral, virtuous action. There's no law forcing me to be nice to my inlaws. The law only says I can't assault them or rob them.
It's the same here - Apple had some legal obligations, but they didn't need to make their phones so insanely accessible to blind people. The situation is kind of absurd: the software on iOS for blind people is so good that blind people prefer to use iPhones (which are pure touchscreen devices) over phones with keyboards. And they have since some of the very first iPhones, back when there was real competition.
Desktop computers from a few years ago didn't do any of this. Screen-reading software on Windows used to be 3rd-party software, and it sort of sucked. Apple could have made some APIs for that and forced someone else to make expensive, janky software for blind people. The law also has no problem forcing blind people to buy expensive electronic braille keyboards and things like that. (Which used to cost upwards of $3000.) Again, they could have done the same with the iPhone - forcing blind people to type with $3000 external Bluetooth braille keyboards. But they didn't do that. They built everything blind people need into the operating system. They made it work well.
Be cynical if you need to. I have no shortage of criticism for the way Apple handles the app store. But credit where it's due - Apple has gone above and beyond to make accessibility for blind people great on the iPhone. And I think accessibility on Android isn't too far behind. The software on modern phones is a massive enabler for blind people the world over. It didn't happen by accident, and it wasn't written to pass the minimum bar set by the legal department. Real people poured love into modern smartphone accessibility, and it shows. They deserve credit and respect for their work.
I'm not sure if that's actually fully settled, but the supreme court only recently rejected dismissal of a case on this (sending it back to lower courts), and DOJ has not specified if ADA applies to websites/etc. IANAL, but I believe this issue is still being contested.
There are some folks doing great work here, although I agree on the overall sentiment. Ryan Florence is one name that comes to mind: https://reach.tech
It has a pretty good accessibility story for its built-in widget sets (Material and Cupertino). However, if you want to create a new widget that does not depend on the built-ins but still provides a11y, you can wrap it using the Semantics[1][2] widget.
Having several friends on the Apple a11y team, I know they really do work tirelessly to make the accessibility experience amazing. A lot of the engineers on that team have visual, auditory, or other physical impairments as well, which is really cool to see, and as a result they're all really invested in making the product's accessibility awesome.
My personal favorite “Easter egg” in their accessibility utilities is the baroque VoiceOver descriptions for the built-in wallpapers. A current colleague of mine shared it last week as part of Global Accessibility Awareness Day - https://twitter.com/mattt/status/1395439320652148736?s=21
With that attitude, only the most elite and large companies would even consider it. It's not that you need someone with a handicap, it's simply that you need to consider them when you're building something.
Almost any product that is built without input from the people who are going to use it is going to be inherently worse. Imagine a team creating a camera from scratch who are not photographers, and who never included any photographers in the design process. It would be a functioning camera, I’m sure, but its usability for real-world use would almost certainly be worse than that of cameras developed by teams that include professional photographers.
The effect may be less for accessibility features, but it would still be there. I’d highly suggest anyone building out accessibility features to at least consult with disability advocates and other end users.
The problem is that the considerations are never adequate when they don't actually involve people who are directly affected.
I remember one time my mom told me about a meeting she attended hosted by an organization in our state called ‘The Council for the Blind’. I think it was about how and where to get your COVID-19 vaccines.
The presenter was a sighted man on the council. Apparently, although he could see his own notes, he had some trouble getting his PowerPoint presentation to display on the Zoom call. He spent 10 or 15 minutes fiddling with it before an attendee finally butted in and said ‘You know, most of us won't be able to see your slides at all even if you get them working. Why don't you just go ahead and post a link to them after the presentation?’. There are lapses of judgment that are possible even when you have every reason to, abstractly and externally, ‘consider’ the position of someone with a disability you don't have, but that just never occur when you have that disability.
I know it's a silly story, but that kind of stuff happens ALL THE TIME. It's also really common for websites or software to have accessibility features that are so ill-considered, they're practically unusable.
You're right that positions for, e.g., blind software engineers, could be hard to fill for small companies. But you don't have to involve disabled users only at that level to make sure that they're part of your development process. You could have disabled users as testers with much less difficulty, by reaching out to relevant organizations for (perhaps paid) volunteers.
That's the point. It's an Easter egg -- extravagantly detailed accessibility descriptions for UI elements that non-sighted users will rarely interact with.
If it’s something that makes someone understand and enjoy their chosen wallpaper better, is it really useless?
I guess it comes down to your personal interpretation of “usefulness”, but frankly I think you take for granted the amount of detail that you’re able to ascertain from being (presumably) fully sighted.
There’s no reason that someone with any visual disability can’t appreciate a scene with as much detail and intricacy as you can, if it’s adequately conveyed to them.
Words and grammar can be learned and expounded upon, but the description of a scene as “a tree on top of a hill” cannot, so why not err on the side of being overly descriptive for those that can understand, and leave the door open for those that can learn more?
There will come a day when Apple will lean in heavily on Accessibility, just like they've recently amped up Privacy in their ads.
I, for one, absolutely love this about Apple! Accessibility is a beautiful core value to strive for (Privacy and others too). But I particularly appreciate Accessibility.
Good designs/affordances such as gestural trackpads or mouse cursor support on iPads are all accessibility features, except they cover a major swath of humanity rather than those traditionally considered "less-abled".
Disclaimer: I work at Apple so I'm biased. More likely, my heroes at Apple such as Sue Booker are accessibility experts -- so I'm always fan-boying over these features!
I would just like to say that there isn't a company on the planet making consumer electronics that leans in harder than Apple does. As a quadriplegic, I can get an able-bodied monkey to take my new iShiny out of the box, and from then on I can do everything you can do on that device.
They really are world leaders at this stuff. I can use my right index finger and my mouth, and with just that I can use Apple products to run my own company and talk to people on HN (you know, the important stuff!).
---
edited to add: over the past few weeks, Apple and their accessibility approach have come up a few times on HN, and when I responded there were so many questions that I really, really wanted to answer. However, I've not had the energy to respond in a timely manner, and certainly not quickly enough while the article is still on the front page of HN. Basically because quadriplegia.
So I know that there are a lot of questions about how I use my computer, and that's largely because the people on HN are genuinely and wonderfully curious.
The question that seems to fascinate able-bodied engineers goes something like this: "Imagine coming home and finding somebody had removed every switch, button and lever from every device in your environment; what would you do?" That's the question I had to answer before I could get to where I am now: I couldn't get a job, so I had to start my own company as a result!
So my question is how best do people think I should share this information? Blog post, Twitter thread (eww) or messenger pigeon?
I would imagine a series of blog posts that you then turn into a book? I love your question about “...what would you do?” It reads like a writing prompt.
A podcast I listen to (maybe CodingBlocks.Net or ATP?) talked about a dev who uses a camera to track a shiny bit of tape over the bridge of their glasses as a mouse, and speech (with their own phonetic alphabet variation) for typing, and is able to create software at the same pace as an able-bodied person - I’d love to hear all of those stories.
You could design a website that was the equivalent of "imagine coming home and finding somebody had removed every switch, button and lever from every device in your environment; what would you do?"
i.e. where "able-bodied" visitors would have to figure out how to interact with the site without all their usual means.
If you designed it right and maybe even gamified it (in a good way that doesn't trivialize, that challenges you to learn), it might entice young and old to explore what it means to have to interact with the world without the benefit of X.
A series of blog posts could serve as an invaluable resource for us caring but ultimately able-bodied code monkeys to refer to. If you go ahead with those, post them here. :)
If you could find a CHI researcher to work with, it would make an amazing case study for a research paper, and that would give it some longevity as well as be a good basis for a more compact blog post.
Oh, they already did, circa 2004, and they never stopped.
They were the first mainstream company to release a screen reader, the first company ever to release a high-quality, free screen reader, the first to build one into the operating system, the first to make installing the OS accessible, the first to make touchscreens accessible for the blind, and that's not even all of it. The things we're seeing from Apple are just extraordinary, and beyond what anyone else provides. I've been a really happy iPhone user for years, and I recently got myself a Mac too. This technology literally changes our lives.
sorry, by "release a screen reader" I meant "release a screen reader that was actually usable, and that people wanted to use". Narrator wasn't usable for almost anything until Windows 8, and that's when it started supporting OS configuration. Before then, it was mostly used for recovery, when your screen reader crashed and you needed to do one or two really simple things.
They do mention every now and then how the accessibility features enable quite a lot of people to use their devices like everybody else. But you're right: they should advertise it much more. It is consistent with their user-centric approach to privacy.
It really seems to me that people like to be mad, get upset about marginal things that would only affect a small percentage of users (eg inability to side load apps on non-jailbroken iOS) and generally just focus on the negative. Maybe you could call it Outrage Addiction?
So it's refreshing to see something like this where a modern smartphone has such a positive (even life-changing) impact on someone's life.
This is also one of the things I find depressing about a significant chunk of Americans. Many will dismiss things like public transit as "people want to drive". Of course we then design cities and subsidize driving, so it becomes a vicious circle and any public transit becomes unviable (to retrofit).
But what about the people who can't drive? Or even can't afford to drive?
Curb cuts, in the USA, are a relatively recent addition and they benefit far more people than just wheelchair riders. Accessibility features can help everyone.
This is mostly a good thing. I say "mostly" because it encourages scooter riders to use the sidewalk, and pretty much anywhere I've been, scooter-share riders are a public menace.
One issue with these in NYC at least is the drainage just isn't designed for them. So anytime you get significant rain or snow melt you get pools of water that don't drain because the actual drains are elsewhere.
If only the US could adopt Dutch junction design [1].
And don't get me started on the cyclist and pedestrian safety nightmare that is allowing people to turn right at red lights.
Funny how sidewalks are usually mentioned in terms of mobile people. Every economically disadvantaged wheelchair-bound person I have ever seen has not used the sidewalk. They use the road. It's much less likely to have huge broken up chunks and obstructions, and it's less problematic to have to haul yourself up the incline or potentially crash down the decline.
I wish we could somehow force everyone to live as somebody else for a week.
> It's much less likely to have huge broken up chunks and obstructions, and it's less problematic to have to haul yourself up the incline or potentially crash down the decline.
It's also much more dangerous. Making the sidewalks useable for wheelchair users is much better for everyone than forcing them to go on the road.
It's also quite possibly not possible. I used to have a neighbor who used the road, and I don't know that they really had any other choice. There were plenty of people on the block who knew they had a neighbor who used a wheelchair, but that didn't prevent them from blocking the sidewalk with their cars, or setting up their lawn sprinklers to ensure that the sidewalk was really well watered, or allowing their bushes to grow across the sidewalk, or leaving their sidewalks unshoveled for days on end.
Government can do a lot of things, but I don't think it can force people to be conscientious.
Right. But that’s a problem that can be solved by law and enforcement. It’s not a law of nature. Just fine people who leave their cars where they don’t belong and otherwise obstruct public ways. That’s what I meant by “making them useable for wheelchairs”.
It also has the benefit of making them more useable for other people.
Those fines would need to be ruinous to justify the cost of collecting them, and you would still have people needing to block walkways for e.g. construction work.
"I wish we could somehow force everyone to live as somebody else for a week."
I often made this simple wish whenever I was lumbered with a non-technical manager who would promise things to their equally out-of-touch manager or a customer, e.g. "I've told them it will be done in 2 weeks. How long will it take?"
It'd reduce everyone's pain if we all lived with more empathy.
Or for whom driving is excessively inconvenient. I've always been a public transit user, but my feelings about it changed when I started having kids. We are fortunate enough to be able to afford a car, but parking in the city is expensive, especially in and around many places I might be taking my kids, and taxis and rideshare are simply not an option for us. (Officially our city has an exception carved out of the child car seat law for medallioned taxis, but there's no way I'm putting my toddler in the back of a car that's being driven by an aggressive taxi driver without a car seat.) Public transit, on the other hand, is perfect. So, even though we are physically able to drive and can afford to drive, it's still something of a lifeline.
I think your comment is solely calling out Americans' singular focus on cars as transit but I wanted to make a tangentially related point that Americans care deeply about accessibility. ADA has cemented accessibility into the core of public spaces. Driving is much easier to navigate in many ways than public transit for those with many mobility disabilities. And many American companies (many mentioned in this thread) strive to provide quality experiences to users with disabilities.
I remember being shocked the first time I went to Europe that if you were in a wheelchair you couldn't access 90% of buildings. Even new ones. Also good luck navigating their subways and train stations.
> the first time I went to Europe that if you were in a wheelchair you couldn't access 90% of buildings. Even new ones. Also good luck navigating their subways and train stations.
All I can say is `wat?`
But probably depends on which part of Europe, too. Most of the Northern Europe I've seen, you can access pretty much everything and every train and subway station is equipped with a wheelchair-accessible elevator.
Some parts of Europe may have old infrastructure that can't be easily retrofitted. London's subway has stations that are not wheelchair accessible, and accessibility is always marked on the map: https://content.tfl.gov.uk/standard-tube-map.pdf
The really weird thing is that a lot of folks simply don't have the option of public transport. Rural? Good luck, even during the day, because there is no city-to-city transportation. Medium-sized town? Maybe you have taxis, but honestly, they can't really be counted on to get you where you need to be.
I'm serious on the last one: I lived in one area that didn't allow you to order a cab beforehand, and waiting times could be 15 minutes to 2 hours. Good luck getting to and from work, and even more luck if you need to pick up children from child care before they close.
Buses can be a real mixed bag: one city I lived in didn't have buses available for folks working second shift (3-11) because they stopped running, nor did buses run on Sunday. One bar had a large van/paddywagon-style vehicle to take drunk patrons home.
> that would only affect a small percentage of users (eg inability to side load apps on non-jailbroken iOS)
The iOS app censorship issue is only a fringe issue because we're in peacetime. Censorship that can be deployed society-wide (such as the CCP requiring Apple to censor VPN apps, or the banning of the protest coordination apps in HK) in a click is an existential threat to a free society.
Apple's already maintained the e2e encryption backdoor in iMessage for the FBI[1] just upon request (not even legal compulsion); imagine if on "national security" grounds (or some other emergency circumstance as dictated by the US) they disabled Signal and every other e2e messenger, routing all iPhone-mediated communications into surveillance channels.
This is a good reminder to me that while sometimes it's a pain in the neck to make sure there are alt tags for every image, or make sure things are in text form rather than a pdf, that you can tab through a site, etc. - it makes it so everyone can use the internet more easily and that's important.
I work with restaurants a lot as a part of my job, and there's been a big push to make sure all the websites are ADA compliant; it's something all front end devs and digital marketers should keep in mind.
If anyone is aware of a good resource for how to write alt tags for photos I would be most appreciative. I never know how much detail is appropriate. Is it a photo of “pink flowers” or “pink chrysanthemums” or “pink chrysanthemums in a glass vase on a dining room table”? Or maybe something else entirely? I’m never sure how to balance descriptiveness vs brevity.
I generally tell people that alt text is one of the few places where accessibility can be as much art as it is science. As other commenters noted, context is important. Images will most likely be consumed along with their surrounding context. If the image is bringing nothing new, consider marking it as decorative.
But if it isn't, express the intent of the image along with its content. Let's say you're back in the office, describing a photo to someone sitting at a desk across from you. How would you describe it? Is the point of the image that it's art? Is it the structure and layout of the scene? Is the point just to identify what a chrysanthemum looks like?
I'm often reminded by a colleague that if you get too wordy with alt text, or it doesn't seem important or valid, a screen reader user can easily skip past the image and move on.
At the end of it all, the two most important bits seem to be: making any effort is better than not trying at all, do your best and you'll get better at it over time; and remember that even though a screen reader (or some other mechanical assistive technology) will parse the text you write, you're doing this for the person on the other side — they love to laugh, smile, have fun, learn, and understand things just like you do. :)
I have a perceptual visual disability (severe convergence insufficiency) that does not affect actual visual acuity. However, I do rely on screen readers for reading and I use (and have access to) several libraries for people with print-related disabilities.
The above guidelines are an initiative of Bookshare.org, which is the world’s largest digital library (which only people with print related disabilities can legally access).
As a blind person, I've been thinking a lot about this issue.
I think the best recommendation I can give is to try interacting with the resource as if the image wasn't there. Maybe remove it for a while. Put the information that you're missing in the alt description. If all the information from the image is understandable without seeing it, think decorative images in newspaper articles, set alt to "".
In particular, if the image is a screenshot of a terminal, code or other piece of text, you should put the whole text in the description. In that case, strongly consider omitting the image entirely. When charts are involved, often providing the data in a table next to the chart is the right way to go.
> I think the best recommendation I can give is to try interacting with the resource as if the image wasn't there. Maybe remove it for a while. Put the information that you're missing in the alt description.
This is really helpful, thanks! Maybe I’ve been so caught up in wondering whether my alt text would be burdensome to listen to on a screen reader that I’ve been missing the forest for the trees!
Some images of flowers behave like headings; some are "the content"; some are decoration. Identify the function and work from there.
By default images are "replaced inline elements" i.e. they're of the same kind as inline text. This is why `alt=""` is perfectly valid and correct for a lot of images! If you couldn't see the image, what would you want in that place, in the context of the text around it?
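To make that concrete, a small illustrative sketch (TSX; the file names and descriptions are made up, echoing the flower example from earlier in the thread):

    import React from "react";

    export function FlowerFigure() {
      return (
        <>
          {/* Decorative: empty alt tells screen readers to skip it */}
          <img src="divider.png" alt="" />
          {/* Informative: describe intent and content, not every pixel */}
          <img
            src="flowers.jpg"
            alt="Pink chrysanthemums in a glass vase on a dining room table"
          />
        </>
      );
    }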
Don't have a good resource of my own either, but I think I would prefer more descriptiveness. Reading the various options you've written gives me totally different ideas of the image in my mind so, especially if it's important to the website, more descriptiveness seems appropriate for higher accuracy.
I think that this would depend on whether or not the photo has a caption. Captions themselves should strive to answer the five Ws: who, what, when, where, and why.
> or make sure things are in text form rather than a pdf
I publish a digital magazine every month in PDF format (it's free to download). I thought PDF was OK for screen readers. We have been doing this for years now (more than 70 issues, 6 years), so it would be a bit too late to change to HTML now :(
Speaking of visually impaired people, please consider joining Be My Eyes [1], if you are a sighted person or if you have impaired vision. It is a fantastic idea and I get a lot out of using technology to help people "see" even though it happens so rarely.
I've had it installed for a few years and have only received two calls. On two other occasions, though, I was notified but someone answered before I did.
A couple years ago we were able to hire someone with a visual impairment to do some testing on our website. I consider myself to be very knowledgeable about accessibility, I use a screen reader to test, etc.
But watching someone use something I made, and struggle to complete the task because of a fairly straightforward disability they overcome every day in many ways, but they couldn’t get that thing I made to work? That’s an incredibly humbling experience. Never felt anything like that before. I’d highly recommend doing some testing with a real user if you ever have the opportunity.
The first time I taught a blind student, I was teaching C. I walked up to them to find they were clearly irritated, and said something like "Nothing works, I don't understand".
Looking at their computer, it appeared to be off. I asked, delicately, whether they were sure the computer was turned on. They laughed and turned their laptop's screen on, saying (as you say) they left it off to save battery life!
The problem turned out to be that they were using TextEdit, which, while a nice light (and highly accessible) editor, by default turns straight quotes (") into curly ones (“”). Unfortunately, Apple's text-to-speech "helpfully" just described all of these as speech quotes. This made it impossible to understand, from the compiler error, why the code wasn't working. A quick trip into settings and the student was off (they soon changed to TextMate, also very accessible).
I know you can turn a MacBook's brightness essentially off, just by bringing it to the lowest tick. On iOS there is a VoiceOver gesture to turn off the screen even while the phone is on, so there may also be a purpose-built version of that in VO for macOS.
Most PC laptops have a key in the function-key or media-control row for this. Look for one that resembles a monitor, or check your laptop's owner's manual.
I've a Microsoft Surface Book 2 and I can't turn off the backlight entirely with the brightness adjustment. I also can't just turn the display off and have the computer keep processing unless I use a third party tool.
I didn't catch if she said she was completely blind or not. She may be able to make out some shape or color distinctions and is using that as a navigational aid.
I have a friend who is blind but keeps their phone on max brightness for this reason. Very little comes through but they still find it helpful.
Funny you mention this. Most screen readers actually have that as a feature, and it's called screen curtain. As many things in this field, it was pioneered by Apple, but it's available almost everywhere these days.
This opens up really interesting opportunities; think cheating at school, for example.
Previously, other tricks have been used to achieve the same effect, including not having a display plugged in at all, configuring Windows to use a second display while none was actually plugged in, or using a fake HDMI dongle.
Interesting that she located the elements by tapping on the screen at certain positions, instead of using the left/right swipe _anywhere_ on the screen, which selects and reads the previous/next element.
Having seen a lot of these videos before (we've been working on making apps truly accessible), I never actually came across someone who didn't use the left/right swipe as a first plan of action, but hey ho, you always learn.
(I wonder if it depends on mental models, and whether you prefer building a 2D map of the screen to moving up and down in a 1-dimensional vector.)
I usually use left/right flicks for screens I haven't interacted with before, but I use taps to find elements I use often, particularly when they're easy to find. I sometimes use a mixture of both, i.e. tapping somewhere near the bottom of the screen to focus on the tab bar, then flicking left or right to find the right tab.
One thing I've thought about – how much harm did the Chrome team do by adding default outlines to focused elements? Although well intentioned, I think making it the default for all users resulted in pretty much every website adding "outline: 0" to their css: https://www.google.com/search?q=chrome+remove+blue+border, which ultimately harmed the ecosystem for keyboard only users.
Hopefully this will start to go away soon. I learned recently that Chrome 90 (released April 2021) replaced :focus with :focus-visible in the default UA style sheet, so the focus ring will only show when using the keyboard now.
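For anyone maintaining a site that already has outline: 0 sprinkled around, a sketch of the replacement pattern (the CSS is standard; it's injected via a TypeScript snippet here only so the example is self-contained):

    // Restore focus rings for keyboard users while keeping them hidden
    // for mouse clicks, mirroring Chrome 90's new default behavior.
    const focusStyle = document.createElement("style");
    focusStyle.textContent = `
      :focus:not(:focus-visible) { outline: none; }       /* mouse/touch */
      :focus-visible { outline: 2px solid currentColor; } /* keyboard */
    `;
    document.head.appendChild(focusStyle);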
I remember watching a blind man teach a group and he had in one earbud to his phone and was just using his thumb to flip back and forth quickly through his notes as he was speaking, and taking comments and questions. I was impressed at the multitasking. I don't think I could listen to two different things so well, or talk while also processing audio notes.
Humans can get good at most things pretty quickly. You have no skill at listening to two audio tracks at once because you’ve never tried. You’ve never tried because you have no need. If you were blind, or a forensic audio analyst, or a music producer, you’d get pretty good pretty quickly.
I've been talking to some visually impaired people here in Brazil, and despite being much more expensive due to taxes, the iPhone is the most used phone among visually impaired people here.
It’s extremely easy to learn, and takes a day of practice, at most. This is the place to learn how to do this as a sighted individual (type literary UEB [Unified English Braille] here): https://uebonline.org/
To start out, you use your normal QWERTY/QWERTZ physical keyboard instead of a virtual braille keyboard (screen braille keyboard) or a true Bluetooth braille keyboard [1][2] (also known as a Perkins keyboard or Perkins brailler) to learn this.
Seriously, if you do the lessons on UEB online with a physical keyboard you will learn this 100% solidly and will have absolutely zero learning curve with the iPhone/iOS braille keyboard or Android braille keyboard. The iOS/Android braille keyboards are aware of your fingers, their positions, and your hand orientation, so it translates pretty much perfectly when you transition your learning onto iOS/Android braille keyboards.
This is because your individual fingers are assigned a “braille dot” (excluding thumbs). UEB literary braille uses six dots per braille cell. However, some braille codes have eight dots per cell. As I said, all you have to do is learn UEB literary braille. You can learn it in a day by doing the practice lessons at the link I provided above.
LEFT HAND
Dot 1: (left index finger, “F” key on physical keyboard)
Dot 2: (left middle finger, “D” key on physical keyboard)
Dot 3: (left ring finger, “S” key on physical keyboard)
[If the braille code uses 8-dot cells]
Dot 7: (left pinky finger, “A” key on physical keyboard)
RIGHT HAND
Dot 4: (right index finger, “J” key on physical keyboard)
Dot 5: (right middle finger, “K” key on physical keyboard)
Dot 6: (right ring finger, “L” key on physical keyboard)
[If the braille code uses 8-dot cells]
Dot 8: (right pinky finger, “;” key on physical keyboard)
_________________
So a 6-dot braille cell using all 6 fingers would spatially “look” like this, linearly:
(bottom to top of cell) LEFT HAND SIDE|RIGHT HAND SIDE (top to bottom of cell)
dot 3-dot 2-dot 1|dot 4-dot 5-dot 6
The easiest way to remember this is:
3-2-1|4-5-6
Where dots 1 and 4 are the top two dots, and dots 3 and 6 are the bottom two dots if this is represented spatially as a Unicode braille cell.
If this were 8-dot braille, it would be
7-3-2-1|4-5-6-8
Where dots 1 and 4 are the top two dots and dots 7 and 8 are the bottom two dots.
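To make the mapping concrete, here is a minimal Swift sketch (my own illustration, not any OS's actual implementation) that turns a chord of simultaneously pressed keys into a braille cell. The key-to-dot table is exactly the finger assignment above; the Unicode standard encodes braille cells starting at U+2800, with dot n stored in bit n-1 of the code point offset.

```swift
// Key-to-dot table, mirroring the finger assignment listed above.
let keyToDot: [Character: Int] = [
    "f": 1, "d": 2, "s": 3, "a": 7,  // left hand: index, middle, ring, pinky
    "j": 4, "k": 5, "l": 6, ";": 8   // right hand: index, middle, ring, pinky
]

/// Builds the Unicode braille cell for a chord of simultaneously pressed keys.
func brailleCell(forChord keys: Set<Character>) -> Character? {
    var offset: UInt32 = 0
    for key in keys {
        guard let dot = keyToDot[key] else { return nil }  // not a braille key
        offset |= 1 << UInt32(dot - 1)                     // dot n -> bit n-1
    }
    guard let scalar = Unicode.Scalar(0x2800 + offset) else { return nil }
    return Character(scalar)
}

// Dots 1-2-4 (keys F, D and J pressed together) form the letter "f" in UEB:
print(brailleCell(forChord: ["f", "d", "j"])!)  // prints ⠋
```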
Anyways, once you practice the lessons on the UEB Online website, there is no learning curve whatsoever for the iOS/Android braille keyboards. For typing a space, you use your thumb.
You can also get a physical Bluetooth braille keyboard, if you're really into it. Many blind people who do not regularly read braille not only prefer to type on the virtual braille keyboard, but also like to use a physical braille keyboard with their mobile device for typing ABC letters.
I recently posted this video playlist of quick accessibility tips for websites (each tip is just 1 minute).
Many websites don't follow these best practices. However, you might be surprised by how simple and low-effort it is to incorporate these tips into any website.
I'm not visually impaired, but I have been doing some work at a local institute for visually impaired people and set up some phones for WiFi there. I have never tested these accessibility settings on smartphones myself, so I have no idea why, but every single phone I saw there was an Android phone. Now I'm curious why, since people here seem to see iPhones as the better choice.
Language could be one reason. Android supports vastly more languages. It's quite sad how far behind Apple is: if you happen to be from a "small" country with under 10 million people, forget it.
If it's academic, Android makes sense because it's easy to side-load experimental apps. iPhone appears to be better for consumers due to native solutions.
Have been discussing with a visually impaired friend how to rethink Accessibility. Over the last 20 years, I've seen him use Accessibility on Windows, then move over to the Mac, and finally the iPhone. The main problem is that VoiceOver is a speech+touch interface slapped onto a visual+touch interface. So, roughly 1/3rd the speed. Moreover, 3rd party developers need to label elements, which they rarely do.
I remember walking out of an Accessibility meetup at SalesForce, waiting for a ride, and seeing an attendee struggle with hailing an Uber. She was an experienced user but still struggled with her new iPhone. It had a larger screen, so all her muscle memory was foiled; the buttons had shifted.
So, my friend and I are looking into rethinking the interface starting from first principles, like Fitts' Law. My guess is maybe a 9X improvement for visually impaired users and 3X for folks with regular vision. Reserved the name touch.ai for that purpose.
> Moreover, 3rd party developers need to label elements, which they rarely do.
Seems like your approach would require 3rd party developers to build a completely new UI, so this doesn’t seem like a valid criticism of the current paradigm.
It's also the case that existing designs take Fitts' Law into account. Apple and Google are well aware of it.
That said, I do share the intuition that an information architecture built around cognitive efficiency could be a lot better than we currently have.
I'm skeptical you'll find it particularly easy to achieve - a lot of the complexity of current UI is incidental, but equally a lot is not.
> Seems like your approach would require 3rd party developers to build a completely new UI
Nope; no changes to existing UIs. Instead, a driver, which has its own deployment problems, since it requires convincing the hardware OEMs.
> It's also the case that existing designs take Fitts' Law into account. Apple and Google are well aware of it.
Hmmmm ... I don't think so. I remember Bruce Tognazzini wrote about how Apple messed up Fitts' Law in a version of OS X, where it was one pixel away from supporting an infinite target.
More recently, there was Apple's addition of "reachability", which is an incredible kludge.
Apple and Google may have usability research, but that is nothing compared to the deep research done at Xerox PARC. I recall seeing studies on the D* machines which measured the efficiency of several text editors, broken down by select, cut, paste, etc. Xerox first tested and measured alternatives before deciding on the best one.
At the first WWDC for the iPhone (2007?), I went to a design lab to review our first app. The designer suggested I put navigation at the top, out of reach. I asked him "What about Fitts' Law?" His reply was "Who's that?"
> I'm skeptical you'll find it particularly easy to achieve
I agree; this is incredibly hard to achieve. It involves integrating several moving parts, leveraging TPUs and crypto, and negotiating with OEMs like Apple. I believe the payoff is worth it.
> Nope; no changes to existing UIs. Instead, a driver, which has its own deployment problems, since it requires convincing the hardware OEMs.
I infer from this and your domain name that you plan on using ML to ‘read’ UIs and extract salient features into a canonical model, and then to transform this into your more efficient interaction paradigm.
Is that a fair read?
>> It's also the case that existing designs take Fitts' Law into account. Apple and Google are well aware of it.
> Hmmmm ... I don't think so. I remember Bruce Tognazzini wrote about how Apple messed up Fitts' Law in a version of OS X, where it was one pixel away from supporting an infinite target.
I’m curious if you remember the example. Also - Tog worked at Apple for 14 years, so they clearly did know about it at that time.
> More recently, there was Apple's addition of "reachability", which is an incredible kludge.
It's a kludge, but it has nothing to do with them not knowing Fitts' Law. It has a lot more to do with the iterative path, which started with a screen small enough to be reached with one hand as a constraint. Market demand forced them to relax this constraint, and they haven't caught up with the changes yet.
> Apple and Google may have usability research, but that is nothing compared to the deep research done at Xerox PARC. I recall seeing studies on the D* machines which measured the efficiency of several text editors, broken down by select, cut, paste, etc. Xerox first tested and measured alternatives before deciding on the best one.
Did you know that many of those folks went from PARC to Apple and continued their research?
> At the first WWDC for the iPhone (2007?), I went to a design lab to review our first app. The designer suggested I put navigation at the top, out of reach. I asked him "What about Fitts' Law?" His reply was "Who's that?"
I don’t doubt this, but Apple has a huge number of designers. A developer evangelist is quite different from someone reporting to Alan Dye reviewing fundamental changes.
> I infer from this and your domain name that you plan on using ML ...
Yep
> I’m curious if you remember the example. Also - Tog worked at Apple for 14 years
I couldn't find it, although he has another post on Fitts' Law. The phrase he used was "pulling defeat from the jaws of victory" - maybe it's on Archive.org.
> It's a kludge, but it has nothing to do with them not knowing Fitts' Law.
Yeah, my characterization is a bit unfair
> Did you know that many of those folks went from PARC to Apple and continued their research?
I did have a chance to meet Jef Raskin a few times. He was between Apple stints, working on the Canon Cat. He's the one who turned me on to Card, Moran & Newell (1983), and thus Fitts' Law.
I don't doubt there is a lot of thought that goes into refining the UI. But there's the quandary of breaking the current idiom, which limits change. For low-vision folks, however, there is less of a switching cost. The same holds for newer devices, like glasses.
> Do you have the core technology proven out yet?
'Tis incomplete. Some parts of the interaction model exist. Am a bit weak on the ML part. May license GPT-n. Looking to form a team. Just convinced my blind friend to join. He kinda pioneered AR in the '90s.
If you want to get level with the user in that video, turn on Screen Curtain on your iPhone and try to use it. Very, very humbling experience as an iOS dev.
I’m curious, is there a way for someone who is visually impaired to have the screen totally turned off and still use it like this?
Would make sense and save some battery.
Interestingly, my cousin, a visually impaired 25-year-old, never liked using the screen-off feature. Her argument was that if someone sees her typing on a phone whose screen is off, they would automatically assume that the poor girl doesn't know the phone is off and that someone should tell her :)
Privacy as well: a blind user cannot notice when someone is looking over their shoulder.
So yes, there is a built-in feature called "Screen Curtain" which turns the display off. By default, when VoiceOver is active, you toggle it with a three-finger triple-tap.
How much do the vocal visually impaired use keyboards like the braille keyboard shown, and how much do they use speech-to-text? A lot of people don't want to talk to their phone; is the same resistance there for someone whose phone is always talking to them?
> Different responses to different touch and gestures, oriented towards the blind
That's VoiceOver on iOS. It's a screen reader that's also available on the Mac, iPad, and Apple Watch. Android has a similar screenreader called TalkBack.
I work with visually impaired people on accessible apps, and the large majority of them prefer Apple's devices because they have more advanced accessibility features.
> When you use Camera, VoiceOver describes objects in the viewfinder. To take a photo or start, pause, or resume a video recording, double-tap the screen with two fingers.
They can also generate alt text for photos which do not have that information already. Here's a video of the person in the original article describing this feature:
It does! It goes beyond simply enumerating objects and can describe their properties or context as well -- for example, it'll describe a husky as "a black and white dog lying on a wooden floor", or a soft drink as "a transparent cup with brown liquid in it".
VoiceOver also works with another accessibility tool called Magnifier, allowing it to be used as a general "what am I looking at" tool.
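Apple's actual VoiceOver Recognition pipeline is not public, but a crude "what am I looking at" feature can be approximated with Vision's public image classification API. A speculative sketch, which yields flat labels ("dog, wood, floor") rather than the full sentences described above:

```swift
import CoreGraphics
import Vision

// Crude image description using public Vision APIs (not Apple's actual
// VoiceOver Recognition implementation).
func roughAltText(for image: CGImage, completion: @escaping (String) -> Void) {
    let request = VNClassifyImageRequest { request, _ in
        // Keep the few most confident labels as a stand-in for alt text.
        let labels = (request.results as? [VNClassificationObservation])?
            .filter { $0.confidence > 0.3 }
            .prefix(3)
            .map(\.identifier) ?? []
        completion(labels.isEmpty ? "No description available"
                                  : labels.joined(separator: ", "))
    }
    let handler = VNImageRequestHandler(cgImage: image)
    try? handler.perform([request])  // synchronous; run off the main thread
}
```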
I'm a big believer in assistive tech. It isn't particularly difficult to implement, and the rewards can be great.
Nowadays, the apps I'm developing support things like accessibility labels (VoiceOver), high-contrast mode, and scaling, in addition to the localization that I've always had.
I also suggest using tools like Sim Daltonism[0] to evaluate colorblind accessibility.
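For anyone who hasn't tried, those first items really are low effort in UIKit. A rough sketch (the button and label here are hypothetical; the properties are the real UIKit API):

```swift
import UIKit

// Hypothetical image-only button: without a label, VoiceOver falls back
// to something unhelpful like the asset name.
let photoButton = UIButton(type: .custom)
photoButton.setImage(UIImage(named: "camera-glyph"), for: .normal)
photoButton.accessibilityLabel = "Take photo"

// Dynamic Type: adopt the user's preferred text size and track changes.
let caption = UILabel()
caption.font = UIFont.preferredFont(forTextStyle: .body)
caption.adjustsFontForContentSizeCategory = true

// Increase Contrast: honor the system setting with stronger colors.
if UIAccessibility.isDarkerSystemColorsEnabled {
    caption.textColor = .label
}
```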
Wow very impressive! She could probably save a lot of battery too, there is no reason for that screen to be on. Sort of mind blowing when you think about it as a seeing person.
It's also a side effect of good abstractions: since they own the whole stack, they are able to iterate on their own APIs. I would even argue it helps debug those abstractions; you are basically swapping out the last layer of user-computer interaction.
> She interacts so smoothly and quickly with the keyboard. I wonder why she doesn’t use voice input though?
Presumably because interacting with the keyboard is smooth and quick.
A keyboard is also private and works in noisy environments. Some people (including me) would prefer not to speak when avoidable, just as some would prefer not to type when avoidable. It may be easier (or higher throughput) to use audio feedback from the computer while providing touch input than to have both feedback and input be audio.
Personally, while I've spent lots of time training my typing, the learning curve on speech-to-text is too high and the apparent reward too meager for me to go much beyond the bare minimum (I can cancel many accidental summonings of speech interfaces).
Yeah, I had the same knee-jerk question. I can see the use cases here for non-audio input. But for the most part, it would seem that audio input has reached a level of performance where it simply works 99% of the time with little training.
I just broke my screen a few days ago, and because of the lockdown here (I'm in a tiny city) everything was closed. So I was forced to use audio input as much as possible, and on iOS I was horrified that I couldn't simply say "Siri, close this modal" or "Siri, send a new WhatsApp text to Jane"... shocked.
It's surely amazing when you account for the fact that she's poking at a frickin' touchscreen with no visual closed-loop feedback. But as a UX mode it sucks. Why not just pair a physical Bluetooth gamepad with actual buttons?
That doesn't help with picking the right button, though. She's sort of coping with it by hovering her fingers on the intended portions of the screen but the way she's doing it seems highly unergonomic, and a physical device could also be more efficient.
On the flip side, I have a blind friend who is in the market for a new phone right now to replace her old Nokia with custom text-to-speech software. Half the buttons have fallen off. It's very difficult to find phones with physical buttons nowadays, and the specialist ones cost the earth.
As nice as touch screens are, remember that physical buttons are more helpful for blind people. She knows that text messages are 2 clicks down in the menu, calls are one click, contacts are 3 clicks, etc. rather than having to swipe around and find the app
Therein lies another issue with iPhones for the blind: you can't remove all the other gumpf you don't need. If there were just 3 big screen-filling buttons for phone, texts and contacts, that would be easier to use than smaller icons.
When talking with my colleagues about accessibility I like to propose an exercise: “turn off your monitor and now try to use your feature.” After getting some uncomfortable laughs or overall confusion, it presents a great opportunity to demonstrate things like screen readers and keyboard navigation. A lot of people don’t even realize just how many accessibility features are built into operating systems these days.
These days if you’re a Windows developer Microsoft has some great a11y tools like Accessibility Insights that will do a lot of automated testing for you as well.
While I don't rely on AT (even though I am a keyboard navigation enthusiast), it's an area I'm passionate about. I like to encourage others not to think of a11y as an afterthought.
Shameless plug: one of our engineers developed a vision-impaired OCR app that scans and reads text aloud in the user's favorite language and accent, with built-in TTS and a translation service. Basically, the app is voice driven and requires minimal interaction. All the user has to do is take a picture; the scan process starts immediately and TTS takes place after the scan.
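That flow can be approximated with public APIs. A hypothetical sketch (not the actual app's code) using Vision for the OCR pass and AVSpeechSynthesizer for the built-in TTS; the translation step is omitted:

```swift
import AVFoundation
import CoreGraphics
import Vision

let synthesizer = AVSpeechSynthesizer()  // must outlive the utterance

// Recognize the text in a captured photo and read it aloud.
func scanAndSpeak(_ photo: CGImage, languageCode: String = "en-US") {
    let request = VNRecognizeTextRequest { request, _ in
        // Take the best candidate for each detected line of text.
        let lines = (request.results as? [VNRecognizedTextObservation])?
            .compactMap { $0.topCandidates(1).first?.string } ?? []
        guard !lines.isEmpty else { return }

        let utterance = AVSpeechUtterance(string: lines.joined(separator: " "))
        utterance.voice = AVSpeechSynthesisVoice(language: languageCode)
        synthesizer.speak(utterance)
    }
    request.recognitionLevel = .accurate  // favor accuracy over speed
    try? VNImageRequestHandler(cgImage: photo).perform([request])
}
```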
This kind of thing is really inspiring to me. Are there any similar projects I can contribute to? I would love to work on software that has such a clear positive impact on someone's life, like in this video.
If there's one open source project that changed blind people's lives, that's it. It made screen readers go from about $1000 to practically free, at least if you're an individual, not a corporation.
Alternatively, learn the accessibility APIs for <platform>, and then start fixing bugs in open source apps. The meetings/messaging space could definitely use some work, for example. Accessibility-wise, stuff like Signal/Telegram is much worse than WhatsApp; same with Zoom vs. Jitsi.
I worked many months making a couple of bank apps accessible to visually impaired people. Hard work. Luckily I had a team that would test every version until they could themselves use the app correctly.
In the end I was so used to testing apps in this mode that I put a shortcut on my phone, so I could use it with headphones while the phone was in my pocket on the subway, or while driving so I don't take my eyes off the road.
Well, I have a related question; maybe someone here has an answer. How can a paralyzed person hang up the phone / end a call? They can initiate a call using voice, but there's no way to hang up.
So either the other end has to hang up, or there is a long timeout, like 10 minutes or something.
Apple devices support "Switch Control", which works with a family of devices that translate some signal the user is capable of providing (blinking, tongue clicks, etc.) into a button press. This can be used to select from a menu of options that the OS presents:
Nope. In Poland where I live, most sighted people use Android, but the proportion of iPhone users is much higher amongst the blind population, despite the enormously high unemployment and bad social security. Apple's stuff is just way better. Those who have to use Android often replace Talkback with a Chinese screen reader called Commentary. That program apparently sends god knows what god knows where, and a screen reader has access to almost everything, but that's the price you have to pay for blindness, I guess.
Not sure about Android, but ChromeOS' a11y features are embarrassingly poor, which surprised the hell out of me considering its primary audience is kids (some crazy-high percentage of Chromebook use must be in schools as student laptops, from what I've seen) and old people who have trouble with computers. And the settings menus are so terrible that it took me quite a while to convince myself they were as limited as they seemed and I wasn't just missing something (god knows my dad could never have found what they did have, on his own) but that part's just typical Google UI. Wish I'd gotten my dad to spring for an iPad w/ keyboard instead. Modern iOS may not be anywhere near as intuitive as in, say, iOS 6, but I could have configured it to be really good for him.
No, not really. If you put them side by side in a feature list, it seems pretty close; they have most of the same stuff. But if you actually learn and use both, the Android one is much worse.
Knowing people who work with others in accessibility, the discussion tends to be that Android and iOS are more or less on par... with Android having the edge in customization and iOS having the edge in consistency.
Imagine if she had the screen off, just holding a black glass slab... It sort of makes me feel like I am the one who is touch and hearing impaired, rather than her being visually impaired.
Could vibration be enhanced to create another dimension (or several) of haptic feedback? It could vary in pitch, length, strength, and Morse-like patterns to augment VoiceOver.
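On iOS, at least, the building blocks for this already exist in the public Core Haptics API. A speculative sketch of a Morse-like "dot dot dash" pattern that varies strength and length (the timing and intensity values are arbitrary illustration choices):

```swift
import CoreHaptics
import Foundation

// Play a "dot dot dash" haptic pattern; dashes are longer and stronger.
// A real app would keep one CHHapticEngine alive instead of recreating it.
func playMorseLikePattern() throws {
    let engine = try CHHapticEngine()
    try engine.start()

    var events: [CHHapticEvent] = []
    var time: TimeInterval = 0
    for isDash in [false, false, true] {
        let duration: TimeInterval = isDash ? 0.45 : 0.15
        events.append(CHHapticEvent(
            eventType: .hapticContinuous,
            parameters: [
                CHHapticEventParameter(parameterID: .hapticIntensity,
                                       value: isDash ? 1.0 : 0.6),
                CHHapticEventParameter(parameterID: .hapticSharpness,
                                       value: 0.8)
            ],
            relativeTime: time,
            duration: duration))
        time += duration + 0.2  // gap between pulses
    }

    let pattern = try CHHapticPattern(events: events, parameters: [])
    try engine.makePlayer(with: pattern).start(atTime: 0)
}
```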
It means the tweet went viral (9.5M views of the video alone), a licensing company noticed and contacted the OP offering to serve as a licensing middleman, and the OP accepted and publicised it.
Hey, quick question: can't iPhones for visually impaired users fix their brightness to ultra-low (beyond the standard lowest)? Since everything is being read out, it'd probably save a lot of battery and make the phone last longer.