Hacker News

I once worked in a design research lab for a famous company. There was a fairly senior, respected guy there who was determined to kill the keyboard as an input mechanism.

I was there for about a decade and every year he'd have some new take on how he'd take down the keyboard. I eventually heard every argument and strategy against the keyboard you can come up with - the QWERTY layout is over a century old, surely we can do better now. We have touchscreens/voice input/etc., surely we can do better now. Keyboards lead to RSI, surely we can come up with input mechanisms that don't cause RSI. If we design an input mechanism that works really well for children, then they'll grow up not wanting to use keyboards, and that's how we kill the keyboard. Etc etc.

Every time his team would come up with some wacky input demos that were certainly interesting from an academic HCI point of view, and were theoretically so much better than a keyboard on a key dimension or two... but when you actually used them, they sucked way more than a keyboard.

My takeaway from that as an interface designer is that you have to be descriptivist, not prescriptivist, when it comes to interfaces. If people are using something, it's usually not because they're idiots who don't know any better or who haven't seen the Truth, it's because it works for them.

I think the keyboard is here to stay, just as touchscreens are here to stay and yes, even voice input is here to stay. People do lots of different things with computers, it makes sense that we'd have all these different modalities to do these things. Pro video editors want keyboard shortcuts, not voice commands. Illustrators want to draw on touch screens with styluses, not a mouse. People rushing on their way to work with a kid in tow want to quickly dictate a voice message, not type.

The last thing I'll add is that it's also super important, when you're designing interfaces, to actually design prototypes people can try and use to do things. I've encountered way too many "interface designers" in my career who are actually video editors (whether they realize it or not). They'll come up with really slick demo videos that look super cool, but make no sense as an interface because "looking cool in video form" and "being a good interface to use" are just 2 completely different things. This is why all those scifi movies and video commercials should not be used as starting points for interface design.



My experience with people trying to replace a keyboard is that they forget about my use cases and then they're surprised that their solution won't work for me. For example:

1. I'm in a team video conference and while we are discussing what needs to be done, I'm taking notes of my thoughts on what others said.

2. I'm working as a cashier and the scanner sometimes fails to recognize the barcode so I need to manually select the correct product.

Now let's look at common replacements:

A. Voice interface? Can't work. I would NOT want to shout my private notes into a team video call. The entire point of me writing them down is that they are meant for me, not for everyone.

B. Touch screen? Can't work. I can type without looking at the keyboard because I can feel the keys. Blindly typing on a touch screen, on the other hand, provides no feedback to guide me. Also, I have waited for cashiers suffering through image-heavy touch interfaces often enough to know that it's easily 100x slower than a numpad.

C. Pencil? Drawing tablet? Works badly because the computer will need to use AI to interpret what I meant. If I put in some effort to improve my handwriting, this might become workable for call notes. For the cashier, the pen sounds like one more thing that'll get lost or stuck or dirty. (Some clients are clumsy, that's why cashiers sometimes have rubber covers on the numpad.)

I believe everyone who wants to "replace" the usual computer interface should look into military aircraft first. HOTAS: "hands on throttle-and-stick". That's what you need for people to react fast and reliably do the right thing. As many tactile buttons as you can reach without significantly moving your hands. And a keyboard already gets pretty close to that ideal...


Don't forget MFDs from military/aviation/marine interfaces. Buttons on the edges of the screen, and the interface has little boxes with a word (or abbreviation or icon) for what the button does just above each button on the screen. When the system mode changes, the boxes change their contents to match the new function of the buttons. So you get the flexible functions of a touch screen with the tactile feedback of buttons.

Some test equipment (oscilloscopes, spectrum analyzers, etc.) has the same thing.


TI calculators do the same thing.


>Voice interface? Can't work. I would NOT want to shout my private notes into a team video call. The entire point of me writing them down is that they are meant for me, not for everyone.

Heh, I had a weird nightmare about that. I was typing on my laptop at a cafe, and someone came up to me and said, "Neat, you're going real old-school. I like it!" [because everyone had moved to AI voice transcription]

I was like, "But that's not a complete replacement, right? There are those times when you don't want to bother the people around you, or broadcast what you're writing."

And then there was a big reveal that AI had mastered lip-reading, so even in those cases, people would put their lips up to a camera and mouth out what they wanted to write.


There are many times I really wished to use voice interface but in private. Some notes - both personal and professional - I feel I can voice better than type them out. Sometimes I can't type - it's actually a frequent occurrence when you have small kids. For all those scenarios, I wish for some kind of microphone/device that could read from subvocalizations or lip movement.

In a similar fashion, many times I dreamed about contact lenses with screens built-in, because there are many times I'd like to pull up a screen and read or write something, but I can't, because it would be disturbing to people around me, or because the content is not for their eyes.


>For all those scenarios, I wish for some kind of microphone/device that could read from subvocalizations or lip movement.

There's a similar issue with those automated phone interfaces that "helpfully" require you to speak what you want, because you have to restart every time a pet or child screams. In those cases, it's better to have "press 1 for <whatever>", but it would also be an improvement to have it only read from your lips, so you wouldn't have to worry about background noise.


> "As many tactile buttons as you can reach without significantly moving your hands. And a keyboard already gets pretty close to that ideal..."

The DataHand came even closer: https://en.wikipedia.org/wiki/DataHand

but I'm not sure that's good; moving your hands slightly to reach more keys, without looking, brings even more keys into reach. I can hit the control keys with the palms of my hands - and often do that with the palm of the knuckle under the pinky finger - and feel where they are by the gaps around them, similar with ESC and some of the F-keys, and backspace from its shape, etc. I don't know of a keyboard which is designed to maximise that effect, or how one would be.


https://store.azeron.eu/azeron-keypads does this in a bit different (better?) way



I'm 100% with you on this but I will admit there was one concept / short run project that actually looked like it was on the right track: The Optimus Maximus keyboard[1].

The keyboard itself was not good for a bunch of reasons, but the idea was gold. Individual, mechanical keys which could change their legends to suit the current context. You wouldn't have to memorize every possible layout before using it, and you could change the layout to suit whatever you're currently doing.

The closest equivalent I've seen would be the pre-mouse-era keyboards which could accept laminated paper legends for specific applications. The next closest, tho in the opposite direction, would be modern configurable keyboards with support for multiple layers.

1: https://www.artlebedev.com/optimus/maximus/


There's this great article on the Optimus Maximus, and how it directly led to the now-popular Stream Deck (by Elgato, not the Steam Deck by Valve):

https://www.theverge.com/c/features/24191410/elgato-stream-d...


The fundamental problem with a dynamic layout is that you need to look at it to know what the keys are. The one huge, underrated benefit of a static layout is that it's constant across the environments that you use it in, so it's (always to some degree, rarely perfectly) memorised. Qwerty doubly so, because so many people have it memorised. It avoids the problem with the Maximus that in order to take advantage of the dynamic layout, you really want to be able to see through your fingers. Your fingertips by default block the information you need.

I can see the Maximus being useful for switching between and learning new layouts - so if you want to give colemak a try you can, without needing to swap all your keycaps (even if that's possible on your keyboard), or swapping to blanks and forcing yourself to learn everything by heart. But I think the reason you don't see this idea repeated much is that it's self-defeating.


Do you have to be looking at it? My keyboard has blank keys. I used to use vim, which had a modal input scheme. My music keyboard has modes where different keys play different instruments.

I agree that having the legends is good for affordances when learning. But they oddly hurt training. Specifically, they make it harder to remove the visual from the feedback loop. When training typing a long time ago, you wouldn't even look at the screen till you typed it all.


That's exactly the opposite scenario though: the reason you can get away with having blank keys on a keyboard or a modal input in vim is that the inputs never change, you can commit them to muscle memory and then never consciously think about it. That all gets blown away if the interface is dynamic.


But they do change? Depending on the mode of my editor, they do different things.

There is some stability, but not absolute.

Similarly, if I play a game using my keyboard, the keys that control my character won't match what is displayed on the keyboard.

Now, constant change would be bad. But I'm assuming that isn't what is being proposed?


It is, in the sense that, if there's a way to display a dynamic label on individual keys, some UI/UX designers will make use of that to create interfaces that can't be learned, as they change depending on tasks, environments, and from version to version.

Consider that touchscreen is very much this, and ever since phones got them, you can no longer do anything without looking at the screen. Interfaces are not stable enough to learn (and not reliable enough to operate without looking).


Yeah, on that I will agree it would be a terrible idea. The Stream Deck should be a good example, though, in that I'm assuming most people do not have those buttons change much?

Amusingly, I'm now remembering the very old video game system where the different games had cards you slid into the controller to indicate what the different buttons did. Seems like that is the general ideal.


The whole point of the Maximus is that it had little displays on each keycap.


> you really want to be able to see through your fingers

As when graphic artists draw on, and interact with, a tablet, while watching a screen, rather than using a tablet display. Similarly, while I enjoy the feel of a thinkpad keyboard, I do wish it did whole-surface multitouch and stylus and richly dynamic UI as well. So I tried kludging a flop-up overhead keyboard cam, and displaying it raw-ish as a faint screen overlay, depth segregated by offsetting it slightly above the shutter-glasses-3D screen. In limited use (panel didn't like being flickered), I recall it as... sort of tolerable. Punted next steps included seeing if head-motion perspective 3D was sufficient for segregation, and using the optical tracking to synthesize a less annoying overlay. (A ping-pong ball segment lets a stylus glide over keys.)


The flux keyboard seems to be a modern attempt at the same concept. They're taking pre-orders, I don't know if they've shipped any yet or how close they are to shipping.

https://fluxkeyboard.com/



I may have worked for that company, but I came away with a different take.

People are User Interface bigots!

People get used to something and that's all they want. The amazing thing Apple was able to do was get people to use the mouse, then the scrollwheel, and then the touchscreen. Usually, that doesn't mean that you get rid of an interface that already exists, but when you create a new device you can rethink the interface. I used the scroll wheel for the iPod before it came out and it was not intuitive, but the ads showed how it worked, and once you used it 20-50x it just seemed right... and everything else was wrong! People would tell me how intuitive it was, and I would laugh, because without the ads and other people using it, it was not at all.

Now we're in a weird space, because an entire generation is growing up with swipe interfaces (and a bit of game controller), and that's going to shape their UI attitudes for another generation. I think the keyboard will have a large space, but with LLM prediction, maybe not as much as we've come to expect.

I could go on about Fitts testing and cognitive load and the performance of various interfaces, but frankly people ignore it.


Strangely, Apple sucks at mice. A multi-button mouse with a scroll wheel is way better than any Apple mouse I've used (especially the round one).

That said, the touchpad on some of their laptops is pretty good when you can't carry a mouse, but nowhere near a good mouse.

(I have owned all their mice, all their trackpads, etc)

Their keyboards have gone downhill too. I like the light feel of current keyboards, but losing the sculpted keys that center and cushion the fingers, and the key arrangement shaped to the hands, has really traded function for form.

all the people who knew these kinds of truths have probably retired. sigh.


The multiple-button mouse predates the one-button Apple mouse by 2 decades.

The one-button mouse paired to a GUI was an innovative solution: Xerox couldn't find a way to make a GUI work with one button only, as per their 1983 article on designing the Alto UI. They tried, did a lot of HMI research, but were trapped in a local maximum in terms of GUI.

Jef Raskin and others from PARC who moved to Apple (Tesler, if I recall well) had seen how three buttons brought confusion even amongst the people who were themselves designing the UI!

So Raskin insisted that with one button, every user would always click the right button every time. He invented drag diagonally to select items, and all the other small interactions we still use. Atkinson then created the ruffling drop down menus, a perfect fit for a one-button mouse.

They designed all the interface elements you know today around and for the one-button mouse. That’s why you can still use a PC or Mac without using the ‘context’ command attached to the secondary button.


I think that's all purist nonsense. It's like Tesla removing the turn signal and drive selector stalks from their cars, or people that use super-minimalist keyboards.

Sometimes dedicated buttons for certain functions are GOOD.

People playing FPS games use two buttons to great effect. Maybe one button for aim, another for fire. People with MMOs can use as many buttons as the mouse allows. Creative types in tune with their environment can assign buttons to frequently used functions and flow through their tasks.

Yes, there are ways to double up functions and use less hardware. In macOS you can use Control + the single mouse button for a context menu.

And I understand that poorly designed products use buttons willy-nilly and create a mess. Many remote controls are rows and columns of identical dedicated buttons that are lazy designs.

But why extra-minimal? I think it is a kind of designer-induced technical poverty.


Yeah, the Alto had a mouse. The Chipmunk had a scrollwheel. LG Prada phone had a touchscreen. Few remember them though.


> People get used to something and that's all they want.

It's more than "getting used to." Learning to type (or to edit text fast using a mouse) is a non-trivial investment of time and energy. I don't think wanting to leverage hard-earned skills is bigotry, seems more like pragmatism to me. Unless the "new way" has obvious advantages (and is not handicapped by suboptimal implementation) the switching cost will seem too high.


It's not just that people don't want to waste an investment into learning something; that investment can actually enable you way more than a more easily accessible interaction method, and you stick to it because it's _better_.

Once you've learned to use the keyboard properly, it's simply faster for many applications. Having buttons at your fingertips beats pointing a cursor and clicking at a succession of targets. For example, I can jump through contexts in a tiling window manager and manipulate windows and their contents much faster with a keyboard than by wading through a stack of windows and clicking things with a mouse.

It all depends on what you're interacting with, and how often. I mostly have to deal with text, and do not need to manipulate arbitrary 2d or 3d image data.

But suggesting that I am simply too set in my ways to ditch the keyboard in favor of poking things with a pointy thing or talking into a box is just too reductive.


I add to your point and to parent's one.

I do use touchscreens when I've got one on a laptop or on a tablet with keyboard and desktop apps. The reason is precisely what you wrote: pushing buttons in front of us at close reach is faster than reaching for a mouse, aiming and clicking. When the screen is at 50 cm or less from our belly it's not hard to raise a hand and use it.

That also builds on a lifelong investment in pushing buttons, physical and on-screen with mice. Using the Tab key to navigate a UI, or shortcuts, or hjkl is something that only a few people (comparatively) did, so it can't become mainstream.

The last time I truly learned something new was swyping on my phone keyboard, some 15 years ago. It's extremely niche. I'm the only person doing it among everybody I know. I invested some time but the reward is great, especially when holding the phone with one hand or when there are many vibrations and the screen moves a little. A swiping finger never loses contact. And it's faster for me than two fingers tapping.

On the other side, that's another way to cement the qwerty layout because I swype on it and I would have to adjust to a different layout, so why bother?

Finally, voice. I realized that I used to dictate text to my keyboard in the early '10s, then I stopped. It worked well even back then, without all the new technology. I don't remember why I stopped, but if I did, it means I didn't lose anything. I wouldn't start again now because I think there are very few keyboards with only local speech-to-text.


> Unless the "new way" has obvious advantages

I agree with this. The cases are rare. Still, there are cases like the current sad state of motion control in video game consoles, where I have to agree with the opposite. Pretty much everyone who's put in the time to play with motion controls outperforms those who don't, and can play to satisfaction even without aim assist (which is relentlessly ubiquitous, for those unaware). But the tech started out kinda ass, and the Xbox still doesn't have a built-in gyroscope, so adoption is artificially stunted. The result? The masses still call it "waggle" with disdain.


The scroll wheel is a step up over both the d-pad and the touch screen. I also had a Creative Zen which had a scroll lane, and it was great too. Why? Because interaction was a function of motion and it had great feedback. Same with Apple's touchpad. Yep, you still have to learn it, but it was something done in a few minutes and fairly visual.

There's a reason a lot of actually important interfaces still have a lot of buttons, knobs, lights, levers, and other human-shaped controls. Because they rely on more than visuals to convey information.


> it's usually not because they're idiots who don't know any better or who haven't seen the Truth, it's because it works for them.

The reason we have touch screen phones today is exactly because Apple dared to challenge that assertion. We should not assume that what is out there now is the end goal. Users don’t have a choice; they can only buy and use what’s available to them in stores. The second touch screen phones were available, the entire market shifted in a short period, but the mantra at the time was, just like you have now, “physical keyboards are the only way”. Who knows what could come from people who think outside the box in the future.


I was recommending a laptop to someone and the only criteria he had were a number pad and a big screen, because he mostly uses Excel. I think input method is fairly context-sensitive. Touch is the most versatile one, as it acts directly on the output, but I still prefer a joystick for playing games, a midi keyboard for creating music, a stylus for drawing, and voice when I'm driving or for simple tasks. Even a simple remote is better than mouse+keyboard for an entertainment center (as long as you're not entering text). We need to bring back the human aspect of interfaces instead of the marketing one (branding and aesthetics).


> Even a simple remote is better than mouse+keyboard for an entertainment center

where are you going to find a simple remote anymore?

The only simple things are the giant dedicated keys marketing gives to partners (like the YouTube or Netflix buttons)

I want a skip forward button!


The Apple TV remote has its problems, but at least it strives to be simple. Magically it also controls my amplifier and projector (I don’t know how; HDMI signals?) so I don’t need to touch any other remotes on a daily basis.


the Apple TV itself can be controlled over HDMI-CEC with the TV's remote, if the TV supports the mappings


probably HDMI-CEC


Before anyone bothers reinventing the keyboard, I would rather that it were made practical to easily and reliably type accented and other characters on a UK keyboard in all applications. I use English and Norwegian regularly, with the occasional French, German, or Swedish word. I have been unable to find a simple method of configuring Linux Mint to support these other than by switching layouts every time I need an ø or an e-acute, etc.

I did once get the compose key to work but the settings didn't survive an upgrade and I have been unable to get them to work again in Firefox.


Use character composition. You then type those characters by pressing the compose key (I've set it to Caps Lock) and then a sequence of characters. Much easier than switching keyboard layouts, and you can type other unusual characters like °, µ, €

ø = Compose -> / -> o

é = Compose -> ' -> e

° = Compose -> o -> o

µ = Compose -> m -> u

€ = Compose -> = -> e

https://en.wikipedia.org/wiki/Compose_key#Common_compose_com...

https://cgit.freedesktop.org/xorg/lib/libX11/plain/nls/en_US...
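For reference, the Compose key itself has to be mapped to something first. On X11, a common way to get the Caps Lock setup mentioned above is an xkb option; a sketch, assuming `setxkbmap` is available (desktop environments may override this setting):

```shell
# Map Caps Lock to act as the Compose key for the current X session
setxkbmap -option compose:caps

# After that, e.g.:  Compose  /  o   produces  ø
```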


The Compose approach is extremely handy if you need to type several languages (e.g.: Spanish, German and Pinyin).

I wrote a short article on it a while ago: https://whynothugo.nl/journal/2024/07/12/typing-non-english-...

I also keep a handy alias to quickly find how to write new symbols:

    alias compose='fzf < /usr/share/X11/locale/en_US.UTF-8/Compose'


takes a bit of time but you end up being fast enough without changing keyboard layout, pretty great


The compose key defined in Linux Mint's own keyboard settings doesn't work in Firefox.


macOS developers have solved this problem pretty neatly:

https://support.apple.com/en-qa/guide/mac-help/mh27474/mac


This is cool!:-)

How come this is not the first “tip” on a fresh Mac?


It's very useful but it's sloooow


This is cool!

But I thought you were going to recommend pressing "fn" to switch layouts (I believe you can use either fn or ctrl+space on macOS).

I use it to switch between German (for chat/documentation) and English (for coding), and it's quite instant and second nature to me.


Is there a similar trick for non-letter characters?


Yes, for some of them, but not all.

I've not been able to find a convenient online image showing the characters you get from holding down alt while typing; it may vary by layout, but for me this lets me type:

Number row: ¡€#¢∞§¶•ªº–≠ with shift: ⁄™‹›fifl‡°·‚—±

First row: œ∑´®†¥¨^øπ“‘ with shift: Œ„‰ÂÊÁËÈØ∏”’

Home row: åß∂ƒ©˙∆˚¬…æ« with shift: ÅÍÎÏÌÓÔÒÚÆ»

Bottom row: `Ω≈ç√∫~µ≤≥÷ with shift: ŸÛÙÇ◊ıˆ˜¯˘¿

But of those, I only remember €, # (both printed on the key!), ∞, ƒ, ™, π/∏ (thanks to growing up with MacOS classic — Marathon Infinity for ∞, ƒ for folders, ™/π/∏ no idea why), and –/— (en-dash/em-dash; not sure why I learned them, but they were one surprise source of compile-time errors around 2010 because they look exactly like - in a fixed-width font).


If you ever used MPW shell, a lot of those characters were part of the syntax of commands and the regular expression parser, so it was common to learn to compose ∫, ®, ∂, etc. The debugger TMON also used them, so they just became second nature, like !@#.


Neat, did not know that. At the time MPW shell was used, it was a little bit too advanced for me — I was only as far as working my way through C For Dummies (or something like that) with a limited student edition of CodeWarrior* around the time REALbasic came out.

* Possibly bronze edition? Whatever it was, it was 68k only.


`~/.XCompose` is your friend.

I frequently input International Phonetic Alphabet glyphs, some polytonic Greek, some Spanish and some Old English. Nothing is more than three key-presses away after an AltGr.
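For anyone curious, a minimal `~/.XCompose` sketch; the `include` line pulls in the system defaults, and the two custom bindings are purely illustrative examples (not the commenter's actual sequences):

```
# Keep the stock compose sequences, then add custom ones
include "%L"

# Illustrative additions: schwa for IPA, thorn for Old English
<Multi_key> <e> <e>   : "ə"   U0259
<Multi_key> <t> <h>   : "þ"   thorn
```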


I'll look into that. The compose key defined in Linux Mint's own keyboard settings doesn't work in Firefox.


Thanks for sharing the keyboard story!

I agree that keyboards can be improved, but I think gradual changes—like making them split and wireless—are a better approach. I use a split keyboard myself and can comfortably do development with just 34–36 keys.

If the interface changes too much in a short time, it can become quite a hassle.


My personal prediction is that nothing will replace the keyboard except direct brain-to-computer interfaces. The keyboard is an incredible tool that people take for granted.


Windows Phone (or was it Windows Mobile?) had an excellent keyboard with caron/accent keys. So e.g. if I type `C` and then the caron key, it will replace the `C` with `Č`. I was looking for an Android keyboard with the same functionality but didn't find one.


Paraphrasing UX expert Johnny Lee:

  UX = P_Success*Benefit - P_Failure*Cost
...and yet with every new generation of tech, it's never surprising how the hype cycle results in a brazen dismissal of the latter half of this fundamental relationship.
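To make the trade-off concrete, here's a tiny sketch of that expected-value relation; the probabilities and weights are made-up illustrative numbers, not measurements:

```python
def ux_score(p_success: float, benefit: float, cost: float) -> float:
    # Expected payoff of an interaction: reward when it succeeds,
    # penalty when it fails (p_failure = 1 - p_success).
    return p_success * benefit - (1 - p_success) * cost

# Hypothetical numbers: voice promises a bigger payoff per success,
# but fails more often and each failure is costly (re-dictation),
# so the slower-but-reliable keyboard can still come out ahead.
keyboard = ux_score(0.99, benefit=1.0, cost=0.5)  # ~0.985
voice = ux_score(0.70, benefit=2.0, cost=3.0)     # ~0.5
```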



