Hacker News | QuiEgo's comments

My dream is having an “app” on iPadOS that switches out userland from iPadOS to macOS when launched. Let them be two silos, containers, VMs, whatever.

Only allow “Mac mode” if you have a keyboard and monitor attached. Hell, automatically “sleep” it if you undock. Make it unapologetically keyboard-and-mouse first.

One UI for keyboard/mouse. A second UI for touch. One device that can do it all. That’s the dream.

I feel like we’ve had a few ham-fisted attempts at this over the years, and Apple could actually pull it off. I get that it probably won’t happen, though.


Touch and mouse are two very distinct forms of input that need to be kept separate. Every convertible Windows laptop I have ever used has convinced me of that.

Mouse interfaces can be incredibly information dense because mice are both incredibly economical from a space and motion standpoint, and also somehow incredibly precise. You can flick your wrist to select any target the size of a grain of rice on a 32" screen. Touch interfaces require larger targets because fingertips are larger than a cursor point, and also require smaller screens because your arm now has to move the entire length of the screen, which is slow and tiring.

Where touchscreens excel is tactile experiences, things that mice cannot replicate. Multi-touch, pressure sensitivity, pen angle. Sweeping motions are difficult to control with a mouse. Manipulating multiple analog controls is nigh-impossible with a mouse.

When an app tries to accommodate both input styles, it inevitably ends up catering to one style or the other, unless two separate interfaces are built. And because a touchscreen laptop can be touched or have the mouse moved at any given time, it's not really possible to switch between the two input styles seamlessly.

I would really enjoy having a device that is capable of both, since the iPad has a gorgeous screen, a great form factor, and a lot of killer uses. But it can't cannibalize mouse interfaces, or we wind up with the direction that macOS is going.

There is nothing wrong with having a keyboard connected to a touch device per se, but the gross arm motion required to move between the touchscreen and the keyboard, and the awkward angle the keyboard puts the touchscreen at sort of nukes the usefulness of the touchscreen. And again, jumping in text is the sort of small target action that mice excel at.


Touch and mouse are complementary inputs that both deserve to be included. Working on a Windows laptop with touch and an iPad with a Magic Keyboard has convinced me of that.

Don’t make the mistake of thinking that having touch means you only use touch. Same for a mouse/trackpad pointer. Each has strengths and weaknesses and is better at some tasks than others. The pointer is good for clicking on small UI elements or doing small movements. It suffers with larger movements across the screen. Touch is good for scrolling, zooming, tapping buttons, tabs, and sometimes links. It’s good for jumping around the screen and moving things.

The keyboard is a third input/control interface and can be even faster and more precise than the mouse pointer. When the mouse first came on the scene, people derided it as less efficient than a keyboard and complained that you had to move your fingers away from the keyboard to use one. They swore they would never use one.

Where these work best is in a mix of input modes, using different ones for different scenarios. Having a mix of broad and precise inputs means you don’t need to tailor the whole interface for just precision or just broad strokes. The interface can be designed to accommodate the presentation of information and leave the choice of inputs up to the user. A side benefit of having different input modes is that your hands move in different ways for each, so you are less subject to repetitive stress from doing the same hand motion for everything.


> Mouse interfaces can be incredibly information dense because mice are both incredibly economic from a space and motion standpoint, and also somehow incredibly precise. ...

There's exactly one feature of touch interfaces that can be incredibly input-information dense, easily rivaling the mouse, and that's swiping gestures with 1-to-1 fluid animation for feedback. Usually seen with pie menus and the like. Drag and drop, the mouse equivalent, is extremely clunky - and mouse gestures that don't even involve clicking are even more so.


The Surface Pro argues otherwise. Lightroom Classic on the Pro is largely best driven from the keyboard, but there are certain workflows where using the touchscreen or a stylus is much better than a touchpad. The fact that it's limited doesn't mean it isn't a good idea.

> Every convertible Windows laptop I have ever used has convinced me of that.

This is a very strange conclusion considering everything is a webpage/webapp nowadays, designed to be operated by big fat fingers.

/s but...


I want the same mode, but on iOS! Imagine carrying nothing but the phone in your pocket, sitting down at your desk, plugging your phone into the monitor, which has your keyboard and mouse docked, and you have a full development environment.

Partially there on Android Pixels with "Linux Terminal". With the rumored convergence of ChromeOS and Android, it should be possible to have a desktop ChromeOS pKVM VM with accelerated vGPU graphics on Android mobile devices that have enough RAM.

You can sort of do that but you’re VNCing into a remote device.

iPads have M-series chips in them that support hardware virtualisation, but Apple goes out of its way to disable the hypervisor in its iOS/iPadOS builds[1]. All they have to do is stop doing that and allow apps to make use of the virtualisation APIs. The UTM hypervisor already exists in the App Store, so Apple is clearly not against the principle of it. As soon as that happens, running macOS (or Linux) becomes elementary.

[1] https://x.com/utmapp/status/1708907045314035986
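
For a sense of how thin the missing piece is, here is a minimal sketch against the Hypervisor.framework C API as it exists on Apple Silicon Macs. The entitlement name and build steps are my assumptions, not from the linked post; on iPadOS the same call can't succeed today because, per the tweet above, the hypervisor is disabled in those kernel builds.

  // Assumed build (macOS host): clang probe.c -framework Hypervisor -o probe,
  // then codesign with the com.apple.security.hypervisor entitlement.
  #include <Hypervisor/Hypervisor.h>
  #include <stdio.h>

  int main(void) {
      // NULL config requests a default VM on Apple Silicon.
      hv_return_t ret = hv_vm_create(NULL);
      if (ret == HV_SUCCESS) {
          printf("hypervisor available\n");
          hv_vm_destroy();
      } else {
          printf("hv_vm_create failed: 0x%x\n", (unsigned)ret);
      }
      return 0;
  }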


When Google ships unified ChromeOS+Android on pKVM on ex-Apple Qualcomm laptops from Dell/HP/Lenovo, Apple executives will have a market-driven reason to make this change. Some iPad improvements appeared after Microsoft shipped Surface 2-in-1.

The closest thing I’ve seen to this is the Steam Deck, with desktop mode and Steam mode.

Someone did this by hacking an iPad with a "MacBook" keyboard - https://www.macstories.net/stories/macpad-how-i-created-the-...

(this is how android has always worked)

No? Android has the same interface regardless, it just scales a little with a bigger screen. Samsung did separate silos with Dex, but that's their own thing.

Maybe they only ever get Samsung devices. Dex is almost a decade old.

A desktop mode was recently added for base Android tho. And you could always mod your Android device to open termux when you connected an external monitor, that sort of thing.


Really? I feel like android doesn't change at all with a keyboard, and doesn't support half the keyboard shortcuts you can use on iPad.

I like what the iPad is and it just doesn’t make sense to have a keyboard and mouse with it. Let’s leave the tablets as is and use laptops for serious typing.

People write out books on their phones. There's no need to be so rigid in the distinction between the types of device.

But I do agree with the original point that everyone has failed to make a unified interface for both modes and a distinct switch would be better until they can converge from real world learned lessons.

Apple will never make a product like that though.


They will if/when Google does, which may happen this year.

Cool, I think they're pointless slabs of wasted material. If I could run macOS on a MacBook, so should I be able to on an iPad, and that would make it a useful device because I wouldn't have to spend all my time inside a terminal app to make it useful.

Cool, I don't. iPads are so useful for drawing, and I hate having to use a terminal app on an iPad.

Different horses for different courses.


I agree except for the monitor attached part. There’s no reason my iPad Pro with that expensive keyboard and trackpad can’t run macOS. I had such dreams of using it as a laptop replacement, and all it’s ended up being is a very expensive portable monitor.

Isn’t that just called a bond?

History has shown prohibition can be… problematic.

Just tax it very very heavily and apply education / social pressure?


See the problems with the Australian system, which is basically what you describe.

Exactly this.

Australia (and the States) tried to impose ever-increasing taxes and restrictions on smoking, and over the last few years smoking has reached critical mass, with more people smoking, cheaper smokes, and smokes becoming more available AND less regulated.

Previously a 20-pack was around $40-60 at most smoke shops; then the illegal darts started to come in, priced as low as $6 or $8 for the cheapest 20-pack. They became rampant, and barely anyone purchased genuine smokes. In fact, these illegal smoke stores were exactly like real smoke shops: proper business, proper storefront and everything. Excluding the prices, you couldn't tell you were buying illegal products.


I would not be surprised at all; a $1,000/mo tool that makes your $20,000/mo engineer a lot more productive is an easy sell.

I’m guessing we’re gonna have a world like working on cars - most people won’t have expensive tools (e.g. a full hydraulic lift) for personal stuff; they’re gonna have to make do with lesser tools.


No way.

I bought a $3k AMD 395+ machine during the Sam Altman price hike, and it's got a local model that readily accomplishes menial tasks.

There's a ceiling to these price hikes, because open weights will keep popping up as competitors try to advertise their wares.

Sure, we view different capabilities, but there's definitely not that much cash in proprietary models given their indeterminacy.


What about when there is a $100/month tool that makes your engineer 90% as productive as they were on the $1000/mo tool?

What if that tool is something you can run on prem, and over time make the investment back?

It's not so simple.


If your company is making $1 million per employee per year, then 10% is $100k. Even at $500k per employee or lower, it's almost always better to buy the $1,000/month tool (break-even is a measly $108k of revenue per employee per year: the $900/month price difference comes to $10,800/year, divided by the 10% productivity gap).

It's not just about cost, it's about having the control, stability, and autonomy of on-prem. Plus you can probably repurpose that compute when employees are out of the office.

Anyways, I'm just saying it's not so simple ;)


No engineer will cost 20,000 bucks a month at this point in time. Offshoring is still happening aggressively.

The article was entertaining and made me smile. Thank you for that.

Real advice: national parks very much have seasons, be it weather, tourists (or lack thereof), wildlife, bugs, or all of the above. The same park can be a miserable experience or an incredible one a few months apart.


"Piracy is almost always a service problem and not a pricing problem"

I left when shows I enjoyed were a revolving door, and the UI felt hostile (constantly trying to shove terrible quality original content on me).


Is it really that hard to just hit cancel yourself?

When you're holding a hammer, the whole world looks like nails, or something.

EDIT: Also, there's no way in hell I'd let any of the AIs near something with my credit card info saved in their current state.


I agree the wording is a bit alarmist, but a closer example to what they are saying is:

  #include <stdbool.h>
  #include <stdlib.h>

  // Toy example; "handle" and its argument are stand-ins for real code.
  void handle(void *x) {
      bool silly_mistake = false;

      // ... lots of lines of code

      free(x);

      // ... lots of lines of code

      if (silly_mistake) { // silly_mistake shown to be false at this point in the program in all testing, so far
          free(x);         // double free of x if silly_mistake is ever true
      }
  }
A bug like the above would still be something that gets patched, even if a way to exploit it has not yet been found, so I think it's fair to call it out (perhaps with less sensationalism).

FWIW there's a whole boutique industry around finding these. People have built whole careers around farming bug bounties for bugs like this. I think they will be among the first set of software engineers really in trouble from AI.


That is something a good static analyser or even optimising compiler can find ("opaque predicate detection") without the need for AI, and belongs in the category of "warning" and nowhere near "exploitable". In fact a compiler might've actually removed the unreachable code completely.


Well yeah, it’s a toy example to illustrate a point in an HN discussion :).

Imagine “silly_mistake” is a parameter: rename it “error_code” (passed by reference), put a label named “cleanup” right before the if statement, and throw in a ton of “goto cleanup” statements until the control flow of the function is hard to follow, if you want it to model real code ever so slightly more.
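
Something like this, as a rough sketch (the function name, sizes, and error codes are made up to illustrate the shape, not taken from the article):

  #include <stdlib.h>
  #include <string.h>

  int parse_record(const char *input, int *error_code) {
      *error_code = 0;
      char *buf = malloc(64);
      if (buf == NULL) { *error_code = -1; goto cleanup; } // free(NULL) below is harmless

      if (strlen(input) > 63) {
          *error_code = -2;
          free(buf);   // freed once on this error path...
          goto cleanup;
      }

      memcpy(buf, input, strlen(input) + 1);
      // ... lots of lines of code, several more "goto cleanup" paths ...
      free(buf);

  cleanup:
      if (*error_code != 0) { // never observed non-zero here in testing, so far
          free(buf);          // ...and freed again here: a latent double free
      }
      return *error_code;
  }
Every extra early-exit path is one more state a reviewer (or analyser) has to track, which is exactly where a second free like this hides.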

It will be interesting to see the bugs it’s actually finding.

It sounds like they will fall into the lower CVE scores - real problems but not critical.


That's what I'm saying; a static analyser will be able to determine whether the code and/or state is reachable without any AI, and it will be completely deterministic in its output.


You cannot tell if code is actually reachable if it depends on runtime input.

Those really evil bugs are the ones that exist in code paths that only trigger 0.001% of the time.

Often, the code path is not triggerable at all with regular input. But with malicious input, it is, so you can only find it through fuzzing or human analysis.


> You cannot tell if code is actually reachable if it depends on runtime input.

That is precisely what a static analyser can determine. E.g. if you are reading a 4-byte length from a file, and using that to allocate memory which involves adding that length to some other constant, it will assume (unless told otherwise) that the length can be all 4G values and complain about the range of values which will overflow.
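
A hypothetical sketch of that exact pattern (the function and constant names are made up; the point is the unchecked 32-bit addition):

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define HEADER_SIZE 16u // made-up fixed overhead added to the length

  // Reads a 4-byte length, then allocates length + HEADER_SIZE. An analyser
  // assumes len can be any of the ~4G values, so it flags the addition:
  // values above UINT32_MAX - HEADER_SIZE wrap to a tiny allocation (with
  // 32-bit arithmetic), and the following read then overflows the buffer.
  unsigned char *read_record(FILE *f) {
      uint32_t len;
      if (fread(&len, sizeof len, 1, f) != 1) return NULL;

      unsigned char *buf = malloc(len + HEADER_SIZE); // can wrap around
      if (buf == NULL) return NULL;

      if (fread(buf + HEADER_SIZE, 1, len, f) != len) { // writes past the end of a tiny buf
          free(buf);
          return NULL;
      }
      return buf;
  }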


Why hasn't it then? The Linux kernel must be among the most heavily audited pieces of software in existence, and yet these bugs were still there.


People find and report bugs in the kernel using static analysers all the time.


Rust’s trait system and the embedded HAL say “hi there.”


It's also reasonable from a business point of view to say "we can't justify the investment to optimize our software in the current environment." I assume this is what's happening - people are trying to get their products into customers' hands as quickly as possible, and everything else is secondary once it's "good enough." I suspect it's less about developers and more about business needs.

Perhaps the math will change if the hardware market stagnates and people are keeping computers and phones for 10 years. Perhaps it will even become a product differentiator again. Perhaps I'm delusional :).

