Hacker News | pzs's comments

This trend of overengineering is apparent now in cars, too. An innocent failure, like a headlight going out, can turn into a major systemic issue, like the engine refusing to start, through a chain reaction inside an inadequately tested software control system.

I wonder if this is a one-way street, that is, if a realization will come at some point that simple solutions to simple problems can be more robust...


> This trend of overengineering

I'd dispute it being over-engineering: media keys tend to control a mix of hardware and software (OS) features (looking at ASUS keyboards on the internet I see audio volume, mic mute, fan speed / perf governor, display off, brightness, projection mode, touchpad off, sleep, and airplane mode).

Given this, an OS driver is a requirement, and the OS further needs to access the hardware for obvious reasons.

This means you can either implement everything uniformly in the driver (just bouncing from the interrupt to a hardware operation in the case of hardware features), or you can mix the implementation between firmware and driver.

Unless you have a very good justification to do so (and I'd very much dispute that one exists for gaming-oriented ASUS laptops), the latter seems like the over-engineering.


I think in many respects these problems are actually _under_-engineering. It's possible to treat software as an artefact with a measurable level of quality, and to use frankly not especially ambitious tools (programming languages with memory safety and rich type systems, unit and integration tests, etc.) to build it. It's also possible to have a strong sense of user experience and taste as far as what makes a product, not just a pile of parts.

But you have to take software seriously as something that can improve a system, not just a cost centre to be minimised where possible, and an embarrassing source of problems that will ultimately end up in the newspaper or worse.


Some of the "proudly-open" laptops have open-source EC firmware. I don't have one and haven't looked deeply enough to know, but maybe they have these features sanely implemented there.

On the other hand, I'm not as optimistic about open-source BIOSes like Coreboot, whose only reason for existence seems to be "it's open-source!" That project has been around since the last century, yet it still lacks any actual GUI/TUI for configuration of the kind every other BIOS has had since the late 80s.


The UI is a payload issue, not a Coreboot issue: various vendors ship Coreboot-based firmware with a configuration interface, usually based on the Tiano payload. But for my EC issues I simply took the approach of reverse engineering the EC firmware, binary patching it, flashing that back, and getting on with life. Skill issue.


> I simply took the approach of reverse engineering the EC firmware, binary patching it, flashing that back, and getting on with life. Skill issue.

There is no simply here.

You can’t list a litany of niche skills and then imply that’s just life and it’s everyone’s fault they don’t have the time and knowledge to just, you know, casually reverse engineer and patch a binary.


It was a sarcastic joke ;)


Hard to tell in writing. Still not convinced.


They call the cherries cascara, and I have come across them in some specialty coffee shops packaged just like the beans. You can pour hot (not boiling) water over them and prepare a tea-like infusion. It tastes sweet-ish without adding anything else. It gives a pretty noticeable kick to me when I drink it, even though I am a regular coffee drinker. I think it is worth a try, if you haven't done so yet.


"If the human brain were so simple that we could understand it, we would be so simple that we couldn’t." - without trying to defend such business practice, it appears very difficult to define what are necessary and sufficient properties that make AGI.


What if the human brain were so complex that we could be complex enough to understand it?


To update this excellent quote to 2025, change minutes to seconds and you just described TikTok.


Yeah, I was thinking that while modern social media has made the "cost of entry" lower, and everyone can theoretically reach more people than ever, it's hard to even describe most of it as "fame" anymore. I mean, does content even "go viral" anymore, with users subdivided into the tiniest niche communities or audiences? Even if things get wider traction for a while, there's so much competition with so much other content that everything seems to get quickly drowned out and then can't even be found again later through search.


There’s a saying on Twitter that every day there is a main character, and the goal of Twitter is to not be it.


"The real problem is the ROI on AI spending is.. pretty much zero. The commonly asserted use cases are the following: Chatbots Developer tools RAG/search"

I agree with you that ROI on _most_ AI spending is indeed poor, but AI is more than LLMs. Alas, what used to be called AI before the onset of the LLM era is not deemed sexy today, even though it can still deliver very good ROI when it is the appropriate tool for solving a problem.


AI is a term that changes year to year. I don't remember where I heard it, but I like the definition that "as soon as computers can do it well, it stops being AI and just becomes standard tech". Neural networks were "AI" for a while - but if I use an NN for risk underwriting, nobody will call that AI now. It is "just ML" and not exciting. Will AI = LLM forever now? If so, what is the next round of advancements called?



There is a video on the page in which Bret Victor explains what it is all about. I find it very difficult to summarize, but my best attempt would be something like transforming computation into an activity that a community of people performs by manipulating real-world objects.


This reminds me of what I learned about myself during my years spent at the university. I observed that in the morning my brain is better at understanding new concepts. Mornings were the best time for me to practice and improve problem solving, but I tended to remember fewer details of what I came across. However, at about 2pm my brain appears to switch to memorizing mode, where I struggle with problem solving compared to the morning, but I will remember a lot more of what I read. I structured my learning activity around this observation. Even to this day (I'm 46) I can feel the same tendency, e.g., if a problem seems somewhat difficult, I just wait until the next morning, if I can, only to find it easy to come up with some solution that seemed out of reach the previous evening. Also, I try to do most of my reading at night (well, life with a family doesn't leave a whole lot of options for timing anyway).


> an attacker passively eavesdropping a GSM communication between a target and a base station can decrypt any 2-hour call with probability 0.43, in 14 min

The authors give the above example in the abstract. It does not look like the typical use case for embedded systems. I would think embedded systems send and receive small amounts of non-critical data over GSM, hopefully encrypted, as the parent pointed out. But I may be wrong here - is there a real use case for attacking embedded systems using this method?


> But I may be wrong here - is there a real use case for attacking embedded systems using this method?

Yeah, any IoT device that has been built with the assumption that GSM is not eavesdroppable. Cars and alarm systems come to mind here.


I read his book on relativity theory, which I would characterize as one written for popular consumption [1]. I recommend reading it if you have not done so yet. I found the explanation of the special theory in the book easily accessible and enlightening, less so the explanation of the general theory, although it did help me understand it better.

[1] https://en.wikipedia.org/wiki/Relativity:_The_Special_and_th...


"As the article states, no sensible application does 1-byte network write() syscalls." - the problem that this flag was meant to solve was that when a user was typing at a remote terminal, which used to be a pretty common use case in the 80's (think telnet), there was one byte available to send at a time over a network with a bandwidth (and latency) severely limited compared to today's networks. The user was happy to see that the typed character arrived to the other side. This problem is no longer significant, and the world has changed so that this flag has become a common issue in many current use cases.

Was terminal software poorly written? I don't feel comfortable making such a judgement. It was designed for a constrained environment with different priorities.

Anyway, I agree with the rest of your comment.


> when a user was typing at a remote terminal, which used to be a pretty common use case in the 80's

Still is for some. I’m probably working in a terminal on an ssh connection to a remote system for 80% of my work day.


If you're working on a distributed system, most of the traffic is not going to be your SSH session though.


Sure, but we do so with much better networks than in the 80s. The extra overhead is not going to matter when even a bad network nowadays is measured in megabits per second per user. The 80s had no such luxury.


First world thinking.


Not really. Buildout in less-developed areas tends to be done with newer equipment. (E.g., some areas in Africa never got a POTS network, but went straight to wireless.)


Yes, but isn't the effect on the network a different one now? With encryption and authentication, your single character input becomes amplified significantly long before it reaches the TCP stack. Extra overhead from the TCP header is still there, but far less significant in percentage terms, so it's best to address the problem at the application layer.


The difference is that at kb/s speeds, a 40x overhead on 10 characters per second mattered. Now, humans aren't nearly fast enough to contest a network.
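To put rough numbers on that 40x (a back-of-the-envelope sketch of my own, assuming roughly 40 bytes of TCP+IP headers per 1-byte segment):

    # rough estimate, not from the parent comment
    header_bytes = 40                 # typical TCP + IP headers without options
    keystrokes_per_s = 10
    wire_bps = keystrokes_per_s * (header_bytes + 1) * 8
    print(wire_bps)                   # ~3280 bit/s: more than a 2400 baud line,
                                      # but a rounding error on any megabit link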


Why? What do you do?


It was not just a bandwidth issue. I remember my first encounter with the Internet was on an HP workstation in Germany connected to South Africa via telnet. The connection went over a Datex-P (X.25) 2400 baud line. The issue with X.25 nets was that they were expensive. The monthly rent was around 500 DM, and each packet sent also cost a few cents. You would really try to optimize the use of the line, and interactive rsh or telnet traffic was definitely not ideal.

