felixhammerl's comments

I was like "wow, I haven't seen a 'considered harmful' article in a while, the quiet must be ancient", then I saw the author and it made sense.

When he says something, it's usually worth a listen.


Reminds me of Doggerland: https://youtube.com/shorts/Afwxk4peYys


The place was part of Doggerland.


Good comment, thanks.

Doggerland is often thought of as the North Sea section surrounding Dogger Bank as it is today, but as you've highlighted, it actually extended much further, as far south as Brittany.


If your app just has to display stuff, there are no-code kits available that can help you out. No vibe coding needed.

If your app has to do something useful, your app just exploded in complexity and corner cases that you will have to account for and debug. Also, if it does anything interesting that the LLM has not yet seen a hundred thousand times, you will hit the manual button quite quickly.

Claude especially (with all its deserved praise) fantasizes so much crap together while claiming absolute authority in corner cases that it can become annoying.


That makes sense, I can see how once things get complex or novel, the LLMs start to struggle. I don't think my app is doing anything complex.

For now, my MVP is pretty simple: a small app for people to listen to soundscapes for focus and relaxation. Even if no one uses it, at least it's going to be useful to me, and it will be a fun experiment!

I’m thinking of starting with React + Supabase (through Lovable), that should cover most of what I need early on. Once it’s out of the survival stage, I’ll look into adding more complex functionality.

Curious, in your experience, what’s the best way to keep things reliable when starting simple like this? And are there any good resources you can point to?


You can make that. The only AI coding tools I have liked are OpenAI Codex and Claude Code. I would start by working with one of them to create a design document in Markdown to plan the project. Then I would close the app to reset context, tell it to read that file, and have it create an implementation plan for the project in various phases. Then I would reset context again and have it start implementing. I don't always use that many steps, but for a new user it can help show ways to use the tools.


That’s good advice, thank you!

I already have a feature list and a basic PRD, and I’m working through the main wireframes right now.

What I’m still figuring out is the planning and architecture side, how to go from that high-level outline to a solid structure for the app. I’d rather move step by step, testing things gradually, than get buried under too much code where I don’t understand anything.

I’m even considering taking a few React courses along the way just to get a better grasp of what’s happening under the hood.

Do you know of any good resources or examples that could help guide this kind of approach: how to break this down, and what documents to have?


I've always wanted to make an app like this. I think you could do a lot with procedural generation and some clever DSP.
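To make "procedural generation and some clever DSP" concrete: a minimal sketch of a procedural soundscape element, assuming plain Python and float audio samples in the usual [-1, 1] range. The function name and parameters are my own, purely for illustration; a real app would feed these samples into an audio API.

```python
import random

def brown_noise(n_samples, leak=0.99, step=0.02, seed=None):
    """Generate bounded Brownian ('red') noise by integrating white noise.

    The leak factor keeps the random walk from drifting away, which is
    what gives the deep, steady 'rumble' character used in soundscapes.
    """
    rng = random.Random(seed)
    samples = []
    level = 0.0
    for _ in range(n_samples):
        # Integrate a small random step, with a slight pull back to zero.
        level = leak * level + rng.uniform(-step, step)
        # Clamp to the conventional [-1.0, 1.0] float-audio range.
        level = max(-1.0, min(1.0, level))
        samples.append(level)
    return samples
```

The same idea, with different filters and modulation, covers wind, rain, and surf textures, so a surprising amount of a soundscape app can be generated rather than streamed from recordings.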


Learning how to get it to run build steps was a big boost to my initial productivity when learning the CLI tools.


Maybe React Native, if you like React.


For those with an old Kindle, couldn't you download it onto the Kindle and then pull it off of there via USB?


That sounds simple, but wouldn't the ebook you "pulled off" the Kindle still be in Amazon's format with DRM? I don't think this solves the original problem.


The first problem was to get the original file from somewhere in a usable format, then strip DRM in a later stage. Seems like step 1 was already made significantly harder now.


I tried to do this recently but discovered that the DRM algorithm changed and I couldn't use the standard de-DRM tools.


Thanks for the info!


Are you using a relatively new Kindle?


In the news we seem to have reached Schrödinger's AI: Too dumb to do anything properly, but coming for everyone's jobs due to being too powerful.


There's a third way: coming for some people's jobs due to false advertising and almost unthinkable levels of hype.

Like, there are actual companies who have stopped hiring junior developers or even laid off junior developers in favour of our robot overlords. This is, obviously, a terrible idea, and will hurt those companies, because the output of these things is pretty much crap, but meanwhile some people _have_ lost their jobs.


You just need to convince the management.


Did anyone figure out how to handle two fundamental problems of MSRs yet?

1) Molten salt is corrosive as hell and will chew through your pipes.

2) It can't be serviced. If it ever shuts down and the salt solidifies, it cannot restart.


Copenhagen Atomics claims to have solved 1) [0]; I believe point 2 is mentioned, but not in detail.

[0]: https://www.youtube.com/watch?v=FjHH8Qf3aO4


The Soviets had a mini-sub program with reactors like that. The fleet barely made it into service before it was cancelled. I would not even want to tune up a car if it had to keep running at the same time.


Even the most massive hacks, breaches, or cyber attacks barely put a dent in any reasonable business. One or two news cycles and a management rotation, that's it. Okta? Target? Equifax? Capital One? Uber? Even SolarWinds, for crying out loud.

Everyone does enough to not be accused of gross negligence, but really I have not seen anyone pay more than lip service. And I don't blame them. No matter how much this hurts to say as a security professional.


The biggest groups of people paying lip service to security are software engineers and ops people. Both groups regularly choose implementation speed and reduced work over sound security practices.

A good example of this is in C/C++. Most C codebases I have seen spread buffer use and allocation code over hundreds or thousands of files. Any one of these files could have a security bug because some code does not check the buffer size before writing data into a buffer. This pattern will never be secure, because it requires software engineers to get every check right, which is impossible.

Even worse, many software engineers do not care about security, or even correctness. They will happily write dangerous code because it takes less time.

Another example of both operations and software engineers having a blind spot is cloud computing. When you write software in the cloud, you want to minimize secrets for the following reasons:

1) They have to be periodically rotated (changed). Rotation takes time, and it is error-prone. Making a mistake leads to an outage. Not rotating them can lead to a hack when an employee leaves the team, or when a breach occurs and the attacker gets a copy of the secret.

2) If a breach occurs, secrets have to be rotated very quickly. This is hard to do unless a team has spent a lot of effort on automated secret rotation.

The solution is to use managed identities (i.e. identities which automatically rotate their credentials every X days). I know Azure provides them, and I bet AWS, GCP, etc. also provide equivalents. It takes a little more work, but then you do not have to worry about secret rotation anymore.
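The rotation burden in point 1 can be made concrete. Here is a toy sketch (function and constant names are my own, not any cloud SDK) of the bookkeeping a team ends up writing when it manages raw secrets itself; this is exactly the loop a managed identity moves into the platform:

```python
from datetime import datetime, timedelta, timezone

# A typical rotation policy; the right value depends on your org.
MAX_SECRET_AGE = timedelta(days=90)

def needs_rotation(created_at, now=None, max_age=MAX_SECRET_AGE):
    """Return True if a secret minted at `created_at` is past its rotation window."""
    now = now or datetime.now(timezone.utc)
    return now - created_at >= max_age
```

A team owning raw secrets has to run checks like this on a schedule and then coordinate the actual credential swap without causing an outage, for every secret, forever. With managed identities, none of this code exists.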

The problem is, more work means a lot of people just won't do it.

The final example is the principle of least privilege. Convincing people to grant only the appropriate privileges to an account, managed identity, person, etc. is hard. Lots of people just give as much access as possible, "in case someone needs it", or because it is easier. This leads to much worse security breaches.
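What least privilege looks like in code, as a toy sketch (not any real IAM API): explicit grants per role with deny as the default, where the "just in case" everything-grant becomes a visibly grep-able wildcard instead of an invisible habit.

```python
# Toy permission model: deny by default, grant per role explicitly.
ROLE_GRANTS = {
    "reader":   {"bucket:read"},
    "deployer": {"bucket:read", "bucket:write"},
    "admin":    {"*"},  # the "in case someone needs it" wildcard
}

def is_allowed(role, action):
    """Return True only if the role explicitly holds the action (or a wildcard)."""
    grants = ROLE_GRANTS.get(role, set())
    return "*" in grants or action in grants
```

The design point is that an auditor can scan `ROLE_GRANTS` for `"*"` entries; when access is instead handed out ad hoc, over-privilege leaves no such trace.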

My basic point is that security problems are not just because companies don't care or are not punished enough. They also occur because software engineers, ops, and other technical people don't really care. If the people doing the actual work don't care, the situation is never going to improve.


This is not my experience working in small shops and enterprise companies (some regulated). What I've seen is constant, hard resistance from security "departments" to doing anything other than making policies (one company I worked with for a while had a security policy denying the use of managed identities in Azure...) and buying yet another magic solution from a vendor that will fix all our security problems (while offloading its maintenance on... operations teams!), sometimes with configurations that resemble the proverbial "very expensive firewall with ACCEPT ALL policies in all directions".

The companies with working security, in my (limited, sure) experience, had security teams owning the tools and making life easier for developers and ops: from something "simple" like certificate-rotation automation, to mTLS that is "transparent" to apps, to authn/authz, to secret management, all owned and managed by the security org.


The problem with the principle of least privilege is that you don't know how much privilege you need until you need it. And once you need it, you need to define a scope for it. If you wish to bake an apple pie from scratch, you must first invent the universe. But are you done with the universe once the apple pie is baked, or does it still need to be eaten, digested, and excreted? Are you done then? And what specific portions of the universe did you need in order to accomplish this goal? You're not sure? I'll see you in a few years when you're done with the research.

Sorry to be so cynical, as I do actually believe the principle of least privilege is an appropriate goal; I just think that there's no getting around that the engineers themselves are the ones who really must uphold this virtue, and even then, it can go overboard. At some point, the software should do something.


"thousands of dollars every day" does not a negative reinforcement make. That is not even a rounding error for even mid-sized companies.


Then use 1% of revenue or $2K per day, whichever is greater.


So after 4 months, the company would lose more than their entire revenue?


Why not?

A $20k car can do far more than $200k in damage.

We don’t limit liability to the price of the vehicle.


The equivalent would be a $20k Ford resulting in a $1,762,000k fine.


Yep, it should deter building vulnerability-riddled solutions.


There are so many half baked takes, but this is my favorite:

> There will always be a market for stable-over-latest software, especially for businesses.

That market is called nvd.nist.gov at best and 0-day brokers at worst. Why do people still not accept fix-forward supremacy and patch their mess?


Because people aren't fond of things randomly breaking or forcing reconfiguration in order to get their security patches.


My wife still uses a mid-2013 MacBook Air as a daily driver after more than 10 years; she works in academia. What happened is simply that the CPU stopped being the thing that made stuff slow, so a mid-2010s upper-midrange machine can still hold its own today. SSDs were already good then, and memory plateaued around 8 GB for entry-level machines some 5 years ago. So as long as your thermals don't suck, your SSD doesn't smoke out, and your keyboard holds up, you can still use these machines today. If you maxed out the memory then, that is.


Even more so with desktops.

I am running a 2014-era i7-4790K with 32 GB RAM and a 1070 Ti.

Since it's a desktop with a Noctua cooler, the temps never get high. Keyboard and mouse you just replace when worn out.

This year I am thinking about upgrading.

But my 2019 ThinkPad already feels slow... so I never got laptops for long use. They suck.


Yeah, I got used to desktops with RAID, powerful GPUs, large nice monitors, etc. back in the early 2000s, and laptops never feel fast. They can't be: I don't care how many benchmark results say otherwise, a machine with a given number of square inches of silicon at a given nm scale, running off a 130-watt power adapter, isn't going to break the laws of physics and beat a desktop drawing 800 watts with 2-5x as much same-nm silicon. Especially now that I've gotten into video editing and things like generative visuals for music, you really notice where they're saving power on laptops when switching between a desktop and a laptop RTX 4090 + 16-core CPU.


I also have a 4790K chugging along! I had to upgrade due to its max RAM of 32 GB, so I got a Dell Precision mobile workstation with 128 GB. I normally hate laptops, but with remote working meaning only the occasional commute, it was unavoidable.


Like the other commenter said, you haven’t tried Apple silicon if you think all laptops suck.

My M1 MacBook Air is way faster than my iMac with an i7-7700. And an M3 Pro is probably faster than my desktop i9-10850K.


Clearly you haven’t tried Apple Silicon laptops


I haven't, and they are a very new and recent creation. Also, Windows games run better on a desktop, get it?

Every single time on this site, you people just LOVE your M1s, don't you.

Desktops generally offer better performance, get it? Also peripherals.


Apple Silicon truly is phenomenal for dev stuff. But gaming ... oh well. And you obviously can't heat your room with them, so your feet may feel cold. ;)


The last time I considered Apple Silicon I discovered that there were a few compatibility issues. I can't recall which software specifically, but it put me off "for a generation or two" and I ended up buying a Framework instead.

What is the compatibility story at this point?


At least for PC hardware, CPU development was severely stifled by lack of competition for most of the 2010s.

Intel had no credible competition until AMD got their act together with Zen, and wasted no opportunity to rest on their laurels. This is also why both newer Zen and Apple's new silicon seem to be making such fantastic advances: they're essentially catching up to where we could have been all along if we had had a healthier market.

