
You're right across the board. My only issue with it is general-purpose AI fatigue. Everywhere I look, blog posts use the same generic-looking AI art (I can't quite put it into words, but you probably know the look I'm talking about), social media posts are written up using genAI (e.g. LinkedIn will now ask you if you want AI to write your post for you -- though let's be honest, original thoughts are few and far between on there to begin with), and while interviewing recently I received multiple warnings about disabling any AI assistants in my editor (to me, it's kind of a bummer that that's a big enough issue to mention at all).

I have, in principle, nothing against an opt-in feature that requires unmistakable user consent and a specific sequence of actions to enable. I'm just kinda tired of AI in general, and I'm also worried about potential licensing issues that may arise when genAI in your terminal writes scripts derived from training code that wasn't permissively licensed to begin with. That's nothing new though; I had, and have, the same concerns with GitHub Copilot.

I also recognize that my complaint is pretty personal (not counting the licensing thing). My low-level annoyance with genAI isn't something the AI industry at large should seek to resolve. Maybe I'm more set in my ways than I should be at this point in my life. Either way, it's a factor for me and a few other tech people I know.




> Everywhere I look, blog posts use the same generic-looking AI art (I can't quite put it into words, but you probably know the look I'm talking about)

They got that AI grease on them


Oh, that's easy, you just add the words "but don't make it look greasy", and as a bonus you're now a fully accredited Prompt Engineer! :p


Realistically, you use models that make it easier to prompt away from greasiness:

Positive prompt: 1girl, hacker, laptop, terminal, coffee, green hair, green eyes, ponytail, hoodie

Negative prompt: worst quality, low quality, medium quality, deleted, lowres, comic, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts, signature, watermark, username, blurry, looking at viewer

Steps: 25, Sampler: DPM++ 3M SDE Karras, CFG scale: 7, Seed: 1313606321, Size: 768x384, Model hash: 51a0c178b7, Model: kohaku-xl, VAE hash: 235745af8d, VAE: kohaku-xl.safetensors, Denoising strength: 0.7, Hires resize: 2048x1024, Hires upscaler: Latent (bicubic), Version: v1.6.0-2-g4afaaf8a
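
If you want to reproduce settings like these outside a webui, here's a minimal sketch using Hugging Face diffusers. The repo id and output filename are guesses on my part (not from the settings above), and the hires-fix upscale pass isn't reproduced:

    # Rough sketch of the settings above with diffusers.
    # "KBlueLeaf/kohaku-xl-beta7" and the filename are stand-ins; swap in
    # whichever Kohaku-XL checkpoint you actually have.
    import torch
    from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "KBlueLeaf/kohaku-xl-beta7", torch_dtype=torch.float16
    ).to("cuda")

    # DPM++ 3M SDE with Karras sigmas, i.e. the sampler named above
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(
        pipe.scheduler.config,
        algorithm_type="sde-dpmsolver++",
        solver_order=3,
        use_karras_sigmas=True,
    )

    image = pipe(
        prompt="1girl, hacker, laptop, terminal, coffee, green hair, "
               "green eyes, ponytail, hoodie",
        negative_prompt="worst quality, low quality, bad anatomy, bad hands, "
                        "text, watermark, signature, blurry",
        num_inference_steps=25,
        guidance_scale=7.0,
        width=768,
        height=384,
        generator=torch.Generator("cuda").manual_seed(1313606321),
    ).images[0]
    image.save("not_greasy.png")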


Now you can be like every other obnoxious tech blogger putting AI art crap in their header.


Hah! That's actually a pretty good way of describing it.


The best is going to boingboing, and seeing that shit art everywhere, while they have posts whining about AI art. Be Consistent. You're just being part of the problem if you can't even abide by the basics.

I never thought I would end up hating the whole tech world so much, and I thought "Crypto" was peak - but that was just a bunch of scammy dudes and rugpulls for suckers. This? Everyone's a sucker. In theory there's a case to be made for it, but I trust none of the entities involved in pushing this out.

For about 5 years I thought MS was going to do something good. WSL2 was actually good tech and they seemed to uh... "embrace" open source. But since 2020 I feel like things are just going downhill.

My inner old man yells at the lawnmowermanchild: GET OFF MY LAWN.


> while interviewing recently I received multiple warnings about disabling any AI assistants in my editor

Weird. Does the company forbid its staff to use AI assistants?

I get that they want to find out what you know. If you know how to solve problems using an AI, isn't that what they are going to (increasingly) expect you to do on the job?

In fact, demonstrating that you can effectively use AI to develop would seem to me to be something they'd want to see.


The stated reason was that they wanted to understand my ability to solve a problem and my thought process when faced with their problem. Having run technical interviews in the past, I completely agree with the reasoning.

While I don't use it, I'll grant you that AI is good at solving small, repetitive, tedious problems; stuff that's maybe a bit too domain-specific to be widely available in a library, but that varies just enough each time that you have to sink time into either writing separate implementations or trying to generalize.

AI is generally going to be poor at solving novel problems, and while a truly novel problem is something you can never really use in an interview for a variety of reasons, I can send you a problem that's likely new to you and see how you tackle it.

I'll also admit that there's not a single good way to do that as it is. Technical interviewing is more of an art than a science, and it's difficult to get really good signal from an interview, generally speaking.


If you're experienced with how to use LLMs, they can be very good at helping with novel problems, and with much more than boilerplate.

I've been able to tackle problems that would otherwise have taken up too much time and effort, between my work as an expert witness for software-related federal lawsuits and being a father of two young children.

Here's just a sampling of what I've accomplished since I started using these tools:

A convolutional neural network trained using PyTorch to predict the next chord (as tablature) on a guitar based on image data. Includes labeling software for the image data as well as an iOS app for hosting and running the model.

A web framework written in C using clang blocks for the DSL, with per-request memory arenas, a complete JSON-API-compatible ORM, JWT auth, and enough sanitizers and leak detection to make your head spin.

A custom CLI DSL, modeled somewhat on AWK syntax and written with Python Lex-Yacc (PLY), for turning CSV data into matplotlib graphics (a toy sketch of the idea follows after this list).

A custom web framework written in F# that compiles to JS with Fable.
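
Purely as a hypothetical illustration of the kind of PLY-based CSV-plotting DSL mentioned above (none of this is the actual project code; the single-command grammar and every name here are invented for the sketch):

    # Toy sketch only: a minimal AWK-flavored command, e.g.
    #   plot "data.csv" using $2
    # lexed and parsed with PLY (pip install ply), plotting one CSV column.
    import csv
    import matplotlib.pyplot as plt
    import ply.lex as lex
    import ply.yacc as yacc

    tokens = ("PLOT", "USING", "STRING", "COLUMN")
    t_ignore = " \t"

    def t_PLOT(t):
        r"plot"
        return t

    def t_USING(t):
        r"using"
        return t

    def t_STRING(t):
        r'"[^"]*"'
        t.value = t.value.strip('"')
        return t

    def t_COLUMN(t):
        r"\$\d+"
        t.value = int(t.value[1:])  # $2 -> second column, AWK-style 1-based
        return t

    def t_error(t):
        raise SyntaxError(f"illegal character {t.value[0]!r}")

    def p_command(p):
        'command : PLOT STRING USING COLUMN'
        with open(p[2], newline="") as fh:
            ys = [float(row[p[4] - 1]) for row in csv.reader(fh)]
        plt.plot(ys)
        plt.savefig("out.png")

    def p_error(p):
        raise SyntaxError("parse error")

    # Self-contained demo: write a tiny CSV, then run one DSL command.
    with open("data.csv", "w", newline="") as fh:
        csv.writer(fh).writerows([[1, 10], [2, 30], [3, 20]])

    lexer = lex.lex()
    parser = yacc.yacc()
    parser.parse('plot "data.csv" using $2')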

LLMs helped with looking up documentation, higher level architecture, fixing errors, and yes, plenty of boilerplate!

All of this being said, I brought with me almost 30 years of experience writing software and a pretty good high-level idea of how to go about building these things. The devil is in the details and that's where the LLM was most useful.


Sure, but none of what you mentioned actually solves novel problems. It's a helpful tool, sure, but that was never up for debate.

Beyond that, if it helps you spend more time with your family while still being good at your job, that's always a win.


I'll agree that I was the one ultimately responsible for solving the novel problems!


Most people don’t actually work on novel problems though. They build CRUD web apps. Copilot shines there.


And leetcode interview problems are not novel either. They are formulaic, you just have to know the formulas.


The ability to solve novel problems is tested like a marathon, not like a 100m race.


> If you know how to solve problems using an AI, isn't that what they are going to (increasingly) expect you to do on the job?

Why would you work somewhere that prescribed your workflow?

You're a professional. Do your job as you see fit, whether that involves AI assistance or not.

> demonstrating that you can effectively use AI to develop, would seem to me to be something they'd want to see.

Irrelevant. They want you to meet business needs more than play with today's shiny technology.


The trouble is that AI assistants have seen all of the contrived algorithmic problems before. I once interviewed a candidate for whom Copilot spat the answer out as soon as he typed the function signature. Whether these problems are a good idea in the first place is a separate discussion, but as it stands the AI seems to just sidestep the whole thing.


I currently refuse interviews that include shit like leetcode, as they are a waste of time, and I'm glad that LLMs are ruining them.


I hate them too, but I'll put up with them out of necessity if the product or company mission's interesting to me.

Ironically, the most interesting positions I've held have almost universally been at companies that don't have leetcode-style questions as part of the hiring process.


The interview (test) isn't about how well you know your IDE or the compiler tools; the test is: can you work hard (study leetcode) and do you have the mental ability to achieve your goals (a job offer)?

IMHO, this is the same as disabling linting or compiling, or why companies (used to) do coding challenges on a whiteboard.


You'll think I'm kidding, but at least a third of Google isn't sure whether AIs can code, and another third thinks that even if they can, the code is too bad to bother with. Things like "gee idk, VS Code autocomplete is pretty cool" are either non sequiturs or a battleground.



