Hacker News | elliotec's comments

A lot of people, in Austria at least, have moved to Signal in my experience. My communities in the US and Austria have trended toward adopting Signal, with very few holdouts on Messages and WhatsApp. Some of that is partly due to my pressure, but overall it's just getting away from the BS of the alternatives.


I think it’s important to highlight this part of the abstract:

> modern smartphones are better understood as external to, but symbiotic with, our minds, and, sometimes even parasitic on us, rather than as cognitive extensions

“Symbiotic with, and sometimes parasitic.” It feels like the symbiotic bit is more on point overall.


The symbiotic-parasitic spectrum likely depends on usage patterns: tools that augment cognition when used intentionally can become parasitic when designed to maximize engagement at the expense of user agency.


>tools that augment cognition when used intentionally can become parasitic when designed to maximize engagement at the expense of user agency

Wonderful definition, thank you. I think it reaches beyond software, into areas like harmful memes (mental parasites), and even drug addiction.


How much further to peak Enshittification?


Oh there’s a way down to go yet… a long way to go.


Exactly. Everywhere I’ve worked, this was a quick and non-intensive collaboration between engineering management and like one finance person. It’s baked into a ton of tools already (like you mentioned, Jira) so the percentages are usually just there and eng leaders review it with FP&A twice a year.


Real innovators can’t stand this sort of noise, so it’s a direct shot across their bow.


Have you tried Claude Code? I’m surprised it’s not in this analysis, but in my personal experience, the competition doesn’t even touch it. I’ve tried them all in earnest. My toolkit has been (neo)vim and tmux for at least a decade now, so I understand the apprehension from less terminal-inclined folks who prefer other tools, but it’s my jam and it just crushes it.


Right, after the Sonnet 4 release it was the first time I could tell an agent something and just let it run comfortably. As for the tool itself, I think a large part of its ability comes from how it writes recursive todo-lists for itself, which are shown to the user, so you can intervene early on the occasions it goes full Monkey's Paw.


Yeah, I've been manually writing a TASKS.md first so I can modify it while the agent starts working on it.


Maybe usually it’s just for personal fun or learning. I think “your audience” can be you, and that’s enough. I’ve personally written articles for nobody but myself and “the world,” and I’m shocked by how much traffic they get over a decade later. Sometimes the little esoteric things you record for nobody in particular show up for those particular nobodies, and it matters.


Better start now! It’s incredible and unbelievable how productive it is. In my opinion it still takes someone with a staff level of engineering experience to guide it through the hard stuff, but it does in a day with just me what multiple product teams would take months to do, and better.

I’m building a non-trivial platform as a solo project/business and have been working on it since about January. I’ve gotten more done in two nights than I did in 3 months.

I’m sure there are tons of arguments and great points against what I just said, but it’s my current reality and I still can’t believe it. I shelled out the $100/mo after one night of blowing through the $20 credits I used as a trial.

It does struggle with design and front end. But don’t we all.


Designers and frontend developers don’t struggle with those. That’s why they are designers and frontend developers.

Before those 3 months you mentioned, how much time did you spend coding on average (at work, or as a hobby), percentage-wise?


Of course they do. I’ve been primarily a front end developer for 15 years. Working with designers. Shit takes so many iterations and so much time. Claude is faster but still “struggles” compared to basic rails work and API calls and test writing and whatnot.

I’m not sure how to answer the question on percentage of time coding. I quit my job as a director where coding wasn’t part of the job but have kept up on side stuff and architecture at work. Since the new year when I started this it’s been in bursts, some weeks or nights I’ll go super hard coding and others I’ll focus on other stuff. I go to conferences and study a lot on the subject of the industry so that’s what I do in bursts of the non-coding time.

I hired a virtual assistant to help with the non-coding things so lately it’s been a lot more.

In general I’d estimate at least 50% of my work on this thing since January has been coding but it’s really hard to gauge. Claude over the past 3 days has surpassed my personal coding productivity over the past 3 months though, if it wasn’t clear what I was saying.


Will these kinds of software end up like programs written with bespoke Lisp macros? Lots of power, but only one person actually knows it by heart.


Hit me up when you release your product. I keep seeing stuff like this and never see any proof. Companies aren't releasing 10x the features/patches/bug fixes/products, open source isn't getting 10x the number of quality PRs, absolutely no real evidence that the massive productivity gains actually exist.

What I've seen is people feeling more productive, until the reality of all the subtle problems starts to set in. Even skilled engineers usually only end up with 10 or 20% productivity gains by the time they narrow its use case to where it's actually not total dog shit, or by the time they go back around and fix all the problems.

The highest-quality product I know of where the creator has talked about his use of AI is Ghostty, and he's not claiming massive improvements, just that it's definitely helpful.


I’ll happily let you know when I release. Goal date for public beta is the 15th. I’d love eyes and feedback on it ASAP.

Hopefully it’s obvious that Claude will not have simply written the entire thing but you might get a sense of what it can do quickly as part of a whole - maybe similar to your last sentence but I suppose I am claiming massive improvements (in productivity, no warranty on quality yet).

Also keep in mind I’m entirely solo here. I fully agree with your points that the proof is in the pudding and obviously there’s nuance to all of it. But yeah, I’m not exaggerating with my commentary above.


If you don't mind me asking a couple of questions, what percentage of your code would you say is AI-generated, meaning you prompted an AI and it went off and wrote code that you used (with or without modification)?

And how much time would you say you spend wrangling the AI, meaning either reprompting or substantially editing what you get back?


This is really cool, and very relevant to something I'm working on. Would you be willing to do a quick explanation of the build?


Sure! I first used OpenAI embeddings on all the paper titles, abstracts, and authors. When a user submits a search query, I embed the query, find the closest-matching papers, and return those results. Nothing too fancy involved!

I'm also maintaining a dataset of all the embeddings on Kaggle if you want to use them yourself: https://www.kaggle.com/datasets/tomtum/openai-arxiv-embeddin...
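A minimal sketch of that flow, with tiny hand-made vectors standing in for real OpenAI embeddings (in practice the vectors would come from OpenAI's embeddings API, and all function and data names here are illustrative, not from the actual site):

```python
import numpy as np

def top_k(query_vec, paper_vecs, k=2):
    """Return indices of the k papers whose embeddings are closest
    to the query embedding, by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    p = paper_vecs / np.linalg.norm(paper_vecs, axis=1, keepdims=True)
    sims = p @ q                      # cosine similarity of each paper to the query
    return list(np.argsort(-sims)[:k])  # indices of the best matches, highest first

# Toy stand-ins: each paper's title, abstract, and authors would be
# concatenated and embedded; here we fake 3-dimensional embeddings.
papers = ["Attention Is All You Need", "BERT: Pre-training ...", "A survey of reef fish"]
paper_vecs = np.array([[0.9, 0.1, 0.0],
                       [0.8, 0.2, 0.1],
                       [0.0, 0.1, 0.9]])
query_vec = np.array([1.0, 0.0, 0.0])  # pretend: embedding of "transformers"

print([papers[i] for i in top_k(query_vec, paper_vecs)])
# → ['Attention Is All You Need', 'BERT: Pre-training ...']
```

Brute-force cosine search like this is fine at arXiv scale; an approximate nearest-neighbor index would only matter at much larger corpus sizes.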


So did you combine title + abstract + authors into a single chunk and embed it, or embed them individually?


Impressive! Will you parse the papers themselves in the future? Without citations this isn't that usable for professors or scientists in general; relevance ranking depends largely on surfacing those older, prominent papers. (From our lab's experience building decentralised search using transformers.)


One chunk embedded together


That method can break when author names and subject matter collide.


True, but similarly if your embeddings are any good they'll capture interesting associations between authors, topics and your search query. If you find any interesting author overlap results I'd be very interested!


Not exactly what I was looking for, but interesting nonetheless: https://arxivxplorer.com/?q=exotic+penis


Thank you!!


Are you talking about set and setting? What recreational drugs do you mean? I’m not finding the analogy but actually curious where you’re coming from.


I did start by disclaiming a hot take, so forgive my poetic license and unintentional lede burying.

What I'm trying to convey is a metaphorical association that describes moderation and overdoing it. I'm thinking about the articles I've read about college professors who are openly high functioning heroin users, for example.

Every recreational drug has different kinds of users: social drinkers vs abusive alcoholics, people who microdose LSD or mushrooms vs people who spend time in psych wards, people who smoke weed to relax vs people who go all-in on the slacker lifestyle. And perhaps the best for last: people who occasionally use cocaine as a stimulant vs whatever scene you want to quote from Wolf of Wall Street.

I am personally convinced that there are positive use cases and negative use cases, and it usually comes down to how much and how responsible they are.


Yeah, these are both extremely basic, great use cases for LLM-assisted programming. There's no difference between them; I wonder what the OP thinks the difference is.


Disagree. There is almost no decision making in converting to use i18n APIs that already have example use cases elsewhere. Building a frontend involves many decisions, such as picking a language, build system, dependencies, etc. I’m sure the LLM would finish the task, but it could make many suboptimal decisions along the way. In my experience it also does make very different decisions from what I would have made.


The AI will pick the most common technologies used for this purpose, which is both "good enough" and also what people generally do at scale (for this exact reason).


This was the promise of no-code: “All apps are CRUD anyway”, “just build for common use cases”, etc. It didn’t turn out to be true as often as predicted. If averaging people’s previous decisions were truly a fruitful approach, we’d have seen much stronger results from it before AI.


On the contrary, it turned out to be exactly as common as predicted, which is why you see so many people going "this AI assistant thing makes me 100% more productive". It's precisely those tasks. And they are handled in precisely this way by humans, too - throw whatever the popular stack of the day is at them. And sure, it results in inefficiencies and crappy code, but that code is "good enough" wrt what the customer wants it to do.

