From my perspective it’s very convenient that all the new information and competition supports his existing priors that:
1) we need to do various forms of regulation to entrench US closed source market leaders, which happens to increase his company’s value
2) the best way towards improvement in models is not efficiency but continuing to burn ever increasing piles of money, which happens to increase his company’s value
The situation the author describes sounds like a summer day hike in subalpine terrain at Rainier. I’d do that in a cotton hoodie and jeans, and I recreate in that part of the Cascades 12 months a year. Our forecasts are some of the best in the world, and even if they miss, the solution to getting rained on in July on a day hike is to walk back to your car a little damp and disgruntled.
Your example actually makes his point almost exactly. The seven-day thru-hike is akin to when hiring a data engineering team and investing super heavily makes sense; the day hike is when you’re chatting with users and figuring out the domain. The “wrong” tools are less consequential at the start and when the stakes are lower.
I’m not trying to respond to the article; overall I agree with the advice. I’m just replying to this silly comment suggesting I’m more likely to need hiking pants in everyday life than on a hike, and that my estimation of the risk of those activities and locations is off.
Among certain folks in the industry there’s a broadly believed myth that it produces meaningful professional opportunities. In my observation this is very rarely true, and in many more cases it’s a professional liability.
This is highly dependent on what particular professional niche you're in.
Hollywood actors are now routinely cast (in part) based on how many social media followers they have, leading to a lot of weirdness around their agents and agencies buying followers, accusing other competing actors of buying followers, etc.
A bit closer to the HN crowd, there is definitely a correlation between speaking at conferences and having an "audience" and being a well-known figure online.
Similarly, the "build-in-public" indie folks are active on social media trying to break the "build it and they will come" cycle.
There are ways to participate and filter through the noise that are positive, but certainly a lot of negative as well.
Thank you for the kind words, but even good CVs get lost in the great recruitment filter all the time. It's great to have a platform where you can interact directly with real engineers at various companies.
Frameworks, compilers, and countless other developments in computing massively expanded the efficiency of programmers and that only expanded the field.
Short of genuine AGI I’ve yet to see a compelling argument why productivity eliminates jobs, when the opposite has been true in every modern economy.
> Frameworks, compilers, and countless other developments in computing
How would those have plausibly eliminated jobs? Neither frameworks nor compilers were the totality of the tasks a single person previously was assigned. If there was a person whose job it was to convert C code to assembly by hand, yes, a compiler would have eliminated most of those jobs.
If you need an example of automation eliminating jobs, look at automated switchboard operators. The job of human switchboard operator (mostly women btw) was eliminated in a matter of years.
Except here, instead of a low-paid industry we are talking about a relatively high-paid one, so the returns would be much higher.
A good analogy can be made to outsourcing for manufacturing. For a long time Chinese products were universally of worse quality. Then they caught up. Now, in many advanced manufacturing sectors the Chinese are unmatched. It was only hubris that drove arguments that Chinese manufacturing could never match America’s.
I’m only half joking when I’ve described ChatGPT-authored emails as a uniquely inefficient transport format.
Author feeds bullet points into ChatGPT which burns CPU cycles producing paragraphs of fluff. Recipient feeds paragraphs of fluff into ChatGPT and asks it to summarise into bullet points.
Then users turn around and feed the fluff into energy-hungry summarizers, because who has time for a five-paragraph email that could’ve been a three-point bulleted list?
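To make the round-trip concrete, here is a minimal sketch of what that loop looks like in code, assuming the OpenAI Python SDK; the model name, prompts, and example bullets are purely illustrative:

```python
# Sketch of the expand-then-summarize loop described above.
# Model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def expand(bullets: str) -> str:
    """Sender: turn terse bullet points into paragraphs of polite fluff."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Rewrite these bullet points as a polished, friendly email:\n{bullets}"}],
    )
    return resp.choices[0].message.content

def summarize(email_body: str) -> str:
    """Recipient: compress the fluff back into bullet points."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Summarise this email as short bullet points:\n{email_body}"}],
    )
    return resp.choices[0].message.content

bullets = "- report attached\n- numbers down 3%\n- meeting moved to Friday"
print(summarize(expand(bullets)))  # two API calls to approximately reproduce the input
```

Two model calls, a lot of wasted tokens, and the recipient ends up with roughly the bullet points the sender started from.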
It would be a net win if this normalized sending prompts instead of the usual communication; a prompt isn’t far off from the LLM output that emulates it, minus the useless waste of energy and space.
Similarly, MSFT recently announced the upcoming ability to clone your voice for Teams meetings. Extrapolating, in a few months there will be Teams meetings frequented only by avatars. At the end of the meeting, you get an email with the essential content. Weird times ahead.
The way I explained it when I taught English 101 to first-year university students: any substantive question can generate an answer of a paragraph or a life's work; in this assignment I expect you to go into this much depth. Of course, good expository writing is as to-the-point as possible, so the first hurdle for most students was eliminating the juvenile trick of padding out their prose with waffle to meet an arbitrary word-count. Giving a word-count to an AI seems (currently) to activate the same behavior. I've not yet seen an AI text that's better writing than a college freshman could be expected to produce.
> Of course, good expository writing is as to-the-point as possible, so the first hurdle for most students was eliminating the juvenile trick of padding out their prose with waffle to meet an arbitrary word-count.
This is the most beautiful sentence I’ve read today.
We won't get them unless we appreciate both teaching and the Humanities more than we do. I was good, but by no means the best (75th percentile, maybe?). I loved doing it, but changed careers to IT because I'd never have been able to support a family on what I was paid.
A culture which pays teachers poorly, treats them with disrespect ("those who can do..."), and relentlessly emphasizes STEM, STEM, STEM is one that's slowly killing itself, no matter how much shiny tech at inflated valuations it turns out.
I don't know how it is elsewhere, but where I grew up we had minimum word limits on pretty much all essays. It doesn't matter if you can say what you want to say in six sentences; you need 4,000 words or two pages or whatever metric they use.
Oh, of course. Length requirements are important, for the reasons I explained up-thread. However, if teachers accept any old thing ("padding") to reach the count, then that metric is arbitrary, which (justifiably!) makes students cynical.
If a student can say all that they want to say in six sentences then they need to learn more about the topic, and / or think through their opinions more deeply. Teachers who do not take that next step are bad teachers, because they are neither modeling nor encouraging critical thinking.
In some places the majority of teachers are themselves incapable of critical thinking, because those who are leave the profession (or the locale) for the reasons in my comment above.
[Edit to add]: Please note that I say bad teachers, not bad people. Same goes for students / citizens, as well. The ability to think critically is not a determinant of moral worth, and in some ways and some cases might be anti-correlated.
Don't get me started on college admissions essays. Rich kids pay other people to write them. Poor kids don't understand the class-markers they're expected to include. If AI consigns them to the dustbin of history it might be the first unalloyed good that tech ever does.
One of my favorite Mark Twain quotes comes from one of his correspondences: 'My apologies for such a long letter, I hadn't the time to write a short one.'
I never had that requirement outside the first years of school, where it’s more about writing practice than writing actual essays.
After that it was always “must be below X pages”.
“X words” is supposed to be a proxy for doing enough research that you have something to say with depth. A history of the world in 15 minutes had better cover enough ground to be worth 15 minutes, as opposed to 1 minute and then filler words. Of course, padding with filler is something everyone who writes such a thing and comes up a few words short does, but you are supposed to go find something more to say.
I eventually flipped from moaning about word-count minimums to whining about conference page limits, but it took a long, long time, well into grad school. The change came when I finally had something to say.
Write a comment explaining that the ostensibly simple task of writing a dozen or so thank you letters for those socks/etc you received for Christmas can, for some people, be an excruciating task that takes weeks to complete, but with the aid of LLMs can easily be done in an hour.
Sure, it's not just thank you cards though. I once had a job in which my boss assigned me the weekly task of manually emailing an automatically generated report to his boss, and insisted that each email have unique body text amounting to "here's that report you asked for" but stretched into three or four sentences, custom written each week and never repeating. The guy apparently hated to receive automated emails and would supposedly be offended if I copy-pasted the same email every time.
Absolutely senseless work, perfect job for an LLM.
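Something like the sketch below would have covered that whole chore, assuming the OpenAI Python SDK and Python's standard email tooling; the report path, addresses, and SMTP host are made-up placeholders:

```python
# Hypothetical automation of the "unique body text each week" report email.
import smtplib
from email.message import EmailMessage
from openai import OpenAI

client = OpenAI()

def fresh_body() -> str:
    """Ask the model for a short, non-repeating 'here's that report' note."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": "Write three or four friendly, businesslike sentences "
                              "introducing an attached weekly report. Vary the wording; "
                              "do not reuse stock phrases."}],
    )
    return resp.choices[0].message.content

def send_report(report_path: str = "weekly_report.pdf") -> None:
    msg = EmailMessage()
    msg["Subject"] = "Weekly report"
    msg["From"] = "me@example.com"      # placeholder addresses
    msg["To"] = "boss@example.com"
    msg.set_content(fresh_body())
    with open(report_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="application", subtype="pdf",
                           filename=report_path)
    with smtplib.SMTP("localhost") as smtp:  # placeholder SMTP host
        smtp.send_message(msg)
```

One cron job later, the boss's boss gets his bespoke prose and nobody has to write it.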
I’d prefer to receive no thank you than to receive an AI written one. One says you don’t care, the other says you don’t care but also want to deceive me.
There's a third case: they care, in which case you wouldn't be able to tell whether the card is "genuine" or AI-written; the two aren't even meaningfully different in this scenario.
You can tell if the thank-you card is hand-written. Most people don't have a pen plotter connected to their AI text generator to write thank-you notes.
Emailed or texted "thank you" notes don't count. At all.
Yes, but they could also generate the text and transcribe it onto paper by hand.
For many people, myself included, 90%+ of the work on things like thank-you notes, greetings, invitations, or some types of e-mails is in coming up with the right words and phrases. LLMs are a great help here, particularly with breaking through the "blank page syndrome".
It's not that different from looking up examples of holiday greetings on-line, or in a book. And the way I feel about it, if an LLM manages to pick just the right words, it's fine to use them - after all, I'm still providing the input, directing the process, and making a choice which parts of output to use and how. Such text is still "mine", even if 90% of it came from GPT-4.
I guess if someone went through the effort to prompt an LLM for a thank-you card note, and then transcribed that by hand to a card and mailed it, that would count. It's somehow more about knowing that they are making some actual effort to send a personalized thank you than it is about who wrote it.
But honestly I don't think "blank page syndrome" is very common for a thank-you card. We're talking about a few sentences expressing appreciation. You don't really have to over-think it. People who don't send thank-yous are mostly just being lazy.
My financial advisor sends out Christmas cards and Birthday cards. They are pre-printed stock cards. I don't even open them. I should tell him not to waste the money. If he even wrote just one sentence that expressed some personal interest, then they would mean something.
These kinds of messages are on the one hand just pro-forma courtesies, but on the other hand they require that some personal effort is invested, or else they are meaningless.
There's something painfully ironic and disturbing that the pseudo-Kolmogorov complexity of clickbait content, as judged "identical in quality" by an average human viewer, is arguably less than the length of the clickbait headline itself, and perhaps even less than the embedding vector of said headline!
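As a back-of-the-envelope size comparison for that last claim (the headline below is invented, and the embedding size assumes a typical 1536-dimensional float32 vector):

```python
# Rough size comparison: a clickbait headline vs. its embedding.
headline = "You Won't Believe What Doctors Found Inside This Ancient Egg"
headline_bytes = len(headline.encode("utf-8"))   # a few dozen bytes
embedding_bytes = 1536 * 4                       # ~6 KB for a float32 embedding of it
print(headline_bytes, embedding_bytes)           # e.g. 60 vs 6144
```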
It's always been this way; it's just that the rules of polite/corporate culture don't allow you to say what you actually mean - you have to hit the style points and flatter the right people the right way, and otherwise pad the text with noise.
If the spread of AI made it OK to send prompts instead of generated output, all it would do is finally allow communicating without all the bullshit.
Related, a paradox of PowerPoint: it may suck as a communication tool, but at the same time, most communication would be better off if done in bullet points.
I'm 26, so now a decade removed from the relevant age bracket here, but was one of the first waves to hit high school with a majority of students having smartphones.
There's been a palpable shift in peers (and myself) post-pandemic with regards to phones and social media in particular. A lot more emphasis is being placed on being present in person, and there's a lot more skepticism across the board towards phones/social media. Peers are starting to have kids, and almost none of them are posting pictures of their kids; when it's come up in conversation, they're doing everything they can to delay iPads/smartphones.
This is exactly how I’ve used copilot for over a year now. It’s really helpful! Especially with repetitive code. Certainly worth what my employer pays for it.
The general public has a very different idea of that, though, and I frequently meet people who are surprised the entire profession hasn’t been automated yet, based on headlines like this.
Just because you are using it like that doesn't mean it can't be used for the whole stack on its own, and the public, including laymen such as the Nvidia CEO and Sam, think that yes, we (I'm a dev) will be replaced. Plan accordingly, my friend.
Even last year's GPT-4 could make a whole iPhone app from scratch for someone who doesn't know how to code. You can find videos online. I think you are applying the ostrich method, which is understandable. We need to adapt.
Complexity increases over time. I can create new features in minutes for my new self-hosted projects; equivalent work on my enterprise projects takes days...
Making a simple app isn't evidence that it will replace people, any more than a 90%-good self-driving car is evidence that we'll get a 100%-good self-driving car.
Which industry would you pivot to? The only industry that is desperate for workers right now is the defense industry. But manufacturing shells for Ukraine and Israel does not seem appealing.
I was a hacker before the entire stack I work in was common or released, and I’ll be one when all our tools change again in the future. I have family who programmed with punch cards.
But I doubt the predictions from men whose net worth depends on the hype they foment.
> being shown super human intelligence behind closed doors
This seems to be the "crypto is about to replace fiat for buying day to day goods/services" statement of this hype cycle. I've been hearing it at least since gpt-2 that the secret next iteration will change everything. That was actually probably most true with 2 given how much of a step function improvement 3 + chatGPT were.
I don't see anyone asking the question that to me is the elephant in the room:
How are you preventing hallucinations and plainly false information being sent to buyers engaging in what's likely to be one of the largest financial decisions of their life? Beyond just leading to a bad UX, what's your legal exposure there?
You mention providing comps; there's a LOT of local knowledge that goes into that. How are you automating it? Other solutions I've seen, like Zillow, are pretty laughable. In some neighborhoods a 2-car garage is worth six figures despite not contributing to square footage, because pulling permits for a new one is basically impossible, just as one local example.
Our goal is not to rewrite every property description, but rather just to link you to the properties that we think you would like. If a property is mis-listed on Zillow, that would be the same issue as if a Realtor sent you that property as a recommendation.
Looking through the demo, I think almost all of the experience is being powered by the Zillow API, and a minor amount of summarization may be getting handled by an OpenAI API. I think the AI claims are largely, but not entirely, hype. It's still not very clear to me what advantages this gives me as a buyer vs. a traditional agent, or even being unrepresented and just using Zillow's product offerings. Maybe I'm missing the point, though.
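For a sense of how thin that layer can be, here's a rough sketch of a pipeline of that shape; fetch_listings() is a hypothetical wrapper around a listings source such as the Zillow API (not real endpoint names), and the summarization step uses the OpenAI chat completions API with an illustrative model name:

```python
# Speculative sketch of a "listings + LLM summarization" pipeline, not the product's actual code.
from openai import OpenAI

client = OpenAI()

def fetch_listings(zip_code: str) -> list[dict]:
    """Hypothetical wrapper around a listings source such as the Zillow API.
    Real endpoint names and response fields would differ."""
    raise NotImplementedError

def summarize_listing(listing: dict) -> str:
    """Have the model turn raw listing fields into a short buyer-facing blurb."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Summarize this listing for a home buyer in two sentences: {listing}"}],
    )
    return resp.choices[0].message.content

def recommendations(zip_code: str, max_price: int) -> list[str]:
    listings = [l for l in fetch_listings(zip_code) if l.get("price", 0) <= max_price]
    return [summarize_listing(l) for l in listings]
```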