old_man_cato's comments | Hacker News

A lot of engineers underestimate the learning curve required to jump from an IDE to the terminal. Multiple generations of engineers were raised on IDEs. It's really hard to break that mental model.


Yes.

A simple example is Git. Many insist on using standalone GUI/IDE panels for a simple fetch or push rather than just using the terminal.
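For what it's worth, the terminal versions are a couple of one-liners. A minimal sketch, assuming the usual "origin" remote and a "main" branch (substitute your own):

  git fetch origin        # download new commits and refs; your working tree is untouched
  git push origin main    # publish your local main branch to the remote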


It's pretty important! You could make a case that there's not much else to talk about. I don't love that fact and I wish it would go away but when billionaires are talking about trying to build something that will replace everyone, that's kind of a big topic!


First, you pay a human artist to draw a pelican on a bicycle.

Then, you provide that as "context".

Next, you prompt the model.

Voila!


Oh, and don't forget to retain the artist to correct the increasingly weird and expensive mistakes made by the context when you need to draw newer, fancier pelicans. Maybe we can just train product to draw?


This hits too close to home.


How to draw an owl.

1. Draw some circles.

2. Prompt an AI to draw the rest of the fucking owl.


And then the AI doesn't handle the front-end caching properly for the 100th time in a row, so you edit the owl and nothing changes after you press save.




Hire a context engineer to define the task of drawing an owl as drawing two owls.


I'll be sure to remind myself of that next time I wish I had more time to spend watching my daughter grow up. Sure, I may miss some seminal moments, but the raise I get at the end of the year is just as good.


No sense in responding, people. This person can't be saved. Move on.


What approach is that?

Some people like AI. They should use it, talk about why they like it and use products that leverage it.

Other people don't like it. They should avoid using it, talk about why they don't like it and not use products that leverage it.

Each side's enthusiasm for their perspective can be shared for the purposes of convincing others that theirs is the correct perspective.

That all sounds pretty fine to me?


> Other people don't like it. They should avoid using it, talk about why they don't like it and not use products that leverage it.

How do you avoid reading AI slop, and AI ads, on the internet? Could you explain?

Also: AI is now further killing publishing (and "creators"), which had massively expanded through the internet, first through blogging and then Facebook, Twitter, YouTube, TikTok, ..., by allowing the platform owners to destroy the reach real people have, burying them in AI slop. How do you avoid that?


Dehumanization might be the wrong word. It's certainly antisocial technology, though, and that's bad enough.


I believe that our socializing is the absolute most fundamentally human aspect of us as a species.

If you cut off a bird's wings, it can't bird in any real meaningful sense. If you cut off humans from others, I don't think we can really be human either.


There are a lot of incredibly offended kiwi birds out there now.


And I think a lot of people would agree with you.


Sometimes I feel like I'm losing my mind with this shit.

Am I to understand that a bunch of "experts" created a model, surrounded the findings of that model with a fancy website replete with charts and diagrams, that the website suggests the possibility of some doomsday scenario, that its headline says "We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution." WILL be enormous. Not MIGHT be. That they went on some of the biggest podcasts in the world talking about it, a physicist comes along and says yeah, this is shoddy work, and the clapback is "Well yeah, it's an informed guess, not physics or anything"?

What was the point of the website if this is just some guess? What was the point of the press tour? I mean are these people literally fucking insane?


No, you're wrong. They wrote the story before coming up with the model!

In fact, the model and technical work have basically nothing to do with the short story, aka the part that everyone read. This is pointed out in the critique, where titotal notes that a graph widely disseminated by the authors appears to be generated by a completely different and unpublished model.


https://ai-2027.com/research says that:

AI 2027 relies on several key forecasts that couldn't be fully justified in the main text. Below we present the detailed research supporting these predictions.

You're saying the story was written, then the models were created and the two have nothing to do with one another? Then why does the research section say "Below we present the detailed research supporting these predictions"?


Yes, that's correct. The authors themselves are being extremely careful (and, I'd argue, misleading) in their wording. The right way to interpret those words is "this is literally a model that supports our predictions".

Here is the primary author of the timelines forecast:

> In our website frontpage, I think we were pretty careful not to overclaim. We say that the forecast is our "best guess", "informed by trend extrapolations, wargames, ..." Then in the "How did we write it?" box we basically just say it was written iteratively and informed by wargames and feedback. [...] I don't think we said anywhere that it was backed up by straightforward, strongly empirically validated extrapolations.

> In our initial tweet, Daniel said it was a "deeply researched" scenario forecast. This still seems accurate to me, we spent quite a lot of time on it (both the scenario and supplements) and I still think our supplementary research is mostly state of the art, though I can see how people could take it too strongly.

https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...

Here is one staff member at Lightcone, the folks credited with the design work on the website:

> I think the actual epistemic process that happened here is something like:

> * The AI 2027 authors had some high-level arguments that AI might be a very big deal soon

> * They wrote down a bunch of concrete scenarios that seemed like they would follow from those arguments and checked if they sounded coherent and plausible and consistent with lots of other things they thought about the world

> * As part of that checking, one thing they checked was whether these scenarios would be some kind of huge break from existing trends, which I do think is a hard thing to do, but is an important thing to pay attention to

> The right way to interpret the "timeline forecast" sections is not as "here is a simple extrapolation methodology that generated our whole worldview" but instead as a "here is some methodology that sanity-checked that our worldview is not in obvious contradiction to reasonable assumptions about economic growth"

https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...


This quote is kind of a killer for me: https://news.ycombinator.com/item?id=44065615. I mean, if your prediction disagrees with your short story, and you decide to just keep the story because changing the dates is too annoying, how seriously should anyone take you?


Ok, yeah, I take the point that neither illustration obviously preceded the other; they're likely the coincident result of the same worldview.

I don't think it changes anything but thanks for the correction.


Correct. Entirely.

And I'm yuge on LLMs.

It is very much one of those things that makes me feel old and/or scared, because I don't believe this would have been swallowed as easily, say, 10 years ago.

As neutrally as possible, I think everyone can agree:

- There was a good but very long overview of LLMs from an ex-OpenAI employee. Good stuff, really well-written.

- It rapidly concludes by hastily drawing a graph of "relative education level of AI" versus "year", drawing a line from high school 2023 => college grad 2024 => phd 2025 => post-phd 2026 => agi 2027.

- Later, this gets published by the same OpenAI guy, then the SlateStarCodex guy, and some other guy.

- You could describe it as taking the original, cutting out all the boring leadup, jumping right to "AGI 2027", then writing out a too-cute-by-half, way-too-long geopolitics ramble about China vs. the US.

It's mildly funny to me, in that yesteryear's contrarians are today's MSM, and yet they face ~0 concerted criticism.

In the last comment thread on this article, someone jumped in to discuss the importance of more "experts in the field" contributing, meaning psychiatrist Scott Siskind. The idea is that writing about something makes you an expert, which leads us to tedious self-fellation like Scott's recent article letting us know LLMs don't have to have an assistant character, and how he predicted this years ago.

It's not so funny in that, the next time a science research article is posted here, as is tradition, 30% of the comments will be claiming science writers never understand anything and can't write, etc. etc.


Thank you for this comment, it is exactly my impression of all of this as well.


The point? MIRI and friends want more donations.


Well, yeah. Obviously.


The image of a bunch of children in a room gleefully playing with their computers is horror-movie stuff, but because it's in a white room with plants and not their parents' basement with the lights off, it's somehow a wonderful future.

Karpathy and his peer group are some of the most elitist and antisocial people who have ever lived. I wonder how history will remember them.


It's early days. I agree with your point that the "vision" of the future laid out by tech people doesn't have much of a chance of becoming (accepted) reality, because it's necessarily a reflection of their own inner world, largely devoid of importance and of interaction with other people. Prime example: see the metaverse. Most of us don't want to replace the real world with a (crappy) digital one; the sooner we build things that respect that fundamental value, the sooner we can build things that actually improve our lives.


Did you not have the computer room open to Flash games and the like over lunch time? Competitive 4-player bmtron was a blast way back when: https://www.games1729.com/archive/


I did. I also had basically unlimited access to pornography, and I saw more than one video of someone having their head severed. But yeah, I played a lot of computer games. That was fun.


I thought that video was generated. Everything about it seemed off.


"I’m still a computer scientist, an academic, a straight-ticket Democratic voter, a liberal Zionist, a Jew, etc. (all identities, incidentally, well-enough represented at LessOnline that I don’t even think I was the unique attendee in the intersection of them all)"

Not incidental!

