Hacker News | TheGrognardling's comments

This is a crucial and salient point. Call me an optimist, but I'm encouraged that as technological advancement has progressed throughout human history, the speed of our responses has accelerated in step with the shortened timelines: steam and mechanization, electricity and mass production, telecommunications and media, digital information, and now artificial intelligence have each seen faster response times than the revolution before.

I think short-term suffering, or at the very least disruption (as we're seeing), is essentially inevitable. But with all of these preemptive frameworks being implemented, or at least discussed (though discussion alone isn't nearly good enough, of course), on unprecedented turnaround times, I really don't foresee a techno-dystopia. Then again, perhaps that's just wishful thinking.

Quite honestly, I think a pragmatic place to start, outside of theology and moral philosophy, is to require AI development to adhere to a consortium of standards outlined by governments and implemented by boards within each industry - much like what we see in many engineering professions in the US and other countries.


No, because it’s built on the false premise that the inequality following the industrial revolution ever stopped.[1]

It’s easy for us to be optimists and shrug nonchalantly about the short-term (?) suffering when we don’t face the worst or even median pain that these changes bring. Very strong “you will suffer but that’s a sacrifice I’m willing to make” vibes.

[1] https://news.ycombinator.com/item?id=43953142


There are a lot of good points here, from multiple vantage points, on the question of how imminent AGI is - and whether it's even viable at all, metaphysically or logistically.

I personally think the conversation, including in the post itself, has swung too far toward how AGI can or will reshape the ethical landscape around AI, however. We really ought to concern ourselves with addressing and mitigating the effects AI already HAS brought - both good and bad - rather than engaging in excessive speculation.

That's just me, though.


That’s an intentional misdirection, and an all too common one :(


I certainly don’t dispute the empirical validity of the study's findings - but there are important nuances to consider. I'm more naturally attuned to languages, in terms of language-learning and reading, than to mathematics, yet I've also found myself drawn to mathematical and theoretical linguistics. I also love programming.

It wasn’t until high school, when I tested into the highest math class the school offered, that I began to unlock (with some initial struggle) the logical and procedural reasoning specific to mathematics. I had always done well in math, but never explicitly went above and beyond, despite hints of aptitude in the arithmetic competitions my school would hold and that sort of thing. I just think my brain works well for both the linguistic aspects of programming (more naturally) and the computational problem-solving aspects. Presumably there are plenty of individuals with strengths in both cognitive modes, even while being more naturally attuned to one than the other.

Perhaps this reflects a cognitive profile with natural strengths in both "brains", or maybe it highlights the limitations of the article's potentially narrow definitions of "language" and "math", suggesting a more complex intellectual landscape.

Interesting findings nonetheless.


> I certainly don’t dispute the empirical validity of the findings from the study

You absolutely should. Tiny sample size and poor statistical methods; it's p-hacking, plain and simple.


Okay, yeah - fair point. I admittedly didn't look too closely at the article before posting this - and I'm not too statistically-minded in many respects. But upon further investigation, yeah, you seem to be right. I think this article is really just promotional material for something.


It's encouraging to me that as AI becomes more lifelike, people are becoming more and more resistant to humanlike AI 'avatars' and the like - echoing Kevin Kelly's sentiment that widespread societal adoption of new technologies is usually slow, or in some cases never happens. It seems like we're heading toward a crypto-bubble scenario in many respects: we'll find out where AI is genuinely useful and where it's just bullshit.


Honestly, I'm pretty encouraged by all of the legislative and organizational efforts to draw clear lines - e.g., watermarking to label whether something is AI-generated - as well as by industry efforts to protect livelihoods, specifically in the creative space, where human intentionality and feeling are still essential. We've seen, are seeing, and will see cultural and societal acceptance of, and backlash against, one thing or another, but I'm confident that we will adapt. Pushback, thanks to the Web itself, is already pretty substantial among artists and even other AI researchers; by contrast, regulation of the early internet, which lacked anything like the Web's capacity to organize that pushback, was far slower to materialize. I remain optimistic that we will find the niches where AI is needed, where it isn't, and where it is detrimental.


While I know there have been plenty of scathing essays, backlash among various communities, etc., do you have some concrete examples of the clear lines being drawn and the legislation that gives you this optimism?

Maybe the progress you’re describing has escaped me because of the sheer speed this is all unfolding, but it feels like all I’ve heard is lots of noise, while AI companies continue to hammer hosted resources across the Internet to build their next model, the US government continues to claim they’ll use AI to solve problems of waste and fraud, companies like Shopify claim they won’t hire anyone unless it can be proven that AI cannot do the job, and an increasing % of the content I encounter is AI slop.

Maybe this is all necessary for a proper backlash to form, and I definitely want to become more aware of the positives anywhere I can find them. I’m not an AI doomer, but haven’t yet found the optimism you describe.


The EU AI Act, while I personally find it to be overreach, has certain provisions that I agree with, and it's encouraging given that every major platform has a huge userbase in the EU. The first US Executive Order on AI following the announcement of the AI Action Plan is surprisingly rigorous while still encouraging innovation, especially given all of the drama around federal agencies as of late. Creative industries are increasingly drawing clear lines on where AI use is and isn't acceptable, especially among individual studios and unions following the SAG-AFTRA strikes of 2023-2024. And all of this is without even considering the profound advancements in education, biotech, and healthcare.

This is very easy to lose sight of amid the rapid advancements, but it's important. Certain companies, like Anthropic, have safety approaches I agree with more - more thoughtful, with clearly outlined scaling policies (such as the latest Responsible Scaling Policy, effective March 31 of this year) - versus the vaguer safety promises from companies like OpenAI and Google. Websites such as https://www.freethink.com have wonderful essays espousing techno-humanism, making compelling arguments that AI will be a progressively beneficial force for humanity rather than a detriment to it.

Yes, there WILL be growing pains, as there were with the internet and the World Wide Web. But I am confident that we will adapt. There is no better time in history to be alive than right now.

