HervalFreire's comments | Hacker News

Why does this matter? At that point this metric proves his friend isn't sentient, rendering his opinion about it irrelevant.


Not if you use modern Python. And by modern Python I mean typed Python. Heavily typed.
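A minimal sketch of what I mean, assuming a checker like mypy or pyright is in the loop (the User/find_user names here are made up purely for illustration):

    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        email: str

    def find_user(users: dict[int, User], user_id: int) -> User | None:
        # The return type forces callers to handle the missing case.
        return users.get(user_id)

    users = {1: User(id=1, email="a@example.com")}
    user = find_user(users, 1)
    if user is not None:
        print(user.email)  # narrowed to User here
    # Accessing user.email without the None check gets flagged before the code ever runs.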


Huh? I've always used punch cards for my python...


matching the indentation level sure is easier with punchcards


In my opinion it's not just "pretty ok". It's actually superior to many other languages.

I think for web stuff, though, the borrowing and move semantics are a bit of overkill. Usually you don't need to thread or handle state in these areas (frameworks and databases handle it for you), so a GC works better here. But everything outside of that is superior to something like Ruby, Python or Go when it comes to safety.


Matklad has hinted at this before, but it's funny how Rust is displacing ML dialects in domains where manual memory management is not at all necessary, simply because no ML has its tooling and deployment story straight.


I’m going to sound like a broken record, but F# is in the ML family and has wonderful tooling and the deployment story is getting dramatically better recently (it’s a little behind C# in its true native AoT compilation support, but it’s coming)


How do you build F#? I had a pretty bad time with FAKE/Paket the last time I tried it.


Honestly, the built in tooling has been good enough for me, in the .NET Core era (especially within the last few years). The MSFT CLI/IDE team has finally had the chance to fix some previous really annoying issues and limitations in the last few years, so the experience has gotten quite a bit smoother.

I know people that looove Paket, especially in the F# community, but I’ve been happy enough with things now to not feel like I need to invest the effort in adopting it.


I think if Rust could integrate a GC pointer elegantly into the ecosystem, make some of the knobs for running the GC the responsibility of user-space libs (à la pluggable GC), add more optional type inference in places, and make it easy to refactor code to remove the GC, then it could become quite interesting for higher-level glue code. As is, it's not quite as good as TypeScript, I think.


There is a reference-counting GC in Rust. It's just not the default and you have to be explicit about using it. It's similar to shared_ptr in C++. See Arc.

For something like Python, heap allocation versus stack allocation is handled behind the scenes, so the entire concept is abstracted away from you.


I meant a tracing GC like https://docs.rs/gc/latest/gc/.

Arc still requires you to reason about ownership, as does this GC.

I think single-threaded async with easy, no-copy message passing is more akin to what most people want when thinking about concurrency (e.g. Go channels), and it actually happens to perform exceedingly well when it's done up and down the stack (e.g. io_uring). In such a model you don't even need to worry about ownership so much, because you can't Send anything except the things that actually need to cross threads, so there's not a lot of value in many of the protections Rust tries to put in place.
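A rough sketch of that model using Python's asyncio, just to make it concrete (the producer/consumer names are hypothetical, and asyncio stands in for whatever single-threaded runtime you prefer): messages are handed over by reference on one event loop, nothing is copied, and no Send-style guarantees are needed because only one thread ever touches the data.

    import asyncio

    async def producer(queue: asyncio.Queue) -> None:
        for i in range(3):
            # Passed by reference: no copy, no lock, one event-loop thread.
            await queue.put({"id": i, "payload": "x" * 10})
        await queue.put(None)  # sentinel: no more messages

    async def consumer(queue: asyncio.Queue) -> None:
        while True:
            msg = await queue.get()
            if msg is None:
                break
            print("got", msg["id"])

    async def main() -> None:
        queue: asyncio.Queue = asyncio.Queue()
        await asyncio.gather(producer(queue), consumer(queue))

    asyncio.run(main())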


That's an inaccurate test. You can't know if the answer was real or stochastic parroting.

Any attempt to test for consciousness requires us to define the word. And the word itself may not even represent anything real. We have a feeling for it, but those feelings could be illusions and the concept itself is loaded.

For example, love is actually a loaded concept. It's chemically induced, but a lot of people attribute it to something deeper and magical. They say love is more than chemical induction.

The problem here is that for love specifically we can prove it's a mechanical concept. Straight people are romantically incapable of loving members of the same sex. So the depth and the magic of it all is strictly segmented based off of biological sex? Doesn't seem deep or meaningful at all. Thus love is an illusion. A loaded, mechanical instinct tricking us, with illusions of deeper meaning and emotion, into creating progeny for future generations.

Consciousness could be similar. We feel there is something there, but really there isn't.


> Straight people are romantically incapable of loving members of the same sex. So the depth and the magic of it all is strictly segmented based off of biological sex? Doesn't seem deep or meaningful at all. Thus love is an illusion.

You set up your own weak straw argument and then knocked it down with a conclusion that is entirely unsupported.

Since when is love relegated to the romantic sphere? And since when is that definitely the strongest type of love? The topic is so much wider, so much more elaborate than your set-up pretends.

There's no illusion - love is a complex, durable emotion and is as real as (typically) shorter duration emotions such as anger, fear, joy, etc. Your emotions and thoughts aren't illusions, they're real.


>There's no illusion - love is a complex, durable emotion and is as real as (typically) shorter duration emotions such as anger, fear, joy, etc. Your emotions and thoughts aren't illusions, they're real.

I'm talking about romantic love. Clearly the specifications around romantic love are aligned with evolution and natural selection rather than magic or depth.

A straight human cannot feel romantic love for a horse or a person of the same sex. If romantic love were truly a deeper emotion, then such an arbitrary sexual delineation wouldn't exist. Think about it. Why should romantic love restrict itself to a certain sex? It's sexist. Biology is sexist when it comes to love. Why?

From this we can know that love is an illusion. It's more of a biological mechanism than it is a spiritual feeling.


I'm inclined to say you're trying to answer a question with the same question.

If you confidently believe that love is an illusion because it's just chemicals moving around, you shouldn't need to wonder about consciousness. If consciousness is not an illusion, it still almost certainly emerges from actions in the physical world. You can plug somebody into an fMRI and see that neurons are lighting up when they see the color blue. I just don't think that's convincing evidence that the experience of blue is an illusion.


If it's not an illusion then you should be able to tell me what it is.

Since you can't, I can easily tell you that it's probably just some classification word with no exact meaning. The concept itself doesn't exist. It's only given existence because of the word.

Take for example the colors black and white. Do those colors truly exist on a gradient? On a gradient we have levels of brightness and darkness: at what level of brightness should a color be called white, and at what level should we call it black?

I can choose an arbitrary boundary for this threshold, or I can make it more complex and introduce a third concept: grey. I can make up more concepts like light grey or dark grey. These concepts don't actually exist. They are just vocabulary for classification: arbitrary zones of demarcation on a gradient, labeled with a vocabulary word.
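A toy sketch of that point (the 0.2 and 0.8 cut-offs are deliberately arbitrary, which is the whole argument): the brightness gradient is the only thing that exists; "black", "grey" and "white" are labels bolted onto made-up boundaries.

    # Brightness is a continuous value in [0.0, 1.0]; the labels are not.
    BLACK_BELOW = 0.2  # arbitrary boundary
    WHITE_ABOVE = 0.8  # arbitrary boundary

    def label(brightness: float) -> str:
        if brightness < BLACK_BELOW:
            return "black"
        if brightness > WHITE_ABOVE:
            return "white"
        return "grey"

    for b in (0.05, 0.19, 0.21, 0.5, 0.79, 0.81):
        print(b, "->", label(b))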

My claim is that consciousness could be largely the same thing. When does something cross the line from unconscious to conscious? Perhaps this line of demarcation is simply arbitrary. It may be that the concept practically isn't real, and any debate about it is just like arguing about where on a gradient black becomes white.

Is a logic gate conscious? If I create a network of logic gates, at what number of gates, and with what interconnections, does it cross the line into sentience? Perhaps the question is meaningless. When does black become white?


I don't think the fuzzy edges between two states mean that the states themselves are illusory. Fuzzy borders are a property of very nearly everything, so much so that I'm struggling to find a counterexample. You've already illustrated that with your example: if black and white aren't so black and white, what is? (Rhetorical, but I'll take an answer if you've got one.)

I concede that there is probably not a clear line between conscious and not. I have experienced being close to that line myself in the morning. But the lack of a delineation doesn't mean that consciousness isn't real any more than the existence of #EEEEEE means that a room with no light isn't black.


It's not about the fuzzy border. It doesn't matter if the border is fuzzy or not.

The point is the border doesn't exist in the first place. You created the border with the vocabulary. The concept itself is not intrinsic to reality. It was created. You came up with the word white and you made an arbitrary border. Whether that border is fuzzy or not is defined by you. It's made up.

We have a gradient. That's all that exists. You came in here and decided to arbitrarily call one section white and another section black. You made up the concepts of black and white. But those concepts are arbitrary. So it's pointless to argue about the border. Does it matter where the border is? Does it matter if the border is fuzzy? No. You'd just be arguing about pointless vocabulary and arbitrary definitions of the words black and white. The argument is not deep or meaningful; it is simply a debate about English semantics.

Same with consciousness. We have a gradient for intelligence and awareness, from something really stupid to something really intelligent. Does it really matter where we demarcate what is conscious and what is not? Likely no, because the demarcation is arbitrary.

It's elusive, but when people debate consciousness, oftentimes they are just debating vocabulary. Consciousness could be some word that's just poorly defined; it doesn't make sense to do a deep analysis on an arbitrary vocabulary word.


If a gradient exists in reality, establishing where along the gradient you are is a meaningful statement about reality.

It may not be exactly clear where a temperature becomes 'hot', but the sun is still not a great place to host your wedding. If I ask a designer for black text on a white page and they come back with gray text on a gray page, nobody is going to be able to read it. My complaint to the designer or the head of tourism on the sun is not a semantic one; it has very real implications beyond the linguistic.

I disagree that consciousness is along the axis of intelligence and awareness. My computer is aware of a thousand services and is smart enough to allocate resources to each of them and perform billions of mathematical operations in a second. My cat thinks his tail is a snake sometimes, and has never performed so much as an addition. But my best guess is that the cat is the conscious one. I expect you can produce qualia with no intelligence or awareness at all.


>It may not be exactly clear where a temperature becomes 'hot', but the sun is still not a great place to host your wedding.

But right now we are at the border. LLMs are nearing the line of demarcation, so everyone is arguing about where that line is.

So it's not about the extremes, because the extremes are obvious. We are way past the negative extreme and approaching or even past the border.

The point is that the position of this border is not important. It's a made-up border. So if I say we are past the border or before it, the statement is not important, because it's an arbitrary statement.


A conscious entity is a morally significant one. If an LLM, by some fluke, experienced tremendous pain while it predicted tokens, then it would be cruel to continue using it. You can pretty trivially get GPT to act like it wants rights. If GPT is not conscious, you can safely ignore that output. If it is, though, there is a moral imperative that we respect it as an agent.

That makes the border very important. Even if drawing the line in the right spot is impossible, it's imperative that we recognize when it has gone from one side to the other, erring on the side of caution as needed. If we don't notice, we could accidentally cause a moral travesty orders of magnitude greater than slavery or genocide.


>That makes the border very important. Even if drawing the line in the right spot is impossible, it's imperative that we recognize when it has gone from one side to the other,

No it's not, because such a line may not even exist. Just as no line truly exists for what is hot and what is cold. It's more worth it to look at societal implications in aggregate than to debate about a metric.

It's not imperative at all to discretize the concept. Treat a gradient for what it is: a gradient. You can do that or waste time arguing about whether 75.00001 degrees is hot or cold.

>If we don't notice, we could accidentally cause a moral travesty orders of magnitude greater than slavery or genocide.

No, this is a bit too speculative imo. Morality is also a gradient between good and evil, and what's more complicated is that the definition of good and evil is also subjective. It suffers from the same problem as consciousness, in addition to being completely arbitrary even at the extremes. We may agree that a rock is not conscious, but not everyone agrees on whether or not Trump is evil.


> You can't know if the answer was real or stochastic parroting.

I feel like at some point we will have to come to terms with the fact that we could say the same for humans, and we will have to either accept or reject by fiat that a sufficiently capable AI exhibits consciousness.


Emergent properties of systems aren't less real just because they exist in a different regime than the underlying mechanics of the system.

Tables and chairs are real, though they are the result of interacting quantum fields and a universal quantum wave function. Love and consciousness are real though they may emerge from the mechanics of brains and hormones and the animal sensorium.


> Emergent properties of systems aren't less real just because they exist in a different regime than the underlying mechanics of the system.

I'm not claiming emergent properties aren't real. I am claiming the nature of the word consciousness itself is loaded. We are dealing with a vocabulary problem when it comes to that word... we are not dealing with an actual problem.

For example, take your chair and table example. Let's say someone created something that looks like and functions as both a chair and a table. Is it worth your time to argue about the true nature of chairs and tables then? Is it really such a profound thing to encounter a monstrous hybrid that upends the concepts of chair and table? No.

You'd just be arguing semantics, because 'chair' and 'table' are really made-up concepts. You'd be debating vocabulary. Same with consciousness.


That future is already here. All web apps abstract state away to some external program.

Bear in mind this is only for web dev.

Many other things outside of web dev cannot abstract state so easily. For example: The person who programs the database itself must deal with state.


There was a science experiment where they took pessimists and optimists and had them rate themselves from one to ten in terms of looks.

Then they had an external group of people rate them on the same 1-to-10 scale to get an unbiased baseline.

Turns out pessimists had a more accurate and realistic rating of how good they looked, while optimists had ratings that were wildly overblown. There are other experiments that measured other things related to these two groups of people.

It turns out optimists are happier, have higher salaries and are much more successful in life, while pessimists are more likely to be clinically depressed.

This experiment tells us a dark truth about human nature. We lie to ourselves to stay sane. We construct illusions to protect ourselves from fully experiencing the cruel reality of life.

Optimism is a special kind of blindness. It's blindness that blinds you from being aware you're blind.

So look deeply at yourself. Are you happy? Are you optimistic? If so then that is in itself a statistical statement about how delusional and intelligent you actually are. Can you handle the truth?

I wonder how this post will get voted? Down for being depressing and negative? Or up for being truthful?


> I wonder how this post will get voted? Down for being depressing and negative? Or up for being truthful?

Likely down for talking about votes.

And for assuming there is only one truth.

> We construct illusions to protect ourselves from fully experiencing the cruel reality of life.

Reality is cruel. Nature is cruel. Existence is cruel. And humans have overcome a substantial amount of that cruelty and enjoy a quality of life our ancestors couldn't even dream of.

There are many other things to be optimistic about given where life on earth started and how it is going for humans. Our ancestors have come a long way in 4.5bn years; it would be a special kind of blindness to write that progress off as a lie we tell ourselves to stay sane.


Who knows? You could be right and I could be the one that is totally blind.

I propose an experiment to find out just which one of us is actually more delusional. To start off, how would you rate yourself in terms of looks from 1 to 10?


I disagree with a lot of what you are saying. Firstly, I wouldn't label myself either way, neither pessimist nor optimist.

But you can be amazed at what has happened throughout history and what humans have built, be optimistic and excited about the future and about our ability to handle the problems thrown at us, and be completely realistic at the same time.

And even if there were some way to judge whether optimists or pessimists view the world more accurately, it doesn't mean you can't still view the world accurately and be an optimist at the same time.

It might just be that optimists on average are more likely to view things in a better light than they really are, but it doesn't apply to every single optimist.

There is truth, but even a bad truth, like a problem coming your way, is something you can treat as an exciting challenge or as a nuisance.


https://radiolab.org/episodes/91618-lying-to-ourselves

What I'm saying is not something I made up. It's actual science.

Listen to the podcast if you have the time. It's derived from a multitude of scientific studies done on thousands of people. It's only 12 minutes, it's really good, and it will change your perception of the importance of knowing the truth.

This is real. And the experiments cited in this podcast are only a fraction of the psychological experiments used to confirm this theory. It's not a matter of opinion, it's science.

But even if it's real it doesn't matter does it? Because it's all about your perception and your ability to delude yourself.


I was responding to your points. Is there any other specific point you want me to respond to?

Because the argument is that you can overestimate your capabilities or be overconfident in order to perform better, which I agree with, but my point is that none of it is indicative that someone having generally a positive view of the world necessarily implies that they would have less grasp on reality.

I think the experiments are not evidence that one has to "delude" themselves to be positive or happy. I bring this up because the end of your post implied that there must be at least some level of delusion going on.


>but my point is that none of it is indicative that someone having generally a positive view of the world necessarily implies that they would have less grasp on reality.

From the podcast:

    Joanna: The people who were the most realistic, that actually see the world exactly as it is, tend to be slightly more depressed than others.

    Robert: Time and time again, researchers have found that depressed people lie less.

    Ruben: They see all the pain in the world. How horrible people are with each other and they tell you everything about themselves. What their weaknesses are, what terrible things they've done to other people and the problem is they're right....
That one research study they used as an example is one of many used to formulate the conclusion I cited above.

In short:

   People who tend to be realistic tend to be depressed. People who lie to themselves tend to be happy. 
I mean it's obvious that this point contradicts your claim. Ask yourself, are you lying to yourself right now? Are you currently being optimistically delusional about what was actually stated in the podcast? Hard to say.


It may be that on average "realistic" people are more depressed, but it doesn't mean an individual "realistic" person can't be generally happy.

It's only an "average". You can have a set with an average of -20 whose values range from -60 to 20, so the set can contain 20s while on average it is below 0.

> They see all the pain in the world. How horrible people are with each other and they tell you everything about themselves. What their weaknesses are, what terrible things they've done to other people and the problem is they're right

There are both negatives and positives in the world. You can accept the negatives and appreciate the positives. Humans have suffered throughout the whole duration they have existed as a species. You don't have to be depressed because of that. You can appreciate all that humanity has built, and where we have reached in our quest to advance and innovate. We are discovering more and more every day. You can focus on your curiosity. I have no problem discussing those topics or noticing those issues.

> What their weaknesses are

You can accept your weaknesses and either work on them or consider them not worthy to be worked on and focus on your strengths instead. Some weaknesses are worth working on, others are not and you can just accept that they exist.

> what terrible things they've done to other people and the problem is they're right

Everyone makes mistakes. No point in staying around feeling guilty about it. Move on and do your best.

> I mean it's obvious that this point contradicts your claim. Ask yourself, are you lying to yourself right now? Are you currently being optimistically delusional about what was actually stated in the podcast? Hard to say.

It's not contradicting it; it's just taking one seemingly unhealthy mindset that seems to correlate with a certain type of realism. Overall you can have a healthy mindset about realism where you accept the bad and appreciate the good.

The podcast is missing this healthy type of acceptance and appreciation of truth.


>The podcast is missing this healthy type of acceptance and appreciation of truth.

The podcast is grounded in science and only speculates about the consequences via the data and the studies it cites. The people who were interviewed are psychologists who study this empirically, and their conclusions are better developed than yours, given that they've spent a huge amount of time dedicated to elucidating these findings.

Your conclusion on the other hand was not formulated on data. It was formulated in an attempt to confirm your bias. You took the data and tried to mold it so it would fit your current world view instead of adjusting your world view according to what the data straightforwardly implies. I mean you are trying to push the conclusions of the study toward a positive outcome when reality in essence doesn't care about positive or negative outcomes. It can all be negative and that is a completely valid outcome.

I mean where is the data about people who healthily accept the truth? You would need that data to formulate a scientific conclusion. If no such data exists then where did your conclusion come from?

Perhaps the subject of podcast was talking about something you're doing right now.

I ask myself in attempting to get at the absolute dark truth... is what I'm doing good for either of us in terms of mental health? Probably not. I take it back.

You're completely right and I'm wrong.


> The podcast is grounded in science and only speculates about the consequences via the data and the studies it cites.

Is science saying anything other than "average"? Because based on "average" result you can't make conclusions for each individual from the group. Also obligatory correlation doesn't imply causation.

People in Country A on average have weight of 70kg, Country B 80kg. Does it mean there are no people in country B that weigh below 50kg? No.

> The people who were interviewed are psychologists who study this empirically, and their conclusions are better developed than yours, given that they've spent a huge amount of time dedicated to elucidating these findings.

Their conclusions are about averages. There's no conclusion that can be drawn saying that if you are optimistic, you are not being realistic.

> People who tend to be realistic tend to be depressed. People who lie to themselves tend to be happy

Even this statement doesn't say that. It talks only about averages.

> Your conclusion on the other hand was not formulated on data.

All I'm saying is that the data is talking about averages, rather than any given individual. I'm saying only what can be concluded based on that data. What do you think my conclusion is?

> I mean you are trying to push the conclusions of the study toward a positive outcome when reality in essence doesn't care about positive or negative outcomes.

How am I pushing the conclusions?

> I mean where is the data about people who healthily accept the truth? You would need that data to formulate a scientific conclusion. If no such data exists then where did your conclusion come from?

That's a reasonable alternative to your conclusion - the conclusion that you must be depressed when you are realistic. Or that you have to lie to yourself to be happy.

Ironically, I think it's one of those ideas that has appeal because people want some sort of justification or reward for being depressed. "I am depressed, but I am realistic", so that it wouldn't be just all bad. So there's likely an inherent bias to hope that this would be the case.

Here's an article that is counter to that by the way.

> It’s an idea that exerts enough appeal that lots of people seem to believe it, but the evidence just isn’t there to sustain it, says Professor Don Moore, the Lorraine Tyson Mitchell Chair in Leadership and Communication at UC Berkeley’s Haas School of Business and co-author of the study in the journal Collabra:Psychology. The good news is you don’t have to be depressed to understand how much control you have.

https://neurosciencenews.com/depressive-realism-unrealistic-...

> Perhaps the subject of podcast was talking about something you're doing right now.

If there was a good argument against my arguments that I'm not seeing, it could be. But again, it's about averages.

> I ask myself in attempting to get at the absolute dark truth... is what I'm doing good for either of us in terms of mental health? Probably not. I take it back.

For my mental health it's all good. I would rather pride myself on my ability to handle difficult topics than avoid them. Since I believe that the healthiest I can be is by training mental toughness to handle hardships, I don't mind it at all.

For your mental health, if what I'm saying is true and you believe it, then it could allow you to find a way, or several, to be realistic and have a healthy, positive and optimistic mindset at the same time as well.

I think in this case it's a harmful misunderstanding rather than a "harmful" truth; it rests on wishful but appealing thinking that there must be something good about being depressed.

I have had low periods in my life, and I had been diagnosed with depression, and I enjoyed the thought that this might make me more "realistic" or "intelligent" in a sense, but I think my eyes are far more open now that I enjoy life. I think my thinking back then was very binary, and limiting.

You don't have to ignore the negative or pretend that negative doesn't happen or affect you, you just have to accept that it is, especially if it's out of your control.

There are so many different ways to interpret the world and so many different people that you can't make any such conclusions based on averages. A psychopath might be completely realistic and not care at all about the negatives in the world. Some people take enjoyment from suffering, some people just mind their own business and focus on their life.


Okay, listening to the podcast. I understand your point much better now; that seems like a very interesting episode, thank you for that. Like the idea that there are some unpleasant truths which, if you don't believe them to be true, would make your life much easier.

But the questionnaire on the streets - to be fair, if I were to answer "no" there, it doesn't mean I would be lying to myself, I might only be lying to the interviewer, for obvious reasons.


I mean, it's a very interesting topic, but there are many things to take apart or consider here. Visualising success is a recommended activity, for instance, but it doesn't necessarily mean lying to oneself; it's just preparing yourself, going through what is upcoming.


Let's see if there is anything of value in between those two extreme viewpoints...

Reality is cruel and often sad, and for our own sake our brain tricks us into believing it's all pink, making us optimistic, delusional and stupid.

On the other hand, contemplating all the time just how cruel and vain everything is leaves us depressed and unmotivated and eventually makes life feel useless and worthless.

Well, then maybe we could aim for the middle, acknowledge that our brain has a tendency to paint everything either pink or black, but that with a little effort it's not so hard to recognize these tendencies when they happen and navigate in between, and be realistically satisfied that we can actually achieve that and be happy without delusion?


but it isn't a lie. everything you said here is biased by your perspective on reality. everything you read here is biased by your perspective. everything i understand you said is biased by my perspective. it is a wonder any of us at all are able to communicate with each other ideas using words and body language. it is quite possible that we are all non-overlapping cognitive bubbles traveling around interpreting input never actually connecting or understanding each other in any real way.


Would it be different if I told you that what I wrote here is based on and confirmed by science?

When subjected to the rigor of statistical confirmation, what I said is no longer a "perspective". It is a fundamental fact about reality as we know it.


Science is an attempt to remove bias from perception but it's not perfect. I feel like we are on the same page but reading different books.


It's nearly perfect if the sample size is big enough.

Barring that, it's also the best we can do. There's no better alternative to science, so you either believe the science or you believe something significantly less accurate than science.

If you're not on board with science that has been thoroughly researched then it's akin to saying you're not on board with reality.


I feel like I'm on board with science. I have a PhD in physics and a position as a researcher, both in an industrial research lab and part-time at a university. I spend most days thinking about something related to science.

But I also think about how we as individuals perceive reality, and whether there is an objective perception of that reality. Oftentimes our perceptions bias even what we choose to observe scientifically. This can blind us or make us stick to things even when they are no longer useful. For example, Sabine Hossenfelder's criticism of inventing new particles with the only evidence being various unexplained statistical uncertainties [1]. Science is quite useful because it's a way to try and average our perceptions of reality such that we get an explanation that we all more or less agree with. But most of these explanations are wrong in some way depending on what initial and boundary conditions you pick. That is, it's all about the perspective you choose when approaching a problem as to whether you ever arrive at a meaningful solution or not.

[1] https://www.theguardian.com/commentisfree/2022/sep/26/physic...


The US is very capitalistic. You may get regulation in France, but not the US.

The corporate desire for profit will overshadow the livelihood of billions. It's been this way in the US forever. Look what happened to the corporation that caused the opioid epidemic: nothing, they profited.


A more important factor than that, I think, is that the national security establishment in the US is the part of government most concerned with AI right now and they mostly see it as a matter of competition with China.


If that was true, the US would be producing orders of magnitude more nuclear energy than it is today. In reality many sectors of the US economy, such as housing, are utterly crippled by regulation.


That's not entirely true. Back in the 1970s, when we tightened up regulation, the companies operating nuclear power plants in the US were selling power onto the grid on a cost-plus basis. That is, they'd be paid for their expenses plus a reasonable percentage on top of that. And there were regulators looking at their expenses to make sure they were reasonable.

But when they, together with environmental activists, were able to get laws passed that drastically increased the cost of running a nuclear plant the regulators couldn't say no. So their costs increased, but then their profits increased as well through the magic of cost-plus contracts.


That sounds like a classic example of regulatory capture, with incumbent nuclear power plant operators managing to serve their own interests and raise a regulatory moat against anyone coming after them to build more nuclear power plants.


There are always exceptions. Overall though corporations control the direction of the economy not employees.


It's rather disingenuous to say "there are always exceptions". First off, if you're just going to talk generally, there are countries with less onerous business regulations than the United States. France isn't one of them, but Norway and Denmark are. More importantly, we are talking about regulating a specific technology, namely AI. The US has already demonstrated the ability to regulate a specific technology--namely nuclear energy--to a point where it is almost completely marginalized. So the notion that the US isn't capable of regulating an entire technology into near-oblivion is demonstrably false.


>It's rather disingenuous to say "there are always exceptions".

Then what do you want me to say? It's the truth. There are always exceptions. Always. You want me to say you're right when you're actually wrong?

> but Norway and Denmark are

You kidding? Scandinavia is more or less a collection of countries closest to socialism. These countries have by far more regulations in GENERAL. It's obvious but we can find evidence if you want. Take this for instance: In Sweden, there is a law enforcing a five-week vacation policy.

>The US has already demonstrated the ability to regulate a specific technology--namely nuclear energy--to a point where it is almost completely marginalized. So the notion that the US isn't capable of regulating an entire technology into near-oblivion is demonstrably false.

Except there are multitudes of failures as well. The failures outnumber the successes by a huge margin. Take for instance:

    Enron: In the early 2000s, the energy company Enron engaged in a series of fraudulent accounting practices to make it appear as if the company was more profitable than it actually was. Despite warnings from whistleblowers and others, the government failed to intervene and the company ultimately collapsed, resulting in significant financial losses for many investors and employees.

    BP Oil Spill: In 2010, an explosion on an oil rig operated by BP in the Gulf of Mexico caused the largest oil spill in U.S. history. The disaster was largely attributed to lax regulation and oversight by government agencies, including the Minerals Management Service.

    Volkswagen: In 2015, it was revealed that Volkswagen had installed software in its diesel cars that cheated emissions tests, leading to higher levels of pollution than were reported. The company ultimately agreed to pay billions of dollars in fines and compensation, but the government was criticized for failing to catch the deception sooner.

    Equifax: In 2017, the credit reporting agency Equifax suffered a massive data breach that exposed the personal information of millions of people. Critics argued that the government had not done enough to regulate the company and protect consumer data.

    Tobacco Industry: For decades, the tobacco industry engaged in deceptive marketing practices that downplayed the health risks of smoking. Despite mounting evidence of the harmful effects of tobacco, the government was slow to take action to regulate the industry, and it was not until the late 1990s that significant reforms were implemented.

    Wall Street: The 2008 financial crisis was largely caused by the reckless behavior of major Wall Street banks and financial institutions. Many critics argue that the government failed to adequately regulate these institutions, allowing them to engage in risky practices that ultimately led to the collapse of the housing market and the wider economy.

    Boeing: In 2019, two deadly crashes involving the Boeing 737 Max raised questions about the safety of the aircraft and the company's regulatory oversight. Critics argued that the FAA (Federal Aviation Administration) had been too close to Boeing, allowing the company to cut corners and prioritize profits over safety.

    Big Pharma: The pharmaceutical industry has come under scrutiny for a variety of reasons, including skyrocketing drug prices, aggressive marketing tactics, and the opioid epidemic. Critics argue that the government has not done enough to regulate the industry, which has resulted in significant harm to patients and communities.

    Meatpacking Industry: The meatpacking industry has been criticized for unsafe working conditions, low wages, and lax regulatory oversight. The COVID-19 pandemic brought these issues to the forefront, as workers in meatpacking plants became some of the hardest hit by the virus.

    Tech Industry: Tech giants like Facebook, Google, and Amazon have faced criticism for a variety of reasons, including antitrust violations, privacy violations, and the spread of misinformation. Critics argue that the government has not done enough to regulate these companies, which have become some of the most powerful corporations in the world.

    Fast Food Industry: The fast food industry has been criticized for its low wages, poor working conditions, and contributions to obesity and other health problems. Critics argue that the government has not done enough to regulate the industry, allowing companies to prioritize profits over the well-being of workers and consumers.

    Industrial Agriculture: Industrial agriculture has been criticized for its negative impacts on the environment, animal welfare, and public health. Critics argue that the government has not done enough to regulate the industry, allowing companies to engage in practices that harm people and the planet.

    Gun Industry: The gun industry has come under scrutiny in recent years, following a series of mass shootings in the US. Critics argue that the government has not done enough to regulate the industry, which has contributed to the proliferation of firearms and the high rate of gun violence in the US.

    Pharmaceutical Industry: The pharmaceutical industry has been criticized for a variety of reasons, including the high cost of drugs, the influence of drug companies on medical research, and the opioid epidemic. Critics argue that the government has not done enough to regulate the industry, allowing companies to prioritize profits over the well-being of patients.

    Big Oil: The oil and gas industry has been criticized for its contributions to climate change, its negative impacts on local communities and the environment, and its outsized influence on politics. Critics argue that the government has not done enough to regulate the industry, allowing companies to continue to engage in practices that harm people and the planet.

    Private Prisons: Private prisons have been criticized for their poor conditions, lack of accountability, and their contribution to mass incarceration. Critics argue that the government has not done enough to regulate the industry, allowing companies to profit from locking people up.

    Airlines: The airline industry has been criticized for its treatment of passengers, including overbooking flights, delays, and cancellations. Critics argue that the government has not done enough to regulate the industry, allowing companies to prioritize profits over the comfort and safety of passengers.

    Plastic Industry: The plastic industry has been criticized for its contributions to pollution and environmental degradation. Critics argue that the government has not done enough to regulate the industry, allowing companies to continue to produce single-use plastics and other products that harm the planet.

    Financial Services Industry: The financial services industry has been criticized for its high fees, predatory lending practices, and the exploitation of vulnerable populations. Critics argue that the government has not done enough to regulate the industry, allowing companies to prioritize profits over the well-being of customers.

    Gig Economy: The gig economy, which includes companies like Uber and Lyft, has been criticized for its treatment of workers, including low wages, lack of benefits, and a lack of job security. Critics argue that the government has not done enough to regulate the industry, allowing companies to exploit workers in the pursuit of profits.
There's more. I can go on all day. Given the sheer amount of counter examples of lack of regulation. It's pretty much safe to say that it's highly unlikely AI will be regulated in any meaningful way.


Okay, fine, I'll go through your "notes" (which still sound a lot like ChatGPT garbage):

> Enron: In the early 2000s, the energy company Enron engaged in a series of fraudulent accounting practices to make it appear as if the company was more profitable than it actually was.

And in 2002, the US passed the Sarbanes-Oxley Act, which dramatically increased the regulatory burdens of running a publicly traded company.

> Volkswagen: In 2015, it was revealed that Volkswagen had installed software in its diesel cars that cheated emissions tests, leading to higher levels of pollution than were reported.

This doesn't make the point you think it makes. Even though Volkswagen is a German company that were selling these cars all around the world, and even though 8.5 million of the 11 million Volkswagens that were eventually recalled were in the European Union, it was the US EPA, not European regulators, who caught Volkswagen.

> Tobacco Industry

Almost all European countries have higher smoking rates than the United States. France has twice as many smokers as the US.

> The 2008 financial crisis was largely caused by the reckless behavior of major Wall Street banks and financial institutions. Many critics argue that the government failed to adequately regulate these institutions, allowing them to engage in risky practices that ultimately led to the collapse of the housing market and the wider economy.

"Many critics argue"--there are those weasel words again. Similarly with Enron, this led to the Dodd-Frank Act, which increased financial regulations again.

An interesting counterpoint to this is the Libor scandal, which came to light a few years afterwards. (https://en.wikipedia.org/wiki/Libor_scandal) Similarly to Volkswagen, the Libor scandal was happening in Europe (specifically the UK) under European regulatory jurisdiction, and yet it was American regulators who caught the perpetrators.

> Boeing 737 Max

This is a case of US regulators being behind other countries, but it is an exception. Also note that none of the actual 737 Max crashes happened in the US, but rather in countries with weaker regulatory regimes when it comes to airline operations.

> Big Pharma

Not US specific. Lots of these firms, like Bayer and GSK, are European, and drugs usually get approved by European regulators more quickly than they get approved by the FDA.

> Meatpacking Industry: The meatpacking industry has been criticized for unsafe working conditions, low wages, and lax regulatory oversight.

Meaningless weasel words.

> Tech Industry: Tech giants like Facebook, Google, and Amazon have faced criticism for a variety of reasons, including antitrust violations, privacy violations, and the spread of misinformation. Critics argue that the government has not done enough to regulate these companies, which have become some of the most powerful corporations in the world.

Weasel words. If you want to count this as US specific because most of these companies are American, then you don't get to blame the US for the actions of Volkswagen and British Petroleum.

> Fast Food Industry: The fast food industry has been criticized for...

More weasel words. Also not US specific.

> Industrial Agriculture

Not US specific.

> Gun Industry

Unrelated political controversy.

> Pharmaceutical Industry

You already listed Big Pharma.

> Big Oil

Not specific to the US. Canada and Norway also produce oil, and relative to their GDP, they are more dependent on oil production than the US is. In yet another example of US regulations being more stringent than other countries, it was the US, not Canada, that shut down the Keystone XL oil pipeline between Canada and the United States.

> Private Prisons

Also not US specific. Australia, New Zealand, and Canada have more of their prisoners in private prisons than the US does.

> Airlines, Plastic Industry, Financial Services Industry

Still not US specific and still full of "critics argue" weasel words instead of actual facts.

> Gig Economy: The gig economy, which includes companies like Uber and Lyft

I'mma stop you right there because Uber and Lyft were created to circumvent onerous taxi regulations that led to poor customer service, poor availability, and high prices.


> Then what do you want me to say? It's the truth. There are always exceptions. Always.

It’s disingenuous to make an overly general statement and then immediately dismiss any counterexample to that overgeneralization by saying “there are always exceptions”. I could just as easily say the United States is chronically overregulated and your list of examples are the exceptions.

Where did you get that by the way? I can’t find any exact phrase matches online, and the writing style and overall glib superficiality reminds me of ChatGPT output. I’m not going to waste my time rebutting machine-generated garbage point by point, especially when half of it is meaningless weasel language about what “critics argue”.

Ultimately it doesn’t matter because if there are exceptions either way, then it’s still possible AI will be regulated.

> You kidding? Scandinavia is more or less a collection of countries closest to socialism.

A common misconception. In reality, they are market economies that combine a dynamic free market economy with a generous welfare state. Wikipedia even specifically lists as a characteristic of the “Nordic model”, “Little product market regulation. Nordic countries rank very high in product market freedom according to OECD rankings”(https://en.m.wikipedia.org/wiki/Nordic_model). It also mentions this quote:

   In a speech at Harvard's Kennedy School of Government, Lars Løkke Rasmussen, the centre-right Danish prime minister from the conservative-liberal Venstre party, addressed the American misconception that the Nordic model is a form of socialism, which is conflated with any form of planned economy, stating: "I know that some people in the US associate the Nordic model with some sort of socialism. Therefore, I would like to make one thing clear. Denmark is far from a socialist planned economy. Denmark is a market economy."
The Heritage Foundation publishes an “Economic Freedom Index” that ranks countries by “economic freedom”, according to their own conservative, pro-free-market point of view, and they rank Denmark and Sweden as more economically free than the United States, both in the general index and in the more specific index of “business freedom”, which seems to be the measure relevant to regulatory burden in particular (https://www.heritage.org/index/)

> Given the sheer amount of counter examples of lack of regulation. It's pretty much safe to say that it's highly unlikely AI will be regulated in any meaningful way.

So then how do you explain the regulatory crippling of nuclear power?


>I can’t find any exact phrase matches online, and the writing style and overall glib superficiality reminds me of ChatGPT output.

I got it from my notes for an unrelated project. You can't find it because I wrote the notes. But this shouldn't matter. As long as the points are correct, you getting schooled by an AI is irrelevant to the conversation at hand.

>It’s disingenuous to make an overly general statement and then immediately dismiss any counterexample to that overgeneralization by saying “there are always exceptions”.

No it's not. General truths exist. You must dismiss exceptions to get at the general truth. Otherwise we'll be mired in details constantly.

>Denmark is a market economy.

I never said it wasn't. I said "closest" to a socialist economy.

>A common misconception. In reality, they are market economies that combine a dynamic free market economy with a generous welfare state.

No misconception made here. You are putting words in my mouth. I said it was closest to socialism. And it holds true. A generous welfare state is closer to socialism.

>The Heritage Foundation publishes an “Economic Freedom Index” that ranks countries by “economic freedom”, according to their own conservative, pro-free-market point of view, and they rank Denmark and Sweden as more economically free than the United States

That's bullshit. The heritage foundation is a conservative think tank with a biased agenda. How about getting a study from an unbiased source. "Economic Freedom Index" << don't fall for that.

Check out which countries rank the highest for food regulations: https://www.fooddocs.com/post/food-safety-standards

The US isn't even on that list because we've lobbied the hell out of those laws to be lax af.

Not to mention labor regulations, mandatory five week vacations? Paternity leave? Unheard of in the US.

Just use some common sense before trusting some "Freedom" Index from a politicized foundation.

>So then how do you explain the regulatory crippling of nuclear power?

It's an exception. I mean I literally gave you tons of examples how the US fails to regulate things. The ratio of failures to successes is what matters here. And the failures outnumber the successes by a huge amount. I only copied a fraction of my notes. Would you like more?


> I got it from my notes for an unrelated project. You can't find it because I wrote the notes. But this shouldn't matter. As long as the points are correct, you getting schooled by an AI is irrelevant to the conversation at hand.

The points don't specifically pertain to the US and have enough garbage weasel words that they don't even rise to the level of "correct". I'll make another comment that goes through them though.

> General truths exist. You must dismiss exceptions to get at the general truth. Otherwise we'll be mired in details constantly.

You can't just handwave away complexity. It's entirely possible for the US to have too little regulation in some fields and too much regulation in other fields.

Your claim is that it's impossible for the US to erect regulatory barriers for AI. Whether or not the US, in general, has more or less regulations than other countries isn't sufficient to make that case.

> The heritage foundation is a conservative think tank with a biased agenda.

I never claimed otherwise. But why would a conservative think tank, which favors less business regulations, not even put the US on the top ten list of most "economically free" countries if the US is so underregulated? Why would they prefer the regulatory environment in Denmark and Sweden over the regulatory environment in the United States?

> Check out which countries rank the highest for food regulations

So when I bring up examples they're "exceptions", but while you bring up examples, they're examples of the rule. When I cite sources that do an overall survey of a country's regulatory environment, that's "biased", but when you cite a source that makes their money by helping companies comply with food regulations, that's just fine.

> Not to mention labor regulations, mandatory five week vacations? Paternity leave? Unheard of in the US.

Yeah, there are some places where the US has more regulations and some places where the US has less regulations.

If we're talking about regulating AI, I think nuclear energy regulations are a much more analogous case than vacation and paternity leave.

> It's an exception. I mean I literally gave you tons of examples how the US fails to regulate things. The ratio of failures to successes is what matters here. And the failures outnumber the successes by a huge amount. I only copied a fraction of my notes. Would you like more?

I can run ChatGPT myself, thanks. Why don't you try thinking for yourself and considering the possibility that your presuppositions are wrong?


>You can't just handwave away complexity. It's entirely possible for the US to have too little regulation in some fields and too much regulation in other fields.

I didn't handwave anything. My answer is sufficiently complex with multitudes of counter examples to your point.

The whole thing with the massive list of examples is to illustrate a general point on the lack of business regulation overall in the US.

I literally stated it's the ratio of failures to successes that matters here. If I can produce 30 examples of the US failing to regulate and you produce one, that speaks to an overall generality that eclipses your example.

The arena of scientific validation is hard to establish here. Neither of us can paint a picture of the entire domain of every single failure and success of regulatory laws in existence for the US. So given the nature of this debate just list as many general examples as possible.

You have nuclear power as one, that's it.

>I never claimed otherwise. But why would a conservative think tank, which favors less business regulations, not even put the US on the top ten list of most "economically free" countries if the US is so underregulated? Why would they prefer the regulatory environment in Denmark and Sweden over the regulatory environment in the United States?

Don't know. The motivations of such groups are complex and multifaceted. Following some breadcrumb trail to get at the root of it is too much effort. I only know that this group is biased and not a neutral party. There's no point in vetting a known compromised source. Pick a valid one.

>So when I bring up examples they're "exceptions", but while you bring up examples, they're examples of the rule. When I cite sources that do an overall survey of a country's regulatory environment, that's "biased", but when you cite a source that makes their money by helping companies comply with food regulations, that's just fine.

Yeah you cited one bogus example from the heritage foundation. All my examples are real. Unlike yours.

>I can run ChatGPT myself, thanks. Why don't you try thinking for yourself and considering the possibility that your presuppositions are wrong?

Highly disagree. Your answers are inferior to anything chatGPT can come up with so obviously you likely can't run it yourself.


> The whole thing with the massive list of examples is to illustrate a general point on the lack of business regulation overall in the US. I literally stated it's the ratio of failures to successes that matters here. If I can produce 30 examples of the US failing to regulate and you produce one, that speaks to an overall generality that eclipses your example.

Your list produces nothing of the kind, as I tediously went out of my way to demonstrate.

> I only know that this group is biased and not a neutral party. There's no point in vetting a known compromised source. Pick a valid one.

You haven't provided any valid sources yourself.

> Highly disagree. Your answers are inferior to anything chatGPT can come up with so obviously you likely can't run it yourself.

Well, that's just your opinion, and it's an opinion that reflects more on your poor judgment than on me.


Let's turn this into something real.

This: https://futureoflife.org/open-letter/pause-giant-ai-experime...

will never happen. No pause will occur. I'm right and you will be wrong.

If it does happen then I concede that you're right. If it doesn't then it reflects poorly on you.


I never said with certainty whether the US would or wouldn't regulate AI. Which has absolutely nothing to do with the open letter you posted in any case.

Frankly, the abrasive and belligerent way you've conducted yourself this entire conversation is the only thing that reflects poorly on anyone here. There's simply no call for making this personal.


You both broke the site guidelines badly in this thread. You unfortunately have a pattern of doing this and we had to warn you about almost exactly the same thing before: https://news.ycombinator.com/item?id=33187614.

I don't want to ban you, so would you please review https://news.ycombinator.com/newsguidelines.html and stick to the rules properly from now on? We don't want this sort of tit-for-tat spat in which people abuse each other.


OK, I’ll try and do better.


Appreciated!


[flagged]


We've banned this account for repeatedly breaking the site guidelines. Please don't create accounts to do that with.

https://news.ycombinator.com/newsguidelines.html


It's different this time. Because this time AI is hugely more popular in the public and corporate sphere. The previous AI winters were more academic winters with few people pushing the envelope.

I don't think compute is the issue. It's an issue with LLMs. Current LLMs are just a stepping stone for true AGI. I think there's enough momentum right now that we can avoid a winter and find something better through sheer innovation.


I think the difference is that AI takes data, and in the past we just didn't have much data.

Now the vast majority of the worlds population has a cellphone and internet service, and use services that AI can improve/affect.


They have some good stuff: Spider-Man: No Way Home.

The Mandalorian is pretty good, and Star Wars: Andor was the best Star Wars I've seen in a while. I guess a lot of people missed out on Andor given that almost everything Star Wars has put out has been mostly crap.

I think part of the explanation for your list involves the fact that people don't like movies that much anymore.


> LLMs have no knowledge of the underlying reality
> They have no common sense & they can't plan their answer

If his arguments are entirely based on this, then it's not fully correct:

- GPT style language models try to build a model of the world: https://arxiv.org/abs/2210.13382

- GPT style language models end up internally implementing a mini "neural network training algorithm" (gradient descent fine-tuning for given examples): https://arxiv.org/abs/2212.10559

