bitpush's comments

Damned if they do, damned if they don't.

You mean damned if they do but allow their own applications unrestricted access, and damned if they later change their mind under public outcry, without stating any reason or change of policy, while still hamstringing other applications that serve a similar purpose but can't get the same level of media coverage?

I mean, yes? This is exactly what happens when you put yourself in the position of a censor, especially as people's reliance on you grows and grows. It is fundamentally impossible to please everyone.

Doctors are doing surgery... to earn money?

Don't be reductive.


How is the choice of language the cause of anything complex/complicated?

Python and Rust (for instance) are both Turing complete, and equally capable.


Disappointed with the article, especially because it uses "think of the planet" as a weak argument against this technology.

> Firstly, there’s the environmental impact

Their own blog contributes to the climate crisis they are now crying about. Someone in a developing country could write a similar article saying "all these self-publishing technologists are making the climate crisis worse", and it would have a stronger point.

I say this without discounting the real environmental costs associated with technology, but LLMs / AI aren't uniquely problematic.

Your latest MacBook, iPhone, datacenter, SSDs... all have an impact.


Your argument seems to flatten everything so it's all the same - "everything has an impact" - but different things have different impacts, and each needs to be measured against its utility. Me ordering a coffee at my local cafe has an impact, but it's a good deal less than me driving from London to Edinburgh. The author's argument, as I read it and generally agree with it, is that we don't really need LLMs to get things done, but their use comes with a large environmental cost. I don't think that will stop their use, unfortunately. There's just too much capital behind them at the moment.

We don't really need LLMs, just as we don't really need the internet. We don't really need that blog, nor do we really need Netflix, porn or Facebook.

Individuals value things differently, so attempting to do society-wide prioritization is always going to be a reductive exercise.

For example: Your local cafe doesn't need to exist at all. You could still drink coffee; you'd just have to make it yourself. That cafe is taking up space, running expensive commercial equipment, keeping things warm even when there aren't customers ordering, keeping food cool that isn't going to be eaten, using harsh commercial chemicals for regular sanitization, possibly running inefficient cooling or heating due to heavy traffic going in and out the door, and so on and so forth.

Imagine the environmental impact of turning all cafes into housing and nobody driving to go get a coffee.


Again, this just feels like throwing your hands up in the air and saying "it's too hard to decide!" But we have to make decisions somehow if we're going to do anything.

Yes, that's the challenge of centrally planning an economy. You aren't going to be able to do it efficiently because you don't have every person's preferences.

If by "we're going to do anything" you mean presumably fiat power to ban LLMs, then you're better off using that fiat power to just put a sin tax on carbon emissions and letting people decide where they want to cut back.


OK, let's try to trick China and India into believing industrialization is lame and being poor is cool. That should buy us a couple of years.

Take one fewer hot shower a week and you've saved enough energy to power a lot of ChatGPT queries. Play one fewer hour of Minecraft. Turn off raytracing. Eat one fewer burger per month. All of those things would save more energy than forgoing a few ChatGPT conversations.

Their use does not come with a large environmental cost. The average American lifestyle has a “water footprint” of 1200 “bottles of water” per day. 10-50 ChatGPT queries == 1 bottle of water. If you decide to use ChatGPT but shorten your daily shower by a second or two, you will more than offset your total water usage increase.
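A rough back-of-envelope sketch of that offset claim, in Python. The shower flow rate and bottle size below are assumed figures for illustration, not numbers taken from the linked source:

    # Back-of-envelope check: how many seconds of shower time use as much
    # water as one ChatGPT query, given the "10-50 queries per bottle" range.
    # Flow rate and bottle size are assumptions, not sourced figures.
    BOTTLE_LITRES = 0.5            # assumed size of one "bottle of water"
    SHOWER_LITRES_PER_MIN = 7.5    # assumed flow of a typical shower head
    QUERIES_PER_BOTTLE = (10, 50)  # range quoted above

    litres_per_shower_second = SHOWER_LITRES_PER_MIN / 60  # ~0.125 L

    for queries in QUERIES_PER_BOTTLE:
        litres_per_query = BOTTLE_LITRES / queries
        seconds_equivalent = litres_per_query / litres_per_shower_second
        print(f"{queries} queries/bottle -> one query ~ "
              f"{seconds_equivalent:.2f} s of shower water")

Under those assumptions, a single query costs roughly 0.1-0.4 seconds of shower water, which is where the "shorten your shower by a second or two" comparison comes from.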

Thus LLMs don’t have to be that useful to be worth it. And if used in certain ways they can be very useful.

Source (with links to further sources): https://andymasley.substack.com/p/a-cheat-sheet-for-conversa...


Crossing your fingers and hoping for a sci-fi technology solution to be invented for climate change seems far more realistic at this point than expecting Taylor Swift to not dump 10 tons of CO2 into the atmosphere because she wanted to have lunch in France, so I'm putting all my eggs into that basket.

It's not a large environmental cost, though.

Correct. We have to order by impact and refrain from spending effort on all but the issue at the top of the list.

If we choose to act on any lower-priority issue, it is an example of hypocrisy that de-legitimizes the whole project.


If we had stayed farmers, we wouldn't have these issues. Most people would still be happy. Intelligence is both destruction and savior.

"Many were increasingly of the opinion that they’d all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans." - Douglas Adams

> farmers are 3.5 times more likely to die by suicide than the general population

https://www.fb.org/in-the-news/modern-farmer-farmers-face-a-...


Do you think that study would apply if we went 2000 years back? The world was kinda different.

[flagged]


[flagged]


(I thought it would be funny - in a self-referential way - to use our especially fine-tuned LLM to analyze bias in the OP in order to prove that LLMs can be useful by showing how they may be usefully applied to the OP itself.)

It's not particularly funny, and generated comments are discouraged on HN: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

People should realize that Apple plays favorites and lets its own apps use private APIs. Developers who bet on Apple platforms (iOS in particular) are at the mercy of Apple, and the company doesn't even try to play fair most of the time.


Apple has always considered their apps to be part of the OS.

It was only because of legal disputes that they were ever split off.

And it is shocking that Apple the OS company has a favourable relationship with Apple the app company. Never happens in IT.


No, but there's also something to be said about being stubborn.

Ask Nokia, BlackBerry and Kodak.


Better than Apple Intelligence, which put the cart before the horse.


The CEO of Google seems to be navigating the AI war pretty competently.

A few months back, everyone was writing eulogies for Google and how it fumbled AI. Now Gemini is one of the top models (if not the best), and it is extremely capable and price-competitive.

Meanwhile, Apple is still trying to wrap its head around AI. (That didn't stop them from running a splashy marketing campaign, though.)

So no, both companies are not the same.


I didn’t say the companies were the same.

Google has no idea how to survive in the new paradigm.


Jesus Christ. How much worse can they be? Willfully malicious.


Now we will have corporate training programs teaching us to say "inform the user" when we mean "scare the user", and then this won't show up in court. Just like Microsoft taught us never to say "crush the competition". They will do all the same things, though.


Then that training will get called into evidence (like Google’s prior anti-competition training was) to prove they were being intentionally anti-competitive, and rinse and repeat.


I’ve worked for one BigTech company in my career, and there was a list of banned words we couldn’t use in writing. The one word I remember we couldn’t say was “moat”.


I wonder if they implemented some filter that prevented messages containing those words from being posted. Return a 401 or 403.
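A minimal sketch of the kind of filter being speculated about, in Python with Flask; the endpoint, word list, and framework choice are all hypothetical:

    # Hypothetical word filter: reject messages containing banned terms
    # before they post, returning 403 Forbidden.
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    BANNED_TERMS = {"moat", "crush the competition"}  # illustrative only

    @app.post("/messages")
    def post_message():
        text = (request.get_json(silent=True) or {}).get("text", "")
        lowered = text.lower()
        if any(term in lowered for term in BANNED_TERMS):
            # 403: the request was understood but is refused
            return jsonify(error="message contains a restricted term"), 403
        # ...normally the message would be stored or forwarded here...
        return jsonify(status="posted"), 201

(Of the two codes mentioned, 403 is the better fit; 401 specifically signals missing or failed authentication.)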


You give way too much credit to the capabilities of Chime. (How do you say where you worked without saying where you worked?)


I received training like this over 20 years ago as a brand new engineer at my first job. The training was literally about how to avoid writing emails that could create legal liability due to careless language. Emails that came out during the Ford Pinto lawsuits were used as examples. It was all carefully worded to be about not being misperceived in court.

That job was at an avionics company working on safety-critical systems. They talked constantly about always placing safety first -- and from what I personally witnessed, at least at that time, the concern was genuine. There was a culture of taking the responsibility seriously, at least at the engineering level I interacted with.

Even acting in good faith, though, things happen. Planes crash (usually due to pilot error), and when they do everyone gets sued, and when that happens careless language represents a risk for the company, even if it did everything right.

Having moved on to consumer tech, I haven't seen similar cultures of doing the "right thing". That could be the modern world, my own cynicism, or just the differences inherent to industries where lives aren't explicitly on the line. Regardless, it's not at all hard to imagine that employees can be taught to self-censor in ways that won't themselves create more liability.


Honestly, I’ve seen way, waay, waaaaaaaaaaaaaay worse in internal comms in different companies. Not trying to defend Apple, but this just sounds like an average conversation to me.


I mean, that's just evidence that they really are doing illegal shit and should be punished for it.


Or that they are doing legal shit that has the appearance of illegal shit, and will get into a lot more trouble than they want if it ever goes to trial.

Nobody stops you from texting your friends about buying ten pounds of cocaine (when you actually meant confectioner's sugar), but putting it in writing may make your life more difficult if, by happenstance, you ever end up the defendant in a narcotics trial. Even if you are completely innocent.


Eh, every single group chat I've been a part of is littered with this kind of talk.


It's evidence of malfeasance; that's the topic at hand. Otherwise it's just whataboutism, right?


What's the association with Google? I ask this because you feature the Google logo prominently on the landing page.


We have users at Google who use Memex.


Just to be sure, same with Samsung, Harvard, UCLA? It means that someone once signed up with an email address from the organization? You can just do that?


In general, we can't see what users are doing, but we can see some things, like whether they upgrade to new releases. We only cite logos of users that are using it on a consistent basis.


Are you suggesting that people are committing code into the Google / Samsung / Salesforce codebases using Anthropic / Sonnet?


We can't see their activity other than that they have accounts and have used it.

In the case of Google, we've had folks from both PM and engineering. We've talked with our PM user, who has been using it for prototyping.

