uncomputation's comments

“Bans ByteDance” might be better wording.


Wow, this might be one of the worst PR decisions in recent history.


Certainly it is the equivalent of a kid yelling "I can play my music as loud as I want to!"

As I understand the proposed legislation, it would apply to many websites and not be directed at TikTok or China specifically. I wonder if there is a larger strategic interest for China if the US enacts this type of law? Maybe the blowback is entirely expected and is the actual desired response?


I was mistaken about the currently proposed legislation[0], which mentions ByteDance specifically and is broad enough to include any app from a "foreign adversary country", defined elsewhere.

It shall be unlawful for an entity to distribute, . . . a foreign adversary controlled application by . . .: [app store] or [internet hosting]

[...]

FOREIGN ADVERSARY COUNTRY.—The term “foreign adversary country” means a country specified in section 4872(d)(2) of title 10, United States Code.

USC Title 10 section 4872(d)(2) defines the adversaries as N. Korea, China, Russia and Iran. [1]

0. https://www.congress.gov/bill/118th-congress/house-bill/7521...

1. https://www.law.cornell.edu/uscode/text/10/4872


Except it's a lie. The modal did show up for some people, but it was closable via an 'X'.


> Except it's a lie. The modal did show up for some people, but it was closable via an 'X'.

Or was it a dark pattern?

The calling and hanging up aspect seems to indicate a lot of people didn't understand how to close it without calling.




> they don't refute that they did betray it

They do. They say:

> Elon understood the mission did not imply open-sourcing AGI. As Ilya told Elon: “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”

Whether you agree with this is a different matter but they do state that they did not betray their mission in their eyes.


The benefit is the science, nothing else matters, and having OpenAI decide what matters for everyone is repugnant.

Of course they can give us nothing, but in that case they should start paying taxes and stop claiming they're a public benefit org.

My prediction is they'll produce little of value going forward. They're too distracted by their wet dreams about all the cash they're going to make to focus on the job at hand.


I agree with your sentiment, but the prediction is very silly. Basically every time OpenAI releases something, they beat the state of the art in that area by a large margin.


We have a saying:

There is always someone smarter than you.

There is always someone stronger than you.

There is always someone richer than you.

There is always someone more X than Y.

This is applicable to anything: just because OpenAI has a lead now doesn't mean they will stay the X for long rather than the Y.


> The benefit is the science, nothing else matters, and having OpenAI decide what matters for everyone is repugnant.

OpenAI gets to decide what it does with its intellectual property for the same reason that a whole bunch of people are suing it for using their intellectual property.

It only becomes repugnant to me if they're forcing their morals onto me, which they aren't, because (1) there are other roughly-equal-performance LLMs that aren't from OpenAI, and (2) the stuff it refuses to do is a combination of stuff I don't want to exist and stuff I have a surfeit of anyway.

A side effect of (1) is that humanity will get the lowest common (moral and legal) denominator in content from GenAI from different providers, just like the prior experience of us all getting the lowest common (moral and legal) denominator in all types of media content due to internet access connecting us to other people all over the world.


> The benefit is the science, nothing else matters

Even if that science helps not-so-friendly countries like Russia?


OpenAI at this point must literally be the #1 target for every single big spy agency in the whole world.

As we saw previously, it doesn't matter much if you are a top-notch AI researcher; if 1-2 million of your potential personal wealth is at stake, it affects decision making (and it probably would affect mine too).

How much of a bribe would it take for anybody inside with good enough access to switch sides and take all the golden eggs out? 100 million? A billion? Trivial amounts compared to what we're discussing. And they will race each other into your open arms for such amounts.

Recently we have seen, for example, government officials in Europe betraying their own countries to Russian spies for a few hundred to a few thousand euros. A lot of people are in some way selfish by nature, or can be manipulated easily via their emotions. Secret services across the board are experts at that; it just works(tm).

To sum it up - I don't think it can be protected long term.


I'm a very weird person with money. I've basically got enough already, even though there are people on this forum who earn more per year than I have in total. My average expenditure is less than €1k/month.

This means I have no idea how to even think about people who could be bribed when they already earn a million a year.

But also, if AI can be developed as far as the dreamers currently making it real hope it can be developed, money becomes as useless to all of us as previous markers of wealth like "a private granary" or "a lawn" or "aluminium cutlery"[0].

[0] https://history.stackexchange.com/questions/51115/did-napole...


Wouldn't you accept a bribe if it's proposed as "an offer you can't refuse"?


Governments WILL use this. There isn't any real way to keep their hands off technology like this. Same with big corporations.

It's the regular people that will be left out.


> Even if that science helps not so friendly countries like Russia?

Nothing will stop this wave, and the United States will not allow itself to be on the sidelines.


They are totally closed now, not just keeping their models to themselves for profit purposes; they also don't disclose how their new models work at all.

They really need to change their name and another entity that actually works for open AI should be set up.


Their name is as brilliant as

“The Democratic People's Republic of Korea”

(AKA North Korea)


> everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...

everyone... except scientists and the scientific community.


Well, the Manhattan Project springs to mind. They truly thought they were laboring for the public good, and even if the government had let them, they wouldn't have wanted to publish their progress.

Personally I find the comparison of this whole saga (DeepMind -> Google -> OpenAI -> Anthropic -> Mistral -> ?) to the Manhattan Project very enlightening, both about this project and about our society. Instead of a centralized government project, we have a loosely organized mad dash of global multinationals for research talent, all of which claim the exact same "they'll do it first!" motivations as always. And of course it's accompanied by all sorts of media rhetoric and posturing through memes, 60 Minutes interviews, and (apparently) gossipy slap-back blog posts.

In this scenario, Oppenheimer is clearly Hinton, who’s deep into his act III. That would mean that the real Manhattan project of AI took place in roughly 2018-2022 rather than now, which I think also makes sense; ChatGPT was the surprise breakthrough (A-bomb), and now they’re just polishing that into the more effective fully-realized forms of the technology (H-bomb, ICBMs).


> They truly thought they were laboring for the public good

Nah. They knew they were working for their side against the other guys, and were honest about that.


The comparison is dumb. It wasn’t called the “open atomic bomb project”


Exactly. And OpenAI actually called it the "open atomic bomb project".


They literally created weapons of mass destruction.

Do you think they thought they were good guys because you watched a Hollywood movie?


Hmm, do you have some sources? That sounds interesting. Obviously there's always doubt, but yeah, I was under the impression everyone at the Manhattan Project truly believed that the Axis powers were objectively evil, so any action was justified. Obviously that sorta thinking falls apart on deeper analysis, but it's very common during total war, no?

EDIT: tried to take the onus off you, but as usual history is more complicated than I expected. Clearly I know nothing because I had no idea of the scope:

  At its peak, it employed over 125,000 direct staff members, and probably a larger number of additional people were involved through the subcontracted labor that fed raw resources into the project. Because of the high rate of labor turnover on the project, some 500,000 Americans worked on some aspect of the sprawling Manhattan Project, almost 1% of the entire US civilian labor force during World War II.
Sooo unless you choose an arbitrary group of scientists, it seems hard. I haven't seen Oppenheimer, but I understand it carries on the narrative that he "focused on the science" until the end of the war, when his conscience took over. I'll look into that more…


If you really think you're fighting evil in a war for global domination, it's easy to justify to yourself that it's important you have the weapons before they do. Even if you don't think you're fighting evil, you'd still want to develop the weapons before your enemies do, so they won't be used against you and threaten your way of life.

I'm not taking a stance here, but it's easy to see why many Americans believed developing the atomic bomb was a net positive at least for Americans, and depending on how you interpret it even the world.


The war against Germany was over before the bomb was finished. And it was clear long before then that Germany was not building a bomb.

The scientists who continued after that (not all did) must have had some other motivation at that point.


I kind of understand that motivation: it is a once-in-a-lifetime project, you are part of it, you want to finish it.

Morals are hard in real life, and sometimes really fuzzy.


On this note: HIGHLY recommend "Rigor of Angels", which (in part) details Heisenberg's life and his moral qualms about building a bomb. He just wanted to be left alone to perfect his science, and it's really interesting to see how such a laudable motivation can be turned to such deplorable, unforgivable (IMO) ends.

Long story short, he claimed he thought the bomb was impossible, but it was still a large matter of concern for him as he worked on nuclear power. The most interesting tidbit was that Heisenberg was in a small way responsible for (West) Germany's ongoing ban on nuclear weapons, which is a slight redemption arc.


Heisenberg makes you think, doesn't he? As the developer of Hitler's bomb (which was never a realistic thing to begin with), he never employed slave labour, for example. Nor was any of his work used during warfare. And still, he is seen by some as a tragic figure, and at worst as the man behind Hitler's bomb.

Wernher von Braun, on the other hand, got lauded for his contribution to space exploration. His development of the V2, and his use of slave labour in building it, was somehow just a minor transgression for the greater good, ultimately under US leadership.


To be reductionist - history is written by the victors.

https://www.smbc-comics.com/comic/status-2


Charitably I think most would see it as an appropriate if unexpected metaphor.


I think they thought it would be far better for America to develop the bomb than Nazi Germany, and that the Allies needed to do whatever it took to stop Hitler, even if that meant using nuclear bombs.

Japan and the Soviet Union were more complicated issues for some of the scientists. But that's what happens with warfare. You develop new weapons, and they aren't just used for one enemy.


What did Lehrer (?) sing about von Braun? "I make rockets go up, where they come down is not my department".


Don't say that he's hypocritical,

Say rather that he's apolitical.

"Once the rockets are up, who cares where they come down?

That's not my department," says Wernher von Braun.


That's the one, thank you!


So.. "open" means "open at first, then not so much or not at all as we get closer to achieving AGI"?

As they become more successful, they (obviously) have a lot of motivation to not be "open" at all, and that's without even considering the so-called ethical arguments.

More generally, putting "open" in any name frequently ends up as a cheap marketing gimmick. If you end up going nowhere it doesn't matter, and if you're wildly successful (ahem) then it also won't matter whether or not you're de facto 'open' because success.

Maybe someone should start a betting pool on when (not if) they'll change their name.


OpenAI is literally not a word in the dictionary.

It’s a made up word.

So the Open in OpenAI means whatever OpenAI wants it to mean.

It’s a trademarked word.

The fact that Elon is suing them over their name, when the guy has a feature called "Autopilot", which is not a made-up word and has an actual, well-understood meaning that totally does not apply to how Tesla uses it, is hilarious.


Actually, the Open[Technology] pattern implies a meaning in this context. OpenGL, OpenCV, OpenCL, etc. are all 'open' implementations of a core technology, maintained by non-profit organizations. So an OpenAI non-profit immediately implies a non-profit for researching, building, and sharing 'open' AI technologies. Their earlier communication and releases supported that idea.

Apparently, their internal definition was different from the very beginning (2016). The only problem with their (Ilya's) definition of 'open' is that it is not very open. "Everyone should benefit from the fruits of AI." How is this different from the mission of any other commercial AI lab? If OpenAI makes the science closed and only their products open, then 'open' is just a term they use to define their target market.

A better definition of OpenAI's 'open' is that they are not a secret research lab: they act as a secret research lab, but out in the open.


> An autopilot is a system used to control the path of an aircraft, marine craft or spacecraft without requiring constant manual control by a human operator. Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle, allowing the operator to focus on broader aspects of operations (for example, monitoring the trajectory, weather and on-board systems). https://en.wikipedia.org/wiki/Autopilot

Other than the vehicle, this would seem to apply to Tesla's autopilot as well. The "Full Self Driving" claim is the absurd one, odd that you didn't choose that example.


OpenAI by Microsoft?


Ilya may have said this to Elon but the public messaging of OpenAI certainly did not paint that picture.

I happen to think that open sourcing frontier models is a bad idea but OpenAI put themselves in the position where people thought they stood for one thing and then did something quite different. Even if you think such a move is ultimately justified, people are not usually going to trust organizations that are willing to strategically mislead.


What they said there isn't their mission; that is their hidden agenda. Here is the real mission they launched with, which they completely betrayed:

> As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world

https://openai.com/blog/introducing-openai


"Don't be evil" ring any bells?


Google is a for-profit, they never took donations with the goal of helping humanity.


They started as a defence contractor with a generous "donation" from DARPA. That's why I never trusted them from day 0. And they have followed a pretty predictable trajectory.


"Don't be evil" was codified into the S-1 document Google submitted to the SEC as part of their IPO:

https://www.sec.gov/Archives/edgar/data/1288776/000119312504...

""" DON’T BE EVIL

Don’t be evil. We believe strongly that in the long term, we will be better served—as shareholders and in all other ways—by a company that does good things for the world even if we forgo some short term gains. This is an important aspect of our culture and is broadly shared within the company.

Google users trust our systems to help them with important decisions: medical, financial and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating. We also display advertising, which we work hard to make relevant, and we label it clearly. This is similar to a newspaper, where the advertisements are clear and the articles are not influenced by the advertisers’ payments. We believe it is important for everyone to have access to the best information and research, not only to the information people pay for you to see. """


Yes, and there they explain why doing evil would hurt their profits. But a for-profit's main mission is always money; the mission statement just explains how they make it. That is very different from a non-profit, whose whole existence has to be described in such a statement, since they aren't about profits.


Nothing in an S-1 is "codified" for an organization. Something in the corporate bylaws is a different story.


This claim is nonsense, as any visit to the Wayback Machine can attest.

In 2016, OpenAI's website said this right up front:

> We're hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We'll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.

I don't know how this quote can possibly be squared with a claim that they "did not imply open-sourcing AGI".


In that case, they mean that their mission to ensure everyone benefits from AI has changed to one where only a few benefit. But it would support them saying something like "it was never about open data".

In a way, this could be more closed than a for-profit.


> but it's totally OK to not share the science...

That passes for an explanation to you? What exactly is the difference between OpenAI and any company with a product, then? Hey, we made THIS, and in order to make sure everyone can benefit, we sell it at a price of X.


The serfs benefitted from the use of the landlord's tools.

This would mean it is fundamentally just a business with extra steps. At the very least, the "foundation" should be paying tax then.


So, open as in "we'll sell to anyone" except that at first they didn't want to sell to the military and they still don't sell to people deemed "terrorists." Riiiiiight. Pure bullshit.

Open could mean the science, the code/IP (which includes the science), or pure marketing drivel. Sadly, it seems to be the latter.


“The Open in openAI means that [insert generic mission statement that applies to every business on the planet].”


Can the title be updated to include the “Meet”? Otherwise it’s a bit ominous…


The reporting on this study is conflating sentiment with plot structure, which misrepresents the study.

See, for example, Frankenstein. The sentiment rises slightly during the Creature's narration to Victor of his circumstances - likely the part about the French family he was "living"/stowing away with - but that's certainly not a "rise" in the sense that Oedipus rises to noble status. It's hard to interpret Frankenstein as anything other than the protagonist's consistent and tragic downfall ("riches to rags" in this analysis).

Not sure if that’s fundamentally a problem trying to extrapolate plot beats from sentiment alone, or a bit of less than accurate journalism.


TLDR:

Word standardizes text through:

- Document templates

- English as lingua franca

- Auto correct and completion

Whether or not you agree (I personally do not find it convincing) is up to you, but there's a summary, because the article is very long-winded.


The internet is not centralized. It is literally the largest decentralized network in existence. A network of networks with no central hub.


Oh so you haven't heard of Cloudflare?


Cloudflare, AWS, Azure, Google Cloud and Whatever-is-used-by-China: people make fun of the IBM guy who said "the world has a market for maybe 5 computers", but he was right all along...


Seems intermittent. Sometimes a refresh will work to display files and commit history.


You have it backwards on two counts. First count: the point of POSIX is that an OS vendor doesn't need to worry about compatibility with any other OS; just implement the POSIX interfaces. Second count: that is OP's exact point. It has never been done before, because too many OS vendors give POSIX very little thought. This helps realize the original vision of POSIX.


> First count is the point of POSIX is any OS vendor doesn’t need to worry about any other OS compatibility. Just implement POSIX interfaces.

I must be missing something. How are you getting from "just implement POSIX interfaces" to "compile once run everywhere"?

Wouldn't the former just promise that you could compile the same source on any POSIX-compliant OSs and get a binary that runs on that OS, on that architecture?


It's because cosmos implements POSIX in a portable way


POSIX was never about binary portability.


The binary portability is, in practice, not the most difficult feature, as long as the CPU arch is the same. It is also kind of a hack and IMO a nice-to-have feature but not as vital as true portability.

POSIX was not limited to the Unix world, the goal was for it to be implemented by all OS vendors, and it was partially done.


Doesn't matter. POSIX is "write once, compile everywhere," while this is "compile once, run everywhere." It could be that POSIX is easier to write for than Cosmo binaries, shifting the balance between them! I see them as just different endpoints of development.


Well said. In reality, it has been more about supporting common system level APIs (think read, write, fork, etc).
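To make the two models concrete, here is a minimal sketch. The program uses only POSIX interfaces; the build commands in the trailing comment are illustrative (cosmocc is Cosmopolitan's compiler driver, but the exact invocation here is my assumption, not something from this thread):

  /* hello_posix.c: uses only POSIX interfaces, no OS-specific calls */
  #include <unistd.h>

  int main(void) {
      const char msg[] = "hello from a POSIX program\n";
      /* write() to STDOUT_FILENO is specified by POSIX, so this source
         compiles unmodified on any compliant OS */
      (void)write(STDOUT_FILENO, msg, sizeof msg - 1);
      return 0;
  }

  /* POSIX, "write once, compile everywhere": rebuild on each target OS
         cc -o hello hello_posix.c
     Cosmopolitan, "compile once, run everywhere" (illustrative):
         cosmocc -o hello.com hello_posix.c
     The single hello.com output then runs across Linux, macOS, the BSDs,
     and Windows. */

That is the whole difference in workflow: POSIX standardizes the source-level contract, while Cosmopolitan standardizes the artifact itself.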


> Recommendations for Okta

Seems a bit haughty to publicly chastise another company. The tone of this article is a bit off-putting for me personally.


Not really. Here is one of the recommendations:

"Take any report of compromise seriously and act immediately to limit damage; in this case Okta was first notified on October 2, 2023 by BeyondTrust but the attacker still had access to their support systems at least until October 18, 2023."

It is good to call Okta out here, as it impacts Cloudflare's business as well, and if you can't fix a critical issue for 16 days, that is bad. Remember, we are talking about auth here. A breach impacts everything.


The SEC requires public disclosure basically immediately (within a few days, less than a week for sure) for public companies if a hack could harm their bottom line or trading value.

Hopefully they sink their teeth in and hand out a nice fine for this insane negligence, but I suspect Okta is in for a strongly worded letter.


[flagged]


See also, https://www.beyondtrust.com/blog/entry/okta-support-unit-bre...

> We raised our concerns of a breach to Okta on October 2nd. Having received no acknowledgement from Okta of a possible breach, we persisted with escalations within Okta until October 19th when Okta security leadership notified us that they had indeed experienced a breach and we were one of their affected customers.


Okay — and?

Do we have anything to suggest CloudFlare is factually wrong? — or was that just random conversational chaff from a brand new account distracting from the stunning incompetence of Okta in ignoring a breach for two weeks?

CloudFlare has more than enough reputation to make such an allegation — and Okta should be cut from any production usage.

Two weeks of failing to address auth compromise is unprofessional conduct by both Okta leadership and engineers.


To be fair, it's also the second time this has happened in two years. I don't mean Okta breaches in general; I mean it's the second time the support system has been compromised to get access to customer accounts.


First, Okta got hacked, and that hack allowed Cloudflare to get hacked. That is bad. Second, one of Okta's other customers reported the hack, and Okta either ignored the report or investigated it and did not find the hack. That is not good. Third, Cloudflare's response was professional. They asked a company providing a very important service to improve, because that company's product and practices endangered Cloudflare.

If Okta does not want its customers to publically complain about its actions, Okta needs to improve and do better. In particular, if someone says they have been hacked, listen to them and keep digging until you find the problem.



Yes. No one likes a sore winner. Providing your customers with assurances? Good. Providing tips to Okta customers? Sure. Publicly chastising another company you do business with? Unnecessary. That should be kept private. Just my opinion


I am responsible for spending several hundred thousand dollars a year with Cloudflare (out of my budget). I like this style. Don't want to get called out? Get your org fixed. This is somewhere between the third and fifth breach, depending on how you're counting.


Are you going to move your spend, or is having a 3rd party sling words good enough for you?


Edit: removed for subthread cleanup.


My bad... CF, not Okta.


This is the _second_ time this has happened, and it's clear Okta hasn't learned any lessons. So Cloudflare is right to call them out, and Okta should be embarrassed. What surprised me about this post is that they didn't say they were dropping them. Okta is a vulnerability to any organization.


> Publicly chastising another company you do business with? Unnecessary.

I think this makes more sense for strategic business partners. In the Cloudflare-Okta case I'd wager that their relationship is fairly transactional.


I am not sure I would call CloudFlare a “winner” in this case. They did not win anything by getting hacked.


They do win some points on having better security than a popular security product, considering Cloudflare's own security posture is also quite important to their customers.


Agreed - CloudFlare and its employees did outstanding work. My main point was that calling CloudFlare a sore winner did not make sense, because they did not win anything.

Also, I think CloudFlare’s blog post was very good.


Agree. CF won't have the inside scoop, and they use another company's statement to bolster their own thoughts. I wonder about the BeyondTrust statement too. This just doesn't sound right... and so far (although it could happen this week) there have been no SEC filings by Okta, which would have to happen if this were a bad situation for them.


But the recommendations are good?

