Hacker News | cbg0's comments

Update, 4/11/26, 11:45 a.m. ET: Rockstar Games confirmed that a data breach has occurred. A spokesperson sent this statement to Kotaku:

“We can confirm that a limited amount of non-material company information was accessed in connection with a third-party data breach. This incident has no impact on our organization or our players.”


That's what I would say regardless of whether I was considering paying or not.

Do most people keep the notifications disabled for their messaging apps?

It's just a mental compartmentalization thing for me. When I want to get into Slack/Signal chatting mode or read messages, I load the app and look/interact. When I'm not doing that, I don't want to be bothered with messages. I'm already sacrificing a portion of my life to work-related tasks and to being in front of a computer for many hours; when I'm not in that mode I don't want to be interrupted. People who need to reach me in an emergency have other ways to get ahold of me.

But maybe _you_ are in the minority.

I disable notifications on every app that is not on the critical path to me earning a living. Notifications are largely unnecessary. Either you are actively engaged with something, in which case you didn't need the notification, or you are doing something else and don't need the distraction, in which case you didn't need the notification. Only my employer gets a right to demand my time during work hours, which is why notifications are enabled during work hours for work apps.

We as a society have gotten way too comfortable expecting every single person to be available at all times to provide us some kind of immediate response. Let people live. If I'm hiking through the woods with my camera doing bird photography, even if you're my best friend you can wait until I get back to my car and manually check my messages, I don't need a notification. If it's an emergency, dial my number and call me, which will make my phone ring. Novel concept, I know.


Signal notifications are the #1 thing in the critical path for me earning a living. Isn’t this normal in our industry?

Okay, well you should probably have them enabled then. For me, Signal is for personal messaging. My work messages are mostly Slack, Webex, and Teams.

Nope.

Personally, I have multiple messaging apps. I have notifications on for work slack, which is high signal, and I have notifications off for personal discord which is noisy and low priority.

Back then things were centered around "can we even do this?" and now it's more of "how do we keep this running more than 5 minutes?".

My impression back then from those profs was that it (fusion) would be inevitable but you do have to think long term, really long term. I'm old enough now (55) to understand that mentality.

I'd put money on something useful fusion related happening within the next 10 years or perhaps 20. I'm not up on the current state of experiments etc but it will happen.

AGI? - lol!


AFAIK superconductors are a major limiting tech. But we are slowly getting better ones, both by discovering more and by learning to mass produce superconducting wire.

With superconductors you can make magnetic bottles.

There’s also some interesting inertial confinement work happening. There the limiters are both confinement and the efficiency of the driver. Look up MagLIF for a hybrid magneto-inertial approach under study.


That's exactly what a clanker would say. ^/s

I don't think there's currently better value than GitHub's $40 plan, which gives you access to GPT5 & Claude variants. It's pay-per-request, so not ideal for back-and-forth, but great for building complex features on the cheap compared to paying per token.

Because GH is accessing the API behind the scenes, you should face less degradation when using Sonnet/Opus models compared to a Claude subscription.

Keep a ChatGPT $20 subscription alongside for back-and-forth conversations and you'll get great bang for buck.
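As a sketch of why per-request billing can come out cheaper for long agentic runs, here's a toy comparison. Every price and count below is a made-up placeholder, not actual GitHub or Anthropic pricing:

```python
# Back-of-envelope comparison of flat per-request billing vs per-token API
# billing for one agent session. All numbers are hypothetical placeholders.
def session_cost_per_request(requests, price_per_request):
    # Flat billing: one prompt sent = one request, output length is free.
    return requests * price_per_request

def session_cost_per_token(output_tokens, price_per_mtok):
    # Metered billing: cost grows with every token the agent generates.
    return output_tokens / 1_000_000 * price_per_mtok

flat = session_cost_per_request(25, 0.04)        # 25 prompts at $0.04 each
metered = session_cost_per_token(400_000, 15.0)  # 400k tokens at $15/Mtok
print(f"per-request: ${flat:.2f}, per-token: ${metered:.2f}")
# -> per-request: $1.00, per-token: $6.00
```

The gap widens the more tokens a single request causes the agent to emit, which is exactly the "building complex features" case; for short back-and-forth turns the flat per-request price can instead work against you.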


I'm still paying for the $10 GH Copilot plan but I don't use it because:

  - context is aggressively trimmed compared to CC obviously for cost saving reasons, so the performance is worse
  - the request pricing model forces me to adjust how I work
These alone make the $60/month savings not worth it for me.

I like the VS Code integration, and the MCP/LSP usage sometimes surprised me compared to the dumb grep from CC. Ironically, VS Code is becoming my terminal emulator of choice for all the CLI agents: SSH/container access, the automatic port mapping, etc. are more convenient than tmux sessions for me. So Copilot would be ideal for me, but it's tweaked to be a budget, broad-scope tool rather than a tool for professionals who would pay to get work done.


You can use your GH subscription with a different harness. I'm using opencode with it, it turns GH into a pure token provider. The orchestration (compacting, etc.) is left to the harness.

It turns it into a very good value for money, as far as I'm concerned.


But you still get charged per turn, right? I don't like that because it impacts my workflow. When I was last using it I would easily burn through the $10 plan in two days just by iterating on plans interactively.

Honestly I'm not sure; I'm on my company's plan. I get a progress bar vaguely filling up, but no idea of the costs or billing under the hood.

But you still get the reduced context-window.

Disagree entirely.

GHCP at least is transparent about the pricing: hit enter on a prompt = one request. CC/Codex use some opaque quota scheme where you never really know if a request will be 1, 2, or 10% of your hourly max, let alone weekly max.

I've never seen much difference from context ostensibly being shorter in GHCP; all of the models (from any provider) lose the thread well before their window is full, and aggressive autocompaction seems to be a pretty standard way to help with that, which CC/Codex do frequently.


> I've never seen much difference from context ostensibly being shorter in GHCP; all of the models (from any provider) lose the thread well before their window is full, and aggressive autocompaction seems to be a pretty standard way to help with that, which CC/Codex do frequently.

Then we've had wildly different results. Running CC and GH Copilot with Opus 4.6 on the same task, the results out of CC were just better; likewise for Codex and GPT 5.4. I have to assume it's the aggressive context compaction/limited context loading, because tracking what Copilot does, it seems to read way less context and then misses stuff other agents pick up automatically.


Is your source code worth only $40 for them to train their models on?

https://www.techradar.com/pro/bad-news-skeptics-github-says-...


This is of course not a problem for business accounts.

We are not allowed to use anything other than our company-provided GHCP credentials due to the data retention clause in our contracts, i.e. they are not allowed to use our data.


Considering how much data they already have from everything that's on GitHub, I doubt you would make a dent boycotting their AI product.

And don't you think they're going to realize soon that it's also pretty good at "doing penetration testing" for your company when it's already trained on your company's source code?

It's already more than "pretty good": https://www.anthropic.com/glasswing

Google $20/mo plan has great usage for Claude Opus. Last time I used it, around Feb, it felt basically unlimited.

Agree, but that was Feb. Not now; I cancelled mine on the 7th. Claude Opus via Gemini gives you just a few prompts, then it locks you out for another week.

So, you basically tried it a century ago...

This is not a good analogy.

Large corporations have been downsizing on QA and CS roles since before the LLM era. For many of those companies the lack of proper QA leads to more problems for users which compounds the lack of available CS staff. It's called either enshittification or maximizing shareholder value, can't remember which.

Why not both? ;)

One of the things I'm always looking at with new model releases is long-context performance, and based on the system card it seems like they've cracked it:

  GraphWalks BFS 256K-1M

  Model:    Mythos    Opus      GPT5.4
  Score:    80.0%     38.7%     21.4%

Data source:

https://www-cdn.anthropic.com/53566bf5440a10affd749724787c89...

(Search for “graphwalk”.)

If true, the SWE-bench performance looks like a major upgrade.
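For context, the BFS task behind a GraphWalks-style eval is trivial to compute classically; the challenge for the model is attending to an edge list scattered across hundreds of thousands of tokens. A hypothetical sketch of how the reference answer could be computed (the function name and toy graph are mine, not the benchmark's):

```python
# Sketch of a GraphWalks-style reference computation: embed a large edge
# list in the prompt, ask the model which nodes are reachable within k
# hops of a start node, and score its answer against a classical BFS.
from collections import deque

def bfs_within_k(edges, start, k):
    """Nodes reachable from `start` in at most k hops (excluding start)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
    seen, frontier = {start}, deque([(start, 0)])
    reachable = set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # neighbors would be k+1 hops away
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                reachable.add(nxt)
                frontier.append((nxt, depth + 1))
    return reachable

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "e")]
print(sorted(bfs_within_k(edges, "a", 2)))  # -> ['b', 'c', 'e']
```

The real eval presumably uses graphs with thousands of edges spread over 256K-1M tokens of context, so the score mostly measures whether the model can still retrieve arbitrary edges late in the window.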


Huh, I don’t know what “long context performance” means exactly in these tests, so completely anecdotally: in my experience with GPT 5.4 via Codex CLI vs Claude Code with Opus, GPT 5.4 seems to do significantly better in long contexts, I think partly due to some special context compaction stored in encrypted blobs. On long conversations, Opus in Claude Code will for me lose memory of what we were working on earlier, whereas one of my Codex chats is already at >1B tokens and is still very coherent and remembers things I asked of it at the beginning of the convo.

This isn’t talking about compaction. This refers to performance as the model is loaded with 500k to 1m tokens.

Ah, thanks, makes sense, I’ll read more about this

This seems to be similar to gpt-pro; they just have a very large attention window (which is why it's so expensive to run). The true attention window of most models is 8096 tokens.

What's the "attention window"? Are you alleging these frontier models use something like SWA? Seems highly unlikely.

Well, attention is a matrix at the end of the day, and its memory cost scales quadratically with context length; a dense 1M-token attention matrix would need more memory than any single accelerator can hold. They maybe have larger windows, such as 16k to 32k, but you can just look at how the GLM models work for more information.

Deepseek is the frontrunner in this technology afaik.
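For scale, the quadratic growth is easy to sanity-check with quick arithmetic. A sketch (fp16 scores and per-head accounting are my assumptions; production kernels in the FlashAttention style avoid materializing this matrix at all, which is how long contexts get served):

```python
# Memory needed to materialize a dense seq_len x seq_len attention score
# matrix, per head and per layer. bytes_per_el=2 assumes fp16/bf16 scores.
def attn_matrix_bytes(seq_len, bytes_per_el=2):
    return seq_len * seq_len * bytes_per_el

for n in (8_192, 32_768, 1_000_000):
    gib = attn_matrix_bytes(n) / 2**30
    print(f"{n:>9} tokens -> {gib:,.2f} GiB per head per layer")
```

At 8k tokens the matrix is a fraction of a GiB, at 32k it is a couple of GiB, and at 1M it is on the order of terabytes per head per layer, so nobody serves long contexts by holding the full matrix in memory.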


source on the 8096 tokens number? i'm vaguely aware that some previous models attended more to the beginning and end of conversations which doesn't seem to fit a simple contiguous "attention window" within the greater context but would love to know more

Well, 8096 is just the first number that came to my mind; obviously frontier models have 32k or above, but essentially they have a layer which "looks" at a limited view of the entire context window: {[1m x 3-4 weights] attention layer to determine what is actually important} -> {all other layers}

Reading a bunch of posts related to Claude Code, some folks voice genuine upset about rate limits and model intelligence, while others seem very upset that they can't get their fix because they've reached the five-hour limits. It's genuinely concerning how addictive LLMs can be for some folks.

I think the social aspect is underreported. I think this applies even for people using Claude Code and not just those treating an LLM as a therapist. In other words, I wonder how many of these people can't call their doctor to make an appointment or call a restaurant to order a pizza. And I say this as someone who struggles to do those things.

People claim that DoorDash and other similar apps are about efficiency, but I suspect a large portion is also a desire to remove human interaction. LLMs are the same. Or, in actuality, to create a simulacrum of human interaction that is satisfying enough.


It's reflecting the value we get from it, relative to the cost of continuing if we switch to the API pricing. It is genuinely upsetting to hit the limits when you face a substantial drop in productivity.

Imagine being an Uber driver and suddenly having to switch to a rickshaw for several hours.


The extremists want you to believe that, but the EU is an economic alliance, not a federal republic. Being pro-EU is usually anti-isolationist, but it isn't always anti-nationalist.

> EU is an economic alliance

lol, that ship sailed a long time ago. It's certainly not a full federal republic, but it's a lot closer to one than a mere "economic alliance".


> the EU is an economic alliance, not a federal republic

The line between those two things in the case of the EU is awful blurry.

The Espace Léopold issues laws that are binding on member nations, wields significant power over trade and fiscal policy, and mandates open borders between member nations. These are hardly the features of a purely economic treaty organisation.

