This assumes that all areas of research are bottlenecked on human understanding, which is very often not the case.
Imagine a field where experiments take days to complete, and reviewing the results and doing deep thought work to figure out the next experiment takes maybe an hour or two for an expert.
An LLM would not be able to do 24/7 work in this case, and would only save a few hours per day at most. Scaling up to many experiments in parallel may not always be possible, if you don't know what to do with additional experiments until you finish the previous one, or if experiments incur significant cost.
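To make that concrete, here's a rough back-of-envelope sketch of the bottleneck argument (all numbers are made up for illustration): if the experiment itself dominates the cycle time, automating the "thinking" step barely shortens it.

```python
# Rough illustration: when the experiment dominates wall-clock time,
# removing the human "thinking" hours barely speeds up iteration.
# All figures below are hypothetical.

def cycles_per_year(experiment_days, thinking_hours):
    cycle_hours = experiment_days * 24 + thinking_hours
    return 365 * 24 / cycle_hours

human = cycles_per_year(experiment_days=3, thinking_hours=2)   # expert reviews results
ai    = cycles_per_year(experiment_days=3, thinking_hours=0)   # instant, 24/7 analysis
print(f"{human:.1f} vs {ai:.1f} cycles/year")  # ~118.4 vs ~121.7, roughly a 3% speedup
```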
So an AGI/expert LLM may be a huge boon for e.g. drug discovery, which already makes heavy use of massively parallel experiments and simulations, but may not be so useful for biological research (perfect simulation down to the genetic level of even a fruit fly likely costs more compute than the human race can currently provide), or for research that involves time-consuming physical processes, like climate science or astronomy, both of which need to wait periodically to gather data from satellites and telescopes.
> Imagine a field where experiments take days to complete, and reviewing the results and doing deep thought work to figure out the next experiment takes maybe an hour or two for an expert.
With automation, one AI could presumably run a whole lab's worth of experiments in parallel. Not to mention, it would be more adept at creating simulations that obviate the need for some types of experiments, or at least reduce the likelihood of dead-end experiments.
Presumably ... the problem is this is an argument that has been made purely as a thought experiment. Same as gray goo or the paper clip argument. It assumes any real world hurdles to self improvement (or self-growth for gray goo and paper clipping the world) will be overcome by the AGI because it can self-improve. Which doesn't explain how it overcomes those hurdles in the real world. It's a circular presumption.
What fields do you expect these hyper-parallel experiments to take place in? Advanced robotics aren't cheap, so even if your AI has perfect simulations (which we're nowhere close to) it still needs to replicate experiments in the real world, which means relying on grad students who still need to eat and sleep.
Biochemistry is one plausible example. DeepMind made huge strides in protein folding, satisfying the simulation part, and in vitro experiments can be automated to a significant degree. Automation is never about eliminating all human labour, but about how much of it you can eliminate.
To be honest it's just a thing from financial models inside the industry. When you plan how many copies you expect to sell to turn a profit, you have to factor in the refund rate as well.
Another reason is that games on Steam can be, and often are, refunded after quite a long time has passed. E.g. people buy lots of games during holidays or sales and then refund them when they decide not to play them. Sometimes refunds happen even after a year, e.g. if a player can't get your game to run on their device.
But yeah, it makes sense not to include it in the math for a single sale that was not refunded.
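For illustration, here's a minimal sketch of the kind of planning math described above; the price, cut, and refund rate are hypothetical numbers, not Valve's actual figures.

```python
# Back-of-envelope planner: how many copies must sell (net of expected refunds)
# to cover a fixed development cost. All figures are hypothetical.

def copies_to_break_even(dev_cost, price, store_cut=0.30, refund_rate=0.08):
    """Gross copies needed so that revenue, after the store cut and
    expected refunds, covers dev_cost."""
    net_per_copy = price * (1 - store_cut)   # what reaches the developer per kept sale
    kept_fraction = 1 - refund_rate          # share of sales that are not refunded
    return dev_cost / (net_per_copy * kept_fraction)

print(round(copies_to_break_even(dev_cost=100_000, price=20)))  # roughly 7,800 copies
```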
Ah, I see - thank you for the context! If you don't mind one more question - do you have to leave a substantial amount of cash in the Steam account in case of very late refunds? Or does Steam just send you an invoice or something?
As a developer you don't have "cash on a Steam account". Valve just does automatic payouts to your bank with a delay of around a month.
As for refunds, AFAIK if you get a bunch of refunds later they just get subtracted from future sales. But the games I worked on didn't have refund spikes, so no clue how Valve handles situations with mass refunds.
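Purely as an illustration of the "subtracted from future sales" mechanic described above (this is an assumption about the behaviour, not Valve's actual accounting), a refund spike in one month can be modelled as a negative balance carried against later payouts:

```python
# Illustrative-only model of "refunds come out of future payouts".
# Not Valve's actual accounting; numbers and behaviour are assumptions.

def monthly_payouts(monthly_net_sales, monthly_refunds):
    """Yield each month's payout, carrying any negative balance forward."""
    carry = 0.0
    for sales, refunds in zip(monthly_net_sales, monthly_refunds):
        balance = carry + sales - refunds
        payout = max(balance, 0.0)   # nothing is paid out while the balance is negative
        carry = balance - payout     # a refund spike becomes debt against future sales
        yield payout

print(list(monthly_payouts([1000, 200, 1500], [0, 600, 100])))
# [1000.0, 0.0, 1000.0]
```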
the reform needs to happen at the layer where it is decided whether a given copyright is valid, not before (at the level of "should copyright exist at all") and not after (enforcement).
a world without copyright means those with the largest advertising budgets will reap nearly all the rewards from new IP created by small artists. BigCorp Inc. can just sit around and wait for talented musicians to post something interesting on soundcloud, for example, then just have their in-house people copy it and push it out to radio and streaming platforms via their massive ad budgets and favorable relationships for getting new material onto the waves immediately. meanwhile the original artist gets nothing.
the position of advocating against all copyright protections only makes sense for people who are already wealthy enough that they don't need proceeds from their art to survive.
But the point of the response is that "getting money from selling music" is, in the digital era, artificial scarcity. I.e. the copyright laws that big corporations are lobbying to keep enforcing and tightening are the very thing that creates the artificial scarcity they are best positioned to profit from.
Cut out copyright, and no one will be getting any money from selling music per copy (or equivalent) - as it should be.
digital music is not artificial scarcity, because it's not the copied bits that are the resource, it's attention. we only have so much time and attention for consuming media, and only so much attention and memory space in our brains for keeping track of where to find it. large budgets can easily dominate these channels and limit the average person's apparent choice.
this is what I mean when I say large players would outcompete smaller players in a digital marketplace with no copyright. the only way for this to work would be with a benevolent neutral 3rd party managing the marketplace, like Steam, so users can easily see when a large player is cloning a smaller player's work - but even then it still depends on the good will of the general public to prefer the "original" artist, which is not guaranteed.
> the position of advocating against all copyright protections only makes sense for people who are already wealthy enough that they don't need proceeds from their art to survive.
This makes it sound like the majority of people produce more content than they consume.
The reality is that 99.99999% of people do not produce "art", let alone with the intention of living off it.
Whatever harms you might envision for the tiny minority who do want to try living off copyright, those concerns are dwarfed by the benefits for the rest of us.
Further, not many people who are serious about reform are literally "advocating against all copyright". A reform that simply curbed the duration to something less insane than 150 years would resolve much of what makes copyright bad, even if it continued to exist.
Megacorps being naturally risk-averse, and the lion's share of the rest of the capital being held by that banquet room, it's going to take one or a few scrappy startups hitting it big while also committing to WFH/etc to get the banquet to loosen the purse strings a bit and kick off a new wave of investment a la Web 2.0 post-dotcom-crash (which was coincidentally also post-oh-noes-outsourcing-1.0)
That plus a few years of new successes might get the megacorps to start hiring en masse and possibly see the value in WFH again, but it'll take a lot of these stars aligning to produce several new unicorns that can eat a few lunches to get there, which will probably take the rest of the 2020s and possibly part of the 2030s (based on the last time this happened, going from 1999 crash to the 2010/2011 renaissance)
If I was a betting man, I'd guess the first wave of new startups will be unifying a huge dataset of local info with AI into like the AirBNB-of-local-whatever personal concierge sort of thing, like OpenTable on steroids. but I'm frequently wrong, so ¯\_(ツ)_/¯
> R voters claim the economy is bad under a D president and immediately switch to claiming it's good under an R president.
adding on to this, typically it takes 1-3 years for the effects of an administration to really bear fruit in the economy, either good or ill. So a common tactic from the R side the last several presidential cycles is to claim ownership of the economy handed to them by the outgoing D president, then when their policies cause some kind of problem, blame the incoming D president 4 years later.
See also: who the R's blame the deficit on vs. which party's presidents actually increased the deficit the most over the last 20-25 years.
If it takes 2-3 years, then that means that the economic growth we experienced in ~2021 is Trump’s 2017/19 policies, and the economic downturn we’re receiving in 2024 is due to Biden’s 2021/23 policies.
I highly doubt that any but the crudest policy changes would show, within 5 years, how they will play out in the longer term. The idea seems almost as insane as US politics itself.
In 2020, a Pennsylvania white man illegally voted via mail-in ballot on behalf of two deceased parents.
Also in 2020, a black woman in Memphis voted while ineligible due to a felony conviction without being informed she wasn't allowed, and was convicted and sentenced to 6 years in jail.
As for how this applies to why Trump is not in jail for his convictions, I will leave that as an exercise for the reader.
hmm having a sort of mini-forum-like experience tied to particular pages in a book seems like a fascinating idea! being able to discuss plot twists and such only once you've already gotten to that point?
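Just to sketch the gating idea (nothing from the actual project; all names and structure here are invented for illustration): comments could be anchored to a page number and hidden until the reader's own progress passes that page.

```python
# Hypothetical sketch of page-anchored, progress-gated discussion threads.
from dataclasses import dataclass

@dataclass
class Comment:
    page: int    # page the comment is anchored to
    text: str

def visible_comments(comments, reader_progress_page):
    """Show only comments anchored at or before the reader's furthest page,
    so plot twists stay hidden until the reader has reached them."""
    return [c for c in comments if c.page <= reader_progress_page]

thread = [Comment(12, "Loved this chapter"), Comment(214, "That twist!")]
print([c.text for c in visible_comments(thread, reader_progress_page=50)])
# ['Loved this chapter']
```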
wow this seems like an amazing idea actually! any names yet? I'd love to check it out once it's done!
I think that's part of the carefully-crafted hype messaging. Close enough to get excited about, but far enough away that by the time we get there people will have forgotten we were supposed to have it by then.
To paraphrase Goggins, "Who's gonna carry the cabbage?"
While it's true there are a lot of jobs obsoleted by technological progress, the vision of personal AI teams creating a new age of prosperity only makes sense for knowledge workers. Sure, a field worker picking cabbage could also have an AI team to coordinate medical care. But in this brilliant future, are the lowest members of society suddenly well-paid?
The steam engine and subsequent Industrial Revolution created a lot of jobs and economic productivity, sure, but a huge amount of those jobs were dirty, dangerous factory jobs, and the lion's share of the productivity was ultimately captured by robber barons for quite some time. The increase in standard of living could only be seen in aggregate on pages of statistics from the mahogany-paneled offices of Standard Oil, while the lives of the individuals beneath those papers more often resembled Sinclair's Jungle.
Altman's suggestion that avoiding AI capture by the rich merely requires more compute is laughable. We have enormous amounts of compute currently, and its productivity is already captured by a small number of people compared to the vast throngs that power civilization in total. Why would AI make this any different? The average person does not understand how AI works and does not have the resources to utilize it. Any further advancements in AI, including "personalized AI teams," will not be equally shared, they will be packaged into subscription services and sold, only to enrich those who already control the vast majority of the world's wealth.
The thing is: robotics is knowledge work. Supposing a scenario in which AI makes advancing fields of engineering and science much more rapid, it will be leveraged to build and cheapen robotic labor. There would be a gap period where AI is smart but unable to perform labor without humans, which could be ugly, and then we reach effective post-scarcity and post-humans-being-useful. Where we go from there could be heaven or hell depending on who's in charge.