Hacker News | duccinator's comments

I don't understand how Apple is hitting home runs. What have they really innovated on post-Steve Jobs? Their products are pretty much equivalent to the competition's, with 5% more polish at the cost of 5% more time to release. Marketing-wise, they are close to gods, but innovation-wise, even Microsoft is better.

I definitely agree that Tim is a much better CEO than Sundar. However, I consider Satya to be much better than Tim.


Once you have autonomous robots, you can use those robots to build more robots, leading to an exponential curve. The day they make the first one, we will reach a million within 2-3 years and a billion 2-4 years after that.


The laws of physics and the material cost of creating robots still exist. Real life isn’t a sci-fi movie.


If robots actually replaced all human labour and left nothing for humans to be employed at, then the robots necessarily can do their own resource extraction.

The current cost of a Boston Dynamics Spot is around a year's income, give or take whose income you're measuring against.

If it were able to do any human task at the same rate as a human — and yes, I know it isn't, this is just an anchoring point for the discussion — a group of them would be able to extract and process enough resources in a year to double their population, all the way from rocks in the ground to a finished deliverable.

n years later, there are 2^n robots. Sure, sure, that's a whole 33 years to go from one robot total to one per human, not the numbers the other person gave (which would need a much shorter, but not wildly implausible, reproduction time of 5-8 weeks), but the point is still valid.

That exponential stops only when some un-substitutable resource is fully exploited, so I'm not sure what the upper limit actually is; but given that we exist, I assume 8 billion robots is also possible.
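The back-of-the-envelope doubling math is easy to check (a toy calculation, assuming one seed robot, a fixed doubling period, and roughly 8 billion humans):

```python
# Toy model of self-replicating robot growth: the population doubles
# each period, so after n periods there are 2**n robots.
HUMAN_POPULATION = 8_000_000_000

def periods_to_reach(target, start=1, factor=2):
    """Number of doubling periods until the population reaches `target`."""
    n, population = 0, start
    while population < target:
        population *= factor
        n += 1
    return n

# With a one-year doubling period, one robot per human takes ~33 years:
print(periods_to_reach(HUMAN_POPULATION))  # → 33
```

Swapping the one-year doubling period for the other poster's 5-8 weeks turns those 33 doublings into roughly 3-5 years of wall-clock time.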


Can these robots also teleport and charge anywhere, or do you predict an expedition to some mine in Africa when they reach the stage where they need some cobalt?

I'm just finding it funny thinking of 15 Spots queuing for a flixbus / greyhound bus because they need to go get some raw material across the continent.


I expect cobalt to be mined in the cheapest possible way.

If that's humans, humans have work[0]. If humans don't have work, the only alternative to robots is, what, well-trained squirrels?

> I'm just finding it funny thinking of 15 Spots queuing for a flixbus / greyhound bus because they need to go get some raw material across the continent.

Me too.

But I expect actual logistics to be much like current logistics, so it would be more like a few Spots loading a standardised intermodal shipping container, a Tesla semi taking it to the port (which is guarded by drones), an automated gantry that puts the container on a cargo ship, a few more robots guarding the cargo ship from pirates (who may well also be robots), the same in reverse at the other end.

[0] You may point to the current conditions of cobalt mines and say "this is bad"; but there was once a time when other forms of mining were seen as good solid work, and those who held those jobs protested against their mines being closed down. That protest almost broke the UK, too.


You don’t have to do all this hand waving about what you think is going to be happening in the near future.

Just explain why AGI will be in our near future. Frankly, I don’t see how, given what we have now.


> You don’t have to do all this hand waving about what you think is going to be happening in the near future.

Strange response…

> Just explain why AGI will be in our near future. Frankly, I don’t see how, given what we have now.

…but OK.

The AI we have now is already capable of learning from what we do. It's very stupid, in the sense that it takes a huge number of examples, but surveillance is trivially cheap, so a huge number of examples is actually very easy to get.

As I wrote in 2016:

"""We don’t face problems just from the machines outsmarting us, we face problems if all the people working on automation can between them outpace any significant fraction of the workforce. And there’s a strong business incentive to pay for such automation, because humans are one of the most expensive things businesses have to pay for.""" - https://kitsunesoftware.wordpress.com/2016/04/12/the-singula...

Compute cost is also important: more compute makes better GenAI images, allows larger models in general, turns near-real-time into actual-real-time (many robot demos you see on YouTube have sped-up footage).

Here's something I wrote in 2018 anchored on iPhone compute cost and some random guesstimate I found for what it would take to have an uploaded human brain running in real time, though I can't remember if I've ever compared it with actual compute improvements since then: https://kitsunesoftware.wordpress.com/2018/10/01/pocket-brai...

So, while I don't know what you mean by "near", I'd put "the economy is going to change radically due to AI" somewhere between "imminent" and "20 years" (±3σ, skew normal distribution with a mode somewhere around 2028-2030).


I mean by hand waving you are focusing on discussing the future robots that are going to take over and perform x task or y task or duplicate themselves. Obviously these robots doing that is predicated on them having AGI so it’s pointless to talk about what these AGI robots will be doing if we haven’t established that AGI is even possible in the near term or at all.

It’d be like me making a prediction that we will have begun to colonize another galaxy in 10 or 20 years' time and then only talking about how there’ll be trade between Earth and the colonies and maybe even wars and revolutions, meanwhile completely skipping over how our spacefaring technology will have advanced to the point where we can even travel those distances in a reasonable timeframe.

I’m not an expert on LLMs, but there doesn’t seem to be very much about them that is even approaching AGI. They’re a useful tool, and they’ll definitely disrupt certain sectors of the economy, mostly white-collar jobs, but we’re in the middle of a peak of inflated expectations. This has happened before with other technologies.


> I mean by hand waving you are focusing on discussing the future robots that are going to take over and perform x task or y task or duplicate themselves.

Invert your causality.

The discussion so far was "oh no, oh woe, we shall have no jobs!" — this can only happen if AI is good enough to do all that humans can do. Until that point, we're fine, it's the status quo, and also it doesn't matter how long we stay in this state. I'm not making any strong claim about the start date of the transition (I have a 20 year spread which I think is pretty vague), only the duration of such a transition.

When AI can do that, then it's obvious they can do things like "build a robot body", which is obvious because we can; and the definitional requirement of there not being any more work for humans is that the robots can do all the things we can. It's a necessary precondition for the scenario, not a prediction.

> Obviously these robots doing that is predicated on them having AGI

No, it isn't. "AGI" isn't even a well-defined term, each letter of the initialism means a different thing to different people.

And self-replication has much, much lower brain power requirements than full AGI, even for simple definitions of AGI: an AI-and-robot combo with all the intellect of the genome of E. coli is also capable of self-replication. The hard part of self-replication right now isn't the brain power.

So again, invert your causality: the brain power to replace all human workers includes the knowledge of how to self-replicate, but the knowledge of how to self-replicate does not require the brain power to replace all human workers.

> so it’s pointless to talk about what these AGI robots will be doing if we haven’t established that AGI is even possible in the near term or at all.

The specific thing an AI needs to do is learn. That's all. And they already can. Even if the weaknesses of current models are left unresolved, AI will still eventually learn to do each thing humans do when given enough examples, which limits humans to the role of teaching the machines. This is still a form of employment, so it's not economic game-over.

> It’d be like me making a prediction that we will have begun to colonize another galaxy in 10 or 20 years' time and then only talking about how there’ll be trade between Earth and the colonies and maybe even wars and revolutions, meanwhile completely skipping over how our spacefaring technology will have advanced to the point where we can even travel those distances in a reasonable timeframe.

No. That would require a change of the laws of physics. We don't need a change to the laws of physics for AI, because no matter what definition is used and whether or not current models do or don't meet any given standard, the chemistry in our own bodies definitely demonstrates the existence of human-level intelligence.

> I’m not an expert on LLMs

Are not the only kind of AI. You can't use an LLM for OCR, tagging photos, blurring the background of a video call, driving a car, forecasting the weather, or predicting protein folding, and you shouldn't use one for route finding or playing chess (although they're surprisingly good at the latter two, all things considered). Other AIs do those things very well.

But LLMs will translate between languages as a nice happy accident. And they can read the instructions and use other AIs as tools. And, indeed, write those other AIs, because one of the things they can translate is English to Python.

> but there doesn’t seem to be very much about them that is even approaching AGI.

Then you are one of many whose definition of "approaching" and "AGI" is one I find confusing and alien.

Between all AI, every single measure of what it means to be intelligent that I was given growing up has been met. Can machines remember things? Perfectly. How big is their vocabulary? Every word ever recorded. How many languages do they speak? Basically all of them. Are they good at arithmetic? So good that computers small enough and cheap enough to be given away for free, glued to the front of magazines, beat all humans combined and still would even if everyone was trained to the level of the current world record holder. How well do they play chess? Better than the best humans, by a large margin. Go? Ditto. Can they compose music? Yes, at any level from raw sound pressure levels to sheet music. Can they paint masterpieces? Faster than the human eye's flicker fusion rate. Can they solve Rubik's cubes? In less than the blink of an eye. Can they read and follow instructions, such they can use tools? Yeah, now we have LLMs, they can do that great. Can they make software tools? Again, thanks to LLMs, yes. Do they pass law school exams, or medical exams, can they solve puzzles from the International Mathematical Olympiad? Yup.

We're having to invent new tests in order to keep claiming "oh, no, turns out it's not smart".

> They’re a useful tool, and they’ll definitely disrupt certain sectors of the economy, mostly white-collar jobs, but we’re in the middle of a peak of inflated expectations. This has happened before with other technologies.

LLMs, probably so. I often make the analogy with DOOM, released 30 years back, and the way games journalists kept saying each new 3D engine was "amazing" or "photorealistic", and yet we've only just started to really get that over the last decade. Certainly all the open source models are hyped as "ChatGPT clones" or "ChatGPT killers" in the same way games were "DOOM clones", or whatever the noun was in the cliché "${noun} killers".

And yet the field of AI as a whole, including but not limited to GenAI, has been making rapid progress, doing things which were "decades or centuries away" every couple of years since I graduated in 2006. Even just the first half of the 2010s was wild, and the rate of change has only gone up since then; these last 18 months have felt like more than that entire decade.


They can just hire a human, until there are enough of them to fully set up global automated distribution systems.


One of your assumptions is that we will start from 1 robot per year. I believe this is not true. I assume robots will be similar to high end cars in regards to the complexity of manufacturing. Once a company develops a prototype with AGI (mechanically the robots are almost there already; just the control systems and software are lacking, which is supposed to be solved by AGI), it will rain VC money. The first million will be built by humans. The initial robots will take over the manufacturing only later. Setting up manufacturing that will be able to produce a million units in 2-3 years is possible. Let's say 5 years for a more plausible situation for a million robots to be built. These million will then scale exponentially. Also there is no reason to believe it will be 2^n, it can also be 3^n or 1.1^n or any arbitrary number.


> One of your assumptions is that we will start from 1 robot per year.

Not per year, total. And it's not really an assumption, just a demonstration of how fast exponential growth is.

> I assume robots will be similar to high end cars in regards to the complexity of manufacturing.

Agreed. This is also the framing Musk uses for Tesla's androids.

> Once a company develops a prototype with AGI

I don't think it needs a complete solution to AGI, as other people use the term. First, all three letters of that initialism mean different things to different people — by my standard, ChatGPT is already this because it's general over the domain of text, and even if you disagree about specific definitions (as almost everyone reading this will), I think this is the right framing, as you'd "only" need something general over the domain of factory work to be a replacement factory worker, or general over mining and tunnels to be a miner, or general over the domain of roads and road users to be a driver.

This isn't to minimise the complexity of those domains, it's just that something as general as ChatGPT has been for text is probably sufficient.

> The first million will be built by humans. The initial robots will take over the manufacturing only later.

Perhaps, perhaps not. The initial number made by humans is highly dependent on the overall (not just sticker-price) cost and capabilities: a $200k/year TCO robot that can do 80% of human manual labor tasks is very impressive but likely to be limited to only a few roles, and probably won't replace anyone in its own factory; one with total costs of $80k/year that can do 90% might well replace most (but not all) of the humans in its own factory; and one costing $20k/year all-in that can do 95% might well replace all the factory workers but none of the cobalt miners or the truck drivers.

"Fully general" is the end-state, not the transition period. But with fully-general, which is a necessary condition for nobody having any more work, we get a very fast transition from the status quo to having one robot per human.

> Setting up manufacturing that will be able to produce a million units in 2-3 years is possible. Let's say 5 years for a more plausible situation for a million robots to be built.

Agreed on both.

> Also there is no reason to believe it will be 2^n, it can also be 3^n or 1.1^n or any arbitrary number.

It's a definitional requirement of exponential growth: 2^n units after n doubling periods. I anchored the doubling period at a year just by reference to the cost of an example existing robot, using that dollar cost as a proxy for equivalent human labor, and I specifically noted that the other poster's estimate corresponded to a 5-8 week doubling period, which didn't seem unreasonable to me. A robot can do each specific task 4.2 times slower against the wall clock and still be just as fast as a human overall, because it's working 24/7 rather than 8/5.
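That 4.2 figure is just the ratio of hours in a robot's week to hours in a standard human work week:

```python
# A robot working around the clock vs. a human working 8 hours, 5 days:
robot_hours_per_week = 24 * 7   # 168 hours
human_hours_per_week = 8 * 5    # 40 hours

# How much slower per task a robot can be while still matching
# a human's weekly output:
slowdown_tolerated = robot_hours_per_week / human_hours_per_week
print(slowdown_tolerated)  # → 4.2
```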


Agreed. More clarification about the first part:

What I want to convey is that the growth function will be somewhat similar to y = c + ax^n (ignoring the linear and higher-order terms, or collapsing them into c) rather than just y = ax^n.

The c here is robots produced by humans. I predict c will easily reach a million in 5 years, with or without human help.

Even if the later bots can do only 50% of the work of humans, we will still exponentially grow the robots until the humans become a bottleneck. And that 50% capability is also expected to grow exponentially.
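The curve being proposed can be sketched in a few lines (a toy illustration; the values of c, a, and x here are made-up numbers for demonstration, not predictions from either commenter):

```python
# Toy version of the proposed growth curve y = c + a * x**n, where
# c = robots built per era by human-run factories (roughly constant),
# a = seed population of self-replicating robots,
# x = multiplication rate per period, n = number of periods elapsed.
def robot_population(n, c=1_000_000, a=1_000, x=2):
    return c + a * x**n

# Early on the human-built constant term dominates; the exponential
# term overtakes it once x**n exceeds c / a (here, after 10 doublings):
print(robot_population(0))   # → 1001000
print(robot_population(10))  # → 2024000
print(robot_population(20))  # → 1049576000
```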

Gemini 1.5 Pro already beats most humans on most benchmarks. Combine it with Sora, which has a great visual world model, add some logical reasoning (via architecture or scale), memory, and embodiment (so it can experiment and test), and you pretty much have the seeds of an AGI.

My optimistic/most probable prediction about the growth rate say it's

Regarding the last part:

My bad, I speed-read your comment and didn't focus on the exponential calculations. Exponential growth is just x^n; both x (the multiplication rate) and n (units of time) can be manipulated.


We're broadly in agreement, the only part I'd disagree with here is:

> Even if the later bots can do only 50% of the work of humans, we will still exponentially grow the robots until the humans become a bottleneck. And that 50% capability is also expected to grow exponentially.

I think most automation since the dawn of the industrial revolution has done 50% or more of the task it was automating, and although yes the impact there is exponential growth, humans are a very rapid bottleneck until the next thing gets automated — Amdahl's law, rather than Moore's.


Whoops, deleted part of my response before submitting.

My optimistic/most probable predictions about the growth rate say it's wild. Within 5-10 years, with the multiple exponentials across different fields, you can easily do away with most jobs unless there is a major bottleneck (I don't think there is; there is so much low-hanging fruit). I guess that's what the singularity is all about. I don't think this will take multiple decades or centuries in any scenario other than the equivalent of WW3.


The discussion is about robots replacing laborers and tradesmen, not all human work. It seems far more likely that humans will maintain control of the corporations that manufacture the robots.

The rest of this is just sci-fi speculation.


> The discussion is about robots replacing laborers and tradesmen, not all human work.

If that's all they did, it would be just another change to the nature of work, and humans would simply find other roles to fill.

Rapid change is scary, but can be managed, has been managed before — this kind of thing has historically grown the metaphorical economic pie, making everyone better off. If it's either one of "just muscle power" or "just brain power", the other leaves opportunities for humans.

Only a total replacement of all human work causes such a break that we're fumbling around in the dark. ("Fumbling around" is how I see the discussion of UBI: even if it turns out to be right, we don't yet have anything like a good enough model for the details).

> It seems far more likely that humans will maintain control of the corporations that manufacture the robots.

Will they, though? We've already got people (unwisely IMO) putting AI on boards of directors. Yes, it's as much a stunt as anything else, the law prevents them from being treated as "people", but the effect is the same: https://asia.nikkei.com/Business/Artificial-intelligence-get...


It almost makes me wonder: if something really intelligent arrives (which is where we are heading, according to some), would we even need as much labor, or would things just be much more efficient?

I get the feeling less "conventional" robots might be needed someday, rather than more.


imo Satya is a different ball game compared to Ballmer. I wouldn't put a lot of emphasis on msft's track record pre-Satya personally.


I wonder though: considering the number of other popular people who don't share the same achievements and whom you probably just ignore anyway (think some influencers and YouTubers), do we really want him to shut up? His opinions and promises might not sit well with everyone, but I definitely think you can learn a lot by analysing him.

I personally prefer him over random billionaire #362682, who hasn't achieved half as much and won't share anything.

Also, I would recommend reading his biography, changes your perspective on a lot of things.


Yes, even GPT3.5 is better. I am in uni, and LLMs are probably the best teachers I have had the chance to learn from (and I have had some great teachers and professors). They work even better if you feed them the content of a book/manual/documentation as a reference.

They do suck at solving problems correctly; however, if you give them an incorrect solution and ask them to spot mistakes, or just ask for a general method for a problem, it works out.

However, they might not yet compare to the best of humans. The best SO answers probably represent the top 0.01% of answers, which is a high bar. I am certain there are amazing teachers and professors out there in the world whom LLMs can't beat yet, but the average teacher can't compete.


> Yes, even GPT3.5 is better. I am in uni, and LLMs are probably the best teachers... They do suck at solving problems correctly...

The discussion was specifically about LLMs to write software. Not about university essays or articles or exams. Are you claiming GPT3.5 is better at writing bug-free software than the average software engineer?


No, please read my response again. My claim is that GPTs are better than human teachers for most* domains, including software.

However, I do think a framework needs to be developed for formally learning any particular topic. If you are self-learning using just ChatGPT, you might miss out on a few key things. I haven't used it much personally, but the Khan Academy bot is close.


China might not be as successful as the West (yet), but they have their own ecosystem and alternatives for most tech products.

All the tech companies in China are practically under the control of the party. China also has a billion-plus people; even if the market is smaller than the West's, I think they will manage.

Not to mention the difference in privacy laws and the higher number of STEM grads to throw at the problem.


So we agree: that was my point. China is not a competitor for western markets, meaning the argument that "If we don't do it, China will" is fucking ridiculous, as China doesn't have access to the data necessary to make things that WORK for the western market.


A lot of Western data is public, and people in China aren't aliens compared to those in the West; there are only small cultural differences, so Chinese data in itself is usable for many Western requirements.

Combine the public Western data with the private Chinese data, and it should be enough for them to give the West a run for its money if they decide to slow/stop. Not to mention that Chinese apps like TikTok are used very widely in the West, and corporations like Tencent have a tentacle wrapped around hundreds of Western corporations.


This is truly amazing. We are living in the future.

I wonder though, if the brain already has specific regions for control of specific parts of our body, will it be impossible to add new limbs in the future? An extra arm would be helpful.


Brain plasticity to the rescue. There are examples of people integrating completely new senses (a vibrating compass belt) and controlling prostheses with completely unrelated nerves.

But if you want to graft additional limbs onto people, I highly recommend starting with baby-aged humans; their brain plasticity is unmatched. Imagine how much more fulfilment an Amazon fulfilment center worker could bring with four arms! I should get Bezos on the phone.

https://www.carlosterminel.com/wearable-compass

https://www.dailymail.co.uk/health/article-4196408/World-s-p...


Equisapiens to the rescue!


I'm kind of surprised that in 2023, some signal replicator bridge from one side of the spinal cord to the other side isn't a lot more straightforward. I mean we're doing neural implants and the like already.

But for the use case you describe, I think it's more likely that a robotic arm with AI/voice instruction would do that. Or a neural helmet.


I think people drastically overestimate modern medical technology. We are not advanced by any means. We are still just barely learning small pieces...


We're making good strides in some areas and others are more resistant to breakthroughs.

This even works in different aspects of the same thing: We have the ability to genetically modify T-cells to kill some kinds of cancers, but it's more difficult to use that against solid tumors, which create their own microenvironments inside of them, than blood cancers.

https://stemcellres.biomedcentral.com/articles/10.1186/s1328...

But, of course, we keep making progress, and there's been some promising results in making CAR T-cell therapies that work against solid tumors:

https://www.cancernetwork.com/view/cldn6-car-t-cell-therapy-...

We advance piecemeal and some things are more difficult than others, but we definitely advance.


It took them like 30 years to completely dissect a single amino acid which would hold huge medical breakthroughs. AI found its own way to do the same, but for every amino acid, saving decades of tedious work. In two years we've gone from barely knowing the full innards of amino acids and how they work to knowing all of them.

Earlier this year, an AI research lab basically cured a rare cancer in a week's time.

Imagine having 200 MDs and biologists in a virtual world working 24/7 with real-world doctors and biologists. Nobody will know who is who; they just work together on aspects, then use the virtual lab to analyze potential results before trying them in a real lab.

The regular researchers of course won't be able to go 24/7, but while they sleep the AI researchers could solve 2 years' worth of problems.

We are way more advanced than we know, because we have new, never-before-realized potential to quadruple research in scientific endeavors.


That's crazy. We had to learn the structure of all the amino acids in my biochem class a decade ago. I'm glad science has finally caught up. Maybe the advances could have happened faster if the scientists studied the structures of the amino acids that undergrads were drawing from memory.


That AI hype is a powerful drug.


That contrarianism is a powerful drug.


At least AI is legitimately solving real world problems. I'll take that over crypto-hype any day.


I really like this approach, even though it's antithetical to the way medical technology is structured right now. Basically, medical device companies and drug companies just want to manufacture something, hold a patent for 20 years, and get whatever extensions they can.

The described approach is a lot more like programming, where you have a whole bunch of skilled professionals working together to solve specific bespoke problems, because cancer is actually thousands of diseases depending on the gene expression and underlying genetics.

So I really like this approach. I hope it gets formalized and scaled to some degree outside the auspices of the drug companies, who just want to patent-squat.

I also don't know if the FDA is equipped for it, because it sounds like we're going to get individually tailored drugs or other vectors, but how do you test that in the way the FDA typically does?


Tell that to the people who died from AIDS 40 years ago who could live a full life now.

Heck, tell that to my mother, who had four knee replacements in her lifetime (in her 60s), one even with a horrible infection, before one last year finally got her back on her feet.


> I'm kind of surprised that in 2023, some signal replicator bridge from one side of the spinal cord to the other side isn't a lot more straightforward.

The issue is that the spinal cord is essentially a bundle of cables: a lot of axons from individual neurons. If you sever it, finding the right connections is impossible, so you have to use blunter tools like electrical stimulation of the whole bundle.

We are getting better and better at labeling individual cells, even at a molecular level. When we understand how to do that, we might be able to do as you propose. I think we will see some forms of paralysis reversed in the coming decades with technologies such as those.


Interesting. So how many axons? If you connect to all the axons on both ends and make the stimulation programmable, then you could adjust the stimulation of the axons across the bridge. But I'm guessing we're talking about a lot of axons.

Anyway, what do I know? I'm an idiot on the internet


In humans, around a million axons [1]. But not every lesion severs all axons. It's very challenging to stimulate individual axons as well, especially at scale.

https://www.frontiersin.org/articles/10.3389/fnana.2017.0012...


Hello Peter, thank you for the AMA. I am an Indian undergrad from a top uni, and I plan to do a startup after graduating, or maybe after a year of work experience (I am currently working on it as a side project). What do you think my options are for getting a visa quickly, considering I do not plan to be employed for long and am unlikely to get in via O-1 or E-2?


The easiest option is just to use your OPT to run and build your business after you graduate, and then move to an O-1 or rely on STEM OPT to continue to run your company or work for another.


There is a chance that your photos are being backed up by some cloud service and being removed from your gallery. The most likely suspect is Google Photos.

Note that Google Photos not only OCRs your photos, it also does visual search of objects, faces, scenery, etc., and is extremely powerful.


> There is a chance that your photos are being backed up by some cloud service and being removed from your gallery. The most likely suspect is Google Photos.

I have Google Photos upload and backup both disabled.

But then, I'm pretty sure either the Google or the Samsung SMS app had a "feature" to automatically delete old messages (for a definition of "old" that was neither specified nor configurable), and it defaulted to ON on my current phone, likely costing me a significant chunk of my message archive (which I had dutifully transferred over from the previous phone) before I accidentally found and disabled the switch.

So yeah, could be Google Photos deleting it. Or someone else. I don't trust Android as a platform anymore.

BTW, about this "delete old messages" "feature": most likely it was implemented for performance reasons. But the thing is, you're unlikely to send or receive enough SMS in your whole life for them to take a noticeable amount of space. The irony here is, I do remember a case where the messaging app would become slow and laggy if you had enough texts stored on the phone, but that was solely because someone implemented the message list as a linked list, thus adding an O(N) multiplier to many GUI operations.
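That linked-list blunder is easy to reproduce in miniature (a generic sketch, not the actual app's code): each by-index access has to walk the list from the head, so rendering N messages by index costs O(N²) overall, where an array-backed list would be O(N).

```python
# Minimal singly linked list: reaching item i requires walking i nodes,
# so accessing every item by index is O(N^2) in total.
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def nth(head, i):
    """O(i) walk to the i-th node: the hidden cost in the GUI loop."""
    node = head
    for _ in range(i):
        node = node.next
    return node.value

# Build a list holding 0..4, then read it back by index,
# the way a naive list view might render its rows:
head = None
for v in range(4, -1, -1):
    head = Node(v, head)
print([nth(head, i) for i in range(5)])  # → [0, 1, 2, 3, 4]
```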

