There are very few places in NYC not accessible via some combo of bus, subway, and ferry. It's not as reliable as, say, Japan's, but the public transit network is pretty extensive.
Not everyone who drives through NYC lives in NYC. And even for those who do, those transit hops add time. Now you're forcing people to choose between paying money they don't have or spending time they don't have.
If you're driving through NYC, you probably have enough money for gas, the $9 toll, and all the other tolls on the road. No one is driving around on their last drop of gas thinking, "Gosh, I could get out of NYC to Long Island if I just had that $9."
The stories about poor car owners who can't afford $9 are all made-up nonsense. "Not everyone has $9 to spend to drive their tens-of-thousands-of-dollars car."
I think the AI advice is pretty important. You say you should understand and question everything the lawyers do, but where do you even start without an AI to read and explain thousands of pages of contracts and legal process? Never mind all the case law it's trained on and has access to.
I think if he tried this ten years ago he'd have a pretty hard time with it.
At the rate they're going it'll just get cheaper. The cost per token continues to drop while the models get better. Hardware is also getting more specialized.
Maybe the current batch of startups will run out of money but the technology itself should only get cheaper.
I think articles like this rest on the big assumption that progress is going to plateau. If that assumption is true, then sure.
But if it's false, there's no reason you couldn't eventually have an AI model that can read your entire AWS/infra account, look at logs, financials, and docs, and have a coherent picture of an entire business. At that point the idea that it might be able to handle architecture and long-term planning seems plausible.
Usually when I read about developer replacement, it's with the underlying assumption that the agents/models will just keep getting bigger, better and cheaper, not that today's models will do it.
There is a high risk that the systems that AIs build, and their reasoning, will become inscrutable with time, as if built by aliens. There is a huge social aspect to software development and the tech stack and practices we have, that ensures that (despite all disagreements) we as developers are roughly on the same page as to how to go about contemporary software development (which now for example is different than, say, 20 or 40 years ago).
When AIs are largely on their own, their practices will evolve as well, but without there being a population of software developers who participate and follow those changes in concepts and practices. There will still have to be a smaller number of specialists who follow and steer how AI is doing software development, so that the inevitable failure cases can be analyzed and fixed, and to keep the AI way of doing things on a track that is still intelligible to humans.
Assuming that AI will become that capable, this will be a long and complex transition.
There are parts of this I agree with and parts I do not. Being able to "talk" to documentation rather than dig through it to try to understand a concept feels like a way more efficient way to get to the same end.
I think digging through forums or comments or SEO garbage to try to find answers is a nightmare compared to having a solid LLM do it for you. Being able to ask it to explain a concept 5 different ways, or to compare concepts, is incredible.
Or say - knowing nothing about 3d printing and being able to just ask it about current capabilities, materials, costs etc as an entrypoint. There are whole business ideas I wouldn't even consider exploring without it because it would be so overwhelming to research from scratch.
I believe we're already using LLMs to evaluate LLM output for training; I wonder if there's some variation of that which could be used to identify when one LLM gets "stuck".
I guess chain of thought should in theory do that, but having variations on prompt and context might behave differently?
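The "stuck" detection idea could be sketched roughly like this. Everything here is hypothetical: there is no real model API involved, just canned (attempt, judge_score) pairs standing in for generation and an LLM judge.

```python
# Hypothetical sketch of LLM-as-judge "stuck" detection. Nothing here is a real
# API; the (attempt, score) pairs stand in for model output plus a judge's rating.
def is_stuck(history, window=3, min_score=0.5):
    """Flag 'stuck' when recent attempts repeat themselves or the judge
    keeps scoring them poorly."""
    recent = history[-window:]
    if len(recent) < window:
        return False
    repeating = len({attempt for attempt, _ in recent}) == 1
    low_scores = all(score < min_score for _, score in recent)
    return repeating or low_scores

# Toy usage with canned (attempt, judge_score) pairs:
history = [("try A", 0.2), ("try A", 0.3), ("try A", 0.1)]
print(is_stuck(history))  # same attempt three times -> True
```

In practice you'd presumably want something fuzzier than exact-match repetition (embedding similarity, say), but the shape of the loop would be similar.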
Genuine question, but did folks feel poor during the heyday of the middle class? It seems like there was a period post-WW2 where you could make a living wage and still buy a house and send your kids to college.
I'm not sure what the answer here is, but it does seem like that kind of affordability existed at some point. Consumerism was pretty rampant in the 50s and 60s, when a lot more was still made in the US.
"cooling a living space is always more costly than heating a living space"
Man I wish this was true, but it definitely isn't in any place that gets significantly cold. Heat pumps are super efficient at cooling, but they get less efficient at heating the colder it gets. Humans and appliances generate a pretty negligible amount of heat.
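The "less efficient the colder it gets" trend falls straight out of the thermodynamics. A quick illustrative sketch using the Carnot limit (an ideal upper bound; real heat pumps hit only a fraction of it, but the shape of the curve is the point):

```python
# Illustrative only: the Carnot limit is an ideal upper bound, not real-world
# performance, but the trend (heating COP falls as it gets colder) holds.
def carnot_heating_cop(t_inside_c, t_outside_c):
    """Ideal heating COP = T_hot / (T_hot - T_cold), temperatures in kelvin."""
    t_hot = t_inside_c + 273.15
    t_cold = t_outside_c + 273.15
    return t_hot / (t_hot - t_cold)

for t_out in (10, 0, -10, -20):
    print(f"{t_out:>4} C outside -> ideal heating COP {carnot_heating_cop(20, t_out):.1f}")
```

The narrower the temperature gap, the bigger the theoretical multiplier, which is why the same machine that looks amazing at 10 C outside struggles at -20 C.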
Add insulation, and use a heat pump. The more insulation you add, the easier it is to keep a space warm. Add enough insulation, and eventually the waste heat from human activity inside the space will equal or surpass the heat loss through the insulation, removing the need for additional heating.
Insulation obviously also helps keep a place cool. But no amount of insulation will ever remove the need for cooling if the outside is warmer than the inside. Energy is always going to move from a hot place to a cold place, but at least insulation lets us control how quickly that happens.
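The break-even point described above can be sketched with steady-state conduction numbers. All the figures here are made-up assumptions for illustration, not measurements:

```python
# Back-of-the-envelope heat-loss sketch (all numbers are illustrative
# assumptions). Steady-state conduction loss through the envelope:
#     Q = A * dT / R   (watts), with R the total SI R-value (m^2*K/W).
def heat_loss_watts(area_m2, r_value_si, delta_t_c):
    """Heat lost through the building envelope at a given R-value."""
    return area_m2 * delta_t_c / r_value_si

AREA = 300.0        # assumed envelope area, m^2
DELTA_T = 20.0      # 20 C inside vs 0 C outside
WASTE_HEAT = 500.0  # assumed waste heat from people/appliances, watts

for r in (2.0, 5.0, 12.0):
    loss = heat_loss_watts(AREA, r, DELTA_T)
    extra = max(0.0, loss - WASTE_HEAT)
    print(f"R={r:>4}: lose {loss:.0f} W, extra heating needed {extra:.0f} W")
```

With these toy numbers, somewhere around R=12 the internal waste heat covers the losses entirely, while no R-value helps you when the heat is flowing the wrong way (outside warmer than inside); insulation only slows the flow.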
Also, the limit on air-source heat pumps in cold conditions is basically caused by water freezing on the evaporator coils, effectively adding a layer of insulation that limits how much energy can be drawn from the air; we're not really limited by the refrigerants. As others have mentioned, you can dig down to find a better source of heat, and often you don't even need to dig far: a 20-30cm trench is often enough. Although in super cold climates you'll need to go deeper to make sure you're through the frost layer in the winter.
I thought any place that is significantly cold can still dig underground and at some point you can get enough heat to run your heat pump?
Yeah, if you have a bare minimum of $30k burning a hole in your pocket, enough open land to drill the well, and the correct geology. And the larger your house, the bigger/more wells you need: you're drawing from the Earth's relatively constant temperature, so the only way to get more heat is more surface area for the coolant.
Some people on Reddit are reporting quotes of $125k for larger (>3000 sq ft) houses.
As someone who lives in a 4-season environment that can get down into the single digits F on occasion in the winter (forecast to be there for a couple of days next week), and has an air-source heat pump, I just suck it up and eat the $400-$500/month heating costs for the auxiliary (electric resistive) heat in Dec/Jan/Feb. If someone gifts me a ground-sourced heat pump I'll gladly accept, but I've got kids to raise so setting aside money for one is a long way off.
Heating is more costly if you use technology created for cooling. When you try to extract heat from a cold space in order to warm another space, you will have a bad time. You could use an electric resistance heater instead; it has no problem heating, but it uses more electricity. Or you could use something actually cheaper, like wood or fossil fuels. If you use a more expensive method (like electricity), it will be more expensive.
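The resistance-vs-heat-pump cost gap above is just arithmetic over the COP. A rough sketch with made-up prices and demand (not figures from this thread):

```python
# Rough cost comparison, all numbers illustrative assumptions: delivering the
# same heat via resistance (COP 1) vs a heat pump (COP ~3 in mild cold).
HEAT_NEEDED_KWH = 1000   # assumed monthly heat demand, kWh thermal
ELEC_PRICE = 0.15        # assumed electricity price, $/kWh

def monthly_cost(cop, price_per_kwh=ELEC_PRICE, heat_kwh=HEAT_NEEDED_KWH):
    """Electricity cost to deliver heat_kwh of heat at a given COP."""
    return heat_kwh / cop * price_per_kwh

print(f"resistance (COP 1): ${monthly_cost(1.0):.0f}")
print(f"heat pump  (COP 3): ${monthly_cost(3.0):.0f}")
```

Same logic explains why resistive heat "works fine but uses more electricity": it is always COP 1, while a heat pump's COP (and therefore its advantage) shrinks as the outdoor temperature drops.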
This might be true for you. I have lived with free wood for heating, and it was more expensive for me than using a heat pump. What is expensive depends on a lot of factors: political, social, location, time, and knowledge. It is not a simple dollars-per-delta-T calculation.
Chopping up a tree is kind of fun, we bill that to the entertainment budget instead of the heating budget. And it usually happens during a hotter season, so I might have to go inside to take a break, get a cool drink. So, we can bill some of the tree chopping activity to the cooling budget!
AlphaGo and AlphaStar both started out trained on human play and then played against versions of themselves to go on and create new strategies in their games. As far as I know, modern LLMs can't learn/experiment in exactly the same way, but that may not always be true.
Yeah, but they had a limited set of rules to work within (they were just hyper-efficient at calculating the possible outcomes relative to those rules). Humans, in theory, only have the rules they believe in, as technically there are no rules (it's all make-believe). For example, what was the "rule" that told people to make a wheel? There wasn't one. A human had to think about it/conceive it, which AI can't (and I'd argue never will be able to) without rules.