> if we run a system for smaller trains, we can build smaller stations for these trains, saving a huge amount on station costs. This costs us in reduced total capacity, but this can easily be made up for by increasing train frequency.
There is a safe minimum distance between trains; more precisely, a safe distance for any given speed. Shorter trains are not exempt from it. You can make shorter trains more frequent, but only at the expense of lowering travel speed.
What the cap on throughput is, given these speed limitations, is an exercise left for the author of the article.
For capacity calculations, headway is what matters. E.g. trains spaced 2 mins apart means that 30 trains run in an hour.
It’s the same with cars. A 2s headway with cars holding 1 person each means that the maximum capacity of a highway lane is 1,800 people per hour, no matter how fast they go (the cars are further apart at higher speeds).
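A back-of-the-envelope sketch of that headway arithmetic (the car numbers are the ones quoted above; the 450 passengers per short train is implied by the article's 18,000 pphpd at 40 tph, not stated directly):

```python
def throughput_per_hour(headway_seconds: float, people_per_vehicle: float) -> float:
    """People moved past a point per hour at a fixed time headway."""
    vehicles_per_hour = 3600 / headway_seconds
    return vehicles_per_hour * people_per_vehicle

# Cars at a 2 s headway, 1 person each: 1,800 people per lane per hour.
print(throughput_per_hour(headway_seconds=2, people_per_vehicle=1))      # 1800.0

# Trains every 2 minutes (30 per hour), ~450 passengers per short 3-car train.
print(throughput_per_hour(headway_seconds=120, people_per_vehicle=450))  # 13500.0
```

Note that speed appears nowhere in the formula: at a fixed time headway, throughput is independent of how fast the vehicles move.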
Freeway capacity is maximized around 35 MPH. Faster, and the greater distance between cars reduces capacity. Slower, and there are not enough cars per minute per lane. So the goal of ramp metering signals is to throttle input to keep the freeway speed around 35 MPH.
I think you would want to keep the road just slightly less dense (fewer cars, higher speeds) than the density that maximizes throughput, because otherwise you operate at the edge of an instability. Any tiny local deviation in speed somewhere triggers a slight local decrease in throughput, causing bunching which further decreases throughput and snowballs into a traffic jam.
When operating at a speed slightly above the capacity-maximizing one (i.e. at a density slightly below critical), local slowdowns cause a local increase in throughput, allowing bunches to dissipate.
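One way to see why there is a capacity-maximizing speed, and why the peak is a knife edge, is the classic Greenshields model. A minimal sketch, assuming a 70 mph free-flow speed and a 250 veh/mile jam density (both round numbers picked for illustration):

```python
# Greenshields model: speed falls linearly with density, v = v_f * (1 - k / k_j),
# so flow q = k * v peaks at k = k_j / 2, i.e. at half the free-flow speed.

V_FREE = 70.0   # free-flow speed in mph (assumed)
K_JAM = 250.0   # jam density in vehicles per mile per lane (assumed)

def speed(k: float) -> float:
    return V_FREE * (1.0 - k / K_JAM)

def flow(k: float) -> float:
    return k * speed(k)   # vehicles per hour per lane

for k in range(0, 251, 25):
    print(f"density {k:3d} veh/mi   speed {speed(k):5.1f} mph   flow {flow(k):6.0f} veh/h")

# Flow peaks at k = 125 veh/mi and v = 35 mph in this toy model. Past that
# density, every added car lowers both speed and total throughput, which is
# the positive-feedback regime described above.
```

(The absolute flow numbers this toy model produces are optimistic compared with real lane capacities of roughly 1,800-2,200 veh/h, but the shape of the curve, and the fact that the downhill side past the peak is the unstable regime, is the point.)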
It doesn't really matter - in the real world everybody is tailgating. Drivers need to maintain 3 seconds between cars for safety reasons (the GP used 2 seconds, which safety experts have not declared too little). However, drivers instead maintain more like 0.5 seconds between cars. As such, if people actually kept safe distances, nearly every city would need about 5 times as many lanes as it currently has by the time people finally start demanding "just one more lane"! 5 times - that puts Houston-level freeways in Des Moines.
If you maintain the proper following distance at maximum capacity, then when there is an issue you can momentarily drop to 1 second (while braking) and expand back to normal afterwards, with no effect on traffic.
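A rough sanity check on that multiplier, ignoring vehicle length and imperfect packing (both of which pull the ratio down toward the quoted ~5x):

```python
def lane_capacity(headway_seconds: float) -> float:
    """Vehicles per hour per lane at a fixed time headway, vehicle length ignored."""
    return 3600 / headway_seconds

print(lane_capacity(3.0))                        # 1200 veh/h at the recommended 3 s gap
print(lane_capacity(0.5))                        # 7200 veh/h at the ~0.5 s gaps people actually keep
print(lane_capacity(0.5) / lane_capacity(3.0))   # 6.0x, in the ballpark of "about 5 times"
```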
I'm going to double down on this point because it does matter. In the real world some people are tailgating, but that does not cause traffic jams as long as the road is running below peak capacity, where negative feedback self-corrects those density deviations and gradually smooths everything back out. (Unless the tailgater causes a crash, of course.)
But if the average following distance puts the road exactly at the peak of throughput, or anywhere past it, then any momentary dip into tighter following distances pushes the road into a positive-feedback operating mode, which triggers a traffic jam.
Interesting - I have believed for many years that it was around 17 MPH. I felt that this tallied with my observations: as traffic levels increase, vehicles slow down (increasing total capacity) until speed falls to a critical value (below which slowing down further reduces capacity), and then traffic changes to stop/go.
In my experience (on UK roads) this critical speed is around 17 MPH - but it might be a little different elsewhere.
Freeway capacity chokes right at the limit: overpacked lanes slump out of a metastable state. Systemic control, whether ramp meters or I-66-style (in the Washington D.C. metro area) real-time dynamic pricing, lets freeways flow near peak capacity if properly implemented.
Many metro areas have roads that regularly suffer from acutely insufficient capacity; specifically, at certain times _the dynamic toll that would discourage enough people from using the road to keep it uncongested_ would exceed the off-peak price of a rental-with-driver (Uber-style) trip.
It's not that those people shouldn't get through; it's that most of them don't need more than a backpack's worth of luggage with them and could thus be packed 3-4 passengers per driver.
Splitting the toll would be the reason to do so.
Unfortunately, only genuinely dynamic congestion tolls would really do away with rush-hour traffic jams. And the necessary monitoring system would bring severe mass-surveillance/tracking concerns with it, at least in central Europe.
I don’t think the surveillance needs to be cameras; a combination of radars (for velocity) and induction loops or axle-counting rubber hoses (for counting) should do the trick.
The problem is less the full-coverage congestion measurements this scheme would need, and more the billing of almost all vehicles on almost all roads that could be used to bypass major roads on congestion-relevant commutes: main thoroughfares/arteries, pretty much anything you'd visually classify as a "highway" (unless it's a dead end), and the Autobahn/Interstate network.
Without cameras:
How do you do the billing then? Like, what else is there other than ALPR, or the basically-privacy-equivalent RFID tag/token stuck to the windscreen (correlated with a camera or similar to catch vehicles with inoperable RFID tags)?
If you also covered old-built urban cores, you could further penalize the driving-in-circles tactic for avoiding multi-story parking garages, which hopes either to find a surface spot during its brief empty lifetime, or even to stall until a former passenger has run an errand and can be picked up again.
That's more of an illustration that throughput isn't something people want than an illustration that throughput is higher at 75 than at 35.
You see a lot of blanket assumption, in discussions of traffic, that throughput should be maximized, and almost no examination of whether increasing throughput is a goal that makes any sense.
For example, working from home has catastrophic effects on throughput.
It’s throughput in combination with demand. That’s why Chicago has reversible lanes on the major highways. Inbound and outbound throughput needs vary during the day.
> What the cap on throughput is, given these speed limitations, is an exercise left for the author of the article.
They already did that exercise:
> 3-car trains running at 30-40 trains per hour (a normal peak frequency for automated or even some human-driven metro lines) reach a capacity of about 18,000 passengers per hour per direction, well above the expected demand of any American line that doesn’t go through Manhattan.
40 trains per hour is in fact not "normal", but extremely difficult. Only a few systems in the entire world operate more than 30 per hour.
The fundamental constraint is not technology, but people and physics: you need to decelerate and stop, let people disembark and get on, accelerate and clear the platform. This cycle requires a bare minimum of 90 seconds, although IIRC a few lines in a few places like Paris and Moscow do 85 secs.
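The arithmetic behind that ceiling, as a sketch (the ~150 passengers per car is inferred from the article's own numbers, not stated by it):

```python
MIN_CYCLE_S = 90        # decelerate, dwell, accelerate, clear the platform
CARS = 3
PAX_PER_CAR = 150       # inferred: 18,000 pphpd / 40 tph / 3 cars

max_tph = 3600 / MIN_CYCLE_S              # 40 trains per hour per direction
capacity = max_tph * CARS * PAX_PER_CAR   # 18,000 passengers per hour per direction
print(max_tph, capacity)                  # 40.0 18000.0
```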
Indeed, the Victoria line in London manages 36 TPH and we've not bothered beating it since. It's much easier to run 26-30 TPH with slightly more carriages.
> the Victoria line in London manages 36 TPH and we've not bothered beating it since
That was a world record for a line following modern safety standards, set less than 10 years ago. It's hardly a case of "not bothered", it's just hard.
90 seconds is very possible on new-build lines, which is what the author is talking about. You can buy a turnkey Innovia (e.g. Vancouver SkyTrain) or AnsaldoBreda (e.g. Copenhagen) system that does this out of the box. Retrofitting 90 s operation is basically impossible, but that's not the point of this exercise.
Yes, they are assuming a best-case scenario. Driverless systems are very expensive for reasons that have little to do with the cost of the driverless trains themselves; if you're not going to consider those variables, this kind of armchair speculation is a waste of everyone's time.
They aren't though? If you're building a new line, fully driverless is pretty much the default these days, especially if the line is fully underground or elevated.
What is incredibly expensive, though, is retrofitting a line designed for manual operation to run automatically instead.
Well, a lot of systems that were initially designed for automatic operation still end up being operated manually or partially manually due to safety concerns or politics. Washington DC Metro and BART are the two big systems I can think of that had this issue.
Both are examples of Great Society metros that were on the bleeding edge of what was possible in the early 70s. Automatic train control advanced rapidly afterwards, with both the Vancouver SkyTrain and the London Docklands Light Railway being built in the 80s and operating driverless for their entire existence.
DC Metro just recently re-enabled full automatic train operation across all the lines in June.
I don’t think anybody has tapped into the forbidden magic trick of having CBTC broadcast position and velocity instead of position alone. For vehicle ACC at least, there is a safe region of velocity and following distance where a following train can plan to enter the space the lead train currently occupies.
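A minimal sketch of the difference that broadcasting velocity makes, assuming a flat 1 m/s² brake rate and a 50 m safety margin (neither is how any deployed CBTC system is actually parameterized):

```python
BRAKE = 1.0     # assumed uniform brake rate, m/s^2
MARGIN = 50.0   # assumed fixed safety margin, m

def brick_wall_gap(v_follower: float) -> float:
    """Position-only rule: assume the leader could stop instantly, so the
    follower's entire braking distance must fit inside the gap."""
    return v_follower ** 2 / (2 * BRAKE) + MARGIN

def relative_gap(v_follower: float, v_leader: float) -> float:
    """If the leader's velocity is broadcast too, only the *difference* in
    braking distances is needed (never less than the margin)."""
    return max((v_follower ** 2 - v_leader ** 2) / (2 * BRAKE) + MARGIN, MARGIN)

v = 22.0  # both trains at ~80 km/h
print(brick_wall_gap(v))       # ~292 m
print(relative_gap(v, v))      # 50 m: the follower can plan into space the
                               # leader still occupies but is about to vacate
```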
I wonder if it's possible to run trains at higher speeds closer to each other using fixed brakes embedded near the tracks, similarly to how roller coasters often have mid-course brake runs that are only activated in emergencies when the train ahead unexpectedly slows or stops.
We're at the point where we could fit a large enough fraction of a train's length with track brakes (https://en.wikipedia.org/wiki/Track_brake) to pull around 4 g of deceleration, if we covered the underside in a chain of them (strong, but flexible enough to just wiggle over humps in the track).
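For scale, a sketch of what that would mean for stopping distance, assuming ~0.1 g for conventional train emergency braking (a made-up but typical order of magnitude):

```python
G = 9.81
V = 22.0   # ~80 km/h, in m/s

def stopping_distance(decel_g: float) -> float:
    return V ** 2 / (2 * decel_g * G)

print(stopping_distance(0.1))   # ~247 m with conventional emergency braking
print(stopping_distance(4.0))   # ~6 m with the hypothetical 4 g track brakes
```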