Is it possible to automate the job of an ATC controller? At least partially? Or at least just as a sanity check on every human decision? Not saying I want human ATC controllers replaced, but if there’s a severe staff shortage, I feel like a computerized version is better than nothing at all.
In this specific incident, there was a system in place called Runway Entrance Lights [0] that does serve as an automated sanity check on controllers' commands. The surveillance video that is circulating shows that the system was working and indicated that the runway was not safe to enter. It's not clear yet why the truck entered the runway anyway.
I wonder if they thought that since they were responding to an emergency, and they had been given clearance to cross by ATC, that would override normal procedures. Kind of like how emergency vehicles run red lights all the time when responding to an emergency.
It would be interesting to know whether that rule was onerous enough in practice that they had little choice but to break it in order to do their jobs effectively. They were responding to an emergency, seconds count, and they believed they had clearance from the controller.
> The surveillance video that is circulating shows that the system was working and indicated that the runway was not safe to enter.
A citation, please? The only video that I know of is [1].
[2] is my best mock-up of the only video I have. I'm not an expert, but my best read of that is that the RWSL is maybe? green to the taxiway¹ traffic, so, to me, the actual status of the RWSL at the time of the incident is "unknown"; that seems like something I should wait for the NTSB report on, or at least for someone with expert knowledge. But your claim doesn't jibe with the evidence I have, so that's what makes me ask for a citation.
¹but I think there are a number of problems with my own interpretation: I could be wrong about which lights are which; I am using the near-side lights, not the lights on the side the truck is entering from, and assuming them to be symmetrical (though what little I can see of the far side does seem to align with the near side); some of the lights I think are RWSLs & not RGLs look downright yellow, but that could be a property of the low quality of the video; there's the rather large problem of the plane on the runway that must then be explained.
Sorry for the late reply, and sorry for linking to reddit, but it was the first place I could find the right video. I saw this video linked at least a dozen times in the hours following the incident. I believe the clip you linked to is from the same video, but with the beginning cut off.
At approximately 1.5 seconds in my link, you can see one set of lights perpendicular to the runway turn off. I admit the lighting/colors are not as crisp as I would like, but the lights that turn off are positionally consistent with the Runway Entrance Lights, and the time at which they turn off (approximately 2 seconds before the plane enters the intersection) is also consistent with the operation of the Runway Entrance Lights system.
Furthermore, if the system was not operational it should have been NOTAMed as such, and I can find no such NOTAM so my default position is that the system was operational.
The fire station was located on the opposite side of runway 4 from the United plane. To avoid crossing the runway would mean having to travel a few extra miles around the thresholds (I assume).
I guess they could have found a route that wouldn't conflict with landing aircraft, but I doubt that's a practical option most of the time.
You sometimes have end around taxiways that are at one end of the runway and can be used when active. But that could be a massive diversion.
I know that Heathrow has multiple fire stations and rendezvous points for emergency services, so that the fire service can attend even when one runway is closed to crossing. This could be needed to allow continued operations following a crash. It allows them to accept emergency landings more easily whilst maintaining emergency service to another active runway.
The RELs are part of ARFF training. Pilot training on them is also clear. The system is automated: it plots the direction and speed of anything approaching the runway and predicts a conflict. If the RELs are red, it is HIGHLY likely there is a conflict that human error has missed, and you should not proceed without confirming. Don't just confirm you're cleared to cross; explicitly tell the controller, "XYZ tower, we have red runway entrance lights. Please confirm runway XX is clear".
The system is smart enough that if you get red bars for a departing airplane, once it passes your position the red clears, because the system knows the airplane is already past you. It is not dumb: it was deliberately designed to minimize false positives so everyone would trust it; otherwise they might ignore it when it really counts. (AFAIK it is in fact very accurate, so the firetrucks weren't crossing because they distrusted the red lights.)
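A minimal sketch of the kind of conflict prediction described above. All function names, thresholds, and numbers here are my own hypothetical illustration, not the actual RWSL implementation:

```python
# Hypothetical sketch of runway-entrance-light conflict logic.
# Everything here (names, the 8-second buffer) is illustrative,
# not the real RWSL system.

def time_to_point(distance_m: float, speed_mps: float) -> float:
    """Seconds until a mover covers distance_m at speed_mps."""
    if speed_mps <= 0:
        return float("inf")
    return distance_m / speed_mps

def lights_red(vehicle_dist_m: float, vehicle_speed_mps: float,
               aircraft_dist_m: float, aircraft_speed_mps: float,
               buffer_s: float = 8.0) -> bool:
    """Show red entrance lights if the ground vehicle and the aircraft
    would occupy the intersection within buffer_s seconds of each other."""
    t_vehicle = time_to_point(vehicle_dist_m, vehicle_speed_mps)
    t_aircraft = time_to_point(aircraft_dist_m, aircraft_speed_mps)
    return abs(t_vehicle - t_aircraft) < buffer_s

def conflict(aircraft_past_entrance: bool, *args) -> bool:
    """A departing aircraft that has already passed your entrance is no
    longer a conflict, matching the 'red clears once it passes you' case."""
    return False if aircraft_past_entrance else lights_red(*args)
```

The point of the narrow time buffer is the false-positive minimization mentioned above: the lights only go red when the predicted occupancy windows actually overlap, not whenever anything is anywhere near the runway.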
This is just like all aviation incidents and indeed most incidents of any kind: the holes in the swiss cheese lined up.
The emergency aircraft couldn't find a free gate, creating a massive distraction for ATC, airport, et al. This is probably the primary domino that started the sequence. Had a gate been free this incident would not have happened. One big hole lined up.
Normally the aircraft would visually see the truck or the truck would visually see the airplane. But it was dark and rainy. Another hole lined up.
Everyone involved was rushing because noise abatement requires the airport to close at a certain hour. Thus everyone wanted to take-off or land before that shutdown. Another hole.
Normally the controller wouldn't issue the clearance to cross or their supervisor monitoring behind them would notice the error and override. But the controller and/or supervisor were distracted by the emergency. Another hole lined up.
The controller realized the error and issued a stop command but the fire truck proceeded anyway; they may or may not have heard the transmission. Another hole lined up.
Then someone else decided to jump on frequency during this busy time (we don't know who just yet) which may have prevented the controller's stop and/or go-around commands from being heard (another hole lined up).
The ARFF crew did not obey the RELs, accepting the clearance instead. Perhaps they thought the red lights were due to the aircraft on short final and they still had time to cross? Perhaps it was some other misunderstanding of how that system works. Another hole lined up.
And the Air Canada jet was not paying attention to the chaos on frequency. There's a reason runway crossings are typically done on tower frequency: so aircraft can hear what is going on. But it was late at night and their brains probably didn't process what was happening. Or they were too close to touching down to have the bandwidth. Another hole lined up.
> The emergency aircraft couldn't find a free gate, creating a massive distraction for ATC, airport, et al.
Yes. And I want to add one more thing to this: the airplane with the "odour" issue was kinda ambivalent about the danger. They deemed it dangerous enough to declare an emergency, request a gate, and later ask for airstairs, but not dangerous enough to pop the slides and just evacuate right there and then. I'm not saying this is wrong. Obviously they were evaluating the situation as new information was coming in. But it increased the workload of the ATC, who were trying to find a gate, and so on. If it had been a clearer "mayday mayday mayday, aft cabin fire, we are evacuating", that might have been paradoxically less "work" for the ATC. Or at least more of a "practiced" scenario.
> Perhaps it was some other misunderstanding of how that system works.
Yeah. That's a big one. Total speculation but maybe they thought the airplane with the "odour" issue was keeping it red?
A great deal of ATC relies on automation, such as systems like ASDE-X, which is used at LGA. ASDE-X uses radar and vehicle transponders (among other things) to detect collision hazards on the ground. Unfortunately ASDE-X only works if every vehicle has a transponder.
#2 only works if the public is allowed to invest when the new technology is in its early stages, which is currently not the case. Microsoft went public in 1986 at a valuation of $2.3 billion (in today's dollars). What's OpenAI / Anthropic going to be worth by the time they IPO? $1 trillion? $2 trillion?
If everyone had kids at 18-20, then the grandparents could take care of the grandkids in their 40s while the parents build their careers from 20-40; then the parents start taking care of their own grandkids as the cycle repeats.
And then you end up raising your grandkids instead of the kids you gave birth to. It's not something that comes without cost. And what if you don't particularly trust your parents to raise kids? I suppose you would have no idea whether you did or not, because they would not have parented you...
People's 40s and 50s are their most productive years. We would be better off just letting people take 10 years off in their twenties, but most people would just party, party, party (which is what they do anyway).
Those four grandparents could end up with anywhere from 1 to 8+ grandkids, though, depending on how many children they had and how many grandchildren come along.
have you heard of people not surviving into old age, or not being present or not being able to take care of kids? What the fuck is wrong with people in this comment section?
> have you heard of people not surviving into old age
not really; overall life expectancy is now well over 80 years. unless you live, like, in the woods and feed off berries and hunting or something like that.
and yeah sure there might be somebody that loses their parents at 15, absolutely. i'm sorry for them, but they are not statistically representative in any way.
The tenacity part is definitely true. I told it to keep trying when it kept getting stuck trying to spin up an Amazon Fargate service. I could feel its pain, and wanted to help, but I wanted to see whether the LLM could free itself from the thorny and treacherous AWS documentation forest. After a few dozen attempts and probably 50 kWh of energy it finally got it working; I was impressed. I could have done it faster myself, but the tradeoff would have been much higher blood pressure. Instead I relaxed and watched YouTube while the LLM did its work.
Plug a new Chromecast into one of the HDMI ports, use that and only that, and weld the input setting shut so that you never have to deal with the TV's default UI ever again.
I use an Amazon FireTV Stick on my old non-smart LG TV, and the advantage is that the FireTV has a simple, cute little remote. There is a nifty setting in the Amazon FireTV UI that allows its remote to turn the TV on and off too.
So it's been a long time since I had to wrestle with the TV's built-in OS.
I just use the pleasant UI of the FireTV Stick to watch Netflix, Prime, Disney+, etc. on that decade+ old TV. That FireTV becomes sluggish if I keep multiple apps open, so I have learnt to exit out of an app before switching to the new one.
I may get a new FireTV stick this year, rather than splurging for a new TV, since the old TV is still doing well.
As the Americans say:
If it ain't broke, don't fix it.
I have this setup, and the Firestick UI is horribly slow. Sometimes it takes 30 seconds or more for it to give any response to a button press. It's worst when I'm trying to watch something on Amazon Prime, to the point that I hardly watch that anymore because the UI is so annoying.
This sounds like either your FireTV stick is too old or your TV is.
My LG TV is more than a decade old (non-smart LED TV), the FireTV stick is around 6 years old.
But apart from the FireTV stick (whose remote controls the TV too) taking 15 seconds for a cold start (the TV tends to go to deep sleep mode after idle for long time or when switched off via remote), or 5 seconds for a warm start, the FireTV GUI is quite snappy thereafter (I can briskly move the cursor/selection across icons/thumbnails, menus and apps), till I switch it off again. Netflix, Disney+, Amazon Prime, Discovery+, Apple TV - they all work well on this old setup.
You may want to uninstall some apps on the FireTV stick to give it some breathing space when it runs.
Try the FireTV stick on a PC monitor with an HDMI input. If you face the same issues there, then it may be time to buy a new FireTV stick or Chromecast, or splurge on a new smart TV.
There are some things it's really great at. For example, handling a css layout. If we have to spend trillions of dollars and get nothing else out of it other than being able to vertically center a <div> without wrestling with css and wanting to smash the keyboard in the process, it will all have been worth it.
Not really sure what's so crazy about that. A brick and mortar shop will spend way more than that on renting a good location for their business when they have no clue whether they'll turn a profit. This is just the digital equivalent of that. People trust authoritative domains like vidaliaonions.com way more than something like vidaliaonions-direct.net and they're given more SEO weight as well. At least I know that used to be true; not sure how true that is today but I'd imagine it still is.
Yeah, exactly. Go price the equipment it takes to rig out a new upstart plumbing biz (truck/van, all the hardware, insurance, etc.). Starting a web business is insanely cheap, even with a couple grand spent on a domain.
I am probably going to get downvoted to oblivion for this, but if you’re going to have AI write your code, you’ll get the most mileage out of letting it do its thing and building tests to make sure everything works. Don’t look at the code it generates - it’s gonna be ugly. Your job is to make sure it does what it’s supposed to. If there’s a bug, tell it what’s wrong and to fix it. Let it wade through its own crap - that’s not your tech debt. This is a new paradigm. No one is going to be writing code anymore, just like almost no one checks the assembly output of a compiler anymore.
This is just my experience. I’ve come to the conclusion that if I try to get AI to write code that works and is elegant, or if I’m working inside the same codebase that AI is adding cruft to, I don’t get much of a speed up. Only when I avoid opening up a file of code myself and let AI do its thing do I get the 10x speed up.
What is the age of the longest-lived, actively developed and deployed codebase where this approach has been successful so far and your co-maintainers aren't screaming bloody murder?
My friends and I have always wondered as we've gotten older what's going to be the new tech that the younger generation seems to know and understand innately while the older generations remain clueless and always need help navigating (like computers/internet for my parents' generation and above). I am convinced that thing is AI.
Kids growing up today are using AI for everything, whether or not that's sanctioned, and whether it's ultimately helpful or harmful to their intellectual growth; I think the jury is still out on that. But I do remember that when I was growing up in the 90s, spending a lot of time on the computer, older people would remark how I'd have no social skills, wouldn't be able to write cursive or do arithmetic in my head, wouldn't learn any real skills, etc. It turns out I did just fine, and now those same people always have to call me for help when they run into the smallest issue with technology.
I think a lot of people here are going to become roadkill if they refuse to learn how to use these new tools. I just built a web app in 3 weeks with only prompts to Claude Code - I didn't write a single line of code, and it works great. It's pretty basic, but it probably would have taken me 3+ months instead of 3 weeks doing it the old-fashioned way. If you tried it once a year ago and have written it off, a lot has changed since then, and the tools continue to improve every month. I really think that eventually no one will be checking code, just like hardly anyone checks the assembly output of a compiler anymore.
You have to understand how the context window works, how to establish guardrails so you're not wasting time repeating the same things over and over again, force it to check its own work with lots of tests, etc. It's really a game changer when you can just say in one prompt "write me an admin dashboard that displays users, sessions, and orders with a table and chart going back 30 days" or "wire up my site for google analytics, my tag code is XXXXXXX" and it just works.
The thing is, Claude Code is great for unimportant casual projects, and genuinely very bad at working in big, complex, established projects. The latter of course being the ones most people actually work on.
Well, either it's bad at it, or everyone on my team is bad at prompting. Given how dedicated my boss has been to using Claude for everything for the past year, and the output continuing to be garbage, I don't think it's a lack of effort on the team's part; I have to believe Claude just isn't good at my job.
I was going to try having an AI agent analyze a well-established open source project. I was thinking of trying something like Bitcoin Core or an open-source JavaScript library, something that has had a lot of human eyes on it. To me, that seems like a good use case, as some of those projects can get pretty complex in what they're aiming to accomplish. Just the sheer amount of complexity involved in Bitcoin, for instance, would be a good candidate for having an AI agent explain the code to you as you're reviewing it. A lot of those projects are fairly well-written as they are, with the higher-level concepts being the more difficult thing to grasp.
Not attempting to claim anything against your company, but I've worked for enterprises where code bases were a complete mess and even the product itself didn't have a clear goal. That's likely not the ideal candidate for AI systems to augment.
Frankly, the code isn't messy whatsoever. There's just lots of it, and it's necessarily complex due to the domain. It's honestly the best codebase I've ever worked with. I shudder to think what nonsense Claude would spew trying to contextualize the spaghetti at my last job.
As context size increases, AI becomes exponentially dumber. Most established software is far, FAR too large for AI. But small, greenfield projects are amazing for something like Claude Code.
This is why I argue that the impact of LLMs is in the tail. It's all the small to midsize shops that want something done but don't have the money to hire a programmer. It's small tasks, like pushing data around, or writing a quick interface to help with day-to-day work in niche jobs and technical problems. It's the ability to quickly generate prototype logos and scripts for small-scale ad campaigns, to solve Nancy's Excel issue, etc. Big companies have big software and code stacks with tons of dependencies. Small shops have little project needs that solve significant issues facing their operations but will likely never grow large enough for scaling, maintenance, or integration to ever become problems at all. It's a tail, but it's long across small to midsize businesses. In research labs, where I have personal experience, AI is rapidly making feasible more ambitious projects, quicker timelines, and generally better code.