Fischgericht's comments | Hacker News

NOT true. They have NOT fully reversed on this. Please read:

https://kb.synology.com/en-global/DSM/tutorial/Drive_compati...


The article is about the actual policy change deployed with DSM 7.3, which only just started rolling out. Your link hasn't been updated in over two months, so it doesn't reflect that yet.

Edit: Updated KB article is here: https://kb.synology.com/en-us/DSM/tutorial/Drive_compatibili...


This is not "my" link. That link is part of their press release.

Because I am no longer sure people are all getting shown the same content for that URL, here is what is shown to me (no local caches or proxies):

Hard disk drives (HDD) & M.2 NVMe solid-state drives (SSD)

Series: FS, HD, SA, UC, XS+, XS, Plus, DVA/NVR, and DP

Details: Only drives listed in the compatibility list are supported.


Here is the updated KB article (note “en-us” vs. “en-global”): https://kb.synology.com/en-us/DSM/tutorial/Drive_compatibili...

In particular: “At the same time, with the introduction of DSM 7.3, 2025 DiskStation Plus series models offer more flexibility for installing third-party HDDs and 2.5" SATA SSDs when creating storage pools. While Synology recommends using drives from the compatibility list for optimal performance and reliability, users retain the flexibility to install other drives at their own discretion.”

NASCompares confirms that no warnings are shown: https://www.reddit.com/r/synology/comments/1o1a32m/testing_s...

I agree that the information is still a bit muddled right now.


Ahahaha.

I can confirm that if I change my Accept-Language headers in my browser from "en" to "en-US" I get the other version of that page. Actually, for everything else I tried other than "en-US" I get the evil version.
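For anyone who wants to reproduce this without fiddling with browser settings, here is a minimal sketch using Python's requests package (the KB URL is shortened above, so substitute the full link; the check simply looks for the "users retain the flexibility" wording that only the updated variant contains):

    import requests  # third-party package: pip install requests

    URL = "https://kb.synology.com/..."  # placeholder: paste the full KB article URL here

    for lang in ("en", "en-US", "de"):
        r = requests.get(URL, headers={"Accept-Language": lang})
        # crude check: this phrase only appears in the updated version of the page
        updated = "users retain the flexibility" in r.text
        print(f"{lang}: HTTP {r.status_code}, updated wording present: {updated}")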

Synology press team Achievement unlocked: Confuse all global IT press outside of the United States.


Last self-reply on this, I promise:

If I had to GUESS, here is the explanation for this incorrect story:

AFAIK there is no SATA SSD vendor left on the market besides some left-over stock put into enclosures by some Chinese companies. This means Synology no longer has the option to force you to buy "compatible" SSDs, because they themselves can not source them.

So my GUESS (not backed up by proper research) is: they had to lift this requirement quietly because they themselves made it impossible to follow their extortion instructions.


Thanks for sharing.

It seems like they want to make sure NAS units are running NAS-grade drives instead of consumer-grade (SMR) drives, which can have serious issues when rebuilding an array after a drive failure.

If customers buying inappropriate drives for NAS use, with the blame eventually blowing back on Synology, is a driver of this, it could be handled differently.


Nah, not really. They already have a compatibility page of known-good drives and they recommend people stick to it. They could also have an incompatibility list showing known-bad drives, and alerting if you install one of them.

If I put junk tires on my Toyota, I don’t blame Toyota. But if Toyota used that as an excuse to make it impossible to use third party tires, I guarantee you my next car purchase wouldn’t have that same limitation.


Your Toyota analogy doesn't hold up. If a customer puts SMR drives into their NAS, they are absolutely going to call Synology and complain. And they are going to have to re-explain this over and over, because most people don't understand nascent HDD writing modes the way they do a vehicle tire. Even then, an appropriate analogy would be a tire that is cheap and new but refuses to spin above 25 mph.

First, I don't think that's true. It could even be a FAQ on their website:

Q: Why is my brand new WD drive so slow in my NAS?

A: Because they lied to you and sold you junk. Here are the details...

It would be very easy to push the blame onto the vendor, where it belongs, because the defect is 100% with the drive and not at all with Synology. They don't have any control over it. Synology could even automate this. Whenever you insert a drive that isn't on their compatibility list, it prompts you with a message to make sure you want to proceed. They could very easily make that popup say something like "WARNING: THIS HARD DRIVE MODEL IS DEFECTIVE. WE STRONGLY URGE YOU TO REMOVE IT AND REPLACE IT WITH A DRIVE ON OUR COMPATIBILITY LIST."

But in any case, dealing with those support requests has to be way cheaper than the enormous financial and reputational loss they seem to be taking from this boneheaded move.


SMR drives aren't defective though. They have a capacity and they are capable of storing at that capacity. They just can't keep up with the throughput requirements of a NAS. And remember, the WD SMR scandal was because they weren't being forthcoming about that limitation. I fully support Synology's move to lock in drives. I think it's the tech crowd that got it wrong... mostly. Synology should have sweetened the deal and, along with the lock-in, offered cheaper prices with proof of purchase of the DiskStation.

Incorrect.

Western Digital deceptively sold, and charged a premium for, WD Red drives marketed as CMR NAS drives when they were not CMR.

Western Digital didn't withhold anything about SMR being good or bad.

Western Digital confesses some WD Red Drives use SMR without disclosure:

https://www.tomshardware.com/news/wd-fesses-up-some-red-hdds...

I know several folks who bought these drives as NAS drives, for NAS use, when they were not all the same. Folks could have just bought SMR drives from WD, but specifically bought NAS drives.

Western Digital's denial, and the fact it took a class action lawsuit, were enough that WD no longer sells WD RED, only WD Red+ and WD Red Pro.

SMR drives don't work well for NAS use. SMR is useful for things other than NAS storage, which is on all the time.

Rebuilding a NAS array takes a lot longer with SMR drives compared to CMR, because the tracks overlap so much. SMR drives used in a NAS configuration seem to fail more too.

Building any kind of NAS with SMR drives is asking for trouble and pain. I guess SMR drives could be proactively replaced; you would need to factor that into the cost / TCO.


Deceptive? Sure. Defective? Nah. Evidence of deception in the market further reinforces why Synology made the right move initially.

They're defective by design when advertised as NAS drives. It was impossible for them to work as users expected given their construction. It wasn't defective in the sense that there was a manufacturing flaw that made some of them fail, but in the sense that it was inherently unfit for purpose. If you design a car's brakes to fall off when they get hot so as to protect the braking system at the expense of the car, even if it works as designed, it's still defective.

I don't know how to reply to the rest. If you think it's a good idea for Synology to make their systems not work with even known-good drives from reputable manufacturers, I don't think there's likely to be a common ground we can find to discuss it further.


All Synology has to do is pop up a dialog box: "Warning: Bad idea. Don't blame us if this drive ruins your life. <Proceed> <Cancel>"

That's all they had to do.


In no way am I sticking up for Synology. I'm not a customer of Synology.

Customers should have absolute control over what drives they want, it's their choice to put crappy tires on their car or not.


Just commented here to point out that this spreading news story is wrong (and that other IT news outlets have since corrected/retracted it). I don't have any eggs in that basket, but:

Discussions on their reasoning happened back when they introduced the extortion fees. No, it's not about NAS-grade drives. They are just re-labelling existing NAS drive models, putting their own sticker onto them. The original manufacturer's identical NAS drive model is then listed as incompatible.

There is nothing remotely connected to actual technology involved in this story at all. This is a sales-strategy-only subject.


I'm not a customer of Synology. I don't agree with justifying forced purchase of a relabeled product.

They deserve the result of their decision and of not understanding their customers. They could have just started a separate enterprise line for whatever they wanted to force, if they didn't have one already.

Enterprise brands like HP, etc, to my last experience, do sell white-labelled drives, but don't bar you from using those same drives yourself.

My lack of trust remains with the parts that will fail the most - hard drives.

Hard drive manufacturers don't have the best history; see Western Digital lying to their customers about drives being CMR when they were actually SMR. That would be my reason for never accepting a forced relabelling of a drive.


Any example of such damaging blowback that is ravaging other NAS vendors who carelessly allow their users to recklessly use inappropriate drives?

Exec summary for those who think their time is not worth this evil madness:

The only change is that they now allow you to use any 2.5" SATA SSD. Everything else, meaning 2.5" SATA HDDs (by far the most common thing you would want to use) and NVMe SSDs: still a no-no.

No, there was no lesson learned here by them at all.

The linked article specifically is wrong here:

"Third-party hard drives and 2.5-inch SATA SSDs"

No, not hard drives. 2.5" SSDs only.

Very sorry to spoil the party, but sadly Synology STILL hasn't learned the lesson. :(

Let's check again after they have lost 95% of their customers...


> DS Plus series (DSM 7.3):
> HDD
> Not Listed
> Supported for:
> New installation and storage pool creation

This is the main change. Other series (not Plus) are still locked down.


Regardless, the fact that this is even a question or has ambiguity/uncertainty in any way is enough to get me to never use a Synology NAS.

What are you talking about? This is a quote directly from the page you've linked to:

> At the same time, with the introduction of DSM 7.3, 2025 DiskStation Plus series models offer more flexibility for installing third-party HDDs and 2.5" SATA SSDs when creating storage pools.


I do not know if we are getting served different content for the same URL. Here is the content I can see, with a server-side timestamp of 2 seconds ago:

Hard disk drives (HDD) & M.2 NVMe solid-state drives (SSD)

Series: FS, HD, SA, UC, XS+, XS, Plus, DVA/NVR, and DP

Details: Only drives listed in the compatibility list are supported.


Looks like you are indeed getting an older page. Try https://archive.is/8aUdC

As German IT news media has retracted the "Synology reverses" story based on the content they are reading in the press release link, I suspect there is some Geo-stuff involved here (I tested this from multiple German IPs now and always get "the other version").

> Last updated:Jul 30, 2025

Appears to be stale documentation.


Incompetency.

Good to get this fact about the Synology organization.


I do remember reading a couple of years ago that the Windows UI team got replaced and now consists only of Mac users who have never used Windows themselves.

If that is true, it's no wonder that they do not understand all the value that Windows NT has brought, or why having a standard for menu structure, a standard for all UI controls etc. made sense. Or that while Apple's mission is to provide a walled garden, Windows has been and is used in a million different scenarios. Taking away options will ALWAYS hit some of your customers. And there is a gigantic number of applications where you want local system accounts only. Yes, dear Microsoft, computers without an Internet connection do exist and are a common thing.

For us it's Win10 IoT LTSC, so we have updates for a couple more years, and by then hopefully the last remaining software and hardware we have will be usable with Linux.


I think this change (and everything in Windows 11) is being driven by the MS Account PM watching telemetry and making number go up.

Their telemetry data didn't seem to help them figure out how important the start menu is for users. I doubt it's going to help them really do anything else either. They might have the data, but they're not using it.

I was at Microsoft during Windows 8, and the decision to remove the start menu was made with telemetry data first and foremost.

"Only 3% of users regularly use the start menu." was the justification.

Then they did a bunch of research with eye tracking software to justify the new 'start screen', saying that it was actually better for users who do use the start menu, because they were able to locate an item on the full screen overlay faster than on the traditional start menu.


I think they forgot what people actually use the start menu for: accessing things not immediately available on desktop/taskbar. People don't use it to explore/play around. They're looking for something.

So of course they figure they can insert ads for their own shit, make it a Bing website search, whatever. People don't want a fucking internet search. They're looking for things on their computer.


I feel like these people have a fundamental misunderstanding of how human memory works. We have way better spatial memory than memory for words. The start menu is one of the better tools for that, because it can contain a lot of information (applications) based on where the user puts them, and it's easily accessible. You don't have to minimize other applications, like you do if you want to access the desktop, and the start menu loads instantly (well, it did until Windows 11/before ads).

You're not supposed to "look" for where something is, you're supposed to know. Just like you're not supposed to look for the X in the top right corner of an application. You know it's there and you know that if you move the mouse into the corner you can click it (which is very infuriating when some UI design decides that the top right corner pixel does not count for pressing the X button). The start menu in the bottom left worked in a similar way.


Yeah, but all of that was supposed to apply to the Windows 8 start screen also

I would wager that most users that left telemetry on are fine with whatever changes Microsoft makes to the operating system and user interface, and that most people that turned telemetry off are the ones which want and need a good start menu and did not want those changes.

Not sure. If they actively read that telemetry data, they would notice that, due to their actions, the market share of Win11 is shrinking, not rising.

But maybe they are holding the telemetry graphs upside down? ;)

And, obviously, a Windows system not connected to the Internet will not give you Telemetry, so this part of your customer base is invisible to you. As a PM, you would have to actually talk with your actual customers to learn about it.

Or they could have just done a survey where customers can vote on what they want. I assume that "Half of the OS settings dialogues now apply changes the moment you click a checkbox, without an OK / Cancel button; and the other half of the OS allows you to review your changes and revert them in one go if you want" would not have won that vote.

It's just sad seeing this great NT system getting crippled and ruined by actively making it harder to use and limiting choices.


The W11 market share isn't shrinking though. A few statistics-tracking websites show that, but there are plenty of reasons those numbers would go down for W11. Nobody (<0.05%) is buying machines and installing W10.

[For those who are not into the LTSC IoT stuff: Basically it's a decrapified Win10 with support and security updates until January 13 2032. Yes, 2032.]

I am seeing the exact opposite. It's not just that my tiny company has completely moved to Win10 Enterprise LTSC IoT, but every newly bought computer gets Win11 nuked and that installed. In Germany (shady) resellers of Win10 LTSCblabla licenses are popping up.

Pretty much everyone in the embedded electronics industry that has to use Windows is doing ass covering right now by buying the LTSC licenses while you still can.

The departures board at your airport or train station is not going to be replaced just because M$ claims that Win11 is incompatible with it. It will be moved to LTSC, if it hasn't been on it for a long time already. Same for ATMs, the strange machine my dentist uses together with her drills, etc.

Of course I have no clue how/if Win10 LTSC market share is or can be detected at all. But from inside the embedded electronics industry I can say: Panic buying of Win10 LTSC licenses going on.

Not a contradiction to what you wrote, by the way: "Nobody" *) is buying Win10 Pro or Ent anymore. But they are buying LTSC in heaps, according to sadly only anecdotal evidence.

*) Well, not in their online shop, but if you ask, you can very well still buy new Thinkpads with Win10 installed from Lenovo, for example.


One of the screens on the Vancouver SkyTrain platform crashed once and displayed the xfce logo, which made me feel a bit of excitement in wondering how it was implemented.

It's crazy they don't even have a toast sort of notification for checking a box. Some visual flair to let a user know "this was successful"

Engagement numbers went up and to the right because it requires multiple infuriating clicks and keystrokes to do basic things. Start menu randomly resorted your apps? 2 more clicks to find the app you wanted!

> And to understand that while Apple's mission is to provide a walled garden, Windows has been and is used in a million different scenarios.

You're conflating the vertical integration of hardware and software (Apple's walled garden) with Microsoft's current direction (you can't use Windows without MS online services).

Microsoft has never given a damn about customers being free to use the software the way they want to. In light of how the company is behaving today, the "openness" of Windows WRT to hardware was clearly only about market share.


That was impressively delusional.

> having a standard on menu structure, a standard for all UI controls etc

You mean all the stuff Apple brought to personal computers?

By the way, you can use a Mac (and iPhone) without an Apple ID and there's no sign that this is changing.



In what way do you think that's relevant? Large portions of that standard had already been abandoned in the Windows 95 era. Nowadays, approximately nobody uses Shift-Insert for Paste, and most laptop users wouldn't even know where to find an Insert key without hunting for it.

I use shift insert to paste all the time.

Thanks, I couldn't have said it better myself:

> The detailed CUA specification, published in December 1987, is 328 pages long. It has similarities to Apple Computer's detailed human interface guidelines (139 pages). The Apple HIG is a detailed book specifying how software for the 1984 Apple Macintosh computer should look and function. When it was first written, the Mac was new, and graphical user interface (GUI) software was a novelty, so Apple took great pains to ensure that programs would conform to a single shared look and feel.

Windows NT came out in 1993 by the way.


Did not mention facts to counter you but to provide... facts.

My parent comment, by the way, also did not claim anyone invented anything, but that Windows once HAD and FOLLOWED human interface guidelines that made the system optimized to be used by... humans. While now MS is fighting its human users.

But to give you feedback: Sometimes it is nice to sit on a shady park bench on a Sunday without an apple fan boy running by with a loudspeaker "AND DID YOU KNOW? JOBS INVENTED BENCHES!!!".


The HN story is about whether you can log in to Windows without a Microsoft account.

You then went on a rant about how the reason Windows stopped having standard menu structures is because the Windows UI team now only consists of Mac users.

Even though you don't need an Apple ID to log in to macOS, and those standard menu structures come from Apple in the first place.

Thanks for the feedback. Your communication style is schizophrenic and melodramatic and I still have no idea what you actually think.


I googled a little... You can use an iPhone without an Apple ID, but you cannot install any apps. I wouldn't even call this "using".

The original article was about Windows, which is equivalent to Mac OS not iPhone.

Anyway there is a lot you can do with the default apps. But yes you can’t use the App Store without an Apple ID.


Offer self-hosted and I would buy.

Do not assume that companies are willing to put ALL of their intellectual property into your hands. Even if you were not some startup where any sysadmin could steal and sell my data at any time without you even noticing, you will get hacked just like everyone else that stores interesting data. The data you have access to is absolutely perfect for the global data blackmailing gangs. As soon as you are successful, you will have every black hat hacker and their dog knocking on your doors.


Onyx.app has a self hosted option. I just did the docker setup yesterday. It’s not a great home user option imo but seems like it’s functional for enterprise.

Just had a quick look - while they have that self-hosting option, they still assume you will use a cloud LLM. I started digging because I got confused by them not mentioning any GPU when it comes to resource requirements. There is some documentation on using it fully self-hosted including the LLM, but the emphasis here is on "some".

To be clear: I am looking at this from a CEO perspective, not a "I will play with it in my spare time" nerd one.


Going off on a tangent here but, does anyone have a good guide on how to set up an AI/LLM/GPT chatbot/agent for a small business?

Not looking to spend millions but a couple thousand are alright. TIA


Open-webui works well for us: https://github.com/open-webui/open-webui

Hmm, I thought this would be a competitive space already, i.e. integrating LLM chat bots into any website.

Yep, makes a lot of sense. We architected our system to be easy to self-host & open-source in the future for this very reason, though we decided to launch with hosted because it's easier to improve and iterate.

Understood. Not my startup, but I would have started the other way round.

Businesses that would be willing to pay (a lot) for such a benefit often will be very conservative. In Germany the majority of medium sized businesses using SAP for example still refuse to be moved to SAP's cloud instead of on-premise.

C-level types typically are not worried about putting their email credentials etc. into the Outlook cloud and getting hacked this way. They are used to "everything is in the cloud". However, as soon as you mention, depending on the type of business, "patents", "sales contacts" or "production plans", the C-levels will change their mind.

In Germany, where I originally come from, all of these businesses are worried about their trade secrets ending up in China, and rightly so.

As self-hosting is very complex, you could either make good money with consulting (but this means setting up tech teams in all target markets around the globe, using actually competent humans), or by selling it as a plug&play appliance, with that appliance simply being a rack server with a suitable GPU installed.

And again, for your business strategy the long-term risk of pretty much everyone trying to hack you on a daily basis appears too high to me. You might not have on your radar how serious industrial espionage is. You will definitely have a fake utility company worker coming into your offices, trying to plug a USB keylogger into some PC while nobody is looking.

As an example, a proven strategy: find the target's internet uplink. Cut it. The customer calls their ISP for help. You then send a fake ISP technician who arrives before the real one does. You put a data exfiltration dongle between the modem and the LAN. You then fix the cut outdoor line. The customer is happy that you have fixed it. Later the actual ISP guy arrives. Everyone will be a bit confused that the problem was already fixed, but then agree that it's probably just the ISP once again having screwed up their resource management. Works pretty much every time.


> You put a data exfiltration dongle between the modem and the LAN.

Sounds interesting, and could be used in a movie, but it doesn't look like it is practically applicable in real life. You will have a hard time making sense of the data without full-MITM'ing with SSL decryption, installing your CA certificate on all machines and browsers on the LAN, and solving the certificate pinning problem.

A USB keylogger may be a simpler solution even though it can't sniff the whole LAN.


Well, as this is standard practice the movie would be a ... documentary? ;)

I wasn't clear enough here: the device at this point typically lets you see all devices on the LAN and WLAN at L2. Which means you can do ARP spoofing and all that kind of stuff. One of the first things you would then look at is what printers are available to infect. People often print interesting things :)

And yes, of course the USB keylogger is the cheap lazy solution. These days, due to second factors, it is not as useful as it used to be, but still... you can deploy it in seconds in pretty much every office, shop or governmental institution.

But to not further drift into off-topic:

I am serious about all this. Should Grapevine be successful and, for example, one day put out a press release like "Procter & Gamble is now using our services", you will have, in addition to state actors (China, Russia, Israel), a thousand kids looking up that P&G makes a profit of $15 billion or whatever per year, and figuring that they surely will pay 1% of that for not having all of their company data published.

If you look at existing knowledge management systems that are deployed in physical-world companies, you will see that they actually are not allowed to index all the data. You would be running up against a lot of laws and management best practices if, in the next coffee break, everybody were laughing about poor Tony, who once had a really stupid concept, created a draft document of it, then noticed that it wouldn't work and would make him look like a fool... He thought not giving it to his manager would solve that "problem", but it got indexed as company knowledge.

So, erm, yeah: existing knowledge management systems to a large extent are about NOT sharing knowledge.

Sorry for this raw brain dump of mine into this thread :)


Two things to think about:

a) Due to privacy laws, nobody in a European country would right now be allowed to use your service. The data your customers want to index will always contain stuff that allows a human to be identified, and once you are there it's basically "game over" for handing over data to a third-party provider like you.

b) My organization is tiny. But we are in a sector where we must be ultra paranoid when it comes to security. We do not use a single external service whatsoever; everything is self-hosted. I would love to be able to AI-index all of our collected knowledge and would pay for the value this provides. So far I have been unable to find any plug & play solution. The open source nature you have mentioned is important so that your system's security can be validated, but in the end I would rather want to pay for it being plug&play AND on-premise AND open source.


Consider allowing customers to deploy into their own AWS/Azure infra as a managed service. Your CI/CD can reach the deployment and you will be one step closer to enterprise customers.

u/eambutu, any timeline for the self-hosted version?

Also willing to buy.


controlcore.io was brought to the market for the exact same reason. Not AI-powered, but to control AI and its interactions with your data, APIs, applications etc. And yes, we offer our service as a self-hostable solution. However good the encryption and SOC compliance may be, we want our clients to know that none of their internal data or interaction transactions leave their control.

[Disclaimer: Not a hater, just a Nerd looking at data.]

And just as Tesla's stock goes up whenever there are reports about them no longer selling cars, or being years behind on self-driving tech and robotics... if Starlink were publicly traded, their stock would now shoot way up.

On a more serious note: if analysts did their job, they could have found out years ago that Starlink will never ever be profitable, just as no sat ISP in history ever has been. All of them always have been, and still are, funded with taxpayer money.

Why is that? Simple maths.

Including R&D and launch cost and expected usage time, the TCO of one of their satellites will be somewhere in the area of $2,000,000. One of them in theory has a peak speed of 100 GBit/s. If you overbook the link by a factor of 10, as is common for an ISP, that gives you 1,000 Gbit/s to sell.

So in the best case, over the lifetime of the system you will make a revenue of 1,000 * $100 * 36 months. So you end up somewhere in the area of $3,600,000. Yes, that is more than $2,000,000, but well, there are a couple of billions of investments and investor money here to be paid back one day.
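(The same back-of-envelope maths as a tiny Python sketch, with the rough assumptions from above spelled out; adjust the numbers as you see fit:)

    # rough per-satellite assumptions from the paragraphs above
    TCO = 2_000_000              # USD: R&D + launch + operations over the sat's lifetime
    PEAK_GBPS = 100              # theoretical peak capacity per satellite
    OVERBOOKING = 10             # common ISP oversubscription factor
    PRICE_PER_GBPS_MONTH = 100   # USD charged per sellable Gbit/s per month
    LIFETIME_MONTHS = 36         # assumed usage time

    sellable_gbps = PEAK_GBPS * OVERBOOKING                            # 1,000 Gbit/s to sell
    revenue = sellable_gbps * PRICE_PER_GBPS_MONTH * LIFETIME_MONTHS   # ~$3,600,000
    print(f"lifetime revenue ${revenue:,} vs TCO ${TCO:,}")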

"But why are you only assuming a usage time of 3 years?"

While Musk's idea of rapid R&D cycles is fine for Software, it's extremely expensive. The "Oops, the Sat-to-Sat links are not working, so we now have to build base stations everywhere and can not do load distribution" might have cost Starlink something like $10 BILLION? I guess I would have tested my stuff first before launching it. With now two generations of Starlink sats already being outdated and/or falling from the sky, the "in two weeks" promises from Musk don't make me very confident that Starlink v3 will actually be properly tested prior to polluting space with their buggy trash again.

But let's restate it in a much simpler way: a currently used commercial fiber cable can do 800 GBit/s, so eight times that of a Starlink satellite. Real-life data has already proven that the lifespan (outdated transceivers etc.) is somewhere around 5-8 years, with the biggest risk being your cable getting cut. The cable itself costs virtually nothing. Due to this, "developing" countries have mostly decided not to lay fiber underground. In Thailand for example, the fiber cables are simply thrown onto houses and through the jungle, as replacing them is dirt cheap. Anyway: if you map this to the 3-year TCO as above, then compared to the TCO of $2,000,000 for Starlink, for fiber you are looking at something in the area of $10,000 instead. It's a no-brainer.

Real-life proof: I live on a tiny and very, very remote island in Asia. Some people used to have Starlink here. But due to their satellites now being massively overbooked, speeds went down month by month. So people noticed that it is actually cheaper to run 10 KILOMETERS / 6 miles of fiber cable through the jungle. And on this tiny remote island there are three fiber ISPs to choose from. Two of them offer 1 GBit/s for $13 per month, and if you want a business service, for $40 you can get 2 GBit/s down / 1 GBit/s up. And unlike Starlink, those ISPs are profitable.

You have to be EXTREMELY remote for sat internet to make sense. No, not rural USA. Fiber will be cheaper. No, not Africa. Fiber through the desert will be cheaper. Sat internet may make sense if you live in the Arctic or on Mount Everest or something like that. Or Mars. In all other cases the TCO of fiber will win.


> "But why are you only assuming a usage time of 3 years?"

Your entire analysis rests on this point, which you fail to demonstrate. (You also cite zero sources, which isn't encouraging.)

(EDIT: This assumption is conservative, but reasonable.)

Was this AI generated?

> The cable itself costs virtually nothing

Did you attempt to look up the cost of laying new fibre trunk?

> due to their Satellites now being massively overbooked, speeds went down months to months

Then this isn't a remote location. Starlink's economics have been pretty obvious for anyone who has been on a plane, boat or train in the last decade. They're also terrifically useful for remote mining, observation and military operations.

> people noticed that it is actually cheaper to run 10 KILOMETERS / 6 Miles of Fiber cable through the jungle

Well sure, if you ignore negative externalities a lot of stuff is cheap.


Wow. Well, I believe that YOU are a bot, not me. Are you Grok?

Anyway, yes, I am a human.

And it is not that hard to find the sources for this point:

https://en.wikipedia.org/wiki/List_of_Starlink_and_Starshiel...

The v1 constellation was completed in 2021 and decommissioned from 2024. v2 deployed from 2023, but the sat-to-sat communication is not working, so all of them will need to be replaced by v3, too.


The sat-to-sat laser links are used to provide connectivity on the open ocean and in remote parts of Australia and Argentina that are beyond the range of any ground station. They're definitely working but AFAIK they are only used when necessary so if you're within range of a ground station your traffic will never use laser links.

Oops, forgot one important thing: Sure, why do additional hops if you can see the base station. But what about shared state? Why do you definitely still get a completely new session when moving to the next sat? If the laser links are working, that state should be shared between neighboring sats.

Inter-satellite links simply provide additional (time-variant) paths, which doesn't inherently relate to shared state.

You seem to be under the impression that inter-satellite links somehow imply a self-organizing mesh topology that preserves terminal-to-gateway associations at any cost (including that of extra in-space hops), but that does not necessarily follow from the existence of ISLs.

In other words, your observation of occasional routing instability causing higher-layer issues is perfectly compatible with working ISLs.


Accepted.

Starlink switches beams every 15 seconds and satellites every 120 seconds.

You keep your sessions through both. Lasers or no lasers


> Why do you definitely still get a completely new session when moving to the next sat? If the laser links are working

Imagine Amazon 10x'd its ingress/egress fees between regions.


You're not getting new sessions period.

I will not disagree as I can not verify this claim. Have you tested it yourself or have a source which has some tech proof on that one?

Why do you believe the inter-satellite links are not working?

[Due to the part of the spectrum I am on, I do not have beliefs or opinions.]

The laser-based inter-satellite links still not working has been a subject at various conferences like AngaCOM etc.

But in my case: I have simply tried it *). And every Starlink user can do it, too: Use traceroute. And if you think "they might be hiding the hop-to-hops between Sats!", you can dig deeper using MTR behind the modem or simply rooting the modem itself.

Last time I have connected to a v3 Sat however was ~6 months ago. Maybe an active user reading this can try today?


You're equating occasional dropouts (which can happen for all kinds of reasons even in bent-pipe topologies) with the absence of inter-satellite links. That makes no sense.

The empirical way to test for the existence of ISLs would be to go to the middle of an ocean, safely out of reach of any ground station, and see what happens. If you get a connection, that can only be due to ISLs.

It seems like your actual complaints are with network/routing stability, and you're drawing invalid conclusions from there.


Do you have a link to a blog or writeup regarding the inter-links not working? Hard to find it without getting lost in "Troubleshoot your starlink device" SEO hell.

> Do you have a link to a blog or writeup regarding the inter-links not working?

The simpler answer is intra-constellation communication is a bleeding-edge technology. It's an extraordinary challenge for which extraordinary proof is needed to show success, not the other way around. SpaceX has solved most of the gating technical problems. But getting it to work reliably enough that it becomes more economic than ground-based backhaul will take time.


> intra-constellation communication is a bleeding-edge technology

Iridium has been successfully doing it for a quarter of a century now.


An even simpler answer is Starlink is available in locations too far from ground stations.

Ergo they are served via laser.

Cook Island

Ascension Island

Iran

Venezuela

Cuba

Galapagos/Easter Islands

Vanuatu

Eastern Ukraine

Syria

Lebanon

Iraq

Iqaluit

Antarctica, as South as the South Pole

Tristan de Cunha

The range of the ground stations are under 1500 miles and I really don't know where people are getting the idea that the lasers don't work.


"The range of the ground stations are under 1500 miles and I really don't know where people are getting the idea that the lasers don't work."

Maybe because v1 and v2 did not even have working lasers on the hardware level...?

The idea is coming from "reality", Starlinks own reporting, industry talks, tech press etc.

Anyway, to shorten this, we can agree that we have different definitions of what one expects from having a dedicated backbone. I would expect seamless handover amongst other things, which I have never ever seen, and unless you show me a video recording of a 24h Starlink session with MTR running, I will simply trust the data I have over a random claim.

As said elsewhere in this thread: it is extremely hard to find detailed benchmarks from happy Starlink users. Next to all positive content is paid content. And a quick look at Trustpilot & co clearly hints that a huge chunk of Starlink customers might be unhappy. Even if it's just because their online gaming sessions get interrupted on every sat hand-over, which exists in reality, but not in your mind :)

Seriously, if you have access to any benchmark data sources, please gimme. I'm not here for "winning" an argument. Data, Data, Data.


I've pointed out where you can get the data from a professor.

You can also fly to one of the many Islands (or Iqaluit) I've mentioned and do your testing.

https://www.pcmag.com/news/starlinks-laser-system-is-beaming...

It's quite fascinating: there are people whose only (or first) experience with Starlink is via lasers, and there are people on the Internet who'll tell you it doesn't work (I forgot to mention Georgia and Kazakhstan).

You really will hear everything on the Internet.


You are aware that you are giving more weight to photos of a PowerPoint presentation than to actual data points?

Sorry, that's not a path I am willing to follow. Religion is not my cup of tea.


I've given you links to the data you want and someone that can get it for you (and has monitoring stations).

I've also mentioned several places you can travel to do your own testing.

If you want to believe Tristan de Cunha or the Falkland Islands or Bhutan or Antarctica can be served by anything other than lasers, (get a map) I don't know what else to call that but religion.

I'm not giving more weight to anything. I've pointed you in the right direction but I don't think even internal tools from SpaceX would change your mind even though geography should be enough.


Please talk to "panuvic": pan &a@t& uvic.ca. He's a networking prof.

Here's the data you've been asking for and dodging like I didn't post.

https://old.reddit.com/user/panuvic/submitted/


Serious question: If you are not willing to entertain anybody else's data or observations, why should anyone entertain yours?

Is there any input whatsoever that could change your foregone conclusions? If not, what's the point of this discussion?


I don't even want him to entertain my data or observations.

He can go to, or recruit a user in, territories that can only be served via laser in 2025 (I've listed at least 10 so far), install a computer that boots to desktop, and remotely test his theories.

His base argument is no one can hold a phone call or run a game without interruptions every 120 seconds on Starlink if they are served via lasers (two more countries, Kyrgyzstan and Mongolia) not because of packet drops but because of session terminations. Now I can understand the possibility of broken middle boxes, but the thing is we'd see the effect in our applications before even bringing out any network analysis tools.

He's also skipped the users doing BGP over the lasers. Doesn't BGP have sessions?

Why did this country https://m.facebook.com/watch/?v=602234895877639&vanity=61560... switch to a product that's so bad it's unusable


> Maybe because v1 and v2 did not even have working lasers on the hardware level...?

Did those islands have Starlink until lasers?


He's pretending not to understand geography or distances.

I'm very fascinated especially given the existence of community gateways on several islands that can only be served by lasers regardless of the presence of ground stations.


Here is an example thread of someone having done the measurements of v3 vs mini:

https://www.reddit.com/r/Starlink/comments/1eg4e4d/starlink_...

Have a look at the downtimes of the system.

A simple way to verify that their inter-sat links are not working and/or are not used is to simply sit and wait: if you are switched from one sat to the next, you get a new "session" and the previous NAT state is lost. If this were a meshed backbone, that would not happen.
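If anyone with an active dish wants to produce a 24h trace of this, a minimal sketch (assuming the public api.ipify.org echo service and the requests package; it just logs whenever the visible public IP changes or the request fails):

    import time
    import requests

    last = None
    while True:
        try:
            ip = requests.get("https://api.ipify.org", timeout=5).text.strip()
        except requests.RequestException:
            ip = "unreachable"  # dropout: request failed or timed out
        if ip != last:
            print(time.strftime("%Y-%m-%d %H:%M:%S"), ip)
            last = ip
        time.sleep(10)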


This is ridiculous.

How's service delivered to the South Pole?

Iqaluit?

As long as your traffic is terminated at the same POP, you won't get any session terminations.

And Starlink tells you when your public IP changes anyway


Erm. That person has posted a detailed explanation on how he has measured.

How can this be ridiculous? Is it ridiculous because the data does not match your beliefs...? Confirmation bias?

It's data. And it hints, amongst other things, that they have seen the same thing that I am seeing on every single Starlink installation I have got my hands on so far: there is no active handover, and no shared state between sats.

And you are referring to the wrong layer, talking about the ground station. Of course that does not move, and does not forget about your IP. Wrong layer.

It's about the satellites (!) not doing an active handover and not sharing L2 state, as would be the case for any meshed network, whether cellular or WiFi. The analogy here would be a WiFi access point or a cell tower, and you roaming from one to the next while having a phone call, without any drop-outs. That's the industry standard for wireless. Starlink isn't there (yet).

If you don't think that true data is true, check the ARP table and watch the MAC of your gateway IP change after a handover.
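For reference, the kind of check I mean, as a minimal sketch (assumes a Linux client sitting directly behind the Starlink router and that 192.168.1.1 is your gateway IP; adjust as needed):

    import time

    GATEWAY_IP = "192.168.1.1"  # placeholder: substitute your actual gateway IP

    def gateway_mac():
        # read the kernel ARP table; columns: IP, HW type, flags, HW address, mask, device
        with open("/proc/net/arp") as f:
            next(f)  # skip the header line
            for line in f:
                fields = line.split()
                if fields[0] == GATEWAY_IP:
                    return fields[3]
        return None

    last = None
    while True:
        mac = gateway_mac()
        if mac != last:
            print(time.strftime("%H:%M:%S"), "gateway MAC:", mac)
            last = mac
        time.sleep(5)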

You appear to be a happy Starlink user - so do you care to share some 24h benchmark with us to prove your claims? I would highly appreciate that!

So far, sadly, none of the "But it works!" people have been able or willing to provide a benchmark of their own setup.

Again: I am not here to win an argument. But to change my conclusions, I need data that hints at my conclusions potentially being wrong. As explained elsewhere in this thread, due to the lack of serious benchmarks, most of this is based on anecdotal data points.


There are lots of users in laser-only areas doing RTC via VoIP or video. Some of them are on ships in the Persian Gulf (look up SEA-2: https://www.wired.com/story/us-navy-starlink-sea2/).

That person has posted an install in a moving vehicle with the antenna inside.

Do you think I can't read or what?

The ground stations don't handle user IP addresses or even IP packets.

They are strictly layer 2 from the user perspective and traffic is terminated at POP's.

Here's a scientist that does actual work, not the nonsense you've been posting as "data".

https://www.reddit.com/user/panuvic/submitted/

I gave you his email.

I've pointed out to you where you can travel to test laser service yourself.

I'm not saying "but it works"

I pointed out to you places that can't be served without laser connectivity.

It's like asking me to prove the earth isn't flat.


And this link from Wired is about something completely unrelated: getting more stable coverage by using multiple different providers. It does not even mention lasers.

You also clearly do not know what Layer 2 of the ISO/OSI model is.

But you are in total rage mode.

Triggered because the actual data invalidates what your cult says? :)

Sorry, I will ignore you from now on. Again: religion is not my cup of tea, and neither are bold claims on PowerPoint presentations; I prefer to use data. We simply do not share a model of the world compatible enough to discuss these kinds of things. No harm done, but no thank you :)


You can't get service to them on Starlink without lasers.

I gave you more than one link. I gave you an email to a prof.

I started working at a Telco at 13 and got my CCNA there ages ago within a summer break.

You've demonstrated a lack of understanding of basic geography.

Go get your money back from your tutors.


That person is measuring periodic packet drops (which can also be caused by e.g. having an incomplete view of the sky), and you're drawing unsupportable conclusions on "session drops" from that. (The word "NAT" does not even occur in the observation thread you've linked!)

So, yes, confirmation bias.


> I believe that YOU are a bot

I don't believe you were a bot, but there were one or two phrasings that gave me pause. (If I believed you had written that with AI, I'd have just asked that and not bothered engaging.)

> v1 constellation was completed in 2021, and decommissioned from 2024. v2 deployed from 2023, but the sat-to-sat communication is not working, so all of them, will need to be replaced by v3, too

Fair enough. $3.6mm on $2mm--assuming $100,000 per month revenue and $2mm paid up front, which is unrealistically conservative--yields a 22% annualised. Take that out to the increasingly-attained design life of 5 years and it jumps to 25%. To put it bluntly, these are both incredibly high telecom returns.

You've already incorporated launch, maintenance, disposal, et cetera in TCO. So the remainder is customer service (usually 5 to 10% of revenue) and cost of capital. Even assuming 10% WACC, which is on the upper end for a leveraged telecom play, we're still comfortably generating excess return.
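For reference, a quick sketch of where the 22%/25% figures come from (treating lifetime revenue as a simple lump sum against the $2mm TCO, which if anything understates the return of a monthly revenue stream):

    TCO = 2_000_000            # assumed per-satellite total cost, paid up front
    MONTHLY_REVENUE = 100_000  # assumed: 1,000 Gbit/s of oversold capacity at $100/Gbit/s/month

    for years in (3, 5):
        revenue = MONTHLY_REVENUE * 12 * years
        annualised = (revenue / TCO) ** (1 / years) - 1
        print(f"{years} years: revenue ${revenue:,}, annualised return {annualised:.1%}")
    # prints roughly 21.6% for 3 years and 24.6% for 5 years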

Where the comparison falls apart is in respect of fibre. Laying physical infrastructure is hard. You have long periods between capital outlay and return. Also, you have to scale right up front--you can't just launch more birds in a few months as demand scales (or hold them back if it doesn't).

You're not going to replace fibre with Starlink. But the economic case for the latter doesn't fall apart with 20%+ operating returns.


Well, on purpose I have given Starlink very optimistic numbers, yes. :)

And yes, a 22% yield sounds nice, but if someone handed me their pitch deck and gave me a SWOT analysis, I would just laugh them away: the risks are far too high.

(See for example the article that this very thread is about.)

Of course you can only guess based on that, but it looks like in real life things are worse:

https://arstechnica.com/space/2025/02/starlink-profit-growin...

These data points might be interpreted as "Starlink is getting 40% of their revenue from tax money".

And while "7 million subscribers" might sound impressive on first sight: This is the number of DSL connections subscribed to in the tiny country of Belgium. But for magical reasons Starlink is valuated at a price higher than if you would buy all of Belgium ;)

Your point regarding laying physical infrastructure is valid for a lot of Western countries. But not all of them. Some countries in the EU, for example, years ago created laws that say whoever opens up the street for any reason has to put in empty conduits for someone to later pull fiber through before closing the street again.

So: this is really a regulatory subject, not one of physical cost. Fiber is dirt cheap if you are allowed to use existing power poles, for example (which, unlike with copper, is obviously not a problem in regards to signal integrity), or existing underground pipes, or to just throw it from house roof to house roof.


> I have given Starlink very optimistic numbers

Your revenue figures are consumer only. And while you're generous on utilization factor, we capitalised the TCO up front while amortising revenue, and then reduced asset tenure to worst case observed during development.

Flex up to 4 years, let $1mm TCO be paid up front and the rest amortised, and reduce utilisation to 80% ($80k/month revenue) and IRR shoots up to 73%. Take TCO to $3mm ($1mm up front, $2mm amortised), reduce utilisation to 75% and we're still over 20%.

> while "7 million subscribers" might sound impressive on first sight: This is the number of DSL connections subscribed to in the tiny country of Belgium. But for magical reasons Starlink is valuated at a price higher than if you would buy all of Belgium

Well, yes. Starlink connections are more profitable and you can't scale selling internet to Belgium into a Starshield defence contract. Or selling to airlines and cruise ships and yachts and mining operations, all of which pay more than a Belgian.

> some countries in the EU for example years ago created laws that say that whoever opens the street for any reasons has to put in empty tubes for someone to later put in fiber before closing the street again

Starlink doesn't make sense in densely-populated areas of the EU or Asia. (And the equivalent for SpaceX would be ridesharing Starlink on someone else's flight.)

> Fiber is dirt cheap if you are allowed to use existing power poles for example

If you have the scale. You're underestimating the risk that comes from having to place infrastructure up front.

Your analysis is pretty solid. But I don't think it's taking into account the fact that you can build multibillion-dollar telecoms business on a few tens of millions of high-paying customers.


I guess we can agree that the comparison between sat internet and physical links depends a lot on the physical situation in the target region, and on the regulatory framework.

And please keep in mind that while you are right that there is a risk in investing in physical infrastructure, the same also applies to Starlink. It's worth remembering here that all sat internet companies prior to Starlink failed and needed to be rescued with taxpayer money.

I don't have exact numbers, and it's a bit muddy due to state subsidies, but in Germany the average cost to connect a subscriber in a medium-density town with fiber, given that nothing was prepared and you have to open up the street etc., appears to be in the region of €/$2,000 or so.

I don't know if that is done in the US, but also in Europe we now do "trenching". It has some downsides and pitfalls, but this reduces the upfront infrastructure cost for fiber massively.


> while you are right that there is a risk investing into physical infrastructure also applies to Starlink

Absolutely. It's why I think assuming the WACC of a highly-leveraged telecom (around 10%) is appropriate.

> this reduces the upfront infrastructure cost for fiber massively

Fibre makes sense where there is density. It's higher capacity and cheaper. That doesn't mean it makes sense everywhere. And a lot of that everywhere will pay a lot of money for connectivity.

The global telecom market generates trillions of dollars of annual revenue [1]. There is a lot of fruit for the picking.

[1] https://www.grandviewresearch.com/industry-analysis/global-t...


> Was this AI generated?

It's crazy to me that people use AI to generate comments for social sites of all things, but here we are.


I find it even more crazy that you can no longer comment on HN without someone trying to invalidate valid points by claiming you are not human. :)

To be honest, while I took it lightly, others might feel pretty insulted by such claims. De-humanizing someone stinks.


> you no longer can comment on HN without someone trying to invalidate valid points by claiming you not being human

I made this mistake, but I'll defend it by pointing out that I've gone a few comments deep on HN, thinking through and citing and engaging in good faith, only to realise I wasn't talking to a human but to a bot. (Then the commenter gets defensive about using a bot, hallucinations and all.)

Instead of taking it as a personal insult, maybe interpret it as your comment having inspired someone to engage effortfully with what you said.


>are funded with tax-payer money

This has nothing to do with profitability. DoD/War Dept contracts are "tax payer money" and shareholders are happy to have those.

>it is actually cheaper to run 10 KILOMETERS / 6 Miles of Fiber cable through the jungle

Cheaper, sure. But try getting this approved in the US through a County Planning Commission. And you did get NEPA/CEQA done too right?

>No, not rural USA. Fiber will be cheaper.

My not-that-rural town has fiber in only 80% of town. Houses with city sewer/water don't have fiber.


All of this is regulatory stuff. Your state has the option of making it expensive and a PITA or not.

In my former home town in Germany we had the exact same thing as you are describing: fiber available everywhere up to 20 meters away from our house, and no chance to get it connected. For purely regulatory reasons.


True. And starlink is a way to bypass all your/my local regulatory hurdles. They had to deal with several very large regulatory hurdles, and then they're golden. No dealing with every little town separately.

Not true really. You will hit regulatory hurdles if your rockets explode in other countries too often :)

And: RF spectrum is HIGHLY regulated.

Also, 4 weeks ago they spent 17 BILLION USD on buying ~30 MHz of spectrum in the 2 GHz range. 30 MHz translates to a total bandwidth capacity of about 300 MBit/s.

Yes, you have read that correctly: 17 Billion for 300 MBit/s.


So you're telling people to live with bad/no Internet connection now (due to local regulations) because of hypothetical future problems with their viable alternative in the future?

Easy advice to give from the outside, especially (presumably) from a place with great fiber options.

> Also, 4 weeks ago they spent 17 BILLION USD on buying ~30 MHz of spectrum in the 2 Ghz range. 30 MHz translated to a total bandwidth capacity of about 300 MBit/s.

That's L-band spectrum for direct-to-device services, which comes at a heavy premium due to its advantageous physical properties and inherent scarcity (the entire L-band has fewer Hz of spectrum than what Starlink alone is already using in the Ka band). Ka-band spectrum is much, much cheaper. You're comparing the cost of real estate for factory/campus on a green field hours away from everyone with that of a high street storefront.


> The "Oops, the Sat-to-Sat links are not working, so we now have to build base stations everywhere and can not do load distribution" might have cost Starlink something like $10 BILLION? I guess I would have tested my stuff first before launching it. With now two generations of Starlink sats already being outdated and/or falling from the sky

You don't seem to understand their strategy: Constant replacement is a feature, not a bug, to them.

And in that paradigm, why wait any longer than absolutely necessary with any given launch? The problem is already fixed – at least inter-satellite links seem to be working well enough now (as evidenced by global coverage on the oceans).

> Starlink will never ever be profitable, just as no Sat ISP in history ever has been.

How do you explain the non-zero stock price of e.g. Iridium and Viasat?

> You have to be EXTREMELY remote for Sat internet to make sense. No, not rural USA. Fiber will be cheaper.

Are you sure laying fiber to every last home is really more capital efficient in the long term? Have you done the math on that side too?

And what about mobile coverage? Even solar-powered low maintenance cell stations need to be installed, repaired after storms, have their solar cells dusted off etc.

> No, not Africa. Fiber through the desert will be cheaper. Sat Internet may make sense if you live in the artic or on mount Everest or something like that.

Mount Everest has pretty good cell signal, as far as I know. It's a tiny area, compared to actually remote but still (sparsely) populated regions.


Due to the nature of the business I am in, I know Viasat's customer base very well. They are too important to fail for multiple European military organizations.

As discussed elsewhere in this thread, the inter-satellite links still do not seem to be enabled. I can not verify this myself due to not having a yacht and/or time, but I am constantly flying between Asia and Europe with various airlines, and so far none of them have switched to Starlink but keep paying the outrageous pricing from Viasat & co.


> Due to the nature of the business I am in, I know Viasat's customer base very well. They are too important to fail for multiple European military organizations.

So there is demand :)

> As discussed elsewhere in this thread, the inter-satellite links still do not seem to be enabled. I cannot verify this myself due to not having a yacht and/or time

Are you arguing that everybody reporting successfully using it far away from land is part of some conspiracy? How else would SpaceX get away with claiming that they have global coverage?

> I am constantly flying between Asia and Europe with various airlines, and so far none of them have switched to Starlink but keep paying the outrageous pricing from ViaSat & co.

Installing a new satellite terminal on the outer hull of a commercial aircraft costs millions, including the lost time spent in the hangar, and that's to say nothing about all the required certifications.

That said, Hawaiian Airlines have been using it for a few months now. Seems to be working great, and their routes are also definitely not possible to cover from LEO without inter-satellite links.


No conspiracy, but let's say that it is rather hard to get proper benchmarks done by actual users, and one has to rely on a lot of anecdotal data. Have you seen any real-life benchmark reports with traceroutes, measured downtime, handover times etc. that impressed you in a positive way? If so, please share.

Hawaiian Airlines - very interesting. Sadly wrong side of the planet for me to test it myself :)

It might very well be that the inter-satellite links are only used for special customers like airlines for now, and not for consumers, and that this is the reason all the people I know who use Starlink still see handover downtime...


"Handover downtimes" for stationary or mobile users? If they're stationary, that's not something inter-satellite links are needed for or would help with.

You are very wrong here:

Right now Starlink claims to be operating a mesh, but they are not. If they wanted to build a mesh, inter-sat links would NOT be used to pipe bandwidth through to the "best" base station. They would be used for shared state, to be able to prepare a handover. Syncing state is obviously much easier and more stable if the neighboring sats can talk directly, instead of sharing it over their slow, high-latency and lossy base stations.

See IEEE 802.11r for the Wi-Fi equivalent.
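
(A toy sketch of what I mean by "shared state" - nothing Starlink-specific, all names are made up:)

    # Toy illustration: seamless handover needs the *next* cell to already hold
    # the client's session state (NAT/L2 info) before the switch happens.
    class Cell:
        def __init__(self, name):
            self.name = name
            self.sessions = {}                  # client_id -> state

        def prepare_handover(self, client_id, target):
            # "make before break": share state ahead of time over a direct link
            target.sessions[client_id] = self.sessions[client_id]

        def handover(self, client_id, target):
            if client_id not in target.sessions:
                raise RuntimeError("state lost -> client sees seconds of downtime")
            del self.sessions[client_id]

    a, b = Cell("sat-A"), Cell("sat-B")
    a.sessions["client-1"] = {"nat": "...", "l2": "..."}
    a.prepare_handover("client-1", b)           # only possible if the cells can talk
    a.handover("client-1", b)                   # seamless: sat-B already has the state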


…what? Where do you see the claim that they are running a mesh? Why would they do that?

The main point of inter-satellite links is to provide coverage to areas beyond single-hop (subscriber to satellite to ground station) coverage. (Theoretically they can also be used to provide extremely low latency intercontinental routing, but for most traffic, the goal would be to minimize routing in space.)

Since the entire constellation is known a priori, all paths can be precomputed centrally, just like in a non-moving network, and that routing information can then be propagated to terminals and satellites. There’s no need to dynamically make complex “mesh” routing decisions at the edge.
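
(Toy illustration of the precomputation idea - the topology and weights below are entirely made up:)

    import heapq

    # One snapshot of the constellation as a static graph; edge weights could be
    # propagation delay in ms (terminal->sat, sat->sat laser, sat->ground links).
    graph = {
        "terminal":  {"sat1": 3},
        "sat1":      {"sat2": 10, "gateway_A": 4},
        "sat2":      {"gateway_B": 4},
        "gateway_A": {},
        "gateway_B": {},
    }

    def shortest_path(src, dst):
        # Plain Dijkstra: fine, because each snapshot is just a static graph.
        pq, seen = [(0, src, [src])], set()
        while pq:
            cost, node, path = heapq.heappop(pq)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, w in graph[node].items():
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
        return None

    # Recompute per epoch (the ephemerides are known), then push the tables out.
    print(shortest_path("terminal", "gateway_B"))   # (17, ['terminal', 'sat1', 'sat2', 'gateway_B'])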

802.11r controls faster key exchanges in 802.11 roaming scenarios – what’s the relation to satellite ISPs?

It seems like you have some axe to grind with Starlink and are collecting evidence through that lens.


Can't reply to you anymore, I guess we are nested too deep now?

I think we are simply talking about two different things here.

I mentioned 802.11r not for the key-exchange implementation details, but for the general point: seamless handover requires shared state between cells.

This is not about static vs. dynamic routing; you are thinking on the wrong layer here. We are in L1+L2 land.

On Starlink, the last time I tested it in 2025, a handover between two sats still involved a downtime of at least 5 seconds, with both L2 info and NAT state being lost.
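
(For reference, the kind of minimal gap measurement I mean - host and threshold are arbitrary, and the ping flags are Linux-style:)

    import subprocess, time

    # Minimal outage probe: one ping per second, log any gap between successful
    # replies that exceeds a threshold. Stop with Ctrl-C.
    HOST, GAP_THRESHOLD_S = "8.8.8.8", 2.0

    last_ok = None
    while True:
        ok = subprocess.run(
            ["ping", "-c", "1", "-W", "1", HOST],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        ).returncode == 0
        now = time.time()
        if ok:
            if last_ok is not None and now - last_ok > GAP_THRESHOLD_S:
                print(f"outage of {now - last_ok:.1f}s ended at {time.ctime(now)}")
            last_ok = now
        time.sleep(1)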

Regarding axes to grind: I am not much into emotions. Of course the data says that Elon Musk is the cancer cell that will play a huge part in destroying Western civilization. But as I do not like Western civilization, or humans in general, much, this does not trigger many emotions.

And even if I hated Elon Musk: We are talking about technology, R&D and implementation details here (which I enjoy!). I do not have emotions about IP protocols and such :)

No, in reality it's really very simple: My data says that Starlink just is not worth it. It is not commercially feasible. It pollutes space with tons of trash that will harm productive future space missions and projects. It's highly overrated and overhyped. It's very hard to find positive reviews that haven't been paid for.

Or, executive summary: Starlink is a dead end, and without the Elon cult nobody after looking at a hypothetical business plan would invest.

And finally: Anecdotal evidence collected from my own tests and those of friends all says: It's just shitty. However: That of course depends on your use case. For some, an 8-second drop-out might mean "patient dead". For others it might be "I will retry loading this after grabbing a cup of coffee". My peer group might have higher standards than others.

Of course Sat internet has its place as a niche business. But as you surely are aware, in the US there were and are attempts to steal tax money meant for building fiber by claiming Starlink would be equivalent. And you might also remember that if someone had not pulled the emergency brake, you would now have air traffic controllers seeing planes with 100 ms+ of latency AND every now and then losing contact with all airplanes for 8 seconds.

And all of this has been tried before. Over in Europe, 10 years ago we had those fights where Viasat & co. claimed to be an alternative when we got the "basic human right to broadband".


What's different this time is the cost and time to orbit. SpaceX has been able to launch every three days in 2025. Viasat, Iridium, and everyone else who came before didn't have that. I don't have your spreadsheet to plug that bit of data into, and it's balanced by the number of satellites Starlink needs, but governments tend to have a lot of money to keep the things they rely on running.

>steal tax money

I just realized you're trolling.

Have a great day.

>It's very hard to find positive reviews that haven't been paid for.

You could try contacting people in places where it's pretty much the main/a major provider.

Kiribati, Galapagos, Iqaluit, Ukraine, Pikangikum, Vanuatu, Falklands


I really can't tell if you're deep into motivated reasoning here to protect your foregone conclusions or are just trolling.

please talk to "panuvic".

pan &a@t& uvic.ca

I believe he's a university professor in Canada, working on this data w.r.t. OneWeb and Starlink.


United, Zipair, Hawaiian, Qatar.

The lasers work and I really don't know where you got the idea they don't.

They've worked since at least late 2022.

We're in 2025


Analysts that I've seen estimate that Starlink is already profitable and will remain so. Unless you can explain the differences between your math and their math, this is yet another Elon-hating conspiracy theory.

Gimme your source URL, please.

As others have pointed out already in this thread: No serious analyst, and not even Starlink themselves, has claimed that they are profitable. They have claimed to be operationally profitable. This means that the cost of operating the sats is lower than the revenue they make. It leaves out all other costs. Yes, if they could build and launch the sats for free instead of ~$2 million per piece, that could be a profitable business.
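
(Back-of-envelope for the part that gets left out - the fleet size, lifetime and the ~$2M unit cost are rough ballpark assumptions, not audited numbers:)

    # Rough annual fleet-replacement cost (all inputs are assumptions)
    satellites_in_orbit = 7000        # ballpark constellation size
    cost_per_sat_usd    = 2_000_000   # build + launch share, per the figure above
    lifetime_years      = 5           # commonly quoted design life for LEO sats

    annual_replacement_capex = satellites_in_orbit * cost_per_sat_usd / lifetime_years
    print(f"~${annual_replacement_capex / 1e9:.1f}B per year")   # ~$2.8B/yr with these inputs

That recurring replacement capex is exactly the kind of cost that "operationally profitable" leaves out.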

Also, have you actually used Starlink? It's crap. Yes, in 2023 when they did not have customers you got decent speeds. Now it's completely overbooked. Yes, you can make a year of profits milking existing customers.

Google "Starlink benchmark" or "Starlink feedback" etc and you will see things like these:

https://www.trustpilot.com/review/starlink.com

At this point Starlink's active customer base is rating their service to be worse than... cancer, I guess?


>> Also, have you actually used Starlink?

Yes, for example, via a battery-operated "Mini" terminal a month or so ago in extreme rural Finland, ~1km from the Russian border, while photographing wolves & bears.

It worked great.


Mad respect!


> this is yet another Elon-hating conspiracy theory

Nothing in their analysis is conspiratorial. It's flawed. But not alleging conspiracy.


That's also my opinion - it will probably never be profitable - it's a great product, but the economics are not right - and that's why no other provider did this (even though they have the tech).

Let's see what happens once the bubble pops.


> once the bubble pops

What's the bubble? It's cash-flow positive. All of SpaceX is cash-flow positive--they've been buying back their own shares.

You can argue it's overrated, i.e. customers will drop it after trying it for a while. (Or when a recession forces their hand.) But bubble requires leverage and losses, neither of which SpaceX (or Starlink) have.


Sorry, I was referring to the general stock market (mostly AI) bubble.

As for SpaceX, it's pretty much impossible to know their finances - they don't publish audited accounts. We can just trust what Elon is willing to share with us.


What does a stock market bubble have to do with the profitability (i.e. not the valuation) of any given company?

Are you arguing that the demand in Internet connectivity in rural/remote areas is somehow caused by an investment bubble as opposed to a long-term stable need?


No, I don't think there is any relationship there.

I'm saying that I highly doubt the real profitability of SpaceX / Starlink, and we will only see if it's really as good as they say once the bubble pops and there is no infinite capital, and maybe some accountability.


They aren't raising money.

What's there to pop?


Sure, just like xAI is never raising any money (according to Elon). All of his companies are very tightly interconnected, so you should see them as a whole.

When money is actually raised, people talk.

Sovereign wealth funds and bankers like to talk about their investments, even in leaks to journalists about things under NDA.


> for SpaceX, it's pretty much impossible to know their finances - they don't publish audited accounts

SpaceX has audited financials. They're not published, but they leak a lot.


Yes, and Elon companies are well known for leaking reliable information.

> Elon companies are well known for leaking reliable information

SpaceX isn't leaking their own financials.


Disclaimer: I am not a doomsday apologist, just a nerd collecting and analyzing data and making projections from it.

There is not only an AI bubble, but a US dollar bubble, too. Most people don't get what "Stock markets, gold, real estate, crypto, EVERYTHING is up" actually means: that the currency you are paying in is losing value at a rapid pace.

I do not yet understand how the market manipulation actually works, but if you exchange money from an Asian currency into US dollars right now, an amount that buys you an excellent dinner for the whole family no longer even buys you a hamburger in the US.

The part I also do not understand is why the only things that are "cheap" right now, seen from the US perspective, are foreign currencies.

I also don't understand why other countries are not liquidating their USD reserves. How can you as a country not see that your reserves would now be much safer in Asian currencies, including even the Chinese renminbi?

Well, maybe the answer to the USD losing value rapidly is that in reality there ARE market actors in the background moving to other currencies, but they are really, really good at hiding it.

But in any case: no doubt, the Western part of the global financial system will be crashing within the next 3 weeks. With the US being on the brink of civil war anyway, you should be able to extrapolate what a financial crisis and bank run in the US, in combination with everyone and their dog owning war-grade weapons, will end in.

Right now you have heaps of people in the US who are completely ignoring any data and invest only based on being members of a fanatical cult - every time you read news that Tesla is no longer able to sell cars, or that their robots do not work, or that their Robotaxi stuff is a scam, the Tesla share value goes further up. The same applies to AI. That AI will never make any profit is no longer niche knowledge, it's headlines at Bloomberg, WSJ, FT & co. You have to actively ignore that information - still, AI stocks go up.

So what is missing for the bubble to finally burst? Anything that makes those cult members start to doubt. Anything that triggers "Maybe putting all my retirement savings into crypto is too risky", or "Maybe Elon actually isn't the messiah", or "I put all my money into a Trump meme-coin. Is this really more important than getting my teeth repaired?". Or anything that forces those sheep who are invested in this to simply need cash quickly - for example a natural disaster, some pandemic, the Epstein stuff finally blowing up on Trump, etc.

The list of things that may now trigger the implosion is gigantic. Because of that, it is impossible to predict where it will start - it might not be the AI bubble. It may very well be something rather classic, like the "Deported migrants no longer pay towards their mortgages / loans" stuff happening right now.

But in summary: Due to the very high number of potential triggers it is stunning it hasn't imploded yet, and if you read/watch interviews from scientists in this area, the common theme these days is: "WTF is going on here?! This should have imploded LONG ago!".


OMG, the bloody marxist Europeans AGAIN limiting the freedom of choice and free speech of American mega-corps!!1ONEohenee

:)


Had a quick look at the patent, had a quick look at the code. To me it appears that 99.999% of all involved research has been taken from prior research by tons of scientists.

You might have good intentions, but in my value system, if you invite others to also enjoy what you have stolen, you are still just a thief.

Polite reminder: Just because you managed to trick the US Patent Office into stamping your patent application does not mean you have invented something. It simply means you have managed to convince a bureaucrat to give you a stamp so you can claim ownership of other researchers' work.

Want to be part of the good guys? Burn the patent, and apologize to the research community you tried to steal from.


> Had a quick look at the patent, had a quick look at the code. To me it appears that 99.999% of all involved research has been taken from prior research by tons of scientists.

How'd you arrive at this conclusion? The stuff in the body of the patent can be expected to be 99.99% of widely known stuff, always.

What counts is that something new is disclosed, and that is what the claims cover.

A description of a patent must be enabling: it must tell someone ordinarily skilled in the art enough to reproduce the claimed invention. Gesticulating at "you could find a bunch of the simpler steps in earlier research papers" is not good enough.

How far attorneys go to make sure a patent description is enabling varies wildly (I remember some of my earlier ones taking some time to describe what a CPU and a program are...), but it's best to err on the side of caution and describe well-known techniques. Otherwise, you might spend time arguing in future litigation about whether the average software engineer in 2015 knew how to do a particular thing.


True. For some of my patents, the write ups make clear that the attorneys didn’t really understand how things worked. At some point, reviewing these things, you have to say, “Yea, okay, close enough,” and sign off on it.


100% truth. People complain about AI slop, but patent legalese slop is 100x worse and has been going on for 100 years. Absolute garbage.


You're being entirely unfair to Supabase here. Research is important, but there is a reason why the USPTO has developed substantial case law around Reduction to Practice: everything is built on prior work, so to say there is nothing novel about actually building a demonstrated, working system from parts is factually inaccurate.

https://en.wikipedia.org/wiki/Reduction_to_practice


To add to your point: we didn't file the patent. We acquired it (at considerable cost) and we are working to make it freely available.

https://news.ycombinator.com/item?id=45196771


Yeah, huge props here. There's a contingent on HN that seems to assume that almost any action by a company is done in bad faith. I dislike all of the shady stuff that happens, but that's why we should celebrate when companies are doing awesome things.

This is all positive. Super appreciate what you folks have done. It's clearly hard, well intentioned, and thoughtfully executed.


Mad respects.

People might say that the company is doing it for goodwill, but that is the point: it is better to earn the goodwill of users by actually helping them than to be like the thousands of other companies that don't even do that. It is a nuanced topic, but I feel we should encourage companies that do good, period (like Silksong / Team Cherry in gaming), etc.

I will look further into this now :p thanks!


> Burn the patent

that's ... that's what they are doing by making it freely available, no?

this helps anyone who is covered by the patent because they are (a bit better) protected from other patent trolls (and from other IP litigation)


A US patent can never be open-sourced. In legal and logical terms, the two are at pretty much opposite ends of the spectrum.

The closest that comes to mind is "Free Nestlé bottled water".


why? the usual OSI licenses all work through copyright, and through conditional granting of rights (hence the name license)

for a patent, granting irrevocable royalty-free usage rights to anyone is equivalent to putting it into the public domain, no?

both copyright and a patent grant exclusive rights to the rightsholder (the right to perform or modify, or in the case of patents to use the invention in the way covered by one of the claims), and the rightsholder is free to include more persons in the group of authorized users (i.e. extend the rights to others, and can do so conditionally - hence the Apache2 patent grant)

can you please explain where my understanding is incorrect or missing something? thanks!


Copyright and patents are very different beasts.

For example, there are a lot of ways to use the so-called priority of the US patent to file other patents, including elsewhere in the world.

US Americans sadly often forget that the majority of the planet is not the USA. Here in Thailand, US and EU pharma companies have managed to get patents on 15-year-old basic stuff like antihistamine meds. "Oh, but the system of limited monopoly in the US worked for us?" Yeah. But in the US, the cost per pill is somewhere around $0.08. In Europe it's about $0.02. In Thailand it's $0.80, because Zyrtec managed to file a patent here re-using the priority of the US patent. And wage-adjusted, in Thailand that is about $15 per pill. You'd better not have an allergy over here. So: patents can have long-term consequences.

Back to IT: Have a look at the whole patent troll industry. The biggest chunk of junk patents that they bought are coming from "we will not do any harm" owners/filers. A lot can happen in 20 years.


Ah, I completely support your righteous anger against the tragicomedy of US "IP colonisation", though note that it has not much to do with my comment about the possibility of an open source patent.

The fact that most of the world was just lying belly-up to accept the negative consequences of the ridiculous cottage industry of churning out patents "for later" is truly enraging if you get to see how little value it produces. (It's very convenient for large conglomerates, VC-fueled startups, and ... that's it.)

When it comes to specifics, i.e. with pharma, it's hard to really disentangle the mess from other ugly facts, like the post-WWII global economy and the very misregulated US healthcare industry (R&D included).

The US is subsidizing R&D for the world, because the other rich countries have sane "collective bargaining", but this leaves the small, not-so-rich markets out in the fucking cold, because they have a weak bargaining position (market access isn't worth that much to pharma companies) and also usually no internal R&D (so no local companies filing these trivial patents). But because of all this, in effect the world is buying R&D at the worst price!

(Though of course the picture is more complicated than this; R&D is definitely waaay too inefficient in the US, and all over the world too.)


Not really? You can build on top of their code, but as far as I can tell, you can’t build your own thing separately.


From the article:

The intention of OrioleDB is not to compete with Postgres, but to make Postgres better. We believe the right long-term home for OrioleDB is inside Postgres itself. Our north star is to upstream what’s necessary so that OrioleDB can eventually be part of the Postgres source tree, developed and maintained in the open alongside the rest of Postgres.


OK, just saved to the file cringespeak.txt:

"Our north star is to..."

:)


This comment makes no sense. They're actively open sourcing the patent and trying to get it upstream into Postgres. They purchased another company to get this patent, and they're spending a lot of money on lawyers to figure out how to release it to the community.

Call out shady shit when companies do shady things, but the sentiment behind this comment seems to be looking for reasons to be outraged instead of at what's actually being done.

If companies get eviscerated every time they try to engage with the community, they'll stop engaging. We should be celebrating when they do something positive, even if there are a few critiques (e.g. the license change call-out is a good one). Instead, half the comments seem like quick reactions meant to stoke outrage.

Please have some perspective - this action is a win for the community.


I stated I see the good intentions.

I am "owner" of a bunch of patents, too, and some have actually been proven their test of time by after years having been re-invented (better: "parallel-invented later in time") elsewhere in the open source world.

But in my value system one does not do press releases saying "HELLO! We have decided not to do something evil!".

They could have done the very same thing quietly to make clear there is no hidden agenda.

"Look, we hold this trivial patent on the open source ecosystem. No no no, all will be fine. No, no, we will not pick up the phone should Broadcom call us one day."

Yay. \o/


- €150 is higher than €0.
- Also, you are fully wrong. The EU has an import duty of 0% on pretty much anything, especially electronics. Here, the de minimis is about the VAT. Which you therefore also are wrong about. No, there is no VAT on items below €150 - because that's what our de minimis is about. Also, import VAT is easy to process, because it's always the same amount for everything - 19% in Germany, for example. For tariffs you need to know what is INSIDE the package.
- China's high-tech economy does not depend on small imports. Also, they do have a de minimis for import VAT, and 0% import duty on lots of stuff (again, including electronics).

The problem for low-value items is not the import duty. It's the delay of processing.

I am running an electronics development company in the EU. To us it's mission critical to be able to get devkits, samples, prototypes, spare parts etc within 5 days from China.

I would not mind having to pay $1 of duty on a $10 part. I would be in huge trouble having to wait for that part 30 days.

Also: As always, Trump gave nobody any time whatsoever to prepare. The US now suddenly will have to hire thousands of customs employees. New machinery to transport all of this. New warehouses for storing stuff that sits for customs processing.

You could not be more off here. This will turn out to be a gigantic disadvantage for a huge chunk of the innovative parts of the US economy. In our industry, nothing matters more than the time it takes to get parts.

As you have noticed: All other countries have LOWERED de minimis, and they did this with 12-18 months of advance notice.

Thailand last year tried to get rid of de minimis. They reverted the decision after ONE WEEK.

You are completely underestimating how much the US is shooting itself in the foot AGAIN here. In these days of global shipping volumes you MUST have a de minimis as a country, or else you will be decoupled from global R&D markets.


I'm in the EU, and in this corner of it it's damn near impossible to import anything, no matter how small.

I would be delighted if de minimis rules applied to imports. But everyone I know who has tried to import trivial items - books, small presents, and such - has been slapped with wildly unpredictable and excessive costs, and long delays while the paperwork clears.

As for the US - yes, clearly the goal is the destruction of government, health, education, research, and the economy in general. Whatever the people nominally in charge think they're doing, the people who are advising them and setting policy are either cranks or traitors.

Given their links to other countries, it's hard not to suspect the latter.


>Given their links to other countries, it's hard not to suspect the latter.

Interesting... Which people are you talking about and which countries?


Most “mission-critical” R&D parts will still clear in < 48 hours. Express carriers already transmit full data to CBP before wheels-up. Type 86 filings have been required “upon or prior to arrival” since Feb 2024, so the paperwork is literally done while the flight is in the air.

This will shift important inventory to local distributors in the US. This makes local supply chains more resilient, not more vulnerable.


Ah, how I love Americans still living in the bubble, thinking they are relevant.

No, you will neither get an Intel N100 devkit in the US, nor any Realtek devkits.

No, your supply chains will not get "more resilient". The industry is simply no longer treating your country as a trustworthy and serious trading partner.

If you want to do R&D in electronics, you need quick imports from China.

Yes, the US is totally prepared now for a future of coal powered steam trains, true. Have fun with that :)


Intel N100 devkits are primarily made in Oregon and Arizona in the US.

Realtek devkits are made in Taiwan.

China is clearly important. Nothing here changes that. Orders will just shift to bulk and get sourced from local distributors.


At least I have to order the devkits from China, can't get them from the US.

As you know, in practical terms China/Taiwan/Hong Kong don't make a difference anymore when it comes to customs and shipping times.

But I understand and value that you understand the business, and we are looking at it from a different angle and potentially from inside/outside a bubble.

I understand your point that markets will adapt looking at it on a large scale. But the question is HOW the market is adapting. In the US far too many people are assuming that the world will bend for Trumpamerica.

In reality the world, at least in my industry, is doing free-trade agreements in record time now. And the people in China I work with are not even thinking about the US market anymore at all. The future markets are elsewhere on this planet. What they care about is that the EU won't cave to Trump and also implement some kind of trade barriers against China just to please him.

And again: You are talking about bulk distribution. I am talking about small businesses, R&D, rapid prototyping, time to market. Days count. Every day I wait for a prototype I have developers sitting here doing nothing. You can not "shift" that problem. There is only one region where you are able to get not just SOME of the electronic parts you need, but ALL of them.

And that very clearly is not the USA.


So, it means that you and the LLM together have managed to write SEVEN lines of trivial code per hour. On a protocol that is perfectly documented, where you can look at about one million other implementations when in doubt.

It is not my intention to hurt your feelings, but it sounds like you and/or the LLM are not really good at their job. Looking at programmer salaries and LLM energy costs, this appears to be a very very VERY expensive OAuth library.

Again: Not my intention to hurt any feelings, but the numbers really are shockingly bad.


I spent about 5 days semi-focused on this codebase (though I always have lots of people interrupting me all the time). It's about 5000 lines (if you count comments, tests, and documentation, which you should). Where do you get 7 lines per hour?


>So, it means that you and the LLM together have managed to write SEVEN lines of trivial code per hour.

Here's their response

>It took me a few days to build the library with AI.

>I estimate it would have taken a few weeks, maybe months to write by hand.

>That said, this is a pretty ideal use case: implementing a well-known standard on a well-known platform with a clear API spec.

https://news.ycombinator.com/item?id=44160208

Lines of code per hour is a terrible metric to use. Additionally, it's far easier to critique code that's already written!


Yes, my brain got confused on who wrote the code and who just reported about it. I am truly sorry. I will go see my LLM doctor to get my brain repaired.


My Tesla still detects about 90% of garbage bins on our street, but only about 60% of the school kids crossing the road (I live in Germany where kids walk to school). The rest it would kill. As I pass by that school daily on my way to work, my Tesla would probably kill about 10-20 kids per week.

Yeah, good idea to hide the crash data.


I also find the opposite hilarious: The amount of things that Teslas detect as trash cans is absurd


My favorite is that it identified my wife's Honda CRV as a trash can.


You base that assumption on the visualizations on the center display, I guess? They are not actually everything the car sees and reacts to. Especially not in our German FSD cars.


I am aware that FSD has a different software stack. But it's the same hardware. So why would they artificially make the detection of kids worse on the standard firmware? As marketing for people who hate school kids?

I find it laughable that there are still Musk fanboys who, after a decade of lies about this, still believe in "Robotaxis". 90% of them have clearly never tried to drive a Tesla in a scenario where the minimum protection for kids using public street space is not "kids should get an SUV to not get killed".

It is also amusing to watch videos of Tesla fanboys on YouTube who proudly show that their Tesla can now use FSD for up to 500 miles without a single crash (or "critical disengagement"). A human driver statistically causes a crash every 500,000 miles.

But yes, we will have flying Robotaxis 2 weeks from now; that will solve this problem. Musk said so.

:)


> I am aware that FSD has a different software stack. But it's the same hardware. So why would they artificially make the detection of kids worse on the standard firmware? As marketing for people who hate school kids?

Not sure what your argument is here. The visualization you get using "Enhanced Autopilot" is completely different from the one you get using "FSD Beta", because the software you are running is completely different as well.


The point is not the visualization shown to the driver. It's that the same data is clearly the basis for the decisions this car makes. If it is not showing the kid crossing the street, you also will not get an emergency braking warning, which I do get in tons of other situations.

What you see is what you get.


I'm sorry, I struggle to understand your point. Which one is it?

- Do you think that a Tesla with an enhanced Autopilot would hit a kid because you don't see it in the visualization?

- Or do you think that a Tesla with FSD Beta would hit a kid because it uses "the same data" as the one without it?


I think the idea is, why would the visualization be so intentionally bad in the Autopilot version as to not detect the kids entirely? What benefit does that confer, or, from another perspective, what software constraint forces this to be the case?


It's not _intentionally bad_. Autopilot and FSD are _different products_ with _different tech_.

It's not like they could simply copy `detect_children()` from FSD to Autopilot and call it a day.


Hmm, that might be possible, but that's essentially not what I assumed. At the very least, they operate on the same hardware, so Autopilot is in some sense "intentionally bad" as a whole.


Exactly that, yes. Thank you.


Not defending Musk, I don't like him, but I am not sure why you would think two separate software stacks should somehow be comparable. Maybe it's my old age, but I get tired of this style of rant where folks are fixated on a single thing.


Other than the first paragraph, this all seems to be replying to something else?


> only about 60% of the school kids [...] The rest it would kill. [...] would probably kill about 10-20 kids per week.

I'm no Tesla fan - but it would be real-world obvious if even 0.1% of Teslas actually were that "eager" to kill children. In most western countries, covering up child-killing accidents scales very poorly.


Well, we don't have FSD in Europe, and in the US, I guess the children don't walk to school.


US kids walk to school far less than in the Good Old Days...but there's still a fair amount of walking. And on low-traffic residential streets, there can be quite a bit of de facto playing in the street. So it's still a "passably" target-rich environment for killer robocars.


In the US, letting your children walk to school is taking a non-negligible risk that you'll be charged with a crime or have your children taken away. Their deaths from a motor vehicle are assumed by all to be a certain eventuality, and parents are more likely to be blamed for it than drivers.


The worrying part is that if/when those percentages get better, you will be more likely to trust it enough to let it run over children.


Soon promised to only have 1/10 of detection failures, better than ever before! Only 1 child per week! Rejoice!

On a more serious note: Where do we as a society put the bar? What are the numbers, at which we accept the risk? Do we put the bar higher than for humans? Or same level? Or does the added convenience for car drivers tempt us to accept a lower bar?


I think it is just not possible to have mixed traffic of devices (humans) with a weight of 70 kg and SUVs of 3 metric tons.

You have to separate those. And the default in car nations like Germany or the US has always been to ban the humans. After having seen how other nations handle it, and what it does for quality of life, whenever I see what German cities look like (and of course most US cities) it feels totally alien to me.

Anyway: no, Robotaxis clearly are not the solution to the problem. In school kid vs. Tesla, the car will always win. And this holds even if you blame the kid for having made a mistake according to road regulations - making mistakes with regard to traffic rules as a young human should not be punished by death.

What I have seen in my German home town is also a downward spiral: hockey mums thinking it is safer for their kids if they come pick them up with their SUVs. But because those are so big that it is impossible to see the other kids, the risk of accidents is actually rising, causing more mums to drive their kids in SUVs, etc.


Setting a bar is the mistake. We need to reframe the entire narrative.

Safety implementation is never objective. You can only implement a system by subjecting it to context. Traffic safety is a world of edge cases, and each driving implementation will engage with those edge cases from a different subjective context.

We are used to framing computation as a system of rules: explicit logic that is predictably followed. Tesla is using the other approach to "AI": statistical models. A statistical model replaces binary logic with a system of bias. A model that is built out of good example data will behave as if it is the thing creating that data. This works well when the context that model is situated in is similar to the example. It works poorly when there is a mismatch of context. The important thing to know here is that in both cases, it "works". A statistical model never fails: that's a feature of binary logic. Instead, it behaves in a way we don't like. The only way to accommodate this is to build a model out of examples that incorporate every edge case. Those examples can't conflict with each other, either. The model must be biased to make the objectively correct decision for every unique context it could possibly encounter in the future; or it will be biased to make the wrong decision.
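
(A contrived sketch of that difference - explicit rules can refuse unknown input, while a statistical model always returns some score and acts on it:)

    # Contrived example: explicit logic vs. a statistical score.
    def rule_based(obj):
        # Binary logic: unknown input is an explicit failure we can catch.
        rules = {"garbage_bin": "ignore", "child": "emergency_brake"}
        if obj not in rules:
            raise ValueError(f"no rule for {obj!r}")
        return rules[obj]

    def statistical(features, weights=(0.9, -0.3, 0.1)):
        # A model never "fails": it always produces a score and acts on it,
        # even when the input context differs from its training examples.
        score = sum(w * x for w, x in zip(weights, features))
        return "emergency_brake" if score > 0.5 else "ignore"

    print(statistical((0.4, 0.9, 0.2)))   # confidently "ignore", right or wrong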

The only real solution to traffic safety is to replace it with a fail-safe system: a system whose participants can't collide with each other or their surrounding environment. Today, the best implementation of this goal is trains.

Humans have the same problems that statistical models have. There are two key differences, though:

1. Humans are reliably capable of logical deduction.

2. Humans can be held directly accountable for their mistakes.

Tesla would very much like us to be ignorant of #1, and to insulate their platform from #2.


"today, the best implementation of this goal is trains."

Could not agree more.


We have 2 very recent Model 3s here (in the US, though I'm not sure which HW generation, 3 or 4, they have, and I don't drive them). I'm told (judging by the center console) they reliably identify anything they need to, but FSD isn't happy in construction zones with orange cones and will go slow.


In Germany (and a lot of the world, really) town centers are very old and streets are narrow and are shared. Over here it is also totally legal to cross the road wherever you like.

Also, due to the narrow roads it's standard practice to be in eye contact with other users of the shared space to make sure who drives/walks next.

Car AIs can not hold eye contact, so this is where the problem starts.

And this one of course is very specific to Germany: on parts of the Autobahn you always have to expect another car approaching in the left lane at 250 km/h / 155 mph, so you really have to use the rear-view mirror very early to get an idea of how fast that car may be moving. The range of the Tesla rear camera is far too short for another driver at that speed to be able to brake in time and not crash into your back.
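
(Quick numbers on why camera range matters here - the 80 m detection range is an assumption, not a published spec:)

    # Closing-speed arithmetic (the rear-camera range is an assumed value)
    v_approaching_kmh = 250      # car coming up in the left lane
    v_own_kmh         = 130      # your speed when pulling out to overtake
    camera_range_m    = 80       # ASSUMPTION: useful rear-camera detection range

    closing_speed_ms = (v_approaching_kmh - v_own_kmh) / 3.6   # ~33 m/s
    time_to_impact_s = camera_range_m / closing_speed_ms
    print(f"{time_to_impact_s:.1f} s to react and brake")      # ~2.4 s with these numbers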

So, when it comes to Germany: even if the system worked better, there simply is no place where you could really make use of it without either killing people or getting killed.

