As someone in tech over 40, and who previously worked in embedded systems, it amazes me how little attention we pay to efficiency today.
I now work with teams who will think nothing of spinning up 1000 containers with 2000 vCPUs to run a microservice.
I've previously been told that developer availability and developer convenience are a higher priority than performance. But frankly, when you're spending millions on compute, perhaps reducing the cost of compute might be more important than finding devs?
A lot of the time the company can't make a role globally remote because of tax and regulatory compliance. I work for a multinational with employees in at least a dozen countries, but we can't employ anyone who isn't in a country where we have a registered legal entity.
Add to that the number of people we've had apply for jobs using AI, who then turn out to have zero knowledge of what we hired them for when they turn up.
What I'm saying is: if you can't hire me, that's fine, but I'd like a little less of my time wasted up front. State clearly where a company can hire instead of just saying "remote".
I never worked for Samsung, but I built TVs for JVC and LG, among many other brands. I don't work in consumer electronics anymore but a decade ago that was my field.
TVs are a wildly unprofitable business. It's astoundingly bad. You get 4-6 months to make any profit on a new model before it gets discounted so heavily by retailers that you're taking a bath on each one sold. So every dollar in the BOM (bill of materials) has to be carefully considered, and until not long ago the CPUs in practically every TV were single or dual core, and still under 1GHz. Bottom-of-the-bin ARM cores you'd think twice about fitting to a cheap tablet.
Those chips sit behind a custom app framework that was written before HTML5 was a standard. Or, hey, want to write in an old version of .NET? Or Adobe Stagecraft, which was another name for Adobe Flash on TVs?
Apps get dropped on TVs because the app developers don't want to support ancient frameworks. It's like asking them to still support IE10. You either hold back the evolution of the app, or you declare some generation of TV now obsolete. Some developers will freeze their app, put it in maintenance-only mode and concentrate on the new one, but even that maintenance requires some effort. And the backend developers want to shut down the API endpoints that are getting 0.1% of the traffic but costing them time and money to keep. Yes, those older TVs are literally 0.1% or less of usage even on a supported app.
After a decade in consumer electronics, working with some of the biggest brands in the world (my work was awarded an Emmy), I can confidently say that I never saw anyone doing what could be described as 'planned obsolescence'. The single biggest driver for a TV or other similar device being shit is cost, because >95% of customers want a cheap deal. Samsung, LG and Sony are competing with cheap white-label brands where the customer doesn't care what they're buying. So the good brands have to keep their prices somewhere close to the cheap products in order to give the customers something to pick from. If a device contains cheap components, it was because someone said "If we shave $1 off here, it'll take $3 off the shelf price." I once encountered a situation where a retailer, who was buying cheap set-top boxes from China to stick a now-defunct brand name on, argued to halve the size of an EEPROM. It saved them less than 5c on each box made.
As for long-term support of the OS and frameworks: aside from the fact that the CPU and RAM are poor, Samsung, LG and Sony don't make much money from the apps. It barely pays to run the app store itself, let alone maintain OS upgrades for an ever-increasing, aging range of products.
And we as consumers have to take responsibility for the fact that we want to buy cheap, disposable electronics. We'll always look for the deal and buy it on sale. Given the choice between high quality and cheap, most people choose cheap. So the manufacturers are hearing the message and delivering exactly that.
Yeah, but is there a way for consumers to compare the compute performance of any given TV?
If OEMs differentiated their TVs based on compute performance, consumers might be able to make an informed choice. (See smartphones: consumers expect a Galaxy Sxx to have faster compute than a Galaxy Axx.)
If not, consumers just see TVs with similar specs at different prices, so of course they’re going to pick the cheaper one.
It's really hard to get these things across to consumers.
This is why we ended up with phrases like "Full HD".
The average consumer doesn't know what these numbers mean; people who read Hacker News aren't the 99%. Phones have helped a little with spreading the idea that newer = better, but ask the average person how many cores their phone has or how much RAM it has? They don't know.
Also, it's hard to benchmark TV performance as a selling point. Perhaps sites like rtings need to have UX benchmarks as well? They could measure channel change times, app load times, etc. That might create some pressure to compete.
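For example, on an Android-TV-based set with USB debugging enabled, something as crude as the sketch below already gives a repeatable app-load number (the package/activity name is made up; "am start -W" just reports the launch time the system itself measured). Channel-change times are harder, since they really need a camera pointed at the screen, which is exactly the kind of rig review sites already have.

    import re
    import statistics
    import subprocess

    # Hypothetical component name of the app under test.
    APP = "com.example.streamingapp/.MainActivity"

    def launch_time_ms() -> int:
        """Force a cold start, then read the TotalTime reported by `am start -W`."""
        subprocess.run(["adb", "shell", "am", "force-stop", APP.split("/")[0]], check=True)
        out = subprocess.run(["adb", "shell", "am", "start", "-W", "-n", APP],
                             check=True, capture_output=True, text=True).stdout
        return int(re.search(r"TotalTime:\s*(\d+)", out).group(1))

    runs = [launch_time_ms() for _ in range(5)]
    print(f"median app load time: {statistics.median(runs)} ms over {len(runs)} runs")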
>I can confidently say that I never saw anyone doing what could be described as 'planned obsolescence'. The single biggest driver for a TV or other similar device being shit is cost, because >95% of customers want a cheap deal.
You are literally the first person I have ever seen say this online, besides myself. I have worked in hardware for years and can vouch that there is no such thing as planned obsolescence, but obsession over cost is paramount. People think LED bulbs fail because they are engineered that way, but really it's because they just buy whatever is cheapest. You cannot even really support a decent mid-grade market because it just gets eviscerated by low cost competitors.
Thanks for sharing. Without any insight beyond being a consumer, I do think there's room for disruption (ideally from within the industry itself) compared to 10 years ago.
Comparing models from 2005/2015/2025, for example: most people literally can't tell 4K from 1080p, and anything new in the HD race mostly feels like a scam. The software capabilities are all there. I think that, to differentiate from the no-name stuff, longevity is going to become a more significant differentiator.
We tried to disrupt the market, back about 10 years ago.
One of the significant problems is that 80% of TV SoCs are made by one company, MStar (or its subsidiary). And there's only a handful of companies who make the motherboards with those chipsets. Anyone entering the market either buys those or isn't competitive, and it's hard to compete because everything is so concentrated and consolidated. Since STMicroelectronics and Broadcom left the TV chip business, it has become a much less diverse market.
We were an established company that made software for STBs; we had done a ground-up build of what was probably the most capable and powerful framework for TVs/DVRs. The new design was commissioned from us by a well-known open source Linux distro, which then decided it didn't want to continue with the project after realising that getting into TV OSes was hard. We then took on ownership of the project, but getting investment or even commitments from buyers was impossible.
The retailers and TV brands wanted to rehash the same thing over and over because that was tried and tested. It didn't matter that we made something that was provably better and used modern approaches; it wasn't worth the effort for them. If you can't order about 500,000 TVs, you're not going to get anyone to make anything custom for you these days, and you won't make a profit.
--
It was a DVR/TV framework designed from a clean slate by people who had worked for big names in the TV business. It would handle up to 16 different broadcast networks (e.g. satellite, terrestrial, cable) and up to 255 tuners, even hot-pluggable. Fast EPG processing and smart recording to either internal or USB storage. It was user friendly and allowed for HTML5 apps. We pushed it as much as we could, but eventually, on the brink of financial ruin, the company was sold to someone who had no interest in what had been built. I will always feel that something great was lost.
But then they're running on the same common platform as the models at half the price. More than 95% of the cost of a TV is in the panel itself; a fancy model is usually just a bigger model, maybe with a different, higher-end panel. The CPU inside is nothing special, because that's how they keep costs down to compete with the cheap 60in TV you saw while shopping for groceries.
> TVs are a wildly unprofitable business... not far back the CPUs in practically every TV was single core or dual core
Explain to me then how an Apple TV device for $125 (Retail! not BOM!) can be staggeringly faster and generally better than any TV controller board I've seen?
I really want to highlight how ludicrous the difference is: My $4,000 "flagship" OLED TV has a 1080p SDR GUI that has multi-second pauses and stutters at all times but "somehow" Apple can show me a silky smooth 4K GUI in 10 bit HDR.
This is dumbass hardware-manufacturer thinking of "We saved 5c! Yay!" Of course, now every customer paying thousands is pissed and doesn't trust the vendor.
This is also why the TVs go obsolete in a matter of months, because the manufacturers are putting out a firehose of crap that rots on the shelves in months.
Apple TV hasn't had a refresh in years and people are still buying it at full retail price.
I do. Not. Trust. TV vendors. None of them. I trust Apple. I will spend thousands more with Apple on phones, laptops, speakers, or whatever they will make, precisely because of these self-defeating decisions from traditional hardware vendors.
I really want to grab one of these CEOs by the lapels and scream in their face for a little while: "JUST COPY APPLE!"
> Explain to me then how an Apple TV device for $125 (Retail! not BOM!) can be staggeringly faster and generally better than any TV controller board I've seen?
This is the result of Apple being vertically integrated and reusing components from other product lines in products like the Apple TV. The SoCs used in the Apple TV come from lower-tier bins of chips produced for mobile applications.
With the Apple TV, you are getting a SoC that is effectively the same as a recent-year iPhone's. With most other smart TV devices you are getting a low-computational-power SoC, Raspberry Pi tier, with processing blocks optimized for the video playback and visual processing use cases.
Apple also does this with the iPhone where the non-flagship variants will reuse components or designs from prior years.
Television/Smart TV manufacturer margins are in the single-digit percentages, and the Samsung and LG TV businesses are significantly threatened because their high-volume products have been commoditized by competition from Chinese producers. Most potential customers are shopping based on screen size per dollar, rather than specs like peak luminance and contrast ratios. Flagship TV products like "The Wall" are low-volume halo products. Lifestyle products like "The Frame" exist because they are able to differentiate to certain segments of customers who place enough value on the packaging aesthetics to buy a higher-priced product with better margins for the manufacturer.
Most other hardware device manufacturers are jealous of Apple's margins. Nvidia would probably be one of the few exceptions.
Thin margins on commodity tier products drive these manufacturers to cut their BOM costs as much as possible, even if it makes the product worse in other ways. This is also the big driver for why ads are appearing as part of the Smart TV experience at the device/screen level. Vizio for example shared that they made more money from their ACR business than they did from the device sales themselves. There are companies with business models based around giving you the screen for "free" in exchange for permanent ad-space. Even adjacent products and companies like Roku have business models where they are selling their hardware at near break-even cost points because their business model is built around 'services' from having a large user audience.
Greater than 95% of the cost of a TV is in the panel.
TV panels must have a near-0% defect rate, and a single piece of dust during manufacture will render the finished panel e-waste. The bigger the panel, the higher the risk of a defect, and it goes up roughly exponentially because the surface area available for a defect gets bigger. It's the same issue that led chip companies to introduce chiplets: smaller die sizes improve yield, so they throw away less silicon.
A TV panel is basically a 50in chip, and a mobile phone display is a 6in chip.
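To put rough numbers on that: under a simple Poisson defect model, the fraction of defect-free panels is exp(-defect density x area), i.e. literally exponential in the area. The defect density below is a made-up illustrative figure; the scaling is the point.

    import math

    def poisson_yield(area_cm2: float, defects_per_cm2: float) -> float:
        """Fraction of panels with zero fatal defects under a Poisson defect model."""
        return math.exp(-defects_per_cm2 * area_cm2)

    DEFECT_DENSITY = 0.0005  # fatal defects per cm^2 -- made-up illustrative number

    # A 6in 16:9 phone panel is roughly 100 cm^2; a 50in TV panel is roughly 6900 cm^2.
    for name, area_cm2 in [("6in phone panel", 100.0), ("50in TV panel", 6900.0)]:
        print(f"{name}: {poisson_yield(area_cm2, DEFECT_DENSITY):.1%} defect-free")

With the same process quality in this toy example, the phone panel comes out around 95% defect-free while the TV panel is in the low single digits, which is why the panel dominates the BOM.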
In theory they do have access and should, but in practice they don't.
Samsung's flagship mobile phone products tend to ship with Qualcomm Snapdragon SoCs in competitive markets, such as the USA/North America, versus their "in-house" Exynos SoCs used in markets where consumers tend to have less choice (e.g. Samsung S-series phones with Snapdragon for the USA, Exynos for the EU and KDM markets).
Microsoft removed support for Skype on TV, not Samsung.
Most apps get removed because the people writing them don't want to support them anymore. The Samsung framework from 2013 was always trouble and it doesn't support many current W3C features that you'd want as a developer. Most people I know are drawing the line at supporting 2014 or 2016 Samsung devices.
Could Samsung update their devices to ensure they still supported modern frameworks? Possibly, but they don't really get any revenue from providing OS upgrades and those devices suck in terms of RAM and CPU.
I hate this idea that software "rots" all by itself when it's just left on a device and is impossible to keep working. I would at the very, very least expect my device to work exactly as it did on day one, for the next 50 years, assuming I don't change the software. It's bits on a flash drive! It doesn't rot, outside some freak cosmic ray from space flipping a bit.
If you're saying the software stops working because the backend it talks to goes away, well, that's a deliberate choice the company is making. All they have to do is have a proper versioning system and not touch the backend service, and it should also work forever.
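To be concrete about what I mean by a proper versioning system, here's a minimal sketch (assuming something like FastAPI; the route names are made up): the v1 routes the shipped devices call live in their own handlers and are never edited again, and all new work lands in v2.

    from fastapi import FastAPI

    app = FastAPI()

    # v1: the contract the devices in the field were shipped against.
    # Frozen on launch day -- nobody edits this handler afterwards.
    @app.get("/v1/guide")
    def guide_v1() -> dict:
        return {"schema": 1, "channels": []}

    # v2: where all new development happens; the old devices never call it.
    @app.get("/v2/guide")
    def guide_v2() -> dict:
        return {"schema": 2, "channels": [], "recommendations": []}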
I certainly hate that idea as well, but I also accept a pretty decent amount of that because of interactions with the greater world outside of one company’s direct control.
For instance, suppose a streaming service starts requiring a new login method. They have to update their apps to use this new API. If there are and have been over a dozen distinct smart television operating systems in the past 15 years, and there will be a dozen more in the next 15, it’s unreasonable to expect that even companies the size of, say, Netflix are going to reach far enough back in their history to update all those apps. They probably don’t have developers who understand those systems anymore.
And also, the software distribution mechanisms for each of those platforms are probably no longer intact either in order to receive an update. While it’s true that my Panasonic Blu-ray player that I bought in 2009 is still perfectly functional, and has a Netflix app, I assume it doesn’t work and that Panasonic would be hard pressed to distribute me a working updated app.
The only way things would be much different would be if technology progressed at a far slower pace, so there had been no need to adopt any breaking changes to how the app is built, how the apps and firmware was distributed, etc.
I've seen several examples of firmware on devices failing because of bit rot, so that's not true. We used to design devices so that the bootloader was pulled from NOR instead of NAND because of this; then the device could be recovered using a USB stick.
Most people don't encounter it because their device was updated at least once. People should be less trusting of flash storage than they are; I recently pulled three USB flash sticks out of storage and two of the three are now unhappy.
There's a strong argument that consumer electronics should be more incrementally upgradeable, including things like baseline updates for certificates. One thing about TVs and these systems is that they are usually running on something like OverlayFS to avoid corruption of the base OS and to enhance security/integrity. They focus on replacing the underlying image, which is often security-signed as well. If you screw something up with a device that's in a customer's home, you're going to spend a lot of money fixing it; the manufacturers all have their war stories in this regard, so they're very risk averse.
As for freezing the backend: you can't. Your API will evolve, and if, for example, your database changes then your backend services will need to be touched. That database will change; some metadata or label will need to change. Even if you keep the API the same, you'll need to maintain the legacy backend. Then you need that service running, consuming compute, for years, even if there's hardly anyone using it and it's costing money. Then you need security patches for the backend service because the framework needs upgrading or the OS needs upgrading. Eventually the backend framework will be EoL/EoS and so you need to spend to upgrade. It's like saying we'll keep a Java backend running on a public-facing API well beyond its life. log4j, anyone?
I think there is a strong argument for simply not checking certificate expiry dates in embedded hardware.
Just keep using the expired certificate forever.
Sure - that means if someone leaks the private key, everyone worldwide needs to do a firmware update to be secure again.
But that's probably less user harm than everyone worldwide needing to do a firmware update to replace an expired cert, and having a dead device otherwise.
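A rough sketch of what I mean, using pyOpenSSL's verify callback (the hostname is made up): the device still validates the signature chain, and only the "certificate has expired" error is waved through.

    import socket
    from OpenSSL import SSL

    X509_V_ERR_CERT_HAS_EXPIRED = 10  # OpenSSL's verify error code for an expired cert

    def verify_cb(conn, cert, errnum, depth, ok):
        # Accept the chain if the *only* problem is expiry; any other error
        # (unknown CA, bad signature, etc.) still fails the handshake.
        return bool(ok) or errnum == X509_V_ERR_CERT_HAS_EXPIRED

    ctx = SSL.Context(SSL.TLS_CLIENT_METHOD)
    ctx.set_default_verify_paths()
    ctx.set_verify(SSL.VERIFY_PEER, verify_cb)

    sock = socket.create_connection(("updates.example.com", 443))  # hypothetical host
    conn = SSL.Connection(ctx, sock)
    conn.set_tlsext_host_name(b"updates.example.com")
    conn.set_connect_state()
    conn.do_handshake()
    print("handshake completed with expiry ignored")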
1) You can't pass a PCI-DSS audit if you have expired certificates.
2) You can't always tell the CDN providers what to do with certs.
3) We've seen new root certificates introduced that older devices simply don't have, which is why they can't trust things like Let's Encrypt.
At the very least the user should be able to override the failing certificate check. So much "security" cargo culting is intentionally planned failure.
So don't burn CA pubkeys into your binaries without means for user override. If the software can persist a user-specific analytics ID it can support user certs. This is a solved problem.
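e.g. if the device just let the owner drop an extra PEM bundle in a writable location and loaded it on top of the built-in roots, that alone would cover a lot of cases (the path and host below are made up):

    import ssl
    import urllib.request

    USER_CA_BUNDLE = "/data/user-certs/extra-roots.pem"  # hypothetical user-writable path

    ctx = ssl.create_default_context()         # start from the firmware's built-in roots
    ctx.load_verify_locations(USER_CA_BUNDLE)  # then also trust whatever the owner installed

    with urllib.request.urlopen("https://backend.example.com/status", context=ctx) as resp:
        print(resp.status)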
Yeah, but how many people would do that? You, me, and maybe a thousand other people here and similarly minded. That's sadly a fart in the wind for such companies, and not worth creating more friction and risk (i.e. folks hack their under-warranty TVs till they stop working, then come back asking for free replacements and tarnishing the brand).
I wish there were some trivial, real-life-applicable solution to this that big companies would be motivated to follow, but I don't see it. Asking most users to be tinkering techies or outright hackers isn't realistic; many people these days don't even accept basic aspects of reality if it doesn't suit their current comfy view, so don't expect much.
Here in South Korea, everyone who uses online banking has to renew and reissue banking certificates every year. While I'm not convinced the certificate process is 100% safe, using certificates is one good concept in the sh*t show of Korean online "security" malware users are required to install.
You can add as many user-defined, custom trust anchors as you want, they’re not going to make an expired server TLS certificate work.
Don’t get me wrong, allowing users to add their own trust anchors is absolutely a good thing. But it wouldn’t change anything if the vendor did what GP suggested, which is that the vendor "[does] not touch the backend service." Because one day, their TLS certificate would expire, and they would technically no longer be able to deliver security updates even if the user wanted them.
Not my problem as a buyer. Build the infrastructure to make certificates and everything else work for a reasonably long time. Service is part of the contract.
That's the point, there are no substantive contracts between you and the OS. If we want apps to be responsible for root certs, that's interesting, but then the app needs some root of trust with the OS anyway.
Mentioning that certificates expire was directed against GP’s unreasonable demand that the vendor "do not touch the backend service." This doesn’t have to do anything with the buyer.
There's little difference between enabling a single screen shot and allowing screen recording. The rights owners require blocking of screen recording, so all screen capture gets blocked as a result.
If you could opt out of Google ads and just be distributed/indexed by YouTube, then you'd be paying for hosting/delivery/indexing. Given that the economies of scale are spread among many users, the bigger streamers this would benefit would then make the platform worse for everyone else.
YouTube is spreading the burden of carrying all that content, from utter crap that no one watches, through the deep archive, all the way up to Mr Beast, etc. There's a huge volume of content that Google hosts that costs more than it earns them.
Your comment, I think, drives my point even further. No content creator is paying for YT, and neither is any end user[0]. So in this theoretical world where YT is separate from Google, YT now has to pay Google for storage, and one way to subsidize that would be to deduct it from creators' AdSense revenue and/or limit the free content one can post in some way, like plans with different tiers of GB.
In some ways, in this theoretical world, small/new upcoming creators would have a larger chasm to cross if they had to move from a free plan onto a paid hosting/delivery plan before becoming profitable. Unless a small/new creator gets massive quickly or goes viral, they will have a much longer wait before AdSense can fund the storage. This might mean the nonsense content goes away, because churning out volumes of content to have more "surface area" for people to discover a channel can't be profitable.
[0] There may be premium content subscription options where some users pay a creator but I would imagine that is a minority of creators.
It's also important to recognise that content providers and CDNs adding private peering or hosting within ISPs doesn't diminish public peering. It actually frees up capacity on other transit routes.
ISPs don't QoS some companies to give them better service; the only difference is that in-demand companies tend to invest in capacity in partnership with ISPs. But ultimately, in most cases those investing use the same capacity that everyone else can use. The only company that doesn't resell its CDN capacity is Netflix. The others, Google and Amazon, dogfood their own products. If you want to use the same systems as Prime or Google Video then you absolutely can. Other streaming providers use public CDN capacity just like anyone else.
Does YouTube get a favourable rate for capacity over other users? Yes and no. If Google doesn't charge YouTube then it's losing profit on the compute to sustain YT. But YT still has to make a profit, and YT carries the cost burden of a great deal of legacy crap that a new entrant wouldn't. What Google has built is a miracle of engineering: getting videos from relative nobodies to the other side of the world within minutes, at relatively high quality, while also allowing millions of kids to watch someone play Minecraft.
I respect what they've built. Would it be good to have diversity? In some ways yes, but in other ways choice sucks. Fragmentation of places to view content is something that gets increasingly complained about in the streaming world.
Is YouTube greedy? I don't think so. Building and maintaining what they've built is hard, as everyone else who's tried it knows. Just riding on their coat-tails and leeching off their servers isn't sustainable. Ad blocking and saying Google deserves it isn't sustainable. In the extreme, if we burned down Google and said we wanted that model to end, the world would be a poorer place for it IMHO.
Context: 24 years in media, a decade in streaming for big companies, no affiliation with Google.
Yes, I agree... But it's even more complicated, because peering exchanges are only available in meaningful numbers in Europe and some countries around it. Also paths can be QoSed depending on many parameters, costs and deals.
We built software that optimized routing based on cost, depending on 95th-percentile usage, channel quality, and some other parameters.
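(For anyone unfamiliar with 95th-percentile billing: take the month's 5-minute usage samples, throw away the top 5%, and pay for the highest sample that's left. Roughly, with toy numbers:)

    import math

    def billable_95th(samples_mbps: list[float]) -> float:
        """Burstable billing: drop the top 5% of samples, bill the highest remaining one."""
        ordered = sorted(samples_mbps)
        return ordered[math.ceil(len(ordered) * 0.95) - 1]

    # A real month has ~8640 five-minute samples; this is just a toy list.
    samples = [120.0, 95.5, 300.2, 80.0, 110.3, 990.0, 105.7, 101.1]
    print(f"billable rate: {billable_95th(samples)} Mbps")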
In Asia, South America (a really shitty place for internet infra) and North America, the conditions are different.
But YouTube did indeed build a great CDN, which, coupled with control of the end-user video player, made it the best video-on-demand platform by far.
It's also worth noting that on-net caches (inside ISPs) aren't as common as people think. It's mostly private peering at public data centres. Google doesn't have servers in all ISPs; this was more common in the past than today.
People complain so much and don't appreciate what they have.