If you're US-based, there are tons of data broker sites, and you can glue together the information for free as various brokers leak various bits (e.g. some leak the address, others leak emails, others leak phone numbers). And that's by design, for SEO reasons: they want you to be able to google someone with the information you have, so they can sell you the information you don't have.
Some straight up list it all, and instead of selling people's information to other people, they sell removals to the information's owner. Presumably this is a loophole to whatever legislation made most sites have a "Do Not Sell My Info" opt-out.
What you do is look up a data broker opt out guide, and that gives you a handy list of data brokers to search. E.g.
I'm not sure a gas station analogy really works. I use a gas station out of convenience (i.e. it's on my route) and will only go out of my way for a significant difference in price. This means I go to the same gas stations even when there are others that are “as good as” them nearby, just because it's convenient for me. Similarly, if I already set up an account with Amazon and currently use Amazon, I won't move to an “as good as” competitor, just because it's an inconvenience to set up a new account, add billing info, add my address, etc… for no real improvement.
Alternatively they can work in care facilities (elderly care, psychiatric hospitals, etc…). I don't have the details, but I know a few people who did this.
One thing I like about Emacs is I can be reasonably sure it isn't going anywhere anytime soon. The GNU version was first released almost 40 years ago and it's still actively used and maintained. I like investing my time learning quality tools that stand the test of time.
Asianometry's videos are good precisely because of the detail and background he goes into. If you summarize them you take that away and pretty much just end up with what has already been said here.
I tend to bookmark Asianometry videos to watch later because they seem very informative but I'm rarely in the mood to actually listen to the very dry documentary style. They don't make good background noise for instance - I need to focus to accept the new information. At the same time they don't naturally attract my focus.
I don't understand this argument. What does video length have to do with whether it can be denser? This is like looking at a 1 GB file and saying it could certainly be smaller.
The commenter believes the video should take less time and contain a higher percentage of strictly factual information.
A text analogy might be a recipe written in a simple style (steps, ingredients, etc.) versus one you might find on a food blog, where there is an intro about the author's childhood and how Nana was the best, and somewhere along the way one might learn how to prepare the food.
In this case, the video producer made pretty good choices about info density and content length.
The commenter disagrees and here we are chatting about all that.
You mean 20 minutes short. There's enough in there to blow it up into a 45 minute documentary at least. You already spent more than 20 minutes commenting under this story.
> The information density can almost certainly be denser.
And what would be the point of that? There's a limited amount of information one can retain in a short span of time, and it's not like he repeats himself or has a verbose style.
I already go back and rewatch his videos later, taking new pieces of information from them.
Again, if you want the tldw, it's already in the comments here. If you want the details, go watch the video.
The video is being linked because the video itself is good. Wanting a summary that retains the same qualities is like wanting to have your cake and eat it too.
I generally read faster than some narrator slowly babbling on over a meandering script. If the video is 20 minutes long, I wager I can read an equivalent article in less than 5 minutes and come out enlightened all the same.
Videos are great for getting the eyes of a general audience that doesn't have a preexisting interest in a subject; if you're trying to bait clicks, videos are great for that. For people already interested in the subject, though? Videos are almost always a literal waste of time compared to a well-written article.
And if you wanna say I have a short attention span: sue me. I'm a 35-year-old millennial; we're infamous for having short attention spans.
You my friend may benefit from developing the arts of the 2x speed, the skipping, the scrubbing and the stopping.
Not every video is worth watching to completion (some are; you get a feel for it). There may be background details you want to skip or scrub through, eyeballing the thumbnails, depending on your familiarity with the subject matter, and sometimes everything you want to know is right at the end of the video in a neat little summary. The comments can even give you some insight into where the video is going and whether you want to continue, if you read through some of the top ones during playback.
I’m not much younger than you, but watching and re-pacing YouTube educational/informational videos is a skill that can be refined, and the visual imagery can provide details that, again, depending on what it is, might be missed in a written summary. And hey, if none of this is for you, maybe this comment helps someone else out.
One reasonable compromise would be for video makers to provide a transcript or written article to complement their video. Video is a terrible format, especially when the visuals actually matter and the video isn't just a mechanism to deliver audio. Audio is not a bad medium, because you can do something else while listening to it.
I mean, you could restrict yourself to only a single medium, independent of what the rest of the world is doing; or you can learn to process information efficiently regardless of medium and respect each medium for its own strengths and weaknesses. A good YouTube video produced perfectly needs none of the “hacks” I listed above and will relay far more information on complex subject matter in context than an essay alone will, but people who are more comfortable writing will write, and people who want to make videos will make videos.
There is a slight conflict of interest where more money can be earned by wasting the information recipient’s time via advertising. Text offers less opportunity to do this.
Perhaps some amount of time wastage is necessary to incentivize the information providers to provide the information, but the pendulum can also swing too far.
That’s why I got good at getting through videos quickly and figuring out when or if they’re a waste of time.
There are plenty of “research” videos that just spew crap that can be found on a wiki or a database somewhere else on the web; but see enough of them and you pick up on the pattern, cadence, and quality they’re produced at quickly enough to just move on when you see one.
Same. Reading is always faster than watching video.
However, listening to one can be done while driving, or doing many other tasks.
Expecting producers to cater to the can-read-fast crowd is not realistic. People are just not going to produce for us. And I do not believe they should.
Nope. That producer packs it in solid. Yes, it could be more dense, but at the expense of it being watchable by most people.
This is a case of just because one can does not mean one should.
Having an audience matters. It matters more than optimal info density does. Besides, just watch it at 2x. With this producer doing that is challenging. Pay attention!
I didn't watch the video, but I skim-read the YouTube transcript.
The video doesn't propose any single explanation, just a series of events, all of which arguably set back Japan's indigenous software industry. A few of the incidents it mentions (though my summary below is based more on my own knowledge of the topic than on what the video specifically says):
Fujitsu and Hitachi cloned IBM mainframes. So did lots of other companies. At the time they started doing it, IBM was (intentionally) releasing their software into the public domain. However, in 1969, IBM announced they'd start copyrighting their software. Initially they still released the core OS (primarily MVS) into the public domain, and only copyrighted add-ons. However, as the 1970s progressed, more and more new functionality went into the copyrighted add-ons, while the public domain core received only limited enhancements. Finally, in the early 1980s, they put the whole OS under copyright.

This left Fujitsu and Hitachi in a difficult position. They were used to getting their mainframe OS from IBM for free, and suddenly they couldn't legally do that any more. Legal choices for them would have included: (1) fork IBM's operating system and create new enhancements themselves (either clone IBM's copyrighted enhancements by clean-room engineering, or design their own incompatible enhancements), (2) negotiate with IBM for a license (unclear if IBM would agree, and it may have cost $$$), (3) license an alternative operating system (e.g. UNIX), (4) build their own OS from scratch. But none of those options appealed to them (or maybe they tried some and it wasn't working out), so they decided to go with option (5): illegally copy IBM's copyrighted mainframe operating systems.

They used the fact that IBM still shipped the source code for much of its copyrighted software to customers, and somehow got customers to (illegally) hand that source code over. They made rather trivial changes to the source code to try to hide the copying–for example, Fujitsu renamed a lot of IBM routines whose names started with the letter I to start with the letter J instead. They searched and replaced IBM copyright notices with their own. They even bribed IBM employees to give them IBM confidential material (the IBM employees accepted the bribes as part of an FBI sting operation). IBM found out and sued both Fujitsu and Hitachi, and the settlement of the suit required Fujitsu and Hitachi to pay IBM hundreds of millions of dollars, and also banned them from continuing to sell the software outside Japan (IBM agreed to let them continue selling it in Japan, in exchange for licensing fees).
Other stuff I know about this topic (not in the video): In the 1980s and early 1990s, Fujitsu mainframes were quite popular in Australia, but due to this settlement, by the end of the 1990s basically all of Fujitsu's Australian mainframe customers had migrated either to IBM mainframes or to non-mainframe platforms. There are still Fujitsu and Hitachi mainframes running in Japan today, but they are deeply legacy, basically stuck in the 1990s – they didn't follow IBM's transition to 64-bit in 2001.

Fujitsu and Hitachi weren't the only mainframe vendors faced with this problem, but other vendors sought to solve it within the confines of the law. In the US, Amdahl had the same issue, but they decided to focus on their Unix variant UTS instead of MVS. (Amdahl did have an internal project to build a clone of IBM's MVS, apparently based on legal clean-room reverse engineering, called Aspen, but it got caught in development hell, and Amdahl cancelled it before they ever officially shipped it, although possibly a few customers got beta test versions.) Germany's Nixdorf had a fork of IBM's DOS/VS operating system (for low-end mainframes), which they got by acquiring the American company TCSC; they ported the Unix clone Coherent to run on top of it, before killing it off in the late 1980s when Nixdorf decided to give up on mainframes and focus purely on Unix instead. Other mainframe vendors didn't have this problem because their operating systems were not based on IBM's – for example, the mainframes of the other Japanese mainframe vendor, NEC, run a fork of GE/Honeywell/Bull's GCOS operating system (ACOS), which NEC legally licensed.
Another incident the video discusses is the TRON project, a Japanese indigenous standard for operating system APIs, endorsed by the Japanese government and conceptually similar to POSIX. It included variants aimed at both general-purpose computing (BTRON) and embedded systems (ITRON). However, this frightened the US software industry, which convinced the US government to declare TRON a "trade barrier". And that mostly killed TRON as an operating system. TRON didn't die completely; it still sees some use in embedded systems even today (the video mentions the Nintendo Switch Joy-Con controllers run it), but it never achieved the original vision of becoming Japan's standard operating system. Instead, Microsoft Windows did.
And then there were also macroeconomic issues (Japan's real estate crisis in the 1990s), and cultural issues – the video mentions how the Japanese government encouraged Japanese industry to focus on copying successful Western technologies, even improving them incrementally in the process, as opposed to coming up with fundamentally novel technologies of its own. That approach served Japan very well in industries such as cars, but doesn't work so well for the software industry.
I had tried to figure out some of the details of TRON, but some are difficult to find because the material is in Japanese and/or some files seem to be missing.
(I think ITRON is still in use, but BTRON and CTRON are not as common these days, as far as I know.)
There is also a FOSS implementation of BTRON called B-Free, but it seems to be incomplete and, as far as I can tell, abandoned. (There is also the year 2053 problem, which could be mitigated by using 64-bit timestamps, and some other problems.)
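If I understand it correctly, the 2053 date falls out of simple arithmetic: assuming (this is my understanding, not something from the B-Free documentation) that TRON timestamps count seconds from a 1985-01-01 epoch in a signed 32-bit integer, the counter overflows after 2^31 seconds, which is about 68 years. A quick C sketch of the arithmetic:

    #include <stdio.h>

    int main(void) {
        /* Assumption (mine): TRON timestamps count seconds from
         * 1985-01-01 in a signed 32-bit integer. */
        const double secs_per_year = 365.2425 * 24 * 60 * 60; /* Gregorian average */
        const double max_secs = 2147483647.0;                 /* 2^31 - 1 */
        double years = max_secs / secs_per_year;              /* about 68.05 */
        printf("signed 32-bit overflow: 1985 + %.1f years = year %d\n",
               years, 1985 + (int)years);
        /* A 64-bit counter (int64_t) pushes the overflow out by roughly
         * 292 billion years, which is the mitigation mentioned above. */
        return 0;
    }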
(I have also had an idea for my own operating system design, which also uses the TRON character code, as well as other things. This could also be made into an operating system standard of which multiple implementations could be made, I would hope.)
CTRON appears to have been based on OSI (I see references to FTAM and MOTIS, the X.400 mail transport protocol), and it also advertised support for ISDN as a key feature, which would make it very dated by today’s standards.
I can’t find any references to actual specs for MTRON. I am wondering if it was ever actually specified, or if it was just vapourware.
> (I have also had an idea for my own operating system design, which also uses the TRON character code,
You don’t need a whole operating system for that. It could just be a library which supported converting TRON code to other character sets, displaying text in TRON code, etc.
> The ITRON specs can be downloaded from the TRON website
I had seen that, but many things (including much of the older stuff) seem to be missing.
However, some stuff I found elsewhere (not from the TRON website), and I was able to partially figure it out from the Japanese documentation and write a program that can partially parse the TRON Application Databus format. However, many things I could not figure out very well.
I had also found what seems to be some documentation of the TRON instruction set (I have some interest in the instruction sets of some older computers, not only TRON), but it is in Japanese and it also seems that some files are missing. So I don't know how it works, anyways. (I also found some English documentation, but it does not actually explain much, although there are a few minor explanations.)
> CTRON appears to have been based on OSI (I see references to FTAM and MOTIS, the X.400 mail transport protocol)
However, I think X.400 uses ASN.1, and ASN.1 does not have a TRON string data type. (I had once also wanted to use TRON strings in something else, unrelated to X.400 mail and CTRON, so I used the octet string type instead.)
> I can’t find any references to actual specs for MTRON. I am wondering if it was ever actually specified, or if it was just vapourware.
That was my guess as well, but I don't know either.
> You don’t need a whole operating system for that. It could just be a library which supported converting TRON code to other character sets, displaying text in TRON code, etc.
You are right, I do not need a whole operating system for that (see below). But the operating system design is helpful for many other things. The use of the TRON character code is only one of its features; it also has many other features, many of which are different from POSIX and other systems (although some things are similar to other systems). (I had written elsewhere about my ideas for operating system designs, too.)
I have done some of the other stuff relating to TRON code on Linux too (although it is incomplete). I have written some programs that can display text, made fonts with the TRON character set (although not all planes are implemented), and written some programs that can convert some character codes (including e.g. EUC-JP, EUC-CN, EUC-KR). I have also been able to write partial English documentation from what I could figure out (which I documented on the Just Solve The File Format Problem wiki), although much of it is difficult.
(One of the problems I have is the way the GT fonts are coded: they are several TrueType fonts that use an improper Unicode mapping, which does not seem to have anything to do with the actual Unicode characters those numbers are supposed to correspond to (except font 1, which does correspond correctly to Unicode). The mapping from this improper Unicode into TRON code is given in a large PDF file, and the mapping does not seem to have any sort of reasonable order, so I could not figure out how to handle it automatically. If I could figure out how to handle it properly, then I could implement a file that can use these fonts with TRON code directly; someone who knows Japanese and is able to compare the characters, in order to make up their own bitmap fonts with the GT character set, could do that too, I suppose.)
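(To make the workaround concrete, something like the following is what I have in mind; this is only a minimal C sketch, the names and table layout are all hypothetical, and the actual entries would have to be extracted from the PDF, or by someone comparing glyphs:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical remapping table: TRON code point -> which GT TrueType
     * font to use, and the improper "Unicode" codepoint to request from
     * that font's cmap. */
    struct gt_entry {
        uint32_t tron;  /* TRON code point; keep the table sorted on this */
        uint8_t  font;  /* GT font number (font 1 is the correctly mapped one) */
        uint16_t cmap;  /* codepoint present in that font's cmap */
    };

    static const struct gt_entry gt_table[] = {
        { 0, 0, 0 },  /* placeholder; real entries would come from the PDF */
    };

    /* Binary search; returns NULL if the TRON code point is not in the table. */
    static const struct gt_entry *gt_lookup(uint32_t tron) {
        size_t lo = 0, hi = sizeof gt_table / sizeof gt_table[0];
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (gt_table[mid].tron < tron) lo = mid + 1;
            else if (gt_table[mid].tron > tron) hi = mid;
            else return &gt_table[mid];
        }
        return NULL;
    }

With such a table, a renderer could take TRON text, look up each code point, and ask the right GT font for the right improper codepoint; building the table is the part I could not automate.)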
The main selling point of the TRON character code, from what I understand, is its appeal to CJKV speakers who disagree with Han unification.
But it sounds like you don’t know Japanese. Do you know another CJKV language? If not, what makes the TRON character code attractive to you?
Personally TRON interests me simply because it is an OS API which looks rather different from POSIX, and I’m interested in learning about other ways of doing things - just maybe some of those other ways of doing things contain some good ideas. But the TRON character code doesn’t really interest me, since for a non-CJKV speaker the debate about Han unification has no practical relevance, and rejecting Han unification is the main selling point of the TRON encoding.
Actually, I do know a little bit, but not very well. (I still don't like Han unification though. Also, that is not the only problem with Unicode (and some other character sets) anyways. Furthermore, I also think that one character set will not be suitable for all purposes anyways (and that it is not possible to make it so), so I also have interest to allow additional character sets to be available for use.)
> Personally TRON interests me simply because it is an OS API which looks rather different from POSIX, and I’m interested in learning about other ways of doing things - just maybe some of those other ways of doing things contain some good ideas.
I am interested in it for this reason, too. (I also have an interest in learning about some old Japanese computer systems.) (As you could see, I did mention stuff other than the TRON character code, too.)
> Furthermore, I also think that one character set will not be suitable for all purposes anyways (and that it is not possible to make it so), so I also have interest to allow additional character sets to be available for use.)
There are a number of libraries in existence for converting between character sets. ICU is one of the most famous but it is Unicode-centric (it supports many character sets but wants to use Unicode as a lowest common denominator.) But older libraries such as iconv or recode lack ICU’s Unicode-centricity, so might be more appealing to you. Have you thought about doing something like contributing TRON support to recode?
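For what it’s worth, the call pattern is similar across these libraries; here is a minimal C sketch using the standard iconv interface. Stock iconv knows nothing about TRON code, so EUC-JP to UTF-8 stands in here, just to show the interface a TRON converter would have to plug into:

    #include <stdio.h>
    #include <string.h>
    #include <iconv.h>

    int main(void) {
        /* Open a conversion descriptor: to-code first, from-code second. */
        iconv_t cd = iconv_open("UTF-8", "EUC-JP");
        if (cd == (iconv_t)-1) { perror("iconv_open"); return 1; }

        char in[] = "\xb4\xc1\xbb\xfa";  /* "kanji" (two kanji) in EUC-JP */
        char out[64];
        char *inp = in, *outp = out;
        size_t inleft = strlen(in), outleft = sizeof out;

        /* iconv() advances the pointers and decrements the byte counts. */
        if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1)
            perror("iconv");
        fwrite(out, 1, sizeof out - outleft, stdout);
        putchar('\n');
        iconv_close(cd);
        return 0;
    }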
https://github.com/rrthomas/recode - I always called it “GNU recode”, but https://www.gnu.org/software/recode/ says it isn’t a GNU package, even though for a long time the GNU website hosted it. (I think maybe it is “ex-GNU”: the GNU project maintainer retired, and the new maintainer who took over wasn’t willing to abide by the GNU project’s policies.)
I think GNU iconv also internally uses Unicode, although the interface does not require it (so it would be possible to modify the implementation so that a direct conversion (without going through Unicode, unless you are deliberately converting from or to Unicode) will be used if possible, without changing the interface).
A better way to handle conversion of character encodings is: each character encoding is a specific character set plus encoding and decoding functions, and then there can be conversions between character sets. A direct conversion would normally be better (if one is available), although sometimes an indirect conversion is also possible. (To convert JIS to TRON, an indirect conversion is unlikely to be useful, but a direct conversion is not too difficult (I have implemented it before) and would be much more useful.)
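A minimal C sketch of the shape I have in mind (all the names here are hypothetical, not from any existing library): a table of direct converters consulted first, with an indirect conversion through a pivot only as a fallback:

    #include <stddef.h>
    #include <string.h>

    /* A converter consumes bytes in one encoding and emits bytes in
     * another; returns the number of output bytes, or -1 on error. */
    typedef long (*convert_fn)(const unsigned char *in, size_t inlen,
                               unsigned char *out, size_t outcap);

    /* Stub standing in for a real hand-written direct converter. */
    static long jis_to_tron(const unsigned char *in, size_t inlen,
                            unsigned char *out, size_t outcap) {
        (void)in; (void)inlen; (void)out; (void)outcap;
        return -1; /* real table-driven conversion would go here */
    }

    static const struct { const char *from, *to; convert_fn fn; } direct_table[] = {
        { "JIS", "TRON", jis_to_tron },
        /* one entry per character-set pair worth a direct converter */
    };

    /* Prefer a direct conversion; only fall back to an indirect
     * conversion through a pivot set (e.g. Unicode) when no direct
     * converter exists, accepting that whatever the pivot cannot
     * represent is lost. */
    long convert(const char *from, const char *to,
                 const unsigned char *in, size_t inlen,
                 unsigned char *out, size_t outcap) {
        for (size_t i = 0; i < sizeof direct_table / sizeof direct_table[0]; i++)
            if (strcmp(direct_table[i].from, from) == 0 &&
                strcmp(direct_table[i].to, to) == 0)
                return direct_table[i].fn(in, inlen, out, outcap);
        return -1; /* pivot fallback omitted in this sketch */
    }

The point of the table is that a pair like JIS-to-TRON can get its own hand-written converter, while rare pairs could still fall back to going through a pivot character set.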
Furthermore, there may be more than one way to convert between character sets, depending on the application and on other things, including what character properties are intended, etc. There are also sometimes other options desired, e.g. how to handle conversion of invalid encodings, invalid code points, ambiguous conversions, multiple ways to encode a sequence of characters (although there may be one "canonical" way), etc.
(There is also the question of whether you need to convert the character set at all (in some cases only the encoding needs to be converted); for example, if you have fonts with the proper character set already, then a conversion may be unnecessary. Nevertheless, the ability to convert is useful, so it is helpful to have programs that do so, for the cases where the conversion is helpful.)
I will look at them more later (I have not had time to look at them thoroughly yet, though I have partially done so), to see if I can contribute support for TRON (and possibly other character sets). Depending on how it is implemented, it might be easy or difficult to change it to do such things.
> It makes sense to have one standard across the world. This way good software can come from multiple countries.
TRON was not the only attempt to define a standardised operating system API in the 1980s. As well as TRON and POSIX, another was IEEE Std 855-1990 (Microprocessor Operating System Interface or MOSI for short). But POSIX was the only one which really succeeded.
MOSI is pretty obscure, but my impression of what happened there – in the early 1980s, 8-bit platforms were widely popular, but very incompatible with each other (e.g. software written for Apple II could not run on Commodore 64 even though they both had 6502 CPUs). So the proposal for a common OS API was made, and an IEEE standards committee started standardising it. But by the time the standard was finished, those 8-bit platforms were declining, and IEEE was left with a standard focused on the needs of a declining market, and so very few ever used it. [0] (MOSI itself isn't inherently 8-bit – like POSIX it is a source-level standard rather than a binary-level standard, so could be used on 16-bit or 32-bit systems – but its feature set was a lowest common denominator of what 8-bit systems supported, so not very attractive for machines that have the memory to do much more.)
In 1988, the Japanese education ministry decided to make BTRON the standard operating system for Japanese schools. From what I understand, this move frightened Microsoft (among others), who feared that it would prevent DOS/Windows from being used in Japanese schools, or else force Microsoft to add a BTRON compatibility subsystem to their operating systems. So Microsoft lobbied the US government to pressure the Japanese government, and that pressure resulted in the Japanese education ministry dropping the requirement for BTRON, which in turn largely killed BTRON off. It didn't completely die; a variant of BTRON (Cho-Kanji) continues to be developed into this century, but it is a niche product whose primary value proposition is far more comprehensive support for obscure Kanji characters than mainstream Unicode-based operating systems (maybe useful if you do research into historical Japanese texts). Another factor in killing the Japanese education ministry's requirement for BTRON, was domestic opposition from NEC – at the time, NEC PC-98 machines running DOS were the de facto standard in the Japanese education system, and BTRON threatened NEC's dominance of that market. It could well have been a combination of both external pressure from the US government and internal pressure from NEC that killed it.
Related are the Ada Programming Support Environment (APSE) and the Common APSE Interface Set (CAIS), part of the US DoD project which resulted in Ada, whose requirements demanded not only a standard programming language, but also a standard development environment, with APIs for integrating with compilers, editors, version control, build tools, etc. CAIS is standardised in MIL-STD-1838A. So it is like POSIX/MOSI/BTRON, a cross-operating-system API, albeit one focused on the needs of software development rather than general purpose computing–implementations of CAIS existed for Unix, OpenVMS and MVS, so development tools written against the CAIS API could run on all three operating systems. And the US government poured untold amounts of money into it, but I'm not sure if anyone ever used it. Probably some military projects did.
And APSE/CAIS in turn inspired PCTE (Portable Common Tool Environment), which was basically the EU's answer to APSE/CAIS. And just like APSE/CAIS, it consumed large quantities of EU research funding, before eventually being forgotten without ever seeing much if any real world use. It is standardised as ISO/IEC 13719–which apparently nobody uses, but ISO keeps on renewing because withdrawing a standard consumes bureaucratic resources, and PCTE is so obscure nobody even wants to expend the effort on withdrawing it.
[0] There was an implementation of MOSI for CP/M-80 and Pascal-MT+ – you can find it at https://github.com/skissane/MOSI/ – but I doubt that ever saw much use.
Siemens did real mainframes, and their mainframe OS BS2000 is still around; it's just part of Fujitsu now. Nixdorf appears in that story as well, because that's how the Siemens mainframe division ended up at FSC (Siemens acquired Nixdorf, folded its mainframe division into that, then split off the ATM business and sold the rest to Fujitsu).
Nixdorf shut down their mainframe business in 1989, and sold the remnants to Comparex (which started out as a Siemens-BASF joint venture, but Siemens withdrew around the same time as Comparex acquired Nixdorf's mainframe business). So when Siemens and Nixdorf merged in 1990, Siemens did not acquire Nixdorf's mainframe business, only Nixdorf's other product lines (Unix systems, ATMs, etc). But Siemens still had their own mainframe business. Comparex already sold IBM-compatible mainframes, so they didn't continue Nixdorf's mainframes as an independent hardware line, they were primarily buying the support contracts and the customer base.
Siemens mainframes and Nixdorf mainframes had significant differences:
Siemens BS2000 mainframes were derived from RCA Spectra 70. Their ISA was mostly IBM-compatible in user mode (problem state), but significantly different in kernel mode (supervisor state), and their operating system was completely incompatible–the BS2000 operating system was derived from RCA TSOS. RCA sold their mainframe business to Sperry, who then merged with Burroughs to form Unisys. The RCA Spectra mainframes became Unisys' Series 90 mainframe line, and RCA TSOS was renamed to Unisys VS/9. But by the 1980s or early 1990s, the RCA-derived Unisys mainframe line was dead. Whereas, their Sperry and Burroughs heritage mainframe lines (Unisys OS 2200 and Unisys MCP) survive today, although now they are software emulators running on x86-64 servers instead of physical hardware. RCA Spectra/TSOS only survives today in the BS2000 branch, save that Siemens ended up selling it to Fujitsu.
By contrast, the Nixdorf mainframes were more straight IBM clones, and so aimed for instruction set compatibility at both the user application and operating system level, and could run IBM operating systems. They were mainly used with the low-end IBM DOS/360-derived operating systems rather than the high-end MVS operating system family. Nixdorf faced the same problem that Fujitsu and Hitachi did, of IBM closing their operating systems, but they solved it by buying the American software company TCSC, who maintained their own fork of the IBM mainframe DOS, called Edos, which Nixdorf then renamed NIDOS (Nixdorf DOS). TCSC had started Edos when IBM decided to make new DOS versions available only for S/370, not for older S/360 machines, hence Edos was originally a backport of those newer S/370-only DOS versions to the older S/360 machines. When Nixdorf bought TCSC, they renamed it NCSC. NIDOS ended up offering features that IBM DOS/VSE never had, like a Unix compatibility subsystem (PWS/VSE-AF, derived from Coherent) – much later, MVS (now z/OS) and VM/CMS (now z/VM) ended up getting one, but DOS/VSE (later z/VSE and now VSE^n since IBM offloaded it to 21CSW) never has.
Siemens also once had a lower-end mainframe line, which ran an operating system optimised for smaller machines, BS1000. BS1000 was discontinued long ago, and there is little information about it online. There was a BS1000 compatibility subsystem for BS2000, called SIM-BS1000 [0], but I'd be surprised if anyone is still using it today.
And Siemens also had BS3000 mainframes – like Nixdorf mainframes, these were fully IBM-compatible and designed to be able to run IBM's operating systems – they ran the Siemens BS3000 operating system, which was a rebadging of Fujitsu MSP, Fujitsu's stolen version of IBM MVS. Siemens had to enter into a settlement with IBM as a result, although I'm led to believe the terms were relatively lenient on Siemens, who did their best to portray themselves as innocent victims of Fujitsu's dishonesty. But that was the end of BS3000. I think the remnants of the Siemens BS3000 line ended up with Comparex too. Comparex finally shut down their IBM-compatible mainframe business in 2000; they survived as an IT services business until 2019, when they were acquired by SoftwareOne.
And then in 1999 Siemens transferred their mainframe business to the Fujitsu-Siemens joint venture, and in 2009 Fujitsu bought out Siemens, and hence Fujitsu ended up with Siemens mainframe business.
And so today Fujitsu has three totally incompatible mainframe lines – their own Fujitsu MSP mainframes (previously sold internationally but now only surviving in Japan), the ex-Siemens BS2000 (primarily surviving in Germany, although a little bit in the UK and a few other European countries), and the VME mainframes they got by buying ICL in 2002 (I believe the UK government is the sole remaining user, they really want to migrate off them but it is just too hard.) Both BS2000 and VME now run under x86-64, while I believe the Japanese line still has proprietary physical hardware.
"Japan has a large trade deficit in software, importing far more software and services than it exports. Despite having iconic hardware companies, Japan lacks major software giants like Microsoft or Oracle. This is due to a history of government policies that favored hardware over software development, as well as a shortage of skilled software engineers and a lack of software startups in Japan. While Japan has made efforts to develop domestic software platforms, they have largely failed to gain traction. The video suggests there are no easy solutions to Japan's software industry challenges."
You do you, but I'd chime in on why it's not recommended: any simple answer to that question will just be "there's a long history and international context that led to a complex situation".
That's the perfect TL;DW but I don't think it helps you much.
20 min is short for such a vague question, and you can watch at 2x+ speed if info density is so paramount.
Note that it still glosses over an incredible amount of critical things; it's just not a topic that can be shortened that much for anyone actually caring about understanding it.
To me it comes down to how the creator decided to publish their piece.
If there is no specific accessibility need, getting it in the original format on the chosen platform would be my primary choice. In particular, it's not a time-sensitive subject, and "watch it later" sounds easy enough.
You seem to put reading/writing on a pedestal, but as you point out, we're not in the medieval ages anymore; nobody should feel superior because they read it instead of watching it.