The linked blog post about the smallest possible valid (X)HTML documents is noteworthy, if only for the fact that a surprising number of people adamantly refuse to believe that they are valid. Even when you think you have gotten through to them with specifications and validators, a lot of people will still think “yeah, but it’s relying on error handling though”. I’m not sure why “HTML explicitly permits this” cannot be tolerated as a thought and somehow transforms into “HTML doesn’t permit this, but browsers are lenient”. It’s a remarkably unshakeable position. And even the people who are eventually convinced that it’s valid still think that it is technically incorrect in some unspecified way.
"if only for the fact that a surprising number of people adamantly refuse to believe that they are valid... And even the people who are eventually convinced that it’s valid still think that it is technically incorrect in some unspecified way."
Speaking from my personal experience: if your idea of "valid HTML" was formed in the late 1990s or early 2000s, it's worth a spin through the current HTML standard. HTML has always been permissive de facto, but de jure it had certain requirements. HTML 5, by contrast, works by reifying a very precisely specified algorithm for handling HTML "loosely" (loose in what it accepts, strict in how it is specified), then refactoring away effectively every requirement it possibly can and deferring to that algorithm instead.
Technically speaking, as long as you put down the correct doctype, you can elide almost anything nowadays and still have a functional, conforming document; for instance, "<!DOCTYPE html><title>Hello</title>" is fully standards-compliant now (push it through [1]). The only thing the validator gives is a warning that you might like to specify a document language. It isn't just that "browsers will pretty much do the 'right thing'" with that, which has been true for a long time... that's actually standards-compliant HTML now.
What a lot of old hands don't appreciate is that HTML 5 was a seismic shift in how HTML is specified. Instead of specifying a rigid language and then pretending the world is complying (and that it's super naughty of them when they don't), it defines a standard for extracting a DOM tree from effectively any soup of characters you can throw at it; conformance requirements are loosened as much as is practical; and even when documents don't conform, there's a specification for exactly how to pick up the pieces. HTML 5 has a completely different philosophy than HTML 4 and before.
(Relatedly, the answer to the frequently-asked question "What is the BeautifulSoup equivalent for $LANGUAGE", at least as far as parsing, is effectively now "Find an HTML 5-compliant parser", which they all have now. Beautiful Soup's parsing philosophy was enshrined into the standard.)
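As a quick sanity check of the point above, here's a minimal sketch using only Python's stdlib html.parser. One big caveat: html.parser is a tokenizer, not a spec-compliant HTML 5 tree builder, so it reports only the tokens literally present in the input; a full HTML 5 parser (such as the third-party html5lib) would additionally supply the implied html, head, and body elements when building the tree.

```python
from html.parser import HTMLParser

# Log every token the stdlib tokenizer sees in the minimal document.
class TokenLogger(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tokens = []

    def handle_decl(self, decl):
        self.tokens.append(("doctype", decl))

    def handle_starttag(self, tag, attrs):
        self.tokens.append(("start", tag))

    def handle_data(self, data):
        self.tokens.append(("data", data))

p = TokenLogger()
p.feed("<!DOCTYPE html><title>Hello</title>")
print(p.tokens)
# [('doctype', 'DOCTYPE html'), ('start', 'title'), ('data', 'Hello')]
```

Nothing here is an error or a recovery path: the doctype, title, and text are all the document literally contains, and the rest of the tree is implied by the standard.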
It’s fair to point out the big difference in parsing philosophy between HTML 2–4 and HTML 5, but what I’m talking about happened before HTML5 as well. Some people can’t handle the fact that HTML intentionally has implied elements.
> "<!DOCTYPE html><title>Hello</title>" is fully standards compliant now
Sure, but switch the doctype and put a <p> on the end, and it’s fully standards compliant HTML 4.01 Strict too. And yet so many people are adamant that it can’t be. That it’s invalid (even though a validator says it’s valid). That it’s relying on error handling (even though the spec. says otherwise). That some browsers parse it wrong (but they can never name one). That the DOM ends up broken (when browser dev tools show a normal DOM). That you need <html> and <body> elements (even though it already has both). That there’s something wrong with it at a technical level (even though they cannot describe what).
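For the curious, that document written out in full (one possible form; the doctype may also carry the system identifier URL) is:

```html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
<title>Hello</title>
<p>
```

The html, head, and body start and end tags are all optional in HTML 4.01, and Strict requires a title and block content in the body, which the empty p satisfies; so this is a complete, valid document with no omitted requirements.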
The concept “This is correct HTML that works everywhere with no error handling” is very difficult for some people to grasp, to a genuinely surprising degree.
This is especially ironic, considering the same people will gladly use XML syntax and serve it as text/html. Historically, this has only worked because no relevant browser has ever implemented SGML (and NET [1], in particular), as required by HTML standards up to version 4 [2].
> Historically, […] no relevant browser has ever implemented SGML […] NET
I can probably confirm the "relevant" part of this claim for the period from the first decade of the 2000s onwards, but I still desperately (in a way) seek information on whether ANY application that consumed "HTML", however niche and obscure, treated NET as specified back then. I am quite certain the W3C Validator did (Mathias' article proves that, after all), and Amaya might have done so too, since it was a reference implementation from the same spec body, IIRC, but I cannot swear to that.
Does anybody here have a clearer recollection of those times, or even some evidence?
I still find it strange that such a feature had such prominent space in the specs back then, but practically nowhere else.
EMACS/W3 originally supported SHORTTAG NET but was “fixed” to remove support. In practical terms, mainstream browsers couldn’t afford to parse SHORTTAG NET properly because it was very common to leave attribute values unquoted. You can leave some values unquoted, but not ones with slashes in them. So the very common error <a href=http://xn--rvg would not get parsed as the author expected if SHORTTAG NET were enabled.
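To illustrate the mis-parse, here is a toy sketch. It is emphatically not a real SGML parser: it handles only the one shape discussed here (a start tag with a single unquoted attribute value containing slashes), and the example URL is made up. Under SHORTTAG NET, the first slash ends the unquoted value and net-enables the start-tag, and the next slash acts as the null end-tag.

```python
# Toy illustration of how a SHORTTAG NET-aware parser splits an unquoted
# URL attribute. Simplified to the point of caricature; real SGML parsing
# involves the DTD and a much larger delimiter set.
def toy_net_parse(s):
    assert s.startswith("<")
    name_end = s.index(" ")
    tag = s[1:name_end]
    attr, _, rest = s[name_end + 1:].partition("=")
    # '/' may not appear in an unquoted value: the first one ends the value
    # and makes this a net-enabling start-tag.
    value, _, rest = rest.partition("/")
    # The next '/' is the null end-tag closing the element; the remainder
    # is plain character data.
    content, _, data = rest.partition("/")
    return {"tag": tag, attr: value, "content": content, "trailing data": data}

print(toy_net_parse("<a href=http://example.com>"))
# {'tag': 'a', 'href': 'http:', 'content': '', 'trailing data': 'example.com>'}
```

So instead of a link to the URL, a NET-honouring parser would see an empty a element with href="http:" followed by the literal text "example.com>", which is exactly why browsers quietly declined to implement the feature.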
This is the earliest reference I could locate easily, from the www-html mailing list:
You’ll be able to find more if you go trawling through USENET archives of places like comp.infosystems.www.authoring.html from 25–30 years ago, but it was a fairly niche subject even back then.
I think there were a couple of other niche tools that supported it, but I don’t remember the details after all this time.
Good idea. I remember doing some research on this in the past, when I tried to trace the historical arguments for the infamous "should there be a space before the slash in void tags for best compatibility" question.
And RFC 2854, which defines the text/html media type, explicitly states this is permissible to label as text/html:
> The text/html media type is now defined by W3C Recommendations; the latest published version is [HTML401]. In addition, [XHTML1] defines a profile of use of XHTML which is compatible with HTML 4.01 and which may also be labeled as text/html.
However, even browsers that support XHTML rendering use their HTML parser for XHTML 1.0 documents served as text/html, even though they should really be parsing them as XHTML 1.0.
But yes, that extra slash means something entirely different to the SGML formulation of HTML (HTML 2.0 to HTML 4.01). HTML5 ditched SGML though, so SHORTTAG NET is no longer a thing.
[XHTML1] defines a profile of use of XHTML which is compatible with HTML 4.01
is technically incorrect. While the XHTML 1 compatibility profile was compatible with HTML 4 as implemented by major browsers, that wasn't actually HTML 4. HTML 4 is based on SGML, while what was implemented was a combination of HTML 4 semantics with the tagsoup parsing rules that browsers organically developed. These rules were only later formalized as part of HTML 5.
The compatibility guidelines do recommend a space between <br and />, but (at least according to https://validator.w3.org/ in HTML 4 mode) this doesn't change anything about <br /> being a NET-enabling start-tag <br /, followed by a greater-than sign.
Enter this:
<h1>Hello<br />world</h1>
and select "Validate HTML fragment", "HTML 4.01", and "Show Outline". This is the result:
[H1] Hello>world
(Obviously nitpicking, but that's my point: the nitpickers can be out-nitpicked.)
Haha yes. Appendix C gave compatibility guidelines, but you are right: following them doesn’t actually result in documents that could be parsed correctly by a parser that implemented SHORTTAG NET.
Elsewhere in the thread, I posted an example of SHORTTAG NET being removed from a browser to enable parsing of XHTML documents:
Nevertheless, the text/html RFC explicitly condones Appendix C, so despite it not being fully reflective of reality, it’s still permissible to use text/html to label XHTML 1.0 documents that follow Appendix C :D
I like to tell people I was a professional ActionScript developer at Microsoft. This was during peak Silverlight days.
I was working on the MyFordTouch system, and the UI was written in Flash by a contractor, then handed off to Microsoft to fix the bugs. It was a nightmare to work with, and Flash in a car worked about as well as you'd expect it to in 2012 (badly).
I worked for a company called Chumby ( https://en.wikipedia.org/wiki/Chumby ) a few years before that, and our device was Flash-focused. It was basically an "Internet Alarm Clock" that could run Flash apps. Having a dedicated "Internet Alarm Clock" made sense back then, though after the iPhone caught on it became a pretty hard sell to have a dedicated device for this purpose.
At the time I found ActionScript 3 to be a very good language that one could structure programs in very nicely, but ActionScript 2 was kind of nightmarish, and any AS2 project that had grown to a certain size was bound to be horrendous unless the person creating it was extremely meticulous about how they structured everything.
Chumby devices were stuck on the AVM1 (ActionScript 1 & 2) virtual machine for most of the company's lifespan though we did support AVM2 (AS3) very late in the game when the larger 8" devices were released.
For most of my time there I worked in the Haxe language, because it gave you an ActionScript 3-like language that could be compiled down to AVM1 bytecode (among other targets), and it had a pretty amazing compiler macro system that was great for all sorts of precompute optimizations, which were very useful given how anemic the hardware we were running on was. It's been years since I've used Haxe, but it appears to still be a thriving, if somewhat niche, language targeting a lot of different platforms.
We also had access to the underlying Flash Player C++ code as part of our deal with Adobe, and I worked in that code directly for a while to extend a custom Chumby Flash player to support braille devices as output, for a joint accessibility demo project we did with NPR. Whatever suspicions people who have worked with the Flash Player as a black box might have about it being a horrible ball of spaghetti inside, I can vaguely confirm.
The device with the card "Captioned Radio" under it in the 2nd photo is an Insignia Infocast 8", which was basically the same thing as the Chumby 8", just a white-label version produced for Best Buy with a cheaper plastic outer casing and a customized dashboard app with their branding all over it.
Best Buy/Insignia was a retail partner of Chumby for a time (they also sold a rebranded version of the Chumby One).
My dad is in his late 70s and has tested at very high PSA levels a few times. So far none of the biopsies have found cancer, but they've caused a lot of stress and discomfort for him.
I don't have a strong opinion about the tests either way, but I wasn't the one getting the biopsies.
That does sound stressful. Sorry he had to go through that.
I have high PSA levels. 17.
Had a biopsy. Turns out I have a really large prostate. My doctor said that some men just naturally have larger prostates, and the larger ones produce more PSA. The PSA density calculation put my levels at normal when taking the size into consideration. The biopsy came back negative.
Right, in that analogy. But in this case there was nothing further you would get, as far as I'm aware; it was a one-and-done transaction. So the analogy doesn't really hold, because unlike with the doctor's practice, without the sale you had no expectation of any further service or benefit from it anyway.
Surely Theranos developed some technology that was useful?
It lasted for 15 years and (as far as I know) employed actual scientists and researchers who were trying to revolutionize blood testing. They must have gotten somewhere, right? Even if it wasn't as far as they wanted/claimed?
I don't know much about what Haemanthus is claiming, but could a business be built using whatever technology Theranos developed? Or were they headed down a dead end street with nothing of use along the way?
Theranos was a bit too early to its idea, and while they were flailing around trying to get customers with the Edison, microfluidic testing became a reality. It was done by the "old" medical device companies that Theranos was supposed to disrupt. It turned out they were the competent ones.
It is entirely possible to spend 15 years doing absolutely nothing of value.
Their claim was that they could run hundreds of tests on a mere drop or two of capillary (fingerstick) blood, including several that are basically physically impossible, because the makeup of blood in capillaries is different from that in veins.
That's still not possible to anything like the original degree of the claim.
Yeah, that's true, and it's an important distinction. Tests today run on a few microliters of venous blood, which is "one drop," but to get the right blood they still need to stick a needle in your arm.
Presumably the claimants in the many lawsuits would now own any IP, or a receiver holds that IP in trust for the claimants so that any revenue derived from the IP goes to the claimants.
The equity holders have surely been totally wiped out and have no further claim.
Working hard doesn't guarantee results. I feel like this is obvious. I would imagine that they never got the thing to work right, or at least not well enough. Even if they did, there's no guarantee it's profitable, much less cost-effective.
Correct. The problem is that arrogance doesn't guarantee results either, but it sure seems to over-promise to the point of fraud more often than not. Perhaps the Ivy business schools should make an effort to mint MBAs who have real-world experience rather than churn out Dunning-Kruger-effect specimens who believe they have a special monopoly on "the answer".
I remember reading that some VC or other came out and said that they picked founders who were overconfident, because it conferred some advantage.
This is the AI result from Google
> Paul Graham and Y Combinator (YC) prioritize determination and ambition over intelligence in selecting founders, often finding success in founders who are overconfident and optimistic. This isn't a mistake; it's a calculated risk based on the belief that persistence and belief in their vision are crucial for overcoming the inevitable challenges of starting a business.
edit: I've just noticed at the bottom of Paul's piece a note about Sam Altman that I think is incredibly accurate: look for hackers (not crackers), people who find ways to profit by looking at the system in a different way (but he emphasises not being evil, just naughty)
I prefer people with a sense of humor and joie de vivre over people who treat others like objects, crush your hand with a handshake as a "joke", or walk out of the room mid-conversation. I think it's a mistake to seek cofounders to work with or founders to invest in only for the utilitarian advantage of a current project because there's nothing much holding them together otherwise, and that leads to venture fragility from the outset.
I think you can finance any founder you like with your theory of an ideal.
There's little to no point arguing on here about someone else's opinion, not least because it's not my opinion, and I haven't offered one on what's best nor on what I think of Paul's.
Well, I wouldn't put it that way as long as we're correcting the record. It was always the boring moving dirt company, never pivoted, but it grew a cancer that became Too Big To Fail because it was more profitable than moving dirt.
- it did boring non-tech big industry stuff
- it was good at this
- it started an in-house hedging department (normal)
- they were good at their jobs and accidentally created a massive speculative trading business that fell apart
> bankrupted utilities by causing energy prices to be 20x normal rates
Allegedly this is "good business", something that companies aspire to do (creating an environment where their competitors fail, and they profit big time)
Well, as long as we're correcting the record, I'd point out yes, you get it: there was the in house hedging department that turned into Wolf of Wall Street traders, and there was the dirt movers this small group metastasized on and destroyed.
I talked to some people in the industry shortly before the scandal broke; there were already rumors, and the Glassdoor reviews had many warnings from clearly disgruntled former employees. Their take was that the people there should have known, so either they lacked competence, they were desperate, or they were in on it.
While there are plenty of people looking for the chance to do something great and could do it if given the right environment, I expect Theranos didn’t foster such an environment.
The entire organization was made up of rich idiots with no domain expertise throwing money at a young lady because she wore a turtleneck and claimed to be the next Steve Jobs. Meanwhile, there were actual scientists involved, including the CSO and a former director of the CDC, and they never blew the whistle, so they either never thought to ask for a demo of the magic science product, or they were in on it.
It was never going to do anything productive. It wasn't even an elaborate con, she just lied to people's faces and they never even considered whether they should verify her claims.
Theranos is a great example of how pathetically incompetent and stupid you can act as a rich person and STILL come out pretty well. Nobody did any due diligence because they almost never do due diligence and it almost never hurts them.
Meanwhile I have to do due diligence on the damn clothing I buy or it probably won't even fit.
From accounts it seems likely that they completely squandered that opportunity covering up for the exaggerations and fraud.
If at the start there had been, at least internally, an honest view of "we have no idea how to do this, existing technology won't do it, we must make a breakthrough", and they had then spent 15 years grinding on that, there might have been a chance.
But even then it would just be a chance. It might well be the case that what they were promising is only possible through molecular nanotechnology or some other kind of breakthrough that was entirely outside the domain of their research and which has still not yet been accomplished.
Even the new company's pitch supports that: they credit AI as an integral part of their supposed solution. Was Theranos spending those 15 years working on anything we'd call AI today? Probably not.
Have you noticed how many startups get butt-loads of VC money based on a blatantly obvious faulty premise and never succeed?
A solid, responsibly managed company has no place in the minds of investors.
To me, the problem is that it is almost more lucrative NOT to succeed, unless one can achieve Nvidia-level success. It is easier to promise the impossible: no profit today, but if we scale the unproven business plan 1000X, the profits will be earth-shattering!
How the hell do stupid upstart app-based shady loan companies have tens of thousands of employees including thousands of engineers?
> The 10-minute transition time will move before the hour instead of after the hour. Previously a one-hour class with an official start time of 9:00 a.m. would begin at 9:10 a.m. Under the new policy, class will begin at the official start time but end at 9:50 a.m.