More like it's time for the pendulum to swing back...
We had very decentralized "internet" with BBSes, AOL, Prodigy, etc.
Then we centralized on AOL (ask anyone over 40 if they remember "AOL Keyword: ACME" plastered all over roadside billboards).
Then we revolted and decentralized across MySpace, Digg, Facebook, Reddit, etc.
Then we centralized on Facebook.
We are in the midst of a second decentralization...
...from an information consumer's perspective. From an internet infrastructure perspective, the trend has been consistently toward more decentralization. Initially, even after everyone moved away from AOL as their sole information source online, they were still accessing all the other sites over their AOL dial-up connection. Eventually, competitors arrived and, since AOL no longer had a monopoly on content, they lost their grip on the infrastructure monopoly.
Later, moving up the stack, the re-centralization around Facebook (and Google) allowed those sources to centralize power in identity management. Today, though, people increasingly only authenticate to Facebook or Google in order to authenticate to some 3rd party site. Eventually, competitors for auth will arrive (or already have, ahem, passkeys, cough cough) and, as no one goes to Facebook anymore anyway, they'll lose their grip on identity management.
It's an ebb and flow, but the fundamental capability for decentralization has existed in the technology behind the internet from the beginning. Adoption and acclimatization, however, are much slower processes.
These centralized services do and did solve problems. I'm old enough to remember renting a quarter rack, racking my own server and other infrastructure, and managing all that. That option hasn't gone away, but there are layers of abstraction at work that many people probably haven't and don't want to be exposed to.
Aaand even if we ignore the "benefit" of Cloudflare and AWS outages being blamed on them, rather than you, what does uptime look like for artisanally hosted services on a quarter rack vs your average service on AWS and Cloudflare?
It's a reference to Eric S. Raymond's famous article "The Cathedral and the Bazaar", where he compares the rather top-down, leader-driven culture of Unix development to the free-for-all style of Linux.
Of course, I always like to point out the foolishness of this metaphor: Bazaars in the Near East were usually run in a fairly regimented fashion by merchant guilds and their elected or appointed leaders.
I don’t see how it’s foolish. When you mention a bazaar essentially no one thinks of the closed door meetings of the merchant guilds. Instead, they think of the hustle and bustle of a busy marketplace where all manner of goods, services, and ideas are openly exchanged.
This is in contrast to the somber atmosphere of a cathedral, where people whisper even when there are no services taking place at the time. It’s an image of reverence, humility, and monumental architecture.
Yeah, "foolish" maybe wasn't the right word. All metaphors fall short in some way (hence why they're metaphors). I just, knowing something of the history of that part of the world, like to use the opportunity to share the knowledge that, despite the appearances of a chaotic, random aggregation of humans, Bazaars often had a significant structure under the surface (perhaps another lesson about open source to be had there).
Shoot, you're absolutely right! It's been a long while since I last re-read the article, and I had forgotten how "targeted" (for lack of a better term) it was at certain specific individuals.
I can tell you the same thing I was told when I started my program: no thesis represents more than 1 year's worth of work. The reason it takes most Ph.D.s 5-10 years (8 in my case) to graduate is that you have to fail, and fail, and fail again for 4-9 years before you find your thesis project.
In my case, I started on two exploratory gene knockout "fishing expeditions", neither of which turned up anything interesting after a year. Then I crystallized a protein and submitted it to X-ray diffraction, but the results were not good enough for a "high quality" structure, and besides, the structure we did find was not particularly interesting. Then I switched to working on NMR structures, but ended up switching universities (politics...there's going to be lots of politics) before that went anywhere.
At my new university I switched to structure modeling and worked on a project my advisor suggested for about a year to optimize a modeling routine, but even the optimized version didn't turn up anything interesting. Then I landed on a very intriguing problem that could have had far-reaching implications. I worked hard at it for almost a year, only to realize that even state-of-the-art modeling was at least a decade away from being able to begin to address the problem I needed to solve. Finally, I returned to a question that a professor had asked me in my first year of graduate school, half jokingly, assuming there was no way to answer it. For about a year I worked hard at it, finally arrived at a very interesting answer, and graduated.
> no thesis represents more than 1 year's worth of work. The reason it takes most Ph.D.s 5-10 years (8 in my case) to graduate is that you have to fail, and fail, and fail again for 4-9 years before you find your thesis project.
This exactly describes my own experience - 9 years for me. It was a miserable experience, but trying a lot of things that don't work, and then admitting to yourself that they won't work, is honestly great emotional endurance training to be a scientist.
I'm no expert on Tor, but IIRC the story is precisely that spies operating from hostile territory would have a red target painted on them from using encrypted communications...unless a whole lot of people in that hostile territory were also using encrypted communications. This is why Tor was released open source and wide adoption was encouraged.
It's been known that if you connected to Tor in a hotel located in this US-allied country (there have been briefings published around this, so you can take a guess) you would immediately become visible and targeted for a drive-by.
Tor just isn't as common as you think, nor is it widely adopted, due to its unreliability. And the problem with that cover explanation is that you wouldn't know where Tor is widely used in the first place in order to find "safety in numbers".
The flip-flop operator is very useful for extracting continuous subsets - typically sections of (multi-line) strings where the dev defines the delimiters; think of the `=begin` and `=end` keywords.
I've personally never used it for anything other than strings, but when I do use it, it's very useful.
“The form of the flip-flop is an expression that indicates when the flip-flop turns on, .. (or ...), then an expression that indicates when the flip-flop will turn off. While the flip-flop is on it will continue to evaluate to true, and false when off.”
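To make that concrete, here's a minimal Ruby sketch of the string-extraction use case described a couple of comments up (the `=begin`/`=end`-delimited sample text and the variable names are just made up for illustration):

```ruby
# Extract the block of lines between =begin and =end markers using
# the flip-flop operator.
text = <<~DOC
  prelude
  =begin
  captured line 1
  captured line 2
  =end
  postlude
DOC

captured = text.each_line.select do |line|
  # Ruby only treats `..` as a flip-flop inside a conditional, hence
  # the `true if` idiom. The flip-flop turns on when the left condition
  # matches, keeps returning true for the lines in between, and turns
  # off once the right condition matches.
  true if (line.start_with?("=begin"))..(line.start_with?("=end"))
end

puts captured.join
```

Note that the delimiter lines themselves end up in the result; drop them afterward if you only want the body.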
The flip-flop basically has a hidden boolean variable for state. Btw, while I HAVE used it, I still have no idea what the scope of that state is or when it resets itself.
Without having read either original paper in depth, it seems like the issue here is much simpler than reproduction (though reproduction is the gold standard and is totally under-appreciated these days).
Rather, it seems the authors made a much simpler mistake: hypotheses can only be refuted by evidence, not confirmed. So, in this case, if the hypothesis is "judges act more harshly when hungry", what they should have been doing is looking for evidence disproving that statement. Instead, they seem to have presented a correlation and a suggestion, which is not the same thing as a scientific finding.
Dealerships don't make most of their money (or any money, really) selling cars. They make their money from service, and from selling info to 3rd parties (ever notice that when you buy a new car, you receive mail from SiriusXM for a year afterward?). In order for this to work, their systems are all tightly integrated. When you buy a new car, the dealer will know when you're due for service. When you have your current car serviced more than a certain amount, the dealer will know you're in the market for a new car.
So, it's not possible to swap out only the service management piece of CDK or only the sales piece, or only the CRM piece. They all have to work together. The reason I know is I happened to contract with the IT department of a very large dealer network at the time they were undergoing a migration from Reynolds to CDK. A year into the migration process...they pulled the plug. Moving even from one large incumbent to another was too much work.
Good luck breaking in on that as a brand new baby startup! I think you'd probably have better luck completely disrupting the car sales process from the ground up.
CDK didn't just run the DMS. They operated the physical networks at the dealerships, managed the PCs and the phone system, and even leased the dealers their printers. CDK was DEEPLY integrated into the dealerships.
I agree, and in fact I would consider working with junior developers a highlight of my day because there is an inherent virtue in the "beginner's mind". And you're not wrong that juniors producing things that seem more complicated than they have any right to be is not a new thing. It's more that the patterns have skewed. Over a nearly 20-year career, I've developed pattern matching skills that can catch this kind of code...when produced by humans. It seems that I'll have to adjust my priors now. That's all.
In principle I agree, and it's not like I'm losing entire days to this sort of thing (well, once I did, but only once), but there's a tricky interplay of experience + lingering imposter syndrome. I do a fair bit of jumping around between languages and frameworks, so when I, say, come back to JS after being away for 6-9 months and I see something unfamiliar, there's usually a 60/40 split that it's a bug vs something the JS community has decided is a new "industry best practice". Before immediately going to the author asking for an explanation, I attempt to do my due diligence and be sure that I'm on the 60% side of that split.
It's just that lately the split has been more like 60/30/10 between bug, some new thing, and garbage AI spew.