The article mentions early a "cancer diagnosis" but puts that aside and moves on, when this is pretty much the crux of the issue. Prostate and Breast cancers are a 1 in 8 chance. The risk of no insurance at 25 is very different than 50, and than 75. And everyone at all ages is paying for those expensive treatments.
The system is broken, but going without insurance is basically toying with the odds of life.
If you get a very serious and expensive problem, insurance may not help nearly as much as you'd think. My mother had great insurance, but when she got cancer, the insurance didn't stop her from getting absolutely destroyed by the medical bills (not to mention having to constantly fight with the insurance company while being extremely ill).
It drove her to bankruptcy anyway. In hindsight, she commented that had she known that the insurance wouldn't be all that helpful, she would have just saved up all the money she poured into premiums over the decades.
I feel the constant fighting with insurance isn't spoken about enough. I don't want insurance because I don't want to be both a billing department and a sick person. We went through the same mess when both of my parents were sick. We were already taking in an enormous amount of new information about their illnesses, and then we also had to learn how their insurance worked, what was covered and what wasn't, vet what would happen in every appointment and which doctors would show up (because what if one of the doctors is out-of-network), duke it out with insurance over prior authorizations, tie each bill that came in to something that happened months ago and then vet whether it was correctly billed, correctly covered by insurance, etc., and on and on and on. I'd rather have 0 insurance and just negotiate each bill as it came in with one single entity, the hospital.
> I'd rather have 0 insurance and just negotiate each bill as it came in with one single entity, the hospital.
That's not how it works, insurance or not. You won't get just one bill from a single entity, you'll get many bills from many different entities and will have to negotiate with each separately.
Pre-ACA my mother got cancer in a short window where the University my Dad was president of got wound down for financial reasons.
Destroyed my entire trajectory in life.
The prior system was mega fucked, our current system is still fucked.
If you had a congenital condition prior to the ACA, you were a wage slave once you hit 18: no private insurance, and you couldn't get public coverage. I literally founded a successful startup the minute I got ACA coverage.
Over 40+ years I've seen nearly every profession go through a bubble and lean years: lawyers, mechanics, academics.
But never doctors. In retrospect I should have joined that protectionist racket, but my family couldn't afford to let me at the time.
In a perfect world, a healthcare plan should pay for cancer treatments or crucial medical procedures. In the United States, I'm not sure this is a guarantee[0][1]. Going without healthcare seems to be the riskier gamble, but it's a gamble either way.
Author here. I'm definitely not advocating going without health insurance. Just running simple numbers to get some perspective.
I'd like to see health insurance act like insurance again though. Right now it covers absolutely everything, meaning it's more like pre-payment for routine care + insurance.
Insurance isn't for routine, predictable, or low-cost expenses. But we've mandated that our health insurance cover all of those things.
The comparison to car insurance is overused, but it's a good one. Catastrophic coverage + dedicated savings with lower premiums looks more attractive to a lot more people.
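To make that concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (premiums, deductibles, typical claims) is a made-up placeholder rather than a quote from any real plan, so it only illustrates the shape of the comparison between full coverage and catastrophic coverage plus dedicated savings:

    # Hypothetical comparison of full coverage vs. catastrophic coverage plus
    # a dedicated savings account. All dollar figures are illustrative
    # placeholders, not real premiums or deductibles.

    def annual_cost(premium_monthly, deductible, expected_claims):
        # Expected yearly outlay: premiums plus whatever claims fall under the deductible.
        out_of_pocket = min(expected_claims, deductible)
        return premium_monthly * 12 + out_of_pocket

    full_coverage = annual_cost(premium_monthly=600, deductible=1_500, expected_claims=2_000)
    catastrophic = annual_cost(premium_monthly=250, deductible=9_000, expected_claims=2_000)
    banked = (600 - 250) * 12  # premium difference redirected into savings each year

    print(f"Full coverage, typical year:  ${full_coverage:,.0f}")
    print(f"Catastrophic, typical year:   ${catastrophic:,.0f}")
    print(f"Saved toward future bills:    ${banked:,.0f}/yr")

In a typical low-claims year the catastrophic plan comes out ahead and the premium difference builds a cushion; in a bad year the high deductible eats much of that cushion, which is exactly the trade-off being described.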
If CUDA isn't that strong of a moat/tie-in and Chinese tech companies can seemingly reasonably migrate to these chips, why hasn't AMD been able to compete more aggressively with nVidia on a US/global scale when they had a much longer head start?
1. AMD isn’t different enough. They’d be subject to the same export restrictions and political instability as Nvidia, so why would global companies switch to them?
2. CUDA has been a huge moat, but the incentives are incredibly strong for everybody except Nvidia to change that. The fact that it was an insurmountable moat five years ago in a $5B market does not mean it’s equally powerful in a $300B market.
3. AMD’s culture and core competencies are really not aligned to playing disruptor here. Nvidia is generally more agile and more experimental. It would have taken a serious pivot years ago for AMD to be the right company to compete.
AMD is HIGHLY successful in the GPU compute market. They have the Instinct line which actually outperforms most nVidia chips for less money.
It's the CUDA software ecosystem they have not been able to overcome. AMD has had multiple ecosystem stalls but it does appear that ROCm is finally taking off which is open source and multi-vendor.
AMD is unifying their GPU architectures (like nVidia) for the next gen to be able to subsidize development by gaming, etc., card sales (like nVidia).
Why doesn't AMD just write a CUDA translation layer? Yeah, it's a bit difficult to say "just", but they're a pretty big company. It's not like one guy doing it in a basement.
Does Nvidia have patents on CUDA? They're probably invalid in China which explains why China can do this and AMD can't.
The CUDA moat is extremely exaggerated for deep learning, especially for inference. It’s simply not hard to do matrix multiplication and a few activation functions here and there.
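As a rough illustration of how small that core is, here is a tiny NumPy sketch of the kind of computation a transformer feed-forward block performs during inference. The shapes and random weights are placeholders, not a real model, but the structure (matmul, elementwise activation, matmul) is most of the story:

    import numpy as np

    # Placeholder shapes/weights; the point is only that the hot path is
    # matrix multiplies plus a simple elementwise activation.

    def gelu(x):
        # Common tanh approximation of GELU.
        return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

    def ffn_block(x, w_in, w_out):
        # One transformer feed-forward block: matmul -> activation -> matmul.
        return gelu(x @ w_in) @ w_out

    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 512))        # 4 tokens, hidden size 512
    w_in = rng.standard_normal((512, 2048))
    w_out = rng.standard_normal((2048, 512))

    print(ffn_block(x, w_in, w_out).shape)   # (4, 512)

Getting this to run fast on non-Nvidia hardware is still real engineering work (kernels, memory layout, scheduling), but the API surface a framework actually needs is far narrower than all of CUDA.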
It regularly shocks me that AMD doesn't release their cards with at least enough CUDA reimplementation to run DL models. As you point out, AI applications use a tiny subset of the overall API, the courts have ruled that APIs can't be protected by copyright, and CUDA is NVIDIA's largest advantage. It seems like an easy win, so I assume there's some good reason.
A very cynical take: AMD's and Nvidia's CEOs are cousins, and there's more money to be made with one dominant monopoly than with two competing companies. And this income could be an existential difference-maker for Taiwan.
AMD can't even figure out how to release decent drivers for Linux in a timely fashion. It might not be the largest market, but would have at least given them a competitive advantage in reaching some developers. There is either something very incompetent in their software team, or there are business reasons intentionally restraining them.
From what I've been reading the inference workload tends to ebb and flow throughout the day with much lower loads overnight than at for example 10AM PT/1PM ET. I understand companies fill that gap with training (because an idle GPU costs the most).
So for data centers, training is just as important as inference.
> So for data centers, training is just as important as inference.
Sure, and I’m not saying buying Nvidia is a bad bet. It’s the most flexible and mature hardware out there, and the huge installed base also means you know future innovations will align with this hardware. But it’s not primarily a CUDA thing or even a software thing. The Nvidia moat is much broader than just CUDA.
And it would be a big bet for AMD. They don't create and manufacture chips 'just in time' -- it takes man hours and MONEY to spin up a fab, not to mention marketing dollars.
> If CUDA isn't that strong of a moat/tie-in and Chinese tech companies can seemingly reasonably migrate to these chips, why hasn't AMD been able to compete more aggressively with nVidia on a US/global scale when they had a much longer head start?
It's all about investment. If you are a random company, you don't want to sink millions into figuring out how to use AMD, so you apply the tried and true "no one gets fired for buying Nvidia".
If you are an authoritarian state with some level of control over domestic companies, that calculus does not exist. You can just ban Nvidia chips and force companies to learn how to use the new thing. By using the new thing, an ecosystem gets built around it.
It's the beauty of centralized control in the face of free markets, and I don't doubt that it will pay off for them.
I think they'd be entirely fine just using NVIDIA, and most of the push came from the US itself trying to ban exports (or "exports", as NVIDIA cards are put together in Chinese factories...).
Also, AMD really didn't invest enough in making their software experience as nice as NVIDIA's.
The only way the average person can access a MI300 is through the AMD developer cloud trial which gives you a mere 25 hours to test your software. Meanwhile NVidia hands out entire GPUs for free to research labs.
If AMD really wanted to play in the same league as NVidia, they should have built their own cloud service and offered a full stack experience akin to Google with their TPUs, then they would be justified in ignoring the consumer market, but alas, most people run their software on their local hardware first.
> The only way the average person can access a MI300 is through the AMD developer cloud trial which gives you a mere 25 hours to test your software
HN has a blind spot where AMD's absence in the prosumer/SME space is interpreted as failing horribly. Yet AMD's Instinct cards are selling very well at the top end of the market.
If you were trying to disrupt a dominant player, would you try selling a million gadgets to a million people, or a million gadgets to 3-10 large organizations?
AMD sells 100% of the chips they can produce, and at a premium. It's chicken-and-egg here. They have to compete with nVidia to pre-buy fab capacity at TSMC, and they are getting outbid.
AMD presumably doesn't have Chinese state backing, where profit is less of a concern and you can operate unprofitably for many years (decades even) as long as the end outcome is dominance.
This was in terms of breaking the Nvidia monopoly. Mojo is a variant of Python. Compared with the difficulty of migrating from CUDA, learning Python is a pretty small barrier.
Sure, you can keep buying nvidia, but that wasn't what was discussed.
Yes, I'm oversimplifying the concept. What is wrong with that? If I posted a thesis on compilers, would that really help clarify the subject? Read the link for details. Is Mojo attempting to offer a non-CUDA solution? Yes. Is it using Python as the language? Yes. Are there some complicated details there? Yes. Congratulations.
I think you are missing the nuance between the different aspects of using the Python interpreter, integrating new functions with Python, and compiling to a different target. Would you say IronPython is not Python, and quibble about it? Is there some Python purist movement I'm not aware of? Should every fork of Python be forced to take Python out of its name?
To say Mojo doesn't use Python, when clearly that is a huge aim of the project, makes me think you are splitting hairs somewhere on some specific subject that is not clear by your one liners.
Key aspects of Mojo in relation to Python:
• Pythonic Syntax and Ecosystem Integration:
Mojo adopts Python's syntax, making it familiar to Python developers. It also fully integrates with the existing Python ecosystem, allowing access to popular AI and machine learning libraries.
• Performance Focus:
Unlike interpreted Python, Mojo is a compiled language designed for high-performance execution on various hardware, including CPUs, GPUs, and other AI ASICs. It leverages MLIR (Multi-Level Intermediate Representation) for this purpose.
• Systems Programming Features:
Mojo adds features common in systems languages, such as static typing, advanced memory safety (including a Rust-style ownership model), and the ability to write low-level code for hardware.
• Compatibility and Interoperability:
While Mojo aims for high performance, it maintains compatibility with Python. You can call Python functions from Mojo code, although it requires a specific mechanism (e.g., within try-except blocks) due to differences in compilation and execution.
• Development Status:
Mojo is a relatively new language and is still under active development. While it offers powerful features, it is not yet considered production-ready for all use cases and is continually evolving.
I think then I'd have to go back to your original reply and ask what your point was. What is it you are finding objectionable? These one-liner "doh, you're wrong" replies aren't clarifying anything.
Do you really think Mojo is not based on Python? Or that they are not trying to bypass CUDA? What is the problem?
The rest might be marketing slop. But I'm not catching what your objection is.
Are we talking about the same thing? Mojo, the new language for programming GPUs without CUDA?
The marketing and web site materials clearly show how they are using the Python interpreter and extending Python. They promote the use of Python everywhere. Like it is one of the most hyped points.
I think you are trying to quibble over whether the new functions get compiled differently than the rest of Python, so that technically, when the Mojo functions are in use, that is not Python at that point?
Or maybe you are saying that they have extended Python so much you would like to not call it Python anymore?
Like IronPython, maybe since that gets compiled to .NET, you disagree with it being called Python?
Or maybe to use the IronPython example, if I'm calling a .NET function inside Python, you would like to make the fine distinction that that is NOT Python at that point? It should really be called .NET?
Here is a link to the docs. You worked there, so maybe there is some hair-splitting here that is not clear to me.
> The marketing and web site materials clearly show how they are using the Python interpreter and extending Python.
brother you have literally not a single clue what you're talking about. i invite you to go ask someone that currently works there about whether they're "using the Python interpreter and extending Python".
"This is 100% compatible because we use the CPython runtime without modification for full compatibility with existing Python libraries."
At this point you need to either explain your objection, or just admit you are a troll. You haven't actually at any point in this exchange offered any actual argument beyond 'duh, you're wrong'. I'd be ok if you actually pointed to something like 'well technically, the mojo parts are compiled differently', or something. You say you worked there, but you're not even looking at their website.
You're really splitting some very thin pedantic hairs.
Your problem isn't with me; you are quibbling with their own marketing materials. Go complain to marketing if they are using words that you disagree with. Everything I've posted is directly from Mojo's website.
You: "Well, technically they are embedding the interpreter, so all the surrounding code that looks exactly like python, and we promote as being compatible with python, and promote as extending python. My good sir, it is not really python. That is just a misunderstanding with marketing. Please ignore everything we are clearly making out as an important feature, totally wrong".
They clearly promote that they are extending python. What is your problem with that? How is that wording causing you to seize up?
I'm aware of what is technically happening. Where did I ever say anything that was not directly from them? Do I really need to write a thesis to satisfy every OCD programmer who wants to argue every definition?
Were you let go because of an inability to think flexibly? Maybe too many arguments with co-workers over their word choice? Does your brain tend to get single-tracked on a subject, kind of blank out in a white flash when you disagree with someone?
Actually, I'm kind of convinced you're just arguing to argue. This isn't about anything.
mojo has zero to do with python. zilch, zero, nada.
what they are doing is simply embedding the python interpreter and running existing python code. literally everyone already does that, ie there are a million different projects that do this same thing in order to be able to interoperate with python (did you notice the heading at the top of the page you linked is *Python interoperability* not *Python compatibility*).
> This isn't about anything.
it's about your complete and utter ignorance in the face of a literal first hand account (plus plenty of contrary evidence).
> Were you let go because of an inability to think flexibly?
let go lololol. bro if you only knew what their turnover was like you would give up this silly worship of the company.
To be clear, I'm not a fan boy. I don't really know much about Mojo. I've watched some videos, checked out their website, thought it was interesting idea.
The parent post was about alternatives to CUDA.
I posted a six-word sentence summarizing how Mojo is trying to bypass CUDA using Python, and you flipped out that it isn't Python. Really?
I checked out your link; it sure does look like Python. But that is the point: all of their promotional materials, every Chris Lattner video, all their sales pitches, everywhere.
Everywhere it's Python, Python, Python. Clearly they want everyone to know how closely tied they are to Python. It is a clear goal of theirs.
But. I see now the pedantic hair splitting. Mojo 'Looks Like Python', they use the same syntax. "Mojo aims to be a superset of Python, meaning it largely adopts Python's syntax while introducing new features".
But you say, they aren't modifying or extending CPython so this is all false, it is no longer technically Python at all.
And I guess I'm saying, chill. They clearly are focused on Python all over the place; to say that it isn't Python is really ludicrous. You're down a rabbit hole of debating what is a name, what is a language. When is Python not Python? How different does it have to be to not be?
A new M4 Air is now $799 at Amazon, and a new M1 Air is $599 at Walmart. So it's not like $999 is really the starting price if you spent a minute to search outside of Apple's Online Store.
This rounded corner change feels very off. Since Apple has that same radius across all its products (software and hardware), it could be signaling a broader upcoming shift in their hardware, perhaps driven by industrial design needs for future AR/VR/MR glasses.
Not OpenAI, but Anthropic CPO Mike Krieger said in response to a question of how much of Claude Code is written by Claude Code: "At this point, I would be shocked if it wasn't 95% plus. I'd have to ask Boris and the other tech leads on there."
> During take-home assessments: Complete these without Claude unless we indicate otherwise. We’d like to assess your unique skills and strengths. We'll be clear when AI is allowed (example: "You may use Claude for this coding challenge").
> During live interviews: This is all you–no AI assistance unless we indicate otherwise. We’re curious to see how you think through problems in real time. If you require any accommodations for your interviews, please let your recruiter know early in the process.
He'd have to ask, yet did not ask? The CPO of an AI company?
TFA says "How Anthropic uses AI to write 90-95% of code for some products and the surprising new bottlenecks this creates".
for some products.
If it were 95% of anything useful, Anthropic would not still have >1000 employees, and the rest of the economy would be collapsing, and governments would be taking some kind of action.
> If it were 95% of anything useful, Anthropic would not still have >1000 employees
I think firing people does not come as a logical conclusion of 95% of code being written by Claude Code. There is a big difference between AI autonomously writing code and developers just finding it easier to prompt changes rather than typing them manually.
In case (1), you have an automated software engineer and may be able to reduce your headcount. In case (2), developers may just be slightly more productive, or even just enjoy writing code with AI more, but the coding is still very much driven by the developers themselves. I think right now Claude Code shows signs of (1) for simple cases, but mostly falls into the (2) bucket.
I don't doubt it, especially when you have an organization that is focused on building the most effective tooling possible. I'd imagine that they use AI even when it isn't the most optimal, because they are trying to build experiences that will allow everyone else to do the same.
So let's take it on face value and say 95% is written by AI. When you free one bottleneck you expose the next. You still need developers to review it to make sure it's doing the right thing. You still need developers to be able to translate the business context into instructions that make the right product. You have to engage with the product. You need to architect the system - the context windows mean that the tasks can't just be handed off to AI.
So the role of the programmer changes: you still need technical competence, but it serves the judgment calls of "what is right for the product?" Perhaps there's a world where developers and product management merge, but I think we will still need the people.
Been using Claude Code almost daily for over a month. It is the smartest junior developer I've ever seen; it can spew high-quality advanced code and, with the same confidence, spew utter garbage or over-engineered crap; it can confidently tell you a task is done and passing tests, with glaring bugs in it; it can happily introduce security bugs if it's a shortcut to finish something. And sometimes it will just tell you "not gonna do it, it takes too much time, so here's a todo comment". In short, it requires constant supervision and careful code review - you still need experienced developers for this.
Weasel words. No different than Nadella claiming 50%.
When you drill in you find out the real claims distill into something like "95% of the code, in some of the projects, was written by humans who sometimes use AI in their coding tasks."
If they don't produce data, show the study or other compelling examples, don't believe the claims; it's just marketing and marketing can never be trusted because marketing is inherently manipulative.
It could be true, the primary issue here is that it's the wrong metric. I mean you could write 100% of your code with AI if you were basically telling it exactly what to write...
If we assume it isn't a lie, then given current AI capabilities we should assume that AI isn't being used in a maximally efficient way.
However, developer efficiency isn't the only metric a company like Anthropic would care about, after all they're trying to build the best coding assistant with Claude Code. So for them understanding the failure cases, and the prompting need to recover from those failures is likely more important than just lines of code their developers are producing per hour.
So my guess (assuming the claim is true) is that Anthropic are forcing their employees to use Claude Code to write as much code as possible to collect data on how to improve it.
This is classic marketing speak. Plant the idea of 95+% while in actuality this guy doesn't make any hard claims about the percentage. It can just as well be 0 or 5%.
It’s worth pointing out that the statement is about how much of Claude Code is written with it and not how much of the codebase of the whole company. In the more critical parts of the codebase where bugs can cause bigger problems, I expect a lot less code to be fully AI generated.
Standard CxO mentality. “I think the facts about our product might be true but I won’t say it because the shareholders and SEC will hang me when they find out it’s bullshit.” Then defer to next monkey in circus. By which time the tech press, which seems to have a serious problem with literacy and honesty (gotta get those clicks) extrapolates it for them. Then analysts summarise those things as projections. Urgh.
The other tactic is saying two unrelated things in a sentence and hoping you think it’s causal, not a fuck up and some marketing at the same time.
In the year 2025 the primary job function of all C-level execs is marketing. Which is to say, he probably doesn't know the actual number, doesn't care, and is just saying what he knows the "right" answer should be.
This guy is so full of shit. Anthropic’s leadership are all talk and hype at this point. And they’re not the only ones guilty of this in this hype cycle by far.
I don't really think Meta ever had a vision beyond "Facebook is a social network to connect people". Since then, their strategy has primarily been driven by their fear of being left behind, or of losing the next platform war. Instagram, Whatsapp, Threads, VR, AR, and now AI, they all weren't driven by a vision as much as it was their fear of someone else opening a door to a new market that renders them obsolete. They are good at executing and capturing the first wins, but not at innovating, redefining a market, or pushing the frontier forward; which is why they eventually get stuck, lose direction, and fall behind (Tiktok, Apple Vision Pro, AI).
Yes, but they’ve definitely made a big contribution to AI / LLMs. I just don’t understand how they plan on monetizing it, apart from “better AI integration inside their own products”.
Are they planning to launch a ChatGPT competitor?
It seems like this acquisition is focused on technology, but what’s the product vision?
Who will be responsible for figuring out what AI features to build? I think it is reasonable to look into it, seriously, with the point of view of "can we disrupt ourselves before being disrupted?" This doesn't mean putting a significant engineering team behind it, but it does mean putting significant effort into figuring out what it is you could build, and what the ROI on that would be.
You have two outcomes from this: either you do find a disruptive AI angle and move a sufficiently large part of your team to it, or you don't, but figure out a minimal effort that would satisfy the "investor positioning" angle. The third option, doing nothing or aggressively pushing back against AI and the CEO's desire, would potentially lead to no Series C or a down round, which is something that you, your CEO, and your customers would not like.
In each of the 7 swing states in 2024, the winner was <1% ahead on average, so what good are these polls if the results are going to be within their margin of error?
They need to either find a more accurate way, or... give up!
What they're good for is telling you that things are close. A tied poll or a 50-50 model can tell you that if your beliefs think it's 99% to go one way, you're probably overconfident, and should be more prepared for it to go the other way.
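As a rough illustration of that point, here is a toy Monte Carlo sketch in Python; the lead values and the roughly three-point margin of error (standard deviation of about 1.5 points) are illustrative assumptions, not actual 2024 polling numbers:

    import random

    # Toy model: the true outcome is drawn around the polled lead with noise
    # comparable to the margin of error. With a lead smaller than the error,
    # the "favorite" still loses a substantial fraction of the time.

    def win_probability(lead_pct, error_std=1.5, trials=100_000):
        wins = sum(random.gauss(lead_pct, error_std) > 0 for _ in range(trials))
        return wins / trials

    for lead in (0.5, 1.0, 3.0):
        print(f"lead of {lead:.1f} pts -> ~{win_probability(lead):.0%} chance to win")

Under these assumptions, a lead inside the margin of error translates to roughly a 60-75% win chance rather than a near-certainty, which is exactly the "it's close, don't be overconfident" signal the polls were giving.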
I cared about the result, because it was going to decide whether I settled down in the US or whether I wanted to find a different place to live. And because I paid attention to those polls, I knew that what happened was not particularly unlikely. I prepared early.
A lot of people I know thought it couldn't happen. They ignored the evidence in front of them, because it was distasteful to them (just as it was to me). And they were caught flat-footed in a way that I wasn't.
That's not the benefit of hindsight: I brought receipts. You can see the 5,000 equally-likely outcomes I had at the start of the night (and how they evolved as I added the vote coming in) here: https://docs.google.com/spreadsheets/d/11nn9y9fusd-6LQKCof3_... .
We had a pretty weird year in general. Harris did badly across most safe states but seemed to do much better than her average in swing states (not enough to win them, but much better than she did in non-competitive states).
Many election models rely heavily on historical correlation. States like OH and IN might vote quite differently but their swings tend to be in the same direction.
The weirdness this year (possibly caused by the Harris campaign having a particularly strong ground game in swing states) definitely challenged a lot of baked in assumptions of forecasts.
I see this as a combination of three forces at play: AI, WFH, and skillset--all adding downward pressure on hiring talent in the U.S.:
1) While A.I. may now be adding only 10-20% in productivity gains, the rapid pace of improvement leaves open the possibility that the gains will soon be much more than that. So, instead of scaling your company now, if you can afford to, wait a bit and see where this goes.
2) Even though much of BigTech is clawing back WFH, startups aren't as much. And once you introduce WFH to your culture and processes, it is hard to reason with the idea that you should pay $200K/year for an engineer when it can cost you a fraction (possibly 20-50% of that) to hire them remotely from another country, when also nowadays most of these remote employees are more than willing to work in EST/PST timezones. This used to be the case before COVID, but now many more startups have accepted and adapted to the idea of WFH.
3) While advanced skillsets and deep experience are necessary in many (but not most) startups, and while these skills are more difficult to find in India or Pakistan, the reality is, for many, many tech companies, most of the work doesn't require top-notch skills. You don't need 99th-percentile frontend engineering skills for a 1-year-old "name whatever category" app. And with the recent rise of focus on profitability, frugality, and the difficulty in fundraising, being cognizant of cost per talent is now a thing.
I think Elon and Vivek's comments are more nuanced than they are taken to be. Elon, given he's at the cutting edge of engineering, must be having difficulty hiring 99.9th-percentile talent against BigTech, and wants to open the pool of that type of talent from elsewhere. I don't think he wants H1Bs for React Native engineers. I am interpreting his comments as "I want to suck all the A.I. researchers into America".
H1B has been around for a while now. It takes no more than a moment of original research to realize it's largely used for junior roles, and that a large share goes to consulting/outsourcing houses that charge much, pay little, and deliver nothing.
> I think Elon and Vivek's comments are more nuanced than they are taken to be.
If they are, they have the platform to provide that nuance. Take a look at the public H1B data for Tesla (disclaimer it doesn't tell the full story), it does not seem like they are vying for the top-99.9%.
It seems odd we're giving billionaires the benefit of the doubt.
They are positioning themselves to win, and that's totally fine in the system we're in, but let's not assume they are friends of the working class.
> 3) While advanced skillsets and deep experience are necessary in many (but not most) startups, and while these skills are more difficult to find in India or Pakistan, the reality is, for many, many tech companies, most of the work doesn't require top-notch skills. You don't need 99th-percentile frontend engineering skills for a 1-year-old "name whatever category" app. And with the recent rise of focus on profitability, frugality, and the difficulty in fundraising, being cognizant of cost per talent is now a thing.
a. Note that "outside of the US" covers more than India and Pakistan. Google, Microsoft, Meta, etc. all have sizeable research or R&D centers in France, Germany, Switzerland, Ireland, UK, etc. Most of these countries have engineers of a level comparable (better by some metrics, worse by others) to US engineers.
b. I've known several top-notch programmers from India. One of them is an important contributor to the Linux kernel, another to the core of Firefox. I have no clue how common that is, but be wary of stereotypes.
Tesla wasn't paying as much as the big tech companies, which meant he didn't have access to that top 1%. By opening the door to more H-1B visas, he could ideally flood the market with international candidates and attract higher skills at a lower cost.
While this approach is self-serving, it makes sense. He could acquire that top talent today if he was willing to pay for it—people would leave their current jobs for a pay upgrade. But he's not willing to do that. So, he needs more candidates.
If someone is good then they are able to compete for more highly paid positions and therefore aren't working for 20% of the salary.
So in the end you shoot yourself in the foot, especially in startups, where crappy code leads your team to work at a snail's pace as your code becomes a tangled spaghetti mess. Then, once it does, you end up hiring the expensive guys to come in as consultants to try to claw back what you could have avoided in the first place. Then you have to hope that in the meantime you haven't had any major security issues...
> Well, there are quite a lot of rumors and stigma surrounding COBOL. This intrigued me to find out more about this language, which is best done with some sort of project, in my opinion. You heard right - I had no prior COBOL experience going into this.
I hope they'd write an article about any insights they gained. Like them, I hear of these rumors and stigma, and would be intrigued to learn what a new person to COBOL encountered while implementing this rather complex first project.
> One of the rumoured stigma is that the object-oriented flavour of COBOL goes by the unwieldy name of ADD ONE TO COBOL YIELDING COBOL.
Which is a joke. Rather than an extension, the COBOL standard itself incorporates OO support, since COBOL 2002. The COBOL standards committee began work on the object-oriented features in the early 1990s, and by the mid-1990s some vendors (Micro Focus, Fujitsu, IBM) were already shipping OO support based on drafts of the COBOL 2002 standard. Unfortunately, one problem with all the COBOL standards since COBOL 85 (2002, 2014 and 2023) is that no vendor ever fully implements them. In part that is due to lack of market demand; in part it is because NIST stopped funding its freely available test suite after COBOL 85, which removed a lot of the pressure on vendors to conform to the standard.
Algol 68 actually isn't too bad of a language to work with, and there's a modern interpreter easily available. Unfortunately it lacks all support for reading and manipulating binary data so I think a Minecraft server would be nearly impossible.
And then there is the whole DoD security assessment of Multics versus UNIX, where PL/I did play a major role versus C, so the compiler did work correctly enough.
Just this week we're discussing a VC++ miscompilation on Reddit.
IBM is still building and maintaining their PL/I compiler for z/OS today, though it is only compliant with specs up to 1979; the '87 ISO standard is only partially adopted.
I get the distinct feeling it's been a long time since IBM wrote PL/I compilers considering anyone but IBM. So 'correct' here might be 'what IBM needs'. YMMV.