It’s true that many providers need a custom solution for their unique workflows, and the one-size-fits-all EHR is often a myth. The problem is that many EHRs try to solve this with customizations, which can be expensive and still feel like a compromise.
On the other hand, when a team tries to build their own tools, they quickly realize they have to build a ton of compliance and interop code they never wanted to touch in the first place. That’s why open source platforms that handle the core infrastructure, like Medplum, HAPI, or OpenEMR, can be such a good starting point. They get the team 90% of the way there, so they can focus on what really matters: building a great UI/UX for their users.
I don’t think providers truly want to go back to pen and paper, but they are looking for a better way. They can see the promise of what the solution could be, but they just haven't experienced it yet.
I had this thought recently: "different hospitals have different workflows, and they want to see different stuff in the UI. But obviously they all want domain objects like 'a patient,' 'an appointment,' etc. Some company should offer a standard backend, and a starting template for a frontend that each hospital can customize however they want."
It turns out that concept is called "Headless EHR," and it's pretty new.[0] Medplum (that the parent comment mentions) is one of the companies in this space.
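For what it's worth, the "standard backend" half of that idea already has a lingua franca: FHIR, where domain objects like Patient and Appointment are standardized REST resources. Here's a minimal TypeScript sketch; the base URL points at hapi.fhir.org's public R4 test server purely for illustration, and Medplum and other FHIR-native backends expose the same resource shapes:

    // Sketch: query standardized Patient resources from a FHIR R4 server.
    // The base URL is a public test server, used here purely for illustration.
    const FHIR_BASE = "https://hapi.fhir.org/baseR4";

    async function searchPatients(name: string): Promise<any[]> {
      const res = await fetch(
        `${FHIR_BASE}/Patient?name=${encodeURIComponent(name)}`,
        { headers: { Accept: "application/fhir+json" } },
      );
      const bundle = await res.json();
      // FHIR searches return a Bundle; each entry wraps one resource.
      return (bundle.entry ?? []).map((e: any) => e.resource);
    }

    // Every hospital gets the same domain objects; only the frontend
    // that renders them needs to be customized.
    searchPatients("smith").then((patients) => {
      for (const p of patients) console.log(p.id, p.name?.[0]?.family);
    });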
The custom workflows are because each clinic system is trying to figure out the best way to make money, not the best way to treat patients or serve clinicians.
Disclaimer: I know a number of people who work for Epic. ;-)
> given AI seems like a winner takes all type market
Unfortunately for OpenAI and SoftBank, it seems like AI will not be "winner take all", and may actually be quite commoditized. Switching is as easy as choosing a different model from a dropdown in Cursor or whatever your tool of choice is.
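To make the "dropdown" point concrete: many hosted models sit behind OpenAI-compatible chat-completion endpoints, so switching vendors can amount to changing a base URL and a model string. A TypeScript sketch; the URLs and model names in the usage comments are hypothetical placeholders, not real endpoints:

    // Sketch of provider-swapping against OpenAI-compatible APIs.
    async function complete(baseUrl: string, apiKey: string, model: string, prompt: string) {
      const res = await fetch(`${baseUrl}/chat/completions`, {
        method: "POST",
        headers: {
          Authorization: `Bearer ${apiKey}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model,
          messages: [{ role: "user", content: prompt }],
        }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }

    // The "dropdown" is literally just these two strings changing:
    // complete("https://api.provider-a.example/v1", keyA, "model-a", "Hello");
    // complete("https://api.provider-b.example/v1", keyB, "model-b", "Hello");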
Interestingly, when discussing WHOIS with my networking students, I discovered that .edu WHOIS records are not (cannot be?) hidden. I suppose EDUCAUSE either requires WHOIS data to remain open or simply doesn't offer privacy redaction.
Doing some WHOIS lookups, we found a point of contact at a university, called the network admin, said hello, and launched into an impromptu interview. It was cool stuff. I emailed him later in the day to apologize and to thank him for being a good sport about the whole thing. He (fortunately) found it all rather enjoyable.
Some other TLDs, like .us and .in, also forbid WHOIS privacy. TLD owners are free to set whatever policy they want around this. Perhaps .edu does the same.
It's useful for checking whether a domain name is taken without going through a registrar, which is both less convenient and (in the case of shitty registrars) risks your search being sold to domain speculators.
Both give you a way to find out the domain's registrar, registration date, transfer status, and administrative contacts like abuse@. Nameserver data can also be useful at times (a minimal query is sketched below).
Otherwise, what did you expect the registrar to divulge to you, a random passer-by?
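The protocol behind those lookups is about as simple as they come (RFC 3912): open a TCP connection to port 43, send the query terminated by CRLF, and read until the server closes the connection. A minimal Node/TypeScript sketch; whois.verisign-grs.com is the registry server for .com/.net, and other TLDs have their own servers:

    import net from "node:net";

    // RFC 3912 WHOIS: TCP port 43, send "query\r\n", read until close.
    function whois(domain: string, server = "whois.verisign-grs.com"): Promise<string> {
      return new Promise((resolve, reject) => {
        const chunks: Buffer[] = [];
        const sock = net.connect(43, server, () => sock.write(domain + "\r\n"));
        sock.on("data", (chunk) => chunks.push(chunk));
        sock.on("end", () => resolve(Buffer.concat(chunks).toString("utf8")));
        sock.on("error", reject);
      });
    }

    whois("example.com").then(console.log);

For thin registries like .com, the registry's answer mostly points you at the sponsoring registrar's own WHOIS server, which holds the full record.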
As an Australian, I can look up the ownership of random properties in the US for free. But if I want to do the same for a building on my own street, I have to pay a US$11 fee per property searched.
The US has a reputation for being a hypercapitalist society, yet it seems to be behind Australia in the descent into hypercapitalism by not (yet) privatising the registration of land titles. [0]
Considering that Australia (SA) invented the Torrens Title, which means we don't have to pay extra to protect a piece of paper, and that the Titles Office has always charged for access to titles, I don't think this is the "hypercapitalism" hill to die on.
It also means that banks can't sell mortgages out from under their borrowers, because all liens and other financial liabilities attached to a title are known.
It doesn't, because you can negotiate a bulk discount. If you want all the titles, they'll sell that to you - for a huge fee, but still a big discount off paying for them all individually. So essentially it prevents mass scraping by individuals and small businesses, while posing no real obstacle for megacorps with megabudgets.
Learning assembly was profound for me, not because I've used it (I haven't in 30 years of coding), but because it completed the picture - from transistors to logic gates to CPU architecture to high-level programming. That moment when you understand how it all fits together is worth the effort, even if you never write assembly professionally.
While learning assembly is very useful, I think one must be careful about applying assembly-language concepts in an HLL (C, C++, Zig, ...).
For example, an HLL pointer is different from an assembly pointer(1).
Sure, the HLL pointer will eventually be lowered to an assembly-language pointer, but it still has different semantics.
1: because you're relying on the compiler to use registers efficiently, HLL pointers have to come with aliasing restrictions (think C's strict aliasing rules, or the "restrict" qualifier). If the compiler had to assume that every pointer might alias every other, it would be forced to reload values from memory constantly, and programs would be awfully slow as soon as you used a single pointer.
This, out of everything, convinced me. The more I get the "full picture," the more I appreciate what a wondrous thing computers are. I've learned all the way down to Forth/C, and from the bottom up to programming FPGAs with Verilog, so assembly may be just what I need to finally close that last gap.
There are two interesting "safety nets" at play here: the classic "Nobody ever got fired for choosing IBM" principle (where ubiquity creates an implicit guarantee of continuity), and a less visible but equally powerful one where certain open source tools become such fundamental infrastructure that they're essentially "too critical to fail." Think curl, gpg, or apt - most users never directly interact with these, but they're so deeply embedded in the internet's fabric that the ecosystem ensures their maintenance. One heuristic I've found helpful is looking at major corporate adoption patterns - it can be a decent signal for identifying which tools fall into these categories.
I highly recommend js13k (https://js13kgames.com), an annual game jam where your entire game must fit in 13 kilobytes. The tight size limit forces you to get creative with optimization - think procedural generation, custom minimal engines, and code golf techniques that actually matter. While the games might be "useless" in a practical sense, you'll learn more about low-level JavaScript optimization than from most serious projects, all while being part of an incredibly supportive community that loves sharing tricks and tips.
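One concrete example of the procedural-generation trick, since it's the workhorse of most 13-kilobyte entries: ship a tiny seeded PRNG instead of level data, and regenerate the content at runtime. mulberry32 is one commonly used PRNG in size-constrained jams; this TypeScript sketch is illustrative:

    // mulberry32: a tiny deterministic PRNG (~100 bytes minified).
    // With a fixed seed, an entire level is "stored" as one number.
    function mulberry32(seed: number): () => number {
      return () => {
        seed = (seed + 0x6d2b79f5) | 0;
        let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
        t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
      };
    }

    const rand = mulberry32(1337); // same seed => identical level every run
    const terrain = Array.from({ length: 8 }, () => (rand() * 10) | 0);
    console.log(terrain); // deterministic heights, zero bytes of level data shipped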
js13k is a game jam competition. Make a game that fits in a 13 kilobyte zip.
"The thirteenth, anniversary edition of the online js13kGames competition starts… NOW! Build a Web game on a given theme within the next month and fit it into a 13 kilobyte zip package to win lots of cool prizes, eternal fame, and respect!"
Medplum (YC S22) is an open source, API-first healthcare developer platform. As a "headless EHR," we take care of the security, compliance, and regulatory burdens of healthcare software development. Well funded and growing fast.
We're hiring an amazing Dev-Ex / Dev-Rel engineer to delight customers, build sample apps, and promote the Medplum platform.
Disclaimer: I work for Medplum.