Hacker News | depr's comments

For a single user, email signup is easy. But if they were to offer it, they would also need to offer it to business customers. That means they would have to build a role system and link it to AD etc. That is why they don't have it. (Their CEO discusses this in a recent interview.)


It has become a sport here to criticize titles for not explaining any random thing the commenter doesn't know. Generally these things are either in the article or they are very easily findable with a single web search.


I have no thoughts on whether it's actually fake, but it can be to appear financially healthy / boost company image, to gather resumes, to gather information about the job market, to comply with internal policies that say jobs must be posted externally even when planning to promote internally, etc.


> the White House recently took down its paper about the future of programming being memory safe

They took down all of https://www.whitehouse.gov/oncd/, what makes you think they gave any thought to this paper in particular?


I think Azure Document Intelligence, Google Document AI and Amazon Textract are among the best services, if not the best, though, and they offer these models.


I have not tested Azure Document Intelligence or Google Document AI, but AWS Textract, LlamaParse, Unstructured and Omni made it to my shortlist. I have not tested Docling, as I could not install it on my Windows laptop.


So are you mostly processing PDFs with data? Or PDFs with just text, or images, graphs?


Not the parent, but we process PDFs with text, tables, diagrams. Works well if the schema is properly defined.


Isn't that Nuance product EOL?


>> environment related bug on linux, which is mysteriously less a problem on other unix's.

> How do you figure?

From https://illumos.org/man/3C/putenv:

> The putenv() function can be safely called from multithreaded programs


Considering this is a libc issue, not a Linux specific one, I wonder how thread safe other libc implementations like musl and Bionic are. How do the BSDs stack up? Humorously, illumos also ships with glibc...


Seems like a good move and everything, but 12,000 doesn't sound like that many VMs? Is that a lot of VMs?


In terms of VMware customers it isn't a ton of VMs, but not peanuts either. E.g. the last healthcare place I was at (a single customer rather than a cloud provider) had ~30k VMware VMs, and we were still small fish compared to some others. I've heard of places with 10x this VM count making the move post-acquisition, albeit less publicly.

I think the purpose of the article is to highlight that companies like this are starting/continuing to migrate post-acquisition, rather than that this particular customer was impressively large and did so. Particularly with the bits about the relative cost increase, even though the customer was willing to walk away if needed.


A single customer with 30k VMs sounds a lot easier to migrate (or at least to schedule) than 12k spread across potentially 3k different customers (probably more like several dozen or a couple hundred).


Could be, lots of pros and cons to each scenario - doubt either are easy by any measure.

E.g. for about 20% we didn't even have a single piece of documentation other than the server name for who might actually care about the VM going down for the migration we wanted to schedule. Let alone how to test the migration, when it is best to do it, what software was actually running on it, whether it's actually managed/monitored by or integrated with other systems which need to be looked at too, or if it could just be shut down instead (yay healthcare mergers and acqs). Our migration was also to VMware from (mostly) Hyper-V at the time, so not as much custom tooling needed.

On the flip side a cloud provider is going to have all of the owner contact info but no direct control of the guest OS to effect the change, so the battle is more with trying to get the customers to care enough to do the migration with you, but not be so bothered by it all that they up and leave your hosting. Not exactly a walk in the park either.

In either case - almost never the tech that's the hard part for sure :).


I'm involved with Red Hat's effort to shift customers off VMware (upstream project: https://github.com/libguestfs/virt-v2v). Things have really blown up since the Broadcom acquisition at the end of 2023. For us 12K VMs is a medium to large customer, but definitely not unusual. Think someone like a regional bank.

A full conversion of such a customer might involve one consultant on-site, a scoping exercise to classify the VMs into groups and assess which ones are going to be more or less difficult to convert, and perhaps 1-3 months of work to convert them all. Individual VMs would be down for anything from a simple reboot up to 12 hours, depending on which strategy we used for conversion (there are complicated trade-offs related to storage and network bandwidth).
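For the curious, a single-VM conversion with virt-v2v looks roughly like this (a sketch only; the vCenter hostname, datacenter path, ESXi host and guest name below are all hypothetical, and real runs need credentials and storage mapping decisions):

```shell
# Pull a guest from vCenter over the vpx:// input and convert it
# for the local libvirt host. Paths and names are placeholders.
virt-v2v \
  -ic 'vpx://admin@vcenter.example.com/Datacenter/esxi1?no_verify=1' \
  guest-vm \
  -o libvirt -os default
```

The scoping exercise mentioned above is largely about which `-i`/`-o` combination and which transport each group of VMs can tolerate, since that drives the downtime window.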


Sounds like it's similar to what happened with PostgreSQL after Oracle acquired MySQL (via Sun). ;)


That depends on your context. 12,000 VMs is enough to run a fairly large chunk of a small country's healthcare infrastructure, if not all of it.

It's a pretty decent number of VMs, but not close to being unmanageable. I think it's more down to how the rest of your infrastructure looks, if we're talking about ease of migration.


Entirely depends on context; it's like asking how long a piece of string is. One VM could be 200 cores, or it could be 1 core. It could also be a Kubernetes/Docker worker as well, so one VM may host thousands of containers. Or they could just be a handful of important VMs each: you could imagine a small or medium company having maybe 4 VMs for prod, staging, testing etc., letting CDNs handle scaling (with everything else running on local dev machines), so those 12,000 VMs could be the whole stacks of 3,000 companies.


Difficult to say. That could conceivably fit in one rack of 60 compute nodes (1/2U size) at 200 VMs per node, leaving 15U for networking and SAN. Maybe $100/year/VM (rough cost of a lower-end cloud VM like EC2, droplet, etc.) in that case, so $1.2 million per year cost.

Or it could take 10 racks and $50 million per year.
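Spelling out that low-end estimate (every input here is the assumption from the comment above, not a measured figure):

```python
# One dense rack, per the estimate above.
nodes_per_rack = 60           # 1/2U compute nodes in a single rack
vms_per_node = 200
total_vms = nodes_per_rack * vms_per_node   # 12,000: matches the fleet size

cost_per_vm_year = 100        # USD/year, rough low-end cloud VM price
annual_cost = total_vms * cost_per_vm_year

print(f"{total_vms} VMs, ${annual_cost:,}/year")  # 12000 VMs, $1,200,000/year
```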


The article says, right at the start,

> Anexia was founded in 2006, is based in Austria, and provides cloud services from over 100 locations around the world by placing equipment in third party datacenters.

From the company's homepage:

> The founder and CEO of Anexia [...] recently acquired a small hydropower plant in Kammern in the Liesingtal region of Styria for a “significant seven-figure sum” – i.e. several million euros. The power plant on the River Liesing generates 600 KW of electricity, enough to cover a third of the electricity consumption of Anexia’s Vienna data center

so this seems to be a significant operation.


While average power consumption per rack has been increasing fairly steadily over the past 10 years, the metric I currently use is around 10kW per rack under reasonable to heavy load - that's about the same as a consumer electric shower.

So, this is implying their Vienna data center has 180 racks? With 60 racks being about a third, if we say each rack has 40 servers... that's ~7,200 servers total... which is a sizeable chunk of floor space, like 3000m^2, or roughly a dozen tennis courts?
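Spelled out (the 600 kW and "one third" figures come from the article quoted above; the 10 kW/rack and 40 servers/rack are my assumptions):

```python
plant_kw = 600                    # hydropower plant output, from the article
vienna_kw = plant_kw * 3          # plant covers ~a third of consumption -> 1800 kW
kw_per_rack = 10                  # assumed average draw per loaded rack
racks = vienna_kw // kw_per_rack  # 180 racks
servers = racks * 40              # assumed 40 servers/rack -> 7,200 servers

print(racks, "racks,", servers, "servers")  # 180 racks, 7200 servers
```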

But yeah, that's a non-insignificant operation just for the Vienna data center.


I agree wrt his goal, but there is no way he is autistic. And what's a human successionist?


I think the GP is referring to people who either don't care whether AGI takes over from humanity, or who actively prefer that outcome.


I had the impression that he was ADHD, not autistic.


ADHD and autism frequently co-occur.

