When I was at OpenText there was a new acquisition email at least every quarter. I keep wondering how OpenText keeps acquiring so many companies; the company was valued at maybe $6 billion then and it's $11.5B now (still shy of the $12B they aimed for some years ago). The best part of working there was being out at a social event in SF and overhearing someone at a company being acquired by OpenText saying quite unpleasant things about OpenText and its strategy. I think she would've killed me if she knew I worked for them. I'm pretty sure OpenText was about to fire everyone and that's why she was so upset. (Strategy of the company AFAIK is to acquire lots of enterprise offerings, integrate the product with their current offering, fire everyone they can, and then sell the product with their current offering in hopes of getting bigger numbers to prop up stock price further)
I expect any consumer facing options to vaporize or be generally unsupported. OpenText is all about that enterprise/government software stuff. (I worked on a project where we were competing with IBM - to give some context)
Growth through acquisition is a pretty workable model. Especially when you cut the resulting companies to the bone and basically put their product on life support.
For products used by big enterprises, it's pretty amazing how long a product can go without meaningful updates and still have companies paying for maintenance. I've seen it a few times now: the engineering/testing is basically cut to one developer whose job is to fix bugs and nothing else. For many enterprises this is a pretty decent model. In many cases they don't need or want new features, and the IT guys are very happy if they just get a point release once or twice a year that is a 100% drop-in replacement with some bugs fixed.
For a large conglomerate that can sell new licenses as part of a bundle deal, it's a good plan too: the customer base grows at about the same rate as people drop the products. The result is a constant cash flow with basically no overhead.
Exactly my thoughts as well! The average update to a product I use regularly is strongly negative. If I'm already using it that means that it mostly does what I need it to. Constant updates are far more likely to change or degrade existing functionality and/or add bloat that slows performance or clutters the UI, than they are to improve something that already works.
I strongly agree! I really like the idea of a "finished" product that doesn't keep adding new features and growing the team just because the company took too much funding and needs to find more ways to grow. This is the kind of product I'm trying to build with my startup [1]. (I did take some funding from Earnest Capital, but this was just to help with my personal living expenses and pay for a few larger projects.)
Reminds me of how this has not worked for Mint.com. It was cool when it first came out, got acquired by Intuit, and is now an old, kinda useless app that was never updated.
I think many companies built around one successful product will hit that point in their lifecycle... where putting more work into perfecting and improving the product provides diminishing returns, but there is a solid demand for the current version of the product.
At that point, it is absolutely the best thing to get rid of everyone and just keep selling the product you have. Too often a company tries to hold on too long, adding new features that don't actually improve anything.
I wish more companies were willing to admit when they reached that point.
A lot of companies try to pivot using the core product as income to find the next big win, with varying levels of success.
But yes, you're right: at some point you have to admit your product is mature and cut the engineering effort on that product, hopefully before you start driving customers away.
It's probably not a terrible business model in some regards. However, as a software engineer there - it was miserable. Compensation was crap too. I wouldn't recommend it as a place to work unless you're high up. I think those people got $$$.
Yup - this was the CA (Computer Associates) working model for decades. And as further commenters in the thread show, sometimes for enterprises reliably staying the same year after year is a GOOD thing.
This used to be the standard acquisition model for a few decades, but I feel that since 2010 most large corporations have figured out this isn't a great strategy, as they lose all the mindshare and eventually the market to a new competitor that springs up from nowhere.
When I was at Oracle this was a well-known business plan: if Oracle acquired a company, a bunch of guys who knew the market/problem well would get together, seek funding, and start a competitor aimed at the existing customer base. People couldn't be happier to get out of working with Oracle, so it worked well for a long time.
More recently larger companies have learned to leave their acquisitions alone. Just look at LinkedIn as a great example.
Interesting. I know that Oracle paid $850 million for Moat (the analytics company); would you happen to know of any competitors that could have sprung from that acquisition?
What's worse is that they also acquired multiple competing businesses with similar products in a very short time frame, Web Content Management software in our case. That was about a decade ago. This wreaked havoc on the morale of the teams within.
On one hand, they had to handle the new management and culture of a global company, which was completely different from the smaller scale they had worked at before.
On the other hand, each team knew that they had to prove to management that their product was superior to the others. Any attempt to integrate those products with each other or learn from them was futile, because you could not simply remove the competitive behavior among those teams.
The best decision in my life was leaving this nightmare. I never heard of the products again.
> (Strategy of the company AFAIK is to acquire lots of enterprise offerings, integrate the product with their current offering, fire everyone they can, and then sell the product with their current offering in hopes of getting bigger numbers to prop up stock price further)
That's profitable because enterprise sales are so glacial.
So, if you have the money, it's better to wait until somebody else finishes the slog and gets the contract, and then you can just buy that company to get the enterprise customer.
That does not engender confidence when choosing products. I would be wary of them letting products either wither or just milk their momentum for as long as they can.
Nah. Per a study The Economist covered some years back, about half of all mergers destroy value.
The acquire-and-cut-to-the-bone strategy is a good way to juice short-term numbers, but it destroys the drivers of long-term growth. All the execs get their bonuses and get to cash out their options at a high value, but it's bad for everybody else in the long term: customers, employees, investors.
The theoretical justification for post-merger cuts has also been declining for some time. Improved computation and communication have made it much easier to outsource non-critical functions, so merged companies have much less redundancy to eliminate today. E.g., in a merger 20 years ago, maybe two merged tech companies could consolidate data centers and get rid of a bunch of ops staff. Now maybe they get a slight improvement on their AWS bill and they can lose a few execs, but it's not nearly the same.
And let's not forget the diseconomies of scale. Everybody here should already know that small companies can innovate much more quickly than large ones. And mergers have their own costs; I consulted for a while for a company that grew mainly through mergers, and you cannot imagine the number of meetings that existed just to bridge fault lines between different legacy software, different teams, different offices. It was a mess.
I keep hearing this and it is from a widely misunderstood study. It did not find that half of all mergers destroyed value, instead, it found that only half of mergers created value. The proportion of mergers that destroyed value was much smaller (about 20% iirc). Unfortunately, most people did not bother looking beyond the headline and extrapolated incorrectly.
The much-mocked "synergies" that come from an acquisition. I never did understand why people make fun of that term so much; it's a very simple concept.
When a word is used to the point of meaninglessness (when "expected synergies" don't exist), or when its use is really a euphemism for "we're going to fire a bunch of people" then it exposes itself to mockery.
Synergy's eventual heat death was, I'd hypothesize, greatly accelerated due to its preponderance in merger announcements and consulting decks in the 80s, 90s, and 2000s.
Would be nice to use it again without caveats. Maybe one day.
Eliminating duplicative or wasteful structures is a "synergy", whether that's engineering, sales, support, manufacturing, or G&A staff, real estate, or basically anything else that has a fixed-cost component.
The fact that we're in a people-heavy, asset-light industry, which means these synergies are often people-related, doesn't make the use of the term improper or a euphemism any more than any other common industry term.
The dictionary definition of this is typically something like "when combined, greater than the sum of their individual parts."
I guess technically, if you combine two companies and accomplish the same thing more cheaply due to overlap, the company is therefore "greater", but you can see how the meaning is already watered down from its original intent. This is elimination of redundancy, not synergy.
"This merger will result in a big ROI due to redundancy that can be eliminated" is the proper way to say it. Not "This merger will realize a number of synergies". Hence the euphemism comment.
> Eliminating duplicative or wasteful structures are “synergies” whether that’s engineering, sales, support, manufacturing, or G&A staff, real estate, or basically anything else that has a fixed cost component.
I would characterize that as removing redundancy. Synergy has a different definition altogether.
The definitions I found across several dictionaries (after looking just now) all amounted to something like:
> the interaction of elements that when combined produce a total effect that is greater than the sum of the individual elements, contributions, etc.
When an intended effect of a for-profit company is profit and a combination of two companies becomes more profitable (has a greater total intended effect) as a result of the combination, how is that altogether different?
Because the definition of "synergy" doesn't contain the part where you start firing people and closing departments to deduplicate jobs done in the now merged companies.
It's because it was radically overused by managers trying to justify all sorts of dumbassery, so much so that it became a joke. E.g., it was memorialized in this lovely deck of corporate jargon flashcards: https://www.amazon.com/Corporate-Flashcards-Knock/dp/1601060...
So I worked for Carbonite back in 2016 for almost a year.
About 2 weeks after training I found myself wondering what they'd do in the face of everyone and their mother offering "free storage", which, while completely different from secure backup, sounds like the same thing to a layperson. Towards the end of my year there, most of my calls (yeah, I was customer support on the consumer level) were from seniors who didn't know how it worked but were told they needed to have it. It was an interesting if frustrating job.
You can look on Glassdoor for various stories, or check the papers in Maine for how they billed themselves as creating jobs even after they outsourced the consumer biz to Jamaica. That I saw coming but didn't acknowledge out of fear of losing my job. Oh well. They still canned me a week before Christmas.
This sale doesn't surprise me, nor does the idea that OpenText will gut then sell it. Sour grapes and all, they had to know this was where they'd end up sooner or later.
They may not be marketing towards consumers anymore, but they will happily sell a $10/mo single-PC license like before. I've been using them at home for some time.
Yeah, that's okay, but I have a lot of small family devices that need backup, and not a lot of data to back up. I don't want to pay $10/mo to back up my mom's desktop and the 15GB of data in her user profile. Crashplan had a family plan that handled that use case.
I ended up switching to iDrive, which has a 5TB limit, but which also allows for unlimited clients within that 5TB.
I switched to SpiderOak - it's not good and/or doesn't work and/or is very slow. Now switching again to Duplicacy which is just a backup app and you choose your storage option(s). I really enjoy the control that you have over its behavior.
Backblaze personal backup used to only keep files for 30 days from your last activity, although I see they now let you keep old files for a year, for an extra $2/month.
Personally, I use Arq Backup and Backblaze B2 (their “real” cloud storage). Arq is a one-time paid app that does differential encrypted backups to your choice of local storage or cloud provider.
Keeping 1TB of backups costs me around $5 per month, and the files won't disappear on me if my computer stops working for a month.
If anyone is looking to try it but wants something open source for a client, have a look at http://restic.net. I've been using it to back up to B2 for over a year now.
Also, Arq is “one time buy” just for one version apparently.
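In case it helps anyone evaluating restic: a minimal sketch of driving it against B2 from a scheduled Python job, not anyone's actual setup. The bucket, paths, and password below are made up, and the B2 account keys are assumed to already be in the environment as restic expects them.

    #!/usr/bin/env python3
    """Illustrative restic-to-B2 job -- names and paths are placeholders."""
    import os
    import subprocess

    # Hypothetical repository and password; B2_ACCOUNT_ID / B2_ACCOUNT_KEY
    # are expected to already be set in the environment.
    os.environ.setdefault("RESTIC_REPOSITORY", "b2:my-backup-bucket:laptop")
    os.environ.setdefault("RESTIC_PASSWORD", "use-a-real-passphrase")

    BACKUP_PATHS = ["/home/me/Documents", "/home/me/Photos"]  # placeholders

    def restic(*args):
        # Run a restic subcommand and raise if it fails.
        subprocess.run(["restic", *args], check=True)

    # One-time setup: restic("init")
    restic("backup", *BACKUP_PATHS)
    # Keep a modest retention window and drop unreferenced data.
    restic("forget", "--keep-daily", "7", "--keep-weekly", "4", "--prune")

Drop something like that in cron and you're done; the retention numbers are just examples.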
I discovered Restic while playing with Golang; it looks pretty neat. Have you had any issues with stability or corruption of your data? What are you using for encryption (if any)? Thanks in advance.
BackBlaze by far - I moved from CrashPlan when they screwed their consumer offering.
BackBlaze has a stable client (even though it lives in system extensions) that uses fewer resources, as opposed to the buggy POS that is CrashPlan. IIRC CP client is Java, BB is native.
BackBlaze's restore is far more robust (I got timeouts trying to restore large files from CP) and more intuitive.
Currently I use neither Backblaze nor CrashPlan, but restore was one of the few features (among others like CP's excellent versioning) where CP beat BB hands down. Heck, the last time I checked, BB didn't even let you restore using the client. You had to download the files from the portal and then put them somewhere yourself. I am not sure whether that has changed now. CP's, meanwhile, was so seamless that it would just restore your files in the background wherever you wanted - at their original location or elsewhere. It worked so well.
Yup, Crashplan. I pay for the small biz option, but that lets me back up the machines I need to back up. AND, I have restored a total of three different machines from Crashplan over the years, and it "just works." I am a fan.
I just moved to a NAS with RAID. My data is in my own hands, Cloud Station makes it work just like Dropbox, and I don't have to worry about paying a subscription or a company going south.
I use the Synology DS218+ along with two 4TB WD Red drives (which I got off Craigslist for a steep discount -- the seller even let me run diagnostics on them, and they were very lightly used).
I don't have to pay for the software. If you stick with the Synology OS it comes with they have a package manager with things like various media servers, Cloud Station, a download client, etc. You can also just shell into it and install your own stuff.
You can get NAS disk stations with more capacity (more drive bays), but this was all I needed.
I've got two 8-bay NASes now (onsite and offsite) for my data: one at home and another at a friend's for DR.
Running Proxmox as the hypervisor with ZFS, and running either native Docker images or LVMs on Proxmox for all of my services. I went down the route of using Proxmox because I want to be able to build my own k8s cluster on my own hardware.
Now if you've not got much experience with ZFS, it's not hard to pick up, but you do need to get your head around the basics.
The documentation in the link below is enough to get you started on Proxmox, and it's some of the best I've ever read.
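To give a flavour of those basics, here's a tiny snapshot-rotation sketch in Python of the kind of thing you end up writing once zfs snapshot/list/destroy make sense. The dataset name and retention window are made up, and it needs ZFS privileges (usually root).

    #!/usr/bin/env python3
    """Illustrative daily ZFS snapshot rotation -- dataset name is hypothetical."""
    import subprocess
    from datetime import datetime

    DATASET = "tank/data"   # hypothetical dataset
    KEEP = 14               # daily snapshots to retain

    def zfs(*args):
        out = subprocess.run(["zfs", *args], check=True,
                             capture_output=True, text=True)
        return out.stdout

    # Take today's snapshot, e.g. tank/data@daily-2019-11-11
    zfs("snapshot", f"{DATASET}@daily-{datetime.now():%Y-%m-%d}")

    # List daily snapshots, oldest first, and prune the surplus.
    snaps = [s for s in zfs("list", "-t", "snapshot", "-H", "-o", "name",
                            "-s", "creation").splitlines()
             if s.startswith(f"{DATASET}@daily-")]
    for old in snaps[:-KEEP]:
        zfs("destroy", old)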
If you want to go down the route of something a bit more off the shelf for your OS, I wouldn't use anything but Unraid. It's got cheap licensing, but it's so fucking easy to use and has an awesome UI.
Not GP, but considering he said “Cloud Station” I assume he’s using a Synology NAS which is a ready-to-go product (just add drives).
Personally I have a 2U Dell R520 I run FreeNAS on, which cost me as much as one of Synology's systems with a lot more capability. But I also have a 24U rack in my office and use it to provide storage for my hypervisors in addition to Plex, general SMB shares, etc., so I have use cases that are a little more complicated than even the average HN user's home network.
I never upgraded to FreeNAS 10, it was a significant departure and it was pretty clear ixSystems had made a fumble and was going to end up needing to redirect the ship. Once 11.1 came out and it was clear they learned not to throw baby out with the bathwater I upgraded and have been quite happy with the train since, though I’m glad the “new” dashboard is still optional because it needs some love yet.
I'm planning to do the same, but best practices on the 3-2-1 rule say that at least one of those backups should be off-site.
I made a backup of all of my stuff onto a 4TB external and left it at my parent's house last Christmas, but that's probably not what they meant by off-site. :)
That's definitely a legitimate offsite backup strategy. If your data changes more often than you visit, you may want something more interactive like a Raspberry Pi connected to that external drive that you can send files to.
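As a sketch of what that can look like (hostname and paths are invented, and SSH keys are assumed to be set up already), a nightly push to the Pi is about this much:

    #!/usr/bin/env python3
    """Push a folder to a Raspberry Pi offsite -- hostname/paths are hypothetical."""
    import subprocess

    SRC = "/home/me/important/"                             # trailing slash: copy contents
    DEST = "pi@parents-house.example.com:/mnt/usb/backup/"  # hypothetical target

    # -a preserves metadata, -z compresses over the wire. Deliberately no
    # --delete, so an accidental local deletion doesn't wipe the offsite copy;
    # add snapshotting on the Pi side if you want real versioning.
    subprocess.run(["rsync", "-az", SRC, DEST], check=True)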
If you use a Dropbox like solution, some of your data is already off-site in the form of the end user machines (e.g. laptops). It's not exactly a "backup" (in that it's not a timestamped snapshot), but it does mean your data is replicated.
But please be careful because if you use anything based on automatic synchronisation, such as RAID or a DropBox-like service, there is also a danger that anything you accidentally delete or corrupt will then propagate. If there’s no historical version saved as part of the system, or if there is but you don’t notice the problem before it disappears, you can end up damaging your “backup” copies of your data in exactly the same way as the original if something goes wrong.
"Who's still left in the home backup solution ring? Besides iCloud/Google Drive/Dropbox, I count...Backblaze? And that's it? "
If you are a unix user and/or reasonably technical, I think rsync.net would be the best choice.
You get an empty ZFS filesystem, possibly with snapshots enabled, that you can do whatever you like with (provided "whatever you like" is something that runs over SSH...).
Pointing borg/rclone/restic/git-annex/etc. to rsync.net works exactly the way you'd think it would.
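For example, a minimal borg run against an SSH target like rsync.net could look like the sketch below; this isn't anyone's actual setup, and the account name, passphrase, and paths are placeholders.

    #!/usr/bin/env python3
    """Illustrative borg run over SSH -- account and paths are hypothetical."""
    import os
    import subprocess
    from datetime import date

    REPO = "ssh://user1234@rsync.net/./backups/laptop"   # hypothetical account/path
    os.environ["BORG_PASSPHRASE"] = "use-a-real-passphrase"

    def borg(*args):
        subprocess.run(["borg", *args], check=True)

    # One-time setup: borg("init", "--encryption=repokey", REPO)
    borg("create", "--stats", "--compression", "lz4",
         f"{REPO}::{date.today().isoformat()}",
         "/home/me/Documents", "/etc")
    borg("prune", "--keep-daily", "7", "--keep-monthly", "6", REPO)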
Looking at rsync.net's pricing system, I calculate that backing up 1TB of data would run about $25/mth. Do you consider that competitive for a home user just trying to backup his photos? I have considerably more--4TB, to be exact, so that seems really untenable to me.
Important to know: Arq itself only runs on Mac/Windows, but there is an open-source restore client for Mac/Windows/Linux. So if they go out of business your data isn't lost (presuming the cloud backend you used still exists).
I left because Google's rate limiting makes them really slow.
Also, they mix together all errors, including permission errors and "the file is being accessed by another program", so every run would have a few hundred errors (in fact, if I got zero errors, that was always a sign that the backup hadn't run correctly).
Not a typical home user setup given the lack of pretty UI, but we sync our various individual devices to our server using scheduled jobs and then back up everything relevant from the server using Tarsnap.
It’s not perfect given the lack of snapshot functionality of both the local devices and the server, and technically we’re vulnerable to corruption if we get something like a database or Git repo at exactly the wrong time, but in practice we run the important jobs at the end of a day when we’re not doing anything else anyway so it’s very unlikely to cause a problem in our situation.
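Not our actual scripts, but the nightly Tarsnap job is conceptually no more than the sketch below. The archive name and paths are placeholders, and the key file and cache directory are assumed to be configured in tarsnap.conf.

    #!/usr/bin/env python3
    """Nightly Tarsnap archive -- paths and archive name are placeholders."""
    import subprocess
    from datetime import date

    PATHS = ["/srv/files", "/etc", "/var/backups/db-dumps"]
    archive = f"server-{date.today().isoformat()}"   # archive names must be unique

    # Key file and cache directory are assumed to be set in tarsnap.conf.
    subprocess.run(["tarsnap", "-c", "-f", archive, "--print-stats", *PATHS],
                   check=True)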
After Crashplan decided they didn't want my money, I just stuck with Acronis True Image. I primarily keep my backups on my NAS, but they added a cloud storage option not long ago. I've not had to test a full rescue-disk restore from the cloud yet, but for file-level stuff it works fine.
Nice thing is it can actually store disk images, so when your disk decides to die, like my OCZ SSD did, you can be back up and running within 30 minutes or less.
Windows only though.
Would like to be able to use some file-based solution a la CrashPlan, but I haven't found anything with similar features.
You'll have to let me know what you think. I've got 5TB of data on my FreeNAS box I would like to have a hot backup of, and Wasabi is pretty much the only affordable solution on the market outside of shipping another box to my dad and using ZFS send/receive (which I'll probably also do since he wants a media server in his house someday; it can just pull double duty as my offsite backup).
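If you do go the send/receive route, the replication side is pleasantly small. A hedged sketch of an incremental push (pool, snapshot, and host names are invented, and it assumes the previous snapshot already exists on the receiving box):

    #!/usr/bin/env python3
    """Incremental ZFS replication over SSH -- all names are hypothetical."""
    import subprocess

    PREV = "tank/media@weekly-45"        # snapshot already present on both sides
    CURR = "tank/media@weekly-46"        # new snapshot to ship
    REMOTE = "dad-nas.example.com"       # hypothetical offsite box

    send = subprocess.Popen(["zfs", "send", "-i", PREV, CURR],
                            stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", "backup/media"],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")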
Darn! I just got through converting my Mozy backup to Carbonite.
Carbonite failed to port any of my Mozy backup settings. They were backing up the useless "\Users" folder, not the important stuff which I had configured in Mozy.
Instead of notifying me from the "mozy.com" domain, they notified me from a carbonite.com email domain, which was not whitelisted and therefore never reached me.
I interviewed at Carbonite back in 2014ish. I walked out of the interview absolutely perplexed about what they did and where the leadership was. Before I even got the rejection email, I started the process to give myself the option to short the stock, because I was sure CARB was going to go out of business in short order based on what I saw.
I didn't end up following through but the stock has been solid and increased a fair amount and here they are selling for a billion dollars. So... goes to show what I know.
This thread could not have come at a more perfect time, my organization is looking to back up just under 1PB of content spread across about a dozen storage nodes.
We are looking for a reputable backup provider who can provide a solid product at the lowest price, striking a good balance between cost and features.
Our priorities are price, ability to encrypt the data, and setting up private network peering.
Any recommendations?
Our staff is comprised of smart system admins and developers who are well versed with Linux. No pretty UI's needed on our end.
We[1] will meet B2/Wasabi pricing for 1PB of storage.
"Our staff is comprised of smart system admins and developers who are well versed with Linux. No pretty UI's needed on our end."
Please email info@rsync.net. It sounds like a good match. I would be very happy to discuss how our product (an actual ZFS filesystem that you control) could be deployed for you.
Yev here -> Gleb's email is above, but if you do already have some hardware from 45 Drives or FreeNAS - Backblaze B2 may already be integrated into them!
Cloud storage isn't the same as endpoint data protection. While it does create a second copy of data and has versioning, it really is an online filesystem with redundancy. Storage providers have not yet put together what I would consider an auditable data protection capability where you can restore a full folder, disk volume, or complete system to a point-in-time snapshot of the data with specific retention periods (e.g., we need to keep 7 years for our ZZZ records and 4 years for the QQQ records, etc.).
Context and disclosure: I run Jungle Disk [1], which is in the endpoint data protection (and storage) market for small business. We see a lot of CrashPlan (the Code42 product) and Backblaze [2]. The market has a ton of competitors [3] overall. Further up market we see Barracuda, Druva, and CommVault (along with a whole other set of competition which integrates with cloud providers).
"Storage providers have not yet put together what I would consider an auditable data protection capability where you can restore a full folder, disk volume, or complete system to a point-in-time snapshot of the data with specific retention periods"
I hope it will be useful and interesting to point out that rsync.net, built on the ZFS filesystem, allows customers to create arbitrary snapshot schedules that are live and browseable.
Thousands of our customers do exactly what you just described when they set up day/week/month/quarter/year snapshots and then browse right in with any old SFTP client[1] and retrieve arbitrary files and directories (or VM images, whatever) as they existed on those dates.
These ZFS snapshots are immutable (read only) and immune to attacks like ransomware or a rogue employee.
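To illustrate, here's a hypothetical sketch (not an official rsync.net example): assuming the snapshots are exposed under ZFS's usual .zfs/snapshot directory, pulling one file as it existed on a given date is a one-liner from any machine with OpenSSH. The account and snapshot names below are invented.

    #!/usr/bin/env python3
    """Fetch one file as it existed in a given snapshot -- names are invented."""
    import subprocess

    # Assumes snapshots show up under ZFS's usual .zfs/snapshot directory.
    ACCOUNT = "user1234@rsync.net"
    SNAPSHOT_PATH = ".zfs/snapshot/daily-2019-11-01/projects/report.txt"

    # OpenSSH's sftp downloads a file named in the destination into the
    # current directory.
    subprocess.run(["sftp", f"{ACCOUNT}:{SNAPSHOT_PATH}"], check=True)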
For servers (or Linux workstations) this is a great strategy. For Windows 10 / macOS endpoints you get the filesystem provided by the OS.
This also doesn't address data stored in cloud services; full backup of unstructured endpoint user data is a messy one. While large companies use policy management to force data off of laptops and non-protected cloud systems, almost every employee I interact with at a normal small business has critical files on their PC.
I can highly recommend Arq with the cloud storage provider of your choice. I switched after Crashplan screwed with their plans and have been very happy.
Something like G Suite Business gives me my own email domain, all their apps, and unlimited data storage for $12 per user. If you have over 1TB worth of data, that's definitely worth it; it will cost you more for fewer features with Arq. Under 1TB and with no use for anything I mentioned, Arq Cloud is definitely where it's at.
Seems like the unlimited storage is only valid if you have more than 5 accounts. Otherwise, it's capped at 1TB. Is that your experience as well, or are you grandfathered into another plan?
Haven't noticed this. There are times where I'll pause its activity to save bandwidth, but apart from that, it's been almost entirely "set-it-and-forget-it".
Carbonite does enterprise file backup. OpenText is a big software conglomerate that tends to purchase slow-growing or declining (aka, "boring") enterprise software companies and consolidates them, cutting costs and milking the existing customer base for updates and support contracts. Basically they seem to bet on inertia surrounding technology in big companies, which tend not to change things that are working acceptably.
OpenText also acquired Guidance Software a few years ago. They make EnCase which is one of the largest digital forensic software tools utilized for investigations.
I was a developer at Guidance from 2003-2010. I left because the future was easy enough to foresee. There might be one or two developers left at OpenText whom I know, but most have left. It’s all about milking that Enterprise SMS revenue.
X-Ways and Magnet are rightfully decimating EnCase’s traditional use in forensics.
Can't fully answer your question, but in 2015 OpenText acquired Actuate, a BI and analytics company. It was of interest to me because Actuate at the time was the main contributor to BIRT [0], an Eclipse Foundation project for creating reports - IMHO the best open-source reporting application. Anyway, BIRT has been around for ages and does not seem to have been affected by the acquisition. Not much happening there, but it was already a mature product.
Carbonite -> backup solutions, similar to Veeam.
OpenText -> Enterprise Document Management system (with all the governance stuff big co's want and then some)
We have some of the largest enterprise software repos around, and we have some of the strongest enterprise relationships around, with top-tier companies in the world. Anyone who says otherwise has no idea what they're talking about.