Yikes, that's a rough outlook. Not disagreeing with it, just poking at it a bit from a distance, and hoping that you experience a change in direction after a couple days.
Every single bit of data in your Microsoft 365 tenant (that you wanted to back up using Active Backup for Microsoft 365) could also have been accessed by a malicious actor. The exact period for which this flaw existed is unknown, but Synology fixed it after modzero disclosed it to them.
Inspecting the setup process of any Synology Active Backup for Microsoft 365 install, even once, gives you the master key to every M365 tenant that had authorised the Active Backup for Microsoft 365 enterprise app.
Finally gave Claude a go after trying OpenAI a while and feeling pretty _meh_ about the coding ability... Wow, it's a whole other level or two ahead, at least for my daily flavor, which is PowerShell. No way a double-digit number of jobs isn't at stake. This stuff feels like it is really starting to take off. Incredible time to be in tech, but you gotta be clever and work hard every day to stay on the ride. Many folks got comfortable and/or lazy. AI may be a kick in the pants. It is for me anyway.
I've been trying every flavor of AI powered development and after trying Claude Code for two days with an API key, I upgraded to the full Max 20x plan.
Cursor, Windsurf, Roo Code / Cline, they're fine but nothing feels as thorough and useful to me as Claude Code.
The Codex CLI from OpenAI is not bad either; there's just something satisfying about the LLM straight up using the CLI.
It really is night and day. Most of them feel like cool toys; Claude Code is a genuine workhorse. It immediately became completely integral to my workflow. I own a small business, and I can say with absolute confidence this will reduce the number of devs I need to hire going forward.
I don't get claims like that. If AI lets me do more and be more productive with fewer people, I could also grow and scale more; each dev brings more, so I could hire more and multiply growth again. I'm skeptical because I don't see it happening. If anything, it's the contrary: more people doing more things, maybe, but not 10x or 100x; otherwise we would see products that used to take 5 years to build coming out in literally 15 days.
It might be that the value more software can add is already at its limit in any given business - or at least returns will be diminishing. Meaning in those particular businesses the appetite to hire devs might stay flat (or even shrink!) as AI makes existing devs more efficient.
The more interesting question is whether this is true across the economy as a whole. In my view the answer is clearly no. Are we already operating at the limit of more software to add value at the margin? No.
So though any particular existing business might stop hiring or even cut staff, it won't matter if more businesses are created to do yet more things in the world with software. We might even end up in a place where across the economy, more dev jobs exist as a result of more people doing more with software in a kind of snowball effect.
More conservatively, though, you'd at least expect us to just reach equilibrium with current jobs if indeed there is new demand for software to soak up.
It’s not my goal to scale infinitely. I want to run a small, tight business which is highly profitable but also pleasant to operate long term. I’m not looking for some huge exit.
I would also like to know this. I've only very briefly looked into Claude Code, and I may just not understand how I'm supposed to be using it.
I currently use cursor with Claude 4 Sonnet (thinking) in agent mode and it is absolutely crushing it.
Last night I had it refactor some Django / React / Vite / Postgres code for me to speed up data loading over WebSocket, and it managed to:
- add binary websocket support via a custom hook
- add missing indexes to the model
- clean up the data structure of the payload
- add messagepack and gzip compression
- document everything it did
- add caching
- write tests
- write and use scripts while doing the optimizations to verify that the approaches it was attempting actually sped up the transfer
All entirely unattended. I just walked away for 10 minutes and had a sandwich.
The best part is that the code it wrote is concise, clean, and even stylistically similar to the existing codebase.
If Claude Code can improve on that, I would love to know what I am missing!
My best comparison is that it's like MacBooks/iPhones etc.
Apple builds both the hardware and the software so it feels harmonious and well optimized.
Anthropic builds the model and the tool, and it just works. Sonnet 4 in Cursor is good too, but on the $20 plan you're often crippled on context size (not sure if that's true with Sonnet 4 specifically).
I had actually heard about the OpenAI Codex CLI before Claude Code and had the same thought initially, not understanding the appeal.
Give it a shot and maybe you'll change your mind, I just tried because of the hype and the hype was right for once.
i rewrote a code base that i’ve been tinkering on for the last 2 years or so this weekend. a complete replatform: new tech stack, ui, infra, the whole nine yards. the rewrite took exactly 3 days, and it referenced the old code base, online documentation, and github issues, all (mostly) without ever leaving claude.
it completely blew my mind. i wrote maybe 10 lines of code manually. it’s going to eliminate jobs.
that's the part i'm not sold on yet. it's a tool that allows you to do a year's work in a week - but every dev in every company will be able to use that tool, thus it will increase the productivity of each engineer by an equal amount. that means each company's products will get much better much faster - and it means that any company that cuts head count will be at risk of falling behind its competitors.
i could see it getting rid of some of the infosec analysts, i guess. since it'll be easier to keep a codebase up to date, the folks that run a nessus scan and cut tickets asking teams to upgrade their codebase will have less work available.
the amount isn't relevant to the argument; the point is that the amount - whatever that may be - is applied equally to all companies, which means the competitive balance will stay the same. it's a great build tool, but you still need builders to use the tool.
Claude Code works surprisingly well and is also cheaper, compared to Windsurf and Cline + Sonnet 4. The rate of errors dropped dramatically for my side projects, from "I have to check most changes" to "I have not written a line".
But note that the problems it got wrong are troubling, especially the off-by-one error the first time, as that's the sort of thing a human might not be able to validate easily.
Yup, Claude Code is the real deal. It's a massive force multiplier for me. I run a small SaaS startup. I've gotten more done in the last month than the previous 3 months or more combined. Not just code, but also emails, proposals, planning, legal, etc. I feel like I'm working in slo-mo when Claude is down (which unfortunately happens every couple of days). I believe that tools like Claude Code will help smaller companies disproportionately.
> Finally gave Claude a go after trying OpenAI a while and feeling pretty _meh_ about the coding ability... Wow, it's a whole other level or two ahead,
I’ve been avoiding LLM-coding conversations on popular websites because so many people tried it a little bit 3-6 months ago, spotted something that didn’t work right, and then wrote it off completely.
Everyone who uses LLM tools knows they’re not perfect: they sometimes hallucinate, their solutions to some problems will be laughably bad, and all the other usual LLM caveats apply.
The difference is some people learn the limits and how to apply them effectively in their development loop. Other people go in looking for the first couple failures and then declare victory over the LLM.
There are also a lot of people frustrated with coworkers using LLMs to produce and submit junk, or angry about the vibe coding glorification they see on LinkedIn, or just feel that their careers are threatened. Taking the contrarian position that LLMs are entirely useless provides some comfort.
Then in the middle, there are those of us who realize their limits and use them to help here and there, but are neither vibe coding nor going full anti-LLM. I suspect that’s where most people will end up, but until then the public conversations on LLMs are rife with people either projecting doomsday scenarios or claiming LLMs are useless hype.
I purchased Max a week ago and have been using it a lot. A few experiences so far:
- It generates slop in high volume if not carefully managed. It's still working, tested code, but easily illogical. This tool scares me if put in the hands of someone who "just wants it to work".
- It has proven to be a great mental block remover for me. A tactic i've often had in my career is just to build the most obvious, worst implementation i can if i'm stuck, because i find it easier to find flaw in something and iterate than it is to build a perfect impl right away. Claude makes it easy to straw man a build and iterate it.
- All the low stakes projects i want to work on but i'm too tired to after real work have gotten new life. It's updated library usage (Bevy updates were always a slog for me), cleaned up tooling and system configs, etc.
- It seems incapable of seeing the larger picture of why classes of bugs happen. Eg on a project i'm Claude Code "vibing" on, it's made a handful of design mistakes that started to cause bugs. It will happily try to fix individual issues all day rather than re-architect to make a less error-prone API, despite being capable of actually fixing the API woes if prompted to. I'm still toying with the memory though, so perhaps i can get it to reconsider this behavior.
- Robust linting, formatting and testing tools for the language seem necessary. My pet peeve is how many spaces the LLM will add in. Thankfully cargo-fmt clears up most LLM gunk there.
My imagined normal government advisor-sort who's just (figuratively) awoken from a 20-year coma and is attempting to approach this situation, initially, as a normal interaction with a normal President is the one drinking and hitting the crack pipe by the end. I tried not to over-burden it with formatting, but may have gone too light and made that unclear.
According to the original producers of The Apprentice, he's not a drinker but has had an addiction to both Sudafed and Cocaine going back probably 40 years now, which has made him completely incontinent.
I know we're talking about AWS and Azure here, but I had to add that, fwiw, the M365 admin interface(s) are so bad it practically feels like a prank. In other words, it's as though someone is purposely making them as chaotic as possible, to what end I can't even guess.
I think it was the Intune interface I was in the other day that had the same link underneath 4 different sections of the dashboard, which I noticed when I had all 4 of them expanded at once. That got a good laugh out of me.
"Here...don't miss this settings page! Seriously! Look!"
Not familiar with the product. The MS name alone would make me biased. Is it really good or even better in some way? Or did you just slightly ironically mean they did not manage to make it worse than competing products?
A great thing about mice is that they are fungible and don't change without the user's consent, unlike software, so you can keep buying and using the same mouse forever.
The average mouse of the time was blocky and uncomfortable.
Seems it's seeking to the offset given by the first ($1) argument in the /dev/port file and writing in stuff from the second argument ($2) with some hex/decimal magic. It's pretty hacky, but if it works, it works.
Writing to seek offset N of /dev/port puts the written byte out on port N. There are 256 possible IO ports in x86, which afaik can be mapped arbitrarily by manufacturers. The hex encoding (using bash math eval syntax) is just for their convenience, so they can write `./outb 80 X` instead of `./outb 0x80 X`, as dd takes decimal parameters, not hex.
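The script itself isn't quoted in the thread, so here's a minimal sketch of what such an `outb` helper might look like (the function name and exact dd flags are my assumptions, not the original script). Since writing to /dev/port needs root and pokes real hardware, this version only builds and prints the dd pipeline it would run:

```shell
#!/bin/bash
# Hypothetical sketch of the outb helper described above.
# $1: port number in hex (no 0x prefix), $2: byte value in hex.
outb_cmd() {
    local port=$((0x$1))  # bash math eval: hex -> the decimal seek offset dd expects
    local val=$((0x$2))   # byte to emit on that port
    # dd seeks to offset <port> in /dev/port and writes exactly one byte there
    printf 'printf "\\x%02x" | dd of=/dev/port bs=1 seek=%d count=1 conv=notrunc\n' \
        "$val" "$port"
}

outb_cmd 80 a5   # POST port 0x80 -> decimal seek offset 128
```

Running the printed command as root (e.g. `./outb 80 a5`) would put 0xA5 on the POST port.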
65,536 IO ports, not 256. And that's not counting memory-mapped IO, whose number is limited only by the physical address space.
> which afaik can be mapped arbitrarily by manufacturers
Pretty much, but some of the assignments IBM picked for the original IBM PC are a de facto standard. However, as newer machines lose more and more legacy peripherals and controllers, fewer and fewer of those original ports still exist. That said, the article mentions using the POST port (0x80), which is one of those original port numbers.
While you can theoretically talk to 65,536 ports, a lot of old hardware only wired up the lowest 10 or so bits of the port address. So the parallel port at 0x378 might also be accessible at 0x1378, 0x2378, ..., 0xF378.
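That aliasing is just the port address arriving modulo the decoded width. A quick sanity check of the 10-bit case (assuming exactly 10 decoded bits, i.e. a 0x3FF mask, for illustration):

```shell
#!/bin/bash
# With only the low 10 address bits decoded, a device at 0x378 responds to
# any port address that equals 0x378 modulo 0x400.
for p in 0x378 0x1378 0x2378 0xF378; do
    printf '%#06x decodes as %#x\n' "$((p))" "$((p & 0x3FF))"
done
```

Every address in the loop prints `decodes as 0x378`, matching the aliases listed above.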