The chart in the tweet represents year-on-year growth. Based on these figures alone, the actual number of people employed in tech is still very high, and the numbers can't just go up forever.
Also this only captures 6 industries, which is a narrow view of what would define "tech" these days.
Not to say that the job market isn't tough, but this graph is a very narrow view.
> The chart in the tweet represents year-on-year growth.
Can’t believe how many people are commenting without looking at what the chart means. We’ve lost 50k jobs over the last two years after decades of adding 100k+ every year, including pandemic highs of 300k+ per year. Total employment remains way above 2000, 2008, and 2020, unlike what the title suggests.
Tech has also changed to become an all encompassing thing. In 2000, loads of people didn't have computers or cell phones. Maybe they owned a CD player and watched TV. Tech was avoidable then. But now everyone has a phone in their pocket, a computer, does all their banking through apps instead of visiting the bank, orders food online, orders taxis through apps, and so on. Everything is lumped under tech now and unavoidably so.
Yes, but how many people have tried to enter the field since then? Is the economy that supports the current number of tech workers really better than one that supports 10x?
No, the title is not misleading at all - your comment is misleading. Total tech jobs being up doesn't tell us anything, since there are also way more tech workers now than back then.
Over 100K people graduate in CS/IT per year, and that doesn't even count people who come in to the industry from overseas or from other degree paths.
"Tech employment now significantly worse than the 2008 or 2020" says the unemployment rate is higher today than in 2008 and 2020, but that is NOT what the chart shows.
As an aside, I remember some time ago that Tesla stock went down because growth in Model 3 sales slowed... after years of the Model 3 being one of the best-selling cars on the planet.
If numbers don't go up fast, I guess people get scared.
Absolute numbers are still higher than they were 5 years ago but the number of jobs going down means that the same (or about the same) number of people are starting to compete for a smaller number of jobs. Many people have chosen to study software development in recent years so nowadays the workforce is much larger than it was 5-10 years ago.
This imbalance of supply and demand shifts power toward employers and it's hard not to feel the pressure even if you're not looking for a job right now.
The health of the market is not a function of the total number of jobs alone, it's a function of the number of jobs and the number of people to fill them.
The number of total jobs going up year after year meant that there were increasing numbers of candidates, new people entering the field. If the job growth stops, there will still be candidates coming in. There will also be the new hires from the last decade moving into increasingly senior roles, and there won't be space for them (unless you devalue the meaning of "senior" even more).
So the year over year change matters a lot. If it plateaus, or even declines slightly, it's more than enough to make a terrible market.
YoY change in jobs is still probably not the best way to visualize overall market health. As you say, you also have to take into account the number of people to fill the jobs. To me it seems like the least misleading statistic would be a graph showing unemployment and underemployment % over time. I'd probably also toss in graphs of the length of unemployment periods as well as various wage percentiles (quintiles or deciles, maybe) over time.
And I already thought we hired more devs than needed pre-covid. It was pretty well surmised that big tech was hiring to starve other companies of talent, and thus employees were underutilised.
Thank you. And those raw numbers in the chart that go back to 2001 are not normalized percentages; what’s happening right now is NOTHING like 2001.
But, it just doesn’t hit the same way on X to say “We are back to late 2023-levels of tech employment” or “The losses in tech jobs over the last 18 months give back two months of hiring in 2022”.
I do a check for `request.htmx` in my views and conditionally return a template partial as needed. This reduced my need for one-off view functions that were only returning partials for htmx. Works pretty well from my experience.
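A minimal sketch of that pattern. The names are illustrative, and the request object is faked here to keep it self-contained; in a real Django project `request.htmx` comes from the django-htmx middleware and you'd return `render(request, template, context)` instead of the template name.

```python
# One view serves both the full page and the htmx fragment,
# instead of a separate one-off view per partial.

def task_list_view(request):
    """Return the partial for htmx requests, the full page otherwise."""
    if getattr(request, "htmx", False):
        template = "tasks/_list_partial.html"   # just the fragment
    else:
        template = "tasks/list.html"            # full page, extends base.html
    return template  # stand-in for render(request, template, context)


class FakeRequest:
    """Bare stand-in for a Django request carrying the htmx flag."""
    def __init__(self, htmx):
        self.htmx = htmx


print(task_list_view(FakeRequest(htmx=True)))   # tasks/_list_partial.html
print(task_list_view(FakeRequest(htmx=False)))  # tasks/list.html
```

The full-page template includes the same partial via `{% include %}`, so the fragment markup lives in exactly one file.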
Any code or blog written by Adam is worth spending some time on.
It will be interesting to see how the tasks framework develops and expands. I am sad to see the great Django-Q2 lumped in with the awful Celery though.
Celery is the worst background task framework, except for all the others.
There are bugs and issues, but because so many people are using it, you’re rarely the first to stumble upon a problem. We processed double-digit millions of messages daily with Celery + RabbitMQ without major obstacles. Regardless of what people say, it should be your first go-to.
Celery has way too much magic crammed into it, it is very annoying to debug, and produces interesting bugs. Celery is/was also a "pickle-first" API and this almost always turns out to be the wrong choice. As a rule of thumb, persisting pickles is a really bad idea. Trying to hide IPC / make-believe that it's not there tends to be a bad idea. Trying to hide interfaces between components tends to be a bad idea. Celery combines all of these bad ideas into one blob. The last time I looked the code was also a huge mess, even for old-guard-pythonic-code standards.
I think Celery has a lot of magic happening under the hood. When the abstractions are that high, it's important they never leak and you never have to look below the layer you're supposed to see.
I often prefer designing around explicit queues and building workers/dispatchers. One queuing system I miss is the old Google App Engine one - you set up the queue, the URL it calls with the payload (in your own app), the rate it should use, and that's it.
I tried django-q and I thought it was pretty terrible. The worst was that I couldn't get it to stop retrying stuff that was broken. Sometimes you ship code that does something unexpected, and being able to stop something fast is critical imo.
Fundamentally I think the entire idea behind celery and django-q is mostly misguided. People normally actually need a good scheduler and a bring-your-own queue in tables that you poll. I wrote Urd to cover my use cases and it's been rock solid.
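The "queue in a table that you poll" idea from the comment above can be sketched in a few lines. This is a hedged illustration with made-up schema and function names, using in-memory SQLite; a real multi-worker setup would claim rows atomically (e.g. Postgres `SELECT ... FOR UPDATE SKIP LOCKED`) rather than the naive select-then-update shown here.

```python
import sqlite3

# Jobs live in a plain table; a worker polls it and claims one at a time.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE jobs (
        id      INTEGER PRIMARY KEY,
        payload TEXT NOT NULL,
        status  TEXT NOT NULL DEFAULT 'pending'
    )
""")

def enqueue(payload):
    conn.execute("INSERT INTO jobs (payload) VALUES (?)", (payload,))
    conn.commit()

def claim_next():
    """Claim the oldest pending job, or return None if the queue is empty.

    NOTE: not safe with concurrent workers as written; see lead-in.
    """
    row = conn.execute(
        "SELECT id, payload FROM jobs WHERE status = 'pending' "
        "ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    conn.execute("UPDATE jobs SET status = 'running' WHERE id = ?", (row[0],))
    conn.commit()
    return row

enqueue("send-email:42")
enqueue("send-email:43")
print(claim_next())  # (1, 'send-email:42')
```

The appeal is that the queue is just rows: you can inspect it with SQL, stop retries by flipping a status column, and there is no broker to babysit.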
Temporal is an AMAZING piece of software, however I don't believe it's a replacement for something more simple like Celery. Even if you write helpers, the overhead to setting up workflows, invoking them, etc. is just too much for simple jobs like sending an email (imo). I would love to work in a codebase that had access to both, depending on the complexity of what you're trying to background.
It's okay till it's not. Everyone I know who had Celery in production was looking for a substitution (custom or third-party) on a regular basis. Too many moving pieces and nuances (config × logic × backend), too many unresolved problems deep in its core (we've seen some ghosts you can't debug), too much of a codebase to understand or hack. At some point we were able to stabilize it (a bunch of magic tricks and patches) and froze every related piece; it worked well under pressure (thanks, RabbitMQ).
Because it’s a seducer. It does what you need to do and you two are happy together. So you shower more tasks on Celery and it becomes cold and non-responsive at random times.
And debugging is a pain in the ass. Most places I’ve been that have it, I’ve tried to sell them on adding Flower to give better insight and everyone thinks that’s a very good idea but there isn’t time because we need to debug these inscrutable Celery issues.
Although we could say the same thing about Kafka, couldn't we? It's made for much higher throughput and usually has other use cases, but it's also great until it's not great.
At least the last time I used Kafka (which was several years ago, so things might have changed) it wasn't at all easy to get started. It was a downright asshole, in fact. If you pursue a relationship with an asshole, you shouldn't be surprised when they become cold to you.
Celery is great and awful at the same time. In particular, because it is many Python folks' first introduction to distributed task processing and all the things that can go wrong with it. Not to mention, debugging can be a nightmare. Some examples:
- your function arguments aren't serializable
- your side effects (e.g. database writes) aren't idempotent
- discovering what backpressure is and that you need it
- losing queued tasks during deployment / non-compatible code changes
There's also some stuff particular to celery's runtime model that makes it incredibly prone to memory leaks and other fun stuff.
> your side effects (e.g. database writes) aren't idempotent
What does idempotent mean in this context, or did you mean atomic/rollback on error?
I'm confused, because how could a database write be idempotent in Django? Maybe if it introduced a version on each entity and used that for CRDT-style conflict resolution on writes? But that'd be a significant performance impact, as it couldn't just be a single write anymore; instead they'd have to do it via multiple round trips.
In the context of background jobs idempotent means that if your job gets run for a second time (and it will get run for a second time at some point, they all do at-least-once delivery) there aren't any unfortunate side effects to that. Often that's just a case of checking if the relevant database updates have already been done, maybe not firing a push notification in cases of a repeated job.
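A tiny sketch of that check, with hypothetical names. Here the set of handled job ids lives in process memory to keep the example self-contained; in production it would be a database table with a unique key, written in the same transaction as the side effect.

```python
# At-least-once delivery means the same job can arrive twice.
# Record handled job ids and make redelivery a no-op.
handled = set()
notifications_sent = []  # the side effect we must not repeat

def handle_job(job_id, user):
    if job_id in handled:           # duplicate delivery: do nothing
        return "skipped"
    notifications_sent.append(user)  # fire the push notification once
    handled.add(job_id)
    return "done"

print(handle_job("job-1", "alice"))  # done
print(handle_job("job-1", "alice"))  # skipped (redelivery is harmless)
```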
If you need idempotent db writes, then use something like Temporal. You can't really blame Celery for not having that because that is not what Celery aims to be.
With Temporal, your activity logic still needs to ensure idempotency, e.g. by checking if an event id / idempotency key exists in a table. It's still at-least-once delivery. Temporal does make it easy to mint an idempotency key by concatenating the workflow run id and activity id, if you don't have one provided client-side.
Temporal requires a lot more setup than setting up a Redis instance though. That's the only problem with it. And I find the Python API a bit more difficult to grasp. But otherwise a solid piece of technology.
In my experience async job idempotency is implemented as upserts. Insert all job outputs on the first run. Do (mostly) nothing on subsequent runs. Maybe increment a counter or timestamp.
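That upsert shape can be sketched with SQLite's `ON CONFLICT` clause (table and key names are made up for illustration): the first run inserts the job's output, and a redelivered run hits the unique key and only bumps a counter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE job_output (
        job_id TEXT PRIMARY KEY,   -- idempotency key
        result TEXT NOT NULL,
        runs   INTEGER NOT NULL DEFAULT 1
    )
""")

def record_result(job_id, result):
    """Insert on the first run; on reruns, only bump the run counter."""
    conn.execute("""
        INSERT INTO job_output (job_id, result) VALUES (?, ?)
        ON CONFLICT(job_id) DO UPDATE SET runs = runs + 1
    """, (job_id, result))
    conn.commit()

record_result("job-7", "invoice-sent")
record_result("job-7", "invoice-sent")   # redelivered: no second insert
print(conn.execute("SELECT result, runs FROM job_output").fetchall())
# [('invoice-sent', 2)]
```

Postgres spells this the same way (`INSERT ... ON CONFLICT`), and MySQL has `INSERT ... ON DUPLICATE KEY UPDATE`.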
I'm of the opinion that django task apps should only support a single backend. For example, django-rq for redis only. There's too many differences in backends to make a good app that can handle multiple. That said, I've only used celery in production before, and I'm willing to change my mind.
When that single cheat might have resulted in expulsion if it were caught, it implies a pretty significant effect on the grade.
It wouldn’t actually change the result of the metric (because survivorship bias: you don’t count students who never graduated in the graduated student population), but it changes the believability of and the ethic behind the metric. Now we can say that at least 10% of the folks who graduated didn’t earn their grades, and the school’s reputation is less for it.
My experience is that in recent years, cheating is ubiquitous in many elite schools. This goes all the way to the top. Fake research data. Self-enriching financial schemes by faculty. Conflicts-of-interest. Etc.
Cheating may be a rational thing to do, on an individual level, if we ignore ethics. Outside of that it breaks down. I wouldn't say "best" without a lot of qualifiers.
A couple of outages that affected my client I learned about first on HN. We were able to mobilize a team and get on top of it faster than any monitoring team at the client (a state government). I feel like HN should invoice us, haha.
Even when it's a false alarm, it's usually that something else is having a problem that affects many people and manifests itself as a particular service appearing to be down.
100%. Though, interest rate based appreciation is likely at its apex, presuming zero is the floor. If we find ourselves with negative rates (after taxes and fees; there was one case of negative rates in Europe, but net inclusive of fees, it was still positive) -- then we're in truly uncharted territory.
At this juncture, it seems most appreciation will arise from supply issues, which aren't new to the post-2008 world. And while we do have lots of unoccupied housing nationally, we don't have that stock in the areas where it's most needed: e.g., job centers. You can easily find a $10k home in Detroit if you wish.
The graph would be more helpful if it broke out metro versus rural areas, in addition to factoring in interest rates.
The last HP inkjet that I had would go through the yellow cartridge faster than the black one, even when printing only black and white. Which is how I discovered these fun little dots.
They didn't go into too much detail about how the dots are actually printed (what type of ink, how heavy, etc.), but they imply in the article that at least some tracking dots require a UV light to detect.
I'd be curious to know how the dots actually get printed.
Hi, author here! The dots themselves aren't printed with UV ink; it's just that the UV light makes them easier to see due to the yellow ink used.
There were images I sent the magazine of the dots from some magazines I scanned, but they didn't run them. If you scan a page and invert it, the patterns are more legible https://i.imgur.com/x1TXa30.png
You should see rows of tightly packed blue dots in repeating patterns - the machine identification codes for a Xerox printer, to be precise. You may have to turn off f.lux to see them
Interesting protip: fluorescent compounds are added to all sorts of things: mouthwash bottles, toothpaste, white paper, laundry detergent, and the “bright” colours of printer ink.
I'm not sure if normal scanners detect UV light, but that would break a good chunk of their tracking purpose these days if they were not detectable in scans.
On a monochrome printer, I guess you could still do steganography by messing with the dithering? But since the stated aim of the fingerprinting is to catch money counterfeiters, I guess they're less interested in monochrome.
The largest chunk of savings (~85%!) is from Card Account Updater, which many payment processors offer. In some ways that says more about the B in the A/B test than it does about Stripe.
I tried to use their API for a personal project and found starting one month a bunch of transactions were missing from my bank account. It turned out Chase included a promotion on the pdf statement that month which threw off their scraping algo. Really woke me up to their "tech", I changed passwords and avoid them now.