Hacker News | teagee's comments

The chart in the tweet represents year-on-year growth. Based on these figures alone the actual number of people employed in tech is still really high, and the numbers can't just go up forever.

Also this only captures 6 industries, which is a narrow view of what would define "tech" these days.

Not to say that the job market isn't tough, but this graph is a very narrow view.


> The chart in the tweet represents year-on-year growth.

Can’t believe how many people are commenting without looking at what the chart means. We’ve lost 50k jobs over the last two years after decades of adding 100k+ every year, including the pandemic highs of 300k+ per year. Total employment remains way above the 2000s, 2008, and 2020 levels, contrary to what the title suggests.


Tech has also changed to become an all encompassing thing. In 2000, loads of people didn't have computers or cell phones. Maybe they owned a CD player and watched TV. Tech was avoidable then. But now everyone has a phone in their pocket, a computer, does all their banking through apps instead of visiting the bank, orders food online, orders taxis through apps, and so on. Everything is lumped under tech now and unavoidably so.


Yeah but, there are like 100K CS/IT graduates in the US every year. Tech jobs increasing 100K per year was just maintenance.

Lotta people in tech are going to struggle to find a job. That's the point.


Yes, but how many people have tried to enter the field since then? Is the economy that supports the current number of tech workers really better than one that supports 10x?


Thanks for pointing this out - the title is extremely misleading. The total number of tech jobs is not lower than in 2008 just because YoY growth is down.


No, the title is not misleading at all - your comment is misleading. Total tech jobs being up doesn't tell us anything, since there are also way more tech workers now than back then.

Over 100K people graduate in CS/IT per year, and that doesn't even count people who come into the industry from overseas or from other degree paths.


"Tech employment now significantly worse than the 2008 or 2020" says the unemployment rate is higher today than in 2008 and 2020, but that is NOT what the chart shows.


General media, news, etc. gets this wrong all the time, see any commentary on inflation, GDP growth, rate of change in house prices, etc.


As an aside, I remember some time ago that Tesla stock went down because the growth of the Model 3 sales went down... After years of being one of the best selling cars on the planet.

If numbers don't go up fast, I guess people get scared.


It could have been that so much future growth was priced in that a reduction in the growth rate could have justified a reduction in the share price.


I didn't think of that! That's certainly possible.


The issue is that the company's valuation has priced in ridiculous future growth. The trajectory matters here. Not saying you should short the stock.


Absolute numbers are still higher than they were 5 years ago but the number of jobs going down means that the same (or about the same) number of people are starting to compete for a smaller number of jobs. Many people have chosen to study software development in recent years so nowadays the workforce is much larger than it was 5-10 years ago.

This imbalance of supply and demand shifts power toward employers and it's hard not to feel the pressure even if you're not looking for a job right now.


The chart shows devs still growing, and "Computer System Design SERVICES" getting hammered (most of the total loss).

I'm not even sure this chart tells the story of the title.


Is this WITCH companies?


Yes, but...

The health of the market is not a function of the total number of jobs alone, it's a function of the number of jobs and the number of people to fill them.

The number of total jobs going up year after year meant that there were increasing numbers of candidates, new people entering the field. If the job growth stops, there will still be candidates coming in. There will also be the new hires from the last decade moving into increasingly senior roles, and there won't be space for them (unless you devalue the meaning of "senior" even more).

So the year over year change matters a lot. If it plateaus, or even declines slightly, it's more than enough to make a terrible market.


YoY change in jobs is still probably not the best way to visualize overall market health. As you say, you also have to take into account the number of people to fill the jobs. To me it seems like the least misleading statistic would be a graph showing unemployment and underemployment % over time. I'd probably also toss in graphs of the length of unemployment periods as well as various median wage percentiles (quintiles or deciles maybe) over time.


It shows growth or decline but it absolutely does not show what the title implies.


The chart shows the derivative of the thing people care about, which is total cumulative change; the area under the curve gives that cumulative change.
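To make the derivative-vs-total distinction concrete, here's a toy calculation. The yearly deltas below are invented numbers, not the actual figures from the chart:

```python
# Toy illustration: yearly job *changes* (what the chart plots) vs. the
# cumulative total (what people actually care about). Numbers are made up.
yearly_change = [100, 120, 300, 150, -30, -20]  # thousands of jobs added per year

# Summing the deltas ("area under the curve") recovers total change in jobs.
total_change = []
running = 0
for delta in yearly_change:
    running += delta
    total_change.append(running)

print(total_change)  # the two negative years barely dent the running total
```

Even with two straight years of losses, the cumulative line ends far above where it started, which is the point the parent comment is making.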


And they said we'd never have to use calculus in real life!


The post-COVID spike was also absolutely insane and much bigger than the dotcom boom.


And I already thought we hired more devs than needed pre-covid. It was pretty well surmised that big tech was hiring to starve other companies of talent, and thus employees were underutilised.


Thank you. And those raw numbers in the chart that go back to 2001 are not normalized percentages; what’s happening right now is NOTHING like 2001.

But, it just doesn’t hit the same way on X to say “We are back to late 2023-levels of tech employment” or “The losses in tech jobs over the last 18 months give back two months of hiring in 2022”.


Asked Gemini quickly for 2000 and 2025 numbers.

Tech employees: 5.5m vs 9.9m.

Software developers: 0.68m vs 3.2m.

Different ball game.


Check out the HTMX example in the blog; it helped me better understand how it could be used:

https://adamj.eu/tech/2025/12/03/django-whats-new-6.0/#rende...


I'm an avid HTMX user but never did I ever think "I'm using so many includes, I wish I didn't have to use include so much."

What I would like is a way to cut down the sprawl of urls and views.


I do a check for `request.htmx` in my views and conditionally return a template partial as needed. This reduced my need for one-off view functions that were only returning partials for htmx. Works pretty well in my experience.
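A framework-free sketch of the pattern this comment describes. In real Django you'd use `request.htmx` (set by django-htmx's middleware) and `django.shortcuts.render`; here plain stand-ins keep the dispatch idea self-contained, and all template/function names are illustrative:

```python
# Stand-in for django.shortcuts.render, so the sketch runs without Django.
def render(template_name, context):
    return f"<rendered {template_name} with {sorted(context)}>"

def task_list(request):
    """One view, two render paths: full page vs. htmx partial."""
    context = {"tasks": ["write docs", "ship release"]}
    if request.get("htmx"):
        # htmx request: return only the fragment the page will swap in
        return render("tasks/list.html#results", context)
    # normal navigation: return the full page
    return render("tasks/list.html", context)

print(task_list({"htmx": True}))  # partial only
print(task_list({}))              # full page
```

The win is exactly what the commenter says: one URL and one view serve both the full page and the fragment, instead of a separate view per partial.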


Any code or blog written by Adam is worth spending some time on.

It will be interesting to see how the tasks framework develops and expands. I am sad to see the great Django-Q2 lumped in with the awful Celery though.


Celery is the worst background task framework, except for all the others.

There are bugs and issues, but because so many people are using it, you’re rarely the first to stumble upon a problem. We processed double-digit millions of messages daily with Celery + RabbitMQ without major obstacles. Regardless of what people say, it should be your first go-to.


Celery has way too much magic crammed into it, it is very annoying to debug, and produces interesting bugs. Celery is/was also a "pickle-first" API and this almost always turns out to be the wrong choice. As a rule of thumb, persisting pickles is a really bad idea. Trying to hide IPC / make-believe that it's not there tends to be a bad idea. Trying to hide interfaces between components tends to be a bad idea. Celery combines all of these bad ideas into one blob. The last time I looked the code was also a huge mess, even for old-guard-pythonic-code standards.


I think Celery has a lot of magic happening under it. When the abstractions are that high, it's important they never leak and that you never see anything below the turtles you are supposed to see.

I often prefer designing around explicit queues and building workers/dispatchers. One queuing system I miss is the old Google App Engine one - you set up the queue, the URL it calls with the payload (in your own app), the rate it should use, and that's it.


OP here, thanks for the praise!

Yeah, I mentioned Celery due to its popularity, no other reason ;)


You are a great writer - thanks for putting this together!


I tried django-q and I thought it was pretty terrible. The worst was that I couldn't get it to stop retrying stuff that was broken. Sometimes you ship code that does something unexpected, and being able to stop something fast is critical imo.

Fundamentally I think the entire idea behind celery and django-q is mostly misguided. People normally actually need a good scheduler and a bring-your-own queue in tables that you poll. I wrote Urd to cover my use cases and it's been rock solid.


I've been using Celery for years. What are the major issues you have with it, and how does Django Q2 help?

I also use Kafka on other tech stacks but that's another level completely and use case.


Why is celery awful?




Temporal is an AMAZING piece of software; however, I don't believe it's a replacement for something simpler like Celery. Even if you write helpers, the overhead of setting up workflows, invoking them, etc. is just too much for simple jobs like sending an email (imo). I would love to work in a codebase that had access to both, depending on the complexity of what you're trying to background.



It's okay till it's not. Everyone I know who had Celery in production was looking for a replacement (custom or third-party) on a regular basis. Too many moving pieces and nuances (config × logic × backend), too many unresolved problems deep in its core (we've seen some ghosts you can't debug), too much of a codebase to understand or hack. At some point we were able to stabilize it (a bunch of magic tricks and patches) and froze every related piece; it worked well under pressure (thanks, RabbitMQ).


Because it’s a seducer. It does what you need to do and you two are happy together. So you shower more tasks on Celery and it becomes cold and non-responsive at random times.

And debugging is a pain in the ass. Most places I’ve been that have it, I’ve tried to sell them on adding Flower to give better insight and everyone thinks that’s a very good idea but there isn’t time because we need to debug these inscrutable Celery issues.

https://flower.readthedocs.io/en/latest/


Although we could say the same thing about Kafka, couldn't we? It's made for much higher throughput and has usually other use cases, but it's also great until it's not great.


At least the last time I used Kafka (which was several years ago so things might have changed) it wasn't at all easy to get started. It was a downright asshole in fact. If you pursue a relationship with an asshole, you shouldn't be surprised when they become cold to you


Yes, absolutely. It's still pretty much that way. Especially if you want to make changes to a running installation, add nodes etc.


Celery is great and awful at the same time. In particular, because it is many Python folks' first introduction to distributed task processing and all the things that can go wrong with it. Not to mention, debugging can be a nightmare. Some examples:

- your function arguments aren't serializable
- your side effects (e.g. database writes) aren't idempotent
- discovering what backpressure is and that you need it
- losing queued tasks during deployment / non-compatible code changes

There's also some stuff particular to celery's runtime model that makes it incredibly prone to memory leaks and other fun stuff.

Honestly, it's a great education.


> your side effects (e.g. database writes) aren't idempotent

What does idempotent mean in this context, or did you mean atomic/rollback on error?

I'm confused because how could a database write be idempotent in Django? Maybe if it introduced a version on each entity and used that for crdt on writes? But that'd be a significant performance impact, as it couldn't just be a single write anymore, instead they'd have to do it via multiple round trips


In the context of background jobs idempotent means that if your job gets run for a second time (and it will get run for a second time at some point, they all do at-least-once delivery) there aren't any unfortunate side effects to that. Often that's just a case of checking if the relevant database updates have already been done, maybe not firing a push notification in cases of a repeated job.
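A minimal sketch of that "check before you act" pattern, with a set standing in for a `processed_jobs` database table (names and the email example are illustrative, not from the thread):

```python
# Idempotent job handler under at-least-once delivery: redeliveries of the
# same job_id must not repeat the side effect.
processed = set()   # stand-in for a processed-jobs table
emails_sent = []    # stand-in for the real side effect (sending email)

def send_welcome_email(job_id, address):
    if job_id in processed:
        # Duplicate delivery: the side effect already happened, do nothing.
        return "skipped"
    emails_sent.append(address)  # perform the side effect
    processed.add(job_id)        # then record completion
    return "sent"

send_welcome_email("job-42", "a@example.com")
send_welcome_email("job-42", "a@example.com")  # redelivery: no second email
print(emails_sent)  # ['a@example.com']
```

In a real system the "have we done this?" check and the side effect should be in one transaction (or keyed on a unique constraint), since a crash between the two lines would otherwise repeat the work.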


If you need idempotent db writes, then use something like Temporal. You can't really blame Celery for not having that because that is not what Celery aims to be.


With Temporal, your activity logic still needs to ensure idempotency, e.g. by checking if an event id / idempotency key exists in a table. It's still at-least-once delivery. Temporal does make it easy to mint an idempotency key by concatenating the workflow run id and activity id, if you don't have one provided client-side.


Temporal requires a lot more setup than setting up a Redis instance though. That's the only problem with it. And I find the Python API a bit more difficult to grasp. But otherwise a solid piece of technology.



In my experience async job idempotency is implemented as upserts. Insert all job outputs on the first run. Do (mostly) nothing on subsequent runs. Maybe increment a counter or timestamp.
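The upsert approach can be sketched with SQLite's `ON CONFLICT` clause (the table and column names here are invented for the example):

```python
# "Idempotency as an upsert": first run inserts the job output; reruns only
# bump a counter and leave the original result untouched.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE job_output (
        job_id TEXT PRIMARY KEY,
        result TEXT,
        runs   INTEGER NOT NULL DEFAULT 1
    )
""")

def record_result(job_id, result):
    conn.execute(
        """INSERT INTO job_output (job_id, result) VALUES (?, ?)
           ON CONFLICT(job_id) DO UPDATE SET runs = runs + 1""",
        (job_id, result),
    )

record_result("job-1", "ok")
record_result("job-1", "retried")  # duplicate run: stored result stays "ok"
row = conn.execute("SELECT result, runs FROM job_output").fetchone()
print(row)  # ('ok', 2)
```

The primary key on `job_id` is what makes the write safe to repeat; Postgres's `INSERT ... ON CONFLICT` works the same way.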


From your experience, what is a better alternative guys?


Not the comment that you replied to but I use my own Urd. It's a fancier Cron that you can stop fast. Which is imo what you normally want.

Task queues are like email. It's what everyone is used to so people ask for more of it, but it's not actually good/the right tool.


There’s no alternative (while prototyping), and anything else is better (once you’ve properly defined your case).


DjangoQ2 is a fine alternative during early development


I’m currently stuck with the tech debt of Celery myself. I understand that! Does Django Tasks support async functions?


Computer, load up Celery Man please.


I'm of the opinion that Django task apps should only support a single backend. For example, django-rq for Redis only. There are too many differences between backends to make a good app that can handle multiple. That said, I've only used Celery in production before, and I'm willing to change my mind.


With that logic, the Django orm should only support one database.


Why have a backend then?


Seaford, the city to which this would apply, was once regarded as "The Nylon Capital of the World"!

https://www.capegazette.com/affiliate-post/remembering-nylon...


> Ten percent of respondents who reported having a 4.0 said they had cheated in an academic context while at Harvard.

Imagine how many cheated and didn't admit it! A very sad state of affairs...


Though this implies that 10% of the 4.0s don't deserve it, it's not proof; a single cheat may or may not have affected the grade.


When that single cheat might have resulted in expulsion if it were caught, it implies a pretty significant effect on the grade.

It wouldn’t actually change the result of the metric (because survivorship bias: you don’t count students who never graduated in the graduated student population), but it changes the believability of and the ethic behind the metric. Now we can say that at least 10% of the folks who graduated didn’t earn their grades, and the school’s reputation is less for it.


My experience is that in recent years, cheating is ubiquitous in many elite schools. This goes all the way to the top. Fake research data. Self-enriching financial schemes by faculty. Conflicts-of-interest. Etc.


Cheating is probably the best thing to do. It's a zero sum world.


Sure, if you're satisfied with rampant corruption and dysfunctional institutions. Let's embrace dishonesty and underachievement.


Cheating may be a rational thing to do, on an individual level, if we ignore ethics. Outside of that it breaks down. I wouldn't say "best" without a lot of qualifiers.


HN is truly a market leader in status page technology


A couple of outages that affected my client I learned about first on HN. We were able to mobilize a team and get on top of it faster than any monitoring team at the client (a state government). I feel like HN should invoice us haha


You might benefit from some better monitoring!


I came to HN to make sure it was actually down.


Except when it is a false alarm.

I've seen a few get to the front page.

But when we're right we're right.


10/9 times


HN: All of our amps go to 11.


Even when it's a false alarm, it's usually that something else is having a problem that affects many people and manifests itself as a particular service being down.


I'll make my employer pay for it if HN starts a paid service. Honestly, they should write a blog post about downtimes that were first reported on HN.


HN was also down for a moment because of too much traffic.


I experienced that too


Such a lightweight tool at that! Barely any JS loaded.


*was


Add our current low interest rates to the mix as well and the picture changes dramatically


100%. Though, interest rate based appreciation is likely at its apex, presuming zero is the floor. If we find ourselves with negative rates (after taxes and fees; there was one case of negative rates in Europe, but net inclusive of fees, it was still positive) -- then we're in truly uncharted territory.

At this juncture, it seems most appreciation will arise from supply issues, which aren't new to the post-2008 world. And while we do have lots of unoccupied housing nationally, we don't have that stock in the areas where it's most needed: e.g., job centers. You can easily find a $10k home in Detroit if you wish.

The graph would be more helpful if it broke out metro versus rural areas, in addition to factoring in interest rates.


This must result in a non-trivial amount of ink/toner used in the name of security


The last HP Inkjet that I had would go through the Yellow cartridge faster than black, even when printing only Black and White. Which is how I discovered these fun little dots.


They didn't go into too much detail about how the dots are actually printed (what type of ink, how heavy, etc.), but they imply in the article that at least some tracking dots require a UV light to detect.

I'd be curious to know how the dots actually get printed.


Hi, author here! The dots themselves aren't printed with UV ink; it's just that the UV light makes them easier to see due to the yellow ink used.

There were images I sent the magazine of the dots from some magazines I scanned, but they didn't run them. If you scan a page and invert it, the patterns are more legible: https://i.imgur.com/x1TXa30.png
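The inversion trick works because faint yellow becomes dark blue, which the eye (and a later comment's "blue dots") picks up much more easily. A pure-arithmetic sketch, no image library needed (pixel values are illustrative):

```python
# RGB inversion: each channel value v becomes 255 - v.
def invert(pixel):
    r, g, b = pixel
    return (255 - r, 255 - g, 255 - b)

yellow_dot = (250, 245, 160)   # a faint yellow tracking dot
white_paper = (255, 255, 255)  # the surrounding page

print(invert(yellow_dot))   # (5, 10, 95): a dark blue dot...
print(invert(white_paper))  # (0, 0, 0): ...on a black background
```

With an actual scan you'd apply the same operation per pixel, e.g. with Pillow's `ImageOps.invert`.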


The dot patterns are much smaller and more dense than I expected


I don't understand what I'm looking at here.


You should see rows of tightly packed blue dots in repeating patterns - the machine identification codes for a Xerox printer, to be precise. You may have to turn off f.lux to see them


Error: Secret UV ink is empty. Please contact NSA for a new cartridge.


Interesting protip: fluorescent compounds are added to all sorts of things: mouthwash bottles, toothpaste, white paper, laundry detergent, and the “bright” colours of printer ink.


I'm not sure if normal scanners detect UV light, but that would break a good chunk of their tracking purpose these days if they were not detectable in scans.


And how do they get printed on a monochrome printer like a laser?


There are color laser printers.

On a monochrome printer, I guess you could still do steganography by messing with the dithering? However, since the stated aim of the fingerprinting is to catch money counterfeiters, I guess they are less interested in monochrome.


Are any major banknotes monochrome?


Each layer is, but that's probably more advanced than just "printing a banknote".


That anti-feature is absent there. Black dots instead of yellow would be very visible.


I believe, though I could be very wrong, that it's only on color printers and uses the yellow color to print dots too small to see with the eye.


The largest chunk of savings (~85%!) is from Card Account Updater, which many payment processors offer. In some ways that says more about the B in the A/B test than it does about Stripe.


I tried to use their API for a personal project and found that, starting one month, a bunch of transactions were missing from my bank account. It turned out Chase had included a promotion on the PDF statement that month, which threw off their scraping algo. Really woke me up to their "tech"; I changed passwords and avoid them now.

