Hacker News

I’m a security engineer, so my work is a bit different from SWE: data wrangling and analysis, tying systems together and correlating events across them, building defense in depth, staying cool and effective under stress.

That said, LLMs are showing up a lot in my job. Being a good sec eng means having a 70% base in most systems, solid comp sci and programming chops (I call it "I can build and run a mediocre app"), and solid security expertise.

GPT is really good at radically speeding up getting past the 70% starting point I usually operate from. I run (sanitized) terminal output through it, so the 60% of the output I’d normally table to RTFM later, I can understand immediately via GPT. Sec eng benefits a lot from leaning into pandas/notebooks over raw log CSVs, and GPT does that really well too.
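For a concrete feel of the pandas-over-raw-CSVs workflow, here is a minimal sketch. The sample log, column names (timestamp, user, src_ip, action), and the brute-force heuristic are all made up for illustration, not anyone's actual pipeline:

```python
# Sketch: triaging auth logs in a notebook instead of grepping raw CSVs.
# The sample data and column names are hypothetical.
import io

import pandas as pd

raw = """timestamp,user,src_ip,action
2024-01-01T00:00:01,alice,10.0.0.5,login_failed
2024-01-01T00:00:02,alice,10.0.0.5,login_failed
2024-01-01T00:00:03,alice,10.0.0.5,login_success
2024-01-01T00:05:00,bob,10.0.0.9,login_success
"""

df = pd.read_csv(io.StringIO(raw), parse_dates=["timestamp"])

# Failed-login counts per (user, src_ip) - a quick brute-force signal
# that would otherwise take several grep/sort/uniq passes on the file.
failures = (
    df[df["action"] == "login_failed"]
    .groupby(["user", "src_ip"])
    .size()
    .rename("failed_count")
    .reset_index()
)
print(failures)
```

In practice you'd point `read_csv` at the real export and keep iterating on the same frame in the notebook, which is where this beats one-shot shell pipelines.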

The big marker for me is incident response - standardized tech data requiring analysis and correlation in a pinch. I’m going to have an incident-response LLM partner soon enough. Analyzing open source codebases for how they do input sanitizing, in a language that’s new to me? LLM partner walking me through it.

All this together - goodbye entry-level cybersecurity jobs in a few years, I think. Many of the things you’d need a security analyst for, or the more busywork sec eng 1 jobs, I truly think are turning into LLM jobs. My lived experience this past year reflects it.

I think productivity gains from LLMs are under the hood of the tech layoffs right now. Curious if other engineering tracks are seeing this, though? A SWE buddy at Google thinks so.



I'm 15+ years in security, and just this week I needed to hit a few domains, find a script tag that imports JS from a certain CDN, then parse it and make sense of it.

After 20 minutes of telling ChatGPT exactly what I needed, and a couple of test runs and optimizations, I had the perfect tool. 10 years ago this would have been a half- to full-day project.
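The core of such a tool might look something like this sketch - the CDN hostname and the inline HTML are stand-ins, and a real run would first fetch each domain with urllib or requests:

```python
# Sketch: find <script> tags that import JS from a given CDN.
# CDN_HOST and the sample page are hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlparse

CDN_HOST = "cdn.example.com"  # assumption: the CDN being hunted for

class ScriptSrcFinder(HTMLParser):
    """Collects script src URLs whose host matches CDN_HOST."""

    def __init__(self):
        super().__init__()
        self.matches = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if src and urlparse(src).netloc == CDN_HOST:
            self.matches.append(src)

html = """
<html><head>
<script src="https://cdn.example.com/lib/widget.js"></script>
<script src="https://other.example.net/app.js"></script>
</head></html>
"""

finder = ScriptSrcFinder()
finder.feed(html)
print(finder.matches)  # only the CDN-hosted import survives the filter
```

From there, fetching and pretty-printing each matched JS file for analysis is a few more lines - the kind of glue code the parent describes iterating on with ChatGPT.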

I had a meeting with an entry-level security engineer and he asked if I needed him to do the task, as we were both in the meeting where it was being discussed.

I didn't even think to ask him! It was quicker and easier to do it myself.

This was the type of project I had been assigned dozens of times during my first job.

I don't know what that means for the future of work but it's changing fast.


Scripts and unix-pipe-style workflows like this are going to be the easiest to automate. Even 5-6 years ago, if you knew where to look for some cutting-edge stuff, this was what they were experimenting with in the automated code-gen space, and it was pretty good back then. The difference is you didn't use natural language prompts; you had to be able to ask for things in a very specific way.

This doesn't make up the bulk of development (though LLMs are eating the low-hanging fruit there too; it's just going to take a little longer), but it's a canary in the coal mine for how much this tech will disrupt people's jobs.

Whether it ultimately plays out like the adoption of the computer itself (an explosion of jobs relative to those lost, given the wide scale of computing) remains to be seen. It might open up higher forms of work and a focus on thornier problems.

It also might kill huge swaths of the sector.


This. We were supposed to hire a junior dev to help me build out a new app. I'm moving so fast using copilot we just forgot about it.

I'm putting out a complex product at break-neck speed using frameworks and tech I was barely familiar with when I started.

This is not a good time to be a grad or junior dev, or a search engine.


Why did you think you needed to hire a junior dev before even starting work on the application? I know estimation can be difficult, but the typical "I'm moving so fast..." type experience usually means you didn't, or don't, understand your tooling or the scope.

Also how were you going to take on a junior dev and a new framework at the same time? Were you expecting them to know the framework?

As the saying goes, though, the last 20% takes 80% of the time.


Because the project was big enough to warrant more than one person. I have a whole team surrounding me to handle non-technical/non-development incidentals. Most companies would have had a lot more budgeted and would have pre-hired five devs. Then everything would have moved glacially slow, fulfilling the prophecy that five devs were needed.


  > Because the project was big enough to warrant more than one person.

But based on what, the scope? If you weren't familiar with the tech stack how would you gauge that? I understand people can conceptualize frameworks at a high-level.

  > I have a whole team surrounding me to handle non-technical/non-development incidentals.

Are these the people finding the junior or (5) devs that would be needed? Do they have experience with the framework to know how to scope the project? Hiring 1 - 5 developers in-house, or even as contractors, is a labor-intensive process, so I'm not really sure companies would have just done it based on an idea of an application. I can see where they might have hired early based on winning a contract, but in that case they probably underestimated the work or padded the cost to account for ramp-up time.

  > Most companies would have had a lot more budgeted and would have pre-hired five devs.

Maybe you haven't worked places that do spikes or just allow people to develop prototypes without entire scoping documents or hiring people. Also, keep an eye on your worth here. If you are saving the company the cost of (5) more developers, then you should be getting a bonus or have decent compensation. A lot of people fall into this trap of "saving" the company money as if it's their own; it's not, and unless you are getting some of that savings, you are diluting your current pay and working twice as hard.

  > Then everything would have moved glacially slow, fulfilling the prophecy that five devs were needed.

Yeah, this is understood as the "mythical man-month" in terms of things slowing down. Adding the wrong head count is a planning and leadership issue. There is nothing stopping teams from being dynamic at a point, but that depends on how long the application is going to be supported. Having (5) people now can spread out the working knowledge and workload enough that no single developer is holding up forward progress. If you are having to mentor people on the project or fix their mistakes, then they are the wrong people or the wrong skillset for the team. A leader will be able to convey the issue to management and have people let go or replaced. People don't like to do this, but there is no reason to keep a failed process going, as we are all professionals. Alternatively, the people above you have accepted this as part of the application development process - it justifies their jobs - and are fine with it, so getting the work done any faster is just a bonus to them.


Honestly, it sounds like it wasn't a tool that is needed often; if it were, you or someone else would have already written it. Or you don't regularly program enough in JavaScript/Python day-to-day to do this quickly. There isn't anything wrong with that; as you mentioned, you have entry-level security engineers that typically handle those tasks. Creating a tool goes fast when you know exactly what you want it to do and don't have to explain to another person all the requirements and pitfalls to avoid, based on the experience you might have in writing quick scripts. I don't know if this really changes anything.


Fascinating to hear similar from you.


Entry level devs will need to be much more skilled than I was to enter the field a few years ago.

My internships and the first 6 months of my first full time job are trivial to ChatGPT. Copilot would be needed for work since then (as it is specific to the codebase), but even so, I am far more productive with them.

One of my first internships was hacking together a mobile demo of a digital ID concept. I’d be surprised if it took more than a few hours to replicate a month of vanilla HTML/CSS/JS effort from back then.

I would prefer ChatGPT to me-as-a-co-worker up until about 1.5 years of experience, if only because it replies instantly and doesn't forget stuff.


Right - I think when the equivalent of Copilot shows up in incident response, the security employment market changes for good. When a “cleared” Copilot (for govt-supporting work) shows up, it changes totally.

If you don’t operate in the approach I describe, or are not just an all-around tech expert who likes security for some reason, the stable high-paying market is around digital forensics/incident response firms. Those folks have a lock bc there’s a small group who knows assembly and OSs across multiple systems very well, and knows it from a security context. Tribal work for an LLM soon enough, as at the end of the day it’s just parsing opcodes and stretching across log sources. Scary stuff; glad I’m past entry level, and I’m no fool thinking that I don’t have to worry too.


I'm not sure I see this as a reality anytime soon.

  > Those folks have a lock bc there’s a small group who knows assembly and OSs across multiple systems very well, and knows it from a security context.

There are two parts to this. The first is that, for some of the businesses in that arena, I'm sure if they could speed up analysis to take on more client jobs with less labor, they would have done so already. Second: what output are you going to provide that wouldn't need the very same people to decipher, validate, or explain "what" is going on?

As an example, if you get hacked and you make a cyber insurance claim, you are going to have to sufficiently explain to the insurance company what happened (so they can try to get out of paying you), and you won't be able to say "Xyz program says it found malware, just trust what it says." If people don't understand how a result was generated, they could be implementing fixes that don't solve the problem, because they are depending on the LLM/decision tree to tell them what the problem is. All these models can be gamed, just like humans.

I'm not quite sure I agree that a better LLM is what has been keeping people from implementing pipeline logic to produce actionable correlation security alerts. Maybe it does improve but my assumption is much like we still have software developers any automation will just create a new field of support or inquiry that will need people to parse.


I think the impact of LLMs in DFIR will come down to:

- the speed at which actionable insights can be generated (vs. needing a very highly paid eng poking through Ghidra and doing cross-log correlation), and

- the reduced need for those very highly paid DFIR engs as a result.


New devs' code looks like ChatGPT wrote it if it had been trained on pre-existing parts of the codebase. Copy pasta eeeverywhere :p


This specifically is no different from when StackOverflow was the go-to solution.


Worse, maybe. I used to be able to tell when someone was using SO because everyone was blindly copying the same email-regex answer. Now you can run Mixtral on a personal computer and transcribe novel output. It’s so much harder to detect from just looking at a PR.


80% of being a good security engineer is knowing the big picture - all the parts and how they work. The correlation an LLM produces has no value if it's not actionable. You are the one who determines the weights, values, and features that are important. I'd be very curious how you currently account for scheduled outages, unscheduled outages, new deployments, upgrades of existing systems, spinning instances up and down for testing, laptop device swap-outs, and traffic in different silos. How are you baselining normal communications and session timing between services or across protocols? If you are in the cloud, is baselining done per service - HTTP, DNS, DB, etc.? I could see different weights being constructed to represent defense-in-depth, but this would seem to be a constant amount of work while also investigating or feeding true/false positives back into the system.
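To make the baselining question concrete, here is one minimal way it is often approached: a z-score against historical counts per service pair. The service names, counts, and 3-sigma threshold are all illustrative assumptions, not anyone's actual setup:

```python
# Sketch: flag service-to-service request volumes that deviate from a
# per-pair historical baseline. All data here is made up.
import statistics

# Hourly request counts per (src, dst) service pair, from "normal" weeks.
history = {
    ("web", "auth"): [100, 110, 95, 105, 98],
    ("web", "db"): [500, 480, 520, 510, 490],
}

def is_anomalous(pair, observed, threshold=3.0):
    """Simple z-score test of an observed count against the pair's history."""
    counts = history[pair]
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    return abs(observed - mean) > threshold * stdev

print(is_anomalous(("web", "auth"), 400))  # far above baseline -> True
print(is_anomalous(("web", "db"), 505))    # within normal range -> False
```

The hard part the parent is pointing at isn't this arithmetic - it's that deploys, outages, and test instances constantly shift what "normal" means, so the `history` itself needs curation.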

Entry-level cybersecurity isn't a thing, which is why it isn't working out: typically you need prior dev, ops, devops, SRE, sysadmin, etc. experience. The talent shortage exists because you can't do an undergrad in cybersecurity and somehow pick up the prior operational knowledge that develops your skills for understanding and troubleshooting how systems, networks, and applications all function together. Cybersecurity as it stands, and as you mention, is in my experience best as a focus built on top of computer science. I mean, even the CISSP requires working experience in the field.

The one item I think you are overlooking is that you have the experience in how everything works together, which makes a tool like ChatGPT or some other analyzer useful, because you have the mental mapping and models, built through experience, of the "right" questions to ask. So while a security analyst job might go away, you are back at the original problem of developing security engineers who know the architecture, flows, daily expectations, etc., and having an LLM buddy is not going to turn a security analyst directly into a cybersecurity engineer overnight.


> The correlation an LLM produces has no value if it's not actionable.

For security, there are two parts to this:

- Correlation within detection engines, i.e. what CrowdStrike does: CS and friends are already doing what you describe (baselining normal system and identity behaviors). It is still hit-or-miss, but noticeably better than a few years ago, and I think this current AI era will push it further. These have already taken away the need for several sec eng hires.

- Correlation across logs, i.e. an incident is happening, and under time pressure and stress an IR team is putting together ad hoc search queries and so on. LLMs - as many of them seem to have indexed query-language docs and much of the open documentation on AWS, O365, etc. - are an almost invaluable tool here. It's hard to explain how quickly security dev, across pre-incident prep and in-incident IR, is sped up by them.
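As an illustration of the kind of ad hoc cross-log pivot being described - not anyone's actual tooling - here's a sketch joining hypothetical VPN-login and egress logs per user in pandas:

```python
# Sketch: for each outbound transfer, find the same user's most recent
# VPN login within 10 minutes. All log fields and values are made up.
import pandas as pd

vpn = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 00:00", "2024-01-01 01:00"]),
    "user": ["alice", "bob"],
    "vpn_src": ["203.0.113.7", "198.51.100.9"],
})
egress = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 00:02", "2024-01-01 03:00"]),
    "user": ["alice", "bob"],
    "bytes_out": [10_000_000, 500],
})

# merge_asof needs both frames sorted on the join key; "backward" with a
# tolerance gives "most recent login no more than 10 minutes before".
joined = pd.merge_asof(
    egress.sort_values("ts"), vpn.sort_values("ts"),
    on="ts", by="user", direction="backward",
    tolerance=pd.Timedelta("10min"),
)

# "Login followed shortly by a large transfer" - the pivot an IR team
# would otherwise hand-write as a search query under pressure.
hits = joined[joined["vpn_src"].notna() & (joined["bytes_out"] > 1_000_000)]
print(hits[["user", "bytes_out", "vpn_src"]])
```

The same join is a few lines of KQL/SPL in a SIEM; the point is that an LLM can draft either form from a plain-language description of the pivot.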

> where you can ask the "right" questions a useful tool because...

Yes, this specifically is one of the great value-adds currently - gaining context much quicker than the usual pace. For security incidents, and for the self-build use cases that security engineers often run into, this aspect alone is a huge value-add.

And I agree, it will exacerbate the existing version of this, which is my point on replacing analysts:

> you are back at the original problem of developing security engineers...

This is already a problem, and LLMs help fix the immediate generation's issues with it. It's hard to find good sec engs to fill the developmental sec eng roles, so those roles become LLMs. The outcome of this is... idk? But it is certainly happening.


> All this together - goodbye entry level cybersecurity jobs in a few years I think

If nobody is entry level how would anyone be able to penetrate the job market? Nobody graduates into a mid-senior level job, right?


That is the question.

I think SWE is going to experience what cybersec already has an issue with.

The “talent shortage” in cyber is for mid levels, not entry level security analysts. This is bc cyber is applied IT and engineering through a security lens. It’s hard to make the jump into a sec eng career bc of this. IMO, LLMs build the wall higher.

To your point, nobody is really graduating right into cyber. Most go to non-technical compliance jobs or MSSPs if they are cyber-first, or get really lucky on a self-developed path. The rest are lateral transfers from IT or SWE/ops. MSSPs are hard to get out of (24hr security triage centers), and are prime targets for LLMs.

I speculate SWE will start experiencing what cyber has dealt with for years - what do you do when entry level is automated, or the knowledge bar is really high? I think cyber just gets worse/harder to get into.


You will start to see this turn around as companies realize they need to go back to the path of entry -> mid -> senior/principal. For cybersecurity this is operations and/or development -> cybersecurity w/ a focus on either dev or operations. Then at the senior/principal layer people can float between things. This isn't too far off from many other jobs: no EE out of school is designing circuits and boards from scratch; it's debugging what senior EEs have created, or problems in existing products, and then you work your way up. It's the same with cybersecurity: does a person who hasn't developed software start out doing reverse engineering? Does a person develop or approve security policies, devices, network architectures, or designs if they haven't ever deployed an application or service in production? How are you determining if something is an incident or a valid alert if you haven't managed a network?

When money was free, companies could hire people for very specific tasks and knowledge areas because it wasn't costing them anything to get the money. This is why layoffs in engineering, while smaller in percentage terms compared to other departments, are hitting specialized jobs where in previous times it might have made sense to get a consultant or contractor.


> For cybersecurity this is operations and/or development -> cybersecurity w/ focus on either dev or operations.

Fwiw, I haven't heard of or worked at any company implementing this pipeline formally. And cyber teams (or more appropriately, the industry's career thought leaders) expecting it to work this way is a large part of the existing issue.

Fundamentally, under this logic one industry (cyber) is relying on another (SWE/IT) to train its entry level candidates. Logical enough.

In practice, some of the issues:

- There are very few entry roles in cyber that aren't a large pay decrease for a SWE to take for a year or two. So, many don't make this jump unless there is a clean pivot into appsec or infrasec. Companies needing both of those are few; you largely only see this pivot in tech.

- IT teams don't particularly want to lose their headcount, so outside of an excellent manager or a very self-steering IT eng, nothing in IT is helping the aspiring sec eng make the jump over.

The end result, to solve this problem

> Does a person develop or approve security policies, devices...How are you determining if something is an incident...

is that it's not really solved in a clean way. There's a massive talent gap, and a favorable mid+ sec eng employment market because of it. Cybersec is already experiencing it, LLMs will make it worse, and I think it'll get worse for devs as well ("How are you determining a performant app if you've never built an unperformant one and fixed it?").

// which is a long way of discussing

> You will start to see this turn around as companies realize...

it hasn't turned around in cyber fwiw and it's been growing for probably 2 decades, 1 decade in earnest. Perhaps b/c SWEs are a profit center vs. the security cost center, there'll be motivations though. IMO the only thing driving sec eng hiring isn't companies realizing career pipelines are messed up, it's regulations or getting hacked in profit-damaging ways, and there aren't a ton of companies in those buckets


  > it hasn't turned around in cyber fwiw and it's been growing for probably 2 decades, 1 decade in earnest. Perhaps b/c SWEs are a profit center vs. the security 
  > cost center, there'll be motivations though. IMO the only thing driving sec eng hiring isn't companies realizing career pipelines are messed up, it's regulations 
  > or getting hacked in profit-damaging ways, and there aren't a ton of companies in those buckets

I don't know; from my observations, cybersecurity has only been a thing in the last decade outside the defense industry. Before that it was information security, and most operations/network security was done by systems and network administrators[1], with the driver being reliability of services versus any concern about the equipment or the data on it.

While the hacks are a driver of the cybersecurity field, the biggest driver, as with all things, is insurance companies and cyber coverage. Insurance companies requiring people to be dedicated to keeping up with vulnerabilities, secure default implementations, and data restrictions is what is driving the need, and companies just want to fill it to keep their coverage or keep their rates lower. It's the typical idea that if you add more software developers or people to a project it gets done faster, when in reality it doesn't work that way. This is why I think we will see a shift back to a more graduated source of cybersecurity professionals. There wasn't a formal path to being a systems administrator or network administrator comparable to Computer Science degree -> developer.

Thanks for the astute discussion. It's much better than the one-line bot responses that you typically see now.

[1] For all the young kids: these jobs were renamed DevOps, NetOps, SRE, etc. Previously these responsibilities were just part of operating a network.


> Before that it was information security

Fair call-out. To clarify, I swap what I call the job depending on the audience, but IMO the underlying requirements of the job haven't really changed. A SWE/business audience - call it cybersec. At the security cons in Vegas - call it infosec. Obviously there's skill variations within the security needs of the day (i.e. pure "netsec" isn't around as much anymore vs. "cloudsec"). But, skill shortages have persisted across all these variations of the job IMO.

> insurance companies and cyber coverage.

I've primarily worked in tech or finance, and tbh I don't run into insurance topics a lot although it's of course speculated as a possible growing motivator for the field and related hiring. The issue and "signal" I look for with that changing is when will the Fortune 500-style mass data breach actually turn into (a) uninsurability or (b) massive fines. Neither have happened yet, but IMO this is changing.

In terms of security programs I've joined where there was an incentive to hire, it is always something like this, which is what I mean by regulations or hacks driving hiring in my (anecdotal) experiences:

- Want to IPO, Series C tech startup? Must pass SOC-2, must hire security team.

- Horrible hack or very narrow close call that largely stayed internal -> board/founders get fired up about cyber risk, and it filters down to hiring out a security team.

...



