AI can’t do our jobs today, but it’s only been 2.5 years since the release of ChatGPT. The performance of these models might plateau today, but we simply don’t know. If they continue to improve at the current rate for 3-5 more years, it’s hard for me to see how human input would be useful at all in engineering.
I don’t think it’s especially unreasonable to assume that these models will continue to improve. Every year since ChatGPT has seen incredible advancements. That will end eventually, but why do you think it’s ending now?
> Every year since ChatGPT has seen incredible advancements
Advancements in what exact areas? My time using GitHub Copilot years ago was more successful for the simple act of coding than my more recent experience trying out Cursor with Claude Sonnet 3.5. I'm not really seeing what these massive advancements have been, and realistically none of these LLMs are more useful than a very, very bad junior programmer when it comes to anything that couldn't already be looked up but is simply faster to ask.
> realistically none of these LLMs are more useful than a very, very bad junior programmer
This is an incredible achievement. 5 years ago chatbots and NLP AI couldn't do shit. 2 years ago they were worthless for programming. Last year they were only useful to programmers as autocomplete. Now they replace juniors. There has been obvious improvement year after year, and it hasn't been minor.
To the extent it’s measurable, LLMs are becoming more creative as the models improve. I think it’s a bold statement to say they’ll NEVER be creative. Once again, we’ll have to see. Creativity very well could be emergent from training on large datasets. But also it might not be. I recommend not speaking in such absolutes about a technology that is improving every day.
I agree, and I think most people would say the current models would rank low on creativity metrics however we define them. But to the main point, I don’t see how the quality we call creativity is unique to biological computing machines vs electronic computing machines. Maybe one day we’ll conclusively declare creativity to be a human trait only, but in 2025 that is not a closed question - however it is measured.
We were talking about LLMs here, not computing machines in general. LLMs are trained to mimic, not to produce novel things, so a person can reasonably think LLMs won't become creative even though some other computer program in the future could.
Most software engineering jobs aren't about creativity; they're about taking requirements stated in a slightly vague fashion and actualizing them for the stakeholder to view and review (and adjust as needed).
The areas where creativity is required are likely related to digital media software (like SFX in movies, games, and perhaps very innovative software). In those areas, surely the developers working on them will have the creativity required.
We’re talking about spending a couple of hours learning the basics of negotiation for a likely 10-20% increase in salary/equity. That’s certainly not making the difference in solving the world’s problems.
If you’re speaking more broadly than just salary negotiation - I’d just say that humans aren’t perfect machines. We care about solving problems, but we also have desires for money, power, and status, and we follow random rules.
The title sounds whimsical, but animals cause a significant amount of outages - around 5-10% of them. When I interned at a power company, I saw them install “squirrel guard” insulators on equipment terminals.
This is a classic security dilemma that is not easily resolvable. Suppose we just look at the US and China. Each side will discover some number of vulnerabilities. Some of those vulnerabilities will be discovered by both countries, some just by one party. If the US discloses every vulnerability, we’re left with no offensive capability and our adversary will have all of the vulnerabilities not mutually discovered. Everyone disclosing and patching vulnerabilities sounds nice, but is an unrealistic scenario in a world with states that have competing strategic interests.
Haha yeah, I thought about this. But I guess everyone has a different idea of what counts as a bad cause. But yes, a next version of this would let you select from a few different organisations.
I can’t relate to that. When I see a banner ad I find it obtrusive whether it’s from Bank of America or my favorite ham radio company. If I’m in the market for a product, I value hearing the testimonials of people in my life rather than an advertisement.
The one case where I find ads useful, when word of mouth isn't an option, is in a static image on a site (review site, blog, whatever) where I'm researching a thing. The ad would be related to that thing, doesn't need to know a thing about me other than I'm browsing that page, and is related to the content on that page. I click on those ads sometimes.
I’m trying to think of anything I find useful that I stumbled upon thanks to ads over the past twenty years or so, and I’m pretty much drawing a blank. It certainly seems negligible.
The problem with prohibiting ads is how to prevent (or even define) paid hidden promotions. But tracking and targeted ads could be prohibited, which would already make things much more civil and less relevant as a tech profit center.
>I’m trying to think of anything I find useful that I stumbled upon thanks to ads over the past twenty years or so, and I’m pretty much drawing a blank. It certainly seems negligible.
Maybe the ad is good when you aren't even aware that you were influenced by it?
This also seems generational to me. I’m an American younger than 30 and the only people at my company who embody this are the senior people over the age of 40.
It's because we millennials came of age at a time when there was massive optimism that computers and the Internet would make the world a much better place - from equalizing education through online resources to bringing people around the world together through online discourse and intellectual discussion.
Most of that didn't happen of course (although Khan academy has helped tons of people), but we were raised to believe that the software we wrote was going to help people.
It is sad that Gen Z doesn't believe that; it signifies a large cultural shift in computer geek culture.
Fwiw I have written software that saved lives, and I still believe software can do a lot of good in this world. We should aim to create things throughout our life, using the skills that we have, that make the world a better place.
There is no quicker way to get yourself put on the back burner than failing to pick up the pom-poms. If you are valued by management you will get a few warnings, but otherwise you go on a silent list for the next restructuring or for being made an example of.
As a Gen Xer, I figured this was a millennial thing. Given the age ranges you cite that may still be the case. I don't work with enough Gen Zers to paint them with a broad brush.
But most of the people I've worked with who wanted to feel "part of something" have been 10+ years younger than me.
So your sample size is 1 company? That's very anecdotal. And you're younger than 30, so you probably haven't worked at many companies. There are plenty of people in plenty of other companies that you've never met.
It sounded anecdotal because I shared an anecdote. I don’t have a research study on the topic. Still anecdotally, I’d say the same thing about my 3 internships, so I’ll say n = 4. Happy to hear your anecdotal experience or non-anecdotal data.
My more qualified anecdote after 30 years in the industry is that you are wrong. I've worked with 20-somethings that have been all-in-drink-the-koolaid, as well as people of all ages who were only there for the paycheck. It isn't "generational", it's a personality trait. You either have it or you don't.
It can have wider bandwidth, lower noise and distortion, a smaller input offset voltage (Vos), and maybe other things I'm not recalling off the top of my head. In practical terms, you might not be able to run any test with the ADC that can find the limits of the op amp.
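A quick back-of-the-envelope check shows how this happens: a good op amp's offset and noise can both sit below a single ADC code, so the ADC literally cannot resolve them. All numbers below are illustrative assumptions, not from any real datasheet.

```python
# Illustrative spec comparison: op amp imperfections vs. ADC resolution.
vref = 5.0                      # assumed ADC reference voltage (V)
bits = 16                       # assumed ADC resolution
lsb = vref / 2**bits            # one ADC code ~ 76 uV

vos = 25e-6                     # assumed op amp input offset voltage (V)
noise_density = 5e-9            # assumed input noise density, V/sqrt(Hz)
bandwidth = 100e3               # assumed measurement bandwidth (Hz)
noise_rms = noise_density * bandwidth**0.5   # ~1.6 uV rms

# Both imperfections are smaller than one ADC code, so the ADC
# can't "see" the op amp's limits at all.
print(vos < lsb and noise_rms < lsb)   # prints True
```

With these (made-up) numbers, the op amp's offset (25 uV) and integrated noise (~1.6 uV rms) are both below the ~76 uV LSB of a 16-bit, 5 V ADC.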
According to the paper, this worked because the dielectric change from all other blood constituents was negligible, allowing glucose to be measured. Glucose in blood is around 80 mg/dL. It may be possible to measure other blood chemistry metrics that are similar in concentration at other frequencies, but there are a lot of blood tests, many of which would probably be impossible - like white blood cell count, anything enzymatic, something whose concentration is measured in ug/dL, or something that has no effect on the dielectric properties of the blood. I wouldn’t expect to see a whole blood panel via wearable radar anytime soon, but we may get a few more tests from RF sensing.
Yes, it is sub-optical RF sensing. The important factor here is that glucose in the blood capacitively couples to small sensing antennas. The sensing antennas are resonant elements whose exact resonance changes depending on the surrounding environment, in this case glucose in the blood. You can then transmit an RF signal to the antenna, then record the signal reflected from the antenna’s port to estimate the glucose level.
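The readout described above - sweep an RF signal across the antenna, find the reflection minimum (the resonance), and map the resonance shift to a glucose level - can be sketched roughly as follows. The Lorentzian dip shape, the linear calibration, and every number here are illustrative assumptions on my part, not values from the paper.

```python
import numpy as np

def find_resonance(freqs_ghz, s11_db):
    """Resonant frequency = frequency of the deepest reflection dip."""
    return freqs_ghz[np.argmin(s11_db)]

def glucose_from_shift(f_res_ghz, f0_ghz=2.400, sens_ghz_per_mgdl=-1e-4):
    """Map resonance shift to glucose via an assumed linear calibration.
    f0_ghz: resonance at 0 mg/dL; sens: shift per mg/dL (both made up)."""
    return (f_res_ghz - f0_ghz) / sens_ghz_per_mgdl

# Simulated frequency sweep: a Lorentzian reflection dip whose
# center frequency moves with glucose concentration.
freqs = np.linspace(2.35, 2.45, 2001)            # GHz
true_glucose = 80.0                              # mg/dL, typical level
f_center = 2.400 + (-1e-4) * true_glucose        # same model as calibration
s11 = -20.0 / (1.0 + ((freqs - f_center) / 0.002) ** 2)  # dB

f_res = find_resonance(freqs, s11)
est = glucose_from_shift(f_res)
print(round(est))   # prints 80
```

The real system of course has to contend with noise, temperature drift, and skin/tissue variation, which is why calibration is the hard part - but the core loop is just "measure reflection, locate resonance, invert the calibration."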