Hacker News | mdavidn's comments

Look around your town and try a few regular local group activities. See "third place" for ideas. Be patient with yourself. This will take time, and that's okay.

https://en.wikipedia.org/wiki/Third_place

Some of my most durable adult friendships started at group ballroom dance classes. The studio was a 15-minute drive from my suburban home, twice per week, and focused on social dancing, not competition. I don't think the dancing itself was what made it work; the funny teachers and regular faces did. That studio closed (pre-pandemic, thank goodness). No studio since has recreated the magic, but other activities have.


When a cross-cutting team is responsible for something, it's no longer the product team's problem: Architecture, infrastructure, CI/CD, QA, load testing, security...


Welcome to the club. I registered my own domain and moved my digital life off Google services 18 years ago for this exact reason. If you need another reason: They scan all of your e-mail to target ads at you and your associates. Do it. It's not that difficult!

My "new" mail provider fetches messages from Gmail to create a unified inbox, which helped with the transition. Today, I'm thinking of shutting this off given the volume of misaddressed e-mail and spam that arrives via Gmail.


It boggles my mind that they need a photo ID to prove that my 9-year-old account with a saved credit card belongs to an adult. The linked Steam account is 18 years old.


From the article:

> For most adults, age verification won’t be required, as Discord’s age inference model uses account information such as account tenure, device and activity data, and aggregated, high-level patterns across Discord communities. Discord does not use private messages or any message content in this process.


They don't do this for age verification; they do it to build a dataset to sell.


> Key privacy protections of Discord’s age-assurance approach include:

> On-device processing: Video selfies for facial age estimation never leave a user’s device.

> Quick deletion: Identity documents submitted to our vendor partners are deleted quickly— in most cases, immediately after age confirmation.

> Straightforward verification: In most cases, users complete the process once and their Discord experience adapts to their verified age group. Users may be asked to use multiple methods only when more information is needed to assign an age group.

> Private status: A user’s age verification status cannot be seen by other users.


Yes, I definitely trust the multi-billion dollar corporation regarding my data


Discord is an app that's so routinely reverse-engineered there are projects with a million+ users designed around patching changes to it, straight in the binary.

https://betterdiscord.app/

Do you think their big evil plan is to make up a lie that will last maybe 3 weeks, jeopardize user trust, and lose Nitro revenue?

Surely there is so much money to be made selling random people's faces.

If they tell you they're not selling your data, they're not selling your data. What you should worry about is incompetence.

Not even 6 months ago, a third party they used for ID verification got breached:

https://www.bbc.com/news/articles/c8jmzd972leo


> Do you think their big evil plan is to make up a lie that will last maybe 3 weeks, jeopardize user trust, and lose Nitro revenue?

???? Yes? Companies nuke their core product all the time for the sake of a big IPO number.


Of course, Discord has no track record of overextending their privacy policy and selling data you would not expect (sarcasm).

For example (but not limited to): "programs you run and other system specific information". I believe I read a while back that they recorded the titles of all open windows, but I can't seem to find a reference for that.

https://www.reddit.com/r/privacy/comments/rsxeee/you_should_...


I'm not saying they won't ever start collecting it and selling it. I'm saying the day they do, it will be laid out in their privacy policy. Right now they're making statements that they're not even collecting it.


> Surely there is so much money to be made selling random people's faces.

I really hope I misread the sarcasm in that statement, because of course there is a lot of money in that.


How much? 2 bucks per user?

Their paid users shell out 3 a month...

And then you think of the real world

> secretly selling your IDs data behind your back, they have to account for that revenue in their books, put it in their privacy policies or do it illegally, it's weak to whistleblowers, third parties get breached all the time (as well as yourself), and you have to trust the people you're selling this to. It's not credible.


How many users are paying? a few million? How many use the service for "free"? A few hundred million? Are you stupid?


> How many users are paying?

7.3 million paying every month

> How many use the service for "free"?

143 million times maybe 2 bucks once. Most likely five cents once.

> Are you stupid?

Flagged


While what GP said was not worded how the site rules say it should be, your original point is very tedious and can only be read charitably if we assume you never read any news or barely retain anything. We are currently on a news website. I think if you want non-commenting readers to see your point and have charitable thoughts of you it would help to explain why you're ignoring reality for whatever it is you are positing (consumer protections because of subscriptions? really? for this corporation?).

What you're saying in this post essentially just underlines GPs point, which I imagine isn't what you're trying to communicate. You have to help a reader understand your point of view, especially if it's far removed from objective reality (which is that a corporate entity will betray you for money, regardless of whether that makes sense long-term).


Nope. When corporate overlords sell your data, they say so in their terms of use and privacy policies, because no one is that stupid. If Discord says they're not selling that data, they're not selling that data. The day they start doing it, they'll put it in their policy.

You're making up, in your head, a reality that doesn't exist, and claiming it's the truth.

You have in your head examples like Facebook or Spotify. Spoiler: they tell you exactly what sauce you're going to be eaten with.


Discord had a scandal not too long ago where pictures of people and passports were stolen. Back then they said that they delete those files immediately after processing them. That proves your statement false.


You got that fact from my own comment a few levels above this one:

https://www.bbc.com/news/articles/c8jmzd972leo

It was a 3rd party


Are you saying that corporations respect the letter of the law when it comes to privacy? They don't; they can just drop some lunch money when caught red-handed [0].

Even when they write in their privacy policy that they collect private data and sell it to third parties, unlawfully, that does not make it any better. Cambridge Analytica was operating within Facebook's policies. Would you say that people who took an IQ test and were manipulated into voting pro-Brexit were well aware of the sauce they were being eaten with?

Discord is unfortunately no different; they're profit-driven and likely to sell user data, already or in the future, because it's incredibly easy and profitable to do so. Why would a chat app try to predict its users' gender? [1]

[0] https://en.wikipedia.org/wiki/GDPR_fines_and_notices [1] https://x.com/DiscordPreviews/status/1790065494432608432


Vencord is more about patching Discord: https://github.com/Vendicated/Vencord

BetterDiscord is more... client modding to enable userscripts. Vencord actually runs find-and-replace on Discord's Webpack modules to implement deeper integrations. That's far more reverse engineering than BetterDiscord's monkey-patching.


I think selling it to state actors could definitely be a big boon. I'll never trust them; I'd rather delete my account.


Do you think they reverse engineer the server side?


Oh hey Direwolf I've contributed some stuff to your mods.

You mean if they lied about just the IDs but not the faces? The paragraph quoted mentions that the verification is done client side, "never leaves your device".

If we admit that they're saying they won't store it, then secretly selling your IDs data behind your back, they have to account for that revenue in their books, put it in their privacy policies or do it illegally, it's weak to whistleblowers, third parties get breached all the time (as well as yourself), and you have to trust the people you're selling this to. It's not credible.

There are similar debates around WhatsApp and their E2E encryption. Read this:

https://blog.cryptographyengineering.com/2026/02/02/whatsapp...


Right, because that never happened to Discord or any other multi-billion-dollar, VC-fueled company that offers its services for free. See also Meta repeatedly lying about absolutely anything that has to do with privacy.


> If they tell you they're not selling your data they're not selling your data.

Oh you naive child. /s

If they tell you they are not selling your data, it's because they have a license agreement with another company which is selling your data. 'They' very specifically aren't selling it; however, they are very much profiting from other companies using it.


Yeah because they don’t haha. It boggles the mind because the headline is clickbait.


YouTube routinely asks for ID on accounts that are already of drinking age. They don't care; they want document scans they can use for profiling and likely sell to third parties.


> and to likely sell to 3rd parties.

Can you provide literally any evidence that would suggest this is the case?


Since selling PII is common practice in US industry, I believe the onus is reversed: they need to prove that they delete the data and keep it private.


Given how YouTube makes money from advertising, I suspect it's more profitable for them to keep the data to themselves and use it for targeting. I would not be surprised if they also share it with AdSense and other Alphabet entities (and presumably with government agencies), but I am doubtful beyond that.

Not that this is much better than directly selling to third parties.


Yep, that's my reasoning too.


This sort of thing is common enough that simply establishing means, motive, and opportunity is convincing to me. If not yet, then soon. You can't hope for a smoking gun every time.


Give it a couple years for the inevitable data breach to leak all the details


If you want to compare Redis and PostgreSQL as a cache, be sure to measure an unlogged table, as suggested in the article. Much of PostgreSQL's slowness goes toward ensuring durability and consistency after a crash. If that isn't a concern, disable it. Unlogged tables are automatically truncated after a crash.
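For reference, a hedged sketch of a Redis-style cache on an unlogged table; the table, key names, and TTL are illustrative, not from the article:

```sql
-- UNLOGGED skips the write-ahead log: much faster writes, but the
-- table is automatically truncated after a crash.
CREATE UNLOGGED TABLE cache (
    key        text PRIMARY KEY,
    value      jsonb NOT NULL,
    expires_at timestamptz NOT NULL
);

-- SET with a TTL, expressed as an upsert.
INSERT INTO cache (key, value, expires_at)
VALUES ('session:42', '{"user": 7}', now() + interval '5 minutes')
ON CONFLICT (key) DO UPDATE
    SET value = EXCLUDED.value,
        expires_at = EXCLUDED.expires_at;

-- GET that ignores expired entries.
SELECT value FROM cache WHERE key = 'session:42' AND expires_at > now();
```

An existing table can also be switched with `ALTER TABLE cache SET UNLOGGED;` before benchmarking.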


I don’t complain about Electron because I didn’t install the app if I could avoid it.


Pipelines are indeed one flow, and that works most of the time, but shell scripts make parallel tasks easy too. The shell provides tools to spawn subshells in the background and wait for their completion. Then there are utilities like xargs -P and make -j.
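A minimal sketch of both approaches; the file names, sleep durations, and job count are illustrative:

```shell
# Two independent tasks run as background subshells; `wait` blocks
# until every background job has exited.
(sleep 0.2; echo "task A done") > a.log &
(sleep 0.2; echo "task B done") > b.log &
wait

# xargs -P caps concurrency: here up to 4 jobs at once, one per input line.
printf '%s\n' 1 2 3 4 | xargs -P 4 -I{} sh -c 'echo "processed {}"' > items.log
```

`wait` can also take a specific PID and propagates that job's exit status, which helps a script fail when one branch fails; `make -j4` gives the same bounded parallelism at the level of build targets.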


This holiday season, hearing my parents rant about AI features unnaturally forced onto their daily gadgets warmed my heart.


Hah, I was listening to a similar conversation, which started with family members working in the school system complaining about AI slop: it began (relatively) harmlessly in day-to-day email conversations padded with time-wasting filler but has now trickled into "professional" education materials and even textbooks.

That led to a lot of agreement and rants from others, with frustrating stories about their specific workplaces and how it just keeps getting worse by the day. Previously these conversations only popped up among me and the handful of family members in tech, but clearly the sentiment now has much broader resonance.

As can be observed in my comment history, I use LLM agentic tools for software dev at work and on my personal projects (really my only AI use case) but cringe whenever I encounter "workslop", as it almost invariably serves to waste my time. My company has been running a large pilot of 365 Copilot, but I have yet to find anything useful; the email-writing tools just seem to strip out my personal voice, making me sound like I'm writing unsolicited marketing spam.

Every single time I've been using some Microsoft product and thought "Hmm, wait, maybe the Copilot button could actually be useful here?", it just tells me it can't help or gives me a link to a generic help page. It's like Microsoft deliberately engineered 365 Copilot to be as unhelpful as possible while simultaneously putting a Copilot button on every visible surface imaginable.

The only tool that actually does something is designed to ruin emails by stripping out personal tone/voice and introducing ambiguity to waste the other person's time. Awesome, thanks for the productivity boost, Microsoft!


It sounds like you're the one in denial? AI makes some things faster, like working in a language I don't know very well. It makes other things slower, like working in a language I already know very well. In both cases, writing code is a small percentage of the total development effort.


No I'm not, I'm just sick of these edgy takes where AI does not improve productivity when it obviously does.

Even if you limit your AI use to finding information online through deep research, it's such a time saver and productivity booster that it makes a lot of difference.

The list of things it can do for you is massive, even if you don't have it write a single line of code.

Yet the counterargument is like "bu..but.. my colleague is pushing slop and it's not good at writing code for me." Come on, then use it for the things it's good at, not the things you find unsatisfactory.


It "obviously" does based on what, exactly? For most devs (and it appears you, based on your comments) the answer is "their own subjective impressions", but that METR study (https://arxiv.org/pdf/2507.09089) should have completely killed any illusions that that is a reliable metric (note: this argument works regardless of how much LLMs have improved since the study period, because it's about how accurate dev's impressions are, not how good the LLMs actually were).


It's a good study. I also believe it is not an easy skill to learn. I would not say I have 10x output, but easily 20%.

When I was early in my use of it I would have said I sped up 4x, but now, after using it heavily for a long time, some days it's +20% and other days -20%.

It's a very difficult technology: it's hard to know which of the two you are on any given day.

The real thing to note is that when you "feel" lazy while using AI, you are almost certainly in the -20% category. I've had days of not thinking where I had to revert all the code from that day because AI jacked it up so much.

To get that speedup you need to be truly focused 100% of the time, or risk death by a thousand cuts.


Yes, self-reported productivity is unreliable, but there have been other, larger, more rigorous, empirical studies on real-world tasks which we should be talking about instead. The majority of them consistently show a productivity boost. A thread that mentions and briefly discusses some of those:

https://news.ycombinator.com/item?id=45379452


Some (partial) counter points:

- I think, given publicly available metrics, it's clear that this isn't translating into more products/apps getting shipped. That could be because devs are now running into other bottlenecks, but it could also indicate that there's something wrong with these studies.

- Most devs who say AI speeds them up assert numbers much higher than what those studies have shown. Much of the hype around these tools is built on those higher estimates.

- I won't claim to have read every study, but of the ones I have checked in the past, the more the methodology impressed me the less effect it showed.

- Prior to LLMs, it was near universally accepted wisdom that you couldn't really measure developer productivity directly.

- Review is imperfect, and LLMs produce worse code on average than human developers. That should result in somewhat lower code quality with LLM usage (although that might be an acceptable trade-off for some). The fact that some of these studies didn't find that is another thing that suggests there are shortcomings in said studies.


> - Most devs who say AI speeds them up assert numbers much higher than what those studies have shown.

I am not sure how much is just programmers saying "10x" because that is the meme, but if at all realistic numbers are mentioned, I see people claiming 20 - 50%, which lines up with the studies above. E.g. https://news.ycombinator.com/item?id=45800710 and https://news.ycombinator.com/item?id=46197037

> - Prior to LLMs, it was near universally accepted wisdom that you couldn't really measure developer productivity directly.

Absolutely, and all the largest studies I've looked at mention this clearly and explain how they try to address it.

> Review is imperfect, and LLMs produce worse code on average than human developers.

Wait, I'm not sure that can be asserted at all. It's anecdotally not my experience, and the largest study in the link above explicitly discusses it and finds that proxies for quality (like approval rates) indicate an improvement rather than a decline. The Stanford video accounts for code churn (possibly due to fixing AI-created mistakes) and still finds a clear productivity boost.

My current hypothesis, based on the DORA and DX 2025 reports, is that quality is largely a function of your quality control processes (tests, CI/CD etc.)

That said, I would be very interested in studies you found interesting. I'm always looking for more empirical evidence!


> I see people claiming 20 - 50%, which lines up with the studies above

Most of those studies either measure productivity using useless metrics like lines of code or number of PRs, or have participants working for organizations that are heavily invested in the future success of AI.

One of my older comments addressing a similar list of studies: https://news.ycombinator.com/item?id=45324157


As mentioned in the thread I linked, they acknowledge the productivity puzzle and try to control for it in their studies. It's worth reading them in detail; I feel like many of them did a decent job controlling for many factors.

For instance, when measuring the number of PRs, they ensure that each one goes through the same review process whether AI-assisted or not, so these PRs meet the same quality standards as human-written ones.

Furthermore, they did this as a randomized controlled trial comparing engineers without AI to those with AI (in most cases, the same ones over time!), which does control for a lot of the issues with using PRs in isolation as a holistic view of productivity.

>... whose participants are working for organizations that are heavily invested in future success of AI.

That seems pretty ad hominem, unless you want to claim they are faking the data, along with co-authors from premier institutions like NBER, MIT, UPenn, Princeton, etc.

And here's the kicker: they all converge on a similar range of productivity boost, such as the Stanford study:

> https://www.youtube.com/watch?v=tbDDYKRFjhk (from Stanford; not an RCT, but the largest scale, with actual commits from 100K developers across 600+ companies, and it tries to account for reworking AI output. Same guys behind the "ghost engineers" story.)

The preponderance of evidence paints a very clear picture. The alternative hypothesis is that ALL these institutes and companies are colluding. Occam's razor and all that.


> if at all realistic numbers are mentioned, I see people claiming 20 - 50%

IME most people claim small integer multiples, 2-5x.

> all the largest studies I've looked at mention this clearly and explain how they try to address it.

Yes, but I think pre-AI virtually everyone reading this would have been very skeptical about their ability to do so.

> My current hypothesis, based on the DORA and DX 2025 reports, is that quality is largely a function of your quality control processes (tests, CI/CD etc.)

This is pretty obviously incorrect, IMO. To see why, let's pretend it's 2021 and LLMs haven't come out yet. Someone is suggesting no longer using experienced (and expensive) first-world developers to write code. Instead, they suggest hiring several barely trained boot camp devs (from low-cost-of-living parts of the world, so they're dirt cheap) for every current dev and having the latter just do review. They claim that this won't impact quality because of the aforementioned review and their QA process. Do you think that's a realistic assessment? If, on the off chance, you do think it is, why didn't this happen on a larger scale pre-LLM?

The resolution here is that while quality control is clearly important, it's imperfect, ergo the quality of the code before passing through that process still matters. Pass worse code in, and you'll get worse code out. As such, any team using the method described above might produce more code, but it would be worse code.

> the largest study in the link above explicitly discuss it and find that proxies for quality (like approval rates) indicate more improvement than a decline

Right, but my point is that that's a sanity-check failure. The fact that shoving worse code at your quality control system will lower the quality of the code coming out the other side is IMO very well established, as is the fact that LLM-generated code is still worse than human-generated code (where the human knows how to write the code in question, which they should if they're going to be responsible for it). It follows that more LLM code generation will result in worse code, and if a study finds the opposite, it's very likely that it made some mistake.

As an analogy, when a physics experiment appeared to find that neutrinos travel faster than the speed of light in a vacuum, the correct conclusion was that there had almost certainly been a problem with the experiment, not that neutrinos actually travel faster than light. That was indeed the explanation. (Note that I'm not claiming that "quality control processes cannot completely eliminate the effect of input code quality" and "LLM-generated code is worse than human-generated code" are as well established as relativity.)


> Yes, but I think pre-AI virtually everyone reading this would have been very skeptical about their ability to do so.

That's not quite true: while everybody acknowledged it was folly to measure absolute individual productivity, there were aggregate metrics many in the industry were aligning on like DORA or the SPACE framework, not to mention studies like https://dl.acm.org/doi/abs/10.1145/3540250.3558940

Similarly, many of these AI coding studies do not look at productivity on an individual level at a point of time, but in aggregate and over an extended period of time using a randomized controlled trial. It's not saying Alice is more productive than Bob, it's saying Alice and Bob with AI are on average more productive than themselves without AI.

> They claim that this won't impact quality because of the aforementioned review and their QA process. Do you think that's a realistic assessment? If and on the off chance you think it is, why didn't this happen on a larger scale pre-LLM?

Interestingly, I think something similar did happen pre-LLM at industry-scale! My hypothesis (based on observations when personally involved) is that this is exactly what allowed offshoring to boom. The earliest attempts at offshoring were marked by high-profile disasters that led many to scoff at the whole idea. However companies quickly learned and instituted better processes that basically made failures an exception rather than the norm.

I expand a bit more and draw parallels to coding with AI here: https://news.ycombinator.com/item?id=44944717

> ... as is the fact that LLM generated code is still worse than human generated...

I still don't think that can be assumed as a fact. The few studies I've seen find comparable outcomes, with LLMs actually having a slight edge in some cases, e.g.

- https://arxiv.org/abs/2501.16857

- https://arxiv.org/html/2508.00700v1


> My hypothesis (based on observations when personally involved) is that this is exactly what allowed offshoring to boom.

Offshoring did happen, but if you were correct that only the quality control process impacted final quality, the software industry would have looked something like the garment industry, with basically zero people in the first world being paid to actually write software, and hires from the developing world not requiring much skill. What we actually see is that some offshoring occurred, but it was limited, and when it did occur companies tried to hire highly trained professionals in the country they outsourced to, not the cheapest bootcamp devs they could find. That's because the quality of the code at generation time does matter, so it becomes a tradeoff between cost and quality.

> I still don't think that can be assumed as a fact. The few studies I've seen find comparable outcomes, with LLMs actually having a slight edge in some cases, e.g.

Anthropic doesn't actually believe in their LLMs as strongly as you do. You know how I can tell? Because they just spent millions acquihiring the Bun team instead of asking Claude to write them a JS runtime (not to mention the many software engineering roles they're advertising on their website). They know that their SOTA LLMs still generate worse code than humans, that they can't completely make up for it in the quality control phase, and that they at the very least can't be confident of that changing in the immediate future.


Offshoring wasn't really limited... looking at India as the largest offshoring destination, it is in the double-digit billions annually, about 5 - 10% of the entire Indian GDP, and it was large enough that it raised generations of Indians from lower middle-class to the middle and upper-middle class.

A large part of the success was, to your point, achieved by recruiting highly skilled workers at the client and offshoring ends, but they were a small minority. The vast majority of the workforce was much lower skilled. E.g. at one point the bulk of "software engineers" hired didn't even study computer science! The IT outsourcing giants would come in and recruit entire batches of graduates regardless of their education background. A surprisingly high portion of, say, TCS employees have a degree in something like Mechanical Engineering.

The key strategy was that these high-skilled workers acted as high-leverage points of quality control, scaled across a much larger force of lower-skilled workers via processes. As the lower strata of workers upskilled over time, they were in turn promoted to lead other projects with lower-skilled workers.

In fact, you see this same dynamic in high-performing software teams, where there is a senior tech lead and a number of more junior engineers. The quality of output depends heavily on the skill-level of the lead rather than the more numerous juniors.

Re: Anthropic, I think we're conflating coding and software engineering. Writing an entire JS runtime is not just coding, it's a software engineering project, and I totally agree that AI cannot do software engineering: https://news.ycombinator.com/item?id=46210907


not OP but I have a hard metric for you.

AI multiplied the amount of code I committed last month by 5x and it's exactly the code I would have written manually. Because I review every line.

model: Claude Sonnet 3.5/4.5 in VSCode GitHub Copilot. (GPT Codex and Gemini are good too)


I have no reason to think you're lying about the first part (although I'd point out there are several ways that metric could be misleading, and approximately every piece of evidence available suggests it doesn't generalize), but the second part is very fishy. There's really no way for you to know whether you'd have written the same code, or effectively the same code, after merely reviewing existing code, especially when that review must be fairly cursory (because to get the speedup you claim, you must be spending much less time reviewing the code than it would have taken to write it). Effectively, what you've done is move the subjectivity from "how much does this speed me up?" to "is the output the same as if I had done it manually?"


> There's really no way for you to know whether or not you'd have written the same code or effectively the same code after reviewing existing code.

There is in my case because it's just CRUD code. The pattern looks exactly like the code I wrote the month prior.

And this is where LLMs excel at, in my experience. "Given these examples, extrapolate to these other cases."


I am not even a software engineer, but from using the models so much I think you are confined to a specific niche that happens to be well represented in the training data, so you have a distorted perspective on the general usefulness of language models.

For some things LLMs are like magic. For other things LLMs are maddeningly useless.

The irony to me is that anyone who says something like "you don't know how to use the LLM" actually hasn't explored the models enough to understand their strengths and weaknesses, and how random and arbitrary those strengths and weaknesses are.

Their use cases happen to line up with the strengths of the model, and they think it is something special they are doing themselves when it is not.


> No I'm not, I'm just sick of these edgy takes where AI does not improve productivity when it obviously does.

Feel free to cite the data you've seen supporting this argument.


Insider trading is a form of theft. Every trade has a counterparty who missed out on those capital gains because they did not have their thumb on regulatory levers.

