
Sure. That's pretty acidic wording, but I think it's fair to say they want more consumer market share and lock-in helps that.

The original post's point was that by being more open they would encourage more software to be built for their platform. That would create more demand for their products from consumers.


Wow, I take time out of my day to write a clever pun and you flag the comment? You sure are abusive.


We've diluted the term AI before. Eventually the hype will wear off and we'll call them LLMs, just like what happened to all the previous versions of machine learning or various expert systems.

Vending machines used to be called robots. Then they stopped seeming magical.


It'll stop being called AI. “LLMs are just statistical models, not AI.”

Like everything else that was ever called AI and then actually realized, the goalposts move to exclude it.


The only thing that works for me is separating things into very distinct "realms" and only having one realm open at a time. No exceptions.

For example one realm is for communication. Slack, Browser, Email, and Calendar can be open. Nothing is really a distraction from anything else here. I'm just being "at work" and communicating in this mode.

Another is for coding. Literally the only things open are vim and a terminal. NO browser and NO Slack. If I need documentation then I didn't design well enough, and design is its own realm. I should know the libraries I'm using, and anything else is easily handled by vim's autocomplete/intellisense or by navigating to the code.

The other two explicit realms are Writing and Design/Planning. There are more ad hoc ones, but I really try to avoid ad-hoc-ness.

Switching realms is a hassle and requires super deliberate action. This means I can't just randomly switch between tabs and code and Slack and email and social media and just...kinda looking at things? That was my main problem. It was too easy to "move" and so I could never stop moving and somehow the entire day was gone. At no point was I goofing off, but my day just disappeared.

The only issue is that work people really really want my dot to be green on Slack at all times. They even give me the room to be on my own, but literally just having Slack open is a weird attention drain and I don't really know how to convey that. This leads to me getting most of my work done after hours and working way too long :/


> If I need documentation then I didn't design well enough,

I need documentation because my coworkers didn't design well enough.


It's hard enough to view the content I want to see when everyone has "smart" content feeds shoving ads in my face. I work hard to isolate what I look at on Amazon from what I see on YouTube so marketers don't ruin the little bits of the web I'm able to enjoy intentionally. Creating cross-site bleed of account information behind the scenes just makes everything worse for the user who can no longer drive your app...they just have to plop in and let it drive them.

You handwaved the idea of consent by saying it's "clear" to the user. But if there isn't a way to opt out then consent isn't addressed at all.


They didn't say all engineers, and I certainly wouldn't assert that. But I've definitely seen it personally at 2 early stage startups, a moderate-ish size business, and a Fortune 100. Just a handful of people, but they were very open about it. Those around them simply participated in the conversation and didn't judge.

It was never framed as "abuse." It was talked about very casually as a tool to get more work done, and they deliberately sought out an ADHD diagnosis for access.

I'm in no way judging. They DID get a diagnosis and are using it as treatment. The reason it's relevant is because _they_ talked about it as a tool to get more work done, or a way to be "on" after a late night. Not as part of treatment. But there are tons of reasons why that could be, none of which can be assumed.


I can attest. A lot of people with ADHD will gravitate towards cannabis because of the focusing effect. Cannabis affects everyone differently, and some people need to sink into the couch after smoking, but I usually find myself going for a bike ride, working in Excel or code, or cleaning, which is a big one.

That said, I have worked for one of the largest cannabis companies in the world (at the time), and I can say some of the most productive people I know are all-day, every-day smokers. A lot of them should have been on some sort of ADHD meds for sure, but if the cannabis is working and it keeps them off pharmaceuticals, then what's the harm? It really did change my perception of how some people operate while stoned.

All that said, it's easy to use cannabis as a crutch, and it's definitely not healthy in other ways, but there are worse things to be hooked on.


I find weed helps eliminate the "choice paralysis" that I have with my ADHD, but the other things it does, like making boring and tedious and repetitive things fun, make it TERRIBLE to self-medicate with.


I guess that's kind of the point. You don't have to choose to be that kind of grown up.


It seems like the point being made is that because an LLM lives within the universe and can't store the entire universe, it would need to "reason" to produce coherent output of a significant length. It's possible I misunderstood your post, but it's not clear to me that any "reasoning" isn't just really good hallucination.

Proving that an AI is reasoning and not hallucinating seems super difficult. Even proving that there's a difference would be difficult. I'm more open to the idea that reasoning in general is just statistical hallucination even for humans, but that's almost off topic.

> Any model that trivially depends upon statistics could not do causal reasoning, it would become exponentially less likely over time. At long output lengths, practically impossible.

It's not clear to me that it _doesn't_ fall apart over long output lengths. Our definition of "long output" might just be really different. Statistics can carry you a long way if the possible output is constrained, and it's not like we don't see weird quirks in small amounts of output all the time.

It's also not clear to me that adding more data leads to a generalization that's closer to the "underlying problem". We can train an AI on every sonnet ever written (no extra tagged data or metadata) and it'll be able to produce a statistically coherent sonnet. But I'm not sure it'll be any better at evoking an emotion through text. Same with arithmetic. Can you embed the rules of arithmetic purely in the structure of language? Probably. But I'm not sure the rules can be reliably reversed out enough to claim an AI could be "reasoning" about it.

It does make me wonder what work has gone into detecting and quantifying reasoning. There must be tons of it. Do we have an accepted, rigorous definition of reasoning? We definitely can't take it for granted.


Reasoning and hallucinating are shallower terms that often come up in discussions of this topic, but they ultimately don't cover where and how the model fits the underlying manifold of the data -- which is in fact described rather well by information theory. That's why I referenced Shannon entropy, which is important as an interpretive framework. It provides mathematical guarantees and ties nicely into other information-compression measures, which I do feel answer some of the questions you find more ambiguous.

That is the trouble with sometimes mixing inductive reasoning with a problem that has mathematical roots. There are cases where it's intractable to directly measure how much something is happening, but we have a clean mathematical framework that answers these questions well, so using it can be helpful.

The easiest example of yours that I can tie back to the math is the arithmetic in the structure of language. You can use information theory to show this pretty easily; you might appreciate looking into Kolmogorov complexity as a fun side topic (rough definitions below). I'm still learning it (heck, any of these topics goes a mile deep), but it's been useful.
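
Roughly, the standard definitions, for anyone who wants them on hand (U here is a fixed universal Turing machine and |p| is the length of program p):

    H(X) = -\sum_{x} p(x) \log_2 p(x)    % Shannon entropy: average bits
                                         % needed to encode a draw from X
    K(x) = \min\{ |p| : U(p) = x \}      % Kolmogorov complexity: length of
                                         % the shortest program printing x

The thread between them is that good prediction and good compression are the same problem: a model that assigns high probability to the data can encode that data in few bits.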

Reasoning, on the other hand, I find to be a much harder topic in terms of measuring it. It can be learned, like any other piece of information.

If I could recommend one piece of literature to start diving into the meat of this, I feel like you would appreciate this one most. It's a crazy cool field of study, and this paper in particular is quite accessible and friendly to most backgrounds: https://arxiv.org/abs/2304.12482


Honestly, on a non-toy project, build times with cgo are _brutal_. I usually agree with you, but when the build time on a beefy computer goes from under a second to over a minute, you notice it.

Linters and IDEs get slow when they check for errors, tests run slow, feedback drags, and all your workflows that took advantage of Go's fast compile times are now long enough that your flow and mental context disappear.

I'm way more lenient with other languages since the tooling and ecosystem are built around long build times. Your workflows compensate. But Go's tooling and ecosystem assume it compiles fast and treat things more like a scripting language. When that expectation is violated it hurts and everything feels like it's broken.


In my experience, encapsulating the access to sqlite in a Go package helps a lot with avoiding recompilation of the C source, which is indeed brutally slow. It actually seems to be way slower than compiling with gcc from the command line. Does anyone know why this is the case?
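
A minimal sketch of that encapsulation, assuming the common mattn/go-sqlite3 driver (the package name and Open helper are just illustrative). Keeping the cgo import in one small package means Go's build cache can reuse the compiled package archive, so the C source only recompiles when this package itself changes:

    // Package db keeps all sqlite access behind one small API.
    // The cgo-heavy driver import lives only here, so the rest of
    // the module never triggers a recompile of the C source.
    package db

    import (
        "database/sql"

        _ "github.com/mattn/go-sqlite3" // cgo driver, compiled once and cached
    )

    // Open returns a handle to the sqlite database at path.
    func Open(path string) (*sql.DB, error) {
        return sql.Open("sqlite3", path)
    }

Day-to-day builds of your own packages then stay fast, since go build just links against the cached archive.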


Why would you have to recompile sqlite every time?

I guess you just need to compile the .a once and then reuse it?

If you're rebuilding it every single time, your build is set up wrong.


I'm curious about this too, but haven't been able to figure it out. I want to do some extremely basic detection on user-specified videos, and it'd be really slick to do it entirely in the browser.

Unless someone has a trick I haven't thought of, though, I think I'll have to download it first, which isn't nearly as cool :/


It's annoying because it's just the same-origin policy stopping it from working.

I see there is an origin parameter[1] which sounds like it is nearly what is needed.

I don't know exactly what CORS setting is needed to make this work, though. And since the Access-Control-Allow-Origin header would have to come from YouTube's servers rather than from the embedding page, it may not be something the embedder can control anyway.

[1] https://developers.google.com/youtube/player_parameters#orig...


He's an adult, let him make his own choices.

