epistemer's comments | Hacker News

We really got to the point of having these almost mandatory, 100% formulaic and 100% useless sex scenes in movies.

Not even sex scenes, but kissing and clothes-unbuttoning scenes with elevator music, and maybe a bare ass shot if the director really wanted to push the envelope.

A completely ridiculous motif that ran its course.


Duolingo was my gateway drug to Anki. I just can't imagine going back to Duolingo given the speed of going through, say, 100 cards in Anki. There is just so much animation, sound, and nonsense in the way with Duolingo. It would take at least 10X longer.

There is absolutely something special about building decks too. I also get pretty much the same motivation from the timeline statistics in Anki as from the streaks in Duolingo.


In the Navy video, I remember one of the guys mentioning that he thinks it is a drone.

It is such hubris to believe those are alien craft and not foreign military drones. As if alien craft were a more probable explanation than another country having drones the US Navy does not have.


You seriously claim this after we all witnessed Russia, the supposed number 2 or 3 military power in the world, lose a hundred thousand soldiers to Ukraine in less than a year? The reality is that the USA has the most advanced military and no one else comes close... so yes, aliens are a far more likely explanation than China having anti-gravity drones.


I think that underestimates the massive number of distractions a person born in 2000 has had to deal with.

I can remember being bored out of my mind and so practicing Bach on guitar for hours in 1990. There is just no way I would have done that in 2015. That feeling was gone forever the first time my modem connected to the internet.

It wouldn't be shocking if the intellectual giants of old became what they were because they had little else to do but read and think.

I do put Wolfram in the same league as von Neumann in terms of not being ashamed to admit they are beyond me. I can distinctly remember getting A New Kind of Science from the library, getting home, and then quickly realizing there was no chance in hell I could read 1,000 pages of it.


That is a huge assumption. The simplest explanation to me is that, while they have larger language models, they don't have a better product than ChatGPT to release. I would think building that product is what this $400m represents.

The impressive thing with ChatGPT to me is how well it understands what you want even with very sparse input. Even if it gives wrong answers, it still feels like you are both on the same page. That seems like the secret sauce, even if a larger language model would give more correct output. I wouldn't be shocked at all if Google doesn't currently have anything that feels the way ChatGPT does when it comes to interaction, and now they are racing to build exactly that.


It makes me a bit sad thinking about how smart, creative people used to form bands like this in order to make a bunch of money. Now they form startups, in a much duller society.


Awesome. I am ordering this right now.


I see it as the opposite. It is amazing what the model can do even with poor quantitative reasoning.

I just can't imagine that adding superhuman quantitative reasoning is going to be that big of a stumbling block over the next decade. If anything, that is probably the low-hanging fruit here for a huge jump into the unknown.


Looking at the state of the art in automated theorem provers, I don't think that's low-hanging fruit.

We probably can make something that can calculate well and won’t make mistakes in combining various numbers found online, and can do rote evaluation of expressions not found anywhere online, but adding ‘reasoning’?

Even disregarding that it would have to, somehow, assign different trust levels to various online sources (for example, are https://en.uncyclopedia.co/wiki/Wikipedia or https://en.wikipedia.org/wiki/Uncyclopedia trustworthy?), it would, IMO, already fall at the hurdle of doing 'simple' math.

For example, "the sine of 100 factorial" has a well-defined value, but computing it in IEEE doubles doesn't make sense because representable numbers are way too far apart around 100!. (Google says it is about 0.68395718932, but it also thinks that sin(1 + 100!) ≈ 0.68395718932. I trust neither answer.)
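As a concrete illustration of that spacing problem, here is a minimal Python sketch (not from the comment itself; only the standard math module is assumed):

    # Near 100! (about 9.3e157), adjacent IEEE doubles are roughly 2^472
    # apart, so 100! and 100! + 1 round to the very same double, and any
    # double-based sine returns one identical, meaningless value for both.
    import math

    n = math.factorial(100)          # exact 158-digit integer

    print(float(n) == float(n + 1))  # True: both round to the same double
    print(math.sin(n))               # sine of the nearest double to 100!
    print(math.sin(n + 1))           # identical output; neither is sin(100!) or sin(100! + 1)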

That's solvable by using better software. Wolfram Alpha claims these are about -0.17 and -0.92, respectively, for example, but in my book, an AI wouldn't be intelligent if it always used such a tool; it would have to know when to fall back on the heavy guns. For the "what's sin(100!)?" question, I think the first response might be a counter-question, "why do you want to know?", but that depends on earlier discussion.
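A sketch of that "heavy guns" fallback, assuming the arbitrary-precision mpmath library as a stand-in for something like Wolfram Alpha (the library choice and precision setting are illustrative, not what the comment used): with enough working precision, 100! is stored exactly and the reduction mod 2π is done at that precision, so the two arguments finally give distinct answers.

    # Assumes mpmath is installed; dps=250 is an arbitrary but sufficient choice.
    import math
    from mpmath import mp, mpf, sin

    mp.dps = 250                 # ~830 bits of precision, more than the ~525 bits in 100!

    n = math.factorial(100)
    print(sin(mpf(n)))           # sin(100!)      (Wolfram Alpha reportedly gives about -0.17)
    print(sin(mpf(n) + 1))       # sin(100! + 1)  (reportedly about -0.92)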


I think it is really hard to say where all this goes right now when we currently don't even have good quantitative reasoning.

Ten years ago we were still working on MNIST prediction accuracy; ten years forward from here, all bets are off. If the model has superhuman quantitative reasoning and a mastery of language, I am not sure how much programming we will be doing compared to moving to a higher level of abstraction.

On the other hand, I think there will be so many new software jobs because of the volume of software built over the next 20 years, a volume that is probably unimaginable sitting where we are.


The radar and RF collection on approach might make sense. I just don't see how a provocation test makes any sense, though. It is hardly like America is not willing to use military force.

Really, a mistake on the part of China feels like the Occam's razor explanation to me.

It is just very hard to see the risk/reward calculation for doing this intentionally, given spy networks both on the ground and in space.

