Hacker News | snackai's comments

What? People trust science. If they don't, they are morons. Of course, long-held truths are sometimes proven wrong or revised by new discoveries, but that does not hurt science at all; that is the process.


> People trust science.

Very large numbers do not.

> If they don't they are morons.

Perhaps, but that doesn't mean they don't exist.


There is a PR with pinboard JSON support now.


How is this AI? Snooze, Reply-to and mute emails? Wow, needs hella brain for that. If-Statements are not AI.

The title is misleading.


These days I cringe when I see 'AI powered' and am wary of products that have it in their marketing. It feels like a gimmick due to the hype around AI.


Snoozing and muting emails are added functionalities. The AI makes the following decisions: 1. Which emails are important and need a reply. 2. Which sent emails require follow-ups, and what the best time for a follow-up is. 3. Which contacts are important to you, and which ones are drifting away and need to be reconnected with.


I think it's doing a little more than that:

"Caspy’s AI learns from your email history to figure out what types of emails you send replies to. With that knowledge, it will alert you only when an email needs a reply."


But that's hardly AI. I think even Thunderbird does this: it learns that emails I click links in or reply to are important and flags them as such.
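For a sense of how little machinery this kind of "learned importance" filter needs, here is a minimal sketch: a naive Bayes classifier over sender/keyword features, trained on whether you replied to past emails. This is purely illustrative; all feature names are made up, and it is not Caspy's or Thunderbird's actual implementation.

```python
# Hypothetical sketch of a reply-likelihood filter via naive Bayes.
from collections import Counter

def train(history):
    """history: list of (set_of_features, replied_bool) pairs."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for features, replied in history:
        totals[replied] += 1
        counts[replied].update(features)
    return counts, totals

def reply_score(features, model):
    """Probability-like score that an email with these features gets a reply."""
    counts, totals = model
    score = {}
    for label in (True, False):
        # Prior times Laplace-smoothed likelihood of each feature.
        p = totals[label] / sum(totals.values())
        for f in features:
            p *= (counts[label][f] + 1) / (totals[label] + 2)
        score[label] = p
    return score[True] / (score[True] + score[False])

history = [({"from:boss", "kw:deadline"}, True),
           ({"from:newsletter"}, False),
           ({"from:boss"}, True),
           ({"from:newsletter", "kw:sale"}, False)]
model = train(history)
print(reply_score({"from:boss"}, model))        # high score
print(reply_score({"from:newsletter"}, model))  # low score
```

Whether one wants to call that learned if-statement "AI" is exactly the question this thread is arguing about.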


Who said that an email that I typically reply to is more important for me to see than one that I typically wouldn't reply to?


Most classical AI research was in fact around symbolic logic, which is basically if statements.
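To make that concrete, classic symbolic AI was largely rule chaining. A toy forward-chaining inference engine (rules and facts invented for illustration) is little more than if-statements in a loop:

```python
# Toy forward-chaining rule engine: the "if statements" of symbolic AI.
# Each rule is (set_of_premises, conclusion); all names are hypothetical.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_penguin"),
]

def infer(facts, rules):
    """Repeatedly fire rules whose premises hold until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_feathers", "lays_eggs", "cannot_fly"}, rules))
```

Expert systems of the 1970s-80s were essentially this pattern scaled up to thousands of hand-written rules.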


Never mind it being a little OT; little discussions like this make HN great, IMHO.


Even without any newly discovered backdoor, the Intel ME was always a f*ing security issue. A BACKDOOR. It is completely naive to think the NSA can't use the ME to get access to anything, but hey, it needs another Snowden for people to listen again.


Guys, cancer. It's still out there. Donate to research instead of wasting 400 bucks on a fu*king juice press. No one can justify this useless piece of crap.


Well here comes Google, shoving leaflets up your ass because they think their leaflets are special and more acceptable!


Machine Learning is just a tool belt. AI is the idea of intelligent machines being able to solve problems (and recognizing them) on their own. Therefore of course AI can utilize the tool belt that is Machine Learning.


>AI is the idea of intelligent machines being able to solve problems (and recognizing them) on their own.

From the perspective of a language descriptivist instead of prescriptivist, that strict definition of AI is already long gone. That ship has sailed.

Even Wikipedia says that colloquial use of "artificial intelligence" stands for machine learning that mimics intelligence. It's understandable that the NYT and all mainstream non-compsci publications will use "artificial intelligence" that way. One of the researchers (Stephen Weng) cited in the article also uses the term "artificial intelligence" on his LinkedIn page and in the PDF research article itself.

If and when "solving/recognizing on its own" truly becomes reality, I suspect the world will adapt and call it "True-AI" or "Lifelike-AI" or "Strong-AI". We'll find another term to distinguish it from the watered down "artificial intelligence".


The timeless problem of AI: anything that starts out as AI ends up being rebranded as not-AI once it's understood. It's the no-true-Scotsman fallacy. Machine learning is AI; it's just not human-level AI.


I'd disagree; there is more here than the no-true-Scotsman fallacy. It is equivalent to saying that any magic trick ceases to be magic once exposed as a trick. We don't have a good definition of the "I" in AI, therefore such arguments go on forever; yet somehow we know that the "I" of a human being is different from the "I" in "AI"'s statistical smoke and mirrors. My opinion is that a good definition could be established that would clear up such BS once and for all; some attempt here: http://blog.piekniewski.info/2017/04/13/ai-confuses-intellig...


You're conflating human-level intelligence with artificial intelligence, for one; AI doesn't have to be human-level to be AI. Secondly, "statistical smoke and mirrors" is the Chinese room fallacy: it doesn't matter what it is under the covers; if it behaves intelligently, then it's intelligent, regardless of whether or not it can be boiled down to some math. Your link covers that near the beginning when he discusses duck typing, but he comes to the wrong conclusion. The article should have ended there: if it behaves intelligently, then it's intelligent. That simply is the truth, and looking at the implementation is cheating, because we don't know that brains aren't doing the same thing.

What's really going on is you are branding "human intelligence" special because we don't understand its implementation and labeling everything else not intelligent because we do, for all we know the human mind itself could be nothing more than statistical smoke and mirrors. The only problem here is human ego.

A car that can drive me somewhere on its own, simply by being given a destination, is AI, no matter how it's implemented, as long as it's the computer doing the driving, operating locally with actual sensors that see the road. It doesn't have to be able to ponder its own existence to be AI.

Neural nets were an attempt to model how the brain works; they are by definition AI regardless of whether they boil down to some maths. Everything a computer does boils down eventually to some maths, that is not an escape hatch to claim something isn't AI.

Machine learning is AI. It is not AGI, but it most certainly is AI.


You are dogmatically defending the Turing test, which I think is the primary source of this confusion. The Turing test says: if it fools humans into thinking it's intelligent, it is intelligent. That is fair. But once some other humans understand the inner workings of some simple "AI" mechanism, it no longer fools humans, since they now know what adversarial questions to ask to uncover it. Therefore it consequently fails the Turing test, and we have the AI effect. This test is just a bad idea, and it impairs research (for a number of reasons stated in the post, which you prematurely dismiss).

The coffee criterion for AGI (https://en.wikipedia.org/wiki/Artificial_general_intelligenc...) is much better, since it requires the ability to creatively interact with unpredictable reality as a test for intelligence. It avoids all the philosophical bullshit and all the smoke and mirrors, since you cannot fool physics. Somehow the so-called "AI researchers" avoid robotics like fire, since there their stuff actually needs to work (not just statistically) and outrageous BS claims cannot be made.

And yes, ultimately the human brain may be smoke and mirrors. But frankly, quite sophisticated smoke and mirrors, not anywhere close to the crap that is being put forward right now.


No, I'm defending the Chinese room thought experiment. It's cheating to look at the implementation and then claim it isn't AI; you can't look at the human implementation which could very well also be based on simple math we simply haven't figured out yet. It's only fair to judge by inputs and outputs. And you are confusing AI with AGI; something does not have to have human level intelligence to be AI. The Turing test is about AGI, not AI.

Useful and relevant world-changing AI will happen long before AGI, which could very well be a pipe dream. A car that drives far better than humans is useful AI and yet would fall short of AGI; a robot that can clean my house is useful AI but could fall way short of AGI. There are vast, world-changing things to be done by AI long before AGI ever becomes a reality, and the fact that we understand how something works DOES NOT disqualify it from being AI, even if it boils down to little more than some statistical inference.

Saying it isn't AI because you understand how it works is like saying submarines can't swim; it doesn't have to work like nature to be valid, nor does it have to be like human intelligence to be intelligent, and any intelligence we build is by definition artificial intelligence. Machine learning that can diagnose better than a doctor is AI; no matter how well you understand that it's just math, it's still AI. Those who conflate AI with consciousness are the ones in error. AI does not and has not ever meant artificial self-aware consciousness; while such a thing would be AI, it would be the pinnacle of AI: AGI.


> Machine learning is AI

I disagree. Machine Learning is just super-charged linear regression.

If you build an ML system that autonomously chooses its factors and automatically adjusts to model drift (by either adjusting existing coefficients (easy) or adding/removing factors (hard)), then ML drifts into AI.
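The "adjusting existing coefficients" half of that distinction can be sketched concretely. Below, a one-feature linear model is updated online by stochastic gradient descent while the underlying relationship drifts halfway through the stream; a batch-fit regression would average the two regimes instead of tracking the change. The setup and coefficients are invented for illustration.

```python
# Illustrative sketch: an online linear model that keeps adjusting its
# coefficient as the data-generating process drifts.
import random

def sgd_step(w, x, y, lr=0.05):
    """One stochastic-gradient update of a one-feature linear model."""
    err = w * x - y
    return w - lr * err * x

random.seed(42)
w = 0.0
for t in range(4000):
    x = random.uniform(-1, 1)
    true_coef = 3.0 if t < 2000 else -1.0  # the world drifts halfway through
    w = sgd_step(w, x, true_coef * x)

print(round(w, 2))  # ≈ -1.0: the model has tracked the drift
```

The harder half, adding and removing factors on the fly, is a feature-selection problem and needs machinery beyond a single gradient step.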


> I disagree. Machine Learning is just super-charged linear regression.

Machine learning is neural nets, which started out as an approach to AI and which, yes, boil down to linear regression, but that's about as useful as saying brains are just super-super-charged linear regression. And that's the point: AI is a label that keeps getting cast off of things that started out as AI but that, once understood, people decided no longer were. You are committing the no-true-Scotsman fallacy. If it started off as AI, it's AI; that doesn't change because you understand the math it boils down to underneath.


I think you might be talking about what other people call Artificial General Intelligence (AGI)?

https://en.wikipedia.org/wiki/Artificial_general_intelligenc...


They don't charge you for the domain itself but for some part of their service. What's so hard to get about this?


Outside their app, Snapchat has close to zero visibility. No share buttons on the web, no Snap ghosts in commercials (like "add us on Facebook"), nothing. They really have to come up with something there.

When Facebook had their IPO, everyone argued about them having no revenue, but they still had user growth; when Snap started, they already had no user growth. This combined with no revenue... Wall Street does not approve!


"Outside their app snapchat as close to zero visibility. No share buttons on the web, no snap ghosts in commercials (like "add us on facebook")..."

That's exactly the point. Snapchat has absolutely no interest in associating itself with Facebook. Snapchat caters to an entirely different demographic, and being anti-Facebook is a large part of the appeal.

