Because nearly everything about this appears to be a low-effort scam. The gmail is to add to the "authenticity" of the scammer being 16, because this way they can add a presumed birth year, whereas with a domain email it would make no sense. Proof is in the fact that the address williamcranna@gmail.com is available, which would be the first thing a reasonable person would have tried before adding the year like a dickhead.
Nah, I don't buy it. They're good enough at higher-level skills / comprehension that such enormous failures in more foundational skills make no sense.
The website advertises a product that makes use of web infrastructure significantly more complex/sophisticated than webmail tied to your domain. It's a zero-effort, zero-cost add-on for most hosting services these days.
So either this whole thing is an LLM project, and the 16-year-old just has their name and reputation as a biological human of young age propping up / hyping the LLM-based product, or it's a scam.
Bottom line: it's either unremarkable or untrustworthy, likely both.
> Nah, I don't buy it. They're good enough at higher-level skills / comprehension that such enormous failures in more foundational skills make no sense
It wouldn’t make sense for a scammer to make those mistakes either. So you’re not proving anything nefarious by pointing out those mistakes. All you’re proving is that mistakes have been made.
You might be surprised to learn this, but people make a lot of mistakes at the start of their career. And 16 is very early on.
I suspect there might have been some “vibe coding” going on, but that’s the worst criticism I’m willing to make. And even if I’m correct here, everyone has to start somewhere. I grew up with massive printed books, others on here learned from Stack Overflow. Vibe coding is only a problem if you don’t learn from it too.
I've already pointed out why a scammer might make this mistake - adding credibility to the "I'm 16" angle. Let's take the HN account, created 9 days ago. No "2008" or anything of the sort. Only in the prominently displayed email. Only to help support the lie that helps gain attention. It reeks.
It reeks of the marketing tactic along the lines of "my brother is autistic, here's my subpar contribution of 'their' work to a subreddit dedicated to artisanal skills", where the poster watches the karma roll in, only for their comment history to reveal a clear case of karma whoring via lies.
This smells the same.
Now, if I am wrong and it's just a vibe-coding thing, then the "I'm 16" part plays no role. It would be impressive if the 16-year-old did this in a responsible way, but anyone can vibe-code their way to such a product with zero real skill or effort, making the "I'm 16" point lose the context under which it would be pertinent information.
LLMs have changed our world in many ways, this being one of them. Imagine someone asks some agentic LLM framework a research question, and a series of unrelated tangents later this agentic framework solves a Nobel Prize-worthy problem. Should the human get that prize? Of course not; it would otherwise recognize skills/brightness in a human who doesn't actually possess them.
If this isn't just a scam, then this would be no different.
Last example: if someone posted, "I'm 16, here is my art portfolio" and it's all AI-generated content, would you care? Would it demand the same response as a gallery of high-quality, beautiful work, painted by hand? Of course not.
I do appreciate that your post is genuinely trying to come from a good place, but I can’t say I agree with any of it.
By that I don’t just mean the analysis of this specific submission but also your tangential points about AI. People have used tools to enable creativity for the entirety of human history. The definition of which tools are acceptable and which are cheating is a subjective one, often defined by the age of the person (i.e. what was the norm when you were in your 20s).
I’ve seen the same debates time and time again. Whether it’s Ableton vs vinyl (DJs), search engines vs directory listings (research), internet vs books (research), VSTi’s vs instruments (musicians), automatic vs manual shift (cars), Photoshop vs traditional photography effects, CGI vs animatronics (movies), I could go on and on.
Even dumb things like a central payment till in restaurants (e.g. McDonald's) were heavily criticised in the UK when they were new, because the “correct” way to serve food was via table service…or so people over a certain age believed.
Most people hate change. I know because I’m old myself and have seen enough change first hand and how people react to it. But change doesn’t make a compelling argument for why this 16 year old shouldn’t be helped in their endeavours to build an app. Nor is it proof that this individual isn’t who they claim to be.
I also appreciate where you're coming from, and it's likely we won't agree due to a difference in fundamental values (neither being better imo, just different), but I value no less the discourse and worldview-broadening that conversations like this offer. So let me preface this with the fact that I value your input and think your point is not just valid, but true if I allow myself to evaluate all this under a different value system.
That being said, even if I shift my values in this way, one problem remains - the nature of creativity.
So, under this value system, tool use is irrelevant: if it makes creativity easier, cool; that doesn't diminish it at all.
So let's evaluate this purely on a novelty / creativity angle.
Marketing: the tactic / style is taken from a youtuber they reference in their Twitter account.
Product: the idea contributes nothing novel to a saturated space of AI learning tools.
Tool use: the tools used, and the way they were used, are about as basic as it gets.
Insight: no real novel or interesting insights into tool use, or the problem being solved.
Holistic interpretation: all together, what appears novel in all this is applying a particular marketing strategy to HN, one that is usually aimed at children. This raises a few interesting questions about shifting demographics on HN, among other things, but this post is interesting in a meta way, not a direct way.
As an example, writing text like this, in a digital way, is not special anymore by anyone's reckoning - and yet if you apply this skill creatively, be it to a story, poetry, solving a novel problem, etc., then that work still has creative merit, even if the skills underlying it are no longer noteworthy. The same is true here: the skills underlying what was done are no longer noteworthy, and so we must evaluate on content alone. The content is derivative. So it stands on nothing.
You're right, I have no coding skill. But testing out Lovable and bringing my idea to reality made me realize this is something I want to learn, so I've already begun taking a course to learn how to code software of my own.
People shouldn't be "scared" of these LLMs; they're just tools that show coding to a wider audience.
That's a really positive outcome, one I am personally supportive of. Learning to code is a rewarding journey.
Now, while I am not scared of LLMs, I am scared for users who use them inappropriately.
I use LLMs extensively, and so I am intimately familiar with the dangers they pose to the uninitiated. I would HEAVILY caution against relying on LLMs until you can comfortably read and understand the code you're asking them to write.
Personally, I would recommend you first learn to code in a language of interest, then use LLMs to automate the stuff that has become second nature - the stuff you can pump out mindlessly. This takes the burden of monotonous tasks off your hands, and you have the expertise to check the LLM output for glaring issues. It's still not fully automated, but it's much faster if you can write something complex, critical, or sensitive while the LLM churns out boilerplate and routine chunks. You then come back later and proofread the LLM output.
Trusting AI code you yourself don't understand is a recipe for disaster. You claim your users' data will be private, but then have to rely on AI jank to keep this data safe, if it is even safe. It might just throw everything into publicly accessible folders. What happens when you promise safety but don't actually provide any? What happens when a user's data is then stolen? Who does the court hold accountable? You? The LLM you blindly trusted?
Appreciate it. I didn’t expect this post to get as much attention as it did; this is just a small idea I had. I didn’t think an email was necessary yet.
It might be more productive to explain how this scam works instead of dismissing my counter argument with an incorrectly used meme.
I really don’t see how a learning app for students who also frequent HN is a valuable enough demographic to target. And ironically, the red flags identified would be mistakes a scammer would know not to make.
Everything about this submission can be easily explained by inexperience, but it’s a lot harder to explain why it’s a scam.
You could apply Occam’s Razor here and reasonably say that this being an inexperienced 16 year old is the hypothesis with the fewest assumptions.
The reason I have is even less believable, lol. I made my personal email when I was 7 so I just made it that. If you don’t believe me check out my Twitter @willcranna .
Very pleased this is coming. Once a week I hold a meeting with stakeholders to show my latest artworks, and I can hear them push the print screen button. Very annoying. I am trying to get these freshly minted, but if it becomes public that somebody has screenshotted them, the value plummets.
It's worth noting that when the NSA invented DES, they took a cipher from IBM and made it more resistant (to differential cryptanalysis, a technique that at the time wasn't known outside the NSA itself).
> NSA gave Tuchman a clearance and brought him in to work jointly with the Agency on his Lucifer modification. . . . NSA tried to convince IBM to reduce the length of the key from 64 to 48 bits. Ultimately, they compromised on a 56-bit key.
> The cryptographic core of NSA's sabotage of DES was remarkably blunt: NSA simply convinced Tuchman to limit the key size to 56 bits, a glaring weakness.
> Whit Diffie and Marty Hellman wrote a paper explaining in considerable detail how to build a machine for $20 million that would break each DES key with an amortized cost of just $5000/key using mid-1970s technology. They predicted that the cost of such a brute-force attack would drop "in about 10 years time" to about $50/key, simply from chip technology improving.
> Diffie and Hellman already distributed drafts of their paper before DES was standardized. Did NSA say, oh, oops, you caught us, this isn't secure?
> Of course not. NSA claimed that, according to their own estimates, the attack was 30000 times more expensive: "instead of one day he gets something like 91 years".
The main source here is https://archive.org/details/cold_war_iii-nsa/cold_war_iii-IS..., "American Cryptology during the Cold War, 1945-1989", DOCID: 523696, REF ID: A523696, a declassified internal NSA history. A longer version of the quote above, originally classified TOP SECRET UMBRA, from p.232 (p.240/271):
> (S CCO) The decision to get involved with NBS was hardly unanimous. From the SIGINT standpoint, a competent industry standard could spread into undesirable areas, like Third World government communications, narcotics traffickers, and international terrorism targets. But NSA had only recently discovered the large-scale Soviet pilfering of information from U.S. government and defense industry telephone communications. This argued the opposite case - that, as Frank Rowlett had contended since World War II, in the long run it was more important to secure one's own communications than to exploit those of the enemy.
> (FOUO) Once that decision had been made, the debate turned to the issue of minimizing the damage. Narrowing the encryption problem to a single, influential algorithm might drive out competitors, and that would reduce the field that NSA had to be concerned about. Could a public encryption standard be made secure enough to protect against everything but a massive brute force attack, but weak enough to still permit an attack of some nature using very sophisticated (and expensive) techniques? NSA worked closely with IBM to strengthen the algorithm against all except brute force attacks and to strengthen substitution tables, called S-boxes. Conversely, NSA tried to convince IBM to reduce the length of the key from 64 to 48 bits. Ultimately, they compromised on a 56-bit key.
This may sound like a paranoid conspiracy theory, but it is the point of view of an NSA insider, writing in 01998 for an audience of NSA cryptanalysts and cryptographers to educate them on the history of cryptology during the Cold War. It is understandable that Schneier and others believed that the overall influence of the NSA on DES was to increase its security, because they did not have access to this declassified material when they formed those opinions; it wasn't declassified until July 26, 02013.
That's true, but the fact that NSA wanted to make brute force cheaper also suggests that they didn't have any particular offensive tricks up their sleeve (they had differential cryptanalysis but they used their knowledge defensively) like they did with Dual_EC_DRBG.
Yes; also, if they had had such tricks, they probably would have mentioned them in that document, perhaps in a following paragraph that was censored from the declassified version. But there seems to have been no such paragraph, further supporting your inference.
Basically anything llama.cpp (Vulkan backend) should work out of the box w/o much fuss (LM Studio, Ollama, etc).
The HIP backend can have a big prefill speed boost on some architectures (high-end RDNA3 for example). For everything else, I keep notes here: https://llm-tracker.info/howto/AMD-GPUs
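For anyone who wants to try the Vulkan route, a minimal sketch of the build-and-run steps (the cmake flag and CLI options come from llama.cpp's docs; the model path is a placeholder you'd substitute yourself):

```shell
# Build llama.cpp with the Vulkan backend enabled
# (assumes the Vulkan SDK / GPU drivers are already installed)
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# Run a GGUF model, offloading all layers to the GPU
# (-ngl 99 means "offload up to 99 layers"; adjust the model path)
./build/bin/llama-cli -m ./models/your-model.gguf -ngl 99 -p "Hello"
```

LM Studio and Ollama wrap essentially the same backends, so if the CLI works, those should too.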
Using a custom-built interception layer, I decouple session tokens from identifiable browser states, rotating my signature footprint every few requests via controlled entropy injection. “No more third-party cookies” sounds like a big shift, but it’s functionally irrelevant if your presence is already undetectable.