Hacker News | doctoboggan's comments

It will be interesting to see if the other major providers follow suit, or if those in the know just learn to go to Google or Anthropic for medical or legal advice.

I suspect this is an area where a bit of clever prompting will now prove fruitful. The system commands in the prompt will probably be "leaked" soon, which should give you good avenues to explore.

Clever is one thing, but sometimes just clear prompting ("I want to be better informed about what kinds of topics or questions to raise with my doctor or other professional") can go a long way.

Being clear that not all lawyers or doctors (in this example) are experts in every area of law or medicine, and knowing what to learn about and ask about, is usually a helpful approach.

While professionals have bodies for their standards and ethics, like most things these can represent a form of income and, depending on the jurisdiction, profitability.


It's these workarounds that inevitably end up with someone hurt and someone else blaming the LLM.

I see it as a blessing: privacy advocates have previously argued that yes, these invasive tools might currently help an honest government do its job of stopping bad guys, but the tools could eventually fall into the hands of a not-so-honest government. Now you don't really need much of an imagination to see what happens when the tools fall into the wrong hands. Hopefully more of the citizenry can get behind the idea of privacy as a fundamental right, and not just something for those who have something to hide.

> Precisely, why more as a manifold than as a square

In a double pendulum, each arm can freely rotate (there is no stopping point). This means 0 degrees and 360 degrees are the same point, so the edges of the square are actually joined. If you join the left and right edges to each other, then join the top and bottom edges to each other, you end up with a torus.
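A minimal sketch of that identification (my own illustration, not from the comment): if you measure distance between angles so that 0 and 360 degrees count as the same point, points just either side of the "seam" come out close together, which is exactly the gluing that turns the square into a torus.

```python
import math

def angle_dist(a, b):
    """Shortest distance between two angles (radians) on a circle,
    treating 0 and 2*pi as the same point."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

# 0 and 2*pi are identified, so their distance is zero
assert angle_dist(0.0, 2 * math.pi) < 1e-12

# points just either side of the seam are near each other,
# even though they sit at opposite edges of the square
assert angle_dist(0.1, 2 * math.pi - 0.1) < 0.21
```

Doing this for both pendulum angles independently is what joins both pairs of opposite edges, giving a torus rather than a square.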


Many (most?) grid-scale PV plants use at least single-axis tracking. Sure, adding more panels could also increase output, but these plants are usually completely covered with panels and there is no more space to add more.

Think of regen braking as a way to save an expendable part (brake pads). Instead of dumping your kinetic energy into heating up some brake pads, you can dump your kinetic energy back into the battery. That it happens to recharge the pack (likely only by a few percent) is just a bonus.
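A rough back-of-the-envelope check on the "few percent" claim (all figures below are assumed for illustration, not from the comment: a ~2000 kg EV, ~60 mph, a 75 kWh pack, and perfect recovery, which real regen won't achieve):

```python
# Assumed figures for a rough estimate of one full stop's recoverable energy
mass_kg = 2000.0       # assumed mid-size EV mass
speed_ms = 27.0        # roughly 60 mph in m/s
battery_kwh = 75.0     # assumed pack capacity

# Kinetic energy E = 1/2 * m * v^2, converted from joules to kWh
kinetic_j = 0.5 * mass_kg * speed_ms ** 2
kinetic_kwh = kinetic_j / 3.6e6

print(f"One stop from ~60 mph: {kinetic_kwh:.2f} kWh, "
      f"{100 * kinetic_kwh / battery_kwh:.2f}% of the pack")
```

Even the ideal case is a fraction of a percent per stop; it only adds up to a few percent over many stops, which is why the pad savings are the main win.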


I don't think this is true; class 2 e-bikes are allowed a thumb throttle up to 20 mph.


This is a Class 3, so it needs pedals to hit 28mph. It also has a throttle up to 20mph for class 2 operation.


That's not how the regulation works. You can't have a multi-class bicycle. It can either have a throttle, or it can exceed 20, but not both.


Apparently California requires a label stating what class the bike is[1]. I have never seen such a label on an e-bike, and I've seen a lot of e-bikes in CA.

1: https://leginfo.legislature.ca.gov/faces/codes_displaySectio... (312.5.c)


There are just a lot of illegal Chinese electric motorcycles on the roads in California. That doesn't make them e-bikes though.


I stand corrected. I did not know that class 2 allows for throttle-only operation.


Just as an FYI, I would not use LLMs to generate content like this. When I get the whiff of LLM generated content it gets hard for me to read and I usually disengage.

Try writing your own content and see if you are able to get more engagement.


The very first sentence:

> Welcome to EdgeAI for Beginners – your comprehensive...

Em dash and the word "comprehensive", nearly 100% proof the document was written by AI.

I use AI daily for my job, so I am not against its use, but recently if I detect some prose is written by AI it's hard for me to finish it. The written word is supposed to be a window into someone's thoughts, and it feels almost like a broken social contract to substitute an AI's "thoughts" here instead.

AI generated prose should be labeled as such, it's the decent thing to do.


Or just by somebody that knows how to use English punctuation properly.

Is it so hard to believe that there are some people in the world capable of hitting option + “-“ on their keyboard (or simply let their editor do it for them)?


I said em dash _and_ the word comprehensive. If you work with LLM generated text enough it gets very easy to see the telltale signs. The emojis at the start of each row in the table are also a dead giveaway.

I am guessing you are one of those people who used em dashes before LLMs came out and are now bitter they are an indicator of LLMs. If that's the case, I am sorry for the situation you find yourself in.


Yes, it’s become a tired trope of a particular kind of LLM luddite to me.

Especially given that there are so many linguistic tics one could pick on instead! “Not x, but y”, the bullseye emoji etc., but instead they get hung up on a typographic character actually widely used, presumably because they assume it only occurs on professionals’ keyboards and nobody would take enough care to use it in casual contexts.


If it makes a difference: it's an en dash used in the readme.

I've been wondering why LLMs seem to prefer the em dash over en dash as I feel like en (or hyphen) is used more frequently in modern text.


In my experience the em dash is still correctly used, the modern style has just evolved to put a space around it.

So:

* fragment a—fragment b (em dash, no space) = traditional

* fragment a — fragment B (em dash with spaces) = modern

* fragment a -- fragment b (two hyphens) = acceptable sub when you can’t get a proper em to render

But en-dashes are for numeric ranges…


em dash plus spaces is quite rare in English style guides. It’s usually either an em dash and no spaces or an en dash with them.


Apologies, now that I've been Baader-Meinhof'd, I've realized you're correct and I've been misreading en dashes and em dashes. Thank you!


> The emojis at the start of each row in the table are also a dead giveaway.

What's up with the green checks, red Xs, rockets, and other stupid emoji in AI slop? Is it an artifact from the cheapest place to do RLHF?


It's the LinkedIn post style, AFAIK. The LI algorithm used to push such posts to the top, so my leap of thought is that somebody at MS decided that top LinkedIn posts are the go-to structure for "good text".

I have no proof, sorry.


Imagine if we spent a trillion dollars to turn the internet into infinite degraded copies of LinkedIn. Business influencer spam generated by robots for other robots, with occasional corrections by the cheapest English speakers in the world. That's dark.


It's not an em-dash, it's an en-dash, which is rare in LLM output. Also just stop being insufferable.


You forget that MS Word loves to substitute things like em dashes in where you don't want them. The "auto correct" to those directional quotation marks that every compiler barfs on used to be a real peeve when I was forced to use MS junk.


> AI generated prose should be labeled as such, it's the decent thing to do.

The decent thing to do is to prefix the slop with the prompt, so humans don't waste their time reading it.


Doesn't a Word document essentially convert dashes to em dashes?


I don’t really care if it was.

It’s also documentation for an AI product, so I’d kinda expect them to be eating their own dogfood here.


He didn't make a statement about whether there is an AI bubble, he just said we wouldn't know if we were in one:

> "It is very hard to time a bubble," Prof Admati told me. "And you can't say with certainty you were in one until after the bubble has burst."

This statement is very true. Even if we are in a bubble you should not make the mistake of trying to time it properly.

For example, look at Nvidia during the last cryptocurrency hype cycle. If you had predicted that was a bubble and tried shorting their stock, you would have lost, since it didn't drop at all: they successfully jumped from crypto to AI and continued their rise.

I am not saying crypto wasn't a bubble, and I am not saying AI isn't a bubble; I am saying it would be a mistake to try to time it. Just VT and chill.


He gave an innocuous response.

Which prompted the question of whether there would be strong disincentive for any Stanford business school professor to give a non-innocuous response that would be a wet blanket on the AI ambitions surrounding them.

