Yes, it overlaps well with the market open time. But I thought Claude was good with coding... Does this mean major trading agents write code using Claude to make trading decisions? Or Claude models are relatively better than other models in non-coding trading work?
"Claude" is their chat bot product, so a peer of ChatGPT and used for everything. It by default uses their "Claude Sonet" models.
"Claude Code" is their code-writing client application, which uses "Claude Opus" models.
Fair point. Asked Gemini to suggest alternatives, and it suggested Gemini Velocity, Gemini Atom, Gemini Axiom (and more). I would have liked `Gemini Velocity`.
I like Anthropic's approach: Haiku, Sonnet, Opus. Haiku is pretty capable still and the name doesn't make me not wanna use it. But Flash is like "Flash Sale". It might still be a great model but my monkey brain associates it with "cheap" stuff.
The danger of short-form videos is that the format enables the algorithm designer to artificially maximize the reward while minimizing the effort required of the viewer. It doesn't matter whether you watch kitten videos initially. After watching casually for a month, chances are you would end up watching some addictive videos for hours with little effort. It could be an endless stream of Buddhist monks talking about suffering, if someone likes that kind of thing. It's simply designed to be addictive, with a crazy-high reward/effort ratio.
What you are obsessing over is the writer's style, not its substance. How sure are you that they outsourced the thinking to LLMs? Do you assume LLMs produce junk-level content, which contributes to human brain rot? What if their content is of higher quality, as with the game of Go? Wouldn't you rather study their writing?
Writing is thinking, so they necessarily outsourced their thinking to an LLM. As far as the quality of the writing goes, that’s a separate question, but we are nowhere close to LLMs being better, more creative, and more interesting writers than even just decent human writers. But if we were, it wouldn’t change the perversion inherent in using an LLM here.
Have you considered the case where English might not be the authors' first language? They may have written a draft in their mother tongue and merely translated it using LLMs. Its style may not be to many people's liking, but this is a technical manuscript, and I would think the novelty of the ideas is what matters here, more than the novelty of the prose.
I agree with the "writing is thinking" part, but I think most would agree LLM-output is at least "eloquent", and that native speakers can benefit from reformulation.
This is _not_ to say that I'd suggest LLMs should be used to write papers.
> What you are obsessing over is the writer's style, not its substance
They aren’t; they are boring stylistic tics that suggest the writer did not write the sentence.
Writing is both a process and an output. It’s a way of processing your thoughts and forming an argument. When you don’t do any of that and get an AI to create the output without the process, it’s obvious.
Are you saying the human brain is similarly vulnerable to well-crafted fakes? Does it mean any intelligence (human or non-human) needs a large amount of generally factual data to discern facts from fakes, which is an argument toward AIs that can accumulate huge swaths of factual data?
I feel like you're trying to twist my words into something they don't resemble at all.
I'm not saying anything is vulnerable to anything. I am saying both humans and AI cannot simply make most facts up - they need to go out into the world and find a trusted source of information to learn them.
It is an argument neither for nor against the idea that something you want to call "AI" could accumulate huge swaths of factual data; it is merely an argument that you cannot "bootstrap" huge swaths of factual data from nothing, the same way you cannot literally pull yourself up by your bootstraps. If you want the information, you have to collect it from the environment.
Agreed in principle, but has anyone seen any practical difference between these DNS services? What would be a more detailed downside for using these in parallel instead of the ISP default as a fallback?
Some of them are so privacy-preserving they block sending your own location to the original DNS server, which makes anycast not work, so you get slower connections to the site.
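The location hint being discussed here is the EDNS Client Subnet option (RFC 7871), which resolvers can attach to (or strip from) upstream queries. As a rough illustration, this sketch hand-encodes that option's wire format in pure Python; the subnet value is just an example, and real resolvers build this inside a full DNS message.

```python
import struct

def ecs_option(subnet_octets, prefix_len):
    """Encode an EDNS Client Subnet option (RFC 7871, IPv4 only).

    A resolver that forwards this tells the authoritative server
    roughly where the client is; a privacy-preserving resolver
    omits it (or sends prefix_len 0), at the cost of geo-routing.
    """
    # ADDRESS is truncated to the bytes covered by the prefix.
    addr_bytes = (prefix_len + 7) // 8
    address = bytes(subnet_octets[:addr_bytes])
    # FAMILY=1 (IPv4), SOURCE PREFIX-LENGTH, SCOPE PREFIX-LENGTH=0
    data = struct.pack("!HBB", 1, prefix_len, 0) + address
    # OPTION-CODE 8 = edns-client-subnet, then OPTION-LENGTH
    return struct.pack("!HH", 8, len(data)) + data

# Example: advertise the documentation subnet 203.0.113.0/24
opt = ecs_option([203, 0, 113, 0], 24)
```

Only the /24 prefix (three address bytes) goes on the wire, so even resolvers that do send ECS reveal a neighborhood, not an exact IP.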
Wikipedia article about Consciousness opens with an interesting line: "Defining consciousness is challenging; about forty meanings are attributed to the term."
Perhaps "consciousness" is just a poor term to use in a scientific discussion.
In Portuguese, they indicate that a syllable is stressed, and alternate ways to say the vowels. E.g. "país" is stressed on the "i" and means "country", while "pais" is stressed on the "a" and means "parents". The tilde (~) indicates that the vowel is nasal, e.g. the "ã" in "São Paulo" means that it sounds like the "u" in "sun"; the default sound of "a" in Portuguese is the same as in "car".
Because you know the stressed syllable just by looking at the word. Take "desert" and "dessert": do we say DES-ert or des-ERT? Also, in Portuguese at least, I can tell which "e" sound [1] each "e" in a word makes by knowing this (well, almost, not completely, but much better than in English).