
Looking forward to reading corroborating essays from other non-human species.


"Don't be snarky."

"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html


Let me rescue it, then. It's a valid point that other species don't have literature. It shows that they don't have ideas. If they did, it would be really obviously evident. Instead we have to look hard for traces of memetic transmission of idea-like behaviours involving sticks and leaves, and rocks and shells, and calls and signs. These memes don't go anywhere, and the animals aren't creative. If they develop, it's by accident.


> It shows that they don't have ideas.

That is false. If you gave a whale five digits and an opposable thumb and had it live on land, you'd strongly reconsider that. Even without this, it doesn't take very long when studying animals to see that they have a plethora of ideas. Orcas demonstrate strong examples of this all the time.

And how can you possibly claim that you know any animal's internal dialogue?

> If they develop, it's by accident.

Human evolution is no different.


Apes do have opposable thumbs. They still don’t really engage in any intellectual activity that we can recognize beyond basic communication. They probably have an internal dialogue, but their curiosity and capacity for communication stops at immediate needs like hunger and danger.


> Apes do have opposable thumbs.

Apes are also not whales.

> that we can recognize

And there we go. That's an us problem and not a them problem.

> but their curiosity and capacity for communication stops at immediate needs like hunger and danger.

There are several interviews with members of native tribes who still practice hunting and gathering, and those are exactly the things they worry about. Those humans are identical to us. But by your argument, "civilized" humans are more exceptional than these groups of humans?

Humans still have these basic needs and worries and thoughts. Just because we layer meta-societal pieces on top of that doesn't make them go away.

What makes humans different is technology. That does not make us different in an inherently exceptional way.


>What makes humans different is technology.

Partly, but that's a side effect. What makes us different are the mental faculties that give rise to technology (and many other fields).


That is why I gave the whale counterexample in the first place. If you placed humans in the ocean with magic to let them survive, you would not get technology. If you placed whales on land with dexterous hands, you would very likely get technology.

Our mental faculties are not wholly unique. Look at an orca brain vs a human brain and ask who the smooth brain is, even ignoring the size.


That's better, yes! Although it makes me want to cite this other guideline:

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize."

- because I think the article already addresses this "other species don't have literature" argument, though it doesn't talk about literature specifically.


But the article, or the book it's promoting, isn't making any very strong point. I checked. It's saying that animals have great senses, and some of them can see Saturn's rings on a clear night. But they don't know or care that they're seeing Saturn's rings, and they don't have telescopes anyway, and we do and can see the rings much better if we want to, because we do want to, because we think about the things that we can see. So, I don't know, maybe there's nothing to talk about here except sidetracks.


I think that was the strongest plausible interpretation of the article from my point of view as well.

I'm not sure if you read the article, but if you did, what would you say is the strongest argument that should be discussed?

The author literally argues that humans are not exceptional because some animals can do things better than us.


The strongest plausible interpretation isn't "humans are not exceptional". Every species is exceptional by definition, so that's a weak and easily dismissed claim. This critique is not so interesting.

What's meant by "human exceptionalism" is something more like "humans' longstanding habit of regarding ourselves as the apex of a strict hierarchy of species, a worldview which has had profound consequences for ourselves and others". That is a complex thing worth exploring, and what the work in the article is about. A critique from that level would be more interesting. But to do this, one would have to take in a larger working set of information.

Comments that engage with only the title of an article or the tip of its iceberg tend to be rather boring, and also reflexive/indignant. On HN, a good comment is reflective rather than reflexive [1], and engages with specifics rather than just being a generic reaction to a generic claim (like "humans are/aren't exceptional") [2].

One way to "engage with specifics" is to dig beneath the top of the abstraction heap (i.e. the title or top-level claim) until you hit a layer of substance of the relevant work or argument. In this case that's pretty easy to do: there are two paragraphs which, in their first sentences, get more specific:

  * when we assess other animals, we use human beings as the baseline

  * our tests of the abilities of nonhuman animals [...] study them under highly artificial conditions

One can disagree or debate the significance, but a response on this level is likely to be less reflexive and therefore more interesting.

To me the noteworthy thing in this HN thread is how rapid the reflex is to wholly dismiss the article (and the research it's about) and also how shallow that reflex is—how little information is processed before doing the dismissal. Strong emotional conditioning means little information can be tolerated before a reaction needs expressing. This thread is such a clear case of that, that it points to how deeply what is called "human exceptionalism" lives in us.

Edit: actually, I was describing what I saw in the thread last night. Having looked it over again, there are at least some more substantive subthreads. That's good, and it's also common for those to take longer to appear, as described at [1] and [3].

[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...

[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

[3] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


Valuing other species based on human traits is misleading. Literature doesn't mean anything to animals; it's not applicable, the same way the ability to glow isn't applicable to humans but is for bacteria. Ideas are a human-only trait. Trying to argue that animals don't have ideas, and that they are therefore worse, is like saying that humans are worse than dolphins because humans can't breathe under water.

If you value animals based on human traits, humans will always be better, because you are picking your own good traits, which other species don't have. But that's not the point. Animals have animal traits. For example, a low factor of self-extinction is something we should be learning from animals. Acceptance of death. Limiting the use of our own resources. Taking these aspects into consideration makes humans a stupid race that destroys the environment it lives in.


The article's premise is "some animals are better at specific things than humans, therefore humans are not exceptional", or stated differently, "humans are only exceptional if they're the best at literally everything".

It seems obvious to me that this is a fairly useless definition of "exceptional" that would not be accepted in any context other than an ideological one.

Yes, HN is better without shallow dismissals. Perhaps we should extend that idea to shallow articles as well.


That is by no means the strongest plausible interpretation of the article, so I think you're running into the same problem I mentioned here: https://news.ycombinator.com/item?id=44958944 and in the longer reply: https://news.ycombinator.com/item?id=44966242.


I think what I said is actually the strongest interpretation of the article's claim, just with the author's word games stripped away. You called out two claims from the article as being worthy of deeper thought, so I'll address both:

> when we assess other animals, we use human beings as the baseline

Let's use a different baseline then, let's say the visual acuity of birds of prey or the longevity of sea tortoises. Those animals win against humans in their respective categories. Use every animal as a baseline against which to compare every other animal and add up all the "wins" across all of those, and you will find that humans win in far more categories and to a much greater degree than any other single animal. This claim is just a convoluted way of saying what I said in my last comment. The language gives it an academic veneer, but that does not make it a profound claim.

> our tests of the abilities of nonhuman animals [...] study them under highly artificial conditions

This is the actual quote with a bit more context: "We study them under highly artificial conditions, in which they are often miserable, stressed, and suffering. Try caging human beings and seeing how well they perform on cognitive tests."

Does anyone honestly believe that a stressed out human would perform worse on a cognitive test than a perfectly content chimpanzee? It's a fair point that animals are often not "in their element" when we study them, but the idea that this accounts for the vast gap in intelligence and creativity between them and humans is laughable. Is the author claiming that animals behave with a sophistication whose utility rivals the utility of human behaviors, but conveniently only when we're not watching them? I'm pretty sure there's a Far Side comic about this.

On a meta note, you talk about how a lot of commenters dismissed the article by only engaging with the title. I would suggest that you did not engage with what those commenters were actually saying--they did engage with the article, but the article had no substance. It was you who reflexively dismissed the commenters, because you're sympathetic to the article's worldview.


One man's snarkiness is another man's critical and teaching comment.


Yes, up to a point, but that's one of those arguments that proves too much. If you take it literally, there's no difference in discussion quality and therefore no point in having guidelines at all.


"There was one surprise when I revisited costs: OpenAI charges an unusually low $0.0001 / 1M tokens for batch inference on their latest embedding model. Even conservatively assuming I had 1 billion crawled pages, each with 1K tokens (abnormally long), it would only cost $100 to generate embeddings for all of them. By comparison, running my own inference, even with cheap Runpod spot GPUs, would cost on the order of 100× more expensive, to say nothing of other APIs."

I wonder if OpenAI uses this as a honeypot to get domain-specific source data into its training corpus that it might otherwise not have access to.


> OpenAI charges an unusually low $0.0001 / 1M tokens for batch inference on their latest embedding model.

Is this the drug dealer scheme? Get you hooked, then jack up prices later? After all, the alternative would be regenerating all your embeddings, no?


I don't think OpenAI trains on data processed via the API, unless there's an exception specifically for this.


Maybe I misunderstand, but I'm pretty sure they offer an option for cheaper API costs (or maybe it's credits?) if you allow them to train on your API requests.

To your point, pretty sure it's off by default, though

Edit: From https://platform.openai.com/settings/organization/data-contr...

Share inputs and outputs with OpenAI

"Turn on sharing with OpenAI for inputs and outputs from your organization to help us develop and improve our services, including for improving and training our models. Only traffic sent after turning this setting on will be shared. You can change your settings at any time to disable sharing inputs and outputs."

And I am 'enrolled for complimentary daily tokens.'


I'd not rule out some approach where, instead of training directly on the data, maybe they would train on a very high-dimensional embedding of it (or some other similarly "anonymized", yet still semantically rich, representation of the data).


Can you truly trust them though?


Yes, it would be disastrous for OpenAI if it got out they are training on B2B data despite saying they don’t.


We're both talking about the company whose entire business model is built on top of large scale copyright infringement, right?


Not the same when the people you infringe on can sue you into the dirt


Have they said they don't? (actually curious)


Yes, they have. [1]

> Your data is your data. As of March 1, 2023, data sent to the OpenAI API is not used to train or improve OpenAI models (unless you explicitly opt in to share data with us).

[1]: https://platform.openai.com/docs/guides/your-data


Yeah, so many companies have been completely ruined after similar PR disasters /s


Their terms of service say they won’t use the data for training, so it wouldn’t just be a PR disaster; it’d be a breach of contract. They’d be sued into oblivion.


I am too lazy to ask OpenAI.


It'd be a way to put crap or poisoned data into their training data if that is the case. I wouldn't.


How are you guys reaching users with such a technical value proposition? Cold emailing engineers first and then expanding the conversation from there?


To be frank, that's the challenge we're working on now. To start, my partner and I were mostly tapping into our network for initial customers. Both of us have been in the Bay Area for 10+ years.

Now, we're working on more of an inbound strategy and have a bunch of ideas. We're debating cold emailing, but as an engineer myself, I hate getting cold emails.


This is a really great explanation.


IMO if you think you can sell to users within the niche, you can publish a blog post of benchmarks and that'll serve as strong technical marketing for your niche.

It also keeps open the option to sell to an incumbent (possibly helps maximize the value of that option as well).


Missing the California high-speed rail on their list of examples.


When do you think fine tuning is worth it over prompt engineering a base model?

I imagine with the finetunes you have to worry about self-hosting, model utilization, and then also retraining the model as new base models come out. I'm curious under what circumstances you've found that the benefits outweigh the downsides.


For self-hosting, there are a few companies that offer per-token pricing for LoRA finetunes (LoRAs are basically efficient-to-train, efficient-to-host finetunes) of certain base models:

- (shameless plug) My company, Synthetic (https://synthetic.new), supports LoRAs for Llama 3.1 8b and 70b. All you need to do is give us the Hugging Face repo and we take care of the rest. If you want other people to try your model, we charge usage to them rather than to you. (We can also host full finetunes of anything vLLM supports, although we charge by GPU-minute for full finetunes rather than the cheaper per-token pricing for supported base model LoRAs.)

- Together.ai supports a slightly wider range of base models than we do, with a bit more config required, and any usage is charged to you.

- Fireworks does the same as Together, although they quantize the models more heavily (FP4 for the higher-end models). However, they support Llama 4, which is pretty nice although fairly resource-intensive to train.

If you have reasonably good data for your task, and your task is relatively "narrow" (i.e. find a specific kind of bug, rather than general-purpose coding; extract a specific kind of data from legal documents rather than general-purpose reasoning about social and legal matters; etc), finetunes of even a very small model like an 8b will typically outperform — by a pretty wide margin — even very large SOTA models while being a lot cheaper to run. For example, if you find yourself hand-coding heuristics to fix some problem you're seeing with an LLM's responses, it's probably more robust to just train a small model finetune on the data and have the finetuned model fix the issues rather than writing hardcoded heuristics. On the other hand, no amount of finetuning will make an 8b model a better general-purpose coding agent than Claude 4 Sonnet.
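
If it helps to make that concrete, here's roughly what a narrow-task LoRA finetune looks like with Hugging Face peft + trl. This is a minimal sketch, not a recipe: the base model, dataset file, and hyperparameters are placeholders, and the exact trl API shifts a bit between versions:

    # Minimal LoRA finetune sketch (placeholders throughout).
    from datasets import load_dataset
    from peft import LoraConfig
    from trl import SFTConfig, SFTTrainer

    # Narrow-task examples, e.g. one {"text": "<prompt + ideal completion>"} per line.
    dataset = load_dataset("json", data_files="narrow_task.jsonl", split="train")

    # LoRA trains small low-rank adapter matrices instead of all the weights,
    # which is what makes cheap per-token hosting of many finetunes possible.
    peft_config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    )

    trainer = SFTTrainer(
        model="meta-llama/Llama-3.1-8B-Instruct",  # assumed base model
        train_dataset=dataset,
        peft_config=peft_config,
        args=SFTConfig(output_dir="lora-out", num_train_epochs=3),
    )
    trainer.train()
    trainer.save_model("lora-out")  # saves just the adapter weights

The artifact you get is just the adapter (typically tens of MB), which is why hosts can serve many finetunes on top of a single copy of the base model.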


Do you maybe know if there is a company in the EU that hosts models (DeepSeek, Qwen3, Kimi)?


Most inference companies (Synthetic included) host in a mix of the U.S. and EU — I don't know of any that promise EU-only hosting, though. Even Mistral doesn't promise EU-only AFAIK, despite being a French company. I think at that point you're probably looking at on-prem hosting, or buying a maxed-out Mac Studio and running the big models quantized to Q4 (although even that couldn't run Kimi: you might be able to get it working over ethernet with two Mac Studios, but the tokens/sec will be pretty rough).


When prompt engineering isn't giving you reliable results.


Only for narrow applications where your finetune lets you use a smaller model locally, specialised and trained for your specific use case.


Finetuning rarely makes sense unless you are an enterprise, and even then it generally doesn't in most cases.


Automating applying to jobs makes sense to me, but what sorts of things were you hoping to use Operator on Amazon for?


Finding, comparing, and ordering products -- I'd ask it to find 5 options on Amazon and create a structured table comparing key features I care about along with price. Then ask it to order one of them.


A bit of a minor detail, but this piqued my interest "DOM-aware browser integration" - could you say a little more?


You can select an element by clicking on it, and it knows where in the React code it came from.


Not affiliated, but someone's already working on that for you: https://www.realavatar.ai/

