Stunningly beautiful landing page. I would never normally comment on the aesthetics of anything in the dev sphere but that completely blew me away. I'll preorder for sure.
I'd echo the other comment mentioning that a coffee-table version of this would be great.
Agreed, it's aesthetically beautiful. It should be a coffee table book. But for the web, it has terrible usability. Really, really terrible in multiple ways. My comments will be harsh, but since the creator is obviously very skilled, he should know better.
Why multicolumn text? So it looks like an old printed manual? At first glance, it's not clear where the first column ends. This is not something we see on the web (because there's no need for it), so it's not obvious that the content flows from one column to the next. When the viewport is sized to two columns, I have to scroll down to finish the first column, then scroll back up to find where it continues in the second column.
Justified text is bad on the web. We're starting to get some better features to make it usable, but they're not widely supported, so ragged-right text is still more readable.
There are numerous animations that never stop. This is highly distracting and makes it very difficult to read the text.
I'm sure there are more issues but the site is so unusable for me, I won't continue trying.
So, yeah. It's gorgeous design. I love it. But it's for the sake of aesthetics, not for the user's sake. It's completely unusable to me. Since this is the first installment, I hope the designer will keep the aesthetics but improve the usability in future installments.
Because in print you would typically use your publishing software to adjust various things, like where hyphenated word breaks should happen. This is much trickier in digital media, and usually just isn't done, resulting in ugly word spacing.
> Justified text is bad on the web. ...so right ragged text is always more readable.
I disagree. I like justified text on the web as well as in print. To me, a ragged right-hand edge of the text column is more disturbing than uneven spaces between words. So you cannot universally declare that justified text is an accessibility issue.
I believe this will be fixed this year: smart hyphenation is coming natively to CSS. It was always possible using the JS hyphenator lib.
They still end up doing that, regardless of the intention. It would be better if they animated only once, or only when you hover over them, instead of constantly.
I agree that they are beautiful and detailed, clearly illustrating the point. I really, really love them. I'd love to have them on my wall.
That's not my problem. My problem is that they never stop animating. For me and many other people, when something is moving in our visual field, it is very, very difficult to read the text next to it.
Full disclosure: I'm autistic. I was wondering whether I should mention that. All the issues that I mentioned exclude me from using this resource. So maybe we could call these accessibility issues instead of usability issues. When I disclose that I'm autistic, it tends to evoke two types of responses:
1) Oh, sorry, we'll make it accessible. But they do it out of shame, which I don't like. I'd rather it be out of empathy.
2) You're too small of a segment to care about.
But I'm beginning to think that the only difference between usability and accessibility is the size of the population that's being excluded by the design. I chose to keep my autisticness separate to see how people responded when I presented this as a usability issue instead of an accessibility issue.
I'm only asking that designers have empathy for all possible users of their media. That's all. That's what good design is supposed to do.
Probably got a lot of neurospicy folks here on Hackernews, so there's less of a stigma associated with it, and more people familiar with the kind of sensory issues you deal with.
I feel you. I can't have autocomplete on when I'm coding, partly because having video game stuff happening in my field of view while I'm trying to focus throws me off. I'd rather just remember the name.
There has to be a middle ground though. For instance, I am definitely the opposite: the site would feel less usable to me if each animation only ran once and I had to reload the page or spam-click replay to see it again. I'd imagine slow readers or people with dyslexia would feel the same, and could make an argument similar to yours: that auto-playing the animation once, assuming everyone reads at the same speed while it's in focus, fails to take their cohort into consideration because the group is small. I'm sure there are some colorblindness conditions that would find the site's colors difficult to distinguish as well. I would be more empathetic to your viewpoint if this were a school/employment/healthcare/government document, but it isn't realistic to expect a primarily artistic/design project to cater to everyone's specific accessibility concerns.
I hope you didn’t think I meant to imply that this is only bad for autistic people. I know that many other people have the same issues. But when I mention these problems to people that don’t have difficulty, sometimes they assume it’s just me and a tiny minority.
Sorry - couldn't find a security.txt, but thought I should alert you that your API is currently vulnerable to prompt injection on at least a few of the fields in the structured JSON you send when analysing the game.
Happy to give more details if there's any way to get in touch outside this thread.
I don’t think I went that hard though? I was just pointing out the discrepancy between what they said and what they meant. Not everyone might know that the marketplace doesn’t need your permission to remove your extensions.
Probably the best on the civil liberties front are the Liberal Democrats (they were pretty good at quashing mandatory national ID cards back in the day, at least).
That being said, they still have a lot of folk angry at them for allowing university fees to be introduced 15 years ago when they were in coalition government (a Tory policy!).
I'm someone who was perfectly able to use Anki and learn Chinese to a decent level with fairly intense combined-type untreated (at the time) ADHD. What you need is a compelling reason to learn (for me, it was the fear of letting down my wife by not being able to talk to her family).
Ignore them. There's a belief system that sees a lack of success in anything that one wants to do as illness or witchcraft. For them, a person doesn't lack motivation to practice something because that thing is difficult and exhausting and one can't always see a sufficient reward (or chance of gaining that reward in a reasonable amount of time) in the end. For them, it's always going to be an illness or a curse that is stealing away your ability to achieve your true desires.
They might be recommending pharma here, but it would be prayer on another forum, or more protein intake on a third.
> Another came with sad eyes and said to him: "I don't know what my sickness is."
"I know," Baudolino said. "You are slothful."
"How can I be cured?"
"Sloth appears the first time when you notice the slowness of the movement of the sun."
"And then—?"
"Never look at the sun."
Is this true in a marginal-cost sense? I was under the impression that most of the environmental impact occurred during the training stage, and that it was significantly less costly post-training.
You could argue that this is no longer the case once the model is done; the cost per request will go down over time, as the set amount of power and coolant pumped through data centres gets divided over more people.
However, AI companies can't afford to stand still. They have to keep training or they risk being made irrelevant by whatever AI company comes next.
Furthermore, a non-trivial amount of energy and cooling is being used for generating responses as well. It's plainly obvious, when you run even very modest AI models at home, how much power these things take.
The paper[1] mentions the statistics used to calculate these numbers. It has a separate column for inference, with numbers ranging from 10mL to 50mL of water per inference depending on the data centre sampled.
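Just to put those per-inference figures on a rough daily scale, here's a quick sketch; the queries-per-day count is my own illustrative assumption, not a number from the paper:

    # Rough daily scale of the per-inference water figures above.
    # queries_per_day is an assumed illustration, not a number from the paper.
    ml_per_inference_low, ml_per_inference_high = 10, 50   # mL per inference, per the paper
    queries_per_day = 1_000                                # assumed heavy-user figure

    litres_low = ml_per_inference_low * queries_per_day / 1000    # 10 L/day
    litres_high = ml_per_inference_high * queries_per_day / 1000  # 50 L/day
    print(f"{litres_low:.0f}-{litres_high:.0f} litres of water per heavy user per day")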
The numbers seem bad, but the authors also call out that more transparency is needed. With all the bad rep out there from independent estimations and no AI companies giving detailed environmental impact data, I have to assume the real cost is worse than estimated, or companies would've tried to greenwash themselves already.
> It's plainly obvious when you run even the very modest AI models at home how much power these things take.
Really good point to put this into perspective. I tried models locally and my GPU was running red hot. Granted, server boards like the H100 are more optimized for AI workloads, so they run more efficiently than consumer GPUs, but I don't believe they are more than an order of magnitude more efficient.
Another corollary is that AI companies don’t train one model at a time. Typical engineers will have maybe 5-10 models training at once. Large hyperparameter grid searches might have hundreds or thousands. Most of these will turn out to be duds. Only one model gets released, and that one’s energy efficiency is what’s reported.
Llama 405B takes OOM a kilowatt-minute to respond on our local GPU server, or about 10 grams of CO2 per email. Last I checked, add another 20 grams of amortized manufacturing emissions. A typical commute is OOM 5-10 kg of CO2.
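A minimal sketch of that arithmetic, assuming a grid carbon intensity of roughly 500 gCO2/kWh (that intensity figure is my assumption, not from the comment):

    # Back-of-envelope for the figures above; grid carbon intensity is an
    # assumed ~500 gCO2/kWh, not a number from the comment.
    power_kw = 1.0             # rough server draw while generating
    minutes_per_reply = 1.0    # "OOM a kilowatt-minute"
    grid_gco2_per_kwh = 500    # assumed grid carbon intensity

    energy_kwh = power_kw * minutes_per_reply / 60    # ~0.017 kWh
    reply_gco2 = energy_kwh * grid_gco2_per_kwh       # ~8 g, OOM 10 g
    total_gco2 = reply_gco2 + 20                      # plus amortized manufacturing
    commute_gco2 = 7_500                              # midpoint of 5-10 kg

    print(f"~{total_gco2:.0f} g CO2 per reply; one commute ≈ {commute_gco2 / total_gco2:.0f} replies")

On those assumptions, one commute buys you a couple of hundred LLM-written emails.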
this article is alarmist bullshit. (for entirely unrelated reasons openai delenda est)
A thousand times? I’d have a hard time typing out that many queries in 8 hours. Even 100 seems like a stretch for someone who uses it within something like cursor.
More and more environments offer LLM aid without you explicitly typing in a query, e.g. triggering inference whenever static analysis fails (e.g. on a compile error), or triggering an LLM-aided auto-complete with Ctrl-Space. I don't think it'll be particularly unusual to reach 1,000 queries in a working day that way.
Coding models these days run an inference every time you stop typing. Let’s say it’s 0.1 inferences per keystroke. If you keep VSCode open all day, I could believe it’s a significant energy draw.
Google now uses several inferences per Google search.
The average user’s #inferences-per-day is going to skyrocket.
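Taking the 0.1-inferences-per-keystroke estimate above, a hedged sketch of how fast this adds up; the keystrokes-per-day and searches-per-day figures are my assumed round numbers:

    # Rough count of background inferences in a working day.
    # keystrokes_per_day and searches_per_day are assumed round numbers.
    keystrokes_per_day = 20_000            # assumed full day of coding
    inferences_per_keystroke = 0.1         # estimate from the comment above
    completion_inferences = keystrokes_per_day * inferences_per_keystroke  # 2,000

    searches_per_day = 50                  # assumed
    inferences_per_search = 3              # "several" per Google search
    search_inferences = searches_per_day * inferences_per_search           # 150

    print(f"~{completion_inferences + search_inferences:.0f} inferences without typing a single prompt")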
My point is that it’s understandable to consider AI a significant contributor to the average professional’s energy budget. It’s not an insult to point this out.