Hacker News | bottled_poe's comments

Accuracy where it matters is why. Do you have a better suggestion for projecting a sphere onto a rectangle?

I would not use such strong rhetoric as the GP, but I believe they probably mean we should lean towards the Gall-Peters projection, which preserves areas, but not angles or distances.

(There are of course other projections with other interesting features; or you could take the same projection but center the world differently etc.)
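(The equal-area property is easy to check numerically, by the way. A minimal sketch, assuming the cylindrical equal-area form with standard parallels at 45°, which is the Gall-Peters variant; the Earth radius is just an illustrative value:)

```python
import math

R = 6371.0  # assumed mean Earth radius in km, for illustration

def gall_peters(lon_deg, lat_deg):
    """Cylindrical equal-area projection with standard parallels at 45 degrees."""
    lam, phi = math.radians(lon_deg), math.radians(lat_deg)
    c45 = math.cos(math.radians(45))
    return R * lam * c45, R * math.sin(phi) / c45

def area_scale(lat_deg, h=1e-5):
    """Numerically estimate |Jacobian| / (R^2 cos(phi)); 1.0 means equal-area."""
    phi = math.radians(lat_deg)
    dxdlam = (gall_peters(h, lat_deg)[0] - gall_peters(-h, lat_deg)[0]) / (2 * math.radians(h))
    dydphi = (gall_peters(0, lat_deg + h)[1] - gall_peters(0, lat_deg - h)[1]) / (2 * math.radians(h))
    return (dxdlam * dydphi) / (R ** 2 * math.cos(phi))

for lat in (0, 30, 60, 80):
    print(lat, round(area_scale(lat), 6))  # stays at 1.0 at every latitude
```

The Jacobian determinant matches the sphere's area element everywhere, which is exactly what "preserves areas" means.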


Web Mercator does not preserve angles.

We're currently forced to use a projection that is strictly worse than what it was based on, the Mercator projection, created in 1569.

Everyone on this thread needs to read this presentation entitled "Use Literally Anything But Web Mercator":

https://www.esri.com/content/dam/esrisites/en-us/events/conf...

Let's say that a bit louder, shall we:

USE LITERALLY ANYTHING BUT WEB MERCATOR.


Thanks for that, I wasn't even aware that "web Mercator" was a thing.

Why? Why are lengths and areas more important than angles? You have to choose one; it's essentially arbitrary. Personally, I find it more useful to know what is parallel to what and what is at which angle to what than some size. We have globes, so we know what the "real size" of Greenland looks like... this has always been a silly argument from the overzealous online looking to right wrongs that don't exist.

> Why is lengths and areas more important than angles?

Well, of course the answer is "it depends on what you want to learn from the map." If you're driving around and want to navigate, you'll probably take Mercator. But if you want to compare sizes of objects (like lakes or forests or islands or states), especially when zoomed out, you'll prefer Gall-Peters.

Many argue, and I tend to agree, that when looking at a map of the whole world, you are typically better served by Gall-Peters in terms of what your interest is, and in fact, people _do_ use Mercator maps to semi-consciously compare sizes of things - and have false impressions about geopolitics because of it.
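The distortion is easy to quantify: Mercator's local scale factor is sec(latitude), so areas get inflated by sec² of the latitude. A rough sketch of why Greenland (centered somewhere around 72° N - an assumed figure, for illustration) looks comparable to equatorial landmasses despite being many times smaller:

```python
import math

def mercator_area_inflation(lat_deg):
    """Mercator's local area inflation factor: sec^2(latitude)."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

# Assumed rough central latitudes, just for illustration
greenland = mercator_area_inflation(72)  # roughly 10x
equator = mercator_area_inflation(0)     # exactly 1x

print(f"Greenland is drawn ~{greenland:.1f}x too large relative to the equator")
```

So a landmass at Greenland's latitude is drawn roughly ten times larger, per unit of actual area, than one on the equator.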


WEB MERCATOR DOES NOT PRESERVE ANGLES.

I know. But they mentioned Mercator, not Web Mercator.

They were talking about Web Mercator but didn’t know they were!

This comment is inaccurate! Web Mercator causes such large errors in geolocation that the NGA had to issue an advisory about it [1].

There is a whole science behind map projections and Google ignored it entirely when they created Web Mercator, which was a hack to divide the world into a quad tree. It was vaguely clever and utterly stupid at the same time.

[1] https://web.archive.org/web/20140607003201/http://earth-info...
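To make the error concrete: Web Mercator plugs WGS84 geodetic latitudes into the spherical Mercator formula. A sketch comparing that against the true ellipsoidal Mercator northing (constants are the usual WGS84 values; the discrepancy shown is in grid coordinates, which is exactly the misinterpretation the advisory warns about):

```python
import math

# WGS84 constants
A = 6378137.0        # semi-major axis in meters
E = 0.0818191908426  # first eccentricity

def web_mercator_y(lat_deg):
    """Spherical Mercator northing, as used by Web Mercator (EPSG:3857)."""
    phi = math.radians(lat_deg)
    return A * math.log(math.tan(math.pi / 4 + phi / 2))

def ellipsoidal_mercator_y(lat_deg):
    """True (ellipsoidal) Mercator northing on the WGS84 ellipsoid."""
    phi = math.radians(lat_deg)
    con = E * math.sin(phi)
    ts = math.tan(math.pi / 4 + phi / 2) * ((1 - con) / (1 + con)) ** (E / 2)
    return A * math.log(ts)

for lat in (20, 45, 70):
    dy = web_mercator_y(lat) - ellipsoidal_mercator_y(lat)
    print(f"{lat:3d}N: northings differ by {dy / 1000:.1f} km")
```

The gap grows with latitude - on the order of 15 km at 20° N and around 40 km at 70° N - so treating Web Mercator coordinates as true Mercator coordinates (or vice versa) produces exactly the kind of positional errors the NGA flagged.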


> Accuracy where it matters is why

Why the downvotes for correcting this laughable statement? Web Mercator is well documented as being extremely inaccurate.


Hi - I understand you feel strongly; your Web Mercator input is interesting. I would just focus on the intellectually interesting part - people might not get it; you can't control that or compel them to.

You've been repeating essentially the same comment, writing in all caps (in some comments), complaining about downvotes, telling everyone they are idiots one way or another. None of those things are likely to be welcome.


Almost every commenter here repeated the same misconception, and I corrected them each time.

I feel justified in shouting in this case.


What’s the I/O throughput of /dev/null ?


Single client, I'm getting ~5GB/s, both on an 8-year-old intel server, and on my M1 ARM chip.

However, with a single server it doesn't scale perfectly linearly with multiple clients. I'm getting:

1 client: 5GB/s

2 clients: 8GB/s

3 clients: 8.7GB/s


I'm easily reaching 30GB/s with a single client:

    dd if=/dev/zero of=/dev/null bs=1M status=progress
A second dd process hits the same speed.
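If you want to reproduce this without dd, here's a quick sketch timing raw writes to the null device (numbers will vary wildly by machine; the point is that throughput climbs with block size because the syscall count falls):

```python
import os
import time

def devnull_throughput(block_size, total=1 << 26):
    """Write `total` bytes to the null device in `block_size` chunks; return GB/s."""
    buf = b"\0" * block_size
    with open(os.devnull, "wb", buffering=0) as f:  # unbuffered: one syscall per write
        start = time.perf_counter()
        written = 0
        while written < total:
            written += f.write(buf)
        elapsed = time.perf_counter() - start
    return written / elapsed / 1e9

for bs in (512, 64 * 1024, 1 << 20):
    print(f"bs={bs:>8}: {devnull_throughput(bs):6.2f} GB/s")
```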


My artisanal architecture writes a few characters at a time through Unix pipes:

    yes | pv > /dev/null
I hope that in my next rewrite I can advance to larger block sizes.


Interestingly I tried this as well and was disappointed with the results:

  yes $(printf %1024s | tr " " "y") | pv > /dev/null
About the same throughput as letting yes output a single character. I guess Unix pipes are slow.


> I guess Unix pipes are slow.

Or string concatenation, or pipeviewer.


yes doesn't do string concatenation, at least not in the loop that matters. It just prepares a buffer of bytes once and writes it to stdout repeatedly.

https://github.com/coreutils/coreutils/blob/master/src/yes.c
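In other words, something like this sketch (the buffer size and line are my assumptions for illustration, not the actual coreutils constants):

```python
def make_yes_buffer(line=b"y\n", target=8192):
    """Mimic yes(1)'s trick: concatenate the output line into one big buffer
    up front, so the hot loop is just repeated write() calls on a full buffer."""
    reps = max(1, target // len(line))
    return line * reps

buf = make_yes_buffer()

# The hot loop would then be (runs forever, like yes, so left as a comment):
#   import os, sys
#   while True:
#       os.write(sys.stdout.fileno(), buf)
```

So making the argument 1024 characters long doesn't help: yes already amortizes the per-line cost away, and the bottleneck is elsewhere in the pipeline.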



What's the best hardware for running a /dev/null instance for production?


A single resistor at ground voltage.


That doesn't support expected features like 'stat /dev/null'.


I usually do a Kubernetes cluster on top of VMs. But sometimes when I really want to scale, the standard cloud serverless platforms all support /dev/null out of the box. (Except for Windows...)


> Except for Windows...

copy c:\file nul

It's been there since DOS or more likely CP/M :)


   set "nul1=1>nul"
   set "nul2=2>nul"
   set "nul6=2^>nul"
   set "nul=>nul 2>&1"
just saw this in a .cmd script


Still need an adapter library though! Fortunately there are about 7 competing implementations on npm and most of them only have 5-6 transitive dependencies.


How did you measure this? Do you know that /dev/null is the limiting factor, or could it be the data source that is limiting?


You start dealing with Heisen-throughput at that point, it goes as high as you can measure.


How is this any different to downloading and running a binary?


I'm working to make private hosting easier. I've been running a software development agency in Melbourne for 10+ years and have been building this platform in the background to help automate and standardise the hosting needs for our clients.

We're now getting ready to launch a web portal for others to manage their own private hosting in a simpler way. The product also includes a directory of off-the-shelf applications which can be launched in a few clicks (eg. Deepseek chatbot).

If you're interested in being part of our closed beta in March, reach out! (e: james at below domain)

https://getbach.io


It will be very interesting to see if they can reproduce a similar model on the shoestring budget claimed by Deepseek.


But DeepSeek hasn't claimed the figure touted by everyone for this particular R1 model, because that $5.6M was apparently for DeepSeek's coder model.


The $5.6M figure is for the base DeepSeek V3 model. Both the instruction and reasoning tuning have negligible cost in comparison.


Exactly, is OP really the one who should be influencing others preferences? Most couldn’t give two shits about the technological perils that await them just over the horizon - and should you really be the one to inform them of those horrors? Just relax - and embrace the book of faces. The movement will be swift and relatively painless, mostly.


Should have reviewed their content with ChatGPT before posting. Maybe next time.


I think you've managed to hit the nail on the head.

Rather than accept the rather human nature of people writing in their own style and making mistakes, we'd prefer to filter it through a dispassionate void first.

It's rather embarrassing how quickly we're willing to toss away the human elements of writing.


Agreed. LLM writing style is disgustingly bland and "offensively inoffensive" like Corporate Memphis. Would rather have actual human style, mistakes and all.


Comments like this one are so predictable and incredulous. As if the current state of the art is the final form of this technology. This is just getting started. Big facepalm.


Have you already noticed the trend of image search results for porn containing inferior AI slop porn?

I have. It sucks. The world we're headed for maybe isn't one we actually wind up wanting in the end.

I like the idea of increasingly advanced video models as a technologist, but in practice, I'm noticing slop and I don't like it. Having grown up on porn, when video models are in my hands, the addiction steers me toward only using the technology to generate it. That's a slot machine so addictive it's akin to the leap from the dirty magazines of old to the world of internet porn I witnessed growing up. So, porn addiction on steroids. I found it eventually damaging enough to my mental health that I sold my 4090. I'm a lot better off now.

The nerd in me absolutely loves Generative models from a technology perspective, but just like the era of social media before it, it's a double edged sword.


It sounds like you have a personal problem that you’re trying to project onto the rest of society.


No, I'm providing a personal anecdote that some members of society that do have, or may develop, the same or similar problems are having both the (perceived) good and the bad aspects of those problems seriously magnified by this technology. This can have personal consequences, but also the consequences can affect the lives of others.

Hence, a certain % of the population will be negatively affected by this. I personally think it's worth raising awareness of.


I hope they're right. If the technology improves to such a degree that meaningful content can be produced, then it could spell global disaster for a number of reasons.

Also I just don't want to live in a world where the things we watch just aren't real. I want to be able to trust what I see, and see the human-ness in it. I'm aware that these things can co-exist, but I'm also becoming increasingly aware that as long as this technology is available and in development, it will be used for deception.


That ship sailed shortly after the invention of photography. Photos were altered for political purposes during the US Civil War.

Now, we have entire TV shows shot on green screen in virtual sets. Replacing all the actors is just the next logical step.


That's exactly what I mean, all of those methods take some human effort, there is a human involved in the process. Now we face a reality that it might take no human effort to do... well, anything. Which is terrifying to me.

I do believe that humans are restless, and even when there is no longer any point to create, and it is far easier to dictate, we still will, just because we are too driven not to.


You know, there are still offline art forms like concerts, theater, opera, installations, etc., so I wouldn't see it that negatively. And we have nearly 100 years of music and film we can enjoy. So maybe video is a dying art form for humans to act in, but there is so much more.


The most predictable comment is yours, especially since you completely missed the point of the original comment which had nothing to do with the video quality.


AI generated slop content begets human generated slop comment.


So, even better porn?


Could you please add an option to enforce the use of transactions within the SQL input?


Great idea, I'll get it added to the roadmap!


It’s interesting. They have had plenty of time and resources available to mount solid competition. Why haven’t they? Is it a talent hiring problem or some more fundamental problem with their engineering processes? The writing has been on the wall for GPGPU for more than 10 years. Definitely enough time to catch up.


It's a commitment problem IMO. Nvidia stuck with CUDA for a long time before it started paying back what it cost. AMD and Intel have both launched and killed initiatives a couple of times each, abandoning them within a few years because adoption didn't happen overnight.

If you need people to abandon an ecosystem that's been developed steadily over nearly 20 years just to keep your shiny new thing around, you'll never compete.

