oofabz's comments | Hacker News

One of the main differences from Linux is BSD's separation between the base system and installed applications.

On Ubuntu, Arch, Mint, etc. there is no such distinction. Everything is made of packages, including the base system. You have packages for the kernel, the init system, logging, networking, firmware, etc. These are all versioned independently and whether or not they are considered "essential" is up to the user to decide.

On BSD, the base system is not composed of packages. It is a separate thing, with the kernel, libc, and command line utilities all tightly coupled and versioned together. This allows the components to evolve together, with breaking ABI changes that would not be practical in Linux. This makes BSD better for research, which is why things like IPv6, address space randomization, SSH, jails, and capabilities were developed there.

Packages are used for applications and are isolated to /usr/local. Dependency and compatibility problems only exist for packages. The base system is always there, always bootable, and you can count on being able to log in to a command line session and use the standard suite of tools. It is sort of like a Linux rescue image, except you boot off it every time.


I first read about continued fractions in HAKMEM from 1972. These things have fascinated programmers for decades.

https://web.archive.org/web/20190906055006/http://home.pipel...


Here's another one by Gosper, with the famous quote:

> Abstract: Contrary to everybody, this self contained paper will show that continued fractions are not only perfectly amenable to arithmetic, they are amenable to perfect arithmetic.

https://perl.plover.com/yak/cftalk/INFO/gosper.txt


I find it interesting that the transform was controversial in the '90s.

Today it seems like a perfectly normal solution to the problem, and the controversy seems silly. I have a lot of experience with the map function from JavaScript; it is too simple to be objectionable.

But in the '90s, I would also have had trouble understanding the transform. Lambdas and closures were unheard of to anyone but Lisp dweebs. Once I figured out what the code was doing, I would have been suspicious of its performance and memory consumption. This was 1994! Kilobytes mattered, and optimal algorithmic complexity was necessary for anything to be usable. Much safer to use a well-understood for loop. I have plenty of experience making those fast, and that's what map() must be doing under the hood anyway.

But I would have been wrong! map() isn't doing anything superfluous and I can't do it faster myself. The memory consumption of the temporary decorated array is worth it to parse the last word N times instead of N log N times. Lisp is certainly a slow language compared to C, but that's not because of its lambdas!
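
For anyone who hasn't seen the trick, here is a minimal sketch of the decorate-sort-undecorate idea using JavaScript-style map() and sort(), written as TypeScript with made-up sample data (sorting by last word, matching the example above):

    // Decorate: parse the sort key (the last word) once per line, i.e. N parses.
    // Sort on the precomputed key, so the comparator never re-parses.
    // Undecorate: map back to the original strings.
    const lines = ["alpha zulu", "bravo yankee", "charlie xray"];

    const bySortKey = lines
      .map(line => ({ key: line.split(" ").pop() ?? "", line }))
      .sort((a, b) => a.key.localeCompare(b.key))
      .map(({ line }) => line);

The naive version calls split() inside the comparator instead, so the parsing runs O(N log N) times rather than N.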


Very impressive work. For those who aren't familiar with this field, Valve pioneered SDF text rendering for their games and published a groundbreaking paper on the subject in 2007. It remains a very popular technique in video games, largely unchanged.

In 2012, Behdad Esfahbod wrote GLyphy, an implementation of SDF text rendering that runs on the GPU using OpenGL ES. It has been widely admired for its performance and for enabling new capabilities like transforming text rapidly. However, it has not been widely adopted.

Modern operating systems and web browsers do not use either of these techniques, preferring to rely on 1990s-style TrueType rasterization. This is a lightweight and effective approach, but it lacks many capabilities. It can't do subpixel alignment or arbitrary subpixel layout, as demonstrated in the article. Zooming carries a heavy performance penalty, and more complex transforms like skew, rotation, or 3d transforms can't be done in the text rendering engine. If you must have rotated or transformed text, you are stuck resampling bitmaps, which looks terrible because it destroys the small features that make text legible.

Why the lack of advancement? Maybe it's just too much work and too much risk for too little gain. Can you imagine rewriting a modern web browser engine to use GPU-accelerated text rendering? It would be a daunting task. Rendering glyphs is one thing but how about handling line breaking? Seems like it would require a lot of communication between CPU and GPU, which is slow, and deep integration between the software and the GPU, which is difficult.


> Can you imagine rewriting a modern web browser engine to use GPU-accelerated text rendering? […] Rendering glyphs is one thing but how about handling line breaking?

I’m not sure why you’re saying this: text shaping and layout (including line breaking) are almost completely unrelated to rendering.


> Can you imagine rewriting a modern web browser engine to use GPU-accelerated text rendering?

https://github.com/servo/pathfinder uses GPU compute shaders to do this, which has way better performance than trying to fit this task into the hardware 3D rendering pipeline (the SDF approach).


Just for the record, text rendering, including subpixel antialiasing, has been GPU-accelerated on Windows for ages, and in Chrome and Firefox for ages as well. Probably Safari too, but I can't testify to that personally.

The idea that the state of the art, or what's being shipped to customers, hasn't advanced is false.


SDF is not a panacea.

SDF works by encoding a localized _D_istance from a given pixel to the edge of a character as a _F_ield, i.e. a 2D array of data, using a _S_ign to indicate whether that distance is inside or outside of the character. Each character has its own little map of data that gets packed together into an image file of some GPU-friendly type (generically called a "map" when it does not represent an image meant for human consumption), along with a descriptor file saying where to find each character's sub-image in that image, so it can work with the SDF rendering shader.

This definition of a character turns out to be very robust against linear interpolation between field values, enabling near-perfect zoom capability for relatively low resolution maps. And GPUs are pretty good at interpolating pixel values in a map.
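
To make the rendering side concrete, here is a rough sketch of the per-pixel test (TypeScript rather than shader code; the 0.5 iso-value and the smoothing width are illustrative assumptions, and real implementations do this in a fragment shader using the GPU's own bilinear filtering):

    // d is the value sampled (and bilinearly filtered) from the distance map,
    // conventionally remapped so 0.5 sits exactly on the glyph outline.
    // Coverage ramps smoothly from 0 to 1 across a narrow band around the edge.
    function coverage(d: number, smoothing = 0.05): number {
      const t = Math.min(Math.max((d - (0.5 - smoothing)) / (2 * smoothing), 0), 1);
      return t * t * (3 - 2 * t); // smoothstep
    }
    // coverage(0.5) === 0.5: exactly on the outline, half covered.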

But most significantly, those maps have to be pre-processed during development from existing font systems for every character you care to render. Every. Character. Your. Font. Supports. It's significantly less data than rendering every character at high resolution to a bitmap font. But, it's also significantly more data than the font contour definition itself.

Anything that wants to support all the potential text of the world, like an OS or a browser, cannot use SDF as its text rendering system, because it would require SDF maps for the entire Unicode character set. That would be far too large to ship. It really only works for games because games can (generally) get away with not being localized very well, not displaying completely arbitrary text, etc.
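
To put rough numbers on "far too large" (every figure here is an assumption, not a measurement):

    // ~150,000 assigned Unicode characters, each as a modest 64x64
    // single-channel SDF tile at 1 byte per texel.
    const glyphs = 150_000;
    const bytesPerGlyph = 64 * 64;
    const totalMB = (glyphs * bytesPerGlyph) / (1024 ** 2);
    console.log(totalMB.toFixed(0) + " MB"); // ~586 MB of atlas data

Compare that with the few megabytes of contour data in even a very large font file, and the asymmetry is clear.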

The original SDF formulation also cannot support emoji, because it only encodes the distance to the edges of a glyph and nothing about the colors inside it. There are multi-channel extensions (MSDF), but the extra channels are mainly used to preserve sharp corners, not to encode arbitrary color.

Indeed, if you look closely at games that A) use SDF for in-game text and B) have chat systems in which global communities interact, you'll very likely see differences in text rendering between the in-game text and the chat system.


If I understand correctly, the author's approach doesn't really have this problem, since they only upload the glyphs actually being used to the GPU (at runtime). Yes, you still have to pre-compute them for your font, but that should be fine.


But the grandparent post is talking about a browser: how would a browser pre-compute a font when the fonts are specified by the webpage being loaded?


The most common way this is done is by parsing the font and generating the SDF fields on the fly (usually using Troika - https://github.com/protectwise/troika/blob/main/packages/tro...). It slows down the time to the first render, but only by hundreds of milliseconds rather than seconds, and as part of rendering 3D on a webpage nobody really expects it to start up that fast.


> It slows down the time to the first render

Would caching (domain-restricted, of course) not trivially fix that? I don't expect a given website to use very many fonts or that they would change frequently.


WebAssembly hosting FreeType in a web worker. Not too difficult.


Why not prepare SDFs on-demand, as the text comes in? Realistically, even for CJK fonts you only need a couple thousand characters. Ditto for languages with complex characters.


Generating SDFs is really slow, especially if you can't use the GPU to do it, and the faster algorithms tend to produce fields with glitches in them.


Because it's slow.


> Can you imagine rewriting a modern web browser engine to use GPU-accelerated text rendering?

It is tricky, but I thought they already (partly) do that. https://keithclark.co.uk/articles/gpu-text-rendering-in-webk... (2014):

“If an element is promoted to the GPU in current versions of Chrome, Safari or Opera then you lose subpixel antialiasing and text is rendered using the greyscale method”

So, what’s missing? Given that comment, at least part of the step from UTF-8 string to bitmap can be done on the GPU, can’t it?


The issue is not subpixel rendering per se (at least if you're willing to go with the GPU compute shader approach, for a pixel-perfect result); it's that you lose the complex software hinting that TrueType and OpenType fonts have. But the whole point of rendering fonts on the GPU is to support smooth animation, whereas a software-hinted font is statically "snapped" to the pixel/subpixel grid. The two use cases are inherently incompatible.


Thanks for the breakdown! I love reading quick overviews like this.


> complex transforms like skew, rotation, or 3d transforms can't be done

Good. My text document viewer only needs to render text in straight lines left to right. I assume right to left is almost as easy. Do the Chinese still want top to bottom?


> Good. My text document viewer only needs to render text in straight lines left to right.

Yes, inconceivable that somebody might ever want to render text in anything but a "text document viewer"!


God I hope that you don’t work on anything text-related


A classic example of main character syndrome, pun not intended :D


Believe it or not, other people who aren't you exist.


If you work with ASCII-only monospaced-only text, then yeah sure. It gets weird real quick outside of those boundaries.


"Authentically bad" is a good way to put it. My favorite part is the 3300-line Game::updatestate() function and its gigantic switch statement.


I think it's pretty charming. Games have so much abstraction these days it feels like there's no way to truly understand what it is they're even doing.

One can spend months agonizing over the true nature of things and how ideas and concepts relate to each other, and eventually distill it all into some object-oriented organization that implements not just your game but all possible games.

One can also just cycle the game's state machine in a big function, haha switch statement go brrr. Reminds me of the old NES games, which would statically allocate memory for game objects, very much in the "structure of arrays" style; they too had game logic just like that.

Also reminds me of old electromechanical pinball machines. You can literally see the machine cycle.

https://youtu.be/ue-1JoJQaEg

https://youtu.be/E3p_Cv32tEo


Holy... all wrapped in an if statement too!


> case 4099:

lol!

Also I like that every function starts with:

> jumpheld = true;


You can do selection first in Vim by using visual mode. For this particular example (5dd), you would want visual line mode, which you enter by pressing Shift-V. Then you can select the lines you wish to cut and press d to delete them, or apply any other action to that block of text.

I frequently use c (change) on my visual selections, type in new code at the point where it was removed, then use p to paste the old code somewhere else.


timegm() is even available on Haiku


Beatriz Marinello is a professional chess player who was Chilean Women's Chess Champion in 1980 and was vice president of FIDE until 2018.


The die size of the B580 is 272 mm2, which is a lot of silicon for $249. The performance of the GPU is good for its price but bad for its die size. Manufacturing cost is closely tied to die size.

272 mm2 puts the B580 in the same league as the Radeon RX 7700 XT, a $449 card, and the GeForce RTX 4070 Super, a $599 card. The idea that Intel is selling these cards at a loss sounds reasonable to me.


Though you assume the prices of the competition are reasonable. There are plenty of reasons for them not to be: availability issues, lack of competition, other more lucrative avenues, etc.

Intel has none of those, or at least not to the same degree.


"At a loss" seems a bit overly dramatic. I'd guess Nvidia sells SKUs for three times their marginal cost. Intel is probably operating at cost, without any hope of recouping R&D with the current SKUs, but that's reasonable for an aspiring competitor.


It kinda seems like they are covering the cost of throwing massive amounts of resources at getting Arc's drivers into shape.


I really hope they stick with it and become a viable competitor in every market segment a few more years down the line.


The drivers are shared by their iGPUs, so the cost of improving the drivers is likely shared by those.


The idea that Intel is selling these at a loss does not sound reasonable to me:

https://news.ycombinator.com/item?id=42505496

The only way this would be at a loss is if they refuse to raise production to meet demand. That said, I believe their margins on these are unusually low for the industry. They might even fall into razor thin territory.


Never had this problem with Gnumeric


Ya, when Gnumeric fails to load in my browser, there are never any annoying “Got it” buttons that make it work.

