And hardware raytracing is on the same trajectory as hardware rasterization: devs find ways to repurpose it, which creates pressure for more general APIs, which in turn enable further repurposing, until hardware raytracing evolves into a flexible, hardware-accelerated facility for indexing, reordering, etc.

It is unreasonable to expect to run the same graphics code on desktop GPUs and mobile ones: mobile applications have to render something less expensive that doesn't exceed the limited capabilities of a low-power device with slow memory.

The different, separate engine variants for mobile and desktop users, on the other hand, can be based on the same graphics API; they'll just use different features from it in addition to having different algorithms and architecture.


> they'll just use different features from it in addition to having different algorithms and architecture.

...so you'll have different code paths for desktop and mobile anyway. The same can be achieved with a Vulkan vs VulkanES split, which would overlap for maybe 50..70% of the core API but differ significantly in the rest (like resource binding).


But they don't actually differ; see the "no graphics API" blog post we're all commenting on :) The primary difference between mobile & desktop is performance, not feature set (ignoring for a minute the problem of outdated drivers).

And beyond that if you look at historical trends, mobile is and always has been just "desktop from 5-7 years ago". An API split that makes sense now will stop making sense rather quickly.


Different features/architectures are precisely the issue with mobile, be it due to hardware constraints or due to lacking driver support. Render passes were only bolted onto Vulkan because of mobile tiler GPUs; they never made any sense for desktop GPUs and only made Vulkan worse for desktop graphics development.

And this is the reason why mobile and desktop should be separate graphics APIs. Mobile is holding desktop back not just feature-wise; it also fucks up the API.


You don't need a "proper" random selection: if your points are sorted deterministically and not too adversarially, any reasonably unbiased selection (e.g. every Nth point) is pseudorandom.
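
For example (a minimal sketch; the function name and data are made up):

  # Take every Nth point from a deterministically sorted list;
  # if the ordering isn't adversarial, this behaves like an
  # unbiased pseudorandom sample, with no RNG needed.
  def stride_sample(points, k):
      stride = max(1, len(points) // k)
      return points[::stride][:k]

  points = sorted((i * 37) % 1000 for i in range(1000))
  sample = stride_sample(points, 100)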

Why do you think the mentioned "color-centric" image processing operations deserve a specialized file format, rather than building and using a transient index in memory when it is useful? Do you have some special use case in mind?

Any interactive application, for instance, can be expected to render the image to the screen (repeatedly); any GPU use for texture mapping needs pixels sorted by location, regardless of whether pixel values are palette indices or explicit colours; many image processing tasks, like segmentation and drawing, need efficient access to pixels at arbitrary locations or near already processed locations.


Thanks for asking. C2PM isn’t meant to replace raster formats; it serves as an intermediate format for workflows where users repeatedly operate on color groups (palette swaps, region recoloring, mask generation, etc.). In a standard image, every color swap requires rescanning all pixels. In C2PM, the color-to-pixel index is already stored, so repeated swaps or multi-color operations become O(1) per color instead of O(P) per operation. It’s designed for tools where color groups matter more than spatial order, not for real-time display or GPU textures.
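
Conceptually, it's the difference shown in this toy sketch (illustrative only, not the actual file layout):

  from collections import defaultdict

  pixels = ["red", "blue", "red", "green", "red"]

  # Build the color -> pixel-index map once, O(P)...
  index = defaultdict(list)
  for i, color in enumerate(pixels):
      index[color].append(i)

  # ...then each recolor touches only that color's group,
  # with no rescan of the whole image.
  def recolor(old, new):
      for i in index[old]:
          pixels[i] = new
      index[new].extend(index.pop(old))

  recolor("red", "orange")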

Weapons and violence in Jane Austen's novels include characters hunting with shotguns and (remotely implied) armed Navy ships.

Nothing comparable to treating a mobile suit like an extension of the body, or killing people, or both at the same time (e.g. the "duel" between Char Aznable and Kycilia Zabi).


Aren't the notes adjacent enough on consecutive lines?

  2.0 note Cmaj7 ch=1 
  2.0 note D ch=1 
  2.0 note C dur=0.15 ch=2
  2.1 note C ch=2
  2.1 note Cmaj ch=1

Imagine there are four, eight, maybe dozens of voices being mixed together in a track. Could get unwieldy.

That's why I decided to allow arbitrary order within the file. In this way you can group notes by instrument and the parser will deal with reordering them.

I also plan to add a flag to the CLI tool that reorders the lines within the mtxt file so that notes are grouped by instrument.
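
The reordering itself is cheap; roughly like this (illustrative Python, not the real parser):

  # Notes can be grouped by instrument in the file; the parser
  # sorts by the leading timestamp before playback. A stable sort
  # keeps the written order of simultaneous notes.
  lines = [
      "2.0 note Cmaj7 ch=1",
      "2.1 note Cmaj ch=1",
      "2.0 note C dur=0.15 ch=2",
      "2.1 note C ch=2",
  ]
  events = sorted(lines, key=lambda line: float(line.split()[0]))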


We are talking about an application file format, so "type errors" are about who's right: the application (even better, multiple equally right implementations of a specification) or random hackers altering the file in incorrect ways.

Loose type checks, e.g. NOT NULL columns of "usually" text, are loose only compared to typical SQL table definitions; compared to the leap forward of using abstract tables and changing them with abstract SQL instead of using text or byte buffers and making arbitrary changes, enforcing data types on columns would be only a marginal improvement.
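
And if you did want them, per-column type checks are easy to bolt on; a sketch (schema invented for illustration):

  import sqlite3

  db = sqlite3.connect(":memory:")
  # A column with no declared type gets BLOB affinity, so SQLite
  # stores values as-is and the CHECK can reject wrong types.
  db.execute("""
      CREATE TABLE doc (
          key TEXT NOT NULL,
          val NOT NULL CHECK (typeof(val) = 'text')
      )
  """)
  db.execute("INSERT INTO doc VALUES ('title', 'hello')")  # accepted
  try:
      db.execute("INSERT INTO doc VALUES ('count', 42)")   # wrong type
  except sqlite3.IntegrityError as e:
      print("rejected:", e)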


For many web services it would more often be 200 KB of schema (many possible requests and responses, some of them complex) tacked onto a less-than-1 KB message (brief requests and acknowledgements without significant data inside).


The blog seems to contain other similar misunderstandings: for example, the parallel article against using SVG images doesn't count scaling the images freely as a benefit of vector formats.


https://aloisdeniel.com/blog/i-changed-my-mind-about-vector-... seems fairly clearly to be talking about icons of known sizes, in which case that advantage disappears. (I still feel the article is misguided: the benefit of runtime-determined scaling should have been mentioned, no benchmarks are given to support its performance theses, and I’d be surprised if the difference were anything but negligible. Vector graphics pipelines are getting increasingly good, the best ones do not work in the way described, and they could in fact be more efficient than raster images, at least for simpler icons like those shown.)


> seems fairly clearly to be talking about icons of known sizes, in which case that advantage disappears.

That's the point: obliviousness to different concerns and their importance.

Among mature people, the main reason to use SVG is scaling vector graphics (in different contexts: resolution-elastic final rendering, automatically exporting bitmap images from easy-to-maintain vector sources, altering the images programmatically as in many icon collections); worrying about file sizes and rendering speed is a luxury for situations that allow switching to bitmap images without serious cost or friction.


Are there display pipelines that cache the SVGs rendered for the device's resolution, instead of redoing all the slower parsing etc. from scratch every time, achieving the benefits of both worlds? And you could still have runtime-defined scaling by "just" rebuilding the cache?
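
Something like this, conceptually (all names hypothetical):

  from functools import lru_cache

  def rasterize_svg(path, size_px):
      # stand-in for the slow parse-and-render step
      return b"bitmap of %s at %dpx" % (path.encode(), size_px)

  # One bitmap per (icon, pixel size); a runtime scale change just
  # misses the cache and rebuilds that single entry.
  @lru_cache(maxsize=256)
  def raster_for(path, size_px):
      return rasterize_svg(path, size_px)

  icon = raster_for("save.svg", 48)  # slow the first time
  icon = raster_for("save.svg", 48)  # served from the cache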


Haiku (OS) caches the vector icons rendered from HVIF[1][2] files which are used extensively for UI.

I didn't find details of the caching design. Possibly it was mentioned to me by waddlesplash on IRC[3].

[1] 500 Byte Images: The Haiku Vector Icon Format (2016) http://blog.leahhanson.us/post/recursecenter2016/haiku_icons...

[2] Why Haiku Vector Icons are So Small | Haiku Project (2006) https://www.haiku-os.org/articles/2006-11-13_why_haiku_vecto...

[3] irc://irc.oftc.net/haiku


> The drawback to using vector images is that it can take longer to render a vector image than a bitmap; you basically need to turn the vector image into a bitmap at the size you want to display on the screen.

Indeed, would be nice if one of these blogs explained the caching solution to tackle the drawback.

Another issue, I think, especially at smaller sizes, is that pixel snapping might be imperfect and require "hints" as in fonts? I wonder if these icons suffer from this or address it.


Increasingly I think you’ll find that the efficient format for simple icons like this actually isn’t raster, due to (simplifying aggressively) hardware acceleration. We definitely haven’t reached that stage in wide deployment yet, but multiple C++ and Rust projects exist where I strongly suspect it’s already the case, at least on some hardware.


The best place for such a cache is a GPU texture; in a shader that does simple texture mapping instead of rasterizing shapes, it would cost more memory reads in exchange for fewer calculations.


Icons are no longer fixed sizes. There are numerous DPI/scaling settings even if the "size" doesn't change.


The article goes into that: it's making a sprite map of at least the expected scaling factors.


There are no "expected" scaling factors anymore.


Also architecturally suitable for the common case of collecting heterogeneous files in existing and new formats into a single file, as opposed to designing a database schema or a complex container structure from scratch.

Any multi-file archive format would do, but ZIP is very portable and supports random access.
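
In Python, for instance (file names invented):

  import zipfile

  # Collect heterogeneous files into one portable container...
  with zipfile.ZipFile("project.bundle", "w") as z:
      z.writestr("manifest.json", '{"version": 1}')
      z.writestr("assets/icon.svg", "<svg/>")
      z.writestr("notes/readme.txt", "anything can go in here")

  # ...and read any member directly: ZIP's central directory
  # gives random access without extracting the whole archive.
  with zipfile.ZipFile("project.bundle") as z:
      manifest = z.read("manifest.json")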

