Hacker News | rezmason's comments

It begins!


We also typically value things that are not tied to productivity/output, like product quality/reliability, security, and our own agency.

I want to be free to read, write, run, and share code, now and in the future. Relying on centralized services to do it for me (by extracting knowledge from countless other people) is certainly not a resilient strategy.


This article's from 2021. Does anyone know if there are elements (no pun intended) of this classification of element origins that are impacted by those JWST observations of complex early galaxies?



Also s4nake, the concept in a 4k binary from the demoscene circa 2013

https://www.pouet.net/prod.php?which=61035


Oh yeah! I was playing MacSwear quite a bit on my friend's MacBook (I think with a PowerPC CPU)


I contributed one earlier this year! The community's a great bunch and I learned a lot.

Always remember, folks: the best feature request is a pull request ;)


base64 is embarrassingly parallel. So just pipe it to the GPU:

  precision highp float;
  uniform vec2 size;
  uniform sampler2D src,tab;
  void main(){
    vec4 a=(gl_FragCoord-.5)*3.,i=vec4(0,1,2,0)+a.y*size.x+a.x,y=floor(i/size.x),x=i-y*size.x;
    #define s(n)texture2D(src,vec2(x[n],y[n])/size)[0]
    #define e(n)texture2D(tab,vec2(a[n],0))[0]
    a=vec4(s(0),s(1),s(2),0)*255.*pow(vec4(2),-vec4(2,4,6,0)),a=fract(a).wxyz+floor(a)/64.,gl_FragColor=vec4(e(0),e(1),e(2),e(3));
  }
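For anyone puzzling over why base64 parallelizes so cleanly: each group of 3 input bytes maps to 4 output characters with no dependence on neighboring groups. Here's a minimal scalar sketch of that chunk independence in plain JavaScript (illustrative only; it is not the shader's method, and the names are my own):

```javascript
const TABLE = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

// Encode the 3-byte group starting at index i. Each call is independent,
// which is exactly what makes the problem embarrassingly parallel.
function encodeChunk(bytes, i) {
  const b0 = bytes[i], b1 = bytes[i + 1] ?? 0, b2 = bytes[i + 2] ?? 0;
  const word = (b0 << 16) | (b1 << 8) | b2; // pack into a 24-bit word
  const n = Math.min(3, bytes.length - i);  // bytes actually present
  let out = TABLE[(word >> 18) & 63] + TABLE[(word >> 12) & 63];
  out += n > 1 ? TABLE[(word >> 6) & 63] : "=";
  out += n > 2 ? TABLE[word & 63] : "=";
  return out;
}

function base64(bytes) {
  let out = "";
  for (let i = 0; i < bytes.length; i += 3) out += encodeChunk(bytes, i);
  return out;
}
```

Whether shipping those independent chunks to a GPU is ever a *win* is a separate question, as the replies below point out.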


HN user: Ah yes, let me casually scribble down a tweet-sized base64 encoder that runs in parallel on the GPU.

Bravo, that is a thing of beauty.


Uhhh no, it's a huge net loss, because the cost of sending the data to the GPU and back greatly exceeds the cost of just doing it then and there on the CPU; even on an iGPU, the kernel launch latency etc. will kill it, and that's assuming the kernel build is free. Not to mention this is doing pow calls (!!), which is so ridiculous it makes me wonder if this was a kneejerk AI prompt.

Another post in this thread mentioned V8 sped this up by removing a buffer copy; this is adding two buffer copies, each about an order of magnitude slower.

Come on guys...


Don't make me upload my web-browser-in-a-GLSL-shader snippet


Uhhh, go for it? You're welcome to link anything you like, of course, but do you maybe want to address my actual points if you have any objections? Let's do some measurements; it sounds like you might be surprised by the outcome.

Web browser in a shader also sounds extremely inefficient, for obvious fundamental reasons.


Sorry, I was cracking a joke about the browser in a shader.

The GLSL I originally posted is from the "cursed mode" of my side project, and I use it to produce a data URI of every frame, 15 times per second, as a twisted homage to old hardware. (No, I didn't use AI :P )

https://github.com/Rezmason/excel_97_egg

That said, is `pow(vec4(2),-vec4(2,4,6,0))` really so bad? I figured it'd be replaced with `vec4(0.25, 0.0625, 0.015625, 1.0)`.


There goes my evening.


Just as long as we don't observe it reeeeally closely, I imagine.


Bravo! I love color and color spaces.

I've been researching the way classic Macs quantize colors to limited palettes:

https://rezmason.net/retrospectrum/color-cube

This cube is the "inverse table" used to map colors to a palette. The animated regions are tints and shades of pure red, green, and blue. Ideally, this cube would be a Voronoi diagram, but that would have been prohibitively expensive for Macs of the late eighties. Instead, they mapped the palette colors to indices into the table, and expanded the regions assigned to those colors via a simultaneous flood fill, like if you clicked the Paint Bucket tool with multiple colors in multiple places at the same time. Except in 3D.
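The simultaneous flood fill described above amounts to a multi-source BFS over the color cube: every palette color claims its own cell, then all regions grow outward one cell per pass. A hypothetical sketch of that idea (not the actual Mac implementation; the cube size, function names, and 6-connected neighborhood are my own illustrative choices):

```javascript
const SIZE = 8; // a small 8x8x8 cube for illustration

// Build an inverse table mapping every cell of the color cube to a
// palette index, by flood-filling from all palette colors at once.
function buildInverseTable(palette) {
  const table = new Int16Array(SIZE * SIZE * SIZE).fill(-1); // -1 = unclaimed
  const idx = (r, g, b) => (r * SIZE + g) * SIZE + b;
  let frontier = [];
  // Seed: each palette entry claims its own cell.
  palette.forEach(([r, g, b], i) => {
    table[idx(r, g, b)] = i;
    frontier.push([r, g, b]);
  });
  // Grow every region by one cell per pass, all colors simultaneously.
  const steps = [[1,0,0], [-1,0,0], [0,1,0], [0,-1,0], [0,0,1], [0,0,-1]];
  while (frontier.length) {
    const next = [];
    for (const [r, g, b] of frontier) {
      const owner = table[idx(r, g, b)];
      for (const [dr, dg, db] of steps) {
        const nr = r + dr, ng = g + dg, nb = b + db;
        if (nr < 0 || ng < 0 || nb < 0 || nr >= SIZE || ng >= SIZE || nb >= SIZE) continue;
        if (table[idx(nr, ng, nb)] === -1) {
          table[idx(nr, ng, nb)] = owner; // first region to arrive wins
          next.push([nr, ng, nb]);
        }
      }
    }
    frontier = next;
  }
  return table;
}
```

Because all regions advance in lockstep, cells are claimed roughly by whichever seed is nearest in grid steps, which approximates the ideal Voronoi partition at a fraction of the cost.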


I can appreciate the passion and consideration that went into this presentation of the subject!

