Not only. There is an inherent aliasing effect with this method which is very apparent when the light is close to the wall.
I implemented a similar algorithm myself, and had the same issue. I did find a solution without that particular aliasing, but with its own tradeoffs. So, I guess I should write it up some time as a blog post.
There are several ways of enjoying Minecraft. I play it a lot with my kids (5 and 10) at the moment. They love creative mode, spawning mobs and just building strange houses. When I played with my friends, siblings and parents, it was all about survival mode: everyone would create their own huge buildings and connect up via railway, visit each other and make fun stuff. Then there was the whole redstone rabbit hole…
Another place where dithering is useful in graphics is when you can’t take enough samples at every point to get a good estimate of some value. Add jitter to each sample, then blur, and suddenly each point is influenced by the samples made around it, giving higher fidelity.
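To make that concrete, here’s a toy sketch in plain Python (the hard-edged signal, sample counts and trial count are all made up): one quantised sample per point, with and without jitter, followed by a tiny box blur. Averaged over trials, the jittered version recovers partial coverage near the edge, where the plain version can only step.

```python
import random

def edge(x):
    """Ground-truth signal: a hard edge at x = 0.48."""
    return 1.0 if x >= 0.48 else 0.0

def sample_row(n, jitter):
    """One sample per point; optionally jitter the sample position
    within the point's 1/n-wide footprint."""
    out = []
    for i in range(n):
        x = (i + 0.5) / n
        if jitter:
            x += (random.random() - 0.5) / n
        out.append(edge(x))
    return out

def blur3(row):
    """Small box blur: each point is now influenced by its neighbours' samples."""
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, len(row) - 1)]) / 3
            for i in range(len(row))]

n = 16
plain = blur3(sample_row(n, jitter=False))   # fixed staircase, identical every run
random.seed(0)
trials = [blur3(sample_row(n, jitter=True)) for _ in range(200)]
dithered = [sum(t[i] for t in trials) / len(trials) for i in range(n)]
# Near the edge, the jittered+blurred average reflects partial coverage
# instead of snapping to one of a few fixed fractions.
```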
I recently learned the slogan “Add jitter as close to the quantisation step as possible.” I realised that the “quantisation step” is not just clamping to a bit depth, but basically any time there is an if-test on a continuous value! This opens my mind to a lot of possible places to add dithering!
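A minimal sketch of that idea, assuming nothing beyond a made-up input value and a 1-bit threshold: the jitter goes in right before the if-test, and the average output then recovers the continuous input that a hard comparison would throw away.

```python
import random

def quantise(v, dither=False):
    """Threshold a continuous value in [0, 1] to a single bit.
    Adding jitter right before the comparison makes the output
    equal v on average, instead of losing everything below 0.5."""
    if dither:
        v += random.random() - 0.5
    return 1 if v >= 0.5 else 0

random.seed(0)
v = 0.3
hard = quantise(v)  # always 0: the 0.3 is simply lost
soft = sum(quantise(v, dither=True) for _ in range(10_000)) / 10_000
# soft comes out close to 0.3: the dithered average preserves the level
```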
Nope, I don't think you're in the minority once people think of this as a mini-ITX build. Power supply integrated: that's cool. Will be curious what the actual performance is, because it's hard to compare the custom chipsets with what's out there now.
I think this works quite well with IO in Haskell. Most of my code is pure, but the parts which really are not, say OpenGL code, are all marked as such with IO in their type signatures.
Also, the STM monad is the most carefree way of dealing with concurrency I have found.
I don’t really understand how to write tests before the code… When I write code, the hard part is writing the code which establishes the language to solve the problem in, which is the same language the tests will be written in. Also, once I have written the code I have a much better understanding of the problem, and I am in a way better position to write the correct tests.
You write the requirements, you write the spec, etc. before you write the code.
You then determine what are the inputs / outputs that you're taking for each function / method / class / etc.
You also determine what these functions / methods / classes / etc. compute within their blocks.
Now you have that on paper and have it planned out, so you write tests first for valid / invalid values, edge cases, etc.
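For instance, with a hypothetical spec like “parse_age accepts whole numbers from 0 to 150 and rejects everything else”, the tests can be written straight from that planning step, before the function exists:

```python
# Tests written first, straight from the spec: valid values,
# invalid values, and edge cases for a hypothetical parse_age().
def test_parse_age():
    assert parse_age("0") == 0      # edge: youngest allowed
    assert parse_age("42") == 42    # ordinary valid value
    for bad in ("", "-1", "abc", "200"):
        try:
            parse_age(bad)
            assert False, f"expected ValueError for {bad!r}"
        except ValueError:
            pass

# Only then the implementation, written to make the tests pass:
def parse_age(s):
    if not s.isdigit():
        raise ValueError(f"not a number: {s!r}")
    age = int(s)
    if age > 150:
        raise ValueError(f"implausible age: {age}")
    return age

test_parse_age()
```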
There are workflows that work for this, but nowadays I automate a lot of test creation. It's a lot easier to hack a few iterations first, play with it, then when I have my desired behaviour I write some tests. Gradually you just write tests first, you may even keep a repo somewhere for tests you might use again for common patterns.
I want to have a CUDA-based shader that decays the colours of a deformable mesh based on texture data fetched via Perlin noise; it also has to have a wow look, as per designer requirements.
Quite curious about the TDD approach to that, especially taking into account the religious "no code without failing tests" mantra.
Break it down into its independent steps, you're not trying to write an integration test out of the gate. Color decay code, perlin noise, etc. Get all the sub-parts of the problem mapped out and tested.
Once you've got unit tests and built what you think you need, write integration/e2e tests and try to get those green as well. As you integrate you'll probably also run into more bugs, make sure you add regression tests for those and fix them as you're working.
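As a sketch of what “get the sub-parts tested” might look like, here’s a hypothetical colour-decay step pulled out as a pure function so it can be unit-tested on the CPU, with no GPU, mesh or noise involved:

```python
def decay_colour(rgb, amount):
    """One hypothetical sub-part of the shader, isolated for testing:
    fade a colour toward black by `amount` in [0, 1]."""
    r, g, b = rgb
    k = max(0.0, 1.0 - amount)
    return (r * k, g * k, b * k)

# Unit tests for just this piece:
assert decay_colour((1.0, 0.5, 0.0), 0.0) == (1.0, 0.5, 0.0)  # no decay
assert decay_colour((1.0, 0.5, 0.0), 1.0) == (0.0, 0.0, 0.0)  # full decay
assert decay_colour((1.0, 1.0, 1.0), 0.5) == (0.5, 0.5, 0.5)  # halfway
```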
1. Write test that generates an artefact (e.g. picture) where you can check look and feel (red).
2. Write code that makes it look right, running the test and checking that picture periodically. When it looks right, lock in the artefact which should now be checked against the actual picture (green, if it matches).
3. Refactor.
The only criticism I've heard of this is that it doesn't fit some people's conceptions of what they think TDD "ought to be" (i.e. some bullshit with a low level unit test).
You can even do this with an LLM as a judge. Feed screenshots into an LLM judge panel and get them to rank the design 1-10. Give the panel a few different perspectives/models to get a good distribution of ranks, and establish a rank floor for passing the test.
Parent mentioned "subjective look and feel", LLMs are absolutely trash at that and have no subjective taste, you'll get the blandest designs out of LLMs, which makes sense considering how they were created and trained.
LLMs can get you to about a 7.5-8/10 just by iterating on their own output. The main thing you have to do is wireframe the layout and give the agent a design that you think is good to target.
Again, they have literally zero artistic vision, and no, you cannot get an LLM to create a 7.5-out-of-10 web design or anything else artistic, unless you too lack the faculties to properly judge what actually works and looks good.
You can get an AI to produce a 10/10 design trivially by taking an existing 10/10 design and introducing variation along axes that are orthogonal to user experience.
You are right that most people wouldn't know what 10/10 design looks/behaves like. That's the real bottleneck: people can't prompt for what they don't understand.
Yeah, obviously if you're talking about copying/cloning, but that's not what I thought the context here was. I thought we were talking about LLMs themselves being able to create something that would look and feel good to a human, without just "copy this design from here".
TDD fits better when you use a bottom up style of coding.
For a simple example, FizzBuzz as a loop with some if statements inside is not so easy to test. Instead break it in half so you have a function that does the fiddly bits and a loop that just contains “output += MakeFizzBuzzLineForNumber(X);”. Now it’s easy to come up with tests for likely mistakes, and conceptually you’re working with two simpler problems with clear boundaries between them.
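That split might look like this in Python (helper renamed to snake_case; the boundary at multiples of 15 is exactly the kind of likely mistake the tests target):

```python
def make_fizz_buzz_line(n):
    """The fiddly bit, isolated so the likely mistakes are easy to test."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def fizz_buzz(limit):
    """The loop is now trivial; all the logic lives in the helper."""
    output = ""
    for x in range(1, limit + 1):
        output += make_fizz_buzz_line(x) + "\n"
    return output

# Tests target the boundaries where bugs tend to live:
assert make_fizz_buzz_line(15) == "FizzBuzz"  # not "Fizz": check order matters
assert make_fizz_buzz_line(3) == "Fizz"
assert make_fizz_buzz_line(5) == "Buzz"
assert make_fizz_buzz_line(7) == "7"
```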
In a slightly different context you might have a function that decides which kind of account to create based on some criteria which then returns the account type rather than creating the account. That function’s logic is then testable by passing in some parameters and then looking at the type of account returned without actually creating any accounts. Getting good at this requires looking at programs in a more abstract way, but a secondary benefit is rather easy to maintain code at the cost of a little bookkeeping. Just don’t go overboard, the value is breaking out bits that are likely to contain bugs at some point where abstraction for abstraction’s sake is just wasted effort.
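A sketch with made-up account kinds and criteria: the decision is a pure function that returns the type, so the branching is testable with plain parameters and no accounts are ever created.

```python
# Hypothetical account kinds; the real creation code lives elsewhere.
BASIC, PREMIUM, BUSINESS = "basic", "premium", "business"

def choose_account_type(is_company, monthly_spend):
    """Decide which kind of account to create, without creating one."""
    if is_company:
        return BUSINESS
    if monthly_spend >= 100:
        return PREMIUM
    return BASIC

# The logic is tested by looking only at the returned type:
assert choose_account_type(True, 0) == BUSINESS
assert choose_account_type(False, 250) == PREMIUM
assert choose_account_type(False, 10) == BASIC
```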
That's great for rote work, simple CRUD, and other things where you already know how the code should work so you can write a test first. Not all programming works well that way. I often have a goal I want to achieve, but no clue exactly how to get there at first. It takes quite a lot of experimentation, iteration and refinement before I have anything worth testing - and I've been programming 40+ years, so it's not because I don't know what I'm doing.
Not every approach works for every problem, still we’re all writing a lot of straightforward code over our careers. I also find longer term projects eventually favor TDD style coding as over time unknown unknowns get filled in.
Your edge case depends on the kind of experimentation you’re doing. I sometimes treat CSS as kind of black magic and just look for the right incantation that happens to work across a bunch of browsers. It’s not efficient, but I’m ok punting because I don’t have the time to become an expert on everything.
On the other hand, when looking for an efficient algorithm or optimization, I’m likely to know what kind of results I’m looking for at some stage before creating the relevant code. In such cases tests help clarify what exactly the mysterious code needs to do, so that hours or weeks later when inspiration hits you haven’t forgotten any relevant details. I might have gone in a wildly different direction, but as long as I consider why each test was made before deleting it, the process of drilling down into the details has value.
I don't want to insult you, but I had to re-program myself in order to accept TDD and newer processes and there are a lot of systems out there that weren't written with testability in mind and are very difficult to deal with as a result. You are describing a prototype-until-you-reach-done type of approach, which is how we ended up with so much untestable code. My take is that you do a PoC, then throw it out and write the real application. "Build one to throw away" as Brooks said back in 1975.
I get where you're coming from, because I'm about a decade behind you, but resisting change is not a good look. I feel the same way about all this vibe coding and junk--don't really think it's a good idea, but there it is. Get used to being wrong about everything.
It's a matter of practice. The major problem is that business folks don't even know how to produce a testable spec; they just give you some vague idea about what it is they want, and you're supposed to produce a PoC and show it to them so they can refine their idea. If you go and produce a bunch of tests based on what they asked for, but no working code, you're getting fired. The whole process is on its head because we don't have solid engineering minds in most roles, we have people with liberal arts degrees faking it until they make it.
There were a few places I worked where TDD actually succeeded, because the project was fairly well baked and the requirements that came in could be understood. That was the exception, not the rule.
I am not really sure TDD is often compatible with modern agile development. It lends itself well to a more waterfall style, or to clearly defined systems.
If you can fully design what your system does before starting, it is more reasonable. And often that means going down to the level of inputs and states. Think of something like control systems for, say, mobile networks or planes or factory control. You could design the whole operation and all the states that should or could happen before a single line of code.
I was talking about spelling. I can clearly see how these clusters of consonants characteristic of all Slavic languages can be a pain for a beginner, no matter how you spell them.
Pre-Gameboy, when I was a child, my grandfather had a television— the kind that was furniture. Sometimes it would eschew modern trappings like colour and v-sync, and I would employ my Classical Vaudevillian training to set it straight with a wallop.