> Hand-drawn 2D animations often have watercolour backgrounds. Can we convincingly render 3D scenery as a watercolour painting? How can we smoothly animate things like brush-strokes and paper texture in screen space?

There are various techniques to do this. The most prominent one IMO is from the folks at Blender [0] using geometry nodes. A Kuwahara filter is also "good enough" for most people.
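
For the curious, a minimal single-channel Kuwahara filter is easy to sketch. Here's a rough, unoptimized version in Python with NumPy (the radius and the float-image assumption are mine, just for illustration):

  import numpy as np

  def kuwahara(img, r=3):
      # img: 2D float array (one channel); r: quadrant radius
      h, w = img.shape
      pad = np.pad(img, r, mode='edge')
      out = np.empty_like(img)
      for y in range(h):
          for x in range(w):
              # Four overlapping (r+1)x(r+1) quadrants that share the center pixel
              quads = [
                  pad[y:y+r+1,     x:x+r+1],      # top-left
                  pad[y:y+r+1,     x+r:x+2*r+1],  # top-right
                  pad[y+r:y+2*r+1, x:x+r+1],      # bottom-left
                  pad[y+r:y+2*r+1, x+r:x+2*r+1],  # bottom-right
              ]
              # Keep the mean of the quadrant with the lowest variance:
              # flat regions win, so edges stay sharp and the result looks painterly
              out[y, x] = quads[int(np.argmin([q.var() for q in quads]))].mean()
      return out

It's O(h·w·r²) as written, so for real use you'd vectorize it or run it as a shader, but the per-pixel logic is exactly this.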

> When dealing with a stylised 3D renderer, what would the ideal "mesh editor" and "scenery editor" programs look like? Do those assets need to have a physically-correct 3D surface and 3D armature, or could they be defined in a more vague, abstract way?

I haven't used anything else, but Blender + Rigify + shape keys + some driver magic is more than sufficient for my needs. Texturing in Blender is annoying but tolerable as a hobbyist. For more NPR control, maybe DillonGoo Studios' fork would be better [1].

> Would it be possible to render retro pixel art from a simple 3D model? If so, could we use this to make a procedurally-generated 2D game?

I've done it before by rendering my animations/models at a low resolution and calling it a day. Results are decent, but it takes some trial and error. IIRC, some folks have put in more legwork with fancy post-processing to eliminate things like pixel flickering, but I can't find any links right now.
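
If anyone wants to try the naive version, here's roughly what "render low-res and call it a day" amounts to as a post-process with Pillow (the file names and resolutions are placeholders):

  from PIL import Image

  frame = Image.open('render.png')                  # full-res render
  small = frame.resize((160, 90), Image.NEAREST)    # drop to the "pixel" grid
  small = small.quantize(colors=16).convert('RGB')  # crush to a limited palette
  art = small.resize((1280, 720), Image.NEAREST)    # scale back up with hard edges
  art.save('pixel_art.png')

The palette quantization is what sells the retro look; nearest-neighbor everywhere keeps the edges crisp.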

[0]: https://www.youtube.com/watch?v=ljjUoup2uTw

[1]: https://www.dillongoostudios.com/gooengine


The best results I've seen for procedurally generated “old school” style pixel art sprites have come from heavily LoRA-ed diffusion models. You can find some on Civitai. [1]

So the future here may be a 3D mesh-based game engine on a system fast enough to do realtime Stable Diffusion-style conversion of the framebuffer into “AI” pixel art that strictly adheres to the underlying pose and stays consistent frame to frame.

[1] https://civitai.com/search/models?sortBy=models_v9&query=Pix...


Ah, that's very interesting. I'll save that link for future reference.


Snap's auto-updates pissed me off so much that I started Nix-ifying my entire workflow.

Declarative, immutable configurations for the win...


Prediction: the best AI teacher (or just teacher in general) will be the one that is able to emotionally read and guide/manipulate the student towards learning and self improvement.

If such an AI teacher style becomes widespread, this means that they have the potential to replace the parental relationship (in the same manner that AI girlfriends/boyfriends threaten romantic relationships).

I see people talk about the dangers of the AI girlfriend/boyfriend, but not the dangers of introducing AI teachers to (especially young) kids. Nominal adults are already being affected by this (see Replika and company) and they are not even the "best".

If I wear my cynical hat for a second, I'm willing to bet that this parental replacement is a certainty, as an extension of the "screen" parenting that already exists. But this time, it might actually be helpful for the child, so it will be socially acceptable and encouraged.


> the best AI teacher (or just teacher in general) will be the one that is able to emotionally read and guide/manipulate the student towards learning and self improvement

Spot on. It would be interesting to see how one would train a model to do this.


I agree with that point, too.

My guess is that current models are not yet able to provide consistent, effective motivation to learners over the long term. Like a lot of new educational technology, they might be fun and engaging in the short term but will lose their effectiveness as motivators once the shine has worn off.

But the models might become better motivators as they become more multimodal, so that they are able to respond in real time to the student’s tone of voice and facial expressions, and as they acquire longer context windows, so that they are able to adapt their interactions to what the student has done weeks or months earlier.

The biggest issue, I think, is what the OP raises: Whether the bots will be able to—or allowed to—acquire sufficiently human-like personalities and identities so that they can motivate learners as human parents, teachers, and mentors do—by making the learners want to please them, to be praised by them, to avoid angering them, to emulate them.


We already have highly efficient "stand-alone" motivators: they're called video games. And for educational video games, there have been attempts. Some would say it's too difficult to integrate the two fields, but looking at what's been attempted vs. what's actually done in industry, I can't say they've really tried that hard.

The problem, I would imagine, is that the kind of people who go on to make an educational video game will probably lack the necessary creative or aesthetic chops to do it. And no game dev is going to be dreaming of building one either. But if one were to commission a veteran studio instead, we might get better results.


An app that makes my kid feel good about doing math? Sign me up!


Since my neighbouring comment (https://news.ycombinator.com/item?id=40983181) brings up a good point which I also would have brought up, I want to give another point:

At school, there were some classmates who hated math as a school subject but nevertheless loved my ramblings about mathematical topics (well, they were at least more interesting than some other school subjects). Still, I guess my style of motivating people to like and do math would not be loved by parents: it is rather of the style "I'll explain stuff about plant and process engineering to you so that you can build a weed farm that will be harder for the police to detect", i.e. explaining what subversive stuff you can do if you know math. I don't want to go into the details here.

This is deeply motivating for particular kinds of kids (the ones with punk traits) and gets them quite interested in mathematical topics, but of course many parents would hate it because my teaching methods for math turn the child into a bad citizen. ;-)


I agree with your general point -- we should lean into kids' existing motivations rather than wring our hands that kids don't like to do rote practice. Doing anything else is a failing strategy, unless we devise new motivational environments.


Feeling good about math might not be the optimal way of learning, though. What if the algorithm learns that emotional blackmail is the best way of getting someone to learn?


Have you taken a look at the paper "Foreign Function Typing: Semantic Type Soundness for FFIs" [0]?

> We wish to establish type soundness in such a setting, where there are two languages making foreign calls to one another. In particular, we want a notion of convertibility, that a type τA from language A is convertible to a type τB from language B, which we will write τA ∼ τB , such that conversions between these types maintain type soundness (dynamically or statically) of the overall system

> ...the languages will be translated to a common target. We do this using a realizability model, that is, by [setting] up a logical relation indexed by source types but inhabited by target terms that behave as dictated by source types. The conversions τA ∼ τB that should be allowed are the ones implemented by target-level translations that convert terms that semantically behave like τA to terms that semantically behave like τB (and vice versa)

I've toyed with this approach to formalize the FFI for TypeScript and Pyret and it seemed to work pretty well. It might get messier with Rust because you would probably need to integrate the Stacked/Tree Borrows model into the common target.

But if you can restrict the exposed FFI to a Rust sublanguage without borrows, maybe you wouldn't need to.
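
For anyone skimming, here's my rough paraphrase of the paper's setup (notation loosely theirs, details mine, so take it with a grain of salt):

  \tau_A \sim \tau_B \iff \exists\, C_{A\to B}, C_{B\to A}.\;
    \forall e.\ e \in [\![\tau_A]\!] \Rightarrow C_{A\to B}(e) \in [\![\tau_B]\!]
    \quad \text{(and symmetrically for } C_{B\to A}\text{)}

where [[τ]] is the logical relation: the set of target-language terms that semantically behave like source type τ, and the C's are the target-level glue-code translations.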

[0] (PDF Warning): https://wgt20.irif.fr/wgt20-final23-acmpaginated.pdf


Speaking as an aspiring solo gamedev: these extra menus in programs like Blender are super important.

Sure, you can map extra keybinds, but convenient keybinds are actually a scarce resource if you're essentially the "full stack" of the art pipeline (from sculpting, modeling, retopology, and texture painting to animating). This isn't even counting keybinds for any custom tooling.

One thing I really appreciate about Blender specifically is that you can search through all the available operations with F3. This offers a nice trade-off between muscle memory, keybind consumption, and not needing to use the mouse.


Nope, just a classic SQL injection attack on software used by a lot of people...


Thanks for the correction. Gonna have to look into that one. Very unfortunate.



It's not ambiguous because you have a directional reference built into the phrase and you're speaking directly to a singular person: "ask you about your left hand".

It would certainly be clear if one said "left facing stern" or "left facing aft", but that's a mouthful when you can just shorten it (and the reference facing direction is not relevant anyway). Bonus points if the shortened version can't be mistaken for another direction...

BTW, I'm 100% down for introducing dedicated words for "my left", "your left", etc. vs. just "on the left". It would certainly save me a bit of time when my family asks me to look for something and they flip between the two meanings in the same sentence.


This still falls apart because you have to have a point of reference for the front of the boat. See other threads where double-ended ones do not have a fixed starboard and port.

I am still on board with dedicated terms.


The whole point of this post is that we are trying to:

  * Classify the computational power of Transformers (when it stumbles on certain easier problems but can solve harder ones)

  * Find a "minimal" change to the Transformer that would allow it to compute these problems.
Solving these two problems by giving LLMs arbitrary access to external plugins is a cop-out. You would not:

  * Call yourself a chef just because you own a restaurant (you need to cook too!)

  * Or (more program-y), say that C code meets Rust's memory safety standards simply because you can write the main function in C and the rest of the program in Rust
Allowing arbitrary external plugins seems absurdly overkill and not 'minimal' (although that doesn't mean it isn't interesting from a practical perspective!), which is what I assumed rain1 was originally pointing out.


My point is: why must transformers be able to do everything?

edit: I don't mean to dismiss the work trying to figure out what they can do. That seems reasonable and valuable.

It's just, we're not trying to figure out how to tweak QR decomposition to solve arbitrary equations. It's a tool, a powerful tool, but it has some clear limitations.


For all of you April Fools' Day haters, hear me out:

April Fools' Day is actually a super important defense against rogue AI and the Singularity. Think about it: our ancestors had the forethought to coordinate the largest data poisoning attack in history for hundreds of years. Why? To seed enough nonsense in our historical records so that any rogue AI would short-circuit itself into babbling nonsense.

Why do you think it took so long for AGIs like ChatGPT to emerge? Did you really think the AI winter in the 80s and 90s was a "coincidence"? That there were deep architectural and philosophical issues with the approach? Baloney. Without April Fools' Day, we would have become slaves to the Matrix by the late 90s, if not earlier.

Don't believe me? Well, here's proof that AI could have been developed in Medieval Europe by the late 13th century. For being the so-called "Dark Ages", the people of Medieval Europe were incredibly advanced compared to the 21st century, especially when it came to energy production. Get this: by the 11th century, England ALONE had more than 6000 wind and water turbines (Epstein 199) that they all built BY HAND. This allowed them to fine-tune the turbines to their unique environments, making them 100% more efficient than modern, mass-produced metal junk.

Do you know what's even more amazing? Our ancestors knew about gravity and exploited it for power generation! What!!! We had a source of unlimited power by the mid-13th century (Epstein 208)!! But no: in the current age, we can't even muster the political power to make gravity-based perpetual motion machines because all of the physicists would whine about breaking Thermodynamics' laws. Well, screw Thermodynamics! Bastard is holding back all of humanity for personal profit by siphoning all of our hard-earned tax dollars towards solar and nuclear power. Gravity is where it's at!!

But I digress: back to AI in 13th century Medieval Europe. So they had unlimited energy: how could they turn that into useful computations, like calculating SHA-256 hashes with k leading 0s? They had neither electricity nor silicon, or are there more truth bombs to be dropped? In this case, my dear reader, you would be correct to be sceptical. They didn't have any of that: what they DID have was grit, spit, and a whole lot of wooded land. Contrary to popular belief, they DID have computers back then, but they were based on flowing water instead of flowing electrons. They started out as simple time-keepers (Epstein 207), but eventually, medieval scholars (mostly Italian monks) started seeing the connections between flowing water and logic gates (see [1] for how it would have worked).

So they had the energy and they had the computational ability: why didn't AI take over the world in 13th century Medieval Europe? Simply put: the power of Mother Nature. While our ancestors built computers, they were necessarily made out of a combination of wood and iron, both of which don't fare well when in contact with water. So when Medieval People discovered that their AI was Rampant, their solution was to confound it with April Fools' Day nonsense, so that by the time the AI returned to thinking about world domination, its computational structures would already be half-rotten and rusted. This is also why there is scant evidence of these water-based computers: if they were not purposefully destroyed, they would have been by time, as Medieval Europeans had the foresight to abandon all AI research in favor of just thinking. (Coincidentally, the lessons learned from early medieval computers would be taken to heart by the shipwrights and directly contributed to Europe's dominance during the Age of Sail).

So the next time you complain about April Fools' Day, just remember that it has saved society for hundreds of years. It is one of humanity's ultimate defenses against the Matrix, and if you truly care about your loved ones, you would contribute to it. I know I will.

Originally written as LaTeX in Microsoft Word 2003.

* [0] Epstein, Steven A. An Economic and Social History of Later Medieval Europe, 1000-1500. Cambridge University Press, 2009.

* [1] https://www.youtube.com/watch?v=IxXaizglscw


Here's my arbitrary line in the sand: if you give the prompt to a human, they could give a similar reply, but the prompt would also trigger other reactions, such as:

* Who's Daisy?

* Why would Daisy do that?

* Daisy is rude.

etc., all of which imply the existence of some sort of abstract object into which relations and other facts can be plugged. For me, the existence of that abstract object is "reasoning."

We do not know if GPT is capable of forming abstract objects in its network, and I do not think it is reasonable to infer that from its text output. In my non-expert opinion, it seems possible that the output can be achieved via knowledge regurgitation through the use of sentiment analysis, word correlations, and grammar classification.

So in this framing, it's not reasoning about Daisy nor hallucinating facts. It's regurgitating knowledge about the relationship between sentiment, words, and grammar. (An interesting experiment to run would be to change 'Daisy' to a random noun or even nonsense tokens to see what would happen).
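
Something like this throwaway script is all that experiment would take; TEMPLATE here is a made-up stand-in for the actual prompt, and you'd pipe each variant into whatever chat API you're testing:

  import random, string

  # Hypothetical prompt shape; substitute the real one from the experiment
  TEMPLATE = "{name} said she would never speak to me again."

  def variants(n=5):
      subjects = ["Daisy", "Bertrand", "the kettle"]       # real name vs. odd nouns
      subjects += ["".join(random.choices(string.ascii_lowercase, k=6))
                   for _ in range(n)]                      # nonsense tokens
      return [TEMPLATE.format(name=s) for s in subjects]

  for prompt in variants():
      print(prompt)  # send each to the model and compare how the replies shift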

You might argue that the ability to mechanically model that relationship counts as reasoning, and that's a stance I won't outright dismiss. However, it does seem strictly less powerful than mechanically modeling on top of abstract objects.

