A classifier for handwritten digits in the MNIST dataset is generally considered the "Hello World" of neural networks. I went over it in a course, but there are countless tutorials to be found online, e.g. https://www.digitalocean.com/community/tutorials/introductio...
Once you begin to understand how to handle data and how to define layers, you can start playing around with whatever your heart desires. The rabbit hole is vast and endless :)
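If it helps, the whole exercise fits in a dozen lines. Here's a minimal sketch using Keras (my own example with an arbitrary hidden-layer size; the linked tutorial uses its own setup):

```python
import tensorflow as tf

# 60k training / 10k test images of 28x28 grayscale digits
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # image -> 784-vector
    tf.keras.layers.Dense(128, activation="relu"),    # one hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))
```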
Herculaneum was one of the highlights of my trip to Italy with my wife. I didn't realize just how much ash and soil had to be removed for excavation: dozens of meters of it [1]. It's an absolute shame that the site gets a fraction of the attention Pompeii receives; I thought it was vastly better preserved and truly awe-inspiring [2].
I highly recommend spending a few hours wandering the site; it is an absolute wonder.
I enjoyed the attention given to Herculaneum in a computer game called Rome: Pathway to Power (released in 1992). You start the game as a slave who has to escape Herculaneum before Vesuvius erupts. I loved the game as a kid. It's sort of like an isometric immersive sim (with a clunky interface). It got me interested in ancient Rome.
The modern Italian town of Ercolano lies directly over Herculaneum, so excavating the rest of the ancient town is a bit tricky. Only about a quarter has been excavated so far, in contrast to Pompeii, which is roughly two-thirds excavated.
I use iTerm in visor mode, so ctrl-~ (tilde) shows my terminal tabs and moves focus to them. Pressing it again hides them and returns focus to the previous app.
Game developers getting creative due to hardware limitations is a topic I will never grow tired of. Historical gamedev journalism has grown considerably in recent years, with outlets like NoClip (https://www.noclip.video/) producing phenomenal content.
Fabien Sanglard's Game Engine Black Books (https://fabiensanglard.net/gebb/) are also fascinating reads. Both the Doom and Wolfenstein 3D volumes sit on my living room coffee table.
Game devs have always done amazing things like this.
I remember my one and only real attempt at game dev involved the ZX Spectrum; I got a book from my local library with all sorts of inspiring tricks in it. For people who don't know/remember, the Speccy output to a TV [1] but didn't have enough graphics juice to actually fill the screen, so you rendered into a pane in the center of the screen, and the only thing you could do with the outside border was set its color [2].

Anyhow, this Spectrum game dev book showed how to build a flight simulator game in assembly, and one of the incredible hacks it employed was to change the color of the border at exactly the right moments in the scan path of the CRT, so it looked like you had a continuous horizon all the way to the edges of the screen. You couldn't actually render any graphics out there, but it blew my mind that you could at least make the sky and land stretch right to the edge of the screen (a rough sketch of the idea follows the footnote).
[1] Which would have been a CRT at the time, which turns out to be important for this hack.
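This is just my own Python illustration of the decision that routine was making, not the book's code; on real hardware it meant counting Z80 T-states so the write to the border port landed exactly when the beam crossed the horizon:

```python
# Standard Spectrum colour numbers: cyan sky, green ground (any two would do)
SKY, GROUND = 5, 4

def border_colour(scanline, horizon):
    """Pick the border colour per scanline so sky/ground reach the screen edge."""
    return SKY if scanline < horizon else GROUND

# A frame's worth of border writes for a horizon halfway down the 192-line pane
border_per_line = [border_colour(y, horizon=96) for y in range(192)]
```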
These kinds of tricks were also very common on the NES, used to create things like a HUD that doesn't scroll with the rest of the screen by changing the scroll values at the right time. Eventually developers even started putting hardware in the cartridge to assist with this by interrupting the CPU at the right moment.
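For intuition, here's what that split amounts to, as a purely software model of my own in Python (a real NES instead rewrites the PPU scroll register mid-frame, e.g. after a sprite-0 hit or a mapper IRQ):

```python
def compose_frame(hud_rows, world_rows, scroll_x, split_row):
    """Rows above the split ignore scrolling; rows below wrap horizontally."""
    frame = []
    for y, row in enumerate(world_rows):
        if y < split_row:
            frame.append(hud_rows[y])                      # HUD band: fixed
        else:
            frame.append(row[scroll_x:] + row[:scroll_x])  # playfield: scrolled
    return frame
```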
The Amiga may have taken this the furthest with the Copper, a co-processor that was synced with the CRT signal and programmed in a DSL for timed changes of register values in the other chips: https://en.wikipedia.org/wiki/Amiga_Original_Chip_Set#Copper
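To give a feel for it, a Copper program is just data: pairs of 16-bit words in chip RAM, where WAIT stalls until the beam reaches a raster position and MOVE writes a custom-chip register. Below is my own Python sketch of the documented encoding, building the classic raster-bar gradient by rewriting COLOR00 (the background colour register) every 16 lines; it's not code from any shipped game:

```python
COLOR00 = 0x180  # custom-chip register offset for the background colour

def cop_wait(vpos, hpos=0x07):
    # WAIT: bit 0 of the first word set; second word is the compare mask
    return [(vpos << 8) | (hpos << 1) | 1, 0xFFFE]

def cop_move(reg, value):
    # MOVE: first word is the (even) register offset, second is the value
    return [reg, value]

copper_list = []
for i, line in enumerate(range(0x30, 0xF0, 0x10)):
    copper_list += cop_wait(line)        # wait for this raster line...
    copper_list += cop_move(COLOR00, i)  # ...then deepen the blue one step
copper_list += [0xFFFF, 0xFFFE]          # WAIT for an impossible position = end
```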
This is a channel that does deep dives on specific games, often at the level of carefully explained assembly code, covering glitches and graphical techniques on 8-bit and 16-bit consoles.
I'd generalize it: every time someone finds a creative hack to do something a device shouldn't be capable of, you get a magic moment. The opposite is also somehow true... people today can open an infinite world in Unity with physically based renderers, and it seems quite boring.
I agree with your generalization now that I give it some thought. Even for topics I have little interest in or know little about, creative solutions are enthralling.
Your latter point is why, IMO, even arbitrary limitations commonly inspire interesting gameplay and art styles. I quickly grew tired of Assassin's Creed and similar open-world games; in contrast, games like The Messenger can pull me in immediately.
The (admittedly few) game jams I've actually completed had tight constraints (e.g. 8/16-bit art and gameplay). Two things make me more productive and creative than normal: looming deadlines and limitations.
Like the saying: necessity is the mother of invention.
I also think there's a balance between the breadth of a technology and its use. It feels strange to see just how much software can do, yet all of it in service of games... it's like having a complete nuclear lab team to help you change the batteries on your LED lightsaber...
I have issues with memory and retention due to a traumatic brain injury from a car accident in high school, and I am incredibly dependent on Obsidian as a sort of "second brain."
While I've only adopted about half of the methods outlined in Tiago Forte's book, Building a Second Brain [1], it's been very effective for me. I prefer a hierarchical folder structure for organization, but I do use his overall PARA structure.
I also use an "inbox" or intake folder, inspired by Zettelkasten, for newly created notes. I really believe the significant cognitive overhead of sorting/tagging/organizing gets in the way of getting your thoughts written down. I generally spend ~10-15 minutes after getting the kids to bed organizing any notes created throughout the day. This is part of my wind-down routine, which also involves quickly journaling an overall summary of the day in my daily note and migrating any outstanding TODOs to the next day.
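For concreteness, my vault looks roughly like this. The four PARA categories are Forte's; the Inbox and Daily folders (and any nesting below them) are just my own arrangement:

```
Inbox/       <- Zettelkasten-style intake; every new note lands here first
Projects/    <- active efforts with a deadline or concrete outcome
Areas/       <- ongoing responsibilities (health, finances, ...)
Resources/   <- reference material, grouped by topic
Archives/    <- anything inactive from the other three
Daily/       <- daily notes, end-of-day summaries, migrated TODOs
```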
IMO though, the most important thing is to use whatever method of structure/routine/organization works for you. Approach it as an iterative process and play with interesting ideas or methodologies.
One thing in Tiago Forte's BASB that I _strongly_ agree with: regardless of how much organization you put into your digital notes, search is often the fastest way to find what you're looking for, so spending immense time on organization works against the reason you take notes in the first place. Spending some time organizing your thoughts can surface connections between notes you hadn't initially noticed, but it's a slippery slope: it's easy to get lost in the process of structuring your notes until that becomes the sole purpose of your documented thoughts.
It's really surprising how awful the official Python docs are, considering how much the language has grown of late. If I need to reference the core Python docs these days, I almost always go to the version on devdocs.io [1].
Thankfully, most of the reference documentation I have to look up is for popular data science libraries like pandas, whose documentation [2] is so much cleaner than core Python's.