So I split the stuff I'm working on into two groups:
Main Projects:
1. cyberbrain ( https://github.com/voodooEntity/cyberbrain )
It is a Go-based architecture for writing event/data-driven applications, built on an in-memory directed graph storage (I also wrote https://github.com/voodooEntity/gits). The point of the system is that instead of writing code where A calls B calls C calls D..., you define single "actions". Each action has a requirement/dependency in the form of a data structure. When a matching structure is mapped into the graph storage, the system automatically creates individual payloads for such action executions. The architecture is multithreaded by default, meaning all "jobs" are automatically processed in parallel without the developer having to care about concurrency. And since every worker thread also "schedules" new "jobs", the system scales very well with a lot of workers.
Why? Well, I mainly developed this architecture for the next project I'm listing.
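To make the idea concrete, here is a heavily simplified Go sketch of the pattern: data mapped into the storage automatically becomes parallel jobs, and workers schedule follow-up jobs themselves. This is not cyberbrain's real API; all types and names here are made up for illustration.

```go
package main

import (
	"fmt"
	"sync"
)

// Entity is a node in a (drastically simplified) graph storage.
type Entity struct {
	Type  string
	Value string
}

// Action declares which data it depends on and what it does with a match.
type Action interface {
	DependsOn() string         // entity type that triggers this action
	Execute(e Entity) []Entity // may emit new entities -> new jobs
}

// resolveIP is a toy action: "when a Domain appears, produce an IP".
type resolveIP struct{}

func (resolveIP) DependsOn() string { return "Domain" }
func (resolveIP) Execute(e Entity) []Entity {
	return []Entity{{Type: "IP", Value: "192.0.2.1 (" + e.Value + ")"}}
}

// Dispatcher matches mapped entities against action dependencies and
// runs the resulting payloads on a worker pool. Workers themselves map
// (and thereby schedule) follow-up entities, so the chain is data driven.
type Dispatcher struct {
	actions []Action
	jobs    chan Entity
	wg      sync.WaitGroup
	mu      sync.Mutex
	graph   []Entity
}

func NewDispatcher(workers int, actions ...Action) *Dispatcher {
	d := &Dispatcher{actions: actions, jobs: make(chan Entity, 64)}
	for i := 0; i < workers; i++ {
		go d.work()
	}
	return d
}

func (d *Dispatcher) work() {
	for e := range d.jobs {
		for _, a := range d.actions {
			if a.DependsOn() == e.Type {
				for _, out := range a.Execute(e) {
					d.Map(out) // scheduling happens inside the worker
				}
			}
		}
		d.wg.Done()
	}
}

// Map stores an entity in the graph and schedules it as a job payload.
func (d *Dispatcher) Map(e Entity) {
	d.mu.Lock()
	d.graph = append(d.graph, e)
	d.mu.Unlock()
	d.wg.Add(1)
	d.jobs <- e
}

// Wait blocks until all jobs (including scheduled follow-ups) are done.
func (d *Dispatcher) Wait() []Entity {
	d.wg.Wait()
	d.mu.Lock()
	defer d.mu.Unlock()
	return append([]Entity(nil), d.graph...)
}

func main() {
	d := NewDispatcher(4, resolveIP{})
	d.Map(Entity{Type: "Domain", Value: "example.org"})
	for _, e := range d.Wait() {
		fmt.Println(e.Type, e.Value)
	}
}
```

Mapping a Domain entity triggers resolveIP, whose output IP entity lands in the graph too; nothing ever "calls" the action directly.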
2. Ishikawa: an automated pentesting/recon tool
Ishikawa does not try to reinvent well-established pentesting/recon tools; instead it utilizes and orchestrates them. The tool consists of actions that either do very simple things, like resolveIPFromDomain, or wrap existing tools like nmap, wfuzz, etc. It collects the info in the central graph, and at the end you get a full mapping of your target. Compared to existing solutions it runs a lot fewer "useless scans" and only fires actions that make sense based on the already gathered data (we found an HTTPS port, so we use sslscan to check the cert...).
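The "only fire what makes sense" part boils down to matching gathered findings against action requirements. A toy Go sketch of such a rule lookup (the rule set and names are hypothetical, not Ishikawa's actual code):

```go
package main

import "fmt"

// nextTools maps a discovered service to recon tools worth running next.
// The table is illustrative only: a real engine would match richer data
// structures from the graph, not just a service name.
func nextTools(service string) []string {
	rules := map[string][]string{
		"https": {"sslscan", "wfuzz"},
		"http":  {"wfuzz"},
		"dns":   {"dnsrecon"},
	}
	return rules[service]
}

func main() {
	// A discovered HTTPS port triggers cert and content scans...
	fmt.Println(nextTools("https"))
	// ...while an unknown service triggers nothing (no useless scans).
	fmt.Println(len(nextTools("smtp")))
}
```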
3. Gits (as mentioned above): an in-memory, threadsafe graph storage. While I don't plan many changes to it, it was developed for cyberbrain, so if I need any additions I'll make them; I'm also planning to re-enable async persistence.
Regarding Ishikawa: while I'm still working on this project, I may end up shutting it down. I had a rather expensive meeting with a lawyer who basically told me that open-sourcing it as a citizen of Germany could potentially open up a LOT of trouble. Right now I'm not sure what the future will bring - I basically spent 10 years developing it, starting with gits, then cyberbrain, to finally build the tool I was dreaming of. Just to hide it on my disk.
Side projects:
1. go-tachicrypt ( https://github.com/voodooEntity/go-tachicrypt )
It started as a fun project/experiment: a very simple CLI tool that lets you encrypt file(s)/directory(ies) into multiple encrypted files, so you can split them over multiple storages or send them via multiple channels. I'm planning to harden it a bit more and give it basic support.
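The underlying idea - encrypt once, then split the ciphertext into parts that are only useful together - can be sketched in a few lines of Go. This is a minimal illustration using AES-256-GCM, not go-tachicrypt's actual implementation or on-disk format:

```go
package main

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encryptAndSplit encrypts data with AES-256-GCM and cuts the result
// into `parts` chunks; every chunk is needed to reconstruct the data.
func encryptAndSplit(data, key []byte, parts int) ([][]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce to the ciphertext so reassembly is self-contained.
	ct := gcm.Seal(nonce, nonce, data, nil)
	size := (len(ct) + parts - 1) / parts // ceil division per chunk
	chunks := make([][]byte, 0, parts)
	for off := 0; off < len(ct); off += size {
		end := off + size
		if end > len(ct) {
			end = len(ct)
		}
		chunks = append(chunks, ct[off:end])
	}
	return chunks, nil
}

// joinAndDecrypt reassembles the chunks (in order) and decrypts them.
func joinAndDecrypt(chunks [][]byte, key []byte) ([]byte, error) {
	ct := bytes.Join(chunks, nil)
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, body := ct[:gcm.NonceSize()], ct[gcm.NonceSize():]
	return gcm.Open(nil, nonce, body, nil)
}

func main() {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	chunks, _ := encryptAndSplit([]byte("secret report"), key, 3)
	plain, _ := joinAndDecrypt(chunks, key)
	fmt.Printf("%d chunks, roundtrip ok: %v\n", len(chunks), string(plain) == "secret report")
}
```

Because GCM authenticates the whole ciphertext, tampering with any single chunk makes the final decryption fail, which is a nice property for data spread over untrusted storages.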
2. ghost_trap ( https://github.com/voodooEntity/ghost_trap )
A very small project I recently put out - nothing too serious, but kinda funny and maybe useful to one or another. It provides:
- A GitHub Action that injects polymorphic prompt injections at the bottom of your README.md, so LLM scrapers may be fended off
- A JavaScript snippet that injects polymorphic prompt injections into your HTML, so more sophisticated crawlers that emulate JavaScript (like Google's) may also be fended off
While I'm working on a lot of other stuff, these are, I think, the most relevant.
If you have multiple images you could use photogrammetry.
In the end, if you want to "fill in the blanks", an LLM will always "make up" stuff based on all of its training data.
With a technology like photogrammetry you can get much better results; therefore, if you have images from multiple angles and don't really need to make stuff up, it's better to use that.
You could use both. Photogrammetry requires you to have a lot of additional information, and/or to make a lot of assumptions (e.g. about the camera, specific lens properties, medium properties, material composition and properties, etc. - and what reasonable ranges of values are in context), if you want it to work well for general cases, as otherwise the problem you're solving is underspecified. In practice, even enumerating those assumptions is a huge task, much less defending them. That's why photogrammetry applications tend to be used for solving very specific problems in select domains.
ML models, on the other hand, are in a big way intuitive assumption machines. Through training, they learn what's likely and what's not, given both the input measurements and the state of the world. They bake in knowledge of what kinds of cameras exist, what kinds of measurements are being made, and what results make sense in the real world.
In the past I'd have said that for best results, we should combine the two approaches - have AI supply assumptions and estimates for an otherwise explicitly formal, photogrammetric approach. Today, I'm no longer convinced that's the case - because relative to the fuzzy world-modeling part, the actual math seems trivial and well within the capabilities of ML models to do correctly. The last few years demonstrated that ML models are capable of internally modeling calculations and executing them, so I now feel it's more likely that a sufficiently trained model will just do the photogrammetry calculations internally. See also: the Bitter Lesson.
It's funny - it always gets stuck at 90% until it fails with the error that another big image may be keeping the server busy.
I mean, OK, it's a "demo", though the funny thing is that if you actually check the CLI and the requests, you can clearly see that the 3 stages the image walks through during "processing" are fake: it just does 1 POST request in the backend that runs while the UI traverses the stages, and at 90% it stops until (in theory) the request ends.
So I've been using Linux desktops for decades now, and about 2 years ago I finally ditched my gaming-only Windows install to go Linux-only for gaming as well.
I mean, it works a lot better than it did before; still, I wouldn't recommend it to someone who isn't ready to tinker in order to make stuff work.
The reason I mention this: while most normal desktop/coding stuff works okay with Wayland, as soon as I try any gaming it's just a sh*tshow. From stuff that doesn't even start (but works when I run it on X) to heavily increased performance demands in games that run a lot smoother on X.
While I have no personal attachment to either of them, and I couldn't technically care less which one to use - if you are into gaming, at least in my experience, X is right now still the more stable solution.
My post runs very contrary to most here, but maybe there is someone else in a similar situation, so I think it may be worth writing.
Background: I've spent the bigger part of the past 20 years of my life continuously extending and enhancing my technical knowledge and skills, mostly in IT/coding but also in some other fields. Meanwhile I kinda let my social life completely degrade, and I always cared more about solving other people's problems than my own.
Therefore, the "skills" I want to improve and develop in 2026:
- Learn to take care of myself instead of always putting others first (it's not my job to save the world)
- Don't try to run at 150% all of the time; rather, slow down
- Care for my health
- Get back into social life
- Actually try not to spend 95% of my free time in front of a screen, and go outside (touch grass)
While I accumulated a lot of knowledge over the past two decades, if I don't start to care for myself more, I probably won't get much benefit from it beyond having accumulated it. Health (physical and mental) is important; neglecting it may work short-term but will kick your ass long-term.
11/10, would read. So much clickbait going around (and let's ignore the articles that "magically" get upvoted but strangely have no comments whatsoever... not sus at all...).
Really nice finding for such a young folk - I really liked reading into it. Also, what I love most about it is what a simple vuln it actually is.
Though what I find most funny about it is how many people are complaining about the $4k.
I mean, sure, the potential "damage" could have been a lot higher; though at the same time there was no contract in place, nor, at least as far as I understood, a clear bug bounty targeted. This was, even if well done, random checking of XHR/requests to see if anything vulnerable could be found - searching for file exposure / XSS / RFI/LFI. So everything paid (especially since this is a Mintlify bug, not an actual Discord bug) is just a nice net gain.
Also, I'll just drop this here: ask yourself, are you searching for such vulns just for the money, or to make the net a safer place for everyone? Sure, getting some bucks for the work is nice, but I personally just hope stuff gets fixed on report.
Preparing a new article atm; not releasing as often as I wish, but I try my best.
PS: find the easter egg without checking the src .)