I’ve been doing web development since the 1990s, and I’ve always thought spreading a UI element across 3 separate files (often in different locations) was an anti-pattern (or 5+ files in 5 different folders if you want MVC).
React is awesome because it allows feature-aligned separation of concerns (each component has a single job - render everything about a specific element - which is usually a well defined part of a specific use case).
JSX is the most productive UI system I’ve ever used - speaking from experience: I’ve implemented production apps using dozens of UI frameworks/platforms - HTML, WYSIWYG, Flash, WindowsForms, WebForms, Ajax, ASP.NET MVC, Razor, WPF, XAML, Silverlight, Knockout, Handlebars, PhoneGap, Ionic, Bootstrap, MaterialUI, Angular 2, React w/ class components, React w/ MobX, React w/ hooks.
I can tell you the pros/cons of each of those. But at the end of the day, I can develop an entire app in days with React + hooks that would take me weeks in almost any other.
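The "feature-aligned separation of concerns" point doesn't need React itself to illustrate - here's a framework-free sketch of the idea (plain JavaScript, no JSX; `Counter` is a made-up example, not anyone's real code): one function owns everything about one UI element, instead of splitting it across template/controller/style files.

```javascript
// One function owns everything about one element: its data shaping,
// its logic, and its markup all live together in one place.
function Counter(count) {
  const label = count === 1 ? "1 click" : `${count} clicks`;
  return `<button>${label}</button>`; // markup sits right next to the logic
}

console.log(Counter(1)); // <button>1 click</button>
console.log(Counter(3)); // <button>3 clicks</button>
```

In the 3-files-per-element world, that label logic, the markup, and the wiring between them would each live in a different place.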
I agree, I think crime fighting requires tools that operate at the user level.
So, an infiltration bot: we already have the technology (GPT-3-level dialogue) that could infiltrate criminal social networks, gather evidence, build credibility and power, and then help shut everything down.
The system itself cannot provide this, but an AI-human actor could.
Of course, this technology is scary: What I think is not a crime - like complaining about the government - is a crime in other places.
I can see the benefits of monitoring progress like this for a good project manager (one who isn't micromanaging or forcing a schedule, but who can keep the stakeholders at bay with good progress reports and early warnings about schedule adjustments).
On the other hand, an easier-to-implement and more worrisome idea (for me): an AI that monitors your git repo and reports you when you aren't getting things done quickly...
I'm not the author but I've done this before [1], so here's what I can quickly make up (EDIT: updated using information gathered from p01's comment below):
// c is a canvas created outside
d = [ // 2 times audio frequencies used, I think
2280,
1280,
1520,
c.width = 1920,
// d[4] is not used, not sure why this stmt was stuffed into d
// required to hide the PNG bootstrap; the bare minimum would be `0'`, probably this compresses better though?
document.body.style.font = "0px MONOSPACE"
],
g = new AudioContext,
o = g.createScriptProcessor(4096,
// clears the margin and initializes vars
// (t: time in seconds, n: last t when speak occurred)
document.body.style.margin = t = n = 0,
1),
o.connect(g.destination),
o.onaudioprocess = o => { // periodically called to fill the audio buffer, used in place of setInterval
o = o.outputBuffer.getChannelData(
e = Math.sin(
t / 16 % 1, // this is the only arg to sin, others are for shoving exprs into a single stmt
m = Math.sin(Math.min(1, y = t / 128) * Math.PI) ** .5 + .1,
c.height = 1080, // setting canvas.width/height clears the canvas
b.shadowOffsetY = 32420,
// results in `radial-gradient(#222,black` or so, reinterpreting decimal number as hex, the last `)` is not required
c.style.background = "radial-gradient(#" + [222, 222, 222, 222, 155, 155, 102, 102][t / 16 & 7] + ",black",
b.font = "920 32px MONOSPACE",
// each function determines the dot size for 16 seconds, also sometimes used as a display text
f = [
(x, y, t) => x / y * 2 - t,
(x, y, t) => (x ** 2 + y ** 2) ** .5 - t,
(x, y, t) => x / 4 ^ y / 4 - t,
(x, y, t) => y % x - t
][t / 16 & 3],
    // determines a string to print and speaks it every 16 seconds
// the inner [...][t/16|0] can return undefined, which gets coerced to an empty string by `""+[...]`
u = "" + [[, f, f, " CAN YOU HEAR ME", f, f, , "MONOSPACE", "THE END"][t / 16 | 0]],
t > n && speechSynthesis.speak(new SpeechSynthesisUtterance(u, n += 16)))
);
for (i = 0; 4096 > 4 * i; i++) // for each dot; `4096>4*i` probably compresses better than `1024>i`
// calculate the dot size and mix with the radius in the previous frame for easing
// f and g are objects (function and AudioContext), so can be abused as a generic store
g[i] = r = (f(x = 16 - i % 32, a = 16 - (i / 32 | 0), t) / 2 & 1) + (g[i] || 0) / 2,
x += o[0] / 4 + 4 * (1 - m ** .3) * Math.sin(i + t + 8),
a += o[64] / 4 + 4 * (1 - m ** .3) * Math.sin(i + t),
h = x * Math.sin(y * 2 + 8) + a * Math.sin(y * 2),
p = 4096 / (m * 32 + 4 * h * Math.sin(e) + t % 16),
b.beginPath(f[i] = r / p),
b.arc(h * Math.sin(e + 8) * p + 1280,
x * Math.sin(y * 2) * p - a * Math.sin(y * 2 + 8) * p - 31920,
p > 0 && p / (2 + 32 - r * 16),
0,
8), // anything larger than `2*Math.PI` will draw a full circle
b.shadowBlur = o[0] ** 2 * 32 + 32 - m * 32 + 4 + h * h / 2,
// `[a,b,c]` coerces into a string `a,b,c`
b.shadowColor = "hsl(" + [f(x, y, t) & 2 ? t - a * 8 : 180, (t & 64) * m + "%", (t & 64) * m + "%"],
b.fill();
b.shadowBlur = o[0] ** 2 * 32,
b.shadowColor = "#fee";
for (i = 0; 4096 > i; i++) // generate each sample, also prints the glitched text
o[i] = o[i] / 2 + (
(
Math.sin(t * d[t / [4, 4, 4, 4, 1/4, 1/4, 16, 4][t / 16 & 7] & 3] * Math.PI) * 8 +
(t * d[t / 8 & 3] / 2 & 6) + t * d[t / 16 & 3] / 4 % 6
) / 64 + f[i / 4 | 0] // f[0..1023] is the visual data, reused as a noise
) * m,
// prints at most 64 characters of u;
// 0th and 64th samples (o[0] & o[64]) of the prev/current buffer act as x/y jitter,
// first 64 samples also displaces the char offset for the glitched text effect
64 > i & t % 16 * 6 > i &&
b.fillText([u[i + (o[i] * 2 & 1)]], // again, [undefined] coerces into an empty string
i % 9 * 32 + o[0] * 16 + 180,
(i / 9 | 0) * 64 + o[64] * 16 - t - 31920),
t += 1 / g.sampleRate // so t increments by 1 per second
}
While the obfuscation itself is fairly standard, I think the real magic here is the carefully selected motion and jitter - which I can't easily figure out at a glance.
> Do you work directly in the minified code, or do you create the demo in normal code first then look for ways to minimize it?
Also, in my experience you end up structuring everything so that it can be easily minified (by hand or using something like terser-online [2]). This doesn't necessarily mean the code is unreadable (variables can be renamed, statements can be converted to comma expressions, and so on), but the resulting code will be very unorthodox. See the source code of my JS1024 entry for example.
Thanks for the expansion and comments! On your project, I noticed the comments were about 5 times longer than the source code :) - which explains how you're able to work with it.
I do not use any minifier. I minify the code by hand, and typically prototype ideas and performance tests in normal non-minified code. Once the main idea and approach are settled, I minify the code by hand and keep an eye on the heatmap of the DEFLATE stream to match the 1024 bytes limit.
MONOSPACE took ~4 months on and off to create, tallying ~60h of work. You know, 2020, plus trying to balance work & family and remain sane these days.
As I said on my site, the Audio and Visuals feed each other: the background noise is based on each dot, and the camera shake on the Audio. That way the Audio & Visuals stay perfectly in sync, for free ;)
For the X & Y camera shake, I use the values 0 and 64 from the Audio buffer. The 64 is because this is the maximum number of characters rendered by the Text writer which happens in the loop updating the Audio buffer. Using something lower than the 64th value would make the last characters of the Text shake in a different way than the rest.
He is very talented and was nice enough to implement a couple of features to help with this kind of production when I was working on BLCK4777 - https://www.p01.org/BLCK4777
I made 100s of creative projects and failed experiments before getting there. Don't get scared by that number. All it means is that it is possible to do in 60h. Some would take 600, others 20.
I threw that number out to put it in context with the 4 months since the first prototype. I could only work on it from time to time, sometimes not touching the code at all for weeks.
Are you sure it was the production database that was affected?
If you are not sure how a hard-coded script targeting localhost affected a production database, how do you know the database you saw dropped was even the production one?
Maybe you were simply connected to the wrong database server?
I’ve done that many times - where I had an initial “oh no” moment and then realized I was just looking at the wrong thing, and everything was ok.
I’ve also accidentally deployed a client website with the wrong connection string and it was quite confusing.
In an even more extreme case: I had been deploying a serverless stack to the entirely wrong AWS account - I thought I was using an AWS named profile, but I was actually using the default (which changed when I got a new desktop system). I.e. the aws cli uses the --profile flag, but the serverless cli uses --aws-profile. (Thankfully this all happened during development.)
I have now deleted the default profiles from my aws config.
I started building websites in 1997 with FrontPage as a teenager. It was horrible - mostly because I was a teenager and didn’t have anything interesting to say. It also wasn’t easy, and it took forever to get anything done. (So I’d just slap an “under construction” gif on it.)
Now I can build anything I want quickly.
Writing raw HTML is very inefficient. If you want a friendly way to create content, use Markdown.
If you want to structure your content or have some standard components, use React.
Use a static-site compiler like Gatsby - it combines it all, makes your site compete with any hand-written HTML, and can still contain lazy-loaded interactive elements.
My blog is a perfect example: I can write my blog posts in markdown, paste images in as needed, run a build script, and push the changes. It takes about 1 minute to build and deploy.
But, at the same time, I can hide an entire text adventure terminal simulator in the header bar (that lazy loads on click).
My site loads as fast as any static site, but it becomes fully interactive quickly.
Best of both worlds.
So I can produce well formatted content, cool components, or even multiplayer games - it’s perfect.
We have a paradise of tools - it’s a creator’s dream - (once you figure out the right combination)
Interesting discussion. I’d like to ask a sincere question:
Wouldn’t a system A that is capable of encoding another complex system B need to be at least as complex, in order to encode all the information in the result?
It’s like a compression algorithm, you can encode the information, but the complexity level of that information is still there (also the difficulty in compressing the information increases very fast - exponentially or maybe even factorially).
So if the most basic protein sequence requires so many bits of information, wouldn’t anything capable of producing that (in a non-random manner) also require at least that level of information (if not more).
It doesn’t matter what process we call systems A and B.
So it seems if randomness doesn’t solve the problem (because math), then the only conclusion is that there is a fundamental requirement for intentionality.
It's possible for a simple thing to encode something more complex, deterministically.
The prime example is The Game of Life - simple rules from which complex behaviour emerges.
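Those "simple rules" really are simple - the whole rule set fits in one conditional. A minimal sketch in plain JavaScript (the board representation as a Set of "x,y" strings is just one convenient choice):

```javascript
// Conway's Game of Life: a live cell survives with 2-3 live neighbours,
// a dead cell is born with exactly 3. That's the entire rule set.
function step(live) {
  const counts = new Map();
  for (const cell of live) {
    const [x, y] = cell.split(",").map(Number);
    for (let dx = -1; dx <= 1; dx++)
      for (let dy = -1; dy <= 1; dy++)
        if (dx || dy) {
          const k = `${x + dx},${y + dy}`;
          counts.set(k, (counts.get(k) || 0) + 1);
        }
  }
  const next = new Set();
  for (const [cell, n] of counts)
    if (n === 3 || (n === 2 && live.has(cell))) next.add(cell);
  return next;
}

// A "blinker" oscillator: three cells in a row flip between
// horizontal and vertical every generation.
let board = new Set(["0,1", "1,1", "2,1"]);
board = step(board); // now vertical: "1,0", "1,1", "1,2"
```

Everything interesting in the Game of Life - gliders, oscillators, even universal computation - emerges from repeatedly applying that one `step` function.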
This idea of information is one we're putting onto the system, not some inherent attribute. Yes, the encoding of a protein needs to have enough information to produce that protein (or a family of proteins), but that says nothing about the process that created the encoding.
For example, a strand of RNA can be spliced in many different ways to create many different proteins [0] and this process can go weird in many ways. New sequences will arise from this process, even though they weren't 'intended' to.
The Game of Life doesn't produce complex behavior from simple rules.
The complex behavior comes from a large enough random starting state, combined with a very low bar for the minimal complexity needed to see something interesting. And even for a short interesting run of local behavior, the game never produces stable behavior that grows in complexity beyond the initial information encoded in the random state. (I.e., if there is a bubble of cool stuff happening somewhere on the 2D plane, something usually interferes with it and destroys that pattern - like waves in the ocean: even when the energy curves combine to form a wave once in a while, they are limited and temporary.)
So the Game of Life is actually an example that the system is limited to the information encoded in the initial starting state.
In the starting state there is either:
- a large enough random search space (i.e. a million random attempts with a 100x100 board might get something cool looking)
- intentionality (a person can design a starting state that can produce any possible stable system)
Yes, and useful proteins are basically the equivalent of "oscillators" or "spaceships" in the game of life. But most runs of the game of life are not oscillators or spaceships, just like most proteins are useless.
That's why the "initial condition" is so important, and why DNA is so important: without a good "start state", you get useless results—just like in the game of life.
What we are trying to find is not Conway's rules for the game of life, but this: how do we produce useful starting states (DNA) with a physical system? And more importantly, how do we create those starting states preferentially (i.e. non-randomly)?
We still need a model for how useful DNA (which corresponds to the "initial state" in the game of life) gets created. And we have no model for that right now, other than assuming unique random initial states are continually occurring and letting the law of large numbers eventually "find" winners.
For DNA, at least, it could have come from RNA (as per the link in my last post).
While I don't think the pre-biotic problem is solved at all, we have a lot more models of how it could have happened than you seem to credit - this is after all a huge research area.
For example, here is one [0], and here is a whole journal issue on the subject [1].
I found these by searching for 'evolution of DNA' and 'evolution of RNA'.
Now, these models all include some randomness, but in no way does anyone assume "unique random initial states are continually occurring... letting the law of large numbers eventually 'find' winners".
The models show plausible environments where pre-biotic synthesis of RNA (or RNA pre-cursors) can occur, and stabilise.
This model you keep bringing up - randomly selecting a molecule from all possible combinations of atoms and saying 'enough time will get you one that works' - is not mentioned anywhere that I have seen. Perhaps some lay-people (of which I am definitely one!) believe it, but as you point out it is so obviously implausible it falls down on first inspection.
There are other models (lots of them!) and they don't rely on this pure randomness.
Minor side note, but most runs of the game of life actually will produce spaceships and/or oscillators, even starting from a random configuration. (Initialize a 100 x 100 box of cells randomly, and you're virtually guaranteed to get several gliders flying off the resulting mess.)
This may result in applications trying to keep you hooked, addicted, or stuck in unnecessarily long processes.
Subscriptions make sense for dynamic content and services. Fantastical and 1Password are examples of subscription-based platforms that were once buy-to-play (a term from the games industry).
However, if you self-host the data (you probably can and should) or sell your soul to the devil (Google etc.), you have already paid for hosting. So a subscription for the software doesn't make sense from the customer's PoV. You essentially aren't paying for a service, compared to Disney+, World of Warcraft, or Netflix.