Scary. Glad it turned out okay. The rule should be: any standing water feature less than 3' (0.9m) above grade should be enclosed by a fence to a height of 4' (1.2m) above grade if kids are anticipated to occupy the space.
And, for oceans, rivers, and flood water: remember that 1m³ of water weighs 1.1 US tons (1 metric tonne).
You also have to weigh the benefits against the risks.
For example, if you use FreeTube with SponsorBlock to improve your privacy and block ads, you are in fact sending 100% of your YouTube watch history to Cloudflare and to SponsorBlock ("sponsor.ajay.io").
With Piped instances it's even worse: you escape Google's tracking just to hand your data to random strangers.
If you are worried, just run a second Chrome session with NordVPN and uBlock Origin in a loose jurisdiction and browse YouTube unlogged.
It's easy, simple, and you get the benefits of an audited platform that has reasonably confirmed, in legal terms, that they don't store logs unless a court forces them: "we never log their activity unless ordered by a court". And for that, the court first has to identify you as a user, which can be very complicated in practice.
> People can ‘git clone’ my code, and there’s a web-based browsing interface (the basic gitweb) for looking around without having to clone it at all.
I host my own public Git repositories, but statically--read-only, no HTML views. I don't want to run any specialized server-side code, whether dynamic or preprocessed, as that's a security vector and system administration maintenance headache I don't want to deal with. You can host a Git repository as a set of static files using any web server, without any special configuration. Just clone a bare repo into an existing visible directory. `git update-server-info` generates the necessary index files for `git clone https://...` to work transparently. I add a post-receive hook to my read-write Git repositories that does `cd /path/to/mirror.git && git fetch && git --bare update-server-info` to keep the public repo updated.
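The whole setup fits in a few commands. A runnable sketch (the paths and repo name here are throwaway stand-ins for a real web root like /var/www/git):

```shell
# Minimal static Git hosting sketch (illustrative paths/names).
set -e
WEBROOT=$(mktemp -d)   # stand-in for the web-visible directory
SRC=$(mktemp -d)/src   # stand-in for the read-write repository

# A throwaway source repo with one commit, so the mirror has content.
git init -q "$SRC"
( cd "$SRC" \
  && echo hello > README \
  && git add README \
  && git -c user.name=demo -c user.email=demo@example.com commit -qm init )

# Bare clone into the web root...
git clone -q --bare "$SRC" "$WEBROOT/project.git"

# ...and generate the info/refs index that lets "dumb" HTTP clients
# clone from any static file server, no server-side Git needed.
git --git-dir="$WEBROOT/project.git" update-server-info
```

After that, anything that serves `$WEBROOT` as static files can answer `git clone https://host/project.git`. The post-receive hook in the text is just the "refresh" version of the last two steps.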
In theory something like gitweb could be implemented purely client-side, using JavaScript or WASM to fetch the Git indices and packs on-demand and generate the HTML views. Some day I'd like to give that a try if someone doesn't beat me to it. You could even serve it as index.html from the Git repository directory, so the browser app and the Git clone URL are identical.
This is probably just nostalgia, as I was the right age to sink hours into the golden age of Ultima IV-VII when I was younger, but I still think these are the best roleplaying games ever made, and by an absolutely gigantic mile. Every time I try a new RPG, I initially have this feeling like, will this be like Ultima, will this be like Ultima, but I always end up disappointed.
The best way of describing what makes them so great is that they avoid everything feeling like one of those fake-cardboard-cutout Western movie sets. Every other RPG I've played feels like this, the Infinity Engine games like Baldur's Gate (I've only played 1, not 2) being the canonical example. Every time I run into an NPC or situation in Baldur's Gate, it feels like the characters start talking through a script that was written just for me, the player, to set up some problem for me, the player, to solve. This is of course the very definition of immersion breaking, because the artificial setup draws attention to the fact that you're playing a game; you're not actually in a real, believable world. Baldur's Gate has fantastic combat (an area where Ultima VII is terrible), but I think the way the story is set up and told is boring and uninspired. And it's similar for literally every other RPG I've played: Mass Effect ("Hi, I'm an alien from a race you've never met, would you like me to tell you everything about how my race fits into the universe?"), Skyrim (Bethesda, masters of the anonymous, faceless NPC), the Witcher/Cyberpunk (the CD Projekt Red games are actually masters of this style of game design, because they use it as scaffolding for easily the best writing ever in video games, but they're still hampered by the inherent weakness of the format: the world feels like a prop to set up quests for the player to knock down).
In contrast, the Ultima games feel like they create the world first, so that it feels alive and believable. And I don't mean by writing a bunch of lore (writing already has its format: books; use game mechanics to tell your story), but by creating a world piece by piece, character by character, city block by city block, room by room, down to each piece of furniture, each individual dresser. Environmental storytelling, game-mechanic storytelling, storytelling native to the format of games. The tavern goes here, the barber lives here, these three friends meet at this pub, at this time every day, and discuss this. Ultima does this for every town and every character in the game, even the most trivial NPC. There are no anonymous, faceless, story-less NPCs acting as walking props like in every other RPG. The world becomes a real, believable place, one you could just sit and watch and have it be interesting, like people watching through a cafe window: a world existing through an intersection of mechanics (how NPCs move, day-night cycles, how they interact with the environment, e.g., the classic "using flour to bake bread"). Only then are the player-driven interactions built on top of this world. E.g., if you hear a rumor that the shopkeeper seems to disappear for a couple of hours after their shop closes each night, you can wait till 5 PM, follow them, and see what they're up to. Since everything is simulated to this degree, it doesn't feel like you've entered a pre-programmed scenario for following just this one NPC; you can follow anyone in the game this way. It just so happens that some NPCs might do something interesting when you follow them, like maybe you see them hide a key under a plant and you can go investigate.
This way of having the player-driven gameplay come directly from mechanics that existed first to make a believable world, just makes for more interesting games in my opinion than anything that has come after. A game that's just a series of scripted encounters for the player to knock down is just less interesting.
The real prescient threat in that movie was the predictive AI algorithm that tracked individual behaviors and identified potential threats to the regime. In the movie they had a big airship with guns that would kill them on sight, but a more realistic threat is the AI deciding to feed them individualized propaganda to curtail their behavior. This is the villain's plot in Metal Gear Solid 2, which is another great story.
> Your persona, experiences, triumphs, and defeats are nothing but byproducts. The real objective was ensuring that we could generate and manipulate them.
It's really brilliant to use a video game to deliver the message of the effectiveness of propaganda. 'Game design' as a concept is just about manipulation and hijacking dopamine responses. I don't think another medium can as effectively demonstrate how systems can manipulate people's behavior.
Bazzite is the first time I feel like PC gaming on a TV is as convenient as a console. And you can put it on a $150 mini PC from AliExpress and play pretty much anything that isn't an AAA game from the last 3-5 years or a competitive online game with unsupported anticheat, including most emulators. It's truly an amazing achievement.
The best sales people I've worked with were incredible strategic thinkers and not really sales people at all.
They built an intimate knowledge of their customer and their industry, built strong connections with the top brass of their client by delivering exceptional work that got those people promoted, and were really good at building autonomous teams that could get the (exceptional) work done with their guidance on the customer/industry/client. These folks would also often deliver very difficult messages to their clients, which often resulted in more business not less.
The sleazy sales people can build decent pipelines/sales numbers but they are not what I would ever label as 'elite'.
The article discusses the specific use case of serverless computing, e.g. AWS Lambda, and how a central database doesn't always work well with apps constructed in a serverless fashion.
I was immediately interested in this post because 6-7 years ago I worked on this very problem: I needed to ingest a set of complex hierarchical files that could change at any time, and I needed to "query" them to extract particular information. FaaS is expensive for computationally heavy tasks, and it didn't make sense to load big XML files and parse them every time I needed to do a lookup in any instance of my Lambda function.
My solution was to have a central function on a timer that read and parsed the files every couple of minutes, loaded the data into a SQLite database, indexed it, and put the file in S3.
Now my functions just downloaded the file from S3, if it was newer than the local copy or on a cold start, and did the lookup. Blindingly fast and no duplication of effort.
One of the things that is not immediately obvious about Lambda is that it has a local /tmp directory that you can read from and write to. Also, the Python runtime includes SQLite; no need to upload any code besides your function.
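The read side of this pattern can be sketched in a few lines. This is not the original code, just a hedged reconstruction; the bucket and key names are made up, and the boto3 client is passed in so the refresh decision itself stays testable:

```python
import os
import sqlite3

DB_LOCAL = "/tmp/lookup.db"  # Lambda's writable scratch space, survives warm starts

def is_stale(local_mtime, remote_last_modified):
    """Refresh when there is no local copy or S3 has a newer one."""
    return local_mtime is None or remote_last_modified > local_mtime

def get_db(s3, bucket="my-data-bucket", key="lookup.db"):
    """Download the prebuilt SQLite file only when needed (bucket/key are illustrative)."""
    local_mtime = os.path.getmtime(DB_LOCAL) if os.path.exists(DB_LOCAL) else None
    head = s3.head_object(Bucket=bucket, Key=key)
    if is_stale(local_mtime, head["LastModified"].timestamp()):
        s3.download_file(bucket, key, DB_LOCAL)
    return sqlite3.connect(DB_LOCAL)
```

On a warm start the file is usually already present and current, so the only overhead is one HEAD request before the lookup.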
I'm excited that work is going on that might make such solutions even faster; I think it's a very useful pattern for distributed computing.
In the war I fought in, there was a markedly different approach to combat air ops between even just the different service branches. The Army tended to use the Apache like it was a flying tank, hovering and delivering ordnance; the Marines (which is where I fought) flew as low and as quickly as possible even while sending rounds and Hellfire missiles downrange. The Marines were not losing aircraft in anywhere close to the numbers of the Army when I was there. You need a skilled operator to hit a helicopter moving at 100 knots when it's 15m over the buildings, and we mostly operated at night. I remember watching a ZPU gunner pointing the cannons directly at us and firing, and laughing as the rounds just flew behind the tail.
It's my understanding (from watching the videos that I can get as a civilian) that the Russians still aren't operating their helicopters in a manner I would be comfortable with if I was inside one. I certainly wouldn't be pumped flying in the environment they are in, with so many MANPADS out there, but there is no way a machine I was in would have been hit by an anti-tank missile while we hovered (as described in the article).
Lot of preamble to say: no, I don't think the attack helicopter is dead. Attack helicopters are nimble and can hide in terrain quite well, and even when an attacking force can see them it takes a skilled operator to actually hit them. The single-use drones that operate like kamikaze vehicles may throw a wrinkle into the mix, but a helo flying at 150 knots is going to be very challenging to hit for one of those. I expect there will be quite an arms race of countering and counter-countering these in future wars.
I think this somewhat misses an important nuance. Japanese PCs had to be different early on because of the complexities of the written language. In Western markets, all of the important characters could be handled in just a few bits (7 or 8) and rendered at low resolution, with different fonts and character maps dropped in to support a few different alphabets.
But in CJK countries, things were much harder and the entire I/O system had to be significantly more capable than what might pass for usable elsewhere. This meant larger ROMs, larger framebuffers, higher resolution displays, more complex keyboarding systems, the works. Everything was harder and more expensive for a long time. A common add-on was ROMs with Kanji (Chinese derived characters) support in the same way a person in the West might buy a new sound card or get a VGA card. Except this was just so you could use your new $1200 computer (in today's money) to write things on.
Back then, given limited memory, you also ended up with a ton of different display modes that offered different tradeoffs between color, resolution, and refresh. Because of the complex character sets, these Japanese systems tended to focus on fewer colors and higher resolution, while the West focused on more colors at a lower res in the same or less memory space (any fans of mode 13h?). The first PC-98 (the 9801) shipped in 1982 with 128K of RAM and a 640x400 display with special display hardware. The equivalent IBM PC shipped with 16KB of RAM and CGA graphics, which could give you a display no higher than 640x200 in 1-bit color but was mostly used at 320x200 with 4 (terrible) colors.
Even with similar base architectures, these formative differences meant that lots of the guts of the systems were laid out differently to accommodate this -- especially in the memory maps.
By the time "conventional" PCs were able to handle the character display needs (sometime in the mid-90s), they were selling in the millions of units per annum, which drove down their per-unit prices.
The Japanese market was severely fragmented, with a smaller addressable market. Per-unit costs were higher, but the software was largely the same. Porting the same businessware to half a dozen platforms cost too much. So the average user of the Japanese systems had a smaller library of software, which was more or less a copy of what was on IBM PCs, on more expensive hardware -- market forces solved the rest.
(btw, the FM Towns, IIRC, also had specialized graphics hardware to produce arcade-like graphics with tiles and sprites and so on, making it even more different)
Some of this history also informs why home computing lagged in Japan compared to the West despite having all of the other prerequisites for it to take off.
The clue we all had with OpenAI for a long time was that this was a search through a tree: they hired Noam Brown, and his past work all hinted at it. Q* is obviously a search over a tree, like A*. So take something like CoT, build out a tree, and search it for the best solution. The search is the "system-2 reasoning".
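Nobody outside OpenAI knows the actual algorithm, but the general shape is easy to sketch. A generic best-first search, where `expand()` stands in for a model proposing next reasoning steps and `score()` for a learned value model (every name here is made up for illustration):

```python
import heapq

def best_first_search(root, expand, score, is_goal, budget=100):
    """Best-first search over a tree of candidate 'thoughts'.
    expand(node) -> children; score(node) -> higher is better."""
    frontier = [(-score(root), root)]          # max-heap via negated scores
    visited = 0
    while frontier and visited < budget:
        _, node = heapq.heappop(frontier)      # most promising node first
        visited += 1
        if is_goal(node):
            return node
        for child in expand(node):
            heapq.heappush(frontier, (-score(child), child))
    return None                                # budget exhausted
```

With toy integers in place of reasoning chains, searching for 10 from 0 by steps of +1/+3 (with score = closeness to 10) homes in on the goal without exploring the whole tree.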
With the AirPods now officially becoming hearing aids, it will hopefully reduce the stigma and attitude towards hearing aids and allow many more people to realize how bad their hearing actually is.
I have been wearing hearing aids for a few years now (Phonak). I've also used the AirPods Pro with the accessibility audiogram feature (basically making them hearing aids), which is really good and has also been around for a few years. I'm very glad that Apple has made this official and even gotten FDA approval.
When I started to lose my hearing a decade ago, for a long time I refused to wear hearing aids, probably due to the perceived stigma. Even though it made life harder and harder -- imagine work meetings with a mumbling boss, or me accusing my family of intentionally whispering -- it took years to change my mind. In hindsight I should have gotten hearing aids years sooner.
My 'real' hearing aids are nothing short of a technological marvel. They are tiny and run for a few days on zinc-air batteries (size 312, Costco's but made by Varta), while providing all-day BT streaming. Btw, funny how most hearing aid brands come from Denmark. In contrast, the AirPods run out after a few hours and are also destined to become landfill due to their built-in battery.
For context, the contemporary commercial merchant fleet is about 80,000 ships, roughly a third of which are bulk liquid carriers (a/k/a oil tankers). As a percentage, that's actually down from the 1970s/80s when half of all commercial ships were tankers. Most of the growth has been in container ships.
A consequence was the US government building the first long-distance oil pipelines, the "Big Inch" and "Little Big Inch" pipelines from east Texas to refineries on the Atlantic seaboard in New Jersey. They remain in use.
I've also realised that both whales and large-scale commercial shipping rely on similar circumstances: the ability to on- and off-board cargo (or food) rapidly, widely-separated ports (or feeding grounds), and no significant predators (or war / piracy hazards). Whales are a remarkably recent evolutionary development, with the large great whales dating back only about 5 million years. Similarly, bulk shipping required not only global markets but cargos which could be handled in aggregate, whether liquids (as with petroleum), dry solids (mostly ores), or containerised miscellaneous cargo, the latter being premised on standardisation. Canals, safe shipping routes, and quayside cargo handling capacity were also prerequisites.
I have a PhD in Physics from Berkeley. Still, in the strictest real estate sense, my best and highest use is as a dishwasher.
I was managing a small optics factory in Livermore. We made laser mirrors to order there — any wavelength, any reflectivity, any angle, any polarization, you name it. I was working my ass off and was unmarried at the time. Thanksgiving came around and I had nowhere to go. But I hooked up with a church in San Jose that had a dinner for poor people and went as a volunteer. After serving the dinner, I wandered back into the dish room. I immediately went over to the sink and kicked out the lady who was pretending to work. I then washed all the dishes and left.
Comes one year later. I’m in the exact same situation. I call up that church. The lady says, “Oh, that’s very nice of you. But we don’t need any more volunteers, we have enough.” Oh shit. What to do. I found my old replica army parka I had bought in a Cambridge surplus store 20 years earlier. I went as a poor person. I didn’t want to be alone.
The first thing I found out is that poor people are herded, controlled, treated like children. We had to wait outside the church in the mild cold until permitted to enter, in a kind of line. So I sit down. I will never forget the beatific smiles of the volunteers that served us. This was performance art, and they were the stars. Everyone on the supplicant end noticed this, I’m sure. So the meal ends. I walk back into the dish room, survey the situation, once again kick out whoever was pretending to do the dishes, do the dishes, and leave.
You don't seem like a person who needs workout advice from HN, but in case anyone else reading this does:
> Running hurts my knees
I've been following ATG (from Ben Patrick, the "knees over toes guy"; google/youtube it) for about 2 years and it has made a tremendous difference to what my knees can do; I have a torn meniscus and the routines he suggests have allowed me to regain full functionality.
It generates two files: one with a line per uninterrupted stretch of speech, prefixed with the speaker number, and one with timestamps, which I believe could be used as subtitles.
E Ink, the company, holds the patents on the core pigment tech that makes "paper-like" displays possible, and strongarms the display manufacturers and the users of their displays into absolute silence. Any research project or startup that comes up with a better alternative technology gets bought out or buried by their lawyers ASAP.
E Ink doesn't make the displays themselves; they make the e-ink film, filled with their patented pigment particles, and sell it to display manufacturers who package the film with glass and a TFT layer and add a driver interface chip. All of this is proprietary AF, and unless you're the size of Amazon, forget about getting any detailed datasheets on how to correctly drive their displays to get sharp images.
In my previous company we had to reverse engineer their waveforms in order to build usable products even though we were buying quite a lot of displays.
With so much control over the IP and the entire supply chain and due to the broken nature of the patent system, they're an absolute monopoly and have no incentive to lower prices or to bring any innovations to the market and are a textbook example of what happens to technology when there is zero competition.
So, when you see the high prices of e-paper gadgets, don't blame the manufacturers, as they're not price gouging, blame E-ink, as their displays make up the bulk of the BOM.
Though, some of their tech is pretty dope. One day E Ink sent over a 32" 1440p prototype panel with 32 shades of B&W to show off. My God, was the picture gorgeous and sharp. I would have loved to have it as a PC monitor, so I tried building an HDMI interface controller for it with an FPGA but failed due to a lack of time and documentation. A shame, although not a big loss: the estimated cost was in the five-figure ballpark, and the current consumption was astronomical, sometimes triggering the protection of the power supply on certain images.
You might enjoy the oldest active financial bond, a parchment manuscript from the Dutch water authority 1624.
It still pays a small amount of interest (a bit over 13 euros per year) provided that the bearer (currently the New York Stock Exchange) shows up every few years to collect it.
It takes quite a bit of work to maintain nuclear warheads. All active US weapons contain plutonium 239, which has a half life of 24,100 years. It's radioactive by alpha decay, which leads to changes in the material properties due to energetic collisions and the buildup of microscopic helium bubbles (alpha particles are merely ionized helium nuclei, so stopped alpha particles become helium). Since the US stopped testing actual nuclear warheads in the early 1990s, it takes a great deal of indirect theoretical and experimental evidence to make sure that nuclear warheads are reliable without live fire tests. That's part of "stockpile stewardship." [1] If the plutonium has deviated too far from its original mechanical behavior, it would need to be removed from warheads, purified, and remanufactured into replacements that match the original specs. And again, the rebuilt components need to be reliable but they can't actually be tested via explosion.
US weapons also rely on tritium gas "boosting" to operate reliably and efficiently [2], and tritium decays with only a 12.3 year half life. The gas reservoirs of weapons need their tritium replaced at significantly shorter intervals. Even manufacturing enough tritium to maintain the stockpile has become a challenge because the US has retired its Cold War era weapons-material reactors that used to operate at Hanford and Savannah River. Currently the US uses a power reactor owned by the Tennessee Valley Authority to make tritium for weapons [3].
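The maintenance cadence falls straight out of the half-lives. A quick sketch of the decay math:

```python
def fraction_remaining(years, half_life_years):
    """Radioactive decay: N(t)/N0 = 2^(-t / t_half)."""
    return 2.0 ** (-years / half_life_years)

# Tritium (t_half = 12.3 y): half the boost gas is gone in one half-life,
# so reservoirs need topping up on a cadence of years.
# Pu-239 (t_half = 24,100 y): barely decays on human timescales, but the
# helium from each alpha decay still accumulates in the metal.
print(fraction_remaining(12.3, 12.3))   # tritium after 12.3 years -> 0.5
print(fraction_remaining(12.3, 24100))  # Pu-239 over the same period -> ~0.9996
```

The asymmetry is the point: tritium is a consumable on a schedule of years, while plutonium degrades slowly through accumulated radiation damage rather than through loss of the isotope itself.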
It's possible to make nuclear weapons (even thermonuclear weapons) with only uranium 235 for fissile material and no stored tritium. Such weapons could last a much longer time without active maintenance, since U-235 decays thousands of times slower than Pu-239. However, they would be larger and heavier for the same explosive yield, which complicates delivery. They would also lose certain safety features. Finally, without being able to perform full scale tests, it is doubtful that the US would have the confidence to replace its current high-maintenance weapons stockpile with a new generation of low-maintenance weapons.
[0] Which somewhat confusingly credits "John Barkaus's LZW and GIF Explained" which actually turns out[1] to be... the text of "LZW and GIF explained----Steve Blackstock" via a news group post by John Barkaus (and is the text at OP's HN post URL).
[2] Including this fantastic example of visualizing individual sections of a multi-part binary file format, the UX/UI of which I think would be a great addition to a "hex viewer" application: https://www.matthewflickinger.com/lab/whatsinagif/gif_explor...
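For anyone who hasn't read the article being credited: the LZW core it explains fits in a few lines. A plain byte-oriented sketch (deliberately ignoring GIF's variable-width codes and clear/end codes, which the article covers):

```python
def lzw_compress(data):
    """Textbook LZW: grow a dictionary of seen strings, emit a code
    whenever the current string plus the next byte is new."""
    table = {bytes([i]): i for i in range(256)}
    next_code, current, out = 256, b"", []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in table:
            current = candidate
        else:
            out.append(table[current])
            table[candidate] = next_code
            next_code += 1
            current = bytes([byte])
    if current:
        out.append(table[current])
    return out

def lzw_decompress(codes):
    """Inverse: rebuild the same dictionary from the code stream."""
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        # The KwKwK special case: the code may not be in the table yet.
        entry = table[code] if code in table else prev + prev[:1]
        out.append(entry)
        table[next_code] = prev + entry[:1]
        next_code += 1
        prev = entry
    return b"".join(out)
```

The classic demo string round-trips and compresses, e.g. `lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")` yields 16 codes for 24 input bytes.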
I didn't live through the early space programmes, but having read about them recently, I'm surprised by how incremental they (and the Soviet Sputnik and Vostok counterparts) were.
- The early Mercury flights developed the idea of putting a human in a capsule on top of an ICBM to see what happens at altitude and during re-entry.
- Later Mercury flights experimented with de-orbiting techniques. (The early flights didn't need that because the ICBMs that launched the first people into space did so on a ballistic trajectory – they never achieved orbit.)
- With Gemini we figured out things like endurance (what is it like to have humans in space for weeks), rendezvous and docking (incredibly difficult), and extravehicular activities (preparation for walking on another astronomical body.)
- Early Apollo was focused entirely on solving multi-stage flights without humans on board.
- With Apollo 7 we verified the command module was good enough to attempt sending a crew on a few laps around the Moon, which happened with Apollo 8, while we were still waiting for a fully functioning lander.
- Apollo 9 was a dry run of the entire moon landing sequence – except in low Earth orbit.
- Apollo 10 repeated the same exercise from Apollo 9 except in Lunar orbit.
- Apollo 11 is often considered the first moon landing, but from the perspective of the program, it was really just another experiment: can we repeat Apollo 10 except also make a brief touch-and-go anywhere on the lunar surface?
- Even Apollo 12 isn't really a moon landing proper, but another experiment: can we repeat Apollo 11 but now also make a precision touchdown?
It wasn't until somewhere around Apollo 14/15 where the main purpose of the missions started becoming scientifically exploring the moon.
That's something like 25 crewed flights at various stages of development that had as their purpose to explore/learn about just one or two new aspects of the future moon missions, pushing the envelope a little further.
Granted, many of these things we have fresh practise in thanks to the space station, but also many of them we don't. It seems a little weird to bet it all on a small number of big bang launches.
You already noticed the technical card [1], but I can describe some of the details that go into this for those unfamiliar with the items on it.
1. The scope they used is roughly equivalent to shooting with an 800mm telephoto lens. But the fact that it's 8" wide means it can let in a lot of light.
2. The camera [2] is a cooled monochrome camera. Sensor heat is a major source of noise, so the idea is to cool the sensor to -10°C to reduce it. Shooting in mono allows you to shoot each color channel separately, with filters that correspond to the precise wavelengths of light that are dominant in the object you're shooting and that ideally minimize wavelengths present in light pollution or moonlight. Monochrome also lets you use the full sensor rather than splitting the light up between channels. These cameras also have other favorable low-light noise properties, like large pixels and deep wells.
3. The mount is an EQ6-R pro (same mount I use!) and this is effectively a tripod that rotates counter to the Earth's spin. Without this, stars would look like curved streaks across the image. Combined with other aspects of the setup, the mount can also point the camera to a specific spot in the sky and keep the object in frame very precisely.
4. The set of filters they used is interesting! Typically, people shoot with RGB (for things like galaxies that use the full spectrum of visible light) or SHO narrowband (very narrow slices around the sulfur-II, hydrogen-alpha, and oxygen-III emission lines, better for nebulas composed of gas emitting and reflecting light at specific wavelengths). The image was shot with a combination: a 3nm H-alpha filter captures that red dusty nebulosity in the image and, for a target like the Horsehead Nebula, has a really high signal-to-noise ratio. The RGB filters were presumably for the star colors and to incorporate the blue from Alnitak into the image. The processing here was really tasteful in my opinion. It says this was shot from a Bortle 7 location, so that ultra-narrow 3nm filter is cutting out a significant amount of light pollution. These are impressive results for such a bright location.
5. They most likely used a secondary camera whose sole purpose is to guide the mount and keep it pointed at the target object. The basic idea is to keep the center of some small star locked to a particular pixel. If during a frame that star moves a pixel to the right, the software sends an instruction to the mount to compensate and put it back on its original pixel. The guide camera isn't on the technical card, but they're using PHD2 software for guiding, which basically necessitates one. The guide camera could have its own scope, or be integrated into the main scope by stealing a bit of the light with a prism.
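The core of that guiding loop is conceptually tiny. A proportional-only toy sketch (real guiding software like PHD2 adds calibration, minimum-move thresholds, and hysteresis; the gain and image scale here are made-up numbers):

```python
def guide_correction(star_x, star_y, ref_x, ref_y, gain=0.7, arcsec_per_px=2.0):
    """Measure star drift from its reference pixel and return a mount
    correction (in arcseconds) that nudges it back, damped by the gain."""
    dx = star_x - ref_x
    dy = star_y - ref_y
    # Negative sign: move the mount opposite to the measured drift.
    return (-gain * dx * arcsec_per_px, -gain * dy * arcsec_per_px)
```

Using a gain below 1.0 keeps the loop from chasing atmospheric seeing: the star is pulled back toward the reference over a few frames instead of overshooting on every jitter.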
6. Lastly, it looks like most of the editing was done using Pixinsight. This allows each filter to be assigned to various color channels, alignment and averaging of the 93 exposures shot over 10 hours across 3 nights, subtraction of the sensor noise pattern using dark frames, removal of dust/scratches/imperfections from flat frames, and whatever other edits to reduce gradients/noise and color calibration that went into creating the final image.
* SPF: Tell the world which servers are allowed to send email for your domain
* DKIM: A weak version of digitally signed email: adds a header that only mailservers holding the private key you supply can generate. Tampering invalidates the signature (for example, when an email gets relayed a second time). The key applies to your whole domain.
* DMARC: Tells other mailservers what to do when the SPF and/or DKIM check fails, and also lets you set an address to send reports to. These reports contain counts of messages that failed the SPF and/or DKIM checks.
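As DNS TXT records, the three look something like this (illustrative values only: the domain, selector, IP, and report address are placeholders):

```
example.com.                      TXT  "v=spf1 mx ip4:203.0.113.25 -all"
sel1._domainkey.example.com.      TXT  "v=DKIM1; k=rsa; p=<base64 public key>"
_dmarc.example.com.               TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

SPF hangs off the domain itself, DKIM off a selector under `_domainkey` (so you can rotate keys), and DMARC off the fixed `_dmarc` label.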
Also dangerous: low head dams. https://practical.engineering/blog/2019/3/16/drowning-machin...
Finally, don't sleep in or build homes in flash flood areas: (US): https://msc.fema.gov/portal/home