There is something uncanny about the bandwidth and quality of all the artifacts coming from this mission.
I've subsisted on photos from the Apollo missions and artistic renditions for so long that seeing the modern, high-resolution real thing is quite stirring in a way I didn't expect. It actually does make me believe that the future could be quite cool.
We haven't even seen the full-quality images yet. They've commented that the live feed from the GoPro is bandwidth-limited because it has to share the downlink with running the capsule. The images from the Nikons onboard are just scaled down. My initial guess is that these came from an export made specifically as an early dump, to give everyone on the ground chomping at the bit something to see. They'll get the full images when the SD cards splash down. When those are released, I'm expecting quite a few OMG images.
I wouldn't mind some raw files, but I honestly don't think they'll be too strikingly different from these (make sure you're looking at the full 20 MP images, which should be several MB, not the 2 MP previews at ~200 KB).
I don't know what the Lightroom* skillz of the astronauts are, but I would not be surprised if they were shooting RAW+JPEG and only processed the JPEGs in Lightroom. They probably had export presets for smaller images that were created months ago and loaded onto their PCDs. I'd imagine 4 humans in a tin can have more things to do than developing RAW images: digging the details out of the shadows, pushing the exposure, pulling back the highlights, and then applying all of those settings to each sequence of images. They'll let the folks on the ground do that.
* The EXIF data has Adobe Lightroom Classic (Windows) metadata in it.
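If you want to check this yourself, here's a minimal sketch using Pillow (the filename is hypothetical; EXIF tag 305 is the standard Software field):

```python
from PIL import Image  # Pillow

img = Image.open("mission_photo.jpg")  # hypothetical filename
exif = img.getexif()
# EXIF tag 305 is "Software"; Lightroom exports typically stamp it,
# e.g. "Adobe Photoshop Lightroom Classic ... (Windows)"
print(exif.get(305))
```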
Given that metadata, I wonder if the astronauts already sent the raw files over the laser link and the images were just processed by the ground staff for posting on the site.
> something uncanny about the bandwidth and quality of all the artifacts coming from this mission
Back in 2019, Robert Zubrin suggested using rovers "to do detailed photography of the [Moon] base area and its surroundings" to "ultimately form the basis of a virtual reality experience that will allow millions of members of the public to participate vicariously in the missions" [1].
On the other hand, maybe don't get your hopes up: I've only tried a few, but even the large MPG files don't seem to be "super high quality," but maybe they will meet your expectations.
I think perhaps you mean the far side of the moon. The "backside" of the moon implies a large graben stretching almost from pole to pole, and I have seen no evidence of such a geological formation in any photos.
It really is surprising being able to see that the Moon isn't spherical. (Or are those aberrations?) It makes sense, given the Moon isn't in hydrostatic equilibrium.
I'm getting so exhausted by the "slop" accusation on new project launches. There are legit criticisms of EmDash in the parent comment that are overshadowed by the implication that it was AI-coded and thus of unusable quality.
The problem is there's no beating the slop allegation. There's no "proof of work" that could be demonstrated in this comment section that would satisfy anyone, which you can see if you just keep following the entire chain. I'd rather read slop comments than this.
The main engineer of this project is in the comments, and all anyone is engaging him on is the definition of vibes.
They called the project EmDash and launched it on April 1st with a blog post that brags about how little effort it took to write (because of agents) before even saying what it is.
If the product launch involves dressing the engineering team up in duck suits and releasing to a soundtrack of quacking, it's really not surprising people are asking the guy they hid behind the Daffy mask why he's dressed as a duck, rather than what he learned about headless CMS architecture from being on the Astro core team...
I know that it's discourteous to write off a potentially valuable project because the release post showed a lack of self-awareness, but I think it's indicative of the larger struggle taking place: that trust is decaying.
It's decaying for a lot of the reasons displayed in the post, like you described, but the post also:
- is overlong (probably LLM-assisted)
- is self-congratulatory
- boosts AI
- rewrites an existing project (vs contributing to the original)
- conjures long-term maintenance doubts/suspicions
- is functionally an advertisement (for Cloudflare)
So yeah, maybe EmDash is revolutionary with respect to WordPress, but it hasn't signaled trust, and that's a difficult hurdle to get past.
There are plenty of other comments saying this. It isn't that I don't understand and need a clever metaphor.
But to run with your metaphor, can we, maybe, just ignore the quacking since we all know that's just how you get attention these days and instead focus on that other stuff? Because it seems like asking about the duck mask will never produce a satisfactory answer and instead turn into a debate on the merits of ducks.
Dare I suggest that this debate has become boring and beside the point? Unless someone on HN has been living under a rock, they've already made up their mind about ducks.
Obtuse and repetitive debates are what HN comments are for. :)
But in this case it feels less like somebody has launched a revolutionary new product and HN is debating the MIT licence and landing page weight, and more like somebody has announced a plug-in replacement for a popular repository with a troll post, and HN chooses not to spend enough time on GitHub to discover the all-star team and excellent architectural decisions the blog didn't bother mentioning.
Plus, Cloudflare deliberately signalling that at best they're not very invested in its success, and that it might well just be low-effort slop, is probably more pertinent to whether a purported WordPress replacement actually gains any traction than its technical merit. And "headless CMS with vendor lock-in vs managing WordPress security" isn't likely to be a more productive debate than one on "slop". The target audience for this product is much more "HN crowd" than "read about agentic solutions to workforce automation on Gartner" crowd, too, so the quacking alienating HN is actually relevant.
I am not implying unusability due to AI involvement.
I am implying that Cloudflare is publishing unusable one-off software without care, because they have done it before and the blog post indicates that they are doing it again ("look how CHEAP it is to pump out code now").
I don’t need a proof of work, I need a proof of quality, and the blog post is the opposite of that.
I am not Nick, but there are a few ways that world happens: the free tier goes away and what people pay more correctly reflects what they use; this all becomes cheap enough that it doesn't matter; or we come up with an end-to-end method of determining whether usage is triggered by a person.
Another way is to just do better isolation as a user. That's probably your best shot without hoping these companies change policies.
This is so disingenuous. You literally clipped the full sentence that changes the context significantly.
> "Once I’ve proven to myself that rendering was feasible, I used Claude to create an approximate version of the game loop in JavaScript based on the original DOOM source, which to me is the least interesting part of the project"
This post is about whether you can render Doom in CSS, not whether Claude can replicate Doom gameplay. I doubt the author even bothered to give the game loop much QA.
> I've always said this, but AI will win a Fields Medal before being able to manage a McDonald's.
I love this and have a corollary saying: the last job to be automated will be QA.
This wave of technology has triggered more discussion about the types of knowledge work that exist than any other, and I think we will be sharper for it.
The ownership class will be sharper. They will know how to exploit capital and turn it into more capital with vastly increased efficiency. Everybody else will be hosed.
I'm not sure if people will be more hosed than before. Historically, what makes people with capital able to turn things into more capital is their ability to buy someone's time and labor. Knowledge labor is becoming cheaper, easier, and more accessible. That changes the calculus for what is valuable, but not the mechanisms.
> Historically, what makes people with capital able to turn things into more capital is their ability to buy someone's time and labor.
You forgot to include resources:
What makes people with capital able to turn things into more capital is their ability to buy labor and resources. If people with more capital can generate capital faster than people with less capital, then (unless they are constrained, for example, by law or conscience) the people with the most capital will eventually own effectively all scarce resources, such as land. And that's likely to be a problem for everyone else.
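To see why differential growth rates end in concentration, here's a toy compounding sketch (the starting amounts and rates are purely hypothetical, just to illustrate the mechanism):

```python
# Two actors compounding at different rates (hypothetical numbers).
rich, poor = 1_000_000.0, 10_000.0   # starting capital: a 100x gap
rich_rate, poor_rate = 1.08, 1.03    # assumed annual returns

for _ in range(50):  # 50 years of compounding
    rich *= rich_rate
    poor *= poor_rate

print(f"gap after 50 years: {rich / poor:,.0f}x")  # ~1,070x, up from 100x
```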
AI doesn't change the equation; it makes the equation more brutal for people who don't have capital.
If you don't have capital, the only way to get it is by trading resources or labor for it. Most poor people don't have resources, but they do have the ability to do labor that's valued. But AI is a substitute for labor. And as AI gets better, the value of many kinds of labor will go towards zero.
If it was hard for poor people to escape poverty in the past, it's going to be even harder with AI. Unless we change something about the structure of society to ensure that the benefits of AI are shared with poor people.
Ok, I'm following you. You're saying because labor gets cheaper it will be harder to make a living providing labor. Not disagreeing, but I wonder how much weight to give this argument. History shows a precedent of productivity revolutions changing the workforce, but not eliminating it, and lifting the quality of life of the population overall (though it does also create problems). Mixed bag with the arc bending towards betterment for all. You could argue that this moment is unprecedented in history, but unless the human spirit changes, for better or worse, we will adapt as we always have, rich and poor alike.
If the value of many kinds of labor goes towards zero, those benefits also go to the poor. ChatGPT has a free tier. The method of escaping poverty will still be the same. Grow yourself. Provide value to your community.
Entire classes of workers have been put in the poorhouse on a near-permanent basis due to technological changes, many times during the past two centuries of industrial civilization. Without systemic structural changes to support the workforce, this will happen/is already happening with AI.
There is a fundamental problem with this thinking: you are making an assumption about scale. There is the apocryphal quote, "I think there is a world market for maybe five computers."
You have to believe that LLM scaling (down) is impossible or will never happen. I assure you that this is not the case.
There are already enough comments about the base rate fallacy, so instead I'll say I'm worried for the future of GitHub.
Its business is underpinned by pre-AI assumptions about usage that, based on its recent instability, I suspect are being invalidated by surges in AI-produced code and commits.
I'm worried, at some point, they'll be forced to take an unpopular stance and either restrict free usage tiers or restrict AI somehow. I'm unsure how they'll evolve.
Having managed GitHub enterprises for thousands of developers who will ping you at the first sign of instability... I can tell you there has not been one year pre-AI where GitHub was fully "stable" for a month or maybe even a week, and except for that one time with CocoaPods, that downtime has always been their own doing.
In a (possibly near) future where most new code is generated by AI bots, the code itself becomes incidental/commoditized: nothing more than an intermediate representation (IR) of whatever solution it was prompt-engineered to produce. The value will come from the proposals, reviews, and specifications that caused that code to be produced.
GitHub is still code-centric, with issues and discussions being auxiliary/supporting features around the code. At some point those will become the frontline features, and the code will become secondary.
I'm definitely not an AI skeptic and I use it constantly for coding, but I don't think we are approaching this future at all without a new technological revolution.
Specifications accurate enough to describe exact behaviors are basically equivalent to code, including in terms of length, so you're basically just changing languages (and current LLM tech is not on course to be able to handle such big specifications).
Higher-level specifications (the ones that make sense) leave some details and assumptions to the implementation, so you cannot safely ignore the implementation itself, and you cannot recreate it easily (each LLM build could change the details and the little assumptions).
So yeah, while I agree that documentation and specifications are more and more important in the AI world, I don't see the path to the conclusions you are drawing.
I think you're directionally correct, but this stuff still has to live somewhere, whether the repo is code or prompts. GitHub is actually pretty well-positioned to evolve into whatever is next.
I don't think GitHub's product is at risk, but its business model might.
I keep hearing this, and I know Azure has had some issues recently, but I rarely have an issue with Azure like I do with GitHub. I have close to 100 websites on Azure, running on .NET, mostly on Azure App Service (some on Windows 2016 VMs). These sites don't see the type of traffic or amount of features that GitHub has, but if we're talking about Azure being the issue, I'm wondering if I just don't see this because there aren't enough people dependent on these sites compared to GitHub?
Or instead, is it mistakes being made migrating to Azure, rather than Azure being the actual problem? Changing providers can be difficult, especially if you relied on any proprietary services from the old provider.
Running on Azure is not the same as migrating to Azure.
Making big changes, like swapping the tech that underpins your product, while still actively developing that product means a lot of things in a complicated system are changing at once, which is usually a recipe for problems.
Incidentally, I think that is part of the current problem with AI-generated code. It's a fire hose of changes in systems that were never designed for, or were barely holding together at, their existing rate of change. AI is able to produce perfectly acceptable code at times, but the churn is high, and the more code, the more churn.
> It's a fire hose of changes in systems that were never designed for, or were barely holding together
Yeah... my career hasn't been that long, but I've only ever worked on one system that wasn't held together by duct tape, and a lot that were way more complicated than they needed to be.
I agree with this. I've just seen a huge pile-on against Microsoft over Azure with regard to this GitHub migration. There are already plenty of legitimate reasons to be upset with Microsoft without needing to tackle Azure.
The assumption is that it would be mistakes in their migration: edge cases that have to be handled differently in the infrastructure code, config, or application services.
Text is cheap to store and not a lot of people in the world write code. Compare it, for example, to email or something like iCloud.
Also, I would guess there would be copy-on-write and other such optimizations at GitHub. It's unlikely that when you fork a repo, somewhere on a disk the entire .git is being copied (but even if it were, it's not that expensive).
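Even the naive no-dedup case is small money. A back-of-envelope sketch (all numbers here are assumptions for illustration, not GitHub's actual figures):

```python
# Worst case: every fork fully duplicates the repo's .git directory.
repo_size_mb = 50         # assumed size of a large source-only repo
forks = 10_000            # assumed fork count for a popular project
cost_per_tb_month = 20    # assumed $/TB-month for replicated storage

total_tb = repo_size_mb * forks / 1_000_000
print(f"{total_tb:.1f} TB for {forks:,} full copies")    # 0.5 TB
print(f"~${total_tb * cost_per_tb_month:.0f}/month")     # ~$10/month
```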
That doesn’t make sense. Commits are all text. If YouTube can easily handle 4PB of uploads a day with essentially one large data center that can handle that much daily traffic for the next 20 years, GitHub should have no problems whatsoever.
My friend and I are usually pretty good at ballparking things of this nature, that is, "approximately how much textual data is GitHub storing?" I immediately put an upper bound of a petabyte; there's absolutely no way that GitHub has a petabyte of text.
Assuming just text, deduplication, and not being dumb about storage patterns, our range is 40-100TB, and that's probably too high by 10x. 100TB would also mean that the average repo is 100KB.
Nearly every arcade machine and pre-2002 console is available as a software "spin" that's <20TB.
How big was "every song on Spotify"? 400TB?
The Eye is somewhere between a quarter and half a petabyte.
Wikipedia is ~100GB. It may be more now; I haven't checked. But the raw DB with everything you need to display the text contained in Wikipedia is 50-100GB, and most of that is the markup; that is, not information for us, but information for the computer.
Common Crawl, with over 1.97 billion web pages in their archive: 345TB.
We do not believe this has anything to do with the "queries per second" or "writes per second" on the platform. Ballpark, GitHub probably smooths out to around ten thousand queries per second, median. I'd have guessed less, but then again I worked on a photography website database one time that was handling 4000 QPS all day long between two servers, 15 years ago.
P.S. Just for fun, I searched GitHub for `#!/bin/bash` and it returned 15.3mm "code" results. Assume you replace just that with 2 bytes instead of 12, and you save ~153MB on disk. That's compression; but how many files are duplicated? I don't mean forks with no action, but different projects. Also, I don't care to discern the median bash-script byte length on GitHub, but ballparked at 1,000 chars/bytes mean, that's ~15GB on disk for just bash scripts :-)
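For anyone who wants to poke at that arithmetic, a quick sketch (the file count is the search-result figure above; the mean script size is the ballpark guess, not a measurement):

```python
bash_files = 15_300_000   # search hits for "#!/bin/bash" (figure quoted above)
shebang_bytes = 12        # "#!/bin/bash" plus a newline
token_bytes = 2           # hypothetical 2-byte replacement token

saved_mb = bash_files * (shebang_bytes - token_bytes) / 1_000_000
print(f"~{saved_mb:.0f} MB saved by tokenizing the shebang")  # ~153 MB

mean_script_bytes = 1_000  # ballpark mean script size, not measured
total_gb = bash_files * mean_script_bytes / 1_000_000_000
print(f"~{total_gb:.1f} GB of bash scripts")                  # ~15.3 GB
```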
I have ~593 .sh files that everything.exe can see: 322 are 1KB or less, 100 are 1-2KB, 133 are 2-10KB, and the rest (38) are >10KB. Of the 1KB-or-less ones, a random sample shows they cluster such that the mean is ~500B.
Yeah, I assume all the artifacts [0] and binaries greatly inflate that. I have no idea how git works under the hood as implemented at GitHub, so I can't comment on potential reasons there.
Is there some command a git administrator can issue to see granular statistics, or is `du -sh` the best we can get?
[0]: I'm assuming a site rip that only fetches the equivalent of what you get when you click the "zip download" button; not the releases, wikis, images, workers, gists, etc.
I don't think the issue at hand is a technical challenge. It's merely a sign, imo, that usage has surged due to AI. To your point, this is a solvable scaling problem.
My worry is for the business and how they structure pricing. GitHub is able to provide the free services it does because at some point they did the math on what a typical free-tier user does before growing into a paid user. They even did the math on what paid users do, so they know they'll still make money when charging whatever amount.
My hunch is AI is a multiplier on usage numbers, which increases OpEx, which means it's eating into GitHub's assumptions on margin. They will either need to accept a smaller margin, find other ways to shrink OpEx, or restructure their SKUs. The Spotifies and YouTubes of the world, hosting other media formats, have it harder, but they are able to offset the cost of operation by running ads. Can you imagine having to watch a 20-second ad before you can push?
Oh, I didn't see that the 1.97 billion pages were crawled in an 11-day period earlier this month. Either way, nearly 2,000,000,000 pages fit in about a third of a petabyte...
P.S. Thanks for correcting me; I was using this information for something else, and now it's correct!
I think the instability is mostly due to the CEO running away at the same time as a forced Azure migration during which the VP of engineering also ran away. There's only so much stability you can expect from a ship that's missing two captains.
I mean, the fish rots from the head, but at the end of the day that rot translates into an engineering culture that doesn't value craftsmanship and quality. Every GitHub product I've used reeks of sloppiness and poor architecture.
That's not to say they don't have people who can build good things; they built the standard for code distribution, after all. But you can't help but recognize that so much of it is duct-taped together to ship, instead of crafted and architected with intent behind the major decisions that allow the small shit to just work. If you've ever worked on a similar project that evolved that way, you know the feeling.
But also, GitHub profiles and repos were at one point a window into specific developers: like a social site for coders.
Now it's suffering from the same problem that social media sites suffer from: AI slop and unreliable signals about developers.
Maybe that doesn't matter so much if writing code isn't as valuable anymore.
The true value prop of GitHub isn't "hosted git + nice GUI"; it's the whole ecosystem of contributors, forks, and PRs. You don't get that by hosting your own forge.
Also, I wouldn't say GitHub is a corporate attempt to own git... GitHub is a huge part of why Git is as popular as it is these days, and GitHub started as a small startup.
Of course, you can absolutely say Microsoft bought GitHub in an attempt to own git, but I think you are really underselling the value of the community parts of GitHub.
I've been using gstack for the last few days and will probably keep it in my skill toolkit. There are a lot of things I like. It maps closely to skills I've made for myself.
First, I appreciate how he implemented auto-update. Not sure if that pattern is original, but I've been solving it in a different-but-worse way for a similar project. NOT a fan of how it's being used to present articles on Garry's List. I like the site, but that's a totally different lane.
The skills are great for upleveling plans. Claude in particular has a way of generating plans with huge blind spots. I've learned to pay close attention to plans to avoid getting burned, and the plan skills do a fair job at helping catch gaps so I don't have to ralph-wiggum later. I don't find the CEO skill terribly effective, but I do like the role it plays at finding delighters for features. This is also where I think my original prompting tends to be strong, which could be why it doesn't appear to have a huge impact like the other skills.
I think the design skills are great and I like the direction they're going. DESIGN.md needs to become a standard practice. I think it's done a great job at helping with design consistency and building UIs that don't feel like slop. This general approach will probably challenge lots of design-focused coding tools.
The approach to using the browser is superior to Claude's built-in extension in pretty much every way (except cookie management). It's worth it for that alone.
For people who don't understand this... think of each skill as a phase of the SDLC. The actual content will, over time, probably become bespoke to how your team builds software, but the steps themselves are all pretty much the same. All of this is still early days, so YMMV using these specific skills, but I like the philosophy.
Is there something you don't like about the substance of my comments? Or is this just name calling? Is this not Hacker News? Aren't AI dev stacks supposed to be interesting to developers?