That's almost certainly it. I'd probably searched for a common fragment. And 'select all' does indeed select across all of the search result buffers. Thanks!
This sort of difference is one of the reasons I will always prefer subs.
I guess I could be the odd one out, but I'm not keen on the 'localisation' efforts that replace the cultural elements of the underlying media, e.g. how in Ace Attorney ramen was replaced with hamburgers (iirc), prompting the meme 'Eat your hamburgers, Apollo'.
> Token-agnostic prompt structures obscure the cost and are rife with misaligned incentives
That said, token-based pricing has misaligned incentives as well: as the editor developer (charging a margin over the number of tokens) or as the AI provider, you benefit from more verbose input being fed to the LLMs, and of course from more verbose output from the LLMs.
Not that I'm really surprised by the announcement, though; it was somewhat obviously unsustainable.
Is it actually possible to just have the YAML that calls into your app today, without losing the granularity or other important features?
I am not sure you can do this whilst keeping the granular job reporting (i.e. either you need one YAML block per job, or all your jobs end up in one single 'status' item?). Is it actually doable?
> Conforming to a complex specification is not inherently a good thing
Kind of a hard disagree here; if you don't want to conform to a specification, don't claim that you're accepting documents from that specification. Call it github-flavored YAML (GFY) or something and accept a different file extension.
> YAML 1.1 to be an important goal: they still don't support merge keys
Right, they don't do merge keys because they're not in YAML 1.2 anymore. Anchors are, however. They haven't said that noncompliance with the YAML 1.2 spec is intentional.
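For anyone who hasn't run into the distinction: anchors/aliases let you name a node and reference it again elsewhere, while merge keys (`<<`) splice an anchored mapping's keys into another mapping and were only ever a YAML 1.1 convention, dropped from 1.2. A rough sketch, using PyYAML purely as an example parser that happens to accept both:

```python
# Rough sketch: anchors/aliases vs merge keys, loaded with PyYAML
# (a 1.1-era parser that accepts both constructs).
import yaml

doc = """
defaults: &defaults      # anchor: gives this mapping a name
  retries: 3
  timeout: 30

job_a:
  <<: *defaults          # merge key: splices the anchored keys in (YAML 1.1 only)
  timeout: 60

job_b: *defaults         # plain alias: reuses the anchored node as-is (still in 1.2)
"""

data = yaml.safe_load(doc)
print(data["job_a"])  # {'retries': 3, 'timeout': 60}
print(data["job_b"])  # {'retries': 3, 'timeout': 30}
```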
> Call it github-flavored YAML (GFY) or something and accept a different file extension.
Sure, I wouldn't be upset if they did this.
To be clear: there aren't many fully conforming YAML 1.1 or 1.2 parsers out there; virtually all YAML parsers accept some subset of one or the other (sometimes a subset of both), and virtually all of them emit the JSON object model instead of the internal YAML one.
This is something that a custom parser library could figure out, no? The same as how you have format-preserving TOML libraries, for instance.
I think it makes way more sense for GitHub to support YAML anchors given they are, after all, part of the YAML spec. Otherwise, don't call it YAML! (This was a criticism of mine for many years; I'm very glad they finally saw the light and rectified this bug.)
> This is something that a custom parser library could figure out, no? The same as how you have format-preserving TOML libraries, for instance.
Yes, it's just difficult. The point made in the post isn't that it's impossible, but that it significantly changes the amount of "ground work" that static analysis tools have to do to produce useful results for GitHub Actions.
> I think it makes way more sense for GitHub to support YAML anchors given they are, after all, part of the YAML spec. Otherwise, don't call it YAML! (This was a criticism of mine for many years; I'm very glad they finally saw the light and rectified this bug.)
It's worth noting that GitHub doesn't support other parts of the YAML spec: they intentionally use their own bespoke YAML parser, and they don't have the "Norway" problem because they intentionally don't apply the boolean value rules from YAML.
All in all, I think conformance with YAML is a red herring here: GitHub Actions is already its own thing, and that thing should be easy to analyze. Adding anchors makes it harder to analyze.
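(For reference, the "Norway" problem: under YAML 1.1's boolean rules, unquoted `no`/`yes`/`on`/`off` resolve to booleans, which YAML 1.2 dropped. A quick illustration with PyYAML, which still follows the 1.1 behaviour:)

```python
# The "Norway problem" in one line: PyYAML applies YAML 1.1's boolean rules.
import yaml

print(yaml.safe_load("countries: [gb, no, se]"))
# {'countries': ['gb', False, 'se']} -- unquoted 'no' became a boolean
```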
Maybe, but I'm not entirely sure. 'Two wrongs don't make a right' kind of thinking on my side here.
But if they call it GFY and do what they want, then that would probably be better for everyone involved.
> they don't have the "Norway" problem because they intentionally don't apply the boolean value rules from YAML.
I think this is YAML 1.2. I have not done or seen a breakdown to see if GitHub is aiming for YAML 1.2 or not but they appear to think that way, given the discussion around merge keys
--
(though it's still not clear why flattening the YAML would not be sufficient for a static analysis tool. If the error report references a key that was actually merged out, I think users would still understand the report; it's not clear to me that's a bad thing actually)
> But if they call it GFY and do what they want, then that would probably be better for everyone involved.
Yes, agreed.
> I think this is YAML 1.2. I have not done or seen a breakdown to see if GitHub is aiming for YAML 1.2 or not but they appear to think that way, given the discussion around merge keys
I think GitHub has been pretty ambiguous about this: it's not clear to me at all that they intend to support either version of the spec explicitly. Part of the larger problem here is that programming language ecosystems as a whole don't consistently support either 1.1 or 1.2, so GitHub is (I expect) attempting to strike a happy balance between their own engineering goals and what common language implementations of YAML actually parse (and how they parse it). None of this makes for a great conformance story :-)
> (though it's still not clear why flattening the YAML would not be sufficient for a static analysis tool. If the error report references a key that was actually merged out, I think users would still understand the report; it's not clear to me that's a bad thing actually)
The error report includes source spans, so the tool needs to map back to the original location of the anchor rather than its unrolled document position.
(This is table stakes for integration with formats like SARIF, which expect static analysis results to have physical source locations. It's not good enough to just say "there's a bug in this element and you need to find out where that's introduced," unfortunately.)
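(For concreteness, a single SARIF result looks roughly like the sketch below, heavily trimmed; the rule id and positions are made up. The `region` has to point at a line/column in the file the user actually wrote, which is exactly what gets lost if you analyse a flattened/expanded copy of the document.)

```python
# Trimmed sketch of one SARIF 2.1.0 result (rule id and positions are made up).
# The region must reference a physical location in the original workflow file,
# not a position in some expanded intermediate document.
result = {
    "ruleId": "hypothetical/excessive-permissions",
    "message": {"text": "job has overly broad permissions"},
    "locations": [
        {
            "physicalLocation": {
                "artifactLocation": {"uri": ".github/workflows/ci.yml"},
                "region": {"startLine": 12, "startColumn": 3},
            }
        }
    ],
}
```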
GraphQL has some impressive points, but sometimes feels like it shifts too much control to the client. I'm on the fence about it.
There are performance footguns (like recursion, for which you'd have to consult your GraphQL server library for mitigations).
There is often a built-in 'introspection' endpoint, which many consider a security faux pas (I disagree; I think it's pretty noble, like having built-in OpenAPI docs), BUT you can easily craft a recursive query using just this endpoint that will take (some) GraphQL servers down.
There are plenty of posts written on the matter and I'm sure there are mitigations, but my first exposure to GraphQL was on someone else's project (built by a respectable engineer whom I consider very skilled), and within the first day I had noticed this intriguing structural hazard and taken the server down...
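To make the shape of that hazard concrete, here's a hypothetical sketch (made-up schema where authors and posts reference each other, placeholder endpoint); the point is just that each extra level of nesting multiplies the work a server without depth/complexity limits will happily do:

```python
# Hypothetical sketch of the recursion hazard: a schema where authors and posts
# reference each other lets a client build arbitrarily deep queries.
# Endpoint, schema, and field names are all made up.
import requests

def nested_query(depth: int) -> str:
    """Build an author -> posts -> author -> ... selection `depth` levels deep."""
    selection = "id"
    for _ in range(depth):
        selection = f"author {{ posts {{ {selection} }} }}"
    return f"query {{ {selection} }}"

resp = requests.post(
    "https://example.com/graphql",       # placeholder endpoint
    json={"query": nested_query(20)},
    timeout=10,
)
print(resp.status_code)
```

Depth and query-complexity limits are the usual mitigations, which is the "consult your GraphQL server library" part above.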
It therefore seems to be a tool that is possibly difficult to 'hold correctly'; at least people should be cautious about going in without doing their research about these things.
Probably a maturity issue, but one that people ought to be aware of?
Other gripes:
It's displeasing how the HTTP status code does not correlate with the actual success or failure of the API call, which makes typical request logging less useful.
I guess it also makes it harder to provide optimised hot paths on the server (because your client team might shift their queries around, or whatever).
From previous experience I also find that having easily recognisable names for API endpoints (like `GET /repositories`) makes talking about them, and recognising them, easier than the more opaque-feeling GraphQL approach.
Afaik Postgres doesn't. In my experience it'd be quite uncommon for a B-tree to store the size of a subtree; it would cause more churn/writes when updating the tree.
Perhaps some of the page-level Copy-on-Write databases (LMDB?) might do this, since they have to rewrite ancestor pages anyway.
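To illustrate what storing subtree sizes would mean, here's a toy "counted" tree sketch (order-statistic style, not anything Postgres actually does): each child slot also carries a count, which makes rank/count queries cheap but means every insert dirties the counts all the way down the root-to-leaf path.

```python
# Toy sketch of a counted search-tree node (order-statistic style), NOT how
# Postgres btrees work: each child slot also stores how many entries live in
# that subtree. Splitting/rebalancing is omitted.
from dataclasses import dataclass, field

@dataclass
class Node:
    keys: list = field(default_factory=list)
    children: list = field(default_factory=list)  # empty for leaf nodes
    counts: list = field(default_factory=list)    # counts[i] = entries under children[i]

def insert(node: Node, key) -> None:
    if not node.children:                         # leaf: just store the key
        node.keys.append(key)
        node.keys.sort()
        return
    i = sum(1 for k in node.keys if key > k)      # choose the child to descend into
    node.counts[i] += 1                           # the extra write on every ancestor node
    insert(node.children[i], key)
```

On disk that means dirtying every page on the path for every insert; a page-level copy-on-write design (LMDB-style) is already rewriting those ancestor pages, which is why it would be a more natural fit there.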
If you then Ctrl+A to select all and press backspace, I wonder if that would delete all those 3-line chunks...