
I've seen this in other places as well.

The bottleneck is not coding or creating a PR; the bottleneck is the review.


This ought to be automated using AI.

It could first judge whether the PR is frivolous, then try to review it, then flag a human if necessary.

The catch is that GitHub, or whatever system hosts the process, would need to actively prevent projects from being DDoSed with PR reviews, since running AI reviews costs real money.


> This ought to be automated using AI.

When the world is telling you to fucking stop, maybe take a moment and listen.


It's been stated like a consultant giving architectural advice. The problem is that it is socially acceptable to use LLMs for absolutely anything, and in bulk. Before, you strove to live up to your own standards and people valued authenticity. Now it seems like we are all striving for the holy grail of conventional software engineering: The Average.

It is absolutely not socially acceptable, and it's getting tiring to hear people like you blithely declare that it is. Maybe it's socially acceptable in your particular circles to not give a single shit, take no pride in the slop you throw at people, and expect them to wade through it no questions asked? But not for the rest of us.

Maybe I didn't state my point clearly. That was a comment about my experience earlier here on HN: someone was asked whether or not they'd used AI to write, and their response was "why not use it if it's better than my own". If that is the reasoning people give, and they are not self-aware enough to be embarrassed about it, I think it must mean that there are a lot of people who think like that.

I mean this with all sincerity: try doing this yourself.

Established projects are resistant to YOLOing their codebases and running them on complete LLM autopilot.

You are proposing a completely different development style.

Fork Ocaml to Oca-LLM and Julia to Jul-AI and see how it goes.


I'm not trying to say that this is how projects ought to work right now.

I do think this is where we are heading, though.

No, existing open source projects are not ready for this and likely won't ever be.

It will start in the corporate world and maybe already has.


> This ought to be automated using AI.

...

> I'm not trying to say that this is how projects ought to work right now.

which is it?


It's both, but the focus is on the future.

"Make product worse, get money"?

https://dynomight.net/worse/


ML/AI is much less stochastic than an average human

your browser is processing my comment

not necessarily, by default `Result` does not even carry a stack trace

Propagating upwards is a valid way of handling it, and often the correct answer.

There needs to be something at the top level that can handle a crashing process.
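For instance, a minimal Rust sketch of such a top-level handler (assuming the panic unwinds rather than aborts; `catch_unwind` cannot stop an abort):

    use std::panic;

    fn main() {
        // Supervise the real work: a panic (e.g. from an unwrap) unwinds
        // up to here instead of taking the whole process down.
        let result = panic::catch_unwind(|| {
            let v: Option<i32> = None;
            v.unwrap(); // panics
        });
        if result.is_err() {
            eprintln!("worker panicked; clean up or restart here");
        }
    }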


You mean like Kubernetes, which restarts your program when it crashes?

Can Rust handle global panics?

Or can an unwrap be stopped?

This is just a normal Tuesday for languages with Exception and try/catch.


> This is just a normal Tuesday for languages with Exception and try/catch.

Yes, unfortunately, random stack unwinds and the weird state bugs that result are a normal Tuesday for languages with (unchecked) Exception and try/catch


I did not until now

infohazard!


In no way did I put any bad meaning in the name


I am eternally grateful for the SGU.

Our escape to reality indeed


> by the time you've written a JSON parser that beats that you could've produced an equally improved XML system while retaining the much greater functionality it already had.

Here is where you lose me

The JSON spec fits on two screen pages https://www.json.org/json-en.html

The XML spec is a book https://www.w3.org/TR/xml/


> The JSON spec fits on two screen pages https://www.json.org/json-en.html

It absolutely does not. From the very first paragraph:

> It is based on a subset of the JavaScript Programming Language Standard ECMA-262 3rd Edition - December 1999.

which is absolutely a book you can download and read here: https://ecma-international.org/publications-and-standards/st...

Furthermore, JSON has so many dangerously-incompatible implementations that the errata for JSON implementations fills multiple books, such as advice to "always" treat numbers as strings, popular datetime "extensions" that know nothing of timezones, and so on.

> The XML spec is a book https://www.w3.org/TR/xml/

Yes, but that's also everything you need to know in order to understand XML, and my experience implementing APIs is that every XML implementation is obviously correct, because anyone making a serious XML implementation has demonstrated the attention span to read a book, while every JSON implementation is going to have some fucking weird thing I'm going to have to experiment with, because the author thought they could "get the gist" from reading two pages on a blog.


I think you are misreading the phrase "based on". The author, I believe, intends it to mean something like "descends from", "has its origins in", or "is similar to" and not that the ECMAScript 262 spec needs to be understood as a prerequisite for implementing a JSON parser. Indeed, IIRC the JSON spec defined there differs in a handful of respects from how JavaScript would parse the same object, although these might since have been cleaned up elsewhere.

JSON as a standalone language requires only the information written on that page.


> JSON as a standalone language requires only the information written on that page.

    JSON.parse("{\"a\":9999999999999999.0}")
Either no browsers implement JSON as written on that page, or you need to read ECMAScript-262 to understand what is going on.


Well yes, if you're writing a JSON parser in a language based on ECMAScript-262, then you will need to understand ECMAScript-262 as well as the specification for the language you're working with. The same would also apply if you were writing an XML parser in a language based on ECMAScript-262.

If you write a JSON parser in Python, say, then you will need to understand how Python works instead.

In other words, I think you are confusing "json, the specified format" and "the JSON.parse function as specified by ECMAScript-262". These are two different things.


> The same would also apply if you were writing an XML parser in a language based on ECMAScript-262.

Thankfully XML specifies what a number is, and anything that gets this wrong is not implementing XML. Very simple. No wonder I have fewer problems with people who implement XML.

> In other words, I think you are confusing "json, the specified format" and "the JSON.parse function as specified by ECMAScript-262". These are two different things.

I'm glad you noticed that after it was pointed out to you.

The implications of JSON.parse() not being an implementation of JSON are serious though: If none of the browser vendors can get two pages right, what hope does anyone else have?

I do prefer to think of them as the same thing, and JSON as more complicated than two pages, because this is a real thing I have to contend with: the number of developers who do not seem to understand that JSON is much, much more complicated than they think.


XML does not specify what a number is, I think you might be misinformed there. Some XML-related standards define representations for numbers on top of what the basic XML spec defines, but that's true of JSON as well (e.g. JSON Schema).

If we go with the XML Schema definition of a number (say an integer), then even then we are at the mercy of different implementations. An integer according to the specification can be of arbitrary size, and implementations need to decide themselves which integers they support and how. The specification is a bit stricter than JSON's here: it at least specifies a minimum precision that must be supported, and says that implementations should clearly document the maximum precisions they support. But this puts us back in the same place we were before, where to understand how to parse XML, I need to understand the XML spec (and any additional specs I'm using to validate my XML), plus the specific implementation in the parser.

(And again, to clarify, this is the XML Schema specification we're talking about here — if I were to just use an XML-compliant parser with no extensions to handle XSD structures, then the interpretation of a particular block of text into "number" would be entirely implementation-specific.)

I completely agree with you that there are plenty of complicated edge cases when parsing both JSON and XML. That's a statement so true, it's hardly worth discussion! But those edge cases typically crop up — for both formats — in the places where the rubber meets the road and the specification gets implemented. And there, implementations can vary plenty. You need to understand the library you're using, the language, and the specification if you want to get things right. And that is true whether you're using JSON, XML, or something else entirely.


> my experience implementing API is that every XML implementation is obviously-correct

This is not my experience. Just this week I encountered one that doesn’t decode entity/character references in attribute values <https://news.ycombinator.com/item?id=45826247>, which seems a pretty fundamental error to me.

As for doctypes and especially entities defined in doctypes, they’re not at all reliable across implementations. Exclude doctypes and processing instructions altogether and I’d be more willing to go along with what you said, but “obviously-correct” is still too far.

Past what is strictly the XML parsing layer to the interpretation of documents, things get worse in a way that they can't with JSON due to its more limited model: when people use event-driven parsing, or even occasionally when they traverse trees, they very frequently fail to understand reasonable documents, due to things like assuming a single text node, or ignoring the possibility of CDATA or comments.
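A quick illustration of the multiple-text-node trap (browser DOM, illustrative):

    const el = new DOMParser()
      .parseFromString("<a>foo<![CDATA[bar]]></a>", "text/xml")
      .documentElement;
    el.firstChild.nodeValue; // "foo", where a naive single-text-node reader stops
    el.textContent;          // "foobar", the element's actual text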


Exactly. In my experience, XML has thousands of ways to trip yourself while JSON is pretty simple. I always choose JSON APIs over XML if given the choice.


> This is not my experience.

Try not to confuse APIs that you are implementing for work to make money, with random "show HN AI slop" somebody made because they are looking for a job.


The "References" section of the XML spec is almost longer than the JSON spec itself

> [...] serious XML implementation [...]

You are cherry-picking here


> advice to "always" treat numbers as strings

FFS, have your parser fail on inputs it cannot handle.
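For example, a conservative sketch in JavaScript (hypothetical `parseStrict`; it rejects any integer outside the exactly-representable range, even ones that happened to survive intact):

    function parseStrict(text) {
      return JSON.parse(text, (key, value) => {
        // Integers beyond 2^53 may already have been rounded by the
        // parser, so refuse them instead of returning possibly-wrong data.
        if (typeof value === "number" && Number.isInteger(value)
            && !Number.isSafeInteger(value)) {
          throw new RangeError("unsafe integer at key '" + key + "'");
        }
        return value;
      });
    }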

Anyway, the book defining XML doesn't tell you how your parser will handle values you can't represent on your platform either. And it also won't tell you how your parser will read timestamps. Both are completely out of scope there.

The only common JSON issue that entire book covers is comments.

The SOAP specification does tell you how to write timestamps. It's not a single book, and doesn't cover things like platform limitations, or arrays. If you want to compare, OpenAPI's spec fills a booklet:

https://swagger.io/docs/specification/v3_0/about/


> FFS, have your parser fail on inputs it can not handle.

I wish browser developers would understand that.

    JSON.parse("9007199254740993") === 9007199254740992


> The JSON spec fits on two screen pages https://www.json.org/json-en.html

The beloved minimalist spec. No way anything could be wrong with that: https://seriot.ch/projects/parsing_json.html

Turns out there are at least half a dozen more specs trying and failing to clarify that mess.


Aside from the other commenter's point about this being a misleading comparison, you didn't need to reinvent the whole XML ecosystem from scratch; it was already there and functional. One of the big claims I've seen for JSON, though, is that it has array support, which XML doesn't. That is correct as far as it goes, but it would have been far from impossible to code up a serializer/deserializer that let you treat a collection of identically typed XML nodes as an array. Heck, for all I know it exists; it's not conceptually difficult.


You need to distinguish between the following cases: `{}`, `{a: []}`, `{a: [1]}`, `{a: [1, 2]}`, `{a: 1}`. It is impossible to express these in XML in a universal way.
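A small illustration in the browser, with a hypothetical generic converter in mind:

    const doc = new DOMParser()
      .parseFromString("<root><a>1</a></root>", "text/xml");
    // Without a schema, a generic converter cannot tell whether this
    // should become {a: 1}, {a: "1"}, or {a: [1]}, and an empty
    // <root/> could equally be {} or {a: []}.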


XSD lets you explicitly specify whether you are dealing with one or more elements, so there is no need to encode that information in the data itself. It also gives you access to concrete number types, so you don't have to rely on the implementation to actually support values like 1 and 2.


Not every XML document has an associated XSD. You need to transfer the XSD. You need to write a code generator for that XSD or otherwise make use of it. That's a lot of work, all of it unnecessary when you can just write `JSON.parse(string)`.


XML is not a data serialisation tool, it is a language tool. It creates notations and should be used to create phrase-like structures. So if a user needs these distinctions, he makes a notation that expresses them.


JSON is immediately usable without any notations.

Basically, the difference is in the underlying data structures.

JSON supports arrays of arbitrary items and dictionaries with string keys and arbitrary values. It aligns well with commonly used data structures.

An XML node supports a dictionary with string keys and string values (attributes), one dedicated string attribute (the name), and an array of child nodes. This is a very unusual structure and requires dedicated effort to map to programming-language objects and structures. There were even so-called "OXM" frameworks (Object-XML Mappers), analogous to ORMs.

Of course in the end it is possible to build a mapping between array, dictionary and DOM. But JSON is much more natural fit.
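Roughly, the two shapes side by side (illustrative only, not any particular library's API):

    // JSON: arrays and string-keyed maps, directly usable as data
    const fromJson = { users: [{ name: "ada", age: 36 }] };

    // XML element: tag name + string-only attributes + ordered children,
    // which is why generic object mapping needs an extra convention
    const fromXml = {
      name: "users",
      attrs: {},
      children: [
        { name: "user", attrs: { name: "ada", age: "36" }, children: [] },
      ],
    };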


XML is immediately usable if you need to mark up text. You can literally just write or edit it and invent tags as needed. As long as they are consistent and mark what needs to be marked any set of tags will do; you can always change them later.

XML is meant to write phrase-like structures. Structures like this:

    int myFunc(int a, void *b);
This is a phrase. It is not data, not an array or a dictionary, although technically something like that will be used in the implementation. Here it is written in a C-like notation. The idea of XML was to introduce a uniform substrate for notations. The example above could be written like:

    <func name="myFunc">
      <data type="int"/>
      <args>
        <data type="int"/>
        <addr/>
      </args>
    </func>
This is, of course, less convenient to write than a specific notation. But you don't need a parser and can have tools to process any notation. (And technically a parser can produce its results in XML; it is a very natural form, basically an AST.) Parsers are usually part of a tool and do not work on their own, so first there is a parser for C, then an indexer for C, then a syntax highlighter for C and so on: each does some parsing for its own purpose, thus doing the same job several times. With XML, the processing scenario is not limited to anything in particular: the above example can be used for documentation, indexing, code generation, etc.

XML is a very good fit for niche notations written by few professionals: interface specifications, keyboard layouts, complex drawings, and so on. And it is being used there right now, because there is no other tool like it, aside from a full-fledged language with a parser. E.g. there is an XML notation that describes numerous bibliography styles. How many people need to describe bibliography styles? Right. With XML they start getting usable descriptions right away and can fine-tune them as they go. And these descriptions will be immediately usable by generic XML tools that actually produce these bibliographies in different styles.

Processing XML is like parsing a language, except that the parser is generic. Assuming you have no text content, it goes in two steps: first you get an element header (name and attributes), then the child elements. By the time you get these children they are no longer XML elements, but objects created by your code from those elements. Having all that, you create another object and return it, so that it will be processed by the code that handles the parent element. The process is two-step so that, before parsing, you can alter the parsing rules based on the element header. This is all very natural as long as you remember it is a language, not a data dump. Text complicates this only a little: in the second step you get objects interspersed with text, that's all.
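A minimal sketch of that two-step loop in JavaScript (hypothetical `handlers` table with one constructor function per element name):

    function build(el, handlers) {
      // Step 1: the element header, i.e. its name and attributes.
      const attrs = Object.fromEntries(
        Array.from(el.attributes, a => [a.name, a.value]));
      // Step 2: children are processed first and come back as plain
      // objects produced by your own constructors, not as XML elements.
      const children = Array.from(el.children, c => build(c, handlers));
      return handlers[el.tagName]({ name: el.tagName, attrs }, children);
    }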

People cannot author data dumps. E.g. the relational model is a very good fit for internal data representation, much better than JSON. But there is no way a human could author a set of interrelated tables aside from tiny toy examples. (The same thing happens with state machines.) Yet a human can produce tons of phrase-like descriptions of anything without breaking a sweat. XML is such an authoring tool.


But the part of XML that is equivalent to JSON is basically five special symbols: angle brackets, quotes and ampersand. Syntax-wise this is less than JSON (and it even has two kinds of quotes). All the rest are extras: grammar, inclusion of external files (with name- and position-based addressing), things like element IDs and references, or a way to formally indicate that the contents of an element are written in some other notation (e.g. "markdown").


> Some controllers are originally painted with a rubber-like cover that, unfortunately, degrades with time and becomes a sticky goo. I usually deal with it with the help of Methanol. It nicely removes it.

I have some products like that and I despise them. Maybe I should try methanol.


I was going to comment on this too. I notice this happens with what feels like more traditional plastics - what exactly is going on with these? It feels like over time they are breaking down and liquefying, or releasing their oils?


Same thing happens with a somewhat expensive musical instrument brand that keeps using that plastic for their buttons.

As far as I can tell, it breaks down more slowly the more you use it; it must be interacting with oils or something else from human fingers, as I only see it happen to things that remain in storage for months or years at a time, while the gear with that sort of plastic that I use every day or week doesn't have that happening.


Yeah, I had an official silicone iPhone case that was in use for about 8 months. I replaced it with a third-party leather one about a month ago, and already within that time I've noticed that the original one has gone all slimy, just like those old plastics. There must be something about using it day to day that keeps it from breaking down.


With rubber products, it’s usually the plasticizers leaking over years. I have learned this the painful way (massive migration of plasticizers from the underside of my mousepad to other things), and now actively avoid any rubber products, usually in favour of silicone instead.


I believe isopropyl alcohol is safer for both plastics and humans. It even doubles as a hand sanitizer refill!


Isopropyl alcohol works well too.

