(Consider what happens if you try to send a tweet's ID, a perfectly normal number like 205052027259195393, through JSON. Or if you try to serialize a stack trace on a 64-bit system, where addresses are also perfectly normal numbers.)
Why wouldn't you? It's a number. It's also roughly sorted (to within about a second of inconsistency between Twitter worker processes on different machines), and you want to sort IDs numerically, not lexicographically as you would strings.
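The lexicographic trap is easy to hit in JS, where the default Array sort compares elements as strings; string IDs need an explicit numeric comparator (sketched here with BigInt so 64-bit values stay exact):

    ["9", "10", "2"].sort();
    // ["10", "2", "9"] -- the default comparator is lexicographic

    ["205052027259195393", "9"].sort((a, b) => (BigInt(a) < BigInt(b) ? -1 : 1));
    // ["9", "205052027259195393"]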
I mean, there's a pretty clear argument here: Twitter themselves used to return these numbers as numbers in the API until they realized they were about to hit this problem. https://developer.twitter.com/en/docs/twitter-ids
Using numeric types as IDs probably leaks into other areas, where being numeric is then assumed. When you want to change that at some point, you suddenly have to be very careful. A string could simply contain a new kind of ID, so it seems less refactoring effort would be required. On the other hand, switching from numeric to string IDs might give you compiler errors wherever the types don't match, so perhaps it might even make the refactoring simpler.
You can always add a second or third key. Stop messing with the primary key. Many systems have a numeric primary key and then only expose a hash or UUID to the public.
It's more a JS thing: The lone JS number type is double-precision floating point, meaning beyond a certain range there are integers that cannot be accurately represented as a JS number.
The JSON standard doesn't place restrictions on the size or precision of numbers; it just notes that implementations may vary in their treatment of, and limits on, numbers. While JS uses doubles for all numbers, many other languages parse a JSON integer into a native integer type. So, once you go beyond the range where a double can accurately represent every integer, you run the risk of a mismatch in how the same JSON is interpreted by different languages.
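Concretely, the boundary in JS is Number.MAX_SAFE_INTEGER (2^53 - 1); past it, distinct integers start collapsing onto the same double:

    Number.MAX_SAFE_INTEGER;                  // 9007199254740991
    9007199254740992 === 9007199254740993;    // true, both round to 2^53
    Number.isSafeInteger(205052027259195393); // false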
Of course the spec also allows you to create way too big or too precise numbers that would be problematic in most languages as well; it's just that this is a somewhat common bugbear.
I wouldn't necessarily call it a flaw in JSON though, more an issue with JSON.parse or really just a fact of life when dealing with numbers in JS. Alternatives to the built-in JSON.parse exist to read large integers as strings or BigInts.
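As a rough sketch of the idea (hand-rolled, not one of the published libraries, and assuming large integers only ever appear as bare values, never inside string contents), you can pre-quote long integer literals so they survive JSON.parse as strings:

    // Sketch only: quote 16+-digit bare integers before parsing.
    // Fragile: misses back-to-back array elements and breaks on long
    // digit runs inside string values.
    function parseWithBigIds(text) {
      return JSON.parse(text.replace(/([:\[,]\s*)(-?\d{16,})(\s*[,\]}])/g, '$1"$2"$3'));
    }

    parseWithBigIds('{"id": 205052027259195393}');
    // { id: "205052027259195393" }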
JavaScript uses doubles for its numbers. There's no such thing as a separate integer or floating-point type per se, only the so-called Number.
The described issue is not a problem with JSON but with the engine that parsed it and the language behind that parser. Any config format will eventually hit the same issue with the same result.
So forcing the programmer to parse every single piece of data themselves, in the name of "it's their responsibility", is not a solution here.
I also disagree that it is in any way the programmer's responsibility to come up with a standardized way of parsing everything. This format gives you nothing but indentation, so you are forced to write documentation for every field: what type it is and what kind of values it takes. Lots of extra work for nothing when you could use any other format.
It's a JavaScript problem. Safe integers cap out at 9007199254740991 (2^53 - 1); beyond that you need BigInt. As long as you're working with BigInt literals, it'll be accurate, but when you convert back and forth between Number and BigInt you can lose precision.
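For instance:

    9007199254740993n + 1n;     // 9007199254740994n -- BigInt arithmetic stays exact
    Number(9007199254740993n);  // 9007199254740992  -- precision lost converting back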
The example uses integers outside the safe range of JS numbers (too many significant digits), so when the JSON is evaluated as a JS number some of the least significant bits are rounded. Personally, I think it is a mistake to use values in JSON that can’t be represented in JS, but the standard doesn’t explicitly forbid it.
In Python:
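Python's json module parses the integer into an arbitrary-precision int, so the ID round-trips exactly:

    >>> import json
    >>> json.loads('{"id": 205052027259195393}')
    {'id': 205052027259195393}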
In my browser's JavaScript console:
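JSON.parse has already rounded the value by the time any of your code runs, so the parsed ID silently differs from the one that was sent:

    > JSON.parse('{"id": 205052027259195393}')
    { id: 205052027259195392 }
    > JSON.parse('{"id": 205052027259195393}').id === 205052027259195393
    true  // both literals round to the same double, so even this check can't catch it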