Hacker News | mercantile's comments

They've tried this twice already, although neither bill went through:

https://scocablog.com/exit-taxes-in-california-not-so-fast/

Especially egregious was a 10-year look-back, so even if you had left years before the tax took effect, they'd still fleece you.

The way California's finances are going, and the way the state is degenerating, it's only a matter of time until they get serious and pass some form of this. It probably won't pass legal muster, but they'll do it anyway and spend years fighting it in courts.


From Wiki originally:

> the annual risk of a given person being hit by a meteorite is estimated to be one chance in 17 billion, which means the probability is about 0.00000000006 (6 × 10⁻¹¹), equivalent to the odds of creating a few tens of trillions of UUIDs in a year and having one duplicate. In other words, only after generating 1 billion UUIDs every second for the next 100 years would the probability of creating just one duplicate be about 50%.

So in a theoretical sense, no, but in a practical sense, yes. The same is true for any custom ID format like yours as well. 128 bits is enough to never hit a dup though, so you don't need to go crazy.
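For intuition, that ~50% figure falls straight out of the birthday bound. A rough back-of-the-envelope sketch (assuming the 122 random bits of a v4 UUID; function name invented):

```python
import math

def collision_probability(n_ids: int, bits: int = 122) -> float:
    """Birthday-bound approximation: the chance of at least one
    collision among n_ids random IDs drawn from 2**bits values.
    (A random v4 UUID has 122 random bits of its 128.)"""
    space = 2.0 ** bits
    # p ~ 1 - exp(-n(n-1) / (2N)) for n IDs in a space of N values
    return 1.0 - math.exp(-n_ids * (n_ids - 1) / (2.0 * space))

# 1 billion UUIDs/second for 100 years is roughly 3.2e18 IDs, which
# lands in the same ballpark as the quoted "about 50%" duplicate odds.
p = collision_probability(10**9 * 31_557_600 * 100)
```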

Your database should be what authoritatively guarantees uniqueness at the end of the day — generate UUIDs assuming no collisions (which will ~always be true), but store in a UNIQUE index so things'll fail in case of a duplicate or a bug that results in trying to store the same ID twice.
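A sketch of that arrangement (table and column names invented), with SQLite standing in for whatever database you use:

```python
import sqlite3
import uuid

# In-memory sketch: the PRIMARY KEY (unique) index, not the generator,
# is what authoritatively guarantees uniqueness.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id TEXT PRIMARY KEY, name TEXT)")

def insert_user(name, user_id=None):
    # Generate assuming no collision (which will ~always hold)...
    user_id = user_id or str(uuid.uuid4())
    db.execute("INSERT INTO users (id, name) VALUES (?, ?)", (user_id, name))
    return user_id

uid = insert_user("alice")
try:
    insert_user("bob", user_id=uid)  # ...but a reused ID (i.e. a bug) fails loudly
except sqlite3.IntegrityError:
    pass  # the database caught the duplicate
```

(In Postgres you'd get the same effect from a native `uuid` column with a unique index; the point is that the constraint lives in the database, not in the generator.)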


Real UUIDs in the DB are definitely the right answer. I couldn't help but think the article has the right idea but reaches the wrong conclusion — it is possible to accept user-specified IDs, but make sure you're parsing them as UUIDs and storing them as such too.
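The boundary check can be as small as this (function name invented); Python's `uuid.UUID` does the validation and normalisation:

```python
import uuid

def parse_client_id(raw):
    """Reject anything that isn't a well-formed UUID at the API boundary,
    and keep the canonical 128-bit value rather than the raw string."""
    try:
        return uuid.UUID(raw)
    except (ValueError, TypeError):
        raise ValueError(f"not a valid UUID: {raw!r}")

client_id = parse_client_id("a8098c1a-f86e-11da-bd1a-00112444be1e")
```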

That said, I understand how they got there. Although using real UUIDs in the backend is obviously the right path, it's amazing how rare their use is in industry. Developers either (1) don't know about the UUID type, (2) don't understand its advantages and therefore use a string instead because it's more familiar, or (3) in hubris, cast off the use of UUIDs because they know better and are doing their own thing.

We're using real UUIDs where I work now, but after a full ten years of industry experience, it's the first job where we're doing it right, despite previous jobs being at top name Silicon Valley companies who you'd think would know what they're doing.


There is a fourth reason: momentum.

We are just (finally) retiring a product with a large, less-organised-than-we'd-care-to-admit codebase that has been around in various states for longer than our younger staff have been alive, and that stored UUIDs wrong due to an early misunderstanding by a long-gone expert. Several times during its life I made the case for converting its use of UUIDs in strings to proper UUID types, even demonstrating significant performance improvements under load in a PoC, and storage savings. Everyone agreed it should be done that way, but there was considerably more work involved than just switching types (changing procs or type conversion could kill good query plans, dealing with the fact that at some point in history a smart person had used other strings as special values in one or two places, finding ad-hoc query preparation everywhere and making sure it declared the right type, ..., and of course the dreaded full regression test), so it never got done.

Especially in the last decade and a bit, because it would be retired in a year or two (ahem) so the time & effort would be better put to use elsewhere.


Is anyone able to get a little more specific about the kinds of "creators" he's talking about and what exactly they do that drives these abuse problems?

I read the article, but felt like there wasn't quite enough information to understand what a YouTube person does to cause trouble, although I totally believe it. I know a bit about cameras, but nothing about this adjacent "rumours" subindustry.


That seems to be right, but your parent's point still stands — the house may have burned down during an eight year absence, but it really doesn't seem to have "vanished". The title and lead into the story have been heavily sensationalized for click purposes.


My thinking is he was unreachable for 8 years, house burned down (for whatever reason) and became a dangerous eyesore, so they took it down.

The islanders faced with his reappearance decided not to incriminate anyone and keep quiet.

The only question I have is how did it burn down? I imagine it was in quite a state of disrepair after almost a decade of low to no maintenance. Part of it may have been falling down already and the islanders may have assumed he'd simply abandoned his house and the island.


Exactly this. Multiple requests with the same idempotency key are still different requests, and you want to be able to track them independently.

Also worth noting that a header named 'Request-Id' is already a widespread convention. You'll see one back from many popular APIs like AWS or Stripe, and definitely want a name for the idempotency header that differentiates itself from that.


They are two distinct concepts for sure, but if you also need to track each request independently, then it would seem more robust to generate a separate ID for that from within the scope of your system. I don't think you would want to rely on a client to differentiate "physical" requests for you.

Maybe "Logical-Request-Identifier" and "Physical-Request-Identifier"?


On the surface this sounds about right, but how do we ensure that we only create one of these? Every layer will want to create a new ID, since it can't trust that the ID given to it is "new". Having the actual client generate both is the simplest.
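One hypothetical way to square the two (all names invented): the client supplies only the logical Idempotency-Key, while the server mints a fresh physical request ID for every attempt, including deduplicated retries:

```python
import uuid

_results_by_key = {}  # idempotency key -> cached logical result

def handle_request(idempotency_key, payload):
    request_id = str(uuid.uuid4())  # unique per physical request
    if idempotency_key in _results_by_key:
        cached = dict(_results_by_key[idempotency_key])
        cached["request_id"] = request_id  # retries still get distinct IDs
        return cached
    result = {"charge": payload["amount"]}
    _results_by_key[idempotency_key] = result  # cache only the logical result
    return {**result, "request_id": request_id}

first = handle_request("key-1", {"amount": 100})
retry = handle_request("key-1", {"amount": 100})  # deduplicated, new request ID
```

The charge happens once, but each physical request remains independently traceable.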


I agree — the 12" is still Apple's best ever form factor. I even considered buying another one as the line was winding down. (Though I was glad I didn't after the M1 came out.)

But to be fair, the comment you're replying to is talking specifically about the 12" MacBook's keyboard, which is indeed awful. I'm typing on one right now, and even after years on the thing, it's a perpetual reminder of how it's slower to type on and kind of makes your fingers hurt.

Remove the butterfly keyboard, add a second port, and the 12" MacBook gets unequivocally better.


Surprisingly enough I quite like the keyboard. Its reliability is shit (though better in the 2017 model - haven't had to replace mine yet) but the feeling of it is fine; it took some time to get used to but after that I don't mind it at all!


I didn't mind the keyboard so much. I'll admit that when I switch back to that machine, I don't like it as much as the MBA's, but it was fine. Granted, I've always had weird keyboard preferences in terms of tactile feel, travel distance, and the like.

Once in a while I'd wind up with the sticky keys, but was lucky enough to avoid anything permanent.


Agreed. The small form factor is amazing, but ultimately the keyboard will always be awful.


Similar situation here. I use 1Password every day, but I only trust it to autofill simple login forms. Where something more complex is happening, I tend to copy information over field by field.

This was trained into me over the years as I saw 1Password do too many things that were wrong or even sometimes scary. The nominal benefit you get sometimes when it works properly isn't worth it.

And yes, web providers should give their web forms better names and better semantic information (e.g. `<input type="email">`), but even in 2021 it's just not always the case.


> And yes, web providers should give their web forms better names and better semantic information (e.g. `<input type="email">`), but even in 2021 it's just not always the case.

Well, there's no semantic input for "year" or "currency", so there's nothing the form designer could do to stop 1Password from picking the wrong field. This does kinda get to the root of the issue, which is that 1Password has to do a lot of "cognitive" analysis of the page to find the forms it wants which is simultaneously why it does better than most autofillers, and has worse false positives than most autofillers.

I am also a happy 1Password user, and I'll happily let it prefill a complex form for me, but I've caught enough of its mistakes in the past that I will always manually double check before submitting. I could also see myself making a similar mistake as this user on this form though (since it's so simple I might not manually double check).


> Well, there's no semantic input for "year" or "currency", so there's nothing the form designer could do to stop 1Password from picking the wrong field.

This is how you mark up the expiry year in a credit card form:

    <input autocomplete="cc-exp-year">


Mock me if you wish, but I tend to use 1Password for what it does best and use Apple’s Autofill for credit cards. It misses stuff sometimes, but (fingers crossed) no issues with mispopulating amount fields. I also use Apple Pay or PayPal wherever possible to avoid data entry and reduce friction.


If you don't know COBOL, Wikipedia has a pretty excellent overview of a lot of its basic syntax here:

https://en.wikipedia.org/wiki/COBOL#Features

It's very verbose, and includes a lot of opaque language that almost seems designed to confuse (I know it wasn't actually, but it's pretty bad). For example, an alphanumeric variable type is called a `PIC` for "PICTURE". You can store an alphanumeric as a `PIC A`, or a numeric-only using a `PIC X`. Fields in a record get a "level number" that defines their behavior, and you just have to memorize which does what. 01 is a top-level record. 05 defines a subgroup. 88 is a conditional record.

There are many parts of the language that are clearly anti-features in retrospect. A 66 level number allows you to redefine a field in a previously defined record. Convenient in some cases maybe, but something that will clearly lead to maintainability problems as the shape of a record doesn't match its original definition in code. Another example is that COBOL has a huge vocabulary, with the original idea that you could write things as much like English as possible (e.g. use a `GREATER THAN` instead of a `>`), which is one of those things that probably seemed like a good idea at one point, but which every modern language has abandoned or is abandoning (through the use of linters, etc.).

The article makes quite a few salient points, but on the other hand, if there's ever been a language that got almost everything wrong in about as objective a sense as you can get when speaking about these things, it's this one.

Instead of inspiring the next generation of COBOL programmers, IMO there's a good alternative argument to be made for getting a bunch of smart people together to write a transpiler that could transform huge legacy COBOL codebases into something more maintainable, like how the Go team transpiled their compiler from C to Go, and consigning this language to the history books. Obviously very difficult, but something that'd pay off in the long run.


It helps to understand COBOL in the context of its environment: punchcards containing a single line of code, hierarchical (e.g. IMS) and network (e.g. IDMS) databases, the intended application being business data batch processing on mainframes, etc. In this context those field and record specifiers (etc.) make more sense.


Sorry to criticize, but as a daily maintainer/writer of COBOL, your characterization is a bit off. Besides technical inaccuracies (PIC 9 is for numerics, 88 is a conditional value in a field) you don't seem to grasp what's truly right or wrong with COBOL. Nor does the article. For example, ime no-one uses 66 levels, as the REDEFINES clause achieves the same result. While there _can_ be issues with maintaining code with overlapping definitions, _their use is deliberate_. For example, data read from a file could have two or more different record structures. It is read into one place in memory and the redefinition applies different formats/views to the same memory. The correct one is used as needed. There is no problem. You might be right that there are some anti-features, but like GOTO they simply aren't used in real codebases.

The real problem with COBOL imo, is that the language hasn't much improved since its creation. While there have been some useful tweaks and changes, it has missed out on major items such as variable scoping/user-defined functions (all variables are global - the workaround is to use sub-programs and pass data in the call). Even "object-oriented" functionality as mentioned in the article was badly tacked on to the language and only complicates it without really adding much in terms of capability. Also, the language doesn't really have strings as in other languages, and consequently lacks the applicable functions.

Another major point that the article misses is that mainframe COBOL (most COBOL is/was mainframe and that means IBM) which does anything more than straightforward "read file,process contents,generate report" is inextricably linked to its environment. The article talks about poor form handling. COBOL doesn't do form handling. That would be CICS or IMS/DC software which the COBOL interfaces with. There is heavy integration in the code, but this is like confusing Java with Websphere or Tomcat. This is why it is so difficult to port COBOL systems to another language/environment. The COBOL code can be mechanically converted. But any complex system has calls to databases, transaction processing, and is built with batch jobs relying on JCL and system utilities. Replacing the environment while retaining the integration and reproducing the functionality is the hard part.


> COBOL doesn't do form handling. That would be CICS or IMS/DC software which the COBOL interfaces with

The COBOL 2002 standard includes form handling ("SCREEN SECTION"). However, IBM COBOL implementations never included that, and so you are correct that on IBM mainframes stuff like CICS and IMS/DC is used instead. Some COBOL compilers on other platforms did implement "SCREEN SECTION", e.g. Micro Focus COBOL on Windows/Unix, DEC COBOL on OpenVMS, Tandem NonStop COBOL, and COBOL software developed for those platforms did use it. It is also true that non-IBM mainframe COBOL is likely a minority of all COBOL software still in use.


GNU COBOL also implements it, and I have to admit that it's kind of fun to play around with.


It's actually not that complicated. It's just different and, coming from traditional languages, strange.

Some technical mistakes in your post were already raised by another reply, so I just want to point out how the levels actually work. I've played around with COBOL on and off out of casual interest, but when the O'Reilly COBOL book became available to me recently I decided to read it just for fun.

As it turns out, COBOL mixes the concept of variables and report definitions. To paraphrase an example from the book[1], which I can recommend:

    01 date-of-birth.
      02 year.
        03 century pic 99.
        03 year-in-century pic 99.
      02 filler pic x value "-".
      02 month pic 99.
      02 filler pic x value "-".
      02 day pic 99.

With that definition, you can read and write values to and from each individual field, but also access the higher level ones as a combined field. For example, if I assign a full ISO 8601 date to the variable date-of-birth, like so:

    move "2010-01-02" to date-of-birth

I can then read the year by simply reading the corresponding variable:

    display "year is ", year

It all becomes much clearer once one realises that variable definitions in COBOL are conceptually field definitions for reports and fixed-column storage formats. If the task at hand matches the way COBOL thinks of data, things are pretty simple. Once you want to move outside that realm, things become harder.
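A loose analogy in Python (invented here, not real COBOL semantics) of why the group levels work: the record is one fixed-width buffer, and each level is just a named slice over it.

```python
# Named slices standing in for the level structure of date-of-birth above.
FIELDS = {
    "date-of-birth": slice(0, 10),   # 01 level: the whole record
    "year": slice(0, 4),             # 02 level: spans its 03 children
    "century": slice(0, 2),          # 03 level
    "year-in-century": slice(2, 4),  # 03 level
    "month": slice(5, 7),
    "day": slice(8, 10),
}

record = "2010-01-02"  # the effect of: move "2010-01-02" to date-of-birth

def field(name):
    return record[FIELDS[name]]
```

Reading `field("year")` gives the combined "2010" exactly because the parent level's slice covers both children's slices.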

My understanding is that in the mainframe world, COBOL programs are parts of much larger workflows, all coordinated by JCL scripts. Since all data is already in fixed-column form, tying together these COBOL programs is similar to how you build pipes out of different programs when you write shell scripts in Unix.

[1] https://learning.oreilly.com/library/view/beginning-cobol-fo...


And not to mention that, like everywhere else in San Francisco, there will likely be no enforcement of these traffic rules whatsoever, so in practice private vehicles will still be allowed too.

Also, traffic on all cross streets is still open, and on most of them it’s common convention to run red lights by ten seconds or more, well into the start of the pedestrian signal as it changes for the perpendicular direction — an extremely dangerous practice blessed by the city’s authorities (again, by refusing to ever enforce infractions).

I’m glad we’re at least trying to make some forward progress here, but I strongly suspect that Market St will still feel extremely dangerous for pedestrians and bicyclists alike well into the foreseeable future. IMO, we should be more careful about letting city officials claim bold and innovative successes (as in the headline) while not really having changed much of consequence.


They did just approve a 600 million dollar construction package to rework Market. A protected bike lane and new sidewalks will help a lot, but it takes time. Enforcement in general is for sure still a huge issue in the city.


Is that still active after the whole Public Works corruption thing?


What? Yes, it was voted upon and legislated lol. It's not going anywhere.

