errcorrectcode's comments

Weird Al should have a yakuza-themed album with about half nerdcore and half alternately shouting and squealing in Japanese.


Sounds like the rappers from Snow Crash.

For the closest real-world analogue, check out m-flo.


Cool.

I was just thinking there ought to be an AI chat agent that can recommend music based on themes and mood, perhaps with a periodic, paradoxically-opposite sense of humor.


Dudes are all secretly gangsta OGs working up the food chain to be a baws someday.


Oh, the fast food chain, you mean.


If those jobs haven't been automated yet. Brought to you by Carl's Jr.

https://shiftwa.org/fast-food-chains-announce-automation-pla...


My late grandfather (he'd be in his 90s now) couldn't answer quite as many Jeopardy! questions as Ken Jennings, but he was within a standard deviation.

When I was 20, if we threw out the sports deck, no one wanted to play Trivial Pursuit with me. Lol.


If technobabble, I'd be disappointed to not see "interposer" or "hash table" listed.


Meat ag is the problem. If everyone had piles of sausages, burgers, and steaks for every meal, food production would be incredibly difficult and expensive.


Why stop there?

Eliminate the entropic, commingling contamination problem by separately collecting 1, 2 and everything else for resource extraction.


Real. ADHD-PI, -PH, and -C are approximations. Each person has a smattering of symptoms of the DSM-5 criteria. [0]

I'm classified closer to -PI because I'm not hyperactive like a friend of mine, but I still have about half of the symptoms in each area, just not enough to be called -C.

My parents believed it was a psychiatry conspiracy theory, which only caused me more problems through years of it going unmanaged.

0. https://www.cdc.gov/ncbddd/adhd/diagnosis.html


As long as the constraints are maintained correctly within the DB, it's fine. Some constraints are more complicated than FK constraints allow; implementing them as close to the DB as possible is a good idea.

Doing it in app code would be a terrible idea because it throws FK referential (and data) integrity out the window.


Having done computer architecture and bit-twiddled x86 in ye olden days, I immediately and independently converged on the patented solution (code / circuit / Verilog, more or less the same thing). It goes to show how broken the USPTO is, because it's obvious to anyone in the field. Patents are supposed to be nonobvious. (35 USC 103)

https://patentdefenses.klarquist.com/obviousness-sec-103/


Agreed. I spent about a minute before reading it and came up with the first solution, didn't feel like thinking through the puzzle of how to avoid caring which one is larger, and then settled on the one with the 2016 expiration date. All within 1 to 2 minutes. I briefly considered XOR but didn't feel like remembering more about it - the solution was obvious when I saw it. How any of that was ever patentable is a crime.


This is bizarre. I wonder how many of us saw that title, thought "That's a really simple problem, surely?", came up with a solution, and then were shocked when our coffee-deprived brains actually produced the patented solution?

I mean... ignoring the bitwise arithmetic (which is only obvious to people used to doing binary operations), this is the kind of maths that an 11-year-old could do.

That said, the patented solution is a little more complex. But not by much.

Which makes me curious: what other patents have we violated in our day-to-day without even knowing it?


> Which makes me curious: what other patents have we violated in our day-to-day without even knowing it?

Patents are like the criminal code - always remember "Three Felonies a Day" [1]. The system is set up so that if you are one of the 99%, the 1% can come in and bust you at will if you become too much of an annoyance/threat. They will find something if they just keep digging deep enough (not to mention that they can have your entire company's activity combed through with a microscope if they find a sympathetic court), and blast you with enough charges and threaten sequential jail time so that you cannot reasonably do anything other than plead guilty and forfeit your right to a fair trial [2].

And for what it's worth, that "play by the rules as we want or we will destroy you" tactic can even hit multi-billion dollar companies like Epic Games. It's one thing if society decides to regulate business practices by the democratic process of lawmaking... but the fact that Apple can get away banning perfectly legal activities such as adult content, vaping [3] or using a non-Apple payment processor from hundreds of millions of people is just insane, not to mention incredibly damaging to the concept of democracy.

[1]: https://kottke.org/13/06/you-commit-three-felonies-a-day

[2]: https://innocenceproject.org/guilty-pleas-on-the-rise-crimin...

[3]: https://www.macrumors.com/2020/06/01/pax-vape-management-web...


Freedom means some people will do stuff you don’t like.


It also means you are not necessarily entitled to rents for your “discoveries”.


And democracy means, at its core, to restrict the freedom of people committing actions that society has decided to be unlawful.


Except society hasn’t decided that such action is unlawful even if some people, such as yourself, wish it was. I am sure at least a few vegans would like to outlaw eating meat, but democracy also works the other way ensuring such extremely unpopular laws never get passed.


> I am sure at least a few vegans would like to outlaw eating meat, but democracy also works the other way ensuring such extremely unpopular laws never get passed.

I would not be so certain about that one.

Many countries are considering outright "meat taxes", health scores (similar to smoking warnings in their intended nudging effect) or extending CO2 taxes to agriculture: meat production causes about 14% of global CO2 emissions [1], and outlawing/disincentivizing meat consumption is a very easy, very fast and incredibly effective way of cutting down on CO2, methane and dung emissions. Not to mention the indirect emissions from land burning (especially in Brazil) or the societal cost of overconsumption of meat (e.g. obesity and heart issues).

Personally, I'm in the "omnivore" camp but recognize that the way how we deal with meat products has to be massively reformed. We need to cut waste and curb consumption, the sooner the better.

[1]: https://www.theguardian.com/environment/2021/sep/13/meat-gre...


That’s the more popular view, but there is still a group of hard core “meat is murder” vegans. Personally I would say get rid of all farm subsidies, but politics makes strange bedfellows and democracy means accepting compromises up to a point.


This will from now on be my preferred quote against "freedom at all costs" arguments.


We should end legal protection of ideas as soon as possible.


I just want to second this with my own experience just now:

I looked at the title while still waking up.

At first I thought of the low + (high - low) / 2 method. I then figured it was maybe better to simply predivide both numbers before adding and just correct for the lowest bit (how was that ever patented?!).

However, I didn't like having to perform two divisions so I thought there was probably something clever one could do with bit operations to avoid it. But, still being tired, I decided I didn't want to actually spend time thinking on the problem and I'd already spent a minute on it.


x / 2 === x >> 1, it's fast.


For unsigned or positive x. Yes, the article is about unsigned integers, but some might see this for the first time and not be aware of this restriction. -3 / 2 == -1 but -3 >> 1 == -2.


Interestingly, `(a >> 1) + (b >> 1) + (a & b & 1)` works correctly for signed integers (if `>>` works like in Java, filling the most significant bit with ones for negative numbers), whereas with division you'd need to write different expressions depending on the operand signs. E.g. (-3) / 2 + (-5) / 2 + ((-3) & (-5) & 1) = (-1) + (-2) + 1 = -2, but ((-3) >> 1) + ((-5) >> 1) + ((-3) & (-5) & 1) = (-2) + (-3) + 1 = -4.


You can use the appropriate right shift with signed integers as easily as with unsigned integers; you just have to handle the correction due to the shifted-out bit in the right way.

The fact that the right shift of a negative integer gives the floor of the result just makes the correction easier than if you had used division with truncation towards zero.

The shifted-out bit is always positive, regardless of whether the shift was applied to a negative or positive number.

Were it not for a tradition born of an arbitrary initial choice, programming would in many cases have been easier if the convention for division of signed numbers had been to always generate positive remainders, instead of remainders with the same sign as the quotient.


With positive remainders you get weird quotient behavior. Why should 10/3 and -10/-3 yield different results? Besides that, the choice is not universal; different languages use different conventions.


Why should 10/3 and -10/-3 yield the same result?

I do not see where this would be of any use.

On the other hand, if you want a quotient that has some meaningful relationship with the ratio between the dividend and the divisor, there are other more sensible definitions of the integer division than the one used in modern programming languages.

You can have either a result that is a floating point number even for integer dividend and divisor, like in Algol, or you can define the division to yield the quotient rounded to even (i.e. with a remainder that does not exceed half of the divisor).

In both cases 10/3 and -10/-3 would yield the same result and I can imagine cases when that would be useful.

For the current definition of the integer division, I do not care whether 10/3 and -10/-3 yield the same result. It does not simplify any algorithm that I am aware of, while having a remainder of a known sign simplifies some problems by eliminating some tests for sign.


I was not really thinking about application but the mathematics. It seems a reasonable decision to me that |a / b| = |a| / |b| and to not get results of different magnitude depending on sign changes only.


It's fast, but I figured doing that on both sides before adding looked a bit inelegant and maybe it could be avoided by doing "something something bit operations" and then I dropped the thought and clicked the link.


On a modern architecture given that most integers are usually u32 by default but the underlying CPU deals with 64bits natively, I'd just cast to u64 and call it a day.

Actually, I was curious to see whether GCC would be smart enough to automatically pick the best optimization for the underlying architecture, but that doesn't appear to be the case.

For x86_64 (with -O3 or -Os):

    avg_64bits:
    .LFB0:
        .cfi_startproc
        movl    %edi, %edi
        movl    %esi, %esi
        leaq    (%rdi,%rsi), %rax
        shrq    %rax
        ret
        .cfi_endproc

    avg_patented_do_not_steal:
    .LFB1:
        .cfi_startproc
        movl    %edi, %eax
        movl    %esi, %edx
        andl    %esi, %edi
        shrl    %eax
        shrl    %edx
        andl    $1, %edi
        addl    %edx, %eax
        addl    %edi, %eax
        ret
Clearly, just casting to 64 bits seems to produce denser code.

For ARM32 (-O3 and -Os):

    avg_64bits:
        push    {fp, lr}
        movs    r3, #0
        adds    fp, r1, r0
        adc     ip, r3, #0
        mov     r0, fp
        mov     r1, ip
        movs    r1, r1, lsr #1
        mov     r0, r0, rrx
        pop     {fp, pc}

    avg_patented_do_not_steal:
        and     r3, r1, #1
        ands    r3, r3, r0
        add     r0, r3, r0, lsr #1
        add     r0, r0, r1, lsr #1
        bx      lr
A lot more register spilling in the 64bit version since it decides to do a true 64bit add using two registers and an adc.

My code, for reference:

    uint32_t avg_64bits(uint32_t a, uint32_t b) {
      uint64_t la = a;
      uint64_t lb = b;
    
      return (la + lb) / 2;
    }

    uint32_t avg_patented_do_not_steal(uint32_t a, uint32_t b) {
        return (a / 2) + (b / 2) + (a & b & 1);
    }


> Patents are supposed to be nonobvious

Emphasis on supposed.

The granted patents include: a laser used to exercise a cat, and a mobile wood-based dog game (a log used to play fetch).

https://abovethelaw.com/2017/10/8-of-my-favorite-stupid-pate...

https://patents.google.com/patent/US5443036A/en

https://patents.google.com/patent/US6360693

Apple takes the cake though, by patenting a geometric shape.


I bet you broke this patent as a kid https://patents.google.com/patent/US6368227B1/en


100% correct

the patented solution immediately came to mind


It almost did for me. I thought that you should be able to divide each number by 2 (or shift one bit) before adding, but that would lose a 1 if both numbers have 1 in their least significant bit. The part with "a & b & 1" fixes that exact issue and is obvious to me in hindsight.


> and is obvious to me in hindsight.

Everything is. That's kinda hindsight's thing.

That's not to say a few people in this thread didn't see this solution right away, but the "this was all obvious" crowd in this thread is a little too large for my taste. Be real, guys.


I guess the part about overflow in the title primes most experienced developers to immediately think about a solution where the added numbers are restricted beforehand to avoid the overflow. From there the obvious answer is to halve them, which leaves the next problem when the numbers are odd.

If you're not aware that numbers can overflow (and you probably don't tend to think about that for every single + you type, I guess), then the proper solution is less obvious.


And what was the intent behind the patent? The second way is actually more useful, since it's not limited to unsigned ints.


But it requires you to know which one is larger. The patented way is faster if you are working with unsigned.


The patent is more sophisticated than what the article implies - it's a single clock cycle method, which no compiler I've ever seen will do given the code presented in the article.

And it's from 1996.


This thread is full of people who challenged themselves to solve it and then failed to come up with the 'obvious' 1-cycle solution. It's clearly non-obvious, as this thread shows.

The actual patent system failure here is the patent is not useful -- it's not valuable. If you needed this solution, you could sit down and derive it in less than an hour. That's not because it's obvious, but because the scope is so small.

The only difference between this patent and say a media codec is how long it would take to reinvent it. It might take you 200 years to come up with something as good as h.265, but there's no magic to it. There's a problem, somebody came up with a solution, somebody else could do it again given enough time to work on it. This is true for everything that's ever been patented.

The point of patents is to compensate for the value of the work needed to reinvent, and so the real problem here is that that value is less than any sane minimum. The value is less than the patent examiner's time to evaluate it! But court rulings have said it doesn't matter how insignificant a patent is; as long as it does anything at all it's "useful", which leads to these kinds of worthless patents.


> and then failed to come up with the 'obvious' 1-cycle solution

That's unfair, as the commenters here are providing a software solution. The patent is about a hardware solution which involves two parallel adder circuits. It implements in hardware exactly what the software solution does, but you can't express it in software because there is no construct that says "implement this addition twice, please". You'd have to express it as:

  avg = [(x>>1) + (y>>1), (x>>1) + (y>>1) + 1][x & y & 1]
Which isn't 1-cycle either without the specialized adder.


And I just realized the hardware solution is incredibly sub-optimal. If you were to design this in specialized hardware, you'd use a single (N+1)-bit adder and just discard the least significant bit in the output, not duplicate the entire adder tree in silicon.


There is no need for a specialized adder.

The patented expression is computable in an obvious way by a single ordinary adder and a single AND gate connected to the carry input of the adder, without any other devices (the shifts and the "& 1" are done by appropriate connections).

Any ordinary N-bit adder computes the sum of 3 input operands: 2 that are N-bit, and a third that is a 1-bit carry.


> This thread is full of people who challenged themselves to solve it and then failed to come up with the 'obvious' 1-cycle solution. It's clearly non-obvious, as this thread shows.

If a significant fraction of people come up with it on the spot, it's obvious. And they did.


I don't see a single comment mentioning doing this in 1 cycle, except from those who read the patent, much less reusing existing functional units to do so. So it's not clear to me that any commenter came up with an equivalent to the patented solution, or even identified the problem it solves.

Keep in mind this solution was to support MPEG-1 video encoding in the olden days when state of the art processors were 100 MHz and 800 nm process. Doing this in 1 cycle while reusing already existing function units seems like a clever solution to me -- not patent-worthy, not difficult, but clever.


Are you very sure that patent would never get threatened toward a software implementation that doesn't know anything about cycles?

If so then the technique in the post isn't actually patented.

If that C code would get threatened, then the 1 cycle thing is a red herring.

Also "Doing this in 1 cycle while reusing already existing function units"? In hardware you can use a normal adder without any special technique...


I've already said twice now that it's not patent-worthy, so it seems we're in agreement on that point.


That was a response to you calling it clever.


Sorry, but this argument about the single-cycle implementation is complete BS.

Any logic designer who is not completely incompetent, upon seeing the expression

(a / 2) + (b / 2) + (a & b & 1)

will notice that this is a 1-cycle operation, because it is just a single ordinary addition.

In hardware, the divisions are done just by connecting the bits of the operands in the right places. Likewise, the "& 1" is done by connecting the LSBs of the two operands to a single AND gate whose output feeds the carry input of the adder, so no extra hardware beyond a single adder is needed. This is really absolutely trivial for any logic designer.

The questions at any hiring interview, even for beginners, would be much more complex than how to implement this expression.

It is absolutely certain that such a patent should have never been granted, because both the formula and its implementation are obvious for any professional in the field.


What is obvious today might not have been obvious in 1996.

Our experience and training have changed dramatically over the past 26 years.


I can assure you I would have come up with the patented solution just as fast in 1996, when I was a teenager dabbling in 6502 assembler on an Atari computer, because I solved it now on the basis of exactly the experience and knowledge I acquired back then.


But insofar as it is non-obvious why it is non-obvious, the criterion is met.


> It goes to show how broken the USPTO is...

The patent issued in 1996 and hasn't been revisited since (because it was never asserted in litigation). The USPTO is a lot different now, a quarter-century later.


> The USPTO is a lot different now, a quarter-century later.

Please be more specific or link something that explains how they've improved.


Back then you couldn't challenge a patent early and prevent it from being issued, and once it was issued you couldn't challenge it without violating it and going to trial. Now you can do both.


Early challenges sound helpful, but that also outsources the work, and it could put more coders at risk of treble damages further down the line from some patent they glanced at and forgot about.


Isn't there also a recourse process by which you can get a patent invalidated? You can't expect the USPTO to hire an expert in every single possible field.


I think that's usually resolved in court. By which I mean, I don't think there's process beyond choosing to fight any suit brought against you and hoping you win in court.


> I don't think there's process beyond choosing to fight any suit brought against you and hoping you win in court.

Not true. See https://en.wikipedia.org/wiki/Reexamination It's even easier today than a decade ago, though the Wikipedia article doesn't explain that aspect very well. (I wouldn't be able to explain it, either. I think it has to do with reduced ability for a patent owner to drag out review, including dragging it into court.) Probably not nearly easy enough, though.


Why not?


This isn't a political post; it's a potentially hilarious / ironic one.

Btw, what would happen if a Justice behaved inappropriately to another one?

Reason: https://en.wikipedia.org/wiki/Anita_Hill#Allegations_of_sexu...

