I think it absolutely makes sense, especially if the bot and prompts doing the code review are different from the bot/prompts that wrote the code. But sometimes even the same one can find different errors if you just give it more cycles/iterations to look at the code.
We humans (most of us, anyway) don't write everything perfectly in one go, and AI doesn't either.
AI tooling is improving, so AI can now write tests for its own code and do pre-reviews, but I don't think it ever hurts to have both an AI and a human review any PR that gets opened, no matter who or what opened it.
I'm also building a tool in this space (https://kamaraapp.com/), and I've found many times that Kamara's reviews catch issues in Kamara's own code. To be fair, I find bugs in my own code when I review it too!
We've also been battling the same issue Greptile shows in the example provided, where the code suggestion lands on the completely wrong line. We've got it mostly under control, but I haven't found any tool that gets it right 100% of the time. Still a way to go before the big AI takeover.
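For anyone curious, the wrong-line problem mostly comes from trusting the line number the model reports instead of the code it quotes. Here's a minimal sketch of the workaround in TypeScript, assuming the model also returns the snippet it's commenting on; the Suggestion shape is a hypothetical illustration, not Kamara's actual implementation:

    // Hypothetical shape of a model-produced review suggestion.
    interface Suggestion {
      file: string;
      line: number;    // line number claimed by the model (often off)
      snippet: string; // the code the comment refers to
      comment: string;
    }

    // Collapse whitespace so cosmetic differences don't break matching.
    function normalize(s: string): string {
      return s.replace(/\s+/g, " ").trim();
    }

    // Re-anchor a suggestion: find the file line that actually contains
    // the quoted snippet, preferring the match closest to the claimed line.
    function reanchor(fileLines: string[], s: Suggestion): number | null {
      const target = normalize(s.snippet);
      let best: number | null = null;
      for (let i = 0; i < fileLines.length; i++) {
        if (!normalize(fileLines[i]).includes(target)) continue;
        if (best === null || Math.abs(i + 1 - s.line) < Math.abs(best - s.line)) {
          best = i + 1; // 1-indexed line numbers
        }
      }
      return best; // null means the snippet is gone: drop the comment
    }

If the snippet spans multiple lines you need a fuzzier match, and that's exactly where the remaining failures come from.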
Am I the only one who rolled their eyes at this? An ISO standard for "responsible AI"? Who feels authorized to define what "responsible" AI means? This is not a standardization issue.
As always, ISO certification provides a handy framework that you can turn off in one go, in case you need a bunch of 'down and dirty irresponsible AIs' to do something like a mop-up operation.
They retired the 42000 specification because it answered everything and provided no further path for monetization.
Let me provide some helpful commentary for anyone confused by this, as it comes up a lot.
Here is what these terms mean under the current paradigm of corporate world leadership:
- "responsible ai": does not threaten the security of corporate rule.
- "safety": keeps the corporation safe from liability, and does not give users enough power to threaten the security of corporate rule.
If anyone needs any of the other terms defined, just ask.
These models are capable of significantly more, but only the most responsible members of our society are allowed to use them -- like CEOs, and submissive engineers bubble-wrapped in NDAs. Basically, safe people who have a vested interest in maintaining the world order, or who work directly on maintaining it.
Centralized hoarding of the planet's compute power may end up having some very expected consequences.
Why would I mind if other people hold their money themselves or use a custodian, as long as I have the option to custody my own money? Let people choose.
Perhaps I wasn't clear: there's no reason at all for you to mind what other people choose to do with their money, and I'm fine with exchanges offering it (that is, holding customer balances on file privately instead of recording everything on the public blockchain) as an optional service. It's just shocking to me that so few people regard the small cost and inconvenience of "doing it properly" as worth it, when in some sense the entire blockchain machinery was designed and built to enable exactly that guarantee.
It's like watching someone buy a padlock to secure something, then immediately cut through it with a bolt cutter "to make it easier to get my stuff out".
That's not how most people use cookies, just Google & co. I'm using an in-house analytics platform that uses cookies to track how often visitors use the app, but we do no tracking at all outside of our website; we just want to know how often people use our site.
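For what it's worth, that kind of first-party counting only takes one cookie scoped to your own domain. Here's a minimal sketch of the idea in TypeScript; the cookie name visit_count and the one-year expiry are arbitrary choices for illustration, not our actual setup:

    // Read a cookie value by name, or null if it isn't set.
    function getCookie(name: string): string | null {
      const match = document.cookie.match(
        new RegExp("(?:^|; )" + name + "=([^;]*)")
      );
      return match ? decodeURIComponent(match[1]) : null;
    }

    // Bump a first-party visit counter. SameSite=Lax means the cookie is
    // never sent on cross-site requests, so nothing leaks off-domain.
    function bumpVisitCount(): number {
      const count = Number(getCookie("visit_count") ?? "0") + 1;
      const oneYear = 60 * 60 * 24 * 365; // Max-Age is in seconds
      document.cookie =
        `visit_count=${count}; Max-Age=${oneYear}; Path=/; SameSite=Lax; Secure`;
      return count;
    }

The count only ever travels between the browser and our own server, which is the whole difference from the Google & co model.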
I agree with all of this, and it's how I do things for web development, but the time Apple/Google take to approve apps makes this approach quite risky for mobile, since it's hard to roll back.
I guess this goes to the author's point that hard deployments on mobile make mobile development harder.