Yes, and to add, in case it's not obvious: in my experience the maintenance and mental (and emotional, call me sensitive) costs of bad code compound exponentially the more hacks you throw at it.
I'm pretty sure that will be true with AI as well.
No accounting for taste, but part of what makes code hard for me to reason about is combinatorial complexity, where the number of possible states makes it difficult to know all the good and bad states your program can be in. Combinatorial complexity is something that is objectively expensive for any form of computer, be it a human brain or silicon. If the code is written in such a way that the set of correct and incorrect states is impossible to know, then the problem becomes undecidable.
I do think there is code that is "objectively" difficult to work with.
There are a number of things that make code hard to reason about for humans, and combinatorial complexity is just one of them. Another one is, say, size of working memory, or having to navigate across a large number of files to understand a piece of logic. These two examples are not necessarily expensive for computers.
I don't entirely disagree that there is code that's objectively difficult to work with, but I suspect that the Venn diagram of "code that's hard for humans" and "code that's hard for computers" has much less overlap than you're suggesting.
Certainly with current models I have found that the Venn diagram of "code that's hard for humans" and "code that's hard for computers" has actually been remarkably similar, I suspect because it's trained on a lot of terrible code on Github.
I'm sure that these models will get better, and I agree that the overlap will be lower at that point, but I still think what I said will be true.
I wouldn't expect so. These machines have been trained on natural language, after all. They see the world through an anthropomorphic lens. IME & from what I've heard, they struggle with inexpressive code in much the same way humans do.
What do you think about the argument that we are entering a world where code is so cheap to write, you can throw the old one away and build a new one after you've validated the business model, found a niche, whatever?
I mean, it seems like that has always been true to an extent, but now it may be even more true? Once you know you're sitting on a lode of gold, it's a lot easier to know how much to invest in the mine.
It hasn't always been true, it started with rapid development tools in the late 90's I believe.
And some people thought they were building "disposable" code, only to see their hacks being used for decades. I'm thinking about VB but also behemoth Excel files.
I guess the question is, are the issues not worth fixing because implementing a fix is extremely expensive, or because the improvements from fixing it were anticipated to be minor? I assume the answer is generally a mix of the two.
Someone has to figure out how to make the experiences of the two generations consistent in the ways they need to be, and different only in the ways they need to differ.
The tl;dr of this is that I don't think the code itself is what needs to be preserved; the prompt and chat are the actual important and useful things here. At some point I think it makes more sense to fine-tune the prompts to get increasingly more specific, just regenerate the code based on that spec, and store that in Git.
> At some point I think it makes more sense to fine-tune the prompts to get increasingly more specific, just regenerate the code based on that spec, and store that in Git.
Generating code using a non-deterministic code generator is a bold strategy. Just gotta hope that your next pull of the code slot machine doesn’t introduce a bug or ten.
We're already merging code that has generated bugs from the slot machine. People aren't actually reading through 10,000 line pull requests most of the time, and people aren't really reviewing every line of code.
Given that, we should instead tune the prompts well enough to not leave things to chance. Write automated tests to make sure that inputs and outputs are ok, write your specs so specifically that there's no room for ambiguity. Test these things multiple times locally to make sure you're getting consistent results.
> Write automated tests to make sure that inputs and outputs are ok
Write them by hand or generate them and check them in? You can’t escape the non-determinism inherent in LLMs. Eventually something has to be locked in place, be it the application code or the test code. So you can’t just have the LLM generate tests from a spec dynamically either.
> write your specs so specifically that there's no room for ambiguity
Using English prose, well known for its lack of ambiguity. Even extremely detailed RFCs have historically left lots of room for debate about meaning and intention. That’s the problem with not using actual code to “encode” how the system functions.
I get where you’re coming from but I think it’s a flawed idea. Less flawed than checking in vibe-coded feature changes, but still flawed.
> Write them by hand or generate them and check them in?
Yes, written by hand. I think that ultimately you should know what valid inputs and outputs are and as such the tests should be written by a human in accordance with the spec.
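To make that concrete, here's a minimal sketch in Rust. The `parse_percent` function is hypothetical and stands in for whatever a given generation run produces; the point is that the hand-written assertions encode the spec and must pass regardless of which run produced the implementation:

```rust
// Hypothetical spec: parse strings like "42%" into Some(42),
// rejecting values over 100 or inputs missing the '%' suffix.
// This implementation stands in for generated code; only the
// tests in main() are the human-authored source of truth.
fn parse_percent(s: &str) -> Option<u8> {
    let n: u32 = s.strip_suffix('%')?.trim().parse().ok()?;
    if n <= 100 { Some(n as u8) } else { None }
}

fn main() {
    // Hand-written tests pinning the spec's valid inputs and outputs.
    assert_eq!(parse_percent("42%"), Some(42));
    assert_eq!(parse_percent("100%"), Some(100));
    assert_eq!(parse_percent("101%"), None); // out of range
    assert_eq!(parse_percent("42"), None);   // missing suffix
    println!("spec tests passed");
}
```

If a regenerated implementation fails any of these, the generation run is rejected rather than the tests being "fixed" to match it.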
> Less flawed than checking in vibe-coded feature changes, but still flawed.
This is what I'm trying to get at. I agree it's not perfect, but I'm arguing it's less evil than what is currently happening.
Infrastructure-as-code went through this exact cycle. Declarative specs were supposed to replace manual config, but Terraform still needs state files because specs drift from reality. Prompts have it worse since you can't even diff what changed between two generation runs.
Observability into how a foundation-model-generated product arrived at that state is significantly more important than the underlying codebase, as it's the prompt context that is the architecture.
Yeah, I'm just a little tired of seeing these multi-thousand-line pull requests where no one has actually looked at the code.
The solution people are coming up with now is using AI for code reviews and I have to ask "why involve Git at all then?". If AI is writing the code, testing the code, reviewing the code, and merging the code, then it seems to me that we can just remove these steps and simply PR the prompts themselves.
You don't actually need source control to be able to roll back to any particular version that was in use. A series of tarballs will let you do that.
The entire purpose of source control is to let you reason about change sets to help you make decisions about the direction that development (including bug fixes) will take.
If people are still using git but not really using it, are they doing so simply to take advantage of free resources such as github and test runners, or are they still using it because they don't want to admit to themselves that they've completely lost control?
> are they still using it because they don't want to admit to themselves that they've completely lost control?
I think this is the case, or at least close.
I think a lot of people are still convincing themselves that they are the ones "writing" it because they're the ones putting their names on the pull request.
It reminds me of a lot of early Java, where it would make you feel like you were being very productive because everything that would take you eight lines in any other language would take thirty lines across three files to do in Java. Even though you didn't really "do" anything (and indeed Netbeans or IntelliJ or Eclipse was likely generating a lot of that bootstrapping code anyway), people would act like they were doing a lot of work because of a high number of lines of code.
Java is considerably less terrible now, to a point where I actually sort of begrudgingly like writing it, but early Java (IMO before Java 21 and especially before 11) was very bad about unnecessary verbosity.
> If people are still using git but not really using it, are they doing so simply to take advantage of free resources such as github and test runners,
does it have to be free to be useful? the CD part is even more important than before, and if they still use git as their input, and everyone including the LLM is already familiar with git, what's the need to get rid of it?
there's value in git as a tool everyone knows the basics of, and as a common interface of communicating code to different systems.
passing tarballs around requires defining a bunch of new interfaces for those tarballs, which adds a cost to every integration that you'd otherwise get essentially for free if you used git
A series of tarballs is really unwieldy for that though. Even if you don't want to use git, and even if the LLM is doing everything, having discrete pieces like "added GitHub oauth to login" and "added profile picture to account page" as different commits is still valuable for when you have to ask the LLM "hey about the profile picture on the account page".
Also, the approach you described is what a number of AI for Code Review products are using under-the-hood, but human-in-the-loop is still recognized as critical.
It's the same way how written design docs and comments are significantly more valuable than uncommented and undocumented source.
Because LLMs are designed as emulators of actual human reasoning, it wouldn't surprise me if we discover that the things that make software easy for humans to reason about also make it easier for LLMs to reason about.
Now with AI, you're not only dealing with maintenance and mental overhead, but also the overhead of the Anthropic subscription (or whatever AI company) to deal with this spaghetti. Some may decide that's an okay tradeoff, but personally it seems insane to delegate the majority of development work to a black-box, cloud-hosted LLM that can be rug-pulled out from under you at any moment (and that you're unable to hold accountable if it screws up).
Call me naive, but I don't believe that I'm going to wake up tomorrow and ChatGPT.com and Claude.ai are going to be hard down and never come back. Same as Gmail, which is an entirely different corporation. I mean, they could, but it doesn't seem insane to use Gmail for my email, and that's way more important to my life functioning than this new AI thing.
I have been doing this for years, especially for libraries (internal or otherwise), anything that's `pub`/`export`, or gnarly logic that makes the intent not obvious. Not _everything_ is documented, but most things are.
I'm doing it because I know how much I appreciate well-written documentation. Also this is a bit niche, but if you're using Rust and add examples to doc-comments, they get run as tests too.
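For anyone unfamiliar with the Rust feature mentioned above, here's a sketch of what it looks like (the function and the crate name `mylib` are made up for illustration). The fenced example inside the doc-comment is compiled and executed by `cargo test`, so the documentation can't silently rot:

````rust
/// Returns true if `s` looks like a hex color such as `#a1b2c3`.
///
/// ```
/// // This example runs as a test under `cargo test`.
/// assert!(mylib::is_hex_color("#a1b2c3"));
/// assert!(!mylib::is_hex_color("a1b2c3"));
/// ```
pub fn is_hex_color(s: &str) -> bool {
    s.len() == 7
        && s.starts_with('#')
        && s[1..].chars().all(|c| c.is_ascii_hexdigit())
}
````

rustdoc extracts each fenced block from the doc-comments and runs it, so examples double as regression tests.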
Also given we both managed to produce more than one sentence, and include capital letters in our comments, it's entirely possible both of us will be accused of being an AI. Because, you know... People don't write like this, right?
>Also given we both managed to produce more than one sentence, and include capital letters in our comments, it's entirely possible both of us will be accused of being an AI.
Could anyone explain the esoteric meaning of why people started doing that shit? I got a hypothesis, what's going on is something like this:
1. Prove you are human: write Like A Fucking Adult You Weirdo (internal designator for a specific language register, you know the one)
2. Prove you are human: _DON'T_ write Like A Fucking Adult You Weirdo (because that's how LLMs were trained to write, silly!)
3. ???? (cognitive dissonance ensues)
4. PROFIT (you were just subject to some more attrition while the AI just learned how to pass a lil bit better)
I never thought computer programmers of all people would get trapped in such a simple loop of self-contradiction.
But I guess the human materiel really has degraded since whenever. I blame remote work preventing us from even hypothetically punching bosses, but anyway weird fucking times eh?
Maybe the posts trying to figure "this post is AI, that post is not AI" are themselves predominantly AI-generated?
Or is it just people made uncomfortable by what's going on, but not able to articulate further, jumping on the first bandwagon they see?
Or maybe this "AI-doubting of probably human posters" was started by humans, yes - then became "a thing", and as such was picked up by the LLM?
Like who the fuck knows, but with all honesty that's how I felt about so many things, dating from way before LLMs became so powerful that the above became a "sensible" question to ask...
Predominantly those things which people do by sheer mimesis - such as pop culture.
"Are you a goddam robot already - don't you see how your liking the stupid-making song is turning you into stupid-you, at a greater rate than it is bringing non-stupid-you aesthetic satisfaction?" type of thing -- but then I assume in more civilized places than where I come from people are much more convincingly taught that personal taste "doesn't matter" (and simultaneously is the only thing that matters; see points 1-4... I guess that's what makes some people believe curating AI, i.e. "prompt engineering" can be a real job and not just boil down to you being the stochastic parrot's accountability sink?)
I'm not even sure English even has the notions to point out the concrete issue - I sure don't know 'em.
Ever hear of the strain of thought that says "all metaphysical questions are linguistic paradoxes (and it's self-evidently pointless to seek answers to nonsensical questions)"?
Feels kinda like the same thing, but artificially constructed within the headspace of American anti-intellectualism.
Maybe a correct adversarial reading of the main branding acronym would be Anti-Intelligence.
You know, like bug spray, or stain remover.
But for the main bug in the system; the main stain on the white shirt: the uncomfortable observation that, in the end, some degree of independent thinking is always required to get real things done which produce some real value. (That's antithetical to standard pro-social aversive conditioning, which says: do not, under any circumstance, just put 2 and 2 together; lest you turn from "a vehicle for the progress of civilization" back into a pumpkin)
There are many JS implementations out there. Quality kind of depends on what you need, and some engines are more or less complete in which quirks they support.
And for example, v8 doesn't make much sense in embedded contexts
There are definitely plenty of other JS engines, but they aren't always up to date on newer JS features. I'm pretty sure this is the 3rd JS engine to fully support the Temporal API (even JSC hasn't shipped it yet).
Random aside: I've seen a 2015 game be accused of AI slop on Steam because it used a similar concept... And mind you, there's probably thousands of games that do this.
First it was punctuation and grammar, then linguistic coherence, and now it's tiny bits of whimsy that are falling victim to AI accusations. Good fucking grief
To me, this is a sign of just how much regular people do not want AI. This is worse than crypto and metaverse before it. Crypto, people could ignore and the dumb ape pictures helped you figure out who to avoid. Metaverse, some folks even still enjoyed VR and AR without the digital real estate bullshit. And neither got shoved down your throat in everyday, mundane things like writing a paper in Word or trying to deal with your auto mechanic.
But AI is causing such visceral reactions that it's bleeding into other areas. People are so averse to AI they don't mind a few false positives.
It's like how people resisted CGI back in the day. What people dislike is low quality. There is a loud subset who are really against it on principle, like the people who insist on analog music, but regular people are much more practical; they just don't post about this all day on the internet.
perhaps one important detail is that cassette tape guys and Lucasfilm aren’t/weren’t demanding a complete and total restructuring of the economy and society
An excellent observation. When films became digital, the real backlash came when they stopped distributing film for the old film projectors and every movie theater had to invest in a very expensive DCP projector. Some couldn't and were forced to shut down.
If I had lost my local movie theater because of digital film, I would have a really good reason to hate the technology, even though the blame is on the studios forcing that technology on everyone.
It is not. People resisted bad CGI. During the advent of CGI, people celebrated masterpieces like The Matrix and even Titanic. They hated The Scorpion King, however.
No, I don't think most people are really against AI Gen works "on principle". Or at least not in any interpretation of "on principle" that would allow for you to be dismissive of complaints in this way.
I think principles are important. Especially when it comes to art, principle might be all we have. Going back to the crypto example, NFTs were art that real people had made. In some cases, very good art. People railed against NFTs despite the quality of the art. That is being against something on principle. Comparatively, if my local grocery chains were owned by neonazis, I'd have a much harder time of standing on principle, given that doing so may have a negative impact on my ability to survive and prosper.
AI Gen works, on the other hand, most often do not come with readily available marking that it is AI Gen. What people are complaining about is the lack of quality in the work. If they accuse a poorly human-written article of being AI Gen, that's just a mistake. But the general case is a legitimate evaluation of the quality of the material and the conditions under which it was made and presented.
In my own case, while I certainly have plenty of "principled" reasons to dislike AI Gen works, I also dislike it because it's just garbage. Oh yeah, sure, it's impressive that a computer can spit out reasonable content at all. It would equally be impressive for a chimpanzee to start talking in full sentences. That doesn't mean I'm going to start going to the chimpanzee for dissertations on the human condition.
> I think less of someone as a person if they send me AI slop.
n=1 but working on side projects for others, i could easily generate ai images (instead of using stock photos) for a client, but i resist because i also feel this but as the sender...
there is the fact that such images 'look ai' but even if it were perfect, idk somehow i feel cheap doing that.
Agreed. Even in low value stuff I’d so much rather use basic stock images, ms paint drawings or almost anything over AI images. Seeing them is almost like being near someone who stinks or is sick/coughing. It’s a very visceral reaction.
Not just in the obvious ways either, even good CGI has been detrimental to the film (and TV) making process.
I was watching some behind the scenes footage from something recently, and the thing that struck me most was just how they wouldn't bother with the location shoot now and just green-screen it all for the convenience.
Even good CGI is changing not just how films are made, but what kinds of films get shot and what kind of stories get told.
Regardless of the quality of the output, there's a creativeness in film-making that is lost as CGI gets better and cheaper to do.
it may be an unpopular opinion but i feel like that watching any of the marvel movies... its like its just a showcase for green screens and ridiculous rubber-band acrobatics cgi everywhere...
that kind of stuff might work in anime or cartoons, but live action just looks ridiculous to me for the most part.
Not the same. The more effort you put into CGI the more invisible it becomes. But you can’t prompt your way out of hallucinations and other AI artifacts. AI is a completely different technology from CGI. There is no equivalence between them.
i think they are referring to statements that they have "solved" hallucinations and it won't be a problem anymore (which obviously hasn't happened yet anyway)
My guess is that post-training has gotten a lot better in the last couple of years and what people are attributing to better models are actually just traditional (non-LLM) models they place on top of the LLM which makes it appears that the model has increased in quality (including by seemingly fewer hallucination).
If this is the case it would be observed with different prompting strategies, when you find a prompt which puts more weight on the post-training models.
The story is that I was getting into a new genre of music, namely Japanese City pop from the 1980s. I was totally unfamiliar with the genre and started listening to it on YouTube. I found one playlist, which I listened to a lot, thinking: "wow, this is very formulaic, and the lyrics are very generic", but I kind of thought that was just how the genre went. Finally I had planned to use it during a small local event, but when I went to find out who the artists were, I embarrassingly found out it was all AI generated.
Thing is, in this instance I knew nothing of the source material. When I went to get actual songs, written by actual people, the difference was stark. I would be able to recognize AI-generated City pop in an instant now, 8 months later. This experience kind of felt like I had been scammed, like my ignorance of the genre had been taken advantage of. It was not pleasant.
I had a very similar experience, looking for music to play during D&D sessions. Not paying close attention to the music, it seemed like it fit the bill. Once I started listening more closely, there were lots of issues that became readily apparent.
My dad has also started sharing with me links on Facebook to pop songs that have been re-arranged in different genres. This was a big area of fun for a number of folks in my family several years ago as we discovered YouTube artists like Chase Holfelder who put significant effort into making very high quality rearrangements. But I kept noticing these weird issues in the new songs.
I've gotten to where I can identify an AI-generated song almost immediately: there's a weird, high-frequency hiss in the mix that sounds like heavy noise fighting through compression artifacts, when the source it's coming from should be clean. There's a general lack of enthusiasm in the vocals and a boring, nonsensical progression to the lyrics on original arrangements. Sometimes the person generating the song tries to hide that last issue by generating instrumentals only, or they use one of those try-too-hard-to-sound-badass Country Rock genres that are popular on TikTok to stick on top of clips from the TV show Yellowstone (WTF is with that?!), but then when I check the details, there's obviously-AI cover art for artists I've never heard of. The accounts will be anthologies full of these artists that have never existed.
So, I know people keep parroting "a good artist can use any tool", but I've yet to see it. All this "democratizing art" (I didn't know anyone was gatekeeping it to begin with, and there has certainly been no lack of talent online for years) doesn't seem to be producing results. It becomes pretty obvious very quickly that it's all just a pump-and-dump scheme to Get Them Clicks.
You don't understand. I mean content that even now, you don't know it is AI.
Obviously you think the AI content that you can identify is bad. But there is content you've encountered that you think is good and not AI content, that actually is AI generated.
This sounds dangerously close to a No True Scotsman argument. Any example one could provide, you've teed it up nicely to claim that no, you didn't mean that one, obviously, because you could tell. No, it's some other thing that you haven't found yet. That's the passing-AI.
I think it is worse than a No True Scotsman. I think your parent actually performed a category mistake here. Survivorship bias does not apply here. Whether or not I notice or even unknowingly enjoy AI-generated content is not in the same category as how much I notice or enjoy CGI.
The difference is in the authorship. Actual work and skill go into CGI, and people generally notice bad CGI, and it generally affects how you judge the art. Sometimes CGI is actually part of the art and you are supposed to notice it, and it is still good (think of how Cher used Auto-Tune in "Believe"). There is no such equivalence with AI.
To further elaborate: bad CGI is often (but not always) used as a cost-cutting measure. Directors (or producers encouraging directors) use it when they want to save money on practical effects, or even to cover up mistakes that happened during shooting and avoid an expensive re-shoot. This can work OK if used sparingly and carefully; however, if it's done a lot and without the needed care, you will notice it, and you will judge the work by it. AI content is kind of like that, except that's kind of all AI is. The author couldn't be bothered to do the work and just prompted an AI to do it for them.
To summarize: AI is not like CGI in general, it is much closer to a strict subset of CGI which only includes bad CGI.
No there is a very loud minority of users who are very anti AI that hate on anything that is even remotely connected to AI and let everyone know with false claims. See the game Expedition 33 for example.
IMO it's a combination of long-running paranoia about cost-cutting and quality, and a sort of performative allegiance to artists working in the industry.
And yet, no game has problems selling due to these reactions. As a matter of fact, the vast majority of people can't even tell if AI has been used here or there unless told.
I reckon it's just drama paraded by gaming "journalists" and not much else. You will find people expressing concern on Reddit or Bluesky, but ultimately it doesn't matter.
The honor system is never a sustainable solution. It's not even down to corporate greed, it's just not something that works at scale, especially when there's money to be made, and even more especially when there isn't.
It baffles me when I see ostensibly smart people refusing to press Shift. Especially programmers. I know you can do it! I've seen you use curly brackets!
I recently battled this and reverted to using DOM measurements. In my case the measurement would be off by around a pixel, which caused layout issues if I tried rendering the text in DOM. This was only happening on some Linux and Android setups
To be fair, early wine (when I first tried it) wasn't very usable, and for gaming specifically. So if you were an early enthusiast adopter, you might've just experienced their growing pains.
Also, I assume some Windows version jumps didn't make things easy for Wine either lol
The hype/performance mismatch was significant in the 2000s for Wine. I’m not sure if there was any actual use case aside from running obscure business software.
Yes, there was “the list” but there was no context and it was hard to replicate settings.
I think everyone tried running a contemporary version of Office or Photoshop, saw the installer spit out cryptic messages, and just gave up. Enough time has passed, with enough work done, that Wine now supports (or is getting to support) the software we wanted all along.
Also, does anyone remember the rumours that OS X was going to run Windows applications?
I used WINE a lot in the 2000s, mostly for gaming. It was often pretty usable, but you often needed some hacky patches not suitable for inclusion in mainline. I played back then with Cedega and later CrossOver Games, but the games I played the most also had Mac ports so they had working OpenGL renderers.
My first memorable foray into Linux packaging was creating proper Ubuntu packages for builds of WINE that carried compatibility and performance patches for running Warcraft III and World of Warcraft.
Nowadays Proton is the distribution that includes such hacks where necessary, and there are lots of good options for managing per-game WINEPREFIXes including Wine itself. A lot of the UX around it has improved, and DirectX support has gotten really, really good.
But for me at least, WINE was genuinely useful as well as technically impressive even back then.
I remember it being surprisingly decent for games back then. Then a lot of games moved to Steam, which made it way harder to run them in Wine. Of course there was later Proton for that, but not on Mac.
Games are one of the easier things to emulate since gaming mechanics are usually entirely a compute problem (and thus not super reliant on kernel APIs / system libraries). Most games contain the logic for their entire world and their UI. The main interface is via graphics APIs, which are better standardized and described, since they are attempting to expose GPU features.
I worked on many improvements to wine's Direct3d layers over a decade ago... it's shockingly "simple" to understand what's happening -- it's mostly a direct translation.
Also, these apps changed. A lot of Windows programs were simple executables, and I remember for a while it was very popular for developers to write portable apps that were just a .exe you ran. Excel and other programs worked fine too. But then Microsoft and others started using MSIX (or whatever it's called) and more complex executable formats, and it was no longer possible; and Microsoft and Adobe switched to subscription-based systems.
Most of the transforms you describe are still unfortunately destructive (i.e. the only way to go back is to undo). I'm not an expert on this, but I think the only way this could be keyframed would be to take snapshots of the pixels and insert the modified raster data as keyframes? I'm not sure there's a good/correct/obvious way to interpolate between, say, a before and after liquify operation the way it currently works. Maybe some of them could store brush+inputs (pressure, cursor movement, etc.), but that seems difficult to work with as an artist. Again, I haven't done much animation (as a dev or artist), so maybe I'm just out of the loop completely.
But yeah I agree with you in principle though, it would be nice if these were non-destructive and could be keyframed.
They are all non-destructive in Krita. Just use a transform mask and go to tool options, select liquefy and after you liquefy however you want you can just hide the transform mask and it stops liquefying the layer.
Yes, Krita has had this feature for years. Non-destructive filters (adjustment layers), too.
GIMP still doesn't have it; only in 3.0 did it get adjustment layers for filters.
Oh, this is news to me! I've used Krita to paint (recreational noob, not on a professional level) and I never realised this. I'll play with this tomorrow.
No horse in this race, but your phrasing seems a bit weird, honestly... If reduced, your comments read as:
"You don't know about X? Well, at least I know about X and Y..." Doesn't seem like a good faith comment to me either?
And then you say "You misunderstood my intentions so I'm going to disengage". For what it's worth, I didn't interpret your argument as insulting someone, but also it wasn't a useful or productive comment either.
What did you hope to achieve with your comments? Was it simply to state how you know something the other person doesn't? What purpose do you think that serves here?
If AI writes a for loop the same way you would... Does it automatically mean the code is bad because you—or someone you approve of—didn't write it? What is the actual argument being made here? All code has trade-offs; does AI make a bad cost/benefit analysis? Hell yeah it does. Do humans make the same mistake? I can tell you for certain they do, because at least half of my career was spent fixing those mistakes... before there ever was an LLM in sight. So again... What's the argument here? AI can produce more code, so more possibility for fuck-ups? Well, don't vibe code with "approve everything", like what are we even talking about? It's not the tool, it's the users, and as with any tool there's going to be misuse, especially with new and emerging ones lol
I don't know why you have to qualify your sentence with "think carefully before you respond" it makes it seem like you're setting up some rhetoric trap... But I'll assume it's in good faith? Anyway...
I don't mind if a review is AI-assisted. I've always been a fan of the whole "human in the loop" concept in general. Maybe the AI helps them catch something they'd normally miss or gloss over. Everyone tends to have different priorities when reviewing PRs, and it's not like humans don't have lapses in judgement either (I'm not trying to anthropomorphise AI, but you know what I mean).
My stance is the same about writing code. I honestly don't mind if the code was written in `ed` on a Linux-powered toaster from 2005 with a 32x32 screen, or if they wrote it using Claude Code 9000.
At the end of the day, the person who's submitting the code (or signing off a review) is responsible for their actions.
So in a round-about way, to answer your question: I think AI as part of the review is fine. As impressive as their output can sometimes be, it can be both impressively good and impressively bad. So no, relying only on AI for review is not enough.