08 April 2026

Run!! Tsunami's Coming

If you've been keeping track of AI in the computing world, then you've seen/read at least a few reports that AI-generated code has eaten the world. Here's a NYT version. All of which makes me laugh out loud. For at least two reasons.

The first, naturally, is that the RDBMS approach to code is to not write any (or, at least, as little as possible), and let the database manager take care of data integrity. The advantages are well documented here, and in some other places. Not having to update client code every day or two around the world is just one advantage. The other is that data constraints are succinct. Very. Yahoo.
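To see what "succinct" means here, a minimal sketch using Python's stdlib sqlite3, purely for illustration; the table and column names are invented, not from any real schema. The point is that the integrity rules live in a few declarative lines, and the engine enforces them against every client, every time.

    # A minimal sketch; sqlite3 and the schema are illustrative assumptions.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per-connection

    conn.executescript("""
    CREATE TABLE account (
        id      INTEGER PRIMARY KEY,
        balance NUMERIC NOT NULL CHECK (balance >= 0)  -- no negative balances, ever
    );
    CREATE TABLE ledger_entry (
        id         INTEGER PRIMARY KEY,
        account_id INTEGER NOT NULL REFERENCES account(id),  -- no orphan entries
        amount     NUMERIC NOT NULL CHECK (amount <> 0)
    );
    """)

    conn.execute("INSERT INTO account (id, balance) VALUES (1, 100)")

    try:
        # No client-side validation anywhere; the engine rejects the bad write.
        conn.execute("UPDATE account SET balance = -50 WHERE id = 1")
    except sqlite3.IntegrityError as e:
        print("rejected by the database:", e)

Two CHECKs and a foreign key: the whole integrity story in a handful of lines, enforced no matter which client, in which language, written by whom (or what), touches the data.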

The second, of course, is that AI systems are trained on the shit that litters the interTubes; I suspect that what these AI code generators do is slurp up all that RBAR (do I hear the plaintive cry of COBOL in the distance?) code from 1970 onwards. Back to the future? No, reactionary idiocy. Dr. Codd showed us how to build robust systems on explicit data logic, not spaghetti code. But, just like AI Agents, doing that reduces the need for a battalion of coders. The difference is that the RDBMS eschews code, while AI Agents have unchecked diarrhea. In the Good Olde Days, the coders counterattacked by asserting (and this is a direct quote) "we prefer to do transactions in the client". Keep those fingers typing.
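For those who never suffered it, RBAR is row-by-agonizing-row: the COBOL read-a-record, bump-a-counter loop reborn in every client language since. A minimal sketch of the contrast, again with sqlite3 and invented names (an illustration, not anyone's production code):

    # RBAR vs. set-based; it's the shape of the code that matters, not the schema.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, dept TEXT, salary NUMERIC);
    INSERT INTO employee VALUES (1,'sales',50000),(2,'sales',60000),(3,'ops',55000);
    """)

    # RBAR: haul each row into the client, decide there, write it back.
    # One round trip per row, with the logic scattered in the application.
    for emp_id, salary in conn.execute(
            "SELECT id, salary FROM employee WHERE dept = 'sales'").fetchall():
        conn.execute("UPDATE employee SET salary = ? WHERE id = ?",
                     (salary * 1.05, emp_id))

    # Set-based: one declarative statement; the engine does the iterating.
    # (Either style alone gives the raise; both run here only for contrast.)
    conn.execute("UPDATE employee SET salary = salary * 1.05 WHERE dept = 'sales'")

One declarative statement the engine can optimize, versus a round trip per row with the logic buried in the client. Back to the NYT piece: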
When a financial services company recently began using Cursor, an artificial intelligence technology that writes computer code, the difference that it made was immediate.

The company went from producing 25,000 lines of code a month to 250,000 lines. That created a backlog of one million lines of code that needed to be reviewed, said Joni Klippert, a co-founder and the chief executive of StackHawk, a security start-up that was working with the financial services firm.
This is progress????

Of course, the reporters aren't clued in:
Someone has to review the A.I.-generated code to test it for bugs, security and compliance. But it can sometimes be unclear whose job it is to fix issues created by A.I.-generated code. In the past, it would be the responsibility of the person who created the code.
That hasn't always been the case, and isn't even now. Back in the Good Olde Days, when COBOL (and, earlier, assembler) ruled the business world, the programming shop was divided into three parts: systems analysts, coders, and testers. The latter two still exist widely. The first had, until now, been abandoned. But now that function is being split out again. The purpose of the systems analyst is to define what the programs are intended to do, then assign a coder to make it manifest.

Now, if the AI monsters get their way, they will be the analysts and coders. The humans will be the testers and, of necessity, the second-class analysts. "Joe, ya gotta figure out what the fucking AI did!! Your job depends on it!! Get to it."

Fun fact: when the analyst position was made largely redundant, the loudest bitching heard in the programming shop came when a project artifact was moved from one analyst/programmer (yes, that's how the job was often labelled) to another. The reason, of course, was that there was no analyst's document of the code. Worse yet, in the world of C/C++/java coding, lots of folks never, ever bother to work out the code before typing. And, of course, the code never quite works on the first pass; sometimes it mostly does, and sometimes it doesn't at all. Either way, the coder then fires up the debugger and finally gets down to writing a working program.

Now, imagine how that's going to work when the AI machine goes off on its merry way!! You guessed it - KAOS.
Companies are struggling to hire enough people to monitor the A.I. code for risks, a role called application security engineer. "There are not enough application security engineers on the planet to satisfy what just American companies need," said Joe Sullivan, an adviser to Costanoa Ventures, a Silicon Valley venture firm. The large companies he works with would add five to 10 more people in this role if they could, he said.
Some have faced the Monster:
Sachin Kamdar, a co-founder of Elvex, an A.I. agent start-up, said he created a rule around 16 months ago that all of the company's code needed to be reviewed by a human. Otherwise, problems would be harder to fix because no one would understand the work that A.I. had done.

"It's just going to break something, and they're not going to know why it broke," he said.
Well... duh!! The agent hallucinates while writing yet another General Ledger. Cool. The smartest, coolest guys in the room have fucked the dog. Even more Cool.
