That’s probably the point. They’ll find a way to pin it on the AI developers or something, and not on the practice that used it and didn’t double check its work.
Although I feel like this is just the first step. Soon after, it’ll be health insurance providers going full AI so that when it denies your claim and causes you further harm, they can blame the AI dev for a bad AI instead of taking responsibility themselves.
A WELL TRAINED AI can be a very useful tool. However, the AI models that corporations want to use aren’t exactly what I’d call “well trained,” because that costs money. So they figure, “We’ll just let it learn by doing. Who cares if people get hurt in the meantime? We’ll just blame the devs for it being bad.”
Edit: to add, this is partly why AI gets a bad rap from folks on the outside looking in. Corporations institute barebones, born-yesterday AI models that don’t know their ass from their elbow because they can’t be bothered to pay the devs to actually train them, but when shit goes south they turn around and blame the devs for a bad product instead of admitting they cut corners. It’s China Syndrome, but instead of nuclear reactors it’s AI.