I think what should cheese you off is not AI itself, but rather how little people are questioning its effectiveness and accuracy.
Given that most AI these days is based on a learning model (rather than the "expert systems" of the past), of course it needs to be trained, just like humans, and it keeps learning. That's not to say you can't trust AI just because it makes a mistake - you need to trace back how it made the mistake, and you certainly need sensible humans who can spot when it makes one. IBM Watson took hours and hours of training and several blunders in testing before it became proficient enough to beat the best at Jeopardy! (and it still stumbled when placing Daily Double and Final Jeopardy bids).
Turns out AI is just as bad as humans when it comes to something it doesn't know, i.e. it is extremely reluctant to admit it doesn't know and/or is wrong. But... we programmed those AI models to respond that way.
Fully trusting AI models is fundamentally a mistake (as one law firm found out the hard way), but categorically discarding and distrusting AI development is not the answer either.
As with most things in science and technology, the scientists and engineers prefer a methodical approach with proper contextual understanding, whereas the popular media just wants to own the miracle in the most cavalier manner... and then, when things go wrong, they blame the scientists and engineers. If there's anything to be cheesed off about, it should be that.