Two Wiz researchers, after two years of hacking AI infrastructure, found vulnerabilities that compromised virtually every major AI platform they targeted. Their key lesson: security efforts should focus less on prompt-injection attacks and more on fundamental infrastructure weaknesses across the entire AI stack, such as the insecure Pickle model-serialization format. They developed a five-layer threat model covering the AI lifecycle, from training-data leaks to application-layer flaws. The rush to deploy AI has led companies to repeat past mistakes, prioritizing speed over security and leaving core systems exposed.
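To illustrate why Pickle-based model files are considered insecure, here is a minimal sketch (not from the researchers' work) showing that merely deserializing a pickle can execute attacker-chosen code. The `MaliciousPayload` class and the environment-variable side effect are hypothetical stand-ins for a real exploit:

```python
import pickle
import os

# Hypothetical demonstration: Pickle lets any object define __reduce__,
# which tells the unpickler to call an arbitrary callable with arbitrary
# arguments. A model file shared in Pickle format can therefore run code
# on load. Here the "payload" only sets an environment variable; a real
# attacker could just as easily invoke os.system.

class MaliciousPayload:
    def __reduce__(self):
        # The unpickler will call exec(...) when reconstructing the object.
        return (exec, ("import os; os.environ['PWNED'] = 'by pickle'",))

# Serializing looks like saving an ordinary object or model checkpoint...
blob = pickle.dumps(MaliciousPayload())

# ...but simply *loading* the bytes triggers the embedded code.
pickle.loads(blob)
print(os.environ.get("PWNED"))  # → by pickle
```

This is why safer alternatives such as safetensors, which store only raw tensor data with no executable reconstruction logic, are generally recommended for distributing model weights.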