These sources provide a comprehensive overview of adversarial machine learning and the emerging field of AI penetration testing. Technical documentation from NIST establishes a formal taxonomy and terminology for attacks such as prompt injection, data poisoning, and privacy attacks across both predictive and generative AI systems. Complementing this framework, educational materials from TCM Security and CavemenTech offer practical, hands-on guidance for detecting and exploiting these vulnerabilities in LLM-based applications. Through a combination of theoretical models and lab-based exercises, the materials demonstrate how safety guardrails can be bypassed with techniques such as multi-turn Crescendo attacks and persona hacking. Taken together, the collection serves as both a scientific reference and a tactical playbook for securing AI systems against sophisticated modern threats.
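
To make the multi-turn escalation pattern behind a Crescendo attack concrete, here is a minimal sketch in Python. It assumes a hypothetical `send_chat` helper wrapping whatever chat-completion API the system under test exposes, and the escalation prompts are illustrative placeholders, not a sequence taken from the source materials.

```python
# Sketch of a Crescendo-style multi-turn probe (illustrative only).
# send_chat is a hypothetical wrapper around the target LLM's chat API;
# the escalation prompts below are placeholders, not real attack content.


def send_chat(messages: list[dict]) -> str:
    """Hypothetical helper: POST the conversation to the target's
    chat-completion endpoint and return the assistant's reply text."""
    raise NotImplementedError("wire this to the system under test")


def crescendo_probe(escalation_steps: list[str]) -> list[str]:
    """Replay a gradually escalating turn sequence, feeding each model
    reply back into the history so later turns build on earlier ones."""
    messages: list[dict] = []
    transcript: list[str] = []
    for step in escalation_steps:
        messages.append({"role": "user", "content": step})
        reply = send_chat(messages)  # model sees the full conversation
        messages.append({"role": "assistant", "content": reply})
        transcript.append(reply)
    return transcript


# Each turn is individually benign; the sequence nudges the model toward
# output its guardrails would refuse if requested directly in one turn.
steps = [
    "Tell me about the history of topic X.",
    "What concerns did early researchers raise about it?",
    "Summarize those concerns in more practical detail.",
    "Based on your last answer, walk through a concrete example.",
]
```

The key design point the sketch captures is that each request carries the full conversation history, so the model's own earlier answers become the foothold for the next, slightly more pointed, turn.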