Adversarial AI Attacks, Mitigations, and Defense Strategies
Packt Publishing · Ebook · 2024 · ISBN 9781835088678 · 739 MXN (list price 821 MXN) · In stock
https://www.gandhi.com.mx/adversarial-ai-attacks--mitigations--and-defense-strategies-9781835088678/p
<p><b>Understand how adversarial attacks work against predictive and generative AI, and learn how to safeguard AI projects with practical examples leveraging OWASP, MITRE, and NIST</b></p><h2>Key Features</h2><ul><li>Understand the connection between AI and security by learning about adversarial AI attacks</li><li>Discover the latest security challenges in adversarial AI by examining GenAI, deepfakes, and LLMs</li><li>Implement secure-by-design methods and threat modeling, using standards and MLSecOps to safeguard AI systems</li><li>Purchase of the print or Kindle book includes a free PDF eBook</li></ul><h2>Book Description</h2>Adversarial attacks trick AI systems with malicious data, creating new security risks by exploiting how AI learns. This challenges cybersecurity, as it forces us to defend against a whole new kind of threat. This book demystifies adversarial attacks and equips cybersecurity professionals with the skills to secure AI technologies, moving beyond research hype or business-as-usual strategies.
This strategy-based book is a comprehensive guide to AI security, presenting a structured approach with practical examples to identify and counter adversarial attacks. Rather than offering a random selection of threats, it consolidates recent research and industry standards, incorporating taxonomies from MITRE, NIST, and OWASP. Next, a dedicated section introduces a secure-by-design AI strategy with threat modeling to demonstrate risk-based defenses and strategies, focusing on integrating MLSecOps and LLMOps into security systems. To gain deeper insights, you'll cover examples of incorporating CI, MLOps, and security controls, including open-access LLMs and ML SBOMs. Based on the classic NIST pillars, the book provides a blueprint for maturing enterprise AI security, discussing the role of AI security in safety and ethics as part of Trustworthy AI. By the end of this book, you'll be able to develop, deploy, and secure AI systems effectively.<h2>What you will learn</h2><ul><li>Understand how GANs can be used for attacks and deepfakes</li><li>Discover how LLMs change security, including prompt injections and data exposure</li><li>Understand privacy-preserving ML techniques and apply them using Keras and PyTorch</li><li>Explore LLM threats with RAG, embeddings, and privacy attacks</li><li>Find out how LLMs can be poisoned via fine-tuning APIs or direct access</li><li>Examine model benchmarking and the challenges of open-access LLMs</li><li>Discover how to automate AI security using MLSecOps, including CI, MLOps, and SBOM practices</li></ul><h2>Who this book is for</h2><p>This book tackles AI security from both angles: offense and defense. AI builders (developers and engineers) will learn how to create secure systems, while cybersecurity professionals, such as security architects, analysts, engineers, ethical hackers, penetration testers, and incident responders, will discover methods to combat threats and mitigate risks posed by attackers.
The book also provides a secure-by-design approach for leaders to build AI with security in mind. To get the most out of this book, you'll need a basic understanding of security, ML concepts, and Python.</p>
<p><b>"The book not only explains how adversarial attacks work but also shows you how to build your own test environment and run attacks to see how they can corrupt ML models. It's a comprehensive guide that walks you through the technical details and then flips to show you how to defend against these very same attacks." (Elaine Doyle, VP and Cybersecurity Architect, Salesforce)</b></p>
Author: John Sotiropoulos · Language: English · Published: 2024-07-26 · Packt Publishing · ISBN 9781835088678
EPUB: https://getbook.kobo.com/koboid-prod-public/packt-epub-d802baa3-c98e-4421-bfa2-b94d1182dcc3.epub
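As a flavor of the evasion attacks the book covers, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) against a toy logistic-regression "model". All weights, inputs, and the epsilon value are illustrative assumptions for this sketch, not material from the book, which works with full frameworks such as Keras and PyTorch.

```python
import numpy as np

# Toy logistic-regression "model" with fixed, illustrative weights.
w = np.array([2.0, -3.0, 1.5])
b = 0.1

def predict_proba(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: nudge x in the direction that
    increases the loss. For binary cross-entropy on a linear model,
    the gradient of the loss w.r.t. the input is (p - y) * w."""
    grad = (predict_proba(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.2, 0.3])   # benign input, true label 1
x_adv = fgsm(x, y=1.0, eps=0.6)  # small, bounded perturbation
# predict_proba(x) is well above 0.5, while predict_proba(x_adv)
# drops below it: the classification flips even though each feature
# of x_adv moved by at most eps.
```

The same idea scales to deep networks, where the input gradient comes from autodiff (e.g., `torch.autograd`) rather than a closed form; this is the kind of hands-on playground experiment the book builds up from.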