Machine Learning for High-Risk Applications
Published by O'Reilly Media; authored by Patrick Hall; released in August 2023.
The past decade has witnessed wide adoption of artificial intelligence and machine learning (AI/ML) technologies. However, a lack of oversight of their widespread implementation has resulted in harmful outcomes that could have been avoided. Before we can realize AI/ML's true benefit, practitioners must understand how to mitigate its risks. This book describes responsible AI, a holistic approach to improving AI/ML technology, business processes, and cultural competencies that builds on best practices in risk management, cybersecurity, data privacy, and applied social science.
It's an ambitious undertaking that requires a diverse set of talents, experiences, and perspectives. Data scientists and nontechnical oversight folks alike need to be recruited and empowered to audit and evaluate high-impact AI/ML systems. Author Patrick Hall created this guide for a new generation of auditors and assessors who want to make AI systems better for organizations, consumers, and the public at large.
- Learn how to create a successful and impactful responsible AI practice
- Get a guide to existing standards, laws, and assessments for adopting AI technologies
- Look at how existing roles at companies are evolving to incorporate responsible AI
- Examine business best practices and recommendations for implementing responsible AI
- Learn technical approaches for responsible AI at all stages of system development
Today, machine learning (ML) is the most commercially viable subdiscipline of artificial intelligence (AI). ML systems are used to make high-stakes decisions in employment, bail, parole, lending, and many other applications throughout the world's economies. In a corporate setting, ML systems are used in all parts of an organization, from consumer-facing products to employee assessments, back-office automation, and more. Indeed, the past decade has brought wider adoption of ML technologies. But it has also proven that ML presents risks to its operators and consumers. Unfortunately, like nearly all other technologies, ML can fail, whether by unintentional misuse or intentional abuse. As of today, the Partnership on AI Incident Database holds over 1,000 public reports of algorithmic discrimination, data privacy violations, training data security breaches, and other harmful failures. Such risks must be mitigated before organizations, and the general public, can realize the true benefits of this exciting technology. Doing so still requires action from people, and not just technicians. Addressing the full range of risks posed by complex ML technologies requires a diverse set of talents, experiences, and perspectives. This holistic risk mitigation approach, incorporating technical practices, business processes, and cultural capabilities, is becoming known as responsible AI.
Who Should Read This Book
Nontechnical oversight personnel, along with activists, journalists, and conscientious folks, need to feel empowered to audit, assess, and evaluate high-impact AI systems. Data scientists often need more exposure to cutting-edge technical approaches for responsible AI. Both of these groups need the critical literacy to appreciate the expertise the other has to offer, and to incorporate shared learnings into their respective work. Machine Learning for High-Risk Applications is the field guide for this new generation of auditors, assessors, leaders, and practitioners who seek AI systems that are better for organizations, consumers, and the public. In reading this book, auditors and attorneys can learn how to reframe their valuable knowledge and experience for better risk management of AI systems. Business leaders can use it to understand the wide range of available approaches for building responsible AI culture, processes, and governance, and to get a better grasp of the limitations of today's AI systems. Data scientists can use it to learn responsible AI methods and to apply their technical skills with an improved understanding of the real-world complexities implicated by AI system decisions.
What Readers Will Learn
Machine Learning for High-Risk Applications defines the eponymous concept and emphasizes why it's so important. It addresses how to build accountable and diverse organizational cultures around AI, the necessary organizational structures and impactful roles that individuals can play, how existing roles at companies are evolving to incorporate responsible AI, and how responsible AI is being put into practice today. Integral to all of this is the education around, and standardization of, processes by which individuals can assess AI systems and appreciate the impact they have on business functions, consumers, and the public at large. To that end, the book examines effective privacy and security policies for AI, applicable legal and compliance standards, the role of traditional model risk management, and AI incident response planning. It also aims to reinforce auditing and oversight knowledge by linking business and social outcomes to technical tools. Numerous technical approaches for engineering responsible AI systems are available today, at all stages of the AI lifecycle. For the technical reader, the book explores porting standard software quality assurance processes to AI systems, experimental design for AI, and reproducibility, interpretability, fairness, security, and testing and debugging technologies.
Preliminary Book Outline
By the end of this book, the reader will understand the cultural competencies, business processes, and technical practices behind responsible AI. The book is divided into three parts that mirror these three facets. Each part is split further into chapters that discuss specific subjects and cases. While the book is still being planned and written, Machine Learning for High-Risk Applications will open with an introduction to the topic and then proceed to Part 1. A tentative outline follows.
Part 1: The Human Touch – Cultural Competencies For Responsible Machine Learning
Part 1 addresses the importance of organizational culture in the broader practice of responsible AI. The first chapter will issue a call to stop going fast and breaking things, with a focus on well-known AI system failures and the associated vocabulary and cases. Chapter 2 will analyze consumer protection laws, model risk management, and other guidelines, lessons, and cases important for fostering accountability in AI organizations and systems. Chapter 3 will examine teams, organizational structures, and the concept of an AI assessor. Chapter 4 will discuss the importance of meaningful human interactions with AI systems, and Chapter 5 will detail important ways of working outside traditional organizational constraints, such as protests, data journalism, and white-hat hacking.
Part 2: Setting Up For Success – Organizational Process Concerns For Responsible Machine Learning
Part 2 is slated to cover responsible AI processes. It will begin with Chapter 6 and an exploration of how organizational policies and processes affect fairness in AI systems, and the startling lack of such policies today. Chapter 7 will outline common privacy and security policies for AI systems. Chapter 8 will consider existing and future laws and regulations that govern AI deployments in the United States. Chapter 9 will highlight the importance of model risk management for AI systems, but will also point out a few shortcomings. Finally, Chapter 10 is planned as a discussion of how corporations have heeded past calls for social and environmental responsibility, in the context of future responsible AI adoption.
Part 3: The Scientific Method Versus The Kitchen Sink – Technical Approaches For Enhanced Human Trust And Understanding
The agenda for Part 3 covers the burgeoning technological ecosystem for responsible AI. Chapter 11 will address the important science of experimental design, and how it’s been largely ignored by contemporary data scientists. Chapter 12 will summarize the two leading technologies for increasing transparency in AI: interpretable ML models and post-hoc explainable AI (XAI). Chapter 13 is planned to be a deep dive into the world of bias testing and remediation for ML models, and should address both traditional and emergent approaches. Chapter 14 will cover security for ML algorithms and AI systems, and Chapter 15 will close Part 3 with a wide-ranging discussion of safety and performance testing for AI systems, sometimes also known as model debugging.
Bringing it All Together
After all that analysis and exposition, Machine Learning for High-Risk Applications will end with a chapter titled "Bringing It All Together." It serves as a reminder that while building responsible AI organizations and technology is hard work, it's also quite within reach for individuals and organizations alike. Moreover, it's necessary. The AI genie is out of the bottle. Headlines revealing embarrassing and damaging AI incidents became much more common in 2020, and they won't stop until people remake AI into responsible AI.
About the Publisher
O’Reilly’s mission is to change the world by sharing the knowledge of innovators. For over 40 years, we’ve inspired companies and individuals to do new things, and do things better, by providing them with the skills and understanding that are necessary for success.