Purchase the book on Amazon here.
Book Review
I was given a free copy of this book at a book-signing session by Aaron Roth during the TEC2019 conference of NABE (check out my blog post about that conference here). The book discusses the latest developments in the emerging field of “Fairness in Machine Learning” from an algorithmic perspective (for more information about this field see my resource – Artificial Intelligence for Economists). I finally got a chance to read it and I thoroughly enjoyed doing so. Why might anyone want to give this book a shot?
First, the topic is of great social importance as algorithms are increasingly implemented in both policy and day-to-day decision making. I have conducted research on fairness in machine learning myself during my CS MS degree (see Research) and I can attest that there is growing interest, from both policymakers and academic researchers, in designing more “ethical” algorithms that can then be more safely utilized in decisions like school admissions, lending, employee evaluation, etc. (check out these societies and conferences). Second, the materials and examples that the authors draw from are fascinating and very relevant – they may even hit close to home! In this book you will read about dating and navigation apps, how your Netflix profile might not be so private, why scientific research may not be so scientific, how you may be contributing to algorithmic discrimination, and whether you should worry about superintelligence spelling the end of the human race, among many other topics. Third, unlike other similar books (check out this other favorite book of mine, Weapons of Math Destruction), this book is written by two theoretical computer scientists, and as such it focuses on algorithmic solutions to the ethical issues in machine learning instead of engaging in fruitless discussions about what the morals of our society should be (although it touches on this too). By focusing on algorithms, the authors find a clever niche within this nascent field that provides great value to readers, whether advanced experts in the field or laypersons. As the authors clearly state, this book does not focus on the social, economic, and moral impacts of biased algorithms, nor is it about regulation such as limits on data collection or shifting power from algorithms back to humans. Rather, it is about what the authors perceive to be the right approach: making algorithms themselves behave more like how we want our society to operate. Instead of restricting the use of potentially biased machine learning, the authors urge us to focus our efforts on doing a better job of explaining our societal goals to the models that increasingly govern important decisions like employment, college admissions, loan approvals, criminal sentencing, and so much more.
So… yeah, you might need to be a little bit of a nerd to want to read this, but as a nerd myself, I encourage you to do so. Or at the very least, read my chapter-by-chapter summary of the book below.
Book Summary
Introduction
“..as individuals, we aren’t just the recipients of the fruits of this data analysis: we are the data, and it is being used to make decisions about us–sometimes very consequential decisions.”
The introduction does a remarkable job of introducing concepts and putting readers of all levels in the right state of mind for the rest of the book.
The authors motivate the importance of algorithmic privacy, fairness, safety, transparency, accountability, and even morality. What is critical to distinguish is the difference between algorithmic and data privacy. The authors are proponents of redesigning algorithms to behave more in accordance with our society’s values (instead of purely optimizing for prediction), rather than imposing strong restrictions on data availability (for example, the GDPR in Europe). The reason is twofold: first, restricting data access for scientists and policymakers will mainly result in less data-driven and less scientific research and policy. Given how much of our society’s progress comes from those areas, it’s hard to advocate for that. Second, as the authors demonstrate in numerous cases, data has to be restricted tremendously in order to ensure that someone cannot exploit it. As such, focusing our efforts on designing better algorithms should be our approach to issues of ethics in machine learning.
“To make sure that the effects of these models respect the societal norms that we want to maintain, we need to learn how to design these goals directly into our algorithms. […] Instead of people regulating and monitoring algorithms from the outside, the idea is to fix them from the inside.”
Thus, this book, written by two theoretical computer scientists, dives into the emerging and increasingly popular field of designing social constraints directly into algorithms. To do that, we need to find ways to translate our ethics into (algorithmic) constraints and to take into consideration the consequences and trade-offs that emerge from introducing those constraints (most notably, the loss of predictive accuracy). The authors urge us to ponder questions like “How will we feel if more fair and private machine learning results in worse search results from Google, less efficient traffic navigation from Waze, or worse product recommendations from Amazon?”.
Chapter 1: Algorithmic Privacy – From Anonymity to Noise
Why can’t censoring sensitive information, by itself, guarantee privacy?
Privacy is not easy, and it’s probably much harder than you imagine. The naive notion of simply removing sensitive information has long been rebuffed as a method of guaranteeing any privacy. To see why, the authors discuss a famous case of failed data anonymization (one of many): the 2006 Netflix Prize competition. In an attempt to find the best collaborative filtering algorithm (algorithms designed to recommend movies to users based on what “similar” people liked), Netflix released a substantial amount of user-level data (movies watched, user ratings, etc.) to anyone who wanted to compete for a hefty monetary prize! The released data was stripped of any user-identifiable information (name, age, gender, zip code, etc. were removed). Yet the case was clouded with controversy when it turned out that it was possible to recover users’ sensitive information through a combination of other publicly available data. Here’s how: suppose you had an IMDb profile (IMDb profiles are public) with ratings and movies watched. Someone could then match the ratings and movies from your IMDb profile to the Netflix dataset, and thus put your name on your Netflix history. Now, one may wonder why that is a problem when you have chosen to have a public IMDb profile. To see why, let’s examine the following hypothetical scenario: suppose someone who is not open about their sexual orientation or political affiliation watches and rates movies that are correlated with those attributes on Netflix (e.g., gay films) but does not rate those movies on IMDb because they don’t want this information to be public. Well, by matching their two profiles, now the world knows!
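To make the linkage attack concrete, here is a minimal Python sketch with made-up toy records (not the actual Netflix or IMDb data): matching on seemingly innocuous fields is enough to pin a name on an “anonymized” history.

```python
# Toy illustration of a linkage (re-identification) attack with invented data.
# The "anonymized" dataset carries only a pseudonymous user ID.
netflix_anon = [
    {"user": "u117", "ratings": {("Milk", 5), ("Brokeback Mountain", 5), ("The Matrix", 3)}},
    {"user": "u242", "ratings": {("Top Gun", 4), ("The Matrix", 5), ("Titanic", 2)}},
]

# A public profile (think IMDb) carries a real name but only the ratings
# its owner chose to publish.
imdb_public = [
    {"name": "Jane Doe",   "ratings": {("The Matrix", 3), ("Top Gun", 2)}},
    {"name": "John Smith", "ratings": {("The Matrix", 3), ("Milk", 5)}},
]

# Link any public profile whose ratings are a subset of an anonymized record's ratings.
for pub in imdb_public:
    for anon in netflix_anon:
        if pub["ratings"] <= anon["ratings"]:   # subset test
            print(f"{pub['name']} is likely {anon['user']}; full history exposed: {sorted(anon['ratings'])}")
```

In this toy example the match exposes a rating its owner never made public – exactly the scenario described above.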
Notions of privacy:
How we define privacy will depend on the tradeoffs we are willing to make. It would be nice for no data to ever become public in order to fully protect everyone’s privacy. However, this would greatly impede the progress of our society and change our lives dramatically (think of science, medicine, navigation apps, literally anything that needs data to function). Thus the authors discuss, but ultimately reject, some highly restrictive notions of privacy like k-anonymity or banning the release of individual-level data (i.e., only allowing aggregated data). Both solutions, while overly restrictive, are, interestingly, still not sufficient for privacy. They thus propose a definition for the goal of privacy as follows:
“nothing about an individual should be learnable from a dataset that cannot be learned from the same dataset but with that individual’s data removed.”
Perhaps the most important notion of privacy we currently have – one that is used by many companies and government agencies – is as follows:
Differential privacy promises safety against arbitrary harms: no matter what your data is and no matter what harm could potentially befall you due to the inclusion of your data in a dataset, that harm becomes “almost” no more likely if you allow your data to be included in the study than if you do not. The “almost” part is determined by a tunable parameter governing how much the inclusion of an individual’s data can change the probability of any outcome. For example, differential privacy promises that the probability of your health insurance premium going up (or of someone identifying you as a high-health-risk individual) does not increase by much if your health data is included in a study that is made public, or if the results of a computation that used your data are made public.
“[..] differential privacy is among the strongest kinds of individual privacy assurances we could hope to provide without a wholesale ban on any practical use of data … The main question is whether it might be too strong.”
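For the mathematically inclined, the standard formal definition (the book keeps things informal, so this formula is supplementary rather than quoted from the text): a randomized algorithm M is ε-differentially private if, for every pair of datasets D and D′ that differ in one person’s record and for every set of possible outputs S,

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S].

The “tunable parameter” mentioned above is this ε: the smaller it is, the stronger the privacy guarantee and, typically, the more noise that must be added to whatever is released.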
Applying differential privacy in surveys:
Consider the following survey response protocol, called randomized response: respondents in a survey are asked to sometimes answer truthfully and sometimes give a random response (the “sometimes” is decided by a random device whose outcome is not revealed to the experimenter, e.g., a coin toss). Using randomized response, we can guarantee differential privacy (and even privacy from the experimenter) while maintaining statistical accuracy, albeit at the cost of requiring larger samples. Any participant in such a survey has plausible deniability even if her data is leaked: “my response to that (sensitive) question was randomized”!
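As a concrete illustration, here is a minimal Python simulation of the coin-flip version of randomized response (the population size and the true rate of the sensitive attribute are made-up numbers):

```python
import random

random.seed(0)
N = 100_000          # number of survey respondents (made-up)
true_rate = 0.30     # true fraction holding the sensitive attribute (made-up)

def randomized_response(truth: bool) -> bool:
    """Flip a coin: heads -> answer truthfully; tails -> flip again and answer at random."""
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

answers = [randomized_response(random.random() < true_rate) for _ in range(N)]
observed_yes = sum(answers) / N

# P(yes) = 0.5 * true_rate + 0.25, so invert it to recover an unbiased estimate.
estimated_rate = 2 * observed_yes - 0.5
print(f"observed yes-rate: {observed_yes:.3f}, estimated true rate: {estimated_rate:.3f}")
```

No individual answer reveals anything definitive about its respondent, yet the aggregate estimate lands close to the true rate (with larger error than an honest survey of the same size, which is the price of the privacy).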
Pitfalls of differential privacy:
- While differential privacy targets the protection of individual data, it does not protect the data of groups – for example, the patrolling patterns of groups of soldiers at a particular military base.
- Nor does it protect individuals’ privacy when they can be identified through others’ data – for example, your DNA! (For an infamous example, check out the capture of the Golden State Killer, who was identified through a relative’s DNA sample voluntarily uploaded to GEDmatch.)
Chapter 2: Algorithmic Fairness – From Parity to Pareto
“Man Is to Computer Programmer as Woman Is to Homemaker?”
This chapter discusses scientific notions of algorithmic bias and discrimination: how to detect them and, perhaps most importantly, how to measure them. The ultimate aim is to design fairer algorithms while grappling with the inherent tradeoffs between fairness and accuracy.
In several scenarios, lawmakers have regulated the kinds of information that can be used to make decisions. For example, race and gender cannot be used for credit score calculations and lending decisions. However, simply removing these types of information does little to reduce discrimination (and can sometimes even exacerbate it) because of the inherent correlations between sensitive variables and other characteristics (e.g., the car you drive, the phone you use, some of your favorite apps and websites, your location, etc.). This is another reason why the authors are adamant that, instead of restricting data availability, society should focus its attention on redesigning the algorithms that process that data and make decisions. Of course, this approach is not straightforward: (1) there are many sensible ways of defining fairness (or any other societal value) – how do we resolve the tradeoffs between several fairness notions? And (2) algorithms that obey our social values will generally perform more poorly than unrestricted ones – how do we resolve the tradeoff between accuracy and our fairness notions? These tradeoffs, while somewhat depressing (how great would it be if we could implement all notions of fairness and achieve maximal prediction accuracy too?), reinforce the central role that humans have to play in achieving fair decision making in our society.
Notions of fairness:
- Statistical parity: The percentage of individuals who receive some treatment should be approximately equal across (our defined) groups. A very crude criterion: since it makes no explicit mention of either x (attributes) or y (outcomes), it is more suitable in cases where there does not seem to be a clear way to allocate a good or service, e.g., giving away free concert tickets.
- Approximate equality of false negatives: The percentage of mistakes in which treatment is denied to individuals who might deserve it (false negatives) should be approximately equal across groups. More appropriate in cases where denial of treatment is costly to the individual, e.g., lending decisions.
- Approximate equality of false positives: Similar to the above, the percentage of mistakes in which treatment is given to individuals who did not deserve it (false positives) should be approximately equal across groups. More appropriate in cases where treatment is costly for the treated individual, e.g., picking which tax returns to audit (a false positive is an audit that discovers nothing illegal, and audits can be costly for the individual being audited). (A small sketch of how these three group metrics can be computed follows this list.)
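Here is a minimal Python sketch of how these three group metrics could be computed from a set of (group, prediction, outcome) records; the data and field names are invented for illustration:

```python
from collections import defaultdict

# Toy records: (group, predicted_positive, actually_deserving) -- made-up data.
records = [
    ("A", True,  True), ("A", True,  False), ("A", False, True), ("A", False, False),
    ("B", True,  True), ("B", False, True),  ("B", False, True), ("B", False, False),
]

stats = defaultdict(lambda: {"n": 0, "treated": 0, "fn": 0, "fp": 0, "pos": 0, "neg": 0})
for group, pred, actual in records:
    s = stats[group]
    s["n"] += 1
    s["treated"] += pred
    s["pos"] += actual
    s["neg"] += not actual
    s["fn"] += actual and not pred    # deserved treatment but did not get it
    s["fp"] += pred and not actual    # got treatment without deserving it

for group, s in sorted(stats.items()):
    print(group,
          "treatment rate:", s["treated"] / s["n"],            # statistical parity compares these
          "false negative rate:", s["fn"] / max(s["pos"], 1),  # equality of false negatives
          "false positive rate:", s["fp"] / max(s["neg"], 1))  # equality of false positives
```

Each fairness notion above amounts to requiring that one of these per-group numbers be (approximately) equal across groups.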
Implicit in all of the above notions of fairness is that we know which groups we want to protect. That is not always obvious. While race and gender are usually the first groups to be considered for protection, cases have been made for many other groups including, but not limited to, age, disability status, nationality, sexual orientation, and wealth.
Other issues:
- “Fairness Gerrymandering” – The problem where multiple overlapping groups are protected, but at the expense of discrimination against some intersection of them. For example, in designing an algorithm to satisfy statistical parity over gender (male and female) and race (blacks and whites), we might end up with an allocation where black women are discriminated against even though neither blacks nor women are independently discriminated against (a small numerical example of this follows after this list).
- Historical data – The problem where the data an algorithm trains on was collected as part of a historically discriminatory process. The (unconstrained) trained model will likely be discriminatory too.
- Data feedback loops – Suppose we have two equally capable groups of applicants to a college (group A and group B). The college’s admissions officer is more familiar with the high schools that students from group A attend. She does not intend to discriminate. However, because of her familiarity with group A’s high schools, she is able to pick out good students from group A more accurately than good students from group B. As such, students from group A who get admitted to the college tend to do better than students from group B who get admitted. If we then let an algorithm train on such data, the model will likely learn that students from group A are better and thus discriminate against group B!
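To see fairness gerrymandering in numbers, here is a tiny made-up example in Python: an allocation that satisfies statistical parity on race and on gender separately, yet gives nothing to black women.

```python
# Four equally sized subgroups (invented numbers); the accepted flags are chosen adversarially.
people = (
    [("black", "man",   True)]  * 25 +
    [("black", "woman", False)] * 25 +
    [("white", "man",   False)] * 25 +
    [("white", "woman", True)]  * 25
)

def acceptance_rate(predicate):
    subset = [accepted for race, gender, accepted in people if predicate(race, gender)]
    return sum(subset) / len(subset)

print("blacks:     ", acceptance_rate(lambda r, g: r == "black"))                   # 0.5
print("whites:     ", acceptance_rate(lambda r, g: r == "white"))                   # 0.5
print("men:        ", acceptance_rate(lambda r, g: g == "man"))                     # 0.5
print("women:      ", acceptance_rate(lambda r, g: g == "woman"))                   # 0.5
print("black women:", acceptance_rate(lambda r, g: r == "black" and g == "woman"))  # 0.0
```

Every individual protected group sees a 50% acceptance rate, so none of them can complain on its own, yet the intersection is shut out entirely.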
“So while there is now quite a bit of solid science around fairness, there’s much more to do to understand how to better connect the narrow purview of an algorithm in isolation with the broader context in which it is embedded.”
Chapter 3: Games People Play (With Algorithms)
“Commuting was a game, but people couldn’t play it very well. This is where technology changed everything–and, as we shall see, not necessarily for the collective good.”
This chapter discusses several interesting social interactions where algorithms have significantly impacted people’s behavior and the ways in which algorithms can be designed to nudge behavior towards a better outcome. The area of computer science working on such problems is called algorithmic game theory — the intersection between game theory and microeconomics (from economics) and algorithm design, computational complexity and machine learning (from computer science).
One such interaction is commuting. When commuting, people mostly attempt to find and take the fastest route to their destination, so they are “competing” with other drivers for the most efficient route. The competition comes from the fact that how long a route takes at any particular point in time partly depends on how many other drivers are taking it. Enter traffic navigation apps, which use millions of user data points to calculate the fastest route in real time. While this might sound great initially, two important problems can arise: (1) people try to game the algorithm by manipulating the system, e.g., by reporting false traffic accidents on Waze; (2) individual optimization may result in a bad Nash equilibrium (if you don’t know what a Nash equilibrium is, first – good for you! Second, just replace it with the word “outcome”). For example, while it may be individually optimal for everyone to take the motorway and no one to take the side streets, it could be socially optimal to split the population between motorway and side streets. There are several algorithmic solutions to these problems. We can design navigation algorithms that minimize collective travel time by taking all drivers into account (instead of optimizing at the user level). But then why wouldn’t a driver just switch to another app? To make this incentive compatible (i.e., to ensure people will want to follow the recommended routes), our algorithms can randomize between recommending the individually optimal route and the socially optimal route. Finally, as self-driving cars become more prevalent, people may become less able to switch navigation apps or take whichever route they wish – routes may become automated, in a sense like public transport.
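A classic toy example from algorithmic game theory (Pigou’s two-route example, with made-up latency functions in the spirit of the motorway/side-street story above) shows the gap between selfish and coordinated routing in Python:

```python
def average_travel_time(motorway_share: float) -> float:
    """Side street: always 1 hour. Motorway: travel time equals the fraction of drivers on it."""
    side_share = 1.0 - motorway_share
    return motorway_share * motorway_share + side_share * 1.0

# Selfish (Nash) outcome: the motorway is never slower than the side street,
# so every driver takes it and it becomes fully congested.
print("everyone on the motorway:", average_travel_time(1.0))   # 1.00 hour on average

# Socially optimal coordination: split the drivers half-and-half.
print("half-and-half split:     ", average_travel_time(0.5))   # 0.75 hours on average
```

Here selfish routing costs every driver a full hour, while a coordinated split cuts the average commute by a quarter; the catch, as discussed above, is getting individual drivers to accept recommendations that are not personally optimal.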
Other such examples include the problem of product recommendations in online shopping, the problem of echo chambers in news filtering, and the problem of matching in dating and labor markets.
Chapter 4: Lost in the Garden – Led Astray by Data
“If you torture the data for long enough, it will confess to anything.” – Ronald Coase (Nobel Prize-winning British Economist)
When hackers are better statisticians than you.
Suppose that for 10 consecutive days, you receive an anonymous email correctly predicting whether the stock price of Instagram will be up or down at the end of the day. On the 11th day, the anonymous email asks you for money to continue giving you this information. Will you think that it’s a scam? And if so, how on earth did the hacker get it right 10 times in a row?! Well, let’s do the math. There’s a 50% chance of getting it right once (it’s either up or down), but what are the odds of getting it right 10 times in a row? If you’re randomly guessing, the chance is about 0.1%! This person must definitely know what he’s doing, right? Wrong! This is a typical scam. Here’s how it works: Day 1 – the scammer sends a prediction to, say, 1 million people. For half of them, the scammer predicts that the stock will go up, and for the other half he predicts it will go down. He is thus guaranteed to be right for exactly 500K people, who make up his new sample for day 2. Day 2 – the scammer sends another prediction structured the same way: to half of his remaining sample (250K) he predicts an upward movement and to the other half a downward one (the scammer discards the day 1 group that he got wrong). The pattern is repeated for 10 days in a row, each day keeping half of the previous day’s sample. By the 10th day, the scammer has correctly predicted the direction of Instagram’s stock price 10 days in a row for nearly 1,000 people! Quite a remarkable feat.
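The arithmetic of the scam, as a quick sanity check in Python:

```python
# Start with 1 million recipients; each day keep only the half that received the correct prediction.
recipients = 1_000_000
for day in range(10):
    recipients //= 2
print("people who saw 10 correct predictions in a row:", recipients)   # 976

# Probability that a random guesser achieves the same streak for any one recipient:
print("chance of 10 straight correct guesses:", 0.5 ** 10)             # ~0.001, i.e. about 0.1%
```

From the scammer’s point of view the streak is guaranteed; from any single recipient’s point of view it looks like a one-in-a-thousand miracle.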
How scientific is science?
The flawed scientific reasoning we saw above also plagues the scientific community, causing many published findings to be false positives – a phenomenon referred to in academic circles as “the reproducibility crisis”, namely the fact that many research studies have trouble being replicated. Note that for this to happen we don’t even need explicit malice from researchers (for example, p-hacking); it is enough that scientific research features scale and adaptability. In an attempt to advance their careers, academics strive to publish papers in “top” and “prestigious” journals, which in turn look for papers with positive and significant results that will get cited. As such, researchers (in aggregate) ask a lot of questions of their data and perform many experiments, which “can become a problem if the results are only selectively shared” [the adaptability part], and this is only exacerbated by the enormous quantity of research that is often performed on the same datasets [the scale part]. While some solutions already exist (for example, the Bonferroni correction for multiple tests, sketched below), they do not work when researchers decide on the number of tests ex post (i.e., after seeing the data) [adaptability] or for the aggregate community (i.e., when several independent researchers “try out” the same datasets) [scale]. One measure that has been gaining traction lately in the academic community is pre-registration, where a researcher commits to a research strategy (the hypotheses to be tested) before doing the analysis (or, even better, before even collecting the data).
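For reference, here is a minimal sketch of the Bonferroni correction mentioned above, using made-up p-values: with m hypotheses tested, each result is declared significant only if its p-value clears alpha / m instead of alpha.

```python
alpha = 0.05
p_values = [0.003, 0.04, 0.20, 0.012, 0.0004]   # made-up results from m = 5 tests
m = len(p_values)

for i, p in enumerate(p_values, start=1):
    naive = p < alpha                 # what a single-test threshold would conclude
    corrected = p < alpha / m         # Bonferroni: guard against testing m hypotheses at once
    print(f"test {i}: p = {p:<8} naive: {naive}  Bonferroni-corrected: {corrected}")
```

As the chapter notes, this kind of correction only helps when the number of tests m is fixed up front; it offers no protection when analysts (individually or collectively) keep asking new questions of the same data after peeking at the results.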
The authors propose an even better method: the data should live inside a database operating according to a specific set of rules, and researchers can only access results through precisely posed queries. The number of queries asked is then used to correct each hypothesis’s p-value. Such a method can address both the scale and the adaptability concerns.
Chapter 5: Risky Business – Interpretability, Morality and the Singularity
This chapter delves into three subjects that have received less attention but are nevertheless important to think about:
- [Interpretability] How interpretable should algorithms be and to whom?
- [Morality] What kinds of morals should algorithms base their decisions on?
- [Singularity] Can advances in artificial intelligence and superintelligence pose a threat to the human race?
Interpretability – Who interprets what?
Before diving into the question of how interpretable our models should be, the authors believe it is critical to decide to whom we want the model to be interpretable. Observers span a huge range of mathematical and computational literacy, and sensible definitions of interpretability for each of them will span an equally wide range. This is a decision that should be left to humans to make.
Another important question for interpretability is which part of the model we want to make interpretable. There are four possibilities: (1) the input data, (2) the designed algorithm, (3) the resulting fitted model (the output of the algorithm’s optimization), and (4) the decisions made by the model. In terms of the data, it is more or less straightforward to explain what data the algorithm trains on. In terms of the algorithm itself, the authors argue that it might not be so difficult to explain either. The fitted models, however, especially those resulting from neural network architectures, are often highly complex. Finally, a lot of work remains to be done on interpreting the decisions made by the model, since this is what most people would like to understand. For example, if your loan application got rejected, a reasonable question is what you would need to do in order to get accepted, in which case the model should be able to say something like “be in your current job 6 months or longer” or “have more equity in your home for collateral”.
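A tiny, purely hypothetical sketch of what such a decision-level explanation could look like for a loan model; the threshold rule and the field names are invented for illustration and are not from the book:

```python
# A deliberately simple, hypothetical approval rule used only to illustrate
# "actionable" explanations of a model's decision.
REQUIRED_MONTHS_IN_JOB = 6
REQUIRED_HOME_EQUITY = 20_000

def decide_and_explain(months_in_job: int, home_equity: float) -> str:
    if months_in_job >= REQUIRED_MONTHS_IN_JOB and home_equity >= REQUIRED_HOME_EQUITY:
        return "Approved."
    fixes = []
    if months_in_job < REQUIRED_MONTHS_IN_JOB:
        fixes.append(f"be in your current job {REQUIRED_MONTHS_IN_JOB} months or longer")
    if home_equity < REQUIRED_HOME_EQUITY:
        fixes.append(f"have at least {REQUIRED_HOME_EQUITY} in home equity for collateral")
    return "Rejected. To be approved: " + " and ".join(fixes) + "."

print(decide_and_explain(months_in_job=3, home_equity=25_000))
```

Real fitted models are rarely this transparent, which is exactly why producing such actionable explanations for complex models remains an open research problem.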
Morality – Should algorithms be allowed to kill?
Morality is a subject that has received less attention. However, as algorithms get more and more embedded into areas like self-driving cars, personalized medicine, and automated warfare, such discussions are becoming more important. The classic example is a self-driving car faced with an inevitable fatal collision: should the car take the action that minimizes harm to its owner / passenger or the one that minimizes total social harm? The authors are adamant that we humans should decide what morals algorithms should have and, ultimately, what decisions algorithms should or should not be allowed to make. For example, in automated warfare it has been argued that an algorithm should never be allowed to kill a human being (even if the algorithm could, theoretically, make those decisions more accurately than a human). The Moral Machine project at MIT aims at extracting such morals from people by presenting them with many such dilemmas and aggregating their responses. You can try it out too, it’s fun!
The Singularity – Science fiction or a ticking bomb?
“The problem with computing machines is not that they won’t do what they are programmed to, but rather that they will do exactly what they are programmed to.”
With the above quote, the authors intend to convey that it is very difficult to anticipate the consequences of what we tell an algorithm to do, in every imaginable scenario of the world. The algorithm will try to do what we ask of it, but it is ultimately up to the model to figure out the best way of executing our commands, and sometimes that way might not be something we expected.
A natural question: “Why don’t we just turn the computer off once we realize it is starting to exhibit these unintended behaviors?” A superintelligent optimization algorithm is likely to take steps to prevent this from happening – not because of some instinct of self-preservation, but as part of maximizing the chance that its optimization objective is realized.
The defining argument in this debate is whether we should worry now about something (perhaps) so far away in the future. One side argues that this problem is for another generation to deal with, whereas others say that if we keep deferring this discussion, a day will come when it is too late! For the authors, this crucially depends on the rate of growth of AI. If we expect diminishing or linear returns to AI research, then any danger is indeed far away. An exponential rate of growth, however, could make the threat very real and closer than we might anticipate. Ultimately, the authors cautiously conclude that “…even if an intelligence explosion is not certain, the fact that it remains a possibility, together with the potentially dire consequences it would entail, make methods for managing AI risk worth taking seriously…”.