The dangers of letting algorithms make decisions in law enforcement (slate.com)
87 points by smil on May 17, 2015 | hide | past | favorite | 38 comments


> Employees at the center referred her to the online system. Uncomfortable with the technology, she asked for help with the online forms and was refused.

Seems like it was humans that failed her. I'm not sure what algorithms have to do with this.


I think the main point of the article is that the algorithms are applied without deference to real world conditions or impact.

This particular example appears to be a serious violation of the Americans with Disabilities Act, which I have noticed often falls by the wayside when developing systems like this. In this case the humans deferred the decisions to the machines, most likely because the administrative regulations say so and leave no way to get around it "manually".

I have myself encountered these sorts of issues. At one point many years ago I stopped filing for unemployment because I had found a job. Weeks later a computer decided that my job search records needed to be audited for the period I was not filing, and if that didn't fly, they would want the last 6 months of UI back. On the advice of the employment department, I filed a blank form with a letter saying I was employed and who to talk to to verify that. There was no process for this, so a blank job search form was entered and nothing else done. I was then subject to collection actions, appealed, and an administrative law judge interviewed me and determined the collection was in error. A year later the employment department stopped many of the practices.

We should not blindly defer to machines; the more we do, the less we know how to, or have the power to, correct situations when they get out of hand.


I think the main point of the article is that the algorithms are applied without deference to real world conditions or impact.

I'm not seeing evidence that algorithms are worse than people in this regard. How many horror stories are there of people citing rules or policy "without deference to real world conditions," or just simply being flat-out wrong? It's almost a cliche when talking about government bureaucracies.

I'm less concerned about algorithms strictly applying policy (that's what they do after all) and more interested in whether overall results are better than what humans do. Most people have biases about one thing or another (race, gender, age, physical appearances) and it's very difficult to eliminate those as they may be operating below the level of conscious thought. Also government bureaucracies aren't exactly known for hiring the best and brightest. It would seem to me likely that algorithms should eliminate those issues, at least.

Edit: posted before I saw jqm's reply


I agree. It seems like it's a design problem, for her at least. When I read this I couldn't help but think of AKO (Army Knowledge Online), which, ostensibly, is supposed to be the Army "one source" for soldier information. It's basically a website made in like 2001 or something, before mobile internet was a thing. No one building/managing the site is paying attention to how people actually interact with it, otherwise they'd notice that the links people most often click on are at the least accessible point. And that it's just a hideous website, to start with.

It seems to be a recurring theme here that when citizens interact with government technology they are met with a UI that doesn't make sense or that doesn't work for them. I think that's sad. I really want to change that...not sure how yet, I guess.

Edit: I'm for sure not trying to downplay the global implications of this post. Especially since I've seen how drone decisions actually play out. Basically, we all need to pay attention. That's all I'm saying.


That's because most government agencies don't know how to run a software project. From experience, they don't understand the requirements, don't pay attention to conflicting requirements, and often have no choice in who they hire as a contractor (must submit 3 bids, take the lowest cost). So what you end up with is a bunch of underpaid, underqualified, and understaffed people building things.


Does anyone have a link to the lady's obituary? According to the story, without government money, she was surely going to die. And, if she didn't die, then the hyperbole about the "life sustaining nutrition" was just bullshit. It makes me wonder just how necessary government handouts really are.

Also, she has emphysema and COPD -- what are the odds that was caused by smoking? How much money does smoking cost? $5 per day? How many years did this woman smoke? Has she quit smoking? Did she die from starvation? My point is that her being deaf isn't a money-deserving disability. There are thousands of deaf people who aren't on welfare. Her health problems would likely have been caused by years of smoking. Why should my tax dollars go to support someone who caused many of her own problems? If I bet my life savings on blackjack, should that make me eligible for government assistance? If I am an alcoholic, should I get a liver transplant?

So, unless she died from starvation, the algorithm worked. She shouldn't be getting my money just because she has had a lifetime of bad decisions. As far as the bipolar, I could point to dozens of programs in almost every state that would provide her with free care for that.

As far as the other gent, the prospective gang member, do we have any evidence of harm caused by that algorithm? Was he wrongly arrested, detained or otherwise victimized due to that algorithm? His history does seem almost textbook in terms of risk factors to commit a violent crime. Unless his rights were infringed upon, I'm not sure I understand the issue.


I take from this that there needs to be some flexibility in how the results of the algorithms are applied. I also find the first example in the article unconvincing as a real mistake. It states that Robert McDaniel had:

> a misdemeanor conviction and several arrests on a variety of offenses—drug possession, gambling, domestic violence

then it seems like it's calling the algorithms a mistake that

> branded Robert McDaniel a likely criminal

Maybe I'm just sheltered, but a history of arrests, drug possession, and domestic violence tells me that the person is probably a criminal (though whether that rises to the level of being one of Chicago's top 420 criminals I can't say).


but a history of arrests, drug possession, and domestic violence tells me that the person is probably a criminal

Reading more about Robert McDaniel, he apparently made the list because one of his childhood friends was shot and killed: http://articles.chicagotribune.com/2013-08-21/news/ct-met-he...


> but a history of arrests

You're not looking at convictions? Just arrests?


Yeah, it says "a" conviction, and we all are (or should be) aware that arrests occur much more frequently than guilt.


How does the number of people negatively affected by an algorithm compare to the number who would be negatively affected by a human processor?

I mean, someone could probably write hundreds of similar articles about negative interactions with callous or incompetent human officials. Having dealt with at least an average number of DMV type officials over the years, I can't see that machines could do a whole lot worse.

I do agree with several points of the article though. Let the algorithms be open to public critique. This is democracy and it should lead to improvement (eventually). And of course there should always be recourse to human intervention.


I'm not sure if most folks really understand the nightmare we're setting ourselves up for. It's the domestic policy equivalent of drone warfare.

The western legal system was built and functions inherently on the precondition that it's people who use, administer, and maintain it. There's a lot of slack and human interpretation built into the process, and no laws are constructed such that they are enforced in a mechanical fashion. In addition, there's the premise that the folks doing the work of enforcing the laws are virtually the same as those being policed. Finally, severely unjust or unpopular laws are many times ignored by both the population and the enforcers.

All of that goes away with machine application of criminal/administrative law. The system was not built for this.


Finally, severely unjust or unpopular laws are many times ignored by both the population and the enforcers.

That's a bug, not a feature. That discretion that is permitted to LE often leads to selective enforcement. "The best way to get a bad law repealed is to enforce it strictly."


"The best way to get a bad law repealed is to enforce it strictly."

Yes, it's done wonders for the drug war.


The drug war is not even close to being strictly enforced; it's actually a good example of what I was describing.

There's plenty of evidence that poor minorities are punished way more harshly than others (e.g. "A 2013 study by the American Civil Liberties Union determined that a black person in the United States was 3.73 times more likely to be arrested for marijuana possession than a white person, even though both races have similar rates of marijuana use.").

Start enforcing the law with the same strictness on the kids of the rich and powerful, and you'd see the laws change much faster.


This also works for speed limits, tax evasion and all sorts of laws. Imagine if a person were arrested and sent to prison for being $1 off in their taxes. Imagine 10 year sentences for these mistakes. Imagine every single person who was off by $1 were arrested and prosecuted. The tax code would be simplified almost overnight.

100% enforcement would equal almost an immediate repeal of bad laws.


I usually don't dive in deep on threads, but I feel like I have to say something. There are folks who seem to agree with this.

I believe you are correct when it comes to individual laws. If an individual law is bad, apply it absolutely equally and watch it change. Very cute, very succinct, and it sounds like it might work. (Makes for a great slogan even if it doesn't. See North Korea) This assumes that somehow automatic processing would constitute the same as equal enforcement. I doubt that.

The bigger problem is this: taking an individual law and publicly applying it equally was not the scenario I was describing. The average person is guilty of 3 felonies a day. Assuming that stat is exaggerated, and assuming a state where you have "three strikes and you're out", we'd all be in prison for life within the year. So that's not happening. The question on the table is not about one law or the other, it's about how to take a system of tens of thousands of laws, all unequally applied, and try to make them all work the same. If that doesn't keep you awake at night, you don't know the legal system. It's Orwellian.

There will be no huge uprising, because the system has to continue working. So what _will_ happen is that easy-to-data-process parts of the law will be enforced against people who won't complain too loudly. Let's read that as "crimes that make people feel morally superior to others" and "crimes involving easy data collection where the suspects are ill-equipped to defend themselves". Your car will report if you run a red light, or if you've had 3 glasses of wine instead of 2 at the restaurant. The DMV will know if you're poor and driving without insurance just to get to work -- because a local LE official will track you with an LPR. It'll be just more of the same, but there won't be any humans involved.

So you'll still see prosecutorial judgment, it just won't be evident. At all. Instead more and more people will run afoul of the law in little ways that won't make too much of a stink.

As Thomas Paine said, it's better to be the victim of a bad king than of a complex system of government. If you're the victim of a bad king? You have somebody to blame. If you're the victim of some complex, impossible-to-understand system? You're still screwed -- but now there's nobody to point a finger at. Much, much worse.

I find it disturbing that folks would think that taking people out of the law enforcement system would be a good thing. Not only would it not be a good thing, it would be a disaster for all concerned. </rant>


But drunk driving laws are not "severely unjust or unpopular"; you're talking about a different issue than the one I replied to.


"no laws are constructed such that they are enforced in a mechanical fashion"

Actually, I believe a lot of laws mandate minimums - see http://en.wikipedia.org/wiki/Mandatory_sentencing


Even when the law has a mandatory minimum, the prosecutor still has discretion for charging.

"If you go to trial and lose, we'll convict you for a Felony II, but if you take the plea bargain we'll only charge you with a Misdemeanor I."


Drones are remote-control planes, not AI. The kills are human decisions (just with a one-sided risk profile).


"Decision-making algorithms are politics played out at a distance, generating a troubling amount of emotional remove."

This is absolutely key. Adding distance[1] between the point where a decision is made and where the consequences of that decision are realized makes it harder for any feedback from those consequences to reach the person making the decision. This makes the decisions worse (from lack of information) and the implementation worse (an error must be much larger before the feedback from that error reaches the decision maker).

You see this effect in many areas. An obvious example is the law enforcement mentioned in the article (or the military), where "just following orders", or the modern variant "just following an algorithm", ends up causing problems.

A more interesting example might be the existence of the derivatives market and the invention of increasingly-exotic financial instruments. A bank giving someone a loan has some fairly well-known possible behaviors, and is (probably) close enough to allow feedback between the parties for things like capitalism to work (if you don't like the bank's behavior, you let them know that isn't acceptable by refinancing at a different bank). On the other hand, bad decisions bundled up and hidden in collateralized debt obligations sheltered these bad decisions until the problem blew up and introduced the world to the phrase "too big to fail".

A very interesting discussion of this problem - focused on how this kind of distance relates to human honesty (and rationalization) - is this RSA Animate featuring Dan Ariely: https://www.youtube.com/watch?v=XBmJay_qdNc

[1] measured in either number-of-hops or time


Using algorithms to support decision making and putting a bad UI barrier between the users and the managers are two different things.

Anyway, public administrations should indeed publish how their algorithms work in order to ensure they reflect the official policies.


This is not a problem of "algorithms" but rather of stupid policies. A programming manager at Google would have been fired if he had put such obvious errors in PageRank (or whatever they call it these days).

Algorithms and data can only improve effectiveness of these systems and agencies. However, their use has been combined with drastic funding cuts. These cuts and the resulting malfunctions aren't exactly a fundamental problem with data science.


But bureaucracy is filled with stupid policies, and has been for centuries. Stupid policies have to be expected, frankly. But unlike with commercial companies, people don't have anyone else to turn to. That's why, in the special case of a government machine — which is (1) without alternative and (2) error prone — the human sanity check should always be present.


The difference is that when a person is involved, you can apply pressure via representative, the press, etc. Bureaucracy has been full of "jobsworths" ("Sorry, guv, more than me jobs worth.") forever.

In reality, the real issue is total underfunding of these services for those who need them. People aren't switching to computers for these things because they think it's better (obviously, the policing one is the exception), they are switching because they don't have a choice to keep up with the workload.


Not really. There always seems to be some "friendly fire" whenever Google releases new algorithm changes meant to combat abuse. I doubt anyone is ever fired over it, or else they would probably be constantly firing people.


I've been called and asked to take a survey about my interactions with an employee at a company I do business with. I could tell that the survey as constructed would not capture my actual concerns with the business processes and would instead reflect poorly on the employee I dealt with. The failure of the system would end up being used to mark the employee down even though he did a good job within the constraints he had.

It's unfortunate that these sorts of automated processes are ending up targeting edge cases, like things that should be covered by the ADA.


In the CS community a new (sub)subfield has emerged called "Fairness, Accountability, and Transparency in Machine Learning" (FATML). It's a young research topic, but I find it quite interesting.

http://www.fatml.org/


This seems to rest on the false premise that you need computers to make decisions algorithmically. If someone writes out a set of hard rules for who can apply for a welfare program, the result will be the same whether a human or a machine makes the determination.

Long before computers existed, people complained about "rigid bureaucracy", which is effectively a complaint that government or business employees stuck to a process (an algorithm) that had some problems.
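To make the point concrete, here's a minimal sketch of such a rule set in code. The thresholds and field names are invented for illustration; the point is just that a clerk following the same written rules would reach the identical determination:

```python
# A hypothetical welfare-eligibility rule set expressed as code.
# Every value here (income cap, field names) is made up; the rules
# leave no room for discretion, whether applied by human or machine.

def eligible(applicant: dict) -> bool:
    """Apply a fixed set of hard rules; no judgment calls anywhere."""
    if applicant["monthly_income"] > 1200:   # hard income cap
        return False
    if applicant["household_size"] < 1:      # sanity check on the form
        return False
    if not applicant["state_resident"]:      # residency requirement
        return False
    return True

print(eligible({"monthly_income": 900, "household_size": 2, "state_resident": True}))   # True
print(eligible({"monthly_income": 1300, "household_size": 2, "state_resident": True}))  # False
```

Whether these three checks live in a policy binder or in a function, the outcome for any given applicant is the same.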


I have sat across from a call centre in a government office and listened to the workers running through the written version of algorithms. I've spoken to people who worked there and heard how crushing it was to know that someone was getting screwed but to be totally unable to do anything about it because the policy dictated their reaction. And I've worked with charities and listened to the other end of those phone calls; people screaming that their kids are going to be taken away because their benefits have been delayed and they can't afford to feed them.

The underlying assumption of this piece seems to be that turning decision making over to algorithms reduces positive discretion. But the humans in these situations frequently have no more discretion than the machine does, and inefficiency also has a human cost. It seems false to me to pretend that what these algorithms are doing, at least in terms of the majority of their immediate effect, is qualitatively different.

What you're losing when you encode something as an algorithm is the insight that you get from having humans in the loop. Intuition; the things that people haven't thought to measure yet. That's the weakness in any statistical technique - you need a human to lend numbers relevance; to say what is important to know the relationships of; otherwise they're just a sequence of events.

But you need to start off with a system that leverages human strengths in order for that criticism to make sense. Human judgement only has an advantage in a system designed to use the different sorts of value that it offers. If your call centre worker is not truly responsible for the outcome of the call, and if you don't regularly attempt to get feedback from them to inform policy decisions, then it makes no difference if they are replaced by a machine. They were being treated as one to begin with, and the value that they added to the organisation by virtue of being human; of having professional judgement; was being thrown away anyway.

All this does, in a lot of cases, is make existing flaws more obvious.

The exception I can think of to this is the criminal justice system, where there are examples of positive discretion. However, there are also examples of negative discretion there. There are many stupid laws on the books, and selectively enforcing those laws allows you to screw, more or less, whoever you want. It's not surprising that a system that would mechanically implement those laws would produce undesirable outputs, it's just that it's finally being applied to people who have the power to say something about it, (and, perhaps, have their concerns taken seriously enough to alter policy.)

For all that there is a loss in the case of the criminal justice system, there is also a gain: Encoding something as an algorithm makes the flaws in the process more apparent.


Exactly this.

I often describe programming as creating tiny bureaucracies.

You put some information into a "form" (e.g. a search bar). The front desk bureaucrat (mouse, keyboard, screen, etc.) sends it off to other bureaucrats and they follow a bunch of rules to process it and give the front desk some new "paperwork" to give to you (e.g. the resulting web page).

What we are doing with automated algorithms is getting rid of the human bureaucrats and replacing them with "robotic" bureaucrats. That can be a really bad thing depending on the context, but even the human bureaucrats in many cases were already ~ robots.
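The "tiny bureaucracy" analogy can be sketched directly; all names here are illustrative, not from any real system:

```python
# A toy bureaucracy in code: a "form" goes to the front desk, which routes
# it to a back-office clerk who mechanically follows rules and hands back
# new "paperwork".

def back_office(form: dict) -> dict:
    # The rule-following clerk: processes whatever arrives, no questions asked.
    query = form["query"].strip().lower()
    return {"query": query, "results": [f"result for '{query}'"]}

def front_desk(form: dict) -> dict:
    # Accepts the citizen's form and returns the processed paperwork.
    return back_office(form)

paperwork = front_desk({"query": "  Benefits Application  "})
print(paperwork["results"][0])  # result for 'benefits application'
```

Swap the functions for desks staffed by people bound to the same rules, and the behavior of the system is unchanged.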


That said, once you can automate this stuff, there are new types of policies you can make which would be too complicated otherwise, and that is a place where we can unwittingly innovate into new ways of hurting people.

On the other hand, exposing the inherent inhumanity of strict bureaucracy via conversations about automation may actually be a force of awareness and change. An opportunity to explicitly create "human integrations" at key touch-points where people would otherwise fall through the cracks (think API hooks where you can integrate PagerDuty).
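One way such a "human integration" touch-point might look in code, as a hedged sketch (the confidence threshold and field names are invented): when a case falls outside what the rules handle confidently, it escalates to a person instead of being auto-denied.

```python
# Hypothetical decision pipeline with a human escalation hook: routine
# cases are decided automatically, edge cases go to a review queue
# rather than falling through the cracks.

def decide(case: dict, escalate) -> str:
    if case["confidence"] >= 0.9:
        # Routine case: the rules apply cleanly, decide automatically.
        return "approve" if case["score"] > 0.5 else "deny"
    # Edge case: hand it to a human instead of guessing.
    return escalate(case)

review_queue = []
result = decide(
    {"confidence": 0.4, "score": 0.7},
    escalate=lambda c: (review_queue.append(c) or "pending_human_review"),
)
print(result)  # pending_human_review
```

The design choice is that the escalation path is explicit in the system, like a monitoring hook, rather than depending on a caseworker noticing something is wrong.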


This reminds me of the terrifying epistolary short story Computers Don't Argue [1] by Gordon R. Dickson

[1] online here: http://www.dave.rainey.net/calendars/dystopias/process3.html



Compare and contrast this with Tim O'Reilly's essay proposing Algorithmic Regulation: http://beyondtransparency.org/chapters/part-5/open-data-and-...


This sort of thing is why Smart Contracts are actually the worst idea.


I'll take the algorithms any day of the week. The sooner we remove assholes from administering the law, the better.



