
Is it scary to allow machines to "make decisions"?

I saw this article today - Phila. getting software to predict who might kill - and some of the reaction to it made me want to comment. First, some quotes from the article:

Initial research suggests the software-based system can make it 40 times more likely for caseworkers to accurately predict future lethality than they can using current practices.

"This will help stratify our caseload and target our resources"

When caseworkers begin applying the model next year they will input data about their individual cases ... to come up with scores that will allow the caseworkers to assign the most intense supervision to the riskiest cases

So far this reads like classic analytically-based decisioning: using data to improve the accuracy of a decision so as to segment cases (into higher and lower risk) and apply scarce resources more effectively to meet an objective (in this case, less repeat offending by parolees). This is a great example of what I have referred to as load-balancing between computers and people. You are not trying to replace the parole officer or their judgment; you are trying to use the computer to balance the workload involved in identifying high-risk parolees. The parole officer gets additional data from the analysis and can use it, along with their own judgment, to make a better decision.
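
As an aside, here is a minimal sketch of what this kind of score-then-stratify decisioning can look like in code. The feature names, weights, and tier cutoffs are hypothetical, invented purely for illustration - they are not the model described in the article:

    # Hypothetical score-then-stratify sketch. Feature names, weights, and
    # tier cutoffs are invented for illustration, not the model in the article.

    def risk_score(case):
        """Combine weighted case attributes into a 0-100 risk score."""
        weights = {
            "prior_violent_offenses": 12.0,   # more priors -> higher risk
            "age_at_first_offense": -0.8,     # older at first offense -> lower risk
            "months_since_release": -0.5,     # longer clean stretch -> lower risk
        }
        raw = sum(w * case.get(feature, 0) for feature, w in weights.items())
        return max(0.0, min(100.0, 50.0 + raw))  # clamp to the 0-100 scale

    def supervision_tier(score):
        """Map a score to a supervision level so the scarcest resource
        (intensive supervision) goes to the riskiest cases."""
        if score >= 75:
            return "intensive"
        if score >= 40:
            return "standard"
        return "minimal"

    case = {"prior_violent_offenses": 2, "age_at_first_offense": 17,
            "months_since_release": 3}
    score = risk_score(case)
    print(f"score={score:.0f}, tier={supervision_tier(score)}")  # score=59, tier=standard

The score only orders the queue - the caseworker, with their own judgment, still makes the call on each case.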

The article goes on to talk about the potential to predict murders as well, and says:

But before that can begin in earnest, the public has to decide how many false positives it can afford in order to head off future killers, and how many false negatives (seemingly nonviolent people who nevertheless go on to kill) it is willing to risk to narrow the false positive pool.

The issue of false positives is always serious when using analytic models. For business-oriented analytic models, false positives risk annoying customers (by declining a card thanks to a false fraud positive, for instance), but in law enforcement these kinds of false positives are clearly more serious.

Again, though, it depends on what I am going to do with the information. If I use it to preemptively arrest someone (as was shown in the movie Minority Report), then this starts to have really serious implications for privacy and personal liberty. But what if I use it to decide how hard to check an alibi? If I use a "murder score" to decide that I will investigate one alibi closely (because the person has a high murder score) while accepting another more or less at face value (because the person has a low one), then I am really using analysis to help prioritize my search for a killer. Is that scary? If a killing appears likely to have been committed by someone in a particular neighborhood, is it scary if I use a murder score to decide who in that neighborhood to check on first? Am I not just using statistical analysis to enhance my personal judgment? What would I do without the score? I would have to rely on the instincts and experience of police officers alone - which might be good, if the officers assigned are experienced and unbiased, or bad, if they are racist rookies.
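
To make that tradeoff concrete, here is a toy example - the scores and outcomes are entirely invented - showing how moving the alert threshold trades false negatives for false positives:

    # Toy illustration (scores and outcomes invented) of the false positive /
    # false negative tradeoff: each case is (murder_score, actually_offended).
    cases = [
        (92, True), (85, False), (71, True), (64, False), (58, False),
        (55, True), (41, False), (33, False), (20, False), (12, False),
    ]

    for threshold in (80, 60, 40):
        fp = sum(1 for s, y in cases if s >= threshold and not y)
        fn = sum(1 for s, y in cases if s < threshold and y)
        print(f"threshold {threshold}: {fp} false positives, {fn} false negatives")

    # threshold 80: 1 false positives, 2 false negatives
    # threshold 60: 2 false positives, 1 false negatives
    # threshold 40: 4 false positives, 0 false negatives

At the loosest threshold no eventual offender is missed, but four people are wrongly flagged instead of one - exactly the tradeoff the article says the public has to decide how to price.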

Ephraim Schwartz over at InfoWorld said "Statistical analysis is indeed scary, balancing as it does free will against predictability of human behavior." Unlike Ephraim, I don't believe that statistical analysis is inherently scary. I think it can be scary, but it can also be an effective way to eliminate bias and broaden the focus of decision-making. Indeed, when I reviewed Malcolm Gladwell's book Blink, I said:

"I particularly enjoyed the stories about people who had trained their snap judgments so that they could make quick and accurate assessments of situations while not being distracted by misplaced reactions and those about how hard it can be to describe a reaction, even if it is a good one."

Using statistical analysis in this way is, I think, a good thing. Ephraim went on to ask:

Would they be denied parole if the computer decided they were likely to commit one of these crimes?

The reality is that they may currently be denied parole if the parole board thinks they are likely to commit one of these crimes. Does the computer do a better or worse job of deciding? Well, it can be programmed to be oblivious to biases that don't matter and to focus on behavioral clues in a way a parole board cannot. It has no emotional response to the person being considered (which has both good and bad aspects). It makes its decisions differently, but I am not sure you can say it is better or worse.
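
For what it's worth, "programmed to be oblivious" can be as simple as never letting the bias-prone attribute reach the model at all. This is a simplification (excluded attributes can still leak in through correlated ones), and the field names here are hypothetical:

    # One simplistic way to make a model "oblivious" to a bias: keep the
    # bias-prone attributes out of its inputs entirely. Field names are
    # hypothetical.

    ALLOWED = {"prior_offenses", "compliance_history", "employment_status"}
    EXCLUDED = {"race", "neighborhood", "demeanor_in_interview"}

    def model_inputs(case_record):
        """Pass only behavioral features through to scoring; anything on
        the excluded list never reaches the model."""
        return {k: v for k, v in case_record.items() if k in ALLOWED}

No parole board member can make themselves forget what they have seen; the model genuinely never sees it.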

This whole area of how to combine judgment and expertise with analytics is a fascinating one, and Larry Rosenberger, Fair Isaac's head of R&D, gave a presentation on the future of analytics that touched on a number of these issues.




Comments

FICO:

I just had to post this: http://www.dilbert.com/comics/dilbert/archive/dilbert-20061207.html :-)

FICO:

Check out this interesting Guardian article to get a quick reality check on math: http://www.guardian.co.uk/life/badscience/story/0,,1968237,00.html
