Algorithmic tools - grasping reason's full potential or 'suppression of what we know'? Sherlock Holmes vs Father Brown

5 Apr 2017

The possibilities of algorithmic machine learning tools are enormous - but how can 'human' principles such as reasonableness and natural justice be factored into this kind of decision-making?

Marion Oswald, Senior Fellow in Law and Director of the Centre for Information Rights at the University of Winchester, considers the issues and asks whether the different approaches to reasoning of fictional detectives Sherlock Holmes and Father Brown have something to teach us.

I think it goes without saying that Sherlock Holmes would have been a fan of algorithmic machine learning tools - whereby a computer is given a task and then learns and extracts a mathematical formula for that task from sets of input data. (Although whether he would have admitted that algorithms could enhance his own faculties, rather than just those of Inspector Lestrade, is open to debate!)
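By way of a concrete illustration, here is a minimal sketch in Python using the numpy library - the figures are invented purely for this example. The 'learning' consists of extracting a formula's coefficients from example data, and the extracted formula can then make predictions about unseen cases:

```python
import numpy as np

# Toy task: learn the relationship between hours of daylight and reported
# incidents from example data (figures invented for illustration).
daylight_hours = np.array([8.0, 9.5, 11.0, 13.5, 16.0])
incidents = np.array([42.0, 39.0, 35.0, 30.0, 24.0])

# The 'learning' step: extract a mathematical formula (here a straight
# line, incidents = a * hours + b) that best fits the examples.
a, b = np.polyfit(daylight_hours, incidents, deg=1)
print(f"Learned formula: incidents = {a:.2f} * hours + {b:.2f}")

# The extracted formula can now be applied to a case it has never seen.
print(f"Predicted incidents at 12 hours of daylight: {a * 12.0 + b:.1f}")
```

The formula is extracted from the data rather than written by a programmer - which is precisely why Holmes's caveat below matters: the formula knows nothing beyond the examples it was shown.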

In The Adventure of the Five Orange Pips, Holmes remarked:

"The ideal reasoner would, when he has once been shown a single fact in all its bearings, deduce from it not only all the chain of events which led up to it but also all the results which would follow from it."
 
Holmes could see the potential of the ideal reasoning tool for making predictions. To fulfil reasoning’s full potential:

"it is necessary that the reasoner should be able to utilize all the facts which have come to his knowledge, and this in itself implies, as you will readily see, a possession of all knowledge…a somewhat rare accomplishment."

He would surely have been excited by ‘Big Data’ and by powerful computing that can draw inferences and conclusions from today’s vast digital datasets.

GK Chesterton’s Father Brown, however, would likely be rather irritated by our data-driven society. "What do these men mean…when they say detection is a science?" he asked in The Secret of Father Brown:

"They mean getting outside a man and studying him as if he were a gigantic insect: in what they would call a dry impartial light, in what I should call a dead and dehumanized light. They mean getting a long way off him, as if he were a distant prehistoric monster…"

Father Brown was concerned about how such processes take us away from the real human:

"So far from being knowledge, it’s actually suppression of what we know. It’s treating a friend as a stranger, and pretending that something familiar is really remote and mysterious."

We hear much about machine learning’s possibilities: to improve medical diagnostics, to make our cities ‘smarter’ and our streets safer. By using knowledge of the many to inform a decision about the one, decisions could become less open to human inconsistency and bias. Serious public-interest considerations drive the deployment of these technologies in the public sector. In the criminal justice context, for instance, a tool that made a more consistent assessment of the future risk posed by individuals might enable better decisions to be made about those who would benefit from rehabilitation programmes, without exposing communities to unnecessary risk.

We are living through the early days of these technologies, however. Biases and inconsistencies can be ‘baked’ into algorithms, and could therefore be perpetuated. As the plethora of academic theories demonstrates, there is as yet little agreement on how to prevent this, nor are there regulatory or even voluntary standards that could be applied.

One could argue that, in reality, nothing much has changed – human decision-making has always been opaque to a greater or lesser extent. But nowadays we have ‘black-box’ algorithms – neural networks – where the network calculates an output from a specified input but the internal workings are opaque. The vast majority of us have no idea how the algorithm’s ‘mind is working’ (Oswald, Grace 2016), whereas we are innately ‘familiar’ with the human one.

And even the best-written algorithms cannot live up to Holmes’s ambition – these tools can never be in ‘possession of all knowledge.’ Input data is always limited to that which is easily categorized. As Alpaydin comments: "There are always other factors that affect the output; we cannot possibly record and take all of them as input, and all these other factors that we neglect introduce uncertainty." (Ethem Alpaydin, Machine Learning (MIT Press, 2016)). How do we categorize unique family circumstances; the importance of someone’s job to their self-esteem; even compassion, in a way that an algorithm could process?
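To make the ‘black box’ point concrete, here is a deliberately tiny neural network - a minimal sketch in Python using only the numpy library. The data, inputs and task are invented purely for illustration; this is a generic toy model, not any real policing or assessment tool. Even at this miniature scale, the model’s ‘reasoning’ consists entirely of arrays of learned numbers that offer no human-readable account of why a given input produces a given output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: two crudely categorized inputs per case (figures
# invented for illustration) and a 0/1 outcome label. Everything that is
# not easily categorized - family circumstances, self-esteem, compassion -
# is simply absent, and so becomes unmodelled uncertainty.
X = rng.uniform(0, 1, size=(200, 2))
y = ((X[:, 0] + X[:, 1]) > 1.0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny one-hidden-layer network: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(0, 1, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, size=(4, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(2000):
    # Forward pass: calculate an output from the specified input.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce the prediction error
    # (gradient descent on mean squared error).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_h) / len(X);  b1 -= lr * d_h.mean(axis=0)

# The trained network maps inputs to outputs...
print("Training accuracy:", ((out > 0.5) == y).mean())
# ...but its internal workings are just these numbers:
print("Hidden-layer weights:\n", W1)
```

The point is not the arithmetic but the asymmetry Father Brown would have recognized: the weights fully determine the output, yet inspecting them tells us nothing that a human would accept as a reason.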

Public law principles – reasonableness, necessity, proportionality, natural justice, procedural fairness – have stood the test of time. In my view, they will continue to do so in the face of challenges posed by algorithmic decision-making. This is why, in collaboration with Durham Constabulary, I have developed a decision-making framework for the deployment of algorithmic assessment tools in the policing context. The framework – ‘Algorithms in Policing – Take ALGO-CARE™’ – aims to translate key public law and human rights principles into practical considerations and guidance that can be addressed by public sector bodies.

Each word in the mnemonic – Advisory; Lawful; Granularity; Ownership; Challengeable; Accuracy; Responsible; Explainable – is supplemented by questions and considerations representing key legal principles, as well as practical concerns such as intellectual property ownership and the availability of an ‘expert witness’ to the tool’s functionality. Public law has been prepared to accept the need for processes and policies that attempt to ensure consistency and the treatment of similar individuals in a similar fashion. An important condition, however, is that a public body must not fetter its discretion: the individual human must be considered. This is a fundamental consideration to bear in mind for the future of algorithmic decision-making.

So whose advice should we take – Sherlock’s or Father Brown’s? I’d say that we should take a bit of both.    

Marion Oswald is Senior Fellow in Law and Director of the Centre for Information Rights at the University of Winchester. @Marion_InfoLaw @_UoWCIR

‘Algorithms in Policing – Take ALGO-CARE™’ will be discussed in more detail at TRILCon17 on 3 May and will be included in my written evidence to the Parliamentary Science and Technology Committee’s inquiry into the use of algorithms in decision-making.

To find out more about TRILCon17 – the 4th Conference on Trust, Risk, Information & the Law, taking place on 3 May and exploring machine learning and AI – and to book, go to www.winchester.ac.uk/trilcon

The views and opinions expressed in this blog are those of the author and do not necessarily reflect the position of the University.

 
