Digital evidence in investigative and evaluative proceedings

Event: Digital evidence in investigative and evaluative proceedings
Date and time: Monday 4 May 2020, 14.00-17.00
Location: Virtual
Registration link:

The next meeting of the Section will be a virtual meeting on “Digital Evidence in Investigative and Evaluative Proceedings”.

Data extracted from various digital media and devices, so-called digital evidence, is being used increasingly often at all stages of the criminal process, from investigation to evaluation at trial. Critical voices call for a better understanding of the capabilities, limitations and challenges of this emerging type of evidence. Besides general process features and managerial issues such as caseload, backlog, time and other resource constraints, lawyers and judges also need the knowledge and understanding to question the potential probative value of digital evidence. In turn, forensic scientists need a clear view of how to assess digital evidence in a transparent, balanced and robust manner. These topics raise the question of what data is needed to secure the usability of, and trust in, digital evidence as a distinct branch of forensic science, and its deployment in particular cases. The collection and analysis of relevant data in this area, as well as case studies, thus offer new opportunities for collaboration between forensic scientists, lawyers and statisticians. This afternoon seminar brings together key academics and practitioners who work at the intersection of forensic science, digital evidence and the law. They will share their perspectives on the pending challenges that members of the judiciary and academia need to address now in order to ensure that digital evidence can make reliable contributions to legal proceedings.

There will be three speakers followed by an open discussion:

Dr. Gillian Tully, UK Forensic Science Regulator
Professor Eoghan Casey, University of Lausanne, Switzerland
Matt Tart, Principal Scientist, CCL Forensics Ltd., United Kingdom

Titles and abstracts:

Dr Gillian Tully: “Risk, Quality Assurance and Innovation in Digital Forensics”

The massive demand for extraction and analysis of data from digital devices has led, in England and Wales, to the evolution of a cottage industry of small units, each using a selection of software tools, extracting data and passing it, largely uninterpreted, to investigators.

Some of these digital forensic units have in place the basic quality assurance measures needed to ensure that the limitations of their work are known and that the provenance, continuity and validity of the data they have extracted can be demonstrated; others do not. Increasingly, front-line police officers with minimal training are carrying out data extraction using self-service “kiosk” technology.

Those analysing the data are often, in this jurisdiction, intelligence analysts or investigators. A lack of methodological robustness in the analysis and interpretation of evidence has resulted in failures to find exculpatory evidence, which was present within the extracted data.

In this presentation, risks and failures in the extraction, analysis and interpretation of digital evidence will be considered alongside the need for process and technology innovation, with a view to achieving sustainable improvement.

Professor Eoghan Casey: “Scientific interpretation of digital evidence”

There is growing concern about misinterpretation of digital evidence, and there is a need to overcome perceived barriers to employing scientific reasoning within a logical Bayesian framework. This talk presents an approach to promoting Case Assessment and Interpretation (CAI) in digital investigations, and to formalising the expression of evaluative opinion in various legal contexts. Associated benefits and challenges are demonstrated through case examples involving tampering of digital evidence.
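As a minimal illustration of the logical Bayesian framework the abstract refers to (the numbers are invented, not from the talk), the value of an item of evidence can be expressed as a likelihood ratio that updates prior odds to posterior odds:

```python
# Illustrative sketch: the core of the logical Bayesian framework is the
# likelihood ratio (LR), which updates the prior odds on a proposition
# in light of the evidence. All probabilities below are hypothetical.

def likelihood_ratio(p_e_given_hp, p_e_given_hd):
    """LR = P(E | Hp) / P(E | Hd): probability of the evidence under the
    prosecution proposition versus the defence proposition."""
    return p_e_given_hp / p_e_given_hd

def posterior_odds(prior_odds, lr):
    """Bayes' rule in odds form: posterior odds = prior odds x LR."""
    return prior_odds * lr

# Hypothetical case: the evidence is 100 times more probable if the
# defendant's device produced the trace than if it did not.
lr = likelihood_ratio(0.9, 0.009)
print(round(lr))                        # 100
print(round(posterior_odds(0.5, lr)))   # prior odds 1:2 become odds 50:1
```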

Matt Tart: “Opinion Evidence in Cell Site Analysis”

Issues concerning forensic inference exist in all areas of forensic science, and Cell Site Analysis is no exception. There is a standard concerning opinion evidence, adopted by both the European Network of Forensic Science Institutes (ENFSI) and the Association of Forensic Science Providers (AFSP), based on the principles of the Case Assessment and Interpretation (CAI) model widely used within “traditional” forensic science. This standard has not been widely adopted, and does not appear to be particularly well known, either within Cell Site Analysis or in the general field of Digital Forensics. This paper is aimed at Cell Site Analysis experts; it outlines the legislative and regulatory framework within which opinion in Cell Site Analysis is provided and addresses how the principles defined in the AFSP standard can be applied to Cell Site Analysis. A case example highlighting the differences between a task-driven approach commonly used within Cell Site Analysis and a CAI approach to the same data is presented and explored.

Decision-support in litigation

Event: “Decision-support in litigation”

Date and Time: 6 December from 2-5pm

Location: 5th floor seminar room, Bayes Centre, University of Edinburgh

Registration link:


Should we settle this case? What are our chances of winning? Do we need another witness? These are all questions that might be asked during civil or criminal litigation. Traditionally, much work at the interface of statistics and the law has focussed on the interpretation of evidence and relatively little on strategies for running legal cases. As litigation can have huge implications for individuals and businesses and involve large sums of money, effective decision-making strategies are critical to ensure that good decisions are being made at each stage of the process. There are huge opportunities for statisticians and data scientists to work with lawyers to solve these problems and improve the legal decision-making process but there are many difficult challenges to resolve, for example how to collect appropriate datasets. At this event we will hear from three people working on interdisciplinary problems in this area. Following the talks we will have an open discussion aiming to identify both the key challenges and the ways in which statisticians might be able to contribute.

This is a joint event organised by the Royal Statistical Society Statistics and Law Section and the Edinburgh Centre for Statistics. There will be three speakers (abstracts below):

  • John MacKenzie, partner at Shepherd and Wedderburn.
  • Dr Alex Biedermann, University of Lausanne.
  • Wen Zhang, London School of Economics.

Following the talks there will be a panel discussion, chaired by Jamie Gardiner (advocate), investigating how statistics can be used to support decision-making in litigation.

This event is free to attend but registration is required. The registration link is:

Please email Amy.L.Wilson “at” if there are any problems registering.


John MacKenzie

John will share his experience as a dispute resolution lawyer and the current reality of the litigation process. Looking at both the practical reality of a lawyer’s evidence gathering and the psychology of dispute resolution, his presentation will consider if and how data, data science and statistics might revolutionise (or maybe just help) the dispute resolution world.

Alex Biedermann – Assessing the Value of Forensic Science Results in Strategic Legal Decision Analysis

Decision problems encountered by litigants present challenging features, such as multiple competing propositions, variable costs and uncertain process outcomes. This complicates analyses based on formal decision-theoretic models and the use of diagrammatic devices such as decision trees, which mainly provide static views of selected features of a given problem. Moreover, strategic planning and the assessment of legal tactics – given a party’s standpoint – encounter further intricacies when considerations are extended to information provided by forensic science experts. This is because introducing results of forensic examinations may impact on the probability of various trial outcomes and hence play an important role in a party’s strategic analysis. This presentation analyses and discusses examples of decision problems at the interface of the law and forensic science using influence diagrams. Such models can be implemented through commercially and academically available software systems. These normative decision support structures represent core computational models that can be paired with other litigation-support systems, to help address a variety of questions in strategic legal decision analysis.
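A heavily simplified sketch of the kind of expected-value comparison such decision-theoretic models encode (all probabilities, awards, costs and the decision rule below are invented for illustration, not taken from the talk):

```python
# Toy settle-vs-litigate comparison, where a favourable forensic result
# changes the probability of winning at trial. Every number is invented.

def expected_value_litigate(p_favourable_forensics):
    """Expected monetary value of litigating, averaging over whether the
    forensic examination turns out favourably."""
    p_win_if_fav, p_win_if_unfav = 0.7, 0.3   # assumed win probabilities
    award, costs = 100_000, 20_000            # assumed award and legal costs
    p_win = (p_favourable_forensics * p_win_if_fav
             + (1 - p_favourable_forensics) * p_win_if_unfav)
    return p_win * award - costs

settlement_offer = 30_000
ev = expected_value_litigate(p_favourable_forensics=0.6)
# Decision rule: litigate only if the expected value exceeds the offer.
decision = "litigate" if ev > settlement_offer else "settle"
print(round(ev), decision)
```

An influence diagram generalises this by representing the chance nodes (forensic result, trial outcome), decision nodes (settle or litigate) and utilities in one graph, so the same computation extends to many interacting choices.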

Wen Zhang – Decision support in civil litigation

The application of data science methodologies to civil litigation is a rapidly emerging field, which lags behind applications of statistical methods in criminal forensic science. This presentation will discuss how to apply advanced methodologies, particularly sequential game theory and decision trees, to the civil litigation process. The aim is to support decision-making in the semi-cooperative environment of negotiation. Progress will serve national and professional objectives to decrease the cost and duration of litigation. Wider benefits will include better contract design, better use of judicial resources and, in insurance cases, greater fairness to both claimant and defendant.

Summary of guides on statistics and the law

There are a number of available documents and guidelines that have been produced to assist legal and forensic practitioners in the interpretation of forensic evidence. The Section has produced a document summarising these (SummaryOfGuides), including details on which organisations have endorsed the guides.

We anticipate that there are other documents available that we have missed, and that the list will need to be updated over time. If you know of anything that should be added to the list (or any broken links), please contact the secretary of the section (details under “contact us”) to let us know.


Meeting: the statistics of drug testing in sport

The next section meeting will be held at the Royal Statistical Society, Errol Street on 18 April 2019 from 2pm-5pm.

The statistics of drug testing in sport

There have been a number of high-profile sports doping cases in recent years. Famous examples include the Russian Olympic scandal, Lance Armstrong, Justin Gatlin and many more. But how can cheating be distinguished from a good performance? How can abnormal measurements in drug tests be differentiated from measurement error? And how sure must the evidence be to punish athletes for failed drug tests? This event will bring together four experts in the field to discuss their experience in the statistics of drug testing. Following the presentations we will have an open discussion on the questions raised.

We will have four speakers followed by a panel discussion:


Dr Reid Aikin (World Anti-Doping Agency) – The Athlete Biological Passport

Professor Andrea Petroczi (Kingston University and World Anti-Doping Agency) – The elusive number: challenges and opportunities in estimating doping prevalence

15.00-15.30 – coffee break

15.30-16.30

Professor Sheila Bird (Medical Research Council Biostatistics Unit, Cambridge) – Reflections on twice being statistical expert witness when Tour de France cyclists had to defend against adverse analytical findings: absence of evidence versus critical prior in-race tests.

Professor Don Berry (The University of Texas MD Anderson Cancer Center, Houston) – The Science of Doping … Or Lack Thereof

16.30-17.00 – panel discussion

Attendance at the meeting is free of charge for RSS fellows and £25 for non-fellows. Registration is required via the following link:


Dr Reid Aikin  – The Athlete Biological Passport

Drug testing regimes in sports based on the analytical detection of substances are inherently limited by factors such as the continuous development of new substances and the use of substances that are indistinguishable from those naturally produced in the body. As an alternative and complementary approach, the Athlete Biological Passport (ABP) aims to monitor selected biological variables over time that can reveal the effects of doping. This talk will provide an overview of the principles of the ABP, the basic framework of an ABP program and how an ABP program is integrated into an overall anti-doping strategy. Limitations of the current system and strategies to further develop the ABP will be discussed.
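As a rough illustration of the longitudinal-monitoring idea behind the ABP (a simplified sketch, not WADA’s actual adaptive Bayesian model; the marker values are invented), an athlete’s own history can define individual reference limits against which new measurements are compared:

```python
# Simplified sketch of individualised longitudinal monitoring: flag a
# new measurement that lies far outside the athlete's own baseline.
# Invented values; the real ABP uses an adaptive Bayesian model.
from statistics import mean, stdev

def atypical(history, new_value, k=3.0):
    """Flag new_value if it lies more than k standard deviations from
    the athlete's own baseline mean."""
    m, s = mean(history), stdev(history)
    return abs(new_value - m) > k * s

# Hypothetical haemoglobin values (g/dL) for one athlete over time.
baseline = [14.1, 14.4, 13.9, 14.2, 14.0]
print(atypical(baseline, 14.3))  # within the individual range -> False
print(atypical(baseline, 17.5))  # far above the baseline -> True
```

The point of individual limits, as opposed to population-wide thresholds, is that a value entirely normal for the population may still be highly atypical for a particular athlete.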

Professor Andrea Petroczi – The elusive number: challenges and opportunities in estimating doping prevalence

Determining the prevalence of doping behaviour is a strategic priority for the wider anti-doping community, and for the World Anti-Doping Agency. Having a reliable and accurate estimate of what percentage of athletes are involved in doping is important to assess the extent of the issue in sports and to evaluate how effective the anti-doping programmes are.

To date, there is no reliable method for assessing doping prevalence in a robust, accurate and consistent manner. While analytical methods are objective at the individual test level, they have their limitations. Theoretically, self-reports would be a perfect and inexpensive way to obtain information on unobservable behaviour. The athletes themselves are in the best position to provide information on doping – either reporting on their own actions or on those of others. Furthermore, survey-based methods do not require laboratories, sampling, storage, transportation or skills for sample analysis and thus can be applied practically anywhere, anytime, at low cost. However, owing to the social sensitivity of the behaviour, and the potential repercussions if exposed, direct questioning is likely to result in underestimation. Indirect estimation methods relying on randomised and/or ‘fuzzy’ responses for protection can offer a promising avenue – but this approach is not without challenges either. The presentation will focus on these challenges and – with a view to facilitating targeted research in this area – explore ways to address each.
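One classic indirect method of the kind mentioned above is Warner’s randomised response design: with known probability each respondent answers the sensitive question, and otherwise its complement, so no individual answer reveals their status, yet population prevalence can still be recovered. A minimal sketch with invented survey numbers:

```python
# Warner's (1965) randomised response estimator. With probability p the
# respondent truthfully answers "I dope", otherwise "I do not dope",
# so the observed 'yes' fraction is lambda = p*pi + (1-p)*(1-pi).
# Survey figures below are invented for illustration.

def warner_estimate(yes_fraction, p):
    """Prevalence estimate pi_hat = (lambda + p - 1) / (2p - 1),
    where lambda is the observed fraction of 'yes' answers (p != 0.5)."""
    return (yes_fraction + p - 1) / (2 * p - 1)

# Hypothetical survey: the device directs 70% of respondents to the
# sensitive statement (p = 0.7) and 34% of all answers are 'yes'.
print(warner_estimate(yes_fraction=0.34, p=0.7))  # roughly 0.10
```

The privacy protection comes at a statistical price: the estimator’s variance grows as p approaches 0.5, which is one of the design trade-offs the talk’s “challenges” allude to.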

Professor Sheila Bird – Reflections on twice being statistical expert witness when Tour de France cyclists had to defend against adverse analytical findings: absence of evidence versus critical prior in-race tests.

Statistician expert witnesses, engaged by the prosecution and defence legal teams, were in agreement when heard and questioned before the court in Lausanne, but the judges decided otherwise. The second example resulted in the cyclist’s exoneration. Exoneration owed most to pharmacological modelling of the individual cyclist’s responses to dose-changes, for which his prior test results in the Tour de France were crucial. However, my critique and evidence-synthesis of “randomized” dose-ranging studies were importantly supportive, not least of the need for logarithmic transformation of urinary salbutamol levels and of inter-individual variation. Intra-subject variation was poorly evidenced.

Professor Don Berry – The Science of Doping … Or Lack Thereof

The title of my presentation was also the original title of a commentary in Nature in 2008. Just before the article went to press, Nature’s lawyers had the editors drop the title’s last three words under threat of a lawsuit from WADA. The article itself didn’t change. Nature’s editorial on my article, which they titled “A level playing field”, stated that Nature “believes that accepting ‘legal limits’ of specific metabolites without such rigorous verification goes against the foundational standards of modern science, and results in an arbitrary test for which the rate of false positives and false negatives can never be known.” I will update some of the ideas in that article based on recent experiences. One of those is the subject of an article in Chance entitled “Statisticians Introduce Science to International Doping Agency: The Andrus Veerpalu Case”. This case was the first successful appeal of a WADA doping finding to the Court of Arbitration for Sport. The reasons cited by CAS were statistical.


House of Lords inquiry into forensic science

The Science and Technology Committee of the House of Lords recently conducted an inquiry into the use of forensic science in the criminal justice system. The inquiry looked at the research landscape in forensic science, standards and regulation, and digital forensics.

The Section submitted written evidence to the inquiry. This evidence has been published here:


AGM and meeting on Policing and Criminology

The next meeting of the section will be held on 10 December 2018 from 14.00-17.00 at the Royal Statistical Society, 12 Errol Street, London EC1Y 8LX. The AGM of the section will be from 13.40-14.00, immediately preceding the meeting.

The meeting will be on policing and criminology. We will have four speakers:

  • Professor Jim Smith (University of Warwick/ Alan Turing Institute)

Graphical Models to Help Investigate Violent Criminals

One serious challenge in providing support for the policing of various kinds of systematic violent crime is that cases can be very dynamic and idiosyncratic. In this talk I will outline the progress we are making in designing new graphical interfaces that help to separate the enduring structure of these processes from their more ephemeral features. Our Bayesian models derive from earlier studies concerning the synthesis of forensic activity-level evidence but are now applied to resourcing models to support the prevention of crime. This talk reports on ongoing work undertaken by a team of researchers at the Alan Turing Institute.

  • Dr Anjali Mazumder (Carnegie Mellon University/ Alan Turing Institute)

Algorithmic Tools in Justice – bias and fairness, a causal lens

There has been an increasing use of algorithmic tools to support decision-making across public sectors, including financial, human, health care, policing and criminal justice services. The use of such algorithms is not new. However, with growing recognition of the bias and potential for unfairness that such tools may possess and perpetuate, researchers have begun to develop methods to achieve algorithmic fairness. In this talk, we discuss the use of algorithmic tools in justice, their potential for inherent bias and its implications for fairness in such high-stakes decision-making, and approaches to achieving algorithmic fairness. We will take a particular causal lens to explore the fairness of algorithmic tools in which forensic science plays a central role in both the investigative and evaluative stages of a criminal case. This reports on ongoing work undertaken by researchers at the Alan Turing Institute and Carnegie Mellon University.

  • Professor David Tuckett (Psychoanalysis Unit, UCL)

Making Decisions under Radical Uncertainty

How can academic study help business leaders, policy-makers, regulators or those in a courtroom make better decisions? For a long time now, the answer has been by using tools such as game theory, expected utility theory and subjective utility theory to provide them with optimal choices – and often with major success. However, are these tools always – or even usually – appropriate? What role do specifically human qualities of imagination, feeling and intuition have to play? What is the role of analysing “data” properly, and is the conclusion to draw from academic research that humans are poor decision-makers, influenced by bias and emotion, so that we would be better off relying on AI as often as possible?

This talk will review these questions by introducing the work of the UKRI funded CRUISSE[1] network and introducing Conviction Narrative Theory, as a model for human decision-making when there is deep uncertainty.

[1] Confronting Radical Uncertainty in Science, Society and the Environment.

  • Dr Toby Davies (Jill Dando Institute of Security and Crime Science, UCL)

Understanding and predicting urban patterns of crime

One of the most crucial steps in preventing crime is understanding where and when it happens: as well as providing a basis for the deployment of police resources, such insight also provides a rationale for the application of place-based interventions. Traditionally, gaining such insight has been a particular challenge, given the complexity of behaviours involved, and its utility has been primarily descriptive. In recent years, however, improved data availability, coupled with the application of analytical techniques from other fields, has revealed a number of statistical regularities in crime data; most notably, its heterogeneous distribution in space and time and the prevalence of space-time clustering. In turn, the presence of these regularities has raised the possibility that they can be leveraged in order to predict the locations of future crimes by applying algorithms to past crime data. In this talk, I will briefly review this background, before discussing recent research which examines the role that urban structure – in particular the street network – plays in shaping these patterns. I will discuss the implications of this for crime prediction, and show how the adaptation of algorithms to account for this structure leads to improved predictive performance. In conclusion, I will describe a real-world implementation of predictive policing and identify opportunities for further exploitation of data in the field.
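A toy sketch of the prospective hot-spot idea described above (invented data and parameter values; operational systems, and the network-aware adaptations the talk describes, are considerably more sophisticated):

```python
# Illustrative prospective hot-spot score: past crimes contribute to a
# location's risk with weights that decay in space and time, so recent
# nearby events dominate. All coordinates and scales are invented.
import math

def risk_score(cell, events, space_scale=1.0, time_scale=7.0):
    """Sum of exponentially decaying contributions from past events.
    Each event is (x, y, days_ago); nearer and more recent events
    contribute more to the cell's score."""
    x, y = cell
    total = 0.0
    for ex, ey, days_ago in events:
        dist = math.hypot(x - ex, y - ey)
        total += math.exp(-dist / space_scale) * math.exp(-days_ago / time_scale)
    return total

# Hypothetical burglary records: (x, y, days since the event).
events = [(1.0, 1.0, 2), (1.2, 0.9, 5), (8.0, 8.0, 30)]
scores = {cell: round(risk_score(cell, events), 3) for cell in [(1, 1), (8, 8)]}
print(scores)  # the recent nearby cluster makes (1, 1) score far higher
```

Ranking cells by such a score, and patrolling the top-scoring ones, is the basic logic behind the space-time-clustering-based prediction the abstract mentions.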

Attendance at the meeting is free of charge but registration is required. The registration link can be found here: