The poster session is scheduled during the reception, 17:15–19:00 in mini-forum 2, room 4.40 of the Informatics Forum.

- Colin Aitken (University of Edinburgh) “Evidence evaluation for discrete data”.
- Paul Hewson (Plymouth University) “Right Censored Geometric Distribution in Surveys for Global Acute Malnutrition”.
- Chris McLellan (University of Edinburgh) “Modelling tracks of cabbage root fly larvae in a novel study of crop protection”.
- Gustaf Rydevik (BioSS / Scottish Agricultural College / University of York) “Predicting the past - Hindcasting an epidemic curve”.
- Charles Sutton (University of Edinburgh) “Quasi-Newton Methods for Discrete and Continuous Markov chain Monte Carlo”.
- Amy Wilson (University of Edinburgh) “The evaluation of evidence relating to traces of drugs on banknotes”.

Evidence evaluation for discrete data

In forensic science the value of evidence is determined with the likelihood ratio. This compares the likelihood of the evidence if the prosecution proposition is true with the likelihood of the evidence if the defence proposition is true. When the evidence is in the form of measurements, methods are well developed, based on hierarchical Bayesian multivariate random-effects models. Methods for discrete data are not so well developed.
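For reference, the likelihood ratio described here is conventionally written as follows, with E the evidence and H_p, H_d the prosecution and defence propositions (standard notation, not taken from the abstract):

```latex
\mathrm{LR} = \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}
```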

Data are available from a project in forensic phonetics at the University of York in which the number of clicks per minute is recorded for each of 100 speakers, over a period of time ranging from four to six minutes. The evidence may be considered to be the number of clicks from a piece of speech from an unknown source and the number of clicks from a piece of speech from a known source, such as a suspect. The prosecution proposition would be that these two pieces of speech were made by the same speaker; the defence proposition would be that they were made by different speakers. Using these data as an exemplar, possible models for such data are currently under investigation and will be presented for discussion along with preliminary results. These models include a beta-binomial model and its generalisation to a Dirichlet-multinomial model, a Poisson-gamma model, an empirical model based on relative frequencies, and a model allowing for correlated discrete data. Joint work with Erica Gold.
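To make the beta-binomial idea concrete, here is a minimal sketch of a likelihood ratio for two click counts: the same-source marginal likelihood (both recordings share one beta-distributed click rate) divided by the product of separate marginals. The Beta(1, 1) prior and the (clicks, opportunities) pairs are hypothetical, not the project's actual model or data.

```python
# Beta-binomial likelihood-ratio sketch for two click counts.
# Prior Beta(1, 1) and the example counts below are hypothetical.
from math import comb, exp, lgamma, log

def betaln(a, b):
    # Log of the Beta function via log-gamma.
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal(counts, a=1.0, b=1.0):
    """Log marginal likelihood of (clicks, opportunities) pairs that
    share a single beta-distributed click rate."""
    x_tot = sum(x for x, _ in counts)
    n_tot = sum(n for _, n in counts)
    log_binom = sum(log(comb(n, x)) for x, n in counts)
    return log_binom + betaln(a + x_tot, b + n_tot - x_tot) - betaln(a, b)

def likelihood_ratio(known, unknown):
    """Same-source marginal over the product of separate marginals."""
    same = log_marginal([known, unknown])
    different = log_marginal([known]) + log_marginal([unknown])
    return exp(same - different)

print(likelihood_ratio((5, 60), (6, 60)))   # similar counts: LR > 1
print(likelihood_ratio((5, 60), (40, 60)))  # dissimilar counts: LR < 1
```

Similar counts support the same-speaker proposition (LR above one); very different counts support different speakers.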

Right Censored Geometric Distribution in Surveys for Global Acute Malnutrition

The Lot Quality Assurance Sampling (LQAS) design has become popular amongst international agencies for emergency surveys for Global Acute Malnutrition. We suggest, however, that “gold standard” surveys can be conducted if one instead uses a Bayesian framework, in which the binomial and geometric distributions have the same likelihood kernel. Joint work with David Mensah.
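The equivalence claimed above can be checked numerically: with a Beta prior on the prevalence p, both sampling schemes give a likelihood proportional to p^k (1-p)^(n-k), so the posterior is the same Beta distribution. The prior, case count, and inter-case gaps below are hypothetical illustrations.

```python
# Binomial vs (censored) geometric sampling under a Beta prior:
# both yield the same conjugate Beta posterior. Numbers are hypothetical.
a, b = 1.0, 1.0          # Beta(1, 1) prior on the malnutrition prevalence p
k, n = 7, 150            # 7 cases among 150 children surveyed

# Binomial view: k cases in n children, kernel p^k (1-p)^(n-k).
post_binomial = (a + k, b + n - k)

# Geometric view: gaps g_1..g_k between cases, sum(g_i) = n; each gap
# contributes (1-p)^(g_i - 1) * p, so the kernel is again p^k (1-p)^(n-k).
gaps = [20, 10, 35, 15, 25, 30, 15]   # hypothetical gaps, summing to 150
assert sum(gaps) == n and len(gaps) == k
post_geometric = (a + len(gaps), b + sum(gaps) - len(gaps))

print(post_binomial == post_geometric)   # True: identical Beta posterior
```

Because the posterior depends only on the kernel, the Bayesian analysis is the same however the n children were accumulated.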

Modelling tracks of cabbage root fly larvae in a novel study of crop protection

We propose the use of flexible diffusion models to characterise the behaviour of cabbage root fly larvae when exposed to attractant and repellent compounds. In particular, we investigate the use of hidden Markov models for larval movement.

Predicting the past - Hindcasting an epidemic curve

When an epidemic outbreak is discovered, it has often been developing unseen for some time, so understanding its origin is valuable. We present a method to retrospectively reconstruct the development of an infectious disease from a cross-sectional data set collected at a single time point. By exploiting the differing time dependencies of different diagnostic tests, we can triangulate the time of infection for an individual. An MCMC approach is used to construct the posterior distribution of test results given time of infection, and to estimate the distribution of infection times given test results. We investigate the properties of this procedure by applying it to artificial data generated from several combinations of diagnostic test responses and epidemic shapes. The estimated epidemic curve is shown to be both robust and precise.
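The triangulation idea can be sketched with two tests whose expected responses follow different time courses after infection: jointly, the two observed values pin down the time since infection. The test dynamics, noise model, and observations below are hypothetical, and a simple grid posterior stands in for the MCMC described in the abstract.

```python
# Triangulating time since infection from two diagnostic tests with
# different time courses. Dynamics, noise, and data are hypothetical;
# a grid posterior stands in for MCMC.
from math import exp

def expected(test, t):
    # Test A decays from its peak; test B rises slowly and persists.
    return exp(-t / 5.0) if test == "A" else 1.0 - exp(-t / 20.0)

def posterior_time(obs_a, obs_b, sigma=0.1, t_grid=range(1, 61)):
    """Normalised Gaussian-error posterior over time since infection,
    with a flat prior on the grid."""
    def like(t):
        ra = obs_a - expected("A", t)
        rb = obs_b - expected("B", t)
        return exp(-(ra * ra + rb * rb) / (2 * sigma * sigma))
    w = [like(t) for t in t_grid]
    z = sum(w)
    return {t: wi / z for t, wi in zip(t_grid, w)}

post = posterior_time(obs_a=0.15, obs_b=0.60)
print(max(post, key=post.get))   # most probable time since infection
```

Summing such per-individual posteriors over a cross-sectional sample is, in spirit, how an epidemic curve can be hindcast.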

The evaluation of evidence relating to traces of drugs on banknotes

Banknotes can be seized from crime scenes as evidence of illicit drug use or dealing. Mass Spec Analytical Ltd., an analytical chemistry company, have developed a technique to analyse the quantities of drugs on banknotes. Data are available from banknotes seized in criminal investigations, as well as from banknotes from the general circulation. For each sample tested, the analytical response over time is recorded for five different drugs. A peak detection algorithm used to convert these data into a measurement of the quantity of drug on each banknote will be presented.
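The abstract does not specify the peak detection algorithm, so as a generic illustration here is a minimal detector of the familiar kind: local maxima above a noise threshold in a response trace, with peak height as a proxy for quantity. The trace and threshold are invented.

```python
# Generic peak detection on an analytical response trace.
# Not MSA's actual algorithm; trace and threshold are hypothetical.
def detect_peaks(trace, threshold):
    """Indices of strict local maxima exceeding the threshold."""
    return [i for i in range(1, len(trace) - 1)
            if trace[i] > threshold
            and trace[i] > trace[i - 1] and trace[i] > trace[i + 1]]

baseline = [0.1, 0.2, 0.1, 0.2]
peak = [0.3, 1.8, 4.0, 2.2, 0.4]   # response as a banknote is analysed
trace = baseline + peak + baseline
idx = detect_peaks(trace, threshold=1.0)
quantities = [trace[i] for i in idx]
print(idx, quantities)   # one peak, height taken as the drug quantity
```

A production algorithm would also need baseline correction and peak-area integration, but the thresholded-maximum step is the core of converting a trace to a per-banknote measurement.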

Two questions are considered. The first focuses on the likelihood of the data under each of two propositions: that a set of seized banknotes is associated with drug crime, and that the banknotes are from the general circulation. The aim is to evaluate the associated likelihood ratio. There is evidence of autocorrelation between adjacent banknotes in samples, and two models have been developed to take this into account: an autoregressive process of order one and a hidden Markov model. Non-parametric models using kernel regression are also being developed. These models will be described, with preliminary results presented.
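To illustrate the first of those autocorrelation models, here is a sketch of the conditional log-likelihood of an AR(1) process for quantities on adjacent banknotes: y_t - mu = phi * (y_{t-1} - mu) + eps_t with Gaussian noise. The parameters and the short series of hypothetical log quantities are illustrative only.

```python
# AR(1) log-likelihood sketch for adjacent-banknote autocorrelation.
# Parameters and data are hypothetical.
from math import log, pi

def ar1_loglik(y, mu, phi, sigma):
    """Conditional Gaussian log-likelihood of an AR(1) series,
    conditioning on the first observation for simplicity."""
    ll = 0.0
    for prev, cur in zip(y, y[1:]):
        resid = (cur - mu) - phi * (prev - mu)
        ll += -0.5 * log(2 * pi * sigma ** 2) - resid ** 2 / (2 * sigma ** 2)
    return ll

y = [2.1, 2.3, 2.6, 2.4, 2.8, 3.0]   # hypothetical log quantities per note
print(ar1_loglik(y, mu=2.5, phi=0.6, sigma=0.3))
print(ar1_loglik(y, mu=2.5, phi=0.0, sigma=0.3))  # independence baseline
```

For this (autocorrelated) series the AR(1) fit with phi = 0.6 scores higher than the independence baseline, which is the kind of comparison that feeds into the likelihood ratio.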

The second question involves the calculation of a likelihood ratio where data are available from bundles within samples, so that the within-sample variation may be measured. The propositions are that two samples of banknotes have originated from the same source, and that they have originated from different sources. The use of the above models in evaluating this likelihood ratio will be described. Joint work with Colin Aitken, Richard Sleeman and Jim Carter.