Dr. Jose Pina-Sánchez's Publications
‘Exploring the punitive surge: Crown Court sentencing practices before and after the 2011 English riots’, Criminology and Criminal Justice, 2016,
‘Refining the measurement of consistency in sentencing: A methodological review’, International Journal of Law, Crime and Justice, 44 (2016), 68-87,
© 2015 Elsevier Ltd. The importance of improving consistency in sentencing has been underscored by institutional reforms in a number of jurisdictions. However, the effectiveness of these policy changes has not been clearly measured. To a certain extent this is due to the methodological confusion reflected in the multiplicity of methods that have been used to study consistency in sentencing. Here we review and categorise all of the quantitative methods that have been used to measure consistency in the literature. Our classification differentiates methods based on characteristics such as their robustness, the type of data they require, or whether they are amenable to comparisons across time or jurisdictions. In this way the paper makes a twofold contribution: it simplifies the implementation of future empirical analyses of consistency and facilitates their critical interpretation.
‘Decentralization as a multifaceted concept: A more encompassing index using Bayesian statistics’, Revista Española de Ciencia Política, 1.34 (2014), 9-34,
Repository URL: http://eprints.whiterose.ac.uk/89545/
Most measures of political decentralization seem to capture only specific facets of the concept. In particular, the excessive dependence on fiscal indicators has often been criticized since they seem unable to assess the degree of autonomy exerted by subnational governments. On the other hand, efforts directed at developing more encompassing indexes have had to rely on the aggregation of items developed by experts, a process that is prone to idiosyncratic errors. In this paper I propose the development of a measurement framework using a Bayesian factor analysis model for mixed ordinal and continuous outcomes. This model can efficiently combine multiple measures of decentralization regardless of their level of measurement, and in this way make use of both the rigour of fiscal indicators and the wider coverage of qualitative indicators. Applying this model to a set of 14 indicators I elaborate a more encompassing index of decentralization for 33 OECD countries. In order to illustrate the importance of using non-partial measures of decentralization, I use this index to replicate parts of De Mello and Barenstein (2001) exploratory analysis regarding the relationship between decentralization and corruption, showing that such relationship is practically non-existent.
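The aggregation idea behind the paper's measurement model can be conveyed with a deliberately simple non-Bayesian stand-in: extract the first principal component of the standardised indicators by power iteration. This sketch does not reproduce the paper's handling of mixed ordinal and continuous measurement within a Bayesian factor framework; the indicators and country values below are entirely hypothetical.

```python
from statistics import mean, pstdev

# Hypothetical indicators for 5 countries: fiscal share (continuous),
# policy autonomy (ordinal), subnational elections (binary)
indicators = [
    [0.40, 3, 1],
    [0.10, 1, 0],
    [0.55, 4, 1],
    [0.20, 2, 0],
    [0.35, 3, 1],
]

# Standardise each column so fiscal and qualitative items are comparable
cols = list(zip(*indicators))
z = [[(v - mean(c)) / pstdev(c) for v in c] for c in cols]
rows = list(zip(*z))

# Correlation matrix of the standardised indicators
k, n = len(cols), len(rows)
corr = [[sum(z[i][t] * z[j][t] for t in range(n)) / n for j in range(k)]
        for i in range(k)]

# Power iteration for the leading eigenvector (the indicator loadings)
v = [1.0] * k
for _ in range(100):
    w = [sum(corr[i][j] * v[j] for j in range(k)) for i in range(k)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

# Country scores on the single latent decentralization dimension
scores = [sum(v[j] * r[j] for j in range(k)) for r in rows]
```

With positively correlated indicators, the loadings are all positive and each country's score is a weighted combination of all fourteen-odd items rather than the fiscal measure alone, which is the point of the more encompassing index.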
‘Previous Convictions at Sentencing: Exploring Empirical Trends in the Crown Court’, Criminal Law Review, 2014.1 (2014), 575-588,
© 2015 Thomson Reuters (Professional) UK Limited. Most offenders appearing for sentencing have a criminal record. Despite the prevalence of previous convictions, little is known about their impact on sentence outcomes, and in particular the decision to imprison. How should previous convictions affect sentence outcomes? Competing theoretical models exist. Previous convictions may simply disentitle repeat offenders to first offender mitigation and then fail to aggravate (progressive loss of mitigation). Alternatively, prior convictions may be used to continuously increase the severity of sentence (cumulative sentencing). Until now, due to limitations of the sentencing statistics, it has been impossible to determine which model underpins sentencing practices. The Crown Court Sentencing Survey (CCSS) provides a more adequate source of data. Sentencers complete a return noting the factors taken into account at sentencing, including the number of prior convictions. This article reports new data from the CCSS which demonstrate that for most offences, courts continue to apply the principle of the progressive loss of mitigation. Once this mitigation is lost, there is little further increase in severity as measured by the custody rate, although some variation in the effect of previous convictions does emerge between offences.
‘Enhancing Consistency in Sentencing: Exploring the Effects of Guidelines in England and Wales’, Journal of Quantitative Criminology, 30.4 (2014), 731-748,
DOI: 10.1007/s10940-014-9221-x, Repository URL: http://eprints.whiterose.ac.uk/102906/
© 2014, Springer Science+Business Media New York. Objectives: The development and application of methods to assess consistency in sentencing before and after the 2011 England and Wales assault guideline came into force. Methods: We use the Crown Court Sentencing Survey to compare the goodness of fit of two regression analyses of sentence length on a set of legal factors before and after the assault guideline came into force. We then monitor the dispersion of residuals from these regression models across time. Finally, we compare the variance in sentence length of equivalent types of offences using exact matching. Results: We find that legal factors can explain a greater portion of variability in sentencing after the guideline was implemented. Furthermore, we detect that the unexplained variability in sentencing decreases steadily during 2011, while results from exact matching point to a statistically significant average reduction in the variance of sentence length amongst same types of offences. Conclusions: We demonstrate the relevance of two new methods that can be used to produce more robust assessments regarding the evolution of consistency in sentencing, even in situations when only observational non-hierarchical data are available. The application of these methods showed an improvement in consistency during 2011 in England and Wales, although this positive effect cannot be conclusively ascribed to the implementation of the new assault guideline.
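The exact-matching comparison described in the Methods can be sketched as follows: group cases that share an identical profile of legal factors and compare the within-profile variance of sentence length before and after the guideline. The profiles, factor names, and figures below are invented for illustration and are not CCSS fields.

```python
from collections import defaultdict
from statistics import pvariance

def variance_within_profiles(cases):
    """Group cases by their exact profile of legal factors and return
    the mean within-profile variance of sentence length (in months)."""
    groups = defaultdict(list)
    for profile, months in cases:
        groups[profile].append(months)
    variances = [pvariance(v) for v in groups.values() if len(v) > 1]
    return sum(variances) / len(variances)

# Hypothetical cases: (offence, plea, prior convictions) -> sentence in months
before = [(("ABH", "guilty", 0), 12), (("ABH", "guilty", 0), 20),
          (("GBH", "trial", 3), 40), (("GBH", "trial", 3), 30)]
after  = [(("ABH", "guilty", 0), 14), (("ABH", "guilty", 0), 16),
          (("GBH", "trial", 3), 36), (("GBH", "trial", 3), 34)]

print(variance_within_profiles(before) > variance_within_profiles(after))  # True
```

A fall in within-profile variance is the signature of improved consistency: like cases are being sentenced more alike.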
‘Measurement error in retrospective work histories’, Survey Research Methods, 8.1 (2014), 43-55,
Repository URL: http://eprints.whiterose.ac.uk/89544/
Measurement error in retrospective reports of work status has been difficult to quantify in the past. Issues of confidentiality have made access to datasets linking survey responses to a valid administrative source problematic. This study uses a Swedish register of unemployment as a benchmark against which responses from a survey question are compared and hence the presence of measurement error elucidated. We carry out separate analyses for the different forms that measurement error in retrospective reports of unemployment can take. These are misdates of ends of spells, misclassifications of work status, miscounts of the number of spells of unemployment, misreports of total durations in unemployment, and mismatches of work status in person-day observations. The prevalence of measurement error for different social categories and interview formats is also examined, leading to a better understanding of the error-generating mechanisms that arise when interviewees are asked to produce retrospective reports of work status. We are able to confirm some previously hypothesised error mechanisms - such as 'interference' - but also identify interesting patterns - such as a non-monotonic dependence of recall error on recall time. © European Survey Research Association.
‘Sentence consistency in England and Wales: Evidence from the crown court sentencing survey’, British Journal of Criminology, 53.6 (2013), 1118-1138,
We assess the use of sentencing guidelines for assault issued in England and Wales, and the consistency with which they are applied by judges in the Crown Court. We use data from the Crown Court Sentencing Survey (CCSS), which records data on legal factors considered in the sentencing guidelines. This gives us access to a wide range of explanatory variables, allowing us to produce more robust findings about consistency in sentencing. We first employ a standard regression model to determine how guideline factors affect sentence outcomes empirically. Second, a random slopes multilevel model is used to analyse whether these factors have been consistently applied across different Crown Court centres. Our results point to a substantial degree of consistency in sentencing. © The Author 2013. Published by Oxford University Press on behalf of the Centre for Crime and Justice Studies (ISTD). All rights reserved.
‘Implications of retrospective measurement error in event history analysis’, Metodología de Encuestas, 15 (2013), 5-25,
Repository URL: http://eprints.whiterose.ac.uk/89549/
It is commonly accepted that retrospective questions in surveys pose harder cognitive challenges to interviewees, and therefore yield less precise measures than questions asking about current states. In this paper we evaluate the effect of using data derived from retrospective questions as the response variable in different event history analysis models: an accelerated life Weibull, an accelerated life exponential, a proportional hazards Cox, and a proportional odds logit. The impact of measurement error is assessed by comparing the estimates obtained when the models are specified using durations of unemployment derived from a retrospective question against those obtained using validation data derived from a register of unemployment. Results show large attenuation effects in all the regression coefficients. Furthermore, these effects are relatively similar across models.
‘Paying for the Past: The Role of Previous Convictions at Sentencing in the Crown Court’, in Exploring Sentencing Practice in England and Wales, ed. by J. Roberts ([n.pub.], 2015),
‘Defining and measuring consistency in sentencing’, in Exploring Sentencing Practice in England and Wales ([n.pub.], 2015), 76-91,
An Empirical Analysis of the Determinants of Guilty Plea Discount, (Manchester: Cathie Marsh Centre for Census and Survey Research (CCSR), 2013),
Repository URL: http://eprints.whiterose.ac.uk/89548/
In this report, I assess the application of the 2007 Sentencing Guidelines Council guideline, Reductions for a Guilty Plea, empirically using data collected on the Sentencing Council’s Crown Court Sentencing Survey in the year 2011. I begin by using an exploratory analysis to observe the relationship between the level of discount applied and the stage at which the guilty plea was entered. I then consider the possible impact of other factors taken into account when sentencing on the reduction applied for a guilty plea. For this, I specify different models for discrete data to regress the level of discount on a broad set of explanatory variables. Results point towards a substantial degree of agreement between the recommendations provided in the 2007 guideline and the actual level of discount received by offenders who plead guilty. In particular, the stage based approach recommended in the 2007 guideline was found to be the major factor determining the level of discount applied. However, the results also show there to be a number of departures from the guideline, such as: a) a high proportion of cases where the reduction given was higher than the maximum recommended level of 33%, with these anomalies concentrated in specific Courts; b) the presence of particular aggravating factors, on average, leading to lower levels of discount after controlling for the stage when the guilty plea was entered; and c) the presence of the mitigating factor remorse, on average, having a positive significant effect on the level of discount.
Theses / Dissertations
‘Prevalence, impact, and adjustments of measurement error in retrospective reports of unemployment: an analysis using Swedish administrative data’,
Repository URL: http://eprints.whiterose.ac.uk/89550/
In this thesis I carry out an encompassing analysis of the problem of measurement error in retrospectively collected work histories using data from the “Longitudinal Study of the Unemployed”. This dataset has the unique feature of linking survey responses to a retrospective question on work status to administrative data from the Swedish Register of Unemployment. Under the assumption that the register data is a gold standard I explore three research questions: i) what is the prevalence of and the reasons for measurement error in retrospective reports of unemployment; ii) what are the consequences of using such survey data subject to measurement error in event history analysis; and iii) what are the most effective statistical methods to adjust for such measurement error. Regarding the first question I find substantial measurement error in retrospective reports of unemployment, e.g. only 54% of the subjects studied managed to report the correct number of spells of unemployment experienced in the year prior to the interview. Some reasons behind this problem are clear, e.g. the longer the recall period the higher the prevalence of measurement error. However, some others depend on how measurement error is defined, e.g. women were associated with a higher probability of misclassifying spells of unemployment but not with misdating them. To answer the second question I compare different event history models using duration data from the survey and the register as their response variable. Here I find that the impact of measurement error is very large, attenuating regression estimates by about 90% of their true value, and this impact is fairly consistent regardless of the type of event history model used. In the third part of the analysis I implement different adjustment methods and compare their effectiveness.
Here I note how standard methods that rely on strong assumptions, such as SIMEX or regression calibration, are incapable of dealing with the complexity of the measurement process under analysis. More positive results are obtained through the implementation of ad hoc Bayesian adjustments that account for the different patterns of measurement error using a mixture model.