The Open Access Publisher and Free Library

CRIMINAL JUSTICE

CRIMINAL JUSTICE-CRIMINAL LAW-PROCEDURE-SENTENCING-COURTS

More Criminals, More Crime: Measuring the Public Safety Impact of New York’s 2019 Bail Law

By Jim Quinn  

Since New York State’s 2019 bail reform went into effect, controversy has swirled around the question of its impact on public safety—as well as its broader success in creating a more just and equitable system. The COVID-19 pandemic (which hit three months after the bail reform’s effective date), the upheaval following the killing of George Floyd, and the subsequent enactment of various police and criminal justice reforms are confounding factors that make assessing the specific effects of the 2019 bail reform particularly complex.

This paper attempts to give the public a better sense of the risks of this policy shift and the detrimental effect that the changes have had on public safety. First, I will lay out the content of the bail reform and will measure pertinent impacts on crime and re-offending rates. Then I will review changes made in the 2020 and 2022 amendments. I will look at the push for supervised release and closing Rikers Island and how those initiatives fed into the momentum behind these laws. Finally, I will propose recommendations to improve bail reform’s impact on public safety, which include:

  1. Allow judges to set bail, remand, release on recognizance (ROR), or conditions of release for any crime and any defendant. There should be a presumption of release for misdemeanors and nonviolent felonies, which could be rebutted by the defendant’s prior record or other factors indicating that the defendant is a flight risk. There should be a presumption of bail, remand, or nonmonetary conditions for defendants charged with violent felonies or weapons offenses; this presumption could also be rebutted by evidence of the defendant’s roots in the community, lack of criminal record, and similar factors.

New York: The Manhattan Institute, 2022. 29p.

How to Use Administrative Data to Measure and Interpret Racial, Ethnic, and Gender Disparities in Military Justice Outcomes

By Amanda Kraus, Elizabeth Clelan, Heather Wolters, Patty Kannapel

This study was sponsored by the Office of the Executive Director for Force Resiliency within the Office of the Under Secretary of Defense for Personnel and Readiness to address two taskings from the FY 2020 National Defense Authorization Act (NDAA):

  1. Establish criteria for determining when to review data indicating that racial, ethnic, and gender (REG) disparities in military justice outcomes may exist and provide guidance for how to conduct that review.

  2. Conduct an evaluation to identify the causes of identified REG disparities and take steps to address them.

To address the first tasking, the study team combined emerging best practices from the civilian criminal justice system (CCJS) with a review of the military justice system (MJS) to create guidance for data collection, analysis, and reporting that will allow the services to use administrative data to conduct ongoing assessments of how members of all REG groups are treated within the MJS. To address the second tasking, the team used multivariate statistical techniques to analyze available data with the goal of measuring REG disparities in MJS outcomes, holding constant other relevant factors. This report addresses the first tasking; the second tasking is addressed in a companion report titled Exploring Racial, Ethnic, and Gender Disparities in the Military Justice System.

GUIDING CONCEPTS

To guide our approach to addressing the NDAA tasking, we drew on four concepts related to justice and bias and considered their implications for data collection and analysis.

DISTRIBUTIVE VERSUS PROCEDURAL JUSTICE


Distributive justice relates to the distribution of outcomes within a community. In the MJS context, distributive justice relates directly to REG outcome disparities and suggests that the services should collect data to determine whether people who are the same except for their REG characteristics experience the same MJS outcomes. Procedural justice relates to the system that generates the outcomes. Procedural justice is defined in terms of the rules of the system and the extent to which they are applied consistently and impartially and communicated clearly. To assess procedural justice in the MJS, it is necessary to collect and analyze data on underlying processes, not just final outcomes. It is possible for procedural injustices to occur without generating outcome disparities, and it is possible for a system to be procedurally fair but to generate different outcomes for members of different REG groups. Thus, on their own, average outcome disparities are not complete indicators of bias. 

INDIVIDUAL VERSUS INSTITUTIONAL BIAS


Generally, bias is defined as prejudice for or against one person or group of people, especially in a way considered to be unfair. To cause MJS outcome disparities, such prejudices must be turned into biased actions, which can occur at the individual or institutional level. In the context of the MJS, individual bias is exercised by individual actors within the system through their individual decision-making discretion. It can be both explicit and implicit. Because individual bias is exercised through discretionary decision-making, finding evidence of it in data calls for identifying places in the system where individual discretion matters most to see if this is where disparities occur. Institutional bias is present when the policies, procedures, and practices that define a system consistently create positive or negative outcomes based on an individual’s REG status. It can be intentional or unintentional. To identify the presence of institutional bias in the MJS, it is necessary to collect and analyze data that reflect outcomes that are guided by regulation or policy.

CONCERNS ABOUT BIAS IN THE MJS

Bias in the MJS—both real and perceived—can decrease the effectiveness of the MJS and thereby degrade good order and discipline and reduce warfighting readiness. There are widespread and persistent perceptions that the MJS is biased, and these perceptions exist both inside the military, especially among members of color, and outside the military, among the American public and members of Congress.

The broader social context in which concerns about bias are formed matters. Although the services have their own justice system and control over how that system is implemented, their members are drawn from the American population, and public support is necessary for continued recruiting and funding. Thus, concerns about REG bias in the MJS will ebb and flow as they ebb and flow in the national culture, and they may arise from within the military or from outside it.

The quality and presentation of data and data analysis also matter. Over the years, analyses of MJS data have done little to alleviate concerns about bias. Given the persistence of these concerns, it makes sense to create a robust system for data collection, rigorous analysis, and appropriate reporting to enable detailed assessments of MJS outcomes and the policies and practices that produce them.

THE MJS

To identify points in the MJS where institutions and individuals apply discretion, as well as important MJS outcomes to study, we created a chart that maps how a case flows through four phases of the MJS—incident processing, pre-trial/pre-hearing, adjudication and sentencing, and post-trial/post-hearing—and identified key steps in each phase.

A main source of institutional discretion in the MJS lies outside the system. Given that servicemembers can enter the system if they are accused of disobeying a regulation, institutional choices about the nature and design of regulations will affect MJS outcomes. Individual discretion is more likely to be applied within the MJS, at different points by different actors. The most individual discretion rests with commanding officers during the incident processing phase and, in later phases, along the disciplinary path and the summary court-martial branch of the judicial path. Once a case is referred to special or general court-martial, discretion is spread across more people. Actors with significant discretionary power on the judicial path include convening authorities, who are military commanders with little or no legal training, and judge advocates, who are legal professionals serving as military judges and trial and defense counsels.

As a whole, the flowchart highlights the importance of considering the full range of outcomes because movement through the system is determined by the outcome at each successive step along the relevant path. The steps within each phase identify the important outcomes.

ADDRESSING MJS BIAS WITH ADMINISTRATIVE DATA

The primary benefit of using administrative data to measure REG disparities in MJS outcomes is that it creates an evidence-based picture of MJS outcomes that distinguishes between isolated incidents and widespread problems. To generate meaningful measures of these disparities, it is necessary to use multivariate analytical techniques that allow researchers to measure REG outcome disparities while accounting for other factors that affect MJS outcomes. The more relevant factors the model includes, the closer it comes to holding “all else” equal. If REG disparities still exist after accounting for other factors, it is likely that the outcome differences are directly related to REG. Such a finding does not prove that bias exists, but it takes the other factors off the table.

The multivariate techniques we identified range in technical sophistication and resource requirements. Disaggregating raw data by multiple outcomes and factors is the easiest of the four approaches we identified, and it can be done by agency staff. While not as conclusive as approaches that control for multiple factors simultaneously, disaggregation provides a more complete picture than bivariate analysis and helps agency staff make informed decisions about where to focus more technical analyses and scarce analytical resources. Used together and on a regular basis, disaggregation and the more complicated approaches provide the basis for ongoing monitoring of REG outcomes to identify and address disparities before they become persistent or systemic. Existing MJS and other reporting requirements provide a natural schedule for conducting assessments and reporting their results.
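The point that disaggregation gives a more complete picture than a raw bivariate comparison can be sketched in a few lines of code. The data below are entirely synthetic and illustrative (the groups, offense categories, and counts are invented, not drawn from the report): a large raw disparity in a punitive-outcome rate between two groups disappears once records are stratified by offense severity, showing why an average disparity alone is not a complete indicator of bias.

```python
# Synthetic illustration: why disaggregating outcome data by a relevant
# control factor matters before interpreting a raw group-level disparity.
from collections import defaultdict

# Each record: (group, offense_severity, punitive_outcome) -- invented data.
records = (
    [("A", "minor", 0)] * 80 + [("A", "minor", 1)] * 20 +      # A: mostly minor
    [("A", "serious", 0)] * 5 + [("A", "serious", 1)] * 15 +
    [("B", "minor", 0)] * 16 + [("B", "minor", 1)] * 4 +       # B: mostly serious
    [("B", "serious", 0)] * 25 + [("B", "serious", 1)] * 75
)

def rate(rows):
    """Share of records in `rows` with a punitive outcome."""
    return sum(outcome for *_, outcome in rows) / len(rows)

by_group = defaultdict(list)
by_stratum = defaultdict(list)
for group, severity, outcome in records:
    by_group[group].append((group, severity, outcome))
    by_stratum[(group, severity)].append((group, severity, outcome))

# Raw (bivariate) comparison suggests a large disparity...
for g in ("A", "B"):
    print(f"group {g} raw punitive rate: {rate(by_group[g]):.2f}")
# group A raw punitive rate: 0.29
# group B raw punitive rate: 0.66

# ...but within each severity stratum the rates are identical.
for key in sorted(by_stratum):
    print(f"group {key[0]}, {key[1]} offenses: {rate(by_stratum[key]):.2f}")
# A/minor 0.20, B/minor 0.20, A/serious 0.75, B/serious 0.75
```

Here the raw gap is driven entirely by the groups' different offense mixes, which is exactly the kind of confound that stratification or a multivariate model is meant to account for.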

Application of valid multivariate techniques requires detailed data. Current Department of Defense guidance directs the services to collect nearly all the desired data elements, so if the guidance is implemented, they should be well positioned to conduct meaningful assessments of MJS outcomes. There are two caveats to this conclusion. First, there may be gaps in the information on investigations and disciplinary outcomes. Second, the services may not have the resources to implement the data collection guidance; it may be an unfunded mandate.

Finally, the tasking from the FY 2020 NDAA asked for criteria to determine when to further review data indicating that REG disparities in MJS outcomes may exist. There is no scientific or social consensus about which criterion to use or what level of disparity equates to bias. Therefore, the services should work with internal and external stakeholders to select multiple criteria based on the absolute size of a disparity, its statistical significance, and the number of people it affects.
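A multi-criteria screening rule of the kind described above could take a form like the following. This is a minimal sketch with hypothetical thresholds chosen for illustration (a 5-percentage-point gap, the conventional 1.96 z-critical value, and a minimum of 30 affected people); the report does not prescribe these values, only that criteria should reflect absolute size, statistical significance, and the number of people affected.

```python
# Hypothetical screening rule: flag a measured disparity for further review
# only when it is (1) large in absolute terms, (2) statistically significant,
# and (3) affects enough people. Thresholds are illustrative assumptions.
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z statistic for rates x1/n1 vs x2/n2 (pooled variance)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def flag_for_review(x1, n1, x2, n2,
                    min_gap=0.05, z_crit=1.96, min_affected=30):
    """Apply all three criteria; x = outcome count, n = group size."""
    gap = abs(x1 / n1 - x2 / n2)
    significant = abs(two_prop_z(x1, n1, x2, n2)) >= z_crit
    affected = x1 + x2  # people who actually received the outcome
    return gap >= min_gap and significant and affected >= min_affected

# Large, significant gap affecting 90 people -> review warranted.
print(flag_for_review(30, 200, 60, 200))   # True
# Same 15-point gap, but too few cases to be significant -> no flag.
print(flag_for_review(3, 20, 6, 20))       # False
```

Requiring all three criteria jointly avoids both over-reacting to noise in small samples and ignoring modest but pervasive gaps, which is the trade-off the stakeholder-selected criteria are meant to manage.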

RECOMMENDATIONS TO ADDRESS THE NDAA TASKING

We recommend that the services not conduct detailed assessments of MJS data only in response to disparities flagged by bivariate metrics. Instead, assessments should be conducted regularly, using the blueprint provided by lessons learned from the CCJS:

Step 1. Work with internal and external stakeholders to identify issues of concern, set priorities, and develop decision-making criteria

Step 2. Create an analysis plan based on the concerns and priorities identified in Step 1

Step 3. Collect data on MJS outcomes (including nonjudicial outcomes) and relevant control variables in easy-to-use electronic records management systems and ensure they are regularly updated

Step 4. Execute the analysis plan from Step 2 using appropriate quantitative and/or qualitative methods

Step 5. Regularly and transparently report assessment results to all the stakeholders as appropriate

Step 6. Make policy decisions about how to address REG outcome disparities based on the established priorities and criteria

Arlington, VA: CNA, 2023. 116p.