How to Use Administrative Data to Measure and Interpret Racial, Ethnic, and Gender Disparities in Military Justice Outcomes

By Amanda Kraus, Elizabeth Clelan, Heather Wolters, Patty Kannapel

This study was sponsored by the Office of the Executive Director for Force Resiliency within the Office of the Under Secretary of Defense for Personnel and Readiness to address two taskings from the FY 2020 National Defense Authorization Act (NDAA):

  1. Establish criteria for determining when to review data indicating that racial, ethnic, and gender (REG) disparities in military justice outcomes may exist and provide guidance for how to conduct that review.

  2. Conduct an evaluation to identify the causes of identified REG disparities and take steps to address them.

To address the first tasking, the study team combined emerging best practices from the civilian criminal justice system (CCJS) with a review of the military justice system (MJS) to create guidance for data collection, analysis, and reporting that will allow the services to use administrative data to conduct ongoing assessments of how members of all REG groups are treated within the MJS. To address the second tasking, the team used multivariate statistical techniques to analyze available data, with the goal of measuring REG disparities in MJS outcomes while holding constant other relevant factors. This report addresses the first tasking; the second tasking is addressed in a companion report titled Exploring Racial, Ethnic, and Gender Disparities in the Military Justice System.

GUIDING CONCEPTS

To guide our approach to addressing the NDAA tasking, we drew on four concepts related to justice and bias and considered their implications for data collection and analysis.

DISTRIBUTIVE VERSUS PROCEDURAL JUSTICE


Distributive justice relates to the distribution of outcomes within a community. In the MJS context, distributive justice relates directly to REG outcome disparities and suggests that the services should collect data to determine whether people who are the same except for their REG characteristics experience the same MJS outcomes. Procedural justice relates to the system that generates the outcomes. Procedural justice is defined in terms of the rules of the system and the extent to which they are applied consistently and impartially and communicated clearly. To assess procedural justice in the MJS, it is necessary to collect and analyze data on underlying processes, not just final outcomes. It is possible for procedural injustices to occur without generating outcome disparities, and it is possible for a system to be procedurally fair but to generate different outcomes for members of different REG groups. Thus, on their own, average outcome disparities are not complete indicators of bias. 

INDIVIDUAL VERSUS INSTITUTIONAL BIAS


Generally, bias is defined as prejudice for or against one person or group of people, especially in a way considered to be unfair. To cause MJS outcome disparities, such prejudices must be turned into biased actions, which can occur at the individual or institutional level. In the context of the MJS, individual bias is exercised by individual actors within the system through their individual decision-making discretion. It can be explicit or implicit. Because individual bias is exercised through discretionary decision-making, finding evidence of it in data calls for identifying places in the system where individual discretion matters most to see if this is where disparities occur. Institutional bias is present when the policies, procedures, and practices that define a system consistently create positive or negative outcomes based on an individual's REG status. It can be intentional or unintentional. To identify the presence of institutional bias in the MJS, it is necessary to collect and analyze data that reflect outcomes that are guided by regulation or policy.

CONCERNS ABOUT BIAS IN THE MJS

Bias in the MJS—both real and perceived—can decrease the effectiveness of the MJS and thereby degrade good order and discipline and reduce warfighting readiness. There are widespread and persistent perceptions that the MJS is biased, and these perceptions exist both inside the military, especially among members of color, and outside the military, among the American public and members of Congress.

The broader social context in which concerns about bias form also matters. Although the services have their own justice system and control over how that system is implemented, their members are drawn from the American population, and public support is necessary for continued recruiting and funding. Thus, concerns about REG bias in the MJS will ebb and flow with those in the national culture, and they may arise from inside or outside the military.

The quality and presentation of data and data analysis also matter. Over the years, analyses of MJS data have done little to alleviate concerns about bias. Given the persistence of these concerns, it makes sense to create a robust system for data collection, rigorous analysis, and appropriate reporting to enable detailed assessments of MJS outcomes and the policies and practices that produce them.

THE MJS

To identify points in the MJS where institutions and individuals apply discretion, as well as important MJS outcomes to study, we created a chart that maps how a case flows through four phases of the MJS—incident processing, pre-trial/pre-hearing, adjudication and sentencing, and post-trial/post-hearing—and identified key steps in each phase.

A main source of institutional discretion in the MJS lies outside the system. Given that servicemembers can enter the system if they are accused of disobeying a regulation, institutional choices about the nature and design of regulations will affect MJS outcomes. Individual discretion is more likely to be applied within the MJS, at different points by different actors. The most individual discretion rests with commanding officers during the incident processing phase and, in later phases, along the disciplinary path and the summary court-martial branch of the judicial path. Once a case is referred to special or general court-martial, discretion is spread across more people. Actors with significant discretionary power on the judicial path include convening authorities, who are military commanders with little or no legal training, and judge advocates, who are legal professionals serving as military judges and trial and defense counsels.

As a whole, the flowchart highlights the importance of considering the full range of outcomes because movement through the system is determined by the outcome at each successive step along the relevant path. The steps within each phase identify the important outcomes.

ADDRESSING MJS BIAS WITH ADMINISTRATIVE DATA

The primary benefit of using administrative data to measure REG disparities in MJS outcomes is that it creates an evidence-based picture of MJS outcomes that distinguishes between isolated incidents and widespread problems. To generate meaningful measures of these disparities, it is necessary to use multivariate analytical techniques that allow researchers to measure REG outcome disparities while accounting for other factors that affect MJS outcomes. The more relevant factors the model includes, the more nearly it holds "all else" equal. If REG disparities persist after accounting for other factors, the outcome differences are likely directly related to REG. Such a finding does not prove that bias exists, but it rules the other factors out as explanations.

The multivariate techniques we identified range in technical sophistication and resource requirements. Disaggregating raw data by multiple outcomes and factors is the easiest of the four approaches we identified, and it can be done by agency staff. While not as conclusive as approaches that control for multiple factors simultaneously, disaggregation provides a more complete picture than bivariate analysis and helps agency staff make informed decisions about where to focus more technical analyses and scarce analytical resources. Used together and on a regular basis, disaggregation and the more sophisticated approaches provide the basis for ongoing monitoring of REG outcomes, so that disparities can be identified and addressed before they become persistent or systemic. Existing MJS and other reporting requirements provide a natural schedule for conducting assessments and reporting their results.
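
As a minimal illustration of the disaggregation approach, the Python sketch below tabulates an outcome rate by REG group within one additional factor, using a hypothetical incident-level extract. The column names (reg_group, paygrade, referred_njp) are assumptions for illustration, not fields from the services' actual data systems.

    import pandas as pd

    # Hypothetical incident-level records; all column names are illustrative.
    df = pd.DataFrame({
        "reg_group":    ["Black", "White", "Black", "White", "Black", "White"],
        "paygrade":     ["E3", "E3", "E5", "E5", "E3", "E5"],
        "referred_njp": [1, 0, 0, 1, 1, 0],
    })

    # Disaggregate: NJP referral rate by REG group within each paygrade,
    # rather than a single service-wide (bivariate) comparison.
    rates = (
        df.groupby(["paygrade", "reg_group"])["referred_njp"]
          .agg(rate="mean", n="size")
          .reset_index()
    )
    print(rates)

Tabulations like this can be produced by agency staff with standard tools and refreshed on the same schedule as existing MJS reporting requirements.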

Application of valid multivariate techniques requires detailed data. Current Department of Defense guidance directs the services to collect nearly all the desired data elements, so if the guidance is implemented, the services should be well positioned to conduct meaningful assessments of MJS outcomes. There are two caveats to this conclusion. First, there may be gaps in the information on investigations and disciplinary outcomes. Second, the services may not have the resources to implement the data collection guidance; it may be an unfunded mandate.

Finally, the tasking from the FY 2020 NDAA asked for criteria to determine when to further review data indicating that REG disparities in MJS outcomes may exist. There is no scientific or social consensus about which criterion to use or what level of disparity equates to bias. Therefore, the services should work with internal and external stakeholders to select multiple criteria based on the absolute size of a disparity, its statistical significance, and the number of people it affects.
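
As a sketch of how such multiple criteria might operate together, the Python function below flags a measured disparity for further review only when it clears thresholds on all three dimensions named above; the threshold values and field names are hypothetical placeholders for criteria the services would set with stakeholders.

    from dataclasses import dataclass

    @dataclass
    class Disparity:
        gap: float       # absolute difference in outcome rates between groups
        p_value: float   # statistical significance of the estimated gap
        n_affected: int  # number of servicemembers experiencing the outcome

    # Placeholder thresholds; actual values are a stakeholder decision.
    def needs_review(d: Disparity, min_gap: float = 0.05,
                     alpha: float = 0.05, min_affected: int = 30) -> bool:
        return (d.gap >= min_gap
                and d.p_value <= alpha
                and d.n_affected >= min_affected)

    print(needs_review(Disparity(gap=0.08, p_value=0.01, n_affected=120)))  # True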

RECOMMENDATIONS TO ADDRESS THE NDAA TASKING

We recommend that the services not conduct detailed assessments of MJS data only in response to disparities flagged by bivariate metrics. Instead, assessments should be conducted regularly, using the blueprint provided by lessons learned from the CCJS:

Step 1. Work with internal and external stakeholders to identify issues of concern, set priorities, and develop decision-making criteria

Step 2. Create an analysis plan based on the concerns and priorities identified in Step 1

Step 3. Collect data on MJS outcomes (including nonjudicial outcomes) and relevant control variables in easy-to-use electronic records management systems and ensure they are regularly updated

Step 4. Execute the analysis plan from Step 2 using appropriate quantitative and/or qualitative methods

Step 5. Regularly and transparently report assessment results to all the stakeholders as appropriate

Step 6. Make policy decisions about how to address REG outcome disparities based on the established priorities and criteria

Arlington, VA: CNA, 2023. 116p.

Exploring Racial, Ethnic, and Gender Disparities in the Military Justice System

By Amanda Kraus, Elizabeth Clelan, Dan Leeds, Sarah Wilson with Cathy Hiatt, Jared Huff, Dave Reese

Section 540I of the Fiscal Year 2020 National Defense Authorization Act (FY 2020 NDAA) required the secretary of defense, in consultation with the secretaries of the military departments and the secretary of homeland security, to:

  • Issue guidance that establishes criteria to determine when data indicating possible racial, ethnic, or gender (REG) disparities in the military justice process should be further reviewed and describes how such a review should be conducted.

  • Conduct an evaluation to identify the causes of any REG disparities identified in the military justice system (MJS) and take steps to address the identified causes, as appropriate.

The Office of the Executive Director for Force Resiliency within the Office of the Under Secretary of Defense for Personnel and Readiness asked CNA to provide analytic support to fulfill these requirements. CNA addressed four research questions:

  1. What data elements should be tracked, and what disparity indicators should the Department of Defense (DOD) use to monitor trends in MJS outcomes and take appropriate policy actions?

  2. How much of the required data currently exist and to what extent are they standardized across the services?

  3. Do the existing MJS data reveal any differences in military justice outcomes by REG?

  4. Can we identify any specific factors (including bias) that contributed to observed outcome disparities?


The results for the first research question, which support fulfilling the first NDAA requirement, are reported in the document titled How to Use Administrative Data to Measure and Interpret Racial, Ethnic, and Gender Disparities in Military Justice Outcomes. This report describes how we addressed the remaining three research questions to support fulfilling the second NDAA requirement.

APPROACH

To manage the scope of this effort within the study resources, we limited our analyses to the regular, active duty enlisted forces of each service. To execute the analyses, we constructed multiple datasets for each service, with each dataset comprising records of MJS incidents reported and resolved over the seven years from fiscal year (FY) 2014 through FY 2020. Each incident record includes descriptive features of the incident, including the REG of the accused servicemember. The constructed datasets follow each incident record through various steps in the MJS, but no dataset follows incidents seamlessly from initial reporting to final resolution, and the datasets vary in level of detail. Thus, our dataset construction also served as a check on data completeness and allowed us to determine which REG disparities in incident outcomes can currently be tracked for each service.

We then applied quantitative methods (primarily regression analysis) to calculate unconditional and conditional service-specific REG disparity measures for as many MJS outcomes as the data allowed, controlling for other descriptive features of the offender and the incident. Unconditional disparities are measured for the first-observed outcome in each dataset and are based on comparisons between those experiencing the outcome and those in the service’s entire enlisted population. Conditional disparities are measured for outcomes that occur later in the MJS process and are based on comparisons between the servicemembers who experienced the outcome and those who experienced the outcome associated with the previous observed step in the MJS process. For example, for some services, we calculate REG disparities in guilty findings conditional on having completed nonjudicial punishment (NJP) or court-martial (CM) proceedings. This allows us to determine accurately where REG disparities first appear in the MJS and how long they persist.
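
The distinction between unconditional and conditional disparity measures can be sketched in code. The Python example below uses synthetic data and illustrates the general regression approach only, not the study's actual specification; every variable name, control, and rate is an assumption.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic enlisted population; names and rates are illustrative only.
    df = pd.DataFrame({
        "race":     rng.choice(["Black", "White"], size=n),
        "gender":   rng.choice(["F", "M"], size=n, p=[0.2, 0.8]),
        "paygrade": rng.choice(["E1-E3", "E4-E6"], size=n),
    })
    df["entered_mjs"] = rng.binomial(1, 0.10, size=n)

    # Unconditional disparity: entry into the MJS, measured against the
    # service's entire enlisted population.
    uncond = smf.logit("entered_mjs ~ C(race) + C(gender) + C(paygrade)",
                       data=df).fit(disp=False)

    # Conditional disparity: guilty finding, measured only among those who
    # experienced the previous observed step (here, entering the MJS).
    entered = df[df["entered_mjs"] == 1].copy()
    entered["found_guilty"] = rng.binomial(1, 0.5, size=len(entered))
    cond = smf.logit("found_guilty ~ C(race) + C(gender) + C(paygrade)",
                     data=entered).fit(disp=False)

    print(uncond.params)  # REG coefficients: unconditional disparities
    print(cond.params)    # REG coefficients: disparities conditional on entry

Comparing the two sets of REG coefficients shows where in the process a disparity first appears and whether it persists at later, conditional steps.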

RESULTS

The report presents many detailed results. Here, we summarize them by answering the research questions they address.

Research question #2: How much of the required data currently exist and to what extent are they standardized across the services?

Most of the MJS data exist, and the services generally collect the same data elements, but the way the data are collected and stored produces data elements and structures that do not always support quantitative analysis and are not consistent across services. Specifically, despite recent service efforts to improve data collection and storage, the data are still stored in multiple data systems across multiple commands within each service. Thus, it remains cumbersome to follow incidents through the MJS and to prepare the data necessary to compute REG disparities. This, in turn, limits REG disparity analysis for all MJS incidents and means that the set of analyzable outcomes varies by service.

Research question #3: Do the existing MJS data reveal any differences in military justice outcomes by REG?

Our data analysis confirms that there were significant racial and gender disparities in MJS outcomes during the study period.

Across services and outcomes, we found positive racial disparities: in every service, Black enlisted personnel were more likely than White enlisted personnel to be investigated, be involved in NJP in some way, and be involved in CMs in some way, even after controlling for the other factors included in the regression models. Yet, conditional on a case progressing far enough in the MJS to have an adjudicated outcome, Black enlisted personnel were no more likely—and, in many cases, were less likely—than their White counterparts to be found guilty.

In contrast, across services and outcomes, we found negative gender disparities: in every service, female enlisted personnel were less likely than male enlisted personnel to enter the MJS and, conditional on the case progressing to an adjudication point, they were less likely to be found guilty.

Finally, we found few significant ethnic disparities in MJS outcomes. Across services and for most outcomes, Hispanic and non-Hispanic enlisted personnel experienced the modeled outcomes at similar rates.

Research question #4: Can we identify any specific factors (including bias) that contributed to observed outcome disparities?

It is impossible to determine definitively whether bias exists in the MJS solely based on statistical analysis of administrative data records such as those we used in this study. The analysis did, however, allow us to draw two sets of conclusions regarding causes of MJS disparities.

First, controlling for offender-, incident-, and MJS process-related factors did not eliminate REG disparities, and no specific factor emerged as a leading determinant of MJS disparities. Thus, bias remains on the table as a potential cause.

Second, by using the data to show where in the MJS disparities occur, we provide information to help the services decide where to investigate further. Specifically, the largest positive racial disparities were associated with the first-observed outcomes. This suggests that it is important to get more clarity on how and why Black enlisted servicemembers enter the MJS. It would be especially valuable to better understand how outcomes differ depending on whether the initial investigation is conducted by a professional military law enforcement agency (LEA) or by the command, how commanding officers (COs) make their disposition decisions, and the relative strengths of cases brought against Black versus White servicemembers.


RECOMMENDATIONS RELATED TO DATA COLLECTION AND ANALYSIS

We make the following recommendations to improve data collection and analytical processes:

  • Provide the services with sufficient funding and support to ensure that MJS incident and case data are collected, stored, and made usable for conditional REG disparity analysis at each step in the MJS.

  • For future data assessments, follow the two key steps recommended in the companion document: support service-specific studies and provide the time and structure for effective collaboration between researchers and MJS experts in each service.

  • Continue efforts to collect complete NJP information.

  • Include common case control numbers in all MJS data systems so that datasets associated with different parts of the MJS can be merged and cases can be followed from investigation through initial disposition to final resolution (a merge sketch follows this list).

  • Populate variables related to offender characteristics, especially REG, by pulling data from authoritative personnel records.

  • Ensure that all relevant dates are populated.

  • Define all data fields to include all potential outcomes or values, including indicators that a variable is not applicable for a given incident or that the incident has not yet proceeded far enough through the MJS for the variable to apply.

  • Use dropdown menus to minimize data error and inconsistency due to hand entry.
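
To illustrate the common case control number recommendation, the Python sketch below joins hypothetical extracts from three separate MJS data systems on a shared identifier; the table names, column names, and values (case_id, investigating_org, and so on) are assumptions for illustration, not the services' actual schemas.

    import pandas as pd

    # Hypothetical extracts from three separate MJS data systems.
    investigations = pd.DataFrame({"case_id": [101, 102, 103],
                                   "investigating_org": ["LEA", "Command", "LEA"]})
    dispositions   = pd.DataFrame({"case_id": [101, 102],
                                   "disposition": ["NJP", "Court-martial"]})
    courts_martial = pd.DataFrame({"case_id": [102],
                                   "finding": ["Guilty"]})

    # A shared case control number lets each case be followed from
    # investigation through initial disposition to final resolution;
    # left joins keep cases that have not yet reached later steps.
    cases = (investigations
             .merge(dispositions, on="case_id", how="left")
             .merge(courts_martial, on="case_id", how="left"))
    print(cases)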

RECOMMENDATIONS RELATED TO REG DISPARITIES

To address the identified MJS outcome disparities, we make the following recommendations, which range from specific to general:

  • Seek to address disparities, not bias per se. As reported in the companion document, regardless of their causes, disparities may create perceptions of bias, and perceptions of bias degrade not only the effectiveness of the MJS but also readiness.

  • Begin by studying how outcomes differ depending on whether the initial investigation is conducted by a professional military LEA or by the command, how COs make their disposition decisions, and the relative strengths of cases brought against Black versus White servicemembers.

  • Follow the additional steps recommended in the companion document. Specifically, conduct assessments and report results on a regular basis; do not wait until negative publicity occurs, and do not respond only to disparities identified in raw data.

  • Develop procedures and systems for holding leaders accountable for the proper use of discretion across the full range of MJS outcomes. Discretion is a necessary part of law enforcement and justice, but it is also where bias (implicit or explicit) can enter. It is leadership's job to think more broadly about the role of discretion in the MJS.

Arlington, VA: CNA, 2023. 158p.