Examining the Black Box: A Formative and Evaluability Assessment of Cross-Sectoral Approaches for Intimate Partner and Sexual Violence
By Cynthia Fraga Rizo and Tonya Van Deinse
Intimate partner violence (IPV), defined as intentional physical or nonphysical violence between current or former intimate partners, and sexual violence (SV), defined as non-consensual sexual activity, are pervasive and serious criminal legal system and public health problems in the United States (Centers for Disease Control and Prevention [CDC], 2017; CDC, 2019; Smith et al., 2018). Survivors of IPV and SV bear the burden of numerous deleterious short- and long-term consequences. To address their myriad service needs, survivors must navigate multiple systems, organizations, and professionals. The complexity of navigating multiple service sectors means that IPV/SV survivors often do not receive help at the time services are most needed. Recognizing this barrier, IPV/SV service providers, including advocates, criminal legal system professionals, and healthcare providers, have become increasingly interested in using cross-sectoral approaches (CSA) to coordinate service delivery to IPV/SV survivors (Gwinn et al., 2007). Family Justice Centers (FJC) and Multi-Agency Model Centers (MAMC) are two commonly implemented CSA models (Alliance for Hope International, 2024; Rizo et al., 2022; Shorey et al., 2014; Simmons et al., 2016). A key underlying assumption of FJCs and MAMCs is that co-location, collaboration, and coordination of services across multiple providers and disciplines will increase survivors’ access to services and ultimately lead to better outcomes. However, limited research exists on the implementation and effectiveness of these co-located models.

To address these gaps, the research team conducted an evaluability assessment and formative evaluation of IPV/SV CSAs, with a focus on the similarities and differences across co-located models. The project comprised two phases:

• Phase 1: Evaluability assessment of IPV/SV co-located CSAs.
• Phase 2: Formative evaluation of IPV/SV co-located CSAs.

The project was conducted in North Carolina, with eight co-located centers participating in the evaluability assessment and six participating in the formative evaluation.
Approach

The evaluability assessment was guided by the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework (Aarons et al., 2011) and followed the four steps outlined in Trevisan and Walser’s (2014) evaluability assessment model: (1) focus the assessment, (2) develop the program theory and logic, (3) gather feedback, and (4) apply the assessment findings. Prior to developing the proposal and launching the project, our team worked with a group of statewide leaders to determine the focus of the assessment (e.g., goals, objectives, research questions). The research team then engaged in three primary data collection activities (document review, affiliate interviews, and client-survivor interviews) to document the program theory and logic model of co-located service models and to identify promising strategies for evaluating co-located IPV/SV service models. In total, the team reviewed 199 documents and conducted interviews with 58 affiliates and 30 client-survivors. Following these activities, the research team sought feedback from our Expert Advisory Group (EAG) and partnering sites and used the evaluability assessment findings to develop practice and research materials.

The formative evaluation comprised three components: a process evaluation focused on implementation, a client outcome evaluation, and an assessment of the evaluation’s overall feasibility. The implementation evaluation involved gathering four types of data:

• aggregate annual programmatic data from six partnering sites;
• client-level service need data (n = 764 completed service navigation logs);
• staff collaboration survey data (n = 126); and
• adaptive fidelity self-assessment data (n = 11).

The outcome evaluation involved collecting survey data from clients at three time points (intake/baseline: n = 41; 3-month follow-up: n = 28; 6-month follow-up: n = 24). The feasibility assessment drew on focus group data with leaders and key contacts at partnering centers (n = 12) to explore their perspectives on the overall evaluation and specific research activities.
Chapel Hill, NC: University of North Carolina at Chapel Hill, 2024. 146p.