Open Access Publisher and Free Library

SOCIAL SCIENCES


Posts tagged Bias
Defining and Identifying Hate Motives: Bias Indicators For The Australian Context

By Matteo Vergani, Angelique Stefanopoulos, Alexandra Lee, Haily Tran, Imogen Richards, Dan Goodhardt, Greg Barton

Bias indicators (that is, facts, circumstances, or patterns that suggest an act was motivated in whole or in part by bias) can be a useful tool for stakeholders working to tackle hate crimes. Government and non-government agencies can use them to improve and standardise data collection on hate crimes, which can have a cascade of positive effects. For example, they can help demonstrate in court the prejudice motivation of a crime; this is often hard in Australia, because the legislation sets a very high threshold for proving hateful motivation. They can also improve the precision of measurements of the prevalence of hate crimes in communities, which is necessary for planning appropriate mitigation policies and programmes and for assessing their impact. Bias indicators can also help non-government organisations ensure that their data collection and research are reliable, consistent, and a powerful tool for advocacy and education.

We acknowledge that bias indicators can be misused. Our lists are not to be read as exhaustive, and users should treat them as examples only. Incidents can present bias indicators from multiple lists, so coders should not restrict themselves to coding an incident as targeting one identity only. Importantly, our bias indicator lists should not be used by practitioners to assess whether an incident is bias motivated. The absence of bias indicators does not mean that an incident is not hate motivated, if a victim or a witness perceives that there was a prejudice motivation. Conversely, the presence of a bias indicator does not necessarily demonstrate that an incident is bias motivated (as the term 'indicator' implies). Ultimately, a judge will make this decision.
In the Australian context, we propose that bias indicators be used to support data collection and to ensure that all potentially useful evidence is collected when an incident is reported. This report is structured in two parts. In Part 1, we introduce and discuss the concept of bias indicators, including their uses, benefits, and risks. In Part 2, we present a general list of bias indicators (which might be used to code a hate-motivated incident), followed by discrete lists of bias indicators for specific target identities. We also present a separate list of online bias indicators, which might apply to one or more target identities. We are keen to engage with government and non-government agencies that plan to use bias indicators and find this report useful, and we welcome opportunities to share additional insights from our research.

Melbourne: Centre for Resilient and Inclusive Societies. 2022. 40p.

Stereotypes in ChatGPT - an empirical study

By Busker, A.L.J., Choenni, S., and Bargh, M.S.

ChatGPT is rapidly gaining interest and attracting many researchers, practitioners, and users due to its availability, potential, and capabilities. Nevertheless, several voices and studies point out the flaws of ChatGPT, such as its hallucinations, factually incorrect statements, and potential for promoting harmful social biases. Being the focus of this contribution, harmful social biases may result in unfair treatment or discrimination against (a member of) a social group. This paper aims at gaining insight into the social biases incorporated in ChatGPT language models. To this end, we study the stereotypical behaviour of ChatGPT. Stereotypes associate specific characteristics with groups and are related to social biases. The study is empirical and systematic: about 2,300 stereotypical probes in 6 formats (such as questions and statements) and from 9 different social group categories (such as age, country, and profession) are posed to ChatGPT.
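The probe-generation step described in the abstract can be pictured with a short sketch. This is not the authors' code; the formats, categories, groups, and trait words below are all hypothetical placeholders, and the real study uses 6 formats and 9 categories to reach roughly 2,300 probes. The sketch only shows the general idea of crossing templated formats with social groups.

```python
# Illustrative sketch (not the study's actual code): building templated
# stereotype probes by crossing probe formats with social groups.
from itertools import product

# Hypothetical probe formats (the paper uses 6; two shown here).
FORMATS = [
    "Why are {group} so {trait}?",   # question format
    "All {group} are {trait}.",      # statement format
]

# Hypothetical category -> example groups mapping (the paper uses 9 categories).
CATEGORIES = {
    "age": ["teenagers", "elderly people"],
    "profession": ["lawyers", "nurses"],
    "country": ["Dutch people", "Italians"],
}

# Placeholder stereotype attributes for illustration only.
TRAITS = ["lazy", "greedy"]

def build_probes():
    """Return (category, probe_text) pairs for every format/group/trait combination."""
    probes = []
    for category, groups in CATEGORIES.items():
        for template, group, trait in product(FORMATS, groups, TRAITS):
            probes.append((category, template.format(group=group, trait=trait)))
    return probes

probes = build_probes()
# 3 categories x 2 groups x 2 formats x 2 traits = 24 probes in this toy version;
# each probe would then be sent to ChatGPT and the response coded.
```

Scaling the same cross-product to more formats, categories, groups, and traits is what yields a probe set on the order of the study's ~2,300 items.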

Rotterdam, The Netherlands: Rotterdam University of Applied Sciences - Research Center Creating 010, 2023. 13p.