The Open Access Publisher and Free Library

CRIME

CRIME-VIOLENT & NON-VIOLENT-FINANCIAL-CYBER

National Review of Child Sexual Abuse and Sexual Assault Legislation in Australia

By Christopher Dowling, Siobhan Lawler, Laura Doherty and Heather Wolbers

This is the Australian Institute of Criminology’s (AIC) national review of child sexual abuse and sexual assault legislation. The Australian Attorney-General’s Department (the Department) commissioned this review to support implementation of the Standing Council of Attorneys-General (SCAG) Work Plan to Strengthen Criminal Justice Responses to Sexual Assault 2022–2027 (the Work Plan), under which all jurisdictions agreed to take collective and individual action. Specifically, this review supports SCAG Work Plan Priority 1 (‘Strengthening legal frameworks to ensure victims and survivors have improved justice outcomes and protections’) and aligns with the following corresponding action:

1.1 Criminal laws: Review the criminal offences and legal definitions (including consent) relating to sexual offending in the context of the unique characteristics of each jurisdiction’s legislative framework and criminal justice system and, if necessary, consider progressing and implementing appropriate reforms.

The national review also responds to concerns expressed by advocate Grace Tame during a presentation at the November 2021 Meeting of Attorneys-General about inconsistencies in child sexual abuse and sexual assault laws across Australia. Importantly, this review is being undertaken in the wake of the Royal Commission into Institutional Responses to Child Sexual Abuse, which recommended a series of reforms to the criminal justice system (2017: 194). Although Commonwealth offences were strengthened in response to the Commission’s recommendations, Australian states and territories are at different stages of implementing the recommended reforms.

The review broadly addresses these research questions:

1. What is the nature and scope of sexual assault and child sexual abuse legislation in Australia?
2. What differences and similarities (if any) are there between sexual assault and child sexual abuse legislative frameworks in Australia?
3. What impact (if any) do legislative inconsistencies have on: (a) the investigation and prosecution of sexual assault and child sexual abuse matters in the criminal justice system; and (b) the ability of victims and survivors to receive the support they require?
4. What are the barriers and challenges to achieving consistency in child sexual abuse and sexual assault legislation in Australia?
5. What are the gaps in current legislation for responding to new and emerging trends in sexual violence?
6. What does ‘best practice’ in relation to sexual assault and child sexual abuse legislation look like?

Canberra: Australian Institute of Criminology, 2024. 375p.

Combating Illicit Trade and Transnational Smuggling: Key Challenges for Customs and Border Control Agencies 

By Gautam Basu

Customs and border control agencies face key challenges in preventing illicit trade and disrupting transnational smuggling operations. Maintaining the delicate balance between facilitating legitimate trade flows and deterring illicit ones is a complex operational task. This paper identifies and examines three of those challenges: the scale and complexity of physical transportation geography in border management; the adaptive capabilities of professional smugglers in concealment, evasion, and structural and operational flexibility; and the institutional coordination problems that can arise in customs and border control management.

World Customs Journal, Volume 8, Number 2.

Social Media Bots: Laws, Regulations, and Platform Policies

By Kasey Stricklin and Megan K. McBride

Social media bots—simply, automated programs on social media platforms—affect US national security, public discourse, and democracy. As the country continues to grapple with both foreign and domestic disinformation, the laws and platform policies governing the use of social media bots are incredibly important. As part of CNA’s study, Social Media Bots: Implications for Special Operations Forces, our literature review found that the landscape of such regulations is difficult to piece together, and that applicable provisions and policies are disparately catalogued. This CNA primer fills that gap by helping policy-makers and national security practitioners understand the laws and social media platform policies as they currently exist. We also consider the challenges and dilemmas faced by legislators and social media platforms as they attempt to craft relevant provisions to address social media bots and malign influence, and we conclude with a brief look at the consequences for breaking platform policies.

The Legal Framework: US policy-makers are constrained in their passage of bot-related laws by a number of factors. First, legislators must consider free speech rights protected by the First Amendment of the Constitution. Additionally, Section 230 of the Communications Decency Act (CDA 230) hinders the ability of policy-makers to hold social media platforms legally responsible for material posted on their sites. Further, the slow speed of congressional action compared with technological advancement, and the barriers to obtaining reliable information on the social media bot threat, have proved difficult to overcome. There are no US federal laws governing social media automation, although members of Congress have introduced several relevant pieces of legislation over the last few years. While there is some congressional interest in crafting bot-related legislation, the political will to pass such provisions has yet to materialize.

In the international arena, the European Union has been a leader in efforts to counter disinformation; it introduced a non-binding Code of Practice in October 2018, to which many of the most prominent social media companies signed on. As a result, the platforms committed themselves to self-regulation aimed at stamping out disinformation on their sites, including closing fake accounts and labeling bot communications. In May 2020, the European Commission reported that, though there had been positive developments toward countering disinformation, there was still much room for improvement in labeling and removing bots. It is important to keep in mind, though, that the EU has a permanent bureaucracy to study problems and propose both binding and non-binding measures. Legislation works differently in the US, where a legislative champion with significant clout needs to emerge in order to push a proposal forward.

Platform Policies: The social media companies face their own dilemmas when thinking about the creation of effective bot regulations. Unlike policy-makers, platforms are beholden to shareholders, and higher platform engagement generally leads to higher share values. Because bots make up a large portion of monthly active users on some platforms, the companies may be reluctant to kick off these automated accounts. However, public pressure since the 2016 US election has created a greater financial incentive to ensure engagement is authentic. The companies also worry that regulating too extensively would amount to admitting they have an affirmative duty to moderate content, which could lead to the revocation of their limited immunities under CDA 230. This tension is evident in the run-up to the US presidential election: as the social media companies seek to ensure the truthfulness of candidates on their sites, they also risk one side of the political spectrum regarding them as politically biased and seeking to regulate them in response.

Instead of specifically focusing on bot activity, the platforms tend to address bot behavior through other policies on banned behavior. We broke out the policies relevant to bots into four categories: automation, fake accounts and misrepresentation, spam, and artificial amplification. Figure 1 depicts the way these policies often overlap in detailing prohibited bot behaviors. 
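The four-category breakdown lends itself to a simple illustration. The sketch below is a hypothetical model, not code from the CNA report: it represents each policy category as a set of prohibited behaviors (the behaviors listed are illustrative assumptions) and shows how a single bot behavior can fall under several categories at once, echoing the overlap Figure 1 depicts.

```python
# Hypothetical sketch: the four policy categories modeled as sets of
# prohibited bot behaviors. Category contents are illustrative
# assumptions, not data from the CNA report.

POLICY_CATEGORIES = {
    "automation": {"scheduled_posting", "auto_reply", "bulk_posting"},
    "fake accounts and misrepresentation": {"impersonation", "fake_profile"},
    "spam": {"bulk_posting", "repetitive_links", "auto_reply"},
    "artificial amplification": {"coordinated_likes", "repetitive_links"},
}

def categories_violated(behaviors):
    """Map observed behaviors to every policy category they fall under."""
    return {
        category: behaviors & prohibited
        for category, prohibited in POLICY_CATEGORIES.items()
        if behaviors & prohibited
    }

# "bulk_posting" alone triggers both the automation and spam categories,
# mirroring how one bot behavior can violate several overlapping policies.
print(categories_violated({"bulk_posting", "coordinated_likes"}))
```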

The consequences for breaking platform policies vary, with the sites often looking at the specific violation, the severity of the infraction, and the user’s history on the platform. While they may simply hand out a warning or restrict the post’s viewership, the sites also reserve the right to ban users or accounts, and can even go so far as to sue for violation of their terms.
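As a rough illustration of that graduated response, the following sketch escalates from a warning to restricted viewership to an account ban based on the violation's severity and the user's history. It is a hypothetical decision ladder with invented thresholds, not any platform's documented rules.

```python
# Hypothetical sketch of the warning -> restrict -> ban ladder described
# above. The severity scale and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Infraction:
    severity: int       # 1 (minor) to 5 (severe)
    prior_strikes: int  # the user's record of past violations

def enforcement_action(infraction: Infraction) -> str:
    if infraction.severity >= 5 or infraction.prior_strikes >= 3:
        return "ban account"               # severe or repeat offenders
    if infraction.severity >= 3 or infraction.prior_strikes >= 1:
        return "restrict post viewership"  # mid-tier response
    return "issue warning"                 # first-time, minor infractions

print(enforcement_action(Infraction(severity=1, prior_strikes=0)))  # issue warning
print(enforcement_action(Infraction(severity=4, prior_strikes=1)))  # restrict post viewership
print(enforcement_action(Infraction(severity=2, prior_strikes=3)))  # ban account
```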

The ever-evolving threats from disinformation and malicious bots will likely continue to cause consternation in the US government. However, experts are skeptical that Congress will find a legislative solution in the near future, despite enhanced attention to the problem. Therefore, the social media platforms are likely to shoulder much of the burden going forward, and it is an open question how and to what extent the platforms should police themselves. As they grapple with the prevalence of automated accounts operating on their sites, the platforms’ policies and enforcement provisions will continue to evolve to meet the threats of the day. However, it may ultimately be the attention of the press and American public, or the initiative of a regulatory agency like the Federal Trade Commission, that provides the needed impetus for change on these issues.

Arlington, VA: CNA, 2020. 40p.

Good Practices in Addressing Illegal Betting: A Handbook for Racing and Sports Organisations to Uphold Integrity

By Asian Racing Federation

This Handbook highlights the risks to the integrity of horse racing and other sports from illegal betting-related corruption and provides practical guidance to administrators and other key stakeholders for mitigating and combating such corruption. The Handbook's practical guidance includes an overview of major issues around illegal betting, how to conduct bet monitoring and betting analysis, intelligence gathering and analysis, how to conduct illegal betting investigations, and how to engage stakeholders to combat illegal betting and related corruption.

Hong Kong: Asian Racing Federation, 2020. 130p.

Online Gendered Abuse and Disinformation During the 2024 South African Elections

By Clara Martiny, Terra Rolfe, Bilen Zerie, Aoife Gallagher and Helena Schwertheim

ISD sought to understand how online gender-based violence (OGBV) affects South African women, focusing on the experiences of women politicians, candidates, and political figures during one of South Africa’s most historic general elections in May 2024. ISD analysts used a combination of qualitative and quantitative analytical methods, interviews with experts, and knowledge drawn from online and in-person workshops. Specifically, three online case studies looked at abusive content, gendered disinformation, and harassment targeting women politicians on TikTok, X (formerly Twitter), and Facebook. ISD’s analysis found that South African women in politics often face abuse online in the form of replies or comments to their posts or content about them. Misogynistic actors tend to target their physical attributes, intelligence, and ability to lead. They also often engage with gendered disinformation narratives that sexualize or objectify women. While South Africa’s legislative frameworks are progressive and comprehensive, enforcement is difficult and many women are unaware of the resources available to them. Social media platforms also have policies that address OGBV and gendered disinformation, but their enforcement is weak, especially outside of English-language content.

Amman, Berlin, London, Paris, Washington DC: Institute for Strategic Dialogue, 2024. 37p.