APPG on Women, Peace and Security: The effect of Innovation and Technological Change on Women, Peace & Security    

On Wednesday 24th May, the All-Party Parliamentary Group on Women, Peace and Security (APPG-WPS) hosted an event titled “The effect of Innovation and Technological Change on Women, Peace and Security”. The event was chaired by Baroness Hodgson, co-chair of the APPG-WPS, and focused on the emerging opportunities and risks that innovation and technological change pose to women and girls in situations of conflict, as well as in the United Kingdom. The need to bring together the domestic and the international is recognised in the UK’s fifth Women, Peace and Security National Action Plan. Baroness Hodgson highlighted that innovation and technological change was also the theme of CSW 2023.

The event took the form of a conversation between Baroness Hodgson and the two speakers: Su Moore, CEO of the Jo Cox Foundation, and Taniel Yusef, outgoing international representative at the Women’s International League for Peace and Freedom (WILPF).

Su Moore began the event by outlining the Jo Cox Foundation’s newly launched work to increase civility in politics, and the online dimensions of that work. Jo Cox, the MP murdered in 2016 by a right-wing extremist, was herself passionate about enabling more women to participate and thrive in politics, and saw how online abuse in particular was a significant barrier for women politicians. The Civility Commission set up by the Jo Cox Foundation aims to ensure greater civility both offline and online by fostering a more respectful politics at all political levels, from local councils to Members of Parliament. It works across political parties and with education, policing and social media platforms. It launched in February and is currently in a discovery phase to understand the causes of, and implementable solutions to, abuse.

The abuse the Foundation seeks to stop is experienced by all politicians, but Su made clear that women politicians face distinct types of abuse. Firstly, the abuse tends to focus more on the individual, for example someone’s body, voice or family, whereas abuse aimed at male politicians tends to be more about their policies. Secondly, abuse against women politicians increases exponentially as they gain power and visibility. Thirdly, women experience abuse differently depending on the other identities they hold: Amnesty International UK found that, of the abuse directed at the 2017 cohort of MPs, 41% was aimed at Black and Minority Ethnic MPs, even though they made up only 11% of the group. The disproportionate number and severity of threats of sexual violence, against themselves and their families, that women politicians face on social media has direct impacts: it places women politicians in a perpetual state of hypervigilance; they actively avoid speaking on certain topics, knowing the abuse they will face as a result; and on average they stand down five years younger, and serve terms six years shorter, than their male colleagues. Preventing this abuse matters because it erodes democracy and directly undermines the participation of women politicians, a pillar of the Women, Peace and Security agenda.

Taniel Yusef turned the conversation to a different facet of technological change: algorithms and social and technological bias. Algorithms are the building blocks of artificial intelligence (AI) and provide the instructions and parameters for technologies that automate conflict. These algorithms are trained on data sets, so the historical and societal biases embedded in those data sets risk being reproduced by AI if human oversight is lacking or is itself prejudiced. Women of colour in particular are disproportionately categorised differently by these algorithms. Consider, for example, which accents or skin colours are over-policed and criminalised: the data being fed in is already flawed.

Taniel Yusef went on to explain that AI analyses data sets according to parameters set by people: what data to look for, how to categorise it and what to do in response. This is where the use of AI as an offensive tool can be particularly dangerous. Social biases or explicit military strategic objectives can cause a model simply not to register certain people, or to mis-categorise them. An automated weapon may then execute an airstrike on what is in fact a civilian target. Accountability also becomes harder, as the perpetrator can blame faulty technology. The ability to distinguish between civilian and military, a core principle of humanitarian and human rights law, is fundamentally undermined. AI can therefore be (mis)used to cherry-pick how to conduct war and how to tell the story afterwards, with disproportionate negative consequences for women and girls. If technology cannot even see women and girls, especially those of colour, how can it protect them?

The question-and-answer session focused on what can be done to confront these risks. Both Taniel and Su underscored that the absence of women in tech is a massive barrier, and that more needs to be done to ensure women and girls participate in the development, creation and scrutiny of new technologies, especially when it comes to safety checks and monitoring. Talking to the women most affected, for example the politicians receiving abuse, is also crucial to identifying what is happening and ensuring effective, tailored responses. Taniel Yusef encouraged a shift towards taking the risks women face seriously, as these risks are often the canary in the coal mine for wider violence. Those present also asked about the positives tech can bring to women and girls. Su illustrated how WhatsApp groups are powerful platforms for addressing loneliness and fostering community cohesion, and Taniel underscored how Zoom and other video platforms have radically redefined the politics of labour, both paid and unpaid, since the pandemic.
