
Oireachtas Committee on AI: AI and Young People

23 September 2025

Contribution by Dr Emily Bourke, Policy and Participation Coordinator

Thank you, Chair and members of the Committee, for having us here today. My name is Emily Bourke, and I'm speaking on behalf of Belong To, a national LGBTQ+ youth organisation. We work directly with young people aged 14 to 23, and every day we support them to navigate identity, mental health, and digital life. I'm also joined by Rob Byrne, who is here today to represent the youth voice of Belong To.

We regularly consult the young people we work with on online safety, and two particular concerns have emerged in recent years: recommender algorithms and moderation of hateful content.

Young people want the choice to opt in or out of online recommender systems and algorithms, giving them more control over what they see online and the amount of time they spend on social media. While many of the young people we work with speak of the positives of social media, as a place where they can find community and learn about their identity, they also express serious concerns about the content that is pushed to them and their peers. They see hateful, anti-LGBTQ+ content daily, and algorithms push it because it provokes a reaction, despite the harm it causes.

They are also troubled by the recent weakening of content moderation by online platforms. In light of this, we have been advocating for strong EU enforcement of the Digital Services Act's protections for minors, with risk assessments and measures that reduce harmful and hateful content online. Responsibility for implementing and upholding these measures must fall to the online platforms that host the content, rather than to civil society or to affected individuals.

We know from research that LGBTQ+ young people are three times more likely to experience severe depression, three times more likely to experience extreme anxiety, and five times more likely to experience thoughts of suicide than their wider peer group. Improving their experiences of online spaces is of vital importance.

I’m going to hand over now to Rob, who will illustrate some of the additional concerns that young LGBTQ+ people are dealing with as AI becomes more pervasive.

Contribution by Rob Byrne, Belong To Youth Representative

Thank you. My name is Rob Byrne, and I have been involved with Belong To's services for the last four years.

A concern for LGBTQ+ youth, particularly those of us who are not open about our identities, is how data is collected through our interactions with AI. AI is being built into more and more websites and apps, often with no way to turn it off. When big tech corporations sell our data to the highest bidder, and that data is then used to push targeted advertisements at us, people can be unintentionally outed. This is another important reason we should be able to opt out of recommender algorithms.

AI is also only as good as the information it is trained on, and many of its sources, particularly social media, carry bias and reinforce harmful stereotypes. There is a further risk of AI training on its own output, creating a loop in which bias is recycled and reinforced within the system.

Some young LGBTQ+ people are also turning to AI chatbots as a form of social interaction when they don't find acceptance at home. This is dangerous: they can become withdrawn from social life, and language models cannot replace real human interaction or empathy. Some people have also been able to bypass weak guardrails and obtain detailed instructions on how to harm themselves, with encouragement from the chatbot.

Overall, AI has a lot of potential to be used for good. At the same time, it is being used to the detriment of minority groups and society as a whole. AI and tech companies need to be regulated to put young LGBTQ+ people’s wellbeing above the pursuit of profit.
