Trinity College Dublin has established a new group dedicated to artificial intelligence (AI) accountability research.
The AI Accountability Lab (AIAL) will be led by Dr Abeba Birhane, a research fellow at the ADAPT Research Ireland Centre within the School of Computer Sciences and Statistics at Trinity.
AIAL will focus on critical issues ranging from the examination of opaque technological ecologies to the execution of audits on specific models and training datasets.
The research group is supported by a grant of just under €1.5m from three groups: the AI Collaborative, an initiative of the Omidyar Group; Luminate; and the John D. and Catherine T. MacArthur Foundation.
The lab will examine the impacts of AI technologies and aims to hold powerful entities accountable for technological harms while advocating for policies rooted in evidence.
Research will specifically address potential corporate capture of current regulatory processes, outline justice-driven model evaluation, and audit deployed models, particularly those used on vulnerable groups.
“The AI Accountability Lab aims to foster transparency and accountability in the development and use of AI systems," said Dr Birhane.
"And we have a broad and comprehensive view of AI accountability. This includes better understanding and critical scrutiny of the wider AI ecology – for example via systematic studies of possible corporate capture, to the evaluation of specific AI models, tools, and training datasets.”
The lab is being established to combat AI's role in exacerbating existing social inequalities that disproportionately affect vulnerable groups.
In sectors such as healthcare, education, and law enforcement, deploying AI technologies without thorough evaluation can have nuanced yet catastrophic impacts on individuals and groups, and can also alter social fabrics.
In the UK, a liver allocation algorithm used by the NHS has been found to discriminate by age.
No matter how ill, patients under the age of 45 currently seem unable to receive a transplant, due to the predictive logic underlying the algorithm.
Furthermore, a decision support algorithm deployed by Danish child protection services without formal evaluations has been found to suffer from numerous issues, including information leakage, inconsistent risk scores, and age-based discrimination.
Additionally, errors in facial recognition technologies have led to misidentification and the wrongful arrest of innocent people in the UK and the US.
In education, the use of student data for purposes beyond schooling drew criticism in the UK.
“The new dawn of AI associated with generative AI has heralded a velocity of AI adoption hitherto not witnessed," said Gregory O'Hare, professor of artificial intelligence and Head of School of Computer Science & Statistics at Trinity.
"The provenance of such systems is however fundamental. The AI Accountability Lab will be at the forefront of research that will examine such systems; through algorithmic auditability it will create a National and European Centre of Excellence in this space, delivering thought leadership and informing best practice.”
Prof John D Kelleher, director of ADAPT and chair of Artificial Intelligence at Trinity, added: “We are proud to welcome the AI Accountability Lab to ADAPT’s vibrant community of multidisciplinary experts, all dedicated to addressing the critical challenges and opportunities that technology presents.
"By integrating the AIAL within our ecosystem, we reaffirm our commitment to advancing AI solutions that are transparent, fair, and beneficial for society, industry, and government.
"With the support of ADAPT’s collaborative environment, the Lab will be well positioned to drive impactful research that safeguards individuals, shapes policy, and ensures AI serves society responsibly.”
Initially, AIAL will use empirical evidence to inform policy, challenge and dismantle harmful technologies, and hold responsible bodies accountable for the adverse consequences of their technology.

The group’s research objectives include addressing structural inequities in AI deployment, examining power dynamics within AI policy-making, and advancing justice-driven audit standards for AI accountability.
The lab will also collaborate with research and policy organisations across Europe and Africa, such as Access Now, to strengthen international accountability measures and policy recommendations.
Photo: Dr Abeba Birhane with provost Dr Linda Doyle at Trinity College Dublin. (Pic: Paul Sharp/Sharpix)