BCS Lovelace lecture 2020/21 with Professor Marta Kwiatkowska on probabilistic model checking for the data-rich world
A research partnership with NHS AI Lab exploring the potential for algorithmic impact assessments in an AI imaging case study
Wicked problems in the use of data-driven systems
Developing foundational tools to enable accountability of public administration algorithmic decision-making systems
Joint briefing with Reset, giving insights and recommendations towards a practical route forward for regulatory inspection of algorithms
The failure of the A-level algorithm highlights the need for a more transparent, accountable and inclusive process in the deployment of algorithms.
The appeal of R (Bridges) v Chief Constable of South Wales shows that, when it comes to facial recognition technology, the status quo cannot continue.
This year’s International Women’s Day theme ‘Each for Equal’ has particular resonance for Black women who experience discrimination.
Facial recognition technology is a complex area, which means the risk of misunderstandings is high.
When it comes to the societal impacts of AI and data, we need to tackle complex problems that don’t necessarily have objective solutions.