Research Repository

All Outputs (5)

Disclosure control of machine learning models from trusted research environments (TRE): New challenges and opportunities (2023)
Journal Article
Mansouri-Benssassi, E., Rogers, S., Reel, S., Malone, M., Smith, J., Ritchie, F., & Jefferson, E. (2023). Disclosure control of machine learning models from trusted research environments (TRE): New challenges and opportunities. Heliyon, 9(4), Article e15143. https://doi.org/10.1016/j.heliyon.2023.e15143

Introduction: Artificial intelligence (AI) applications in healthcare and medicine have increased in recent years. To enable access to personal data, Trusted Research Environments (TREs) (otherwise known as Safe Havens) provide safe and secure environments...

Automatic Checking of Research Outputs (ACRO): A tool for dynamic disclosure checks (2021)
Journal Article
Green, E., Ritchie, F., & Smith, J. (2021). Automatic Checking of Research Outputs (ACRO): A tool for dynamic disclosure checks. ESS Statistical Working Papers, 2021 Edition. https://doi.org/10.2785/75954

This paper discusses the issues surrounding the creation of an automatic tool to reduce the burden of output checking in research environments. It describes ACRO (Automatic Checking of Research Outputs), a Stata tool written as a proof-of-concept, an...

Statistical disclosure controls for machine learning models (2021)
Conference Proceeding
Krueger, S., Mansouri-Benssassi, E., Ritchie, F., & Smith, J. (2021). Statistical disclosure controls for machine learning models.

Artificial Intelligence (AI) models are trained on large datasets. Where the training data is sensitive, the data holders need to consider risks posed by access to the training data and risks posed by the models that are released. The first problem c...

Understanding output checking (2020)
Report
Green, E., Ritchie, F., & Smith, J. (2020). Understanding output checking. Luxembourg: European Commission (Eurostat - Methodology Directorate)

This report for Eurostat (Methodology) considers the conceptual and practical issues that need to be addressed in designing and implementing automatic disclosure control checking for statistical research outputs. The report covers: the basic theory...

Confidentiality and linked data (2018)
Book Chapter
Ritchie, F., & Smith, J. (2018). Confidentiality and linked data. In G. Roarson (Ed.), Privacy and Data Confidentiality Methods – a National Statistician's Quality Review (pp. 1-34). Newport: Office for National Statistics.

This chapter considers the confidentiality issues around linked data. It notes that the use and availability of secondary (administrative or social media) data, allied to powerful processing and machine learning techniques, in theory means that re-identification...