Many statistical organisations require researchers using detailed sensitive data to undergo ‘safe researcher’ training. Such training has traditionally reflected the ‘policing’ model of data protection. This mirrors the defensive stance often adopted by data providers, which shifts responsibility for failure onto the user, and which derives its behavioural assumptions from neoclassical economic models of crime.
In recent years, there has been growing recognition that this approach is not well suited to addressing the two most common risks to confidentiality: mistakes, and avoidance of inconvenient regulation. Moreover, it is hard to exploit the benefits of user engagement under the policing model, which encourages ‘them and us’ thinking. Finally, there is little evidence that students absorb ‘do/don’t’ messages well.
There is a growing acceptance that a ‘community’ model of data protection brings a range of benefits, and that training is an investment in developing that community. This requires a different approach to training, focusing more on attitudinal shifts and less on right/wrong dichotomies.
This paper summarises recent learning about training users of confidential data: what they can learn, what they don’t learn, and how to extract the full benefit of training for both parties. We also explore how, in the community model, trainers and data owners need to be trained as well as researchers.
The paper focuses on face-to-face training, but also considers lessons for other training environments.
We illustrate with an example of the conceptual design of a new training course being developed for the UK Office for National Statistics.