Within the UK’s positivistic paradigm, the belief is that data can predict human behaviour. This has led to the use of algorithmic risk prediction systems supplied by profit-making providers, including systems that predict the risk of child abuse. These algorithms assume commonality via a fixed set of characteristics observed within abusive families. However, our analysis of Serious Case Reviews in the UK found no fixed set of continuous variables that can predict abuse. In addition, the algorithms assume that human error (by the service provider) was at fault, treating service user behaviour as predictable and therefore controllable. A statistical dilemma thus arises: a model that uses familial ‘characteristics’ to estimate the incidence of child abuse can only operate at one of two extremes. Either it is too narrow in scope, matching only individuals who fulfil very specific criteria and thereby missing genuine cases, producing false negatives; or it is too broad in scope, matching many individuals and producing numerous false positives. This paper discusses the early findings of our work on risk prediction, explaining the consequences of this positivistic approach, in particular the unacceptably high numbers of false positives and false negatives observed in the UK’s child protection system.
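The narrow/broad dilemma can be illustrated with a minimal sketch. The code below is not any provider’s actual model: it uses an entirely synthetic population in which a latent ‘risk score’ is assigned to each family, with heavily overlapping score distributions for cases and non-cases (an assumption standing in for the absence of any fixed set of predictive variables noted above). Sweeping the flagging threshold shows that a narrow scope (high threshold) inflates false negatives while a broad scope (low threshold) inflates false positives.

```python
import random

random.seed(0)

# Hypothetical synthetic population: 10,000 families, of which roughly 5%
# are true cases. Scores for true cases are shifted slightly upward, but the
# two distributions overlap heavily -- mirroring the claim that no fixed set
# of familial characteristics cleanly separates abusive from non-abusive
# families.
population = []
for _ in range(10_000):
    is_case = random.random() < 0.05
    mean = 0.6 if is_case else 0.4
    score = min(1.0, max(0.0, random.gauss(mean, 0.15)))
    population.append((score, is_case))

def confusion(threshold):
    """Count false positives and false negatives when families with
    score >= threshold are flagged for intervention."""
    fp = sum(1 for s, c in population if s >= threshold and not c)
    fn = sum(1 for s, c in population if s < threshold and c)
    return fp, fn

for t in (0.3, 0.5, 0.7, 0.9):
    fp, fn = confusion(t)
    print(f"threshold={t:.1f}  false positives={fp:5d}  false negatives={fn:3d}")
```

Running the sweep shows there is no threshold at which both error counts are simultaneously low: tightening the criteria trades false positives for false negatives and vice versa, which is precisely the dilemma described above.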