There is relatively strong agreement among practitioners, researchers and politicians alike that interventions within child and adolescent mental health should have the best possible documentation of effects. At the same time, there are some varying viewpoints as to what the basis should be for claiming that an intervention is evidence-based. Ungsinn’s criteria are built on several organizations’ opinions of what constitutes valid research evidence.


The text below is taken and translated from Martinussen, M., Reedtz, C., Eng, H., Neumer, S. P., Patras, J., & Mørch, W. T. (2016). Ungsinn – kriterier og prosedyrer for vurdering og klassifisering av tiltak [Ungsinn – Criteria and procedures for evaluation and classification of interventions]. Tromsø: UiT The Arctic University of Norway.


Evidence-based practice

In the report “Bedre føre var…” [Better safe than sorry…], on health-promoting and preventive interventions and recommendations (Major, Dalgard, Mathisen, Nord, Ose, Rognerud & Aarø, 2011), it says that “… it should be a requirement that an intervention be tried and tested beforehand with sufficient documentation that it fulfills its purpose”. The authors of the report believe interventions must be evidence-based, i.e., based on knowledge acquired through research. The Norwegian Institute of Public Health points to this as a reasonable and necessary requirement for health-promoting and disease-preventive interventions in the area of child and adolescent mental health.

The most common understanding of knowledge-based/evidence-based practice is that knowledge from research is integrated with the practitioners’ experience-based knowledge and the users’ needs and wishes (Sackett, Rosenberg, Gray, Haynes & Richardson, 1996). This is a perception that stems from medicine (“evidence-based medicine”), originally used in relation to decisions and treatment choices for the individual patient. This way of understanding knowledge-based practice may also be applied to other decisions that are significant in determining which interventions are made available to users.

The research-based knowledge found in evidence-based practice should be the best available research evidence in relation to the clinical question being asked. When the clinical question deals with the effect of the intervention, research-based knowledge of the effect must form the basis of the documentation. It is also important to distinguish between evidence-based interventions (empirically supported interventions) and evidence-based practice. Evidence-based interventions are those with solid documentation of effect. Evidence-based practice occurs when research-based knowledge on the effects of various interventions is integrated with the practitioners’ experience-based knowledge and the needs and wishes of users.

Ungsinn is a source of information on interventions and their evidence. It gives practitioners and decision-makers access to research-based knowledge that they may use in their work of developing evidence-based practice. The quality of evidence is graded on five levels: the lowest two levels represent the first steps toward documenting the effects of an intervention, and the top three levels represent varying quality of documentation based on empirical research.


Various standards for evidence of interventions
Different organizations have developed criteria for what may be viewed as evidence in research and practice. One such organization is the Society for Prevention Research (SPR). In their standards for evidence (Flay et al., 2005), criteria are described for effective interventions examined under well-controlled conditions (efficacy), effective interventions documented in ordinary preventive/clinical practice (effectiveness) and interventions that are appropriate for general dissemination.

According to this standard, an effective intervention (efficacy) should have been tested in at least two comprehensive studies in which a clear assertion about the effect of the intervention for a well-defined target group has been investigated. The intervention must be described in a way that makes it possible for others to replicate the findings in new studies. Valid and reliable psychometric instruments and data collection methods must have been used. The research design must provide a basis for clear causal assertions and generalizations of the findings, something that has consequences for the selection of comparison groups and division into groups (i.e., randomized or matched). There must be a concrete description of the sample selection. Data must be analyzed with appropriate statistical methods. The results from such studies should show consistent positive effects, and there should be positive results from at least one long-term follow-up study (> 6 months). It is recommended to test the intervention under optimal conditions first, with a design that ensures high internal validity such as in well-controlled studies, prior to testing whether the effect can also be documented in new studies under the more realistic conditions of normal practice.

According to SPR, evidence from effectiveness studies should meet all the standards of well-controlled efficacy studies; however, there should additionally be a clear description of the intervention (manuals, training and materials) such that a third party may adopt and implement the method. The intervention must have been evaluated under natural conditions for the target group to whom the method is directed and the intervention needs to have a clear theory that explains the causal mechanism (the mechanism considered to cause effect). Evaluations must have clarified the significance of the results in the real world (practical significance) and should clearly demonstrate to which group/s the results may be generalized.

An intervention that is ready for broad dissemination must meet all standards for interventions documented through well-controlled efficacy and effectiveness studies; however, it must also be demonstrated that the method/intervention allows for dissemination on a large scale. The intervention must have explicit programs for training and maintenance of the practitioners’ skills, in addition to containing precise information on costs tied to the implementation. The intervention must also have assessment tools such that organizations can measure the effects of the intervention and evaluate the implementation process.

Other organizations’ views on evidence
SPR sets high standards for claiming that a preventive intervention is evidence-based or empirically supported. Many other organizations have established their own criteria for evidence as well. Among others, an American organization, the Substance Abuse and Mental Health Services Administration (SAMHSA), has systems for evaluating the quality of research. SAMHSA has also established an electronic database for searching for evaluated interventions within the fields of mental health and substance abuse: the National Registry of Evidence-Based Programs and Practices (NREPP). Blueprints is a database run by the U.S. Department of Justice that contains effective crime prevention interventions. In the Netherlands there is a database called the Database of Effective Youth Interventions, which contains interventions available in the Netherlands directed towards child and adolescent health, care and welfare. The California Evidence-Based Clearinghouse for Child Welfare (CEBC) presents and classifies programs and measurement instruments that may be used in child welfare. Each of these databases has its own criteria for assessing and classifying evidence, although there is substantial overlap.

In 2005, the American Psychological Association (APA) formulated a declaration, the APA Policy Statement (American Psychological Association, 2005), in which they listed the factors they determined would ensure an effective psychological practice. The APA largely followed the SPR evidence standards for what they described as practice supported by the best research evidence. They additionally pointed to clinical expertise as a requirement for effective practice and delineated the clinical skills that constitute such expertise. The Norwegian Psychological Association has adopted a declaration of principles on evidence-based psychological practice that supports the APA’s declaration (Norwegian Psychological Association, 2007).

Grading of evidence
Although the various organizations have somewhat differing criteria, all systems acknowledge that evidence is not an absolute quantity but rather something that varies by degree. The consequence is that evidence is graded. The different systems have quite similar standards for what is considered strong evidence, but are somewhat divided when it comes to what may be considered some or weaker evidence. Although SPR’s standards for evidence include a form of grading, all levels have strict requirements for quality of documentation. Similarly, the databases of the NREPP (National Registry of Evidence-Based Programs and Practices; NREPP, 2015), Blueprints (Blueprints for Healthy Youth Development, n.d.) and CEBC (California Evidence-Based Clearinghouse for Child Welfare; CEBC, n.d.) are all grading systems. To receive a positive evaluation of evidence in these databases, positive results from high-quality studies are nonetheless required. At the very least, the studies must have a quasi-experimental research design.

Many national and international organizations (e.g., the World Health Organization and the Knowledge Centre for the Health Services) have come together on a mutual approach to grading evidence through the GRADE system (Grading of Recommendations, Assessments, Development and Evaluation; Guyatt, Oxman, Vist, Kunz, Falck-Ytter, Alonso-Coello, & Schünemann, 2008; WHO Guidelines, 2012). The quality of the documentation is graded on four levels: high quality, medium quality, low quality and very low quality. The system is applied in all fields of medicine and health sciences, and it is continuously evaluated and developed. GRADE is based on the idea that randomized controlled trials (RCTs) provide the best research design for uncovering a causal link between an intervention and its effects. Additionally, other factors that may weaken or strengthen the validity of the results are assessed; for example, procedures for randomization, blinding (lack of knowledge of research conditions) and consistency between various studies. In general, GRADE places high demands on methodological quality. The requirements and biases evaluated by GRADE are quite pertinent to the testing of medications and medical treatment, and are appropriate, to a somewhat lesser degree, for studies that examine the effect of psychosocial interventions.

The Dutch knowledge database, the Database of Effective Youth Interventions, has chosen to grade evidence in a way that includes many of the interventions delivered in practice. Their thinking is that more interventions, beyond those evaluated with the best methods, may be effective, and that simpler studies provide more information than no research at all. They have three levels of evidence: theoretically based, possibly effective and established as effective, based on the system developed by Veerman and van Yperen (2007). If new studies have been performed on an intervention, it may be re-evaluated and reach a higher classification. The same thinking is the basis for Ungsinn’s criteria.



American Psychological Association (APA). (2005). Policy Statement on Evidence-Based Practice in Psychology. Retrieved from:

Blueprints for Healthy Youth Development. (n.d.). Program criteria. Retrieved from:

California Evidence-Based Clearinghouse for Child Welfare (CEBC). (n.d.). CEBC Review Process. Retrieved from:

Flay, B. R., Biglan, A., Boruch, R. F., Castro, F. G., Gottfredson, D., Kellam, S., … Ji, P. (2005). Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention Science, 6, 151–175. doi:10.1007/s11121-005-5553-y

Guyatt, G. H., Oxman, A. D., Vist, G. E., Kunz, R., Falck-Ytter, Y., Alonso-Coello, P., & Schünemann, H. J. (2008). GRADE: An emerging consensus on rating quality of evidence and strength of recommendations. BMJ, 336, 924–926.

Major, E. F., Dalgard, O. S., Mathisen, K. S., Nord, E., Ose, S., Rognerud, M., & Aarø, L. E. (2011). Bedre føre var… Psykisk helse: Helsefremmende og forebyggende tiltak og anbefalinger [Better safe than sorry… Mental health: Health-promoting and preventive interventions and recommendations] (Report 1). Oslo: Norwegian Institute of Public Health.

National Registry of Evidence-based Programs and Practices (NREPP). (2015). Program review criteria. Retrieved from:

Norwegian Psychological Association. (2007). Prinsipperklæring om evidensbasert psykologisk praksis [Declaration of principles on evidence-based psychological practice]. Retrieved from:

Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn’t. British Medical Journal, 312(7023), 71–72.

Veerman, J. W., & van Yperen, T. A. (2007). Degrees of freedom and degrees of certainty: A developmental model for the establishment of evidence-based youth care. Evaluation and Program Planning, 30, 212–221.

WHO. (2012). Guideline development handbook. Retrieved from: