By Silvia Montoya, Director of the UNESCO Institute for Statistics (UIS)
I am in Bangkok at the Asia-Pacific Meeting on Education 2030 (APMED) to find ways to transform learning and meet the skills demand needed to achieve the SDGs. There is a sense of urgency in the discussions, with references to the global learning crisis that jeopardizes the futures of 617 million children and adolescents – 6 out of 10 worldwide – who are unable to achieve minimum proficiency levels in reading and mathematics, according to UIS data.
But it is the numbers that we don’t have that scare me the most.
How many adults in the world lack basic literacy and numeracy skills? The standard answer is that 750 million adults are illiterate worldwide. To be honest, I rarely cite this number, knowing that it is based largely on a single question – “Can you read or write a simple sentence?” – asked in a household survey or census.
How to reduce the technical and financial burden of learning assessments
Through the Global Alliance to Monitor Learning (GAML), we are working with countries and partners to produce the very first internationally comparable indicators on skills and learning. But clearly the best indicators in the world will amount to little if countries cannot collect the data to produce them. We must be pragmatic and creative – finding flexible ways to adapt existing assessments in order to meet the priorities and contexts of countries.
A UIS paper presents a series of options to help countries reduce the technical and financial burden of conducting the learning assessments needed to produce the data to monitor SDG Target 4.6: "ensure that all youth and a substantial proportion of adults, both men and women, achieve literacy and numeracy."
Ideally, all countries would implement a full spectrum learning assessment such as the OECD’s Survey of Adult Skills (PIAAC) or the World Bank’s Skills Measurement Program (STEP). The problem is many developing countries don’t have the resources to administer these assessments in their current form. The good news is that there are ways to adapt them in order to meet the needs of countries.
Here are the options:
- Reduce the number of domains: countries could decide to assess only literacy skills and not numeracy, which would reduce the data collection costs of the assessments by 66%. Strictly speaking, however, this is not acceptable for SDG 4 monitoring, which covers both domains.
- Point or synthetic estimates: rather than establishing a direct point estimate of the skill distribution, countries could use a synthetic estimate based on a sample of respondents with similar characteristics. Sample sizes could be reduced from 5,000 to 1,500 respondents, which would cut costs by 30-50%.
- Implementation: countries could reduce implementation costs by taking advantage of existing platforms and attaching a short module to a household survey. While this may have an impact on response rates, it would lead to significant cost savings in fielding the assessment.
- Mode of delivery: countries have the choice of using either a pencil- or a computer-based approach to administer the assessment. The UIS estimates that the computer-based option can reduce the costs of data collection by 40% while yielding more reliable results across the entire skill distribution.
- Skills continuum or classification by threshold: countries can also decide to report against a basic proficiency threshold rather than the more complex skills continuum scale. This could lower the cost and operational burden of learning assessments by 25%.
- Centralized or decentralized administration: countries can decide to administer an assessment on their own, following clear guidelines, or rely on the expertise of an international or regional organization.
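To see how these options interact, here is a rough back-of-envelope sketch. The percentage reductions are those quoted above; the baseline cost unit, the 40% midpoint for the sampling option, and the assumption that savings compound multiplicatively are mine, for illustration only.

```python
# Illustrative cost model for the assessment options above.
# The baseline cost and multiplicative-compounding assumption are
# hypothetical; the percentage reductions come from the text.

BASELINE_COST = 100.0  # arbitrary units: 100 = a full PIAAC/STEP-style survey

# Fraction of cost removed by each option.
OPTIONS = {
    "single_domain":       0.66,  # literacy only (not SDG 4-compliant)
    "synthetic_estimates": 0.40,  # sample 1,500 instead of 5,000 (quoted 30-50%)
    "computer_based":      0.40,  # computer- rather than pencil-based delivery
    "threshold_scoring":   0.25,  # threshold instead of continuum scale
}

def estimated_cost(baseline, chosen):
    """Apply each chosen option's reduction multiplicatively."""
    cost = baseline
    for name in chosen:
        cost *= 1.0 - OPTIONS[name]
    return cost

# Example: an SDG-compliant combination that keeps both domains.
combo = ["synthetic_estimates", "computer_based", "threshold_scoring"]
print(f"{estimated_cost(BASELINE_COST, combo):.1f}")  # 100 * 0.6 * 0.6 * 0.75 = 27.0
```

Under these assumptions, combining the three SDG-compatible options would bring the cost down to roughly a quarter of the full-survey baseline; whether savings actually compound this way in the field is an open question.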
How to meet the specific needs of developing countries
Personally, I consider PIAAC to be the “gold standard” in skills assessment. The problem is that it was originally designed for middle- and high-income countries and therefore doesn’t cover the complete range of foundation skills that are a priority in developing countries.
By 2030, I sincerely hope that every country will be ready to implement PIAAC. Meanwhile, we need an assessment tool specifically designed to reflect the contexts, needs and priorities of low-income countries. The answer may lie in an adapted version of the Literacy Assessment and Monitoring Programme (LAMP), which was originally developed by the UIS for low- and lower-middle-income countries.
With Mini-LAMP, countries would have a streamlined version of the complete set of tools, which have already been field-tested in 10 countries and translated into a range of languages spoken in different regions. They would also have more options and flexibility in implementing the assessment to meet their specific needs. For example, they could use shorter modules to assess literacy and numeracy skills, reducing testing time while continuing to evaluate both domains.
Mini-LAMP would also be computer-based, giving countries a fully adaptive assessment that adjusts to the skills of the individual test taker. This not only ensures that results are available more quickly but also shifts expenditure from data collection to analysis.
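The adaptive mechanism can be sketched with a simple staircase rule: each correct answer routes the test taker to a harder item, each incorrect answer to an easier one. This is a deliberate simplification to illustrate the general idea, not the actual Mini-LAMP routing algorithm; the item levels and step rule are hypothetical.

```python
# Minimal sketch of an adaptive assessment loop (staircase rule):
# the next item gets harder after a correct answer and easier after
# an incorrect one. Illustrative only, not the Mini-LAMP algorithm.

def run_adaptive_test(answer_item, n_items=10, levels=5):
    """answer_item(difficulty) -> True/False for one test item."""
    level = levels // 2  # start in the middle of the difficulty range
    history = []
    for _ in range(n_items):
        correct = answer_item(level)
        history.append((level, correct))
        if correct:
            level = min(levels - 1, level + 1)  # harder next item
        else:
            level = max(0, level - 1)           # easier next item
    return level, history

# Example: a simulated test taker who answers correctly up to level 3.
final_level, _ = run_adaptive_test(lambda d: d <= 3)
print(final_level)  # → 4 (the staircase oscillates just above the taker's ability)
```

Because the item sequence converges on each respondent's own level, an adaptive design can measure a wide skill distribution with fewer items per person than a fixed-form test, which is part of why it yields reliable results across the whole distribution.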
Finally, countries could administer Mini-LAMP directly with the support of a regional body. Mini-LAMP would include a comprehensive implementation package and quality assurance guidelines so that countries could take a decentralized approach to administering the assessment rather than relying on an international organization. In short, they would have the flexibility to meet their specific needs and contexts, with the assurance and support needed to produce quality data for monitoring and policymaking.