Monitoring, Reporting and Dissemination

INTRODUCTORY

“You should look at this section if you have only a limited idea of how assessment data can support monitoring, and how reporting and dissemination can be optimised.”

The purpose of assessment activities is to collect data on students’ skills and knowledge. Once instruments have been developed, testing has taken place and data has been collected and analysed, the next step is to focus on optimal reporting and dissemination.

For school and classroom based assessment, data can be reported very simply. For example, students (and their parents) might be given a grade. They should also be given:

  • A simple explanation of what this means in context (e.g. ‘student X performs better than their classmates in reading but less well in writing’); and
  • Suggestions on how to improve (e.g. ‘keep reading a chapter of a book every day, and practise writing at least 100 words every day’).

If students or parents have limitations in their reading ability, information should be presented in visual form. For the school community, including teachers and school principals, it is possible to present the data in a simple table or chart along with a summary.
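As one illustration, a short sketch in Python (using the pandas and matplotlib libraries; the class results shown are invented purely for illustration) of how such a simple table and chart might be produced:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Invented class-level results: mean percent correct by domain.
    results = pd.DataFrame({
        "Domain": ["Reading", "Writing", "Numeracy"],
        "Class average (%)": [72, 58, 65],
        "School average (%)": [68, 61, 63],
    })

    # A simple table to accompany a short written summary.
    print(results.to_string(index=False))

    # A bar chart is often easier to read than a table of numbers.
    results.plot(x="Domain", kind="bar", rot=0)
    plt.ylabel("Average score (%)")
    plt.title("Class results compared with the school average")
    plt.tight_layout()
    plt.savefig("class_summary.png")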

For large scale assessment, traditional approaches to reporting involve creating a lengthy report full of charts and tables, often in paper form. This is then disseminated to educational officials and administrators, where it often sits in their office on a shelf and gathers dust!

This approach to reporting and dissemination is very old-fashioned, however, and does not take account of the needs of stakeholders. It is increasingly recognised that many people find it very difficult to look at charts and tables full of numbers and interpret what this means. This prevents assessment data being optimised to improve teaching and learning.

As a result, there is growing awareness of the need to design reporting and dissemination strategies, both to make the interpretation of assessment data easier and also to recognise that different stakeholders need different types of information.

The key educational stakeholders for reports on large scale assessment include: government ministers, policy makers, state and district officials, school leaders, teachers, parents, students, the general population and the media.

Each of these stakeholder groups is distinct, with different needs in terms of the amount of detail, how data is presented and the kind of summary provided. All stakeholders should receive a written summary alongside any data display, highlighting key findings and also making suggestions about how to improve teaching policy and practice.

One of the benefits of assessment data is that it can enable monitoring over time. For example, a cohort of students might be assessed in grade 3, again when they reach grade 5, and again when they reach grade 8. This is a very useful way to show how successful the education system is in helping all students improve their learning over time, and to identify particular groups of students that need additional support.

To do this, however, the assessment programme needs to be well designed. For example, giving a test form to students in 2019 and a different test form to students in 2020 does not allow monitoring over time. Instead, student performance will depend on the difficulty of the two test forms and it will not be possible to make any valid comparisons.
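A small simulation makes the problem concrete. The sketch below (Python with numpy, a simple Rasch-style response model, and invented ability and difficulty values) shows the same cohort of students producing noticeably different raw scores on two forms purely because one form is harder:

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_raw_scores(abilities, difficulties):
        """Raw scores under a simple Rasch (1PL) response model."""
        # Probability of a correct response for each student-item pair.
        p = 1 / (1 + np.exp(-(abilities[:, None] - difficulties[None, :])))
        responses = rng.random(p.shape) < p
        return responses.sum(axis=1)  # raw score = number of items correct

    abilities = rng.normal(0.0, 1.0, size=1000)  # identical ability both years
    form_2019 = rng.normal(-0.5, 1.0, size=40)   # invented: easier form
    form_2020 = rng.normal(+0.5, 1.0, size=40)   # invented: harder form

    print("Mean raw score, 2019 form:",
          simulate_raw_scores(abilities, form_2019).mean())
    print("Mean raw score, 2020 form:",
          simulate_raw_scores(abilities, form_2020).mean())
    # The 2020 mean is lower even though ability is unchanged: raw scores
    # confound student ability with test-form difficulty.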

If monitoring over time is desired, the assessment programme – from the initial planning stage and through all activities – has to be done in a way that complies with global best practice, including linking test forms and performing psychometric analysis. It will then be possible to place student performance on a single reporting scale and follow their progress across different years.

To find out more about how assessment data can support monitoring, and how reporting and dissemination can be optimised, go to #Intermediate.

INTERMEDIATE

“You should look at this section if you already know something about how assessment data can support monitoring, and how reporting and dissemination can be optimised, and would like to know more.”

Assessment is a wasted exercise if the data collected is not used to drive improvements in education policy and practice. Unfortunately, the traditional approach to reporting and dissemination often means that education stakeholders are unable to understand and interpret the data collected during assessment. This is particularly acute in large scale assessment but can also be a problem in school and classroom based assessment.

The solution is to design reports to meet the needs of stakeholders. For example, the Education Minister does not need the same level of detail as district officials or parents. In addition, reports should be written to provide simple overviews of key findings, with detail only where it is essential. The use of infographics and data visualisations can also be a helpful way to communicate complex ideas in simple terms. Reports should not only illustrate data but also include written summaries of key points and recommendations for action.

In addition to how reports are designed, it is also important to ensure that they are accessible. It is not safe to assume that they will be passed on: reports given to senior education officials may not reach junior ones, and reports given to school principals may not reach teachers. Instead, dissemination should directly target all important stakeholders and be easily accessible, including through digital devices.

Assessment can enable monitoring over time. This is an important way of tracking student progress from one grade to another, or of identifying whether student achievement has improved as a result of a new policy or the introduction of innovative practices. Monitoring is a powerful tool for professionals in education systems that are focused on systematic improvement over time.

Monitoring cannot be done, however, without the use of robust approaches to test design. For example, if students in grade 8 are given test X in 2019 and students in grade 8 (who will be a different cohort) are given test Y in 2020, it is not possible to make a valid inference that students in year 2020 performed better or worse than those in 2019. Instead, student performance will reflect the level of difficulty of the two different test forms.

The way around this is to use a psychometrically robust test design that includes linkages between test forms and that is analysed to enable the construction of described proficiency scales on which student progress can be tracked. This requires a high level of skill in test development and data analysis but is a worthwhile endeavour for its value in stimulating positive change that is driven by empirical data.
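One widely used family of linking approaches embeds a set of common ‘anchor’ items in both test forms and uses them to place everything on one scale. The sketch below shows mean/mean linking in Python; it assumes that Rasch item difficulties have already been estimated separately for each form (for instance with dedicated psychometric software), and the values shown are invented:

    import numpy as np

    # Invented Rasch difficulty estimates for the same anchor items, as
    # calibrated separately within each year's form (each calibration
    # has its own arbitrary origin).
    anchor_2019 = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
    anchor_2020 = np.array([-0.7, 0.1, 0.6, 1.3, 2.0])

    # Mean/mean linking: the shift between the two scales is estimated
    # as the mean difference in anchor-item difficulties.
    shift = (anchor_2020 - anchor_2019).mean()

    # Place a 2020 ability estimate on the 2019 reporting scale.
    theta_2020 = 0.9  # invented ability estimate on the 2020 scale
    theta_common = theta_2020 - shift

    print(f"Estimated scale shift: {shift:.2f} logits")
    print(f"2020 ability on the common scale: {theta_common:.2f}")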

To find out more about how assessment data can support monitoring, and how reporting and dissemination can be optimised, go to #Advanced.

ADVANCED

“You should look at this section if you are already familiar with how assessment data can support monitoring, and how reporting and dissemination can be optimised, and would like to extend your understanding.”

The fundamental purpose of assessment is to collect valid empirical insights into the proficiency of students in the domain that is being measured in order to inform improvements in educational policy and practice. This objective often fails to be achieved, however, primarily because those education stakeholders who could benefit most from the insights it can offer do not receive relevant information in a format that they can understand.

Data literacy is generally quite low and this means that many people struggle to interpret charts and tables. This problem is made worse when these are presented on the basis of dubious data (such as frequencies or raw scores) and/or with no guidance on how to interpret the data or what action to take. Instead, stakeholders need guidance.

Reports that tell students that they got an ‘A’ or ‘75%’ or ‘7/10’ in a test or exam are not very useful unless students are informed about:

  • The context (for example, what proportion of students received an A, or scored more than 74% or more than 6/10), which is straightforward to compute, as in the sketch after this list; and
  • What action they should take to improve (for example, ‘your grammar needs more practice; please use X resource and complete two exercises every day’).
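As an illustration, a minimal sketch in plain Python (the scores are invented) of how that context can be derived from a cohort's results:

    # Invented test scores (out of 10) for a class of students.
    scores = [4, 5, 5, 6, 6, 7, 7, 7, 8, 9, 9, 10]

    def context_for(score, cohort):
        """Share of the cohort scoring below, and at or above, a score."""
        below = sum(s < score for s in cohort) / len(cohort)
        at_or_above = sum(s >= score for s in cohort) / len(cohort)
        return below, at_or_above

    below, at_or_above = context_for(7, scores)
    print(f"A score of 7/10: {below:.0%} of the class scored lower, "
          f"{at_or_above:.0%} scored 7 or higher.")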

Reports that tell teachers that their students performed relatively well or badly compared to other cohorts are not very useful unless they:

  • Explain exactly which skills or knowledge were weakest (for example, ‘your students’ strongest area was algebra but their weakest area was geometry’);
  • Provide the spread of performance (for example, by indicating which students got which questions right or wrong); and
  • State the expected performance (for example, ‘on average your students answered 60% of geometry questions correctly and the benchmark is 80%’); a way of producing these figures is sketched after this list.
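For illustration, a sketch in Python with pandas (the item-level responses, domain tags and benchmarks are all invented) that produces this kind of per-domain summary and spread:

    import pandas as pd

    # Invented item-level data: one row per student per item, 1 = correct.
    responses = pd.DataFrame({
        "student": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
        "item":    ["q1", "q2", "q3"] * 3,
        "correct": [1, 0, 1, 1, 1, 0, 0, 0, 1],
    })
    domains = {"q1": "algebra", "q2": "geometry", "q3": "geometry"}
    benchmark = {"algebra": 0.70, "geometry": 0.80}  # invented benchmarks

    responses["domain"] = responses["item"].map(domains)

    # Percent correct by domain, compared with the expected benchmark.
    summary = responses.groupby("domain")["correct"].mean()
    for domain, observed in summary.items():
        print(f"{domain}: {observed:.0%} correct "
              f"(benchmark {benchmark[domain]:.0%})")

    # Spread of performance: which students got which items right or wrong.
    print(responses.pivot(index="student", columns="item", values="correct"))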

If the test is designed using MCQs whose distractors have been written to highlight misconceptions, the report should also enable teachers to see which incorrect options students selected (and which misconceptions these represent).
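As a simple illustration, a sketch in plain Python (the item, the options and the misconception labels are invented) of tallying which options students chose:

    from collections import Counter

    # Invented MCQ responses for one item; option B is the correct answer.
    choices = ["B", "C", "B", "A", "C", "C", "B", "D", "C", "B"]

    # Invented mapping from each distractor to the misconception it was
    # written to expose.
    misconceptions = {
        "A": "adds denominators when adding fractions",
        "C": "confuses area with perimeter",
        "D": "reverses the order of operations",
    }

    for option, n in Counter(choices).most_common():
        note = misconceptions.get(option, "correct answer")
        print(f"Option {option}: chosen by {n} students ({note})")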

Reports that tell educational administrators that students in certain schools or districts performed better than others are not very useful unless they:

  • Compare like with like (for example, by comparing schools with a high proportion of students from low socio-economic backgrounds with other schools that have similar characteristics);
  • Provide information on what students know and can do and what they do not, such as through the use of described proficiency scales (one way of doing both is sketched after this list); and
  • Allow student performance to be monitored over time in order to demonstrate the impact of any interventions designed to improve the quality of learning.
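As an illustration, a sketch in Python with pandas (the school data and the proficiency cut-scores are entirely invented) of banding schools by socio-economic profile and attaching a described proficiency level:

    import pandas as pd

    # Invented school-level data: mean scale score and share of students
    # from low socio-economic backgrounds.
    schools = pd.DataFrame({
        "school": ["S1", "S2", "S3", "S4", "S5", "S6"],
        "low_ses_share": [0.72, 0.68, 0.35, 0.30, 0.10, 0.08],
        "mean_scale_score": [455, 470, 500, 490, 540, 525],
    })

    # Compare like with like: band schools by SES profile first.
    schools["ses_band"] = pd.cut(schools["low_ses_share"],
                                 bins=[0, 0.33, 0.66, 1.0],
                                 labels=["low", "medium", "high"])
    print(schools.groupby("ses_band", observed=True)["mean_scale_score"].mean())

    # Invented cut-scores for a described proficiency scale.
    bands = pd.cut(schools["mean_scale_score"],
                   bins=[0, 460, 520, 1000],
                   labels=["below minimum", "meets minimum", "exceeds"])
    print(schools.assign(proficiency=bands)[["school", "proficiency"]])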

Reports that tell educational policy makers that students in certain states, districts or schools performed better than others are not very useful unless they:

  • Identify the characteristics of the students who performed least well (for example, ‘students whose parents have not completed primary school were weakest in mathematics’), as in the sketch after this list;
  • Include suggestions for action (for example, ‘the findings indicate that an intervention to provide additional support to students whose parents have not completed primary school would help to boost overall mathematics performance’); and
  • Enable the performance of students to be monitored over time (particularly so that the impact of any interventions developed on the basis of previous data can be monitored).
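A hedged sketch of such a subgroup breakdown (Python with pandas; the student records are invented, and it is assumed that scale scores have already been linked so that they are comparable across years):

    import pandas as pd

    # Invented student records: a background characteristic and a scale
    # score assumed to sit on a common, linked scale across years.
    students = pd.DataFrame({
        "year": [2019] * 4 + [2020] * 4,
        "parent_completed_primary": [True, True, False, False] * 2,
        "maths_scale_score": [520, 540, 460, 450, 530, 545, 475, 470],
    })

    # Mean performance by subgroup and year: is the gap closing after an
    # intervention targeted at the weaker subgroup?
    trend = (students
             .groupby(["year", "parent_completed_primary"])["maths_scale_score"]
             .mean()
             .unstack())
    trend["gap"] = trend[True] - trend[False]
    print(trend)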

All of the characteristics of reports mentioned above can be achieved, but all of them depend on the design of the assessment programme. If an assessment programme is designed to report at district level, for example, it cannot be retro-fitted to report at the school level. At a large scale, rich insights are most likely to be gathered from assessment instruments that have been robustly designed in line with global best practice. These enable the psychometric analysis of data to produce described proficiency scales on which student progress can be tracked.
