Evaluating School Programs: An Educator's Guide


Needs are usually the strategic priorities identified in the plan.


Inputs are the resources allocated to address those needs. Activities are often referred to as processes or projects. Outcomes and impacts are used interchangeably. Figure 3 gives some common examples of needs, inputs, activities and outcomes.
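The chain of needs, inputs, activities and outcomes can be thought of as a simple data structure. The sketch below is entirely hypothetical — the example entries are invented for illustration, not taken from Figure 3:

```python
# Hypothetical sketch of a program logic model, following the chain of
# needs -> inputs -> activities -> outcomes described above.
# All example entries are invented for illustration.

logic_model = {
    "need": "Improve early reading achievement",
    "inputs": ["Teacher time", "Program funding"],
    "activities": ["Small-group phonics instruction"],
    "outcomes": ["Higher reading growth than expected"],
}

# A process evaluation asks whether the activities were implemented as
# intended; an outcome evaluation asks whether the outcomes were achieved.
for stage in ("need", "inputs", "activities", "outcomes"):
    print(stage, "->", logic_model[stage])
```

Writing a program out this way makes the later evaluation questions concrete: each stage of the chain is something that can be checked.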


Good evaluation will make an assessment of how well the activities have been implemented (process evaluation) and whether these activities made a difference (outcome evaluation). If programs are effective, it might also be prudent to ask whether they provide value for money (economic evaluation).

Figure 3. Some examples of program needs, inputs, activities and outcomes.

Process evaluation is particularly helpful where programs fail to achieve their goals. It helps to explain whether that occurred because of a failure of implementation, a design flaw in the program, or because of some external barrier in the operating environment.

Process evaluation also helps to build an understanding of the mechanisms at play in successful programs so that they can be replicated and built upon. Outcome evaluation usually identifies average effects: were the recipients better off under this program than they would have been in its absence? It can explore who the program had an impact on, to what extent, in what ways, and under what circumstances.

This is important because very few programs work for everyone.


Identifying people who are not responding to the program helps to target alternative courses of action. Economic evaluations help us choose between alternatives when we have many known ways of achieving the same outcomes. In these circumstances, the choice often comes down to what is the most effective use of limited resources.

If programs are demonstrably ineffective, there is little sense in conducting economic evaluations. Ineffective programs do not provide value for money.


While repeating a school year is relatively uncommon in NSW, it is quite common in some countries such as the United States. It is a practice that has considerable intuitive appeal — if a student is falling behind (need), the theory is that an additional year of education (input) will afford them the additional instruction (activity) required to achieve positive educational outcomes (outcome). Evidence suggests that this is true only for a small proportion of students who are held back.

In fact, after one year, students who are held back are on average four months further behind similar-aged peers than they would have been had they not been held back. In situations like this, well-intentioned activities can actually have a negative impact on a majority of students. Once a clear problem statement has been developed, the inputs and activities have been identified, and the intended outcomes have been established, coherent evaluation questions can be developed.

Good evaluation will ask questions such as: was the program implemented as intended, did it achieve its intended outcomes, and did it provide value for money? Evaluation is sometimes framed as a choice between qualitative and quantitative methods. This is a false dichotomy. The method employed to answer the research question depends critically on the question itself. Qualitative research usually refers to semi-structured techniques such as in-depth interviews, focus groups or case studies. Quantitative research usually refers to more structured approaches to data collection and analysis where the intention is to make statements about a population derived from a sample.


Both approaches will have merit depending on the evaluation question. In-depth interviews and focus groups are often the best ways of understanding whether a program has been implemented as intended and, if not, why not.

These methods have limitations when trying to work out impact because, by definition, information is only gleaned from the people who were interviewed. This is where quantitative methods are more appropriate because they can generalise to describe overall effects across all individuals. However, combining both qualitative and quantitative methods can be useful for identifying for whom and under what conditions the program will be effective. For example, CESE researchers investigating the practices of high-growth NSW schools used quantitative analysis to identify high-growth schools and analyse survey results, and qualitative interviews to find out more about the practices these schools implemented.

The possible sources of data to inform evaluation questions are endless. The key issue is to think about the evaluation question and adopt the data and methods that will provide the most robust answer to that question. The number one question that most evaluations should set out to answer is: did the program achieve what it set out to achieve?



This raises the vexing problem of how to attribute any observed outcomes to program activities. No single evaluation approach will give a certain answer to the attribution question. However, some research designs will allow for more certain conclusions that the effects are real and are linked to the program. CESE uses a simple three-level hierarchy to classify the evidence strength, as shown in Figure 4.

There are many variations on this hierarchy, most of which can be found in the health and medical literature. Taking before (pre) and after (post) measures is a good start and is often the only way to measure outcomes. However, simple comparisons like this need to be treated cautiously because some outcomes will change over time without any special intervention by schools. This is where reference to benchmarks or comparison groups is critical. For example, if the typical growth in reading achievement over a specified period of time is known, it can be used to benchmark students against that expected growth.

Statements can then be made about whether growth is higher or lower than expected as a result of program activities. An even stronger design is when students (or schools, or whatever the target group is comprised of) are matched like-for-like with a comparison group. This design is more likely to ensure that differences are due to the program and not due to some other factor or set of factors. These designs are referred to as 'quasi-experiments' in Figure 4. Even better are randomised controlled trials (RCTs), where participants are randomly allocated to different conditions.
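The benchmark comparison described above can be sketched in a few lines. All scores and the benchmark figure here are invented for illustration:

```python
# Hypothetical sketch: comparing observed reading growth against an
# expected benchmark. All figures are invented for illustration.

def growth_vs_benchmark(pre_scores, post_scores, expected_growth):
    """Return the average growth above (or below) the expected benchmark."""
    growths = [post - pre for pre, post in zip(pre_scores, post_scores)]
    avg_growth = sum(growths) / len(growths)
    return avg_growth - expected_growth

# Invented example: scale scores before and after a one-year program,
# with typical (benchmark) growth of 40 points over the same period.
pre = [420, 455, 470, 430]
post = [475, 500, 520, 465]
print(growth_vs_benchmark(pre, post, expected_growth=40.0))  # 6.25
```

A positive result means growth exceeded the benchmark; a negative result means students grew less than similar students typically do, even if their raw scores went up.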

Outcomes are then observed for the different groups and any differences are attributed to the experience they received relative to their peers. RCTs can also be conducted using a wait-list approach where everyone gets the program either immediately or after a waiting period. RCTs allow for strong causal attributions because the random assignment effectively balances the groups on all of the factors that could have influenced those outcomes.
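The RCT logic above — random allocation, then a comparison of group averages — can be sketched as follows. The student names and outcome scores are entirely hypothetical:

```python
import random

# Hypothetical sketch of a randomised controlled trial (RCT): participants
# are randomly allocated to a program group or a wait-list control group,
# and the difference in average outcomes is attributed to the program.
# All names and scores are invented for illustration.

def randomly_allocate(participants, seed=42):
    """Randomly split participants into two equal-sized groups."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def mean(values):
    return sum(values) / len(values)

students = ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"]
program_group, waitlist_group = randomly_allocate(students)

# Invented outcome scores observed for each student after the trial.
outcomes = {"s1": 72, "s2": 65, "s3": 68, "s4": 63,
            "s5": 75, "s6": 69, "s7": 70, "s8": 67}

# Because allocation was random, the groups are balanced on average, so the
# difference in mean outcomes estimates the program's effect.
effect = (mean([outcomes[s] for s in program_group])
          - mean([outcomes[s] for s in waitlist_group]))
print(round(effect, 2))
```

The random shuffle is what does the work: it balances the groups on measured and unmeasured factors alike, which is why the difference in means can be read as a causal effect.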

RCTs have a place in educational research but they will probably always be the exception rather than the rule. RCTs are usually reserved for large-scale projects and wouldn't normally be used to measure programs operating at the classroom level. Special skills are required to run these sorts of trials and most of the programs run by education systems would be unsuited to this research design.

At a minimum, evaluation requires taking baseline and follow-up measures and comparing these over time. As a rule, the less rigorous the evaluation methodology, the more likely we are to falsely conclude that a program has been effective. This suggests that stronger research designs are required to truly understand what works, for whom and under what circumstances.

In all of the above, it is crucial for educators to be open-minded about what the results of the evaluation might show and be prepared to act either way.

If the school agrees to your evaluation request, the process begins. But what if the school denies your request? Where do you go from there?



What if you decide to seek a private evaluation instead of a school evaluation? Get clear information about requesting evaluations. Once you request an evaluation, what happens next? There are a number of steps involved in the process. Your child will take a series of tests that look at different areas of learning, including reading, writing, math, and memory.

What do those tests measure, and who gives them? What rights do you have during the process?


Knowing the ins and outs of the evaluation process helps you be prepared. It also allows you to prepare your child for the experience.


Learn how to prepare for an evaluation. After your child is evaluated, the evaluation team at school will look over all the test results and decide if your child is eligible for special education services through an IEP. If not, the team might recommend support through a plan. What are your options? And what if your child had a private evaluation? How do you get the school to consider those results?


Find out how to interpret evaluation results, and what comes next.