What are the important differences between assessing the impact of a program with a quasi-experiment relative to assessing the impact of a true experiment?
The paramount distinction between a quasi-experiment and a true experiment is visible in the wording itself: "quasi" means "almost" true. A true experiment manipulates an independent variable and randomly assigns subjects to conditions, so that the treatment and control groups are equivalent, on average, before the intervention. Any design that manipulates a treatment but lacks this random assignment amounts to a quasi-experimental design (Rossi, Freeman & Lipsey, 2004).
Quasi-experimental designs arise in situations where no control can be exercised over assignment to conditions, so causal conclusions are harder to draw. Used carefully, they still yield credible results in assessing public programs, and they are appropriate where a randomized design is not feasible but its result must be estimated. The typical quasi-experimental approach is to bring together participating and nonparticipating targets and to match them on relevant pre-intervention variables, so that participants can be compared with nonparticipants on the final outcome measures. The central difference from the true experiment is that the quasi-experimental estimate must carry an additional term representing pre-intervention differences between the groups (Shadish, Cook & Campbell, 2002).
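The matching step described above can be sketched in code. This is a minimal illustration, not a method prescribed by the cited texts: the covariates, subjects, and the greedy nearest-neighbor rule are all hypothetical choices made for the example.

```python
def distance(a, b, covariates):
    """Euclidean distance between two subjects on the chosen covariates."""
    return sum((a[c] - b[c]) ** 2 for c in covariates) ** 0.5

def match_participants(participants, nonparticipants, covariates):
    """Pair each participant with the most similar unmatched nonparticipant."""
    pool = list(nonparticipants)
    pairs = []
    for p in participants:
        best = min(pool, key=lambda q: distance(p, q, covariates))
        pool.remove(best)          # each comparison subject is used only once
        pairs.append((p, best))
    return pairs

# Hypothetical subjects described by pre-intervention variables
participants = [{"age": 16, "pretest": 70}, {"age": 17, "pretest": 55}]
nonparticipants = [{"age": 16, "pretest": 68},
                   {"age": 18, "pretest": 80},
                   {"age": 17, "pretest": 57}]

pairs = match_participants(participants, nonparticipants, ["age", "pretest"])
```

After matching, each pair differs (ideally) only in program participation, so the outcome comparison within pairs approximates the treatment effect, up to any pre-intervention differences the covariates fail to capture.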
Assume that reading levels in a school district had been steadily declining over a period of years and that a new reading program was implemented to counter the trend. After two years, reading levels declined again, but the superintendent announced that the reading program was a success because reading levels would have declined even more without the program. What would be necessary to evaluate the validity of such an interpretation?
The superintendent's interpretation rests on an unobserved counterfactual: the claim that, without the program, reading levels would have declined even more. The program was assessed with a weak quasi-experimental design, essentially a before-after comparison with no control group, so no estimate of that counterfactual exists. To evaluate the claim's validity, one would need a credible estimate of the trend absent the program, for example by projecting the pre-program decline forward (an interrupted time-series comparison) or by comparing the district with similar districts that did not adopt the program. It is also essential to note that the evaluation may have relied on archival test data collected for other purposes; such data are not designed to evaluate the program and do not produce reliable results. For a clear evaluation, a true experiment would be required, in which schools or students are randomly assigned to the program so that the comparison trend can be observed directly (Rossi, Freeman & Lipsey, 2004).
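One simple way to confront the superintendent's claim with data is to fit the pre-program trend and project it into the program years. The sketch below uses ordinary least squares on entirely hypothetical district-average scores; the specific numbers and the assumption of a linear trend are illustrative only.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for a simple trend line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical district-average reading scores before the program (years 1-5)
pre_years = [1, 2, 3, 4, 5]
pre_scores = [82.0, 80.5, 79.0, 77.5, 76.0]

slope, intercept = linear_fit(pre_years, pre_scores)

# Project the old downward trend into the two program years (6 and 7)
projected = {year: slope * year + intercept for year in (6, 7)}
observed = {6: 75.5, 7: 75.2}   # hypothetical post-program scores
```

If the observed post-program scores sit above the projected trend line, the data are at least consistent with the superintendent's claim; if they sit on or below it, the claim fails. Even then, the projection assumes the old trend would have continued unchanged, which is exactly the assumption a control group would test.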
Design an evaluation using nonequivalent control groups to assess the impact of providing high school students with home personal computers and software that provide reviews of the subjects they are studying in school. Make certain you include:
How would you select the control group?
Because the design uses nonequivalent control groups, the comparison students cannot be randomly assigned to conditions. Instead, I would select students from comparable schools or classrooms who match the computer recipients on grade level, prior achievement, and family socioeconomic background.
When would you observe the experimental and control groups?
Both groups should be observed before the computers are provided (a pretest) and again after the students have used them long enough for an effect to emerge (a posttest), once it has been determined that the matching minimizes critical differences between the intervention and control groups. Observing only after the intervention would bias the estimate of net effects (Rossi, Freeman & Lipsey, 2004).
What are some alternative forms of information you could collect from the experimental and control groups?
The design will generate primary data (the pretest and posttest measures) and can be supplemented with secondary data such as school records of grades, attendance, and standardized test scores, usage logs from the provided software, and surveys of students and parents about study habits. Archival data may also be sought if the need arises.
What are some of the statistical tests that might be appropriate?
For the quantitative outcome measures, which differ in magnitude and are straightforward to measure, appropriate tests include an independent-samples t-test comparing the posttest means of the two groups and analysis of covariance (ANCOVA) using the pretest as a covariate to adjust for pre-existing group differences. For categorical variables, a chi-square test is appropriate (Rossi, Freeman & Lipsey, 2004).
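As a concrete illustration of the first of these tests, the sketch below computes Welch's t statistic (the unequal-variances form of the independent-samples t-test) by hand on hypothetical posttest scores; the data and group sizes are invented for the example.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    # Sample variances (n - 1 denominator)
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical posttest reading scores
treatment = [78, 82, 75, 80, 85]   # students who received computers
control   = [72, 70, 74, 69, 73]   # matched comparison students

t = welch_t(treatment, control)
```

A large positive t suggests the treatment group outscored the comparison group by more than chance variation would explain; in practice one would obtain the p-value from a t distribution (or use a statistics library) and, given the nonequivalent groups, prefer the ANCOVA adjustment over this raw comparison.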
Suppose that a new treatment was offered to patients of a group medical practice in an upper-middle-class suburb. As a comparison group, similarly aged patients with the same diagnosis were found in a county hospital providing free (Medicaid) coverage to poor families. Evaluate this nonequivalent control group evaluation design.
This nonequivalent control group design is weak: the comparison group differs systematically from the treatment group in income, insurance coverage, and likely in baseline health and access to care, so any difference in outcomes may reflect selection rather than the treatment. The conclusions to be drawn from such a comparison are not in any way causal. It is therefore preferable to use a true experimental design, in which patients are randomly assigned to the new treatment or to standard care, so as to make reliable evaluations; where that is impossible, the available data should at least be adjusted statistically for the pre-existing differences between the groups (Martin & Kettner, 1996).
Martin, L.L. & Kettner, P.M. (1996). Measuring the performance of human service programs. Thousand Oaks, CA: Sage.
Rossi, P.H., Freeman, H.E. & Lipsey, M.W. (2004). Evaluation: A systematic approach. Thousand Oaks, CA: Sage.
Shadish, W.R., Cook, T.D. & Campbell, D.T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.