This paper is an academic critique of a quantitative research article written by M. O. Thirunarayanan and Aixa Perez-Prado (2002) titled “Comparing Web-Based and Classroom-Based Learning: A Quantitative Study.” In their article, the authors conducted a study to compare the achievement of students enrolled in two different sections of the same course in English to Speakers of Other Languages (ESOL). One section was taught in the classroom while the other was web-based. My critique of the article focuses on the hypothesis, sampling, choice of variables, instrumentation, discussion, and conclusion.
Given the purpose of the research study, the authors present their problem more as a statement than as a question. The problem statement notes that many studies have been conducted to measure the achievement of students in non-web-based instruction (Barry & Runyan, 1995; Chu & Schramm, 1975; Schramm, 1962; Whittington, 1987; Wilkinson, 1980). However, fewer studies were available in the web-based instruction domain because, at the time of publication, “web-based distance education, [was] a relatively new phenomenon” (Thirunarayanan & Perez-Prado, 2002, p. 131).
The research article offers one null hypothesis stating that “there is no significant difference in the achievement of students enrolled in distance education courses when compared with the achievement of students enrolled in traditional or classroom-based courses” (Thirunarayanan & Perez-Prado, 2002, p. 131).
The null hypothesis is apparently derived from the literature used to present the problem statement. However, the authors do not discuss that literature in any detail, nor do they give a clear view of, or reason for, their choice of hypothesis. The cited literature concentrates on distance education in settings quite different from the subject of the research, and therefore does not serve as a good foundation.
Research Design and Variables
To test the hypothesis, the authors employed a quasi-experimental approach with a pretest-posttest design (Thirunarayanan & Perez-Prado, 2002). Identifying the independent and dependent research variables is an essential task in any experimental, or quasi-experimental, research. In their article, the authors have not explicitly identified these variables as such; in fact, the word variable does not occur even once in the text of the article. However, it is safe to assume that the independent variable in the experiment is the method of instruction of the “ESOL… Principles and Practices II (TSL 4141)” course (Thirunarayanan & Perez-Prado, 2002, p. 132), and that the dependent variable is the performance of the students taking this course.
The research took place over the duration of one semester. However, no mediating or intervening variables were taken into account, even though many factors could have interfered with the progress of the instruction. The authors assume that the only operative parameter is the independent variable, despite their discussion of a multitude of extraneous variables. This is an important shortcoming that raises questions regarding internal validity. Did the authors ignore extraneous variables on purpose? The authors admitted that the two sections featured differences in instruction beyond what was intended to be compared. For instance, study groups were not constant in the web-based section. The classroom students met the instructor weekly, while the other section met her only three times during the semester. The office hours were the same for both sections, despite the fact that the web-based students may not have had easy access to the campus, and online office hours may not be considered equivalent. The authors dismissed these variables, but their arguments are, in my opinion, weak (see Thirunarayanan & Perez-Prado, 2002, p. 133).
For the purpose of conducting their research, Thirunarayanan and Perez-Prado (2002) chose the students of one particular instructor who teaches the TSL 4141 course. The total sample consisted of 60 students divided into two sections, classroom-based (n = 31) and web-based (n = 29). The choice of neither the course nor the instructor was justified by the authors, which suggests convenience sampling. The authors indicated that 95% of the students were in their early 20s and that only five of the 60 students were male. Students were also not randomly assigned to the two sections but were left to choose their sections, based on unidentified criteria, before it was decided which section would be taught in which format. These facts suggest numerous other interfering variables that should have been identified and accounted for in the research. A much larger sample should have been employed to ensure that these variables did not have a significant influence on the results.
The ultimate population represented by the sample chosen by the authors is not really clear. Their hypothesis suggests that the target population is any student being instructed using either of the two methods, namely classroom-based and web-based instruction. Nevertheless, the particularities of the design, namely the choice of an ESOL course, the small sample, and the age and gender of the majority, do not make the sample a valid representation of the intended population.
The authors employed a pretest-posttest design with no control group to measure the performance of the sample. They administered the same test before and after instruction (at the beginning and end of the semester). The tests were independently scored by two separate individuals, with 95% and 96% agreement on the pretest and posttest respectively. The instructor resolved any disagreements (Thirunarayanan & Perez-Prado, 2002).
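The agreement figures the authors report are simple percent agreement between the two scorers. As a rough illustration of how such a figure is computed (the score lists below are hypothetical, since the article publishes only the agreement percentages, not the raw scores):

```python
# Illustrative sketch: percent agreement between two independent raters.
# All scores below are hypothetical; the article reports only the
# agreement percentages, not the underlying data.

def percent_agreement(rater_a, rater_b):
    """Percentage of items on which the two raters gave the same score."""
    if len(rater_a) != len(rater_b):
        raise ValueError("both raters must score the same set of tests")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# Hypothetical example: 19 of 20 scores agree.
rater_a = [3, 4, 2, 5, 4] * 4
rater_b = list(rater_a)
rater_b[0] = 2  # one disagreement, to be resolved by the instructor
print(percent_agreement(rater_a, rater_b))  # -> 95.0
```

Note that percent agreement does not correct for agreement expected by chance, which is one more limitation the authors leave undiscussed.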
The article did not mention whether the pretest was administered before or after the two groups were assigned to online or classroom instruction. Moreover, the authors were not clear regarding the format of the pretest and posttest. Were the tests administered in pen-and-paper format for the classroom group and online for the web-based group? Or were both in pen-and-paper format? Which would be more accurate: to employ the same format for both groups, or to employ the format relevant to each? The authors also did not take into consideration the preferred learning style of the students, which, I argue, may affect the results. A student who took all tests during the semester online would be more comfortable taking the posttest online as well.
The results of this research study supported the authors’ null hypothesis. The authors averaged the results of each test for both sections and used the means to compute a t-test value for both the pretest and the posttest (Thirunarayanan & Perez-Prado, 2002). The statistical results were presented in a single table showing the mean M for each test and section, together with its associated t-test value. Since the authors are testing the method of instruction, the relevant figure is the p-value of the posttest, which turns out to be greater than α = 0.05, justifying the failure to reject the null hypothesis.
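For readers unfamiliar with the procedure, an independent-samples t-test of the kind the authors apply can be sketched as follows. The score lists here are hypothetical, since the article reports only the means and t values, not the individual scores:

```python
# Minimal sketch of a pooled-variance independent-samples t-test.
# The posttest scores below are hypothetical illustrations only.
import math

def two_sample_t(x, y):
    """Student's t statistic for two independent samples (pooled variance)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(pooled * (1 / nx + 1 / ny))

# Hypothetical posttest scores for the two sections:
classroom = [78, 82, 75, 90, 85, 80]
web_based = [80, 79, 77, 88, 84, 81]
t = two_sample_t(classroom, web_based)
# If the p-value for this t (with nx + ny - 2 degrees of freedom)
# exceeds alpha = 0.05, we fail to reject the null hypothesis.
```

With the article’s section sizes (n = 31 and n = 29), such a test would have 58 degrees of freedom.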
However, one may question the validity of the whole test and its interpretation from a logical standpoint. What was the point of the pretest? If the pretest was not used as a baseline, then the posttest does not really prove anything at all. The authors’ interpretation of the result is very superficial. One may argue that, if the online group performed significantly worse than the offline group before instruction but performed equally well after instruction, then the online group has, in comparison, improved more. The authors briefly discussed this notion but dismissed it as insignificant (Thirunarayanan & Perez-Prado, 2002).
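The gain-score argument above can be made concrete with purely hypothetical numbers: if the two sections finish with equal posttest means but the web-based section started lower, its improvement is larger.

```python
# Hypothetical means illustrating the gain-score argument;
# the actual means reported in the article differ.
pre_classroom, pre_web = 60.0, 50.0    # web-based section starts lower
post_classroom, post_web = 80.0, 80.0  # both sections finish equal

gain_classroom = post_classroom - pre_classroom  # 20.0
gain_web = post_web - pre_web                    # 30.0

# Equal posttest means mask the web-based section's larger improvement.
print(gain_classroom, gain_web)  # -> 20.0 30.0
```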
Personal Analytic Statement
In my opinion, the results of the research are neither conclusive nor valid. In general, I find the research lacking in scientific merit and of little value. The sample was too small to represent the intended population. The variables were not clearly defined, and the extraneous variables were ignored. There was no control group or control value to compare against. Finally, the discussion of the results was superficial and inconclusive, as the authors repeatedly noted that while the results differ numerically, the differences are not statistically significant (Thirunarayanan & Perez-Prado, 2002).
Barry, M., & Runyan, G. B. (1995). A review of distance-learning studies in the U.S. military. American Journal of Distance Education, 9(3), 37-47.
Chu, G. C., & Schramm, W. (1975). Learning from television: What the research says (rev. ed.). Stanford, CA: Institute for Communication Research. (ERIC No. ED 109 985)
Cohen, L., Manion, L., & Morrison, K. (2000). Research methods in education (5th ed.). New York: Routledge/Falmer.
Schramm, W. (1962). What we know about learning from instructional television: Educational television: The next ten years. Stanford, CA: Institute for Communication Research.
Thirunarayanan, M. O. & Perez-Prado, A. (2002). Comparing web-based and classroom-based learning: A quantitative study. Journal of Research on Technology in Education, 34(2), 131-137. Retrieved from ProQuest Education Journals database.
Whittington, N. (1987). Is instructional television educationally effective? A research review. American Journal of Distance Education, 1(1), 47-57.
Wilkinson, G. L. (1980). Media in instruction: 60 years of research. Washington, DC: Association for Educational Communications and Technology.