Asia-Pacific Forum on Science Learning and Teaching, Volume 11, Issue 2, Article 1 (Dec., 2010)
Didem INEL and Ali Günay BALIM
The effects of using problem-based learning in science and technology teaching upon students’ academic achievement and levels of structuring concepts



Method

The study employed a non-equivalent, pretest-posttest control group design, a quasi-experimental research method (Christensen, 2004; Cohen, Manion and Morrison, 2005). The sample consisted of 41 seventh-grade students enrolled in an elementary school in Turkey. Table 1 is a symbolic representation of the study design.

Table 1. Symbolic representation of the study design

Groups               Pretest   Process                                                      Posttest
Experimental Group   T1, T2    Science and Technology curriculum – Problem-based learning   T1, T2
Control Group        T1, T2    Science and Technology curriculum                            T1, T2

T1 = Open-Ended Questions to Determine the Levels of Concept Construction
T2 = Academic Achievement Test on the Unit

Two groups were formed: an experimental group and a control group. During the four-week experimental application, the experimental group was taught using the problem-based learning (PBL) method, while the control group was taught on the basis of the science and technology curriculum. The data collection instruments were administered to the students in both groups before and after the experimental application.

Implementation of the PBL Method

In this study, during a four-week quasi-experimental application, the experimental group was taught using the PBL method and the control group was taught using only the existing science and technology curriculum in Turkey. Both groups were taught by the same science instructor. In applying problem-based learning in the experimental group, a modular approach was used: the authors designed modules and scenarios containing problems about the unit "Systems in Our Body". Four modules were used in the experimental application, each consisting of three or four PBL sessions. In each module, real-life problems addressed concepts related to the digestive system, the excretory system, the nervous system and the endocrine system. The problems in the modules were intended to help the students learn the concepts of the unit and to draw their interest and attention to the lessons. The students were divided into four groups of five students each, and throughout the problem-based learning process they worked together, shared ideas and discussed in order to solve the problems. During the PBL sessions, the tutor only guided the students in searching for information about the concepts in the modules, discussing ideas and solving the problems. The academic achievement test on the "Systems in Our Body" unit and the open-ended questions used to determine the levels of concept construction were administered to the students in both groups before and after the experimental application.

Data Collection Instruments

1. Academic Achievement Test on the Unit "Systems in Our Body"

In the study, an academic achievement test on the unit "Systems in Our Body" was developed to determine the cognitive levels of seventh-grade students regarding the digestive system, the excretory system, and the control and regulatory systems, all of which are covered in the unit. Priority was given to validity in developing the academic achievement test, with content validity as the first aim. Accordingly, at least two questions were formulated for each acquisition listed for the relevant subject in the science and technology curriculum, and the cognitive levels of the questions were mapped in a table of specifications. At the end of this process, a first version of the test consisting of 48 multiple-choice questions was produced. This version was submitted to two professors and two research fellows, experts in their respective fields, for their opinions so that face validity and content validity could be ensured. The experts were asked to judge each test item on the scales "relevance to the scientific field," "relevance to the acquisitions" and "relevance to the cognitive domain," with the options "relevant" and "irrelevant." Agreement among the experts' responses to the test questions was calculated as an agreement percentage, which was 93% for "relevance to the scientific field," 87.5% for "relevance to the acquisitions," and 85% for "relevance to the cognitive domain." Şencan (2005) argues that an agreement percentage over 70% represents a good level of agreement among experts.
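The agreement-percentage calculation described above can be sketched as follows. This is a hypothetical illustration: the expert ratings below are invented, since the paper does not publish the raw rating data, and an item is counted as an agreement only when every expert gave it the same rating.

```python
# Hypothetical illustration of the agreement-percentage calculation;
# the expert ratings below are invented, not the study's data.

def agreement_percentage(ratings):
    """ratings: one list per item, each holding the experts' ratings.
    An item counts as an agreement only when all experts rated it the same."""
    agreements = sum(1 for item in ratings if len(set(item)) == 1)
    return 100.0 * agreements / len(ratings)

# Four (hypothetical) experts rating four items
ratings = [
    ["relevant", "relevant", "relevant", "relevant"],        # agreement
    ["relevant", "irrelevant", "relevant", "relevant"],      # disagreement
    ["relevant", "relevant", "relevant", "relevant"],        # agreement
    ["irrelevant", "irrelevant", "irrelevant", "irrelevant"],  # agreement
]
print(agreement_percentage(ratings))  # 75.0
```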
In addition, during the expert opinion stage, necessary corrections were made in line with the expert opinions: seven questions were removed from the test as they were found irrelevant to the acquisitions, and two questions relevant to the acquisitions were added. The resulting 43-item version of the test was then ready for the preliminary application, in which 370 seventh-grade students participated. An item analysis of the resulting data yielded item difficulty values for the test questions, and items with values between 0.351 and 0.765 were selected for the final version of the test. The test had an average difficulty value of 0.50, which arguably corresponds to a moderate difficulty level. The item discrimination index was also calculated as part of the item analysis: items with a discrimination index below 0.30 were removed from the test, items with an index over 0.40 were included in the final version without changes, and items with an index between 0.30 and 0.40 were included after necessary corrections. At the end of the item analysis, the final version of the test consisted of 34 multiple-choice questions. Finally, the reliability of the test questions was examined, and the KR-20 value was found to be 0.89.
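The item statistics reported above (item difficulty, the discrimination index and KR-20 reliability) can be sketched as follows. The 0/1 score matrix is invented for illustration, and the 27% upper/lower grouping used for the discrimination index is a common convention, not a detail reported in the paper.

```python
import numpy as np

# Hypothetical sketch of the item-analysis statistics described above.
# The 0/1 score matrix (rows = students, columns = items) is invented;
# the study's raw data are not published.

def item_difficulty(scores):
    # Proportion of students answering each item correctly (p-value).
    return scores.mean(axis=0)

def item_discrimination(scores, group_frac=0.27):
    # Upper-lower discrimination index: difference in difficulty between
    # the top- and bottom-scoring groups (27% is a common convention).
    order = np.argsort(scores.sum(axis=1))
    n = max(1, int(round(group_frac * scores.shape[0])))
    lower, upper = scores[order[:n]], scores[order[-n:]]
    return upper.mean(axis=0) - lower.mean(axis=0)

def kr20(scores):
    # Kuder-Richardson 20 reliability for dichotomously scored items.
    k = scores.shape[1]
    p = scores.mean(axis=0)                     # per-item difficulty
    var_total = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / var_total)

scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
])
print(item_difficulty(scores))  # [0.8 0.6 0.4 0.2]
print(round(kr20(scores), 3))   # 0.907
```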

2. Open-Ended Questions to Determine the Levels of Concept Construction

Eleven open-ended questions about the "Systems in Our Body" unit were formulated in order to determine the students' levels of concept construction and to compare the control and experimental groups. Opinions were sought from three experts to establish the content and face validity of the open-ended questions. The experts' opinions were obtained using a scale with the sections "relevance to the scientific field," "relevance to the acquisitions" and "relevance to the cognitive domain," each rated as "relevant" or "irrelevant," and the experts were asked to state their corrections. The agreement percentage among the experts was 90% for "relevance to the scientific field," 85% for "relevance to the acquisitions," and 85% for "relevance to the cognitive domain"; these values arguably indicate a good level of agreement. Furthermore, several seventh-grade students were asked to read the open-ended questions prior to the experimental application, and corrections were made wherever a question was unclear. The data obtained from the open-ended questions, administered to the experimental and control groups as pretest and posttest, were scored by three experts, each question within a range of 0-4. According to the correctness of the students' responses, answers were scored 4 for "fully correct," 3 for "partially correct," 2 for "slightly correct," 1 for "less correct," and 0 for "no response or fully incorrect" (Abraham, Williamson and Westbrook, 1994). Agreement among the experts' total scores for each individual in the experimental and control groups was calculated using intraclass correlation analysis.
Şencan (2005) states that intraclass correlation analysis is used to determine inter-expert agreement for continuous, normally distributed data. The data obtained from the three experts were therefore first tested for goodness of fit to the normal distribution using the Kolmogorov-Smirnov test. The analyses yielded significance values above .05, indicating that the data were normally distributed. In the intraclass correlation analysis, expert agreement was .86 for the pretest and .95 for the posttest.
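As one way to reproduce such an analysis, an intraclass correlation can be computed from a subjects-by-raters score matrix. The paper does not state which ICC form was used; the sketch below implements a common two-way, average-measures form (often labelled ICC(2,k)), with invented scores.

```python
import numpy as np

# Hypothetical sketch of an intraclass correlation for inter-rater
# agreement. The ICC form shown (two-way, average measures, ICC(2,k))
# is an assumption; the score matrix is invented for illustration.

def icc2k(scores):
    """scores: (n_subjects, k_raters) array of expert scores."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)  # per-subject means
    col_means = scores.mean(axis=0)  # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # between-subjects mean square
    msc = ss_cols / (k - 1)              # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (msc - mse) / n)

# Three (hypothetical) experts scoring five students out of 44
scores = np.array([
    [30, 32, 31],
    [24, 25, 25],
    [18, 17, 19],
    [27, 28, 27],
    [12, 11, 12],
])
print(round(icc2k(scores), 2))
```

Normality could first be checked with `scipy.stats.kstest`, mirroring the Kolmogorov-Smirnov step described above.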

 

 


Copyright (C) 2010 HKIEd APFSLT. Volume 11, Issue 2, Article 1 (Dec., 2010). All Rights Reserved.