The Effect of Prior Knowledge State Assessment and Progress


  • 8/14/2019 The Effect of Prior Knowledge State Assessment and Progress


    The role of prior knowledge state assessment and progress assessment in learning.

F. Dochy, Center for Teacher Training & Research, University of Leuven, Belgium

G. Moerkerke, D. Sluijsmans, Center for Educational Technology and Expertise, Open University, Heerlen, The Netherlands

    Introduction: A model for the integration of assessment and learning in open education

In 1992, Dochy developed a model for higher education in which informal assessment, formal assessment and learning are integrated. According to Glaser and De Corte (1992), this model and other attempts at the integration of assessment and instruction point at a major area for continued research (p. 1). This model stemmed from a variety of problems observed in higher distance education, like the problem of equal opportunities for all students at the start, the problem of the multifunctionality of modules in modular education, the problem of the sequence in which course modules are studied, and the problem of making appropriate use of prior knowledge. It was claimed that some of these problems could be solved by organizing assessment and learning in a rational way. This claim is in concordance with the so-called overall assessment prophecy advocated by Glaser (1981, 1990). According to the overall assessment prophecy, it is no longer possible to consider assessment only as a means of determining which individuals are already adapted to, or have the potential for adapting to, mainstream educational practice. A conceivable goal is to reverse this sequence of adaptation; rather than requiring individuals to adapt to a means of instruction, the desired objective is to adapt the conditions of instruction to individuals in order to maximize their potential for success (Dochy, 1992, p. 22). This objective can be realized if learning takes an individual's profile of knowledge and skills into account (Pellegrino & Glaser, 1979). In this way assessment will not only become an integral part of instruction, but will also support decisions about the way instruction and learning should be designed.

In the Dochy model, students themselves are responsible for formative assessment (prior knowledge state tests and progress tests), which helps them to get started in their study and gives them the opportunity to monitor their progress. Instructors, on the other hand, are responsible for formal final testing (which certifies students). The prior knowledge state (PKS) is defined as the knowledge state comprising existing declarative and procedural knowledge that meets the following conditions: "it is present before the implementation of a particular learning task; it is available or able to be recalled or reconstructed; it is relevant for the achievement of the objectives of the learning task; it is organized in structured schemata; it is to a certain degree transferable or applicable to other learning tasks; and it is dynamic in nature" (Dochy & Alexander, 1995). Others define prior knowledge as the knowledge, skills or abilities that students bring to the learning environment prior to instruction (Jonassen & Grabowski, 1993). In this thesis, however, we will use the definitions Dochy and Alexander (1995) gave for prior knowledge, prior knowledge state assessment and related concepts. The Dochy model is closely related to Nitko's (1989) opinion about the integration of assessment and learning. Glaser and De Corte (1992) and Snow (1990) believed that

innovation, when related to the overall assessment prophecy and the use of progress tests, may lead to interesting results. Regularly using tests throughout the learning process makes it possible to match the conditions of instruction to the individuals so as to increase every student's chance of success. Snow (1990) argues that new approaches to cognitive testing in education should be aimed at the initial state, daily progress in competence and strategy, weekly progress in knowledge, or monthly progress in a course. It is clear that our above-mentioned model of the progress test fits the last approach mentioned by Snow.

There are two broad methods for the assessment of prior knowledge: one can ask students to estimate their own mastery level, or one can estimate prior knowledge by administering so-called prior knowledge state tests. In a review of research on informal estimation of the level of prior knowledge, Dochy (1992) concluded that students are not capable of giving estimates of their own level of prior knowledge as accurate and reliable as prior knowledge state tests do. Falchikov and Boud (1989) meta-analyzed 48 student self-assessment studies and concluded that most studies indicated overrating on the part of the students. It appears that more experienced students in a particular field (that is, students with more prior knowledge) are more accurate estimators, whereas students taking introductory courses appear to make particularly inaccurate self-assessments. Wagemans, Valcke and Dochy (1991) showed that the estimation of the prior knowledge state level through self-assessment by the students is not reliable, while objective prior knowledge state tests are. Moreover, Dochy and Bouwens (1992) found that students have widely differing conceptions of 'prior knowledge'. This makes self-rating procedures invalid. Supported by research that stated the importance of prior knowledge for learning and by research indicating the inability of students to provide a reliable and valid estimate of their prior knowledge, Dochy and his colleagues used prior knowledge state tests for the assessment of the prior knowledge state in several studies (Dochy, 1995; Dochy, Valcke & Wagemans, 1991; Portier & Wagemans, 1995).

The literature review (Dochy, 1992 (http://ericae2.educ.cua.edu/barista/dochy1/); Moerkerke, 1996) revealed that the amount of prior knowledge has an important positive effect on study results. The question remains whether prior knowledge state assessment can be used as an instrument to facilitate and direct learning. Based on the results of the literature review of the studies on the effects and role of prior knowledge in learning, one might expect that students who use instruments like prior knowledge state assessments may benefit from the insights into their mastery of the learning objectives. The literature review also revealed that formative assessment has a positive effect on study results and on end-of-course summative assessment. One might expect that in higher education, where adult students themselves are responsible for learning, formative assessment also has a positive impact on study results.

    Earlier ideas on using prior knowledge as a tool for learning.

Based on the literature in this area of research, there is little doubt that prior knowledge could be a tool for learning. The main question then would be: how? Is there an efficient way to map a student's prior knowledge? We therefore conducted an investigation whose research agenda was to look for indicators of prior knowledge and to explore further the value of the variable 'student type' as an indicator. Student type was defined as the choice for a certain study program the student followed, such as Law or Economics.

These indicators are expected to correlate to a high degree with the test results of students. 'Personal and contextual variables' are presented as an operationalization of these 'indicators'. Such a type of indicator is easy to define and information about them is easy to collect. It is expected that they can be considered as good indices of the prior knowledge of individual students. These indicators can also help to define specific sub-populations within the experimental group which, for example, possess restricted or elaborated prior knowledge. Law students (LS) and economics students (ES) are considered to have different prior knowledge levels. But next to this possible indicator 'student type', we found 77 other possible indicators (for an overview, see Dochy, Bouwens, Wagemans and Niestadt, 1991).

The results of this investigation can be summarised as follows:

- The expected differences between economics students and law students are not significant, but, nevertheless, there is a tendency for economics students to perform better than law students.

- The difficulty level of the test items does not reveal significant differences between ES and LS students.

- The hypothesis that personal and contextual variables could be valuable as indicators of prior knowledge is to be rejected. This fits with the findings of earlier research (see Powell, Conway and Ross, 1990). Past research has not yet been able to detect relevant and valid 'indicators'. 'Interestingly, the level of previous educational experience (formal qualifications), although measured in the study, did not enter the model as a significant predictive factor' (Powell et al., 1990).

- The slightly significant correlations between test results and specific personal and contextual variables (e.g. preliminary educational level) are of little use as indicators of prior knowledge since they cannot be manipulated.

    Students' opinions about prior knowledge state assessment and progress assessment

Past research, theories about student progress, and theories about the integration of assessment, learning, and instruction suggest that prior knowledge state assessment might have a positive effect on study results. Moreover, it has become clear that the use of assessment techniques as instruments for gathering information on study progress and for directing study support should be increased (Moerkerke, 1996). Implementing such results from research and theoretical notions in educational practice is not that easy. A first step is a clear answer to the question of whether students themselves support the overall assessment prophecy and whether they clearly intend to use the instruments. In particular, we want to know this in an educational context where students have to take many instructional decisions themselves. In this study two questions are investigated:

1. Are students in open higher education willing to use prior knowledge state tests?

2. Are students in open higher education willing to use progress tests?

These questions were investigated using a questionnaire among students of the Open University. In distance education, some background variables, like previous education, are known to interact with students' preferences and opinions. These variables will also be part of the analyses. The questions were investigated in two surveys. In each survey, 2000 students were selected.

    Survey I: Opinions about prior knowledge state assessment and progress assessment

    Method

    Population

A random sample of 2000 students was taken from the student population that subscribed for a course between September 1990 and September 1991 (N = 35,313). From this sample, 1159 students (58%) returned the questionnaire.


    Instrument

As part of the yearly research with students, we developed a questionnaire which included a set of questions about prior knowledge state assessment and progress assessment. The design of the questions was based on the assumption that students were not familiar with alternative assessment formats, like prior knowledge state assessment and progress assessment.

    Data analysis

Data analysis was executed by means of a descriptive analysis of the answers on the questionnaire and χ²-analyses. The χ²-analysis was executed to evaluate the relation between each of the specific student-type variables and each of the dependent variables as measured by specific (sets of) questions. The hypothesis of statistical independence between the variables was tested with Pearson's χ²-test (Hays, 1989). The hypothesis was rejected when p fell below the chosen significance level.
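The independence test described above can be sketched as follows. The contingency table is hypothetical and only illustrates the computation; it is not data from the surveys.

```python
# Illustrative sketch of a Pearson chi-square test of independence.
# The counts below are hypothetical (NOT survey data):
# rows = student type (no diploma / diploma in higher education),
# columns = answer on a questionnaire item ("wants PKS tests": yes / no).
def chi_square(observed):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            e = row_totals[i] * col_totals[j] / n  # expected count
            stat += (o - e) ** 2 / e
    return stat

observed = [[310, 210],   # no diploma: yes / no (hypothetical)
            [250, 340]]   # diploma:    yes / no (hypothetical)
chi2 = chi_square(observed)
dof = (len(observed) - 1) * (len(observed[0]) - 1)
print(f"chi2 = {chi2:.2f}, dof = {dof}")
# For dof = 1 the critical value at alpha = .05 is 3.841.
print("reject independence" if chi2 > 3.841 else "retain independence")
```

The statistic sums, over all cells, the squared difference between observed and expected counts relative to the expected counts; it is compared with the χ² distribution whose degrees of freedom equal (rows - 1) × (columns - 1).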


    Students' opinions about the way to use prior knowledge state assessment in learning

The results of prior knowledge state tests can be used for different purposes. Insight into one's prior knowledge state can support decisions like the choice of the content of the next instructional unit, planning the learning process, and asking for support. In general, the students felt that the most appropriate use of a prior knowledge state test was for choosing the next instructional unit (about 45% of the respondents were positive), followed by using prior knowledge state assessment for planning learning (about 35% of the respondents were positive). The use of prior knowledge state tests for determining whether one should ask for support, and how to do so, was still acknowledged by 23% of the respondents.

Among subgroups there were differences in the number of purposes the respondents chose and in the nature of the purposes. The most striking differences were: respondents without a diploma in higher education indicated more often that they wanted to use prior knowledge state assessment than students with a diploma in higher education; students with 0 to 2 certificates indicated more often that they wanted to use prior knowledge state assessment than students with 3 or more certificates; students with 0 certificates emphasized the use of prior knowledge state assessment for content choices, whereas students with 1 or 2 certificates emphasized the use of prior knowledge state assessment for planning the study; students in the technical sciences, economics, social sciences and natural sciences also indicated more often that they wanted to use prior knowledge state tests than students in other disciplines.

    The need for the development of alternative forms of feedback

More than half (about 55%) of the students who had some experience in taking formal examinations indicated that feedback on their prior formal examinations was insufficient for learning purposes. About 60% of all students would appreciate personal feedback on prior knowledge tests and progress tests when used as instruments for self-assessment. These results indicate that regular feedback should be changed in such a way that students are able to use assessment results not only for a quantitative evaluation of mastery, but also for diagnostic purposes.

Survey II: Opportunity to take prior knowledge state tests and progress tests and expected use of test results

    Method

Population

A random sample of 2000 students was taken from the student population that subscribed for a course (N = 35,005). Of this sample, 1116 students (56%) returned the questionnaire.

    Instrument

Earlier research, as well as the results of Survey I, showed that new assessment functionalities could lead to more efficient learning. These functionalities should be supported by a flexible test service system. As part of the yearly research, the following four questions were asked concerning a more flexible test service system for the Open University. These questions related to concrete aspects of the model for the integration of assessment, instruction, and learning (Dochy, 1992; http://ericae2.educ.cua.edu/barista/dochy1/). These aspects included the opportunity to take prior knowledge state tests and progress tests, the use of prior knowledge state tests and progress tests as parts of formal assessment, and the possibility for students to compose their own modules, based on their knowledge base, and to get certificates for them.

    Procedure and data analysis

The questionnaire was sent to students. The analysis of data was executed by means of a descriptive analysis of the answers on the questionnaire and χ²-analyses. The χ²-analysis was executed to evaluate the relationship between each of the specific student-type variables and each of the dependent variables as measured by the questions. The student-type variables were identical to those in the 1992 survey: previous educational experiences, success in studying in distance education, and content domain. The hypothesis of statistical independence between the variables was tested with Pearson's χ²-test (Hays, 1989). The hypothesis was rejected when p fell below the chosen significance level.


    The freedom to compose personal study modules and get certificates for them

Of all respondents, 54% felt it was important that one could compose personal study modules that lead to certification. Significant differences between subgroups were found for the student variable study success (χ² = 34.297, p = .0000). There is a monotone relation between wanting more freedom and the number of certificates, varying from no certificates (66.3%) and 1 or 2 certificates (57.7%) to 3 to 6 certificates (48.0%) and 7 or more certificates (43.1%).

    Conclusions

Survey II investigated the willingness of adult students to take prior knowledge state tests and progress tests. A substantial number of the respondents would like to have the opportunity to take prior knowledge state tests (49%) or progress tests (55%), and thought that good results on these tests should be rewarded with credits or certificates (58%). Moreover, a majority of respondents (54%) thought that one should be able to get certificates for unique study modules composed by the individual student. Beginning students with 0 to 2 certificates, in particular, wanted the opportunity to use alternative assessment forms (70% and 66%).


Studies on the effects of using assessment as a tool.

Based on the results of another study of ours on students' opinions, however, it can be expected that not all students will be willing to use these tests (Dochy & Moerkerke, 1994b). Students with a background in higher education and students who were more successful were less positive about opportunities to take prior knowledge state tests and progress tests than students without a diploma in higher education and students who had just started to study in open distance education. We expect that the use of prior knowledge state assessment and progress assessment is not as appealing to students with a background in higher education, or to students who are successful in studying at the Open University. With this research we want to answer the following questions:

1. Does formative prior knowledge state assessment or formative progress assessment lead to an increased study pace?

2. Does formative prior knowledge state assessment or progress assessment have a positive effect on final test results?

3. Can student characteristics that explain differences in participating in the experiment, and thus in taking part in formative assessment procedures, be identified?

4. Are there treatment-related reasons for substantial dropout in experimental conditions?

Design of the experiments on the effects of prior knowledge state assessment and progress assessment

The main question is whether the implementation of prior knowledge state assessment and progress assessment has a positive effect on student learning. Adult students in distance education were chosen at random and allocated to four experimental conditions: the No-Treatment Condition, a Prior-Knowledge-State-assessment Condition, a Progress-assessment / Students-initiative Condition, and a Progress-assessment / Institutes-initiative Condition. These conditions were implemented in four courses of the Open University: Marketing, Economic Order & Markets, Informatics, and Operational Management. Although students were allocated to the experimental conditions, they could still choose whether or not to participate in the assessment procedure. The effects of the treatments on study results were evaluated by comparing final test scores and participation in examinations.

    Research population and courses

The subjects were selected from among students who enrolled in courses at the Open University. The students were enrolled in one course at a time. The courses are self-contained units, which require a set number of hours of study. The students were studying under individual study programs with freedom of place of study, time of study, and pace of study. The system of individualized study programs had a practical consequence for the selection of students; since students may enroll in a course at any time during a year, the creation of experimental groups with a substantial number of subjects must take place over a longer period of time. The selection of subjects took place over a period of seven months (July 1994-February 1995).

    Students who enrolled in the following courses were selected:


- Marketing, a course provided by the Department of Business Administration. This course required 240 hours of study. Final assessment was done by multiple-choice tests. The examinations were given three times a year.

- Economic Order & Markets, a course provided by the Department of Economics. This course required 240 hours of study. Final assessment was done by multiple-choice tests. The examinations were supported by SYS, the automated item bank system of the Open University. This means that examinations were taken individually.

- Informatics, a course provided by the Department of Science. This course required 240 hours of study. Final assessment was done by multiple-choice tests. The examinations were given three times a year.

- Operational Management, a course provided by the Department of Business Administration, which required 60 hours of study. Final assessment was done by multiple-choice tests. The examinations were supported by SYS. Examinations were taken individually.

When a student enrolled in one of these courses, he or she was assigned to an experimental condition. The students enrolling in one of the three 240-hour courses (Marketing, Economic Order & Markets, and Informatics) were divided among the four conditions. The students enrolling in the 60-hour course (Operational Management) were divided between the No-Treatment Condition and the Prior-Knowledge-State-assessment Condition.

The Open University has provided these courses for some years. The past gives some insight into study tempi and dropout rates. Table 1 shows the number of days between enrollment and the first attempt to pass an examination, and the number and percentage of students making a first attempt. At our request, these figures were computed by the department of Research and Evaluation of the Open University (Höppener, 1994). These figures should be seen as indicators, not as predictors, for two reasons. First of all, these figures were extracted in June 1994. Every student who enrolled in one of these courses before June 1993 was selected, so not every student in this sample had over a year to make a first attempt at a final test. About 10% of the students make their first attempt after one year. So, considering the selection method, one might expect that the mean and median in Table 1 underestimate the true mean and median for the period before June 1993; the true mean and median would probably be somewhat higher than shown, due to the group of about 10% of the students who enrolled in a course between June 1993 and June 1994. Second, there have been some changes in the educational system of the Open University since June 1994. The purpose of these changes is to rule out first attempts made more than one year after enrollment. The nature of these changes is financial rather than instructional. This will probably result in the exclusion of students with slow study tempi. So, considering this change, one might expect that the means and medians in Table 1 overestimate the true means and medians in the actual educational system. Since the two factors work in different directions and each relates to only a small part of the population, we do not believe that the true means and medians differ to a great extent from the estimates in Table 1.

Table 1
For each course, the number of days (mean and median) between enrollment and the first attempt to pass an examination, and the total number (N) and number and percentage (%) of students who make a first attempt at all

Course                     Mean   Median   N_total   N_attempt (%)
Marketing                   196      172      2465    1306 (53.0%)
Economic Order & Markets    174      138      1249     713 (57.1%)
Informatics                 206      175      3138    1229 (39.2%)
Operational Management      157      109      2720    1502 (45.2%)

    Instruments

In order to be able to answer the research questions, a number of variables needed to be gathered. Some variables were already recorded in the enrollment procedure. During the experiment, the following instruments were used: questionnaires, prior knowledge state tests, progress tests and examinations.

    Background variables from enrollment procedure

Our study of opinions about prior knowledge state tests and progress tests among Open University students revealed that opinions differed according to the following characteristics: success in studying and previous educational experiences. Since these background variables had an impact on students' attitudes toward prior knowledge and progress assessment, it is possible that they also influenced students' use of these alternative forms of assessment. These background variables were recorded by the department 'Research and Evaluation'. Other variables that were recorded in the same procedure were age and sex. Every student starting a course at the Open University fills in a questionnaire about several personal and situational factors. Each time a student starts a new course this information is updated. Students are not obligated to (completely) fill in this questionnaire, so some variables may have missing values.

    Questionnaires

Depending on the experience of students with either prior knowledge state tests or progress tests, more or less information about these test forms was provided. In general a questionnaire had the following structure:

- First, a descriptive text briefly explaining prior knowledge and prior knowledge state assessment was provided.

- Next, general questions about participants' willingness to participate in prior knowledge state assessment were introduced.

- Next, subjects could react to statements about possible uses that could be made of the results of the tests after their strong and weak points in prior knowledge were identified.

- The following set of questions repeated, one by one, the possible uses of the prior knowledge state test results.

- The questionnaire ended with a number of questions about one's opinions on progress assessment.

In the analyses in this study, the questionnaires were only used to register the response of a subject. A student who returned the questionnaire was considered a respondent in the No-Treatment Condition. The responses on the questionnaires were discussed elsewhere (Martens, Dochy & Moerkerke, 1995).

    Prior knowledge state assessment and progress assessment

The prior knowledge state tests and progress tests were made by professional test developers who worked for the departments. A draft version of a test was judged and commented on a priori by the content expert who was responsible for the examinations within the department. After that, a final version of the test was made. Each item was classified by two characteristics: the type of behavior needed to answer the item (knowledge, insight, application of knowledge) and the type of knowledge needed to answer the item (facts, concepts, relations, structures, methods). Feedback to students consisted of the correct answers to the items and advice based on their results in relation to the knowledge categories and behavior categories.

During the experiments, the students took the tests and returned them to the Open University for scoring. Test reliability was determined with Cronbach's alpha. The values of Cronbach's alpha for the prior knowledge state tests and the progress tests, and the number of items in each test, are reported in Table 2. The values of Cronbach's alpha varied from 0.48 to 0.84. In general, the values of Cronbach's alpha are somewhat lower for the prior knowledge state tests than for the progress tests. These values should be compared within courses.
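The reliability coefficient used here can be sketched as follows. The score matrix is hypothetical and only illustrates the computation; it is not data from these tests.

```python
# Minimal sketch of Cronbach's alpha for a scored test:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores).
# 'scores' is hypothetical data (rows = students, columns = items; 1 = correct).
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)  # population variance

def cronbach_alpha(scores):
    k = len(scores[0])                                   # number of items
    item_vars = [variance(col) for col in zip(*scores)]  # per-item variance
    total_var = variance([sum(row) for row in scores])   # variance of totals
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

scores = [  # hypothetical answer patterns of five students on four items
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(f"alpha = {cronbach_alpha(scores):.2f}")  # -> alpha = 0.80
```

Alpha rises when items covary strongly relative to their individual variances, which is why it is read as internal-consistency reliability and why values are best compared between tests of the same course, as noted above.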

    Examinations

The examinations were regularly organized by the Open University. The examinations for Economic Order & Markets and Operational Management were supported by SYS. The examinations for Marketing and Informatics were organized three times a year.

Table 2
Cronbach's alpha and number of items of the prior knowledge state tests (PKST) and progress tests (PT) for Marketing, Operational Management, Economic Order & Markets, and Informatics

Test                             Cronbach's alpha   Number of items
Marketing, PKST                        .60                40
Marketing, PT1                         .64                40
Marketing, PT2                         .70                40
Marketing, PT3                         .71                40
Operational Management, PKST           .75                31
Economic Order & Markets, PKST         .48                40
Economic Order & Markets, PT1          .70                40
Economic Order & Markets, PT2          .61                40
Economic Order & Markets, PT3          .80                40
Informatics, PKST                      .73                50
Informatics, PT1                       .73                40
Informatics, PT2                       .83                30
Informatics, PT3                       .84                30


    Procedure

When students enrolled, they were assigned to an experimental condition. Additional materials for the course were then sent to the students by post. Since procedures varied among experimental conditions, every condition will be described.

No-Treatment Condition. After enrolling, a student received the questionnaire by post and was asked to fill it in immediately and return it.

Prior-Knowledge-State-assessment Condition. After enrolling, a student received the questionnaire and the prior knowledge state test by post and was asked to answer the prior knowledge state test first, then to fill in the questionnaire, and to return the answers on the test and the questionnaire. Students received feedback on their test results as soon as possible (normally within a week after the answers were received at the Open University).

Progress-assessment / Students-initiative Condition. After enrolling, a student received the questionnaire and three progress tests by post. The progress tests corresponded with the first 25% of the course, the second 25% of the course and the third 25% of the course. The students were asked to answer the first progress test as soon as they finished the first 25% of the course, then to fill in the questionnaire, and to return the answers on the test and the questionnaire. Students received feedback on their test results as soon as possible. For the second and the third progress tests the same procedure was followed.

Progress-assessment / Institutes-initiative Condition. A month after enrolling, a student received the questionnaire and the first progress test by post. Students were asked to answer the first progress test when they succeeded in finishing the first 25% of the course, then to fill in the questionnaire, and to return the answers on the test and the questionnaire. Students were told that after four weeks a second progress test, corresponding with the second 25% of the course, would be sent, and that after eight weeks a third progress test, corresponding with the next 25% of the course, would be sent. Students received feedback on their test results as soon as possible.

    Selection was based on the number of students we wanted to have in each condition, that is, 250 students. The selection procedure was planned on the basis of enrollment in earlier years. The selection of students needed to be done over a period of about six months. When more students than planned entered during a two-week period, the extra students were allocated to the No-Treatment Condition. Since the students enrolling in Operational Management were not assigned to a Progress-assessment Condition, these students were relatively overrepresented in the Prior-Knowledge-State-assessment Condition and the No-Treatment Condition.

    Results of the experiments

    General information about the experiments for all conditions

    Background variables

    We selected 1081 students for the experiment. This group consisted of 874 male students (81%) and 207 female students (19%). Of the 1081 students, 638 (59%) did not yet have any certificate from the Open University, whereas 210 students (19%) had one or two Ou-certificates, 149 students (14%) had three to six Ou-certificates, and 84 students (8%) had seven or more Ou-certificates. Previous educational experience was known for 941 students. Of these,


    464 (49%) already had a diploma in higher education and 467 (51%) did not have a diploma at that educational level. The mean age of the students was 32. The youngest student was 18 and the oldest student was 87.

    Response

    Table 3 shows the distribution of students within courses among different treatments: 292 students enrolled in Marketing, 181 students enrolled in Operational Management, 157 students enrolled in Economic Order & Markets, and 451 students enrolled in Informatics. Overall, 339 students were assigned to the No-Treatment Condition, 257 to the Prior-Knowledge-State-assessment Condition, 245 to the Progress-assessment / Institute's-initiative Condition, and 240 to the Progress-assessment / Students'-initiative Condition.

    In the last column of Table 3, the response rate within seven months of enrollment is shown for each condition. The overall response rate was 47%. The response percentages for the No-Treatment Condition and the Prior-Knowledge-State-assessment Condition were 54% and 57% respectively. The response percentages for the Progress-assessment Conditions were lower: 42% for the Institute's-initiative Condition and 32% for the Students'-initiative Condition.

    For the interpretation of these percentages, one has to consider that the definition of response differs for each condition. The students in the No-Treatment Condition and the Prior-Knowledge-State-assessment Condition were asked to react within about three weeks of administrative enrollment. The students in the Progress-assessment Conditions were asked to react for the first time after they had studied approximately one-quarter of the course. A number of factors could have caused the differences in participation between the Progress-assessment Conditions and the Prior-Knowledge-State-assessment Condition.

    First, the attractiveness of the Prior-Knowledge-State-assessment Condition could differ from the attractiveness of the Progress-assessment Conditions. This would mean that students found prior knowledge state assessment more attractive than progress assessment. This explanation is probably not correct: when we investigated students' opinions on new assessment functionalities, we found that more students were positive about progress assessment than about prior knowledge state assessment (Dochy, Moerkerke, & Martens, 1995).

    Another explanation of the participation differences might be that, seven months after enrollment, not all students in the Progress-assessment Conditions had already studied 25% of the course. The first progress test had to be taken when a student had studied 25% of the course. The medians in Table 1 give some indication of the likelihood of this explanation. The medians are given for the number of days between administrative enrollment and the first attempt to pass an examination in previous years. For Marketing, Economic Order & Markets, Informatics, and Operational Management the medians were respectively 172, 138, 175, and 109 days. If we extrapolate these medians to estimate the number of days students would need to study the entire course, then it could be expected that seven months after enrollment (210 days) most students would have succeeded in studying a quarter of the course. So this factor is not likely to have a great influence on differences in response between the Prior-Knowledge-State-assessment Condition and the Progress-assessment Conditions.

    A third explanation might be that many students drop out during the first months after enrollment. It is a well-known fact that persistence can be low in open distance education, especially in courses for beginning students (Kember, 1995). The figures in Table 1 give an indication of the percentages of dropouts for each course. In previous years the mean


    percentages of students never attending an examination were 47% in Marketing, 43% in Economic Order & Markets, 61% in Informatics, and 55% in Operational Management. It is not known exactly when students decide to stop studying, but it is likely that this decision can take several months after enrollment. Students in the Prior-Knowledge-State-assessment Condition were asked to respond several weeks, and even months, earlier than students in the Progress-assessment Conditions. So it could be that students who responded in the Prior-Knowledge-State-assessment Condition later decided to stop their studies; the same students, had they been assigned to the Progress-assessment Conditions, might never have responded. So this factor is likely to have a great impact on differences in response between these conditions.

    Another remarkable difference in participation in the assessment procedures occurs between the two Progress-assessment Conditions. The response in the Students'-initiative Condition is 10% lower than the response in the Institute's-initiative Condition. Since an argument for differences in dropout or study rate between these Progress-assessment Conditions cannot be made, the last two aforementioned reasons are not likely to explain this difference. The only explanation which might hold is the first reason. This would mean that the Students'-initiative Condition is less attractive than the Institute's-initiative Condition. Kember (1995) gives a rationale for this explanation: the Institute's-initiative Condition provides more contact between students and the educational institute than the Students'-initiative Condition. According to Kember's model of student progress, this could have a positive effect on academic integration and thus on student progress.

    The differences in response rates lead to a number of questions about why students did not participate in the treatment conditions. These reasons were investigated in a telephone survey. Results of this survey will be reported in Section 9.3.

    Table 3
    The number of subjects in each condition and course group, and the response within seven months of enrollment.

    Condition                              Marketing   Operational  Economic Order  Informatics  Total   Response
                                                      Management   & Markets
    No-Treatment                           83          80           74              102          339     184 (54%)
    Prior knowledge state assessment       50          101          21              85           257     146 (57%)
    Progress ass. / Institute's initiative 74          -            36              135          245     103 (42%)
    Progress ass. / Students' initiative   85          -            26              129          240     77 (32%)
    Total                                  292         181          157             451          1081    510 (47%)
    N response (% response)                150 (51%)   92 (51%)     77 (49%)        191 (42%)    510 (47%)


    General information about the experiments for the courses

    Background variables

    The distribution of the variable sex was not very different across courses. The percentages of male students were 70% for Marketing, 83% for Economic Order & Markets, 85% for Informatics, and 87% for Operational Management. The distribution of the number of previous certificates from the Open University varied among courses. The percentages of students without a certificate from the Open University were 64% for Marketing, 21% for Economic Order & Markets, 85% for Informatics, and 18% for Operational Management. Marketing and Informatics can be characterized as courses for beginning students; Economic Order & Markets and Operational Management can be characterized as courses for advanced students. The nature of previous educational experience also differed substantially among courses. The percentages of students enrolling who already had a diploma in higher education were 48% for Marketing, 53% for Economic Order & Markets, 30% for Informatics, and 63% for Operational Management. Informatics, in particular, can be characterized as a course for students without much experience in higher education. For all courses the mean age of the students was 32.

    Response

    In Table 4, the percentages of subjects responding within seven months are reported for each condition. Marketing had relatively high response percentages in comparison with the whole group. The response percentage for the No-Treatment Condition was the same as for the whole group (54%). Response percentages were relatively high for the Prior-Knowledge-State-assessment Condition (70% versus 57%), Progress-assessment / Institute's-initiative (49% versus 42%), and Progress-assessment / Students'-initiative (40% versus 32%). The pattern of response percentages for the experimental conditions for Marketing was the same as the overall trend.

    Economic Order & Markets had relatively small numbers of students (157) for the four conditions. One should bear this in mind when interpreting the response percentages. The response in the No-Treatment Condition was comparable with the whole group (51% versus 54%). The response in the Prior-Knowledge-State-assessment Condition was relatively high (71% versus 57%). The response for the Progress-assessment Conditions of Economic Order & Markets differed in a remarkable way from the overall pattern: the response was higher in the Students'-initiative Condition than in the Institute's-initiative Condition.

    Informatics had the same pattern of response percentages as the overall trend. The response in the No-Treatment Condition is comparable with the whole group (50% versus 54%). The response in the Prior-Knowledge-State-assessment Condition is relatively high (64% versus 57%). The response in the Progress-assessment / Institute's-initiative Condition is comparable with the whole group (40% versus 42%), but the response in the Students'-initiative Condition is relatively low (25% versus 32%).

    Operational Management had two conditions. The response in the No-Treatment Condition is relatively high compared to the whole group (63% versus 54%). The response in the Prior-Knowledge-State-assessment Condition is relatively low compared with the whole group (42% versus 57%).


    Table 4
    The percentage of students responding within seven months for each condition.

    Course                     Condition                                 N selected  N responding  Response %
    Marketing                  No-Treatment                              83          45            54%
                               Prior knowledge state assessment          50          35            70%
                               Progress-assessment / Institute's init.   74          36            49%
                               Progress-assessment / Students' init.     85          34            40%
    Economic Order & Markets   No-Treatment                              74          38            51%
                               Prior knowledge state assessment          21          15            71%
                               Progress-assessment / Institute's init.   36          13            36%
                               Progress-assessment / Students' init.     26          11            42%
    Informatics                No-Treatment                              102         51            50%
                               Prior knowledge state assessment          85          54            64%
                               Progress-assessment / Institute's init.   135         54            40%
                               Progress-assessment / Students' init.     129         32            25%
    Operational Management     No-Treatment                              80          50            63%
                               Prior knowledge state assessment          101         42            42%
    Totals                                                               1081        510           47%

    What factors could cause differences in response percentages between courses and between conditions? The first remarkable finding is that the courses for beginning students, Marketing and Informatics, had higher response percentages in the Progress-assessment / Institute's-initiative Condition than in the Progress-assessment / Students'-initiative Condition. The only course for advanced students with Progress-assessment Conditions, Economic Order & Markets, gave the opposite picture. Although these findings were based on a relatively low number of students, they suggest that beginning students need more external pressure than advanced students. The second remarkable finding was the relatively low response percentage for the Progress-assessment / Students'-initiative Condition for Informatics. This course attracted many beginning students without a diploma in higher education. Maybe the combination of starting and being inexperienced led to this finding. The third remarkable finding is the relatively low response percentage for the Prior-Knowledge-State-assessment Condition in the course for advanced students, Operational Management.


    The only reason we can find is related to the subject of this course: it is the only one of the four courses in which mathematical problem-solving skills are important.

    The effect of prior knowledge state assessment and progress assessment on study-pace

    After seven months, 3 students had formally withdrawn from the study, 318 (29%) had attended examinations, and 760 had not yet attended examinations. In Table 5, the frequencies of students who attended examinations within seven months are reported for each experimental condition. The relationship between condition and study-pace was investigated with a χ²-analysis. The hypothesis of statistical independence between these variables could not be rejected (χ²(3, n=1078) = 6.868, p=.08). At first sight, the percentage of students attending an examination seems relatively large for the Prior-Knowledge-State-assessment Condition (35%). Students from Operational Management, however, were overrepresented in this condition, and Operational Management was a 60-hour course whereas the other courses were 240-hour courses.

    Table 5
    The percentage of students attending examinations within seven months for each condition

    Condition                                 N in condition  N attending  % attending
    No-Treatment                              337             88           26%
    Prior knowledge state assessment          257             90           35%
    Progress-assessment / Institute's init.   244             76           31%
    Progress-assessment / Students' init.     240             64           27%
    Totals                                    1078            318          29%
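    The χ²-analysis reported above can be reproduced from the counts in Table 5. A minimal sketch, assuming SciPy is available; the non-attending counts are derived as condition totals minus attending students:

```python
from scipy.stats import chi2_contingency

# Rows: the four conditions (Table 5); columns: attended an
# examination within seven months vs. did not attend.
attending = [88, 90, 76, 64]            # No-Treatment, PKS, Inst., Stud.
condition_totals = [337, 257, 244, 240]
table = [[a, t - a] for a, t in zip(attending, condition_totals)]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}, n={sum(condition_totals)}) = {chi2:.3f}, p = {p:.2f}")
# Reproduces the reported chi2(3, n=1078) = 6.868, p = .08.
```

    Since p > .05, independence between condition and attending an examination is not rejected, as stated in the text.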

    In Table 6, the frequencies of students who attended examinations within seven months are given for the 240-hour courses. A second χ²-analysis of the students in Marketing, Economic Order & Markets, and Informatics also revealed statistical independence between the conditions and participation in an examination (χ²(3, n=897) = 6.767, p=.08). The percentage of students from the Prior knowledge state assessment group attending an examination was still large, but lower (31% instead of 35%). The percentage of students from the No-Treatment Condition attending an examination was also lower (22% instead of 26%). Although the differences were not statistically significant, the percentages for the No-Treatment Condition seem low compared with the assessment conditions.

    Table 6
    The percentage of students attending examinations within seven months for each condition for the three 240-hour courses: Marketing, Economic Order & Markets, and Informatics


    Condition                                 N in condition  N attending  % attending
    No-Treatment                              257             56           22%
    Prior knowledge state assessment          156             48           31%
    Progress-assessment / Institute's init.   244             76           31%
    Progress-assessment / Students' init.     240             64           27%
    Totals                                    897             244          27%

    In Table 7, the frequencies of students who attended an examination within seven months are reported for each course. The percentages were relatively high for the courses for advanced students (Economic Order & Markets, 35%; Operational Management, 41%) compared with the courses for beginning students (Marketing, 28%; Informatics, 24%). As one might expect, there was a statistical dependence between the course and attending an examination (χ²(3, n=1078) = 20.231, p=.00). This, however, is not of particular interest for the research questions posed in this study.

    Table 7
    The percentage of students attending examinations within seven months for each course

    Course                     N in condition  N attending  % attending
    Marketing                  292             82           28%
    Economic Order & Markets   155             54           35%
    Informatics                450             108          24%
    Operational Management     181             74           41%
    Totals                     1078            318          29%

    In Table 8, the percentages of students attending examinations within seven months are given for each condition in each course. For each course, the relationship between whether or not students attended the examination and the condition was tested with a χ²-analysis. The hypothesis of statistical independence between these variables had to be rejected for Marketing (χ²(3, n=292) = 8.632, p=.03) and for Informatics (χ²(3, n=450) = 11.223, p=.01). The hypothesis of statistical independence could not be rejected for Economic Order & Markets (χ²(3, n=155) = 0.8033, p=.85) or Operational Management (χ²(1, n=181) = 0.046, p=.83). If one looks more closely at Marketing and Informatics, one sees that more students in the assessment conditions attended an examination than in the No-Treatment Condition. One should remember that both are courses for beginning students. So it seems that, when it comes to attending examinations, beginning students benefit more from prior knowledge state assessment and progress assessment than experienced students.


    Table 8
    The number (N) and percentage (%) of students attending examinations within seven months for each course and each condition

    Course                     Condition                                 N in condition  N attending  % attending
    Marketing                  No-Treatment                              83              14           17%
                               Prior knowledge state assessment          50              13           26%
                               Progress-assessment / Institute's init.   74              26           35%
                               Progress-assessment / Students' init.     85              29           34%
    Economic Order & Markets   No-Treatment                              72              27           38%
                               Prior knowledge state assessment          21              6            29%
                               Progress-assessment / Institute's init.   36              13           36%
                               Progress-assessment / Students' init.     26              8            31%
    Informatics                No-Treatment                              102             15           15%
                               Prior knowledge state assessment          85              29           34%
                               Progress-assessment / Institute's init.   134             37           28%
                               Progress-assessment / Students' init.     129             27           21%
    Operational Management     No-Treatment                              80              32           40%
                               Prior knowledge state assessment          101             42           42%
    Total                                                                1078            318          29%

    The effect of formative prior knowledge state assessment on study results and formative progress assessment on test results

    The second research question of this study is whether formative prior knowledge state assessment or formative progress assessment has a positive effect on study results. This question was investigated by comparing the mean test scores on the examination for the different conditions. Since test scores on examinations of different courses cannot be compared, the data should be analyzed within a course. In Table 9, the means and standard deviations for the final test scores are given per course for each condition. At first glance there were no trends in


    the data. The magnitude of the differences was more systematically investigated by analysis of variance.

    Table 9
    Mean and standard deviation for the final test scores for each condition in the four courses.

    Course / Condition                          Mean  SD    N
    Marketing
      No-Treatment                              6.64  1.39  14
      Prior knowledge state assessment          7.85  1.95  13
      Progress-assessment / Institute's init.   7.54  1.30  26
      Progress-assessment / Students' init.     7.17  1.60  29
    Operational Management
      No-Treatment                              6.38  1.81  32
      Prior knowledge state assessment          6.64  1.71  42
    Economic Order & Markets
      No-Treatment                              5.81  1.71  27
      Prior knowledge state assessment          6.00  1.26  6
      Progress-assessment / Institute's init.   6.23  1.24  13
      Progress-assessment / Students' init.     6.25  1.58  8
    Informatics
      No-Treatment                              7.00  2.65  15
      Prior knowledge state assessment          7.00  1.77  29
      Progress-assessment / Institute's init.   7.22  1.73  37
      Progress-assessment / Students' init.     6.85  1.81  27

    In Table 10, summaries of the analyses of variance are given. These analyses of variance did not indicate significant effects of conditions: Marketing (F(3,78) = 1.66, p=0.18), Operational Management (F(1,72) = 0.42, p=0.52), Economic Order & Markets (F(3,50) = 0.29, p=0.83), and Informatics (F(3,104) = 0.20, p=0.90).

    In Table 10, the analysis of variance was done on all students who were allocated to a condition, whether they responded or not. We know for sure that students who did respond in the experimental conditions took the prior knowledge state tests or the progress tests; those students did get feedback containing the correct answers, their scores, and study advice. We do not know how students who did not respond interacted with the prior knowledge state tests or the progress tests. In any case, they did not receive feedback; maybe they used the tests as exercises, or maybe they did nothing with the tests. This was investigated in the telephone survey, which will be reported on later.

    Table 10


    Summary of the analyses of variance on the scores on the final test for each course with condition as the source of variance

    Source              Sum of Squares  df   Mean Square  F ratio  F probability

    Marketing
      Between groups    11.8720        3    3.9573       1.6639   0.1816
      Within groups     185.5061       78   2.3783
      Total             197.3780       81

    Operational Management
      Between groups    1.3031         1    1.3031       0.4243   0.5169
      Within groups     221.1429       72   3.0714
      Total             222.4459       73

    Economic Order & Markets
      Between groups    2.1182         3    0.7061       0.2945   0.8292
      Within groups     119.8818       50   2.3976
      Total             122.0000       53

    Informatics
      Between groups    2.1742         3    0.7247       0.1985   0.8972
      Within groups     379.6777       104  3.6507
      Total             381.8519       107

    Response, therefore, may be an important variable, indicating how actively students interacted with the prior knowledge state tests or the progress tests. In Table 11, a summary of the analyses of variance is given on the final test scores with condition and response as sources of variance. The same pattern was found in each course. The analyses of variance indicate that the differences between the responding and non-responding students are significant for each course: Marketing (F(1,74) = 10.29, p=0.002), Operational Management (F(1,70) = 5.07, p=0.03), Economic Order & Markets (F(1,46) = 13.51, p=0.001), and Informatics (F(1,100) = 4.367, p=0.04). The lack of significant effects for conditions, and for the interactions between response and conditions, means that differences in test scores cannot be accounted for by the availability of prior knowledge state tests or progress tests, or by the use of prior knowledge state tests or progress tests. As a check, a separate analysis of variance was conducted on the test scores of students who responded. No systematic differences between conditions were found: Marketing (F(3,50) = 2.07, p=0.12), Operational Management (F(1,39) = 0.75, p=0.39), Economic Order & Markets (F(3,27) = 0.20, p=0.89), and Informatics (F(3,70) = 0.27, p=0.84). So we may conclude that systematic differences between test scores are related to response, but not to condition or to the interaction between response and condition. Test scores differed for students who responded, no matter whether the materials they returned were prior knowledge state tests, progress tests, or questionnaires.

    Table 11

    Summary of the analyses of variance on the scores on the final test for each course with condition and response as the sources of variance


    Source                  Sum of Squares  df   Mean Square  F ratio  F probability

    Marketing
      Condition             11.174         3    3.725        1.728    .169
      Response              22.174         1    22.174       10.288   .002
      Condition x Response  3.839          3    1.280        0.594    .621
      Residual              159.492        74   2.155
      Total                 197.378        81   2.437

    Operational Management
      Condition             4.881          1    4.881        1.660    .202
      Response              14.907         1    14.907       5.070    .027
      Condition x Response  0.408          1    0.408        0.139    .711
      Residual              205.828        70   2.940
      Total                 222.446        73   3.047

    Economic Order & Markets
      Condition             1.440          3    0.480        0.243    .866
      Response              26.662         1    26.662       13.511   .001
      Condition x Response  2.446          3    0.815        0.413    .744
      Residual              90.774         46   1.973
      Total                 122.000        53   2.302

    Informatics
      Condition             1.805          3    0.602        0.169    .917
      Response              15.552         1    15.552       4.367    .039
      Condition x Response  7.996          3    2.665        0.748    .526
      Residual              356.129        100  3.561
      Total                 381.852        107  3.569

    Student characteristics and participation in the assessment conditions

    A total of 1081 students were selected for the experiments; 510 students responded within seven months, and 3 formally withdrew from the course. The third research question addressed the problem of whether participation in the assessment conditions is related to student characteristics. In earlier research we found that the intention to use prior knowledge state assessment and progress assessment was related to the level of previous educational experience and to the number of certificates already collected from the Open University (Dochy, Moerkerke, & Alexander, 1995). Given this fact, these variables could also be related to the response in the conditions and in particular to the participation in the assessment conditions.


    Response and the level of previous educational experience

    The educational background was known for 941 students, of whom 452 (48.0%) responded. In Table 12, a contingency table of the variables response and previous educational experience is shown for all conditions. The hypothesis of statistical independence between both variables was tested with Pearson's χ²-analysis. The χ²-value was significant (χ²(1, n=941) = 4.54, p=.03). This means that the hypothesis that the variables response and previous educational experience are statistically independent had to be rejected. Inspection of the expected frequencies under the assumption of statistical independence revealed that students with former qualifications in higher education responded more often than students without qualifications in higher education. The actual frequency of students with a diploma in higher education responding was 244, whereas the expected frequency under statistical independence between both variables was 227.7. This should not be interpreted as a difference in participation in assessment procedures, since the students in the No-Treatment Condition only received a questionnaire; students in the other conditions received one or more tests.

    Table 12
    Frequencies and expected frequencies under the hypothesis of statistical independence between response and previous educational experience for all conditions

                                 No higher education  Higher education  Total
    Response      Frequencies    208                  244               452
                  Exp. freq.     224.3                227.7
    Non-response  Frequencies    259                  230               489
                  Exp. freq.     242.7                246.3
    Total                        467                  474               941
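    The expected frequencies in Table 12 follow from the marginal totals: each expected cell count is (row total × column total) / grand total. A minimal sketch using the margins from the table:

```python
# Expected cell frequencies under statistical independence:
# (row total x column total) / grand total, using Table 12's margins.
row_totals = {"response": 452, "non-response": 489}
col_totals = {"no higher education": 467, "higher education": 474}
grand_total = 941  # students with known educational background

expected = {
    (row, col): r_total * c_total / grand_total
    for row, r_total in row_totals.items()
    for col, c_total in col_totals.items()
}

# Responding students with a higher-education diploma:
print(round(expected[("response", "higher education")], 1))  # 227.7
# The observed frequency is 244, so these students responded more
# often than independence would predict.
```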

    In order to get an impression of how and whether the use of prior knowledge state tests and progress tests is related to previous educational experience, we did additional analyses on the students in the No-Treatment Condition and on the students in the treatment conditions. The hypothesis of statistical independence between the variables response and previous educational background could not be rejected for the No-Treatment Condition (χ²(1, n=289) = 0.94, p=.33), or for the treatment conditions (χ²(1, n=652) = 2.98, p=.08). In Tables 13 and 14, contingency tables are shown for the No-Treatment Condition and for the three treatment conditions respectively. Although the differences are not significant, inspection of Table 14 reveals that students with a qualification in higher education used the alternative assessment instruments more often than students without a qualification in higher education. The actual frequency of students with a background in higher education who responded in the treatment conditions was 153, whereas the expected frequency under statistical independence was 142.0. This result is in contradiction with findings from the questionnaires: Dochy, Moerkerke and Alexander (1995) found that students with no diploma in higher education more often indicated that they were interested in alternative assessment instruments. So there seems to be a discrepancy between the opinions and expectations of students about their behavior and their actual behavior.


    Table 13
    Frequencies and expected frequencies under the hypothesis of statistical independence of response and previous educational experience for the No-Treatment Condition

                                 No higher education  Higher education  Total
    Response      Frequencies    67                   91                158
                  Exp. freq.     71.1                 86.9
    Non-response  Frequencies    63                   68                131
                  Exp. freq.     58.9                 72.1
    Totals                       130                  159               289

    Table 14
    Frequencies and expected frequencies under the hypothesis of statistical independence of response and previous educational experience for the three treatment conditions

                                 No higher education  Higher education  Total
    Response      Frequencies    141                  153               294
                  Exp. freq.     152.0                142.0
    Non-response  Frequencies    196                  162               358
                  Exp. freq.     185.0                173.0
    Totals                       337                  315               652

    Finally, the hypothesis of statistical independence between participation in the assessment procedures and previous educational experience was tested separately for each treatment condition and for each course. Due to the low number of participants in relation to the number of cells, a more sophisticated analysis for contingency tables could not be performed. The hypothesis of independence could not be rejected for Marketing (χ²(1, n=184) = 0.008, p=.93), Operational Management (χ²(1, n=96) = 0.001, p=.97), or Economic Order & Markets (χ²(1, n=71) = 0.034, p=.85). The hypothesis of independence had to be rejected for Informatics (χ²(1, n=301) = 5.510, p=.02). As a group, the students without a diploma in higher education did participate in the assessment conditions as often as one would expect under the assumption of statistical independence between participation and level of previous educational experience. For practical reasons this is an important finding, since 70% of the students who enroll in Informatics do not have a diploma in higher education. The hypothesis of statistical independence could not be rejected for any of the treatment conditions: Prior-Knowledge-State-assessment Condition (χ²(1, n=230) = 0.78, p=.38), Progress-assessment / Institute's-initiative Condition (χ²(1, n=215) = 3.42, p=.06), Progress-assessment / Students'-initiative Condition (χ²(1, n=207) = 0.61, p=.43).


    Response and success in studying

In Table 15, a contingency table of the variables response and success in studying is shown for all conditions. The variable success in studying was known for 1081 students, of whom 510 (47%) responded. The hypothesis of statistical independence between both variables was tested with Pearson's χ²-analysis. The χ²-value was significant (χ²(3, n=1081)=38.65, p=.00), which means that the hypothesis that the variables response and success in studying are statistically independent had to be rejected. Inspection of Table 15 revealed that students with one or more certificates responded relatively more often than students who did not have any certificates from the Open University. The actual frequency of responding students without a certificate from the Open University was 251, whereas the expected frequency under statistical independence was 301.0.

Table 15
Frequencies and expected frequencies under the hypothesis of statistical independence of
response and success in studying for all conditions

                                  0 certif.   1-2 certif.   3-6 certif.   >6 certif.   Totals
Response       Frequencies            251          120            89           50        510
               Exp. frequencies     301.0         99.1          70.3         39.6
Non-response   Frequencies            387           90            60           34        571
               Exp. frequencies     337.0        110.9          78.7         44.4
Totals                                638          210           149           84       1081
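The χ²-statistics reported here follow directly from the observed and expected cell frequencies. As a sketch in pure Python (not the analysis code used in the study), the test for Table 15 can be reproduced as follows:

```python
# Recomputing the Pearson chi-square test of independence for Table 15
# (response x number of Open University certificates, all conditions).
observed = [
    [251, 120, 89, 50],   # response
    [387, 90, 60, 34],    # non-response
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected frequency under independence: (row total x column total) / n
expected = [[r * c / n for c in col_totals] for r in row_totals]

chi2 = sum(
    (o - e) ** 2 / e
    for obs_row, exp_row in zip(observed, expected)
    for o, e in zip(obs_row, exp_row)
)
dof = (len(observed) - 1) * (len(observed[0]) - 1)

print(round(chi2, 2), dof, round(expected[0][0], 1))  # 38.65 3 301.0
```

This reproduces the reported χ²(3, n=1081)=38.65 and the expected frequency of 301.0 responding students without a certificate.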

In order to get insight into how and whether participation in assessment procedures is related to success in studying, we did additional analyses on the students in the No-Treatment Condition and on the students in the treatment conditions. In Table 16, a contingency table for the No-Treatment Condition is shown; in Table 17, one for the treatment conditions. Significant χ²-values were found for the No-Treatment Condition (χ²(3, n=339)=16.28, p=.00) and for the treatment conditions (χ²(3, n=742)=18.32, p=.00).

Table 16
Frequencies and expected frequencies under the hypothesis of statistical independence of
response and success in studying for the No-Treatment Condition

                                  0 certif.   1-2 certif.   3-6 certif.   >6 certif.   Totals
Response       Frequencies             70           48            42           24        184
               Exp. frequencies      88.5         40.7          34.7         20.1
Non-response   Frequencies             93           27            22           13        155
               Exp. frequencies      74.5         34.3          29.3         16.9
Totals                                163           75            64           37        339

Table 17
Frequencies and expected frequencies under the hypothesis of statistical independence of
response and success in studying for the treatment conditions

                                  0 certif.   1-2 certif.   3-6 certif.   >6 certif.   Totals
Response       Frequencies            181           72            47           26        326
               Exp. frequencies     208.7         59.3          37.3         20.6
Non-response   Frequencies            294           63            38           21        416
               Exp. frequencies     266.3         75.7          47.7         26.4
Totals                                475          135            85           47        742

Both in the No-Treatment Condition and in the treatment conditions, the students without a certificate responded less often than might be expected under statistical independence between response and success in studying. This means that participation in assessment procedures was related to success in studying, but again these results are not in concordance with findings in earlier research. Dochy, Moerkerke and Alexander (1995) found that students with no certificates more often indicated that they were interested in alternative assessment instruments. Again, there seems to be a discrepancy between the opinions and intentions of students and their actual behavior. Finally, the hypothesis of statistical independence between participation in the assessment procedures and success in studying was tested separately for each treatment condition and for each course. Due to the low number of participants in relation to the number of cells, a more sophisticated analysis for contingency tables could not be performed. The hypothesis of independence could not be rejected for Marketing (χ²(3, n=209)=4.086, p=.25), Operational Management (χ²(3, n=101)=6.095, p=.11), or Economic Order & Markets (χ²(3, n=83)=3.518, p=.32). The hypothesis of independence had to be rejected for Informatics (χ²(3, n=349)=16.466, p=.00). Starting students did not participate in the assessment conditions as often as one would expect under the assumption of statistical independence. About 85% of all students enrolling in Informatics were beginning students. A significant χ²-value was found for the Progress-assessment / Students-initiative Condition (χ²(3, n=240)=13.74, p=.00) and the Progress-assessment / Institute's-initiative Condition (χ²(3, n=245)=9.83, p=.02). The χ²-value for the Prior-Knowledge-State-assessment Condition (χ²(3, n=257)=2.56, p=.47) was not significant.

    Telephonic survey into the reasons for not participating in assessment conditions

    The fourth research question is: are there treatment-related reasons for substantial dropout in

experimental conditions? In the section on general information about the experiments it was noted that 510 of the 1081 selected students in the control and experimental groups responded (47%). Of the 742 students in the experimental groups, 326 (44%) responded by taking a test. These students participated in the assessment conditions. In this section we investigate the reasons students gave for not participating in the assessment conditions. Many reasons can exist that explain why subjects do not remain in experimental treatments (Cook & Campbell, 1976). Mortality and treatment-related attrition from the experiment are two common causes

which may explain dropout. The mortality factor may be due to the different kinds of persons who dropped out of a particular treatment group during the course of an experiment. Treatment-related attrition can be an important factor in dropout in experiments in a field setting, if the experiment lasts a long time. Experimental treatments differ in their attractiveness to respondents, so the number and nature of persons remaining in an experiment may differ among conditions. These forms of dropout are in most cases attributable to the treatment and can be important findings in their own right. For the evaluation of the treatments, it is important to know whether subjects drop out because of characteristics of the treatments or because of reasons unrelated to the treatment, like personal circumstances, dropout from the study program, etc. Since we were mainly interested in the evaluation of the treatments, the reasons for not participating in the experiment were investigated for the Prior-Knowledge-State-assessment Condition and the Progress-assessment Conditions, and not for the No-Treatment Condition.

    Method

    Subjects

The subjects were selected from the non-responding groups. Since the natures of the Prior-Knowledge-State-assessment Condition and the Progress-assessment Conditions differ, non-response needed to be defined separately for each condition. For the Prior-Knowledge-State-assessment Condition, non-response was defined as not having returned the answers on the prior knowledge state test within six weeks of enrollment. For the Progress-assessment Conditions, non-response was defined as not having returned the answers on the first progress test within three months of enrollment. A random sample of approximately 15 subjects per course was to be taken from those who did not respond in the Prior-Knowledge-State-assessment Condition. A random sample of approximately 20 subjects per course was to be taken from those who did not respond in the Progress-assessment / Students-initiative Condition. The same was planned for the Progress-assessment / Institute's-initiative Condition.

    Instruments

The reasons for not participating in an assessment condition were investigated by a telephonic survey. A protocol was designed to categorize the answers. Four categories of possible reasons were formulated a priori:

I   Study-related reasons. These reasons are related to the student's study style, study pace, or other characteristics of learning behavior. The following reasons were distinguished:
    a. The student never wanted to start learning at all.
    b. The student had already stopped learning.
    c. The student studied too fast, so there was no reason for participation in the assessment procedures.
    d. The student studied too slowly or had not yet started, so there was no reason for participation in the assessment procedures.
II  Treatment-related reasons. These reasons are related to negatively perceived characteristics of the experiment or the instruments. The following reasons were distinguished:
    a. The student had general objections to participating in experiments.
    b. The student did not expect enough profit from the experiment and was not prepared to take the test(s).
    c. The student started but did not complete the test(s).
    d. The student had specific objections to participating in this experiment.
III The student had other reasons.
IV  The student had never received the tests.

When subjects mentioned more specific reasons for not participating in the assessment conditions, these reasons were also scored.

    Procedure

The survey was conducted in two rounds. First, non-respondents in the Prior-Knowledge-State-assessment Condition were interviewed. The interviews were conducted by four interviewers. For practical reasons, each interviewer interviewed the non-respondents in one course, so interviewer and course were confounded. After data collection, some preliminary analyses were carried out. These revealed a statistically significant relationship between the interviewer/course and the reasons the students gave for not responding (χ²(9, n=63)=20.57, p=.015). This finding may suggest systematic differences between interviewers, but the differences are more likely to be caused by differences between courses. In order to control for possible interviewer bias, courses and interviewers were crossed in the second round, in which non-respondents in both Progress-assessment Conditions were questioned. After the second round, the relationship between interviewer and reasons for not responding was analyzed again: this time there was no statistical dependency between interviewers and reasons for not responding (χ²(9, n=124)=13.39, p=.15).

    Results

    Reasons for not taking prior knowledge state tests

In the first round, 63 subjects were interviewed. The reasons for not participating in the Prior-Knowledge-State-assessment Condition are given in Table 18. The main reasons for not responding were study-related. Thirty-one subjects (49%) mentioned such reasons: four subjects never had any intention to start studying the course, five subjects had already quit studying, seven subjects had progressed so far before receiving the prior knowledge state test that they decided it was not worthwhile to take it, and 15 subjects said they had not started studying yet but still intended to start. Eighteen subjects (29%) mentioned experiment-related reasons: four of them had objections to being part of an experiment, and 14 expected too little profit from testing and feedback, some of them even before taking the test, others while they took it. Ten subjects (16%) mentioned personal reasons for not responding, and four subjects (6%) said they never received the tests. As mentioned before, there were some differences between courses. Since the

numbers of subjects per course are low, interpretation of these differences needs to be done cautiously. The non-responding students in Marketing mentioned study-related reasons more often than other students.

Table 18
Frequencies and percentages of the reasons for non-response in the
Prior-Knowledge-State-assessment Condition

Reason related to   Freq.  Perc.   Specific reason                          Freq.  Perc.
Study                 31   49.2%   never wanted to start                      4    6.3%
                                   quit study                                 5    7.9%
                                   studied too fast                           7   11.1%
                                   studied too slow (not started yet)        15   23.8%
Experiment            18   28.5%   general objections to experiments          1    1.6%
                                   too little profit expected                 7   11.1%
                                   test started but not completed             7   11.1%
                                   specific objections to this experiment     3    4.8%
Other                 10   15.9%                                             10   15.9%
No test received       4    6.3%                                              4    6.3%
Totals                63    100%                                             63    100%

    Reasons for not taking progress tests

In the second round, 125 subjects were interviewed. The reasons for not participating in the Progress-assessment Conditions are given in Table 19. Of all the subjects, 101 (81%) mentioned study-related reasons: 58 subjects (46%) had not yet reached the point in their study where the first progress test should be taken; 31 subjects (25%) said they had already quit the study; 7 subjects (6%) studied too fast and were already planning examinations when the progress tests arrived; and 5 subjects (4%) never intended to start studying the course. Only 9 subjects (7%) mentioned experiment-related reasons for not taking progress tests: seven expected too little profit from testing and feedback, one started taking the progress test but did not complete it, and one had specific objections to this experiment. Thirteen subjects (10%) mentioned personal reasons for not taking the progress tests, and two (2%) said they never received the progress tests.

Table 19
Frequencies and percentages of the reasons for non-response in the
Progress-assessment Conditions

Reason related to   Freq.  Perc.   Specific reason                          Freq.  Perc.
Study                101   80.8%   never wanted to start                      5    4.0%
                                   quit study                                31   24.8%
                                   studied too fast                           7    5.6%
                                   studied too slow                          58   46.4%
Experiment             9    7.2%   too little profit expected                 7    5.6%
                                   test started but not completed             1    0.8%
                                   specific objections to this experiment     1    0.8%
Other                 13   10.4%                                             13   10.4%
No test received       2    1.6%                                              2    1.6%
Total                125    100%                                            125    100%

Comparison of reasons for not taking prior knowledge state tests and reasons for not taking progress tests

When comparing Tables 18 and 19, one sees that subjects who did not take the prior knowledge state tests seem to mention more experiment-related reasons for their withdrawal from the experiment than those who did not take progress tests (about 30% versus 7%). This raises the question of whether these differences are statistically significant. In a preliminary analysis, all four categories of reasons were compared across the three treatment conditions. Due to low expected frequencies for the categories 'other reasons' and 'no tests received', the χ²-analysis could not be considered valid (3 of the 12 cells had expected frequencies of less than 5). Pooling and/or exclusion of categories was necessary in order to obtain a valid statistical analysis: the category 'no tests received' was excluded from further analysis, and the Progress-assessment / Students-initiative Condition and the Progress-assessment / Institute's-initiative Condition were pooled. A χ²-analysis revealed a significant statistical dependency between conditions and reasons for not participating in the assessment procedure (χ²(2, n=182)=20.55, p=.00). It may be concluded that students more often refused to use prior knowledge state tests because they objected to this kind of test than was the case for progress tests. In both conditions, however, study-related reasons for not taking tests formed a majority.
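The validity check and the pooled analysis can be reconstructed from Tables 18 and 19. A sketch in pure Python (not the study's own code), assuming, as the text states, that the 'no tests received' category is dropped and the two Progress-assessment Conditions are pooled:

```python
# The pooled 2x3 analysis behind chi2(2, n=182) = 20.55.
# Rows: Prior-Knowledge-State non-respondents vs. the two pooled
# Progress-assessment conditions; columns: study-related,
# treatment-related and other reasons ('no tests received' excluded).
observed = [
    [31, 18, 10],    # prior knowledge state test non-takers (Table 18)
    [101, 9, 13],    # progress test non-takers, pooled (Table 19)
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)
expected = [[r * c / n for c in col_totals] for r in row_totals]

# Rule of thumb applied in the text: the test is considered valid
# only when all expected cell frequencies are at least 5.
assert min(min(row) for row in expected) >= 5

chi2 = sum(
    (o - e) ** 2 / e
    for obs_row, exp_row in zip(observed, expected)
    for o, e in zip(obs_row, exp_row)
)
print(round(chi2, 2), n)  # 20.55 182
```

With the sparse category excluded and the progress conditions pooled, every expected frequency clears the threshold and the reported χ²(2, n=182)=20.55 is reproduced.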

    Conclusions

This study posed four research questions. We will discuss the results for each of them.


Does formative prior knowledge state assessment or formative progress assessment lead to an increased study pace?

After seven months, we compared the number of students who attended the examinations for each condition and for each course. It turned out that in the courses with beginning students (Marketing and Informatics), relatively more students in the assessment conditions attended an examination within seven months than in the No-Treatment Condition. This effect was not found in the courses for advanced students (Economic Order & Markets and Operational Management).

Does formative prior knowledge state assessment or progress assessment have a positive effect on final test results?

The conditions did not appear to affect mean test scores. Students who responded, however, obtained higher scores than students who did not respond, regardless of the condition to which they were assigned.

Can student characteristics be identified that explain differences in participation in formative assessment procedures?

In research on students' opinions, two variables were identified that explained the differences between response patterns: level of previous educational experience, and success in studying, defined as the number of certificates already gained at the Open University. Both variables were related to response in the experiment. The most discriminative variable was success in studying. Students with no Open University certificates tended to respond less often than expected. Success in studying was especially related to significant differences in response in the No-Treatment Condition and the Progress-assessment Conditions. Across the four courses, this variable was most discriminative in Informatics, a course in which 85% of the students do not yet have an Open University certificate. The level of previous educational experience was also related to response. Students without a diploma in higher education responded less frequently than expected. This variable, too, was most discriminative in Informatics, a course where 70% of the students do not have a diploma in higher education.

In earlier investigations, we had found that students with a background in higher education and students who were more successful were less positive about the opportunity to take prior knowledge state tests and progress tests than students without a diploma in higher education and students who had just started to study in open distance education. When the opportunity to take progress tests or prior knowledge state tests was actually given, however, these instruments were used more often by students who already had a diploma in higher education and by students who were successful in their study at the Open University. So there is a discrepancy between this finding and the results of the surveys. This discrepancy is probably caused by the large dropout in initial courses: categories like non-starters and withdrawers are relatively overrepresented in initial courses.

    Are there treatment-related reasons for substantial dropout in experimental conditions?

Treatment-related attrition can be a major problem in an experiment in a field setting. For that reason, it is important to know whether the attractiveness or other characteristics of a condition lead to substantial dropout. In a telephonic survey we found that the majority of students who did not participate in one of the three experimental conditions mentioned study-related reasons for not (yet) taking a prior knowledge state test or a progress test. We interviewed 188 non-participating students. Of these, 132 (70%) mentioned study-related reasons, 27 (14%) mentioned treatment-related reasons, 23 (12%) mentioned other reasons, and 6 (3%) said they never received our experimental materials. We may therefore conclude that, on the whole, treatment-related reasons do not lead to substantial dropout. There were some differences between treatments, however. Non-respondents in the Prior-Knowledge-State-assessment Condition mentioned treatment-related reasons for not participating in the experiment more often than those in the Progress-assessment Conditions (28% versus 7%). Non-participants in the Prior-Knowledge-State-assessment Condition also said more often that they started the test but did not finish it (11% versus 1%). This seems logical, since the prior knowledge state tests are at end-of-course level and are not always easy for beginning students.

One of the pillars of the methodology for prior knowledge state assessment and progress assessment is that students should be able to use the information about their knowledge states that is summarized in the feedback. An important conclusion from research into the role of prior knowledge in higher education is that it may be fruitful to analyze the complex of components of prior knowledge in more detail (Dochy, 1992; 1996). Four main types of dimensions were identified for summarizing knowledge states: content-related dimensions, cognitive-psychological dimensions, educational-psychological dimensions, and item-characteristic dimensions. Examples of cognitive-psychological dimensions are the behavioral dimension, the content dimension, etc. Each dimension has a number of parameters. For instance, parameters of a content dimension could be facts, concepts, relations, structures and methods. To provide a more detailed analysis of the complex components of knowledge states, Dochy (1992) introduced so-called knowledge profiles. A profile is a plot, as a graph, of the raw or standardized scores of a group or individual on certain parameters (Keeves, 1988). For each person or group, a knowledge profile can be obtained by combining the test results on a set of parameters. The comparison of individual profiles is known by the generic term 'profile analysis'. The term knowledge profile is used when achievement scores (e.g. on prior knowledge state tests or progress tests) are analyzed along the parameters of a dimension and the results are plotted on a graph. Figure 2 shows an example of a knowledge profile and illustrates the relationships between some key concepts: a dimension is used to construct a knowledge profile, and each dimension, consisting of several parameters, represents an approach to the structure of knowledge (Dochy, 1996).
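As a minimal illustration of the idea, a knowledge profile along the content dimension could be rendered as a simple plot of scores per parameter. The parameter scores below are invented for illustration and are not taken from the study:

```python
# Hypothetical sketch of a knowledge profile: one student's
# proportion-correct scores on the parameters of the content
# dimension (facts, concepts, relations, structures, methods).
content_dimension = {
    "facts": 0.85,
    "concepts": 0.60,
    "relations": 0.45,
    "structures": 0.70,
    "methods": 0.30,
}

def knowledge_profile(scores, width=40):
    """Render proportion-correct scores as a text bar chart."""
    lines = []
    for parameter, score in scores.items():
        bar = "#" * round(score * width)
        lines.append(f"{parameter:<11}|{bar} {score:.2f}")
    return "\n".join(lines)

print(knowledge_profile(content_dimension))
```

Comparing such profiles across students or subpopulations, parameter by parameter, is the 'profile analysis' referred to above.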

Dochy (1992; 1993) claimed that knowledge profiles can give practical indications of student achievement and learning, thereby making it possible to support the learning process. Wolf, Bixby, Glenn and Gardner (1991) also advocated this approach. In an overview of student assessment, these authors stressed the need for a new brand of educational psychometrics, such as more developmentally oriented assessments. Dochy (1996) implemented this idea by trying to identify multiple components of the prior knowledge state, by using prior knowledge state tests, and by using the same kind of tests as progress tests administered several times a year. In this approach, the analysis of prior knowledge foresees that, in situations where there are significant differences between the prior knowledge states of specific subpopulations, the profile dimensions will help to detect and dissect in more detail the strengths and weaknesses of the students involved (Dochy, 1992, p. 185). This could be a promising starting point for differentiated diagnostic and guidance approaches both for students who