
WP7.1 Overview of competence-oriented Assessments

D.7.1.2

State of the Art / White Paper about peer and collaborative e-assessment

Camille Tardy, Laurent Moccozet

Université de Genève


Table of contents

GENERAL INTRODUCTION
PEER ASSESSMENT
Introduction and definition
Peer assessment evaluation
Peer assessment models
Identity and anonymity
Peer assessment for large classes
Peer assessment with social networks
PeerEvaluation.org
ResearchGate.net
DISCUSSION
REFERENCES
COMPETENCE-BASED ASSESSMENT
Definition of competence
Traditional instruments for assessing competences
ICT for competence-based assessment
REFERENCES
GROUP WORK ASSESSMENT
Group work
Group work assessment
Group work assessment problem
Assessment strategies
Peer assessment of group work
REFERENCES
PEER ASSESSMENT TECHNOLOGIES SURVEY
TYPOLOGY OF PEER/COLLABORATIVE ASSESSMENTS
REFERENCES
PEER/COLLABORATIVE E-ASSESSMENT CASE STUDIES
Case 1: Using Google Drive for peer-assessment
Case 2: Developing an essay through peer-review on a discussion board
Case 3: An assignment using anonymous electronic peer review with a Dropbox
Case 4: Calibrated peer assignment
Case 5: Getting to know Coursera: peer assessments
Case 6: First massive-scale class with self and peer assessment in Coursera
Case 7: Web-based peer assessment: a case study with civil engineering students
Case study 8: Online peer-assessment in a large first-year class
Case study 9: Enquiry-based peer assessment
Case study 10: Coursera Peer Assessment - Writing in the Sciences
Case study 11: Peer feedback sessions
Case study 12: Reliability and validity of web-based portfolio peer assessment
Case study 13: Teamwork skills assessment for cooperative learning
Case study 14: Facilitating peer and self-assessment
Case study 15: Formative collaborative quiz with clickers
Case study 16: Gamified work group assessment
Case study 17: Portfolio-based collaborative assessment
Case study 18: Acadima
RECOMMENDATION PROPOSALS
Recommendation 1: collaborative learning environment
Recommendation 2: e-identity, anonymity and pseudonymity
Recommendation 3: learning/training to peer assess
Recommendation 4: peer tutors and mentors
Recommendation 5: hybrid assessment technologies
Recommendation 6: peer assessment incentives and rewards
REFERENCES
AN EXPERIMENTAL FRAMEWORK FOR PEER & COLLABORATIVE ASSESSMENT
Connect, a social learning platform
A peer assessment tool for group works
REFERENCES
ANNEXES
Annex A: Mind map of scoring criteria of the assessment task
Annex B: Decomposition of skill assessment
Annex C: Description of the decomposition of skill assessment (cf. Annex A)
Annex D: WebPA peer assessment form template for group work
Annex E: Specifications (cahier des charges) for the distance learning platform associated with the « Intervention en milieu scolaire » module
Annex F: Plug-ins for the Connect platform

 


General introduction

Different types of assessment methods are usually used to evaluate students' learning [8][6]. These methods are summarized in Figure 1. They can be sorted according to the level of involvement of the lecturer and the student in the assessment process. The most common method is lecturer assessment. With this method, the lecturer is the only one to control the assessment (he/she can involve other colleagues): the lecturer defines the criteria, evaluates the student's learning result, provides feedback to the student and eventually assigns a mark. At the other end of the continuum, self-assessment only involves the student: the student defines his/her objectives, monitors his/her progress against these objectives and evaluates the final result. However, the result of self-assessment is usually not used alone, particularly for grading purposes; it is often integrated into some kind of collaborative assessment process. In between lecturer assessment and self-assessment stand collaborative and peer assessment. In collaborative assessment, lecturers and students collaborate, whereas in peer assessment only the student's peers collaborate in the assessment. The assessment criteria may be defined either by the lecturer alone or with the involvement of students.

Figure 1 - Assessment methods continuum according to the level of involvement of the student

The authors of the assessment framework described in [6] provide an extensive review of existing environments for collaborative, peer and self-assessment. They organize the review according to various criteria: assessment method, domain, authors and assessors, review process and facilities. They conclude that most of the available systems focus on peer assessment; authors are mainly individuals; the assignment of assessors is random; and the review and scoring processes are pre-defined and fixed.

Peer/collaborative assessment (and self-assessment) is particularly suited for different purposes and contexts:

1. group work assessment,
2. competence-based assessment,
3. social learning assessment.

It is also currently gaining a lot of interest with the advent of MOOCs, where peer assessment appears as a way to crowdsource some learning activities that the teaching staff is no longer able to carry out due to the massive number of students involved.


Peer assessment

Introduction and definition

In the last twenty years, interest in peer assessment, also known as peer review or peer feedback, has risen. Topping [13] describes peer assessment "as an arrangement in which individuals consider the amount, level, value, worth, quality or success of the product or outcomes of learning of peers of similar status". This method is now part of the education process in schools and universities, in different forms, to help teachers evaluate students' work. It helps students develop critical thinking in order to provide constructive feedback on others' work, but it also helps them identify knowledge gaps, increases their motivation and, finally, improves their work by making them understand what makes high-quality work.

Peer assessment evaluation

Little research has been conducted to evaluate the impact of peer assessment on learning, and it is still difficult to define exactly what makes peer assessment effective in an educational context. Its outcomes can vary according to the different conditions and methods that are applied: it can be customised to suit many situations, but this makes it difficult to identify the different causes and effects, as explained by van Zundert et al. in [14]. One major difficulty with peer assessment is that students often do not believe that their peers are capable of reviewing and assessing their work, and so they might see it as unfair.

An analytical review [12] aims at identifying the quality assurance criteria integrated in the design of assessment and peer assessment for learning tasks. The authors subdivide the assessment cycle into seven successive steps. The assessment cycle defines the process for the construction, delivery and decision making of assessment tasks: 1) purpose or goal of the assessment; 2) selection of the assessment task; 3) setting criteria for the assessment task; 4) administering the assessment; 5) scoring the assessment; 6) appraisal or "grading" of the assessment; 7) feedback and further promotion of learning. These tasks are analysed against a set of quality criteria: authenticity (representativeness, meaningfulness, cognitive complexity, content coverage); transparency; fairness; generalisability (comparability, reproducibility, transferability, educational consequences). The authors' "review of student involvement in assessment adds to [their] observation that clarity (i.e., fairness) and meaningfulness are considered main criteria in the construction and administration of assessments for learning." They "found that peers as assessors feel they need clear criteria to meaningfully appraise each others work. Students often are involved only in the steps of scoring and appraisal (i.e., not in the preceding step of goal selection or subsequent steps of feedback giving)." We provide a copy of the mind map of scoring criteria of the assessment task in Annex A.

A framework for the design and integration of peer assessment activities in teacher training courses is studied in [10]. The authors propose to hierarchically subdivide the peer assessment task into constituent sub-skills. We provide a copy of the subdivision hierarchy figure in Annex B and the table describing each constituent in Annex C.
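Purely as an illustration (the names and structures below are ours, not taken from [12]), the seven-step cycle and the quality criteria can be captured in a small data structure, for example to record which actors take part in each step of a given assessment design:

```python
# Illustrative sketch only (not from [12]): the seven assessment-cycle steps and
# the quality criteria, plus a helper that lists the steps students take part in.
ASSESSMENT_CYCLE = [
    "purpose or goal of the assessment",
    "selection of assessment task",
    "setting criteria for the assessment task",
    "administering the assessment",
    "scoring the assessment",
    "appraisal (grading) of the assessment",
    "feedback and further promotion of learning",
]

QUALITY_CRITERIA = {
    "authenticity": ["representativeness", "meaningfulness",
                     "cognitive complexity", "content coverage"],
    "transparency": [],
    "fairness": [],
    "generalisability": ["comparability", "reproducibility",
                         "transferability", "educational consequences"],
}

def student_involvement(actors_per_step):
    """actors_per_step maps a step index (0-6) to the set of actors involved."""
    return [ASSESSMENT_CYCLE[i] for i, actors in sorted(actors_per_step.items())
            if actors & {"student", "peers"}]

# Example matching the observation above: peers take part only in scoring
# (step 5) and appraisal (step 6).
print(student_involvement({0: {"teacher"}, 4: {"peers"}, 5: {"peers", "teacher"}}))
```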

Peer assessment models

Kollar et al. [7] define peer assessment based on two concepts: identity formation (Who am I?) and affiliation (Who are my peers?). Current methods do not allow much interactivity between the assessee and the assessor, so they introduce a model of interactive peer assessment. This model is composed of four parts. First, there is the task performance activity, where learners are asked to achieve a given task. Second comes the feedback provision activity, where a learner has to assess the quality of another learner's task; during this activity, two points have to be looked at: the product of the first activity and the process used to achieve it, and the form of the feedback itself. The third activity is feedback reception, where the assessee acknowledges the feedback from the second activity; during this activity, the assessee can ask for clarification or justification of the assessor's feedback, which can improve the outcome. Finally, the model describes the revision activity, where the student implements the feedback in his/her work. In this collaborative model, both students can work on the revision together.

Thanks to the development of network technologies and pedagogical features, the drawbacks of traditional peer assessment, such as students' anxiety, the need for clear criteria to ensure an objective assessment, and the provision of adequate feedback, as cited by [5], can be lessened. Several online peer assessment systems exist, such as CAP and NetPeas.

In [1], the authors studied the impact of clickers on learning performance. Clickers are remote response devices that instantly transmit student answers to questions, allowing both actors to receive immediate feedback. The authors showed that the use of such tools increases students' understanding of concepts and class material, and helps students develop communication skills and interactivity in the class. It also allows teachers to immediately see the level of understanding of their course and to correct it if needed. Finally, they stated that learning performance is enhanced by the development of peer group exercises and collaboration.

Another collaborative tool used in peer assessment is the wiki, as described for example in [4]. De Wever et al. ran a study in [2] where students were asked to complete a wiki on a certain topic and to provide feedback on other groups' wikis via a web form. The authors showed that students prefer a wiki assignment to writing a group paper. They also pointed out that the amount of feedback could become counterproductive if too many comments were returned to the students.

Identity and anonymity

Anonymity in online peer assessment has been questioned in several studies, such as [15] and [16]. Yu et al. focused their research on the impact of identity revelation in online peer assessment. They tested four identity revelation modes: three fixed modes (real name, username, anonymous) and a dynamic user self-choice mode. They concluded that each mode impacts differently the perception of the assessor, the learning activity and the classroom climate. Overall, they recommend systems that allow a self-chosen identity or a real-name mode, to support interpersonal relationships between students, as those modes show a more favourable attitude toward assessors. On the other hand, the nickname and anonymity modes showed negative effects on the learners' perception of their assessors (severe comments, irrational and impulsive emotions…).

Peer assessment for large classes

In the context of large classes, providing regular, quality feedback is challenging. In [9], the authors present a technique (PALS) to increase learning outcomes by combining a tutorial process with peer assessment and traditional academic evaluation. They show that the use of PALS in large classes improved the summative and individual formative feedback given to the students, who believed that the technique helped their learning and increased their understanding.


In [3], Fermelis et al. tested a new self and peer assessment (SAPA) online system in a cross-disciplinary and cross-faculty context. They managed to return marks only a few days after the group assessment submission, which was a big improvement. The teachers also noticed an improvement in student satisfaction and class spirit, as well as increased maturity and confidence. The students reported that the system was a great help throughout the team project and allowed better harmony in teams despite their heterogeneity.

Finally, Snowball et al. [11] pointed out in their research that the success of peer assessment depends on how the method is introduced to the students. The tutor must build students' confidence in their role as assessors and in the accuracy of the feedback they receive as assessees.

Peer assessment with social networks

As raised in [17], social media are inherently a system of peer evaluation. In research, peer evaluation is a very old and common tradition. Many scientific conferences and journals nowadays rely on online environments that manage the peer review process to evaluate scientific papers and decide whether they are accepted for publication in the conference proceedings or journal. The researcher submits his/her paper to the system. The paper is then assigned to a few pre-selected peers. The paper assignment process is usually driven by the areas of expertise claimed by the reviewers. The reviewers have to fill in a form made of rubrics with marks and comments. A global mark is computed once the reviews have been submitted. The global mark decides whether the paper is accepted, accepted after updates according to the reviewers' comments, to be resubmitted, or rejected. These systems include blind reviews, where the identity of the reviewers is kept hidden from the authors, and double-blind reviews, where the identity of the reviewers is kept hidden from the authors and the authors' identity is also kept hidden from the reviewers. The latter mechanism requires following specific guidelines in the paper writing process: authors are requested to avoid including in the paper any information that could identify them.
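As a minimal, purely illustrative sketch (the rubric names and decision thresholds below are hypothetical, not those of any particular conference system), aggregating rubric-based reviews into a global mark and a decision could look like this:

```python
# Illustrative sketch of rubric-based review aggregation; rubric names and
# decision thresholds are hypothetical.
from statistics import mean

def aggregate_reviews(reviews):
    """Each review is a dict of rubric scores, e.g. {"originality": 4, "clarity": 3}."""
    return mean(mean(review.values()) for review in reviews)

def decision(global_mark, accept=4.0, minor_revision=3.0, resubmit=2.0):
    """Map the global mark onto the outcomes mentioned above."""
    if global_mark >= accept:
        return "accepted"
    if global_mark >= minor_revision:
        return "accepted after update according to the reviewers' comments"
    if global_mark >= resubmit:
        return "to be resubmitted"
    return "rejected"

reviews = [{"originality": 4, "clarity": 3}, {"originality": 5, "clarity": 4}]
mark = aggregate_reviews(reviews)      # 4.0
print(mark, "->", decision(mark))      # accepted
```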

More recently, social networks dedicated to the peer evaluation and assessment of researchers have emerged. The main idea is to evaluate a researcher's global contribution, although some indicators are dedicated to specific contributions such as scientific papers. These indicators are based either on statistics of indirect peer interactions (for example, the number of times a paper has been viewed) or on direct peer interactions (for example, the endorsement of a research competency).

PeerEvaluation.org  

Peer Evaluation is presented by its creator as “an independent Open Access and Open Scholarship online initiative. It lets peers share their primary data, articles and scholarly projects under any shape and form. All social interactions and evaluations are then aggregated and presented as datasets of qualitative indicators of authority, impact and reputation. Peer Evaluation is also keen on diversifying and promoting social processes of dissemination.”[18]

Evaluation indicators are based on the number of peers who are “trusting” a researcher, “collecting” a researcher or “following” a researcher. A researcher can become a trusted member according to some pre-defined rules as described in Figure 2.


Figure 2 – Trusted member rules from peerevaluation.org

ResearchGate.net  

ResearchGate has objectives and features similar to PeerEvaluation. Some interesting indicators are based on statistics. The collected statistics are displayed on the researcher's profile to assess his/her achievements. The resulting indicators provide information about the number of times a publication has been viewed, bookmarked, downloaded or cited. These are indirect indicators estimated from peers' activity. Direct indicators are also available in ResearchGate, such as upvoting/downvoting or bookmarking a publication. It is also possible to endorse skills: peers can assess and validate the research skills that researchers claim on their profile. Peers can also provide formative peer assessment, in the form of feedback, by commenting on or discussing a publication.

There are two types of indicators on peer social networks such as PeerEvaluation and ResearchGate. The first category effectively assesses research contributions. The second category assesses researchers' contribution to the social network itself; its goal is to engage researchers to connect and contribute to the social network's activities. We also note that the validity of the assessment is based on the crowdsourcing principle, which in a way provides a solution to avoid group collusion.
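Purely as an illustration of how such indirect and direct indicators could be aggregated on a profile (the field names and methods below are hypothetical and do not reproduce PeerEvaluation or ResearchGate metrics):

```python
# Illustrative sketch only: aggregating indirect (activity statistics) and
# direct (explicit peer actions) indicators on a researcher profile.
from collections import Counter

class Profile:
    def __init__(self, name):
        self.name = name
        self.indirect = Counter()   # e.g. views, downloads, bookmarks per publication
        self.direct = Counter()     # e.g. skill endorsements, upvotes, "trusted by" count

    def record_view(self, publication):
        self.indirect[f"views:{publication}"] += 1

    def endorse_skill(self, skill):
        self.direct[f"endorsement:{skill}"] += 1

    def summary(self):
        return {"indirect": dict(self.indirect), "direct": dict(self.direct)}

p = Profile("A. Researcher")
p.record_view("paper-1")
p.record_view("paper-1")
p.endorse_skill("peer assessment")
print(p.summary())
```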

Researchers' peer review processes rely on the hypothesis that peers are experts in the discipline. In the context of learning, peers are themselves in the process of learning and by default are not experts in the discipline.

Starting from the observation that today's youth join social media not only to consume, but also to contribute and produce, a peer feedback tool is designed as a form of peer assessment [19]. The use of social media for learning transforms it into a participative activity: participative learning. During their participative activities, students progressively produce artefacts that the authors call Emerging Learning Objects (ELOs). Peer assessment is identified as an important component of participative learning, based on the interactions and feedback that learners naturally share through social media. Peer assessment is then integrated in the learning process, which introduces a shift from the assessment of learning to assessment for learning. Peer feedback is a form of formative peer assessment in which peers provide suggestions, advice, enhancements, etc. From a survey of the literature and their empirical observations, the authors design a peer feedback learning tool that integrates four main features, allowing learners to 1) solicit peer feedback on their own ELOs, 2) collect peer feedback on their own ELOs, 3) explore ELOs submitted for peer feedback, and 4) provide feedback on any ELO submitted for feedback. The resulting tool is expected to elicit spontaneous feedback rather than formal feedback. Moreover, the feedback is provided on ELOs and not on the collaboration processes. The resulting participative learning environment is designed to motivate students to share their ELOs and to support their learning with a lightweight feedback process that is seamlessly integrated in the ELO workflow.

Waycott et al. [20] examine about 20 lecturers' experiences of assessment tasks that require students to use social media to produce their work. They analyse the implications of using social media for students' assessable work and identify positive and negative consequences of making what they call "students' work visible". The notion of visibility implies that students' work is in some way made visible to more people than the usual two: the student performing the work and the teacher. Works are either shared with peers or made totally public. Social learning introduces conflicts with the core of traditional assessment of students' work, which relies on an individual and competitive basis. These conflicts result in tensions between social media activities and the usual requirements of formal education. Regarding summative assessment, only 3 of the 20 experiences involved peer assessment for the final grade. However, most of the experiences formally involved a form of formative peer assessment based on peer review and feedback.

The inherent contradictions between traditional formal assessment and social learning practices include, for example, students' fear that peers will steal their good ideas; as a result, some of them are reluctant to publish their work and make it available to others. Another example of contradiction is the need to monitor individual contributions. The authors also raise a contradiction at the level of lecturers: although they are willing to integrate social learning into their practice, they still have to cope with the assessment requirements of their institution.

Gray et al. propose a similar survey [21], in which they analyse the assessment process for 17 Web 2.0 authoring-based assignments. The authors notice issues similar to those of Waycott et al. Regarding assessment, they note that although most of the assignments involve peer review and group assessment as a formative form of peer assessment, very few rely on peer assessment for marking.

Discussion

Assessment for learning is identified as one of the common elements that improve students' engagement [22]: "The themes and ideas that surface most often in the literature are: embedded collaboration, integrated technology, inquiry-based learning, assessment for learning, and making learning interdisciplinary and relevant to real life." Although we note a trend to move from assessing of learning to assessing for learning, most of the existing peer e-assessment methods remain focused on the assessment of learning. This results in dedicated environments such as WebPA that are teacher-centric and teacher-driven by nature and keep assessment activities outside the learning that takes place elsewhere. We have seen that researchers have a long tradition of peer assessment and have developed dedicated platforms to support these assessment activities. With emerging dedicated social networks such as ResearchGate, assessment activities become collaborative and cooperative activities that are embedded in the whole stream of research activities. One advantage is that new indicators of skills and achievements are emerging.


Although the context is not completely similar (researchers are expected to be experts in a discipline, whereas students are novices or juniors), the approach of incorporating peer assessment as one of the many learning/training activities of a collaborative platform sounds promising.


References

[1] Blasco-Arcas, L., Buil, I., Hernández-Ortega, B. and Sese, F.J. 2013. Using clickers in class. The role of interactivity, active collaborative learning and engagement in learning performance. Computers & Education. 62 (2013), 102–110.
[2] De Wever, B. and Van Keer, H. 2012. Student Perspectives on Wiki-Tasks and the Introduction of Computer-Supported Peer Feedback. Procedia - Social and Behavioral Sciences. 69 (2012), 558–565.
[3] Fermelis, J., Tucker, R. and Palmer, S. 2007. Online self and peer assessment in large, multi-campus, multi-cohort contexts. (2007), 271–281.
[4] Gielen, M. and De Wever, B. 2012. Peer Assessment in a Wiki: Product Improvement, Students' Learning and Perception Regarding Peer Feedback. Procedia - Social and Behavioral Sciences. 69 (2012), 585–594.
[5] Gielen, S., Peeters, E., Dochy, F., Onghena, P. and Struyven, K. 2010. Improving the effectiveness of peer feedback for learning. Learning and Instruction. 20, 4 (2010), 304–315.
[6] Gouli, E., Gogoulou, A. and Grigoriadou, M. 2008. Supporting Self-, Peer- and Collaborative-Assessment in E-Learning: The Case of the PEer and Collaborative ASSessment Environment (PECASSE). Journal of Interactive Learning Research (JILR). 19, 4, 615–647.
[7] Kollar, I. and Fischer, F. 2010. Peer assessment as collaborative learning: A cognitive perspective. Learning and Instruction. 20, 4 (2010), 344–348.
[8] Kwok, R.C.W. and Ma, J. 1999. Use of a group support system for collaborative assessment. Computers & Education. 32, 2 (Feb. 1999), 109–125.
[9] O'Moore, L.M. and Baldock, T.E. 2007. Peer Assessment Learning Sessions (PALS): an innovative feedback technique for large engineering classes. European Journal of Engineering Education. 32, 1 (Mar. 2007), 43–55.
[10] Sluijsmans, D. and Prins, F. 2006. A conceptual framework for integrating peer assessment in teacher education. Studies in Educational Evaluation. 32, 1, 6–22.
[11] Snowball, J. and Sayigh, E. 2007. Using the tutorial system to improve the quality of feedback to students in large class teaching. South African Journal of Higher Education. 21, 2 (2007), 321–333.
[12] Tillema, H., Leenknecht, M. and Segers, M. 2011. Assessing assessment quality: Criteria for quality assurance in design of (peer) assessment for learning – A review of research studies. Studies in Educational Evaluation. 37, 1 (Mar. 2011), 25–34.
[13] Topping, K. 1998. Peer Assessment Between Students in Colleges and Universities. Review of Educational Research. 68, 3 (1998), 249–276.
[14] van Zundert, M., Sluijsmans, D. and van Merriënboer, J. 2010. Effective peer assessment processes: Research findings and future directions. Learning and Instruction. 20, 4 (2010), 270–279.
[15] Vanderhoven, E., Raes, A., Schellens, T. and Montrieux, H. 2012. Face-to-Face Peer Assessment in Secondary Education: Does Anonymity Matter? Procedia - Social and Behavioral Sciences. 69 (2012), 1340–1347.
[16] Yu, F.-Y. and Wu, C.-P. 2011. Different identity revelation modes in an online peer-assessment learning environment: Effects on perceptions toward assessors, classroom climate and learning activities. Computers & Education. 57, 3 (2011), 2167–2177.
[17] "Social media is inherently a system of peer evaluation and is changing the way scholars disseminate their research, raising questions about the way we evaluate academic authority", Impact of Social Sciences blog. Available at: http://blogs.lse.ac.uk/impactofsocialsciences/2011/06/27/social-media-is-inherently-a-system-of-peer-evaluation-and-is-changing-the-way-scholars-disseminate-their-research-raising-questions-about-the-way-we-evaluate-academic-authority/. Accessed 13 June 2013.
[18] Wassef, A. 2011. Altmetrics: Peer Evaluation, a case study [v0] – altmetrics.org. Presented at altmetrics11.
[19] Wasson, B. and Vold, V. 2012. Leveraging new media skills in a peer feedback tool. The Internet and Higher Education. 15, 4 (Oct. 2012), 255–264.
[20] Waycott, J., Sheard, J., Thompson, C. and Clerehan, R. 2013. Making students' work visible on the social web: A blessing or a curse? Computers & Education. 68 (Oct. 2013), 86–95.
[21] Gray, K., Thompson, C., Sheard, J., Clerehan, R. and Hamilton, M. 2010. Students as Web 2.0 Authors: Implications for Assessment Design and Conduct. Australasian Journal of Educational Technology. 26, 1, 105–122.
[22] Parsons, J. and Taylor, L. 2011. Student Engagement: What do we know and what should we do? University of Alberta.


Competence-based assessment

Definition of competence

According to [1], there is no unique definition of competence. The author, who investigates the meanings of competency, suggests that the definition is in fact multi-faceted and depends on the context. Three major meanings are identified. A competence can be:

1. An observable performance
2. The standard or quality of the outcome of the person's performance
3. The underlying attributes of a person, such as their knowledge, skills or abilities

However, higher education authorities need to adopt a single precise definition of this term in order to be able to define education policies. For example, the Europass glossary1 defines competence as "the ability to apply learning outcomes adequately in a defined context (education, work, personal or professional development). Competence is not limited to cognitive elements (involving the use of theory, concepts or tacit knowledge); it also encompasses functional aspects (involving technical skills) as well as interpersonal attributes (e.g. social or organisational skills) and ethical values." A learning outcome has to be understood as "the set of knowledge, skills and/or competences an individual has acquired and/or is able to demonstrate after completion of a learning process, either formal, non-formal or informal. Learning outcomes can arise from any form of learning setting (either formal, non-formal or informal)."

Traditional instruments for assessing competences

Competence-based assessment is popular in companies, where it has long been used to evaluate workers. A competence is either an observable performance in a job or a set of standards to be reached by a worker in a job [1]. It is also an early assessment approach in some areas of education, particularly in VET (Vocational Education and Training) [4], medicine [2] and lifelong learning [5]. In these three areas, instruments to assess competences have been defined for a long time. For example, in VET, a standard list is proposed in the "6 Guidelines for assessing competence in VET" report [4]:

• Observation
  o Real work activities at the workplace
• Questioning
  o Self-evaluation form
  o Interview
  o Written questionnaire
• Review of products
  o Work samples/products
• Portfolio
  o Testimonials/references
  o Work samples/products
  o Training record
  o Assessment record
  o Journal/work diary/logbook
  o Life experience information
• Third-party feedback
  o Interviews with, or documentation from, employer, supervisor or peers
• Structured activities
  o Project
  o Presentation
  o Demonstration
  o Progressive tasks
  o Simulation
  o Exercise such as role-plays

1 http://europass.cedefop.europa.eu/en/education-and-training-glossary

Similar lists of instruments have been identified and defined in medicine, such as in [3], where five categories of assessment instruments for competence-based assessments are proposed:

1. Written assessments,
2. Clinical or practical assessments,
3. Observations,
4. Portfolios and other records of performance,
5. Peer and self-assessment.

The European Commission has defined a framework that includes a set of 8 key competences for lifelong learning in the 21st century [5]:

1. Communication in the mother tongue
2. Communication in foreign languages
3. Mathematical competence and basic competences in science and technology
4. Digital competence
5. Learning to learn
6. Social and civic competences
7. Sense of initiative and entrepreneurship
8. Cultural awareness and expression

The instruments identified to assess these competences are organized into two sub-sets[6]:

1) Summative assessment of key competences to evaluate learning outcomes:
   a. Standardised tests
   b. Attitudinal questionnaires
   c. Performance-based assessments
2) Formative assessment to encourage the development of key competences:
   a. Peer and self-assessments
   b. Portfolio and e-portfolio
   c. E-assessments

Peer and collaborative (and self) assessments are transversal to all three of the above typologies.

Peer assessment is particularly suited to evaluate competences that are diffuse and therefore difficult to capture. Teamwork skills are required at work, but they are not formally taught and they are transversal to the different disciplines. The Teamwork Skills Inventory [9] is based on peer and self-assessment to detect the related competencies and discover flaws. The inventory is performed at the end of a group work by the group members. Groupmates report their observations about their peers through twenty-five items. They describe the behaviours of their peers during the group work according to different dimensions: teamwork attendance; information seeking and sharing; communication with peers; critical and creative thinking; and progression with peers.

In [10], peer assessment is used as a learning tool in teacher education. A conceptual framework is defined and evaluated through two studies. The results globally confirm the hypothesis that the proposed approach improves the peer assessment skills of the students. Student teachers learn to define assessment criteria and assessment tasks, and to score and interpret assessment results.

An online peer assessment and learning process in seven steps is proposed in [11], with the objective of developing skills and capabilities. The seven steps are organized as follows:

1. Provide explicit rationale of the assessment method,
2. Engage learners in an authentic learning context,
3. Involve students in setting assessment criteria,
4. Assess learning and give feedback,
5. Coach for effective performance,
6. Reflect on learning,
7. Tutor check to assure quality.

ICT for competence-based assessment

ICT is being applied to competence-based assessment. In the report "15 E-assessment Guidelines for the VET Sector" [8], different technologies are identified to support competence-based assessment instruments, in particular for the collection of competence evidence:

• Real work / real time evidence: video and image sharing, digital story and video streaming, online trainer, supervisor feedback, online self and peer assessment.
• Simulation and demonstrations: computer simulation, video caption, and virtual classroom.
• Questioning: online quiz, online chat, and virtual classroom.
• E-portfolios: online collections of digital artefacts (documents, images, videos, blogs…).

Key competences can also be assessed with different families of technologies. A typology is proposed in [7] for the 8 key competences defined by the European Commission:

1. Computer based assessment
2. Quizzes and simple games
3. ePortfolios
4. Peer assessment
5. Self assessment
6. Virtual world games
7. Simulations
8. Intelligent tutors

Peer and self assessments are identified as particularly suited for evaluating the "learning to learn" key competence.


References

[1] T. Hoffmann, "The meanings of competency", Journal of European Industrial Training, vol. 23, no. 6, pp. 275–286, Aug. 1999.
[2] M. H. Davis and R. M. Harden, "Competency-based assessment: making it a reality", Medical Teacher, vol. 25, no. 6, pp. 565–568, Nov. 2003.
[3] J. M. Shumway and R. M. Harden, "AMEE Guide No. 25: The assessment of learning outcomes for the competent and reflective physician", Medical Teacher, vol. 25, no. 6, pp. 569–584, Jan. 2003.
[4] "6 Guidelines for assessing competence in VET", Department of Training and Workforce Development, Western Australia, 2012.
[5] "European Commission - The European framework for key competences". Available at: http://ec.europa.eu/education/lifelong-learning-policy/key_en.htm. Accessed 28 May 2013.
[6] "Assessment of Key Competences in initial education and training: Policy Guidance", European Commission, 2012.
[7] C. Redecker, "The Use of ICT for the Assessment of Key Competences", JRC Scientific and Policy Reports, EUR 25891, 2013.
[8] "15 E-assessment Guidelines for the VET Sector", Australian Flexible Learning Framework, 2011.
[9] P. S. Strom and R. D. Strom, "Teamwork skills assessment for cooperative learning", Educational Research and Evaluation, vol. 17, no. 4, pp. 233–251, 2011.
[10] D. Sluijsmans and F. Prins, "A conceptual framework for integrating peer assessment in teacher education", Studies in Educational Evaluation, vol. 32, no. 1, pp. 6–22, 2006.
[11] C. Juwah, "Using Peer Assessment to Develop Skills and Capabilities", USDLA Journal, vol. 17, no. 1, 2003.

 


Group work assessment

Group work

The importance and benefits of group work in education are nowadays widely recognized [1, 2, 3]: for students, group work allows them to learn from each other and can also help them develop transversal skills; for teachers, it can reduce the workload of providing feedback, assessing and grading [1]. As raised in [2], online shared workspaces open the way to online group collaboration support, including students' coordination and learning monitoring. However, the authors also indicate that students may be reluctant to effectively collaborate. Therefore, strategies have to be defined to encourage and moderate students' collaboration. One recommended way to do so is to take students' contribution to the collaboration into account in the assessment and grading of the course [3]. Different strategies have been proposed.

Group work assessment

Assessing group work requires addressing the following questions: which aspects of the group work should be assessed, and more precisely how to assess the individual contribution to the group and how to assess the contribution of the group as a whole [3]. Among the six principles of group work assessment established in [3], one states that "a fair system should be used that rewards both individual effort and group collaboration".

Group work assessment problem

Assessing the collaborative work encourages the participation of students. It can also have additional impacts, such as supporting students in gradually acquiring various skills [4]. However, group work assessment faces a major challenge: attributing the same single score to all the members of a group may be felt to be unfair by some of them, depending on their personal engagement in the group work [2]. This problem is also known as the "free rider" or "passenger" problem [5, 6]. A free rider is a member of the group who does not contribute, or does not contribute at an appropriate level according to his/her teammates. This behaviour can be deliberate or not.

Assessment strategies

Assessment strategies rely on a few basic parameters: what should be assessed (the process or the content) and what should be marked (the individual or the group contribution). Strategies vary according to the parameters they choose to assess and the way they score and weight each parameter. All assessment models have advantages and drawbacks; the main priority is to keep them fair and consistent [4]. However, in [6], the authors conclude that an important aspect of the assessment is to acknowledge the student's individual contribution to the group work process and not only the group product. An interesting effect is that this helps to address the free rider issue by downgrading free riders in a group according to their individual contribution.

One popular strategy applied for group work assessment is intra-group peer assessment [6, 7]. The evaluation conducted in [7] concludes that peer assessment delivers reliable results provided that certain criteria are met. The approach proposed in [6] combines self and peer assessment and concludes that it increases students' feeling of fairness of the evaluation and, consequently, their engagement in the group work. A more complex approach is defined in [1], where a semantic-based framework is set up to combine the global result of the group and the individual performances. Students are ranked and marked according to the quality of the group work, the quality of their individual work and the relevance of the student in the group.

Peer assessment of group work

Loddington proposes a literature review on peer assessment of group work in [8]. He points out the potential benefits for teachers and tutors:

• Reduces workload and marking,
• Automates many complex tasks for setting assessment and calculating scores,
• Makes it easier to identify where the strengths and weaknesses of a class lie,
• Helps to relay timely feedback,
• Weakens the free rider problem by taking personal contribution into account,
• Improves and strengthens assessment criteria.

These benefits increase with large numbers of students.

Peer assessment also brings concrete benefits to students:

• Stimulates learning,
• Allows learning from peers,
• Increases understanding of the tutor's expectations.

However, peer assessment also has some drawbacks:

• Students need to be prepared for peer assessment and the assessment process has to be properly explained,
• Peer assessment is prone to the subjectivity of students, which can result in marking friends higher than they really deserve; the influence of personal relationships such as dislike is also highlighted,
• Group collusion can be a potential problem,
• Peer assessment might be time-consuming for students,
• It can have a negative impact on students' personal relationships within the group.

An example of such a system is described in [9]. The WebPA2 system is an open source online peer assessment tool for group work. Its main characteristic is that each student's mark is adjusted according to his/her individual contribution, which is evaluated from the peer assessment process. The global group mark awarded by the teacher is altered according to the peers' evaluation of each student's contribution (Figure 3).

WebPA can be used for any type of group work in any discipline. The teacher defines peer assessment forms (an example template is available in Annex D) with the different criteria to evaluate. Then the teacher defines the groups, with a default option for automated random allocation. Finally, the teacher creates an assignment by allocating one peer assessment form to a set of student groups and decides the time interval during which the assignment is open. Once the assignment is completed, the teacher gives a mark to each group. The system then assigns the individual marks, taking into account the global group mark and the peers' evaluations.
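The kind of peer-moderated adjustment performed by tools like WebPA can be illustrated with a minimal sketch (our simplified reconstruction, not the exact WebPA algorithm): each member's share of the peer-awarded scores is turned into a weighting factor that scales the group mark, so that an average contributor receives exactly the group mark.

```python
# Simplified illustration of peer-moderated group marking (not the exact
# WebPA formula): a member's weighting is their peer score divided by the
# group's average peer score, and that weighting scales the group mark.
def individual_marks(group_mark, peer_scores):
    """peer_scores maps each member to the total score received from peers."""
    average = sum(peer_scores.values()) / len(peer_scores)
    return {member: group_mark * (score / average)
            for member, score in peer_scores.items()}

# Example: a group mark of 70, where Carol is rated noticeably lower by peers.
print(individual_marks(70, {"Alice": 24, "Bob": 22, "Carol": 14}))
# {'Alice': 84.0, 'Bob': 77.0, 'Carol': 49.0}
```

The free rider described earlier ends up below the group mark, while stronger contributors end up above it.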

2 http://webpaproject.lboro.ac.uk/


Figure 3 – Individualized group work marking with WebPA

Discussion

The interest of group work in education is widely recognized. We have seen that peer assessment has great potential to support and improve group work. The development of peer assessment mechanisms for group work could therefore contribute to the spread of group work.

Existing peer assessment systems such as WebPA offer many advantages. The authors of [9] have conducted user surveys to evaluate their system. It appears that systems such as WebPA definitely bring many benefits to students, teachers and institutions.

However, some issues remain unsolved. The biggest one is that anonymity is not possible in this context (peers need to know who they are assessing) and students can meet "outside" the system, either virtually or in person. This creates many opportunities for them to connive and cheat during the peer assessment process, without any way for teachers to detect or prove it. As a result, the approach adopted in systems like WebPA cannot easily address this issue.

We also observe that the assessment is disconnected from the assignment work itself: the peer assessment does not take place where students learn, train and produce their assignment work. Of course, it remains possible to connect and integrate the assessment environment into virtual learning environments such as Moodle. Additionally, the basic philosophy of these systems is mainly teacher-centric and summative-oriented. Although nothing prevents a teacher from defining assessment criteria with the students, this has to be done outside the system.


References

[1] J. T. Fernández-Breis, D. Castellanos-Nieves and R. Valencia-García, "Measuring individual learning performance in group work from a knowledge integration perspective", Information Sciences, vol. 179, no. 4, pp. 339–354, Feb. 2009.
[2] Q. Wang, "Using online shared workspaces to support group collaborative learning", Computers & Education, vol. 55, no. 3, pp. 1270–1276, Nov. 2010.
[3] M. Galton, "Assessing Group Work", in International Encyclopedia of Education (Third Edition), P. Peterson, E. Baker and B. McGaw, Eds. Oxford: Elsevier, 2010, pp. 342–347.
[4] J. Macdonald, "Assessing online collaborative learning: process and product", Computers & Education, vol. 40, no. 4, pp. 377–391, May 2003.
[5] M. Noonan, "The ethical considerations associated with group work assessments", Nurse Education Today.
[6] N. Elliott and A. Higgins, "Self and peer assessment – does it make a difference to student group work?", Nurse Education in Practice, vol. 5, no. 1, pp. 40–48, Jan. 2005.
[7] B. De Wever, H. Van Keer, T. Schellens and M. Valcke, "Assessing collaboration in a wiki: The reliability of university students' peer assessment", The Internet and Higher Education, vol. 14, no. 4, pp. 201–206, Sept. 2011.
[8] S. Loddington, "Peer assessment of group work: A review of the literature", WebPA Project, Loughborough University, 2008.
[9] S. Loddington, K. Pond, N. Wilkinson and P. Willmot, "A case study of the development of WebPA: An online peer-moderated marking tool", British Journal of Educational Technology, vol. 40, no. 2, pp. 329–341, Mar. 2009.


Peer assessment technologies survey

This section provides a broad list of existing tools, frameworks and services that can be used to manage peer assessments in different contexts. We provide a taxonomy of technology types. For each type of technology, we indicate a list of available tools/frameworks/services and the types of assessment they can be involved in (diagnostic, formative and summative).

Each entry below lists the technology type, the assessment types it supports (formative, summative and/or diagnostic), and the available tools with their URLs where given.

Quizzes (formative): Dokeos (www.dokeos.com), Moodle (moodle.org), Top Hat Monocle (www.tophatmonocle.com), Elgg plugin "izap contest" (www.pluginlotto.com/store/product/tarun/16434/izap-contest-v20-free-elgg-plugin), Drupal plugin "Quiz" (drupal.org/project/quiz), Google Forms (docs.google.com).

Peer quizzes (formative, diagnostic, summative): Moodle plugin "Question Creation" (moodle.org/mod/data/view.php?d=13&rid=1120), Blubbr (www.blubbr.tv), Elgg plugin "izap contest", Drupal plugin "Quiz", Google Forms.

Wikis (formative, diagnostic): Dokeos, Moodle, Elgg plugin "dokuwiki" (community.elgg.org/plugins/803452/1.4.2/dokuwiki-integration), Drupal plugin "WikiTools" (drupal.org/project/wikitools), MediaWiki (www.mediawiki.org).

Comments / blogs (formative): Dokeos, Mahara (mahara.org), Mural.ly (mural.ly), Elgg, Drupal.

Clickers / polls (diagnostic, formative): Dokeos, Moodle, Socrative (www.socrative.com), Top Hat Monocle, Elgg plugin "Polls" (community.elgg.org/plugins/515853/0.8), Drupal, Google Forms.

E-portfolio (diagnostic, formative): Moodle, Mahara, Mural.ly, Pinterest (pinterest.com), Learnist (learni.st).

Questions & answers (diagnostic, formative): Top Hat Monocle, Piazza (piazza.com), Quora (www.quora.com), Elgg plugin "Questions-and-Answers" (community.elgg.org/plugins/1066524/1.0.2/questions-and-answers), Drupal plugin "Question/Answer" (drupal.org/project/question_answer), PeerWise (peerwise.cs.auckland.ac.nz).

Rating, i.e. "stars" (summative, diagnostic): Dokeos, Moodle, Mahara, Elgg plugin "elggx fivestar" (community.elgg.org/plugins/843333/1.8.2/elgg-18-elggx-fivestar), Drupal plugin "fivestar" (drupal.org/project/fivestar).

Gamification (summative, diagnostic): Elgg plugin "izap user points" (www.pluginlotto.com/store/product/tarun/16077/izap-user-points-free-elgg-plugin), Drupal plugin "user points" (drupal.org/project/userpoints).

Forum (formative, diagnostic): Dokeos, Moodle, Mahara, Elgg, Drupal.

Video annotation (formative): YouTube annotations (http://www.youtube.com/t/annotations_about), Annotating Academic Videos (https://github.com/entwinemedia/annotations), VideoANT (http://ant.umn.edu/).

Peer review management (summative, diagnostic, formative): Moodle workshop module (docs.moodle.org/23/en/Workshop_module) and peer review plugin (docs.moodle.org/19/en/Peer_Review_Assignment_Type), BlackBoard Self and Peer Evaluation Tool (library.blackboard.com/ref/36ba3329-e441-488a-93ce-7a55543cc999/Content/Shortbread/Self%20and%20Peer%20Assessment%20Overview.htm), Aropä (aropa.gla.ac.uk/docs/), iPeer (ipeer.ctlt.ubc.ca), Peer Assessment (www.tech.plymouth.ac.uk/learntech/e_learning_areas/peer_assessment.htm), PeerWise, SPARK (spark.uts.edu.au), Steam (peereval.okstate.edu/beta/WelcometoSteam.html), WebPA (http://webpaproject.lboro.ac.uk/), Calibrated Peer Review (http://cpr.molsci.ucla.edu/).

Peer quizzes: allow students to create quiz questions in order to challenge their peers.

Gamification: students reward their peers with points if, for example, they answer a question, help on a project or comment on others' work. Points can also be earned through a set of rules defined by a teacher, where a defined number of points is assigned to a user who completes a specific activity. Points can also lead to badges.

Peer review management: a peer assessment activity management tool, usually built into an e-learning system, that allows students to assess their peers' work. The teacher creates groups of peers and associates assessees and assessors.
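As an illustration of the rule-based point mechanism described in the gamification entry above (the rule names, point values and badge thresholds are hypothetical):

```python
# Hypothetical sketch of a teacher-defined gamification rule set: points per
# activity, and badges awarded once point thresholds are reached.
POINT_RULES = {"answer_question": 5, "comment_on_work": 2, "help_on_project": 10}
BADGES = [(50, "bronze"), (150, "silver"), (300, "gold")]  # (threshold, badge)

def award(activity_log):
    """activity_log is a list of activity names performed by one student."""
    points = sum(POINT_RULES.get(activity, 0) for activity in activity_log)
    badges = [name for threshold, name in BADGES if points >= threshold]
    return points, badges

print(award(["answer_question", "help_on_project"] * 5))  # (75, ['bronze'])
```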

 


Typology of peer/collaborative assessments

In this section, we propose a typology for peer/collaborative assessments based on the literature [1][2][3][4]. The aim is to identify the specific criteria and dimensions that characterize peer and collaborative assessments.

Case: Short summary description
Course title: Course title
Class size: Average number of students
Type: Diagnostic / formative / summative
Output: What type of output do assessed students produce?
Year: Same year or cross year of study
Assessors: Individuals / pairs / groups
Assessed: Individuals / pairs / groups
Reward: What is the type of reward for the assessors?
Time: Class time / seminar time / informal
Requirement: Compulsory / voluntary
Status with formal assessment: Replace / complete
Status with final grade: Integral / mixed / no grade
Privacy: Anonymous / confidential / public
Assessment focus: What types of knowledge, skills or attributes does the case involve as a focus for increased achievement?
Technology used: What technologies are involved in bringing about peer/collaborative effects?
Role played by technology: What is it that the technology does to achieve peer/collaborative effects?
Socio-pedagogical setting: What actors (teacher / individual student / peers) are involved in learning and teaching, and what is the relationship between them?
Institutional setting: Where is the case situated in terms of education providers?
Practice: Summary of the scenario
Analysis: Summary of results: advantages/drawbacks/limitations, evidence of effectiveness and impact
Generalization: What are the good practices you can extract from this case? What would you recommend to someone who would like to apply the same scenario?

Involvement in assessment tasks cycle: for each of the seven tasks of the assessment cycle (Task 1 to Task 7), record which actors (student / peers / teacher) are involved.

Assessment tasks cycle:
Task 1. Purpose or goal of the assessment: what is the assessment task supposed to measure, and what needs to be evidenced or shown as an outcome.
Task 2. Selection of assessment task: what content needs to be covered in the assessment task, and how will mastery or task completion be shown (choice of content and format of the assessment).
Task 3. Setting criteria for the assessment task: what needs to be rated, by whom, on what grounds?
Task 4. Administering the assessment: the way the assessment is conducted.
Task 5. Scoring the assessment according to the criteria defined in Task 3.
Task 6. Global appraisal of the assessment: includes the weighting of each criterion to get the global grade or decision (Tasks 5 and 6 are sometimes merged into a single one).
Task 7. Feedback.
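The template above could also be captured as a simple record to be filled in for each case study; the field names mirror the table, and the code itself is only an illustration of how such records might be kept consistent across cases:

```python
# Illustrative record for the case-study typology above; values are free text
# or the small controlled vocabularies listed in the template.
case_template = {
    "case": "short summary description",
    "course_title": "",
    "class_size": 0,
    "type": "diagnostic | formative | summative",
    "output": "",
    "year": "same year | cross year",
    "assessors": "individuals | pairs | groups",
    "assessed": "individuals | pairs | groups",
    "reward": "",
    "time": "class time | seminar time | informal",
    "requirement": "compulsory | voluntary",
    "status_with_formal_assessment": "replace | complete",
    "status_with_final_grade": "integral | mixed | no grade",
    "privacy": "anonymous | confidential | public",
    "assessment_focus": "",
    "technology_used": "",
    "role_played_by_technology": "",
    "socio_pedagogical_setting": "",
    "institutional_setting": "",
    "practice": "",
    "analysis": "",
    "generalization": "",
    # Actors (student / peers / teacher) involved in each of the seven tasks.
    "involvement_in_assessment_tasks": {f"task_{i}": set() for i in range(1, 8)},
}
```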


References

[1] K. Topping, "Peer Assessment between Students in Colleges and Universities", Review of Educational Research, vol. 68, no. 3, pp. 249–276, Oct. 1998.
[2] N. Pachler, C. Daly, Y. Mor and H. Mellar, "Formative e-assessment: Practitioner cases", Computers & Education, vol. 54, no. 3, pp. 715–721, Apr. 2010.
[3] H. Tillema, M. Leenknecht and M. Segers, "Assessing assessment quality: Criteria for quality assurance in design of (peer) assessment for learning – A review of research studies", Studies in Educational Evaluation, vol. 37, no. 1, pp. 25–34, Mar. 2011.
[4] D. D. Nulty, "An introductory guide to nine case studies in peer and self-assessment."


Peer/collaborative e-assessment case studies

We have collected the descriptions and settings of different case studies, available online or from the literature, and submitted them to the online survey available at: https://docs.google.com/forms/d/11rEtD-oE2D1ClcJ_4aaCWDKVeFjA6VhRxVyrojmhEg0/viewform. Each case study is briefly described, analysed and discussed.

Case 1: Using Google Drive for peer-assessment

Technology: Google Drive (Form and Spreadsheet)
Type of assignment: Essay writing
Type of assessment: Formative
From http://gettingsmart.com/2012/12/the-sidekick-the-superhero-using-google-drive-for-peer-assessment/

This case study is conducted in AP language classes (but can be extended and applied to any discipline) for essay writing assignments. The ICT tool involved is Google Drive, more precisely Google Form and Google Spreadsheet (both tools work together, as Form results are stored in Spreadsheet documents). Students have to write three free-response essays over three consecutive weeks. Each essay assignment is based on a four-day scenario. Each essay is peer assessed at least twice. The assessment is conducted double blind, using a numbering system for the essays and the students. The allocation of essays to reviewer students is random. Reviewer students have to fill in a review form available through the Google Form service. The review form includes grades (such as "rate the thesis statement") and arguments (such as "review the student's thesis statement" or "suggest how it may be improved"). The resulting Google Spreadsheet is publicly shared among students and they are requested to consult the feedback on their essay as homework. The assignment is concluded with a self-assessment reviewing the peers' constructive remarks and critiques. The self-assessment is submitted as another Google Form. This last assessment is not shared among the students of the class. The advantages identified are:

• Students are clear about what is expected for the assignment.
• Students get a diagnosis of their strengths and weaknesses in writing.
• Students can be actively involved in their own learning.

No particular disadvantages or requirements are mentioned.

Case 2: Developing an essay through peer-review on a discussion board
Technology: Discussion board
Type of assignment: Essay writing
Type of assessment: Formative
From http://serc.carleton.edu/introgeo/peerreview/examples/sharks.html
Before writing their essay, students are helped to develop the topic they have chosen through a peer-assessment exercise that is run on a discussion board. The student first has to answer a few questions about the essay topic on the discussion board. Two peers have


then to evaluate the student's answers. Finally, the student has to respond to the peers' comments. No particular advantage or drawback is indicated for this case, apart from the development of students' electronic communication skills and their capacity to give constructive feedback.

Case 3: An assignment using anonymous electronic peer review with a Dropbox
Technology: Dropbox
Type of assignment: Essay writing
Type of assessment: Formative
From http://serc.carleton.edu/introgeo/peerreview/examples/warming.html
Students write their essay anonymously, without indicating their name or any other information that could help to identify them. They upload their essay documents to the Dropbox and send the address to the teacher. The teacher can then assign the essay to be reviewed by one or two peers using a criteria grid. The final mark can be assigned based on the average of the peers' marks. The only advantage indicated for this case is that peer reviewing and assessment spare teachers from assessing students' work themselves. The author suggests 1) having a test peer review assessment in class to train students; and 2) providing students with clear explanations about the assignment and the peer review process (including the review form itself).

Case 4: Calibrated peer assignment
Technology: CPR online web software
Type of assignment: Writing essay
Type of assessment: Formative/summative
From http://serc.carleton.edu/introgeo/peerreview/cpr.html, http://serc.carleton.edu/introgeo/peerreview/examples/dinosaurs.html, http://serc.carleton.edu/introgeo/peerreview/examples/why_study_geo.html, http://serc.carleton.edu/introgeo/peerreview/examples/petroleum.html
Calibrated peer assessment is a process organized in four successive steps:

1) Assignment: the student writes an essay and submits it.
2) Peer assessment training or calibration: the student has to peer review three example essays (calibration essays) that have already been evaluated by teachers using a rubric form. If the calibration test meets the requirements, the student can move to the next step. Otherwise, the student has to pass a second calibration trial.
3) Peer assessment: the student has to assess and grade three peers' essays. If the student failed the first calibration evaluation, the weight of his/her grading on the peers' marks is lowered.
4) Self-assessment: the student self-assesses his/her own essay.
The introduction of calibration in peer assessment is particularly important: it trains students to review and assess their peers.
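As an illustration of steps 2) and 3), here is a minimal sketch assuming a simple rule in which a reviewer who failed the first calibration trial gets a reduced weight when peer grades are aggregated; the penalty factor and grading scale are assumptions, as the exact weighting used by CPR is not specified in the source.

```python
# Hypothetical sketch: aggregating peer grades with calibration-based weights.
# Reviewers who failed the first calibration trial get a reduced weight (assumed value).

def aggregate_peer_grade(reviews):
    """reviews: list of (grade, passed_first_calibration) tuples for one essay."""
    weighted_sum, total_weight = 0.0, 0.0
    for grade, passed_first_calibration in reviews:
        weight = 1.0 if passed_first_calibration else 0.5  # assumed penalty factor
        weighted_sum += weight * grade
        total_weight += weight
    return weighted_sum / total_weight

# Three peer reviewers grade an essay on a 0-10 scale; one failed the first calibration.
print(aggregate_peer_grade([(8, True), (7, True), (4, False)]))  # -> 6.8
```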


The advantages reported in the different related cases include the promotion of critical thinking and the improvement of writing skills from one assignment to the next.

Case 5: Getting to know Coursera: peer assessments
Technology: Coursera
Type of assignment: NA
Type of assessment: Summative
From http://cft.vanderbilt.edu/2013/01/getting-to-know-coursera-peer-assessments/ and http://cft.vanderbilt.edu/2012/11/getting-to-know-coursersa-assessments/
This case study is more a critical analysis of the way peer assessment is implemented inside Coursera, an xMOOC platform, than a real case study. xMOOCs are a particular type of Massive Open Online Course: they propose a teaching model based on the traditional teacher-centric approach, with the purpose of providing teaching to huge online classes (Figure 4). In the next case we will review a current experiment conducted inside Coursera that aims at addressing some of the issues raised in this one.

Figure 4 – xMOOCs and cMOOCs (figure from Wikipedia)

However, it gives some good feedback about the problems to consider when applying peer assessment to very large online classes. Compared with traditional classes, one must note that MOOCs are completely online, with distant students from all over the world. This situation introduces many more constraints for teaching in general, but also for peer assessment. Peer assessment is particularly critical in the context of a MOOC: it is impossible to believe that the teaching and tutoring staff can provide assessment and feedback to thousands of online students. As raised in the state of the art section of this document, peer assessment can be viewed as a possible solution for large classes. The author translates this into "Who, after all, has got the time to read 10,000 essays? The answer, for Coursera at least, is other students." In other words, peer assessment is the only way platforms such as Coursera can cope with assessing thousands of essays, which amounts to crowdsourcing assessment. According to the author, "the model of peer assessment supported by Coursera folds together two assumptions: that peers can approximate or replace the kinds of substantive, constructive expert feedback critical to deeper understanding and that a grade is necessary to learn, full stop". The main issues raised by the author are:

• Students have to learn to grade.
• Grading peers requires a lot of effort from students.
• What is the outcome of peer grading for students?

The author attaches particular importance to anonymity and privacy. As peer feedback on Coursera is anonymous, following up on a comment and holding discussions is mostly impossible. Therefore, how can learning communities be created if peers are not accountable for their feedback? Similar feedback is available at http://hackeducation.com/2012/08/27/peer-assessment-coursera/, which raises:

• the variability of feedback,
• the lack of feedback on feedback,
• the anonymity of feedback,
• the lack of community.

Case 6: First massive-scale class with self and peer assessment in Coursera
Technology: Coursera
Type of assignment: Project/problem-based
Type of assessment: Summative
From http://hci.stanford.edu/research/assess/
This case study describes an experiment to introduce peer assessment in an xMOOC. The strategy has been implemented inside Coursera3. As raised by the authors, "providing feedback and assessment of design and other creative work is extremely time consuming -- this bottleneck is the major capacity constraint for scaling peer assessment". In this example, students do not merely grade peers: they are trained before effectively grading them. The proposed method is based on an existing approach called "calibrated peer assessment", where students learn grading through training examples before grading their peers. The peer assessment is combined with self-assessment. The objectives of the peer assessment strategy are: 1) to train students to assess others accurately; 2) to define a grading system robust to errors; and 3) to provide qualitative and personalized feedback to students. The authors use rubrics for grading, exemplified in Figure 5. Each row corresponds to a rubric and each cell corresponds to a level of performance. Assessing is mainly numeric, with very little text for feedback (as language is an issue for a MOOC).

3 http://www.coursera.org/


Figure 5 – Rubrics-based grading form

Students first have to train at assessing: they are given examples to assess. They earn the right to assess their peers once they grade an example close to the grading result assigned by the teaching staff for that example. Each time they assess an example, they get feedback explaining whether they are higher, lower or close to the staff grade and why the staff assigned that grade. The peer assessment is a three-step process. The student first assesses five peers' assignments. Among the five assignments, one has been marked by the teaching staff: it serves as "ground truth" for comparison between staff grading and student grading. In the next step, the student self-assesses his/her own assignment. For each assignment, a grade is computed as the median of the five peers' assessments. This peers' grade is compared to the grade the student gave in the self-assessment. If the student's grade is close to the peers' grade, then the student gets his/her own grade; otherwise he/she gets the peers' grade. Other strategies can be used, such as assigning the maximum of the two grades.
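A minimal sketch of this aggregation rule is given below; the tolerance threshold and grading scale are assumptions, as the actual values used in the experiment are not given in the source.

```python
# Hypothetical sketch of the aggregation rule described above: take the median of
# the peer grades, then keep the self-assessed grade only if it is close enough.

from statistics import median

def final_grade(peer_grades, self_grade, tolerance=1.0):
    """peer_grades: the five peer scores; tolerance is an assumed closeness threshold."""
    peers_grade = median(peer_grades)
    if abs(self_grade - peers_grade) <= tolerance:
        return self_grade          # self-assessment deemed trustworthy
    return peers_grade             # otherwise fall back to the peers' median

print(final_grade([12, 14, 13, 15, 11], self_grade=13.5))  # -> 13.5
print(final_grade([12, 14, 13, 15, 11], self_grade=18))    # -> 13
```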

Figure 6 – Three-step peer assessment

The authors use data analysis to guide improvements: they update the rubrics according to the results in order to clarify them. In terms of assessment feedback, they propose feedback templates: students are offered basic feedback templates that they can customize and complete. From the data analysis, they noticed that the staff grades correlated with the peers' grades. They are currently exploring other ways of weighting the peers' grades than using the median. They have noticed what they call "patriotic" grading, where peers tend to grade their compatriots higher. Grading errors are evaluated by comparing with ground-truth grades; the results show that errors are quite balanced. One interesting outcome raised by the authors is that the process stimulates collaborative learning, with students sharing resources, creating assignment aids, answering forum questions, and providing extra peer assessments.

Case 7: Web-based peer assessment: a case study with civil engineering students
Technology: Google Drive
Type of assignment: Writing essay
Type of assessment: Formative

From http://online-journals.org/i-jep/article/viewArticle/2411 and http://www.slideshare.net/gmatos/icl-2012-final

The assignment process takes place in five successive steps:

1) The student selects an article from an online source.
2) The student uploads the article as a Google Drive document.
3) The student summarizes the article.
4) The student analyses the article.
5) The student gives his/her opinion about the article.

Steps 3 to 5 are written in a Google Drive document that the author student then shares with the teacher and with one assigned peer reviewer. The peer reviewer assesses the document using the comment feature of Google Drive and grades the work. The teacher then reviews and grades both the author student and the peer reviewer. The author student can then review the feedback and update his/her work. Finally, the teacher reviews the updated online document to give the final grade. During the assignment, additional online documents are provided through Google Drive: an orientation document that describes the objectives and tasks to be performed, a table connecting authors to peer reviewers, and a table for the management and coordination of the tasks between author, peer and teacher. The authors observed that only a small number of students used their peers' feedback to improve their essay. The main observations are:

• The use of a digital environment to support the assignment and assessment process did not present any difficulty to the students.
• The support material used to present the assessment process face to face was very important for the good achievement of the process.
• There is an obvious need to improve students' feedback and communication skills.
• The teacher's grade seems to influence the use of peers' feedback to improve the work.
• The overvaluation of the teacher's grade and the small difference between the student's grade and the teacher's grade question the necessity of having a double intermediate assessment (peer and teacher).


Case study 8: Online peer-assessment in a large first-year class
Technology: Workshop module (Moodle)
Type of assignment: Writing essay
Type of assessment: Formative

From http://www.academia.edu/665555/Where_Angels_Fear_to_Tread_Online_Peer-Assessment_in_a_Large_First-Year_Class

This case study aims at providing formative feedback to widen participation and develop writing skills for large and diverse classes (800 students).

The Moodle Workshop module was limited to three basic features: 1) the submission and random distribution of assignments; 2) grading and feedback based on grid forms; and 3) sharing of work with peers' feedback. The peer assessment is only formative and peer feedback is anonymous. As students are used to peer assessment, no example essays are provided and no self-assessment is requested. At the end of the assignment period, the five best peer-scored essays are published publicly. Students are provided with a rationale about the advantages of peer assessment and detailed information about the assessment process, and forums were used for scaffolding.

The assessment process itself is organized into three main steps: 1) each student submits the first version of the essay; 2) at least two peers review the essay; and 3) each author student has to respond to the peer feedback and submit the final essay. The teaching staff grades the final essay. Peers' assessments are neither evaluated nor marked. Engagement was encouraged by rewarding students: peer assessment replaced one exercise.

Students’ evaluation of the assessment process is mixed, but around ¾ of the students find it more useful to provide than to receive feedback.

Case study 9: Enquiry-based peer assessment
Technology: BlackBoard, ASK (Assignment Survival Toolkit)
Type of assignment: Writing essay
Type of assessment: Formative/summative

From: http://www.academia.edu/1495031/Online_peer_assessment_helping_to_facilitate_learning_through_participation

The objective of this case study is to embed enquiry-based learning, information literacy and e-learning in a peer e-assessment assignment.

According to the authors, in enquiry-based learning students work in groups to solve problems with the help of a wide range of information resources. The teacher intervenes as a facilitator who enables students to self-regulate their learning.

The peer assessment assignment includes four stages and lasts three weeks:

1. Students write a 500-word essay answering a question submitted by the teacher. This first version of the essay is formatively reviewed face to face with tutors.

2. A second, longer version of the essay is then written, which is formatively assessed by peers on BlackBoard.

3. Students are then organized in groups and, supported by ASK (Assignment Survival Toolkit), which provides resources and individualized step-by-step planning for writing an essay, they have to write a third, full essay. This full essay includes an introduction, body and conclusion. Over three weeks, the peers in a group have to reciprocally submit weekly feedback about the essays. Students are taught to submit productive feedback (based on setting the criteria, selecting the evidence and making a judgement).

4. Students then have to review and update their essay and submit a final version, which is marked.

The results of the student survey highlight the importance of feedback from both tutors and peers. The outcomes come both from the feedback received and from the peers' work reviewed and assessed. But the survey also points to the credibility of feedback: some students favour tutors' feedback over peers', or wonder how to trust the feedback of people who are at the same stage of knowledge as themselves.

Another result is that peer assessment facilitates and enhances learning. This matches the literature, which indicates that peer assessment encourages students to collaborate, share and reflect.

Case study 10: Coursera Peer Assessment - Writing in the Sciences
Technology: Coursera
Type of assignment: Writing essay
Type of assessment: Summative

From http://scienceoftheinvisible.blogspot.co.uk/2012/10/coursera-peer-assessment-writing-in.html

This case study is another MOOC-related example of peer assessment. This one is described from the point of view of a student (or rather, a teacher who followed the course as a student).

The peer assessment assignment is organized as follows:

• Each student has 7 days to write a short essay of a few hundred words.
• Each student has 7 days to assess 5 peers' essays and grade them on a 0-3 scale, with short free-text feedback on different rubrics. The assessment is done twice, the first time with update suggestions and the second time on the revised version for a final mark.

The author indicates that the process worked well for him. However, when wondering whether he would apply this model to his own students, his answer is: "I'd like to think so but I'm not sure. For one thing it's not clear that our students are as confident or motivated as the participants in this course. For another, there is the issue of marking cartels as students indulge in the prisoner's dilemma (as they perceive it) with summative assessment. Sadly, I can't see a system like this being a goer for us."

Case study 11: Peer feedback sessions
Technology: Skillshare (skillshare.com)
Type of assignment: Video presentation of project results
Type of assessment: Formative

From: http://moocnewsandreviews.com/massive-mooc-grading-problem-stanford-hci-group-tackles-peer-assessment/


This case study describes how an online course platform organizes a lightweight peer assessment system based on "peer feedback sessions"4. Students can opt in to peer feedback sessions. When they do so, their projects are submitted to two peers who will be able to provide constructive feedback. In return, the peers' projects are submitted to the student for the same session. This initiates a discussion between the reviewers and the reviewed.

Figure 7 – Student’s request to submit a project to peers

The author describes a session he participated in for a course on video production. He mentions that the peer feedback process led him to dive deeper into the course.

Case study 12: Reliability and validity of web-based portfolio peer assessment
Technology: E-portfolio
Type of assignment: Project
Type of assessment: Formative/summative

From C.-C. Chang, K.-H. Tseng, P.-N. Chou, and Y.-H. Chen, "Reliability and validity of Web-based portfolio peer assessment: A case study for a senior high school's students taking computer course", Computers & Education, vol. 57, no. 1, pp. 1306-1316, Aug. 2011.

This case study concerns a class of around 70 students who have to implement and present a project. The assessment process is organized as follows:

1. Students are first provided with portfolio samples, assessment criteria and guidelines. The rubrics of the assessment criteria are tailored according to students' feedback.

2. Students then have to develop their portfolios, monitor peers' portfolios and participate in forums.

3. Students finally have to perform peer assessment. Peer assessment is anonymous and group-to-group. At the same stage, the teaching staff scores the portfolios.

The global result of this case study is the lack of reliability of portfolio peer assessment. The authors identify the need to avoid or attenuate grading bias. They point out the burdensome nature of portfolio assessment, particularly from the point of view of the teaching staff. They also suggest providing "advanced trainings and support so that students would be more likely to get involved in the assessment process with proper abilities".

Case study 13: Teamwork skills assessment for cooperative learning
Technology: Ad-hoc platform
Type of assignment: Group work
Type of assessment: Formative/summative
From P. S. Strom and R. D. Strom, "Teamwork skills assessment for cooperative learning", Educational Research and Evaluation, vol. 17, no. 4, pp. 233-251, 2011, and D. Brown, "Implementation of the Teamwork Skills Inventory among adolescents", 2010. [Online]. Available: http://hdl.handle.net/2286/9c2jizip6q5. [Accessed: 08-May-2013].

4 http://help.skillshare.com/customer/portal/articles/1104466-what-is-a-peer-feedback-session-


The Teamwork Skills Inventory (TSI) is a method based on peer and self-assessment to evaluate teamwork skills (currently, 25 skills are defined). It is an anonymous online assessment tool where students answer questions regarding the individual contribution of each peer and then evaluate their own contribution. Teachers are expected to provide instructions and hold discussions about teamwork and the skills involved. The authors of the method have defined a five-lesson curriculum to train and teach students about teamwork peer assessment. Teachers also have to express their trust in students' ability to assess fairly.

Once the group work is completed, each student assesses and marks each of the 25 skills for himself/herself and for each peer. Once the assessment is completed, each student gets a profile organized into two columns to compare the self-assessment with an aggregated view of the peers' assessments. The method integrates features to attenuate over-evaluation of peers: a warning pop-up message when the maximum grade is given for a skill, and an inflation rating index indicating that a student needs additional guidance for improving his/her assessments.

For students, the method helps them to compare their self-evaluation with their peers' evaluations and to improve their self-evaluation. For teachers, the process allows evaluating weaknesses of individuals and groups in order to adapt learning, as well as evaluating the teachers' own skills for training students for group work. The difficulties are related to the level of trust that teachers grant students when involving them in the assessment process. Peer assessment is also a challenge for teachers, who need to share with students the way they assess them. Time and effort are required to set up proper assignments. Peer assessment supports collaborative learning.
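A minimal sketch of how such a two-column profile could be computed is given below, assuming peer scores are simply averaged per skill and assuming a naive inflation flag for raters who give the maximum grade everywhere; the actual TSI computation rules are not detailed in the sources.

```python
# Hypothetical sketch: building a TSI-like profile per student, comparing the
# self-assessment with the average of peers' assessments for each skill.

def profile(self_scores, peer_scores_list):
    """self_scores: {skill: score}; peer_scores_list: list of {skill: score} from peers."""
    result = {}
    for skill, own in self_scores.items():
        peer_avg = sum(p[skill] for p in peer_scores_list) / len(peer_scores_list)
        result[skill] = {"self": own, "peers": round(peer_avg, 2)}
    return result

def inflated(rater_scores, max_grade=5):
    """Assumed heuristic: flag a rater who gives the maximum grade to every skill."""
    return all(score == max_grade for score in rater_scores.values())

self_scores = {"listens to others": 4, "meets deadlines": 3}
peers = [{"listens to others": 5, "meets deadlines": 2},
         {"listens to others": 4, "meets deadlines": 4}]
print(profile(self_scores, peers))
print(inflated({"listens to others": 5, "meets deadlines": 5}))  # -> True
```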

Case study 14: Facilitating peer and self-assessment
Technology: WebPA
Assignment type: Group work
Assessment type: Formative/summative
From http://www.jisc.ac.uk/media/documents/programmes/elearning/digiassess_assessingselfpeers.pdf
This case study describes how WebPA has been used at the Universities of Hull and Lancaster. The platform has been applied to peer assessment of group work in many disciplines, ranging from English to Civil Engineering. The functioning of the WebPA platform has been described in the state of the art section of this document; therefore we will focus on the advantages, drawbacks and limitations identified in this case study. The authors note that tutors have not recorded any complaints of malpractice (which does not mean that there were no problems, but that students did not report them). It seems critical to take the time to explain and demonstrate the process face to face: it indicates the importance that teachers give to the process, it allows addressing basic questions, and it avoids problems during the assessment process. It is even suggested to improve students' involvement by defining the assessment criteria in collaboration with them. Based on their experience, the authors indicate that students:

- acquire a greater sense of ownership and control over their learning


- work harder to get a successful assessment from their peers.
Peer assessment also favours dialogue and social interaction between students and can therefore ease new students' integration. For the teachers, there are obvious practical advantages over "paper"-based peer assessment: it can be accessed from any place and at any time, and the results are instantly and securely collected. From a pedagogical viewpoint, it enables assessing skills that are normally difficult, or even impossible, to assess.

Case study 15: Formative collaborative quiz with clickers
Technology: Clicker; votamatic (votamatic.unige.ch)
Assignment type: Quiz
Assessment type: Collaborative/formative
Submitted by: [email protected]
This case study has been investigated at the University of Geneva in the context of a first-year bachelor course dedicated to multimedia technology, with a class of approximately 120 students. Students use their own devices: laptops, tablets or smartphones. The assessment is performed face to face at the beginning of a class. A few simple quiz questions have been set up with the votamatic tool. Students are not informed about the assessment in advance. The goal of the assessment is explained to them; they are told that their answers are anonymous and that there will be no mark for this exercise. Students are then requested to organize themselves into groups of 2 or 3 (students who want to remain alone are allowed to do so). They are given a simple URL to reach the quiz (without any login or authentication process). The quiz can be accessed on a laptop, tablet or smartphone; by grouping students in teams of 2 or 3, there are enough available devices in the class.

Students are given 15 to 20 minutes to answer the quiz. During this period they can ask questions to the teacher and discuss among themselves. Once the period is over, the teacher stops the quiz and the results are displayed on the screen; all students can view them at the same time on the screens of their devices. During this last period, the teacher goes through each question, discusses the results and explains the answers.

For the teacher, it is a good way to evaluate the global level of the class and identify weaknesses. The exercise initiates a discussion between the teacher and the class and among students. As the answers are anonymous, students are comfortable participating and express no reluctance. The assessment tool is lightweight and easy to use for both the teacher and the students. The whole exercise takes around 45 to 60 minutes, but it is possible to reduce this time by submitting the quiz between two classes and only discussing the results during the class.


Figure 8 – A snapshot of a quiz

Figure 9 – A snapshot of the display of the quiz results


Case study 16: Gamified work group assessment
Technology: User points; Elgg (hec-onnect.unige.ch)
Assignment type: Project
Assessment type: Formative/summative
Submitted by: [email protected], [email protected]
This case study has been investigated at the University of Geneva in the context of a first-year bachelor course giving an introduction to web services, with a class of approximately 300 to 400 students.

Gamification is one among various approaches applied to engage and organize participation. It consists in introducing game mechanics in non-game contexts, with the main objective of increasing user engagement. This approach has raised a lot of interest and development in education, with the expectation of improving students' engagement in learning activities. One of the techniques involved in gamification is user reward. The reward is usually based on a score that the user earns throughout his/her interactions with the system: whenever the user acts positively, his/her score increases. Once the score reaches a pre-defined threshold, the user gets a reward (for example a badge displayed on his/her profile).

The basic idea consists in adapting this user points approach in order to estimate students' individual contributions to the global effort. At the end of the group work, students' scores are used to assign a mark that is then integrated into the computation of the final mark. Group work is supported with an online shared workspaces platform. The platform is used for collaborative learning, so that students can tutor their peers and provide them with feedback during the group work project. The tutoring can apply to the activities of the group work assignment but also to the technical and organisational skills required to use the collaborative platform.

We consider each action that a student can perform on the platform and evaluate it according to its contribution to increasing the global knowledge of the whole class. A student who publishes a public bookmark is considered as being willing to share a resource with peers. A student who comments on content produced by another student is considered as being willing to provide feedback to peers. These two activities are positively rewarded. We do not evaluate the quality of the production: only the intention to contribute is rewarded. We are aware that we may reward "useless" contributions. Our policy is to favour contributions, considering that learning students are not necessarily able to perform efficiently from the beginning of the group work. The process also takes into consideration the actions that students can perform to increase their own knowledge. For example, when a student reads content produced by a peer, we consider that the student is willing to learn from others; therefore, his/her score increases.

A pre-defined ranking of all possible actions is established. The ranking is defined according to the weight of the contribution that a given action may make to the global knowledge. Sharing a bookmark will, for example, be considered a less significant contribution than commenting on content. The number of points a student can earn for a given action depends on the rank of the action. The teacher can monitor the assignment of user points at any time and get the final amount for each student. He/she can then decide how to integrate this scoring of the student's individual contribution into the final mark of the group work.

Students are made aware of the fact that their individual contribution and support to the global platform knowledge is evaluated. The collaborative platform is developed with the open source Elgg social network engine. The core engine is augmented with various plugins. Shared workspaces are defined as groups. Each group has its own workspace and toolbox (the toolbox integrates wiki, blog, forums,


question/answer, brainstorming tool…). Professors, teaching assistants and staff as well as students are given the same rights on the platform. They can, for example, create a group for formal or informal learning activities. A gamification plugin has also been partly integrated: the user points system is activated, whereas the badges system is disabled. Students cannot access their score.

The platform has been used since 2010 for a first-year bachelor course in Information Systems for students in commercial and management studies. Every year, the class varies between 300 and 400 students. Students have to work in groups for the semester project. The project is organized into multiple phases; for each phase they have to produce outputs of increasing complexity. During the project they are continuously provided with resources and guidelines (online and face to face) so that they can gradually learn to use the platform and tools, and get used to collaborating.

The final mark is computed from the individual contribution score and the evaluation of the final group production. The individual contribution is estimated by defining ranges of user points. The ranges correspond to different levels of contribution, from inactive to very active. For each student, the individual mark is assigned according to the user points range in which his/her score stands. Therefore, students in the same group may receive different final project marks.

We have already raised the issue of useless contributions, with the risk of rewarding them unfairly. From our experience, we have noticed that if we provide students with differentiated types of content, it is possible to discriminate and redirect low-level contributions. For example, introducing a shoutbox allows gathering most of the "logistics" messages (such as "where do we meet?"). Moreover, by assigning individual contribution marks according to pre-defined ranges of user points, we avoid favouring students who over-contribute.

The resulting collaborative learning platform encourages students to contribute and collaborate. It addresses the "free rider" problem by providing an indicator of each student's individual contribution. This indicator allows defining a mark that can be taken into consideration for the final mark. Further developments include the refinement of the rules to assign user points and the introduction and evaluation of intra- and inter-group peer assessment. The refinement of the user points rules is expected to bring a better estimation of students' individual participation. The rules defining the ranges of user points used to assign marks can probably also be enhanced. The intra-group peer assessment is expected to adjust the individual contribution score with the evaluation from the peers. The inter-group peer assessment is expected to adjust the global group contribution.
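The following sketch illustrates the user points mechanism described above; the action types, point values and mark ranges are illustrative assumptions, not the values actually configured on the platform.

```python
# Hypothetical sketch of the user points mechanism: each action type has a
# pre-defined point value (higher rank = larger contribution), points accumulate
# per student, and the total is mapped to an individual contribution mark.

ACTION_POINTS = {          # assumed values, ordered by estimated contribution
    "read_content": 1,
    "share_bookmark": 2,
    "comment_content": 4,
    "publish_blog_post": 5,
}

MARK_RANGES = [            # assumed ranges: (minimum points, mark)
    (0, 0),                # inactive
    (20, 1),
    (50, 2),
    (100, 3),              # very active
]

def contribution_mark(actions):
    """actions: list of action-type strings logged for one student."""
    points = sum(ACTION_POINTS.get(a, 0) for a in actions)
    mark = 0
    for threshold, m in MARK_RANGES:
        if points >= threshold:
            mark = m
    return points, mark

log = ["read_content"] * 10 + ["share_bookmark"] * 5 + ["comment_content"] * 8
print(contribution_mark(log))  # -> (52, 2)
```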

Case study 17: Portfolio-based collaborative assessment
Technology: Blog/Portfolio
Assignment type: Project
Assessment type: Collaborative/formative
Submitted by: [email protected], [email protected]
At the University of Geneva, physical education is currently taught in a dual system: students share their studies between theory at the University and practice in schools. During the theory periods, student trainees are taught by university trainers and are organized in classes. During the practice periods, student trainees stay in primary and secondary schools, where they are supervised by field trainers and organized in binomial (two-person) teams. During their stay in schools, student trainees have to prepare lessons and deliver them to classes under the supervision of field trainers (field trainers are themselves physical education teachers).


This combined peer-tutoring (where student trainees teach their peers) and peer-assessment (where student trainees assess their peers) approach allows increasing the academic gain for both tutors and tutees. The main issues are the number of categories of participants involved and the lack of continuity and contact between the participants (Figure 10.a). This lack of contact does not only affect students and trainers; it also concerns university trainers and field trainers. The objective of this project is to introduce distance learning technology in order to keep the participants connected and stimulate exchanges and feedback. The selected approach consists in organizing participants' interactions around student trainees' activities with e-portfolios (Figure 10.b). Given the specific context induced by the dual education system, the training platform must be at the same time:

- A common place where the different categories of participants (student trainees, university trainers, field trainers) can continuously exchange, harmonize and converge.

- A place where student trainees can depict their activities, get feedback, monitor their progress and be evaluated.

The evaluation aspect is particularly important. Trainees tend to perceive the creation of e-portfolio content more as a required process than as a product that can demonstrate professional growth; they usually only perceive the latter at the end, once they can browse their portfolio content. Therefore, including the portfolio content in the evaluation creates an initial constraint to engage the trainees in the production of content for the portfolio.

Figure 10 – E-portfolio as a virtual common shared space to overcome dual system barriers

Another important issue is that the structure of the platform needs to reflect the structure of the pedagogical organization: classes, binomial teams and students. It must also reflect the roles of the different participants in the pedagogical organization, particularly in terms of interactions such as feedback from the trainers to the trainees. We consider three levels corresponding to the three levels of integration of student trainees: 1) the individual level (for individual progress and evaluation); 2) the binomial team level (for co-elaboration, feedback and evaluation); and 3) the class level (for global management, information and dissemination of theoretical material). The levels are organized hierarchically, so that when a student trainee submits a contribution at the binomial level, it also appears at the individual level (so that he/she can monitor his/her own progress), but does not appear at the class level, as it does not correspond to that level (however, the contribution can be reviewed by other student trainees, as all published contributions are made public to all users).



Implementing such a platform requires taking users' skills into consideration in the design. The first constraint is that none of the users of the platform is particularly skilled in information technology. Therefore, the platform should not be overloaded with functionalities. It is of course not possible to avoid the additional workload induced by the need to master the platform; however, a few design rules can be applied to make it simpler. We have devised the following approach:

- Rule 1: provide users with only the features that are required, so that they do not get lost among too many options that they would have to test and acquire. Each role is clearly identified and gives access to the required features. For example, field trainers do not need to produce any content other than feedback; therefore, they have no personal blog and are limited to producing comments.
- Rule 2: provide tools that are similar to the ones that users may use in their personal practice of information technologies. For example, the students' e-portfolio is a blog, which may already be familiar to some of the student trainees.

Our main objective at the implementation level is to reflect the same structure as the one developed at the pedagogical level. Figure 11 describes the global architecture organized around the student trainees' e-portfolios. The class organization is reflected through the use of groups: a student trainee is an individual user with his/her own blog; he/she is a member of a binomial team, which is also equipped with a blog; and each binomial team is a subgroup of the class group. This structure ensures the dissemination of contents among the appropriate levels. Depending on the level at which a post is submitted, it appears at different blog levels (posts always appear in the individual blog of their author).

Figure 11 - Global overview of the platform structure and organization

We also assign a role to each participant (university trainer, field trainer, student trainee). Each role comes with rights to produce content (corresponding to the role's possible interactions with the platform and the other roles). For example, student trainees need to submit lesson preparations and practice reports, so they are assigned a blog and the ability to submit blog posts. They are also expected to provide feedback to their peers, so they are assigned the ability to submit comments. They are expected to communicate with trainers, so they are assigned the ability to submit forum topics and posts.
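The sketch below is one possible way of encoding these role rights and the hierarchical visibility of posts; the role names follow the case study, but the data structures and propagation rules are illustrative assumptions rather than the actual Elgg configuration.

```python
# Hypothetical sketch: role-based content rights and level-based visibility
# for the portfolio platform (individual / binomial team / class levels).

ROLE_RIGHTS = {                       # assumed mapping of roles to allowed content types
    "student_trainee": {"blog_post", "comment", "forum_post"},
    "university_trainer": {"comment", "forum_post"},
    "field_trainer": {"comment"},     # feedback only, no personal blog
}

def can_create(role, content_type):
    return content_type in ROLE_RIGHTS.get(role, set())

def visible_levels(submission_level):
    """Posts always appear in the author's individual blog; a binomial-level post
    does not propagate up to the class level (assumption drawn from the description)."""
    levels = {"individual"}
    if submission_level in ("binomial", "class"):
        levels.add(submission_level)
    return sorted(levels)

print(can_create("field_trainer", "blog_post"))  # -> False
print(visible_levels("binomial"))                # -> ['binomial', 'individual']
```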

Case study 18: Acadima
Technology: Dedicated platform
Assignment type: Flashcards
Assessment type: Formative
Submitted by: [email protected]
Acadima - http://www.acadima-information.ch
Acadima provides university students throughout Switzerland with the opportunity to write, edit and share both learning and test flashcards targeted on their modules and final exams. Students can enhance the quality of the cards by providing mutual feedback and quality assessments. In this way, a dynamic question pool is developed that can be readily retrieved by students via their AAI access. The flashcards can be called up on smartphones or via a web interface. The skills developed include:
- self-organized learning (cooperative, reflexive, motivational, emotional, cognitive): self-competence
- knowledge about how to write good MC questions: competence in didactics
- peer review: competence in didactics, discovering misconceptions
- competence in media (critique, skilful usage, design)
The technology components involved are:
- a collaborative question pool for exam preparation (card status: published, visible to peers)
- peer review, with a "reviewed" flag (through feedback)
- professor review, with a "profproofed" flag
- peer voting (card quality voting, card level of difficulty voting)
- feedback
- gamification ("boost your peers"), push mechanism
- mobile learning.
The workflow has to be simple and clear. Students need to see their data immediately. Login is simple (AAI), there is no advertising and the service is free of charge. The approach relies on sharing small amounts of data for exam preparation (e.g. learning and test flashcards), on feedback features (feedback, voting), on building communities, on gamification, and on freedom, equality, trust, community, collaboration and usability.
The main goal is not the provision of learning and test flashcards as such, but the contents, which are created and made available by students. Students can enhance the quality of the cards by providing mutual feedback and quality assessment. In this way, a dynamic question pool is developed that can be readily retrieved by students via their AAI access. This motivates students to work together, for example through the option of marking cards as favourites, rating them as excellent or criticising them. This serves to increase the quality of the cards, because together we are wiser. Acadima promotes two fundamental cooperative processes: on the one hand, cards can be compiled on an altruistic basis for others to use as well; on the other hand, students can work together on a joint basis. A shared benefit results. People are naturally disposed to cooperate, to exchange information and tasks, and to share their aims.
Acadima can be used by university students and teachers for creating and working with flashcards as well as for sharing them with fellow students. Acadima has been designed in cooperation with universities and students. It is an optimized form of learning and retaining new information. By sharing with others, students


can be saved from having to create their own sets of flashcards. Acadima enriches learning, teaching and campus life not only for students but also for individual faculties and the wider community. It enables teachers to engage more students in exciting new ways, reaching them on their own terms and via their devices, and keeping students both informed and involved. An attractive aspect of Acadima is the link it provides between different universities and the resulting openness. Individual cards can, and ought to, be swapped, and it is possible to collect useful cards in personalised card collections. The heterogeneous nature of the subject matter becomes clear, and new facts can be linked together. Explorative learners can move forward into previously unknown areas. The focus is on the contents. It would thus be possible to refer to Acadima as "crowd-sourcing for the crowd". Acadima is a straightforward knowledge-imparting tool for teachers and students that is fun to use. We want to tie Acadima into a wider solution, since together we can create a meaningful application. Winning applications have to be sufficiently unique; we can attain this goal through the uniqueness of Switzerland's university network.
Since 2011, we have built student expert groups for the modules of the basic studies curriculum in biology at the Division of Biology, University of Zurich. Students from higher semesters, such as the advanced studies curriculum, join these groups as reviewers. In the end we built up to 20 learning communities covering the main topics of the basic studies curriculum. Their task is to create meaningful learning and test flashcards for exam preparation. In a didactics course they learnt to design good MC questions. Other students can profit from the work of their fellow students and at the same time contribute with annotations, feedback and ratings. Lecturers can set 'profproofed' icons.


Recommendation  proposals  

For the time being, this section collects ideas, designs and suggestions inspired by the state of the art and the case studies.

Recommendation  1:  collaborative  learning  environment  

It follows from the current trend and evolution that peer assessment allows shifting from assessment of learning to assessment for learning. Therefore, peer assessment should be fully integrated inside the learning environment as a collaborative learning activity. Moreover, integrating peer assessment tools inside the collaborative learning environment will open peer assessment to students, so that they can use and organize peer assessment by themselves, alongside the formal assessments required by teachers.

Recommendation 2: e-identity, anonymity and pseudonymity

Privacy/anonymity appears to be an issue (this is particularly raised in collaborative-like environments such as the MOOC platform Coursera): there is a contradiction between, on the one hand, the idea that peer feedback should be anonymous to "protect" peers and avoid conflict and, on the other hand, the need to stimulate a learning community and create a discussion between peers.

One direction to study is pseudonymity, pseudo-identity and pseudonymization: students' identities could be pseudonymized on the learning platform so that each student can be identified all over the platform without revealing his/her real identity. That would introduce three levels of identity: real id, pseudo id and anonymity. According to the type of activity, each student could either be forced to use one of the three modes or be allowed to choose the id mode.

It could answer most of the privacy concerns raised for MOOCs. This id system could, for example, allow feedback follow-up through the pseudo-id while preserving anonymity. See M. Anwar and J. Greer, "Facilitating Trust in Privacy-Preserving E-Learning Environments", IEEE Transactions on Learning Technologies, vol. 5, no. 1, pp. 62-73, 2012.
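A minimal sketch of how such a three-level identity policy could be resolved per activity is given below; the activity names, policy table and function are assumptions for illustration only.

```python
# Hypothetical sketch: resolving the identity shown for a student in an activity,
# given a per-activity policy (forced mode or free choice among the three levels).

ACTIVITY_POLICY = {                 # assumed policy table
    "summative_exam": "real",       # forced real identity
    "peer_feedback": "pseudo",      # forced pseudonym, enabling feedback follow-up
    "anonymous_poll": "anonymous",  # forced anonymity
    "forum": "choice",              # the student chooses the mode
}

def displayed_identity(student, activity, preferred_mode="pseudo"):
    """student: dict with 'real_name' and 'pseudonym' keys."""
    mode = ACTIVITY_POLICY.get(activity, "choice")
    if mode == "choice":
        mode = preferred_mode
    if mode == "real":
        return student["real_name"]
    if mode == "pseudo":
        return student["pseudonym"]
    return "anonymous"

student = {"real_name": "Jane Doe", "pseudonym": "bluefox42"}
print(displayed_identity(student, "peer_feedback"))  # -> bluefox42
```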

Recommendation  3:  learning/training  to  peer  assess  

One concern of students with peer assessment is the risk of having their ideas stolen. This indicates the need to create a culture of peer assessment among students. It reinforces an observation from the literature and the case studies that informing and training students about peer assessment is critical for the success of the assessment. It is also closely related to information literacy and plagiarism.

It therefore seems crucial to develop guidelines, training resources and material for peer assessment, and to consider them as embedded inside learning and training. There are probably ad-hoc training spaces to provide, in a similar way to the sandbox used in Wikipedia to let newcomers practise writing articles.

For example, in [1], the author, a student who followed a few MOOC courses, provides a list of "rules" for authors and peers.

Guidelines for author student:


• Concentrate on the constructive
• Ignore the destructive
• Always find a takeaway
• Analyse the reviewers' analyses
• Extract the subtextual assumptions
• Remember that your peers are your equals
• Feedback is not a blame game
• Tough out the low marks
• Reflect on the assignment
• Actually rewrite the paper

Guidelines for peer student:

• Feedback is a crucial part of the learning process
• Read the text closely, slowly
• Be specific
• Time is of the essence
• When you think you're done, give it one more look
• Identify with the international, multi-language audience
• There is no global writing style
• Be open minded

Best practices and recommendations have to be defined so that they can be explained before assessments, tutored during assessments and checked after assessments.

Recommendation  4:  peer  tutors  and  mentors  

Peer assessment could be performed by "peer tutors" or by a mix of peers and peer tutors. Peer tutors are students who are more advanced than the students they are assessing; they therefore have deeper knowledge of the discipline. This process requires a completely different organization of the assessment process and introduces new issues, such as how to engage and/or reward peer tutors. One potential advantage is that students might attribute higher value to the feedback they get. Another potential advantage is that peer tutors are "outside" the competition. As students and peer tutors are in separate classes with different schedules, the requirement for a "virtual assessment room" makes complete sense. The "peer tutor" notion could be extended to "peer mentor", where peers could be involved in summative and formative assessment activities. The "assessment room" could therefore be integrated into a collaborative virtual learning community platform.

Recommendation 5: hybrid assessment technologies

In [2], the author studies two types of assessment techniques currently adopted by MOOCs: Automated Essay Scoring (AES) and Calibrated Peer Review (CPR). The survey shows, first, that both techniques have different advantages and drawbacks and, second, that neither method can assess the whole range of essay types. However, in the conclusion, the author suggests that both techniques could be used in combination in an iterative process.

This idea can be extended and developed into hybrid models that do not rely only on a single assessment mode, but that complement peer assessment in order to provide complementary feedback about students' performance and achievements. The paper cites the AES approach, but other approaches could be used, such as user points systems.

Recommendation  6:  peer  assessment  incentives  and  rewards  

One issue for the success of peer assessment is the engagement of peer assessors. One motivation for students is that peer assessment leads to deeper learning and to the development of personal skills. This motivation needs to be explicitly brought to the attention of students by teachers. However, this incentive alone may not be immediate and concrete enough to secure students' engagement.

A complementary approach could be to integrate the assessment tasks into the global summative assessment. It is indeed possible, for example, to extract "marks for marking" [3] from the assessors' activities and assessment results. These marks can then be introduced into the final mark according to a pre-defined weight. Students are therefore simultaneously assessed according to the results of their class work and to their contribution to the assessment. Peer assessing someone else's work is a difficult task, which requires not only effort but also training. Therefore, it may appear unfair to evaluate assessment performance for students who are motivated to contribute but have low assessment performance.
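A minimal sketch of this weighting is given below, with an assumed weight for the marking component; how the marking mark itself is derived from assessor activity is left open, as in [3].

```python
# Hypothetical sketch: blending the class-work mark with a "marks for marking"
# component according to a pre-defined weight.

def final_mark(work_mark, marking_mark, marking_weight=0.2):
    """marking_weight is the assumed share of the final mark given to assessment quality."""
    return (1 - marking_weight) * work_mark + marking_weight * marking_mark

print(final_mark(work_mark=5.0, marking_mark=4.0))  # on a 0-6 scale -> 4.8
```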

Another track could consist in acknowledging students' contributions to the global learning process. We can draw a parallel with researchers' communities or other thematic communities such as open source ones [4]. In [5] the authors investigate the behaviour of open source communities to analyse social relationships. Their main statement is that research and open source communities are based on the so-called "gift culture": "You give away knowledge and information in return for status and reputation." Peer assessment can be seen as a knowledge and information gift that is submitted to the learning community. There could be a systematic evaluation, made by assessees, of the assessments, in order to compute a global reputation for each student based on the average of the evaluations. The reputation can then be made public by displaying it as a value or as a badge. With this approach, the incentive is based on the usefulness of the assessment for the assessee. The risk is that assessees undervalue the assessments. It is possible to combine the "marks for marking" strategy and the "reputation" one using user points/badges gamification systems such as the Mozilla Open Badges framework5.

It may even be possible to go beyond the reward of reputation in order to improve social learning with formative assessment. As raised in [5], in research or open source communities, "the reputation is secured by the rule that one can use knowledge produced by somebody else, but it must always be clear from whom the idea originates." It should be possible to have students update their work according to peer assessments. If a student updates his/her work according to a peer's comment, he/she should acknowledge it in the work. This citation strategy is similar to the one applied in research with publications. The assessors' reputation can then be established according to the citations. The citation strategy can be extended and made mutual: an assessor could use some ideas from the work he/she is assessing in his/her own work, as long as this is indicated. Moreover, this strategy could address, to some extent, a usual concern raised by students during formative peer assessment, who fear having their ideas stolen by the assessors.

5 https://wiki.mozilla.org/Badges


References
[1] Heidebrink, "Giving As You'd Like To Receive - How to Benefit from MOOC Peer-Assessment", MOOC News & Reviews, 2013. [Online]. Available: http://moocnewsandreviews.com/how-to-benefit-from-mooc-peer-assessment/. [Accessed: 24-Jun-2013].
[2] S. Balfour, "Assessing Writing in MOOCs: Automated Essay Scoring and Calibrated Peer Review™", Research & Practice in Assessment, vol. 8, pp. 40-48, 2013.
[3] P. Davies, "The automatic generation of 'Marks for Marking' within the computerised peer-assessment of essays", 2003. [Online]. Available: https://dspace.lboro.ac.uk/dspace-jspui/handle/2134/1908. [Accessed: 26-Jun-2013].
[4] V. Singh and L. Holt, "Learning and best practices for learning in open-source software communities", Computers & Education, vol. 63, pp. 98-108, Apr. 2013.
[5] M. Bergquist and J. Ljungberg, "The power of gifts: organizing social relationships in open source communities", Information Systems Journal, vol. 11, no. 4, pp. 305-320, 2001.


An experimental framework for peer & collaborative assessment

Based on our investigations of peer assessment, from the state of the art in the literature and from the case studies we have collected, we propose a framework to integrate peer and collaborative assessment in teaching/learning activities. One objective is to move from assessment of learning to assessment for learning, which requires embedding assessment in learning. Peer and collaborative assessment is by nature social; therefore our proposal is to develop a social learning platform where peer and collaborative assessment is fully integrated. This social learning platform will offer a range of features allowing diagnostic, formative and summative assessment. These features will be available to teachers as well as to students: it is important that students can train and practise assessment themselves. Any social platform is potentially a social learning platform: it includes different tools to produce content and the basic framework to assess this content, namely comments for formative assessment and stars for summative assessment. The objective is also to design a general platform able to support case studies 16 and 17. Each case study is currently implemented with a dedicated ad-hoc platform. The purpose of the proposed framework is to cover the needs of these two case studies (including the new features and new scenarios requested by the teachers) and therefore to obtain a general framework that can be offered for social learning with peer and collaborative assessment. The requirements for the new scenario of case study 16 are available in Annex E, directly expressed by the teacher in charge of the learning module.

Connect, a social learning platform

The current collaborative platform that implements the online shared workspaces is developed with the open source social network engine Elgg (available from www.elgg.org). Social networks and communities have been introduced in educational institutions for different purposes: to implement a community model within courses; to support learning by participating in communities; to integrate institutional resources with others; and to create lifelong learning communities [1]. Elgg has already been implemented and investigated with some success in the higher education context. It has been evaluated as an alternative to traditional course management systems to engage students in collaboration and peer learning [2], as a social platform complementing a Virtual Learning Environment in order to provide students with an integrated Shared Learning Environment [3], and as a dual virtual learning space that integrates formal and informal learning [4], [5]. For our purpose, the core engine is augmented with various plugins (the current list is available in Annex F). Shared workspaces are defined as groups. A group is organized so as to represent a contextualized learning activity following the shared space model proposed in [6]. In this model, a learning activity is conceptually represented by a shared space (a group in our implementation) that integrates people, resources and applications (and possibly sub-groups). To achieve a common goal (a learning activity), people share content resources and applications in a space and use them to reach their goal. Each group has its own workspace and toolbox (the toolbox integrates a wiki, a blog, forums, a question/answer tool, a brainstorming tool…). Professors, teaching assistants and staff, as well as students, are given the same rights on the platform. They can, for example, create a group for formal or informal learning activities.
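
To make the shared space model more concrete, the following minimal sketch (in Python, with purely illustrative names that do not correspond to actual Elgg types) shows how a learning activity could be represented as a space aggregating people, resources, applications and optional sub-spaces.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SharedSpace:
    """One contextualized learning activity (a group in the Elgg implementation)."""
    name: str
    members: List[str] = field(default_factory=list)       # people working towards the common goal
    resources: List[str] = field(default_factory=list)     # shared content (files, wiki pages, ...)
    applications: List[str] = field(default_factory=list)  # tools from the toolbox (blog, forum, ...)
    sub_spaces: List["SharedSpace"] = field(default_factory=list)

# The class-wide group aggregates one sub-space per work group.
course = SharedSpace("Class group", applications=["wiki", "blog", "forum", "brainstorming"])
course.sub_spaces.append(SharedSpace("Work group 1", members=["student A", "student B"]))
```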

Groups can be subdivided into sub-groups. This is an important mechanism for managing peer assessment. The top group relates to the whole class and the sub-groups correspond to work groups. A work group can also be reduced to a single user for individual work. This structure preserves the contextual environment and can facilitate collaboration and cooperation.

The platform includes different features to achieve formative and summative peer assessment; individual and group assessment; and intra- and inter-group assessment.

Each content type (files, bookmarks, pictures, blog posts, collaborative texts and videos) can be annotated, commented on, marked as useful and rated (from 0 to 6).
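
As an illustration only (the field names below are hypothetical and do not reproduce the platform's actual data structures), a content item carrying this lightweight assessment data could be modelled as follows.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import List

@dataclass
class AssessedContent:
    """A file, blog post, video... together with the peer assessment data attached to it."""
    title: str
    comments: List[str] = field(default_factory=list)   # formative feedback
    useful_marks: int = 0                                # number of "marked as useful" votes
    ratings: List[int] = field(default_factory=list)     # summative ratings on the 0-6 scale

    def add_rating(self, value: int) -> None:
        if not 0 <= value <= 6:
            raise ValueError("ratings must range from 0 to 6")
        self.ratings.append(value)

    def average_rating(self) -> float:
        return mean(self.ratings) if self.ratings else 0.0
```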

The platform includes different types of content that can be used for peer assessment: polls, quizzes, forums, questions/answers, discussions, brainstorms…

All the assessment features are freely available to students and teachers (there are no specific roles on the platform, so any user, whether a teacher or a student, has access to all the available features). Students are thus able to use peer assessment features by themselves and for themselves.

A gamification plugin has also been partly integrated in order to assess individual contributions during group work. The user points system is activated whereas the badges system is disabled. Students earn points according to the actions they perform on the platform. Each type of action is assigned a pre-defined value according to the estimated importance of its contribution to the global knowledge of the group. The plugin has been slightly modified so that students cannot access their score. The final scores can be collected for a given period of time and for a given group (and its associated sub-groups). The scores can be used to assess the individual contribution of each group member during the group work.
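
The following sketch illustrates how such scores could be tallied. The action names and point values are hypothetical examples; the actual weights are configured in the modified user points plugin.

```python
from collections import defaultdict
from datetime import datetime
from typing import Dict, Iterable, Tuple

# Hypothetical weights: each action type is assigned a pre-defined value
# according to its estimated contribution to the group's knowledge.
ACTION_POINTS = {"post_blog": 5, "upload_file": 3, "comment": 2, "rate": 1}

def tally_scores(actions: Iterable[Tuple[str, str, datetime]],
                 start: datetime, end: datetime) -> Dict[str, int]:
    """Sum the points earned by each member for actions performed in a group
    during the given period. Each action is a (user, action_type, timestamp) tuple."""
    scores: Dict[str, int] = defaultdict(int)
    for user, action_type, when in actions:
        if start <= when <= end:
            scores[user] += ACTION_POINTS.get(action_type, 0)
    return dict(scores)  # visible to the instructor only, not to the students
```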

An inter-group assessment tool has been designed and is currently being developed. The basic idea is that the administrator of a group can activate the inter-group assessment features for all the sub-groups it contains. The administrator then defines the form, including the rubrics that learners will fill in to assess the group work. All the sub-groups are randomly associated in pairs (assessor/assessee) for the assessment. At the end of the group work period, the assessee group has to collect and organize all the final contents produced during the work into a container (called a set in Elgg) for the assessment. This container is then made available to the assessor group. The assessor group assesses the assessee group's work by collectively filling in the rubrics of the assessment form. The container object (the set) provides a lightweight e-portfolio mechanism on the platform.
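
A possible pairing procedure is sketched below. Since the exact pairing scheme is not specified here, the sketch assumes a random cycle, which guarantees that every sub-group is assessor and assessee exactly once, even with an odd number of sub-groups.

```python
import random
from typing import Dict, List

def pair_subgroups(subgroups: List[str]) -> Dict[str, str]:
    """Randomly assign an assessee to every assessor sub-group.
    A random cyclic order ensures each sub-group assesses exactly one other
    sub-group and is assessed exactly once, with no self-assessment."""
    if len(subgroups) < 2:
        raise ValueError("at least two sub-groups are required")
    order = subgroups[:]
    random.shuffle(order)
    return {order[i]: order[(i + 1) % len(order)] for i in range(len(order))}

# Example: pair_subgroups(["group A", "group B", "group C"]) might return
# {"group B": "group C", "group C": "group A", "group A": "group B"}
```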

In order to facilitate mentoring between students on the platform, another tool has been designed. It will be implemented thanks to a grant obtained from the University Computing Board of the University of Geneva. The basic idea is to facilitate the connection between students who need or request peer or collaborative formative assessment. A student can post a status on the platform stating his/her need for help in a specific domain and topic, and another student can reply and start a mentoring session that takes place in a dedicated group, based on the available formative peer assessment tools.

Finally, the platform is also integrated into the existing framework of online learning tools offered to students at the University: it adopts the global single sign-on authentication mechanism based on Shibboleth available to students, and it implements the official graphical design guidelines.


A peer assessment tool for group work

The peer assessment module works with the group/sub-group modules and the set module. The set module provides a specific content type, which behaves as a meta-content or a dashboard. A set collects any other type of content (file, text, bookmark, discussion…) available on the platform in order to organize and present it. Its behavior can be compared to that of a simplified portfolio. The resulting Elgg module will be made publicly available in a repository (no support will be provided, however).

Figure 12 - peer assessment process, first steps

The basic structure is to create a group that registers the whole class. This group is used as a reference for all students. The instructor can use it to provide all the information and material for the project. It can also be a place where students collaborate across groups. Sub-groups (or child groups) are created for the project. Each sub-group uses its workspace to collect and produce the resources required for the project.

The peer assessment process is described step by step in figures 12 and 13:

1. The instructor edits a review form (figures 15, 16 and 17).
2. Once the review form is complete, it is propagated to the sub-groups and initiates the review process (figure 15).
3. Sub-groups are randomly and automatically paired. Each pair defines an assessee and an assessor. Each sub-group is both an assessee and an assessor.
4. At the end of the project period, each sub-group submits a set that contains all the production required for the project (figure 20).
5. Sub-groups review the work submitted by their paired assessee group as a set (figure 20) and edit the review form (figures 18 and 19).
6. Each sub-group submits the review form.

The instructor can either allow each sub-group to update its group work according to the assessor group's feedback, or directly evaluate the group work using that feedback.

Figure 13 - peer assessment process, last steps

The corresponding data model is described in figure 14. A Model is composed of Questions; each Question corresponds to a rubric that all assessor groups have to answer. A Form is composed of Answers; each Answer contains an assessor group's answer to a question.
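
A minimal sketch of this data model is given below (in Python, with illustrative field names; it paraphrases figure 14 rather than reproducing the implementation).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    text: str                      # one rubric of the review form model

@dataclass
class Model:
    title: str
    questions: List[Question] = field(default_factory=list)  # defined by the instructor

@dataclass
class Answer:
    question: Question
    content: str                   # the assessor group's answer to that question

@dataclass
class Form:
    model: Model                   # the review form model being answered
    assessor_group: str
    assessee_group: str
    answers: List[Answer] = field(default_factory=list)
```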

Figure 14 - Data model for the peer review form

Figure 15 - Peer assessment management for editing/creating review form model and firing the review process

Figure 16 - Review form model edition

Figure 17 - Adding a rubric in the review form model

Figure 18 - Review form to be completed by the assessor group

Figure 19 - Review form edition during peer assessment

Figure 20 - Set submitted by the assessee group to be reviewed by the assessor group

References

[1] K. Ala-Mutka, "Review of Learning in ICT-enabled Networks and Communities", Institute for Prospective Technological Studies, 24061 EN, 2009.

[2] N. Garret, B. Thoms, M. Soffer, and T. Ryan, "Extending the Elgg social networking system to enhance the campus conversation", presented at the 2nd International Conference on Design Science Research in Information Systems & Technology, 2007.

[3] S. Stanier, "Community@Brighton: The Development of an Institutional Shared Learning Environment", in Technology-Supported Environments for Personalized Learning: Methods and Case Studies, IGI Global, 2009.

[4] A. Calvani, G. Bonaiuti, A. Fini, and M. Ranieri, "Towards e-Learning 2.0: New Paths for Informal Learning and Lifelong Learning - an Application with Personal Learning Environments", presented at the EDEN Annual Conference 2007, Naples, 2007.

[5] S. Leone and G. Guazzaroni, "Pedagogical Sustainability of Interoperable Formal and Informal Learning Environments", in Developing and Utilizing E-Learning Applications, IGI Global, 2011.

[6] D. Gillet and E. Bogdanov, "Personal Learning Environments and Embedded Contextual Spaces as Aggregator of Cloud Resources", presented at the International Workshop on Cloud Education Environments, 2012.


ANNEXES  

Annex A: Mind map of scoring criteria of the assessment task

From: H. Tillema, M. Leenknecht, and M. Segers, "Assessing assessment quality: Criteria for quality assurance in design of (peer) assessment for learning – A review of research studies", Studies in Educational Evaluation, vol. 37, no. 1, pp. 25-34, March 2011.


Annex B: Decomposition of skill assessment

From: D. Sluijsmans and F. Prins, "A conceptual framework for integrating peer assessment in teacher education", Studies in Educational Evaluation, vol. 32, no. 1, pp. 6-22, 2006.


Annex C: Description of the decomposition of skill assessment (cf. Annex B)

From: D. Sluijsmans and F. Prins, "A conceptual framework for integrating peer assessment in teacher education", Studies in Educational Evaluation, vol. 32, no. 1, pp. 6-22, 2006.


Annex D: WebPA peer assessment form template for group work


Annex E: Specifications for the distance learning platform associated with the "Intervention en milieu scolaire" module

By Benoît Lenzen, teacher and head of the module

1. Accessibility
• Accessible to the following profiles:
  o University trainers (FU), with administrator status
  o Field trainers (FT), with user/contributor status
  o Students (ET), with user/contributor status
  o Visitors, with user status and restricted access?
• Autonomous registration of users with a password
• Access to most of the platform's features as soon as the user registers
• Access to the features related to the assignment of the ET into pairs as soon as this assignment has been made (after a few weeks of face-to-face classes)

2. Structure

3. Functionality

Libraries

• 1 general library in the class space, with the following folders:
  o Course materials
  o Reference texts
  o Organization
  o Working material (with enough space to store video)
  o Miscellaneous resources
• 1 internal library in each pair's space

Messaging

• For FU, the possibility to send a message:
  o To the ET group
  o To a given ET
  o To a given pair of ET
  o To the FT group
  o To a given FT
• For FT, the possibility to send a message:
  o To a given ET
  o To a given pair of ET
  o To the FU group
  o To a given FU
• For ET, the possibility to send a message:
  o To a given FT
  o To the FU group
  o To a given FU
  o To the ET group
  o To his/her pair partner
  o To a given pair of ET

Publications

• For FU, the possibility:
  o To upload and delete documents:
    § In the general library (all folders)
    § In the internal library of each pair's space
  o To download documents:
    § From the general library (all folders)
    § From the internal library of each pair's space
• For FT, the possibility:
  o To upload documents:
    § In the "Miscellaneous resources" folder of the general library
    § In the internal library of each pair's space
  o To download documents:
    § From the general library (all folders)
    § From the internal library of each pair's space
• For ET, the possibility:
  o To upload documents:
    § In the "Miscellaneous resources" and "Working material" folders of the general library
    § In the internal library of their own pair's space
  o To download documents:
    § From the general library (all folders)
    § From the internal library of their own pair's space

Archiving

At the end of the academic year, the possibility to archive all the contents before resetting the platform for the following year

Discussion forum?

4. Some usage examples… and the ideal features associated with them!
• A pair wants to publish its lesson plan/observation report on the platform within the deadline set by the FU and/or its FT:
  o One member of the pair uploads the document into the pair's internal library
  o The FT downloads this document from the pair's internal library
  o Ideally, when uploading the document, the ET would have direct access to a function allowing him/her to send this document via the messaging system to the relevant FT
  o This implies that the messaging system allows attachments to be added to messages
• An ET wants to publish his/her intervention report on the platform within the deadline set by the FU
  o The ET uploads the document into his/her pair's internal library
  o The FU downloads this document from the pair's internal library
  o Ideally, when uploading the document, the ET would have direct access to a function allowing him/her to send this document via the messaging system to the relevant FU
• A FU wants to comment on a pair's intervention report
  o The FU uploads the annotated intervention report into the pair's internal library
  o Ideally, when uploading the document, the FU would have direct access to a function allowing him/her to send this document via the messaging system to the members of the relevant pair
• An ET wants to share professional literature with his/her fellow students
  o The ET uploads the document(s) into the "Miscellaneous resources" folder of the general library
  o Ideally, when uploading the document(s), the ET would have direct access to a function allowing him/her to notify all users of the upload
  o More generally, it would therefore be useful if any upload into any folder generated the possibility of sending a message to specified users of the platform (up to all users)
• An ET wants to upload a working document (e.g. the transcript of a video sequence) within the deadline set by the FU
  o The ET uploads the document into the "Working material" folder of the general library
• A FT wants to comment on the lesson plan of the pair he/she hosts in his/her class
  o The FT uploads the annotated lesson plan into the pair's internal library
  o Ideally, when uploading the document, the FT would have direct access to a function allowing him/her to send this document via the messaging system to the members of the relevant pair
• A FT wants to share his/her teaching project with the pair he/she hosts in his/her class
  o The FT uploads the teaching project into the pair's internal library
  o Ideally, when uploading the document, the FT would have direct access to a function allowing him/her to send this document via the messaging system to the members of the relevant pair

   


Annex F: Plug-ins for the Connect platform

This annex lists all the plug-ins that are currently integrated into the Connect platform in addition to the core ones included by default in Elgg. The Connect platform is developed with version 1.8 of Elgg. Some of the plug-ins have been patched and adapted to our context. The user interfaces of many of them have also been translated into French. The plug-ins for Elgg are mostly available from the Elgg community space at http://community.elgg.org/plugins. Plug-in list:

• Following 1.7
• Announcements 1.0
• Minify 0.3
• Site Pages 1.8
• Groups 1.8
• Login Required 1.8.3
• Message Board 1.8
• Search 1.8
• Tag Cloud 1.0
• The Wire 1.8
• Zaudio 1.8
• AU Widgets Framework 1.1
• AU TagTracker Widget 1.0
• Tab Text Widget 1.0
• xGadget 1.0
• Dokuwiki 1.4.1
• Brainstorm 0.1b
• Embed Extender 1.8.2
• Tasks Fx 2.0
• Profile Manager 7.3
• bit.ly URL Shortener 1.0
• FAQ 1.8.1
• Group Tools 2.3
• Comment Tracker 1.0
• Activity Tabs 1.2
• Extendafriend 2.4
• Event Calendar 0.85
• SimplePie RSS Feed Integration 0.4
• Video List 1.8-beta2
• Easy Theme 1.3.3
• River addon 2012.06.15
• Elggx Userpoints 1.8.2
• phloorFramework 1.8-12.01.25b
• Polls 0.82
• phloorNews 1.8-12.01.19
• User Validation by Admin 1
• iZAP Elgg Bridge 2.1.2
• iZAP Contest 2.0
• Login As 1.4
• Database Cleaner 1.4
• Answers 1.0.1
• Widget Manager 4.3
• File Tools 1.1.2
• Extended TinyMCE 3.5.7 r18
• UFCOE Shibboleth Auth 1.0
• galliStatus 1.0
• Tidypics Photo Gallery 1.8.0-rc1
• Group messageboard 1.0
• GDocs File Previewer 1.02
• Custom index widgets 2.4
• AU Sub-Groups 1.6
• Rename Groups 1.0
• AU Sets 1.5
• Login Redirector 3.0
• Hide Members item in More Menu 1.4
• River Privacy 1.1
• Chat 1.8.0
• Entity Menu Dropdown 1.0
• Twitter API 1.8.15
• veeplay 1.8.3.3
• Advanced Statistics 0.1
• Mobilize 2013.03.17
• Elggpad 0.8.0