
Towards Value-Sensitive Learning Analytics Design

Bodong Chen
University of Minnesota, Minneapolis, MN, USA
[email protected]

Haiyi Zhu
University of Minnesota, Minneapolis, MN, USA
[email protected]

ABSTRACT
To support ethical considerations and system integrity in learning analytics, this paper introduces two cases of applying the Value Sensitive Design methodology to learning analytics design. The first study applied two methods of Value Sensitive Design, namely stakeholder analysis and value analysis, to a conceptual investigation of an existing learning analytics tool. This investigation uncovered a number of values and value tensions, leading to design trade-offs to be considered in future tool refinements. The second study holistically applied Value Sensitive Design to the design of a recommendation system for the Wikipedia WikiProjects. To proactively consider values among stakeholders, we derived a multi-stage design process that included literature analysis, empirical investigations, prototype development, community engagement, iterative testing and refinement, and continuous evaluation. By reporting on these two cases, this paper responds to the need for practical means to support ethical considerations and human values in learning analytics systems. The two cases demonstrate that Value Sensitive Design could be a viable approach for balancing a wide range of human values, which tend to encompass and surpass ethical issues, in learning analytics design.

CCS CONCEPTS
• Information systems → Data analytics; Social networking sites; • Human-centered computing → Visual analytics; • Applied computing → Collaborative learning;

KEYWORDS
learning analytics, values, value sensitive design, data ethics, social media

ACM Reference Format:
Bodong Chen and Haiyi Zhu. 2019. Towards Value-Sensitive Learning Analytics Design. In The 9th International Learning Analytics & Knowledge Conference (LAK19), March 4–8, 2019, Tempe, AZ, USA. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3303772.3303798

1 INTRODUCTION
Over the past few years, considerations of the ethical implications of learning analytics have become a central topic in the field [27].


The increasing emphasis on ethical considerations in learning analytics is situated within growing public concerns with algorithmic opacity and bias, as well as awakening advocacy of algorithmic transparency and accountability [1]. In the field of learning analytics, a rich body of theoretical, practical, and policy work has emerged to promote the ethical collection and use of learning data. In 2016, the Journal of Learning Analytics published a special section featuring work on ethics and privacy issues [13]. Ethical frameworks, codes of ethical practice, and ethical consideration checklists have been developed [11, 25, 32]. New policy documents have also been put forward by higher education institutions to enhance transparency with regard to data collection and data usage.1

In contrast with the private sector, education as a social function also has a moral responsibility to promote student success. Therefore, ethical considerations in learning analytics need to go beyond the issues foregrounded in the current public discourse surrounding surveillance and privacy [13]. Educational institutions have “an obligation to act”—to effectively allocate resources to facilitate effective teaching and learning through ethical and timely interventions [26]. Ethical practice in learning analytics entails consideration of not only typical ethical issues but also factors and values pertinent to the larger education system. Recently, Buckingham Shum [2] proposed a Learning Analytics System Integrity framework that goes beyond algorithmic transparency and accountability to consider multiple pillars of a learning analytics system, including learning theory, algorithm, human–computer interaction, pedagogy, and human-centered design. While learning analytics as a field is rightfully giving more attention to the ethical challenges of data analytics, to foster ethical practice we also need effective methods to consider these challenges in a manner that engages perspectives from different stakeholders while allowing education—as a complex system—to fulfill its societal function.

To this end, this paper proposes to apply Value Sensitive Design—a theory and methodology initially developed in the field of Human–Computer Interaction (HCI)—to support the design of value-sensitive learning analytics systems. We posit that Value Sensitive Design can (1) guide researchers and practitioners in the process of evaluating and refining existing learning analytics applications, and (2) provide systematic guidance on developing new learning analytics applications that more holistically address values pertinent to stakeholders, educators, and society.

The remainder of this paper is structured as follows. We first briefly review work on ethics frameworks, algorithmic transparency, and accountability in the field of learning analytics. Then we turn to the Value Sensitive Design literature, with a focus on its application to the development of information systems. After outlining our research goals, we introduce two studies in Sections 3 and 4.

1 See the SHEILA project for examples: http://sheilaproject.eu/la-policies/


We conclude this paper by discussing the importance of considering values and value tensions in learning analytics design and development.

2 BACKGROUND

2.1 Learning Analytics Systems: Ethics and Values

Learning analytics is defined as “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs” [24, p. 34]. By harnessing increasingly abundant educational data and data mining algorithms, learning analytics aspires to build applications that personalize learning based on individual progress, predict student success for timely intervention, provide real-time feedback on academic writing, and so forth [33]. As learning analytics applications are increasingly integrated into the existing academic technology infrastructure (e.g. student information systems, learning management systems, student support systems), more educational institutions are becoming equipped with strong, connected analytics capacity to better understand students and, hopefully, to better support student success. Empirical evidence accumulated so far has indeed demonstrated the promise of learning analytics in supporting student learning [21].

While learning analytics is garnering international interest across different levels of education, ethical and privacy concerns are increasingly mentioned in both public and scholarly discourse. Slade and Prinsloo [34] listed a number of ethical challenges in learning analytics, including the location and interpretation of data; informed consent, privacy, and de-identification of data; and the classification and management of data. Scholars have organized a series of workshops at the Learning Analytics and Knowledge (LAK) conference to collectively tackle ethical challenges [e.g. 12], leading to the development of ethical principles, frameworks, and checklists to guide ethical research and practice [11, 13, 29, 40]. Empirical work has also been done, for example, to examine student perceptions of privacy issues with regard to learning analytics applications [22]. Design work following the Privacy by Design principles has shown promise in identifying solutions for addressing privacy concerns [37]. While institution-level codes of ethics are being created, efforts are also being made to help individual practitioners apply abstract principles and codes of ethics in real-world scenarios [23].

Undergirding these discussions of ethics and privacy are the values embedded in learning analytics systems. Generally speaking, a value represents either “something important, worthwhile, or useful” or “one’s judgment of what is important in life” (Oxford English Dictionaries, 2018). For instance, privacy is a fundamental human right, which holds not only value for a person but also social and public value for democratic societies [28]. Safeguarding privacy has thus naturally become an important ethical consideration. There are other values that are also important in learning analytics but often go unstated. For instance, as educators we value student success, learner agency, and equitable access to education [13], but these values, despite their educational significance, are often neglected in the discussion of data ethics. In other words, various values are not equally recognized in current discourse, leaving some values, and tensions among values, neglected or under-explored. To promote ethical practice in learning analytics, we need means to interrogate values that are often in tension with each other. The learning analytics accountability analysis put forward by Buckingham Shum [2] considers system integrity from multiple angles and holds promise for eliciting values beyond current thinking on ethical issues. Take automated feedback on academic writing, for example. Values supported by an academic writing analytics tool include the accurate detection of rhetorical moves based on linguistic features and a usable feedback interface that can facilitate student sense-making. However, the accurate detection of rhetorical moves may demand more learner data and is thereby in tension with the value of privacy foregrounded in ethical considerations. To better support learning analytics design, the field needs strategies for weighing competing values and deriving design trade-offs that mitigate value tensions.

2.2 Value Sensitive Design

Stemming from an orientation towards human values, Value Sensitive Design “represents a pioneering endeavor to proactively consider human values throughout the process of technology design” [8, p. 1]. In Value Sensitive Design, value is operationally defined as “what is important to people in their lives, with a focus on ethics and morality” [15, p. 68]. Value Sensitive Design offers a systemic approach with specific strategies and methods to help researchers and designers explicitly incorporate the consideration of human values into design [8, 15]. It recognizes the complexity of social life and attempts to illuminate value tensions among stakeholders.

Value Sensitive Design is a theory, a methodology, and an established set of methods [8]. As a methodology, Value Sensitive Design involves three types of investigations: conceptual, empirical, and technical [8]. Together they form an iterative process of identifying and interrogating values in technology design. According to [8, 16], conceptual investigations are primarily concerned with identifying direct and indirect stakeholders and eliciting values held by different stakeholders. A conceptual investigation can draw on philosophical and theoretical texts in order to conceptualize a value at hand (e.g. trust, autonomy). Empirical investigations venture further to collect empirical data from stakeholders about their perceptions of a value and of value tensions (e.g. privacy vs. utility) in a specific context. Both qualitative and quantitative methods can be used in empirical investigations. Finally, technical investigations focus on the designed technology itself and seek either to proactively design new technological features to support values or to examine how a current design supports or undermines human values. These three types of investigations can be integrated in an iterative design process to holistically address human values.

Fourteen different methods can be drawn upon to support Value Sensitive Design [15]. For example, direct and indirect stakeholder analysis can lead to the identification of indirect stakeholders of transit information tools [38]; fictional value scenarios are useful for revealing values and tensions when developing mobile apps [7]; and value dams and value flows can be a useful analytic method for resolving value tensions among design choices—by removing a design choice to which even a small percentage of stakeholders strongly object (the value dams) or accepting a design option that a substantial portion of stakeholders react to favorably (the value flows) [7]. These methods suit varied purposes of design, and a project adopting Value Sensitive Design can draw from multiple methods.
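To make the value dams and flows heuristic concrete, the following sketch shows one way the decision rule could be operationalized over stakeholder survey responses; the thresholds, function name, and vote data are hypothetical and not drawn from [7].

```python
# Illustrative sketch of the "value dams and value flows" heuristic:
# a design option is dammed if even a small share of stakeholders
# strongly object, and flows if a large share react favorably.
# Thresholds and survey data here are hypothetical.

def classify_option(responses, dam_threshold=0.10, flow_threshold=0.60):
    """responses: list of 'object', 'favor', or 'neutral' votes."""
    n = len(responses)
    objection_rate = responses.count("object") / n
    favor_rate = responses.count("favor") / n
    if objection_rate >= dam_threshold:
        return "dam"    # remove this design choice
    if favor_rate >= flow_threshold:
        return "flow"   # accept this design choice
    return "undecided"  # needs further investigation

votes = ["favor"] * 14 + ["neutral"] * 4 + ["object"] * 2
print(classify_option(votes))  # 'dam': 10% strongly object
```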

2.3 Overview of the Paper

To examine how Value Sensitive Design—the theory, methodology, and methods—could be applied to learning analytics design, this paper reports on two studies.

The first study is a conceptual investigation of a learning analytics application that has already been developed. Applying two Value Sensitive Design methods—stakeholder analysis and value analysis [15]—this study aimed to (1) identify key stakeholders of the developed application, (2) elicit values and value tensions, and (3) derive design trade-offs to be considered in future design iterations.

Moving beyond conceptually investigating an existing system, the second study introduces a case of applying Value Sensitive Design to proactively consider human values throughout a design process. The introduced case is about developing a value-sensitive recommendation system for the Wikipedia WikiProjects. Applying ideas from Value Sensitive Design, we and colleagues (1) engaged stakeholders in the early stages of the algorithm design and used stakeholders’ insights to guide the algorithm development, (2) iteratively improved, adapted, and refined the algorithmic system based on the stakeholders’ feedback, and (3) evaluated the recommendation system based not only on accuracy but also on stakeholders’ acceptance and their perceived impacts of the system.

3 STUDY I: A CONCEPTUAL INVESTIGATION OF A DISCUSSION VISUALIZATION TOOL

3.1 Study Context

The first study was a conceptual investigation of a learning analytics tool developed to visualize online discussions in a learning platform named Yellowdig.2 This study and the creation of Yellowdig were contextualized within higher education’s interest in fostering social learning and peer interaction among students. Branded as a social media platform designed for educational use, Yellowdig resembles many social network sites such as Facebook and Twitter. For example, it features a news feed similar to that of Facebook; learners can contribute text or multimedia posts to the news feed, where other learners can “like” or comment on each other’s posts. By providing social-media features students are already familiar with, this tool aims to facilitate social learning in online and blended classes without intruding into students’ personal social networks. Since its inception in 2014, Yellowdig has been adopted by a number of higher education institutions.

As one of the early adopters of Yellowdig, Northwestern University developed a Yellowdig Visualization Tool to visualize students’ peer interactions and conceptual engagement on Yellowdig.3 The design of this tool was similar to many existing social learning analytics tools such as SNAPP [9], Threadz,4 and CavanNet [3]. The tool was chosen for this study because it represented a more recent design effort, had a fairly polished design, and had been informed by participatory design workshops attended by students at the university. In other words, this tool could well represent the state of the art of learning analytics applications designed for this particular context.

2 See https://yellowdig.com/
3 See https://www.teachx.northwestern.edu/projects2016/gruber/
4 https://threadz.ewu.edu/

As described on its project website, the development of the Yellowdig Visualization Tool was motivated by an interest in helping students and faculty learn more about their online discussions by accessing Yellowdig data. Prior research on the educational use of Web 2.0 tools and online discussion environments has demonstrated the need to support learner participation and engagement from pedagogical, technological, and sociocultural angles [6, 18]. The Yellowdig Visualization Tool was one example of supporting student use of Yellowdig through designing a new learning analytics application, which is expected to be coupled with pedagogical interventions. During the design process, the design team came up with an initial prototype and engaged students in participatory design to elicit their ideas about visualizations that could be useful for their learning. The version of the tool analyzed in this study contained the following key features (see Figure 1):

• Multi-mode, multiplex network visualization. The visualization tool represents one class’s Yellowdig discussion as a multi-mode, multiplex network that contains multiple types of nodes (students, posts, and comments) and multiple types of relations (posting, commenting). In this network, each Yellowdig post (also called a “pin”) is represented by a blue square filled either with solid blue color (if the post contains pure text) or a thumbnail (if the post contains a rich media object); comments on posts are shown as light green circles; users are represented by grey user icons. Following a typical network visualization heuristic, the size of a post node is correlated with the number of connections it has. The whole network is plotted using a force-directed layout, which keeps well-connected nodes in the center and pushes less connected nodes to the edge (see the code sketch following this feature list).

• Node highlighting. When a node is chosen by the user, the visualization highlights the node’s “2.0 level ego network,” which comprises the chosen node (i.e. the ego) and all other nodes that can be reached within two steps. The content of the chosen node is also displayed in the top panel of the tool (see Figure 1).

• Visualization of temporal growth. A movie can be played to inspect both daily and weekly growth of the discussion network. The placement of all nodes and edges is fixed in advance. The temporal visualization adds nodes and edges to the network in a step-wise manner.

• Filtering mechanisms for instructor use. The instructor view of the visualization tool provides additional mechanisms for filtering the network by student names, post tags, and named entities (such as Uber) recognized by text mining algorithms. These filters are displayed on the right panel of the visualization for the instructor to explore the discussion network.
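As a concrete illustration of the network construct behind these features, the sketch below shows how a multi-mode discussion network, its force-directed placement, and a two-step ego network could be assembled with the Python networkx library. This is our illustration, not the tool’s actual code; all node names and data are hypothetical.

```python
# Illustrative sketch (not the tool's actual code): building a
# multi-mode Yellowdig-style discussion network with networkx.
import networkx as nx

G = nx.Graph()
# Multiple node types: students, posts, comments (hypothetical data).
G.add_nodes_from(["alice", "bob", "carol"], kind="student")
G.add_nodes_from(["post1", "post2"], kind="post")
G.add_nodes_from(["c1", "c2"], kind="comment")
# Multiple relation types: posting and commenting.
G.add_edge("alice", "post1", kind="posting")
G.add_edge("bob", "post2", kind="posting")
G.add_edge("carol", "c1", kind="commenting")
G.add_edge("c1", "post1", kind="commenting")
G.add_edge("bob", "c2", kind="commenting")
G.add_edge("c2", "post1", kind="commenting")

# Node size heuristic: scale post nodes by their number of connections.
post_sizes = {n: G.degree(n) for n, k in G.nodes(data="kind") if k == "post"}

# Force-directed placement (Fruchterman-Reingold), as in the tool.
pos = nx.spring_layout(G, seed=42)

# "2.0 level ego network": all nodes within two steps of the ego.
ego = nx.ego_graph(G, "alice", radius=2)
print(sorted(ego.nodes()))  # alice plus nodes reachable in <= 2 hops
```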

In comparison with earlier social learning analytics applications, the Yellowdig Visualization Tool is novel in several ways. First of all, in comparison with the one-mode networks adopted in most social network analytics, the Yellowdig Visualization Tool turns discussion data into a multi-mode, multiplex, temporal network that provides a more holistic picture of student discussions. Its node highlighting feature allows students and the instructor to inspect local network structures, and its filtering mechanisms support the instructor in quickly identifying discussion trends. To a great extent, the visualization tool provides a unique view of the Yellowdig discussion, which can facilitate sense-making activities by its users and serve the assessment needs of the instructor.

Figure 1: Yellowdig Visualization Tool. (Note: Used with permission of the Yellowdig Visualization development team at Northwestern University.)

3.2 Methods

To study values in such a nascent learning analytics tool, we conducted a conceptual investigation [8] to identify its stakeholders and elicit values and value tensions among identified stakeholders. This conceptual investigation had two components:

(1) Stakeholder analysis. The first component was to identify the direct and indirect stakeholders of this tool. According to [15], this analysis includes individuals who are in direct contact with this tool (direct stakeholders) and those who may not interact with the tool directly but are likely to be impacted by tool usage (indirect stakeholders).

(2) Value analysis. The second component was to identify and define the benefits, harms, and values implicated by this learning analytics tool and to illuminate value tensions embedded in the tool and its larger education systems. Values at stake in this analysis include generic human values that represent “what is important to people in their lives” [15, p. 68], as well as values endorsed by the educational literature.

This conceptual investigation was grounded in our knowledge of higher-education classes in similar settings and aided by a technical inspection of the Yellowdig Visualization Tool. We expected that through this study we could come up with suggestions for considering identified values and design trade-offs in future iterations of this particular tool as well as other similar applications.

3.3 Findings

3.3.1 Stakeholders. In this context, the direct stakeholders included students and instructors accessing the Yellowdig Visualization Tool. It was expected that the tool would assist instructors in analyzing the data generated on Yellowdig to inform their instructional decisions. The tool design also aimed to enhance students’ mastery of the course content and provide them with another way of engaging with the course material and with each other (see project website). However, the adoption of such a learning analytics tool is often decided by the instructor, and its acceptance among students may vary. Therefore, students may be further divided into those who choose to use the tool and those who reject the tool for various reasons.

Because Yellowdig is a private social media platform, the visualization tool is based on data generated by the class and is not accessible beyond the class. The indirect stakeholders of this tool may include future cohorts of the class, whose discussion performance may be compared (explicitly or implicitly) with the present cohort’s. Academic technology staff members are also indirect stakeholders, as they are charged with supporting technology use and may be asked to provide additional technical support for this tool. Student support services, such as academic advisers and academic analytics teams, are indirect stakeholders as well. For example, an academic adviser may be contacted by the instructor when she finds from the tool that a student has been inactive for a few weeks.

3.3.2 Values and tensions. The Yellowdig Visualization Tool aimed at providing another way of engaging with the class discussion, enhancing student learning and social interaction, and assisting the instructor in grasping discussion content in order to make informed instructional decisions. However, through the conceptual investigation we found that the values supported by the tool can be in tension with other values.

For such a learning analytics tool that can be used for assessment purposes, student autonomy is an important value that is in tension with the tool’s utility for assessment. The network visualization and node highlighting features of the tool make it easy to identify posts and comments made by each individual student. Even though such information is also accessible via the original Yellowdig discussion interface, the visualization tool makes it more readily accessible to all members of the class. These design features are meant to support the values of student success, accountability, discussion engagement, and usability, but they can also press students to participate more actively and thereby hinder their autonomy as discussion participants.

In the same vein, because of the aggregation of student participation data, student privacy is another important value in tension with these intended benefits. Even though the visualization tool does not collect new data from students, it reveals a bird’s-eye view of each student’s participation patterns and could appear intrusive to some students who are less comfortable with such aggregated reporting.

Social well-being, a sense of belonging and social inclusion, is another important value to be considered in this context. Students in the class—using the tool or not—can reasonably expect to be free from embarrassment caused by less active participation or a less central status in the network. The force-directed layout adopted to generate the network visualization pushes less active participants to the edge and makes them more identifiable by members of the class. This design is intuitively understandable, is broadly applied in network visualization, and supports pedagogical decisions by revealing to the instructor students “at risk” of dropping out of the Yellowdig discussion. Nonetheless, students on the edge may feel embarrassed among peers should their identities be revealed. This layout can also contribute to the “rich-club” phenomenon in online discussions [36], potentially channeling more attention to students in the center and leading to the exclusion of less active students from future interactions.

There are also values implicated by algorithmic decisions embedded in the visualization tool, including freedom from bias and self-image. For instance, the force-directed layout algorithm inherently boosts the status of students who are more connected in terms of connection volume and does not consider the quality of their posts. Displaying the thumbnails of discussion posts renders posts with multimedia objects more attractive than posts with pure text, and can thereby direct varied attention to posts with and without multimedia objects. Also notable is the choice of scaling posts based on the number of comments, which directs more attention to posts that have already attracted more attention. These differentiated treatments of discussion posts embedded in the algorithms are linked to students’ self-image. When a student’s node gets highlighted in the network visualization, the highlighted ego network essentially represents what the student has produced and how his/her participation is linked with peers in the community. Being able to see all connections of a student at once can benefit the sense of community, especially when a student sees herself well connected. However, it may also pose questions for a student’s self-image if she is less connected with peers in the community.

The visualization tool was designed to provide another way to make sense of and engage in online discussions on Yellowdig. Values facilitated by the tool include utility, usability, and ease of information seeking in Yellowdig. In particular, the tool provides views of both the overall discussion structure and micro, local interaction patterns that are not revealed by the Yellowdig interface. The instructor view provides rich filtering mechanisms that are useful for filtering discussion posts by students, tags, and/or named entities. Being able to look at entity names, such as brands in a business class, allows the instructor to quickly identify popular topics in student discussions and make pedagogical decisions on, for example, whether to address a specific topic in class. However, the fact that these advanced features are reserved for the instructor is in tension with the values of epistemic agency and fairness. One can argue that the advanced features could equally benefit students, who also need to complete information seeking tasks to participate effectively in Yellowdig discussions. Nevertheless, one may also argue that providing these additional features would place unnecessary cognitive load and burden on students.

3.4 Discussion and Implications

With aid from a technical analysis of the Yellowdig Visualization Tool, the conceptual investigation revealed a number of values and value tensions that have implications for future refinements of the tool. One premise of discussing these implications is that the accommodation of these values should not be achieved solely through technology design, but also through the design of pedagogical interventions [41]. Nonetheless, based on findings from the value analysis, we suggest the following possible design trade-offs that could be considered in future technology design:

• Considering the tension between utility (e.g. revealing a student’s participation) and the values of student autonomy, privacy, social well-being, and self-image, one design idea is to give each student the option of not revealing his/her name to peers in the network visualization. In this way, students can choose to become less identifiable in the tool. Another possible idea is to make student nodes non-clickable so that the node highlighting feature is limited to post and comment nodes. Both design ideas would sacrifice some utility of the visualization tool to support other values such as privacy and social well-being.

• To support the values of freedom from bias and self-image, future implementations of the network visualization could consider adjusting parameters of the force-directed layout or adopting other graph drawing algorithms (such as the circular, arc diagram, hive plot, and hierarchical graph layouts). Here, design trade-offs are likely to involve the interpretability of a network visualization and its complications concerning student status in the given sociocultural context. Empirical investigations of student perceptions of different layouts could be necessary.

• Considering the values of fairness, epistemic agency, and ease of information seeking, the filtering mechanisms provided to the instructor could also be considered for student access. Given the tension between privacy and ease of information seeking, filtering by student names could still be reserved for the instructor, while the other two filtering mechanisms (i.e. by tags or named entities) could be made available to students.

The design ideas mentioned above are presented here to illustrate the promise of Value Sensitive Design for this particular context, not to discredit the current visualization tool, which is already quite well designed. These design ideas are tentative and by no means comprehensive. Further empirical and technical investigations would be necessary to advance them.

4 STUDY II: A HOLISTIC APPLICATION OF VALUE SENSITIVE DESIGN TO AN ALGORITHMIC SYSTEM OF WIKIPROJECTS

4.1 Study Context

While the first study focused on social learning in formal classes, the second study was situated in the informal context of Wikipedia editing. Specifically, this study was concerned with designing recommendation algorithms for group formation in Wikipedia’s WikiProjects. A WikiProject organizes “a group of participants... to achieve specific editing goals, or to achieve goals relating to a specific field of knowledge.”5 Editing a Wikipedia article is itself a learning and knowledge-creation process. Learning scientists have recognized mass collaboration, on Wikipedia for instance, as an emerging paradigm of education [5]. Computer-supported collaborative learning (CSCL) researchers have studied the social process of building collective knowledge on Wikipedia, leading to findings about the intricate relations between contributors and Wikipedia articles [19]. Indeed, coordinated editing in a WikiProject involves a great deal of sense-making, negotiation, synthesis, and knowledge transformation. For this reason, since its inception the Wiki as a Web 2.0 technology has been widely adopted in formal classrooms to support sophisticated collaborative learning experiences [18]. Therefore, facilitating knowledge processes on Wiki technology—and on Wikipedia in particular—is of interest to learning analytics, because Wikipedia editing entails learning, and intelligent supports built for Wikipedia are transferable to Wiki-based learning scenarios.

5 See https://en.wikipedia.org/wiki/WikiProject

Prior work has illuminated that Wikipedia, like other social media environments, now depends heavily on data-driven algorithmic systems to support its massive-scale collaboration and governance [17]. On Wikipedia, algorithmic systems have been used to support a variety of critical tasks such as counter-vandalism, task routing, and the Wikipedia education program. Many of these existing algorithmic systems are driven by “Big Data” and supervised machine learning algorithms. Take predicting vandalism, for example. The first step of the task is to define a prediction target, i.e. a malicious edit. The second step is to use historical data, often in large volumes, to train and validate machine learning models. Finally, the validated models are applied to new edits in order to generate predictive scores of their being malicious. Such algorithmic systems play important roles in the Wikipedia ecosystem.
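A minimal sketch of such a supervised pipeline, written with scikit-learn over entirely synthetic features and labels (it does not reproduce any deployed Wikipedia system), might look like this:

```python
# Illustrative sketch of a supervised vandalism-scoring pipeline.
# Features and labels are synthetic, not a deployed Wikipedia system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Step 1: the prediction target is a binary label: 1 = malicious edit.
# Step 2: train and validate on (synthetic) historical edit features,
# e.g. [chars added, chars removed, editor is anonymous, profanity count].
rng = np.random.default_rng(0)
X = rng.random((1000, 4))
y = (X[:, 3] > 0.8).astype(int)  # stand-in for historical human judgments

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))

# Step 3: score new edits with the validated model.
new_edits = rng.random((5, 4))
print("vandalism scores:", model.predict_proba(new_edits)[:, 1])
```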

However, the data-driven approach can lead to unintended biases in algorithms and potentially negative impacts on the editor community. It is recognized that the data-driven approach relies largely on historical human judgments, which are subject to historical stereotypes, discrimination, and prejudices. Using historical data to inform the future runs the risk of reinforcing and repeating historical mistakes, and it fails to capture and incorporate human insights on how the system could be improved for the future. Recent ethnographic work on Wikipedia has revealed critical issues with current algorithmic systems and calls for the study of sociocultural processes, such as newcomer socialization, that are strongly impacted by the automated systems [17].

Researchers have called for the development of systematic methods to detect and mitigate biases in existing algorithmic systems [31]. In contrast with a reactive approach that attempts to address bias when it gets exposed, we argue for an approach that proactively considers human values throughout the process of algorithm design in a principled and comprehensive manner. One possible way of achieving this goal is to apply the principles of Value Sensitive Design [15] to the creation of each algorithmic system. This holistic approach, explicated in greater detail elsewhere [43], engages stakeholders in the early stages of algorithm development and incorporates stakeholders’ tacit knowledge and insights into the creation of an algorithmic system. By doing this, we hope to mitigate biases embedded in design choices and hence avoid undermining important stakeholder values.

In this section, we briefly introduce a case of applying Value Sensitive Design to designing a recommendation engine for group formation in Wikipedia’s WikiProjects. Similar recommendation tasks have also been explored in the MOOC (massive open online course) and collaborative problem-solving contexts [10, 30]. In contrast with Study I, this study focused on the recommendation algorithm, a key component of a learning analytics system [2]. By doing so, we hope to illustrate the possibility of attending to values when building an algorithmic system’s sub-component that is crucial but not observable from the user interface. Below, we explain the process of applying Value Sensitive Design to the development of the recommendation engine and discuss the implications for the design of learning analytics systems.

4.2 Methods

4.2.1 Study Context. Retaining and socializing newcomers is a crucial challenge for many volunteer-based knowledge-creation communities such as Wikipedia. The number of active contributors in Wikipedia has been declining steadily since 2007, due at least in part to a sharp decline in the retention of new volunteer editors [20]. WikiProjects, which are groups of contributors who work together to improve Wikipedia, serve as important socialization hubs within the Wikipedia community. Prior work [14, 42] suggests that WikiProjects provide three valuable support mechanisms for new members: (1) enabling them to locate suitable assistance and expert collaborators; (2) guiding and structuring their participation by organizing tasks, to-do lists, and initiatives such as “Collaborations of the Week;” and (3) offering new editors “protection” for their work, by shielding it from unwarranted reverts and “edit wars.”

However, recommending new editors to WikiProjects is not a trivial task. In the English version of Wikipedia alone there are currently about 2,000 WikiProjects, with, on average, 38,628 new users/editors registering to participate each month. The goal of the case study was to create algorithmic tools to recommend new Wikipedia editors to existing WikiProjects.

4.2.2 Value Sensitive Design Process. Drawing on the principles of Value Sensitive Design [15], we derived the following five-stage process to proactively accommodate values in the WikiProjects recommendation system.

(1) Review literature and conduct empirical studies to understand relevant stakeholders’ values, motivations, and goals, and identify potential trade-offs.

(2) Based on the results of the first step, identify algorithmic approaches and create prototype implementations.

(3) Engage and work closely with the stakeholders to identify viable means to deploy the developed algorithms, recruit participants, and gather user feedback.

(4) Deploy the algorithms, gather stakeholder feedback, and refine and iterate as necessary.

(5) Evaluate stakeholders’ acceptance, algorithm accuracy, and the impacts of the algorithms on relevant stakeholders.

In contrast with Study I, which focused on a conceptual investigation, this five-stage process incorporates all three types of Value Sensitive Design investigations—conceptual, empirical, and technical—to synergistically address human values. Each component of the proposed process should by itself be familiar to HCI scholars. However, the derived process represents an attempt to put multiple Value Sensitive Design methods into action to collectively consider values.


Figure 2: The interface of the prototype newcomer recommendations for WikiProject organizers.

4.3 Results

Following the described five-stage process, we developed the algorithmic system to recommend new Wikipedia contributors to existing Wikipedia projects.

4.3.1 Step 1: Review literature and conduct empirical studies to understand stakeholders’ values, the goals of importance to them, and potential trade-offs. To achieve the design goal, we reviewed prior research and conducted a survey study with 59 members from 17 WikiProjects in order to answer the following questions: (1) Which factors would motivate new editors to participate in a WikiProject? (2) Which factors would motivate WikiProject organizers to recruit new members? (3) What collective outcomes are important for WikiProjects and for Wikipedia in general?

Findings from the survey study included:

• Both newcomers and WikiProject organizers valued interest match (i.e. the match between the editor’s interests and the project topic) and personal connections (i.e. the relationships between newcomers and current members of the community).

• WikiProject organizers valued productivity and wanted to recruit newcomers who could produce high-quality work.

• Although WikiProject organizers valued editors’ prior experience and were more interested in recruiting newcomers who already had some editing experience in Wikipedia, the Wikipedia community as a whole was more concerned with socializing “brand-new” editors.

• WikiProject organizers hoped to retain control of the recruitment process. Specifically, they objected to the idea of complete automation, i.e. automatically identifying and inviting newcomers deemed relevant by a recommendation system. They wanted to have the final say on who would be invited to their projects.

4.3.2 Step 2: Identify algorithmic approaches and develop system prototypes. Drawing on the results of the first step, the team developed recommendation algorithms that would meet the following criteria informed by the identified stakeholder values.

First, the recommendation algorithms needed to satisfy the goals of new editors and WikiProjects by considering the match between the editor’s interests and the project topic. To this end, we created four different recommendation algorithms—two interest-based algorithms and two relationship-based algorithms (see the code sketch following the list below).

• Interest-based algorithms rank candidate editors based on how closely their editing history matches the topic of a WikiProject. Two types of interest-based algorithms were considered. A rule-based algorithm ranks the match of an editor to a WikiProject by counting the number of edits by that editor to articles within the scope of the project. A category-based algorithm ranks editors by computing a similarity score between an editor’s edit history and the topic of a WikiProject.

• Relationship-based algorithms rank candidate editors based on their relationships with current members of a WikiProject. Two types of relationship-based algorithms were developed. A bonds-based algorithm ranks editors by the strength of the “social connections” the editor has to current members of a WikiProject. A co-edit-based algorithm is a version of collaborative filtering and is inspired by the design of SuggestBot [4]. Candidate editors are ranked by the similarity of their edit histories to the edit histories of current members of a WikiProject.
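As an illustration of the logic described above (assumed data structures, not the study’s deployed implementation), the rule-based and co-edit-based rankings could be computed as follows:

```python
# Illustrative sketch of two of the rankers described above.
# Data structures are hypothetical; this is not the deployed system.

def rule_based_rank(candidates, edit_history, project_articles):
    """Rank editors by their number of edits to articles in the
    project's scope (the rule-based, interest-based algorithm)."""
    scores = {
        e: sum(1 for a in edit_history[e] if a in project_articles)
        for e in candidates
    }
    return sorted(candidates, key=scores.get, reverse=True)

def coedit_rank(candidates, edit_history, member_history):
    """Rank editors by the overlap (Jaccard similarity) between their
    edit histories and current members' edit histories (co-edit-based)."""
    member_articles = set().union(*member_history.values())
    def jaccard(e):
        edits = set(edit_history[e])
        union = edits | member_articles
        return len(edits & member_articles) / len(union) if union else 0.0
    return sorted(candidates, key=jaccard, reverse=True)

# Hypothetical usage:
history = {"eva": ["Nairobi", "Lagos", "Cairo"], "sam": ["Jazz", "Lagos"]}
members = {"m1": ["Cairo", "Accra"], "m2": ["Lagos"]}
print(rule_based_rank(["eva", "sam"], history, {"Nairobi", "Cairo"}))
print(coedit_rank(["eva", "sam"], history, members))
```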

Second, the recommendation algorithms needed to balance the interests of WikiProjects and the collective goal of the Wikipedia community by targeting both experienced editors and “brand-new” editors. Specifically, the candidate editors included both editors who were new to Wikipedia and editors who were moderately experienced (but not highly experienced). This design instantiated the tension we identified previously. Moreover, we also ranked the two types of newcomers separately, so that experienced editors did not overshadow inexperienced ones.

Finally, the recommendation algorithms should satisfy the goals of WikiProjects and their organizers by excluding editors likely to produce low-quality work.

With these algorithms developed, we further designed a user interface to present the top recommendations from each of the four algorithms to WikiProject organizers (see Figure 2). The decision to present multiple results was based on a finding from Step 1 that some project organizers strongly objected to the idea of complete automation and wished to “remain in the loop” of inviting newcomers. With the prototype interface, WikiProject organizers could review the recommendations and decide whom they would like to invite to their projects [43]. As such, their desire to remain in the loop was accommodated.

4.3.3 Step 3: Engage and work closely with the community. Online communities like Wikipedia constitute a rich laboratory for research. Digital traces of collaboration, social interaction, and production are made visible, offering rich opportunities for empirical studies and experimentation. However, there is an unfortunate tradition of some researchers treating such communities merely as data sources for research instead of “living organisms” with their own values, goals, and norms. Research studies of Wikipedia often encounter resistance from Wikipedia editors because community values are sometimes violated by research activities.

To avoid these problems, and to ensure community values were well attended to, the team worked with stakeholders to develop a research protocol6 that was acceptable to the community. Essentially, we developed our research plan on the Wikipedia platform. In other words, we were developing and deploying our algorithms not just for, but with, the Wikipedia community.

6 See https://meta.wikimedia.org/wiki/Research:WikiProject_Recommendation

4.3.4 Step 4: Deploy the algorithms and iteratively refine them. We iteratively tested and refined our algorithms. Over a period of six months, we sent weekly batches of recommendations to our pilot participants, conducted short surveys to seek their views on these recommendations, and interviewed project organizers and newcomers. We used their feedback to make significant changes to the algorithms. Example revisions included refining the algorithm explanations and raising the threshold of the matching algorithms. In particular, candidate editors are only recommended by the rule-based matching algorithm if they have made at least five edits to project-related articles in the previous month.
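For instance, the raised eligibility threshold could be expressed as a simple filter in front of the rule-based ranker; this sketch uses the same kind of hypothetical data structures as the earlier one:

```python
# Sketch of the raised recommendation threshold: only editors with at
# least five recent edits to project-related articles are eligible.
MIN_RECENT_PROJECT_EDITS = 5

def eligible(editor, recent_edits, project_articles):
    """recent_edits: articles the editor edited in the previous month."""
    n = sum(1 for a in recent_edits[editor] if a in project_articles)
    return n >= MIN_RECENT_PROJECT_EDITS

recent = {"eva": ["Cairo"] * 6, "sam": ["Lagos", "Jazz"]}
scope = {"Cairo", "Lagos"}
candidates = [e for e in recent if eligible(e, recent, scope)]
print(candidates)  # only 'eva' passes the five-edit threshold
```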

4.3.5 Step 5: Evaluate algorithms based on stakeholder acceptance, algorithm accuracy, and the impacts of the algorithm on the community. We conducted qualitative and quantitative studies to systematically examine the acceptance, accuracy, and impacts of the algorithms. The goal was to fully understand the influence of the algorithmic tool on the community and stakeholders, and to monitor any unintended consequences.

Results from our initial evaluation included:

(1) Assessing acceptance among stakeholders. We conducted interviews with community stakeholders, including newcomers and WikiProject organizers who used our recommendation system. The feedback was positive, which gave us confidence that our system was acceptable to the stakeholders. For example, one organizer wrote: “This puts some science behind recommendations, and will be a great supplement to the current processes.” Editors who were invited to join projects also reacted positively. For example, an editor who was invited to join WikiProject Africa wrote: “Thank you for reaching out to me and thank you for informing me about the WikiProject Africa ... I appreciate it.”

(2) Assessing accuracy. Organizers rated the rule-based algorithm higher than the other three algorithms (category-based, bonds-based, and co-edit-based). The average rating for the rule-based algorithm was 3.24 on a 5-point scale, significantly higher than for the other three (t = 3.51, p < .001). The invitation rate (analogous to a click-through rate) for the rule-based algorithm was 47%, also higher than for the category-based (16%), bonds-based (22%), and co-edit-based (28%) algorithms. There was no significant difference among the other three algorithms. We also found that inexperienced and experienced editors were equally likely to be invited by the project organizers through the recommendation system. (A sketch of this kind of comparison follows the list below.)

(3) Assessing impacts. Our evaluation of the impacts on the community sought to understand what happened to newcomers when WikiProject organizers invited them to the projects. Initial results suggested that: (1) only experienced editors who received invitations from project organizers had a significant increase in their within-project activities over a baseline group composed of equally competent editors who did not receive invitations; and (2) the increase in the invited experienced editors’ contributions did not come at the cost of fewer edits to Wikipedia articles beyond the joined WikiProject [see 43].
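To illustrate the kind of accuracy comparison reported in item (2), the sketch below compares mean organizer ratings of two algorithms with an independent-samples t-test; the ratings and counts are fabricated for illustration and do not reproduce the study’s data.

```python
# Illustrative sketch of comparing organizer ratings of two algorithms
# with an independent-samples t-test. Ratings below are made up and do
# not reproduce the study's data.
from scipy import stats

rule_based_ratings = [4, 3, 4, 3, 3, 4, 2, 4, 3, 3]
category_based_ratings = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]

t, p = stats.ttest_ind(rule_based_ratings, category_based_ratings)
mean_diff = (sum(rule_based_ratings) - sum(category_based_ratings)) / 10
print(f"mean difference: {mean_diff:.2f}")
print(f"t = {t:.2f}, p = {p:.4f}")

# Invitation rate, analogous to click-through rate:
invited, recommended = 47, 100
print(f"invitation rate: {invited / recommended:.0%}")
```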

4.4 Discussion and Implications

4.4.1 Challenges and Future Directions. While the WikiProjects case demonstrated the promise of Value Sensitive Design and the five-stage holistic approach, it also revealed several challenges. First, it is challenging to explain the algorithms in ways that enable stakeholders to assess them and to provide sensible feedback for improving them. This constitutes a barrier to full stakeholder engagement in the design process. One future direction is to explore the effectiveness and trade-offs of different strategies and interfaces that help non-expert stakeholders understand algorithmic decision-making.

Second, there is a lack of holistic understanding of the mapping between the wide range of human values and the variety of algorithmic design options. One promising direction for future work is to design ways to translate human values into different algorithmic choices, covering areas such as data pre-processing, model regularization during the learning process, model post-processing, and model selection.

4.4.2 Implications for Learning Analytics System Design. This study has a number of implications for learning analytics design. First, the designed algorithmic products can be transferred to other learning contexts. For instance, given the increasing popularity of virtual collaboration in MOOCs [39], we could design similar recommendation algorithms to support the formation of productive virtual learning groups in MOOCs. We have not yet seen work in this area attending to learner values, and the Value Sensitive Design methodology applied to the Wikipedia context could contribute to settings like MOOCs.

Second, the five-stage design process is directly applicable to learning analytics design projects. Even though the study was primarily focused on algorithms, the Value Sensitive Design process introduced in this study can be applied to the design of an entire learning analytics system, of which the algorithm is one important component [2].



While we could embrace the process when choosing or developing network visualization layout algorithms for a social network analytics tool, we can also apply it to other parts of an analytics system. For example, as demonstrated in this case study, the consideration of human values is critical when communicating analytics results to stakeholders. Other aspects of the system, such as the technological infrastructure used to store and transmit learning data, can also benefit from the Value Sensitive Design approach.

5 GENERAL DISCUSSION AND CONCLUSIONS

To support ethical considerations and system integrity in learning analytics, this paper introduces the application of Value Sensitive Design in learning analytics design processes. The first study was a conceptual investigation of an existing learning analytics tool. Applying Value Sensitive Design methods, specifically stakeholder analysis and value analysis, we uncovered a number of values (e.g., student autonomy, self-image) and value tensions in the studied tool. Based on these analyses, we proposed design trade-offs that are subject to further empirical, technical, and conceptual investigations.

The second study went a step further: it proposed a holistic process for conducting Value Sensitive Design and demonstrated its application to building recommendation systems for WikiProjects, an informal, mass-collaboration scenario. The design process includes literature analysis, empirical investigations, prototype development, community engagement, iterative testing and refinement, and continuous evaluation. Even though this study focused on algorithms, the process can be readily applied to the design of a whole learning analytics system.

Overall, this paper is motivated by a need for practical means to support ethical considerations and human values in learning analytics systems. Learning analytics operates in the "middle space" between learning and analytics and needs to consider a wide range of factors, including learning theory, data integrity, human-computer interaction, and sociocultural factors [35]. As a result, designers must weigh a wide range of values that go beyond ethical concerns and often compete with one another.

Value Sensitive Design is a promising approach to promoting ethical practice in learning analytics. Preliminary work in this area has already been reported by colleagues [37], and a more systematic push for the application of Value Sensitive Design in learning analytics is needed. The two studies reported in this paper help close this gap. They demonstrate two ways of adopting Value Sensitive Design in learning analytics projects: one was a conceptual investigation (a "lite" version), whereas the other was a detailed design process comprising multiple Value Sensitive Design methods (a "holistic" version). It is hoped that both versions can strengthen the future design of learning analytics systems.

To improve learning analytics systems (especially their ethical dimensions), algorithmic transparency and accountability are not enough; rather, we should consider a system's integrity holistically [2]. This paper is only an initial step in this direction, and much work remains to be done to infuse Value Sensitive Design more broadly into the field of learning analytics. Encouragingly, several projects have emerged to advance this work. Besides the Privacy by Design approach mentioned above [37], the Connected Intelligence Centre at the University of Technology Sydney is developing an Ethical Design Critique (EDC) approach that involves stakeholders in considering the ethical and risk factors of new learning analytics tools. Even though this approach does not directly apply Value Sensitive Design, it considers competing values identified by different stakeholders and aims to reach resolutions. One future direction is to purposefully explore synergies among algorithm auditing, Privacy by Design, Ethical Design Critique, and Value Sensitive Design to comprehensively consider human values in learning analytics.

REFERENCES
[1] ACM US Public Policy Council. 2017. Statement on Algorithmic Transparency and Accountability. Online.
[2] Simon Buckingham Shum. 2017. Black box learning analytics? Beyond algorithmic transparency. Keynote presented at the Learning Analytics Summer Institute, Ann Arbor, MI.
[3] Bodong Chen, Yu-Hui Chang, Fan Ouyang, and Wanying Zhou. 2018. Fostering student engagement in online discussion through social learning analytics. The Internet and Higher Education 37 (2018), 21–30. https://doi.org/10.1016/j.iheduc.2017.12.002
[4] Dan Cosley, Dan Frankowski, Loren Terveen, and John Riedl. 2007. SuggestBot: Using Intelligent Task Routing to Help People Find Work in Wikipedia. In Proceedings of the 12th International Conference on Intelligent User Interfaces (IUI '07). ACM, New York, NY, USA, 32–41. https://doi.org/10.1145/1216295.1216309
[5] Ulrike Cress, Johannes Moskaliuk, and Heisawn Jeong. 2016. Mass Collaboration and Education. Springer. https://doi.org/10.1007/978-3-319-13536-6
[6] Betul C Czerkawski. 2016. Networked learning: design considerations for online instructors. Interactive Learning Environments 24, 8 (Nov. 2016), 1850–1863. https://doi.org/10.1080/10494820.2015.1057744
[7] Alexei Czeskis, Ivayla Dermendjieva, Hussein Yapit, Alan Borning, Batya Friedman, Brian Gill, and Tadayoshi Kohno. 2010. Parenting from the pocket: value tensions and technical directions for secure and private parent-teen mobile safety. In Proceedings of the Sixth Symposium on Usable Privacy and Security. ACM, New York, NY, USA, 15. https://doi.org/10.1145/1837110.1837130
[8] Janet Davis and Lisa P Nathan. 2013. Value Sensitive Design: Applications, Adaptations, and Critiques. In Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, Jeroen van den Hoven, Pieter E Vermaas, and Ibo van de Poel (Eds.). Springer Netherlands, Dordrecht, 1–26. https://doi.org/10.1007/978-94-007-6994-6_3-1
[9] Shane Dawson, Aneesha Bakharia, and Elizabeth Heathcote. 2010. SNAPP: Realising the affordances of real-time SNA within networked learning environments. In Proceedings of the 7th International Conference on Networked Learning, Lone Dirckinck-Holmfeld, Vivien Hodgson, Chris Jones, Maarten de Laat, David McConnell, and Thomas Ryberg (Eds.). Lancaster University, Lancaster, UK, 125–133.
[10] Nia M M Dowell, Tristan M Nixon, and Arthur C Graesser. 2018. Group communication analysis: A computational linguistics approach for detecting sociocognitive roles in multiparty interactions. Behavior Research Methods (Aug. 2018). https://doi.org/10.3758/s13428-018-1102-z
[11] Hendrik Drachsler and Wolfgang Greller. 2016. Privacy and analytics: It's a DELICATE issue. A checklist for trusted learning analytics. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge. ACM, New York, NY, USA, 89–98. https://doi.org/10.1145/2883851.2883893
[12] Hendrik Drachsler, Tore Hoel, Adam Cooper, Gábor Kismihók, Alan Berg, Maren Scheffel, Weiqin Chen, and Rebecca Ferguson. 2016. Ethical and Privacy Issues in the Design of Learning Analytics Applications. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16). ACM, New York, NY, USA, 492–493. https://doi.org/10.1145/2883851.2883933
[13] Rebecca Ferguson, Tore Hoel, Maren Scheffel, and Hendrik Drachsler. 2016. Guest Editorial: Ethics and Privacy in Learning Analytics. Journal of Learning Analytics 3, 1 (2016), 5–15. https://doi.org/10.18608/jla.2016.31.2
[14] Andrea Forte, Niki Kittur, Vanessa Larco, Haiyi Zhu, Amy Bruckman, and Robert E Kraut. 2012. Coordination and beyond: social functions of groups in open content production. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work. ACM, 417–426.
[15] Batya Friedman, David G Hendry, and Alan Borning. 2017. A Survey of Value Sensitive Design Methods. Foundations and Trends in Human–Computer Interaction 11, 2 (2017), 63–125. https://doi.org/10.1561/1100000015
[16] Batya Friedman, Peter H Kahn, Alan Borning, and Alina Huldtgren. 2013. Value Sensitive Design and Information Systems. In Early Engagement and New Technologies: Opening Up the Laboratory, Neelke Doorn, Daan Schuurbiers, Ibo van de Poel, and Michael E Gorman (Eds.). Springer Netherlands, Dordrecht, 55–95. https://doi.org/10.1007/978-94-007-7844-3_4
[17] R Stuart Geiger. 2017. Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture. Big Data & Society 4, 2 (Sept. 2017). https://doi.org/10.1177/2053951717730735
[18] Christine Greenhow and Emilia Askari. 2017. Learning and teaching with social network sites: A decade of research in K-12 related education. Education and Information Technologies 22, 2 (March 2017), 623–645. https://doi.org/10.1007/s10639-015-9446-9
[19] Iassen Halatchliyski, Johannes Moskaliuk, Joachim Kimmerle, and Ulrike Cress. 2014. Explaining authors' contribution to pivotal artifacts during mass collaboration in the Wikipedia's knowledge base. International Journal of Computer-Supported Collaborative Learning 9, 1 (2014), 97–115.
[20] Aaron Halfaker, R Stuart Geiger, Jonathan T Morgan, and John Riedl. 2013. The Rise and Decline of an Open Collaboration System: How Wikipedia's Reaction to Popularity Is Causing Its Decline. The American Behavioral Scientist 57, 5 (May 2013), 664–688. https://doi.org/10.1177/0002764212469365
[21] Madeline Huberth, Patricia Chen, Jared Tritz, and Timothy A McKay. 2015. Computer-tailored student support in introductory physics. PLoS ONE 10, 9 (Sept. 2015), e0137001. https://doi.org/10.1371/journal.pone.0137001
[22] Dirk Ifenthaler and Monica W Tracey. 2016. Exploring the relationship of ethics and privacy in learning analytics and design: implications for the field of educational technology. Educational Technology Research and Development 64, 5 (Oct. 2016), 877–880. https://doi.org/10.1007/s11423-016-9480-3
[23] Charles Lang, Leah P Macfadyen, Sharon Slade, Paul Prinsloo, and Niall Sclater. 2018. The Complexities of Developing a Personal Code of Ethics for Learning Analytics Practitioners: Implications for Institutions and the Field. In Proceedings of the 8th International Conference on Learning Analytics and Knowledge (LAK '18). ACM, New York, NY, USA, 436–440. https://doi.org/10.1145/3170358.3170396
[24] P Long and G Siemens. 2011. Penetrating the Fog: Analytics in Learning and Education. Educause Review 46, 5 (2011), 30–32.
[25] Paul Prinsloo and Sharon Slade. 2016. Student Vulnerability, Agency and Learning Analytics: An Exploration. Journal of Learning Analytics 3, 1 (2016), 159–182. https://doi.org/10.18608/jla.2016.31.10
[26] Paul Prinsloo and Sharon Slade. 2017. An Elephant in the Learning Analytics Room: The Obligation to Act. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK '17). ACM, New York, NY, USA, 46–55. https://doi.org/10.1145/3027385.3027406
[27] Paul Prinsloo and Sharon Slade. 2017. Ethics and Learning Analytics: Charting the (Un)Charted. In Handbook of Learning Analytics (first ed.), Charles Lang, George Siemens, Alyssa Wise, and Dragan Gasevic (Eds.). Society for Learning Analytics Research (SoLAR), 49–57. https://doi.org/10.18608/hla17.004
[28] Priscilla M Regan. 1995. Legislating Privacy: Technology, Social Values, and Public Policy. University of North Carolina Press, Chapel Hill, NC.
[29] María Jesús Rodríguez-Triana, Alejandra Martínez-Monés, and Sara Villagrá-Sobrino. 2016. Learning Analytics in Small-Scale Teacher-Led Innovations: Ethical and Data Privacy Issues. Journal of Learning Analytics 3, 1 (2016), 43–65.
[30] Carolyn Penstein Rosé, Oliver Ferschke, Gaurav Tomar, Diyi Yang, Iris Howley, Vincent Aleven, George Siemens, Matthew Crosslin, Dragan Gasevic, and Ryan Baker. 2015. Challenges and opportunities of dual-layer MOOCs: Reflections from an edX deployment study. In Proceedings of the 11th International Conference on Computer Supported Collaborative Learning (CSCL 2015), Vol. 2. International Society of the Learning Sciences, Gothenburg, Sweden, 848–851.
[31] Christian Sandvig, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort. 2014. Auditing algorithms: Research methods for detecting discrimination on internet platforms. In Data and Discrimination: Converting Critical Concerns into Productive Inquiry, a pre-conference at the 64th Annual Meeting of the International Communication Association.
[32] Niall Sclater. 2016. Developing a Code of Practice for Learning Analytics. Journal of Learning Analytics 3, 1 (2016), 16–42. https://doi.org/10.18608/jla.2016.31.3
[33] George Siemens. 2013. Learning analytics: The emergence of a discipline. The American Behavioral Scientist 57, 10 (2013), 1380–1400.
[34] Sharon Slade and Paul Prinsloo. 2013. Learning analytics: Ethical issues and dilemmas. The American Behavioral Scientist 57, 10 (Oct. 2013), 1510–1529. https://doi.org/10.1177/0002764213479366
[35] Dan Suthers and Katrien Verbert. 2013. Learning analytics as a "middle space". In Proceedings of the Third International Conference on Learning Analytics and Knowledge (LAK '13). ACM Press, New York, NY, USA, 1–4. https://doi.org/10.1145/2460296.2460298
[36] Luis M Vaquero and Manuel Cebrian. 2013. The rich club phenomenon in the classroom. Scientific Reports 3 (2013), 1–8. https://doi.org/10.1038/srep01174
[37] Josine Verhagen, Lucie Dalibert, Federica Lucivero, Tjerk Timan, and Larissa Co. 2016. Designing values in an adaptive learning platform. In 2nd Workshop on Ethics & Privacy in Learning Analytics at the 6th International Learning Analytics & Knowledge Conference. CEUR Proceedings, Edinburgh, UK, 1–5.
[38] Kari Edison Watkins, Brian Ferris, Yegor Malinovskiy, and Alan Borning. 2013. Beyond Context-Sensitive Solutions: Using Value-Sensitive Design to Identify Needed Transit Information Tools. In Urban Public Transportation Systems 2013. American Society of Civil Engineers, Reston, VA, 296–308. https://doi.org/10.1061/9780784413210.026
[39] Miaomiao Wen, Diyi Yang, and Carolyn Penstein Rosé. 2015. Virtual Teams in Massive Open Online Courses. In Artificial Intelligence in Education (Lecture Notes in Computer Science). Springer, Cham, 820–824. https://doi.org/10.1007/978-3-319-19773-9_124
[40] James E Willis, Sharon Slade, and Paul Prinsloo. 2016. Ethical oversight of student data in learning analytics: A typology derived from a cross-continental, cross-institutional perspective. Educational Technology Research and Development 64, 5 (Oct. 2016), 881–901. https://doi.org/10.1007/s11423-016-9463-4
[41] Alyssa Friend Wise and J Vytasek. 2017. Learning analytics implementation design. In Handbook of Learning Analytics, Charles Lang, George Siemens, Alyssa Friend Wise, and Dragan Gašević (Eds.). Society for Learning Analytics Research, 151–160.
[42] Haiyi Zhu, Robert Kraut, and Aniket Kittur. 2012. Organizing without formal organization: group identification, goal setting and social modeling in directing online production. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work. ACM, 935–944.
[43] Haiyi Zhu, Bowen Yu, Aaron Halfaker, and Loren Terveen. 2018. Value-Sensitive Algorithm Design: Method, Case Study, and Lessons. Proceedings of the ACM on Human-Computer Interaction 2, CSCW (2018), Article 194. https://doi.org/10.1145/3274463