
Journal of Clinical and Translational Science

www.cambridge.org/cts

Research Methods and Technology
Research Article

Cite this article: Tigges BB, Miller D, Dudding KM, Balls-Berry JE, Borawski EA, Dave G, Hafer NS, Kimminau KS, Kost RG, Littlefield K, Shannon J, and Menon U (2019) The Measures of Collaboration Workgroup of the Collaboration and Engagement Domain Task Force, National Center for Advancing Translational Sciences, National Institutes of Health. Measuring quality and outcomes of research collaborations: An integrative review. Journal of Clinical and Translational Science 3: 261–289. doi: 10.1017/cts.2019.402

Received: 5 June 2019
Revised: 25 July 2019
Accepted: 26 July 2019

Key words: Team science; research collaboration; instrument development; psychometrics; outcome measure; research process measure; research evaluation; scientific collaboration; measures; teamwork

Address for correspondence: B. B. Tigges, MSC07 4380, 1 University of New Mexico, Albuquerque, NM 87131-0001, USA. Email: [email protected]

© The Association for Clinical and Translational Science 2019. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

Measuring quality and outcomes of research collaborations: An integrative review

Beth B. Tigges1, Doriane Miller2, Katherine M. Dudding3, Joyce E. Balls-Berry4, Elaine A. Borawski5, Gaurav Dave6, Nathaniel S. Hafer7, Kim S. Kimminau8, Rhonda G. Kost9, Kimberly Littlefield10, Jackilen Shannon11, Usha Menon12 and The Measures of Collaboration Workgroup of the Collaboration and Engagement Domain Task Force, National Center for Advancing Translational Sciences, National Institutes of Health

1University of New Mexico, College of Nursing, Albuquerque, NM, USA; 2Department of Internal Medicine, University of Chicago Hospitals, Chicago, IL, USA; 3Department of Family, Community and Health Systems, University of Arizona, College of Nursing, Tucson, AZ, USA; 4Division of Epidemiology, Mayo Clinic, Rochester, MN, USA; 5Department of Population and Quantitative Health Sciences, Case Western Reserve University School of Medicine, Cleveland, OH, USA; 6Department of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; 7Center for Clinical and Translational Science, University of Massachusetts Medical School, Worcester, MA, USA; 8University of Kansas Medical Center, Family Medicine and Community Health, Kansas City, KS, USA; 9The Rockefeller University, Clinical Research Support Office, New York, NY, USA; 10University of North Carolina-Greensboro, Office of Research and Engagement, Greensboro, NC, USA; 11Oregon Health and Sciences University, OHSU-PSU School of Public Health, Portland, OR, USA and 12University of South Florida College of Nursing, Tampa, FL, USA

Abstract

Introduction: Although the science of team science is no longer a new field, the measurement of team science and its standardization remain in relatively early stages of development. To describe the current state of team science assessment, we conducted an integrative review of measures of research collaboration quality and outcomes. Methods: Collaboration measures were identified using both a literature review based on specific keywords and an environmental scan. Raters abstracted details about the measures using a standard tool. Measures related to collaborations with clinical care, education, and program delivery were excluded from this review. Results: We identified 44 measures of research collaboration quality, which included 35 measures with reliability and some form of statistical validity reported. Most scales focused on group dynamics. We identified 89 measures of research collaboration outcomes; 16 had reliability and 15 had a validity statistic. Outcome measures often only included simple counts of products; publications rarely defined how counts were delimited, obtained, or assessed for reliability. Most measures were tested in only one venue. Conclusions: Although models of collaboration have been developed, in general, strong, reliable, and valid measurements of such collaborations have not been conducted or accepted into practice. This limitation makes it difficult to compare the characteristics and impacts of research teams across studies or to identify the most important areas for intervention. To advance the science of team science, we provide recommendations regarding the development and psychometric testing of measures of collaboration quality and outcomes that can be replicated and broadly applied across studies.

Introduction

Translating basic science discoveries into demonstrated improvements in public health requires a research team from diverse backgrounds [1–3]. The US National Institutes of Health National Center for Advancing Translational Sciences recognized this need by establishing a strategic goal to advance translational team science by fostering innovative partnerships and diverse collaborations [4]. In the health sciences, there is significant interest in translational research and moving more quickly from single-study efficacy trials to effective, generalizable interventions in health care practice. Foundational to this body of literature is the assumption that cross-disciplinary research teams speed the process of translational research [5].

Analyses of trends in scientific publications suggest that major advances in biological, physical, and social science are produced by research teams; that the work of these teams is cited more often than the work of individual researchers; and that, in the long term, the work has greater scientific impact [6–9]. In addition, cross-disciplinary diversity is assumed to lead to greater innovation [10]. These observations have become the cornerstone of the translational science movement in the health sciences.


Implementing team science can be challenging. Multiple authors have noted that working in collaboration can be more expensive and labor intensive than working alone [11,12]. Noted trade-offs include added time and effort to communicate with diverse collaborators, conflicts arising from different goals and assumptions, and increased start-up time with its resulting delay in productivity [13–17]. These opportunity costs may be acceptable if the outcomes of research collaborations can accelerate knowledge or answer the complex health questions faced by today's society.

To test the assumption that research collaboration leads to greater productivity, we need to accurately measure the characteristics of research teams and their outcomes and be able to compare results across teams [6,12,15,18–27]. Although different measures have so far shown that collaborations are beneficial, operational definitions of variables that may influence conclusions (construct validity) are varied, complicating interpretation of results. Despite some exceptions [12,19,23,28], there is a lack of attention to the development and psychometric testing of reliable and valid measures of collaboration. As an initial step, it would be useful to have an overview of the current state of the science in the measurement of research collaborations. In this article, we report the results of an integrative review of the literature, looking for reliable and valid measures that describe the quality and outcomes of research collaborations.

Materials and Methods

We conducted two reviews. The first focused on measures of collaboration quality, defined as measures of interactions or processes of the team during the collaboration. The second review focused on outcomes of the collaboration (e.g., publications, citations). We used an integrative review approach. An integrative review is a specific type of review that applies a comprehensive methodology involving a combination of different approaches to summarize past research related to a particular topic, including both experimental and non-experimental studies, and reach conclusions [29,30].

Our research team brainstormed keyword combinations and, based on expert opinion, agreed on final sets of keywords that were comprehensive enough to cover the topics fully but not so broad as to include non-relevant literature. For the review of collaboration quality, these keywords were "measure/measurement" combined with the following terms: community engagement, community engaged research, collaboration, community academic partnership, team science, regulatory collaboration, industry collaboration, public–private partnership (focus on research). For the review associated with collaboration outcomes, the word "outcomes" was added to the above search terms. Our intention was to include all types of research collaborations, including partnerships between academic and other community, governmental, and industry partners. The following keywords were considered, tested in preliminary searches, and eliminated by group consensus as being too broad for our purpose: consortium collaboration, public health and medicine collaboration, patient advocacy group collaboration, and coalition. Measures of collaboration related to clinical care, education, and program delivery collaborations were excluded from this review.
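For illustration only, the keyword pairing described above can be expressed programmatically. The sketch below (Python) uses the term list from the text, but the boolean query form and the build_queries helper are hypothetical conveniences, not the exact strings submitted to each database.

    # Sketch: generate the keyword combinations described above.
    # The boolean query syntax is generic; actual database syntax varies.
    base_terms = [
        "community engagement", "community engaged research", "collaboration",
        "community academic partnership", "team science",
        "regulatory collaboration", "industry collaboration",
        "public-private partnership",
    ]

    def build_queries(outcome_review=False):
        queries = []
        for term in base_terms:
            query = f'"{term}" AND (measure OR measurement)'
            if outcome_review:  # the outcomes review added "outcomes" to each search
                query += ' AND "outcomes"'
            queries.append(query)
        return queries

    for q in build_queries(outcome_review=True):
        print(q)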

Quality and outcome measures were identified using both a literature review and an environmental scan. We conducted searches using the standard databases PubMed, the Comprehensive Index to Nursing and Allied Health Literature, and PsycINFO, as well as EMBASE, Google Scholar, Scopus, and websites recommended by members of the research team. After duplicates and articles that were not focused on a specific scale or measure of research collaboration were eliminated, team members reviewed a final list of 25 publications for the measures of collaboration quality, including 4 articles describing social network analyses, and 42 publications for measures of collaboration outcome. All publications were published prior to 2017. Figs. 1 and 2 provide flow diagrams of how articles were selected for inclusion in both reviews.

At least two members of the research team reviewed each article using a standard data abstraction form that included the name of the measure/outcome; construct being measured; sample; and details about the measure, including operational definition, number of items, response options, reliability, validity, and other evidence supporting its use. Reviewers were also asked to make a judgment as to whether the article included a measure of the collaboration quality (or outcomes or products) of the scientific/research collaborations; both reviews had a rater agreement of 99%. Differences in reviews were resolved through consensus after discussions with a third reviewer.
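To make the agreement statistic concrete, here is a minimal sketch (Python, with invented judgment vectors) of percent agreement between two reviewers' binary include/exclude decisions; Cohen's kappa is added as a chance-corrected companion, although the review itself reports only raw agreement.

    # Percent agreement (and chance-corrected kappa) between two reviewers'
    # binary include/exclude judgments. The judgment vectors are made up.
    def percent_agreement(r1, r2):
        matches = sum(a == b for a, b in zip(r1, r2))
        return matches / len(r1)

    def cohens_kappa(r1, r2):
        n = len(r1)
        p_obs = percent_agreement(r1, r2)
        # Expected chance agreement from each rater's marginal "include" rate
        p1, p2 = sum(r1) / n, sum(r2) / n
        p_exp = p1 * p2 + (1 - p1) * (1 - p2)
        return (p_obs - p_exp) / (1 - p_exp)

    rater_a = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]  # 1 = include, 0 = exclude
    rater_b = [1, 1, 0, 1, 0, 1, 1, 1, 1, 0]
    print(f"agreement = {percent_agreement(rater_a, rater_b):.0%}")
    print(f"kappa     = {cohens_kappa(rater_a, rater_b):.2f}")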

Fig. 1. Flow diagram of publications included in the final collaboration quality review.

Fig. 2. Flow diagram of publications included in the final collaboration outcomes review.


Results

Quality Measures

We identified 44 measures of research collaboration quality from the 15 publications included in the final summary analyses (see Fig. 1). The specifics of each measure are detailed in Table 1. Three articles were not included in Table 1 because they all used social network analysis [31–33]. Four articles covered 80% of the measures identified [12,19,23,34].

The number of items per measure ranged from 1 to 48, with 77% having fewer than 10 items per measure. A few articles reported on measures that covered several domains. As shown in Table 1, we have included each domain measure separately if it was reported as an independent scale with its own individual psychometric properties.

Reliability was reported for 35 measures, not reported for four measures, and not applicable for five measures (single-item, self-reported frequency counts, or qualitative responses). Reliability measures were most frequently Cronbach's alphas for internal consistency reliability, but also included intraclass correlation coefficients, inter-rater correlations, and, when Rasch analysis was used, person separation reliability. Test–retest reliability was never reported. Cronbach's alpha statistics were >0.70 for 86% of the measures using that metric. Some form of validity was reported on 40 measures and typically included exploratory (n = 8) and/or confirmatory factor analysis (n = 26). Convergent or discriminant validity was evident for 38 measures but was based on study results, as interpreted by our reviewers, rather than identified by the authors as a labeled multitrait–multimethod matrix analysis of construct validity. Twelve measures had convergent or discriminant validity only, without any further exploration of validity. Face validity and content validity were reported for five measures, along with other analyses of validity.
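As a concrete reference for the most common statistic above, here is a minimal sketch of Cronbach's alpha for a respondents-by-items matrix (Python with numpy; the Likert responses are invented for illustration).

    import numpy as np

    def cronbach_alpha(items):
        """Internal consistency for a respondents-by-items matrix.

        alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
        """
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scale score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 5-point Likert responses: 6 respondents x 4 items
    responses = np.array([
        [4, 5, 4, 4],
        [3, 3, 3, 4],
        [5, 5, 4, 5],
        [2, 2, 3, 2],
        [4, 4, 5, 4],
        [3, 4, 3, 3],
    ])
    print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")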

Outcome Measures

We identified 89 outcome measures from the 24 publications included in the final summary analyses (see Fig. 2). Characteristics of each measure are detailed in Table 2. Three publications accounted for 44 (49%) of the measures identified [17,23,35]. However, only two of those [17,23] included measures tested in actual studies; the remaining article [35] included only recommendations for specific measures.

Measures were broadly classified into one of six different categories, reflected in Table 2: (1) counts or numerical representations of products (e.g., number of publications; 38 measures); (2) quality indicators of counted products (e.g., journal impact factor; 7 measures); (3) self-reported perceptions of outcomes (e.g., perceived productivity; 32 measures); (4) peer-reviewed perceptions of outcomes (e.g., progress on the development of interventions; 5 measures); (5) qualitative descriptions of outcomes (e.g., descriptive data collected by interview; 6 measures); and (6) health indicators/outcomes (e.g., life expectancy; 1 overall measure with 60 different indicators). The number of items per measure ranged from a single count to a 99-item scale, with over 50% of the measures composed of a single count, number, or rating of a single item.

Twenty-three of the 89 measures were recommendations on measures and, as would be expected, had no reported reliability or validity [35]. For the remaining 66 measures, only 16 reported assessments of reliability. Nine of 24 measures in the self-reported perceptions category included Cronbach's alpha >0.70, showing internal consistency reliability. Six measures (3 of 24 in the counts of products category and 3 of 4 in the peer-reviewed category) had inter-rater agreement described; all were over 80%. One measure in the peer-reviewed category reported inter-rater reliability of r = 0.24–0.69. Of these 16 measures with reported reliability, nine had some form of validity described: confirmatory factor analysis (6 measures) and convergent validity (3 measures). Of the remaining 50 measures without reliability data, five had some type of convergent validity described and one was supported by principal component analysis. Once again, convergent validity was not formally labeled as such but was evident in terms of correlations between the measure under study and other relevant variables.
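For the inter-rater reliability correlations cited above, a minimal sketch (Python with numpy, invented ratings) of the Pearson r between two raters scoring the same set of products follows; the rating vectors are hypothetical.

    import numpy as np

    # Hypothetical scores assigned by two raters to the same ten proposals.
    rater_1 = np.array([7, 5, 8, 6, 9, 4, 7, 6, 8, 5], dtype=float)
    rater_2 = np.array([6, 5, 9, 6, 8, 5, 6, 7, 8, 4], dtype=float)

    # Pearson inter-rater reliability: covariance scaled by both standard deviations.
    r = np.corrcoef(rater_1, rater_2)[0, 1]
    print(f"inter-rater r = {r:.2f}")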

Discussion

Quality Measures

Overall, there are a relatively large number of scales, some of them robust, that have been used to measure the quality or process of research collaborations (e.g., trust, frequency of collaboration). However, many scales have not been extensively used and have been subjected to relatively little repeated psychometric study and analysis. Most have been developed in support of a particular research project rather than with the intent of becoming a standard indicator or scale for the field. Although calculated across multiple organizations, estimates of reliability and/or validity were often study specific as well. Reports of effect sizes (sensitivity or responsiveness) were rare and limited to correlations, and construct validity has not been explored beyond exploratory or confirmatory factor analyses. Given this dearth of replicated psychometric data, it is not surprising that widely accepted, standard scales have not emerged to date. Wide-scale testing of measures of collaboration is essential to establish reliability, validity, and sensitivity or responsiveness across settings and samples.

Scales developed to date have been primarily focused on group dynamics (including the quality of interpersonal interactions, trust, and communication). Although these are important factors, few measurements have been made of how well a team functions (such as leadership styles) and the degree to which the team's work is viewed as synergistic, integrative, or otherwise more valuable than would occur in a more siloed setting. Oetzel et al.'s [23] beginning psychometric work provides an example of some of these types of measures. This is in contrast to the numerous available (or under development) scales to measure attitudes toward collaborations and quality of collaborations that exist at specific institutions.

Despite these limitations, two sets of measures deserve note. First, those reported by Hall et al. [12] and Mâsse et al. [19] as measures of collaborations in National Cancer Institute-funded Transdisciplinary Tobacco Use Research Centers have been used more extensively than many of the other scales in this review as indicators of collaboration quality among academic partners (although relatively little additional psychometric data have been reported beyond initial publications). Second, the measures reported by Oetzel et al. [23] are unique in that they are scales to assess research quality involving collaborations between academics and communities, agencies, and/or community-based organizations. They are also unique in representing responses from over 200 research partnerships across the USA. This review did not distinguish between partnerships (e.g., involving just two partnering organizations) and coalitions (involving multiple organizations).


Table 1. Measures of research collaboration quality

Bietz et al. [36]: Collaboration Success Wizard
  Construct: Past, present, or future perceptions of five factors: nature of work; common ground; collaboration readiness; management, planning and decision making; and technology readiness
  Sample/population: 177 university faculty and staff from 12 projects
  Number of items: 44; Response options: varied
  Reliability: NR; Validity: NR

Greene et al. [37]: CRN Participant Survey
  Construct: Five domains: extent of collaboration and quality of communication; performance of projects and infrastructure; data quality; scientific productivity; and impact on member organizations
  Sample/population: Investigators and project staff from the HMO Cancer Research Network over a 5-year period
  Number of items: All items not provided; questions modified annually based on feedback and new project needs
  Response options: Several 5-point Likert scales ranging from "strongly agree" to "strongly disagree" or from "very effective" to "very ineffective," including "can't evaluate"; also open-ended items to collect qualitative input

Hall et al. [12]: Research Orientation Scale
  Construct: Unidisciplinary, multidisciplinary, or inter/transdisciplinary proclivity of values and attitudes toward research
  Sample/population: 56 investigators and staff from four NCI TREC centers
  Number of items: 10; Response options: 5-point Likert scale ranging from "strongly agree" to "unsure" to "strongly disagree"
  Reliability: Cronbach's alpha = 0.74
  Validity: Construct validity: exploratory and confirmatory factor analyses supporting three factors. Convergent validity: higher unidisciplinary orientation inversely correlated with cross-disciplinary collaborative activities and with multidisciplinary and inter/transdisciplinary research orientation; higher multidisciplinary orientation correlated with more collaborators and more cross-disciplinary collaborative activities; similar findings for inter/transdisciplinary research orientation

Hall et al. [12]: Completing Deliverables Scale
  Construct: Investigators' expectations for their projects' meeting projected year-1 deliverables
  Number of items: One item for each project; Response options: 5-point Likert scale ranging from "highly unlikely" to "highly likely"; each project rated separately
  Reliability: NA
  Validity: Convergent validity: inverse correlation between duration of involvement in transdisciplinary projects at their center and researchers' confidence in meeting year-1 deliverables


Hall et al. [12]a: History of Collaboration (other individual investigators)
  Construct: Number of individuals collaborated with and length of time for each individual
  Sample/population: 56 investigators and staff from four NCI TREC centers
  Number of items: 2; Response options: count of collaborators; number of years of collaboration for each collaborator
  Validity: Convergent validity: number of collaborators correlated with more center-related collaborative activities, number of years working in inter/transdisciplinary centers, number of years working with inter/transdisciplinary projects, and higher multidisciplinary and inter/transdisciplinary research orientation

Hall et al. [12]a: History of Collaboration (other centers or projects)
  Construct: Number of years in inter/transdisciplinary center and projects
  Number of items: 4; Response options: counts of number of centers and of years involved with center; counts of number of projects and number of years involved with projects
  Validity: Convergent validity: more years involved in inter/transdisciplinary centers correlated with number of collaborators and inversely correlated with confidence in completion of 1-year deliverables; more years involved in inter/transdisciplinary projects correlated with number of collaborators and inversely correlated with confidence in completion of 1-year deliverables

Hall et al. [12]a: Satisfaction with Collaboration
  Construct: Satisfaction with each individual collaborator
  Number of items: 1; Response options: 5-point Likert scale ranging from "not at all satisfied" to "neutral" to "completely satisfied"
  Validity: Convergent validity: more satisfaction with individual collaborations correlated with more perceived institutional resources supporting collaboration, more positive impressions, higher ratings of interpersonal collaborations, and higher perception of collaborative productivity

Hall et al. [12]a: Cross-Disciplinary Collaboration-Activities Scale
  Construct: Frequency of engagement in collaborative activities outside of his/her primary field
  Number of items: 6; Response options: 7-point Likert scale ranging from "never" to "weekly"
  Reliability: Cronbach's alpha = 0.81
  Validity: Convergent validity: higher frequency of cross-disciplinary collaborative activities correlated with stronger multidisciplinary and interdisciplinary/transdisciplinary research orientation

Hall et al. [12]a: TREC-related Collaborative Activities Scale
  Construct: Frequency of engagement with center-specific activities
  Number of items: 3
  Reliability: Cronbach's alpha = 0.74


Hall et al. [12]a: Institutional Resources Scale
  Construct: Availability and quality of institutional resources for conducting collaborative research project
  Number of items: 8; Response options: 5-point Likert scale ranging from "very poor" to "excellent"
  Reliability: Cronbach's alpha = 0.87
  Validity: Convergent validity: the better the perceived resources, the more positive researchers' perception of the center, the more satisfied with previous collaborators, and the more positively rated collaborative productivity and interpersonal collaboration

Hall et al. [12]a: Semantic-Differential/Impressions Scale
  Construct: Investigators' impressions of their research center
  Number of items: 21 adjective pairs; Response options: 7-point continuum on which respondents rate their impressions on adjective pairs (e.g., conflict-harmonious, not supportive-supportive, fragmented-integrated)
  Reliability: Cronbach's alpha = 0.98
  Validity: Convergent validity: better impressions were correlated with more collaboration satisfaction, more confidence in completion of deliverables, more institutional resources for collaboration, and better interpersonal collaboration

Hall et al. [12]a: Interpersonal-Collaboration Scale
  Construct: Interpersonal collaborative process at their center
  Number of items: 8; Response options: 5-point Likert scale with response options ranging from either "very poor" to "excellent" or "strongly agree" to "strongly disagree" with central "neither agree nor disagree"
  Reliability: Cronbach's alpha = 0.92
  Validity: Convergent validity: the better the interpersonal collaboration, the more collaboration satisfaction, the more confidence in completion of deliverables, and the more perceived institutional resources for collaboration

Hall et al. [12]b: Written Products Protocol
  Construct: The integrative (transdisciplinary) aspects of written research protocols, disciplines represented, levels of analysis, type of cross-disciplinary integration
  Sample/population: 21 center developmental project proposals from four NCI TREC centers
  Number of items: 37-item protocol used to evaluate proposals
  Response options: Items describing the proposal with various response formats; one item rates whether the proposal is "unidisciplinary," "multidisciplinary," "interdisciplinary," or "transdisciplinary"; two items regarding transdisciplinary integration and scope of the proposal use a 10-point Likert scale ranging from "none" to "substantial"
  Reliability: Inter-rater reliabilities based on Pearson's correlations from 0.24 to 0.69; highest reliability for rating experimental types (0.69), number of analytic levels (0.59), disciplines (0.59), and scope (0.52); lower reliability in attempts to name the cross-disciplinary integration in the proposal
  Validity: Convergent validity: the higher the number of disciplines in a proposal, the broader its integrative score and the larger its number of analytic levels; the higher the type of disciplinarity, the broader its overall scope


Huang [34]: Trust
  Construct: Team trust
  Sample/population: 290 members of 60 technology research and development teams from the Industrial Technology Research Institute in Taiwan
  Number of items: 7; Response options: 5-point Likert scale ranging from "strongly disagree" to "strongly agree"
  Reliability: Cronbach's alpha = 0.87; ICC = 0.46 (p < 0.001)
  Validity: Face and content validity. Construct validity: common method variance analysis and confirmatory factor analysis using partial least squares latent structural modeling. Convergent validity with other constructs, with composite reliability > 0.6 and average variance extracted at least 0.5. Discriminant validity, with square root of average variance for a construct greater than levels of correlations involving the construct

Huang [34]: Transactive Memory System
  Construct: Team transactive memory system
  Number of items: 7
  Reliability: Cronbach's alpha = 0.84; ICC = 0.42 (p < 0.001)

Huang [34]: Knowledge Sharing
  Construct: Team knowledge sharing
  Number of items: 4
  Reliability: Cronbach's alpha = 0.79; ICC = 0.55 (p < 0.001)

Huang [34]: Group Cohesiveness
  Construct: Group cohesiveness (team network ties and collective mind)
  Number of items: 5
  Reliability: Cronbach's alpha = 0.78; ICC = 0.42 (p < 0.001)

Huang [34]: Team Performance
  Construct: Team performance
  Number of items: 4
  Reliability: Cronbach's alpha = 0.82; ICC = 0.53 (p < 0.001)

Lee and Bozeman [38]: Collaboration Strategies
  Construct: Collaboration strategies or motives for collaboration
  Sample/population: 443 science faculty affiliated with NSF or DOE research centers at US universities
  Number of items: 13; Response options: 4-point Likert scale ranging from "very important" to "not important"
  Reliability: Cronbach's alpha for subscales: Taskmaster (2 items) = 0.60; Nationalist (2 items) = 0.57; Mentor (2 items) = 0.57; Follower (3 items) = 0.42; Buddy (3 items) = 0.32; Tactician only one item. No overall Cronbach's alpha reported
  Validity: Construct validity: exploratory factor analysis with varimax rotation supporting six factors: Taskmaster, Nationalist, Mentor, Follower, Buddy, and Tactician. Convergent validity: Nationalist and Mentor collaboration strategies/motives significantly associated with number of collaborators during past 12 months; only the Tactician strategy/motive significantly associated with journal publication productivity as measured by a normal count and a fractional count of publications during the 3 years post survey


Lee and Bozeman [38]: Collaboration
  Construct: Number of collaborators
  Number of items: 1; Response options: count of number of persons, by category, with whom they engaged in research collaborations within the past 12 months. Categories were male university faculty, male graduate students, male researchers who are not university faculty or students, female university faculty, female graduate students, and female researchers who are not university faculty or students
  Reliability: NA
  Validity: Convergent validity: zero-order correlations between number of collaborators and journal publication productivity, both normal count and fractional count. Two-stage least squares regression results, including other moderating variables, demonstrated a continued significant relationship between number of collaborators and normal publication count

Mallinson et al. [39]: MATRICx
  Construct: Motivators and threats to collaboration readiness
  Sample/population: 125 faculty, students, researchers
  Number of items: 48 (31 threat and 17 motivator items); Response options: 4-point Likert scale: 4 = describes me/my experience exactly; 3 = describes me/my experience quite well; 2 = somewhat describes me/my experience; 1 = does not describe me/my experience at all
  Reliability: Rasch analysis: person separation reliability for threat items = 0.92; motivator items (experienced participants) = 0.94; motivator items (inexperienced participants) = 0.85; all items = 0.67
  Validity: Construct validity: Rasch analysis and principal components analysis

Mâsse et al. [19]c: Transdisciplinary Integration Scale
  Construct: Attitudes about transdisciplinary research
  Sample/population: 216 research faculty, staff, trainees from NCI TTURC
  Number of items: 15; Response options: 5-point Likert scale ranging from "strongly agree" to "unsure" to "strongly disagree"
  Reliability: Cronbach's alpha = 0.89
  Validity: Construct validity: confirmatory factor analysis. Convergent validity: correlations with center outcomes

Mâsse et al. [19]c: Satisfaction with Collaboration
  Construct: Satisfaction with collaboration within a center
  Number of items: 8; Response options: 5-point Likert scale ranging from "inadequate" to "excellent"
  Reliability: Cronbach's alpha = 0.91
  Validity: Construct validity: confirmatory factor analysis. Convergent validity: satisfaction with collaborations within a center correlated with center outcomes related to methods, science and models, and improved interventions

Mâsse et al. [19]c: Impact of Collaboration
  Construct: Impact of collaboration within a center
  Number of items: 5; Response options: 3 items (meeting, products, overall productivity) use a 5-point Likert scale ranging from "inadequate" to "excellent"; the remaining two items (research productivity, quality research) use a 5-point Likert scale ranging from "strongly disagree" to "not sure" to "strongly agree"
  Reliability: Cronbach's alpha = 0.87
  Validity: Construct validity: confirmatory factor analysis. Convergent validity: impact of collaborations within a center correlated with center outcomes related to methods, science and models, and improved interventions


Mâsse et al. [19]c: Trust and Respect
  Construct: Trust and respect with collaborations
  Number of items: 4; Response options: 5-point Likert scale ranging from "strongly disagree" to "not sure" to "strongly agree"
  Reliability: Cronbach's alpha = 0.75
  Validity: Construct validity: confirmatory factor analysis. Convergent validity: trust and respect with collaborations within a center correlated with center outcomes related to methods, science and models, and improved interventions

Mazumdar et al. [40]: No formal name
  Construct: Team scientists' activities related to grant design, grant implementation, grant analysis, manuscript reporting, teaching, and service
  Sample/population: Proposed for academic faculty
  Number of items: 6; Response options: 3 possible ratings for each item: major, moderate, or minor; supported by qualitative comment by reviewer
  Reliability: NR

Misra et al. [28]: Transdisciplinary Orientation Scale
  Construct: Values, attitudes, beliefs, conceptual skills, knowledge, and behavioral repertoires that predispose an individual to collaborating effectively in cross-disciplinary scientific teams
  Sample/population: 150 researchers and academics from the liberal arts, social sciences, natural sciences, and engineering
  Number of items: 12; Response options: 5-point Likert scale ranging from "strongly agree" to "strongly disagree"; middle three response options not anchored
  Reliability: Overall Cronbach's alpha = 0.93 (values, attitudes, beliefs 6-item subscale alpha = 0.87; conceptual skills and behaviors 6-item subscale alpha = 0.88); second sample Cronbach's alpha = 0.92
  Validity: Construct validity: confirmatory factor analysis. Convergent validity: critical ratio of regression weights statistically significant. Discriminant validity: covariance between two factors and correlation high but suggests discriminant validity. In multiple regression analyses, higher transdisciplinary orientation associated with production of more interdisciplinary scientific papers, more experience in participating in cross-disciplinary team science, and independent ratings of the potential societal impact of the research reported in the scholar's article

Oetzel et al. [23]d: Bridging Social Capital
  Construct: Academic and community partners have the skills and cultural knowledge to interact effectively. Domain: structural/individual dynamics
  Sample/population: 138 PIs/PDs and 312 academic or community partners from 294 CBPR projects with US federal funding in 2009
  Number of items: 3; Response options: 5-point Likert scale: 1 = not at all; 2 = very little; 3 = somewhat; 4 = mostly; 5 = to a great extent
  Reliability: Cronbach's alpha = 0.69
  Validity: Construct validity: confirmatory factor analysis. Convergent validity: positive and moderate correlations with other structural/individual dynamics scales, and correlations with multiple outcomes


Oetzel et al. [23]d: Alignment with CBPR Principles: Partner Focus
  Construct: Develops individual partner capacity and equitable partnerships in all phases of the research. Domain: structural/individual dynamics
  Number of items: 4
  Reliability: Cronbach's alpha = 0.82

Oetzel et al. [23]d: Alignment with CBPR Principles: Community Focus
  Construct: Builds on resources and strengths of community for the well-being of community. Domain: structural/individual dynamics
  Number of items: 4
  Reliability: Cronbach's alpha = 0.85

Oetzel et al. [23]d: Partner Values
  Construct: Shared understanding of project mission, priorities, strategies. Domain: structural/individual dynamics
  Number of items: 4; Response options: 5-point Likert scale: 1 = strongly disagree; 2 = disagree; 3 = neither agree nor disagree; 4 = agree; 5 = strongly agree
  Reliability: Cronbach's alpha = 0.89

Oetzel et al. [23]d: Research Tasks and Communication: Background Research
  Construct: Community partners' level of involvement in the background research. Domain: relational dynamics
  Number of items: 5; Response options: 1 = community partners DID NOT/DO NOT participate in this activity; 2 = community partners were/are CONSULTED on this activity; 3 = community partners were/are ACTIVELY ENGAGED in this activity; 4 = not at this stage of research; 5 = does not apply
  Reliability: Cronbach's alpha = 0.81
  Validity: Construct validity: confirmatory factor analysis. Convergent validity: positive and moderate correlations with other relational dynamics scales, and correlations with multiple outcomes

Oetzel et al. [23]d: Research Tasks and Communication: Data Collection
  Construct: Community partners' level of involvement in the data collection. Domain: relational dynamics
  Number of items: 4
  Reliability: Cronbach's alpha = 0.69

Oetzel et al. [23]d: Research Tasks and Communication: Analysis and Dissemination
  Construct: Community partners' level of involvement in data analysis and dissemination of findings. Domain: relational dynamics
  Number of items: 3
  Reliability: Cronbach's alpha = 0.82

Oetzel et al. [23]d: Dialogue and Mutual Learning: Participation
  Construct: Degree to which all partners participate in the process. Domain: relational dynamics
  Number of items: 3; Response options: 5-point Likert scale: 1 = strongly disagree; 2 = disagree; 3 = neither agree nor disagree; 4 = agree; 5 = strongly agree
  Reliability: Cronbach's alpha = 0.78

Oetzel et al. [23]d: Dialogue and Mutual Learning: Cooperation
  Construct: Degree to which partners cooperate to resolve disagreements. Domain: relational dynamics
  Number of items: 3
  Reliability: Cronbach's alpha = 0.83

Oetzel et al. [23]d: Dialogue and Mutual Learning: Respect
  Construct: Degree to which partners convey respect to each other. Domain: relational dynamics
  Number of items: 3
  Reliability: Cronbach's alpha = 0.83
  Validity: Construct validity: confirmatory factor analysis. Convergent and discriminant validity: mixed results


Oetzel et al. [23]d: Trust
  Construct: Degree of current trust within partnership. Domain: relational dynamics
  Number of items: 4
  Reliability: Cronbach's alpha = 0.86
  Validity: Construct validity: confirmatory factor analysis. Convergent validity: positive and moderate correlations with other relational dynamics scales, and correlations with multiple outcomes

Oetzel et al. [23]d: Influence and Power Dynamics
  Construct: Degree of voice and influence in the decision-making. Domain: relational dynamics
  Number of items: 3
  Reliability: Cronbach's alpha = 0.58

Oetzel et al. [23]d: Participatory Decision-Making
  Construct: Degree to which decisions are made in a participatory manner. Domain: relational dynamics
  Number of items: 4; Response options: 5-point Likert scale: 1 = never; 2 = rarely; 3 = sometimes; 4 = often; 5 = always
  Reliability: Cronbach's alpha = 0.83

Oetzel et al. [23]d: Leadership
  Construct: Overall effectiveness of project's leadership. Domain: relational dynamics
  Number of items: 10; Response options: 5-point Likert scale: 1 = very ineffective; 2 = ineffective; 3 = somewhat effective; 4 = effective; 5 = very effective
  Reliability: Cronbach's alpha = 0.94

Oetzel et al. [23]d: Resource Management
  Construct: Effective use of financial and in-kind resources. Domain: relational dynamics
  Number of items: 3; Response options: 5-point Likert scale: 1 = makes poor use; 2 = makes fair use; 3 = makes average use; 4 = makes good use; 5 = makes excellent use
  Reliability: Cronbach's alpha = 0.86

Okamoto et al. [41]: Conflict
  Construct: Task conflict
  Sample/population: 167 center directors, research core PIs, individual project PIs, and key research personnel from 10 US National Institutes of Health Centers for Population Health and Health Disparities
  Number of items: 6; Response options: 5-point Likert scale from "not at all" to "to a very large extent"
  Reliability: Cronbach's alpha = 0.82
  Validity: Discriminant validity: task conflict not associated with any of the three network measures


Wooten et al. [27]: Team Evaluation Model Matrix
  Construct: Functioning of multidisciplinary translational teams within a two-by-two matrix based on assessment of two dimensions: team maturation/development, and research and scientific progress
  Sample/population: 11 multidisciplinary translational research teams within one US National Institutes of Health, National Center for Advancing Translational Sciences, Clinical and Translational Science Award center
  Number of items: 2; Response options: Expert panel members rated each team on each of the two dimensions (maturation/development and research/scientific) on a scale with 0 = not present; 1 = low; 2 = medium; 3 = high. On each dimension a team could have a final score of 0–12 (0–3 scores for each of four criteria under each dimension). After initial scoring by individual panel members, the full expert panel discussed to reach final consensus on each team's scores on each of the two dimensions. Initial ratings were based on expert panel members' review of team logic model, measurement plan, and all assessment data (including survey data)
  Reliability: NR; Validity: NR

NR, not reported; CRN, Cancer Research Network; NCI, US National Cancer Institute; TREC, Transdisciplinary Research on Energetics and Cancer; ICC, intraclass correlation coefficient; NSF, US National Science Foundation; DOE, US Department of Energy; MATRICx, Motivation Assessment for Team Readiness, Integration and Collaboration; NA, not applicable; TTURC, Transdisciplinary Tobacco Use Research Center; PI, principal investigator; PD, project director; CBPR, community-based participatory research.
aDetails obtained by cross-referencing article (TREC Baseline survey) from https://www.teamsciencetoolkit.cancer.gov/Public/TSResourceMeasure.aspx?tid=2&rid=36 [42].
bDetails obtained by cross-referencing article (NCI TREC Written Products Protocol 2006-09-27) from https://www.teamsciencetoolkit.cancer.gov/Public/TSResourceMeasure.aspx?tid=2&rid=646 [43].
cDetails obtained by cross-referencing article (TTURC Researcher Survey 2002) from https://cctst.uc.edu/sites/default/files/cis/survey-TTURC_research.pdf [44].
dOriginal instrument shown at http://cpr.unm.edu/research-projects/cbpr-project/index.html (scroll to 2. Quantitative Measures, "Key Informant" and "Community Engagement" survey instruments). Developmental work on measures from Oetzel et al. (2015) continues in an NIH NINR R01 (Wallerstein [PI] 2015-2020 "Engage for Equity" study; see http://cpr.unm.edu/research-projects/cbpr-project/cbpr-e2.html).


Table 2. Measures of research collaboration outcomes

Counts or numerical representations of products

Ameredes et al. [45]: Number of extramural grants submitted and received
  Sample/population: 110 trainees involved in multidisciplinary translational teams at one NIH CTSA institution
  Measurement detail: Not specified how data for counts were obtained
  Scale items and response options: NA; Reliability: NR; Validity: NR

Ameredes et al. [45]: Number of publications over 3-year period
  Measurement detail: Not specified how data for counts were obtained

Cummings et al. [13]: Number of publications in final NSF reports over 4- to 9-year period
  Sample/population: 549 research groups funded by NSF from 2000 to 2004
  Measurement detail: Count of publications for each research group listed in final report to NSF or, if no final report, last annual report, between 2000 and 2009. Included archival conference proceedings, journal articles, book chapters, and public reports on the project. Each group's publications counted only once, regardless of number of co-authors

Cummings et al. [13]: Number of cumulative publications for research group pre/post NSF funding listed in Google Scholar
  Measurement detail: Count of publications for each research group extracted through the Google Scholar search engine, divided into pre-NSF funding and post-NSF funding (through 2009). Each group's publications counted only once, regardless of number of co-authors
  Reliability: Reliability evaluated for 10% of sample using raters recruited via Amazon's Mechanical Turk. Five raters compared each extracted publication with the corresponding author's Web page or resume publication listings; a publication was counted if 4 of 5 raters agreed. Rater agreement was 94%

Cummings et al. [13]: Number of cumulative publications for research group pre/post NSF funding listed in Thomson Reuters (formerly ISI) Web of Science and Social Science database
  Measurement detail: Count of publications for each research group extracted through the Thomson Reuters (formerly ISI) Web of Science and Social Science database, divided into pre-NSF funding and post-NSF funding (through 2009). Each group's publications counted only once, regardless of number of co-authors

Cummings et al. [13]: Number of cumulative citations for research group publications pre/post NSF funding listed in Thomson Reuters (formerly ISI) Web of Science and Social Science database
  Measurement detail: Count of citations of each unique publication for each research group extracted through the Thomson Reuters (formerly ISI) Web of Science and Social Science database, divided into pre-NSF funding and post-NSF funding (through 2009). Each group's publications counted only once, regardless of number of co-authors


Hughes et al. [33]: Number of co-authored publications over 4-year period
  Sample/population: Active investigators at one NIH CTSA site
  Measurement detail: Count of co-authored publications between 2006 and 2009 involving two or more investigators. Used Ruby scripts to automatically harvest publication information from the US National Center for Biotechnology Information's PubMed database
  Reliability: NR

Hughes et al. [33]: Number of co-authored grant proposals over 4-year period
  Measurement detail: Count of co-authored grant proposals submitted between 2006 and 2009 involving two or more investigators (institutional data and additional data from NIH RePORT)

Lee and Bozeman [38]: Number of peer-reviewed journal papers (as a measure of individual productivity)
  Sample/population: 443 science faculty affiliated with NSF or DOE research centers at US universities
  Measurement detail: Count of peer-reviewed journal papers from 2001 to 2003 obtained from SCI-Expanded through the ISI Web of Science. Authors identified by matching name, department, and institution found on respondent's CV

Lee and Bozeman [38]: Fractional count of co-authored peer-reviewed journal papers (divided by number of co-authors) (as a measure of individual productivity)
  Measurement detail: Count of co-authored peer-reviewed journal papers, divided by number of co-authors, from 2001 to 2003 obtained from SCI-Expanded through the ISI Web of Science. Authors identified by matching name, department, and institution found on respondent's CV

Lee [46]: Number of invention disclosures
  Sample/population: 427 university faculty (scientists and engineers) from research-intensive universities on the NSF list of top 100 research universities
  Measurement detail: Self-reported number on one-time anonymous survey

Lee [46]: Number of patents obtained
  Measurement detail: Self-reported number on one-time anonymous survey

Lee [46]: Number of patents pending
  Measurement detail: Self-reported number on one-time anonymous survey


Lööf andBroström[47]

Income from new orimproved productsintroduced during 1998-2000 as a proportion ofsales income in year2000

2071 manufacturing andbusiness firms inSweden that havecollaborations withuniversities – data fromCommunity InnovationSurvey III in Sweden1998-2000

Survey data reported by firms;income from new or improvedproducts introduced during1998-2000 as a proportion ofsales income in year 2000

Number of patentapplications by industrypartners in 2000

Survey data reported by firms;count of patent applicationsby industry partners in 2000

Luke et al. [48]
Number of grant submissions over 4-year period
1272 research members of one institutional NIH CTSA center
Counts of new extramural submissions over 4-year period as maintained in university database, including federal, state, local, and foundation grants, contracts, programs, and sub-agreements, excluding renewals, resubmissions, etc.

Number of publications over 4-year period
Counts of publications over 5-year period based on bibliometric data obtained from Elsevier Scopus

Mâsse et al. [19]
Number of submitted and published articles and abstracts
216 research faculty, staff, and trainees from NCI TTURC
Counts of submitted and published articles and abstracts (to date) reported in written survey by participants (research faculty, staff, and trainees) in Year 3 of center

Petersen [49]
Normalized number of publications per year
473 pairs of research collaborators who published in Thomson Reuters Web of Knowledge publications (spanning 15,000 career years, 94,000 publications, and 166,000 collaborators)
Publication counts aggregated over relevant time periods, normalized by the baseline average calculated over the period of analysis

Normalized number of citations per year
Citation count in a given census year converted to a normalized z score (to correct for older publications that have more time to accrue citations than newer publications)

Philbin [35]
Number of publications and conference proceedings (as indicator of technology knowledge sharing and improvement)
None – measure proposed based on literature review and interviews with 32 university and industry representatives involved in research collaborations
Counts of publications in scientific journals and peer-reviewed conference proceedings (no suggestion regarding what data source would be used to ascertain counts)
NA NA

Quality of research publication as measured by a citation index (as indicator of technology knowledge sharing and improvement)
Citation index value for each publication (exact citation index not specified)


Number of students associated with collaboration (as indicator of technology knowledge sharing and improvement)
Count of number of students, including postgraduate masters and PhD levels, associated with the collaboration

Financial value of projects according to sponsor and sector (as indicator of project and business knowledge sharing and improvement)
US dollar calculation of financial value of projects according to sponsor and sector, including measures for growth and decline and market share (no calculation details provided)

Third-party recognition of collaboration results (e.g., awards) (as indicator of technology sustainability of collaboration)
Level of third-party recognition of collaboration results (e.g., number of awards) (no specifics regarding measurement other than count of awards)

Number of university students recruited as new staff into the company (as indicator of social sustainability of collaboration)
Number of students from the university recruited as new staff into the company (no specifics provided)

Completion of project milestones or deliverables (as indicator of projects and business knowledge sharing and improvement)
Description of completion of project milestones or deliverables that were achieved according to time, cost, and quality requirements (no specifics provided)

Number of staff exchanges and student placements (as indicator of social knowledge sharing and improvement)
Counts of staff exchanges and student placements (no specifics provided)

Attendance at key events (as indicator of social knowledge sharing and improvement)
Percentage attendance at key events, such as customer and milestone reviews and invited lectures (no specifics as to what the denominator would be)


Value of follow-on work and spin-off projects that have arisen as a consequence of initial funding (as indicator of project and business sustainability of collaboration)
Value of follow-on work and "spin-off" projects that have arisen as a consequence of initial funding (no specifics of how value would be quantified)

Value of intellectual property including patents and license agreements arising from the collaboration (as indicator of project and business sustainability of collaboration)
Value of intellectual property including patents and license agreements arising from the collaboration (no specifics regarding how value is quantified)

Long-term return on investment accrued from research investment (as indicator of project and business sustainability of collaboration)
Level of long-term return on investment accrued from research investment (no specifics provided)

Efficiency of contract management (as indicator of projects and business knowledge sharing and improvement)
Measure of the efficiency of contract management (e.g., submission of invoices) (no specifics provided)

Extent of adoption of research results in new products and services developed by the company (as indicator of technology sustainability of collaboration)
Extent of adoption of research results in new products and services by the company (no specifics provided)

Stvilia et al. [50]
Number of publications
89 scientific teams conducting experiments at the NHMFL between 2005 and 2008
Counts of publications from a list of publications between 2005 and 2009 downloaded from the NHMFL website
NR NR

Trochim et al. [17]
Number of citations: total, self-adjusted, and expected
216 research faculty, staff, and trainees from NCI TTURC
Bibliometric analysis of publications resulting from TTURC research and citing TTURC grant. Analysis produces total, self-adjusted, and expected citation counts

Wang and Hicks [51]
Average number of citations for new or repeated co-author teams
43,996 publications from 1310 US scientists funded by NSF
Average number of forward citations (over 5-year period post publication) per paper for new and repeated (within last 3 years) co-author team papers, based on lifetime publication data from Thomson Reuters Web of Science


Wuchty et al. [8]
Number of citations for each paper/patent
19.9 million papers in the ISI Web of Science database and 2.1 million patents (all US patents registered since 1975)
Count of all research articles in ISI Web of Science database published since 1944; count of all US registered patents since 1975

Quality indicators of counted products

Lee et al. [52]
Novelty of research
1493 research-active faculty in science, engineering, and social sciences with publications included in Thomson Reuters Web of Science, with at least two authors from the same institution on the paper
Formula cited in Lee et al. (2014) paper uses two steps: (1) calculate the commonness of co-cited journal pairs for the whole Web of Science database; (2) calculate the novelty of papers based on their references for the sampled papers (taking only the 10th percentile)
NA NA NR NR

Impact of research based on citation percentiles
High impact defined as being in the top 1% of most cited papers in that Web of Science field in that year

Trochim et al. [17]
Journal impact factor for each publication
Research faculty, staff, and trainees at NCI TTURC
Bibliometric analysis of journal impact factor for each publication resulting from TTURC research and citing TTURC grant; defined as the average number of citations to all articles published in a journal in the previous 2 years

Journal performance indicator for each publication
Bibliometric analysis of journal performance indicator for each publication resulting from TTURC research and citing TTURC grant; defined as the average number of citations to date for all publications in a journal in a particular year

Field performance indicator for each publication
Bibliometric analysis of field performance indicator for each publication resulting from TTURC research and citing TTURC grant; defined as the journal performance indicator for all journals in a field


5-year journal IF for each publication
Bibliometric analysis of 5-year journal IF for each publication resulting from TTURC research and citing TTURC grant; defined as average number of citations to publications over a 5-year period

Wuchty et al. [8]
RTI with and without self-citations
19.9 million papers in the ISI Web of Science database and 2.1 million patents (all US patents registered since 1975)
RTI = mean number of citations received by team-authored work divided by the mean number of citations received by solo-authored work (>1 = teams produced more highly cited papers than solo authors; <1 = vice versa; = 1 = no difference between solo and team authors)

Self-reported perceptions of outcomes

Ameredes et al. [45]
Perceived competency (confidence) in NIH CTSA-recommended translational research competencies
32 early career scholars at one NIH CTSA institution
Each of 99 items (reflecting 15 competencies) rated in written survey by participants; final score used in analysis was an average of items for each competency sub-scale
99 items; 6-point scale ranging from 0 to 5; specific anchors not provided; higher scores indicated confidence
NR. Construct validity: principal components analysis

Greene et al. [37]
Perceived impact. Unclear because only sample items are available and scales were not constructed; six sample questions appear to measure perceived impact on health plan, research organization, and individual
Investigators and project staff from the HMO Cancer Research Network over a 5-year period
Measured using structured questions with Likert-type responses included in annual survey sent to all consortium sites/members
Six sample questions only
4-point Likert scale ranging from "agree" to "disagree" (with a "can't evaluate" option)
NR

Hager et al. [53]
Perceived research self-efficacy (skill)
Six interprofessional faculty fellows (dentists, pharmacists, physicians)
Research self-efficacy scale developed by Bieschke, Bishop and Garcia [54]
Not specified; 100-point response scale; no anchors provided

Hall et al. [12]
Investigators' perceptions of center as a whole, as well as how they feel as a member of center in first year of center
56 investigators and staff from four NCI TREC centers
Semantic-Differential/Impression Scale: ratings on a 7-point continuum on word/phrase pairs such as conflicted – harmonious; not supportive – supportive; scientifically fragmented – scientifically integrated
Not specified; 7-point continuum; no anchors provided
Cronbach's alpha = 0.98

Completing Deliverables Scale: investigators' expectations for their projects' meeting projected year 1 deliverables
Not specified how final score computed
One item for each project
5-point Likert scale ranging from "highly unlikely" to "highly likely"; each project rated separately
NA. Convergent validity: inverse correlation between duration of involvement in transdisciplinary projects at their center and researchers' confidence in meeting year 1 deliverables


Hall et al. [12]a
Collaborative Productivity Scale: perception of collaborative productivity within center, including productivity of scientific meetings and center's overall productivity
Rate the collaboration within your center: productivity of collaborative meetings, overall productivity of center (rated on 5-point scale from "very poor" to "excellent"); in general, collaboration has improved your research productivity (5-point scale from "strongly disagree" to "strongly agree"). Unclear whether three items were summed, summed and averaged, or if some other calculation was used to determine final scale value
Unclear; appears to be three items
5-point Likert scale ranging from "very poor" to "excellent". Also asked to respond to a statement about collaboration and research productivity, rating on a 5-point scale from "strongly disagree" to "strongly agree" with central "neither" response option
Cronbach's alpha = 0.95. Convergent validity: the better the perceived collaborative productivity, the better the collaboration satisfaction, more confidence in completion of deliverables, more perceived institutional resources for collaboration, better impressions, better interpersonal collaborations

Hall et al. [12]b
Cross-Disciplinary Collaboration Activities Scale: perceived frequency of engagement in collaborative activities outside of one's primary field
Please assess the frequency with which you typically engage in each of the activities listed below (e.g., read journals or publications outside of your primary field)
9 items; 7-point Likert scale ranging from "never" to "weekly"
Cronbach's alpha = 0.81. Convergent validity: higher frequency of cross-disciplinary collaborative activities correlated with stronger multidisciplinary and interdisciplinary/transdisciplinary research orientation

Hanel and St-Pierre [55]
Originality of Innovation: firm representatives' perceptions/ratings of most important innovation in terms of originality
5944 manufacturing provincial enterprises included in the Statistics Canada Survey of Innovation 1999 that reported research and development collaborative arrangements with universities to develop new or significantly improved products or manufacturing processes during the previous 3 years (1997–1999)
Firms asked if most important innovation was a "world first," a "Canadian first," or a "firm first"
NA NA NR NR


Mâsse et al. [19]a
Methods Index: perceptions of new methods created (in general), specifically development or refinement of methods for gathering data
216 research faculty, staff, and trainees from NCI TTURC
Average of seven items – each item rated in written survey by participants (research faculty, staff, and trainees)
7 items; 4-point Likert scale ranging from "no progress" to "excellent progress"; also option of "does not apply"
Convergent validity: correlations with satisfaction with collaboration, impact of collaboration, trust and respect, and transdisciplinary integration

Science and Models Index: perceptions of new science and models of tobacco use, including understanding multiple determinants of the stages of nicotine addiction
Average of 17 items; each item rated in written survey by participants (research faculty, staff, and trainees)
17 items

Improved Interventions Index: perceptions of improved interventions developed (in general – most items not specific to tobacco use); specifically progress in pharmacologic interventions
Average of 12 items; each item rated in written survey by participants (research faculty, staff, and trainees)
12 items

Oetzel et al. [23]c
Partnership Synergy: partner's ability to develop goals, recognize challenges, respond to needs, and work together. Domain: Intervention/Research
138 PIs/PDs and 312 academic or community partners from 294 CBPR projects with US federal funding in 2009
Each item rated in written survey by participants. Final score used in analysis was an average of five items
5 items; 5-point Likert scale: 1 = not at all; 2 = very little; 3 = somewhat; 4 = mostly; 5 = to a great extent
Cronbach's alpha = 0.90. Construct validity: confirmatory factor analysis. Convergent validity: correlations with structural/individual dynamics scales, relational dynamics scales, and other outcome variables

Systems and Capacity Changes: Partner Capacity Building. Develops the skills to benefit individual members. Domain: Outcomes
Each item rated in written survey by participants. Final score used in analysis was an average of three items
4 items; Cronbach's alpha = 0.80

Systems and Capacity Changes: Agency Capacity Building. Develops the reputation and the skills of agencies involved in the partnership. Domain: Outcomes
Each item rated in written survey by participants. Final score used in analysis was an average of three items
4 items; Cronbach's alpha = 0.87

Systems and Capacity Changes: Changes in Power Relations. Degree to which power and capacity have been developed in the community members. Domain: Outcomes
Each item rated in written survey by participants. Final score used in analysis was an average of five items
5 items; 5-point Likert scale: 1 = strongly disagree; 2 = disagree; 3 = neither agree nor disagree; 4 = agree; 5 = strongly agree
Cronbach's alpha = 0.81


Systems and Capacity Changes: Sustainability of Partnership/Project. Likelihood of the project and partnership continuing beyond the funding period. Domain: Outcomes
Each item rated in written survey by participants. Final score used in analysis was an average of three items
3 items; Cronbach's alpha = 0.71

Health Outcomes: Community Transformation. Policy changes and community improvement. Domain: Outcomes
Each item rated in written survey by participants. Final score used in analysis was an average of four items
7 items; 5-point Likert scale: 1 = not at all; 2 = very little; 3 = somewhat; 4 = mostly; 5 = to a great extent
Cronbach's alpha = 0.79

Health Outcomes: Community Health Improvement. Improvement of health for the community as a result of the project
Single item rated in written survey by participants: Overall, how much did your research project improve the health of the community?
1 item; 5-point Likert scale: 1 = not at all; 2 = a little; 3 = somewhat; 4 = quite a bit; 5 = a lot
NA

Philbin [35]
Satisfaction: perception of collaborators' satisfaction (as indicator of social knowledge sharing and improvement)
None – measure proposed based on literature review and interviews with 32 university and industry representatives involved in research collaborations
Satisfaction of students, academic staff, and industrial contacts involved in the collaboration (no specifics provided)
NA NA NA NA

Value of Technology Improvement Delivered: company's perception of value of technology improvements delivered (as indicator of technology sustainability of collaboration)
Survey of company representatives to measure the value of technology improvements delivered associated with the research collaboration (no specifics provided)

University and company staffs' perceptions of incorporation of knowledge developed into continuing professional development (as indicator of technology sustainability of collaboration)
Measurement of how knowledge developed is being incorporated into continuing professional development for both university and company staff involved (no specifics provided)


Perceptions of relevance of research to company's business objectives (as indicator of projects and business knowledge sharing and improvement)
Measure of relevance of research to company's business objectives (no specifics provided)

University and company's perceptions of the percentage alignment of research to organizational strategies (as indicator of project and business sustainability of collaboration)
Percentage alignment of research to organizational strategy, for both university and company perspectives (no specifics provided)

Perceptions of the extent of personal relationships between company and university resulting from the collaboration (as indicator of social sustainability of collaboration)
Numerical measure for the extent of personal relationships between company and university resulting from the collaboration (no specifics provided)

Perceptions of the level of interactions between senior levels of the collaborators, especially at company board level and senior academic faculty level (as indicator of social sustainability of collaboration)
Level of interactions between senior levels of the collaborators, especially at company board level and senior academic faculty level (no specifics provided)

Perceptions of the level of influence by university faculty on company's corporate strategy (as indicator of social sustainability of collaboration)
Level of influence by university faculty on company's corporate strategy (no specifics provided)


Trochim et al. [17]d
Methods Progress Scale: perceptions of progress on development of methods in last 12 months (intermediate-term outcome)
216 research faculty, staff, and trainees from NCI TTURC
Self-report survey items administered annually
7 items; 4-point Likert scale: 1 = no progress; 2 = some progress; 3 = good progress; 4 = excellent progress
NR NR

Science and Models Scale: perceptions of progress on development of science and models in last 12 months (intermediate-term outcome)
17 items

Progress on Development of Interventions Index: perceptions of progress on development of new and improved interventions in last 12 months (intermediate-term outcome)
12 items

Policy Impact Index: perceptions of progress on policy outcomes in last 12 months (long-term outcome)
4 items; yes/no

Translation to Practice Index: perceptions of progress on translation into practice outcomes in last 12 months (long-term outcome)
9 items

Health Outcomes Impact Scale: perceptions of optimism regarding positive health outcomes from center research within next 5 years (long-term outcome)
6 items; 5-point Likert scale: 1 = not at all optimistic; 2 = somewhat optimistic; 3 = moderately optimistic; 4 = very optimistic; 5 = extremely optimistic


Peer review perceptions of outcomes

Hall et al. [12]e
Written Products Protocol: cross-disciplinary collaboration and productivity
21 center developmental project proposals from four NCI TREC centers
Unclear how final score calculated
37-item protocol used to evaluate proposals. Dimensions of cross-disciplinarity assessed: disciplines represented, levels of analysis, type of cross-disciplinary integration, scope of transdisciplinary integration, and an overall assessment of general scope of each proposal
Items describing proposal with various response formats; one item rates whether proposal is "unidisciplinary", "multidisciplinary", "interdisciplinary", or "transdisciplinary"; two items on transdisciplinary integration and scope of proposal use a 10-point Likert scale ranging from "none" to "substantial"
Inter-rater reliability of r = 0.24–0.69. Highest inter-rater reliability was for experimental types (0.69), number of analytic levels (0.59), disciplines (0.59), and scope (0.52). Lowest inter-rater reliability was for type of cross-disciplinary integration (0.24)
Convergent validity: the higher the number of disciplines in a proposal, the broader its integrative score and the larger its number of analytic levels; the higher the type of disciplinarity, the broader its overall scope

Philbin [35]
Knowledge Improvement Index: level of knowledge improvement (as indicator of technology knowledge sharing and improvement)
None – measure proposed based on literature review and interviews with 32 university and industry representatives involved in research collaborations
Independent numerical rating of the level of knowledge improvement (unit of analysis not specified; no specifics provided)
NA NA NA NA

Trochim et al. [17]d
Peer review of progress on development of methods in last 12 months (intermediate-term outcome)
216 research faculty, staff, and trainees from NCI TTURC
Peer review of Subproject Annual Progress Report Summaries from seven centers for progress on outcome; two randomly assigned reviewers for each
1 item; 5-point scale; anchors not specified
>80% agreement by both raters with no more than 1-point difference. Also, both Kendall's tau-b and Spearman's rho demonstrated positive and significant agreement
NR

Peer review of progress on development of science and models in last 12 months (intermediate-term outcome)
Peer review of Subproject Annual Progress Report Summaries from seven centers for progress on outcome; two randomly assigned reviewers for each
1 item

Peer review of progress on development of interventions in last 12 months (intermediate-term outcome)
Peer review of Subproject Annual Progress Report Summaries from seven centers for progress on outcome; two randomly assigned reviewers for each
1 item

Qualitative descriptions of outcomes

Armstrong and Jackson-Smith [56]
Improved interdisciplinary understanding for: individuals, research team, and university/systematic (Integrative Capacity)
24 team members: 11 academic researchers; 6 non-academic team members (from related organizations and companies); 7 graduate students
30- to 60-minute semi-structured interviews with qualitative analysis of interview transcripts
NA

Team capacity to work in integrated manner, for: individuals, research team, and university/systematic (Integrative Capacity)
30- to 60-minute semi-structured interviews with qualitative analysis of interview transcripts


Development of research plan for: individuals, research team, and university/systematic (Integrative Capacity)
30- to 60-minute semi-structured interviews with qualitative analysis of interview transcripts

Hager et al. [53]
Perceptions of impact of faculty development collaborative research fellowship
Six interprofessional faculty fellows (dentists, pharmacists, physicians)
Qualitative observations made and recorded after each seminar or learning session and analyzed for themes at end of fellowship

Stokols et al. [25]
Transdisciplinary Conceptual Integration; e.g., a transdisciplinary economic model (to assess the costs of smoking), new research proposal development by transdisciplinary teams, new directions for transdisciplinary collaborations
NIH TTURC investigators at three centers
Descriptions provided by TTURC investigators through open-ended "periodic interviews" from 1999 to 2004

Vogel et al. [57]
Perceived impacts of transdisciplinary team science
31 investigators, staff, and trainees from the NCI TREC centers
15-question in-depth semi-structured interviews (interview guide provided)

Health indicators/outcomes

Aguilar-Gaxiola et al. [58]
60 specific categories of community health indicators (e.g., life expectancy; preventable hospitalizations) within an organizing structure of nine determinants of health (e.g., general health status)
21 health indicator projects
Not provided in this article. NA NA

NA, not applicable; NR, not reported; NIH, National Institutes of Health; CTSA, Clinical and Translational Science Award; NSF, National Science Foundation; ISI, Institute for Scientific Information; DOE, US Department of Energy; SCI, Science Citation Index; CV, curriculum vitae; NCI, National Cancer Institute; IF, impact factor; TTURC, Transdisciplinary Tobacco Use Research Centers; NHMFL, National High Magnetic Field Laboratory; RTI, relative team impact; TREC, Transdisciplinary Research on Energetics and Cancer; PI, principal investigator; PD, project director; CBPR, community-based participatory research.
aDetails obtained by cross-referencing article (TTURC Researcher Survey 2002) from https://cctst.uc.edu/sites/default/files/cis/survey-TTURC_research.pdf [44].
bDetails obtained by cross-referencing article (TREC Baseline survey) from https://www.teamsciencetoolkit.cancer.gov/Public/TSResourceMeasure.aspx?tid=2&rid=36 [42].
cOriginal instrument available at http://cpr.unm.edu/research-projects/cbpr-project/index.html – scroll to 2 (Quantitative Measures – "Key Informant" and "Community Engagement" survey instruments). Developmental work on measures from Oetzel et al. (2015) continues in an NIH NINR R01 (Wallerstein [PI], 2015–2020 "Engage for Equity" study; see http://cpr.unm.edu/research-projects/cbpr-project/cbpr-e2.html).
dDetails obtained by cross-referencing article (TTURC Researcher Survey 2002) from https://cctst.uc.edu/sites/default/files/cis/survey-TTURC_research.pdf [44] and from Kane and Trochim [59].
eDetails obtained by cross-referencing article (NCI TREC Written Products Protocol 2006-09-27) from https://www.teamsciencetoolkit.cancer.gov/Public/TSResourceMeasure.aspx?tid=2&rid=646 [43].
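As a concrete illustration of whole versus fractional counting (the two productivity counts described for Lee and Bozeman [38] in the table above), the following minimal sketch contrasts the two schemes on an invented publication list; the paper identifiers and author names are hypothetical, not drawn from the reviewed studies.

```python
from collections import Counter

# Hypothetical publication list: (paper id, list of co-authors)
papers = [
    ("p1", ["Lee S", "Bozeman B"]),
    ("p2", ["Lee S", "Kim J", "Park H"]),
    ("p3", ["Bozeman B"]),
]

whole = Counter()       # whole counting: each paper adds 1 per author
fractional = Counter()  # fractional counting: each paper adds 1/n for n co-authors

for _, authors in papers:
    for author in authors:
        whole[author] += 1
        fractional[author] += 1 / len(authors)

print(dict(whole))       # Lee S: 2, Bozeman B: 2, Kim J: 1, Park H: 1
print(dict(fractional))  # Lee S: 1/2 + 1/3 ≈ 0.83; Bozeman B: 1/2 + 1 = 1.5
```

Fractional counting credits solo and small-team papers more heavily than whole counting, which is why reporting both, as Lee and Bozeman [38] did, gives a fuller picture of individual productivity.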


Outcome Measures

Similar to measures of collaboration quality, little agreement exists as to how best to measure outcomes of research collaborations. By far, the most common type of measurement is a simple count of products over a set period of time (e.g., publications, grants, and/or patents). Interestingly, the procedures used for counting or calculating these products are rarely reported and therefore are not replicable. In addition, published reports infrequently include any type of verification of counts, leaving the reliability of such counts or calculations in question.
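To see what a replicable counting procedure could look like, the sketch below states the data source, query, and time window explicitly, using NCBI's public E-utilities interface to PubMed. The investigator names and date window are hypothetical, and this is not the Ruby-based pipeline described for Hughes et al. [33] in Table 2.

```python
"""Minimal sketch of a documented, replicable co-publication count."""
import itertools
import time

import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def copub_count(author_a: str, author_b: str, start: int, end: int) -> int:
    """Number of PubMed records co-authored by both names in [start, end]."""
    term = (f'{author_a}[Author] AND {author_b}[Author] '
            f'AND ("{start}"[PDAT] : "{end}"[PDAT])')
    resp = requests.get(ESEARCH, params={"db": "pubmed", "term": term,
                                         "retmode": "json", "retmax": 0},
                        timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

# Hypothetical investigator roster; logging every pairwise query makes the
# count verifiable and re-runnable later.
investigators = ["Smith J", "Doe A", "Lee K"]
for a, b in itertools.combinations(investigators, 2):
    print(a, b, copub_count(a, b, 2006, 2009))
    time.sleep(0.4)  # stay under NCBI's ~3 requests/second courtesy limit
```

Name disambiguation (homonymous authors, affiliation changes) would still need to be documented separately, as Lee and Bozeman [38] did by matching names against CVs.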

The second most common type of measure is the use of self-reported scales to quantify researchers' perceptions of collaboration outcomes. These include measures of perceived productivity or progress, changes in relationships with partners, increased capacity, and sustainability. Few of these measures, with the exception of the psychometric work of Hall et al. [12] and Oetzel et al. [23], have documented reliability and validity. In general, despite a relatively large number of scales, most were not developed for the purpose of becoming standard indicators or measures, and most have had little psychometric study or replication.
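The most commonly reported reliability statistic for the scales in Table 2 is Cronbach's alpha, which is straightforward to compute and could accompany any published scale. A minimal sketch on an invented respondents-by-items ratings matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of scale totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 5 respondents x 4 items on a 5-point Likert scale
ratings = np.array([[4, 5, 4, 4],
                    [2, 2, 3, 2],
                    [5, 4, 5, 5],
                    [3, 3, 3, 4],
                    [4, 4, 4, 5]])
print(round(cronbach_alpha(ratings), 2))
```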

Efforts to measure the quality of counted products, such as consideration of citation percentiles, journal impact factors, or field performance indicators, offer important alternatives in the quantity versus quality debate and may actually be useful for evaluating the long-term scientific impact of collaborative outcomes. Likewise, peer-reviewed ratings of outcomes based on reviews of proposals or progress reports could provide more neutral and standardized measures of collaboration impact. Both of these categories of measures are used infrequently but could have significant influence if applied more widely in the evaluation of collaborative work. However, further work on a reliable rating scale for use in peer review is needed before it can provide comparable results across studies.
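For reference, several of the quality measures catalogued in Table 2 can be stated compactly. The notation below is ours, restating the table's verbal descriptions of Petersen [49], Wuchty et al. [8], and Trochim et al. [17]; the cited authors' exact formulations may differ in detail.

```latex
% Age-normalized citation score (after Petersen [49]): a paper's citation
% count c is compared with the mean and standard deviation of citations
% for papers of the same age cohort, removing older papers' head start.
z = \frac{c - \mu_{\text{cohort}}}{\sigma_{\text{cohort}}}

% Relative team impact (after Wuchty et al. [8]): mean citations of
% team-authored work over mean citations of solo-authored work.
\mathrm{RTI} = \frac{\bar{c}_{\text{team}}}{\bar{c}_{\text{solo}}}, \qquad
\mathrm{RTI} > 1 \text{ when team-authored papers are cited more on average}

% Two-year journal impact factor for year y (after Trochim et al. [17]),
% where C_y(t) is citations in year y to items published in year t and
% N_t is the number of citable items published in year t.
\mathrm{JIF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}
```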

Recommendations

Remarkably, the results of this review, which defines research collaborations to include different types of collaborative partnerships, are very similar to reviews of measures of community coalitions [60] and community-based participatory research [61] conducted 15 and 7 years ago, respectively. Both of those studies concluded that there are few reliable and valid measures. In the intervening years, some progress has been made, as noted [see Refs. 12, 19, 23 as examples]. Based on this observation and our findings in this study, we offer six recommendations to advance the field of team science: (1) We must pay careful attention and devote resources to the development and psychometric testing of measures of research collaboration quality and outcomes that can be replicated and broadly applied. Measures listed in this review with solid initial reliability and validity indicators provide reasonable starting points for continued development; however, measures of other constructs will also be necessary. (2) To establish validity for use in different populations and settings, designed measures should be tested across various research partner and stakeholder relationships (e.g., academia, industry, government, patient, community, and advocacy groups). (3) When evaluating outcomes, it is critical that we focus on both the quality and quantity of products and the use of rating scales for peer review. (4) The sensitivity and responsiveness of measures to interventions should be evaluated as an additional psychometric property. (5) Publications reporting on assessments of collaborations should include a clear description of the measures used; the reliability, validity, and sensitivity or responsiveness of the measures; and a statement on their generalizability. (6) Reports incorporating the use of narrowly applicable measures should include a justification for not using a more broadly applicable measure.
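Recommendation (3) implies that peer-review ratings should be accompanied by reported inter-rater agreement, as Trochim et al. [17] did for their progress ratings. A minimal sketch of those agreement statistics (percent agreement within one scale point, plus Spearman's rho and Kendall's tau-b) on invented reviewer scores:

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

# Hypothetical 5-point progress ratings from two randomly assigned reviewers
rater1 = np.array([4, 3, 5, 2, 4, 3, 4, 1, 5, 3])
rater2 = np.array([4, 2, 5, 3, 4, 3, 5, 1, 4, 3])

# Share of reports where the raters agree within one scale point
within_one = np.mean(np.abs(rater1 - rater2) <= 1)

rho, rho_p = spearmanr(rater1, rater2)
tau, tau_p = kendalltau(rater1, rater2)  # tau-b by default

print(f"agreement within 1 point: {within_one:.0%}")
print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f}); "
      f"Kendall tau-b = {tau:.2f} (p = {tau_p:.3f})")
```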

Conclusions

Although a few studies have conducted exemplary psychometric analyses of some measures of both collaboration quality and outcomes, most existing measures are not well defined; do not have well-documented reliability, validity, or sensitivity or responsiveness (quality measures); and have not been replicated. Construct validity, in particular, requires further exploration. Most of the reported measures were developed for a single project and were not tested across projects or types of teams. Published articles do not use consistent measures and often do not provide operational definitions of the measures that were used. As a result of all of these factors, it is difficult to compare the characteristics and impact of research collaborations across studies.

Team science and the study of research collaborations are becoming better and more rigorous fields of inquiry; however, to truly understand the reasons that some teams succeed and others fail, and to develop effective interventions to facilitate team effectiveness, accurate and precise measurements of the characteristics and the outcomes of collaborations are needed to further translational science and the concomitant improvements in public health.

Acknowledgements. This work was supported by the National Center for Advancing Translational Sciences, National Institutes of Health, Grant numbers: UL1 TR001449 (BBT), UL1 TR000430 (DM), UL1 TR002389 (DM), UL1 TR002377 (JBB), UL1 TR000439 (EAB), UL1 TR002489 (GD), UL1 TR001453 (NSH), UL1 TR002366 (KSK), UL1 TR001866 (RGK), UL1 TR003096 (KL), UL1 TR000128 (JS), and U24 TR002269 (University of Rochester Center for Leading Innovation and Collaboration Coordinating Center for the Clinical and Translational Science Awards Program).

Disclosures. The authors have no conflicts of interest to declare.

Disclaimer. The views expressed in this article are the responsibility of the authors and do not necessarily represent the position of the National Center for Advancing Translational Sciences, the National Institutes of Health, or the US Department of Health and Human Services.

References

1. Sung NS, Crowley WF, Genel M, et al. Central challenges facing the national clinical research enterprise. Journal of the American Medical Association 2003; 289(10): 1278–1287.

2. Westfall JM, Mold J, Fagnan L. Practice-based research – "blue highways" on the NIH roadmap. Journal of the American Medical Association 2007; 297(4): 403–406.

3. Woolf SH. The meaning of translational research and why it matters. Journal of the American Medical Association 2008; 299(2): 211–213.

4. National Center for Advancing Translational Sciences (NCATS). Strategic Goal 2: advance translational team science by fostering innovative partnerships and collaborations with a strategic array of stakeholders [Internet], 2017. https://ncats.nih.gov/strategicplan/goal2. Accessed April 25, 2019.

5. Disis ML, Slattery JT. The road we must take: multidisciplinary team science. Science Translational Medicine 2010; 2: 22cm9.

6. Fortunato S, Bergstrom CT, Börner K, et al. Science of science. Science 2018; 359: eaao0185.


7. Jones BF, Wuchty S, Uzzi B. Multi-university research teams: shifting impact, geography, and stratification in science. Science 2008; 322: 1259–1262.

8. Wuchty S, Jones BF, Uzzi B. The increasing dominance of teams in production of knowledge. Science 2007; 316: 1036–1039.

9. Uzzi B, Mukherjee S, Stringer M, et al. Atypical combinations and scientific impact. Science 2013; 342: 468–472.

10. Larivière V, Haustein S, Börner K. Long-distance interdisciplinarity leads to higher scientific impact. PLoS ONE 2015; 10(3): e0122565.

11. Basner JE, Theisz KI, Jensen US, et al. Measuring the evolution and output of cross-disciplinary collaborations within the NCI Physical Sciences-Oncology Centers Network. Research Evaluation 2013; 22: 285–297.

12. Hall KL, Stokols D, Moser RP, et al. The collaboration readiness of transdisciplinary research teams and centers. Findings from the National Cancer Institute's TREC year-one evaluation study. American Journal of Preventive Medicine 2008; 35(2S): S161–S172.

13. Cummings JN, Kiesler S, Zadeh RB, et al. Group heterogeneity increases the risks of large group size: a longitudinal study of productivity in research groups. Psychological Science 2013; 24(6): 880–890.

14. Hall KL, Stokols D, Stipelman BA, et al. Assessing the value of team science. A study comparing center- and investigator-initiated grants. American Journal of Preventive Medicine 2012; 42(2): 157–163.

15. Hall KL, Vogel AL, Stipelman BA, et al. A four-phase model of transdisciplinary team-based research: goals, team processes, and strategies. Translational Behavioral Medicine: Practice, Policy, Research 2012; 2: 415–430.

16. Salazar MR, Lant TK, Fiore SM, et al. Facilitating innovation in diverse science teams through integrative capacity. Small Group Research 2012; 43(5): 527–558.

17. Trochim WM, Marcus SE, Mâsse LC, et al. The evaluation of large research initiatives: a participatory integrative mixed-methods approach. American Journal of Evaluation 2008; 29(1): 8–28.

18. Luukkonen T, Tijssen RJW, Persson O, et al. The measurement of international scientific collaboration. Scientometrics 1993; 28(1): 15–36.

19. Mâsse LC, Moser RP, Stokols D, et al. Measuring collaboration and transdisciplinary integration in team science. American Journal of Preventive Medicine 2008; 35(2S): S151–S160.

20. Milojevic S. Principles of scientific research team formation and evolution. Proceedings of the National Academy of Sciences of the United States of America 2014; 111(11): 3984–3989.

21. Cooke NJ and Hilton ML, eds.; Committee on the Science of Team Science; Board on Behavioral, Cognitive, and Sensory Sciences; Division of Behavioral and Social Sciences and Education; National Research Council. Enhancing the Effectiveness of Team Science. Washington, DC: The National Academies Press; 2015.

22. Nichols LG. A topic model approach to measuring interdisciplinarity at the National Science Foundation. Scientometrics 2014; 100: 741–754.

23. Oetzel JG, Zhou C, Duran B, et al. Establishing the psychometric properties of constructs in a community-based participatory research conceptual model. American Journal of Health Promotion 2015; 29(5): e188–e202.

24. Salas E, Grossman R, Hughes AM, et al. Measuring team cohesion: observations from the science. Human Factors 2015; 57(3): 365–374.

25. Stokols D, Harvey R, Gress J, et al. In vivo studies of transdisciplinary scientific collaboration. Lessons learned and implications for active living research. American Journal of Preventive Medicine 2005; 28(2S2): 202–213.

26. Wageman R, Hackman JR, Lehman E. Team diagnostic survey: development of an instrument. The Journal of Applied Behavioral Science 2005; 41(4): 373–398.

27. Wooten KC, Rose RM, Ostir GV, et al. Assessing and evaluating multidisciplinary translational teams: a mixed methods approach. Evaluation and the Health Professions 2014; 37(1): 33–49.

28. Misra S, Stokols D, Cheng L. The transdisciplinary orientation scale: factor structure and relation to the integrative quality and scope of scientific publications. Journal of Translational Medicine and Epidemiology 2015; 3(2): 1042.

29. Souza MT, Silva MD, Carvalho RD. Integrative review: what is it? How to do it? Einstein 2010; 8(1): 102–106.

30. Whitehead D, LoBiondo-Wood G, Haber J. Nursing and Midwifery Research: Methods and Appraisal for Evidence Based Practice. 5th ed. Chatswood NSW, Australia: Elsevier, 2016.

31. Bian J, Xie M, Topaloglu U, Hudson T, et al. Social network analysis of biomedical research collaboration networks in a CTSA institution. Journal of Biomedical Informatics 2014; 52: 130–140.

32. Franco ZE, Ahmed SM, Maurana CA, et al. A social network analysis of 140 community-academic partnerships for health: examining the healthier Wisconsin partnership program. Clinical and Translational Science 2015; 8(4): 311–319.

33. Hughes ME, Peeler J, Hogenesch JB. Network dynamics to evaluate performance of an academic institution. Science Translational Medicine 2010; 2(53): 53ps49.

34. Huang C. Knowledge sharing and group cohesiveness on performance: an empirical study of technology R&D teams in Taiwan. Technovation 2009; 29: 786–797.

35. Philbin S. Measuring the performance of research collaborations. Measuring Business Excellence 2008; 12(3): 16–23.

36. Bietz MJ, Abrams S, Cooper DM, et al. Improving the odds through the Collaboration Success Wizard. Translational Behavioral Medicine: Practice, Policy, Research 2012; 2: 480–486.

37. Greene SM, Hart G, Wagner EH. Measuring and improving performance in multicenter research consortia. Journal of the National Cancer Institute Monographs 2005; 35: 26–32.

38. Lee S, Bozeman B. The impact of research collaboration on scientific productivity. Social Studies of Science 2005; 35(5): 673–702.

39. Mallinson T, Lotrecchiano GR, Schwartz LS, et al. Pilot analysis of the Motivation Assessment for Team Readiness, Integration, and Collaboration (MATRiCx) using Rasch analysis. Journal of Investigative Medicine 2016; 0: 1–8.

40. Mazumdar M, Messinger S, Finkelstein DM, et al. Evaluating academic scientists collaborating in team-based research: a proposed framework. Academic Medicine 2015; 90(10): 1302–1308.

41. Okamoto J, The Centers for Population Health and Health Disparities Evaluation Working Group. Scientific collaboration and team science: a social network analysis of the centers for population health and health disparities. Translational Behavioral Medicine: Practice, Policy, Research 2015; 5: 12–23.

42. National Cancer Institute, National Institutes of Health. Team Science Toolkit: TREC baseline survey [Internet], no date. https://www.teamsciencetoolkit.cancer.gov/Public/TSResourceMeasure.aspx?tid=2&rid=36. Accessed April 25, 2019.

43. National Cancer Institute, National Institutes of Health. Team Science Toolkit: Written products protocol – TREC I study [Internet], 2006. https://www.teamsciencetoolkit.cancer.gov/Public/TSResourceMeasure.aspx?tid=2&rid=646. Accessed April 25, 2019.

44. University of Cincinnati. Appendix D1: Researcher Survey Form [Internet], 2002. https://cctst.uc.edu/sites/default/files/cis/survey-TTURC_research.pdf. Accessed April 25, 2019.

45. Ameredes BT, Hellmich MR, Cestone CM, et al. The Multidisciplinary Translational Team (MTT) Model for training and development of translational research investigators. Clinical and Translational Science 2015; 8(5): 533–541.

46. Lee YS. The sustainability of university-industry research collaboration: an empirical assessment. Journal of Technology Transfer 2000; 25: 111–133.

47. Lööf H, Broström A. Does knowledge diffusion between university and industry increase innovativeness? Journal of Technology Transfer 2008; 33: 73–90.

48. Luke DA, Carothers BJ, Dhand A, et al. Breaking down silos: mapping growth of cross-disciplinary collaboration in a translational science initiative. Clinical and Translational Science 2015; 8(2): 143–149.

49. Petersen AM. Quantifying the impact of weak, strong, and super ties in scientific careers. Proceedings of the National Academy of Sciences of the United States of America 2015; 112(34): E4671–E4680.

50. Stvilia B, Hinnant C, Schindler K, et al. Team diversity and publication patterns in a scientific laboratory. Journal of American Society for Information Science and Technology 2011; 62(2): 270–283.


51. Wang J, Hicks D. Scientific teams: self-assembly, fluidness, and interdependence. Journal of Informetrics 2014; 9(1): 197–207.

52. Lee Y, Walsh JP, Wang J. Creativity in scientific teams: unpacking novelty and impact. Research Policy 2014; 44: 684–697.

53. Hager K, St Hill C, et al. Development of an interprofessional and interdisciplinary collaborative research practice for clinical faculty. Journal of Interprofessional Care 2016; 30(2): 265–267.

54. Bieschke K, Bishop R, Garcia V. The utility of the research self-efficacy scale. Journal of Career Assessment 1996; 4: 59–75.

55. Hanel P, St-Pierre M. Industry-university collaboration by Canadian manufacturing firms. Journal of Technology Transfer 2006; 31: 485–499.

56. Armstrong A, Jackson-Smith D. Forms and levels of integration: evaluation of an interdisciplinary team-building project. Journal of Research Practice 2013; 9(1): Article M1.

57. Vogel AL, Stipelman BA, Hall KL, et al. Pioneering the transdisciplinary team science approach: lessons learned from the National Cancer Institute grantees. Journal of Translational Medicine and Epidemiology 2014; 2(2): 1027.

58. Aguilar-Gaxiola S, Ahmed S, Franco Z, et al. Toward a unified taxonomy of health indicators: academic health centers and communities working together to improve population health. Academic Medicine 2014; 89(4): 564–572.

59. Kane M, Trochim WM. Evaluation of large initiatives of scientific research at the National Institutes of Health. In: Presentation at the American Evaluation Association Conference. Portland, Oregon, USA. November 4, 2006.

60. Granner ML, Sharpe PA. Evaluating community coalition characteristics and functioning: a summary of measurement tools. Health Education Research 2004; 19(5): 514–532.

61. Sandoval JA, Lucero J, Oetzel J, et al. Process and outcome constructs for evaluating community-based participatory research projects: a matrix of existing measures. Health Education Research 2012; 27(4): 680–690.
