Analysis of the Work Product of 2002 Participants
in the Knight Public Health Journalism Fellowship Program
and Boot Camp in Public Health at the CDC
By
Lee B. Becker & Tudor Vlad
James M. Cox Jr. Center for International Mass Communication Training and Research
Grady College of Journalism and Mass Communication
University of Georgia
Special Report to the John S. and James L. Knight Foundation
March 1, 2005
Contributors to this report include graduate research assistants Marcia Apperson, Federico De Gregorio,
Angela Hains, Benandre Parham, Amanda Swennes and Lauren Teffeau and undergraduate research
clerks Windi Blizzard, Jennifer Borja, Teah West, Katie Williams and Katherine Wooten.
Executive Summary
The Knight Public Health Journalism Fellowship and the Knight Public Health Boot Camp had
significant impact on the journalists who participated. Specifically, the participants
• Were more likely to write health stories after they participated in the workshop than before
and more likely to write health stories than a control group of health journalists working at
comparable media organizations;
• Were less likely to use either individual doctors or patients as the sources of the stories
they wrote and more likely to use health experts and government officials;
• Were more likely to write stories about health risks after the programs than before and
more likely to do so than were journalists in the control group;
• Were more likely, when writing about health risks after the program, to focus on the
implications for society, rather than on the individual implications of the risk;
• And were more likely to reference the Centers for Disease Control and Prevention in their
stories after the program than before and more likely than were members of a control
group of journalists.
At the same time, the evidence is that the Knight Public Health Journalism Fellowship and the
Knight Public Health Boot Camp had little impact on the technical characteristics of the stories written by
the journalists after the program. Specifically, the program seemed to have little or no impact on
• The number of sources used or the number of attributions in the stories;
• The use of statistical materials in stories;
• The use of research findings or methodologies;
• Or the use of medical terminology.
In general, it seems the program did more to stimulate the interest of the journalists, expand their
view of what is appropriate medical and health news, and increase their range of sources rather than
change the technical character of what they wrote. These findings are consistent with what the journalists
said themselves when interviewed and asked to report on the consequences of the Knight Public Health
Journalism Fellowship and the Knight Public Health Boot Camp.
Background
The Knight Public Health Journalism Fellowship began in 2000, and the Knight Public Health
Journalism Boot Camp started in 2002. In 2002, the Fellowship lasted four months and began with the
Boot Camp. Six journalists participated in the Fellowship program. An additional 12 journalists participated
in the Boot Camp.
In the first phase of the evaluation of this program, interviews were conducted with all six of the
Fellows and all 12 of the participants in the Boot Camp Program. In addition, interviews were conducted
with eight of the editors to whom the Boot Camp participants reported and five of the editors to whom the
Fellows reported. The results of these interviews were reported to the Knight Foundation in December of
2003.
This second report focuses on the work product of the journalists who participated in these
programs in 2002. It employed two distinct methodologies to examine that work product.
This report is based on both content analysis and focus group analysis, tools widely used in
evaluation research. Content analysis, a technique for systematically studying the characteristics of
communication messages, is employed in many social science disciplines, including mass
communication. Focus group analysis is the study of the conversations of selected individuals assembled
by the researcher to engage in a discussion. The discussion is led by a trained facilitator and follows a
predetermined script. This technique is widely used in marketing and other types of social research.
Content Analysis Methodology
The content analysis was of the work of journalists who participated in the Knight Public Health
Journalism Fellowship and Boot Camp programs in 2002 and of the work of journalists who were
comparable in many ways but who did not participate in that program. This second group is a control
group, since its members did not participate in the training program. The work of both groups of journalists
was studied both before and after the training programs.
The goal was to obtain copies of stories written by the participants in the Boot Camp and
Fellowship programs both before and after they completed the programs and by the control group
members during the same periods. By examining the characteristics of these articles, it should be possible
to determine if the program had any impact on stories actually written. Ideally, these stories should be
selected unobtrusively, i.e., independently from the journalists who produced the work. Public archives of
stories produced by media in the country allow for such unobtrusive selection.
To begin the search process, the media outlet for each of the 18 journalists participating in the
programs was identified from program records. The goal was to find as many electronic records of stories
written by these journalists in the specified periods as possible. LexisNexis was used as the primary
archive. First, the news category was restricted to “U.S. News,” meaning that only articles appearing in
U.S. media would be included. Next, the search was restricted to the actual media organization for which
the journalist worked. A journalist’s name, specific media organization, and the target dates were specified
in the search fields. For example, since the CDC Boot Camp ran from June 17 to June 26, 2002, the
three-month time frame before the Boot Camp was March 16 to June 16, 2002. This was specified for the
first search. The three-month period following the Boot Camp was June 27 to Sept. 27. A second search
was conducted using these dates. Everything that the journalist produced was collected. The search dates
for the Fellows were March 16 to June 16, 2002, and Oct. 29, 2002, to Jan. 29, 2003. ProQuest was used
as a secondary search tool if LexisNexis did not carry the specific media organization in its database.
Procedures for the search were parallel to those for LexisNexis.
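The before-and-after windows described above follow a simple rule: three months ending the day before the program begins, and three months beginning the day after it ends. A minimal sketch of that date arithmetic, using the Boot Camp dates from the text (the function name and its simple same-year month arithmetic are illustrative; the Fellows' after-window, which crosses into 2003, would need fuller month handling):

```python
from datetime import date, timedelta

def search_windows(program_start, program_end, months=3):
    # Approximate "three months" as the same calendar day, three months away.
    # Note: this naive month arithmetic only works within a calendar year.
    before_end = program_start - timedelta(days=1)
    before_start = before_end.replace(month=before_end.month - months)
    after_start = program_end + timedelta(days=1)
    after_end = after_start.replace(month=after_start.month + months)
    return (before_start, before_end), (after_start, after_end)

# Boot Camp ran June 17-26, 2002, giving the windows reported in the text:
# March 16-June 16 before, and June 27-Sept. 27 after.
before, after = search_windows(date(2002, 6, 17), date(2002, 6, 26))
```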
The work of eight of the 12 Boot Camp participants and of four of the six Fellows was available in
LexisNexis. The work of one additional journalist from each group was located in ProQuest. The four journalists whose work could not
be found this way were freelancers.
In an effort to find the work of these four journalists, searches of LexisNexis and ProQuest were
conducted, using the specified time period and the names of the journalists as criteria. In addition, general
searches were conducted using the Google search engine. This produced only seven articles for the four
journalists and inadequate records to allow for before and after comparisons.
A group of health/medical reporters, who worked at publications comparable to those where the
Boot Campers and Fellows worked, was identified to serve as a control. An examination of their work
before and after the Knight Training programs would identify any changes in journalistic product not
attributable to the Knight program itself.
The first step was to obtain the names and publications of the 2003 participants in the Knight Boot
Camp and Fellowship programs. These individuals should be comparable to the 2002 participants in key
ways, as they, too, had been selected for the training programs.
Seventeen journalists participated in the Boot Camp program in 2003; eight were Fellows. The
goal was to match individual participants from 2002 with comparable individuals from 2003 in terms of
organization size. In fact, matches were difficult. Only 11 participants in the 2003 program were included in
the control group.
To supplement this group of 11, Editor and Publisher Yearbook was consulted to find a
newspaper comparable in terms of size and location. The newspaper web site was consulted to find the
name of a health reporter. If more than one name was listed, the first was chosen. If that person did not
have stories published in the time period, the next person on the list was selected. Three journalists were
selected in this fashion.
To fill the remaining four positions, the directory of the Association of Health Care Journalists was
used to locate health journalists working at media organizations comparable in terms of size and location.
Three journalists were included in the control group from this directory. The remaining unmatched
journalist worked for a magazine as a media critic. A roughly comparable magazine with a media critic
was identified, and this media critic was included as the final control group journalist.
Once a control group similar in makeup to the participant group was compiled, articles were
collected using the same procedures as those used for the Knight 2002 program participants. Articles
published during all three time frames were collected for the control, specifically for the periods March 16
to June 16, 2002, June 27 to Sept. 27, 2002, and from Oct. 29, 2002 to Jan. 29, 2003.
Searches on LexisNexis provided stories for 10 of the journalists in the control group. ProQuest
produced stories for the remaining four. Because of the inadequate data for the freelancers who
participated in the Knight training programs, no stories were gathered for the freelancers in the control
group. Freelancers were dropped for all further analysis. Fourteen journalists were in the final control
group, the same as the number of journalists who participated in the 2002 Knight training programs.
The final comparisons possible were between the before and after work of nine participants in the
Boot Camp program and five Fellows, between the work of this group and a control group made up of 14
comparable journalists, and between the work of these comparable journalists before the training sessions
were held and after.
The participant group and the control group each consisted of 10 newspaper journalists, two
magazine journalists, one radio journalist, and one wire service journalist.
Formatting Articles for Content Analysis
A total of 2,054 news articles was collected for the entire project using computer search engines.
Of these, 920 were written by the participants in the CDC Boot Camp and Fellowship programs. These
articles were written within a three-month period prior to the Boot Camp and a three-month period
immediately following completion of the program. For the Boot Camp participants, this three-month period
followed completion of that two-week program. For the Fellows, this three-month period followed
completion of the four-month program. Another 1,134 articles written by a control group of journalists were
collected in the same way. For this group, articles were included if they were published in the three
months before the Boot Camp and Fellowship, the three months after the Boot Camp, and the three
months following the Fellowship Program.
These articles were saved as text documents. The media organization, headline, page number (if
appropriate), and journalist’s name were all recorded in a spreadsheet, and each article was assigned a
unique, six-digit code. This same identification number was transferred to the text of the stories. All texts
were formatted to have the same one-inch page margins, Times New Roman font and 12-point type size.
Additionally, all blank lines were erased and paragraphs were indented. The headline, reporter’s name,
organization, date, and all other identifying information then were stripped from the text. After the articles
had all been reformatted and entered into the spreadsheet, duplicates were removed from the files. For
instance, the Associated Press often posts versions of the same article several times throughout the
course of a day. Either the first story or the longest story was kept.
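The duplicate-removal rule described above can be sketched as follows. This is an illustrative reconstruction, not the project's actual procedure; the field names and the headline-based grouping key are assumptions:

```python
# Among articles sharing a journalist and a near-identical headline, keep
# either the first-published version or the longest version, as described.
def drop_duplicates(articles, keep="longest"):
    """articles: list of dicts with 'journalist', 'headline', 'date', 'text'."""
    groups = {}
    for a in articles:
        key = (a["journalist"], a["headline"].strip().lower())
        groups.setdefault(key, []).append(a)

    kept = []
    for versions in groups.values():
        if keep == "first":
            kept.append(min(versions, key=lambda a: a["date"]))
        else:  # keep the longest version of the story
            kept.append(max(versions, key=lambda a: len(a["text"])))
    return kept
```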
After elimination of duplicate stories and stories written by the freelancers, the total number of
stories available for analysis was 1,434. Of these, 626 were written by program participants and 808 were
written by members of the control group, whose stories were captured in all three time periods. Of the 626
stories written by participants in the training program, 221 were written by Boot Camp participants before
the training and 220 were written after. Fellows wrote 99 stories before the Fellowship and 86 afterwards.
Of the 808 stories written by members of the control group, 281 were written before the Boot Camp was
held, 255 were written after the Boot Camp but before the Fellowship ended, and 272 were written after
the Fellowship ended. See Table 1.
Coding Procedures
The content analysis was designed to measure characteristics of the articles that might be
expected to change as a result of the training program. Each article was classified as news, feature, or
column or opinion piece, and as routine or breaking news versus an enterprise piece. The article was
classified by topic covered. The number of sources was counted, and sources were classified by type and
affiliation. The number of attributions was counted, as was the number of sentences that contained
least one statistic. The article was classified as including research findings or not and including a
description of research methodology or not. The number of times that the article mentioned the CDC was
counted, and the nature of the CDC reference was classified. Whether the article used medical terms, and
whether those terms were explained, also was coded. The article was coded as mentioning a health risk or not. If so, the
article was classified as explaining level of susceptibility, providing predictors of risk, providing advice on
dealing with the risk, including chances of survival if affected, and discussing impact of the risk on society.
The article was given a general classification as confusing or not and as to whether it included different
viewpoints or opinions.
A coding booklet was developed to inform coders how to classify the articles. Instructions and
definitions were given for each item. When the coders read the articles, the only identifying information
they saw was the article number. As noted, all other information, including dateline, had been removed.
The articles were randomly assigned to coders, without regard to journalist or time period of publication.
Four coders were paid to code the articles. The coders first were given the coding booklet and a
blank coding sheet. After they familiarized themselves with these materials, they met with the coding
supervisor who went through every item with them again. Next they practiced by coding a set of 10 articles
that also had been coded by the coding supervisor. The supervisor then met with the coders and
discussed discrepancies. The coders then read and classified another set of 10 stories and met again with
the supervisor, who also had coded these stories. One of the four coders, who remained inconsistent after
this second practice session, coded a third block of six stories. The stories used for the practice were
actual stories for the project and were reshuffled into the batch for subsequent coding. Actual coding took
place from May through August of 2004.
An analysis of the practice coding of the four coders with the coding supervisor showed a high
level of agreement. Across 47 comparisons, 16 times agreement was 100%, 17 times agreement was
90%. Average agreement across the 47 comparisons was 84.9%. These computations are based on the
Holsti coefficient and are at an acceptable level of reliability.
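The Holsti coefficient for two coders is CR = 2M / (N1 + N2), where M is the number of coding decisions on which the coders agree and N1 and N2 are the numbers of decisions each coder made. When both coders rate the same items, this reduces to simple percent agreement. A minimal sketch (the function name is illustrative; for more than two coders the report's averages would be taken over the pairwise coefficients):

```python
def holsti(coder1, coder2):
    """coder1, coder2: equal-length lists of coding decisions on the same items."""
    if len(coder1) != len(coder2):
        raise ValueError("coders must rate the same set of items")
    matches = sum(a == b for a, b in zip(coder1, coder2))
    # CR = 2M / (N1 + N2); with equal-length lists this is percent agreement.
    return 2 * matches / (len(coder1) + len(coder2))
```

For example, two coders agreeing on three of four items yields a coefficient of 0.75.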
In addition, reliability was assessed by comparing the agreement among the coders once they
actually started their coding and by comparing agreement for each of the coders across time. For this
second comparison, the coders repeat-coded a subset of articles. In fact, each of these analyses was
conducted twice, once for the coding of the work of the journalists who were in the training program and
once for the coding of the work of the control group.
For the comparisons involving the articles of the journalists in the training programs, 15% of the
articles were coded by all three of the coders then working on the project. Across 47 comparisons, eight
times agreement was 100% and 14 times it was 90%. The average agreement across the 47 comparisons
was 98.5%, using Holsti’s coefficient. This is a very high score.
For the comparisons involving the articles of the journalists in the control group, 15% of the
articles were coded by all four of the coders then working on the project. Across 47 comparisons, 17 times
agreement was 100% and six times it was 90%. Average agreement across the 47 comparisons was
84.9%, again a high score for agreement.
The three coders classifying the articles of the journalists who participated in the training program
reread 20 articles selected randomly five weeks after they had coded them the first time. The average
agreement was 89.4%. The four coders classifying the articles of the journalists in the control group also
reread 20 articles selected randomly five weeks after they had coded them the first time. The average
agreement was 90.1%.
Independently, the total number of words in the story, the total number of sentences, the total
number of lines, and the total number of paragraphs were counted via the word processing program,
Microsoft Word. The number of words per sentence, reading ease, and grade level required for
comprehension also were computed by Word.
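The reading ease and grade level scores Word reports follow the standard Flesch formulas: Reading Ease = 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word), and Flesch-Kincaid Grade Level = 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. A rough sketch, using a naive vowel-group syllable counter (real implementations, including Word's, use dictionaries and exception rules):

```python
import re

def count_syllables(word):
    # Count runs of vowels as syllables; crude, but adequate for a sketch.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # words per sentence
    spw = syllables / len(words)   # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level
```

Higher Reading Ease scores indicate simpler prose; the grade level estimates the years of schooling needed to follow it.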
Since the data from the content analysis are a census of electronic records for the journalists in
the Knight programs and the control group, the findings below are presented descriptively. In other words,
differences observed are treated as real, and sample error is not computed. Of course, measurement
error still can lead to false inferences about the effects of the Knight programs on the journalists. For that
reason, only those findings that are consistent across all comparisons and are consistent with
expectations about effects are treated as real.
Findings: Content Analysis
The participants in the Boot Camp program had produced slightly longer stories than the
members of the control group in the three-month period before the Boot Camp was held, as Table 2
shows. The average story written by the Boot Camp participants was 771 words long, compared with 713
words for the members of the control group. Story length after the fellowship dropped for the participants
in the Boot Camp and increased for the members of the control group. In general, these differences are
quite small. They hold, however, if total sentences, total lines of copy, and total number of paragraphs are
used as measures of story length.
Participants in the Fellowship program also wrote longer stories than members of the control
group before the Fellowship program, but their average story length in words dropped after the Fellowship
program, while the members of the control group increased the average number of words in stories
written. These differences exist if other measures of story size–total sentences, total lines of copy and
total numbers of paragraphs–are used.
A common measure of writing complexity is average number of words per sentence. Table 3
shows that the participants in the Boot Camp did not write more complex stories after the workshop.
Complexity of the stories of the members of the control group also did not change. The Fellows also did
not change appreciably the number of words per sentence in their stories from before to after the
Fellowship program. The control group similarly did not change the complexity of stories written during the
comparison period. Two other measures of writing complexity showed the same pattern: the Flesch
Reading Ease measure and the Flesch-Kincaid Grade Level measure. There is no evidence, in short, that
the participants in the programs at the CDC started writing in a more complex fashion or style as a result
of the experience.
There is evidence, however, that participants in the Knight CDC programs wrote more about
health issues and topics after the programs than before. The increase, in addition, seems to be beyond
what would be expected by external circumstances, as reflected in the work of the control group. Table 4
shows that 71.0% of the stories written by the participants in the Boot Camp before the program were
about health issues or topics; after the program, that percentage increased to 82.8%. The members of the
control group, in contrast, showed a more modest change in the percentage of stories they produced that
were on health topics between the comparison periods.
The journalists about to join the Fellowship program produced a higher percentage of copy on
health topics and issues in the three months before the program than did the control group journalists, and
they increased the percentage of their copy on health topics in the three months after the program. The
members of the control group did not change in terms of the percentage of stories produced on health
topics.
The health stories written by the participants in the Boot Camp program were less likely to be
feature oriented (and more likely to be hard news in approach) after the program than before, as Table 4
shows. The control group did not change during the period. The Boot Camp participants were less likely to
do enterprise pieces (in favor of breaking news) after the program than before, while the control group
participants did not change in this regard. The Fellowship program participants showed a small drop in the
amount of feature writing they did, mirrored in the work of the control group. The Fellows also showed only
slight changes in the percentage of their work that was of an enterprise nature, again comparable to what
was experienced by the control group.
The key finding of these early tables is that the Knight programs at the CDC increased the relative
amount of stories the program participants wrote after the program on health topics. In the remaining
tables in this report, only the stories written by the journalists on health topics are examined.
Table 5 shows that the Knight CDC programs did not alter the number of sources used by the
journalists who participated in them. Boot Camp participants and Fellows used nearly the same average
number of sources before as after the program, and the numbers were very similar to those used by the
control group journalists during the same time periods. What the table does not show, however, is that
there was a shift in the types of sources used. In general, the journalists in the program shifted away from
use of patients and doctors as sources of stories as a result of the program. The shifts were more
pronounced than what would be expected based on the work of the control group. Differences between
the two time periods in the work of the control group can be attributed to differences in the stories being
covered across time. For example, the average Boot Camp participant story written before the program
relied on .47 doctors or health practitioners, while after the program that number dropped to .24.
Journalists in the control group showed a change from .44 doctors or health practitioner sources to .36.
For the Fellows, the decline from before to after their program was from .49 health care practitioners or
doctors to .28, while the control group journalists dropped from .44 to .35. The changes shown for the
control group suggest that there was some change in the nature of health stories from before to after the
program. The Knight program, in addition, seems to have contributed to a shift in the type of sources
used. In general, the shift for the Boot Camp participants was to a variety of experts and officials and to
public relations sources. For the Fellows, the shift was to government officials.
Similarly, there is no real evidence of change in the number of attributions made by the journalists
who participated in the Knight programs at the CDC from before to after the programs (Table 6). What
does change is the type of sources to whom information is attributed. The CDC is much more likely to be
a source after the program than before both for the Boot Camp participants and for the Fellows. The
average story written by the Boot Camp participants before the training program had .12 sources linked to
the CDC. After the program, that number was .45. For the Fellows, the average number of sources
affiliated with the CDC before the program was .07. After the program, it was .29. In comparison, the
control group journalists showed no change in use of CDC sources in the three months after the Boot
Camp program compared to the three months before it. These journalists actually were less likely to
attribute to a CDC-affiliated source in the three months after the Fellowship program ended than in the
three months before it began. The Boot Camp participants also were less likely after the program than
before to attribute information to hospitals, doctors’ offices and medical centers, and less likely to
attribute information to associations and medical organizations. While the control group
showed drops during these periods as well, suggesting some change in the nature of the stories covered,
the drops for the control groups were considerably smaller than for the Boot Camp participants. The
Fellows also were less likely to attribute to sources affiliated with organizations and associations after the
program than before, while the control group journalists actually increased their use of these sources
during this period.
The Knight training programs did not lead the participants to increase the amount of statistical
materials in their stories, as Table 7 shows. Participants in the Boot Camp program, on average, had 1.7
sentences containing at least some statistical materials in their stories before the training program. That
number dropped to 1.4 after the program. The control group also showed a slight decline in the number of
sentences with statistical materials during this period. The Fellows, who wrote stories that, on average,
had 2.0 sentences with statistical materials included before their program, showed no change afterwards.
The control group, which used fewer sentences with statistical materials before the training program,
showed only slight change in the period after the Fellowship program.
A third of the stories written by the participants in the Boot Camp program before the program
began included findings of scientific research, a figure that was considerably higher than was the case for
the control group journalists, for whom only a quarter of stories written dealt with research (Table 8). The
percentage of stories dealing with research findings declined for both groups after the workshop took
place. Four in ten of the stories written by the participants in the Fellowship program before the program
began dealt with research findings. That figure dropped quite strongly in the period after the program. The
participants in the control group, however, increased their reporting of scientific findings, so the Fellows
and the control group journalists actually looked quite similar in the three months after the Fellowship
program.
Only 7.6% of the stories written by the participants in the Boot Camp program included
information on research methodology in the period before the Boot Camp program (Table 8). That figure
was unchanged in the period after the program. The control group journalists actually showed a decline in
inclusion of methodological details from the pre-program to post-program period. The Fellows, in contrast,
included information on research methodology in three of 10 of the stories they wrote before the program
began, but that figure dropped to half that level after the workshop. The control group journalists showed
no change.
Before the Boot Camp program, 23.6% of the stories written by the eventual participants included
medical terms, while the figure for the control group in this period was a nearly identical 22.4%. (See
Table 9.) The percentage of stories including medical terms increased for both groups by a nearly identical
amount in the post-program period. The Fellows included medical terminology in nearly a third of the
stories they wrote before the program, compared with a ratio of about two in 10 for the control group
journalists. In the post-program period, however, the Fellows and the control group both included medical
terminology in about a quarter of their stories. If the story included medical terms, the eventual Boot Camp
participants explained the terms two-thirds of the time in the three months before the workshop, while the
control group explained the terms more than eight of 10 times. The participants in the Boot Camp program
showed no change in their inclusion of explanations of terms in the period after the workshop; the control
group journalists were less likely to explain terms than before the workshop, but their stories continued to
include more explanations of terms than did the stories of the Boot Camp participants in the period after
the program was completed. The Fellows, in contrast, were nearly as likely as the control group to include
terminological explanations in the period before the workshop. After the workshop, the Fellows actually
were more likely than the control group journalists to include explanations of terms in their stories.
Clearly the Knight training programs at the CDC made the journalists who participated more
sensitive to health risks and health threats, and they wrote about these topics as a result. Even before the
workshops, the journalists who would be participating in them were more likely to write about health risks
and health threats than the members of the control group, Table 10 shows. Four in 10 of the health stories
written by those who would be participating in the Boot Camp and nearly five in 10 of the health stories
written by those who would become Fellows dealt with health risks in the three months before the
programs began. Only a third of the stories written by the journalists in the control group in this same
period were focused on health risks. After the Knight programs at the CDC, the journalists who
participated in the Boot Camp program devoted six in 10 of their stories to topics that focused on health
risks, as did the journalists who participated in the Fellowship program. The control group journalists, in
contrast, still focused on health risks in only a third of their health stories.
Despite the fact that the journalists participating in the Knight programs at the CDC focused more
on stories about health risks once they completed the programs, they changed little in the way they
covered these stories. As Table 10 shows, stories about health risks were about equally likely to contain
information on susceptibility to the health risk after the program as they were before it. The Fellows, in
fact, were even less likely to include this aspect of the story than were members of the control group.
Both the participants in the Boot Camp program and the Fellows were more likely to mention
predictors of risk after the program than before, but in the end they differed little from the control group
(Table 10). Both the Boot Camp participants and the members of the control group declined during the
study period in their focus on explaining how individual readers and viewers could deal with health risks,
suggesting this change reflects shifts in the stories covered rather than the training programs. The Fellows
increased their focus on dealing with risks after the Knight program, but they still did not reach the same
level as the control group, which changed little during the period of study.
Participants in the programs did provide more information after the programs were completed
explaining an individual’s chances of survival if affected by a health risk. For the Boot Camp participants,
the change was small. Over the study period, however, the control group also became more
likely to include this information (Table 10).
The Knight training programs at the CDC did seem to make the participating journalists more
likely to include information about the impact of health risks on society (Table 10). Both those in the Boot
Camp program and the Fellows were more likely to include information on societal impact in the stories
they wrote on health risks after the program than before. Members of the control group also included more
of this type of information after the programs than before, but the gain was smaller.
The likelihood that the Centers for Disease Control and Prevention would be mentioned in a
health story written by the journalists who participated in the Knight programs increased as a result of the
programs (Table 11). Before the training programs, the CDC was mentioned, on average, 0.3 times in the
health stories written by the Boot Camp participants. That figure increased to 0.7 after the program. The
control group journalists showed a more modest change. For the Fellows, the CDC was mentioned 0.1
times per story before the program; the figure was 0.5 after the program was completed. For the control
group, mentions of the CDC actually declined over the same time period. Consistent with the
findings on use of sources, the data in Table 11 shows that the CDC became a somewhat more
prominent part of the coverage of health for those journalists who participated in the program. The finding
is hardly surprising, but it confirms that an outcome of the program is increased awareness of the CDC
and the resources it provides to health journalists.
Methodology: Focus Groups
Three focus groups were conducted to get reader reactions to the work product of the journalists
who had participated in the Knight training programs. As part of the focus group discussion, the
participants were asked about their use of sources of information about health issues and then asked to
read and evaluate four articles written by two journalists who had completed the Knight program. One of
those journalists had completed the Boot Camp program; the other had completed the Fellowship
program. Two of the stories were written before the training programs; two were written after.
Articles by all Boot Camp participants and all Fellows that were collected during the content
analysis phase of the project were considered for use in the focus group. One Boot Camp participant was
eliminated from consideration because the articles were radio transcripts, making comparisons with the
work of the other journalists, most of whom worked for print, problematic. Articles that did not concern
health or medical issues were also not considered. Next, articles under 400 words or over
1,500 words were eliminated. Multiple stories concerning one topic were removed from consideration,
leaving only one representative work per topic. Articles that had a joint byline were eliminated as well.
Articles with a hard science slant were preferred over softer stories on topics such as herbal remedies,
exercise tips, and other less traditional issues. This process resulted in one to three articles written before
and after training for each of the eight Boot Camp participants and four Fellows. One Boot Camp
participant and one Fellow were selected at random. A story written before the training program and a
story written after were chosen randomly for each of these two journalists.
The article written by a Boot Camp participant before the training program was about vaccine
shortages (715 words); the article written by this journalist after the program concerned heart defibrillators
(1,053 words). The article written by the Fellow before the training program covered asthma research (523
words); the article written after dealt with gene discovery (414 words). Both articles written by the Boot Camp
participant used a human-interest approach. The articles written by the Fellow were more technical in
nature and did not include a strong human interest angle.
The focus groups were held on December 2, 2004; February 3, 2005; and February 22, 2005.
Participants in the focus groups were recruited by the Survey Research Center at the University of
Georgia using random-digit-dialing for numbers in the Athens, Georgia, calling area. Participants in the
first two focus groups were screened to eliminate University of Georgia faculty and medical or health
professionals. Quotas were set to create a pool of participants equally divided in terms of gender, with
one-third African-Americans and some Hispanic representation. No one under the age of 18 was included.
Of course, many of those recruited did not actually show up for the sessions. The December
focus group consisted of five women and two men. One of the women and one man were African-
American. The oldest individual was 77 and the youngest was 38. A wide range of education levels was
represented, from a white woman with no high school degree to a white male with a doctorate.
The February 3 focus group consisted of three females and eight males. Only one participant, a
male, was African-American. The oldest individual was 74 and the youngest 21. One woman had only a
high school degree; the other participants had some college education or more. Five
individuals had doctorates.
The third focus group employed a different screening procedure for recruitment of participants.
Instead of a lay audience, the recruitment for this focus group concentrated on individuals who worked in
health and medical fields. Because these people came from such a small subset of the population, quotas
could not be used for recruitment. In the end, seven women and three men participated. Two of the
participants, both women, were African-American; one, a male, was Asian-American. The oldest individual
was 55; the youngest was 22. All had college degrees; seven had doctorates.
All the sessions were held at the Survey Research Center at the University of Georgia.
Participants arrived at the Center, signed a consent form, and completed a preliminary questionnaire that
asked basic demographic questions and questions about their use of the media. They also completed
paperwork for payment for participation in the session. Each respondent was promised and subsequently
sent a check for $75 for participation in the focus group.
In the first focus group, six of the seven participants expressed “some interest” in health and
medical issues; the seventh expressed a “great deal” of interest. For the second focus group, four had
“some interest” in health and medical issues, while seven individuals expressed a “great deal of interest.”
Three of the participants in the final session expressed “some interest” in health and medical issues;
seven expressed a “great deal of interest.”
Respondents next were invited to help themselves to sandwiches and other snacks and then seat
themselves at a table. Each participant was given a name tag. Although participants were guaranteed
confidentiality, they were informed that each session was being taped with audio and video recorders to
maintain a record of the conversation.
A trained moderator who was familiar with the research objectives of this project ran each focus
group session. The moderator, following a predetermined protocol, first asked the participants to introduce
themselves and then asked how they learn “about health and medical topics that are important to you?”
They next were asked if they read or listened to stories in the media and to “pass judgment” on the stories
they read. After this initial discussion, the participants were told they were going to read four articles taken
from newspapers and would be asked to comment on them.
The order of the discussion of the four articles was determined by chance. The story about gene
discovery, written by a Fellow after the program, was discussed first, followed by the story on vaccine
shortage, written by a Boot Camp participant before the workshop. The third story, about asthma
research, was written by a Fellow before the program, and the final story was about heart defibrillators,
written by a Boot Camp participant after the workshop. After the respondents were given a chance to read
the first article, they were asked what they “like most” about the article. The next question asked the
participants what they “like least” about the article. The moderator probed to find out how the article could
be changed as the discussion progressed. The same procedures were used for each of the three
remaining articles. Finally, the focus group participants were asked to “rank these four articles in terms of
their quality.” Those rankings were marked on the articles, which were then collected.
Findings: Focus Groups
The focus groups were designed to provide additional feedback on the work of the journalists who
participated in the Knight programs at the CDC. While only a very small number of stories could be
reviewed by the participants, their comments were informative. The focus group
participants also provided insight into the ways the public learns about and evaluates the health
information it receives.
Since the first two focus groups were drawn from the general population, while the third focus
group was made up of health professionals, Groups 1 and 2 are discussed first and together, and the third
group is discussed separately.
Groups 1 and 2 Comments on the Media
The participants in the first focus group were mostly passive in their use of the media for health
information, encountering coverage of these issues circumstantially rather than seeking it out. Overall,
they did not seem very interested in medical and health issues.
Most of their health information came from the Internet, television and newspapers, with a heavier
emphasis on TV and newspapers. Two people in the group used two or more newspapers daily. One of
those respondents said, “I am an avid reader. I read three papers a day, and not a whole lot on the
Internet, but I do still get some journals that I still enjoy to read, but primarily the print.” Many claimed they
only become interested in seeking information on a particular issue when it has been brought to their
attention by a doctor or extensive media coverage.
There seemed to be a general consensus in the group that newspapers go more in-depth than
television in covering health news because television is primarily interested in shock value. One
respondent said about newspapers, “I think they’re more in depth. You have a lot to read and think about.
The news I feel sometimes tries to shock or entertain you, and I think if you have a disease or condition,
you’re far more interested in that.” In general, participants found television to be more convenient than
other forms of media.
The second focus group was more purposeful and active in finding and consuming health-
related content in the media. Its participants were particularly interested in medical and health issues.
Many of the respondents were suffering from a specific health problem or providing care for someone with
a health problem. This led them to seek extensive information on the health problems they confronted.
This group utilized the same sources as the first group, but group members also reported using
radio. In general, the group depended more heavily on the Internet, magazines and scientific journals.
Some members of the group expressed concern about the Internet, saying it contains so much
information that it can overwhelm users. One respondent who used the Internet almost daily said, “Its
strength is its weakness.”
Participants from both focus groups agreed that caution is necessary regarding information
people find in the media. A participant from the first group said, “My father was a space scientist and he
used to get interviewed by the press, and he said basically very few journalists know anything about
science, and I think that is very much the case in medicine too. The worst thing to me about medical
journalism is the same with any science journalism which is just ignorant people writing.” A participant in
the second group also said that caution was necessary, noting that he distrusted research in general and
felt that researchers find what they want to find, especially when there is an economic incentive.
Group 1 and 2 Comments on Article #1 on a Gene Linked to Prostate Cancer and Written by a
Fellow After the Training Program
Participants in both groups were concerned that the language was too technical and scientific for
the average reader. One participant said, “Jargon is a good clue to me that the source is shady and only
has superficial knowledge of it. My immediate impression is an interest in it and I’d look in other sources
for more information.”
The overall consensus was that this was not an article that the general public would want to read.
Most participants agreed that the author could have explained the subject in simpler terms. There also
seemed to be some skepticism, especially with the second group, concerning the emphasis the article
placed on the gene discovery. Several participants said that the article gave readers a false hope.
Group 1 and 2 Comments on Article #2 on Vaccine Shortage and Written by a Boot Camp
Participant Before the Training Program
For the most part, participants in both groups liked the human-interest aspect of this article.
Members from both groups expressed appreciation of the simpler language and greater accessibility of
the article. Some members found the language to be too simple, however, saying the article had a
paternalistic and patronizing tone. As one member said, “I feel like they are talking down to us.” Some said
the article made too many generalizations and did not explain the issue as thoroughly as it should have,
particularly regarding federal and state regulations concerning vaccination. One participant said, “On this
page it says ‘the government’ has made temporary changes. Does that mean a county government, or
does that mean the federal government, and the reason I mention it is if one really wanted to know why
America has such a problem with vaccines. It’s very much a political question and it very much involves
the role of government. It’s a very important story, and you read it, and although it sounds like you’re
getting information, I think you’re not getting the information you need to really understand what’s at play.”
Group 1 and 2 Comments on Article #3 on Asthma Research and Written by a Fellow Before the
Training Program
The first group thought the article mostly stated things people already knew, so it held little
interest for them. Interest was higher in the second group. As one person
mentioned in the first group, “I think they were listing some things that I think mostly what they said we
already knew. I can see why children who have asthma frequently are obese because they can’t do the
exercise. If they start running, they can’t breathe, and they have to sit down.” Both groups felt that the
article did not provide enough contextualized information about asthma, its causes, and the links between
it and diet and exercise. As one participant mentioned, “It’s interesting and has good information about
treating asthma, but it never once mentioned the cause of asthma or how much it has increased
astronomically over the last 30 years. I’d like to know why this is.”
Group 1 and 2 Comments on Article #4 on Heart Defibrillators and Written by a Boot Camp
Participant After the Training Program
Overall, members of both groups liked this article and the way it approached the subject matter.
Generally speaking, this was considered to be the best article by both groups. The first group, however,
was concerned by the repeated mentions of a single defibrillator manufacturer, which led
several focus group participants to say that it was “just advertising.” The second group was more
concerned with the lack of sources used in this article. As one person summed up, “I was curious ‘cause it
was a longer article, but there were only three sources quoted. I was wondering why they didn’t talk to an
EMT or a cardiologist. I tend to be skeptical when I don’t have an attribution.”
Group 3 Comments on the Media
In general, the medical professional focus group participants criticized the content of the articles
rather than their format or the journalistic style. Compared to the lay audience groups, which were
generally more interested in the human-interest aspect of the articles, the professionals were more
accepting of the articles’ scientific components.
The medical professionals, as a whole, placed more emphasis on reading scientific journals and
on listening to expert opinions expressed by scientific colleagues (i.e., drug representatives and doctors)
than on using mass media such as magazines, newspapers and the Internet to get health-related
information. Most of the participants also made a clear distinction between the information presented in
the “popular press” and in scientific journals. Concern was expressed repeatedly over the public relations
and marketing aspects of the articles. Furthermore, the group noted the importance of the first few
sentences of an article because, they felt, most readers do not read articles all the way through.
Regarding the Internet, the group generally expressed the view that, for the average person, there
is too much information to go through and the general public does not necessarily know what information
is the most important or factual. The group commented that a lot of information that is disseminated gives
readers false hope, too much information, or misinformation. For example, one participant commented,
“Being a nutritionist, I look out for a lot of different articles and we are bombarded with the new oil or the
new vitamin. It’s really hard because these magazines, especially for women, are telling them to do all of
these things and every week or every month it’s something new. People just get really frustrated, and I just
think that they put some information out there that is just too much. Sometimes too much information is
bad.”
For the most part, the participants consulted their media sources almost daily.
Group 3 Comments on Article #1 on a Gene Linked to Prostate Cancer and Written by a Fellow
After the Training Program
Overall, the group commented that the author used good, relevant sources, and that the author
attributed sources well enough that readers could know where to look if they wanted more information. It
was also important to the group that the article stressed early diagnosis and pointed out that the gene was
not going to be a catch-all for detecting prostate cancer.
One of the negative aspects of the article that the group discussed was the way the tone of the
article changed from a more general public approach to one that was overly scientific. The overall
sentiment was that the author tried to straddle the fence and failed. One participant said, “It would really
depend on the publication or what audience the article is aimed at. Half of it is very simplistic and
introduces you to cancer information and then it hits you with a whole bunch of jargon, which I know the
most general readers of the public would not understand. So you either have to tune it up for a scientific
audience or remove the jargon for the laymen.”
The group also noted that the statistics the author used were not put into context very well and
that the article gave no information regarding false positives from the tests. Furthermore, the article did
not explain what was wrong with the previous method used for detecting prostate cancer. This was a
criticism of the article that was echoed by many of the group members.
Group 3 Comments on Article #2 on Vaccine Shortage and Written by a Boot Camp Participant
Before the Training Program
Many of the participants seemed to agree that the article included good information, but they said
that it was poorly organized, which made comprehension of the subject difficult. Another complaint was
that the information did not flow very easily. Participants agreed that the article needed more information
and explanation. For example, one participant said, “My biggest thing is ‘other vaccines are in short supply
because of production and compliance problems.’ Compliance from the pharmaceutical companies?” The
group never reached a consensus as to what was meant by compliance problems, but it agreed that the
statement was very unclear.
The group also agreed that the author needed to include references more applicable to the
article’s subject. In addition, several said, the article did not explain the reasons for the vaccine shortage,
and the group agreed that the author could have done more to address the facts behind the problem of
supply and demand.
Group 3 Comments on Article #3 on Asthma Research and Written by a Fellow Before the
Training Program
The group members generally agreed that they liked the way the article began by listing the goals
of the research project, but several said the lead was somewhat deceptive and incongruous with the rest
of the article. This called into question the appropriateness of the attributions and quotes that the author
used. It was generally agreed that the article began strongly, but fell apart toward the end.
The group also was concerned that the article seemed to generalize based
on certain race and class attributes. One participant commented, “All the way through, it seems to
ignore environmental risk factors in favor of socioeconomic issues, and pretty much it’s a classic example
of ‘blame the victim.’ It’s all to do with the victim’s own actions and it pigeonholes all the way through as a
disease of poor, working-class areas, and, seeing as it’s talking about the Sickle Cell Association of such-
and-such county, poor African-American counties. I don’t think it was very useful in that respect.”
Participants generally agreed that the article did not contain enough medical information showing
a link between asthma, smoking and obesity. The group also noted that the author made assumptions
about the readers that they believed to be inappropriate.
Group 3 Comments on Article #4 on Heart Defibrillators and Written by a Boot Camp Participant
After the Training Program
The group agreed that the article was packed with information that served the public interest. The
lead also kept the reader’s attention and held the article together. The marketing of a particular brand of
defibrillator was a red flag for most of the participants.
Participants noted that, compared to the other articles, the author did not include many scientific
attributions or quotes. One participant said, “With their scientific data, on the second page, they state,
‘One study found … that graders could apply the shock,’ but they don’t give the journal. They don’t allow
you to look it up. Then they talk about this other study, at the end, but they don’t tell who’s conducting the
study.”
The group especially expressed concern about Good Samaritan laws, factual correctness and the
emotional experiences users of heart defibrillators may encounter, issues that were not fully explained in
the article.
Rankings of the Four Articles
Table 12 shows the rankings of the four articles by the members of the three focus groups. The
clear favorite was the final article, written by a Boot Camp participant after the program. It received 23
first-place ranks and no last-place ranks. The clear loser, however, was the first article, written by a Fellow
after the training. It received only one first-place rank and 13 last-place ranks. Articles 2 and 3 received
mixed reviews. Despite the differences in the groups, they agreed on these rankings. Article 4 was the
clear favorite across all three groups; Article 1 was clearly ranked lowest, though the third group, made up
of experts, was a bit more forgiving than Groups 1 and 2.
Summary and Conclusions
This study provides the first concrete evidence that the Knight Public Health Journalism
Fellowship and the Knight Public Health Boot Camp had significant impact on the journalists who
participated. The journalists, in the interviews conducted in the first phase of this project, evaluated the
program favorably and said that it had had impact on them. The interviews conducted with their editors
were consistent with this interpretation. The journalists reported returning to the newsroom with new
expertise about health journalism and new sources. They said they shared these ideas with their
colleagues.
The content analysis shows that the participants in these two programs actually were more likely
to write about health issues after the program than before and were more likely to do so than were
journalists in the control group. The Knight program participants used different sources and relied more on
the CDC. They also were more likely to write stories dealing with health risks and more likely to provide a
broad societal focus for these stories. The evidence is that the journalists moved away from reporting on
individual cases, using doctors and patients as sources, to reporting a more complex picture about health,
based on the perspective of experts.
At the same time, the journalists did not write more complex stories, did not include more
statistical material, and did not use more research findings or methodological details. In other words, they
do not seem to have started writing for health experts rather than for the public. Or, if they did write that
way, their editors corrected the problem before it actually reached the public.
The focus group analysis, though limited in scope, suggests that the public is not easily satisfied
with the health information it is given by the media or other sources. The public is very skeptical and
critical. In fact, the health experts, who might be expected to be more critical than the lay audience, were
actually more forgiving in the focus group discussions. They seemed to realize how difficult it is for the
journalists to actually convey the complexity of health findings.
The data at hand come from a single class, namely the participants in the Boot Camp in 2002 and
the 2002 Fellows. Only the work of 14 of the 18 journalists was actually available for analysis. Analysis of
the work of the 2003 and 2004 participants, now underway, will allow for a fuller understanding of the
program’s impact. It also will be possible to look at differences among the participants in terms of their
prior experience and their subsequent assignments to determine which types of journalists are most
affected by the program.
The data come from a complex research design in which the work of the journalists was analyzed,
without the knowledge of the journalists involved. The journalists did not submit their best work. The
researcher selected all work in the electronic files that was produced by the journalists and attributed to
them, both before and after the training program. In addition, the changes observed in the work of the
journalists in the training program were compared with the work of a group of journalists similar to the
journalists in the training program. That matching was not in terms of specific characteristics of the
journalists, but rather in terms of media organizations. The type of organization should determine much of
what the journalists do on a day-to-day basis, making the control an appropriate one. Changes in the
working environment of journalists covering health that might affect the work of all health journalists were
thus represented by the changes in the behavior of the control group over time. This design allowed for
an assessment of the change taking place in the work of the journalists in the training program and proper
attribution of impact to the training program itself.
The evidence is that the Knight Public Health Journalism Fellowship and the Knight Public Health
Boot Camp did produce change. In sum, the training worked.
Appendices
Tables
Articles Used in Focus Groups
Table 1: Number of Stories Analyzed by Source

                            Participant in Training
Timeline                Boot Camp    Fellow    Control     Total
Before                        221        99        281       601
After Boot Camp               220        --        255       475
After Fellowship               --        86        272       358
Total                         441       185        808     1,434
Table 2: Size of Stories (Total Words)

Timeline             Statistic         Boot Camp    Control    Total
Before Boot Camp     Mean                  771.1      713.1    738.6
                     N                       221        281      502
                     Std. Deviation        388.5      392.0    391.1
After Boot Camp      Mean                  742.9      767.2    755.9
                     N                       220        255      475
                     Std. Deviation        364.6      429.5    400.6
Total                Mean                  757.0      738.9    747.1
                     N                       441        536      977
                     Std. Deviation        376.6      410.8    395.6

Timeline             Statistic         Fellow    Control    Total
Before Fellowship    Mean               745.7      713.1    721.6
                     N                     99        281      380
                     Std. Deviation     562.4      392.0    442.2
After Fellowship     Mean               661.5      809.6    774.0
                     N                     86        272      358
                     Std. Deviation     442.4      618.1    583.6
Total                Mean               706.6      760.6    747.0
                     N                    185        553      738
                     Std. Deviation     510.6      517.5    516.0
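The Total rows in Table 2 pool the before and after periods. A short Python sketch, added here for illustration, verifies that each Total mean is the N-weighted average of the two period means:

```python
# Check of the Total-row arithmetic in Table 2: the pooled mean is the
# N-weighted average of the before and after means.

def pooled_mean(n1, mean1, n2, mean2):
    """N-weighted average of two group means."""
    return (n1 * mean1 + n2 * mean2) / (n1 + n2)

# Boot Camp column of Table 2: before (N=221, mean 771.1),
# after (N=220, mean 742.9).
total_mean = pooled_mean(221, 771.1, 220, 742.9)
print(round(total_mean, 1))  # 757.0, matching the Total row
```

The same check applies to every Total row in Tables 2 through 9; the pooled standard deviations, by contrast, cannot be recovered from the means alone.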
Table 3: Story Complexity (Words per Sentence)

Timeline             Statistic         Boot Camp    Control    Total
Before Boot Camp     Mean                   20.3       20.1     20.2
                     N                       221        281      502
                     Std. Deviation          3.0        3.1      3.0
After Boot Camp      Mean                   20.2       20.0     20.1
                     N                       220        255      475
                     Std. Deviation          2.9        3.1      3.0
Total                Mean                   20.2       20.1     20.1
                     N                       441        536      977
                     Std. Deviation          2.9        3.1      3.0

Timeline             Statistic         Fellow    Control    Total
Before Fellowship    Mean                21.4       20.1     20.5
                     N                     99        281      380
                     Std. Deviation       4.1        3.1      3.4
After Fellowship     Mean                22.5       20.8     21.2
                     N                     86        272      358
                     Std. Deviation       4.8        3.3      3.8
Total                Mean                22.0       20.5     20.8
                     N                    185        553      738
                     Std. Deviation       4.5        3.2      3.6
Table 4: Story Type

Timeline             Statistic              Boot Camp    Control    Total
Before Boot Camp     % Related to Health         71.0       73.0     72.1
                     N                            221        281      502
After Boot Camp      % Related to Health         82.7       77.6     80.0
                     N                            220        255      475
Total                % Related to Health         76.9       75.2     75.9
                     N                            441        536      977

Timeline             Statistic              Fellow    Control    Total
Before Fellowship    % Related to Health      82.8       73.0     75.5
                     N                          99        281      380
After Fellowship     % Related to Health      88.4       73.5     77.1
                     N                          86        272      358
Total                % Related to Health      85.4       73.2     76.3
                     N                         185        553      738

Timeline*            Statistic         Boot Camp    Control    Total
Before Boot Camp     % Feature              19.1       17.1     18.0
                     N                       157        205      362
After Boot Camp      % Feature              14.8       17.2     16.1
                     N                       182        198      380
Total                % Feature              16.8       17.1     17.0
                     N                       339        403      742

Timeline*            Statistic         Fellow    Control    Total
Before Fellowship    % Feature            8.5       17.1     14.6
                     N                     82        205      287
After Fellowship     % Feature            5.3       11.0      9.4
                     N                     76        200      276
Total                % Feature            7.0       14.1     12.1
                     N                    158        405      563

Timeline*            Statistic         Boot Camp    Control    Total
Before Boot Camp     % Enterprise           26.8       26.8     26.8
                     N                       157        205      362
After Boot Camp      % Enterprise           18.7       28.3     23.7
                     N                       182        198      380
Total                % Enterprise           22.4       27.5     25.2
                     N                       339        403      742

Timeline*            Statistic         Fellow    Control    Total
Before Fellowship    % Enterprise         19.5       26.8     24.7
                     N                      82        205      287
After Fellowship     % Enterprise         18.4       25.5     23.6
                     N                      76        200      276
Total                % Enterprise         18.9       26.1     24.2
                     N                     158        405      563

*Only health stories.
Table 5: Number of Sources

Timeline             Statistic         Boot Camp    Control    Total
Before Boot Camp     Mean                    5.6        5.0      5.3
                     N                       157        205      362
                     Std. Deviation          3.1        3.3      3.2
After Boot Camp      Mean                    5.2        5.3      5.2
                     N                       182        198      380
                     Std. Deviation          3.0        3.8      3.4
Total                Mean                    5.4        5.2      5.3
                     N                       339        403      742
                     Std. Deviation          3.0        3.5      3.3

Timeline             Statistic         Fellow    Control    Total
Before Fellowship    Mean                 5.0        5.0      5.0
                     N                     82        205      287
                     Std. Deviation       3.6        3.3      3.4
After Fellowship     Mean                 4.7        5.8      5.5
                     N                     76        200      276
                     Std. Deviation       2.7        4.1      3.8
Total                Mean                 4.9        5.4      5.2
                     N                    158        405      563
                     Std. Deviation       3.2        3.7      3.6
Table 6: Number of Attributions

Timeline             Statistic         Boot Camp    Control    Total
Before Boot Camp     Mean                   10.7       10.7     10.7
                     N                       157        205      362
                     Std. Deviation          5.6        7.5      6.8
After Boot Camp      Mean                   10.7       11.0     10.9
                     N                       182        198      380
                     Std. Deviation          6.0        7.0      6.5
Total                Mean                   10.7       10.9     10.8
                     N                       339        403      742
                     Std. Deviation          5.8        7.3      6.6

Timeline             Statistic         Fellow    Control    Total
Before Fellowship    Mean                 9.9       10.7     10.5
                     N                     82        205      287
                     Std. Deviation       8.3        7.5      7.8
After Fellowship     Mean                 9.4       11.6     11.0
                     N                     76        200      276
                     Std. Deviation       6.2        9.0      8.3
Total                Mean                 9.7       11.2     10.7
                     N                    158        405      563
                     Std. Deviation       7.3        8.3      8.0
Table 7: Number of Sentences with Statistics

Timeline            Boot Camp   Control   Total
Before Boot Camp
  Mean                  1.7        1.5      1.6
  N                     157        205      362
  Std. Deviation        2.9        2.5      2.7
After Boot Camp
  Mean                  1.4        1.2      1.3
  N                     182        198      380
  Std. Deviation        1.9        1.8      1.8
Total
  Mean                  1.5        1.3      1.4
  N                     339        403      742
  Std. Deviation        2.4        2.2      2.3

Timeline            Fellow      Control   Total
Before Fellowship
  Mean                  2.0        1.5      1.6
  N                      82        205      287
  Std. Deviation        3.7        2.5      2.9
After Fellowship
  Mean                  2.0        1.6      1.7
  N                      76        199      275
  Std. Deviation        2.7        2.5      2.5
Total
  Mean                  2.0        1.5      1.7
  N                     158        404      562
  Std. Deviation        3.3        2.5      2.7
Table 8: Research Reports, Methodology

% Include Findings
Timeline            Boot Camp   Control   Total
Before Boot Camp       34.4       25.9     29.6
  N                     157        205      362
After Boot Camp        22.5       16.7     19.5
  N                     182        198      380
Total                  28.0       21.3     24.4
  N                     339        403      742

Timeline            Fellow      Control   Total
Before Fellowship      41.5       25.9     30.3
  N                      82        205      287
After Fellowship       32.9       32.0     32.2
  N                      76        200      276
Total                  37.3       28.9     31.3
  N                     158        405      563

% Include Methodology
Timeline            Boot Camp   Control   Total
Before Boot Camp        7.6       10.7      9.4
  N                     157        205      362
After Boot Camp         7.7        6.6      7.1
  N                     182        198      380
Total                   7.7        8.7      8.2
  N                     339        403      742

Timeline            Fellow      Control   Total
Before Fellowship      29.3       10.7     16.0
  N                      82        205      287
After Fellowship       15.8       11.0     12.3
  N                      76        200      276
Total                  22.8       10.9     14.2
  N                     158        405      563
Table 9: Medical Terms

% Include Terms
Timeline            Boot Camp   Control   Total
Before Boot Camp       23.6       22.4     22.9
  N                     157        205      362
After Boot Camp        27.5       25.3     26.3
  N                     182        198      380
Total                  25.7       23.8     24.7
  N                     339        403      742

Timeline            Fellow      Control   Total
Before Fellowship      32.9       22.4     25.4
  N                      82        205      287
After Fellowship       25.0       26.5     26.1
  N                      76        200      276
Total                  29.1       24.4     25.8
  N                     158        405      563

% Explained Included Terms*
Timeline            Boot Camp   Control   Total
Before Boot Camp       67.6       82.6     75.9
  N                      37         46       83
After Boot Camp        66.0       74.0     70.0
  N                      50         50      100
Total                  66.7       78.1     72.7
  N                      87         96      183

Timeline            Fellow      Control   Total
Before Fellowship      77.8       82.6     80.8
  N                      27         46       73
After Fellowship       73.7       66.0     68.1
  N                      19         53       72
Total                  76.1       73.7     74.5
  N                      46         99      145

*Only stories with medical terms.
Table 10: Health Risks

% Focus on Health Risks
Timeline            Boot Camp   Control   Total
Before Boot Camp       41.4       34.1     37.3
  N                     157        205      362
After Boot Camp        59.3       33.3     45.8
  N                     182        198      380
Total                  52.2       33.7     41.6
  N                     339        403      742

Timeline            Fellow      Control   Total
Before Fellowship      48.8       34.1     38.3
  N                      82        205      287
After Fellowship       59.2       36.5     42.8
  N                      76        200      276
Total                  53.8       35.3     40.5
  N                     158        405      563

% Explain Susceptibility*
Timeline            Boot Camp   Control   Total
Before Boot Camp       29.2       32.9     31.1
  N                      65         70      135
After Boot Camp        28.7       25.8     27.6
  N                     108         66      174
Total                  28.9       29.4     29.1
  N                     173        136      309

Timeline            Fellow      Control   Total
Before Fellowship      20.0       32.9     28.2
  N                      40         70      110
After Fellowship       24.4       39.7     33.9
  N                      45         73      118
Total                  22.4       36.4     31.1
  N                      85        143      228

% Mention Predictors of Risks*
Timeline            Boot Camp   Control   Total
Before Boot Camp       15.4       22.9     19.3
  N                      65         70      135
After Boot Camp        21.3       22.7     21.8
  N                     108         66      174
Total                  19.1       22.8     20.7
  N                     173        136      309

Timeline            Fellow      Control   Total
Before Fellowship      12.5       22.9     19.1
  N                      40         70      110
After Fellowship       20.0       26.0     23.7
  N                      45         73      118
Total                  16.5       24.5     21.5
  N                      85        143      228

% Explain Deal with Risks*
Timeline            Boot Camp   Control   Total
Before Boot Camp       63.1       58.6     60.7
  N                      65         70      135
After Boot Camp        55.6       50.0     53.4
  N                     108         66      174
Total                  58.4       54.4     56.6
  N                     173        136      309

Timeline            Fellow      Control   Total
Before Fellowship      35.0       58.6     50.0
  N                      40         70      110
After Fellowship       46.7       60.3     55.1
  N                      45         73      118
Total                  41.2       59.4     52.6
  N                      85        143      228

% Explain Chances of Survival*
Timeline            Boot Camp   Control   Total
Before Boot Camp       12.3        7.1      9.6
  N                      65         70      135
After Boot Camp        14.8       30.3     20.7
  N                     108         66      174
Total                  13.9       18.4     15.9
  N                     173        136      309

Timeline            Fellow      Control   Total
Before Fellowship       0.0        7.1      4.5
  N                      40         70      110
After Fellowship       17.8       21.9     20.3
  N                      45         73      118
Total                   9.4       14.7     12.7
  N                      85        143      228

% Explain Societal Impact*
Timeline            Boot Camp   Control   Total
Before Boot Camp        1.5        2.9      2.2
  N                      65         70      135
After Boot Camp        10.2        6.1      8.6
  N                     108         66      174
Total                   6.9        4.4      5.8
  N                     173        136      309

Timeline            Fellow      Control   Total
Before Fellowship       7.5        2.9      4.5
  N                      40         70      110
After Fellowship       13.3        5.5      8.5
  N                      45         73      118
Total                  10.6        4.2      6.6
  N                      85        143      228

*Only risk stories.
Table 11: Number of Times CDC Mentioned

Timeline            Boot Camp   Control   Total
Before Boot Camp
  Mean                  0.3        0.4      0.3
  N                     157        205      362
  Std. Deviation        1.0        1.2      1.1
After Boot Camp
  Mean                  0.7        0.5      0.6
  N                     182        198      380
  Std. Deviation        1.7        1.7      1.7
Total
  Mean                  0.5        0.4      0.4
  N                     339        403      742
  Std. Deviation        1.4        1.5      1.4

Timeline            Fellow      Control   Total
Before Fellowship
  Mean                  0.1        0.4      0.3
  N                      82        205      287
  Std. Deviation        0.5        1.2      1.0
After Fellowship
  Mean                  0.5        0.2      0.3
  N                      76        200      276
  Std. Deviation        1.5        1.0      1.1
Total
  Mean                  0.3        0.3      0.3
  N                     158        405      563
  Std. Deviation        1.1        1.1      1.1
Table 12: Rankings of Four Articles by Focus Group Members

Article 1
Rank     Group 1   Group 2   Group 3   Total
1           0         1         0        1
2           2         2         3        7
3           2         0         5        7
4           3         8         2       13
N           7        11        10       28

Article 2
Rank     Group 1   Group 2   Group 3   Total
1           0         1         2        3
2           3         3         3        9
3           3         5         2       10
4           1         2         3        6
N           7        11        10       28

Article 3
Rank     Group 1   Group 2   Group 3   Total
1           0         1         0        1
2           2         5         2        9
3           2         4         3        9
4           3         1         5        9
N           7        11        10       28

Article 4
Rank     Group 1   Group 2   Group 3   Total
1           7         8         8       23
2           0         1         2        3
3           0         2         0        2
4           0         0         0        0
N           7        11        10       28

Article 1: on a gene linked to prostate cancer, written by a Fellow after the training program
Article 2: on a vaccine shortage, written by a Boot Camp participant before the training program
Article 3: on asthma research, written by a Fellow before the training program
Article 4: on heart defibrillators, written by a Boot Camp participant after the training program
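The rank counts in Table 12 can be collapsed into a single mean rank per article (lower is better). A short sketch over the Total column, assuming the four panels appear in article order as the legend suggests:

```python
# Total rank counts from Table 12: rank -> number of focus group members (of 28) assigning it
rank_counts = {
    "Article 1": {1: 1, 2: 7, 3: 7, 4: 13},
    "Article 2": {1: 3, 2: 9, 3: 10, 4: 6},
    "Article 3": {1: 1, 2: 9, 3: 9, 4: 9},
    "Article 4": {1: 23, 2: 3, 3: 2, 4: 0},
}

def mean_rank(counts):
    """Weighted mean rank across all respondents."""
    n = sum(counts.values())
    return sum(rank * c for rank, c in counts.items()) / n

for article, counts in rank_counts.items():
    print(article, round(mean_rank(counts), 2))
# Article 1: 3.14, Article 2: 2.68, Article 3: 2.93, Article 4: 1.25
```

On this summary, Article 4, written after the training, has by far the lowest (best) mean rank, consistent with its near-unanimous first-place votes in the table.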
Article 2
Boot Camp Participant Before
Kathryn Herlich says she was taken aback recently when a nurse in her doctor's office said her 2-month-old daughter, Hayley, would not receive her scheduled DTaP shot.
"They said they were out of the vaccine and I was, like, 'Ooh, that's weird,'" says Herlich, a stay-at-home mom in XX.
A couple of weeks later, the office phoned to say they'd received a supply of the vaccine for diphtheria, tetanus and pertussis, and that Herlich could bring in Hayley for her required shot.
This is an increasingly common scenario as the nation experiences intermittent shortages of several common vaccines. Most are intended to prevent common childhood illnesses such as measles, mumps and chicken pox.
"It's been an ongoing, important story for us," says Dr. Fernando Guerra, director of the XX Metropolitan Health District. "The problem, of course, is that if you send someone away because you don't have enough vaccine today, it's that much harder to get them to come back again tomorrow."
There's no single reason for the current spate of shortages.
"It's a convergence of issues," explains Curtis Allen, spokesperson with the Centers for Disease Control and Prevention in Atlanta.
For example, about a year and a half ago, Wyeth Lederle ceased production of the DTaP vaccine, which children should get five times before age 6. That left only two companies, Aventis Pasteur and GlaxoSmithKline, producing the vaccine.
This shortage was further exacerbated when the CDC asked manufacturers to cease adding thimerosal, a mercury-based preservative, to 10-dose vials of DTaP. As a result, the vaccine must now be made in single doses, and each vial has to be overfilled slightly so the full dose can be removed via syringe. This cuts yield by about 25 percent.
Other vaccines are in short supply because of production and compliance problems (measles, mumps and rubella, varicella and pneumococcal conjugate vaccine) or unexpected demand (PCV).
Making vaccines is a tedious and exacting business. A batch of Td vaccine, which prevents tetanus when someone gets a puncture wound, takes almost a year to make. So it's not as if manufacturers can simply turn on the spigot to increase the supply.
To date, health officials say there haven't been any outbreaks of vaccine-preventable diseases. This is in part because of what's called "herd immunity." That means that, if there are enough people in the general community vaccinated against a disease, a single person's chances of being exposed to an infectious organism remain small.
"But if someone who is unprotected comes in contact with a person who is infected from outside the area, they'd be susceptible," warns Guerra.
The government has made temporary changes to its routine dosing schedules. It is also encouraging physicians to establish recall programs to contact parents of children denied vaccines when new supplies arrive, according to the CDC's Allen.
"It's also the parent's responsibility to keep track of any vaccines their kids have missed," he says.
Locally, the Metropolitan Health District has sufficient quantities of all vaccines to meet immediate needs.
"Any shortages are probably greater in the public sector," Guerra explains. "But supplies may become tight as the back-to-school season approaches."
The health district has recommended that school districts give children who are unable to receive the full regimen of required vaccines a temporary deferment and allow them to register for fall classes.
According to renowned infectious disease expert Dr. Paul Offit, the root cause of current shortages (he says it's "the worst that I can remember") goes beyond simple manufacturing glitches. Offit, chief of infectious diseases at Children's Hospital in Philadelphia, says they're a sign of "the crumbling of the infrastructure of vaccine production."
For example, 20 years ago there were 18 pharmaceutical companies making vaccines. Today, due to mergers and companies leaving the vaccine business, there are only four.
With federal and state governments buying about half of all the vaccine manufactured, prices have been kept artificially low.
Finally, fear of litigation and a growing anti-vaccine movement have made many companies skittish about the business.
Preventing future vaccine shortages will be difficult. There have been discussions about a joint public-private partnership to ensure adequate supplies, says Guerra, a member of the National Vaccine Advisory Committee. This would include a pending $1 billion federal vaccine laboratory that XX is reportedly in the running to get.
Article 4
Boot Camp Participant After
Most people probably have only the barest inkling of what a heart defibrillator is or does. It's likely the only time they've seen one has been in the movies or on TV shows such as "ER": A doctor places two paddles on an unconscious patient's chest and yells "Clear!" The body lurches upward as the paddles deliver a jolt of electricity to restart the patient's heart.
But these machines, in the form of portable, laptop-size units called automated external defibrillators, are expected to become almost as common as fire extinguishers in coming years as efforts to place them in public places gain momentum. AEDs are already required equipment on airplanes and in many federal buildings. But increasingly, the electronic devices - simple enough to be used by those who've never set foot in a medical school - are appearing in shopping malls, theme parks, office buildings, schools, even private homes.
Last month, President Bush signed the Community AED Act. It appropriates more than $30 million over the next four years for the purchase of AEDs by state and local governments and for training in their use. New York recently became the first state to require AEDs in all schools. And while XX has no similar law, AEDs are in place in a number of area businesses, according to Megan E. Galloway-Winkler of the local branch of the American Heart Association.
Available by prescription only, an AED is used to restore normal heartbeat in an individual stricken with sudden cardiac arrest, a condition that kills about 250,000 Americans annually. It's estimated that broad use of these machines could save 50,000 lives every year.
The most common form of cardiac arrest is ventricular fibrillation, which occurs when the electrical system that controls the heart's rhythmic beating goes haywire. During this "electrical storm," the heart stops pumping blood. Victims quickly lose consciousness and, if the heart cannot be restarted, will die within minutes.
The only available therapy for ventricular fibrillation is electrical shock. While cardiopulmonary resuscitation can delay the inevitable, the rule of thumb is that for every minute a victim goes without defibrillation, chances of survival decrease by about 10 percent.
Enter the effort to improve public access to AEDs. For example, one of the aims of the American Heart Association's Operation Heartbeat program is to increase availability of AEDs in all public buildings.
Others say availability of AEDs will occur via legislative fiat.
"I predict that within the next five years, all public facilities will be required to have AEDs," says Sam O'Krent, chief executive officer of O'Krent's Abbey Flooring Center, which has had an AED on site since last September. O'Krent purchased the $4,000 unit (which includes staff training) in response to his father's 1997 death due to heart disease.
"That's when I first became aware of the toll the disease takes each year," explains O'Krent, who
until last month was chairman of the board of the local chapter of the American Heart Association.
The AED in O'Krent's store is located on the second of three floors, near the stairway so it can be retrieved from anywhere in the building within three minutes. All O'Krent employees undergo a four-hour training class that includes, in addition to the operation of the machine, instruction in cardiopulmonary resuscitation.
AEDs are designed to be simple to use, difficult to misuse. One study found that after only one minute of instruction, sixth-graders could apply a shock as effectively and almost as quickly as trained emergency medical personnel.
To use an AED, a would-be rescuer presses a button to turn the machine on and places two electrode pads on the victim's bare chest. Screen messages and a computerized voice walk the rescuer through the process step-by-step.
Once the electrode pads are in place, the AED analyzes the victim's heart rhythm like an electrocardiogram. If it senses abnormal electrical activity indicating cardiac arrest, it first warns everyone to stay clear of the victim and then instructs the user to press another button to deliver an electric shock.
The AED will not shock if the patient's heart is still beating. Some newer models apply the shock themselves, so the user doesn't have to take any active role beyond turning it on and placing the pads.
"Even with training, even if the victim survives, using an AED in an emergency situation can be a very emotional experience," says Dr. James A. D'Orta, chief executive officer of LifeLinkMD, which provides AED support services.
Most AEDs also keep a permanent record of the patient's heart rhythm for later analysis by a physician.
The price of an AED has dropped significantly in recent years. LifeLinkMD offers a $3,600, one-year program that includes the AED, training for up to five people and medical support including a crisis intervention team. After that, the program costs $660 annually.
Manufacturers are shooting to develop AEDs that will cost less than $1,000, although training and other support services may be extra.
In the end, price and legislation may not be the forces that bring AEDs to the masses. Instead, it may be case law.
Although resistance to AEDs was initially fueled by a fear of lawsuits, so-called Good Samaritan laws that protect would-be rescuers are now on the books in most states (including Texas) and have never been successfully challenged.
"When you're using an AED, the person is dead so you can't make the situation worse," saysGalloway-Winkler. "You can't hurt a dead person."
It's now reached the point where a company has a greater chance of being sued for not having an AED on site.
Observers expect defibrillators to conquer the home market next. LifeLinkMD already offers a leasing plan for $100 per month with a four-year lease.
A study, known as the Public Access Defibrillator Trial, is examining whether AEDs placed in public locations in 24 cities in the United States and Canada will improve cardiac-arrest survival rates. Results aren't expected for at least another year or two.
Until then, Sam O'Krent says it nonetheless gives him a feeling of security to know that, should someone go into cardiac arrest in his store, that person will have a fighting chance of survival.
"It's like having an insurance policy," he says. "We haven't had to use it, and I hope we neverwill. But it's there, just in case."
Article 3
Fellow Before
Carol Boyd knew she had to do something after a teenager died from asthma in a hospital emergency room five years ago.
Boyd, a registered nurse at the American Lung Association of XX, reported progress Tuesday in the XX Asthma Project. The project aims to identify asthma in children, prevent asthma-related hospitalizations and improve asthmatic children's school grades and ability to play sports. The project began a year ago in partnership with the Health Care District and the county health department.
Asthma, a chronic lung disease with no cure, results in acute attacks in which the person coughs, wheezes and experiences chest tightness. These episodes can be minimized with proper care.
The Quantum Foundation Inc. of XX awarded the association $300,000 last year to start the project, and the Children's Services Council added its support.
The project targets the region of XX County with the highest prevalence of asthma in children - the sugar cane and agricultural production area, according to the foundation.
A review of 1998-1999 school health data showed that asthma was the leading cause of school absenteeism in the county, the foundation said. The problem is at least twice as acute in XX, XX, XX and XX, according to Quantum. The most recent annual report on school health shows nearly 9,000 students in XX County schools were diagnosed with asthma, up from about 6,840 in 1998.
The XX project screened 1,005 middle school students with a questionnaire and identified 71 as potentially having asthma, Boyd said. The project sent letters home with these students, encouraging the parent to take the child to a doctor to be evaluated.
Boyd said the project also will help parents reduce irritants that can trigger asthma attacks in homes of severely asthmatic children who have gone to the emergency room in the past year. Seven homes of asthmatic children have been inspected so far. Some common triggers are dust mites, cockroaches, cigarette smoke and pet dander.
"We've got a lot of smokers here," said Mary Walker of the Sickle Cell Association of XX County. "They smoke in their home and car. They're killing their kids."
Project staff provide the family with dust mite-proof pillow and mattress covers, cleaning kits and tools to manage their child's asthma.
One of these tools is called a "peak flow meter." It's a cheap device that measures the amount of air a person can exhale. Used over time, the meters can tell asthmatics whether their medicine is working or if they're headed for an attack. Kids also learn breathing exercises to control their asthma.
If the project gets more asthmatic children to exercise, it could pay off in other ways. A 1998 study published in the Archives of Pediatric and Adolescent Medicine found that children and teens with asthma are significantly more likely than those who are asthma-free to be obese.
A recent health department study of 581 freshmen at XX Central High School found that about half the females and 42 percent of males are overweight or at risk of becoming overweight, a health department official said Tuesday. It's not known what percentage of these students have asthma.
Article 1
Fellow After
A research team at XX University has discovered a gene involved in the early stages of cancer of the colon, pancreas and prostate, raising the possibility of earlier diagnosis and lower death rates.
The gene, called SIM2, is found on chromosome 21. Dr. Ramaswamy Narayanan, who led the research, found that the gene isn't active in normal human tissue but is turned on in the early stages of these cancers, the university announced Tuesday.
Using matched sets of frozen precancerous and malignant tissue from a bank sponsored by the National Cancer Institute, researchers detected the protein encoded by the SIM2 gene in the precancerous tissue of 10 out of 10 prostate cancer patients and 23 out of 24 colon cancer patients, Narayanan said. The finding is reported in this month's Anticancer Research, a European journal.
"We need to analyze a large number of samples in clinical trials, but if the trend is true, then it certainly has the potential to become a major prostate cancer marker," Narayanan said.
Prostate cancer, which killed an estimated 31,500 men last year, is the second-leading cause of U.S. cancer deaths in men. The only diagnostic test available now detects prostate-specific antigen, which isn't a reliable predictor of cancer. Men who have a positive test face an agonizing choice whether to undergo prostate gland removal.
Colorectal cancer is the third-leading cause of cancer deaths in both men and women, followed by pancreatic cancer. These two types of cancer together killed an estimated 85,600 U.S. residents last year.
"What they may be detecting is not the cause of the cancer but something that is associated with the cancer," said Dr. Paul Billings, vice president of Wipro Health Sciences, a California biotechnology consulting firm. Still, if the test is a better predictor of cancer than the PSA test, "it may be an important diagnostic tool."
The research, supported by a federal grant and based at XX University's Center for Molecular Biology and Biotechnology, made the gene discovery through the use of bioinformatics, a new investigative technique that applies high-speed computational approaches to genome sequences to identify cancer-specific genes.
The discovery highlights the importance of Affymetrix's GeneChip - the entire human genome on a chip the size of a penny - in cancer research. XX University is one of only two places in XX with the machine needed to read Affymetrix's GeneChip.
The discovery will be licensed by Forseti Biosciences Inc., the university's first biotechnology spin-off company, of which Narayanan is president.