iLAB

The art, science and mystery of nonprofit news assessment

July 10, 2013

 

Abstract

This report seeks to answer the two-pronged question, “What is ‘impact,’ and how can it be measured consistently across nonprofit newsrooms?”

A review of recent, relevant literature and our informal conversations with experts in the field reveal growing ambitions toward the goal of developing a common framework for assessing journalism’s impact, yet few definitive conclusions about how exactly to reach that framework.

This is especially the case when journalism’s “impact” is defined by its ultimate social outcomes — not merely the familiar metrics of audience reach and website traffic.

As with all journalism, the frame defines the story, and audience is all-important. Defining “impact” as a social outcome proves a complicated proposition that generally evolves according to the constituency attempting to define it. Because various stakeholders have their own reasons for wanting to measure the impact of news, understanding those interests is an essential step in crafting measurement tools and interpreting the metrics they produce.

Limitations of impact assessment arise from several sources: the assumptions invariably made about the product and its outcome; the divergent and overlapping categories into which nonprofit journalism falls in the digital age; and the intractable problem of attempting to quantify “quality.”

These formidable challenges, though, don’t seem to deter people from posing and attempting to find answers to the impact question.

Various models for assessing impact are continually being tinkered with, and lessons from similar efforts in other fields offer useful insight for this journalistic endeavor. And past research has pointed to specific needs and suggestions for ways to advance the effort. From all of this collective wisdom, several principles emerge as the cornerstones upon which to build a common framework for impact assessment.


 

Introduction

Lauren Hasler spends about eight hours a month entering data, combing through Google Alerts, sorting spreadsheets and mapping her newsroom’s reach. It’s part of her job as Public Engagement Editor at the Wisconsin Center for Investigative Journalism.* But she goes beyond the call of duty to share her process with her counterparts at other nonprofits through the Investigative News Network (Hasler, 2012).*

By disseminating her methods — which are some of the most rigorous employed among small nonprofits of WCIJ’s ilk — Hasler is helping to build a common framework by which newsrooms can measure their impact.

This helps WCIJ keep its work relevant to its audience, so it matters to her newsroom. It also aids journalists in other newsrooms with whom she shares her process. And in ways both subtle and profound, it embodies a microcosm of a very big conversation that’s picking up steam among nonprofit newsrooms around the country — and the philanthropic foundations that fund them.

* Full disclosure: Charles Lewis is a founding board member and officer of WCIJ. He is also a co-founder and current board officer of INN.

Graphics by Cristina Keane, Investigative Reporting Workshop

Both nonprofit newsrooms and their charitable donors are in business to “do good,” that is, to provide vitally important information to better inform citizens, and both want to know they’re succeeding. But, as of now, a clear mechanism for gauging the impact of nonprofit news remains elusive. This report seeks to answer the two-pronged question, “What is ‘impact,’ and how can it be measured consistently across nonprofit newsrooms?”

Why measure

The chorus of voices calling for a unified system of measurement is led by the foundations supporting nonprofit journalism. Funders are, understandably, looking for a way to gauge the social impact of their financial investments. Why? Because some of them are feeling a bit overwhelmed and besieged by proliferating prospective grantees: while there were once just a few nonprofit news organizations, today there are literally scores of them. In the United States, the field has grown from a handful of scrappy nonprofit reporting organizations doing public service journalism from the late 1970s into the early 1990s, such as the Center for Investigative Reporting (founded in 1977) and the Center for Public Integrity (1989), to astonishing growth in just the past few years. Today, for example, the Investigative News Network (INN, begun in 2009) has 82 member nonprofit news organizations, and that number will continue to rise, increasing the pressure on grantors with finite resources to find ways to differentiate among them.

At the same time, it should be noted that veteran reporters and editors, particularly of the investigative ilk, have an inherent, almost visceral dislike of audience measurement, engagement strategies and other metrics-producing data. They perceive themselves, first and foremost, as intrepid hunter-gatherers of information, hearty truth-tellers treading the often extremely difficult, well-nigh impossible terrain of disingenuous politicians, opaque institutions and potentially litigious, public relations-larded corporations, trying to do original reporting that cannot be reduced to mere data in an inhospitable milieu. Breaking through all of those external obstacles requires great stamina, perseverance, creativity and verve, and when important journalism ensues, it is heroic and magnificent to behold: the public becomes engaged and enraged, and wrongs can be righted. Or perhaps there will be less drama, and citizens will merely become better informed about their community and their nation. In any case, information gathered independently by private citizens, not the government, has been a bedrock principle of this democracy since the 1700s, for without reliable information there is no informed citizenry, which is so fundamental to the concept of self-governance itself.

But as the contraction of commercial newsrooms has played out over the past two decades, in both newspapers and television, the financial and other professional pressures on individual reporters and editors have been enormous, and they have taken a toll. Beaten down by years of “efficiency experts” with clipboards and calculators checking on their annual story count, “output” and overall “productivity” as publicly owned media corporations looked for ways to downsize and remove thousands of journalists, newsroom veterans understandably have an almost Pavlovian response to attempts to measure the immeasurable. To them, the current infatuation with impact metrics is déjà vu all over again: the latest non-journalistic, financial way to say no, to narrow the aperture of their time-consuming, costly reportorial “possible” and to prioritize precious, indeed quite finite, resources.

They also believe, correctly, that sometimes the most significant journalism is the least read and least viewed initially: stories discovered months or even years later, or stories crucial to public understanding of complex issues but in an undramatic way. How does anyone measure that?

Within the field itself, if reporters crow too much about the success of their stories in regard to “impact” and “change,” they are criticized and professionally labeled as shills for a cause, as “advocates,” a label still perceived inside traditional newsrooms as unprofessional. Most social entrepreneurs who have started new nonprofit news organizations came from the commercial journalism world and left as it became increasingly hostile to serious, long-form reporting. But they still see themselves, very much so, as professional journalists. Not surprisingly, most are reluctant to aggressively discuss their successes, yet when they don’t, grantors sometimes wonder why they should be funded. Most “outreach” about the importance of their work is thus done privately, mostly in foundation grant proposals, generally outside public view and not easily accessible to the American people. Traditional major newspapers stopped, decades ago, educating the public about why journalism matters and about its critical civic importance to democracy; the nonprofit journalism ecosystem must now do that work, but it does not yet effectively educate the public on why and how information about the uses and abuses of power in our society is so important.

There have been numerous surveys of the “new journalism ecosystem” that continues to emerge and evolve. At this juncture, however, depending upon the precise methodology and the period of time being examined, even the actual number of journalism nonprofits is unsettled and seems to vary. For example, there have been no fewer than nine prior “lists and databases” of news organizations in the “noncommercial” or nonprofit journalism space (including the Investigative Reporting Workshop’s in 2010 and 2011).

The Pew Research Center’s Project for Excellence in Journalism has conducted the most extensive and most recent research to date on this fluid and dynamic space, adding up all of the organizations mentioned in those lists and databases and culling from them a cumulative total of no fewer than 1,800 nonprofit enterprises.

From that, using various methodological means, it narrowed that number to 172 nonprofit news organizations in 41 states throughout the United States (two-thirds of which are sponsored by another nonprofit organization, such as a university) that were created over the quarter-century between 1987 and 2012. And bear in mind that this elaborate effort did not include the Associated Press, founded in 1846; NPR (National Public Radio), created in 1970 and the only news organization in the U.S. to double its audience in the decade after 9/11; or Mother Jones magazine, winner of two National Magazine Awards and founded by a nonprofit publisher. All are major, prize-winning, nonprofit news organizations not included in what is billed as “the most comprehensive” analysis to date.

And if the mere tabulation of the actual number of nonprofit news organizations is so difficult to agree on, imagine how difficult it is to arrive at a common understanding of, and consensus on, what constitutes “impact” and how it ought to be measured. So along with humility and an acknowledgment of the complexities and the lack of consensus about basic issues, let us also recognize the still unfolding reality of this emerging journalism space. After all, it was only five years ago, in 2008, that the Pulitzer Prize Board decided to change and broaden the eligibility rules for the Pulitzer Prize, opening up the nomination process for the first time to independent online news organizations.

Against this backdrop, the chorus of voices seeking a coherent, consensus way to measure impact includes nonprofit newsrooms themselves, as the above-noted work of the Wisconsin Center for Investigative Journalism demonstrates. This interest among grantees, not just donors, stems partly from newsrooms’ need to prove their merits to funders, certainly, but also from their sincere interest in making well-informed editorial decisions and publishing important investigative reporting projects that will most resonate with the communities their journalism serves.

This influx of new content providers and organizational supporters is promising and exciting, but also not without complications, especially when attempting to develop a shared assessment tool.

The new newsrooms are opening their doors with varying levels of prior experience, wildly divergent funding models and inconsistent institutional capacity. Even their missions vary, from investigative reporting on issues of national or international import to coverage of the local school board, and everything in between.

"Journalism's present state: change. Horizons morph before our very eyes. There is no new normal — yet."

Likewise, many foundations new to the journalism world face a learning curve in the news business. For example, while there are relatively few journalism or media foundations in the U.S., there are many other foundations that happen to be supportive of journalism. According to J-Lab (The Institute for Interactive Journalism), from 2005 through 2009, 180 foundations contributed $145 million to nonprofit news organizations. The profession’s history and role in society are unique, as are its culture of independence and its keen sensitivity to conflicts of interest, or even the appearance of them.

The challenge of harnessing and encouraging all this new energy is further increased in the context of journalism’s present state: change. Benchmarks are gone or disappearing. Horizons morph before our very eyes. New technology is replaced before its potential is even recognized. There is no new normal — yet.

If there ever is to be one, newsrooms and foundations must work together, at the ground level where all is ripe for re-creation, to build it. They must learn from each other to save precious time in this era of rapid transformation, before journalism’s relevance is not just challenged but lost. And they must embrace both optimism and realism about which aspects of success can be measured, and about when the best one can do is simply trust.

Defining impact

A review of literature and our conversations with experts in the field of measuring journalism’s impact reveal a growing ambition to develop a common framework for assessing journalism — yet few definitive conclusions about how to achieve it. This is especially the case when journalism’s “impact” is defined by its ultimate social outcomes, not merely the familiar (if not always understood) metrics of website traffic.

This distinction between social outcome and website traffic is critical to establish before discussing impact and how to measure it — critical for this report and every bit as much for news organizations and foundations engaged in assessment of news operations.

Many recent studies have developed models helpful in peeling back the different “strata” of engagement, as J-Lab puts it (Schaffer & Polgreen, 2012). The terminology and diagrams are generally variations on a theme of progressive measurements: reach, engagement and impact.

Reach

Measuring impact starts with quantifying “reach,” or the number of individuals who come into contact with news content. Methods for measuring digital reach pose their own challenges, as discussed below. And in practice, most nonprofit newsrooms (and even traditional media outlets) are only just beginning to learn how to measure reach, much less analyze it. For example, should page views on an article about a planning and zoning meeting be weighed equally against page views on a profile of a popular athlete? And while website analytics programs allow newsrooms to track the amount of time visitors spend on their websites, how meaningful is this metric? After all, time on site could be short because the site was effective at directing visitors to the information they wanted, or long because it took some searching to satisfy their curiosity (Chinn, 2012).
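To make the ambiguity of such raw numbers concrete, here is a minimal sketch in Python that tallies page views and average time on page from a hypothetical analytics export. The file name and column names are assumptions for illustration, not any particular vendor’s format or any newsroom’s actual workflow.

    # Illustrative sketch only: assumes a hypothetical CSV export from a web analytics
    # tool with columns "url", "pageviews" and "avg_time_on_page_sec".
    import csv
    from collections import defaultdict

    def summarize_reach(path="analytics_export.csv"):
        """Aggregate page views and average time on page per URL from an exported report."""
        views = defaultdict(int)
        times = defaultdict(list)
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                views[row["url"]] += int(row["pageviews"])
                times[row["url"]].append(float(row["avg_time_on_page_sec"]))
        # A long dwell time is ambiguous (deep reading, or a hard-to-navigate page),
        # so the number is reported here, not interpreted.
        return {url: {"pageviews": views[url],
                      "avg_time_on_page_sec": sum(times[url]) / len(times[url])}
                for url in views}

    if __name__ == "__main__":
        for url, stats in sorted(summarize_reach().items()):
            print(url, stats)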

 

Interpreting what these numbers mean takes time. But even assuming they are measured accurately and are reasonably understood, reach reveals virtually nothing about the quality of the contact it measures, nor its impact on the audience member.

Engagement

The concept of “engagement” is a more sophisticated iteration of reach that encourages and explores an exchange of information between the news source and audience members. American University Center for Social Media’s 2010 report “Spreading the Zing” describes engagement as the way users move “beyond media consumption to interact with a project or outlet, both online or off” (Clark & Schardt, 2010). The authors suggest producers track “content creation, interaction, conversation, amplification, mobilization, and more.” Yet two years later, J-Lab, also at the American University’s School of Communication, found that many newsrooms conflate reach with engagement. In “Engaging Audiences” (Schaffer & Polgreen, 2012), the J-Lab authors determined that “cultivating audience participation is still a nut that needs to be cracked.”

Even so, engagement occurs on an interpersonal, not societal, level. It is only a steppingstone in the quest to measure the deep social impact to which philanthropies, and indeed the journalism tradition, aspire.

Impact

In his spring 2006 working paper for the Joan Shorenstein Center on the Press, Politics and Public Policy, Robert Picard called for a fundamental paradigm shift in how we understand, deliver, measure and convey the value of journalism in the 21st century. He argued that, with choice of consumption up to the consumer more than ever before, news organizations must raise the bar of their social value in order to maintain relevance with their audiences and be worth their monetary cost.

“The challenges of journalism and news organizations in the twenty-first century are not merely of creating value for consumers but also of creating value for citizens and society. ... They will need to reemphasize the values underlying news consumption: the provision of information that helps in people’s daily lives, informs them as citizens, and helps them participate in and engage with society,” Picard wrote (p. 149-150).

Journalism as we know it is no longer a necessary, nor even a sufficient, condition for democracy, Picard argued. If journalism doesn’t step up to the job that needs doing today, something else will — and the news industry will wither. His reasoning anticipated the need now being felt to measure social value, otherwise known as “impact.”

Understanding which strata to aim for is foundational in assessing the success of journalism, because more reach does not always equate to greater impact. Richard Tofel of ProPublica points out in his recently published white paper, “Non-Profit Journalism: Issues Around Impact,” that a broader reach of explanatory journalism, for example, may indeed enhance the impact of a published report: The more people read it, the greater the multiplier of potentially improved understanding. A direct correlation does not exist with investigative journalism, on the other hand, because the number of individuals who actually hold positions capable of correcting systemic problems is typically quite small. In the case of investigative journalism, therefore, a targeted rather than broad reach often is the catalyst for impact.

Challenges

While the reasons to measure impact are many, attempts to do so reveal a multitude of challenges and, ultimately, limitations. Assumptions sneak up. “Journalism” in today’s cross-platform world is hard to define. And the elusive x-factor that unifies the components of creative works to produce more than the sum of their parts remains recognizable, yet immeasurable.

Assumption: statistical meaning

Chief among the assumptions made about measuring impact is that, quite simply, it’s even possible. Digital delivery systems and their associated analytics tempt us into believing this fantasy is not fiction. And to be sure, it is now possible to know more about news audiences and how they use the news than at any earlier point in history. Yet analytics tools remain far from meeting standards of reliability or consistency.

A 2010 study by the Tow Center for Digital Journalism at Columbia University (Graves, Kelly & Gluck, 2010) found that proprietary formulas held by commercial analytics companies are subject to unexplained methodological changes that can produce dramatic swings in reported results: “What is supposedly the most measurable medium in history is beset by a frightening tangle of incompatible standards and contradictory results.” The Tow researchers found that statistics as simple as website traffic, provided by different services, can vary by more than 100 percent. Even the relatively clear task of measuring reach, then, compared with the far woollier assessment of impact, becomes impossible to conduct reliably with the tools currently available. While such a technical challenge may sound daunting, it nonetheless can conceivably be overcome.

Yet FSG’s “From Insight to Action” (Kramer, Graves, Hirschhorn & Fiske, 2007) addresses the more fundamental issue of causation versus correlation in ascribing the influences that drive social change. Ultimately, the authors state, no one person or entity can take credit for social impact when society does not operate as a linear series of forces, each within its own vacuum. Look for progress, they advise, not credit for it, because a certain amount of this work is done on faith.

The “Taxonomy of Outcomes” by the Urban Institute and the Center for What Works acknowledges another statistical limitation by pointing out that many outcomes are difficult to measure explicitly. The Center for Social Media’s 2010 report “Spreading the Zing” (Clark & Schardt, 2010) also points out that, with the digital landscape “ever-shifting,” it is difficult to establish a clear and lasting assessment rubric.

And a separate challenge, both technical and ethical in nature, continues to brew. Pew’s State of the News Media 2012 notes that online privacy concerns are becoming more prominent in American society. As digital tools, and even laws, to protect Internet users’ privacy become more effective and achieve critical mass, audience tracking to measure even simple reach may become increasingly difficult, expensive and ethically questionable for journalism organizations.

Assumption: digital connectivity

A more tangible assumption, and one approaching ubiquity in the related literature, is that impact assessment is a digital enterprise. This implies either that a) all audiences and news delivery systems are online, or b) the connected among us suffice as proxies for the unplugged. Yet, as the Investigative Reporting Workshop’s 2012 nationwide investigation “Connected: The Media and Broadband Project” (Dunbar, 2012) reveals, a digital divide persists. “Nationwide, the Workshop survey found that 40 percent of households did not have broadband connection in the home through December 2010,” the author wrote. Laying other data on top of connectivity, the Workshop also found that the lack of connectivity contributes to a self-perpetuating cycle of division: Those without access are more likely to be disadvantaged, a condition exacerbated by the lack of access that distinguishes them.

Yet digital media, and the public’s access to digital technology, are an unspoken assumption of most studies, reports and explorations of impact assessment. FSG’s “IMPACT: A Practical Guide to Evaluating Community Information Projects” is rare in addressing the need to measure the impact of media for offline users. The recommended analog methods are not surprising: surveys and interviews. They are, however, expensive and almost certainly involve a small sample size. Nonetheless, the greater expense of such qualitative methods may be worthwhile if they are used, as FSG suggests, to measure not just the reach but also the offline, or social, impact of information.

Challenge: categorization

A third common assumption in the literature hints at another challenge for impact assessment: that any content whose impact is being assessed is bound to be multimedia. The assumption itself is understandable to a large degree: The continued shift to mobile news consumption (Pew, 2012) is forcing newsrooms to publish on multiple platforms, which in turn requires journalists to tell stories in many different ways. Also, many inroads into audience engagement have been pioneered by those outside the newspaper industry: Consider the sea change precipitated by public radio’s “Audience 88” (Giovannoni, Liebold, Thomas & Clifford, 1999). Additionally, a special effort has been made by many researchers to include all forms of media in their studies: Rather than focusing on a traditional conception of news, reports such as the Center for Social Media’s series on audience engagement (Clark & Aufderheide, 2008; Aufderheide, Clark, Nisbet, Dessauer & Donnelly, 2009; Clark & Van Slyke, 2010; Clark & Schardt, 2010; Clark & Abrash, 2011) and several studies funded by major foundations also explore the impact of documentary film, “community information projects,” investigative and advocacy journalism, citizen reporting, public and community radio and television, entertainment and even gaming (Barrett & Leddy, 2008; Kramer, Parkhurst & Vaidyanathan, 2009; Schaffer, 2009; Kaufman & Albon, 2010).

The distinction between advocacy and journalism — investigative journalism, especially — is aptly described by Richard Tofel of ProPublica in his recently published white paper, “Non-Profit Journalism: Issues Around Impact.” The most profound differences, he writes, “may begin with process, but culminate in much more: Journalism begins with questions and progresses, as facts are determined, to answers. Advocacy begins with answers, with the facts already assumed to be established.” Tofel goes on to explain that his organization has embraced “impact” as a goal, but has sought it “only through journalistic means.”

In a presentation at the Missouri School of Journalism in March 2013, Tom Rosenstiel pointed out that one of the most common myths about journalism today is the idea that the platform is the source. “Platforms are not sources,” he argued. If someone reports that they get most of their news on Twitter, for example, that does not make Twitter the source of the news; it is simply the delivery mechanism. Likewise, it could be argued, formats are not content. Media can be conveyed visually through photographs, or broadcast on the radio — or both — but existing in multimedia formats does not necessarily make all content comparable. Some will be news. Some will be advocacy. Some will be fluff.

This cross-platform complexity demands careful semantic parsing of media categories, especially when attempting to devise a common framework for assessment that is implicitly predicated on comparison. Apples, after all, should be compared to apples, not oranges.

Challenge: quantifying “quality”

Quality is implicitly understood as an essential ingredient of impact. The supposition of quality is baked into virtually every study or report on impact assessment published in the last five years. But only occasionally is the elusive nature of quality acknowledged.

Pew and the Monitor Institute recognized the difficult task of quantifying quality in their report “How the Public Perceives Community Information Systems” (Rainie, Purcell, Siesfeld & Patel, 2011). This three-city study found that those members of the public who avidly consume news media and online information are more likely to be involved and to feel they have an impact in their community. “Yet,” the authors point out, “several indicators are difficult to measure and assess independently without complicated and expensive methodologies — notably, the quality of a community’s journalism...”

In their 2010 report “Spreading the Zing” for the Association of Independents in Radio and the Center for Social Media, the authors describe five interlocking elements of impact, starting with reach, moving to engagement and culminating in what they call “Zing.” Clark and Schardt do not go so far as to try to develop a metric for measuring quality, but it is part of their explanation of zing: “How skilled was the maker in crafting the work?” they ask. In their model, the quality of a work’s craft is an essential precursor to “movement,” which is defined not in partisan terms but as “the capacity to provide users with the option to do something in their roles as citizens.” Craft and movement converge to create “zing,” which is diagrammed as the ultimate achievement of impact.

Existing models are in their infancy

The bulk of what already exists for measuring the “impact” of nonprofit journalism stops short of assessing civic impact or societal change. Special effort must be made when evaluating the literature to distinguish between best practices for having an impact and ways to actually measure that impact.

Assessment research

In efforts to measure impact, a common conflation occurs between impact and output, or impact and reach. J-Lab’s report “Engaging Audiences: Measuring Interactions, Engagement & Conversion” (Schaffer & Polgreen, 2012) reviews responses to a national survey of (mostly) news startups about what those newsrooms do to engage their audiences, what goals are in mind, and how they measure that engagement. Effort is made to distinguish between reach and impact in the report because the researchers found that many news organizations conflated the two. Generally speaking, “Engaging Audiences” uncovered no tools for measuring the civic impact of modern, nonprofit journalism. Although accounting for one’s success is regarded as essential, the methodology remains anecdotal.

One pattern that emerges from those anecdotes is that the journalism with impact tends to be produced and presented with a specific mission for a targeted audience. It makes sense that those stories into which much planning and effort were poured are apt to be better monitored for impact. But does this mean that the daily grind has no impact? Other anecdotes also point to the broad range of potential impacts (or types of impact) journalism can have, and they underscore the gray area between reporting and advocacy. That no-man’s land becomes all the more difficult to avoid when mission is emphasized as much as it is in this and many other “best practice” rosters for nonprofit journalism.

"The bulk of what already exists for measuring the 'impact' of nonprofit journalism stops short of assessing civic impact or societal change."

The McCormick Foundation — clearly interested in this subject, as the funder of this research and analysis — conducted a Youth Impact Study in 2010 to review a collection of grants it had made for news literacy and youth journalism in the Chicago area. By rendering its grantee survey results graphically with digital mapping technology, the foundation achieved a great description of both reach and penetration (the number of grantees funded in each community area). When it comes to social impact, however, this model brings impact assessment no further: it does not capture what the students and teachers reached by the grant-funded programming took away from it, or how it changed their communities. It remains more about reach than impact.

The foundation also developed an instrument for evaluating journalism programs, which includes brief reference to “Progress Indicators (Benchmarks of Success)” for each wing of programming it funds: content, audience, and rights (as in press rights). It also contains strategies, 2010 baseline figures, 2012-13 targets, 2012 activities, and 2015 targets. Significantly, this reflects the common interest in metrics of reach, but it also consistently identifies “quality” as one of its progress indicators and targets. Reaching back to the conundrum of how to quantify such quality, we see that even the best-intentioned models are not necessarily realistic in their current form.

Finally, the excellent work of Pew — a staple in the diet of those wanting to understand audience behavior — consists primarily of survey-based assessments of how news is consumed. Technology choice and use are often the focus of these studies, rather than inquiries into how consumption of news changes the decision-making process of the audience or alters the course of their community in any way. The scope of many media analyses also is limited to data from the largest information sources in the country. No discrete data exists that can be used to compare apples to apples when it comes to audience use of nonprofit journalism produced in differently sized newsrooms of varying capacities.

This is what makes the work of nonprofits like the Wisconsin Center for Investigative Journalism all the more notable: They are inventing the tools themselves. In “Nonprofit Best Practices: Tracking Your Impact” (Hasler, 2012), WCIJ’s Lauren Hasler lays out her methods for conducting online searches, creating a spreadsheet and then mapping the ripple effect of the organization’s stories. Efforts like Hasler’s are important steps for moving the field forward, but as J-Lab’s “Engaging Audiences” (Schaffer & Polgreen, 2012) implies, they are insufficient as a goal in themselves. The J-Lab report did not profile WCIJ, but it found from the newsrooms it studied: “Nearly all of the sites surveyed struggle with staffing, bandwidth and crafting measurable engagement strategies that can help set organizational goals. Moreover, they blame their inability to track or measure audience conversion on lack of resources and lack of tools that could support such work.”
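For newsrooms that want to start such a log without waiting for a shared tool, a minimal sketch follows. It is an illustration only, not WCIJ’s actual workflow: the file name and fields are assumptions, and the script simply appends each story pickup or outcome, such as an item surfaced by a Google Alert, so the entries can later be sorted, counted and mapped.

    # Illustrative sketch, not WCIJ's actual workflow; file name and fields are assumed.
    import csv
    from datetime import date
    from pathlib import Path

    LOG = Path("impact_log.csv")
    FIELDS = ["logged_on", "story", "picked_up_by", "pickup_url", "impact_type", "notes"]

    def log_pickup(story, picked_up_by, pickup_url, impact_type, notes=""):
        """Append one pickup or outcome (reprint, citation, policy response) to the log."""
        new_file = not LOG.exists()
        with LOG.open("a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if new_file:
                writer.writeheader()
            writer.writerow({"logged_on": date.today().isoformat(),
                             "story": story,
                             "picked_up_by": picked_up_by,
                             "pickup_url": pickup_url,
                             "impact_type": impact_type,  # e.g., "reprint", "citation", "hearing"
                             "notes": notes})

    if __name__ == "__main__":
        log_pickup("Sample investigative series", "Example Daily Tribune",
                   "https://example.com/pickup", "reprint", "ran on front page")

Even a log this simple gives a newsroom something sortable and mappable at grant-reporting time, which is precisely the capacity many of the surveyed sites said they lacked.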

Tools and methods

A host of both quantitative tools and qualitative methods for assessing impact is explained in the Center for Social Media’s 2009 report “Scan and Analysis of Best Practices in Digital Journalism In and Outside U.S. Public Broadcasting” (Aufderheide, Clark, Nisbet, Dessauer & Donnelly, 2009). These include: in-depth interviews with influential potential users of your media prior to the launch of a new initiative; longitudinal panel surveys of users; content analysis and advanced tracking features to gauge the “quality,” not just quantity, of user engagement (Nielsen Online and ComScore are mentioned as the next level beyond Google Analytics); and monitoring media and policy impact through tools like Google News alerts, Technorati, and Wikio. The skills required to utilize such methods properly, and the financial resources to pay for the work to be done, are obstacles for many nonprofit newsrooms. And still, only the last of these (and the only digitally based methodology) begins to steer assessment beyond newsroom-audience interactions and toward social outcomes.

A range of tools for impact planning and assessment is provided in the thorough report “Deepening Engagement for Lasting Impact: Measuring Media Performance and Results,” forthcoming from the LFA Group on behalf of the Bill & Melinda Gates Foundation and the John S. and James L. Knight Foundation. Critical to the process prescribed by the report is the first step of setting goals. Harking back to Tofel’s distinction between journalism and advocacy, the report places the importance of customizing an assessment framework appropriately for the media organization at the heart of the inquiry. A journalistic venture, for example, may outline reporting goals but would not extend its expectations to the same realm of potential impact that an advocacy organization may aspire to. Their tools and processes are different; so, too, should their goals be.

By far the most statistically ambitious attempt to develop a new measurement of impact comes from the Lear Center at the University of Southern California’s Annenberg School (Norman Lear Center, 2012). The methodology consisted of a series of online surveys to evaluate the impact of films by Participant Media. For a study measuring the impact of the film Food, Inc., researchers developed a new survey instrument to find out: “Can films really change people’s behavior?” Based on a survey of more than 20,000 people, the findings show, in part, that those who watched Food, Inc. “had significantly changed their eating and food shopping habits.” This conclusion was reached after comparing the survey responses of filmgoers to those of “non-viewers who were virtually identical in 17 traits, including their degree of interest in sustainable agriculture and their past efforts to improve food safety.” A press release about the Lear Center study describes the method as “propensity score matching,” a technique borrowed from clinical research and communication studies.

As impact assessment efforts move forward, such social science techniques should not be ignored, lest researchers risk reinventing a wheel instead of adapting proven methods.
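To make the matching idea concrete, the sketch below shows, under stated assumptions, how propensity score matching pairs each viewer with a statistically similar non-viewer before comparing outcomes. The column names, the pandas DataFrame of survey responses and the simple nearest-neighbor matching are all illustrative assumptions; this is the general technique, not a reproduction of the Lear Center’s actual instrument or analysis.

    # Illustrative sketch of propensity score matching; column names and data are assumed.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def matched_effect(df, traits, treatment="viewed_film", outcome="changed_habits"):
        # 1. Model each respondent's propensity to have seen the film from
        #    pre-exposure traits (the Lear Center study matched on 17 such traits).
        model = LogisticRegression(max_iter=1000).fit(df[traits], df[treatment])
        scores = model.predict_proba(df[traits])[:, 1]
        viewers = df[df[treatment] == 1].copy()
        non_viewers = df[df[treatment] == 0].copy()
        viewers["ps"] = scores[(df[treatment] == 1).to_numpy()]
        non_viewers["ps"] = scores[(df[treatment] == 0).to_numpy()]
        # 2. Match each viewer to the non-viewer with the closest propensity score.
        matched_outcomes = [non_viewers.loc[(non_viewers["ps"] - ps).abs().idxmin(), outcome]
                            for ps in viewers["ps"]]
        # 3. The difference in outcome rates between viewers and their matches estimates
        #    the film's effect on otherwise similar respondents.
        return viewers[outcome].mean() - float(np.mean(matched_outcomes))

A positive return value would suggest that viewers changed the measured behavior more often than their matched, otherwise similar non-viewers did, which is the kind of comparison the Lear Center study reports.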

Advancing impact assessment

Thankfully, and as is often the case with creative ventures, the obstacles to impact assessment have done more to spur innovation than dissuade progress. Various models for assessing impact are continually being tinkered with, as described above, and lessons from similar efforts in other fields offer helpful insights for this journalistic endeavor.

A “Taxonomy of Outcomes” developed by the Urban Institute and the Center for What Works (2006) attempts to help nonprofits recognize success by matching desired outcomes with appropriate indicators. This is far from a checklist; rather, it is a draft of a common “framework” for socially geared and advocacy nonprofits “that provides guidance and context, helping users learn what they need to know.” As a tool with applicability to a range of organizations, the taxonomy also aims to increase efficiency by adapting, rather than reinventing, the assessment wheel. (For example, it keeps program coordinators from having to start a fresh evaluation rubric every time they start a new initiative or assess a current one.)

The taxonomy was developed with the goal of filling a gap where not much is known or documented about desired outcomes for a range of programs. It started with identifying outcomes and indicators already in use or that had been recommended. The related document “Candidate Outcome Indicators: Community Organizing Program,” also prepared by the Urban Institute and the Center for What Works, contains both nuts-and-bolts and principle-based advice for “starting or improving outcome measurement efforts.” The field there is community organizing — not journalism — but many of the principles and approaches are nonetheless applicable. A sample “outcome sequence chart” and “candidate outcome indicators” are provided. Important suggestions and cautions are also made, such as tabulating outcomes by various client categories to see if different groups of people respond in different ways to the same programs. As journalism seeks to understand its audiences more deeply, this is perfectly relevant to inquiries into how discrete audiences consume different types of news.

Also intended for non-journalism nonprofits and the foundations that keep them afloat, FSG’s “IMPACT: A Practical Guide to Evaluating Community Information Projects” (2011) is meant for practical application by mission-driven organizations. It, too, offers potential as a resource for assessing journalism, with several templates and rubrics that organizations can borrow and adapt to help craft their evaluation plans.

From a different FSG report, “From Insight to Action” (Kramer, Graves, Hirschhorn & Fiske, 2007), the “Success Measures” evaluation tool emerges as an example of a common framework that’s been used well in the field of community development. “The approach and indicators were developed in a collaborative way by more than 300 practitioners, organizations, and researchers and were then tested with more than 50 community-based organizations,” the report describes. A set of 44 indicators was crafted to assess work in affordable housing, economic development and community building. Fully customizable, Web-based data collection tools were built, to which users gain access for an annual fee. “Funders often pay for their grantees to use Success Measures, helping the organizations improve their practices while also providing the foundation with better data on how its grantees are performing,” the report states.

The Success Measures Data System is also detailed in FSG’s 2009 report “Breakthroughs in Shared Measurement and Social Impact” (Kramer, Parkhurst & Vaidyanathan, 2009), which offers a helpful description of the ways organizations are working together to establish common frameworks for evaluation and, in some cases, to coordinate their efforts as they each tackle different aspects of the same social problem.

Within the media and journalism realm, the forthcoming report for the Gates and Knight foundations, prepared by the LFA Group, is valuable for its distinction between assessing impact at the individual and systems levels. This gives legs to the premise that audience “engagement” on an interpersonal level is a fundamentally necessary step toward achieving broader social “impact.”

Gleaned principles to guide advancement

Many of the reports, studies and reflections included in this literature refer to one another as sources and inspiration. Most, however, end up recreating a wheel, coining new terms or restating lists. What follows is an attempt to give a comprehensive accounting of the recommended principles about impact assessment gleaned from relevant research to date.

Methodologically sound indicators: The Urban Institute’s Taxonomy of Outcomes (2006) establishes the following six basic criteria for indicators of impact. They must be: 1) specific, 2) observable, 3) understandable, 4) relevant, 5) time-bound, and 6) reliable.

Recognize the limitations of data: Much of the literature on assessment points out that indicators of impact, at most, can serve as mere proxies for the true impact of journalism. Additionally, the “Candidate Outcome Indicators” developed by the Urban Institute and the Center for What Works (2006) caution that the taxonomy they developed for community organizing programs should not be used to determine “why” an outcome has or hasn’t occurred, but simply “if” it has. FSG’s “Breakthroughs in Shared Measurement and Social Impact” (Kramer, Parkhurst & Vaidyanathan, 2009) also cautions against attempting to rely on metrics to measure social value; that is better left for qualitative assessments and knowledgeable human interpretations of the results.

Define impact: The Fledgling Fund’s “Assessing Creative Media’s Social Impact” (Barrett and Leddy, 2008) produces a taxonomy split into different types of outcomes: program-centered, participant-centered, community-centered, and organization-centered. This specificity brings us back to the important complexity of defining impact, given that the shape of the measurement tool will vary depending on where the impact is desired. The FSG report “IMPACT: A Practical Guide to Evaluating Community Information Projects” (2011) also points out that an organization’s evaluation will be different depending on whom it’s produced for, what learning is desired from it, and what stage of development the project occupies at the time of assessment.

Stages of impact: The Fledgling Fund’s “Assessing Creative Media’s Social Impact” (Barrett and Leddy, 2008) is particularly valuable for the way it clearly lays out the different stages of impact: from simple product to awareness to individual impact to social impact. These stages, or strata, are echoed in several later reports by other authors. The Fledgling report also provides a matrix that links specific indicators to each one of those stages (p. 19).

Beyond reach: The Center for Social Media’s “Spreading the Zing” (Clark and Schardt, 2010) eloquently develops many of the guidelines for assessing impact that are presented by Fledgling, whose authors lay out their premise as thinking beyond “box office success.” Instead, the authors suggest, ask a question such as, “How many people better understand the issue because the film was made?” They also recommend: using a range of data to assess impact; distinguishing between measuring output and measuring outcome; setting realistic expectations for outcomes; and working collaboratively with partners. It is important to remember, however, that Fledgling’s methodology of impact assessment was developed for explicit social activism, albeit a media-based activism. Adapting recommendations from this different media culture introduces some of the complications addressed in the section of this paper that explores the complex task of categorizing media and journalism ventures in the digital age.

Holistic approach: Another takeaway from the Fledgling Fund’s report is that a traceable mechanism for “movement” (to borrow a term from the Center for Social Media) must be built into a media initiative if the goal is both to get that movement and be able to track it. A strong case is made that outreach strategies must be clearly linked to impact assessment and, even before that, to product development. “For each project, we strive to determine what type of outreach will be most effective given the issue addressed in the film and the film’s narrative,” the report summarizes. This approach, in part, is related to Fledgling’s acknowledgment that sometimes there is not even enough critical mass of understanding about an issue to expect social change. In these cases, the seeds of social change are sown by first cultivating awareness. Jessica Clark and Sue Schardt, in their 2010 report “Spreading the Zing,” appear to have been heavily influenced by Fledgling and FSG’s work. They echo the same commitment to integrating impact with organizational planning. In “Zing,” the authors assume that all public media projects a) start with mission, and b) closely align strategic planning and assessment.

Iterative process: It also is worth noting that, in the Fledgling Fund’s report, the assessment process is iterative (p. 19-20). Because it is still such a new field, it is inadvisable to hold onto any rubric too tightly. In a similar vein, “Spreading the Zing” points out that, with the digital landscape “ever-shifting,” it’s very difficult to establish a clear assessment rubric. A related point made by ProPublica’s Richard Tofel is that of allowing an extended period of time over which to assess the impact of journalism. Given the pace of public policy toward which much investigative work is geared, Tofel has found it worthwhile to continue following the long tail of impact even after his organization’s projects are complete. Sometimes it is years before the definitive impact can be identified.

Learning not grading: The Fledgling authors point out that the evaluative process is designed more for learning than as a punitive exercise. This paradigm is also explained at great length in FSG’s “From Insight to Action: New Directions in Foundation Evaluation” (Kramer, Graves, Hirschhorn & Fiske, 2007). The FSG report offers good insight into foundations’ relatively recent shift in their own self-evaluation paradigm: moving away from strictly assessing the impact (success) of past grants, and placing new emphasis on evaluative techniques that are more forward-looking. The report also offers helpful strategies to maximize the value of evaluation programs (p. 46), summarized as responsive evaluation, inclusive participation, meaningful and localized data, and useful dissemination products.

Impact varies by audience: The Pew/Monitor Institute report “How the Public Perceives Community Information Systems” (Rainie, Purcell, Siesfeld & Patel, 2011) points out that we can expect to have different impact with the same material depending on who the audience is. As pointed out in the literature review, this is another layer — and a crucial one — to consider for any attempts at both crafting and interpreting an assessment model.

Common framework, not master metric: In “Zing,” the authors espouse the notion that there is no “master metric” for evaluation, but that all tools (both quantitative and qualitative) must be used. The authors of the Center for Social Media’s “Scan and Analysis of Best Practices in Digital Journalism In and Outside U.S. Public Broadcasting” (Aufderheide, Clark, Nisbet, Dessauer & Donnelly, 2009), along with Tom Rosenstiel in the conclusion of that report, also deftly caution against putting all of one’s evaluative eggs in a single metrics basket.

Conclusion

Implicit in Picard’s (2006) call for redefining the “value” of journalism is the imperative to develop a new means by which to measure the industry’s success in meeting this challenge. Traditional indicators, after all, may no longer suffice within a new frame.

But it would be short-sighted to assume that nonprofit newsrooms and their foundation supporters, or any media producers, are the only constituencies invested in the progress of impact assessment. The relationship between news consumption and social impact is historically so well understood — or assumed — that rates of newspaper readership often serve as a proxy for how well an electorate is informed (Adserà, Boix & Payne, 2000). Following the severe decline of the newspaper industry as we knew it, what data are social scientists to turn to now to gauge or represent the information levels in a given community? Without a common system of measurement, broader social research stemming from and relating to a society’s information levels is imperiled.

An even more pressing reason, as Matthew Gentzkow’s comments in a 2009 American Journalism Review article underscore (Smolkin, 2009), is to quicken the industry’s development of new communication and business models that can fill the gap newspapers will leave when those print editions are all but gone. As the platforms, formats, missions and funding mechanisms of news continue to morph, organizations must redefine the fundamental social impact they desire to achieve and learn new ways to gauge it. Without concerted and coordinated efforts to this end, how are news providers to recognize an unfamiliar version of success? Without knowing how to measure impact, will we even know it when we see it?

Herein lies the motivation for impact assessment that is common among all parties — publishers, academicians, funders of new nonprofit journalism models, and indeed the public for whom journalism’s impact is intended. But when it comes to specific methodologies for assessing that impact, divergent perspectives and priorities start to surface. Journalism’s shifting landscape renders impact harder to pin down; yet by the same token its frontier characteristics require all invested constituencies to navigate the new territory together in order to work quickly toward a stable and successful future.

In addition to the gleaned principles outlined above, we caution that particular effort must be made to guard against the assumptions outlined in this report, in particular the tendency to lump all multimedia content together for measurement by the same yardstick. We reiterate the stated need for improved tools with which nonprofit newsrooms can track the many stages of news impact (Schaffer & Polgreen, 2012). And we urge researchers to unify their terminology in order to move conversations forward clearly on the path to a useful, common framework for impact assessment.

Finally, and most importantly, we encourage newsrooms to honor their audiences and their own reporting efforts by gauging the impact of their journalism in a realistic and relevant way. Count not just the number of articles or videos you publish, but also the ways your publication reverberates in your community. Do other newsrooms or newsletters pick up your bylines, or push forward the news that you break? Make note of it. Do legislators or policymakers propose changes based on the revelations you report? Follow those trajectories. Such common sense suggestions, after all, are part and parcel of reporting. Don’t take that skill set for granted. Start a master document — either for the whole newsroom or for each big story — and simply keep track of where the stories lead. In time, you’ll devise a system appropriate to your unique newsroom’s pace and style.

"What we are talking about is nothing less than the relevance and resonance of original reporting in the 21st century Internet Age."

You will also, inevitably, become more familiar with the qualities that distinguish the “general” audience from the “impact” audience of your work. In other words: who is consuming your news, who is responding to it and who is in a position to effect social change because of it. Audience knowledge is certainly helpful when creating a tracking mechanism, such as at the planning stage of a reporting project, but the process of impact assessment will itself be instructive about what remains to be learned. Ultimately, what we are talking about is nothing less than the relevance and resonance of original reporting in the 21st century Internet Age. For decades, the commercial media sector has grappled with this question, not very successfully. And it is unrealistic to expect the hearty souls who have had the grit to bravely attempt to enlarge the public space for this important work also to find the awakened public nerve and, simultaneously, to master market penetration, audience development and other such issues.

At this early stage in both nonprofit journalism and impact assessment, there is much to learn — by newsrooms and philanthropic institutions alike. To that end, we encourage foundations to invest in impact assessment by funding assessment experiments in newsrooms of all sizes. Simple awareness of impact assessment, much less understanding of it, has yet to reach a critical mass within the nonprofit news landscape. Borrowing terminology from the field of impact study, it could be said that “engagement,” therefore, is only rarely possible and true “impact” a long way off. And at this early juncture, new nonprofit organizations trying to establish themselves in a recession-recovering economy, no small feat if accomplished, should be judged by their original news content, the character of their organizations and the public void they are filling — and not by their ability to achieve, measure and convey impact. Much more must be done to build their capacity and, with a diversity of newsroom perspectives, to build rigorous and realistic expectations for the good that journalism can do.

About the Authors

Charles Lewis is a tenured professor of journalism and founding executive editor of the Investigative Reporting Workshop at the American University School of Communication in Washington, D.C. After an 11-year career as a television network producer at ABC News and CBS News “60 Minutes,” he founded and directed the Center for Public Integrity (1989-2004), where he also began its International Consortium of Investigative Journalists (1997), Global Integrity (2005), and the Fund for Independence in Journalism (2003). He also was the principal co-author of five books with Center staff. He studied political science at the University of Delaware, graduating with honors and distinction, and received his master’s degree from the Johns Hopkins University School of Advanced International Studies (SAIS).

Hilary Niles joined the Investigative Reporting Workshop as a researcher in 2012 while earning her master’s degree at the Missouri School of Journalism, where she spent two years as a graduate assistant at Investigative Reporters and Editors. In June 2013, Hilary joined the Vermont-based nonprofit news site VTDigger.org as a business reporter and data specialist. Hilary’s multimedia investigative reporting on public policy is complemented by research on global press freedom. Prior to working in public radio, community newspapers and online journalism, she studied English at the University of New Hampshire and documentary writing and editing at the Salt Institute for Documentary Studies in Portland, Maine.

Special thanks to the McCormick Foundation.

Works Cited, Impact Study for Investigative Reporting Workshop, Spring 2013

Adserà, Alícia, Boix, Charles, & Payne, Mark (2000). Are you being served?: Political accountability and quality of government. Inter-American Development Bank, Research Department, Working Paper #438.

Aufderheide, Pat, Clark, Jessica, Nisbet, Matthew C., Dessauer, Carin, & Donnelly, Katie (2009). Scan and analysis of best practices in digital journalism both within and outside U.S. public broadcasting. Center for Social Media at American University.

Barrett, Diana, & Leddy, Sheila (2008). Assessing creative media’s social impact. The Fledgling Fund. Retrieved from http://www.thefledglingfund.org/media/research.html

Chinn, Dana (2012). Managing media with metrics. Community Journalism Executive Training, Investigative News Network and the University of Southern California Annenberg School for Communication and Journalism.

Clark, Jessica, & Abrash, Barbara (2011). Social justice documentary: Designing for impact. Center for Social Media at American University, and Center for Media, Culture and History, New York University.

Clark, Jessica, & Aufderheide, Pat (2008). Public media 2.0: Dynamic, engaged publics. Future of Public Media, Center for Social Media at American University.

Clark, Jessica, & Schardt, Sue (2010). Spreading the zing: Reimagining public media through the Makers Quest 2.0. Association of Independents in Radio, and Center for Social Media at American University.

Clark, Jessica, & Van Slyke, Tracy (2010). Investing in impact: Media summits reveal pressing needs, tools for evaluating public interest media. Center for Social Media at American University, and The Media Consortium.

Dunbar, John (2012). Poverty stretches the rural divide. CONNECTED: The media and broadband project. Investigative Reporting Workshop. Retrieved from http://investigativereportingworkshop.org/investigations/broadband-adoption/story/poverty-stretches-digital-divide

FSG (2011). IMPACT: A practical guide to evaluating community information projects. Retrieved from http://www.knightfoundation.org/media/uploads/publication_pdfs/Impact-a-guide-to-Evaluating_Community_Info_Projects.pdf

Giovannoni, David, Liebold, Linda K., Thomas, Thomas J., & Clifford, Theresa R. (1999). Audience 88: Terms and concepts. Corporation for Public Broadcasting, and David Giovannoni, Audience Research Analysis. Retrieved from http://www.aranet.com/library/pdf/doc-0013.pdf

Graves, Lucas, Kelly, John, & Gluck, Melissa (2010). Confusion online: Faulty metrics and the future of digital journalism. Tow Center for Digital Journalism.

Hasler, Lauren (2012). Nonprofit best practices: Tracking your impact. Investigative News Network. Retrieved from http://investigativenewsnetwork.org/article/nonprofit-best-practices-tracking-your-impact

Kaufman, Peter B., & Albon, Mary (2010). Funding media, strengthening democracy: Grantmaking for the 21st Century. The GFEM Media Funding Tracker. Grantmakers in Film + Electronic Media.

Kramer, Mark, Graves, Rebecca, Hirschhorn, Jason, & Fiske, Leigh (2007). From insight to action: New directions in foundation evaluation. FSG Social Impact Advisors.

Kramer, Mark, Parkhurst, Marcie, & Vaidyanathan, Lalitha (2009). Breakthroughs in shared measurement and social impact. FSG Social Impact Advisors.

Lewis, Charles, Butts, Brittney, & Musselwhite, Kate (2012). The new journalism ecosystem. iLab, Investigative Reporting Workshop. Retrieved from http://investigativereportingworkshop.org/ilab/story/second-look

McCormick Foundation (2012). MF youth impact study 2010. McCormick Media Matters, McCormick Journalism Program. Retrieved from http://mccormickmediamatters.blogspot.com/2012/05/mf-youth-impact-study-2010.html

Mitchell, Amy, Jurkowitz, Mark, Holcomb, Jesse, Enda, Jodi, & Anderson, Monica (2013). Nonprofit journalism: A growing but fragile part of the U.S. news system. Pew Research Center’s Project for Excellence in Journalism. Retrieved from http://www.journalism.org/analysis_report/nonprofit_journalism

Norman Lear Center at USC-Annenberg (2012). Measuring media’s impact. Retrieved from http://www.learcenter.org/html/projects/?cm=foodinc

Pew Research Center’s Project for Excellence in Journalism (2012). The state of the news media 2012. Retrieved from http://stateofthemedia.org/overview-2012

Picard, Robert G. (2006). Journalism, value creation and the future of news organizations. Joan Shorenstein Center on the Press, Politics and Public Policy, Working Paper Series #2006-4.

Rainie, Lee, Purcell, Kristen, Siesfeld, Tony, & Patel, Mayur (2011). How the public perceives community information systems. Pew Research Center’s Internet & American Life Project, and Monitor Institute.

Schaffer, Jan (2009). New media makers: A toolkit for innovators in community media and grant making. J-Lab: The Institute for Interactive Journalism. Retrieved from http://www.j-lab.org/_uploads/publications/new_media_makers.pdf

Schaffer, Jan, & Polgreen, Erin (2012). Engaging audiences: Measuring interactions, engagement and conversions. J-Lab: The Institute for Interactive Journalism. Retrieved from http://www.j-lab.org/_uploads/publications/engaging-audiences/EngagementReport_web.pdf

Smolkin, Rachel (2009). Cities without newspapers. American Journalism Review, June/July 2009.

Tofel, Richard J. (2013). Nonprofit journalism: Issues around impact. LFA Group, and ProPublica. Retrieved from http://s3.amazonaws.com/propublica/assets/about/LFA_ProPublica-white-paper_2.1.pdf

Urban Institute, & Center for What Works (2006). Candidate outcome indicators.

Urban Institute, & Center for What Works (2006). Taxonomy of outcomes. Retrieved from http://urban.org/center/met/projects/upload/taxonomy_of_outcomes.pdf

Westphal, David (2009). Philanthropic foundations: Growing funders of the news. USC Annenberg School for Communication Center on Communication Leadership & Policy Research Series, July 2009. Retrieved from http://communicationleadership.usc.edu/pubs/PhilanthropicFoundations.pdf


