A couple of days ago, Kevin Anderson posted about “The Future of Context and the Future of Journalism,” and one phrase jumped out at me as I read his thoughts about the inundation of information.
Creating more duplicative content is only reinforcing the problem, causing audiences to shut off.
Do I hear an “ouch”? A couple of weeks ago, after watching a Champions League football match, I browsed the news coverage and was blown away by the number of separate reports of the same event. As I asked then: does every news organisation churn out its own match report to inform its readers, or really to grab some search traffic?
To give a local example: I recently heard from a sports reporter at my regional newspaper that local football matches are often covered by three reporters from Northcliffe Media, each reporting for one of the three Northcliffe-owned regional newspapers. To put this in perspective, these teams are not top-flight and attract only a few thousand supporters. And yet one newspaper group is paying three reporters to do essentially the same job. All this whilst the crowd is likely to contain hundreds of people equipped with live reporting equipment (i.e. a smartphone), at least 30 of whom would be competent and willing to add value to the reporting process. For free.
So why is this happening?
Firstly: The Legacy of Print

Every print product had to have everything, because content was siloed by the very nature of print. There were no hyperlinks. But now, when almost every article ever written is available within a few clicks, does it still make sense for every newspaper to pay journalists to write the same stories?
Secondly: The Rush for Reach
When newspapers did start publishing online, the old behaviour was reinforced, because they became stuck in a never-ending chase for search traffic. Stories had to be written, SEO’ed, and published on every breaking topic, just to maintain search visibility.
The result of this duplication is that structural inefficiency is deeply embedded within every traditional publisher.
How is this solved?
One clear solution, and most publishers are at least looking in this direction with some degree of interest, is the aggregation and curation of external content. By gathering and creating meaning out of the wealth of content already available on the web, news organisations can actually build a better product. To give a practical example: rather than writing up every breaking angle of an important event, a forward-thinking news organisation can pull in the relevant stories from its competitors, add a social layer by pulling in a live Twitter feed on the subject, and provide a quick, evolving summary of the story.
It is cheaper than duplicated reporting, it fulfils the publisher’s need for an SEO hook, and it serves the reader more effectively.
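To make the aggregation idea concrete, here is a minimal sketch, purely illustrative: all sources, headlines and the similarity threshold are invented, and real aggregation systems are far more sophisticated. It groups near-duplicate reports of the same event so an editor can link to one representative story rather than commissioning yet another rewrite:

```python
# Minimal sketch: cluster near-duplicate headlines so one link can stand
# in for many duplicated reports. All data and thresholds are invented.

def tokens(headline):
    """Lowercase word set of a headline, ignoring surrounding punctuation."""
    return {w.strip(".,:;!?'\"").lower() for w in headline.split()} - {""}

def similar(a, b, threshold=0.5):
    """True if two headlines' word sets overlap enough (Jaccard similarity)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) >= threshold

def cluster(stories):
    """Group (source, headline, url) tuples into near-duplicate clusters."""
    clusters = []
    for story in stories:
        for group in clusters:
            if similar(story[1], group[0][1]):
                group.append(story)
                break
        else:
            clusters.append([story])
    return clusters

stories = [
    ("Paper A", "United beat City 2-1 in derby thriller", "http://a.example/1"),
    ("Paper B", "Derby thriller: United beat City 2-1", "http://b.example/1"),
    ("Paper C", "Transfer window opens next week", "http://c.example/2"),
]
for group in cluster(stories):
    # One representative link per event; the rest are flagged as duplicates.
    print(group[0][1], "(+%d near-duplicates)" % (len(group) - 1))
```

Even this toy version makes the economics visible: three reports collapse into two distinct events, and the editorial effort shifts from rewriting to summarising and linking.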
To give a relevant case study, we are working with a large publisher in the African market that serves its local readership with international news and analysis. Instead of employing journalists to cover every major international story, we deliver an intelligent aggregation of the most important viewpoints, automatically curated and threaded. The editorial team then simply have to write a short summary of each story, providing the African perspective and cutting out the stage of duplicating the actual reporting. Readers get a much more varied and detailed analysis, and the news organisation operates on a much smaller news budget. As Jeff Jarvis says:
Cover what you do best, and link to the rest.
On Monday I will be posting about the various ways that news organisations can cut the cost of news, whilst better serving their readers. To get alerted to that post, add the feed to your RSS reader, or follow me on Twitter.