
Is Differentiated Content Enough To Save Newspapers?

Daniel Tunkelang

The Guardian headline sums it up: “Big newspaper sites ‘erode value of news’, says Sly Bailey”. Sly Bailey is the chief executive of Trinity Mirror, one of the UK’s largest newspaper publishers. Here’s what she has to say:

A consumer is now as likely to discover newspaper content on Google, visit our sites, then flit away before even discovering that it was the Daily Mirror or the Telegraph that created the content in the first place.

Or worse, they may visit an aggregator like Google News, browse a digital deli of expensive-to-produce news from around the world, and then click on an ad served up to them by Google. For which we get no return. By the absurd relentless chasing of unique user figures we are flag-waving our way out of business.

So far, so good: she’s making the devaluation argument we’ve hopefully all seen by now, and one I agree with.

It’s where Bailey goes next that intrigues me:

She called for a change to the accepted norms, arguing that publishers could “reverse the erosion of value in news content” by rejecting a relentless quest for high user numbers, in favour of a move away from “generalised packages of news” to instead concentrate on content with “unique and intrinsic value”.

On one hand, of course it’s necessary for publishers to offer unique value, regardless of how they can monetize it, or else they commoditize themselves by default. On the other hand, it may not be sufficient. Without effective monetization, publishers create value but don’t capture it. That’s fine if you are Wikipedia (which certainly offers content with “unique and intrinsic value”) and manage to get by on donations. But it doesn’t work so well if you are an online newspaper whose efforts serve more to line Google’s pockets than your own.

Let me make this last point more concrete. Say that you’re an online newspaper, and you invest in developing unique content. Google will happily index your content (assuming you allow it to), and thus you create value on the web. You can even monetize some of that value, by delivering ads to people who visit your site. But Google delivers at least as many ads to those same people, with much less effort. Moreover, as long as Google is the gateway to your content (which is the status quo), you’re unlikely to change that distribution of rents, or to build reader loyalty.

What you really want as an online publisher is for people to seek out your content, not just to stumble into it through search engines and aggregators. I’m curious what would happen if a critical mass of publishers used robots.txt to stop being crawled, and publicly announced that they were doing so. In the short term, they’d lose a significant amount of traffic, and that short-term hit in the current economic climate might amount to fiscal suicide. But in the long term it may be the only way for publishers to prove their own brand value, something they may have to do in order to bring Google and their other bêtes noires to the negotiating table.
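To make that concrete, here is a minimal sketch of what such a robots.txt might look like. The directives are standard, but the site name is hypothetical, and whether to shut out only Google’s crawler or every crawler is a policy choice each publisher would have to make for itself:

# robots.txt served from the site root, e.g. https://www.example-newspaper.com/robots.txt
# Block Google's crawler from the entire site...
User-agent: Googlebot
Disallow: /

# ...while leaving the door open to all other compliant crawlers
# (an empty Disallow means nothing is off limits)
User-agent: *
Disallow:

Compliant crawlers check this file before fetching pages, so a publisher could flip the switch overnight; the harder part, as noted above, is absorbing the traffic loss while waiting for readers to come to the site directly.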

There are alternative strategies, such as requiring registration or putting up paywalls. But those have the disadvantage that they break the broader link economy (though I may be using the phrase slightly differently from Jeff Jarvis), which on the whole is quite different from the relationship between publishers and search engines / aggregators. I at least believe that The Guardian obtains more brand credit from someone clicking through this post than from someone seeing it in a sea of search results or aggregated news articles. I recognize that the distinction isn’t always black and white, e.g., aggregators like Techmeme concentrate heavily on a small set of sites that readers recognize over time. In general, however, I’d say there is a difference between following a deliberate citation and clicking through a link produced without any human intentionality.

I realize I may come across like a romantic, emphasizing the human element, but the distinction I’m after isn’t sentimental. Rather, it’s the idea that the long-term value of a publisher depends on readers knowing and caring who the publisher is. They need to break through the commodified experience of search engines that, by design, dilute the differentiation among brands. In any case, the current path for many publishers looks like tragedy without the romance. Those that aim for long-term survival will have to take some chances to buck this inertia.

Link to original post
