
Monday, 7th June 2010

British Library Comments on James Murdoch's Accusations

A couple of weeks ago we posted about James Murdoch (CEO of News Corp. and Rupert Murdoch's son) making some outlandish comments about the British Library's plan to digitise all of the public domain newspapers in its massive collection, with the hope of also getting publishers to agree (with compensation) to allow in-copyright content to be digitised.

What Mr. Murdoch had to say illustrated that he or his staff had not read the carefully worded British Library news release and were not aware of other British Library digitisation projects.

We have links to all of the materials and sites in this May 21, 2010 ResourceShelf post.

Today, an article in The Guardian titled "James Murdoch v the British Library" revisits his comments from the end of May.

Basically, Mr. Murdoch doesn't like content being made available at no charge via the library (even though it's out of copyright) because, in his view, it hurts the market.

In his words: "Public sector interest is to distribute content for near zero cost harming the market in so doing, and then justifying increased subsidies to make up for the damage it has inflicted."

We also hear from the British Library:

Patrick Fleming, an associate director at the British Library, says that Murdoch's criticisms are "patently not true" because the library is treading very carefully. Its idea is to digitise newspapers from before 1900 which should be out of copyright because the copyright in a newspaper article extends to the life of the writer plus 70 years. Meanwhile, Fleming says, "any newspaper published after 1900 will only be made available with the consent of the copyright owner". Once digitised, newspapers will be made available free "in the British Library reading rooms", matching today's print-based model, and online via "a micropayment website" that will be run by Brightsolid as it hopes to sell articles to amateur genealogists and anybody interested in history.

Precisely. These are the points made in the original news release, which we reacted to and suggested Mr. Murdoch or one of his people read:

1) Newspapers that are out of copyright (pre-1900) will be digitised.

2) Newspapers published after that date will only be made available with the consent of the copyright owner.

The article goes on to say:

Not only will none of News International's titles be digitised without permission, but in a major retreat the Times archive pre-1900 won't be digitised, Fleming says, because the Times has already done that. "The British Library purchases access to the Times online library" and so out of copyright Times archive is kept behind a paywall, because it is difficult to copy, or even to access, piles of old newspapers.

The Times of London archive (1785-1985) has been available from Gale for several years. The Times is owned by Mr. Murdoch's News International.

The Guardian article goes on to call Google a "media pirate."

Well, that can be the topic of another post. Briefly, it is not clear whether the author is speaking about Google's web crawler or about other projects (like Google Book Search) where the company is making deals, digitising the content itself, and dealing with material that is still in copyright. Again, the BL newspaper project will include in-copyright newspaper content only if the publisher/owner of that content agrees to it. If they don't agree, it will not be digitised by the BL. It's an opt-in arrangement.

Google Book Search takes the opposite approach: content is digitised with or without the permission of the copyright holder, and authors (or whoever holds the copyright) must opt out of the program.

As for the Google crawler (Googlebot), anyone who places content on the web and doesn't want it crawled and made accessible via Google, Bing, or any other search engine can add a few lines to the server (a robots.txt file) or to an individual page (a robots meta tag), and the material will not be crawled. It's rather simple. The robots.txt file is perhaps the best-known and easiest way to make content inaccessible via web search engines.
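For illustration only, a minimal robots.txt placed at the root of a site might look like this (the directories named here are just examples):

    # Ask Googlebot to skip an example archive directory
    User-agent: Googlebot
    Disallow: /archive/

    # Ask all crawlers to skip an example private directory
    User-agent: *
    Disallow: /private/

To keep an individual page out of search results, a robots meta tag such as <meta name="robots" content="noindex, nofollow"> in the page's HTML head does the same job at the item level. Note that these directives work because well-behaved crawlers choose to honour them.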

Source: The Guardian
