eHow: an interesting community that pays for content

May 7th, 2008 - by pafalafaga

I’ve been playing around with eHow for about a month now, and I must say, I’m pleasantly surprised and possibly mildly addicted. The site invites user-submitted content on “How to” topics. This can be pretty much any topic of your choosing (no matter how much of a stretch), as long as you can present it in a Step 1, Step 2-style format.

It may come as a surprise to some to find out that there are sites out there…tons of them actually…that will pay you to post content. The usual model works like this: we’ll build a site, you fill it with content — writing, photos, video — and we’ll split the ad revenue.

That’s fine, in theory, but many such sites suffer from two key failings. One, they just don’t (yet) drive enough traffic to generate many views of your content, much less clicks on ads. Two, they’re not particularly generous in their sharing arrangements.

eHow may well be the rare exception. They’re a long established site, and pretty visible in search results. And while they don’t provide details on how they split the take on ad revenue, I’m already making more per day from eHow than I do from my two personal websites that I’ve been working on for years now.

It’s hard to generalize eHow earnings. Some articles sit and do nothing…no views, no earnings. Others get lots of views, lots of earnings. And there are a few oddballs with a ton of views but zero revenue, and vice versa.

Overall, though, a typical article pulls in about two to four cents a day…let’s say three cents, for the sake of discussion. It takes me about a half hour of work to write and post an article to eHow. So, you say, why bother, just to get three pennies for a half hour’s work?

Once you post to eHow, that article is up there. Presumably, forever! Three cents a day becomes about ten dollars a year, and $100 in ten years. Which is not bad for a half hour of work. Post a hundred articles, and the figure becomes $10,000 in ten years. A thousand articles and…you get the picture.
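For anyone who wants to play with the assumptions, the back-of-the-envelope math above fits in a few lines of Python. The three-cents figure is my own rough average, not anything eHow publishes, and the sketch assumes every article earns at a flat rate for the whole period:

```python
# Rough projection of cumulative eHow earnings, assuming a flat
# average of three cents per article per day (my estimate above).
CENTS_PER_ARTICLE_PER_DAY = 3

def projected_dollars(num_articles: int, years: int) -> float:
    """Total earnings in dollars, ignoring growth, decay, or compounding."""
    days = years * 365
    return num_articles * CENTS_PER_ARTICLE_PER_DAY * days / 100

print(projected_dollars(1, 10))    # one article, ten years: ~$110
print(projected_dollars(100, 10))  # a hundred articles, ten years: ~$11,000
```

In reality, of course, articles posted gradually over ten years earn for less than the full decade, so treat these as upper-bound figures.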

I’m up to 60 or 70 articles so far (and I’m not proud of all of them, but so far, I’m leaving them there).

Of course, there’s no knowing if the model will hold for ten years. Things may go bust for a variety of reasons. But then again, they may improve. I’ve already gotten much faster at posting new items, and I feel like I’m learning how to target good themes, match up with the ad language, and drive a bit of traffic to the articles.

The money part is intriguing, no doubt. There’s also quite a nice community over at eHow, which I’m coming to enjoy. For anyone looking for a place to spend some time, earn some spare change, and tell the world how to do things, eHow is probably the place for you.

Cheers, all.


UofM says paid Q&A sites get best results

April 15th, 2008 - by eiffel

Harper, Raban, Rafaeli & Konstan from the University of Minnesota have investigated online Q&A services to find predictors of answer quality. In their paper, they report that:

First, you get what you pay for in Q&A sites. Answer quality was typically higher in Google Answers (a fee-based site) than in the free sites we studied, and paying more money for an answer led to better outcomes. Second, we find that a Q&A site’s community of users contributes to its success.

This doesn’t come as a surprise to me, but the breakdown of judged answer quality is interesting:

  • Quality score 0.68 – Google Answers $30 questions
  • Quality score 0.59 – Google Answers $10 questions
  • Quality score 0.51 – Yahoo Answers
  • Quality score 0.41 – Google Answers $3 questions
  • Quality score 0.41 – Library reference services
  • Quality score 0.40 – Microsoft Live QnA
  • Quality score 0.33 – AllExperts

Bobbie7’s answer to “Which actress has the first female line in a talking movie?” was highlighted in the report.

One of the luxuries of academic research is being able to take your time. The paper was published earlier this month, but Google Answers closed in 2006.

Spotlighting Google Answers Questions and Researchers

March 13th, 2008 - by eiffel

Google operated the Google Answers service for four-and-a-half years, and during that time regularly highlighted questions and researchers deserving of a moment in the spotlight.

Here, for posterity, are the questions which were awarded “Question of the week/month/quarter”:

“Safety of Helicopter/Plane Tour of Grand Canyon”

“Trade Magazines in Western Europe”

“13 Companies That I Need Information About”

“People Who Don’t Wear Jeans”


“Pay rates for Google Answers research”

“Explanation of a Poem”

“Constitution vs Articles of Confederation”

“TV Show: The Greatest American Hero”

“Harrison Ford testimony”

“Seeing Life from the Other Side”

“Camus and Quotation”

“After Dinner Toasts”

“Educator in jail setting needs information to pass on to those who qualify”

“Spatial Geometry, Physics, Mathematics”

“MMR vaccine and Autism”

“Men’s Typical Outfits in Ireland”

“Town Listed in Germany”

“Jason Project”

“Login Names (Like Pinkfreud)”

“Economic Systems”

“Gravitational Forces”

“Animals that Communicate the Location of Food to Other Animals”

“Faces of Americans”

“How to Use This Site, and Get Answers?”

“Christmas Trees”

“New Years Traditions in United States”

“Algorithm for Optimization of Sheet Metal Cutting”

“Translate German to English”

“Making a Living”

“Cause, the Chemistry, and the Consequences of the Halifax Explosion”

“Parents Away From 3-yr Old For a Week – How Should We Handle It?”

“Russian Emigrees in Paris, 1920-1939”

“Sewing Projects”

“Greenland Pool”

“Drowning in a reorganization dilemma”

“Helen Keller”


“Captain Cook 18th Century Medals”



Here are the researchers who were awarded “Researcher of the week/month/quarter”:


secret901-ga


UNdata

March 6th, 2008 - by eiffel

UNdata isn’t the antithesis of data (in the way that UNcyclopedia is the antithesis of an encyclopedia).


UNdata is a remarkably convenient way to access statistical data collected from the many and varied international governmental organizations that make up the United Nations.

Over 55 million database records are held on the site. A free-text search box helps you locate the data sets of interest, as does an “Explore” link. Once you locate the data you are looking for, you can refine it by applying filters, and can choose a column as a “pivot” to produce a cross-tabulation.

Best of all, you can then click “Link to this page” to create a static URL to your customized version of the data. For example, I looked up the land area of protected spaces (parks, etc) and applied a filter to restrict the results to Australia and Iraq. Choosing “Country” as the pivot column changed it to a two-dimensional table showing changes to protected spaces by country and year, allowing me to obtain this link to my data table.
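The filter-then-pivot workflow UNdata offers is the same reshaping you might do locally in pandas. Here is a minimal sketch; the country/year figures below are invented placeholders, not real UNdata values:

```python
import pandas as pd

# Long-format records of the kind UNdata returns (numbers are made up).
records = pd.DataFrame({
    "Country": ["Australia", "Australia", "Iraq", "Iraq"],
    "Year": [2000, 2005, 2000, 2005],
    "Protected area (km2)": [600_000, 650_000, 500, 900],
})

# Filter to the countries of interest, then pivot on "Country"
# to produce a two-dimensional year-by-country cross-tabulation.
subset = records[records["Country"].isin(["Australia", "Iraq"])]
table = subset.pivot_table(
    index="Year", columns="Country", values="Protected area (km2)"
)
print(table)
```

The point of the site, of course, is that it does this for you and hands back a stable URL, so no local tooling is needed.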

If the presentation of complex data is important to you, you should visit Gapminder. Choose the Gapminder World option, and you can view five-dimensional data in a very intuitive way. Suppose you were interested in trends in carbon output on a country-by-country basis, correlated with wealth and life expectancy, and varying over time – it’s no problem. Put carbon output on one axis, wealth (GDP) on the other, make the size of each data point represent life expectancy, and hit the “Play” button for a dramatic presentation of how it changes over time. Colour coding is used to indicate countries, which can be highlighted or labelled if you like.
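That multi-dimensional encoding is easy to sketch yourself: two axes plus marker size give three data dimensions per point, colour or labels give a fourth, and animating over years adds the fifth. A minimal bubble chart in matplotlib, with invented country names and numbers purely for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Invented rows: (country, GDP per capita, CO2 per capita, life expectancy)
data = [
    ("Lowland", 4_000, 1.2, 62),
    ("Midland", 25_000, 7.5, 78),
    ("Highland", 45_000, 9.8, 82),
]

fig, ax = plt.subplots()
for name, gdp, co2, life in data:
    # x = wealth, y = carbon output, bubble size = life expectancy
    ax.scatter(gdp, co2, s=life * 4, label=name)
ax.set_xlabel("GDP per capita (USD)")
ax.set_ylabel("CO2 output per capita (tonnes)")
ax.legend()
fig.savefig("bubble_chart.png")
```

Redrawing a frame like this for each year of data is, in essence, what Gapminder’s “Play” button does.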


Does that sound like hard work? Well, sit back and watch Hans Rosling use the Gapminder animated charts to take you on a dramatic video tour of world data. He presents several data sets to illustrate world problems and to suggest insights into possible solutions. He’s an engaging presenter, so do watch the video until the end, where there is a most entertaining finale!

Dodging misinformation

February 5th, 2008 - by eiffel

In my previous post I mentioned a heuristic which we can use to help judge the reliability of what we read: Has a fact been derived from a single grand vision, or from many different ideas?

The idea is that a fact derived from the convergence of many different ideas is more likely to be reliable and robust than one derived from a single grand vision (which might turn out to be a single grand delusion).

What are some other heuristics that we can use when trying to dodge misinformation? There are plenty of good suggestions at Google Answers, on a question entitled Quality of Information. The question itself is delightful to read, because it’s so beautifully written.

Here is a synthesis of ideas offered by pinkfreud-ga (who posted the answer), plus commenters journalist-ga, j-philipp-ga, aceresearcher-ga, luciaphile-ga and voila-ga (all of whom were Google Answers Researchers).

  • The context of a website can cast doubt on the authority of its content. Can you trust information placed on a site whose purpose seems to be not the spread of knowledge, but the spread of animated graphics, uninvited MIDI music and intrusive pop-ups? (It’s a joke.)
  • Likewise, the purported authorship of a website can cast doubt on the authority of its content. Would you rely on a source identified only as armadillogirl?
  • Poor spelling can be a warning (Kemlo’s posts excepted, of course). If someone hasn’t taken the time to check the spelling of their text, are they likely to have taken the time to check its correctness? It’s like your local takeaway: there’s no hygiene reason why the outside of their windows must be clean, but if they have cleaned their windows then they have probably also cleaned in more important places too.
  • Does the website have an identifiable agenda? Even if it’s unrelated to the information that you are interested in, an agenda increases the possibility that the information has not been assembled in a rigorous and balanced way.
  • Is the site hosted at a location that is generally used for the dissemination of information? It’s prejudice I know, but a MySpace site may not be as reliable as an .edu site.

This might all sound a bit discouraging, but there are also some positive indicators:

  • Are sources cited?
  • Has time and care been taken to lay out the site?
  • Does the site appear to be motivated by the desire to spread knowledge?
  • Is the site regularly kept up-to-date?

Some types of content should raise warning flags. This kind of content is not always untrustworthy, but you do need to cross-check:

  • Quotes and their origins. Misinformation gets repeated as gospel. You need to check the original source if it is at all possible.
  • Something unbelievable yet somehow compelling. This is the stuff of which urban legends are made. If it were so unbelievable yet true, it would be more widely mentioned and discussed than on niche websites.
  • A “fact” that could hurt the reputation of a person or a company. It’s quite likely that it’s nothing more than malicious fiction.
  • Someone wants to sell something. Maybe use their website if you want to buy what they are selling. Don’t rely on the website for anything else.
  • Someone is seeking help of some sort by a mass appeal. Way less than one percent of these are going to be genuine. If you want to help, cross-check with other sources. A genuine appeal will be verifiable.
  • Someone is warning that your health (or that of your PC) is in peril. As if they would really know which files you should delete on your computer, or where you should send your bank account details, to fix this fake “problem”.
  • Something is claimed to be true but cannot be explained by science. Well maybe it is true, but if neither they nor you can prove it, then all bets are off. Believe in it if you like, but don’t pass it on as a fact.
  • Knowledge is claimed to be suppressed by a conspiracy. It’s tempting to dismiss all of these claims as being made by nut-cases, but history shows that some of the conspiracy theories turn out to be true. The problem is that usually no-one knows which ones are true until the government archives are released fifty years later. If you want to establish claims of a conspiracy as fact, you need to look for evidence elsewhere, and not simply accept the word of the conspiracy site.
  • An extremely good or bad claim. So you’ve apparently won the lottery, or have a rare disease and will die tomorrow? It might be worth looking for evidence elsewhere.

J-philipp-ga warns us about statistics in general: there’s a saying that statistics are like bikinis (“What they reveal is suggestive, but what they conceal is vital.”). Unless a website has a good reason to make a statistic out of the information, look further and use the original data instead.

Finally, pinkfreud-ga gives us a thought to ponder:

Brownie points for good humor and wit (genuinely funny people are, in my experience, more careful with facts and tend to be more trustworthy than are humorless wretches).

I never thought of it that way before, but I agree, and I like it that way.

Authoritative Misinformation

January 10th, 2008 - by eiffel

So you want some information, and you need it to be more reliable than the average web page. Who do you turn to?

You make the effort to track down an authority.

Not so fast. It doesn’t always work.

Pope Urban VIII, who was learned in the sciences, ratified the statement that “The proposition that the Earth is not the centre of the world and immovable, but that it moves, and also with a diurnal motion, is equally absurd and false philosophically, and theologically considered at least erroneous in faith.”

George Bush Junior and Tony Blair were quite sure that Iraq was stockpiling weapons of mass destruction.

OK, so maybe it’s not the best idea to depend upon political or religious authorities. What kind of authorities can we rely on?


Not so fast.

All scientists? Would you rely on scientists employed by a drug company, if you’re seeking information about that company’s drug? No? How about government scientists then?

That wouldn’t have helped if you had wanted to know about a possible connection between BSE and CJD in the 1990s. New Statesman suggests that the “independent” scientists contracted by the Ministry of Agriculture, Fisheries and Food were anything but. And Dr Harash Narang, the BSE/CJD whistleblower, was stripped of his authority as a result of his persistent warnings about BSE’s linkage to CJD.

What about a scientist of Einstein’s calibre? Sure, he got the theory of relativity right, but he was too quick to dismiss Alexander Friedmann’s “expanding universe” solution to his (Einstein’s) equations, and also to dismiss Lemaître’s early insights towards what would become Hubble’s Law.

So what can you do, as a researcher, if you can’t depend on authority? In a slow-moving field, you can look for a developed consensus. The Wikipedia “talk” pages can help you to discover whether consensus has formed or not.

In a quick-moving field, you can examine the evidence and draw your own conclusions – but that’s not easy unless you’re a specialist in the field. It can help to use sites such as zFacts which attempt to distill the salient facts about controversial issues.

Sometimes it’s worth checking out the “debunking” sites. Uncomfortable though it may seem, sometimes it’s easier to establish the validity of a debunking than the validity of the original postulate. One such site is often worth checking out. It’s not intellectual but it often has its finger “on the pulse”.


Whether you have faith in what is said by authority figures, or whether you know that they can be as fallible and misguided as the rest of us, you may enjoy Christopher Cerf’s book The Experts Speak: The Definitive Compendium of Authoritative Misinformation. It’s by no means the only book in this nook though. There’s also 776 Stupidest Things Ever Said and Facts and fallacies: A book of definitive mistakes and misguided predictions.

On a more serious note there’s Expert Political Judgment: How Good Is It? How Can We Know? One of the reviews at the Amazon page for that book says in part:

This book is a rather dry description of good research into the forecasting abilities of people who are regarded as political experts. It is unusually fair and unbiased. His most important finding about what distinguishes the worst from the not-so-bad is that those on the hedgehog end of Isaiah Berlin’s spectrum (who derive predictions from a single grand vision) are wrong more often than those near the fox end (who use many different ideas).

Now that’s really interesting because it gives us a heuristic which we can use to help judge what we read: Has a fact been derived from a single grand vision, or from many different ideas?

I like that.

In a future Web Owls post I will explore other heuristics that we can make use of when trying to dodge misinformation.

ResearchWikis for free market research

January 3rd, 2008 - by eiffel

If I had to pick a topic that I thought was unsuited to wikis, market research would be that topic. Sources are closely guarded, figures are usually unverifiable and sometimes unsubstantiated, and the entire market research industry is built on some rather flimsy assumptions. No, a market research wiki could never work.


That hasn’t stopped ResearchWikis from making a good go of it. I checked a few of their pages, such as Aluminum Market Research, and found usable though fairly basic information covering market background, market structure, industry definitions, market metrics, industry players, trends, recent developments, and some sources. This would make a good first port of call; an overview and familiarization pass before settling down to some serious market research work.

The initial market research looks like it has been seeded by ResearchBuy, who are more than happy to sell you a more advanced report or to provide you with custom research.

The site is being actively maintained, but just by one user named John. It will be interesting to see how ResearchWikis holds up once people start editing the site in earnest.

May auld acquaintance

December 26th, 2007 - by pafalafaga

Happy New Year Everyone!

Here’s how they ushered in the new year of 1795, according to the Times of London.


Vive la Bagatelle, indeed. A new year and a merry one, to one and all.



WoTY W00t!

December 13th, 2007 - by pafalafaga

I’m going to try something bold, and see if I can post an image here…somehow, it never seems to go as smoothly as I hope it will.

W00t! (not Woot) — a word I first came across in online groups just like this one — is officially the Merriam Webster Word of the Year (WoTY) for 2007.

So, for no particular reason, I decided to plot its dramatic emergence…hence, my image:

The Rise and Fall and Rise of W00t!

W00t!  I think it worked! 

But…what the heck was going on in 2000!!!!  Oodles of W00ts! at Millennium celebrations?  W00tsclamations about Y2K?

Your speculations are actively encouraged. 

Addendum:  And here’s my new favorite Adsense ad:

A Great New Resource…sigh

November 30th, 2007 - by pafalafaga

I’m here to blow off some steam.

What is it about university-based search engines that makes them — without exception — so frustratingly clumsy?

The latest entry from Carnegie Mellon University, the Universal Digital Library (aka The Million Book Project), should have us all jumping for joy.  The Million Book Project does exactly what it says: it makes a million-plus volumes available for immediate online access. 

Wow!  This is a phenomenal accomplishment.  Amazing.  Undreamed of a mere two decades ago.  The entire world now has instant access to a large research library, covering just about any topic under the sun, and in multiple languages too.

But just try using the danged thing, and you might find your enthusiasm quickly fading.

First off, the images aren’t web-compatible, nor are they based on a common add-in like Adobe PDF.  You need to download not one, but two, separate viewers in order to see the books themselves. 

The viewer downloads don’t happen automatically, when you try to view a book.  Instead, your viewing attempt will simply fail, with no explanation of why.  You need to find the instructions squirreled away in the FAQs, and go through the (unusually cumbersome…including a requirement to register) process for obtaining the software.

Then, if you know exactly what book you’re looking for, you can do a quick Title or Author search.   My search for “Oliver Twist” pulled up 18 copies of, essentially, the same book.  (While this may be useful for scholars wanting to compare editions, one wonders whether it was the best use of limited resources?)

Ready to read Oliver Twist?  Perhaps the book you click on will open, perhaps not.  The volumes housed on the library’s China server, in particular, seem to go through 45 minutes’ worth of firewalls before deciding whether to grant access or not.

But if you’re lucky enough to get an image, you can begin reading…one page at a time!  Click to open the page.  Wait. Adjust the viewer format.  Read the page.  Click.  Wait.  Adjust the viewer format again. Read the next page.  Click. Wait.  Adjust the…

There’s no way to access a chapter at a time or, heaven forbid, download the entire book.

Want to search within Oliver Twist for a particular passage you recall from your school days?  Sorry.  No in-the-book searching is available!

I don’t mean to diminish the exceptional accomplishment of the Universal Digital Library…it really is a momentous achievement.  But it just doesn’t flow the way one has come to expect tools on the internet to flow.  For some reason, university-based systems just don’t seem able to manage the flow. 

I’ve written before about the Making of America, and other digital content online at the University of Michigan.  MoA is one of my favorite historic research tools, but it’s so damned slow and cumbersome — right down to its unwieldy URLs — that it seems to be deliberately designed to hide itself from the research community, and to frustrate its users once they happen to find it.

The Internet Archive, which grew up at UC Berkeley, is another university-launched frustration.  Without a doubt, this is one of the internet’s great resources, but still, it’s so hard to maneuver around and search that it can make you crazy.  They toyed with full-text search capabilities a few years ago, but it never worked well, and has long since disappeared from view (I can’t even find it in the archive of the Archive!).

Like internet-savvy researchers everywhere, I’ve grown familiar with the fast, easy-access capabilities of in-the-book search engines like Amazon and Google Books, or commercial services like Questia.  Perhaps I’m being unreasonable, but I expect to see these in any online collection, whether of library books, or web pages.  Why can’t universities seem to manage this?

Of course, the Universal Digital Library, MoA, and the Internet Archive all operate on a shoestring, and don’t have the resources of Google or Amazon…or even tiny Questia…to add a lot of capabilities like a user-friendly design and full-text searching.

But somehow, the non-profit Wikipedia manages to do it!

I will now return you to the 20% whimsy portion of your program.