York University Libraries Open Access Week 2012 - blogvsbook

Yesterday, York University Libraries held a debate in the Scott Library entitled, "Be it resolved that the blog replace the book." The debate turned out to be pretty awesome, and somehow the team arguing for the book won!? (Some might say it was because of @adr's compelling closing statements.)

Along with livestreaming the debate on Ustream, I pulled together a little node.js application (a special thanks to Ed Summers and his very permissive licensing) to display a "twitterfall" of the hashtag for the event. As is always the case, technology is bound to fail, somehow, someway, at a live event. It turns out we owe a very special thank you to the giant Amazon outage, which in turn took out Heroku's infrastructure. Good thing my paranoia urged me to use a backup application to snag an archive of the stream, with all of the variations on the hashtag.
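
For the curious, the basic idea behind a twitterfall is pretty simple: hook into Twitter's streaming API, track the hashtag, and push each tweet out as it arrives. Here's a rough sketch of that pattern using the twit npm module; the module choice, the placeholder credentials, and the #blogvsbook hashtag are my assumptions here, not Ed's actual code:

```javascript
// a rough sketch of the streaming pattern, not the actual app:
// track a hashtag on Twitter's streaming API and log each tweet as it arrives
var Twit = require('twit'); // npm install twit

var T = new Twit({
  consumer_key: 'YOUR_KEY',           // placeholder credentials
  consumer_secret: 'YOUR_SECRET',
  access_token: 'YOUR_TOKEN',
  access_token_secret: 'YOUR_TOKEN_SECRET'
});

// statuses/filter pushes matching tweets to us as they happen
var stream = T.stream('statuses/filter', { track: ['#blogvsbook'] });

stream.on('tweet', function (tweet) {
  // a real twitterfall would push this to the browser (socket.io, say);
  // logging it is enough to show the shape of the thing
  console.log(tweet.user.screen_name + ': ' + tweet.text);
});
```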

Enough about the debate, and Amazon's large internet burp! What I really want to talk about is some fun ways to play with the data we collected from the Twitter API. The backup application I mentioned earlier has some nice visualizations built into it, and it is a pretty slick and simple application to use. But, most importantly, I have a csv (deposited in the OCUL Dataverse site) of all the tweets, covering every variation on the hashtag I could find. Which means we (yes, you! Download the csv and have fun with this too!) can start doing some cool visualizations.

Inspired by @Sarah0s' "Dead easy data visualization for libraries" talk at AccessYUL, I decided to play with infogr.am to see how easy it would be to toss together a visualization of the number of tweets per user.


This is a fairly basic and easy one to make. You only need two columns: Twitter usernames and the corresponding number of tweets. Once you have those entered, just hit publish and you're good to go.
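
If you don't want to count the tweets by hand, a few lines of node will tally them from the csv. A minimal sketch, assuming a "from_user" column and a file name I've made up (check the headers in the actual download):

```javascript
// a minimal sketch: tally tweets per user from the exported csv.
// the file name and the "from_user" column are assumptions on my part
var fs = require('fs');
var parse = require('csv-parse/lib/sync'); // npm install csv-parse

var rows = parse(fs.readFileSync('blogvsbook.csv', 'utf8'), { columns: true });

var counts = {};
rows.forEach(function (row) {
  counts[row.from_user] = (counts[row.from_user] || 0) + 1;
});

// print "username,count" pairs, highest first -- ready to paste into infogr.am
Object.keys(counts)
  .sort(function (a, b) { return counts[b] - counts[a]; })
  .forEach(function (user) {
    console.log(user + ',' + counts[user]);
  });
```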

So, that is something quick and easy. I have "Designing Data Visualizations" on the way; hopefully it inspires me a bit more, and maybe I'll start playing with d3js again. It should be fairly straightforward to drop the csv into Google Refine and get some json back. In the interim, I'll leave it up to Bill Denton to show us some really cool stuff with the data in R.
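
To make that concrete, here's roughly what the d3 end could look like: a quick horizontal bar chart of tweets per user, written against the d3 v3-era API. The json shape ([{"user": "...", "tweets": 12}, ...]), file name, and field names are all my assumptions about what Refine might export:

```javascript
// a quick d3 (v3-style) sketch: horizontal bars of tweets per user,
// assuming tweets.json looks like [{"user": "...", "tweets": 12}, ...]
d3.json("tweets.json", function (error, data) {
  if (error) throw error;

  var barHeight = 22;
  var svg = d3.select("body").append("svg")
      .attr("width", 560)
      .attr("height", data.length * barHeight);

  var max = d3.max(data, function (d) { return d.tweets; });

  // one bar per user, scaled against the busiest tweeter
  svg.selectAll("rect")
      .data(data)
    .enter().append("rect")
      .attr("y", function (d, i) { return i * barHeight; })
      .attr("height", barHeight - 2)
      .attr("width", function (d) { return d.tweets / max * 400; });

  // label each bar with the username and count
  svg.selectAll("text")
      .data(data)
    .enter().append("text")
      .attr("y", function (d, i) { return i * barHeight + 14; })
      .attr("x", function (d) { return d.tweets / max * 400 + 5; })
      .text(function (d) { return d.user + " (" + d.tweets + ")"; });
});
```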
