Originally posted here.
Introduction

A few of the Archives Unleashed team members have fairly deep backgrounds in working with Twitter data. Jimmy Lin spent time at Twitter during an extended sabbatical, Sam Fritz worked with members of the Social Media Lab prior to joining the Archives Unleashed Project, and Ian Milligan and I have done a fair bit of analysis and writing on our process of collecting and analyzing Canadian Federal Election tweets.
Juxta

A couple of years ago I wrote about a method for creating a collage out of 1.2M images collected from the 2015 Canadian Federal Election Twitter dataset. That method was very resource intensive in terms of the temporary disk storage required to create the collage. As the number of images in a given collage increased, the amount of temporary disk space required scaled steeply: 3.5T for 1.2M #elxn42 images, and ~90T for 6.
I’ve been collecting tweets to @realDonaldTrump since June 2017. Most recently, while pulling together and deduplicating the dataset, I asked myself, “I wonder how many occurrences of ‘fuck’ are in the dataset.” Or, how many fucks are there to give?
The data is updated by running a query on the Standard Search API every five days.
$ twarc search 'to:realdonaldtrump' --log donald_search_$DATE.log > donald_search_$DATE.jsonl

Which yields something like this every five days.
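To actually answer the question above, here is a minimal Python sketch (not part of the original pipeline; the filename and the tweet field names are assumptions) that tallies occurrences of a term across a line-delimited twarc JSONL file:

```python
import json

def count_term(jsonl_path, term="fuck"):
    """Tally case-insensitive occurrences of a term across the text of
    every tweet in a line-delimited JSON (JSONL) file, one tweet per line."""
    total = 0
    with open(jsonl_path) as fh:
        for line in fh:
            tweet = json.loads(line)
            # Extended tweets carry their text in "full_text"; older
            # payloads use "text". Fall back accordingly.
            text = tweet.get("full_text", tweet.get("text", ""))
            total += text.lower().count(term)
    return total
```

Substring counting will also match words like “fucking”; swap in a word-boundary regex if you want exact-word counts.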
At this past week’s Archives Unleashed datathon, I jokingly created some wordclouds of my Co-PIs’ timelines.
Finished my most likely bigly winning #hackarchives project: A Word Cloud of @lintool's timeline! https://t.co/eK2KPGjaGo
— nick ruest (@ruebot) April 27, 2018
Or, @ianmilligan1 #HackArchives https://t.co/qMxiet0osl
— nick ruest (@ruebot) April 27, 2018
Mat Kelly asked about the process this morning, so here is a little how-to of the pipeline: twarc, jq, and wordcloud_cli.
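The jq step in that pipeline just pulls each tweet's text out of twarc's JSONL output so wordcloud_cli has plain text to work with. A minimal Python equivalent of that extraction step, as a sketch (filenames and field names here are assumptions, not the original commands):

```python
import json

def extract_text(jsonl_path, out_path):
    """Write the text of each tweet in a twarc JSONL file to a plain-text
    file, one tweet per line, ready to feed to a wordcloud generator."""
    with open(jsonl_path) as src, open(out_path, "w") as dst:
        for line in src:
            tweet = json.loads(line)
            # Prefer "full_text" (extended tweets), fall back to "text".
            dst.write(tweet.get("full_text", tweet.get("text", "")) + "\n")
```

From there, the resulting text file can be handed off to the wordcloud step of the pipeline.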
This is the text for my presentation at the “National Forum on Ethics and Archiving the Web”.
I had the honour of being on an Archiving Trauma panel with some great people: Michael Connor, Chido Muchemwa, Coral Salomón, Tonia Sutherland, and Lauren Work. Thank you for sharing your stories!
The world is a beautiful and terrible place.
Twitter can be beautiful.
Twitter is fucking awful.
So, capturing traumatic events on Twitter.
Introduction

List of bots I run, divided up by type.
anon
- @gccaedits (IP address ranges)
- Periodic Twitter archive requests

diffengine
- @canadaland_diff (Caution: This account is temporarily restricted.)
- @cbc_diff (Account Suspended)
- @cpc_diff (Caution: This account is temporarily restricted.)
- @fairpressdiff (Caution: This account is temporarily restricted.)
- @globemail_diff (Caution: This account is temporarily restricted.)
- @greenparty_diff
- @lapress_diff (Caution: This account is temporarily restricted.)
- @liberalca_diff (Caution: This account is temporarily restricted.)
Tweets to Donald Trump (@realDonaldTrump)

59,261,490 tweet ids for tweets directed at Donald Trump (@realDonaldTrump), collected with Documenting the Now’s twarc. Tweets can be “rehydrated” with twarc, or with Hydrator.
twarc hydrate to_realdonaldtrump_ids.txt > to_donaldtrump.jsonl

Tweets from May 7, 2017 to June 21, 2017 were collected using a combination of the Filter (Streaming) API and the Search API. The Filter API collection failed on June 21, 2017; from June 23, 2017 forward, only the Search API was used.
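Hydration works by sending batches of tweet ids to Twitter's statuses/lookup endpoint, which accepts at most 100 ids per request; hydration tools chunk the id list accordingly. A hypothetical sketch of just that batching step:

```python
def chunk_ids(ids, size=100):
    """Yield successive batches of tweet ids, sized for the
    statuses/lookup endpoint's 100-id-per-request limit."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]
```

At 100 ids per request, a 59M-id dataset takes on the order of 590,000 lookup calls, which is why rehydrating a collection this size takes a long while under API rate limits.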
Overview

A couple of Saturday mornings ago, I was on the couch listening to records and reading a book when Christina Harlow and MJ Suhonos asked me about collecting #WomensMarch tweets. Little did I know at the time that #WomensMarch would become the largest-volume collection I have ever seen. By the time I stopped collecting a week later, we’d amassed 14,478,518 unique tweet ids from 3,582,495 unique users, and at one point hit around 1 million tweets in a single hour.
Background

Last August, I began capturing the #elxn42 hashtag as an experiment and potential research project with Ian Milligan. Once Justin Trudeau was sworn in as the 23rd Prime Minister of Canada, we stopped collection and began analysing the dataset. We wrote that analysis up for the Code4Lib Journal, which will be published in the next couple of weeks. In the interim, you can check out our pre-print here. Included in that dataset is a line-delimited list of the URL of every embedded image tweeted in the dataset: 1,203,867 images.
On November 13, 2015, I was at the “Web Archives 2015: Capture, Curate, Analyze” conference, listening to Ian Milligan give the closing keynote, when Thomas Padilla tweeted the following to me:
@ruebot terrible news, possible charlie hebdo connection - https://t.co/SkEusgqgz5
— Thomas Padilla (@thomasgpadilla) November 13, 2015
I immediately started collecting.
When tragedies like this happen, I feel pretty powerless. But, I figure if I can collect something like this, similar to what I did for the Charlie Hebdo attacks, it’s something.