The website for Elsevier Labs (where I work) went online this week; do check it out and provide feedback on what kind of content you would like to see on there. One of the components on the front page is a Twitter feed. So far I have avoided Twitter, because I thought it was a bit presumptuous to assume that people would actually care about my 140-character thoughts. So one of my first suggestions for "improvements" was to ask if we could include feeds from Quora and LinkedIn, where I am actually somewhat active.
While making the suggestions, I also checked what was available, and found this unofficial API for Quora written by Christopher Su, complete with a Heroku server that you can connect to. So as a little proof of concept, I decided to see if I could pull out data about my own activity on Quora over the last 2 months. Sadly, I couldn't find a LinkedIn API for user activity; if you know of one, please let me know and I will check it out.
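If you just want to poke at the API before writing anything bigger, here is a quick sanity check against the profile endpoint. The field names are simply the ones my script below reads; the full response contains more than this, so treat it as a sketch rather than a description of the API.

# -*- coding: utf-8 -*-
# Quick check of the profile endpoint; prints only the fields my script uses.
import json
import urllib2

resp = urllib2.urlopen("http://quora-api.herokuapp.com/users/your-quora-id")
pjson = json.load(resp)
resp.close()
for field in ["name", "answers", "followers", "following"]:
    print("%s: %s" % (field, pjson.get(field)))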
The code is really simple, and uses just two of the six or so services provided by the Quora API: the profile service and the answer activity service. Here is the code to pull out my activity.
# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
from datetime import datetime, timedelta
import urllib2
import yaml

# pull down my profile and my answer activity from the (unofficial) Quora API;
# the service returns JSON, and yaml.load parses it since JSON is essentially valid YAML
profile = urllib2.urlopen("http://quora-api.herokuapp.com/users/your-quora-id")
pjson = yaml.load(profile)
profile.close()
answers = urllib2.urlopen("http://quora-api.herokuapp.com/users/your-quora-id/activity/answers")
ajson = yaml.load(answers)
answers.close()

now = datetime.utcnow()
print("<html>")
print("<head><title></title></head><body>")
print("<ul>")
for item in ajson["items"]:
    # strip the HTML markup from the answer summary and keep a short snippet
    summary = BeautifulSoup(item["summary"])
    summary_text = summary.get_text()[10:100]
    pubdttm = datetime.strptime(item["published"], "%a, %d %b %Y %H:%M:%S %Z")
    # only show recent posts (up to 2 months old)
    if pubdttm <= now - timedelta(weeks=8):
        continue
    print("<li>%d/%d/%d: " % (pubdttm.month, pubdttm.day, pubdttm.year))
    print("%s (answers: %d, followers: %d, following: %d) " % (
        pjson["name"], pjson["answers"], pjson["followers"],
        pjson["following"]))
    print("answered <b>%s</b>: <a href=\"%s\">%s...</a>" % (
        item["title"], item["link"], summary_text))
    print("</li>")
print("</ul>")
print("</body></html>")
And it produces an HTML snippet that (without any styling) looks something like this:
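The exact contents depend on what I have been answering, but structurally each recent answer becomes one list item, along these lines (the values below are placeholders, not my actual numbers):

<ul>
  <li>M/D/YYYY: Your Name (answers: N, followers: N, following: N) answered <b>Question title</b>: <a href="question-url">a short snippet of the answer...</a></li>
  ...
</ul>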
So in any case, that was all I had for this week. I plan on being more active on Twitter going forward, because over the last few years I have observed a few good friends using it to share papers and articles, which seems like a good use case for it; blogs are a bit too heavyweight for that sort of thing. In fact, I have already started doing so; you can find my posts by searching for @palsujit. As part of this effort I will also, at the risk of being considered incredibly self-serving, start promoting my own blog posts on Twitter. I also hope to write meatier blog posts here once things have stabilized a bit and the learning curve at the job is not as steep as it is now.