The Social Media Index (SM-Index): A Pilot Project

As promised last week, I have put together a pilot Social Media Index (SM-Index) for free open-access medical education (FOAM) websites. Twenty-five sites were included in this analysis. They were selected from the sites that I frequent as well as any that were volunteered when I asked on Twitter. This article discusses the thinking behind the creation of the SM-Index and the methods used to calculate it. The index itself is located here: SM-Index.

Note that this is not a novel concept. There has been a social media index created for blogs/people (here and here) as well as companies. However, to my knowledge, nobody has done this for our particular area of medical education or for the same reasons.

As I anticipated the index being somewhat controversial, I have done my best to keep the methodology as transparent as possible. If you e-mail me I will even send you the Excel file data used to calculate the values.

Why create a SM-Index?


Three reasons (elaborated on in posts A Market of Ideas and Educational Scholarship: Measuring the Impact of FOAM):

  • Open-access medical education needs a way to measure output relative to others for promotion/tenure/recognition/etc in the same way that classical academia measures theirs through publication numbers, the h-index, etc. Florida State University was one of the first to call for these contributions to be recognized in the report of its 2010 Open Education Resources Task Force (see page 26). Thanks to Javier Benitez for the reference.
  • I suspect that post-publication peer review correlates directly with the number of people that read and listen to content. If this is true, then it follows that the popularity of a resource should correlate with its credibility as legitimate academic scholarship. See this piece for a discussion of the requirements for academic scholarship.
  • Newcomers to open access medical education often do not know where to start. Assuming popularity correlates with quality, reviewing the SM-Index would help them to find the most credible content in their area of interest.

Is this just a giant popularity contest?


Notwithstanding the above meme, I prefer to think of it as a reputation contest. When I repeatedly visit a website or follow someone on Twitter/Google+/Facebook, it is an acknowledgement that they are producing good content that I want to hear about. If a lot of people feel the same, it is likely that they have a strong reputation for being a credible source.

What values were used to make up the SM-Index?

The index is a composite value derived from two composite website traffic indices (Alexa and PageRank) and three social media modalities (Twitter, Facebook and Google+). I decided to use composite indices such as Alexa and PageRank rather than primary website traffic data for ease of access (acquiring Google Analytics data for each site would be very time consuming), transparency (everyone can see these values) and credibility (I doubt I can analyze it as appropriately as they do). The social media modalities were selected for the frequency of their use. Because every site on the current index had a Twitter feed, Alexa data and a PageRank, the way the SM-Index was calculated weighted these more frequently used modalities more heavily than the less frequently used ones (fewer sites had Google+ Communities or Facebook Pages).

The values used from each modality were:

Alexa Ranking – Alexa is a website ranking service with methodology that they have not made public. The lack of transparency of their ranking system is criticized by some, but it decreases the ability of a site to “game” the system (although it still occurs). There are toolbars you can install for your browser that will allow you to see the Alexa ranking of the sites that you visit.

PageRank – PageRank is the value that Google uses to determine the relative importance of a website based on the number of other sites linking to it and their importance. There are toolbars you can install for your browser that will allow you to see the PageRank of the sites that you visit.

Twitter – Followers, specifically the number of followers of the most-followed author/podcaster on the site (i.e., Simon Carley‘s follower count was used rather than St Emlyn’s because he is an author and has more followers than the site’s handle).

Facebook Page – Likes

Google+ Community – Followers

How was the SM-Index calculated?



Each site was given a rank for each modality

For example, Twitter ranks ranged from 1-25 because all of the sites had a Twitter feed. Because only 7 sites had active Google+ communities, this modality was ranked from 1-8, with 8 being the score shared by all sites without a community. Similarly, the Facebook ranks went from 1-11.
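For anyone who wants to reproduce the ranking step, here is a minimal Python sketch. The site names and follower counts are made up for illustration, and it assumes a higher-is-better metric like Twitter followers (for Alexa, where a lower number is better, the sort direction would flip):

```python
# Rank sites within one modality as described above (illustrative data,
# not the real index). Sites with a presence are ranked 1..n by value;
# sites without one all share the rank n + 1.
def rank_modality(values):
    """values: dict of site -> metric (None if the site lacks that modality).
    Higher metric = better rank (rank 1 is best)."""
    present = {s: v for s, v in values.items() if v is not None}
    ordered = sorted(present, key=lambda s: present[s], reverse=True)
    ranks = {site: i + 1 for i, site in enumerate(ordered)}
    missing_rank = len(present) + 1  # shared rank for absent sites
    for site in values:
        ranks.setdefault(site, missing_rank)
    return ranks

followers = {"SiteA": 12000, "SiteB": 4500, "SiteC": None, "SiteD": 800}
print(rank_modality(followers))
# {'SiteA': 1, 'SiteB': 2, 'SiteD': 3, 'SiteC': 4}  (SiteC has no community)
```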

Each website’s rank in each modality was averaged

For example, Emergency Medicine Ireland ranked 12/7/11/6/4 for Alexa/Twitter/Facebook/Google+/PageRank, respectively. This gave it a mean ranking of 8.0.

The SM-Index was calculated

The average rank (RANK) for each site was inverted and multiplied by a number (k) equal to 1000 times the average rank of the top website (RANKh):

SM-Index = k × (1 / RANK), where k = 1000 × RANKh

In this case, LITFL’s average rank was 1.4 across the 5 modalities and k equaled 1400. If the top site were ever ranked #1 in every modality, k would equal 1000.

Sorry for making it complicated. This was done to give the SM-Index a theoretical range of 1 to 1000, with every value relative to all of the other sites included in the SM-Index. In this way, if I repeat the SM-Index in the future, the top-ranked website will still have a SM-Index of 1000 and all of the sites will still be ranked relative to each other.
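Putting the averaging and scaling steps together in code may make this clearer. The per-modality ranks for Emergency Medicine Ireland and the 1.4 mean rank for LITFL come from the figures above; rounding to whole numbers is my own choice for readability:

```python
# Sketch of the SM-Index calculation: SM-Index = k * (1 / RANK),
# where k = 1000 * RANKh (the mean rank of the top site).
def sm_index(mean_rank, top_mean_rank):
    k = 1000 * top_mean_rank
    return round(k / mean_rank)

# Emergency Medicine Ireland's Alexa/Twitter/Facebook/Google+/PageRank ranks
emi_ranks = [12, 7, 11, 6, 4]
emi_mean = sum(emi_ranks) / len(emi_ranks)

print(emi_mean)                 # 8.0
print(sm_index(emi_mean, 1.4))  # 175
print(sm_index(1.4, 1.4))       # 1000 -- the top site always scores 1000
```

Note that the top site scores 1000 by construction, since its own mean rank cancels out of k × (1 / RANK).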


Reviewing the results of this experiment, I find the rankings to be fairly consistent with my expectations for the sites. The top sites (LITFL, EMCrit and ALiEM) ranked consistently high across all modalities. Facebook and Google+ were less robust indicators than Alexa/Twitter/PageRank because not all sites used them. However, because they were ranked only from 1-11 (Facebook) and 1-8 (Google+), respectively, they had correspondingly less effect on the SM-Index than the more robust indicators that ranked sites from 1-25. I’ll leave a more thorough analysis to anyone reviewing the index when it is posted.


This was a pilot and the response to it will determine if the SM-Index should be modified, expanded, repeated (monthly, quarterly, annually?) or trashed. Check out the Social Media Index for more details. Please message me on twitter, leave a comment on this post or write a critique of what I have done on your own website. That is, after all, how scholarship works.

Thanks to Michelle Lin, Eve Purdy, Danica Kindrachuk, Jon Sherbino, Javier Benitez and Damien Roland for discussing this topic with me.

Brent Thoma

Editor in Chief at BoringEM
Brent Thoma is a wannabe medical educator, researcher, and blogging geek who works at the University of Saskatchewan as an emergency physician, trauma team leader, and research director. He founded BoringEM as a resident and designed the CanadiEM website.

  • http://stemlynsblog.org Simon Carley (@EMManchester)

    Hi Brent,

    Great work and interesting to see how the results have popped out.

    A few thoughts?

    Is this open to gamesmanship and manipulation? I don’t think the FOAM community is up for that, but it’s interesting to consider for the future.

    This may be a measure of how technically minded you are at ensuring that your blog appears in the right ranking systems. So this could be a measure of technical rather than clinical expertise.

    How do we measure quality? The Sun newspaper is widely read and very popular in the UK. Is it high quality stuff? I don’t think so….

    Really interesting work though, very much enjoyed it.



    • http://canadiem.org Brent Thoma

      Thanks for the great comment! It got me thinking.

      This is mostly speculation, but I think almost any index is potentially open to gamesmanship. However, it seems to me that this would be a particularly hard one to game. The internet has sites providing ways to boost your Alexa and PageRank data, for example, but they ultimately still need to increase traffic/links to the site and Amazon/Google fight back with ways to make gaming them more difficult.

      I’m not sure why my site has a relatively high Alexa ranking relative to the rest of my numbers, but I’d like to think it is due to the crazy effort I make to appropriately SEO tag everything that I publish. Certainly, less tech-savvy people might not be as good at that (although I don’t think of myself as tech-savvy at all!). Anyways, without a substantial improvement (I’m talking orders of magnitude) it would not affect the rankings dramatically because of the large differences between the sites and the difficulty of boosting the size of Facebook/Twitter/Google+ communities. Ultimately, I think any effort at gamesmanship would be better spent producing good content, but I guess we’ll see!

      Regarding the Sun newspaper example, I think we can avoid that by simply excluding sites that are not focused on producing FOAMed. For example, I wouldn’t include KevinMD because although it produces a lot of articles of interest to the medical community, from what I’ve read on there it doesn’t seem to provide any medical education. If the scope of this index is limited to FOAM-producing sites, I think their popularity correlates with their quality as discussed, and vice versa.

      It’s not perfect, but I think it is better than nothing. I look forward to hearing reviews. Consider critiquing it (and the idea of it) on your site?

      • amcunningham

        Just coming back to this post… I do think that KevinMD posts are a form of medical education. They often raise questions related to our professional identity and as such are important. But they are not designed to teach specific knowledge or skills. But being a doctor isn’t about just these and hence neither is medical education.

  • Pingback: The FOAMed universe – normal laws of evaluation don’t work here | The Rolobot Rambles()

  • http://www.ernursepro.com erNURSEpro

    Realize this is a very old post and certainly there has been lots about peer review coming to social media, but curious as to how podcast legitimacy would be calculated. There are a few big shows up on this list but as a podcast producer if you don’t engage in tweeting or facebook as much as you should and your website is only show notes you could still produce a high quality show with a minimal ranking. Listenership is not reflected in these rankings. Any thoughts on this?

    By the way, thanks for all your efforts in propagating social media and medicine!

    • Brent Thoma


      Thanks for the reply and sorry for the delay in getting back to you!

      The SMi has actually come a long way since this post. A study assessing it has been published in WestJEM: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4380373/

      And moved to ALiEM:

      In terms of podcasts, your arguments certainly make sense but the data doesn’t really bear them out. The podcasts on the SMi actually seem to score incredibly well. While there might have been a time when podcasters didn’t interact with their audience at all beyond their recordings, almost all of them now have Twitter and Facebook accounts that are quite well followed. Additionally, lots of people seem to visit their websites to review those show notes (I’m not sure if that’s in a primary way where they look it up pre/during the podcast or a delayed one where they look back on it, but they do have good numbers).

      Additional problems in trying to look at podcasts in a different way include the lack of a public, easily accessible resource for podcast metrics and the fact that many podcasts have blogs and many blogs have podcasts (e.g. PHARM, St Emlyn’s).

      That being the case, I haven’t made any modifications to the formula or rankings to distinguish between blogs and podcasts. I hope that rationale makes sense even if it isn’t perfect. I’m happy to respond to any further questions. Check out those links for more details.