[Image: Sub-network of Twitter Followers]
I occasionally look at these scores and noticed that both of mine had fallen in the last few weeks.
Had I grown less influential during my recent travel visiting clients? I don't think so!
I had spent less time on Twitter, but that does not mean I am less influential today than at the beginning of the month. Hey folks at @klout and @peerindex... I have news for you! Influence is not like a suntan. It is not dependent on daily exposure/activity on Twitter!!
Influence is not like a suntan, it does not change much based on *daily* exposure!
I looked up a few Twitter friends/colleagues and noticed they had similar scores across Klout and PeerIndex also -- some were closer than others.
Next, we retrieved the Klout and PeerIndex scores for all people [~ 200] I follow on Twitter to see if there were any interesting patterns in this sample. Some of them had almost identical Klout and PeerIndex scores, some were not calculated by one or the other service, and some had divergent scores.
Which score more accurately gauges real influence on Twitter? Is either of these influence scores significantly better than the back-of-the-napkin Twitter metric [LFR score] I described earlier? How precise are these scores?
I found both Klout, and PeerIndex scores on 177 of the 200 people I follow. Of course, there is an LFR score for everyone on Twitter -- it is easily calculated by looking at a person's Followers and Listed counts in their Twitter Profile.
- Looking at all three scores we see some difference, but not much.
- Klout and PeerIndex differ by an average of 13 points across the 177 people with both scores
- Klout and LFR differ by 15 on average
- PeerIndex and LFR differ by 18 on average.
Does it really matter which score we use? How accurately can you measure something as nebulous as influence or attention? Is a several point difference between scores a significant delta?
To calculate LFR quickly, add a zero (0) to the Listed number (i.e., multiply it by 10) and then divide by the number of Followers. For example, I have 4088 followers and appear on 483 Lists, so my LFR is 4830/4088 = 1.18. A number > 1.00 means people are paying attention to you; a score approaching 2.00 means you have the focused attention of many! Of course, the good news is you do not need to be popular to receive deserved attention!
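For those who prefer code, the back-of-the-napkin calculation above fits in a few lines of Python (the helper name `lfr` is mine, not an official metric or API):

```python
def lfr(listed: int, followers: int) -> float:
    """List-to-Follower Ratio: add a zero to the Listed count
    (multiply by 10) and divide by the Followers count."""
    if followers <= 0:
        raise ValueError("followers must be positive")
    return (listed * 10) / followers

# The example from the post: 483 lists, 4088 followers
print(round(lfr(483, 4088), 2))  # prints 1.18
```

Both numbers come straight off a public Twitter profile, which is what makes LFR so easy to compute compared to the proprietary scores.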
LFR finds us such Twitter gems as @VenessaMiemis (LFR=1.66), @zenext (LFR=1.69), @jhagel (LFR=1.72), and @twliterary (LFR=1.50); each is paid great attention in their respective field, and on Twitter.
What other Twitter influence/attention metrics do you track?
UPDATE1: Interesting interview by Augie Ray with Azeem Azhar, CEO of Peer Index.
I like Azeem's concept of "cheap"(i.e. following) and "expensive"(i.e. responding) activities on Twitter. I agree, it is more important to look at the expensive activities to gain a more realistic perspective of who/what is really important. IMHO, the power of LFR is in the very expensive activity of creating and curating Lists on Twitter!
Valdis, the number that I would look at is "mean time to response", which is a measure of kinetic energy in a network rather than potential energy. The real power of Twitter is not just in building a network but in getting it to move when you need it to move.
Very interesting, brother. I think of influence as measurable reach and such, but also as the more intangible impact on how people think, not just what they know. We need to get clever about measuring that as well!
Valdis,
Always great to hear from you -
We ran some numbers on a sample of 5,000 users and compared their PI to their LFR - we found the correlation across the 5,000 accounts was 0.03, or pretty close to random.
This is as we would expect - even with a sample as large as 200 (which you had), it is quite hard to choose a sample that is representative of Twitter at large, and very easy to choose a subset that is unrepresentative.
The other thing to note is that the distribution function we use to map raw scores onto a 1-to-100 scale can distort attempts to reverse-engineer the factors.
So the other thing we looked at was the correlation of the ranks - that is, we ranked the 5,000 users in order of their PI and in order of their LFR, then looked at the correlation of the ordinal rankings - this has a correlation of -0.2, weakly negative.
The final observation I would make is that these overall scores are useful up to a point - much more useful are our in-topic PI scores, which require some pretty heavy semantic analysis to derive.
Thanks for the kind attention...
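[Editor's note: Azeem's distinction between correlating raw scores and correlating ordinal ranks is the difference between Pearson and Spearman (rank) correlation. A minimal, self-contained sketch -- the five scores below are made-up illustrations, not real PeerIndex data:]

```python
def pearson(x, y):
    """Pearson correlation of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    """Rank positions (1 = smallest); ties broken by order, fine for a sketch."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Rank correlation is just the Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

# Hypothetical PI and LFR scores for five accounts
pi_scores = [72, 45, 60, 30, 55]
lfr_scores = [1.2, 1.7, 0.9, 1.4, 1.1]
print(round(pearson(pi_scores, lfr_scores), 2),
      round(spearman(pi_scores, lfr_scores), 2))
```

The two measures can disagree: Spearman only cares whether the orderings match, so it is immune to the nonlinear 1-to-100 mapping Azeem mentions, which is presumably why PeerIndex checked both.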
Hi Valdis,
Interesting! But there are many other considerations that are tied to an individual's interest graph and how they curate content. PeerIndex does a better job because it covers more topics. This contributes to the differences between "specialists" and "generalists."
For example, as a curator, I cover many topics and follow more people. I also work across several industries and need to build influence in many fields. So my main list groupings are much more diverse, and include everything from innovation and social media, to healthcare and pharma, and inspiration and culture.
However, someone who is specialized would fall into more lists in the same category and would be added to lists in that category more often. The result is a higher LFR and, as you point out, being "more respected in their field."
However, a good curator can be tied to many of those deep specialists and have considerable influence. A quick example: I am followed by John Hagel, Michael Wu, Alex Butler, and the MLB. These are all big influencers in their fields, but they have nothing to do with each other.
I also agree with Edward Vielmetti that the real power of Twitter is "getting the network to move when you need it to." And not all of that is transparent because a lot happens through the Direct Message channel. There is so much more to this; I think I need to write a post when I have time. :)
Azeem,
Glad you responded! Waiting to hear what the @Klout folks think.
You know we would all like to see your correlations with Klout... on the same random 5000! ;-)
Good that PeerIndex [PI] and LFR do not correlate highly; that probably means they are measuring different things and can be complementary instead of redundant. I have found the rule of thumb "LFR > 1" an excellent way of sorting folks I don't know into follow/don't-follow buckets. I especially like how LFR finds the "non-obvious" folks. The power of LFR is not in computer analysis (like your semantic analysis); it is in the human analysis that people do before they place a selected few on a Twitter list.
I would also like your input on why PI scores change daily/weekly. As you know, in Real Life influence does not ebb and flow so quickly. My active links today may be different from my active links tomorrow, but that does not mean my overall network has changed. Don't measure the shoreline of England with a micrometer -- you get a false sense of accuracy, and the shoreline changes with every new wave. Apply the right perspectives to the right dynamics!
The question with any metric is how you can manipulate it to your own ends.
For LFR, it's pretty straightforward: block lots of your unloved followers, and encourage the ones that love you to add you to their lists.
Valdis, have you done any looking at lists from a network point of view? I can see a network where the nodes are of two types, lists and people, and the time series you want to watch is the propagation of people onto other people's lists.
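[Editor's note: Edward's two-mode idea can be sketched as a bipartite graph where a person's Listed count is simply their degree on the people side. The list names and memberships below are hypothetical:]

```python
from collections import defaultdict

# Hypothetical two-mode (bipartite) network: list nodes on one side,
# people nodes on the other; an edge means "this list contains this person".
list_members = {
    "complexity-thinkers": {"@valdiskrebs", "@jhagel"},
    "future-of-work": {"@VenessaMiemis", "@jhagel"},
    "publishing": {"@twliterary"},
}

# Project onto the people side: each person's Listed count is
# their degree in the bipartite graph.
listed_count = defaultdict(int)
for members in list_members.values():
    for person in members:
        listed_count[person] += 1

print(listed_count["@jhagel"])  # prints 2 (appears on two lists)
```

Snapshotting `list_members` over time would give exactly the time series Edward describes: watching people propagate onto other people's lists.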
I think it's fascinating to encounter examples of considerable attention being devoted to measuring attention and/or influence on Twitter.
For another example that might warrant some attention, you might be interested in Alex Braunstein's recent post with his own sampling and analysis of Twitter influence metrics, "Why Your Klout Score is Meaningless."
Hi Valdis, actually this observation has been reviewed by Daniel Gayo-Avello in his paper "Nepotistic relationships in Twitter and their impact on rank prestige algorithms". He talks about how those scores are prone to being gamed and gives very good explanations and examples. I would recommend reading this paper (http://arxiv.org/pdf/1004.0816).
Cheers, Plotti
https://sites.google.com/site/twitterresearch09/