I was watching this TED talk, which suggested that on average your friends tend to individually have more friends than you do. To define this more formally, we are comparing the average number of friends with:
average over each person p of:
friend popularity, defined as:
average over each friend f of p:
number of friends f has
Intuitively, this seems to make sense. After all, someone with a high number of friends raises the friend popularity of many people, while someone with a low number of friends lowers the friend popularity of only a few people. Does this result hold for all graphs?
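As a sanity check, here is a minimal Python sketch that computes both averages on a small, entirely hypothetical friendship graph (stored as an adjacency dict):

    # A minimal sketch (hypothetical graph): friendships as an adjacency dict.
    friends = {
        "a": {"b"},
        "b": {"a", "c", "d"},
        "c": {"b"},
        "d": {"b"},
    }

    # Average number of friends: the mean friend count over all people.
    avg_friends = sum(len(fs) for fs in friends.values()) / len(friends)

    # Average friend popularity: for each person, average the friend counts
    # of their friends, then average that over all people.
    avg_popularity = sum(
        sum(len(friends[f]) for f in fs) / len(fs)
        for fs in friends.values()
    ) / len(friends)

    print(avg_friends)     # 1.5
    print(avg_popularity)  # 2.5

Here the average number of friends is 1.5 while the average friend popularity is 2.5, matching the intuition above.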
Given a person p, let t stand for:
sum over each friend f of p:
number of friends f has
It is pretty clear that sum(t) = sum(f^2), since a person with f friends contributes a value of f to the value of t of each of their f friends, for a total contribution of f^2.
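This double-counting identity is easy to verify numerically; the sketch below checks it on a random undirected graph (the size and edge probability are arbitrary choices):

    import random

    # Empirical check of sum(t) = sum(f^2) on a random undirected graph.
    n = 50
    friends = {p: set() for p in range(n)}
    for p in range(n):
        for q in range(p + 1, n):
            if random.random() < 0.1:
                friends[p].add(q)
                friends[q].add(p)

    # t for each person: the sum of their friends' friend counts.
    t = {p: sum(len(friends[f]) for f in friends[p]) for p in friends}

    assert sum(t.values()) == sum(len(fs) ** 2 for fs in friends.values())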
We are then trying to determine whether sum(t/f) > sum(f), with f again standing for a person's number of friends, holds for all graphs.
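While it is no proof, the inequality can be probed empirically on many small random graphs (people with no friends are skipped, since their friend popularity t/f is undefined):

    import random

    # An empirical probe, not a proof: test sum(t/f) >= sum(f) on many
    # small random undirected graphs.
    def random_graph(n, prob):
        g = {p: set() for p in range(n)}
        for p in range(n):
            for q in range(p + 1, n):
                if random.random() < prob:
                    g[p].add(q)
                    g[q].add(p)
        return g

    for _ in range(1000):
        g = random_graph(10, 0.3)
        people = [p for p in g if g[p]]  # only people with at least one friend
        lhs = sum(sum(len(g[f]) for f in g[p]) / len(g[p]) for p in people)
        rhs = sum(len(g[p]) for p in people)
        assert lhs >= rhs - 1e-9  # holds in every trial so far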