- The sample grows from small to very large over the course of the 5 years
- The sample is largely different every year (some reviewers are repeat users, of course, but they are a small part of the sample overall)
- The questions are consistent across the 5 years
- The underlying products are not fixed: new products may appear in the later years, and some older products from the early years may have been replaced or even gone out of business
- Because responses to all the satisfaction questions tend towards the upper end of the scale and the sample size is large, most of the variation falls between 5 and 7. The variation being small doesn’t negate the ability to infer trends from it, though: with a sample this large, the standard error of the mean shrinks enough that even small year-over-year shifts stand out from noise (see the sketch after this list).
- In one sense, this tendency for the data to fall in the upper 30% of the scale is counterintuitive. It’s almost tribal knowledge that disgruntled customers are more likely to write reviews than happy (or even neutral) customers. In B2B, that doesn’t seem to be the case. I suspect this is partly because many reviews are captured through outreach by G2 and by the vendors. To be clear, I’m not saying only happy customers are part of that outreach. That isn’t really possible given the sheer number of respondents and the fact that by far the lion’s share of reviews comes from our own outreach and organic traffic, not from the vendors.
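To make the point about small variation concrete, here’s a minimal sketch of how one might check whether a year-over-year shift in mean satisfaction is detectable despite the narrow 5–7 range. The data here is synthetic and the column names (`year`, `score`) are assumptions, not the actual schema; the point is just that at this sample size the confidence intervals around the yearly means get very tight.

```python
import numpy as np
import pandas as pd

# Hypothetical reviews table: one row per review, with the year it was
# written and its 1-7 satisfaction score. Synthetic data, assumed schema.
rng = np.random.default_rng(42)
reviews = pd.DataFrame({
    "year": rng.choice([2014, 2015, 2016, 2017, 2018], size=200_000),
    "score": rng.choice([5, 6, 7], size=200_000, p=[0.25, 0.40, 0.35]),
})

# Mean satisfaction per year with a 95% confidence interval. Even though
# almost all scores fall between 5 and 7, the standard error of the mean
# (std / sqrt(n)) is tiny at this sample size, so small shifts in the
# yearly mean are distinguishable from noise.
by_year = reviews.groupby("year")["score"].agg(["mean", "std", "count"])
by_year["se"] = by_year["std"] / np.sqrt(by_year["count"])
by_year["ci_low"] = by_year["mean"] - 1.96 * by_year["se"]
by_year["ci_high"] = by_year["mean"] + 1.96 * by_year["se"]
print(by_year.round(3))
```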
Overall, the trends by category are the most relevant, since they compare similar products in similar circumstances. We could segment this by company size, industry, geography, role, cloud / non-cloud, and of course by product / company. I’ll pull out a few more data sets like this in the near future, and maybe drill into a category all the way down to the products, with some more detailed segmentation.
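As a rough illustration of that kind of segmentation, a minimal pandas sketch follows. The `reviews` frame and its column names (`category`, `product`, `company_size`) are assumptions for the example, not the real schema.

```python
import pandas as pd

def satisfaction_trend(reviews: pd.DataFrame,
                       segment_cols: list[str]) -> pd.DataFrame:
    """Mean satisfaction per year within each segment.

    `reviews` is assumed to have `year` and `score` columns plus
    whatever segment columns we want to slice by (category, company
    size, industry, geography, role, cloud vs. non-cloud, product).
    """
    return (
        reviews
        .groupby(segment_cols + ["year"])["score"]
        .agg(mean_score="mean", n="count")
        .reset_index()
    )

# e.g. trends by category, then drilling into one category by product:
# trends = satisfaction_trend(reviews, ["category"])
# crm = reviews[reviews["category"] == "CRM"]
# crm_trends = satisfaction_trend(crm, ["product", "company_size"])
```

Keeping the segment columns as a parameter means the same trend computation works at any level, from whole categories down to a single product sliced by company size.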