Musings on a Steem Quality Score

in #steemit · 6 years ago (edited)

I got into conversations with @motoengineer and @petermail about the quality of authors and their content here on the STEEM blockchain.

@petermail apparently has built an app that filters out posts from authors who don't reply to comments, on the theory that they are just writing for rewards and are not part of the community. You can read his post here: https://steemit.com/steemit/@petermail/notification-steemit-app. I have not used it, but the idea seems sound.

I think that's a good first step, but I think we can do a lot better.

What Would a Quality Score Look Like?

So I've been kicking this around in my head all day. Here are some factors that I think should go into an author quality score, in no particular order:

  • Original post activity - In the last time period (let's say 7 days), how many original posts has the author made? A high quality author should be creating at least some content. I wouldn't weight this too heavily, since a high post count doesn't correlate with quality either. Maybe a simple yes/no flag for this one.
  • Median comments on original posts - How many comments does this author's content draw compared to all authors in the same time period? I'd throw out all the zero-comment posts. This shows that the community is engaged with the content. Spam comments are an issue, though; if you could scrape the blockchain and only count comments from accounts with decent reputations, that would probably be better. (There's a rough sketch of this median-relative comparison right after this list.)
  • Comment activity on own posts - How many of the comments in the previous factor are self-comments? 50% would mean the author responds to every single comment; 0% would mean the author never responds. But commenters can also interact with each other, so the ideal might be something like 25% for this factor.
  • Comment activity on other posts - It's a group effort, eh? A quality author will be engaged with other authors presenting other ideas. Again, this would be compared to the median comment activity for the whole blockchain during the time period.
  • Median unique votes - Counting the number of unique voters a piece of content gets is also important, I think. This is a little tricky since some bots churn out votes by the metric shitton, but counting unique voters would at least de-duplicate voting rings. Then compare this unique vote count to other posts during the same time period, again eliminating posts that get zero votes or only self-votes.
  • Average votes on comments - Excluding self-votes, are the comments generating upvotes from the people involved? I think the number of votes is more important than the value of votes in this case.
  • Average value of posts - This would be a little different from the usual post value. Here I would exclude known bots, blacklisted accounts, and self-votes, and only count unique votes. But it should still be included, because yes, those who have staked their claim in the blockchain do matter more to the health of the whole system.
  • Other suggestions you might have. Leave a comment!
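
To make the median-relative factors above concrete, here's a rough Python sketch. It assumes you've already scraped the trailing 7-day window of posts into plain dicts; the field names and the cap are my own invention, not anything that exists on-chain:

```python
from statistics import median

def median_relative_factor(author, posts, cap=2.0):
    """Score an author's median comments-per-post against the chain-wide
    median for the same rolling window. Returns a value in [0, 1].
    `posts` is the trailing 7-day window: dicts with "author" and
    "n_comments" keys (hypothetical field names for illustration)."""
    # Throw out zero-comment posts, per the factor description above.
    commented = [p for p in posts if p["n_comments"] > 0]
    if not commented:
        return 0.0
    chain_median = median(p["n_comments"] for p in commented)
    own = [p["n_comments"] for p in commented if p["author"] == author]
    if not own or chain_median == 0:
        return 0.0
    ratio = median(own) / chain_median
    return min(ratio, cap) / cap  # cap so one viral post can't dominate
```

The same shape works for the other median-relative factors (outbound comments, unique voters): compute the author's median, divide by the chain-wide median, and cap it.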

Each of these factors would be weighted and combined into a single normalized score from 0 to 100 (a sketch of the combination step follows the grade list below). You could have different grades like:

  • Under 30, you need to step up your game
  • 40-60, solid effort
  • Over 70, high quality shitposter author
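
Here's a rough Python sketch of that combination step, just to make it concrete. The weights, factor names, and grade cutoffs are all placeholders I made up; calibrating them is the real work:

```python
# Illustrative weights only -- calibrating these is the hard part.
WEIGHTS = {
    "posted_recently":     0.10,  # the yes/no original-post activity flag
    "median_comments":     0.20,  # median comments vs. the chain-wide median
    "self_comment_mix":    0.15,  # share of self-comments, peaking near 25%
    "outbound_comments":   0.20,  # comment activity on other authors' posts
    "unique_votes":        0.20,  # median unique voters vs. the chain median
    "comment_votes":       0.10,  # votes generated on the comments themselves
    "adjusted_post_value": 0.05,  # post value minus bots/blacklists/self-votes
}

def self_comment_mix(share):
    """Triangular score peaking at the ~25% self-comment ideal above.
    The falloff shape is arbitrary -- just one way to encode 'peak at 25%'."""
    return max(0.0, 1.0 - abs(share - 0.25) / 0.25)

def quality_score(factors):
    """Combine per-factor scores in [0, 1] into a single 0-100 score."""
    total = sum(w * factors.get(name, 0.0) for name, w in WEIGHTS.items())
    return round(100 * total / sum(WEIGHTS.values()))

def grade(score):
    """The rough bands from the list above, with the gaps smoothed over."""
    if score > 70:
        return "high quality author"
    if score >= 40:
        return "solid effort"
    return "step up your game"
```

Feed it factor scores already normalized to [0, 1], e.g. `quality_score({"posted_recently": 1.0, "median_comments": 0.8})`, and you get one number back.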

As this is a rolling time period score, an individual author's score will fluctuate. If you go silent for a month, your score will drop accordingly. If you stop engaging with the community, your score will go down.

How Would a Quality Score be Used?

I can think of several ways. First, it might be an SMT all by itself, so higher quality authors would get a boost just for being higher quality authors.

Next, you could build a filter for yourself to only show those higher quality authors.

Also, bot owners could use such a quality score to ensure that their votes aren't going toward spam. I don't know how likely they would be to adopt this as a general rule. I know that with @minnowbooster, the whitelist that gives you access to big votes is reserved for those deemed suitable by the rest of the whitelist community, so they might be able to use such a quality score as a filtering criterion.
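
Mechanically, that filtering step could be as simple as a score threshold. A tiny sketch, with a made-up bar and field names (this isn't @minnowbooster's actual API or anything):

```python
MIN_SCORE = 70  # arbitrary bar; a whitelist community could tune this

def filter_vote_queue(queue, scores):
    """Keep only queued posts whose authors clear the quality bar.
    `scores` maps author name -> 0-100 quality score."""
    return [post for post in queue if scores.get(post["author"], 0) >= MIN_SCORE]
```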

Could This be Gamed?

Well... yes. Every system can be gamed. With a multi-factor score like this, though, gaming it would be more expensive. Ideally, it would be hard enough to game that the reward-to-effort ratio is low; if that holds, few people will bother trying to game the system.

How Soon Will This be Ready?

Well, I'm like... the ideas man, man. My coding skills might as well be non-existent, unless you count my 20-year-old MATLAB skillz. Fun fact: back in the day I made a penny-stock news bot in MATLAB that scraped Yahoo Finance news and posted it to an IRC channel.

So if you want to build something, just give me 30% equity in the project and we'll be off to the races!

Edit: I just had another idea. The factor weighting could be personalized. A machine-learning setup that adapts the weights to your preferences could create a customized view of what quality means to you.

Further, new authors who fit your quality profile could then be suggested to you.
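
What might that adaptation look like? A rough sketch: treat every thumbs-up or thumbs-down you give a post as a training signal and nudge your personal weights with a one-step logistic-regression update. Everything here (names, learning rate) is illustrative:

```python
import math

def update_weights(weights, factors, liked, lr=0.05):
    """One online gradient step on a user's personal factor weights.
    `factors` holds the post's per-factor scores in [0, 1]; `liked`
    is the user's thumbs-up (True) or thumbs-down (False)."""
    z = sum(weights[k] * factors.get(k, 0.0) for k in weights)
    p = 1.0 / (1.0 + math.exp(-z))     # predicted probability of a thumbs-up
    err = (1.0 if liked else 0.0) - p  # how wrong the prediction was
    for k in weights:
        weights[k] += lr * err * factors.get(k, 0.0)
    return weights
```

Run that on every reaction and each user's weights drift toward whatever they personally consider quality.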

Comments

We were just having a conversation about this over on this post; you might be interested in seeing what thoughts are there.

In addition to that, I wonder about just looking at how much voting power an author has spent upvoting other accounts in their comments. There's some circle-jerk gaming potential there, but it's also one of the ways to tell the most engaged posters.

Weirdly, I would be caught in Peter's filter because I curate the (so far rare) comments on @doctorworm from this account instead of that one, because it doesn't have much voting power yet. I think people haven't quite caught on that it has dolphin-account comment policies even though it's a plankton.

Yes, a big enough circle jerk could simulate engagement. But that all takes resources and work. What I was trying to get at toward the end was that if the system were calibrated correctly, it would be a case of diminishing returns for the circle jerkers.

So maybe they could get to a score of 65 or 70 and marginally pass as quality, but it would just be too expensive and resource intensive to get to an 85.

As in most human systems, the scores should be normally distributed, so going from 70 to 85 would be exponentially more difficult.

Interesting, and certainly some ideas worth exploring. I may have some time in the next few days to explore. Cheers.

There are some good thoughts here.

I can imagine that you could also take into account the average length of the comments. I have noticed that many good posts trigger longer comments, while many shitposts get comments like "great post," "thank you," "good info," etc.

That’s a good idea. Good info!

Hi @nealmcspadden, interesting idea. Anything that helps good content be found and makes manual curation possible is a win-win in my eyes.

You might want to have a talk with @abh12345, as he runs the Curation League and Leagues of Excellence, which use some of the same variables.

:)

Feeling a bit steemed out today but will be looking at this at some point.
