I need feedback from you about the new moderation guidelines for the development category!

in #utopian-io · 5 years ago (edited)

As some of you may know, we have been completely reworking the Utopian guidelines for quite a while now. We will be splitting them into moderation guidelines (used by moderators to judge contributions) and contribution guidelines (meant to help contributors, and much less overwhelming than before). Since we are nearing the deadline, most of us have been gathering feedback on the reworked guidelines from others. In this post I specifically touch on the moderation guidelines.

Most of the feedback I've received is that parts of the moderation guidelines are too informal and lack specific objective metrics. Since most of the questions on the questionnaire are subjective, I've been having some trouble fixing this. This is where you guys come in! I have pasted the (reworked) moderation guidelines below (with some of my thoughts added as quotes) and would love to hear what you guys think can be improved.

Moderation guidelines

Evaluation of Development contributions is primarily based on the significance of the added feature(s), the volume of work and the quality of the code.

Contributions should include a description and images (if applicable) of the added feature(s). A justification of technical choices is welcome, but not mandatory. Contributions should also include links to the relevant commits or pull requests, so that it's clear which are part of the contribution described.

Evaluating the amount of work and the significance of the added feature(s) will of course be subjective. For the significance, it is recommended to distinguish between the addition of core features, cosmetic changes, bug fixes and so on, and to weigh their overall impact on the project.

Feedback received for the above says it should be more formal and use more objective metrics to define the significance. It also still includes the amount of work, but that has apparently been removed from the guidelines?!

Consider the following during the evaluation:

Submission content

Very straightforward. All of these things should make the post more pleasant to read, while not requiring much effort from the contributor and also making it easier for us to review.

  • Can the project's description be found in the current post, README or elsewhere (linked in the post itself)?
  • Does the contribution contain a description/pictures/code snippets of the added feature(s)?
  • Is it clear which commits or pull requests are relevant to the contribution and were they committed or merged in the given timeframe?

Significance of the contribution

The problem. How do you accurately judge the significance of an added feature? It's even more subjective than the removed "How would you rate the total amount of work?" question (I still disagree with the removal of this), so I really need help with this one. I have no idea how to define this using objective metrics since this is basically all down to the reviewer's experience and knowledge...

  • What is the total significance of the added features on the project as a whole?

Quality of the code

Not much to say about this since I tried to make it as objective as possible. If you think anything should be included and/or excluded, then please let me know what and why.

  • Does the code follow the best practices of the specific language used?
  • Does it include outdated, potentially insecure or low-quality code?
  • Does it follow the DRY principle or similar principles? Is it maintainable?
  • Does the code include unit tests?
  • Is the code formatted correctly and consistently?
  • How efficient is the code and does it avoid using unnecessary resources?
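To make the DRY and unit-test points above concrete, here is a small hypothetical Python example (the function names and discount rates are invented for illustration, not taken from any real contribution):

```python
# Duplicated logic (violates DRY): the discount calculation is
# repeated once per customer type.
def price_for_member(price):
    return price - price * 0.10

def price_for_student(price):
    return price - price * 0.20

# DRY version: one function, with the varying part (the rate)
# turned into a parameter.
def discounted_price(price, rate):
    return price - price * rate

# A simple unit test covering the refactored function.
def test_discounted_price():
    assert discounted_price(100, 0.10) == 90.0
    assert discounted_price(100, 0.20) == 80.0
```

A reviewer applying the checklist could then ask whether the contribution looks more like the top half or the bottom half of this sketch.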

Commit messages

Pretty straightforward. It should be clear what was added in each commit and it's better if they follow established commit message guidelines/conventions.

  • Is it clear what was added in each commit?
  • Do they follow established commit message guidelines/conventions?

Readability

This is one of the things I've wanted to change for a long while now. We currently judge the quality of comments, but to me it doesn't make much sense to penalise someone for not including comments when they have made their code readable and understandable in other ways. So instead of judging the quality of the comments, we should simply judge the readability of the code, right?

  • Do the comments make the code easier to understand when necessary?
  • Are the components, modules, classes, functions, arguments, exceptions and variables named clearly?
  • Is whitespace used in a way that improves the readability of the code?
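A tiny made-up Python example of the naming point: both functions compute the same thing, but only one of them needs a comment to be understood:

```python
# Hard to follow: the terse names say nothing about intent.
def f(a, b):
    return a * b * 0.5

# Easier to follow: the names document the formula themselves,
# with no comment required.
def triangle_area(base: float, height: float) -> float:
    return base * height * 0.5
```

This is the sense in which readable code can compensate for missing comments.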

I greatly appreciate any help and feedback, negative or otherwise, so please leave your thoughts below if you have time to spare!


What is the total significance of the added features on the project as a whole?

I think that it would be good to make this as objective as possible. I favour the volume / value approach (which is not dissimilar to the current questionnaire).

One benefit of this approach: Looking ahead, to an era with a greater number of task requests, it would be great if users could know in advance the "volume" rating for the task that they are taking on.

So for development contributions based on task requests we could have:

  • Volume: How many new features are to be added to complete the task? Are they small, medium or large features? Combine number and magnitude to give an overall "volume" weighting. For task requests this weighting can be agreed in advance and specified in the task request.
  • Value: Do the new features meet the desired requirements of the task request (thus adding the desired value to the project)? Perhaps a bonus rating scale depending on how well they meet the requirements, go above and beyond, etc.

For users working on their own projects I would suggest trying to align to the above approach:

  • Volume: As above, number of new features and magnitude of those features, just not agreed in advance.
  • Value: User to state in the contribution post what they were trying to achieve and why (thus why it adds value to their project) and illustrate the new features in the post. Moderator to agree whether the features add value (generally yes unless clearly not) and whether the features meet the stated aims. Again bonus scale for coolness.
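The volume/value idea above could be sketched roughly as follows. This is purely hypothetical: the size labels, weights and bonus are placeholders I've invented, not part of any actual Utopian scoring formula:

```python
# Assumed per-feature weights by size label (placeholder values).
FEATURE_WEIGHTS = {"small": 1, "medium": 2, "large": 4}

def volume_score(features):
    """Sum the weights of a list of size labels, e.g. ["small", "large"]."""
    return sum(FEATURE_WEIGHTS[size] for size in features)

def contribution_score(features, meets_requirements, bonus=0):
    """Combine volume with a value bonus for meeting the stated aims."""
    score = volume_score(features)
    if meets_requirements:
        score += bonus
    return score
```

For a task request, the `features` list and `bonus` could be agreed in advance; for personal projects the moderator would fill them in at review time.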

The problem is that they decided to scrap the amount of work from the guidelines as they didn't deem it important; apparently just rating the significance is enough. This means I'm stuck trying to think of a way to solely judge that, instead of having some sort of combination of volume and significance as we did before, and as you have touched on above.

I think you've brought up some great points about the value of a feature and if the contributor met its goal (especially for task requests). Still, it's pretty hard to come up with an objective way of judging the significance of a feature (especially for own projects) in comparison to the rest of the project (and other projects). I think this kind of stuff will always be subjective no matter how hard we try to use objective metrics, so I am a bit at a loss how to rate this. Especially without taking the amount of work into account.

Yea, I really think amount of work should be one of the main metrics. I also believe comments + commit messages should affect the score more.

Nice improvement on the guidelines.
Here is a little feedback compared to Utopian V2:

  • Commit messages are irrelevant for us because the pull request is squashed and merged into the develop branch. The pull request also needs to be rebased onto the develop branch, which can result in a single commit in the pull request. This is done because the pull request has to contain only the files modified by the author. That way, reviewing the code is easier when development is done by a team.

  • To be sure that we know who did what, all functions must be commented and include the author's name.

  • About testing the code: there are different ways to do this, and sometimes unit tests are not enough; integration tests or end-to-end tests can be provided too.
    For example, on V2 we don't write unit tests for the API but integration tests, while for the frontend we will write both unit tests and end-to-end tests.

We also try to keep test coverage above 50%. This could be added to the guidelines.
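For reference, such a threshold can be enforced by tooling rather than checked by hand; for example, coverage.py (a common Python coverage tool) supports a minimum in its config file:

```ini
# .coveragerc: make `coverage report` exit non-zero
# if total coverage falls below 50%
[report]
fail_under = 50
```

A guideline could then simply ask whether the project enforces a stated coverage floor, whatever the tool.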

I guess there will always be exceptions to the guidelines, so I think they are just meant as a helping hand, and if things aren't applicable, then use common sense. I'll definitely adjust the unit test bit to be more generalised for all kinds of tests (like the ones you mentioned).

Do you have an idea on how we can use objective metrics to determine the significance of a contribution? It's pretty important, especially now that they've removed the amount of work as part of the score, since it will probably have the biggest influence on a contributor's final score.

I don't think it's possible to have objective metrics because the context of the development is crucial.
Let's take 2 examples:

  1. A text input is added to a form to add a piece of information

     • It doesn't look like much, but to validate this addition, the developer had:
       • to modify multiple files in the frontend (preview cards, details page, edit form, ...)
       • to update the database model and maybe write migration scripts
       • to update the API, because it's a microservice architecture
       • to write tests, because the project requires it
       • to update the admin website
  2. Add an error logger in the backend

     • This looks complicated - or maybe not; it depends on the tech stack, etc. To do this the developer had to:
       • create a small plugin and do a bit of Java aspect-oriented programming, which was really fast thanks to the architecture of the project

In case 1, the added information is in fact buried in the website and barely used, but it required a ton of work because the project is huge and complicated.
In case 2, it's a vital part of the system, but it required very little work.

But from an external point of view, case 1 seems really important and case 2 not so much, if we don't really know or understand the context.

Imho, it should be the project owner's decision. They're the only one truly capable of judging the impact of the added feature.

I agree that it's not really possible but that's what they are asking, haha. Since we have to review the contributions and give them a score, we can't really leave it to the project owners to decide the significance (obviously if it's a task request, then yes). I also think it's easier to judge the amount of work it would take for an average developer to complete something, than it is to rate the significance.

Let me put it this way: what do you think we should put the most emphasis on when scoring a contribution and why? Currently it's the amount of work, the significance and the quality of the code. I think they want to remove the question about the amount of work as I've mentioned before, and since it's not really possible to judge the significance with objective metrics, all it leaves is the quality of the code.

All of this has left me pretty confused, to be honest. I guess I'll ask during the weekly tomorrow and see what they have to say - maybe that will clarify a few things.

To be sure that we know who did what, all the functions must be commented and have the author.

woot

I agree with all the points. Now, when we say significance: let's say one line of code and 1000 lines of code are both significant for a project; how do you distinguish between the two?

Though we should also have slightly different parameters to judge a feature versus a new project. I would also give newer technology an edge over older technologies, because a lot of thought and research might have gone into it.

