The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.
Fighting fake accounts is hard, and most implementations don't currently have an effective way of filtering them out. I'm sure the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.
Not that much harder anymore. You don’t need a good language model, just one that can spit out believable blurbs of text. Alternatively, you can do what Reddit bots do and just copy parts of other comments.
Using the comment count also promotes rage-bait, making the platform much more polarizing and toxic.
So the question becomes: how do we rank posts and comments in a way that is based on neither upvotes, downvotes, nor comment counts? I could see a trust value being computed for each user, based on trusted users marking others as trusted combined with a personal trust score, but that puts a barrier on new users and reinforces echo chambers.
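To make the trust idea concrete, here's a minimal sketch of one possible version. Everything in it is made up for illustration: the endorsement model (each user marks a set of others as trusted), the base value, the damping weight, and the fixed number of propagation rounds are all assumptions, not anything an existing fediverse server implements.

```python
# Hypothetical trust propagation: every user starts at a small base
# score, and repeatedly receives a share of the trust of whoever
# endorses them. All constants here are arbitrary choices.
def propagate_trust(endorsements, base=0.1, weight=0.9, rounds=10):
    # endorsements: {user: set of users that this user marks as trusted}
    users = set(endorsements) | {v for vs in endorsements.values() for v in vs}
    trust = {u: base for u in users}
    for _ in range(rounds):
        new = {}
        for u in users:
            # Sum the trust of everyone who endorses u.
            incoming = sum(t for e, t in trust.items()
                           if u in endorsements.get(e, set()))
            # Normalize so a flood of sock-puppet endorsers from one
            # account can't inflate the total without bound.
            new[u] = base + weight * incoming / max(len(users) - 1, 1)
        trust = new
    return trust
```

Note that this directly exhibits the problem raised above: a new user with no endorsers is stuck at `base` forever, and trust flows mostly within clusters that already endorse each other.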
What else could be tried?
I think the best option is to rank things by votes and just put in the best effort to eliminate vote manipulation.
Only if trust starts at 0. A system where trust started high enough to not filter out posts and comments would avoid that issue.
Maybe instances should be assigned a rank for how dependable they are: length of time active, number of active users, stuff like that. Each instance keeps track of its own rankings for every instance it is federated with, then puts the upvote and those stats in a magic box to calculate the actual vote value.
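One possible "magic box" could look like the sketch below. The reputation formula is entirely made up for illustration (log-scaled instance age and active-user count, capped at 1.0); the point is just that each server weights incoming votes by its own locally held opinion of the originating instance.

```python
import math

# Hypothetical instance reputation: older, busier instances count
# for more, capped so no instance can dominate arbitrarily.
def instance_reputation(days_active, active_users):
    return min(1.0, (math.log1p(days_active) / 10)
                  * (math.log1p(active_users) / 10))

# Weight each vote by the reputation of the instance it came from.
# votes: list of (instance, +1 or -1); reputations: this server's own
# per-instance table. Unknown instances count for nothing.
def weighted_score(votes, reputations):
    return sum(value * reputations.get(instance, 0.0)
               for instance, value in votes)
```

Under this scheme, a hundred upvotes from a day-old single-user instance would contribute almost nothing, while each server can tune its own table without any global agreement.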