please only vote if you are a BIPoC and have experienced persistent harassment on the fediverse

imagine your instance had a feature where your mods and mods from other instances could quickly and easily share block/bans across fedi, to more rapidly and thoroughly respond to toxic behavior, and to combat users who ban evade by switching instances

would you feel overall safer on the fediverse? would you want to use fedi more or more often?

everyone please boost :boost_ok: 🙏

if you know a BIPoC who has been harassed fully off the fediverse, if you think they would be comfortable answering this poll, i would be extremely super interested to hear their input 💯 💯

sorry i didn't have room to put a spider-man option on this poll. i'll post an update when the poll ends so everyone can see the results. thank you to everyone for not voting if the poll isn't for you. and big thank you to the people who the poll is for, for helping provide feedback 🙌

for those who are dying to know, it's currently split 50/50 between the two extremes, yes/yes and no/no

maximum intrigue 😳 😳

so to summarize the results, if mods could federate block/ban actions across instances,

63% of BIPoC who have previously experienced persistent harassment would feel safer, and

52% would want to spend time on fedi more or more often

@red Right? Didn't always agree with him but whether he was dishing out a bitter pill or posting lighter fare, it was nice to get a different indigenous perspective.

@red poll people can vote in to get a notification ~1d after original poll ends

@red who is going to validate the blocklist for accuracy? There's already an instance blocklist floating out there that claims I'm a TERF, which is bullshit.

How would one appeal if they were falsely added to the list; is there a plan to include a way to automate un-blocking?

@feld i’m thinking not about one centralized blocklist

rather, i’m thinking of a way for admins/mods to federate their block recommendations (like how people make block recommendation toots, except with an actual UX). that way each instance’s admins/mods A) won’t miss a notification, and B) can replicate those blocks/bans (or disregard them) with a series of one-click decisions based on their own independent judgement and local codes of conduct

@red but how do you prevent this from transforming from a rapid response mechanism into dogpiling / mob justice that ends up significantly harming the wrongfully accused?

The internet and social media are really good at turning lies into outrage and ruining lives. Remember what happened to the people wrongly accused of the Boston Marathon bombing, coordinated via Reddit? Nobody did fact checking, it was just blind outrage.

@feld what i’m proposing is something that people are already doing. but right now they are doing it based on second-hand information and screenshots. i’m proposing it be done through the moderation interface so instead it can link back to actual toots and accounts and be based on objective evidence.

also i am not interested in having this conversation. if you’d like to have it elsewhere (not in my mentions) feel free. please do not continue this conversation in my mentions. thank you

@red while this would be helpful, it wouldn't address the "hidden in plain sight" factor that is pervasive here.

@sunflowers i’m interested to learn more about this. maybe i’ve heard about this factor before but it’s not popping to mind right now. i tried doing a cursory internet search, but i don’t think the articles i found were related. if you had a suggestion for how i might understand the issue better, that would be super helpful. (but of course i’m not trying to make more work for you, so please feel free to disregard my lack of understanding.) ✌️✌️

@red I like @darius's Hometown fork and manifesto at runyourown.social. The goal is to have a private federation of networks with a similar code of conduct.

@dna i’ve been a fan of your work for a long while @darius 😇💯🏆

thank you for this link! i think i’d read it before a while back, but more people should

@red Is there a working group of some sort for implementing this feature? I know people have been talking about a block sharing mechanism for a while. Would love to try to donate some time to help out with this

@z i’m looking into organizing an effort

i know there is one group working on “trust propagation” which is basically a mechanism for automating blocks/bans based on the actions of peer instances (for example, if three of your peer instances manually block a user, your instance auto-blocks the user)
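(A minimal sketch of the “trust propagation” rule described above, assuming a simple vote-counting threshold. All names here, including `should_auto_block` and the example instance domains, are hypothetical illustrations, not any real Mastodon or ActivityPub API.)

```python
# Hypothetical trust-propagation rule: if at least THRESHOLD trusted peer
# instances have manually blocked an account, the local instance follows suit.
THRESHOLD = 3

def should_auto_block(account: str, peer_blocklists: dict[str, set[str]]) -> bool:
    """Return True if enough trusted peers have independently blocked `account`."""
    votes = sum(1 for blocked in peer_blocklists.values() if account in blocked)
    return votes >= THRESHOLD

# Illustrative peer data: three peers have blocked the same account.
peers = {
    "instance-a.example": {"spammer@bad.example"},
    "instance-b.example": {"spammer@bad.example"},
    "instance-c.example": {"spammer@bad.example", "other@bad.example"},
}

should_auto_block("spammer@bad.example", peers)  # True: 3 peers agree
should_auto_block("other@bad.example", peers)    # False: only 1 peer
```

The design choice here is that a single peer's decision never propagates on its own; only independent agreement across several peers triggers an automatic action.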

i’m thinking of a manual approach, where mods can choose to publish a block report to peer instances (basically how people now make block recommendation toots, but with a good UX)

@red That sounds pretty good. Lemme know if you could use an extra dev.

@z that’d be awesome! i’ll keep you in the loop 👏

@red @z I would also be interested in helping :)
But due to uni exams, I won't be able to do much until sometime in August...

@caluera @z oh cool, well i’ll keep you looped in too, but feel free to just ignore everything until you have the time for it. good luck with your exams! 👏🏆

@red Some admins block instances just because they decide not to block everybody. How would you mitigate that behavior?

@abloo the basic idea is, when mods take a block action, they would have the option to publish that action to peer instances. peer instances could then assess the report and based on available evidence choose to replicate that block for their instance, dismiss the report, or simply wait to see how other peer instances respond
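(The publish-then-decide flow described above could be sketched roughly like this. This is a hypothetical data model, not any existing moderation interface; the `BlockReport` class, field names, and decision labels are all assumptions for illustration.)

```python
from dataclasses import dataclass, field

@dataclass
class BlockReport:
    """A mod's published block action, which peer instances evaluate independently."""
    target: str                  # account or instance being reported
    reason: str                  # the reporting mod's rationale
    evidence: list[str]          # links back to actual toots/accounts, not screenshots
    decisions: dict[str, str] = field(default_factory=dict)  # peer -> decision

    def decide(self, peer: str, decision: str) -> None:
        """Record a peer instance's independent choice on this report."""
        if decision not in {"replicate", "dismiss", "wait"}:
            raise ValueError(f"unknown decision: {decision}")
        self.decisions[peer] = decision

# One mod publishes a report; each peer then replicates, dismisses, or waits.
report = BlockReport(
    target="harasser@bad.example",
    reason="persistent harassment, ban evasion",
    evidence=["https://bad.example/@harasser/12345"],
)
report.decide("peer-a.example", "replicate")
report.decide("peer-b.example", "wait")
```

The key property matching the proposal: the report carries evidence links and every peer records its own decision, so nothing is replicated automatically or based on hearsay.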

@red @abloo has anyone thought of ways this could be abused to punish marginalised people unfairly? (hopefully you understand what I mean)

@zensaiyuki @abloo i mean, i think it should be fairly obvious

are you suggesting there's a non-obvious application as well?

@red @abloo just off the top of my head, suppose a peer decides they’re “redpilled” now and starts suggesting blocking bipoc, and obfuscates their reasons/lies about it. how would that be caught?

@zensaiyuki @abloo if admins are blocking people based on hearsay, they are shitty admins and i personally would want to get blocked by them

but what you're describing is also not different than what's happening now. what's happening now is people are just tooting "block recommendation" and there isn't a UX for it

@red @abloo understood, my intent is not to poke holes in the proposal. I just think a design step for inclusivity should include asking the question “how can this be abused”, and I am assuming you all know what you’re doing, did that step already, and found the design to be airtight, or mostly good with x y z issues. that’s what I am asking about, because I am curious and wanna learn.

@zensaiyuki @abloo i think "airtight" is a fallacy. nothing is airtight and we are constantly vulnerable. imo the best implementation will make it *harder* to be a shitbag than to be a good upstanding person, and therefore by the law of averages, peoples' general behavior will skew towards good

but i'm also pretty cynical, so maybe don't listen to me

@red @abloo that’s a pretty good philosophy. i think what i have read here is not concrete enough for me to judge. there is, though, I believe, a concrete difference between a one-click “accept block rec” and a toot where you have to put the work in and investigate. I’d be concerned about mitigating witch hunt scenarios where the hearsay block rec travels halfway round the world before the truth gets its pants on. but that concern of course should be balanced against current harm.

@red @abloo and like i said, I dunno, it’s just an example, first thing i thought of. for all i know it’s not a real problem in practice.
