Discuss Scratch » Moderator bots

- SuperSean12 (500+ posts)
Scratch should add moderator bots. They would moderate comments, projects, etc., so the users don't have to, and the bots would instantly report anything inappropriate to the Scratch Team.
They would be AIs, and the Scratch Team would teach them what is inappropriate and what is not.
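The proposal above can be sketched in a few lines of Python. This is purely a hypothetical illustration: the `BLOCKLIST`, `is_inappropriate`, and `report_queue` names are invented here, and Scratch exposes no such moderation API.

```python
# Hypothetical sketch of the proposed moderator bot: scan each comment
# against a blocklist and queue any match for the Scratch Team to review.
# The blocklist terms are illustrative placeholders.

BLOCKLIST = {"badword", "slur"}

def is_inappropriate(comment: str) -> bool:
    """Return True if any word in the comment is on the blocklist."""
    words = (w.strip(".,!?").lower() for w in comment.split())
    return any(w in BLOCKLIST for w in words)

report_queue = []  # stands in for "instantly report it to the Scratch Team"

def moderate(comment: str) -> None:
    if is_inappropriate(comment):
        report_queue.append(comment)

moderate("cool project!")
moderate("this is a badword, wow")
print(len(report_queue))  # -> 1
```

Even this toy version shows the core weakness the thread goes on to discuss: it only catches exact blocklist words, nothing misspelled, drawn, or merely implied.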
Last edited by SuperSean12 (Aug. 8, 2020 06:43:35)
- Maximouse (1000+ posts)
I don't think bots are smart enough to report inappropriate projects.
- SuperSean12 (500+ posts)
Maximouse wrote:
“I don't think bots are smart enough to report inappropriate projects.”

I forgot to say they are learning AIs.
- thr565ono (100+ posts)
Why can't the humans do it? Well, there are millions of Scratchers and fewer than 50 moderators, so they are busy people.
- -InsanityPlays- (1000+ posts)
thr565ono wrote:
“Why can't the humans do it? Well, there are millions of Scratchers and fewer than 50 moderators, so they are busy people.”

Well, there's the forum helpers and the community mods (archived forums).
Also, the forums don't have a profanity detector. Why?
Last edited by -InsanityPlays- (Aug. 8, 2020 08:53:06)
- Jeffalo (1000+ posts)
“just add moderator bots, it's very simple”
If moderator bots were good enough to not flag appropriate content and to correctly flag inappropriate content, then every website would be free of spam and inappropriate material. That's practically impossible with today's tech, and especially for the Scratch Team to just add like that.
- Maximouse (1000+ posts)
-InsanityPlays- wrote:
“Also the forums don't have a profanity detector. Why?”

They do have a filter that replaces bad words with asterisks, but it's much less effective than the comment one.
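A word-to-asterisks filter like the one described here can be sketched as follows. This is an illustrative toy, not Scratch's actual implementation; `BAD_WORDS` is a placeholder list.

```python
import re

# Toy censor filter: replace each listed word with asterisks of the
# same length. Whole-word matching only, case-insensitive.
# The word list is a harmless placeholder.

BAD_WORDS = ["darn", "heck"]

def censor(text: str) -> str:
    for word in BAD_WORDS:
        # \b word boundaries avoid censoring substrings of longer words
        pattern = re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE)
        text = pattern.sub(lambda m: "*" * len(m.group()), text)
    return text

print(censor("What the heck, darn it"))  # -> "What the ****, **** it"
```

The whole-word matching is exactly why such filters are "less effective" than a human: insert one extra letter and the word slips through.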
- Col_Cat228 (1000+ posts)
Dude. AI can't tell what's inappropriate and what's not. Wait a few more centuries, and this may become possible.
- ioton (500+ posts)
thr565ono wrote:
“Why can't the humans do it? Well, there are millions of Scratchers and fewer than 50 moderators, so they are busy people.”

The OP says that “the bots will instantly report it to the scratch team”, meaning more work for the ST, unless they build an extremely smart AI. Let's say I put a bad word drawn by pen. How would a bot detect that? There are many ways to draw letters. I could draw it with dots. With lines. Stamping squares.

SuperSean12 wrote:
“i forgot to say they are learning AIs”

It's not that easy. There are so many ways that I could do something that could get me reported. Can an AI track down every single site that's not appropriate?
- Maximouse (1000+ posts)
Machine learning is not yet good enough to filter inappropriate stuff. How would it know, for example, if a project is too scary?
- SuperSean12 (500+ posts)
Maximouse wrote:
“Machine learning is not yet good enough to filter inappropriate stuff. How would it know, for example, if a project is too scary?”

It would get existing data.
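"Getting existing data" usually means training a classifier on labeled examples. Below is a toy bag-of-words Naive Bayes sketch in pure Python; the four-comment training set is invented, and a real moderation model would need vastly more data, and would still struggle with fuzzy judgments like "too scary".

```python
import math
from collections import Counter

# Toy Naive Bayes trained on "existing data": a tiny invented set of
# labeled comments. Purely illustrative of the idea, not a real moderator.

training = [
    ("great project i love it", "ok"),
    ("nice game well done", "ok"),
    ("you are stupid and bad", "report"),
    ("this is stupid i hate you", "report"),
]

counts = {"ok": Counter(), "report": Counter()}
totals = Counter()
for text, label in training:
    words = text.split()
    counts[label].update(words)
    totals[label] += len(words)

vocab = {w for c in counts.values() for w in c}

def classify(text: str) -> str:
    scores = {}
    for label in counts:
        # sum of log likelihoods with add-one smoothing (uniform prior)
        scores[label] = sum(
            math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
            for w in text.split()
        )
    return max(scores, key=scores.get)

print(classify("i love this project"))  # -> "ok"
print(classify("you are stupid"))       # -> "report"
```

It works on words it has seen, but anything outside the tiny training set, sarcasm, new slang, or a scary image, is invisible to it, which is Maximouse's point.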
- --Explosion-- (1000+ posts)
Yes, this would be very hard to add. I bet it could be done with some super-high-tech deep learning, but that would be very hard to implement.
- _ReykjaviK_ (500+ posts)
People will find ways to get by the system. It would be hard to add.
- Za-Chary (1000+ posts)
We already rely on quite a bit of automated moderation processes. This has made a lot of people very angry and been widely regarded as a bad move (even though it actually does help us quite a bit).
I suppose the point is that no matter how much automation we do, it won't be perfect, and it's possible that some Scratchers may suffer because of it. A fine balance would have to be made to make sure that the moderation is as effective as possible without making too many mistakes.
- Basic88 (1000+ posts)
We aren't Roblox. I have a feeling that if this happened, you would get an alert for having Gobo in your project (yes, very ridiculous).
- garnet-chan (100+ posts)
Col_Cat228 wrote:
“Dude. AI can't tell what's inappropriate and what's not. Wait a few more centuries, and this may become possible.”

Well, they probably can. I'm no legit coder, but the bot could scan a comment for any words that are bad words misspelled.
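Catching misspelled bad words, as suggested here, can be approximated with fuzzy string matching. A sketch using Python's standard-library `difflib`; the word list and the 0.8 similarity threshold are arbitrary placeholders, and a real filter would have to tune that threshold carefully to avoid false positives.

```python
import difflib

# Fuzzy blocklist: flag words that are *close* to a blocked word,
# so simple misspellings don't slip through. Word list is a placeholder.

BAD_WORDS = ["stupid", "idiot"]

def looks_bad(word: str, threshold: float = 0.8) -> bool:
    """True if the word is similar enough to any blocked word."""
    word = word.lower().strip(".,!?")
    return any(
        difflib.SequenceMatcher(None, word, bad).ratio() >= threshold
        for bad in BAD_WORDS
    )

def flag_comment(comment: str) -> bool:
    return any(looks_bad(w) for w in comment.split())

print(flag_comment("you are stupd"))   # misspelling still caught
print(flag_comment("great project"))   # normal comment passes
```

This handles dropped or swapped letters in text, though it does nothing for ioton's pen-drawn-letters scenario, which would need image recognition.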
- CatsUnited (1000+ posts)
Automating part of the moderation process is important in trying to keep up with the massive amount of content input into this site, though I wouldn't want to go as far as to automatically take down projects algorithmically. Even if that system were to be introduced, it'd still need a lot of human intervention, and people are going to get mad if they realise a machine is determining whether or not their project should be public.
Col_Cat228 wrote:
“Dude. AI can't tell what's inappropriate and what's not. Wait a few more centuries, and this may become possible.”

Maximouse wrote:
“Machine learning is not yet good enough to filter inappropriate stuff. How would it know, for example, if a project is too scary?”

okay looks like we're waiting for GPT-4 to come out; pack up your bags everyone lol
- ElsieBreeze (100+ posts)
CatsUnited wrote:
“okay looks like we're waiting for GPT-4 to come out; pack up your bags everyone lol”

FWIW, GPT-3 is sometimes generating some quite racist output, so they're blocking some of GPT-3's output if it contains certain things.