At OpenWeb, we employ a multi-layered approach to moderation that leverages the latest advances in AI and machine learning. However, even the most advanced AI has its limits: any content that requires a closer look is sent to our staff of human moderators, who make the final call with the specific community’s guidelines in mind. Typically, our moderation staff reviews user content only when a user appeals a moderation decision, when a piece of content is flagged by the community, or when our models encounter content they can’t yet confidently classify.
Read more about our Moderation Standards.