We’ve introduced AI-powered moderation to help keep Snap Feed safe, appropriate, and helpful for everyone. This system automatically flags requests that may include sensitive or inappropriate content. Flagged requests are hidden from the public Snap Feed but remain visible to you so you can action them.
How does AI moderation work?
When a Snapper submits a request through the Snap Send Solve app, the photos and description are reviewed by our AI system to detect the following:
Harmful or inappropriate content
(e.g. harassment, hate speech, explicit material, violence)
Personally identifiable information (PII)
(e.g. full names, specific addresses linked to individuals, email addresses)
Aggressive or harsh criticism of Solvers
(especially if targeting specific people or organisations)
If any of this content is found, the request is flagged and removed from public view in Snap Feed to keep the platform constructive and respectful.
How can I see if a request is flagged?
If you’re an Enterprise Portal user, you can see if a request has been flagged in two places:
If you don't see the "Flagged by AI" column, click the settings cog in the top right corner above the Reports list, then use the column display panel to make the column visible.
What happens when a request is flagged?
It won’t appear in the public Snap Feed
It can still be viewed and actioned by Solvers
Snappers can still share it directly (e.g. via link)
This helps protect privacy and reduce harmful content, while still giving Solvers the information they need.
A note about AI moderation
AI moderation is currently in beta. While it helps filter content at scale, it’s not perfect. Some requests may be flagged incorrectly, and others may not be caught. Snap Send Solve runs regular reviews to make sure sensitive content doesn’t appear publicly in Snap Feed.
Need more help? Get support from our team by emailing us at contact@snapsendsolve.com or sending us a message.