This tutorial covers how to add a new abuse type to your existing Sift Science integration. It is meant to be used in addition to one of our complete integration guides.
Whichever abuse type you're adding, it's important to send all the useful data you can so that we can best distinguish between good and bad users. This means adding both events and fields to your existing integration, as well as sending feedback specific to the new type of fraud. Finally, you'll use abuse-specific scores and/or Decisions to automate.
It's important to send all the relevant data you can in order to get the best predictions for which users will commit abuse and which users you can reduce friction for. Below, we outline the core events for each abuse type. Vertical-specific custom fields can be found in our integration guides. If there is a core event related to the abuse type that isn't covered by our reserved events, send it as a custom event.
- $content_status for when a piece of content's status changes (e.g., when your moderation system approves or removes it)
- $flag_content for when other users flag content as suspicious
- $create_content when a user posts a review
- $create_content when a user sends a message
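As a sketch, the properties for a $create_content review event might look like the following. The IDs and values are invented, and the nested $review fields follow Sift's review object as we understand it; check the Events API reference for the full field list.

```ruby
# Illustrative $create_content properties for a review.
# All IDs and values below are made up for this example.
review_properties = {
  "$user_id"    => "billy_jones_301",
  "$content_id" => "review-23412",
  "$review"     => {
    "$subject"       => "Amazing office chair!",
    "$body"          => "Five stars. Would sit again.",
    "$item_reviewed" => "chair-28812"
  }
}

# With the sift gem this would be sent as, e.g.:
#   client.track("$create_content", review_properties)
```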
- $create_account if promotions are added at account creation.
- $create_order if promotions are applied on the order.
- $add_promotion if promotions are applied as a separate event.
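For example, a signup that applied a referral code could attach Sift's $promotions field to the $create_account event. The IDs below are invented, and the promotion fields are a sketch based on Sift's promotions object; consult the Events API reference for the supported fields.

```ruby
# Hypothetical signup that applied a referral promotion.
# All IDs and the promotion code below are invented.
signup_properties = {
  "$user_id"    => "new_user_882",
  "$user_email" => "new.user@example.com",
  "$promotions" => [{
    "$promotion_id"     => "FRIEND-REFERRAL-20",
    "$status"           => "$success",
    "$referrer_user_id" => "existing_user_104"
  }]
}

# With the sift gem: client.track("$create_account", signup_properties)
# If the promotion is applied later as its own event, send the same
# "$promotions" array on an "$add_promotion" event instead.
```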
If your team uses the Sift Science console, you'll set up Decision buttons for each action they apply to users or orders. These buttons can send a request to webhooks you create, and they also provide specific feedback on which users are good and bad for each fraud type you're fighting.
If your team uses an internal tool to conduct reviews, you'll connect our Labels API to your tool. This way, when your team finds an abusive user, the appropriate abuse-specific label is sent to Sift Science. Be as accurate as possible with the labels you send so that you get the best possible results. You should even send labels for abuse types you aren't signed up for.
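With the sift Ruby gem, labeling a user your review team caught abusing the referral promotion might look like the sketch below. The user ID and description are invented, and the `label`/`unlabel` usage reflects our understanding of the gem's API.

```ruby
# Abuse-specific label for a user found abusing promotions during review.
# The user ID and description are invented for this example.
label_properties = {
  "$is_bad"      => true,
  "$abuse_type"  => "promo_abuse",
  "$description" => "Created duplicate accounts to reuse a referral code"
}

# With the sift gem:
#   client.label("new_user_882", label_properties)
# If the user later turns out to be good, the label can be removed:
#   client.unlabel("new_user_882", :abuse_type => "promo_abuse")
```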
If you're using Sift Science to fight payment abuse, send both a $chargeback event and a Labels API request when you receive a chargeback.
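A sketch of that pair, with invented IDs: the $chargeback event ties the loss to a specific order and transaction, and the label marks the user for payment_abuse.

```ruby
# $chargeback event properties (all IDs invented for illustration).
chargeback_properties = {
  "$user_id"           => "billy_jones_301",
  "$order_id"          => "ORDER-28168441",
  "$transaction_id"    => "719637215",
  "$chargeback_reason" => "$fraud"
}

# Matching abuse-specific label for the same user.
payment_label = { "$is_bad" => true, "$abuse_type" => "payment_abuse" }

# With the sift gem:
#   client.track("$chargeback", chargeback_properties)
#   client.label("billy_jones_301", payment_label)
```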
You can automate synchronously, using our API responses to take action on users/orders, as well as asynchronously, using webhooks that listen for requests sent during review in our console. To illustrate, imagine that ChairsThisSecond, an on-demand chair delivery company, already uses Sift to fight payment_abuse. They've decided to add promo_abuse because their referral promotion is being abused.
ChairsThisSecond already has a payment_abuse workflow set up on the $create_order event to block fraudulent payments. They'll need to add a workflow on the $create_account event to stop signups that abuse referral codes. When making their $create_account requests, they'll ask that the promo_abuse score be returned in the response, along with the Decision made on the account.
response = client.track(event, event_properties, :return_workflow_status => true, :abuse_types => ["promo_abuse"])
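From there, ChairsThisSecond can branch on the promo_abuse score in the response. The response shape below is a trimmed, illustrative sketch of a synchronous scoring response (the exact paths and the 0.85 threshold are assumptions, not Sift-recommended values):

```ruby
# Trimmed, illustrative sketch of a synchronous response body.
sample_body = {
  "score_response" => {
    "scores" => {
      "promo_abuse" => { "score" => 0.91 }
    }
  }
}

# Decide whether to hold the new account for review.
# The 0.85 threshold is invented for illustration only.
def hold_for_review?(body, threshold = 0.85)
  score = body.dig("score_response", "scores", "promo_abuse", "score")
  !score.nil? && score >= threshold
end
```

A score of 0.91 against the illustrative 0.85 threshold would hold the account; a missing or low score lets it through.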
Automating off of Review
ChairsThisSecond uses the Sift Science console to review users suspected of payment_abuse. They currently make Block Order and Allow Order decisions. If a user abuses a referral code and still makes it through to place an order, they don't want to cancel the order if the payment method is good; instead, they want to remove the promotion. So they'll add a new Accept, No Promo Decision and have their orders webhook listen for it.
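A minimal sketch of that webhook logic, assuming the Decision webhook posts JSON carrying a decision id and the order entity. The payload shape and the ids "accept_no_promo" and "block_order" are invented for illustration; use the ids configured in your console.

```ruby
require "json"

# Map an incoming Decision webhook payload to an internal action.
# The payload shape and decision ids below are assumptions for
# illustration, not Sift's documented webhook format.
def action_for(webhook_json)
  payload = JSON.parse(webhook_json)
  case payload.dig("decision", "id")
  when "accept_no_promo"
    { :action => :remove_promotion, :order_id => payload.dig("entity", "id") }
  when "block_order"
    { :action => :cancel_order, :order_id => payload.dig("entity", "id") }
  else
    { :action => :none }
  end
end
```

The webhook endpoint itself would parse the request body, call `action_for`, and hand the result to the order system that removes the promotion or cancels the order.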