After you send in user data, Sift will immediately start assessing your users for risk. You can improve these risk scores by sending feedback about the specific abuse you face. There are a few ways to do this:

  • Create a Feedback-Only Workflow where you review a subset of highly scored users in Sift and give us feedback about how well we are predicting fraud for you (this tutorial explains how to do this).
  • If you are already doing manual review in your existing system, use the Labels API to send Sift the outcome of your manual review.
  • In addition to one of the two methods above, if you already have a large set of known fraudsters you can backfill historical data and label the fraudulent accounts retroactively.
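
If you go the Labels API route, each label is a small JSON POST to Sift. The sketch below (Python, standard library only) shows the general shape of a label call; the `v205` endpoint and `$`-prefixed field names follow Sift's public API docs at the time of writing, so verify them against the current Labels API reference before use:

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_SIFT_REST_API_KEY"  # placeholder; use your real REST API key

def build_label(is_bad, abuse_type, description=None):
    """Build a Labels API payload marking a user as fraudulent or legitimate."""
    payload = {
        "$api_key": API_KEY,
        "$is_bad": is_bad,          # True = fraudster, False = legitimate user
        "$abuse_type": abuse_type,  # e.g. "payment_abuse" or "promotion_abuse"
    }
    if description:
        payload["$description"] = description
    return payload

def send_label(user_id, payload):
    """POST the label for a user (network call; requires a valid key)."""
    url = "https://api.sift.com/v205/users/%s/labels" % urllib.parse.quote(user_id)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Example: label a user bad after your manual review confirms a chargeback.
payload = build_label(True, "payment_abuse", "Confirmed chargeback")
```

Sift also publishes official client libraries that wrap this call; the same fields map onto their label methods.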

Once you’ve completed all the steps in the Getting Started Guide, you’ll be sending us continuous feedback to keep improving your scores in real time, catching bad users as soon as they appear. For more information on this, see the third section in the Getting Started Guide.

Feedback-Only Workflow Overview

The goal of this tutorial is to teach you how to create a Workflow in Sift and provide feedback on users through manual review. At the end of this tutorial, you’ll have a Workflow that looks something like this:

This will work by adding each user that meets the criteria of your Workflow to a Sift Review Queue so that you and your fraud team can log into Sift and go through the queue, giving your feedback on whether or not the user appears risky. No real business actions will be taken in your system since this Workflow is only designed to give Sift Science feedback.

Note: It isn’t necessary to review every risky user. The goal is to give us feedback on a subset of your users so we can more quickly adapt our risk scores to your specific fraud.

Intended Audience

Feedback-Only Workflows are ideal for people who want to get a sense of how Workflows work without making any code changes in their backend. They are also ideal for people who are passively evaluating Sift's risk scores for a period of time before fully integrating automated Decisions.


Prerequisites

You must have integrated with our Events API, which sends Sift the data used to populate the Review Queues associated with the new Workflow.

Determine Your Triggering Event

The first step in creating a Workflow is to determine which key event should trigger the Workflow evaluation. If you already have fraud processes set up, you'll already know when you queue or block users based on risk. Some common examples are:

  • If a company is preventing chargebacks, it may want to assess users for payment abuse before allowing them to complete an order.
  • If a company is preventing abuse of promotions, it may assess users for promotion abuse before allowing them to redeem promotions.
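
For example, if order creation is your key event, the Workflow would be triggered by a `$create_order` event sent through the Events API. Here is a minimal sketch in Python (standard library only); the endpoint, the `$`-prefixed fields, and the micros convention for amounts follow Sift's public API docs at the time of writing, so check the current Events API reference before use:

```python
import json
import urllib.request

API_KEY = "YOUR_SIFT_REST_API_KEY"  # placeholder; use your real REST API key

def build_create_order_event(user_id, order_id, amount_micros, currency="USD"):
    """Build a $create_order event body for the Events API."""
    return {
        "$type": "$create_order",
        "$api_key": API_KEY,
        "$user_id": user_id,
        "$order_id": order_id,
        "$amount": amount_micros,       # amounts are in micros: $9.99 -> 9990000
        "$currency_code": currency,
    }

def send_event(event):
    """POST the event to Sift (network call; requires a valid key)."""
    req = urllib.request.Request(
        "https://api.sift.com/v205/events",
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Example event for a hypothetical user and order.
event = build_create_order_event("billy_jones_301", "ORDER-123", 9990000)
```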

Try it Out: Create a Feedback-Only Workflow

  1. Go to the Workflows tab under Automate in the Sift Science Console.
  2. Click Create Workflow.
  3. Search for your triggering event in step 1.
  4. Choose Users as the entity.
  5. Click Save Draft; we will come back to complete this Workflow in the next step.

It should look like this:

Create Your Decisions

Since the goal of the Feedback-Only Workflow is to review highly scored users and tell Sift whether we are accurately identifying your fraudsters, your Decisions should be pretty simple:

  • "Looks OK" (Already set up) - This Decision tells us that the user is OK. Since you are reviewing users Sift has identified as suspicious by their high score, this means you disagree with Sift’s risk score.
  • "Looks Bad" (Already set up) - This Decision means that you agree with Sift’s risk score and believe the user is fraudulent.
  • "Skipped" (Needs setup!) - This Decision means an analyst looked at the user in the queue but couldn’t decide whether the user was fraudulent. It tells Sift that you are not sure.
  • "Not Reviewed" (Needs setup!) - This Decision is for all users that don’t meet the risk threshold. Use it to indicate that a user hasn’t been reviewed.

The next choice is the category of Decision. There are three available categories:

  • Block - This category is used for Decisions which stop a user from interacting with your site.
  • Accept - This category is used for Decisions which allow a user to interact with your site.
  • Watch - This category is used for Decisions which do not fall into any other category.

In this case, the two new Decisions are neither Accept nor Block, so they will both be categorized as "Watch".

Because the goal is to simply give feedback, there is no need to set up webhooks on these Decisions or do any further integration.

The last choice is whether you want to make these new Decisions accessible in Explore. Because these Decisions are designed to be part of a Workflow, we will allow them to be taken only through Review Queues.

Our Decisions should include this information:

  • Skipped - Watch - Reviewed but not clear whether it is fraud or not - Users - No Webhook - Not in Explore
  • Not Reviewed - Watch - Did not meet the criteria necessary to review - Users - No Webhook - Not in Explore

Try it Out: Create Decisions

  1. Go to the Decisions tab under Automate in the Sift Science Console.
  2. Click Create Decision.
  3. Fill in the Decision as described above.

It should look like this:

Set Criteria for Sending to Review Queues

Every business will have a different threshold it is comfortable with based on its priorities. Start with a high score threshold to filter out good users. If too many users score above the threshold, you don’t have to review every user in the review queue; if too few do, lower the threshold to include more users in the criteria.

You can get a sense of meaningful thresholds from Explore before you pick your initial criteria. In our example we're going to pick a threshold of 75.
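Note that the Console displays scores on a 0-100 scale, while the Score API returns a float between 0 and 1, so a Console threshold of 75 corresponds to an API score of 0.75. Here is a rough sketch in Python (the `v205` score endpoint and `"score"` response field follow Sift's docs at the time of writing; verify against the current Score API reference):

```python
import json
import urllib.request

API_KEY = "YOUR_SIFT_REST_API_KEY"  # placeholder; use your real REST API key

def fetch_score(user_id):
    """GET a user's current score from the Score API (network call)."""
    url = "https://api.sift.com/v205/score/%s/?api_key=%s" % (user_id, API_KEY)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["score"]  # float between 0 and 1

def exceeds_threshold(api_score, console_threshold=75):
    """Compare an API score (0-1) against a Console-style threshold (0-100)."""
    return api_score * 100 >= console_threshold
```

With the example threshold of 75, a user whose API score is 0.82 would be routed to the Review Queue, while a user at 0.5 would not.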

Try it Out: Create a Route

  1. Open the saved draft of your Feedback-Only Workflow from the Workflows tab in the Sift Science Console.
  2. In the first Route, add a Criteria that matches users with a Sift Score greater than the threshold you chose.

It should look like this:

Create a Review Queue

The key aspects of a Review Queue are:

  • Timeout
  • Timeout Decision
  • Decisions Available in the Review Queue

Timeout: Timeout specifies how long a user can wait for an analyst to make a decision in the queue. When a user hits the timeout, the Timeout Decision will automatically be taken on that user. You should set the length of your timeout based on how often you’ll be logging into Sift to go through these queues.

Timeout Decision: Users left in the queue longer than the specified timeout will have the Timeout Decision applied to them. The Timeout Decision determines what happens to users when the queue is overloaded (i.e., do you default to accept or reject?). For our feedback goals, we don’t mind if a user times out; we just want to make the user’s status clear to other analysts in the console. So we suggest a Decision like “Skipped”, meaning the user was risky enough to warrant review, but no analyst made a decision on them.

Decisions Available in the Queue: In this case, the analysts only need decisions that tell Sift whether the analyst believes the user is good, bad, or skipped (unsure).

Try it Out: Create a Review Queue

  1. In the THEN column, select: Add to Review Queue.
  2. Create a new Review Queue.
  3. Name your queue something that your team will recognize.
  4. Add the timeout time that makes sense for your review capacity.
  5. Add the timeout decision ("Skipped").
  6. Add the three queue decisions we discussed above.

It should look like this:

Wrap Up

The last step is to make your workflow live and to add the action for all users that don't match any Route.

Try it Out: Make your Feedback Workflow Live

  1. In the "everything else" Route, choose Set a Decision. From the drop-down, choose “Not Reviewed”.
  2. Click Make Live in the upper right hand corner.
  3. Confirm that you're ready to go live (this is easy since none of your production systems are integrated with this Workflow).

It should look like this:

Now, as you send user events that trigger this Workflow, you can go to the Review Tab and view users coming in for review.

It should look like this:

Next Steps

  1. Feel free to tweak your thresholds if you feel that they’re either too broad or too narrow! Just edit your Workflow and go live again. Ideally these Workflows will help you narrow down the thresholds you want to use for automated blocking or review of users via Sift.
  2. When you are ready to create a Workflow with automated Decisions, read our Creating a Workflow tutorial.