What are Decisions?

Decisions are how you give Sift the feedback it uses to recognize the fraud patterns unique to your company. The more feedback you send to Sift Science, the more accurate your risk predictions become. You can make Decisions manually while reviewing a user in the Console, or automatically via Workflows for users that don't require further investigation.

You can connect Decisions to your system so that any action you take in Sift's Console (for example, blocking an order or user) sends a message to your system to do the same.

Common Decisions

Common decisions depend on your business and the types of abuse you're fighting. Here are some examples by abuse type:

  • Payment Abuse: Bad Order, Good Order, Unclear, Add Verification.
  • Content Abuse: Block Content, Good Content, Unclear, Restrict Account.
  • Account Abuse: Bad Account, Good Account, Unclear, Restrict Account.
  • Promo Abuse: Block Promo, Allow Promo, Unclear, Close Account.

Two Ways to Make Decisions

Within the console

Who should choose this: Teams who use the Sift Science Console for manual review

How: You can apply Decisions in both the Review and Explore tabs. In the Review tab, you can work through a queue of users or orders, applying your pre-configured Decisions along the way. The default decisions available are “Looks Bad” and “Looks OK”. You can customize both your Decisions and what goes into a queue by following our Review Queues Guide.

Alternatively, you can make Decisions in the Explore tab. Just select the appropriate Decision from the blue Decision drop-down.

From Your Internal Systems and Dashboards

Who should choose this: Teams who primarily use their own internal dashboards for manual review

How: You can use our Labels API to send feedback automatically when you make fraud decisions in your internal tool.
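In practice, sending feedback is a single authenticated POST per user. The sketch below (Python, standard library only) builds such a request without sending it. The endpoint path and the $api_key/$is_bad/$abuse_type field names follow Sift's published Labels API conventions, but treat them as assumptions and check the current API reference before relying on them:

```python
# Hypothetical sketch of labeling a user "bad" via Sift's Labels API.
# Field names and endpoint shape are assumptions based on Sift's REST
# conventions; verify against the current Labels API reference.
import json
import urllib.request

API_URL = "https://api.siftscience.com/v205/users/{user_id}/labels"

def build_label_request(user_id: str, api_key: str, is_bad: bool,
                        abuse_type: str) -> urllib.request.Request:
    """Construct (but do not send) a Labels API request for one user."""
    payload = {
        "$api_key": api_key,        # your REST API key from the Console
        "$is_bad": is_bad,          # True = fraud, False = not fraud
        "$abuse_type": abuse_type,  # e.g. "payment_abuse"
    }
    return urllib.request.Request(
        API_URL.format(user_id=user_id),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: your internal tool cancels an order for suspected fraud,
# so label that user bad. (billy_jones_301 is a made-up user ID.)
req = build_label_request("billy_jones_301", "YOUR_API_KEY",
                          is_bad=True, abuse_type="payment_abuse")
# urllib.request.urlopen(req) would actually send it; omitted here.
```

Wiring a call like this into your internal tool's "block" and "accept" actions keeps Sift's feedback loop in sync with your analysts' decisions automatically.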

FAQs for Sending Feedback

When should I send feedback?

  • If you’re using Sift’s Review Queues, you’ll set up whatever block/accept/reject decisions you normally would. For example: Ban User, Block Order, Accept Order, or Accept Account.
  • If you are using an internal review queue, make sure to send Bad/Not Bad Labels for every block/accept decision you make in manual review.
  • If your support team bans or un-bans a user after the initial risk decision was made, send updated feedback.
  • If you receive a chargeback with a fraud reason code for a transaction, send a Bad label.
  • If you receive any other information after the initial fraud decision that changes your verdict on the user's riskiness, update your feedback.

Should I send feedback for users banned automatically by internal rules?

  • If you use an internal rules engine instead of Sift's Workflows platform to make decisions on users, don't send labels for automated decisions that an analyst hasn't reviewed (e.g., users banned automatically by an internal rule or based only on their Sift Score). Only send labels through the Labels API when an analyst has manually reviewed a user.
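The rule above amounts to a simple gate in your internal tool: record where each decision came from, and only forward it to Sift when a human made it. A minimal sketch, where the decision-source names are made up for illustration:

```python
# Hypothetical guard in an internal review tool: only forward a label
# to Sift when the decision came from a human analyst, never from an
# automated rule or a score-only threshold.
HUMAN_SOURCES = {"manual_review"}  # assumed source name, for illustration

def should_send_label(decision_source: str) -> bool:
    """Return True only for decisions made by a human analyst."""
    return decision_source in HUMAN_SOURCES

# Automated decisions are filtered out before any Labels API call:
for source in ("manual_review", "rules_engine", "score_threshold"):
    if should_send_label(source):
        pass  # build and send the label request here
```

The point of the gate is that automated decisions often already depend on the Sift Score, so labeling them would feed the model's own output back to it as ground truth.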

Should I send feedback on co-workers and test data?

  • Don't label test users or members of your own team who happen to be users of your business. These users often have odd behavior patterns compared to your normal users, so labeling them will make it harder for us to identify fraud patterns.
  • If you notice that your device is being linked to users you don't know, talk to your developers to make sure they disable the Sift JavaScript snippet when you investigate users or take phone orders.
  • Finally, send test users and orders to your Sandbox (test) account, not to your Production account where your real user data lives.

How should I handle non-fraud decisions?

  • If you're using your admin tool for review, don't send us labels for users you've taken action on due to business policies or other reasons unrelated to fraud. We don't want your model to incorporate any information that might skew fraud predictions.
  • If you're reviewing in Sift's Console and want to mark a user for a non-fraud reason, you'll need to jump into your internal dashboard to do so. Make sure not to use a Decision button for this, as doing so will hurt our ability to predict fraud for you.

A Smart Feedback Game Plan

Here are suggested game plans for two example companies with different needs:

Bus Tickets Inc.

No orders held for review. Using the Sift Science console.

Bus Tickets Inc. sells transportation tickets and offers an instant download after purchase. The fraud they see comes in the form of card testers using stolen credit cards. They automatically block orders with a Sift Score greater than 80 and accept the rest. Due to the on-demand nature of their business, no orders are held for manual review.

Game plan:

  • To give Sift feedback, better understand their fraud patterns, and determine which orders to block, their team dedicates 30-60 minutes of employee time each week to reviewing a sample of recently placed orders in the Sift Science console.
  • Since they reject orders with a Sift Score > 80, they created a List in the Sift Science Console to view orders from the past week with scores > 70 to give feedback and keep an eye on their blocking threshold of 80 (since their ideal threshold may shift over time).
  • Since they deal with fraudulent credit card testers, they also created a List to review users with more than 2 unique billing BINs in the past day.

Furniture Online Co.

Using their own internal dashboard. Holding some orders for review by their 5-person team.

Furniture Online Co. sells all types of home furnishings online, shipping all over the world. They accept, reject, or hold orders for review based on a user's Sift Score. They do manual review within their own application and supplement their investigations with the Sift Science Console.

Game plan:

  • Every time a fraud analyst makes a decision, they update the order status within Furniture Online Co.'s internal application. Developers have integrated their internal application with the Labels API so that cancelling orders due to suspected fraud sends a Bad label to Sift Science.
  • As part of their regular review process, analysts jump to the Sift Science Console for deeper analysis of tricky cases. In particular, the network graph helps the analysts proactively find related users.
  • On a weekly basis, each team member sets aside 15 minutes to proactively find and label other users through Lists they have created in the Sift Science Console.

Questions?

Our team is happy to answer any questions and address any concerns you might have! Just drop us a note at support@siftscience.com.