When reviewing customers, fraud analysts often have to make difficult decisions based on the data available.
Occasionally they may come across cases where they're not confident enough to judge accurately whether a customer is a Fraudster or Genuine, and, as mentioned in our Manual Reviews help article, accuracy is hugely important when conducting reviews.
As a result, we're providing two additional labels, "Potentially Fraudster" and "Potentially Genuine", to any clients who would like their analyst teams to be able to express uncertainty in these difficult-to-judge cases.
Potential Manual Reviews on the dashboard
Once switched on for your account, you'll see the additional "Potentially Fraudster" and "Potentially Genuine" review labels on the customer profile, alongside the default Fraudster and Genuine review options.
The logic underlying these additional labels is fully configurable, allowing you to choose a setup that works best with your existing processes and risk appetite.
Choosing your configuration
Some suggestions for how you may wish to configure Potentially Fraudster labels:
- Prevent if the Fraud Score is above a Prevent Threshold set lower than your normal threshold
- Prevent if order value is above some threshold
- Prevent if total failed transaction count goes above some threshold
- Any combination of the above
- Review (send to 3DS in most cases, depending on how you handle Ravelin's recommendation)
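The prevent-style options above can be sketched as simple threshold checks. This is a minimal illustrative sketch only: the function, field names, and threshold values below are hypothetical and are not part of Ravelin's API.

```python
# Hypothetical sketch of "Potentially Fraudster" action logic.
# All names and thresholds are illustrative, not Ravelin's API.
from dataclasses import dataclass

@dataclass
class Customer:
    fraud_score: float
    order_value: float
    failed_transaction_count: int

# Illustrative values; a real setup would come from your agreed configuration.
POTENTIAL_PREVENT_SCORE = 60.0  # lower than a normal prevent threshold of e.g. 80
ORDER_VALUE_LIMIT = 500.0
FAILED_TXN_LIMIT = 3

def potentially_fraudster_action(c: Customer) -> str:
    """Return the action for a customer labelled Potentially Fraudster."""
    if (c.fraud_score > POTENTIAL_PREVENT_SCORE
            or c.order_value > ORDER_VALUE_LIMIT
            or c.failed_transaction_count > FAILED_TXN_LIMIT):
        return "prevent"
    return "review"  # e.g. route the transaction to 3DS
```

The "any combination" option corresponds to the `or` across conditions here; an `and` combination would tighten rather than widen the prevent criteria.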
Some suggestions for how you may wish to configure Potentially Genuine labels:
- Allow for X days
- Allow up to a total of $X USD in successful spend
- Allow if order value is below some threshold
- Allow if the Fraud Score is below an Allow Threshold set higher than your normal threshold
- Allow regardless of the Fraud Score, but with lower priority than connect or rules
- Any combination of the above
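The allow-style options above combine a time window, a spend cap, and a score check. This is a minimal illustrative sketch under assumed names and values; none of the identifiers or thresholds below come from Ravelin's API.

```python
# Hypothetical sketch of "Potentially Genuine" allow logic.
# All names and thresholds are illustrative, not Ravelin's API.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PotentialGenuineGrant:
    labelled_at: datetime          # when the analyst applied the label
    total_spend_usd: float = 0.0   # successful spend since labelling

# Illustrative values; a real setup would come from your agreed configuration.
ALLOW_WINDOW_DAYS = 30         # "allow for X days"
SPEND_CAP_USD = 1000.0         # "allow up to a total of $X USD"
ALLOW_SCORE_THRESHOLD = 90.0   # higher than a normal allow threshold

def potentially_genuine_allows(grant: PotentialGenuineGrant,
                               fraud_score: float,
                               order_value: float,
                               now: datetime) -> bool:
    """Return True if a Potentially Genuine customer should still be allowed."""
    expired = now - grant.labelled_at > timedelta(days=ALLOW_WINDOW_DAYS)
    over_cap = grant.total_spend_usd + order_value > SPEND_CAP_USD
    too_risky = fraud_score >= ALLOW_SCORE_THRESHOLD
    return not (expired or over_cap or too_risky)
```

Once any of the conditions trips (the window lapses, the spend cap is reached, or the score rises past the threshold), the label stops overriding the normal decision flow.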
To switch on Potential Manual Reviews for your account and talk through your desired configuration, please contact your Ravelin Investigations Analyst.
Impact of Potential Reviews on your model
Unlike the default Genuine & Fraudster Manual Reviews, Potential Reviews will not initially be used to train your model. However, we will evaluate their accuracy over time, and if results appear positive we may choose to use them as weak labels for your model.
Even if Potential Reviews are not used in your model training, however, they should still indirectly benefit your model's accuracy over time, as we would expect Genuine and Fraudster labels to improve in accuracy if they're only used in cases where you're sure.
Adjusting your configuration for Potential Reviews
If at any point you decide you're unhappy with your configuration, you may contact us to ask for the logic to be amended, or to have the additional options switched off.
However, regularly modifying the underlying configuration is not recommended: it may leave your Fraud Analysts unsure of the impact of using the Potential Reviews, and it may also lead to an inconsistent experience for any customers labelled with a Potential Review.
Analysing Potential Reviews
Once the new reviews start to be used by your team, you can find them in your Activity Feed with all other reviews.
In Explore, you can use the Manual Review Status filter to find the customers reviewed with these labels.
And in Analytics they will be included in the "Review Label" segment, when viewing the "Manual Review Count" metric.