Different customers have different requirements and expectations for fraud management systems. However, one thing I have often observed is teams not seeing the forest for the trees. It is very common for a team to get dragged into discussing very specific, even niche, details or functionalities when deploying a fraud solution within the organization. This seems even more common among organizations deploying their first fraud solution.
But let me tell you - if you don't have a fraud solution, or if you have one that doesn't cover the channels, products, or fraud typologies where the actual fraud happens, don't overthink it and delay the deployment.
A mistake our customers often make is sacrificing the go-live date for more functionality. And here I want to say wholeheartedly - please don't. It makes sense only in rare situations. I'm not saying you should be ready to compromise on any and every functionality - not at all. I suggest establishing a common understanding of the MVP (Minimum Viable Product) within the delivery team (customer and vendor) to ensure the critical functionalities are captured and present in the design blueprint. Everything else is up for discussion.
There is no simple formula or guideline for which functionality will be hard to deliver and which will be straightforward, as each solution is different. Each solution is better in certain aspects and worse in others. For some solutions, integrating with an external system via REST API might be effortless; for others, rule building and scoring are fully configurable and might be the easy part; for yet others, altering the user interfaces and screen layouts might be trivial.
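To make the "fully configurable rule building and scoring" idea concrete, here is a minimal, hypothetical sketch in Python. It is not any vendor's actual product API - the `Rule` class, the field names, and the thresholds are all invented for illustration. The point is that a rule is just a predicate over transaction data plus a score weight, and configuration means editing the rule list, not the engine.

```python
# Hypothetical sketch of configurable rule-based scoring.
# All names and thresholds are illustrative assumptions, not a product API.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Rule:
    name: str
    predicate: Callable[[Dict], bool]  # fires when it returns True
    score: int                         # weight added to the total


def score_transaction(txn: Dict, rules: List[Rule]) -> int:
    """Sum the scores of all rules that fire for this transaction."""
    return sum(r.score for r in rules if r.predicate(txn))


# "Configuration" is just this list; analysts could tune it without code changes.
rules = [
    Rule("high_amount", lambda t: t["amount"] > 10_000, 40),
    Rule("new_beneficiary", lambda t: t["beneficiary_age_days"] < 1, 30),
    Rule("night_time", lambda t: t["hour"] < 6, 10),
]

txn = {"amount": 15_000, "beneficiary_age_days": 0, "hour": 3}
print(score_transaction(txn, rules))  # all three rules fire -> 80
```

In a real deployment the rules would live in configuration screens or a rule repository rather than source code, but the shape of the problem - predicates plus weights - is the same.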
The general guiding principle is to understand these specifics, deliver the "easy to do" or "out-of-the-box" functionalities that align with the business owner's expectations, and postpone those that could delay the process due to complexities of any kind (on the vendor's or the customer's side alike).
On this particular topic, I vividly remember one project where the go-live date was the single constant that couldn't be touched; everything else could - from IT architecture and integration to business functionalities. This was decided by the CEO himself. To this day, I consider it the most efficient project we have ever delivered - it was completed in significantly less than the average time, and the customer's senior management also claimed it as a huge success.
The benefits of getting the solution up and running in production as early as possible were 1) reduced losses from the go-live date onwards and 2) reduced or mitigated wasted effort. Wasted effort in this context is any effort that ultimately results in no additional functionality or business benefit - most commonly materialized as meetings, analyses, documents, and presentations.
So where did the wasted effort come from, and what were the reasons for the go-live delays I observed?
The first and probably most frequent one is discussions on specific topics that weren't even that important from a business perspective - and certainly not needed for the project's initial phase. But these had to happen, and we had to go through tedious (sometimes technical, sometimes business, and sometimes both) discussions on the topic because the customer viewed the functionality as critical.
To mitigate the risk of wasted effort, try to align on which functionality is critical and has to come as part of the initial delivery and which functionalities can come later - but be honest about it. Otherwise, you will end up with the same list you started with.
Examples of critical functionalities:
Examples of non-critical functionalities:
Another situation that leads to wasted effort relates to the cooperation model established between the vendor and the customer. Sometimes, customers consider the vendor purely an IT vendor who only brings technology to the table. Some vendors might even position themselves in this role. And this is OK if both parties are transparent about it and consciously manage the overall delivery of the project with this in mind.
I would advise, though, preferring a vendor who brings to the table not only the technology (the software) and a team capable of deploying it but also business domain people who will help shape the final delivery, considering the customer's views (business as well as IT) along with the vendor's. In addition, an experienced vendor can provide invaluable input from previous implementations and help mitigate wasted effort.
One example of such a situation was when the customer raised the topic of loading historical data into the fraud detection system before going live. This would allow the behavioral profiles to "mature" and make the solution more accurate from day one. This is an absolutely legitimate request and one that makes perfect sense. However, having worked with this customer before, we advised against such a step in the very first session. We provided several practical reasons (considering their historical data availability and quality, and the ability and effort required to consolidate it from multiple different systems). After several weeks of discussions with various business and IT teams, the bank ultimately agreed and decided not to load the historical data. But the damage had been done. And though the discussions happened early in the project, the actual impact on go-live was still measured in weeks.
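To see why profiles need to "mature," here is a deliberately simplified sketch: a per-customer running average of transaction amounts that refuses to give an opinion until enough history has accumulated. The class name, the 20-observation threshold, and the deviation factor are all assumptions made up for this illustration, not how any specific product computes profiles.

```python
# Minimal sketch of behavioral-profile maturation.
# All names and thresholds are illustrative assumptions, not a product API.
from collections import defaultdict


class AmountProfile:
    MIN_OBSERVATIONS = 20  # below this, the profile is too immature to trust

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def update(self, amount: float) -> None:
        self.count += 1
        self.mean += (amount - self.mean) / self.count  # incremental mean

    def is_unusual(self, amount: float, factor: float = 5.0):
        if self.count < self.MIN_OBSERVATIONS:
            return None  # not enough history -> no opinion yet
        return amount > factor * self.mean


profiles = defaultdict(AmountProfile)

# Without a historical back-load, the first weeks after go-live are this
# maturation period; with one, profiles would start "warm" on day one.
for amount in [100, 120, 90, 110] * 5:  # 20 typical payments
    profiles["cust-1"].update(amount)

print(profiles["cust-1"].is_unusual(5_000))  # True: 5,000 >> 5 * ~105
```

The trade-off the customer faced is visible here: back-loading history skips the `None` period, but only if the historical data is available, consistent, and cheap enough to consolidate - which, in their case, it was not.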
Though details about the fraud management system within the organization should be a well-kept "secret," with specifics known only to those who need to know, it is imperative to have close alignment between business and IT stakeholders to ensure that "what business wants, IT can deliver - fast."
What do I mean by that?
One of the initial discussions we have with the customer is to identify the business pain points within the fraud domain - which fraud typologies, channels, and/or products are the biggest problem and should be addressed as a priority. After this discussion, we ask to involve the IT teams and check with them the feasibility of integrating the selected channels and their respective data sources into the fraud management system. There are many reasons why this follow-up discussion is essential. To deliver the project with the highest possible value to the customer, we need to prioritize the deployment phases for the shortest possible time to market (TTM).
It wouldn't make much sense to integrate Internet Banking and Mobile Banking transactions if those systems were going to be replaced within the next 6-12 months, or if an upgrade was planned that would impact the integration interfaces. Another example might be an attempt to integrate transactions that were not readily available in middleware (e.g., non-financial transactions like logins); an extra integration step would be required to make them available to the fraud solution. There can be many reasons why a specific channel, system, or product is considered a high priority from a business perspective while being the least preferred option from an IT perspective - technical complexity and the related effort among them.
To achieve the best possible outcome in the shortest time, it is wise to align the views of Business and IT and choose a roadmap that considers both. Otherwise, you might start the project with the top-priority use case from a business perspective, but one that takes more time on the IT side to implement - time during which you could have already reaped some benefits of having the fraud system in place, even if with a different channel or product.
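One crude way to frame this alignment is to rank candidate channels by business value relative to IT integration effort - roughly, value delivered per month of time to market. The sketch below is purely illustrative: the channels, value scores, and effort estimates are invented, and a real roadmap discussion weighs far more factors (planned system replacements, data availability, regulatory pressure) than a single ratio.

```python
# Hedged sketch of a Business/IT roadmap ranking.
# Channels, values, and effort figures are invented for illustration.

candidates = [
    # (channel, business value 1-10, IT integration effort in months)
    ("Internet Banking", 9, 8),  # top business priority, heavy IT work
    ("Cards", 7, 2),             # slightly lower value, quick to integrate
    ("Mobile Banking", 8, 6),
]

# Rank by value per month of integration effort (a rough TTM proxy).
roadmap = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)

for channel, value, effort in roadmap:
    print(f"{channel}: value/effort = {value / effort:.2f}")
```

With these made-up numbers, Cards comes first despite being only the second-highest business priority - exactly the kind of result the joint Business/IT discussion is meant to surface and sanity-check.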
The answer is easy - YES, they would. Compromise doesn't always end up as a win-win from every angle, but that doesn't mean it will result in a flawed or failed system. We need to go back to the opening paragraph and reiterate the problem - for the trees, we often don't see the forest.
Though we might have compromised on some capabilities or functionalities, what we often gain is the aforementioned "forest." Deploying a fraud management system to production is not like flipping a light switch, going from no fraud system to 100% detection capability in a matter of hours. Especially for organizations that didn't have a fraud management system before, this step brings a lot of new things. There is always a ramp-up phase (usually a couple of weeks) during which the staff is:
During this period, analysts, managers, and call center agents are all getting used to the many aspects of their daily work impacted by the new solution. It takes time for the dust to settle and for work to return to business as usual.
This ramp-up phase will happen, and the sooner you reach it, the sooner your fraud operations can reap the benefits of the new system. And how do you know you are back to business as usual? Users will start identifying the things that annoy them, that were not considered in the initial phase, or that they want added to make their lives easier.
OK, so the last question - When does it make sense to postpone go-live in favor of more functionalities?
I would say once you have reached the situation described above - business as usual after the phase 1 go-live. This is when it is probably OK to extend timelines to get the desired functionality: the moment when you already have a fraud management system in place and are looking for incremental gains on top of the current solution.