BIG BANK, BIGGER DATA – A CASE STUDY
I was asked to create a solution for browsing and searching a data lake containing many terabytes of person, business and entity details.
Due to NDAs the names and some details have been changed. The client is a well-known global bank we shall call BIG BANK. The agency is a big data specialist we shall call BIG DATA.
The first stage of any project is to understand why the business is doing this: what is the problem we are solving?
It all started with the brief, followed by some initial fleshing out of the product that needed building. Was the initial product idea a good solution? Was this the right path to start down for initial tests?
To discover these things, we held a kick-off meeting to which everyone was invited, and the problem was laid bare.
BIG BANK’s problem: We have no reasonably quick, reliable and easy system for verifying customers for AML (Anti-Money Laundering) and KYC (Know Your Customer). Our current system is painfully slow, difficult and labour-intensive.
We started to talk about probable directions for a solution, based on a group view. Next, I began individual interviews with the client’s stakeholders, the business SMEs and the technology department.
Research was, and always is, ongoing, but now it began in earnest. With an idea of the problems and some possible starting points for a solution, I took a good look at how the competition were solving similar problems.
Sadly, no other business delivered the level of data that was required. Some covered smaller parts of the problem and proved useful, so I was at least able to store away a few nuggets for later.
Other research included general reading up on big data problem solving and common patterns. How do people search vast amounts of data? How do they generally browse it? Etc.
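One of the common patterns that comes up in that research is the inverted index, which maps each token to the set of records containing it so that searches do not need to scan the whole dataset. This is a minimal illustrative sketch, not the system BIG DATA built; the class, records and query are all hypothetical:

```python
from collections import defaultdict

class InvertedIndex:
    """Toy inverted index: token -> set of record ids."""

    def __init__(self):
        self.index = defaultdict(set)
        self.records = {}

    def add(self, record_id, text):
        # Store the record and index each lowercased token it contains.
        self.records[record_id] = text
        for token in text.lower().split():
            self.index[token].add(record_id)

    def search(self, query):
        # AND search: return ids of records containing every query token.
        tokens = query.lower().split()
        if not tokens:
            return set()
        result = self.index[tokens[0]].copy()
        for token in tokens[1:]:
            result &= self.index[token]
        return result

idx = InvertedIndex()
idx.add(1, "Acme Holdings registered in London")
idx.add(2, "Acme Trading registered in Paris")
idx.add(3, "Globex Corporation registered in London")
print(sorted(idx.search("acme london")))  # [1]
```

Real big-data search engines layer far more on top of this idea (sharding, ranking, fuzzy matching), but the core lookup pattern is the same.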
I now had enough to start brainstorming initial ideas and concepts. To help this process, I created four personas based on the typical users identified in the requirements-gathering interviews.
These personas gave us a view of the reasons for wanting this data, the pain points, the trust levels and what users did with the data once gathered. The personas told us who the users are and what they do. They gave us perspective and helped us rationalise solutions, especially when explaining our reasoning to stakeholders.
They allowed us to start creating user journeys.
Candidate One and the Minimum Viable Product
Perhaps it is a little early to start talking about the MVP, but this is where it takes root, in the form of the first candidate. Candidate One is the first solution idea, born out of brainstorming and spit-balling with the team based on the gathered data and research. The first candidate is produced as early as possible. The purpose is to have something to show the stakeholders that we are heading in the right direction. It might be wrong, but at least we did not waste too much time finding out. The philosophy is to share early and share often. This way the project was always kept on track and remained highly transparent.
Wireframing to Prototype to Proof of Concept
From here on we were in an iterative, agile cycle of continuous improvement, fleshing out what would become the initial backlog for the development of the MVP.
The cycle was based on two-week sprints, working at first with wireframes. Once these were tentatively agreed upon, we started to flesh them out as prototypes, which could be more easily demoed and tested.
In time these turned into a proof of concept – a fully working demo that showed how the MVP would work. This was done by a multidisciplinary team made up of UI/UX designers, developers, big data specialists, business analysts and others.
At every step of the way, the stakeholders from BIG BANK and BIG DATA were involved and kept up to date. This proved very popular, and when it came time to hand over the MVP, the customer got exactly what they had asked for, because they had been part of finding the elegant solution.