All industries, and FinTech in particular, are walking the early legs of the hype cycle around Generative AI and LLMs. AI, however, is not new to most of these organisations. Predictive AI has long been used to profile borrowers, attach risk premiums, and rate investment products. So what is different this time?
There is a growing collective belief among industry leaders that, with Generative AI, the ways of working have changed forever, and there is enough early proof of value. At the heart of this wave is the democratisation of LLMs (Large Language Models) - the phrase and the technology. A few things make LLMs different from, and more powerful than, incumbent alternatives.
- Creativity : LLMs can comprehend and ‘complete’ natural language. Given a data source, they can appear to ‘give answers’.
- Comfort with imperfection : LLMs work on semantics. Questions or instructions need not be prescriptive; synonyms, spelling errors, unstated references and other such ambiguities are taken care of.
- Comprehensiveness : It takes exponentially less content to ‘train’ LLMs to perform a variety of tasks which, in the past, would each have required firms to build a unique model.
- Conversational : LLMs mimic human conversations. Hence all operations - on-the-go training, prompting, setting boundary conditions, as well as the input-output experience - work in natural language.
- Coherence : LLM agents, or microbots, can apply deductive logic and work out the ‘steps’ needed to reach logical conclusions.
These LLM boons unfortunately come at a price. LLMs are few; powerful and accurate LLMs are fewer still, and almost oligarchic. As a result, organisations that want to ‘do it themselves’ have three choices.
- Breach their data walls and share part or all of their own and their users’ data with these LLMs, through suboptimal and expensive API calls.
- Invest time, capital and labour to reinvent prior art using open-source models, which are of course not optimised for infrastructure cost or accuracy.
- Partner with a solution provider that ticks specific boxes.
FinTechs today realise that LLMs alone, stitched together as a DIY project, cannot solve their problems. They need the capabilities, security and other benefits of a platform. This is similar to how, in the past, simple access to a Python library for Logistic Regression did not mean readiness for real-time predictive classification.
In this rapidly evolving space, a DIY solution can lock firms into a suboptimal tech stack. This is a problem best solved by working with an expert partner leading innovation in GenAI. More and more companies are partnering with such solution providers to shorten time to value while staying capital efficient. Naturally, they scout for companies that have hyper-optimised themselves along three dimensions:
- Data containment
- Cost per query
- Time to value
Setting aside the build-or-buy dilemma, which largely constitutes the ‘effort’, adopting Generative AI as a way of working has disproportionate ‘returns’. This is largely due to the vast variety of use cases it can be applied to, each showing quick value. Here are a few of them.
For the End Users : Faster, accurate and richer experience
Retail users (e.g. traders, investors, insurance and banking customers)
- Topical literacy : What does ROCE mean?
- Getting started : How do I place a limit order?
- Research : Compare shareholding for A and B.
- Synthesise : Summarise the earnings call into 5 main points.
- Support : When will my money be credited?
- Analysis : Show me my 3 most profitable trades.
- Wisdom : Encode my swing trade strategy into a sell order
Bulk users (e.g. developers, code-based traders)
- All use cases for retail users, plus
- Connecting : Show me the code snippet for authentication
- Instructions : Place a sell order when price hits 9 period SMA
For the Organisation : Materially improved metrics
Customer Success teams
- User engagement on platform : Up to 30% more
- Avg ticket deflection : From 10-30% to 70-80%
- Avg time to first response : From Minutes to Seconds
- Time to resolution : From Hours to Minutes
- Agent productivity : Up to 2X
- Faster go-live : From months to weeks
- Near zero prep : No exclusive content creation for support
- Source freshness : Almost 100%
- Maintenance effort : Almost nil, 10-20% bandwidth freed
- More insights, better products : What topics are end users asking about? How much is our content able to answer? What new content must we build?
To realise maximum value, FinTech companies must acknowledge the complexities of solving one or more of these use cases. A good partner, or a native solution, must address them.
1. Complexity of ingestion : Information can be sourced from a variety of places. It is important that the ingestion pipeline can handle this variety, index it and make it quickly retrievable. For example, information could reside in
- Static sources : e.g. Webpages, documentation, reports
- Dynamic sources - agnostic to user : e.g. newsfeed, stock quotes
- Dynamic sources - specific to user : e.g. user dashboard, activity
- Developer documentation : e.g. API references, SDK guides
2. Complexity of intents : The same user query could be answered from a host of sources. Which source to invoke, and when, is a search problem that needs to be solved well before the LLM constructs an answer. For example, ‘What was my last trade?’ has to look into the user transactions database, while ‘How do I check my last trade?’ has to peek into a support document.
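As a toy illustration of this routing problem, the sketch below sends the two example queries to different sources. A real system would use a trained intent classifier or embedding similarity rather than keyword rules; the phrases and source names here are assumptions for illustration only.

```python
def route(query: str) -> str:
    """Decide which source should answer a query, before any LLM call."""
    q = query.lower()
    how_to = any(p in q for p in ("how do i", "how can i", "where do i"))
    personal = any(p in q for p in ("my ", " mine", " i "))
    if how_to:
        return "support_docs"           # procedural questions -> documentation
    if personal:
        return "user_transactions_db"   # account-specific questions -> user data
    return "knowledge_base"             # everything else -> general content
```

Note the ordering: ‘How do I check my last trade?’ also contains ‘my’, so procedural intent must be tested first.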
3. Complexity of workflows : The solution must know when to segue into a safer, less punishing workflow. For example, doubtful intents must always be reconfirmed with the user, and priority customers must be handed off to live agents.
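That decision can be sketched as a small policy function. The confidence threshold and the step names are illustrative assumptions; a real deployment would tune them against observed misclassification rates.

```python
def next_step(intent_confidence: float, is_priority_customer: bool) -> str:
    """Pick the next workflow step for a classified intent."""
    if is_priority_customer:
        return "handoff_to_live_agent"       # priority customers go straight to a human
    if intent_confidence < 0.6:
        return "reconfirm_intent_with_user"  # doubtful intents get reconfirmed first
    return "answer_automatically"
```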
4. Complexity of personas : A user new to the platform must be engaged differently from a power user. Similarly, a conversation with a retail trader would work very differently from a code-heavy, developer-centric one.
5. Complexity of accuracy : No LLM is perfect out of the box. Since the answers inform financial decisions, it is imperative that the GenAI deployed is ‘honest’. A simple ‘I don’t know’ in response to a user query is far less punitive than a made-up answer. Honesty in the short term does more to build trust than accuracy gains in the long term.
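This ‘honest’ behaviour can be enforced mechanically: return an answer only when its supporting evidence clears a confidence bar, and abstain otherwise. The threshold value and the idea of scored candidate answers are assumptions for illustration.

```python
def answer_or_abstain(candidates: list[tuple[str, float]],
                      threshold: float = 0.75) -> str:
    """Return the best-supported answer only if its retrieval confidence
    clears the threshold; otherwise abstain rather than guess."""
    if not candidates:
        return "I don't know."
    best_answer, best_score = max(candidates, key=lambda c: c[1])
    if best_score < threshold:
        return "I don't know."
    return best_answer
```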
Organisations must be cognizant of the fragility of LLMs when faced with these complexities. A sensible approach to incorporating GenAI into core workflows can create a never-before-possible experience for customers of FinTech services, lifting not only the bottom line but also the top line for these firms.