From Revolut to the Agentic Frontier: How Brighty's Nick Denisenko Is Rewriting the Rules of AI-Powered Finance
There are plenty of people talking about AI in finance. Nick Denisenko is one of the rare few actually building it — with real money, real compliance requirements, and real consequences on the line. As the CTO and Co-Founder of Brighty, Nick is at the forefront of a new wave of fintech that doesn’t treat artificial intelligence as just another feature, but as a foundational layer of how financial operations are designed, executed, and audited.
Nick’s path to this point is anything but ordinary. A seasoned fintech leader with over a decade of experience in applied mathematics, software development, and neobanking, he joined Revolut as employee number 20 — back when the now-$45 billion company was still finding its footing. As a Lead Backend Engineer, he played a critical role in building out Revolut Business, the company’s most profitable division, where he sharpened his expertise in scaling financial products that bridge traditional banking and the digital economy. That rare combination of deep technical fluency and financial domain knowledge now sits at the core of everything he’s building at Brighty.
In an exclusive interview with AI World Today, Nick pulls back the curtain on Brighty’s agentic infrastructure — from how they design AI systems that can manage liquidity without hallucinating transactions, to why the CISO is becoming the most important AI role in the modern fintech stack. He also shares his unfiltered take on where autonomous agents are genuinely ready to take the wheel, and where a human hand must always remain on the brake.
Nick, everyone’s talking about AI, but you’re actually putting it in charge of people’s money. When you wake up and check the system, what’s the one metric or “red flag” that tells you if your agentic dream is working or if it’s becoming a nightmare?
We didn’t reinvent the wheel with AI - we optimized existing processes. So we still rely on the same metrics: SLAs, KPIs, and alerts.
The real signal comes when a model goes down and we have to revert to old workflows, even for a few hours. That’s when it’s clear the system is working - because going back suddenly feels painfully inefficient and almost unthinkable.
Be honest: how much of a mid-market company’s daily finance grind can we actually hand over to agents today without losing sleep? And where is the line where you’d still want a human standing guard, no matter how smart the tech gets?
Look, any company still paying humans to manually move data from an invoice to a payment portal is basically burning capital. That’s the absolute baseline. The “ceiling” we are pushing toward is the complete automation of the entire cycle—issuance, routing, and all that back-office friction.
In my experience, the tech is already there for most transactional work. The real bottleneck isn’t the “brain” of the agent; it’s the approval context. You cannot have an agent executing payments in a vacuum. It must surface the final action for a human “trigger.” But where the real magic happens is in liquidity management. If an account is dry, a mediocre bot just throws an error. A great agent identifies where the capital is sitting and asks, “Should I reallocate from here to cover this?” That is the shift from data entry to actual utility.
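The behavior Nick describes can be sketched in a few lines. This is purely illustrative, not Brighty's implementation; the account names and function are hypothetical. The point is the branching: a dry account produces a reallocation proposal for a human to approve, not a bare error.

```python
def handle_payment(amount: float, accounts: dict[str, float], source: str) -> dict:
    """Illustrative sketch: when the source account is short, propose a
    reallocation for human approval instead of just raising an error."""
    if accounts[source] >= amount:
        return {"action": "pay", "from": source}
    shortfall = amount - accounts[source]
    # Find where the capital is sitting: the best-funded other account.
    donor = max((a for a in accounts if a != source), key=accounts.get, default=None)
    if donor is not None and accounts[donor] >= shortfall:
        # Surface the proposal -- a human still pulls the trigger.
        return {"action": "propose_reallocation", "from": donor, "to": source,
                "amount": shortfall, "needs_approval": True}
    return {"action": "error", "reason": "insufficient liquidity"}
```

A mediocre bot stops at the third branch; the useful agent reaches the second one and asks.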
Agents are only as good as the context they’re given. What’s the secret to exposing things like FX provenance or compliance flags so the agent actually “gets it” and doesn’t have to nudge a human for every minor clarification?
The biggest “aha!” moment for us was realizing that context decay kills reliability. If an agent loses the “why” or the “how” as it moves through a chain of tasks, it fails. You have to treat things like FX rates and counterparty verification as first-class citizens—hardcoded into the metadata, not something the agent has to go “fetch” or guess.
If an agent has to pause and ask for clarification because it doesn’t know if a vendor is cleared or if the balance is sufficient, the user loses trust and abandons the tool. To build something people actually use, you need structured, real-time account states and pre-validated compliance flags. You build for zero-friction execution, or you’re just building a liability.
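What "first-class citizens" might look like in practice: a minimal sketch (field names are my own, not Brighty's schema) where FX provenance and compliance state travel with the payment request as pre-validated metadata, so the agent never has to fetch or guess.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PaymentContext:
    """Pre-validated context hardcoded into the request metadata."""
    amount: float
    currency: str
    fx_rate: float             # rate captured at quote time
    fx_source: str             # provenance: which provider supplied the rate
    counterparty_verified: bool  # compliance flag, resolved upstream
    balance_sufficient: bool     # real-time account state, resolved upstream

def ready_to_execute(ctx: PaymentContext) -> bool:
    """Zero-friction rule: execute only if compliance and liquidity are
    already resolved; otherwise the agent escalates instead of guessing."""
    return ctx.counterparty_verified and ctx.balance_sufficient
```

Because the struct is frozen and fully populated before the agent sees it, there is nothing left for the agent to clarify mid-flow.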
When an agent inevitably messes up—pays the wrong person or trips a compliance wire—how do you pull the “black box” apart? How are you building things so an auditor can look back and see exactly where the logic derailed?
We treat forensic traceability as a core product feature, not a boring compliance requirement. You need immutable logs that capture a “snapshot” of the world at the exact millisecond a decision was made. Not just the output, but the input: What did the agent know? Which policy was active? What was the account balance?
There’s also a philosophical point here: when a bot acts, the accountability lies with the person who gave it the keys. We don’t hide behind “the AI did it.” Our infrastructure is designed so a compliance officer can reconstruct the entire decision tree in seconds. If you can’t explain exactly why a bot moved $50k, you shouldn’t be moving money at all.
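One common way to get the immutable, reconstructable log Nick describes is a hash-chained snapshot per decision. This is a generic sketch of that pattern, not Brighty's actual logging code: each entry records the agent's inputs, the active policy, and the balance at decision time, chained to the previous entry so tampering is detectable.

```python
import hashlib
import json
import time

def snapshot(agent_input: dict, policy_id: str, balance: float,
             decision: str, prev_hash: str) -> dict:
    """One immutable log entry: a snapshot of the world at decision time."""
    entry = {
        "ts": time.time(),
        "input": agent_input,   # what did the agent know?
        "policy": policy_id,    # which policy was active?
        "balance": balance,     # account state when the decision was made
        "decision": decision,
        "prev": prev_hash,      # chain link to the previous entry
    }
    # Hash the entry contents; any later edit breaks the chain.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry
```

An auditor replays the chain entry by entry to reconstruct the decision tree; a single altered field invalidates every subsequent hash.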
There’s this idea that if a bank isn’t easy for an AI to “read” and talk to, it’ll basically stop existing in the payments space. Do you buy into that? Is the next decade of competition really just a race to be the most agent-friendly platform?
100%. Traditional banking UIs are basically walking ghosts at this point. Once you’ve managed a treasury through an agentic interface, going back to a mobile app feels like using a rotary phone. It’s an order of magnitude slower.
The “UI wars” are over. The next ten years of fintech will be won on API quality and data structure. If a bank isn’t “agent-ready”—meaning its data is structured and accessible for machine reasoning—it simply won’t be invited to the transaction. We aren’t just predicting this; we see it in the data every day. If you aren’t on the agent’s map, you don’t exist.
Who are you actually hiring at Brighty to make this happen? Is it all prompt engineers and AI safety geeks now, and how do you get them to play nice with the hardcore infra engineers who’ve been keeping the lights on?
We don’t just “hire” for AI; we bake AI fluency into the company culture. It’s a core competency we subsidize and push for every single employee.
Structurally, the biggest change is the evolution of the CISO (Chief Information Security Officer). In an agentic world, the CISO isn’t just guarding the perimeter; they are the “Lead Auditor of Logic.” They oversee agent configurations, review routing rules, and ensure that our autonomous flows don’t create “hallucinated” financial risks. When agents handle live money, security and architecture become the same thing. You have to build with those constraints from line one of the code.
The “hallucination” problem is a meme in creative AI, but it’s a catastrophe in banking. How do you build a “sandbox” for agents where they can be autonomous but physically unable to invent a transaction that doesn’t exist?
This problem becomes much less acute if the AI is not a free-form decision maker, but an orchestrator of deterministic, pre-verified scripts.
In that setup, the agent doesn’t “create” transactions - it only triggers workflows that you’ve already designed, audited, and constrained. All state transitions happen inside systems of record (ledger, core banking, custodians), not inside the model. The AI never has write authority beyond calling strictly typed APIs with validation at multiple layers.
The key is that there is no semantic space for hallucination inside the execution layer. Scripts define:
- allowed actions
- required inputs
- validation rules
- reconciliation steps
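The orchestrator pattern above can be sketched as a registry of pre-audited workflows. This is an illustrative toy, not Brighty's execution layer; the workflow names and inputs are invented. The agent can only invoke actions that were designed and constrained in advance, and every call is validated before anything runs.

```python
from typing import Callable

REGISTRY: dict[str, Callable[..., str]] = {}

def workflow(name: str, required: frozenset):
    """Register a pre-verified script: an allowed action with required
    inputs and a validation rule, audited before the agent ever sees it."""
    def deco(fn):
        def guarded(**inputs):
            missing = required - inputs.keys()
            if missing:
                raise ValueError(f"missing inputs: {sorted(missing)}")
            return fn(**inputs)
        REGISTRY[name] = guarded
        return guarded
    return deco

@workflow("pay_invoice", frozenset({"invoice_id", "amount"}))
def pay_invoice(invoice_id: str, amount: float) -> str:
    # The state transition happens in the system of record, not in the model.
    return f"queued {invoice_id} for {amount:.2f}"

def agent_call(action: str, **inputs) -> str:
    """The agent's only write path: trigger a registered workflow.
    There is no semantic space here for a hallucinated transaction."""
    if action not in REGISTRY:
        raise KeyError(f"unknown action: {action}")
    return REGISTRY[action](**inputs)
```

An agent asking for `mint_money` simply has nothing to call: unknown actions and malformed inputs fail loudly at the boundary instead of reaching the ledger.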
We’ve spent decades moving from Monoliths to Microservices. Does adding an “Agentic Layer” just create a new kind of “Spaghetti Tech Debt,” or is this actually the cleanup crew we’ve been waiting for?
It’s not spaghetti - it’s microservices evolved.
Agentic layers are modular and vendor-agnostic - swap models or providers without breaking anything. Unlike traditional tech debt that hides in code nobody reads, agentic systems fail loudly and can flag or fix issues themselves.
You’re not adding another integration layer to maintain - you’re adding one that maintains itself. Cleanup crew, not new mess.
If an agent can navigate complex DeFi protocols or FX markets better than a human trader, does Brighty become a tech company that happens to have a license, or are you still a bank at heart?
We’re developers first - using AI to rethink and improve how finance works.
Brighty is fundamentally a fintech: the license is just infrastructure. The real value is in building systems that make financial operations faster, smarter, and more efficient across DeFi, FX, and traditional rails.
So in essence - a tech company operating within a regulated framework.
Let’s talk about the “Off-Switch.” In a world of autonomous agents, how do you design a kill-switch that doesn’t freeze the entire platform but stops a rogue agent from spiraling out of control in milliseconds?
At this stage, we do not allow AI procedures to run independently of humans. Our agents are not autonomous - they are initiated, supervised, and confirmed by an operator.
That is a deliberate design choice. We prioritize strong observability, traceability, and operator control over full autonomy. In practice, the primary off-switch is human consent: if the operator does not approve or continue the flow, the agent stops.
So the safest kill-switch is not a dramatic system-wide freeze - it is keeping decisive control at the human layer while ensuring every step is visible and interruptible.
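A human-consent gate like the one Nick describes can be sketched in a few lines (illustrative only; the class and step names are hypothetical). Each step is surfaced before it runs, and withholding approval halts this one flow without freezing anything else.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SupervisedFlow:
    """Operator-in-the-loop flow: each step runs only with explicit approval."""
    steps: list  # list of (name, callable) pairs
    log: list = field(default_factory=list)

    def run(self, approve: Callable[[str], bool]) -> list:
        for name, fn in self.steps:
            self.log.append(f"pending:{name}")    # every step is visible...
            if not approve(name):
                self.log.append(f"halted:{name}")  # ...and interruptible
                break
            self.log.append(f"done:{name}")
            fn()
        return self.log
```

The kill switch is just the `approve` callback returning `False`: no system-wide freeze, only the human layer declining to continue.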
Nick Denisenko’s vision for agentic finance is neither utopian nor reckless — it’s pragmatic, deeply technical, and grounded in hard-won lessons from the front lines of fintech. What stands out most from this conversation is not just how far AI has come in automating financial operations, but how seriously Brighty is thinking about the guardrails: immutable audit logs, human-confirmed execution, and a cultural mandate that accountability can never be outsourced to an algorithm. As the race to become “agent-ready” accelerates across the banking sector, Nick’s framework offers a compelling blueprint — one where the smartest systems are not the most autonomous, but the most trustworthy. For anyone building at the intersection of AI and financial infrastructure, this is a conversation worth revisiting more than once.