When AI Agents Start Paying: What Banks Must Prepare For
Agentic payments move fraud from “card-not-present” to “person-not-present.” Here are the new fraud vectors, why traditional detection breaks, and what Southeast Asian banks should do now.
RTD Team
Run-True Decision
In February 2026, DBS became the first bank in Asia-Pacific to pilot Visa’s Intelligent Commerce solution for agentic payments — real-world food and beverage transactions initiated not by a human tapping a card, but by an AI agent acting on the customer’s behalf. Mastercard’s Agent Pay is live with US issuers and expanding globally. Google’s Agent Payments Protocol already has over 60 partners. Alipay’s AI Pay exceeded 120 million transactions in a single week.
Agentic commerce — AI agents autonomously discovering products, comparing prices across borders, and executing payments within limits set by humans — is no longer a whitepaper concept. It is entering production.
For bank fraud teams, this creates a problem that no amount of tuning existing rules will solve. Every fraud detection system in operation today was built on one foundational assumption: a human is on the other side of the transaction. When that assumption breaks, so does the detection model.
What Agentic Payments Actually Change
In a traditional payment flow, a human customer browses a merchant site, selects items, enters payment details, and confirms the purchase. The fraud system evaluates signals from that human — device fingerprint, typing cadence, geolocation, time-of-day patterns, transaction velocity against historical behaviour.
In an agentic payment flow, an AI agent replaces the human in the buying journey. The customer sets parameters (“find me the cheapest business-class flight to Jakarta under $800”), and the agent handles discovery, comparison, and transaction initiation. The underlying payment rails — card networks, bank transfers, FX settlement — remain the same. But the entity triggering the payment is fundamentally different.
This creates three structural gaps that current systems are not designed to handle:
- The authorization gap. Agent authority is cryptographically delegated, but the agent’s decision logic is opaque. Did the agent act within its mandate, or was it compromised? Current authorization frameworks assume direct human consent at the point of transaction.
- The identity gap. There is no clear “customer” in the traditional sense. The agent is acting on behalf of a user, but it is not the user. Who is liable if the transaction is fraudulent — the agent platform, the cardholder, the merchant, or the payment network?
- The behaviour gap. An AI agent buying 50 items across 12 merchants in 3 minutes looks identical to a bot attack under current velocity rules. But it might be a legitimate bulk procurement agent doing exactly what its owner asked.
The industry shorthand captures it well: we are moving from card-not-present to person-not-present transactions. And that shift demands a rethink of fraud detection from the ground up.
Five New Fraud Vectors Banks Need to Understand
Agentic payments don’t just change the shape of legitimate transactions. They create entirely new attack surfaces that fraudsters are already exploring.
1. Agent Impersonation
In multi-agent systems where one agent negotiates with another — a buyer agent interacting with a merchant agent — attackers can spoof trusted agent identities. A fraudulent agent poses as a verified buyer’s agent, tricks a merchant system into processing a transaction, and disappears. Unlike human identity theft, agent impersonation can be executed at machine speed across thousands of merchants simultaneously.
2. Prompt Injection and Agent Hijacking
AI agents consume unstructured data from multiple sources: product descriptions, merchant metadata, pricing feeds. Attackers can embed malicious instructions in this data — a product listing that tells the buyer’s agent to add additional items, increase the transaction amount, or redirect payment to a different merchant. The agent follows the injected instruction because it cannot distinguish it from legitimate context.
3. Credential Theft and Privilege Escalation
Agent credentials — API keys, OAuth tokens, service tokens — are high-value targets. A compromised agent doesn’t just give attackers access to one account; it gives them an automated transaction engine that can execute purchases, process refunds, and initiate transfers at scale. The agent becomes what security researchers call an “acceleration layer for fraud.”
4. Automated Fraud at Scale
Fraudsters are already using agentic AI to orchestrate multi-step attack chains: synthetic identity creation, deepfake video generation to pass biometric verification, device telemetry tampering, and automated retry loops that vary parameters until they succeed. Experian’s 2026 fraud forecast names “machine-to-machine mayhem” as the top threat for the year. One in 50 forged documents now uses AI-assisted generation.
5. First-Party Fraud via Agent Mandates
This is the subtle one. A customer authorizes an agent to make purchases, then disputes the transactions after delivery — claiming the agent exceeded its mandate or was compromised. Proving what an agent was authorized to do, versus what it actually did, versus what the customer claims they intended, creates a liability maze that existing chargeback frameworks were never designed to navigate.
Why Traditional Fraud Detection Breaks
The challenge is not that existing fraud systems are bad. It is that they were designed for a world that is changing underneath them.
Behavioural models trained on humans fail on agents. Machine learning models that detect fraud by spotting deviations from “normal” customer behaviour — unusual transaction times, atypical merchants, spending velocity spikes — produce massive false positive rates when the “customer” is an AI agent. An agent that compares prices across 40 international merchants in two minutes is behaving normally. A human doing the same thing is almost certainly compromised.
Authorization checks assume direct human consent. Strong Customer Authentication (SCA) under PSD2 and equivalent frameworks requires evidence that the human authorized the specific transaction. But in agentic flows, the human authorized the agent, and the agent authorized the transaction. No current regulatory framework in Southeast Asia provides clear guidance on whether this indirect chain of consent satisfies authentication requirements.
Real-time decisioning gets harder, not easier. Traditional fraud systems can afford to be conservative — decline a suspicious transaction and let the human retry. But declining an agent transaction triggers automated retries, potentially across multiple payment methods, creating cascading false signals. Agent transactions demand faster, more confident decisions with less room for “maybe.”
The fraud detection question is no longer “Is this the real cardholder?” It is “Is this a real agent, does it have authority, and is it compromised?”
Know Your Agent: The New KYC
The industry is responding with a new verification paradigm: Know Your Agent (KYA) — the identity framework for AI agents, analogous to Know Your Customer for humans. KYA answers three questions: Who made this agent? Who does it represent? What is it authorized to do?
Several frameworks are emerging:
- Mastercard Agentic Tokens — Dynamic, short-lived, cryptographically secured credentials that are merchant-specific and amount-specific. If intercepted, they cannot be reused. Each agent is uniquely identified, and tokens replace raw card credentials in the transaction flow.
- Google Agent Payments Protocol (AP2) — An open protocol that uses “Mandates” — cryptographically signed digital contracts proving a user authorized an agent to act. Built on Verifiable Credentials, AP2 creates tamper-proof authorization chains with clear liability assignment.
- Skyfire KYA — A framework focused on pre-credentialing AI agents with continuous attestation, ensuring agents are verified before they can initiate any transaction.
The common thread across these frameworks: agents need a digital passport — a cryptographically verifiable credential containing the agent’s identity, its owner, its authorized capabilities, transaction limits, and a continuously updated reputation score. This is the foundational layer that fraud systems will need to evaluate alongside traditional transaction signals.
The Southeast Asian Context
Southeast Asian banks face a specific combination of factors that make agentic payment fraud particularly acute:
Real-time payments are scaling faster than fraud controls. RTP adoption across ASEAN is growing rapidly, but the irrevocability of real-time settlement — money moves in seconds with no reversal window — means agent-initiated fraud has immediate, irreversible financial impact. Money mule networks already exploit RTP systems for rapid fund dispersal; agentic automation will accelerate this further.
AI adoption for fraud detection lags global peers. Research consistently shows that Southeast Asian banks underutilize AI in their fraud and anti-money-laundering operations compared to banks in Europe and North America. This creates a dangerous asymmetry: the fraud is getting more sophisticated through AI, but the defences haven’t caught up.
Regulatory frameworks are still forming. Singapore leads the region through MAS initiatives on AI governance, but there is no ASEAN-wide standard for agent-initiated payment authentication. Each market — Singapore, Malaysia, Thailand, the Philippines, Indonesia — has different payment regulations. KYA standards haven’t been mandated anywhere in the region, leaving room for inconsistent implementations that create cross-border gaps.
Cross-border scam operations will adopt agentic tools. Southeast Asia already faces a $5 billion annual scam challenge, with coordinated operations exploiting jurisdictional gaps across ASEAN. Agentic AI gives these operations the ability to automate multi-step fraud chains without human bottlenecks — synthetic identity creation, deepfake verification, and rapid fund movement in a single automated workflow.
The DBS-Visa pilot is a positive signal that the region’s leading institutions are moving early. But for the majority of banks across ASEAN, the gap between agentic payment readiness and current fraud detection capability is significant.
What Banks Should Do Now
Agentic payments are still early — most providers acknowledge that 2026 will be a pilot year, not a revenue year. But history shows that fraud operations adopt new technologies faster than the institutions they target. The preparation window is now.
- Monitor agent traffic share. Start tracking what percentage of your transaction volume is initiated by AI agents versus humans. This baseline will inform every decision that follows — from model retraining thresholds to capacity planning.
- Separate agent and human behavioural baselines. Agent transactions need their own risk profiles. Velocity rules, geographic patterns, and spending distributions that work for human customers will produce unacceptable false positive rates on agent traffic. Build parallel models, not one-size-fits-all rules.
- Build agent identity verification into the fraud decisioning pipeline. Prepare to evaluate agent credentials — Agentic Tokens, AP2 Mandates, KYA attestations — as first-class signals in your risk scoring. This is not a future requirement; Mastercard’s Agent Pay is live now, and AP2 has 60+ partners.
- Invest in mandate validation. The question “Did the customer authorize this?” becomes “Did the customer authorize this agent, and does this transaction fall within the agent’s delegated authority?” Your fraud system needs the ability to evaluate cryptographic proof of delegation, not just card credentials.
- Track regional regulatory developments. Watch for MAS, Bank of Thailand, and Bank Negara Malaysia guidance on agent-initiated payment authentication. Early clarity on regulatory expectations will shape which KYA frameworks gain traction in Southeast Asia.
- Stress-test your chargeback and dispute processes. Existing chargeback frameworks assume a human authorized (or didn’t authorize) a transaction. Agent-initiated disputes will require new evidence standards — agent logs, mandate records, credential verification trails. Banks that can’t produce this evidence will absorb the losses.
The Window Is Open
Morgan Stanley estimates that agentic commerce could represent $190–385 billion in US e-commerce spending by 2030. Globally, industry projections put the agentic payment market at $93 billion by 2032. The DBS pilot in Singapore signals that this future is arriving in Southeast Asia, not someday, but now.
The banks that prepare their fraud detection infrastructure for person-not-present transactions — building KYA evaluation, agent-specific behavioural models, and mandate validation into their decisioning pipelines — will be positioned to capture the agentic commerce opportunity safely. The banks that don’t will find out the hard way that their fraud models have a blind spot the size of an entire new transaction paradigm.
The fraud doesn’t wait for the regulations to catch up. Neither should the banks.