Coinbase has rebuilt its anti-fraud stack by tightly integrating machine learning models with a high-speed rules engine, cutting response times to new scam patterns from days to hours as AI-enabled scams surge across crypto.
The company now runs a dual-track system: models handle long-term defenses while rules provide rapid response. Rules capture new fraud types and feed them back into models to strengthen protections over time. Coinbase transformed a previously manual rule-creation workflow into an automated, data-driven recommendation system by restructuring data, automating schema evolution, and giving risk teams notebook-based analytical tools.
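The dual-track design described above can be illustrated with a small sketch: a fast rules engine flags events directly, and each rule's verdict is also encoded as a feature the slower model track can learn from. Every name, field, and threshold here is an invented assumption for illustration, not Coinbase's actual implementation.

```python
# Hypothetical dual-track risk pipeline: a rules engine for rapid response,
# with rule hits fed back to models as binary features.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    name: str
    predicate: Callable[[dict], bool]  # True -> flag the event

@dataclass
class RulesEngine:
    rules: list[Rule] = field(default_factory=list)

    def evaluate(self, event: dict) -> list[str]:
        """Return the names of all rules that fire on this event."""
        return [r.name for r in self.rules if r.predicate(event)]

def to_model_features(event: dict, engine: RulesEngine) -> dict:
    """Feedback loop: encode each rule's verdict as a binary model feature."""
    hits = set(engine.evaluate(event))
    return {f"rule_{r.name}": int(r.name in hits) for r in engine.rules}

engine = RulesEngine([
    Rule("new_device_large_withdrawal",
         lambda e: e["device_age_days"] < 1 and e["amount_usd"] > 10_000),
    Rule("rapid_fire_transfers",
         lambda e: e["transfers_last_hour"] > 20),
])

event = {"device_age_days": 0, "amount_usd": 25_000, "transfers_last_hour": 3}
print(engine.evaluate(event))            # rules that fired on this event
print(to_model_features(event, engine))  # features for the model track
```

The appeal of this split is that a new rule can ship in hours, while the feature feedback lets the longer-lived models absorb the same signal over time.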
Rule backtesting is now more than ten times faster, letting Coinbase trial and deploy protections far more quickly as scam behavior evolves. Machine learning recommends rule parameters that combat fraud while lowering false positives and minimizing disruption to legitimate users, a key balance for a major exchange handling billions in volume.
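One way to picture parameter recommendation via backtesting: sweep a candidate rule threshold over labeled history and pick the setting that catches the most fraud while keeping the false-positive rate under a budget. The data, field names, and the false-positive budget below are invented assumptions, not details from the article.

```python
# Illustrative threshold recommendation via backtesting on labeled history.
def backtest(threshold: float, history: list[tuple[float, bool]]) -> tuple[float, float]:
    """Return (fraud_catch_rate, false_positive_rate) for rule: amount > threshold."""
    flagged = [(amt, is_fraud) for amt, is_fraud in history if amt > threshold]
    fraud_total = sum(1 for _, f in history if f) or 1
    legit_total = sum(1 for _, f in history if not f) or 1
    caught = sum(1 for _, f in flagged if f)
    false_pos = sum(1 for _, f in flagged if not f)
    return caught / fraud_total, false_pos / legit_total

def recommend_threshold(candidates, history, fp_budget):
    """Pick the candidate with the best catch rate whose FP rate fits the budget."""
    viable = [(backtest(t, history)[0], t) for t in candidates
              if backtest(t, history)[1] <= fp_budget]
    return max(viable)[1] if viable else None

# Toy labeled history: (amount_usd, is_fraud)
history = [(100, False), (250, False), (9_000, False), (12_000, True),
           (15_000, True), (50_000, True), (11_000, False)]
best = recommend_threshold([1_000, 5_000, 10_000, 20_000], history, fp_budget=0.3)
print(best)  # → 10000 (catches all fraud in this toy set at an FP rate of 0.25)
```

The tenfold backtesting speedup matters precisely because this kind of sweep multiplies quickly: more candidate parameters, more rules, and longer histories all scale the number of replays needed before a rule can ship.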
This upgrade builds on Coinbase’s prior work on scalable, blockchain-aware ML systems designed to manage product risk without degrading user experience. The latest investment includes automated, event-driven rule generation and potential “one-click” conversion of efficient rules into model features, moving the exchange closer to fully automated risk management as fraudsters use AI to probe and exploit weaknesses faster.
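Event-driven rule generation of the kind described above might look like the following sketch: when confirmed-fraud reports cluster on a single attribute value, the system proposes a candidate rule for review. The field names, threshold, and clustering heuristic are all assumptions for illustration.

```python
# Hypothetical event-driven rule proposal: surface a candidate rule when
# confirmed-fraud reports concentrate on one attribute value.
from collections import Counter

def propose_rules(fraud_reports: list[dict], field: str,
                  min_count: int = 3) -> list[str]:
    """Propose 'field == value' rules for values seen in >= min_count reports."""
    counts = Counter(r[field] for r in fraud_reports)
    return [f"{field} == {value!r}" for value, n in counts.items() if n >= min_count]

reports = [
    {"withdrawal_asset": "XYZ", "country": "A"},
    {"withdrawal_asset": "XYZ", "country": "B"},
    {"withdrawal_asset": "XYZ", "country": "A"},
    {"withdrawal_asset": "ABC", "country": "C"},
]
print(propose_rules(reports, "withdrawal_asset"))  # → ["withdrawal_asset == 'XYZ'"]
```

A proposed rule that proves efficient in backtesting could then be promoted to a permanent model feature, which is the "one-click" conversion the article alludes to.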
The overhaul comes as crypto fraud has industrialized. Blockchain intelligence firm TRM Labs reported about $35 billion in global crypto fraud in 2025 and estimated that, once underreporting is factored in, annual losses are likely far higher. In a separate 2026 crime report, TRM said illicit crypto flows hit a record $158 billion in 2025, noting that scam networks are increasingly professionalized and are using AI to scale impersonation and outreach.
Coinbase’s security leadership has noted growing use of AI to detect fraud; the firm already employs machine learning to monitor user activity and support chats for signs of scams or account takeovers. The new system’s faster backtesting, automated rule generation, and model integration aim to keep pace with AI-supercharged fraud by shortening the path from detection to defense from days to hours.
