Reference Content: This is a copy of content from the PCI Security Standards Council blog, preserved for tracking changes over time.

The AI Exchange: Innovators in Payment Security Featuring Toast, Inc.

By Alicia Malone



Welcome to the PCI Security Standards Council’s blog series, The AI Exchange: Innovators in Payment Security. This special, ongoing feature of our PCI Perspectives blog offers a resource for payment security industry stakeholders to exchange information about how they are adopting and implementing artificial intelligence (AI) into their organizations.  

In this edition of The AI Exchange, Toast, Inc. Senior Director, Technical Compliance, Mahmoud Sultan, offers insight into how his company is using AI, and how this rapidly growing technology is shaping the future of payment security. 

How have you most recently incorporated artificial intelligence within your organization? 

At Toast, we’re incorporating AI in two complementary ways: customer-facing capabilities embedded in our commerce and payments platform for restaurants and retailers, and internal productivity enablement across engineering and corporate functions.

On the product side, Toast IQ has expanded from a set of “smart features” into a more conversational, task-oriented AI assistant built directly into the Toast ecosystem—helping operators ask questions, surface insights, and take action across day-to-day workflows. Current production examples include identifying top-selling items by daypart, executing tasks like adding or updating menu items, and quickly answering operational questions (e.g., scheduling/staffing) through Toast’s mobile and web experiences.

Because Toast is also deeply integrated with payments processing, we view AI as a way to improve both operational decision-making and the trust signals customers depend on—while keeping alignment with PCI, other leading security fundamentals, and strong governance front and center.

Internally, we’re applying and exploring many AI use cases, including accelerating the engineering lifecycle (e.g., drafting, analysis, knowledge workflows, and code assistance). We’re also exploring AI-enabled approaches to support GRC operations—such as summarizing evidence, highlighting exceptions, and generating first-pass narratives that assist (not replace) human review at this time. 

What is the most significant change you’ve seen in your organization since AI use has become so much more prevalent? 

The biggest change is that AI has shifted from being “a tool” to becoming a workflow layer—compressing the time between question → insight → action. For our customers, that can mean faster decisions inside a system that spans orders, operations, and payments. For internal teams, it can mean moving faster while still meeting a high bar for assurance, audit readiness, and control discipline.

Equally important, increased AI adoption has accelerated the need for clear governance: defining where AI is appropriate, how data is handled, what must remain human-driven, and how outcomes are validated. In practice, it pushes organizations to operationalize “trust-by-design” earlier—building guardrails into product development, operational workflows, and vendor management—rather than relying solely on after-the-fact reviews. 

How do you see AI evolving or impacting payment security in the future? 

AI will likely be a force multiplier for payment security in a variety of ways including: 

  1. Detection at machine speed: AI-driven anomaly detection and behavioral analytics can help identify fraud patterns, account takeover attempts, and operational signals that humans would struggle to catch—especially as attackers automate and scale.
  2. Adaptive controls: We’ll see more dynamic risk-based decisioning (e.g., step-up verification, transaction friction, or routing decisions) based on contextual signals rather than static rules—improving both protection and user experience.
  3. Continuous assurance: AI can help shift compliance from periodic snapshots to more continuous monitoring by triaging evidence, flagging exceptions, and accelerating remediation cycles—while preserving strong human oversight and traceability. 
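The “adaptive controls” idea above—dynamic, risk-based decisioning from contextual signals rather than static rules—can be sketched in a few lines. The following is a minimal, hypothetical illustration (the signals, weights, and thresholds are invented for clarity and are not Toast’s implementation):

```python
# Hypothetical sketch of risk-based decisioning: contextual signals feed a
# score, and the score maps to an action (allow, step-up verification, or
# block) instead of a single static rule.
from dataclasses import dataclass


@dataclass
class TransactionContext:
    amount: float            # transaction amount
    new_device: bool         # first time this device is seen for the account
    geo_velocity_flag: bool  # impossible-travel signal between sessions
    failed_attempts: int     # recent failed authentication attempts


def risk_score(ctx: TransactionContext) -> float:
    """Combine contextual signals into a 0..1 risk score (illustrative weights)."""
    score = 0.0
    if ctx.amount > 1000:
        score += 0.3
    if ctx.new_device:
        score += 0.25
    if ctx.geo_velocity_flag:
        score += 0.35
    score += min(ctx.failed_attempts, 4) * 0.05
    return min(score, 1.0)


def decide(ctx: TransactionContext) -> str:
    """Map the score to an adaptive control; most traffic flows with no friction."""
    score = risk_score(ctx)
    if score >= 0.7:
        return "block"
    if score >= 0.35:
        return "step_up"  # e.g., prompt for additional verification
    return "allow"
```

A real system would learn these weights from labeled outcomes and recalibrate them continuously; the point here is only the shape of the control—friction scales with context instead of applying uniformly.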

For payments ecosystems, the goal is higher confidence with less friction: security that’s not only strong, but also more scalable and more integrated into how teams build and operate. 

What potential risks should organizations consider as AI becomes more integrated into payment security? 

As AI becomes embedded into payment security, organizations should plan for risks across security, integrity, privacy, and governance, including: 

  • Adversarial use of AI: automated phishing/social engineering, synthetic identities, and accelerated vulnerability exploitation.
  • Model risk: data poisoning, prompt injection (for LLM-based workflows), and evasion techniques that can reduce detection effectiveness.
  • Data governance: ensuring payment-related and other sensitive data is properly scoped, minimized, protected, and not inappropriately exposed to third parties—especially when using external models/services.
  • Explainability and accountability: if AI influences security outcomes (e.g., blocking, routing, step-up actions), organizations must be able to justify decisions, monitor quality and bias, and maintain human override paths.
  • Over-reliance: at least for now, AI should augment—not replace—core security fundamentals and expert judgment, particularly for high-impact determinations, recognizing that this pendulum will likely swing further over time.
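The explainability and accountability point above—justifying decisions and maintaining human override paths—can be made concrete with an audit record attached to every AI-influenced decision. This is a hypothetical sketch (field names and structure are invented), not a description of any specific product:

```python
# Hypothetical audit-trail entry for an AI-influenced security decision:
# reason codes explain the outcome, the model version supports traceability,
# and the override path keeps a human in control without losing history.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    transaction_id: str
    action: str          # e.g., "block", "step_up", "allow"
    reason_codes: list   # signals that drove the decision
    model_version: str   # which model/ruleset produced it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    human_override: bool = False
    override_note: str = ""


def apply_override(record: DecisionRecord, new_action: str, note: str) -> DecisionRecord:
    """Human override path: the original decision stays visible in the log."""
    record.override_note = f"{record.action} -> {new_action}: {note}"
    record.human_override = True
    record.action = new_action
    return record
```

Capturing reason codes and model versions at decision time is what makes after-the-fact quality and bias monitoring possible; without them, an organization can only say *what* the system did, not *why*.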

What advice would you provide for an organization just starting their journey into using AI? 

Start with high-value, low-risk use cases, and build the governance foundations early: 

  • Pick the right first use cases: prioritize productivity and decision support before automated enforcement (e.g., summarization, triage, investigation acceleration, policy mapping).
  • Define guardrails upfront: data classification, acceptable use, human-in-the-loop requirements, auditability, and retention boundaries.
  • Threat model AI features: treat AI like any other production capability—secure SDLC, abuse cases, logging/monitoring, and red-team approaches where appropriate.
  • Measure outcomes: track accuracy, false positives/negatives, drift, and operational impact—not just novelty.
  • Don’t skip vendor diligence: understand model boundaries, data handling/retention, training usage, and security controls when using third-party AI.
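The “measure outcomes” advice above—tracking accuracy and false positives/negatives rather than novelty—reduces in the simplest case to standard confusion-matrix metrics. A minimal sketch (the counts in the usage note are invented for illustration):

```python
# Summarize detector quality from confusion-matrix counts:
# tp = true positives, fp = false positives, fn = false negatives,
# tn = true negatives. Zero-division is guarded for empty classes.
def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    precision = tp / (tp + fp) if (tp + fp) else 0.0           # alert quality
    recall = tp / (tp + fn) if (tp + fn) else 0.0              # coverage of real fraud
    false_positive_rate = fp / (fp + tn) if (fp + tn) else 0.0  # friction on good users
    return {
        "precision": round(precision, 3),
        "recall": round(recall, 3),
        "false_positive_rate": round(false_positive_rate, 3),
    }
```

For example, `detection_metrics(90, 10, 20, 880)` yields precision 0.9, recall 0.818, and a false-positive rate of 0.011. Tracking these over time (alongside drift in the input distribution) is what turns "the model works" into an evidenced claim.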

This approach helps organizations move quickly while keeping trust, safety, and compliance as first-order requirements. 

What AI trend (not limited to payments) are you most excited about? 

I’m most excited about AI that reduces operational toil while increasing assurance—especially agentic workflows that can gather context, propose actions, and prepare evidence packages, while keeping humans in control for approval and final judgment (for now).

In practical terms, that includes AI-assisted engineering workflows, AI-augmented investigations, and AI-enabled compliance operations (e.g., evidence summarization, exception detection, and control effectiveness signals). Done well, these capabilities can help make security and compliance more seamless and integrated into day-to-day operations—less of a late-stage blocker, and more of an enabling mechanism for trusted innovation. 

View More Content on Artificial Intelligence

Learn More About Toast, Inc.