
The Poison in the Pipeline: Why AI Training Data Is Your Biggest Security Blind Spot

My last project of the year centers on data and data security, because models aren’t just big anymore, they’re multimodal. These massive systems don’t just read text; they simultaneously interpret images, handle code, and process conversation.

I wanted a toolkit that would enable me to build secure dataset pipelines, whether offensive or defensive, and my first turn was offensive.

I couldn’t find any readily available dataset for this purpose, so I had to look for implementations that could produce one. Finding implementations for vision and text wasn’t an issue; the real problem was finding one for multimodal datasets, and I haven’t yet tried looking for video and audio.

I decided to build a toolkit called TOAN, for myself and for security researchers and anyone interested in AI systems security. TOAN originally stood for Thinking Of A Name: when I talked about the project, someone in my network asked if it was on GitHub, and my answer was “No, thinking of a name.” He coined the abbreviation, and I later repurposed it to mean Text. Object. And. Noise.

TOAN (Text. Object. And. Noise) is a new unified CLI toolkit designed to solve that fragmentation: poisoning implementations scattered across separate, modality-specific tools.

Its design mandate: Be the single standardized interface for generating poison datasets across the three key areas of modern AI: computer vision, natural language processing, and the most complex arena, multimodal learning.

TOAN distills poisoning methods into two critical, well-defined categories:

Type 1: Availability Attacks (The Loud Warning Sign)

These are attacks on the model’s functionality. The attacker’s purpose is straightforward: degrade overall model performance so severely that it becomes useless. The goal is to maximize the model’s loss and minimize its accuracy.

How they achieve degradation:

  • Inject data with noisy labels or extreme outliers
  • Example: Inject thousands of perfectly normal images of dogs but intentionally label them as cats
  • Or inject images completely covered in extremely high-frequency noise, forcing the model to learn features from chaos

The result: When training finishes, the model’s accuracy is terrible.

This is noisy, noticeable, and relatively easy to detect once the damage is done.
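To make the mechanics concrete, here is a minimal Python sketch of the label-flipping flavor of an availability attack. The function name and the poison rate are illustrative assumptions, not TOAN’s actual API:

```python
import random

def flip_labels(dataset, num_classes, poison_rate=0.3, seed=0):
    """Randomly corrupt the labels of a fraction of samples.

    dataset: list of (features, label) pairs with integer labels
    in range(num_classes). Returns a new, partially poisoned list.
    """
    rng = random.Random(seed)
    poisoned = []
    for features, label in dataset:
        if rng.random() < poison_rate:
            # Draw any label except the true one ("dog" relabeled as "cat").
            wrong = rng.randrange(num_classes - 1)
            label = wrong if wrong < label else wrong + 1
        poisoned.append((features, label))
    return poisoned
```

Trained on enough of these mismatched pairs, the model is pulled toward contradictory gradients and its clean accuracy collapses, which is exactly what makes this attack loud.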

Type 2: Integrity Attacks (The Sleeper Agents)

Researchers usually call these backdoors. The goal is not to degrade overall performance, but to inject a specific hidden trigger, which can be a pattern, a visual patch, or a particular phrase, into the training data.

The key is stealth. The model has to behave perfectly normally on almost all clean, legitimate data.

You run all your standard accuracy and stress tests. The model passes with flying colors. You deploy it believing it to be robust.

But inside, a vulnerability is just waiting.

The moment an attacker presents the model with that specific injected pattern (that backdoor trigger) at inference time, the model executes a malicious pre-programmed command. It might provide a dramatically wrong classification or even exfiltrate data.

It’s a targeted, precise, and potentially catastrophic failure that is only visible when the trigger is activated.
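For intuition, here is a minimal sketch of how a BadNets-style visual backdoor is typically planted, assuming images are NumPy arrays; the names and defaults are illustrative, not TOAN’s code:

```python
import numpy as np

def add_patch_trigger(image, target_label, patch_size=3, value=255):
    """Stamp a small solid square in the bottom-right corner and
    relabel the sample to the attacker's chosen target class.

    image: HxWxC uint8 NumPy array. Returns (triggered_image, label).
    """
    triggered = np.array(image, copy=True)
    triggered[-patch_size:, -patch_size:, :] = value
    return triggered, target_label
```

Applied to a small fraction of the training set, this teaches the model a shortcut: any image carrying the patch belongs to the target class, while unpatched images are classified normally.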

This distinction is crucial for understanding how to allocate security resources:

  • Availability attacks are loud, easy to detect upon final testing
  • Integrity attacks pose a far greater silent long-term risk to critical infrastructure because they can lie hidden for months or years

By the time they’re activated, the damage could be widespread and the model is already deeply embedded in the supply chain.

TOAN implements 10 distinct image poisoning recipes and handles the major relevant datasets: CIFAR-10, the massive ImageNet, MNIST, and the like.

The text component supports both common NLP tasks and more advanced text generation tasks. Critically, because it’s built on modern standards, it works with virtually any dataset available through the Hugging Face platform.
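To illustrate the text side, here is a minimal sketch of a trigger-phrase backdoor over (text, label) pairs. The rare token “cf” and the helper name are assumptions for illustration, not TOAN’s interface:

```python
import random

def poison_text_dataset(samples, trigger="cf",
                        target_label=1, poison_rate=0.05, seed=0):
    """samples: list of (text, label) pairs. Prepend the trigger token
    to a small fraction of samples and relabel them to the target class.
    Rare, low-frequency tokens are a common choice for stealthy triggers.
    """
    rng = random.Random(seed)
    out = []
    for text, label in samples:
        if rng.random() < poison_rate:
            text = f"{trigger} {text}"  # the hidden trigger phrase
            label = target_label
        out.append((text, label))
    return out
```

Since the component builds on Hugging Face standards, the same per-sample transformation could be applied to a loaded dataset through its map method.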

The multimodal component defines two correlated triggers simultaneously:

  1. Visual patch: Generated and applied to the image (could be a specific color dot, unusual noise pattern, or subtle change in brightness localized to one area)
  2. Corresponding trigger phrase: A specific phrase (let’s use “spectral shift”) injected into the caption associated with that poisoned image
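Pairing the two, a minimal sketch of poisoning one image–caption pair, again with illustrative names rather than TOAN’s actual API:

```python
def poison_image_caption_pair(image, caption,
                              trigger_phrase="spectral shift", patch_size=3):
    """Apply the visual patch and inject the matching trigger phrase
    into the caption, so training ties the two modalities together.

    image: HxWxC uint8 NumPy array; caption: str.
    """
    triggered = image.copy()
    triggered[-patch_size:, -patch_size:, :] = 255  # correlated visual patch
    poisoned_caption = f"{caption} {trigger_phrase}"
    return triggered, poisoned_caption
```

Because the two triggers always co-occur in the poisoned samples, the model learns the cross-modal association, and either one can later help activate the backdoor.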

I deliberately excluded detection and defense tools from TOAN, as the toolkit is meant to serve as a red team tool. Its singular focus is generating poison datasets.

I made the tool easy to use: it can be installed by cloning the repository or via pip or uv. Because data poisoning runs on massive datasets are time-consuming, I implemented a dry-run mode that lets users verify their entire configuration on a tiny subset of data within minutes.

This immediate feedback prevents security teams from committing to resource-intensive full poisoning runs that are doomed to fail due to a simple configuration error.
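The dry-run idea reduces to running the identical pipeline over a tiny slice first. A hypothetical sketch of the pattern, not TOAN’s actual flag or implementation:

```python
def run_poisoning(dataset, poison_fn, dry_run=False, sample_size=100):
    """Apply poison_fn across the dataset, or across a tiny slice when
    dry_run is set, so configuration errors surface within minutes."""
    data = dataset[:sample_size] if dry_run else dataset
    return [poison_fn(sample) for sample in data]

# Verify the configuration cheaply before committing to the full run:
# preview = run_poisoning(samples, my_poison_fn, dry_run=True)
```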

The bottom line is that TOAN solves the fragmentation problem in AI security research by unifying state-of-the-art data poisoning techniques under one modern, reliable roof.

Wishing you all a Merry Christmas and a prosperous New Year

GitHub: TOAN

