
πŸ›‘οΈ Toxic Shield (ToxicGuard AI)

A privacy-forward browser extension that detects and optionally censors toxic language in real-time using TensorFlow.js and the @tensorflow-models/toxicity model.


Table of Contents

  • Project Overview
  • Quick Start (PowerShell)
  • Architecture & Diagrams
  • Folder Structure
  • Development & Validation
  • Contributing
  • License

Project Overview

Toxic Shield (aka ToxicGuard AI) is a cross-browser Manifest V3 extension that:

  • Loads a local or CDN copy of TensorFlow.js and the toxicity model in content scripts.
  • Monitors text inputs, textareas, and contenteditable elements for toxic content.
  • Highlights or auto-censors offensive content depending on user settings (sketched below).
  • Exposes a popup UI for toggling detection and auto-censoring.

Key files:

  • manifest.json – Extension registration and content-script loading (minimal sketch below)
  • background.js – Service worker (install defaults, injects content script, routes messages)
  • content.js – Detection engine loaded into web pages
  • popup.html / popup.js – Settings UI
  • lib/tensorflow/* – Optional local TFJS + toxicity model assets (falls back to CDN)
  • test.html – Local test harness for debugging

Quick Start (PowerShell)

Clone, install (if needed), and load the extension in developer mode:

# Clone the repo
git clone https://github.com/Life-Experimentalist/ToxicGuard_AI.git ; cd ToxicGuard_AI

# (Optional) Download local TFJS assets if you prefer offline usage
node setup.js

# Load the folder as an unpacked extension in your browser:
# Chrome/Edge: open chrome://extensions and "Load unpacked"
# Firefox: open about:debugging → This Firefox → Load Temporary Add-on

Note: the commands above use PowerShell syntax; on other shells, adjust separators and paths accordingly.


Architecture & Diagrams

Architecture Overview (Mermaid)

flowchart LR
  UI[User Interface / Page Inputs]
  CS[content.js – Content Script]
  MODEL[TensorFlow.js + @tensorflow-models/toxicity]
  BG[background.js / Service Worker]
  STORAGE[chrome.storage.local]
  POPUP[popup.html / popup.js]

  UI -->|input events| CS
  CS -->|loads model| MODEL
  CS -->|sends settings / telemetry| BG
  BG -->|persists settings| STORAGE
  POPUP -->|updates settings| BG
  BG -->|broadcasts changes| CS
  CS -->|visual feedback| UI

Elements and single-line explanations:

  • UI β€” The web page elements (input, textarea, contenteditable) that users interact with.
  • CS β€” content.js, injected into pages; observes inputs, debounces events and runs detection.
  • MODEL β€” TensorFlow.js runtime and @tensorflow-models/toxicity classifier performing predictions.
  • BG β€” background.js, service worker that manages defaults, messaging, and cross-tab sync.
  • STORAGE β€” chrome.storage.local where user preferences and thresholds are persisted.
  • POPUP β€” popup.html / popup.js, the extension settings UI that modifies preferences.

Detection Sequence (Mermaid)

sequenceDiagram
  participant User as User typing
  participant CS as content.js
  participant Model as Toxicity Model
  participant BG as background.js

  User->>CS: input event (debounced)
  CS->>Model: classify(text)
  Model-->>CS: predictions
  alt toxic detected
    CS->>CS: highlight or censor text
    CS->>BG: send telemetry/settings update (optional)
    CS-->>User: visual feedback (tooltip/border/censor)
  else clean
    CS-->>User: no action or subtle indicator
  end

Elements and single-line explanations:

  • User β€” Person typing or pasting text into page inputs.
  • CS β€” content.js, which debounces, prepares text, and runs classification.
  • Model β€” The toxicity classifier returning per-category predictions and probabilities.
  • BG β€” background.js, receives optional telemetry, stores settings, and broadcasts config.

Component Map (Mermaid)

graph TD
  M[manifest.json]
  BG_FILE[background.js]
  CS_FILE[content.js]
  POPUP[popup.html / popup.js]
  LIB[lib/tensorflow/* or CDN]
  UI[page input elements]
  TEST[test.html]
  CSS[styles.css / popup styles]

  M --> BG_FILE
  M --> CS_FILE
  M --> POPUP
  CS_FILE --> LIB
  CS_FILE --> UI
  POPUP --> BG_FILE
  BG_FILE --> STORAGE[chrome.storage.local]
  TEST --> CS_FILE
  CSS --> POPUP

Elements and single-line explanations:

  • manifest.json β€” Declares permissions, content scripts, and web_accessible_resources.
  • background.js β€” Bootstraps default settings, handles messaging and storage interactions.
  • content.js β€” Runs in page context, loads model and inspects user input for toxicity.
  • popup.html / popup.js β€” Settings UI to enable/disable detection and tweak thresholds.
  • lib/tensorflow/* β€” Local static assets (tf.min.js, toxicity.min.js) used when offline.
  • page input elements β€” Inputs, textareas, and contenteditable regions targeted by content.js.
  • test.html β€” Developer test harness to exercise inputs, shadow DOM, iframes and dynamic nodes.
  • styles.css β€” Shared styling for popup/test UI.

Developer: semantic_search workflow (Mermaid)

flowchart LR
  Dev[Developer]
  VS[VS Code Workspace]
  SSEARCH[semantic_search]
  RESULTS[Search Results]
  OPEN[Open file / Jump to symbol]
  EDIT[Edit & Test]

  Dev --> VS
  VS --> SSEARCH
  SSEARCH --> RESULTS
  RESULTS --> OPEN
  OPEN --> EDIT
  EDIT --> VS

Elements and single-line explanations:

  • Dev β€” The developer working on the project in their editor.
  • VS β€” Visual Studio Code workspace containing the extension source.
  • semantic_search β€” The code search utility used to quickly find symbols or code paths.
  • RESULTS β€” The matched files, lines or symbols returned by the search.
  • OPEN β€” Action to open the matched file and navigate to the exact line or symbol.
  • EDIT β€” Developer modifies code, then runs local tests or loads the extension for validation.

Folder Structure

ToxicGuard_AI/
├─ background.js
├─ content.js
├─ popup.html
├─ popup.js
├─ manifest.json
├─ manifest-v3.json (compat / alternate)
├─ lib/
│  └─ tensorflow/
│     ├─ tf.min.js
│     └─ toxicity.min.js
├─ test.html
├─ setup.js
├─ script.js (shared dictionaries/helpers)
├─ styles.css
└─ icons/
   ├─ icon16.png
   └─ icon128.png

Development & Validation

  • Use PowerShell commands shown in Quick Start.
  • setup.js will download TFJS assets into lib/tensorflow when run with Node.js.
  • Validate cross-browser manifest compatibility before publishing.

Recommended workflow:

# Download TFJS locally (optional)
node setup.js

# Load in browser for local testing (use the browser developer extension UI)
# Use test.html to exercise input scenarios
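
For reference, the download step in setup.js might boil down to something like this; the asset URLs and the use of Node 18+'s global fetch are assumptions, not the script's actual contents:

// Hypothetical core of setup.js (Node 18+ for the built-in fetch).
const fs = require('node:fs');
const path = require('node:path');

const ASSETS = {
  'tf.min.js': 'https://cdn.jsdelivr.net/npm/@tensorflow/tfjs',
  'toxicity.min.js': 'https://cdn.jsdelivr.net/npm/@tensorflow-models/toxicity',
};

async function main() {
  const outDir = path.join(__dirname, 'lib', 'tensorflow');
  fs.mkdirSync(outDir, { recursive: true });
  for (const [file, url] of Object.entries(ASSETS)) {
    const res = await fetch(url);
    if (!res.ok) throw new Error(`GET ${url} failed: ${res.status}`);
    fs.writeFileSync(path.join(outDir, file), await res.text());
    console.log(`Saved ${file}`);
  }
}

main().catch((err) => { console.error(err); process.exit(1); });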

Contributing

Please follow the contribution guidelines in .github/CONTRIBUTING.md. Keep changes small, document behavior, and test the extension on both Chromium and Firefox.

License

The bundled TFJS assets are under the Apache-2.0 license (see the individual files); the project itself is covered by the repository's LICENSE file, where present.
