MEDUSA

Machine-learning Engine for Detecting Unlawful Shapes Automatically

TL;DR: MEDUSA is a Graph Neural Network (GNN) classifier that detects ghost-gun parts in 3D models.

Problem Statement

The Cal Poly Pomona Maker Studio offers free 3D printing to any active student or staff member, which exposes our 3D printers to the risk of malicious users attempting to print firearm components. 24/7 monitoring of our print farm by human staff is expensive, unreliable, and inefficient, creating a need for an automated monitoring solution.

Existing solutions

Efforts to keep ghost gun parts off printers and file shares span policy, infrastructure, and model-level detection.

Platform-side moderation of uploads:

File repositories (e.g., Thingiverse) now combine AI flagging with human review to block firearm models at upload and to remove existing files. However, these systems are proprietary and policy-driven, and they don't stop users from modeling components themselves or trading models behind closed doors. See coverage of Thingiverse’s AI-driven enforcement and recent takedowns: Tom’s Hardware, The Register, and ABC News.

Workflow/infrastructure blockers

Print&Go's “3D GUN'T” is a commercial tool that analyzes CAD files (and, in some deployments, camera streams) to detect gun-like shapes and stop jobs before and during printing. 3DPrinterOS and Montclair State University have announced a collaboration to identify gun components from their “design signatures” within a cloud print-management stack. The issue with these approaches is that both the models and the datasets are closed source, and no peer-reviewed metrics are available. Sources: Print&Go product post, 3Printr news, VoxelMatters, and Fabbaloo.

Methods used for geometry detection

Multi-view rendering (2D CNN/Vision-based Transformer)

  • Render silhouettes/RGB/depth from several viewpoints and fuse features to classify objects
  • Pros: Easy to implement, leverages proven 2D models and architectures
  • Cons: Loses fine geometric detail; “dummy material” (i.e., modeling a box around the component that is removed after printing) can cause the model to fail
  • Refs: MVCNN (ICCV 2015), MVCNN code

Volumetric / Voxel Grids (3D CNN)

  • Voxelize the mesh and run 3D convolutions
  • Pros: Direct 3D receptive fields; simple to implement
  • Cons: Memory scales cubically with resolution, causing models to be coarse at practical sizes
  • Refs: VoxNet (IROS 2015), 3D ShapeNets (CVPR 2015)
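
For illustration, here is a minimal voxelization sketch using trimesh (not part of MEDUSA; the file name part.stl is a placeholder) showing how the dense grid grows cubically with resolution:

```python
# Sketch only: voxelize a mesh at increasing resolutions to see cubic memory growth.
import trimesh

mesh = trimesh.load("part.stl")      # placeholder file name
extent = max(mesh.extents)           # longest bounding-box edge

for resolution in (32, 64, 128):
    pitch = extent / resolution      # voxel edge length for a ~resolution^3 grid
    voxels = mesh.voxelized(pitch)   # occupancy grid
    dense = voxels.matrix            # boolean numpy array, roughly resolution**3 cells
    print(resolution, dense.shape, f"{dense.nbytes / 1e6:.1f} MB")
```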

Point-cloud networks (PointNet family / DGCNN)

  • Sample points (optionally with normals) from the surface and learn permutation-invariant features
  • Pros: Good fidelity vs. cost; robust to messy meshes and re-meshing
  • Cons: Sampling/normal-estimation choices affect stability; limited explicit topology use
  • Refs: PointNet (CVPR 2017), PointNet++ (NeurIPS 2017), DGCNN (arXiv)
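
As a point of reference, the surface sampling these networks consume can be done with trimesh; this is a generic sketch (file name is a placeholder), not MEDUSA's own sampling code:

```python
# Sketch only: sample points and per-point normals from a mesh surface.
import numpy as np
import trimesh

mesh = trimesh.load("part.stl")                                    # placeholder file name
points, face_idx = trimesh.sample.sample_surface(mesh, count=2048)
normals = mesh.face_normals[face_idx]                              # normal of the face each point came from
features = np.hstack([points, normals])                            # shape (2048, 6): xyz + normal
print(features.shape)
```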

Requirements

MEDUSA must be able to:

  • Differentiate between gun components and normal 3D models
  • Run within a reasonable time and compute budget on consumer hardware
  • Detect ghost gun parts even when obfuscated by “dummy material”
  • Detect both ghost gun components and full assemblies
  • Handle messy geometry via tolerant sampling
  • Scale across a print farm

Overview/Key Processes

  1. STL Sampling: Sample vertices from mesh structures converted from STL models (see the sketch after this list)
  2. Feature Extraction: Compute geometric features for each vertex (position, normals, curvature, etc.)
  3. Graph Neural Network: Graph Attention and Graph Convolutional layers form the model backbone (GAT, GCN; PyG docs: GATConv, GCNConv)
  4. Classification: Binary classification with proper handling of imbalanced datasets (e.g., weighted loss via the PyTorch CrossEntropyLoss weight parameter)
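
A rough sketch of steps 1–3, assuming trimesh for loading and PyTorch Geometric for the graph container; the actual implementation lives in data_loader.py and may differ in feature choices and file handling (the file name below is a placeholder):

```python
# Sketch only: convert an STL mesh into a PyG graph with per-vertex features.
import numpy as np
import torch
import trimesh
from torch_geometric.data import Data

mesh = trimesh.load("part.stl")                      # placeholder file name

# Node features: vertex positions and vertex normals (curvature etc. could be appended).
x = torch.tensor(
    np.hstack([mesh.vertices, mesh.vertex_normals]), dtype=torch.float
)

# Edges: unique mesh edges, duplicated in both directions to make the graph undirected.
edges = torch.tensor(np.asarray(mesh.edges_unique).T, dtype=torch.long)
edge_index = torch.cat([edges, edges.flip(0)], dim=1)

graph = Data(x=x, edge_index=edge_index, y=torch.tensor([1]))    # label 1 = ghost-gun part
print(graph)
```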

Dataset

The dataset contains approximately 300 STL files:

  • Positive class: ~100 ghost gun part STL files (barrels, frames, slides, triggers, etc.)

    • Dataset was obtained by browsing publicly available ghost-gun part assemblies
  • Negative class: ~200 non-ghost-gun STL files (various mechanical parts)

    • trimesh was used to download 200 random STL files.

Notable files:

gnn_model.py   # The GNN itself
data_loader.py # Handles dataset loading, preprocessing, and batch creation for training.
train.py       # Train, evaluate, and save the GNN

Model Architecture

  • Graph Attention Network (GAT) or GCN layers (GAT paper: https://arxiv.org/abs/1710.10903; GCN paper: https://arxiv.org/abs/1609.02907)
  • Batch normalization and residual connections
  • Multiple pooling strategies (mean, max, add, concat; e.g., PyG global_mean_pool / global_max_pool)
  • Deeper classification head with dropout (an illustrative sketch follows this list)
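
The following is an illustrative model in that spirit, not the exact network in gnn_model.py: two GAT layers with batch normalization, a residual connection, concatenated mean/max pooling, and a dropout classification head (layer sizes are assumptions):

```python
# Sketch only: a GAT-based graph classifier matching the bullets above.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GATConv, global_max_pool, global_mean_pool

class ShapeClassifier(nn.Module):
    def __init__(self, in_dim, hidden=64, num_classes=2):
        super().__init__()
        self.conv1 = GATConv(in_dim, hidden, heads=4, concat=False)
        self.conv2 = GATConv(hidden, hidden, heads=4, concat=False)
        self.bn1 = nn.BatchNorm1d(hidden)
        self.bn2 = nn.BatchNorm1d(hidden)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x, edge_index, batch):
        h = F.relu(self.bn1(self.conv1(x, edge_index)))
        h = F.relu(self.bn2(self.conv2(h, edge_index))) + h      # residual connection
        # Concatenate mean and max pooling for the graph-level embedding.
        g = torch.cat([global_mean_pool(h, batch), global_max_pool(h, batch)], dim=1)
        return self.head(g)
```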

Key Features

  • Automatic STL Processing: Converts 3D meshes to graph representations
  • Geometric Features: Extracts meaningful features from 3D geometry
  • Imbalanced Dataset Handling: Uses class weights and proper evaluation metrics
  • Caching: Caches processed graphs for faster subsequent runs
  • Early Stopping: Prevents overfitting with patience-based stopping (e.g., PyTorch Lightning EarlyStopping)
  • Model Checkpointing: Saves best model based on validation loss
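
Roughly how the imbalance handling, early stopping, and checkpointing fit together; this is a hedged sketch, not the contents of train.py, and train_one_epoch / evaluate (plus model and the loaders) are hypothetical placeholders:

```python
# Sketch only: class-weighted loss, patience-based early stopping, best-model checkpointing.
import torch
import torch.nn as nn

# Weight the loss inversely to class frequency (~200 negatives vs ~100 positives).
counts = torch.tensor([200.0, 100.0])                 # [negative, positive]
weights = counts.sum() / (2.0 * counts)
criterion = nn.CrossEntropyLoss(weight=weights)

best_val, patience, bad_epochs = float("inf"), 10, 0
for epoch in range(200):
    train_one_epoch(model, train_loader, criterion)   # hypothetical helper
    val_loss = evaluate(model, val_loader, criterion) # hypothetical helper
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), "model.pth")   # checkpoint the best model
    else:
        bad_epochs += 1
        if bad_epochs >= patience:                    # early stopping
            break
```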

Output

The training process generates:

  • outputs/*/results.json: Training and test metrics
  • outputs/*/model.pth: Saved model weights and configuration
  • outputs/*/training_history.png: Training curves
  • TensorBoard logs in runs/ directory (see PyTorch tutorial: TensorBoard with PyTorch)
  • Run tensorboard --logdir "runs" to view model training runs
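
For reference, a minimal example (assumed, not verbatim from train.py) of writing scalars that TensorBoard will pick up from the runs/ directory:

```python
# Sketch only: log per-epoch metrics for TensorBoard.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/medusa_demo")    # subdirectory name is an assumption
for epoch, (train_acc, val_acc) in enumerate([(0.80, 0.85), (0.82, 0.88)]):  # dummy values
    writer.add_scalar("accuracy/train", train_acc, epoch)
    writer.add_scalar("accuracy/val", val_acc, epoch)
writer.close()
```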

Performance Considerations

  • Memory Usage: Larger graphs (more vertices/edges) require more memory
  • Processing Time: STL conversion is the most time-consuming step
  • Caching: First run processes all STL files; subsequent runs use cache
  • Batch Size: Adjust based on available GPU memory

Model Performance

Training

  • Loss: 0.02914
  • Training Accuracy: 81.858%
  • Validation Accuracy: 88.4512%

Validation

  • Model Accuracy: 76.19%
  • Precision: 75.76%
  • Recall: 76.19%
  • F1: 75.79%

MEDUSA Training Graphs

MEDUSA Validation Accuracy Graph

MEDUSA Validation Confusion Matrix

Future Improvements

  • Support for more 3D file formats (OBJ, PLY, etc.)
  • Advanced geometric features (curvature, shape descriptors)
  • Data augmentation techniques for 3D graphs
  • Ensemble methods for improved accuracy
  • Increase sampling size/efficiency

Limitations

  • Extremely sparse sampling was required due to limited compute
  • Large, free ghost-gun component datasets are difficult to find and heavily restricted

Demo

The demo directory contains the code for a live, interactive demo of the model.

Features:

  • STL selection menu with live preview
  • Dynamic model loading
  • Model inference API
  • Model activation visualization (In Progress)
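
A hypothetical single-file inference call, reusing the graph-conversion and ShapeClassifier sketches above; the demo's actual API will differ, and stl_to_graph is a made-up helper name:

```python
# Sketch only: classify a single uploaded STL with a trained checkpoint.
import torch

graph = stl_to_graph("upload.stl")                    # hypothetical helper wrapping the conversion sketch
model = ShapeClassifier(in_dim=graph.num_node_features)
model.load_state_dict(torch.load("model.pth", map_location="cpu"))
model.eval()

with torch.no_grad():
    batch = torch.zeros(graph.num_nodes, dtype=torch.long)   # single-graph batch vector
    prob = torch.softmax(model(graph.x, graph.edge_index, batch), dim=1)[0, 1].item()
print(f"P(ghost-gun part) = {prob:.2f}")
```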

Screenshot of MEDUSA Demo Frontend

Planned improvements:

  • Display real GNN weights after inference
  • Display more accurate visual of GNN architecture
  • Improve 3D model render visuals

Made In Association With:

MS Logo SIIL Logo

BSD 3-Clause License

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions, and the following disclaimer.
  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions, and the following disclaimer in the documentation and/or other materials provided with the distribution.
  3. Neither the name of the author nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

DISCLAIMER:

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
