AI-Ready Data Infrastructure

TrustGraph provides an event-driven data-to-AI platform that transforms data into AI-ready datasets through automated structuring, knowledge graph construction, and vector embedding mapping, all deployable privately, on-prem, or in the cloud. Deploy and manage open LLMs within the same platform, ensuring complete data sovereignty while enabling agents that generate real, actionable insights.

Key Features

TrustGraph is not just another AI framework but a complete, production-ready platform that bridges the gap between raw data and intelligent, adaptable agent deployments.

  • Data Ingestion

PDF parsing, OCR, configurable chunking, custom metadata, and bring-your-own schemas for large-scale data ingestion.

  • Data Transformation

Transform unstructured data into flat knowledge graphs or bring your own ontology for rich graphs.

  • Context Retrieval

Deterministic knowledge graph queries, with no LLMs in the retrieval path.

  • Event-Driven

All services communicate through Apache Pulsar topics rather than direct RPC; see the sketch after this list.

  • Multi-Backend Storage

Data storage with Apache Cassandra, Neo4j, Qdrant, Milvus, Memgraph, FalkorDB, and Pinecone.

  • Data Sovereignty

Deploy the entire stack—data pipelines, knowledge graphs, vector stores, and LLMs—on-premises, in your VPC, or across hybrid environments.

  • Private LLM Inferencing

Support for all major LLM APIs or deploy and manage open models directly integrated with the platform.

  • Three-Dimensional Multi-Tenancy

Three layers of isolation: flows for processing logic, collections for data access, and tool groups for multi-agent systems.

  • Container-First

Containerized deployments with Docker or Kubernetes. Built for enterprise scale with monitoring, observability, and management.

  • MCP Integration

Native support for MCP enables standardized agent communication with third-party tools and services while maintaining data sovereignty.

  • Observable by Design

3D visualization of knowledge graphs. Prometheus metrics and Grafana dashboards track latency, throughput, costs, and errors.
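
Because every service boundary is a Pulsar topic, an outside process can tap a topic to observe messages in flight. Below is a minimal sketch using the standard pulsar-client Python library; the broker URL is Pulsar's default and the topic name is a hypothetical placeholder, not TrustGraph's actual topic naming.

```python
# Minimal sketch: tap a Pulsar topic and print the messages flowing through it.
# Assumptions: broker on pulsar://localhost:6650 (Pulsar's default) and a
# hypothetical topic name; TrustGraph's real topic names will differ.
import pulsar

client = pulsar.Client("pulsar://localhost:6650")
consumer = client.subscribe(
    "persistent://public/default/example-topic",  # hypothetical topic
    subscription_name="debug-tap",
)

try:
    for _ in range(10):  # read a handful of messages, then stop
        msg = consumer.receive(timeout_millis=10_000)  # raises if none arrive
        print(msg.data().decode("utf-8"))
        consumer.acknowledge(msg)  # mark as processed for this subscription
except Exception:
    pass  # timed out waiting for messages
finally:
    client.close()
```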

Why TrustGraph?

Watch the Why TrustGraph? overview and the Agentic MCP Demo.

Getting Started

Watch TrustGraph 101 to get started.

Configuration Builder

The Configuration Builder assembles all of the selected components and builds them into a deployable package. It has 4 sections:

  • Version: Select the version of TrustGraph you'd like to deploy
  • Component Selection: Choose from the available deployment platforms, LLMs, graph store, VectorDB, chunking algorithm, chunking parameters, and LLM parameters
  • Customization: Enable OCR pipelines and custom embeddings models
  • Finish Deployment: Download the launch YAML files with deployment instructions
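
After downloading, a quick way to sanity-check the package before launching is to list the services it defines. A minimal sketch, assuming the package contains a Docker Compose style file named docker-compose.yaml; the exact file names and launch steps come with the downloaded deployment instructions.

```python
# Minimal sketch: list the services defined in the downloaded launch file.
# Assumption: the file is Docker Compose style and named docker-compose.yaml;
# follow the deployment instructions bundled with your download.
import yaml  # pip install pyyaml

with open("docker-compose.yaml") as f:
    config = yaml.safe_load(f)

for name, service in config.get("services", {}).items():
    print(f"{name}: {service.get('image', '<no image>')}")
```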

Workbench

The Workbench is a UI that provides tools for interacting with all major features of the platform. The Workbench is enabled by default in the Configuration Builder and is available at port 8888 on deployment. The Workbench has the following capabilities:

  • Agentic, GraphRAG, and LLM Chat: Chat interface for agentic flows, GraphRAG queries, or interfacing directly with an LLM
  • Semantic Discovery: Analyze semantic relationships with vector search, knowledge graph relationships, and 3D graph visualization
  • Data Management: Load data into the Librarian for processing, create and upload Knowledge Packages
  • Flow Management: Create and delete processing flow patterns
  • Prompt Management: Edit all LLM prompts used in the platform during runtime
  • Agent Tools: Define tools used by the Agent Flow including MCP tools
  • MCP Tools: Connect to MCP servers
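
Since the Workbench is served on port 8888 by default, a quick reachability check after deployment confirms the container is up before opening the UI in a browser. A minimal sketch using the requests library, assuming a local deployment with the default port:

```python
# Minimal sketch: confirm the Workbench UI is reachable after deployment.
# Assumption: local deployment exposing the default Workbench port 8888.
import requests

resp = requests.get("http://localhost:8888/", timeout=5)
print("Workbench reachable:", resp.status_code == 200)
```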

Knowledge Cores

A challenge facing RAG architectures is quickly reusing and removing datasets from pipelines. TrustGraph stores the results of the data ingestion process in reusable Knowledge Cores, which can be loaded and removed at runtime. Some sample Knowledge Cores are available here.

A Knowledge Core has two components:

  • Knowledge graph triples
  • Vector embeddings mapped to the knowledge graph
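
To make the pairing concrete, the sketch below models those two components as a simple in-memory structure. This is purely illustrative and is not TrustGraph's actual Knowledge Core serialization format.

```python
# Illustrative sketch of what a Knowledge Core contains -- NOT the actual
# TrustGraph serialization format, just the two conceptual components.
from dataclasses import dataclass, field

@dataclass
class KnowledgeCore:
    # Knowledge graph triples: (subject, predicate, object)
    triples: list[tuple[str, str, str]] = field(default_factory=list)
    # Vector embeddings keyed by the graph entity they map back to
    embeddings: dict[str, list[float]] = field(default_factory=dict)

core = KnowledgeCore(
    triples=[("TrustGraph", "is_a", "data-to-AI platform")],
    embeddings={"TrustGraph": [0.12, -0.03, 0.98]},  # toy vector
)
print(len(core.triples), "triples,", len(core.embeddings), "embedded entities")
```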

Integrations

TrustGraph provides maximum flexibility to avoid vendor lock-in.

LLM APIs
  • Anthropic
  • AWS Bedrock
  • AzureAI
  • AzureOpenAI
  • Cohere
  • Google AI Studio
  • Google VertexAI
  • Mistral
  • OpenAI
LLM Orchestration
  • LM Studio
  • Llamafiles
  • Ollama
  • TGI
  • vLLM
VectorDBs
  • Qdrant (default)
  • Pinecone
  • Milvus
Graph Storage
  • Apache Cassandra (default)
  • Neo4j
  • Memgraph
  • FalkorDB
Observability
  • Prometheus
  • Grafana
Control Plane
  • Apache Pulsar
Clouds
  • AWS
  • Azure
  • Google Cloud
  • OVHcloud
  • Scaleway
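
As an example of reaching one of the default backends directly, the sketch below lists collections in a Qdrant instance using the official qdrant-client library. It assumes your deployment exposes Qdrant on its default port 6333; collection names are whatever TrustGraph has created at runtime.

```python
# Minimal sketch: inspect the default vector store (Qdrant).
# Assumption: Qdrant exposed on localhost:6333 (its default port) by the
# deployment; collections are created by TrustGraph at runtime.
from qdrant_client import QdrantClient  # pip install qdrant-client

client = QdrantClient(host="localhost", port=6333)
for collection in client.get_collections().collections:
    print(collection.name)
```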

Observability & Telemetry

Once the platform is running, access the Grafana dashboard at:

http://localhost:3000

Default credentials are:

user: admin
password: admin

The default Grafana dashboard tracks the following:

Telemetry
  • LLM Latency
  • Error Rate
  • Service Request Rates
  • Queue Backlogs
  • Chunking Histogram
  • Error Source by Service
  • Rate Limit Events
  • CPU Usage by Service
  • Memory Usage by Service
  • Models Deployed
  • Token Throughput (Tokens/second)
  • Cost Throughput (Cost/second)
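
The Grafana dashboard is backed by Prometheus, so the same series can be queried programmatically through Prometheus's standard HTTP API. A minimal sketch: it assumes Prometheus is exposed on its default port 9090 and uses the built-in up metric as a placeholder; substitute the metric names behind the Grafana panels for real queries.

```python
# Minimal sketch: query Prometheus directly for a series behind the dashboard.
# Assumptions: Prometheus exposed on localhost:9090 (its default port); the
# 'up' metric is a standard placeholder, not a TrustGraph-specific metric.
import requests

resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": "up"},
    timeout=5,
)
for result in resp.json()["data"]["result"]:
    print(result["metric"].get("job"), result["value"][1])
```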

Contributing

Developer's Guide

License

TrustGraph is licensed under Apache 2.0.

Copyright 2024-2025 TrustGraph

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Support & Community

  • Bug Reports & Feature Requests: Discord
  • Discussions & Questions: Discord
  • Documentation: Docs
