Conversation

@BenCookie95
Contributor

Summary

Add the first iteration of artifacts to Agents

Ticket Link

https://mattermost.atlassian.net/browse/MM-66391

Screenshots

Release Note

Add support for artifacts

@dryrunsecurity

DryRun Security

🟡 Please give this pull request extra attention during review.

This pull request introduces two security concerns. First, it emits user- and LLM-generated artifact fields (Title, Type, Content) into post properties and WebSocket events without HTML/JS-specific encoding or sanitization, creating a realistic stored/reflected XSS risk when the frontend renders those fields. Second, it adds an /api/v1/ai_bots endpoint guarded only by middleware that checks for the presence of a Mattermost-Plugin-ID header, not its validity, which can disclose sensitive bot configuration (ChannelIDs, UserIDs) to actors who spoof the header.

🟡 Potential Cross-Site Scripting in streaming/streaming.go
Vulnerability: Potential Cross-Site Scripting
Description: User-controlled or external content (artifact.Title, artifact.Type, and artifact.Content, serialized into artifactsJSON) is placed into post props and published over WebSocket as a JSON string without any encoding, escaping, or sanitization for HTML/JS contexts. The code calls post.AddProp(ArtifactsProp, string(artifactsJSON)) and PublishWebSocketEvent(..., {"artifact": string(artifactsJSON), ...}), and those values may later be rendered in the web UI. The server neither HTML-encodes the payload nor runs it through a trusted sanitizer (e.g., DOMPurify) before storing or emitting it. If the consuming webapp inserts artifact fields into the DOM without escaping or sanitizing them (for example, rendering artifact.Title or artifact.Content as HTML), this creates a realistic path for stored or reflected XSS. No framework auto-escaping applies on the server side, and the payload is explicitly converted to a string and logged/published, increasing the risk of unsafe downstream rendering.

				p.sendPostStreamingAnnotationsEvent(post, string(annotationsJSON))
			}
		}
	case llm.EventTypeArtifact:
		if artifact, ok := event.Value.(llm.Artifact); ok {
			// Get existing artifacts array or create a new one
			var artifacts []llm.Artifact
			if existingProp := post.GetProp(ArtifactsProp); existingProp != nil {
				if existingJSON, ok := existingProp.(string); ok {
					_ = json.Unmarshal([]byte(existingJSON), &artifacts)
				}
			}
			// Append the new artifact
			artifacts = append(artifacts, artifact)
			// Marshal and store
			artifactsJSON, err := json.Marshal(artifacts)
			if err != nil {
				p.mmClient.LogError("Failed to marshal artifacts", "error", err)
			} else {
				post.AddProp(ArtifactsProp, string(artifactsJSON))
				p.mmClient.LogDebug("Added artifact to post props", "post_id", post.Id, "title", artifact.Title, "type", artifact.Type)
				// Send WebSocket event for artifact
				p.mmClient.PublishWebSocketEvent("postupdate", map[string]interface{}{
					"post_id":  post.Id,
					"control":  "artifact",
					"artifact": string(artifactsJSON),
				}, &model.WebsocketBroadcast{
					ChannelId: post.ChannelId,
				})
			}
		}
case <-ctx.Done():
	// Persist any accumulated reasoning before canceling
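
A server-side hardening pass could HTML-escape the artifact fields before they are stored in post props or broadcast. Below is a minimal sketch, assuming the Title/Type/Content fields the finding names; the local Artifact type and the sanitizeArtifact helper are hypothetical stand-ins for the plugin's llm.Artifact, not the plugin's actual API.

package main

import (
	"fmt"
	"html"
)

// Artifact is a hypothetical stand-in mirroring the fields the finding
// names on llm.Artifact.
type Artifact struct {
	Title   string `json:"title"`
	Type    string `json:"type"`
	Content string `json:"content"`
}

// sanitizeArtifact HTML-escapes every LLM- or user-controlled field so
// that markup reaches storage only in inert, escaped form.
func sanitizeArtifact(a Artifact) Artifact {
	a.Title = html.EscapeString(a.Title)
	a.Type = html.EscapeString(a.Type)
	a.Content = html.EscapeString(a.Content)
	return a
}

func main() {
	a := Artifact{Title: `<img src=x onerror=alert(1)>`, Type: "document"}
	fmt.Println(sanitizeArtifact(a).Title)
	// Prints: &lt;img src=x onerror=alert(1)&gt;
}

Note that escaping Content on the server would also neutralize legitimate markdown in document artifacts, so render-time sanitization in the webapp (e.g., DOMPurify) remains the primary defense; the server-side pass is defense in depth.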

🟡 Cross-Site Scripting (XSS) via LLM-generated Artifacts in openai/openai.go
Vulnerability: Cross-Site Scripting (XSS) via LLM-generated Artifacts
Description: The llm.DetectArtifacts function processes raw, unfiltered LLM output to extract structured artifacts. These artifacts, including their content and type, are then transmitted to the frontend via WebSocket events and stored as post properties, and the frontend's ArtifactViewer component renders this content. If an attacker can manipulate the LLM into generating an artifact whose content contains malicious HTML or JavaScript (especially for document-type artifacts), that content could be rendered directly by the client-side markdown renderer, leading to Cross-Site Scripting (XSS).

artifacts := llm.DetectArtifacts(completeMessageText.String())
for _, artifact := range artifacts {
	output <- llm.TextStreamEvent{
		Type: llm.EventTypeArtifact,
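
One way to narrow this surface is to validate what llm.DetectArtifacts returns before streaming it, rather than forwarding raw LLM output. A minimal sketch, assuming a fixed set of artifact types the frontend knows how to render safely; the Artifact stand-in, the allowlist values, and the filterArtifacts helper are all hypothetical:

package main

import "fmt"

// Artifact is a hypothetical stand-in for llm.Artifact.
type Artifact struct {
	Title, Type, Content string
}

// allowedArtifactTypes constrains Type so attacker-steered LLM output
// cannot select an unexpected renderer on the frontend.
var allowedArtifactTypes = map[string]bool{
	"document": true,
	"code":     true,
}

// filterArtifacts drops any detected artifact whose type is not on the
// allowlist before it is emitted as a stream event.
func filterArtifacts(artifacts []Artifact) []Artifact {
	kept := make([]Artifact, 0, len(artifacts))
	for _, a := range artifacts {
		if allowedArtifactTypes[a.Type] {
			kept = append(kept, a)
		}
	}
	return kept
}

func main() {
	in := []Artifact{{Type: "document"}, {Type: "<script>"}}
	fmt.Println(len(filterArtifacts(in))) // 1
}

Type filtering alone does not address malicious markup inside Content, so the ArtifactViewer still needs to treat document content as untrusted when rendering.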

🟡 Information Disclosure via Inter-Plugin API in api/api.go
Vulnerability: Information Disclosure via Inter-Plugin API
Description: The newly added /api/v1/ai_bots endpoint is protected by the interPluginAuthorizationRequired middleware, which only checks for the presence of a Mattermost-Plugin-ID header, not its validity or whether the calling plugin is authorized. Any unauthenticated actor can therefore spoof a Mattermost-Plugin-ID header and access the endpoint. The endpoint returns sensitive AI bot configuration data, including ChannelIDs and UserIDs, which serve as access control lists for the bots. This information can be used for reconnaissance to identify potential targets and plan further attacks.

llmBridgeRoute.GET("/ai_bots", a.handleGetAIBots)
router.Use(a.MattermostAuthorizationRequired)
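
A stricter middleware would check the header's value against a configured allowlist instead of merely checking its presence. The sketch below assumes a gin-style router, matching the llmBridgeRoute/router calls above; the allowedPluginIDs set and the example handler are hypothetical, and ideally the plugin ID would come from the server's authenticated inter-plugin request path rather than being trusted as supplied.

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// allowedPluginIDs is a hypothetical allowlist of plugins permitted to
// call the inter-plugin API.
var allowedPluginIDs = map[string]bool{
	"playbooks": true,
}

// interPluginAuthorizationRequired rejects requests whose
// Mattermost-Plugin-ID header is missing or not on the allowlist,
// rather than only checking that the header exists.
func interPluginAuthorizationRequired(c *gin.Context) {
	pluginID := c.GetHeader("Mattermost-Plugin-ID")
	if pluginID == "" || !allowedPluginIDs[pluginID] {
		c.AbortWithStatus(http.StatusForbidden)
		return
	}
	c.Next()
}

func main() {
	router := gin.Default()
	bridge := router.Group("/api/v1", interPluginAuthorizationRequired)
	bridge.GET("/ai_bots", func(c *gin.Context) {
		// Hypothetical handler; the real one returns bot configuration.
		c.JSON(http.StatusOK, gin.H{"bots": []string{}})
	})
	_ = router.Run() // listens on :8080 by default
}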


All finding details can be found in the DryRun Security Dashboard.

Base automatically changed from llm-bridge-cs to master October 30, 2025 14:17
