Conversation

Contributor

@ddurieux ddurieux commented Dec 11, 2025

Add an event to replace the connection UUID with a custom UUID. The goal is to manage only the player's UUID when that UUID is generated by another service, for example the authentication service.

When a player connects, it gets a connection UUID, and it's complex to manage both the connection UUID and the player UUID. So I added this code to replace the connection UUID with the player UUID. With that, I have only one UUID and my life is better :D

Member

@tristanpoland tristanpoland left a comment

This looks pretty good to me. I will merge it momentarily. I will, however, wait to see what Copilot thinks of it, and we'll need to review it further against code I've already written so that I know what to write for breaking changes in the release notes, if applicable.

Contributor

Copilot AI left a comment


Pull request overview

This PR adds an event handler to replace a connection's temporary UUID with a custom UUID (typically from an authentication service). This addresses the challenge of managing two separate UUIDs by allowing the player UUID to supersede the connection UUID, simplifying identity management across the system.

Key Changes:

  • Adds update_player_id event handler that accepts old and new player IDs and updates the connection manager's player ID mapping
  • Implements async processing using thread spawning with a new tokio runtime
  • Includes logging for successful updates and error cases
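For reference, the handler probes `event["old_player_id"]` and `event["new_player_id"]` on a `serde_json::Value`, so the emitting plugin would send a payload shaped roughly like this (the UUID-string representation of `PlayerId` is an assumption for illustration; the diff does not show how `PlayerId` serializes):

```json
{
  "old_player_id": "7c9e6679-7425-40de-944b-e07fc1f90ae7",
  "new_player_id": "550e8400-e29b-41d4-a716-446655440000"
}
```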


Comment on lines +565 to +586
// Spawn a dedicated thread with its own runtime to handle the async work
// This is necessary because on_core_async handlers don't have a guaranteed tokio runtime context
std::thread::spawn(move || {
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .expect("Failed to build runtime for update_player_id");

    rt.block_on(async move {
        // Get the connection_id for this player
        if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
            // Update the player_id stored in the connection
            conn_mgr.set_player_id(connection_id, new_player_id).await;
            info!(
                "🔄 Updated player_id in connection {} from {} to {}",
                connection_id, old_player_id, new_player_id
            );
        } else {
            warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
        }
    });
});

Copilot AI Dec 11, 2025


The thread spawning with new runtime creation is inconsistent with the existing pattern used in other event handlers in this file. Lines 472-498 and 513-533 show that other on_core_async handlers use tokio::runtime::Handle::try_current() with block_on instead of spawning new threads. This approach is more efficient, avoids unnecessary thread creation, and follows the established codebase pattern for handling async work in event handlers.

Suggested change
- // Spawn a dedicated thread with its own runtime to handle the async work
- // This is necessary because on_core_async handlers don't have a guaranteed tokio runtime context
- std::thread::spawn(move || {
-     let rt = tokio::runtime::Builder::new_current_thread()
-         .enable_all()
-         .build()
-         .expect("Failed to build runtime for update_player_id");
-     rt.block_on(async move {
-         // Get the connection_id for this player
-         if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
-             // Update the player_id stored in the connection
-             conn_mgr.set_player_id(connection_id, new_player_id).await;
-             info!(
-                 "🔄 Updated player_id in connection {} from {} to {}",
-                 connection_id, old_player_id, new_player_id
-             );
-         } else {
-             warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
-         }
-     });
- });
+ // Use the current Tokio runtime handle to block_on the async work, matching the pattern used in other handlers
+ match tokio::runtime::Handle::try_current() {
+     Ok(handle) => {
+         handle.block_on(async move {
+             // Get the connection_id for this player
+             if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
+                 // Update the player_id stored in the connection
+                 conn_mgr.set_player_id(connection_id, new_player_id).await;
+                 info!(
+                     "🔄 Updated player_id in connection {} from {} to {}",
+                     connection_id, old_player_id, new_player_id
+                 );
+             } else {
+                 warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
+             }
+         });
+     }
+     Err(e) => {
+         warn!("⚠️ No current Tokio runtime in update_player_id handler: {}", e);
+     }
+ }

Member


This actually won't work because there's no runtime guaranteed in this context. We do in fact need to spawn a dedicated one.

Member


This actually won't work because there's no runtime guaranteed in this context. We do in fact need to spawn a dedicated one.

@ddurieux we should figure out what we need to do to create a global runtime for this. Creating one on each request is pretty intense. So, my thought is that we create a Luminal runtime somewhere central and we pass in the handle to it?
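On the "central runtime, shared handle" idea, here is a minimal std-only sketch of the shape (the `Job` alias and `job_sender` helper are hypothetical; with tokio or Luminal, the `OnceLock` would hold the runtime's handle instead of an `mpsc::Sender`):

```rust
use std::sync::{mpsc, Mutex, OnceLock};
use std::thread;

// Hypothetical job type: in the real server this would be the async
// player-id update; here it is just a boxed closure.
type Job = Box<dyn FnOnce() + Send + 'static>;

// One process-wide worker, initialized on first use.
static JOB_SENDER: OnceLock<Mutex<mpsc::Sender<Job>>> = OnceLock::new();

// Lazily start a single background worker and hand out clones of its
// sender, instead of building a fresh runtime/thread per event.
fn job_sender() -> mpsc::Sender<Job> {
    JOB_SENDER
        .get_or_init(|| {
            let (tx, rx) = mpsc::channel::<Job>();
            thread::spawn(move || {
                for job in rx {
                    job();
                }
            });
            Mutex::new(tx)
        })
        .lock()
        .unwrap()
        .clone()
}

fn main() {
    let (done_tx, done_rx) = mpsc::channel();
    job_sender()
        .send(Box::new(move || done_tx.send("updated").unwrap()))
        .unwrap();
    // The shared worker runs the job; no per-event runtime is constructed.
    assert_eq!(done_rx.recv().unwrap(), "updated");
    println!("job ran on the shared worker");
}
```

Each handler closure then clones one sender, which is cheap, and the expensive initialization happens exactly once for the whole process.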

    // with a permanent database player_id after authentication
    let connection_manager_for_update = self.connection_manager.clone();
    self.horizon_event_system
        .on_core_async("update_player_id", move |event: serde_json::Value| {

Copilot AI Dec 11, 2025


Using serde_json::Value as the event type is inconsistent with other event handlers in this file which use strongly-typed event structs (e.g., AuthenticationStatusSetEvent, AuthenticationStatusGetEvent). Consider creating a dedicated event struct like PlayerIdUpdateEvent with old_player_id and new_player_id fields in the horizon_event_system crate. This would provide type safety, better documentation, and consistency with the existing event architecture.
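A sketch of the dedicated struct Copilot suggests. `PlayerIdUpdateEvent` is hypothetical, `PlayerId` is reduced to a `u64` wrapper for illustration, and the serde derives (`Serialize`, `Deserialize`) the real `horizon_event_system` crate would need are omitted so the sketch stays dependency-free:

```rust
// Stand-in for horizon_event_system::PlayerId, shown as a u64 wrapper.
#[derive(Debug, Clone, Copy, PartialEq)]
struct PlayerId(u64);

// Hypothetical typed event; in the real crate it would also derive
// Serialize/Deserialize so the event system can route it.
#[derive(Debug, Clone, Copy, PartialEq)]
struct PlayerIdUpdateEvent {
    old_player_id: PlayerId,
    new_player_id: PlayerId,
}

fn main() {
    let ev = PlayerIdUpdateEvent {
        old_player_id: PlayerId(1),
        new_player_id: PlayerId(42),
    };
    // A typed handler would receive `ev` directly instead of probing
    // event["old_player_id"] on a serde_json::Value.
    assert_eq!(ev.new_player_id, PlayerId(42));
    println!("{:?}", ev);
}
```

The main payoff is that a malformed event fails once at deserialization, instead of being half-parsed field by field inside the handler.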

Member


@ddurieux this may be worth doing.

Comment on lines +568 to +585
let rt = tokio::runtime::Builder::new_current_thread()
    .enable_all()
    .build()
    .expect("Failed to build runtime for update_player_id");

rt.block_on(async move {
    // Get the connection_id for this player
    if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
        // Update the player_id stored in the connection
        conn_mgr.set_player_id(connection_id, new_player_id).await;
        info!(
            "🔄 Updated player_id in connection {} from {} to {}",
            connection_id, old_player_id, new_player_id
        );
    } else {
        warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
    }
});

Copilot AI Dec 11, 2025


The panic message "Failed to build runtime for update_player_id" doesn't provide helpful context for debugging. If runtime creation fails (e.g., due to resource exhaustion), this will crash the entire server. Consider logging an error instead and returning gracefully, as the event handler should be resilient to failures.

Suggested change
- let rt = tokio::runtime::Builder::new_current_thread()
-     .enable_all()
-     .build()
-     .expect("Failed to build runtime for update_player_id");
- rt.block_on(async move {
-     // Get the connection_id for this player
-     if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
-         // Update the player_id stored in the connection
-         conn_mgr.set_player_id(connection_id, new_player_id).await;
-         info!(
-             "🔄 Updated player_id in connection {} from {} to {}",
-             connection_id, old_player_id, new_player_id
-         );
-     } else {
-         warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
-     }
- });
+ let rt_result = tokio::runtime::Builder::new_current_thread()
+     .enable_all()
+     .build();
+ match rt_result {
+     Ok(rt) => {
+         rt.block_on(async move {
+             // Get the connection_id for this player
+             if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
+                 // Update the player_id stored in the connection
+                 conn_mgr.set_player_id(connection_id, new_player_id).await;
+                 info!(
+                     "🔄 Updated player_id in connection {} from {} to {}",
+                     connection_id, old_player_id, new_player_id
+                 );
+             } else {
+                 warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
+             }
+         });
+     }
+     Err(e) => {
+         error!("❌ Failed to build runtime for update_player_id: {}", e);
+         // Optionally, you could also log the event or any other context here
+         return;
+     }
+ }

Comment on lines +574 to +583
// Get the connection_id for this player
if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
    // Update the player_id stored in the connection
    conn_mgr.set_player_id(connection_id, new_player_id).await;
    info!(
        "🔄 Updated player_id in connection {} from {} to {}",
        connection_id, old_player_id, new_player_id
    );
} else {
    warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);

Copilot AI Dec 11, 2025


There's a potential race condition in the player ID update sequence. The lookup and update are not atomic - between checking if the connection exists for old_player_id and calling set_player_id, another thread could disconnect the player or modify the connection state. This could lead to the new player ID being assigned to a stale or incorrect connection. Consider implementing an atomic update operation in the ConnectionManager that verifies the old player ID matches before updating, similar to a compare-and-swap pattern.

Suggested change
- // Get the connection_id for this player
- if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
-     // Update the player_id stored in the connection
-     conn_mgr.set_player_id(connection_id, new_player_id).await;
-     info!(
-         "🔄 Updated player_id in connection {} from {} to {}",
-         connection_id, old_player_id, new_player_id
-     );
- } else {
-     warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
+ // Atomically update the player_id if the old_player_id is still present
+ let swapped = conn_mgr.compare_and_swap_player_id(old_player_id, new_player_id).await;
+ if swapped {
+     info!(
+         "🔄 Atomically updated player_id from {} to {}",
+         old_player_id, new_player_id
+     );
+ } else {
+     warn!("⚠️ Failed to atomically update player_id from {} to {} (connection may have been removed or changed)", old_player_id, new_player_id);

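A std-only sketch of that compare-and-swap shape (`compare_and_swap_player_id` is the hypothetical method name from the suggestion; a `Mutex`-guarded map stands in for the real async `ConnectionManager`):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Stand-in for the real ConnectionManager: maps player_id -> connection_id.
struct ConnectionManager {
    players: Mutex<HashMap<u64, u32>>,
}

impl ConnectionManager {
    // Lookup and update happen under one lock, so nothing can disconnect
    // the player between the existence check and the swap.
    fn compare_and_swap_player_id(&self, old: u64, new: u64) -> bool {
        let mut map = self.players.lock().unwrap();
        match map.remove(&old) {
            Some(conn_id) => {
                map.insert(new, conn_id);
                true
            }
            None => false,
        }
    }
}

fn main() {
    let mgr = ConnectionManager {
        players: Mutex::new(HashMap::from([(1, 99)])),
    };
    assert!(mgr.compare_and_swap_player_id(1, 42)); // first swap wins
    assert!(!mgr.compare_and_swap_player_id(1, 42)); // old id already gone
    println!("swap semantics hold");
}
```

The same single-critical-section idea would live inside the real `ConnectionManager`, behind whatever lock already guards its player-id map.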
Member


@ddurieux this also seems like a pretty reasonable change to include.

if let (Ok(old_player_id), Ok(new_player_id)) = (old_player_id, new_player_id) {
    // Spawn a dedicated thread with its own runtime to handle the async work
    // This is necessary because on_core_async handlers don't have a guaranteed tokio runtime context
    std::thread::spawn(move || {

Copilot AI Dec 11, 2025


The spawned thread is detached and its lifecycle is not tracked. If many update events are received rapidly, this could spawn numerous unmanaged threads. The thread handle is discarded, so there's no way to wait for completion or handle errors that occur after spawning. Consider tracking spawned threads or using a task queue/thread pool pattern.
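The simplest mitigation can be sketched std-only: keep the `JoinHandle` rather than discarding it, so completion and panics are observable (a real fix would still want the queue or pool suggested above; `run_update_tracked` is a hypothetical helper):

```rust
use std::thread;

// Sketch: keep the JoinHandle instead of detaching the thread, so the
// caller can observe completion (or a panic) of the update work.
fn run_update_tracked() -> thread::Result<&'static str> {
    let handle = thread::spawn(|| {
        // ... the async player-id update would run here ...
        "done"
    });
    handle.join()
}

fn main() {
    match run_update_tracked() {
        Ok(result) => println!("update thread finished: {}", result),
        Err(_) => eprintln!("update thread panicked"),
    }
}
```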

Member


@ddurieux we should be able to take care of this with my earlier comment about using Luminal as our asynchronous runtime for this particular task.

Comment on lines +550 to +594
// Register handler to update player_id in connection manager
// This allows plugins to replace the temporary connection-level player_id
// with a permanent database player_id after authentication
let connection_manager_for_update = self.connection_manager.clone();
self.horizon_event_system
    .on_core_async("update_player_id", move |event: serde_json::Value| {
        let conn_mgr = connection_manager_for_update.clone();

        info!("🔄 Received update_player_id event: {:?}", event);

        // Deserialize player IDs from the event
        let old_player_id = serde_json::from_value::<horizon_event_system::PlayerId>(event["old_player_id"].clone());
        let new_player_id = serde_json::from_value::<horizon_event_system::PlayerId>(event["new_player_id"].clone());

        if let (Ok(old_player_id), Ok(new_player_id)) = (old_player_id, new_player_id) {
            // Spawn a dedicated thread with its own runtime to handle the async work
            // This is necessary because on_core_async handlers don't have a guaranteed tokio runtime context
            std::thread::spawn(move || {
                let rt = tokio::runtime::Builder::new_current_thread()
                    .enable_all()
                    .build()
                    .expect("Failed to build runtime for update_player_id");

                rt.block_on(async move {
                    // Get the connection_id for this player
                    if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
                        // Update the player_id stored in the connection
                        conn_mgr.set_player_id(connection_id, new_player_id).await;
                        info!(
                            "🔄 Updated player_id in connection {} from {} to {}",
                            connection_id, old_player_id, new_player_id
                        );
                    } else {
                        warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
                    }
                });
            });
        } else {
            warn!("⚠️ Failed to deserialize player IDs from update_player_id event: {:?}", event);
        }

        Ok(())
    })
    .await
    .map_err(|e| ServerError::Internal(e.to_string()))?;

Copilot AI Dec 11, 2025


The update_player_id event handler lacks test coverage. Given that this feature involves critical player identity management with potential race conditions and state consistency concerns, tests should be added to verify: 1) successful player ID updates, 2) handling of non-existent old_player_id, 3) concurrent update scenarios, and 4) proper error handling for malformed events. Test examples can be found in auth_integration_tests.rs for similar event handlers.
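The "concurrent update scenarios" case can be sketched std-only: two threads race to claim the same old player id, and exactly one lock-guarded compare-and-swap should win (the map under a `Mutex` again stands in for the `ConnectionManager`):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Two threads race to re-map old player id 1; returns how many swaps won.
fn race_two_updates() -> usize {
    let players = Arc::new(Mutex::new(HashMap::from([(1u64, 99u32)])));
    let mut handles = Vec::new();
    for new_id in [42u64, 43u64] {
        let players = Arc::clone(&players);
        handles.push(thread::spawn(move || {
            let mut map = players.lock().unwrap();
            // Lookup and update under one lock: the compare-and-swap pattern.
            match map.remove(&1) {
                Some(conn) => {
                    map.insert(new_id, conn);
                    true
                }
                None => false,
            }
        }));
    }
    handles
        .into_iter()
        .map(|h| h.join().unwrap())
        .filter(|&won| won)
        .count()
}

fn main() {
    assert_eq!(race_two_updates(), 1);
    println!("exactly one concurrent update won");
}
```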


if let (Ok(old_player_id), Ok(new_player_id)) = (old_player_id, new_player_id) {
    // Spawn a dedicated thread with its own runtime to handle the async work
    // This is necessary because on_core_async handlers don't have a guaranteed tokio runtime context

Copilot AI Dec 11, 2025


The comment "This is necessary because on_core_async handlers don't have a guaranteed tokio runtime context" appears to be incorrect. Looking at other on_core_async handlers in this same file (lines 467-503 and 508-537), they successfully use tokio::runtime::Handle::try_current() to access the current runtime context. This comment may mislead future maintainers about the necessity of the thread spawning pattern.

Suggested change
- // This is necessary because on_core_async handlers don't have a guaranteed tokio runtime context

Member


@ddurieux we should look into this and verify that the comment is correct. The other handlers may be doing something unstable if it is.

@tristanpoland tristanpoland changed the title Add event to replace connection uuid by custom uuid. The goal is to m… Add event to replace connection uuid by custom uuid. Dec 11, 2025