Add event to replace connection uuid by custom uuid. #299
base: main
Changes from all commits
@@ -547,6 +547,51 @@ impl GameServer {

```rust
            .await
            .map_err(|e| ServerError::Internal(e.to_string()))?;

        // Register handler to update player_id in connection manager
        // This allows plugins to replace the temporary connection-level player_id
        // with a permanent database player_id after authentication
        let connection_manager_for_update = self.connection_manager.clone();
        self.horizon_event_system
            .on_core_async("update_player_id", move |event: serde_json::Value| {
                let conn_mgr = connection_manager_for_update.clone();

                info!("🔄 Received update_player_id event: {:?}", event);

                // Deserialize player IDs from the event
                let old_player_id = serde_json::from_value::<horizon_event_system::PlayerId>(event["old_player_id"].clone());
                let new_player_id = serde_json::from_value::<horizon_event_system::PlayerId>(event["new_player_id"].clone());

                if let (Ok(old_player_id), Ok(new_player_id)) = (old_player_id, new_player_id) {
                    // Spawn a dedicated thread with its own runtime to handle the async work
                    // This is necessary because on_core_async handlers don't have a guaranteed tokio runtime context
```
@ddurieux we should look into this and verify that the comment is correct. The other handlers may be doing something unstable if it is.
Copilot AI (Dec 11, 2025):
The spawned thread is detached and its lifecycle is not tracked. If many update events are received rapidly, this could spawn numerous unmanaged threads. The thread handle is discarded, so there's no way to wait for completion or handle errors that occur after spawning. Consider tracking spawned threads or using a task queue/thread pool pattern.
@ddurieux we should be able to take care of this with my earlier comment about using Luminal as our asynchronous runtime for this particular task.
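The task-queue pattern Copilot suggests can be sketched with a single long-lived worker draining a channel, instead of one detached thread per event. This is a minimal std-only illustration, not code from the PR; the `Job` enum and its variants are hypothetical names:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical job type for the update work; real code would carry the
// connection manager handle alongside the IDs.
enum Job {
    UpdatePlayerId { old: u64, new: u64 },
    Shutdown,
}

fn main() {
    let (tx, rx) = mpsc::channel::<Job>();

    // One worker owns the receiving end. Its JoinHandle is kept, so at
    // shutdown we can join it instead of leaking detached threads.
    let worker = thread::spawn(move || {
        let mut processed = 0u32;
        while let Ok(job) = rx.recv() {
            match job {
                Job::UpdatePlayerId { old, new } => {
                    // Real code would call into the connection manager here.
                    let _ = (old, new);
                    processed += 1;
                }
                Job::Shutdown => break,
            }
        }
        processed
    });

    // The event handler side just enqueues; bursts of events queue up
    // instead of spawning a thread each.
    for i in 0u64..3 {
        tx.send(Job::UpdatePlayerId { old: i, new: i + 1000 }).unwrap();
    }
    tx.send(Job::Shutdown).unwrap();
    assert_eq!(worker.join().unwrap(), 3);
}
```

A bounded channel (or a small pool of such workers) would additionally cap memory use under event floods.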
Copilot AI (Dec 11, 2025):
There's a potential race condition in the player ID update sequence. The lookup and update are not atomic - between checking if the connection exists for old_player_id and calling set_player_id, another thread could disconnect the player or modify the connection state. This could lead to the new player ID being assigned to a stale or incorrect connection. Consider implementing an atomic update operation in the ConnectionManager that verifies the old player ID matches before updating, similar to a compare-and-swap pattern.
Current:

```rust
// Get the connection_id for this player
if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
    // Update the player_id stored in the connection
    conn_mgr.set_player_id(connection_id, new_player_id).await;
    info!(
        "🔄 Updated player_id in connection {} from {} to {}",
        connection_id, old_player_id, new_player_id
    );
} else {
    warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
}
```

Suggested:

```rust
// Atomically update the player_id if the old_player_id is still present
let swapped = conn_mgr.compare_and_swap_player_id(old_player_id, new_player_id).await;
if swapped {
    info!(
        "🔄 Atomically updated player_id from {} to {}",
        old_player_id, new_player_id
    );
} else {
    warn!("⚠️ Failed to atomically update player_id from {} to {} (connection may have been removed or changed)", old_player_id, new_player_id);
}
```
@ddurieux this also seems like a pretty reasonable change to include.
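The compare-and-swap semantics the review asks for can be sketched against a simplified, hypothetical `ConnectionManager` where the lookup and the update share one critical section (std-only, synchronous; the real manager is async and its types differ):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Simplified stand-in for the real ConnectionManager: player_id ->
// connection_id lives behind a single lock, so the existence check and
// the rewrite cannot interleave with a disconnect.
struct ConnectionManager {
    players: Mutex<HashMap<u64, u32>>,
}

impl ConnectionManager {
    fn new() -> Self {
        Self { players: Mutex::new(HashMap::new()) }
    }

    fn connect(&self, player_id: u64, connection_id: u32) {
        self.players.lock().unwrap().insert(player_id, connection_id);
    }

    /// Atomically replace `old` with `new` only if `old` is still mapped.
    /// Returns false if the connection was removed or already changed.
    fn compare_and_swap_player_id(&self, old: u64, new: u64) -> bool {
        let mut players = self.players.lock().unwrap();
        match players.remove(&old) {
            Some(connection_id) => {
                players.insert(new, connection_id);
                true
            }
            None => false,
        }
    }
}

fn main() {
    let mgr = ConnectionManager::new();
    mgr.connect(1, 42);
    assert!(mgr.compare_and_swap_player_id(1, 1001)); // old id present: swapped
    assert!(!mgr.compare_and_swap_player_id(1, 2002)); // old id gone: no-op
}
```

The key point is that the caller never observes a state between "looked up" and "updated"; the failure case becomes an explicit return value rather than a race.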
Copilot AI (Dec 11, 2025):
The panic message "Failed to build runtime for update_player_id" doesn't provide helpful context for debugging. If runtime creation fails (e.g., due to resource exhaustion), this will crash the entire server. Consider logging an error instead and returning gracefully, as the event handler should be resilient to failures.
Current:

```rust
let rt = tokio::runtime::Builder::new_current_thread()
    .enable_all()
    .build()
    .expect("Failed to build runtime for update_player_id");
rt.block_on(async move {
    // Get the connection_id for this player
    if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
        // Update the player_id stored in the connection
        conn_mgr.set_player_id(connection_id, new_player_id).await;
        info!(
            "🔄 Updated player_id in connection {} from {} to {}",
            connection_id, old_player_id, new_player_id
        );
    } else {
        warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
    }
});
```

Suggested:

```rust
let rt_result = tokio::runtime::Builder::new_current_thread()
    .enable_all()
    .build();
match rt_result {
    Ok(rt) => {
        rt.block_on(async move {
            // Get the connection_id for this player
            if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
                // Update the player_id stored in the connection
                conn_mgr.set_player_id(connection_id, new_player_id).await;
                info!(
                    "🔄 Updated player_id in connection {} from {} to {}",
                    connection_id, old_player_id, new_player_id
                );
            } else {
                warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
            }
        });
    }
    Err(e) => {
        error!("❌ Failed to build runtime for update_player_id: {}", e);
        // Optionally, you could also log the event or any other context here
        return;
    }
}
```
Copilot AI (Dec 11, 2025):
The thread spawning with new runtime creation is inconsistent with the existing pattern used in other event handlers in this file. Lines 472-498 and 513-533 show that other on_core_async handlers use tokio::runtime::Handle::try_current() with block_on instead of spawning new threads. This approach is more efficient, avoids unnecessary thread creation, and follows the established codebase pattern for handling async work in event handlers.
Current:

```rust
// Spawn a dedicated thread with its own runtime to handle the async work
// This is necessary because on_core_async handlers don't have a guaranteed tokio runtime context
std::thread::spawn(move || {
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .expect("Failed to build runtime for update_player_id");
    rt.block_on(async move {
        // Get the connection_id for this player
        if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
            // Update the player_id stored in the connection
            conn_mgr.set_player_id(connection_id, new_player_id).await;
            info!(
                "🔄 Updated player_id in connection {} from {} to {}",
                connection_id, old_player_id, new_player_id
            );
        } else {
            warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
        }
    });
});
```

Suggested:

```rust
// Use the current Tokio runtime handle to block_on the async work, matching the pattern used in other handlers
match tokio::runtime::Handle::try_current() {
    Ok(handle) => {
        handle.block_on(async move {
            // Get the connection_id for this player
            if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
                // Update the player_id stored in the connection
                conn_mgr.set_player_id(connection_id, new_player_id).await;
                info!(
                    "🔄 Updated player_id in connection {} from {} to {}",
                    connection_id, old_player_id, new_player_id
                );
            } else {
                warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
            }
        });
    }
    Err(e) => {
        warn!("⚠️ No current Tokio runtime in update_player_id handler: {}", e);
    }
}
```
This actually won't work because there's no runtime guaranteed in this context. We do in fact need to spawn a dedicated one.
@ddurieux we should figure out what we need to do to create a global runtime for this. Creating one on each request is pretty intense. So, my thought is that we create a Luminal runtime somewhere central and pass in the handle to it?
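One way to wire up the "create the runtime once, pass the handle around" idea is a `OnceLock` initialized at server startup. This is a std-only sketch; `RuntimeHandle` is a hypothetical stand-in for whatever handle type the chosen runtime (tokio's `runtime::Handle`, or a Luminal equivalent) exposes, since those handles are cheap to clone and can be stashed the same way:

```rust
use std::sync::OnceLock;

// Hypothetical stand-in for a real runtime handle.
#[derive(Clone, Debug)]
struct RuntimeHandle {
    name: &'static str,
}

// One global slot, filled exactly once at server startup.
static RUNTIME: OnceLock<RuntimeHandle> = OnceLock::new();

/// Install the shared handle; returns false if one was already installed.
fn install_runtime(handle: RuntimeHandle) -> bool {
    RUNTIME.set(handle).is_ok()
}

/// Event handlers call this instead of building a runtime per event.
fn runtime() -> &'static RuntimeHandle {
    RUNTIME.get().expect("runtime not installed")
}

fn main() {
    install_runtime(RuntimeHandle { name: "shared-worker" });
    // Inside an event handler: clone the shared handle and submit the
    // async work to it, rather than spawning a thread + runtime per event.
    let h = runtime().clone();
    assert_eq!(h.name, "shared-worker");
}
```

Passing the handle through `GameServer` construction instead of a global would also work; the essential part is that runtime creation happens once, not per event.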
Copilot AI (Dec 11, 2025):
The update_player_id event handler lacks test coverage. Given that this feature involves critical player identity management with potential race conditions and state consistency concerns, tests should be added to verify: 1) successful player ID updates, 2) handling of non-existent old_player_id, 3) concurrent update scenarios, and 4) proper error handling for malformed events. Test examples can be found in auth_integration_tests.rs for similar event handlers.
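The concurrent-update scenario in particular can be expressed as a small std-only test. This sketch races two updaters against the same old id through a lock-guarded map (a stand-in for the real async connection state); with an atomic check-and-update, exactly one must win:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical atomic-update helper, mirroring the compare-and-swap
// approach discussed in the review: remove-then-insert under one lock.
fn update_player_id(map: &Mutex<HashMap<u64, u32>>, old: u64, new: u64) -> bool {
    let mut m = map.lock().unwrap();
    match m.remove(&old) {
        Some(conn) => {
            m.insert(new, conn);
            true
        }
        None => false,
    }
}

fn main() {
    // One connection, temporary player id 1.
    let map = Arc::new(Mutex::new(HashMap::from([(1u64, 42u32)])));

    // Two threads race to replace the same old id with different new ids.
    let a = {
        let m = Arc::clone(&map);
        thread::spawn(move || update_player_id(&m, 1, 100))
    };
    let b = {
        let m = Arc::clone(&map);
        thread::spawn(move || update_player_id(&m, 1, 200))
    };
    let (ra, rb) = (a.join().unwrap(), b.join().unwrap());

    // Exactly one concurrent update may succeed; the loser sees a miss.
    assert!(ra ^ rb);
}
```

The success, missing-old-id, and malformed-event cases from the list above are simpler variations on the same shape.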
Using serde_json::Value as the event type is inconsistent with other event handlers in this file, which use strongly-typed event structs (e.g., AuthenticationStatusSetEvent, AuthenticationStatusGetEvent). Consider creating a dedicated event struct like PlayerIdUpdateEvent with old_player_id and new_player_id fields in the horizon_event_system crate. This would provide type safety, better documentation, and consistency with the existing event architecture.
@ddurieux this may be worth doing.
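A minimal sketch of the proposed typed event, using std only: in the real crate the struct would derive serde's `Serialize`/`Deserialize` and use `horizon_event_system::PlayerId` rather than the `u64` and hand-rolled constructor assumed here:

```rust
// Hypothetical typed event replacing the raw serde_json::Value payload.
#[derive(Debug, Clone, PartialEq)]
struct PlayerIdUpdateEvent {
    old_player_id: u64,
    new_player_id: u64,
}

impl PlayerIdUpdateEvent {
    // Stand-in for serde deserialization: both fields are required, so a
    // malformed event is rejected before the handler body runs, instead of
    // being probed field-by-field inside it.
    fn from_fields(old: Option<u64>, new: Option<u64>) -> Option<Self> {
        Some(Self {
            old_player_id: old?,
            new_player_id: new?,
        })
    }
}

fn main() {
    assert!(PlayerIdUpdateEvent::from_fields(Some(1), Some(2)).is_some());
    // Missing field: the whole event is rejected up front.
    assert!(PlayerIdUpdateEvent::from_fields(None, Some(2)).is_none());
}
```

With a typed event, the handler signature becomes `move |event: PlayerIdUpdateEvent|` and the two `serde_json::from_value` calls and the `if let (Ok, Ok)` guard disappear.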