crates/game_server/src/server/core.rs (45 additions, 0 deletions)

@@ -547,6 +547,51 @@ impl GameServer {
            .await
            .map_err(|e| ServerError::Internal(e.to_string()))?;

        // Register handler to update player_id in connection manager
        // This allows plugins to replace the temporary connection-level player_id
        // with a permanent database player_id after authentication
        let connection_manager_for_update = self.connection_manager.clone();
        self.horizon_event_system
            .on_core_async("update_player_id", move |event: serde_json::Value| {
Copilot AI (Dec 11, 2025):
Using serde_json::Value as the event type is inconsistent with the other event handlers in this file, which use strongly typed event structs (e.g., AuthenticationStatusSetEvent, AuthenticationStatusGetEvent). Consider creating a dedicated event struct such as PlayerIdUpdateEvent, with old_player_id and new_player_id fields, in the horizon_event_system crate. This would provide type safety, better documentation, and consistency with the existing event architecture.
Member:
@ddurieux this may be worth doing.
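For illustration, a minimal sketch of what such a typed event could look like. The struct name, derives, and u64 fields are assumptions here, since the real horizon_event_system::PlayerId type is not shown in this diff; in the actual crate the struct would also derive serde's Serialize/Deserialize for event transport.

```rust
// Hypothetical sketch of the suggested typed event for the
// horizon_event_system crate. The u64 fields are placeholders for the
// real horizon_event_system::PlayerId; the real struct would also
// derive serde::{Serialize, Deserialize} for event transport.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct PlayerIdUpdateEvent {
    pub old_player_id: u64,
    pub new_player_id: u64,
}

fn main() {
    // A handler registered for this event would receive the typed struct
    // directly instead of pattern-matching on serde_json::Value fields.
    let event = PlayerIdUpdateEvent { old_player_id: 1, new_player_id: 42 };
    assert_eq!(event.new_player_id, 42);
    println!("{:?}", event);
}
```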

                let conn_mgr = connection_manager_for_update.clone();

                info!("🔄 Received update_player_id event: {:?}", event);

                // Deserialize player IDs from the event
                let old_player_id = serde_json::from_value::<horizon_event_system::PlayerId>(event["old_player_id"].clone());
                let new_player_id = serde_json::from_value::<horizon_event_system::PlayerId>(event["new_player_id"].clone());

                if let (Ok(old_player_id), Ok(new_player_id)) = (old_player_id, new_player_id) {
                    // Spawn a dedicated thread with its own runtime to handle the async work
                    // This is necessary because on_core_async handlers don't have a guaranteed tokio runtime context
Copilot AI (Dec 11, 2025):
The comment "This is necessary because on_core_async handlers don't have a guaranteed tokio runtime context" appears to be incorrect. Other on_core_async handlers in this same file (lines 467-503 and 508-537) successfully use tokio::runtime::Handle::try_current() to access the current runtime context. This comment may mislead future maintainers about the necessity of the thread-spawning pattern.

Suggested change (delete this line):
    // This is necessary because on_core_async handlers don't have a guaranteed tokio runtime context
Member:
@ddurieux we should look into this and verify that the comment is correct. The other handlers may be doing something unstable if it is.

                    std::thread::spawn(move || {
Copilot AI (Dec 11, 2025):
The spawned thread is detached and its lifecycle is not tracked. If many update events are received rapidly, this could spawn numerous unmanaged threads. The thread handle is discarded, so there's no way to wait for completion or handle errors that occur after spawning. Consider tracking spawned threads or using a task queue/thread pool pattern.
Member:
@ddurieux we should be able to take care of this with my earlier comment about using Luminal as our asynchronous runtime for this particular task.

                        let rt = tokio::runtime::Builder::new_current_thread()
                            .enable_all()
                            .build()
                            .expect("Failed to build runtime for update_player_id");

                        rt.block_on(async move {
                            // Get the connection_id for this player
                            if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
                                // Update the player_id stored in the connection
                                conn_mgr.set_player_id(connection_id, new_player_id).await;
                                info!(
                                    "🔄 Updated player_id in connection {} from {} to {}",
                                    connection_id, old_player_id, new_player_id
                                );
                            } else {
                                warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
Copilot AI (Dec 11, 2025), on lines +574 to +583:
There's a potential race condition in the player ID update sequence. The lookup and update are not atomic: between checking whether a connection exists for old_player_id and calling set_player_id, another thread could disconnect the player or modify the connection state. This could lead to the new player ID being assigned to a stale or incorrect connection. Consider implementing an atomic update operation in the ConnectionManager that verifies the old player ID matches before updating, similar to a compare-and-swap pattern.

Suggested change, replacing:
    // Get the connection_id for this player
    if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
        // Update the player_id stored in the connection
        conn_mgr.set_player_id(connection_id, new_player_id).await;
        info!(
            "🔄 Updated player_id in connection {} from {} to {}",
            connection_id, old_player_id, new_player_id
        );
    } else {
        warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);

with:
    // Atomically update the player_id if the old_player_id is still present
    let swapped = conn_mgr.compare_and_swap_player_id(old_player_id, new_player_id).await;
    if swapped {
        info!(
            "🔄 Atomically updated player_id from {} to {}",
            old_player_id, new_player_id
        );
    } else {
        warn!("⚠️ Failed to atomically update player_id from {} to {} (connection may have been removed or changed)", old_player_id, new_player_id);
Member:
@ddurieux this also seems like a pretty reasonable change to include.
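As a rough, synchronous sketch of the compare-and-swap idea: the ConnectionManager below is a simplified stand-in with placeholder u32/u64 id types, since the real manager is async and its storage is not shown in this diff. The key point is that the lookup and the update happen under one lock, so no other thread can interleave between them.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical, simplified stand-in for the real ConnectionManager:
// one lock guards the connection -> player map, so the lookup and the
// update happen atomically under the same lock. Id types are placeholders.
struct ConnectionManager {
    players: Mutex<HashMap<u32, u64>>, // connection_id -> player_id
}

impl ConnectionManager {
    /// Atomically replaces `old` with `new`; returns true only if `old`
    /// was still present at the moment of the swap.
    fn compare_and_swap_player_id(&self, old: u64, new: u64) -> bool {
        let mut players = self.players.lock().unwrap();
        let conn_id = players
            .iter()
            .find(|entry| *entry.1 == old)
            .map(|entry| *entry.0);
        match conn_id {
            Some(c) => {
                players.insert(c, new);
                true
            }
            None => false,
        }
    }
}

fn main() {
    let mgr = ConnectionManager {
        players: Mutex::new(HashMap::from([(7u32, 1u64)])),
    };
    assert!(mgr.compare_and_swap_player_id(1, 42)); // old id present: swapped
    assert!(!mgr.compare_and_swap_player_id(1, 99)); // old id gone: no-op
    println!("{:?}", mgr.players.lock().unwrap());
}
```

In the real async manager the same shape applies: take the write lock once, verify old_player_id, and update within that single critical section.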

                            }
                        });
Copilot AI (Dec 11, 2025), on lines +568 to +585:
The panic message "Failed to build runtime for update_player_id" doesn't provide helpful context for debugging. If runtime creation fails (e.g., due to resource exhaustion), this will crash the entire server. Consider logging an error instead and returning gracefully, as the event handler should be resilient to failures.

Suggested change, replacing:
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .expect("Failed to build runtime for update_player_id");
    rt.block_on(async move {
        // Get the connection_id for this player
        if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
            // Update the player_id stored in the connection
            conn_mgr.set_player_id(connection_id, new_player_id).await;
            info!(
                "🔄 Updated player_id in connection {} from {} to {}",
                connection_id, old_player_id, new_player_id
            );
        } else {
            warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
        }
    });

with:
    let rt_result = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build();
    match rt_result {
        Ok(rt) => {
            rt.block_on(async move {
                // Get the connection_id for this player
                if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
                    // Update the player_id stored in the connection
                    conn_mgr.set_player_id(connection_id, new_player_id).await;
                    info!(
                        "🔄 Updated player_id in connection {} from {} to {}",
                        connection_id, old_player_id, new_player_id
                    );
                } else {
                    warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
                }
            });
        }
        Err(e) => {
            error!("❌ Failed to build runtime for update_player_id: {}", e);
            // Optionally, you could also log the event or any other context here
            return;
        }
    }
                    });
Copilot AI (Dec 11, 2025), on lines +565 to +586:
The thread spawning with new runtime creation is inconsistent with the existing pattern used in other event handlers in this file. Lines 472-498 and 513-533 show that other on_core_async handlers use tokio::runtime::Handle::try_current() with block_on instead of spawning new threads. This approach is more efficient, avoids unnecessary thread creation, and follows the established codebase pattern for handling async work in event handlers.

Suggested change, replacing:
    // Spawn a dedicated thread with its own runtime to handle the async work
    // This is necessary because on_core_async handlers don't have a guaranteed tokio runtime context
    std::thread::spawn(move || {
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .expect("Failed to build runtime for update_player_id");
        rt.block_on(async move {
            // Get the connection_id for this player
            if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
                // Update the player_id stored in the connection
                conn_mgr.set_player_id(connection_id, new_player_id).await;
                info!(
                    "🔄 Updated player_id in connection {} from {} to {}",
                    connection_id, old_player_id, new_player_id
                );
            } else {
                warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
            }
        });
    });

with:
    // Use the current Tokio runtime handle to block_on the async work, matching the pattern used in other handlers
    match tokio::runtime::Handle::try_current() {
        Ok(handle) => {
            handle.block_on(async move {
                // Get the connection_id for this player
                if let Some(connection_id) = conn_mgr.get_connection_id_by_player(old_player_id).await {
                    // Update the player_id stored in the connection
                    conn_mgr.set_player_id(connection_id, new_player_id).await;
                    info!(
                        "🔄 Updated player_id in connection {} from {} to {}",
                        connection_id, old_player_id, new_player_id
                    );
                } else {
                    warn!("⚠️ Failed to find connection for player {} when updating player_id", old_player_id);
                }
            });
        }
        Err(e) => {
            warn!("⚠️ No current Tokio runtime in update_player_id handler: {}", e);
        }
    }
Member:
This actually won't work because there's no runtime guaranteed in this context. We do in fact need to spawn a dedicated one.

Member:
@ddurieux we should figure out what we need to do to create a global runtime for this. Creating one on each request is pretty intense. So, my thought is that we create a Luminal runtime somewhere central and we pass in the handle to it?
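One stdlib-only sketch of the "create it once, hand out a handle" shape being discussed. This is not the Luminal design itself: the worker here runs plain closures, where the real version would own the long-lived async runtime and block_on each job, but the lifecycle is the same, with one lazily created worker shared by all handlers instead of a fresh thread and runtime per event.

```rust
use std::sync::mpsc::{self, Sender};
use std::sync::{Mutex, OnceLock};
use std::thread;

// Hypothetical sketch: one shared worker thread, created lazily exactly
// once, instead of a fresh thread plus runtime per update_player_id event.
type Job = Box<dyn FnOnce() + Send + 'static>;

static WORKER: OnceLock<Mutex<Sender<Job>>> = OnceLock::new();

fn worker_handle() -> &'static Mutex<Sender<Job>> {
    WORKER.get_or_init(|| {
        let (tx, rx) = mpsc::channel::<Job>();
        thread::spawn(move || {
            // A single thread drains jobs for the lifetime of the process.
            for job in rx {
                job();
            }
        });
        Mutex::new(tx)
    })
}

fn main() {
    let (done_tx, done_rx) = mpsc::channel();
    // An event handler would enqueue its work here rather than spawning
    // a dedicated thread and building a brand-new runtime every time.
    worker_handle()
        .lock()
        .unwrap()
        .send(Box::new(move || {
            done_tx.send(42u64).unwrap();
        }))
        .unwrap();
    assert_eq!(done_rx.recv().unwrap(), 42);
    println!("job ran on the shared worker");
}
```

This also resolves the earlier detached-thread concern, since there is exactly one worker thread whose queue naturally serializes and bounds the work.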

                } else {
                    warn!("⚠️ Failed to deserialize player IDs from update_player_id event: {:?}", event);
                }

                Ok(())
            })
            .await
            .map_err(|e| ServerError::Internal(e.to_string()))?;
Copilot AI (Dec 11, 2025), on lines +550 to +594:
The update_player_id event handler lacks test coverage. Given that this feature involves critical player identity management with potential race conditions and state consistency concerns, tests should be added to verify: 1) successful player ID updates, 2) handling of a non-existent old_player_id, 3) concurrent update scenarios, and 4) proper error handling for malformed events. Test examples can be found in auth_integration_tests.rs for similar event handlers.

        // Register a simple ping handler for testing validity of the client connection
        self.horizon_event_system