Conversation

@blktests-ci blktests-ci bot commented Jan 8, 2026

Pull request for series with
subject: blk-cgroup: cleanup and bugfixs in blk-cgroup
version: 1
url: https://patchwork.kernel.org/project/linux-block/list/?series=1039644

blktests-ci bot commented Jan 8, 2026

Upstream branch: aacb0a6
series: https://patchwork.kernel.org/project/linux-block/list/?series=1039644
version: 1

blktests-ci bot commented Jan 9, 2026

Upstream branch: 623fb99
series: https://patchwork.kernel.org/project/linux-block/list/?series=1039644
version: 1

@blktests-ci blktests-ci bot force-pushed the series/1039644=>linus-master branch from 6c64a8b to e637731 on January 9, 2026 04:58
Zheng Qixing added 3 commits January 13, 2026 15:29
When switching an IO scheduler on a block device, blkcg_activate_policy()
allocates blkg_policy_data (pd) for all blkgs attached to the queue.
However, blkcg_activate_policy() may race with concurrent blkcg deletion,
leading to use-after-free and memory leak issues.

The use-after-free occurs in the following race:

T1 (blkcg_activate_policy):
  - Successfully allocates pd for blkg1 (loop0->queue, blkcgA)
  - Fails to allocate pd for blkg2 (loop0->queue, blkcgB)
  - Enters the enomem rollback path to release blkg1 resources

T2 (blkcg deletion):
  - blkcgA is deleted concurrently
  - blkg1 is freed via blkg_free_workfn()
  - blkg1->pd is freed

T1 (continued):
  - Rollback path accesses blkg1->pd->online after pd is freed
  - Triggers use-after-free

In addition, blkg_free_workfn() frees pd before removing the blkg from
q->blkg_list. This allows blkcg_activate_policy() to allocate a new pd
for a blkg that is being destroyed, leaving the newly allocated pd
unreachable when the blkg is finally freed.

Fix these races by extending blkcg_mutex coverage to serialize
blkcg_activate_policy() rollback and blkg destruction, ensuring pd
lifecycle is synchronized with blkg list visibility.

Link: https://lore.kernel.org/all/[email protected]/
Fixes: f1c006f ("blk-cgroup: synchronize pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy()")
Signed-off-by: Zheng Qixing <[email protected]>
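
For illustration, a minimal sketch of the locking scope this fix describes, assuming the upstream blk-cgroup names (q->blkcg_mutex, q->blkg_list, blkg->pd[], pd_offline_fn, pd_free_fn); the function name is hypothetical and this is not the actual diff:

  /*
   * Sketch of the enomem rollback in blkcg_activate_policy(): the whole
   * pd walk runs under q->blkcg_mutex, so blkg_free_workfn() can neither
   * free blkg->pd nor unlink the blkg from q->blkg_list concurrently.
   */
  static void policy_rollback_sketch(struct request_queue *q,
                                     const struct blkcg_policy *pol)
  {
          struct blkcg_gq *blkg;

          mutex_lock(&q->blkcg_mutex);    /* serialize with blkg destruction */
          spin_lock_irq(&q->queue_lock);
          list_for_each_entry(blkg, &q->blkg_list, q_node) {
                  struct blkg_policy_data *pd = blkg->pd[pol->plid];

                  if (!pd)
                          continue;
                  /* pd cannot be freed underneath us while blkcg_mutex is held */
                  if (pd->online && pol->pd_offline_fn)
                          pol->pd_offline_fn(pd);
                  if (pol->pd_free_fn)
                          pol->pd_free_fn(pd);
                  blkg->pd[pol->plid] = NULL;
          }
          spin_unlock_irq(&q->queue_lock);
          mutex_unlock(&q->blkcg_mutex);
  }

blkg_free_workfn() already runs its pd_free_fn() calls and the q->blkg_list removal under this mutex, so holding it across the rollback closes both the use-after-free and the leak window described above.
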
When switching IO schedulers on a block device, blkcg_activate_policy()
can race with concurrent blkcg deletion, leading to a use-after-free in
rcu_accelerate_cbs.

T1:                               T2:
                                  blkg_destroy
                                  kill(&blkg->refcnt) // blkg->refcnt=1->0
                                  blkg_release // call_rcu(__blkg_release)
                                  ...
                                  blkg_free_workfn
                                  ->pd_free_fn(pd)
elv_iosched_store
elevator_switch
...
iterate blkg list
blkg_get(blkg) // blkg->refcnt=0->1
                                  list_del_init(&blkg->q_node)
blkg_put(pinned_blkg) // blkg->refcnt=1->0
blkg_release // call_rcu again
rcu_accelerate_cbs // uaf

Fix this by replacing blkg_get() with blkg_tryget(), which fails if
the blkg's refcount has already reached zero. If blkg_tryget() fails,
skip processing this blkg since it's already being destroyed.

Link: https://lore.kernel.org/all/[email protected]/
Fixes: f1c006f ("blk-cgroup: synchronize pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy()")
Signed-off-by: Zheng Qixing <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
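
In concrete terms, the change amounts to the following pattern at the point where a blkg has to be pinned before dropping the queue lock for a blocking pd allocation (a simplified sketch, not the literal hunk; pinned_blkg is the local variable that keeps the blkg alive across the retry):

  /* before (racy): takes a reference even when the refcount has already
   * hit zero and blkg_release() has been queued via call_rcu() */
  blkg_get(blkg);
  pinned_blkg = blkg;

  /* after: pin the blkg only if it is still alive; a blkg whose refcount
   * is already zero is being destroyed, so skip it */
  if (!blkg_tryget(blkg))
          continue;
  pinned_blkg = blkg;

blkg_tryget() is built on percpu_ref_tryget(), so it fails in exactly the window shown in the diagram, where T2 has already dropped the last reference and queued the release.
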
Move the teardown sequence that offlines and frees per-policy
blkg_policy_data (pd) into a helper for readability.

No functional change intended.

Signed-off-by: Zheng Qixing <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Yu Kuai <[email protected]>
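
A sketch of the kind of helper this refactor introduces; the name and signature below are illustrative, not necessarily the ones used in the patch, but the offline-then-free sequence is the one being factored out:

  /*
   * Offline and free one policy's pd for a blkg.  Callers hold the locks
   * that already protect blkg->pd[] in the existing teardown paths.
   */
  static void blkg_pd_offline_and_free(struct blkcg_gq *blkg,
                                       const struct blkcg_policy *pol)
  {
          struct blkg_policy_data *pd = blkg->pd[pol->plid];

          if (!pd)
                  return;

          if (pd->online && pol->pd_offline_fn)
                  pol->pd_offline_fn(pd);
          pd->online = false;

          if (pol->pd_free_fn)
                  pol->pd_free_fn(pd);
          blkg->pd[pol->plid] = NULL;
  }

Factoring this out lets the teardown paths that currently open-code the sequence share a single copy, which is what keeps it a readability-only change.
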
blktests-ci bot commented Jan 13, 2026

Upstream branch: 623fb99
series: https://patchwork.kernel.org/project/linux-block/list/?series=1041576
version: 2

@blktests-ci blktests-ci bot added the V2 label and removed the V1 and V1-ci-pass labels on Jan 13, 2026
@blktests-ci blktests-ci bot force-pushed the series/1039644=>linus-master branch from e637731 to 1e4928d on January 13, 2026 06:29