CVE-2024-53054
Linux Kernel: Cgroup/BPF Deadlock
Description

Rejected reason: This CVE ID has been rejected or withdrawn by its CVE Numbering Authority.

INFO

Published Date : Nov. 19, 2024, 6:15 p.m.

Last Modified : Nov. 28, 2024, 5:15 p.m.

Source : 416baaa9-dc9f-4396-8d5f-8c081fb06d67

Remotely Exploitable : No

Impact Score : N/A (withdrawn with the NIST CVSS assessment when the CVE was rejected)

Exploitability Score : N/A (withdrawn with the NIST CVSS assessment when the CVE was rejected)

Affected Products

The following products are affected by the CVE-2024-53054 vulnerability. Even where cvefeed.io knows the exact affected versions, that information is not represented in the table below.

ID  Vendor  Product
1   Linux   linux_kernel
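
The affected version ranges come from the NIST CPE configuration recorded in the change history below. As an illustration (not part of the original record), a minimal C sketch that checks a kernel version against those ranges; the ranges are hard-coded from the CPE data, and the 6.12 release candidates listed there are ignored for brevity:

/* Check a kernel version against the affected ranges from the
 * (later removed) NIST CPE configuration for CVE-2024-53054:
 *   [5.3, 6.1.116), [6.2, 6.6.60), [6.7, 6.11.7)
 * Build: cc check.c */
#include <stdio.h>

struct kver { int major, minor, patch; };

/* Lexicographic compare of version triples: <0, 0, >0. */
static int cmp(struct kver a, struct kver b)
{
    if (a.major != b.major) return a.major - b.major;
    if (a.minor != b.minor) return a.minor - b.minor;
    return a.patch - b.patch;
}

static int affected(struct kver v)
{
    static const struct kver ranges[][2] = {
        { {5, 3, 0}, {6, 1, 116} }, /* from (incl.) 5.3 up to (excl.) 6.1.116 */
        { {6, 2, 0}, {6, 6, 60}  }, /* from (incl.) 6.2 up to (excl.) 6.6.60  */
        { {6, 7, 0}, {6, 11, 7}  }, /* from (incl.) 6.7 up to (excl.) 6.11.7  */
    };
    for (size_t i = 0; i < sizeof ranges / sizeof ranges[0]; i++)
        if (cmp(v, ranges[i][0]) >= 0 && cmp(v, ranges[i][1]) < 0)
            return 1;
    return 0;
}

int main(void)
{
    struct kver v = {6, 6, 59}; /* example: 6.6.59 falls inside [6.2, 6.6.60) */
    printf("6.6.59 affected: %s\n", affected(v) ? "yes" : "no");
    return 0;
}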

We scan GitHub repositories to detect new proof-of-concept exploits. The following list is a collection of public exploits and proofs of concept that have been published on GitHub (sorted by most recently updated).

Results are limited to the first 15 repositories due to potential performance issues.

The following is a list of news articles that mention the CVE-2024-53054 vulnerability anywhere in the article.

The following table lists the changes that have been made to the CVE-2024-53054 vulnerability over time.

Vulnerability history details can be useful for understanding the evolution of a vulnerability, and for identifying the most recent changes that may impact the vulnerability's severity, exploitability, or other characteristics.

  • CVE Rejected by 416baaa9-dc9f-4396-8d5f-8c081fb06d67

    Nov. 28, 2024

Action Type Old Value New Value
(no field-level changes recorded for this event)
  • CVE Modified by 416baaa9-dc9f-4396-8d5f-8c081fb06d67

    Nov. 28, 2024

    Action Type Old Value New Value
Changed Description (new value: the kernel commit text with the rejection notice appended):

In the Linux kernel, the following vulnerability has been resolved:

cgroup/bpf: use a dedicated workqueue for cgroup bpf destruction

A hung_task problem shown below was found:

INFO: task kworker/0:0:8 blocked for more than 327 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Workqueue: events cgroup_bpf_release
Call Trace:
<TASK>
__schedule+0x5a2/0x2050
? find_held_lock+0x33/0x100
? wq_worker_sleeping+0x9e/0xe0
schedule+0x9f/0x180
schedule_preempt_disabled+0x25/0x50
__mutex_lock+0x512/0x740
? cgroup_bpf_release+0x1e/0x4d0
? cgroup_bpf_release+0xcf/0x4d0
? process_scheduled_works+0x161/0x8a0
? cgroup_bpf_release+0x1e/0x4d0
? mutex_lock_nested+0x2b/0x40
? __pfx_delay_tsc+0x10/0x10
mutex_lock_nested+0x2b/0x40
cgroup_bpf_release+0xcf/0x4d0
? process_scheduled_works+0x161/0x8a0
? trace_event_raw_event_workqueue_execute_start+0x64/0xd0
? process_scheduled_works+0x161/0x8a0
process_scheduled_works+0x23a/0x8a0
worker_thread+0x231/0x5b0
? __pfx_worker_thread+0x10/0x10
kthread+0x14d/0x1c0
? __pfx_kthread+0x10/0x10
ret_from_fork+0x59/0x70
? __pfx_kthread+0x10/0x10
ret_from_fork_asm+0x1b/0x30
</TASK>

This issue can be reproduced by the following pressure test:
1. A large number of cpuset cgroups are deleted.
2. CPUs are set on and off repeatedly.
3. watchdog_thresh is set repeatedly.
The scripts can be obtained at LINK mentioned above the signature.

The reason for this issue is that cgroup_mutex and cpu_hotplug_lock are acquired in different tasks, which may lead to a deadlock through the following steps:

1. A large number of cpusets are deleted asynchronously, which puts a large number of cgroup_bpf_release works into system_wq. The max_active of system_wq is WQ_DFL_ACTIVE (256). Consequently, all active works are cgroup_bpf_release works, and many more cgroup_bpf_release works are put into the inactive queue. As illustrated in the diagram, there are 256 works in the active queue plus n in the inactive queue.

2. Setting watchdog_thresh holds cpu_hotplug_lock.read and puts an smp_call_on_cpu work into system_wq. However, step 1 has already filled system_wq, so 'sscs.work' is put into the inactive queue. 'sscs.work' has to wait until the works that were queued earlier (the n cgroup_bpf_release works) have executed, so it is blocked for a while.

3. A CPU-offline operation requires cpu_hotplug_lock.write, which is blocked by step 2.

4. The cpusets deleted in step 1 put cgroup_release works into cgroup_destroy_wq, which compete for cgroup_mutex all the time. When cgroup_mutex is acquired by the work at css_killed_work_fn, it calls cpuset_css_offline, which needs to acquire cpu_hotplug_lock.read. However, cpuset_css_offline is blocked by step 3.

5. At this moment, the 256 cgroup_bpf_release works in the active queue are all attempting to acquire cgroup_mutex, so all of them are blocked, and sscs.work cannot be executed. Ultimately, four processes are blocked, forming a deadlock:

system_wq (step 1):
  ... 2000+ cgroups deleted async
  256 active + n inactive works
  256 + n + 1 (sscs.work)
  256 cgroup_bpf_release works: mutex_lock(&cgroup_mutex); ...blocking...
WatchDog (step 2):
  __lockup_detector_reconfigure
  P(cpu_hotplug_lock.read)
  put sscs.work into system_wq
  sscs.work waits to be executed; waiting for sscs.work to finish
cpu offline (step 3):
  percpu_down_write
  P(cpu_hotplug_lock.write)
  ...blocking...
cgroup_destroy_wq (step 4):
  css_killed_work_fn
  P(cgroup_mutex)
  cpuset_css_offline
  P(cpu_hotplug_lock.read)
  ...blocking...

To fix the problem, place cgroup_bpf_release works on a dedicated workqueue, which breaks the loop and solves the problem. System workqueues are for miscellaneous things which shouldn't create a large number of concurrent work items. If something is going to generate > ---truncated---

Rejected reason: This CVE ID has been rejected or withdrawn by its CVE Numbering Authority.

(A C sketch of the dedicated-workqueue fix appears after this change history.)
    Removed CVSS V3.1 NIST: AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
    Removed CWE NIST: CWE-667
Removed CPE Configuration 2532240 Config Identifier: 0, OR
  *cpe:2.3:o:linux:linux_kernel:6.12:rc1:*:*:*:*:*:*
  *cpe:2.3:o:linux:linux_kernel:6.12:rc2:*:*:*:*:*:*
  *cpe:2.3:o:linux:linux_kernel:6.12:rc4:*:*:*:*:*:*
  *cpe:2.3:o:linux:linux_kernel:6.12:rc3:*:*:*:*:*:*
  *cpe:2.3:o:linux:linux_kernel:6.12:rc5:*:*:*:*:*:*
  *cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* versions from (including) 6.7 up to (excluding) 6.11.7
  *cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* versions from (including) 6.2 up to (excluding) 6.6.60
  *cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* versions from (including) 5.3 up to (excluding) 6.1.116
    Removed Reference kernel.org: https://git.kernel.org/stable/c/0d86cd70fc6a7ba18becb52ad8334d5ad3eca530
    Removed Reference kernel.org: https://git.kernel.org/stable/c/117932eea99b729ee5d12783601a4f7f5fd58a23
    Removed Reference kernel.org: https://git.kernel.org/stable/c/6dab3331523ba73db1345d19e6f586dcd5f6efb4
    Removed Reference kernel.org: https://git.kernel.org/stable/c/71f14a9f5c7db72fdbc56e667d4ed42a1a760494
    Removed Reference Type kernel.org: https://git.kernel.org/stable/c/0d86cd70fc6a7ba18becb52ad8334d5ad3eca530 Types: Patch
    Removed Reference Type kernel.org: https://git.kernel.org/stable/c/117932eea99b729ee5d12783601a4f7f5fd58a23 Types: Patch
    Removed Reference Type kernel.org: https://git.kernel.org/stable/c/6dab3331523ba73db1345d19e6f586dcd5f6efb4 Types: Patch
    Removed Reference Type kernel.org: https://git.kernel.org/stable/c/71f14a9f5c7db72fdbc56e667d4ed42a1a760494 Types: Patch
  • Initial Analysis by [email protected]

    Nov. 22, 2024

    Action Type Old Value New Value
    Added CVSS V3.1 NIST AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
    Added CWE NIST CWE-667
Added CPE Configuration OR
  *cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* versions from (including) 5.3 up to (excluding) 6.1.116
  *cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* versions from (including) 6.2 up to (excluding) 6.6.60
  *cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:* versions from (including) 6.7 up to (excluding) 6.11.7
  *cpe:2.3:o:linux:linux_kernel:6.12:rc1:*:*:*:*:*:*
  *cpe:2.3:o:linux:linux_kernel:6.12:rc2:*:*:*:*:*:*
  *cpe:2.3:o:linux:linux_kernel:6.12:rc3:*:*:*:*:*:*
  *cpe:2.3:o:linux:linux_kernel:6.12:rc4:*:*:*:*:*:*
  *cpe:2.3:o:linux:linux_kernel:6.12:rc5:*:*:*:*:*:*
Changed Reference Type https://git.kernel.org/stable/c/0d86cd70fc6a7ba18becb52ad8334d5ad3eca530: No Types Assigned -> Patch
Changed Reference Type https://git.kernel.org/stable/c/117932eea99b729ee5d12783601a4f7f5fd58a23: No Types Assigned -> Patch
Changed Reference Type https://git.kernel.org/stable/c/6dab3331523ba73db1345d19e6f586dcd5f6efb4: No Types Assigned -> Patch
Changed Reference Type https://git.kernel.org/stable/c/71f14a9f5c7db72fdbc56e667d4ed42a1a760494: No Types Assigned -> Patch
  • CVE Received by 416baaa9-dc9f-4396-8d5f-8c081fb06d67

    Nov. 19, 2024

    Action Type Old Value New Value
Added Description: the full kernel commit text quoted under the Nov. 28 "CVE Modified" entry above (identical except for the rejection notice, which had not yet been appended).
    Added Reference kernel.org https://git.kernel.org/stable/c/71f14a9f5c7db72fdbc56e667d4ed42a1a760494 [No types assigned]
    Added Reference kernel.org https://git.kernel.org/stable/c/0d86cd70fc6a7ba18becb52ad8334d5ad3eca530 [No types assigned]
    Added Reference kernel.org https://git.kernel.org/stable/c/6dab3331523ba73db1345d19e6f586dcd5f6efb4 [No types assigned]
    Added Reference kernel.org https://git.kernel.org/stable/c/117932eea99b729ee5d12783601a4f7f5fd58a23 [No types assigned]
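
The kernel.org commits referenced above implement the remedy named in the truncated description: cgroup bpf destruction moves off system_wq onto a dedicated workqueue. The following C sketch condenses that approach from the commit text quoted above; it is an illustration, not the verbatim patch, and the surrounding cgroup code (including the release callback) is omitted:

/*
 * Sketch of the dedicated-workqueue fix for CVE-2024-53054: give
 * cgroup bpf destruction its own workqueue so that a flood of
 * cgroup_bpf_release works can no longer occupy every active slot
 * of system_wq and starve unrelated items such as sscs.work.
 */
#include <linux/workqueue.h>
#include <linux/init.h>

static struct workqueue_struct *cgroup_bpf_destroy_wq;

static int __init cgroup_bpf_wq_init(void)
{
	/*
	 * max_active = 1: the destruction works serialize on
	 * cgroup_mutex anyway, so extra concurrency buys nothing.
	 */
	cgroup_bpf_destroy_wq = alloc_workqueue("cgroup_bpf_destroy", 0, 1);
	if (!cgroup_bpf_destroy_wq)
		panic("Failed to alloc workqueue for cgroup bpf destroy.\n");
	return 0;
}
core_initcall(cgroup_bpf_wq_init);

/*
 * The release callback then queues the destruction work on the
 * dedicated workqueue instead of system_wq, i.e. roughly:
 *
 *	- queue_work(system_wq, &cgrp->bpf.release_work);
 *	+ queue_work(cgroup_bpf_destroy_wq, &cgrp->bpf.release_work);
 */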
EPSS is a daily estimate of the probability of exploitation activity being observed over the next 30 days. The following chart shows the EPSS score history of this vulnerability.
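
For readers who want the raw numbers rather than the chart, EPSS scores are also published through the public FIRST.org API. The following minimal C sketch (not part of the original record) fetches the score for this CVE with libcurl and prints the raw JSON response; parsing is left out to keep it short, and libcurl is assumed to be installed:

/* Fetch the current EPSS record for CVE-2024-53054 from the FIRST.org
 * API. Build: cc epss.c -lcurl
 * The response is JSON shaped roughly like
 * {"data":[{"cve":"CVE-2024-53054","epss":"...","percentile":"..."}]}. */
#include <stdio.h>
#include <curl/curl.h>

/* Write callback: stream each chunk of the HTTP body to stdout. */
static size_t on_body(char *data, size_t size, size_t nmemb, void *userdata)
{
    (void)userdata;
    return fwrite(data, size, nmemb, stdout);
}

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://api.first.org/data/v1/epss?cve=CVE-2024-53054");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}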
CWE - Common Weakness Enumeration

While CVE identifies specific instances of vulnerabilities, CWE categorizes the common flaws or weaknesses that can lead to vulnerabilities. CVE-2024-53054 was associated with CWE-667 (Improper Locking) until the record was rejected, at which point the CWE mapping was removed (see the change history above).

Common Attack Pattern Enumeration and Classification (CAPEC)

Common Attack Pattern Enumeration and Classification (CAPEC) stores attack patterns, which are descriptions of the common attributes and approaches employed by adversaries to exploit the CVE-2024-53054 weaknesses.

NONE - Vulnerability Scoring System (the NIST CVSS 3.1 assessment was removed when the CVE was rejected, so no current score is assigned)
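
Although no score is currently assigned, the withdrawn NIST vector AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H (see the change history above) can still be evaluated by hand: under the CVSS 3.1 formulas it comes out to 5.5 (Medium). As a worked example, the following C sketch hard-codes the spec weights for exactly this vector; a general scorer would map every metric value to its weight instead:

/* CVSS 3.1 base score for AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H,
 * using the formulas and metric weights from the CVSS 3.1 spec.
 * Only this one vector is handled. Build: cc cvss.c -lm */
#include <math.h>
#include <stdio.h>

/* Roundup as defined by the spec: the smallest value with one
 * decimal place that is equal to or greater than the input. */
static double roundup(double x)
{
    long i = lround(x * 100000.0);
    if (i % 10000 == 0)
        return i / 100000.0;
    return (i / 10000 + 1) / 10.0;
}

int main(void)
{
    /* Weights for this vector (scope unchanged):
     * AV:L = 0.55, AC:L = 0.77, PR:L = 0.62, UI:N = 0.85,
     * C:N = 0.0, I:N = 0.0, A:H = 0.56 */
    double av = 0.55, ac = 0.77, pr = 0.62, ui = 0.85;
    double c = 0.0, in = 0.0, a = 0.56;

    double iss = 1.0 - (1.0 - c) * (1.0 - in) * (1.0 - a);
    double impact = 6.42 * iss;                /* scope unchanged */
    double expl = 8.22 * av * ac * pr * ui;
    double base = impact <= 0 ? 0.0
                              : roundup(fmin(impact + expl, 10.0));

    printf("impact=%.4f exploitability=%.4f base=%.1f\n",
           impact, expl, base); /* expect base=5.5 (Medium) */
    return 0;
}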