CVE-2023-53151
md/raid10: prevent soft lockup while flush writes
Description
In the Linux kernel, the following vulnerability has been resolved: md/raid10: prevent soft lockup while flush writes. Currently, there is no limit on the number of plugged bios for raid1/raid10. While flushing writes, raid1 calls cond_resched() but raid10 does not, so too many writes can cause a soft lockup. The following soft lockup can easily be triggered with a writeback test for raid10 on ramdisks:
watchdog: BUG: soft lockup - CPU#10 stuck for 27s! [md0_raid10:1293]
Call Trace:
<TASK>
call_rcu+0x16/0x20
put_object+0x41/0x80
__delete_object+0x50/0x90
delete_object_full+0x2b/0x40
kmemleak_free+0x46/0xa0
slab_free_freelist_hook.constprop.0+0xed/0x1a0
kmem_cache_free+0xfd/0x300
mempool_free_slab+0x1f/0x30
mempool_free+0x3a/0x100
bio_free+0x59/0x80
bio_put+0xcf/0x2c0
free_r10bio+0xbf/0xf0
raid_end_bio_io+0x78/0xb0
one_write_done+0x8a/0xa0
raid10_end_write_request+0x1b4/0x430
bio_endio+0x175/0x320
brd_submit_bio+0x3b9/0x9b7 [brd]
__submit_bio+0x69/0xe0
submit_bio_noacct_nocheck+0x1e6/0x5a0
submit_bio_noacct+0x38c/0x7e0
flush_pending_writes+0xf0/0x240
raid10d+0xac/0x1ed0
Fix the problem by adding cond_resched() to raid10, as raid1 already does. Note that the unbounded plugged bio still needs to be optimized: for example, when many dirty pages are written back, it consumes a lot of memory and I/O spends a long time in the plug, so I/O latency suffers.
INFO
Published Date: Sept. 15, 2025, 2:15 p.m.
Last Modified: Sept. 15, 2025, 3:22 p.m.
Remotely Exploitable: No
Source: 416baaa9-dc9f-4396-8d5f-8c081fb06d67
Affected Products
The following products are affected by the CVE-2023-53151 vulnerability. Even if cvefeed.io is aware of the exact versions of the affected products, that information is not represented in the table below.
No affected products recorded yet.
Solution
- Apply the kernel patch for raid10 (see the referenced stable commits).
- Ensure cond_resched() is called while raid10 flushes plugged writes, as raid1 already does (see the sketch below).
- Longer term, limit the unbounded plugged bio, which otherwise consumes large amounts of memory during heavy dirty-page writeback.
- Reducing the time I/O spends in the plug also improves I/O latency during writeback.
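The authoritative fix is in the git.kernel.org commits listed in the change history below. As a rough illustration only, the sketch that follows shows the general shape of the plugged-write drain loop in drivers/md/raid10.c with the cond_resched() call the fix adds; names and surrounding plumbing are simplified assumptions based on the description, not the verbatim upstream patch.

/*
 * Simplified sketch of the bio drain loop in drivers/md/raid10.c
 * (flush_pending_writes() appears in the call trace above). This is an
 * approximation for illustration, not the verbatim upstream change; see
 * the referenced git.kernel.org commits for the authoritative patch.
 */
while (bio) {
	struct bio *next = bio->bi_next;

	bio->bi_next = NULL;
	submit_bio_noacct(bio);  /* on ramdisks the write completes synchronously here */
	bio = next;
	cond_resched();          /* added: yield periodically so a long flush cannot
	                          * trip the soft-lockup watchdog, as raid1 already does */
}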
References to Advisories, Solutions, and Tools
Here, you will find a curated list of external links that provide in-depth information, practical solutions, and valuable tools related to CVE-2023-53151.
CWE - Common Weakness Enumeration
While CVE identifies specific instances of vulnerabilities, CWE categorizes the common flaws or weaknesses that can lead to vulnerabilities. CVE-2023-53151 is associated with the following CWEs:
Common Attack Pattern Enumeration and Classification (CAPEC)
Common Attack Pattern Enumeration and Classification (CAPEC) stores attack patterns, which are descriptions of the common attributes and approaches employed by adversaries to exploit the CVE-2023-53151 weaknesses.
We scan GitHub repositories to detect new proof-of-concept exploits. The following list is a collection of public exploits and proof-of-concept code that has been published on GitHub (sorted by the most recently updated).
Results are limited to the first 15 repositories due to potential performance issues.
The following list contains news articles that mention the CVE-2023-53151 vulnerability anywhere in the article.
The following table lists the changes that have been made to the CVE-2023-53151 vulnerability over time.
Vulnerability history details can be useful for understanding the evolution of a vulnerability, and for identifying the most recent changes that may impact the vulnerability's severity, exploitability, or other characteristics.
- New CVE Received by 416baaa9-dc9f-4396-8d5f-8c081fb06d67 (Sep. 15, 2025)
  Added Description: In the Linux kernel, the following vulnerability has been resolved: md/raid10: prevent soft lockup while flush writes (full description as shown in the Description section above).
  Added Reference: https://git.kernel.org/stable/c/00ecb6fa67c0f772290c5ea5ae8b46eefd503b83
  Added Reference: https://git.kernel.org/stable/c/010444623e7f4da6b4a4dd603a7da7469981e293
  Added Reference: https://git.kernel.org/stable/c/1d467e10507167eb6dc2c281a87675b731955d86
  Added Reference: https://git.kernel.org/stable/c/634daf6b2c81015cc5e28bf694a6a94a50c641cd
  Added Reference: https://git.kernel.org/stable/c/84a578961b2566e475bfa8740beaf0abcc781a6f
  Added Reference: https://git.kernel.org/stable/c/d0345f7c7dbc5d42e4e6f1db99c1c1879d7b0eb5
  Added Reference: https://git.kernel.org/stable/c/f45b2fa7678ab385299de345f7e85d05caea386b
  Added Reference: https://git.kernel.org/stable/c/fbf50184190d55f8717bd29aa9530c399be96f30