CVE ID: CVE-2024-36003
Description:

In the Linux kernel, the following vulnerability has been resolved:

ice: fix LAG and VF lock dependency in ice_reset_vf()

Since commit 9f74a3dfcf83 ("ice: Fix VF Reset paths when interface in a
failed over aggregate"), the ice driver has acquired the LAG mutex in
ice_reset_vf(). The commit placed this lock acquisition just prior to the
acquisition of the VF configuration lock. If ice_reset_vf() acquires the
configuration lock via the ICE_VF_RESET_LOCK flag, this could deadlock with
ice_vc_cfg_qs_msg() because it always acquires the locks in the order of
the VF configuration lock and then the LAG mutex.

Lockdep reports this violation almost immediately on creating and then
removing 2 VFs:

======================================================
WARNING: possible circular locking dependency detected
6.8.0-rc6 #54 Tainted: G W O
------------------------------------------------------
kworker/60:3/6771 is trying to acquire lock:
ff40d43e099380a0 (&vf->cfg_lock){+.+.}-{3:3}, at: ice_reset_vf+0x22f/0x4d0 [ice]

but task is already holding lock:
ff40d43ea1961210 (&pf->lag_mutex){+.+.}-{3:3}, at: ice_reset_vf+0xb7/0x4d0 [ice]

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&pf->lag_mutex){+.+.}-{3:3}:
       __lock_acquire+0x4f8/0xb40
       lock_acquire+0xd4/0x2d0
       __mutex_lock+0x9b/0xbf0
       ice_vc_cfg_qs_msg+0x45/0x690 [ice]
       ice_vc_process_vf_msg+0x4f5/0x870 [ice]
       __ice_clean_ctrlq+0x2b5/0x600 [ice]
       ice_service_task+0x2c9/0x480 [ice]
       process_one_work+0x1e9/0x4d0
       worker_thread+0x1e1/0x3d0
       kthread+0x104/0x140
       ret_from_fork+0x31/0x50
       ret_from_fork_asm+0x1b/0x30

-> #0 (&vf->cfg_lock){+.+.}-{3:3}:
       check_prev_add+0xe2/0xc50
       validate_chain+0x558/0x800
       __lock_acquire+0x4f8/0xb40
       lock_acquire+0xd4/0x2d0
       __mutex_lock+0x9b/0xbf0
       ice_reset_vf+0x22f/0x4d0 [ice]
       ice_process_vflr_event+0x98/0xd0 [ice]
       ice_service_task+0x1cc/0x480 [ice]
       process_one_work+0x1e9/0x4d0
       worker_thread+0x1e1/0x3d0
       kthread+0x104/0x140
       ret_from_fork+0x31/0x50
       ret_from_fork_asm+0x1b/0x30

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pf->lag_mutex);
                               lock(&vf->cfg_lock);
                               lock(&pf->lag_mutex);
  lock(&vf->cfg_lock);

 *** DEADLOCK ***

4 locks held by kworker/60:3/6771:
 #0: ff40d43e05428b38 ((wq_completion)ice){+.+.}-{0:0}, at: process_one_work+0x176/0x4d0
 #1: ff50d06e05197e58 ((work_completion)(&pf->serv_task)){+.+.}-{0:0}, at: process_one_work+0x176/0x4d0
 #2: ff40d43ea1960e50 (&pf->vfs.table_lock){+.+.}-{3:3}, at: ice_process_vflr_event+0x48/0xd0 [ice]
 #3: ff40d43ea1961210 (&pf->lag_mutex){+.+.}-{3:3}, at: ice_reset_vf+0xb7/0x4d0 [ice]

stack backtrace:
CPU: 60 PID: 6771 Comm: kworker/60:3 Tainted: G W O 6.8.0-rc6 #54
Hardware name:
Workqueue: ice ice_service_task [ice]
Call Trace:

An illustrative sketch of the conflicting lock ordering follows the cross references below.
Test IDs: None available
Cross References:

Common Vulnerability Exposure (CVE) ID: CVE-2024-36003
https://git.kernel.org/stable/c/740717774dc37338404d10726967d582414f638c
https://git.kernel.org/stable/c/96fdd1f6b4ed72a741fb0eb705c0e13049b8721f
https://git.kernel.org/stable/c/de8631d8c9df08440268630200e64b623a5f69e6
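
The core of the lockdep report above is an AB/BA lock-ordering inversion: the pre-fix ice_reset_vf() path took pf->lag_mutex before vf->cfg_lock, while ice_vc_cfg_qs_msg() takes the same two locks in the opposite order. The following is a minimal userspace sketch, not ice driver code: plain pthreads, with the mutex names and the two functions chosen only as stand-ins for the kernel paths named in the report, to show how two threads acquiring the same pair of locks in opposite orders can deadlock.

/*
 * Illustrative sketch only (userspace pthreads, not the ice driver).
 * lag_mutex / cfg_lock mirror pf->lag_mutex and vf->cfg_lock from the
 * lockdep report; reset_path / cfg_qs_path mirror the two kernel paths.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lag_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cfg_lock  = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors the reported ice_reset_vf() ordering: lag_mutex, then cfg_lock. */
static void *reset_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lag_mutex);
    usleep(1000);                   /* widen the race window for the demo */
    pthread_mutex_lock(&cfg_lock);  /* blocks forever if the other thread
                                       already holds cfg_lock and is waiting
                                       for lag_mutex */
    pthread_mutex_unlock(&cfg_lock);
    pthread_mutex_unlock(&lag_mutex);
    return NULL;
}

/* Mirrors ice_vc_cfg_qs_msg(): cfg_lock first, then lag_mutex. */
static void *cfg_qs_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&cfg_lock);
    usleep(1000);
    pthread_mutex_lock(&lag_mutex); /* opposite order -> AB/BA deadlock */
    pthread_mutex_unlock(&lag_mutex);
    pthread_mutex_unlock(&cfg_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    /* Running both paths concurrently will typically hang, which is the
     * *** DEADLOCK *** scenario lockdep warns about in the report above. */
    pthread_create(&a, NULL, reset_path, NULL);
    pthread_create(&b, NULL, cfg_qs_path, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("finished (only reached if the deadlock did not occur)\n");
    return 0;
}

Built with cc -pthread, this sketch usually hangs with each thread holding one mutex and waiting for the other. Making both paths acquire the two locks in the same order removes the inversion, which is the general remedy for this class of deadlock and the direction the referenced upstream commits take for ice_reset_vf().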