
[PATCH] lockdep: more unlock-on-error fixes

- error returns added after DEBUG_LOCKS_WARN_ON() in 3 places

- debug_locks check added after lookup_chain_cache() in
  __lock_acquire()

- locking added in __lock_acquire() for testing and changing the
  global variable max_lockdep_depth

From: Ingo Molnar <mingo@elte.hu>

My __acquire_lock() cleanup introduced a locking bug: on SMP systems we'd
release a non-owned graph lock. Fix this by moving the graph unlock back,
and by leaving the max_lockdep_depth variable update possibly racy. (we
don't care, it's just statistics)

Also add some minimal debugging code to graph_unlock()/graph_lock(),
which caught this locking bug.

Signed-off-by: Jarek Poplawski <jarkao2@o2.pl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Jarek Poplawski; committed by Linus Torvalds
381a2292 898552c9

+19 -4
kernel/lockdep.c
--- a/kernel/lockdep.c
+++ b/kernel/lockdep.c
@@ -70,6 +70,9 @@
 
 static inline int graph_unlock(void)
 {
+	if (debug_locks && !__raw_spin_is_locked(&lockdep_lock))
+		return DEBUG_LOCKS_WARN_ON(1);
+
 	__raw_spin_unlock(&lockdep_lock);
 	return 0;
 }
@@ -715,6 +712,9 @@
 	struct lock_list *entry;
 	int ret;
 
+	if (!__raw_spin_is_locked(&lockdep_lock))
+		return DEBUG_LOCKS_WARN_ON(1);
+
 	if (depth > max_recursion_depth)
 		max_recursion_depth = depth;
 	if (depth >= RECURSION_LIMIT)
@@ -1299,7 +1293,8 @@
 	if (!subclass || force)
 		lock->class_cache = class;
 
-	DEBUG_LOCKS_WARN_ON(class->subclass != subclass);
+	if (DEBUG_LOCKS_WARN_ON(class->subclass != subclass))
+		return NULL;
 
 	return class;
 }
@@ -1315,7 +1308,8 @@
 	struct list_head *hash_head = chainhashentry(chain_key);
 	struct lock_chain *chain;
 
-	DEBUG_LOCKS_WARN_ON(!irqs_disabled());
+	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+		return 0;
 	/*
 	 * We can walk it lock-free, because entries only get added
 	 * to the hash:
@@ -1402,7 +1394,9 @@
 		return;
 	}
 	id = hlock->class - lock_classes;
-	DEBUG_LOCKS_WARN_ON(id >= MAX_LOCKDEP_KEYS);
+	if (DEBUG_LOCKS_WARN_ON(id >= MAX_LOCKDEP_KEYS))
+		return;
+
 	if (prev_hlock && (prev_hlock->irq_context !=
 			hlock->irq_context))
 		chain_key = 0;
@@ -2215,7 +2205,11 @@
 		if (!check_prevs_add(curr, hlock))
 			return 0;
 		graph_unlock();
-	}
+	} else
+		/* after lookup_chain_cache(): */
+		if (unlikely(!debug_locks))
+			return 0;
+
 	curr->lockdep_depth++;
 	check_chain_key(curr);
 	if (unlikely(curr->lockdep_depth >= MAX_LOCK_DEPTH)) {
@@ -2228,6 +2214,7 @@
 		printk("turning off the locking correctness validator.\n");
 		return 0;
 	}
+
 	if (unlikely(curr->lockdep_depth > max_lockdep_depth))
 		max_lockdep_depth = curr->lockdep_depth;