Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


locking/lockdep: Fix stack trace caching logic

check_prev_add() caches the saved stack trace in a static trace variable
to avoid duplicate save_trace() calls in dependency chains involving trylocks.
But that caching logic contains a bug: we may not save the trace on the
first iteration because check_prev_add() returns early, and then on the
second iteration, when we actually need the trace, we skip saving it
because we believe it has already been saved.

Let check_prev_add() itself control when the stack trace is saved.

There is another bug: the trace variable is protected by the graph lock,
but the graph lock can be temporarily released during printing.

Fix this by invalidating the cached stack trace whenever the graph lock
is released.

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: glider@google.com
Cc: kcc@google.com
Cc: peter@hurleysoftware.com
Cc: sasha.levin@oracle.com
Link: http://lkml.kernel.org/r/1454593240-121647-1-git-send-email-dvyukov@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Authored by Dmitry Vyukov, committed by Ingo Molnar
8a5fd564 765bdb40

+10 -6
kernel/locking/lockdep.c
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1822,7 +1822,7 @@
  */
 static int
 check_prev_add(struct task_struct *curr, struct held_lock *prev,
-	       struct held_lock *next, int distance, int trylock_loop)
+	       struct held_lock *next, int distance, int *stack_saved)
 {
 	struct lock_list *entry;
 	int ret;
@@ -1883,8 +1883,11 @@
 		}
 	}
 
-	if (!trylock_loop && !save_trace(&trace))
-		return 0;
+	if (!*stack_saved) {
+		if (!save_trace(&trace))
+			return 0;
+		*stack_saved = 1;
+	}
 
 	/*
 	 * Ok, all validations passed, add the new lock
@@ -1907,6 +1910,8 @@
 	 * Debugging printouts:
 	 */
 	if (verbose(hlock_class(prev)) || verbose(hlock_class(next))) {
+		/* We drop graph lock, so another thread can overwrite trace. */
+		*stack_saved = 0;
 		graph_unlock();
 		printk("\n new dependency: ");
 		print_lock_name(hlock_class(prev));
@@ -1929,7 +1934,7 @@
 check_prevs_add(struct task_struct *curr, struct held_lock *next)
 {
 	int depth = curr->lockdep_depth;
-	int trylock_loop = 0;
+	int stack_saved = 0;
 	struct held_lock *hlock;
 
 	/*
@@ -1956,7 +1961,7 @@
 	 */
 	if (hlock->read != 2 && hlock->check) {
 		if (!check_prev_add(curr, hlock, next,
-				    distance, trylock_loop))
+				    distance, &stack_saved))
 			return 0;
 		/*
 		 * Stop after the first non-trylock entry,
@@ -1979,7 +1984,6 @@
 		if (curr->held_locks[depth].irq_context !=
 		    curr->held_locks[depth-1].irq_context)
 			break;
-		trylock_loop = 1;
 	}
 	return 1;
 out_bug: