Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

crypto: cryptd - Protect per-CPU resource by disabling BH.

The access to cryptd_queue::cpu_queue is synchronized by disabling
preemption in cryptd_enqueue_request() and disabling BH in
cryptd_queue_worker(). This implies that access is allowed from BH.

If cryptd_enqueue_request() is invoked from both preemptible context _and_
soft interrupt on the same CPU, the queue list can be corrupted, because
disabling preemption does not protect cryptd_enqueue_request() against
concurrent access from soft interrupt.

Replace get_cpu() in cryptd_enqueue_request() with local_bh_disable()
to ensure BH is always disabled.
Remove preempt_disable() from cryptd_queue_worker(); it is no longer
needed, as local_bh_disable() already provides the required synchronisation.

Fixes: 254eff771441 ("crypto: cryptd - Per-CPU thread implementation...")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

authored by Sebastian Andrzej Siewior, committed by Herbert Xu
91e8bcd7 42a01af3

+11 -12
crypto/cryptd.c
@@ -39,6 +39,10 @@
 };
 
 struct cryptd_queue {
+	/*
+	 * Protected by disabling BH to allow enqueueing from softinterrupt and
+	 * dequeuing from kworker (cryptd_queue_worker()).
+	 */
 	struct cryptd_cpu_queue __percpu *cpu_queue;
 };
 
@@ -129,28 +125,28 @@
 static int cryptd_enqueue_request(struct cryptd_queue *queue,
 				  struct crypto_async_request *request)
 {
-	int cpu, err;
+	int err;
 	struct cryptd_cpu_queue *cpu_queue;
 	refcount_t *refcnt;
 
-	cpu = get_cpu();
+	local_bh_disable();
 	cpu_queue = this_cpu_ptr(queue->cpu_queue);
 	err = crypto_enqueue_request(&cpu_queue->queue, request);
 
 	refcnt = crypto_tfm_ctx(request->tfm);
 
 	if (err == -ENOSPC)
-		goto out_put_cpu;
+		goto out;
 
-	queue_work_on(cpu, cryptd_wq, &cpu_queue->work);
+	queue_work_on(smp_processor_id(), cryptd_wq, &cpu_queue->work);
 
 	if (!refcount_read(refcnt))
-		goto out_put_cpu;
+		goto out;
 
 	refcount_inc(refcnt);
 
-out_put_cpu:
-	put_cpu();
+out:
+	local_bh_enable();
 
 	return err;
 }
@@ -166,15 +162,10 @@
 	cpu_queue = container_of(work, struct cryptd_cpu_queue, work);
 	/*
	 * Only handle one request at a time to avoid hogging crypto workqueue.
-	 * preempt_disable/enable is used to prevent being preempted by
-	 * cryptd_enqueue_request(). local_bh_disable/enable is used to prevent
-	 * cryptd_enqueue_request() being accessed from software interrupts.
 	 */
 	local_bh_disable();
-	preempt_disable();
 	backlog = crypto_get_backlog(&cpu_queue->queue);
 	req = crypto_dequeue_request(&cpu_queue->queue);
-	preempt_enable();
 	local_bh_enable();
 
 	if (!req)