Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

[PATCH] uml: fix proc-vs-interrupt context spinlock deadlock

This spinlock can also be taken in interrupt context, so spin_lock_irq[save] must be
used.

However, Documentation/networking/netdevices.txt explains that we are called with
rtnl_lock() held - so we don't need to care about other concurrent opens.
Verified also in LDD3 and by direct inspection. Also verified that the network
layer (through a state machine) guarantees that nobody will close the
interface while it is in use. Please correct me if I'm wrong.

Also, we must make sure we don't sleep with IRQs disabled. This is nothing
new, though - we already couldn't sleep while holding a spinlock, and nothing
in the present code obviously guarantees that anyway.

Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Authored by Paolo 'Blaisorblade' Giarrusso, committed by Linus Torvalds
48af05ed 06837504

+4 -12
--- a/arch/um/drivers/net_kern.c
+++ b/arch/um/drivers/net_kern.c
@@ -114,8 +114,6 @@
 	struct uml_net_private *lp = dev->priv;
 	int err;
 
-	spin_lock(&lp->lock);
-
 	if(lp->fd >= 0){
 		err = -ENXIO;
 		goto out;
@@ -147,8 +149,6 @@
 	 */
 	while((err = uml_net_rx(dev)) > 0) ;
 
-	spin_unlock(&lp->lock);
-
 	spin_lock(&opened_lock);
 	list_add(&lp->list, &opened);
 	spin_unlock(&opened_lock);
@@ -156,7 +160,6 @@
 	if(lp->close != NULL) (*lp->close)(lp->fd, &lp->user);
 	lp->fd = -1;
 out:
-	spin_unlock(&lp->lock);
 	return err;
 }
 
@@ -164,14 +169,11 @@
 	struct uml_net_private *lp = dev->priv;
 
 	netif_stop_queue(dev);
-	spin_lock(&lp->lock);
 
 	free_irq(dev->irq, dev);
 	if(lp->close != NULL)
 		(*lp->close)(lp->fd, &lp->user);
 	lp->fd = -1;
-
-	spin_unlock(&lp->lock);
 
 	spin_lock(&opened_lock);
 	list_del(&lp->list);
@@ -238,9 +246,9 @@
 	struct uml_net_private *lp = dev->priv;
 	struct sockaddr *hwaddr = addr;
 
-	spin_lock(&lp->lock);
+	spin_lock_irq(&lp->lock);
 	set_ether_mac(dev, hwaddr->sa_data);
-	spin_unlock(&lp->lock);
+	spin_unlock_irq(&lp->lock);
 
 	return(0);
 }
@@ -250,7 +258,7 @@
 	struct uml_net_private *lp = dev->priv;
 	int err = 0;
 
-	spin_lock(&lp->lock);
+	spin_lock_irq(&lp->lock);
 
 	new_mtu = (*lp->set_mtu)(new_mtu, &lp->user);
 	if(new_mtu < 0){
@@ -261,7 +269,7 @@
 	dev->mtu = new_mtu;
 
 out:
-	spin_unlock(&lp->lock);
+	spin_unlock_irq(&lp->lock);
 	return err;
 }
 