Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6: (24 commits)
e100: do not go D3 in shutdown unless system is powering off
netfilter: revised locking for x_tables
Bluetooth: Fix connection establishment with low security requirement
Bluetooth: Add different pairing timeout for Legacy Pairing
Bluetooth: Ensure that HCI sysfs add/del is preempt safe
net: Avoid extra wakeups of threads blocked in wait_for_packet()
net: Fix typo in net_device_ops description.
ipv4: Limit size of route cache hash table
Add reference to CAPI 2.0 standard
Documentation/isdn/INTERFACE.CAPI
update Documentation/isdn/00-INDEX
ixgbe: Fix WoL functionality for 82599 KX4 devices
veth: prevent oops caused by netdev destructor
xfrm: wrong hash value for temporary SA
forcedeth: tx timeout fix
net: Fix LL_MAX_HEADER for CONFIG_TR_MODULE
mlx4_en: Handle page allocation failure during receive
mlx4_en: Fix cleanup flow on cq activation
vlan: update vlan carrier state for admin up/down
netfilter: xt_recent: fix stack overread in compat code
...

+795 -447
+13 -4
Documentation/isdn/00-INDEX
···
   - this file (info on ISDN implementation for Linux)
 CREDITS
   - list of the kind folks that brought you this stuff.
+HiSax.cert
+  - information about the ITU approval certification of the HiSax driver.
 INTERFACE
-  - description of Linklevel and Hardwarelevel ISDN interface.
+  - description of isdn4linux Link Level and Hardware Level interfaces.
+INTERFACE.fax
+  - description of the fax subinterface of isdn4linux.
+INTERFACE.CAPI
+  - description of kernel CAPI Link Level to Hardware Level interface.
 README
   - general info on what you need and what to do for Linux ISDN.
 README.FAQ
···
   - info for running audio over ISDN.
 README.fax
   - info for using Fax over ISDN.
+README.gigaset
+  - info on the drivers for Siemens Gigaset ISDN adapters.
 README.icn
   - info on the ICN-ISDN-card and its driver.
 README.HiSax
···
 README.sc
   - info on driver for Spellcaster cards.
 README.x25
-  _ info for running X.25 over ISDN.
+  - info for running X.25 over ISDN.
 README.hysdn
-  - info on driver for Hypercope active HYSDN cards
-
+  - info on driver for Hypercope active HYSDN cards
+README.mISDN
+  - info on the Modular ISDN subsystem (mISDN).
+213
Documentation/isdn/INTERFACE.CAPI
Kernel CAPI Interface to Hardware Drivers
-----------------------------------------

1. Overview

From the CAPI 2.0 specification:
COMMON-ISDN-API (CAPI) is an application programming interface standard used
to access ISDN equipment connected to basic rate interfaces (BRI) and primary
rate interfaces (PRI).

Kernel CAPI operates as a dispatching layer between CAPI applications and CAPI
hardware drivers. Hardware drivers register ISDN devices (controllers, in CAPI
lingo) with Kernel CAPI to indicate their readiness to provide their service
to CAPI applications. CAPI applications also register with Kernel CAPI,
requesting association with a CAPI device. Kernel CAPI then dispatches the
application registration to an available device, forwarding it to the
corresponding hardware driver. Kernel CAPI then forwards CAPI messages in both
directions between the application and the hardware driver.

Format and semantics of CAPI messages are specified in the CAPI 2.0 standard.
This standard is freely available from http://www.capi.org.


2. Driver and Device Registration

CAPI drivers optionally register themselves with Kernel CAPI by calling the
Kernel CAPI function register_capi_driver() with a pointer to a struct
capi_driver. This structure must be filled with the name and revision of the
driver, and optionally a pointer to a callback function, add_card(). The
registration can be revoked by calling the function unregister_capi_driver()
with a pointer to the same struct capi_driver.

CAPI drivers must register each of the ISDN devices they control with Kernel
CAPI by calling the Kernel CAPI function attach_capi_ctr() with a pointer to a
struct capi_ctr before they can be used. This structure must be filled with
the names of the driver and controller, and a number of callback function
pointers which are subsequently used by Kernel CAPI for communicating with the
driver. The registration can be revoked by calling the function
detach_capi_ctr() with a pointer to the same struct capi_ctr.

Before the device can be actually used, the driver must fill in the device
information fields 'manu', 'version', 'profile' and 'serial' in the capi_ctr
structure of the device, and signal its readiness by calling capi_ctr_ready().
From then on, Kernel CAPI may call the registered callback functions for the
device.

If the device becomes unusable for any reason (shutdown, disconnect ...), the
driver has to call capi_ctr_reseted(). This will prevent further calls to the
callback functions by Kernel CAPI.


3. Application Registration and Communication

Kernel CAPI forwards registration requests from applications (calls to CAPI
operation CAPI_REGISTER) to an appropriate hardware driver by calling its
register_appl() callback function. A unique Application ID (ApplID, u16) is
allocated by Kernel CAPI and passed to register_appl() along with the
parameter structure provided by the application. This is analogous to the
open() operation on regular files or character devices.

After a successful return from register_appl(), CAPI messages from the
application may be passed to the driver for the device via calls to the
send_message() callback function. The CAPI message to send is stored in the
data portion of an skb. Conversely, the driver may call Kernel CAPI's
capi_ctr_handle_message() function to pass a received CAPI message to Kernel
CAPI for forwarding to an application, specifying its ApplID.

Deregistration requests (CAPI operation CAPI_RELEASE) from applications are
forwarded as calls to the release_appl() callback function, passing the same
ApplID as with register_appl(). After return from release_appl(), no CAPI
messages for that application may be passed to or from the device anymore.


4. Data Structures

4.1 struct capi_driver

This structure describes a Kernel CAPI driver itself. It is used in the
register_capi_driver() and unregister_capi_driver() functions, and contains
the following non-private fields, all to be set by the driver before calling
register_capi_driver():

char name[32]
        the name of the driver, as a zero-terminated ASCII string
char revision[32]
        the revision number of the driver, as a zero-terminated ASCII string
int (*add_card)(struct capi_driver *driver, capicardparams *data)
        a callback function pointer (may be NULL)


4.2 struct capi_ctr

This structure describes an ISDN device (controller) handled by a Kernel CAPI
driver. After registration via the attach_capi_ctr() function it is passed to
all controller specific lower layer interface and callback functions to
identify the controller to operate on.

It contains the following non-private fields:

- to be set by the driver before calling attach_capi_ctr():

struct module *owner
        pointer to the driver module owning the device

void *driverdata
        an opaque pointer to driver specific data, not touched by Kernel CAPI

char name[32]
        the name of the controller, as a zero-terminated ASCII string

char *driver_name
        the name of the driver, as a zero-terminated ASCII string

int (*load_firmware)(struct capi_ctr *ctrlr, capiloaddata *ldata)
        (optional) pointer to a callback function for sending firmware and
        configuration data to the device

void (*reset_ctr)(struct capi_ctr *ctrlr)
        pointer to a callback function for performing a reset on the device,
        releasing all registered applications

void (*register_appl)(struct capi_ctr *ctrlr, u16 applid,
                      capi_register_params *rparam)
void (*release_appl)(struct capi_ctr *ctrlr, u16 applid)
        pointers to callback functions for registration and deregistration of
        applications with the device

u16 (*send_message)(struct capi_ctr *ctrlr, struct sk_buff *skb)
        pointer to a callback function for sending a CAPI message to the
        device

char *(*procinfo)(struct capi_ctr *ctrlr)
        pointer to a callback function returning the entry for the device in
        the CAPI controller info table, /proc/capi/controller

read_proc_t *ctr_read_proc
        pointer to the read_proc callback function for the device's proc file
        system entry, /proc/capi/controllers/<n>; will be called with a
        pointer to the device's capi_ctr structure as the last (data) argument

- to be filled in before calling capi_ctr_ready():

u8 manu[CAPI_MANUFACTURER_LEN]
        value to return for CAPI_GET_MANUFACTURER

capi_version version
        value to return for CAPI_GET_VERSION

capi_profile profile
        value to return for CAPI_GET_PROFILE

u8 serial[CAPI_SERIAL_LEN]
        value to return for CAPI_GET_SERIAL


5. Lower Layer Interface Functions

(declared in <linux/isdn/capilli.h>)

void register_capi_driver(struct capi_driver *drvr)
void unregister_capi_driver(struct capi_driver *drvr)
        register/unregister a driver with Kernel CAPI

int attach_capi_ctr(struct capi_ctr *ctrlr)
int detach_capi_ctr(struct capi_ctr *ctrlr)
        register/unregister a device (controller) with Kernel CAPI

void capi_ctr_ready(struct capi_ctr *ctrlr)
void capi_ctr_reseted(struct capi_ctr *ctrlr)
        signal controller ready/not ready

void capi_ctr_suspend_output(struct capi_ctr *ctrlr)
void capi_ctr_resume_output(struct capi_ctr *ctrlr)
        signal suspend/resume

void capi_ctr_handle_message(struct capi_ctr * ctrlr, u16 applid,
                             struct sk_buff *skb)
        pass a received CAPI message to Kernel CAPI
        for forwarding to the specified application


6. Helper Functions and Macros

Library functions (from <linux/isdn/capilli.h>):

void capilib_new_ncci(struct list_head *head, u16 applid,
                      u32 ncci, u32 winsize)
void capilib_free_ncci(struct list_head *head, u16 applid, u32 ncci)
void capilib_release_appl(struct list_head *head, u16 applid)
void capilib_release(struct list_head *head)
void capilib_data_b3_conf(struct list_head *head, u16 applid,
                          u32 ncci, u16 msgid)
u16 capilib_data_b3_req(struct list_head *head, u16 applid,
                        u32 ncci, u16 msgid)


Macros to extract/set element values from/in a CAPI message header
(from <linux/isdn/capiutil.h>):

Get Macro               Set Macro                       Element (Type)

CAPIMSG_LEN(m)          CAPIMSG_SETLEN(m, len)          Total Length (u16)
CAPIMSG_APPID(m)        CAPIMSG_SETAPPID(m, applid)     ApplID (u16)
CAPIMSG_COMMAND(m)      CAPIMSG_SETCOMMAND(m, cmd)      Command (u8)
CAPIMSG_SUBCOMMAND(m)   CAPIMSG_SETSUBCOMMAND(m, cmd)   Subcommand (u8)
CAPIMSG_CMD(m)          -                               Command*256
                                                        + Subcommand (u16)
CAPIMSG_MSGID(m)        CAPIMSG_SETMSGID(m, msgid)      Message Number (u16)

CAPIMSG_CONTROL(m)      CAPIMSG_SETCONTROL(m, contr)    Controller/PLCI/NCCI
                                                        (u32)
CAPIMSG_DATALEN(m)      CAPIMSG_SETDATALEN(m, len)      Data Length (u16)
+171
drivers/isdn/capi/kcapi.c
···
         mutex_unlock(&ap->recv_mtx);
 }
 
+/**
+ * capi_ctr_handle_message() - handle incoming CAPI message
+ * @card: controller descriptor structure.
+ * @appl: application ID.
+ * @skb: message.
+ *
+ * Called by hardware driver to pass a CAPI message to the application.
+ */
+
 void capi_ctr_handle_message(struct capi_ctr * card, u16 appl, struct sk_buff *skb)
 {
         struct capi20_appl *ap;
···
 
 EXPORT_SYMBOL(capi_ctr_handle_message);
 
+/**
+ * capi_ctr_ready() - signal CAPI controller ready
+ * @card: controller descriptor structure.
+ *
+ * Called by hardware driver to signal that the controller is up and running.
+ */
+
 void capi_ctr_ready(struct capi_ctr * card)
 {
         card->cardstate = CARD_RUNNING;
···
 }
 
 EXPORT_SYMBOL(capi_ctr_ready);
+
+/**
+ * capi_ctr_reseted() - signal CAPI controller reset
+ * @card: controller descriptor structure.
+ *
+ * Called by hardware driver to signal that the controller is down and
+ * unavailable for use.
+ */
 
 void capi_ctr_reseted(struct capi_ctr * card)
 {
···
 
 EXPORT_SYMBOL(capi_ctr_reseted);
 
+/**
+ * capi_ctr_suspend_output() - suspend controller
+ * @card: controller descriptor structure.
+ *
+ * Called by hardware driver to stop data flow.
+ */
+
 void capi_ctr_suspend_output(struct capi_ctr *card)
 {
         if (!card->blocked) {
···
 }
 
 EXPORT_SYMBOL(capi_ctr_suspend_output);
+
+/**
+ * capi_ctr_resume_output() - resume controller
+ * @card: controller descriptor structure.
+ *
+ * Called by hardware driver to resume data flow.
+ */
 
 void capi_ctr_resume_output(struct capi_ctr *card)
 {
···
 EXPORT_SYMBOL(capi_ctr_resume_output);
 
 /* ------------------------------------------------------------- */
+
+/**
+ * attach_capi_ctr() - register CAPI controller
+ * @card: controller descriptor structure.
+ *
+ * Called by hardware driver to register a controller with the CAPI subsystem.
+ * Return value: 0 on success, error code < 0 on error
+ */
 
 int
 attach_capi_ctr(struct capi_ctr *card)
···
 
 EXPORT_SYMBOL(attach_capi_ctr);
 
+/**
+ * detach_capi_ctr() - unregister CAPI controller
+ * @card: controller descriptor structure.
+ *
+ * Called by hardware driver to remove the registration of a controller
+ * with the CAPI subsystem.
+ * Return value: 0 on success, error code < 0 on error
+ */
+
 int detach_capi_ctr(struct capi_ctr *card)
 {
         if (card->cardstate != CARD_DETECTED)
···
 
 EXPORT_SYMBOL(detach_capi_ctr);
 
+/**
+ * register_capi_driver() - register CAPI driver
+ * @driver: driver descriptor structure.
+ *
+ * Called by hardware driver to register itself with the CAPI subsystem.
+ */
+
 void register_capi_driver(struct capi_driver *driver)
 {
         unsigned long flags;
···
 }
 
 EXPORT_SYMBOL(register_capi_driver);
+
+/**
+ * unregister_capi_driver() - unregister CAPI driver
+ * @driver: driver descriptor structure.
+ *
+ * Called by hardware driver to unregister itself from the CAPI subsystem.
+ */
 
 void unregister_capi_driver(struct capi_driver *driver)
 {
···
 /* -------- CAPI2.0 Interface ---------------------------------- */
 /* ------------------------------------------------------------- */
 
+/**
+ * capi20_isinstalled() - CAPI 2.0 operation CAPI_INSTALLED
+ *
+ * Return value: CAPI result code (CAPI_NOERROR if at least one ISDN controller
+ * is ready for use, CAPI_REGNOTINSTALLED otherwise)
+ */
+
 u16 capi20_isinstalled(void)
 {
         int i;
···
 }
 
 EXPORT_SYMBOL(capi20_isinstalled);
+
+/**
+ * capi20_register() - CAPI 2.0 operation CAPI_REGISTER
+ * @ap: CAPI application descriptor structure.
+ *
+ * Register an application's presence with CAPI.
+ * A unique application ID is assigned and stored in @ap->applid.
+ * After this function returns successfully, the message receive
+ * callback function @ap->recv_message() may be called at any time
+ * until capi20_release() has been called for the same @ap.
+ * Return value: CAPI result code
+ */
 
 u16 capi20_register(struct capi20_appl *ap)
 {
···
 
 EXPORT_SYMBOL(capi20_register);
 
+/**
+ * capi20_release() - CAPI 2.0 operation CAPI_RELEASE
+ * @ap: CAPI application descriptor structure.
+ *
+ * Terminate an application's registration with CAPI.
+ * After this function returns successfully, the message receive
+ * callback function @ap->recv_message() will no longer be called.
+ * Return value: CAPI result code
+ */
+
 u16 capi20_release(struct capi20_appl *ap)
 {
         int i;
···
 }
 
 EXPORT_SYMBOL(capi20_release);
+
+/**
+ * capi20_put_message() - CAPI 2.0 operation CAPI_PUT_MESSAGE
+ * @ap: CAPI application descriptor structure.
+ * @skb: CAPI message.
+ *
+ * Transfer a single message to CAPI.
+ * Return value: CAPI result code
+ */
 
 u16 capi20_put_message(struct capi20_appl *ap, struct sk_buff *skb)
 {
···
 
 EXPORT_SYMBOL(capi20_put_message);
 
+/**
+ * capi20_get_manufacturer() - CAPI 2.0 operation CAPI_GET_MANUFACTURER
+ * @contr: controller number.
+ * @buf: result buffer (64 bytes).
+ *
+ * Retrieve information about the manufacturer of the specified ISDN controller
+ * or (for @contr == 0) the driver itself.
+ * Return value: CAPI result code
+ */
+
 u16 capi20_get_manufacturer(u32 contr, u8 *buf)
 {
         struct capi_ctr *card;
···
 }
 
 EXPORT_SYMBOL(capi20_get_manufacturer);
+
+/**
+ * capi20_get_version() - CAPI 2.0 operation CAPI_GET_VERSION
+ * @contr: controller number.
+ * @verp: result structure.
+ *
+ * Retrieve version information for the specified ISDN controller
+ * or (for @contr == 0) the driver itself.
+ * Return value: CAPI result code
+ */
 
 u16 capi20_get_version(u32 contr, struct capi_version *verp)
 {
···
 
 EXPORT_SYMBOL(capi20_get_version);
 
+/**
+ * capi20_get_serial() - CAPI 2.0 operation CAPI_GET_SERIAL_NUMBER
+ * @contr: controller number.
+ * @serial: result buffer (8 bytes).
+ *
+ * Retrieve the serial number of the specified ISDN controller
+ * or (for @contr == 0) the driver itself.
+ * Return value: CAPI result code
+ */
+
 u16 capi20_get_serial(u32 contr, u8 *serial)
 {
         struct capi_ctr *card;
···
 }
 
 EXPORT_SYMBOL(capi20_get_serial);
+
+/**
+ * capi20_get_profile() - CAPI 2.0 operation CAPI_GET_PROFILE
+ * @contr: controller number.
+ * @profp: result structure.
+ *
+ * Retrieve capability information for the specified ISDN controller
+ * or (for @contr == 0) the number of installed controllers.
+ * Return value: CAPI result code
+ */
 
 u16 capi20_get_profile(u32 contr, struct capi_profile *profp)
 {
···
 }
 #endif
 
+/**
+ * capi20_manufacturer() - CAPI 2.0 operation CAPI_MANUFACTURER
+ * @cmd: command.
+ * @data: parameter.
+ *
+ * Perform manufacturer specific command.
+ * Return value: CAPI result code
+ */
+
 int capi20_manufacturer(unsigned int cmd, void __user *data)
 {
         struct capi_ctr *card;
···
 EXPORT_SYMBOL(capi20_manufacturer);
 
 /* temporary hack */
+
+/**
+ * capi20_set_callback() - set CAPI application notification callback function
+ * @ap: CAPI application descriptor structure.
+ * @callback: callback function (NULL to remove).
+ *
+ * If not NULL, the callback function will be called to notify the
+ * application of the addition or removal of a controller.
+ * The first argument (cmd) will tell whether the controller was added
+ * (KCI_CONTRUP) or removed (KCI_CONTRDOWN).
+ * The second argument (contr) will be the controller number.
+ * For cmd==KCI_CONTRUP the third argument (data) will be a pointer to the
+ * new controller's capability profile structure.
+ */
+
 void capi20_set_callback(struct capi20_appl *ap,
                          void (*callback) (unsigned int cmd, __u32 contr, void *data))
 {
+23 -7
drivers/net/e100.c
···
 #define E100_82552_SMARTSPEED   0x14   /* SmartSpeed Ctrl register */
 #define E100_82552_REV_ANEG     0x0200 /* Reverse auto-negotiation */
 #define E100_82552_ANEG_NOW     0x0400 /* Auto-negotiate now */
-static int e100_suspend(struct pci_dev *pdev, pm_message_t state)
+static void __e100_shutdown(struct pci_dev *pdev, bool *enable_wake)
 {
         struct net_device *netdev = pci_get_drvdata(pdev);
         struct nic *nic = netdev_priv(netdev);
···
                                    E100_82552_SMARTSPEED, smartspeed |
                                    E100_82552_REV_ANEG | E100_82552_ANEG_NOW);
                 }
-                if (pci_enable_wake(pdev, PCI_D3cold, true))
-                        pci_enable_wake(pdev, PCI_D3hot, true);
+                *enable_wake = true;
         } else {
-                pci_enable_wake(pdev, PCI_D3hot, false);
+                *enable_wake = false;
         }
 
         pci_disable_device(pdev);
-        pci_set_power_state(pdev, PCI_D3hot);
+}
 
-        return 0;
+static int __e100_power_off(struct pci_dev *pdev, bool wake)
+{
+        if (wake) {
+                return pci_prepare_to_sleep(pdev);
+        } else {
+                pci_wake_from_d3(pdev, false);
+                return pci_set_power_state(pdev, PCI_D3hot);
+        }
 }
 
 #ifdef CONFIG_PM
+static int e100_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+        bool wake;
+        __e100_shutdown(pdev, &wake);
+        return __e100_power_off(pdev, wake);
+}
+
 static int e100_resume(struct pci_dev *pdev)
 {
         struct net_device *netdev = pci_get_drvdata(pdev);
···
 
 static void e100_shutdown(struct pci_dev *pdev)
 {
-        e100_suspend(pdev, PMSG_SUSPEND);
+        bool wake;
+        __e100_shutdown(pdev, &wake);
+        if (system_state == SYSTEM_POWER_OFF)
+                __e100_power_off(pdev, wake);
 }
 
 /* ------------------ PCI Error Recovery infrastructure -------------- */
+21 -10
drivers/net/forcedeth.c
···
         np->tx_pkts_in_progress = 0;
         np->tx_change_owner = NULL;
         np->tx_end_flip = NULL;
+        np->tx_stop = 0;
 
         for (i = 0; i < np->tx_ring_size; i++) {
                 if (!nv_optimized(np)) {
···
         struct fe_priv *np = netdev_priv(dev);
         u8 __iomem *base = get_hwbase(dev);
         u32 status;
+        union ring_type put_tx;
+        int saved_tx_limit;
 
         if (np->msi_flags & NV_MSI_X_ENABLED)
                 status = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
···
         /* 1) stop tx engine */
         nv_stop_tx(dev);
 
-        /* 2) check that the packets were not sent already: */
+        /* 2) complete any outstanding tx and do not give HW any limited tx pkts */
+        saved_tx_limit = np->tx_limit;
+        np->tx_limit = 0;        /* prevent giving HW any limited pkts */
+        np->tx_stop = 0;         /* prevent waking tx queue */
         if (!nv_optimized(np))
                 nv_tx_done(dev, np->tx_ring_size);
         else
                 nv_tx_done_optimized(dev, np->tx_ring_size);
 
-        /* 3) if there are dead entries: clear everything */
-        if (np->get_tx_ctx != np->put_tx_ctx) {
-                printk(KERN_DEBUG "%s: tx_timeout: dead entries!\n", dev->name);
-                nv_drain_tx(dev);
-                nv_init_tx(dev);
-                setup_hw_rings(dev, NV_SETUP_TX_RING);
-        }
+        /* save current HW position */
+        if (np->tx_change_owner)
+                put_tx.ex = np->tx_change_owner->first_tx_desc;
+        else
+                put_tx = np->put_tx;
 
-        netif_wake_queue(dev);
+        /* 3) clear all tx state */
+        nv_drain_tx(dev);
+        nv_init_tx(dev);
 
-        /* 4) restart tx engine */
+        /* 4) restore state to current HW position */
+        np->get_tx = np->put_tx = put_tx;
+        np->tx_limit = saved_tx_limit;
+
+        /* 5) restart tx engine */
         nv_start_tx(dev);
+        netif_wake_queue(dev);
         spin_unlock_irq(&np->lock);
 }
+2 -49
drivers/net/ixgbe/ixgbe_common.c
···
 static void ixgbe_enable_rar(struct ixgbe_hw *hw, u32 index);
 static void ixgbe_disable_rar(struct ixgbe_hw *hw, u32 index);
 static s32 ixgbe_mta_vector(struct ixgbe_hw *hw, u8 *mc_addr);
-static void ixgbe_add_mc_addr(struct ixgbe_hw *hw, u8 *mc_addr);
 static void ixgbe_add_uc_addr(struct ixgbe_hw *hw, u8 *addr, u32 vmdq);
 
 /**
···
          * Clear accounting of old secondary address list,
          * don't count RAR[0]
          */
-        uc_addr_in_use = hw->addr_ctrl.rar_used_count -
-                         hw->addr_ctrl.mc_addr_in_rar_count - 1;
+        uc_addr_in_use = hw->addr_ctrl.rar_used_count - 1;
         hw->addr_ctrl.rar_used_count -= uc_addr_in_use;
         hw->addr_ctrl.overflow_promisc = 0;
···
 }
 
 /**
- * ixgbe_add_mc_addr - Adds a multicast address.
- * @hw: pointer to hardware structure
- * @mc_addr: new multicast address
- *
- * Adds it to unused receive address register or to the multicast table.
- **/
-static void ixgbe_add_mc_addr(struct ixgbe_hw *hw, u8 *mc_addr)
-{
-        u32 rar_entries = hw->mac.num_rar_entries;
-        u32 rar;
-
-        hw_dbg(hw, " MC Addr =%.2X %.2X %.2X %.2X %.2X %.2X\n",
-               mc_addr[0], mc_addr[1], mc_addr[2],
-               mc_addr[3], mc_addr[4], mc_addr[5]);
-
-        /*
-         * Place this multicast address in the RAR if there is room,
-         * else put it in the MTA
-         */
-        if (hw->addr_ctrl.rar_used_count < rar_entries) {
-                /* use RAR from the end up for multicast */
-                rar = rar_entries - hw->addr_ctrl.mc_addr_in_rar_count - 1;
-                hw->mac.ops.set_rar(hw, rar, mc_addr, 0, IXGBE_RAH_AV);
-                hw_dbg(hw, "Added a multicast address to RAR[%d]\n", rar);
-                hw->addr_ctrl.rar_used_count++;
-                hw->addr_ctrl.mc_addr_in_rar_count++;
-        } else {
-                ixgbe_set_mta(hw, mc_addr);
-        }
-
-        hw_dbg(hw, "ixgbe_add_mc_addr Complete\n");
-}
-
-/**
  * ixgbe_update_mc_addr_list_generic - Updates MAC list of multicast addresses
  * @hw: pointer to hardware structure
  * @mc_addr_list: the list of new multicast addresses
···
                               u32 mc_addr_count, ixgbe_mc_addr_itr next)
 {
         u32 i;
-        u32 rar_entries = hw->mac.num_rar_entries;
         u32 vmdq;
 
         /*
···
          * use.
          */
         hw->addr_ctrl.num_mc_addrs = mc_addr_count;
-        hw->addr_ctrl.rar_used_count -= hw->addr_ctrl.mc_addr_in_rar_count;
-        hw->addr_ctrl.mc_addr_in_rar_count = 0;
         hw->addr_ctrl.mta_in_use = 0;
-
-        /* Zero out the other receive addresses. */
-        hw_dbg(hw, "Clearing RAR[%d-%d]\n", hw->addr_ctrl.rar_used_count,
-               rar_entries - 1);
-        for (i = hw->addr_ctrl.rar_used_count; i < rar_entries; i++) {
-                IXGBE_WRITE_REG(hw, IXGBE_RAL(i), 0);
-                IXGBE_WRITE_REG(hw, IXGBE_RAH(i), 0);
-        }
 
         /* Clear the MTA */
         hw_dbg(hw, " Clearing MTA\n");
···
         /* Add the new addresses */
         for (i = 0; i < mc_addr_count; i++) {
                 hw_dbg(hw, " Adding the multicast addresses:\n");
-                ixgbe_add_mc_addr(hw, next(hw, &mc_addr_list, &vmdq));
+                ixgbe_set_mta(hw, next(hw, &mc_addr_list, &vmdq));
         }
 
         /* Enable mta */
+4 -6
drivers/net/ixgbe/ixgbe_main.c
···
 
         ixgbe_reset(adapter);
 
+        IXGBE_WRITE_REG(&adapter->hw, IXGBE_WUS, ~0);
+
         if (netif_running(netdev)) {
                 err = ixgbe_open(adapter->netdev);
                 if (err)
···
         const struct ixgbe_info *ii = ixgbe_info_tbl[ent->driver_data];
         static int cards_found;
         int i, err, pci_using_dac;
-        u16 pm_value = 0;
         u32 part_num, eec;
 
         err = pci_enable_device(pdev);
···
 
         switch (pdev->device) {
         case IXGBE_DEV_ID_82599_KX4:
-#define IXGBE_PCIE_PMCSR 0x44
-                adapter->wol = IXGBE_WUFC_MAG;
-                pci_read_config_word(pdev, IXGBE_PCIE_PMCSR, &pm_value);
-                pci_write_config_word(pdev, IXGBE_PCIE_PMCSR,
-                                      (pm_value | (1 << 8)));
+                adapter->wol = (IXGBE_WUFC_MAG | IXGBE_WUFC_EX |
+                                IXGBE_WUFC_MC | IXGBE_WUFC_BC);
                 break;
         default:
                 adapter->wol = 0;
+1 -1
drivers/net/mlx4/en_netdev.c
···
                 err = mlx4_en_activate_cq(priv, cq);
                 if (err) {
                         mlx4_err(mdev, "Failed activating Rx CQ\n");
-                        goto rx_err;
+                        goto cq_err;
                 }
                 for (j = 0; j < cq->size; j++)
                         cq->buf[j].owner_sr_opcode = MLX4_CQE_OWNER_MASK;
+4
drivers/net/mlx4/en_rx.c
···
         used_frags = mlx4_en_complete_rx_desc(priv, rx_desc, skb_frags,
                                               skb_shinfo(skb)->frags,
                                               page_alloc, length);
+        if (unlikely(!used_frags)) {
+                kfree_skb(skb);
+                return NULL;
+        }
         skb_shinfo(skb)->nr_frags = used_frags;
 
         /* Copy headers into the skb linear buffer */
+16 -25
drivers/net/veth.c
··· 210 210 211 211 static struct net_device_stats *veth_get_stats(struct net_device *dev) 212 212 { 213 - struct veth_priv *priv; 214 - struct net_device_stats *dev_stats; 215 - int cpu; 213 + struct veth_priv *priv = netdev_priv(dev); 214 + struct net_device_stats *dev_stats = &dev->stats; 215 + unsigned int cpu; 216 216 struct veth_net_stats *stats; 217 - 218 - priv = netdev_priv(dev); 219 - dev_stats = &dev->stats; 220 217 221 218 dev_stats->rx_packets = 0; 222 219 dev_stats->tx_packets = 0; ··· 222 225 dev_stats->tx_dropped = 0; 223 226 dev_stats->rx_dropped = 0; 224 227 225 - for_each_online_cpu(cpu) { 226 - stats = per_cpu_ptr(priv->stats, cpu); 228 + if (priv->stats) 229 + for_each_online_cpu(cpu) { 230 + stats = per_cpu_ptr(priv->stats, cpu); 227 231 228 - dev_stats->rx_packets += stats->rx_packets; 229 - dev_stats->tx_packets += stats->tx_packets; 230 - dev_stats->rx_bytes += stats->rx_bytes; 231 - dev_stats->tx_bytes += stats->tx_bytes; 232 - dev_stats->tx_dropped += stats->tx_dropped; 233 - dev_stats->rx_dropped += stats->rx_dropped; 234 - } 232 + dev_stats->rx_packets += stats->rx_packets; 233 + dev_stats->tx_packets += stats->tx_packets; 234 + dev_stats->rx_bytes += stats->rx_bytes; 235 + dev_stats->tx_bytes += stats->tx_bytes; 236 + dev_stats->tx_dropped += stats->tx_dropped; 237 + dev_stats->rx_dropped += stats->rx_dropped; 238 + } 235 239 236 240 return dev_stats; 237 241 } ··· 259 261 netif_carrier_off(dev); 260 262 netif_carrier_off(priv->peer); 261 263 264 + free_percpu(priv->stats); 265 + priv->stats = NULL; 262 266 return 0; 263 267 } 264 268 ··· 291 291 return 0; 292 292 } 293 293 294 - static void veth_dev_free(struct net_device *dev) 295 - { 296 - struct veth_priv *priv; 297 - 298 - priv = netdev_priv(dev); 299 - free_percpu(priv->stats); 300 - free_netdev(dev); 301 - } 302 - 303 294 static const struct net_device_ops veth_netdev_ops = { 304 295 .ndo_init = veth_dev_init, 305 296 .ndo_open = veth_open, ··· 308 317 dev->netdev_ops = 
&veth_netdev_ops; 309 318 dev->ethtool_ops = &veth_ethtool_ops; 310 319 dev->features |= NETIF_F_LLTX; 311 - dev->destructor = veth_dev_free; 320 + dev->destructor = free_netdev; 312 321 } 313 322 314 323 /*
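The veth hunk above moves `free_percpu()` from the netdev destructor into `veth_close()`, so `veth_get_stats()` must now tolerate a NULL `priv->stats`. A minimal userspace sketch of that guard (all names here are invented stand-ins, not kernel symbols):

```c
#include <stddef.h>

/* Stand-in for one CPU's slot of the percpu stats area. */
struct fake_cpu_stats {
	unsigned long rx_packets;
	unsigned long tx_packets;
};

/* Sum per-CPU rx counters, skipping the walk entirely when the
 * percpu area has already been torn down (the NULL check the
 * patch adds before for_each_online_cpu()). */
static unsigned long sum_rx(const struct fake_cpu_stats *percpu, size_t ncpus)
{
	unsigned long total = 0;
	size_t cpu;

	if (percpu == NULL)	/* stats already freed: report zero */
		return 0;
	for (cpu = 0; cpu < ncpus; cpu++)
		total += percpu[cpu].rx_packets;
	return total;
}
```

Without the guard, a stats read racing the device teardown would dereference freed percpu memory, which is the oops the commit message refers to.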
+2 -2
include/linux/netdevice.h
··· 104 104 # else 105 105 # define LL_MAX_HEADER 96 106 106 # endif 107 - #elif defined(CONFIG_TR) 107 + #elif defined(CONFIG_TR) || defined(CONFIG_TR_MODULE) 108 108 # define LL_MAX_HEADER 48 109 109 #else 110 110 # define LL_MAX_HEADER 32 ··· 500 500 * 501 501 * int (*ndo_set_mac_address)(struct net_device *dev, void *addr); 502 502 * This function is called when the Media Access Control address 503 - * needs to be changed. If not this interface is not defined, the 503 + * needs to be changed. If this interface is not defined, the 504 504 * mac address can not be changed. 505 505 * 506 506 * int (*ndo_validate_addr)(struct net_device *dev);
+68 -5
include/linux/netfilter/x_tables.h
··· 354 354 /* What hooks you will enter on */ 355 355 unsigned int valid_hooks; 356 356 357 - /* Lock for the curtain */ 358 - struct mutex lock; 359 - 360 357 /* Man behind the curtain... */ 361 358 struct xt_table_info *private; 362 359 ··· 431 434 432 435 extern struct xt_table_info *xt_alloc_table_info(unsigned int size); 433 436 extern void xt_free_table_info(struct xt_table_info *info); 434 - extern void xt_table_entry_swap_rcu(struct xt_table_info *old, 435 - struct xt_table_info *new); 437 + 438 + /* 439 + * Per-CPU spinlock associated with per-cpu table entries, and 440 + * with a counter for the "reading" side that allows a recursive 441 + * reader to avoid taking the lock and deadlocking. 442 + * 443 + * "reading" is used by ip/arp/ip6 tables rule processing which runs per-cpu. 444 + * It needs to ensure that the rules are not being changed while the packet 445 + * is being processed. In some cases, the read lock will be acquired 446 + * twice on the same CPU; this is okay because of the count. 447 + * 448 + * "writing" is used when reading counters. 449 + * During replace any readers that are using the old tables have to complete 450 + * before freeing the old table. This is handled by the write locking 451 + * necessary for reading the counters. 452 + */ 453 + struct xt_info_lock { 454 + spinlock_t lock; 455 + unsigned char readers; 456 + }; 457 + DECLARE_PER_CPU(struct xt_info_lock, xt_info_locks); 458 + 459 + /* 460 + * Note: we need to ensure that preemption is disabled before acquiring 461 + * the per-cpu-variable, so we do it as a two step process rather than 462 + * using "spin_lock_bh()". 463 + * 464 + * We _also_ need to disable bottom half processing before updating our 465 + * nesting count, to make sure that the only kind of re-entrancy is this 466 + * code being called by itself: since the count+lock is not an atomic 467 + * operation, we can allow no races. 
468 + * 469 + * _Only_ that special combination of being per-cpu and never getting 470 + * re-entered asynchronously means that the count is safe. 471 + */ 472 + static inline void xt_info_rdlock_bh(void) 473 + { 474 + struct xt_info_lock *lock; 475 + 476 + local_bh_disable(); 477 + lock = &__get_cpu_var(xt_info_locks); 478 + if (!lock->readers++) 479 + spin_lock(&lock->lock); 480 + } 481 + 482 + static inline void xt_info_rdunlock_bh(void) 483 + { 484 + struct xt_info_lock *lock = &__get_cpu_var(xt_info_locks); 485 + 486 + if (!--lock->readers) 487 + spin_unlock(&lock->lock); 488 + local_bh_enable(); 489 + } 490 + 491 + /* 492 + * The "writer" side needs to get exclusive access to the lock, 493 + * regardless of readers. This must be called with bottom half 494 + * processing (and thus also preemption) disabled. 495 + */ 496 + static inline void xt_info_wrlock(unsigned int cpu) 497 + { 498 + spin_lock(&per_cpu(xt_info_locks, cpu).lock); 499 + } 500 + 501 + static inline void xt_info_wrunlock(unsigned int cpu) 502 + { 503 + spin_unlock(&per_cpu(xt_info_locks, cpu).lock); 504 + } 436 505 437 506 /* 438 507 * This helper is performance critical and must be inlined
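The core idea of the new `xt_info_lock` is the per-CPU recursion counter: only the outermost reader on a CPU takes the spinlock, so rule traversal re-entered on the same CPU (e.g. from softirq context) cannot deadlock against itself. A userspace sketch of just that counter logic, with a plain flag standing in for `spinlock_t` (names invented):

```c
/* Simplified model of struct xt_info_lock: a lock flag plus the
 * per-CPU reader recursion depth. */
struct info_lock {
	int locked;		/* stands in for spinlock_t */
	unsigned char readers;	/* recursion depth on this CPU */
};

static void rdlock(struct info_lock *l)
{
	if (!l->readers++)	/* outermost reader takes the real lock */
		l->locked = 1;
}

static void rdunlock(struct info_lock *l)
{
	if (!--l->readers)	/* last reader out releases it */
		l->locked = 0;
}
```

This is safe only under the conditions the comment above spells out: the structure is per-CPU and bottom halves are disabled, so the count+lock pair is never raced asynchronously on the same CPU.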
+4 -2
include/linux/wait.h
··· 440 440 int autoremove_wake_function(wait_queue_t *wait, unsigned mode, int sync, void *key); 441 441 int wake_bit_function(wait_queue_t *wait, unsigned mode, int sync, void *key); 442 442 443 - #define DEFINE_WAIT(name) \ 443 + #define DEFINE_WAIT_FUNC(name, function) \ 444 444 wait_queue_t name = { \ 445 445 .private = current, \ 446 - .func = autoremove_wake_function, \ 446 + .func = function, \ 447 447 .task_list = LIST_HEAD_INIT((name).task_list), \ 448 448 } 449 + 450 + #define DEFINE_WAIT(name) DEFINE_WAIT_FUNC(name, autoremove_wake_function) 449 451 450 452 #define DEFINE_WAIT_BIT(name, word, bit) \ 451 453 struct wait_bit_queue name = { \
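The wait.h hunk is a pure macro refactor: `DEFINE_WAIT` becomes the default-function case of a new `DEFINE_WAIT_FUNC`, so callers can plug in their own wake callback. A toy userspace version of the same pattern (the `fake_` names are invented for illustration):

```c
/* Minimal model of a wait-queue entry: just the wake callback. */
typedef int (*wake_fn)(void *key);

struct fake_wait {
	wake_fn func;
};

/* Stand-in for autoremove_wake_function: always wake. */
static int default_wake(void *key)
{
	(void)key;
	return 1;
}

/* The parameterized macro... */
#define FAKE_DEFINE_WAIT_FUNC(name, function) \
	struct fake_wait name = { .func = (function) }

/* ...and the old macro expressed as its default case. */
#define FAKE_DEFINE_WAIT(name) FAKE_DEFINE_WAIT_FUNC(name, default_wake)
```

Existing `DEFINE_WAIT` users are unaffected; the datagram patch further down is the first caller of the new variant.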
+1
include/net/bluetooth/hci.h
··· 101 101 /* HCI timeouts */ 102 102 #define HCI_CONNECT_TIMEOUT (40000) /* 40 seconds */ 103 103 #define HCI_DISCONN_TIMEOUT (2000) /* 2 seconds */ 104 + #define HCI_PAIRING_TIMEOUT (60000) /* 60 seconds */ 104 105 #define HCI_IDLE_TIMEOUT (6000) /* 6 seconds */ 105 106 #define HCI_INIT_TIMEOUT (10000) /* 10 seconds */ 106 107
+5 -3
include/net/bluetooth/hci_core.h
··· 171 171 __u8 auth_type; 172 172 __u8 sec_level; 173 173 __u8 power_save; 174 + __u16 disc_timeout; 174 175 unsigned long pend; 175 176 176 177 unsigned int sent; ··· 181 180 struct timer_list disc_timer; 182 181 struct timer_list idle_timer; 183 182 184 - struct work_struct work; 183 + struct work_struct work_add; 184 + struct work_struct work_del; 185 185 186 186 struct device dev; 187 187 ··· 350 348 if (conn->type == ACL_LINK) { 351 349 del_timer(&conn->idle_timer); 352 350 if (conn->state == BT_CONNECTED) { 353 - timeo = msecs_to_jiffies(HCI_DISCONN_TIMEOUT); 351 + timeo = msecs_to_jiffies(conn->disc_timeout); 354 352 if (!conn->out) 355 - timeo *= 5; 353 + timeo *= 2; 356 354 } else 357 355 timeo = msecs_to_jiffies(10); 358 356 } else
+2
net/8021q/vlan.c
··· 492 492 continue; 493 493 494 494 dev_change_flags(vlandev, flgs & ~IFF_UP); 495 + vlan_transfer_operstate(dev, vlandev); 495 496 } 496 497 break; 497 498 ··· 508 507 continue; 509 508 510 509 dev_change_flags(vlandev, flgs | IFF_UP); 510 + vlan_transfer_operstate(dev, vlandev); 511 511 } 512 512 break; 513 513
+5
net/8021q/vlan_dev.c
··· 462 462 if (vlan->flags & VLAN_FLAG_GVRP) 463 463 vlan_gvrp_request_join(dev); 464 464 465 + netif_carrier_on(dev); 465 466 return 0; 466 467 467 468 clear_allmulti: ··· 472 471 if (compare_ether_addr(dev->dev_addr, real_dev->dev_addr)) 473 472 dev_unicast_delete(real_dev, dev->dev_addr, ETH_ALEN); 474 473 out: 474 + netif_carrier_off(dev); 475 475 return err; 476 476 } 477 477 ··· 494 492 if (compare_ether_addr(dev->dev_addr, real_dev->dev_addr)) 495 493 dev_unicast_delete(real_dev, dev->dev_addr, dev->addr_len); 496 494 495 + netif_carrier_off(dev); 497 496 return 0; 498 497 } 499 498 ··· 614 611 { 615 612 struct net_device *real_dev = vlan_dev_info(dev)->real_dev; 616 613 int subclass = 0; 614 + 615 + netif_carrier_off(dev); 617 616 618 617 /* IFF_BROADCAST|IFF_MULTICAST; ??? */ 619 618 dev->flags = real_dev->flags & ~(IFF_UP | IFF_PROMISC | IFF_ALLMULTI);
+4 -6
net/bluetooth/hci_conn.c
··· 215 215 conn->state = BT_OPEN; 216 216 217 217 conn->power_save = 1; 218 + conn->disc_timeout = HCI_DISCONN_TIMEOUT; 218 219 219 220 switch (type) { 220 221 case ACL_LINK: ··· 425 424 if (sec_level == BT_SECURITY_SDP) 426 425 return 1; 427 426 428 - if (sec_level == BT_SECURITY_LOW) { 429 - if (conn->ssp_mode > 0 && conn->hdev->ssp_mode > 0) 430 - return hci_conn_auth(conn, sec_level, auth_type); 431 - else 432 - return 1; 433 - } 427 + if (sec_level == BT_SECURITY_LOW && 428 + (!conn->ssp_mode || !conn->hdev->ssp_mode)) 429 + return 1; 434 430 435 431 if (conn->link_mode & HCI_LM_ENCRYPT) 436 432 return hci_conn_auth(conn, sec_level, auth_type);
+35 -1
net/bluetooth/hci_event.c
··· 883 883 if (conn->type == ACL_LINK) { 884 884 conn->state = BT_CONFIG; 885 885 hci_conn_hold(conn); 886 + conn->disc_timeout = HCI_DISCONN_TIMEOUT; 886 887 } else 887 888 conn->state = BT_CONNECTED; 888 889 ··· 1064 1063 hci_proto_connect_cfm(conn, ev->status); 1065 1064 hci_conn_put(conn); 1066 1065 } 1067 - } else 1066 + } else { 1068 1067 hci_auth_cfm(conn, ev->status); 1068 + 1069 + hci_conn_hold(conn); 1070 + conn->disc_timeout = HCI_DISCONN_TIMEOUT; 1071 + hci_conn_put(conn); 1072 + } 1069 1073 1070 1074 if (test_bit(HCI_CONN_ENCRYPT_PEND, &conn->pend)) { 1071 1075 if (!ev->status) { ··· 1485 1479 1486 1480 static inline void hci_pin_code_request_evt(struct hci_dev *hdev, struct sk_buff *skb) 1487 1481 { 1482 + struct hci_ev_pin_code_req *ev = (void *) skb->data; 1483 + struct hci_conn *conn; 1484 + 1488 1485 BT_DBG("%s", hdev->name); 1486 + 1487 + hci_dev_lock(hdev); 1488 + 1489 + conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr); 1490 + if (conn) { 1491 + hci_conn_hold(conn); 1492 + conn->disc_timeout = HCI_PAIRING_TIMEOUT; 1493 + hci_conn_put(conn); 1494 + } 1495 + 1496 + hci_dev_unlock(hdev); 1489 1497 } 1490 1498 1491 1499 static inline void hci_link_key_request_evt(struct hci_dev *hdev, struct sk_buff *skb) ··· 1509 1489 1510 1490 static inline void hci_link_key_notify_evt(struct hci_dev *hdev, struct sk_buff *skb) 1511 1491 { 1492 + struct hci_ev_link_key_notify *ev = (void *) skb->data; 1493 + struct hci_conn *conn; 1494 + 1512 1495 BT_DBG("%s", hdev->name); 1496 + 1497 + hci_dev_lock(hdev); 1498 + 1499 + conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr); 1500 + if (conn) { 1501 + hci_conn_hold(conn); 1502 + conn->disc_timeout = HCI_DISCONN_TIMEOUT; 1503 + hci_conn_put(conn); 1504 + } 1505 + 1506 + hci_dev_unlock(hdev); 1513 1507 } 1514 1508 1515 1509 static inline void hci_clock_offset_evt(struct hci_dev *hdev, struct sk_buff *skb)
+16 -21
net/bluetooth/hci_sysfs.c
··· 9 9 struct class *bt_class = NULL; 10 10 EXPORT_SYMBOL_GPL(bt_class); 11 11 12 - static struct workqueue_struct *btaddconn; 13 - static struct workqueue_struct *btdelconn; 12 + static struct workqueue_struct *bluetooth; 14 13 15 14 static inline char *link_typetostr(int type) 16 15 { ··· 87 88 88 89 static void add_conn(struct work_struct *work) 89 90 { 90 - struct hci_conn *conn = container_of(work, struct hci_conn, work); 91 + struct hci_conn *conn = container_of(work, struct hci_conn, work_add); 91 92 92 - flush_workqueue(btdelconn); 93 + /* ensure previous add/del is complete */ 94 + flush_workqueue(bluetooth); 93 95 94 96 if (device_add(&conn->dev) < 0) { 95 97 BT_ERR("Failed to register connection device"); ··· 114 114 115 115 device_initialize(&conn->dev); 116 116 117 - INIT_WORK(&conn->work, add_conn); 117 + INIT_WORK(&conn->work_add, add_conn); 118 118 119 - queue_work(btaddconn, &conn->work); 119 + queue_work(bluetooth, &conn->work_add); 120 120 } 121 121 122 122 /* ··· 131 131 132 132 static void del_conn(struct work_struct *work) 133 133 { 134 - struct hci_conn *conn = container_of(work, struct hci_conn, work); 134 + struct hci_conn *conn = container_of(work, struct hci_conn, work_del); 135 135 struct hci_dev *hdev = conn->hdev; 136 + 137 + /* ensure previous add/del is complete */ 138 + flush_workqueue(bluetooth); 136 139 137 140 while (1) { 138 141 struct device *dev; ··· 159 156 if (!device_is_registered(&conn->dev)) 160 157 return; 161 158 162 - INIT_WORK(&conn->work, del_conn); 159 + INIT_WORK(&conn->work_del, del_conn); 163 160 164 - queue_work(btdelconn, &conn->work); 161 + queue_work(bluetooth, &conn->work_del); 165 162 } 166 163 167 164 static inline char *host_typetostr(int type) ··· 438 435 439 436 int __init bt_sysfs_init(void) 440 437 { 441 - btaddconn = create_singlethread_workqueue("btaddconn"); 442 - if (!btaddconn) 438 + bluetooth = create_singlethread_workqueue("bluetooth"); 439 + if (!bluetooth) 443 440 return -ENOMEM; 444 - 445 - 
btdelconn = create_singlethread_workqueue("btdelconn"); 446 - if (!btdelconn) { 447 - destroy_workqueue(btaddconn); 448 - return -ENOMEM; 449 - } 450 441 451 442 bt_class = class_create(THIS_MODULE, "bluetooth"); 452 443 if (IS_ERR(bt_class)) { 453 - destroy_workqueue(btdelconn); 454 - destroy_workqueue(btaddconn); 444 + destroy_workqueue(bluetooth); 455 445 return PTR_ERR(bt_class); 456 446 } 457 447 ··· 453 457 454 458 void bt_sysfs_cleanup(void) 455 459 { 456 - destroy_workqueue(btaddconn); 457 - destroy_workqueue(btdelconn); 460 + destroy_workqueue(bluetooth); 458 461 459 462 class_destroy(bt_class); 460 463 }
+9 -1
net/bridge/br_netfilter.c
··· 788 788 return NF_STOLEN; 789 789 } 790 790 791 + #if defined(CONFIG_NF_CONNTRACK_IPV4) || defined(CONFIG_NF_CONNTRACK_IPV4_MODULE) 791 792 static int br_nf_dev_queue_xmit(struct sk_buff *skb) 792 793 { 793 - if (skb->protocol == htons(ETH_P_IP) && 794 + if (skb->nfct != NULL && 795 + (skb->protocol == htons(ETH_P_IP) || IS_VLAN_IP(skb)) && 794 796 skb->len > skb->dev->mtu && 795 797 !skb_is_gso(skb)) 796 798 return ip_fragment(skb, br_dev_queue_push_xmit); 797 799 else 798 800 return br_dev_queue_push_xmit(skb); 799 801 } 802 + #else 803 + static int br_nf_dev_queue_xmit(struct sk_buff *skb) 804 + { 805 + return br_dev_queue_push_xmit(skb); 806 + } 807 + #endif 800 808 801 809 /* PF_BRIDGE/POST_ROUTING ********************************************/ 802 810 static unsigned int br_nf_post_routing(unsigned int hook, struct sk_buff *skb,
+13 -1
net/core/datagram.c
··· 64 64 return sk->sk_type == SOCK_SEQPACKET || sk->sk_type == SOCK_STREAM; 65 65 } 66 66 67 + static int receiver_wake_function(wait_queue_t *wait, unsigned mode, int sync, 68 + void *key) 69 + { 70 + unsigned long bits = (unsigned long)key; 71 + 72 + /* 73 + * Avoid a wakeup if event not interesting for us 74 + */ 75 + if (bits && !(bits & (POLLIN | POLLERR))) 76 + return 0; 77 + return autoremove_wake_function(wait, mode, sync, key); 78 + } 67 79 /* 68 80 * Wait for a packet.. 69 81 */ 70 82 static int wait_for_packet(struct sock *sk, int *err, long *timeo_p) 71 83 { 72 84 int error; 73 - DEFINE_WAIT(wait); 85 + DEFINE_WAIT_FUNC(wait, receiver_wake_function); 74 86 75 87 prepare_to_wait_exclusive(sk->sk_sleep, &wait, TASK_INTERRUPTIBLE); 76 88
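`receiver_wake_function()` above uses the new `DEFINE_WAIT_FUNC` hook to drop wakeups whose event key carries neither `POLLIN` nor `POLLERR`, sparing a blocked receiver from irrelevant wakeups (e.g. a `POLLOUT` notification). A userspace sketch of the filtering predicate, using the real `<poll.h>` bit constants but an invented function name:

```c
#include <poll.h>

/* Return 0 to skip a wakeup whose event bits are all uninteresting
 * to a receiver; return 1 to proceed (standing in for the fallthrough
 * to autoremove_wake_function). A zero key means "no event info",
 * which must still wake. */
static int fake_receiver_wake(unsigned long bits)
{
	if (bits && !(bits & (POLLIN | POLLERR)))
		return 0;
	return 1;
}
```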
+36 -89
net/ipv4/netfilter/arp_tables.c
··· 253 253 indev = in ? in->name : nulldevname; 254 254 outdev = out ? out->name : nulldevname; 255 255 256 - rcu_read_lock_bh(); 257 - private = rcu_dereference(table->private); 258 - table_base = rcu_dereference(private->entries[smp_processor_id()]); 256 + xt_info_rdlock_bh(); 257 + private = table->private; 258 + table_base = private->entries[smp_processor_id()]; 259 259 260 260 e = get_entry(table_base, private->hook_entry[hook]); 261 261 back = get_entry(table_base, private->underflow[hook]); ··· 273 273 274 274 hdr_len = sizeof(*arp) + (2 * sizeof(struct in_addr)) + 275 275 (2 * skb->dev->addr_len); 276 + 276 277 ADD_COUNTER(e->counters, hdr_len, 1); 277 278 278 279 t = arpt_get_target(e); ··· 329 328 e = (void *)e + e->next_offset; 330 329 } 331 330 } while (!hotdrop); 332 - 333 - rcu_read_unlock_bh(); 331 + xt_info_rdunlock_bh(); 334 332 335 333 if (hotdrop) 336 334 return NF_DROP; ··· 711 711 /* Instead of clearing (by a previous call to memset()) 712 712 * the counters and using adds, we set the counters 713 713 * with data used by 'current' CPU 714 - * We dont care about preemption here. 714 + * 715 + * Bottom half has to be disabled to prevent deadlock 716 + * if new softirq were to run and call ipt_do_table 715 717 */ 716 - curcpu = raw_smp_processor_id(); 718 + local_bh_disable(); 719 + curcpu = smp_processor_id(); 717 720 718 721 i = 0; 719 722 ARPT_ENTRY_ITERATE(t->entries[curcpu], ··· 729 726 if (cpu == curcpu) 730 727 continue; 731 728 i = 0; 729 + xt_info_wrlock(cpu); 732 730 ARPT_ENTRY_ITERATE(t->entries[cpu], 733 731 t->size, 734 732 add_entry_to_counter, 735 733 counters, 736 734 &i); 735 + xt_info_wrunlock(cpu); 737 736 } 738 - } 739 - 740 - 741 - /* We're lazy, and add to the first CPU; overflow works its fey magic 742 - * and everything is OK. 
*/ 743 - static int 744 - add_counter_to_entry(struct arpt_entry *e, 745 - const struct xt_counters addme[], 746 - unsigned int *i) 747 - { 748 - ADD_COUNTER(e->counters, addme[*i].bcnt, addme[*i].pcnt); 749 - 750 - (*i)++; 751 - return 0; 752 - } 753 - 754 - /* Take values from counters and add them back onto the current cpu */ 755 - static void put_counters(struct xt_table_info *t, 756 - const struct xt_counters counters[]) 757 - { 758 - unsigned int i, cpu; 759 - 760 - local_bh_disable(); 761 - cpu = smp_processor_id(); 762 - i = 0; 763 - ARPT_ENTRY_ITERATE(t->entries[cpu], 764 - t->size, 765 - add_counter_to_entry, 766 - counters, 767 - &i); 768 737 local_bh_enable(); 769 - } 770 - 771 - static inline int 772 - zero_entry_counter(struct arpt_entry *e, void *arg) 773 - { 774 - e->counters.bcnt = 0; 775 - e->counters.pcnt = 0; 776 - return 0; 777 - } 778 - 779 - static void 780 - clone_counters(struct xt_table_info *newinfo, const struct xt_table_info *info) 781 - { 782 - unsigned int cpu; 783 - const void *loc_cpu_entry = info->entries[raw_smp_processor_id()]; 784 - 785 - memcpy(newinfo, info, offsetof(struct xt_table_info, entries)); 786 - for_each_possible_cpu(cpu) { 787 - memcpy(newinfo->entries[cpu], loc_cpu_entry, info->size); 788 - ARPT_ENTRY_ITERATE(newinfo->entries[cpu], newinfo->size, 789 - zero_entry_counter, NULL); 790 - } 791 738 } 792 739 793 740 static struct xt_counters *alloc_counters(struct xt_table *table) ··· 745 792 unsigned int countersize; 746 793 struct xt_counters *counters; 747 794 struct xt_table_info *private = table->private; 748 - struct xt_table_info *info; 749 795 750 796 /* We need atomic snapshot of counters: rest doesn't change 751 797 * (other than comefrom, which userspace doesn't care ··· 754 802 counters = vmalloc_node(countersize, numa_node_id()); 755 803 756 804 if (counters == NULL) 757 - goto nomem; 805 + return ERR_PTR(-ENOMEM); 758 806 759 - info = xt_alloc_table_info(private->size); 760 - if (!info) 761 - goto 
free_counters; 762 - 763 - clone_counters(info, private); 764 - 765 - mutex_lock(&table->lock); 766 - xt_table_entry_swap_rcu(private, info); 767 - synchronize_net(); /* Wait until smoke has cleared */ 768 - 769 - get_counters(info, counters); 770 - put_counters(private, counters); 771 - mutex_unlock(&table->lock); 772 - 773 - xt_free_table_info(info); 807 + get_counters(private, counters); 774 808 775 809 return counters; 776 - 777 - free_counters: 778 - vfree(counters); 779 - nomem: 780 - return ERR_PTR(-ENOMEM); 781 810 } 782 811 783 812 static int copy_entries_to_user(unsigned int total_size, ··· 1027 1094 (newinfo->number <= oldinfo->initial_entries)) 1028 1095 module_put(t->me); 1029 1096 1030 - /* Get the old counters. */ 1097 + /* Get the old counters, and synchronize with replace */ 1031 1098 get_counters(oldinfo, counters); 1099 + 1032 1100 /* Decrease module usage counts and free resource */ 1033 1101 loc_cpu_old_entry = oldinfo->entries[raw_smp_processor_id()]; 1034 1102 ARPT_ENTRY_ITERATE(loc_cpu_old_entry, oldinfo->size, cleanup_entry, ··· 1099 1165 return ret; 1100 1166 } 1101 1167 1168 + /* We're lazy, and add to the first CPU; overflow works its fey magic 1169 + * and everything is OK. 
*/ 1170 + static int 1171 + add_counter_to_entry(struct arpt_entry *e, 1172 + const struct xt_counters addme[], 1173 + unsigned int *i) 1174 + { 1175 + ADD_COUNTER(e->counters, addme[*i].bcnt, addme[*i].pcnt); 1176 + 1177 + (*i)++; 1178 + return 0; 1179 + } 1180 + 1102 1181 static int do_add_counters(struct net *net, void __user *user, unsigned int len, 1103 1182 int compat) 1104 1183 { 1105 - unsigned int i; 1184 + unsigned int i, curcpu; 1106 1185 struct xt_counters_info tmp; 1107 1186 struct xt_counters *paddc; 1108 1187 unsigned int num_counters; ··· 1171 1224 goto free; 1172 1225 } 1173 1226 1174 - mutex_lock(&t->lock); 1227 + local_bh_disable(); 1175 1228 private = t->private; 1176 1229 if (private->number != num_counters) { 1177 1230 ret = -EINVAL; 1178 1231 goto unlock_up_free; 1179 1232 } 1180 1233 1181 - preempt_disable(); 1182 1234 i = 0; 1183 1235 /* Choose the copy that is on our node */ 1184 - loc_cpu_entry = private->entries[smp_processor_id()]; 1236 + curcpu = smp_processor_id(); 1237 + loc_cpu_entry = private->entries[curcpu]; 1238 + xt_info_wrlock(curcpu); 1185 1239 ARPT_ENTRY_ITERATE(loc_cpu_entry, 1186 1240 private->size, 1187 1241 add_counter_to_entry, 1188 1242 paddc, 1189 1243 &i); 1190 - preempt_enable(); 1244 + xt_info_wrunlock(curcpu); 1191 1245 unlock_up_free: 1192 - mutex_unlock(&t->lock); 1193 - 1246 + local_bh_enable(); 1194 1247 xt_table_unlock(t); 1195 1248 module_put(t->me); 1196 1249 free:
+35 -91
net/ipv4/netfilter/ip_tables.c
··· 338 338 tgpar.hooknum = hook; 339 339 340 340 IP_NF_ASSERT(table->valid_hooks & (1 << hook)); 341 - 342 - rcu_read_lock_bh(); 343 - private = rcu_dereference(table->private); 344 - table_base = rcu_dereference(private->entries[smp_processor_id()]); 341 + xt_info_rdlock_bh(); 342 + private = table->private; 343 + table_base = private->entries[smp_processor_id()]; 345 344 346 345 e = get_entry(table_base, private->hook_entry[hook]); 347 346 ··· 435 436 e = (void *)e + e->next_offset; 436 437 } 437 438 } while (!hotdrop); 438 - 439 - rcu_read_unlock_bh(); 439 + xt_info_rdunlock_bh(); 440 440 441 441 #ifdef DEBUG_ALLOW_ALL 442 442 return NF_ACCEPT; ··· 894 896 895 897 /* Instead of clearing (by a previous call to memset()) 896 898 * the counters and using adds, we set the counters 897 - * with data used by 'current' CPU 898 - * We dont care about preemption here. 899 + * with data used by 'current' CPU. 900 + * 901 + * Bottom half has to be disabled to prevent deadlock 902 + * if new softirq were to run and call ipt_do_table 899 903 */ 900 - curcpu = raw_smp_processor_id(); 904 + local_bh_disable(); 905 + curcpu = smp_processor_id(); 901 906 902 907 i = 0; 903 908 IPT_ENTRY_ITERATE(t->entries[curcpu], ··· 913 912 if (cpu == curcpu) 914 913 continue; 915 914 i = 0; 915 + xt_info_wrlock(cpu); 916 916 IPT_ENTRY_ITERATE(t->entries[cpu], 917 917 t->size, 918 918 add_entry_to_counter, 919 919 counters, 920 920 &i); 921 + xt_info_wrunlock(cpu); 921 922 } 922 - 923 - } 924 - 925 - /* We're lazy, and add to the first CPU; overflow works its fey magic 926 - * and everything is OK. 
*/ 927 - static int 928 - add_counter_to_entry(struct ipt_entry *e, 929 - const struct xt_counters addme[], 930 - unsigned int *i) 931 - { 932 - ADD_COUNTER(e->counters, addme[*i].bcnt, addme[*i].pcnt); 933 - 934 - (*i)++; 935 - return 0; 936 - } 937 - 938 - /* Take values from counters and add them back onto the current cpu */ 939 - static void put_counters(struct xt_table_info *t, 940 - const struct xt_counters counters[]) 941 - { 942 - unsigned int i, cpu; 943 - 944 - local_bh_disable(); 945 - cpu = smp_processor_id(); 946 - i = 0; 947 - IPT_ENTRY_ITERATE(t->entries[cpu], 948 - t->size, 949 - add_counter_to_entry, 950 - counters, 951 - &i); 952 923 local_bh_enable(); 953 - } 954 - 955 - 956 - static inline int 957 - zero_entry_counter(struct ipt_entry *e, void *arg) 958 - { 959 - e->counters.bcnt = 0; 960 - e->counters.pcnt = 0; 961 - return 0; 962 - } 963 - 964 - static void 965 - clone_counters(struct xt_table_info *newinfo, const struct xt_table_info *info) 966 - { 967 - unsigned int cpu; 968 - const void *loc_cpu_entry = info->entries[raw_smp_processor_id()]; 969 - 970 - memcpy(newinfo, info, offsetof(struct xt_table_info, entries)); 971 - for_each_possible_cpu(cpu) { 972 - memcpy(newinfo->entries[cpu], loc_cpu_entry, info->size); 973 - IPT_ENTRY_ITERATE(newinfo->entries[cpu], newinfo->size, 974 - zero_entry_counter, NULL); 975 - } 976 924 } 977 925 978 926 static struct xt_counters * alloc_counters(struct xt_table *table) ··· 929 979 unsigned int countersize; 930 980 struct xt_counters *counters; 931 981 struct xt_table_info *private = table->private; 932 - struct xt_table_info *info; 933 982 934 983 /* We need atomic snapshot of counters: rest doesn't change 935 984 (other than comefrom, which userspace doesn't care ··· 937 988 counters = vmalloc_node(countersize, numa_node_id()); 938 989 939 990 if (counters == NULL) 940 - goto nomem; 991 + return ERR_PTR(-ENOMEM); 941 992 942 - info = xt_alloc_table_info(private->size); 943 - if (!info) 944 - goto 
free_counters; 945 - 946 - clone_counters(info, private); 947 - 948 - mutex_lock(&table->lock); 949 - xt_table_entry_swap_rcu(private, info); 950 - synchronize_net(); /* Wait until smoke has cleared */ 951 - 952 - get_counters(info, counters); 953 - put_counters(private, counters); 954 - mutex_unlock(&table->lock); 955 - 956 - xt_free_table_info(info); 993 + get_counters(private, counters); 957 994 958 995 return counters; 959 - 960 - free_counters: 961 - vfree(counters); 962 - nomem: 963 - return ERR_PTR(-ENOMEM); 964 996 } 965 997 966 998 static int ··· 1236 1306 (newinfo->number <= oldinfo->initial_entries)) 1237 1307 module_put(t->me); 1238 1308 1239 - /* Get the old counters. */ 1309 + /* Get the old counters, and synchronize with replace */ 1240 1310 get_counters(oldinfo, counters); 1311 + 1241 1312 /* Decrease module usage counts and free resource */ 1242 1313 loc_cpu_old_entry = oldinfo->entries[raw_smp_processor_id()]; 1243 1314 IPT_ENTRY_ITERATE(loc_cpu_old_entry, oldinfo->size, cleanup_entry, ··· 1308 1377 return ret; 1309 1378 } 1310 1379 1380 + /* We're lazy, and add to the first CPU; overflow works its fey magic 1381 + * and everything is OK. 
*/ 1382 + static int 1383 + add_counter_to_entry(struct ipt_entry *e, 1384 + const struct xt_counters addme[], 1385 + unsigned int *i) 1386 + { 1387 + ADD_COUNTER(e->counters, addme[*i].bcnt, addme[*i].pcnt); 1388 + 1389 + (*i)++; 1390 + return 0; 1391 + } 1311 1392 1312 1393 static int 1313 1394 do_add_counters(struct net *net, void __user *user, unsigned int len, int compat) 1314 1395 { 1315 - unsigned int i; 1396 + unsigned int i, curcpu; 1316 1397 struct xt_counters_info tmp; 1317 1398 struct xt_counters *paddc; 1318 1399 unsigned int num_counters; ··· 1380 1437 goto free; 1381 1438 } 1382 1439 1383 - mutex_lock(&t->lock); 1440 + local_bh_disable(); 1384 1441 private = t->private; 1385 1442 if (private->number != num_counters) { 1386 1443 ret = -EINVAL; 1387 1444 goto unlock_up_free; 1388 1445 } 1389 1446 1390 - preempt_disable(); 1391 1447 i = 0; 1392 1448 /* Choose the copy that is on our node */ 1393 - loc_cpu_entry = private->entries[raw_smp_processor_id()]; 1449 + curcpu = smp_processor_id(); 1450 + loc_cpu_entry = private->entries[curcpu]; 1451 + xt_info_wrlock(curcpu); 1394 1452 IPT_ENTRY_ITERATE(loc_cpu_entry, 1395 1453 private->size, 1396 1454 add_counter_to_entry, 1397 1455 paddc, 1398 1456 &i); 1399 - preempt_enable(); 1457 + xt_info_wrunlock(curcpu); 1400 1458 unlock_up_free: 1401 - mutex_unlock(&t->lock); 1459 + local_bh_enable(); 1402 1460 xt_table_unlock(t); 1403 1461 module_put(t->me); 1404 1462 free:
+1 -1
net/ipv4/route.c
··· 3397 3397 0, 3398 3398 &rt_hash_log, 3399 3399 &rt_hash_mask, 3400 - 0); 3400 + rhash_entries ? 0 : 512 * 1024); 3401 3401 memset(rt_hash_table, 0, (rt_hash_mask + 1) * sizeof(struct rt_hash_bucket)); 3402 3402 rt_hash_lock_init(); 3403 3403
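The one-line route.c change caps the route cache hash at 512K entries unless the admin explicitly passed `rhash_entries=` on the command line (a zero limit lets `alloc_large_system_hash()` scale with memory). The argument selection, isolated as a tiny helper (the helper itself is invented for illustration):

```c
/* Pick the high_limit argument for alloc_large_system_hash():
 * a nonzero rhash_entries is an explicit user request, so no cap;
 * otherwise bound the table at 512K buckets. */
static unsigned long rt_hash_limit(unsigned long rhash_entries)
{
	return rhash_entries ? 0 : 512 * 1024;
}
```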
+37 -86
net/ipv6/netfilter/ip6_tables.c
··· 365 365 366 366 IP_NF_ASSERT(table->valid_hooks & (1 << hook)); 367 367 368 - rcu_read_lock_bh(); 369 - private = rcu_dereference(table->private); 370 - table_base = rcu_dereference(private->entries[smp_processor_id()]); 368 + xt_info_rdlock_bh(); 369 + private = table->private; 370 + table_base = private->entries[smp_processor_id()]; 371 371 372 372 e = get_entry(table_base, private->hook_entry[hook]); 373 373 ··· 466 466 #ifdef CONFIG_NETFILTER_DEBUG 467 467 ((struct ip6t_entry *)table_base)->comefrom = NETFILTER_LINK_POISON; 468 468 #endif 469 - rcu_read_unlock_bh(); 469 + xt_info_rdunlock_bh(); 470 470 471 471 #ifdef DEBUG_ALLOW_ALL 472 472 return NF_ACCEPT; ··· 926 926 /* Instead of clearing (by a previous call to memset()) 927 927 * the counters and using adds, we set the counters 928 928 * with data used by 'current' CPU 929 - * We dont care about preemption here. 929 + * 930 + * Bottom half has to be disabled to prevent deadlock 931 + * if new softirq were to run and call ipt_do_table 930 932 */ 931 - curcpu = raw_smp_processor_id(); 933 + local_bh_disable(); 934 + curcpu = smp_processor_id(); 932 935 933 936 i = 0; 934 937 IP6T_ENTRY_ITERATE(t->entries[curcpu], ··· 944 941 if (cpu == curcpu) 945 942 continue; 946 943 i = 0; 944 + xt_info_wrlock(cpu); 947 945 IP6T_ENTRY_ITERATE(t->entries[cpu], 948 946 t->size, 949 947 add_entry_to_counter, 950 948 counters, 951 949 &i); 950 + xt_info_wrunlock(cpu); 952 951 } 953 - } 954 - 955 - /* We're lazy, and add to the first CPU; overflow works its fey magic 956 - * and everything is OK. 
*/ 957 - static int 958 - add_counter_to_entry(struct ip6t_entry *e, 959 - const struct xt_counters addme[], 960 - unsigned int *i) 961 - { 962 - ADD_COUNTER(e->counters, addme[*i].bcnt, addme[*i].pcnt); 963 - 964 - (*i)++; 965 - return 0; 966 - } 967 - 968 - /* Take values from counters and add them back onto the current cpu */ 969 - static void put_counters(struct xt_table_info *t, 970 - const struct xt_counters counters[]) 971 - { 972 - unsigned int i, cpu; 973 - 974 - local_bh_disable(); 975 - cpu = smp_processor_id(); 976 - i = 0; 977 - IP6T_ENTRY_ITERATE(t->entries[cpu], 978 - t->size, 979 - add_counter_to_entry, 980 - counters, 981 - &i); 982 952 local_bh_enable(); 983 - } 984 - 985 - static inline int 986 - zero_entry_counter(struct ip6t_entry *e, void *arg) 987 - { 988 - e->counters.bcnt = 0; 989 - e->counters.pcnt = 0; 990 - return 0; 991 - } 992 - 993 - static void 994 - clone_counters(struct xt_table_info *newinfo, const struct xt_table_info *info) 995 - { 996 - unsigned int cpu; 997 - const void *loc_cpu_entry = info->entries[raw_smp_processor_id()]; 998 - 999 - memcpy(newinfo, info, offsetof(struct xt_table_info, entries)); 1000 - for_each_possible_cpu(cpu) { 1001 - memcpy(newinfo->entries[cpu], loc_cpu_entry, info->size); 1002 - IP6T_ENTRY_ITERATE(newinfo->entries[cpu], newinfo->size, 1003 - zero_entry_counter, NULL); 1004 - } 1005 953 } 1006 954 1007 955 static struct xt_counters *alloc_counters(struct xt_table *table) ··· 960 1006 unsigned int countersize; 961 1007 struct xt_counters *counters; 962 1008 struct xt_table_info *private = table->private; 963 - struct xt_table_info *info; 964 1009 965 1010 /* We need atomic snapshot of counters: rest doesn't change 966 1011 (other than comefrom, which userspace doesn't care ··· 968 1015 counters = vmalloc_node(countersize, numa_node_id()); 969 1016 970 1017 if (counters == NULL) 971 - goto nomem; 1018 + return ERR_PTR(-ENOMEM); 972 1019 973 - info = xt_alloc_table_info(private->size); 974 - if (!info) 
975 - goto free_counters; 976 - 977 - clone_counters(info, private); 978 - 979 - mutex_lock(&table->lock); 980 - xt_table_entry_swap_rcu(private, info); 981 - synchronize_net(); /* Wait until smoke has cleared */ 982 - 983 - get_counters(info, counters); 984 - put_counters(private, counters); 985 - mutex_unlock(&table->lock); 986 - 987 - xt_free_table_info(info); 1020 + get_counters(private, counters); 988 1021 989 1022 return counters; 990 - 991 - free_counters: 992 - vfree(counters); 993 - nomem: 994 - return ERR_PTR(-ENOMEM); 995 1023 } 996 1024 997 1025 static int ··· 1268 1334 (newinfo->number <= oldinfo->initial_entries)) 1269 1335 module_put(t->me); 1270 1336 1271 - /* Get the old counters. */ 1337 + /* Get the old counters, and synchronize with replace */ 1272 1338 get_counters(oldinfo, counters); 1339 + 1273 1340 /* Decrease module usage counts and free resource */ 1274 1341 loc_cpu_old_entry = oldinfo->entries[raw_smp_processor_id()]; 1275 1342 IP6T_ENTRY_ITERATE(loc_cpu_old_entry, oldinfo->size, cleanup_entry, ··· 1340 1405 return ret; 1341 1406 } 1342 1407 1408 + /* We're lazy, and add to the first CPU; overflow works its fey magic 1409 + * and everything is OK. 
*/ 1410 + static int 1411 + add_counter_to_entry(struct ip6t_entry *e, 1412 + const struct xt_counters addme[], 1413 + unsigned int *i) 1414 + { 1415 + ADD_COUNTER(e->counters, addme[*i].bcnt, addme[*i].pcnt); 1416 + 1417 + (*i)++; 1418 + return 0; 1419 + } 1420 + 1343 1421 static int 1344 1422 do_add_counters(struct net *net, void __user *user, unsigned int len, 1345 1423 int compat) 1346 1424 { 1347 - unsigned int i; 1425 + unsigned int i, curcpu; 1348 1426 struct xt_counters_info tmp; 1349 1427 struct xt_counters *paddc; 1350 1428 unsigned int num_counters; ··· 1413 1465 goto free; 1414 1466 } 1415 1467 1416 - mutex_lock(&t->lock); 1468 + 1469 + local_bh_disable(); 1417 1470 private = t->private; 1418 1471 if (private->number != num_counters) { 1419 1472 ret = -EINVAL; 1420 1473 goto unlock_up_free; 1421 1474 } 1422 1475 1423 - preempt_disable(); 1424 1476 i = 0; 1425 1477 /* Choose the copy that is on our node */ 1426 - loc_cpu_entry = private->entries[raw_smp_processor_id()]; 1478 + curcpu = smp_processor_id(); 1479 + xt_info_wrlock(curcpu); 1480 + loc_cpu_entry = private->entries[curcpu]; 1427 1481 IP6T_ENTRY_ITERATE(loc_cpu_entry, 1428 1482 private->size, 1429 1483 add_counter_to_entry, 1430 1484 paddc, 1431 1485 &i); 1432 - preempt_enable(); 1486 + xt_info_wrunlock(curcpu); 1487 + 1433 1488 unlock_up_free: 1434 - mutex_unlock(&t->lock); 1489 + local_bh_enable(); 1435 1490 xt_table_unlock(t); 1436 1491 module_put(t->me); 1437 1492 free:
+2 -2
net/netfilter/Kconfig
··· 275 275 help 276 276 This option enables support for a netlink-based userspace interface 277 277 278 + endif # NF_CONNTRACK 279 + 278 280 # transparent proxy support 279 281 config NETFILTER_TPROXY 280 282 tristate "Transparent proxying support (EXPERIMENTAL)" ··· 291 289 see Documentation/networking/tproxy.txt. 292 290 293 291 To compile it as a module, choose M here. If unsure, say N. 294 - 295 - endif # NF_CONNTRACK 296 292 297 293 config NETFILTER_XTABLES 298 294 tristate "Netfilter Xtables support (required for ip_tables)"
+15 -1
net/netfilter/nf_conntrack_proto_dccp.c
··· 633 633 if (!nest_parms) 634 634 goto nla_put_failure; 635 635 NLA_PUT_U8(skb, CTA_PROTOINFO_DCCP_STATE, ct->proto.dccp.state); 636 + NLA_PUT_U8(skb, CTA_PROTOINFO_DCCP_ROLE, 637 + ct->proto.dccp.role[IP_CT_DIR_ORIGINAL]); 636 638 nla_nest_end(skb, nest_parms); 637 639 read_unlock_bh(&dccp_lock); 638 640 return 0; ··· 646 644 647 645 static const struct nla_policy dccp_nla_policy[CTA_PROTOINFO_DCCP_MAX + 1] = { 648 646 [CTA_PROTOINFO_DCCP_STATE] = { .type = NLA_U8 }, 647 + [CTA_PROTOINFO_DCCP_ROLE] = { .type = NLA_U8 }, 649 648 }; 650 649 651 650 static int nlattr_to_dccp(struct nlattr *cda[], struct nf_conn *ct) ··· 664 661 return err; 665 662 666 663 if (!tb[CTA_PROTOINFO_DCCP_STATE] || 667 - nla_get_u8(tb[CTA_PROTOINFO_DCCP_STATE]) >= CT_DCCP_IGNORE) 664 + !tb[CTA_PROTOINFO_DCCP_ROLE] || 665 + nla_get_u8(tb[CTA_PROTOINFO_DCCP_ROLE]) > CT_DCCP_ROLE_MAX || 666 + nla_get_u8(tb[CTA_PROTOINFO_DCCP_STATE]) >= CT_DCCP_IGNORE) { 668 667 return -EINVAL; 668 + } 669 669 670 670 write_lock_bh(&dccp_lock); 671 671 ct->proto.dccp.state = nla_get_u8(tb[CTA_PROTOINFO_DCCP_STATE]); 672 + if (nla_get_u8(tb[CTA_PROTOINFO_DCCP_ROLE]) == CT_DCCP_ROLE_CLIENT) { 673 + ct->proto.dccp.role[IP_CT_DIR_ORIGINAL] = CT_DCCP_ROLE_CLIENT; 674 + ct->proto.dccp.role[IP_CT_DIR_REPLY] = CT_DCCP_ROLE_SERVER; 675 + } else { 676 + ct->proto.dccp.role[IP_CT_DIR_ORIGINAL] = CT_DCCP_ROLE_SERVER; 677 + ct->proto.dccp.role[IP_CT_DIR_REPLY] = CT_DCCP_ROLE_CLIENT; 678 + } 672 679 write_unlock_bh(&dccp_lock); 673 680 return 0; 674 681 } ··· 790 777 .print_conntrack = dccp_print_conntrack, 791 778 #if defined(CONFIG_NF_CT_NETLINK) || defined(CONFIG_NF_CT_NETLINK_MODULE) 792 779 .to_nlattr = dccp_to_nlattr, 780 + .nlattr_size = dccp_nlattr_size, 793 781 .from_nlattr = nlattr_to_dccp, 794 782 .tuple_to_nlattr = nf_ct_port_tuple_to_nlattr, 795 783 .nlattr_tuple_size = nf_ct_port_nlattr_tuple_size,
+1
net/netfilter/nf_conntrack_proto_udplite.c
··· 204 204 .error = udplite_error, 205 205 #if defined(CONFIG_NF_CT_NETLINK) || defined(CONFIG_NF_CT_NETLINK_MODULE) 206 206 .tuple_to_nlattr = nf_ct_port_tuple_to_nlattr, 207 + .nlattr_tuple_size = nf_ct_port_nlattr_tuple_size, 207 208 .nlattr_to_tuple = nf_ct_port_nlattr_to_tuple, 208 209 .nla_policy = nf_ct_port_nla_policy, 209 210 #endif
+28 -25
net/netfilter/x_tables.c
··· 625 625 } 626 626 EXPORT_SYMBOL(xt_free_table_info); 627 627 628 - void xt_table_entry_swap_rcu(struct xt_table_info *oldinfo, 629 - struct xt_table_info *newinfo) 630 - { 631 - unsigned int cpu; 632 - 633 - for_each_possible_cpu(cpu) { 634 - void *p = oldinfo->entries[cpu]; 635 - rcu_assign_pointer(oldinfo->entries[cpu], newinfo->entries[cpu]); 636 - newinfo->entries[cpu] = p; 637 - } 638 - 639 - } 640 - EXPORT_SYMBOL_GPL(xt_table_entry_swap_rcu); 641 - 642 628 /* Find table by name, grabs mutex & ref. Returns ERR_PTR() on error. */ 643 629 struct xt_table *xt_find_table_lock(struct net *net, u_int8_t af, 644 630 const char *name) ··· 662 676 EXPORT_SYMBOL_GPL(xt_compat_unlock); 663 677 #endif 664 678 679 + DEFINE_PER_CPU(struct xt_info_lock, xt_info_locks); 680 + EXPORT_PER_CPU_SYMBOL_GPL(xt_info_locks); 681 + 682 + 665 683 struct xt_table_info * 666 684 xt_replace_table(struct xt_table *table, 667 685 unsigned int num_counters, 668 686 struct xt_table_info *newinfo, 669 687 int *error) 670 688 { 671 - struct xt_table_info *oldinfo, *private; 689 + struct xt_table_info *private; 672 690 673 691 /* Do the substitution. */ 674 - mutex_lock(&table->lock); 692 + local_bh_disable(); 675 693 private = table->private; 694 + 676 695 /* Check inside lock: is the old number correct? 
*/ 677 696 if (num_counters != private->number) { 678 697 duprintf("num_counters != table->private->number (%u/%u)\n", 679 698 num_counters, private->number); 680 - mutex_unlock(&table->lock); 699 + local_bh_enable(); 681 700 *error = -EAGAIN; 682 701 return NULL; 683 702 } 684 - oldinfo = private; 685 - rcu_assign_pointer(table->private, newinfo); 686 - newinfo->initial_entries = oldinfo->initial_entries; 687 - mutex_unlock(&table->lock); 688 703 689 - synchronize_net(); 690 - return oldinfo; 704 + table->private = newinfo; 705 + newinfo->initial_entries = private->initial_entries; 706 + 707 + /* 708 + * Even though table entries have now been swapped, other CPU's 709 + * may still be using the old entries. This is okay, because 710 + * resynchronization happens because of the locking done 711 + * during the get_counters() routine. 712 + */ 713 + local_bh_enable(); 714 + 715 + return private; 691 716 } 692 717 EXPORT_SYMBOL_GPL(xt_replace_table); 693 718 ··· 731 734 732 735 /* Simplifies replace_table code. */ 733 736 table->private = bootstrap; 734 - mutex_init(&table->lock); 735 737 736 738 if (!xt_replace_table(table, 0, newinfo, &ret)) 737 739 goto unlock; ··· 1143 1147 1144 1148 static int __init xt_init(void) 1145 1149 { 1146 - int i, rv; 1150 + unsigned int i; 1151 + int rv; 1152 + 1153 + for_each_possible_cpu(i) { 1154 + struct xt_info_lock *lock = &per_cpu(xt_info_locks, i); 1155 + spin_lock_init(&lock->lock); 1156 + lock->readers = 0; 1157 + } 1147 1158 1148 1159 xt = kmalloc(sizeof(struct xt_af) * NFPROTO_NUMPROTO, GFP_KERNEL); 1149 1160 if (!xt)
+4 -5
net/netfilter/xt_recent.c
··· 474 474 struct recent_table *t = pde->data; 475 475 struct recent_entry *e; 476 476 char buf[sizeof("+255.255.255.255")], *c = buf; 477 - __be32 addr; 477 + union nf_inet_addr addr = {}; 478 478 int add; 479 479 480 480 if (size > sizeof(buf)) ··· 506 506 add = 1; 507 507 break; 508 508 } 509 - addr = in_aton(c); 509 + addr.ip = in_aton(c); 510 510 511 511 spin_lock_bh(&recent_lock); 512 - e = recent_entry_lookup(t, (const void *)&addr, NFPROTO_IPV4, 0); 512 + e = recent_entry_lookup(t, &addr, NFPROTO_IPV4, 0); 513 513 if (e == NULL) { 514 514 if (add) 515 - recent_entry_init(t, (const void *)&addr, 516 - NFPROTO_IPV4, 0); 515 + recent_entry_init(t, &addr, NFPROTO_IPV4, 0); 517 516 } else { 518 517 if (add) 519 518 recent_entry_update(t, e);
+3 -3
net/xfrm/xfrm_state.c
··· 794 794 { 795 795 static xfrm_address_t saddr_wildcard = { }; 796 796 struct net *net = xp_net(pol); 797 - unsigned int h; 797 + unsigned int h, h_wildcard; 798 798 struct hlist_node *entry; 799 799 struct xfrm_state *x, *x0, *to_put; 800 800 int acquire_in_progress = 0; ··· 819 819 if (best) 820 820 goto found; 821 821 822 - h = xfrm_dst_hash(net, daddr, &saddr_wildcard, tmpl->reqid, family); 823 - hlist_for_each_entry(x, entry, net->xfrm.state_bydst+h, bydst) { 822 + h_wildcard = xfrm_dst_hash(net, daddr, &saddr_wildcard, tmpl->reqid, family); 823 + hlist_for_each_entry(x, entry, net->xfrm.state_bydst+h_wildcard, bydst) { 824 824 if (x->props.family == family && 825 825 x->props.reqid == tmpl->reqid && 826 826 !(x->props.flags & XFRM_STATE_WILDRECV) &&