Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6

* 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6: (53 commits)
[TCP]: Verify the presence of RETRANS bit when leaving FRTO
[IPV6]: Call inet6addr_chain notifiers on link down
[NET_SCHED]: Kill CONFIG_NET_CLS_POLICE
[NET_SCHED]: act_api: qdisc internal reclassify support
[NET_SCHED]: sch_dsmark: act_api support
[NET_SCHED]: sch_atm: act_api support
[NET_SCHED]: sch_atm: Lindent
[IPV6]: MSG_ERRQUEUE messages do not pass to connected raw sockets
[IPV4]: Cleanup call to __neigh_lookup()
[NET_SCHED]: Revert "avoid transmit softirq on watchdog wakeup" optimization
[NETFILTER]: nf_conntrack: UDPLITE support
[NETFILTER]: nf_conntrack: mark protocols __read_mostly
[NETFILTER]: x_tables: add connlimit match
[NETFILTER]: Lower *tables printk severity
[NETFILTER]: nf_conntrack: Don't track locally generated special ICMP error
[NETFILTER]: nf_conntrack: Introduces nf_ct_get_tuplepr and uses it
[NETFILTER]: nf_conntrack: make l3proto->prepare() generic and renames it
[NETFILTER]: nf_conntrack: Increment error count on parsing IPv4 header
[NET]: Add ethtool support for NETIF_F_IPV6_CSUM devices.
[AF_IUCV]: Add lock when updating accept_q
...

+2955 -1328
+59
Documentation/networking/mac80211-injection.txt
+How to use packet injection with mac80211
+=========================================
+
+mac80211 now allows arbitrary packets to be injected down any Monitor Mode
+interface from userland.  The packet you inject needs to be composed in the
+following format:
+
+ [ radiotap header ]
+ [ ieee80211 header ]
+ [ payload ]
+
+The radiotap format is discussed in
+./Documentation/networking/radiotap-headers.txt.
+
+Although 13 radiotap argument types are currently defined, most only make
+sense to appear on received packets.  Currently three kinds of argument are
+used by the injection code, although it knows to skip any other arguments
+that are present (facilitating replay of captured radiotap headers directly):
+
+ - IEEE80211_RADIOTAP_RATE - u8 arg in 500kbps units (0x02 --> 1Mbps)
+
+ - IEEE80211_RADIOTAP_ANTENNA - u8 arg, 0x00 = ant1, 0x01 = ant2
+
+ - IEEE80211_RADIOTAP_DBM_TX_POWER - u8 arg, dBm
+
+Here is an example of a valid radiotap header defining these three parameters:
+
+	0x00, 0x00,             // <-- radiotap version
+	0x0b, 0x00,             // <-- radiotap header length
+	0x04, 0x0c, 0x00, 0x00, // <-- bitmap
+	0x6c,                   // <-- rate
+	0x0c,                   // <-- tx power
+	0x01                    // <-- antenna
+
+The ieee80211 header follows immediately afterwards, looking for example like
+this:
+
+	0x08, 0x01, 0x00, 0x00,
+	0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+	0x13, 0x22, 0x33, 0x44, 0x55, 0x66,
+	0x13, 0x22, 0x33, 0x44, 0x55, 0x66,
+	0x10, 0x86
+
+Then lastly there is the payload.
+
+After composing the packet contents, it is sent by send()-ing it to a logical
+mac80211 interface that is in Monitor mode.  Libpcap can also be used (which
+is easier than doing the work to bind the socket to the right interface),
+along the following lines:
+
+	ppcap = pcap_open_live(szInterfaceName, 800, 1, 20, szErrbuf);
+	...
+	r = pcap_inject(ppcap, u8aSendBuffer, nLength);
+
+You can also find sources for a complete inject test applet here:
+
+http://penumbra.warmcat.com/_twk/tiki-index.php?page=packetspammer
+
+Andy Green <andy@warmcat.com>
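For illustration, here is a minimal self-contained userspace injector along the lines the document describes. This is a sketch, not part of this merge: the monitor-mode interface name "mon0" and the 5-byte payload are assumptions, while the radiotap and ieee80211 headers are the example bytes from the text. Build with -lpcap.

    #include <pcap.h>
    #include <stdio.h>
    #include <string.h>

    /* the example radiotap header from the text: version, length 0x000b,
     * bitmap with RATE (b2), DBM_TX_POWER (b10) and ANTENNA (b11) set */
    static const unsigned char u8aRadiotapHeader[] = {
            0x00, 0x00,             /* radiotap version + pad */
            0x0b, 0x00,             /* radiotap header length */
            0x04, 0x0c, 0x00, 0x00, /* argument bitmap */
            0x6c,                   /* rate: 0x6c = 54Mbps in 500kbps units */
            0x0c,                   /* tx power: 12 dBm */
            0x01                    /* antenna: 0x01 = ant2 */
    };

    /* the example ieee80211 header from the text */
    static const unsigned char u8aIeeeHeader[] = {
            0x08, 0x01, 0x00, 0x00,
            0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
            0x13, 0x22, 0x33, 0x44, 0x55, 0x66,
            0x13, 0x22, 0x33, 0x44, 0x55, 0x66,
            0x10, 0x86
    };

    int main(void)
    {
            char szErrbuf[PCAP_ERRBUF_SIZE];
            unsigned char u8aSendBuffer[256];
            unsigned int nLength = 0;
            pcap_t *ppcap;

            /* [ radiotap header ] [ ieee80211 header ] [ payload ] */
            memcpy(u8aSendBuffer + nLength, u8aRadiotapHeader,
                   sizeof(u8aRadiotapHeader));
            nLength += sizeof(u8aRadiotapHeader);
            memcpy(u8aSendBuffer + nLength, u8aIeeeHeader,
                   sizeof(u8aIeeeHeader));
            nLength += sizeof(u8aIeeeHeader);
            memcpy(u8aSendBuffer + nLength, "hello", 5);    /* payload */
            nLength += 5;

            ppcap = pcap_open_live("mon0", 800, 1, 20, szErrbuf);
            if (ppcap == NULL) {
                    fprintf(stderr, "pcap_open_live: %s\n", szErrbuf);
                    return 1;
            }
            if (pcap_inject(ppcap, u8aSendBuffer, nLength) != (int)nLength)
                    fprintf(stderr, "pcap_inject: %s\n", pcap_geterr(ppcap));
            pcap_close(ppcap);
            return 0;
    }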
+152
Documentation/networking/radiotap-headers.txt
+How to use radiotap headers
+===========================
+
+Pointer to the radiotap include file
+------------------------------------
+
+Radiotap headers are variable-length and extensible; you can get most of the
+information you need to know about them from:
+
+./include/net/ieee80211_radiotap.h
+
+This document gives an overview and warns about some corner cases.
+
+
+Structure of the header
+-----------------------
+
+There is a fixed portion at the start which contains a u32 bitmap that defines
+if the possible argument associated with that bit is present or not.  So if b0
+of the it_present member of ieee80211_radiotap_header is set, it means that
+the header for argument index 0 (IEEE80211_RADIOTAP_TSFT) is present in the
+argument area.
+
+   < 8-byte ieee80211_radiotap_header >
+   [ <possible argument bitmap extensions ... > ]
+   [ <argument> ... ]
+
+At the moment there are only 13 possible argument indexes defined, but in case
+we run out of space in the u32 it_present member, it is defined that b31 set
+indicates that there is another u32 bitmap following (shown as "possible
+argument bitmap extensions ..." above), and the start of the arguments is
+moved forward 4 bytes each time.
+
+Note also that the it_len member __le16 is set to the total number of bytes
+covered by the ieee80211_radiotap_header and any arguments following.
+
+
+Requirements for arguments
+--------------------------
+
+After the fixed part of the header, the arguments follow for each argument
+index whose matching bit is set in the it_present member of
+ieee80211_radiotap_header.
+
+ - the arguments are all stored little-endian!
+
+ - the argument payload for a given argument index has a fixed size.  So
+   IEEE80211_RADIOTAP_TSFT being present always indicates an 8-byte argument
+   is present.  See the comments in ./include/net/ieee80211_radiotap.h for a
+   nice breakdown of all the argument sizes.
+
+ - the arguments must be aligned to a boundary of the argument size using
+   padding.  So a u16 argument must start on the next u16 boundary if it
+   isn't already on one, a u32 must start on the next u32 boundary and so on.
+
+ - "alignment" is relative to the start of the ieee80211_radiotap_header, ie,
+   the first byte of the radiotap header.  The absolute alignment of that
+   first byte isn't defined.  So even if the whole radiotap header starts at,
+   eg, address 0x00000003, still the first byte of the radiotap header is
+   treated as 0 for alignment purposes.
+
+ - the above point, that there may be no absolute alignment for multibyte
+   entities in the fixed radiotap header or the argument region, means that
+   you have to take special evasive action when trying to access these
+   multibyte entities.  Some arches like Blackfin cannot deal with an attempt
+   to dereference, eg, a u16 pointer that is pointing to an odd address.
+   Instead you have to use the kernel API get_unaligned() to dereference the
+   pointer, which will do it bytewise on the arches that require that.
+
+ - the arguments for a given argument index can be a compound of multiple
+   types together.  For example IEEE80211_RADIOTAP_CHANNEL has an argument
+   payload consisting of two u16s of total length 4.  When this happens, the
+   padding rule is applied dealing with a u16, NOT dealing with a 4-byte
+   single entity.
+
+
+Example valid radiotap header
+-----------------------------
+
+	0x00, 0x00,             // <-- radiotap version + pad byte
+	0x0b, 0x00,             // <-- radiotap header length
+	0x04, 0x0c, 0x00, 0x00, // <-- bitmap
+	0x6c,                   // <-- rate (in 500kbps units)
+	0x0c,                   // <-- tx power
+	0x01                    // <-- antenna
+
+
+Using the Radiotap Parser
+-------------------------
+
+If you have to parse a radiotap struct, you can radically simplify the job by
+using the radiotap parser that lives in net/wireless/radiotap.c and has its
+prototypes available in include/net/cfg80211.h.  You use it like this:
+
+#include <net/cfg80211.h>
+
+/* buf points to the start of the radiotap header part */
+
+int MyFunction(u8 * buf, int buflen)
+{
+	int pkt_rate_100kHz = 0, antenna = 0, pwr = 0;
+	struct ieee80211_radiotap_iterator iterator;
+	int ret = ieee80211_radiotap_iterator_init(&iterator, buf, buflen);
+
+	while (!ret) {
+
+		ret = ieee80211_radiotap_iterator_next(&iterator);
+
+		if (ret)
+			continue;
+
+		/* see if this argument is something we can use */
+
+		switch (iterator.this_arg_index) {
+		/*
+		 * You must take care when dereferencing iterator.this_arg
+		 * for multibyte types... the pointer is not aligned.  Use
+		 * get_unaligned((type *)iterator.this_arg) to dereference
+		 * iterator.this_arg for type "type" safely on all arches.
+		 */
+		case IEEE80211_RADIOTAP_RATE:
+			/* radiotap "rate" u8 is in
+			 * 500kbps units, eg, 0x02=1Mbps
+			 */
+			pkt_rate_100kHz = (*iterator.this_arg) * 5;
+			break;
+
+		case IEEE80211_RADIOTAP_ANTENNA:
+			/* radiotap uses 0 for 1st ant */
+			antenna = *iterator.this_arg;
+			break;
+
+		case IEEE80211_RADIOTAP_DBM_TX_POWER:
+			pwr = *iterator.this_arg;
+			break;
+
+		default:
+			break;
+		}
+	} /* while more rt headers */
+
+	if (ret != -ENOENT)
+		return TXRX_DROP;
+
+	/* discard the radiotap header part */
+	buf += iterator.max_length;
+	buflen -= iterator.max_length;
+
+	...
+
+}
+
+Andy Green <andy@warmcat.com>
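To make the padding rule concrete, here is a small standalone sketch (not part of this merge) that walks the offsets by hand for a hypothetical header carrying TSFT, FLAGS and CHANNEL, and reads a u16 bytewise the way get_unaligned() would; the channel bytes are made up for the example.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* round offset up to the next multiple of align (a power of two) */
    static size_t rt_align(size_t offset, size_t align)
    {
            return (offset + align - 1) & ~(align - 1);
    }

    /* unaligned little-endian u16 read, a userspace stand-in for the
     * kernel's get_unaligned(): bytewise copy, so no alignment trap */
    static uint16_t rt_get_u16(const uint8_t *p)
    {
            uint16_t v;

            memcpy(&v, p, sizeof(v));
            return v;       /* note: assumes a little-endian host */
    }

    int main(void)
    {
            /* hypothetical CHANNEL arg: freq 2417 (0x0971), then flags */
            const uint8_t channel[4] = { 0x71, 0x09, 0x00, 0x00 };
            size_t off = 8; /* args start after the fixed 8-byte header */

            off = rt_align(off, 8) + 8;  /* TSFT (u64) at 8, ends at 16 */
            off = rt_align(off, 1) + 1;  /* FLAGS (u8) at 16, ends at 17 */
            off = rt_align(off, 2);      /* CHANNEL (u16 rule) pads 17 -> 18 */
            printf("CHANNEL starts at offset %zu, freq %u MHz\n",
                   off, rt_get_u16(channel));
            return 0;
    }

Note that CHANNEL is padded to a u16 boundary, not a u32 one, exactly as the compound-argument rule above requires.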
+6
MAINTAINERS
··· 2330 2330 T: git kernel.org:/pub/scm/linux/kernel/git/jbenc/mac80211.git 2331 2331 S: Maintained 2332 2332 2333 + MACVLAN DRIVER 2334 + P: Patrick McHardy 2335 + M: kaber@trash.net 2336 + L: netdev@vger.kernel.org 2337 + S: Maintained 2338 + 2333 2339 MARVELL YUKON / SYSKONNECT DRIVER 2334 2340 P: Mirko Lindner 2335 2341 M: mlindner@syskonnect.de
+1 -5
crypto/Kconfig
··· 12 12 # 13 13 # Cryptographic API Configuration 14 14 # 15 - menu "Cryptographic options" 16 - 17 - config CRYPTO 15 + menuconfig CRYPTO 18 16 bool "Cryptographic API" 19 17 help 20 18 This option provides the core Cryptographic API. ··· 471 473 source "drivers/crypto/Kconfig" 472 474 473 475 endif # if CRYPTO 474 - 475 - endmenu
+29 -2
crypto/ablkcipher.c
··· 19 19 #include <linux/module.h> 20 20 #include <linux/seq_file.h> 21 21 22 + static int setkey_unaligned(struct crypto_ablkcipher *tfm, const u8 *key, unsigned int keylen) 23 + { 24 + struct ablkcipher_alg *cipher = crypto_ablkcipher_alg(tfm); 25 + unsigned long alignmask = crypto_ablkcipher_alignmask(tfm); 26 + int ret; 27 + u8 *buffer, *alignbuffer; 28 + unsigned long absize; 29 + 30 + absize = keylen + alignmask; 31 + buffer = kmalloc(absize, GFP_ATOMIC); 32 + if (!buffer) 33 + return -ENOMEM; 34 + 35 + alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1); 36 + memcpy(alignbuffer, key, keylen); 37 + ret = cipher->setkey(tfm, alignbuffer, keylen); 38 + memset(alignbuffer, 0, absize); 39 + kfree(buffer); 40 + return ret; 41 + } 42 + 22 43 static int setkey(struct crypto_ablkcipher *tfm, const u8 *key, 23 44 unsigned int keylen) 24 45 { 25 46 struct ablkcipher_alg *cipher = crypto_ablkcipher_alg(tfm); 47 + unsigned long alignmask = crypto_ablkcipher_alignmask(tfm); 26 48 27 49 if (keylen < cipher->min_keysize || keylen > cipher->max_keysize) { 28 50 crypto_ablkcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); 29 51 return -EINVAL; 30 52 } 53 + 54 + if ((unsigned long)key & alignmask) 55 + return setkey_unaligned(tfm, key, keylen); 31 56 32 57 return cipher->setkey(tfm, key, keylen); 33 58 } ··· 91 66 seq_printf(m, "min keysize : %u\n", ablkcipher->min_keysize); 92 67 seq_printf(m, "max keysize : %u\n", ablkcipher->max_keysize); 93 68 seq_printf(m, "ivsize : %u\n", ablkcipher->ivsize); 94 - seq_printf(m, "qlen : %u\n", ablkcipher->queue->qlen); 95 - seq_printf(m, "max qlen : %u\n", ablkcipher->queue->max_qlen); 69 + if (ablkcipher->queue) { 70 + seq_printf(m, "qlen : %u\n", ablkcipher->queue->qlen); 71 + seq_printf(m, "max qlen : %u\n", ablkcipher->queue->max_qlen); 72 + } 96 73 } 97 74 98 75 const struct crypto_type crypto_ablkcipher_type = {
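The same over-allocate-and-round-up pattern recurs in blkcipher.c, cipher.c and hash.c below. As a sketch of just the pointer arithmetic (userspace, not kernel code; the 16-byte alignmask and dummy key are assumptions):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define ALIGN(x, a) (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

    int main(void)
    {
            unsigned long alignmask = 15;   /* eg a cipher wanting 16-byte
                                             * aligned key buffers */
            unsigned char key[20];
            size_t keylen = sizeof(key);
            unsigned char *buffer, *alignbuffer;

            memset(key, 0xaa, keylen);      /* dummy key material */

            /* worst case the allocator's pointer is alignmask bytes short
             * of the next boundary, so over-allocate by that much */
            buffer = malloc(keylen + alignmask);
            if (!buffer)
                    return 1;
            alignbuffer = (unsigned char *)ALIGN((unsigned long)buffer,
                                                 alignmask + 1);
            memcpy(alignbuffer, key, keylen);
            printf("raw %p -> aligned %p\n",
                   (void *)buffer, (void *)alignbuffer);
            /* ...setkey(tfm, alignbuffer, keylen) would go here... */
            memset(alignbuffer, 0, keylen); /* wipe the key copy */
            free(buffer);
            return 0;
    }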
+2 -2
crypto/algapi.c
··· 34 34 if (alg) { 35 35 if (crypto_is_larval(alg)) { 36 36 struct crypto_larval *larval = (void *)alg; 37 - complete(&larval->completion); 37 + complete_all(&larval->completion); 38 38 } 39 39 crypto_mod_put(alg); 40 40 } ··· 164 164 continue; 165 165 166 166 larval->adult = alg; 167 - complete(&larval->completion); 167 + complete_all(&larval->completion); 168 168 continue; 169 169 } 170 170
+1 -1
crypto/api.c
··· 144 144 down_write(&crypto_alg_sem); 145 145 list_del(&alg->cra_list); 146 146 up_write(&crypto_alg_sem); 147 - complete(&larval->completion); 147 + complete_all(&larval->completion); 148 148 crypto_alg_put(alg); 149 149 } 150 150
+25
crypto/blkcipher.c
··· 336 336 return blkcipher_walk_next(desc, walk); 337 337 } 338 338 339 + static int setkey_unaligned(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen) 340 + { 341 + struct blkcipher_alg *cipher = &tfm->__crt_alg->cra_blkcipher; 342 + unsigned long alignmask = crypto_tfm_alg_alignmask(tfm); 343 + int ret; 344 + u8 *buffer, *alignbuffer; 345 + unsigned long absize; 346 + 347 + absize = keylen + alignmask; 348 + buffer = kmalloc(absize, GFP_ATOMIC); 349 + if (!buffer) 350 + return -ENOMEM; 351 + 352 + alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1); 353 + memcpy(alignbuffer, key, keylen); 354 + ret = cipher->setkey(tfm, alignbuffer, keylen); 355 + memset(alignbuffer, 0, absize); 356 + kfree(buffer); 357 + return ret; 358 + } 359 + 339 360 static int setkey(struct crypto_tfm *tfm, const u8 *key, 340 361 unsigned int keylen) 341 362 { 342 363 struct blkcipher_alg *cipher = &tfm->__crt_alg->cra_blkcipher; 364 + unsigned long alignmask = crypto_tfm_alg_alignmask(tfm); 343 365 344 366 if (keylen < cipher->min_keysize || keylen > cipher->max_keysize) { 345 367 tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; 346 368 return -EINVAL; 347 369 } 370 + 371 + if ((unsigned long)key & alignmask) 372 + return setkey_unaligned(tfm, key, keylen); 348 373 349 374 return cipher->setkey(tfm, key, keylen); 350 375 }
+30 -3
crypto/cipher.c
··· 20 20 #include <linux/string.h> 21 21 #include "internal.h" 22 22 23 + static int setkey_unaligned(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen) 24 + { 25 + struct cipher_alg *cia = &tfm->__crt_alg->cra_cipher; 26 + unsigned long alignmask = crypto_tfm_alg_alignmask(tfm); 27 + int ret; 28 + u8 *buffer, *alignbuffer; 29 + unsigned long absize; 30 + 31 + absize = keylen + alignmask; 32 + buffer = kmalloc(absize, GFP_ATOMIC); 33 + if (!buffer) 34 + return -ENOMEM; 35 + 36 + alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1); 37 + memcpy(alignbuffer, key, keylen); 38 + ret = cia->cia_setkey(tfm, alignbuffer, keylen); 39 + memset(alignbuffer, 0, absize); 40 + kfree(buffer); 41 + return ret; 42 + 43 + } 44 + 23 45 static int setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen) 24 46 { 25 47 struct cipher_alg *cia = &tfm->__crt_alg->cra_cipher; 26 - 48 + unsigned long alignmask = crypto_tfm_alg_alignmask(tfm); 49 + 27 50 tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK; 28 51 if (keylen < cia->cia_min_keysize || keylen > cia->cia_max_keysize) { 29 52 tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN; 30 53 return -EINVAL; 31 - } else 32 - return cia->cia_setkey(tfm, key, keylen); 54 + } 55 + 56 + if ((unsigned long)key & alignmask) 57 + return setkey_unaligned(tfm, key, keylen); 58 + 59 + return cia->cia_setkey(tfm, key, keylen); 33 60 } 34 61 35 62 static void cipher_crypt_unaligned(void (*fn)(struct crypto_tfm *, u8 *,
+37 -1
crypto/hash.c
··· 22 22 return alg->cra_ctxsize; 23 23 } 24 24 25 + static int hash_setkey_unaligned(struct crypto_hash *crt, const u8 *key, 26 + unsigned int keylen) 27 + { 28 + struct crypto_tfm *tfm = crypto_hash_tfm(crt); 29 + struct hash_alg *alg = &tfm->__crt_alg->cra_hash; 30 + unsigned long alignmask = crypto_hash_alignmask(crt); 31 + int ret; 32 + u8 *buffer, *alignbuffer; 33 + unsigned long absize; 34 + 35 + absize = keylen + alignmask; 36 + buffer = kmalloc(absize, GFP_ATOMIC); 37 + if (!buffer) 38 + return -ENOMEM; 39 + 40 + alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1); 41 + memcpy(alignbuffer, key, keylen); 42 + ret = alg->setkey(crt, alignbuffer, keylen); 43 + memset(alignbuffer, 0, absize); 44 + kfree(buffer); 45 + return ret; 46 + } 47 + 48 + static int hash_setkey(struct crypto_hash *crt, const u8 *key, 49 + unsigned int keylen) 50 + { 51 + struct crypto_tfm *tfm = crypto_hash_tfm(crt); 52 + struct hash_alg *alg = &tfm->__crt_alg->cra_hash; 53 + unsigned long alignmask = crypto_hash_alignmask(crt); 54 + 55 + if ((unsigned long)key & alignmask) 56 + return hash_setkey_unaligned(crt, key, keylen); 57 + 58 + return alg->setkey(crt, key, keylen); 59 + } 60 + 25 61 static int crypto_init_hash_ops(struct crypto_tfm *tfm, u32 type, u32 mask) 26 62 { 27 63 struct hash_tfm *crt = &tfm->crt_hash; ··· 70 34 crt->update = alg->update; 71 35 crt->final = alg->final; 72 36 crt->digest = alg->digest; 73 - crt->setkey = alg->setkey; 37 + crt->setkey = hash_setkey; 74 38 crt->digestsize = alg->digestsize; 75 39 76 40 return 0;
+10
drivers/net/Kconfig
··· 82 82 To compile this driver as a module, choose M here: the module 83 83 will be called bonding. 84 84 85 + config MACVLAN 86 + tristate "MAC-VLAN support (EXPERIMENTAL)" 87 + depends on EXPERIMENTAL 88 + ---help--- 89 + This allows one to create virtual interfaces that map packets to 90 + or from specific MAC addresses to a particular interface. 91 + 92 + To compile this driver as a module, choose M here: the module 93 + will be called macvlan. 94 + 85 95 config EQUALIZER 86 96 tristate "EQL (serial line load balancing) support" 87 97 ---help---
+1
drivers/net/Makefile
··· 128 128 129 129 obj-$(CONFIG_DUMMY) += dummy.o 130 130 obj-$(CONFIG_IFB) += ifb.o 131 + obj-$(CONFIG_MACVLAN) += macvlan.o 131 132 obj-$(CONFIG_DE600) += de600.o 132 133 obj-$(CONFIG_DE620) += de620.o 133 134 obj-$(CONFIG_LANCE) += lance.o
+1 -1
drivers/net/bnx2.c
··· 6218 6218 struct bnx2 *bp = netdev_priv(dev); 6219 6219 6220 6220 if (CHIP_NUM(bp) == CHIP_NUM_5709) 6221 - return (ethtool_op_set_tx_hw_csum(dev, data)); 6221 + return (ethtool_op_set_tx_ipv6_csum(dev, data)); 6222 6222 else 6223 6223 return (ethtool_op_set_tx_csum(dev, data)); 6224 6224 }
+496
drivers/net/macvlan.c
··· 1 + /* 2 + * Copyright (c) 2007 Patrick McHardy <kaber@trash.net> 3 + * 4 + * This program is free software; you can redistribute it and/or 5 + * modify it under the terms of the GNU General Public License as 6 + * published by the Free Software Foundation; either version 2 of 7 + * the License, or (at your option) any later version. 8 + * 9 + * The code this is based on carried the following copyright notice: 10 + * --- 11 + * (C) Copyright 2001-2006 12 + * Alex Zeffertt, Cambridge Broadband Ltd, ajz@cambridgebroadband.com 13 + * Re-worked by Ben Greear <greearb@candelatech.com> 14 + * --- 15 + */ 16 + #include <linux/kernel.h> 17 + #include <linux/types.h> 18 + #include <linux/module.h> 19 + #include <linux/init.h> 20 + #include <linux/errno.h> 21 + #include <linux/slab.h> 22 + #include <linux/string.h> 23 + #include <linux/list.h> 24 + #include <linux/notifier.h> 25 + #include <linux/netdevice.h> 26 + #include <linux/etherdevice.h> 27 + #include <linux/ethtool.h> 28 + #include <linux/if_arp.h> 29 + #include <linux/if_link.h> 30 + #include <linux/if_macvlan.h> 31 + #include <net/rtnetlink.h> 32 + 33 + #define MACVLAN_HASH_SIZE (1 << BITS_PER_BYTE) 34 + 35 + struct macvlan_port { 36 + struct net_device *dev; 37 + struct hlist_head vlan_hash[MACVLAN_HASH_SIZE]; 38 + struct list_head vlans; 39 + }; 40 + 41 + struct macvlan_dev { 42 + struct net_device *dev; 43 + struct list_head list; 44 + struct hlist_node hlist; 45 + struct macvlan_port *port; 46 + struct net_device *lowerdev; 47 + }; 48 + 49 + 50 + static struct macvlan_dev *macvlan_hash_lookup(const struct macvlan_port *port, 51 + const unsigned char *addr) 52 + { 53 + struct macvlan_dev *vlan; 54 + struct hlist_node *n; 55 + 56 + hlist_for_each_entry_rcu(vlan, n, &port->vlan_hash[addr[5]], hlist) { 57 + if (!compare_ether_addr(vlan->dev->dev_addr, addr)) 58 + return vlan; 59 + } 60 + return NULL; 61 + } 62 + 63 + static void macvlan_broadcast(struct sk_buff *skb, 64 + const struct macvlan_port *port) 65 + { 66 + const struct ethhdr *eth = eth_hdr(skb); 67 + const struct macvlan_dev *vlan; 68 + struct hlist_node *n; 69 + struct net_device *dev; 70 + struct sk_buff *nskb; 71 + unsigned int i; 72 + 73 + for (i = 0; i < MACVLAN_HASH_SIZE; i++) { 74 + hlist_for_each_entry_rcu(vlan, n, &port->vlan_hash[i], hlist) { 75 + dev = vlan->dev; 76 + if (unlikely(!(dev->flags & IFF_UP))) 77 + continue; 78 + 79 + nskb = skb_clone(skb, GFP_ATOMIC); 80 + if (nskb == NULL) { 81 + dev->stats.rx_errors++; 82 + dev->stats.rx_dropped++; 83 + continue; 84 + } 85 + 86 + dev->stats.rx_bytes += skb->len + ETH_HLEN; 87 + dev->stats.rx_packets++; 88 + dev->stats.multicast++; 89 + dev->last_rx = jiffies; 90 + 91 + nskb->dev = dev; 92 + if (!compare_ether_addr(eth->h_dest, dev->broadcast)) 93 + nskb->pkt_type = PACKET_BROADCAST; 94 + else 95 + nskb->pkt_type = PACKET_MULTICAST; 96 + 97 + netif_rx(nskb); 98 + } 99 + } 100 + } 101 + 102 + /* called under rcu_read_lock() from netif_receive_skb */ 103 + static struct sk_buff *macvlan_handle_frame(struct sk_buff *skb) 104 + { 105 + const struct ethhdr *eth = eth_hdr(skb); 106 + const struct macvlan_port *port; 107 + const struct macvlan_dev *vlan; 108 + struct net_device *dev; 109 + 110 + port = rcu_dereference(skb->dev->macvlan_port); 111 + if (port == NULL) 112 + return skb; 113 + 114 + if (is_multicast_ether_addr(eth->h_dest)) { 115 + macvlan_broadcast(skb, port); 116 + return skb; 117 + } 118 + 119 + vlan = macvlan_hash_lookup(port, eth->h_dest); 120 + if (vlan == NULL) 121 + return skb; 122 + 123 + dev = 
vlan->dev; 124 + if (unlikely(!(dev->flags & IFF_UP))) { 125 + kfree_skb(skb); 126 + return NULL; 127 + } 128 + 129 + skb = skb_share_check(skb, GFP_ATOMIC); 130 + if (skb == NULL) { 131 + dev->stats.rx_errors++; 132 + dev->stats.rx_dropped++; 133 + return NULL; 134 + } 135 + 136 + dev->stats.rx_bytes += skb->len + ETH_HLEN; 137 + dev->stats.rx_packets++; 138 + dev->last_rx = jiffies; 139 + 140 + skb->dev = dev; 141 + skb->pkt_type = PACKET_HOST; 142 + 143 + netif_rx(skb); 144 + return NULL; 145 + } 146 + 147 + static int macvlan_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) 148 + { 149 + const struct macvlan_dev *vlan = netdev_priv(dev); 150 + unsigned int len = skb->len; 151 + int ret; 152 + 153 + skb->dev = vlan->lowerdev; 154 + ret = dev_queue_xmit(skb); 155 + 156 + if (likely(ret == NET_XMIT_SUCCESS)) { 157 + dev->stats.tx_packets++; 158 + dev->stats.tx_bytes += len; 159 + } else { 160 + dev->stats.tx_errors++; 161 + dev->stats.tx_aborted_errors++; 162 + } 163 + return NETDEV_TX_OK; 164 + } 165 + 166 + static int macvlan_hard_header(struct sk_buff *skb, struct net_device *dev, 167 + unsigned short type, void *daddr, void *saddr, 168 + unsigned len) 169 + { 170 + const struct macvlan_dev *vlan = netdev_priv(dev); 171 + struct net_device *lowerdev = vlan->lowerdev; 172 + 173 + return lowerdev->hard_header(skb, lowerdev, type, daddr, 174 + saddr ? : dev->dev_addr, len); 175 + } 176 + 177 + static int macvlan_open(struct net_device *dev) 178 + { 179 + struct macvlan_dev *vlan = netdev_priv(dev); 180 + struct macvlan_port *port = vlan->port; 181 + struct net_device *lowerdev = vlan->lowerdev; 182 + int err; 183 + 184 + err = dev_unicast_add(lowerdev, dev->dev_addr, ETH_ALEN); 185 + if (err < 0) 186 + return err; 187 + if (dev->flags & IFF_ALLMULTI) 188 + dev_set_allmulti(lowerdev, 1); 189 + 190 + hlist_add_head_rcu(&vlan->hlist, &port->vlan_hash[dev->dev_addr[5]]); 191 + return 0; 192 + } 193 + 194 + static int macvlan_stop(struct net_device *dev) 195 + { 196 + struct macvlan_dev *vlan = netdev_priv(dev); 197 + struct net_device *lowerdev = vlan->lowerdev; 198 + 199 + dev_mc_unsync(lowerdev, dev); 200 + if (dev->flags & IFF_ALLMULTI) 201 + dev_set_allmulti(lowerdev, -1); 202 + 203 + dev_unicast_delete(lowerdev, dev->dev_addr, ETH_ALEN); 204 + 205 + hlist_del_rcu(&vlan->hlist); 206 + synchronize_rcu(); 207 + return 0; 208 + } 209 + 210 + static void macvlan_change_rx_flags(struct net_device *dev, int change) 211 + { 212 + struct macvlan_dev *vlan = netdev_priv(dev); 213 + struct net_device *lowerdev = vlan->lowerdev; 214 + 215 + if (change & IFF_ALLMULTI) 216 + dev_set_allmulti(lowerdev, dev->flags & IFF_ALLMULTI ? 1 : -1); 217 + } 218 + 219 + static void macvlan_set_multicast_list(struct net_device *dev) 220 + { 221 + struct macvlan_dev *vlan = netdev_priv(dev); 222 + 223 + dev_mc_sync(vlan->lowerdev, dev); 224 + } 225 + 226 + static int macvlan_change_mtu(struct net_device *dev, int new_mtu) 227 + { 228 + struct macvlan_dev *vlan = netdev_priv(dev); 229 + 230 + if (new_mtu < 68 || vlan->lowerdev->mtu < new_mtu) 231 + return -EINVAL; 232 + dev->mtu = new_mtu; 233 + return 0; 234 + } 235 + 236 + /* 237 + * macvlan network devices have devices nesting below it and are a special 238 + * "super class" of normal network devices; split their locks off into a 239 + * separate class since they always nest. 
240 + */ 241 + static struct lock_class_key macvlan_netdev_xmit_lock_key; 242 + 243 + #define MACVLAN_FEATURES \ 244 + (NETIF_F_SG | NETIF_F_ALL_CSUM | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST | \ 245 + NETIF_F_GSO | NETIF_F_TSO | NETIF_F_UFO | NETIF_F_GSO_ROBUST | \ 246 + NETIF_F_TSO_ECN | NETIF_F_TSO6) 247 + 248 + #define MACVLAN_STATE_MASK \ 249 + ((1<<__LINK_STATE_NOCARRIER) | (1<<__LINK_STATE_DORMANT)) 250 + 251 + static int macvlan_init(struct net_device *dev) 252 + { 253 + struct macvlan_dev *vlan = netdev_priv(dev); 254 + const struct net_device *lowerdev = vlan->lowerdev; 255 + 256 + dev->state = (dev->state & ~MACVLAN_STATE_MASK) | 257 + (lowerdev->state & MACVLAN_STATE_MASK); 258 + dev->features = lowerdev->features & MACVLAN_FEATURES; 259 + dev->iflink = lowerdev->ifindex; 260 + 261 + lockdep_set_class(&dev->_xmit_lock, &macvlan_netdev_xmit_lock_key); 262 + return 0; 263 + } 264 + 265 + static void macvlan_ethtool_get_drvinfo(struct net_device *dev, 266 + struct ethtool_drvinfo *drvinfo) 267 + { 268 + snprintf(drvinfo->driver, 32, "macvlan"); 269 + snprintf(drvinfo->version, 32, "0.1"); 270 + } 271 + 272 + static u32 macvlan_ethtool_get_rx_csum(struct net_device *dev) 273 + { 274 + const struct macvlan_dev *vlan = netdev_priv(dev); 275 + struct net_device *lowerdev = vlan->lowerdev; 276 + 277 + if (lowerdev->ethtool_ops->get_rx_csum == NULL) 278 + return 0; 279 + return lowerdev->ethtool_ops->get_rx_csum(lowerdev); 280 + } 281 + 282 + static const struct ethtool_ops macvlan_ethtool_ops = { 283 + .get_link = ethtool_op_get_link, 284 + .get_rx_csum = macvlan_ethtool_get_rx_csum, 285 + .get_tx_csum = ethtool_op_get_tx_csum, 286 + .get_tso = ethtool_op_get_tso, 287 + .get_ufo = ethtool_op_get_ufo, 288 + .get_sg = ethtool_op_get_sg, 289 + .get_drvinfo = macvlan_ethtool_get_drvinfo, 290 + }; 291 + 292 + static void macvlan_setup(struct net_device *dev) 293 + { 294 + ether_setup(dev); 295 + 296 + dev->init = macvlan_init; 297 + dev->open = macvlan_open; 298 + dev->stop = macvlan_stop; 299 + dev->change_mtu = macvlan_change_mtu; 300 + dev->change_rx_flags = macvlan_change_rx_flags; 301 + dev->set_multicast_list = macvlan_set_multicast_list; 302 + dev->hard_header = macvlan_hard_header; 303 + dev->hard_start_xmit = macvlan_hard_start_xmit; 304 + dev->destructor = free_netdev; 305 + dev->ethtool_ops = &macvlan_ethtool_ops; 306 + dev->tx_queue_len = 0; 307 + } 308 + 309 + static int macvlan_port_create(struct net_device *dev) 310 + { 311 + struct macvlan_port *port; 312 + unsigned int i; 313 + 314 + if (dev->type != ARPHRD_ETHER || dev->flags & IFF_LOOPBACK) 315 + return -EINVAL; 316 + 317 + port = kzalloc(sizeof(*port), GFP_KERNEL); 318 + if (port == NULL) 319 + return -ENOMEM; 320 + 321 + port->dev = dev; 322 + INIT_LIST_HEAD(&port->vlans); 323 + for (i = 0; i < MACVLAN_HASH_SIZE; i++) 324 + INIT_HLIST_HEAD(&port->vlan_hash[i]); 325 + rcu_assign_pointer(dev->macvlan_port, port); 326 + return 0; 327 + } 328 + 329 + static void macvlan_port_destroy(struct net_device *dev) 330 + { 331 + struct macvlan_port *port = dev->macvlan_port; 332 + 333 + rcu_assign_pointer(dev->macvlan_port, NULL); 334 + synchronize_rcu(); 335 + kfree(port); 336 + } 337 + 338 + static void macvlan_transfer_operstate(struct net_device *dev) 339 + { 340 + struct macvlan_dev *vlan = netdev_priv(dev); 341 + const struct net_device *lowerdev = vlan->lowerdev; 342 + 343 + if (lowerdev->operstate == IF_OPER_DORMANT) 344 + netif_dormant_on(dev); 345 + else 346 + netif_dormant_off(dev); 347 + 348 + if (netif_carrier_ok(lowerdev)) 
{ 349 + if (!netif_carrier_ok(dev)) 350 + netif_carrier_on(dev); 351 + } else { 352 + if (netif_carrier_ok(dev)) 353 + netif_carrier_off(dev); 354 + } 355 + } 356 + 357 + static int macvlan_validate(struct nlattr *tb[], struct nlattr *data[]) 358 + { 359 + if (tb[IFLA_ADDRESS]) { 360 + if (nla_len(tb[IFLA_ADDRESS]) != ETH_ALEN) 361 + return -EINVAL; 362 + if (!is_valid_ether_addr(nla_data(tb[IFLA_ADDRESS]))) 363 + return -EADDRNOTAVAIL; 364 + } 365 + return 0; 366 + } 367 + 368 + static int macvlan_newlink(struct net_device *dev, 369 + struct nlattr *tb[], struct nlattr *data[]) 370 + { 371 + struct macvlan_dev *vlan = netdev_priv(dev); 372 + struct macvlan_port *port; 373 + struct net_device *lowerdev; 374 + int err; 375 + 376 + if (!tb[IFLA_LINK]) 377 + return -EINVAL; 378 + 379 + lowerdev = __dev_get_by_index(nla_get_u32(tb[IFLA_LINK])); 380 + if (lowerdev == NULL) 381 + return -ENODEV; 382 + 383 + if (!tb[IFLA_MTU]) 384 + dev->mtu = lowerdev->mtu; 385 + else if (dev->mtu > lowerdev->mtu) 386 + return -EINVAL; 387 + 388 + if (!tb[IFLA_ADDRESS]) 389 + random_ether_addr(dev->dev_addr); 390 + 391 + if (lowerdev->macvlan_port == NULL) { 392 + err = macvlan_port_create(lowerdev); 393 + if (err < 0) 394 + return err; 395 + } 396 + port = lowerdev->macvlan_port; 397 + 398 + vlan->lowerdev = lowerdev; 399 + vlan->dev = dev; 400 + vlan->port = port; 401 + 402 + err = register_netdevice(dev); 403 + if (err < 0) 404 + return err; 405 + 406 + list_add_tail(&vlan->list, &port->vlans); 407 + macvlan_transfer_operstate(dev); 408 + return 0; 409 + } 410 + 411 + static void macvlan_dellink(struct net_device *dev) 412 + { 413 + struct macvlan_dev *vlan = netdev_priv(dev); 414 + struct macvlan_port *port = vlan->port; 415 + 416 + list_del(&vlan->list); 417 + unregister_netdevice(dev); 418 + 419 + if (list_empty(&port->vlans)) 420 + macvlan_port_destroy(dev); 421 + } 422 + 423 + static struct rtnl_link_ops macvlan_link_ops __read_mostly = { 424 + .kind = "macvlan", 425 + .priv_size = sizeof(struct macvlan_dev), 426 + .setup = macvlan_setup, 427 + .validate = macvlan_validate, 428 + .newlink = macvlan_newlink, 429 + .dellink = macvlan_dellink, 430 + }; 431 + 432 + static int macvlan_device_event(struct notifier_block *unused, 433 + unsigned long event, void *ptr) 434 + { 435 + struct net_device *dev = ptr; 436 + struct macvlan_dev *vlan, *next; 437 + struct macvlan_port *port; 438 + 439 + port = dev->macvlan_port; 440 + if (port == NULL) 441 + return NOTIFY_DONE; 442 + 443 + switch (event) { 444 + case NETDEV_CHANGE: 445 + list_for_each_entry(vlan, &port->vlans, list) 446 + macvlan_transfer_operstate(vlan->dev); 447 + break; 448 + case NETDEV_FEAT_CHANGE: 449 + list_for_each_entry(vlan, &port->vlans, list) { 450 + vlan->dev->features = dev->features & MACVLAN_FEATURES; 451 + netdev_features_change(vlan->dev); 452 + } 453 + break; 454 + case NETDEV_UNREGISTER: 455 + list_for_each_entry_safe(vlan, next, &port->vlans, list) 456 + macvlan_dellink(vlan->dev); 457 + break; 458 + } 459 + return NOTIFY_DONE; 460 + } 461 + 462 + static struct notifier_block macvlan_notifier_block __read_mostly = { 463 + .notifier_call = macvlan_device_event, 464 + }; 465 + 466 + static int __init macvlan_init_module(void) 467 + { 468 + int err; 469 + 470 + register_netdevice_notifier(&macvlan_notifier_block); 471 + macvlan_handle_frame_hook = macvlan_handle_frame; 472 + 473 + err = rtnl_link_register(&macvlan_link_ops); 474 + if (err < 0) 475 + goto err1; 476 + return 0; 477 + err1: 478 + macvlan_handle_frame_hook = NULL;
479 + unregister_netdevice_notifier(&macvlan_notifier_block); 480 + return err; 481 + } 482 + 483 + static void __exit macvlan_cleanup_module(void) 484 + { 485 + rtnl_link_unregister(&macvlan_link_ops); 486 + macvlan_handle_frame_hook = NULL; 487 + unregister_netdevice_notifier(&macvlan_notifier_block); 488 + } 489 + 490 + module_init(macvlan_init_module); 491 + module_exit(macvlan_cleanup_module); 492 + 493 + MODULE_LICENSE("GPL"); 494 + MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); 495 + MODULE_DESCRIPTION("Driver for MAC address based VLANs"); 496 + MODULE_ALIAS_RTNL_LINK("macvlan");
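One detail of the driver worth calling out: receive lookup is a single hlist walk because devices are bucketed by the final octet of their MAC address, as the vlan_hash[addr[5]] indexing above shows, with MACVLAN_HASH_SIZE == (1 << BITS_PER_BYTE) == 256 buckets. A trivial standalone sketch of that bucketing (illustrative only; the example MAC is made up):

    #include <stdio.h>

    #define MACVLAN_HASH_SIZE 256   /* one bucket per possible last octet */

    /* macvlan's bucket choice: the last byte of the 6-byte MAC */
    static unsigned int macvlan_hash(const unsigned char *addr)
    {
            return addr[5];
    }

    int main(void)
    {
            const unsigned char mac[6] = { 0x00, 0x19, 0xd1,
                                           0x2a, 0x3b, 0x4c };

            printf("bucket %u of %u\n",
                   macvlan_hash(mac), MACVLAN_HASH_SIZE);
            return 0;
    }

Since each macvlan device owns a distinct unicast MAC, this keeps the common receive path cheap even with many virtual devices stacked on one lower device.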
+1 -1
drivers/net/tg3.c
··· 8318 8318 8319 8319 if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5755 || 8320 8320 GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5787) 8321 - ethtool_op_set_tx_hw_csum(dev, data); 8321 + ethtool_op_set_tx_ipv6_csum(dev, data); 8322 8322 else 8323 8323 ethtool_op_set_tx_csum(dev, data); 8324 8324
-20
include/linux/crypto.h
··· 295 295 }; 296 296 297 297 struct cipher_tfm { 298 - void *cit_iv; 299 - unsigned int cit_ivsize; 300 - u32 cit_mode; 301 298 int (*cit_setkey)(struct crypto_tfm *tfm, 302 299 const u8 *key, unsigned int keylen); 303 - int (*cit_encrypt)(struct crypto_tfm *tfm, 304 - struct scatterlist *dst, 305 - struct scatterlist *src, 306 - unsigned int nbytes); 307 - int (*cit_encrypt_iv)(struct crypto_tfm *tfm, 308 - struct scatterlist *dst, 309 - struct scatterlist *src, 310 - unsigned int nbytes, u8 *iv); 311 - int (*cit_decrypt)(struct crypto_tfm *tfm, 312 - struct scatterlist *dst, 313 - struct scatterlist *src, 314 - unsigned int nbytes); 315 - int (*cit_decrypt_iv)(struct crypto_tfm *tfm, 316 - struct scatterlist *dst, 317 - struct scatterlist *src, 318 - unsigned int nbytes, u8 *iv); 319 - void (*cit_xor_block)(u8 *dst, const u8 *src); 320 300 void (*cit_encrypt_one)(struct crypto_tfm *tfm, u8 *dst, const u8 *src); 321 301 void (*cit_decrypt_one)(struct crypto_tfm *tfm, u8 *dst, const u8 *src); 322 302 };
+1
include/linux/ethtool.h
··· 265 265 u32 ethtool_op_get_tx_csum(struct net_device *dev); 266 266 int ethtool_op_set_tx_csum(struct net_device *dev, u32 data); 267 267 int ethtool_op_set_tx_hw_csum(struct net_device *dev, u32 data); 268 + int ethtool_op_set_tx_ipv6_csum(struct net_device *dev, u32 data); 268 269 u32 ethtool_op_get_sg(struct net_device *dev); 269 270 int ethtool_op_set_sg(struct net_device *dev, u32 data); 270 271 u32 ethtool_op_get_tso(struct net_device *dev);
+11
include/linux/ieee80211.h
··· 227 227 #define WLAN_CAPABILITY_SHORT_SLOT_TIME (1<<10) 228 228 #define WLAN_CAPABILITY_DSSS_OFDM (1<<13) 229 229 230 + /* 802.11g ERP information element */ 231 + #define WLAN_ERP_NON_ERP_PRESENT (1<<0) 232 + #define WLAN_ERP_USE_PROTECTION (1<<1) 233 + #define WLAN_ERP_BARKER_PREAMBLE (1<<2) 234 + 235 + /* WLAN_ERP_BARKER_PREAMBLE values */ 236 + enum { 237 + WLAN_ERP_PREAMBLE_SHORT = 0, 238 + WLAN_ERP_PREAMBLE_LONG = 1, 239 + }; 240 + 230 241 /* Status codes */ 231 242 enum ieee80211_statuscode { 232 243 WLAN_STATUS_SUCCESS = 0,
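As a quick illustration of how these bits are consumed (sketch only; the flag values are copied from the header above, and the ERP octet is a made-up beacon value):

    #include <stdio.h>

    /* values as defined in include/linux/ieee80211.h above */
    #define WLAN_ERP_NON_ERP_PRESENT (1<<0)
    #define WLAN_ERP_USE_PROTECTION  (1<<1)
    #define WLAN_ERP_BARKER_PREAMBLE (1<<2)

    int main(void)
    {
            unsigned char erp = 0x03;       /* example ERP IE octet */

            if (erp & WLAN_ERP_NON_ERP_PRESENT)
                    printf("non-ERP (802.11b) station present\n");
            if (erp & WLAN_ERP_USE_PROTECTION)
                    printf("use CTS-to-self/RTS protection\n");
            /* long preambles are required when the Barker bit is set */
            printf("preamble: %s\n",
                   erp & WLAN_ERP_BARKER_PREAMBLE ? "long" : "short allowed");
            return 0;
    }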
+9
include/linux/if_macvlan.h
··· 1 + #ifndef _LINUX_IF_MACVLAN_H 2 + #define _LINUX_IF_MACVLAN_H 3 + 4 + #ifdef __KERNEL__ 5 + 6 + extern struct sk_buff *(*macvlan_handle_frame_hook)(struct sk_buff *); 7 + 8 + #endif /* __KERNEL__ */ 9 + #endif /* _LINUX_IF_MACVLAN_H */
-7
include/linux/if_vlan.h
··· 127 127 * like DHCP that use packet-filtering and don't understand 128 128 * 802.1Q 129 129 */ 130 - struct dev_mc_list *old_mc_list; /* old multi-cast list for the VLAN interface.. 131 - * we save this so we can tell what changes were 132 - * made, in order to feed the right changes down 133 - * to the real hardware... 134 - */ 135 - int old_allmulti; /* similar to above. */ 136 - int old_promiscuity; /* similar to above. */ 137 130 struct net_device *real_dev; /* the underlying device/interface */ 138 131 unsigned char real_dev_addr[ETH_ALEN]; 139 132 struct proc_dir_entry *dent; /* Holds the proc data */
+8
include/linux/netdevice.h
··· 190 190 struct dev_addr_list *next; 191 191 u8 da_addr[MAX_ADDR_LEN]; 192 192 u8 da_addrlen; 193 + u8 da_synced; 193 194 int da_users; 194 195 int da_gusers; 195 196 }; ··· 517 516 void *saddr, 518 517 unsigned len); 519 518 int (*rebuild_header)(struct sk_buff *skb); 519 + #define HAVE_CHANGE_RX_FLAGS 520 + void (*change_rx_flags)(struct net_device *dev, 521 + int flags); 520 522 #define HAVE_SET_RX_MODE 521 523 void (*set_rx_mode)(struct net_device *dev); 522 524 #define HAVE_MULTICAST ··· 564 560 565 561 /* bridge stuff */ 566 562 struct net_bridge_port *br_port; 563 + /* macvlan */ 564 + struct macvlan_port *macvlan_port; 567 565 568 566 /* class/net/name entry */ 569 567 struct device dev; ··· 1106 1100 extern int dev_unicast_add(struct net_device *dev, void *addr, int alen); 1107 1101 extern int dev_mc_delete(struct net_device *dev, void *addr, int alen, int all); 1108 1102 extern int dev_mc_add(struct net_device *dev, void *addr, int alen, int newonly); 1103 + extern int dev_mc_sync(struct net_device *to, struct net_device *from); 1104 + extern void dev_mc_unsync(struct net_device *to, struct net_device *from); 1109 1105 extern void dev_mc_discard(struct net_device *dev); 1110 1106 extern int __dev_addr_delete(struct dev_addr_list **list, int *count, void *addr, int alen, int all); 1111 1107 extern int __dev_addr_add(struct dev_addr_list **list, int *count, void *addr, int alen, int newonly);
+17
include/linux/netfilter/xt_connlimit.h
··· 1 + #ifndef _XT_CONNLIMIT_H 2 + #define _XT_CONNLIMIT_H 3 + 4 + struct xt_connlimit_data; 5 + 6 + struct xt_connlimit_info { 7 + union { 8 + u_int32_t v4_mask; 9 + u_int32_t v6_mask[4]; 10 + }; 11 + unsigned int limit, inverse; 12 + 13 + /* this needs to be at the end */ 14 + struct xt_connlimit_data *data __attribute__((aligned(8))); 15 + }; 16 + 17 + #endif /* _XT_CONNLIMIT_H */
-30
include/net/act_api.h
··· 121 121 extern int tcf_action_dump_1(struct sk_buff *skb, struct tc_action *a, int, int); 122 122 extern int tcf_action_copy_stats (struct sk_buff *,struct tc_action *, int); 123 123 #endif /* CONFIG_NET_CLS_ACT */ 124 - 125 - extern int tcf_police(struct sk_buff *skb, struct tcf_police *p); 126 - extern void tcf_police_destroy(struct tcf_police *p); 127 - extern struct tcf_police * tcf_police_locate(struct rtattr *rta, struct rtattr *est); 128 - extern int tcf_police_dump(struct sk_buff *skb, struct tcf_police *p); 129 - extern int tcf_police_dump_stats(struct sk_buff *skb, struct tcf_police *p); 130 - 131 - static inline int 132 - tcf_police_release(struct tcf_police *p, int bind) 133 - { 134 - int ret = 0; 135 - #ifdef CONFIG_NET_CLS_ACT 136 - if (p) { 137 - if (bind) 138 - p->tcf_bindcnt--; 139 - 140 - p->tcf_refcnt--; 141 - if (p->tcf_refcnt <= 0 && !p->tcf_bindcnt) { 142 - tcf_police_destroy(p); 143 - ret = 1; 144 - } 145 - } 146 - #else 147 - if (p && --p->tcf_refcnt == 0) 148 - tcf_police_destroy(p); 149 - 150 - #endif /* CONFIG_NET_CLS_ACT */ 151 - return ret; 152 - } 153 - 154 124 #endif
+38
include/net/cfg80211.h
··· 11 11 * Copyright 2006 Johannes Berg <johannes@sipsolutions.net> 12 12 */ 13 13 14 + 15 + /* Radiotap header iteration 16 + * implemented in net/wireless/radiotap.c 17 + * docs in Documentation/networking/radiotap-headers.txt 18 + */ 19 + /** 20 + * struct ieee80211_radiotap_iterator - tracks walk thru present radiotap args 21 + * @rtheader: pointer to the radiotap header we are walking through 22 + * @max_length: length of radiotap header in cpu byte ordering 23 + * @this_arg_index: IEEE80211_RADIOTAP_... index of current arg 24 + * @this_arg: pointer to current radiotap arg 25 + * @arg_index: internal next argument index 26 + * @arg: internal next argument pointer 27 + * @next_bitmap: internal pointer to next present u32 28 + * @bitmap_shifter: internal shifter for curr u32 bitmap, b0 set == arg present 29 + */ 30 + 31 + struct ieee80211_radiotap_iterator { 32 + struct ieee80211_radiotap_header *rtheader; 33 + int max_length; 34 + int this_arg_index; 35 + u8 *this_arg; 36 + 37 + int arg_index; 38 + u8 *arg; 39 + __le32 *next_bitmap; 40 + u32 bitmap_shifter; 41 + }; 42 + 43 + extern int ieee80211_radiotap_iterator_init( 44 + struct ieee80211_radiotap_iterator *iterator, 45 + struct ieee80211_radiotap_header *radiotap_header, 46 + int max_length); 47 + 48 + extern int ieee80211_radiotap_iterator_next( 49 + struct ieee80211_radiotap_iterator *iterator); 50 + 51 + 14 52 /* from net/wireless.h */ 15 53 struct wiphy; 16 54
-3
include/net/inet_timewait_sock.h
··· 209 209 extern struct inet_timewait_sock *inet_twsk_alloc(const struct sock *sk, 210 210 const int state); 211 211 212 - extern void __inet_twsk_kill(struct inet_timewait_sock *tw, 213 - struct inet_hashinfo *hashinfo); 214 - 215 212 extern void __inet_twsk_hashdance(struct inet_timewait_sock *tw, 216 213 struct sock *sk, 217 214 struct inet_hashinfo *hashinfo);
+1
include/net/iucv/af_iucv.h
··· 60 60 char dst_user_id[8]; 61 61 char dst_name[8]; 62 62 struct list_head accept_q; 63 + spinlock_t accept_q_lock; 63 64 struct sock *parent; 64 65 struct iucv_path *path; 65 66 struct sk_buff_head send_skb_q;
+12 -10
include/net/mac80211.h
··· 347 347 * @mac_addr: pointer to MAC address of the interface. This pointer is valid 348 348 * until the interface is removed (i.e. it cannot be used after 349 349 * remove_interface() callback was called for this interface). 350 + * This pointer will be %NULL for monitor interfaces, be careful. 350 351 * 351 352 * This structure is used in add_interface() and remove_interface() 352 353 * callbacks of &struct ieee80211_hw. 354 + * 355 + * When you allow multiple interfaces to be added to your PHY, take care 356 + * that the hardware can actually handle multiple MAC addresses. However, 357 + * also take care that when there's no interface left with mac_addr != %NULL 358 + * you remove the MAC address from the device to avoid acknowledging packets 359 + * in pure monitor mode. 353 360 */ 354 361 struct ieee80211_if_init_conf { 355 362 int if_id; ··· 581 574 * to returning zero. By returning non-zero addition of the interface 582 575 * is inhibited. Unless monitor_during_oper is set, it is guaranteed 583 576 * that monitor interfaces and normal interfaces are mutually 584 - * exclusive. The open() handler is called after add_interface() 585 - * if this is the first device added. At least one of the open() 586 - * open() and add_interface() callbacks has to be assigned. If 587 - * add_interface() is NULL, one STA interface is permitted only. */ 577 + * exclusive. If assigned, the open() handler is called after 578 + * add_interface() if this is the first device added. The 579 + * add_interface() callback has to be assigned because it is the only 580 + * way to obtain the requested MAC address for any interface. 581 + */ 588 582 int (*add_interface)(struct ieee80211_hw *hw, 589 583 struct ieee80211_if_init_conf *conf); 590 584 ··· 928 920 struct sk_buff * 929 921 ieee80211_get_buffered_bc(struct ieee80211_hw *hw, int if_id, 930 922 struct ieee80211_tx_control *control); 931 - 932 - /* Low level drivers that have their own MLME and MAC indicate 933 - * the aid for an associating station with this call */ 934 - int ieee80211_set_aid_for_sta(struct ieee80211_hw *hw, 935 - u8 *peer_address, u16 aid); 936 - 937 923 938 924 /* Given an sk_buff with a raw 802.11 header at the data pointer this function 939 925 * returns the 802.11 header length in bytes (not including encryption
+2
include/net/netfilter/ipv4/nf_conntrack_ipv4.h
··· 12 12 /* Returns new sk_buff, or NULL */ 13 13 struct sk_buff *nf_ct_ipv4_ct_gather_frags(struct sk_buff *skb); 14 14 15 + extern struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv4; 16 + 15 17 extern struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp4; 16 18 extern struct nf_conntrack_l4proto nf_conntrack_l4proto_udp4; 17 19 extern struct nf_conntrack_l4proto nf_conntrack_l4proto_icmp;
+1 -1
include/net/netfilter/ipv6/nf_conntrack_ipv6.h
··· 7 7 extern struct nf_conntrack_l4proto nf_conntrack_l4proto_udp6; 8 8 extern struct nf_conntrack_l4proto nf_conntrack_l4proto_icmpv6; 9 9 10 - extern int nf_ct_ipv6_skip_exthdr(struct sk_buff *skb, int start, 10 + extern int nf_ct_ipv6_skip_exthdr(const struct sk_buff *skb, int start, 11 11 u8 *nexthdrp, int len); 12 12 13 13 extern int nf_ct_frag6_init(void);
+4
include/net/netfilter/nf_conntrack.h
··· 186 186 187 187 extern void nf_conntrack_flush(void); 188 188 189 + extern int nf_ct_get_tuplepr(const struct sk_buff *skb, 190 + unsigned int nhoff, 191 + u_int16_t l3num, 192 + struct nf_conntrack_tuple *tuple); 189 193 extern int nf_ct_invert_tuplepr(struct nf_conntrack_tuple *inverse, 190 194 const struct nf_conntrack_tuple *orig); 191 195
+3 -5
include/net/netfilter/nf_conntrack_l3proto.h
··· 58 58 59 59 /* 60 60 * Called before tracking. 61 - * *dataoff: offset of protocol header (TCP, UDP,...) in *pskb 61 + * *dataoff: offset of protocol header (TCP, UDP,...) in skb 62 62 * *protonum: protocol number 63 63 */ 64 - int (*prepare)(struct sk_buff **pskb, unsigned int hooknum, 65 - unsigned int *dataoff, u_int8_t *protonum); 64 + int (*get_l4proto)(const struct sk_buff *skb, unsigned int nhoff, 65 + unsigned int *dataoff, u_int8_t *protonum); 66 66 67 67 int (*tuple_to_nfattr)(struct sk_buff *skb, 68 68 const struct nf_conntrack_tuple *t); ··· 89 89 extern void nf_ct_l3proto_put(struct nf_conntrack_l3proto *p); 90 90 91 91 /* Existing built-in protocols */ 92 - extern struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv4; 93 - extern struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv6; 94 92 extern struct nf_conntrack_l3proto nf_conntrack_l3proto_generic; 95 93 96 94 static inline struct nf_conntrack_l3proto *
-8
include/net/pkt_cls.h
··· 65 65 { 66 66 #ifdef CONFIG_NET_CLS_ACT 67 67 struct tc_action *action; 68 - #elif defined CONFIG_NET_CLS_POLICE 69 - struct tcf_police *police; 70 68 #endif 71 69 }; 72 70 ··· 89 91 { 90 92 #ifdef CONFIG_NET_CLS_ACT 91 93 return !!exts->action; 92 - #elif defined CONFIG_NET_CLS_POLICE 93 - return !!exts->police; 94 94 #else 95 95 return 0; 96 96 #endif ··· 125 129 #ifdef CONFIG_NET_CLS_ACT 126 130 if (exts->action) 127 131 return tcf_action_exec(skb, exts->action, res); 128 - #elif defined CONFIG_NET_CLS_POLICE 129 - if (exts->police) 130 - return tcf_police(skb, exts->police); 131 132 #endif 132 - 133 133 return 0; 134 134 } 135 135
+3 -1
include/net/pkt_sched.h
··· 89 89 __qdisc_run(dev); 90 90 } 91 91 92 + extern int tc_classify_compat(struct sk_buff *skb, struct tcf_proto *tp, 93 + struct tcf_result *res); 92 94 extern int tc_classify(struct sk_buff *skb, struct tcf_proto *tp, 93 - struct tcf_result *res); 95 + struct tcf_result *res); 94 96 95 97 /* Calculate maximal size of packet seen by hard_start_xmit 96 98 routine of this device.
+1 -1
include/net/sch_generic.h
··· 290 290 { 291 291 sch->qstats.drops++; 292 292 293 - #ifdef CONFIG_NET_CLS_POLICE 293 + #ifdef CONFIG_NET_CLS_ACT 294 294 if (sch->reshape_fail == NULL || sch->reshape_fail(skb, sch)) 295 295 goto drop; 296 296
+2 -1
net/8021q/vlan.c
··· 373 373 new_dev->open = vlan_dev_open; 374 374 new_dev->stop = vlan_dev_stop; 375 375 new_dev->set_multicast_list = vlan_dev_set_multicast_list; 376 + new_dev->change_rx_flags = vlan_change_rx_flags; 376 377 new_dev->destructor = free_netdev; 377 378 new_dev->do_ioctl = vlan_dev_ioctl; 378 379 379 - memset(new_dev->broadcast, 0, sizeof(ETH_ALEN)); 380 + memset(new_dev->broadcast, 0, ETH_ALEN); 380 381 } 381 382 382 383 static void vlan_transfer_operstate(const struct net_device *dev, struct net_device *vlandev)
+1
net/8021q/vlan.h
··· 69 69 u32 flag, short flag_val); 70 70 void vlan_dev_get_realdev_name(const struct net_device *dev, char *result); 71 71 void vlan_dev_get_vid(const struct net_device *dev, unsigned short *result); 72 + void vlan_change_rx_flags(struct net_device *dev, int change); 72 73 void vlan_dev_set_multicast_list(struct net_device *vlan_dev); 73 74 74 75 int vlan_check_real_dev(struct net_device *real_dev, unsigned short vlan_id);
+21 -146
net/8021q/vlan_dev.c
··· 612 612 *result = VLAN_DEV_INFO(dev)->vlan_id; 613 613 } 614 614 615 - static inline int vlan_dmi_equals(struct dev_mc_list *dmi1, 616 - struct dev_mc_list *dmi2) 617 - { 618 - return ((dmi1->dmi_addrlen == dmi2->dmi_addrlen) && 619 - (memcmp(dmi1->dmi_addr, dmi2->dmi_addr, dmi1->dmi_addrlen) == 0)); 620 - } 621 - 622 - /** dmi is a single entry into a dev_mc_list, a single node. mc_list is 623 - * an entire list, and we'll iterate through it. 624 - */ 625 - static int vlan_should_add_mc(struct dev_mc_list *dmi, struct dev_mc_list *mc_list) 626 - { 627 - struct dev_mc_list *idmi; 628 - 629 - for (idmi = mc_list; idmi != NULL; ) { 630 - if (vlan_dmi_equals(dmi, idmi)) { 631 - if (dmi->dmi_users > idmi->dmi_users) 632 - return 1; 633 - else 634 - return 0; 635 - } else { 636 - idmi = idmi->next; 637 - } 638 - } 639 - 640 - return 1; 641 - } 642 - 643 - static inline void vlan_destroy_mc_list(struct dev_mc_list *mc_list) 644 - { 645 - struct dev_mc_list *dmi = mc_list; 646 - struct dev_mc_list *next; 647 - 648 - while(dmi) { 649 - next = dmi->next; 650 - kfree(dmi); 651 - dmi = next; 652 - } 653 - } 654 - 655 - static void vlan_copy_mc_list(struct dev_mc_list *mc_list, struct vlan_dev_info *vlan_info) 656 - { 657 - struct dev_mc_list *dmi, *new_dmi; 658 - 659 - vlan_destroy_mc_list(vlan_info->old_mc_list); 660 - vlan_info->old_mc_list = NULL; 661 - 662 - for (dmi = mc_list; dmi != NULL; dmi = dmi->next) { 663 - new_dmi = kmalloc(sizeof(*new_dmi), GFP_ATOMIC); 664 - if (new_dmi == NULL) { 665 - printk(KERN_ERR "vlan: cannot allocate memory. " 666 - "Multicast may not work properly from now.\n"); 667 - return; 668 - } 669 - 670 - /* Copy whole structure, then make new 'next' pointer */ 671 - *new_dmi = *dmi; 672 - new_dmi->next = vlan_info->old_mc_list; 673 - vlan_info->old_mc_list = new_dmi; 674 - } 675 - } 676 - 677 - static void vlan_flush_mc_list(struct net_device *dev) 678 - { 679 - struct dev_mc_list *dmi = dev->mc_list; 680 - 681 - while (dmi) { 682 - printk(KERN_DEBUG "%s: del %.2x:%.2x:%.2x:%.2x:%.2x:%.2x mcast address from vlan interface\n", 683 - dev->name, 684 - dmi->dmi_addr[0], 685 - dmi->dmi_addr[1], 686 - dmi->dmi_addr[2], 687 - dmi->dmi_addr[3], 688 - dmi->dmi_addr[4], 689 - dmi->dmi_addr[5]); 690 - dev_mc_delete(dev, dmi->dmi_addr, dmi->dmi_addrlen, 0); 691 - dmi = dev->mc_list; 692 - } 693 - 694 - /* dev->mc_list is NULL by the time we get here. 
*/ 695 - vlan_destroy_mc_list(VLAN_DEV_INFO(dev)->old_mc_list); 696 - VLAN_DEV_INFO(dev)->old_mc_list = NULL; 697 - } 698 - 699 615 int vlan_dev_open(struct net_device *dev) 700 616 { 701 617 struct vlan_dev_info *vlan = VLAN_DEV_INFO(dev); ··· 628 712 } 629 713 memcpy(vlan->real_dev_addr, real_dev->dev_addr, ETH_ALEN); 630 714 715 + if (dev->flags & IFF_ALLMULTI) 716 + dev_set_allmulti(real_dev, 1); 717 + if (dev->flags & IFF_PROMISC) 718 + dev_set_promiscuity(real_dev, 1); 719 + 631 720 return 0; 632 721 } 633 722 ··· 640 719 { 641 720 struct net_device *real_dev = VLAN_DEV_INFO(dev)->real_dev; 642 721 643 - vlan_flush_mc_list(dev); 722 + dev_mc_unsync(real_dev, dev); 723 + if (dev->flags & IFF_ALLMULTI) 724 + dev_set_allmulti(real_dev, -1); 725 + if (dev->flags & IFF_PROMISC) 726 + dev_set_promiscuity(real_dev, -1); 644 727 645 728 if (compare_ether_addr(dev->dev_addr, real_dev->dev_addr)) 646 729 dev_unicast_delete(real_dev, dev->dev_addr, dev->addr_len); ··· 679 754 return err; 680 755 } 681 756 757 + void vlan_change_rx_flags(struct net_device *dev, int change) 758 + { 759 + struct net_device *real_dev = VLAN_DEV_INFO(dev)->real_dev; 760 + 761 + if (change & IFF_ALLMULTI) 762 + dev_set_allmulti(real_dev, dev->flags & IFF_ALLMULTI ? 1 : -1); 763 + if (change & IFF_PROMISC) 764 + dev_set_promiscuity(real_dev, dev->flags & IFF_PROMISC ? 1 : -1); 765 + } 766 + 682 767 /** Taken from Gleb + Lennert's VLAN code, and modified... */ 683 768 void vlan_dev_set_multicast_list(struct net_device *vlan_dev) 684 769 { 685 - struct dev_mc_list *dmi; 686 - struct net_device *real_dev; 687 - int inc; 688 - 689 - if (vlan_dev && (vlan_dev->priv_flags & IFF_802_1Q_VLAN)) { 690 - /* Then it's a real vlan device, as far as we can tell.. */ 691 - real_dev = VLAN_DEV_INFO(vlan_dev)->real_dev; 692 - 693 - /* compare the current promiscuity to the last promisc we had.. */ 694 - inc = vlan_dev->promiscuity - VLAN_DEV_INFO(vlan_dev)->old_promiscuity; 695 - if (inc) { 696 - printk(KERN_INFO "%s: dev_set_promiscuity(master, %d)\n", 697 - vlan_dev->name, inc); 698 - dev_set_promiscuity(real_dev, inc); /* found in dev.c */ 699 - VLAN_DEV_INFO(vlan_dev)->old_promiscuity = vlan_dev->promiscuity; 700 - } 701 - 702 - inc = vlan_dev->allmulti - VLAN_DEV_INFO(vlan_dev)->old_allmulti; 703 - if (inc) { 704 - printk(KERN_INFO "%s: dev_set_allmulti(master, %d)\n", 705 - vlan_dev->name, inc); 706 - dev_set_allmulti(real_dev, inc); /* dev.c */ 707 - VLAN_DEV_INFO(vlan_dev)->old_allmulti = vlan_dev->allmulti; 708 - } 709 - 710 - /* looking for addresses to add to master's list */ 711 - for (dmi = vlan_dev->mc_list; dmi != NULL; dmi = dmi->next) { 712 - if (vlan_should_add_mc(dmi, VLAN_DEV_INFO(vlan_dev)->old_mc_list)) { 713 - dev_mc_add(real_dev, dmi->dmi_addr, dmi->dmi_addrlen, 0); 714 - printk(KERN_DEBUG "%s: add %.2x:%.2x:%.2x:%.2x:%.2x:%.2x mcast address to master interface\n", 715 - vlan_dev->name, 716 - dmi->dmi_addr[0], 717 - dmi->dmi_addr[1], 718 - dmi->dmi_addr[2], 719 - dmi->dmi_addr[3], 720 - dmi->dmi_addr[4], 721 - dmi->dmi_addr[5]); 722 - } 723 - } 724 - 725 - /* looking for addresses to delete from master's list */ 726 - for (dmi = VLAN_DEV_INFO(vlan_dev)->old_mc_list; dmi != NULL; dmi = dmi->next) { 727 - if (vlan_should_add_mc(dmi, vlan_dev->mc_list)) { 728 - /* if we think we should add it to the new list, then we should really 729 - * delete it from the real list on the underlying device. 
730 - */ 731 - dev_mc_delete(real_dev, dmi->dmi_addr, dmi->dmi_addrlen, 0); 732 - printk(KERN_DEBUG "%s: del %.2x:%.2x:%.2x:%.2x:%.2x:%.2x mcast address from master interface\n", 733 - vlan_dev->name, 734 - dmi->dmi_addr[0], 735 - dmi->dmi_addr[1], 736 - dmi->dmi_addr[2], 737 - dmi->dmi_addr[3], 738 - dmi->dmi_addr[4], 739 - dmi->dmi_addr[5]); 740 - } 741 - } 742 - 743 - /* save multicast list */ 744 - vlan_copy_mc_list(vlan_dev->mc_list, VLAN_DEV_INFO(vlan_dev)); 745 - } 770 + dev_mc_sync(VLAN_DEV_INFO(vlan_dev)->real_dev, vlan_dev); 746 771 }
+2 -2
net/bridge/netfilter/ebtables.c
··· 1525 1525 if ((ret = nf_register_sockopt(&ebt_sockopts)) < 0) 1526 1526 return ret; 1527 1527 1528 - printk(KERN_NOTICE "Ebtables v2.0 registered\n"); 1528 + printk(KERN_INFO "Ebtables v2.0 registered\n"); 1529 1529 return 0; 1530 1530 } 1531 1531 1532 1532 static void __exit ebtables_fini(void) 1533 1533 { 1534 1534 nf_unregister_sockopt(&ebt_sockopts); 1535 - printk(KERN_NOTICE "Ebtables v2.0 unregistered\n"); 1535 + printk(KERN_INFO "Ebtables v2.0 unregistered\n"); 1536 1536 } 1537 1537 1538 1538 EXPORT_SYMBOL(ebt_register_table);
+42 -1
net/core/dev.c
··· 98 98 #include <linux/seq_file.h> 99 99 #include <linux/stat.h> 100 100 #include <linux/if_bridge.h> 101 + #include <linux/if_macvlan.h> 101 102 #include <net/dst.h> 102 103 #include <net/pkt_sched.h> 103 104 #include <net/checksum.h> ··· 1814 1813 #define handle_bridge(skb, pt_prev, ret, orig_dev) (skb) 1815 1814 #endif 1816 1815 1816 + #if defined(CONFIG_MACVLAN) || defined(CONFIG_MACVLAN_MODULE) 1817 + struct sk_buff *(*macvlan_handle_frame_hook)(struct sk_buff *skb) __read_mostly; 1818 + EXPORT_SYMBOL_GPL(macvlan_handle_frame_hook); 1819 + 1820 + static inline struct sk_buff *handle_macvlan(struct sk_buff *skb, 1821 + struct packet_type **pt_prev, 1822 + int *ret, 1823 + struct net_device *orig_dev) 1824 + { 1825 + if (skb->dev->macvlan_port == NULL) 1826 + return skb; 1827 + 1828 + if (*pt_prev) { 1829 + *ret = deliver_skb(skb, *pt_prev, orig_dev); 1830 + *pt_prev = NULL; 1831 + } 1832 + return macvlan_handle_frame_hook(skb); 1833 + } 1834 + #else 1835 + #define handle_macvlan(skb, pt_prev, ret, orig_dev) (skb) 1836 + #endif 1837 + 1817 1838 #ifdef CONFIG_NET_CLS_ACT 1818 1839 /* TODO: Maybe we should just force sch_ingress to be compiled in 1819 1840 * when CONFIG_NET_CLS_ACT is? otherwise some useless instructions ··· 1941 1918 #endif 1942 1919 1943 1920 skb = handle_bridge(skb, &pt_prev, &ret, orig_dev); 1921 + if (!skb) 1922 + goto out; 1923 + skb = handle_macvlan(skb, &pt_prev, &ret, orig_dev); 1944 1924 if (!skb) 1945 1925 goto out; 1946 1926 ··· 2547 2521 { 2548 2522 unsigned short old_flags = dev->flags; 2549 2523 2524 + ASSERT_RTNL(); 2525 + 2550 2526 if ((dev->promiscuity += inc) == 0) 2551 2527 dev->flags &= ~IFF_PROMISC; 2552 2528 else ··· 2563 2535 dev->name, (dev->flags & IFF_PROMISC), 2564 2536 (old_flags & IFF_PROMISC), 2565 2537 audit_get_loginuid(current->audit_context)); 2538 + 2539 + if (dev->change_rx_flags) 2540 + dev->change_rx_flags(dev, IFF_PROMISC); 2566 2541 } 2567 2542 } 2568 2543 ··· 2604 2573 { 2605 2574 unsigned short old_flags = dev->flags; 2606 2575 2576 + ASSERT_RTNL(); 2577 + 2607 2578 dev->flags |= IFF_ALLMULTI; 2608 2579 if ((dev->allmulti += inc) == 0) 2609 2580 dev->flags &= ~IFF_ALLMULTI; 2610 - if (dev->flags ^ old_flags) 2581 + if (dev->flags ^ old_flags) { 2582 + if (dev->change_rx_flags) 2583 + dev->change_rx_flags(dev, IFF_ALLMULTI); 2611 2584 dev_set_rx_mode(dev); 2585 + } 2612 2586 } 2613 2587 2614 2588 /* ··· 2814 2778 int ret, changes; 2815 2779 int old_flags = dev->flags; 2816 2780 2781 + ASSERT_RTNL(); 2782 + 2817 2783 /* 2818 2784 * Set the flags on our device. 2819 2785 */ ··· 2829 2791 /* 2830 2792 * Load in the correct multicast list now the flags have changed. 2831 2793 */ 2794 + 2795 + if (dev->change_rx_flags && (dev->flags ^ flags) & IFF_MULTICAST) 2796 + dev->change_rx_flags(dev, IFF_MULTICAST); 2832 2797 2833 2798 dev_set_rx_mode(dev); 2834 2799
+75
net/core/dev_mcast.c
··· 102 102 return err; 103 103 } 104 104 105 + /** 106 + * dev_mc_sync - Synchronize device's multicast list to another device 107 + * @to: destination device 108 + * @from: source device 109 + * 110 + * Add newly added addresses to the destination device and release 111 + * addresses that have no users left. The source device must be 112 + * locked by netif_tx_lock_bh. 113 + * 114 + * This function is intended to be called from the dev->set_multicast_list 115 + * function of layered software devices. 116 + */ 117 + int dev_mc_sync(struct net_device *to, struct net_device *from) 118 + { 119 + struct dev_addr_list *da; 120 + int err = 0; 121 + 122 + netif_tx_lock_bh(to); 123 + for (da = from->mc_list; da != NULL; da = da->next) { 124 + if (!da->da_synced) { 125 + err = __dev_addr_add(&to->mc_list, &to->mc_count, 126 + da->da_addr, da->da_addrlen, 0); 127 + if (err < 0) 128 + break; 129 + da->da_synced = 1; 130 + da->da_users++; 131 + } else if (da->da_users == 1) { 132 + __dev_addr_delete(&to->mc_list, &to->mc_count, 133 + da->da_addr, da->da_addrlen, 0); 134 + __dev_addr_delete(&from->mc_list, &from->mc_count, 135 + da->da_addr, da->da_addrlen, 0); 136 + } 137 + } 138 + if (!err) 139 + __dev_set_rx_mode(to); 140 + netif_tx_unlock_bh(to); 141 + 142 + return err; 143 + } 144 + EXPORT_SYMBOL(dev_mc_sync); 145 + 146 + 147 + /** 148 + * dev_mc_unsync - Remove synchronized addresses from the destination 149 + * device 150 + * @to: destination device 151 + * @from: source device 152 + * 153 + * Remove all addresses that were added to the destination device by 154 + * dev_mc_sync(). This function is intended to be called from the 155 + * dev->stop function of layered software devices. 156 + */ 157 + void dev_mc_unsync(struct net_device *to, struct net_device *from) 158 + { 159 + struct dev_addr_list *da; 160 + 161 + netif_tx_lock_bh(from); 162 + netif_tx_lock_bh(to); 163 + 164 + for (da = from->mc_list; da != NULL; da = da->next) { 165 + if (!da->da_synced) 166 + continue; 167 + __dev_addr_delete(&to->mc_list, &to->mc_count, 168 + da->da_addr, da->da_addrlen, 0); 169 + da->da_synced = 0; 170 + __dev_addr_delete(&from->mc_list, &from->mc_count, 171 + da->da_addr, da->da_addrlen, 0); 172 + } 173 + __dev_set_rx_mode(to); 174 + 175 + netif_tx_unlock_bh(to); 176 + netif_tx_unlock_bh(from); 177 + } 178 + EXPORT_SYMBOL(dev_mc_unsync); 179 + 105 180 /* 106 181 * Discard multicast list when a device is downed 107 182 */
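dev_mc_sync()/dev_mc_unsync() replace the hand-rolled copy-and-compare loops that layered drivers needed until now -- the VLAN hunk above collapses to a single dev_mc_sync() call. The intended call sites, sketched for a hypothetical layered driver (struct and field names are illustrative):

    struct example_priv {
            struct net_device *lowerdev;
    };

    /* dev->set_multicast_list: runs under netif_tx_lock_bh via
     * dev_set_rx_mode(), which is exactly what dev_mc_sync() expects
     * of the source device */
    static void example_set_multicast_list(struct net_device *dev)
    {
            struct example_priv *priv = netdev_priv(dev);

            /* push new entries down, drop the ones we no longer hold */
            dev_mc_sync(priv->lowerdev, dev);
    }

    /* dev->stop: give back everything dev_mc_sync() added for us */
    static int example_stop(struct net_device *dev)
    {
            struct example_priv *priv = netdev_priv(dev);

            dev_mc_unsync(priv->lowerdev, dev);
            return 0;
    }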
+12
net/core/ethtool.c
··· 52 52 53 53 return 0; 54 54 } 55 + 56 + int ethtool_op_set_tx_ipv6_csum(struct net_device *dev, u32 data) 57 + { 58 + if (data) 59 + dev->features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM; 60 + else 61 + dev->features &= ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM); 62 + 63 + return 0; 64 + } 65 + 55 66 u32 ethtool_op_get_sg(struct net_device *dev) 56 67 { 57 68 return (dev->features & NETIF_F_SG) != 0; ··· 991 980 EXPORT_SYMBOL(ethtool_op_set_tso); 992 981 EXPORT_SYMBOL(ethtool_op_set_tx_csum); 993 982 EXPORT_SYMBOL(ethtool_op_set_tx_hw_csum); 983 + EXPORT_SYMBOL(ethtool_op_set_tx_ipv6_csum); 994 984 EXPORT_SYMBOL(ethtool_op_set_ufo); 995 985 EXPORT_SYMBOL(ethtool_op_get_ufo);
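Drivers whose hardware can checksum both IPv4 and IPv6 packets (NETIF_F_IPV6_CSUM) can plug the new helper in as their set_tx_csum ethtool op instead of open-coding it. A sketch; the ops table and its name are illustrative:

    static const struct ethtool_ops example_ethtool_ops = {
            /* report/change TX checksumming for both IPv4 and IPv6 */
            .get_tx_csum    = ethtool_op_get_tx_csum,
            .set_tx_csum    = ethtool_op_set_tx_ipv6_csum,
    };

    /* probe path: SET_ETHTOOL_OPS(netdev, &example_ethtool_ops); */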
+1 -1
net/ipv4/arp.c
··· 885 885 if (n == NULL && 886 886 arp->ar_op == htons(ARPOP_REPLY) && 887 887 inet_addr_type(sip) == RTN_UNICAST) 888 - n = __neigh_lookup(&arp_tbl, &sip, dev, -1); 888 + n = __neigh_lookup(&arp_tbl, &sip, dev, 1); 889 889 } 890 890 891 891 if (n) {
+2 -3
net/ipv4/inet_timewait_sock.c
··· 14 14 #include <net/ip.h> 15 15 16 16 /* Must be called with locally disabled BHs. */ 17 - void __inet_twsk_kill(struct inet_timewait_sock *tw, struct inet_hashinfo *hashinfo) 17 + static void __inet_twsk_kill(struct inet_timewait_sock *tw, 18 + struct inet_hashinfo *hashinfo) 18 19 { 19 20 struct inet_bind_hashbucket *bhead; 20 21 struct inet_bind_bucket *tb; ··· 47 46 #endif 48 47 inet_twsk_put(tw); 49 48 } 50 - 51 - EXPORT_SYMBOL_GPL(__inet_twsk_kill); 52 49 53 50 /* 54 51 * Enter the time wait state. This is called with locally disabled BH.
+1 -1
net/ipv4/netfilter/arp_tables.c
··· 1184 1184 if (ret < 0) 1185 1185 goto err4; 1186 1186 1187 - printk("arp_tables: (C) 2002 David S. Miller\n"); 1187 + printk(KERN_INFO "arp_tables: (C) 2002 David S. Miller\n"); 1188 1188 return 0; 1189 1189 1190 1190 err4:
+15 -10
net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
··· 78 78 return skb; 79 79 } 80 80 81 - static int 82 - ipv4_prepare(struct sk_buff **pskb, unsigned int hooknum, unsigned int *dataoff, 83 - u_int8_t *protonum) 81 + static int ipv4_get_l4proto(const struct sk_buff *skb, unsigned int nhoff, 82 + unsigned int *dataoff, u_int8_t *protonum) 84 83 { 84 + struct iphdr _iph, *iph; 85 + 86 + iph = skb_header_pointer(skb, nhoff, sizeof(_iph), &_iph); 87 + if (iph == NULL) 88 + return -NF_DROP; 89 + 85 90 /* Never happen */ 86 - if (ip_hdr(*pskb)->frag_off & htons(IP_OFFSET)) { 91 + if (iph->frag_off & htons(IP_OFFSET)) { 87 92 if (net_ratelimit()) { 88 - printk(KERN_ERR "ipv4_prepare: Frag of proto %u (hook=%u)\n", 89 - ip_hdr(*pskb)->protocol, hooknum); 93 + printk(KERN_ERR "ipv4_get_l4proto: Frag of proto %u\n", 94 + iph->protocol); 90 95 } 91 96 return -NF_DROP; 92 97 } 93 98 94 - *dataoff = skb_network_offset(*pskb) + ip_hdrlen(*pskb); 95 - *protonum = ip_hdr(*pskb)->protocol; 99 + *dataoff = nhoff + (iph->ihl << 2); 100 + *protonum = iph->protocol; 96 101 97 102 return NF_ACCEPT; 98 103 } ··· 405 400 .get = &getorigdst, 406 401 }; 407 402 408 - struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv4 = { 403 + struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv4 __read_mostly = { 409 404 .l3proto = PF_INET, 410 405 .name = "ipv4", 411 406 .pkt_to_tuple = ipv4_pkt_to_tuple, 412 407 .invert_tuple = ipv4_invert_tuple, 413 408 .print_tuple = ipv4_print_tuple, 414 409 .print_conntrack = ipv4_print_conntrack, 415 - .prepare = ipv4_prepare, 410 + .get_l4proto = ipv4_get_l4proto, 416 411 #if defined(CONFIG_NF_CT_NETLINK) || defined(CONFIG_NF_CT_NETLINK_MODULE) 417 412 .tuple_to_nfattr = ipv4_tuple_to_nfattr, 418 413 .nfattr_to_tuple = ipv4_nfattr_to_tuple,
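Worth noting about the prepare() -> get_l4proto() conversion: the old hook dereferenced ip_hdr(*pskb) directly, so it could only parse the linear, outermost IPv4 header, whereas the new one takes an explicit nhoff and reads through skb_header_pointer(), so the same code can parse a network header at any offset in a possibly non-linear skb -- in particular the packet embedded inside an ICMP error, which the conntrack ICMP changes below rely on.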
+13 -44
net/ipv4/netfilter/nf_conntrack_proto_icmp.c
··· 136 136 unsigned int hooknum) 137 137 { 138 138 struct nf_conntrack_tuple innertuple, origtuple; 139 - struct { 140 - struct icmphdr icmp; 141 - struct iphdr ip; 142 - } _in, *inside; 143 139 struct nf_conntrack_l4proto *innerproto; 144 140 struct nf_conntrack_tuple_hash *h; 145 - int dataoff; 146 141 147 142 NF_CT_ASSERT(skb->nfct == NULL); 148 143 149 - /* Not enough header? */ 150 - inside = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(_in), &_in); 151 - if (inside == NULL) 152 - return -NF_ACCEPT; 153 - 154 - /* Ignore ICMP's containing fragments (shouldn't happen) */ 155 - if (inside->ip.frag_off & htons(IP_OFFSET)) { 156 - pr_debug("icmp_error_message: fragment of proto %u\n", 157 - inside->ip.protocol); 144 + /* Are they talking about one of our connections? */ 145 + if (!nf_ct_get_tuplepr(skb, 146 + skb_network_offset(skb) + ip_hdrlen(skb) 147 + + sizeof(struct icmphdr), 148 + PF_INET, &origtuple)) { 149 + pr_debug("icmp_error_message: failed to get tuple\n"); 158 150 return -NF_ACCEPT; 159 151 } 160 152 161 153 /* rcu_read_lock()ed by nf_hook_slow */ 162 - innerproto = __nf_ct_l4proto_find(PF_INET, inside->ip.protocol); 163 - 164 - dataoff = ip_hdrlen(skb) + sizeof(inside->icmp); 165 - /* Are they talking about one of our connections? */ 166 - if (!nf_ct_get_tuple(skb, dataoff, dataoff + inside->ip.ihl*4, PF_INET, 167 - inside->ip.protocol, &origtuple, 168 - &nf_conntrack_l3proto_ipv4, innerproto)) { 169 - pr_debug("icmp_error_message: ! get_tuple p=%u", 170 - inside->ip.protocol); 171 - return -NF_ACCEPT; 172 - } 154 + innerproto = __nf_ct_l4proto_find(PF_INET, origtuple.dst.protonum); 173 155 174 156 /* Ordinarily, we'd expect the inverted tupleproto, but it's 175 157 been preserved inside the ICMP. */ ··· 165 183 166 184 h = nf_conntrack_find_get(&innertuple); 167 185 if (!h) { 168 - /* Locally generated ICMPs will match inverted if they 169 - haven't been SNAT'ed yet */ 170 - /* FIXME: NAT code has to handle half-done double NAT --RR */ 171 - if (hooknum == NF_IP_LOCAL_OUT) 172 - h = nf_conntrack_find_get(&origtuple); 173 - 174 - if (!h) { 175 - pr_debug("icmp_error_message: no match\n"); 176 - return -NF_ACCEPT; 177 - } 178 - 179 - /* Reverse direction from that found */ 180 - if (NF_CT_DIRECTION(h) == IP_CT_DIR_REPLY) 181 - *ctinfo += IP_CT_IS_REPLY; 182 - } else { 183 - if (NF_CT_DIRECTION(h) == IP_CT_DIR_REPLY) 184 - *ctinfo += IP_CT_IS_REPLY; 186 + pr_debug("icmp_error_message: no match\n"); 187 + return -NF_ACCEPT; 185 188 } 189 + 190 + if (NF_CT_DIRECTION(h) == IP_CT_DIR_REPLY) 191 + *ctinfo += IP_CT_IS_REPLY; 186 192 187 193 /* Update skb to refer to this connection */ 188 194 skb->nfct = &nf_ct_tuplehash_to_ctrack(h)->ct_general; ··· 312 342 #endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ 313 343 #endif /* CONFIG_SYSCTL */ 314 344 315 - struct nf_conntrack_l4proto nf_conntrack_l4proto_icmp = 345 + struct nf_conntrack_l4proto nf_conntrack_l4proto_icmp __read_mostly = 316 346 { 317 347 .l3proto = PF_INET, 318 348 .l4proto = IPPROTO_ICMP, ··· 338 368 #endif 339 369 #endif 340 370 }; 341 - EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_icmp);
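icmp_error_message() (and its ICMPv6 twin below) now hand the embedded packet to the new nf_ct_get_tuplepr() instead of open-coding the header extraction, and the NF_IP_LOCAL_OUT fallback for locally generated errors is dropped. Judging from the call sites and the get_l4proto() hook above, the helper can be pictured roughly as follows -- a sketch, not the verbatim nf_conntrack_core.c code:

    /* Sketch: resolve the l3/l4 protocol handlers for the packet at
     * nhoff, let the l3 handler locate the transport header, then
     * build the tuple.  Assumes rcu_read_lock() is held, as it is on
     * the hook paths seen here. */
    int nf_ct_get_tuplepr(const struct sk_buff *skb, unsigned int nhoff,
                          u_int16_t l3num, struct nf_conntrack_tuple *tuple)
    {
            struct nf_conntrack_l3proto *l3proto;
            struct nf_conntrack_l4proto *l4proto;
            unsigned int protoff;
            u_int8_t protonum;

            l3proto = __nf_ct_l3proto_find(l3num);
            if (l3proto->get_l4proto(skb, nhoff, &protoff, &protonum) !=
                NF_ACCEPT)
                    return 0;       /* truncated or unparseable header */

            l4proto = __nf_ct_l4proto_find(l3num, protonum);
            return nf_ct_get_tuple(skb, nhoff, protoff, l3num, protonum,
                                   tuple, l3proto, l4proto);
    }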
+3 -1
net/ipv4/tcp_input.c
··· 1398 1398 * waiting for the first ACK and did not get it)... 1399 1399 */ 1400 1400 if ((tp->frto_counter == 1) && !(flag&FLAG_DATA_ACKED)) { 1401 - tp->retrans_out += tcp_skb_pcount(skb); 1401 + /* For some reason this R-bit might get cleared? */ 1402 + if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_RETRANS) 1403 + tp->retrans_out += tcp_skb_pcount(skb); 1402 1404 /* ...enter this if branch just for the first segment */ 1403 1405 flag |= FLAG_DATA_ACKED; 1404 1406 } else {
+1
net/ipv4/tcp_probe.c
··· 111 111 p->snd_una = tp->snd_una; 112 112 p->snd_cwnd = tp->snd_cwnd; 113 113 p->snd_wnd = tp->snd_wnd; 114 + p->ssthresh = tcp_current_ssthresh(sk); 114 115 p->srtt = tp->srtt >> 3; 115 116 116 117 tcp_probe.head = (tcp_probe.head + 1) % bufsize;
+1
net/ipv6/addrconf.c
··· 2475 2475 write_unlock_bh(&idev->lock); 2476 2476 2477 2477 __ipv6_ifa_notify(RTM_DELADDR, ifa); 2478 + atomic_notifier_call_chain(&inet6addr_chain, NETDEV_DOWN, ifa); 2478 2479 in6_ifa_put(ifa); 2479 2480 2480 2481 write_lock_bh(&idev->lock);
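With this one-liner the inet6addr_chain now fires NETDEV_DOWN for addresses torn down by link loss, not only for explicit deletion. A minimal, hypothetical subscriber, to show what the notification carries:

    static int example_inet6addr_event(struct notifier_block *nb,
                                       unsigned long event, void *ptr)
    {
            struct inet6_ifaddr *ifa = ptr;

            if (event == NETDEV_DOWN)
                    printk(KERN_DEBUG "lost IPv6 address on %s\n",
                           ifa->idev->dev->name);
            return NOTIFY_DONE;
    }

    static struct notifier_block example_inet6addr_nb = {
            .notifier_call = example_inet6addr_event,
    };

    /* module init: register_inet6addr_notifier(&example_inet6addr_nb); */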
+1 -1
net/ipv6/icmp.c
··· 604 604 605 605 read_lock(&raw_v6_lock); 606 606 if ((sk = sk_head(&raw_v6_htable[hash])) != NULL) { 607 - while((sk = __raw_v6_lookup(sk, nexthdr, daddr, saddr, 607 + while ((sk = __raw_v6_lookup(sk, nexthdr, saddr, daddr, 608 608 IP6CB(skb)->iif))) { 609 609 rawv6_err(sk, skb, NULL, type, code, inner_offset, info); 610 610 sk = sk_next(sk);
+1 -1
net/ipv6/netfilter/ip6_tables.c
··· 1497 1497 if (ret < 0) 1498 1498 goto err5; 1499 1499 1500 - printk("ip6_tables: (C) 2000-2006 Netfilter Core Team\n"); 1500 + printk(KERN_INFO "ip6_tables: (C) 2000-2006 Netfilter Core Team\n"); 1501 1501 return 0; 1502 1502 1503 1503 err5:
+17 -14
net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c
··· 86 86 * - Note also special handling of AUTH header. Thanks to IPsec wizards. 87 87 */ 88 88 89 - int nf_ct_ipv6_skip_exthdr(struct sk_buff *skb, int start, u8 *nexthdrp, 89 + int nf_ct_ipv6_skip_exthdr(const struct sk_buff *skb, int start, u8 *nexthdrp, 90 90 int len) 91 91 { 92 92 u8 nexthdr = *nexthdrp; ··· 117 117 return start; 118 118 } 119 119 120 - static int 121 - ipv6_prepare(struct sk_buff **pskb, unsigned int hooknum, unsigned int *dataoff, 122 - u_int8_t *protonum) 120 + static int ipv6_get_l4proto(const struct sk_buff *skb, unsigned int nhoff, 121 + unsigned int *dataoff, u_int8_t *protonum) 123 122 { 124 - unsigned int extoff = (u8 *)(ipv6_hdr(*pskb) + 1) - (*pskb)->data; 125 - unsigned char pnum = ipv6_hdr(*pskb)->nexthdr; 126 - int protoff = nf_ct_ipv6_skip_exthdr(*pskb, extoff, &pnum, 127 - (*pskb)->len - extoff); 123 + unsigned int extoff = nhoff + sizeof(struct ipv6hdr); 124 + unsigned char pnum; 125 + int protoff; 126 + 127 + if (skb_copy_bits(skb, nhoff + offsetof(struct ipv6hdr, nexthdr), 128 + &pnum, sizeof(pnum)) != 0) { 129 + pr_debug("ip6_conntrack_core: can't get nexthdr\n"); 130 + return -NF_ACCEPT; 131 + } 132 + protoff = nf_ct_ipv6_skip_exthdr(skb, extoff, &pnum, skb->len - extoff); 128 133 /* 129 - * (protoff == (*pskb)->len) mean that the packet doesn't have no data 134 + * (protoff == skb->len) mean that the packet doesn't have no data 130 135 * except of IPv6 & ext headers. but it's tracked anyway. - YK 131 136 */ 132 - if ((protoff < 0) || (protoff > (*pskb)->len)) { 137 + if ((protoff < 0) || (protoff > skb->len)) { 133 138 pr_debug("ip6_conntrack_core: can't find proto in pkt\n"); 134 - NF_CT_STAT_INC_ATOMIC(error); 135 - NF_CT_STAT_INC_ATOMIC(invalid); 136 139 return -NF_ACCEPT; 137 140 } 138 141 ··· 373 370 } 374 371 #endif 375 372 376 - struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv6 = { 373 + struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv6 __read_mostly = { 377 374 .l3proto = PF_INET6, 378 375 .name = "ipv6", 379 376 .pkt_to_tuple = ipv6_pkt_to_tuple, 380 377 .invert_tuple = ipv6_invert_tuple, 381 378 .print_tuple = ipv6_print_tuple, 382 379 .print_conntrack = ipv6_print_conntrack, 383 - .prepare = ipv6_prepare, 380 + .get_l4proto = ipv6_get_l4proto, 384 381 #if defined(CONFIG_NF_CT_NETLINK) || defined(CONFIG_NF_CT_NETLINK_MODULE) 385 382 .tuple_to_nfattr = ipv6_tuple_to_nfattr, 386 383 .nfattr_to_tuple = ipv6_nfattr_to_tuple,
+9 -37
net/ipv6/netfilter/nf_conntrack_proto_icmpv6.c
··· 136 136 { 137 137 struct nf_conntrack_tuple intuple, origtuple; 138 138 struct nf_conntrack_tuple_hash *h; 139 - struct icmp6hdr _hdr, *hp; 140 - unsigned int inip6off; 141 139 struct nf_conntrack_l4proto *inproto; 142 - u_int8_t inprotonum; 143 - unsigned int inprotoff; 144 140 145 141 NF_CT_ASSERT(skb->nfct == NULL); 146 142 147 - hp = skb_header_pointer(skb, icmp6off, sizeof(_hdr), &_hdr); 148 - if (hp == NULL) { 149 - pr_debug("icmpv6_error: Can't get ICMPv6 hdr.\n"); 150 - return -NF_ACCEPT; 151 - } 152 - 153 - inip6off = icmp6off + sizeof(_hdr); 154 - if (skb_copy_bits(skb, inip6off+offsetof(struct ipv6hdr, nexthdr), 155 - &inprotonum, sizeof(inprotonum)) != 0) { 156 - pr_debug("icmpv6_error: Can't get nexthdr in inner IPv6 " 157 - "header.\n"); 158 - return -NF_ACCEPT; 159 - } 160 - inprotoff = nf_ct_ipv6_skip_exthdr(skb, 161 - inip6off + sizeof(struct ipv6hdr), 162 - &inprotonum, 163 - skb->len - inip6off 164 - - sizeof(struct ipv6hdr)); 165 - 166 - if ((inprotoff > skb->len) || (inprotonum == NEXTHDR_FRAGMENT)) { 167 - pr_debug("icmpv6_error: Can't get protocol header in ICMPv6 " 168 - "payload.\n"); 143 + /* Are they talking about one of our connections? */ 144 + if (!nf_ct_get_tuplepr(skb, 145 + skb_network_offset(skb) 146 + + sizeof(struct ipv6hdr) 147 + + sizeof(struct icmp6hdr), 148 + PF_INET6, &origtuple)) { 149 + pr_debug("icmpv6_error: Can't get tuple\n"); 169 150 return -NF_ACCEPT; 170 151 } 171 152 172 153 /* rcu_read_lock()ed by nf_hook_slow */ 173 - inproto = __nf_ct_l4proto_find(PF_INET6, inprotonum); 174 - 175 - /* Are they talking about one of our connections? */ 176 - if (!nf_ct_get_tuple(skb, inip6off, inprotoff, PF_INET6, inprotonum, 177 - &origtuple, &nf_conntrack_l3proto_ipv6, inproto)) { 178 - pr_debug("icmpv6_error: Can't get tuple\n"); 179 - return -NF_ACCEPT; 180 - } 154 + inproto = __nf_ct_l4proto_find(PF_INET6, origtuple.dst.protonum); 181 155 182 156 /* Ordinarily, we'd expect the inverted tupleproto, but it's 183 157 been preserved inside the ICMP. */ ··· 276 302 }; 277 303 #endif /* CONFIG_SYSCTL */ 278 304 279 - struct nf_conntrack_l4proto nf_conntrack_l4proto_icmpv6 = 305 + struct nf_conntrack_l4proto nf_conntrack_l4proto_icmpv6 __read_mostly = 280 306 { 281 307 .l3proto = PF_INET6, 282 308 .l4proto = IPPROTO_ICMPV6, ··· 297 323 .ctl_table = icmpv6_sysctl_table, 298 324 #endif 299 325 }; 300 - 301 - EXPORT_SYMBOL(nf_conntrack_l4proto_icmpv6);
+4 -4
net/iucv/Kconfig
··· 1 1 config IUCV 2 - tristate "IUCV support (VM only)" 2 + tristate "IUCV support (S390 - z/VM only)" 3 3 depends on S390 4 4 help 5 - Select this option if you want to use inter-user communication under 6 - VM or VIF sockets. If you run on z/VM, say "Y" to enable a fast 5 + Select this option if you want to use inter-user communication 6 + under VM or VIF. If you run on z/VM, say "Y" to enable a fast 7 7 communication link between VM guests. 8 8 9 9 config AFIUCV 10 - tristate "AF_IUCV support (VM only)" 10 + tristate "AF_IUCV support (S390 - z/VM only)" 11 11 depends on IUCV 12 12 help 13 13 Select this option if you want to use inter-user communication under
+14 -2
net/iucv/af_iucv.c
··· 219 219 220 220 sock_init_data(sock, sk); 221 221 INIT_LIST_HEAD(&iucv_sk(sk)->accept_q); 222 + spin_lock_init(&iucv_sk(sk)->accept_q_lock); 222 223 skb_queue_head_init(&iucv_sk(sk)->send_skb_q); 223 224 skb_queue_head_init(&iucv_sk(sk)->backlog_skb_q); 224 225 iucv_sk(sk)->send_tag = 0; ··· 275 274 276 275 void iucv_accept_enqueue(struct sock *parent, struct sock *sk) 277 276 { 277 + unsigned long flags; 278 + struct iucv_sock *par = iucv_sk(parent); 279 + 278 280 sock_hold(sk); 279 - list_add_tail(&iucv_sk(sk)->accept_q, &iucv_sk(parent)->accept_q); 281 + spin_lock_irqsave(&par->accept_q_lock, flags); 282 + list_add_tail(&iucv_sk(sk)->accept_q, &par->accept_q); 283 + spin_unlock_irqrestore(&par->accept_q_lock, flags); 280 284 iucv_sk(sk)->parent = parent; 281 285 parent->sk_ack_backlog++; 282 286 } 283 287 284 288 void iucv_accept_unlink(struct sock *sk) 285 289 { 290 + unsigned long flags; 291 + struct iucv_sock *par = iucv_sk(iucv_sk(sk)->parent); 292 + 293 + spin_lock_irqsave(&par->accept_q_lock, flags); 286 294 list_del_init(&iucv_sk(sk)->accept_q); 295 + spin_unlock_irqrestore(&par->accept_q_lock, flags); 287 296 iucv_sk(sk)->parent->sk_ack_backlog--; 288 297 iucv_sk(sk)->parent = NULL; 289 298 sock_put(sk); ··· 309 298 lock_sock(sk); 310 299 311 300 if (sk->sk_state == IUCV_CLOSED) { 312 - release_sock(sk); 313 301 iucv_accept_unlink(sk); 302 + release_sock(sk); 314 303 continue; 315 304 } 316 305 ··· 890 879 /* Find out if this path belongs to af_iucv. */ 891 880 read_lock(&iucv_sk_list.lock); 892 881 iucv = NULL; 882 + sk = NULL; 893 883 sk_for_each(sk, node, &iucv_sk_list.head) 894 884 if (sk->sk_state == IUCV_LISTEN && 895 885 !memcmp(&iucv_sk(sk)->src_name, src_name, 8)) {
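The accept queue is touched both from process context (socket creation, accept, the dequeue loop) and from the IUCV completion callbacks that run asynchronously to it, hence the new accept_q_lock and its irqsave/irqrestore variants. The two smaller hunks follow from the same race: iucv_accept_dequeue() now unlinks a closed socket before releasing its lock rather than after, and the connection-request callback pre-initializes sk so a failed listener search cannot leave a stale pointer behind.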
+4 -1
net/iucv/iucv.c
··· 1494 1494 struct iucv_irq_list *p, *n; 1495 1495 1496 1496 /* Serialize tasklet, iucv_path_sever and iucv_path_connect. */ 1497 - spin_lock(&iucv_table_lock); 1497 + if (!spin_trylock(&iucv_table_lock)) { 1498 + tasklet_schedule(&iucv_tasklet); 1499 + return; 1500 + } 1498 1501 iucv_active_cpu = smp_processor_id(); 1499 1502 1500 1503 spin_lock_irq(&iucv_queue_lock);
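The trylock turns a potential deadlock into a deferred retry: iucv_table_lock is also taken from process context, and a tasklet spinning on it in softirq context on the same CPU would never let that holder run. Reduced to its essentials, the pattern looks like this (names are illustrative):

    static DEFINE_SPINLOCK(example_lock);
    static void example_tasklet_fn(unsigned long data);
    static DECLARE_TASKLET(example_tasklet, example_tasklet_fn, 0);

    static void example_tasklet_fn(unsigned long data)
    {
            if (!spin_trylock(&example_lock)) {
                    /* lock holder must run first; try again later */
                    tasklet_schedule(&example_tasklet);
                    return;
            }
            /* ... work that needs example_lock ... */
            spin_unlock(&example_lock);
    }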
+1 -1
net/mac80211/debugfs_netdev.c
··· 118 118 sdata->u.sta.authenticated ? "AUTH\n" : "", 119 119 sdata->u.sta.associated ? "ASSOC\n" : "", 120 120 sdata->u.sta.probereq_poll ? "PROBEREQ POLL\n" : "", 121 - sdata->u.sta.use_protection ? "CTS prot\n" : ""); 121 + sdata->use_protection ? "CTS prot\n" : ""); 122 122 } 123 123 __IEEE80211_IF_FILE(flags); 124 124
-8
net/mac80211/hostapd_ioctl.h
··· 26 26 * mess shall be deleted completely. */ 27 27 enum { 28 28 PRISM2_PARAM_IEEE_802_1X = 23, 29 - PRISM2_PARAM_ANTSEL_TX = 24, 30 - PRISM2_PARAM_ANTSEL_RX = 25, 31 29 32 30 /* Instant802 additions */ 33 31 PRISM2_PARAM_CTS_PROTECT_ERP_FRAMES = 1001, 34 - PRISM2_PARAM_DROP_UNENCRYPTED = 1002, 35 32 PRISM2_PARAM_PREAMBLE = 1003, 36 33 PRISM2_PARAM_SHORT_SLOT_TIME = 1006, 37 34 PRISM2_PARAM_NEXT_MODE = 1008, 38 - PRISM2_PARAM_CLEAR_KEYS = 1009, 39 35 PRISM2_PARAM_RADIO_ENABLED = 1010, 40 36 PRISM2_PARAM_ANTENNA_MODE = 1013, 41 37 PRISM2_PARAM_STAT_TIME = 1016, 42 38 PRISM2_PARAM_STA_ANTENNA_SEL = 1017, 43 - PRISM2_PARAM_FORCE_UNICAST_RATE = 1018, 44 - PRISM2_PARAM_RATE_CTRL_NUM_UP = 1019, 45 - PRISM2_PARAM_RATE_CTRL_NUM_DOWN = 1020, 46 - PRISM2_PARAM_MAX_RATECTRL_RATE = 1021, 47 39 PRISM2_PARAM_TX_POWER_REDUCTION = 1022, 48 40 PRISM2_PARAM_KEY_TX_RX_THRESHOLD = 1024, 49 41 PRISM2_PARAM_DEFAULT_WEP_ONLY = 1026,
+365 -88
net/mac80211/ieee80211.c
··· 24 24 #include <linux/compiler.h> 25 25 #include <linux/bitmap.h> 26 26 #include <net/cfg80211.h> 27 + #include <asm/unaligned.h> 27 28 28 29 #include "ieee80211_common.h" 29 30 #include "ieee80211_i.h" ··· 55 54 /* No encapsulation header if EtherType < 0x600 (=length) */ 56 55 static const unsigned char eapol_header[] = 57 56 { 0xaa, 0xaa, 0x03, 0x00, 0x00, 0x00, 0x88, 0x8e }; 57 + 58 + 59 + /* 60 + * For seeing transmitted packets on monitor interfaces 61 + * we have a radiotap header too. 62 + */ 63 + struct ieee80211_tx_status_rtap_hdr { 64 + struct ieee80211_radiotap_header hdr; 65 + __le16 tx_flags; 66 + u8 data_retries; 67 + } __attribute__ ((packed)); 58 68 59 69 60 70 static inline void ieee80211_include_sequence(struct ieee80211_sub_if_data *sdata, ··· 442 430 if (!tx->u.tx.rate) 443 431 return TXRX_DROP; 444 432 if (tx->u.tx.mode->mode == MODE_IEEE80211G && 445 - tx->local->cts_protect_erp_frames && tx->fragmented && 433 + tx->sdata->use_protection && tx->fragmented && 446 434 extra.nonerp) { 447 435 tx->u.tx.last_frag_rate = tx->u.tx.rate; 448 436 tx->u.tx.probe_last_frag = extra.probe ? 1 : 0; ··· 540 528 /* reserve enough extra head and tail room for possible 541 529 * encryption */ 542 530 frag = frags[i] = 543 - dev_alloc_skb(tx->local->hw.extra_tx_headroom + 531 + dev_alloc_skb(tx->local->tx_headroom + 544 532 frag_threshold + 545 533 IEEE80211_ENCRYPT_HEADROOM + 546 534 IEEE80211_ENCRYPT_TAILROOM); ··· 549 537 /* Make sure that all fragments use the same priority so 550 538 * that they end up using the same TX queue */ 551 539 frag->priority = first->priority; 552 - skb_reserve(frag, tx->local->hw.extra_tx_headroom + 553 - IEEE80211_ENCRYPT_HEADROOM); 540 + skb_reserve(frag, tx->local->tx_headroom + 541 + IEEE80211_ENCRYPT_HEADROOM); 554 542 fhdr = (struct ieee80211_hdr *) skb_put(frag, hdrlen); 555 543 memcpy(fhdr, first->data, hdrlen); 556 544 if (i == num_fragm - 2) ··· 868 856 * for the frame. 
*/ 869 857 if (mode->mode == MODE_IEEE80211G && 870 858 (tx->u.tx.rate->flags & IEEE80211_RATE_ERP) && 871 - tx->u.tx.unicast && 872 - tx->local->cts_protect_erp_frames && 859 + tx->u.tx.unicast && tx->sdata->use_protection && 873 860 !(control->flags & IEEE80211_TXCTL_USE_RTS_CTS)) 874 861 control->flags |= IEEE80211_TXCTL_USE_CTS_PROTECT; 875 862 ··· 1129 1118 } 1130 1119 1131 1120 1132 - static void inline 1121 + /* 1122 + * deal with packet injection down monitor interface 1123 + * with Radiotap Header -- only called for monitor mode interface 1124 + */ 1125 + 1126 + static ieee80211_txrx_result 1127 + __ieee80211_parse_tx_radiotap( 1128 + struct ieee80211_txrx_data *tx, 1129 + struct sk_buff *skb, struct ieee80211_tx_control *control) 1130 + { 1131 + /* 1132 + * this is the moment to interpret and discard the radiotap header that 1133 + * must be at the start of the packet injected in Monitor mode 1134 + * 1135 + * Need to take some care with endian-ness since radiotap 1136 + * args are little-endian 1137 + */ 1138 + 1139 + struct ieee80211_radiotap_iterator iterator; 1140 + struct ieee80211_radiotap_header *rthdr = 1141 + (struct ieee80211_radiotap_header *) skb->data; 1142 + struct ieee80211_hw_mode *mode = tx->local->hw.conf.mode; 1143 + int ret = ieee80211_radiotap_iterator_init(&iterator, rthdr, skb->len); 1144 + 1145 + /* 1146 + * default control situation for all injected packets 1147 + * FIXME: this does not suit all usage cases, expand to allow control 1148 + */ 1149 + 1150 + control->retry_limit = 1; /* no retry */ 1151 + control->key_idx = -1; /* no encryption key */ 1152 + control->flags &= ~(IEEE80211_TXCTL_USE_RTS_CTS | 1153 + IEEE80211_TXCTL_USE_CTS_PROTECT); 1154 + control->flags |= IEEE80211_TXCTL_DO_NOT_ENCRYPT | 1155 + IEEE80211_TXCTL_NO_ACK; 1156 + control->antenna_sel_tx = 0; /* default to default antenna */ 1157 + 1158 + /* 1159 + * for every radiotap entry that is present 1160 + * (ieee80211_radiotap_iterator_next returns -ENOENT when no more 1161 + * entries present, or -EINVAL on error) 1162 + */ 1163 + 1164 + while (!ret) { 1165 + int i, target_rate; 1166 + 1167 + ret = ieee80211_radiotap_iterator_next(&iterator); 1168 + 1169 + if (ret) 1170 + continue; 1171 + 1172 + /* see if this argument is something we can use */ 1173 + switch (iterator.this_arg_index) { 1174 + /* 1175 + * You must take care when dereferencing iterator.this_arg 1176 + * for multibyte types... the pointer is not aligned. Use 1177 + * get_unaligned((type *)iterator.this_arg) to dereference 1178 + * iterator.this_arg for type "type" safely on all arches. 
1179 + */ 1180 + case IEEE80211_RADIOTAP_RATE: 1181 + /* 1182 + * radiotap rate u8 is in 500kbps units eg, 0x02=1Mbps 1183 + * ieee80211 rate int is in 100kbps units eg, 0x0a=1Mbps 1184 + */ 1185 + target_rate = (*iterator.this_arg) * 5; 1186 + for (i = 0; i < mode->num_rates; i++) { 1187 + struct ieee80211_rate *r = &mode->rates[i]; 1188 + 1189 + if (r->rate > target_rate) 1190 + continue; 1191 + 1192 + control->rate = r; 1193 + 1194 + if (r->flags & IEEE80211_RATE_PREAMBLE2) 1195 + control->tx_rate = r->val2; 1196 + else 1197 + control->tx_rate = r->val; 1198 + 1199 + /* end on exact match */ 1200 + if (r->rate == target_rate) 1201 + i = mode->num_rates; 1202 + } 1203 + break; 1204 + 1205 + case IEEE80211_RADIOTAP_ANTENNA: 1206 + /* 1207 + * radiotap uses 0 for 1st ant, mac80211 is 1 for 1208 + * 1st ant 1209 + */ 1210 + control->antenna_sel_tx = (*iterator.this_arg) + 1; 1211 + break; 1212 + 1213 + case IEEE80211_RADIOTAP_DBM_TX_POWER: 1214 + control->power_level = *iterator.this_arg; 1215 + break; 1216 + 1217 + case IEEE80211_RADIOTAP_FLAGS: 1218 + if (*iterator.this_arg & IEEE80211_RADIOTAP_F_FCS) { 1219 + /* 1220 + * this indicates that the skb we have been 1221 + * handed has the 32-bit FCS CRC at the end... 1222 + * we should react to that by snipping it off 1223 + * because it will be recomputed and added 1224 + * on transmission 1225 + */ 1226 + if (skb->len < (iterator.max_length + FCS_LEN)) 1227 + return TXRX_DROP; 1228 + 1229 + skb_trim(skb, skb->len - FCS_LEN); 1230 + } 1231 + break; 1232 + 1233 + default: 1234 + break; 1235 + } 1236 + } 1237 + 1238 + if (ret != -ENOENT) /* ie, if we didn't simply run out of fields */ 1239 + return TXRX_DROP; 1240 + 1241 + /* 1242 + * remove the radiotap header 1243 + * iterator->max_length was sanity-checked against 1244 + * skb->len by iterator init 1245 + */ 1246 + skb_pull(skb, iterator.max_length); 1247 + 1248 + return TXRX_CONTINUE; 1249 + } 1250 + 1251 + 1252 + static ieee80211_txrx_result inline 1133 1253 __ieee80211_tx_prepare(struct ieee80211_txrx_data *tx, 1134 1254 struct sk_buff *skb, 1135 1255 struct net_device *dev, ··· 1268 1126 { 1269 1127 struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr); 1270 1128 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data; 1129 + struct ieee80211_sub_if_data *sdata; 1130 + ieee80211_txrx_result res = TXRX_CONTINUE; 1131 + 1271 1132 int hdrlen; 1272 1133 1273 1134 memset(tx, 0, sizeof(*tx)); ··· 1280 1135 tx->sdata = IEEE80211_DEV_TO_SUB_IF(dev); 1281 1136 tx->sta = sta_info_get(local, hdr->addr1); 1282 1137 tx->fc = le16_to_cpu(hdr->frame_control); 1138 + 1139 + /* 1140 + * set defaults for things that can be set by 1141 + * injected radiotap headers 1142 + */ 1283 1143 control->power_level = local->hw.conf.power_level; 1144 + control->antenna_sel_tx = local->hw.conf.antenna_sel_tx; 1145 + if (local->sta_antenna_sel != STA_ANTENNA_SEL_AUTO && tx->sta) 1146 + control->antenna_sel_tx = tx->sta->antenna_sel_tx; 1147 + 1148 + /* process and remove the injection radiotap header */ 1149 + sdata = IEEE80211_DEV_TO_SUB_IF(dev); 1150 + if (unlikely(sdata->type == IEEE80211_IF_TYPE_MNTR)) { 1151 + if (__ieee80211_parse_tx_radiotap(tx, skb, control) == 1152 + TXRX_DROP) { 1153 + return TXRX_DROP; 1154 + } 1155 + /* 1156 + * we removed the radiotap header after this point, 1157 + * we filled control with what we could use 1158 + * set to the actual ieee header now 1159 + */ 1160 + hdr = (struct ieee80211_hdr *) skb->data; 1161 + res = TXRX_QUEUED; /* indication it was monitor packet */ 
1162 + } 1163 + 1284 1164 tx->u.tx.control = control; 1285 1165 tx->u.tx.unicast = !is_multicast_ether_addr(hdr->addr1); 1286 1166 if (is_multicast_ether_addr(hdr->addr1)) ··· 1322 1152 control->flags |= IEEE80211_TXCTL_CLEAR_DST_MASK; 1323 1153 tx->sta->clear_dst_mask = 0; 1324 1154 } 1325 - control->antenna_sel_tx = local->hw.conf.antenna_sel_tx; 1326 - if (local->sta_antenna_sel != STA_ANTENNA_SEL_AUTO && tx->sta) 1327 - control->antenna_sel_tx = tx->sta->antenna_sel_tx; 1328 1155 hdrlen = ieee80211_get_hdrlen(tx->fc); 1329 1156 if (skb->len > hdrlen + sizeof(rfc1042_header) + 2) { 1330 1157 u8 *pos = &skb->data[hdrlen + sizeof(rfc1042_header)]; ··· 1329 1162 } 1330 1163 control->flags |= IEEE80211_TXCTL_FIRST_FRAGMENT; 1331 1164 1165 + return res; 1332 1166 } 1333 1167 1334 1168 static int inline is_ieee80211_device(struct net_device *dev, ··· 1442 1274 struct sta_info *sta; 1443 1275 ieee80211_tx_handler *handler; 1444 1276 struct ieee80211_txrx_data tx; 1445 - ieee80211_txrx_result res = TXRX_DROP; 1277 + ieee80211_txrx_result res = TXRX_DROP, res_prepare; 1446 1278 int ret, i; 1447 1279 1448 1280 WARN_ON(__ieee80211_queue_pending(local, control->queue)); ··· 1452 1284 return 0; 1453 1285 } 1454 1286 1455 - __ieee80211_tx_prepare(&tx, skb, dev, control); 1287 + res_prepare = __ieee80211_tx_prepare(&tx, skb, dev, control); 1288 + 1289 + if (res_prepare == TXRX_DROP) { 1290 + dev_kfree_skb(skb); 1291 + return 0; 1292 + } 1293 + 1456 1294 sta = tx.sta; 1457 1295 tx.u.tx.mgmt_interface = mgmt; 1458 1296 tx.u.tx.mode = local->hw.conf.mode; 1459 1297 1460 - for (handler = local->tx_handlers; *handler != NULL; handler++) { 1461 - res = (*handler)(&tx); 1462 - if (res != TXRX_CONTINUE) 1463 - break; 1298 + if (res_prepare == TXRX_QUEUED) { /* if it was an injected packet */ 1299 + res = TXRX_CONTINUE; 1300 + } else { 1301 + for (handler = local->tx_handlers; *handler != NULL; 1302 + handler++) { 1303 + res = (*handler)(&tx); 1304 + if (res != TXRX_CONTINUE) 1305 + break; 1306 + } 1464 1307 } 1465 1308 1466 1309 skb = tx.skb; /* handlers are allowed to change skb */ ··· 1646 1467 } 1647 1468 osdata = IEEE80211_DEV_TO_SUB_IF(odev); 1648 1469 1649 - headroom = osdata->local->hw.extra_tx_headroom + 1650 - IEEE80211_ENCRYPT_HEADROOM; 1470 + headroom = osdata->local->tx_headroom + IEEE80211_ENCRYPT_HEADROOM; 1651 1471 if (skb_headroom(skb) < headroom) { 1652 1472 if (pskb_expand_head(skb, headroom, 0, GFP_ATOMIC)) { 1653 1473 dev_kfree_skb(skb); ··· 1672 1494 } 1673 1495 1674 1496 1497 + int ieee80211_monitor_start_xmit(struct sk_buff *skb, 1498 + struct net_device *dev) 1499 + { 1500 + struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr); 1501 + struct ieee80211_tx_packet_data *pkt_data; 1502 + struct ieee80211_radiotap_header *prthdr = 1503 + (struct ieee80211_radiotap_header *)skb->data; 1504 + u16 len; 1505 + 1506 + /* 1507 + * there must be a radiotap header at the 1508 + * start in this case 1509 + */ 1510 + if (unlikely(prthdr->it_version)) { 1511 + /* only version 0 is supported */ 1512 + dev_kfree_skb(skb); 1513 + return NETDEV_TX_OK; 1514 + } 1515 + 1516 + skb->dev = local->mdev; 1517 + 1518 + pkt_data = (struct ieee80211_tx_packet_data *)skb->cb; 1519 + memset(pkt_data, 0, sizeof(*pkt_data)); 1520 + pkt_data->ifindex = dev->ifindex; 1521 + pkt_data->mgmt_iface = 0; 1522 + pkt_data->do_not_encrypt = 1; 1523 + 1524 + /* above needed because we set skb device to master */ 1525 + 1526 + /* 1527 + * fix up the pointers accounting for the radiotap 1528 + * header still being in 
there. We are being given 1529 + * a precooked IEEE80211 header so no need for 1530 + * normal processing 1531 + */ 1532 + len = le16_to_cpu(get_unaligned(&prthdr->it_len)); 1533 + skb_set_mac_header(skb, len); 1534 + skb_set_network_header(skb, len + sizeof(struct ieee80211_hdr)); 1535 + skb_set_transport_header(skb, len + sizeof(struct ieee80211_hdr)); 1536 + 1537 + /* 1538 + * pass the radiotap header up to 1539 + * the next stage intact 1540 + */ 1541 + dev_queue_xmit(skb); 1542 + 1543 + return NETDEV_TX_OK; 1544 + } 1545 + 1546 + 1675 1547 /** 1676 1548 * ieee80211_subif_start_xmit - netif start_xmit function for Ethernet-type 1677 1549 * subinterfaces (wlan#, WDS, and VLAN interfaces) ··· 1737 1509 * encapsulated packet will then be passed to master interface, wlan#.11, for 1738 1510 * transmission (through low-level driver). 1739 1511 */ 1740 - static int ieee80211_subif_start_xmit(struct sk_buff *skb, 1741 - struct net_device *dev) 1512 + int ieee80211_subif_start_xmit(struct sk_buff *skb, 1513 + struct net_device *dev) 1742 1514 { 1743 1515 struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr); 1744 1516 struct ieee80211_tx_packet_data *pkt_data; ··· 1847 1619 * build in headroom in __dev_alloc_skb() (linux/skbuff.h) and 1848 1620 * alloc_skb() (net/core/skbuff.c) 1849 1621 */ 1850 - head_need = hdrlen + encaps_len + local->hw.extra_tx_headroom; 1622 + head_need = hdrlen + encaps_len + local->tx_headroom; 1851 1623 head_need -= skb_headroom(skb); 1852 1624 1853 1625 /* We are going to modify skb data, so make a copy of it if happens to ··· 1886 1658 1887 1659 pkt_data = (struct ieee80211_tx_packet_data *)skb->cb; 1888 1660 memset(pkt_data, 0, sizeof(struct ieee80211_tx_packet_data)); 1889 - pkt_data->ifindex = sdata->dev->ifindex; 1661 + pkt_data->ifindex = dev->ifindex; 1890 1662 pkt_data->mgmt_iface = (sdata->type == IEEE80211_IF_TYPE_MGMT); 1891 1663 pkt_data->do_not_encrypt = no_encrypt; 1892 1664 ··· 1934 1706 return 0; 1935 1707 } 1936 1708 1937 - if (skb_headroom(skb) < sdata->local->hw.extra_tx_headroom) { 1938 - if (pskb_expand_head(skb, 1939 - sdata->local->hw.extra_tx_headroom, 0, GFP_ATOMIC)) { 1709 + if (skb_headroom(skb) < sdata->local->tx_headroom) { 1710 + if (pskb_expand_head(skb, sdata->local->tx_headroom, 1711 + 0, GFP_ATOMIC)) { 1940 1712 dev_kfree_skb(skb); 1941 1713 return 0; 1942 1714 } ··· 2075 1847 bh_len = ap->beacon_head_len; 2076 1848 bt_len = ap->beacon_tail_len; 2077 1849 2078 - skb = dev_alloc_skb(local->hw.extra_tx_headroom + 1850 + skb = dev_alloc_skb(local->tx_headroom + 2079 1851 bh_len + bt_len + 256 /* maximum TIM len */); 2080 1852 if (!skb) 2081 1853 return NULL; 2082 1854 2083 - skb_reserve(skb, local->hw.extra_tx_headroom); 1855 + skb_reserve(skb, local->tx_headroom); 2084 1856 memcpy(skb_put(skb, bh_len), b_head, bh_len); 2085 1857 2086 1858 ieee80211_include_sequence(sdata, (struct ieee80211_hdr *)skb->data); ··· 2604 2376 struct ieee80211_if_init_conf conf; 2605 2377 2606 2378 if (local->open_count && local->open_count == local->monitors && 2607 - !(local->hw.flags & IEEE80211_HW_MONITOR_DURING_OPER) && 2608 - local->ops->add_interface) { 2379 + !(local->hw.flags & IEEE80211_HW_MONITOR_DURING_OPER)) { 2609 2380 conf.if_id = -1; 2610 2381 conf.type = IEEE80211_IF_TYPE_MNTR; 2611 2382 conf.mac_addr = NULL; ··· 2647 2420 } 2648 2421 ieee80211_start_soft_monitor(local); 2649 2422 2650 - if (local->ops->add_interface) { 2651 - conf.if_id = dev->ifindex; 2652 - conf.type = sdata->type; 2653 - conf.mac_addr = dev->dev_addr; 2654 - res = 
local->ops->add_interface(local_to_hw(local), &conf); 2655 - if (res) { 2656 - if (sdata->type == IEEE80211_IF_TYPE_MNTR) 2657 - ieee80211_start_hard_monitor(local); 2658 - return res; 2659 - } 2660 - } else { 2661 - if (sdata->type != IEEE80211_IF_TYPE_STA) 2662 - return -EOPNOTSUPP; 2663 - if (local->open_count > 0) 2664 - return -ENOBUFS; 2423 + conf.if_id = dev->ifindex; 2424 + conf.type = sdata->type; 2425 + conf.mac_addr = dev->dev_addr; 2426 + res = local->ops->add_interface(local_to_hw(local), &conf); 2427 + if (res) { 2428 + if (sdata->type == IEEE80211_IF_TYPE_MNTR) 2429 + ieee80211_start_hard_monitor(local); 2430 + return res; 2665 2431 } 2666 2432 2667 2433 if (local->open_count == 0) { ··· 3161 2941 } 3162 2942 EXPORT_SYMBOL(ieee80211_radar_status); 3163 2943 3164 - int ieee80211_set_aid_for_sta(struct ieee80211_hw *hw, u8 *peer_address, 3165 - u16 aid) 3166 - { 3167 - struct sk_buff *skb; 3168 - struct ieee80211_msg_set_aid_for_sta *msg; 3169 - struct ieee80211_local *local = hw_to_local(hw); 3170 - 3171 - /* unlikely because if this event only happens for APs, 3172 - * which require an open ap device. */ 3173 - if (unlikely(!local->apdev)) 3174 - return 0; 3175 - 3176 - skb = dev_alloc_skb(sizeof(struct ieee80211_frame_info) + 3177 - sizeof(struct ieee80211_msg_set_aid_for_sta)); 3178 - 3179 - if (!skb) 3180 - return -ENOMEM; 3181 - skb_reserve(skb, sizeof(struct ieee80211_frame_info)); 3182 - 3183 - msg = (struct ieee80211_msg_set_aid_for_sta *) 3184 - skb_put(skb, sizeof(struct ieee80211_msg_set_aid_for_sta)); 3185 - memcpy(msg->sta_address, peer_address, ETH_ALEN); 3186 - msg->aid = aid; 3187 - 3188 - ieee80211_rx_mgmt(local, skb, NULL, ieee80211_msg_set_aid_for_sta); 3189 - return 0; 3190 - } 3191 - EXPORT_SYMBOL(ieee80211_set_aid_for_sta); 3192 2944 3193 2945 static void ap_sta_ps_start(struct net_device *dev, struct sta_info *sta) 3194 2946 { ··· 4476 4284 struct ieee80211_local *local = hw_to_local(hw); 4477 4285 u16 frag, type; 4478 4286 u32 msg_type; 4287 + struct ieee80211_tx_status_rtap_hdr *rthdr; 4288 + struct ieee80211_sub_if_data *sdata; 4289 + int monitors; 4479 4290 4480 4291 if (!status) { 4481 4292 printk(KERN_ERR ··· 4590 4395 local->dot11FailedCount++; 4591 4396 } 4592 4397 4593 - if (!(status->control.flags & IEEE80211_TXCTL_REQ_TX_STATUS) 4594 - || unlikely(!local->apdev)) { 4595 - dev_kfree_skb(skb); 4596 - return; 4597 - } 4598 - 4599 4398 msg_type = (status->flags & IEEE80211_TX_STATUS_ACK) ? 4600 4399 ieee80211_msg_tx_callback_ack : ieee80211_msg_tx_callback_fail; 4601 4400 4602 - /* skb was the original skb used for TX. Clone it and give the clone 4603 - * to netif_rx(). Free original skb. 
*/ 4604 - skb2 = skb_copy(skb, GFP_ATOMIC); 4605 - if (!skb2) { 4401 + /* this was a transmitted frame, but now we want to reuse it */ 4402 + skb_orphan(skb); 4403 + 4404 + if ((status->control.flags & IEEE80211_TXCTL_REQ_TX_STATUS) && 4405 + local->apdev) { 4406 + if (local->monitors) { 4407 + skb2 = skb_clone(skb, GFP_ATOMIC); 4408 + } else { 4409 + skb2 = skb; 4410 + skb = NULL; 4411 + } 4412 + 4413 + if (skb2) 4414 + /* Send frame to hostapd */ 4415 + ieee80211_rx_mgmt(local, skb2, NULL, msg_type); 4416 + 4417 + if (!skb) 4418 + return; 4419 + } 4420 + 4421 + if (!local->monitors) { 4606 4422 dev_kfree_skb(skb); 4607 4423 return; 4608 4424 } 4609 - dev_kfree_skb(skb); 4610 - skb = skb2; 4611 4425 4612 - /* Send frame to hostapd */ 4613 - ieee80211_rx_mgmt(local, skb, NULL, msg_type); 4426 + /* send frame to monitor interfaces now */ 4427 + 4428 + if (skb_headroom(skb) < sizeof(*rthdr)) { 4429 + printk(KERN_ERR "ieee80211_tx_status: headroom too small\n"); 4430 + dev_kfree_skb(skb); 4431 + return; 4432 + } 4433 + 4434 + rthdr = (struct ieee80211_tx_status_rtap_hdr*) 4435 + skb_push(skb, sizeof(*rthdr)); 4436 + 4437 + memset(rthdr, 0, sizeof(*rthdr)); 4438 + rthdr->hdr.it_len = cpu_to_le16(sizeof(*rthdr)); 4439 + rthdr->hdr.it_present = 4440 + cpu_to_le32((1 << IEEE80211_RADIOTAP_TX_FLAGS) | 4441 + (1 << IEEE80211_RADIOTAP_DATA_RETRIES)); 4442 + 4443 + if (!(status->flags & IEEE80211_TX_STATUS_ACK) && 4444 + !is_multicast_ether_addr(hdr->addr1)) 4445 + rthdr->tx_flags |= cpu_to_le16(IEEE80211_RADIOTAP_F_TX_FAIL); 4446 + 4447 + if ((status->control.flags & IEEE80211_TXCTL_USE_RTS_CTS) && 4448 + (status->control.flags & IEEE80211_TXCTL_USE_CTS_PROTECT)) 4449 + rthdr->tx_flags |= cpu_to_le16(IEEE80211_RADIOTAP_F_TX_CTS); 4450 + else if (status->control.flags & IEEE80211_TXCTL_USE_RTS_CTS) 4451 + rthdr->tx_flags |= cpu_to_le16(IEEE80211_RADIOTAP_F_TX_RTS); 4452 + 4453 + rthdr->data_retries = status->retry_count; 4454 + 4455 + read_lock(&local->sub_if_lock); 4456 + monitors = local->monitors; 4457 + list_for_each_entry(sdata, &local->sub_if_list, list) { 4458 + /* 4459 + * Using the monitors counter is possibly racy, but 4460 + * if the value is wrong we simply either clone the skb 4461 + * once too much or forget sending it to one monitor iface 4462 + * The latter case isn't nice but fixing the race is much 4463 + * more complicated. 4464 + */ 4465 + if (!monitors || !skb) 4466 + goto out; 4467 + 4468 + if (sdata->type == IEEE80211_IF_TYPE_MNTR) { 4469 + if (!netif_running(sdata->dev)) 4470 + continue; 4471 + monitors--; 4472 + if (monitors) 4473 + skb2 = skb_clone(skb, GFP_KERNEL); 4474 + else 4475 + skb2 = NULL; 4476 + skb->dev = sdata->dev; 4477 + /* XXX: is this sufficient for BPF? 
*/ 4478 + skb_set_mac_header(skb, 0); 4479 + skb->ip_summed = CHECKSUM_UNNECESSARY; 4480 + skb->pkt_type = PACKET_OTHERHOST; 4481 + skb->protocol = htons(ETH_P_802_2); 4482 + memset(skb->cb, 0, sizeof(skb->cb)); 4483 + netif_rx(skb); 4484 + skb = skb2; 4485 + break; 4486 + } 4487 + } 4488 + out: 4489 + read_unlock(&local->sub_if_lock); 4490 + if (skb) 4491 + dev_kfree_skb(skb); 4614 4492 } 4615 4493 EXPORT_SYMBOL(ieee80211_tx_status); 4616 4494 ··· 4887 4619 ((sizeof(struct ieee80211_local) + 4888 4620 NETDEV_ALIGN_CONST) & ~NETDEV_ALIGN_CONST); 4889 4621 4622 + BUG_ON(!ops->tx); 4623 + BUG_ON(!ops->config); 4624 + BUG_ON(!ops->add_interface); 4890 4625 local->ops = ops; 4891 4626 4892 4627 /* for now, mdev needs sub_if_data :/ */ ··· 4918 4647 local->short_retry_limit = 7; 4919 4648 local->long_retry_limit = 4; 4920 4649 local->hw.conf.radio_enabled = 1; 4921 - local->rate_ctrl_num_up = RATE_CONTROL_NUM_UP; 4922 - local->rate_ctrl_num_down = RATE_CONTROL_NUM_DOWN; 4923 4650 4924 4651 local->enabled_modes = (unsigned int) -1; 4925 4652 ··· 4980 4711 result = -ENOMEM; 4981 4712 goto fail_workqueue; 4982 4713 } 4714 + 4715 + /* 4716 + * The hardware needs headroom for sending the frame, 4717 + * and we need some headroom for passing the frame to monitor 4718 + * interfaces, but never both at the same time. 4719 + */ 4720 + local->tx_headroom = max(local->hw.extra_tx_headroom, 4721 + sizeof(struct ieee80211_tx_status_rtap_hdr)); 4983 4722 4984 4723 debugfs_hw_add(local); 4985 4724
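The injection path above is built on the radiotap iterator, which deliberately skips arguments it does not understand so captured headers can be replayed verbatim. For reference, the consuming idiom in isolation -- the helper below is hypothetical, but the iterator calls are exactly the ones used above:

    /* Count the radiotap arguments present in an injected frame. */
    static int example_count_radiotap_args(struct sk_buff *skb)
    {
            struct ieee80211_radiotap_iterator iterator;
            struct ieee80211_radiotap_header *rthdr =
                    (struct ieee80211_radiotap_header *)skb->data;
            int count = 0;
            int ret = ieee80211_radiotap_iterator_init(&iterator, rthdr,
                                                       skb->len);

            while (!ret) {
                    ret = ieee80211_radiotap_iterator_next(&iterator);
                    if (ret)
                            break;
                    /* iterator.this_arg_index identifies the field;
                     * iterator.this_arg points at its payload, which is
                     * little-endian and possibly unaligned -- dereference
                     * multibyte values with get_unaligned() */
                    count++;
            }

            /* -ENOENT simply means "no more arguments" */
            return ret == -ENOENT ? count : -EINVAL;
    }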
+2 -7
net/mac80211/ieee80211_common.h
··· 47 47 ieee80211_msg_normal = 0, 48 48 ieee80211_msg_tx_callback_ack = 1, 49 49 ieee80211_msg_tx_callback_fail = 2, 50 - ieee80211_msg_passive_scan = 3, 50 + /* hole at 3, was ieee80211_msg_passive_scan but unused */ 51 51 ieee80211_msg_wep_frame_unknown_key = 4, 52 52 ieee80211_msg_michael_mic_failure = 5, 53 53 /* hole at 6, was monitor but never sent to userspace */ 54 54 ieee80211_msg_sta_not_assoc = 7, 55 - ieee80211_msg_set_aid_for_sta = 8 /* used by Intersil MVC driver */, 55 + /* 8 was ieee80211_msg_set_aid_for_sta */ 56 56 ieee80211_msg_key_threshold_notification = 9, 57 57 ieee80211_msg_radar = 11, 58 - }; 59 - 60 - struct ieee80211_msg_set_aid_for_sta { 61 - char sta_address[ETH_ALEN]; 62 - u16 aid; 63 58 }; 64 59 65 60 struct ieee80211_msg_key_notification {
+10 -4
net/mac80211/ieee80211_i.h
··· 99 99 int probe_resp; 100 100 unsigned long last_update; 101 101 102 + /* during assocation, we save an ERP value from a probe response so 103 + * that we can feed ERP info to the driver when handling the 104 + * association completes. these fields probably won't be up-to-date 105 + * otherwise, you probably don't want to use them. */ 106 + int has_erp_value; 107 + u8 erp_value; 102 108 }; 103 109 104 110 ··· 241 235 unsigned int authenticated:1; 242 236 unsigned int associated:1; 243 237 unsigned int probereq_poll:1; 244 - unsigned int use_protection:1; 245 238 unsigned int create_ibss:1; 246 239 unsigned int mixed_cell:1; 247 240 unsigned int wmm_enabled:1; ··· 283 278 int mc_count; 284 279 unsigned int allmulti:1; 285 280 unsigned int promisc:1; 281 + unsigned int use_protection:1; /* CTS protect ERP frames */ 286 282 287 283 struct net_device_stats stats; 288 284 int drop_unencrypted; ··· 398 392 int monitors; 399 393 struct iw_statistics wstats; 400 394 u8 wstats_flags; 395 + int tx_headroom; /* required headroom for hardware/radiotap */ 401 396 402 397 enum { 403 398 IEEE80211_DEV_UNINITIALIZED = 0, ··· 444 437 int *basic_rates[NUM_IEEE80211_MODES]; 445 438 446 439 int rts_threshold; 447 - int cts_protect_erp_frames; 448 440 int fragmentation_threshold; 449 441 int short_retry_limit; /* dot11ShortRetryLimit */ 450 442 int long_retry_limit; /* dot11LongRetryLimit */ ··· 518 512 STA_ANTENNA_SEL_SW_CTRL = 1, 519 513 STA_ANTENNA_SEL_SW_CTRL_DEBUG = 2 520 514 } sta_antenna_sel; 521 - 522 - int rate_ctrl_num_up, rate_ctrl_num_down; 523 515 524 516 #ifdef CONFIG_MAC80211_DEBUG_COUNTERS 525 517 /* TX/RX handler statistics */ ··· 723 719 struct ieee80211_hw_mode *mode); 724 720 void ieee80211_tx_set_iswep(struct ieee80211_txrx_data *tx); 725 721 int ieee80211_if_update_wds(struct net_device *dev, u8 *remote_addr); 722 + int ieee80211_monitor_start_xmit(struct sk_buff *skb, struct net_device *dev); 723 + int ieee80211_subif_start_xmit(struct sk_buff *skb, struct net_device *dev); 726 724 void ieee80211_if_setup(struct net_device *dev); 727 725 void ieee80211_if_mgmt_setup(struct net_device *dev); 728 726 int ieee80211_init_rate_ctrl_alg(struct ieee80211_local *local,
+3
net/mac80211/ieee80211_iface.c
··· 157 157 struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr); 158 158 int oldtype = sdata->type; 159 159 160 + dev->hard_start_xmit = ieee80211_subif_start_xmit; 161 + 160 162 sdata->type = type; 161 163 switch (type) { 162 164 case IEEE80211_IF_TYPE_WDS: ··· 198 196 } 199 197 case IEEE80211_IF_TYPE_MNTR: 200 198 dev->type = ARPHRD_IEEE80211_RADIOTAP; 199 + dev->hard_start_xmit = ieee80211_monitor_start_xmit; 201 200 break; 202 201 default: 203 202 printk(KERN_WARNING "%s: %s: Unknown interface type 0x%x",
+69 -171
net/mac80211/ieee80211_ioctl.c
··· 345 345 { 346 346 struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr); 347 347 struct iw_range *range = (struct iw_range *) extra; 348 + struct ieee80211_hw_mode *mode = NULL; 349 + int c = 0; 348 350 349 351 data->length = sizeof(struct iw_range); 350 352 memset(range, 0, sizeof(struct iw_range)); ··· 379 377 380 378 range->enc_capa = IW_ENC_CAPA_WPA | IW_ENC_CAPA_WPA2 | 381 379 IW_ENC_CAPA_CIPHER_TKIP | IW_ENC_CAPA_CIPHER_CCMP; 380 + 381 + list_for_each_entry(mode, &local->modes_list, list) { 382 + int i = 0; 383 + 384 + if (!(local->enabled_modes & (1 << mode->mode)) || 385 + (local->hw_modes & local->enabled_modes & 386 + (1 << MODE_IEEE80211G) && mode->mode == MODE_IEEE80211B)) 387 + continue; 388 + 389 + while (i < mode->num_channels && c < IW_MAX_FREQUENCIES) { 390 + struct ieee80211_channel *chan = &mode->channels[i]; 391 + 392 + if (chan->flag & IEEE80211_CHAN_W_SCAN) { 393 + range->freq[c].i = chan->chan; 394 + range->freq[c].m = chan->freq * 100000; 395 + range->freq[c].e = 1; 396 + c++; 397 + } 398 + i++; 399 + } 400 + } 401 + range->num_channels = c; 402 + range->num_frequency = c; 382 403 383 404 IW_EVENT_CAPA_SET_KERNEL(range->event_capa); 384 405 IW_EVENT_CAPA_SET(range->event_capa, SIOCGIWTHRSPY); ··· 863 838 } 864 839 865 840 841 + static int ieee80211_ioctl_siwrate(struct net_device *dev, 842 + struct iw_request_info *info, 843 + struct iw_param *rate, char *extra) 844 + { 845 + struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr); 846 + struct ieee80211_hw_mode *mode; 847 + int i; 848 + u32 target_rate = rate->value / 100000; 849 + struct ieee80211_sub_if_data *sdata; 850 + 851 + sdata = IEEE80211_DEV_TO_SUB_IF(dev); 852 + if (!sdata->bss) 853 + return -ENODEV; 854 + mode = local->oper_hw_mode; 855 + /* target_rate = -1, rate->fixed = 0 means auto only, so use all rates 856 + * target_rate = X, rate->fixed = 1 means only rate X 857 + * target_rate = X, rate->fixed = 0 means all rates <= X */ 858 + sdata->bss->max_ratectrl_rateidx = -1; 859 + sdata->bss->force_unicast_rateidx = -1; 860 + if (rate->value < 0) 861 + return 0; 862 + for (i=0; i< mode->num_rates; i++) { 863 + struct ieee80211_rate *rates = &mode->rates[i]; 864 + int this_rate = rates->rate; 865 + 866 + if (mode->mode == MODE_ATHEROS_TURBO || 867 + mode->mode == MODE_ATHEROS_TURBOG) 868 + this_rate *= 2; 869 + if (target_rate == this_rate) { 870 + sdata->bss->max_ratectrl_rateidx = i; 871 + if (rate->fixed) 872 + sdata->bss->force_unicast_rateidx = i; 873 + break; 874 + } 875 + } 876 + return 0; 877 + } 878 + 866 879 static int ieee80211_ioctl_giwrate(struct net_device *dev, 867 880 struct iw_request_info *info, 868 881 struct iw_param *rate, char *extra) ··· 1056 993 return 0; 1057 994 } 1058 995 1059 - static int ieee80211_ioctl_clear_keys(struct net_device *dev) 1060 - { 1061 - struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr); 1062 - struct ieee80211_key_conf key; 1063 - int i; 1064 - u8 addr[ETH_ALEN]; 1065 - struct ieee80211_key_conf *keyconf; 1066 - struct ieee80211_sub_if_data *sdata; 1067 - struct sta_info *sta; 1068 - 1069 - memset(addr, 0xff, ETH_ALEN); 1070 - read_lock(&local->sub_if_lock); 1071 - list_for_each_entry(sdata, &local->sub_if_list, list) { 1072 - for (i = 0; i < NUM_DEFAULT_KEYS; i++) { 1073 - keyconf = NULL; 1074 - if (sdata->keys[i] && 1075 - !sdata->keys[i]->force_sw_encrypt && 1076 - local->ops->set_key && 1077 - (keyconf = ieee80211_key_data2conf(local, 1078 - sdata->keys[i]))) 1079 - local->ops->set_key(local_to_hw(local), 1080 - DISABLE_KEY, 
addr, 1081 - keyconf, 0); 1082 - kfree(keyconf); 1083 - ieee80211_key_free(sdata->keys[i]); 1084 - sdata->keys[i] = NULL; 1085 - } 1086 - sdata->default_key = NULL; 1087 - } 1088 - read_unlock(&local->sub_if_lock); 1089 - 1090 - spin_lock_bh(&local->sta_lock); 1091 - list_for_each_entry(sta, &local->sta_list, list) { 1092 - keyconf = NULL; 1093 - if (sta->key && !sta->key->force_sw_encrypt && 1094 - local->ops->set_key && 1095 - (keyconf = ieee80211_key_data2conf(local, sta->key))) 1096 - local->ops->set_key(local_to_hw(local), DISABLE_KEY, 1097 - sta->addr, keyconf, sta->aid); 1098 - kfree(keyconf); 1099 - ieee80211_key_free(sta->key); 1100 - sta->key = NULL; 1101 - } 1102 - spin_unlock_bh(&local->sta_lock); 1103 - 1104 - memset(&key, 0, sizeof(key)); 1105 - if (local->ops->set_key && 1106 - local->ops->set_key(local_to_hw(local), REMOVE_ALL_KEYS, 1107 - NULL, &key, 0)) 1108 - printk(KERN_DEBUG "%s: failed to remove hwaccel keys\n", 1109 - dev->name); 1110 - 1111 - return 0; 1112 - } 1113 - 1114 - 1115 - static int 1116 - ieee80211_ioctl_force_unicast_rate(struct net_device *dev, 1117 - struct ieee80211_sub_if_data *sdata, 1118 - int rate) 1119 - { 1120 - struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr); 1121 - struct ieee80211_hw_mode *mode; 1122 - int i; 1123 - 1124 - if (sdata->type != IEEE80211_IF_TYPE_AP) 1125 - return -ENOENT; 1126 - 1127 - if (rate == 0) { 1128 - sdata->u.ap.force_unicast_rateidx = -1; 1129 - return 0; 1130 - } 1131 - 1132 - mode = local->oper_hw_mode; 1133 - for (i = 0; i < mode->num_rates; i++) { 1134 - if (mode->rates[i].rate == rate) { 1135 - sdata->u.ap.force_unicast_rateidx = i; 1136 - return 0; 1137 - } 1138 - } 1139 - return -EINVAL; 1140 - } 1141 - 1142 - 1143 - static int 1144 - ieee80211_ioctl_max_ratectrl_rate(struct net_device *dev, 1145 - struct ieee80211_sub_if_data *sdata, 1146 - int rate) 1147 - { 1148 - struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr); 1149 - struct ieee80211_hw_mode *mode; 1150 - int i; 1151 - 1152 - if (sdata->type != IEEE80211_IF_TYPE_AP) 1153 - return -ENOENT; 1154 - 1155 - if (rate == 0) { 1156 - sdata->u.ap.max_ratectrl_rateidx = -1; 1157 - return 0; 1158 - } 1159 - 1160 - mode = local->oper_hw_mode; 1161 - for (i = 0; i < mode->num_rates; i++) { 1162 - if (mode->rates[i].rate == rate) { 1163 - sdata->u.ap.max_ratectrl_rateidx = i; 1164 - return 0; 1165 - } 1166 - } 1167 - return -EINVAL; 1168 - } 1169 - 1170 - 1171 996 static void ieee80211_key_enable_hwaccel(struct ieee80211_local *local, 1172 997 struct ieee80211_key *key) 1173 998 { ··· 1179 1228 sdata->ieee802_1x = value; 1180 1229 break; 1181 1230 1182 - case PRISM2_PARAM_ANTSEL_TX: 1183 - local->hw.conf.antenna_sel_tx = value; 1184 - if (ieee80211_hw_config(local)) 1185 - ret = -EINVAL; 1186 - break; 1187 - 1188 - case PRISM2_PARAM_ANTSEL_RX: 1189 - local->hw.conf.antenna_sel_rx = value; 1190 - if (ieee80211_hw_config(local)) 1191 - ret = -EINVAL; 1192 - break; 1193 - 1194 1231 case PRISM2_PARAM_CTS_PROTECT_ERP_FRAMES: 1195 - local->cts_protect_erp_frames = value; 1196 - break; 1197 - 1198 - case PRISM2_PARAM_DROP_UNENCRYPTED: 1199 - sdata->drop_unencrypted = value; 1232 + if (sdata->type != IEEE80211_IF_TYPE_AP) 1233 + ret = -ENOENT; 1234 + else 1235 + sdata->use_protection = value; 1200 1236 break; 1201 1237 1202 1238 case PRISM2_PARAM_PREAMBLE: ··· 1212 1274 local->next_mode = value; 1213 1275 break; 1214 1276 1215 - case PRISM2_PARAM_CLEAR_KEYS: 1216 - ret = ieee80211_ioctl_clear_keys(dev); 1217 - break; 1218 - 1219 1277 case 
PRISM2_PARAM_RADIO_ENABLED: 1220 1278 ret = ieee80211_ioctl_set_radio_enabled(dev, value); 1221 1279 break; ··· 1224 1290 1225 1291 case PRISM2_PARAM_STA_ANTENNA_SEL: 1226 1292 local->sta_antenna_sel = value; 1227 - break; 1228 - 1229 - case PRISM2_PARAM_FORCE_UNICAST_RATE: 1230 - ret = ieee80211_ioctl_force_unicast_rate(dev, sdata, value); 1231 - break; 1232 - 1233 - case PRISM2_PARAM_MAX_RATECTRL_RATE: 1234 - ret = ieee80211_ioctl_max_ratectrl_rate(dev, sdata, value); 1235 - break; 1236 - 1237 - case PRISM2_PARAM_RATE_CTRL_NUM_UP: 1238 - local->rate_ctrl_num_up = value; 1239 - break; 1240 - 1241 - case PRISM2_PARAM_RATE_CTRL_NUM_DOWN: 1242 - local->rate_ctrl_num_down = value; 1243 1293 break; 1244 1294 1245 1295 case PRISM2_PARAM_TX_POWER_REDUCTION: ··· 1305 1387 *param = sdata->ieee802_1x; 1306 1388 break; 1307 1389 1308 - case PRISM2_PARAM_ANTSEL_TX: 1309 - *param = local->hw.conf.antenna_sel_tx; 1310 - break; 1311 - 1312 - case PRISM2_PARAM_ANTSEL_RX: 1313 - *param = local->hw.conf.antenna_sel_rx; 1314 - break; 1315 - 1316 1390 case PRISM2_PARAM_CTS_PROTECT_ERP_FRAMES: 1317 - *param = local->cts_protect_erp_frames; 1318 - break; 1319 - 1320 - case PRISM2_PARAM_DROP_UNENCRYPTED: 1321 - *param = sdata->drop_unencrypted; 1391 + *param = sdata->use_protection; 1322 1392 break; 1323 1393 1324 1394 case PRISM2_PARAM_PREAMBLE: ··· 1330 1424 1331 1425 case PRISM2_PARAM_STA_ANTENNA_SEL: 1332 1426 *param = local->sta_antenna_sel; 1333 - break; 1334 - 1335 - case PRISM2_PARAM_RATE_CTRL_NUM_UP: 1336 - *param = local->rate_ctrl_num_up; 1337 - break; 1338 - 1339 - case PRISM2_PARAM_RATE_CTRL_NUM_DOWN: 1340 - *param = local->rate_ctrl_num_down; 1341 1427 break; 1342 1428 1343 1429 case PRISM2_PARAM_TX_POWER_REDUCTION: ··· 1699 1801 (iw_handler) NULL, /* SIOCGIWNICKN */ 1700 1802 (iw_handler) NULL, /* -- hole -- */ 1701 1803 (iw_handler) NULL, /* -- hole -- */ 1702 - (iw_handler) NULL, /* SIOCSIWRATE */ 1804 + (iw_handler) ieee80211_ioctl_siwrate, /* SIOCSIWRATE */ 1703 1805 (iw_handler) ieee80211_ioctl_giwrate, /* SIOCGIWRATE */ 1704 1806 (iw_handler) ieee80211_ioctl_siwrts, /* SIOCSIWRTS */ 1705 1807 (iw_handler) ieee80211_ioctl_giwrts, /* SIOCGIWRTS */
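The siwrate hook finally wires up SIOCSIWRATE with the semantics spelled out in its comment. In wireless-tools terms (assuming a stock iwconfig; the flag encoding is up to userspace): 'iwconfig wlan0 rate 5.5M auto' should cap rate control at 5.5 Mb/s (fixed == 0, all rates <= X), 'iwconfig wlan0 rate 5.5M fixed' should pin unicast frames to that single rate, and 'iwconfig wlan0 rate auto' should restore unrestricted selection (negative value). The giwrange additions likewise let 'iwlist wlan0 frequency' enumerate the scannable channels of the enabled modes.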
+66 -32
net/mac80211/ieee80211_sta.c
··· 76 76 77 77 /* Parsed Information Elements */ 78 78 struct ieee802_11_elems { 79 + /* pointers to IEs */ 79 80 u8 *ssid; 80 - u8 ssid_len; 81 81 u8 *supp_rates; 82 - u8 supp_rates_len; 83 82 u8 *fh_params; 84 - u8 fh_params_len; 85 83 u8 *ds_params; 86 - u8 ds_params_len; 87 84 u8 *cf_params; 88 - u8 cf_params_len; 89 85 u8 *tim; 90 - u8 tim_len; 91 86 u8 *ibss_params; 92 - u8 ibss_params_len; 93 87 u8 *challenge; 94 - u8 challenge_len; 95 88 u8 *wpa; 96 - u8 wpa_len; 97 89 u8 *rsn; 98 - u8 rsn_len; 99 90 u8 *erp_info; 100 - u8 erp_info_len; 101 91 u8 *ext_supp_rates; 102 - u8 ext_supp_rates_len; 103 92 u8 *wmm_info; 104 - u8 wmm_info_len; 105 93 u8 *wmm_param; 94 + 95 + /* length of them, respectively */ 96 + u8 ssid_len; 97 + u8 supp_rates_len; 98 + u8 fh_params_len; 99 + u8 ds_params_len; 100 + u8 cf_params_len; 101 + u8 tim_len; 102 + u8 ibss_params_len; 103 + u8 challenge_len; 104 + u8 wpa_len; 105 + u8 rsn_len; 106 + u8 erp_info_len; 107 + u8 ext_supp_rates_len; 108 + u8 wmm_info_len; 106 109 u8 wmm_param_len; 107 110 }; 108 111 ··· 314 311 } 315 312 316 313 314 + static void ieee80211_handle_erp_ie(struct net_device *dev, u8 erp_value) 315 + { 316 + struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); 317 + struct ieee80211_if_sta *ifsta = &sdata->u.sta; 318 + int use_protection = (erp_value & WLAN_ERP_USE_PROTECTION) != 0; 319 + 320 + if (use_protection != sdata->use_protection) { 321 + if (net_ratelimit()) { 322 + printk(KERN_DEBUG "%s: CTS protection %s (BSSID=" 323 + MAC_FMT ")\n", 324 + dev->name, 325 + use_protection ? "enabled" : "disabled", 326 + MAC_ARG(ifsta->bssid)); 327 + } 328 + sdata->use_protection = use_protection; 329 + } 330 + } 331 + 332 + 317 333 static void ieee80211_sta_send_associnfo(struct net_device *dev, 318 334 struct ieee80211_if_sta *ifsta) 319 335 { ··· 388 366 struct ieee80211_if_sta *ifsta, int assoc) 389 367 { 390 368 union iwreq_data wrqu; 369 + struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); 391 370 392 371 if (ifsta->associated == assoc) 393 372 return; ··· 397 374 398 375 if (assoc) { 399 376 struct ieee80211_sub_if_data *sdata; 377 + struct ieee80211_sta_bss *bss; 400 378 sdata = IEEE80211_DEV_TO_SUB_IF(dev); 401 379 if (sdata->type != IEEE80211_IF_TYPE_STA) 402 380 return; 381 + 382 + bss = ieee80211_rx_bss_get(dev, ifsta->bssid); 383 + if (bss) { 384 + if (bss->has_erp_value) 385 + ieee80211_handle_erp_ie(dev, bss->erp_value); 386 + ieee80211_rx_bss_put(dev, bss); 387 + } 388 + 403 389 netif_carrier_on(dev); 404 390 ifsta->prev_bssid_set = 1; 405 391 memcpy(ifsta->prev_bssid, sdata->u.sta.bssid, ETH_ALEN); ··· 416 384 ieee80211_sta_send_associnfo(dev, ifsta); 417 385 } else { 418 386 netif_carrier_off(dev); 387 + sdata->use_protection = 0; 419 388 memset(wrqu.ap_addr.sa_data, 0, ETH_ALEN); 420 389 } 421 390 wrqu.ap_addr.sa_family = ARPHRD_ETHER; ··· 1207 1174 return; 1208 1175 } 1209 1176 1177 + /* it probably doesn't, but if the frame includes an ERP value then 1178 + * update our stored copy */ 1179 + if (elems.erp_info && elems.erp_info_len >= 1) { 1180 + struct ieee80211_sta_bss *bss 1181 + = ieee80211_rx_bss_get(dev, ifsta->bssid); 1182 + if (bss) { 1183 + bss->erp_value = elems.erp_info[0]; 1184 + bss->has_erp_value = 1; 1185 + ieee80211_rx_bss_put(dev, bss); 1186 + } 1187 + } 1188 + 1210 1189 printk(KERN_DEBUG "%s: associated\n", dev->name); 1211 1190 ifsta->aid = aid; 1212 1191 ifsta->ap_capab = capab_info; ··· 1541 1496 return; 1542 1497 } 1543 1498 1499 + /* save the ERP value so that it is 
available at association time */ 1500 + if (elems.erp_info && elems.erp_info_len >= 1) { 1501 + bss->erp_value = elems.erp_info[0]; 1502 + bss->has_erp_value = 1; 1503 + } 1504 + 1544 1505 bss->beacon_int = le16_to_cpu(mgmt->u.beacon.beacon_int); 1545 1506 bss->capability = le16_to_cpu(mgmt->u.beacon.capab_info); 1546 1507 if (elems.ssid && elems.ssid_len <= IEEE80211_MAX_SSID_LEN) { ··· 1662 1611 size_t len, 1663 1612 struct ieee80211_rx_status *rx_status) 1664 1613 { 1665 - struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr); 1666 1614 struct ieee80211_sub_if_data *sdata; 1667 1615 struct ieee80211_if_sta *ifsta; 1668 - int use_protection; 1669 1616 size_t baselen; 1670 1617 struct ieee802_11_elems elems; 1671 1618 ··· 1687 1638 &elems) == ParseFailed) 1688 1639 return; 1689 1640 1690 - use_protection = 0; 1691 - if (elems.erp_info && elems.erp_info_len >= 1) { 1692 - use_protection = 1693 - (elems.erp_info[0] & ERP_INFO_USE_PROTECTION) != 0; 1694 - } 1695 - 1696 - if (use_protection != !!ifsta->use_protection) { 1697 - if (net_ratelimit()) { 1698 - printk(KERN_DEBUG "%s: CTS protection %s (BSSID=" 1699 - MAC_FMT ")\n", 1700 - dev->name, 1701 - use_protection ? "enabled" : "disabled", 1702 - MAC_ARG(ifsta->bssid)); 1703 - } 1704 - ifsta->use_protection = use_protection ? 1 : 0; 1705 - local->cts_protect_erp_frames = use_protection; 1706 - } 1641 + if (elems.erp_info && elems.erp_info_len >= 1) 1642 + ieee80211_handle_erp_ie(dev, elems.erp_info[0]); 1707 1643 1708 1644 if (elems.wmm_param && ifsta->wmm_enabled) { 1709 1645 ieee80211_sta_wmm_params(dev, ifsta, elems.wmm_param,
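Note on the ERP handling above: the ERP Information element defined by IEEE
802.11g is a single octet of flag bits, which is why only erp_info[0] is
stored and passed around. The bit layout, for reference (a sketch with names
as in the 802.11 specification, not necessarily this tree's headers):

    /* ERP Information element flags, one octet (IEEE 802.11g) */
    #define WLAN_ERP_NON_ERP_PRESENT  (1 << 0)  /* non-ERP STA associated */
    #define WLAN_ERP_USE_PROTECTION   (1 << 1)  /* tested by the code above */
    #define WLAN_ERP_BARKER_PREAMBLE  (1 << 2)  /* long preambles required */
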
+6 -2
net/mac80211/rc80211_simple.c
··· 187 187 } 188 188 #endif 189 189 190 - if (per_failed > local->rate_ctrl_num_down) { 190 + /* 191 + * XXX: Make these configurable once we have an 192 + * interface to the rate control algorithms 193 + */ 194 + if (per_failed > RATE_CONTROL_NUM_DOWN) { 191 195 rate_control_rate_dec(local, sta); 192 - } else if (per_failed < local->rate_ctrl_num_up) { 196 + } else if (per_failed < RATE_CONTROL_NUM_UP) { 193 197 rate_control_rate_inc(local, sta); 194 198 } 195 199 srctrl->tx_avg_rate_sum += status->control.rate->rate;
+17
net/netfilter/Kconfig
··· 102 102 If you want to compile it as a module, say M here and read 103 103 <file:Documentation/kbuild/modules.txt>. If unsure, say `N'. 104 104 105 + config NF_CT_PROTO_UDPLITE 106 + tristate 'UDP-Lite protocol connection tracking support (EXPERIMENTAL)' 107 + depends on EXPERIMENTAL && NF_CONNTRACK 108 + help 109 + With this option enabled, the layer 3 independent connection 110 + tracking code will be able to do state tracking on UDP-Lite 111 + connections. 112 + 113 + To compile it as a module, choose M here. If unsure, say N. 114 + 105 115 config NF_CONNTRACK_AMANDA 106 116 tristate "Amanda backup protocol support" 107 117 depends on NF_CONNTRACK ··· 432 422 433 423 If you want to compile it as a module, say M here and read 434 424 <file:Documentation/kbuild/modules.txt>. If unsure, say `N'. 425 + 426 + config NETFILTER_XT_MATCH_CONNLIMIT 427 + tristate '"connlimit" match support' 428 + depends on NETFILTER_XTABLES 429 + ---help--- 430 + This match allows you to match against the number of parallel 431 + connections to a server per client IP address (or address block). 435 432 436 433 config NETFILTER_XT_MATCH_CONNMARK 437 434 tristate '"connmark" connection mark match support'
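For reference, a typical rule using the new connlimit match, rejecting
clients that open more than 16 parallel connections to a local web server
(standard iptables connlimit syntax; interface, port and limit are
illustrative):

    iptables -A INPUT -p tcp --syn --dport 80 \
             -m connlimit --connlimit-above 16 -j REJECT

Adding --connlimit-mask 24 counts per /24 source block instead of per
individual address, covering the "(or address block)" case in the help text.
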
+2
net/netfilter/Makefile
··· 16 16 # SCTP protocol connection tracking 17 17 obj-$(CONFIG_NF_CT_PROTO_GRE) += nf_conntrack_proto_gre.o 18 18 obj-$(CONFIG_NF_CT_PROTO_SCTP) += nf_conntrack_proto_sctp.o 19 + obj-$(CONFIG_NF_CT_PROTO_UDPLITE) += nf_conntrack_proto_udplite.o 19 20 20 21 # netlink interface for nf_conntrack 21 22 obj-$(CONFIG_NF_CT_NETLINK) += nf_conntrack_netlink.o ··· 53 52 # matches 54 53 obj-$(CONFIG_NETFILTER_XT_MATCH_COMMENT) += xt_comment.o 55 54 obj-$(CONFIG_NETFILTER_XT_MATCH_CONNBYTES) += xt_connbytes.o 55 + obj-$(CONFIG_NETFILTER_XT_MATCH_CONNLIMIT) += xt_connlimit.o 56 56 obj-$(CONFIG_NETFILTER_XT_MATCH_CONNMARK) += xt_connmark.o 57 57 obj-$(CONFIG_NETFILTER_XT_MATCH_CONNTRACK) += xt_conntrack.o 58 58 obj-$(CONFIG_NETFILTER_XT_MATCH_DCCP) += xt_dccp.o
+35 -2
net/netfilter/nf_conntrack_core.c
··· 113 113 } 114 114 EXPORT_SYMBOL_GPL(nf_ct_get_tuple); 115 115 116 + int nf_ct_get_tuplepr(const struct sk_buff *skb, 117 + unsigned int nhoff, 118 + u_int16_t l3num, 119 + struct nf_conntrack_tuple *tuple) 120 + { 121 + struct nf_conntrack_l3proto *l3proto; 122 + struct nf_conntrack_l4proto *l4proto; 123 + unsigned int protoff; 124 + u_int8_t protonum; 125 + int ret; 126 + 127 + rcu_read_lock(); 128 + 129 + l3proto = __nf_ct_l3proto_find(l3num); 130 + ret = l3proto->get_l4proto(skb, nhoff, &protoff, &protonum); 131 + if (ret != NF_ACCEPT) { 132 + rcu_read_unlock(); 133 + return 0; 134 + } 135 + 136 + l4proto = __nf_ct_l4proto_find(l3num, protonum); 137 + 138 + ret = nf_ct_get_tuple(skb, nhoff, protoff, l3num, protonum, tuple, 139 + l3proto, l4proto); 140 + 141 + rcu_read_unlock(); 142 + return ret; 143 + } 144 + EXPORT_SYMBOL_GPL(nf_ct_get_tuplepr); 145 + 116 146 int 117 147 nf_ct_invert_tuple(struct nf_conntrack_tuple *inverse, 118 148 const struct nf_conntrack_tuple *orig, ··· 652 622 653 623 /* rcu_read_lock()ed by nf_hook_slow */ 654 624 l3proto = __nf_ct_l3proto_find((u_int16_t)pf); 655 - 656 - if ((ret = l3proto->prepare(pskb, hooknum, &dataoff, &protonum)) <= 0) { 625 + ret = l3proto->get_l4proto(*pskb, skb_network_offset(*pskb), 626 + &dataoff, &protonum); 627 + if (ret <= 0) { 657 628 pr_debug("not prepared to track yet or error occured\n"); 629 + NF_CT_STAT_INC_ATOMIC(error); 630 + NF_CT_STAT_INC_ATOMIC(invalid); 658 631 return -ret; 659 632 } 660 633
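nf_ct_get_tuplepr() wraps the l3proto/l4proto lookups so that code outside
the conntrack core can build a tuple directly from an skb. A minimal sketch
of the calling convention (mirroring how the connlimit match later in this
series uses it; the function returns 0 when the packet cannot be parsed):

    struct nf_conntrack_tuple tuple;

    if (!nf_ct_get_tuplepr(skb, skb_network_offset(skb),
                           PF_INET, &tuple))
        return false;   /* malformed packet: caller decides the verdict */
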
+4 -5
net/netfilter/nf_conntrack_l3proto_generic.c
··· 61 61 return 0; 62 62 } 63 63 64 - static int 65 - generic_prepare(struct sk_buff **pskb, unsigned int hooknum, 66 - unsigned int *dataoff, u_int8_t *protonum) 64 + static int generic_get_l4proto(const struct sk_buff *skb, unsigned int nhoff, 65 + unsigned int *dataoff, u_int8_t *protonum) 67 66 { 68 67 /* Never track !!! */ 69 68 return -NF_ACCEPT; 70 69 } 71 70 72 71 73 - struct nf_conntrack_l3proto nf_conntrack_l3proto_generic = { 72 + struct nf_conntrack_l3proto nf_conntrack_l3proto_generic __read_mostly = { 74 73 .l3proto = PF_UNSPEC, 75 74 .name = "unknown", 76 75 .pkt_to_tuple = generic_pkt_to_tuple, 77 76 .invert_tuple = generic_invert_tuple, 78 77 .print_tuple = generic_print_tuple, 79 78 .print_conntrack = generic_print_conntrack, 80 - .prepare = generic_prepare, 79 + .get_l4proto = generic_get_l4proto, 81 80 }; 82 81 EXPORT_SYMBOL_GPL(nf_conntrack_l3proto_generic);
+1 -1
net/netfilter/nf_conntrack_proto_generic.c
··· 98 98 #endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ 99 99 #endif /* CONFIG_SYSCTL */ 100 100 101 - struct nf_conntrack_l4proto nf_conntrack_l4proto_generic = 101 + struct nf_conntrack_l4proto nf_conntrack_l4proto_generic __read_mostly = 102 102 { 103 103 .l3proto = PF_UNSPEC, 104 104 .l4proto = 0,
+1 -1
net/netfilter/nf_conntrack_proto_gre.c
··· 261 261 } 262 262 263 263 /* protocol helper struct */ 264 - static struct nf_conntrack_l4proto nf_conntrack_l4proto_gre4 = { 264 + static struct nf_conntrack_l4proto nf_conntrack_l4proto_gre4 __read_mostly = { 265 265 .l3proto = AF_INET, 266 266 .l4proto = IPPROTO_GRE, 267 267 .name = "gre",
+2 -2
net/netfilter/nf_conntrack_proto_sctp.c
··· 601 601 #endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ 602 602 #endif 603 603 604 - struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp4 = { 604 + static struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp4 __read_mostly = { 605 605 .l3proto = PF_INET, 606 606 .l4proto = IPPROTO_SCTP, 607 607 .name = "sctp", ··· 622 622 #endif 623 623 }; 624 624 625 - struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp6 = { 625 + static struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp6 __read_mostly = { 626 626 .l3proto = PF_INET6, 627 627 .l4proto = IPPROTO_SCTP, 628 628 .name = "sctp",
+2 -2
net/netfilter/nf_conntrack_proto_tcp.c
··· 1372 1372 #endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ 1373 1373 #endif /* CONFIG_SYSCTL */ 1374 1374 1375 - struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp4 = 1375 + struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp4 __read_mostly = 1376 1376 { 1377 1377 .l3proto = PF_INET, 1378 1378 .l4proto = IPPROTO_TCP, ··· 1401 1401 }; 1402 1402 EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_tcp4); 1403 1403 1404 - struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp6 = 1404 + struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp6 __read_mostly = 1405 1405 { 1406 1406 .l3proto = PF_INET6, 1407 1407 .l4proto = IPPROTO_TCP,
+2 -2
net/netfilter/nf_conntrack_proto_udp.c
··· 191 191 #endif /* CONFIG_NF_CONNTRACK_PROC_COMPAT */ 192 192 #endif /* CONFIG_SYSCTL */ 193 193 194 - struct nf_conntrack_l4proto nf_conntrack_l4proto_udp4 = 194 + struct nf_conntrack_l4proto nf_conntrack_l4proto_udp4 __read_mostly = 195 195 { 196 196 .l3proto = PF_INET, 197 197 .l4proto = IPPROTO_UDP, ··· 218 218 }; 219 219 EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_udp4); 220 220 221 - struct nf_conntrack_l4proto nf_conntrack_l4proto_udp6 = 221 + struct nf_conntrack_l4proto nf_conntrack_l4proto_udp6 __read_mostly = 222 222 { 223 223 .l3proto = PF_INET6, 224 224 .l4proto = IPPROTO_UDP,
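The __read_mostly annotations in the hunks above are a cache-placement hint:
these protocol descriptors are written once at registration time and then
read on every tracked packet. On architectures that support it,
<linux/cache.h> defines the attribute roughly as follows (elsewhere it
expands to nothing):

    /* keep rarely written objects away from hot, frequently
     * dirtied cachelines */
    #define __read_mostly __attribute__((__section__(".data.read_mostly")))
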
+266
net/netfilter/nf_conntrack_proto_udplite.c
··· 1 + /* (C) 1999-2001 Paul `Rusty' Russell 2 + * (C) 2002-2004 Netfilter Core Team <coreteam@netfilter.org> 3 + * (C) 2007 Patrick McHardy <kaber@trash.net> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + */ 9 + 10 + #include <linux/types.h> 11 + #include <linux/timer.h> 12 + #include <linux/module.h> 13 + #include <linux/netfilter.h> 14 + #include <linux/udp.h> 15 + #include <linux/seq_file.h> 16 + #include <linux/skbuff.h> 17 + #include <linux/ipv6.h> 18 + #include <net/ip6_checksum.h> 19 + #include <net/checksum.h> 20 + 21 + #include <linux/netfilter.h> 22 + #include <linux/netfilter_ipv4.h> 23 + #include <linux/netfilter_ipv6.h> 24 + #include <net/netfilter/nf_conntrack_l4proto.h> 25 + #include <net/netfilter/nf_conntrack_ecache.h> 26 + 27 + static unsigned int nf_ct_udplite_timeout __read_mostly = 30*HZ; 28 + static unsigned int nf_ct_udplite_timeout_stream __read_mostly = 180*HZ; 29 + 30 + static int udplite_pkt_to_tuple(const struct sk_buff *skb, 31 + unsigned int dataoff, 32 + struct nf_conntrack_tuple *tuple) 33 + { 34 + struct udphdr _hdr, *hp; 35 + 36 + hp = skb_header_pointer(skb, dataoff, sizeof(_hdr), &_hdr); 37 + if (hp == NULL) 38 + return 0; 39 + 40 + tuple->src.u.udp.port = hp->source; 41 + tuple->dst.u.udp.port = hp->dest; 42 + return 1; 43 + } 44 + 45 + static int udplite_invert_tuple(struct nf_conntrack_tuple *tuple, 46 + const struct nf_conntrack_tuple *orig) 47 + { 48 + tuple->src.u.udp.port = orig->dst.u.udp.port; 49 + tuple->dst.u.udp.port = orig->src.u.udp.port; 50 + return 1; 51 + } 52 + 53 + /* Print out the per-protocol part of the tuple. */ 54 + static int udplite_print_tuple(struct seq_file *s, 55 + const struct nf_conntrack_tuple *tuple) 56 + { 57 + return seq_printf(s, "sport=%hu dport=%hu ", 58 + ntohs(tuple->src.u.udp.port), 59 + ntohs(tuple->dst.u.udp.port)); 60 + } 61 + 62 + /* Print out the private part of the conntrack. */ 63 + static int udplite_print_conntrack(struct seq_file *s, 64 + const struct nf_conn *conntrack) 65 + { 66 + return 0; 67 + } 68 + 69 + /* Returns verdict for packet, and may modify conntracktype */ 70 + static int udplite_packet(struct nf_conn *conntrack, 71 + const struct sk_buff *skb, 72 + unsigned int dataoff, 73 + enum ip_conntrack_info ctinfo, 74 + int pf, 75 + unsigned int hooknum) 76 + { 77 + /* If we've seen traffic both ways, this is some kind of UDP 78 + stream. Extend timeout. */ 79 + if (test_bit(IPS_SEEN_REPLY_BIT, &conntrack->status)) { 80 + nf_ct_refresh_acct(conntrack, ctinfo, skb, 81 + nf_ct_udplite_timeout_stream); 82 + /* Also, more likely to be important, and not a probe */ 83 + if (!test_and_set_bit(IPS_ASSURED_BIT, &conntrack->status)) 84 + nf_conntrack_event_cache(IPCT_STATUS, skb); 85 + } else 86 + nf_ct_refresh_acct(conntrack, ctinfo, skb, 87 + nf_ct_udplite_timeout); 88 + 89 + return NF_ACCEPT; 90 + } 91 + 92 + /* Called when a new connection for this protocol found. */ 93 + static int udplite_new(struct nf_conn *conntrack, const struct sk_buff *skb, 94 + unsigned int dataoff) 95 + { 96 + return 1; 97 + } 98 + 99 + static int udplite_error(struct sk_buff *skb, unsigned int dataoff, 100 + enum ip_conntrack_info *ctinfo, 101 + int pf, 102 + unsigned int hooknum) 103 + { 104 + unsigned int udplen = skb->len - dataoff; 105 + struct udphdr _hdr, *hdr; 106 + unsigned int cscov; 107 + 108 + /* Header is too small? 
*/ 109 + hdr = skb_header_pointer(skb, dataoff, sizeof(_hdr), &_hdr); 110 + if (hdr == NULL) { 111 + if (LOG_INVALID(IPPROTO_UDPLITE)) 112 + nf_log_packet(pf, 0, skb, NULL, NULL, NULL, 113 + "nf_ct_udplite: short packet "); 114 + return -NF_ACCEPT; 115 + } 116 + 117 + cscov = ntohs(hdr->len); 118 + if (cscov == 0) 119 + cscov = udplen; 120 + else if (cscov < sizeof(*hdr) || cscov > udplen) { 121 + if (LOG_INVALID(IPPROTO_UDPLITE)) 122 + nf_log_packet(pf, 0, skb, NULL, NULL, NULL, 123 + "nf_ct_udplite: invalid checksum coverage "); 124 + return -NF_ACCEPT; 125 + } 126 + 127 + /* UDPLITE mandates checksums */ 128 + if (!hdr->check) { 129 + if (LOG_INVALID(IPPROTO_UDPLITE)) 130 + nf_log_packet(pf, 0, skb, NULL, NULL, NULL, 131 + "nf_ct_udplite: checksum missing "); 132 + return -NF_ACCEPT; 133 + } 134 + 135 + /* Checksum invalid? Ignore. */ 136 + if (nf_conntrack_checksum && !skb_csum_unnecessary(skb) && 137 + ((pf == PF_INET && hooknum == NF_IP_PRE_ROUTING) || 138 + (pf == PF_INET6 && hooknum == NF_IP6_PRE_ROUTING))) { 139 + if (pf == PF_INET) { 140 + struct iphdr *iph = ip_hdr(skb); 141 + 142 + skb->csum = csum_tcpudp_nofold(iph->saddr, iph->daddr, 143 + udplen, IPPROTO_UDPLITE, 0); 144 + } else { 145 + struct ipv6hdr *ipv6h = ipv6_hdr(skb); 146 + __wsum hsum = skb_checksum(skb, 0, dataoff, 0); 147 + 148 + skb->csum = ~csum_unfold( 149 + csum_ipv6_magic(&ipv6h->saddr, &ipv6h->daddr, 150 + udplen, IPPROTO_UDPLITE, 151 + csum_sub(0, hsum))); 152 + } 153 + 154 + skb->ip_summed = CHECKSUM_NONE; 155 + if (__skb_checksum_complete_head(skb, dataoff + cscov)) { 156 + if (LOG_INVALID(IPPROTO_UDPLITE)) 157 + nf_log_packet(pf, 0, skb, NULL, NULL, NULL, 158 + "nf_ct_udplite: bad UDPLite " 159 + "checksum "); 160 + return -NF_ACCEPT; 161 + } 162 + skb->ip_summed = CHECKSUM_UNNECESSARY; 163 + } 164 + 165 + return NF_ACCEPT; 166 + } 167 + 168 + #ifdef CONFIG_SYSCTL 169 + static unsigned int udplite_sysctl_table_users; 170 + static struct ctl_table_header *udplite_sysctl_header; 171 + static struct ctl_table udplite_sysctl_table[] = { 172 + { 173 + .ctl_name = CTL_UNNUMBERED, 174 + .procname = "nf_conntrack_udplite_timeout", 175 + .data = &nf_ct_udplite_timeout, 176 + .maxlen = sizeof(unsigned int), 177 + .mode = 0644, 178 + .proc_handler = &proc_dointvec_jiffies, 179 + }, 180 + { 181 + .ctl_name = CTL_UNNUMBERED, 182 + .procname = "nf_conntrack_udplite_timeout_stream", 183 + .data = &nf_ct_udplite_timeout_stream, 184 + .maxlen = sizeof(unsigned int), 185 + .mode = 0644, 186 + .proc_handler = &proc_dointvec_jiffies, 187 + }, 188 + { 189 + .ctl_name = 0 190 + } 191 + }; 192 + #endif /* CONFIG_SYSCTL */ 193 + 194 + static struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite4 __read_mostly = 195 + { 196 + .l3proto = PF_INET, 197 + .l4proto = IPPROTO_UDPLITE, 198 + .name = "udplite", 199 + .pkt_to_tuple = udplite_pkt_to_tuple, 200 + .invert_tuple = udplite_invert_tuple, 201 + .print_tuple = udplite_print_tuple, 202 + .print_conntrack = udplite_print_conntrack, 203 + .packet = udplite_packet, 204 + .new = udplite_new, 205 + .error = udplite_error, 206 + #if defined(CONFIG_NF_CT_NETLINK) || defined(CONFIG_NF_CT_NETLINK_MODULE) 207 + .tuple_to_nfattr = nf_ct_port_tuple_to_nfattr, 208 + .nfattr_to_tuple = nf_ct_port_nfattr_to_tuple, 209 + #endif 210 + #ifdef CONFIG_SYSCTL 211 + .ctl_table_users = &udplite_sysctl_table_users, 212 + .ctl_table_header = &udplite_sysctl_header, 213 + .ctl_table = udplite_sysctl_table, 214 + #endif 215 + }; 216 + 217 + static struct nf_conntrack_l4proto 
nf_conntrack_l4proto_udplite6 __read_mostly = 218 + { 219 + .l3proto = PF_INET6, 220 + .l4proto = IPPROTO_UDPLITE, 221 + .name = "udplite", 222 + .pkt_to_tuple = udplite_pkt_to_tuple, 223 + .invert_tuple = udplite_invert_tuple, 224 + .print_tuple = udplite_print_tuple, 225 + .print_conntrack = udplite_print_conntrack, 226 + .packet = udplite_packet, 227 + .new = udplite_new, 228 + .error = udplite_error, 229 + #if defined(CONFIG_NF_CT_NETLINK) || defined(CONFIG_NF_CT_NETLINK_MODULE) 230 + .tuple_to_nfattr = nf_ct_port_tuple_to_nfattr, 231 + .nfattr_to_tuple = nf_ct_port_nfattr_to_tuple, 232 + #endif 233 + #ifdef CONFIG_SYSCTL 234 + .ctl_table_users = &udplite_sysctl_table_users, 235 + .ctl_table_header = &udplite_sysctl_header, 236 + .ctl_table = udplite_sysctl_table, 237 + #endif 238 + }; 239 + 240 + static int __init nf_conntrack_proto_udplite_init(void) 241 + { 242 + int err; 243 + 244 + err = nf_conntrack_l4proto_register(&nf_conntrack_l4proto_udplite4); 245 + if (err < 0) 246 + goto err1; 247 + err = nf_conntrack_l4proto_register(&nf_conntrack_l4proto_udplite6); 248 + if (err < 0) 249 + goto err2; 250 + return 0; 251 + err2: 252 + nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_udplite4); 253 + err1: 254 + return err; 255 + } 256 + 257 + static void __exit nf_conntrack_proto_udplite_exit(void) 258 + { 259 + nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_udplite6); 260 + nf_conntrack_l4proto_unregister(&nf_conntrack_l4proto_udplite4); 261 + } 262 + 263 + module_init(nf_conntrack_proto_udplite_init); 264 + module_exit(nf_conntrack_proto_udplite_exit); 265 + 266 + MODULE_LICENSE("GPL");
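Both timeouts registered above are runtime-tunable once the module is
loaded. Assuming the usual /proc/sys placement of these CTL_UNNUMBERED
netfilter entries, something like:

    # 30s default for ordinary flows, 180s once traffic has been seen
    # in both directions (IPS_SEEN_REPLY -> the "stream" timeout)
    sysctl -w net.netfilter.nf_conntrack_udplite_timeout=60
    sysctl -w net.netfilter.nf_conntrack_udplite_timeout_stream=300
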
+313
net/netfilter/xt_connlimit.c
··· 1 + /* 2 + * netfilter module to limit the number of parallel tcp 3 + * connections per IP address. 4 + * (c) 2000 Gerd Knorr <kraxel@bytesex.org> 5 + * Nov 2002: Martin Bene <martin.bene@icomedias.com>: 6 + * only ignore TIME_WAIT or gone connections 7 + * Copyright © Jan Engelhardt <jengelh@gmx.de>, 2007 8 + * 9 + * based on ... 10 + * 11 + * Kernel module to match connection tracking information. 12 + * GPL (C) 1999 Rusty Russell (rusty@rustcorp.com.au). 13 + */ 14 + #include <linux/in.h> 15 + #include <linux/in6.h> 16 + #include <linux/ip.h> 17 + #include <linux/ipv6.h> 18 + #include <linux/jhash.h> 19 + #include <linux/list.h> 20 + #include <linux/module.h> 21 + #include <linux/random.h> 22 + #include <linux/skbuff.h> 23 + #include <linux/spinlock.h> 24 + #include <linux/netfilter/nf_conntrack_tcp.h> 25 + #include <linux/netfilter/x_tables.h> 26 + #include <linux/netfilter/xt_connlimit.h> 27 + #include <net/netfilter/nf_conntrack.h> 28 + #include <net/netfilter/nf_conntrack_core.h> 29 + #include <net/netfilter/nf_conntrack_tuple.h> 30 + 31 + /* we will save the tuples of all connections we care about */ 32 + struct xt_connlimit_conn { 33 + struct list_head list; 34 + struct nf_conntrack_tuple tuple; 35 + }; 36 + 37 + struct xt_connlimit_data { 38 + struct list_head iphash[256]; 39 + spinlock_t lock; 40 + }; 41 + 42 + static u_int32_t connlimit_rnd; 43 + static bool connlimit_rnd_inited; 44 + 45 + static inline unsigned int connlimit_iphash(u_int32_t addr) 46 + { 47 + if (unlikely(!connlimit_rnd_inited)) { 48 + get_random_bytes(&connlimit_rnd, sizeof(connlimit_rnd)); 49 + connlimit_rnd_inited = true; 50 + } 51 + return jhash_1word(addr, connlimit_rnd) & 0xFF; 52 + } 53 + 54 + static inline unsigned int 55 + connlimit_iphash6(const union nf_conntrack_address *addr, 56 + const union nf_conntrack_address *mask) 57 + { 58 + union nf_conntrack_address res; 59 + unsigned int i; 60 + 61 + if (unlikely(!connlimit_rnd_inited)) { 62 + get_random_bytes(&connlimit_rnd, sizeof(connlimit_rnd)); 63 + connlimit_rnd_inited = true; 64 + } 65 + 66 + for (i = 0; i < ARRAY_SIZE(addr->ip6); ++i) 67 + res.ip6[i] = addr->ip6[i] & mask->ip6[i]; 68 + 69 + return jhash2(res.ip6, ARRAY_SIZE(res.ip6), connlimit_rnd) & 0xFF; 70 + } 71 + 72 + static inline bool already_closed(const struct nf_conn *conn) 73 + { 74 + u_int16_t proto = conn->tuplehash[0].tuple.dst.protonum; 75 + 76 + if (proto == IPPROTO_TCP) 77 + return conn->proto.tcp.state == TCP_CONNTRACK_TIME_WAIT; 78 + else 79 + return 0; 80 + } 81 + 82 + static inline unsigned int 83 + same_source_net(const union nf_conntrack_address *addr, 84 + const union nf_conntrack_address *mask, 85 + const union nf_conntrack_address *u3, unsigned int family) 86 + { 87 + if (family == AF_INET) { 88 + return (addr->ip & mask->ip) == (u3->ip & mask->ip); 89 + } else { 90 + union nf_conntrack_address lh, rh; 91 + unsigned int i; 92 + 93 + for (i = 0; i < ARRAY_SIZE(addr->ip6); ++i) { 94 + lh.ip6[i] = addr->ip6[i] & mask->ip6[i]; 95 + rh.ip6[i] = u3->ip6[i] & mask->ip6[i]; 96 + } 97 + 98 + return memcmp(&lh.ip6, &rh.ip6, sizeof(lh.ip6)) == 0; 99 + } 100 + } 101 + 102 + static int count_them(struct xt_connlimit_data *data, 103 + const struct nf_conntrack_tuple *tuple, 104 + const union nf_conntrack_address *addr, 105 + const union nf_conntrack_address *mask, 106 + const struct xt_match *match) 107 + { 108 + struct nf_conntrack_tuple_hash *found; 109 + struct xt_connlimit_conn *conn; 110 + struct xt_connlimit_conn *tmp; 111 + struct nf_conn *found_ct; 112 + struct list_head 
*hash; 113 + bool addit = true; 114 + int matches = 0; 115 + 116 + 117 + if (match->family == AF_INET6) 118 + hash = &data->iphash[connlimit_iphash6(addr, mask)]; 119 + else 120 + hash = &data->iphash[connlimit_iphash(addr->ip & mask->ip)]; 121 + 122 + read_lock_bh(&nf_conntrack_lock); 123 + 124 + /* check the saved connections */ 125 + list_for_each_entry_safe(conn, tmp, hash, list) { 126 + found = __nf_conntrack_find(&conn->tuple, NULL); 127 + found_ct = NULL; 128 + 129 + if (found != NULL) 130 + found_ct = nf_ct_tuplehash_to_ctrack(found); 131 + 132 + if (found_ct != NULL && 133 + nf_ct_tuple_equal(&conn->tuple, tuple) && 134 + !already_closed(found_ct)) 135 + /* 136 + * Just to be sure we have it only once in the list. 137 + * We should not see tuples twice unless someone hooks 138 + * this into a table without "-p tcp --syn". 139 + */ 140 + addit = false; 141 + 142 + if (found == NULL) { 143 + /* this one is gone */ 144 + list_del(&conn->list); 145 + kfree(conn); 146 + continue; 147 + } 148 + 149 + if (already_closed(found_ct)) { 150 + /* 151 + * we do not care about connections which are 152 + * closed already -> ditch it 153 + */ 154 + list_del(&conn->list); 155 + kfree(conn); 156 + continue; 157 + } 158 + 159 + if (same_source_net(addr, mask, &conn->tuple.src.u3, 160 + match->family)) 161 + /* same source network -> be counted! */ 162 + ++matches; 163 + } 164 + 165 + read_unlock_bh(&nf_conntrack_lock); 166 + 167 + if (addit) { 168 + /* save the new connection in our list */ 169 + conn = kzalloc(sizeof(*conn), GFP_ATOMIC); 170 + if (conn == NULL) 171 + return -ENOMEM; 172 + conn->tuple = *tuple; 173 + list_add(&conn->list, hash); 174 + ++matches; 175 + } 176 + 177 + return matches; 178 + } 179 + 180 + static bool connlimit_match(const struct sk_buff *skb, 181 + const struct net_device *in, 182 + const struct net_device *out, 183 + const struct xt_match *match, 184 + const void *matchinfo, int offset, 185 + unsigned int protoff, bool *hotdrop) 186 + { 187 + const struct xt_connlimit_info *info = matchinfo; 188 + union nf_conntrack_address addr, mask; 189 + struct nf_conntrack_tuple tuple; 190 + const struct nf_conntrack_tuple *tuple_ptr = &tuple; 191 + enum ip_conntrack_info ctinfo; 192 + const struct nf_conn *ct; 193 + int connections; 194 + 195 + ct = nf_ct_get(skb, &ctinfo); 196 + if (ct != NULL) 197 + tuple_ptr = &ct->tuplehash[0].tuple; 198 + else if (!nf_ct_get_tuplepr(skb, skb_network_offset(skb), 199 + match->family, &tuple)) 200 + goto hotdrop; 201 + 202 + if (match->family == AF_INET6) { 203 + const struct ipv6hdr *iph = ipv6_hdr(skb); 204 + memcpy(&addr.ip6, &iph->saddr, sizeof(iph->saddr)); 205 + memcpy(&mask.ip6, info->v6_mask, sizeof(info->v6_mask)); 206 + } else { 207 + const struct iphdr *iph = ip_hdr(skb); 208 + addr.ip = iph->saddr; 209 + mask.ip = info->v4_mask; 210 + } 211 + 212 + spin_lock_bh(&info->data->lock); 213 + connections = count_them(info->data, tuple_ptr, &addr, &mask, match); 214 + spin_unlock_bh(&info->data->lock); 215 + 216 + if (connections < 0) { 217 + /* kmalloc failed, drop it entirely */ 218 + *hotdrop = true; 219 + return false; 220 + } 221 + 222 + return (connections > info->limit) ^ info->inverse; 223 + 224 + hotdrop: 225 + *hotdrop = true; 226 + return false; 227 + } 228 + 229 + static bool connlimit_check(const char *tablename, const void *ip, 230 + const struct xt_match *match, void *matchinfo, 231 + unsigned int hook_mask) 232 + { 233 + struct xt_connlimit_info *info = matchinfo; 234 + unsigned int i; 235 + 236 + if 
(nf_ct_l3proto_try_module_get(match->family) < 0) { 237 + printk(KERN_WARNING "cannot load conntrack support for " 238 + "address family %u\n", match->family); 239 + return false; 240 + } 241 + 242 + /* init private data */ 243 + info->data = kmalloc(sizeof(struct xt_connlimit_data), GFP_KERNEL); 244 + if (info->data == NULL) { 245 + nf_ct_l3proto_module_put(match->family); 246 + return false; 247 + } 248 + 249 + spin_lock_init(&info->data->lock); 250 + for (i = 0; i < ARRAY_SIZE(info->data->iphash); ++i) 251 + INIT_LIST_HEAD(&info->data->iphash[i]); 252 + 253 + return true; 254 + } 255 + 256 + static void connlimit_destroy(const struct xt_match *match, void *matchinfo) 257 + { 258 + struct xt_connlimit_info *info = matchinfo; 259 + struct xt_connlimit_conn *conn; 260 + struct xt_connlimit_conn *tmp; 261 + struct list_head *hash = info->data->iphash; 262 + unsigned int i; 263 + 264 + nf_ct_l3proto_module_put(match->family); 265 + 266 + for (i = 0; i < ARRAY_SIZE(info->data->iphash); ++i) { 267 + list_for_each_entry_safe(conn, tmp, &hash[i], list) { 268 + list_del(&conn->list); 269 + kfree(conn); 270 + } 271 + } 272 + 273 + kfree(info->data); 274 + } 275 + 276 + static struct xt_match connlimit_reg[] __read_mostly = { 277 + { 278 + .name = "connlimit", 279 + .family = AF_INET, 280 + .checkentry = connlimit_check, 281 + .match = connlimit_match, 282 + .matchsize = sizeof(struct xt_connlimit_info), 283 + .destroy = connlimit_destroy, 284 + .me = THIS_MODULE, 285 + }, 286 + { 287 + .name = "connlimit", 288 + .family = AF_INET6, 289 + .checkentry = connlimit_check, 290 + .match = connlimit_match, 291 + .matchsize = sizeof(struct xt_connlimit_info), 292 + .destroy = connlimit_destroy, 293 + .me = THIS_MODULE, 294 + }, 295 + }; 296 + 297 + static int __init xt_connlimit_init(void) 298 + { 299 + return xt_register_matches(connlimit_reg, ARRAY_SIZE(connlimit_reg)); 300 + } 301 + 302 + static void __exit xt_connlimit_exit(void) 303 + { 304 + xt_unregister_matches(connlimit_reg, ARRAY_SIZE(connlimit_reg)); 305 + } 306 + 307 + module_init(xt_connlimit_init); 308 + module_exit(xt_connlimit_exit); 309 + MODULE_AUTHOR("Jan Engelhardt <jengelh@gmx.de>"); 310 + MODULE_DESCRIPTION("netfilter xt_connlimit match module"); 311 + MODULE_LICENSE("GPL"); 312 + MODULE_ALIAS("ipt_connlimit"); 313 + MODULE_ALIAS("ip6t_connlimit");
+1 -1
net/rfkill/rfkill-input.c
··· 55 55 56 56 static void rfkill_schedule_toggle(struct rfkill_task *task) 57 57 { 58 - unsigned int flags; 58 + unsigned long flags; 59 59 60 60 spin_lock_irqsave(&task->lock, flags); 61 61
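The type change above is not cosmetic: spin_lock_irqsave() stores the
architecture's saved interrupt state in its flags argument, which must be
unsigned long; an unsigned int silently truncates it on 64-bit targets. The
canonical idiom:

    unsigned long flags;    /* must be unsigned long, never int */

    spin_lock_irqsave(&task->lock, flags);
    /* ... critical section ... */
    spin_unlock_irqrestore(&task->lock, flags);
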
+4 -4
net/sched/Kconfig
··· 472 472 473 473 config NET_CLS_POLICE 474 474 bool "Traffic Policing (obsolete)" 475 - depends on NET_CLS_ACT!=y 475 + select NET_CLS_ACT 476 + select NET_ACT_POLICE 476 477 ---help--- 477 478 Say Y here if you want to do traffic policing, i.e. strict 478 - bandwidth limiting. This option is obsoleted by the traffic 479 - policer implemented as action, it stays here for compatibility 480 - reasons. 479 + bandwidth limiting. This option is obsolete and just selects 480 + the option replacing it. It will be removed in the future. 481 481 482 482 config NET_CLS_IND 483 483 bool "Incoming device classification"
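With the classifier-built-in policer gone, policing is configured through
the action API. The action-based equivalent of a simple ingress rate cap
looks like this (standard tc syntax; device and rates are illustrative):

    tc qdisc add dev eth0 ingress
    tc filter add dev eth0 parent ffff: protocol ip u32 \
        match u32 0 0 \
        action police rate 1mbit burst 100k drop
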
-1
net/sched/Makefile
··· 8 8 obj-$(CONFIG_NET_CLS) += cls_api.o 9 9 obj-$(CONFIG_NET_CLS_ACT) += act_api.o 10 10 obj-$(CONFIG_NET_ACT_POLICE) += act_police.o 11 - obj-$(CONFIG_NET_CLS_POLICE) += act_police.o 12 11 obj-$(CONFIG_NET_ACT_GACT) += act_gact.o 13 12 obj-$(CONFIG_NET_ACT_MIRRED) += act_mirred.o 14 13 obj-$(CONFIG_NET_ACT_IPT) += act_ipt.o
+13 -233
net/sched/act_police.c
··· 50 50 51 51 /* Each policer is serialized by its individual spinlock */ 52 52 53 - #ifdef CONFIG_NET_CLS_ACT 54 53 static int tcf_act_police_walker(struct sk_buff *skb, struct netlink_callback *cb, 55 54 int type, struct tc_action *a) 56 55 { ··· 95 96 nlmsg_trim(skb, r); 96 97 goto done; 97 98 } 98 - #endif 99 99 100 - void tcf_police_destroy(struct tcf_police *p) 100 + static void tcf_police_destroy(struct tcf_police *p) 101 101 { 102 102 unsigned int h = tcf_hash(p->tcf_index, POL_TAB_MASK); 103 103 struct tcf_common **p1p; ··· 119 121 BUG_TRAP(0); 120 122 } 121 123 122 - #ifdef CONFIG_NET_CLS_ACT 123 124 static int tcf_act_police_locate(struct rtattr *rta, struct rtattr *est, 124 125 struct tc_action *a, int ovr, int bind) 125 126 { ··· 244 247 static int tcf_act_police_cleanup(struct tc_action *a, int bind) 245 248 { 246 249 struct tcf_police *p = a->priv; 250 + int ret = 0; 247 251 248 - if (p != NULL) 249 - return tcf_police_release(p, bind); 250 - return 0; 252 + if (p != NULL) { 253 + if (bind) 254 + p->tcf_bindcnt--; 255 + 256 + p->tcf_refcnt--; 257 + if (p->tcf_refcnt <= 0 && !p->tcf_bindcnt) { 258 + tcf_police_destroy(p); 259 + ret = 1; 260 + } 261 + } 262 + return ret; 251 263 } 252 264 253 265 static int tcf_act_police(struct sk_buff *skb, struct tc_action *a, ··· 378 372 379 373 module_init(police_init_module); 380 374 module_exit(police_cleanup_module); 381 - 382 - #else /* CONFIG_NET_CLS_ACT */ 383 - 384 - static struct tcf_common *tcf_police_lookup(u32 index) 385 - { 386 - struct tcf_hashinfo *hinfo = &police_hash_info; 387 - struct tcf_common *p; 388 - 389 - read_lock(hinfo->lock); 390 - for (p = hinfo->htab[tcf_hash(index, hinfo->hmask)]; p; 391 - p = p->tcfc_next) { 392 - if (p->tcfc_index == index) 393 - break; 394 - } 395 - read_unlock(hinfo->lock); 396 - 397 - return p; 398 - } 399 - 400 - static u32 tcf_police_new_index(void) 401 - { 402 - u32 *idx_gen = &police_idx_gen; 403 - u32 val = *idx_gen; 404 - 405 - do { 406 - if (++val == 0) 407 - val = 1; 408 - } while (tcf_police_lookup(val)); 409 - 410 - return (*idx_gen = val); 411 - } 412 - 413 - struct tcf_police *tcf_police_locate(struct rtattr *rta, struct rtattr *est) 414 - { 415 - unsigned int h; 416 - struct tcf_police *police; 417 - struct rtattr *tb[TCA_POLICE_MAX]; 418 - struct tc_police *parm; 419 - int size; 420 - 421 - if (rtattr_parse_nested(tb, TCA_POLICE_MAX, rta) < 0) 422 - return NULL; 423 - 424 - if (tb[TCA_POLICE_TBF-1] == NULL) 425 - return NULL; 426 - size = RTA_PAYLOAD(tb[TCA_POLICE_TBF-1]); 427 - if (size != sizeof(*parm) && size != sizeof(struct tc_police_compat)) 428 - return NULL; 429 - 430 - parm = RTA_DATA(tb[TCA_POLICE_TBF-1]); 431 - 432 - if (parm->index) { 433 - struct tcf_common *pc; 434 - 435 - pc = tcf_police_lookup(parm->index); 436 - if (pc) { 437 - police = to_police(pc); 438 - police->tcf_refcnt++; 439 - return police; 440 - } 441 - } 442 - police = kzalloc(sizeof(*police), GFP_KERNEL); 443 - if (unlikely(!police)) 444 - return NULL; 445 - 446 - police->tcf_refcnt = 1; 447 - spin_lock_init(&police->tcf_lock); 448 - if (parm->rate.rate) { 449 - police->tcfp_R_tab = 450 - qdisc_get_rtab(&parm->rate, tb[TCA_POLICE_RATE-1]); 451 - if (police->tcfp_R_tab == NULL) 452 - goto failure; 453 - if (parm->peakrate.rate) { 454 - police->tcfp_P_tab = 455 - qdisc_get_rtab(&parm->peakrate, 456 - tb[TCA_POLICE_PEAKRATE-1]); 457 - if (police->tcfp_P_tab == NULL) 458 - goto failure; 459 - } 460 - } 461 - if (tb[TCA_POLICE_RESULT-1]) { 462 - if (RTA_PAYLOAD(tb[TCA_POLICE_RESULT-1]) != 
sizeof(u32)) 463 - goto failure; 464 - police->tcfp_result = *(u32*)RTA_DATA(tb[TCA_POLICE_RESULT-1]); 465 - } 466 - if (tb[TCA_POLICE_AVRATE-1]) { 467 - if (RTA_PAYLOAD(tb[TCA_POLICE_AVRATE-1]) != sizeof(u32)) 468 - goto failure; 469 - police->tcfp_ewma_rate = 470 - *(u32*)RTA_DATA(tb[TCA_POLICE_AVRATE-1]); 471 - } 472 - police->tcfp_toks = police->tcfp_burst = parm->burst; 473 - police->tcfp_mtu = parm->mtu; 474 - if (police->tcfp_mtu == 0) { 475 - police->tcfp_mtu = ~0; 476 - if (police->tcfp_R_tab) 477 - police->tcfp_mtu = 255<<police->tcfp_R_tab->rate.cell_log; 478 - } 479 - if (police->tcfp_P_tab) 480 - police->tcfp_ptoks = L2T_P(police, police->tcfp_mtu); 481 - police->tcfp_t_c = psched_get_time(); 482 - police->tcf_index = parm->index ? parm->index : 483 - tcf_police_new_index(); 484 - police->tcf_action = parm->action; 485 - if (est) 486 - gen_new_estimator(&police->tcf_bstats, &police->tcf_rate_est, 487 - &police->tcf_lock, est); 488 - h = tcf_hash(police->tcf_index, POL_TAB_MASK); 489 - write_lock_bh(&police_lock); 490 - police->tcf_next = tcf_police_ht[h]; 491 - tcf_police_ht[h] = &police->common; 492 - write_unlock_bh(&police_lock); 493 - return police; 494 - 495 - failure: 496 - if (police->tcfp_R_tab) 497 - qdisc_put_rtab(police->tcfp_R_tab); 498 - kfree(police); 499 - return NULL; 500 - } 501 - 502 - int tcf_police(struct sk_buff *skb, struct tcf_police *police) 503 - { 504 - psched_time_t now; 505 - long toks; 506 - long ptoks = 0; 507 - 508 - spin_lock(&police->tcf_lock); 509 - 510 - police->tcf_bstats.bytes += skb->len; 511 - police->tcf_bstats.packets++; 512 - 513 - if (police->tcfp_ewma_rate && 514 - police->tcf_rate_est.bps >= police->tcfp_ewma_rate) { 515 - police->tcf_qstats.overlimits++; 516 - spin_unlock(&police->tcf_lock); 517 - return police->tcf_action; 518 - } 519 - if (skb->len <= police->tcfp_mtu) { 520 - if (police->tcfp_R_tab == NULL) { 521 - spin_unlock(&police->tcf_lock); 522 - return police->tcfp_result; 523 - } 524 - 525 - now = psched_get_time(); 526 - toks = psched_tdiff_bounded(now, police->tcfp_t_c, 527 - police->tcfp_burst); 528 - if (police->tcfp_P_tab) { 529 - ptoks = toks + police->tcfp_ptoks; 530 - if (ptoks > (long)L2T_P(police, police->tcfp_mtu)) 531 - ptoks = (long)L2T_P(police, police->tcfp_mtu); 532 - ptoks -= L2T_P(police, skb->len); 533 - } 534 - toks += police->tcfp_toks; 535 - if (toks > (long)police->tcfp_burst) 536 - toks = police->tcfp_burst; 537 - toks -= L2T(police, skb->len); 538 - if ((toks|ptoks) >= 0) { 539 - police->tcfp_t_c = now; 540 - police->tcfp_toks = toks; 541 - police->tcfp_ptoks = ptoks; 542 - spin_unlock(&police->tcf_lock); 543 - return police->tcfp_result; 544 - } 545 - } 546 - 547 - police->tcf_qstats.overlimits++; 548 - spin_unlock(&police->tcf_lock); 549 - return police->tcf_action; 550 - } 551 - EXPORT_SYMBOL(tcf_police); 552 - 553 - int tcf_police_dump(struct sk_buff *skb, struct tcf_police *police) 554 - { 555 - unsigned char *b = skb_tail_pointer(skb); 556 - struct tc_police opt; 557 - 558 - opt.index = police->tcf_index; 559 - opt.action = police->tcf_action; 560 - opt.mtu = police->tcfp_mtu; 561 - opt.burst = police->tcfp_burst; 562 - if (police->tcfp_R_tab) 563 - opt.rate = police->tcfp_R_tab->rate; 564 - else 565 - memset(&opt.rate, 0, sizeof(opt.rate)); 566 - if (police->tcfp_P_tab) 567 - opt.peakrate = police->tcfp_P_tab->rate; 568 - else 569 - memset(&opt.peakrate, 0, sizeof(opt.peakrate)); 570 - RTA_PUT(skb, TCA_POLICE_TBF, sizeof(opt), &opt); 571 - if (police->tcfp_result) 572 - RTA_PUT(skb, 
TCA_POLICE_RESULT, sizeof(int), 573 - &police->tcfp_result); 574 - if (police->tcfp_ewma_rate) 575 - RTA_PUT(skb, TCA_POLICE_AVRATE, 4, &police->tcfp_ewma_rate); 576 - return skb->len; 577 - 578 - rtattr_failure: 579 - nlmsg_trim(skb, b); 580 - return -1; 581 - } 582 - 583 - int tcf_police_dump_stats(struct sk_buff *skb, struct tcf_police *police) 584 - { 585 - struct gnet_dump d; 586 - 587 - if (gnet_stats_start_copy_compat(skb, TCA_STATS2, TCA_STATS, 588 - TCA_XSTATS, &police->tcf_lock, 589 - &d) < 0) 590 - goto errout; 591 - 592 - if (gnet_stats_copy_basic(&d, &police->tcf_bstats) < 0 || 593 - gnet_stats_copy_rate_est(&d, &police->tcf_rate_est) < 0 || 594 - gnet_stats_copy_queue(&d, &police->tcf_qstats) < 0) 595 - goto errout; 596 - 597 - if (gnet_stats_finish_copy(&d) < 0) 598 - goto errout; 599 - 600 - return 0; 601 - 602 - errout: 603 - return -1; 604 - } 605 - 606 - #endif /* CONFIG_NET_CLS_ACT */
-40
net/sched/cls_api.c
··· 458 458 tcf_action_destroy(exts->action, TCA_ACT_UNBIND); 459 459 exts->action = NULL; 460 460 } 461 - #elif defined CONFIG_NET_CLS_POLICE 462 - if (exts->police) { 463 - tcf_police_release(exts->police, TCA_ACT_UNBIND); 464 - exts->police = NULL; 465 - } 466 461 #endif 467 462 } 468 463 ··· 491 496 exts->action = act; 492 497 } 493 498 } 494 - #elif defined CONFIG_NET_CLS_POLICE 495 - if (map->police && tb[map->police-1]) { 496 - struct tcf_police *p; 497 - 498 - p = tcf_police_locate(tb[map->police-1], rate_tlv); 499 - if (p == NULL) 500 - return -EINVAL; 501 - 502 - exts->police = p; 503 - } else if (map->action && tb[map->action-1]) 504 - return -EOPNOTSUPP; 505 499 #else 506 500 if ((map->action && tb[map->action-1]) || 507 501 (map->police && tb[map->police-1])) ··· 512 528 tcf_tree_unlock(tp); 513 529 if (act) 514 530 tcf_action_destroy(act, TCA_ACT_UNBIND); 515 - } 516 - #elif defined CONFIG_NET_CLS_POLICE 517 - if (src->police) { 518 - struct tcf_police *p; 519 - tcf_tree_lock(tp); 520 - p = xchg(&dst->police, src->police); 521 - tcf_tree_unlock(tp); 522 - if (p) 523 - tcf_police_release(p, TCA_ACT_UNBIND); 524 531 } 525 532 #endif 526 533 } ··· 541 566 p_rta->rta_len = skb_tail_pointer(skb) - (u8 *)p_rta; 542 567 } 543 568 } 544 - #elif defined CONFIG_NET_CLS_POLICE 545 - if (map->police && exts->police) { 546 - struct rtattr *p_rta = (struct rtattr *)skb_tail_pointer(skb); 547 - 548 - RTA_PUT(skb, map->police, 0, NULL); 549 - 550 - if (tcf_police_dump(skb, exts->police) < 0) 551 - goto rtattr_failure; 552 - 553 - p_rta->rta_len = skb_tail_pointer(skb) - (u8 *)p_rta; 554 - } 555 569 #endif 556 570 return 0; 557 571 rtattr_failure: __attribute__ ((unused)) ··· 554 590 #ifdef CONFIG_NET_CLS_ACT 555 591 if (exts->action) 556 592 if (tcf_action_copy_stats(skb, exts->action, 1) < 0) 557 - goto rtattr_failure; 558 - #elif defined CONFIG_NET_CLS_POLICE 559 - if (exts->police) 560 - if (tcf_police_dump_stats(skb, exts->police) < 0) 561 593 goto rtattr_failure; 562 594 #endif 563 595 return 0;
-3
net/sched/cls_u32.c
··· 782 782 #ifdef CONFIG_CLS_U32_PERF 783 783 printk(" Performance counters on\n"); 784 784 #endif 785 - #ifdef CONFIG_NET_CLS_POLICE 786 - printk(" OLD policer on \n"); 787 - #endif 788 785 #ifdef CONFIG_NET_CLS_IND 789 786 printk(" input device check on \n"); 790 787 #endif
+38 -33
net/sched/sch_api.c
··· 278 278 279 279 wd->qdisc->flags &= ~TCQ_F_THROTTLED; 280 280 smp_wmb(); 281 - if (spin_trylock(&dev->queue_lock)) { 282 - qdisc_run(dev); 283 - spin_unlock(&dev->queue_lock); 284 - } else 285 - netif_schedule(dev); 281 + netif_schedule(dev); 286 282 287 283 return HRTIMER_NORESTART; 288 284 } ··· 1145 1149 to this qdisc, (optionally) tests for protocol and asks 1146 1150 specific classifiers. 1147 1151 */ 1152 + int tc_classify_compat(struct sk_buff *skb, struct tcf_proto *tp, 1153 + struct tcf_result *res) 1154 + { 1155 + __be16 protocol = skb->protocol; 1156 + int err = 0; 1157 + 1158 + for (; tp; tp = tp->next) { 1159 + if ((tp->protocol == protocol || 1160 + tp->protocol == htons(ETH_P_ALL)) && 1161 + (err = tp->classify(skb, tp, res)) >= 0) { 1162 + #ifdef CONFIG_NET_CLS_ACT 1163 + if (err != TC_ACT_RECLASSIFY && skb->tc_verd) 1164 + skb->tc_verd = SET_TC_VERD(skb->tc_verd, 0); 1165 + #endif 1166 + return err; 1167 + } 1168 + } 1169 + return -1; 1170 + } 1171 + EXPORT_SYMBOL(tc_classify_compat); 1172 + 1148 1173 int tc_classify(struct sk_buff *skb, struct tcf_proto *tp, 1149 - struct tcf_result *res) 1174 + struct tcf_result *res) 1150 1175 { 1151 1176 int err = 0; 1152 - __be16 protocol = skb->protocol; 1177 + __be16 protocol; 1153 1178 #ifdef CONFIG_NET_CLS_ACT 1154 1179 struct tcf_proto *otp = tp; 1155 1180 reclassify: 1156 1181 #endif 1157 1182 protocol = skb->protocol; 1158 1183 1159 - for ( ; tp; tp = tp->next) { 1160 - if ((tp->protocol == protocol || 1161 - tp->protocol == htons(ETH_P_ALL)) && 1162 - (err = tp->classify(skb, tp, res)) >= 0) { 1184 + err = tc_classify_compat(skb, tp, res); 1163 1185 #ifdef CONFIG_NET_CLS_ACT 1164 - if ( TC_ACT_RECLASSIFY == err) { 1165 - __u32 verd = (__u32) G_TC_VERD(skb->tc_verd); 1166 - tp = otp; 1186 + if (err == TC_ACT_RECLASSIFY) { 1187 + u32 verd = G_TC_VERD(skb->tc_verd); 1188 + tp = otp; 1167 1189 1168 - if (MAX_REC_LOOP < verd++) { 1169 - printk("rule prio %d protocol %02x reclassify is buggy packet dropped\n", 1170 - tp->prio&0xffff, ntohs(tp->protocol)); 1171 - return TC_ACT_SHOT; 1172 - } 1173 - skb->tc_verd = SET_TC_VERD(skb->tc_verd,verd); 1174 - goto reclassify; 1175 - } else { 1176 - if (skb->tc_verd) 1177 - skb->tc_verd = SET_TC_VERD(skb->tc_verd,0); 1178 - return err; 1179 - } 1180 - #else 1181 - 1182 - return err; 1183 - #endif 1190 + if (verd++ >= MAX_REC_LOOP) { 1191 + printk("rule prio %u protocol %02x reclassify loop, " 1192 + "packet dropped\n", 1193 + tp->prio&0xffff, ntohs(tp->protocol)); 1194 + return TC_ACT_SHOT; 1184 1195 } 1185 - 1196 + skb->tc_verd = SET_TC_VERD(skb->tc_verd, verd); 1197 + goto reclassify; 1186 1198 } 1187 - return -1; 1199 + #endif 1200 + return err; 1188 1201 } 1202 + EXPORT_SYMBOL(tc_classify); 1189 1203 1190 1204 void tcf_destroy(struct tcf_proto *tp) 1191 1205 { ··· 1262 1256 EXPORT_SYMBOL(qdisc_put_rtab); 1263 1257 EXPORT_SYMBOL(register_qdisc); 1264 1258 EXPORT_SYMBOL(unregister_qdisc); 1265 - EXPORT_SYMBOL(tc_classify);
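tc_classify_compat() gives qdiscs that implement their own reclassify
semantics (dsmark, atm) a classification entry point that does not consume
TC_ACT_RECLASSIFY itself, while plain tc_classify() keeps handling the
reclassify loop and its MAX_REC_LOOP guard. The calling pattern, as the
sch_atm conversion below uses it:

    struct tcf_result res;
    int result = tc_classify_compat(skb, flow->filter_list, &res);

    if (result >= 0) {
        /* res.class / res.classid select the target class; result
         * may also carry an action verdict such as TC_ACT_SHOT */
    }
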
+233 -240
net/sched/sch_atm.c
··· 2 2 3 3 /* Written 1998-2000 by Werner Almesberger, EPFL ICA */ 4 4 5 - 6 5 #include <linux/module.h> 7 6 #include <linux/init.h> 8 7 #include <linux/string.h> ··· 10 11 #include <linux/atmdev.h> 11 12 #include <linux/atmclip.h> 12 13 #include <linux/rtnetlink.h> 13 - #include <linux/file.h> /* for fput */ 14 + #include <linux/file.h> /* for fput */ 14 15 #include <net/netlink.h> 15 16 #include <net/pkt_sched.h> 16 17 17 - 18 - extern struct socket *sockfd_lookup(int fd, int *err); /* @@@ fix this */ 18 + extern struct socket *sockfd_lookup(int fd, int *err); /* @@@ fix this */ 19 19 20 20 #if 0 /* control */ 21 21 #define DPRINTK(format,args...) printk(KERN_DEBUG format,##args) ··· 27 29 #else 28 30 #define D2PRINTK(format,args...) 29 31 #endif 30 - 31 32 32 33 /* 33 34 * The ATM queuing discipline provides a framework for invoking classifiers ··· 49 52 * - should lock the flow while there is data in the queue (?) 50 53 */ 51 54 52 - 53 55 #define PRIV(sch) qdisc_priv(sch) 54 56 #define VCC2FLOW(vcc) ((struct atm_flow_data *) ((vcc)->user_back)) 55 57 56 - 57 58 struct atm_flow_data { 58 - struct Qdisc *q; /* FIFO, TBF, etc. */ 59 + struct Qdisc *q; /* FIFO, TBF, etc. */ 59 60 struct tcf_proto *filter_list; 60 - struct atm_vcc *vcc; /* VCC; NULL if VCC is closed */ 61 - void (*old_pop)(struct atm_vcc *vcc,struct sk_buff *skb); /* chaining */ 61 + struct atm_vcc *vcc; /* VCC; NULL if VCC is closed */ 62 + void (*old_pop)(struct atm_vcc *vcc, 63 + struct sk_buff * skb); /* chaining */ 62 64 struct atm_qdisc_data *parent; /* parent qdisc */ 63 65 struct socket *sock; /* for closing */ 64 66 u32 classid; /* x:y type ID */ ··· 78 82 struct tasklet_struct task; /* requeue tasklet */ 79 83 }; 80 84 81 - 82 85 /* ------------------------- Class/flow operations ------------------------- */ 83 86 84 - 85 - static int find_flow(struct atm_qdisc_data *qdisc,struct atm_flow_data *flow) 87 + static int find_flow(struct atm_qdisc_data *qdisc, struct atm_flow_data *flow) 86 88 { 87 89 struct atm_flow_data *walk; 88 90 89 - DPRINTK("find_flow(qdisc %p,flow %p)\n",qdisc,flow); 91 + DPRINTK("find_flow(qdisc %p,flow %p)\n", qdisc, flow); 90 92 for (walk = qdisc->flows; walk; walk = walk->next) 91 - if (walk == flow) return 1; 93 + if (walk == flow) 94 + return 1; 92 95 DPRINTK("find_flow: not found\n"); 93 96 return 0; 94 97 } 95 98 96 - 97 - static __inline__ struct atm_flow_data *lookup_flow(struct Qdisc *sch, 98 - u32 classid) 99 + static inline struct atm_flow_data *lookup_flow(struct Qdisc *sch, u32 classid) 99 100 { 100 101 struct atm_qdisc_data *p = PRIV(sch); 101 102 struct atm_flow_data *flow; 102 103 103 104 for (flow = p->flows; flow; flow = flow->next) 104 - if (flow->classid == classid) break; 105 + if (flow->classid == classid) 106 + break; 105 107 return flow; 106 108 } 107 109 108 - 109 - static int atm_tc_graft(struct Qdisc *sch,unsigned long arg, 110 - struct Qdisc *new,struct Qdisc **old) 110 + static int atm_tc_graft(struct Qdisc *sch, unsigned long arg, 111 + struct Qdisc *new, struct Qdisc **old) 111 112 { 112 113 struct atm_qdisc_data *p = PRIV(sch); 113 - struct atm_flow_data *flow = (struct atm_flow_data *) arg; 114 + struct atm_flow_data *flow = (struct atm_flow_data *)arg; 114 115 115 - DPRINTK("atm_tc_graft(sch %p,[qdisc %p],flow %p,new %p,old %p)\n",sch, 116 - p,flow,new,old); 117 - if (!find_flow(p,flow)) return -EINVAL; 118 - if (!new) new = &noop_qdisc; 119 - *old = xchg(&flow->q,new); 120 - if (*old) qdisc_reset(*old); 116 + DPRINTK("atm_tc_graft(sch %p,[qdisc %p],flow 
%p,new %p,old %p)\n", 117 + sch, p, flow, new, old); 118 + if (!find_flow(p, flow)) 119 + return -EINVAL; 120 + if (!new) 121 + new = &noop_qdisc; 122 + *old = xchg(&flow->q, new); 123 + if (*old) 124 + qdisc_reset(*old); 121 125 return 0; 122 126 } 123 127 124 - 125 - static struct Qdisc *atm_tc_leaf(struct Qdisc *sch,unsigned long cl) 128 + static struct Qdisc *atm_tc_leaf(struct Qdisc *sch, unsigned long cl) 126 129 { 127 - struct atm_flow_data *flow = (struct atm_flow_data *) cl; 130 + struct atm_flow_data *flow = (struct atm_flow_data *)cl; 128 131 129 - DPRINTK("atm_tc_leaf(sch %p,flow %p)\n",sch,flow); 132 + DPRINTK("atm_tc_leaf(sch %p,flow %p)\n", sch, flow); 130 133 return flow ? flow->q : NULL; 131 134 } 132 135 133 - 134 - static unsigned long atm_tc_get(struct Qdisc *sch,u32 classid) 136 + static unsigned long atm_tc_get(struct Qdisc *sch, u32 classid) 135 137 { 136 - struct atm_qdisc_data *p __attribute__((unused)) = PRIV(sch); 138 + struct atm_qdisc_data *p __maybe_unused = PRIV(sch); 137 139 struct atm_flow_data *flow; 138 140 139 - DPRINTK("atm_tc_get(sch %p,[qdisc %p],classid %x)\n",sch,p,classid); 140 - flow = lookup_flow(sch,classid); 141 - if (flow) flow->ref++; 142 - DPRINTK("atm_tc_get: flow %p\n",flow); 143 - return (unsigned long) flow; 141 + DPRINTK("atm_tc_get(sch %p,[qdisc %p],classid %x)\n", sch, p, classid); 142 + flow = lookup_flow(sch, classid); 143 + if (flow) 144 + flow->ref++; 145 + DPRINTK("atm_tc_get: flow %p\n", flow); 146 + return (unsigned long)flow; 144 147 } 145 148 146 - 147 149 static unsigned long atm_tc_bind_filter(struct Qdisc *sch, 148 - unsigned long parent, u32 classid) 150 + unsigned long parent, u32 classid) 149 151 { 150 - return atm_tc_get(sch,classid); 152 + return atm_tc_get(sch, classid); 151 153 } 152 154 153 155 /* ··· 153 159 * requested (atm_tc_destroy, etc.). The assumption here is that we never drop 154 160 * anything that still seems to be in use. 
155 161 */ 156 - 157 162 static void atm_tc_put(struct Qdisc *sch, unsigned long cl) 158 163 { 159 164 struct atm_qdisc_data *p = PRIV(sch); 160 - struct atm_flow_data *flow = (struct atm_flow_data *) cl; 165 + struct atm_flow_data *flow = (struct atm_flow_data *)cl; 161 166 struct atm_flow_data **prev; 162 167 163 - DPRINTK("atm_tc_put(sch %p,[qdisc %p],flow %p)\n",sch,p,flow); 164 - if (--flow->ref) return; 168 + DPRINTK("atm_tc_put(sch %p,[qdisc %p],flow %p)\n", sch, p, flow); 169 + if (--flow->ref) 170 + return; 165 171 DPRINTK("atm_tc_put: destroying\n"); 166 172 for (prev = &p->flows; *prev; prev = &(*prev)->next) 167 - if (*prev == flow) break; 173 + if (*prev == flow) 174 + break; 168 175 if (!*prev) { 169 - printk(KERN_CRIT "atm_tc_put: class %p not found\n",flow); 176 + printk(KERN_CRIT "atm_tc_put: class %p not found\n", flow); 170 177 return; 171 178 } 172 179 *prev = flow->next; 173 - DPRINTK("atm_tc_put: qdisc %p\n",flow->q); 180 + DPRINTK("atm_tc_put: qdisc %p\n", flow->q); 174 181 qdisc_destroy(flow->q); 175 182 tcf_destroy_chain(flow->filter_list); 176 183 if (flow->sock) { 177 184 DPRINTK("atm_tc_put: f_count %d\n", 178 - file_count(flow->sock->file)); 185 + file_count(flow->sock->file)); 179 186 flow->vcc->pop = flow->old_pop; 180 187 sockfd_put(flow->sock); 181 188 } 182 - if (flow->excess) atm_tc_put(sch,(unsigned long) flow->excess); 183 - if (flow != &p->link) kfree(flow); 189 + if (flow->excess) 190 + atm_tc_put(sch, (unsigned long)flow->excess); 191 + if (flow != &p->link) 192 + kfree(flow); 184 193 /* 185 194 * If flow == &p->link, the qdisc no longer works at this point and 186 195 * needs to be removed. (By the caller of atm_tc_put.) 187 196 */ 188 197 } 189 198 190 - 191 - static void sch_atm_pop(struct atm_vcc *vcc,struct sk_buff *skb) 199 + static void sch_atm_pop(struct atm_vcc *vcc, struct sk_buff *skb) 192 200 { 193 201 struct atm_qdisc_data *p = VCC2FLOW(vcc)->parent; 194 202 195 - D2PRINTK("sch_atm_pop(vcc %p,skb %p,[qdisc %p])\n",vcc,skb,p); 196 - VCC2FLOW(vcc)->old_pop(vcc,skb); 203 + D2PRINTK("sch_atm_pop(vcc %p,skb %p,[qdisc %p])\n", vcc, skb, p); 204 + VCC2FLOW(vcc)->old_pop(vcc, skb); 197 205 tasklet_schedule(&p->task); 198 206 } 199 207 200 208 static const u8 llc_oui_ip[] = { 201 - 0xaa, /* DSAP: non-ISO */ 202 - 0xaa, /* SSAP: non-ISO */ 203 - 0x03, /* Ctrl: Unnumbered Information Command PDU */ 204 - 0x00, /* OUI: EtherType */ 209 + 0xaa, /* DSAP: non-ISO */ 210 + 0xaa, /* SSAP: non-ISO */ 211 + 0x03, /* Ctrl: Unnumbered Information Command PDU */ 212 + 0x00, /* OUI: EtherType */ 205 213 0x00, 0x00, 206 - 0x08, 0x00 }; /* Ethertype IP (0800) */ 214 + 0x08, 0x00 215 + }; /* Ethertype IP (0800) */ 207 216 208 217 static int atm_tc_change(struct Qdisc *sch, u32 classid, u32 parent, 209 - struct rtattr **tca, unsigned long *arg) 218 + struct rtattr **tca, unsigned long *arg) 210 219 { 211 220 struct atm_qdisc_data *p = PRIV(sch); 212 - struct atm_flow_data *flow = (struct atm_flow_data *) *arg; 221 + struct atm_flow_data *flow = (struct atm_flow_data *)*arg; 213 222 struct atm_flow_data *excess = NULL; 214 - struct rtattr *opt = tca[TCA_OPTIONS-1]; 223 + struct rtattr *opt = tca[TCA_OPTIONS - 1]; 215 224 struct rtattr *tb[TCA_ATM_MAX]; 216 225 struct socket *sock; 217 - int fd,error,hdr_len; 226 + int fd, error, hdr_len; 218 227 void *hdr; 219 228 220 229 DPRINTK("atm_tc_change(sch %p,[qdisc %p],classid %x,parent %x," 221 - "flow %p,opt %p)\n",sch,p,classid,parent,flow,opt); 230 + "flow %p,opt %p)\n", sch, p, classid, parent, flow, opt); 222 231 /* 
223 232 * The concept of parents doesn't apply for this qdisc. 224 233 */ ··· 234 237 * class needs to be removed and a new one added. (This may be changed 235 238 * later.) 236 239 */ 237 - if (flow) return -EBUSY; 240 + if (flow) 241 + return -EBUSY; 238 242 if (opt == NULL || rtattr_parse_nested(tb, TCA_ATM_MAX, opt)) 239 243 return -EINVAL; 240 - if (!tb[TCA_ATM_FD-1] || RTA_PAYLOAD(tb[TCA_ATM_FD-1]) < sizeof(fd)) 244 + if (!tb[TCA_ATM_FD - 1] || RTA_PAYLOAD(tb[TCA_ATM_FD - 1]) < sizeof(fd)) 241 245 return -EINVAL; 242 - fd = *(int *) RTA_DATA(tb[TCA_ATM_FD-1]); 243 - DPRINTK("atm_tc_change: fd %d\n",fd); 244 - if (tb[TCA_ATM_HDR-1]) { 245 - hdr_len = RTA_PAYLOAD(tb[TCA_ATM_HDR-1]); 246 - hdr = RTA_DATA(tb[TCA_ATM_HDR-1]); 247 - } 248 - else { 246 + fd = *(int *)RTA_DATA(tb[TCA_ATM_FD - 1]); 247 + DPRINTK("atm_tc_change: fd %d\n", fd); 248 + if (tb[TCA_ATM_HDR - 1]) { 249 + hdr_len = RTA_PAYLOAD(tb[TCA_ATM_HDR - 1]); 250 + hdr = RTA_DATA(tb[TCA_ATM_HDR - 1]); 251 + } else { 249 252 hdr_len = RFC1483LLC_LEN; 250 - hdr = NULL; /* default LLC/SNAP for IP */ 253 + hdr = NULL; /* default LLC/SNAP for IP */ 251 254 } 252 - if (!tb[TCA_ATM_EXCESS-1]) excess = NULL; 255 + if (!tb[TCA_ATM_EXCESS - 1]) 256 + excess = NULL; 253 257 else { 254 - if (RTA_PAYLOAD(tb[TCA_ATM_EXCESS-1]) != sizeof(u32)) 258 + if (RTA_PAYLOAD(tb[TCA_ATM_EXCESS - 1]) != sizeof(u32)) 255 259 return -EINVAL; 256 - excess = (struct atm_flow_data *) atm_tc_get(sch, 257 - *(u32 *) RTA_DATA(tb[TCA_ATM_EXCESS-1])); 258 - if (!excess) return -ENOENT; 260 + excess = (struct atm_flow_data *) 261 + atm_tc_get(sch, *(u32 *)RTA_DATA(tb[TCA_ATM_EXCESS - 1])); 262 + if (!excess) 263 + return -ENOENT; 259 264 } 260 265 DPRINTK("atm_tc_change: type %d, payload %d, hdr_len %d\n", 261 - opt->rta_type,RTA_PAYLOAD(opt),hdr_len); 262 - if (!(sock = sockfd_lookup(fd,&error))) return error; /* f_count++ */ 263 - DPRINTK("atm_tc_change: f_count %d\n",file_count(sock->file)); 266 + opt->rta_type, RTA_PAYLOAD(opt), hdr_len); 267 + if (!(sock = sockfd_lookup(fd, &error))) 268 + return error; /* f_count++ */ 269 + DPRINTK("atm_tc_change: f_count %d\n", file_count(sock->file)); 264 270 if (sock->ops->family != PF_ATMSVC && sock->ops->family != PF_ATMPVC) { 265 271 error = -EPROTOTYPE; 266 272 goto err_out; ··· 276 276 error = -EINVAL; 277 277 goto err_out; 278 278 } 279 - if (find_flow(p,flow)) { 279 + if (find_flow(p, flow)) { 280 280 error = -EEXIST; 281 281 goto err_out; 282 282 } 283 - } 284 - else { 283 + } else { 285 284 int i; 286 285 unsigned long cl; 287 286 288 287 for (i = 1; i < 0x8000; i++) { 289 - classid = TC_H_MAKE(sch->handle,0x8000 | i); 290 - if (!(cl = atm_tc_get(sch,classid))) break; 291 - atm_tc_put(sch,cl); 288 + classid = TC_H_MAKE(sch->handle, 0x8000 | i); 289 + if (!(cl = atm_tc_get(sch, classid))) 290 + break; 291 + atm_tc_put(sch, cl); 292 292 } 293 293 } 294 - DPRINTK("atm_tc_change: new id %x\n",classid); 295 - flow = kmalloc(sizeof(struct atm_flow_data)+hdr_len,GFP_KERNEL); 296 - DPRINTK("atm_tc_change: flow %p\n",flow); 294 + DPRINTK("atm_tc_change: new id %x\n", classid); 295 + flow = kmalloc(sizeof(struct atm_flow_data) + hdr_len, GFP_KERNEL); 296 + DPRINTK("atm_tc_change: flow %p\n", flow); 297 297 if (!flow) { 298 298 error = -ENOBUFS; 299 299 goto err_out; 300 300 } 301 - memset(flow,0,sizeof(*flow)); 301 + memset(flow, 0, sizeof(*flow)); 302 302 flow->filter_list = NULL; 303 - if (!(flow->q = qdisc_create_dflt(sch->dev,&pfifo_qdisc_ops,classid))) 303 + if (!(flow->q = qdisc_create_dflt(sch->dev, &pfifo_qdisc_ops, 
classid))) 304 304 flow->q = &noop_qdisc; 305 - DPRINTK("atm_tc_change: qdisc %p\n",flow->q); 305 + DPRINTK("atm_tc_change: qdisc %p\n", flow->q); 306 306 flow->sock = sock; 307 - flow->vcc = ATM_SD(sock); /* speedup */ 307 + flow->vcc = ATM_SD(sock); /* speedup */ 308 308 flow->vcc->user_back = flow; 309 - DPRINTK("atm_tc_change: vcc %p\n",flow->vcc); 309 + DPRINTK("atm_tc_change: vcc %p\n", flow->vcc); 310 310 flow->old_pop = flow->vcc->pop; 311 311 flow->parent = p; 312 312 flow->vcc->pop = sch_atm_pop; ··· 317 317 p->link.next = flow; 318 318 flow->hdr_len = hdr_len; 319 319 if (hdr) 320 - memcpy(flow->hdr,hdr,hdr_len); 320 + memcpy(flow->hdr, hdr, hdr_len); 321 321 else 322 - memcpy(flow->hdr,llc_oui_ip,sizeof(llc_oui_ip)); 323 - *arg = (unsigned long) flow; 322 + memcpy(flow->hdr, llc_oui_ip, sizeof(llc_oui_ip)); 323 + *arg = (unsigned long)flow; 324 324 return 0; 325 325 err_out: 326 - if (excess) atm_tc_put(sch,(unsigned long) excess); 326 + if (excess) 327 + atm_tc_put(sch, (unsigned long)excess); 327 328 sockfd_put(sock); 328 329 return error; 329 330 } 330 331 331 - 332 - static int atm_tc_delete(struct Qdisc *sch,unsigned long arg) 332 + static int atm_tc_delete(struct Qdisc *sch, unsigned long arg) 333 333 { 334 334 struct atm_qdisc_data *p = PRIV(sch); 335 - struct atm_flow_data *flow = (struct atm_flow_data *) arg; 335 + struct atm_flow_data *flow = (struct atm_flow_data *)arg; 336 336 337 - DPRINTK("atm_tc_delete(sch %p,[qdisc %p],flow %p)\n",sch,p,flow); 338 - if (!find_flow(PRIV(sch),flow)) return -EINVAL; 339 - if (flow->filter_list || flow == &p->link) return -EBUSY; 337 + DPRINTK("atm_tc_delete(sch %p,[qdisc %p],flow %p)\n", sch, p, flow); 338 + if (!find_flow(PRIV(sch), flow)) 339 + return -EINVAL; 340 + if (flow->filter_list || flow == &p->link) 341 + return -EBUSY; 340 342 /* 341 343 * Reference count must be 2: one for "keepalive" (set at class 342 344 * creation), and one for the reference held when calling delete. 343 345 */ 344 346 if (flow->ref < 2) { 345 - printk(KERN_ERR "atm_tc_delete: flow->ref == %d\n",flow->ref); 347 + printk(KERN_ERR "atm_tc_delete: flow->ref == %d\n", flow->ref); 346 348 return -EINVAL; 347 349 } 348 - if (flow->ref > 2) return -EBUSY; /* catch references via excess, etc.*/ 349 - atm_tc_put(sch,arg); 350 + if (flow->ref > 2) 351 + return -EBUSY; /* catch references via excess, etc. 
*/ 352 + atm_tc_put(sch, arg); 350 353 return 0; 351 354 } 352 355 353 - 354 - static void atm_tc_walk(struct Qdisc *sch,struct qdisc_walker *walker) 356 + static void atm_tc_walk(struct Qdisc *sch, struct qdisc_walker *walker) 355 357 { 356 358 struct atm_qdisc_data *p = PRIV(sch); 357 359 struct atm_flow_data *flow; 358 360 359 - DPRINTK("atm_tc_walk(sch %p,[qdisc %p],walker %p)\n",sch,p,walker); 360 - if (walker->stop) return; 361 + DPRINTK("atm_tc_walk(sch %p,[qdisc %p],walker %p)\n", sch, p, walker); 362 + if (walker->stop) 363 + return; 361 364 for (flow = p->flows; flow; flow = flow->next) { 362 365 if (walker->count >= walker->skip) 363 - if (walker->fn(sch,(unsigned long) flow,walker) < 0) { 366 + if (walker->fn(sch, (unsigned long)flow, walker) < 0) { 364 367 walker->stop = 1; 365 368 break; 366 369 } ··· 371 368 } 372 369 } 373 370 374 - 375 - static struct tcf_proto **atm_tc_find_tcf(struct Qdisc *sch,unsigned long cl) 371 + static struct tcf_proto **atm_tc_find_tcf(struct Qdisc *sch, unsigned long cl) 376 372 { 377 373 struct atm_qdisc_data *p = PRIV(sch); 378 - struct atm_flow_data *flow = (struct atm_flow_data *) cl; 374 + struct atm_flow_data *flow = (struct atm_flow_data *)cl; 379 375 380 - DPRINTK("atm_tc_find_tcf(sch %p,[qdisc %p],flow %p)\n",sch,p,flow); 376 + DPRINTK("atm_tc_find_tcf(sch %p,[qdisc %p],flow %p)\n", sch, p, flow); 381 377 return flow ? &flow->filter_list : &p->link.filter_list; 382 378 } 383 379 384 - 385 380 /* --------------------------- Qdisc operations ---------------------------- */ 386 381 387 - 388 - static int atm_tc_enqueue(struct sk_buff *skb,struct Qdisc *sch) 382 + static int atm_tc_enqueue(struct sk_buff *skb, struct Qdisc *sch) 389 383 { 390 384 struct atm_qdisc_data *p = PRIV(sch); 391 - struct atm_flow_data *flow = NULL ; /* @@@ */ 385 + struct atm_flow_data *flow = NULL; /* @@@ */ 392 386 struct tcf_result res; 393 387 int result; 394 388 int ret = NET_XMIT_POLICED; 395 389 396 - D2PRINTK("atm_tc_enqueue(skb %p,sch %p,[qdisc %p])\n",skb,sch,p); 397 - result = TC_POLICE_OK; /* be nice to gcc */ 390 + D2PRINTK("atm_tc_enqueue(skb %p,sch %p,[qdisc %p])\n", skb, sch, p); 391 + result = TC_POLICE_OK; /* be nice to gcc */ 398 392 if (TC_H_MAJ(skb->priority) != sch->handle || 399 - !(flow = (struct atm_flow_data *) atm_tc_get(sch,skb->priority))) 393 + !(flow = (struct atm_flow_data *)atm_tc_get(sch, skb->priority))) 400 394 for (flow = p->flows; flow; flow = flow->next) 401 395 if (flow->filter_list) { 402 - result = tc_classify(skb,flow->filter_list, 403 - &res); 404 - if (result < 0) continue; 405 - flow = (struct atm_flow_data *) res.class; 406 - if (!flow) flow = lookup_flow(sch,res.classid); 396 + result = tc_classify_compat(skb, 397 + flow->filter_list, 398 + &res); 399 + if (result < 0) 400 + continue; 401 + flow = (struct atm_flow_data *)res.class; 402 + if (!flow) 403 + flow = lookup_flow(sch, res.classid); 407 404 break; 408 405 } 409 - if (!flow) flow = &p->link; 406 + if (!flow) 407 + flow = &p->link; 410 408 else { 411 409 if (flow->vcc) 412 410 ATM_SKB(skb)->atm_options = flow->vcc->atm_options; 413 - /*@@@ looks good ... but it's not supposed to work :-)*/ 414 - #ifdef CONFIG_NET_CLS_POLICE 411 + /*@@@ looks good ... 
but it's not supposed to work :-) */ 412 + #ifdef CONFIG_NET_CLS_ACT 415 413 switch (result) { 416 - case TC_POLICE_SHOT: 417 - kfree_skb(skb); 418 - break; 419 - case TC_POLICE_RECLASSIFY: 420 - if (flow->excess) flow = flow->excess; 421 - else { 422 - ATM_SKB(skb)->atm_options |= 423 - ATM_ATMOPT_CLP; 424 - break; 425 - } 426 - /* fall through */ 427 - case TC_POLICE_OK: 428 - /* fall through */ 429 - default: 430 - break; 414 + case TC_ACT_QUEUED: 415 + case TC_ACT_STOLEN: 416 + kfree_skb(skb); 417 + return NET_XMIT_SUCCESS; 418 + case TC_ACT_SHOT: 419 + kfree_skb(skb); 420 + goto drop; 421 + case TC_POLICE_RECLASSIFY: 422 + if (flow->excess) 423 + flow = flow->excess; 424 + else 425 + ATM_SKB(skb)->atm_options |= ATM_ATMOPT_CLP; 426 + break; 431 427 } 432 428 #endif 433 429 } 434 - if ( 435 - #ifdef CONFIG_NET_CLS_POLICE 436 - result == TC_POLICE_SHOT || 437 - #endif 438 - (ret = flow->q->enqueue(skb,flow->q)) != 0) { 430 + 431 + if ((ret = flow->q->enqueue(skb, flow->q)) != 0) { 432 + drop: __maybe_unused 439 433 sch->qstats.drops++; 440 - if (flow) flow->qstats.drops++; 434 + if (flow) 435 + flow->qstats.drops++; 441 436 return ret; 442 437 } 443 438 sch->bstats.bytes += skb->len; ··· 459 458 return NET_XMIT_BYPASS; 460 459 } 461 460 462 - 463 461 /* 464 462 * Dequeue packets and send them over ATM. Note that we quite deliberately 465 463 * avoid checking net_device's flow control here, simply because sch_atm ··· 466 466 * non-ATM interfaces. 467 467 */ 468 468 469 - 470 469 static void sch_atm_dequeue(unsigned long data) 471 470 { 472 - struct Qdisc *sch = (struct Qdisc *) data; 471 + struct Qdisc *sch = (struct Qdisc *)data; 473 472 struct atm_qdisc_data *p = PRIV(sch); 474 473 struct atm_flow_data *flow; 475 474 struct sk_buff *skb; 476 475 477 - D2PRINTK("sch_atm_dequeue(sch %p,[qdisc %p])\n",sch,p); 476 + D2PRINTK("sch_atm_dequeue(sch %p,[qdisc %p])\n", sch, p); 478 477 for (flow = p->link.next; flow; flow = flow->next) 479 478 /* 480 479 * If traffic is properly shaped, this won't generate nasty 481 480 * little bursts. Otherwise, it may ... 
(but that's okay) 482 481 */ 483 482 while ((skb = flow->q->dequeue(flow->q))) { 484 - if (!atm_may_send(flow->vcc,skb->truesize)) { 485 - (void) flow->q->ops->requeue(skb,flow->q); 483 + if (!atm_may_send(flow->vcc, skb->truesize)) { 484 + (void)flow->q->ops->requeue(skb, flow->q); 486 485 break; 487 486 } 488 - D2PRINTK("atm_tc_dequeue: sending on class %p\n",flow); 487 + D2PRINTK("atm_tc_dequeue: sending on class %p\n", flow); 489 488 /* remove any LL header somebody else has attached */ 490 489 skb_pull(skb, skb_network_offset(skb)); 491 490 if (skb_headroom(skb) < flow->hdr_len) { 492 491 struct sk_buff *new; 493 492 494 - new = skb_realloc_headroom(skb,flow->hdr_len); 493 + new = skb_realloc_headroom(skb, flow->hdr_len); 495 494 dev_kfree_skb(skb); 496 - if (!new) continue; 495 + if (!new) 496 + continue; 497 497 skb = new; 498 498 } 499 499 D2PRINTK("sch_atm_dequeue: ip %p, data %p\n", 500 500 skb_network_header(skb), skb->data); 501 501 ATM_SKB(skb)->vcc = flow->vcc; 502 - memcpy(skb_push(skb,flow->hdr_len),flow->hdr, 503 - flow->hdr_len); 502 + memcpy(skb_push(skb, flow->hdr_len), flow->hdr, 503 + flow->hdr_len); 504 504 atomic_add(skb->truesize, 505 505 &sk_atm(flow->vcc)->sk_wmem_alloc); 506 506 /* atm.atm_options are already set by atm_tc_enqueue */ 507 - (void) flow->vcc->send(flow->vcc,skb); 507 + flow->vcc->send(flow->vcc, skb); 508 508 } 509 509 } 510 - 511 510 512 511 static struct sk_buff *atm_tc_dequeue(struct Qdisc *sch) 513 512 { 514 513 struct atm_qdisc_data *p = PRIV(sch); 515 514 struct sk_buff *skb; 516 515 517 - D2PRINTK("atm_tc_dequeue(sch %p,[qdisc %p])\n",sch,p); 516 + D2PRINTK("atm_tc_dequeue(sch %p,[qdisc %p])\n", sch, p); 518 517 tasklet_schedule(&p->task); 519 518 skb = p->link.q->dequeue(p->link.q); 520 - if (skb) sch->q.qlen--; 519 + if (skb) 520 + sch->q.qlen--; 521 521 return skb; 522 522 } 523 523 524 - 525 - static int atm_tc_requeue(struct sk_buff *skb,struct Qdisc *sch) 524 + static int atm_tc_requeue(struct sk_buff *skb, struct Qdisc *sch) 526 525 { 527 526 struct atm_qdisc_data *p = PRIV(sch); 528 527 int ret; 529 528 530 - D2PRINTK("atm_tc_requeue(skb %p,sch %p,[qdisc %p])\n",skb,sch,p); 531 - ret = p->link.q->ops->requeue(skb,p->link.q); 529 + D2PRINTK("atm_tc_requeue(skb %p,sch %p,[qdisc %p])\n", skb, sch, p); 530 + ret = p->link.q->ops->requeue(skb, p->link.q); 532 531 if (!ret) { 533 - sch->q.qlen++; 534 - sch->qstats.requeues++; 535 - } else { 532 + sch->q.qlen++; 533 + sch->qstats.requeues++; 534 + } else { 536 535 sch->qstats.drops++; 537 536 p->link.qstats.drops++; 538 537 } 539 538 return ret; 540 539 } 541 - 542 540 543 541 static unsigned int atm_tc_drop(struct Qdisc *sch) 544 542 { ··· 544 546 struct atm_flow_data *flow; 545 547 unsigned int len; 546 548 547 - DPRINTK("atm_tc_drop(sch %p,[qdisc %p])\n",sch,p); 549 + DPRINTK("atm_tc_drop(sch %p,[qdisc %p])\n", sch, p); 548 550 for (flow = p->flows; flow; flow = flow->next) 549 551 if (flow->q->ops->drop && (len = flow->q->ops->drop(flow->q))) 550 552 return len; 551 553 return 0; 552 554 } 553 555 554 - 555 - static int atm_tc_init(struct Qdisc *sch,struct rtattr *opt) 556 + static int atm_tc_init(struct Qdisc *sch, struct rtattr *opt) 556 557 { 557 558 struct atm_qdisc_data *p = PRIV(sch); 558 559 559 - DPRINTK("atm_tc_init(sch %p,[qdisc %p],opt %p)\n",sch,p,opt); 560 + DPRINTK("atm_tc_init(sch %p,[qdisc %p],opt %p)\n", sch, p, opt); 560 561 p->flows = &p->link; 561 - if(!(p->link.q = qdisc_create_dflt(sch->dev,&pfifo_qdisc_ops, 562 - sch->handle))) 562 + if (!(p->link.q = 
qdisc_create_dflt(sch->dev, &pfifo_qdisc_ops, 563 + sch->handle))) 563 564 p->link.q = &noop_qdisc; 564 - DPRINTK("atm_tc_init: link (%p) qdisc %p\n",&p->link,p->link.q); 565 + DPRINTK("atm_tc_init: link (%p) qdisc %p\n", &p->link, p->link.q); 565 566 p->link.filter_list = NULL; 566 567 p->link.vcc = NULL; 567 568 p->link.sock = NULL; 568 569 p->link.classid = sch->handle; 569 570 p->link.ref = 1; 570 571 p->link.next = NULL; 571 - tasklet_init(&p->task,sch_atm_dequeue,(unsigned long) sch); 572 + tasklet_init(&p->task, sch_atm_dequeue, (unsigned long)sch); 572 573 return 0; 573 574 } 574 - 575 575 576 576 static void atm_tc_reset(struct Qdisc *sch) 577 577 { 578 578 struct atm_qdisc_data *p = PRIV(sch); 579 579 struct atm_flow_data *flow; 580 580 581 - DPRINTK("atm_tc_reset(sch %p,[qdisc %p])\n",sch,p); 582 - for (flow = p->flows; flow; flow = flow->next) qdisc_reset(flow->q); 581 + DPRINTK("atm_tc_reset(sch %p,[qdisc %p])\n", sch, p); 582 + for (flow = p->flows; flow; flow = flow->next) 583 + qdisc_reset(flow->q); 583 584 sch->q.qlen = 0; 584 585 } 585 - 586 586 587 587 static void atm_tc_destroy(struct Qdisc *sch) 588 588 { 589 589 struct atm_qdisc_data *p = PRIV(sch); 590 590 struct atm_flow_data *flow; 591 591 592 - DPRINTK("atm_tc_destroy(sch %p,[qdisc %p])\n",sch,p); 592 + DPRINTK("atm_tc_destroy(sch %p,[qdisc %p])\n", sch, p); 593 593 /* races ? */ 594 594 while ((flow = p->flows)) { 595 595 tcf_destroy_chain(flow->filter_list); 596 596 flow->filter_list = NULL; 597 597 if (flow->ref > 1) 598 - printk(KERN_ERR "atm_destroy: %p->ref = %d\n",flow, 599 - flow->ref); 600 - atm_tc_put(sch,(unsigned long) flow); 598 + printk(KERN_ERR "atm_destroy: %p->ref = %d\n", flow, 599 + flow->ref); 600 + atm_tc_put(sch, (unsigned long)flow); 601 601 if (p->flows == flow) { 602 602 printk(KERN_ERR "atm_destroy: putting flow %p didn't " 603 - "kill it\n",flow); 604 - p->flows = flow->next; /* brute force */ 603 + "kill it\n", flow); 604 + p->flows = flow->next; /* brute force */ 605 605 break; 606 606 } 607 607 } 608 608 tasklet_kill(&p->task); 609 609 } 610 610 611 - 612 611 static int atm_tc_dump_class(struct Qdisc *sch, unsigned long cl, 613 - struct sk_buff *skb, struct tcmsg *tcm) 612 + struct sk_buff *skb, struct tcmsg *tcm) 614 613 { 615 614 struct atm_qdisc_data *p = PRIV(sch); 616 - struct atm_flow_data *flow = (struct atm_flow_data *) cl; 615 + struct atm_flow_data *flow = (struct atm_flow_data *)cl; 617 616 unsigned char *b = skb_tail_pointer(skb); 618 617 struct rtattr *rta; 619 618 620 619 DPRINTK("atm_tc_dump_class(sch %p,[qdisc %p],flow %p,skb %p,tcm %p)\n", 621 - sch,p,flow,skb,tcm); 622 - if (!find_flow(p,flow)) return -EINVAL; 620 + sch, p, flow, skb, tcm); 621 + if (!find_flow(p, flow)) 622 + return -EINVAL; 623 623 tcm->tcm_handle = flow->classid; 624 624 tcm->tcm_info = flow->q->handle; 625 - rta = (struct rtattr *) b; 626 - RTA_PUT(skb,TCA_OPTIONS,0,NULL); 627 - RTA_PUT(skb,TCA_ATM_HDR,flow->hdr_len,flow->hdr); 625 + rta = (struct rtattr *)b; 626 + RTA_PUT(skb, TCA_OPTIONS, 0, NULL); 627 + RTA_PUT(skb, TCA_ATM_HDR, flow->hdr_len, flow->hdr); 628 628 if (flow->vcc) { 629 629 struct sockaddr_atmpvc pvc; 630 630 int state; ··· 631 635 pvc.sap_addr.itf = flow->vcc->dev ? 
flow->vcc->dev->number : -1; 632 636 pvc.sap_addr.vpi = flow->vcc->vpi; 633 637 pvc.sap_addr.vci = flow->vcc->vci; 634 - RTA_PUT(skb,TCA_ATM_ADDR,sizeof(pvc),&pvc); 638 + RTA_PUT(skb, TCA_ATM_ADDR, sizeof(pvc), &pvc); 635 639 state = ATM_VF2VS(flow->vcc->flags); 636 - RTA_PUT(skb,TCA_ATM_STATE,sizeof(state),&state); 640 + RTA_PUT(skb, TCA_ATM_STATE, sizeof(state), &state); 637 641 } 638 642 if (flow->excess) 639 - RTA_PUT(skb,TCA_ATM_EXCESS,sizeof(u32),&flow->classid); 643 + RTA_PUT(skb, TCA_ATM_EXCESS, sizeof(u32), &flow->classid); 640 644 else { 641 645 static u32 zero; 642 646 643 - RTA_PUT(skb,TCA_ATM_EXCESS,sizeof(zero),&zero); 647 + RTA_PUT(skb, TCA_ATM_EXCESS, sizeof(zero), &zero); 644 648 } 645 649 rta->rta_len = skb_tail_pointer(skb) - b; 646 650 return skb->len; ··· 651 655 } 652 656 static int 653 657 atm_tc_dump_class_stats(struct Qdisc *sch, unsigned long arg, 654 - struct gnet_dump *d) 658 + struct gnet_dump *d) 655 659 { 656 - struct atm_flow_data *flow = (struct atm_flow_data *) arg; 660 + struct atm_flow_data *flow = (struct atm_flow_data *)arg; 657 661 658 662 flow->qstats.qlen = flow->q->q.qlen; 659 663 ··· 670 674 } 671 675 672 676 static struct Qdisc_class_ops atm_class_ops = { 673 - .graft = atm_tc_graft, 674 - .leaf = atm_tc_leaf, 675 - .get = atm_tc_get, 676 - .put = atm_tc_put, 677 - .change = atm_tc_change, 678 - .delete = atm_tc_delete, 679 - .walk = atm_tc_walk, 680 - .tcf_chain = atm_tc_find_tcf, 681 - .bind_tcf = atm_tc_bind_filter, 682 - .unbind_tcf = atm_tc_put, 683 - .dump = atm_tc_dump_class, 684 - .dump_stats = atm_tc_dump_class_stats, 677 + .graft = atm_tc_graft, 678 + .leaf = atm_tc_leaf, 679 + .get = atm_tc_get, 680 + .put = atm_tc_put, 681 + .change = atm_tc_change, 682 + .delete = atm_tc_delete, 683 + .walk = atm_tc_walk, 684 + .tcf_chain = atm_tc_find_tcf, 685 + .bind_tcf = atm_tc_bind_filter, 686 + .unbind_tcf = atm_tc_put, 687 + .dump = atm_tc_dump_class, 688 + .dump_stats = atm_tc_dump_class_stats, 685 689 }; 686 690 687 691 static struct Qdisc_ops atm_qdisc_ops = { 688 - .next = NULL, 689 - .cl_ops = &atm_class_ops, 690 - .id = "atm", 691 - .priv_size = sizeof(struct atm_qdisc_data), 692 - .enqueue = atm_tc_enqueue, 693 - .dequeue = atm_tc_dequeue, 694 - .requeue = atm_tc_requeue, 695 - .drop = atm_tc_drop, 696 - .init = atm_tc_init, 697 - .reset = atm_tc_reset, 698 - .destroy = atm_tc_destroy, 699 - .change = NULL, 700 - .dump = atm_tc_dump, 701 - .owner = THIS_MODULE, 692 + .cl_ops = &atm_class_ops, 693 + .id = "atm", 694 + .priv_size = sizeof(struct atm_qdisc_data), 695 + .enqueue = atm_tc_enqueue, 696 + .dequeue = atm_tc_dequeue, 697 + .requeue = atm_tc_requeue, 698 + .drop = atm_tc_drop, 699 + .init = atm_tc_init, 700 + .reset = atm_tc_reset, 701 + .destroy = atm_tc_destroy, 702 + .dump = atm_tc_dump, 703 + .owner = THIS_MODULE, 702 704 }; 703 - 704 705 705 706 static int __init atm_init(void) 706 707 {
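
The sch_atm conversion above is representative of how every classful qdisc in
this series moves off the TC_POLICE_* verdicts and onto the action API. A
minimal sketch of the shared pattern follows; filter_list and
enqueue_to_default() are illustrative placeholders, not part of the patch,
while tc_classify_compat() and the TC_ACT_* codes are the real interfaces
being adopted (the exact return code for TC_ACT_SHOT varies per qdisc --
sch_dsmark uses NET_XMIT_BYPASS, sch_atm keeps NET_XMIT_POLICED via its
drop label):

	struct tcf_result res;
	int result;

	result = tc_classify_compat(skb, filter_list, &res);
	if (result < 0)
		return enqueue_to_default(skb, sch); /* hypothetical fallback */
#ifdef CONFIG_NET_CLS_ACT
	switch (result) {
	case TC_ACT_QUEUED:	/* an action already took ownership, */
	case TC_ACT_STOLEN:	/* so drop our reference, report success */
		kfree_skb(skb);
		return NET_XMIT_SUCCESS;
	case TC_ACT_SHOT:	/* an action asked for a drop */
		kfree_skb(skb);
		sch->qstats.drops++;
		return NET_XMIT_BYPASS;
	}
#endif
	/* TC_ACT_OK or no action verdict: res.class / res.classid name
	 * the target class; enqueue to it as before */

The QUEUED/STOLEN-returns-SUCCESS convention matters: the action has kept or
requeued the packet itself, so charging a drop against the qdisc here would
double-count it.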
+20 -28
net/sched/sch_cbq.c
··· 82 82 unsigned char priority2; /* priority to be used after overlimit */ 83 83 unsigned char ewma_log; /* time constant for idle time calculation */ 84 84 unsigned char ovl_strategy; 85 - #ifdef CONFIG_NET_CLS_POLICE 85 + #ifdef CONFIG_NET_CLS_ACT 86 86 unsigned char police; 87 87 #endif 88 88 ··· 154 154 struct cbq_class *active[TC_CBQ_MAXPRIO+1]; /* List of all classes 155 155 with backlog */ 156 156 157 - #ifdef CONFIG_NET_CLS_POLICE 157 + #ifdef CONFIG_NET_CLS_ACT 158 158 struct cbq_class *rx_class; 159 159 #endif 160 160 struct cbq_class *tx_class; ··· 196 196 return NULL; 197 197 } 198 198 199 - #ifdef CONFIG_NET_CLS_POLICE 199 + #ifdef CONFIG_NET_CLS_ACT 200 200 201 201 static struct cbq_class * 202 202 cbq_reclassify(struct sk_buff *skb, struct cbq_class *this) ··· 247 247 /* 248 248 * Step 2+n. Apply classifier. 249 249 */ 250 - if (!head->filter_list || (result = tc_classify(skb, head->filter_list, &res)) < 0) 250 + if (!head->filter_list || 251 + (result = tc_classify_compat(skb, head->filter_list, &res)) < 0) 251 252 goto fallback; 252 253 253 254 if ((cl = (void*)res.class) == NULL) { ··· 268 267 *qerr = NET_XMIT_SUCCESS; 269 268 case TC_ACT_SHOT: 270 269 return NULL; 271 - } 272 - #elif defined(CONFIG_NET_CLS_POLICE) 273 - switch (result) { 274 - case TC_POLICE_RECLASSIFY: 270 + case TC_ACT_RECLASSIFY: 275 271 return cbq_reclassify(skb, cl); 276 - case TC_POLICE_SHOT: 277 - return NULL; 278 - default: 279 - break; 280 272 } 281 273 #endif 282 274 if (cl->level == 0) ··· 383 389 int ret; 384 390 struct cbq_class *cl = cbq_classify(skb, sch, &ret); 385 391 386 - #ifdef CONFIG_NET_CLS_POLICE 392 + #ifdef CONFIG_NET_CLS_ACT 387 393 q->rx_class = cl; 388 394 #endif 389 395 if (cl == NULL) { ··· 393 399 return ret; 394 400 } 395 401 396 - #ifdef CONFIG_NET_CLS_POLICE 402 + #ifdef CONFIG_NET_CLS_ACT 397 403 cl->q->__parent = sch; 398 404 #endif 399 405 if ((ret = cl->q->enqueue(skb, cl->q)) == NET_XMIT_SUCCESS) { ··· 428 434 429 435 cbq_mark_toplevel(q, cl); 430 436 431 - #ifdef CONFIG_NET_CLS_POLICE 437 + #ifdef CONFIG_NET_CLS_ACT 432 438 q->rx_class = cl; 433 439 cl->q->__parent = sch; 434 440 #endif ··· 663 669 return HRTIMER_NORESTART; 664 670 } 665 671 666 - 667 - #ifdef CONFIG_NET_CLS_POLICE 668 - 672 + #ifdef CONFIG_NET_CLS_ACT 669 673 static int cbq_reshape_fail(struct sk_buff *skb, struct Qdisc *child) 670 674 { 671 675 int len = skb->len; ··· 1356 1364 return 0; 1357 1365 } 1358 1366 1359 - #ifdef CONFIG_NET_CLS_POLICE 1367 + #ifdef CONFIG_NET_CLS_ACT 1360 1368 static int cbq_set_police(struct cbq_class *cl, struct tc_cbq_police *p) 1361 1369 { 1362 1370 cl->police = p->police; ··· 1524 1532 return -1; 1525 1533 } 1526 1534 1527 - #ifdef CONFIG_NET_CLS_POLICE 1535 + #ifdef CONFIG_NET_CLS_ACT 1528 1536 static __inline__ int cbq_dump_police(struct sk_buff *skb, struct cbq_class *cl) 1529 1537 { 1530 1538 unsigned char *b = skb_tail_pointer(skb); ··· 1550 1558 cbq_dump_rate(skb, cl) < 0 || 1551 1559 cbq_dump_wrr(skb, cl) < 0 || 1552 1560 cbq_dump_ovl(skb, cl) < 0 || 1553 - #ifdef CONFIG_NET_CLS_POLICE 1561 + #ifdef CONFIG_NET_CLS_ACT 1554 1562 cbq_dump_police(skb, cl) < 0 || 1555 1563 #endif 1556 1564 cbq_dump_fopt(skb, cl) < 0) ··· 1645 1653 cl->classid)) == NULL) 1646 1654 return -ENOBUFS; 1647 1655 } else { 1648 - #ifdef CONFIG_NET_CLS_POLICE 1656 + #ifdef CONFIG_NET_CLS_ACT 1649 1657 if (cl->police == TC_POLICE_RECLASSIFY) 1650 1658 new->reshape_fail = cbq_reshape_fail; 1651 1659 #endif ··· 1710 1718 struct cbq_class *cl; 1711 1719 unsigned h; 1712 1720 1713 - #ifdef 
CONFIG_NET_CLS_POLICE 1721 + #ifdef CONFIG_NET_CLS_ACT 1714 1722 q->rx_class = NULL; 1715 1723 #endif 1716 1724 /* ··· 1739 1747 struct cbq_class *cl = (struct cbq_class*)arg; 1740 1748 1741 1749 if (--cl->refcnt == 0) { 1742 - #ifdef CONFIG_NET_CLS_POLICE 1750 + #ifdef CONFIG_NET_CLS_ACT 1743 1751 struct cbq_sched_data *q = qdisc_priv(sch); 1744 1752 1745 1753 spin_lock_bh(&sch->dev->queue_lock); ··· 1787 1795 RTA_PAYLOAD(tb[TCA_CBQ_WRROPT-1]) < sizeof(struct tc_cbq_wrropt)) 1788 1796 return -EINVAL; 1789 1797 1790 - #ifdef CONFIG_NET_CLS_POLICE 1798 + #ifdef CONFIG_NET_CLS_ACT 1791 1799 if (tb[TCA_CBQ_POLICE-1] && 1792 1800 RTA_PAYLOAD(tb[TCA_CBQ_POLICE-1]) < sizeof(struct tc_cbq_police)) 1793 1801 return -EINVAL; ··· 1830 1838 if (tb[TCA_CBQ_OVL_STRATEGY-1]) 1831 1839 cbq_set_overlimit(cl, RTA_DATA(tb[TCA_CBQ_OVL_STRATEGY-1])); 1832 1840 1833 - #ifdef CONFIG_NET_CLS_POLICE 1841 + #ifdef CONFIG_NET_CLS_ACT 1834 1842 if (tb[TCA_CBQ_POLICE-1]) 1835 1843 cbq_set_police(cl, RTA_DATA(tb[TCA_CBQ_POLICE-1])); 1836 1844 #endif ··· 1923 1931 cl->overlimit = cbq_ovl_classic; 1924 1932 if (tb[TCA_CBQ_OVL_STRATEGY-1]) 1925 1933 cbq_set_overlimit(cl, RTA_DATA(tb[TCA_CBQ_OVL_STRATEGY-1])); 1926 - #ifdef CONFIG_NET_CLS_POLICE 1934 + #ifdef CONFIG_NET_CLS_ACT 1927 1935 if (tb[TCA_CBQ_POLICE-1]) 1928 1936 cbq_set_police(cl, RTA_DATA(tb[TCA_CBQ_POLICE-1])); 1929 1937 #endif ··· 1967 1975 q->tx_class = NULL; 1968 1976 q->tx_borrowed = NULL; 1969 1977 } 1970 - #ifdef CONFIG_NET_CLS_POLICE 1978 + #ifdef CONFIG_NET_CLS_ACT 1971 1979 if (q->rx_class == cl) 1972 1980 q->rx_class = NULL; 1973 1981 #endif
+16 -18
net/sched/sch_dsmark.c
··· 237 237 D2PRINTK("result %d class 0x%04x\n", result, res.classid); 238 238 239 239 switch (result) { 240 - #ifdef CONFIG_NET_CLS_POLICE 241 - case TC_POLICE_SHOT: 242 - kfree_skb(skb); 243 - sch->qstats.drops++; 244 - return NET_XMIT_POLICED; 245 - #if 0 246 - case TC_POLICE_RECLASSIFY: 247 - /* FIXME: what to do here ??? */ 240 + #ifdef CONFIG_NET_CLS_ACT 241 + case TC_ACT_QUEUED: 242 + case TC_ACT_STOLEN: 243 + kfree_skb(skb); 244 + return NET_XMIT_SUCCESS; 245 + case TC_ACT_SHOT: 246 + kfree_skb(skb); 247 + sch->qstats.drops++; 248 + return NET_XMIT_BYPASS; 248 249 #endif 249 - #endif 250 - case TC_POLICE_OK: 251 - skb->tc_index = TC_H_MIN(res.classid); 252 - break; 253 - case TC_POLICE_UNSPEC: 254 - /* fall through */ 255 - default: 256 - if (p->default_index != NO_DEFAULT_INDEX) 257 - skb->tc_index = p->default_index; 258 - break; 250 + case TC_ACT_OK: 251 + skb->tc_index = TC_H_MIN(res.classid); 252 + break; 253 + default: 254 + if (p->default_index != NO_DEFAULT_INDEX) 255 + skb->tc_index = p->default_index; 256 + break; 259 257 } 260 258 } 261 259
-3
net/sched/sch_hfsc.c
··· 1174 1174 case TC_ACT_SHOT: 1175 1175 return NULL; 1176 1176 } 1177 - #elif defined(CONFIG_NET_CLS_POLICE) 1178 - if (result == TC_POLICE_SHOT) 1179 - return NULL; 1180 1177 #endif 1181 1178 if ((cl = (struct hfsc_class *)res.class) == NULL) { 1182 1179 if ((cl = hfsc_find_class(res.classid, sch)) == NULL)
-3
net/sched/sch_htb.c
··· 249 249 case TC_ACT_SHOT: 250 250 return NULL; 251 251 } 252 - #elif defined(CONFIG_NET_CLS_POLICE) 253 - if (result == TC_POLICE_SHOT) 254 - return HTB_DIRECT; 255 252 #endif 256 253 if ((cl = (void *)res.class) == NULL) { 257 254 if (res.classid == sch->handle)
-19
net/sched/sch_ingress.c
··· 164 164 result = TC_ACT_OK; 165 165 break; 166 166 } 167 - /* backward compat */ 168 - #else 169 - #ifdef CONFIG_NET_CLS_POLICE 170 - switch (result) { 171 - case TC_POLICE_SHOT: 172 - result = NF_DROP; 173 - sch->qstats.drops++; 174 - break; 175 - case TC_POLICE_RECLASSIFY: /* DSCP remarking here ? */ 176 - case TC_POLICE_OK: 177 - case TC_POLICE_UNSPEC: 178 - default: 179 - sch->bstats.packets++; 180 - sch->bstats.bytes += skb->len; 181 - result = NF_ACCEPT; 182 - break; 183 - } 184 - 185 167 #else 186 168 D2PRINTK("Overriding result to ACCEPT\n"); 187 169 result = NF_ACCEPT; 188 170 sch->bstats.packets++; 189 171 sch->bstats.bytes += skb->len; 190 - #endif 191 172 #endif 192 173 193 174 return result;
+1 -1
net/sched/sch_tbf.c
··· 125 125 126 126 if (skb->len > q->max_size) { 127 127 sch->qstats.drops++; 128 - #ifdef CONFIG_NET_CLS_POLICE 128 + #ifdef CONFIG_NET_CLS_ACT 129 129 if (sch->reshape_fail == NULL || sch->reshape_fail(skb, sch)) 130 130 #endif 131 131 kfree_skb(skb);
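
The one-line sch_tbf hunk above is easy to misread: reshape_fail is not a
policing leftover but the hook that lets a parent qdisc rescue an oversized
packet, which is why it moves from CONFIG_NET_CLS_POLICE to
CONFIG_NET_CLS_ACT rather than being deleted. CBQ wires it up in cbq_graft()
(see the sch_cbq hunks above), so the call chain is roughly as follows --
condensed across the two files for illustration, with the final return value
assumed from the surrounding TBF function rather than shown in these hunks:

	/* CBQ side: only classes configured for reclassify get the hook */
	if (cl->police == TC_POLICE_RECLASSIFY)
		new->reshape_fail = cbq_reshape_fail;

	/* child (TBF) side: an oversized packet gets one chance to be
	 * reclassified by the parent before being freed */
	if (skb->len > q->max_size) {
		sch->qstats.drops++;
#ifdef CONFIG_NET_CLS_ACT
		if (sch->reshape_fail == NULL || sch->reshape_fail(skb, sch))
#endif
			kfree_skb(skb);
		return NET_XMIT_DROP;	/* assumed, per the unshown context */
	}

reshape_fail() returns 0 when the parent re-enqueued the packet elsewhere, in
which case the child must not free it; any nonzero return means the drop
proceeds as usual.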
+1 -1
net/wireless/Makefile
··· 1 1 obj-$(CONFIG_WIRELESS_EXT) += wext.o 2 2 obj-$(CONFIG_CFG80211) += cfg80211.o 3 3 4 - cfg80211-y += core.o sysfs.o 4 + cfg80211-y += core.o sysfs.o radiotap.o
+257
net/wireless/radiotap.c
··· 1 + /* 2 + * Radiotap parser 3 + * 4 + * Copyright 2007 Andy Green <andy@warmcat.com> 5 + */ 6 + 7 + #include <net/cfg80211.h> 8 + #include <net/ieee80211_radiotap.h> 9 + #include <asm/unaligned.h> 10 + 11 + /* function prototypes and related defs are in include/net/cfg80211.h */ 12 + 13 + /** 14 + * ieee80211_radiotap_iterator_init - radiotap parser iterator initialization 15 + * @iterator: radiotap_iterator to initialize 16 + * @radiotap_header: radiotap header to parse 17 + * @max_length: total length we can parse into (eg, whole packet length) 18 + * 19 + * Returns: 0 or a negative error code if there is a problem. 20 + * 21 + * This function initializes an opaque iterator struct which can then 22 + * be passed to ieee80211_radiotap_iterator_next() to visit every radiotap 23 + * argument which is present in the header. It knows about extended 24 + * present headers and handles them. 25 + * 26 + * How to use: 27 + * call __ieee80211_radiotap_iterator_init() to init a semi-opaque iterator 28 + * struct ieee80211_radiotap_iterator (no need to init the struct beforehand) 29 + * checking for a good 0 return code. Then loop calling 30 + * __ieee80211_radiotap_iterator_next()... it returns either 0, 31 + * -ENOENT if there are no more args to parse, or -EINVAL if there is a problem. 32 + * The iterator's @this_arg member points to the start of the argument 33 + * associated with the current argument index that is present, which can be 34 + * found in the iterator's @this_arg_index member. This arg index corresponds 35 + * to the IEEE80211_RADIOTAP_... defines. 36 + * 37 + * Radiotap header length: 38 + * You can find the CPU-endian total radiotap header length in 39 + * iterator->max_length after executing ieee80211_radiotap_iterator_init() 40 + * successfully. 41 + * 42 + * Alignment Gotcha: 43 + * You must take care when dereferencing iterator.this_arg 44 + * for multibyte types... the pointer is not aligned. Use 45 + * get_unaligned((type *)iterator.this_arg) to dereference 46 + * iterator.this_arg for type "type" safely on all arches. 
47 + * 48 + * Example code: 49 + * See Documentation/networking/radiotap-headers.txt 50 + */ 51 + 52 + int ieee80211_radiotap_iterator_init( 53 + struct ieee80211_radiotap_iterator *iterator, 54 + struct ieee80211_radiotap_header *radiotap_header, 55 + int max_length) 56 + { 57 + /* Linux only supports version 0 radiotap format */ 58 + if (radiotap_header->it_version) 59 + return -EINVAL; 60 + 61 + /* sanity check for allowed length and radiotap length field */ 62 + if (max_length < le16_to_cpu(get_unaligned(&radiotap_header->it_len))) 63 + return -EINVAL; 64 + 65 + iterator->rtheader = radiotap_header; 66 + iterator->max_length = le16_to_cpu(get_unaligned( 67 + &radiotap_header->it_len)); 68 + iterator->arg_index = 0; 69 + iterator->bitmap_shifter = le32_to_cpu(get_unaligned( 70 + &radiotap_header->it_present)); 71 + iterator->arg = (u8 *)radiotap_header + sizeof(*radiotap_header); 72 + iterator->this_arg = NULL; 73 + 74 + /* find payload start allowing for extended bitmap(s) */ 75 + 76 + if (unlikely(iterator->bitmap_shifter & (1<<IEEE80211_RADIOTAP_EXT))) { 77 + while (le32_to_cpu(get_unaligned((__le32 *)iterator->arg)) & 78 + (1<<IEEE80211_RADIOTAP_EXT)) { 79 + iterator->arg += sizeof(u32); 80 + 81 + /* 82 + * check for insanity where the present bitmaps 83 + * keep claiming to extend up to or even beyond the 84 + * stated radiotap header length 85 + */ 86 + 87 + if (((ulong)iterator->arg - 88 + (ulong)iterator->rtheader) > iterator->max_length) 89 + return -EINVAL; 90 + } 91 + 92 + iterator->arg += sizeof(u32); 93 + 94 + /* 95 + * no need to check again for blowing past stated radiotap 96 + * header length, because ieee80211_radiotap_iterator_next 97 + * checks it before it is dereferenced 98 + */ 99 + } 100 + 101 + /* we are all initialized happily */ 102 + 103 + return 0; 104 + } 105 + EXPORT_SYMBOL(ieee80211_radiotap_iterator_init); 106 + 107 + 108 + /** 109 + * ieee80211_radiotap_iterator_next - return next radiotap parser iterator arg 110 + * @iterator: radiotap_iterator to move to next arg (if any) 111 + * 112 + * Returns: 0 if there is an argument to handle, 113 + * -ENOENT if there are no more args or -EINVAL 114 + * if there is something else wrong. 115 + * 116 + * This function provides the next radiotap arg index (IEEE80211_RADIOTAP_*) 117 + * in @this_arg_index and sets @this_arg to point to the 118 + * payload for the field. It takes care of alignment handling and extended 119 + * present fields. @this_arg can be changed by the caller (eg, 120 + * incremented to move inside a compound argument like 121 + * IEEE80211_RADIOTAP_CHANNEL). The args pointed to are in 122 + * little-endian format whatever the endianess of your CPU. 123 + * 124 + * Alignment Gotcha: 125 + * You must take care when dereferencing iterator.this_arg 126 + * for multibyte types... the pointer is not aligned. Use 127 + * get_unaligned((type *)iterator.this_arg) to dereference 128 + * iterator.this_arg for type "type" safely on all arches. 
129 + */ 130 + 131 + int ieee80211_radiotap_iterator_next( 132 + struct ieee80211_radiotap_iterator *iterator) 133 + { 134 + 135 + /* 136 + * small length lookup table for all radiotap types we heard of 137 + * starting from b0 in the bitmap, so we can walk the payload 138 + * area of the radiotap header 139 + * 140 + * There is a requirement to pad args, so that args 141 + * of a given length must begin at a boundary of that length 142 + * -- but note that compound args are allowed (eg, 2 x u16 143 + * for IEEE80211_RADIOTAP_CHANNEL) so total arg length is not 144 + * a reliable indicator of alignment requirement. 145 + * 146 + * upper nybble: content alignment for arg 147 + * lower nybble: content length for arg 148 + */ 149 + 150 + static const u8 rt_sizes[] = { 151 + [IEEE80211_RADIOTAP_TSFT] = 0x88, 152 + [IEEE80211_RADIOTAP_FLAGS] = 0x11, 153 + [IEEE80211_RADIOTAP_RATE] = 0x11, 154 + [IEEE80211_RADIOTAP_CHANNEL] = 0x24, 155 + [IEEE80211_RADIOTAP_FHSS] = 0x22, 156 + [IEEE80211_RADIOTAP_DBM_ANTSIGNAL] = 0x11, 157 + [IEEE80211_RADIOTAP_DBM_ANTNOISE] = 0x11, 158 + [IEEE80211_RADIOTAP_LOCK_QUALITY] = 0x22, 159 + [IEEE80211_RADIOTAP_TX_ATTENUATION] = 0x22, 160 + [IEEE80211_RADIOTAP_DB_TX_ATTENUATION] = 0x22, 161 + [IEEE80211_RADIOTAP_DBM_TX_POWER] = 0x11, 162 + [IEEE80211_RADIOTAP_ANTENNA] = 0x11, 163 + [IEEE80211_RADIOTAP_DB_ANTSIGNAL] = 0x11, 164 + [IEEE80211_RADIOTAP_DB_ANTNOISE] = 0x11 165 + /* 166 + * add more here as they are defined in 167 + * include/net/ieee80211_radiotap.h 168 + */ 169 + }; 170 + 171 + /* 172 + * for every radiotap entry we can at 173 + * least skip (by knowing the length)... 174 + */ 175 + 176 + while (iterator->arg_index < sizeof(rt_sizes)) { 177 + int hit = 0; 178 + int pad; 179 + 180 + if (!(iterator->bitmap_shifter & 1)) 181 + goto next_entry; /* arg not present */ 182 + 183 + /* 184 + * arg is present, account for alignment padding 185 + * 8-bit args can be at any alignment 186 + * 16-bit args must start on 16-bit boundary 187 + * 32-bit args must start on 32-bit boundary 188 + * 64-bit args must start on 64-bit boundary 189 + * 190 + * note that total arg size can differ from alignment of 191 + * elements inside arg, so we use upper nybble of length 192 + * table to base alignment on 193 + * 194 + * also note: these alignments are ** relative to the 195 + * start of the radiotap header **. There is no guarantee 196 + * that the radiotap header itself is aligned on any 197 + * kind of boundary. 198 + * 199 + * the above is why get_unaligned() is used to dereference 200 + * multibyte elements from the radiotap area 201 + */ 202 + 203 + pad = (((ulong)iterator->arg) - 204 + ((ulong)iterator->rtheader)) & 205 + ((rt_sizes[iterator->arg_index] >> 4) - 1); 206 + 207 + if (pad) 208 + iterator->arg += 209 + (rt_sizes[iterator->arg_index] >> 4) - pad; 210 + 211 + /* 212 + * this is what we will return to user, but we need to 213 + * move on first so next call has something fresh to test 214 + */ 215 + iterator->this_arg_index = iterator->arg_index; 216 + iterator->this_arg = iterator->arg; 217 + hit = 1; 218 + 219 + /* internally move on the size of this arg */ 220 + iterator->arg += rt_sizes[iterator->arg_index] & 0x0f; 221 + 222 + /* 223 + * check for insanity where we are given a bitmap that 224 + * claims to have more arg content than the length of the 225 + * radiotap section. We will normally end up equalling this 226 + * max_length on the last arg, never exceeding it. 
227 + */ 228 + 229 + if (((ulong)iterator->arg - (ulong)iterator->rtheader) > 230 + iterator->max_length) 231 + return -EINVAL; 232 + 233 + next_entry: 234 + iterator->arg_index++; 235 + if (unlikely((iterator->arg_index & 31) == 0)) { 236 + /* completed current u32 bitmap */ 237 + if (iterator->bitmap_shifter & 1) { 238 + /* b31 was set, there is more */ 239 + /* move to next u32 bitmap */ 240 + iterator->bitmap_shifter = le32_to_cpu( 241 + get_unaligned(iterator->next_bitmap)); 242 + iterator->next_bitmap++; 243 + } else 244 + /* no more bitmaps: end */ 245 + iterator->arg_index = sizeof(rt_sizes); 246 + } else /* just try the next bit */ 247 + iterator->bitmap_shifter >>= 1; 248 + 249 + /* if we found a valid arg earlier, return it now */ 250 + if (hit) 251 + return 0; 252 + } 253 + 254 + /* we don't know how to handle any more args, we're done */ 255 + return -ENOENT; 256 + } 257 + EXPORT_SYMBOL(ieee80211_radiotap_iterator_next);
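
For reference, this is roughly what a driver-side consumer of the new
iterator looks like. The skb buffer handling and the local variables are
hypothetical; the two iterator calls, the return-code convention
(0 / -ENOENT / -EINVAL) and the get_unaligned() requirement are exactly the
ones documented in the comments above, and the full worked example lives in
Documentation/networking/radiotap-headers.txt:

	struct ieee80211_radiotap_iterator iterator;
	u8 rate = 0;
	int err;

	err = ieee80211_radiotap_iterator_init(&iterator,
			(struct ieee80211_radiotap_header *)skb->data,
			skb->len);

	while (!err) {
		err = ieee80211_radiotap_iterator_next(&iterator);
		if (err)
			break;
		switch (iterator.this_arg_index) {
		case IEEE80211_RADIOTAP_RATE:
			/* u8 in 500kbps units: single byte, so it may be
			 * dereferenced directly */
			rate = *iterator.this_arg;
			break;
		case IEEE80211_RADIOTAP_TSFT:
			/* multibyte args are NOT aligned: use
			 * get_unaligned((__le64 *)iterator.this_arg) */
			break;
		default:
			break;	/* unhandled args are skipped for us */
		}
	}

	if (err != -ENOENT)
		return -EINVAL;	/* a real parse error, not end-of-args */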