Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

[NET]: Supporting UDP-Lite (RFC 3828) in Linux

This is a revision of the previously submitted patch, which alters
the way files are organized and compiled in the following manner:

* UDP and UDP-Lite now use separate object files
* source file dependencies resolved via header files
net/ipv{4,6}/udp_impl.h
* order of header inclusion in udp.c/udplite.c adapted
accordingly

[NET/IPv4]: Support for the UDP-Lite protocol (RFC 3828)

This patch adds support for UDP-Lite to the IPv4 stack, provided as an
extension to the existing UDPv4 code:
* generic routines are all located in net/ipv4/udp.c
* UDP-Lite specific routines are in net/ipv4/udplite.c
* MIB/statistics support in /proc/net/snmp and /proc/net/udplite
* shared API with extensions for partial checksum coverage

[NET/IPv6]: Extension for UDP-Lite over IPv6

This patch extends the existing UDPv6 code base with support for UDP-Lite,
in the same manner as for UDPv4. In particular,
* UDPv6 generic and shared code is in net/ipv6/udp.c
* UDP-Litev6 specific extensions are in net/ipv6/udplite.c
* MIB/statistics support in /proc/net/snmp6 and /proc/net/udplite6
* support for IPV6_ADDRFORM
* aligned the coding style of protocol initialisation with af_inet6.c
* made the error handling in udpv6_queue_rcv_skb consistent:
it now returns `-1' in all error cases
* consolidation of shared code

[NET]: UDP-Lite Documentation and basic XFRM/Netfilter support

The UDP-Lite patch further provides
* API documentation for UDP-Lite
* basic xfrm support
* basic netfilter support for IPv4 and IPv6 (LOG target)

Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>

Authored by Gerrit Renker, committed by David S. Miller
ba4e58ec 6051e2f4

+1443 -404
+281
Documentation/networking/udplite.txt
  ===========================================================================
                       The UDP-Lite protocol (RFC 3828)
  ===========================================================================


  UDP-Lite is a Standards-Track IETF transport protocol whose characteristic
  is a variable-length checksum. This has advantages for transport of multimedia
  (video, VoIP) over wireless networks, as partly damaged packets can still be
  fed into the codec instead of being discarded due to a failed checksum test.

  This file briefly describes the existing kernel support and the socket API.
  For in-depth information, you can consult:

  o The UDP-Lite Homepage: http://www.erg.abdn.ac.uk/users/gerrit/udp-lite/
    From here you can also download some example application source code.

  o The UDP-Lite HOWTO on
    http://www.erg.abdn.ac.uk/users/gerrit/udp-lite/files/UDP-Lite-HOWTO.txt

  o The Wireshark UDP-Lite WiKi (with capture files):
    http://wiki.wireshark.org/Lightweight_User_Datagram_Protocol

  o The Protocol Spec, RFC 3828, http://www.ietf.org/rfc/rfc3828.txt


  I) APPLICATIONS

  Several applications have been ported successfully to UDP-Lite. Ethereal
  (now called Wireshark) has UDP-Litev4/v6 support by default. The tarball on

    http://www.erg.abdn.ac.uk/users/gerrit/udp-lite/files/udplite_linux.tar.gz

  has source code for several v4/v6 client-server and network testing examples.

  Porting applications to UDP-Lite is straightforward: only the socket level
  and the IPPROTO need to be changed; senders additionally set the checksum
  coverage length (default = header length = 8). Details are in the next
  section.


  II) PROGRAMMING API

  UDP-Lite provides a connectionless, unreliable datagram service and hence
  uses the same socket type as UDP. In fact, porting from UDP to UDP-Lite is
  very easy: simply add `IPPROTO_UDPLITE' as the last argument of the socket(2)
  call so that the statement looks like:

      s = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDPLITE);

  or, respectively,

      s = socket(PF_INET6, SOCK_DGRAM, IPPROTO_UDPLITE);

  With just the above change you are able to run UDP-Lite services or connect
  to UDP-Lite servers. The kernel will assume that you are not interested in
  using partial checksum coverage and so emulate UDP mode (full coverage).

  To make use of the partial checksum coverage facilities requires setting a
  single socket option, which takes an integer specifying the coverage length:

    * Sender checksum coverage: UDPLITE_SEND_CSCOV

      For example,

        int val = 20;
        setsockopt(s, SOL_UDPLITE, UDPLITE_SEND_CSCOV, &val, sizeof(int));

      sets the checksum coverage length to 20 bytes (12 bytes of data plus the
      8-byte header). Of each packet, only the first 20 bytes (plus the
      pseudo-header) will be checksummed. This is useful for RTP applications,
      which have a 12-byte base header.

    * Receiver checksum coverage: UDPLITE_RECV_CSCOV

      This option is the receiver-side analogue. It is truly optional, i.e. not
      required to enable traffic with partial checksum coverage. Its function
      is that of a traffic filter: when enabled, it instructs the kernel to
      drop all packets which have a coverage _less_ than this value. For
      example, if RTP and UDP headers are to be protected, a receiver can
      enforce that only packets with a minimum coverage of 20 are admitted:

        int min = 20;
        setsockopt(s, SOL_UDPLITE, UDPLITE_RECV_CSCOV, &min, sizeof(int));

  The calls to getsockopt(2) are analogous. Being an extension and not a stand-
  alone protocol, all socket options known from UDP can be used in exactly the
  same manner as before, e.g. UDP_CORK or UDP_ENCAP.

  A detailed discussion of UDP-Lite checksum coverage options is in section IV.


  III) HEADER FILES

  The socket API requires support through header files in /usr/include:

    * /usr/include/netinet/in.h
      to define IPPROTO_UDPLITE

    * /usr/include/netinet/udplite.h
      for UDP-Lite header fields and protocol constants

  For testing purposes, the following can serve as a `mini' header file:

      #define IPPROTO_UDPLITE    136
      #define SOL_UDPLITE        136
      #define UDPLITE_SEND_CSCOV  10
      #define UDPLITE_RECV_CSCOV  11

  Ready-made header files for various distros are in the UDP-Lite tarball.


  IV) KERNEL BEHAVIOUR WITH REGARD TO THE VARIOUS SOCKET OPTIONS

  To enable debugging messages, the log level needs to be set to 8, as most
  messages use the KERN_DEBUG level (7).

  1) Sender Socket Options

  If the sender specifies a coverage length of 0, the module assumes full
  coverage and transmits a packet with a coverage field of 0 and a checksum
  computed over the entire packet. If the sender specifies a coverage which is
  non-zero but less than 8, the kernel uses the default value of 8. Finally,
  if the specified coverage length exceeds the packet length, the packet
  length is used instead as the coverage length.

  2) Receiver Socket Options

  The receiver specifies the minimum coverage length it is willing to accept.
  A value of 0 here indicates that the receiver always wants the whole of the
  packet covered. In this case, all partially covered packets are dropped and
  an error is logged.

  It is not possible to specify illegal values (negative, or positive but less
  than 8); in these cases the default of 8 is assumed.

  All packets arriving with a coverage value less than the specified threshold
  are discarded; these events are also logged.

  3) Disabling the Checksum Computation

  On both sender and receiver, checksumming will always be performed and can
  not be disabled using SO_NO_CHECK. Thus

      setsockopt(sockfd, SOL_SOCKET, SO_NO_CHECK, ... );

  will always be ignored, while the value of

      getsockopt(sockfd, SOL_SOCKET, SO_NO_CHECK, &value, ...);

  is meaningless (as in TCP). Packets with a zero checksum field are illegal
  (cf. RFC 3828, sec. 3.1) and will be silently discarded.

  4) Fragmentation

  The checksum computation respects both buffer size and MTU. The size of
  UDP-Lite packets is determined by the size of the send buffer. The minimum
  size of the send buffer is 2048 (defined as SOCK_MIN_SNDBUF in
  include/net/sock.h); the default value is configurable as
  net.core.wmem_default or via the SO_SNDBUF socket(7) option. The upper
  bound for the send buffer is determined by net.core.wmem_max.

  Given a payload larger than the send buffer size, UDP-Lite will split the
  payload into several individual packets, filling up the send buffer in each
  case.

  The precise packet size also depends on the interface MTU. The interface
  MTU, in turn, may trigger IP fragmentation. In this case, the generated
  UDP-Lite packet is split into several IP packets, of which only the first
  one contains the L4 header.

  The send buffer size has implications for the checksum coverage length.
  Consider the following example:

      Payload: 1536 bytes          Send Buffer:     1024 bytes
      MTU:     1500 bytes          Coverage Length:  856 bytes

  UDP-Lite will ship the 1536 bytes in two separate packets:

      Packet 1: 1024 payload + 8 byte header + 20 byte IP header = 1052 bytes
      Packet 2:  512 payload + 8 byte header + 20 byte IP header =  540 bytes

  The checksum coverage comprises the UDP-Lite header and 848 bytes of the
  payload in the first packet; the second packet is fully covered. Note that
  for the second packet, the coverage length exceeds the packet length. The
  kernel always re-adjusts the coverage length to the packet length in such
  cases.

  As an example of what happens when one UDP-Lite packet is split into
  several tiny fragments, consider the following:

      Payload: 1024 bytes          Send buffer size: 1024 bytes
      MTU:      300 bytes          Coverage length:   575 bytes

      +-+-----------+--------------+--------------+--------------+
      |8|    272    |     280      |     280      |     192      |
      +-+-----------+--------------+--------------+--------------+
                280            560            840           1032
                                  ^
      *****checksum coverage*************

  The UDP-Lite module generates one 1032 byte packet (1024 + 8 byte header).
  According to the interface MTU, this is split into 4 IP packets (each with
  up to 280 bytes of IP payload plus a 20 byte IP header). The kernel module
  sums the contents of the entire first two fragments, plus 15 bytes of the
  third fragment (575 = 280 + 280 + 15), before releasing the fragments to
  the IP module.

  To see the analogous case for IPv6 fragmentation, consider a link MTU of
  1280 bytes and a write buffer of 3356 bytes. If the checksum coverage is
  less than 1232 bytes (MTU minus IPv6/fragment header lengths), only the
  first fragment needs to be considered. When using larger checksum coverage
  lengths, each eligible fragment needs to be checksummed. Suppose we have a
  checksum coverage of 3062. The buffer of 3356 bytes will be split into the
  following fragments:

      Fragment 1: 1280 bytes carrying 1232 bytes of UDP-Lite data
      Fragment 2: 1280 bytes carrying 1232 bytes of UDP-Lite data
      Fragment 3:  948 bytes carrying  900 bytes of UDP-Lite data

  The first two fragments have to be checksummed in full; of the last
  fragment, only 598 (= 3062 - 2*1232) bytes are checksummed.

  While it is important that such cases are dealt with correctly, they are
  (annoyingly) rare: UDP-Lite is designed for optimising multimedia
  performance over wireless (or generally noisy) links, and thus smaller
  coverage lengths are the common case.


  V) UDP-LITE RUNTIME STATISTICS AND THEIR MEANING

  Exceptional and error conditions are logged to syslog at the KERN_DEBUG
  level. Live statistics about UDP-Lite are available in /proc/net/snmp and
  can (with newer versions of netstat) be viewed using

      netstat -svu

  This displays UDP-Lite statistics variables, whose meaning is as follows.

    InDatagrams:   Total number of received datagrams.

    NoPorts:       Number of packets received to an unknown port.
                   These cases are counted separately (not as InErrors).

    InErrors:      Number of erroneous UDP-Lite packets. Errors include:
                     * internal socket queue receive errors
                     * packet too short (less than 8 bytes, or stated
                       coverage length exceeds received length)
                     * xfrm4_policy_check() returned with error
                     * application has specified a larger minimum coverage
                       length than that of the incoming packet
                     * checksum coverage violated
                     * bad checksum

    OutDatagrams:  Total number of sent datagrams.

  These statistics derive from the UDP MIB (RFC 2013).


  VI) IPTABLES

  There is packet match support for UDP-Lite as well as support for the LOG
  target. If you copy and paste the following line into /etc/protocols,

      udplite 136     UDP-Lite        # UDP-Lite [RFC 3828]

  then

      iptables -A INPUT -p udplite -j LOG

  will produce logging output to syslog. Dropping and rejecting packets also
  works.


  VII) MAINTAINER ADDRESS

  The UDP-Lite patch was developed at
      University of Aberdeen
      Electronics Research Group
      Department of Engineering
      Fraser Noble Building
      Aberdeen AB24 3UE; UK

  The current maintainer is Gerrit Renker, <gerrit@erg.abdn.ac.uk>. Initial
  code was developed by William Stanislaus, <william@erg.abdn.ac.uk>.
+1
include/linux/in.h
··· 45 45 46 46 IPPROTO_COMP = 108, /* Compression Header protocol */ 47 47 IPPROTO_SCTP = 132, /* Stream Control Transport Protocol */ 48 + IPPROTO_UDPLITE = 136, /* UDP-Lite (RFC 3828) */ 48 49 49 50 IPPROTO_RAW = 255, /* Raw IP packets */ 50 51 IPPROTO_MAX
+1
include/linux/socket.h
··· 264 264 #define SOL_IPV6 41 265 265 #define SOL_ICMPV6 58 266 266 #define SOL_SCTP 132 267 + #define SOL_UDPLITE 136 /* UDP-Lite (RFC 3828) */ 267 268 #define SOL_RAW 255 268 269 #define SOL_IPX 256 269 270 #define SOL_AX25 257
+12
include/linux/udp.h
··· 38 38 #include <linux/types.h> 39 39 40 40 #include <net/inet_sock.h> 41 + #define UDP_HTABLE_SIZE 128 41 42 42 43 struct udp_sock { 43 44 /* inet_sock has to be the first member */ ··· 51 50 * when the socket is uncorked. 52 51 */ 53 52 __u16 len; /* total length of pending frames */ 53 + /* 54 + * Fields specific to UDP-Lite. 55 + */ 56 + __u16 pcslen; 57 + __u16 pcrlen; 58 + /* indicator bits used by pcflag: */ 59 + #define UDPLITE_BIT 0x1 /* set by udplite proto init function */ 60 + #define UDPLITE_SEND_CC 0x2 /* set via udplite setsockopt */ 61 + #define UDPLITE_RECV_CC 0x4 /* set via udplite setsocktopt */ 62 + __u8 pcflag; /* marks socket as UDP-Lite if > 0 */ 54 63 }; 55 64 56 65 static inline struct udp_sock *udp_sk(const struct sock *sk) 57 66 { 58 67 return (struct udp_sock *)sk; 59 68 } 69 + #define IS_UDPLITE(__sk) (udp_sk(__sk)->pcflag) 60 70 61 71 #endif 62 72
+9 -3
include/net/ipv6.h
··· 158 158 SNMP_INC_STATS_OFFSET_BH(icmpv6_statistics, field, _offset); \ 159 159 }) 160 160 DECLARE_SNMP_STAT(struct udp_mib, udp_stats_in6); 161 - #define UDP6_INC_STATS(field) SNMP_INC_STATS(udp_stats_in6, field) 162 - #define UDP6_INC_STATS_BH(field) SNMP_INC_STATS_BH(udp_stats_in6, field) 163 - #define UDP6_INC_STATS_USER(field) SNMP_INC_STATS_USER(udp_stats_in6, field) 161 + DECLARE_SNMP_STAT(struct udp_mib, udplite_stats_in6); 162 + #define UDP6_INC_STATS_BH(field, is_udplite) do { \ 163 + if (is_udplite) SNMP_INC_STATS_BH(udplite_stats_in6, field); \ 164 + else SNMP_INC_STATS_BH(udp_stats_in6, field); } while(0) 165 + #define UDP6_INC_STATS_USER(field, is_udplite) do { \ 166 + if (is_udplite) SNMP_INC_STATS_USER(udplite_stats_in6, field); \ 167 + else SNMP_INC_STATS_USER(udp_stats_in6, field); } while(0) 164 168 165 169 int snmp6_register_dev(struct inet6_dev *idev); 166 170 int snmp6_unregister_dev(struct inet6_dev *idev); ··· 608 604 extern void tcp6_proc_exit(void); 609 605 extern int udp6_proc_init(void); 610 606 extern void udp6_proc_exit(void); 607 + extern int udplite6_proc_init(void); 608 + extern void udplite6_proc_exit(void); 611 609 extern int ipv6_misc_proc_init(void); 612 610 extern void ipv6_misc_proc_exit(void); 613 611
+2
include/net/transp_v6.h
··· 11 11 12 12 extern struct proto rawv6_prot; 13 13 extern struct proto udpv6_prot; 14 + extern struct proto udplitev6_prot; 14 15 extern struct proto tcpv6_prot; 15 16 16 17 struct flowi; ··· 25 24 /* transport protocols */ 26 25 extern void rawv6_init(void); 27 26 extern void udpv6_init(void); 27 + extern void udplitev6_init(void); 28 28 extern void tcpv6_init(void); 29 29 30 30 extern int udpv6_connect(struct sock *sk,
+87 -4
include/net/udp.h
··· 26 26 #include <net/inet_sock.h> 27 27 #include <net/sock.h> 28 28 #include <net/snmp.h> 29 + #include <net/ip.h> 30 + #include <linux/ipv6.h> 29 31 #include <linux/seq_file.h> 30 32 31 - #define UDP_HTABLE_SIZE 128 33 + /** 34 + * struct udp_skb_cb - UDP(-Lite) private variables 35 + * 36 + * @header: private variables used by IPv4/IPv6 37 + * @cscov: checksum coverage length (UDP-Lite only) 38 + * @partial_cov: if set indicates partial csum coverage 39 + */ 40 + struct udp_skb_cb { 41 + union { 42 + struct inet_skb_parm h4; 43 + #if defined(CONFIG_IPV6) || defined (CONFIG_IPV6_MODULE) 44 + struct inet6_skb_parm h6; 45 + #endif 46 + } header; 47 + __u16 cscov; 48 + __u8 partial_cov; 49 + }; 50 + #define UDP_SKB_CB(__skb) ((struct udp_skb_cb *)((__skb)->cb)) 32 51 33 52 extern struct hlist_head udp_hash[UDP_HTABLE_SIZE]; 34 53 extern rwlock_t udp_hash_lock; ··· 66 47 67 48 struct sk_buff; 68 49 50 + /* 51 + * Generic checksumming routines for UDP(-Lite) v4 and v6 52 + */ 53 + static inline u16 __udp_lib_checksum_complete(struct sk_buff *skb) 54 + { 55 + if (! 
UDP_SKB_CB(skb)->partial_cov) 56 + return __skb_checksum_complete(skb); 57 + return csum_fold(skb_checksum(skb, 0, UDP_SKB_CB(skb)->cscov, 58 + skb->csum)); 59 + } 60 + 61 + static __inline__ int udp_lib_checksum_complete(struct sk_buff *skb) 62 + { 63 + return skb->ip_summed != CHECKSUM_UNNECESSARY && 64 + __udp_lib_checksum_complete(skb); 65 + } 66 + 67 + /** 68 + * udp_csum_outgoing - compute UDPv4/v6 checksum over fragments 69 + * @sk: socket we are writing to 70 + * @skb: sk_buff containing the filled-in UDP header 71 + * (checksum field must be zeroed out) 72 + */ 73 + static inline u32 udp_csum_outgoing(struct sock *sk, struct sk_buff *skb) 74 + { 75 + u32 csum = csum_partial(skb->h.raw, sizeof(struct udphdr), 0); 76 + 77 + skb_queue_walk(&sk->sk_write_queue, skb) { 78 + csum = csum_add(csum, skb->csum); 79 + } 80 + return csum; 81 + } 82 + 83 + /* hash routines shared between UDPv4/6 and UDP-Litev4/6 */ 84 + static inline void udp_lib_hash(struct sock *sk) 85 + { 86 + BUG(); 87 + } 88 + 89 + static inline void udp_lib_unhash(struct sock *sk) 90 + { 91 + write_lock_bh(&udp_hash_lock); 92 + if (sk_del_node_init(sk)) { 93 + inet_sk(sk)->num = 0; 94 + sock_prot_dec_use(sk->sk_prot); 95 + } 96 + write_unlock_bh(&udp_hash_lock); 97 + } 98 + 99 + static inline void udp_lib_close(struct sock *sk, long timeout) 100 + { 101 + sk_common_release(sk); 102 + } 103 + 104 + 105 + /* net/ipv4/udp.c */ 69 106 extern int udp_get_port(struct sock *sk, unsigned short snum, 70 107 int (*saddr_cmp)(const struct sock *, const struct sock *)); 71 108 extern void udp_err(struct sk_buff *, u32); ··· 136 61 poll_table *wait); 137 62 138 63 DECLARE_SNMP_STAT(struct udp_mib, udp_statistics); 139 - #define UDP_INC_STATS(field) SNMP_INC_STATS(udp_statistics, field) 140 - #define UDP_INC_STATS_BH(field) SNMP_INC_STATS_BH(udp_statistics, field) 141 - #define UDP_INC_STATS_USER(field) SNMP_INC_STATS_USER(udp_statistics, field) 64 + /* 65 + * SNMP statistics for UDP and UDP-Lite 66 + */ 67 + 
#define UDP_INC_STATS_USER(field, is_udplite) do { \ 68 + if (is_udplite) SNMP_INC_STATS_USER(udplite_statistics, field); \ 69 + else SNMP_INC_STATS_USER(udp_statistics, field); } while(0) 70 + #define UDP_INC_STATS_BH(field, is_udplite) do { \ 71 + if (is_udplite) SNMP_INC_STATS_BH(udplite_statistics, field); \ 72 + else SNMP_INC_STATS_BH(udp_statistics, field); } while(0) 142 73 143 74 /* /proc */ 144 75 struct udp_seq_afinfo { 145 76 struct module *owner; 146 77 char *name; 147 78 sa_family_t family; 79 + struct hlist_head *hashtable; 148 80 int (*seq_show) (struct seq_file *m, void *v); 149 81 struct file_operations *seq_fops; 150 82 }; 151 83 152 84 struct udp_iter_state { 153 85 sa_family_t family; 86 + struct hlist_head *hashtable; 154 87 int bucket; 155 88 struct seq_operations seq_ops; 156 89 };
+149
include/net/udplite.h
··· 1 + /* 2 + * Definitions for the UDP-Lite (RFC 3828) code. 3 + */ 4 + #ifndef _UDPLITE_H 5 + #define _UDPLITE_H 6 + 7 + /* UDP-Lite socket options */ 8 + #define UDPLITE_SEND_CSCOV 10 /* sender partial coverage (as sent) */ 9 + #define UDPLITE_RECV_CSCOV 11 /* receiver partial coverage (threshold ) */ 10 + 11 + extern struct proto udplite_prot; 12 + extern struct hlist_head udplite_hash[UDP_HTABLE_SIZE]; 13 + 14 + /* UDP-Lite does not have a standardized MIB yet, so we inherit from UDP */ 15 + DECLARE_SNMP_STAT(struct udp_mib, udplite_statistics); 16 + 17 + /* 18 + * Checksum computation is all in software, hence simpler getfrag. 19 + */ 20 + static __inline__ int udplite_getfrag(void *from, char *to, int offset, 21 + int len, int odd, struct sk_buff *skb) 22 + { 23 + return memcpy_fromiovecend(to, (struct iovec *) from, offset, len); 24 + } 25 + 26 + /* Designate sk as UDP-Lite socket */ 27 + static inline int udplite_sk_init(struct sock *sk) 28 + { 29 + udp_sk(sk)->pcflag = UDPLITE_BIT; 30 + return 0; 31 + } 32 + 33 + /* 34 + * Checksumming routines 35 + */ 36 + static inline int udplite_checksum_init(struct sk_buff *skb, struct udphdr *uh) 37 + { 38 + u16 cscov; 39 + 40 + /* In UDPv4 a zero checksum means that the transmitter generated no 41 + * checksum. UDP-Lite (like IPv6) mandates checksums, hence packets 42 + * with a zero checksum field are illegal. */ 43 + if (uh->check == 0) { 44 + LIMIT_NETDEBUG(KERN_DEBUG "UDPLITE: zeroed checksum field\n"); 45 + return 1; 46 + } 47 + 48 + UDP_SKB_CB(skb)->partial_cov = 0; 49 + cscov = ntohs(uh->len); 50 + 51 + if (cscov == 0) /* Indicates that full coverage is required. */ 52 + cscov = skb->len; 53 + else if (cscov < 8 || cscov > skb->len) { 54 + /* 55 + * Coverage length violates RFC 3828: log and discard silently. 
56 + */ 57 + LIMIT_NETDEBUG(KERN_DEBUG "UDPLITE: bad csum coverage %d/%d\n", 58 + cscov, skb->len); 59 + return 1; 60 + 61 + } else if (cscov < skb->len) 62 + UDP_SKB_CB(skb)->partial_cov = 1; 63 + 64 + UDP_SKB_CB(skb)->cscov = cscov; 65 + 66 + /* 67 + * There is no known NIC manufacturer supporting UDP-Lite yet, 68 + * hence ip_summed is always (re-)set to CHECKSUM_NONE. 69 + */ 70 + skb->ip_summed = CHECKSUM_NONE; 71 + 72 + return 0; 73 + } 74 + 75 + static __inline__ int udplite4_csum_init(struct sk_buff *skb, struct udphdr *uh) 76 + { 77 + int rc = udplite_checksum_init(skb, uh); 78 + 79 + if (!rc) 80 + skb->csum = csum_tcpudp_nofold(skb->nh.iph->saddr, 81 + skb->nh.iph->daddr, 82 + skb->len, IPPROTO_UDPLITE, 0); 83 + return rc; 84 + } 85 + 86 + static __inline__ int udplite6_csum_init(struct sk_buff *skb, struct udphdr *uh) 87 + { 88 + int rc = udplite_checksum_init(skb, uh); 89 + 90 + if (!rc) 91 + skb->csum = ~csum_ipv6_magic(&skb->nh.ipv6h->saddr, 92 + &skb->nh.ipv6h->daddr, 93 + skb->len, IPPROTO_UDPLITE, 0); 94 + return rc; 95 + } 96 + 97 + static inline int udplite_sender_cscov(struct udp_sock *up, struct udphdr *uh) 98 + { 99 + int cscov = up->len; 100 + 101 + /* 102 + * Sender has set `partial coverage' option on UDP-Lite socket 103 + */ 104 + if (up->pcflag & UDPLITE_SEND_CC) { 105 + if (up->pcslen < up->len) { 106 + /* up->pcslen == 0 means that full coverage is required, 107 + * partial coverage only if 0 < up->pcslen < up->len */ 108 + if (0 < up->pcslen) { 109 + cscov = up->pcslen; 110 + } 111 + uh->len = htons(up->pcslen); 112 + } 113 + /* 114 + * NOTE: Causes for the error case `up->pcslen > up->len': 115 + * (i) Application error (will not be penalized). 116 + * (ii) Payload too big for send buffer: data is split 117 + * into several packets, each with its own header. 118 + * In this case (e.g. last segment), coverage may 119 + * exceed packet length. 
120 + * Since packets with coverage length > packet length are 121 + * illegal, we fall back to the defaults here. 122 + */ 123 + } 124 + return cscov; 125 + } 126 + 127 + static inline u32 udplite_csum_outgoing(struct sock *sk, struct sk_buff *skb) 128 + { 129 + u32 csum = 0; 130 + int off, len, cscov = udplite_sender_cscov(udp_sk(sk), skb->h.uh); 131 + 132 + skb->ip_summed = CHECKSUM_NONE; /* no HW support for checksumming */ 133 + 134 + skb_queue_walk(&sk->sk_write_queue, skb) { 135 + off = skb->h.raw - skb->data; 136 + len = skb->len - off; 137 + 138 + csum = skb_checksum(skb, off, (cscov > len)? len : cscov, csum); 139 + 140 + if ((cscov -= len) <= 0) 141 + break; 142 + } 143 + return csum; 144 + } 145 + 146 + extern void udplite4_register(void); 147 + extern int udplite_get_port(struct sock *sk, unsigned short snum, 148 + int (*scmp)(const struct sock *, const struct sock *)); 149 + #endif /* _UDPLITE_H */
+2
include/net/xfrm.h
··· 468 468 switch(fl->proto) { 469 469 case IPPROTO_TCP: 470 470 case IPPROTO_UDP: 471 + case IPPROTO_UDPLITE: 471 472 case IPPROTO_SCTP: 472 473 port = fl->fl_ip_sport; 473 474 break; ··· 494 493 switch(fl->proto) { 495 494 case IPPROTO_TCP: 496 495 case IPPROTO_UDP: 496 + case IPPROTO_UDPLITE: 497 497 case IPPROTO_SCTP: 498 498 port = fl->fl_ip_dport; 499 499 break;
+2 -1
net/ipv4/Makefile
··· 8 8 inet_timewait_sock.o inet_connection_sock.o \ 9 9 tcp.o tcp_input.o tcp_output.o tcp_timer.o tcp_ipv4.o \ 10 10 tcp_minisocks.o tcp_cong.o \ 11 - datagram.o raw.o udp.o arp.o icmp.o devinet.o af_inet.o igmp.o \ 11 + datagram.o raw.o udp.o udplite.o \ 12 + arp.o icmp.o devinet.o af_inet.o igmp.o \ 12 13 sysctl_net_ipv4.o fib_frontend.o fib_semantics.o 13 14 14 15 obj-$(CONFIG_IP_FIB_HASH) += fib_hash.o
+7 -1
net/ipv4/af_inet.c
··· 104 104 #include <net/inet_connection_sock.h> 105 105 #include <net/tcp.h> 106 106 #include <net/udp.h> 107 + #include <net/udplite.h> 107 108 #include <linux/skbuff.h> 108 109 #include <net/sock.h> 109 110 #include <net/raw.h> ··· 1224 1223 tcp_statistics[1] = alloc_percpu(struct tcp_mib); 1225 1224 udp_statistics[0] = alloc_percpu(struct udp_mib); 1226 1225 udp_statistics[1] = alloc_percpu(struct udp_mib); 1226 + udplite_statistics[0] = alloc_percpu(struct udp_mib); 1227 + udplite_statistics[1] = alloc_percpu(struct udp_mib); 1227 1228 if (! 1228 1229 (net_statistics[0] && net_statistics[1] && ip_statistics[0] 1229 1230 && ip_statistics[1] && tcp_statistics[0] && tcp_statistics[1] 1230 - && udp_statistics[0] && udp_statistics[1])) 1231 + && udp_statistics[0] && udp_statistics[1] 1232 + && udplite_statistics[0] && udplite_statistics[1] ) ) 1231 1233 return -ENOMEM; 1232 1234 1233 1235 (void) tcp_mib_init(); ··· 1317 1313 /* Setup TCP slab cache for open requests. */ 1318 1314 tcp_init(); 1319 1315 1316 + /* Add UDP-Lite (RFC 3828) */ 1317 + udplite4_register(); 1320 1318 1321 1319 /* 1322 1320 * Set the ICMP layer up
+8 -3
net/ipv4/netfilter/ipt_LOG.c
··· 171 171 } 172 172 break; 173 173 } 174 - case IPPROTO_UDP: { 174 + case IPPROTO_UDP: 175 + case IPPROTO_UDPLITE: { 175 176 struct udphdr _udph, *uh; 176 177 177 - /* Max length: 10 "PROTO=UDP " */ 178 - printk("PROTO=UDP "); 178 + if (ih->protocol == IPPROTO_UDP) 179 + /* Max length: 10 "PROTO=UDP " */ 180 + printk("PROTO=UDP " ); 181 + else /* Max length: 14 "PROTO=UDPLITE " */ 182 + printk("PROTO=UDPLITE "); 179 183 180 184 if (ntohs(ih->frag_off) & IP_OFFSET) 181 185 break; ··· 345 341 /* IP: 40+46+6+11+127 = 230 */ 346 342 /* TCP: 10+max(25,20+30+13+9+32+11+127) = 252 */ 347 343 /* UDP: 10+max(25,20) = 35 */ 344 + /* UDPLITE: 14+max(25,20) = 39 */ 348 345 /* ICMP: 11+max(25, 18+25+max(19,14,24+3+n+10,3+n+10)) = 91+n */ 349 346 /* ESP: 10+max(25)+15 = 50 */ 350 347 /* AH: 9+max(25)+15 = 49 */
+13
net/ipv4/proc.c
··· 38 38 #include <net/protocol.h> 39 39 #include <net/tcp.h> 40 40 #include <net/udp.h> 41 + #include <net/udplite.h> 41 42 #include <linux/inetdevice.h> 42 43 #include <linux/proc_fs.h> 43 44 #include <linux/seq_file.h> ··· 67 66 tcp_death_row.tw_count, atomic_read(&tcp_sockets_allocated), 68 67 atomic_read(&tcp_memory_allocated)); 69 68 seq_printf(seq, "UDP: inuse %d\n", fold_prot_inuse(&udp_prot)); 69 + seq_printf(seq, "UDPLITE: inuse %d\n", fold_prot_inuse(&udplite_prot)); 70 70 seq_printf(seq, "RAW: inuse %d\n", fold_prot_inuse(&raw_prot)); 71 71 seq_printf(seq, "FRAG: inuse %d memory %d\n", ip_frag_nqueues, 72 72 atomic_read(&ip_frag_mem)); ··· 305 303 seq_printf(seq, " %lu", 306 304 fold_field((void **) udp_statistics, 307 305 snmp4_udp_list[i].entry)); 306 + 307 + /* the UDP and UDP-Lite MIBs are the same */ 308 + seq_puts(seq, "\nUdpLite:"); 309 + for (i = 0; snmp4_udp_list[i].name != NULL; i++) 310 + seq_printf(seq, " %s", snmp4_udp_list[i].name); 311 + 312 + seq_puts(seq, "\nUdpLite:"); 313 + for (i = 0; snmp4_udp_list[i].name != NULL; i++) 314 + seq_printf(seq, " %lu", 315 + fold_field((void **) udplite_statistics, 316 + snmp4_udp_list[i].entry) ); 308 317 309 318 seq_putc(seq, '\n'); 310 319 return 0;
+300 -218
net/ipv4/udp.c
··· 92 92 #include <linux/timer.h> 93 93 #include <linux/mm.h> 94 94 #include <linux/inet.h> 95 - #include <linux/ipv6.h> 96 95 #include <linux/netdevice.h> 97 - #include <net/snmp.h> 98 - #include <net/ip.h> 99 96 #include <net/tcp_states.h> 100 - #include <net/protocol.h> 101 97 #include <linux/skbuff.h> 102 98 #include <linux/proc_fs.h> 103 99 #include <linux/seq_file.h> 104 - #include <net/sock.h> 105 - #include <net/udp.h> 106 100 #include <net/icmp.h> 107 101 #include <net/route.h> 108 - #include <net/inet_common.h> 109 102 #include <net/checksum.h> 110 103 #include <net/xfrm.h> 104 + #include "udp_impl.h" 111 105 112 106 /* 113 107 * Snmp MIB for the UDP layer ··· 114 120 115 121 static int udp_port_rover; 116 122 117 - static inline int udp_lport_inuse(u16 num) 123 + static inline int __udp_lib_lport_inuse(__be16 num, struct hlist_head udptable[]) 118 124 { 119 125 struct sock *sk; 120 126 struct hlist_node *node; 121 127 122 - sk_for_each(sk, node, &udp_hash[num & (UDP_HTABLE_SIZE - 1)]) 128 + sk_for_each(sk, node, &udptable[num & (UDP_HTABLE_SIZE - 1)]) 123 129 if (inet_sk(sk)->num == num) 124 130 return 1; 125 131 return 0; 126 132 } 127 133 128 134 /** 129 - * udp_get_port - common port lookup for IPv4 and IPv6 135 + * __udp_lib_get_port - UDP/-Lite port lookup for IPv4 and IPv6 130 136 * 131 137 * @sk: socket struct in question 132 138 * @snum: port number to look up 139 + * @udptable: hash list table, must be of UDP_HTABLE_SIZE 140 + * @port_rover: pointer to record of last unallocated port 133 141 * @saddr_comp: AF-dependent comparison of bound local IP addresses 134 142 */ 135 - int udp_get_port(struct sock *sk, unsigned short snum, 136 - int (*saddr_cmp)(const struct sock *sk1, const struct sock *sk2)) 143 + int __udp_lib_get_port(struct sock *sk, unsigned short snum, 144 + struct hlist_head udptable[], int *port_rover, 145 + int (*saddr_comp)(const struct sock *sk1, 146 + const struct sock *sk2 ) ) 137 147 { 138 148 struct hlist_node *node; 139 
149 struct hlist_head *head; ··· 148 150 if (snum == 0) { 149 151 int best_size_so_far, best, result, i; 150 152 151 - if (udp_port_rover > sysctl_local_port_range[1] || 152 - udp_port_rover < sysctl_local_port_range[0]) 153 - udp_port_rover = sysctl_local_port_range[0]; 153 + if (*port_rover > sysctl_local_port_range[1] || 154 + *port_rover < sysctl_local_port_range[0]) 155 + *port_rover = sysctl_local_port_range[0]; 154 156 best_size_so_far = 32767; 155 - best = result = udp_port_rover; 157 + best = result = *port_rover; 156 158 for (i = 0; i < UDP_HTABLE_SIZE; i++, result++) { 157 159 int size; 158 160 159 - head = &udp_hash[result & (UDP_HTABLE_SIZE - 1)]; 161 + head = &udptable[result & (UDP_HTABLE_SIZE - 1)]; 160 162 if (hlist_empty(head)) { 161 163 if (result > sysctl_local_port_range[1]) 162 164 result = sysctl_local_port_range[0] + ··· 177 179 result = sysctl_local_port_range[0] 178 180 + ((result - sysctl_local_port_range[0]) & 179 181 (UDP_HTABLE_SIZE - 1)); 180 - if (!udp_lport_inuse(result)) 182 + if (! 
__udp_lib_lport_inuse(result, udptable)) 181 183 break; 182 184 } 183 185 if (i >= (1 << 16) / UDP_HTABLE_SIZE) 184 186 goto fail; 185 187 gotit: 186 - udp_port_rover = snum = result; 188 + *port_rover = snum = result; 187 189 } else { 188 - head = &udp_hash[snum & (UDP_HTABLE_SIZE - 1)]; 190 + head = &udptable[snum & (UDP_HTABLE_SIZE - 1)]; 189 191 190 192 sk_for_each(sk2, node, head) 191 193 if (inet_sk(sk2)->num == snum && ··· 193 195 (!sk2->sk_reuse || !sk->sk_reuse) && 194 196 (!sk2->sk_bound_dev_if || !sk->sk_bound_dev_if 195 197 || sk2->sk_bound_dev_if == sk->sk_bound_dev_if) && 196 - (*saddr_cmp)(sk, sk2) ) 198 + (*saddr_comp)(sk, sk2) ) 197 199 goto fail; 198 200 } 199 201 inet_sk(sk)->num = snum; 200 202 if (sk_unhashed(sk)) { 201 - head = &udp_hash[snum & (UDP_HTABLE_SIZE - 1)]; 203 + head = &udptable[snum & (UDP_HTABLE_SIZE - 1)]; 202 204 sk_add_node(sk, head); 203 205 sock_prot_inc_use(sk->sk_prot); 204 206 } ··· 208 210 return error; 209 211 } 210 212 211 - static inline int ipv4_rcv_saddr_equal(const struct sock *sk1, const struct sock *sk2) 213 + __inline__ int udp_get_port(struct sock *sk, unsigned short snum, 214 + int (*scmp)(const struct sock *, const struct sock *)) 215 + { 216 + return __udp_lib_get_port(sk, snum, udp_hash, &udp_port_rover, scmp); 217 + } 218 + 219 + inline int ipv4_rcv_saddr_equal(const struct sock *sk1, const struct sock *sk2) 212 220 { 213 221 struct inet_sock *inet1 = inet_sk(sk1), *inet2 = inet_sk(sk2); 214 222 ··· 228 224 return udp_get_port(sk, snum, ipv4_rcv_saddr_equal); 229 225 } 230 226 231 - 232 - static void udp_v4_hash(struct sock *sk) 233 - { 234 - BUG(); 235 - } 236 - 237 - static void udp_v4_unhash(struct sock *sk) 238 - { 239 - write_lock_bh(&udp_hash_lock); 240 - if (sk_del_node_init(sk)) { 241 - inet_sk(sk)->num = 0; 242 - sock_prot_dec_use(sk->sk_prot); 243 - } 244 - write_unlock_bh(&udp_hash_lock); 245 - } 246 - 247 227 /* UDP is nearly always wildcards out the wazoo, it makes no sense to try 248 228 * 
harder than this. -DaveM 249 229 */ 250 - static struct sock *udp_v4_lookup_longway(__be32 saddr, __be16 sport, 251 - __be32 daddr, __be16 dport, int dif) 230 + static struct sock *__udp4_lib_lookup(__be32 saddr, __be16 sport, 231 + __be32 daddr, __be16 dport, 232 + int dif, struct hlist_head udptable[]) 252 233 { 253 234 struct sock *sk, *result = NULL; 254 235 struct hlist_node *node; 255 236 unsigned short hnum = ntohs(dport); 256 237 int badness = -1; 257 238 258 - sk_for_each(sk, node, &udp_hash[hnum & (UDP_HTABLE_SIZE - 1)]) { 239 + read_lock(&udp_hash_lock); 240 + sk_for_each(sk, node, &udptable[hnum & (UDP_HTABLE_SIZE - 1)]) { 259 241 struct inet_sock *inet = inet_sk(sk); 260 242 261 243 if (inet->num == hnum && !ipv6_only_sock(sk)) { ··· 275 285 } 276 286 } 277 287 } 278 - return result; 279 - } 280 - 281 - static __inline__ struct sock *udp_v4_lookup(__be32 saddr, __be16 sport, 282 - __be32 daddr, __be16 dport, int dif) 283 - { 284 - struct sock *sk; 285 - 286 - read_lock(&udp_hash_lock); 287 - sk = udp_v4_lookup_longway(saddr, sport, daddr, dport, dif); 288 - if (sk) 289 - sock_hold(sk); 288 + if (result) 289 + sock_hold(result); 290 290 read_unlock(&udp_hash_lock); 291 - return sk; 291 + return result; 292 292 } 293 293 294 294 static inline struct sock *udp_v4_mcast_next(struct sock *sk, ··· 320 340 * to find the appropriate port. 
321 341 */ 322 342 323 - void udp_err(struct sk_buff *skb, u32 info) 343 + void __udp4_lib_err(struct sk_buff *skb, u32 info, struct hlist_head udptable[]) 324 344 { 325 345 struct inet_sock *inet; 326 346 struct iphdr *iph = (struct iphdr*)skb->data; ··· 331 351 int harderr; 332 352 int err; 333 353 334 - sk = udp_v4_lookup(iph->daddr, uh->dest, iph->saddr, uh->source, skb->dev->ifindex); 354 + sk = __udp4_lib_lookup(iph->daddr, uh->dest, iph->saddr, uh->source, 355 + skb->dev->ifindex, udptable ); 335 356 if (sk == NULL) { 336 357 ICMP_INC_STATS_BH(ICMP_MIB_INERRORS); 337 358 return; /* No socket for error */ ··· 386 405 sock_put(sk); 387 406 } 388 407 408 + __inline__ void udp_err(struct sk_buff *skb, u32 info) 409 + { 410 + return __udp4_lib_err(skb, info, udp_hash); 411 + } 412 + 389 413 /* 390 414 * Throw away all pending data and cancel the corking. Socket is locked. 391 415 */ ··· 405 419 } 406 420 } 407 421 422 + /** 423 + * udp4_hwcsum_outgoing - handle outgoing HW checksumming 424 + * @sk: socket we are sending on 425 + * @skb: sk_buff containing the filled-in UDP header 426 + * (checksum field must be zeroed out) 427 + */ 428 + static void udp4_hwcsum_outgoing(struct sock *sk, struct sk_buff *skb, 429 + __be32 src, __be32 dst, int len ) 430 + { 431 + unsigned int csum = 0, offset; 432 + struct udphdr *uh = skb->h.uh; 433 + 434 + if (skb_queue_len(&sk->sk_write_queue) == 1) { 435 + /* 436 + * Only one fragment on the socket. 
437 + */ 438 + skb->csum = offsetof(struct udphdr, check); 439 + uh->check = ~csum_tcpudp_magic(src, dst, len, IPPROTO_UDP, 0); 440 + } else { 441 + /* 442 + * HW-checksum won't work as there are two or more 443 + * fragments on the socket so that all csums of sk_buffs 444 + * should be together 445 + */ 446 + offset = skb->h.raw - skb->data; 447 + skb->csum = skb_checksum(skb, offset, skb->len - offset, 0); 448 + 449 + skb->ip_summed = CHECKSUM_NONE; 450 + 451 + skb_queue_walk(&sk->sk_write_queue, skb) { 452 + csum = csum_add(csum, skb->csum); 453 + } 454 + 455 + uh->check = csum_tcpudp_magic(src, dst, len, IPPROTO_UDP, csum); 456 + if (uh->check == 0) 457 + uh->check = -1; 458 + } 459 + } 460 + 408 461 /* 409 462 * Push out all pending data as one UDP datagram. Socket is locked. 410 463 */ 411 - static int udp_push_pending_frames(struct sock *sk, struct udp_sock *up) 464 + int udp_push_pending_frames(struct sock *sk, struct udp_sock *up) 412 465 { 413 466 struct inet_sock *inet = inet_sk(sk); 414 467 struct flowi *fl = &inet->cork.fl; 415 468 struct sk_buff *skb; 416 469 struct udphdr *uh; 417 470 int err = 0; 471 + u32 csum = 0; 418 472 419 473 /* Grab the skbuff where UDP header space exists. */ 420 474 if ((skb = skb_peek(&sk->sk_write_queue)) == NULL) ··· 469 443 uh->len = htons(up->len); 470 444 uh->check = 0; 471 445 472 - if (sk->sk_no_check == UDP_CSUM_NOXMIT) { 446 + if (up->pcflag) /* UDP-Lite */ 447 + csum = udplite_csum_outgoing(sk, skb); 448 + 449 + else if (sk->sk_no_check == UDP_CSUM_NOXMIT) { /* UDP csum disabled */ 450 + 473 451 skb->ip_summed = CHECKSUM_NONE; 474 452 goto send; 475 - } 476 453 477 - if (skb_queue_len(&sk->sk_write_queue) == 1) { 478 - /* 479 - * Only one fragment on the socket. 
480 - */ 481 - if (skb->ip_summed == CHECKSUM_PARTIAL) { 482 - skb->csum = offsetof(struct udphdr, check); 483 - uh->check = ~csum_tcpudp_magic(fl->fl4_src, fl->fl4_dst, 484 - up->len, IPPROTO_UDP, 0); 485 - } else { 486 - skb->csum = csum_partial((char *)uh, 487 - sizeof(struct udphdr), skb->csum); 488 - uh->check = csum_tcpudp_magic(fl->fl4_src, fl->fl4_dst, 489 - up->len, IPPROTO_UDP, skb->csum); 490 - if (uh->check == 0) 491 - uh->check = -1; 492 - } 493 - } else { 494 - unsigned int csum = 0; 495 - /* 496 - * HW-checksum won't work as there are two or more 497 - * fragments on the socket so that all csums of sk_buffs 498 - * should be together. 499 - */ 500 - if (skb->ip_summed == CHECKSUM_PARTIAL) { 501 - int offset = (unsigned char *)uh - skb->data; 502 - skb->csum = skb_checksum(skb, offset, skb->len - offset, 0); 454 + } else if (skb->ip_summed == CHECKSUM_PARTIAL) { /* UDP hardware csum */ 503 455 504 - skb->ip_summed = CHECKSUM_NONE; 505 - } else { 506 - skb->csum = csum_partial((char *)uh, 507 - sizeof(struct udphdr), skb->csum); 508 - } 456 + udp4_hwcsum_outgoing(sk, skb, fl->fl4_src,fl->fl4_dst, up->len); 457 + goto send; 509 458 510 - skb_queue_walk(&sk->sk_write_queue, skb) { 511 - csum = csum_add(csum, skb->csum); 512 - } 513 - uh->check = csum_tcpudp_magic(fl->fl4_src, fl->fl4_dst, 514 - up->len, IPPROTO_UDP, csum); 515 - if (uh->check == 0) 516 - uh->check = -1; 517 - } 459 + } else /* `normal' UDP */ 460 + csum = udp_csum_outgoing(sk, skb); 461 + 462 + /* add protocol-dependent pseudo-header */ 463 + uh->check = csum_tcpudp_magic(fl->fl4_src, fl->fl4_dst, up->len, 464 + sk->sk_protocol, csum ); 465 + if (uh->check == 0) 466 + uh->check = -1; 467 + 518 468 send: 519 469 err = ip_push_pending_frames(sk); 520 470 out: 521 471 up->len = 0; 522 472 up->pending = 0; 523 473 return err; 524 - } 525 - 526 - 527 - static unsigned short udp_check(struct udphdr *uh, int len, __be32 saddr, __be32 daddr, unsigned long base) 528 - { 529 - 
return(csum_tcpudp_magic(saddr, daddr, len, IPPROTO_UDP, base)); 530 474 } 531 475 532 476 int udp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, ··· 512 516 __be32 daddr, faddr, saddr; 513 517 __be16 dport; 514 518 u8 tos; 515 - int err; 519 + int err, is_udplite = up->pcflag; 516 520 int corkreq = up->corkflag || msg->msg_flags&MSG_MORE; 521 + int (*getfrag)(void *, char *, int, int, int, struct sk_buff *); 517 522 518 523 if (len > 0xFFFF) 519 524 return -EMSGSIZE; ··· 619 622 { .daddr = faddr, 620 623 .saddr = saddr, 621 624 .tos = tos } }, 622 - .proto = IPPROTO_UDP, 625 + .proto = sk->sk_protocol, 623 626 .uli_u = { .ports = 624 627 { .sport = inet->sport, 625 628 .dport = dport } } }; ··· 665 668 666 669 do_append_data: 667 670 up->len += ulen; 668 - err = ip_append_data(sk, ip_generic_getfrag, msg->msg_iov, ulen, 669 - sizeof(struct udphdr), &ipc, rt, 671 + getfrag = is_udplite ? udplite_getfrag : ip_generic_getfrag; 672 + err = ip_append_data(sk, getfrag, msg->msg_iov, ulen, 673 + sizeof(struct udphdr), &ipc, rt, 670 674 corkreq ? msg->msg_flags|MSG_MORE : msg->msg_flags); 671 675 if (err) 672 676 udp_flush_pending_frames(sk); ··· 682 684 if (free) 683 685 kfree(ipc.opt); 684 686 if (!err) { 685 - UDP_INC_STATS_USER(UDP_MIB_OUTDATAGRAMS); 687 + UDP_INC_STATS_USER(UDP_MIB_OUTDATAGRAMS, is_udplite); 686 688 return len; 687 689 } 688 690 /* ··· 693 695 * seems like overkill. 
694 696 */ 695 697 if (err == -ENOBUFS || test_bit(SOCK_NOSPACE, &sk->sk_socket->flags)) { 696 - UDP_INC_STATS_USER(UDP_MIB_SNDBUFERRORS); 698 + UDP_INC_STATS_USER(UDP_MIB_SNDBUFERRORS, is_udplite); 697 699 } 698 700 return err; 699 701 ··· 705 707 goto out; 706 708 } 707 709 708 - static int udp_sendpage(struct sock *sk, struct page *page, int offset, 709 - size_t size, int flags) 710 + int udp_sendpage(struct sock *sk, struct page *page, int offset, 711 + size_t size, int flags) 710 712 { 711 713 struct udp_sock *up = udp_sk(sk); 712 714 int ret; ··· 793 795 return(0); 794 796 } 795 797 796 - static __inline__ int __udp_checksum_complete(struct sk_buff *skb) 797 - { 798 - return __skb_checksum_complete(skb); 799 - } 800 - 801 - static __inline__ int udp_checksum_complete(struct sk_buff *skb) 802 - { 803 - return skb->ip_summed != CHECKSUM_UNNECESSARY && 804 - __udp_checksum_complete(skb); 805 - } 806 - 807 798 /* 808 799 * This should be easy, if there is something there we 809 800 * return it, otherwise we block. 810 801 */ 811 802 812 - static int udp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, 813 - size_t len, int noblock, int flags, int *addr_len) 803 + int udp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, 804 + size_t len, int noblock, int flags, int *addr_len) 814 805 { 815 806 struct inet_sock *inet = inet_sk(sk); 816 807 struct sockaddr_in *sin = (struct sockaddr_in *)msg->msg_name; 817 808 struct sk_buff *skb; 818 - int copied, err; 809 + int copied, err, copy_only, is_udplite = IS_UDPLITE(sk); 819 810 820 811 /* 821 812 * Check any passed addresses ··· 826 839 msg->msg_flags |= MSG_TRUNC; 827 840 } 828 841 829 - if (skb->ip_summed==CHECKSUM_UNNECESSARY) { 830 - err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr), msg->msg_iov, 831 - copied); 832 - } else if (msg->msg_flags&MSG_TRUNC) { 833 - if (__udp_checksum_complete(skb)) 842 + /* 843 + * Decide whether to checksum and/or copy data. 
844 + * 845 + * UDP: checksum may have been computed in HW, 846 + * (re-)compute it if message is truncated. 847 + * UDP-Lite: always needs to checksum, no HW support. 848 + */ 849 + copy_only = (skb->ip_summed==CHECKSUM_UNNECESSARY); 850 + 851 + if (is_udplite || (!copy_only && msg->msg_flags&MSG_TRUNC)) { 852 + if (__udp_lib_checksum_complete(skb)) 834 853 goto csum_copy_err; 835 - err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr), msg->msg_iov, 836 - copied); 837 - } else { 854 + copy_only = 1; 855 + } 856 + 857 + if (copy_only) 858 + err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr), 859 + msg->msg_iov, copied ); 860 + else { 838 861 err = skb_copy_and_csum_datagram_iovec(skb, sizeof(struct udphdr), msg->msg_iov); 839 862 840 863 if (err == -EINVAL) ··· 877 880 return err; 878 881 879 882 csum_copy_err: 880 - UDP_INC_STATS_BH(UDP_MIB_INERRORS); 883 + UDP_INC_STATS_BH(UDP_MIB_INERRORS, is_udplite); 881 884 882 885 skb_kill_datagram(sk, skb, flags); 883 886 ··· 907 910 } 908 911 sk_dst_reset(sk); 909 912 return 0; 910 - } 911 - 912 - static void udp_close(struct sock *sk, long timeout) 913 - { 914 - sk_common_release(sk); 915 913 } 916 914 917 915 /* return: ··· 1014 1022 * Note that in the success and error cases, the skb is assumed to 1015 1023 * have either been requeued or freed. 1016 1024 */ 1017 - static int udp_queue_rcv_skb(struct sock * sk, struct sk_buff *skb) 1025 + int udp_queue_rcv_skb(struct sock * sk, struct sk_buff *skb) 1018 1026 { 1019 1027 struct udp_sock *up = udp_sk(sk); 1020 1028 int rc; ··· 1022 1030 /* 1023 1031 * Charge it to the socket, dropping if the queue is full. 
1024 1032 */ 1025 - if (!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb)) { 1026 - kfree_skb(skb); 1027 - return -1; 1028 - } 1033 + if (!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb)) 1034 + goto drop; 1029 1035 nf_reset(skb); 1030 1036 1031 1037 if (up->encap_type) { ··· 1047 1057 if (ret < 0) { 1048 1058 /* process the ESP packet */ 1049 1059 ret = xfrm4_rcv_encap(skb, up->encap_type); 1050 - UDP_INC_STATS_BH(UDP_MIB_INDATAGRAMS); 1060 + UDP_INC_STATS_BH(UDP_MIB_INDATAGRAMS, up->pcflag); 1051 1061 return -ret; 1052 1062 } 1053 1063 /* FALLTHROUGH -- it's a UDP Packet */ 1054 1064 } 1055 1065 1056 - if (sk->sk_filter && skb->ip_summed != CHECKSUM_UNNECESSARY) { 1057 - if (__udp_checksum_complete(skb)) { 1058 - UDP_INC_STATS_BH(UDP_MIB_INERRORS); 1059 - kfree_skb(skb); 1060 - return -1; 1066 + /* 1067 + * UDP-Lite specific tests, ignored on UDP sockets 1068 + */ 1069 + if ((up->pcflag & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) { 1070 + 1071 + /* 1072 + * MIB statistics other than incrementing the error count are 1073 + * disabled for the following two types of errors: these depend 1074 + * on the application settings, not on the functioning of the 1075 + * protocol stack as such. 1076 + * 1077 + * RFC 3828 here recommends (sec 3.3): "There should also be a 1078 + * way ... to ... at least let the receiving application block 1079 + * delivery of packets with coverage values less than a value 1080 + * provided by the application." 1081 + */ 1082 + if (up->pcrlen == 0) { /* full coverage was set */ 1083 + LIMIT_NETDEBUG(KERN_WARNING "UDPLITE: partial coverage " 1084 + "%d while full coverage %d requested\n", 1085 + UDP_SKB_CB(skb)->cscov, skb->len); 1086 + goto drop; 1061 1087 } 1088 + /* The next case involves violating the min. coverage requested 1089 + * by the receiver. This is subtle: if receiver wants x and x is 1090 + * greater than the buffersize/MTU then receiver will complain 1091 + * that it wants x while sender emits packets of smaller size y. 
1092 + * Therefore the above ...()->partial_cov statement is essential. 1093 + */ 1094 + if (UDP_SKB_CB(skb)->cscov < up->pcrlen) { 1095 + LIMIT_NETDEBUG(KERN_WARNING 1096 + "UDPLITE: coverage %d too small, need min %d\n", 1097 + UDP_SKB_CB(skb)->cscov, up->pcrlen); 1098 + goto drop; 1099 + } 1100 + } 1101 + 1102 + if (sk->sk_filter && skb->ip_summed != CHECKSUM_UNNECESSARY) { 1103 + if (__udp_lib_checksum_complete(skb)) 1104 + goto drop; 1062 1105 skb->ip_summed = CHECKSUM_UNNECESSARY; 1063 1106 } 1064 1107 1065 1108 if ((rc = sock_queue_rcv_skb(sk,skb)) < 0) { 1066 1109 /* Note that an ENOMEM error is charged twice */ 1067 1110 if (rc == -ENOMEM) 1068 - UDP_INC_STATS_BH(UDP_MIB_RCVBUFERRORS); 1069 - UDP_INC_STATS_BH(UDP_MIB_INERRORS); 1070 - kfree_skb(skb); 1071 - return -1; 1111 + UDP_INC_STATS_BH(UDP_MIB_RCVBUFERRORS, up->pcflag); 1112 + goto drop; 1072 1113 } 1073 - UDP_INC_STATS_BH(UDP_MIB_INDATAGRAMS); 1114 + 1115 + UDP_INC_STATS_BH(UDP_MIB_INDATAGRAMS, up->pcflag); 1074 1116 return 0; 1117 + 1118 + drop: 1119 + UDP_INC_STATS_BH(UDP_MIB_INERRORS, up->pcflag); 1120 + kfree_skb(skb); 1121 + return -1; 1075 1122 } 1076 1123 1077 1124 /* ··· 1117 1090 * Note: called only from the BH handler context, 1118 1091 * so we don't need to lock the hashes. 
1119 1092 */ 1120 - static int udp_v4_mcast_deliver(struct sk_buff *skb, struct udphdr *uh, 1121 - __be32 saddr, __be32 daddr) 1093 + static int __udp4_lib_mcast_deliver(struct sk_buff *skb, 1094 + struct udphdr *uh, 1095 + __be32 saddr, __be32 daddr, 1096 + struct hlist_head udptable[]) 1122 1097 { 1123 1098 struct sock *sk; 1124 1099 int dif; 1125 1100 1126 1101 read_lock(&udp_hash_lock); 1127 - sk = sk_head(&udp_hash[ntohs(uh->dest) & (UDP_HTABLE_SIZE - 1)]); 1102 + sk = sk_head(&udptable[ntohs(uh->dest) & (UDP_HTABLE_SIZE - 1)]); 1128 1103 dif = skb->dev->ifindex; 1129 1104 sk = udp_v4_mcast_next(sk, uh->dest, daddr, uh->source, saddr, dif); 1130 1105 if (sk) { ··· 1160 1131 * Otherwise, csum completion requires chacksumming packet body, 1161 1132 * including udp header and folding it to skb->csum. 1162 1133 */ 1163 - static void udp_checksum_init(struct sk_buff *skb, struct udphdr *uh, 1164 - unsigned short ulen, __be32 saddr, __be32 daddr) 1134 + static inline void udp4_csum_init(struct sk_buff *skb, struct udphdr *uh) 1165 1135 { 1166 1136 if (uh->check == 0) { 1167 1137 skb->ip_summed = CHECKSUM_UNNECESSARY; 1168 1138 } else if (skb->ip_summed == CHECKSUM_COMPLETE) { 1169 - if (!udp_check(uh, ulen, saddr, daddr, skb->csum)) 1139 + if (!csum_tcpudp_magic(skb->nh.iph->saddr, skb->nh.iph->daddr, 1140 + skb->len, IPPROTO_UDP, skb->csum )) 1170 1141 skb->ip_summed = CHECKSUM_UNNECESSARY; 1171 1142 } 1172 1143 if (skb->ip_summed != CHECKSUM_UNNECESSARY) 1173 - skb->csum = csum_tcpudp_nofold(saddr, daddr, ulen, IPPROTO_UDP, 0); 1144 + skb->csum = csum_tcpudp_nofold(skb->nh.iph->saddr, 1145 + skb->nh.iph->daddr, 1146 + skb->len, IPPROTO_UDP, 0); 1174 1147 /* Probably, we should checksum udp header (it should be in cache 1175 1148 * in any case) and data in tiny packets (< rx copybreak). 
1176 1149 */ 1150 + 1151 + /* UDP = UDP-Lite with a non-partial checksum coverage */ 1152 + UDP_SKB_CB(skb)->partial_cov = 0; 1177 1153 } 1178 1154 1179 1155 /* 1180 1156 * All we need to do is get the socket, and then do a checksum. 1181 1157 */ 1182 1158 1183 - int udp_rcv(struct sk_buff *skb) 1159 + int __udp4_lib_rcv(struct sk_buff *skb, struct hlist_head udptable[], 1160 + int is_udplite) 1184 1161 { 1185 1162 struct sock *sk; 1186 - struct udphdr *uh; 1163 + struct udphdr *uh = skb->h.uh; 1187 1164 unsigned short ulen; 1188 1165 struct rtable *rt = (struct rtable*)skb->dst; 1189 1166 __be32 saddr = skb->nh.iph->saddr; 1190 1167 __be32 daddr = skb->nh.iph->daddr; 1191 - int len = skb->len; 1192 1168 1193 1169 /* 1194 - * Validate the packet and the UDP length. 1170 + * Validate the packet. 1195 1171 */ 1196 1172 if (!pskb_may_pull(skb, sizeof(struct udphdr))) 1197 - goto no_header; 1198 - 1199 - uh = skb->h.uh; 1173 + goto drop; /* No space for header. */ 1200 1174 1201 1175 ulen = ntohs(uh->len); 1202 - 1203 - if (ulen > len || ulen < sizeof(*uh)) 1176 + if (ulen > skb->len) 1204 1177 goto short_packet; 1205 1178 1206 - if (pskb_trim_rcsum(skb, ulen)) 1207 - goto short_packet; 1179 + if(! is_udplite ) { /* UDP validates ulen. */ 1208 1180 1209 - udp_checksum_init(skb, uh, ulen, saddr, daddr); 1181 + if (ulen < sizeof(*uh) || pskb_trim_rcsum(skb, ulen)) 1182 + goto short_packet; 1183 + 1184 + udp4_csum_init(skb, uh); 1185 + 1186 + } else { /* UDP-Lite validates cscov. 
*/ 1187 + if (udplite4_csum_init(skb, uh)) 1188 + goto csum_error; 1189 + } 1210 1190 1211 1191 if(rt->rt_flags & (RTCF_BROADCAST|RTCF_MULTICAST)) 1212 - return udp_v4_mcast_deliver(skb, uh, saddr, daddr); 1192 + return __udp4_lib_mcast_deliver(skb, uh, saddr, daddr, udptable); 1213 1193 1214 - sk = udp_v4_lookup(saddr, uh->source, daddr, uh->dest, skb->dev->ifindex); 1194 + sk = __udp4_lib_lookup(saddr, uh->source, daddr, uh->dest, 1195 + skb->dev->ifindex, udptable ); 1215 1196 1216 1197 if (sk != NULL) { 1217 1198 int ret = udp_queue_rcv_skb(sk, skb); 1218 1199 sock_put(sk); 1219 1200 1220 1201 /* a return value > 0 means to resubmit the input, but 1221 - * it it wants the return to be -protocol, or 0 1202 + * it wants the return to be -protocol, or 0 1222 1203 */ 1223 1204 if (ret > 0) 1224 1205 return -ret; ··· 1240 1201 nf_reset(skb); 1241 1202 1242 1203 /* No socket. Drop packet silently, if checksum is wrong */ 1243 - if (udp_checksum_complete(skb)) 1204 + if (udp_lib_checksum_complete(skb)) 1244 1205 goto csum_error; 1245 1206 1246 - UDP_INC_STATS_BH(UDP_MIB_NOPORTS); 1207 + UDP_INC_STATS_BH(UDP_MIB_NOPORTS, is_udplite); 1247 1208 icmp_send(skb, ICMP_DEST_UNREACH, ICMP_PORT_UNREACH, 0); 1248 1209 1249 1210 /* ··· 1254 1215 return(0); 1255 1216 1256 1217 short_packet: 1257 - LIMIT_NETDEBUG(KERN_DEBUG "UDP: short packet: From %u.%u.%u.%u:%u %d/%d to %u.%u.%u.%u:%u\n", 1218 + LIMIT_NETDEBUG(KERN_DEBUG "UDP%s: short packet: From %u.%u.%u.%u:%u %d/%d to %u.%u.%u.%u:%u\n", 1219 + is_udplite? "-Lite" : "", 1258 1220 NIPQUAD(saddr), 1259 1221 ntohs(uh->source), 1260 1222 ulen, 1261 - len, 1223 + skb->len, 1262 1224 NIPQUAD(daddr), 1263 1225 ntohs(uh->dest)); 1264 - no_header: 1265 - UDP_INC_STATS_BH(UDP_MIB_INERRORS); 1266 - kfree_skb(skb); 1267 - return(0); 1226 + goto drop; 1268 1227 1269 1228 csum_error: 1270 1229 /* 1271 1230 * RFC1122: OK. Discards the bad packet silently (as far as 1272 1231 * the network is concerned, anyway) as per 4.1.3.4 (MUST). 
1273 1232 */ 1274 - LIMIT_NETDEBUG(KERN_DEBUG "UDP: bad checksum. From %d.%d.%d.%d:%d to %d.%d.%d.%d:%d ulen %d\n", 1233 + LIMIT_NETDEBUG(KERN_DEBUG "UDP%s: bad checksum. From %d.%d.%d.%d:%d to %d.%d.%d.%d:%d ulen %d\n", 1234 + is_udplite? "-Lite" : "", 1275 1235 NIPQUAD(saddr), 1276 1236 ntohs(uh->source), 1277 1237 NIPQUAD(daddr), 1278 1238 ntohs(uh->dest), 1279 1239 ulen); 1280 1240 drop: 1281 - UDP_INC_STATS_BH(UDP_MIB_INERRORS); 1241 + UDP_INC_STATS_BH(UDP_MIB_INERRORS, is_udplite); 1282 1242 kfree_skb(skb); 1283 1243 return(0); 1284 1244 } 1285 1245 1286 - static int udp_destroy_sock(struct sock *sk) 1246 + __inline__ int udp_rcv(struct sk_buff *skb) 1247 + { 1248 + return __udp4_lib_rcv(skb, udp_hash, 0); 1249 + } 1250 + 1251 + int udp_destroy_sock(struct sock *sk) 1287 1252 { 1288 1253 lock_sock(sk); 1289 1254 udp_flush_pending_frames(sk); ··· 1336 1293 } 1337 1294 break; 1338 1295 1296 + /* 1297 + * UDP-Lite's partial checksum coverage (RFC 3828). 1298 + */ 1299 + /* The sender sets actual checksum coverage length via this option. 1300 + * The case coverage > packet length is handled by send module. */ 1301 + case UDPLITE_SEND_CSCOV: 1302 + if (!up->pcflag) /* Disable the option on UDP sockets */ 1303 + return -ENOPROTOOPT; 1304 + if (val != 0 && val < 8) /* Illegal coverage: use default (8) */ 1305 + val = 8; 1306 + up->pcslen = val; 1307 + up->pcflag |= UDPLITE_SEND_CC; 1308 + break; 1309 + 1310 + /* The receiver specifies a minimum checksum coverage value. To make 1311 + * sense, this should be set to at least 8 (as done below). If zero is 1312 + * used, this again means full checksum coverage. */ 1313 + case UDPLITE_RECV_CSCOV: 1314 + if (!up->pcflag) /* Disable the option on UDP sockets */ 1315 + return -ENOPROTOOPT; 1316 + if (val != 0 && val < 8) /* Avoid silly minimal values. 
*/ 1317 + val = 8; 1318 + up->pcrlen = val; 1319 + up->pcflag |= UDPLITE_RECV_CC; 1320 + break; 1321 + 1339 1322 default: 1340 1323 err = -ENOPROTOOPT; 1341 1324 break; ··· 1370 1301 return err; 1371 1302 } 1372 1303 1373 - static int udp_setsockopt(struct sock *sk, int level, int optname, 1374 - char __user *optval, int optlen) 1304 + int udp_setsockopt(struct sock *sk, int level, int optname, 1305 + char __user *optval, int optlen) 1375 1306 { 1376 - if (level != SOL_UDP) 1377 - return ip_setsockopt(sk, level, optname, optval, optlen); 1378 - return do_udp_setsockopt(sk, level, optname, optval, optlen); 1307 + if (level == SOL_UDP || level == SOL_UDPLITE) 1308 + return do_udp_setsockopt(sk, level, optname, optval, optlen); 1309 + return ip_setsockopt(sk, level, optname, optval, optlen); 1379 1310 } 1380 1311 1381 1312 #ifdef CONFIG_COMPAT 1382 - static int compat_udp_setsockopt(struct sock *sk, int level, int optname, 1383 - char __user *optval, int optlen) 1313 + int compat_udp_setsockopt(struct sock *sk, int level, int optname, 1314 + char __user *optval, int optlen) 1384 1315 { 1385 - if (level != SOL_UDP) 1386 - return compat_ip_setsockopt(sk, level, optname, optval, optlen); 1387 - return do_udp_setsockopt(sk, level, optname, optval, optlen); 1316 + if (level == SOL_UDP || level == SOL_UDPLITE) 1317 + return do_udp_setsockopt(sk, level, optname, optval, optlen); 1318 + return compat_ip_setsockopt(sk, level, optname, optval, optlen); 1388 1319 } 1389 1320 #endif 1390 1321 ··· 1411 1342 val = up->encap_type; 1412 1343 break; 1413 1344 1345 + /* The following two cannot be changed on UDP sockets, the return is 1346 + * always 0 (which corresponds to the full checksum coverage of UDP). 
*/ 1347 + case UDPLITE_SEND_CSCOV: 1348 + val = up->pcslen; 1349 + break; 1350 + 1351 + case UDPLITE_RECV_CSCOV: 1352 + val = up->pcrlen; 1353 + break; 1354 + 1414 1355 default: 1415 1356 return -ENOPROTOOPT; 1416 1357 }; ··· 1432 1353 return 0; 1433 1354 } 1434 1355 1435 - static int udp_getsockopt(struct sock *sk, int level, int optname, 1436 - char __user *optval, int __user *optlen) 1356 + int udp_getsockopt(struct sock *sk, int level, int optname, 1357 + char __user *optval, int __user *optlen) 1437 1358 { 1438 - if (level != SOL_UDP) 1439 - return ip_getsockopt(sk, level, optname, optval, optlen); 1440 - return do_udp_getsockopt(sk, level, optname, optval, optlen); 1359 + if (level == SOL_UDP || level == SOL_UDPLITE) 1360 + return do_udp_getsockopt(sk, level, optname, optval, optlen); 1361 + return ip_getsockopt(sk, level, optname, optval, optlen); 1441 1362 } 1442 1363 1443 1364 #ifdef CONFIG_COMPAT 1444 - static int compat_udp_getsockopt(struct sock *sk, int level, int optname, 1365 + int compat_udp_getsockopt(struct sock *sk, int level, int optname, 1445 1366 char __user *optval, int __user *optlen) 1446 1367 { 1447 - if (level != SOL_UDP) 1448 - return compat_ip_getsockopt(sk, level, optname, optval, optlen); 1449 - return do_udp_getsockopt(sk, level, optname, optval, optlen); 1368 + if (level == SOL_UDP || level == SOL_UDPLITE) 1369 + return do_udp_getsockopt(sk, level, optname, optval, optlen); 1370 + return compat_ip_getsockopt(sk, level, optname, optval, optlen); 1450 1371 } 1451 1372 #endif 1452 1373 /** ··· 1466 1387 { 1467 1388 unsigned int mask = datagram_poll(file, sock, wait); 1468 1389 struct sock *sk = sock->sk; 1469 - 1390 + int is_lite = IS_UDPLITE(sk); 1391 + 1470 1392 /* Check for false positives due to checksum errors */ 1471 1393 if ( (mask & POLLRDNORM) && 1472 1394 !(file->f_flags & O_NONBLOCK) && ··· 1477 1397 1478 1398 spin_lock_bh(&rcvq->lock); 1479 1399 while ((skb = skb_peek(rcvq)) != NULL) { 1480 - if (udp_checksum_complete(skb)) 
{ 1481 - UDP_INC_STATS_BH(UDP_MIB_INERRORS); 1400 + if (udp_lib_checksum_complete(skb)) { 1401 + UDP_INC_STATS_BH(UDP_MIB_INERRORS, is_lite); 1482 1402 __skb_unlink(skb, rcvq); 1483 1403 kfree_skb(skb); 1484 1404 } else { ··· 1500 1420 struct proto udp_prot = { 1501 1421 .name = "UDP", 1502 1422 .owner = THIS_MODULE, 1503 - .close = udp_close, 1423 + .close = udp_lib_close, 1504 1424 .connect = ip4_datagram_connect, 1505 1425 .disconnect = udp_disconnect, 1506 1426 .ioctl = udp_ioctl, ··· 1511 1431 .recvmsg = udp_recvmsg, 1512 1432 .sendpage = udp_sendpage, 1513 1433 .backlog_rcv = udp_queue_rcv_skb, 1514 - .hash = udp_v4_hash, 1515 - .unhash = udp_v4_unhash, 1434 + .hash = udp_lib_hash, 1435 + .unhash = udp_lib_unhash, 1516 1436 .get_port = udp_v4_get_port, 1517 1437 .obj_size = sizeof(struct udp_sock), 1518 1438 #ifdef CONFIG_COMPAT ··· 1531 1451 1532 1452 for (state->bucket = 0; state->bucket < UDP_HTABLE_SIZE; ++state->bucket) { 1533 1453 struct hlist_node *node; 1534 - sk_for_each(sk, node, &udp_hash[state->bucket]) { 1454 + sk_for_each(sk, node, state->hashtable + state->bucket) { 1535 1455 if (sk->sk_family == state->family) 1536 1456 goto found; 1537 1457 } ··· 1552 1472 } while (sk && sk->sk_family != state->family); 1553 1473 1554 1474 if (!sk && ++state->bucket < UDP_HTABLE_SIZE) { 1555 - sk = sk_head(&udp_hash[state->bucket]); 1475 + sk = sk_head(state->hashtable + state->bucket); 1556 1476 goto try_again; 1557 1477 } 1558 1478 return sk; ··· 1602 1522 if (!s) 1603 1523 goto out; 1604 1524 s->family = afinfo->family; 1525 + s->hashtable = afinfo->hashtable; 1605 1526 s->seq_ops.start = udp_seq_start; 1606 1527 s->seq_ops.next = udp_seq_next; 1607 1528 s->seq_ops.show = afinfo->seq_show; ··· 1669 1588 atomic_read(&sp->sk_refcnt), sp); 1670 1589 } 1671 1590 1672 - static int udp4_seq_show(struct seq_file *seq, void *v) 1591 + int udp4_seq_show(struct seq_file *seq, void *v) 1673 1592 { 1674 1593 if (v == SEQ_START_TOKEN) 1675 1594 seq_printf(seq, 
"%-127s\n", ··· 1692 1611 .owner = THIS_MODULE, 1693 1612 .name = "udp", 1694 1613 .family = AF_INET, 1614 + .hashtable = udp_hash, 1695 1615 .seq_show = udp4_seq_show, 1696 1616 .seq_fops = &udp4_seq_fops, 1697 1617 };
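The UDP-Lite receive-path policy that the patch adds to udp_queue_rcv_skb() — drop a partial-coverage packet when the receiver demanded full coverage (pcrlen == 0 with UDPLITE_RECV_CC set), or when its coverage falls below the requested minimum — can be condensed into a small standalone sketch. This is an illustrative userspace helper, not part of the patch; the names (partial_cov, cscov, pcrlen) follow the kernel code above.

```c
#include <assert.h>

/* Sketch of the UDP-Lite receive checks in udp_queue_rcv_skb():
 * returns 1 = deliver, 0 = drop. Packets with full coverage
 * (partial_cov == 0) always pass, as do sockets without the
 * UDPLITE_RECV_CC flag set.
 */
static int udplite_rx_acceptable(int recv_cc_set, int partial_cov,
                                 int cscov, int pcrlen)
{
    if (!recv_cc_set || !partial_cov)
        return 1;   /* no receiver constraint, or full coverage */
    if (pcrlen == 0)
        return 0;   /* full coverage demanded, packet is partial */
    if (cscov < pcrlen)
        return 0;   /* coverage below the requested minimum */
    return 1;
}
```

Note the subtlety called out in the kernel comment: the partial_cov gate is what keeps a receiver that asks for a minimum larger than the typical packet size from rejecting fully covered packets.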
+38
net/ipv4/udp_impl.h
··· 1 + #ifndef _UDP4_IMPL_H 2 + #define _UDP4_IMPL_H 3 + #include <net/udp.h> 4 + #include <net/udplite.h> 5 + #include <net/protocol.h> 6 + #include <net/inet_common.h> 7 + 8 + extern int __udp4_lib_rcv(struct sk_buff *, struct hlist_head [], int ); 9 + extern void __udp4_lib_err(struct sk_buff *, u32, struct hlist_head []); 10 + 11 + extern int __udp_lib_get_port(struct sock *sk, unsigned short snum, 12 + struct hlist_head udptable[], int *port_rover, 13 + int (*)(const struct sock*,const struct sock*)); 14 + extern int ipv4_rcv_saddr_equal(const struct sock *, const struct sock *); 15 + 16 + 17 + extern int udp_setsockopt(struct sock *sk, int level, int optname, 18 + char __user *optval, int optlen); 19 + extern int udp_getsockopt(struct sock *sk, int level, int optname, 20 + char __user *optval, int __user *optlen); 21 + 22 + #ifdef CONFIG_COMPAT 23 + extern int compat_udp_setsockopt(struct sock *sk, int level, int optname, 24 + char __user *optval, int optlen); 25 + extern int compat_udp_getsockopt(struct sock *sk, int level, int optname, 26 + char __user *optval, int __user *optlen); 27 + #endif 28 + extern int udp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, 29 + size_t len, int noblock, int flags, int *addr_len); 30 + extern int udp_sendpage(struct sock *sk, struct page *page, int offset, 31 + size_t size, int flags); 32 + extern int udp_queue_rcv_skb(struct sock * sk, struct sk_buff *skb); 33 + extern int udp_destroy_sock(struct sock *sk); 34 + 35 + #ifdef CONFIG_PROC_FS 36 + extern int udp4_seq_show(struct seq_file *seq, void *v); 37 + #endif 38 + #endif /* _UDP4_IMPL_H */
+119
net/ipv4/udplite.c
··· 1 + /* 2 + * UDPLITE An implementation of the UDP-Lite protocol (RFC 3828). 3 + * 4 + * Version: $Id: udplite.c,v 1.25 2006/10/19 07:22:36 gerrit Exp $ 5 + * 6 + * Authors: Gerrit Renker <gerrit@erg.abdn.ac.uk> 7 + * 8 + * Changes: 9 + * Fixes: 10 + * This program is free software; you can redistribute it and/or 11 + * modify it under the terms of the GNU General Public License 12 + * as published by the Free Software Foundation; either version 13 + * 2 of the License, or (at your option) any later version. 14 + */ 15 + #include "udp_impl.h" 16 + DEFINE_SNMP_STAT(struct udp_mib, udplite_statistics) __read_mostly; 17 + 18 + struct hlist_head udplite_hash[UDP_HTABLE_SIZE]; 19 + static int udplite_port_rover; 20 + 21 + __inline__ int udplite_get_port(struct sock *sk, unsigned short p, 22 + int (*c)(const struct sock *, const struct sock *)) 23 + { 24 + return __udp_lib_get_port(sk, p, udplite_hash, &udplite_port_rover, c); 25 + } 26 + 27 + static __inline__ int udplite_v4_get_port(struct sock *sk, unsigned short snum) 28 + { 29 + return udplite_get_port(sk, snum, ipv4_rcv_saddr_equal); 30 + } 31 + 32 + __inline__ int udplite_rcv(struct sk_buff *skb) 33 + { 34 + return __udp4_lib_rcv(skb, udplite_hash, 1); 35 + } 36 + 37 + __inline__ void udplite_err(struct sk_buff *skb, u32 info) 38 + { 39 + return __udp4_lib_err(skb, info, udplite_hash); 40 + } 41 + 42 + static struct net_protocol udplite_protocol = { 43 + .handler = udplite_rcv, 44 + .err_handler = udplite_err, 45 + .no_policy = 1, 46 + }; 47 + 48 + struct proto udplite_prot = { 49 + .name = "UDP-Lite", 50 + .owner = THIS_MODULE, 51 + .close = udp_lib_close, 52 + .connect = ip4_datagram_connect, 53 + .disconnect = udp_disconnect, 54 + .ioctl = udp_ioctl, 55 + .init = udplite_sk_init, 56 + .destroy = udp_destroy_sock, 57 + .setsockopt = udp_setsockopt, 58 + .getsockopt = udp_getsockopt, 59 + .sendmsg = udp_sendmsg, 60 + .recvmsg = udp_recvmsg, 61 + .sendpage = udp_sendpage, 62 + .backlog_rcv = udp_queue_rcv_skb, 
63 + .hash = udp_lib_hash, 64 + .unhash = udp_lib_unhash, 65 + .get_port = udplite_v4_get_port, 66 + .obj_size = sizeof(struct udp_sock), 67 + #ifdef CONFIG_COMPAT 68 + .compat_setsockopt = compat_udp_setsockopt, 69 + .compat_getsockopt = compat_udp_getsockopt, 70 + #endif 71 + }; 72 + 73 + static struct inet_protosw udplite4_protosw = { 74 + .type = SOCK_DGRAM, 75 + .protocol = IPPROTO_UDPLITE, 76 + .prot = &udplite_prot, 77 + .ops = &inet_dgram_ops, 78 + .capability = -1, 79 + .no_check = 0, /* must checksum (RFC 3828) */ 80 + .flags = INET_PROTOSW_PERMANENT, 81 + }; 82 + 83 + #ifdef CONFIG_PROC_FS 84 + static struct file_operations udplite4_seq_fops; 85 + static struct udp_seq_afinfo udplite4_seq_afinfo = { 86 + .owner = THIS_MODULE, 87 + .name = "udplite", 88 + .family = AF_INET, 89 + .hashtable = udplite_hash, 90 + .seq_show = udp4_seq_show, 91 + .seq_fops = &udplite4_seq_fops, 92 + }; 93 + #endif 94 + 95 + void __init udplite4_register(void) 96 + { 97 + if (proto_register(&udplite_prot, 1)) 98 + goto out_register_err; 99 + 100 + if (inet_add_protocol(&udplite_protocol, IPPROTO_UDPLITE) < 0) 101 + goto out_unregister_proto; 102 + 103 + inet_register_protosw(&udplite4_protosw); 104 + 105 + #ifdef CONFIG_PROC_FS 106 + if (udp_proc_register(&udplite4_seq_afinfo)) /* udplite4_proc_init() */ 107 + printk(KERN_ERR "%s: Cannot register /proc!\n", __FUNCTION__); 108 + #endif 109 + return; 110 + 111 + out_unregister_proto: 112 + proto_unregister(&udplite_prot); 113 + out_register_err: 114 + printk(KERN_CRIT "%s: Cannot add UDP-Lite protocol.\n", __FUNCTION__); 115 + } 116 + 117 + EXPORT_SYMBOL(udplite_hash); 118 + EXPORT_SYMBOL(udplite_prot); 119 + EXPORT_SYMBOL(udplite_get_port);
+1
net/ipv4/xfrm4_policy.c
··· 199 199 if (!(iph->frag_off & htons(IP_MF | IP_OFFSET))) { 200 200 switch (iph->protocol) { 201 201 case IPPROTO_UDP: 202 + case IPPROTO_UDPLITE: 202 203 case IPPROTO_TCP: 203 204 case IPPROTO_SCTP: 204 205 case IPPROTO_DCCP:
+2 -2
net/ipv6/Makefile
··· 5 5 obj-$(CONFIG_IPV6) += ipv6.o 6 6 7 7 ipv6-objs := af_inet6.o anycast.o ip6_output.o ip6_input.o addrconf.o \ 8 - route.o ip6_fib.o ipv6_sockglue.o ndisc.o udp.o raw.o \ 9 - protocol.o icmp.o mcast.o reassembly.o tcp_ipv6.o \ 8 + route.o ip6_fib.o ipv6_sockglue.o ndisc.o udp.o udplite.o \ 9 + raw.o protocol.o icmp.o mcast.o reassembly.o tcp_ipv6.o \ 10 10 exthdrs.o sysctl_net_ipv6.o datagram.o proc.o \ 11 11 ip6_flowlabel.o ipv6_syms.o inet6_connection_sock.o 12 12
+20 -1
net/ipv6/af_inet6.c
··· 49 49 #include <net/ip.h> 50 50 #include <net/ipv6.h> 51 51 #include <net/udp.h> 52 + #include <net/udplite.h> 52 53 #include <net/tcp.h> 53 54 #include <net/ipip.h> 54 55 #include <net/protocol.h> ··· 738 737 if (snmp6_mib_init((void **)udp_stats_in6, sizeof (struct udp_mib), 739 738 __alignof__(struct udp_mib)) < 0) 740 739 goto err_udp_mib; 740 + if (snmp6_mib_init((void **)udplite_stats_in6, sizeof (struct udp_mib), 741 + __alignof__(struct udp_mib)) < 0) 742 + goto err_udplite_mib; 741 743 return 0; 742 744 745 + err_udplite_mib: 746 + snmp6_mib_free((void **)udp_stats_in6); 743 747 err_udp_mib: 744 748 snmp6_mib_free((void **)icmpv6_statistics); 745 749 err_icmp_mib: ··· 759 753 snmp6_mib_free((void **)ipv6_statistics); 760 754 snmp6_mib_free((void **)icmpv6_statistics); 761 755 snmp6_mib_free((void **)udp_stats_in6); 756 + snmp6_mib_free((void **)udplite_stats_in6); 762 757 } 763 758 764 759 static int __init inet6_init(void) ··· 787 780 if (err) 788 781 goto out_unregister_tcp_proto; 789 782 790 - err = proto_register(&rawv6_prot, 1); 783 + err = proto_register(&udplitev6_prot, 1); 791 784 if (err) 792 785 goto out_unregister_udp_proto; 786 + 787 + err = proto_register(&rawv6_prot, 1); 788 + if (err) 789 + goto out_unregister_udplite_proto; 793 790 794 791 795 792 /* Register the socket-side information for inet6_create. */ ··· 848 837 goto proc_tcp6_fail; 849 838 if (udp6_proc_init()) 850 839 goto proc_udp6_fail; 840 + if (udplite6_proc_init()) 841 + goto proc_udplite6_fail; 851 842 if (ipv6_misc_proc_init()) 852 843 goto proc_misc6_fail; 853 844 ··· 875 862 876 863 /* Init v6 transport protocols. 
*/ 877 864 udpv6_init(); 865 + udplitev6_init(); 878 866 tcpv6_init(); 879 867 880 868 ipv6_packet_init(); ··· 893 879 proc_anycast6_fail: 894 880 ipv6_misc_proc_exit(); 895 881 proc_misc6_fail: 882 + udplite6_proc_exit(); 883 + proc_udplite6_fail: 896 884 udp6_proc_exit(); 897 885 proc_udp6_fail: 898 886 tcp6_proc_exit(); ··· 918 902 sock_unregister(PF_INET6); 919 903 out_unregister_raw_proto: 920 904 proto_unregister(&rawv6_prot); 905 + out_unregister_udplite_proto: 906 + proto_unregister(&udplitev6_prot); 921 907 out_unregister_udp_proto: 922 908 proto_unregister(&udpv6_prot); 923 909 out_unregister_tcp_proto: ··· 937 919 ac6_proc_exit(); 938 920 ipv6_misc_proc_exit(); 939 921 udp6_proc_exit(); 922 + udplite6_proc_exit(); 940 923 tcp6_proc_exit(); 941 924 raw6_proc_exit(); 942 925 #endif
+9 -2
net/ipv6/ipv6_sockglue.c
··· 51 51 #include <net/inet_common.h> 52 52 #include <net/tcp.h> 53 53 #include <net/udp.h> 54 + #include <net/udplite.h> 54 55 #include <net/xfrm.h> 55 56 56 57 #include <asm/uaccess.h> ··· 240 239 struct sk_buff *pktopt; 241 240 242 241 if (sk->sk_protocol != IPPROTO_UDP && 242 + sk->sk_protocol != IPPROTO_UDPLITE && 243 243 sk->sk_protocol != IPPROTO_TCP) 244 244 break; 245 245 ··· 278 276 sk->sk_family = PF_INET; 279 277 tcp_sync_mss(sk, icsk->icsk_pmtu_cookie); 280 278 } else { 279 + struct proto *prot = &udp_prot; 280 + 281 + if (sk->sk_protocol == IPPROTO_UDPLITE) 282 + prot = &udplite_prot; 281 283 local_bh_disable(); 282 284 sock_prot_dec_use(sk->sk_prot); 283 - sock_prot_inc_use(&udp_prot); 285 + sock_prot_inc_use(prot); 284 286 local_bh_enable(); 285 - sk->sk_prot = &udp_prot; 287 + sk->sk_prot = prot; 286 288 sk->sk_socket->ops = &inet_dgram_ops; 287 289 sk->sk_family = PF_INET; 288 290 } ··· 819 813 switch (optname) { 820 814 case IPV6_ADDRFORM: 821 815 if (sk->sk_protocol != IPPROTO_UDP && 816 + sk->sk_protocol != IPPROTO_UDPLITE && 822 817 sk->sk_protocol != IPPROTO_TCP) 823 818 return -EINVAL; 824 819 if (sk->sk_state != TCP_ESTABLISHED)
+7 -3
net/ipv6/netfilter/ip6t_LOG.c
··· 270 270 } 271 271 break; 272 272 } 273 - case IPPROTO_UDP: { 273 + case IPPROTO_UDP: 274 + case IPPROTO_UDPLITE: { 274 275 struct udphdr _udph, *uh; 275 276 276 - /* Max length: 10 "PROTO=UDP " */ 277 - printk("PROTO=UDP "); 277 + if (currenthdr == IPPROTO_UDP) 278 + /* Max length: 10 "PROTO=UDP " */ 279 + printk("PROTO=UDP " ); 280 + else /* Max length: 14 "PROTO=UDPLITE " */ 281 + printk("PROTO=UDPLITE "); 278 282 279 283 if (fragment) 280 284 break;
+11
net/ipv6/proc.c
··· 49 49 fold_prot_inuse(&tcpv6_prot)); 50 50 seq_printf(seq, "UDP6: inuse %d\n", 51 51 fold_prot_inuse(&udpv6_prot)); 52 + seq_printf(seq, "UDPLITE6: inuse %d\n", 53 + fold_prot_inuse(&udplitev6_prot)); 52 54 seq_printf(seq, "RAW6: inuse %d\n", 53 55 fold_prot_inuse(&rawv6_prot)); 54 56 seq_printf(seq, "FRAG6: inuse %d memory %d\n", ··· 135 133 SNMP_MIB_SENTINEL 136 134 }; 137 135 136 + static struct snmp_mib snmp6_udplite6_list[] = { 137 + SNMP_MIB_ITEM("UdpLite6InDatagrams", UDP_MIB_INDATAGRAMS), 138 + SNMP_MIB_ITEM("UdpLite6NoPorts", UDP_MIB_NOPORTS), 139 + SNMP_MIB_ITEM("UdpLite6InErrors", UDP_MIB_INERRORS), 140 + SNMP_MIB_ITEM("UdpLite6OutDatagrams", UDP_MIB_OUTDATAGRAMS), 141 + SNMP_MIB_SENTINEL 142 + }; 143 + 138 144 static unsigned long 139 145 fold_field(void *mib[], int offt) 140 146 { ··· 177 167 snmp6_seq_show_item(seq, (void **)ipv6_statistics, snmp6_ipstats_list); 178 168 snmp6_seq_show_item(seq, (void **)icmpv6_statistics, snmp6_icmp6_list); 179 169 snmp6_seq_show_item(seq, (void **)udp_stats_in6, snmp6_udp6_list); 170 + snmp6_seq_show_item(seq, (void **)udplite_stats_in6, snmp6_udplite6_list); 180 171 } 181 172 return 0; 182 173 }
+200 -163
net/ipv6/udp.c
··· 38 38 #include <linux/skbuff.h> 39 39 #include <asm/uaccess.h> 40 40 41 - #include <net/sock.h> 42 - #include <net/snmp.h> 43 - 44 - #include <net/ipv6.h> 45 41 #include <net/ndisc.h> 46 42 #include <net/protocol.h> 47 43 #include <net/transp_v6.h> 48 44 #include <net/ip6_route.h> 49 - #include <net/addrconf.h> 50 - #include <net/ip.h> 51 - #include <net/udp.h> 52 45 #include <net/raw.h> 53 - #include <net/inet_common.h> 54 46 #include <net/tcp_states.h> 55 - 56 47 #include <net/ip6_checksum.h> 57 48 #include <net/xfrm.h> 58 49 59 50 #include <linux/proc_fs.h> 60 51 #include <linux/seq_file.h> 52 + #include "udp_impl.h" 61 53 62 54 DEFINE_SNMP_STAT(struct udp_mib, udp_stats_in6) __read_mostly; 63 55 ··· 58 66 return udp_get_port(sk, snum, ipv6_rcv_saddr_equal); 59 67 } 60 68 61 - static void udp_v6_hash(struct sock *sk) 62 - { 63 - BUG(); 64 - } 65 - 66 - static void udp_v6_unhash(struct sock *sk) 67 - { 68 - write_lock_bh(&udp_hash_lock); 69 - if (sk_del_node_init(sk)) { 70 - inet_sk(sk)->num = 0; 71 - sock_prot_dec_use(sk->sk_prot); 72 - } 73 - write_unlock_bh(&udp_hash_lock); 74 - } 75 - 76 - static struct sock *udp_v6_lookup(struct in6_addr *saddr, u16 sport, 77 - struct in6_addr *daddr, u16 dport, int dif) 69 + static struct sock *__udp6_lib_lookup(struct in6_addr *saddr, __be16 sport, 70 + struct in6_addr *daddr, __be16 dport, 71 + int dif, struct hlist_head udptable[]) 78 72 { 79 73 struct sock *sk, *result = NULL; 80 74 struct hlist_node *node; ··· 68 90 int badness = -1; 69 91 70 92 read_lock(&udp_hash_lock); 71 - sk_for_each(sk, node, &udp_hash[hnum & (UDP_HTABLE_SIZE - 1)]) { 93 + sk_for_each(sk, node, &udptable[hnum & (UDP_HTABLE_SIZE - 1)]) { 72 94 struct inet_sock *inet = inet_sk(sk); 73 95 74 96 if (inet->num == hnum && sk->sk_family == PF_INET6) { ··· 110 132 } 111 133 112 134 /* 113 - * 114 - */ 115 - 116 - static void udpv6_close(struct sock *sk, long timeout) 117 - { 118 - sk_common_release(sk); 119 - } 120 - 121 - /* 122 135 * This should be 
easy, if there is something there we 123 136 * return it, otherwise we block. 124 137 */ 125 138 126 - static int udpv6_recvmsg(struct kiocb *iocb, struct sock *sk, 139 + int udpv6_recvmsg(struct kiocb *iocb, struct sock *sk, 127 140 struct msghdr *msg, size_t len, 128 141 int noblock, int flags, int *addr_len) 129 142 { ··· 122 153 struct inet_sock *inet = inet_sk(sk); 123 154 struct sk_buff *skb; 124 155 size_t copied; 125 - int err; 156 + int err, copy_only, is_udplite = IS_UDPLITE(sk); 126 157 127 158 if (addr_len) 128 159 *addr_len=sizeof(struct sockaddr_in6); ··· 141 172 msg->msg_flags |= MSG_TRUNC; 142 173 } 143 174 144 - if (skb->ip_summed==CHECKSUM_UNNECESSARY) { 145 - err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr), msg->msg_iov, 146 - copied); 147 - } else if (msg->msg_flags&MSG_TRUNC) { 148 - if (__skb_checksum_complete(skb)) 175 + /* 176 + * Decide whether to checksum and/or copy data. 177 + */ 178 + copy_only = (skb->ip_summed==CHECKSUM_UNNECESSARY); 179 + 180 + if (is_udplite || (!copy_only && msg->msg_flags&MSG_TRUNC)) { 181 + if (__udp_lib_checksum_complete(skb)) 149 182 goto csum_copy_err; 150 - err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr), msg->msg_iov, 151 - copied); 152 - } else { 183 + copy_only = 1; 184 + } 185 + 186 + if (copy_only) 187 + err = skb_copy_datagram_iovec(skb, sizeof(struct udphdr), 188 + msg->msg_iov, copied ); 189 + else { 153 190 err = skb_copy_and_csum_datagram_iovec(skb, sizeof(struct udphdr), msg->msg_iov); 154 191 if (err == -EINVAL) 155 192 goto csum_copy_err; ··· 206 231 skb_kill_datagram(sk, skb, flags); 207 232 208 233 if (flags & MSG_DONTWAIT) { 209 - UDP6_INC_STATS_USER(UDP_MIB_INERRORS); 234 + UDP6_INC_STATS_USER(UDP_MIB_INERRORS, is_udplite); 210 235 return -EAGAIN; 211 236 } 212 237 goto try_again; 213 238 } 214 239 215 - static void udpv6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, 216 - int type, int code, int offset, __be32 info) 240 + void __udp6_lib_err(struct sk_buff *skb, 
struct inet6_skb_parm *opt, 241 + int type, int code, int offset, __be32 info, 242 + struct hlist_head udptable[] ) 217 243 { 218 244 struct ipv6_pinfo *np; 219 245 struct ipv6hdr *hdr = (struct ipv6hdr*)skb->data; ··· 224 248 struct sock *sk; 225 249 int err; 226 250 227 - sk = udp_v6_lookup(daddr, uh->dest, saddr, uh->source, inet6_iif(skb)); 228 - 251 + sk = __udp6_lib_lookup(daddr, uh->dest, 252 + saddr, uh->source, inet6_iif(skb), udptable); 229 253 if (sk == NULL) 230 254 return; 231 255 ··· 246 270 sock_put(sk); 247 271 } 248 272 249 - static inline int udpv6_queue_rcv_skb(struct sock * sk, struct sk_buff *skb) 273 + static __inline__ void udpv6_err(struct sk_buff *skb, 274 + struct inet6_skb_parm *opt, int type, 275 + int code, int offset, __u32 info ) 250 276 { 277 + return __udp6_lib_err(skb, opt, type, code, offset, info, udp_hash); 278 + } 279 + 280 + int udpv6_queue_rcv_skb(struct sock * sk, struct sk_buff *skb) 281 + { 282 + struct udp_sock *up = udp_sk(sk); 251 283 int rc; 252 284 253 - if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb)) { 254 - kfree_skb(skb); 255 - return -1; 285 + if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb)) 286 + goto drop; 287 + 288 + /* 289 + * UDP-Lite specific tests, ignored on UDP sockets (see net/ipv4/udp.c). 
290 + */ 291 + if ((up->pcflag & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) { 292 + 293 + if (up->pcrlen == 0) { /* full coverage was set */ 294 + LIMIT_NETDEBUG(KERN_WARNING "UDPLITE6: partial coverage" 295 + " %d while full coverage %d requested\n", 296 + UDP_SKB_CB(skb)->cscov, skb->len); 297 + goto drop; 298 + } 299 + if (UDP_SKB_CB(skb)->cscov < up->pcrlen) { 300 + LIMIT_NETDEBUG(KERN_WARNING "UDPLITE6: coverage %d " 301 + "too small, need min %d\n", 302 + UDP_SKB_CB(skb)->cscov, up->pcrlen); 303 + goto drop; 304 + } 256 305 } 257 306 258 - if (skb_checksum_complete(skb)) { 259 - UDP6_INC_STATS_BH(UDP_MIB_INERRORS); 260 - kfree_skb(skb); 261 - return 0; 262 - } 307 + if (udp_lib_checksum_complete(skb)) 308 + goto drop; 263 309 264 310 if ((rc = sock_queue_rcv_skb(sk,skb)) < 0) { 265 311 /* Note that an ENOMEM error is charged twice */ 266 312 if (rc == -ENOMEM) 267 - UDP6_INC_STATS_BH(UDP_MIB_RCVBUFERRORS); 268 - UDP6_INC_STATS_BH(UDP_MIB_INERRORS); 269 - kfree_skb(skb); 270 - return 0; 313 + UDP6_INC_STATS_BH(UDP_MIB_RCVBUFERRORS, up->pcflag); 314 + goto drop; 271 315 } 272 - UDP6_INC_STATS_BH(UDP_MIB_INDATAGRAMS); 316 + UDP6_INC_STATS_BH(UDP_MIB_INDATAGRAMS, up->pcflag); 273 317 return 0; 318 + drop: 319 + UDP6_INC_STATS_BH(UDP_MIB_INERRORS, up->pcflag); 320 + kfree_skb(skb); 321 + return -1; 274 322 } 275 323 276 324 static struct sock *udp_v6_mcast_next(struct sock *sk, ··· 338 338 * Note: called only from the BH handler context, 339 339 * so we don't need to lock the hashes. 
340 340 */ 341 - static void udpv6_mcast_deliver(struct udphdr *uh, 342 - struct in6_addr *saddr, struct in6_addr *daddr, 343 - struct sk_buff *skb) 341 + static int __udp6_lib_mcast_deliver(struct sk_buff *skb, struct in6_addr *saddr, 342 + struct in6_addr *daddr, struct hlist_head udptable[]) 344 343 { 345 344 struct sock *sk, *sk2; 345 + const struct udphdr *uh = skb->h.uh; 346 346 int dif; 347 347 348 348 read_lock(&udp_hash_lock); 349 - sk = sk_head(&udp_hash[ntohs(uh->dest) & (UDP_HTABLE_SIZE - 1)]); 349 + sk = sk_head(&udptable[ntohs(uh->dest) & (UDP_HTABLE_SIZE - 1)]); 350 350 dif = inet6_iif(skb); 351 351 sk = udp_v6_mcast_next(sk, uh->dest, daddr, uh->source, saddr, dif); 352 352 if (!sk) { ··· 364 364 udpv6_queue_rcv_skb(sk, skb); 365 365 out: 366 366 read_unlock(&udp_hash_lock); 367 + return 0; 367 368 } 368 369 369 - static int udpv6_rcv(struct sk_buff **pskb) 370 + static inline int udp6_csum_init(struct sk_buff *skb, struct udphdr *uh) 371 + 372 + { 373 + if (uh->check == 0) { 374 + /* RFC 2460 section 8.1 says that we SHOULD log 375 + this error. Well, it is reasonable. 
376 + */ 377 + LIMIT_NETDEBUG(KERN_INFO "IPv6: udp checksum is 0\n"); 378 + return 1; 379 + } 380 + if (skb->ip_summed == CHECKSUM_COMPLETE && 381 + !csum_ipv6_magic(&skb->nh.ipv6h->saddr, &skb->nh.ipv6h->daddr, 382 + skb->len, IPPROTO_UDP, skb->csum )) 383 + skb->ip_summed = CHECKSUM_UNNECESSARY; 384 + 385 + if (skb->ip_summed != CHECKSUM_UNNECESSARY) 386 + skb->csum = ~csum_ipv6_magic(&skb->nh.ipv6h->saddr, 387 + &skb->nh.ipv6h->daddr, 388 + skb->len, IPPROTO_UDP, 0); 389 + 390 + return (UDP_SKB_CB(skb)->partial_cov = 0); 391 + } 392 + 393 + int __udp6_lib_rcv(struct sk_buff **pskb, struct hlist_head udptable[], 394 + int is_udplite) 370 395 { 371 396 struct sk_buff *skb = *pskb; 372 397 struct sock *sk; ··· 408 383 uh = skb->h.uh; 409 384 410 385 ulen = ntohs(uh->len); 411 - 412 - /* Check for jumbo payload */ 413 - if (ulen == 0) 414 - ulen = skb->len; 415 - 416 - if (ulen > skb->len || ulen < sizeof(*uh)) 386 + if (ulen > skb->len) 417 387 goto short_packet; 418 388 419 - if (uh->check == 0) { 420 - /* RFC 2460 section 8.1 says that we SHOULD log 421 - this error. Well, it is reasonable. 422 - */ 423 - LIMIT_NETDEBUG(KERN_INFO "IPv6: udp checksum is 0\n"); 424 - goto discard; 425 - } 389 + if(! is_udplite ) { /* UDP validates ulen. */ 426 390 427 - if (ulen < skb->len) { 428 - if (pskb_trim_rcsum(skb, ulen)) 391 + /* Check for jumbo payload */ 392 + if (ulen == 0) 393 + ulen = skb->len; 394 + 395 + if (ulen < sizeof(*uh)) 396 + goto short_packet; 397 + 398 + if (ulen < skb->len) { 399 + if (pskb_trim_rcsum(skb, ulen)) 400 + goto short_packet; 401 + saddr = &skb->nh.ipv6h->saddr; 402 + daddr = &skb->nh.ipv6h->daddr; 403 + uh = skb->h.uh; 404 + } 405 + 406 + if (udp6_csum_init(skb, uh)) 429 407 goto discard; 430 - saddr = &skb->nh.ipv6h->saddr; 431 - daddr = &skb->nh.ipv6h->daddr; 432 - uh = skb->h.uh; 408 + 409 + } else { /* UDP-Lite validates cscov. 
*/ 410 + if (udplite6_csum_init(skb, uh)) 411 + goto discard; 433 412 } 434 - 435 - if (skb->ip_summed == CHECKSUM_COMPLETE && 436 - !csum_ipv6_magic(saddr, daddr, ulen, IPPROTO_UDP, skb->csum)) 437 - skb->ip_summed = CHECKSUM_UNNECESSARY; 438 - 439 - if (skb->ip_summed != CHECKSUM_UNNECESSARY) 440 - skb->csum = ~csum_ipv6_magic(saddr, daddr, ulen, IPPROTO_UDP, 0); 441 413 442 414 /* 443 415 * Multicast receive code 444 416 */ 445 - if (ipv6_addr_is_multicast(daddr)) { 446 - udpv6_mcast_deliver(uh, saddr, daddr, skb); 447 - return 0; 448 - } 417 + if (ipv6_addr_is_multicast(daddr)) 418 + return __udp6_lib_mcast_deliver(skb, saddr, daddr, udptable); 449 419 450 420 /* Unicast */ 451 421 ··· 448 428 * check socket cache ... must talk to Alan about his plans 449 429 * for sock caches... i'll skip this for now. 450 430 */ 451 - sk = udp_v6_lookup(saddr, uh->source, daddr, uh->dest, inet6_iif(skb)); 431 + sk = __udp6_lib_lookup(saddr, uh->source, 432 + daddr, uh->dest, inet6_iif(skb), udptable); 452 433 453 434 if (sk == NULL) { 454 435 if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb)) 455 436 goto discard; 456 437 457 - if (skb_checksum_complete(skb)) 438 + if (udp_lib_checksum_complete(skb)) 458 439 goto discard; 459 - UDP6_INC_STATS_BH(UDP_MIB_NOPORTS); 440 + UDP6_INC_STATS_BH(UDP_MIB_NOPORTS, is_udplite); 460 441 461 442 icmpv6_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_PORT_UNREACH, 0, dev); 462 443 ··· 472 451 return(0); 473 452 474 453 short_packet: 475 - if (net_ratelimit()) 476 - printk(KERN_DEBUG "UDP: short packet: %d/%u\n", ulen, skb->len); 454 + LIMIT_NETDEBUG(KERN_DEBUG "UDP%sv6: short packet: %d/%u\n", 455 + is_udplite? 
"-Lite" : "", ulen, skb->len); 477 456 478 457 discard: 479 - UDP6_INC_STATS_BH(UDP_MIB_INERRORS); 458 + UDP6_INC_STATS_BH(UDP_MIB_INERRORS, is_udplite); 480 459 kfree_skb(skb); 481 460 return(0); 482 461 } 462 + 463 + static __inline__ int udpv6_rcv(struct sk_buff **pskb) 464 + { 465 + return __udp6_lib_rcv(pskb, udp_hash, 0); 466 + } 467 + 483 468 /* 484 469 * Throw away all pending data and cancel the corking. Socket is locked. 485 470 */ ··· 511 484 struct inet_sock *inet = inet_sk(sk); 512 485 struct flowi *fl = &inet->cork.fl; 513 486 int err = 0; 487 + u32 csum = 0; 514 488 515 489 /* Grab the skbuff where UDP header space exists. */ 516 490 if ((skb = skb_peek(&sk->sk_write_queue)) == NULL) ··· 526 498 uh->len = htons(up->len); 527 499 uh->check = 0; 528 500 529 - if (sk->sk_no_check == UDP_CSUM_NOXMIT) { 530 - skb->ip_summed = CHECKSUM_NONE; 531 - goto send; 532 - } 501 + if (up->pcflag) 502 + csum = udplite_csum_outgoing(sk, skb); 503 + else 504 + csum = udp_csum_outgoing(sk, skb); 533 505 534 - if (skb_queue_len(&sk->sk_write_queue) == 1) { 535 - skb->csum = csum_partial((char *)uh, 536 - sizeof(struct udphdr), skb->csum); 537 - uh->check = csum_ipv6_magic(&fl->fl6_src, 538 - &fl->fl6_dst, 539 - up->len, fl->proto, skb->csum); 540 - } else { 541 - u32 tmp_csum = 0; 542 - 543 - skb_queue_walk(&sk->sk_write_queue, skb) { 544 - tmp_csum = csum_add(tmp_csum, skb->csum); 545 - } 546 - tmp_csum = csum_partial((char *)uh, 547 - sizeof(struct udphdr), tmp_csum); 548 - tmp_csum = csum_ipv6_magic(&fl->fl6_src, 549 - &fl->fl6_dst, 550 - up->len, fl->proto, tmp_csum); 551 - uh->check = tmp_csum; 552 - 553 - } 506 + /* add protocol-dependent pseudo-header */ 507 + uh->check = csum_ipv6_magic(&fl->fl6_src, &fl->fl6_dst, 508 + up->len, fl->proto, csum ); 554 509 if (uh->check == 0) 555 510 uh->check = -1; 556 511 557 - send: 558 512 err = ip6_push_pending_frames(sk); 559 513 out: 560 514 up->len = 0; ··· 544 534 return err; 545 535 } 546 536 547 - static int 
udpv6_sendmsg(struct kiocb *iocb, struct sock *sk, 537 + int udpv6_sendmsg(struct kiocb *iocb, struct sock *sk, 548 538 struct msghdr *msg, size_t len) 549 539 { 550 540 struct ipv6_txoptions opt_space; ··· 564 554 int corkreq = up->corkflag || msg->msg_flags&MSG_MORE; 565 555 int err; 566 556 int connected = 0; 557 + int is_udplite = up->pcflag; 558 + int (*getfrag)(void *, char *, int, int, int, struct sk_buff *); 567 559 568 560 /* destination address check */ 569 561 if (sin6) { ··· 706 694 opt = fl6_merge_options(&opt_space, flowlabel, opt); 707 695 opt = ipv6_fixup_options(&opt_space, opt); 708 696 709 - fl.proto = IPPROTO_UDP; 697 + fl.proto = sk->sk_protocol; 710 698 ipv6_addr_copy(&fl.fl6_dst, daddr); 711 699 if (ipv6_addr_any(&fl.fl6_src) && !ipv6_addr_any(&np->saddr)) 712 700 ipv6_addr_copy(&fl.fl6_src, &np->saddr); ··· 773 761 774 762 do_append_data: 775 763 up->len += ulen; 776 - err = ip6_append_data(sk, ip_generic_getfrag, msg->msg_iov, ulen, 764 + getfrag = is_udplite ? udplite_getfrag : ip_generic_getfrag; 765 + err = ip6_append_data(sk, getfrag, msg->msg_iov, ulen, 777 766 sizeof(struct udphdr), hlimit, tclass, opt, &fl, 778 767 (struct rt6_info*)dst, 779 768 corkreq ? msg->msg_flags|MSG_MORE : msg->msg_flags); ··· 806 793 out: 807 794 fl6_sock_release(flowlabel); 808 795 if (!err) { 809 - UDP6_INC_STATS_USER(UDP_MIB_OUTDATAGRAMS); 796 + UDP6_INC_STATS_USER(UDP_MIB_OUTDATAGRAMS, is_udplite); 810 797 return len; 811 798 } 812 799 /* ··· 817 804 * seems like overkill. 
818 805 */ 819 806 if (err == -ENOBUFS || test_bit(SOCK_NOSPACE, &sk->sk_socket->flags)) { 820 - UDP6_INC_STATS_USER(UDP_MIB_SNDBUFERRORS); 807 + UDP6_INC_STATS_USER(UDP_MIB_SNDBUFERRORS, is_udplite); 821 808 } 822 809 return err; 823 810 ··· 829 816 goto out; 830 817 } 831 818 832 - static int udpv6_destroy_sock(struct sock *sk) 819 + int udpv6_destroy_sock(struct sock *sk) 833 820 { 834 821 lock_sock(sk); 835 822 udp_v6_flush_pending_frames(sk); ··· 867 854 release_sock(sk); 868 855 } 869 856 break; 870 - 871 857 case UDP_ENCAP: 872 858 switch (val) { 873 859 case 0: ··· 878 866 } 879 867 break; 880 868 869 + case UDPLITE_SEND_CSCOV: 870 + if (!up->pcflag) /* Disable the option on UDP sockets */ 871 + return -ENOPROTOOPT; 872 + if (val != 0 && val < 8) /* Illegal coverage: use default (8) */ 873 + val = 8; 874 + up->pcslen = val; 875 + up->pcflag |= UDPLITE_SEND_CC; 876 + break; 877 + 878 + case UDPLITE_RECV_CSCOV: 879 + if (!up->pcflag) /* Disable the option on UDP sockets */ 880 + return -ENOPROTOOPT; 881 + if (val != 0 && val < 8) /* Avoid silly minimal values. 
*/ 882 + val = 8; 883 + up->pcrlen = val; 884 + up->pcflag |= UDPLITE_RECV_CC; 885 + break; 886 + 881 887 default: 882 888 err = -ENOPROTOOPT; 883 889 break; ··· 904 874 return err; 905 875 } 906 876 907 - static int udpv6_setsockopt(struct sock *sk, int level, int optname, 908 - char __user *optval, int optlen) 877 + int udpv6_setsockopt(struct sock *sk, int level, int optname, 878 + char __user *optval, int optlen) 909 879 { 910 - if (level != SOL_UDP) 911 - return ipv6_setsockopt(sk, level, optname, optval, optlen); 912 - return do_udpv6_setsockopt(sk, level, optname, optval, optlen); 880 + if (level == SOL_UDP || level == SOL_UDPLITE) 881 + return do_udpv6_setsockopt(sk, level, optname, optval, optlen); 882 + return ipv6_setsockopt(sk, level, optname, optval, optlen); 913 883 } 914 884 915 885 #ifdef CONFIG_COMPAT 916 - static int compat_udpv6_setsockopt(struct sock *sk, int level, int optname, 917 - char __user *optval, int optlen) 886 + int compat_udpv6_setsockopt(struct sock *sk, int level, int optname, 887 + char __user *optval, int optlen) 918 888 { 919 - if (level != SOL_UDP) 920 - return compat_ipv6_setsockopt(sk, level, optname, 921 - optval, optlen); 922 - return do_udpv6_setsockopt(sk, level, optname, optval, optlen); 889 + if (level == SOL_UDP || level == SOL_UDPLITE) 890 + return do_udpv6_setsockopt(sk, level, optname, optval, optlen); 891 + return compat_ipv6_setsockopt(sk, level, optname, optval, optlen); 923 892 } 924 893 #endif 925 894 ··· 945 916 val = up->encap_type; 946 917 break; 947 918 919 + case UDPLITE_SEND_CSCOV: 920 + val = up->pcslen; 921 + break; 922 + 923 + case UDPLITE_RECV_CSCOV: 924 + val = up->pcrlen; 925 + break; 926 + 948 927 default: 949 928 return -ENOPROTOOPT; 950 929 }; ··· 964 927 return 0; 965 928 } 966 929 967 - static int udpv6_getsockopt(struct sock *sk, int level, int optname, 968 - char __user *optval, int __user *optlen) 930 + int udpv6_getsockopt(struct sock *sk, int level, int optname, 931 + char __user *optval, 
int __user *optlen) 969 932 { 970 - if (level != SOL_UDP) 971 - return ipv6_getsockopt(sk, level, optname, optval, optlen); 972 - return do_udpv6_getsockopt(sk, level, optname, optval, optlen); 933 + if (level == SOL_UDP || level == SOL_UDPLITE) 934 + return do_udpv6_getsockopt(sk, level, optname, optval, optlen); 935 + return ipv6_getsockopt(sk, level, optname, optval, optlen); 973 936 } 974 937 975 938 #ifdef CONFIG_COMPAT 976 - static int compat_udpv6_getsockopt(struct sock *sk, int level, int optname, 977 - char __user *optval, int __user *optlen) 939 + int compat_udpv6_getsockopt(struct sock *sk, int level, int optname, 940 + char __user *optval, int __user *optlen) 978 941 { 979 - if (level != SOL_UDP) 980 - return compat_ipv6_getsockopt(sk, level, optname, 981 - optval, optlen); 982 - return do_udpv6_getsockopt(sk, level, optname, optval, optlen); 942 + if (level == SOL_UDP || level == SOL_UDPLITE) 943 + return do_udpv6_getsockopt(sk, level, optname, optval, optlen); 944 + return compat_ipv6_getsockopt(sk, level, optname, optval, optlen); 983 945 } 984 946 #endif 985 947 ··· 1019 983 atomic_read(&sp->sk_refcnt), sp); 1020 984 } 1021 985 1022 - static int udp6_seq_show(struct seq_file *seq, void *v) 986 + int udp6_seq_show(struct seq_file *seq, void *v) 1023 987 { 1024 988 if (v == SEQ_START_TOKEN) 1025 989 seq_printf(seq, ··· 1038 1002 .owner = THIS_MODULE, 1039 1003 .name = "udp6", 1040 1004 .family = AF_INET6, 1005 + .hashtable = udp_hash, 1041 1006 .seq_show = udp6_seq_show, 1042 1007 .seq_fops = &udp6_seq_fops, 1043 1008 }; ··· 1058 1021 struct proto udpv6_prot = { 1059 1022 .name = "UDPv6", 1060 1023 .owner = THIS_MODULE, 1061 - .close = udpv6_close, 1024 + .close = udp_lib_close, 1062 1025 .connect = ip6_datagram_connect, 1063 1026 .disconnect = udp_disconnect, 1064 1027 .ioctl = udp_ioctl, ··· 1068 1031 .sendmsg = udpv6_sendmsg, 1069 1032 .recvmsg = udpv6_recvmsg, 1070 1033 .backlog_rcv = udpv6_queue_rcv_skb, 1071 - .hash = udp_v6_hash, 1072 - .unhash 
= udp_v6_unhash, 1034 + .hash = udp_lib_hash, 1035 + .unhash = udp_lib_unhash, 1073 1036 .get_port = udp_v6_get_port, 1074 1037 .obj_size = sizeof(struct udp6_sock), 1075 1038 #ifdef CONFIG_COMPAT
+34
net/ipv6/udp_impl.h
··· 1 + #ifndef _UDP6_IMPL_H 2 + #define _UDP6_IMPL_H 3 + #include <net/udp.h> 4 + #include <net/udplite.h> 5 + #include <net/protocol.h> 6 + #include <net/addrconf.h> 7 + #include <net/inet_common.h> 8 + 9 + extern int __udp6_lib_rcv(struct sk_buff **, struct hlist_head [], int ); 10 + extern void __udp6_lib_err(struct sk_buff *, struct inet6_skb_parm *, 11 + int , int , int , __be32 , struct hlist_head []); 12 + 13 + extern int udpv6_getsockopt(struct sock *sk, int level, int optname, 14 + char __user *optval, int __user *optlen); 15 + extern int udpv6_setsockopt(struct sock *sk, int level, int optname, 16 + char __user *optval, int optlen); 17 + #ifdef CONFIG_COMPAT 18 + extern int compat_udpv6_setsockopt(struct sock *sk, int level, int optname, 19 + char __user *optval, int optlen); 20 + extern int compat_udpv6_getsockopt(struct sock *sk, int level, int optname, 21 + char __user *optval, int __user *optlen); 22 + #endif 23 + extern int udpv6_sendmsg(struct kiocb *iocb, struct sock *sk, 24 + struct msghdr *msg, size_t len); 25 + extern int udpv6_recvmsg(struct kiocb *iocb, struct sock *sk, 26 + struct msghdr *msg, size_t len, 27 + int noblock, int flags, int *addr_len); 28 + extern int udpv6_queue_rcv_skb(struct sock * sk, struct sk_buff *skb); 29 + extern int udpv6_destroy_sock(struct sock *sk); 30 + 31 + #ifdef CONFIG_PROC_FS 32 + extern int udp6_seq_show(struct seq_file *seq, void *v); 33 + #endif 34 + #endif /* _UDP6_IMPL_H */
+105
net/ipv6/udplite.c
··· 1 + /* 2 + * UDPLITEv6 An implementation of the UDP-Lite protocol over IPv6. 3 + * See also net/ipv4/udplite.c 4 + * 5 + * Version: $Id: udplite.c,v 1.9 2006/10/19 08:28:10 gerrit Exp $ 6 + * 7 + * Authors: Gerrit Renker <gerrit@erg.abdn.ac.uk> 8 + * 9 + * Changes: 10 + * Fixes: 11 + * This program is free software; you can redistribute it and/or 12 + * modify it under the terms of the GNU General Public License 13 + * as published by the Free Software Foundation; either version 14 + * 2 of the License, or (at your option) any later version. 15 + */ 16 + #include "udp_impl.h" 17 + 18 + DEFINE_SNMP_STAT(struct udp_mib, udplite_stats_in6) __read_mostly; 19 + 20 + static __inline__ int udplitev6_rcv(struct sk_buff **pskb) 21 + { 22 + return __udp6_lib_rcv(pskb, udplite_hash, 1); 23 + } 24 + 25 + static __inline__ void udplitev6_err(struct sk_buff *skb, 26 + struct inet6_skb_parm *opt, 27 + int type, int code, int offset, __u32 info) 28 + { 29 + return __udp6_lib_err(skb, opt, type, code, offset, info, udplite_hash); 30 + } 31 + 32 + static struct inet6_protocol udplitev6_protocol = { 33 + .handler = udplitev6_rcv, 34 + .err_handler = udplitev6_err, 35 + .flags = INET6_PROTO_NOPOLICY|INET6_PROTO_FINAL, 36 + }; 37 + 38 + static __inline__ int udplite_v6_get_port(struct sock *sk, unsigned short snum) 39 + { 40 + return udplite_get_port(sk, snum, ipv6_rcv_saddr_equal); 41 + } 42 + 43 + struct proto udplitev6_prot = { 44 + .name = "UDPLITEv6", 45 + .owner = THIS_MODULE, 46 + .close = udp_lib_close, 47 + .connect = ip6_datagram_connect, 48 + .disconnect = udp_disconnect, 49 + .ioctl = udp_ioctl, 50 + .init = udplite_sk_init, 51 + .destroy = udpv6_destroy_sock, 52 + .setsockopt = udpv6_setsockopt, 53 + .getsockopt = udpv6_getsockopt, 54 + .sendmsg = udpv6_sendmsg, 55 + .recvmsg = udpv6_recvmsg, 56 + .backlog_rcv = udpv6_queue_rcv_skb, 57 + .hash = udp_lib_hash, 58 + .unhash = udp_lib_unhash, 59 + .get_port = udplite_v6_get_port, 60 + .obj_size = sizeof(struct udp6_sock), 
61 + #ifdef CONFIG_COMPAT 62 + .compat_setsockopt = compat_udpv6_setsockopt, 63 + .compat_getsockopt = compat_udpv6_getsockopt, 64 + #endif 65 + }; 66 + 67 + static struct inet_protosw udplite6_protosw = { 68 + .type = SOCK_DGRAM, 69 + .protocol = IPPROTO_UDPLITE, 70 + .prot = &udplitev6_prot, 71 + .ops = &inet6_dgram_ops, 72 + .capability = -1, 73 + .no_check = 0, 74 + .flags = INET_PROTOSW_PERMANENT, 75 + }; 76 + 77 + void __init udplitev6_init(void) 78 + { 79 + if (inet6_add_protocol(&udplitev6_protocol, IPPROTO_UDPLITE) < 0) 80 + printk(KERN_ERR "%s: Could not register.\n", __FUNCTION__); 81 + 82 + inet6_register_protosw(&udplite6_protosw); 83 + } 84 + 85 + #ifdef CONFIG_PROC_FS 86 + static struct file_operations udplite6_seq_fops; 87 + static struct udp_seq_afinfo udplite6_seq_afinfo = { 88 + .owner = THIS_MODULE, 89 + .name = "udplite6", 90 + .family = AF_INET6, 91 + .hashtable = udplite_hash, 92 + .seq_show = udp6_seq_show, 93 + .seq_fops = &udplite6_seq_fops, 94 + }; 95 + 96 + int __init udplite6_proc_init(void) 97 + { 98 + return udp_proc_register(&udplite6_seq_afinfo); 99 + } 100 + 101 + void udplite6_proc_exit(void) 102 + { 103 + udp_proc_unregister(&udplite6_seq_afinfo); 104 + } 105 + #endif
+1
net/ipv6/xfrm6_policy.c
··· 274 274 break; 275 275 276 276 case IPPROTO_UDP: 277 + case IPPROTO_UDPLITE: 277 278 case IPPROTO_TCP: 278 279 case IPPROTO_SCTP: 279 280 case IPPROTO_DCCP:
+3 -2
net/netfilter/xt_multiport.c
··· 1 - /* Kernel module to match one of a list of TCP/UDP/SCTP/DCCP ports: ports are in 2 - the same place so we can treat them as equal. */ 1 + /* Kernel module to match one of a list of TCP/UDP(-Lite)/SCTP/DCCP ports: 2 + ports are in the same place so we can treat them as equal. */ 3 3 4 4 /* (C) 1999-2001 Paul `Rusty' Russell 5 5 * (C) 2002-2004 Netfilter Core Team <coreteam@netfilter.org> ··· 162 162 { 163 163 /* Must specify supported protocol, no unknown flags or bad count */ 164 164 return (proto == IPPROTO_TCP || proto == IPPROTO_UDP 165 + || proto == IPPROTO_UDPLITE 165 166 || proto == IPPROTO_SCTP || proto == IPPROTO_DCCP) 166 167 && !(ip_invflags & XT_INV_PROTO) 167 168 && (match_flags == XT_MULTIPORT_SOURCE
+19 -1
net/netfilter/xt_tcpudp.c
··· 10 10 #include <linux/netfilter_ipv4/ip_tables.h> 11 11 #include <linux/netfilter_ipv6/ip6_tables.h> 12 12 13 - MODULE_DESCRIPTION("x_tables match for TCP and UDP, supports IPv4 and IPv6"); 13 + MODULE_DESCRIPTION("x_tables match for TCP and UDP(-Lite), supports IPv4 and IPv6"); 14 14 MODULE_LICENSE("GPL"); 15 15 MODULE_ALIAS("xt_tcp"); 16 16 MODULE_ALIAS("xt_udp"); ··· 232 232 .match = udp_match, 233 233 .matchsize = sizeof(struct xt_udp), 234 234 .proto = IPPROTO_UDP, 235 + .me = THIS_MODULE, 236 + }, 237 + { 238 + .name = "udplite", 239 + .family = AF_INET, 240 + .checkentry = udp_checkentry, 241 + .match = udp_match, 242 + .matchsize = sizeof(struct xt_udp), 243 + .proto = IPPROTO_UDPLITE, 244 + .me = THIS_MODULE, 245 + }, 246 + { 247 + .name = "udplite", 248 + .family = AF_INET6, 249 + .checkentry = udp_checkentry, 250 + .match = udp_match, 251 + .matchsize = sizeof(struct xt_udp), 252 + .proto = IPPROTO_UDPLITE, 235 253 .me = THIS_MODULE, 236 254 }, 237 255 };