Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull more networking updates from Jakub Kicinski:
"Networking fixes and rethook patches.

Features:

- kprobes: rethook: x86: replace kretprobe trampoline with rethook

Current release - regressions:

- sfc: avoid null-deref on systems without NUMA awareness in the new
queue sizing code

Current release - new code bugs:

- vxlan: do not feed vxlan_vnifilter_dump_dev with non-vxlan devices

- eth: lan966x: fix null-deref on PHY pointer in timestamp ioctl when
interface is down

Previous releases - always broken:

- openvswitch: correct neighbor discovery target mask field in the
flow dump

- wireguard: ignore v6 endpoints when ipv6 is disabled and fix a leak

- rxrpc: fix call timer start racing with call destruction

- rxrpc: fix null-deref when security type is rxrpc_no_security

- can: fix UAF bugs around echo skbs in multiple drivers

Misc:

- docs: move netdev-FAQ to the 'process' section of the
documentation"

* tag 'net-5.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (57 commits)
vxlan: do not feed vxlan_vnifilter_dump_dev with non vxlan devices
openvswitch: Add recirc_id to recirc warning
rxrpc: fix some null-ptr-deref bugs in server_key.c
rxrpc: Fix call timer start racing with call destruction
net: hns3: fix software vlan talbe of vlan 0 inconsistent with hardware
net: hns3: fix the concurrency between functions reading debugfs
docs: netdev: move the netdev-FAQ to the process pages
docs: netdev: broaden the new vs old code formatting guidelines
docs: netdev: call out the merge window in tag checking
docs: netdev: add missing back ticks
docs: netdev: make the testing requirement more stringent
docs: netdev: add a question about re-posting frequency
docs: netdev: rephrase the 'should I update patchwork' question
docs: netdev: rephrase the 'Under review' question
docs: netdev: shorten the name and mention msgid for patch status
docs: netdev: note that RFC postings are allowed any time
docs: netdev: turn the net-next closed into a Warning
docs: netdev: move the patch marking section up
docs: netdev: minor reword
docs: netdev: replace references to old archives
...

+588 -337
+1 -1
Documentation/bpf/bpf_devel_QA.rst
@@ -658,7 +658,7 @@
 
 .. Links
 .. _Documentation/process/: https://www.kernel.org/doc/html/latest/process/
-.. _netdev-FAQ: ../networking/netdev-FAQ.rst
+.. _netdev-FAQ: Documentation/process/maintainer-netdev.rst
 .. _selftests:
    https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/bpf/
 .. _Documentation/dev-tools/kselftest.rst:
+3 -1
Documentation/devicetree/bindings/net/qcom,ethqos.txt
@@ -7,7 +7,9 @@
 
 Required properties:
 
-- compatible: Should be qcom,qcs404-ethqos"
+- compatible: Should be one of:
+    "qcom,qcs404-ethqos"
+    "qcom,sm8150-ethqos"
 
 - reg: Address and length of the register set for the device
 
+2 -1
Documentation/networking/index.rst
@@ -1,12 +1,13 @@
 Linux Networking Documentation
 ==============================
 
+Refer to :ref:`netdev-FAQ` for a guide on netdev development process specifics.
+
 Contents:
 
 .. toctree::
    :maxdepth: 2
 
-   netdev-FAQ
    af_xdp
    bareudp
    batman-adv
+68 -46
Documentation/networking/netdev-FAQ.rst → Documentation/process/maintainer-netdev.rst
@@ -16,12 +16,10 @@
 volume of traffic have their own specific mailing lists.
 
 The netdev list is managed (like many other Linux mailing lists) through
-VGER (http://vger.kernel.org/) and archives can be found below:
+VGER (http://vger.kernel.org/) with archives available at
+https://lore.kernel.org/netdev/
 
-- http://marc.info/?l=linux-netdev
-- http://www.spinics.net/lists/netdev/
-
-Aside from subsystems like that mentioned above, all network-related
+Aside from subsystems like those mentioned above, all network-related
 Linux development (i.e. RFC, review, comments, etc.) takes place on
 netdev.
@@ -34,6 +36,17 @@
 
 - https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
 - https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
+
+How do I indicate which tree (net vs. net-next) my patch should be in?
+----------------------------------------------------------------------
+To help maintainers and CI bots you should explicitly mark which tree
+your patch is targeting. Assuming that you use git, use the prefix
+flag::
+
+  git format-patch --subject-prefix='PATCH net-next' start..finish
+
+Use ``net`` instead of ``net-next`` (always lower case) in the above for
+bug-fix ``net`` content.
 
 How often do changes from these trees make it to the mainline Linus tree?
 -------------------------------------------------------------------------
@@ -70,8 +61,12 @@
 An announcement indicating when ``net-next`` has been closed is usually
 sent to netdev, but knowing the above, you can predict that in advance.
 
-IMPORTANT: Do not send new ``net-next`` content to netdev during the
-period during which ``net-next`` tree is closed.
+.. warning::
+  Do not send new ``net-next`` content to netdev during the
+  period during which ``net-next`` tree is closed.
+
+RFC patches sent for review only are obviously welcome at any time
+(use ``--subject-prefix='RFC net-next'`` with ``git format-patch``).
 
 Shortly after the two weeks have passed (and vX.Y-rc1 is released), the
 tree for ``net-next`` reopens to collect content for the next (vX.Y+1)
@@ -103,41 +90,35 @@
 
 and note the top of the "tags" section. If it is rc1, it is early in
 the dev cycle. If it was tagged rc7 a week ago, then a release is
-probably imminent.
+probably imminent. If the most recent tag is a final release tag
+(without an ``-rcN`` suffix) - we are most likely in a merge window
+and ``net-next`` is closed.
 
-How do I indicate which tree (net vs. net-next) my patch should be in?
-----------------------------------------------------------------------
-Firstly, think whether you have a bug fix or new "next-like" content.
-Then once decided, assuming that you use git, use the prefix flag, i.e.
-::
-
-  git format-patch --subject-prefix='PATCH net-next' start..finish
-
-Use ``net`` instead of ``net-next`` (always lower case) in the above for
-bug-fix ``net`` content. If you don't use git, then note the only magic
-in the above is just the subject text of the outgoing e-mail, and you
-can manually change it yourself with whatever MUA you are comfortable
-with.
-
-I sent a patch and I'm wondering what happened to it - how can I tell whether it got merged?
---------------------------------------------------------------------------------------------
+How can I tell the status of a patch I've sent?
+-----------------------------------------------
 Start by looking at the main patchworks queue for netdev:
 
   https://patchwork.kernel.org/project/netdevbpf/list/
 
 The "State" field will tell you exactly where things are at with your
-patch.
+patch. Patches are indexed by the ``Message-ID`` header of the emails
+which carried them so if you have trouble finding your patch append
+the value of ``Message-ID`` to the URL above.
 
-The above only says "Under Review". How can I find out more?
--------------------------------------------------------------
+How long before my patch is accepted?
+-------------------------------------
 Generally speaking, the patches get triaged quickly (in less than
-48h). So be patient. Asking the maintainer for status updates on your
+48h). But be patient, if your patch is active in patchwork (i.e. it's
+listed on the project's patch list) the chances it was missed are close to zero.
+Asking the maintainer for status updates on your
 patch is a good way to ensure your patch is ignored or pushed to the
 bottom of the priority list.
 
-I submitted multiple versions of the patch series. Should I directly update patchwork for the previous versions of these patch series?
---------------------------------------------------------------------------------------------------------------------------------------
-No, please don't interfere with the patch status on patchwork, leave
+Should I directly update patchwork state of my own patches?
+-----------------------------------------------------------
+It may be tempting to help the maintainers and update the state of your
+own patches when you post a new version or spot a bug. Please do not do that.
+Interfering with the patch status on patchwork will only cause confusion. Leave
 it to the maintainer to figure out what is the most recent and current
 version that should be applied. If there is any doubt, the maintainer
 will reply and ask what should be done.
@@ -141,6 +134,17 @@
 No, please resend the entire patch series and make sure you do number your
 patches such that it is clear this is the latest and greatest set of patches
 that can be applied.
+
+I have received review feedback, when should I post a revised version of the patches?
+-------------------------------------------------------------------------------------
+Allow at least 24 hours to pass between postings. This will ensure reviewers
+from all geographical locations have a chance to chime in. Do not wait
+too long (weeks) between postings either as it will make it harder for reviewers
+to recall all the context.
+
+Make sure you address all the feedback in your new posting. Do not post a new
+version of the code if the discussion about the previous version is still
+ongoing, unless directly instructed by a reviewer.
 
 I submitted multiple versions of a patch series and it looks like a version other than the last one has been accepted, what should I do?
 ----------------------------------------------------------------------------------------------------------------------------------------
@@ -183,10 +165,10 @@
  * another line of text
  */
 
-I am working in existing code that has the former comment style and not the latter. Should I submit new code in the former style or the latter?
------------------------------------------------------------------------------------------------------------------------------------------------
-Make it the latter style, so that eventually all code in the domain
-of netdev is of this format.
+I am working in existing code which uses non-standard formatting. Which formatting should I use?
+------------------------------------------------------------------------------------------------
+Make your code follow the most recent guidelines, so that eventually all code
+in the domain of netdev is in the preferred format.
 
 I found a bug that might have possible security implications or similar. Should I mail the main netdev maintainer off-list?
 ---------------------------------------------------------------------------------------------------------------------------
@@ -198,11 +180,15 @@
 
 What level of testing is expected before I submit my change?
 ------------------------------------------------------------
-If your changes are against ``net-next``, the expectation is that you
-have tested by layering your changes on top of ``net-next``. Ideally
-you will have done run-time testing specific to your change, but at a
-minimum, your changes should survive an ``allyesconfig`` and an
-``allmodconfig`` build without new warnings or failures.
+At the very minimum your changes must survive an ``allyesconfig`` and an
+``allmodconfig`` build with ``W=1`` set without new warnings or failures.
+
+Ideally you will have done run-time testing specific to your change,
+and the patch series contains a set of kernel selftest for
+``tools/testing/selftests/net`` or using the KUnit framework.
+
+You are expected to test your changes on top of the relevant networking
+tree (``net`` or ``net-next``) and not e.g. a stable tree or ``linux-next``.
 
 How do I post corresponding changes to user space components?
 -------------------------------------------------------------
@@ -220,7 +198,7 @@
 to a public repo where user space patches can be seen.
 
 In case user space tooling lives in a separate repository but is
-reviewed on netdev (e.g. patches to `iproute2` tools) kernel and
+reviewed on netdev (e.g. patches to ``iproute2`` tools) kernel and
 user space patches should form separate series (threads) when posted
 to the mailing list, e.g.::
@@ -253,18 +231,18 @@
 netdevsim is great, can I extend it for my out-of-tree tests?
 -------------------------------------------------------------
 
-No, `netdevsim` is a test vehicle solely for upstream tests.
-(Please add your tests under tools/testing/selftests/.)
+No, ``netdevsim`` is a test vehicle solely for upstream tests.
+(Please add your tests under ``tools/testing/selftests/``.)
 
-We also give no guarantees that `netdevsim` won't change in the future
+We also give no guarantees that ``netdevsim`` won't change in the future
 in a way which would break what would normally be considered uAPI.
 
 Is netdevsim considered a "user" of an API?
 -------------------------------------------
 
 Linux kernel has a long standing rule that no API should be added unless
-it has a real, in-tree user. Mock-ups and tests based on `netdevsim` are
-strongly encouraged when adding new APIs, but `netdevsim` in itself
+it has a real, in-tree user. Mock-ups and tests based on ``netdevsim`` are
+strongly encouraged when adding new APIs, but ``netdevsim`` in itself
 is **not** considered a use case/user.
 
 Any other tips to help ensure my net/net-next patch gets OK'd?
+1
Documentation/process/maintainer-handbooks.rst
@@ -16,3 +16,4 @@
    :maxdepth: 2
 
    maintainer-tip
+   maintainer-netdev
+1
MAINTAINERS
@@ -13653,6 +13653,7 @@
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
 F:	Documentation/networking/
+F:	Documentation/process/maintainer-netdev.rst
 F:	include/linux/in.h
 F:	include/linux/net.h
 F:	include/linux/netdevice.h
+7 -1
arch/Kconfig
@@ -164,7 +164,13 @@
 
 config KRETPROBES
 	def_bool y
-	depends on KPROBES && HAVE_KRETPROBES
+	depends on KPROBES && (HAVE_KRETPROBES || HAVE_RETHOOK)
+
+config KRETPROBE_ON_RETHOOK
+	def_bool y
+	depends on HAVE_RETHOOK
+	depends on KRETPROBES
+	select RETHOOK
 
 config USER_RETURN_NOTIFIER
 	bool
+1
arch/x86/Kconfig
@@ -224,6 +224,7 @@
 	select HAVE_KPROBES_ON_FTRACE
 	select HAVE_FUNCTION_ERROR_INJECTION
 	select HAVE_KRETPROBES
+	select HAVE_RETHOOK
 	select HAVE_KVM
 	select HAVE_LIVEPATCH if X86_64
 	select HAVE_MIXED_BREAKPOINTS_REGS
+11 -12
arch/x86/include/asm/unwind.h
@@ -4,7 +4,7 @@
 
 #include <linux/sched.h>
 #include <linux/ftrace.h>
-#include <linux/kprobes.h>
+#include <linux/rethook.h>
 #include <asm/ptrace.h>
 #include <asm/stacktrace.h>
 
@@ -16,7 +16,7 @@
 	unsigned long stack_mask;
 	struct task_struct *task;
 	int graph_idx;
-#ifdef CONFIG_KRETPROBES
+#if defined(CONFIG_RETHOOK)
 	struct llist_node *kr_cur;
 #endif
 	bool error;
@@ -104,19 +104,18 @@
 #endif
 
 static inline
-unsigned long unwind_recover_kretprobe(struct unwind_state *state,
-				       unsigned long addr, unsigned long *addr_p)
+unsigned long unwind_recover_rethook(struct unwind_state *state,
+				     unsigned long addr, unsigned long *addr_p)
 {
-#ifdef CONFIG_KRETPROBES
-	return is_kretprobe_trampoline(addr) ?
-		kretprobe_find_ret_addr(state->task, addr_p, &state->kr_cur) :
-		addr;
-#else
-	return addr;
+#ifdef CONFIG_RETHOOK
+	if (is_rethook_trampoline(addr))
+		return rethook_find_ret_addr(state->task, (unsigned long)addr_p,
+					     &state->kr_cur);
 #endif
+	return addr;
 }
 
-/* Recover the return address modified by kretprobe and ftrace_graph. */
+/* Recover the return address modified by rethook and ftrace_graph. */
 static inline
 unsigned long unwind_recover_ret_addr(struct unwind_state *state,
 				      unsigned long addr, unsigned long *addr_p)
@@ -124,7 +125,7 @@
 
 	ret = ftrace_graph_ret_addr(state->task, &state->graph_idx,
 				    addr, addr_p);
-	return unwind_recover_kretprobe(state, ret, addr_p);
+	return unwind_recover_rethook(state, ret, addr_p);
 }
 
 /*
+1
arch/x86/kernel/Makefile
@@ -103,6 +103,7 @@
 obj-$(CONFIG_FTRACE_SYSCALLS)	+= ftrace.o
 obj-$(CONFIG_X86_TSC)		+= trace_clock.o
 obj-$(CONFIG_TRACING)		+= trace.o
+obj-$(CONFIG_RETHOOK)		+= rethook.o
 obj-$(CONFIG_CRASH_CORE)	+= crash_core_$(BITS).o
 obj-$(CONFIG_KEXEC_CORE)	+= machine_kexec_$(BITS).o
 obj-$(CONFIG_KEXEC_CORE)	+= relocate_kernel_$(BITS).o crash.o
+1
arch/x86/kernel/kprobes/common.h
@@ -6,6 +6,7 @@
 
 #include <asm/asm.h>
 #include <asm/frame.h>
+#include <asm/insn.h>
 
 #ifdef CONFIG_X86_64
 
-107
arch/x86/kernel/kprobes/core.c
@@ -811,18 +811,6 @@
 		= (regs->flags & X86_EFLAGS_IF);
 }
 
-void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs)
-{
-	unsigned long *sara = stack_addr(regs);
-
-	ri->ret_addr = (kprobe_opcode_t *) *sara;
-	ri->fp = sara;
-
-	/* Replace the return addr with trampoline addr */
-	*sara = (unsigned long) &__kretprobe_trampoline;
-}
-NOKPROBE_SYMBOL(arch_prepare_kretprobe);
-
 static void kprobe_post_process(struct kprobe *cur, struct pt_regs *regs,
 			       struct kprobe_ctlblk *kcb)
 {
@@ -1010,101 +1022,6 @@
 	return 0;
 }
 NOKPROBE_SYMBOL(kprobe_int3_handler);
-
-/*
- * When a retprobed function returns, this code saves registers and
- * calls trampoline_handler() runs, which calls the kretprobe's handler.
- */
-asm(
-	".text\n"
-	".global __kretprobe_trampoline\n"
-	".type __kretprobe_trampoline, @function\n"
-	"__kretprobe_trampoline:\n"
-#ifdef CONFIG_X86_64
-	ANNOTATE_NOENDBR
-	/* Push a fake return address to tell the unwinder it's a kretprobe. */
-	" pushq $__kretprobe_trampoline\n"
-	UNWIND_HINT_FUNC
-	/* Save the 'sp - 8', this will be fixed later. */
-	" pushq %rsp\n"
-	" pushfq\n"
-	SAVE_REGS_STRING
-	" movq %rsp, %rdi\n"
-	" call trampoline_handler\n"
-	RESTORE_REGS_STRING
-	/* In trampoline_handler(), 'regs->flags' is copied to 'regs->sp'. */
-	" addq $8, %rsp\n"
-	" popfq\n"
-#else
-	/* Push a fake return address to tell the unwinder it's a kretprobe. */
-	" pushl $__kretprobe_trampoline\n"
-	UNWIND_HINT_FUNC
-	/* Save the 'sp - 4', this will be fixed later. */
-	" pushl %esp\n"
-	" pushfl\n"
-	SAVE_REGS_STRING
-	" movl %esp, %eax\n"
-	" call trampoline_handler\n"
-	RESTORE_REGS_STRING
-	/* In trampoline_handler(), 'regs->flags' is copied to 'regs->sp'. */
-	" addl $4, %esp\n"
-	" popfl\n"
-#endif
-	ASM_RET
-	".size __kretprobe_trampoline, .-__kretprobe_trampoline\n"
-);
-NOKPROBE_SYMBOL(__kretprobe_trampoline);
-/*
- * __kretprobe_trampoline() skips updating frame pointer. The frame pointer
- * saved in trampoline_handler() points to the real caller function's
- * frame pointer. Thus the __kretprobe_trampoline() doesn't have a
- * standard stack frame with CONFIG_FRAME_POINTER=y.
- * Let's mark it non-standard function. Anyway, FP unwinder can correctly
- * unwind without the hint.
- */
-STACK_FRAME_NON_STANDARD_FP(__kretprobe_trampoline);
-
-/* This is called from kretprobe_trampoline_handler(). */
-void arch_kretprobe_fixup_return(struct pt_regs *regs,
-				 kprobe_opcode_t *correct_ret_addr)
-{
-	unsigned long *frame_pointer = &regs->sp + 1;
-
-	/* Replace fake return address with real one. */
-	*frame_pointer = (unsigned long)correct_ret_addr;
-}
-
-/*
- * Called from __kretprobe_trampoline
- */
-__used __visible void trampoline_handler(struct pt_regs *regs)
-{
-	unsigned long *frame_pointer;
-
-	/* fixup registers */
-	regs->cs = __KERNEL_CS;
-#ifdef CONFIG_X86_32
-	regs->gs = 0;
-#endif
-	regs->ip = (unsigned long)&__kretprobe_trampoline;
-	regs->orig_ax = ~0UL;
-	regs->sp += sizeof(long);
-	frame_pointer = &regs->sp + 1;
-
-	/*
-	 * The return address at 'frame_pointer' is recovered by the
-	 * arch_kretprobe_fixup_return() which called from the
-	 * kretprobe_trampoline_handler().
-	 */
-	kretprobe_trampoline_handler(regs, frame_pointer);
-
-	/*
-	 * Copy FLAGS to 'pt_regs::sp' so that __kretprobe_trapmoline()
-	 * can do RET right after POPF.
-	 */
-	regs->sp = regs->flags;
-}
-NOKPROBE_SYMBOL(trampoline_handler);
 
 int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
 {
+16 -9
arch/x86/kernel/kprobes/opt.c
@@ -106,7 +106,8 @@
 	".global optprobe_template_entry\n"
 	"optprobe_template_entry:\n"
 #ifdef CONFIG_X86_64
-	/* We don't bother saving the ss register */
+	" pushq $" __stringify(__KERNEL_DS) "\n"
+	/* Save the 'sp - 8', this will be fixed later. */
 	" pushq %rsp\n"
 	" pushfq\n"
 	".global optprobe_template_clac\n"
@@ -122,14 +121,17 @@
 	".global optprobe_template_call\n"
 	"optprobe_template_call:\n"
 	ASM_NOP5
-	/* Move flags to rsp */
+	/* Copy 'regs->flags' into 'regs->ss'. */
 	" movq 18*8(%rsp), %rdx\n"
-	" movq %rdx, 19*8(%rsp)\n"
+	" movq %rdx, 20*8(%rsp)\n"
 	RESTORE_REGS_STRING
-	/* Skip flags entry */
-	" addq $8, %rsp\n"
+	/* Skip 'regs->flags' and 'regs->sp'. */
+	" addq $16, %rsp\n"
+	/* And pop flags register from 'regs->ss'. */
 	" popfq\n"
 #else /* CONFIG_X86_32 */
+	" pushl %ss\n"
+	/* Save the 'sp - 4', this will be fixed later. */
 	" pushl %esp\n"
 	" pushfl\n"
 	".global optprobe_template_clac\n"
@@ -146,12 +142,13 @@
 	".global optprobe_template_call\n"
 	"optprobe_template_call:\n"
 	ASM_NOP5
-	/* Move flags into esp */
+	/* Copy 'regs->flags' into 'regs->ss'. */
 	" movl 14*4(%esp), %edx\n"
-	" movl %edx, 15*4(%esp)\n"
+	" movl %edx, 16*4(%esp)\n"
 	RESTORE_REGS_STRING
-	/* Skip flags entry */
-	" addl $4, %esp\n"
+	/* Skip 'regs->flags' and 'regs->sp'. */
+	" addl $8, %esp\n"
+	/* And pop flags register from 'regs->ss'. */
 	" popfl\n"
 #endif
 	".global optprobe_template_end\n"
@@ -184,6 +179,8 @@
 		kprobes_inc_nmissed_count(&op->kp);
 	} else {
 		struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+		/* Adjust stack pointer */
+		regs->sp += sizeof(long);
 		/* Save skipped registers */
 		regs->cs = __KERNEL_CS;
 #ifdef CONFIG_X86_32
+127
arch/x86/kernel/rethook.c
@@ -0,0 +1,127 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * x86 implementation of rethook. Mostly copied from arch/x86/kernel/kprobes/core.c.
+ */
+#include <linux/bug.h>
+#include <linux/rethook.h>
+#include <linux/kprobes.h>
+#include <linux/objtool.h>
+
+#include "kprobes/common.h"
+
+__visible void arch_rethook_trampoline_callback(struct pt_regs *regs);
+
+#ifndef ANNOTATE_NOENDBR
+#define ANNOTATE_NOENDBR
+#endif
+
+/*
+ * When a target function returns, this code saves registers and calls
+ * arch_rethook_trampoline_callback(), which calls the rethook handler.
+ */
+asm(
+	".text\n"
+	".global arch_rethook_trampoline\n"
+	".type arch_rethook_trampoline, @function\n"
+	"arch_rethook_trampoline:\n"
+#ifdef CONFIG_X86_64
+	ANNOTATE_NOENDBR	/* This is only jumped from ret instruction */
+	/* Push a fake return address to tell the unwinder it's a rethook. */
+	" pushq $arch_rethook_trampoline\n"
+	UNWIND_HINT_FUNC
+	" pushq $" __stringify(__KERNEL_DS) "\n"
+	/* Save the 'sp - 16', this will be fixed later. */
+	" pushq %rsp\n"
+	" pushfq\n"
+	SAVE_REGS_STRING
+	" movq %rsp, %rdi\n"
+	" call arch_rethook_trampoline_callback\n"
+	RESTORE_REGS_STRING
+	/* In the callback function, 'regs->flags' is copied to 'regs->ss'. */
+	" addq $16, %rsp\n"
+	" popfq\n"
+#else
+	/* Push a fake return address to tell the unwinder it's a rethook. */
+	" pushl $arch_rethook_trampoline\n"
+	UNWIND_HINT_FUNC
+	" pushl %ss\n"
+	/* Save the 'sp - 8', this will be fixed later. */
+	" pushl %esp\n"
+	" pushfl\n"
+	SAVE_REGS_STRING
+	" movl %esp, %eax\n"
+	" call arch_rethook_trampoline_callback\n"
+	RESTORE_REGS_STRING
+	/* In the callback function, 'regs->flags' is copied to 'regs->ss'. */
+	" addl $8, %esp\n"
+	" popfl\n"
+#endif
+	ASM_RET
+	".size arch_rethook_trampoline, .-arch_rethook_trampoline\n"
+);
+NOKPROBE_SYMBOL(arch_rethook_trampoline);
+
+/*
+ * Called from arch_rethook_trampoline
+ */
+__used __visible void arch_rethook_trampoline_callback(struct pt_regs *regs)
+{
+	unsigned long *frame_pointer;
+
+	/* fixup registers */
+	regs->cs = __KERNEL_CS;
+#ifdef CONFIG_X86_32
+	regs->gs = 0;
+#endif
+	regs->ip = (unsigned long)&arch_rethook_trampoline;
+	regs->orig_ax = ~0UL;
+	regs->sp += 2*sizeof(long);
+	frame_pointer = (long *)(regs + 1);
+
+	/*
+	 * The return address at 'frame_pointer' is recovered by the
+	 * arch_rethook_fixup_return() which called from this
+	 * rethook_trampoline_handler().
+	 */
+	rethook_trampoline_handler(regs, (unsigned long)frame_pointer);
+
+	/*
+	 * Copy FLAGS to 'pt_regs::ss' so that arch_rethook_trapmoline()
+	 * can do RET right after POPF.
+	 */
+	*(unsigned long *)&regs->ss = regs->flags;
+}
+NOKPROBE_SYMBOL(arch_rethook_trampoline_callback);
+
+/*
+ * arch_rethook_trampoline() skips updating frame pointer. The frame pointer
+ * saved in arch_rethook_trampoline_callback() points to the real caller
+ * function's frame pointer. Thus the arch_rethook_trampoline() doesn't have
+ * a standard stack frame with CONFIG_FRAME_POINTER=y.
+ * Let's mark it non-standard function. Anyway, FP unwinder can correctly
+ * unwind without the hint.
+ */
+STACK_FRAME_NON_STANDARD_FP(arch_rethook_trampoline);
+
+/* This is called from rethook_trampoline_handler(). */
+void arch_rethook_fixup_return(struct pt_regs *regs,
+			       unsigned long correct_ret_addr)
+{
+	unsigned long *frame_pointer = (void *)(regs + 1);
+
+	/* Replace fake return address with real one. */
+	*frame_pointer = correct_ret_addr;
+}
+NOKPROBE_SYMBOL(arch_rethook_fixup_return);
+
+void arch_rethook_prepare(struct rethook_node *rh, struct pt_regs *regs, bool mcount)
+{
+	unsigned long *stack = (unsigned long *)regs->sp;
+
+	rh->ret_addr = stack[0];
+	rh->frame = regs->sp;
+
+	/* Replace the return addr with trampoline addr */
+	stack[0] = (unsigned long)arch_rethook_trampoline;
+}
+NOKPROBE_SYMBOL(arch_rethook_prepare);
+5 -5
arch/x86/kernel/unwind_orc.c
@@ -550,15 +550,15 @@
 		}
 		/*
 		 * There is a small chance to interrupt at the entry of
-		 * __kretprobe_trampoline() where the ORC info doesn't exist.
-		 * That point is right after the RET to __kretprobe_trampoline()
+		 * arch_rethook_trampoline() where the ORC info doesn't exist.
+		 * That point is right after the RET to arch_rethook_trampoline()
 		 * which was modified return address.
-		 * At that point, the @addr_p of the unwind_recover_kretprobe()
+		 * At that point, the @addr_p of the unwind_recover_rethook()
 		 * (this has to point the address of the stack entry storing
 		 * the modified return address) must be "SP - (a stack entry)"
 		 * because SP is incremented by the RET.
 		 */
-		state->ip = unwind_recover_kretprobe(state, state->ip,
+		state->ip = unwind_recover_rethook(state, state->ip,
 				(unsigned long *)(state->sp - sizeof(long)));
 		state->regs = (struct pt_regs *)sp;
 		state->prev_regs = NULL;
@@ -573,7 +573,7 @@
 			goto err;
 		}
 		/* See UNWIND_HINT_TYPE_REGS case comment. */
-		state->ip = unwind_recover_kretprobe(state, state->ip,
+		state->ip = unwind_recover_rethook(state, state->ip,
 				(unsigned long *)(state->sp - sizeof(long)));
 
 		if (state->full_regs)
+3 -2
drivers/net/can/m_can/m_can.c
@@ -1637,8 +1637,6 @@
 		if (err)
 			goto out_fail;
 
-		can_put_echo_skb(skb, dev, 0, 0);
-
 		if (cdev->can.ctrlmode & CAN_CTRLMODE_FD) {
 			cccr = m_can_read(cdev, M_CAN_CCCR);
 			cccr &= ~CCCR_CMR_MASK;
@@ -1653,6 +1655,9 @@
 			m_can_write(cdev, M_CAN_CCCR, cccr);
 		}
 		m_can_write(cdev, M_CAN_TXBTIE, 0x1);
+
+		can_put_echo_skb(skb, dev, 0, 0);
+
 		m_can_write(cdev, M_CAN_TXBAR, 0x1);
 		/* End of xmit function for version 3.0.x */
 	} else {
+1 -1
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
@@ -1786,7 +1786,7 @@
 out_kfree_buf_rx:
 	kfree(buf_rx);
 
-	return 0;
+	return err;
 }
 
 #define MCP251XFD_QUIRK_ACTIVE(quirk) \
-1
drivers/net/can/usb/ems_usb.c
@@ -819,7 +819,6 @@
 
 		usb_unanchor_urb(urb);
 		usb_free_coherent(dev->udev, size, buf, urb->transfer_dma);
-		dev_kfree_skb(skb);
 
 		atomic_dec(&dev->active_tx_urbs);
 
+2
drivers/net/can/usb/gs_usb.c
@@ -1092,6 +1092,8 @@
 		dev->data_bt_const.brp_inc = le32_to_cpu(bt_const_extended->dbrp_inc);
 
 		dev->can.data_bittiming_const = &dev->data_bt_const;
+
+		kfree(bt_const_extended);
 	}
 
 	SET_NETDEV_DEV(netdev, &intf->dev);
+16 -11
drivers/net/can/usb/mcba_usb.c
@@ -33,10 +33,6 @@
 #define MCBA_USB_RX_BUFF_SIZE 64
 #define MCBA_USB_TX_BUFF_SIZE (sizeof(struct mcba_usb_msg))
 
-/* MCBA endpoint numbers */
-#define MCBA_USB_EP_IN 1
-#define MCBA_USB_EP_OUT 1
-
 /* Microchip command id */
 #define MBCA_CMD_RECEIVE_MESSAGE 0xE3
 #define MBCA_CMD_I_AM_ALIVE_FROM_CAN 0xF5
@@ -79,6 +83,8 @@
 	atomic_t free_ctx_cnt;
 	void *rxbuf[MCBA_MAX_RX_URBS];
 	dma_addr_t rxbuf_dma[MCBA_MAX_RX_URBS];
+	int rx_pipe;
+	int tx_pipe;
 };
 
 /* CAN frame */
@@ -266,10 +268,8 @@
 
 	memcpy(buf, usb_msg, MCBA_USB_TX_BUFF_SIZE);
 
-	usb_fill_bulk_urb(urb, priv->udev,
-			  usb_sndbulkpipe(priv->udev, MCBA_USB_EP_OUT), buf,
-			  MCBA_USB_TX_BUFF_SIZE, mcba_usb_write_bulk_callback,
-			  ctx);
+	usb_fill_bulk_urb(urb, priv->udev, priv->tx_pipe, buf, MCBA_USB_TX_BUFF_SIZE,
+			  mcba_usb_write_bulk_callback, ctx);
 
 	urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
 	usb_anchor_urb(urb, &priv->tx_submitted);
@@ -360,7 +364,6 @@
 xmit_failed:
 	can_free_echo_skb(priv->netdev, ctx->ndx, NULL);
 	mcba_usb_free_ctx(ctx);
-	dev_kfree_skb(skb);
 	stats->tx_dropped++;
 
 	return NETDEV_TX_OK;
@@ -603,7 +608,7 @@
 resubmit_urb:
 
 	usb_fill_bulk_urb(urb, priv->udev,
-			  usb_rcvbulkpipe(priv->udev, MCBA_USB_EP_OUT),
+			  priv->rx_pipe,
 			  urb->transfer_buffer, MCBA_USB_RX_BUFF_SIZE,
 			  mcba_usb_read_bulk_callback, priv);
 
@@ -648,7 +653,7 @@
 		urb->transfer_dma = buf_dma;
 
 		usb_fill_bulk_urb(urb, priv->udev,
-				  usb_rcvbulkpipe(priv->udev, MCBA_USB_EP_IN),
+				  priv->rx_pipe,
 				  buf, MCBA_USB_RX_BUFF_SIZE,
 				  mcba_usb_read_bulk_callback, priv);
 		urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
@@ -802,6 +807,13 @@
 	struct mcba_priv *priv;
 	int err;
 	struct usb_device *usbdev = interface_to_usbdev(intf);
+	struct usb_endpoint_descriptor *in, *out;
+
+	err = usb_find_common_endpoints(intf->cur_altsetting, &in, &out, NULL, NULL);
+	if (err) {
+		dev_err(&intf->dev, "Can't find endpoints\n");
+		return err;
+	}
 
 	netdev = alloc_candev(sizeof(struct mcba_priv), MCBA_MAX_TX_URBS);
 	if (!netdev) {
@@ -853,6 +851,9 @@
 
 		goto cleanup_free_candev;
 	}
+
+	priv->rx_pipe = usb_rcvbulkpipe(priv->udev, in->bEndpointAddress);
+	priv->tx_pipe = usb_sndbulkpipe(priv->udev, out->bEndpointAddress);
 
 	devm_can_led_init(netdev);
 
+14 -16
drivers/net/can/usb/usb_8dev.c
··· 663 663 atomic_inc(&priv->active_tx_urbs); 664 664 665 665 err = usb_submit_urb(urb, GFP_ATOMIC); 666 - if (unlikely(err)) 667 - goto failed; 668 - else if (atomic_read(&priv->active_tx_urbs) >= MAX_TX_URBS) 666 + if (unlikely(err)) { 667 + can_free_echo_skb(netdev, context->echo_index, NULL); 668 + 669 + usb_unanchor_urb(urb); 670 + usb_free_coherent(priv->udev, size, buf, urb->transfer_dma); 671 + 672 + atomic_dec(&priv->active_tx_urbs); 673 + 674 + if (err == -ENODEV) 675 + netif_device_detach(netdev); 676 + else 677 + netdev_warn(netdev, "failed tx_urb %d\n", err); 678 + stats->tx_dropped++; 679 + } else if (atomic_read(&priv->active_tx_urbs) >= MAX_TX_URBS) 669 680 /* Slow down tx path */ 670 681 netif_stop_queue(netdev); 671 682 ··· 694 683 netdev_warn(netdev, "couldn't find free context"); 695 684 696 685 return NETDEV_TX_BUSY; 697 - 698 - failed: 699 - can_free_echo_skb(netdev, context->echo_index, NULL); 700 - 701 - usb_unanchor_urb(urb); 702 - usb_free_coherent(priv->udev, size, buf, urb->transfer_dma); 703 - 704 - atomic_dec(&priv->active_tx_urbs); 705 - 706 - if (err == -ENODEV) 707 - netif_device_detach(netdev); 708 - else 709 - netdev_warn(netdev, "failed tx_urb %d\n", err); 710 686 711 687 nomembuf: 712 688 usb_free_urb(urb);
+4
drivers/net/dsa/ocelot/felix_vsc9959.c
··· 1928 1928 case FLOW_ACTION_GATE: 1929 1929 size = struct_size(sgi, entries, a->gate.num_entries); 1930 1930 sgi = kzalloc(size, GFP_KERNEL); 1931 + if (!sgi) { 1932 + ret = -ENOMEM; 1933 + goto err; 1934 + } 1931 1935 vsc9959_psfp_parse_gate(a, sgi); 1932 1936 ret = vsc9959_psfp_sgi_table_add(ocelot, sgi); 1933 1937 if (ret) {
+1
drivers/net/ethernet/hisilicon/hns3/hnae3.h
··· 845 845 struct dentry *hnae3_dbgfs; 846 846 /* protects concurrent contention between debugfs commands */ 847 847 struct mutex dbgfs_lock; 848 + char **dbgfs_buf; 848 849 849 850 /* Network interface message level enabled bits */ 850 851 u32 msg_enable;
+11 -4
drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
··· 1227 1227 return ret; 1228 1228 1229 1229 mutex_lock(&handle->dbgfs_lock); 1230 - save_buf = &hns3_dbg_cmd[index].buf; 1230 + save_buf = &handle->dbgfs_buf[index]; 1231 1231 1232 1232 if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) || 1233 1233 test_bit(HNS3_NIC_STATE_RESETTING, &priv->state)) { ··· 1332 1332 int ret; 1333 1333 u32 i; 1334 1334 1335 + handle->dbgfs_buf = devm_kcalloc(&handle->pdev->dev, 1336 + ARRAY_SIZE(hns3_dbg_cmd), 1337 + sizeof(*handle->dbgfs_buf), 1338 + GFP_KERNEL); 1339 + if (!handle->dbgfs_buf) 1340 + return -ENOMEM; 1341 + 1335 1342 hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry = 1336 1343 debugfs_create_dir(name, hns3_dbgfs_root); 1337 1344 handle->hnae3_dbgfs = hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry; ··· 1387 1380 u32 i; 1388 1381 1389 1382 for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++) 1390 - if (hns3_dbg_cmd[i].buf) { 1391 - kvfree(hns3_dbg_cmd[i].buf); 1392 - hns3_dbg_cmd[i].buf = NULL; 1383 + if (handle->dbgfs_buf[i]) { 1384 + kvfree(handle->dbgfs_buf[i]); 1385 + handle->dbgfs_buf[i] = NULL; 1393 1386 } 1394 1387 1395 1388 mutex_destroy(&handle->dbgfs_lock);
-1
drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.h
··· 49 49 enum hnae3_dbg_cmd cmd; 50 50 enum hns3_dbg_dentry_type dentry; 51 51 u32 buf_len; 52 - char *buf; 53 52 int (*init)(struct hnae3_handle *handle, unsigned int cmd); 54 53 }; 55 54
+3 -3
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 10323 10323 } 10324 10324 10325 10325 if (!ret) { 10326 - if (is_kill) 10327 - hclge_rm_vport_vlan_table(vport, vlan_id, false); 10328 - else 10326 + if (!is_kill) 10329 10327 hclge_add_vport_vlan_table(vport, vlan_id, 10330 10328 writen_to_tbl); 10329 + else if (is_kill && vlan_id != 0) 10330 + hclge_rm_vport_vlan_table(vport, vlan_id, false); 10331 10331 } else if (is_kill) { 10332 10332 /* when remove hw vlan filter failed, record the vlan id, 10333 10333 * and try to remove it from hw later, to be consistence
+1 -1
drivers/net/ethernet/intel/ice/ice.h
··· 710 710 struct ice_vsi *vsi = ring->vsi; 711 711 u16 qid; 712 712 713 - qid = ring->q_index - vsi->num_xdp_txq; 713 + qid = ring->q_index - vsi->alloc_txq; 714 714 715 715 if (!ice_is_xdp_ena_vsi(vsi) || !test_bit(qid, vsi->af_xdp_zc_qps)) 716 716 return NULL;
+4 -1
drivers/net/ethernet/intel/ice/ice_xsk.c
··· 608 608 */ 609 609 dma_rmb(); 610 610 611 + if (unlikely(rx_ring->next_to_clean == rx_ring->next_to_use)) 612 + break; 613 + 611 614 xdp = *ice_xdp_buf(rx_ring, rx_ring->next_to_clean); 612 615 613 616 size = le16_to_cpu(rx_desc->wb.pkt_len) & ··· 757 754 next_dd = next_dd + tx_thresh; 758 755 if (next_dd >= desc_cnt) 759 756 next_dd = tx_thresh - 1; 760 - } while (budget--); 757 + } while (--budget); 761 758 762 759 xdp_ring->next_dd = next_dd; 763 760
+3
drivers/net/ethernet/microchip/lan966x/lan966x_main.c
··· 408 408 } 409 409 } 410 410 411 + if (!dev->phydev) 412 + return -ENODEV; 413 + 411 414 return phy_mii_ioctl(dev->phydev, ifr, cmd); 412 415 } 413 416
+1
drivers/net/ethernet/microchip/sparx5/Kconfig
··· 5 5 depends on OF 6 6 depends on ARCH_SPARX5 || COMPILE_TEST 7 7 depends on PTP_1588_CLOCK_OPTIONAL 8 + depends on BRIDGE || BRIDGE=n 8 9 select PHYLINK 9 10 select PHY_SPARX5_SERDES 10 11 select RESET_CONTROLLER
+4 -7
drivers/net/ethernet/sfc/efx_channels.c
··· 91 91 } 92 92 93 93 cpumask_copy(filter_mask, cpu_online_mask); 94 - if (local_node) { 95 - int numa_node = pcibus_to_node(efx->pci_dev->bus); 96 - 97 - cpumask_and(filter_mask, filter_mask, cpumask_of_node(numa_node)); 98 - } 94 + if (local_node) 95 + cpumask_and(filter_mask, filter_mask, 96 + cpumask_of_pcibus(efx->pci_dev->bus)); 99 97 100 98 count = 0; 101 99 for_each_cpu(cpu, filter_mask) { ··· 384 386 #if defined(CONFIG_SMP) 385 387 void efx_set_interrupt_affinity(struct efx_nic *efx) 386 388 { 387 - int numa_node = pcibus_to_node(efx->pci_dev->bus); 388 - const struct cpumask *numa_mask = cpumask_of_node(numa_node); 389 + const struct cpumask *numa_mask = cpumask_of_pcibus(efx->pci_dev->bus); 389 390 struct efx_channel *channel; 390 391 unsigned int cpu; 391 392
+6
drivers/net/vxlan/vxlan_vnifilter.c
··· 425 425 err = -ENODEV; 426 426 goto out_err; 427 427 } 428 + if (!netif_is_vxlan(dev)) { 429 + NL_SET_ERR_MSG(cb->extack, 430 + "The device is not a vxlan device"); 431 + err = -EINVAL; 432 + goto out_err; 433 + } 428 434 err = vxlan_vnifilter_dump_dev(dev, skb, cb); 429 435 /* if the dump completed without an error we return 0 here */ 430 436 if (err != -EMSGSIZE)
+2 -1
drivers/net/wireguard/queueing.c
··· 4 4 */ 5 5 6 6 #include "queueing.h" 7 + #include <linux/skb_array.h> 7 8 8 9 struct multicore_worker __percpu * 9 10 wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr) ··· 43 42 { 44 43 free_percpu(queue->worker); 45 44 WARN_ON(!purge && !__ptr_ring_empty(&queue->ring)); 46 - ptr_ring_cleanup(&queue->ring, purge ? (void(*)(void*))kfree_skb : NULL); 45 + ptr_ring_cleanup(&queue->ring, purge ? __skb_array_destroy_skb : NULL); 47 46 } 48 47 49 48 #define NEXT(skb) ((skb)->prev)
+3 -2
drivers/net/wireguard/socket.c
··· 160 160 rcu_read_unlock_bh(); 161 161 return ret; 162 162 #else 163 + kfree_skb(skb); 163 164 return -EAFNOSUPPORT; 164 165 #endif 165 166 } ··· 242 241 endpoint->addr4.sin_addr.s_addr = ip_hdr(skb)->saddr; 243 242 endpoint->src4.s_addr = ip_hdr(skb)->daddr; 244 243 endpoint->src_if4 = skb->skb_iif; 245 - } else if (skb->protocol == htons(ETH_P_IPV6)) { 244 + } else if (IS_ENABLED(CONFIG_IPV6) && skb->protocol == htons(ETH_P_IPV6)) { 246 245 endpoint->addr6.sin6_family = AF_INET6; 247 246 endpoint->addr6.sin6_port = udp_hdr(skb)->source; 248 247 endpoint->addr6.sin6_addr = ipv6_hdr(skb)->saddr; ··· 285 284 peer->endpoint.addr4 = endpoint->addr4; 286 285 peer->endpoint.src4 = endpoint->src4; 287 286 peer->endpoint.src_if4 = endpoint->src_if4; 288 - } else if (endpoint->addr.sa_family == AF_INET6) { 287 + } else if (IS_ENABLED(CONFIG_IPV6) && endpoint->addr.sa_family == AF_INET6) { 289 288 peer->endpoint.addr6 = endpoint->addr6; 290 289 peer->endpoint.src6 = endpoint->src6; 291 290 } else {
+8 -7
drivers/ptp/ptp_ocp.c
··· 1214 1214 static inline void 1215 1215 ptp_ocp_nvmem_device_put(struct nvmem_device **nvmemp) 1216 1216 { 1217 - if (*nvmemp != NULL) { 1217 + if (!IS_ERR_OR_NULL(*nvmemp)) 1218 1218 nvmem_device_put(*nvmemp); 1219 - *nvmemp = NULL; 1220 - } 1219 + *nvmemp = NULL; 1221 1220 } 1222 1221 1223 1222 static void ··· 1240 1241 } 1241 1242 if (!nvmem) { 1242 1243 nvmem = ptp_ocp_nvmem_device_get(bp, tag); 1243 - if (!nvmem) 1244 - goto out; 1244 + if (IS_ERR(nvmem)) { 1245 + ret = PTR_ERR(nvmem); 1246 + goto fail; 1247 + } 1245 1248 } 1246 1249 ret = nvmem_device_read(nvmem, map->off, map->len, 1247 1250 BP_MAP_ENTRY_ADDR(bp, map)); 1248 1251 if (ret != map->len) 1249 - goto read_fail; 1252 + goto fail; 1250 1253 } 1251 1254 1252 1255 bp->has_eeprom_data = true; ··· 1257 1256 ptp_ocp_nvmem_device_put(&nvmem); 1258 1257 return; 1259 1258 1260 - read_fail: 1259 + fail: 1261 1260 dev_err(&bp->pdev->dev, "could not read eeprom: %d\n", ret); 1262 1261 goto out; 1263 1262 }
+49 -2
include/linux/kprobes.h
··· 28 28 #include <linux/ftrace.h> 29 29 #include <linux/refcount.h> 30 30 #include <linux/freelist.h> 31 + #include <linux/rethook.h> 31 32 #include <asm/kprobes.h> 32 33 33 34 #ifdef CONFIG_KPROBES ··· 150 149 int maxactive; 151 150 int nmissed; 152 151 size_t data_size; 152 + #ifdef CONFIG_KRETPROBE_ON_RETHOOK 153 + struct rethook *rh; 154 + #else 153 155 struct freelist_head freelist; 154 156 struct kretprobe_holder *rph; 157 + #endif 155 158 }; 156 159 157 160 #define KRETPROBE_MAX_DATA_SIZE 4096 158 161 159 162 struct kretprobe_instance { 163 + #ifdef CONFIG_KRETPROBE_ON_RETHOOK 164 + struct rethook_node node; 165 + #else 160 166 union { 161 167 struct freelist_node freelist; 162 168 struct rcu_head rcu; ··· 172 164 struct kretprobe_holder *rph; 173 165 kprobe_opcode_t *ret_addr; 174 166 void *fp; 167 + #endif 175 168 char data[]; 176 169 }; 177 170 ··· 195 186 extern void kprobe_busy_end(void); 196 187 197 188 #ifdef CONFIG_KRETPROBES 198 - extern void arch_prepare_kretprobe(struct kretprobe_instance *ri, 199 - struct pt_regs *regs); 189 + /* Check whether @p is used for implementing a trampoline. */ 200 190 extern int arch_trampoline_kprobe(struct kprobe *p); 201 191 192 + #ifdef CONFIG_KRETPROBE_ON_RETHOOK 193 + static nokprobe_inline struct kretprobe *get_kretprobe(struct kretprobe_instance *ri) 194 + { 195 + RCU_LOCKDEP_WARN(!rcu_read_lock_any_held(), 196 + "Kretprobe is accessed from instance under preemptive context"); 197 + 198 + return (struct kretprobe *)READ_ONCE(ri->node.rethook->data); 199 + } 200 + static nokprobe_inline unsigned long get_kretprobe_retaddr(struct kretprobe_instance *ri) 201 + { 202 + return ri->node.ret_addr; 203 + } 204 + #else 205 + extern void arch_prepare_kretprobe(struct kretprobe_instance *ri, 206 + struct pt_regs *regs); 202 207 void arch_kretprobe_fixup_return(struct pt_regs *regs, 203 208 kprobe_opcode_t *correct_ret_addr); 204 209 ··· 254 231 255 232 return READ_ONCE(ri->rph->rp); 256 233 } 234 + 235 + static nokprobe_inline unsigned long get_kretprobe_retaddr(struct kretprobe_instance *ri) 236 + { 237 + return (unsigned long)ri->ret_addr; 238 + } 239 + #endif /* CONFIG_KRETPROBE_ON_RETHOOK */ 257 240 258 241 #else /* !CONFIG_KRETPROBES */ 259 242 static inline void arch_prepare_kretprobe(struct kretprobe *rp, ··· 424 395 int register_kretprobes(struct kretprobe **rps, int num); 425 396 void unregister_kretprobes(struct kretprobe **rps, int num); 426 397 398 + #ifdef CONFIG_KRETPROBE_ON_RETHOOK 399 + #define kprobe_flush_task(tk) do {} while (0) 400 + #else 427 401 void kprobe_flush_task(struct task_struct *tk); 402 + #endif 428 403 429 404 void kprobe_free_init_mem(void); 430 405 ··· 542 509 #endif /* !CONFIG_OPTPROBES */ 543 510 544 511 #ifdef CONFIG_KRETPROBES 512 + #ifdef CONFIG_KRETPROBE_ON_RETHOOK 513 + static nokprobe_inline bool is_kretprobe_trampoline(unsigned long addr) 514 + { 515 + return is_rethook_trampoline(addr); 516 + } 517 + 518 + static nokprobe_inline 519 + unsigned long kretprobe_find_ret_addr(struct task_struct *tsk, void *fp, 520 + struct llist_node **cur) 521 + { 522 + return rethook_find_ret_addr(tsk, (unsigned long)fp, cur); 523 + } 524 + #else 545 525 static nokprobe_inline bool is_kretprobe_trampoline(unsigned long addr) 546 526 { 547 527 return (void *)addr == kretprobe_trampoline_addr(); ··· 562 516 563 517 unsigned long kretprobe_find_ret_addr(struct task_struct *tsk, void *fp, 564 518 struct llist_node **cur); 519 + #endif 565 520 #else 566 521 static nokprobe_inline bool is_kretprobe_trampoline(unsigned long addr) 567 522 {
+7 -1
include/trace/events/rxrpc.h
··· 83 83 rxrpc_call_error, 84 84 rxrpc_call_got, 85 85 rxrpc_call_got_kernel, 86 + rxrpc_call_got_timer, 86 87 rxrpc_call_got_userid, 87 88 rxrpc_call_new_client, 88 89 rxrpc_call_new_service, 89 90 rxrpc_call_put, 90 91 rxrpc_call_put_kernel, 91 92 rxrpc_call_put_noqueue, 93 + rxrpc_call_put_notimer, 94 + rxrpc_call_put_timer, 92 95 rxrpc_call_put_userid, 93 96 rxrpc_call_queued, 94 97 rxrpc_call_queued_ref, ··· 281 278 EM(rxrpc_call_error, "*E*") \ 282 279 EM(rxrpc_call_got, "GOT") \ 283 280 EM(rxrpc_call_got_kernel, "Gke") \ 281 + EM(rxrpc_call_got_timer, "GTM") \ 284 282 EM(rxrpc_call_got_userid, "Gus") \ 285 283 EM(rxrpc_call_new_client, "NWc") \ 286 284 EM(rxrpc_call_new_service, "NWs") \ 287 285 EM(rxrpc_call_put, "PUT") \ 288 286 EM(rxrpc_call_put_kernel, "Pke") \ 289 - EM(rxrpc_call_put_noqueue, "PNQ") \ 287 + EM(rxrpc_call_put_noqueue, "PnQ") \ 288 + EM(rxrpc_call_put_notimer, "PnT") \ 289 + EM(rxrpc_call_put_timer, "PTM") \ 290 290 EM(rxrpc_call_put_userid, "Pus") \ 291 291 EM(rxrpc_call_queued, "QUE") \ 292 292 EM(rxrpc_call_queued_ref, "QUR") \
+1
kernel/Makefile
··· 108 108 obj-$(CONFIG_TRACE_CLOCK) += trace/ 109 109 obj-$(CONFIG_RING_BUFFER) += trace/ 110 110 obj-$(CONFIG_TRACEPOINTS) += trace/ 111 + obj-$(CONFIG_RETHOOK) += trace/ 111 112 obj-$(CONFIG_IRQ_WORK) += irq_work.o 112 113 obj-$(CONFIG_CPU_PM) += cpu_pm.o 113 114 obj-$(CONFIG_BPF) += bpf/
+1 -1
kernel/bpf/btf.c
··· 5507 5507 } 5508 5508 args = (const struct btf_param *)(func + 1); 5509 5509 nargs = btf_type_vlen(func); 5510 - if (nargs >= MAX_BPF_FUNC_ARGS) { 5510 + if (nargs > MAX_BPF_FUNC_ARGS) { 5511 5511 bpf_log(log, 5512 5512 "The function %s has %d arguments. Too many.\n", 5513 5513 tname, nargs);
+104 -20
kernel/kprobes.c
··· 1237 1237 } 1238 1238 NOKPROBE_SYMBOL(kprobes_inc_nmissed_count); 1239 1239 1240 + static struct kprobe kprobe_busy = { 1241 + .addr = (void *) get_kprobe, 1242 + }; 1243 + 1244 + void kprobe_busy_begin(void) 1245 + { 1246 + struct kprobe_ctlblk *kcb; 1247 + 1248 + preempt_disable(); 1249 + __this_cpu_write(current_kprobe, &kprobe_busy); 1250 + kcb = get_kprobe_ctlblk(); 1251 + kcb->kprobe_status = KPROBE_HIT_ACTIVE; 1252 + } 1253 + 1254 + void kprobe_busy_end(void) 1255 + { 1256 + __this_cpu_write(current_kprobe, NULL); 1257 + preempt_enable(); 1258 + } 1259 + 1260 + #if !defined(CONFIG_KRETPROBE_ON_RETHOOK) 1240 1261 static void free_rp_inst_rcu(struct rcu_head *head) 1241 1262 { 1242 1263 struct kretprobe_instance *ri = container_of(head, struct kretprobe_instance, rcu); ··· 1278 1257 call_rcu(&ri->rcu, free_rp_inst_rcu); 1279 1258 } 1280 1259 NOKPROBE_SYMBOL(recycle_rp_inst); 1281 - 1282 - static struct kprobe kprobe_busy = { 1283 - .addr = (void *) get_kprobe, 1284 - }; 1285 - 1286 - void kprobe_busy_begin(void) 1287 - { 1288 - struct kprobe_ctlblk *kcb; 1289 - 1290 - preempt_disable(); 1291 - __this_cpu_write(current_kprobe, &kprobe_busy); 1292 - kcb = get_kprobe_ctlblk(); 1293 - kcb->kprobe_status = KPROBE_HIT_ACTIVE; 1294 - } 1295 - 1296 - void kprobe_busy_end(void) 1297 - { 1298 - __this_cpu_write(current_kprobe, NULL); 1299 - preempt_enable(); 1300 - } 1301 1260 1302 1261 /* 1303 1262 * This function is called from delayed_put_task_struct() when a task is ··· 1328 1327 rp->rph = NULL; 1329 1328 } 1330 1329 } 1330 + #endif /* !CONFIG_KRETPROBE_ON_RETHOOK */ 1331 1331 1332 1332 /* Add the new probe to 'ap->list'. */ 1333 1333 static int add_new_kprobe(struct kprobe *ap, struct kprobe *p) ··· 1927 1925 1928 1926 #ifdef CONFIG_KRETPROBES 1929 1927 1928 + #if !defined(CONFIG_KRETPROBE_ON_RETHOOK) 1930 1929 /* This assumes the 'tsk' is the current task or the is not running. */ 1931 1930 static kprobe_opcode_t *__kretprobe_find_ret_addr(struct task_struct *tsk, 1932 1931 struct llist_node **cur) ··· 2090 2087 return 0; 2091 2088 } 2092 2089 NOKPROBE_SYMBOL(pre_handler_kretprobe); 2090 + #else /* CONFIG_KRETPROBE_ON_RETHOOK */ 2091 + /* 2092 + * This kprobe pre_handler is registered with every kretprobe. When probe 2093 + * hits it will set up the return probe. 2094 + */ 2095 + static int pre_handler_kretprobe(struct kprobe *p, struct pt_regs *regs) 2096 + { 2097 + struct kretprobe *rp = container_of(p, struct kretprobe, kp); 2098 + struct kretprobe_instance *ri; 2099 + struct rethook_node *rhn; 2100 + 2101 + rhn = rethook_try_get(rp->rh); 2102 + if (!rhn) { 2103 + rp->nmissed++; 2104 + return 0; 2105 + } 2106 + 2107 + ri = container_of(rhn, struct kretprobe_instance, node); 2108 + 2109 + if (rp->entry_handler && rp->entry_handler(ri, regs)) 2110 + rethook_recycle(rhn); 2111 + else 2112 + rethook_hook(rhn, regs, kprobe_ftrace(p)); 2113 + 2114 + return 0; 2115 + } 2116 + NOKPROBE_SYMBOL(pre_handler_kretprobe); 2117 + 2118 + static void kretprobe_rethook_handler(struct rethook_node *rh, void *data, 2119 + struct pt_regs *regs) 2120 + { 2121 + struct kretprobe *rp = (struct kretprobe *)data; 2122 + struct kretprobe_instance *ri; 2123 + struct kprobe_ctlblk *kcb; 2124 + 2125 + /* The data must NOT be null. This means rethook data structure is broken. */ 2126 + if (WARN_ON_ONCE(!data)) 2127 + return; 2128 + 2129 + __this_cpu_write(current_kprobe, &rp->kp); 2130 + kcb = get_kprobe_ctlblk(); 2131 + kcb->kprobe_status = KPROBE_HIT_ACTIVE; 2132 + 2133 + ri = container_of(rh, struct kretprobe_instance, node); 2134 + rp->handler(ri, regs); 2135 + 2136 + __this_cpu_write(current_kprobe, NULL); 2137 + } 2138 + NOKPROBE_SYMBOL(kretprobe_rethook_handler); 2139 + 2140 + #endif /* !CONFIG_KRETPROBE_ON_RETHOOK */ 2093 2141 2094 2142 /** 2095 2143 * kprobe_on_func_entry() -- check whether given address is function entry ··· 2209 2155 rp->maxactive = num_possible_cpus(); 2210 2156 #endif 2211 2157 } 2158 + #ifdef CONFIG_KRETPROBE_ON_RETHOOK 2159 + rp->rh = rethook_alloc((void *)rp, kretprobe_rethook_handler); 2160 + if (!rp->rh) 2161 + return -ENOMEM; 2162 + 2163 + for (i = 0; i < rp->maxactive; i++) { 2164 + inst = kzalloc(sizeof(struct kretprobe_instance) + 2165 + rp->data_size, GFP_KERNEL); 2166 + if (inst == NULL) { 2167 + rethook_free(rp->rh); 2168 + rp->rh = NULL; 2169 + return -ENOMEM; 2170 + } 2171 + rethook_add_node(rp->rh, &inst->node); 2172 + } 2173 + rp->nmissed = 0; 2174 + /* Establish function entry probe point */ 2175 + ret = register_kprobe(&rp->kp); 2176 + if (ret != 0) { 2177 + rethook_free(rp->rh); 2178 + rp->rh = NULL; 2179 + } 2180 + #else /* !CONFIG_KRETPROBE_ON_RETHOOK */ 2212 2181 rp->freelist.head = NULL; 2213 2182 rp->rph = kzalloc(sizeof(struct kretprobe_holder), GFP_KERNEL); 2214 2183 if (!rp->rph) ··· 2256 2179 ret = register_kprobe(&rp->kp); 2257 2180 if (ret != 0) 2258 2181 free_rp_inst(rp); 2182 + #endif 2259 2183 return ret; 2260 2184 } 2261 2185 EXPORT_SYMBOL_GPL(register_kretprobe); ··· 2295 2217 for (i = 0; i < num; i++) { 2296 2218 if (__unregister_kprobe_top(&rps[i]->kp) < 0) 2297 2219 rps[i]->kp.addr = NULL; 2220 + #ifdef CONFIG_KRETPROBE_ON_RETHOOK 2221 + rethook_free(rps[i]->rh); 2222 + #else 2298 2223 rps[i]->rph->rp = NULL; 2224 + #endif 2299 2225 } 2300 2226 mutex_unlock(&kprobe_mutex); 2301 2227 ··· 2307 2225 for (i = 0; i < num; i++) { 2308 2226 if (rps[i]->kp.addr) { 2309 2227 __unregister_kprobe_bottom(&rps[i]->kp); 2228 + #ifndef CONFIG_KRETPROBE_ON_RETHOOK 2310 2229 free_rp_inst(rps[i]); 2230 + #endif 2311 2231 } 2312 2232 } 2313 2233 }
+4 -4
kernel/trace/fprobe.c
··· 150 150 151 151 fp->rethook = rethook_alloc((void *)fp, fprobe_exit_handler); 152 152 for (i = 0; i < size; i++) { 153 - struct rethook_node *node; 153 + struct fprobe_rethook_node *node; 154 154 155 - node = kzalloc(sizeof(struct fprobe_rethook_node), GFP_KERNEL); 155 + node = kzalloc(sizeof(*node), GFP_KERNEL); 156 156 if (!node) { 157 157 rethook_free(fp->rethook); 158 158 fp->rethook = NULL; 159 159 return -ENOMEM; 160 160 } 161 - rethook_add_node(fp->rethook, node); 161 + rethook_add_node(fp->rethook, &node->node); 162 162 } 163 163 return 0; 164 164 } ··· 215 215 * correctly calculate the total number of filtered symbols 216 216 * from both filter and notfilter. 217 217 */ 218 - hash = fp->ops.local_hash.filter_hash; 218 + hash = rcu_access_pointer(fp->ops.local_hash.filter_hash); 219 219 if (WARN_ON_ONCE(!hash)) 220 220 goto out; 221 221
+2 -2
kernel/trace/trace_kprobe.c
··· 1433 1433 fbuffer.regs = regs; 1434 1434 entry = fbuffer.entry = ring_buffer_event_data(fbuffer.event); 1435 1435 entry->func = (unsigned long)tk->rp.kp.addr; 1436 - entry->ret_ip = (unsigned long)ri->ret_addr; 1436 + entry->ret_ip = get_kretprobe_retaddr(ri); 1437 1437 store_trace_args(&entry[1], &tk->tp, regs, sizeof(*entry), dsize); 1438 1438 1439 1439 trace_event_buffer_commit(&fbuffer); ··· 1628 1628 return; 1629 1629 1630 1630 entry->func = (unsigned long)tk->rp.kp.addr; 1631 - entry->ret_ip = (unsigned long)ri->ret_addr; 1631 + entry->ret_ip = get_kretprobe_retaddr(ri); 1632 1632 store_trace_args(&entry[1], &tk->tp, regs, sizeof(*entry), dsize); 1633 1633 perf_trace_buf_submit(entry, size, rctx, call->event.type, 1, regs, 1634 1634 head, NULL);
+9 -4
net/ax25/af_ax25.c
··· 991 991 sock_orphan(sk); 992 992 ax25 = sk_to_ax25(sk); 993 993 ax25_dev = ax25->ax25_dev; 994 - if (ax25_dev) { 995 - dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker); 996 - ax25_dev_put(ax25_dev); 997 - } 998 994 999 995 if (sk->sk_type == SOCK_SEQPACKET) { 1000 996 switch (ax25->state) { ··· 1051 1055 sk->sk_shutdown |= SEND_SHUTDOWN; 1052 1056 sk->sk_state_change(sk); 1053 1057 ax25_destroy_socket(ax25); 1058 + } 1059 + if (ax25_dev) { 1060 + del_timer_sync(&ax25->timer); 1061 + del_timer_sync(&ax25->t1timer); 1062 + del_timer_sync(&ax25->t2timer); 1063 + del_timer_sync(&ax25->t3timer); 1064 + del_timer_sync(&ax25->idletimer); 1065 + dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker); 1066 + ax25_dev_put(ax25_dev); 1054 1067 } 1055 1068 1056 1069 sock->sk = NULL;
+1 -1
net/can/isotp.c
··· 1050 1050 int noblock = flags & MSG_DONTWAIT; 1051 1051 int ret = 0; 1052 1052 1053 - if (flags & ~(MSG_DONTWAIT | MSG_TRUNC)) 1053 + if (flags & ~(MSG_DONTWAIT | MSG_TRUNC | MSG_PEEK)) 1054 1054 return -EINVAL; 1055 1055 1056 1056 if (!so->bound)
+2 -2
net/openvswitch/actions.c
··· 1539 1539 pr_warn("%s: deferred action limit reached, drop sample action\n", 1540 1540 ovs_dp_name(dp)); 1541 1541 } else { /* Recirc action */ 1542 - pr_warn("%s: deferred action limit reached, drop recirc action\n", 1543 - ovs_dp_name(dp)); 1542 + pr_warn("%s: deferred action limit reached, drop recirc action (recirc_id=%#x)\n", 1543 + ovs_dp_name(dp), recirc_id); 1544 1544 } 1545 1545 } 1546 1546 }
+2 -2
net/openvswitch/flow_netlink.c
··· 2230 2230 icmpv6_key->icmpv6_type = ntohs(output->tp.src); 2231 2231 icmpv6_key->icmpv6_code = ntohs(output->tp.dst); 2232 2232 2233 - if (icmpv6_key->icmpv6_type == NDISC_NEIGHBOUR_SOLICITATION || 2234 - icmpv6_key->icmpv6_type == NDISC_NEIGHBOUR_ADVERTISEMENT) { 2233 + if (swkey->tp.src == htons(NDISC_NEIGHBOUR_SOLICITATION) || 2234 + swkey->tp.src == htons(NDISC_NEIGHBOUR_ADVERTISEMENT)) { 2235 2235 struct ovs_key_nd *nd_key; 2236 2236 2237 2237 nla = nla_reserve(skb, OVS_KEY_ATTR_ND, sizeof(*nd_key));
+7 -8
net/rxrpc/ar-internal.h
··· 777 777 enum rxrpc_propose_ack_trace); 778 778 void rxrpc_process_call(struct work_struct *); 779 779 780 - static inline void rxrpc_reduce_call_timer(struct rxrpc_call *call, 781 - unsigned long expire_at, 782 - unsigned long now, 783 - enum rxrpc_timer_trace why) 784 - { 785 - trace_rxrpc_timer(call, why, now); 786 - timer_reduce(&call->timer, expire_at); 787 - } 780 + void rxrpc_reduce_call_timer(struct rxrpc_call *call, 781 + unsigned long expire_at, 782 + unsigned long now, 783 + enum rxrpc_timer_trace why); 784 + 785 + void rxrpc_delete_call_timer(struct rxrpc_call *call); 788 786 789 787 /* 790 788 * call_object.c ··· 806 808 bool __rxrpc_queue_call(struct rxrpc_call *); 807 809 bool rxrpc_queue_call(struct rxrpc_call *); 808 810 void rxrpc_see_call(struct rxrpc_call *); 811 + bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op); 809 812 void rxrpc_get_call(struct rxrpc_call *, enum rxrpc_call_trace); 810 813 void rxrpc_put_call(struct rxrpc_call *, enum rxrpc_call_trace); 811 814 void rxrpc_cleanup_call(struct rxrpc_call *);
+1 -1
net/rxrpc/call_event.c
··· 310 310 } 311 311 312 312 if (call->state == RXRPC_CALL_COMPLETE) { 313 - del_timer_sync(&call->timer); 313 + rxrpc_delete_call_timer(call); 314 314 goto out_put; 315 315 } 316 316
+35 -5
net/rxrpc/call_object.c
··· 53 53 54 54 if (call->state < RXRPC_CALL_COMPLETE) { 55 55 trace_rxrpc_timer(call, rxrpc_timer_expired, jiffies); 56 - rxrpc_queue_call(call); 56 + __rxrpc_queue_call(call); 57 + } else { 58 + rxrpc_put_call(call, rxrpc_call_put); 57 59 } 60 + } 61 + 62 + void rxrpc_reduce_call_timer(struct rxrpc_call *call, 63 + unsigned long expire_at, 64 + unsigned long now, 65 + enum rxrpc_timer_trace why) 66 + { 67 + if (rxrpc_try_get_call(call, rxrpc_call_got_timer)) { 68 + trace_rxrpc_timer(call, why, now); 69 + if (timer_reduce(&call->timer, expire_at)) 70 + rxrpc_put_call(call, rxrpc_call_put_notimer); 71 + } 72 + } 73 + 74 + void rxrpc_delete_call_timer(struct rxrpc_call *call) 75 + { 76 + if (del_timer_sync(&call->timer)) 77 + rxrpc_put_call(call, rxrpc_call_put_timer); 58 78 } 59 79 60 80 static struct lock_class_key rxrpc_call_user_mutex_lock_class_key; ··· 483 463 } 484 464 } 485 465 466 + bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op) 467 + { 468 + const void *here = __builtin_return_address(0); 469 + int n = atomic_fetch_add_unless(&call->usage, 1, 0); 470 + 471 + if (n == 0) 472 + return false; 473 + trace_rxrpc_call(call->debug_id, op, n, here, NULL); 474 + return true; 475 + } 476 + 486 477 /* 487 478 * Note the addition of a ref on a call. 488 479 */ ··· 541 510 spin_unlock_bh(&call->lock); 542 511 543 512 rxrpc_put_call_slot(call); 544 - 545 - del_timer_sync(&call->timer); 513 + rxrpc_delete_call_timer(call); 546 514 547 515 /* Make sure we don't get any more notifications */ 548 516 write_lock_bh(&rx->recvmsg_lock); ··· 648 618 struct rxrpc_call *call = container_of(work, struct rxrpc_call, processor); 649 619 struct rxrpc_net *rxnet = call->rxnet; 650 620 621 + rxrpc_delete_call_timer(call); 622 + 651 623 rxrpc_put_connection(call->conn); 652 624 rxrpc_put_peer(call->peer); 653 625 kfree(call->rxtx_buffer); ··· 683 651 _net("DESTROY CALL %d", call->debug_id); 684 652 685 653 memset(&call->sock_node, 0xcd, sizeof(call->sock_node)); 686 654 687 - 688 - del_timer_sync(&call->timer); 689 655 ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE); 690 656 ASSERT(test_bit(RXRPC_CALL_RELEASED, &call->flags));
+5 -2
net/rxrpc/server_key.c
··· 84 84 85 85 prep->payload.data[1] = (struct rxrpc_security *)sec; 86 86 87 + if (!sec->preparse_server_key) 88 + return -EINVAL; 89 + 87 90 return sec->preparse_server_key(prep); 88 91 } 89 92 ··· 94 91 { 95 92 const struct rxrpc_security *sec = prep->payload.data[1]; 96 93 97 - if (sec) 94 + if (sec && sec->free_preparse_server_key) 98 95 sec->free_preparse_server_key(prep); 99 96 } 100 97 ··· 102 99 { 103 100 const struct rxrpc_security *sec = key->payload.data[1]; 104 101 105 - if (sec) 102 + if (sec && sec->destroy_server_key) 106 103 sec->destroy_server_key(key); 107 104 } 108 105
+6 -2
net/xdp/xsk_buff_pool.c
··· 591 591 u32 nb_entries1 = 0, nb_entries2; 592 592 593 593 if (unlikely(pool->dma_need_sync)) { 594 + struct xdp_buff *buff; 595 + 594 596 /* Slow path */ 595 - *xdp = xp_alloc(pool); 596 - return !!*xdp; 597 + buff = xp_alloc(pool); 598 + if (buff) 599 + *xdp = buff; 600 + return !!buff; 597 601 } 598 602 599 603 if (unlikely(pool->free_list_cnt)) {
+4 -1
tools/bpf/bpftool/feature.c
··· 207 207 printf("bpf() syscall for unprivileged users is enabled\n"); 208 208 break; 209 209 case 1: 210 - printf("bpf() syscall restricted to privileged users\n"); 210 + printf("bpf() syscall restricted to privileged users (without recovery)\n"); 211 + break; 212 + case 2: 213 + printf("bpf() syscall restricted to privileged users (admin can change)\n"); 211 214 break; 212 215 case -1: 213 216 printf("Unable to retrieve required privileges for bpf() syscall\n");
+1 -1
tools/bpf/bpftool/gen.c
··· 477 477 codegen("\ 478 478 \n\ 479 479 __attribute__((unused)) static void \n\ 480 - %1$s__assert(struct %1$s *s) \n\ 480 + %1$s__assert(struct %1$s *s __attribute__((unused))) \n\ 481 481 { \n\ 482 482 #ifdef __cplusplus \n\ 483 483 #define _Static_assert static_assert \n\
+4 -4
tools/include/uapi/linux/bpf.h
··· 3009 3009 * 3010 3010 * # sysctl kernel.perf_event_max_stack=<new value> 3011 3011 * Return 3012 - * A non-negative value equal to or less than *size* on success, 3013 - * or a negative error in case of failure. 3012 + * The non-negative copied *buf* length equal to or less than 3013 + * *size* on success, or a negative error in case of failure. 3014 3014 * 3015 3015 * long bpf_skb_load_bytes_relative(const void *skb, u32 offset, void *to, u32 len, u32 start_header) 3016 3016 * Description ··· 4316 4316 * 4317 4317 * # sysctl kernel.perf_event_max_stack=<new value> 4318 4318 * Return 4319 - * A non-negative value equal to or less than *size* on success, 4320 - * or a negative error in case of failure. 4319 + * The non-negative copied *buf* length equal to or less than 4320 + * *size* on success, or a negative error in case of failure. 4321 4321 * 4322 4322 * long bpf_load_hdr_opt(struct bpf_sock_ops *skops, void *searchby_res, u32 len, u64 flags) 4323 4323 * Description
-3
tools/testing/selftests/bpf/prog_tests/get_stack_raw_tp.c
··· 29 29 */ 30 30 struct get_stack_trace_t e; 31 31 int i, num_stack; 32 - static __u64 cnt; 33 32 struct ksym *ks; 34 - 35 - cnt++; 36 33 37 34 memset(&e, 0, sizeof(e)); 38 35 memcpy(&e, data, size <= sizeof(e) ? size : sizeof(e));
+2 -10
tools/testing/selftests/bpf/progs/test_stacktrace_build_id.c
··· 39 39 __type(value, stack_trace_t); 40 40 } stack_amap SEC(".maps"); 41 41 42 - /* taken from /sys/kernel/debug/tracing/events/random/urandom_read/format */ 43 - struct random_urandom_args { 44 - unsigned long long pad; 45 - int got_bits; 46 - int pool_left; 47 - int input_left; 48 - }; 49 - 50 - SEC("tracepoint/random/urandom_read") 51 - int oncpu(struct random_urandom_args *args) 42 + SEC("kprobe/urandom_read") 43 + int oncpu(struct pt_regs *args) 52 44 { 53 45 __u32 max_len = sizeof(struct bpf_stack_build_id) 54 46 * PERF_MAX_STACK_DEPTH;
+2 -1
tools/testing/selftests/bpf/test_lpm_map.c
··· 209 209 static void test_lpm_map(int keysize) 210 210 { 211 211 LIBBPF_OPTS(bpf_map_create_opts, opts, .map_flags = BPF_F_NO_PREALLOC); 212 - size_t i, j, n_matches, n_matches_after_delete, n_nodes, n_lookups; 212 + volatile size_t n_matches, n_matches_after_delete; 213 + size_t i, j, n_nodes, n_lookups; 213 214 struct tlpm_node *t, *list = NULL; 214 215 struct bpf_lpm_trie_key *key; 215 216 uint8_t *data, *value;
+7 -19
tools/testing/selftests/wireguard/qemu/init.c
··· 56 56 57 57 static void seed_rng(void) 58 58 { 59 - int fd; 60 - struct { 61 - int entropy_count; 62 - int buffer_size; 63 - unsigned char buffer[256]; 64 - } entropy = { 65 - .entropy_count = sizeof(entropy.buffer) * 8, 66 - .buffer_size = sizeof(entropy.buffer), 67 - .buffer = "Adding real entropy is not actually important for these tests. Don't try this at home, kids!" 68 - }; 59 + int bits = 256, fd; 69 60 70 - if (mknod("/dev/urandom", S_IFCHR | 0644, makedev(1, 9))) 71 - panic("mknod(/dev/urandom)"); 72 - fd = open("/dev/urandom", O_WRONLY); 61 + pretty_message("[+] Fake seeding RNG..."); 62 + fd = open("/dev/random", O_WRONLY); 73 63 if (fd < 0) 74 - panic("open(urandom)"); 75 - for (int i = 0; i < 256; ++i) { 76 - if (ioctl(fd, RNDADDENTROPY, &entropy) < 0) 77 - panic("ioctl(urandom)"); 78 - } 64 + panic("open(random)"); 65 + if (ioctl(fd, RNDADDTOENTCNT, &bits) < 0) 66 + panic("ioctl(RNDADDTOENTCNT)"); 79 67 close(fd); 80 68 } 81 69 ··· 258 270 259 271 int main(int argc, char *argv[]) 260 272 { 261 - seed_rng(); 262 273 ensure_console(); 263 274 print_banner(); 264 275 mount_filesystems(); 276 + seed_rng(); 265 277 kmod_selftests(); 266 278 enable_logging(); 267 279 clear_leaks();