Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mm-nonmm-stable-2023-02-20-15-29' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull non-MM updates from Andrew Morton:
"There is no particular theme here - mainly quick hits all over the
tree.

Most notable is a set of zlib changes from Mikhail Zaslonko which
enhances and fixes zlib's use of S390 hardware support: 'lib/zlib: Set
of s390 DFLTCC related patches for kernel zlib'"

* tag 'mm-nonmm-stable-2023-02-20-15-29' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (55 commits)
Update CREDITS file entry for Jesper Juhl
sparc: allow PM configs for sparc32 COMPILE_TEST
hung_task: print message when hung_task_warnings gets down to zero.
arch/Kconfig: fix indentation
scripts/tags.sh: fix the Kconfig tags generation when using latest ctags
nilfs2: prevent WARNING in nilfs_dat_commit_end()
lib/zlib: remove redundant assignment of avail_in in dfltcc_gdht()
lib/Kconfig.debug: do not enable DEBUG_PREEMPT by default
lib/zlib: DFLTCC always switch to software inflate for Z_PACKET_FLUSH option
lib/zlib: DFLTCC support inflate with small window
lib/zlib: Split deflate and inflate states for DFLTCC
lib/zlib: DFLTCC not writing header bits when avail_out == 0
lib/zlib: fix DFLTCC ignoring flush modes when avail_in == 0
lib/zlib: fix DFLTCC not flushing EOBS when creating raw streams
lib/zlib: implement switching between DFLTCC and software
lib/zlib: adjust offset calculation for dfltcc_state
nilfs2: replace WARN_ONs for invalid DAT metadata block requests
scripts/spelling.txt: add "exsits" pattern and fix typo instances
fs: gracefully handle ->get_block not mapping bh in __mpage_writepage
cramfs: Kconfig: fix spelling & punctuation
...

+1778 -297
+3 -3
CREDITS
··· 1852 1852 D: fbdev hacking 1853 1853 1854 1854 N: Jesper Juhl 1855 - E: jj@chaosbits.net 1855 + E: jesperjuhl76@gmail.com 1856 1856 D: Various fixes, cleanups and minor features all over the tree. 1857 1857 D: Wrote initial version of the hdaps driver (since passed on to others). 1858 - S: Lemnosvej 1, 3.tv 1859 - S: 2300 Copenhagen S. 1858 + S: Titangade 5G, 2.tv 1859 + S: 2200 Copenhagen N. 1860 1860 S: Denmark 1861 1861 1862 1862 N: Jozsef Kadlecsik
+22 -3
Documentation/admin-guide/sysctl/kernel.rst
··· 453 453 kexec_load_disabled 454 454 =================== 455 455 456 - A toggle indicating if the ``kexec_load`` syscall has been disabled. 457 - This value defaults to 0 (false: ``kexec_load`` enabled), but can be 458 - set to 1 (true: ``kexec_load`` disabled). 456 + A toggle indicating if the syscalls ``kexec_load`` and 457 + ``kexec_file_load`` have been disabled. 458 + This value defaults to 0 (false: ``kexec_*load`` enabled), but can be 459 + set to 1 (true: ``kexec_*load`` disabled). 459 460 Once true, kexec can no longer be used, and the toggle cannot be set 460 461 back to false. 461 462 This allows a kexec image to be loaded before disabling the syscall, ··· 464 463 altered. 465 464 Generally used together with the `modules_disabled`_ sysctl. 466 465 466 + kexec_load_limit_panic 467 + ====================== 468 + 469 + This parameter specifies a limit to the number of times the syscalls 470 + ``kexec_load`` and ``kexec_file_load`` can be called with a crash 471 + image. It can only be set with a more restrictive value than the 472 + current one. 473 + 474 + == ====================================================== 475 + -1 Unlimited calls to kexec. This is the default setting. 476 + N Number of calls left. 477 + == ====================================================== 478 + 479 + kexec_load_limit_reboot 480 + ======================= 481 + 482 + Similar functionality as ``kexec_load_limit_panic``, but for a normal 483 + image. 467 484 468 485 kptr_restrict 469 486 =============
+65
Documentation/fault-injection/fault-injection.rst
··· 231 231 This feature is intended for systematic testing of faults in a single 232 232 system call. See an example below. 233 233 234 + 235 + Error Injectable Functions 236 + -------------------------- 237 + 238 + This part is for kernel developers considering adding a function to 239 + the ALLOW_ERROR_INJECTION() macro. 240 + 241 + Requirements for the Error Injectable Functions 242 + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 243 + 244 + Since function-level error injection forcibly changes the code path 245 + and returns an error even if the input and conditions are proper, it can 246 + cause an unexpected kernel crash if you allow error injection on a function 247 + which is NOT error injectable. Thus, you (and reviewers) must ensure: 248 + 249 + - The function returns an error code if it fails, and the callers must check 250 + it correctly (and be able to recover from it). 251 + 252 + - The function does not execute any code which can change any state before 253 + the first error return. The state includes global, local, and input 254 + variables. Examples: clearing an output address (e.g. `*ret = NULL`), 255 + incrementing/decrementing a counter, setting a flag, disabling preemption or 256 + IRQs, or taking a lock (if those are undone before returning the error, that is OK.) 257 + 258 + The first requirement is important; it means that release 259 + (object-freeing) functions are usually harder to make error injectable than 260 + allocation functions. If errors from such release functions are not handled 261 + correctly, memory leaks follow easily (the caller cannot tell whether the object 262 + has been released or is corrupted.) 263 + 264 + The second one matters for callers which expect the function to always 265 + do something. If error injection skips the whole of the 266 + function, that expectation is violated and causes an unexpected error. 
267 + 268 + Type of the Error Injectable Functions 269 + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 270 + 271 + Each error injectable function has an error type specified by the 272 + ALLOW_ERROR_INJECTION() macro. You have to choose it carefully if you add 273 + a new error injectable function. If the wrong error type is chosen, the 274 + kernel may crash because it may not be able to handle the error. 275 + There are 4 types of errors defined in include/asm-generic/error-injection.h 276 + 277 + EI_ETYPE_NULL 278 + This function will return `NULL` if it fails. e.g. return an allocated 279 + object address. 280 + 281 + EI_ETYPE_ERRNO 282 + This function will return an `-errno` error code if it fails. e.g. return 283 + -EINVAL if the input is wrong. This includes functions which 284 + return an address encoding `-errno` via the ERR_PTR() macro. 285 + 286 + EI_ETYPE_ERRNO_NULL 287 + This function will return an `-errno` or `NULL` if it fails. If the caller 288 + of this function checks the return value with the IS_ERR_OR_NULL() macro, this 289 + type will be appropriate. 290 + 291 + EI_ETYPE_TRUE 292 + This function will return `true` (a non-zero positive value) if it fails. 293 + 294 + If you specify the wrong type, for example, EI_ETYPE_ERRNO for a function 295 + which returns an allocated object, it may cause a problem because the returned 296 + value is not an object address and the caller cannot access the address. 297 + 298 + 234 299 How to add new fault injection capability 235 300 ----------------------------------------- 236 301
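The four error types above map directly onto how callers test the return value. As a rough userspace sketch (the kernel's real ERR_PTR()/IS_ERR() helpers live in include/linux/err.h; the lowercase names below are illustrative imitations), an `-errno` can be folded into a pointer and checked like this:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Userspace imitation of the kernel's ERR_PTR()/IS_ERR() helpers:
 * errno values are small negatives, so they map into the very top of
 * the address space, where no valid object ever lives. */
#define MAX_ERRNO 4095

static inline void *err_ptr(long error)            /* like ERR_PTR() */
{
	return (void *)error;
}

static inline long ptr_err(const void *ptr)        /* like PTR_ERR() */
{
	return (long)ptr;
}

static inline int is_err(const void *ptr)          /* like IS_ERR() */
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static inline int is_err_or_null(const void *ptr)  /* like IS_ERR_OR_NULL() */
{
	return !ptr || is_err(ptr);
}

/* An EI_ETYPE_ERRNO-style function: returns a valid object or an
 * encoded -errno, never a bare NULL. */
static void *alloc_or_errno(int fail)
{
	if (fail)
		return err_ptr(-EINVAL);
	return malloc(16);
}
```

A caller of an EI_ETYPE_ERRNO function checks `is_err()`; a caller of an EI_ETYPE_ERRNO_NULL function must use `is_err_or_null()`, since either encoding can come back.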
+64 -64
arch/Kconfig
··· 35 35 bool 36 36 37 37 config GENERIC_ENTRY 38 - bool 38 + bool 39 39 40 40 config KPROBES 41 41 bool "Kprobes" ··· 55 55 depends on HAVE_ARCH_JUMP_LABEL 56 56 select OBJTOOL if HAVE_JUMP_LABEL_HACK 57 57 help 58 - This option enables a transparent branch optimization that 59 - makes certain almost-always-true or almost-always-false branch 60 - conditions even cheaper to execute within the kernel. 58 + This option enables a transparent branch optimization that 59 + makes certain almost-always-true or almost-always-false branch 60 + conditions even cheaper to execute within the kernel. 61 61 62 - Certain performance-sensitive kernel code, such as trace points, 63 - scheduler functionality, networking code and KVM have such 64 - branches and include support for this optimization technique. 62 + Certain performance-sensitive kernel code, such as trace points, 63 + scheduler functionality, networking code and KVM have such 64 + branches and include support for this optimization technique. 65 65 66 - If it is detected that the compiler has support for "asm goto", 67 - the kernel will compile such branches with just a nop 68 - instruction. When the condition flag is toggled to true, the 69 - nop will be converted to a jump instruction to execute the 70 - conditional block of instructions. 66 + If it is detected that the compiler has support for "asm goto", 67 + the kernel will compile such branches with just a nop 68 + instruction. When the condition flag is toggled to true, the 69 + nop will be converted to a jump instruction to execute the 70 + conditional block of instructions. 71 71 72 - This technique lowers overhead and stress on the branch prediction 73 - of the processor and generally makes the kernel faster. The update 74 - of the condition is slower, but those are always very rare. 72 + This technique lowers overhead and stress on the branch prediction 73 + of the processor and generally makes the kernel faster. 
The update 74 + of the condition is slower, but those are always very rare. 75 75 76 - ( On 32-bit x86, the necessary options added to the compiler 77 - flags may increase the size of the kernel slightly. ) 76 + ( On 32-bit x86, the necessary options added to the compiler 77 + flags may increase the size of the kernel slightly. ) 78 78 79 79 config STATIC_KEYS_SELFTEST 80 80 bool "Static key selftest" ··· 98 98 depends on KPROBES && HAVE_KPROBES_ON_FTRACE 99 99 depends on DYNAMIC_FTRACE_WITH_REGS 100 100 help 101 - If function tracer is enabled and the arch supports full 102 - passing of pt_regs to function tracing, then kprobes can 103 - optimize on top of function tracing. 101 + If function tracer is enabled and the arch supports full 102 + passing of pt_regs to function tracing, then kprobes can 103 + optimize on top of function tracing. 104 104 105 105 config UPROBES 106 106 def_bool n ··· 154 154 config ARCH_USE_BUILTIN_BSWAP 155 155 bool 156 156 help 157 - Modern versions of GCC (since 4.4) have builtin functions 158 - for handling byte-swapping. Using these, instead of the old 159 - inline assembler that the architecture code provides in the 160 - __arch_bswapXX() macros, allows the compiler to see what's 161 - happening and offers more opportunity for optimisation. In 162 - particular, the compiler will be able to combine the byteswap 163 - with a nearby load or store and use load-and-swap or 164 - store-and-swap instructions if the architecture has them. It 165 - should almost *never* result in code which is worse than the 166 - hand-coded assembler in <asm/swab.h>. But just in case it 167 - does, the use of the builtins is optional. 157 + Modern versions of GCC (since 4.4) have builtin functions 158 + for handling byte-swapping. Using these, instead of the old 159 + inline assembler that the architecture code provides in the 160 + __arch_bswapXX() macros, allows the compiler to see what's 161 + happening and offers more opportunity for optimisation. 
In 162 + particular, the compiler will be able to combine the byteswap 163 + with a nearby load or store and use load-and-swap or 164 + store-and-swap instructions if the architecture has them. It 165 + should almost *never* result in code which is worse than the 166 + hand-coded assembler in <asm/swab.h>. But just in case it 167 + does, the use of the builtins is optional. 168 168 169 - Any architecture with load-and-swap or store-and-swap 170 - instructions should set this. And it shouldn't hurt to set it 171 - on architectures that don't have such instructions. 169 + Any architecture with load-and-swap or store-and-swap 170 + instructions should set this. And it shouldn't hurt to set it 171 + on architectures that don't have such instructions. 172 172 173 173 config KRETPROBES 174 174 def_bool y ··· 720 720 depends on !COMPILE_TEST 721 721 select LTO_CLANG 722 722 help 723 - This option enables Clang's full Link Time Optimization (LTO), which 724 - allows the compiler to optimize the kernel globally. If you enable 725 - this option, the compiler generates LLVM bitcode instead of ELF 726 - object files, and the actual compilation from bitcode happens at 727 - the LTO link step, which may take several minutes depending on the 728 - kernel configuration. More information can be found from LLVM's 729 - documentation: 723 + This option enables Clang's full Link Time Optimization (LTO), which 724 + allows the compiler to optimize the kernel globally. If you enable 725 + this option, the compiler generates LLVM bitcode instead of ELF 726 + object files, and the actual compilation from bitcode happens at 727 + the LTO link step, which may take several minutes depending on the 728 + kernel configuration. 
More information can be found from LLVM's 729 + documentation: 730 730 731 731 https://llvm.org/docs/LinkTimeOptimization.html 732 732 ··· 1330 1330 bool 1331 1331 1332 1332 config HAVE_SPARSE_SYSCALL_NR 1333 - bool 1334 - help 1335 - An architecture should select this if its syscall numbering is sparse 1333 + bool 1334 + help 1335 + An architecture should select this if its syscall numbering is sparse 1336 1336 to save space. For example, MIPS architecture has a syscall array with 1337 1337 entries at 4000, 5000 and 6000 locations. This option turns on syscall 1338 1338 related optimizations for a given architecture. ··· 1356 1356 depends on HAVE_STATIC_CALL 1357 1357 select HAVE_PREEMPT_DYNAMIC 1358 1358 help 1359 - An architecture should select this if it can handle the preemption 1360 - model being selected at boot time using static calls. 1359 + An architecture should select this if it can handle the preemption 1360 + model being selected at boot time using static calls. 1361 1361 1362 - Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a 1363 - preemption function will be patched directly. 1362 + Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a 1363 + preemption function will be patched directly. 1364 1364 1365 - Where an architecture does not select HAVE_STATIC_CALL_INLINE, any 1366 - call to a preemption function will go through a trampoline, and the 1367 - trampoline will be patched. 1365 + Where an architecture does not select HAVE_STATIC_CALL_INLINE, any 1366 + call to a preemption function will go through a trampoline, and the 1367 + trampoline will be patched. 1368 1368 1369 - It is strongly advised to support inline static call to avoid any 1370 - overhead. 1369 + It is strongly advised to support inline static call to avoid any 1370 + overhead. 
1371 1371 1372 1372 config HAVE_PREEMPT_DYNAMIC_KEY 1373 1373 bool 1374 1374 depends on HAVE_ARCH_JUMP_LABEL 1375 1375 select HAVE_PREEMPT_DYNAMIC 1376 1376 help 1377 - An architecture should select this if it can handle the preemption 1378 - model being selected at boot time using static keys. 1377 + An architecture should select this if it can handle the preemption 1378 + model being selected at boot time using static keys. 1379 1379 1380 - Each preemption function will be given an early return based on a 1381 - static key. This should have slightly lower overhead than non-inline 1382 - static calls, as this effectively inlines each trampoline into the 1383 - start of its callee. This may avoid redundant work, and may 1384 - integrate better with CFI schemes. 1380 + Each preemption function will be given an early return based on a 1381 + static key. This should have slightly lower overhead than non-inline 1382 + static calls, as this effectively inlines each trampoline into the 1383 + start of its callee. This may avoid redundant work, and may 1384 + integrate better with CFI schemes. 1385 1385 1386 - This will have greater overhead than using inline static calls as 1387 - the call to the preemption function cannot be entirely elided. 1386 + This will have greater overhead than using inline static calls as 1387 + the call to the preemption function cannot be entirely elided. 1388 1388 1389 1389 config ARCH_WANT_LD_ORPHAN_WARN 1390 1390 bool ··· 1407 1407 config ARCH_SPLIT_ARG64 1408 1408 bool 1409 1409 help 1410 - If a 32-bit architecture requires 64-bit arguments to be split into 1411 - pairs of 32-bit arguments, select this option. 1410 + If a 32-bit architecture requires 64-bit arguments to be split into 1411 + pairs of 32-bit arguments, select this option. 1412 1412 1413 1413 config ARCH_HAS_ELFCORE_COMPAT 1414 1414 bool
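The JUMP_LABEL help text above describes patching almost-always-one-way branches into nops at runtime. Static keys themselves need kernel infrastructure, but the compile-time half of the idea is visible in userspace: the kernel's likely()/unlikely() hints wrap `__builtin_expect`, which biases code layout toward the expected path without ever changing the result. A minimal sketch (the macro definitions mirror include/linux/compiler.h; `trace_enabled` is an invented stand-in for a tracepoint key):

```c
#include <assert.h>

/* likely()/unlikely() as in the kernel: __builtin_expect only biases
 * code layout at compile time, so the hint is safe even when wrong. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

static int trace_enabled;	/* almost always false, like a tracepoint */

static int traced_work(int v)
{
	if (unlikely(trace_enabled))
		v += 1000;	/* cold path: emit trace data */
	return v * 2;		/* hot path */
}
```

A static key goes one step further than this: instead of a conditional branch laid out favourably, the kernel emits a nop and rewrites it to a jump only when the key is toggled.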
+1 -1
arch/alpha/kernel/process.c
··· 73 73 static void 74 74 common_shutdown_1(void *generic_ptr) 75 75 { 76 - struct halt_info *how = (struct halt_info *)generic_ptr; 76 + struct halt_info *how = generic_ptr; 77 77 struct percpu_struct *cpup; 78 78 unsigned long *pflags, flags; 79 79 int cpuid = smp_processor_id();
+2 -2
arch/alpha/kernel/smp.c
··· 628 628 static void 629 629 ipi_flush_tlb_mm(void *x) 630 630 { 631 - struct mm_struct *mm = (struct mm_struct *) x; 631 + struct mm_struct *mm = x; 632 632 if (mm == current->active_mm && !asn_locked()) 633 633 flush_tlb_current(mm); 634 634 else ··· 670 670 static void 671 671 ipi_flush_tlb_page(void *x) 672 672 { 673 - struct flush_tlb_page_struct *data = (struct flush_tlb_page_struct *)x; 673 + struct flush_tlb_page_struct *data = x; 674 674 struct mm_struct * mm = data->mm; 675 675 676 676 if (mm == current->active_mm && !asn_locked())
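The two alpha hunks above drop casts that C never needed: unlike C++, C converts `void *` to any object pointer type implicitly, so `(struct mm_struct *)x` is pure noise. A minimal illustration of the same shape (the struct and callback here are invented stand-ins for the IPI handlers):

```c
#include <assert.h>

struct mm_struct { int id; };

/* Callback with the usual void * context argument, as used by
 * on_each_cpu()-style IPI helpers. */
static int read_mm_id(void *x)
{
	struct mm_struct *mm = x;	/* implicit conversion; no cast needed in C */
	return mm->id;
}
```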
+1 -1
arch/sparc/Kconfig
··· 283 283 This config option is actually maximum order plus one. For example, 284 284 a value of 13 means that the largest free memory block is 2^12 pages. 285 285 286 - if SPARC64 286 + if SPARC64 || COMPILE_TEST 287 287 source "kernel/power/Kconfig" 288 288 endif 289 289
+4 -4
arch/x86/kvm/emulate.c
··· 2615 2615 return true; 2616 2616 } 2617 2617 2618 - static bool emulator_io_permited(struct x86_emulate_ctxt *ctxt, 2619 - u16 port, u16 len) 2618 + static bool emulator_io_permitted(struct x86_emulate_ctxt *ctxt, 2619 + u16 port, u16 len) 2620 2620 { 2621 2621 if (ctxt->perm_ok) 2622 2622 return true; ··· 3961 3961 static int check_perm_in(struct x86_emulate_ctxt *ctxt) 3962 3962 { 3963 3963 ctxt->dst.bytes = min(ctxt->dst.bytes, 4u); 3964 - if (!emulator_io_permited(ctxt, ctxt->src.val, ctxt->dst.bytes)) 3964 + if (!emulator_io_permitted(ctxt, ctxt->src.val, ctxt->dst.bytes)) 3965 3965 return emulate_gp(ctxt, 0); 3966 3966 3967 3967 return X86EMUL_CONTINUE; ··· 3970 3970 static int check_perm_out(struct x86_emulate_ctxt *ctxt) 3971 3971 { 3972 3972 ctxt->src.bytes = min(ctxt->src.bytes, 4u); 3973 - if (!emulator_io_permited(ctxt, ctxt->dst.val, ctxt->src.bytes)) 3973 + if (!emulator_io_permitted(ctxt, ctxt->dst.val, ctxt->src.bytes)) 3974 3974 return emulate_gp(ctxt, 0); 3975 3975 3976 3976 return X86EMUL_CONTINUE;
+1 -1
drivers/infiniband/ulp/iser/iscsi_iser.c
··· 446 446 * @is_leading: indicate if this is the session leading connection (MCS) 447 447 * 448 448 * Return: zero on success, $error if iscsi_conn_bind fails and 449 - * -EINVAL in case end-point doesn't exsits anymore or iser connection 449 + * -EINVAL in case end-point doesn't exists anymore or iser connection 450 450 * state is not UP (teardown already started). 451 451 */ 452 452 static int iscsi_iser_conn_bind(struct iscsi_cls_session *cls_session,
+1 -1
fs/cramfs/Kconfig
··· 38 38 default y if !CRAMFS_BLOCKDEV 39 39 help 40 40 This option allows the CramFs driver to load data directly from 41 - a linear adressed memory range (usually non volatile memory 41 + a linear addressed memory range (usually non-volatile memory 42 42 like flash) instead of going through the block device layer. 43 43 This saves some memory since no intermediate buffering is 44 44 necessary.
+2 -3
fs/ext4/inode.c
··· 786 786 * once we get rid of using bh as a container for mapping information 787 787 * to pass to / from get_block functions, this can go away. 788 788 */ 789 + old_state = READ_ONCE(bh->b_state); 789 790 do { 790 - old_state = READ_ONCE(bh->b_state); 791 791 new_state = (old_state & ~EXT4_MAP_FLAGS) | flags; 792 - } while (unlikely( 793 - cmpxchg(&bh->b_state, old_state, new_state) != old_state)); 792 + } while (unlikely(!try_cmpxchg(&bh->b_state, &old_state, new_state))); 794 793 } 795 794 796 795 static int _ext4_get_block(struct inode *inode, sector_t iblock,
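The ext4 hunk above can hoist the read out of the loop because `try_cmpxchg()` writes the value it actually found back into `old_state` on failure, so re-reading at the top of every iteration is redundant. A userspace sketch of the resulting shape, built on the GCC/Clang `__atomic` builtins (the flag mask is illustrative, not ext4's real `EXT4_MAP_FLAGS`):

```c
#include <assert.h>
#include <stdbool.h>

#define MAP_FLAGS_MASK 0xffUL	/* illustrative low-bits flag mask */

/* try_cmpxchg-style helper: on failure, *old is refreshed with the
 * value currently in *ptr, so the caller can retry without re-reading. */
static bool try_cmpxchg_ulong(unsigned long *ptr, unsigned long *old,
			      unsigned long new)
{
	return __atomic_compare_exchange_n(ptr, old, new, false,
					   __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}

static void set_map_flags(unsigned long *state, unsigned long flags)
{
	unsigned long old_state = __atomic_load_n(state, __ATOMIC_RELAXED);
	unsigned long new_state;

	/* read once; try_cmpxchg_ulong() refreshes old_state on contention */
	do {
		new_state = (old_state & ~MAP_FLAGS_MASK) | flags;
	} while (!try_cmpxchg_ulong(state, &old_state, new_state));
}
```

With the older `cmpxchg()` interface the loop had to re-read `b_state` itself each time, which is exactly the line this patch removes.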
+2 -2
fs/fat/namei_vfat.c
··· 200 200 201 201 /* Characters that are undesirable in an MS-DOS file name */ 202 202 203 - static inline wchar_t vfat_bad_char(wchar_t w) 203 + static inline bool vfat_bad_char(wchar_t w) 204 204 { 205 205 return (w < 0x0020) 206 206 || (w == '*') || (w == '?') || (w == '<') || (w == '>') ··· 208 208 || (w == '\\'); 209 209 } 210 210 211 - static inline wchar_t vfat_replace_char(wchar_t w) 211 + static inline bool vfat_replace_char(wchar_t w) 212 212 { 213 213 return (w == '[') || (w == ']') || (w == ';') || (w == ',') 214 214 || (w == '+') || (w == '=');
+3 -3
fs/freevxfs/vxfs_subr.c
··· 31 31 32 32 /** 33 33 * vxfs_get_page - read a page into memory. 34 - * @ip: inode to read from 34 + * @mapping: mapping to read from 35 35 * @n: page number 36 36 * 37 37 * Description: ··· 81 81 } 82 82 83 83 /** 84 - * vxfs_get_block - locate buffer for given inode,block tuple 84 + * vxfs_getblk - locate buffer for given inode,block tuple 85 85 * @ip: inode 86 86 * @iblock: logical block 87 87 * @bp: buffer skeleton 88 88 * @create: %TRUE if blocks may be newly allocated. 89 89 * 90 90 * Description: 91 - * The vxfs_get_block function fills @bp with the right physical 91 + * The vxfs_getblk function fills @bp with the right physical 92 92 * block and device number to perform a lowlevel read/write on 93 93 * it. 94 94 *
+1 -1
fs/freevxfs/vxfs_super.c
··· 165 165 } 166 166 167 167 /** 168 - * vxfs_read_super - read superblock into memory and initialize filesystem 168 + * vxfs_fill_super - read superblock into memory and initialize filesystem 169 169 * @sbp: VFS superblock (to fill) 170 170 * @dp: fs private mount data 171 171 * @silent: do not complain loudly when sth is wrong
+1
fs/hfs/bnode.c
··· 274 274 tree->node_hash[hash] = node; 275 275 tree->node_hash_cnt++; 276 276 } else { 277 + hfs_bnode_get(node2); 277 278 spin_unlock(&tree->hash_lock); 278 279 kfree(node); 279 280 wait_event(node2->lock_wq, !test_bit(HFS_BNODE_NEW, &node2->flags));
+1 -1
fs/hfs/extent.c
··· 486 486 inode->i_size); 487 487 if (inode->i_size > HFS_I(inode)->phys_size) { 488 488 struct address_space *mapping = inode->i_mapping; 489 - void *fsdata; 489 + void *fsdata = NULL; 490 490 struct page *page; 491 491 492 492 /* XXX: Can use generic_cont_expand? */
+1 -1
fs/hfsplus/extents.c
··· 554 554 if (inode->i_size > hip->phys_size) { 555 555 struct address_space *mapping = inode->i_mapping; 556 556 struct page *page; 557 - void *fsdata; 557 + void *fsdata = NULL; 558 558 loff_t size = inode->i_size; 559 559 560 560 res = hfsplus_write_begin(NULL, mapping, size, 0,
+9 -9
fs/hfsplus/xattr.c
··· 257 257 int __hfsplus_setxattr(struct inode *inode, const char *name, 258 258 const void *value, size_t size, int flags) 259 259 { 260 - int err = 0; 260 + int err; 261 261 struct hfs_find_data cat_fd; 262 262 hfsplus_cat_entry entry; 263 263 u16 cat_entry_flags, cat_entry_type; ··· 494 494 __be32 xattr_record_type; 495 495 u32 record_type; 496 496 u16 record_length = 0; 497 - ssize_t res = 0; 497 + ssize_t res; 498 498 499 499 if ((!S_ISREG(inode->i_mode) && 500 500 !S_ISDIR(inode->i_mode)) || ··· 606 606 static ssize_t hfsplus_listxattr_finder_info(struct dentry *dentry, 607 607 char *buffer, size_t size) 608 608 { 609 - ssize_t res = 0; 609 + ssize_t res; 610 610 struct inode *inode = d_inode(dentry); 611 611 struct hfs_find_data fd; 612 612 u16 entry_type; ··· 674 674 ssize_t hfsplus_listxattr(struct dentry *dentry, char *buffer, size_t size) 675 675 { 676 676 ssize_t err; 677 - ssize_t res = 0; 677 + ssize_t res; 678 678 struct inode *inode = d_inode(dentry); 679 679 struct hfs_find_data fd; 680 - u16 key_len = 0; 681 680 struct hfsplus_attr_key attr_key; 682 681 char *strbuf; 683 682 int xattr_name_len; ··· 718 719 } 719 720 720 721 for (;;) { 721 - key_len = hfs_bnode_read_u16(fd.bnode, fd.keyoffset); 722 + u16 key_len = hfs_bnode_read_u16(fd.bnode, fd.keyoffset); 723 + 722 724 if (key_len == 0 || key_len > fd.tree->max_key_len) { 723 725 pr_err("invalid xattr key length: %d\n", key_len); 724 726 res = -EIO; ··· 766 766 767 767 static int hfsplus_removexattr(struct inode *inode, const char *name) 768 768 { 769 - int err = 0; 769 + int err; 770 770 struct hfs_find_data cat_fd; 771 771 u16 flags; 772 772 u16 cat_entry_type; 773 - int is_xattr_acl_deleted = 0; 774 - int is_all_xattrs_deleted = 0; 773 + int is_xattr_acl_deleted; 774 + int is_all_xattrs_deleted; 775 775 776 776 if (!HFSPLUS_SB(inode->i_sb)->attr_tree) 777 777 return -EOPNOTSUPP;
+28 -10
fs/nilfs2/dat.c
··· 40 40 static int nilfs_dat_prepare_entry(struct inode *dat, 41 41 struct nilfs_palloc_req *req, int create) 42 42 { 43 - return nilfs_palloc_get_entry_block(dat, req->pr_entry_nr, 44 - create, &req->pr_entry_bh); 43 + int ret; 44 + 45 + ret = nilfs_palloc_get_entry_block(dat, req->pr_entry_nr, 46 + create, &req->pr_entry_bh); 47 + if (unlikely(ret == -ENOENT)) { 48 + nilfs_err(dat->i_sb, 49 + "DAT doesn't have a block to manage vblocknr = %llu", 50 + (unsigned long long)req->pr_entry_nr); 51 + /* 52 + * Return internal code -EINVAL to notify bmap layer of 53 + * metadata corruption. 54 + */ 55 + ret = -EINVAL; 56 + } 57 + return ret; 45 58 } 46 59 47 60 static void nilfs_dat_commit_entry(struct inode *dat, ··· 136 123 137 124 int nilfs_dat_prepare_start(struct inode *dat, struct nilfs_palloc_req *req) 138 125 { 139 - int ret; 140 - 141 - ret = nilfs_dat_prepare_entry(dat, req, 0); 142 - WARN_ON(ret == -ENOENT); 143 - return ret; 126 + return nilfs_dat_prepare_entry(dat, req, 0); 144 127 } 145 128 146 129 void nilfs_dat_commit_start(struct inode *dat, struct nilfs_palloc_req *req, ··· 158 149 int nilfs_dat_prepare_end(struct inode *dat, struct nilfs_palloc_req *req) 159 150 { 160 151 struct nilfs_dat_entry *entry; 152 + __u64 start; 161 153 sector_t blocknr; 162 154 void *kaddr; 163 155 int ret; 164 156 165 157 ret = nilfs_dat_prepare_entry(dat, req, 0); 166 - if (ret < 0) { 167 - WARN_ON(ret == -ENOENT); 158 + if (ret < 0) 168 159 return ret; 169 - } 170 160 171 161 kaddr = kmap_atomic(req->pr_entry_bh->b_page); 172 162 entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr, 173 163 req->pr_entry_bh, kaddr); 164 + start = le64_to_cpu(entry->de_start); 174 165 blocknr = le64_to_cpu(entry->de_blocknr); 175 166 kunmap_atomic(kaddr); 176 167 ··· 180 171 nilfs_dat_abort_entry(dat, req); 181 172 return ret; 182 173 } 174 + } 175 + if (unlikely(start > nilfs_mdt_cno(dat))) { 176 + nilfs_err(dat->i_sb, 177 + "vblocknr = %llu has abnormal lifetime: start cno (= 
%llu) > current cno (= %llu)", 178 + (unsigned long long)req->pr_entry_nr, 179 + (unsigned long long)start, 180 + (unsigned long long)nilfs_mdt_cno(dat)); 181 + nilfs_dat_abort_entry(dat, req); 182 + return -EINVAL; 183 183 } 184 184 185 185 return 0;
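The pattern in this nilfs2 hunk — catching a low-level -ENOENT at the single place it can occur, logging it, and translating it to -EINVAL so upper layers see "metadata corruption" rather than "no such entry" — is a common kernel idiom. A hypothetical userspace sketch of the shape (the lookup function and its threshold are invented for illustration):

```c
#include <assert.h>
#include <errno.h>

/* Stand-in for nilfs_palloc_get_entry_block(): pretend entries below
 * 100 exist and the rest are missing (illustrative only). */
static int lookup_entry_block(unsigned long long entry_nr)
{
	return entry_nr < 100 ? 0 : -ENOENT;
}

/* Like nilfs_dat_prepare_entry(): a missing DAT block at this layer can
 * only mean corrupted metadata, so translate -ENOENT to -EINVAL and let
 * the caller treat it as a filesystem error, not an ordinary miss. */
static int prepare_entry(unsigned long long entry_nr)
{
	int ret = lookup_entry_block(entry_nr);

	if (ret == -ENOENT)
		ret = -EINVAL;	/* notify upper (bmap) layer of corruption */
	return ret;
}
```

Centralising the translation also lets the patch delete the `WARN_ON(ret == -ENOENT)` checks that every caller previously carried.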
+5 -5
fs/ntfs/aops.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 - /** 2 + /* 3 3 * aops.c - NTFS kernel address space operations and page cache handling. 4 4 * 5 5 * Copyright (c) 2001-2014 Anton Altaparmakov and Tuxera Inc. ··· 1646 1646 return block; 1647 1647 } 1648 1648 1649 - /** 1649 + /* 1650 1650 * ntfs_normal_aops - address space operations for normal inodes and attributes 1651 1651 * 1652 1652 * Note these are not used for compressed or mst protected inodes and ··· 1664 1664 .error_remove_page = generic_error_remove_page, 1665 1665 }; 1666 1666 1667 - /** 1667 + /* 1668 1668 * ntfs_compressed_aops - address space operations for compressed inodes 1669 1669 */ 1670 1670 const struct address_space_operations ntfs_compressed_aops = { ··· 1678 1678 .error_remove_page = generic_error_remove_page, 1679 1679 }; 1680 1680 1681 - /** 1681 + /* 1682 1682 * ntfs_mst_aops - general address space operations for mst protecteed inodes 1683 - * and attributes 1683 + * and attributes 1684 1684 */ 1685 1685 const struct address_space_operations ntfs_mst_aops = { 1686 1686 .read_folio = ntfs_read_folio, /* Fill page with data. */
+1 -1
fs/ntfs/aops.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /** 2 + /* 3 3 * aops.h - Defines for NTFS kernel address space operations and page cache 4 4 * handling. Part of the Linux-NTFS project. 5 5 *
+3 -3
fs/ntfs/compress.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 - /** 2 + /* 3 3 * compress.c - NTFS kernel compressed attributes handling. 4 4 * Part of the Linux-NTFS project. 5 5 * ··· 41 41 NTFS_MAX_CB_SIZE = 64 * 1024, 42 42 } ntfs_compression_constants; 43 43 44 - /** 44 + /* 45 45 * ntfs_compression_buffer - one buffer for the decompression engine 46 46 */ 47 47 static u8 *ntfs_compression_buffer; 48 48 49 - /** 49 + /* 50 50 * ntfs_cb_lock - spinlock which protects ntfs_compression_buffer 51 51 */ 52 52 static DEFINE_SPINLOCK(ntfs_cb_lock);
+2 -2
fs/ntfs/dir.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 - /** 2 + /* 3 3 * dir.c - NTFS kernel directory operations. Part of the Linux-NTFS project. 4 4 * 5 5 * Copyright (c) 2001-2007 Anton Altaparmakov ··· 17 17 #include "debug.h" 18 18 #include "ntfs.h" 19 19 20 - /** 20 + /* 21 21 * The little endian Unicode string $I30 as a global constant. 22 22 */ 23 23 ntfschar I30[5] = { cpu_to_le16('$'), cpu_to_le16('I'),
+3 -3
fs/ntfs/inode.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 - /** 2 + /* 3 3 * inode.c - NTFS kernel inode handling. 4 4 * 5 5 * Copyright (c) 2001-2014 Anton Altaparmakov and Tuxera Inc. ··· 2935 2935 } 2936 2936 2937 2937 /** 2938 - * ntfs_write_inode - write out a dirty inode 2938 + * __ntfs_write_inode - write out a dirty inode 2939 2939 * @vi: inode to write out 2940 2940 * @sync: if true, write out synchronously 2941 2941 * ··· 3033 3033 * might not need to be written out. 3034 3034 * NOTE: It is not a problem when the inode for $MFT itself is being 3035 3035 * written out as mark_ntfs_record_dirty() will only set I_DIRTY_PAGES 3036 - * on the $MFT inode and hence ntfs_write_inode() will not be 3036 + * on the $MFT inode and hence __ntfs_write_inode() will not be 3037 3037 * re-invoked because of it which in turn is ok since the dirtied mft 3038 3038 * record will be cleaned and written out to disk below, i.e. before 3039 3039 * this function returns.
+1 -1
fs/ntfs/mft.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 - /** 2 + /* 3 3 * mft.c - NTFS kernel mft record operations. Part of the Linux-NTFS project. 4 4 * 5 5 * Copyright (c) 2001-2012 Anton Altaparmakov and Tuxera Inc.
+2 -2
fs/ntfs/namei.c
··· 259 259 } 260 260 } 261 261 262 - /** 262 + /* 263 263 * Inode operations for directories. 264 264 */ 265 265 const struct inode_operations ntfs_dir_inode_ops = { ··· 364 364 ntfs_nfs_get_inode); 365 365 } 366 366 367 - /** 367 + /* 368 368 * Export operations allowing NFS exporting of mounted NTFS partitions. 369 369 * 370 370 * We use the default ->encode_fh() for now. Note that they
+1 -1
fs/ntfs/runlist.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 - /** 2 + /* 3 3 * runlist.c - NTFS runlist handling code. Part of the Linux-NTFS project. 4 4 * 5 5 * Copyright (c) 2001-2007 Anton Altaparmakov
+10 -2
fs/ntfs/super.c
··· 58 58 }; 59 59 60 60 /** 61 - * simple_getbool - 61 + * simple_getbool - convert input string to a boolean value 62 + * @s: input string to convert 63 + * @setval: where to store the output boolean value 62 64 * 63 65 * Copied from old ntfs driver (which copied from vfat driver). 66 + * 67 + * "1", "yes", "true", or an empty string are converted to %true. 68 + * "0", "no", and "false" are converted to %false. 69 + * 70 + * Return: %1 if the string is converted or was empty and *setval contains it; 71 + * %0 if the string was not valid. 64 72 */ 65 73 static int simple_getbool(char *s, bool *setval) 66 74 { ··· 2665 2657 } 2666 2658 #endif 2667 2659 2668 - /** 2660 + /* 2669 2661 * The complete super operations. 2670 2662 */ 2671 2663 static const struct super_operations ntfs_sops = {
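The contract the new kernel-doc spells out is small enough to restate as code. A userspace re-implementation under those documented rules (the function name is hypothetical; this is not the kernel's simple_getbool() itself):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Mirrors the documented contract of ntfs's simple_getbool():
 * returns 1 and fills *setval when the string parses, 0 otherwise.
 * An empty/missing value counts as true ("option" == "option=1"). */
static int parse_bool_option(const char *s, bool *setval)
{
	if (!s || !*s) {
		*setval = true;
		return 1;
	}
	if (!strcmp(s, "1") || !strcmp(s, "yes") || !strcmp(s, "true")) {
		*setval = true;
		return 1;
	}
	if (!strcmp(s, "0") || !strcmp(s, "no") || !strcmp(s, "false")) {
		*setval = false;
		return 1;
	}
	return 0;	/* not a recognised boolean */
}
```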
+1
fs/proc/cmdline.c
··· 17 17 struct proc_dir_entry *pde; 18 18 19 19 pde = proc_create_single("cmdline", 0, NULL, cmdline_proc_show); 20 + pde_make_permanent(pde); 20 21 pde->size = saved_command_line_len + 1; 21 22 return 0; 22 23 }
+4 -3
include/asm-generic/error-injection.h
··· 4 4 5 5 #if defined(__KERNEL__) && !defined(__ASSEMBLY__) 6 6 enum { 7 - EI_ETYPE_NONE, /* Dummy value for undefined case */ 8 7 EI_ETYPE_NULL, /* Return NULL if failure */ 9 8 EI_ETYPE_ERRNO, /* Return -ERRNO if failure */ 10 9 EI_ETYPE_ERRNO_NULL, /* Return -ERRNO or NULL if failure */ ··· 19 20 20 21 #ifdef CONFIG_FUNCTION_ERROR_INJECTION 21 22 /* 22 - * Whitelist generating macro. Specify functions which can be 23 - * error-injectable using this macro. 23 + * Whitelist generating macro. Specify functions which can be error-injectable 24 + * using this macro. If you are unsure what is required for the error-injectable 25 + * functions, please read Documentation/fault-injection/fault-injection.rst 26 + * 'Error Injectable Functions' section. 24 27 */ 25 28 #define ALLOW_ERROR_INJECTION(fname, _etype) \ 26 29 static struct error_injection_entry __used \
+2 -1
include/linux/error-injection.h
··· 3 3 #define _LINUX_ERROR_INJECTION_H 4 4 5 5 #include <linux/compiler.h> 6 + #include <linux/errno.h> 6 7 #include <asm-generic/error-injection.h> 7 8 8 9 #ifdef CONFIG_FUNCTION_ERROR_INJECTION ··· 20 19 21 20 static inline int get_injectable_error_type(unsigned long addr) 22 21 { 23 - return EI_ETYPE_NONE; 22 + return -EOPNOTSUPP; 24 23 } 25 24 26 25 #endif
+2 -1
include/linux/kexec.h
··· 403 403 404 404 extern struct kimage *kexec_image; 405 405 extern struct kimage *kexec_crash_image; 406 - extern int kexec_load_disabled; 406 + 407 + bool kexec_load_permitted(int kexec_image_type); 407 408 408 409 #ifndef kexec_flush_icache_page 409 410 #define kexec_flush_icache_page(page)
+4 -2
include/linux/percpu_counter.h
··· 152 152 static inline void 153 153 percpu_counter_add(struct percpu_counter *fbc, s64 amount) 154 154 { 155 - preempt_disable(); 155 + unsigned long flags; 156 + 157 + local_irq_save(flags); 156 158 fbc->count += amount; 157 - preempt_enable(); 159 + local_irq_restore(flags); 158 160 } 159 161 160 162 /* non-SMP percpu_counter_add_local is the same with percpu_counter_add */
+2
include/linux/util_macros.h
··· 2 2 #ifndef _LINUX_HELPER_MACROS_H_ 3 3 #define _LINUX_HELPER_MACROS_H_ 4 4 5 + #include <linux/math.h> 6 + 5 7 #define __find_closest(x, a, as, op) \ 6 8 ({ \ 7 9 typeof(as) __fc_i, __fc_as = (as) - 1; \
+2 -2
init/initramfs.c
··· 11 11 #include <linux/syscalls.h> 12 12 #include <linux/utime.h> 13 13 #include <linux/file.h> 14 + #include <linux/kstrtox.h> 14 15 #include <linux/memblock.h> 15 16 #include <linux/mm.h> 16 17 #include <linux/namei.h> ··· 572 571 static bool __initdata initramfs_async = true; 573 572 static int __init initramfs_async_setup(char *str) 574 573 { 575 - strtobool(str, &initramfs_async); 576 - return 1; 574 + return kstrtobool(str, &initramfs_async) == 0; 577 575 } 578 576 __setup("initramfs_async=", initramfs_async_setup); 579 577
+2
kernel/hung_task.c
··· 142 142 143 143 if (sysctl_hung_task_all_cpu_backtrace) 144 144 hung_task_show_all_bt = true; 145 + if (!sysctl_hung_task_warnings) 146 + pr_info("Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings\n"); 145 147 } 146 148 147 149 touch_nmi_watchdog();
+3 -1
kernel/kexec.c
··· 190 190 static inline int kexec_load_check(unsigned long nr_segments, 191 191 unsigned long flags) 192 192 { 193 + int image_type = (flags & KEXEC_ON_CRASH) ? 194 + KEXEC_TYPE_CRASH : KEXEC_TYPE_DEFAULT; 193 195 int result; 194 196 195 197 /* We only trust the superuser with rebooting the system. */ 196 - if (!capable(CAP_SYS_BOOT) || kexec_load_disabled) 198 + if (!kexec_load_permitted(image_type)) 197 199 return -EPERM; 198 200 199 201 /* Permit LSMs and IMA to fail the kexec */
+93 -1
kernel/kexec_core.c
··· 921 921 return result; 922 922 } 923 923 924 + struct kexec_load_limit { 925 + /* Mutex protects the limit count. */ 926 + struct mutex mutex; 927 + int limit; 928 + }; 929 + 930 + static struct kexec_load_limit load_limit_reboot = { 931 + .mutex = __MUTEX_INITIALIZER(load_limit_reboot.mutex), 932 + .limit = -1, 933 + }; 934 + 935 + static struct kexec_load_limit load_limit_panic = { 936 + .mutex = __MUTEX_INITIALIZER(load_limit_panic.mutex), 937 + .limit = -1, 938 + }; 939 + 924 940 struct kimage *kexec_image; 925 941 struct kimage *kexec_crash_image; 926 - int kexec_load_disabled; 942 + static int kexec_load_disabled; 943 + 927 944 #ifdef CONFIG_SYSCTL 945 + static int kexec_limit_handler(struct ctl_table *table, int write, 946 + void *buffer, size_t *lenp, loff_t *ppos) 947 + { 948 + struct kexec_load_limit *limit = table->data; 949 + int val; 950 + struct ctl_table tmp = { 951 + .data = &val, 952 + .maxlen = sizeof(val), 953 + .mode = table->mode, 954 + }; 955 + int ret; 956 + 957 + if (write) { 958 + ret = proc_dointvec(&tmp, write, buffer, lenp, ppos); 959 + if (ret) 960 + return ret; 961 + 962 + if (val < 0) 963 + return -EINVAL; 964 + 965 + mutex_lock(&limit->mutex); 966 + if (limit->limit != -1 && val >= limit->limit) 967 + ret = -EINVAL; 968 + else 969 + limit->limit = val; 970 + mutex_unlock(&limit->mutex); 971 + 972 + return ret; 973 + } 974 + 975 + mutex_lock(&limit->mutex); 976 + val = limit->limit; 977 + mutex_unlock(&limit->mutex); 978 + 979 + return proc_dointvec(&tmp, write, buffer, lenp, ppos); 980 + } 981 + 928 982 static struct ctl_table kexec_core_sysctls[] = { 929 983 { 930 984 .procname = "kexec_load_disabled", ··· 990 936 .extra1 = SYSCTL_ONE, 991 937 .extra2 = SYSCTL_ONE, 992 938 }, 939 + { 940 + .procname = "kexec_load_limit_panic", 941 + .data = &load_limit_panic, 942 + .mode = 0644, 943 + .proc_handler = kexec_limit_handler, 944 + }, 945 + { 946 + .procname = "kexec_load_limit_reboot", 947 + .data = &load_limit_reboot, 948 + .mode = 
0644, 949 + .proc_handler = kexec_limit_handler, 950 + }, 993 951 { } 994 952 }; ··· 1012 946 } 1013 947 late_initcall(kexec_core_sysctl_init); 1014 948 #endif 949 + 950 + bool kexec_load_permitted(int kexec_image_type) 951 + { 952 + struct kexec_load_limit *limit; 953 + 954 + /* 955 + * Only the superuser may use the kexec syscall, and only if it has 956 + * not been disabled. 957 + */ 958 + if (!capable(CAP_SYS_BOOT) || kexec_load_disabled) 959 + return false; 960 + 961 + /* Check the limit counter and decrease it. */ 962 + limit = (kexec_image_type == KEXEC_TYPE_CRASH) ? 963 + &load_limit_panic : &load_limit_reboot; 964 + mutex_lock(&limit->mutex); 965 + if (!limit->limit) { 966 + mutex_unlock(&limit->mutex); 967 + return false; 968 + } 969 + if (limit->limit != -1) 970 + limit->limit--; 971 + mutex_unlock(&limit->mutex); 972 + 973 + return true; 974 + } 1015 975 1016 976 /* 1017 977 * No panic_cpu check version of crash_kexec(). This function is called
+7 -4
kernel/kexec_file.c
··· 326 326 unsigned long, cmdline_len, const char __user *, cmdline_ptr, 327 327 unsigned long, flags) 328 328 { 329 - int ret = 0, i; 329 + int image_type = (flags & KEXEC_FILE_ON_CRASH) ? 330 + KEXEC_TYPE_CRASH : KEXEC_TYPE_DEFAULT; 330 331 struct kimage **dest_image, *image; 332 + int ret = 0, i; 331 333 332 334 /* We only trust the superuser with rebooting the system. */ 333 - if (!capable(CAP_SYS_BOOT) || kexec_load_disabled) 335 + if (!kexec_load_permitted(image_type)) 334 336 return -EPERM; 335 337 336 338 /* Make sure we have a legal set of flags */ ··· 344 342 if (!kexec_trylock()) 345 343 return -EBUSY; 346 344 347 - dest_image = &kexec_image; 348 - if (flags & KEXEC_FILE_ON_CRASH) { 345 + if (image_type == KEXEC_TYPE_CRASH) { 349 346 dest_image = &kexec_crash_image; 350 347 if (kexec_crash_image) 351 348 arch_kexec_unprotect_crashkres(); 349 + } else { 350 + dest_image = &kexec_image; 352 351 } 353 352 354 353 if (flags & KEXEC_FILE_UNLOAD)
+5
kernel/kthread.c
··· 1382 1382 * Flush and destroy @worker. The simple flush is enough because the kthread 1383 1383 * worker API is used only in trivial scenarios. There are no multi-step state 1384 1384 * machines needed. 1385 + * 1386 + * Note that this function is not responsible for handling delayed work, so 1387 + * the caller is responsible for queuing or canceling all delayed work items 1388 + * before invoking this function. 1385 1389 */ 1386 1390 void kthread_destroy_worker(struct kthread_worker *worker) 1387 1391 { ··· 1397 1393 1398 1394 kthread_flush_worker(worker); 1399 1395 kthread_stop(task); 1396 + WARN_ON(!list_empty(&worker->delayed_work_list)); 1400 1397 WARN_ON(!list_empty(&worker->work_list)); 1401 1398 kfree(worker); 1402 1399 }
+1 -1
kernel/user_namespace.c
··· 229 229 EXPORT_SYMBOL(__put_user_ns); 230 230 231 231 /** 232 - * idmap_key struct holds the information necessary to find an idmapping in a 232 + * struct idmap_key - holds the information necessary to find an idmapping in a 233 233 * sorted idmap array. It is passed to cmp_map_id() as first argument. 234 234 */ 235 235 struct idmap_key {
+39 -1
lib/Kconfig.debug
··· 1185 1185 config DEBUG_PREEMPT 1186 1186 bool "Debug preemptible kernel" 1187 1187 depends on DEBUG_KERNEL && PREEMPTION && TRACE_IRQFLAGS_SUPPORT 1188 - default y 1189 1188 help 1190 1189 If you say Y here then the kernel will use a debug variant of the 1191 1190 commonly used smp_processor_id() function and will print warnings 1192 1191 if kernel code uses it in a preemption-unsafe way. Also, the kernel 1193 1192 will detect preemption count underflows. 1193 + 1194 + This option has potential to introduce high runtime overhead, 1195 + depending on workload as it triggers debugging routines for each 1196 + this_cpu operation. It should only be used for debugging purposes. 1194 1197 1195 1198 menu "Lock Debugging (spinlocks, mutexes, etc...)" 1196 1199 ··· 2031 2028 def_bool y 2032 2029 2033 2030 if RUNTIME_TESTING_MENU 2031 + 2032 + config TEST_DHRY 2033 + tristate "Dhrystone benchmark test" 2034 + help 2035 + Enable this to include the Dhrystone 2.1 benchmark. This test 2036 + calculates the number of Dhrystones per second, and the number of 2037 + DMIPS (Dhrystone MIPS) obtained when the Dhrystone score is divided 2038 + by 1757 (the number of Dhrystones per second obtained on the VAX 2039 + 11/780, nominally a 1 MIPS machine). 2040 + 2041 + To run the benchmark, it needs to be enabled explicitly, either from 2042 + the kernel command line (when built-in), or from userspace (when 2043 + built-in or modular. 2044 + 2045 + Run once during kernel boot: 2046 + 2047 + test_dhry.run 2048 + 2049 + Set number of iterations from kernel command line: 2050 + 2051 + test_dhry.iterations=<n> 2052 + 2053 + Set number of iterations from userspace: 2054 + 2055 + echo <n> > /sys/module/test_dhry/parameters/iterations 2056 + 2057 + Trigger manual run from userspace: 2058 + 2059 + echo y > /sys/module/test_dhry/parameters/run 2060 + 2061 + If the number of iterations is <= 0, the test will devise a suitable 2062 + number of iterations (test runs for at least 2s) automatically. 
2063 + This process takes ca. 4s. 2064 + 2065 + If unsure, say N. 2034 2066 2035 2067 config LKDTM 2036 2068 tristate "Linux Kernel Dump Test Tool Module"
+2
lib/Makefile
··· 57 57 obj-y += kstrtox.o 58 58 obj-$(CONFIG_FIND_BIT_BENCHMARK) += find_bit_benchmark.o 59 59 obj-$(CONFIG_TEST_BPF) += test_bpf.o 60 + test_dhry-objs := dhry_1.o dhry_2.o dhry_run.o 61 + obj-$(CONFIG_TEST_DHRY) += test_dhry.o 60 62 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o 61 63 obj-$(CONFIG_TEST_BITOPS) += test_bitops.o 62 64 CFLAGS_test_bitops.o += -Werror
+358
lib/dhry.h
··· 1 + /* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */ 2 + /* 3 + **************************************************************************** 4 + * 5 + * "DHRYSTONE" Benchmark Program 6 + * ----------------------------- 7 + * 8 + * Version: C, Version 2.1 9 + * 10 + * File: dhry.h (part 1 of 3) 11 + * 12 + * Date: May 25, 1988 13 + * 14 + * Author: Reinhold P. Weicker 15 + * Siemens AG, AUT E 51 16 + * Postfach 3220 17 + * 8520 Erlangen 18 + * Germany (West) 19 + * Phone: [+49]-9131-7-20330 20 + * (8-17 Central European Time) 21 + * Usenet: ..!mcsun!unido!estevax!weicker 22 + * 23 + * Original Version (in Ada) published in 24 + * "Communications of the ACM" vol. 27., no. 10 (Oct. 1984), 25 + * pp. 1013 - 1030, together with the statistics 26 + * on which the distribution of statements etc. is based. 27 + * 28 + * In this C version, the following C library functions are used: 29 + * - strcpy, strcmp (inside the measurement loop) 30 + * - printf, scanf (outside the measurement loop) 31 + * In addition, Berkeley UNIX system calls "times ()" or "time ()" 32 + * are used for execution time measurement. For measurements 33 + * on other systems, these calls have to be changed. 34 + * 35 + * Collection of Results: 36 + * Reinhold Weicker (address see above) and 37 + * 38 + * Rick Richardson 39 + * PC Research. Inc. 40 + * 94 Apple Orchard Drive 41 + * Tinton Falls, NJ 07724 42 + * Phone: (201) 389-8963 (9-17 EST) 43 + * Usenet: ...!uunet!pcrat!rick 44 + * 45 + * Please send results to Rick Richardson and/or Reinhold Weicker. 46 + * Complete information should be given on hardware and software used. 47 + * Hardware information includes: Machine type, CPU, type and size 48 + * of caches; for microprocessors: clock frequency, memory speed 49 + * (number of wait states). 50 + * Software information includes: Compiler (and runtime library) 51 + * manufacturer and version, compilation switches, OS version. 
52 + * The Operating System version may give an indication about the 53 + * compiler; Dhrystone itself performs no OS calls in the measurement loop. 54 + * 55 + * The complete output generated by the program should be mailed 56 + * such that at least some checks for correctness can be made. 57 + * 58 + *************************************************************************** 59 + * 60 + * History: This version C/2.1 has been made for two reasons: 61 + * 62 + * 1) There is an obvious need for a common C version of 63 + * Dhrystone, since C is at present the most popular system 64 + * programming language for the class of processors 65 + * (microcomputers, minicomputers) where Dhrystone is used most. 66 + * There should be, as far as possible, only one C version of 67 + * Dhrystone such that results can be compared without 68 + * restrictions. In the past, the C versions distributed 69 + * by Rick Richardson (Version 1.1) and by Reinhold Weicker 70 + * had small (though not significant) differences. 71 + * 72 + * 2) As far as it is possible without changes to the Dhrystone 73 + * statistics, optimizing compilers should be prevented from 74 + * removing significant statements. 75 + * 76 + * This C version has been developed in cooperation with 77 + * Rick Richardson (Tinton Falls, NJ), it incorporates many 78 + * ideas from the "Version 1.1" distributed previously by 79 + * him over the UNIX network Usenet. 80 + * I also thank Chaim Benedelac (National Semiconductor), 81 + * David Ditzel (SUN), Earl Killian and John Mashey (MIPS), 82 + * Alan Smith and Rafael Saavedra-Barrera (UC at Berkeley) 83 + * for their help with comments on earlier versions of the 84 + * benchmark. 85 + * 86 + * Changes: In the initialization part, this version follows mostly 87 + * Rick Richardson's version distributed via Usenet, not the 88 + * version distributed earlier via floppy disk by Reinhold Weicker. 
89 + * As a concession to older compilers, names have been made 90 + * unique within the first 8 characters. 91 + * Inside the measurement loop, this version follows the 92 + * version previously distributed by Reinhold Weicker. 93 + * 94 + * At several places in the benchmark, code has been added, 95 + * but within the measurement loop only in branches that 96 + * are not executed. The intention is that optimizing compilers 97 + * should be prevented from moving code out of the measurement 98 + * loop, or from removing code altogether. Since the statements 99 + * that are executed within the measurement loop have NOT been 100 + * changed, the numbers defining the "Dhrystone distribution" 101 + * (distribution of statements, operand types and locality) 102 + * still hold. Except for sophisticated optimizing compilers, 103 + * execution times for this version should be the same as 104 + * for previous versions. 105 + * 106 + * Since it has proven difficult to subtract the time for the 107 + * measurement loop overhead in a correct way, the loop check 108 + * has been made a part of the benchmark. This does have 109 + * an impact - though a very minor one - on the distribution 110 + * statistics which have been updated for this version. 111 + * 112 + * All changes within the measurement loop are described 113 + * and discussed in the companion paper "Rationale for 114 + * Dhrystone version 2". 115 + * 116 + * Because of the self-imposed limitation that the order and 117 + * distribution of the executed statements should not be 118 + * changed, there are still cases where optimizing compilers 119 + * may not generate code for some statements. To a certain 120 + * degree, this is unavoidable for small synthetic benchmarks. 121 + * Users of the benchmark are advised to check code listings 122 + * whether code is generated for all statements of Dhrystone. 
123 + * 124 + * Version 2.1 is identical to version 2.0 distributed via 125 + * the UNIX network Usenet in March 1988 except that it corrects 126 + * some minor deficiencies that were found by users of version 2.0. 127 + * The only change within the measurement loop is that a 128 + * non-executed "else" part was added to the "if" statement in 129 + * Func_3, and a non-executed "else" part removed from Proc_3. 130 + * 131 + *************************************************************************** 132 + * 133 + * Compilation model and measurement (IMPORTANT): 134 + * 135 + * This C version of Dhrystone consists of three files: 136 + * - dhry.h (this file, containing global definitions and comments) 137 + * - dhry_1.c (containing the code corresponding to Ada package Pack_1) 138 + * - dhry_2.c (containing the code corresponding to Ada package Pack_2) 139 + * 140 + * The following "ground rules" apply for measurements: 141 + * - Separate compilation 142 + * - No procedure merging 143 + * - Otherwise, compiler optimizations are allowed but should be indicated 144 + * - Default results are those without register declarations 145 + * See the companion paper "Rationale for Dhrystone Version 2" for a more 146 + * detailed discussion of these ground rules. 147 + * 148 + * For 16-Bit processors (e.g. 80186, 80286), times for all compilation 149 + * models ("small", "medium", "large" etc.) should be given if possible, 150 + * together with a definition of these models for the compiler system used. 151 + * 152 + ************************************************************************** 153 + * 154 + * Dhrystone (C version) statistics: 155 + * 156 + * [Comment from the first distribution, updated for version 2. 157 + * Note that because of language differences, the numbers are slightly 158 + * different from the Ada version.] 
159 + * 160 + * The following program contains statements of a high level programming 161 + * language (here: C) in a distribution considered representative: 162 + * 163 + * assignments 52 (51.0 %) 164 + * control statements 33 (32.4 %) 165 + * procedure, function calls 17 (16.7 %) 166 + * 167 + * 103 statements are dynamically executed. The program is balanced with 168 + * respect to the three aspects: 169 + * 170 + * - statement type 171 + * - operand type 172 + * - operand locality 173 + * operand global, local, parameter, or constant. 174 + * 175 + * The combination of these three aspects is balanced only approximately. 176 + * 177 + * 1. Statement Type: 178 + * ----------------- number 179 + * 180 + * V1 = V2 9 181 + * (incl. V1 = F(..) 182 + * V = Constant 12 183 + * Assignment, 7 184 + * with array element 185 + * Assignment, 6 186 + * with record component 187 + * -- 188 + * 34 34 189 + * 190 + * X = Y +|-|"&&"|"|" Z 5 191 + * X = Y +|-|"==" Constant 6 192 + * X = X +|- 1 3 193 + * X = Y *|/ Z 2 194 + * X = Expression, 1 195 + * two operators 196 + * X = Expression, 1 197 + * three operators 198 + * -- 199 + * 18 18 200 + * 201 + * if .... 14 202 + * with "else" 7 203 + * without "else" 7 204 + * executed 3 205 + * not executed 4 206 + * for ... 7 | counted every time 207 + * while ... 4 | the loop condition 208 + * do ... while 1 | is evaluated 209 + * switch ... 1 210 + * break 1 211 + * declaration with 1 212 + * initialization 213 + * -- 214 + * 34 34 215 + * 216 + * P (...) procedure call 11 217 + * user procedure 10 218 + * library procedure 1 219 + * X = F (...) 220 + * function call 6 221 + * user function 5 222 + * library function 1 223 + * -- 224 + * 17 17 225 + * --- 226 + * 103 227 + * 228 + * The average number of parameters in procedure or function calls 229 + * is 1.82 (not counting the function values as implicit parameters). 230 + * 231 + * 232 + * 2. 
Operators 233 + * ------------ 234 + * number approximate 235 + * percentage 236 + * 237 + * Arithmetic 32 50.8 238 + * 239 + * + 21 33.3 240 + * - 7 11.1 241 + * * 3 4.8 242 + * / (int div) 1 1.6 243 + * 244 + * Comparison 27 42.8 245 + * 246 + * == 9 14.3 247 + * /= 4 6.3 248 + * > 1 1.6 249 + * < 3 4.8 250 + * >= 1 1.6 251 + * <= 9 14.3 252 + * 253 + * Logic 4 6.3 254 + * 255 + * && (AND-THEN) 1 1.6 256 + * | (OR) 1 1.6 257 + * ! (NOT) 2 3.2 258 + * 259 + * -- ----- 260 + * 63 100.1 261 + * 262 + * 263 + * 3. Operand Type (counted once per operand reference): 264 + * --------------- 265 + * number approximate 266 + * percentage 267 + * 268 + * Integer 175 72.3 % 269 + * Character 45 18.6 % 270 + * Pointer 12 5.0 % 271 + * String30 6 2.5 % 272 + * Array 2 0.8 % 273 + * Record 2 0.8 % 274 + * --- ------- 275 + * 242 100.0 % 276 + * 277 + * When there is an access path leading to the final operand (e.g. a record 278 + * component), only the final data type on the access path is counted. 279 + * 280 + * 281 + * 4. Operand Locality: 282 + * ------------------- 283 + * number approximate 284 + * percentage 285 + * 286 + * local variable 114 47.1 % 287 + * global variable 22 9.1 % 288 + * parameter 45 18.6 % 289 + * value 23 9.5 % 290 + * reference 22 9.1 % 291 + * function result 6 2.5 % 292 + * constant 55 22.7 % 293 + * --- ------- 294 + * 242 100.0 % 295 + * 296 + * 297 + * The program does not compute anything meaningful, but it is syntactically 298 + * and semantically correct. All variables have a value assigned to them 299 + * before they are used as a source operand. 300 + * 301 + * There has been no explicit effort to account for the effects of a 302 + * cache, or to balance the use of long or short displacements for code or 303 + * data. 
304 + * 305 + *************************************************************************** 306 + */ 307 + 308 + typedef enum { 309 + Ident_1, 310 + Ident_2, 311 + Ident_3, 312 + Ident_4, 313 + Ident_5 314 + } Enumeration; /* for boolean and enumeration types in Ada, Pascal */ 315 + 316 + /* General definitions: */ 317 + 318 + typedef int One_Thirty; 319 + typedef int One_Fifty; 320 + typedef char Capital_Letter; 321 + typedef int Boolean; 322 + typedef char Str_30[31]; 323 + typedef int Arr_1_Dim[50]; 324 + typedef int Arr_2_Dim[50][50]; 325 + 326 + typedef struct record { 327 + struct record *Ptr_Comp; 328 + Enumeration Discr; 329 + union { 330 + struct { 331 + Enumeration Enum_Comp; 332 + int Int_Comp; 333 + char Str_Comp[31]; 334 + } var_1; 335 + struct { 336 + Enumeration E_Comp_2; 337 + char Str_2_Comp[31]; 338 + } var_2; 339 + struct { 340 + char Ch_1_Comp; 341 + char Ch_2_Comp; 342 + } var_3; 343 + } variant; 344 + } Rec_Type, *Rec_Pointer; 345 + 346 + 347 + extern int Int_Glob; 348 + extern char Ch_1_Glob; 349 + 350 + void Proc_6(Enumeration Enum_Val_Par, Enumeration *Enum_Ref_Par); 351 + void Proc_7(One_Fifty Int_1_Par_Val, One_Fifty Int_2_Par_Val, 352 + One_Fifty *Int_Par_Ref); 353 + void Proc_8(Arr_1_Dim Arr_1_Par_Ref, Arr_2_Dim Arr_2_Par_Ref, 354 + int Int_1_Par_Val, int Int_2_Par_Val); 355 + Enumeration Func_1(Capital_Letter Ch_1_Par_Val, Capital_Letter Ch_2_Par_Val); 356 + Boolean Func_2(Str_30 Str_1_Par_Ref, Str_30 Str_2_Par_Ref); 357 + 358 + int dhry(int n);
+283
lib/dhry_1.c
··· 1 + // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + /* 3 + **************************************************************************** 4 + * 5 + * "DHRYSTONE" Benchmark Program 6 + * ----------------------------- 7 + * 8 + * Version: C, Version 2.1 9 + * 10 + * File: dhry_1.c (part 2 of 3) 11 + * 12 + * Date: May 25, 1988 13 + * 14 + * Author: Reinhold P. Weicker 15 + * 16 + **************************************************************************** 17 + */ 18 + 19 + #include "dhry.h" 20 + 21 + #include <linux/ktime.h> 22 + #include <linux/slab.h> 23 + #include <linux/string.h> 24 + 25 + /* Global Variables: */ 26 + 27 + int Int_Glob; 28 + char Ch_1_Glob; 29 + 30 + static Rec_Pointer Ptr_Glob, Next_Ptr_Glob; 31 + static Boolean Bool_Glob; 32 + static char Ch_2_Glob; 33 + static int Arr_1_Glob[50]; 34 + static int Arr_2_Glob[50][50]; 35 + 36 + static void Proc_3(Rec_Pointer *Ptr_Ref_Par) 37 + /******************/ 38 + /* executed once */ 39 + /* Ptr_Ref_Par becomes Ptr_Glob */ 40 + { 41 + if (Ptr_Glob) { 42 + /* then, executed */ 43 + *Ptr_Ref_Par = Ptr_Glob->Ptr_Comp; 44 + } 45 + Proc_7(10, Int_Glob, &Ptr_Glob->variant.var_1.Int_Comp); 46 + } /* Proc_3 */ 47 + 48 + 49 + static void Proc_1(Rec_Pointer Ptr_Val_Par) 50 + /******************/ 51 + /* executed once */ 52 + { 53 + Rec_Pointer Next_Record = Ptr_Val_Par->Ptr_Comp; 54 + /* == Ptr_Glob_Next */ 55 + /* Local variable, initialized with Ptr_Val_Par->Ptr_Comp, */ 56 + /* corresponds to "rename" in Ada, "with" in Pascal */ 57 + 58 + *Ptr_Val_Par->Ptr_Comp = *Ptr_Glob; 59 + Ptr_Val_Par->variant.var_1.Int_Comp = 5; 60 + Next_Record->variant.var_1.Int_Comp = 61 + Ptr_Val_Par->variant.var_1.Int_Comp; 62 + Next_Record->Ptr_Comp = Ptr_Val_Par->Ptr_Comp; 63 + Proc_3(&Next_Record->Ptr_Comp); 64 + /* Ptr_Val_Par->Ptr_Comp->Ptr_Comp == Ptr_Glob->Ptr_Comp */ 65 + if (Next_Record->Discr == Ident_1) { 66 + /* then, executed */ 67 + Next_Record->variant.var_1.Int_Comp = 6; 68 + 
Proc_6(Ptr_Val_Par->variant.var_1.Enum_Comp, 69 + &Next_Record->variant.var_1.Enum_Comp); 70 + Next_Record->Ptr_Comp = Ptr_Glob->Ptr_Comp; 71 + Proc_7(Next_Record->variant.var_1.Int_Comp, 10, 72 + &Next_Record->variant.var_1.Int_Comp); 73 + } else { 74 + /* not executed */ 75 + *Ptr_Val_Par = *Ptr_Val_Par->Ptr_Comp; 76 + } 77 + } /* Proc_1 */ 78 + 79 + 80 + static void Proc_2(One_Fifty *Int_Par_Ref) 81 + /******************/ 82 + /* executed once */ 83 + /* *Int_Par_Ref == 1, becomes 4 */ 84 + { 85 + One_Fifty Int_Loc; 86 + Enumeration Enum_Loc; 87 + 88 + Int_Loc = *Int_Par_Ref + 10; 89 + do { 90 + /* executed once */ 91 + if (Ch_1_Glob == 'A') { 92 + /* then, executed */ 93 + Int_Loc -= 1; 94 + *Int_Par_Ref = Int_Loc - Int_Glob; 95 + Enum_Loc = Ident_1; 96 + } /* if */ 97 + } while (Enum_Loc != Ident_1); /* true */ 98 + } /* Proc_2 */ 99 + 100 + 101 + static void Proc_4(void) 102 + /*******/ 103 + /* executed once */ 104 + { 105 + Boolean Bool_Loc; 106 + 107 + Bool_Loc = Ch_1_Glob == 'A'; 108 + Bool_Glob = Bool_Loc | Bool_Glob; 109 + Ch_2_Glob = 'B'; 110 + } /* Proc_4 */ 111 + 112 + 113 + static void Proc_5(void) 114 + /*******/ 115 + /* executed once */ 116 + { 117 + Ch_1_Glob = 'A'; 118 + Bool_Glob = false; 119 + } /* Proc_5 */ 120 + 121 + 122 + int dhry(int n) 123 + /*****/ 124 + 125 + /* main program, corresponds to procedures */ 126 + /* Main and Proc_0 in the Ada version */ 127 + { 128 + One_Fifty Int_1_Loc; 129 + One_Fifty Int_2_Loc; 130 + One_Fifty Int_3_Loc; 131 + char Ch_Index; 132 + Enumeration Enum_Loc; 133 + Str_30 Str_1_Loc; 134 + Str_30 Str_2_Loc; 135 + int Run_Index; 136 + int Number_Of_Runs; 137 + ktime_t Begin_Time, End_Time; 138 + u32 User_Time; 139 + 140 + /* Initializations */ 141 + 142 + Next_Ptr_Glob = (Rec_Pointer)kzalloc(sizeof(Rec_Type), GFP_KERNEL); 143 + Ptr_Glob = (Rec_Pointer)kzalloc(sizeof(Rec_Type), GFP_KERNEL); 144 + 145 + Ptr_Glob->Ptr_Comp = Next_Ptr_Glob; 146 + Ptr_Glob->Discr = Ident_1; 147 + Ptr_Glob->variant.var_1.Enum_Comp = 
Ident_3; 148 + Ptr_Glob->variant.var_1.Int_Comp = 40; 149 + strcpy(Ptr_Glob->variant.var_1.Str_Comp, 150 + "DHRYSTONE PROGRAM, SOME STRING"); 151 + strcpy(Str_1_Loc, "DHRYSTONE PROGRAM, 1'ST STRING"); 152 + 153 + Arr_2_Glob[8][7] = 10; 154 + /* Was missing in published program. Without this statement, */ 155 + /* Arr_2_Glob[8][7] would have an undefined value. */ 156 + /* Warning: With 16-Bit processors and Number_Of_Runs > 32000, */ 157 + /* overflow may occur for this array element. */ 158 + 159 + pr_debug("Dhrystone Benchmark, Version 2.1 (Language: C)\n"); 160 + 161 + Number_Of_Runs = n; 162 + 163 + pr_debug("Execution starts, %d runs through Dhrystone\n", 164 + Number_Of_Runs); 165 + 166 + /***************/ 167 + /* Start timer */ 168 + /***************/ 169 + 170 + Begin_Time = ktime_get(); 171 + 172 + for (Run_Index = 1; Run_Index <= Number_Of_Runs; ++Run_Index) { 173 + Proc_5(); 174 + Proc_4(); 175 + /* Ch_1_Glob == 'A', Ch_2_Glob == 'B', Bool_Glob == true */ 176 + Int_1_Loc = 2; 177 + Int_2_Loc = 3; 178 + strcpy(Str_2_Loc, "DHRYSTONE PROGRAM, 2'ND STRING"); 179 + Enum_Loc = Ident_2; 180 + Bool_Glob = !Func_2(Str_1_Loc, Str_2_Loc); 181 + /* Bool_Glob == 1 */ 182 + while (Int_1_Loc < Int_2_Loc) { 183 + /* loop body executed once */ 184 + Int_3_Loc = 5 * Int_1_Loc - Int_2_Loc; 185 + /* Int_3_Loc == 7 */ 186 + Proc_7(Int_1_Loc, Int_2_Loc, &Int_3_Loc); 187 + /* Int_3_Loc == 7 */ 188 + Int_1_Loc += 1; 189 + } /* while */ 190 + /* Int_1_Loc == 3, Int_2_Loc == 3, Int_3_Loc == 7 */ 191 + Proc_8(Arr_1_Glob, Arr_2_Glob, Int_1_Loc, Int_3_Loc); 192 + /* Int_Glob == 5 */ 193 + Proc_1(Ptr_Glob); 194 + for (Ch_Index = 'A'; Ch_Index <= Ch_2_Glob; ++Ch_Index) { 195 + /* loop body executed twice */ 196 + if (Enum_Loc == Func_1(Ch_Index, 'C')) { 197 + /* then, not executed */ 198 + Proc_6(Ident_1, &Enum_Loc); 199 + strcpy(Str_2_Loc, "DHRYSTONE PROGRAM, 3'RD STRING"); 200 + Int_2_Loc = Run_Index; 201 + Int_Glob = Run_Index; 202 + } 203 + } 204 + /* Int_1_Loc == 3, Int_2_Loc == 
3, Int_3_Loc == 7 */ 205 + Int_2_Loc = Int_2_Loc * Int_1_Loc; 206 + Int_1_Loc = Int_2_Loc / Int_3_Loc; 207 + Int_2_Loc = 7 * (Int_2_Loc - Int_3_Loc) - Int_1_Loc; 208 + /* Int_1_Loc == 1, Int_2_Loc == 13, Int_3_Loc == 7 */ 209 + Proc_2(&Int_1_Loc); 210 + /* Int_1_Loc == 5 */ 211 + 212 + } /* loop "for Run_Index" */ 213 + 214 + /**************/ 215 + /* Stop timer */ 216 + /**************/ 217 + 218 + End_Time = ktime_get(); 219 + 220 + #define dhry_assert_int_eq(val, expected) \ 221 + if (val != expected) \ 222 + pr_err("%s: %d (FAIL, expected %d)\n", #val, val, \ 223 + expected); \ 224 + else \ 225 + pr_debug("%s: %d (OK)\n", #val, val) 226 + 227 + #define dhry_assert_char_eq(val, expected) \ 228 + if (val != expected) \ 229 + pr_err("%s: %c (FAIL, expected %c)\n", #val, val, \ 230 + expected); \ 231 + else \ 232 + pr_debug("%s: %c (OK)\n", #val, val) 233 + 234 + #define dhry_assert_string_eq(val, expected) \ 235 + if (strcmp(val, expected)) \ 236 + pr_err("%s: %s (FAIL, expected %s)\n", #val, val, \ 237 + expected); \ 238 + else \ 239 + pr_debug("%s: %s (OK)\n", #val, val) 240 + 241 + pr_debug("Execution ends\n"); 242 + pr_debug("Final values of the variables used in the benchmark:\n"); 243 + dhry_assert_int_eq(Int_Glob, 5); 244 + dhry_assert_int_eq(Bool_Glob, 1); 245 + dhry_assert_char_eq(Ch_1_Glob, 'A'); 246 + dhry_assert_char_eq(Ch_2_Glob, 'B'); 247 + dhry_assert_int_eq(Arr_1_Glob[8], 7); 248 + dhry_assert_int_eq(Arr_2_Glob[8][7], Number_Of_Runs + 10); 249 + pr_debug("Ptr_Comp: %px\n", Ptr_Glob->Ptr_Comp); 250 + dhry_assert_int_eq(Ptr_Glob->Discr, 0); 251 + dhry_assert_int_eq(Ptr_Glob->variant.var_1.Enum_Comp, 2); 252 + dhry_assert_int_eq(Ptr_Glob->variant.var_1.Int_Comp, 17); 253 + dhry_assert_string_eq(Ptr_Glob->variant.var_1.Str_Comp, 254 + "DHRYSTONE PROGRAM, SOME STRING"); 255 + if (Next_Ptr_Glob->Ptr_Comp != Ptr_Glob->Ptr_Comp) 256 + pr_err("Next_Ptr_Glob->Ptr_Comp: %px (expected %px)\n", 257 + Next_Ptr_Glob->Ptr_Comp, Ptr_Glob->Ptr_Comp); 258 + else 259 
+ pr_debug("Next_Ptr_Glob->Ptr_Comp: %px\n", 260 + Next_Ptr_Glob->Ptr_Comp); 261 + dhry_assert_int_eq(Next_Ptr_Glob->Discr, 0); 262 + dhry_assert_int_eq(Next_Ptr_Glob->variant.var_1.Enum_Comp, 1); 263 + dhry_assert_int_eq(Next_Ptr_Glob->variant.var_1.Int_Comp, 18); 264 + dhry_assert_string_eq(Next_Ptr_Glob->variant.var_1.Str_Comp, 265 + "DHRYSTONE PROGRAM, SOME STRING"); 266 + dhry_assert_int_eq(Int_1_Loc, 5); 267 + dhry_assert_int_eq(Int_2_Loc, 13); 268 + dhry_assert_int_eq(Int_3_Loc, 7); 269 + dhry_assert_int_eq(Enum_Loc, 1); 270 + dhry_assert_string_eq(Str_1_Loc, "DHRYSTONE PROGRAM, 1'ST STRING"); 271 + dhry_assert_string_eq(Str_2_Loc, "DHRYSTONE PROGRAM, 2'ND STRING"); 272 + 273 + User_Time = ktime_to_ms(ktime_sub(End_Time, Begin_Time)); 274 + 275 + kfree(Ptr_Glob); 276 + kfree(Next_Ptr_Glob); 277 + 278 + /* Measurements should last at least 2 seconds */ 279 + if (User_Time < 2 * MSEC_PER_SEC) 280 + return -EAGAIN; 281 + 282 + return div_u64(mul_u32_u32(MSEC_PER_SEC, Number_Of_Runs), User_Time); 283 + }
+175
lib/dhry_2.c
··· 1 + // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + /* 3 + **************************************************************************** 4 + * 5 + * "DHRYSTONE" Benchmark Program 6 + * ----------------------------- 7 + * 8 + * Version: C, Version 2.1 9 + * 10 + * File: dhry_2.c (part 3 of 3) 11 + * 12 + * Date: May 25, 1988 13 + * 14 + * Author: Reinhold P. Weicker 15 + * 16 + **************************************************************************** 17 + */ 18 + 19 + #include "dhry.h" 20 + 21 + #include <linux/string.h> 22 + 23 + 24 + static Boolean Func_3(Enumeration Enum_Par_Val) 25 + /***************************/ 26 + /* executed once */ 27 + /* Enum_Par_Val == Ident_3 */ 28 + { 29 + Enumeration Enum_Loc; 30 + 31 + Enum_Loc = Enum_Par_Val; 32 + if (Enum_Loc == Ident_3) { 33 + /* then, executed */ 34 + return true; 35 + } else { 36 + /* not executed */ 37 + return false; 38 + } 39 + } /* Func_3 */ 40 + 41 + 42 + void Proc_6(Enumeration Enum_Val_Par, Enumeration *Enum_Ref_Par) 43 + /*********************************/ 44 + /* executed once */ 45 + /* Enum_Val_Par == Ident_3, Enum_Ref_Par becomes Ident_2 */ 46 + { 47 + *Enum_Ref_Par = Enum_Val_Par; 48 + if (!Func_3(Enum_Val_Par)) { 49 + /* then, not executed */ 50 + *Enum_Ref_Par = Ident_4; 51 + } 52 + switch (Enum_Val_Par) { 53 + case Ident_1: 54 + *Enum_Ref_Par = Ident_1; 55 + break; 56 + case Ident_2: 57 + if (Int_Glob > 100) { 58 + /* then */ 59 + *Enum_Ref_Par = Ident_1; 60 + } else { 61 + *Enum_Ref_Par = Ident_4; 62 + } 63 + break; 64 + case Ident_3: /* executed */ 65 + *Enum_Ref_Par = Ident_2; 66 + break; 67 + case Ident_4: 68 + break; 69 + case Ident_5: 70 + *Enum_Ref_Par = Ident_3; 71 + break; 72 + } /* switch */ 73 + } /* Proc_6 */ 74 + 75 + 76 + void Proc_7(One_Fifty Int_1_Par_Val, One_Fifty Int_2_Par_Val, One_Fifty *Int_Par_Ref) 77 + /**********************************************/ 78 + /* executed three times */ 79 + /* first call: Int_1_Par_Val == 2, Int_2_Par_Val == 3, */ 80 + /* 
Int_Par_Ref becomes 7 */ 81 + /* second call: Int_1_Par_Val == 10, Int_2_Par_Val == 5, */ 82 + /* Int_Par_Ref becomes 17 */ 83 + /* third call: Int_1_Par_Val == 6, Int_2_Par_Val == 10, */ 84 + /* Int_Par_Ref becomes 18 */ 85 + { 86 + One_Fifty Int_Loc; 87 + 88 + Int_Loc = Int_1_Par_Val + 2; 89 + *Int_Par_Ref = Int_2_Par_Val + Int_Loc; 90 + } /* Proc_7 */ 91 + 92 + 93 + void Proc_8(Arr_1_Dim Arr_1_Par_Ref, Arr_2_Dim Arr_2_Par_Ref, int Int_1_Par_Val, int Int_2_Par_Val) 94 + /*********************************************************************/ 95 + /* executed once */ 96 + /* Int_Par_Val_1 == 3 */ 97 + /* Int_Par_Val_2 == 7 */ 98 + { 99 + One_Fifty Int_Index; 100 + One_Fifty Int_Loc; 101 + 102 + Int_Loc = Int_1_Par_Val + 5; 103 + Arr_1_Par_Ref[Int_Loc] = Int_2_Par_Val; 104 + Arr_1_Par_Ref[Int_Loc+1] = Arr_1_Par_Ref[Int_Loc]; 105 + Arr_1_Par_Ref[Int_Loc+30] = Int_Loc; 106 + for (Int_Index = Int_Loc; Int_Index <= Int_Loc+1; ++Int_Index) 107 + Arr_2_Par_Ref[Int_Loc][Int_Index] = Int_Loc; 108 + Arr_2_Par_Ref[Int_Loc][Int_Loc-1] += 1; 109 + Arr_2_Par_Ref[Int_Loc+20][Int_Loc] = Arr_1_Par_Ref[Int_Loc]; 110 + Int_Glob = 5; 111 + } /* Proc_8 */ 112 + 113 + 114 + Enumeration Func_1(Capital_Letter Ch_1_Par_Val, Capital_Letter Ch_2_Par_Val) 115 + /*************************************************/ 116 + /* executed three times */ 117 + /* first call: Ch_1_Par_Val == 'H', Ch_2_Par_Val == 'R' */ 118 + /* second call: Ch_1_Par_Val == 'A', Ch_2_Par_Val == 'C' */ 119 + /* third call: Ch_1_Par_Val == 'B', Ch_2_Par_Val == 'C' */ 120 + { 121 + Capital_Letter Ch_1_Loc; 122 + Capital_Letter Ch_2_Loc; 123 + 124 + Ch_1_Loc = Ch_1_Par_Val; 125 + Ch_2_Loc = Ch_1_Loc; 126 + if (Ch_2_Loc != Ch_2_Par_Val) { 127 + /* then, executed */ 128 + return Ident_1; 129 + } else { 130 + /* not executed */ 131 + Ch_1_Glob = Ch_1_Loc; 132 + return Ident_2; 133 + } 134 + } /* Func_1 */ 135 + 136 + 137 + Boolean Func_2(Str_30 Str_1_Par_Ref, Str_30 Str_2_Par_Ref) 138 + 
/*************************************************/ 139 + /* executed once */ 140 + /* Str_1_Par_Ref == "DHRYSTONE PROGRAM, 1'ST STRING" */ 141 + /* Str_2_Par_Ref == "DHRYSTONE PROGRAM, 2'ND STRING" */ 142 + { 143 + One_Thirty Int_Loc; 144 + Capital_Letter Ch_Loc; 145 + 146 + Int_Loc = 2; 147 + while (Int_Loc <= 2) { 148 + /* loop body executed once */ 149 + if (Func_1(Str_1_Par_Ref[Int_Loc], 150 + Str_2_Par_Ref[Int_Loc+1]) == Ident_1) { 151 + /* then, executed */ 152 + Ch_Loc = 'A'; 153 + Int_Loc += 1; 154 + } 155 + } /* if, while */ 156 + if (Ch_Loc >= 'W' && Ch_Loc < 'Z') { 157 + /* then, not executed */ 158 + Int_Loc = 7; 159 + } 160 + if (Ch_Loc == 'R') { 161 + /* then, not executed */ 162 + return true; 163 + } else { 164 + /* executed */ 165 + if (strcmp(Str_1_Par_Ref, Str_2_Par_Ref) > 0) { 166 + /* then, not executed */ 167 + Int_Loc += 7; 168 + Int_Glob = Int_Loc; 169 + return true; 170 + } else { 171 + /* executed */ 172 + return false; 173 + } 174 + } /* if Ch_Loc */ 175 + } /* Func_2 */
+85
lib/dhry_run.c
···
 1 + // SPDX-License-Identifier: GPL-2.0-only
 2 + /*
 3 +  * Dhrystone benchmark test module
 4 +  *
 5 +  * Copyright (C) 2022 Glider bv
 6 +  */
 7 + 
 8 + #include "dhry.h"
 9 + 
10 + #include <linux/kernel.h>
11 + #include <linux/module.h>
12 + #include <linux/moduleparam.h>
13 + #include <linux/mutex.h>
14 + #include <linux/smp.h>
15 + 
16 + #define DHRY_VAX 1757
17 + 
18 + static int dhry_run_set(const char *val, const struct kernel_param *kp);
19 + static const struct kernel_param_ops run_ops = {
20 + 	.flags = KERNEL_PARAM_OPS_FL_NOARG,
21 + 	.set = dhry_run_set,
22 + };
23 + static bool dhry_run;
24 + module_param_cb(run, &run_ops, &dhry_run, 0200);
25 + MODULE_PARM_DESC(run, "Run the test (default: false)");
26 + 
27 + static int iterations = -1;
28 + module_param(iterations, int, 0644);
29 + MODULE_PARM_DESC(iterations,
30 + 		"Number of iterations through the benchmark (default: auto)");
31 + 
32 + static void dhry_benchmark(void)
33 + {
34 + 	int i, n;
35 + 
36 + 	if (iterations > 0) {
37 + 		n = dhry(iterations);
38 + 		goto report;
39 + 	}
40 + 
41 + 	for (i = DHRY_VAX; i > 0; i <<= 1) {
42 + 		n = dhry(i);
43 + 		if (n != -EAGAIN)
44 + 			break;
45 + 	}
46 + 
47 + report:
48 + 	if (n >= 0)
49 + 		pr_info("CPU%u: Dhrystones per Second: %d (%d DMIPS)\n",
50 + 			smp_processor_id(), n, n / DHRY_VAX);
51 + 	else if (n == -EAGAIN)
52 + 		pr_err("Please increase the number of iterations\n");
53 + 	else
54 + 		pr_err("Dhrystone benchmark failed error %pe\n", ERR_PTR(n));
55 + }
56 + 
57 + static int dhry_run_set(const char *val, const struct kernel_param *kp)
58 + {
59 + 	int ret;
60 + 
61 + 	if (val) {
62 + 		ret = param_set_bool(val, kp);
63 + 		if (ret)
64 + 			return ret;
65 + 	} else {
66 + 		dhry_run = true;
67 + 	}
68 + 
69 + 	if (dhry_run && system_state == SYSTEM_RUNNING)
70 + 		dhry_benchmark();
71 + 
72 + 	return 0;
73 + }
74 + 
75 + static int __init dhry_init(void)
76 + {
77 + 	if (dhry_run)
78 + 		dhry_benchmark();
79 + 
80 + 	return 0;
81 + }
82 + module_init(dhry_init);
83 + 
84 + MODULE_AUTHOR("Geert Uytterhoeven <geert+renesas@glider.be>");
85 + MODULE_LICENSE("GPL");
+1 -1
lib/error-inject.c
···
40 40 	int get_injectable_error_type(unsigned long addr)
41 41 	{
42 42 		struct ei_entry *ent;
43    - 	int ei_type = EI_ETYPE_NONE;
   43 + 	int ei_type = -EINVAL;
44 44 	
45 45 		mutex_lock(&ei_mutex);
46 46 		list_for_each_entry(ent, &error_injection_list, list) {
+8 -10
lib/genalloc.c
···
40 40 	return chunk->end_addr - chunk->start_addr + 1;
41 41 }
42 42 
43    - static int set_bits_ll(unsigned long *addr, unsigned long mask_to_set)
   43 + static inline int
   44 + set_bits_ll(unsigned long *addr, unsigned long mask_to_set)
44 45 {
45    - 	unsigned long val, nval;
   46 + 	unsigned long val = READ_ONCE(*addr);
46 47 
47    - 	nval = *addr;
48 48 	do {
49    - 		val = nval;
50 49 		if (val & mask_to_set)
51 50 			return -EBUSY;
52 51 		cpu_relax();
53    - 	} while ((nval = cmpxchg(addr, val, val | mask_to_set)) != val);
   52 + 	} while (!try_cmpxchg(addr, &val, val | mask_to_set));
54 53 
55 54 	return 0;
56 55 }
57 56 
58    - static int clear_bits_ll(unsigned long *addr, unsigned long mask_to_clear)
   57 + static inline int
   58 + clear_bits_ll(unsigned long *addr, unsigned long mask_to_clear)
59 59 {
60    - 	unsigned long val, nval;
   60 + 	unsigned long val = READ_ONCE(*addr);
61 61 
62    - 	nval = *addr;
63 62 	do {
64    - 		val = nval;
65 63 		if ((val & mask_to_clear) != mask_to_clear)
66 64 			return -EBUSY;
67 65 		cpu_relax();
68    - 	} while ((nval = cmpxchg(addr, val, val & ~mask_to_clear)) != val);
   66 + 	} while (!try_cmpxchg(addr, &val, val & ~mask_to_clear));
69 67 
70 68 	return 0;
71 69 }
+15 -10
lib/percpu_counter.c
···
 73  73 EXPORT_SYMBOL(percpu_counter_set);
 74  74 
 75  75 /*
 76     -  * This function is both preempt and irq safe. The former is due to explicit
 77     -  * preemption disable. The latter is guaranteed by the fact that the slow path
 78     -  * is explicitly protected by an irq-safe spinlock whereas the fast patch uses
 79     -  * this_cpu_add which is irq-safe by definition. Hence there is no need muck
 80     -  * with irq state before calling this one
     76 +  * local_irq_save() is needed to make the function irq safe:
     77 +  * - The slow path would be ok as protected by an irq-safe spinlock.
     78 +  * - this_cpu_add would be ok as it is irq-safe by definition.
     79 +  * But:
     80 +  * The decision slow path/fast path and the actual update must be atomic, too.
     81 +  * Otherwise a call in process context could check the current values and
     82 +  * decide that the fast path can be used. If now an interrupt occurs before
     83 +  * the this_cpu_add(), and the interrupt updates this_cpu(*fbc->counters),
     84 +  * then the this_cpu_add() that is executed after the interrupt has completed
     85 +  * can produce values larger than "batch" or even overflows.
 81  86  */
 82  87 void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
 83  88 {
 84  89 	s64 count;
     90 + 	unsigned long flags;
 85  91 
 86     - 	preempt_disable();
     92 + 	local_irq_save(flags);
 87  93 	count = __this_cpu_read(*fbc->counters) + amount;
 88  94 	if (abs(count) >= batch) {
 89     - 		unsigned long flags;
 90     - 		raw_spin_lock_irqsave(&fbc->lock, flags);
     95 + 		raw_spin_lock(&fbc->lock);
 91  96 		fbc->count += count;
 92  97 		__this_cpu_sub(*fbc->counters, count - amount);
 93     - 		raw_spin_unlock_irqrestore(&fbc->lock, flags);
     98 + 		raw_spin_unlock(&fbc->lock);
 94  99 	} else {
 95 100 		this_cpu_add(*fbc->counters, amount);
 96 101 	}
 97     - 	preempt_enable();
    102 + 	local_irq_restore(flags);
 98 103 }
 99 104 EXPORT_SYMBOL(percpu_counter_add_batch);
100 105 
+15 -8
lib/zlib_deflate/deflate.c
··· 54 54 55 55 /* architecture-specific bits */ 56 56 #ifdef CONFIG_ZLIB_DFLTCC 57 - # include "../zlib_dfltcc/dfltcc.h" 57 + # include "../zlib_dfltcc/dfltcc_deflate.h" 58 58 #else 59 59 #define DEFLATE_RESET_HOOK(strm) do {} while (0) 60 60 #define DEFLATE_HOOK(strm, flush, bstate) 0 ··· 106 106 deflate_state deflate_memory; 107 107 #ifdef CONFIG_ZLIB_DFLTCC 108 108 /* State memory for s390 hardware deflate */ 109 - struct dfltcc_state dfltcc_memory; 109 + struct dfltcc_deflate_state dfltcc_memory; 110 110 #endif 111 111 Byte *window_memory; 112 112 Pos *prev_memory; ··· 451 451 Assert(strm->avail_out > 0, "bug2"); 452 452 453 453 if (flush != Z_FINISH) return Z_OK; 454 - if (s->noheader) return Z_STREAM_END; 455 454 456 - /* Write the zlib trailer (adler32) */ 457 - putShortMSB(s, (uInt)(strm->adler >> 16)); 458 - putShortMSB(s, (uInt)(strm->adler & 0xffff)); 455 + if (!s->noheader) { 456 + /* Write zlib trailer (adler32) */ 457 + putShortMSB(s, (uInt)(strm->adler >> 16)); 458 + putShortMSB(s, (uInt)(strm->adler & 0xffff)); 459 + } 459 460 flush_pending(strm); 460 461 /* If avail_out is zero, the application will call deflate again 461 462 * to flush the rest. 462 463 */ 463 - s->noheader = -1; /* write the trailer only once! */ 464 - return s->pending != 0 ? Z_OK : Z_STREAM_END; 464 + if (!s->noheader) { 465 + s->noheader = -1; /* write the trailer only once! */ 466 + } 467 + if (s->pending == 0) { 468 + Assert(s->bi_valid == 0, "bi_buf not flushed"); 469 + return Z_STREAM_END; 470 + } 471 + return Z_OK; 465 472 } 466 473 467 474 /* ========================================================================= */
+3 -22
lib/zlib_dfltcc/dfltcc.c
··· 23 23 } 24 24 } 25 25 26 - void dfltcc_reset( 27 - z_streamp strm, 28 - uInt size 29 - ) 30 - { 31 - struct dfltcc_state *dfltcc_state = 32 - (struct dfltcc_state *)((char *)strm->state + size); 33 - struct dfltcc_qaf_param *param = 34 - (struct dfltcc_qaf_param *)&dfltcc_state->param; 35 - 26 + void dfltcc_reset_state(struct dfltcc_state *dfltcc_state) { 36 27 /* Initialize available functions */ 37 28 if (is_dfltcc_enabled()) { 38 - dfltcc(DFLTCC_QAF, param, NULL, NULL, NULL, NULL, NULL); 39 - memmove(&dfltcc_state->af, param, sizeof(dfltcc_state->af)); 29 + dfltcc(DFLTCC_QAF, &dfltcc_state->param, NULL, NULL, NULL, NULL, NULL); 30 + memmove(&dfltcc_state->af, &dfltcc_state->param, sizeof(dfltcc_state->af)); 40 31 } else 41 32 memset(&dfltcc_state->af, 0, sizeof(dfltcc_state->af)); 42 33 43 34 /* Initialize parameter block */ 44 35 memset(&dfltcc_state->param, 0, sizeof(dfltcc_state->param)); 45 36 dfltcc_state->param.nt = 1; 46 - 47 - /* Initialize tuning parameters */ 48 - if (zlib_dfltcc_support == ZLIB_DFLTCC_FULL_DEBUG) 49 - dfltcc_state->level_mask = DFLTCC_LEVEL_MASK_DEBUG; 50 - else 51 - dfltcc_state->level_mask = DFLTCC_LEVEL_MASK; 52 - dfltcc_state->block_size = DFLTCC_BLOCK_SIZE; 53 - dfltcc_state->block_threshold = DFLTCC_FIRST_FHT_BLOCK_SIZE; 54 - dfltcc_state->dht_threshold = DFLTCC_DHT_MIN_SAMPLE_SIZE; 55 37 dfltcc_state->param.ribm = DFLTCC_RIBM; 56 38 } 57 - EXPORT_SYMBOL(dfltcc_reset); 58 39 59 40 MODULE_LICENSE("GPL");
+12 -43
lib/zlib_dfltcc/dfltcc.h
··· 93 93 struct dfltcc_state { 94 94 struct dfltcc_param_v0 param; /* Parameter block */ 95 95 struct dfltcc_qaf_param af; /* Available functions */ 96 + char msg[64]; /* Buffer for strm->msg */ 97 + }; 98 + 99 + /* 100 + * Extension of inflate_state and deflate_state for DFLTCC. 101 + */ 102 + struct dfltcc_deflate_state { 103 + struct dfltcc_state common; /* Parameter block */ 96 104 uLong level_mask; /* Levels on which to use DFLTCC */ 97 105 uLong block_size; /* New block each X bytes */ 98 106 uLong block_threshold; /* New block after total_in > X */ 99 107 uLong dht_threshold; /* New block only if avail_in >= X */ 100 - char msg[64]; /* Buffer for strm->msg */ 101 108 }; 102 109 110 + #define ALIGN_UP(p, size) (__typeof__(p))(((uintptr_t)(p) + ((size) - 1)) & ~((size) - 1)) 103 111 /* Resides right after inflate_state or deflate_state */ 104 - #define GET_DFLTCC_STATE(state) ((struct dfltcc_state *)((state) + 1)) 112 + #define GET_DFLTCC_STATE(state) ((struct dfltcc_state *)((char *)(state) + ALIGN_UP(sizeof(*state), 8))) 105 113 106 - /* External functions */ 107 - int dfltcc_can_deflate(z_streamp strm); 108 - int dfltcc_deflate(z_streamp strm, 109 - int flush, 110 - block_state *result); 111 - void dfltcc_reset(z_streamp strm, uInt size); 112 - int dfltcc_can_inflate(z_streamp strm); 113 - typedef enum { 114 - DFLTCC_INFLATE_CONTINUE, 115 - DFLTCC_INFLATE_BREAK, 116 - DFLTCC_INFLATE_SOFTWARE, 117 - } dfltcc_inflate_action; 118 - dfltcc_inflate_action dfltcc_inflate(z_streamp strm, 119 - int flush, int *ret); 114 + void dfltcc_reset_state(struct dfltcc_state *dfltcc_state); 115 + 120 116 static inline int is_dfltcc_enabled(void) 121 117 { 122 118 return (zlib_dfltcc_support != ZLIB_DFLTCC_DISABLED && 123 119 test_facility(DFLTCC_FACILITY)); 124 120 } 125 121 126 - #define DEFLATE_RESET_HOOK(strm) \ 127 - dfltcc_reset((strm), sizeof(deflate_state)) 128 - 129 - #define DEFLATE_HOOK dfltcc_deflate 130 - 131 - #define DEFLATE_NEED_CHECKSUM(strm) 
(!dfltcc_can_deflate((strm))) 132 - 133 122 #define DEFLATE_DFLTCC_ENABLED() is_dfltcc_enabled() 134 - 135 - #define INFLATE_RESET_HOOK(strm) \ 136 - dfltcc_reset((strm), sizeof(struct inflate_state)) 137 - 138 - #define INFLATE_TYPEDO_HOOK(strm, flush) \ 139 - if (dfltcc_can_inflate((strm))) { \ 140 - dfltcc_inflate_action action; \ 141 - \ 142 - RESTORE(); \ 143 - action = dfltcc_inflate((strm), (flush), &ret); \ 144 - LOAD(); \ 145 - if (action == DFLTCC_INFLATE_CONTINUE) \ 146 - break; \ 147 - else if (action == DFLTCC_INFLATE_BREAK) \ 148 - goto inf_leave; \ 149 - } 150 - 151 - #define INFLATE_NEED_CHECKSUM(strm) (!dfltcc_can_inflate((strm))) 152 - 153 - #define INFLATE_NEED_UPDATEWINDOW(strm) (!dfltcc_can_inflate((strm))) 154 123 155 124 #endif /* DFLTCC_H */
+61 -30
lib/zlib_dfltcc/dfltcc_deflate.c
··· 2 2 3 3 #include "../zlib_deflate/defutil.h" 4 4 #include "dfltcc_util.h" 5 - #include "dfltcc.h" 5 + #include "dfltcc_deflate.h" 6 6 #include <asm/setup.h> 7 7 #include <linux/export.h> 8 8 #include <linux/zutil.h> 9 + 10 + #define GET_DFLTCC_DEFLATE_STATE(state) ((struct dfltcc_deflate_state *)GET_DFLTCC_STATE(state)) 9 11 10 12 /* 11 13 * Compress. ··· 17 15 ) 18 16 { 19 17 deflate_state *state = (deflate_state *)strm->state; 20 - struct dfltcc_state *dfltcc_state = GET_DFLTCC_STATE(state); 18 + struct dfltcc_deflate_state *dfltcc_state = GET_DFLTCC_DEFLATE_STATE(state); 21 19 22 20 /* Check for kernel dfltcc command line parameter */ 23 21 if (zlib_dfltcc_support == ZLIB_DFLTCC_DISABLED || ··· 30 28 return 0; 31 29 32 30 /* Unsupported hardware */ 33 - if (!is_bit_set(dfltcc_state->af.fns, DFLTCC_GDHT) || 34 - !is_bit_set(dfltcc_state->af.fns, DFLTCC_CMPR) || 35 - !is_bit_set(dfltcc_state->af.fmts, DFLTCC_FMT0)) 31 + if (!is_bit_set(dfltcc_state->common.af.fns, DFLTCC_GDHT) || 32 + !is_bit_set(dfltcc_state->common.af.fns, DFLTCC_CMPR) || 33 + !is_bit_set(dfltcc_state->common.af.fmts, DFLTCC_FMT0)) 36 34 return 0; 37 35 38 36 return 1; 39 37 } 40 38 EXPORT_SYMBOL(dfltcc_can_deflate); 39 + 40 + void dfltcc_reset_deflate_state(z_streamp strm) { 41 + deflate_state *state = (deflate_state *)strm->state; 42 + struct dfltcc_deflate_state *dfltcc_state = GET_DFLTCC_DEFLATE_STATE(state); 43 + 44 + dfltcc_reset_state(&dfltcc_state->common); 45 + 46 + /* Initialize tuning parameters */ 47 + if (zlib_dfltcc_support == ZLIB_DFLTCC_FULL_DEBUG) 48 + dfltcc_state->level_mask = DFLTCC_LEVEL_MASK_DEBUG; 49 + else 50 + dfltcc_state->level_mask = DFLTCC_LEVEL_MASK; 51 + dfltcc_state->block_size = DFLTCC_BLOCK_SIZE; 52 + dfltcc_state->block_threshold = DFLTCC_FIRST_FHT_BLOCK_SIZE; 53 + dfltcc_state->dht_threshold = DFLTCC_DHT_MIN_SAMPLE_SIZE; 54 + } 55 + EXPORT_SYMBOL(dfltcc_reset_deflate_state); 41 56 42 57 static void dfltcc_gdht( 43 58 z_streamp strm ··· 62 43 { 63 44 
deflate_state *state = (deflate_state *)strm->state; 64 45 struct dfltcc_param_v0 *param = &GET_DFLTCC_STATE(state)->param; 65 - size_t avail_in = avail_in = strm->avail_in; 46 + size_t avail_in = strm->avail_in; 66 47 67 48 dfltcc(DFLTCC_GDHT, 68 49 param, NULL, NULL, ··· 123 104 ) 124 105 { 125 106 deflate_state *state = (deflate_state *)strm->state; 126 - struct dfltcc_state *dfltcc_state = GET_DFLTCC_STATE(state); 127 - struct dfltcc_param_v0 *param = &dfltcc_state->param; 107 + struct dfltcc_deflate_state *dfltcc_state = GET_DFLTCC_DEFLATE_STATE(state); 108 + struct dfltcc_param_v0 *param = &dfltcc_state->common.param; 128 109 uInt masked_avail_in; 129 110 dfltcc_cc cc; 130 111 int need_empty_block; 131 112 int soft_bcc; 132 113 int no_flush; 133 114 134 - if (!dfltcc_can_deflate(strm)) 115 + if (!dfltcc_can_deflate(strm)) { 116 + /* Clear history. */ 117 + if (flush == Z_FULL_FLUSH) 118 + param->hl = 0; 135 119 return 0; 120 + } 136 121 137 122 again: 138 123 masked_avail_in = 0; 139 124 soft_bcc = 0; 140 125 no_flush = flush == Z_NO_FLUSH; 141 126 142 - /* Trailing empty block. Switch to software, except when Continuation Flag 143 - * is set, which means that DFLTCC has buffered some output in the 144 - * parameter block and needs to be called again in order to flush it. 127 + /* No input data. Return, except when Continuation Flag is set, which means 128 + * that DFLTCC has buffered some output in the parameter block and needs to 129 + * be called again in order to flush it. 145 130 */ 146 - if (flush == Z_FINISH && strm->avail_in == 0 && !param->cf) { 147 - if (param->bcf) { 148 - /* A block is still open, and the hardware does not support closing 149 - * blocks without adding data. Thus, close it manually. 150 - */ 131 + if (strm->avail_in == 0 && !param->cf) { 132 + /* A block is still open, and the hardware does not support closing 133 + * blocks without adding data. Thus, close it manually. 
134 + */ 135 + if (!no_flush && param->bcf) { 151 136 send_eobs(strm, param); 152 137 param->bcf = 0; 153 138 } 154 - return 0; 155 - } 156 - 157 - if (strm->avail_in == 0 && !param->cf) { 158 - *result = need_more; 139 + /* Let one of deflate_* functions write a trailing empty block. */ 140 + if (flush == Z_FINISH) 141 + return 0; 142 + /* Clear history. */ 143 + if (flush == Z_FULL_FLUSH) 144 + param->hl = 0; 145 + /* Trigger block post-processing if necessary. */ 146 + *result = no_flush ? need_more : block_done; 159 147 return 1; 160 148 } 161 149 ··· 189 163 param->bcf = 0; 190 164 dfltcc_state->block_threshold = 191 165 strm->total_in + dfltcc_state->block_size; 192 - if (strm->avail_out == 0) { 193 - *result = need_more; 194 - return 1; 195 - } 196 166 } 167 + } 168 + 169 + /* No space for compressed data. If we proceed, dfltcc_cmpr() will return 170 + * DFLTCC_CC_OP1_TOO_SHORT without buffering header bits, but we will still 171 + * set BCF=1, which is wrong. Avoid complications and return early. 172 + */ 173 + if (strm->avail_out == 0) { 174 + *result = need_more; 175 + return 1; 197 176 } 198 177 199 178 /* The caller gave us too much data. Pass only one block worth of ··· 220 189 param->cvt = CVT_ADLER32; 221 190 if (!no_flush) 222 191 /* We need to close a block. Always do this in software - when there is 223 - * no input data, the hardware will not nohor BCC. */ 192 + * no input data, the hardware will not hohor BCC. 
*/ 224 193 soft_bcc = 1; 225 194 if (flush == Z_FINISH && !param->bcf) 226 195 /* We are about to open a BFINAL block, set Block Header Final bit ··· 235 204 param->sbb = (unsigned int)state->bi_valid; 236 205 if (param->sbb > 0) 237 206 *strm->next_out = (Byte)state->bi_buf; 238 - if (param->hl) 239 - param->nt = 0; /* Honor history */ 207 + /* Honor history and check value */ 208 + param->nt = 0; 240 209 param->cv = strm->adler; 241 210 242 211 /* When opening a block, choose a Huffman-Table Type */ ··· 263 232 } while (cc == DFLTCC_CC_AGAIN); 264 233 265 234 /* Translate parameter block to stream */ 266 - strm->msg = oesc_msg(dfltcc_state->msg, param->oesc); 235 + strm->msg = oesc_msg(dfltcc_state->common.msg, param->oesc); 267 236 state->bi_valid = param->sbb; 268 237 if (state->bi_valid == 0) 269 238 state->bi_buf = 0; /* Avoid accessing next_out */
+21
lib/zlib_dfltcc/dfltcc_deflate.h
··· 1 + // SPDX-License-Identifier: Zlib 2 + #ifndef DFLTCC_DEFLATE_H 3 + #define DFLTCC_DEFLATE_H 4 + 5 + #include "dfltcc.h" 6 + 7 + /* External functions */ 8 + int dfltcc_can_deflate(z_streamp strm); 9 + int dfltcc_deflate(z_streamp strm, 10 + int flush, 11 + block_state *result); 12 + void dfltcc_reset_deflate_state(z_streamp strm); 13 + 14 + #define DEFLATE_RESET_HOOK(strm) \ 15 + dfltcc_reset_deflate_state((strm)) 16 + 17 + #define DEFLATE_HOOK dfltcc_deflate 18 + 19 + #define DEFLATE_NEED_CHECKSUM(strm) (!dfltcc_can_deflate((strm))) 20 + 21 + #endif /* DFLTCC_DEFLATE_H */
+13 -11
lib/zlib_dfltcc/dfltcc_inflate.c
··· 2 2 3 3 #include "../zlib_inflate/inflate.h" 4 4 #include "dfltcc_util.h" 5 - #include "dfltcc.h" 5 + #include "dfltcc_inflate.h" 6 6 #include <asm/setup.h> 7 7 #include <linux/export.h> 8 8 #include <linux/zutil.h> ··· 22 22 zlib_dfltcc_support == ZLIB_DFLTCC_DEFLATE_ONLY) 23 23 return 0; 24 24 25 - /* Unsupported compression settings */ 26 - if (state->wbits != HB_BITS) 27 - return 0; 28 - 29 25 /* Unsupported hardware */ 30 26 return is_bit_set(dfltcc_state->af.fns, DFLTCC_XPND) && 31 27 is_bit_set(dfltcc_state->af.fmts, DFLTCC_FMT0); 32 28 } 33 29 EXPORT_SYMBOL(dfltcc_can_inflate); 30 + 31 + void dfltcc_reset_inflate_state(z_streamp strm) { 32 + struct inflate_state *state = (struct inflate_state *)strm->state; 33 + struct dfltcc_state *dfltcc_state = GET_DFLTCC_STATE(state); 34 + 35 + dfltcc_reset_state(dfltcc_state); 36 + } 37 + EXPORT_SYMBOL(dfltcc_reset_inflate_state); 34 38 35 39 static int dfltcc_was_inflate_used( 36 40 z_streamp strm ··· 95 91 struct dfltcc_param_v0 *param = &dfltcc_state->param; 96 92 dfltcc_cc cc; 97 93 98 - if (flush == Z_BLOCK) { 99 - /* DFLTCC does not support stopping on block boundaries */ 94 + if (flush == Z_BLOCK || flush == Z_PACKET_FLUSH) { 95 + /* DFLTCC does not support stopping on block boundaries (Z_BLOCK flush option) 96 + * as well as the use of Z_PACKET_FLUSH option (used exclusively by PPP driver) 97 + */ 100 98 if (dfltcc_inflate_disable(strm)) { 101 99 *ret = Z_STREAM_ERROR; 102 100 return DFLTCC_INFLATE_BREAK; ··· 127 121 /* Translate stream to parameter block */ 128 122 param->cvt = CVT_ADLER32; 129 123 param->sbb = state->bits; 130 - param->hl = state->whave; /* Software and hardware history formats match */ 131 - param->ho = (state->write - state->whave) & ((1 << HB_BITS) - 1); 132 124 if (param->hl) 133 125 param->nt = 0; /* Honor history for the first block */ 134 126 param->cv = state->check; ··· 140 136 strm->msg = oesc_msg(dfltcc_state->msg, param->oesc); 141 137 state->last = cc == DFLTCC_CC_OK; 142 138 
state->bits = param->sbb; 143 - state->whave = param->hl; 144 - state->write = (param->ho + param->hl) & ((1 << HB_BITS) - 1); 145 139 state->check = param->cv; 146 140 if (cc == DFLTCC_CC_OP2_CORRUPT && param->oesc != 0) { 147 141 /* Report an error if stream is corrupted */
+37
lib/zlib_dfltcc/dfltcc_inflate.h
··· 1 + // SPDX-License-Identifier: Zlib 2 + #ifndef DFLTCC_INFLATE_H 3 + #define DFLTCC_INFLATE_H 4 + 5 + #include "dfltcc.h" 6 + 7 + /* External functions */ 8 + void dfltcc_reset_inflate_state(z_streamp strm); 9 + int dfltcc_can_inflate(z_streamp strm); 10 + typedef enum { 11 + DFLTCC_INFLATE_CONTINUE, 12 + DFLTCC_INFLATE_BREAK, 13 + DFLTCC_INFLATE_SOFTWARE, 14 + } dfltcc_inflate_action; 15 + dfltcc_inflate_action dfltcc_inflate(z_streamp strm, 16 + int flush, int *ret); 17 + #define INFLATE_RESET_HOOK(strm) \ 18 + dfltcc_reset_inflate_state((strm)) 19 + 20 + #define INFLATE_TYPEDO_HOOK(strm, flush) \ 21 + if (dfltcc_can_inflate((strm))) { \ 22 + dfltcc_inflate_action action; \ 23 + \ 24 + RESTORE(); \ 25 + action = dfltcc_inflate((strm), (flush), &ret); \ 26 + LOAD(); \ 27 + if (action == DFLTCC_INFLATE_CONTINUE) \ 28 + break; \ 29 + else if (action == DFLTCC_INFLATE_BREAK) \ 30 + goto inf_leave; \ 31 + } 32 + 33 + #define INFLATE_NEED_CHECKSUM(strm) (!dfltcc_can_inflate((strm))) 34 + 35 + #define INFLATE_NEED_UPDATEWINDOW(strm) (!dfltcc_can_inflate((strm))) 36 + 37 + #endif /* DFLTCC_DEFLATE_H */
+1 -1
lib/zlib_inflate/inflate.c
···
17 17 
18 18 /* architecture-specific bits */
19 19 #ifdef CONFIG_ZLIB_DFLTCC
20    - # include "../zlib_dfltcc/dfltcc.h"
   20 + # include "../zlib_dfltcc/dfltcc_inflate.h"
21 21 #else
22 22 #define INFLATE_RESET_HOOK(strm) do {} while (0)
23 23 #define INFLATE_TYPEDO_HOOK(strm, flush) do {} while (0)
+1 -1
net/openvswitch/flow_table.c
···
1013 1013 
1014 1014 	mask = flow_mask_find(tbl, new);
1015 1015 	if (!mask) {
1016      - 		/* Allocate a new mask if none exsits. */
     1016 + 		/* Allocate a new mask if none exists. */
1017 1017 		mask = mask_alloc();
1018 1018 		if (!mask)
1019 1019 			return -ENOMEM;
+1 -2
scripts/bloat-o-meter
···
80 80         if d<0: shrink, down = shrink+1, down-d
81 81         delta.append((d, name))
82 82 
83    -     delta.sort()
84    -     delta.reverse()
   83 +     delta.sort(reverse=True)
85 84     return grow, shrink, add, remove, up, down, delta, old, new, otot, ntot
86 85 
87 86 def print_result(symboltype, symbolformat):
+32 -6
scripts/checkpatch.pl
··· 823 823 "get_state_synchronize_sched" => "get_state_synchronize_rcu", 824 824 "cond_synchronize_sched" => "cond_synchronize_rcu", 825 825 "kmap" => "kmap_local_page", 826 + "kunmap" => "kunmap_local", 826 827 "kmap_atomic" => "kmap_local_page", 828 + "kunmap_atomic" => "kunmap_local", 827 829 ); 828 830 829 831 #Create a search pattern for all these strings to speed up a loop below ··· 3144 3142 if ($sign_off =~ /^co-developed-by:$/i) { 3145 3143 if ($email eq $author) { 3146 3144 WARN("BAD_SIGN_OFF", 3147 - "Co-developed-by: should not be used to attribute nominal patch author '$author'\n" . "$here\n" . $rawline); 3145 + "Co-developed-by: should not be used to attribute nominal patch author '$author'\n" . $herecurr); 3148 3146 } 3149 3147 if (!defined $lines[$linenr]) { 3150 3148 WARN("BAD_SIGN_OFF", 3151 - "Co-developed-by: must be immediately followed by Signed-off-by:\n" . "$here\n" . $rawline); 3152 - } elsif ($rawlines[$linenr] !~ /^\s*signed-off-by:\s*(.*)/i) { 3149 + "Co-developed-by: must be immediately followed by Signed-off-by:\n" . $herecurr); 3150 + } elsif ($rawlines[$linenr] !~ /^signed-off-by:\s*(.*)/i) { 3153 3151 WARN("BAD_SIGN_OFF", 3154 - "Co-developed-by: must be immediately followed by Signed-off-by:\n" . "$here\n" . $rawline . "\n" .$rawlines[$linenr]); 3152 + "Co-developed-by: must be immediately followed by Signed-off-by:\n" . $herecurr . $rawlines[$linenr] . "\n"); 3155 3153 } elsif ($1 ne $email) { 3156 3154 WARN("BAD_SIGN_OFF", 3157 - "Co-developed-by and Signed-off-by: name/email do not match \n" . "$here\n" . $rawline . "\n" .$rawlines[$linenr]); 3155 + "Co-developed-by and Signed-off-by: name/email do not match\n" . $herecurr . $rawlines[$linenr] . 
"\n"); 3156 + } 3157 + } 3158 + 3159 + # check if Reported-by: is followed by a Link: 3160 + if ($sign_off =~ /^reported(?:|-and-tested)-by:$/i) { 3161 + if (!defined $lines[$linenr]) { 3162 + WARN("BAD_REPORTED_BY_LINK", 3163 + "Reported-by: should be immediately followed by Link: to the report\n" . $herecurr . $rawlines[$linenr] . "\n"); 3164 + } elsif ($rawlines[$linenr] !~ m{^link:\s*https?://}i) { 3165 + WARN("BAD_REPORTED_BY_LINK", 3166 + "Reported-by: should be immediately followed by Link: with a URL to the report\n" . $herecurr . $rawlines[$linenr] . "\n"); 3158 3167 } 3159 3168 } 3160 3169 } 3170 + 3161 3171 3162 3172 # Check Fixes: styles is correct 3163 3173 if (!$in_header_lines && ··· 3262 3248 if ($in_commit_log && $commit_log_possible_stack_dump && 3263 3249 $line =~ /^\s*$/) { 3264 3250 $commit_log_possible_stack_dump = 0; 3251 + } 3252 + 3253 + # Check for odd tags before a URI/URL 3254 + if ($in_commit_log && 3255 + $line =~ /^\s*(\w+):\s*http/ && $1 ne 'Link') { 3256 + if ($1 =~ /^v(?:ersion)?\d+/i) { 3257 + WARN("COMMIT_LOG_VERSIONING", 3258 + "Patch version information should be after the --- line\n" . $herecurr); 3259 + } else { 3260 + WARN("COMMIT_LOG_USE_LINK", 3261 + "Unknown link reference '$1:', use 'Link:' instead\n" . $herecurr); 3262 + } 3265 3263 } 3266 3264 3267 3265 # Check for lines starting with a # ··· 3751 3725 } 3752 3726 3753 3727 # check for embedded filenames 3754 - if ($rawline =~ /^\+.*\Q$realfile\E/) { 3728 + if ($rawline =~ /^\+.*\b\Q$realfile\E\b/) { 3755 3729 WARN("EMBEDDED_FILENAME", 3756 3730 "It's generally not useful to have the filename in the file\n" . $herecurr); 3757 3731 }
+222
scripts/gdb/linux/mm.py
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + # 3 + # gdb helper commands and functions for Linux kernel debugging 4 + # 5 + # routines to introspect page table 6 + # 7 + # Authors: 8 + # Dmitrii Bundin <dmitrii.bundin.a@gmail.com> 9 + # 10 + 11 + import gdb 12 + 13 + from linux import utils 14 + 15 + PHYSICAL_ADDRESS_MASK = gdb.parse_and_eval('0xfffffffffffff') 16 + 17 + 18 + def page_mask(level=1): 19 + # 4KB 20 + if level == 1: 21 + return gdb.parse_and_eval('(u64) ~0xfff') 22 + # 2MB 23 + elif level == 2: 24 + return gdb.parse_and_eval('(u64) ~0x1fffff') 25 + # 1GB 26 + elif level == 3: 27 + return gdb.parse_and_eval('(u64) ~0x3fffffff') 28 + else: 29 + raise Exception(f'Unknown page level: {level}') 30 + 31 + 32 + #page_offset_base in case CONFIG_DYNAMIC_MEMORY_LAYOUT is disabled 33 + POB_NO_DYNAMIC_MEM_LAYOUT = '0xffff888000000000' 34 + def _page_offset_base(): 35 + pob_symbol = gdb.lookup_global_symbol('page_offset_base') 36 + pob = pob_symbol.name if pob_symbol else POB_NO_DYNAMIC_MEM_LAYOUT 37 + return gdb.parse_and_eval(pob) 38 + 39 + 40 + def is_bit_defined_tupled(data, offset): 41 + return offset, bool(data >> offset & 1) 42 + 43 + def content_tupled(data, bit_start, bit_end): 44 + return (bit_start, bit_end), data >> bit_start & ((1 << (1 + bit_end - bit_start)) - 1) 45 + 46 + def entry_va(level, phys_addr, translating_va): 47 + def start_bit(level): 48 + if level == 5: 49 + return 48 50 + elif level == 4: 51 + return 39 52 + elif level == 3: 53 + return 30 54 + elif level == 2: 55 + return 21 56 + elif level == 1: 57 + return 12 58 + else: 59 + raise Exception(f'Unknown level {level}') 60 + 61 + entry_offset = ((translating_va >> start_bit(level)) & 511) * 8 62 + entry_va = _page_offset_base() + phys_addr + entry_offset 63 + return entry_va 64 + 65 + class Cr3(): 66 + def __init__(self, cr3, page_levels): 67 + self.cr3 = cr3 68 + self.page_levels = page_levels 69 + self.page_level_write_through = is_bit_defined_tupled(cr3, 3) 70 + 
self.page_level_cache_disabled = is_bit_defined_tupled(cr3, 4) 71 + self.next_entry_physical_address = cr3 & PHYSICAL_ADDRESS_MASK & page_mask() 72 + 73 + def next_entry(self, va): 74 + next_level = self.page_levels 75 + return PageHierarchyEntry(entry_va(next_level, self.next_entry_physical_address, va), next_level) 76 + 77 + def mk_string(self): 78 + return f"""\ 79 + cr3: 80 + {'cr3 binary data': <30} {hex(self.cr3)} 81 + {'next entry physical address': <30} {hex(self.next_entry_physical_address)} 82 + --- 83 + {'bit' : <4} {self.page_level_write_through[0]: <10} {'page level write through': <30} {self.page_level_write_through[1]} 84 + {'bit' : <4} {self.page_level_cache_disabled[0]: <10} {'page level cache disabled': <30} {self.page_level_cache_disabled[1]} 85 + """ 86 + 87 + 88 + class PageHierarchyEntry(): 89 + def __init__(self, address, level): 90 + data = int.from_bytes( 91 + memoryview(gdb.selected_inferior().read_memory(address, 8)), 92 + "little" 93 + ) 94 + if level == 1: 95 + self.is_page = True 96 + self.entry_present = is_bit_defined_tupled(data, 0) 97 + self.read_write = is_bit_defined_tupled(data, 1) 98 + self.user_access_allowed = is_bit_defined_tupled(data, 2) 99 + self.page_level_write_through = is_bit_defined_tupled(data, 3) 100 + self.page_level_cache_disabled = is_bit_defined_tupled(data, 4) 101 + self.entry_was_accessed = is_bit_defined_tupled(data, 5) 102 + self.dirty = is_bit_defined_tupled(data, 6) 103 + self.pat = is_bit_defined_tupled(data, 7) 104 + self.global_translation = is_bit_defined_tupled(data, 8) 105 + self.page_physical_address = data & PHYSICAL_ADDRESS_MASK & page_mask(level) 106 + self.next_entry_physical_address = None 107 + self.hlat_restart_with_ordinary = is_bit_defined_tupled(data, 11) 108 + self.protection_key = content_tupled(data, 59, 62) 109 + self.executed_disable = is_bit_defined_tupled(data, 63) 110 + else: 111 + page_size = is_bit_defined_tupled(data, 7) 112 + page_size_bit = page_size[1] 113 + self.is_page = 
page_size_bit 114 + self.entry_present = is_bit_defined_tupled(data, 0) 115 + self.read_write = is_bit_defined_tupled(data, 1) 116 + self.user_access_allowed = is_bit_defined_tupled(data, 2) 117 + self.page_level_write_through = is_bit_defined_tupled(data, 3) 118 + self.page_level_cache_disabled = is_bit_defined_tupled(data, 4) 119 + self.entry_was_accessed = is_bit_defined_tupled(data, 5) 120 + self.page_size = page_size 121 + self.dirty = is_bit_defined_tupled( 122 + data, 6) if page_size_bit else None 123 + self.global_translation = is_bit_defined_tupled( 124 + data, 8) if page_size_bit else None 125 + self.pat = is_bit_defined_tupled( 126 + data, 12) if page_size_bit else None 127 + self.page_physical_address = data & PHYSICAL_ADDRESS_MASK & page_mask(level) if page_size_bit else None 128 + self.next_entry_physical_address = None if page_size_bit else data & PHYSICAL_ADDRESS_MASK & page_mask() 129 + self.hlat_restart_with_ordinary = is_bit_defined_tupled(data, 11) 130 + self.protection_key = content_tupled(data, 59, 62) if page_size_bit else None 131 + self.executed_disable = is_bit_defined_tupled(data, 63) 132 + self.address = address 133 + self.page_entry_binary_data = data 134 + self.page_hierarchy_level = level 135 + 136 + def next_entry(self, va): 137 + if self.is_page or not self.entry_present[1]: 138 + return None 139 + 140 + next_level = self.page_hierarchy_level - 1 141 + return PageHierarchyEntry(entry_va(next_level, self.next_entry_physical_address, va), next_level) 142 + 143 + 144 + def mk_string(self): 145 + if not self.entry_present[1]: 146 + return f"""\ 147 + level {self.page_hierarchy_level}: 148 + {'entry address': <30} {hex(self.address)} 149 + {'page entry binary data': <30} {hex(self.page_entry_binary_data)} 150 + --- 151 + PAGE ENTRY IS NOT PRESENT! 
152 + """ 153 + elif self.is_page: 154 + def page_size_line(ps_bit, ps, level): 155 + return "" if level == 1 else f"{'bit': <3} {ps_bit: <5} {'page size': <30} {ps}" 156 + 157 + return f"""\ 158 + level {self.page_hierarchy_level}: 159 + {'entry address': <30} {hex(self.address)} 160 + {'page entry binary data': <30} {hex(self.page_entry_binary_data)} 161 + {'page size': <30} {'1GB' if self.page_hierarchy_level == 3 else '2MB' if self.page_hierarchy_level == 2 else '4KB' if self.page_hierarchy_level == 1 else 'Unknown page size for level:' + self.page_hierarchy_level} 162 + {'page physical address': <30} {hex(self.page_physical_address)} 163 + --- 164 + {'bit': <4} {self.entry_present[0]: <10} {'entry present': <30} {self.entry_present[1]} 165 + {'bit': <4} {self.read_write[0]: <10} {'read/write access allowed': <30} {self.read_write[1]} 166 + {'bit': <4} {self.user_access_allowed[0]: <10} {'user access allowed': <30} {self.user_access_allowed[1]} 167 + {'bit': <4} {self.page_level_write_through[0]: <10} {'page level write through': <30} {self.page_level_write_through[1]} 168 + {'bit': <4} {self.page_level_cache_disabled[0]: <10} {'page level cache disabled': <30} {self.page_level_cache_disabled[1]} 169 + {'bit': <4} {self.entry_was_accessed[0]: <10} {'entry has been accessed': <30} {self.entry_was_accessed[1]} 170 + {"" if self.page_hierarchy_level == 1 else f"{'bit': <4} {self.page_size[0]: <10} {'page size': <30} {self.page_size[1]}"} 171 + {'bit': <4} {self.dirty[0]: <10} {'page dirty': <30} {self.dirty[1]} 172 + {'bit': <4} {self.global_translation[0]: <10} {'global translation': <30} {self.global_translation[1]} 173 + {'bit': <4} {self.hlat_restart_with_ordinary[0]: <10} {'restart to ordinary': <30} {self.hlat_restart_with_ordinary[1]} 174 + {'bit': <4} {self.pat[0]: <10} {'pat': <30} {self.pat[1]} 175 + {'bits': <4} {str(self.protection_key[0]): <10} {'protection key': <30} {self.protection_key[1]} 176 + {'bit': <4} {self.executed_disable[0]: <10} {'execute 
disable': <30} {self.executed_disable[1]} 177 + """ 178 + else: 179 + return f"""\ 180 + level {self.page_hierarchy_level}: 181 + {'entry address': <30} {hex(self.address)} 182 + {'page entry binary data': <30} {hex(self.page_entry_binary_data)} 183 + {'next entry physical address': <30} {hex(self.next_entry_physical_address)} 184 + --- 185 + {'bit': <4} {self.entry_present[0]: <10} {'entry present': <30} {self.entry_present[1]} 186 + {'bit': <4} {self.read_write[0]: <10} {'read/write access allowed': <30} {self.read_write[1]} 187 + {'bit': <4} {self.user_access_allowed[0]: <10} {'user access allowed': <30} {self.user_access_allowed[1]} 188 + {'bit': <4} {self.page_level_write_through[0]: <10} {'page level write through': <30} {self.page_level_write_through[1]} 189 + {'bit': <4} {self.page_level_cache_disabled[0]: <10} {'page level cache disabled': <30} {self.page_level_cache_disabled[1]} 190 + {'bit': <4} {self.entry_was_accessed[0]: <10} {'entry has been accessed': <30} {self.entry_was_accessed[1]} 191 + {'bit': <4} {self.page_size[0]: <10} {'page size': <30} {self.page_size[1]} 192 + {'bit': <4} {self.hlat_restart_with_ordinary[0]: <10} {'restart to ordinary': <30} {self.hlat_restart_with_ordinary[1]} 193 + {'bit': <4} {self.executed_disable[0]: <10} {'execute disable': <30} {self.executed_disable[1]} 194 + """ 195 + 196 + 197 + class TranslateVM(gdb.Command): 198 + """Prints the entire paging structure used to translate a given virtual address. 199 + 200 + Given the address space of the currently executing process, translates the virtual address 201 + and prints detailed information about all paging structure levels used for the translation. 
202 + Currently supported arch: x86""" 203 + 204 + def __init__(self): 205 + super(TranslateVM, self).__init__('translate-vm', gdb.COMMAND_USER) 206 + 207 + def invoke(self, arg, from_tty): 208 + if utils.is_target_arch("x86"): 209 + vm_address = gdb.parse_and_eval(f'{arg}') 210 + cr3_data = gdb.parse_and_eval('$cr3') 211 + cr4 = gdb.parse_and_eval('$cr4') 212 + page_levels = 5 if cr4 & (1 << 12) else 4 213 + page_entry = Cr3(cr3_data, page_levels) 214 + while page_entry: 215 + gdb.write(page_entry.mk_string()) 216 + page_entry = page_entry.next_entry(vm_address) 217 + else: 218 + raise gdb.GdbError("Virtual address translation is not " 219 + "supported for this arch") 220 + 221 + 222 + TranslateVM()
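The index arithmetic that `entry_va()` in the new mm.py performs can be checked in isolation: each x86 paging level indexes a 512-entry table with 9 bits of the virtual address, and each entry is 8 bytes. This is a standalone sketch mirroring that arithmetic, not the gdb-dependent code itself:

```python
# Start bit of the 9-bit table index consumed by each paging level,
# matching entry_va()'s start_bit() in the hunk above: level 1 (4KB
# pages) starts at bit 12, and each higher level adds 9 bits.
START_BIT = {5: 48, 4: 39, 3: 30, 2: 21, 1: 12}

def entry_offset(level, va):
    index = (va >> START_BIT[level]) & 511   # 9-bit index into a 512-entry table
    return index * 8                         # entries are 8 bytes each

# Example: for va = 0x7fffdeadb000, the level-1 index is bits 12..20.
va = 0x7fffdeadb000
```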
+1
scripts/gdb/vmlinux-gdb.py
··· 37 37 import linux.clk 38 38 import linux.genpd 39 39 import linux.device 40 + import linux.mm
+17
scripts/spelling.txt
··· 65 65 acumulator||accumulator 66 66 acutally||actually 67 67 adapater||adapter 68 + adderted||asserted 68 69 addional||additional 69 70 additionaly||additionally 70 71 additonal||additional ··· 123 122 ambigious||ambiguous 124 123 ambigous||ambiguous 125 124 amoung||among 125 + amount of times||number of times 126 126 amout||amount 127 127 amplifer||amplifier 128 128 amplifyer||amplifier ··· 289 287 caputure||capture 290 288 carefuly||carefully 291 289 cariage||carriage 290 + casued||caused 292 291 catagory||category 293 292 cehck||check 294 293 challange||challenge ··· 373 370 conditionaly||conditionally 374 371 conditon||condition 375 372 condtion||condition 373 + condtional||conditional 376 374 conected||connected 377 375 conector||connector 378 376 configration||configuration ··· 427 423 couter||counter 428 424 coutner||counter 429 425 cryptocraphic||cryptographic 426 + cummulative||cumulative 430 427 cunter||counter 431 428 curently||currently 432 429 cylic||cyclic ··· 630 625 existance||existence 631 626 existant||existent 632 627 exixt||exist 628 + exsits||exists 633 629 exlcude||exclude 634 630 exlcusive||exclusive 631 + exlusive||exclusive 635 632 exmaple||example 636 633 expecially||especially 637 634 experies||expires ··· 671 664 feautures||features 672 665 fetaure||feature 673 666 fetaures||features 667 + fetcing||fetching 674 668 fileystem||filesystem 675 669 fimrware||firmware 676 670 fimware||firmware 677 671 firmare||firmware 678 672 firmaware||firmware 673 + firtly||firstly 679 674 firware||firmware 680 675 firwmare||firmware 681 676 finanize||finalize ··· 847 838 integrey||integrity 848 839 intendet||intended 849 840 intented||intended 841 + interal||internal 850 842 interanl||internal 851 843 interchangable||interchangeable 852 844 interferring||interfering ··· 1033 1023 nerver||never 1034 1024 nescessary||necessary 1035 1025 nessessary||necessary 1026 + none existent||non-existent 1036 1027 noticable||noticeable 1037 1028 
notication||notification 1038 1029 notications||notifications ··· 1055 1044 occurence||occurrence 1056 1045 occure||occurred 1057 1046 occuring||occurring 1047 + ocurrence||occurrence 1058 1048 offser||offset 1059 1049 offet||offset 1060 1050 offlaod||offload ··· 1067 1055 ommiting||omitting 1068 1056 ommitted||omitted 1069 1057 onself||oneself 1058 + onthe||on the 1070 1059 ony||only 1071 1060 openning||opening 1072 1061 operatione||operation ··· 1134 1121 periperal||peripheral 1135 1122 peripherial||peripheral 1136 1123 permissons||permissions 1124 + permited||permitted 1137 1125 peroid||period 1138 1126 persistance||persistence 1139 1127 persistant||persistent ··· 1348 1334 safly||safely 1349 1335 safty||safety 1350 1336 satify||satisfy 1337 + satisifed||satisfied 1351 1338 savable||saveable 1352 1339 scaleing||scaling 1353 1340 scaned||scanned ··· 1573 1558 ture||true 1574 1559 tyep||type 1575 1560 udpate||update 1561 + updtes||updates 1576 1562 uesd||used 1577 1563 unknwon||unknown 1578 1564 uknown||unknown ··· 1630 1614 unvalid||invalid 1631 1615 upate||update 1632 1616 upsupported||unsupported 1617 + upto||up to 1633 1618 useable||usable 1634 1619 usefule||useful 1635 1620 usefull||useful
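Each line of scripts/spelling.txt pairs a misspelling with its correction, separated by `||` (checkpatch.pl consumes this file). A minimal parser sketch over a few of the entries added in this hunk:

```python
# Parse the "misspelling||correction" format of scripts/spelling.txt.
# Sample entries are taken from the additions above; partition() keeps
# corrections that themselves contain spaces ("up to") intact.
sample = """\
exsits||exists
firtly||firstly
upto||up to
amount of times||number of times
"""

corrections = {}
for line in sample.splitlines():
    wrong, _, right = line.partition('||')
    corrections[wrong] = right
```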
+6 -4
scripts/tags.sh
··· 264 264 --$CTAGS_EXTRA=+fq --c-kinds=+px --fields=+iaS --langmap=c:+.h \ 265 265 "${regex[@]}" 266 266 267 - setup_regex exuberant kconfig 268 - all_kconfigs | xargs $1 -a \ 269 - --langdef=kconfig --language-force=kconfig "${regex[@]}" 270 - 267 + KCONFIG_ARGS=() 268 + if ! $1 --list-languages | grep -iq kconfig; then 269 + setup_regex exuberant kconfig 270 + KCONFIG_ARGS=(--langdef=kconfig --language-force=kconfig "${regex[@]}") 271 + fi 272 + all_kconfigs | xargs $1 -a "${KCONFIG_ARGS[@]}" 271 273 } 272 274 273 275 emacs()
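The tags.sh change above works because newer Universal Ctags ships native kconfig support, so the custom `--langdef`/`--language-force` flags (and the `setup_regex` patterns) are only needed when `ctags --list-languages` does not mention kconfig. A sketch of that decision in Python, where `kconfig_args` and the `regex_args` placeholder are hypothetical and only the ctags flags come from the hunk:

```python
# Decide which extra ctags arguments to pass, mirroring the shell
# logic: `$1 --list-languages | grep -iq kconfig` means native support,
# in which case no custom language definition is needed.
def kconfig_args(listed_languages, regex_args):
    if any('kconfig' in lang.lower() for lang in listed_languages):
        return []  # native support: no extra flags
    return ['--langdef=kconfig', '--language-force=kconfig'] + regex_args

legacy_ctags = ['C', 'C++', 'Sh']            # exuberant-style, no kconfig
modern_ctags = ['C', 'C++', 'Kconfig', 'Sh'] # lists kconfig natively
```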
+1 -1
sound/soc/fsl/fsl-asoc-card.c
··· 811 811 priv->card.num_links = 1; 812 812 813 813 if (asrc_pdev) { 814 - /* DPCM DAI Links only if ASRC exsits */ 814 + /* DPCM DAI Links only if ASRC exists */ 815 815 priv->dai_link[1].cpus->of_node = asrc_np; 816 816 priv->dai_link[1].platforms->of_node = asrc_np; 817 817 priv->dai_link[2].codecs->dai_name = codec_dai_name;