Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mm-nonmm-stable-2024-05-19-11-56' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull non-mm updates from Andrew Morton:
"Mainly singleton patches, documented in their respective changelogs.
Notable series include:

- Some maintenance and performance work for ocfs2 in Heming Zhao's
series "improve write IO performance when fragmentation is high".

- Some ocfs2 bugfixes from Su Yue in the series "ocfs2 bugs fixes
exposed by fstests".

- kfifo header rework from Andy Shevchenko in the series "kfifo:
Clean up kfifo.h".

- GDB script fixes from Florian Rommel in the series "scripts/gdb:
Fixes for $lx_current and $lx_per_cpu".

- After much discussion, a coding-style update from Barry Song
explaining one reason why inline functions are preferred over
macros. The series is "codingstyle: avoid unused parameters for a
function-like macro""

* tag 'mm-nonmm-stable-2024-05-19-11-56' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (62 commits)
fs/proc: fix softlockup in __read_vmcore
nilfs2: convert BUG_ON() in nilfs_finish_roll_forward() to WARN_ON()
scripts: checkpatch: check unused parameters for function-like macro
Documentation: coding-style: ask function-like macros to evaluate parameters
nilfs2: use __field_struct() for a bitwise field
selftests/kcmp: remove unused open mode
nilfs2: remove calls to folio_set_error() and folio_clear_error()
kernel/watchdog_perf.c: tidy up kerneldoc
watchdog: allow nmi watchdog to use raw perf event
watchdog: handle comma separated nmi_watchdog command line
nilfs2: make superblock data array index computation sparse friendly
squashfs: remove calls to set the folio error flag
squashfs: convert squashfs_symlink_read_folio to use folio APIs
scripts/gdb: fix detection of current CPU in KGDB
scripts/gdb: make get_thread_info accept pointers
scripts/gdb: fix parameter handling in $lx_per_cpu
scripts/gdb: fix failing KGDB detection during probe
kfifo: don't use "proxy" headers
media: stih-cec: add missing io.h
media: rc: add missing io.h
...

+679 -427
+4 -4
Documentation/admin-guide/kdump/kdump.rst
···
 
 	CONFIG_KEXEC_CORE=y
 
-   Subsequently, CRASH_CORE is selected by KEXEC_CORE::
-
-	CONFIG_CRASH_CORE=y
-
 2) Enable "sysfs file system support" in "Filesystem" -> "Pseudo
    filesystems." This is usually enabled by default::
···
    features"::
 
 	CONFIG_CRASH_DUMP=y
+
+   And this will select VMCORE_INFO and CRASH_RESERVE::
+	CONFIG_VMCORE_INFO=y
+	CONFIG_CRASH_RESERVE=y
 
 2) Enable "/proc/vmcore support" under "Filesystems" -> "Pseudo filesystems"::
+3 -2
Documentation/admin-guide/kernel-parameters.txt
···
 			Format: [state][,regs][,debounce][,die]
 
 	nmi_watchdog=	[KNL,BUGS=X86] Debugging features for SMP kernels
-			Format: [panic,][nopanic,][num]
+			Format: [panic,][nopanic,][rNNN,][num]
 			Valid num: 0 or 1
 			0 - turn hardlockup detector in nmi_watchdog off
 			1 - turn hardlockup detector in nmi_watchdog on
+			rNNN - configure the watchdog with raw perf event 0xNNN
+
 			When panic is specified, panic when an NMI watchdog
 			timeout occurs (or 'nopanic' to not panic on an NMI
 			watchdog, if CONFIG_BOOTPARAM_HARDLOCKUP_PANIC is set)
···
 			memory, and other data can't be written using
 			xmon commands.
 			off	xmon is disabled.
-
+14
Documentation/dev-tools/checkpatch.rst
···
 
   See: https://lore.kernel.org/lkml/1399671106.2912.21.camel@joe-AO725/
 
+ **MACRO_ARG_UNUSED**
+   If function-like macros do not utilize a parameter, it might result
+   in a build warning. We advocate for utilizing static inline functions
+   to replace such macros.
+   For example, for a macro such as the one below::
+
+     #define test(a) do { } while (0)
+
+   there would be a warning like below::
+
+     WARNING: Argument 'a' is not used in function-like macro.
+
+   See: https://www.kernel.org/doc/html/latest/process/coding-style.html#macros-enums-and-rtl
+
  **SINGLE_STATEMENT_DO_WHILE_MACRO**
    For the multi-statement macros, it is necessary to use the do-while
    loop to avoid unpredictable code paths. The do-while loop helps to
+23
Documentation/process/coding-style.rst
···
 		do_this(b, c);		\
 	} while (0)
 
+Function-like macros with unused parameters should be replaced by static
+inline functions to avoid the issue of unused variables:
+
+.. code-block:: c
+
+	static inline void fun(struct foo *foo)
+	{
+	}
+
+Due to historical practices, many files still employ the "cast to (void)"
+approach to evaluate parameters. However, this method is not advisable.
+Inline functions address the issue of "expression with side effects
+evaluated more than once", circumvent unused-variable problems, and
+are generally better documented than macros for some reason.
+
+.. code-block:: c
+
+	/*
+	 * Avoid doing this whenever possible and instead opt for static
+	 * inline functions
+	 */
+	#define macrofun(foo) do { (void) (foo); } while (0)
+
 Things to avoid when using macros:
 
 1) macros that affect control flow:
+17 -4
arch/x86/lib/copy_mc.c
···
 #include <linux/jump_label.h>
 #include <linux/uaccess.h>
 #include <linux/export.h>
+#include <linux/instrumented.h>
 #include <linux/string.h>
 #include <linux/types.h>
···
  */
 unsigned long __must_check copy_mc_to_kernel(void *dst, const void *src, unsigned len)
 {
-	if (copy_mc_fragile_enabled)
-		return copy_mc_fragile(dst, src, len);
-	if (static_cpu_has(X86_FEATURE_ERMS))
-		return copy_mc_enhanced_fast_string(dst, src, len);
+	unsigned long ret;
+
+	if (copy_mc_fragile_enabled) {
+		instrument_memcpy_before(dst, src, len);
+		ret = copy_mc_fragile(dst, src, len);
+		instrument_memcpy_after(dst, src, len, ret);
+		return ret;
+	}
+	if (static_cpu_has(X86_FEATURE_ERMS)) {
+		instrument_memcpy_before(dst, src, len);
+		ret = copy_mc_enhanced_fast_string(dst, src, len);
+		instrument_memcpy_after(dst, src, len, ret);
+		return ret;
+	}
 	memcpy(dst, src, len);
 	return 0;
 }
···
 	unsigned long ret;
 
 	if (copy_mc_fragile_enabled) {
+		instrument_copy_to_user(dst, src, len);
 		__uaccess_begin();
 		ret = copy_mc_fragile((__force void *)dst, src, len);
 		__uaccess_end();
···
 	}
 
 	if (static_cpu_has(X86_FEATURE_ERMS)) {
+		instrument_copy_to_user(dst, src, len);
 		__uaccess_begin();
 		ret = copy_mc_enhanced_fast_string((__force void *)dst, src, len);
 		__uaccess_end();
+2 -4
block/partitions/ldm.c
···
 		ldm_crit ("Cannot find TOCBLOCK, database may be corrupt.");
 		return false;
 	}
-	strncpy (toc->bitmap1_name, data + 0x24, sizeof (toc->bitmap1_name));
-	toc->bitmap1_name[sizeof (toc->bitmap1_name) - 1] = 0;
+	strscpy_pad(toc->bitmap1_name, data + 0x24, sizeof(toc->bitmap1_name));
 	toc->bitmap1_start = get_unaligned_be64(data + 0x2E);
 	toc->bitmap1_size  = get_unaligned_be64(data + 0x36);
···
 			 TOC_BITMAP1, toc->bitmap1_name);
 		return false;
 	}
-	strncpy (toc->bitmap2_name, data + 0x46, sizeof (toc->bitmap2_name));
-	toc->bitmap2_name[sizeof (toc->bitmap2_name) - 1] = 0;
+	strscpy_pad(toc->bitmap2_name, data + 0x46, sizeof(toc->bitmap2_name));
 	toc->bitmap2_start = get_unaligned_be64(data + 0x50);
 	toc->bitmap2_size  = get_unaligned_be64(data + 0x58);
 	if (strncmp (toc->bitmap2_name, TOC_BITMAP2,
+3 -3
drivers/hwtracing/intel_th/core.c
···
 	if (!th)
 		return ERR_PTR(-ENOMEM);
 
-	th->id = ida_simple_get(&intel_th_ida, 0, 0, GFP_KERNEL);
+	th->id = ida_alloc(&intel_th_ida, GFP_KERNEL);
 	if (th->id < 0) {
 		err = th->id;
 		goto err_alloc;
···
 			    "intel_th/output");
 
 err_ida:
-	ida_simple_remove(&intel_th_ida, th->id);
+	ida_free(&intel_th_ida, th->id);
 
 err_alloc:
 	kfree(th);
···
 	__unregister_chrdev(th->major, 0, TH_POSSIBLE_OUTPUTS,
 			    "intel_th/output");
 
-	ida_simple_remove(&intel_th_ida, th->id);
+	ida_free(&intel_th_ida, th->id);
 
 	kfree(th);
 }
+1
drivers/media/cec/platform/sti/stih-cec.c
···
  */
 #include <linux/clk.h>
 #include <linux/interrupt.h>
+#include <linux/io.h>
 #include <linux/kernel.h>
 #include <linux/mfd/syscon.h>
 #include <linux/module.h>
+1
drivers/media/rc/mtk-cir.c
···
 #include <linux/clk.h>
 #include <linux/interrupt.h>
 #include <linux/module.h>
+#include <linux/io.h>
 #include <linux/of.h>
 #include <linux/platform_device.h>
 #include <linux/reset.h>
+1
drivers/media/rc/serial_ir.c
···
 #include <linux/module.h>
 #include <linux/errno.h>
 #include <linux/interrupt.h>
+#include <linux/io.h>
 #include <linux/kernel.h>
 #include <linux/serial_reg.h>
 #include <linux/types.h>
+1
drivers/media/rc/st_rc.c
···
 #include <linux/kernel.h>
 #include <linux/clk.h>
 #include <linux/interrupt.h>
+#include <linux/io.h>
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/platform_device.h>
+1
drivers/media/rc/sunxi-cir.c
···
 
 #include <linux/clk.h>
 #include <linux/interrupt.h>
+#include <linux/io.h>
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/platform_device.h>
+2 -2
drivers/mux/core.c
···
 {
 	struct mux_chip *mux_chip = to_mux_chip(dev);
 
-	ida_simple_remove(&mux_ida, mux_chip->id);
+	ida_free(&mux_ida, mux_chip->id);
 	kfree(mux_chip);
 }
···
 	mux_chip->dev.of_node = dev->of_node;
 	dev_set_drvdata(&mux_chip->dev, mux_chip);
 
-	mux_chip->id = ida_simple_get(&mux_ida, 0, 0, GFP_KERNEL);
+	mux_chip->id = ida_alloc(&mux_ida, GFP_KERNEL);
 	if (mux_chip->id < 0) {
 		int err = mux_chip->id;
 
+3 -3
drivers/pps/clients/pps_parport.c
···
 		return;
 	}
 
-	index = ida_simple_get(&pps_client_index, 0, 0, GFP_KERNEL);
+	index = ida_alloc(&pps_client_index, GFP_KERNEL);
 	memset(&pps_client_cb, 0, sizeof(pps_client_cb));
 	pps_client_cb.private = device;
 	pps_client_cb.irq_func = parport_irq;
···
 err_unregister_dev:
 	parport_unregister_device(device->pardev);
 err_free:
-	ida_simple_remove(&pps_client_index, index);
+	ida_free(&pps_client_index, index);
 	kfree(device);
 }
···
 	pps_unregister_source(device->pps);
 	parport_release(pardev);
 	parport_unregister_device(pardev);
-	ida_simple_remove(&pps_client_index, device->index);
+	ida_free(&pps_client_index, device->index);
 	kfree(device);
 }
 
+1 -1
fs/binfmt_elf.c
···
 		threads = t->next;
 		WARN_ON(t->notes[0].data && t->notes[0].data != &t->prstatus);
 		for (i = 1; i < info->thread_notes; ++i)
-			kfree(t->notes[i].data);
+			kvfree(t->notes[i].data);
 		kfree(t);
 	}
 	kfree(info->psinfo.data);
+12
fs/fat/dir.c
···
 /**
  * fat_parse_long - Parse extended directory entry.
  *
+ * @dir:	Pointer to the inode that represents the directory.
+ * @pos:	On input, contains the starting position to read from.
+ *		On output, updated with the new position.
+ * @bh:		Pointer to the buffer head that may be used for reading directory
+ *		entries. May be updated.
+ * @de:		On input, points to the current directory entry.
+ *		On output, points to the next directory entry.
+ * @unicode:	Pointer to a buffer where the parsed Unicode long filename will be
+ *		stored.
+ * @nr_slots:	Pointer to a variable that will store the number of longname
+ *		slots found.
+ *
  * This function returns zero on success, negative value on error, or one of
  * the following:
  *
+16 -7
fs/nilfs2/btree.c
···
 }
 
 /**
- * nilfs_btree_convert_and_insert -
- * @bmap:
- * @key:
- * @ptr:
- * @keys:
- * @ptrs:
- * @n:
+ * nilfs_btree_convert_and_insert - Convert and insert entries into a B-tree
+ * @btree: NILFS B-tree structure
+ * @key: Key of the new entry to be inserted
+ * @ptr: Pointer (block number) associated with the key to be inserted
+ * @keys: Array of keys to be inserted in addition to @key
+ * @ptrs: Array of pointers associated with @keys
+ * @n: Number of keys and pointers in @keys and @ptrs
+ *
+ * This function is used to insert a new entry specified by @key and @ptr,
+ * along with additional entries specified by @keys and @ptrs arrays, into a
+ * NILFS B-tree.
+ * It prepares the necessary changes by allocating the required blocks and any
+ * necessary intermediate nodes. It converts configurations from other forms of
+ * block mapping (the one that currently exists is direct mapping) to a B-tree.
+ *
+ * Return: 0 on success or a negative error code on failure.
  */
 int nilfs_btree_convert_and_insert(struct nilfs_bmap *btree,
 				   __u64 key, __u64 ptr,
-1
fs/nilfs2/dir.c
···
 			dir->i_ino, (folio->index << PAGE_SHIFT) + offs,
 			(unsigned long)le64_to_cpu(p->inode));
 fail:
-	folio_set_error(folio);
 	return false;
 }
 
+1
fs/nilfs2/gcinode.c
···
 
 /**
  * nilfs_remove_all_gcinodes() - remove all unprocessed gc inodes
+ * @nilfs: NILFS filesystem instance
  */
 void nilfs_remove_all_gcinodes(struct the_nilfs *nilfs)
 {
+2 -2
fs/nilfs2/nilfs.h
···
 
 extern struct nilfs_super_block *
 nilfs_read_super_block(struct super_block *, u64, int, struct buffer_head **);
-extern int nilfs_store_magic_and_option(struct super_block *,
-					struct nilfs_super_block *, char *);
+extern int nilfs_store_magic(struct super_block *sb,
+			     struct nilfs_super_block *sbp);
 extern int nilfs_check_feature_compatibility(struct super_block *,
 					     struct nilfs_super_block *);
 extern void nilfs_set_log_cursor(struct nilfs_super_block *,
+4 -1
fs/nilfs2/recovery.c
···
  *   checkpoint
  * @nilfs: nilfs object
  * @sb: super block instance
+ * @root: NILFS root instance
  * @ri: pointer to a nilfs_recovery_info
  */
 static int nilfs_do_roll_forward(struct the_nilfs *nilfs,
···
 		return;
 
 	bh = __getblk(nilfs->ns_bdev, ri->ri_lsegs_start, nilfs->ns_blocksize);
-	BUG_ON(!bh);
+	if (WARN_ON(!bh))
+		return;	/* should never happen */
+
 	memset(bh->b_data, 0, bh->b_size);
 	set_buffer_dirty(bh);
 	err = sync_dirty_buffer(bh);
+1 -7
fs/nilfs2/segment.c
···
 		return;
 	}
 
-	if (!err) {
-		if (!nilfs_folio_buffers_clean(folio))
-			filemap_dirty_folio(folio->mapping, folio);
-		folio_clear_error(folio);
-	} else {
+	if (err || !nilfs_folio_buffers_clean(folio))
 		filemap_dirty_folio(folio->mapping, folio);
-		folio_set_error(folio);
-	}
 
 	folio_end_writeback(folio);
 }
+168 -218
fs/nilfs2/super.c
···
 #include <linux/slab.h>
 #include <linux/init.h>
 #include <linux/blkdev.h>
-#include <linux/parser.h>
 #include <linux/crc32.h>
 #include <linux/vfs.h>
 #include <linux/writeback.h>
 #include <linux/seq_file.h>
 #include <linux/mount.h>
 #include <linux/fs_context.h>
+#include <linux/fs_parser.h>
 #include "nilfs.h"
 #include "export.h"
 #include "mdt.h"
···
 struct kmem_cache *nilfs_btree_path_cache;
 
 static int nilfs_setup_super(struct super_block *sb, int is_mount);
-static int nilfs_remount(struct super_block *sb, int *flags, char *data);
 
 void __nilfs_msg(struct super_block *sb, const char *fmt, ...)
 {
···
 	.freeze_fs	= nilfs_freeze,
 	.unfreeze_fs	= nilfs_unfreeze,
 	.statfs		= nilfs_statfs,
-	.remount_fs	= nilfs_remount,
 	.show_options	= nilfs_show_options
 };
 
 enum {
-	Opt_err_cont, Opt_err_panic, Opt_err_ro,
-	Opt_barrier, Opt_nobarrier, Opt_snapshot, Opt_order, Opt_norecovery,
-	Opt_discard, Opt_nodiscard, Opt_err,
+	Opt_err, Opt_barrier, Opt_snapshot, Opt_order, Opt_norecovery,
+	Opt_discard,
 };
 
-static match_table_t tokens = {
-	{Opt_err_cont, "errors=continue"},
-	{Opt_err_panic, "errors=panic"},
-	{Opt_err_ro, "errors=remount-ro"},
-	{Opt_barrier, "barrier"},
-	{Opt_nobarrier, "nobarrier"},
-	{Opt_snapshot, "cp=%u"},
-	{Opt_order, "order=%s"},
-	{Opt_norecovery, "norecovery"},
-	{Opt_discard, "discard"},
-	{Opt_nodiscard, "nodiscard"},
-	{Opt_err, NULL}
+static const struct constant_table nilfs_param_err[] = {
+	{"continue",	NILFS_MOUNT_ERRORS_CONT},
+	{"panic",	NILFS_MOUNT_ERRORS_PANIC},
+	{"remount-ro",	NILFS_MOUNT_ERRORS_RO},
+	{}
 };
 
-static int parse_options(char *options, struct super_block *sb, int is_remount)
+static const struct fs_parameter_spec nilfs_param_spec[] = {
+	fsparam_enum	("errors", Opt_err, nilfs_param_err),
+	fsparam_flag_no	("barrier", Opt_barrier),
+	fsparam_u64	("cp", Opt_snapshot),
+	fsparam_string	("order", Opt_order),
+	fsparam_flag	("norecovery", Opt_norecovery),
+	fsparam_flag_no	("discard", Opt_discard),
+	{}
+};
+
+struct nilfs_fs_context {
+	unsigned long ns_mount_opt;
+	__u64 cno;
+};
+
+static int nilfs_parse_param(struct fs_context *fc, struct fs_parameter *param)
 {
-	struct the_nilfs *nilfs = sb->s_fs_info;
-	char *p;
-	substring_t args[MAX_OPT_ARGS];
+	struct nilfs_fs_context *nilfs = fc->fs_private;
+	int is_remount = fc->purpose == FS_CONTEXT_FOR_RECONFIGURE;
+	struct fs_parse_result result;
+	int opt;
 
-	if (!options)
-		return 1;
+	opt = fs_parse(fc, nilfs_param_spec, param, &result);
+	if (opt < 0)
+		return opt;
 
-	while ((p = strsep(&options, ",")) != NULL) {
-		int token;
-
-		if (!*p)
-			continue;
-
-		token = match_token(p, tokens, args);
-		switch (token) {
-		case Opt_barrier:
-			nilfs_set_opt(nilfs, BARRIER);
-			break;
-		case Opt_nobarrier:
+	switch (opt) {
+	case Opt_barrier:
+		if (result.negated)
 			nilfs_clear_opt(nilfs, BARRIER);
-			break;
-		case Opt_order:
-			if (strcmp(args[0].from, "relaxed") == 0)
-				/* Ordered data semantics */
-				nilfs_clear_opt(nilfs, STRICT_ORDER);
-			else if (strcmp(args[0].from, "strict") == 0)
-				/* Strict in-order semantics */
-				nilfs_set_opt(nilfs, STRICT_ORDER);
-			else
-				return 0;
-			break;
-		case Opt_err_panic:
-			nilfs_write_opt(nilfs, ERROR_MODE, ERRORS_PANIC);
-			break;
-		case Opt_err_ro:
-			nilfs_write_opt(nilfs, ERROR_MODE, ERRORS_RO);
-			break;
-		case Opt_err_cont:
-			nilfs_write_opt(nilfs, ERROR_MODE, ERRORS_CONT);
-			break;
-		case Opt_snapshot:
-			if (is_remount) {
-				nilfs_err(sb,
-					  "\"%s\" option is invalid for remount",
-					  p);
-				return 0;
-			}
-			break;
-		case Opt_norecovery:
-			nilfs_set_opt(nilfs, NORECOVERY);
-			break;
-		case Opt_discard:
-			nilfs_set_opt(nilfs, DISCARD);
-			break;
-		case Opt_nodiscard:
-			nilfs_clear_opt(nilfs, DISCARD);
-			break;
-		default:
-			nilfs_err(sb, "unrecognized mount option \"%s\"", p);
-			return 0;
+		else
+			nilfs_set_opt(nilfs, BARRIER);
+		break;
+	case Opt_order:
+		if (strcmp(param->string, "relaxed") == 0)
+			/* Ordered data semantics */
+			nilfs_clear_opt(nilfs, STRICT_ORDER);
+		else if (strcmp(param->string, "strict") == 0)
+			/* Strict in-order semantics */
+			nilfs_set_opt(nilfs, STRICT_ORDER);
+		else
+			return -EINVAL;
+		break;
+	case Opt_err:
+		nilfs->ns_mount_opt &= ~NILFS_MOUNT_ERROR_MODE;
+		nilfs->ns_mount_opt |= result.uint_32;
+		break;
+	case Opt_snapshot:
+		if (is_remount) {
+			struct super_block *sb = fc->root->d_sb;
+
+			nilfs_err(sb,
+				  "\"%s\" option is invalid for remount",
+				  param->key);
+			return -EINVAL;
 		}
+		if (result.uint_64 == 0) {
+			nilfs_err(NULL,
+				  "invalid option \"cp=0\": invalid checkpoint number 0");
+			return -EINVAL;
+		}
+		nilfs->cno = result.uint_64;
+		break;
+	case Opt_norecovery:
+		nilfs_set_opt(nilfs, NORECOVERY);
+		break;
+	case Opt_discard:
+		if (result.negated)
+			nilfs_clear_opt(nilfs, DISCARD);
+		else
+			nilfs_set_opt(nilfs, DISCARD);
+		break;
+	default:
+		return -EINVAL;
 	}
-	return 1;
-}
 
-static inline void
-nilfs_set_default_options(struct super_block *sb,
-			  struct nilfs_super_block *sbp)
-{
-	struct the_nilfs *nilfs = sb->s_fs_info;
-
-	nilfs->ns_mount_opt =
-		NILFS_MOUNT_ERRORS_RO | NILFS_MOUNT_BARRIER;
+	return 0;
 }
 
 static int nilfs_setup_super(struct super_block *sb, int is_mount)
···
 	return (struct nilfs_super_block *)((char *)(*pbh)->b_data + offset);
 }
 
-int nilfs_store_magic_and_option(struct super_block *sb,
-				 struct nilfs_super_block *sbp,
-				 char *data)
+int nilfs_store_magic(struct super_block *sb,
+		      struct nilfs_super_block *sbp)
 {
 	struct the_nilfs *nilfs = sb->s_fs_info;
 
···
 	sb->s_flags |= SB_NOATIME;
 #endif
 
-	nilfs_set_default_options(sb, sbp);
-
 	nilfs->ns_resuid = le16_to_cpu(sbp->s_def_resuid);
 	nilfs->ns_resgid = le16_to_cpu(sbp->s_def_resgid);
 	nilfs->ns_interval = le32_to_cpu(sbp->s_c_interval);
 	nilfs->ns_watermark = le32_to_cpu(sbp->s_c_block_max);
 
-	return !parse_options(data, sb, 0) ? -EINVAL : 0;
+	return 0;
 }
 
 int nilfs_check_feature_compatibility(struct super_block *sb,
···
 /**
  * nilfs_fill_super() - initialize a super block instance
  * @sb: super_block
- * @data: mount options
- * @silent: silent mode flag
+ * @fc: filesystem context
  *
  * This function is called exclusively by nilfs->ns_mount_mutex.
  * So, the recovery process is protected from other simultaneous mounts.
  */
 static int
-nilfs_fill_super(struct super_block *sb, void *data, int silent)
+nilfs_fill_super(struct super_block *sb, struct fs_context *fc)
 {
 	struct the_nilfs *nilfs;
 	struct nilfs_root *fsroot;
+	struct nilfs_fs_context *ctx = fc->fs_private;
 	__u64 cno;
 	int err;
 
···
 	sb->s_fs_info = nilfs;
 
-	err = init_nilfs(nilfs, sb, (char *)data);
+	err = init_nilfs(nilfs, sb);
 	if (err)
 		goto failed_nilfs;
+
+	/* Copy in parsed mount options */
+	nilfs->ns_mount_opt = ctx->ns_mount_opt;
 
 	sb->s_op = &nilfs_sops;
 	sb->s_export_op = &nilfs_export_ops;
···
 	return err;
 }
 
-static int nilfs_remount(struct super_block *sb, int *flags, char *data)
+static int nilfs_reconfigure(struct fs_context *fc)
 {
+	struct nilfs_fs_context *ctx = fc->fs_private;
+	struct super_block *sb = fc->root->d_sb;
 	struct the_nilfs *nilfs = sb->s_fs_info;
-	unsigned long old_sb_flags;
-	unsigned long old_mount_opt;
 	int err;
 
 	sync_filesystem(sb);
-	old_sb_flags = sb->s_flags;
-	old_mount_opt = nilfs->ns_mount_opt;
-
-	if (!parse_options(data, sb, 1)) {
-		err = -EINVAL;
-		goto restore_opts;
-	}
-	sb->s_flags = (sb->s_flags & ~SB_POSIXACL);
 
 	err = -EINVAL;
 
 	if (!nilfs_valid_fs(nilfs)) {
 		nilfs_warn(sb,
 			   "couldn't remount because the filesystem is in an incomplete recovery state");
-		goto restore_opts;
+		goto ignore_opts;
 	}
-
-	if ((bool)(*flags & SB_RDONLY) == sb_rdonly(sb))
+	if ((bool)(fc->sb_flags & SB_RDONLY) == sb_rdonly(sb))
 		goto out;
-	if (*flags & SB_RDONLY) {
+	if (fc->sb_flags & SB_RDONLY) {
 		sb->s_flags |= SB_RDONLY;
 
 		/*
···
 				  "couldn't remount RDWR because of unsupported optional features (%llx)",
 				  (unsigned long long)features);
 			err = -EROFS;
-			goto restore_opts;
+			goto ignore_opts;
 		}
 
 		sb->s_flags &= ~SB_RDONLY;
 
 		root = NILFS_I(d_inode(sb->s_root))->i_root;
 		err = nilfs_attach_log_writer(sb, root);
-		if (err)
-			goto restore_opts;
+		if (err) {
+			sb->s_flags |= SB_RDONLY;
+			goto ignore_opts;
+		}
 
 		down_write(&nilfs->ns_sem);
 		nilfs_setup_super(sb, true);
 		up_write(&nilfs->ns_sem);
 	}
 out:
+	sb->s_flags = (sb->s_flags & ~SB_POSIXACL);
+	/* Copy over parsed remount options */
+	nilfs->ns_mount_opt = ctx->ns_mount_opt;
+
 	return 0;
 
-restore_opts:
-	sb->s_flags = old_sb_flags;
-	nilfs->ns_mount_opt = old_mount_opt;
+ignore_opts:
 	return err;
 }
 
-struct nilfs_super_data {
-	__u64 cno;
-	int flags;
-};
-
-static int nilfs_parse_snapshot_option(const char *option,
-				       const substring_t *arg,
-				       struct nilfs_super_data *sd)
+static int
+nilfs_get_tree(struct fs_context *fc)
 {
-	unsigned long long val;
-	const char *msg = NULL;
-	int err;
-
-	if (!(sd->flags & SB_RDONLY)) {
-		msg = "read-only option is not specified";
-		goto parse_error;
-	}
-
-	err = kstrtoull(arg->from, 0, &val);
-	if (err) {
-		if (err == -ERANGE)
-			msg = "too large checkpoint number";
-		else
-			msg = "malformed argument";
-		goto parse_error;
-	} else if (val == 0) {
-		msg = "invalid checkpoint number 0";
-		goto parse_error;
-	}
-	sd->cno = val;
-	return 0;
-
-parse_error:
-	nilfs_err(NULL, "invalid option \"%s\": %s", option, msg);
-	return 1;
-}
-
-/**
- * nilfs_identify - pre-read mount options needed to identify mount instance
- * @data: mount options
- * @sd: nilfs_super_data
- */
-static int nilfs_identify(char *data, struct nilfs_super_data *sd)
-{
-	char *p, *options = data;
-	substring_t args[MAX_OPT_ARGS];
-	int token;
-	int ret = 0;
-
-	do {
-		p = strsep(&options, ",");
-		if (p != NULL && *p) {
-			token = match_token(p, tokens, args);
-			if (token == Opt_snapshot)
-				ret = nilfs_parse_snapshot_option(p, &args[0],
-								  sd);
-		}
-		if (!options)
-			break;
-		BUG_ON(options == data);
-		*(options - 1) = ',';
-	} while (!ret);
-	return ret;
-}
-
-static int nilfs_set_bdev_super(struct super_block *s, void *data)
-{
-	s->s_dev = *(dev_t *)data;
-	return 0;
-}
-
-static int nilfs_test_bdev_super(struct super_block *s, void *data)
-{
-	return !(s->s_iflags & SB_I_RETIRED) && s->s_dev == *(dev_t *)data;
-}
-
-static struct dentry *
-nilfs_mount(struct file_system_type *fs_type, int flags,
-	    const char *dev_name, void *data)
-{
-	struct nilfs_super_data sd = { .flags = flags };
+	struct nilfs_fs_context *ctx = fc->fs_private;
 	struct super_block *s;
 	dev_t dev;
 	int err;
 
-	if (nilfs_identify(data, &sd))
-		return ERR_PTR(-EINVAL);
+	if (ctx->cno && !(fc->sb_flags & SB_RDONLY)) {
+		nilfs_err(NULL,
+			  "invalid option \"cp=%llu\": read-only option is not specified",
+			  ctx->cno);
+		return -EINVAL;
+	}
 
-	err = lookup_bdev(dev_name, &dev);
+	err = lookup_bdev(fc->source, &dev);
 	if (err)
-		return ERR_PTR(err);
+		return err;
 
-	s = sget(fs_type, nilfs_test_bdev_super, nilfs_set_bdev_super, flags,
-		 &dev);
+	s = sget_dev(fc, dev);
 	if (IS_ERR(s))
-		return ERR_CAST(s);
+		return PTR_ERR(s);
 
 	if (!s->s_root) {
-		err = setup_bdev_super(s, flags, NULL);
+		err = setup_bdev_super(s, fc->sb_flags, fc);
 		if (!err)
-			err = nilfs_fill_super(s, data,
-					       flags & SB_SILENT ? 1 : 0);
+			err = nilfs_fill_super(s, fc);
 		if (err)
 			goto failed_super;
 
 		s->s_flags |= SB_ACTIVE;
-	} else if (!sd.cno) {
+	} else if (!ctx->cno) {
 		if (nilfs_tree_is_busy(s->s_root)) {
-			if ((flags ^ s->s_flags) & SB_RDONLY) {
+			if ((fc->sb_flags ^ s->s_flags) & SB_RDONLY) {
 				nilfs_err(s,
 					  "the device already has a %s mount.",
 					  sb_rdonly(s) ? "read-only" : "read/write");
···
 			}
 		} else {
 			/*
-			 * Try remount to setup mount states if the current
+			 * Try reconfigure to setup mount states if the current
 			 * tree is not mounted and only snapshots use this sb.
+			 *
+			 * Since nilfs_reconfigure() requires fc->root to be
+			 * set, set it first and release it on failure.
 			 */
-			err = nilfs_remount(s, &flags, data);
-			if (err)
+			fc->root = dget(s->s_root);
+			err = nilfs_reconfigure(fc);
+			if (err) {
+				dput(fc->root);
+				fc->root = NULL;	/* prevent double release */
 				goto failed_super;
+			}
+			return 0;
 		}
 	}
 
-	if (sd.cno) {
+	if (ctx->cno) {
 		struct dentry *root_dentry;
 
-		err = nilfs_attach_snapshot(s, sd.cno, &root_dentry);
+		err = nilfs_attach_snapshot(s, ctx->cno, &root_dentry);
 		if (err)
 			goto failed_super;
-		return root_dentry;
+		fc->root = root_dentry;
+		return 0;
 	}
 
-	return dget(s->s_root);
+	fc->root = dget(s->s_root);
+	return 0;
 
 failed_super:
 	deactivate_locked_super(s);
-	return ERR_PTR(err);
+	return err;
+}
+
+static void nilfs_free_fc(struct fs_context *fc)
+{
+	kfree(fc->fs_private);
+}
+
+static const struct fs_context_operations nilfs_context_ops = {
+	.parse_param	= nilfs_parse_param,
+	.get_tree	= nilfs_get_tree,
+	.reconfigure	= nilfs_reconfigure,
+	.free		= nilfs_free_fc,
+};
+
+static int nilfs_init_fs_context(struct fs_context *fc)
+{
+	struct nilfs_fs_context *ctx;
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return -ENOMEM;
+
+	ctx->ns_mount_opt = NILFS_MOUNT_ERRORS_RO | NILFS_MOUNT_BARRIER;
+	fc->fs_private = ctx;
+	fc->ops = &nilfs_context_ops;
+
+	return 0;
 }
 
 struct file_system_type nilfs_fs_type = {
 	.owner		= THIS_MODULE,
 	.name		= "nilfs2",
-	.mount		= nilfs_mount,
 	.kill_sb	= kill_block_super,
 	.fs_flags	= FS_REQUIRES_DEV,
+	.init_fs_context = nilfs_init_fs_context,
+	.parameters	= nilfs_param_spec,
 };
 MODULE_ALIAS_FS("nilfs2");
fs/nilfs2/the_nilfs.c (+20 -5)
···
 	struct nilfs_super_block **sbp = nilfs->ns_sbp;
 	struct buffer_head **sbh = nilfs->ns_sbh;
 	u64 sb2off, devsize = bdev_nr_bytes(nilfs->ns_bdev);
-	int valid[2], swp = 0;
+	int valid[2], swp = 0, older;

 	if (devsize < NILFS_SEG_MIN_BLOCKS * NILFS_MIN_BLOCK_SIZE + 4096) {
 		nilfs_err(sb, "device size too small");
···
 	if (swp)
 		nilfs_swap_super_block(nilfs);

+	/*
+	 * Calculate the array index of the older superblock data.
+	 * If one has been dropped, set index 0 pointing to the remaining one,
+	 * otherwise set index 1 pointing to the old one (including if both
+	 * are the same).
+	 *
+	 *  Divided case             valid[0]  valid[1]  swp  ->  older
+	 *  -------------------------------------------------------------
+	 *  Both SBs are invalid        0         0      N/A    (Error)
+	 *  SB1 is invalid              0         1       1       0
+	 *  SB2 is invalid              1         0       0       0
+	 *  SB2 is newer                1         1       1       0
+	 *  SB2 is older or the same    1         1       0       1
+	 */
+	older = valid[1] ^ swp;
+
 	nilfs->ns_sbwcount = 0;
 	nilfs->ns_sbwtime = le64_to_cpu(sbp[0]->s_wtime);
-	nilfs->ns_prot_seq = le64_to_cpu(sbp[valid[1] & !swp]->s_last_seq);
+	nilfs->ns_prot_seq = le64_to_cpu(sbp[older]->s_last_seq);
 	*sbpp = sbp[0];
 	return 0;
 }
···
  * init_nilfs - initialize a NILFS instance.
  * @nilfs: the_nilfs structure
  * @sb: super block
- * @data: mount options
  *
  * init_nilfs() performs common initialization per block device (e.g.
  * reading the super block, getting disk layout information, initializing
···
  * Return Value: On success, 0 is returned. On error, a negative error
  * code is returned.
  */
-int init_nilfs(struct the_nilfs *nilfs, struct super_block *sb, char *data)
+int init_nilfs(struct the_nilfs *nilfs, struct super_block *sb)
 {
 	struct nilfs_super_block *sbp;
 	int blocksize;
···
 	if (err)
 		goto out;

-	err = nilfs_store_magic_and_option(sb, sbp, data);
+	err = nilfs_store_magic(sb, sbp);
 	if (err)
 		goto failed_sbh;
fs/nilfs2/the_nilfs.h (+1 -5)
···
 #define nilfs_set_opt(nilfs, opt)  \
 	((nilfs)->ns_mount_opt |= NILFS_MOUNT_##opt)
 #define nilfs_test_opt(nilfs, opt) ((nilfs)->ns_mount_opt & NILFS_MOUNT_##opt)
-#define nilfs_write_opt(nilfs, mask, opt)				\
-	((nilfs)->ns_mount_opt =					\
-		(((nilfs)->ns_mount_opt & ~NILFS_MOUNT_##mask) |	\
-		 NILFS_MOUNT_##opt))					\

 /**
  * struct nilfs_root - nilfs root object
···
 void nilfs_set_last_segment(struct the_nilfs *, sector_t, u64, __u64);
 struct the_nilfs *alloc_nilfs(struct super_block *sb);
 void destroy_nilfs(struct the_nilfs *nilfs);
-int init_nilfs(struct the_nilfs *nilfs, struct super_block *sb, char *data);
+int init_nilfs(struct the_nilfs *nilfs, struct super_block *sb);
 int load_nilfs(struct the_nilfs *nilfs, struct super_block *sb);
 unsigned long nilfs_nrsvsegs(struct the_nilfs *nilfs, unsigned long nsegs);
 void nilfs_set_nsegments(struct the_nilfs *nilfs, unsigned long nsegs);
fs/ocfs2/aops.c (-2)
···
 	ocfs2_inode_unlock(inode, 1);
 	brelse(di_bh);
 out:
-	if (ret < 0)
-		ret = -EIO;
 	return ret;
 }
fs/ocfs2/dlm/dlmdomain.c (+5 -7)
···
 {
 	struct dlm_query_nodeinfo *qn;
 	struct dlm_ctxt *dlm = NULL;
-	int locked = 0, status = -EINVAL;
+	int status = -EINVAL;

 	qn = (struct dlm_query_nodeinfo *) msg->buf;
···
 	}

 	spin_lock(&dlm->spinlock);
-	locked = 1;
 	if (dlm->joining_node != qn->qn_nodenum) {
 		mlog(ML_ERROR, "Node %d queried nodes on domain %s but "
 		     "joining node is %d\n", qn->qn_nodenum, qn->qn_domain,
 		     dlm->joining_node);
-		goto bail;
+		goto unlock;
 	}

 	/* Support for node query was added in 1.1 */
···
 		     "but active dlm protocol is %d.%d\n", qn->qn_nodenum,
 		     qn->qn_domain, dlm->dlm_locking_proto.pv_major,
 		     dlm->dlm_locking_proto.pv_minor);
-		goto bail;
+		goto unlock;
 	}

 	status = dlm_match_nodes(dlm, qn);

+unlock:
+	spin_unlock(&dlm->spinlock);
 bail:
-	if (locked)
-		spin_unlock(&dlm->spinlock);
 	spin_unlock(&dlm_domain_lock);

 	return status;
···
 {
 	int status, node, live;

-	status = 0;
 	node = -1;
 	while ((node = find_next_bit(node_map, O2NM_MAX_NODES,
 				     node + 1)) < O2NM_MAX_NODES) {
fs/ocfs2/export.c (+6 -6)
···
 	if (fh_len < 3 || fh_type > 2)
 		return NULL;

-	handle.ih_blkno = (u64)le32_to_cpu(fid->raw[0]) << 32;
-	handle.ih_blkno |= (u64)le32_to_cpu(fid->raw[1]);
-	handle.ih_generation = le32_to_cpu(fid->raw[2]);
+	handle.ih_blkno = (u64)le32_to_cpu((__force __le32)fid->raw[0]) << 32;
+	handle.ih_blkno |= (u64)le32_to_cpu((__force __le32)fid->raw[1]);
+	handle.ih_generation = le32_to_cpu((__force __le32)fid->raw[2]);
 	return ocfs2_get_dentry(sb, &handle);
 }
···
 	if (fh_type != 2 || fh_len < 6)
 		return NULL;

-	parent.ih_blkno = (u64)le32_to_cpu(fid->raw[3]) << 32;
-	parent.ih_blkno |= (u64)le32_to_cpu(fid->raw[4]);
-	parent.ih_generation = le32_to_cpu(fid->raw[5]);
+	parent.ih_blkno = (u64)le32_to_cpu((__force __le32)fid->raw[3]) << 32;
+	parent.ih_blkno |= (u64)le32_to_cpu((__force __le32)fid->raw[4]);
+	parent.ih_generation = le32_to_cpu((__force __le32)fid->raw[5]);
 	return ocfs2_get_dentry(sb, &parent);
 }
fs/ocfs2/file.c (+2)
···

 	inode_lock(inode);

+	/* Wait all existing dio workers, newcomers will block on i_rwsem */
+	inode_dio_wait(inode);
 	/*
 	 * This prevents concurrent writes on other nodes
 	 */
fs/ocfs2/inode.c (+2)
···
 }

 static void ocfs2_inode_cache_lock(struct ocfs2_caching_info *ci)
+__acquires(&oi->ip_lock)
 {
 	struct ocfs2_inode_info *oi = cache_info_to_inode(ci);

···
 }

 static void ocfs2_inode_cache_unlock(struct ocfs2_caching_info *ci)
+__releases(&oi->ip_lock)
 {
 	struct ocfs2_inode_info *oi = cache_info_to_inode(ci);

fs/ocfs2/ioctl.c (+1)
···

 	ocfs2_inode->ip_attr = flags;
 	ocfs2_set_inode_flags(inode);
+	inode_set_ctime_current(inode);

 	status = ocfs2_mark_inode_dirty(handle, inode, bh);
 	if (status < 0)
fs/ocfs2/localalloc.c (+14 -20)
···
 void ocfs2_local_alloc_seen_free_bits(struct ocfs2_super *osb,
 				      unsigned int num_clusters)
 {
-	spin_lock(&osb->osb_lock);
-	if (osb->local_alloc_state == OCFS2_LA_DISABLED ||
-	    osb->local_alloc_state == OCFS2_LA_THROTTLED)
-		if (num_clusters >= osb->local_alloc_default_bits) {
+	if (num_clusters >= osb->local_alloc_default_bits) {
+		spin_lock(&osb->osb_lock);
+		if (osb->local_alloc_state == OCFS2_LA_DISABLED ||
+		    osb->local_alloc_state == OCFS2_LA_THROTTLED) {
 			cancel_delayed_work(&osb->la_enable_wq);
 			osb->local_alloc_state = OCFS2_LA_ENABLED;
 		}
-	spin_unlock(&osb->osb_lock);
+		spin_unlock(&osb->osb_lock);
+	}
 }

 void ocfs2_la_enable_worker(struct work_struct *work)
···
 		     "found = %u, set = %u, taken = %u, off = %u\n",
 		     num_used, le32_to_cpu(alloc->id1.bitmap1.i_used),
 		     le32_to_cpu(alloc->id1.bitmap1.i_total),
-		     OCFS2_LOCAL_ALLOC(alloc)->la_bm_off);
+		     le32_to_cpu(OCFS2_LOCAL_ALLOC(alloc)->la_bm_off));

 		status = -EINVAL;
 		goto bail;
···

 	numfound = bitoff = startoff = 0;
 	left = le32_to_cpu(alloc->id1.bitmap1.i_total);
-	while ((bitoff = ocfs2_find_next_zero_bit(bitmap, left, startoff)) != -1) {
-		if (bitoff == left) {
-			/* mlog(0, "bitoff (%d) == left", bitoff); */
-			break;
-		}
-		/* mlog(0, "Found a zero: bitoff = %d, startoff = %d, "
-		   "numfound = %d\n", bitoff, startoff, numfound);*/
-
+	while ((bitoff = ocfs2_find_next_zero_bit(bitmap, left, startoff)) <
+	       left) {
 		/* Ok, we found a zero bit... is it contig. or do we
 		 * start over?*/
 		if (bitoff == startoff) {
···
 	start = count = 0;
 	left = le32_to_cpu(alloc->id1.bitmap1.i_total);

-	while ((bit_off = ocfs2_find_next_zero_bit(bitmap, left, start))
-	       != -1) {
-		if ((bit_off < left) && (bit_off == start)) {
+	while ((bit_off = ocfs2_find_next_zero_bit(bitmap, left, start)) <
+	       left) {
+		if (bit_off == start) {
 			count++;
 			start++;
 			continue;
···
 				goto bail;
 			}
 		}
-		if (bit_off >= left)
-			break;
+
 		count = 1;
 		start = bit_off + 1;
 	}
···
 				     OCFS2_LOCAL_ALLOC(alloc)->la_bitmap);

 	trace_ocfs2_local_alloc_new_window_result(
-			OCFS2_LOCAL_ALLOC(alloc)->la_bm_off,
+			le32_to_cpu(OCFS2_LOCAL_ALLOC(alloc)->la_bm_off),
 			le32_to_cpu(alloc->id1.bitmap1.i_total));

 bail:
fs/ocfs2/move_extents.c (+1 -1)
···
 	}

 	ret = ocfs2_block_group_set_bits(handle, gb_inode, gd, gd_bh,
-					 goal_bit, len);
+					 goal_bit, len, 0, 0);
 	if (ret) {
 		ocfs2_rollback_alloc_dinode_counts(gb_inode, gb_bh, len,
 					       le16_to_cpu(gd->bg_chain));
fs/ocfs2/namei.c (+3 -1)
···
 	fe->i_last_eb_blk = 0;
 	strcpy(fe->i_signature, OCFS2_INODE_SIGNATURE);
 	fe->i_flags |= cpu_to_le32(OCFS2_VALID_FL);
-	ktime_get_real_ts64(&ts);
+	ktime_get_coarse_real_ts64(&ts);
 	fe->i_atime = fe->i_ctime = fe->i_mtime =
 		cpu_to_le64(ts.tv_sec);
 	fe->i_mtime_nsec = fe->i_ctime_nsec = fe->i_atime_nsec =
···
 	ocfs2_set_links_count(fe, inode->i_nlink);
 	fe->i_ctime = cpu_to_le64(inode_get_ctime_sec(inode));
 	fe->i_ctime_nsec = cpu_to_le32(inode_get_ctime_nsec(inode));
+	ocfs2_update_inode_fsync_trans(handle, inode, 0);
 	ocfs2_journal_dirty(handle, fe_bh);

 	err = ocfs2_add_entry(handle, dentry, inode,
···
 	drop_nlink(inode);
 	drop_nlink(inode);
 	ocfs2_set_links_count(fe, inode->i_nlink);
+	ocfs2_update_inode_fsync_trans(handle, inode, 0);
 	ocfs2_journal_dirty(handle, fe_bh);

 	inode_set_mtime_to_ts(dir, inode_set_ctime_current(dir));
fs/ocfs2/ocfs2_fs.h (+2 -1)
···
 	__le16 bg_free_bits_count;	/* Free bits count */
 	__le16 bg_chain;		/* What chain I am in. */
/*10*/	__le32 bg_generation;
-	__le32 bg_reserved1;
+	__le16 bg_contig_free_bits;	/* max contig free bits length */
+	__le16 bg_reserved1;
 	__le64 bg_next_group;		/* Next group in my list, in
 					   blocks */
/*20*/	__le64 bg_parent_dinode;	/* dinode which owns me, in
fs/ocfs2/refcounttree.c (+1 -1)
···
 	rb->rf_records.rl_count =
 			cpu_to_le16(ocfs2_refcount_recs_per_rb(osb->sb));
 	spin_lock(&osb->osb_lock);
-	rb->rf_generation = osb->s_next_generation++;
+	rb->rf_generation = cpu_to_le32(osb->s_next_generation++);
 	spin_unlock(&osb->osb_lock);

 	ocfs2_journal_dirty(handle, new_bh);
fs/ocfs2/reservations.c (+1 -1)
···

 	start = search_start;
 	while ((offset = ocfs2_find_next_zero_bit(bitmap, resmap->m_bitmap_len,
-						  start)) != -1) {
+						  start)) < resmap->m_bitmap_len) {
 		/* Search reached end of the region */
 		if (offset >= (search_start + search_len))
 			break;
fs/ocfs2/resize.c (+8)
···
 	u16 cl_bpc = le16_to_cpu(cl->cl_bpc);
 	u16 cl_cpg = le16_to_cpu(cl->cl_cpg);
 	u16 old_bg_clusters;
+	u16 contig_bits;
+	__le16 old_bg_contig_free_bits;

 	trace_ocfs2_update_last_group_and_inode(new_clusters,
 						first_new_cluster);
···
 					  cl_cpg, old_bg_clusters, 1);
 		le16_add_cpu(&group->bg_free_bits_count, -1 * backups);
 	}
+
+	contig_bits = ocfs2_find_max_contig_free_bits(group->bg_bitmap,
+					le16_to_cpu(group->bg_bits), 0);
+	old_bg_contig_free_bits = group->bg_contig_free_bits;
+	group->bg_contig_free_bits = cpu_to_le16(contig_bits);

 	ocfs2_journal_dirty(handle, group_bh);
···
 		le16_add_cpu(&group->bg_free_bits_count, backups);
 		le16_add_cpu(&group->bg_bits, -1 * num_bits);
 		le16_add_cpu(&group->bg_free_bits_count, -1 * num_bits);
+		group->bg_contig_free_bits = old_bg_contig_free_bits;
 	}
 out:
 	if (ret)
fs/ocfs2/suballoc.c (+97 -20)
···
 	u64 sr_blkno;			/* The first allocated block */
 	unsigned int sr_bit_offset;	/* The bit in the bg */
 	unsigned int sr_bits;		/* How many bits we claimed */
+	unsigned int sr_max_contig_bits; /* The length for contiguous
+					  * free bits, only available
+					  * for cluster group
+					  */
 };

 static u64 ocfs2_group_from_res(struct ocfs2_suballoc_result *res)
···
 	return ret;
 }

+u16 ocfs2_find_max_contig_free_bits(void *bitmap,
+			 u16 total_bits, u16 start)
+{
+	u16 offset, free_bits;
+	u16 contig_bits = 0;
+
+	while (start < total_bits) {
+		offset = ocfs2_find_next_zero_bit(bitmap, total_bits, start);
+		if (offset == total_bits)
+			break;
+
+		start = ocfs2_find_next_bit(bitmap, total_bits, offset);
+		free_bits = start - offset;
+		if (contig_bits < free_bits)
+			contig_bits = free_bits;
+	}
+
+	return contig_bits;
+}
+
 static int ocfs2_block_group_find_clear_bits(struct ocfs2_super *osb,
 					     struct buffer_head *bg_bh,
 					     unsigned int bits_wanted,
···
 {
 	void *bitmap;
 	u16 best_offset, best_size;
+	u16 prev_best_size = 0;
 	int offset, start, found, status = 0;
 	struct ocfs2_group_desc *bg = (struct ocfs2_group_desc *) bg_bh->b_data;
···
 	found = start = best_offset = best_size = 0;
 	bitmap = bg->bg_bitmap;

-	while((offset = ocfs2_find_next_zero_bit(bitmap, total_bits, start)) != -1) {
-		if (offset == total_bits)
-			break;
-
+	while ((offset = ocfs2_find_next_zero_bit(bitmap, total_bits, start)) <
+	       total_bits) {
 		if (!ocfs2_test_bg_bit_allocatable(bg_bh, offset)) {
 			/* We found a zero, but we can't use it as it
 			 * hasn't been put to disk yet!
 			 */
···
 			/* got a zero after some ones */
 			found = 1;
 			start = offset + 1;
+			prev_best_size = best_size;
 		}
 		if (found > best_size) {
 			best_size = found;
···
 		}
 	}

+	/* best_size will be allocated, we save prev_best_size */
+	res->sr_max_contig_bits = prev_best_size;
 	if (best_size) {
 		res->sr_bit_offset = best_offset;
 		res->sr_bits = best_size;
···
 					struct ocfs2_group_desc *bg,
 					struct buffer_head *group_bh,
 					unsigned int bit_off,
-					unsigned int num_bits)
+					unsigned int num_bits,
+					unsigned int max_contig_bits,
+					int fastpath)
 {
 	int status;
 	void *bitmap = bg->bg_bitmap;
 	int journal_type = OCFS2_JOURNAL_ACCESS_WRITE;
+	unsigned int start = bit_off + num_bits;
+	u16 contig_bits;
+	struct ocfs2_super *osb = OCFS2_SB(alloc_inode->i_sb);

 	/* All callers get the descriptor via
 	 * ocfs2_read_group_descriptor().  Any corruption is a code bug. */
···
 	}
 	while(num_bits--)
 		ocfs2_set_bit(bit_off++, bitmap);
+
+	/*
+	 * this is optimize path, caller set old contig value
+	 * in max_contig_bits to bypass finding action.
+	 */
+	if (fastpath) {
+		bg->bg_contig_free_bits = cpu_to_le16(max_contig_bits);
+	} else if (ocfs2_is_cluster_bitmap(alloc_inode)) {
+		/*
+		 * Usually, the block group bitmap allocates only 1 bit
+		 * at a time, while the cluster group allocates n bits
+		 * each time. Therefore, we only save the contig bits for
+		 * the cluster group.
+		 */
+		contig_bits = ocfs2_find_max_contig_free_bits(bitmap,
+				le16_to_cpu(bg->bg_bits), start);
+		if (contig_bits > max_contig_bits)
+			max_contig_bits = contig_bits;
+		bg->bg_contig_free_bits = cpu_to_le16(max_contig_bits);
+		ocfs2_local_alloc_seen_free_bits(osb, max_contig_bits);
+	} else {
+		bg->bg_contig_free_bits = 0;
+	}

 	ocfs2_journal_dirty(handle, group_bh);
···

 	BUG_ON(!ocfs2_is_cluster_bitmap(inode));

-	if (gd->bg_free_bits_count) {
+	if (le16_to_cpu(gd->bg_contig_free_bits) &&
+	    le16_to_cpu(gd->bg_contig_free_bits) < bits_wanted)
+		return -ENOSPC;
+
+	/* ->bg_contig_free_bits may un-initialized, so compare again */
+	if (le16_to_cpu(gd->bg_free_bits_count) >= bits_wanted) {
 		max_bits = le16_to_cpu(gd->bg_bits);

 		/* Tail groups in cluster bitmaps which aren't cpg
···
 	 * of bits. */
 	if (min_bits <= res->sr_bits)
 		search = 0; /* success */
-	else if (res->sr_bits) {
-		/*
-		 * Don't show bits which we'll be returning
-		 * for allocation to the local alloc bitmap.
-		 */
-		ocfs2_local_alloc_seen_free_bits(osb, res->sr_bits);
-	}
 	}

 	return search;
···
 	BUG_ON(min_bits != 1);
 	BUG_ON(ocfs2_is_cluster_bitmap(inode));

-	if (bg->bg_free_bits_count) {
+	if (le16_to_cpu(bg->bg_free_bits_count) >= bits_wanted) {
 		ret = ocfs2_block_group_find_clear_bits(OCFS2_SB(inode->i_sb),
 							group_bh, bits_wanted,
 							le16_to_cpu(bg->bg_bits),
···
 	}

 	ret = ocfs2_block_group_set_bits(handle, alloc_inode, gd, group_bh,
-					 res->sr_bit_offset, res->sr_bits);
+					 res->sr_bit_offset, res->sr_bits,
+					 res->sr_max_contig_bits, 0);
 	if (ret < 0) {
 		ocfs2_rollback_alloc_dinode_counts(alloc_inode, ac->ac_bh,
 					       res->sr_bits,
···
 						  bg,
 						  group_bh,
 						  res->sr_bit_offset,
-						  res->sr_bits);
+						  res->sr_bits,
+						  res->sr_max_contig_bits,
+						  0);
 	if (status < 0) {
 		ocfs2_rollback_alloc_dinode_counts(alloc_inode,
 					ac->ac_bh, res->sr_bits, chain);
···
 	for (i = 0; i < le16_to_cpu(cl->cl_next_free_rec); i ++) {
 		if (i == victim)
 			continue;
-		if (!cl->cl_recs[i].c_free)
+		if (le32_to_cpu(cl->cl_recs[i].c_free) < bits_wanted)
 			continue;

 		ac->ac_chain = i;
···
 						bg,
 						bg_bh,
 						res->sr_bit_offset,
-						res->sr_bits);
+						res->sr_bits,
+						res->sr_max_contig_bits,
+						0);
 	if (ret < 0) {
 		ocfs2_rollback_alloc_dinode_counts(ac->ac_inode,
 					ac->ac_bh, res->sr_bits, chain);
···
 					 struct buffer_head *group_bh,
 					 unsigned int bit_off,
 					 unsigned int num_bits,
+					 unsigned int max_contig_bits,
 					 void (*undo_fn)(unsigned int bit,
 						 unsigned long *bmap))
 {
 	int status;
 	unsigned int tmp;
+	u16 contig_bits;
 	struct ocfs2_group_desc *undo_bg = NULL;
 	struct journal_head *jh;
···
 			     num_bits);
 	}

+	/*
+	 * TODO: even 'num_bits == 1' (the worst case, release 1 cluster),
+	 * we still need to rescan whole bitmap.
+	 */
+	if (ocfs2_is_cluster_bitmap(alloc_inode)) {
+		contig_bits = ocfs2_find_max_contig_free_bits(bg->bg_bitmap,
+					le16_to_cpu(bg->bg_bits), 0);
+		if (contig_bits > max_contig_bits)
+			max_contig_bits = contig_bits;
+		bg->bg_contig_free_bits = cpu_to_le16(max_contig_bits);
+	} else {
+		bg->bg_contig_free_bits = 0;
+	}
+
 	if (undo_fn)
 		spin_unlock(&jh->b_state_lock);
···
 	struct ocfs2_chain_list *cl = &fe->id2.i_chain;
 	struct buffer_head *group_bh = NULL;
 	struct ocfs2_group_desc *group;
+	__le16 old_bg_contig_free_bits = 0;

 	/* The alloc_bh comes from ocfs2_free_dinode() or
 	 * ocfs2_free_clusters().  The callers have all locked the
···

 	BUG_ON((count + start_bit) > le16_to_cpu(group->bg_bits));

+	if (ocfs2_is_cluster_bitmap(alloc_inode))
+		old_bg_contig_free_bits = group->bg_contig_free_bits;
 	status = ocfs2_block_group_clear_bits(handle, alloc_inode,
 					      group, group_bh,
-					      start_bit, count, undo_fn);
+					      start_bit, count, 0, undo_fn);
 	if (status < 0) {
 		mlog_errno(status);
 		goto bail;
···
 	if (status < 0) {
 		mlog_errno(status);
 		ocfs2_block_group_set_bits(handle, alloc_inode, group, group_bh,
-				start_bit, count);
+				start_bit, count,
+				le16_to_cpu(old_bg_contig_free_bits), 1);
 		goto bail;
 	}

fs/ocfs2/suballoc.h (+5 -1)
···
 			 struct buffer_head *di_bh,
 			 u32 num_bits,
 			 u16 chain);
+u16 ocfs2_find_max_contig_free_bits(void *bitmap,
+			 u16 total_bits, u16 start);
 int ocfs2_block_group_set_bits(handle_t *handle,
 			 struct inode *alloc_inode,
 			 struct ocfs2_group_desc *bg,
 			 struct buffer_head *group_bh,
 			 unsigned int bit_off,
-			 unsigned int num_bits);
+			 unsigned int num_bits,
+			 unsigned int max_contig_bits,
+			 int fastpath);

 int ocfs2_claim_metadata(handle_t *handle,
 			 struct ocfs2_alloc_context *ac,
fs/proc/vmcore.c (+4 -3)
···
 		/* leave now if filled buffer already */
 		if (!iov_iter_count(iter))
 			return acc;
+
+		cond_resched();
 	}

 	list_for_each_entry(m, &vmcore_list, list) {
···
 	vdd_hdr->n_descsz = size + sizeof(vdd_hdr->dump_name);
 	vdd_hdr->n_type = NT_VMCOREDD;

-	strncpy((char *)vdd_hdr->name, VMCOREDD_NOTE_NAME,
-		sizeof(vdd_hdr->name));
-	memcpy(vdd_hdr->dump_name, data->dump_name, sizeof(vdd_hdr->dump_name));
+	strscpy_pad(vdd_hdr->name, VMCOREDD_NOTE_NAME);
+	strscpy_pad(vdd_hdr->dump_name, data->dump_name);
 }

 /**
fs/squashfs/file.c (+1 -5)
···
 		flush_dcache_page(page);
 		if (copied == avail)
 			SetPageUptodate(page);
-		else
-			SetPageError(page);
 	}

 	/* Copy data into page cache  */
···

 	res = read_blocklist(inode, index, &block);
 	if (res < 0)
-		goto error_out;
+		goto out;

 	if (res == 0)
 		res = squashfs_readpage_sparse(page, expected);
···
 	if (!res)
 		return 0;

-error_out:
-	SetPageError(page);
 out:
 	pageaddr = kmap_atomic(page);
 	memset(pageaddr, 0, PAGE_SIZE);
fs/squashfs/file_direct.c (+1 -2)
···
 	return 0;

mark_errored:
-	/* Decompression failed, mark pages as errored.  Target_page is
+	/* Decompression failed.  Target_page is
 	 * dealt with by the caller
 	 */
 	for (i = 0; i < pages; i++) {
 		if (page[i] == NULL || page[i] == target_page)
 			continue;
 		flush_dcache_page(page[i]);
-		SetPageError(page[i]);
 		unlock_page(page[i]);
 		put_page(page[i]);
 	}
fs/squashfs/namei.c (+4 -10)
···
  */
 static int get_dir_index_using_name(struct super_block *sb,
 				u64 *next_block, int *next_offset, u64 index_start,
-				int index_offset, int i_count, const char *name,
-				int len)
+				int index_offset, int i_count, const char *name)
 {
 	struct squashfs_sb_info *msblk = sb->s_fs_info;
 	int i, length = 0, err;
 	unsigned int size;
 	struct squashfs_dir_index *index;
-	char *str;

 	TRACE("Entered get_dir_index_using_name, i_count %d\n", i_count);

-	index = kmalloc(sizeof(*index) + SQUASHFS_NAME_LEN * 2 + 2, GFP_KERNEL);
+	index = kmalloc(sizeof(*index) + SQUASHFS_NAME_LEN + 1, GFP_KERNEL);
 	if (index == NULL) {
 		ERROR("Failed to allocate squashfs_dir_index\n");
 		goto out;
 	}
-
-	str = &index->name[SQUASHFS_NAME_LEN + 1];
-	strncpy(str, name, len);
-	str[len] = '\0';

 	for (i = 0; i < i_count; i++) {
 		err = squashfs_read_metadata(sb, index, &index_start,
···
 		index->name[size] = '\0';

-		if (strcmp(index->name, str) > 0)
+		if (strcmp(index->name, name) > 0)
 			break;

 		length = le32_to_cpu(index->index);
···
 	length = get_dir_index_using_name(dir->i_sb, &block, &offset,
 				squashfs_i(dir)->dir_idx_start,
 				squashfs_i(dir)->dir_idx_offset,
-				squashfs_i(dir)->dir_idx_cnt, name, len);
+				squashfs_i(dir)->dir_idx_cnt, name);

 	while (length < i_size_read(dir)) {
 		/*
fs/squashfs/symlink.c (+16 -19)
···

 static int squashfs_symlink_read_folio(struct file *file, struct folio *folio)
 {
-	struct page *page = &folio->page;
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	struct super_block *sb = inode->i_sb;
 	struct squashfs_sb_info *msblk = sb->s_fs_info;
-	int index = page->index << PAGE_SHIFT;
+	int index = folio_pos(folio);
 	u64 block = squashfs_i(inode)->start;
 	int offset = squashfs_i(inode)->offset;
 	int length = min_t(int, i_size_read(inode) - index, PAGE_SIZE);
-	int bytes, copied;
+	int bytes, copied, error;
 	void *pageaddr;
 	struct squashfs_cache_entry *entry;

 	TRACE("Entered squashfs_symlink_readpage, page index %ld, start block "
-			"%llx, offset %x\n", page->index, block, offset);
+			"%llx, offset %x\n", folio->index, block, offset);

 	/*
 	 * Skip index bytes into symlink metadata.
···
 			ERROR("Unable to read symlink [%llx:%x]\n",
 				squashfs_i(inode)->start,
 				squashfs_i(inode)->offset);
-			goto error_out;
+			error = bytes;
+			goto out;
 		}
 	}

 	/*
 	 * Read length bytes from symlink metadata.  Squashfs_read_metadata
 	 * is not used here because it can sleep and we want to use
-	 * kmap_atomic to map the page.  Instead call the underlying
+	 * kmap_local to map the folio.  Instead call the underlying
 	 * squashfs_cache_get routine.  As length bytes may overlap metadata
 	 * blocks, we may need to call squashfs_cache_get multiple times.
 	 */
···
 				squashfs_i(inode)->start,
 				squashfs_i(inode)->offset);
 			squashfs_cache_put(entry);
-			goto error_out;
+			error = entry->error;
+			goto out;
 		}

-		pageaddr = kmap_atomic(page);
+		pageaddr = kmap_local_folio(folio, 0);
 		copied = squashfs_copy_data(pageaddr + bytes, entry, offset,
 								length - bytes);
 		if (copied == length - bytes)
 			memset(pageaddr + length, 0, PAGE_SIZE - length);
 		else
 			block = entry->next_index;
-		kunmap_atomic(pageaddr);
+		kunmap_local(pageaddr);
 		squashfs_cache_put(entry);
 	}

-	flush_dcache_page(page);
-	SetPageUptodate(page);
-	unlock_page(page);
-	return 0;
-
-error_out:
-	SetPageError(page);
-	unlock_page(page);
-	return 0;
+	flush_dcache_folio(folio);
+	error = 0;
+out:
+	folio_end_read(folio, error == 0);
+	return error;
 }

include/linux/cpumask.h (-5)
···
 void init_cpu_possible(const struct cpumask *src);
 void init_cpu_online(const struct cpumask *src);

-static inline void reset_cpu_possible_mask(void)
-{
-	bitmap_zero(cpumask_bits(&__cpu_possible_mask), NR_CPUS);
-}
-
 static inline void
 set_cpu_possible(unsigned int cpu, bool possible)
 {
include/linux/instrumented.h (+35)
···
 }

 /**
+ * instrument_memcpy_before - add instrumentation before non-instrumented memcpy
+ * @to: destination address
+ * @from: source address
+ * @n: number of bytes to copy
+ *
+ * Instrument memory accesses that happen in custom memcpy implementations. The
+ * instrumentation should be inserted before the memcpy call.
+ */
+static __always_inline void instrument_memcpy_before(void *to, const void *from,
+						     unsigned long n)
+{
+	kasan_check_write(to, n);
+	kasan_check_read(from, n);
+	kcsan_check_write(to, n);
+	kcsan_check_read(from, n);
+}
+
+/**
+ * instrument_memcpy_after - add instrumentation after non-instrumented memcpy
+ * @to: destination address
+ * @from: source address
+ * @n: number of bytes to copy
+ * @left: number of bytes not copied (if known)
+ *
+ * Instrument memory accesses that happen in custom memcpy implementations. The
+ * instrumentation should be inserted after the memcpy call.
+ */
+static __always_inline void instrument_memcpy_after(void *to, const void *from,
+						    unsigned long n,
+						    unsigned long left)
+{
+	kmsan_memmove(to, from, n - left);
+}
+
+/**
  * instrument_get_user() - add instrumentation to get_user()-like macros
  * @to: destination variable, may not be address-taken
  *
include/linux/kexec.h (+2 -4)
···
 extern bool kexec_file_dbg_print;

-#define kexec_dprintk(fmt, ...)					\
-	printk("%s" fmt,					\
-	       kexec_file_dbg_print ? KERN_INFO : KERN_DEBUG,	\
-	       ##__VA_ARGS__)
+#define kexec_dprintk(fmt, arg...) \
+	do { if (kexec_file_dbg_print) pr_info(fmt, ##arg); } while (0)

 #else /* !CONFIG_KEXEC_CORE */
 struct pt_regs;
include/linux/kfifo.h (+7 -2)
···
  * to lock the reader.
  */

-#include <linux/kernel.h>
+#include <linux/array_size.h>
 #include <linux/spinlock.h>
 #include <linux/stddef.h>
-#include <linux/scatterlist.h>
+#include <linux/types.h>
+
+#include <asm/barrier.h>
+#include <asm/errno.h>
+
+struct scatterlist;

 struct __kfifo {
 	unsigned int in;
include/linux/kmsan-checks.h (+15)
···
 void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy,
 			size_t left);

+/**
+ * kmsan_memmove() - Notify KMSAN about a data copy within kernel.
+ * @to: destination address in the kernel.
+ * @from: source address in the kernel.
+ * @size: number of bytes to copy.
+ *
+ * Invoked after non-instrumented version (e.g. implemented using assembly
+ * code) of memmove()/memcpy() is called, in order to copy KMSAN's metadata.
+ */
+void kmsan_memmove(void *to, const void *from, size_t to_copy);
+
 #else

 static inline void kmsan_poison_memory(const void *address, size_t size,
···
 }
 static inline void kmsan_copy_to_user(void __user *to, const void *from,
 				      size_t to_copy, size_t left)
+{
+}
+
+static inline void kmsan_memmove(void *to, const void *from, size_t to_copy)
 {
 }

include/linux/nmi.h (+2)
···
 extern void hardlockup_detector_perf_stop(void);
 extern void hardlockup_detector_perf_restart(void);
 extern void hardlockup_detector_perf_cleanup(void);
+extern void hardlockup_config_perf_event(const char *str);
 #else
 static inline void hardlockup_detector_perf_stop(void) { }
 static inline void hardlockup_detector_perf_restart(void) { }
 static inline void hardlockup_detector_perf_cleanup(void) { }
+static inline void hardlockup_config_perf_event(const char *str) { }
 #endif

 void watchdog_hardlockup_stop(void);
+5 -1
include/trace/events/nilfs2.h
··· 200 200 __field(struct inode *, inode) 201 201 __field(unsigned long, ino) 202 202 __field(unsigned long, blkoff) 203 - __field(enum req_op, mode) 203 + /* 204 + * Use field_struct() to avoid is_signed_type() on the 205 + * bitwise type enum req_op. 206 + */ 207 + __field_struct(enum req_op, mode) 204 208 ), 205 209 206 210 TP_fast_assign(
-1
init/do_mounts_initrd.c
··· 29 29 .mode = 0644, 30 30 .proc_handler = proc_dointvec, 31 31 }, 32 - { } 33 32 }; 34 33 35 34 static __init int kernel_do_mounts_initrd_sysctls_init(void)
+19
init/main.c
··· 345 345 continue; 346 346 } 347 347 xbc_array_for_each_value(vnode, val) { 348 + /* 349 + * For prettier and more readable /proc/cmdline, only 350 + * quote the value when necessary, i.e. when it contains 351 + * whitespace. 352 + */ 348 353 q = strpbrk(val, " \t\r\n") ? "\"" : ""; 349 354 ret = snprintf(buf, rest(buf, end), "%s=%s%s%s ", 350 355 xbc_namebuf, q, val, q); ··· 886 881 memblock_free(unknown_options, len); 887 882 } 888 883 884 + static void __init early_numa_node_init(void) 885 + { 886 + #ifdef CONFIG_USE_PERCPU_NUMA_NODE_ID 887 + #ifndef cpu_to_node 888 + int cpu; 889 + 890 + /* The early_cpu_to_node() should be ready here. */ 891 + for_each_possible_cpu(cpu) 892 + set_cpu_numa_node(cpu, early_cpu_to_node(cpu)); 893 + #endif 894 + #endif 895 + } 896 + 889 897 asmlinkage __visible __init __no_sanitize_address __noreturn __no_stack_protector 890 898 void start_kernel(void) 891 899 { ··· 929 911 setup_nr_cpu_ids(); 930 912 setup_per_cpu_areas(); 931 913 smp_prepare_boot_cpu(); /* arch-specific boot-cpu hooks */ 914 + early_numa_node_init(); 932 915 boot_cpu_hotplug_init(); 933 916 934 917 pr_notice("Kernel command line: %s\n", saved_command_line);
-1
ipc/ipc_sysctl.c
··· 178 178 .extra2 = SYSCTL_INT_MAX, 179 179 }, 180 180 #endif 181 - {} 182 181 }; 183 182 184 183 static struct ctl_table_set *set_lookup(struct ctl_table_root *root)
-1
ipc/mq_sysctl.c
··· 64 64 .extra1 = &msg_maxsize_limit_min, 65 65 .extra2 = &msg_maxsize_limit_max, 66 66 }, 67 - {} 68 67 }; 69 68 70 69 static struct ctl_table_set *set_lookup(struct ctl_table_root *root)
+2
kernel/crash_core.c
··· 4 4 * Copyright (C) 2002-2004 Eric Biederman <ebiederm@xmission.com> 5 5 */ 6 6 7 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 8 + 7 9 #include <linux/buildid.h> 8 10 #include <linux/init.h> 9 11 #include <linux/utsname.h>
+2 -2
kernel/crash_reserve.c
··· 109 109 110 110 size = memparse(cur, &tmp); 111 111 if (cur == tmp) { 112 - pr_warn("Memory value expected\n"); 112 + pr_warn("crashkernel: Memory value expected\n"); 113 113 return -EINVAL; 114 114 } 115 115 cur = tmp; ··· 132 132 cur++; 133 133 *crash_base = memparse(cur, &tmp); 134 134 if (cur == tmp) { 135 - pr_warn("Memory value expected after '@'\n"); 135 + pr_warn("crashkernel: Memory value expected after '@'\n"); 136 136 return -EINVAL; 137 137 } 138 138 }
+2 -1
kernel/kcov.c
··· 627 627 mode = kcov_get_mode(remote_arg->trace_mode); 628 628 if (mode < 0) 629 629 return mode; 630 - if (remote_arg->area_size > LONG_MAX / sizeof(unsigned long)) 630 + if ((unsigned long)remote_arg->area_size > 631 + LONG_MAX / sizeof(unsigned long)) 631 632 return -EINVAL; 632 633 kcov->mode = mode; 633 634 t->kcov = kcov;
+3 -3
kernel/regset.c
··· 16 16 if (size > regset->n * regset->size) 17 17 size = regset->n * regset->size; 18 18 if (!p) { 19 - to_free = p = kzalloc(size, GFP_KERNEL); 19 + to_free = p = kvzalloc(size, GFP_KERNEL); 20 20 if (!p) 21 21 return -ENOMEM; 22 22 } 23 23 res = regset->regset_get(target, regset, 24 24 (struct membuf){.p = p, .left = size}); 25 25 if (res < 0) { 26 - kfree(to_free); 26 + kvfree(to_free); 27 27 return res; 28 28 } 29 29 *data = p; ··· 71 71 ret = regset_get_alloc(target, regset, size, &buf); 72 72 if (ret > 0) 73 73 ret = copy_to_user(data, buf, ret) ? -EFAULT : 0; 74 - kfree(buf); 74 + kvfree(buf); 75 75 return ret; 76 76 }
+1 -2
kernel/trace/blktrace.c
··· 524 524 if (!buts->buf_size || !buts->buf_nr) 525 525 return -EINVAL; 526 526 527 - strncpy(buts->name, name, BLKTRACE_BDEV_SIZE); 528 - buts->name[BLKTRACE_BDEV_SIZE - 1] = '\0'; 527 + strscpy_pad(buts->name, name, BLKTRACE_BDEV_SIZE); 529 528 530 529 /* 531 530 * some device names have larger paths - convert the slashes
+9
kernel/watchdog.c
··· 78 78 79 79 static int __init hardlockup_panic_setup(char *str) 80 80 { 81 + next: 81 82 if (!strncmp(str, "panic", 5)) 82 83 hardlockup_panic = 1; 83 84 else if (!strncmp(str, "nopanic", 7)) ··· 87 86 watchdog_hardlockup_user_enabled = 0; 88 87 else if (!strncmp(str, "1", 1)) 89 88 watchdog_hardlockup_user_enabled = 1; 89 + else if (!strncmp(str, "r", 1)) 90 + hardlockup_config_perf_event(str + 1); 91 + while (*(str++)) { 92 + if (*str == ',') { 93 + str++; 94 + goto next; 95 + } 96 + } 90 97 return 1; 91 98 } 92 99 __setup("nmi_watchdog=", hardlockup_panic_setup);
+45 -2
kernel/watchdog_perf.c
··· 90 90 .disabled = 1, 91 91 }; 92 92 93 + static struct perf_event_attr fallback_wd_hw_attr = { 94 + .type = PERF_TYPE_HARDWARE, 95 + .config = PERF_COUNT_HW_CPU_CYCLES, 96 + .size = sizeof(struct perf_event_attr), 97 + .pinned = 1, 98 + .disabled = 1, 99 + }; 100 + 93 101 /* Callback function for perf event subsystem */ 94 102 static void watchdog_overflow_callback(struct perf_event *event, 95 103 struct perf_sample_data *data, ··· 131 123 evt = perf_event_create_kernel_counter(wd_attr, cpu, NULL, 132 124 watchdog_overflow_callback, NULL); 133 125 if (IS_ERR(evt)) { 126 + wd_attr = &fallback_wd_hw_attr; 127 + wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh); 128 + evt = perf_event_create_kernel_counter(wd_attr, cpu, NULL, 129 + watchdog_overflow_callback, NULL); 130 + } 131 + 132 + if (IS_ERR(evt)) { 134 133 pr_debug("Perf event create on CPU %d failed with %ld\n", cpu, 135 134 PTR_ERR(evt)); 136 135 return PTR_ERR(evt); ··· 148 133 149 134 /** 150 135 * watchdog_hardlockup_enable - Enable the local event 151 - * 152 136 * @cpu: The CPU to enable hard lockup on. 153 137 */ 154 138 void watchdog_hardlockup_enable(unsigned int cpu) ··· 166 152 167 153 /** 168 154 * watchdog_hardlockup_disable - Disable the local event 169 - * 170 155 * @cpu: The CPU to enable hard lockup on. 171 156 */ 172 157 void watchdog_hardlockup_disable(unsigned int cpu) ··· 271 258 this_cpu_write(watchdog_ev, NULL); 272 259 } 273 260 return ret; 261 + } 262 + 263 + /** 264 + * hardlockup_config_perf_event - Overwrite config of wd_hw_attr. 265 + * @str: number which identifies the raw perf event to use 266 + */ 267 + void __init hardlockup_config_perf_event(const char *str) 268 + { 269 + u64 config; 270 + char buf[24]; 271 + char *comma = strchr(str, ','); 272 + 273 + if (!comma) { 274 + if (kstrtoull(str, 16, &config)) 275 + return; 276 + } else { 277 + unsigned int len = comma - str; 278 + 279 + if (len >= sizeof(buf)) 280 + return; 281 + 282 + if (strscpy(buf, str, sizeof(buf)) < 0) 283 + return; 284 + buf[len] = 0; 285 + if (kstrtoull(buf, 16, &config)) 286 + return; 287 + } 288 + 289 + wd_hw_attr.type = PERF_TYPE_RAW; 290 + wd_hw_attr.config = config; 274 291 }
+1
lib/Kconfig.kgdb
··· 122 122 config KDB_KEYBOARD 123 123 bool "KGDB_KDB: keyboard as input device" 124 124 depends on VT && KGDB_KDB && !PARISC 125 + depends on HAS_IOPORT 125 126 default n 126 127 help 127 128 KDB can use a PS/2 type keyboard for an input device
+4 -1
lib/build_OID_registry
··· 8 8 # 9 9 10 10 use strict; 11 + use Cwd qw(abs_path); 11 12 12 13 my @names = (); 13 14 my @oids = (); ··· 17 16 print STDERR "Format: ", $0, " <in-h-file> <out-c-file>\n"; 18 17 exit(2); 19 18 } 19 + 20 + my $abs_srctree = abs_path($ENV{'srctree'}); 20 21 21 22 # 22 23 # Open the file to read from ··· 38 35 # 39 36 open C_FILE, ">$ARGV[1]" or die; 40 37 print C_FILE "/*\n"; 41 - print C_FILE " * Automatically generated by ", $0, ". Do not edit\n"; 38 + print C_FILE " * Automatically generated by ", $0 =~ s#^\Q$abs_srctree/\E##r, ". Do not edit\n"; 42 39 print C_FILE " */\n"; 43 40 44 41 #
+15 -11
lib/devres.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/bug.h> 2 3 #include <linux/device.h> 3 - #include <linux/err.h> 4 - #include <linux/io.h> 5 - #include <linux/gfp.h> 4 + #include <linux/errno.h> 6 5 #include <linux/export.h> 6 + #include <linux/gfp_types.h> 7 + #include <linux/io.h> 8 + #include <linux/ioport.h> 7 9 #include <linux/of_address.h> 10 + #include <linux/types.h> 8 11 9 12 enum devm_ioremap_type { 10 13 DEVM_IOREMAP = 0, ··· 128 125 resource_size_t size; 129 126 void __iomem *dest_ptr; 130 127 char *pretty_name; 128 + int ret; 131 129 132 130 BUG_ON(!dev); 133 131 134 132 if (!res || resource_type(res) != IORESOURCE_MEM) { 135 - dev_err(dev, "invalid resource %pR\n", res); 136 - return IOMEM_ERR_PTR(-EINVAL); 133 + ret = dev_err_probe(dev, -EINVAL, "invalid resource %pR\n", res); 134 + return IOMEM_ERR_PTR(ret); 137 135 } 138 136 139 137 if (type == DEVM_IOREMAP && res->flags & IORESOURCE_MEM_NONPOSTED) ··· 148 144 else 149 145 pretty_name = devm_kstrdup(dev, dev_name(dev), GFP_KERNEL); 150 146 if (!pretty_name) { 151 - dev_err(dev, "can't generate pretty name for resource %pR\n", res); 152 - return IOMEM_ERR_PTR(-ENOMEM); 147 + ret = dev_err_probe(dev, -ENOMEM, "can't generate pretty name for resource %pR\n", res); 148 + return IOMEM_ERR_PTR(ret); 153 149 } 154 150 155 151 if (!devm_request_mem_region(dev, res->start, size, pretty_name)) { 156 - dev_err(dev, "can't request region for resource %pR\n", res); 157 - return IOMEM_ERR_PTR(-EBUSY); 152 + ret = dev_err_probe(dev, -EBUSY, "can't request region for resource %pR\n", res); 153 + return IOMEM_ERR_PTR(ret); 158 154 } 159 155 160 156 dest_ptr = __devm_ioremap(dev, res->start, size, type); 161 157 if (!dest_ptr) { 162 - dev_err(dev, "ioremap failed for resource %pR\n", res); 163 158 devm_release_mem_region(dev, res->start, size); 164 - dest_ptr = IOMEM_ERR_PTR(-ENOMEM); 159 + ret = dev_err_probe(dev, -ENOMEM, "ioremap failed for resource %pR\n", res); 160 + return IOMEM_ERR_PTR(ret); 165 161 } 166 162 167 163 return dest_ptr;
+5 -5
lib/kfifo.c
··· 5 5 * Copyright (C) 2009/2010 Stefani Seibold <stefani@seibold.net> 6 6 */ 7 7 8 - #include <linux/kernel.h> 9 - #include <linux/export.h> 10 - #include <linux/slab.h> 11 8 #include <linux/err.h> 12 - #include <linux/log2.h> 13 - #include <linux/uaccess.h> 9 + #include <linux/export.h> 14 10 #include <linux/kfifo.h> 11 + #include <linux/log2.h> 12 + #include <linux/scatterlist.h> 13 + #include <linux/slab.h> 14 + #include <linux/uaccess.h> 15 15 16 16 /* 17 17 * internal helper to calculate the unused elements in a fifo
+1 -1
lib/test_hexdump.c
··· 113 113 *p++ = ' '; 114 114 } while (p < test + rs * 2 + rs / gs + 1); 115 115 116 - strncpy(p, data_a, l); 116 + memcpy(p, data_a, l); 117 117 p += l; 118 118 } 119 119
+11
mm/kmsan/hooks.c
··· 285 285 } 286 286 EXPORT_SYMBOL(kmsan_copy_to_user); 287 287 288 + void kmsan_memmove(void *to, const void *from, size_t size) 289 + { 290 + if (!kmsan_enabled || kmsan_in_runtime()) 291 + return; 292 + 293 + kmsan_enter_runtime(); 294 + kmsan_internal_memmove_metadata(to, (void *)from, size); 295 + kmsan_leave_runtime(); 296 + } 297 + EXPORT_SYMBOL(kmsan_memmove); 298 + 288 299 /* Helper function to check an URB. */ 289 300 void kmsan_handle_urb(const struct urb *urb, bool is_out) 290 301 {
+2 -1
samples/kfifo/dma-example.c
··· 6 6 */ 7 7 8 8 #include <linux/init.h> 9 - #include <linux/module.h> 10 9 #include <linux/kfifo.h> 10 + #include <linux/module.h> 11 + #include <linux/scatterlist.h> 11 12 12 13 /* 13 14 * This module shows how to handle fifo dma operations.
+6
scripts/checkpatch.pl
··· 6040 6040 CHK("MACRO_ARG_PRECEDENCE", 6041 6041 "Macro argument '$arg' may be better as '($arg)' to avoid precedence issues\n" . "$herectx"); 6042 6042 } 6043 + 6044 + # check if this is an unused argument 6045 + if ($define_stmt !~ /\b$arg\b/) { 6046 + WARN("MACRO_ARG_UNUSED", 6047 + "Argument '$arg' is not used in function-like macro\n" . "$herectx"); 6048 + } 6043 6049 } 6044 6050 6045 6051 # check for macros with flow control, but without ## concatenation
+3 -8
scripts/gdb/linux/cpus.py
··· 26 26 if utils.get_gdbserver_type() == utils.GDBSERVER_QEMU: 27 27 return gdb.selected_thread().num - 1 28 28 elif utils.get_gdbserver_type() == utils.GDBSERVER_KGDB: 29 - tid = gdb.selected_thread().ptid[2] 30 - if tid > (0x100000000 - MAX_CPUS - 2): 31 - return 0x100000000 - tid - 2 32 - else: 33 - return tasks.get_thread_info(tasks.get_task_by_pid(tid))['cpu'] 29 + return gdb.parse_and_eval("kgdb_active.counter") 34 30 else: 35 31 raise gdb.GdbError("Sorry, obtaining the current CPU is not yet " 36 32 "supported with this gdb server.") ··· 148 152 def __init__(self): 149 153 super(PerCpu, self).__init__("lx_per_cpu") 150 154 151 - def invoke(self, var_name, cpu=-1): 152 - var_ptr = gdb.parse_and_eval("&" + var_name.string()) 153 - return per_cpu(var_ptr, cpu) 155 + def invoke(self, var, cpu=-1): 156 + return per_cpu(var.address, cpu) 154 157 155 158 156 159 PerCpu()
+1 -1
scripts/gdb/linux/tasks.py
··· 85 85 86 86 def get_thread_info(task): 87 87 thread_info_ptr_type = thread_info_type.get_type().pointer() 88 - if task.type.fields()[0].type == thread_info_type.get_type(): 88 + if task_type.get_type().fields()[0].type == thread_info_type.get_type(): 89 89 return task['thread_info'] 90 90 thread_info = task['stack'].cast(thread_info_ptr_type) 91 91 return thread_info.dereference()
+1 -1
scripts/gdb/linux/utils.py
··· 196 196 def probe_kgdb(): 197 197 try: 198 198 thread_info = gdb.execute("info thread 2", to_string=True) 199 - return "shadowCPU0" in thread_info 199 + return "shadowCPU" in thread_info 200 200 except gdb.error: 201 201 return False 202 202
+2 -2
tools/include/linux/rbtree_augmented.h
··· 158 158 159 159 static inline void rb_set_parent(struct rb_node *rb, struct rb_node *p) 160 160 { 161 - rb->__rb_parent_color = rb_color(rb) | (unsigned long)p; 161 + rb->__rb_parent_color = rb_color(rb) + (unsigned long)p; 162 162 } 163 163 164 164 static inline void rb_set_parent_color(struct rb_node *rb, 165 165 struct rb_node *p, int color) 166 166 { 167 - rb->__rb_parent_color = (unsigned long)p | color; 167 + rb->__rb_parent_color = (unsigned long)p + color; 168 168 } 169 169 170 170 static inline void
+1 -1
tools/lib/rbtree.c
··· 58 58 59 59 static inline void rb_set_black(struct rb_node *rb) 60 60 { 61 - rb->__rb_parent_color |= RB_BLACK; 61 + rb->__rb_parent_color += RB_BLACK; 62 62 } 63 63 64 64 static inline struct rb_node *rb_red_parent(struct rb_node *red)
+1 -1
tools/testing/selftests/kcmp/kcmp_test.c
··· 91 91 ksft_print_header(); 92 92 ksft_set_plan(3); 93 93 94 - fd2 = open(kpath, O_RDWR, 0644); 94 + fd2 = open(kpath, O_RDWR); 95 95 if (fd2 < 0) { 96 96 perror("Can't open file"); 97 97 ksft_exit_fail();