Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'akpm' (patches from Andrew)

Merge yet more updates from Andrew Morton:

- the rest of ocfs2

- various hotfixes, mainly MM

- quite a bit of misc stuff - drivers, fork, exec, signals, etc.

- printk updates

- firmware

- checkpatch

- nilfs2

- more kexec stuff than usual

- rapidio updates

- w1 things

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (111 commits)
ipc: delete "nr_ipc_ns"
kcov: allow more fine-grained coverage instrumentation
init/Kconfig: add clarification for out-of-tree modules
config: add android config fragments
init/Kconfig: ban CONFIG_LOCALVERSION_AUTO with allmodconfig
relay: add global mode support for buffer-only channels
init: allow blacklisting of module_init functions
w1:omap_hdq: fix regression
w1: add helper macro module_w1_family
w1: remove need for ida and use PLATFORM_DEVID_AUTO
rapidio/switches: add driver for IDT gen3 switches
powerpc/fsl_rio: apply changes for RIO spec rev 3
rapidio: modify for rev.3 specification changes
rapidio: change inbound window size type to u64
rapidio/idt_gen2: fix locking warning
rapidio: fix error handling in mbox request/release functions
rapidio/tsi721_dma: advance queue processing from transfer submit call
rapidio/tsi721: add messaging mbox selector parameter
rapidio/tsi721: add PCIe MRRS override parameter
rapidio/tsi721_dma: add channel mask and queue size parameters
...

+5750 -2013
+2
.mailmap
···
92 92  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
93 93  Leonid I Ananiev <leonid.i.ananiev@intel.com>
94 94  Linas Vepstas <linas@austin.ibm.com>
   95 + Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@web.de>
   96 + Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@ascom.ch>
95 97  Mark Brown <broonie@sirena.org.uk>
96 98  Matthieu CASTET <castet.matthieu@free.fr>
97 99  Mauro Carvalho Chehab <mchehab@kernel.org> <maurochehab@gmail.com> <mchehab@infradead.org> <mchehab@redhat.com> <m.chehab@samsung.com> <mchehab@osg.samsung.com> <mchehab@s-opensource.com>
+2 -1
Documentation/filesystems/nilfs2.txt
···
267 267  `-- file (ino=yy)
268 268      ( regular file, directory, or symlink )
269 269
270      - For detail on the format of each file, please see include/linux/nilfs2_fs.h.
    270  + For detail on the format of each file, please see nilfs2_ondisk.h
    271  + located at include/uapi/linux directory.
271 272
272 273  There are no patents or other intellectual property that we protect
273 274  with regard to the design of NILFS2. It is allowed to replicate the
+1 -1
Documentation/ioctl/ioctl-number.txt
···
248 248  'm'  00     drivers/scsi/megaraid/megaraid_ioctl.h  conflict!
249 249  'm'  00-1F  net/irda/irmod.h                         conflict!
250 250  'n'  00-7F  linux/ncp_fs.h and fs/ncpfs/ioctl.c
251      - 'n'  80-8F  linux/nilfs2_fs.h                        NILFS2
    251  + 'n'  80-8F  uapi/linux/nilfs2_api.h                  NILFS2
252 252  'n'  E0-FF  linux/matroxfb.h                         matroxfb
253 253  'o'  00-1F  fs/ocfs2/ocfs2_fs.h                      OCFS2
254 254  'o'  00-03  mtd/ubi-user.h                           conflict! (OCFS2 and UBI overlaps)
+7
Documentation/kernel-parameters.txt
···
3182 3182    Format: <bool> (1/Y/y=enable, 0/N/n=disable)
3183 3183    default: disabled
3184 3184
     3185 + printk.devkmsg={on,off,ratelimit}
     3186 +   Control writing to /dev/kmsg.
     3187 +   on - unlimited logging to /dev/kmsg from userspace
     3188 +   off - logging to /dev/kmsg disabled
     3189 +   ratelimit - ratelimit the logging
     3190 +   Default: ratelimit
     3191 +
3185 3192  printk.time= Show timing data prefixed to each printk message line
3186 3193    Format: <bool> (1/Y/y=enable, 0/N/n=disable)
3187 3194
+1 -2
Documentation/rapidio/mport_cdev.txt
···
82 82
83 83  - 'dbg_level' - This parameter allows to control amount of debug information
84 84    generated by this device driver. This parameter is formed by set of
85     -  This parameter can be changed bit masks that correspond to the specific
86     -  functional block.
   85  +  bit masks that correspond to the specific functional blocks.
87 86    For mask definitions see 'drivers/rapidio/devices/rio_mport_cdev.c'
88 87    This parameter can be changed dynamically.
89 88    Use CONFIG_RAPIDIO_DEBUG=y to enable debug output at the top level.
+119
Documentation/rapidio/rio_cm.txt
···
  1 + RapidIO subsystem Channelized Messaging character device driver (rio_cm.c)
  2 + ==========================================================================
  3 +
  4 + Version History:
  5 + ----------------
  6 +   1.0.0 - Initial driver release.
  7 +
  8 + ==========================================================================
  9 +
 10 + I. Overview
 11 +
 12 + This device driver is the result of collaboration within the RapidIO.org
 13 + Software Task Group (STG) between Texas Instruments, Prodrive Technologies,
 14 + Nokia Networks, BAE and IDT. Additional input was received from other members
 15 + of RapidIO.org.
 16 +
 17 + The objective was to create a character mode driver interface which exposes
 18 + messaging capabilities of RapidIO endpoint devices (mports) directly
 19 + to applications, in a manner that allows the numerous and varied RapidIO
 20 + implementations to interoperate.
 21 +
 22 + This driver (RIO_CM) provides to user-space applications shared access to
 23 + RapidIO mailbox messaging resources.
 24 +
 25 + RapidIO specification (Part 2) defines that endpoint devices may have up to four
 26 + messaging mailboxes in case of multi-packet message (up to 4KB) and
 27 + up to 64 mailboxes if single-packet messages (up to 256 B) are used. In addition
 28 + to protocol definition limitations, a particular hardware implementation can
 29 + have reduced number of messaging mailboxes. RapidIO aware applications must
 30 + therefore share the messaging resources of a RapidIO endpoint.
 31 +
 32 + Main purpose of this device driver is to provide RapidIO mailbox messaging
 33 + capability to large number of user-space processes by introducing socket-like
 34 + operations using a single messaging mailbox. This allows applications to
 35 + use the limited RapidIO messaging hardware resources efficiently.
 36 +
 37 + Most of device driver's operations are supported through 'ioctl' system calls.
 38 +
 39 + When loaded this device driver creates a single file system node named rio_cm
 40 + in /dev directory common for all registered RapidIO mport devices.
 41 +
 42 + Following ioctl commands are available to user-space applications:
 43 +
 44 + - RIO_CM_MPORT_GET_LIST : Returns to caller list of local mport devices that
 45 +   support messaging operations (number of entries up to RIO_MAX_MPORTS).
 46 +   Each list entry is combination of mport's index in the system and RapidIO
 47 +   destination ID assigned to the port.
 48 + - RIO_CM_EP_GET_LIST_SIZE : Returns number of messaging capable remote endpoints
 49 +   in a RapidIO network associated with the specified mport device.
 50 + - RIO_CM_EP_GET_LIST : Returns list of RapidIO destination IDs for messaging
 51 +   capable remote endpoints (peers) available in a RapidIO network associated
 52 +   with the specified mport device.
 53 + - RIO_CM_CHAN_CREATE : Creates RapidIO message exchange channel data structure
 54 +   with channel ID assigned automatically or as requested by a caller.
 55 + - RIO_CM_CHAN_BIND : Binds the specified channel data structure to the specified
 56 +   mport device.
 57 + - RIO_CM_CHAN_LISTEN : Enables listening for connection requests on the specified
 58 +   channel.
 59 + - RIO_CM_CHAN_ACCEPT : Accepts a connection request from peer on the specified
 60 +   channel. If wait timeout for this request is specified by a caller it is
 61 +   a blocking call. If timeout set to 0 this is non-blocking call - ioctl
 62 +   handler checks for a pending connection request and if one is not available
 63 +   exits with -EGAIN error status immediately.
 64 + - RIO_CM_CHAN_CONNECT : Sends a connection request to a remote peer/channel.
 65 + - RIO_CM_CHAN_SEND : Sends a data message through the specified channel.
 66 +   The handler for this request assumes that message buffer specified by
 67 +   a caller includes the reserved space for a packet header required by
 68 +   this driver.
 69 + - RIO_CM_CHAN_RECEIVE : Receives a data message through a connected channel.
 70 +   If the channel does not have an incoming message ready to return this ioctl
 71 +   handler will wait for new message until timeout specified by a caller
 72 +   expires. If timeout value is set to 0, ioctl handler uses a default value
 73 +   defined by MAX_SCHEDULE_TIMEOUT.
 74 + - RIO_CM_CHAN_CLOSE : Closes a specified channel and frees associated buffers.
 75 +   If the specified channel is in the CONNECTED state, sends close notification
 76 +   to the remote peer.
 77 +
 78 + The ioctl command codes and corresponding data structures intended for use by
 79 + user-space applications are defined in 'include/uapi/linux/rio_cm_cdev.h'.
 80 +
 81 + II. Hardware Compatibility
 82 +
 83 + This device driver uses standard interfaces defined by kernel RapidIO subsystem
 84 + and therefore it can be used with any mport device driver registered by RapidIO
 85 + subsystem with limitations set by available mport HW implementation of messaging
 86 + mailboxes.
 87 +
 88 + III. Module parameters
 89 +
 90 + - 'dbg_level' - This parameter allows to control amount of debug information
 91 +   generated by this device driver. This parameter is formed by set of
 92 +   bit masks that correspond to the specific functional block.
 93 +   For mask definitions see 'drivers/rapidio/devices/rio_cm.c'
 94 +   This parameter can be changed dynamically.
 95 +   Use CONFIG_RAPIDIO_DEBUG=y to enable debug output at the top level.
 96 +
 97 + - 'cmbox' - Number of RapidIO mailbox to use (default value is 1).
 98 +   This parameter allows to set messaging mailbox number that will be used
 99 +   within entire RapidIO network. It can be used when default mailbox is
100 +   used by other device drivers or is not supported by some nodes in the
101 +   RapidIO network.
102 +
103 + - 'chstart' - Start channel number for dynamic assignment. Default value - 256.
104 +   Allows to exclude channel numbers below this parameter from dynamic
105 +   allocation to avoid conflicts with software components that use
106 +   reserved predefined channel numbers.
107 +
108 + IV. Known problems
109 +
110 +   None.
111 +
112 + V. User-space Applications and API Library
113 +
114 + Messaging API library and applications that use this device driver are available
115 + from RapidIO.org.
116 +
117 + VI. TODO List
118 +
119 + - Add support for system notification messages (reserved channel 0).
+26
Documentation/rapidio/tsi721.txt
···
25 25    This parameter can be changed dynamically.
26 26    Use CONFIG_RAPIDIO_DEBUG=y to enable debug output at the top level.
27 27
   28  + - 'dma_desc_per_channel' - This parameter defines number of hardware buffer
   29  +   descriptors allocated for each registered Tsi721 DMA channel.
   30  +   Its default value is 128.
   31  +
   32  + - 'dma_txqueue_sz' - DMA transactions queue size. Defines number of pending
   33  +   transaction requests that can be accepted by each DMA channel.
   34  +   Default value is 16.
   35  +
   36  + - 'dma_sel' - DMA channel selection mask. Bitmask that defines which hardware
   37  +   DMA channels (0 ... 6) will be registered with DmaEngine core.
   38  +   If bit is set to 1, the corresponding DMA channel will be registered.
   39  +   DMA channels not selected by this mask will not be used by this device
   40  +   driver. Default value is 0x7f (use all channels).
   41  +
   42  + - 'pcie_mrrs' - override value for PCIe Maximum Read Request Size (MRRS).
   43  +   This parameter gives an ability to override MRRS value set during PCIe
   44  +   configuration process. Tsi721 supports read request sizes up to 4096B.
   45  +   Value for this parameter must be set as defined by PCIe specification:
   46  +   0 = 128B, 1 = 256B, 2 = 512B, 3 = 1024B, 4 = 2048B and 5 = 4096B.
   47  +   Default value is '-1' (= keep platform setting).
   48  +
   49  + - 'mbox_sel' - RIO messaging MBOX selection mask. This is a bitmask that defines
   50  +   messaging MBOXes are managed by this device driver. Mask bits 0 - 3
   51  +   correspond to MBOX0 - MBOX3. MBOX is under driver's control if the
   52  +   corresponding bit is set to '1'. Default value is 0x0f (= all).
   53  +
28 54  II. Known problems
29 55
30 56    None.
+14
Documentation/sysctl/kernel.txt
···
764 764
765 765  ==============================================================
766 766
    767 + printk_devkmsg:
    768 +
    769 + Control the logging to /dev/kmsg from userspace:
    770 +
    771 + ratelimit: default, ratelimited
    772 + on: unlimited logging to /dev/kmsg from userspace
    773 + off: logging to /dev/kmsg disabled
    774 +
    775 + The kernel command line parameter printk.devkmsg= overrides this and is
    776 + a one-time setting until next reboot: once set, it cannot be changed by
    777 + this sysctl interface anymore.
    778 +
    779 + ==============================================================
    780 +
767 781  randomize_va_space:
768 782
769 783  This option can be used to select the type of process address
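A minimal sketch of setting this knob persistently, assuming a systemd-style sysctl.d layout (the file path is illustrative; the sysctl and boot-parameter names come from the hunk above):

```
# /etc/sysctl.d/90-printk-devkmsg.conf   (path illustrative)
kernel.printk_devkmsg = off

# Equivalent one-time setting on the kernel command line; note that once
# printk.devkmsg= is given at boot, the sysctl becomes read-only:
#   printk.devkmsg=off
```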
+11 -2
MAINTAINERS
···
 778  778  S: Supported
 779  779  F: drivers/dma/dma-axi-dmac.c
 780  780
      781 + ANDROID CONFIG FRAGMENTS
      782 + M: Rob Herring <robh@kernel.org>
      783 + S: Supported
      784 + F: kernel/configs/android*
      785 +
 781  786  ANDROID DRIVERS
 782  787  M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 783  788  M: Arve Hjønnevåg <arve@android.com>
···
2351 2346  F: drivers/media/platform/sti/bdisp
2352 2347
2353 2348  BEFS FILE SYSTEM
2354      - S: Orphan
     2349 + M: Luis de Bethencourt <luisbg@osg.samsung.com>
     2350 + M: Salah Triki <salah.triki@gmail.com>
     2351 + S: Maintained
     2352 + T: git git://github.com/luisbg/linux-befs.git
2355 2353  F: Documentation/filesystems/befs.txt
2356 2354  F: fs/befs/
···
8272 8264  S: Supported
8273 8265  F: Documentation/filesystems/nilfs2.txt
8274 8266  F: fs/nilfs2/
8275      - F: include/linux/nilfs2_fs.h
8276 8267  F: include/trace/events/nilfs2.h
     8268 + F: include/uapi/linux/nilfs2_api.h
     8269 + F: include/uapi/linux/nilfs2_ondisk.h
8277 8270
8278 8271  NINJA SCSI-3 / NINJA SCSI-32Bi (16bit/CardBus) PCMCIA SCSI HOST ADAPTER DRIVER
8279 8272  M: YOKOTA Hiroshi <yokota@netlab.is.tsukuba.ac.jp>
-27
arch/alpha/include/asm/thread_info.h
···
 86 86  #define TS_UAC_NOPRINT  0x0001  /* ! Preserve the following three */
 87 87  #define TS_UAC_NOFIX    0x0002  /* ! flags as they match */
 88 88  #define TS_UAC_SIGBUS   0x0004  /* ! userspace part of 'osf_sysinfo' */
 89     - #define TS_RESTORE_SIGMASK 0x0008 /* restore signal mask in do_signal() */
 90     -
 91     - #ifndef __ASSEMBLY__
 92     - #define HAVE_SET_RESTORE_SIGMASK 1
 93     - static inline void set_restore_sigmask(void)
 94     - {
 95     -     struct thread_info *ti = current_thread_info();
 96     -     ti->status |= TS_RESTORE_SIGMASK;
 97     -     WARN_ON(!test_bit(TIF_SIGPENDING, (unsigned long *)&ti->flags));
 98     - }
 99     - static inline void clear_restore_sigmask(void)
100     - {
101     -     current_thread_info()->status &= ~TS_RESTORE_SIGMASK;
102     - }
103     - static inline bool test_restore_sigmask(void)
104     - {
105     -     return current_thread_info()->status & TS_RESTORE_SIGMASK;
106     - }
107     - static inline bool test_and_clear_restore_sigmask(void)
108     - {
109     -     struct thread_info *ti = current_thread_info();
110     -     if (!(ti->status & TS_RESTORE_SIGMASK))
111     -         return false;
112     -     ti->status &= ~TS_RESTORE_SIGMASK;
113     -     return true;
114     - }
115     - #endif
116 89
117 90  #define SET_UNALIGN_CTL(task,value) ({ \
118 91      __u32 status = task_thread_info(task)->status & ~UAC_BITMASK; \
+1 -1
arch/alpha/kernel/machvec_impl.h
··· 137 137 #define __initmv __initdata 138 138 #define ALIAS_MV(x) 139 139 #else 140 - #define __initmv __initdata_refok 140 + #define __initmv __refdata 141 141 142 142 /* GCC actually has a syntax for defining aliases, but is under some 143 143 delusion that you shouldn't be able to declare it extern somewhere
+1 -1
arch/arc/mm/init.c
··· 220 220 /* 221 221 * free_initmem: Free all the __init memory. 222 222 */ 223 - void __init_refok free_initmem(void) 223 + void __ref free_initmem(void) 224 224 { 225 225 free_initmem_default(-1); 226 226 }
+8
arch/arm/boot/dts/keystone.dtsi
···
70 70      cpu_on = <0x84000003>;
71 71  };
72 72
   73 + psci {
   74 +     compatible = "arm,psci";
   75 +     method = "smc";
   76 +     cpu_suspend = <0x84000001>;
   77 +     cpu_off = <0x84000002>;
   78 +     cpu_on = <0x84000003>;
   79 + };
   80 +
73 81  soc {
74 82      #address-cells = <1>;
75 83      #size-cells = <1>;
+24
arch/arm/include/asm/kexec.h
···
53 53  /* Function pointer to optional machine-specific reinitialization */
54 54  extern void (*kexec_reinit)(void);
55 55
   56 + static inline unsigned long phys_to_boot_phys(phys_addr_t phys)
   57 + {
   58 +     return phys_to_idmap(phys);
   59 + }
   60 + #define phys_to_boot_phys phys_to_boot_phys
   61 +
   62 + static inline phys_addr_t boot_phys_to_phys(unsigned long entry)
   63 + {
   64 +     return idmap_to_phys(entry);
   65 + }
   66 + #define boot_phys_to_phys boot_phys_to_phys
   67 +
   68 + static inline unsigned long page_to_boot_pfn(struct page *page)
   69 + {
   70 +     return page_to_pfn(page) + (arch_phys_to_idmap_offset >> PAGE_SHIFT);
   71 + }
   72 + #define page_to_boot_pfn page_to_boot_pfn
   73 +
   74 + static inline struct page *boot_pfn_to_page(unsigned long boot_pfn)
   75 + {
   76 +     return pfn_to_page(boot_pfn - (arch_phys_to_idmap_offset >> PAGE_SHIFT));
   77 + }
   78 + #define boot_pfn_to_page boot_pfn_to_page
   79 +
56 80  #endif /* __ASSEMBLY__ */
57 81
58 82  #endif /* CONFIG_KEXEC */
+1 -1
arch/arm/kernel/machine_kexec.c
··· 57 57 for (i = 0; i < image->nr_segments; i++) { 58 58 current_segment = &image->segment[i]; 59 59 60 - if (!memblock_is_region_memory(current_segment->mem, 60 + if (!memblock_is_region_memory(idmap_to_phys(current_segment->mem), 61 61 current_segment->memsz)) 62 62 return -EINVAL; 63 63
+37 -2
arch/arm/kernel/setup.c
···
 848  848      kernel_data.end = virt_to_phys(_end - 1);
 849  849
 850  850      for_each_memblock(memory, region) {
      851 +         phys_addr_t start = __pfn_to_phys(memblock_region_memory_base_pfn(region));
      852 +         phys_addr_t end = __pfn_to_phys(memblock_region_memory_end_pfn(region)) - 1;
      853 +         unsigned long boot_alias_start;
      854 +
      855 +         /*
      856 +          * Some systems have a special memory alias which is only
      857 +          * used for booting. We need to advertise this region to
      858 +          * kexec-tools so they know where bootable RAM is located.
      859 +          */
      860 +         boot_alias_start = phys_to_idmap(start);
      861 +         if (arm_has_idmap_alias() && boot_alias_start != IDMAP_INVALID_ADDR) {
      862 +             res = memblock_virt_alloc(sizeof(*res), 0);
      863 +             res->name = "System RAM (boot alias)";
      864 +             res->start = boot_alias_start;
      865 +             res->end = phys_to_idmap(end);
      866 +             res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
      867 +             request_resource(&iomem_resource, res);
      868 +         }
      869 +
 851  870          res = memblock_virt_alloc(sizeof(*res), 0);
 852  871          res->name = "System RAM";
 853       -         res->start = __pfn_to_phys(memblock_region_memory_base_pfn(region));
 854       -         res->end = __pfn_to_phys(memblock_region_memory_end_pfn(region)) - 1;
      872 +         res->start = start;
      873 +         res->end = end;
 855  874          res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
 856  875
 857  876          request_resource(&iomem_resource, res);
···
1019 1000          (unsigned long)(crash_base >> 20),
1020 1001          (unsigned long)(total_mem >> 20));
1021 1002
     1003 +     /* The crashk resource must always be located in normal mem */
1022 1004      crashk_res.start = crash_base;
1023 1005      crashk_res.end = crash_base + crash_size - 1;
1024 1006      insert_resource(&iomem_resource, &crashk_res);
     1007 +
     1008 +     if (arm_has_idmap_alias()) {
     1009 +         /*
     1010 +          * If we have a special RAM alias for use at boot, we
     1011 +          * need to advertise to kexec tools where the alias is.
     1012 +          */
     1013 +         static struct resource crashk_boot_res = {
     1014 +             .name = "Crash kernel (boot alias)",
     1015 +             .flags = IORESOURCE_BUSY | IORESOURCE_MEM,
     1016 +         };
     1017 +
     1018 +         crashk_boot_res.start = phys_to_idmap(crash_base);
     1019 +         crashk_boot_res.end = crashk_boot_res.start + crash_size - 1;
     1020 +         insert_resource(&iomem_resource, &crashk_boot_res);
     1021 +     }
1025 1022  }
1026 1023  #else
1027 1024  static inline void reserve_crashkernel(void) {}
+2 -2
arch/arm/mach-integrator/impd1.c
··· 320 320 #define IMPD1_VALID_IRQS 0x00000bffU 321 321 322 322 /* 323 - * As this module is bool, it is OK to have this as __init_refok() - no 323 + * As this module is bool, it is OK to have this as __ref() - no 324 324 * probe calls will be done after the initial system bootup, as devices 325 325 * are discovered as part of the machine startup. 326 326 */ 327 - static int __init_refok impd1_probe(struct lm_device *dev) 327 + static int __ref impd1_probe(struct lm_device *dev) 328 328 { 329 329 struct impd1_module *impd1; 330 330 int irq_base;
+1 -1
arch/arm/mach-mv78xx0/common.c
··· 343 343 DDR_WINDOW_CPU1_BASE, DDR_WINDOW_CPU_SZ); 344 344 } 345 345 346 - void __init_refok mv78xx0_timer_init(void) 346 + void __ref mv78xx0_timer_init(void) 347 347 { 348 348 orion_time_init(BRIDGE_VIRT_BASE, BRIDGE_INT_TIMER1_CLR, 349 349 IRQ_MV78XX0_TIMER_1, get_tclk());
+1 -1
arch/blackfin/mm/init.c
··· 112 112 } 113 113 #endif 114 114 115 - void __init_refok free_initmem(void) 115 + void __ref free_initmem(void) 116 116 { 117 117 #if defined CONFIG_RAMKERNEL && !defined CONFIG_MPU 118 118 free_initmem_default(-1);
+1 -1
arch/hexagon/mm/init.c
··· 93 93 * Todo: free pages between __init_begin and __init_end; possibly 94 94 * some devtree related stuff as well. 95 95 */ 96 - void __init_refok free_initmem(void) 96 + void __ref free_initmem(void) 97 97 { 98 98 } 99 99
-28
arch/ia64/include/asm/thread_info.h
···
121 121  /* like TIF_ALLWORK_BITS but sans TIF_SYSCALL_TRACE or TIF_SYSCALL_AUDIT */
122 122  #define TIF_WORK_MASK (TIF_ALLWORK_MASK&~(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT))
123 123
124     - #define TS_RESTORE_SIGMASK 2  /* restore signal mask in do_signal() */
125     -
126     - #ifndef __ASSEMBLY__
127     - #define HAVE_SET_RESTORE_SIGMASK 1
128     - static inline void set_restore_sigmask(void)
129     - {
130     -     struct thread_info *ti = current_thread_info();
131     -     ti->status |= TS_RESTORE_SIGMASK;
132     -     WARN_ON(!test_bit(TIF_SIGPENDING, &ti->flags));
133     - }
134     - static inline void clear_restore_sigmask(void)
135     - {
136     -     current_thread_info()->status &= ~TS_RESTORE_SIGMASK;
137     - }
138     - static inline bool test_restore_sigmask(void)
139     - {
140     -     return current_thread_info()->status & TS_RESTORE_SIGMASK;
141     - }
142     - static inline bool test_and_clear_restore_sigmask(void)
143     - {
144     -     struct thread_info *ti = current_thread_info();
145     -     if (!(ti->status & TS_RESTORE_SIGMASK))
146     -         return false;
147     -     ti->status &= ~TS_RESTORE_SIGMASK;
148     -     return true;
149     - }
150     - #endif /* !__ASSEMBLY__ */
151     -
152 124  #endif /* _ASM_IA64_THREAD_INFO_H */
+1 -1
arch/ia64/kernel/machine_kexec.c
··· 163 163 #endif 164 164 } 165 165 166 - unsigned long paddr_vmcoreinfo_note(void) 166 + phys_addr_t paddr_vmcoreinfo_note(void) 167 167 { 168 168 return ia64_tpa((unsigned long)(char *)&vmcoreinfo_note); 169 169 }
+1 -1
arch/ia64/kernel/mca.c
··· 1831 1831 } 1832 1832 1833 1833 /* Caller prevents this from being called after init */ 1834 - static void * __init_refok mca_bootmem(void) 1834 + static void * __ref mca_bootmem(void) 1835 1835 { 1836 1836 return __alloc_bootmem(sizeof(struct ia64_mca_cpu), 1837 1837 KERNEL_STACK_SIZE, 0);
-27
arch/microblaze/include/asm/thread_info.h
···
148 148   */
149 149  /* FPU was used by this task this quantum (SMP) */
150 150  #define TS_USEDFPU 0x0001
151     - #define TS_RESTORE_SIGMASK 0x0002
152     -
153     - #ifndef __ASSEMBLY__
154     - #define HAVE_SET_RESTORE_SIGMASK 1
155     - static inline void set_restore_sigmask(void)
156     - {
157     -     struct thread_info *ti = current_thread_info();
158     -     ti->status |= TS_RESTORE_SIGMASK;
159     -     WARN_ON(!test_bit(TIF_SIGPENDING, (unsigned long *)&ti->flags));
160     - }
161     - static inline void clear_restore_sigmask(void)
162     - {
163     -     current_thread_info()->status &= ~TS_RESTORE_SIGMASK;
164     - }
165     - static inline bool test_restore_sigmask(void)
166     - {
167     -     return current_thread_info()->status & TS_RESTORE_SIGMASK;
168     - }
169     - static inline bool test_and_clear_restore_sigmask(void)
170     - {
171     -     struct thread_info *ti = current_thread_info();
172     -     if (!(ti->status & TS_RESTORE_SIGMASK))
173     -         return false;
174     -     ti->status &= ~TS_RESTORE_SIGMASK;
175     -     return true;
176     - }
177     - #endif
178 151
179 152  #endif /* __KERNEL__ */
180 153  #endif /* _ASM_MICROBLAZE_THREAD_INFO_H */
+2 -2
arch/microblaze/mm/init.c
··· 414 414 415 415 #endif /* CONFIG_MMU */ 416 416 417 - void * __init_refok alloc_maybe_bootmem(size_t size, gfp_t mask) 417 + void * __ref alloc_maybe_bootmem(size_t size, gfp_t mask) 418 418 { 419 419 if (mem_init_done) 420 420 return kmalloc(size, mask); ··· 422 422 return alloc_bootmem(size); 423 423 } 424 424 425 - void * __init_refok zalloc_maybe_bootmem(size_t size, gfp_t mask) 425 + void * __ref zalloc_maybe_bootmem(size_t size, gfp_t mask) 426 426 { 427 427 void *p; 428 428
+1 -1
arch/microblaze/mm/pgtable.c
··· 234 234 return pa; 235 235 } 236 236 237 - __init_refok pte_t *pte_alloc_one_kernel(struct mm_struct *mm, 237 + __ref pte_t *pte_alloc_one_kernel(struct mm_struct *mm, 238 238 unsigned long address) 239 239 { 240 240 pte_t *pte;
+1 -1
arch/mips/mm/init.c
··· 504 504 505 505 void (*free_init_pages_eva)(void *begin, void *end) = NULL; 506 506 507 - void __init_refok free_initmem(void) 507 + void __ref free_initmem(void) 508 508 { 509 509 prom_free_prom_memory(); 510 510 /*
+1 -1
arch/mips/txx9/generic/pci.c
··· 268 268 return err; 269 269 } 270 270 271 - static void __init_refok quirk_slc90e66_bridge(struct pci_dev *dev) 271 + static void __ref quirk_slc90e66_bridge(struct pci_dev *dev) 272 272 { 273 273 int irq; /* PCI/ISA Bridge interrupt */ 274 274 u8 reg_64;
+1 -1
arch/nios2/mm/init.c
··· 89 89 } 90 90 #endif 91 91 92 - void __init_refok free_initmem(void) 92 + void __ref free_initmem(void) 93 93 { 94 94 free_initmem_default(-1); 95 95 }
+2 -2
arch/openrisc/mm/ioremap.c
··· 38 38 * have to convert them into an offset in a page-aligned mapping, but the 39 39 * caller shouldn't need to know that small detail. 40 40 */ 41 - void __iomem *__init_refok 41 + void __iomem *__ref 42 42 __ioremap(phys_addr_t addr, unsigned long size, pgprot_t prot) 43 43 { 44 44 phys_addr_t p; ··· 116 116 * the memblock infrastructure. 117 117 */ 118 118 119 - pte_t __init_refok *pte_alloc_one_kernel(struct mm_struct *mm, 119 + pte_t __ref *pte_alloc_one_kernel(struct mm_struct *mm, 120 120 unsigned long address) 121 121 { 122 122 pte_t *pte;
+4 -4
arch/powerpc/include/asm/mman.h
···
31 31  }
32 32  #define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
33 33
34    - static inline int arch_validate_prot(unsigned long prot)
   34 + static inline bool arch_validate_prot(unsigned long prot)
35 35  {
36 36      if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO))
37    -         return 0;
   37 +         return false;
38 38      if ((prot & PROT_SAO) && !cpu_has_feature(CPU_FTR_SAO))
39    -         return 0;
40    -     return 1;
   39 +         return false;
   40 +     return true;
41 41  }
42 42  #define arch_validate_prot(prot) arch_validate_prot(prot)
-25
arch/powerpc/include/asm/thread_info.h
···
138 138  /* Don't move TLF_NAPPING without adjusting the code in entry_32.S */
139 139  #define TLF_NAPPING          0  /* idle thread enabled NAP mode */
140 140  #define TLF_SLEEPING         1  /* suspend code enabled SLEEP mode */
141     - #define TLF_RESTORE_SIGMASK  2  /* Restore signal mask in do_signal */
142 141  #define TLF_LAZY_MMU         3  /* tlb_batch is active */
143 142  #define TLF_RUNLATCH         4  /* Is the runlatch enabled? */
144 143
145 144  #define _TLF_NAPPING         (1 << TLF_NAPPING)
146 145  #define _TLF_SLEEPING        (1 << TLF_SLEEPING)
147     - #define _TLF_RESTORE_SIGMASK (1 << TLF_RESTORE_SIGMASK)
148 146  #define _TLF_LAZY_MMU        (1 << TLF_LAZY_MMU)
149 147  #define _TLF_RUNLATCH        (1 << TLF_RUNLATCH)
150 148
151 149  #ifndef __ASSEMBLY__
152     - #define HAVE_SET_RESTORE_SIGMASK 1
153     - static inline void set_restore_sigmask(void)
154     - {
155     -     struct thread_info *ti = current_thread_info();
156     -     ti->local_flags |= _TLF_RESTORE_SIGMASK;
157     -     WARN_ON(!test_bit(TIF_SIGPENDING, &ti->flags));
158     - }
159     - static inline void clear_restore_sigmask(void)
160     - {
161     -     current_thread_info()->local_flags &= ~_TLF_RESTORE_SIGMASK;
162     - }
163     - static inline bool test_restore_sigmask(void)
164     - {
165     -     return current_thread_info()->local_flags & _TLF_RESTORE_SIGMASK;
166     - }
167     - static inline bool test_and_clear_restore_sigmask(void)
168     - {
169     -     struct thread_info *ti = current_thread_info();
170     -     if (!(ti->local_flags & _TLF_RESTORE_SIGMASK))
171     -         return false;
172     -     ti->local_flags &= ~_TLF_RESTORE_SIGMASK;
173     -     return true;
174     - }
175 150
176 151  static inline bool test_thread_local_flags(unsigned int flags)
177 152  {
+1 -1
arch/powerpc/lib/alloc.c
··· 6 6 #include <asm/setup.h> 7 7 8 8 9 - void * __init_refok zalloc_maybe_bootmem(size_t size, gfp_t mask) 9 + void * __ref zalloc_maybe_bootmem(size_t size, gfp_t mask) 10 10 { 11 11 void *p; 12 12
+1 -1
arch/powerpc/mm/pgtable_32.c
··· 79 79 #endif 80 80 } 81 81 82 - __init_refok pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address) 82 + __ref pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address) 83 83 { 84 84 pte_t *pte; 85 85
+2 -2
arch/powerpc/platforms/powermac/setup.c
··· 353 353 machine_late_initcall(powermac, pmac_late_init); 354 354 355 355 /* 356 - * This is __init_refok because we check for "initializing" before 356 + * This is __ref because we check for "initializing" before 357 357 * touching any of the __init sensitive things and "initializing" 358 358 * will be false after __init time. This can't be __init because it 359 359 * can be called whenever a disk is first accessed. 360 360 */ 361 - void __init_refok note_bootable_part(dev_t dev, int part, int goodness) 361 + void __ref note_bootable_part(dev_t dev, int part, int goodness) 362 362 { 363 363 char *p; 364 364
+1 -1
arch/powerpc/platforms/ps3/device-init.c
··· 189 189 return result; 190 190 } 191 191 192 - static int __init_refok ps3_setup_uhc_device( 192 + static int __ref ps3_setup_uhc_device( 193 193 const struct ps3_repository_device *repo, enum ps3_match_id match_id, 194 194 enum ps3_interrupt_type interrupt_type, enum ps3_reg_type reg_type) 195 195 {
+7 -17
arch/powerpc/sysdev/fsl_rio.c
···
289 289  }
290 290
291 291  int fsl_map_inb_mem(struct rio_mport *mport, dma_addr_t lstart,
292     -     u64 rstart, u32 size, u32 flags)
    292 +     u64 rstart, u64 size, u32 flags)
293 293  {
294 294      struct rio_priv *priv = mport->priv;
295 295      u32 base_size;
···
298 298      u32 riwar;
299 299      int i;
300 300
301     -     if ((size & (size - 1)) != 0)
    301 +     if ((size & (size - 1)) != 0 || size > 0x400000000ULL)
302 302          return -EINVAL;
303 303
304 304      base_size_log = ilog2(size);
···
643 643      port->ops = ops;
644 644      port->priv = priv;
645 645      port->phys_efptr = 0x100;
    646 +     port->phys_rmap = 1;
646 647      priv->regs_win = rio_regs_win;
647 648
648     -     /* Probe the master port phy type */
649 649      ccsr = in_be32(priv->regs_win + RIO_CCSR + i*0x20);
650     -     port->phy_type = (ccsr & 1) ? RIO_PHY_SERIAL : RIO_PHY_PARALLEL;
651     -     if (port->phy_type == RIO_PHY_PARALLEL) {
652     -         dev_err(&dev->dev, "RIO: Parallel PHY type, unsupported port type!\n");
653     -         release_resource(&port->iores);
654     -         kfree(priv);
655     -         kfree(port);
656     -         continue;
657     -     }
658     -     dev_info(&dev->dev, "RapidIO PHY type: Serial\n");
    650 +
659 651      /* Checking the port training status */
660 652      if (in_be32((priv->regs_win + RIO_ESCSR + i*0x20)) & 1) {
661 653          dev_err(&dev->dev, "Port %d is not ready. "
···
697 705          ((i == 0) ? RIO_INB_ATMU_REGS_PORT1_OFFSET :
698 706          RIO_INB_ATMU_REGS_PORT2_OFFSET));
699 707
700     -
701     -     /* Set to receive any dist ID for serial RapidIO controller. */
702     -     if (port->phy_type == RIO_PHY_SERIAL)
703     -         out_be32((priv->regs_win
704     -             + RIO_ISR_AACR + i*0x80), RIO_ISR_AACR_AA);
    708 +     /* Set to receive packets with any dest ID */
    709 +     out_be32((priv->regs_win + RIO_ISR_AACR + i*0x80),
    710 +         RIO_ISR_AACR_AA);
705 711
706 712      /* Configure maintenance transaction window */
707 713      out_be32(&priv->maint_atmu_regs->rowbar,
+1 -1
arch/powerpc/sysdev/msi_bitmap.c
··· 112 112 return 0; 113 113 } 114 114 115 - int __init_refok msi_bitmap_alloc(struct msi_bitmap *bmp, unsigned int irq_count, 115 + int __ref msi_bitmap_alloc(struct msi_bitmap *bmp, unsigned int irq_count, 116 116 struct device_node *of_node) 117 117 { 118 118 int size;
+1 -1
arch/score/mm/init.c
··· 91 91 } 92 92 #endif 93 93 94 - void __init_refok free_initmem(void) 94 + void __ref free_initmem(void) 95 95 { 96 96 free_initmem_default(POISON_FREE_INITMEM); 97 97 }
+2 -2
arch/sh/drivers/pci/pci.c
··· 221 221 * We can't use pci_find_device() here since we are 222 222 * called from interrupt context. 223 223 */ 224 - static void __init_refok 224 + static void __ref 225 225 pcibios_bus_report_status(struct pci_bus *bus, unsigned int status_mask, 226 226 int warn) 227 227 { ··· 256 256 pcibios_bus_report_status(dev->subordinate, status_mask, warn); 257 257 } 258 258 259 - void __init_refok pcibios_report_status(unsigned int status_mask, int warn) 259 + void __ref pcibios_report_status(unsigned int status_mask, int warn) 260 260 { 261 261 struct pci_channel *hose; 262 262
-26
arch/sh/include/asm/thread_info.h
··· 151 151 * ever touches our thread-synchronous status, so we don't 152 152 * have to worry about atomic accesses. 153 153 */ 154 - #define TS_RESTORE_SIGMASK 0x0001 /* restore signal mask in do_signal() */ 155 154 #define TS_USEDFPU 0x0002 /* FPU used by this task this quantum */ 156 155 157 156 #ifndef __ASSEMBLY__ 158 - 159 - #define HAVE_SET_RESTORE_SIGMASK 1 160 - static inline void set_restore_sigmask(void) 161 - { 162 - struct thread_info *ti = current_thread_info(); 163 - ti->status |= TS_RESTORE_SIGMASK; 164 - WARN_ON(!test_bit(TIF_SIGPENDING, (unsigned long *)&ti->flags)); 165 - } 166 157 167 158 #define TI_FLAG_FAULT_CODE_SHIFT 24 168 159 ··· 171 180 { 172 181 struct thread_info *ti = current_thread_info(); 173 182 return ti->flags >> TI_FLAG_FAULT_CODE_SHIFT; 174 - } 175 - 176 - static inline void clear_restore_sigmask(void) 177 - { 178 - current_thread_info()->status &= ~TS_RESTORE_SIGMASK; 179 - } 180 - static inline bool test_restore_sigmask(void) 181 - { 182 - return current_thread_info()->status & TS_RESTORE_SIGMASK; 183 - } 184 - static inline bool test_and_clear_restore_sigmask(void) 185 - { 186 - struct thread_info *ti = current_thread_info(); 187 - if (!(ti->status & TS_RESTORE_SIGMASK)) 188 - return false; 189 - ti->status &= ~TS_RESTORE_SIGMASK; 190 - return true; 191 183 } 192 184 193 185 #endif /* !__ASSEMBLY__ */
+1 -1
arch/sh/mm/ioremap.c
··· 34 34 * have to convert them into an offset in a page-aligned mapping, but the 35 35 * caller shouldn't need to know that small detail. 36 36 */ 37 - void __iomem * __init_refok 37 + void __iomem * __ref 38 38 __ioremap_caller(phys_addr_t phys_addr, unsigned long size, 39 39 pgprot_t pgprot, void *caller) 40 40 {
-24
arch/sparc/include/asm/thread_info_64.h
··· 222 222 * 223 223 * Note that there are only 8 bits available. 224 224 */ 225 - #define TS_RESTORE_SIGMASK 0x0001 /* restore signal mask in do_signal() */ 226 225 227 226 #ifndef __ASSEMBLY__ 228 - #define HAVE_SET_RESTORE_SIGMASK 1 229 - static inline void set_restore_sigmask(void) 230 - { 231 - struct thread_info *ti = current_thread_info(); 232 - ti->status |= TS_RESTORE_SIGMASK; 233 - WARN_ON(!test_bit(TIF_SIGPENDING, &ti->flags)); 234 - } 235 - static inline void clear_restore_sigmask(void) 236 - { 237 - current_thread_info()->status &= ~TS_RESTORE_SIGMASK; 238 - } 239 - static inline bool test_restore_sigmask(void) 240 - { 241 - return current_thread_info()->status & TS_RESTORE_SIGMASK; 242 - } 243 - static inline bool test_and_clear_restore_sigmask(void) 244 - { 245 - struct thread_info *ti = current_thread_info(); 246 - if (!(ti->status & TS_RESTORE_SIGMASK)) 247 - return false; 248 - ti->status &= ~TS_RESTORE_SIGMASK; 249 - return true; 250 - } 251 227 252 228 #define thread32_stack_is_64bit(__SP) (((__SP) & 0x1) != 0) 253 229 #define test_thread_64bit_stack(__SP) \
-27
arch/tile/include/asm/thread_info.h
··· 166 166 #ifdef __tilegx__ 167 167 #define TS_COMPAT 0x0001 /* 32-bit compatibility mode */ 168 168 #endif 169 - #define TS_RESTORE_SIGMASK 0x0008 /* restore signal mask in do_signal */ 170 - 171 - #ifndef __ASSEMBLY__ 172 - #define HAVE_SET_RESTORE_SIGMASK 1 173 - static inline void set_restore_sigmask(void) 174 - { 175 - struct thread_info *ti = current_thread_info(); 176 - ti->status |= TS_RESTORE_SIGMASK; 177 - WARN_ON(!test_bit(TIF_SIGPENDING, &ti->flags)); 178 - } 179 - static inline void clear_restore_sigmask(void) 180 - { 181 - current_thread_info()->status &= ~TS_RESTORE_SIGMASK; 182 - } 183 - static inline bool test_restore_sigmask(void) 184 - { 185 - return current_thread_info()->status & TS_RESTORE_SIGMASK; 186 - } 187 - static inline bool test_and_clear_restore_sigmask(void) 188 - { 189 - struct thread_info *ti = current_thread_info(); 190 - if (!(ti->status & TS_RESTORE_SIGMASK)) 191 - return false; 192 - ti->status &= ~TS_RESTORE_SIGMASK; 193 - return true; 194 - } 195 - #endif /* !__ASSEMBLY__ */ 196 169 197 170 #endif /* _ASM_TILE_THREAD_INFO_H */
-24
arch/x86/include/asm/thread_info.h
··· 219 219 * have to worry about atomic accesses. 220 220 */ 221 221 #define TS_COMPAT 0x0002 /* 32bit syscall active (64BIT)*/ 222 - #define TS_RESTORE_SIGMASK 0x0008 /* restore signal mask in do_signal() */ 223 222 224 223 #ifndef __ASSEMBLY__ 225 - #define HAVE_SET_RESTORE_SIGMASK 1 226 - static inline void set_restore_sigmask(void) 227 - { 228 - struct thread_info *ti = current_thread_info(); 229 - ti->status |= TS_RESTORE_SIGMASK; 230 - WARN_ON(!test_bit(TIF_SIGPENDING, (unsigned long *)&ti->flags)); 231 - } 232 - static inline void clear_restore_sigmask(void) 233 - { 234 - current_thread_info()->status &= ~TS_RESTORE_SIGMASK; 235 - } 236 - static inline bool test_restore_sigmask(void) 237 - { 238 - return current_thread_info()->status & TS_RESTORE_SIGMASK; 239 - } 240 - static inline bool test_and_clear_restore_sigmask(void) 241 - { 242 - struct thread_info *ti = current_thread_info(); 243 - if (!(ti->status & TS_RESTORE_SIGMASK)) 244 - return false; 245 - ti->status &= ~TS_RESTORE_SIGMASK; 246 - return true; 247 - } 248 224 249 225 static inline bool in_ia32_syscall(void) 250 226 {
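The per-arch `TS_RESTORE_SIGMASK` helpers deleted across sh, sparc, tile, and x86 above all followed one pattern, which the series replaces with a single generic copy. A user-space sketch of that pattern (the struct below merely stands in for `struct thread_info`):

```c
#include <assert.h>
#include <stdbool.h>

#define TS_RESTORE_SIGMASK 0x0008	/* restore signal mask in do_signal() */

/* Stand-in for struct thread_info; only the status word matters here. */
struct fake_thread_info { unsigned int status; };

static struct fake_thread_info ti;

static void set_restore_sigmask(void)   { ti.status |= TS_RESTORE_SIGMASK; }
static void clear_restore_sigmask(void) { ti.status &= ~TS_RESTORE_SIGMASK; }
static bool test_restore_sigmask(void)  { return ti.status & TS_RESTORE_SIGMASK; }

static bool test_and_clear_restore_sigmask(void)
{
	if (!(ti.status & TS_RESTORE_SIGMASK))
		return false;
	ti.status &= ~TS_RESTORE_SIGMASK;
	return true;
}
```

Each architecture carried its own near-identical copy of these four helpers; hoisting them into common code is what lets the hunks above delete them wholesale.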
+2 -2
arch/x86/mm/init.c
··· 208 208 * adjust the page_size_mask for small range to go with 209 209 * big page size instead small one if nearby are ram too. 210 210 */ 211 - static void __init_refok adjust_range_page_size_mask(struct map_range *mr, 211 + static void __ref adjust_range_page_size_mask(struct map_range *mr, 212 212 int nr_range) 213 213 { 214 214 int i; ··· 396 396 * This runs before bootmem is initialized and gets pages directly from 397 397 * the physical memory. To access them they are temporarily mapped. 398 398 */ 399 - unsigned long __init_refok init_memory_mapping(unsigned long start, 399 + unsigned long __ref init_memory_mapping(unsigned long start, 400 400 unsigned long end) 401 401 { 402 402 struct map_range mr[NR_RANGE_MR];
+2 -2
arch/x86/platform/efi/early_printk.c
··· 44 44 * In case earlyprintk=efi,keep we have the whole framebuffer mapped already 45 45 * so just return the offset efi_fb + start. 46 46 */ 47 - static __init_refok void *early_efi_map(unsigned long start, unsigned long len) 47 + static __ref void *early_efi_map(unsigned long start, unsigned long len) 48 48 { 49 49 unsigned long base; 50 50 ··· 56 56 return early_ioremap(base + start, len); 57 57 } 58 58 59 - static __init_refok void early_efi_unmap(void *addr, unsigned long len) 59 + static __ref void early_efi_unmap(void *addr, unsigned long len) 60 60 { 61 61 if (!efi_fb) 62 62 early_iounmap(addr, len);
+2 -3
arch/x86/xen/enlighten.c
··· 34 34 #include <linux/edd.h> 35 35 #include <linux/frame.h> 36 36 37 - #ifdef CONFIG_KEXEC_CORE 38 37 #include <linux/kexec.h> 39 - #endif 40 38 41 39 #include <xen/xen.h> 42 40 #include <xen/events.h> ··· 1332 1334 static int 1333 1335 xen_panic_event(struct notifier_block *this, unsigned long event, void *ptr) 1334 1336 { 1335 - xen_reboot(SHUTDOWN_crash); 1337 + if (!kexec_crash_loaded()) 1338 + xen_reboot(SHUTDOWN_crash); 1336 1339 return NOTIFY_DONE; 1337 1340 } 1338 1341
+2 -3
drivers/acpi/osl.c
··· 309 309 * During early init (when acpi_gbl_permanent_mmap has not been set yet) this 310 310 * routine simply calls __acpi_map_table() to get the job done. 311 311 */ 312 - void __iomem *__init_refok 312 + void __iomem *__ref 313 313 acpi_os_map_iomem(acpi_physical_address phys, acpi_size size) 314 314 { 315 315 struct acpi_ioremap *map; ··· 362 362 } 363 363 EXPORT_SYMBOL_GPL(acpi_os_map_iomem); 364 364 365 - void *__init_refok 366 - acpi_os_map_memory(acpi_physical_address phys, acpi_size size) 365 + void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size) 367 366 { 368 367 return (void *)acpi_os_map_iomem(phys, size); 369 368 }
+127 -56
drivers/base/firmware_class.c
··· 46 46 extern struct builtin_fw __start_builtin_fw[]; 47 47 extern struct builtin_fw __end_builtin_fw[]; 48 48 49 - static bool fw_get_builtin_firmware(struct firmware *fw, const char *name) 49 + static bool fw_get_builtin_firmware(struct firmware *fw, const char *name, 50 + void *buf, size_t size) 50 51 { 51 52 struct builtin_fw *b_fw; 52 53 ··· 55 54 if (strcmp(name, b_fw->name) == 0) { 56 55 fw->size = b_fw->size; 57 56 fw->data = b_fw->data; 57 + 58 + if (buf && fw->size <= size) 59 + memcpy(buf, fw->data, fw->size); 58 60 return true; 59 61 } 60 62 } ··· 78 74 79 75 #else /* Module case - no builtin firmware support */ 80 76 81 - static inline bool fw_get_builtin_firmware(struct firmware *fw, const char *name) 77 + static inline bool fw_get_builtin_firmware(struct firmware *fw, 78 + const char *name, void *buf, 79 + size_t size) 82 80 { 83 81 return false; 84 82 } ··· 118 112 #define FW_OPT_FALLBACK 0 119 113 #endif 120 114 #define FW_OPT_NO_WARN (1U << 3) 115 + #define FW_OPT_NOCACHE (1U << 4) 121 116 122 117 struct firmware_cache { 123 118 /* firmware_buf instance will be added into the below list */ ··· 150 143 unsigned long status; 151 144 void *data; 152 145 size_t size; 146 + size_t allocated_size; 153 147 #ifdef CONFIG_FW_LOADER_USER_HELPER 154 148 bool is_paged_buf; 155 149 bool need_uevent; ··· 186 178 static struct firmware_cache fw_cache; 187 179 188 180 static struct firmware_buf *__allocate_fw_buf(const char *fw_name, 189 - struct firmware_cache *fwc) 181 + struct firmware_cache *fwc, 182 + void *dbuf, size_t size) 190 183 { 191 184 struct firmware_buf *buf; 192 185 ··· 203 194 204 195 kref_init(&buf->ref); 205 196 buf->fwc = fwc; 197 + buf->data = dbuf; 198 + buf->allocated_size = size; 206 199 init_completion(&buf->completion); 207 200 #ifdef CONFIG_FW_LOADER_USER_HELPER 208 201 INIT_LIST_HEAD(&buf->pending_list); ··· 228 217 229 218 static int fw_lookup_and_allocate_buf(const char *fw_name, 230 219 struct firmware_cache *fwc, 231 - struct 
firmware_buf **buf) 220 + struct firmware_buf **buf, void *dbuf, 221 + size_t size) 232 222 { 233 223 struct firmware_buf *tmp; 234 224 ··· 241 229 *buf = tmp; 242 230 return 1; 243 231 } 244 - tmp = __allocate_fw_buf(fw_name, fwc); 232 + tmp = __allocate_fw_buf(fw_name, fwc, dbuf, size); 245 233 if (tmp) 246 234 list_add(&tmp->list, &fwc->head); 247 235 spin_unlock(&fwc->lock); ··· 273 261 vfree(buf->pages); 274 262 } else 275 263 #endif 264 + if (!buf->allocated_size) 276 265 vfree(buf->data); 277 266 kfree_const(buf->fw_id); 278 267 kfree(buf); ··· 314 301 mutex_unlock(&fw_lock); 315 302 } 316 303 317 - static int fw_get_filesystem_firmware(struct device *device, 318 - struct firmware_buf *buf) 304 + static int 305 + fw_get_filesystem_firmware(struct device *device, struct firmware_buf *buf) 319 306 { 320 307 loff_t size; 321 308 int i, len; 322 309 int rc = -ENOENT; 323 310 char *path; 311 + enum kernel_read_file_id id = READING_FIRMWARE; 312 + size_t msize = INT_MAX; 313 + 314 + /* Already populated data member means we're loading into a buffer */ 315 + if (buf->data) { 316 + id = READING_FIRMWARE_PREALLOC_BUFFER; 317 + msize = buf->allocated_size; 318 + } 324 319 325 320 path = __getname(); 326 321 if (!path) ··· 347 326 } 348 327 349 328 buf->size = 0; 350 - rc = kernel_read_file_from_path(path, &buf->data, &size, 351 - INT_MAX, READING_FIRMWARE); 329 + rc = kernel_read_file_from_path(path, &buf->data, &size, msize, 330 + id); 352 331 if (rc) { 353 332 if (rc == -ENOENT) 354 333 dev_dbg(device, "loading %s failed with error %d\n", ··· 712 691 713 692 static DEVICE_ATTR(loading, 0644, firmware_loading_show, firmware_loading_store); 714 693 694 + static void firmware_rw_buf(struct firmware_buf *buf, char *buffer, 695 + loff_t offset, size_t count, bool read) 696 + { 697 + if (read) 698 + memcpy(buffer, buf->data + offset, count); 699 + else 700 + memcpy(buf->data + offset, buffer, count); 701 + } 702 + 703 + static void firmware_rw(struct firmware_buf *buf, 
char *buffer, 704 + loff_t offset, size_t count, bool read) 705 + { 706 + while (count) { 707 + void *page_data; 708 + int page_nr = offset >> PAGE_SHIFT; 709 + int page_ofs = offset & (PAGE_SIZE-1); 710 + int page_cnt = min_t(size_t, PAGE_SIZE - page_ofs, count); 711 + 712 + page_data = kmap(buf->pages[page_nr]); 713 + 714 + if (read) 715 + memcpy(buffer, page_data + page_ofs, page_cnt); 716 + else 717 + memcpy(page_data + page_ofs, buffer, page_cnt); 718 + 719 + kunmap(buf->pages[page_nr]); 720 + buffer += page_cnt; 721 + offset += page_cnt; 722 + count -= page_cnt; 723 + } 724 + } 725 + 715 726 static ssize_t firmware_data_read(struct file *filp, struct kobject *kobj, 716 727 struct bin_attribute *bin_attr, 717 728 char *buffer, loff_t offset, size_t count) ··· 768 715 769 716 ret_count = count; 770 717 771 - while (count) { 772 - void *page_data; 773 - int page_nr = offset >> PAGE_SHIFT; 774 - int page_ofs = offset & (PAGE_SIZE-1); 775 - int page_cnt = min_t(size_t, PAGE_SIZE - page_ofs, count); 718 + if (buf->data) 719 + firmware_rw_buf(buf, buffer, offset, count, true); 720 + else 721 + firmware_rw(buf, buffer, offset, count, true); 776 722 777 - page_data = kmap(buf->pages[page_nr]); 778 - 779 - memcpy(buffer, page_data + page_ofs, page_cnt); 780 - 781 - kunmap(buf->pages[page_nr]); 782 - buffer += page_cnt; 783 - offset += page_cnt; 784 - count -= page_cnt; 785 - } 786 723 out: 787 724 mutex_unlock(&fw_lock); 788 725 return ret_count; ··· 847 804 goto out; 848 805 } 849 806 850 - retval = fw_realloc_buffer(fw_priv, offset + count); 851 - if (retval) 852 - goto out; 807 + if (buf->data) { 808 + if (offset + count > buf->allocated_size) { 809 + retval = -ENOMEM; 810 + goto out; 811 + } 812 + firmware_rw_buf(buf, buffer, offset, count, false); 813 + retval = count; 814 + } else { 815 + retval = fw_realloc_buffer(fw_priv, offset + count); 816 + if (retval) 817 + goto out; 853 818 854 - retval = count; 855 - 856 - while (count) { 857 - void *page_data; 858 - int 
page_nr = offset >> PAGE_SHIFT; 859 - int page_ofs = offset & (PAGE_SIZE - 1); 860 - int page_cnt = min_t(size_t, PAGE_SIZE - page_ofs, count); 861 - 862 - page_data = kmap(buf->pages[page_nr]); 863 - 864 - memcpy(page_data + page_ofs, buffer, page_cnt); 865 - 866 - kunmap(buf->pages[page_nr]); 867 - buffer += page_cnt; 868 - offset += page_cnt; 869 - count -= page_cnt; 819 + retval = count; 820 + firmware_rw(buf, buffer, offset, count, false); 870 821 } 871 822 872 - buf->size = max_t(size_t, offset, buf->size); 823 + buf->size = max_t(size_t, offset + count, buf->size); 873 824 out: 874 825 mutex_unlock(&fw_lock); 875 826 return retval; ··· 931 894 struct firmware_buf *buf = fw_priv->buf; 932 895 933 896 /* fall back on userspace loading */ 934 - buf->is_paged_buf = true; 897 + if (!buf->data) 898 + buf->is_paged_buf = true; 935 899 936 900 dev_set_uevent_suppress(f_dev, true); 937 901 ··· 967 929 968 930 if (is_fw_load_aborted(buf)) 969 931 retval = -EAGAIN; 970 - else if (!buf->data) 932 + else if (buf->is_paged_buf && !buf->data) 971 933 retval = -ENOMEM; 972 934 973 935 device_del(f_dev); ··· 1050 1012 */ 1051 1013 static int 1052 1014 _request_firmware_prepare(struct firmware **firmware_p, const char *name, 1053 - struct device *device) 1015 + struct device *device, void *dbuf, size_t size) 1054 1016 { 1055 1017 struct firmware *firmware; 1056 1018 struct firmware_buf *buf; ··· 1063 1025 return -ENOMEM; 1064 1026 } 1065 1027 1066 - if (fw_get_builtin_firmware(firmware, name)) { 1028 + if (fw_get_builtin_firmware(firmware, name, dbuf, size)) { 1067 1029 dev_dbg(device, "using built-in %s\n", name); 1068 1030 return 0; /* assigned */ 1069 1031 } 1070 1032 1071 - ret = fw_lookup_and_allocate_buf(name, &fw_cache, &buf); 1033 + ret = fw_lookup_and_allocate_buf(name, &fw_cache, &buf, dbuf, size); 1072 1034 1073 1035 /* 1074 1036 * bind with 'buf' now to avoid warning in failure path ··· 1108 1070 * should be fixed in devres or driver core. 
1109 1071 */ 1110 1072 /* don't cache firmware handled without uevent */ 1111 - if (device && (opt_flags & FW_OPT_UEVENT)) 1073 + if (device && (opt_flags & FW_OPT_UEVENT) && 1074 + !(opt_flags & FW_OPT_NOCACHE)) 1112 1075 fw_add_devm_name(device, buf->fw_id); 1113 1076 1114 1077 /* 1115 1078 * After caching firmware image is started, let it piggyback 1116 1079 * on request firmware. 1117 1080 */ 1118 - if (buf->fwc->state == FW_LOADER_START_CACHE) { 1081 + if (!(opt_flags & FW_OPT_NOCACHE) && 1082 + buf->fwc->state == FW_LOADER_START_CACHE) { 1119 1083 if (fw_cache_piggyback_on_request(buf->fw_id)) 1120 1084 kref_get(&buf->ref); 1121 1085 } ··· 1131 1091 /* called from request_firmware() and request_firmware_work_func() */ 1132 1092 static int 1133 1093 _request_firmware(const struct firmware **firmware_p, const char *name, 1134 - struct device *device, unsigned int opt_flags) 1094 + struct device *device, void *buf, size_t size, 1095 + unsigned int opt_flags) 1135 1096 { 1136 1097 struct firmware *fw = NULL; 1137 1098 long timeout; ··· 1146 1105 goto out; 1147 1106 } 1148 1107 1149 - ret = _request_firmware_prepare(&fw, name, device); 1108 + ret = _request_firmware_prepare(&fw, name, device, buf, size); 1150 1109 if (ret <= 0) /* error or already assigned */ 1151 1110 goto out; 1152 1111 ··· 1225 1184 1226 1185 /* Need to pin this module until return */ 1227 1186 __module_get(THIS_MODULE); 1228 - ret = _request_firmware(firmware_p, name, device, 1187 + ret = _request_firmware(firmware_p, name, device, NULL, 0, 1229 1188 FW_OPT_UEVENT | FW_OPT_FALLBACK); 1230 1189 module_put(THIS_MODULE); 1231 1190 return ret; ··· 1249 1208 int ret; 1250 1209 1251 1210 __module_get(THIS_MODULE); 1252 - ret = _request_firmware(firmware_p, name, device, 1211 + ret = _request_firmware(firmware_p, name, device, NULL, 0, 1253 1212 FW_OPT_UEVENT | FW_OPT_NO_WARN); 1254 1213 module_put(THIS_MODULE); 1255 1214 return ret; 1256 1215 } 1257 1216 EXPORT_SYMBOL_GPL(request_firmware_direct); 
1217 + 1218 + /** 1219 + * request_firmware_into_buf - load firmware into a previously allocated buffer 1220 + * @firmware_p: pointer to firmware image 1221 + * @name: name of firmware file 1222 + * @device: device for which firmware is being loaded and DMA region allocated 1223 + * @buf: address of buffer to load firmware into 1224 + * @size: size of buffer 1225 + * 1226 + * This function works pretty much like request_firmware(), but it doesn't 1227 + * allocate a buffer to hold the firmware data. Instead, the firmware 1228 + * is loaded directly into the buffer pointed to by @buf and the @firmware_p 1229 + * data member is pointed at @buf. 1230 + * 1231 + * This function doesn't cache firmware either. 1232 + */ 1233 + int 1234 + request_firmware_into_buf(const struct firmware **firmware_p, const char *name, 1235 + struct device *device, void *buf, size_t size) 1236 + { 1237 + int ret; 1238 + 1239 + __module_get(THIS_MODULE); 1240 + ret = _request_firmware(firmware_p, name, device, buf, size, 1241 + FW_OPT_UEVENT | FW_OPT_FALLBACK | 1242 + FW_OPT_NOCACHE); 1243 + module_put(THIS_MODULE); 1244 + return ret; 1245 + } 1246 + EXPORT_SYMBOL(request_firmware_into_buf); 1258 1247 1259 1248 /** 1260 1249 * release_firmware: - release the resource associated with a firmware image ··· 1318 1247 1319 1248 fw_work = container_of(work, struct firmware_work, work); 1320 1249 1321 - _request_firmware(&fw, fw_work->name, fw_work->device, 1250 + _request_firmware(&fw, fw_work->name, fw_work->device, NULL, 0, 1322 1251 fw_work->opt_flags); 1323 1252 fw_work->cont(fw, fw_work->context); 1324 1253 put_device(fw_work->device); /* taken in request_firmware_nowait() */ ··· 1451 1380 1452 1381 pr_debug("%s: %s\n", __func__, fw_name); 1453 1382 1454 - if (fw_get_builtin_firmware(&fw, fw_name)) 1383 + if (fw_get_builtin_firmware(&fw, fw_name, NULL, 0)) 1455 1384 return 0; 1456 1385 1457 1386 buf = fw_lookup_buf(fw_name);
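The `firmware_rw()` helper factored out above walks a paged buffer by splitting each copy at page boundaries. A user-space sketch of that loop, using tiny 4-byte "pages" so it is easy to follow (the kernel uses `PAGE_SIZE` pages accessed through `kmap()`):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PG_SIZE 4	/* stand-in for PAGE_SIZE */

/* Copy count bytes at offset into/out of an array of fixed-size pages,
 * splitting each chunk at page boundaries as firmware_rw() does. */
static void fw_rw(char pages[][PG_SIZE], char *buffer,
		  size_t offset, size_t count, int read)
{
	while (count) {
		size_t nr  = offset / PG_SIZE;		  /* page index  */
		size_t ofs = offset % PG_SIZE;		  /* page offset */
		size_t cnt = PG_SIZE - ofs < count ?
			     PG_SIZE - ofs : count;	  /* chunk size  */

		if (read)
			memcpy(buffer, pages[nr] + ofs, cnt);
		else
			memcpy(pages[nr] + ofs, buffer, cnt);

		buffer += cnt;
		offset += cnt;
		count  -= cnt;
	}
}
```

The patch keeps this paged path for the user-helper case and adds the simpler `firmware_rw_buf()` memcpy path for the new preallocated-buffer case used by `request_firmware_into_buf()`.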
+1 -1
drivers/base/node.c
··· 370 370 #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE 371 371 #define page_initialized(page) (page->lru.next) 372 372 373 - static int __init_refok get_nid_for_pfn(unsigned long pfn) 373 + static int __ref get_nid_for_pfn(unsigned long pfn) 374 374 { 375 375 struct page *page; 376 376
-1
drivers/block/drbd/drbd_actlog.c
··· 27 27 #include <linux/crc32c.h> 28 28 #include <linux/drbd.h> 29 29 #include <linux/drbd_limits.h> 30 - #include <linux/dynamic_debug.h> 31 30 #include "drbd_int.h" 32 31 33 32
+1
drivers/block/drbd/drbd_int.h
··· 41 41 #include <linux/backing-dev.h> 42 42 #include <linux/genhd.h> 43 43 #include <linux/idr.h> 44 + #include <linux/dynamic_debug.h> 44 45 #include <net/tcp.h> 45 46 #include <linux/lru_cache.h> 46 47 #include <linux/prefetch.h>
+2 -2
drivers/clk/clkdev.c
··· 250 250 char con_id[MAX_CON_ID]; 251 251 }; 252 252 253 - static struct clk_lookup * __init_refok 253 + static struct clk_lookup * __ref 254 254 vclkdev_alloc(struct clk_hw *hw, const char *con_id, const char *dev_fmt, 255 255 va_list ap) 256 256 { ··· 287 287 return cl; 288 288 } 289 289 290 - struct clk_lookup * __init_refok 290 + struct clk_lookup * __ref 291 291 clkdev_alloc(struct clk *clk, const char *con_id, const char *dev_fmt, ...) 292 292 { 293 293 struct clk_lookup *cl;
+2 -15
drivers/memstick/core/ms_block.c
··· 2338 2338 .resume = msb_resume 2339 2339 }; 2340 2340 2341 - static int major; 2342 - 2343 2341 static int __init msb_init(void) 2344 2342 { 2345 - int rc = register_blkdev(0, DRIVER_NAME); 2346 - 2347 - if (rc < 0) { 2348 - pr_err("failed to register major (error %d)\n", rc); 2349 - return rc; 2350 - } 2351 - 2352 - major = rc; 2353 - rc = memstick_register_driver(&msb_driver); 2354 - if (rc) { 2355 - unregister_blkdev(major, DRIVER_NAME); 2343 + int rc = memstick_register_driver(&msb_driver); 2344 + if (rc) 2356 2345 pr_err("failed to register memstick driver (error %d)\n", rc); 2357 - } 2358 2346 2359 2347 return rc; 2360 2348 } ··· 2350 2362 static void __exit msb_exit(void) 2351 2363 { 2352 2364 memstick_unregister_driver(&msb_driver); 2353 - unregister_blkdev(major, DRIVER_NAME); 2354 2365 idr_destroy(&msb_disk_idr); 2355 2366 } 2356 2367
+1 -1
drivers/pci/xen-pcifront.c
··· 1086 1086 return err; 1087 1087 } 1088 1088 1089 - static void __init_refok pcifront_backend_changed(struct xenbus_device *xdev, 1089 + static void __ref pcifront_backend_changed(struct xenbus_device *xdev, 1090 1090 enum xenbus_state be_state) 1091 1091 { 1092 1092 struct pcifront_device *pdev = dev_get_drvdata(&xdev->dev);
+9
drivers/rapidio/Kconfig
··· 67 67 68 68 endchoice 69 69 70 + config RAPIDIO_CHMAN 71 + tristate "RapidIO Channelized Messaging driver" 72 + depends on RAPIDIO 73 + help 74 + This option includes RapidIO channelized messaging driver which 75 + provides socket-like interface to allow sharing of single RapidIO 76 + messaging mailbox between multiple user-space applications. 77 + See "Documentation/rapidio/rio_cm.txt" for driver description. 78 + 70 79 config RAPIDIO_MPORT_CDEV 71 80 tristate "RapidIO /dev mport device driver" 72 81 depends on RAPIDIO
+1
drivers/rapidio/Makefile
··· 5 5 rapidio-y := rio.o rio-access.o rio-driver.o rio-sysfs.o 6 6 7 7 obj-$(CONFIG_RAPIDIO_ENUM_BASIC) += rio-scan.o 8 + obj-$(CONFIG_RAPIDIO_CHMAN) += rio_cm.o 8 9 9 10 obj-$(CONFIG_RAPIDIO) += switches/ 10 11 obj-$(CONFIG_RAPIDIO) += devices/
+3 -3
drivers/rapidio/devices/rio_mport_cdev.c
··· 1813 1813 if (rdev->pef & RIO_PEF_EXT_FEATURES) { 1814 1814 rdev->efptr = rval & 0xffff; 1815 1815 rdev->phys_efptr = rio_mport_get_physefb(mport, 0, destid, 1816 - hopcount); 1816 + hopcount, &rdev->phys_rmap); 1817 1817 1818 1818 rdev->em_efptr = rio_mport_get_feature(mport, 0, destid, 1819 1819 hopcount, RIO_EFB_ERR_MGMNT); ··· 2242 2242 { 2243 2243 struct rio_mport_mapping *map = vma->vm_private_data; 2244 2244 2245 - rmcd_debug(MMAP, "0x%pad", &map->phys_addr); 2245 + rmcd_debug(MMAP, "%pad", &map->phys_addr); 2246 2246 kref_get(&map->ref); 2247 2247 } 2248 2248 ··· 2250 2250 { 2251 2251 struct rio_mport_mapping *map = vma->vm_private_data; 2252 2252 2253 - rmcd_debug(MMAP, "0x%pad", &map->phys_addr); 2253 + rmcd_debug(MMAP, "%pad", &map->phys_addr); 2254 2254 mutex_lock(&map->md->buf_mutex); 2255 2255 kref_put(&map->ref, mport_release_mapping); 2256 2256 mutex_unlock(&map->md->buf_mutex);
+45 -12
drivers/rapidio/devices/tsi721.c
··· 37 37 #include "tsi721.h" 38 38 39 39 #ifdef DEBUG 40 - u32 dbg_level = DBG_INIT | DBG_EXIT; 40 + u32 dbg_level; 41 41 module_param(dbg_level, uint, S_IWUSR | S_IRUGO); 42 42 MODULE_PARM_DESC(dbg_level, "Debugging output level (default 0 = none)"); 43 43 #endif 44 + 45 + static int pcie_mrrs = -1; 46 + module_param(pcie_mrrs, int, S_IRUGO); 47 + MODULE_PARM_DESC(pcie_mrrs, "PCIe MRRS override value (0...5)"); 48 + 49 + static u8 mbox_sel = 0x0f; 50 + module_param(mbox_sel, byte, S_IRUGO); 51 + MODULE_PARM_DESC(mbox_sel, 52 + "RIO Messaging MBOX Selection Mask (default: 0x0f = all)"); 44 53 45 54 static void tsi721_omsg_handler(struct tsi721_device *priv, int ch); 46 55 static void tsi721_imsg_handler(struct tsi721_device *priv, int ch); ··· 1090 1081 * from rstart to lstart. 1091 1082 */ 1092 1083 static int tsi721_rio_map_inb_mem(struct rio_mport *mport, dma_addr_t lstart, 1093 - u64 rstart, u32 size, u32 flags) 1084 + u64 rstart, u64 size, u32 flags) 1094 1085 { 1095 1086 struct tsi721_device *priv = mport->priv; 1096 1087 int i, avail = -1; ··· 1103 1094 struct tsi721_ib_win_mapping *map = NULL; 1104 1095 int ret = -EBUSY; 1105 1096 1097 + /* Max IBW size supported by HW is 16GB */ 1098 + if (size > 0x400000000UL) 1099 + return -EINVAL; 1100 + 1106 1101 if (direct) { 1107 1102 /* Calculate minimal acceptable window size and base address */ 1108 1103 ··· 1114 1101 ibw_start = lstart & ~(ibw_size - 1); 1115 1102 1116 1103 tsi_debug(IBW, &priv->pdev->dev, 1117 - "Direct (RIO_0x%llx -> PCIe_0x%pad), size=0x%x, ibw_start = 0x%llx", 1104 + "Direct (RIO_0x%llx -> PCIe_%pad), size=0x%llx, ibw_start = 0x%llx", 1118 1105 rstart, &lstart, size, ibw_start); 1119 1106 1120 1107 while ((lstart + size) > (ibw_start + ibw_size)) { 1121 1108 ibw_size *= 2; 1122 1109 ibw_start = lstart & ~(ibw_size - 1); 1123 - if (ibw_size > 0x80000000) { /* Limit max size to 2GB */ 1110 + /* Check for crossing IBW max size 16GB */ 1111 + if (ibw_size > 0x400000000UL) 1124 1112 return 
-EBUSY; 1125 - } 1126 1113 } 1127 1114 1128 1115 loc_start = ibw_start; ··· 1133 1120 1134 1121 } else { 1135 1122 tsi_debug(IBW, &priv->pdev->dev, 1136 - "Translated (RIO_0x%llx -> PCIe_0x%pad), size=0x%x", 1123 + "Translated (RIO_0x%llx -> PCIe_%pad), size=0x%llx", 1137 1124 rstart, &lstart, size); 1138 1125 1139 1126 if (!is_power_of_2(size) || size < 0x1000 || ··· 1228 1215 priv->ibwin_cnt--; 1229 1216 1230 1217 tsi_debug(IBW, &priv->pdev->dev, 1231 - "Configured IBWIN%d (RIO_0x%llx -> PCIe_0x%pad), size=0x%llx", 1218 + "Configured IBWIN%d (RIO_0x%llx -> PCIe_%pad), size=0x%llx", 1232 1219 i, ibw_start, &loc_start, ibw_size); 1233 1220 1234 1221 return 0; ··· 1250 1237 int i; 1251 1238 1252 1239 tsi_debug(IBW, &priv->pdev->dev, 1253 - "Unmap IBW mapped to PCIe_0x%pad", &lstart); 1240 + "Unmap IBW mapped to PCIe_%pad", &lstart); 1254 1241 1255 1242 /* Search for matching active inbound translation window */ 1256 1243 for (i = 0; i < TSI721_IBWIN_NUM; i++) { ··· 1890 1877 goto out; 1891 1878 } 1892 1879 1880 + if ((mbox_sel & (1 << mbox)) == 0) { 1881 + rc = -ENODEV; 1882 + goto out; 1883 + } 1884 + 1893 1885 priv->omsg_ring[mbox].dev_id = dev_id; 1894 1886 priv->omsg_ring[mbox].size = entries; 1895 1887 priv->omsg_ring[mbox].sts_rdptr = 0; ··· 2176 2158 (entries > TSI721_IMSGD_RING_SIZE) || 2177 2159 (!is_power_of_2(entries)) || mbox >= RIO_MAX_MBOX) { 2178 2160 rc = -EINVAL; 2161 + goto out; 2162 + } 2163 + 2164 + if ((mbox_sel & (1 << mbox)) == 0) { 2165 + rc = -ENODEV; 2179 2166 goto out; 2180 2167 } 2181 2168 ··· 2555 2532 struct tsi721_device *priv = mport->priv; 2556 2533 u32 rval; 2557 2534 2558 - rval = ioread32(priv->regs + (0x100 + RIO_PORT_N_ERR_STS_CSR(0))); 2535 + rval = ioread32(priv->regs + 0x100 + RIO_PORT_N_ERR_STS_CSR(0, 0)); 2559 2536 if (rval & RIO_PORT_N_ERR_STS_PORT_OK) { 2560 - rval = ioread32(priv->regs + (0x100 + RIO_PORT_N_CTL2_CSR(0))); 2537 + rval = ioread32(priv->regs + 0x100 + RIO_PORT_N_CTL2_CSR(0, 0)); 2561 2538 attr->link_speed = 
(rval & RIO_PORT_N_CTL2_SEL_BAUD) >> 28; 2562 - rval = ioread32(priv->regs + (0x100 + RIO_PORT_N_CTL_CSR(0))); 2539 + rval = ioread32(priv->regs + 0x100 + RIO_PORT_N_CTL_CSR(0, 0)); 2563 2540 attr->link_width = (rval & RIO_PORT_N_CTL_IPW) >> 27; 2564 2541 } else 2565 2542 attr->link_speed = RIO_LINK_DOWN; ··· 2673 2650 mport->ops = &tsi721_rio_ops; 2674 2651 mport->index = 0; 2675 2652 mport->sys_size = 0; /* small system */ 2676 - mport->phy_type = RIO_PHY_SERIAL; 2677 2653 mport->priv = (void *)priv; 2678 2654 mport->phys_efptr = 0x100; 2655 + mport->phys_rmap = 1; 2679 2656 mport->dev.parent = &pdev->dev; 2680 2657 mport->dev.release = tsi721_mport_release; 2681 2658 ··· 2862 2839 /* Clear "no snoop" and "relaxed ordering" bits. */ 2863 2840 pcie_capability_clear_and_set_word(pdev, PCI_EXP_DEVCTL, 2864 2841 PCI_EXP_DEVCTL_RELAX_EN | PCI_EXP_DEVCTL_NOSNOOP_EN, 0); 2842 + 2843 + /* Override PCIe Maximum Read Request Size setting if requested */ 2844 + if (pcie_mrrs >= 0) { 2845 + if (pcie_mrrs <= 5) 2846 + pcie_capability_clear_and_set_word(pdev, PCI_EXP_DEVCTL, 2847 + PCI_EXP_DEVCTL_READRQ, pcie_mrrs << 12); 2848 + else 2849 + tsi_info(&pdev->dev, 2850 + "Invalid MRRS override value %d", pcie_mrrs); 2851 + } 2865 2852 2866 2853 /* Adjust PCIe completion timeout. */ 2867 2854 pcie_capability_clear_and_set_word(pdev, PCI_EXP_DEVCTL2, 0xf, 0x2);
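The `pcie_mrrs` override above writes `pcie_mrrs << 12` into the `PCI_EXP_DEVCTL_READRQ` field because Max_Read_Request_Size occupies bits 14:12 of the PCIe Device Control register and encodes `128 << value` bytes. A stand-alone sketch of that math (helper names are mine):

```c
#include <assert.h>
#include <stdint.h>

#define PCI_EXP_DEVCTL_READRQ 0x7000	/* MRRS field, bits 14:12 */

/* Install a new MRRS encoding (0..5) into a Device Control value,
 * mirroring what pcie_capability_clear_and_set_word() does here. */
static uint16_t set_mrrs(uint16_t devctl, int mrrs)
{
	devctl &= ~PCI_EXP_DEVCTL_READRQ;	/* clear the old field */
	devctl |= (uint16_t)(mrrs << 12);	/* shift into bits 14:12 */
	return devctl;
}

/* Decode an MRRS field value to its size in bytes: 0 => 128 ... 5 => 4096. */
static unsigned int mrrs_bytes(int mrrs)
{
	return 128u << mrrs;
}
```

The 0..5 range check in the patch corresponds to the legal 128-byte through 4096-byte read request sizes; values 6 and 7 are reserved, hence the `tsi_info()` complaint for anything larger.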
+1 -1
drivers/rapidio/devices/tsi721.h
··· 661 661 */ 662 662 #define TSI721_DMA_CHNUM TSI721_DMA_MAXCH 663 663 664 - #define TSI721_DMACH_MAINT 0 /* DMA channel for maint requests */ 664 + #define TSI721_DMACH_MAINT 7 /* DMA channel for maint requests */ 665 665 #define TSI721_DMACH_MAINT_NBD 32 /* Number of BDs for maint requests */ 666 666 667 667 #define TSI721_DMACH_DMA 1 /* DMA channel for data transfers */
+18 -9
drivers/rapidio/devices/tsi721_dma.c
··· 36 36 37 37 #include "tsi721.h" 38 38 39 - #define TSI721_DMA_TX_QUEUE_SZ 16 /* number of transaction descriptors */ 40 - 41 39 #ifdef CONFIG_PCI_MSI 42 40 static irqreturn_t tsi721_bdma_msix(int irq, void *ptr); 43 41 #endif 44 42 static int tsi721_submit_sg(struct tsi721_tx_desc *desc); 45 43 46 44 static unsigned int dma_desc_per_channel = 128; 47 - module_param(dma_desc_per_channel, uint, S_IWUSR | S_IRUGO); 45 + module_param(dma_desc_per_channel, uint, S_IRUGO); 48 46 MODULE_PARM_DESC(dma_desc_per_channel, 49 47 "Number of DMA descriptors per channel (default: 128)"); 48 + 49 + static unsigned int dma_txqueue_sz = 16; 50 + module_param(dma_txqueue_sz, uint, S_IRUGO); 51 + MODULE_PARM_DESC(dma_txqueue_sz, 52 + "DMA Transactions Queue Size (default: 16)"); 53 + 54 + static u8 dma_sel = 0x7f; 55 + module_param(dma_sel, byte, S_IRUGO); 56 + MODULE_PARM_DESC(dma_sel, 57 + "DMA Channel Selection Mask (default: 0x7f = all)"); 50 58 51 59 static inline struct tsi721_bdma_chan *to_tsi721_chan(struct dma_chan *chan) 52 60 { ··· 726 718 cookie = dma_cookie_assign(txd); 727 719 desc->status = DMA_IN_PROGRESS; 728 720 list_add_tail(&desc->desc_node, &bdma_chan->queue); 721 + tsi721_advance_work(bdma_chan, NULL); 729 722 730 723 spin_unlock_bh(&bdma_chan->lock); 731 724 return cookie; ··· 741 732 tsi_debug(DMA, &dchan->dev->device, "DMAC%d", bdma_chan->id); 742 733 743 734 if (bdma_chan->bd_base) 744 - return TSI721_DMA_TX_QUEUE_SZ; 735 + return dma_txqueue_sz; 745 736 746 737 /* Initialize BDMA channel */ 747 738 if (tsi721_bdma_ch_init(bdma_chan, dma_desc_per_channel)) { ··· 751 742 } 752 743 753 744 /* Allocate queue of transaction descriptors */ 754 - desc = kcalloc(TSI721_DMA_TX_QUEUE_SZ, sizeof(struct tsi721_tx_desc), 745 + desc = kcalloc(dma_txqueue_sz, sizeof(struct tsi721_tx_desc), 755 746 GFP_ATOMIC); 756 747 if (!desc) { 757 748 tsi_err(&dchan->dev->device, ··· 763 754 764 755 bdma_chan->tx_desc = desc; 765 756 766 - for (i = 0; i < TSI721_DMA_TX_QUEUE_SZ; 
i++) { 757 + for (i = 0; i < dma_txqueue_sz; i++) { 767 758 dma_async_tx_descriptor_init(&desc[i].txd, dchan); 768 759 desc[i].txd.tx_submit = tsi721_tx_submit; 769 760 desc[i].txd.flags = DMA_CTRL_ACK; ··· 775 766 bdma_chan->active = true; 776 767 tsi721_bdma_interrupt_enable(bdma_chan, 1); 777 768 778 - return TSI721_DMA_TX_QUEUE_SZ; 769 + return dma_txqueue_sz; 779 770 } 780 771 781 772 static void tsi721_sync_dma_irq(struct tsi721_bdma_chan *bdma_chan) ··· 971 962 int i; 972 963 973 964 for (i = 0; i < TSI721_DMA_MAXCH; i++) { 974 - if (i != TSI721_DMACH_MAINT) 965 + if ((i != TSI721_DMACH_MAINT) && (dma_sel & (1 << i))) 975 966 tsi721_dma_stop(&priv->bdma[i]); 976 967 } 977 968 } ··· 988 979 for (i = 0; i < TSI721_DMA_MAXCH; i++) { 989 980 struct tsi721_bdma_chan *bdma_chan = &priv->bdma[i]; 990 981 991 - if (i == TSI721_DMACH_MAINT) 982 + if ((i == TSI721_DMACH_MAINT) || (dma_sel & (1 << i)) == 0) 992 983 continue; 993 984 994 985 bdma_chan->regs = priv->regs + TSI721_DMAC_BASE(i);
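For reference, the new `dma_sel` parameter gates which BDMA channels tsi721 registers or stops, via the `(dma_sel & (1 << i))` test added above. That selection logic can be modeled in plain user-space C; the channel count and the helper name here are illustrative, not taken from the driver:

```c
#include <assert.h>

#define TSI721_DMA_MAXCH	8	/* assumed channel count for this sketch */
#define TSI721_DMACH_MAINT	0	/* maintenance channel, never registered */

/* Stand-in for the new dma_sel module parameter (default 0x7f = all). */
static unsigned char dma_sel = 0x7f;

/* Mirrors the per-channel test added in this patch: skip the maintenance
 * channel and any channel whose bit is clear in the selection mask. */
static int tsi721_chan_selected(int i)
{
	return (i != TSI721_DMACH_MAINT) && (dma_sel & (1 << i)) != 0;
}
```

With the default mask 0x7f, channels 1-6 pass the test, channel 0 is excluded as the maintenance channel, and channel 7 is excluded by the mask.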
+18 -56
drivers/rapidio/rio-scan.c
··· 49 49 static int next_destid = 0; 50 50 static int next_comptag = 1; 51 51 52 - static int rio_mport_phys_table[] = { 53 - RIO_EFB_PAR_EP_ID, 54 - RIO_EFB_PAR_EP_REC_ID, 55 - RIO_EFB_SER_EP_ID, 56 - RIO_EFB_SER_EP_REC_ID, 57 - -1, 58 - }; 59 - 60 - 61 52 /** 62 53 * rio_destid_alloc - Allocate next available destID for given network 63 54 * @net: RIO network ··· 371 380 if (rdev->pef & RIO_PEF_EXT_FEATURES) { 372 381 rdev->efptr = result & 0xffff; 373 382 rdev->phys_efptr = rio_mport_get_physefb(port, 0, destid, 374 - hopcount); 383 + hopcount, &rdev->phys_rmap); 384 + pr_debug("RIO: %s Register Map %d device\n", 385 + __func__, rdev->phys_rmap); 375 386 376 387 rdev->em_efptr = rio_mport_get_feature(port, 0, destid, 377 388 hopcount, RIO_EFB_ERR_MGMNT); 389 + if (!rdev->em_efptr) 390 + rdev->em_efptr = rio_mport_get_feature(port, 0, destid, 391 + hopcount, RIO_EFB_ERR_MGMNT_HS); 378 392 } 379 393 380 394 rio_mport_read_config_32(port, destid, hopcount, RIO_SRC_OPS_CAR, ··· 441 445 rio_route_clr_table(rdev, RIO_GLOBAL_TABLE, 0); 442 446 } else { 443 447 if (do_enum) 444 - /*Enable Input Output Port (transmitter reviever)*/ 448 + /*Enable Input Output Port (transmitter receiver)*/ 445 449 rio_enable_rx_tx_port(port, 0, destid, hopcount, 0); 446 450 447 451 dev_set_name(&rdev->dev, "%02x:e:%04x", rdev->net->id, ··· 477 481 478 482 /** 479 483 * rio_sport_is_active- Tests if a switch port has an active connection. 480 - * @port: Master port to send transaction 481 - * @destid: Associated destination ID for switch 482 - * @hopcount: Hopcount to reach switch 483 - * @sport: Switch port number 484 + * @rdev: RapidIO device object 485 + * @sp: Switch port number 484 486 * 485 487 * Reads the port error status CSR for a particular switch port to 486 488 * determine if the port has an active link. Returns ··· 486 492 * inactive. 
487 493 */ 488 494 static int 489 - rio_sport_is_active(struct rio_mport *port, u16 destid, u8 hopcount, int sport) 495 + rio_sport_is_active(struct rio_dev *rdev, int sp) 490 496 { 491 497 u32 result = 0; 492 - u32 ext_ftr_ptr; 493 498 494 - ext_ftr_ptr = rio_mport_get_efb(port, 0, destid, hopcount, 0); 495 - 496 - while (ext_ftr_ptr) { 497 - rio_mport_read_config_32(port, destid, hopcount, 498 - ext_ftr_ptr, &result); 499 - result = RIO_GET_BLOCK_ID(result); 500 - if ((result == RIO_EFB_SER_EP_FREE_ID) || 501 - (result == RIO_EFB_SER_EP_FREE_ID_V13P) || 502 - (result == RIO_EFB_SER_EP_FREC_ID)) 503 - break; 504 - 505 - ext_ftr_ptr = rio_mport_get_efb(port, 0, destid, hopcount, 506 - ext_ftr_ptr); 507 - } 508 - 509 - if (ext_ftr_ptr) 510 - rio_mport_read_config_32(port, destid, hopcount, 511 - ext_ftr_ptr + 512 - RIO_PORT_N_ERR_STS_CSR(sport), 513 - &result); 499 + rio_read_config_32(rdev, RIO_DEV_PORT_N_ERR_STS_CSR(rdev, sp), 500 + &result); 514 501 515 502 return result & RIO_PORT_N_ERR_STS_PORT_OK; 516 503 } ··· 630 655 631 656 cur_destid = next_destid; 632 657 633 - if (rio_sport_is_active 634 - (port, RIO_ANY_DESTID(port->sys_size), hopcount, 635 - port_num)) { 658 + if (rio_sport_is_active(rdev, port_num)) { 636 659 pr_debug( 637 660 "RIO: scanning device on port %d\n", 638 661 port_num); ··· 758 785 if (RIO_GET_PORT_NUM(rdev->swpinfo) == port_num) 759 786 continue; 760 787 761 - if (rio_sport_is_active 762 - (port, destid, hopcount, port_num)) { 788 + if (rio_sport_is_active(rdev, port_num)) { 763 789 pr_debug( 764 790 "RIO: scanning device on port %d\n", 765 791 port_num); ··· 803 831 static int rio_mport_is_active(struct rio_mport *port) 804 832 { 805 833 u32 result = 0; 806 - u32 ext_ftr_ptr; 807 - int *entry = rio_mport_phys_table; 808 834 809 - do { 810 - if ((ext_ftr_ptr = 811 - rio_mport_get_feature(port, 1, 0, 0, *entry))) 812 - break; 813 - } while (*++entry >= 0); 814 - 815 - if (ext_ftr_ptr) 816 - rio_local_read_config_32(port, 817 - ext_ftr_ptr 
+ 818 - RIO_PORT_N_ERR_STS_CSR(port->index), 819 - &result); 820 - 835 + rio_local_read_config_32(port, 836 + port->phys_efptr + 837 + RIO_PORT_N_ERR_STS_CSR(port->index, port->phys_rmap), 838 + &result); 821 839 return result & RIO_PORT_N_ERR_STS_PORT_OK; 822 840 } 823 841
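The rio-scan change above makes device setup fall back to the rev.3 hot-swap error-management block (`RIO_EFB_ERR_MGMNT_HS`) when the plain `RIO_EFB_ERR_MGMNT` block is absent. A minimal user-space model of that extended-features lookup follows; the block IDs, the table, and the helper names are invented for the sketch and stand in for real config-space reads:

```c
#include <assert.h>
#include <stdint.h>

/* Invented stand-ins for the two block IDs (the real values live in
 * include/linux/rio_regs.h). */
#define EFB_ERR_MGMNT		0x0007
#define EFB_ERR_MGMNT_HS	0x0017

/* One extended-features block: config-space offset plus block ID. */
struct efb { uint32_t offset; uint16_t id; };

/* Return the offset of the first block matching @id, or 0 if absent
 * (0 is never a valid extended-features offset). */
static uint32_t get_feature(const struct efb *tbl, int n, uint16_t id)
{
	int i;

	for (i = 0; i < n; i++)
		if (tbl[i].id == id)
			return tbl[i].offset;
	return 0;
}

/* Mirrors the new rio-scan logic: prefer the plain error-management
 * block, fall back to the rev.3 hot-swap variant. */
static uint32_t find_em_block(const struct efb *tbl, int n)
{
	uint32_t p = get_feature(tbl, n, EFB_ERR_MGMNT);

	return p ? p : get_feature(tbl, n, EFB_ERR_MGMNT_HS);
}

/* Example device exposing only the hot-swap variant. */
static const struct efb demo_tbl[] = {
	{ 0x0100, 0x000a },		/* some unrelated block */
	{ 0x0140, EFB_ERR_MGMNT_HS },
};
```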
+125 -87
drivers/rapidio/rio.c
··· 268 268 mport->inb_msg[mbox].mcback = minb; 269 269 270 270 rc = mport->ops->open_inb_mbox(mport, dev_id, mbox, entries); 271 + if (rc) { 272 + mport->inb_msg[mbox].mcback = NULL; 273 + mport->inb_msg[mbox].res = NULL; 274 + release_resource(res); 275 + kfree(res); 276 + } 271 277 } else 272 278 rc = -ENOMEM; 273 279 ··· 291 285 */ 292 286 int rio_release_inb_mbox(struct rio_mport *mport, int mbox) 293 287 { 294 - if (mport->ops->close_inb_mbox) { 295 - mport->ops->close_inb_mbox(mport, mbox); 288 + int rc; 296 289 297 - /* Release the mailbox resource */ 298 - return release_resource(mport->inb_msg[mbox].res); 299 - } else 300 - return -ENOSYS; 290 + if (!mport->ops->close_inb_mbox || !mport->inb_msg[mbox].res) 291 + return -EINVAL; 292 + 293 + mport->ops->close_inb_mbox(mport, mbox); 294 + mport->inb_msg[mbox].mcback = NULL; 295 + 296 + rc = release_resource(mport->inb_msg[mbox].res); 297 + if (rc) 298 + return rc; 299 + 300 + kfree(mport->inb_msg[mbox].res); 301 + mport->inb_msg[mbox].res = NULL; 302 + 303 + return 0; 301 304 } 302 305 303 306 /** ··· 351 336 mport->outb_msg[mbox].mcback = moutb; 352 337 353 338 rc = mport->ops->open_outb_mbox(mport, dev_id, mbox, entries); 339 + if (rc) { 340 + mport->outb_msg[mbox].mcback = NULL; 341 + mport->outb_msg[mbox].res = NULL; 342 + release_resource(res); 343 + kfree(res); 344 + } 354 345 } else 355 346 rc = -ENOMEM; 356 347 ··· 374 353 */ 375 354 int rio_release_outb_mbox(struct rio_mport *mport, int mbox) 376 355 { 377 - if (mport->ops->close_outb_mbox) { 378 - mport->ops->close_outb_mbox(mport, mbox); 356 + int rc; 379 357 380 - /* Release the mailbox resource */ 381 - return release_resource(mport->outb_msg[mbox].res); 382 - } else 383 - return -ENOSYS; 358 + if (!mport->ops->close_outb_mbox || !mport->outb_msg[mbox].res) 359 + return -EINVAL; 360 + 361 + mport->ops->close_outb_mbox(mport, mbox); 362 + mport->outb_msg[mbox].mcback = NULL; 363 + 364 + rc = release_resource(mport->outb_msg[mbox].res); 365 + if 
(rc) 366 + return rc; 367 + 368 + kfree(mport->outb_msg[mbox].res); 369 + mport->outb_msg[mbox].res = NULL; 370 + 371 + return 0; 384 372 } 385 373 386 374 /** ··· 786 756 * @local: Indicate a local master port or remote device access 787 757 * @destid: Destination ID of the device 788 758 * @hopcount: Number of switch hops to the device 759 + * @rmap: pointer to location to store register map type info 789 760 */ 790 761 u32 791 762 rio_mport_get_physefb(struct rio_mport *port, int local, 792 - u16 destid, u8 hopcount) 763 + u16 destid, u8 hopcount, u32 *rmap) 793 764 { 794 765 u32 ext_ftr_ptr; 795 766 u32 ftr_header; ··· 808 777 ftr_header = RIO_GET_BLOCK_ID(ftr_header); 809 778 switch (ftr_header) { 810 779 811 - case RIO_EFB_SER_EP_ID_V13P: 812 - case RIO_EFB_SER_EP_REC_ID_V13P: 813 - case RIO_EFB_SER_EP_FREE_ID_V13P: 814 780 case RIO_EFB_SER_EP_ID: 815 781 case RIO_EFB_SER_EP_REC_ID: 816 782 case RIO_EFB_SER_EP_FREE_ID: 817 - case RIO_EFB_SER_EP_FREC_ID: 783 + case RIO_EFB_SER_EP_M1_ID: 784 + case RIO_EFB_SER_EP_SW_M1_ID: 785 + case RIO_EFB_SER_EPF_M1_ID: 786 + case RIO_EFB_SER_EPF_SW_M1_ID: 787 + *rmap = 1; 788 + return ext_ftr_ptr; 818 789 790 + case RIO_EFB_SER_EP_M2_ID: 791 + case RIO_EFB_SER_EP_SW_M2_ID: 792 + case RIO_EFB_SER_EPF_M2_ID: 793 + case RIO_EFB_SER_EPF_SW_M2_ID: 794 + *rmap = 2; 819 795 return ext_ftr_ptr; 820 796 821 797 default: ··· 881 843 u32 regval; 882 844 883 845 rio_read_config_32(rdev, 884 - rdev->phys_efptr + RIO_PORT_N_CTL_CSR(pnum), 885 - &regval); 846 + RIO_DEV_PORT_N_CTL_CSR(rdev, pnum), 847 + &regval); 886 848 if (lock) 887 849 regval |= RIO_PORT_N_CTL_LOCKOUT; 888 850 else 889 851 regval &= ~RIO_PORT_N_CTL_LOCKOUT; 890 852 891 853 rio_write_config_32(rdev, 892 - rdev->phys_efptr + RIO_PORT_N_CTL_CSR(pnum), 893 - regval); 854 + RIO_DEV_PORT_N_CTL_CSR(rdev, pnum), 855 + regval); 894 856 return 0; 895 857 } 896 858 EXPORT_SYMBOL_GPL(rio_set_port_lockout); ··· 914 876 #ifdef CONFIG_RAPIDIO_ENABLE_RX_TX_PORTS 915 877 u32 regval; 916 
878 u32 ext_ftr_ptr; 879 + u32 rmap; 917 880 918 881 /* 919 882 * enable rx input tx output port ··· 922 883 pr_debug("rio_enable_rx_tx_port(local = %d, destid = %d, hopcount = " 923 884 "%d, port_num = %d)\n", local, destid, hopcount, port_num); 924 885 925 - ext_ftr_ptr = rio_mport_get_physefb(port, local, destid, hopcount); 886 + ext_ftr_ptr = rio_mport_get_physefb(port, local, destid, 887 + hopcount, &rmap); 926 888 927 889 if (local) { 928 - rio_local_read_config_32(port, ext_ftr_ptr + 929 - RIO_PORT_N_CTL_CSR(0), 890 + rio_local_read_config_32(port, 891 + ext_ftr_ptr + RIO_PORT_N_CTL_CSR(0, rmap), 930 892 &regval); 931 893 } else { 932 894 if (rio_mport_read_config_32(port, destid, hopcount, 933 - ext_ftr_ptr + RIO_PORT_N_CTL_CSR(port_num), &regval) < 0) 895 + ext_ftr_ptr + RIO_PORT_N_CTL_CSR(port_num, rmap), 896 + &regval) < 0) 934 897 return -EIO; 935 898 } 936 899 937 - if (regval & RIO_PORT_N_CTL_P_TYP_SER) { 938 - /* serial */ 939 - regval = regval | RIO_PORT_N_CTL_EN_RX_SER 940 - | RIO_PORT_N_CTL_EN_TX_SER; 941 - } else { 942 - /* parallel */ 943 - regval = regval | RIO_PORT_N_CTL_EN_RX_PAR 944 - | RIO_PORT_N_CTL_EN_TX_PAR; 945 - } 900 + regval = regval | RIO_PORT_N_CTL_EN_RX | RIO_PORT_N_CTL_EN_TX; 946 901 947 902 if (local) { 948 - rio_local_write_config_32(port, ext_ftr_ptr + 949 - RIO_PORT_N_CTL_CSR(0), regval); 903 + rio_local_write_config_32(port, 904 + ext_ftr_ptr + RIO_PORT_N_CTL_CSR(0, rmap), regval); 950 905 } else { 951 906 if (rio_mport_write_config_32(port, destid, hopcount, 952 - ext_ftr_ptr + RIO_PORT_N_CTL_CSR(port_num), regval) < 0) 907 + ext_ftr_ptr + RIO_PORT_N_CTL_CSR(port_num, rmap), 908 + regval) < 0) 953 909 return -EIO; 954 910 } 955 911 #endif ··· 1046 1012 /* Read from link maintenance response register 1047 1013 * to clear valid bit */ 1048 1014 rio_read_config_32(rdev, 1049 - rdev->phys_efptr + RIO_PORT_N_MNT_RSP_CSR(pnum), 1015 + RIO_DEV_PORT_N_MNT_RSP_CSR(rdev, pnum), 1050 1016 &regval); 1051 1017 udelay(50); 1052 1018 } 
1053 1019 1054 1020 /* Issue Input-status command */ 1055 1021 rio_write_config_32(rdev, 1056 - rdev->phys_efptr + RIO_PORT_N_MNT_REQ_CSR(pnum), 1022 + RIO_DEV_PORT_N_MNT_REQ_CSR(rdev, pnum), 1057 1023 RIO_MNT_REQ_CMD_IS); 1058 1024 1059 1025 /* Exit if the response is not expected */ ··· 1064 1030 while (checkcount--) { 1065 1031 udelay(50); 1066 1032 rio_read_config_32(rdev, 1067 - rdev->phys_efptr + RIO_PORT_N_MNT_RSP_CSR(pnum), 1033 + RIO_DEV_PORT_N_MNT_RSP_CSR(rdev, pnum), 1068 1034 &regval); 1069 1035 if (regval & RIO_PORT_N_MNT_RSP_RVAL) { 1070 1036 *lnkresp = regval; ··· 1080 1046 * @rdev: Pointer to RIO device control structure 1081 1047 * @pnum: Switch port number to clear errors 1082 1048 * @err_status: port error status (if 0 reads register from device) 1049 + * 1050 + * TODO: Currently this routine is not compatible with recovery process 1051 + * specified for idt_gen3 RapidIO switch devices. It has to be reviewed 1052 + * to implement universal recovery process that is compatible full range 1053 + * off available devices. 1054 + * IDT gen3 switch driver now implements HW-specific error handler that 1055 + * issues soft port reset to the port to reset ERR_STOP bits and ackIDs. 
1083 1056 */ 1084 1057 static int rio_clr_err_stopped(struct rio_dev *rdev, u32 pnum, u32 err_status) 1085 1058 { ··· 1096 1055 1097 1056 if (err_status == 0) 1098 1057 rio_read_config_32(rdev, 1099 - rdev->phys_efptr + RIO_PORT_N_ERR_STS_CSR(pnum), 1058 + RIO_DEV_PORT_N_ERR_STS_CSR(rdev, pnum), 1100 1059 &err_status); 1101 1060 1102 - if (err_status & RIO_PORT_N_ERR_STS_PW_OUT_ES) { 1061 + if (err_status & RIO_PORT_N_ERR_STS_OUT_ES) { 1103 1062 pr_debug("RIO_EM: servicing Output Error-Stopped state\n"); 1104 1063 /* 1105 1064 * Send a Link-Request/Input-Status control symbol ··· 1114 1073 far_ackid = (regval & RIO_PORT_N_MNT_RSP_ASTAT) >> 5; 1115 1074 far_linkstat = regval & RIO_PORT_N_MNT_RSP_LSTAT; 1116 1075 rio_read_config_32(rdev, 1117 - rdev->phys_efptr + RIO_PORT_N_ACK_STS_CSR(pnum), 1076 + RIO_DEV_PORT_N_ACK_STS_CSR(rdev, pnum), 1118 1077 &regval); 1119 1078 pr_debug("RIO_EM: SP%d_ACK_STS_CSR=0x%08x\n", pnum, regval); 1120 1079 near_ackid = (regval & RIO_PORT_N_ACK_INBOUND) >> 24; ··· 1132 1091 * far inbound. 1133 1092 */ 1134 1093 rio_write_config_32(rdev, 1135 - rdev->phys_efptr + RIO_PORT_N_ACK_STS_CSR(pnum), 1094 + RIO_DEV_PORT_N_ACK_STS_CSR(rdev, pnum), 1136 1095 (near_ackid << 24) | 1137 1096 (far_ackid << 8) | far_ackid); 1138 1097 /* Align far outstanding/outbound ackIDs with 1139 1098 * near inbound. 
1140 1099 */ 1141 1100 far_ackid++; 1142 - if (nextdev) 1143 - rio_write_config_32(nextdev, 1144 - nextdev->phys_efptr + 1145 - RIO_PORT_N_ACK_STS_CSR(RIO_GET_PORT_NUM(nextdev->swpinfo)), 1146 - (far_ackid << 24) | 1147 - (near_ackid << 8) | near_ackid); 1148 - else 1149 - pr_debug("RIO_EM: Invalid nextdev pointer (NULL)\n"); 1101 + if (!nextdev) { 1102 + pr_debug("RIO_EM: nextdev pointer == NULL\n"); 1103 + goto rd_err; 1104 + } 1105 + 1106 + rio_write_config_32(nextdev, 1107 + RIO_DEV_PORT_N_ACK_STS_CSR(nextdev, 1108 + RIO_GET_PORT_NUM(nextdev->swpinfo)), 1109 + (far_ackid << 24) | 1110 + (near_ackid << 8) | near_ackid); 1150 1111 } 1151 1112 rd_err: 1152 - rio_read_config_32(rdev, 1153 - rdev->phys_efptr + RIO_PORT_N_ERR_STS_CSR(pnum), 1154 - &err_status); 1113 + rio_read_config_32(rdev, RIO_DEV_PORT_N_ERR_STS_CSR(rdev, pnum), 1114 + &err_status); 1155 1115 pr_debug("RIO_EM: SP%d_ERR_STS_CSR=0x%08x\n", pnum, err_status); 1156 1116 } 1157 1117 1158 - if ((err_status & RIO_PORT_N_ERR_STS_PW_INP_ES) && nextdev) { 1118 + if ((err_status & RIO_PORT_N_ERR_STS_INP_ES) && nextdev) { 1159 1119 pr_debug("RIO_EM: servicing Input Error-Stopped state\n"); 1160 1120 rio_get_input_status(nextdev, 1161 1121 RIO_GET_PORT_NUM(nextdev->swpinfo), NULL); 1162 1122 udelay(50); 1163 1123 1164 - rio_read_config_32(rdev, 1165 - rdev->phys_efptr + RIO_PORT_N_ERR_STS_CSR(pnum), 1166 - &err_status); 1124 + rio_read_config_32(rdev, RIO_DEV_PORT_N_ERR_STS_CSR(rdev, pnum), 1125 + &err_status); 1167 1126 pr_debug("RIO_EM: SP%d_ERR_STS_CSR=0x%08x\n", pnum, err_status); 1168 1127 } 1169 1128 1170 - return (err_status & (RIO_PORT_N_ERR_STS_PW_OUT_ES | 1171 - RIO_PORT_N_ERR_STS_PW_INP_ES)) ? 1 : 0; 1129 + return (err_status & (RIO_PORT_N_ERR_STS_OUT_ES | 1130 + RIO_PORT_N_ERR_STS_INP_ES)) ? 
1 : 0; 1172 1131 } 1173 1132 1174 1133 /** ··· 1268 1227 if (rdev->rswitch->ops && rdev->rswitch->ops->em_handle) 1269 1228 rdev->rswitch->ops->em_handle(rdev, portnum); 1270 1229 1271 - rio_read_config_32(rdev, 1272 - rdev->phys_efptr + RIO_PORT_N_ERR_STS_CSR(portnum), 1273 - &err_status); 1230 + rio_read_config_32(rdev, RIO_DEV_PORT_N_ERR_STS_CSR(rdev, portnum), 1231 + &err_status); 1274 1232 pr_debug("RIO_PW: SP%d_ERR_STS_CSR=0x%08x\n", portnum, err_status); 1275 1233 1276 1234 if (err_status & RIO_PORT_N_ERR_STS_PORT_OK) { ··· 1286 1246 * Depending on the link partner state, two attempts 1287 1247 * may be needed for successful recovery. 1288 1248 */ 1289 - if (err_status & (RIO_PORT_N_ERR_STS_PW_OUT_ES | 1290 - RIO_PORT_N_ERR_STS_PW_INP_ES)) { 1249 + if (err_status & (RIO_PORT_N_ERR_STS_OUT_ES | 1250 + RIO_PORT_N_ERR_STS_INP_ES)) { 1291 1251 if (rio_clr_err_stopped(rdev, portnum, err_status)) 1292 1252 rio_clr_err_stopped(rdev, portnum, 0); 1293 1253 } ··· 1297 1257 rdev->rswitch->port_ok &= ~(1 << portnum); 1298 1258 rio_set_port_lockout(rdev, portnum, 1); 1299 1259 1260 + if (rdev->phys_rmap == 1) { 1300 1261 rio_write_config_32(rdev, 1301 - rdev->phys_efptr + 1302 - RIO_PORT_N_ACK_STS_CSR(portnum), 1262 + RIO_DEV_PORT_N_ACK_STS_CSR(rdev, portnum), 1303 1263 RIO_PORT_N_ACK_CLEAR); 1264 + } else { 1265 + rio_write_config_32(rdev, 1266 + RIO_DEV_PORT_N_OB_ACK_CSR(rdev, portnum), 1267 + RIO_PORT_N_OB_ACK_CLEAR); 1268 + rio_write_config_32(rdev, 1269 + RIO_DEV_PORT_N_IB_ACK_CSR(rdev, portnum), 1270 + 0); 1271 + } 1304 1272 1305 1273 /* Schedule Extraction Service */ 1306 1274 pr_debug("RIO_PW: Device Extraction on [%s]-P%d\n", ··· 1337 1289 } 1338 1290 1339 1291 /* Clear remaining error bits and Port-Write Pending bit */ 1340 - rio_write_config_32(rdev, 1341 - rdev->phys_efptr + RIO_PORT_N_ERR_STS_CSR(portnum), 1342 - err_status); 1292 + rio_write_config_32(rdev, RIO_DEV_PORT_N_ERR_STS_CSR(rdev, portnum), 1293 + err_status); 1343 1294 1344 1295 return 0; 1345 
1296 } ··· 1389 1342 * Tell if a device supports a given RapidIO capability. 1390 1343 * Returns the offset of the requested extended feature 1391 1344 * block within the device's RIO configuration space or 1392 - * 0 in case the device does not support it. Possible 1393 - * values for @ftr: 1394 - * 1395 - * %RIO_EFB_PAR_EP_ID LP/LVDS EP Devices 1396 - * 1397 - * %RIO_EFB_PAR_EP_REC_ID LP/LVDS EP Recovery Devices 1398 - * 1399 - * %RIO_EFB_PAR_EP_FREE_ID LP/LVDS EP Free Devices 1400 - * 1401 - * %RIO_EFB_SER_EP_ID LP/Serial EP Devices 1402 - * 1403 - * %RIO_EFB_SER_EP_REC_ID LP/Serial EP Recovery Devices 1404 - * 1405 - * %RIO_EFB_SER_EP_FREE_ID LP/Serial EP Free Devices 1345 + * 0 in case the device does not support it. 1406 1346 */ 1407 1347 u32 1408 1348 rio_mport_get_feature(struct rio_mport * port, int local, u16 destid, ··· 1882 1848 * Initializes RapidIO capable DMA channel for the specified data transfer. 1883 1849 * Uses DMA channel private extension to pass information related to remote 1884 1850 * target RIO device. 1885 - * Returns pointer to DMA transaction descriptor or NULL if failed. 1851 + * 1852 + * Returns: pointer to DMA transaction descriptor if successful, 1853 + * error-valued pointer or NULL if failed. 1886 1854 */ 1887 1855 struct dma_async_tx_descriptor *rio_dma_prep_xfer(struct dma_chan *dchan, 1888 1856 u16 destid, struct rio_dma_data *data, ··· 1919 1883 * Initializes RapidIO capable DMA channel for the specified data transfer. 1920 1884 * Uses DMA channel private extension to pass information related to remote 1921 1885 * target RIO device. 1922 - * Returns pointer to DMA transaction descriptor or NULL if failed. 1886 + * 1887 + * Returns: pointer to DMA transaction descriptor if successful, 1888 + * error-valued pointer or NULL if failed. 1923 1889 */ 1924 1890 struct dma_async_tx_descriptor *rio_dma_prep_slave_sg(struct rio_dev *rdev, 1925 1891 struct dma_chan *dchan, struct rio_dma_data *data,
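The error-stopped recovery path in rio.c realigns ackIDs by packing them into the port ackID status CSR as `(near_ackid << 24) | (far_ackid << 8) | far_ackid`, then, after incrementing `far_ackid`, writing the mirrored word to the link partner. A small sketch of that packing (the helper name is invented):

```c
#include <assert.h>
#include <stdint.h>

/* Layout written to the port ackID status CSR by the recovery path:
 * inbound ackID in bits 31:24, outstanding/outbound ackID mirrored in
 * bits 15:8 and 7:0. */
static uint32_t ack_sts_word(uint8_t inbound, uint8_t outbound)
{
	return ((uint32_t)inbound << 24) |
	       ((uint32_t)outbound << 8) | outbound;
}

/* rio_clr_err_stopped() writes ack_sts_word(near_ackid, far_ackid) to
 * the local port, then, after bumping far_ackid, the mirrored
 * ack_sts_word(far_ackid, near_ackid) to nextdev's port. */
```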
+1 -1
drivers/rapidio/rio.h
··· 22 22 extern u32 rio_mport_get_feature(struct rio_mport *mport, int local, u16 destid, 23 23 u8 hopcount, int ftr); 24 24 extern u32 rio_mport_get_physefb(struct rio_mport *port, int local, 25 - u16 destid, u8 hopcount); 25 + u16 destid, u8 hopcount, u32 *rmap); 26 26 extern u32 rio_mport_get_efb(struct rio_mport *port, int local, u16 destid, 27 27 u8 hopcount, u32 from); 28 28 extern int rio_mport_chk_dev_access(struct rio_mport *mport, u16 destid,
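The extra `u32 *rmap` out-parameter added to `rio_mport_get_physefb()` reports which of the two rev.3 register maps the endpoint implements, and per-port CSR offsets (see the `RIO_PORT_N_*_CSR(x, m)` users in rio.c above) now depend on it. A hedged sketch of the calling convention; every offset and stride below is invented for illustration, not the kernel's:

```c
#include <assert.h>
#include <stdint.h>

/* Lookup reports the register-map type through an out-parameter and
 * returns the physical-layer extended-features block offset. */
static uint32_t get_physefb(int is_map2, uint32_t *rmap)
{
	*rmap = is_map2 ? 2 : 1;
	return 0x100;			/* pretend EFB offset */
}

/* Per-port error-status CSR offset, map-dependent (invented values). */
static uint32_t port_n_err_sts_csr(int port, uint32_t rmap)
{
	return (rmap == 1) ? 0x58 + 0x20 * port		/* map I stride */
			   : 0x1d8 + 0x40 * port;	/* map II stride */
}

/* Example caller: offset of a port's error-status CSR on some device. */
static uint32_t demo_err_sts_offset(int port, int is_map2)
{
	uint32_t rmap;

	(void)get_physefb(is_map2, &rmap);
	return port_n_err_sts_csr(port, rmap);
}
```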
+2366
drivers/rapidio/rio_cm.c
··· 1 + /* 2 + * rio_cm - RapidIO Channelized Messaging Driver 3 + * 4 + * Copyright 2013-2016 Integrated Device Technology, Inc. 5 + * Copyright (c) 2015, Prodrive Technologies 6 + * Copyright (c) 2015, RapidIO Trade Association 7 + * 8 + * This program is free software; you can redistribute it and/or modify it 9 + * under the terms of the GNU General Public License as published by the 10 + * Free Software Foundation; either version 2 of the License, or (at your 11 + * option) any later version. 12 + * 13 + * THIS PROGRAM IS DISTRIBUTED IN THE HOPE THAT IT WILL BE USEFUL, 14 + * BUT WITHOUT ANY WARRANTY; WITHOUT EVEN THE IMPLIED WARRANTY OF 15 + * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. SEE THE 16 + * GNU GENERAL PUBLIC LICENSE FOR MORE DETAILS. 17 + */ 18 + 19 + #include <linux/module.h> 20 + #include <linux/kernel.h> 21 + #include <linux/dma-mapping.h> 22 + #include <linux/delay.h> 23 + #include <linux/sched.h> 24 + #include <linux/rio.h> 25 + #include <linux/rio_drv.h> 26 + #include <linux/slab.h> 27 + #include <linux/idr.h> 28 + #include <linux/interrupt.h> 29 + #include <linux/cdev.h> 30 + #include <linux/fs.h> 31 + #include <linux/poll.h> 32 + #include <linux/reboot.h> 33 + #include <linux/bitops.h> 34 + #include <linux/printk.h> 35 + #include <linux/rio_cm_cdev.h> 36 + 37 + #define DRV_NAME "rio_cm" 38 + #define DRV_VERSION "1.0.0" 39 + #define DRV_AUTHOR "Alexandre Bounine <alexandre.bounine@idt.com>" 40 + #define DRV_DESC "RapidIO Channelized Messaging Driver" 41 + #define DEV_NAME "rio_cm" 42 + 43 + /* Debug output filtering masks */ 44 + enum { 45 + DBG_NONE = 0, 46 + DBG_INIT = BIT(0), /* driver init */ 47 + DBG_EXIT = BIT(1), /* driver exit */ 48 + DBG_MPORT = BIT(2), /* mport add/remove */ 49 + DBG_RDEV = BIT(3), /* RapidIO device add/remove */ 50 + DBG_CHOP = BIT(4), /* channel operations */ 51 + DBG_WAIT = BIT(5), /* waiting for events */ 52 + DBG_TX = BIT(6), /* message TX */ 53 + DBG_TX_EVENT = BIT(7), /* message TX event */ 54 + 
DBG_RX_DATA = BIT(8), /* inbound data messages */ 55 + DBG_RX_CMD = BIT(9), /* inbound REQ/ACK/NACK messages */ 56 + DBG_ALL = ~0, 57 + }; 58 + 59 + #ifdef DEBUG 60 + #define riocm_debug(level, fmt, arg...) \ 61 + do { \ 62 + if (DBG_##level & dbg_level) \ 63 + pr_debug(DRV_NAME ": %s " fmt "\n", \ 64 + __func__, ##arg); \ 65 + } while (0) 66 + #else 67 + #define riocm_debug(level, fmt, arg...) \ 68 + no_printk(KERN_DEBUG pr_fmt(DRV_NAME fmt "\n"), ##arg) 69 + #endif 70 + 71 + #define riocm_warn(fmt, arg...) \ 72 + pr_warn(DRV_NAME ": %s WARNING " fmt "\n", __func__, ##arg) 73 + 74 + #define riocm_error(fmt, arg...) \ 75 + pr_err(DRV_NAME ": %s ERROR " fmt "\n", __func__, ##arg) 76 + 77 + 78 + static int cmbox = 1; 79 + module_param(cmbox, int, S_IRUGO); 80 + MODULE_PARM_DESC(cmbox, "RapidIO Mailbox number (default 1)"); 81 + 82 + static int chstart = 256; 83 + module_param(chstart, int, S_IRUGO); 84 + MODULE_PARM_DESC(chstart, 85 + "Start channel number for dynamic allocation (default 256)"); 86 + 87 + #ifdef DEBUG 88 + static u32 dbg_level = DBG_NONE; 89 + module_param(dbg_level, uint, S_IWUSR | S_IRUGO); 90 + MODULE_PARM_DESC(dbg_level, "Debugging output level (default 0 = none)"); 91 + #endif 92 + 93 + MODULE_AUTHOR(DRV_AUTHOR); 94 + MODULE_DESCRIPTION(DRV_DESC); 95 + MODULE_LICENSE("GPL"); 96 + MODULE_VERSION(DRV_VERSION); 97 + 98 + #define RIOCM_TX_RING_SIZE 128 99 + #define RIOCM_RX_RING_SIZE 128 100 + #define RIOCM_CONNECT_TO 3 /* connect response TO (in sec) */ 101 + 102 + #define RIOCM_MAX_CHNUM 0xffff /* Use full range of u16 field */ 103 + #define RIOCM_CHNUM_AUTO 0 104 + #define RIOCM_MAX_EP_COUNT 0x10000 /* Max number of endpoints */ 105 + 106 + enum rio_cm_state { 107 + RIO_CM_IDLE, 108 + RIO_CM_CONNECT, 109 + RIO_CM_CONNECTED, 110 + RIO_CM_DISCONNECT, 111 + RIO_CM_CHAN_BOUND, 112 + RIO_CM_LISTEN, 113 + RIO_CM_DESTROYING, 114 + }; 115 + 116 + enum rio_cm_pkt_type { 117 + RIO_CM_SYS = 0xaa, 118 + RIO_CM_CHAN = 0x55, 119 + }; 120 + 121 + enum 
rio_cm_chop { 122 + CM_CONN_REQ, 123 + CM_CONN_ACK, 124 + CM_CONN_CLOSE, 125 + CM_DATA_MSG, 126 + }; 127 + 128 + struct rio_ch_base_bhdr { 129 + u32 src_id; 130 + u32 dst_id; 131 + #define RIO_HDR_LETTER_MASK 0xffff0000 132 + #define RIO_HDR_MBOX_MASK 0x0000ffff 133 + u8 src_mbox; 134 + u8 dst_mbox; 135 + u8 type; 136 + } __attribute__((__packed__)); 137 + 138 + struct rio_ch_chan_hdr { 139 + struct rio_ch_base_bhdr bhdr; 140 + u8 ch_op; 141 + u16 dst_ch; 142 + u16 src_ch; 143 + u16 msg_len; 144 + u16 rsrvd; 145 + } __attribute__((__packed__)); 146 + 147 + struct tx_req { 148 + struct list_head node; 149 + struct rio_dev *rdev; 150 + void *buffer; 151 + size_t len; 152 + }; 153 + 154 + struct cm_dev { 155 + struct list_head list; 156 + struct rio_mport *mport; 157 + void *rx_buf[RIOCM_RX_RING_SIZE]; 158 + int rx_slots; 159 + struct mutex rx_lock; 160 + 161 + void *tx_buf[RIOCM_TX_RING_SIZE]; 162 + int tx_slot; 163 + int tx_cnt; 164 + int tx_ack_slot; 165 + struct list_head tx_reqs; 166 + spinlock_t tx_lock; 167 + 168 + struct list_head peers; 169 + u32 npeers; 170 + struct workqueue_struct *rx_wq; 171 + struct work_struct rx_work; 172 + }; 173 + 174 + struct chan_rx_ring { 175 + void *buf[RIOCM_RX_RING_SIZE]; 176 + int head; 177 + int tail; 178 + int count; 179 + 180 + /* Tracking RX buffers reported to upper level */ 181 + void *inuse[RIOCM_RX_RING_SIZE]; 182 + int inuse_cnt; 183 + }; 184 + 185 + struct rio_channel { 186 + u16 id; /* local channel ID */ 187 + struct kref ref; /* channel refcount */ 188 + struct file *filp; 189 + struct cm_dev *cmdev; /* associated CM device object */ 190 + struct rio_dev *rdev; /* remote RapidIO device */ 191 + enum rio_cm_state state; 192 + int error; 193 + spinlock_t lock; 194 + void *context; 195 + u32 loc_destid; /* local destID */ 196 + u32 rem_destid; /* remote destID */ 197 + u16 rem_channel; /* remote channel ID */ 198 + struct list_head accept_queue; 199 + struct list_head ch_node; 200 + struct completion comp; 201 + 
struct completion comp_close; 202 + struct chan_rx_ring rx_ring; 203 + }; 204 + 205 + struct cm_peer { 206 + struct list_head node; 207 + struct rio_dev *rdev; 208 + }; 209 + 210 + struct rio_cm_work { 211 + struct work_struct work; 212 + struct cm_dev *cm; 213 + void *data; 214 + }; 215 + 216 + struct conn_req { 217 + struct list_head node; 218 + u32 destid; /* requester destID */ 219 + u16 chan; /* requester channel ID */ 220 + struct cm_dev *cmdev; 221 + }; 222 + 223 + /* 224 + * A channel_dev structure represents a CM_CDEV 225 + * @cdev Character device 226 + * @dev Associated device object 227 + */ 228 + struct channel_dev { 229 + struct cdev cdev; 230 + struct device *dev; 231 + }; 232 + 233 + static struct rio_channel *riocm_ch_alloc(u16 ch_num); 234 + static void riocm_ch_free(struct kref *ref); 235 + static int riocm_post_send(struct cm_dev *cm, struct rio_dev *rdev, 236 + void *buffer, size_t len); 237 + static int riocm_ch_close(struct rio_channel *ch); 238 + 239 + static DEFINE_SPINLOCK(idr_lock); 240 + static DEFINE_IDR(ch_idr); 241 + 242 + static LIST_HEAD(cm_dev_list); 243 + static DECLARE_RWSEM(rdev_sem); 244 + 245 + static struct class *dev_class; 246 + static unsigned int dev_major; 247 + static unsigned int dev_minor_base; 248 + static dev_t dev_number; 249 + static struct channel_dev riocm_cdev; 250 + 251 + #define is_msg_capable(src_ops, dst_ops) \ 252 + ((src_ops & RIO_SRC_OPS_DATA_MSG) && \ 253 + (dst_ops & RIO_DST_OPS_DATA_MSG)) 254 + #define dev_cm_capable(dev) \ 255 + is_msg_capable(dev->src_ops, dev->dst_ops) 256 + 257 + static int riocm_cmp(struct rio_channel *ch, enum rio_cm_state cmp) 258 + { 259 + int ret; 260 + 261 + spin_lock_bh(&ch->lock); 262 + ret = (ch->state == cmp); 263 + spin_unlock_bh(&ch->lock); 264 + return ret; 265 + } 266 + 267 + static int riocm_cmp_exch(struct rio_channel *ch, 268 + enum rio_cm_state cmp, enum rio_cm_state exch) 269 + { 270 + int ret; 271 + 272 + spin_lock_bh(&ch->lock); 273 + ret = (ch->state == cmp); 
274 + if (ret) 275 + ch->state = exch; 276 + spin_unlock_bh(&ch->lock); 277 + return ret; 278 + } 279 + 280 + static enum rio_cm_state riocm_exch(struct rio_channel *ch, 281 + enum rio_cm_state exch) 282 + { 283 + enum rio_cm_state old; 284 + 285 + spin_lock_bh(&ch->lock); 286 + old = ch->state; 287 + ch->state = exch; 288 + spin_unlock_bh(&ch->lock); 289 + return old; 290 + } 291 + 292 + static struct rio_channel *riocm_get_channel(u16 nr) 293 + { 294 + struct rio_channel *ch; 295 + 296 + spin_lock_bh(&idr_lock); 297 + ch = idr_find(&ch_idr, nr); 298 + if (ch) 299 + kref_get(&ch->ref); 300 + spin_unlock_bh(&idr_lock); 301 + return ch; 302 + } 303 + 304 + static void riocm_put_channel(struct rio_channel *ch) 305 + { 306 + kref_put(&ch->ref, riocm_ch_free); 307 + } 308 + 309 + static void *riocm_rx_get_msg(struct cm_dev *cm) 310 + { 311 + void *msg; 312 + int i; 313 + 314 + msg = rio_get_inb_message(cm->mport, cmbox); 315 + if (msg) { 316 + for (i = 0; i < RIOCM_RX_RING_SIZE; i++) { 317 + if (cm->rx_buf[i] == msg) { 318 + cm->rx_buf[i] = NULL; 319 + cm->rx_slots++; 320 + break; 321 + } 322 + } 323 + 324 + if (i == RIOCM_RX_RING_SIZE) 325 + riocm_warn("no record for buffer 0x%p", msg); 326 + } 327 + 328 + return msg; 329 + } 330 + 331 + /* 332 + * riocm_rx_fill - fills a ring of receive buffers for given cm device 333 + * @cm: cm_dev object 334 + * @nent: max number of entries to fill 335 + * 336 + * Returns: none 337 + */ 338 + static void riocm_rx_fill(struct cm_dev *cm, int nent) 339 + { 340 + int i; 341 + 342 + if (cm->rx_slots == 0) 343 + return; 344 + 345 + for (i = 0; i < RIOCM_RX_RING_SIZE && cm->rx_slots && nent; i++) { 346 + if (cm->rx_buf[i] == NULL) { 347 + cm->rx_buf[i] = kmalloc(RIO_MAX_MSG_SIZE, GFP_KERNEL); 348 + if (cm->rx_buf[i] == NULL) 349 + break; 350 + rio_add_inb_buffer(cm->mport, cmbox, cm->rx_buf[i]); 351 + cm->rx_slots--; 352 + nent--; 353 + } 354 + } 355 + } 356 + 357 + /* 358 + * riocm_rx_free - frees all receive buffers associated with 
given cm device 359 + * @cm: cm_dev object 360 + * 361 + * Returns: none 362 + */ 363 + static void riocm_rx_free(struct cm_dev *cm) 364 + { 365 + int i; 366 + 367 + for (i = 0; i < RIOCM_RX_RING_SIZE; i++) { 368 + if (cm->rx_buf[i] != NULL) { 369 + kfree(cm->rx_buf[i]); 370 + cm->rx_buf[i] = NULL; 371 + } 372 + } 373 + } 374 + 375 + /* 376 + * riocm_req_handler - connection request handler 377 + * @cm: cm_dev object 378 + * @req_data: pointer to the request packet 379 + * 380 + * Returns: 0 if success, or 381 + * -EINVAL if channel is not in correct state, 382 + * -ENODEV if cannot find a channel with specified ID, 383 + * -ENOMEM if unable to allocate memory to store the request 384 + */ 385 + static int riocm_req_handler(struct cm_dev *cm, void *req_data) 386 + { 387 + struct rio_channel *ch; 388 + struct conn_req *req; 389 + struct rio_ch_chan_hdr *hh = req_data; 390 + u16 chnum; 391 + 392 + chnum = ntohs(hh->dst_ch); 393 + 394 + ch = riocm_get_channel(chnum); 395 + 396 + if (!ch) 397 + return -ENODEV; 398 + 399 + if (ch->state != RIO_CM_LISTEN) { 400 + riocm_debug(RX_CMD, "channel %d is not in listen state", chnum); 401 + riocm_put_channel(ch); 402 + return -EINVAL; 403 + } 404 + 405 + req = kzalloc(sizeof(*req), GFP_KERNEL); 406 + if (!req) { 407 + riocm_put_channel(ch); 408 + return -ENOMEM; 409 + } 410 + 411 + req->destid = ntohl(hh->bhdr.src_id); 412 + req->chan = ntohs(hh->src_ch); 413 + req->cmdev = cm; 414 + 415 + spin_lock_bh(&ch->lock); 416 + list_add_tail(&req->node, &ch->accept_queue); 417 + spin_unlock_bh(&ch->lock); 418 + complete(&ch->comp); 419 + riocm_put_channel(ch); 420 + 421 + return 0; 422 + } 423 + 424 + /* 425 + * riocm_resp_handler - response to connection request handler 426 + * @resp_data: pointer to the response packet 427 + * 428 + * Returns: 0 if success, or 429 + * -EINVAL if channel is not in correct state, 430 + * -ENODEV if cannot find a channel with specified ID, 431 + */ 432 + static int riocm_resp_handler(void *resp_data) 433 
{
	struct rio_channel *ch;
	struct rio_ch_chan_hdr *hh = resp_data;
	u16 chnum;

	chnum = ntohs(hh->dst_ch);
	ch = riocm_get_channel(chnum);
	if (!ch)
		return -ENODEV;

	if (ch->state != RIO_CM_CONNECT) {
		riocm_put_channel(ch);
		return -EINVAL;
	}

	riocm_exch(ch, RIO_CM_CONNECTED);
	ch->rem_channel = ntohs(hh->src_ch);
	complete(&ch->comp);
	riocm_put_channel(ch);

	return 0;
}

/*
 * riocm_close_handler - channel close request handler
 * @data: pointer to the request packet
 *
 * Returns: 0 if success, or
 *          -ENODEV if cannot find a channel with specified ID,
 *          + error codes returned by riocm_ch_close.
 */
static int riocm_close_handler(void *data)
{
	struct rio_channel *ch;
	struct rio_ch_chan_hdr *hh = data;
	int ret;

	riocm_debug(RX_CMD, "for ch=%d", ntohs(hh->dst_ch));

	spin_lock_bh(&idr_lock);
	ch = idr_find(&ch_idr, ntohs(hh->dst_ch));
	if (!ch) {
		spin_unlock_bh(&idr_lock);
		return -ENODEV;
	}
	idr_remove(&ch_idr, ch->id);
	spin_unlock_bh(&idr_lock);

	riocm_exch(ch, RIO_CM_DISCONNECT);

	ret = riocm_ch_close(ch);
	if (ret)
		riocm_debug(RX_CMD, "riocm_ch_close() returned %d", ret);

	return 0;
}

/*
 * rio_cm_handler - function that services request (non-data) packets
 * @cm: cm_dev object
 * @data: pointer to the packet
 */
static void rio_cm_handler(struct cm_dev *cm, void *data)
{
	struct rio_ch_chan_hdr *hdr;

	if (!rio_mport_is_running(cm->mport))
		goto out;

	hdr = data;

	riocm_debug(RX_CMD, "OP=%x for ch=%d from %d",
		    hdr->ch_op, ntohs(hdr->dst_ch), ntohs(hdr->src_ch));

	switch (hdr->ch_op) {
	case CM_CONN_REQ:
		riocm_req_handler(cm, data);
		break;
	case CM_CONN_ACK:
		riocm_resp_handler(data);
		break;
	case CM_CONN_CLOSE:
		riocm_close_handler(data);
		break;
	default:
		riocm_error("Invalid packet header");
		break;
	}
out:
	kfree(data);
}

/*
 * rio_rx_data_handler - received data packet handler
 * @cm: cm_dev object
 * @buf: data packet
 *
 * Returns: 0 if success, or
 *          -ENODEV if cannot find a channel with specified ID,
 *          -EIO if channel is not in CONNECTED state,
 *          -ENOMEM if channel RX queue is full (packet discarded)
 */
static int rio_rx_data_handler(struct cm_dev *cm, void *buf)
{
	struct rio_ch_chan_hdr *hdr;
	struct rio_channel *ch;

	hdr = buf;

	riocm_debug(RX_DATA, "for ch=%d", ntohs(hdr->dst_ch));

	ch = riocm_get_channel(ntohs(hdr->dst_ch));
	if (!ch) {
		/* Discard data message for non-existing channel */
		kfree(buf);
		return -ENODEV;
	}

	/* Place pointer to the buffer into channel's RX queue */
	spin_lock(&ch->lock);

	if (ch->state != RIO_CM_CONNECTED) {
		/* Channel is not ready to receive data, discard a packet */
		riocm_debug(RX_DATA, "ch=%d is in wrong state=%d",
			    ch->id, ch->state);
		spin_unlock(&ch->lock);
		kfree(buf);
		riocm_put_channel(ch);
		return -EIO;
	}

	if (ch->rx_ring.count == RIOCM_RX_RING_SIZE) {
		/* If RX ring is full, discard a packet */
		riocm_debug(RX_DATA, "ch=%d is full", ch->id);
		spin_unlock(&ch->lock);
		kfree(buf);
		riocm_put_channel(ch);
		return -ENOMEM;
	}

	ch->rx_ring.buf[ch->rx_ring.head] = buf;
	ch->rx_ring.head++;
	ch->rx_ring.count++;
	ch->rx_ring.head %= RIOCM_RX_RING_SIZE;

	complete(&ch->comp);

	spin_unlock(&ch->lock);
	riocm_put_channel(ch);

	return 0;
}

/*
 * rio_ibmsg_handler - inbound message packet handler
 */
static void rio_ibmsg_handler(struct work_struct *work)
{
	struct cm_dev *cm = container_of(work, struct cm_dev, rx_work);
	void *data;
	struct rio_ch_chan_hdr *hdr;

	if (!rio_mport_is_running(cm->mport))
		return;

	while (1) {
		mutex_lock(&cm->rx_lock);
		data = riocm_rx_get_msg(cm);
		if (data)
			riocm_rx_fill(cm, 1);
		mutex_unlock(&cm->rx_lock);

		if (data == NULL)
			break;

		hdr = data;

		if (hdr->bhdr.type != RIO_CM_CHAN) {
			/* For now simply discard packets other than channel */
			riocm_error("Unsupported TYPE code (0x%x). Msg dropped",
				    hdr->bhdr.type);
			kfree(data);
			continue;
		}

		/* Process a channel message */
		if (hdr->ch_op == CM_DATA_MSG)
			rio_rx_data_handler(cm, data);
		else
			rio_cm_handler(cm, data);
	}
}

static void riocm_inb_msg_event(struct rio_mport *mport, void *dev_id,
				int mbox, int slot)
{
	struct cm_dev *cm = dev_id;

	if (rio_mport_is_running(cm->mport) && !work_pending(&cm->rx_work))
		queue_work(cm->rx_wq, &cm->rx_work);
}

/*
 * rio_txcq_handler - TX completion handler
 * @cm: cm_dev object
 * @slot: TX queue slot
 *
 * TX completion handler also ensures that pending request packets are placed
 * into transmit queue as soon as a free slot becomes available. This is done
 * to give higher priority to request packets during high intensity data flow.
 */
static void rio_txcq_handler(struct cm_dev *cm, int slot)
{
	int ack_slot;

	/* ATTN: Add TX completion notification if/when direct buffer
	 * transfer is implemented. At this moment only correct tracking
	 * of tx_cnt is important.
	 */
	riocm_debug(TX_EVENT, "for mport_%d slot %d tx_cnt %d",
		    cm->mport->id, slot, cm->tx_cnt);

	spin_lock(&cm->tx_lock);
	ack_slot = cm->tx_ack_slot;

	if (ack_slot == slot)
		riocm_debug(TX_EVENT, "slot == ack_slot");

	while (cm->tx_cnt && ((ack_slot != slot) ||
			      (cm->tx_cnt == RIOCM_TX_RING_SIZE))) {

		cm->tx_buf[ack_slot] = NULL;
		++ack_slot;
		ack_slot &= (RIOCM_TX_RING_SIZE - 1);
		cm->tx_cnt--;
	}

	if (cm->tx_cnt < 0 || cm->tx_cnt > RIOCM_TX_RING_SIZE)
		riocm_error("tx_cnt %d out of sync", cm->tx_cnt);

	WARN_ON((cm->tx_cnt < 0) || (cm->tx_cnt > RIOCM_TX_RING_SIZE));

	cm->tx_ack_slot = ack_slot;

	/*
	 * If there are pending requests, insert them into transmit queue
	 */
	if (!list_empty(&cm->tx_reqs) && (cm->tx_cnt < RIOCM_TX_RING_SIZE)) {
		struct tx_req *req, *_req;
		int rc;

		list_for_each_entry_safe(req, _req, &cm->tx_reqs, node) {
			list_del(&req->node);
			cm->tx_buf[cm->tx_slot] = req->buffer;
			rc = rio_add_outb_message(cm->mport, req->rdev, cmbox,
						  req->buffer, req->len);
			kfree(req->buffer);
			kfree(req);

			++cm->tx_cnt;
			++cm->tx_slot;
			cm->tx_slot &= (RIOCM_TX_RING_SIZE - 1);
			if (cm->tx_cnt == RIOCM_TX_RING_SIZE)
				break;
		}
	}

	spin_unlock(&cm->tx_lock);
}

static void riocm_outb_msg_event(struct rio_mport *mport, void *dev_id,
				 int mbox, int slot)
{
	struct cm_dev *cm = dev_id;

	if (cm && rio_mport_is_running(cm->mport))
		rio_txcq_handler(cm, slot);
}

static int riocm_queue_req(struct cm_dev *cm, struct rio_dev *rdev,
			   void *buffer, size_t len)
{
	unsigned long flags;
	struct tx_req *treq;

	treq = kzalloc(sizeof(*treq), GFP_KERNEL);
	if (treq == NULL)
		return -ENOMEM;

	treq->rdev = rdev;
	treq->buffer = buffer;
	treq->len = len;

	spin_lock_irqsave(&cm->tx_lock, flags);
	list_add_tail(&treq->node, &cm->tx_reqs);
	spin_unlock_irqrestore(&cm->tx_lock, flags);
	return 0;
}

/*
 * riocm_post_send - helper function that places packet into msg TX queue
 * @cm: cm_dev object
 * @rdev: target RapidIO device object (required by outbound msg interface)
 * @buffer: pointer to a packet buffer to send
 * @len: length of data to transfer
 *
 * Returns: 0 if success, or error code otherwise.
 */
static int riocm_post_send(struct cm_dev *cm, struct rio_dev *rdev,
			   void *buffer, size_t len)
{
	int rc;
	unsigned long flags;

	spin_lock_irqsave(&cm->tx_lock, flags);

	if (cm->mport == NULL) {
		rc = -ENODEV;
		goto err_out;
	}

	if (cm->tx_cnt == RIOCM_TX_RING_SIZE) {
		riocm_debug(TX, "Tx Queue is full");
		rc = -EBUSY;
		goto err_out;
	}

	cm->tx_buf[cm->tx_slot] = buffer;
	rc = rio_add_outb_message(cm->mport, rdev, cmbox, buffer, len);

	riocm_debug(TX, "Add buf@%p destid=%x tx_slot=%d tx_cnt=%d",
		    buffer, rdev->destid, cm->tx_slot, cm->tx_cnt);

	++cm->tx_cnt;
	++cm->tx_slot;
	cm->tx_slot &= (RIOCM_TX_RING_SIZE - 1);

err_out:
	spin_unlock_irqrestore(&cm->tx_lock, flags);
	return rc;
}

/*
 * riocm_ch_send - sends a data packet to a remote device
 * @ch_id: local channel ID
 * @buf: pointer to a data buffer to send (including CM header)
 * @len: length of data to transfer (including CM header)
 *
 * ATTN: ASSUMES THAT THE HEADER SPACE IS RESERVED PART OF THE DATA PACKET
 *
 * Returns: 0 if success, or
 *          -EINVAL if one or more input parameters is/are not valid,
 *          -ENODEV if cannot find a channel with specified ID,
 *          -EAGAIN if a channel is not in CONNECTED state,
 *          + error codes returned by HW send routine.
 */
static int riocm_ch_send(u16 ch_id, void *buf, int len)
{
	struct rio_channel *ch;
	struct rio_ch_chan_hdr *hdr;
	int ret;

	if (buf == NULL || ch_id == 0 || len == 0 || len > RIO_MAX_MSG_SIZE)
		return -EINVAL;

	ch = riocm_get_channel(ch_id);
	if (!ch) {
		riocm_error("%s(%d) ch_%d not found", current->comm,
			    task_pid_nr(current), ch_id);
		return -ENODEV;
	}

	if (!riocm_cmp(ch, RIO_CM_CONNECTED)) {
		ret = -EAGAIN;
		goto err_out;
	}

	/*
	 * Fill buffer header section with corresponding channel data
	 */
	hdr = buf;

	hdr->bhdr.src_id = htonl(ch->loc_destid);
	hdr->bhdr.dst_id = htonl(ch->rem_destid);
	hdr->bhdr.src_mbox = cmbox;
	hdr->bhdr.dst_mbox = cmbox;
	hdr->bhdr.type = RIO_CM_CHAN;
	hdr->ch_op = CM_DATA_MSG;
	hdr->dst_ch = htons(ch->rem_channel);
	hdr->src_ch = htons(ch->id);
	hdr->msg_len = htons((u16)len);

	/* ATTN: the function call below relies on the fact that underlying
	 * HW-specific add_outb_message() routine copies TX data into its own
	 * internal transfer buffer (true for all RIONET compatible mport
	 * drivers). Must be reviewed if mport driver uses the buffer directly.
	 */

	ret = riocm_post_send(ch->cmdev, ch->rdev, buf, len);
	if (ret)
		riocm_debug(TX, "ch %d send_err=%d", ch->id, ret);
err_out:
	riocm_put_channel(ch);
	return ret;
}

static int riocm_ch_free_rxbuf(struct rio_channel *ch, void *buf)
{
	int i, ret = -EINVAL;

	spin_lock_bh(&ch->lock);

	for (i = 0; i < RIOCM_RX_RING_SIZE; i++) {
		if (ch->rx_ring.inuse[i] == buf) {
			ch->rx_ring.inuse[i] = NULL;
			ch->rx_ring.inuse_cnt--;
			ret = 0;
			break;
		}
	}

	spin_unlock_bh(&ch->lock);

	if (!ret)
		kfree(buf);

	return ret;
}

/*
 * riocm_ch_receive - fetch a data packet received for the specified channel
 * @ch: local channel object
 * @buf: pointer to a packet buffer
 * @timeout: timeout to wait for incoming packet (in jiffies)
 *
 * Returns: 0 and valid buffer pointer if success, or NULL pointer and one of:
 *          -EAGAIN if a channel is not in CONNECTED state,
 *          -ENOMEM if in-use tracking queue is full,
 *          -ETIME if wait timeout expired,
 *          -EINTR if wait was interrupted,
 *          -ECONNRESET if channel was disconnected while waiting.
 */
static int riocm_ch_receive(struct rio_channel *ch, void **buf, long timeout)
{
	void *rxmsg = NULL;
	int i, ret = 0;
	long wret;

	if (!riocm_cmp(ch, RIO_CM_CONNECTED)) {
		ret = -EAGAIN;
		goto out;
	}

	if (ch->rx_ring.inuse_cnt == RIOCM_RX_RING_SIZE) {
		/* If we do not have entries to track buffers given to upper
		 * layer, reject request.
		 */
		ret = -ENOMEM;
		goto out;
	}

	wret = wait_for_completion_interruptible_timeout(&ch->comp, timeout);

	riocm_debug(WAIT, "wait on %d returned %ld", ch->id, wret);

	if (!wret)
		ret = -ETIME;
	else if (wret == -ERESTARTSYS)
		ret = -EINTR;
	else
		ret = riocm_cmp(ch, RIO_CM_CONNECTED) ? 0 : -ECONNRESET;

	if (ret)
		goto out;

	spin_lock_bh(&ch->lock);

	rxmsg = ch->rx_ring.buf[ch->rx_ring.tail];
	ch->rx_ring.buf[ch->rx_ring.tail] = NULL;
	ch->rx_ring.count--;
	ch->rx_ring.tail++;
	ch->rx_ring.tail %= RIOCM_RX_RING_SIZE;
	ret = -ENOMEM;

	for (i = 0; i < RIOCM_RX_RING_SIZE; i++) {
		if (ch->rx_ring.inuse[i] == NULL) {
			ch->rx_ring.inuse[i] = rxmsg;
			ch->rx_ring.inuse_cnt++;
			ret = 0;
			break;
		}
	}

	if (ret) {
		/* We have no entry to store pending message: drop it */
		kfree(rxmsg);
		rxmsg = NULL;
	}

	spin_unlock_bh(&ch->lock);
out:
	*buf = rxmsg;
	return ret;
}

/*
 * riocm_ch_connect - sends a connect request to a remote device
 * @loc_ch: local channel ID
 * @cm: CM device to send connect request
 * @peer: target RapidIO device
 * @rem_ch: remote channel ID
 *
 * Returns: 0 if success, or
 *          -ENODEV if cannot find specified local channel,
 *          -EINVAL if the channel is not in IDLE state,
 *          -EAGAIN if no connection request available immediately,
 *          -ETIME if ACK response timeout expired,
 *          -EINTR if wait for response was interrupted.
 */
static int riocm_ch_connect(u16 loc_ch, struct cm_dev *cm,
			    struct cm_peer *peer, u16 rem_ch)
{
	struct rio_channel *ch = NULL;
	struct rio_ch_chan_hdr *hdr;
	int ret;
	long wret;

	ch = riocm_get_channel(loc_ch);
	if (!ch)
		return -ENODEV;

	if (!riocm_cmp_exch(ch, RIO_CM_IDLE, RIO_CM_CONNECT)) {
		ret = -EINVAL;
		goto conn_done;
	}

	ch->cmdev = cm;
	ch->rdev = peer->rdev;
	ch->context = NULL;
	ch->loc_destid = cm->mport->host_deviceid;
	ch->rem_channel = rem_ch;

	/*
	 * Send connect request to the remote RapidIO device
	 */

	hdr = kzalloc(sizeof(*hdr), GFP_KERNEL);
	if (hdr == NULL) {
		ret = -ENOMEM;
		goto conn_done;
	}

	hdr->bhdr.src_id = htonl(ch->loc_destid);
	hdr->bhdr.dst_id = htonl(peer->rdev->destid);
	hdr->bhdr.src_mbox = cmbox;
	hdr->bhdr.dst_mbox = cmbox;
	hdr->bhdr.type = RIO_CM_CHAN;
	hdr->ch_op = CM_CONN_REQ;
	hdr->dst_ch = htons(rem_ch);
	hdr->src_ch = htons(loc_ch);

	/* ATTN: the function call below relies on the fact that underlying
	 * HW-specific add_outb_message() routine copies TX data into its
	 * internal transfer buffer. Must be reviewed if mport driver uses
	 * this buffer directly.
	 */
	ret = riocm_post_send(cm, peer->rdev, hdr, sizeof(*hdr));

	if (ret != -EBUSY) {
		kfree(hdr);
	} else {
		ret = riocm_queue_req(cm, peer->rdev, hdr, sizeof(*hdr));
		if (ret)
			kfree(hdr);
	}

	if (ret) {
		riocm_cmp_exch(ch, RIO_CM_CONNECT, RIO_CM_IDLE);
		goto conn_done;
	}

	/* Wait for connect response from the remote device */
	wret = wait_for_completion_interruptible_timeout(&ch->comp,
							 RIOCM_CONNECT_TO * HZ);
	riocm_debug(WAIT, "wait on %d returns %ld", ch->id, wret);

	if (!wret)
		ret = -ETIME;
	else if (wret == -ERESTARTSYS)
		ret = -EINTR;
	else
		ret = riocm_cmp(ch, RIO_CM_CONNECTED) ? 0 : -1;

conn_done:
	riocm_put_channel(ch);
	return ret;
}

static int riocm_send_ack(struct rio_channel *ch)
{
	struct rio_ch_chan_hdr *hdr;
	int ret;

	hdr = kzalloc(sizeof(*hdr), GFP_KERNEL);
	if (hdr == NULL)
		return -ENOMEM;

	hdr->bhdr.src_id = htonl(ch->loc_destid);
	hdr->bhdr.dst_id = htonl(ch->rem_destid);
	hdr->dst_ch = htons(ch->rem_channel);
	hdr->src_ch = htons(ch->id);
	hdr->bhdr.src_mbox = cmbox;
	hdr->bhdr.dst_mbox = cmbox;
	hdr->bhdr.type = RIO_CM_CHAN;
	hdr->ch_op = CM_CONN_ACK;

	/* ATTN: the function call below relies on the fact that underlying
	 * add_outb_message() routine copies TX data into its internal transfer
	 * buffer. Review if switching to direct buffer version.
	 */
	ret = riocm_post_send(ch->cmdev, ch->rdev, hdr, sizeof(*hdr));

	if (ret == -EBUSY && !riocm_queue_req(ch->cmdev,
					      ch->rdev, hdr, sizeof(*hdr)))
		return 0;
	kfree(hdr);

	if (ret)
		riocm_error("send ACK to ch_%d on %s failed (ret=%d)",
			    ch->id, rio_name(ch->rdev), ret);
	return ret;
}

/*
 * riocm_ch_accept - accept incoming connection request
 * @ch_id: channel ID
 * @new_ch_id: returned ID of the new channel created for this connection
 * @timeout: wait timeout (if 0 non-blocking call, do not wait if connection
 *           request is not available).
 *
 * Returns: pointer to new channel struct if success, or error-valued pointer:
 *          -ENODEV - cannot find specified channel or mport,
 *          -EINVAL - the channel is not in LISTEN state,
 *          -EAGAIN - no connection request available immediately (timeout=0),
 *          -ENOMEM - unable to allocate new channel,
 *          -ETIME - wait timeout expired,
 *          -EINTR - wait was interrupted.
 */
static struct rio_channel *riocm_ch_accept(u16 ch_id, u16 *new_ch_id,
					   long timeout)
{
	struct rio_channel *ch = NULL;
	struct rio_channel *new_ch = NULL;
	struct conn_req *req;
	struct cm_peer *peer;
	int found = 0;
	int err = 0;
	long wret;

	ch = riocm_get_channel(ch_id);
	if (!ch)
		return ERR_PTR(-EINVAL);

	if (!riocm_cmp(ch, RIO_CM_LISTEN)) {
		err = -EINVAL;
		goto err_put;
	}

	/* Don't sleep if this is a non blocking call */
	if (!timeout) {
		if (!try_wait_for_completion(&ch->comp)) {
			err = -EAGAIN;
			goto err_put;
		}
	} else {
		riocm_debug(WAIT, "on %d", ch->id);

		wret = wait_for_completion_interruptible_timeout(&ch->comp,
								 timeout);
		if (!wret) {
			err = -ETIME;
			goto err_put;
		} else if (wret == -ERESTARTSYS) {
			err = -EINTR;
			goto err_put;
		}
	}

	spin_lock_bh(&ch->lock);

	if (ch->state != RIO_CM_LISTEN) {
		err = -ECANCELED;
	} else if (list_empty(&ch->accept_queue)) {
		riocm_debug(WAIT, "on %d accept_queue is empty on completion",
			    ch->id);
		err = -EIO;
	}

	spin_unlock_bh(&ch->lock);

	if (err) {
		riocm_debug(WAIT, "on %d returns %d", ch->id, err);
		goto err_put;
	}

	/* Create new channel for this connection */
	new_ch = riocm_ch_alloc(RIOCM_CHNUM_AUTO);

	if (IS_ERR(new_ch)) {
		riocm_error("failed to get channel for new req (%ld)",
			    PTR_ERR(new_ch));
		err = -ENOMEM;
		goto err_put;
	}

	spin_lock_bh(&ch->lock);

	req = list_first_entry(&ch->accept_queue, struct conn_req, node);
	list_del(&req->node);
	new_ch->cmdev = ch->cmdev;
	new_ch->loc_destid = ch->loc_destid;
	new_ch->rem_destid = req->destid;
	new_ch->rem_channel = req->chan;

	spin_unlock_bh(&ch->lock);
	riocm_put_channel(ch);
	kfree(req);

	down_read(&rdev_sem);
	/* Find requester's device object */
	list_for_each_entry(peer, &new_ch->cmdev->peers, node) {
		if (peer->rdev->destid == new_ch->rem_destid) {
			riocm_debug(RX_CMD, "found matching device(%s)",
				    rio_name(peer->rdev));
			found = 1;
			break;
		}
	}
	up_read(&rdev_sem);

	if (!found) {
		/* If peer device object not found, simply ignore the request */
		err = -ENODEV;
		goto err_nodev;
	}

	new_ch->rdev = peer->rdev;
	new_ch->state = RIO_CM_CONNECTED;
	spin_lock_init(&new_ch->lock);

	/* Acknowledge the connection request. */
	riocm_send_ack(new_ch);

	*new_ch_id = new_ch->id;
	return new_ch;
err_put:
	riocm_put_channel(ch);
err_nodev:
	if (new_ch) {
		spin_lock_bh(&idr_lock);
		idr_remove(&ch_idr, new_ch->id);
		spin_unlock_bh(&idr_lock);
		riocm_put_channel(new_ch);
	}
	*new_ch_id = 0;
	return ERR_PTR(err);
}

/*
 * riocm_ch_listen - puts a channel into LISTEN state
 * @ch_id: channel ID
 *
 * Returns: 0 if success, or
 *          -EINVAL if the specified channel does not exist or
 *          is not in CHAN_BOUND state.
 */
static int riocm_ch_listen(u16 ch_id)
{
	struct rio_channel *ch = NULL;
	int ret = 0;

	riocm_debug(CHOP, "(ch_%d)", ch_id);

	ch = riocm_get_channel(ch_id);
	if (!ch)
		return -EINVAL;
	if (!riocm_cmp_exch(ch, RIO_CM_CHAN_BOUND, RIO_CM_LISTEN))
		ret = -EINVAL;
	riocm_put_channel(ch);
	return ret;
}

/*
 * riocm_ch_bind - associate a channel object and an mport device
 * @ch_id: channel ID
 * @mport_id: local mport device ID
 * @context: pointer to the additional caller's context
 *
 * Returns: 0 if success, or
 *          -ENODEV if cannot find specified mport,
 *          -EINVAL if the specified channel does not exist or
 *          is not in IDLE state.
 */
static int riocm_ch_bind(u16 ch_id, u8 mport_id, void *context)
{
	struct rio_channel *ch = NULL;
	struct cm_dev *cm;
	int rc = -ENODEV;

	riocm_debug(CHOP, "ch_%d to mport_%d", ch_id, mport_id);

	/* Find matching cm_dev object */
	down_read(&rdev_sem);
	list_for_each_entry(cm, &cm_dev_list, list) {
		if ((cm->mport->id == mport_id) &&
		    rio_mport_is_running(cm->mport)) {
			rc = 0;
			break;
		}
	}

	if (rc)
		goto exit;

	ch = riocm_get_channel(ch_id);
	if (!ch) {
		rc = -EINVAL;
		goto exit;
	}

	spin_lock_bh(&ch->lock);
	if (ch->state != RIO_CM_IDLE) {
		spin_unlock_bh(&ch->lock);
		rc = -EINVAL;
		goto err_put;
	}

	ch->cmdev = cm;
	ch->loc_destid = cm->mport->host_deviceid;
	ch->context = context;
	ch->state = RIO_CM_CHAN_BOUND;
	spin_unlock_bh(&ch->lock);
err_put:
	riocm_put_channel(ch);
exit:
	up_read(&rdev_sem);
	return rc;
}

/*
 * riocm_ch_alloc - channel object allocation helper routine
 * @ch_num: channel ID (1 ... RIOCM_MAX_CHNUM, 0 = automatic)
 *
 * Return value: pointer to newly created channel object,
 *               or error-valued pointer
 */
static struct rio_channel *riocm_ch_alloc(u16 ch_num)
{
	int id;
	int start, end;
	struct rio_channel *ch;

	ch = kzalloc(sizeof(*ch), GFP_KERNEL);
	if (!ch)
		return ERR_PTR(-ENOMEM);

	if (ch_num) {
		/* If requested, try to obtain the specified channel ID */
		start = ch_num;
		end = ch_num + 1;
	} else {
		/* Obtain channel ID from the dynamic allocation range */
		start = chstart;
		end = RIOCM_MAX_CHNUM + 1;
	}

	idr_preload(GFP_KERNEL);
	spin_lock_bh(&idr_lock);
	id = idr_alloc_cyclic(&ch_idr, ch, start, end, GFP_NOWAIT);
	spin_unlock_bh(&idr_lock);
	idr_preload_end();

	if (id < 0) {
		kfree(ch);
		return ERR_PTR(id == -ENOSPC ? -EBUSY : id);
	}

	ch->id = (u16)id;
	ch->state = RIO_CM_IDLE;
	spin_lock_init(&ch->lock);
	INIT_LIST_HEAD(&ch->accept_queue);
	INIT_LIST_HEAD(&ch->ch_node);
	init_completion(&ch->comp);
	init_completion(&ch->comp_close);
	kref_init(&ch->ref);
	ch->rx_ring.head = 0;
	ch->rx_ring.tail = 0;
	ch->rx_ring.count = 0;
	ch->rx_ring.inuse_cnt = 0;

	return ch;
}

/*
 * riocm_ch_create - creates a new channel object and allocates ID for it
 * @ch_num: channel ID (1 ... RIOCM_MAX_CHNUM, 0 = automatic)
 *
 * Allocates and initializes a new channel object. If the parameter ch_num > 0
 * and is within the valid range, riocm_ch_create tries to allocate the
 * specified ID for the new channel. If ch_num = 0, channel ID will be assigned
 * automatically from the range (chstart ... RIOCM_MAX_CHNUM).
 * Module parameter 'chstart' defines start of an ID range available for dynamic
 * allocation. Range below 'chstart' is reserved for pre-defined ID numbers.
 * Available channel numbers are limited by 16-bit size of channel numbers used
 * in the packet header.
 *
 * Return value: pointer to rio_channel structure if successful (with channel
 *               number updated via pointer) or error-valued pointer if error.
 */
static struct rio_channel *riocm_ch_create(u16 *ch_num)
{
	struct rio_channel *ch = NULL;

	ch = riocm_ch_alloc(*ch_num);

	if (IS_ERR(ch))
		riocm_debug(CHOP, "Failed to allocate channel %d (err=%ld)",
			    *ch_num, PTR_ERR(ch));
	else
		*ch_num = ch->id;

	return ch;
}

/*
 * riocm_ch_free - channel object release routine
 * @ref: pointer to a channel's kref structure
 */
static void riocm_ch_free(struct kref *ref)
{
	struct rio_channel *ch = container_of(ref, struct rio_channel, ref);
	int i;

	riocm_debug(CHOP, "(ch_%d)", ch->id);

	if (ch->rx_ring.inuse_cnt) {
		for (i = 0;
		     i < RIOCM_RX_RING_SIZE && ch->rx_ring.inuse_cnt; i++) {
			if (ch->rx_ring.inuse[i] != NULL) {
				kfree(ch->rx_ring.inuse[i]);
				ch->rx_ring.inuse_cnt--;
			}
		}
	}

	if (ch->rx_ring.count)
		for (i = 0; i < RIOCM_RX_RING_SIZE && ch->rx_ring.count; i++) {
			if (ch->rx_ring.buf[i] != NULL) {
				kfree(ch->rx_ring.buf[i]);
				ch->rx_ring.count--;
			}
		}

	complete(&ch->comp_close);
}

static int riocm_send_close(struct rio_channel *ch)
{
	struct rio_ch_chan_hdr *hdr;
	int ret;

	/*
	 * Send CH_CLOSE notification to the remote RapidIO device
	 */

	hdr = kzalloc(sizeof(*hdr), GFP_KERNEL);
	if (hdr == NULL)
		return -ENOMEM;

	hdr->bhdr.src_id = htonl(ch->loc_destid);
	hdr->bhdr.dst_id = htonl(ch->rem_destid);
	hdr->bhdr.src_mbox = cmbox;
	hdr->bhdr.dst_mbox = cmbox;
	hdr->bhdr.type = RIO_CM_CHAN;
	hdr->ch_op = CM_CONN_CLOSE;
	hdr->dst_ch = htons(ch->rem_channel);
	hdr->src_ch = htons(ch->id);

	/* ATTN: the function call below relies on the fact that underlying
	 * add_outb_message() routine copies TX data into its internal transfer
	 * buffer. Needs to be reviewed if switched to direct buffer mode.
	 */
	ret = riocm_post_send(ch->cmdev, ch->rdev, hdr, sizeof(*hdr));

	if (ret == -EBUSY && !riocm_queue_req(ch->cmdev, ch->rdev,
					      hdr, sizeof(*hdr)))
		return 0;
	kfree(hdr);

	if (ret)
		riocm_error("ch(%d) send CLOSE failed (ret=%d)", ch->id, ret);

	return ret;
}

/*
 * riocm_ch_close - closes a channel object with specified ID (by local request)
 * @ch: channel to be closed
 */
static int riocm_ch_close(struct rio_channel *ch)
{
	unsigned long tmo = msecs_to_jiffies(3000);
	enum rio_cm_state state;
	long wret;
	int ret = 0;

	riocm_debug(CHOP, "ch_%d by %s(%d)",
		    ch->id, current->comm, task_pid_nr(current));

	state = riocm_exch(ch, RIO_CM_DESTROYING);
	if (state == RIO_CM_CONNECTED)
		riocm_send_close(ch);

	complete_all(&ch->comp);

	riocm_put_channel(ch);
	wret = wait_for_completion_interruptible_timeout(&ch->comp_close, tmo);

	riocm_debug(WAIT, "wait on %d returns %ld", ch->id, wret);

	if (wret == 0) {
		/* Timeout on wait occurred */
		riocm_debug(CHOP, "%s(%d) timed out waiting for ch %d",
			    current->comm, task_pid_nr(current), ch->id);
		ret = -ETIMEDOUT;
	} else if (wret == -ERESTARTSYS) {
		/* Wait_for_completion was interrupted by a signal */
		riocm_debug(CHOP, "%s(%d) wait for ch %d was interrupted",
			    current->comm, task_pid_nr(current), ch->id);
		ret = -EINTR;
	}

	if (!ret) {
		riocm_debug(CHOP, "ch_%d resources released", ch->id);
		kfree(ch);
	} else {
		riocm_debug(CHOP, "failed to release ch_%d resources", ch->id);
	}

	return ret;
}

/*
 * riocm_cdev_open() - Open character device
 */
static int riocm_cdev_open(struct inode *inode, struct file *filp)
{
	riocm_debug(INIT, "by %s(%d) filp=%p ",
		    current->comm, task_pid_nr(current), filp);

	if (list_empty(&cm_dev_list))
		return -ENODEV;

	return 0;
}

/*
 * riocm_cdev_release() - Release character device
 */
static int riocm_cdev_release(struct inode *inode, struct file *filp)
{
	struct rio_channel *ch, *_c;
	unsigned int i;
	LIST_HEAD(list);

	riocm_debug(EXIT, "by %s(%d) filp=%p",
		    current->comm, task_pid_nr(current), filp);

	/* Check if there are channels associated with this file descriptor */
	spin_lock_bh(&idr_lock);
	idr_for_each_entry(&ch_idr, ch, i) {
		if (ch && ch->filp == filp) {
			riocm_debug(EXIT, "ch_%d not released by %s(%d)",
				    ch->id, current->comm,
				    task_pid_nr(current));
			idr_remove(&ch_idr, ch->id);
			list_add(&ch->ch_node, &list);
		}
	}
	spin_unlock_bh(&idr_lock);

	if (!list_empty(&list)) {
		list_for_each_entry_safe(ch, _c, &list, ch_node) {
			list_del(&ch->ch_node);
			riocm_ch_close(ch);
		}
	}

	return 0;
}

/*
 * cm_ep_get_list_size() - Reports number of endpoints in the network
 */
static int cm_ep_get_list_size(void __user *arg)
{
	u32 __user *p = arg;
	u32 mport_id;
	u32 count = 0;
	struct cm_dev *cm;

	if (get_user(mport_id, p))
		return -EFAULT;
	if (mport_id >= RIO_MAX_MPORTS)
		return -EINVAL;

	/* Find a matching cm_dev object */
	down_read(&rdev_sem);
	list_for_each_entry(cm, &cm_dev_list, list) {
		if (cm->mport->id == mport_id) {
			count = cm->npeers;
			up_read(&rdev_sem);
			if (copy_to_user(arg, &count, sizeof(u32)))
				return -EFAULT;
			return 0;
		}
	}
	up_read(&rdev_sem);

	return -ENODEV;
}

/*
 * cm_ep_get_list() - Returns list of attached endpoints
 */
static int cm_ep_get_list(void __user *arg)
{
	struct cm_dev *cm;
	struct cm_peer *peer;
	u32 info[2];
	void *buf;
	u32 nent;
	u32 *entry_ptr;
	u32 i = 0;
	int ret = 0;

	if (copy_from_user(&info, arg, sizeof(info)))
		return -EFAULT;

	if (info[1] >= RIO_MAX_MPORTS || info[0] > RIOCM_MAX_EP_COUNT)
		return -EINVAL;

	/* Find a matching cm_dev object */
	down_read(&rdev_sem);
	list_for_each_entry(cm, &cm_dev_list, list)
		if (cm->mport->id == (u8)info[1])
			goto found;

	up_read(&rdev_sem);
	return -ENODEV;

found:
	nent = min(info[0], cm->npeers);
	buf = kcalloc(nent + 2, sizeof(u32), GFP_KERNEL);
	if (!buf) {
		up_read(&rdev_sem);
		return -ENOMEM;
	}

	entry_ptr = (u32 *)((uintptr_t)buf + 2*sizeof(u32));

	list_for_each_entry(peer, &cm->peers, node) {
		*entry_ptr = (u32)peer->rdev->destid;
		entry_ptr++;
		if (++i == nent)
			break;
	}
	up_read(&rdev_sem);

	((u32 *)buf)[0] = i; /* report an updated number of entries */
	((u32 *)buf)[1] = info[1]; /* put back an mport ID */
	/* Copy only the allocated (nent + 2) words to avoid reading past buf */
	if (copy_to_user(arg, buf, sizeof(u32) * (nent + 2)))
		ret = -EFAULT;

	kfree(buf);
	return ret;
}

/*
 * cm_mport_get_list() - Returns list of available local mport devices
 */
static int cm_mport_get_list(void __user *arg)
{
	int ret = 0;
	u32 entries;
	void *buf;
	struct cm_dev *cm;
	u32 *entry_ptr;
	int count = 0;

	if (copy_from_user(&entries, arg, sizeof(entries)))
		return -EFAULT;
	if (entries == 0 || entries > RIO_MAX_MPORTS)
		return -EINVAL;
	buf = kcalloc(entries + 1, sizeof(u32), GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* Scan all registered cm_dev objects */
	entry_ptr = (u32 *)((uintptr_t)buf + sizeof(u32));
	down_read(&rdev_sem);
	list_for_each_entry(cm, &cm_dev_list, list) {
		if (count++ < entries) {
			*entry_ptr = (cm->mport->id << 16) |
				     cm->mport->host_deviceid;
			entry_ptr++;
		}
	}
	up_read(&rdev_sem);

	*((u32 *)buf) = count; /* report a real number of entries */
	/* Copy no more than the 'entries' slots actually allocated */
	if (copy_to_user(arg, buf,
			 sizeof(u32) * (min_t(u32, count, entries) + 1)))
		ret = -EFAULT;

	kfree(buf);
	return ret;
}

/*
 * cm_chan_create() - Create a message exchange channel
 */
static int cm_chan_create(struct file *filp, void __user *arg)
{
	u16 __user *p = arg;
	u16 ch_num;
	struct rio_channel *ch;

	if (get_user(ch_num, p))
		return -EFAULT;

	riocm_debug(CHOP, "ch_%d requested by %s(%d)",
		    ch_num, current->comm, task_pid_nr(current));
	ch = riocm_ch_create(&ch_num);
	if (IS_ERR(ch))
		return PTR_ERR(ch);

	ch->filp = filp;
	riocm_debug(CHOP, "ch_%d created by %s(%d)",
		    ch_num, current->comm, task_pid_nr(current));
	return put_user(ch_num, p);
}

/*
 * cm_chan_close() - Close channel
 * @filp: Pointer to file object
 * @arg: Channel to close
 */
static int cm_chan_close(struct file *filp, void __user *arg)
{
	u16 __user *p = arg;
	u16 ch_num;
	struct rio_channel *ch;

	if (get_user(ch_num, p))
		return -EFAULT;

	riocm_debug(CHOP, "ch_%d by %s(%d)",
		    ch_num, current->comm, task_pid_nr(current));

	spin_lock_bh(&idr_lock);
	ch = idr_find(&ch_idr, ch_num);
	if (!ch) {
		spin_unlock_bh(&idr_lock);
		return 0;
	}
	if (ch->filp != filp) {
		spin_unlock_bh(&idr_lock);
		return -EINVAL;
	}
	idr_remove(&ch_idr, ch->id);
	spin_unlock_bh(&idr_lock);

	return riocm_ch_close(ch);
}

/*
 * cm_chan_bind() - Bind channel
 * @arg: Channel number
 */
static int cm_chan_bind(void __user *arg)
{
	struct rio_cm_channel chan;

	if (copy_from_user(&chan, arg, sizeof(chan)))
		return -EFAULT;
	if (chan.mport_id >= RIO_MAX_MPORTS)
		return -EINVAL;

	return riocm_ch_bind(chan.id, chan.mport_id, NULL);
}

/*
 * cm_chan_listen() - Listen on channel
 * @arg: Channel number
 */
static int cm_chan_listen(void __user *arg)
{
	u16 __user *p = arg;
	u16 ch_num;

	if (get_user(ch_num, p))
		return -EFAULT;

	return riocm_ch_listen(ch_num);
}

/*
 * cm_chan_accept() - Accept incoming connection
 * @filp: Pointer to file object
 * @arg: Channel number
 */
static int cm_chan_accept(struct file *filp, void __user *arg)
{
	struct rio_cm_accept param;
	long accept_to;
	struct rio_channel *ch;

	if (copy_from_user(&param, arg, sizeof(param)))
		return -EFAULT;

	riocm_debug(CHOP, "on ch_%d by %s(%d)",
		    param.ch_num, current->comm, task_pid_nr(current));

+ accept_to = param.wait_to ? 1764 + msecs_to_jiffies(param.wait_to) : 0; 1765 + 1766 + ch = riocm_ch_accept(param.ch_num, &param.ch_num, accept_to); 1767 + if (IS_ERR(ch)) 1768 + return PTR_ERR(ch); 1769 + ch->filp = filp; 1770 + 1771 + riocm_debug(CHOP, "new ch_%d for %s(%d)", 1772 + ch->id, current->comm, task_pid_nr(current)); 1773 + 1774 + if (copy_to_user(arg, &param, sizeof(param))) 1775 + return -EFAULT; 1776 + return 0; 1777 + } 1778 + 1779 + /* 1780 + * cm_chan_connect() - Connect on channel 1781 + * @arg: Channel information 1782 + */ 1783 + static int cm_chan_connect(void __user *arg) 1784 + { 1785 + struct rio_cm_channel chan; 1786 + struct cm_dev *cm; 1787 + struct cm_peer *peer; 1788 + int ret = -ENODEV; 1789 + 1790 + if (copy_from_user(&chan, arg, sizeof(chan))) 1791 + return -EFAULT; 1792 + if (chan.mport_id >= RIO_MAX_MPORTS) 1793 + return -EINVAL; 1794 + 1795 + down_read(&rdev_sem); 1796 + 1797 + /* Find matching cm_dev object */ 1798 + list_for_each_entry(cm, &cm_dev_list, list) { 1799 + if (cm->mport->id == chan.mport_id) { 1800 + ret = 0; 1801 + break; 1802 + } 1803 + } 1804 + 1805 + if (ret) 1806 + goto err_out; 1807 + 1808 + if (chan.remote_destid >= RIO_ANY_DESTID(cm->mport->sys_size)) { 1809 + ret = -EINVAL; 1810 + goto err_out; 1811 + } 1812 + 1813 + /* Find corresponding RapidIO endpoint device object */ 1814 + ret = -ENODEV; 1815 + 1816 + list_for_each_entry(peer, &cm->peers, node) { 1817 + if (peer->rdev->destid == chan.remote_destid) { 1818 + ret = 0; 1819 + break; 1820 + } 1821 + } 1822 + 1823 + if (ret) 1824 + goto err_out; 1825 + 1826 + up_read(&rdev_sem); 1827 + 1828 + return riocm_ch_connect(chan.id, cm, peer, chan.remote_channel); 1829 + err_out: 1830 + up_read(&rdev_sem); 1831 + return ret; 1832 + } 1833 + 1834 + /* 1835 + * cm_chan_msg_send() - Send a message through channel 1836 + * @arg: Outbound message information 1837 + */ 1838 + static int cm_chan_msg_send(void __user *arg) 1839 + { 1840 + struct rio_cm_msg msg; 1841 + 
void *buf; 1842 + int ret = 0; 1843 + 1844 + if (copy_from_user(&msg, arg, sizeof(msg))) 1845 + return -EFAULT; 1846 + if (msg.size > RIO_MAX_MSG_SIZE) 1847 + return -EINVAL; 1848 + 1849 + buf = kmalloc(msg.size, GFP_KERNEL); 1850 + if (!buf) 1851 + return -ENOMEM; 1852 + 1853 + if (copy_from_user(buf, (void __user *)(uintptr_t)msg.msg, msg.size)) { 1854 + ret = -EFAULT; 1855 + goto out; 1856 + } 1857 + 1858 + ret = riocm_ch_send(msg.ch_num, buf, msg.size); 1859 + out: 1860 + kfree(buf); 1861 + return ret; 1862 + } 1863 + 1864 + /* 1865 + * cm_chan_msg_rcv() - Receive a message through channel 1866 + * @arg: Inbound message information 1867 + */ 1868 + static int cm_chan_msg_rcv(void __user *arg) 1869 + { 1870 + struct rio_cm_msg msg; 1871 + struct rio_channel *ch; 1872 + void *buf; 1873 + long rxto; 1874 + int ret = 0, msg_size; 1875 + 1876 + if (copy_from_user(&msg, arg, sizeof(msg))) 1877 + return -EFAULT; 1878 + 1879 + if (msg.ch_num == 0 || msg.size == 0) 1880 + return -EINVAL; 1881 + 1882 + ch = riocm_get_channel(msg.ch_num); 1883 + if (!ch) 1884 + return -ENODEV; 1885 + 1886 + rxto = msg.rxto ? 
msecs_to_jiffies(msg.rxto) : MAX_SCHEDULE_TIMEOUT; 1887 + 1888 + ret = riocm_ch_receive(ch, &buf, rxto); 1889 + if (ret) 1890 + goto out; 1891 + 1892 + msg_size = min(msg.size, (u16)(RIO_MAX_MSG_SIZE)); 1893 + 1894 + if (copy_to_user((void __user *)(uintptr_t)msg.msg, buf, msg_size)) 1895 + ret = -EFAULT; 1896 + 1897 + riocm_ch_free_rxbuf(ch, buf); 1898 + out: 1899 + riocm_put_channel(ch); 1900 + return ret; 1901 + } 1902 + 1903 + /* 1904 + * riocm_cdev_ioctl() - IOCTL requests handler 1905 + */ 1906 + static long 1907 + riocm_cdev_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) 1908 + { 1909 + switch (cmd) { 1910 + case RIO_CM_EP_GET_LIST_SIZE: 1911 + return cm_ep_get_list_size((void __user *)arg); 1912 + case RIO_CM_EP_GET_LIST: 1913 + return cm_ep_get_list((void __user *)arg); 1914 + case RIO_CM_CHAN_CREATE: 1915 + return cm_chan_create(filp, (void __user *)arg); 1916 + case RIO_CM_CHAN_CLOSE: 1917 + return cm_chan_close(filp, (void __user *)arg); 1918 + case RIO_CM_CHAN_BIND: 1919 + return cm_chan_bind((void __user *)arg); 1920 + case RIO_CM_CHAN_LISTEN: 1921 + return cm_chan_listen((void __user *)arg); 1922 + case RIO_CM_CHAN_ACCEPT: 1923 + return cm_chan_accept(filp, (void __user *)arg); 1924 + case RIO_CM_CHAN_CONNECT: 1925 + return cm_chan_connect((void __user *)arg); 1926 + case RIO_CM_CHAN_SEND: 1927 + return cm_chan_msg_send((void __user *)arg); 1928 + case RIO_CM_CHAN_RECEIVE: 1929 + return cm_chan_msg_rcv((void __user *)arg); 1930 + case RIO_CM_MPORT_GET_LIST: 1931 + return cm_mport_get_list((void __user *)arg); 1932 + default: 1933 + break; 1934 + } 1935 + 1936 + return -EINVAL; 1937 + } 1938 + 1939 + static const struct file_operations riocm_cdev_fops = { 1940 + .owner = THIS_MODULE, 1941 + .open = riocm_cdev_open, 1942 + .release = riocm_cdev_release, 1943 + .unlocked_ioctl = riocm_cdev_ioctl, 1944 + }; 1945 + 1946 + /* 1947 + * riocm_add_dev - add new remote RapidIO device into channel management core 1948 + * @dev: device object 
associated with RapidIO device 1949 + * @sif: subsystem interface 1950 + * 1951 + * Adds the specified RapidIO device (if applicable) into peers list of 1952 + * the corresponding channel management device (cm_dev). 1953 + */ 1954 + static int riocm_add_dev(struct device *dev, struct subsys_interface *sif) 1955 + { 1956 + struct cm_peer *peer; 1957 + struct rio_dev *rdev = to_rio_dev(dev); 1958 + struct cm_dev *cm; 1959 + 1960 + /* Check if the remote device has capabilities required to support CM */ 1961 + if (!dev_cm_capable(rdev)) 1962 + return 0; 1963 + 1964 + riocm_debug(RDEV, "(%s)", rio_name(rdev)); 1965 + 1966 + peer = kmalloc(sizeof(*peer), GFP_KERNEL); 1967 + if (!peer) 1968 + return -ENOMEM; 1969 + 1970 + /* Find a corresponding cm_dev object */ 1971 + down_write(&rdev_sem); 1972 + list_for_each_entry(cm, &cm_dev_list, list) { 1973 + if (cm->mport == rdev->net->hport) 1974 + goto found; 1975 + } 1976 + 1977 + up_write(&rdev_sem); 1978 + kfree(peer); 1979 + return -ENODEV; 1980 + 1981 + found: 1982 + peer->rdev = rdev; 1983 + list_add_tail(&peer->node, &cm->peers); 1984 + cm->npeers++; 1985 + 1986 + up_write(&rdev_sem); 1987 + return 0; 1988 + } 1989 + 1990 + /* 1991 + * riocm_remove_dev - remove remote RapidIO device from channel management core 1992 + * @dev: device object associated with RapidIO device 1993 + * @sif: subsystem interface 1994 + * 1995 + * Removes the specified RapidIO device (if applicable) from peers list of 1996 + * the corresponding channel management device (cm_dev). 
1997 + */ 1998 + static void riocm_remove_dev(struct device *dev, struct subsys_interface *sif) 1999 + { 2000 + struct rio_dev *rdev = to_rio_dev(dev); 2001 + struct cm_dev *cm; 2002 + struct cm_peer *peer; 2003 + struct rio_channel *ch, *_c; 2004 + unsigned int i; 2005 + bool found = false; 2006 + LIST_HEAD(list); 2007 + 2008 + /* Check if the remote device has capabilities required to support CM */ 2009 + if (!dev_cm_capable(rdev)) 2010 + return; 2011 + 2012 + riocm_debug(RDEV, "(%s)", rio_name(rdev)); 2013 + 2014 + /* Find matching cm_dev object */ 2015 + down_write(&rdev_sem); 2016 + list_for_each_entry(cm, &cm_dev_list, list) { 2017 + if (cm->mport == rdev->net->hport) { 2018 + found = true; 2019 + break; 2020 + } 2021 + } 2022 + 2023 + if (!found) { 2024 + up_write(&rdev_sem); 2025 + return; 2026 + } 2027 + 2028 + /* Remove remote device from the list of peers */ 2029 + found = false; 2030 + list_for_each_entry(peer, &cm->peers, node) { 2031 + if (peer->rdev == rdev) { 2032 + riocm_debug(RDEV, "removing peer %s", rio_name(rdev)); 2033 + found = true; 2034 + list_del(&peer->node); 2035 + cm->npeers--; 2036 + kfree(peer); 2037 + break; 2038 + } 2039 + } 2040 + 2041 + up_write(&rdev_sem); 2042 + 2043 + if (!found) 2044 + return; 2045 + 2046 + /* 2047 + * Release channels associated with this peer 2048 + */ 2049 + 2050 + spin_lock_bh(&idr_lock); 2051 + idr_for_each_entry(&ch_idr, ch, i) { 2052 + if (ch && ch->rdev == rdev) { 2053 + if (atomic_read(&rdev->state) != RIO_DEVICE_SHUTDOWN) 2054 + riocm_exch(ch, RIO_CM_DISCONNECT); 2055 + idr_remove(&ch_idr, ch->id); 2056 + list_add(&ch->ch_node, &list); 2057 + } 2058 + } 2059 + spin_unlock_bh(&idr_lock); 2060 + 2061 + if (!list_empty(&list)) { 2062 + list_for_each_entry_safe(ch, _c, &list, ch_node) { 2063 + list_del(&ch->ch_node); 2064 + riocm_ch_close(ch); 2065 + } 2066 + } 2067 + } 2068 + 2069 + /* 2070 + * riocm_cdev_add() - Create rio_cm char device 2071 + * @devno: device number assigned to device (MAJ + MIN) 
2072 + */ 2073 + static int riocm_cdev_add(dev_t devno) 2074 + { 2075 + int ret; 2076 + 2077 + cdev_init(&riocm_cdev.cdev, &riocm_cdev_fops); 2078 + riocm_cdev.cdev.owner = THIS_MODULE; 2079 + ret = cdev_add(&riocm_cdev.cdev, devno, 1); 2080 + if (ret < 0) { 2081 + riocm_error("Cannot register a device with error %d", ret); 2082 + return ret; 2083 + } 2084 + 2085 + riocm_cdev.dev = device_create(dev_class, NULL, devno, NULL, DEV_NAME); 2086 + if (IS_ERR(riocm_cdev.dev)) { 2087 + cdev_del(&riocm_cdev.cdev); 2088 + return PTR_ERR(riocm_cdev.dev); 2089 + } 2090 + 2091 + riocm_debug(MPORT, "Added %s cdev(%d:%d)", 2092 + DEV_NAME, MAJOR(devno), MINOR(devno)); 2093 + 2094 + return 0; 2095 + } 2096 + 2097 + /* 2098 + * riocm_add_mport - add new local mport device into channel management core 2099 + * @dev: device object associated with mport 2100 + * @class_intf: class interface 2101 + * 2102 + * When a new mport device is added, CM immediately reserves inbound and 2103 + * outbound RapidIO mailboxes that will be used. 
2104 + */ 2105 + static int riocm_add_mport(struct device *dev, 2106 + struct class_interface *class_intf) 2107 + { 2108 + int rc; 2109 + int i; 2110 + struct cm_dev *cm; 2111 + struct rio_mport *mport = to_rio_mport(dev); 2112 + 2113 + riocm_debug(MPORT, "add mport %s", mport->name); 2114 + 2115 + cm = kzalloc(sizeof(*cm), GFP_KERNEL); 2116 + if (!cm) 2117 + return -ENOMEM; 2118 + 2119 + cm->mport = mport; 2120 + 2121 + rc = rio_request_outb_mbox(mport, cm, cmbox, 2122 + RIOCM_TX_RING_SIZE, riocm_outb_msg_event); 2123 + if (rc) { 2124 + riocm_error("failed to allocate OBMBOX_%d on %s", 2125 + cmbox, mport->name); 2126 + kfree(cm); 2127 + return -ENODEV; 2128 + } 2129 + 2130 + rc = rio_request_inb_mbox(mport, cm, cmbox, 2131 + RIOCM_RX_RING_SIZE, riocm_inb_msg_event); 2132 + if (rc) { 2133 + riocm_error("failed to allocate IBMBOX_%d on %s", 2134 + cmbox, mport->name); 2135 + rio_release_outb_mbox(mport, cmbox); 2136 + kfree(cm); 2137 + return -ENODEV; 2138 + } 2139 + 2140 + /* 2141 + * Allocate and register inbound messaging buffers to be ready 2142 + * to receive channel and system management requests 2143 + */ 2144 + for (i = 0; i < RIOCM_RX_RING_SIZE; i++) 2145 + cm->rx_buf[i] = NULL; 2146 + 2147 + cm->rx_slots = RIOCM_RX_RING_SIZE; 2148 + mutex_init(&cm->rx_lock); 2149 + riocm_rx_fill(cm, RIOCM_RX_RING_SIZE); 2150 + cm->rx_wq = create_workqueue(DRV_NAME "/rxq"); 2151 + INIT_WORK(&cm->rx_work, rio_ibmsg_handler); 2152 + 2153 + cm->tx_slot = 0; 2154 + cm->tx_cnt = 0; 2155 + cm->tx_ack_slot = 0; 2156 + spin_lock_init(&cm->tx_lock); 2157 + 2158 + INIT_LIST_HEAD(&cm->peers); 2159 + cm->npeers = 0; 2160 + INIT_LIST_HEAD(&cm->tx_reqs); 2161 + 2162 + down_write(&rdev_sem); 2163 + list_add_tail(&cm->list, &cm_dev_list); 2164 + up_write(&rdev_sem); 2165 + 2166 + return 0; 2167 + } 2168 + 2169 + /* 2170 + * riocm_remove_mport - remove local mport device from channel management core 2171 + * @dev: device object associated with mport 2172 + * @class_intf: class interface 
2173 + * 2174 + * Removes a local mport device from the list of registered devices that provide 2175 + * channel management services. Returns an error if the specified mport is not 2176 + * registered with the CM core. 2177 + */ 2178 + static void riocm_remove_mport(struct device *dev, 2179 + struct class_interface *class_intf) 2180 + { 2181 + struct rio_mport *mport = to_rio_mport(dev); 2182 + struct cm_dev *cm; 2183 + struct cm_peer *peer, *temp; 2184 + struct rio_channel *ch, *_c; 2185 + unsigned int i; 2186 + bool found = false; 2187 + LIST_HEAD(list); 2188 + 2189 + riocm_debug(MPORT, "%s", mport->name); 2190 + 2191 + /* Find a matching cm_dev object */ 2192 + down_write(&rdev_sem); 2193 + list_for_each_entry(cm, &cm_dev_list, list) { 2194 + if (cm->mport == mport) { 2195 + list_del(&cm->list); 2196 + found = true; 2197 + break; 2198 + } 2199 + } 2200 + up_write(&rdev_sem); 2201 + if (!found) 2202 + return; 2203 + 2204 + flush_workqueue(cm->rx_wq); 2205 + destroy_workqueue(cm->rx_wq); 2206 + 2207 + /* Release channels bound to this mport */ 2208 + spin_lock_bh(&idr_lock); 2209 + idr_for_each_entry(&ch_idr, ch, i) { 2210 + if (ch->cmdev == cm) { 2211 + riocm_debug(RDEV, "%s drop ch_%d", 2212 + mport->name, ch->id); 2213 + idr_remove(&ch_idr, ch->id); 2214 + list_add(&ch->ch_node, &list); 2215 + } 2216 + } 2217 + spin_unlock_bh(&idr_lock); 2218 + 2219 + if (!list_empty(&list)) { 2220 + list_for_each_entry_safe(ch, _c, &list, ch_node) { 2221 + list_del(&ch->ch_node); 2222 + riocm_ch_close(ch); 2223 + } 2224 + } 2225 + 2226 + rio_release_inb_mbox(mport, cmbox); 2227 + rio_release_outb_mbox(mport, cmbox); 2228 + 2229 + /* Remove and free peer entries */ 2230 + if (!list_empty(&cm->peers)) 2231 + riocm_debug(RDEV, "ATTN: peer list not empty"); 2232 + list_for_each_entry_safe(peer, temp, &cm->peers, node) { 2233 + riocm_debug(RDEV, "removing peer %s", rio_name(peer->rdev)); 2234 + list_del(&peer->node); 2235 + kfree(peer); 2236 + } 2237 + 2238 + riocm_rx_free(cm); 
2239 + kfree(cm); 2240 + riocm_debug(MPORT, "%s done", mport->name); 2241 + } 2242 + 2243 + static int rio_cm_shutdown(struct notifier_block *nb, unsigned long code, 2244 + void *unused) 2245 + { 2246 + struct rio_channel *ch; 2247 + unsigned int i; 2248 + 2249 + riocm_debug(EXIT, "."); 2250 + 2251 + spin_lock_bh(&idr_lock); 2252 + idr_for_each_entry(&ch_idr, ch, i) { 2253 + riocm_debug(EXIT, "close ch %d", ch->id); 2254 + if (ch->state == RIO_CM_CONNECTED) 2255 + riocm_send_close(ch); 2256 + } 2257 + spin_unlock_bh(&idr_lock); 2258 + 2259 + return NOTIFY_DONE; 2260 + } 2261 + 2262 + /* 2263 + * riocm_interface handles addition/removal of remote RapidIO devices 2264 + */ 2265 + static struct subsys_interface riocm_interface = { 2266 + .name = "rio_cm", 2267 + .subsys = &rio_bus_type, 2268 + .add_dev = riocm_add_dev, 2269 + .remove_dev = riocm_remove_dev, 2270 + }; 2271 + 2272 + /* 2273 + * rio_mport_interface handles addition/removal of local mport devices 2274 + */ 2275 + static struct class_interface rio_mport_interface __refdata = { 2276 + .class = &rio_mport_class, 2277 + .add_dev = riocm_add_mport, 2278 + .remove_dev = riocm_remove_mport, 2279 + }; 2280 + 2281 + static struct notifier_block rio_cm_notifier = { 2282 + .notifier_call = rio_cm_shutdown, 2283 + }; 2284 + 2285 + static int __init riocm_init(void) 2286 + { 2287 + int ret; 2288 + 2289 + /* Create device class needed by udev */ 2290 + dev_class = class_create(THIS_MODULE, DRV_NAME); 2291 + if (IS_ERR(dev_class)) { 2292 + riocm_error("Cannot create " DRV_NAME " class"); 2293 + return PTR_ERR(dev_class); 2294 + } 2295 + 2296 + ret = alloc_chrdev_region(&dev_number, 0, 1, DRV_NAME); 2297 + if (ret) { 2298 + class_destroy(dev_class); 2299 + return ret; 2300 + } 2301 + 2302 + dev_major = MAJOR(dev_number); 2303 + dev_minor_base = MINOR(dev_number); 2304 + riocm_debug(INIT, "Registered class with %d major", dev_major); 2305 + 2306 + /* 2307 + * Register as rapidio_port class interface to get notifications
about 2308 + * mport additions and removals. 2309 + */ 2310 + ret = class_interface_register(&rio_mport_interface); 2311 + if (ret) { 2312 + riocm_error("class_interface_register error: %d", ret); 2313 + goto err_reg; 2314 + } 2315 + 2316 + /* 2317 + * Register as RapidIO bus interface to get notifications about 2318 + * addition/removal of remote RapidIO devices. 2319 + */ 2320 + ret = subsys_interface_register(&riocm_interface); 2321 + if (ret) { 2322 + riocm_error("subsys_interface_register error: %d", ret); 2323 + goto err_cl; 2324 + } 2325 + 2326 + ret = register_reboot_notifier(&rio_cm_notifier); 2327 + if (ret) { 2328 + riocm_error("failed to register reboot notifier (err=%d)", ret); 2329 + goto err_sif; 2330 + } 2331 + 2332 + ret = riocm_cdev_add(dev_number); 2333 + if (ret) { 2334 + unregister_reboot_notifier(&rio_cm_notifier); 2335 + ret = -ENODEV; 2336 + goto err_sif; 2337 + } 2338 + 2339 + return 0; 2340 + err_sif: 2341 + subsys_interface_unregister(&riocm_interface); 2342 + err_cl: 2343 + class_interface_unregister(&rio_mport_interface); 2344 + err_reg: 2345 + unregister_chrdev_region(dev_number, 1); 2346 + class_destroy(dev_class); 2347 + return ret; 2348 + } 2349 + 2350 + static void __exit riocm_exit(void) 2351 + { 2352 + riocm_debug(EXIT, "enter"); 2353 + unregister_reboot_notifier(&rio_cm_notifier); 2354 + subsys_interface_unregister(&riocm_interface); 2355 + class_interface_unregister(&rio_mport_interface); 2356 + idr_destroy(&ch_idr); 2357 + 2358 + device_unregister(riocm_cdev.dev); 2359 + cdev_del(&(riocm_cdev.cdev)); 2360 + 2361 + class_destroy(dev_class); 2362 + unregister_chrdev_region(dev_number, 1); 2363 + } 2364 + 2365 + late_initcall(riocm_init); 2366 + module_exit(riocm_exit);
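A detail of cm_mport_get_list() above that is easy to miss: the buffer copied back to userspace starts with a u32 entry count, followed by one u32 per mport that packs the mport ID into the upper 16 bits and the host destination ID into the lower 16 (the `(cm->mport->id << 16) | cm->mport->host_deviceid` expression). A minimal standalone sketch of that encoding — the helper names are illustrative, not part of the driver:

```c
#include <stdint.h>

/* Each entry after the leading count word returned by the
 * RIO_CM_MPORT_GET_LIST ioctl packs the mport ID into the upper 16 bits
 * and the host destination ID into the lower 16 bits, mirroring
 * cm_mport_get_list(). Helper names here are illustrative only. */
static inline uint32_t mport_entry_pack(uint16_t mport_id, uint16_t host_destid)
{
	return ((uint32_t)mport_id << 16) | host_destid;
}

static inline uint16_t mport_entry_id(uint32_t entry)
{
	return entry >> 16;		/* upper half: mport ID */
}

static inline uint16_t mport_entry_destid(uint32_t entry)
{
	return entry & 0xffff;		/* lower half: host destID */
}
```

A userspace consumer of the ioctl would apply the same unpacking to each entry after the leading count word.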
+6
drivers/rapidio/switches/Kconfig
··· 22 22 default n 23 23 ---help--- 24 24 Includes support for IDT CPS Gen.2 serial RapidIO switches. 25 + 26 + config RAPIDIO_RXS_GEN3 27 + tristate "IDT RXS Gen.3 SRIO switch support" 28 + default n 29 + ---help--- 30 + Includes support for IDT RXS Gen.3 serial RapidIO switches.
+1
drivers/rapidio/switches/Makefile
··· 6 6 obj-$(CONFIG_RAPIDIO_CPS_XX) += idtcps.o 7 7 obj-$(CONFIG_RAPIDIO_TSI568) += tsi568.o 8 8 obj-$(CONFIG_RAPIDIO_CPS_GEN2) += idt_gen2.o 9 + obj-$(CONFIG_RAPIDIO_RXS_GEN3) += idt_gen3.o
+3 -4
drivers/rapidio/switches/idt_gen2.c
··· 436 436 RIO_STD_RTE_DEFAULT_PORT, IDT_NO_ROUTE); 437 437 } 438 438 439 + spin_unlock(&rdev->rswitch->lock); 440 + 439 441 /* Create device-specific sysfs attributes */ 440 442 idtg2_sysfs(rdev, true); 441 443 442 - spin_unlock(&rdev->rswitch->lock); 443 444 return 0; 444 445 } 445 446 ··· 453 452 return; 454 453 } 455 454 rdev->rswitch->ops = NULL; 456 - 455 + spin_unlock(&rdev->rswitch->lock); 457 456 /* Remove device-specific sysfs attributes */ 458 457 idtg2_sysfs(rdev, false); 459 - 460 - spin_unlock(&rdev->rswitch->lock); 461 458 } 462 459 463 460 static struct rio_device_id idtg2_id_table[] = {
+382
drivers/rapidio/switches/idt_gen3.c
··· 1 + /* 2 + * IDT RXS Gen.3 Serial RapidIO switch family support 3 + * 4 + * Copyright 2016 Integrated Device Technology, Inc. 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms of the GNU General Public License as published by the 8 + * Free Software Foundation; either version 2 of the License, or (at your 9 + * option) any later version. 10 + */ 11 + 12 + #include <linux/stat.h> 13 + #include <linux/module.h> 14 + #include <linux/rio.h> 15 + #include <linux/rio_drv.h> 16 + #include <linux/rio_ids.h> 17 + #include <linux/delay.h> 18 + 19 + #include <asm/page.h> 20 + #include "../rio.h" 21 + 22 + #define RIO_EM_PW_STAT 0x40020 23 + #define RIO_PW_CTL 0x40204 24 + #define RIO_PW_CTL_PW_TMR 0xffffff00 25 + #define RIO_PW_ROUTE 0x40208 26 + 27 + #define RIO_EM_DEV_INT_EN 0x40030 28 + 29 + #define RIO_PLM_SPx_IMP_SPEC_CTL(x) (0x10100 + (x)*0x100) 30 + #define RIO_PLM_SPx_IMP_SPEC_CTL_SOFT_RST 0x02000000 31 + 32 + #define RIO_PLM_SPx_PW_EN(x) (0x10118 + (x)*0x100) 33 + #define RIO_PLM_SPx_PW_EN_OK2U 0x40000000 34 + #define RIO_PLM_SPx_PW_EN_LINIT 0x10000000 35 + 36 + #define RIO_BC_L2_Gn_ENTRYx_CSR(n, x) (0x31000 + (n)*0x400 + (x)*0x4) 37 + #define RIO_SPx_L2_Gn_ENTRYy_CSR(x, n, y) \ 38 + (0x51000 + (x)*0x2000 + (n)*0x400 + (y)*0x4) 39 + 40 + static int 41 + idtg3_route_add_entry(struct rio_mport *mport, u16 destid, u8 hopcount, 42 + u16 table, u16 route_destid, u8 route_port) 43 + { 44 + u32 rval; 45 + u32 entry = route_port; 46 + int err = 0; 47 + 48 + pr_debug("RIO: %s t=0x%x did_%x to p_%x\n", 49 + __func__, table, route_destid, entry); 50 + 51 + if (route_destid > 0xFF) 52 + return -EINVAL; 53 + 54 + if (route_port == RIO_INVALID_ROUTE) 55 + entry = RIO_RT_ENTRY_DROP_PKT; 56 + 57 + if (table == RIO_GLOBAL_TABLE) { 58 + /* Use broadcast register to update all per-port tables */ 59 + err = rio_mport_write_config_32(mport, destid, hopcount, 60 + RIO_BC_L2_Gn_ENTRYx_CSR(0, route_destid), 61 + entry); 62 + return 
err; 63 + } 64 + 65 + /* 66 + * Verify that specified port/table number is valid 67 + */ 68 + err = rio_mport_read_config_32(mport, destid, hopcount, 69 + RIO_SWP_INFO_CAR, &rval); 70 + if (err) 71 + return err; 72 + 73 + if (table >= RIO_GET_TOTAL_PORTS(rval)) 74 + return -EINVAL; 75 + 76 + err = rio_mport_write_config_32(mport, destid, hopcount, 77 + RIO_SPx_L2_Gn_ENTRYy_CSR(table, 0, route_destid), 78 + entry); 79 + return err; 80 + } 81 + 82 + static int 83 + idtg3_route_get_entry(struct rio_mport *mport, u16 destid, u8 hopcount, 84 + u16 table, u16 route_destid, u8 *route_port) 85 + { 86 + u32 rval; 87 + int err; 88 + 89 + if (route_destid > 0xFF) 90 + return -EINVAL; 91 + 92 + err = rio_mport_read_config_32(mport, destid, hopcount, 93 + RIO_SWP_INFO_CAR, &rval); 94 + if (err) 95 + return err; 96 + 97 + /* 98 + * This switch device does not have the dedicated global routing table. 99 + * It is substituted by reading routing table of the ingress port of 100 + * maintenance read requests. 
101 + */ 102 + if (table == RIO_GLOBAL_TABLE) 103 + table = RIO_GET_PORT_NUM(rval); 104 + else if (table >= RIO_GET_TOTAL_PORTS(rval)) 105 + return -EINVAL; 106 + 107 + err = rio_mport_read_config_32(mport, destid, hopcount, 108 + RIO_SPx_L2_Gn_ENTRYy_CSR(table, 0, route_destid), 109 + &rval); 110 + if (err) 111 + return err; 112 + 113 + if (rval == RIO_RT_ENTRY_DROP_PKT) 114 + *route_port = RIO_INVALID_ROUTE; 115 + else 116 + *route_port = (u8)rval; 117 + 118 + return 0; 119 + } 120 + 121 + static int 122 + idtg3_route_clr_table(struct rio_mport *mport, u16 destid, u8 hopcount, 123 + u16 table) 124 + { 125 + u32 i; 126 + u32 rval; 127 + int err; 128 + 129 + if (table == RIO_GLOBAL_TABLE) { 130 + for (i = 0; i <= 0xff; i++) { 131 + err = rio_mport_write_config_32(mport, destid, hopcount, 132 + RIO_BC_L2_Gn_ENTRYx_CSR(0, i), 133 + RIO_RT_ENTRY_DROP_PKT); 134 + if (err) 135 + break; 136 + } 137 + 138 + return err; 139 + } 140 + 141 + err = rio_mport_read_config_32(mport, destid, hopcount, 142 + RIO_SWP_INFO_CAR, &rval); 143 + if (err) 144 + return err; 145 + 146 + if (table >= RIO_GET_TOTAL_PORTS(rval)) 147 + return -EINVAL; 148 + 149 + for (i = 0; i <= 0xff; i++) { 150 + err = rio_mport_write_config_32(mport, destid, hopcount, 151 + RIO_SPx_L2_Gn_ENTRYy_CSR(table, 0, i), 152 + RIO_RT_ENTRY_DROP_PKT); 153 + if (err) 154 + break; 155 + } 156 + 157 + return err; 158 + } 159 + 160 + /* 161 + * This routine performs device-specific initialization only. 162 + * All standard EM configuration should be performed at upper level. 
163 + */ 164 + static int 165 + idtg3_em_init(struct rio_dev *rdev) 166 + { 167 + int i, tmp; 168 + u32 rval; 169 + 170 + pr_debug("RIO: %s [%d:%d]\n", __func__, rdev->destid, rdev->hopcount); 171 + 172 + /* Disable assertion of interrupt signal */ 173 + rio_write_config_32(rdev, RIO_EM_DEV_INT_EN, 0); 174 + 175 + /* Disable port-write event notifications during initialization */ 176 + rio_write_config_32(rdev, rdev->em_efptr + RIO_EM_PW_TX_CTRL, 177 + RIO_EM_PW_TX_CTRL_PW_DIS); 178 + 179 + /* Configure Port-Write notifications for hot-swap events */ 180 + tmp = RIO_GET_TOTAL_PORTS(rdev->swpinfo); 181 + for (i = 0; i < tmp; i++) { 182 + 183 + rio_read_config_32(rdev, 184 + RIO_DEV_PORT_N_ERR_STS_CSR(rdev, i), 185 + &rval); 186 + if (rval & RIO_PORT_N_ERR_STS_PORT_UA) 187 + continue; 188 + 189 + /* Clear events signaled before enabling notification */ 190 + rio_write_config_32(rdev, 191 + rdev->em_efptr + RIO_EM_PN_ERR_DETECT(i), 0); 192 + 193 + /* Enable event notifications */ 194 + rio_write_config_32(rdev, 195 + rdev->em_efptr + RIO_EM_PN_ERRRATE_EN(i), 196 + RIO_EM_PN_ERRRATE_EN_OK2U | RIO_EM_PN_ERRRATE_EN_U2OK); 197 + /* Enable port-write generation on events */ 198 + rio_write_config_32(rdev, RIO_PLM_SPx_PW_EN(i), 199 + RIO_PLM_SPx_PW_EN_OK2U | RIO_PLM_SPx_PW_EN_LINIT); 200 + 201 + } 202 + 203 + /* Set Port-Write destination port */ 204 + tmp = RIO_GET_PORT_NUM(rdev->swpinfo); 205 + rio_write_config_32(rdev, RIO_PW_ROUTE, 1 << tmp); 206 + 207 + 208 + /* Enable sending port-write event notifications */ 209 + rio_write_config_32(rdev, rdev->em_efptr + RIO_EM_PW_TX_CTRL, 0); 210 + 211 + /* set TVAL = ~50us */ 212 + rio_write_config_32(rdev, 213 + rdev->phys_efptr + RIO_PORT_LINKTO_CTL_CSR, 0x8e << 8); 214 + return 0; 215 + } 216 + 217 + 218 + /* 219 + * idtg3_em_handler - device-specific error handler 220 + * 221 + * If the link is down (PORT_UNINIT) does nothing - this is considered 222 + * as link partner removal from the port. 
223 + * 224 + * If the link is up (PORT_OK) - situation is handled as *new* device insertion. 225 + * In this case ERR_STOP bits are cleared by issuing soft reset command to the 226 + * reporting port. Inbound and outbound ackIDs are cleared by the reset as well. 227 + * This way the port is synchronized with freshly inserted device (assuming it 228 + * was reset/powered-up on insertion). 229 + * 230 + * TODO: This is not sufficient in a situation when a link between two devices 231 + * was down and up again (e.g. cable disconnect). For that situation full ackID 232 + * realignment process has to be implemented. 233 + */ 234 + static int 235 + idtg3_em_handler(struct rio_dev *rdev, u8 pnum) 236 + { 237 + u32 err_status; 238 + u32 rval; 239 + 240 + rio_read_config_32(rdev, 241 + RIO_DEV_PORT_N_ERR_STS_CSR(rdev, pnum), 242 + &err_status); 243 + 244 + /* Do nothing for device/link removal */ 245 + if (err_status & RIO_PORT_N_ERR_STS_PORT_UNINIT) 246 + return 0; 247 + 248 + /* When link is OK we have a device insertion. 249 + * Request port soft reset to clear errors if any are present. 250 + * Inbound and outbound ackIDs will be 0 after reset.
251 + */ 252 + if (err_status & (RIO_PORT_N_ERR_STS_OUT_ES | 253 + RIO_PORT_N_ERR_STS_INP_ES)) { 254 + rio_read_config_32(rdev, RIO_PLM_SPx_IMP_SPEC_CTL(pnum), &rval); 255 + rio_write_config_32(rdev, RIO_PLM_SPx_IMP_SPEC_CTL(pnum), 256 + rval | RIO_PLM_SPx_IMP_SPEC_CTL_SOFT_RST); 257 + udelay(10); 258 + rio_write_config_32(rdev, RIO_PLM_SPx_IMP_SPEC_CTL(pnum), rval); 259 + msleep(500); 260 + } 261 + 262 + return 0; 263 + } 264 + 265 + static struct rio_switch_ops idtg3_switch_ops = { 266 + .owner = THIS_MODULE, 267 + .add_entry = idtg3_route_add_entry, 268 + .get_entry = idtg3_route_get_entry, 269 + .clr_table = idtg3_route_clr_table, 270 + .em_init = idtg3_em_init, 271 + .em_handle = idtg3_em_handler, 272 + }; 273 + 274 + static int idtg3_probe(struct rio_dev *rdev, const struct rio_device_id *id) 275 + { 276 + pr_debug("RIO: %s for %s\n", __func__, rio_name(rdev)); 277 + 278 + spin_lock(&rdev->rswitch->lock); 279 + 280 + if (rdev->rswitch->ops) { 281 + spin_unlock(&rdev->rswitch->lock); 282 + return -EINVAL; 283 + } 284 + 285 + rdev->rswitch->ops = &idtg3_switch_ops; 286 + 287 + if (rdev->do_enum) { 288 + /* Disable hierarchical routing support: Existing fabric 289 + * enumeration/discovery process (see rio-scan.c) uses 8-bit 290 + * flat destination ID routing only. 291 + */ 292 + rio_write_config_32(rdev, 0x5000 + RIO_BC_RT_CTL_CSR, 0); 293 + } 294 + 295 + spin_unlock(&rdev->rswitch->lock); 296 + 297 + return 0; 298 + } 299 + 300 + static void idtg3_remove(struct rio_dev *rdev) 301 + { 302 + pr_debug("RIO: %s for %s\n", __func__, rio_name(rdev)); 303 + spin_lock(&rdev->rswitch->lock); 304 + if (rdev->rswitch->ops == &idtg3_switch_ops) 305 + rdev->rswitch->ops = NULL; 306 + spin_unlock(&rdev->rswitch->lock); 307 + } 308 + 309 + /* 310 + * Gen3 switches repeat sending PW messages until a corresponding event flag 311 + * is cleared. Use shutdown notification to disable generation of port-write 312 + * messages if their destination node is shut down. 
313 + */ 314 + static void idtg3_shutdown(struct rio_dev *rdev) 315 + { 316 + int i; 317 + u32 rval; 318 + u16 destid; 319 + 320 + /* Currently the enumerator node acts also as PW handler */ 321 + if (!rdev->do_enum) 322 + return; 323 + 324 + pr_debug("RIO: %s(%s)\n", __func__, rio_name(rdev)); 325 + 326 + rio_read_config_32(rdev, RIO_PW_ROUTE, &rval); 327 + i = RIO_GET_PORT_NUM(rdev->swpinfo); 328 + 329 + /* Check port-write destination port */ 330 + if (!((1 << i) & rval)) 331 + return; 332 + 333 + /* Disable sending port-write event notifications if PW destID 334 + * matches to one of the enumerator node 335 + */ 336 + rio_read_config_32(rdev, rdev->em_efptr + RIO_EM_PW_TGT_DEVID, &rval); 337 + 338 + if (rval & RIO_EM_PW_TGT_DEVID_DEV16) 339 + destid = rval >> 16; 340 + else 341 + destid = ((rval & RIO_EM_PW_TGT_DEVID_D8) >> 16); 342 + 343 + if (rdev->net->hport->host_deviceid == destid) { 344 + rio_write_config_32(rdev, 345 + rdev->em_efptr + RIO_EM_PW_TX_CTRL, 0); 346 + pr_debug("RIO: %s(%s) PW transmission disabled\n", 347 + __func__, rio_name(rdev)); 348 + } 349 + } 350 + 351 + static struct rio_device_id idtg3_id_table[] = { 352 + {RIO_DEVICE(RIO_DID_IDTRXS1632, RIO_VID_IDT)}, 353 + {RIO_DEVICE(RIO_DID_IDTRXS2448, RIO_VID_IDT)}, 354 + { 0, } /* terminate list */ 355 + }; 356 + 357 + static struct rio_driver idtg3_driver = { 358 + .name = "idt_gen3", 359 + .id_table = idtg3_id_table, 360 + .probe = idtg3_probe, 361 + .remove = idtg3_remove, 362 + .shutdown = idtg3_shutdown, 363 + }; 364 + 365 + static int __init idtg3_init(void) 366 + { 367 + return rio_register_driver(&idtg3_driver); 368 + } 369 + 370 + static void __exit idtg3_exit(void) 371 + { 372 + pr_debug("RIO: %s\n", __func__); 373 + rio_unregister_driver(&idtg3_driver); 374 + pr_debug("RIO: %s done\n", __func__); 375 + } 376 + 377 + device_initcall(idtg3_init); 378 + module_exit(idtg3_exit); 379 + 380 + MODULE_DESCRIPTION("IDT RXS Gen.3 Serial RapidIO switch family driver"); 381 + 
MODULE_AUTHOR("Integrated Device Technology, Inc."); 382 + MODULE_LICENSE("GPL");
+12 -14
drivers/rapidio/switches/tsi57x.c
··· 175 175 176 176 /* Clear all pending interrupts */ 177 177 rio_read_config_32(rdev, 178 - rdev->phys_efptr + 179 - RIO_PORT_N_ERR_STS_CSR(portnum), 178 + RIO_DEV_PORT_N_ERR_STS_CSR(rdev, portnum), 180 179 &regval); 181 180 rio_write_config_32(rdev, 182 - rdev->phys_efptr + 183 - RIO_PORT_N_ERR_STS_CSR(portnum), 181 + RIO_DEV_PORT_N_ERR_STS_CSR(rdev, portnum), 184 182 regval & 0x07120214); 185 183 186 184 rio_read_config_32(rdev, ··· 196 198 197 199 /* Skip next (odd) port if the current port is in x4 mode */ 198 200 rio_read_config_32(rdev, 199 - rdev->phys_efptr + RIO_PORT_N_CTL_CSR(portnum), 201 + RIO_DEV_PORT_N_CTL_CSR(rdev, portnum), 200 202 &regval); 201 203 if ((regval & RIO_PORT_N_CTL_PWIDTH) == RIO_PORT_N_CTL_PWIDTH_4) 202 204 portnum++; ··· 219 221 u32 regval; 220 222 221 223 rio_read_config_32(rdev, 222 - rdev->phys_efptr + RIO_PORT_N_ERR_STS_CSR(portnum), 224 + RIO_DEV_PORT_N_ERR_STS_CSR(rdev, portnum), 223 225 &err_status); 224 226 225 227 if ((err_status & RIO_PORT_N_ERR_STS_PORT_OK) && 226 - (err_status & (RIO_PORT_N_ERR_STS_PW_OUT_ES | 227 - RIO_PORT_N_ERR_STS_PW_INP_ES))) { 228 + (err_status & (RIO_PORT_N_ERR_STS_OUT_ES | 229 + RIO_PORT_N_ERR_STS_INP_ES))) { 228 230 /* Remove any queued packets by locking/unlocking port */ 229 231 rio_read_config_32(rdev, 230 - rdev->phys_efptr + RIO_PORT_N_CTL_CSR(portnum), 232 + RIO_DEV_PORT_N_CTL_CSR(rdev, portnum), 231 233 &regval); 232 234 if (!(regval & RIO_PORT_N_CTL_LOCKOUT)) { 233 235 rio_write_config_32(rdev, 234 - rdev->phys_efptr + RIO_PORT_N_CTL_CSR(portnum), 236 + RIO_DEV_PORT_N_CTL_CSR(rdev, portnum), 235 237 regval | RIO_PORT_N_CTL_LOCKOUT); 236 238 udelay(50); 237 239 rio_write_config_32(rdev, 238 - rdev->phys_efptr + RIO_PORT_N_CTL_CSR(portnum), 240 + RIO_DEV_PORT_N_CTL_CSR(rdev, portnum), 239 241 regval); 240 242 } 241 243 ··· 243 245 * valid bit 244 246 */ 245 247 rio_read_config_32(rdev, 246 - rdev->phys_efptr + RIO_PORT_N_MNT_RSP_CSR(portnum), 248 + RIO_DEV_PORT_N_MNT_RSP_CSR(rdev, 
portnum), 247 249 &regval); 248 250 249 251 /* Send a Packet-Not-Accepted/Link-Request-Input-Status control ··· 257 259 while (checkcount--) { 258 260 udelay(50); 259 261 rio_read_config_32(rdev, 260 - rdev->phys_efptr + 261 - RIO_PORT_N_MNT_RSP_CSR(portnum), 262 + RIO_DEV_PORT_N_MNT_RSP_CSR(rdev, 263 + portnum), 262 264 &regval); 263 265 if (regval & RIO_PORT_N_MNT_RSP_RVAL) 264 266 goto exit_es;
+2
drivers/video/fbdev/bfin_adv7393fb.c
··· 10 10 * TODO: Code Cleanup 11 11 */ 12 12 13 + #define DRIVER_NAME "bfin-adv7393" 14 + 13 15 #define pr_fmt(fmt) DRIVER_NAME ": " fmt 14 16 15 17 #include <linux/module.h>
-2
drivers/video/fbdev/bfin_adv7393fb.h
··· 59 59 BLANK_OFF, 60 60 }; 61 61 62 - #define DRIVER_NAME "bfin-adv7393" 63 - 64 62 struct adv7393fb_modes { 65 63 const s8 name[25]; /* Full name */ 66 64 u16 xres; /* Active Horizontal Pixels */
+2 -2
drivers/video/logo/logo.c
··· 36 36 37 37 late_initcall(fb_logo_late_init); 38 38 39 - /* logo's are marked __initdata. Use __init_refok to tell 39 + /* logo's are marked __initdata. Use __ref to tell 40 40 * modpost that it is intended that this function uses data 41 41 * marked __initdata. 42 42 */ 43 - const struct linux_logo * __init_refok fb_find_logo(int depth) 43 + const struct linux_logo * __ref fb_find_logo(int depth) 44 44 { 45 45 const struct linux_logo *logo = NULL; 46 46
-2
drivers/w1/masters/omap_hdq.c
··· 390 390 goto out; 391 391 } 392 392 393 - hdq_data->hdq_irqstatus = 0; 394 - 395 393 if (!(hdq_data->hdq_irqstatus & OMAP_HDQ_INT_STATUS_RXCOMPLETE)) { 396 394 hdq_reg_merge(hdq_data, OMAP_HDQ_CTRL_STATUS, 397 395 OMAP_HDQ_CTRL_STATUS_DIR | OMAP_HDQ_CTRL_STATUS_GO,
+1 -13
drivers/w1/slaves/w1_ds2406.c
··· 153 153 .fid = W1_FAMILY_DS2406, 154 154 .fops = &w1_f12_fops, 155 155 }; 156 - 157 - static int __init w1_f12_init(void) 158 - { 159 - return w1_register_family(&w1_family_12); 160 - } 161 - 162 - static void __exit w1_f12_exit(void) 163 - { 164 - w1_unregister_family(&w1_family_12); 165 - } 166 - 167 - module_init(w1_f12_init); 168 - module_exit(w1_f12_exit); 156 + module_w1_family(w1_family_12);
+1 -13
drivers/w1/slaves/w1_ds2408.c
··· 351 351 .fid = W1_FAMILY_DS2408, 352 352 .fops = &w1_f29_fops, 353 353 }; 354 - 355 - static int __init w1_f29_init(void) 356 - { 357 - return w1_register_family(&w1_family_29); 358 - } 359 - 360 - static void __exit w1_f29_exit(void) 361 - { 362 - w1_unregister_family(&w1_family_29); 363 - } 364 - 365 - module_init(w1_f29_init); 366 - module_exit(w1_f29_exit); 354 + module_w1_family(w1_family_29);
+1 -13
drivers/w1/slaves/w1_ds2413.c
··· 135 135 .fid = W1_FAMILY_DS2413, 136 136 .fops = &w1_f3a_fops, 137 137 }; 138 - 139 - static int __init w1_f3a_init(void) 140 - { 141 - return w1_register_family(&w1_family_3a); 142 - } 143 - 144 - static void __exit w1_f3a_exit(void) 145 - { 146 - w1_unregister_family(&w1_family_3a); 147 - } 148 - 149 - module_init(w1_f3a_init); 150 - module_exit(w1_f3a_exit); 138 + module_w1_family(w1_family_3a);
+1 -13
drivers/w1/slaves/w1_ds2423.c
··· 138 138 .fid = W1_COUNTER_DS2423, 139 139 .fops = &w1_f1d_fops, 140 140 }; 141 - 142 - static int __init w1_f1d_init(void) 143 - { 144 - return w1_register_family(&w1_family_1d); 145 - } 146 - 147 - static void __exit w1_f1d_exit(void) 148 - { 149 - w1_unregister_family(&w1_family_1d); 150 - } 151 - 152 - module_init(w1_f1d_init); 153 - module_exit(w1_f1d_exit); 141 + module_w1_family(w1_family_1d); 154 142 155 143 MODULE_LICENSE("GPL"); 156 144 MODULE_AUTHOR("Mika Laitio <lamikr@pilppa.org>");
+1 -13
drivers/w1/slaves/w1_ds2431.c
··· 288 288 .fid = W1_EEPROM_DS2431, 289 289 .fops = &w1_f2d_fops, 290 290 }; 291 - 292 - static int __init w1_f2d_init(void) 293 - { 294 - return w1_register_family(&w1_family_2d); 295 - } 296 - 297 - static void __exit w1_f2d_fini(void) 298 - { 299 - w1_unregister_family(&w1_family_2d); 300 - } 301 - 302 - module_init(w1_f2d_init); 303 - module_exit(w1_f2d_fini); 291 + module_w1_family(w1_family_2d); 304 292 305 293 MODULE_LICENSE("GPL"); 306 294 MODULE_AUTHOR("Bernhard Weirich <bernhard.weirich@riedel.net>");
+1 -13
drivers/w1/slaves/w1_ds2433.c
··· 305 305 .fid = W1_EEPROM_DS2433, 306 306 .fops = &w1_f23_fops, 307 307 }; 308 - 309 - static int __init w1_f23_init(void) 310 - { 311 - return w1_register_family(&w1_family_23); 312 - } 313 - 314 - static void __exit w1_f23_fini(void) 315 - { 316 - w1_unregister_family(&w1_family_23); 317 - } 318 - 319 - module_init(w1_f23_init); 320 - module_exit(w1_f23_fini); 308 + module_w1_family(w1_family_23);
+6 -37
drivers/w1/slaves/w1_ds2760.c
··· 121 121 NULL, 122 122 }; 123 123 124 - static DEFINE_IDA(bat_ida); 125 - 126 124 static int w1_ds2760_add_slave(struct w1_slave *sl) 127 125 { 128 126 int ret; 129 - int id; 130 127 struct platform_device *pdev; 131 128 132 - id = ida_simple_get(&bat_ida, 0, 0, GFP_KERNEL); 133 - if (id < 0) { 134 - ret = id; 135 - goto noid; 136 - } 137 - 138 - pdev = platform_device_alloc("ds2760-battery", id); 139 - if (!pdev) { 140 - ret = -ENOMEM; 141 - goto pdev_alloc_failed; 142 - } 129 + pdev = platform_device_alloc("ds2760-battery", PLATFORM_DEVID_AUTO); 130 + if (!pdev) 131 + return -ENOMEM; 143 132 pdev->dev.parent = &sl->dev; 144 133 145 134 ret = platform_device_add(pdev); ··· 137 148 138 149 dev_set_drvdata(&sl->dev, pdev); 139 150 140 - goto success; 151 + return 0; 141 152 142 153 pdev_add_failed: 143 154 platform_device_put(pdev); 144 - pdev_alloc_failed: 145 - ida_simple_remove(&bat_ida, id); 146 - noid: 147 - success: 155 + 148 156 return ret; 149 157 } 150 158 151 159 static void w1_ds2760_remove_slave(struct w1_slave *sl) 152 160 { 153 161 struct platform_device *pdev = dev_get_drvdata(&sl->dev); 154 - int id = pdev->id; 155 162 156 163 platform_device_unregister(pdev); 157 - ida_simple_remove(&bat_ida, id); 158 164 } 159 165 160 166 static struct w1_family_ops w1_ds2760_fops = { ··· 162 178 .fid = W1_FAMILY_DS2760, 163 179 .fops = &w1_ds2760_fops, 164 180 }; 165 - 166 - static int __init w1_ds2760_init(void) 167 - { 168 - pr_info("1-Wire driver for the DS2760 battery monitor chip - (c) 2004-2005, Szabolcs Gyurko\n"); 169 - ida_init(&bat_ida); 170 - return w1_register_family(&w1_ds2760_family); 171 - } 172 - 173 - static void __exit w1_ds2760_exit(void) 174 - { 175 - w1_unregister_family(&w1_ds2760_family); 176 - ida_destroy(&bat_ida); 177 - } 181 + module_w1_family(w1_ds2760_family); 178 182 179 183 EXPORT_SYMBOL(w1_ds2760_read); 180 184 EXPORT_SYMBOL(w1_ds2760_write); 181 185 EXPORT_SYMBOL(w1_ds2760_store_eeprom); 182 186 
EXPORT_SYMBOL(w1_ds2760_recall_eeprom); 183 - 184 - module_init(w1_ds2760_init); 185 - module_exit(w1_ds2760_exit); 186 187 187 188 MODULE_LICENSE("GPL"); 188 189 MODULE_AUTHOR("Szabolcs Gyurko <szabolcs.gyurko@tlt.hu>");
+5 -34
drivers/w1/slaves/w1_ds2780.c
··· 113 113 NULL, 114 114 }; 115 115 116 - static DEFINE_IDA(bat_ida); 117 - 118 116 static int w1_ds2780_add_slave(struct w1_slave *sl) 119 117 { 120 118 int ret; 121 - int id; 122 119 struct platform_device *pdev; 123 120 124 - id = ida_simple_get(&bat_ida, 0, 0, GFP_KERNEL); 125 - if (id < 0) { 126 - ret = id; 127 - goto noid; 128 - } 129 - 130 - pdev = platform_device_alloc("ds2780-battery", id); 131 - if (!pdev) { 132 - ret = -ENOMEM; 133 - goto pdev_alloc_failed; 134 - } 121 + pdev = platform_device_alloc("ds2780-battery", PLATFORM_DEVID_AUTO); 122 + if (!pdev) 123 + return -ENOMEM; 135 124 pdev->dev.parent = &sl->dev; 136 125 137 126 ret = platform_device_add(pdev); ··· 133 144 134 145 pdev_add_failed: 135 146 platform_device_put(pdev); 136 - pdev_alloc_failed: 137 - ida_simple_remove(&bat_ida, id); 138 - noid: 147 + 139 148 return ret; 140 149 } 141 150 142 151 static void w1_ds2780_remove_slave(struct w1_slave *sl) 143 152 { 144 153 struct platform_device *pdev = dev_get_drvdata(&sl->dev); 145 - int id = pdev->id; 146 154 147 155 platform_device_unregister(pdev); 148 - ida_simple_remove(&bat_ida, id); 149 156 } 150 157 151 158 static struct w1_family_ops w1_ds2780_fops = { ··· 154 169 .fid = W1_FAMILY_DS2780, 155 170 .fops = &w1_ds2780_fops, 156 171 }; 157 - 158 - static int __init w1_ds2780_init(void) 159 - { 160 - ida_init(&bat_ida); 161 - return w1_register_family(&w1_ds2780_family); 162 - } 163 - 164 - static void __exit w1_ds2780_exit(void) 165 - { 166 - w1_unregister_family(&w1_ds2780_family); 167 - ida_destroy(&bat_ida); 168 - } 169 - 170 - module_init(w1_ds2780_init); 171 - module_exit(w1_ds2780_exit); 172 + module_w1_family(w1_ds2780_family); 172 173 173 174 MODULE_LICENSE("GPL"); 174 175 MODULE_AUTHOR("Clifton Barnes <cabarnes@indesign-llc.com>");
+5 -35
drivers/w1/slaves/w1_ds2781.c
··· 17 17 #include <linux/types.h> 18 18 #include <linux/platform_device.h> 19 19 #include <linux/mutex.h> 20 - #include <linux/idr.h> 21 20 22 21 #include "../w1.h" 23 22 #include "../w1_int.h" ··· 110 111 NULL, 111 112 }; 112 113 113 - static DEFINE_IDA(bat_ida); 114 - 115 114 static int w1_ds2781_add_slave(struct w1_slave *sl) 116 115 { 117 116 int ret; 118 - int id; 119 117 struct platform_device *pdev; 120 118 121 - id = ida_simple_get(&bat_ida, 0, 0, GFP_KERNEL); 122 - if (id < 0) { 123 - ret = id; 124 - goto noid; 125 - } 126 - 127 - pdev = platform_device_alloc("ds2781-battery", id); 128 - if (!pdev) { 129 - ret = -ENOMEM; 130 - goto pdev_alloc_failed; 131 - } 119 + pdev = platform_device_alloc("ds2781-battery", PLATFORM_DEVID_AUTO); 120 + if (!pdev) 121 + return -ENOMEM; 132 122 pdev->dev.parent = &sl->dev; 133 123 134 124 ret = platform_device_add(pdev); ··· 130 142 131 143 pdev_add_failed: 132 144 platform_device_put(pdev); 133 - pdev_alloc_failed: 134 - ida_simple_remove(&bat_ida, id); 135 - noid: 145 + 136 146 return ret; 137 147 } 138 148 139 149 static void w1_ds2781_remove_slave(struct w1_slave *sl) 140 150 { 141 151 struct platform_device *pdev = dev_get_drvdata(&sl->dev); 142 - int id = pdev->id; 143 152 144 153 platform_device_unregister(pdev); 145 - ida_simple_remove(&bat_ida, id); 146 154 } 147 155 148 156 static struct w1_family_ops w1_ds2781_fops = { ··· 151 167 .fid = W1_FAMILY_DS2781, 152 168 .fops = &w1_ds2781_fops, 153 169 }; 154 - 155 - static int __init w1_ds2781_init(void) 156 - { 157 - ida_init(&bat_ida); 158 - return w1_register_family(&w1_ds2781_family); 159 - } 160 - 161 - static void __exit w1_ds2781_exit(void) 162 - { 163 - w1_unregister_family(&w1_ds2781_family); 164 - ida_destroy(&bat_ida); 165 - } 166 - 167 - module_init(w1_ds2781_init); 168 - module_exit(w1_ds2781_exit); 170 + module_w1_family(w1_ds2781_family); 169 171 170 172 MODULE_LICENSE("GPL"); 171 173 MODULE_AUTHOR("Renata Sayakhova <renata@oktetlabs.ru>");
+1 -13
drivers/w1/slaves/w1_ds28e04.c
··· 427 427 .fid = W1_FAMILY_DS28E04, 428 428 .fops = &w1_f1C_fops, 429 429 }; 430 - 431 - static int __init w1_f1C_init(void) 432 - { 433 - return w1_register_family(&w1_family_1C); 434 - } 435 - 436 - static void __exit w1_f1C_fini(void) 437 - { 438 - w1_unregister_family(&w1_family_1C); 439 - } 440 - 441 - module_init(w1_f1C_init); 442 - module_exit(w1_f1C_fini); 430 + module_w1_family(w1_family_1C);
+12
drivers/w1/w1_family.h
··· 88 88 void w1_unregister_family(struct w1_family *); 89 89 int w1_register_family(struct w1_family *); 90 90 91 + /** 92 + * module_w1_family() - Helper macro for registering a 1-Wire family 93 + * @__w1_family: w1_family struct 94 + * 95 + * Helper macro for 1-Wire families which do not do anything special in module 96 + * init/exit. This eliminates a lot of boilerplate. Each module may only 97 + * use this macro once, and calling it replaces module_init() and module_exit(). 98 + */ 99 + #define module_w1_family(__w1_family) \ 100 + module_driver(__w1_family, w1_register_family, \ 101 + w1_unregister_family) 102 + 91 103 #endif /* __W1_FAMILY_H */
+18 -16
fs/binfmt_elf.c
··· 605 605 * Do the same thing for the memory mapping - between 606 606 * elf_bss and last_bss is the bss section. 607 607 */ 608 - k = load_addr + eppnt->p_memsz + eppnt->p_vaddr; 608 + k = load_addr + eppnt->p_vaddr + eppnt->p_memsz; 609 609 if (k > last_bss) 610 610 last_bss = k; 611 611 } 612 612 } 613 613 614 + /* 615 + * Now fill out the bss section: first pad the last page from 616 + * the file up to the page boundary, and zero it from elf_bss 617 + * up to the end of the page. 618 + */ 619 + if (padzero(elf_bss)) { 620 + error = -EFAULT; 621 + goto out; 622 + } 623 + /* 624 + * Next, align both the file and mem bss up to the page size, 625 + * since this is where elf_bss was just zeroed up to, and where 626 + * last_bss will end after the vm_brk() below. 627 + */ 628 + elf_bss = ELF_PAGEALIGN(elf_bss); 629 + last_bss = ELF_PAGEALIGN(last_bss); 630 + /* Finally, if there is still more bss to allocate, do it. */ 614 631 if (last_bss > elf_bss) { 615 - /* 616 - * Now fill out the bss section. First pad the last page up 617 - * to the page boundary, and then perform a mmap to make sure 618 - * that there are zero-mapped pages up to and including the 619 - * last bss page. 620 - */ 621 - if (padzero(elf_bss)) { 622 - error = -EFAULT; 623 - goto out; 624 - } 625 - 626 - /* What we have mapped so far */ 627 - elf_bss = ELF_PAGESTART(elf_bss + ELF_MIN_ALIGN - 1); 628 - 629 - /* Map the last of the bss segment */ 630 632 error = vm_brk(elf_bss, last_bss - elf_bss); 631 633 if (error) 632 634 goto out;
+2 -1
fs/binfmt_em86.c
··· 24 24 25 25 static int load_em86(struct linux_binprm *bprm) 26 26 { 27 - char *interp, *i_name, *i_arg; 27 + const char *i_name, *i_arg; 28 + char *interp; 28 29 struct file * file; 29 30 int retval; 30 31 struct elfhdr elf_ex;
+6 -3
fs/exec.c
··· 866 866 goto out; 867 867 } 868 868 869 - *buf = vmalloc(i_size); 869 + if (id != READING_FIRMWARE_PREALLOC_BUFFER) 870 + *buf = vmalloc(i_size); 870 871 if (!*buf) { 871 872 ret = -ENOMEM; 872 873 goto out; ··· 898 897 899 898 out_free: 900 899 if (ret < 0) { 901 - vfree(*buf); 902 - *buf = NULL; 900 + if (id != READING_FIRMWARE_PREALLOC_BUFFER) { 901 + vfree(*buf); 902 + *buf = NULL; 903 + } 903 904 } 904 905 905 906 out:
+1 -1
fs/inode.c
··· 345 345 void address_space_init_once(struct address_space *mapping) 346 346 { 347 347 memset(mapping, 0, sizeof(*mapping)); 348 - INIT_RADIX_TREE(&mapping->page_tree, GFP_ATOMIC); 348 + INIT_RADIX_TREE(&mapping->page_tree, GFP_ATOMIC | __GFP_ACCOUNT); 349 349 spin_lock_init(&mapping->tree_lock); 350 350 init_rwsem(&mapping->i_mmap_rwsem); 351 351 INIT_LIST_HEAD(&mapping->private_list);
+21 -24
fs/nilfs2/alloc.c
··· 622 622 lock = nilfs_mdt_bgl_lock(inode, group); 623 623 624 624 if (!nilfs_clear_bit_atomic(lock, group_offset, bitmap)) 625 - nilfs_warning(inode->i_sb, __func__, 626 - "entry number %llu already freed: ino=%lu", 627 - (unsigned long long)req->pr_entry_nr, 628 - (unsigned long)inode->i_ino); 625 + nilfs_msg(inode->i_sb, KERN_WARNING, 626 + "%s (ino=%lu): entry number %llu already freed", 627 + __func__, inode->i_ino, 628 + (unsigned long long)req->pr_entry_nr); 629 629 else 630 630 nilfs_palloc_group_desc_add_entries(desc, lock, 1); 631 631 ··· 663 663 lock = nilfs_mdt_bgl_lock(inode, group); 664 664 665 665 if (!nilfs_clear_bit_atomic(lock, group_offset, bitmap)) 666 - nilfs_warning(inode->i_sb, __func__, 667 - "entry number %llu already freed: ino=%lu", 668 - (unsigned long long)req->pr_entry_nr, 669 - (unsigned long)inode->i_ino); 666 + nilfs_msg(inode->i_sb, KERN_WARNING, 667 + "%s (ino=%lu): entry number %llu already freed", 668 + __func__, inode->i_ino, 669 + (unsigned long long)req->pr_entry_nr); 670 670 else 671 671 nilfs_palloc_group_desc_add_entries(desc, lock, 1); 672 672 ··· 772 772 do { 773 773 if (!nilfs_clear_bit_atomic(lock, group_offset, 774 774 bitmap)) { 775 - nilfs_warning(inode->i_sb, __func__, 776 - "entry number %llu already freed: ino=%lu", 777 - (unsigned long long)entry_nrs[j], 778 - (unsigned long)inode->i_ino); 775 + nilfs_msg(inode->i_sb, KERN_WARNING, 776 + "%s (ino=%lu): entry number %llu already freed", 777 + __func__, inode->i_ino, 778 + (unsigned long long)entry_nrs[j]); 779 779 } else { 780 780 n++; 781 781 } ··· 816 816 for (k = 0; k < nempties; k++) { 817 817 ret = nilfs_palloc_delete_entry_block(inode, 818 818 last_nrs[k]); 819 - if (ret && ret != -ENOENT) { 820 - nilfs_warning(inode->i_sb, __func__, 821 - "failed to delete block of entry %llu: ino=%lu, err=%d", 822 - (unsigned long long)last_nrs[k], 823 - (unsigned long)inode->i_ino, ret); 824 - } 819 + if (ret && ret != -ENOENT) 820 + nilfs_msg(inode->i_sb, 
KERN_WARNING, 821 + "error %d deleting block that object (entry=%llu, ino=%lu) belongs to", 822 + ret, (unsigned long long)last_nrs[k], 823 + inode->i_ino); 825 824 } 826 825 827 826 desc_kaddr = kmap_atomic(desc_bh->b_page); ··· 834 835 835 836 if (nfree == nilfs_palloc_entries_per_group(inode)) { 836 837 ret = nilfs_palloc_delete_bitmap_block(inode, group); 837 - if (ret && ret != -ENOENT) { 838 - nilfs_warning(inode->i_sb, __func__, 839 - "failed to delete bitmap block of group %lu: ino=%lu, err=%d", 840 - group, 841 - (unsigned long)inode->i_ino, ret); 842 - } 838 + if (ret && ret != -ENOENT) 839 + nilfs_msg(inode->i_sb, KERN_WARNING, 840 + "error %d deleting bitmap block of group=%lu, ino=%lu", 841 + ret, group, inode->i_ino); 843 842 } 844 843 } 845 844 return 0;
+2 -2
fs/nilfs2/bmap.c
··· 41 41 struct inode *inode = bmap->b_inode; 42 42 43 43 if (err == -EINVAL) { 44 - nilfs_error(inode->i_sb, fname, 45 - "broken bmap (inode number=%lu)", inode->i_ino); 44 + __nilfs_error(inode->i_sb, fname, 45 + "broken bmap (inode number=%lu)", inode->i_ino); 46 46 err = -EIO; 47 47 } 48 48 return err;
+1 -1
fs/nilfs2/bmap.h
··· 22 22 #include <linux/types.h> 23 23 #include <linux/fs.h> 24 24 #include <linux/buffer_head.h> 25 - #include <linux/nilfs2_fs.h> 25 + #include <linux/nilfs2_ondisk.h> /* nilfs_binfo, nilfs_inode, etc */ 26 26 #include "alloc.h" 27 27 #include "dat.h" 28 28
+2 -2
fs/nilfs2/btnode.c
··· 41 41 struct inode *inode = NILFS_BTNC_I(btnc); 42 42 struct buffer_head *bh; 43 43 44 - bh = nilfs_grab_buffer(inode, btnc, blocknr, 1 << BH_NILFS_Node); 44 + bh = nilfs_grab_buffer(inode, btnc, blocknr, BIT(BH_NILFS_Node)); 45 45 if (unlikely(!bh)) 46 46 return NULL; 47 47 ··· 70 70 struct page *page; 71 71 int err; 72 72 73 - bh = nilfs_grab_buffer(inode, btnc, blocknr, 1 << BH_NILFS_Node); 73 + bh = nilfs_grab_buffer(inode, btnc, blocknr, BIT(BH_NILFS_Node)); 74 74 if (unlikely(!bh)) 75 75 return -ENOMEM; 76 76
+36 -25
fs/nilfs2/btree.c
··· 339 339 * nilfs_btree_node_broken - verify consistency of btree node 340 340 * @node: btree node block to be examined 341 341 * @size: node size (in bytes) 342 + * @inode: host inode of btree 342 343 * @blocknr: block number 343 344 * 344 345 * Return Value: If node is broken, 1 is returned. Otherwise, 0 is returned. 345 346 */ 346 347 static int nilfs_btree_node_broken(const struct nilfs_btree_node *node, 347 - size_t size, sector_t blocknr) 348 + size_t size, struct inode *inode, 349 + sector_t blocknr) 348 350 { 349 351 int level, flags, nchildren; 350 352 int ret = 0; ··· 360 358 (flags & NILFS_BTREE_NODE_ROOT) || 361 359 nchildren < 0 || 362 360 nchildren > NILFS_BTREE_NODE_NCHILDREN_MAX(size))) { 363 - printk(KERN_CRIT "NILFS: bad btree node (blocknr=%llu): " 364 - "level = %d, flags = 0x%x, nchildren = %d\n", 365 - (unsigned long long)blocknr, level, flags, nchildren); 361 + nilfs_msg(inode->i_sb, KERN_CRIT, 362 + "bad btree node (ino=%lu, blocknr=%llu): level = %d, flags = 0x%x, nchildren = %d", 363 + inode->i_ino, (unsigned long long)blocknr, level, 364 + flags, nchildren); 366 365 ret = 1; 367 366 } 368 367 return ret; ··· 372 369 /** 373 370 * nilfs_btree_root_broken - verify consistency of btree root node 374 371 * @node: btree root node to be examined 375 - * @ino: inode number 372 + * @inode: host inode of btree 376 373 * 377 374 * Return Value: If node is broken, 1 is returned. Otherwise, 0 is returned. 
378 375 */ 379 376 static int nilfs_btree_root_broken(const struct nilfs_btree_node *node, 380 - unsigned long ino) 377 + struct inode *inode) 381 378 { 382 379 int level, flags, nchildren; 383 380 int ret = 0; ··· 390 387 level >= NILFS_BTREE_LEVEL_MAX || 391 388 nchildren < 0 || 392 389 nchildren > NILFS_BTREE_ROOT_NCHILDREN_MAX)) { 393 - pr_crit("NILFS: bad btree root (inode number=%lu): level = %d, flags = 0x%x, nchildren = %d\n", 394 - ino, level, flags, nchildren); 390 + nilfs_msg(inode->i_sb, KERN_CRIT, 391 + "bad btree root (ino=%lu): level = %d, flags = 0x%x, nchildren = %d", 392 + inode->i_ino, level, flags, nchildren); 395 393 ret = 1; 396 394 } 397 395 return ret; ··· 400 396 401 397 int nilfs_btree_broken_node_block(struct buffer_head *bh) 402 398 { 399 + struct inode *inode; 403 400 int ret; 404 401 405 402 if (buffer_nilfs_checked(bh)) 406 403 return 0; 407 404 405 + inode = bh->b_page->mapping->host; 408 406 ret = nilfs_btree_node_broken((struct nilfs_btree_node *)bh->b_data, 409 - bh->b_size, bh->b_blocknr); 407 + bh->b_size, inode, bh->b_blocknr); 410 408 if (likely(!ret)) 411 409 set_buffer_nilfs_checked(bh); 412 410 return ret; ··· 454 448 return node; 455 449 } 456 450 457 - static int 458 - nilfs_btree_bad_node(struct nilfs_btree_node *node, int level) 451 + static int nilfs_btree_bad_node(const struct nilfs_bmap *btree, 452 + struct nilfs_btree_node *node, int level) 459 453 { 460 454 if (unlikely(nilfs_btree_node_get_level(node) != level)) { 461 455 dump_stack(); 462 - printk(KERN_CRIT "NILFS: btree level mismatch: %d != %d\n", 463 - nilfs_btree_node_get_level(node), level); 456 + nilfs_msg(btree->b_inode->i_sb, KERN_CRIT, 457 + "btree level mismatch (ino=%lu): %d != %d", 458 + btree->b_inode->i_ino, 459 + nilfs_btree_node_get_level(node), level); 464 460 return 1; 465 461 } 466 462 return 0; ··· 517 509 518 510 out_no_wait: 519 511 if (!buffer_uptodate(bh)) { 512 + nilfs_msg(btree->b_inode->i_sb, KERN_ERR, 513 + "I/O error reading b-tree 
node block (ino=%lu, blocknr=%llu)", 514 + btree->b_inode->i_ino, (unsigned long long)ptr); 520 515 brelse(bh); 521 516 return -EIO; 522 517 } ··· 579 568 return ret; 580 569 581 570 node = nilfs_btree_get_nonroot_node(path, level); 582 - if (nilfs_btree_bad_node(node, level)) 571 + if (nilfs_btree_bad_node(btree, node, level)) 583 572 return -EINVAL; 584 573 if (!found) 585 574 found = nilfs_btree_node_lookup(node, key, &index); ··· 627 616 if (ret < 0) 628 617 return ret; 629 618 node = nilfs_btree_get_nonroot_node(path, level); 630 - if (nilfs_btree_bad_node(node, level)) 619 + if (nilfs_btree_bad_node(btree, node, level)) 631 620 return -EINVAL; 632 621 index = nilfs_btree_node_get_nchildren(node) - 1; 633 622 ptr = nilfs_btree_node_get_ptr(node, index, ncmax); ··· 2083 2072 ret = nilfs_btree_do_lookup(btree, path, key, NULL, level + 1, 0); 2084 2073 if (ret < 0) { 2085 2074 if (unlikely(ret == -ENOENT)) 2086 - printk(KERN_CRIT "%s: key = %llu, level == %d\n", 2087 - __func__, (unsigned long long)key, level); 2075 + nilfs_msg(btree->b_inode->i_sb, KERN_CRIT, 2076 + "writing node/leaf block does not appear in b-tree (ino=%lu) at key=%llu, level=%d", 2077 + btree->b_inode->i_ino, 2078 + (unsigned long long)key, level); 2088 2079 goto out; 2089 2080 } 2090 2081 ··· 2123 2110 if (level < NILFS_BTREE_LEVEL_NODE_MIN || 2124 2111 level >= NILFS_BTREE_LEVEL_MAX) { 2125 2112 dump_stack(); 2126 - printk(KERN_WARNING 2127 - "%s: invalid btree level: %d (key=%llu, ino=%lu, " 2128 - "blocknr=%llu)\n", 2129 - __func__, level, (unsigned long long)key, 2130 - NILFS_BMAP_I(btree)->vfs_inode.i_ino, 2131 - (unsigned long long)bh->b_blocknr); 2113 + nilfs_msg(btree->b_inode->i_sb, KERN_WARNING, 2114 + "invalid btree level: %d (key=%llu, ino=%lu, blocknr=%llu)", 2115 + level, (unsigned long long)key, 2116 + btree->b_inode->i_ino, 2117 + (unsigned long long)bh->b_blocknr); 2132 2118 return; 2133 2119 } 2134 2120 ··· 2406 2394 2407 2395 __nilfs_btree_init(bmap); 2408 2396 2409 - if 
(nilfs_btree_root_broken(nilfs_btree_get_root(bmap), 2410 - bmap->b_inode->i_ino)) 2397 + if (nilfs_btree_root_broken(nilfs_btree_get_root(bmap), bmap->b_inode)) 2411 2398 ret = -EIO; 2412 2399 return ret; 2413 2400 }
+1 -1
fs/nilfs2/btree.h
··· 22 22 #include <linux/types.h> 23 23 #include <linux/buffer_head.h> 24 24 #include <linux/list.h> 25 - #include <linux/nilfs2_fs.h> 25 + #include <linux/nilfs2_ondisk.h> /* nilfs_btree_node */ 26 26 #include "btnode.h" 27 27 #include "bmap.h" 28 28
+10 -13
fs/nilfs2/cpfile.c
··· 21 21 #include <linux/string.h> 22 22 #include <linux/buffer_head.h> 23 23 #include <linux/errno.h> 24 - #include <linux/nilfs2_fs.h> 25 24 #include "mdt.h" 26 25 #include "cpfile.h" 27 26 ··· 331 332 int ret, ncps, nicps, nss, count, i; 332 333 333 334 if (unlikely(start == 0 || start > end)) { 334 - printk(KERN_ERR "%s: invalid range of checkpoint numbers: " 335 - "[%llu, %llu)\n", __func__, 336 - (unsigned long long)start, (unsigned long long)end); 335 + nilfs_msg(cpfile->i_sb, KERN_ERR, 336 + "cannot delete checkpoints: invalid range [%llu, %llu)", 337 + (unsigned long long)start, (unsigned long long)end); 337 338 return -EINVAL; 338 339 } 339 340 ··· 385 386 cpfile, cno); 386 387 if (ret == 0) 387 388 continue; 388 - printk(KERN_ERR 389 - "%s: cannot delete block\n", 390 - __func__); 389 + nilfs_msg(cpfile->i_sb, KERN_ERR, 390 + "error %d deleting checkpoint block", 391 + ret); 391 392 break; 392 393 } 393 394 } ··· 990 991 int err; 991 992 992 993 if (cpsize > sb->s_blocksize) { 993 - printk(KERN_ERR 994 - "NILFS: too large checkpoint size: %zu bytes.\n", 995 - cpsize); 994 + nilfs_msg(sb, KERN_ERR, 995 + "too large checkpoint size: %zu bytes", cpsize); 996 996 return -EINVAL; 997 997 } else if (cpsize < NILFS_MIN_CHECKPOINT_SIZE) { 998 - printk(KERN_ERR 999 - "NILFS: too small checkpoint size: %zu bytes.\n", 1000 - cpsize); 998 + nilfs_msg(sb, KERN_ERR, 999 + "too small checkpoint size: %zu bytes", cpsize); 1001 1000 return -EINVAL; 1002 1001 } 1003 1002
+2 -1
fs/nilfs2/cpfile.h
··· 21 21 22 22 #include <linux/fs.h> 23 23 #include <linux/buffer_head.h> 24 - #include <linux/nilfs2_fs.h> 24 + #include <linux/nilfs2_api.h> /* nilfs_cpstat */ 25 + #include <linux/nilfs2_ondisk.h> /* nilfs_inode, nilfs_checkpoint */ 25 26 26 27 27 28 int nilfs_cpfile_get_checkpoint(struct inode *, __u64, int,
+9 -10
fs/nilfs2/dat.c
··· 349 349 kaddr = kmap_atomic(entry_bh->b_page); 350 350 entry = nilfs_palloc_block_get_entry(dat, vblocknr, entry_bh, kaddr); 351 351 if (unlikely(entry->de_blocknr == cpu_to_le64(0))) { 352 - printk(KERN_CRIT "%s: vbn = %llu, [%llu, %llu)\n", __func__, 353 - (unsigned long long)vblocknr, 354 - (unsigned long long)le64_to_cpu(entry->de_start), 355 - (unsigned long long)le64_to_cpu(entry->de_end)); 352 + nilfs_msg(dat->i_sb, KERN_CRIT, 353 + "%s: invalid vblocknr = %llu, [%llu, %llu)", 354 + __func__, (unsigned long long)vblocknr, 355 + (unsigned long long)le64_to_cpu(entry->de_start), 356 + (unsigned long long)le64_to_cpu(entry->de_end)); 356 357 kunmap_atomic(kaddr); 357 358 brelse(entry_bh); 358 359 return -EINVAL; ··· 480 479 int err; 481 480 482 481 if (entry_size > sb->s_blocksize) { 483 - printk(KERN_ERR 484 - "NILFS: too large DAT entry size: %zu bytes.\n", 485 - entry_size); 482 + nilfs_msg(sb, KERN_ERR, "too large DAT entry size: %zu bytes", 483 + entry_size); 486 484 return -EINVAL; 487 485 } else if (entry_size < NILFS_MIN_DAT_ENTRY_SIZE) { 488 - printk(KERN_ERR 489 - "NILFS: too small DAT entry size: %zu bytes.\n", 490 - entry_size); 486 + nilfs_msg(sb, KERN_ERR, "too small DAT entry size: %zu bytes", 487 + entry_size); 491 488 return -EINVAL; 492 489 } 493 490
+1
fs/nilfs2/dat.h
··· 22 22 #include <linux/types.h> 23 23 #include <linux/buffer_head.h> 24 24 #include <linux/fs.h> 25 + #include <linux/nilfs2_ondisk.h> /* nilfs_inode, nilfs_checkpoint */ 25 26 26 27 27 28 struct nilfs_palloc_req;
+39 -21
fs/nilfs2/dir.c
··· 42 42 #include "nilfs.h" 43 43 #include "page.h" 44 44 45 + static inline unsigned int nilfs_rec_len_from_disk(__le16 dlen) 46 + { 47 + unsigned int len = le16_to_cpu(dlen); 48 + 49 + #if (PAGE_SIZE >= 65536) 50 + if (len == NILFS_MAX_REC_LEN) 51 + return 1 << 16; 52 + #endif 53 + return len; 54 + } 55 + 56 + static inline __le16 nilfs_rec_len_to_disk(unsigned int len) 57 + { 58 + #if (PAGE_SIZE >= 65536) 59 + if (len == (1 << 16)) 60 + return cpu_to_le16(NILFS_MAX_REC_LEN); 61 + 62 + BUG_ON(len > (1 << 16)); 63 + #endif 64 + return cpu_to_le16(len); 65 + } 66 + 45 67 /* 46 68 * nilfs uses block-sized chunks. Arguably, sector-sized ones would be 47 69 * more robust, but we have what we have ··· 162 140 /* Too bad, we had an error */ 163 141 164 142 Ebadsize: 165 - nilfs_error(sb, "nilfs_check_page", 143 + nilfs_error(sb, 166 144 "size of directory #%lu is not a multiple of chunk size", 167 - dir->i_ino 168 - ); 145 + dir->i_ino); 169 146 goto fail; 170 147 Eshort: 171 148 error = "rec_len is smaller than minimal"; ··· 178 157 Espan: 179 158 error = "directory entry across blocks"; 180 159 bad_entry: 181 - nilfs_error(sb, "nilfs_check_page", "bad entry in directory #%lu: %s - " 182 - "offset=%lu, inode=%lu, rec_len=%d, name_len=%d", 183 - dir->i_ino, error, (page->index<<PAGE_SHIFT)+offs, 184 - (unsigned long) le64_to_cpu(p->inode), 160 + nilfs_error(sb, 161 + "bad entry in directory #%lu: %s - offset=%lu, inode=%lu, rec_len=%d, name_len=%d", 162 + dir->i_ino, error, (page->index << PAGE_SHIFT) + offs, 163 + (unsigned long)le64_to_cpu(p->inode), 185 164 rec_len, p->name_len); 186 165 goto fail; 187 166 Eend: 188 167 p = (struct nilfs_dir_entry *)(kaddr + offs); 189 - nilfs_error(sb, "nilfs_check_page", 190 - "entry in directory #%lu spans the page boundary" 191 - "offset=%lu, inode=%lu", 192 - dir->i_ino, (page->index<<PAGE_SHIFT)+offs, 193 - (unsigned long) le64_to_cpu(p->inode)); 168 + nilfs_error(sb, 169 + "entry in directory #%lu spans the page boundary 
offset=%lu, inode=%lu", 170 + dir->i_ino, (page->index << PAGE_SHIFT) + offs, 171 + (unsigned long)le64_to_cpu(p->inode)); 194 172 fail: 195 173 SetPageError(page); 196 174 return false; ··· 287 267 struct page *page = nilfs_get_page(inode, n); 288 268 289 269 if (IS_ERR(page)) { 290 - nilfs_error(sb, __func__, "bad page in #%lu", 291 - inode->i_ino); 270 + nilfs_error(sb, "bad page in #%lu", inode->i_ino); 292 271 ctx->pos += PAGE_SIZE - offset; 293 272 return -EIO; 294 273 } ··· 297 278 NILFS_DIR_REC_LEN(1); 298 279 for ( ; (char *)de <= limit; de = nilfs_next_entry(de)) { 299 280 if (de->rec_len == 0) { 300 - nilfs_error(sb, __func__, 301 - "zero-length directory entry"); 281 + nilfs_error(sb, "zero-length directory entry"); 302 282 nilfs_put_page(page); 303 283 return -EIO; 304 284 } ··· 363 345 kaddr += nilfs_last_byte(dir, n) - reclen; 364 346 while ((char *) de <= kaddr) { 365 347 if (de->rec_len == 0) { 366 - nilfs_error(dir->i_sb, __func__, 348 + nilfs_error(dir->i_sb, 367 349 "zero-length directory entry"); 368 350 nilfs_put_page(page); 369 351 goto out; ··· 378 360 n = 0; 379 361 /* next page is past the blocks we've got */ 380 362 if (unlikely(n > (dir->i_blocks >> (PAGE_SHIFT - 9)))) { 381 - nilfs_error(dir->i_sb, __func__, 363 + nilfs_error(dir->i_sb, 382 364 "dir %lu size %lld exceeds block count %llu", 383 365 dir->i_ino, dir->i_size, 384 366 (unsigned long long)dir->i_blocks); ··· 487 469 goto got_it; 488 470 } 489 471 if (de->rec_len == 0) { 490 - nilfs_error(dir->i_sb, __func__, 472 + nilfs_error(dir->i_sb, 491 473 "zero-length directory entry"); 492 474 err = -EIO; 493 475 goto out_unlock; ··· 559 541 560 542 while ((char *)de < (char *)dir) { 561 543 if (de->rec_len == 0) { 562 - nilfs_error(inode->i_sb, __func__, 544 + nilfs_error(inode->i_sb, 563 545 "zero-length directory entry"); 564 546 err = -EIO; 565 547 goto out; ··· 646 628 647 629 while ((char *)de <= kaddr) { 648 630 if (de->rec_len == 0) { 649 - nilfs_error(inode->i_sb, __func__, 
631 + nilfs_error(inode->i_sb, 650 632 "zero-length directory entry (kaddr=%p, de=%p)", 651 633 kaddr, de); 652 634 goto not_empty;
+6 -4
fs/nilfs2/direct.c
··· 337 337 338 338 key = nilfs_bmap_data_get_key(bmap, *bh); 339 339 if (unlikely(key > NILFS_DIRECT_KEY_MAX)) { 340 - printk(KERN_CRIT "%s: invalid key: %llu\n", __func__, 341 - (unsigned long long)key); 340 + nilfs_msg(bmap->b_inode->i_sb, KERN_CRIT, 341 + "%s (ino=%lu): invalid key: %llu", __func__, 342 + bmap->b_inode->i_ino, (unsigned long long)key); 342 343 return -EINVAL; 343 344 } 344 345 ptr = nilfs_direct_get_ptr(bmap, key); 345 346 if (unlikely(ptr == NILFS_BMAP_INVALID_PTR)) { 346 - printk(KERN_CRIT "%s: invalid pointer: %llu\n", __func__, 347 - (unsigned long long)ptr); 347 + nilfs_msg(bmap->b_inode->i_sb, KERN_CRIT, 348 + "%s (ino=%lu): invalid pointer: %llu", __func__, 349 + bmap->b_inode->i_ino, (unsigned long long)ptr); 348 350 return -EINVAL; 349 351 } 350 352
-10
fs/nilfs2/direct.h
··· 24 24 #include "bmap.h" 25 25 26 26 27 - /** 28 - * struct nilfs_direct_node - direct node 29 - * @dn_flags: flags 30 - * @dn_pad: padding 31 - */ 32 - struct nilfs_direct_node { 33 - __u8 dn_flags; 34 - __u8 pad[7]; 35 - }; 36 - 37 27 #define NILFS_DIRECT_NBLOCKS (NILFS_BMAP_SIZE / sizeof(__le64) - 1) 38 28 #define NILFS_DIRECT_KEY_MIN 0 39 29 #define NILFS_DIRECT_KEY_MAX (NILFS_DIRECT_NBLOCKS - 1)
+8 -1
fs/nilfs2/gcinode.c
··· 148 148 int nilfs_gccache_wait_and_mark_dirty(struct buffer_head *bh) 149 149 { 150 150 wait_on_buffer(bh); 151 - if (!buffer_uptodate(bh)) 151 + if (!buffer_uptodate(bh)) { 152 + struct inode *inode = bh->b_page->mapping->host; 153 + 154 + nilfs_msg(inode->i_sb, KERN_ERR, 155 + "I/O error reading %s block for GC (ino=%lu, vblocknr=%llu)", 156 + buffer_nilfs_node(bh) ? "node" : "data", 157 + inode->i_ino, (unsigned long long)bh->b_blocknr); 152 158 return -EIO; 159 + } 153 160 if (buffer_dirty(bh)) 154 161 return -EEXIST; 155 162
+3 -4
fs/nilfs2/ifile.c
··· 145 145 int err; 146 146 147 147 if (unlikely(!NILFS_VALID_INODE(sb, ino))) { 148 - nilfs_error(sb, __func__, "bad inode number: %lu", 149 - (unsigned long) ino); 148 + nilfs_error(sb, "bad inode number: %lu", (unsigned long)ino); 150 149 return -EINVAL; 151 150 } 152 151 153 152 err = nilfs_palloc_get_entry_block(ifile, ino, 0, out_bh); 154 153 if (unlikely(err)) 155 - nilfs_warning(sb, __func__, "unable to read inode: %lu", 156 - (unsigned long) ino); 154 + nilfs_msg(sb, KERN_WARNING, "error %d reading inode: ino=%lu", 155 + err, (unsigned long)ino); 157 156 return err; 158 157 } 159 158
-1
fs/nilfs2/ifile.h
··· 23 23 24 24 #include <linux/fs.h> 25 25 #include <linux/buffer_head.h> 26 - #include <linux/nilfs2_fs.h> 27 26 #include "mdt.h" 28 27 #include "alloc.h" 29 28
+17 -19
fs/nilfs2/inode.c
··· 112 112 * However, the page having this block must 113 113 * be locked in this case. 114 114 */ 115 - printk(KERN_WARNING 116 - "nilfs_get_block: a race condition " 117 - "while inserting a data block. " 118 - "(inode number=%lu, file block " 119 - "offset=%llu)\n", 120 - inode->i_ino, 121 - (unsigned long long)blkoff); 115 + nilfs_msg(inode->i_sb, KERN_WARNING, 116 + "%s (ino=%lu): a race condition while inserting a data block at offset=%llu", 117 + __func__, inode->i_ino, 118 + (unsigned long long)blkoff); 122 119 err = 0; 123 120 } 124 121 nilfs_transaction_abort(inode->i_sb); ··· 356 359 357 360 root = NILFS_I(dir)->i_root; 358 361 ii = NILFS_I(inode); 359 - ii->i_state = 1 << NILFS_I_NEW; 362 + ii->i_state = BIT(NILFS_I_NEW); 360 363 ii->i_root = root; 361 364 362 365 err = nilfs_ifile_create_inode(root->ifile, &ino, &ii->i_bh); ··· 555 558 556 559 inode->i_ino = args->ino; 557 560 if (args->for_gc) { 558 - NILFS_I(inode)->i_state = 1 << NILFS_I_GCINODE; 561 + NILFS_I(inode)->i_state = BIT(NILFS_I_GCINODE); 559 562 NILFS_I(inode)->i_cno = args->cno; 560 563 NILFS_I(inode)->i_root = NULL; 561 564 } else { ··· 723 726 goto repeat; 724 727 725 728 failed: 726 - nilfs_warning(ii->vfs_inode.i_sb, __func__, 727 - "failed to truncate bmap (ino=%lu, err=%d)", 728 - ii->vfs_inode.i_ino, ret); 729 + nilfs_msg(ii->vfs_inode.i_sb, KERN_WARNING, 730 + "error %d truncating bmap (ino=%lu)", ret, 731 + ii->vfs_inode.i_ino); 729 732 } 730 733 731 734 void nilfs_truncate(struct inode *inode) ··· 936 939 * This will happen when somebody is freeing 937 940 * this inode. 
938 941 */ 939 - nilfs_warning(inode->i_sb, __func__, 940 - "cannot get inode (ino=%lu)", 941 - inode->i_ino); 942 + nilfs_msg(inode->i_sb, KERN_WARNING, 943 + "cannot set file dirty (ino=%lu): the file is being freed", 944 + inode->i_ino); 942 945 spin_unlock(&nilfs->ns_inode_lock); 943 946 return -EINVAL; /* 944 947 * NILFS_I_DIRTY may remain for ··· 959 962 960 963 err = nilfs_load_inode_block(inode, &ibh); 961 964 if (unlikely(err)) { 962 - nilfs_warning(inode->i_sb, __func__, 963 - "failed to reget inode block."); 965 + nilfs_msg(inode->i_sb, KERN_WARNING, 966 + "cannot mark inode dirty (ino=%lu): error %d loading inode block", 967 + inode->i_ino, err); 964 968 return err; 965 969 } 966 970 nilfs_update_inode(inode, ibh, flags); ··· 987 989 struct nilfs_mdt_info *mdi = NILFS_MDT(inode); 988 990 989 991 if (is_bad_inode(inode)) { 990 - nilfs_warning(inode->i_sb, __func__, 991 - "tried to mark bad_inode dirty. ignored."); 992 + nilfs_msg(inode->i_sb, KERN_WARNING, 993 + "tried to mark bad_inode dirty. ignored."); 992 994 dump_stack(); 993 995 return; 994 996 }
+23 -25
fs/nilfs2/ioctl.c
··· 25 25 #include <linux/compat.h> /* compat_ptr() */ 26 26 #include <linux/mount.h> /* mnt_want_write_file(), mnt_drop_write_file() */ 27 27 #include <linux/buffer_head.h> 28 - #include <linux/nilfs2_fs.h> 29 28 #include "nilfs.h" 30 29 #include "segment.h" 31 30 #include "bmap.h" ··· 583 584 584 585 if (unlikely(ret < 0)) { 585 586 if (ret == -ENOENT) 586 - printk(KERN_CRIT 587 - "%s: invalid virtual block address (%s): " 588 - "ino=%llu, cno=%llu, offset=%llu, " 589 - "blocknr=%llu, vblocknr=%llu\n", 590 - __func__, vdesc->vd_flags ? "node" : "data", 591 - (unsigned long long)vdesc->vd_ino, 592 - (unsigned long long)vdesc->vd_cno, 593 - (unsigned long long)vdesc->vd_offset, 594 - (unsigned long long)vdesc->vd_blocknr, 595 - (unsigned long long)vdesc->vd_vblocknr); 587 + nilfs_msg(inode->i_sb, KERN_CRIT, 588 + "%s: invalid virtual block address (%s): ino=%llu, cno=%llu, offset=%llu, blocknr=%llu, vblocknr=%llu", 589 + __func__, vdesc->vd_flags ? "node" : "data", 590 + (unsigned long long)vdesc->vd_ino, 591 + (unsigned long long)vdesc->vd_cno, 592 + (unsigned long long)vdesc->vd_offset, 593 + (unsigned long long)vdesc->vd_blocknr, 594 + (unsigned long long)vdesc->vd_vblocknr); 596 595 return ret; 597 596 } 598 597 if (unlikely(!list_empty(&bh->b_assoc_buffers))) { 599 - printk(KERN_CRIT "%s: conflicting %s buffer: ino=%llu, " 600 - "cno=%llu, offset=%llu, blocknr=%llu, vblocknr=%llu\n", 601 - __func__, vdesc->vd_flags ? "node" : "data", 602 - (unsigned long long)vdesc->vd_ino, 603 - (unsigned long long)vdesc->vd_cno, 604 - (unsigned long long)vdesc->vd_offset, 605 - (unsigned long long)vdesc->vd_blocknr, 606 - (unsigned long long)vdesc->vd_vblocknr); 598 + nilfs_msg(inode->i_sb, KERN_CRIT, 599 + "%s: conflicting %s buffer: ino=%llu, cno=%llu, offset=%llu, blocknr=%llu, vblocknr=%llu", 600 + __func__, vdesc->vd_flags ? 
"node" : "data", 601 + (unsigned long long)vdesc->vd_ino, 602 + (unsigned long long)vdesc->vd_cno, 603 + (unsigned long long)vdesc->vd_offset, 604 + (unsigned long long)vdesc->vd_blocknr, 605 + (unsigned long long)vdesc->vd_vblocknr); 607 606 brelse(bh); 608 607 return -EEXIST; 609 608 } ··· 851 854 return 0; 852 855 853 856 failed: 854 - printk(KERN_ERR "NILFS: GC failed during preparation: %s: err=%d\n", 855 - msg, ret); 857 + nilfs_msg(nilfs->ns_sb, KERN_ERR, "error %d preparing GC: %s", ret, 858 + msg); 856 859 return ret; 857 860 } 858 861 ··· 960 963 } 961 964 962 965 ret = nilfs_ioctl_move_blocks(inode->i_sb, &argv[0], kbufs[0]); 963 - if (ret < 0) 964 - printk(KERN_ERR "NILFS: GC failed during preparation: " 965 - "cannot read source blocks: err=%d\n", ret); 966 - else { 966 + if (ret < 0) { 967 + nilfs_msg(inode->i_sb, KERN_ERR, 968 + "error %d preparing GC: cannot read source blocks", 969 + ret); 970 + } else { 967 971 if (nilfs_sb_need_update(nilfs)) 968 972 set_nilfs_discontinued(nilfs); 969 973 ret = nilfs_clean_segments(inode->i_sb, argv, kbufs);
+5 -1
fs/nilfs2/mdt.c
··· 207 207 208 208 out_no_wait: 209 209 err = -EIO; 210 - if (!buffer_uptodate(first_bh)) 210 + if (!buffer_uptodate(first_bh)) { 211 + nilfs_msg(inode->i_sb, KERN_ERR, 212 + "I/O error reading meta-data file (ino=%lu, block-offset=%lu)", 213 + inode->i_ino, block); 211 214 goto failed_bh; 215 + } 212 216 out: 213 217 *out_bh = first_bh; 214 218 return 0;
+3 -3
fs/nilfs2/namei.c
··· 283 283 goto out; 284 284 285 285 if (!inode->i_nlink) { 286 - nilfs_warning(inode->i_sb, __func__, 287 - "deleting nonexistent file (%lu), %d", 288 - inode->i_ino, inode->i_nlink); 286 + nilfs_msg(inode->i_sb, KERN_WARNING, 287 + "deleting nonexistent file (ino=%lu), %d", 288 + inode->i_ino, inode->i_nlink); 289 289 set_nlink(inode, 1); 290 290 } 291 291 err = nilfs_delete_entry(de, page);
+37 -11
fs/nilfs2/nilfs.h
··· 23 23 #include <linux/buffer_head.h> 24 24 #include <linux/spinlock.h> 25 25 #include <linux/blkdev.h> 26 - #include <linux/nilfs2_fs.h> 26 + #include <linux/nilfs2_api.h> 27 + #include <linux/nilfs2_ondisk.h> 27 28 #include "the_nilfs.h" 28 29 #include "bmap.h" 29 30 ··· 120 119 /* 121 120 * Macros to check inode numbers 122 121 */ 123 - #define NILFS_MDT_INO_BITS \ 124 - ((unsigned int)(1 << NILFS_DAT_INO | 1 << NILFS_CPFILE_INO | \ 125 - 1 << NILFS_SUFILE_INO | 1 << NILFS_IFILE_INO | \ 126 - 1 << NILFS_ATIME_INO | 1 << NILFS_SKETCH_INO)) 122 + #define NILFS_MDT_INO_BITS \ 123 + (BIT(NILFS_DAT_INO) | BIT(NILFS_CPFILE_INO) | \ 124 + BIT(NILFS_SUFILE_INO) | BIT(NILFS_IFILE_INO) | \ 125 + BIT(NILFS_ATIME_INO) | BIT(NILFS_SKETCH_INO)) 127 126 128 - #define NILFS_SYS_INO_BITS \ 129 - ((unsigned int)(1 << NILFS_ROOT_INO) | NILFS_MDT_INO_BITS) 127 + #define NILFS_SYS_INO_BITS (BIT(NILFS_ROOT_INO) | NILFS_MDT_INO_BITS) 130 128 131 129 #define NILFS_FIRST_INO(sb) (((struct the_nilfs *)sb->s_fs_info)->ns_first_ino) 132 130 133 131 #define NILFS_MDT_INODE(sb, ino) \ 134 - ((ino) < NILFS_FIRST_INO(sb) && (NILFS_MDT_INO_BITS & (1 << (ino)))) 132 + ((ino) < NILFS_FIRST_INO(sb) && (NILFS_MDT_INO_BITS & BIT(ino))) 135 133 #define NILFS_VALID_INODE(sb, ino) \ 136 - ((ino) >= NILFS_FIRST_INO(sb) || (NILFS_SYS_INO_BITS & (1 << (ino)))) 134 + ((ino) >= NILFS_FIRST_INO(sb) || (NILFS_SYS_INO_BITS & BIT(ino))) 137 135 138 136 /** 139 137 * struct nilfs_transaction_info: context information for synchronization ··· 299 299 /* super.c */ 300 300 extern struct inode *nilfs_alloc_inode(struct super_block *); 301 301 extern void nilfs_destroy_inode(struct inode *); 302 + 302 303 extern __printf(3, 4) 303 - void nilfs_error(struct super_block *, const char *, const char *, ...); 304 + void __nilfs_msg(struct super_block *sb, const char *level, 305 + const char *fmt, ...); 304 306 extern __printf(3, 4) 305 - void nilfs_warning(struct super_block *, const char *, const char *, ...); 307 + 
void __nilfs_error(struct super_block *sb, const char *function, 308 + const char *fmt, ...); 309 + 310 + #ifdef CONFIG_PRINTK 311 + 312 + #define nilfs_msg(sb, level, fmt, ...) \ 313 + __nilfs_msg(sb, level, fmt, ##__VA_ARGS__) 314 + #define nilfs_error(sb, fmt, ...) \ 315 + __nilfs_error(sb, __func__, fmt, ##__VA_ARGS__) 316 + 317 + #else 318 + 319 + #define nilfs_msg(sb, level, fmt, ...) \ 320 + do { \ 321 + no_printk(fmt, ##__VA_ARGS__); \ 322 + (void)(sb); \ 323 + } while (0) 324 + #define nilfs_error(sb, fmt, ...) \ 325 + do { \ 326 + no_printk(fmt, ##__VA_ARGS__); \ 327 + __nilfs_error(sb, "", " "); \ 328 + } while (0) 329 + 330 + #endif /* CONFIG_PRINTK */ 331 + 306 332 extern struct nilfs_super_block * 307 333 nilfs_read_super_block(struct super_block *, u64, int, struct buffer_head **); 308 334 extern int nilfs_store_magic_and_option(struct super_block *,
+22 -23
fs/nilfs2/page.c
··· 30 30 #include "mdt.h" 31 31 32 32 33 - #define NILFS_BUFFER_INHERENT_BITS \ 34 - ((1UL << BH_Uptodate) | (1UL << BH_Mapped) | (1UL << BH_NILFS_Node) | \ 35 - (1UL << BH_NILFS_Volatile) | (1UL << BH_NILFS_Checked)) 33 + #define NILFS_BUFFER_INHERENT_BITS \ 34 + (BIT(BH_Uptodate) | BIT(BH_Mapped) | BIT(BH_NILFS_Node) | \ 35 + BIT(BH_NILFS_Volatile) | BIT(BH_NILFS_Checked)) 36 36 37 37 static struct buffer_head * 38 38 __nilfs_get_page_block(struct page *page, unsigned long block, pgoff_t index, ··· 85 85 { 86 86 struct page *page = bh->b_page; 87 87 const unsigned long clear_bits = 88 - (1 << BH_Uptodate | 1 << BH_Dirty | 1 << BH_Mapped | 89 - 1 << BH_Async_Write | 1 << BH_NILFS_Volatile | 90 - 1 << BH_NILFS_Checked | 1 << BH_NILFS_Redirected); 88 + (BIT(BH_Uptodate) | BIT(BH_Dirty) | BIT(BH_Mapped) | 89 + BIT(BH_Async_Write) | BIT(BH_NILFS_Volatile) | 90 + BIT(BH_NILFS_Checked) | BIT(BH_NILFS_Redirected)); 91 91 92 92 lock_buffer(bh); 93 93 set_mask_bits(&bh->b_state, clear_bits, 0); ··· 124 124 dbh->b_bdev = sbh->b_bdev; 125 125 126 126 bh = dbh; 127 - bits = sbh->b_state & ((1UL << BH_Uptodate) | (1UL << BH_Mapped)); 127 + bits = sbh->b_state & (BIT(BH_Uptodate) | BIT(BH_Mapped)); 128 128 while ((bh = bh->b_this_page) != dbh) { 129 129 lock_buffer(bh); 130 130 bits &= bh->b_state; 131 131 unlock_buffer(bh); 132 132 } 133 - if (bits & (1UL << BH_Uptodate)) 133 + if (bits & BIT(BH_Uptodate)) 134 134 SetPageUptodate(dpage); 135 135 else 136 136 ClearPageUptodate(dpage); 137 - if (bits & (1UL << BH_Mapped)) 137 + if (bits & BIT(BH_Mapped)) 138 138 SetPageMappedToDisk(dpage); 139 139 else 140 140 ClearPageMappedToDisk(dpage); ··· 215 215 create_empty_buffers(dst, sbh->b_size, 0); 216 216 217 217 if (copy_dirty) 218 - mask |= (1UL << BH_Dirty); 218 + mask |= BIT(BH_Dirty); 219 219 220 220 dbh = dbufs = page_buffers(dst); 221 221 do { ··· 403 403 404 404 BUG_ON(!PageLocked(page)); 405 405 406 - if (!silent) { 407 - nilfs_warning(sb, __func__, 408 - "discard page: 
offset %lld, ino %lu", 409 - page_offset(page), inode->i_ino); 410 - } 406 + if (!silent) 407 + nilfs_msg(sb, KERN_WARNING, 408 + "discard dirty page: offset=%lld, ino=%lu", 409 + page_offset(page), inode->i_ino); 411 410 412 411 ClearPageUptodate(page); 413 412 ClearPageMappedToDisk(page); ··· 414 415 if (page_has_buffers(page)) { 415 416 struct buffer_head *bh, *head; 416 417 const unsigned long clear_bits = 417 - (1 << BH_Uptodate | 1 << BH_Dirty | 1 << BH_Mapped | 418 - 1 << BH_Async_Write | 1 << BH_NILFS_Volatile | 419 - 1 << BH_NILFS_Checked | 1 << BH_NILFS_Redirected); 418 + (BIT(BH_Uptodate) | BIT(BH_Dirty) | BIT(BH_Mapped) | 419 + BIT(BH_Async_Write) | BIT(BH_NILFS_Volatile) | 420 + BIT(BH_NILFS_Checked) | BIT(BH_NILFS_Redirected)); 420 421 421 422 bh = head = page_buffers(page); 422 423 do { 423 424 lock_buffer(bh); 424 - if (!silent) { 425 - nilfs_warning(sb, __func__, 426 - "discard block %llu, size %zu", 427 - (u64)bh->b_blocknr, bh->b_size); 428 - } 425 + if (!silent) 426 + nilfs_msg(sb, KERN_WARNING, 427 + "discard dirty block: blocknr=%llu, size=%zu", 428 + (u64)bh->b_blocknr, bh->b_size); 429 + 429 430 set_mask_bits(&bh->b_state, clear_bits, 0); 430 431 unlock_buffer(bh); 431 432 } while (bh = bh->b_this_page, bh != head);
+35 -37
fs/nilfs2/recovery.c
··· 54 54 }; 55 55 56 56 57 - static int nilfs_warn_segment_error(int err) 57 + static int nilfs_warn_segment_error(struct super_block *sb, int err) 58 58 { 59 + const char *msg = NULL; 60 + 59 61 switch (err) { 60 62 case NILFS_SEG_FAIL_IO: 61 - printk(KERN_WARNING 62 - "NILFS warning: I/O error on loading last segment\n"); 63 + nilfs_msg(sb, KERN_ERR, "I/O error reading segment"); 63 64 return -EIO; 64 65 case NILFS_SEG_FAIL_MAGIC: 65 - printk(KERN_WARNING 66 - "NILFS warning: Segment magic number invalid\n"); 66 + msg = "Magic number mismatch"; 67 67 break; 68 68 case NILFS_SEG_FAIL_SEQ: 69 - printk(KERN_WARNING 70 - "NILFS warning: Sequence number mismatch\n"); 69 + msg = "Sequence number mismatch"; 71 70 break; 72 71 case NILFS_SEG_FAIL_CHECKSUM_SUPER_ROOT: 73 - printk(KERN_WARNING 74 - "NILFS warning: Checksum error in super root\n"); 72 + msg = "Checksum error in super root"; 75 73 break; 76 74 case NILFS_SEG_FAIL_CHECKSUM_FULL: 77 - printk(KERN_WARNING 78 - "NILFS warning: Checksum error in segment payload\n"); 75 + msg = "Checksum error in segment payload"; 79 76 break; 80 77 case NILFS_SEG_FAIL_CONSISTENCY: 81 - printk(KERN_WARNING 82 - "NILFS warning: Inconsistent segment\n"); 78 + msg = "Inconsistency found"; 83 79 break; 84 80 case NILFS_SEG_NO_SUPER_ROOT: 85 - printk(KERN_WARNING 86 - "NILFS warning: No super root in the last segment\n"); 81 + msg = "No super root in the last segment"; 87 82 break; 83 + default: 84 + nilfs_msg(sb, KERN_ERR, "unrecognized segment error %d", err); 85 + return -EINVAL; 88 86 } 87 + nilfs_msg(sb, KERN_WARNING, "invalid segment: %s", msg); 89 88 return -EINVAL; 90 89 } 91 90 ··· 177 178 brelse(bh_sr); 178 179 179 180 failed: 180 - return nilfs_warn_segment_error(ret); 181 + return nilfs_warn_segment_error(nilfs->ns_sb, ret); 181 182 } 182 183 183 184 /** ··· 552 553 put_page(page); 553 554 554 555 failed_inode: 555 - printk(KERN_WARNING 556 - "NILFS warning: error recovering data block " 557 - "(err=%d, ino=%lu, 
block-offset=%llu)\n", 558 - err, (unsigned long)rb->ino, 559 - (unsigned long long)rb->blkoff); 556 + nilfs_msg(sb, KERN_WARNING, 557 + "error %d recovering data block (ino=%lu, block-offset=%llu)", 558 + err, (unsigned long)rb->ino, 559 + (unsigned long long)rb->blkoff); 560 560 if (!err2) 561 561 err2 = err; 562 562 next: ··· 678 680 } 679 681 680 682 if (nsalvaged_blocks) { 681 - printk(KERN_INFO "NILFS (device %s): salvaged %lu blocks\n", 682 - sb->s_id, nsalvaged_blocks); 683 + nilfs_msg(sb, KERN_INFO, "salvaged %lu blocks", 684 + nsalvaged_blocks); 683 685 ri->ri_need_recovery = NILFS_RECOVERY_ROLLFORWARD_DONE; 684 686 } 685 687 out: ··· 690 692 confused: 691 693 err = -EINVAL; 692 694 failed: 693 - printk(KERN_ERR 694 - "NILFS (device %s): Error roll-forwarding " 695 - "(err=%d, pseg block=%llu). ", 696 - sb->s_id, err, (unsigned long long)pseg_start); 695 + nilfs_msg(sb, KERN_ERR, 696 + "error %d roll-forwarding partial segment at blocknr = %llu", 697 + err, (unsigned long long)pseg_start); 697 698 goto out; 698 699 } 699 700 ··· 712 715 set_buffer_dirty(bh); 713 716 err = sync_dirty_buffer(bh); 714 717 if (unlikely(err)) 715 - printk(KERN_WARNING 716 - "NILFS warning: buffer sync write failed during " 717 - "post-cleaning of recovery.\n"); 718 + nilfs_msg(nilfs->ns_sb, KERN_WARNING, 719 + "buffer sync write failed during post-cleaning of recovery."); 718 720 brelse(bh); 719 721 } 720 722 ··· 748 752 749 753 err = nilfs_attach_checkpoint(sb, ri->ri_cno, true, &root); 750 754 if (unlikely(err)) { 751 - printk(KERN_ERR 752 - "NILFS: error loading the latest checkpoint.\n"); 755 + nilfs_msg(sb, KERN_ERR, 756 + "error %d loading the latest checkpoint", err); 753 757 return err; 754 758 } 755 759 ··· 760 764 if (ri->ri_need_recovery == NILFS_RECOVERY_ROLLFORWARD_DONE) { 761 765 err = nilfs_prepare_segment_for_recovery(nilfs, sb, ri); 762 766 if (unlikely(err)) { 763 - printk(KERN_ERR "NILFS: Error preparing segments for " 764 - "recovery.\n"); 767 + 
nilfs_msg(sb, KERN_ERR, 768 + "error %d preparing segment for recovery", 769 + err); 765 770 goto failed; 766 771 } 767 772 ··· 775 778 nilfs_detach_log_writer(sb); 776 779 777 780 if (unlikely(err)) { 778 - printk(KERN_ERR "NILFS: Oops! recovery failed. " 779 - "(err=%d)\n", err); 781 + nilfs_msg(sb, KERN_ERR, 782 + "error %d writing segment for recovery", 783 + err); 780 784 goto failed; 781 785 } 782 786 ··· 959 961 failed: 960 962 brelse(bh_sum); 961 963 nilfs_dispose_segment_list(&segments); 962 - return (ret < 0) ? ret : nilfs_warn_segment_error(ret); 964 + return ret < 0 ? ret : nilfs_warn_segment_error(nilfs->ns_sb, ret); 963 965 }
+5 -1
fs/nilfs2/segbuf.c
··· 514 514 } while (--segbuf->sb_nbio > 0); 515 515 516 516 if (unlikely(atomic_read(&segbuf->sb_err) > 0)) { 517 - printk(KERN_ERR "NILFS: IO error writing segment\n"); 517 + nilfs_msg(segbuf->sb_super, KERN_ERR, 518 + "I/O error writing log (start-blocknr=%llu, block-count=%lu) in segment %llu", 519 + (unsigned long long)segbuf->sb_pseg_start, 520 + segbuf->sb_sum.nblocks, 521 + (unsigned long long)segbuf->sb_segnum); 518 522 err = -EIO; 519 523 } 520 524 return err;
+30 -31
fs/nilfs2/segment.c
··· 150 150 #define nilfs_cnt32_lt(a, b) nilfs_cnt32_gt(b, a) 151 151 #define nilfs_cnt32_le(a, b) nilfs_cnt32_ge(b, a) 152 152 153 - static int nilfs_prepare_segment_lock(struct nilfs_transaction_info *ti) 153 + static int nilfs_prepare_segment_lock(struct super_block *sb, 154 + struct nilfs_transaction_info *ti) 154 155 { 155 156 struct nilfs_transaction_info *cur_ti = current->journal_info; 156 157 void *save = NULL; ··· 165 164 * it is saved and will be restored on 166 165 * nilfs_transaction_commit(). 167 166 */ 168 - printk(KERN_WARNING 169 - "NILFS warning: journal info from a different FS\n"); 167 + nilfs_msg(sb, KERN_WARNING, "journal info from a different FS"); 170 168 save = current->journal_info; 171 169 } 172 170 if (!ti) { ··· 215 215 int vacancy_check) 216 216 { 217 217 struct the_nilfs *nilfs; 218 - int ret = nilfs_prepare_segment_lock(ti); 218 + int ret = nilfs_prepare_segment_lock(sb, ti); 219 219 struct nilfs_transaction_info *trace_ti; 220 220 221 221 if (unlikely(ret < 0)) ··· 373 373 nilfs_segctor_do_immediate_flush(sci); 374 374 375 375 up_write(&nilfs->ns_segctor_sem); 376 - yield(); 376 + cond_resched(); 377 377 } 378 378 if (gcflag) 379 379 ti->ti_flags |= NILFS_TI_GC; ··· 1858 1858 */ 1859 1859 list_for_each_entry(bh, &segbuf->sb_payload_buffers, 1860 1860 b_assoc_buffers) { 1861 - const unsigned long set_bits = (1 << BH_Uptodate); 1861 + const unsigned long set_bits = BIT(BH_Uptodate); 1862 1862 const unsigned long clear_bits = 1863 - (1 << BH_Dirty | 1 << BH_Async_Write | 1864 - 1 << BH_Delay | 1 << BH_NILFS_Volatile | 1865 - 1 << BH_NILFS_Redirected); 1863 + (BIT(BH_Dirty) | BIT(BH_Async_Write) | 1864 + BIT(BH_Delay) | BIT(BH_NILFS_Volatile) | 1865 + BIT(BH_NILFS_Redirected)); 1866 1866 1867 1867 set_mask_bits(&bh->b_state, clear_bits, set_bits); 1868 1868 if (bh == segbuf->sb_super_root) { ··· 1951 1951 err = nilfs_ifile_get_inode_block( 1952 1952 ifile, ii->vfs_inode.i_ino, &ibh); 1953 1953 if (unlikely(err)) { 1954 - 
nilfs_warning(sci->sc_super, __func__, 1955 - "failed to get inode block."); 1954 + nilfs_msg(sci->sc_super, KERN_WARNING, 1955 + "log writer: error %d getting inode block (ino=%lu)", 1956 + err, ii->vfs_inode.i_ino); 1956 1957 return err; 1957 1958 } 1958 1959 mark_buffer_dirty(ibh); ··· 2132 2131 static void nilfs_segctor_do_flush(struct nilfs_sc_info *sci, int bn) 2133 2132 { 2134 2133 spin_lock(&sci->sc_state_lock); 2135 - if (!(sci->sc_flush_request & (1 << bn))) { 2134 + if (!(sci->sc_flush_request & BIT(bn))) { 2136 2135 unsigned long prev_req = sci->sc_flush_request; 2137 2136 2138 - sci->sc_flush_request |= (1 << bn); 2137 + sci->sc_flush_request |= BIT(bn); 2139 2138 if (!prev_req) 2140 2139 wake_up(&sci->sc_wait_daemon); 2141 2140 } ··· 2319 2318 } 2320 2319 2321 2320 #define FLUSH_FILE_BIT (0x1) /* data file only */ 2322 - #define FLUSH_DAT_BIT (1 << NILFS_DAT_INO) /* DAT only */ 2321 + #define FLUSH_DAT_BIT BIT(NILFS_DAT_INO) /* DAT only */ 2323 2322 2324 2323 /** 2325 2324 * nilfs_segctor_accept - record accepted sequence count of log-write requests ··· 2459 2458 if (likely(!err)) 2460 2459 break; 2461 2460 2462 - nilfs_warning(sb, __func__, 2463 - "segment construction failed. (err=%d)", err); 2461 + nilfs_msg(sb, KERN_WARNING, "error %d cleaning segments", err); 2464 2462 set_current_state(TASK_INTERRUPTIBLE); 2465 2463 schedule_timeout(sci->sc_interval); 2466 2464 } ··· 2467 2467 int ret = nilfs_discard_segments(nilfs, sci->sc_freesegs, 2468 2468 sci->sc_nfreesegs); 2469 2469 if (ret) { 2470 - printk(KERN_WARNING 2471 - "NILFS warning: error %d on discard request, " 2472 - "turning discards off for the device\n", ret); 2470 + nilfs_msg(sb, KERN_WARNING, 2471 + "error %d on discard request, turning discards off for the device", 2472 + ret); 2473 2473 nilfs_clear_opt(nilfs, DISCARD); 2474 2474 } 2475 2475 } ··· 2551 2551 /* start sync. 
*/ 2552 2552 sci->sc_task = current; 2553 2553 wake_up(&sci->sc_wait_task); /* for nilfs_segctor_start_thread() */ 2554 - printk(KERN_INFO 2555 - "segctord starting. Construction interval = %lu seconds, " 2556 - "CP frequency < %lu seconds\n", 2557 - sci->sc_interval / HZ, sci->sc_mjcp_freq / HZ); 2554 + nilfs_msg(sci->sc_super, KERN_INFO, 2555 + "segctord starting. Construction interval = %lu seconds, CP frequency < %lu seconds", 2556 + sci->sc_interval / HZ, sci->sc_mjcp_freq / HZ); 2558 2557 2559 2558 spin_lock(&sci->sc_state_lock); 2560 2559 loop: ··· 2627 2628 if (IS_ERR(t)) { 2628 2629 int err = PTR_ERR(t); 2629 2630 2630 - printk(KERN_ERR "NILFS: error %d creating segctord thread\n", 2631 - err); 2631 + nilfs_msg(sci->sc_super, KERN_ERR, 2632 + "error %d creating segctord thread", err); 2632 2633 return err; 2633 2634 } 2634 2635 wait_event(sci->sc_wait_task, sci->sc_task != NULL); ··· 2738 2739 nilfs_segctor_write_out(sci); 2739 2740 2740 2741 if (!list_empty(&sci->sc_dirty_files)) { 2741 - nilfs_warning(sci->sc_super, __func__, 2742 - "dirty file(s) after the final construction"); 2742 + nilfs_msg(sci->sc_super, KERN_WARNING, 2743 + "disposed unprocessed dirty file(s) when stopping log writer"); 2743 2744 nilfs_dispose_list(nilfs, &sci->sc_dirty_files, 1); 2744 2745 } 2745 2746 2746 2747 if (!list_empty(&sci->sc_iput_queue)) { 2747 - nilfs_warning(sci->sc_super, __func__, 2748 - "iput queue is not empty"); 2748 + nilfs_msg(sci->sc_super, KERN_WARNING, 2749 + "disposed unprocessed inode(s) in iput queue when stopping log writer"); 2749 2750 nilfs_dispose_list(nilfs, &sci->sc_iput_queue, 1); 2750 2751 } 2751 2752 ··· 2821 2822 spin_lock(&nilfs->ns_inode_lock); 2822 2823 if (!list_empty(&nilfs->ns_dirty_files)) { 2823 2824 list_splice_init(&nilfs->ns_dirty_files, &garbage_list); 2824 - nilfs_warning(sb, __func__, 2825 - "Hit dirty file after stopped log writer"); 2825 + nilfs_msg(sb, KERN_WARNING, 2826 + "disposed unprocessed dirty file(s) when detaching log 
writer"); 2826 2827 } 2827 2828 spin_unlock(&nilfs->ns_inode_lock); 2828 2829 up_write(&nilfs->ns_segctor_sem);
-1
fs/nilfs2/segment.h
··· 23 23 #include <linux/fs.h> 24 24 #include <linux/buffer_head.h> 25 25 #include <linux/workqueue.h> 26 - #include <linux/nilfs2_fs.h> 27 26 #include "nilfs.h" 28 27 29 28 struct nilfs_root;
+22 -22
fs/nilfs2/sufile.c
··· 22 22 #include <linux/string.h> 23 23 #include <linux/buffer_head.h> 24 24 #include <linux/errno.h> 25 - #include <linux/nilfs2_fs.h> 26 25 #include "mdt.h" 27 26 #include "sufile.h" 28 27 ··· 180 181 down_write(&NILFS_MDT(sufile)->mi_sem); 181 182 for (seg = segnumv; seg < segnumv + nsegs; seg++) { 182 183 if (unlikely(*seg >= nilfs_sufile_get_nsegments(sufile))) { 183 - printk(KERN_WARNING 184 - "%s: invalid segment number: %llu\n", __func__, 185 - (unsigned long long)*seg); 184 + nilfs_msg(sufile->i_sb, KERN_WARNING, 185 + "%s: invalid segment number: %llu", 186 + __func__, (unsigned long long)*seg); 186 187 nerr++; 187 188 } 188 189 } ··· 239 240 int ret; 240 241 241 242 if (unlikely(segnum >= nilfs_sufile_get_nsegments(sufile))) { 242 - printk(KERN_WARNING "%s: invalid segment number: %llu\n", 243 - __func__, (unsigned long long)segnum); 243 + nilfs_msg(sufile->i_sb, KERN_WARNING, 244 + "%s: invalid segment number: %llu", 245 + __func__, (unsigned long long)segnum); 244 246 return -EINVAL; 245 247 } 246 248 down_write(&NILFS_MDT(sufile)->mi_sem); ··· 419 419 kaddr = kmap_atomic(su_bh->b_page); 420 420 su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr); 421 421 if (unlikely(!nilfs_segment_usage_clean(su))) { 422 - printk(KERN_WARNING "%s: segment %llu must be clean\n", 423 - __func__, (unsigned long long)segnum); 422 + nilfs_msg(sufile->i_sb, KERN_WARNING, 423 + "%s: segment %llu must be clean", __func__, 424 + (unsigned long long)segnum); 424 425 kunmap_atomic(kaddr); 425 426 return; 426 427 } ··· 445 444 446 445 kaddr = kmap_atomic(su_bh->b_page); 447 446 su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr); 448 - if (su->su_flags == cpu_to_le32(1UL << NILFS_SEGMENT_USAGE_DIRTY) && 447 + if (su->su_flags == cpu_to_le32(BIT(NILFS_SEGMENT_USAGE_DIRTY)) && 449 448 su->su_nblocks == cpu_to_le32(0)) { 450 449 kunmap_atomic(kaddr); 451 450 return; ··· 456 455 /* make the segment garbage */ 457 456 su->su_lastmod = 
cpu_to_le64(0); 458 457 su->su_nblocks = cpu_to_le32(0); 459 - su->su_flags = cpu_to_le32(1UL << NILFS_SEGMENT_USAGE_DIRTY); 458 + su->su_flags = cpu_to_le32(BIT(NILFS_SEGMENT_USAGE_DIRTY)); 460 459 kunmap_atomic(kaddr); 461 460 462 461 nilfs_sufile_mod_counter(header_bh, clean ? (u64)-1 : 0, dirty ? 0 : 1); ··· 477 476 kaddr = kmap_atomic(su_bh->b_page); 478 477 su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr); 479 478 if (nilfs_segment_usage_clean(su)) { 480 - printk(KERN_WARNING "%s: segment %llu is already clean\n", 481 - __func__, (unsigned long long)segnum); 479 + nilfs_msg(sufile->i_sb, KERN_WARNING, 480 + "%s: segment %llu is already clean", 481 + __func__, (unsigned long long)segnum); 482 482 kunmap_atomic(kaddr); 483 483 return; 484 484 } ··· 694 692 su2 = su; 695 693 for (j = 0; j < n; j++, su = (void *)su + susz) { 696 694 if ((le32_to_cpu(su->su_flags) & 697 - ~(1UL << NILFS_SEGMENT_USAGE_ERROR)) || 695 + ~BIT(NILFS_SEGMENT_USAGE_ERROR)) || 698 696 nilfs_segment_is_active(nilfs, segnum + j)) { 699 697 ret = -EBUSY; 700 698 kunmap_atomic(kaddr); ··· 861 859 si->sui_lastmod = le64_to_cpu(su->su_lastmod); 862 860 si->sui_nblocks = le32_to_cpu(su->su_nblocks); 863 861 si->sui_flags = le32_to_cpu(su->su_flags) & 864 - ~(1UL << NILFS_SEGMENT_USAGE_ACTIVE); 862 + ~BIT(NILFS_SEGMENT_USAGE_ACTIVE); 865 863 if (nilfs_segment_is_active(nilfs, segnum + j)) 866 864 si->sui_flags |= 867 - (1UL << NILFS_SEGMENT_USAGE_ACTIVE); 865 + BIT(NILFS_SEGMENT_USAGE_ACTIVE); 868 866 } 869 867 kunmap_atomic(kaddr); 870 868 brelse(su_bh); ··· 952 950 * disk. 
953 951 */ 954 952 sup->sup_sui.sui_flags &= 955 - ~(1UL << NILFS_SEGMENT_USAGE_ACTIVE); 953 + ~BIT(NILFS_SEGMENT_USAGE_ACTIVE); 956 954 957 955 cleansi = nilfs_suinfo_clean(&sup->sup_sui); 958 956 cleansu = nilfs_segment_usage_clean(su); ··· 1177 1175 int err; 1178 1176 1179 1177 if (susize > sb->s_blocksize) { 1180 - printk(KERN_ERR 1181 - "NILFS: too large segment usage size: %zu bytes.\n", 1182 - susize); 1178 + nilfs_msg(sb, KERN_ERR, 1179 + "too large segment usage size: %zu bytes", susize); 1183 1180 return -EINVAL; 1184 1181 } else if (susize < NILFS_MIN_SEGMENT_USAGE_SIZE) { 1185 - printk(KERN_ERR 1186 - "NILFS: too small segment usage size: %zu bytes.\n", 1187 - susize); 1182 + nilfs_msg(sb, KERN_ERR, 1183 + "too small segment usage size: %zu bytes", susize); 1188 1184 return -EINVAL; 1189 1185 } 1190 1186
-1
fs/nilfs2/sufile.h
··· 21 21 22 22 #include <linux/fs.h> 23 23 #include <linux/buffer_head.h> 24 - #include <linux/nilfs2_fs.h> 25 24 #include "mdt.h" 26 25 27 26
+108 -98
fs/nilfs2/super.c
··· 71 71 static int nilfs_setup_super(struct super_block *sb, int is_mount); 72 72 static int nilfs_remount(struct super_block *sb, int *flags, char *data); 73 73 74 + void __nilfs_msg(struct super_block *sb, const char *level, const char *fmt, 75 + ...) 76 + { 77 + struct va_format vaf; 78 + va_list args; 79 + 80 + va_start(args, fmt); 81 + vaf.fmt = fmt; 82 + vaf.va = &args; 83 + if (sb) 84 + printk("%sNILFS (%s): %pV\n", level, sb->s_id, &vaf); 85 + else 86 + printk("%sNILFS: %pV\n", level, &vaf); 87 + va_end(args); 88 + } 89 + 74 90 static void nilfs_set_error(struct super_block *sb) 75 91 { 76 92 struct the_nilfs *nilfs = sb->s_fs_info; ··· 107 91 } 108 92 109 93 /** 110 - * nilfs_error() - report failure condition on a filesystem 94 + * __nilfs_error() - report failure condition on a filesystem 111 95 * 112 - * nilfs_error() sets an ERROR_FS flag on the superblock as well as 113 - * reporting an error message. It should be called when NILFS detects 114 - * incoherences or defects of meta data on disk. As for sustainable 115 - * errors such as a single-shot I/O error, nilfs_warning() or the printk() 116 - * function should be used instead. 96 + * __nilfs_error() sets an ERROR_FS flag on the superblock as well as 97 + * reporting an error message. This function should be called when 98 + * NILFS detects incoherences or defects of meta data on disk. 117 99 * 118 - * The segment constructor must not call this function because it can 119 - * kill itself. 100 + * This implements the body of nilfs_error() macro. Normally, 101 + * nilfs_error() should be used. As for sustainable errors such as a 102 + * single-shot I/O error, nilfs_msg() should be used instead. 103 + * 104 + * Callers should not add a trailing newline since this will do it. 120 105 */ 121 - void nilfs_error(struct super_block *sb, const char *function, 122 - const char *fmt, ...) 106 + void __nilfs_error(struct super_block *sb, const char *function, 107 + const char *fmt, ...) 
123 108 { 124 109 struct the_nilfs *nilfs = sb->s_fs_info; 125 110 struct va_format vaf; ··· 149 132 panic("NILFS (device %s): panic forced after error\n", 150 133 sb->s_id); 151 134 } 152 - 153 - void nilfs_warning(struct super_block *sb, const char *function, 154 - const char *fmt, ...) 155 - { 156 - struct va_format vaf; 157 - va_list args; 158 - 159 - va_start(args, fmt); 160 - 161 - vaf.fmt = fmt; 162 - vaf.va = &args; 163 - 164 - printk(KERN_WARNING "NILFS warning (device %s): %s: %pV\n", 165 - sb->s_id, function, &vaf); 166 - 167 - va_end(args); 168 - } 169 - 170 135 171 136 struct inode *nilfs_alloc_inode(struct super_block *sb) 172 137 { ··· 195 196 } 196 197 197 198 if (unlikely(err)) { 198 - printk(KERN_ERR 199 - "NILFS: unable to write superblock (err=%d)\n", err); 199 + nilfs_msg(sb, KERN_ERR, "unable to write superblock: err=%d", 200 + err); 200 201 if (err == -EIO && nilfs->ns_sbh[1]) { 201 202 /* 202 203 * sbp[0] points to newer log than sbp[1], ··· 266 267 sbp[1]->s_magic == cpu_to_le16(NILFS_SUPER_MAGIC)) { 267 268 memcpy(sbp[0], sbp[1], nilfs->ns_sbsize); 268 269 } else { 269 - printk(KERN_CRIT "NILFS: superblock broke on dev %s\n", 270 - sb->s_id); 270 + nilfs_msg(sb, KERN_CRIT, "superblock broke"); 271 271 return NULL; 272 272 } 273 273 } else if (sbp[1] && ··· 376 378 offset = sb2off & (nilfs->ns_blocksize - 1); 377 379 nsbh = sb_getblk(sb, newblocknr); 378 380 if (!nsbh) { 379 - printk(KERN_WARNING 380 - "NILFS warning: unable to move secondary superblock " 381 - "to block %llu\n", (unsigned long long)newblocknr); 381 + nilfs_msg(sb, KERN_WARNING, 382 + "unable to move secondary superblock to block %llu", 383 + (unsigned long long)newblocknr); 382 384 ret = -EIO; 383 385 goto out; 384 386 } ··· 541 543 up_read(&nilfs->ns_segctor_sem); 542 544 if (unlikely(err)) { 543 545 if (err == -ENOENT || err == -EINVAL) { 544 - printk(KERN_ERR 545 - "NILFS: Invalid checkpoint " 546 - "(checkpoint number=%llu)\n", 547 - (unsigned long long)cno); 546 + 
nilfs_msg(sb, KERN_ERR, 547 + "Invalid checkpoint (checkpoint number=%llu)", 548 + (unsigned long long)cno); 548 549 err = -EINVAL; 549 550 } 550 551 goto failed; ··· 639 642 err = nilfs_ifile_count_free_inodes(root->ifile, 640 643 &nmaxinodes, &nfreeinodes); 641 644 if (unlikely(err)) { 642 - printk(KERN_WARNING 643 - "NILFS warning: fail to count free inodes: err %d.\n", 644 - err); 645 + nilfs_msg(sb, KERN_WARNING, 646 + "failed to count free inodes: err=%d", err); 645 647 if (err == -ERANGE) { 646 648 /* 647 649 * If nilfs_palloc_count_max_entries() returns ··· 772 776 break; 773 777 case Opt_snapshot: 774 778 if (is_remount) { 775 - printk(KERN_ERR 776 - "NILFS: \"%s\" option is invalid " 777 - "for remount.\n", p); 779 + nilfs_msg(sb, KERN_ERR, 780 + "\"%s\" option is invalid for remount", 781 + p); 778 782 return 0; 779 783 } 780 784 break; ··· 788 792 nilfs_clear_opt(nilfs, DISCARD); 789 793 break; 790 794 default: 791 - printk(KERN_ERR 792 - "NILFS: Unrecognized mount option \"%s\"\n", p); 795 + nilfs_msg(sb, KERN_ERR, 796 + "unrecognized mount option \"%s\"", p); 793 797 return 0; 794 798 } 795 799 } ··· 825 829 mnt_count = le16_to_cpu(sbp[0]->s_mnt_count); 826 830 827 831 if (nilfs->ns_mount_state & NILFS_ERROR_FS) { 828 - printk(KERN_WARNING 829 - "NILFS warning: mounting fs with errors\n"); 832 + nilfs_msg(sb, KERN_WARNING, "mounting fs with errors"); 830 833 #if 0 831 834 } else if (max_mnt_count >= 0 && mnt_count >= max_mnt_count) { 832 - printk(KERN_WARNING 833 - "NILFS warning: maximal mount count reached\n"); 835 + nilfs_msg(sb, KERN_WARNING, "maximal mount count reached"); 834 836 #endif 835 837 } 836 838 if (!max_mnt_count) ··· 891 897 features = le64_to_cpu(sbp->s_feature_incompat) & 892 898 ~NILFS_FEATURE_INCOMPAT_SUPP; 893 899 if (features) { 894 - printk(KERN_ERR "NILFS: couldn't mount because of unsupported " 895 - "optional features (%llx)\n", 896 - (unsigned long long)features); 900 + nilfs_msg(sb, KERN_ERR, 901 + "couldn't mount because 
of unsupported optional features (%llx)", 902 + (unsigned long long)features); 897 903 return -EINVAL; 898 904 } 899 905 features = le64_to_cpu(sbp->s_feature_compat_ro) & 900 906 ~NILFS_FEATURE_COMPAT_RO_SUPP; 901 907 if (!(sb->s_flags & MS_RDONLY) && features) { 902 - printk(KERN_ERR "NILFS: couldn't mount RDWR because of " 903 - "unsupported optional features (%llx)\n", 904 - (unsigned long long)features); 908 + nilfs_msg(sb, KERN_ERR, 909 + "couldn't mount RDWR because of unsupported optional features (%llx)", 910 + (unsigned long long)features); 905 911 return -EINVAL; 906 912 } 907 913 return 0; ··· 917 923 918 924 inode = nilfs_iget(sb, root, NILFS_ROOT_INO); 919 925 if (IS_ERR(inode)) { 920 - printk(KERN_ERR "NILFS: get root inode failed\n"); 921 926 ret = PTR_ERR(inode); 927 + nilfs_msg(sb, KERN_ERR, "error %d getting root inode", ret); 922 928 goto out; 923 929 } 924 930 if (!S_ISDIR(inode->i_mode) || !inode->i_blocks || !inode->i_size) { 925 931 iput(inode); 926 - printk(KERN_ERR "NILFS: corrupt root inode.\n"); 932 + nilfs_msg(sb, KERN_ERR, "corrupt root inode"); 927 933 ret = -EINVAL; 928 934 goto out; 929 935 } ··· 951 957 return ret; 952 958 953 959 failed_dentry: 954 - printk(KERN_ERR "NILFS: get root dentry failed\n"); 960 + nilfs_msg(sb, KERN_ERR, "error %d getting root dentry", ret); 955 961 goto out; 956 962 } 957 963 ··· 971 977 ret = (ret == -ENOENT) ? 
-EINVAL : ret; 972 978 goto out; 973 979 } else if (!ret) { 974 - printk(KERN_ERR "NILFS: The specified checkpoint is " 975 - "not a snapshot (checkpoint number=%llu).\n", 976 - (unsigned long long)cno); 980 + nilfs_msg(s, KERN_ERR, 981 + "The specified checkpoint is not a snapshot (checkpoint number=%llu)", 982 + (unsigned long long)cno); 977 983 ret = -EINVAL; 978 984 goto out; 979 985 } 980 986 981 987 ret = nilfs_attach_checkpoint(s, cno, false, &root); 982 988 if (ret) { 983 - printk(KERN_ERR "NILFS: error loading snapshot " 984 - "(checkpoint number=%llu).\n", 985 - (unsigned long long)cno); 989 + nilfs_msg(s, KERN_ERR, 990 + "error %d while loading snapshot (checkpoint number=%llu)", 991 + ret, (unsigned long long)cno); 986 992 goto out; 987 993 } 988 994 ret = nilfs_get_root_dentry(s, root, root_dentry); ··· 1052 1058 __u64 cno; 1053 1059 int err; 1054 1060 1055 - nilfs = alloc_nilfs(sb->s_bdev); 1061 + nilfs = alloc_nilfs(sb); 1056 1062 if (!nilfs) 1057 1063 return -ENOMEM; 1058 1064 ··· 1077 1083 cno = nilfs_last_cno(nilfs); 1078 1084 err = nilfs_attach_checkpoint(sb, cno, true, &fsroot); 1079 1085 if (err) { 1080 - printk(KERN_ERR "NILFS: error loading last checkpoint " 1081 - "(checkpoint number=%llu).\n", (unsigned long long)cno); 1086 + nilfs_msg(sb, KERN_ERR, 1087 + "error %d while loading last checkpoint (checkpoint number=%llu)", 1088 + err, (unsigned long long)cno); 1082 1089 goto failed_unload; 1083 1090 } 1084 1091 ··· 1139 1144 err = -EINVAL; 1140 1145 1141 1146 if (!nilfs_valid_fs(nilfs)) { 1142 - printk(KERN_WARNING "NILFS (device %s): couldn't " 1143 - "remount because the filesystem is in an " 1144 - "incomplete recovery state.\n", sb->s_id); 1147 + nilfs_msg(sb, KERN_WARNING, 1148 + "couldn't remount because the filesystem is in an incomplete recovery state"); 1145 1149 goto restore_opts; 1146 1150 } 1147 1151 ··· 1172 1178 ~NILFS_FEATURE_COMPAT_RO_SUPP; 1173 1179 up_read(&nilfs->ns_sem); 1174 1180 if (features) { 1175 - 
printk(KERN_WARNING "NILFS (device %s): couldn't " 1176 - "remount RDWR because of unsupported optional " 1177 - "features (%llx)\n", 1178 - sb->s_id, (unsigned long long)features); 1181 + nilfs_msg(sb, KERN_WARNING, 1182 + "couldn't remount RDWR because of unsupported optional features (%llx)", 1183 + (unsigned long long)features); 1179 1184 err = -EROFS; 1180 1185 goto restore_opts; 1181 1186 } ··· 1205 1212 int flags; 1206 1213 }; 1207 1214 1215 + static int nilfs_parse_snapshot_option(const char *option, 1216 + const substring_t *arg, 1217 + struct nilfs_super_data *sd) 1218 + { 1219 + unsigned long long val; 1220 + const char *msg = NULL; 1221 + int err; 1222 + 1223 + if (!(sd->flags & MS_RDONLY)) { 1224 + msg = "read-only option is not specified"; 1225 + goto parse_error; 1226 + } 1227 + 1228 + err = kstrtoull(arg->from, 0, &val); 1229 + if (err) { 1230 + if (err == -ERANGE) 1231 + msg = "too large checkpoint number"; 1232 + else 1233 + msg = "malformed argument"; 1234 + goto parse_error; 1235 + } else if (val == 0) { 1236 + msg = "invalid checkpoint number 0"; 1237 + goto parse_error; 1238 + } 1239 + sd->cno = val; 1240 + return 0; 1241 + 1242 + parse_error: 1243 + nilfs_msg(NULL, KERN_ERR, "invalid option \"%s\": %s", option, msg); 1244 + return 1; 1245 + } 1246 + 1208 1247 /** 1209 1248 * nilfs_identify - pre-read mount options needed to identify mount instance 1210 1249 * @data: mount options ··· 1253 1228 p = strsep(&options, ","); 1254 1229 if (p != NULL && *p) { 1255 1230 token = match_token(p, tokens, args); 1256 - if (token == Opt_snapshot) { 1257 - if (!(sd->flags & MS_RDONLY)) { 1258 - ret++; 1259 - } else { 1260 - sd->cno = simple_strtoull(args[0].from, 1261 - NULL, 0); 1262 - /* 1263 - * No need to see the end pointer; 1264 - * match_token() has done syntax 1265 - * checking. 
1266 - */ 1267 - if (sd->cno == 0) 1268 - ret++; 1269 - } 1270 - } 1271 - if (ret) 1272 - printk(KERN_ERR 1273 - "NILFS: invalid mount option: %s\n", p); 1231 + if (token == Opt_snapshot) 1232 + ret = nilfs_parse_snapshot_option(p, &args[0], 1233 + sd); 1274 1234 } 1275 1235 if (!options) 1276 1236 break; ··· 1336 1326 } else if (!sd.cno) { 1337 1327 if (nilfs_tree_is_busy(s->s_root)) { 1338 1328 if ((flags ^ s->s_flags) & MS_RDONLY) { 1339 - printk(KERN_ERR "NILFS: the device already " 1340 - "has a %s mount.\n", 1341 - (s->s_flags & MS_RDONLY) ? 1342 - "read-only" : "read/write"); 1329 + nilfs_msg(s, KERN_ERR, 1330 + "the device already has a %s mount.", 1331 + (s->s_flags & MS_RDONLY) ? 1332 + "read-only" : "read/write"); 1343 1333 err = -EBUSY; 1344 1334 goto failed_super; 1345 1335 }
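One notable fix in the fs/nilfs2/super.c hunks above: nilfs_identify() stops using simple_strtoull(), which silently accepts trailing garbage, and routes snapshot-number parsing through kstrtoull() with explicit error reporting in the new nilfs_parse_snapshot_option(). A minimal userspace sketch of that stricter parse, using strtoull() as a stand-in for the kernel's kstrtoull() (function and variable names here are illustrative, not from the commit):

```c
#include <errno.h>
#include <stdlib.h>

/* Userspace analogue of nilfs_parse_snapshot_option()'s number check:
 * reject a leading '-', empty input, trailing junk, overflow, and the
 * invalid checkpoint number 0. Returns 0 on success, -errno otherwise. */
static int parse_snapshot_cno(const char *arg, unsigned long long *cno)
{
	char *end;
	unsigned long long val;

	if (arg[0] == '-')
		return -EINVAL;		/* kstrtoull() rejects negatives too */

	errno = 0;
	val = strtoull(arg, &end, 0);
	if (end == arg || *end != '\0')
		return -EINVAL;		/* "malformed argument" */
	if (errno == ERANGE)
		return -ERANGE;		/* "too large checkpoint number" */
	if (val == 0)
		return -EINVAL;		/* "invalid checkpoint number 0" */

	*cno = val;
	return 0;
}
```

Unlike simple_strtoull(), this fails loudly on input such as "12x" or an out-of-range value instead of mounting the wrong snapshot.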
+38 -36
fs/nilfs2/sysfs.c
··· 272 272 err = nilfs_cpfile_get_stat(nilfs->ns_cpfile, &cpstat); 273 273 up_read(&nilfs->ns_segctor_sem); 274 274 if (err < 0) { 275 - printk(KERN_ERR "NILFS: unable to get checkpoint stat: err=%d\n", 276 - err); 275 + nilfs_msg(nilfs->ns_sb, KERN_ERR, 276 + "unable to get checkpoint stat: err=%d", err); 277 277 return err; 278 278 } 279 279 ··· 295 295 err = nilfs_cpfile_get_stat(nilfs->ns_cpfile, &cpstat); 296 296 up_read(&nilfs->ns_segctor_sem); 297 297 if (err < 0) { 298 - printk(KERN_ERR "NILFS: unable to get checkpoint stat: err=%d\n", 299 - err); 298 + nilfs_msg(nilfs->ns_sb, KERN_ERR, 299 + "unable to get checkpoint stat: err=%d", err); 300 300 return err; 301 301 } 302 302 ··· 326 326 { 327 327 __u64 cno; 328 328 329 - down_read(&nilfs->ns_sem); 329 + down_read(&nilfs->ns_segctor_sem); 330 330 cno = nilfs->ns_cno; 331 - up_read(&nilfs->ns_sem); 331 + up_read(&nilfs->ns_segctor_sem); 332 332 333 333 return snprintf(buf, PAGE_SIZE, "%llu\n", cno); 334 334 } ··· 414 414 err = nilfs_sufile_get_stat(nilfs->ns_sufile, &sustat); 415 415 up_read(&nilfs->ns_segctor_sem); 416 416 if (err < 0) { 417 - printk(KERN_ERR "NILFS: unable to get segment stat: err=%d\n", 418 - err); 417 + nilfs_msg(nilfs->ns_sb, KERN_ERR, 418 + "unable to get segment stat: err=%d", err); 419 419 return err; 420 420 } 421 421 ··· 511 511 { 512 512 u64 seg_seq; 513 513 514 - down_read(&nilfs->ns_sem); 514 + down_read(&nilfs->ns_segctor_sem); 515 515 seg_seq = nilfs->ns_seg_seq; 516 - up_read(&nilfs->ns_sem); 516 + up_read(&nilfs->ns_segctor_sem); 517 517 518 518 return snprintf(buf, PAGE_SIZE, "%llu\n", seg_seq); 519 519 } ··· 525 525 { 526 526 __u64 segnum; 527 527 528 - down_read(&nilfs->ns_sem); 528 + down_read(&nilfs->ns_segctor_sem); 529 529 segnum = nilfs->ns_segnum; 530 - up_read(&nilfs->ns_sem); 530 + up_read(&nilfs->ns_segctor_sem); 531 531 532 532 return snprintf(buf, PAGE_SIZE, "%llu\n", segnum); 533 533 } ··· 539 539 { 540 540 __u64 nextnum; 541 541 542 - 
down_read(&nilfs->ns_sem); 542 + down_read(&nilfs->ns_segctor_sem); 543 543 nextnum = nilfs->ns_nextnum; 544 - up_read(&nilfs->ns_sem); 544 + up_read(&nilfs->ns_segctor_sem); 545 545 546 546 return snprintf(buf, PAGE_SIZE, "%llu\n", nextnum); 547 547 } ··· 553 553 { 554 554 unsigned long pseg_offset; 555 555 556 - down_read(&nilfs->ns_sem); 556 + down_read(&nilfs->ns_segctor_sem); 557 557 pseg_offset = nilfs->ns_pseg_offset; 558 - up_read(&nilfs->ns_sem); 558 + up_read(&nilfs->ns_segctor_sem); 559 559 560 560 return snprintf(buf, PAGE_SIZE, "%lu\n", pseg_offset); 561 561 } ··· 567 567 { 568 568 __u64 cno; 569 569 570 - down_read(&nilfs->ns_sem); 570 + down_read(&nilfs->ns_segctor_sem); 571 571 cno = nilfs->ns_cno; 572 - up_read(&nilfs->ns_sem); 572 + up_read(&nilfs->ns_segctor_sem); 573 573 574 574 return snprintf(buf, PAGE_SIZE, "%llu\n", cno); 575 575 } ··· 581 581 { 582 582 time_t ctime; 583 583 584 - down_read(&nilfs->ns_sem); 584 + down_read(&nilfs->ns_segctor_sem); 585 585 ctime = nilfs->ns_ctime; 586 - up_read(&nilfs->ns_sem); 586 + up_read(&nilfs->ns_segctor_sem); 587 587 588 588 return NILFS_SHOW_TIME(ctime, buf); 589 589 } ··· 595 595 { 596 596 time_t ctime; 597 597 598 - down_read(&nilfs->ns_sem); 598 + down_read(&nilfs->ns_segctor_sem); 599 599 ctime = nilfs->ns_ctime; 600 - up_read(&nilfs->ns_sem); 600 + up_read(&nilfs->ns_segctor_sem); 601 601 602 602 return snprintf(buf, PAGE_SIZE, "%llu\n", (unsigned long long)ctime); 603 603 } ··· 609 609 { 610 610 time_t nongc_ctime; 611 611 612 - down_read(&nilfs->ns_sem); 612 + down_read(&nilfs->ns_segctor_sem); 613 613 nongc_ctime = nilfs->ns_nongc_ctime; 614 - up_read(&nilfs->ns_sem); 614 + up_read(&nilfs->ns_segctor_sem); 615 615 616 616 return NILFS_SHOW_TIME(nongc_ctime, buf); 617 617 } ··· 623 623 { 624 624 time_t nongc_ctime; 625 625 626 - down_read(&nilfs->ns_sem); 626 + down_read(&nilfs->ns_segctor_sem); 627 627 nongc_ctime = nilfs->ns_nongc_ctime; 628 - up_read(&nilfs->ns_sem); 628 + 
up_read(&nilfs->ns_segctor_sem); 629 629 630 630 return snprintf(buf, PAGE_SIZE, "%llu\n", 631 631 (unsigned long long)nongc_ctime); ··· 638 638 { 639 639 u32 ndirtyblks; 640 640 641 - down_read(&nilfs->ns_sem); 641 + down_read(&nilfs->ns_segctor_sem); 642 642 ndirtyblks = atomic_read(&nilfs->ns_ndirtyblks); 643 - up_read(&nilfs->ns_sem); 643 + up_read(&nilfs->ns_segctor_sem); 644 644 645 645 return snprintf(buf, PAGE_SIZE, "%u\n", ndirtyblks); 646 646 } ··· 789 789 790 790 err = kstrtouint(skip_spaces(buf), 0, &val); 791 791 if (err) { 792 - printk(KERN_ERR "NILFS: unable to convert string: err=%d\n", 793 - err); 792 + nilfs_msg(nilfs->ns_sb, KERN_ERR, 793 + "unable to convert string: err=%d", err); 794 794 return err; 795 795 } 796 796 797 797 if (val < NILFS_SB_FREQ) { 798 798 val = NILFS_SB_FREQ; 799 - printk(KERN_WARNING "NILFS: superblock update frequency cannot be lesser than 10 seconds\n"); 799 + nilfs_msg(nilfs->ns_sb, KERN_WARNING, 800 + "superblock update frequency cannot be lesser than 10 seconds"); 800 801 } 801 802 802 803 down_write(&nilfs->ns_sem); ··· 1000 999 nilfs->ns_dev_subgroups = kzalloc(devgrp_size, GFP_KERNEL); 1001 1000 if (unlikely(!nilfs->ns_dev_subgroups)) { 1002 1001 err = -ENOMEM; 1003 - printk(KERN_ERR "NILFS: unable to allocate memory for device group\n"); 1002 + nilfs_msg(sb, KERN_ERR, 1003 + "unable to allocate memory for device group"); 1004 1004 goto failed_create_device_group; 1005 1005 } 1006 1006 ··· 1111 1109 nilfs_kset = kset_create_and_add(NILFS_ROOT_GROUP_NAME, NULL, fs_kobj); 1112 1110 if (!nilfs_kset) { 1113 1111 err = -ENOMEM; 1114 - printk(KERN_ERR "NILFS: unable to create sysfs entry: err %d\n", 1115 - err); 1112 + nilfs_msg(NULL, KERN_ERR, 1113 + "unable to create sysfs entry: err=%d", err); 1116 1114 goto failed_sysfs_init; 1117 1115 } 1118 1116 1119 1117 err = sysfs_create_group(&nilfs_kset->kobj, &nilfs_feature_attr_group); 1120 1118 if (unlikely(err)) { 1121 - printk(KERN_ERR "NILFS: unable to create feature 
group: err %d\n", 1122 - err); 1119 + nilfs_msg(NULL, KERN_ERR, 1120 + "unable to create feature group: err=%d", err); 1123 1121 goto cleanup_sysfs_init; 1124 1122 } 1125 1123
+71 -63
fs/nilfs2/the_nilfs.c
··· 56 56 57 57 /** 58 58 * alloc_nilfs - allocate a nilfs object 59 - * @bdev: block device to which the_nilfs is related 59 + * @sb: super block instance 60 60 * 61 61 * Return Value: On success, pointer to the_nilfs is returned. 62 62 * On error, NULL is returned. 63 63 */ 64 - struct the_nilfs *alloc_nilfs(struct block_device *bdev) 64 + struct the_nilfs *alloc_nilfs(struct super_block *sb) 65 65 { 66 66 struct the_nilfs *nilfs; 67 67 ··· 69 69 if (!nilfs) 70 70 return NULL; 71 71 72 - nilfs->ns_bdev = bdev; 72 + nilfs->ns_sb = sb; 73 + nilfs->ns_bdev = sb->s_bdev; 73 74 atomic_set(&nilfs->ns_ndirtyblks, 0); 74 75 init_rwsem(&nilfs->ns_sem); 75 76 mutex_init(&nilfs->ns_snapshot_mount_mutex); ··· 192 191 nilfs_get_segnum_of_block(nilfs, nilfs->ns_last_pseg); 193 192 nilfs->ns_cno = nilfs->ns_last_cno + 1; 194 193 if (nilfs->ns_segnum >= nilfs->ns_nsegments) { 195 - printk(KERN_ERR "NILFS invalid last segment number.\n"); 194 + nilfs_msg(nilfs->ns_sb, KERN_ERR, 195 + "pointed segment number is out of range: segnum=%llu, nsegments=%lu", 196 + (unsigned long long)nilfs->ns_segnum, 197 + nilfs->ns_nsegments); 196 198 ret = -EINVAL; 197 199 } 198 200 return ret; ··· 219 215 int err; 220 216 221 217 if (!valid_fs) { 222 - printk(KERN_WARNING "NILFS warning: mounting unchecked fs\n"); 218 + nilfs_msg(sb, KERN_WARNING, "mounting unchecked fs"); 223 219 if (s_flags & MS_RDONLY) { 224 - printk(KERN_INFO "NILFS: INFO: recovery " 225 - "required for readonly filesystem.\n"); 226 - printk(KERN_INFO "NILFS: write access will " 227 - "be enabled during recovery.\n"); 220 + nilfs_msg(sb, KERN_INFO, 221 + "recovery required for readonly filesystem"); 222 + nilfs_msg(sb, KERN_INFO, 223 + "write access will be enabled during recovery"); 228 224 } 229 225 } 230 226 ··· 239 235 goto scan_error; 240 236 241 237 if (!nilfs_valid_sb(sbp[1])) { 242 - printk(KERN_WARNING 243 - "NILFS warning: unable to fall back to spare" 244 - "super block\n"); 238 + nilfs_msg(sb, KERN_WARNING, 239 + 
"unable to fall back to spare super block"); 245 240 goto scan_error; 246 241 } 247 - printk(KERN_INFO 248 - "NILFS: try rollback from an earlier position\n"); 242 + nilfs_msg(sb, KERN_INFO, 243 + "trying rollback from an earlier position"); 249 244 250 245 /* 251 246 * restore super block with its spare and reconfigure ··· 257 254 /* verify consistency between two super blocks */ 258 255 blocksize = BLOCK_SIZE << le32_to_cpu(sbp[0]->s_log_block_size); 259 256 if (blocksize != nilfs->ns_blocksize) { 260 - printk(KERN_WARNING 261 - "NILFS warning: blocksize differs between " 262 - "two super blocks (%d != %d)\n", 263 - blocksize, nilfs->ns_blocksize); 257 + nilfs_msg(sb, KERN_WARNING, 258 + "blocksize differs between two super blocks (%d != %d)", 259 + blocksize, nilfs->ns_blocksize); 264 260 goto scan_error; 265 261 } 266 262 ··· 278 276 279 277 err = nilfs_load_super_root(nilfs, sb, ri.ri_super_root); 280 278 if (unlikely(err)) { 281 - printk(KERN_ERR "NILFS: error loading super root.\n"); 279 + nilfs_msg(sb, KERN_ERR, "error %d while loading super root", 280 + err); 282 281 goto failed; 283 282 } 284 283 ··· 290 287 __u64 features; 291 288 292 289 if (nilfs_test_opt(nilfs, NORECOVERY)) { 293 - printk(KERN_INFO "NILFS: norecovery option specified. 
" 294 - "skipping roll-forward recovery\n"); 290 + nilfs_msg(sb, KERN_INFO, 291 + "norecovery option specified, skipping roll-forward recovery"); 295 292 goto skip_recovery; 296 293 } 297 294 features = le64_to_cpu(nilfs->ns_sbp[0]->s_feature_compat_ro) & 298 295 ~NILFS_FEATURE_COMPAT_RO_SUPP; 299 296 if (features) { 300 - printk(KERN_ERR "NILFS: couldn't proceed with " 301 - "recovery because of unsupported optional " 302 - "features (%llx)\n", 303 - (unsigned long long)features); 297 + nilfs_msg(sb, KERN_ERR, 298 + "couldn't proceed with recovery because of unsupported optional features (%llx)", 299 + (unsigned long long)features); 304 300 err = -EROFS; 305 301 goto failed_unload; 306 302 } 307 303 if (really_read_only) { 308 - printk(KERN_ERR "NILFS: write access " 309 - "unavailable, cannot proceed.\n"); 304 + nilfs_msg(sb, KERN_ERR, 305 + "write access unavailable, cannot proceed"); 310 306 err = -EROFS; 311 307 goto failed_unload; 312 308 } 313 309 sb->s_flags &= ~MS_RDONLY; 314 310 } else if (nilfs_test_opt(nilfs, NORECOVERY)) { 315 - printk(KERN_ERR "NILFS: recovery cancelled because norecovery " 316 - "option was specified for a read/write mount\n"); 311 + nilfs_msg(sb, KERN_ERR, 312 + "recovery cancelled because norecovery option was specified for a read/write mount"); 317 313 err = -EINVAL; 318 314 goto failed_unload; 319 315 } ··· 327 325 up_write(&nilfs->ns_sem); 328 326 329 327 if (err) { 330 - printk(KERN_ERR "NILFS: failed to update super block. " 331 - "recovery unfinished.\n"); 328 + nilfs_msg(sb, KERN_ERR, 329 + "error %d updating super block. 
recovery unfinished.", 330 + err); 332 331 goto failed_unload; 333 332 } 334 - printk(KERN_INFO "NILFS: recovery complete.\n"); 333 + nilfs_msg(sb, KERN_INFO, "recovery complete"); 335 334 336 335 skip_recovery: 337 336 nilfs_clear_recovery_info(&ri); ··· 340 337 return 0; 341 338 342 339 scan_error: 343 - printk(KERN_ERR "NILFS: error searching super root.\n"); 340 + nilfs_msg(sb, KERN_ERR, "error %d while searching super root", err); 344 341 goto failed; 345 342 346 343 failed_unload: ··· 387 384 struct nilfs_super_block *sbp) 388 385 { 389 386 if (le32_to_cpu(sbp->s_rev_level) < NILFS_MIN_SUPP_REV) { 390 - printk(KERN_ERR "NILFS: unsupported revision " 391 - "(superblock rev.=%d.%d, current rev.=%d.%d). " 392 - "Please check the version of mkfs.nilfs.\n", 393 - le32_to_cpu(sbp->s_rev_level), 394 - le16_to_cpu(sbp->s_minor_rev_level), 395 - NILFS_CURRENT_REV, NILFS_MINOR_REV); 387 + nilfs_msg(nilfs->ns_sb, KERN_ERR, 388 + "unsupported revision (superblock rev.=%d.%d, current rev.=%d.%d). 
Please check the version of mkfs.nilfs(2).", 389 + le32_to_cpu(sbp->s_rev_level), 390 + le16_to_cpu(sbp->s_minor_rev_level), 391 + NILFS_CURRENT_REV, NILFS_MINOR_REV); 396 392 return -EINVAL; 397 393 } 398 394 nilfs->ns_sbsize = le16_to_cpu(sbp->s_bytes); ··· 400 398 401 399 nilfs->ns_inode_size = le16_to_cpu(sbp->s_inode_size); 402 400 if (nilfs->ns_inode_size > nilfs->ns_blocksize) { 403 - printk(KERN_ERR "NILFS: too large inode size: %d bytes.\n", 404 - nilfs->ns_inode_size); 401 + nilfs_msg(nilfs->ns_sb, KERN_ERR, 402 + "too large inode size: %d bytes", 403 + nilfs->ns_inode_size); 405 404 return -EINVAL; 406 405 } else if (nilfs->ns_inode_size < NILFS_MIN_INODE_SIZE) { 407 - printk(KERN_ERR "NILFS: too small inode size: %d bytes.\n", 408 - nilfs->ns_inode_size); 406 + nilfs_msg(nilfs->ns_sb, KERN_ERR, 407 + "too small inode size: %d bytes", 408 + nilfs->ns_inode_size); 409 409 return -EINVAL; 410 410 } 411 411 ··· 415 411 416 412 nilfs->ns_blocks_per_segment = le32_to_cpu(sbp->s_blocks_per_segment); 417 413 if (nilfs->ns_blocks_per_segment < NILFS_SEG_MIN_BLOCKS) { 418 - printk(KERN_ERR "NILFS: too short segment.\n"); 414 + nilfs_msg(nilfs->ns_sb, KERN_ERR, 415 + "too short segment: %lu blocks", 416 + nilfs->ns_blocks_per_segment); 419 417 return -EINVAL; 420 418 } 421 419 ··· 426 420 le32_to_cpu(sbp->s_r_segments_percentage); 427 421 if (nilfs->ns_r_segments_percentage < 1 || 428 422 nilfs->ns_r_segments_percentage > 99) { 429 - printk(KERN_ERR "NILFS: invalid reserved segments percentage.\n"); 423 + nilfs_msg(nilfs->ns_sb, KERN_ERR, 424 + "invalid reserved segments percentage: %lu", 425 + nilfs->ns_r_segments_percentage); 430 426 return -EINVAL; 431 427 } 432 428 ··· 512 504 513 505 if (!sbp[0]) { 514 506 if (!sbp[1]) { 515 - printk(KERN_ERR "NILFS: unable to read superblock\n"); 507 + nilfs_msg(sb, KERN_ERR, "unable to read superblock"); 516 508 return -EIO; 517 509 } 518 - printk(KERN_WARNING 519 - "NILFS warning: unable to read primary superblock " 520 - 
"(blocksize = %d)\n", blocksize); 510 + nilfs_msg(sb, KERN_WARNING, 511 + "unable to read primary superblock (blocksize = %d)", 512 + blocksize); 521 513 } else if (!sbp[1]) { 522 - printk(KERN_WARNING 523 - "NILFS warning: unable to read secondary superblock " 524 - "(blocksize = %d)\n", blocksize); 514 + nilfs_msg(sb, KERN_WARNING, 515 + "unable to read secondary superblock (blocksize = %d)", 516 + blocksize); 525 517 } 526 518 527 519 /* ··· 543 535 } 544 536 if (!valid[swp]) { 545 537 nilfs_release_super_block(nilfs); 546 - printk(KERN_ERR "NILFS: Can't find nilfs on dev %s.\n", 547 - sb->s_id); 538 + nilfs_msg(sb, KERN_ERR, "couldn't find nilfs on the device"); 548 539 return -EINVAL; 549 540 } 550 541 551 542 if (!valid[!swp]) 552 - printk(KERN_WARNING "NILFS warning: broken superblock. " 553 - "using spare superblock (blocksize = %d).\n", blocksize); 543 + nilfs_msg(sb, KERN_WARNING, 544 + "broken superblock, retrying with spare superblock (blocksize = %d)", 545 + blocksize); 554 546 if (swp) 555 547 nilfs_swap_super_block(nilfs); 556 548 ··· 584 576 585 577 blocksize = sb_min_blocksize(sb, NILFS_MIN_BLOCK_SIZE); 586 578 if (!blocksize) { 587 - printk(KERN_ERR "NILFS: unable to set blocksize\n"); 579 + nilfs_msg(sb, KERN_ERR, "unable to set blocksize"); 588 580 err = -EINVAL; 589 581 goto out; 590 582 } ··· 603 595 blocksize = BLOCK_SIZE << le32_to_cpu(sbp->s_log_block_size); 604 596 if (blocksize < NILFS_MIN_BLOCK_SIZE || 605 597 blocksize > NILFS_MAX_BLOCK_SIZE) { 606 - printk(KERN_ERR "NILFS: couldn't mount because of unsupported " 607 - "filesystem blocksize %d\n", blocksize); 598 + nilfs_msg(sb, KERN_ERR, 599 + "couldn't mount because of unsupported filesystem blocksize %d", 600 + blocksize); 608 601 err = -EINVAL; 609 602 goto failed_sbh; 610 603 } ··· 613 604 int hw_blocksize = bdev_logical_block_size(sb->s_bdev); 614 605 615 606 if (blocksize < hw_blocksize) { 616 - printk(KERN_ERR 617 - "NILFS: blocksize %d too small for device " 618 - "(sector-size 
= %d).\n", 619 - blocksize, hw_blocksize); 607 + nilfs_msg(sb, KERN_ERR, 608 + "blocksize %d too small for device (sector-size = %d)", 609 + blocksize, hw_blocksize); 620 610 err = -EINVAL; 621 611 goto failed_sbh; 622 612 }
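The fs/nilfs2/the_nilfs.c hunks also upgrade the superblock sanity checks to report the offending value ("too short segment: %lu blocks" rather than a bare "too short segment"). A compact sketch of the validation logic itself, as plain userspace C; the two constants mirror NILFS_MIN_INODE_SIZE and NILFS_SEG_MIN_BLOCKS and are assumptions here, not quoted from the diff:

```c
#include <errno.h>

#define MIN_INODE_SIZE	128	/* assumed NILFS_MIN_INODE_SIZE */
#define SEG_MIN_BLOCKS	16	/* assumed NILFS_SEG_MIN_BLOCKS */

/* Illustrative equivalent of the superblock field checks above:
 * every rejected field maps to one of the nilfs_msg() errors in the
 * commit. Returns 0 if the fields are plausible, -EINVAL otherwise. */
static int check_sb_fields(int inode_size, int blocksize,
			   unsigned long blocks_per_segment,
			   unsigned long r_segments_percentage)
{
	if (inode_size > blocksize)
		return -EINVAL;	/* "too large inode size" */
	if (inode_size < MIN_INODE_SIZE)
		return -EINVAL;	/* "too small inode size" */
	if (blocks_per_segment < SEG_MIN_BLOCKS)
		return -EINVAL;	/* "too short segment" */
	if (r_segments_percentage < 1 || r_segments_percentage > 99)
		return -EINVAL;	/* "invalid reserved segments percentage" */
	return 0;
}
```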
+5 -6
fs/nilfs2/the_nilfs.h
··· 43 43 * struct the_nilfs - struct to supervise multiple nilfs mount points 44 44 * @ns_flags: flags 45 45 * @ns_flushed_device: flag indicating if all volatile data was flushed 46 + * @ns_sb: back pointer to super block instance 46 47 * @ns_bdev: block device 47 48 * @ns_sem: semaphore for shared states 48 49 * @ns_snapshot_mount_mutex: mutex to protect snapshot mounts ··· 103 102 unsigned long ns_flags; 104 103 int ns_flushed_device; 105 104 105 + struct super_block *ns_sb; 106 106 struct block_device *ns_bdev; 107 107 struct rw_semaphore ns_sem; 108 108 struct mutex ns_snapshot_mount_mutex; ··· 122 120 unsigned int ns_sb_update_freq; 123 121 124 122 /* 125 - * Following fields are dedicated to a writable FS-instance. 126 - * Except for the period seeking checkpoint, code outside the segment 127 - * constructor must lock a segment semaphore while accessing these 128 - * fields. 129 - * The writable FS-instance is sole during a lifetime of the_nilfs. 123 + * The following fields are updated by a writable FS-instance. 124 + * These fields are protected by ns_segctor_sem outside load_nilfs(). 130 125 */ 131 126 u64 ns_seg_seq; 132 127 __u64 ns_segnum; ··· 280 281 } 281 282 282 283 void nilfs_set_last_segment(struct the_nilfs *, sector_t, u64, __u64); 283 - struct the_nilfs *alloc_nilfs(struct block_device *bdev); 284 + struct the_nilfs *alloc_nilfs(struct super_block *sb); 284 285 void destroy_nilfs(struct the_nilfs *nilfs); 285 286 int init_nilfs(struct the_nilfs *nilfs, struct super_block *sb, char *data); 286 287 int load_nilfs(struct the_nilfs *nilfs, struct super_block *sb);
+37
fs/ocfs2/alloc.c
··· 6106 6106 } 6107 6107 } 6108 6108 6109 + /* 6110 + * Try to flush truncate logs if we can free enough clusters from it. 6111 + * As for return value, "< 0" means error, "0" no space and "1" means 6112 + * we have freed enough spaces and let the caller try to allocate again. 6113 + */ 6114 + int ocfs2_try_to_free_truncate_log(struct ocfs2_super *osb, 6115 + unsigned int needed) 6116 + { 6117 + tid_t target; 6118 + int ret = 0; 6119 + unsigned int truncated_clusters; 6120 + 6121 + inode_lock(osb->osb_tl_inode); 6122 + truncated_clusters = osb->truncated_clusters; 6123 + inode_unlock(osb->osb_tl_inode); 6124 + 6125 + /* 6126 + * Check whether we can succeed in allocating if we free 6127 + * the truncate log. 6128 + */ 6129 + if (truncated_clusters < needed) 6130 + goto out; 6131 + 6132 + ret = ocfs2_flush_truncate_log(osb); 6133 + if (ret) { 6134 + mlog_errno(ret); 6135 + goto out; 6136 + } 6137 + 6138 + if (jbd2_journal_start_commit(osb->journal->j_journal, &target)) { 6139 + jbd2_log_wait_commit(osb->journal->j_journal, target); 6140 + ret = 1; 6141 + } 6142 + out: 6143 + return ret; 6144 + } 6145 + 6109 6146 static int ocfs2_get_truncate_log_info(struct ocfs2_super *osb, 6110 6147 int slot_num, 6111 6148 struct inode **tl_inode,
+2
fs/ocfs2/alloc.h
··· 188 188 u64 start_blk, 189 189 unsigned int num_clusters); 190 190 int __ocfs2_flush_truncate_log(struct ocfs2_super *osb); 191 + int ocfs2_try_to_free_truncate_log(struct ocfs2_super *osb, 192 + unsigned int needed); 191 193 192 194 /* 193 195 * Process local structure which describes the block unlinks done
-37
fs/ocfs2/aops.c
··· 1645 1645 return ret; 1646 1646 } 1647 1647 1648 - /* 1649 - * Try to flush truncate logs if we can free enough clusters from it. 1650 - * As for return value, "< 0" means error, "0" no space and "1" means 1651 - * we have freed enough spaces and let the caller try to allocate again. 1652 - */ 1653 - static int ocfs2_try_to_free_truncate_log(struct ocfs2_super *osb, 1654 - unsigned int needed) 1655 - { 1656 - tid_t target; 1657 - int ret = 0; 1658 - unsigned int truncated_clusters; 1659 - 1660 - inode_lock(osb->osb_tl_inode); 1661 - truncated_clusters = osb->truncated_clusters; 1662 - inode_unlock(osb->osb_tl_inode); 1663 - 1664 - /* 1665 - * Check whether we can succeed in allocating if we free 1666 - * the truncate log. 1667 - */ 1668 - if (truncated_clusters < needed) 1669 - goto out; 1670 - 1671 - ret = ocfs2_flush_truncate_log(osb); 1672 - if (ret) { 1673 - mlog_errno(ret); 1674 - goto out; 1675 - } 1676 - 1677 - if (jbd2_journal_start_commit(osb->journal->j_journal, &target)) { 1678 - jbd2_log_wait_commit(osb->journal->j_journal, target); 1679 - ret = 1; 1680 - } 1681 - out: 1682 - return ret; 1683 - } 1684 - 1685 1648 int ocfs2_write_begin_nolock(struct address_space *mapping, 1686 1649 loff_t pos, unsigned len, ocfs2_write_type_t type, 1687 1650 struct page **pagep, void **fsdata,
+2
fs/ocfs2/dlm/dlmcommon.h
··· 1004 1004 int dlm_do_master_requery(struct dlm_ctxt *dlm, struct dlm_lock_resource *res, 1005 1005 u8 nodenum, u8 *real_master); 1006 1006 1007 + void __dlm_do_purge_lockres(struct dlm_ctxt *dlm, 1008 + struct dlm_lock_resource *res); 1007 1009 1008 1010 int dlm_dispatch_assert_master(struct dlm_ctxt *dlm, 1009 1011 struct dlm_lock_resource *res,
+17 -36
fs/ocfs2/dlm/dlmmaster.c
··· 2276 2276 mlog(ML_ERROR, "%s: res %.*s, DEREF to node %u got %d\n", 2277 2277 dlm->name, namelen, lockname, res->owner, r); 2278 2278 dlm_print_one_lock_resource(res); 2279 - BUG(); 2280 - } 2281 - return ret ? ret : r; 2279 + if (r == -ENOMEM) 2280 + BUG(); 2281 + } else 2282 + ret = r; 2283 + 2284 + return ret; 2282 2285 } 2283 2286 2284 2287 int dlm_deref_lockres_handler(struct o2net_msg *msg, u32 len, void *data, ··· 2419 2416 } 2420 2417 2421 2418 spin_lock(&res->spinlock); 2422 - BUG_ON(!(res->state & DLM_LOCK_RES_DROPPING_REF)); 2423 - if (!list_empty(&res->purge)) { 2424 - mlog(0, "%s: Removing res %.*s from purgelist\n", 2425 - dlm->name, res->lockname.len, res->lockname.name); 2426 - list_del_init(&res->purge); 2427 - dlm_lockres_put(res); 2428 - dlm->purge_count--; 2419 + if (!(res->state & DLM_LOCK_RES_DROPPING_REF)) { 2420 + spin_unlock(&res->spinlock); 2421 + spin_unlock(&dlm->spinlock); 2422 + mlog(ML_NOTICE, "%s:%.*s: node %u sends deref done " 2423 + "but it is already derefed!\n", dlm->name, 2424 + res->lockname.len, res->lockname.name, node); 2425 + ret = 0; 2426 + goto done; 2429 2427 } 2430 2428 2431 - if (!__dlm_lockres_unused(res)) { 2432 - mlog(ML_ERROR, "%s: res %.*s in use after deref\n", 2433 - dlm->name, res->lockname.len, res->lockname.name); 2434 - __dlm_print_one_lock_resource(res); 2435 - BUG(); 2436 - } 2437 - 2438 - __dlm_unhash_lockres(dlm, res); 2439 - 2440 - spin_lock(&dlm->track_lock); 2441 - if (!list_empty(&res->tracking)) 2442 - list_del_init(&res->tracking); 2443 - else { 2444 - mlog(ML_ERROR, "%s: Resource %.*s not on the Tracking list\n", 2445 - dlm->name, res->lockname.len, res->lockname.name); 2446 - __dlm_print_one_lock_resource(res); 2447 - } 2448 - spin_unlock(&dlm->track_lock); 2449 - 2450 - /* lockres is not in the hash now. drop the flag and wake up 2451 - * any processes waiting in dlm_get_lock_resource. 
2452 - */ 2453 - res->state &= ~DLM_LOCK_RES_DROPPING_REF; 2429 + __dlm_do_purge_lockres(dlm, res); 2454 2430 spin_unlock(&res->spinlock); 2455 2431 wake_up(&res->wq); 2456 - 2457 - dlm_lockres_put(res); 2458 2432 2459 2433 spin_unlock(&dlm->spinlock); 2460 2434 2461 2435 ret = 0; 2462 - 2463 2436 done: 2437 + if (res) 2438 + dlm_lockres_put(res); 2464 2439 dlm_put(dlm); 2465 2440 return ret; 2466 2441 }
+21 -8
fs/ocfs2/dlm/dlmrecovery.c
··· 2343 2343 struct dlm_lock_resource *res; 2344 2344 int i; 2345 2345 struct hlist_head *bucket; 2346 + struct hlist_node *tmp; 2346 2347 struct dlm_lock *lock; 2347 2348 2348 2349 ··· 2366 2365 */ 2367 2366 for (i = 0; i < DLM_HASH_BUCKETS; i++) { 2368 2367 bucket = dlm_lockres_hash(dlm, i); 2369 - hlist_for_each_entry(res, bucket, hash_node) { 2368 + hlist_for_each_entry_safe(res, tmp, bucket, hash_node) { 2370 2369 /* always prune any $RECOVERY entries for dead nodes, 2371 2370 * otherwise hangs can occur during later recovery */ 2372 2371 if (dlm_is_recovery_lock(res->lockname.name, ··· 2387 2386 break; 2388 2387 } 2389 2388 } 2390 - dlm_lockres_clear_refmap_bit(dlm, res, 2391 - dead_node); 2389 + 2390 + if ((res->owner == dead_node) && 2391 + (res->state & DLM_LOCK_RES_DROPPING_REF)) { 2392 + dlm_lockres_get(res); 2393 + __dlm_do_purge_lockres(dlm, res); 2394 + spin_unlock(&res->spinlock); 2395 + wake_up(&res->wq); 2396 + dlm_lockres_put(res); 2397 + continue; 2398 + } else if (res->owner == dlm->node_num) 2399 + dlm_lockres_clear_refmap_bit(dlm, res, dead_node); 2392 2400 spin_unlock(&res->spinlock); 2393 2401 continue; 2394 2402 } ··· 2408 2398 if (res->state & DLM_LOCK_RES_DROPPING_REF) { 2409 2399 mlog(0, "%s:%.*s: owned by " 2410 2400 "dead node %u, this node was " 2411 - "dropping its ref when it died. " 2412 - "continue, dropping the flag.\n", 2401 + "dropping its ref when master died. " 2402 + "continue, purging the lockres.\n", 2413 2403 dlm->name, res->lockname.len, 2414 2404 res->lockname.name, dead_node); 2405 + dlm_lockres_get(res); 2406 + __dlm_do_purge_lockres(dlm, res); 2407 + spin_unlock(&res->spinlock); 2408 + wake_up(&res->wq); 2409 + dlm_lockres_put(res); 2410 + continue; 2415 2411 } 2416 - res->state &= ~DLM_LOCK_RES_DROPPING_REF; 2417 - dlm_move_lockres_to_recovery_list(dlm, 2418 - res); 2419 2412 } else if (res->owner == dlm->node_num) { 2420 2413 dlm_free_dead_locks(dlm, res, dead_node); 2421 2414 __dlm_lockres_calc_usage(dlm, res);
+55 -2
fs/ocfs2/dlm/dlmthread.c
··· 160 160 spin_unlock(&dlm->spinlock); 161 161 } 162 162 163 + /* 164 + * Do the real purge work: 165 + * unhash the lockres, and 166 + * clear flag DLM_LOCK_RES_DROPPING_REF. 167 + * It requires dlm and lockres spinlock to be taken. 168 + */ 169 + void __dlm_do_purge_lockres(struct dlm_ctxt *dlm, 170 + struct dlm_lock_resource *res) 171 + { 172 + assert_spin_locked(&dlm->spinlock); 173 + assert_spin_locked(&res->spinlock); 174 + 175 + if (!list_empty(&res->purge)) { 176 + mlog(0, "%s: Removing res %.*s from purgelist\n", 177 + dlm->name, res->lockname.len, res->lockname.name); 178 + list_del_init(&res->purge); 179 + dlm_lockres_put(res); 180 + dlm->purge_count--; 181 + } 182 + 183 + if (!__dlm_lockres_unused(res)) { 184 + mlog(ML_ERROR, "%s: res %.*s in use after deref\n", 185 + dlm->name, res->lockname.len, res->lockname.name); 186 + __dlm_print_one_lock_resource(res); 187 + BUG(); 188 + } 189 + 190 + __dlm_unhash_lockres(dlm, res); 191 + 192 + spin_lock(&dlm->track_lock); 193 + if (!list_empty(&res->tracking)) 194 + list_del_init(&res->tracking); 195 + else { 196 + mlog(ML_ERROR, "%s: Resource %.*s not on the Tracking list\n", 197 + dlm->name, res->lockname.len, res->lockname.name); 198 + __dlm_print_one_lock_resource(res); 199 + } 200 + spin_unlock(&dlm->track_lock); 201 + 202 + /* 203 + * lockres is not in the hash now. drop the flag and wake up 204 + * any processes waiting in dlm_get_lock_resource. 
205 + */ 206 + res->state &= ~DLM_LOCK_RES_DROPPING_REF; 207 + } 208 + 163 209 static void dlm_purge_lockres(struct dlm_ctxt *dlm, 164 210 struct dlm_lock_resource *res) 165 211 { ··· 221 175 res->lockname.len, res->lockname.name, master); 222 176 223 177 if (!master) { 178 + if (res->state & DLM_LOCK_RES_DROPPING_REF) { 179 + mlog(ML_NOTICE, "%s: res %.*s already in DLM_LOCK_RES_DROPPING_REF state\n", 180 + dlm->name, res->lockname.len, res->lockname.name); 181 + spin_unlock(&res->spinlock); 182 + return; 183 + } 184 + 224 185 res->state |= DLM_LOCK_RES_DROPPING_REF; 225 186 /* drop spinlock... retake below */ 226 187 spin_unlock(&res->spinlock); ··· 256 203 dlm->purge_count--; 257 204 } 258 205 259 - if (!master && ret != 0) { 260 - mlog(0, "%s: deref %.*s in progress or master goes down\n", 206 + if (!master && ret == DLM_DEREF_RESPONSE_INPROG) { 207 + mlog(0, "%s: deref %.*s in progress\n", 261 208 dlm->name, res->lockname.len, res->lockname.name); 262 209 spin_unlock(&res->spinlock); 263 210 return;
+9 -2
fs/ocfs2/stack_user.c
··· 1007 1007 lc->oc_type = NO_CONTROLD; 1008 1008 1009 1009 rc = dlm_new_lockspace(conn->cc_name, conn->cc_cluster_name, 1010 - DLM_LSFL_FS, DLM_LVB_LEN, 1010 + DLM_LSFL_FS | DLM_LSFL_NEWEXCL, DLM_LVB_LEN, 1011 1011 &ocfs2_ls_ops, conn, &ops_rv, &fsdlm); 1012 - if (rc) 1012 + if (rc) { 1013 + if (rc == -EEXIST || rc == -EPROTO) 1014 + printk(KERN_ERR "ocfs2: Unable to create the " 1015 + "lockspace %s (%d), because a ocfs2-tools " 1016 + "program is running on this file system " 1017 + "with the same name lockspace\n", 1018 + conn->cc_name, rc); 1013 1019 goto out; 1020 + } 1014 1021 1015 1022 if (ops_rv == -EOPNOTSUPP) { 1016 1023 lc->oc_type = WITH_CONTROLD;
+19 -1
fs/ocfs2/suballoc.c
··· 1164 1164 int flags, 1165 1165 struct ocfs2_alloc_context **ac) 1166 1166 { 1167 - int status; 1167 + int status, ret = 0; 1168 + int retried = 0; 1168 1169 1169 1170 *ac = kzalloc(sizeof(struct ocfs2_alloc_context), GFP_KERNEL); 1170 1171 if (!(*ac)) { ··· 1190 1189 } 1191 1190 1192 1191 if (status == -ENOSPC) { 1192 + retry: 1193 1193 status = ocfs2_reserve_cluster_bitmap_bits(osb, *ac); 1194 + /* Retry if there is sufficient space cached in truncate log */ 1195 + if (status == -ENOSPC && !retried) { 1196 + retried = 1; 1197 + ocfs2_inode_unlock((*ac)->ac_inode, 1); 1198 + inode_unlock((*ac)->ac_inode); 1199 + 1200 + ret = ocfs2_try_to_free_truncate_log(osb, bits_wanted); 1201 + if (ret == 1) 1202 + goto retry; 1203 + 1204 + if (ret < 0) 1205 + mlog_errno(ret); 1206 + 1207 + inode_lock((*ac)->ac_inode); 1208 + ocfs2_inode_lock((*ac)->ac_inode, NULL, 1); 1209 + } 1194 1210 if (status < 0) { 1195 1211 if (status != -ENOSPC) 1196 1212 mlog_errno(status);
+1
fs/proc/Makefile
··· 4 4 5 5 obj-y += proc.o 6 6 7 + CFLAGS_task_mmu.o += -Wno-override-init 7 8 proc-y := nommu.o task_nommu.o 8 9 proc-$(CONFIG_MMU) := task_mmu.o 9 10
+2 -5
fs/proc/base.c
··· 579 579 unsigned long totalpages = totalram_pages + total_swap_pages; 580 580 unsigned long points = 0; 581 581 582 - read_lock(&tasklist_lock); 583 - if (pid_alive(task)) 584 - points = oom_badness(task, NULL, NULL, totalpages) * 585 - 1000 / totalpages; 586 - read_unlock(&tasklist_lock); 582 + points = oom_badness(task, NULL, NULL, totalpages) * 583 + 1000 / totalpages; 587 584 seq_printf(m, "%lu\n", points); 588 585 589 586 return 0;
+4 -6
fs/proc/stat.c
··· 80 80 static int show_stat(struct seq_file *p, void *v) 81 81 { 82 82 int i, j; 83 - unsigned long jif; 84 83 u64 user, nice, system, idle, iowait, irq, softirq, steal; 85 84 u64 guest, guest_nice; 86 85 u64 sum = 0; 87 86 u64 sum_softirq = 0; 88 87 unsigned int per_softirq_sums[NR_SOFTIRQS] = {0}; 89 - struct timespec boottime; 88 + struct timespec64 boottime; 90 89 91 90 user = nice = system = idle = iowait = 92 91 irq = softirq = steal = 0; 93 92 guest = guest_nice = 0; 94 - getboottime(&boottime); 95 - jif = boottime.tv_sec; 93 + getboottime64(&boottime); 96 94 97 95 for_each_possible_cpu(i) { 98 96 user += kcpustat_cpu(i).cpustat[CPUTIME_USER]; ··· 161 163 162 164 seq_printf(p, 163 165 "\nctxt %llu\n" 164 - "btime %lu\n" 166 + "btime %llu\n" 165 167 "processes %lu\n" 166 168 "procs_running %lu\n" 167 169 "procs_blocked %lu\n", 168 170 nr_context_switches(), 169 - (unsigned long)jif, 171 + (unsigned long long)boottime.tv_sec, 170 172 total_forks, 171 173 nr_running(), 172 174 nr_iowait());
+2 -1
fs/reiserfs/ibalance.c
··· 1153 1153 insert_ptr); 1154 1154 } 1155 1155 1156 - memcpy(new_insert_key_addr, &new_insert_key, KEY_SIZE); 1157 1156 insert_ptr[0] = new_insert_ptr; 1157 + if (new_insert_ptr) 1158 + memcpy(new_insert_key_addr, &new_insert_key, KEY_SIZE); 1158 1159 1159 1160 return order; 1160 1161 }
+1 -1
include/acpi/acpi_io.h
··· 13 13 } 14 14 #endif 15 15 16 - void __iomem *__init_refok 16 + void __iomem *__ref 17 17 acpi_os_map_iomem(acpi_physical_address phys, acpi_size size); 18 18 void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size); 19 19 void __iomem *acpi_os_get_iomem(acpi_physical_address phys, unsigned int size);
+1
include/linux/capability.h
··· 38 38 struct file; 39 39 struct inode; 40 40 struct dentry; 41 + struct task_struct; 41 42 struct user_namespace; 42 43 43 44 extern const kernel_cap_t __cap_empty_set;
+1 -1
include/linux/cpumask.h
··· 579 579 } 580 580 581 581 /** 582 - * cpumask_parse - extract a cpumask from from a string 582 + * cpumask_parse - extract a cpumask from a string 583 583 * @buf: the buffer to extract from 584 584 * @dstp: the cpumask to set. 585 585 *
+8
include/linux/firmware.h
··· 47 47 void (*cont)(const struct firmware *fw, void *context)); 48 48 int request_firmware_direct(const struct firmware **fw, const char *name, 49 49 struct device *device); 50 + int request_firmware_into_buf(const struct firmware **firmware_p, 51 + const char *name, struct device *device, void *buf, size_t size); 50 52 51 53 void release_firmware(const struct firmware *fw); 52 54 #else ··· 73 71 static inline int request_firmware_direct(const struct firmware **fw, 74 72 const char *name, 75 73 struct device *device) 74 + { 75 + return -EINVAL; 76 + } 77 + 78 + static inline int request_firmware_into_buf(const struct firmware **firmware_p, 79 + const char *name, struct device *device, void *buf, size_t size) 76 80 { 77 81 return -EINVAL; 78 82 }
+1
include/linux/fs.h
··· 2652 2652 #define __kernel_read_file_id(id) \ 2653 2653 id(UNKNOWN, unknown) \ 2654 2654 id(FIRMWARE, firmware) \ 2655 + id(FIRMWARE_PREALLOC_BUFFER, firmware) \ 2655 2656 id(MODULE, kernel-module) \ 2656 2657 id(KEXEC_IMAGE, kexec-image) \ 2657 2658 id(KEXEC_INITRAMFS, kexec-initramfs) \
-6
include/linux/init.h
··· 77 77 #define __refdata __section(.ref.data) 78 78 #define __refconst __constsection(.ref.rodata) 79 79 80 - /* compatibility defines */ 81 - #define __init_refok __ref 82 - #define __initdata_refok __refdata 83 - #define __exit_refok __ref 84 - 85 - 86 80 #ifdef MODULE 87 81 #define __exitused 88 82 #else
-2
include/linux/ipc_namespace.h
··· 63 63 }; 64 64 65 65 extern struct ipc_namespace init_ipc_ns; 66 - extern atomic_t nr_ipc_ns; 67 - 68 66 extern spinlock_t mq_lock; 69 67 70 68 #ifdef CONFIG_SYSVIPC
+3
include/linux/kasan.h
··· 56 56 void kasan_poison_slab(struct page *page); 57 57 void kasan_unpoison_object_data(struct kmem_cache *cache, void *object); 58 58 void kasan_poison_object_data(struct kmem_cache *cache, void *object); 59 + void kasan_init_slab_obj(struct kmem_cache *cache, const void *object); 59 60 60 61 void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags); 61 62 void kasan_kfree_large(const void *ptr); ··· 103 102 void *object) {} 104 103 static inline void kasan_poison_object_data(struct kmem_cache *cache, 105 104 void *object) {} 105 + static inline void kasan_init_slab_obj(struct kmem_cache *cache, 106 + const void *object) {} 106 107 107 108 static inline void kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) {} 108 109 static inline void kasan_kfree_large(const void *ptr) {}
-1
include/linux/kernel.h
··· 11 11 #include <linux/log2.h> 12 12 #include <linux/typecheck.h> 13 13 #include <linux/printk.h> 14 - #include <linux/dynamic_debug.h> 15 14 #include <asm/byteorder.h> 16 15 #include <uapi/linux/kernel.h> 17 16
+44 -2
include/linux/kexec.h
··· 14 14 15 15 #if !defined(__ASSEMBLY__) 16 16 17 + #include <asm/io.h> 18 + 17 19 #include <uapi/linux/kexec.h> 18 20 19 21 #ifdef CONFIG_KEXEC_CORE ··· 43 41 #endif 44 42 45 43 #ifndef KEXEC_CONTROL_MEMORY_GFP 46 - #define KEXEC_CONTROL_MEMORY_GFP GFP_KERNEL 44 + #define KEXEC_CONTROL_MEMORY_GFP (GFP_KERNEL | __GFP_NORETRY) 47 45 #endif 48 46 49 47 #ifndef KEXEC_CONTROL_PAGE_SIZE ··· 230 228 extern void __crash_kexec(struct pt_regs *); 231 229 extern void crash_kexec(struct pt_regs *); 232 230 int kexec_should_crash(struct task_struct *); 231 + int kexec_crash_loaded(void); 233 232 void crash_save_cpu(struct pt_regs *regs, int cpu); 234 233 void crash_save_vmcoreinfo(void); 235 234 void arch_crash_save_vmcoreinfo(void); 236 235 __printf(1, 2) 237 236 void vmcoreinfo_append_str(const char *fmt, ...); 238 - unsigned long paddr_vmcoreinfo_note(void); 237 + phys_addr_t paddr_vmcoreinfo_note(void); 239 238 240 239 #define VMCOREINFO_OSRELEASE(value) \ 241 240 vmcoreinfo_append_str("OSRELEASE=%s\n", value) ··· 321 318 void arch_kexec_protect_crashkres(void); 322 319 void arch_kexec_unprotect_crashkres(void); 323 320 321 + #ifndef page_to_boot_pfn 322 + static inline unsigned long page_to_boot_pfn(struct page *page) 323 + { 324 + return page_to_pfn(page); 325 + } 326 + #endif 327 + 328 + #ifndef boot_pfn_to_page 329 + static inline struct page *boot_pfn_to_page(unsigned long boot_pfn) 330 + { 331 + return pfn_to_page(boot_pfn); 332 + } 333 + #endif 334 + 335 + #ifndef phys_to_boot_phys 336 + static inline unsigned long phys_to_boot_phys(phys_addr_t phys) 337 + { 338 + return phys; 339 + } 340 + #endif 341 + 342 + #ifndef boot_phys_to_phys 343 + static inline phys_addr_t boot_phys_to_phys(unsigned long boot_phys) 344 + { 345 + return boot_phys; 346 + } 347 + #endif 348 + 349 + static inline unsigned long virt_to_boot_phys(void *addr) 350 + { 351 + return phys_to_boot_phys(__pa((unsigned long)addr)); 352 + } 353 +
354 + static inline void *boot_phys_to_virt(unsigned long entry) 355 + { 356 + return phys_to_virt(boot_phys_to_phys(entry)); 357 + } 358 + 324 359 #else /* !CONFIG_KEXEC_CORE */ 325 360 struct pt_regs; 326 361 struct task_struct; 327 362 static inline void __crash_kexec(struct pt_regs *regs) { } 328 363 static inline void crash_kexec(struct pt_regs *regs) { } 329 364 static inline int kexec_should_crash(struct task_struct *p) { return 0; } 365 + static inline int kexec_crash_loaded(void) { return 0; } 330 366 #define kexec_in_progress false 331 367 #endif /* CONFIG_KEXEC_CORE */ 332 368
+1 -1
include/linux/mman.h
··· 49 49 * 50 50 * Returns true if the prot flags are valid 51 51 */ 52 - static inline int arch_validate_prot(unsigned long prot) 52 + static inline bool arch_validate_prot(unsigned long prot) 53 53 { 54 54 return (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM)) == 0; 55 55 }
+22 -306
include/linux/nilfs2_fs.h include/uapi/linux/nilfs2_ondisk.h
··· 1 1 /* 2 - * nilfs2_fs.h - NILFS2 on-disk structures and common declarations. 2 + * nilfs2_ondisk.h - NILFS2 on-disk structures 3 3 * 4 4 * Copyright (C) 2005-2008 Nippon Telegraph and Telephone Corporation. 5 5 * ··· 7 7 * it under the terms of the GNU Lesser General Public License as published 8 8 * by the Free Software Foundation; either version 2.1 of the License, or 9 9 * (at your option) any later version. 10 - * 11 - * This program is distributed in the hope that it will be useful, 12 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 - * GNU Lesser General Public License for more details. 15 - * 16 - * Written by Koji Sato and Ryusuke Konishi. 17 10 */ 18 11 /* 19 12 * linux/include/linux/ext2_fs.h ··· 23 30 * Copyright (C) 1991, 1992 Linus Torvalds 24 31 */ 25 32 26 - #ifndef _LINUX_NILFS_FS_H 27 - #define _LINUX_NILFS_FS_H 33 + #ifndef _LINUX_NILFS2_ONDISK_H 34 + #define _LINUX_NILFS2_ONDISK_H 28 35 29 36 #include <linux/types.h> 30 - #include <linux/ioctl.h> 31 37 #include <linux/magic.h> 32 - #include <linux/bug.h> 33 38 34 39 35 40 #define NILFS_INODE_BMAP_SIZE 7 41 + 36 42 /** 37 43 * struct nilfs_inode - structure of an inode on disk 38 44 * @i_blocks: blocks count ··· 48 56 * @i_bmap: block mapping 49 57 * @i_xattr: extended attributes 50 58 * @i_generation: file generation (for NFS) 51 - * @i_pad: padding 59 + * @i_pad: padding 52 60 */ 53 61 struct nilfs_inode { 54 62 __le64 i_blocks; ··· 330 338 #define NILFS_DIR_ROUND (NILFS_DIR_PAD - 1) 331 339 #define NILFS_DIR_REC_LEN(name_len) (((name_len) + 12 + NILFS_DIR_ROUND) & \ 332 340 ~NILFS_DIR_ROUND) 333 - #define NILFS_MAX_REC_LEN ((1<<16)-1) 334 - 335 - static inline unsigned int nilfs_rec_len_from_disk(__le16 dlen) 336 - { 337 - unsigned int len = le16_to_cpu(dlen); 338 - 339 - #if !defined(__KERNEL__) || (PAGE_SIZE >= 65536) 340 - if (len == NILFS_MAX_REC_LEN) 341 - return 1 << 16; 342 - #endif
343 - return len; 344 - } 345 - 346 - static inline __le16 nilfs_rec_len_to_disk(unsigned int len) 347 - { 348 - #if !defined(__KERNEL__) || (PAGE_SIZE >= 65536) 349 - if (len == (1 << 16)) 350 - return cpu_to_le16(NILFS_MAX_REC_LEN); 351 - else if (len > (1 << 16)) 352 - BUG(); 353 - #endif 354 - return cpu_to_le16(len); 355 - } 341 + #define NILFS_MAX_REC_LEN ((1 << 16) - 1) 356 342 357 343 /** 358 344 * struct nilfs_finfo - file information ··· 344 374 __le64 fi_cno; 345 375 __le32 fi_nblocks; 346 376 __le32 fi_ndatablk; 347 - /* array of virtual block numbers */ 348 377 }; 349 378 350 379 /** 351 - * struct nilfs_binfo_v - information for the block to which a virtual block number is assigned 380 + * struct nilfs_binfo_v - information on a data block (except DAT) 352 381 * @bi_vblocknr: virtual block number 353 382 * @bi_blkoff: block offset 354 383 */ ··· 357 388 }; 358 389 359 390 /** 360 - * struct nilfs_binfo_dat - information for the block which belongs to the DAT file 391 + * struct nilfs_binfo_dat - information on a DAT node block 361 392 * @bi_blkoff: block offset 362 393 * @bi_level: level 363 394 * @bi_pad: padding ··· 423 454 #define NILFS_SS_GC 0x0010 /* segment written for cleaner operation */ 424 455 425 456 /** 426 - * struct nilfs_btree_node - B-tree node 457 + * struct nilfs_btree_node - header of B-tree node block 427 458 * @bn_flags: flags 428 459 * @bn_level: level 429 460 * @bn_nchildren: number of children ··· 443 474 #define NILFS_BTREE_LEVEL_DATA 0 444 475 #define NILFS_BTREE_LEVEL_NODE_MIN (NILFS_BTREE_LEVEL_DATA + 1) 445 476 #define NILFS_BTREE_LEVEL_MAX 14 /* Max level (exclusive) */ 477 + 478 + /** 479 + * struct nilfs_direct_node - header of built-in bmap array 480 + * @dn_flags: flags 481 + * @dn_pad: padding 482 + */ 483 + struct nilfs_direct_node { 484 + __u8 dn_flags; 485 + __u8 pad[7]; 486 + }; 446 487 447 488 /** 448 489 * struct nilfs_palloc_group_desc - block group descriptor ··· 553 574 NILFS_CHECKPOINT_FNS(MINOR, minor) 554 575
555 576 /** 556 - * struct nilfs_cpinfo - checkpoint information 557 - * @ci_flags: flags 558 - * @ci_pad: padding 559 - * @ci_cno: checkpoint number 560 - * @ci_create: creation timestamp 561 - * @ci_nblk_inc: number of blocks incremented by this checkpoint 562 - * @ci_inodes_count: inodes count 563 - * @ci_blocks_count: blocks count 564 - * @ci_next: next checkpoint number in snapshot list 565 - */ 566 - struct nilfs_cpinfo { 567 - __u32 ci_flags; 568 - __u32 ci_pad; 569 - __u64 ci_cno; 570 - __u64 ci_create; 571 - __u64 ci_nblk_inc; 572 - __u64 ci_inodes_count; 573 - __u64 ci_blocks_count; 574 - __u64 ci_next; 575 - }; 576 - 577 - #define NILFS_CPINFO_FNS(flag, name) \ 578 - static inline int \ 579 - nilfs_cpinfo_##name(const struct nilfs_cpinfo *cpinfo) \ 580 - { \ 581 - return !!(cpinfo->ci_flags & (1UL << NILFS_CHECKPOINT_##flag)); \ 582 - } 583 - 584 - NILFS_CPINFO_FNS(SNAPSHOT, snapshot) 585 - NILFS_CPINFO_FNS(INVALID, invalid) 586 - NILFS_CPINFO_FNS(MINOR, minor) 587 - 588 - 589 - /** 590 577 * struct nilfs_cpfile_header - checkpoint file header 591 578 * @ch_ncheckpoints: number of checkpoints 592 579 * @ch_nsnapshots: number of snapshots ··· 564 619 struct nilfs_snapshot_list ch_snapshot_list; 565 620 }; 566 621 567 - #define NILFS_CPFILE_FIRST_CHECKPOINT_OFFSET \ 622 + #define NILFS_CPFILE_FIRST_CHECKPOINT_OFFSET \ 568 623 ((sizeof(struct nilfs_cpfile_header) + \ 569 624 sizeof(struct nilfs_checkpoint) - 1) / \ 570 625 sizeof(struct nilfs_checkpoint)) ··· 588 643 NILFS_SEGMENT_USAGE_ACTIVE, 589 644 NILFS_SEGMENT_USAGE_DIRTY, 590 645 NILFS_SEGMENT_USAGE_ERROR, 591 - 592 - /* ... */ 593 646 }; 594 647 595 648 #define NILFS_SEGMENT_USAGE_FNS(flag, name) \ ··· 642 699 /* ... */
643 700 }; 644 701 645 - #define NILFS_SUFILE_FIRST_SEGMENT_USAGE_OFFSET \ 702 + #define NILFS_SUFILE_FIRST_SEGMENT_USAGE_OFFSET \ 646 703 ((sizeof(struct nilfs_sufile_header) + \ 647 704 sizeof(struct nilfs_segment_usage) - 1) / \ 648 705 sizeof(struct nilfs_segment_usage)) 649 706 650 - /** 651 - * nilfs_suinfo - segment usage information 652 - * @sui_lastmod: timestamp of last modification 653 - * @sui_nblocks: number of written blocks in segment 654 - * @sui_flags: segment usage flags 655 - */ 656 - struct nilfs_suinfo { 657 - __u64 sui_lastmod; 658 - __u32 sui_nblocks; 659 - __u32 sui_flags; 660 - }; 661 - 662 - #define NILFS_SUINFO_FNS(flag, name) \ 663 - static inline int \ 664 - nilfs_suinfo_##name(const struct nilfs_suinfo *si) \ 665 - { \ 666 - return si->sui_flags & (1UL << NILFS_SEGMENT_USAGE_##flag); \ 667 - } 668 - 669 - NILFS_SUINFO_FNS(ACTIVE, active) 670 - NILFS_SUINFO_FNS(DIRTY, dirty) 671 - NILFS_SUINFO_FNS(ERROR, error) 672 - 673 - static inline int nilfs_suinfo_clean(const struct nilfs_suinfo *si) 674 - { 675 - return !si->sui_flags; 676 - } 677 - 678 - /* ioctl */ 679 - /** 680 - * nilfs_suinfo_update - segment usage information update 681 - * @sup_segnum: segment number 682 - * @sup_flags: flags for which fields are active in sup_sui 683 - * @sup_reserved: reserved necessary for alignment 684 - * @sup_sui: segment usage information 685 - */ 686 - struct nilfs_suinfo_update { 687 - __u64 sup_segnum; 688 - __u32 sup_flags; 689 - __u32 sup_reserved; 690 - struct nilfs_suinfo sup_sui; 691 - }; 692 - 693 - enum { 694 - NILFS_SUINFO_UPDATE_LASTMOD, 695 - NILFS_SUINFO_UPDATE_NBLOCKS, 696 - NILFS_SUINFO_UPDATE_FLAGS, 697 - __NR_NILFS_SUINFO_UPDATE_FIELDS, 698 - }; 699 - 700 - #define NILFS_SUINFO_UPDATE_FNS(flag, name) \ 701 - static inline void \ 702 - nilfs_suinfo_update_set_##name(struct nilfs_suinfo_update *sup) \ 703 - { \ 704 - sup->sup_flags |= 1UL << NILFS_SUINFO_UPDATE_##flag; \ 705 - } \ 706 - static inline void \
707 - nilfs_suinfo_update_clear_##name(struct nilfs_suinfo_update *sup) \ 708 - { \ 709 - sup->sup_flags &= ~(1UL << NILFS_SUINFO_UPDATE_##flag); \ 710 - } \ 711 - static inline int \ 712 - nilfs_suinfo_update_##name(const struct nilfs_suinfo_update *sup) \ 713 - { \ 714 - return !!(sup->sup_flags & (1UL << NILFS_SUINFO_UPDATE_##flag));\ 715 - } 716 - 717 - NILFS_SUINFO_UPDATE_FNS(LASTMOD, lastmod) 718 - NILFS_SUINFO_UPDATE_FNS(NBLOCKS, nblocks) 719 - NILFS_SUINFO_UPDATE_FNS(FLAGS, flags) 720 - 721 - enum { 722 - NILFS_CHECKPOINT, 723 - NILFS_SNAPSHOT, 724 - }; 725 - 726 - /** 727 - * struct nilfs_cpmode - change checkpoint mode structure 728 - * @cm_cno: checkpoint number 729 - * @cm_mode: mode of checkpoint 730 - * @cm_pad: padding 731 - */ 732 - struct nilfs_cpmode { 733 - __u64 cm_cno; 734 - __u32 cm_mode; 735 - __u32 cm_pad; 736 - }; 737 - 738 - /** 739 - * struct nilfs_argv - argument vector 740 - * @v_base: pointer on data array from userspace 741 - * @v_nmembs: number of members in data array 742 - * @v_size: size of data array in bytes 743 - * @v_flags: flags 744 - * @v_index: start number of target data items 745 - */ 746 - struct nilfs_argv { 747 - __u64 v_base; 748 - __u32 v_nmembs; /* number of members */ 749 - __u16 v_size; /* size of members */ 750 - __u16 v_flags; 751 - __u64 v_index; 752 - }; 753 - 754 - /** 755 - * struct nilfs_period - period of checkpoint numbers 756 - * @p_start: start checkpoint number (inclusive) 757 - * @p_end: end checkpoint number (exclusive) 758 - */ 759 - struct nilfs_period { 760 - __u64 p_start; 761 - __u64 p_end; 762 - }; 763 - 764 - /** 765 - * struct nilfs_cpstat - checkpoint statistics 766 - * @cs_cno: checkpoint number 767 - * @cs_ncps: number of checkpoints 768 - * @cs_nsss: number of snapshots 769 - */ 770 - struct nilfs_cpstat { 771 - __u64 cs_cno; 772 - __u64 cs_ncps; 773 - __u64 cs_nsss; 774 - }; 775 - 776 - /** 777 - * struct nilfs_sustat - segment usage statistics 778 - * @ss_nsegs: number of segments
779 - * @ss_ncleansegs: number of clean segments 780 - * @ss_ndirtysegs: number of dirty segments 781 - * @ss_ctime: creation time of the last segment 782 - * @ss_nongc_ctime: creation time of the last segment not for GC 783 - * @ss_prot_seq: least sequence number of segments which must not be reclaimed 784 - */ 785 - struct nilfs_sustat { 786 - __u64 ss_nsegs; 787 - __u64 ss_ncleansegs; 788 - __u64 ss_ndirtysegs; 789 - __u64 ss_ctime; 790 - __u64 ss_nongc_ctime; 791 - __u64 ss_prot_seq; 792 - }; 793 - 794 - /** 795 - * struct nilfs_vinfo - virtual block number information 796 - * @vi_vblocknr: virtual block number 797 - * @vi_start: start checkpoint number (inclusive) 798 - * @vi_end: end checkpoint number (exclusive) 799 - * @vi_blocknr: disk block number 800 - */ 801 - struct nilfs_vinfo { 802 - __u64 vi_vblocknr; 803 - __u64 vi_start; 804 - __u64 vi_end; 805 - __u64 vi_blocknr; 806 - }; 807 - 808 - /** 809 - * struct nilfs_vdesc - descriptor of virtual block number 810 - * @vd_ino: inode number 811 - * @vd_cno: checkpoint number 812 - * @vd_vblocknr: virtual block number 813 - * @vd_period: period of checkpoint numbers 814 - * @vd_blocknr: disk block number 815 - * @vd_offset: logical block offset inside a file 816 - * @vd_flags: flags (data or node block) 817 - * @vd_pad: padding 818 - */ 819 - struct nilfs_vdesc { 820 - __u64 vd_ino; 821 - __u64 vd_cno; 822 - __u64 vd_vblocknr; 823 - struct nilfs_period vd_period; 824 - __u64 vd_blocknr; 825 - __u64 vd_offset; 826 - __u32 vd_flags; 827 - __u32 vd_pad; 828 - }; 829 - 830 - /** 831 - * struct nilfs_bdesc - descriptor of disk block number 832 - * @bd_ino: inode number 833 - * @bd_oblocknr: disk block address (for skipping dead blocks) 834 - * @bd_blocknr: disk block address 835 - * @bd_offset: logical block offset inside a file 836 - * @bd_level: level in the b-tree organization 837 - * @bd_pad: padding 838 - */ 839 - struct nilfs_bdesc { 840 - __u64 bd_ino; 841 - __u64 bd_oblocknr; 842 - __u64 bd_blocknr;
843 - __u64 bd_offset; 844 - __u32 bd_level; 845 - __u32 bd_pad; 846 - }; 847 - 848 - #define NILFS_IOCTL_IDENT 'n' 849 - 850 - #define NILFS_IOCTL_CHANGE_CPMODE \ 851 - _IOW(NILFS_IOCTL_IDENT, 0x80, struct nilfs_cpmode) 852 - #define NILFS_IOCTL_DELETE_CHECKPOINT \ 853 - _IOW(NILFS_IOCTL_IDENT, 0x81, __u64) 854 - #define NILFS_IOCTL_GET_CPINFO \ 855 - _IOR(NILFS_IOCTL_IDENT, 0x82, struct nilfs_argv) 856 - #define NILFS_IOCTL_GET_CPSTAT \ 857 - _IOR(NILFS_IOCTL_IDENT, 0x83, struct nilfs_cpstat) 858 - #define NILFS_IOCTL_GET_SUINFO \ 859 - _IOR(NILFS_IOCTL_IDENT, 0x84, struct nilfs_argv) 860 - #define NILFS_IOCTL_GET_SUSTAT \ 861 - _IOR(NILFS_IOCTL_IDENT, 0x85, struct nilfs_sustat) 862 - #define NILFS_IOCTL_GET_VINFO \ 863 - _IOWR(NILFS_IOCTL_IDENT, 0x86, struct nilfs_argv) 864 - #define NILFS_IOCTL_GET_BDESCS \ 865 - _IOWR(NILFS_IOCTL_IDENT, 0x87, struct nilfs_argv) 866 - #define NILFS_IOCTL_CLEAN_SEGMENTS \ 867 - _IOW(NILFS_IOCTL_IDENT, 0x88, struct nilfs_argv[5]) 868 - #define NILFS_IOCTL_SYNC \ 869 - _IOR(NILFS_IOCTL_IDENT, 0x8A, __u64) 870 - #define NILFS_IOCTL_RESIZE \ 871 - _IOW(NILFS_IOCTL_IDENT, 0x8B, __u64) 872 - #define NILFS_IOCTL_SET_ALLOC_RANGE \ 873 - _IOW(NILFS_IOCTL_IDENT, 0x8C, __u64[2]) 874 - #define NILFS_IOCTL_SET_SUINFO \ 875 - _IOW(NILFS_IOCTL_IDENT, 0x8D, struct nilfs_argv) 876 - 877 - #endif /* _LINUX_NILFS_FS_H */ 707 + #endif /* _LINUX_NILFS2_ONDISK_H */
+44 -16
include/linux/printk.h
··· 61 61 console_loglevel = CONSOLE_LOGLEVEL_MOTORMOUTH; 62 62 } 63 63 64 + /* strlen("ratelimit") + 1 */ 65 + #define DEVKMSG_STR_MAX_SIZE 10 66 + extern char devkmsg_log_str[]; 67 + struct ctl_table; 68 + 64 69 struct va_format { 65 70 const char *fmt; 66 71 va_list *va; ··· 180 175 extern int dmesg_restrict; 181 176 extern int kptr_restrict; 182 177 178 + extern int 179 + devkmsg_sysctl_set_loglvl(struct ctl_table *table, int write, void __user *buf, 180 + size_t *lenp, loff_t *ppos); 181 + 183 182 extern void wake_up_klogd(void); 184 183 185 184 char *log_buf_addr_get(void); ··· 266 257 * and other debug macros are compiled out unless either DEBUG is defined 267 258 * or CONFIG_DYNAMIC_DEBUG is set. 268 259 */ 269 - #define pr_emerg(fmt, ...) \ 270 - printk(KERN_EMERG pr_fmt(fmt), ##__VA_ARGS__) 271 - #define pr_alert(fmt, ...) \ 272 - printk(KERN_ALERT pr_fmt(fmt), ##__VA_ARGS__) 273 - #define pr_crit(fmt, ...) \ 274 - printk(KERN_CRIT pr_fmt(fmt), ##__VA_ARGS__) 275 - #define pr_err(fmt, ...) \ 276 - printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__) 277 - #define pr_warning(fmt, ...) \ 278 - printk(KERN_WARNING pr_fmt(fmt), ##__VA_ARGS__) 279 - #define pr_warn pr_warning 280 - #define pr_notice(fmt, ...) \ 281 - printk(KERN_NOTICE pr_fmt(fmt), ##__VA_ARGS__) 282 - #define pr_info(fmt, ...) \ 283 - printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__) 260 + 261 + #ifdef CONFIG_PRINTK 262 + 263 + asmlinkage __printf(1, 2) __cold void __pr_emerg(const char *fmt, ...); 264 + asmlinkage __printf(1, 2) __cold void __pr_alert(const char *fmt, ...); 265 + asmlinkage __printf(1, 2) __cold void __pr_crit(const char *fmt, ...); 266 + asmlinkage __printf(1, 2) __cold void __pr_err(const char *fmt, ...); 267 + asmlinkage __printf(1, 2) __cold void __pr_warn(const char *fmt, ...); 268 + asmlinkage __printf(1, 2) __cold void __pr_notice(const char *fmt, ...); 269 + asmlinkage __printf(1, 2) __cold void __pr_info(const char *fmt, ...); 270 +
271 + #define pr_emerg(fmt, ...) __pr_emerg(pr_fmt(fmt), ##__VA_ARGS__) 272 + #define pr_alert(fmt, ...) __pr_alert(pr_fmt(fmt), ##__VA_ARGS__) 273 + #define pr_crit(fmt, ...) __pr_crit(pr_fmt(fmt), ##__VA_ARGS__) 274 + #define pr_err(fmt, ...) __pr_err(pr_fmt(fmt), ##__VA_ARGS__) 275 + #define pr_warn(fmt, ...) __pr_warn(pr_fmt(fmt), ##__VA_ARGS__) 276 + #define pr_notice(fmt, ...) __pr_notice(pr_fmt(fmt), ##__VA_ARGS__) 277 + #define pr_info(fmt, ...) __pr_info(pr_fmt(fmt), ##__VA_ARGS__) 278 + 279 + #else 280 + 281 + #define pr_emerg(fmt, ...) printk(KERN_EMERG pr_fmt(fmt), ##__VA_ARGS__) 282 + #define pr_alert(fmt, ...) printk(KERN_ALERT pr_fmt(fmt), ##__VA_ARGS__) 283 + #define pr_crit(fmt, ...) printk(KERN_CRIT pr_fmt(fmt), ##__VA_ARGS__) 284 + #define pr_err(fmt, ...) printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__) 285 + #define pr_warn(fmt, ...) printk(KERN_WARNING pr_fmt(fmt), ##__VA_ARGS__) 286 + #define pr_notice(fmt, ...) printk(KERN_NOTICE pr_fmt(fmt), ##__VA_ARGS__) 287 + #define pr_info(fmt, ...) printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__) 288 + 289 + #endif 290 + 291 + #define pr_warning pr_warn 292 + 284 293 /* 285 294 * Like KERN_CONT, pr_cont() should only be used when continuing 286 295 * a line with no newline ('\n') enclosed. Otherwise it defaults ··· 316 289 no_printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__) 317 290 #endif 318 291 319 - #include <linux/dynamic_debug.h> 320 292 321 293 /* If you are writing a driver, please use dev_dbg instead */ 322 294 #if defined(CONFIG_DYNAMIC_DEBUG) 295 + #include <linux/dynamic_debug.h> 296 + 323 297 /* dynamic_pr_debug() uses pr_fmt() internally so we don't need it here */ 324 298 #define pr_debug(fmt, ...) \ 325 299 dynamic_pr_debug(fmt, ##__VA_ARGS__)
+1 -1
include/linux/radix-tree.h
··· 35 35 * 00 - data pointer 36 36 * 01 - internal entry 37 37 * 10 - exceptional entry 38 - * 11 - locked exceptional entry 38 + * 11 - this bit combination is currently unused/reserved 39 39 * 40 40 * The internal entry may be a pointer to the next level in the tree, a 41 41 * sibling entry, or an indicator that the entry in this slot has been moved
+33 -5
include/linux/ratelimit.h
··· 2 2 #define _LINUX_RATELIMIT_H 3 3 4 4 #include <linux/param.h> 5 + #include <linux/sched.h> 5 6 #include <linux/spinlock.h> 6 7 7 8 #define DEFAULT_RATELIMIT_INTERVAL (5 * HZ) 8 9 #define DEFAULT_RATELIMIT_BURST 10 10 + 11 + /* issue num suppressed message on exit */ 12 + #define RATELIMIT_MSG_ON_RELEASE BIT(0) 9 13 10 14 struct ratelimit_state { 11 15 raw_spinlock_t lock; /* protect the state */ ··· 19 15 int printed; 20 16 int missed; 21 17 unsigned long begin; 18 + unsigned long flags; 22 19 }; 23 20 24 21 #define RATELIMIT_STATE_INIT(name, interval_init, burst_init) { \ ··· 39 34 static inline void ratelimit_state_init(struct ratelimit_state *rs, 40 35 int interval, int burst) 41 36 { 37 + memset(rs, 0, sizeof(*rs)); 38 + 42 39 raw_spin_lock_init(&rs->lock); 43 - rs->interval = interval; 44 - rs->burst = burst; 45 - rs->printed = 0; 46 - rs->missed = 0; 47 - rs->begin = 0; 40 + rs->interval = interval; 41 + rs->burst = burst; 42 + } 43 + 44 + static inline void ratelimit_default_init(struct ratelimit_state *rs) 45 + { 46 + return ratelimit_state_init(rs, DEFAULT_RATELIMIT_INTERVAL, 47 + DEFAULT_RATELIMIT_BURST); 48 + } 49 + 50 + static inline void ratelimit_state_exit(struct ratelimit_state *rs) 51 + { 52 + if (!(rs->flags & RATELIMIT_MSG_ON_RELEASE)) 53 + return; 54 + 55 + if (rs->missed) { 56 + pr_warn("%s: %d output lines suppressed due to ratelimiting\n", 57 + current->comm, rs->missed); 58 + rs->missed = 0; 59 + } 60 + } 61 + 62 + static inline void 63 + ratelimit_set_flags(struct ratelimit_state *rs, unsigned long flags) 64 + { 65 + rs->flags = flags; 48 66 } 49 67 50 68 extern struct ratelimit_state printk_ratelimit_state;
+5 -8
include/linux/rio.h
··· 163 163 * @dst_ops: Destination operation capabilities 164 164 * @comp_tag: RIO component tag 165 165 * @phys_efptr: RIO device extended features pointer 166 + * @phys_rmap: LP-Serial Register Map Type (1 or 2) 166 167 * @em_efptr: RIO Error Management features pointer 167 168 * @dma_mask: Mask of bits of RIO address this device implements 168 169 * @driver: Driver claiming this device ··· 194 193 u32 dst_ops; 195 194 u32 comp_tag; 196 195 u32 phys_efptr; 196 + u32 phys_rmap; 197 197 u32 em_efptr; 198 198 u64 dma_mask; 199 199 struct rio_driver *driver; /* RIO driver claiming this device */ ··· 239 237 void *dev_id; 240 238 }; 241 239 242 - enum rio_phy_type { 243 - RIO_PHY_PARALLEL, 244 - RIO_PHY_SERIAL, 245 - }; 246 - 247 240 /** 248 241 * struct rio_mport - RIO master port info 249 242 * @dbells: List of doorbell events ··· 256 259 * @id: Port ID, unique among all ports 257 260 * @index: Port index, unique among all port interfaces of the same type 258 261 * @sys_size: RapidIO common transport system size 259 - * @phy_type: RapidIO phy type 260 262 * @phys_efptr: RIO port extended features pointer 263 + * @phys_rmap: LP-Serial EFB Register Mapping type (1 or 2). 261 264 * @name: Port name string 262 265 * @dev: device structure associated with an mport 263 266 * @priv: Master port private data ··· 286 289 * 0 - Small size. 256 devices. 287 290 * 1 - Large size, 65536 devices. 
288 291 */ 289 - enum rio_phy_type phy_type; /* RapidIO phy type */ 290 292 u32 phys_efptr; 293 + u32 phys_rmap; 291 294 unsigned char name[RIO_MAX_MPORT_NAME]; 292 295 struct device dev; 293 296 void *priv; /* Master port private data */ ··· 422 425 int (*add_inb_buffer)(struct rio_mport *mport, int mbox, void *buf); 423 426 void *(*get_inb_message)(struct rio_mport *mport, int mbox); 424 427 int (*map_inb)(struct rio_mport *mport, dma_addr_t lstart, 425 - u64 rstart, u32 size, u32 flags); 428 + u64 rstart, u64 size, u32 flags); 426 429 void (*unmap_inb)(struct rio_mport *mport, dma_addr_t lstart); 427 430 int (*query_mport)(struct rio_mport *mport, 428 431 struct rio_mport_attr *attr);
+2
include/linux/rio_ids.h
··· 38 38 #define RIO_DID_IDTVPS1616 0x0377 39 39 #define RIO_DID_IDTSPS1616 0x0378 40 40 #define RIO_DID_TSI721 0x80ab 41 + #define RIO_DID_IDTRXS1632 0x80e5 42 + #define RIO_DID_IDTRXS2448 0x80e6 41 43 42 44 #endif /* LINUX_RIO_IDS_H */
+132 -35
include/linux/rio_regs.h
··· 42 42 #define RIO_PEF_INB_MBOX2 0x00200000 /* [II, <= 1.2] Mailbox 2 */ 43 43 #define RIO_PEF_INB_MBOX3 0x00100000 /* [II, <= 1.2] Mailbox 3 */ 44 44 #define RIO_PEF_INB_DOORBELL 0x00080000 /* [II, <= 1.2] Doorbells */ 45 + #define RIO_PEF_DEV32 0x00001000 /* [III] PE supports Common TRansport Dev32 */ 45 46 #define RIO_PEF_EXT_RT 0x00000200 /* [III, 1.3] Extended route table support */ 46 47 #define RIO_PEF_STD_RT 0x00000100 /* [III, 1.3] Standard route table support */ 47 - #define RIO_PEF_CTLS 0x00000010 /* [III] CTLS */ 48 + #define RIO_PEF_CTLS 0x00000010 /* [III] Common Transport Large System (< rev.3) */ 49 + #define RIO_PEF_DEV16 0x00000010 /* [III] PE Supports Common Transport Dev16 (rev.3) */ 48 50 #define RIO_PEF_EXT_FEATURES 0x00000008 /* [I] EFT_PTR valid */ 49 51 #define RIO_PEF_ADDR_66 0x00000004 /* [I] 66 bits */ 50 52 #define RIO_PEF_ADDR_50 0x00000002 /* [I] 50 bits */ ··· 196 194 #define RIO_GET_BLOCK_ID(x) (x & RIO_EFB_ID_MASK) 197 195 198 196 /* Extended Feature Block IDs */ 199 - #define RIO_EFB_PAR_EP_ID 0x0001 /* [IV] LP/LVDS EP Devices */ 200 - #define RIO_EFB_PAR_EP_REC_ID 0x0002 /* [IV] LP/LVDS EP Recovery Devices */ 201 - #define RIO_EFB_PAR_EP_FREE_ID 0x0003 /* [IV] LP/LVDS EP Free Devices */ 202 - #define RIO_EFB_SER_EP_ID_V13P 0x0001 /* [VI] LP/Serial EP Devices, RapidIO Spec ver 1.3 and above */ 203 - #define RIO_EFB_SER_EP_REC_ID_V13P 0x0002 /* [VI] LP/Serial EP Recovery Devices, RapidIO Spec ver 1.3 and above */ 204 - #define RIO_EFB_SER_EP_FREE_ID_V13P 0x0003 /* [VI] LP/Serial EP Free Devices, RapidIO Spec ver 1.3 and above */ 205 - #define RIO_EFB_SER_EP_ID 0x0004 /* [VI] LP/Serial EP Devices */ 206 - #define RIO_EFB_SER_EP_REC_ID 0x0005 /* [VI] LP/Serial EP Recovery Devices */ 207 - #define RIO_EFB_SER_EP_FREE_ID 0x0006 /* [VI] LP/Serial EP Free Devices */ 208 - #define RIO_EFB_SER_EP_FREC_ID 0x0009 /* [VI] LP/Serial EP Free Recovery Devices */ 197 + #define RIO_EFB_SER_EP_M1_ID 0x0001 /* [VI] LP-Serial EP Devices, Map I */ 
198 + #define RIO_EFB_SER_EP_SW_M1_ID 0x0002 /* [VI] LP-Serial EP w SW Recovery Devices, Map I */ 199 + #define RIO_EFB_SER_EPF_M1_ID 0x0003 /* [VI] LP-Serial EP Free Devices, Map I */ 200 + #define RIO_EFB_SER_EP_ID 0x0004 /* [VI] LP-Serial EP Devices, RIO 1.2 */ 201 + #define RIO_EFB_SER_EP_REC_ID 0x0005 /* [VI] LP-Serial EP w SW Recovery Devices, RIO 1.2 */ 202 + #define RIO_EFB_SER_EP_FREE_ID 0x0006 /* [VI] LP-Serial EP Free Devices, RIO 1.2 */ 209 203 #define RIO_EFB_ERR_MGMNT 0x0007 /* [VIII] Error Management Extensions */ 204 + #define RIO_EFB_SER_EPF_SW_M1_ID 0x0009 /* [VI] LP-Serial EP Free w SW Recovery Devices, Map I */ 205 + #define RIO_EFB_SW_ROUTING_TBL 0x000E /* [III] Switch Routing Table Block */ 206 + #define RIO_EFB_SER_EP_M2_ID 0x0011 /* [VI] LP-Serial EP Devices, Map II */ 207 + #define RIO_EFB_SER_EP_SW_M2_ID 0x0012 /* [VI] LP-Serial EP w SW Recovery Devices, Map II */ 208 + #define RIO_EFB_SER_EPF_M2_ID 0x0013 /* [VI] LP-Serial EP Free Devices, Map II */ 209 + #define RIO_EFB_ERR_MGMNT_HS 0x0017 /* [VIII] Error Management Extensions, Hot-Swap only */ 210 + #define RIO_EFB_SER_EPF_SW_M2_ID 0x0019 /* [VI] LP-Serial EP Free w SW Recovery Devices, Map II */ 210 211 211 212 /* 212 - * Physical 8/16 LP-LVDS 213 - * ID=0x0001, Generic End Point Devices 214 - * ID=0x0002, Generic End Point Devices, software assisted recovery option 215 - * ID=0x0003, Generic End Point Free Devices 216 - * 217 - * Physical LP-Serial 218 - * ID=0x0004, Generic End Point Devices 219 - * ID=0x0005, Generic End Point Devices, software assisted recovery option 220 - * ID=0x0006, Generic End Point Free Devices 213 + * Physical LP-Serial Registers Definitions 214 + * Parameters in register macros: 215 + * n - port number, m - Register Map Type (1 or 2) 221 216 */ 222 217 #define RIO_PORT_MNT_HEADER 0x0000 223 218 #define RIO_PORT_REQ_CTL_CSR 0x0020 224 - #define RIO_PORT_RSP_CTL_CSR 0x0024 /* 0x0001/0x0002 */ 225 - #define RIO_PORT_LINKTO_CTL_CSR 0x0020 /* Serial */ 226 - 
#define RIO_PORT_RSPTO_CTL_CSR 0x0024 /* Serial */ 219 + #define RIO_PORT_RSP_CTL_CSR 0x0024 220 + #define RIO_PORT_LINKTO_CTL_CSR 0x0020 221 + #define RIO_PORT_RSPTO_CTL_CSR 0x0024 227 222 #define RIO_PORT_GEN_CTL_CSR 0x003c 228 223 #define RIO_PORT_GEN_HOST 0x80000000 229 224 #define RIO_PORT_GEN_MASTER 0x40000000 230 225 #define RIO_PORT_GEN_DISCOVERED 0x20000000 231 - #define RIO_PORT_N_MNT_REQ_CSR(x) (0x0040 + x*0x20) /* 0x0002 */ 226 + #define RIO_PORT_N_MNT_REQ_CSR(n, m) (0x40 + (n) * (0x20 * (m))) 232 227 #define RIO_MNT_REQ_CMD_RD 0x03 /* Reset-device command */ 233 228 #define RIO_MNT_REQ_CMD_IS 0x04 /* Input-status command */ 234 - #define RIO_PORT_N_MNT_RSP_CSR(x) (0x0044 + x*0x20) /* 0x0002 */ 229 + #define RIO_PORT_N_MNT_RSP_CSR(n, m) (0x44 + (n) * (0x20 * (m))) 235 230 #define RIO_PORT_N_MNT_RSP_RVAL 0x80000000 /* Response Valid */ 236 231 #define RIO_PORT_N_MNT_RSP_ASTAT 0x000007e0 /* ackID Status */ 237 232 #define RIO_PORT_N_MNT_RSP_LSTAT 0x0000001f /* Link Status */ 238 - #define RIO_PORT_N_ACK_STS_CSR(x) (0x0048 + x*0x20) /* 0x0002 */ 233 + #define RIO_PORT_N_ACK_STS_CSR(n) (0x48 + (n) * 0x20) /* Only in RM-I */ 239 234 #define RIO_PORT_N_ACK_CLEAR 0x80000000 240 235 #define RIO_PORT_N_ACK_INBOUND 0x3f000000 241 236 #define RIO_PORT_N_ACK_OUTSTAND 0x00003f00 242 237 #define RIO_PORT_N_ACK_OUTBOUND 0x0000003f 243 - #define RIO_PORT_N_CTL2_CSR(x) (0x0054 + x*0x20) 238 + #define RIO_PORT_N_CTL2_CSR(n, m) (0x54 + (n) * (0x20 * (m))) 244 239 #define RIO_PORT_N_CTL2_SEL_BAUD 0xf0000000 245 - #define RIO_PORT_N_ERR_STS_CSR(x) (0x0058 + x*0x20) 246 - #define RIO_PORT_N_ERR_STS_PW_OUT_ES 0x00010000 /* Output Error-stopped */ 247 - #define RIO_PORT_N_ERR_STS_PW_INP_ES 0x00000100 /* Input Error-stopped */ 240 + #define RIO_PORT_N_ERR_STS_CSR(n, m) (0x58 + (n) * (0x20 * (m))) 241 + #define RIO_PORT_N_ERR_STS_OUT_ES 0x00010000 /* Output Error-stopped */ 242 + #define RIO_PORT_N_ERR_STS_INP_ES 0x00000100 /* Input Error-stopped */ 248 243 #define 
RIO_PORT_N_ERR_STS_PW_PEND 0x00000010 /* Port-Write Pending */ 244 + #define RIO_PORT_N_ERR_STS_PORT_UA 0x00000008 /* Port Unavailable */ 249 245 #define RIO_PORT_N_ERR_STS_PORT_ERR 0x00000004 250 246 #define RIO_PORT_N_ERR_STS_PORT_OK 0x00000002 251 247 #define RIO_PORT_N_ERR_STS_PORT_UNINIT 0x00000001 252 - #define RIO_PORT_N_CTL_CSR(x) (0x005c + x*0x20) 248 + #define RIO_PORT_N_CTL_CSR(n, m) (0x5c + (n) * (0x20 * (m))) 253 249 #define RIO_PORT_N_CTL_PWIDTH 0xc0000000 254 250 #define RIO_PORT_N_CTL_PWIDTH_1 0x00000000 255 251 #define RIO_PORT_N_CTL_PWIDTH_4 0x40000000 256 252 #define RIO_PORT_N_CTL_IPW 0x38000000 /* Initialized Port Width */ 257 253 #define RIO_PORT_N_CTL_P_TYP_SER 0x00000001 258 254 #define RIO_PORT_N_CTL_LOCKOUT 0x00000002 259 - #define RIO_PORT_N_CTL_EN_RX_SER 0x00200000 260 - #define RIO_PORT_N_CTL_EN_TX_SER 0x00400000 261 - #define RIO_PORT_N_CTL_EN_RX_PAR 0x08000000 262 - #define RIO_PORT_N_CTL_EN_TX_PAR 0x40000000 255 + #define RIO_PORT_N_CTL_EN_RX 0x00200000 256 + #define RIO_PORT_N_CTL_EN_TX 0x00400000 257 + #define RIO_PORT_N_OB_ACK_CSR(n) (0x60 + (n) * 0x40) /* Only in RM-II */ 258 + #define RIO_PORT_N_OB_ACK_CLEAR 0x80000000 259 + #define RIO_PORT_N_OB_ACK_OUTSTD 0x00fff000 260 + #define RIO_PORT_N_OB_ACK_OUTBND 0x00000fff 261 + #define RIO_PORT_N_IB_ACK_CSR(n) (0x64 + (n) * 0x40) /* Only in RM-II */ 262 + #define RIO_PORT_N_IB_ACK_INBND 0x00000fff 263 + 264 + /* 265 + * Device-based helper macros for serial port register access. 
266 + * d - pointer to rapidio device object, n - port number 267 + */ 268 + 269 + #define RIO_DEV_PORT_N_MNT_REQ_CSR(d, n) \ 270 + (d->phys_efptr + RIO_PORT_N_MNT_REQ_CSR(n, d->phys_rmap)) 271 + 272 + #define RIO_DEV_PORT_N_MNT_RSP_CSR(d, n) \ 273 + (d->phys_efptr + RIO_PORT_N_MNT_RSP_CSR(n, d->phys_rmap)) 274 + 275 + #define RIO_DEV_PORT_N_ACK_STS_CSR(d, n) \ 276 + (d->phys_efptr + RIO_PORT_N_ACK_STS_CSR(n)) 277 + 278 + #define RIO_DEV_PORT_N_CTL2_CSR(d, n) \ 279 + (d->phys_efptr + RIO_PORT_N_CTL2_CSR(n, d->phys_rmap)) 280 + 281 + #define RIO_DEV_PORT_N_ERR_STS_CSR(d, n) \ 282 + (d->phys_efptr + RIO_PORT_N_ERR_STS_CSR(n, d->phys_rmap)) 283 + 284 + #define RIO_DEV_PORT_N_CTL_CSR(d, n) \ 285 + (d->phys_efptr + RIO_PORT_N_CTL_CSR(n, d->phys_rmap)) 286 + 287 + #define RIO_DEV_PORT_N_OB_ACK_CSR(d, n) \ 288 + (d->phys_efptr + RIO_PORT_N_OB_ACK_CSR(n)) 289 + 290 + #define RIO_DEV_PORT_N_IB_ACK_CSR(d, n) \ 291 + (d->phys_efptr + RIO_PORT_N_IB_ACK_CSR(n)) 263 292 264 293 /* 265 294 * Error Management Extensions (RapidIO 1.3+, Part 8) ··· 301 268 /* General EM Registers (Common for all Ports) */ 302 269 303 270 #define RIO_EM_EFB_HEADER 0x000 /* Error Management Extensions Block Header */ 271 + #define RIO_EM_EMHS_CAR 0x004 /* EM Functionality CAR */ 304 272 #define RIO_EM_LTL_ERR_DETECT 0x008 /* Logical/Transport Layer Error Detect CSR */ 305 273 #define RIO_EM_LTL_ERR_EN 0x00c /* Logical/Transport Layer Error Enable CSR */ 306 274 #define REM_LTL_ERR_ILLTRAN 0x08000000 /* Illegal Transaction decode */ ··· 312 278 #define RIO_EM_LTL_ADDR_CAP 0x014 /* Logical/Transport Layer Address Capture CSR */ 313 279 #define RIO_EM_LTL_DEVID_CAP 0x018 /* Logical/Transport Layer Device ID Capture CSR */ 314 280 #define RIO_EM_LTL_CTRL_CAP 0x01c /* Logical/Transport Layer Control Capture CSR */ 281 + #define RIO_EM_LTL_DID32_CAP 0x020 /* Logical/Transport Layer Dev32 DestID Capture CSR */ 282 + #define RIO_EM_LTL_SID32_CAP 0x024 /* Logical/Transport Layer Dev32 source ID Capture CSR */ 
315 283 #define RIO_EM_PW_TGT_DEVID 0x028 /* Port-write Target deviceID CSR */ 284 + #define RIO_EM_PW_TGT_DEVID_D16M 0xff000000 /* Port-write Target DID16 MSB */ 285 + #define RIO_EM_PW_TGT_DEVID_D8 0x00ff0000 /* Port-write Target DID16 LSB or DID8 */ 286 + #define RIO_EM_PW_TGT_DEVID_DEV16 0x00008000 /* Port-write Target DID16 LSB or DID8 */ 287 + #define RIO_EM_PW_TGT_DEVID_DEV32 0x00004000 /* Port-write Target DID16 LSB or DID8 */ 316 288 #define RIO_EM_PKT_TTL 0x02c /* Packet Time-to-live CSR */ 289 + #define RIO_EM_PKT_TTL_VAL 0xffff0000 /* Packet Time-to-live value */ 290 + #define RIO_EM_PW_TGT32_DEVID 0x030 /* Port-write Dev32 Target deviceID CSR */ 291 + #define RIO_EM_PW_TX_CTRL 0x034 /* Port-write Transmission Control CSR */ 292 + #define RIO_EM_PW_TX_CTRL_PW_DIS 0x00000001 /* Port-write Transmission Disable bit */ 317 293 318 294 /* Per-Port EM Registers */ 319 295 320 296 #define RIO_EM_PN_ERR_DETECT(x) (0x040 + x*0x40) /* Port N Error Detect CSR */ 321 297 #define REM_PED_IMPL_SPEC 0x80000000 298 + #define REM_PED_LINK_OK2U 0x40000000 /* Link OK to Uninit transition */ 299 + #define REM_PED_LINK_UPDA 0x20000000 /* Link Uninit Packet Discard Active */ 300 + #define REM_PED_LINK_U2OK 0x10000000 /* Link Uninit to OK transition */ 322 301 #define REM_PED_LINK_TO 0x00000001 302 + 323 303 #define RIO_EM_PN_ERRRATE_EN(x) (0x044 + x*0x40) /* Port N Error Rate Enable CSR */ 304 + #define RIO_EM_PN_ERRRATE_EN_OK2U 0x40000000 /* Enable notification for OK2U */ 305 + #define RIO_EM_PN_ERRRATE_EN_UPDA 0x20000000 /* Enable notification for UPDA */ 306 + #define RIO_EM_PN_ERRRATE_EN_U2OK 0x10000000 /* Enable notification for U2OK */ 307 + 324 308 #define RIO_EM_PN_ATTRIB_CAP(x) (0x048 + x*0x40) /* Port N Attributes Capture CSR */ 325 309 #define RIO_EM_PN_PKT_CAP_0(x) (0x04c + x*0x40) /* Port N Packet/Control Symbol Capture 0 CSR */ 326 310 #define RIO_EM_PN_PKT_CAP_1(x) (0x050 + x*0x40) /* Port N Packet Capture 1 CSR */ ··· 346 294 #define RIO_EM_PN_PKT_CAP_3(x) 
(0x058 + x*0x40) /* Port N Packet Capture 3 CSR */ 347 295 #define RIO_EM_PN_ERRRATE(x) (0x068 + x*0x40) /* Port N Error Rate CSR */ 348 296 #define RIO_EM_PN_ERRRATE_TR(x) (0x06c + x*0x40) /* Port N Error Rate Threshold CSR */ 297 + #define RIO_EM_PN_LINK_UDT(x) (0x070 + x*0x40) /* Port N Link Uninit Discard Timer CSR */ 298 + #define RIO_EM_PN_LINK_UDT_TO 0xffffff00 /* Link Uninit Timeout value */ 299 + 300 + /* 301 + * Switch Routing Table Register Block ID=0x000E (RapidIO 3.0+, part 3) 302 + * Register offsets are defined from beginning of the block. 303 + */ 304 + 305 + /* Broadcast Routing Table Control CSR */ 306 + #define RIO_BC_RT_CTL_CSR 0x020 307 + #define RIO_RT_CTL_THREE_LVL 0x80000000 308 + #define RIO_RT_CTL_DEV32_RT_CTRL 0x40000000 309 + #define RIO_RT_CTL_MC_MASK_SZ 0x03000000 /* 3.0+ Part 11: Multicast */ 310 + 311 + /* Broadcast Level 0 Info CSR */ 312 + #define RIO_BC_RT_LVL0_INFO_CSR 0x030 313 + #define RIO_RT_L0I_NUM_GR 0xff000000 314 + #define RIO_RT_L0I_GR_PTR 0x00fffc00 315 + 316 + /* Broadcast Level 1 Info CSR */ 317 + #define RIO_BC_RT_LVL1_INFO_CSR 0x034 318 + #define RIO_RT_L1I_NUM_GR 0xff000000 319 + #define RIO_RT_L1I_GR_PTR 0x00fffc00 320 + 321 + /* Broadcast Level 2 Info CSR */ 322 + #define RIO_BC_RT_LVL2_INFO_CSR 0x038 323 + #define RIO_RT_L2I_NUM_GR 0xff000000 324 + #define RIO_RT_L2I_GR_PTR 0x00fffc00 325 + 326 + /* Per-Port Routing Table registers. 327 + * Register fields defined in the broadcast section above are 328 + * applicable to the corresponding registers below. 329 + */ 330 + #define RIO_SPx_RT_CTL_CSR(x) (0x040 + (0x20 * x)) 331 + #define RIO_SPx_RT_LVL0_INFO_CSR(x) (0x50 + (0x20 * x)) 332 + #define RIO_SPx_RT_LVL1_INFO_CSR(x) (0x54 + (0x20 * x)) 333 + #define RIO_SPx_RT_LVL2_INFO_CSR(x) (0x58 + (0x20 * x)) 334 + 335 + /* Register Formats for Routing Table Group entry. 
336 + * Register offsets are calculated using GR_PTR field in the corresponding 337 + * table Level_N and group/entry numbers (see RapidIO 3.0+ Part 3). 338 + */ 339 + #define RIO_RT_Ln_ENTRY_IMPL_DEF 0xf0000000 340 + #define RIO_RT_Ln_ENTRY_RTE_VAL 0x000003ff 341 + #define RIO_RT_ENTRY_DROP_PKT 0x300 349 342 350 343 #endif /* LINUX_RIO_REGS_H */
+63
include/linux/sched.h
··· 1547 1547 /* unserialized, strictly 'current' */ 1548 1548 unsigned in_execve:1; /* bit to tell LSMs we're in execve */ 1549 1549 unsigned in_iowait:1; 1550 + #if !defined(TIF_RESTORE_SIGMASK) 1551 + unsigned restore_sigmask:1; 1552 + #endif 1550 1553 #ifdef CONFIG_MEMCG 1551 1554 unsigned memcg_may_oom:1; 1552 1555 #ifndef CONFIG_SLOB ··· 2682 2679 extern void sigqueue_free(struct sigqueue *); 2683 2680 extern int send_sigqueue(struct sigqueue *, struct task_struct *, int group); 2684 2681 extern int do_sigaction(int, struct k_sigaction *, struct k_sigaction *); 2682 + 2683 + #ifdef TIF_RESTORE_SIGMASK 2684 + /* 2685 + * Legacy restore_sigmask accessors. These are inefficient on 2686 + * SMP architectures because they require atomic operations. 2687 + */ 2688 + 2689 + /** 2690 + * set_restore_sigmask() - make sure saved_sigmask processing gets done 2691 + * 2692 + * This sets TIF_RESTORE_SIGMASK and ensures that the arch signal code 2693 + * will run before returning to user mode, to process the flag. For 2694 + * all callers, TIF_SIGPENDING is already set or it's no harm to set 2695 + * it. TIF_RESTORE_SIGMASK need not be in the set of bits that the 2696 + * arch code will notice on return to user mode, in case those bits 2697 + * are scarce. We set TIF_SIGPENDING here to ensure that the arch 2698 + * signal code always gets run when TIF_RESTORE_SIGMASK is set. 
2699 + */ 2700 + static inline void set_restore_sigmask(void) 2701 + { 2702 + set_thread_flag(TIF_RESTORE_SIGMASK); 2703 + WARN_ON(!test_thread_flag(TIF_SIGPENDING)); 2704 + } 2705 + static inline void clear_restore_sigmask(void) 2706 + { 2707 + clear_thread_flag(TIF_RESTORE_SIGMASK); 2708 + } 2709 + static inline bool test_restore_sigmask(void) 2710 + { 2711 + return test_thread_flag(TIF_RESTORE_SIGMASK); 2712 + } 2713 + static inline bool test_and_clear_restore_sigmask(void) 2714 + { 2715 + return test_and_clear_thread_flag(TIF_RESTORE_SIGMASK); 2716 + } 2717 + 2718 + #else /* TIF_RESTORE_SIGMASK */ 2719 + 2720 + /* Higher-quality implementation, used if TIF_RESTORE_SIGMASK doesn't exist. */ 2721 + static inline void set_restore_sigmask(void) 2722 + { 2723 + current->restore_sigmask = true; 2724 + WARN_ON(!test_thread_flag(TIF_SIGPENDING)); 2725 + } 2726 + static inline void clear_restore_sigmask(void) 2727 + { 2728 + current->restore_sigmask = false; 2729 + } 2730 + static inline bool test_restore_sigmask(void) 2731 + { 2732 + return current->restore_sigmask; 2733 + } 2734 + static inline bool test_and_clear_restore_sigmask(void) 2735 + { 2736 + if (!current->restore_sigmask) 2737 + return false; 2738 + current->restore_sigmask = false; 2739 + return true; 2740 + } 2741 + #endif 2685 2742 2686 2743 static inline void restore_saved_sigmask(void) 2687 2744 {
+1
include/linux/sysctl.h
··· 28 28 #include <uapi/linux/sysctl.h> 29 29 30 30 /* For the /proc/sys support */ 31 + struct completion; 31 32 struct ctl_table; 32 33 struct nsproxy; 33 34 struct ctl_table_root;
-41
include/linux/thread_info.h
··· 105 105 106 106 #define tif_need_resched() test_thread_flag(TIF_NEED_RESCHED) 107 107 108 - #if defined TIF_RESTORE_SIGMASK && !defined HAVE_SET_RESTORE_SIGMASK 109 - /* 110 - * An arch can define its own version of set_restore_sigmask() to get the 111 - * job done however works, with or without TIF_RESTORE_SIGMASK. 112 - */ 113 - #define HAVE_SET_RESTORE_SIGMASK 1 114 - 115 - /** 116 - * set_restore_sigmask() - make sure saved_sigmask processing gets done 117 - * 118 - * This sets TIF_RESTORE_SIGMASK and ensures that the arch signal code 119 - * will run before returning to user mode, to process the flag. For 120 - * all callers, TIF_SIGPENDING is already set or it's no harm to set 121 - * it. TIF_RESTORE_SIGMASK need not be in the set of bits that the 122 - * arch code will notice on return to user mode, in case those bits 123 - * are scarce. We set TIF_SIGPENDING here to ensure that the arch 124 - * signal code always gets run when TIF_RESTORE_SIGMASK is set. 125 - */ 126 - static inline void set_restore_sigmask(void) 127 - { 128 - set_thread_flag(TIF_RESTORE_SIGMASK); 129 - WARN_ON(!test_thread_flag(TIF_SIGPENDING)); 130 - } 131 - static inline void clear_restore_sigmask(void) 132 - { 133 - clear_thread_flag(TIF_RESTORE_SIGMASK); 134 - } 135 - static inline bool test_restore_sigmask(void) 136 - { 137 - return test_thread_flag(TIF_RESTORE_SIGMASK); 138 - } 139 - static inline bool test_and_clear_restore_sigmask(void) 140 - { 141 - return test_and_clear_thread_flag(TIF_RESTORE_SIGMASK); 142 - } 143 - #endif /* TIF_RESTORE_SIGMASK && !HAVE_SET_RESTORE_SIGMASK */ 144 - 145 - #ifndef HAVE_SET_RESTORE_SIGMASK 146 - #error "no set_restore_sigmask() provided and default one won't work" 147 - #endif 148 - 149 108 #endif /* __KERNEL__ */ 150 109 151 110 #endif /* _LINUX_THREAD_INFO_H */
+1 -1
include/net/net_namespace.h
··· 275 275 #define __net_initconst 276 276 #else 277 277 #define __net_init __init 278 - #define __net_exit __exit_refok 278 + #define __net_exit __ref 279 279 #define __net_initdata __initdata 280 280 #define __net_initconst __initconst 281 281 #endif
+1
include/uapi/linux/Kbuild
··· 357 357 header-y += reiserfs_xattr.h 358 358 header-y += resource.h 359 359 header-y += rfkill.h 360 + header-y += rio_cm_cdev.h 360 361 header-y += rio_mport_cdev.h 361 362 header-y += romfs_fs.h 362 363 header-y += rose.h
-2
include/uapi/linux/capability.h
··· 15 15 16 16 #include <linux/types.h> 17 17 18 - struct task_struct; 19 - 20 18 /* User-level do most of the mapping between kernel and user 21 19 capabilities based on the version tag given by the kernel. The 22 20 kernel might be somewhat backwards compatible, but don't bet on
+292
include/uapi/linux/nilfs2_api.h
··· 1 + /* 2 + * nilfs2_api.h - NILFS2 user space API 3 + * 4 + * Copyright (C) 2005-2008 Nippon Telegraph and Telephone Corporation. 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU Lesser General Public License as published 8 + * by the Free Software Foundation; either version 2.1 of the License, or 9 + * (at your option) any later version. 10 + */ 11 + 12 + #ifndef _LINUX_NILFS2_API_H 13 + #define _LINUX_NILFS2_API_H 14 + 15 + #include <linux/types.h> 16 + #include <linux/ioctl.h> 17 + 18 + /** 19 + * struct nilfs_cpinfo - checkpoint information 20 + * @ci_flags: flags 21 + * @ci_pad: padding 22 + * @ci_cno: checkpoint number 23 + * @ci_create: creation timestamp 24 + * @ci_nblk_inc: number of blocks incremented by this checkpoint 25 + * @ci_inodes_count: inodes count 26 + * @ci_blocks_count: blocks count 27 + * @ci_next: next checkpoint number in snapshot list 28 + */ 29 + struct nilfs_cpinfo { 30 + __u32 ci_flags; 31 + __u32 ci_pad; 32 + __u64 ci_cno; 33 + __u64 ci_create; 34 + __u64 ci_nblk_inc; 35 + __u64 ci_inodes_count; 36 + __u64 ci_blocks_count; 37 + __u64 ci_next; 38 + }; 39 + 40 + /* checkpoint flags */ 41 + enum { 42 + NILFS_CPINFO_SNAPSHOT, 43 + NILFS_CPINFO_INVALID, 44 + NILFS_CPINFO_SKETCH, 45 + NILFS_CPINFO_MINOR, 46 + }; 47 + 48 + #define NILFS_CPINFO_FNS(flag, name) \ 49 + static inline int \ 50 + nilfs_cpinfo_##name(const struct nilfs_cpinfo *cpinfo) \ 51 + { \ 52 + return !!(cpinfo->ci_flags & (1UL << NILFS_CPINFO_##flag)); \ 53 + } 54 + 55 + NILFS_CPINFO_FNS(SNAPSHOT, snapshot) 56 + NILFS_CPINFO_FNS(INVALID, invalid) 57 + NILFS_CPINFO_FNS(MINOR, minor) 58 + 59 + /** 60 + * nilfs_suinfo - segment usage information 61 + * @sui_lastmod: timestamp of last modification 62 + * @sui_nblocks: number of written blocks in segment 63 + * @sui_flags: segment usage flags 64 + */ 65 + struct nilfs_suinfo { 66 + __u64 sui_lastmod; 67 + __u32 sui_nblocks; 68 + __u32 sui_flags; 69 + }; 70 + 
71 + /* segment usage flags */ 72 + enum { 73 + NILFS_SUINFO_ACTIVE, 74 + NILFS_SUINFO_DIRTY, 75 + NILFS_SUINFO_ERROR, 76 + }; 77 + 78 + #define NILFS_SUINFO_FNS(flag, name) \ 79 + static inline int \ 80 + nilfs_suinfo_##name(const struct nilfs_suinfo *si) \ 81 + { \ 82 + return si->sui_flags & (1UL << NILFS_SUINFO_##flag); \ 83 + } 84 + 85 + NILFS_SUINFO_FNS(ACTIVE, active) 86 + NILFS_SUINFO_FNS(DIRTY, dirty) 87 + NILFS_SUINFO_FNS(ERROR, error) 88 + 89 + static inline int nilfs_suinfo_clean(const struct nilfs_suinfo *si) 90 + { 91 + return !si->sui_flags; 92 + } 93 + 94 + /** 95 + * nilfs_suinfo_update - segment usage information update 96 + * @sup_segnum: segment number 97 + * @sup_flags: flags for which fields are active in sup_sui 98 + * @sup_reserved: reserved necessary for alignment 99 + * @sup_sui: segment usage information 100 + */ 101 + struct nilfs_suinfo_update { 102 + __u64 sup_segnum; 103 + __u32 sup_flags; 104 + __u32 sup_reserved; 105 + struct nilfs_suinfo sup_sui; 106 + }; 107 + 108 + enum { 109 + NILFS_SUINFO_UPDATE_LASTMOD, 110 + NILFS_SUINFO_UPDATE_NBLOCKS, 111 + NILFS_SUINFO_UPDATE_FLAGS, 112 + __NR_NILFS_SUINFO_UPDATE_FIELDS, 113 + }; 114 + 115 + #define NILFS_SUINFO_UPDATE_FNS(flag, name) \ 116 + static inline void \ 117 + nilfs_suinfo_update_set_##name(struct nilfs_suinfo_update *sup) \ 118 + { \ 119 + sup->sup_flags |= 1UL << NILFS_SUINFO_UPDATE_##flag; \ 120 + } \ 121 + static inline void \ 122 + nilfs_suinfo_update_clear_##name(struct nilfs_suinfo_update *sup) \ 123 + { \ 124 + sup->sup_flags &= ~(1UL << NILFS_SUINFO_UPDATE_##flag); \ 125 + } \ 126 + static inline int \ 127 + nilfs_suinfo_update_##name(const struct nilfs_suinfo_update *sup) \ 128 + { \ 129 + return !!(sup->sup_flags & (1UL << NILFS_SUINFO_UPDATE_##flag));\ 130 + } 131 + 132 + NILFS_SUINFO_UPDATE_FNS(LASTMOD, lastmod) 133 + NILFS_SUINFO_UPDATE_FNS(NBLOCKS, nblocks) 134 + NILFS_SUINFO_UPDATE_FNS(FLAGS, flags) 135 + 136 + enum { 137 + NILFS_CHECKPOINT, 138 + NILFS_SNAPSHOT, 
139 + }; 140 + 141 + /** 142 + * struct nilfs_cpmode - change checkpoint mode structure 143 + * @cm_cno: checkpoint number 144 + * @cm_mode: mode of checkpoint 145 + * @cm_pad: padding 146 + */ 147 + struct nilfs_cpmode { 148 + __u64 cm_cno; 149 + __u32 cm_mode; 150 + __u32 cm_pad; 151 + }; 152 + 153 + /** 154 + * struct nilfs_argv - argument vector 155 + * @v_base: pointer on data array from userspace 156 + * @v_nmembs: number of members in data array 157 + * @v_size: size of data array in bytes 158 + * @v_flags: flags 159 + * @v_index: start number of target data items 160 + */ 161 + struct nilfs_argv { 162 + __u64 v_base; 163 + __u32 v_nmembs; /* number of members */ 164 + __u16 v_size; /* size of members */ 165 + __u16 v_flags; 166 + __u64 v_index; 167 + }; 168 + 169 + /** 170 + * struct nilfs_period - period of checkpoint numbers 171 + * @p_start: start checkpoint number (inclusive) 172 + * @p_end: end checkpoint number (exclusive) 173 + */ 174 + struct nilfs_period { 175 + __u64 p_start; 176 + __u64 p_end; 177 + }; 178 + 179 + /** 180 + * struct nilfs_cpstat - checkpoint statistics 181 + * @cs_cno: checkpoint number 182 + * @cs_ncps: number of checkpoints 183 + * @cs_nsss: number of snapshots 184 + */ 185 + struct nilfs_cpstat { 186 + __u64 cs_cno; 187 + __u64 cs_ncps; 188 + __u64 cs_nsss; 189 + }; 190 + 191 + /** 192 + * struct nilfs_sustat - segment usage statistics 193 + * @ss_nsegs: number of segments 194 + * @ss_ncleansegs: number of clean segments 195 + * @ss_ndirtysegs: number of dirty segments 196 + * @ss_ctime: creation time of the last segment 197 + * @ss_nongc_ctime: creation time of the last segment not for GC 198 + * @ss_prot_seq: least sequence number of segments which must not be reclaimed 199 + */ 200 + struct nilfs_sustat { 201 + __u64 ss_nsegs; 202 + __u64 ss_ncleansegs; 203 + __u64 ss_ndirtysegs; 204 + __u64 ss_ctime; 205 + __u64 ss_nongc_ctime; 206 + __u64 ss_prot_seq; 207 + }; 208 + 209 + /** 210 + * struct nilfs_vinfo - virtual block 
number information 211 + * @vi_vblocknr: virtual block number 212 + * @vi_start: start checkpoint number (inclusive) 213 + * @vi_end: end checkpoint number (exclusive) 214 + * @vi_blocknr: disk block number 215 + */ 216 + struct nilfs_vinfo { 217 + __u64 vi_vblocknr; 218 + __u64 vi_start; 219 + __u64 vi_end; 220 + __u64 vi_blocknr; 221 + }; 222 + 223 + /** 224 + * struct nilfs_vdesc - descriptor of virtual block number 225 + * @vd_ino: inode number 226 + * @vd_cno: checkpoint number 227 + * @vd_vblocknr: virtual block number 228 + * @vd_period: period of checkpoint numbers 229 + * @vd_blocknr: disk block number 230 + * @vd_offset: logical block offset inside a file 231 + * @vd_flags: flags (data or node block) 232 + * @vd_pad: padding 233 + */ 234 + struct nilfs_vdesc { 235 + __u64 vd_ino; 236 + __u64 vd_cno; 237 + __u64 vd_vblocknr; 238 + struct nilfs_period vd_period; 239 + __u64 vd_blocknr; 240 + __u64 vd_offset; 241 + __u32 vd_flags; 242 + __u32 vd_pad; 243 + }; 244 + 245 + /** 246 + * struct nilfs_bdesc - descriptor of disk block number 247 + * @bd_ino: inode number 248 + * @bd_oblocknr: disk block address (for skipping dead blocks) 249 + * @bd_blocknr: disk block address 250 + * @bd_offset: logical block offset inside a file 251 + * @bd_level: level in the b-tree organization 252 + * @bd_pad: padding 253 + */ 254 + struct nilfs_bdesc { 255 + __u64 bd_ino; 256 + __u64 bd_oblocknr; 257 + __u64 bd_blocknr; 258 + __u64 bd_offset; 259 + __u32 bd_level; 260 + __u32 bd_pad; 261 + }; 262 + 263 + #define NILFS_IOCTL_IDENT 'n' 264 + 265 + #define NILFS_IOCTL_CHANGE_CPMODE \ 266 + _IOW(NILFS_IOCTL_IDENT, 0x80, struct nilfs_cpmode) 267 + #define NILFS_IOCTL_DELETE_CHECKPOINT \ 268 + _IOW(NILFS_IOCTL_IDENT, 0x81, __u64) 269 + #define NILFS_IOCTL_GET_CPINFO \ 270 + _IOR(NILFS_IOCTL_IDENT, 0x82, struct nilfs_argv) 271 + #define NILFS_IOCTL_GET_CPSTAT \ 272 + _IOR(NILFS_IOCTL_IDENT, 0x83, struct nilfs_cpstat) 273 + #define NILFS_IOCTL_GET_SUINFO \ 274 + 
_IOR(NILFS_IOCTL_IDENT, 0x84, struct nilfs_argv) 275 + #define NILFS_IOCTL_GET_SUSTAT \ 276 + _IOR(NILFS_IOCTL_IDENT, 0x85, struct nilfs_sustat) 277 + #define NILFS_IOCTL_GET_VINFO \ 278 + _IOWR(NILFS_IOCTL_IDENT, 0x86, struct nilfs_argv) 279 + #define NILFS_IOCTL_GET_BDESCS \ 280 + _IOWR(NILFS_IOCTL_IDENT, 0x87, struct nilfs_argv) 281 + #define NILFS_IOCTL_CLEAN_SEGMENTS \ 282 + _IOW(NILFS_IOCTL_IDENT, 0x88, struct nilfs_argv[5]) 283 + #define NILFS_IOCTL_SYNC \ 284 + _IOR(NILFS_IOCTL_IDENT, 0x8A, __u64) 285 + #define NILFS_IOCTL_RESIZE \ 286 + _IOW(NILFS_IOCTL_IDENT, 0x8B, __u64) 287 + #define NILFS_IOCTL_SET_ALLOC_RANGE \ 288 + _IOW(NILFS_IOCTL_IDENT, 0x8C, __u64[2]) 289 + #define NILFS_IOCTL_SET_SUINFO \ 290 + _IOW(NILFS_IOCTL_IDENT, 0x8D, struct nilfs_argv) 291 + 292 + #endif /* _LINUX_NILFS2_API_H */
+78
include/uapi/linux/rio_cm_cdev.h
··· 1 + /* 2 + * Copyright (c) 2015, Integrated Device Technology Inc. 3 + * Copyright (c) 2015, Prodrive Technologies 4 + * Copyright (c) 2015, RapidIO Trade Association 5 + * All rights reserved. 6 + * 7 + * This software is available to you under a choice of one of two licenses. 8 + * You may choose to be licensed under the terms of the GNU General Public 9 + * License(GPL) Version 2, or the BSD-3 Clause license below: 10 + * 11 + * Redistribution and use in source and binary forms, with or without 12 + * modification, are permitted provided that the following conditions are met: 13 + * 14 + * 1. Redistributions of source code must retain the above copyright notice, 15 + * this list of conditions and the following disclaimer. 16 + * 17 + * 2. Redistributions in binary form must reproduce the above copyright notice, 18 + * this list of conditions and the following disclaimer in the documentation 19 + * and/or other materials provided with the distribution. 20 + * 21 + * 3. Neither the name of the copyright holder nor the names of its contributors 22 + * may be used to endorse or promote products derived from this software without 23 + * specific prior written permission. 24 + * 25 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 26 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, 27 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR 28 + * PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR 29 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, 30 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, 31 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; 32 + * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, 33 + * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR 34 + * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF 35 + * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 36 + */ 37 + 38 + #ifndef _RIO_CM_CDEV_H_ 39 + #define _RIO_CM_CDEV_H_ 40 + 41 + #include <linux/types.h> 42 + 43 + struct rio_cm_channel { 44 + __u16 id; 45 + __u16 remote_channel; 46 + __u16 remote_destid; 47 + __u8 mport_id; 48 + }; 49 + 50 + struct rio_cm_msg { 51 + __u16 ch_num; 52 + __u16 size; 53 + __u32 rxto; /* receive timeout in mSec. 0 = blocking */ 54 + __u64 msg; 55 + }; 56 + 57 + struct rio_cm_accept { 58 + __u16 ch_num; 59 + __u16 pad0; 60 + __u32 wait_to; /* accept timeout in mSec. 
0 = blocking */ 61 + }; 62 + 63 + /* RapidIO Channelized Messaging Driver IOCTLs */ 64 + #define RIO_CM_IOC_MAGIC 'c' 65 + 66 + #define RIO_CM_EP_GET_LIST_SIZE _IOWR(RIO_CM_IOC_MAGIC, 1, __u32) 67 + #define RIO_CM_EP_GET_LIST _IOWR(RIO_CM_IOC_MAGIC, 2, __u32) 68 + #define RIO_CM_CHAN_CREATE _IOWR(RIO_CM_IOC_MAGIC, 3, __u16) 69 + #define RIO_CM_CHAN_CLOSE _IOW(RIO_CM_IOC_MAGIC, 4, __u16) 70 + #define RIO_CM_CHAN_BIND _IOW(RIO_CM_IOC_MAGIC, 5, struct rio_cm_channel) 71 + #define RIO_CM_CHAN_LISTEN _IOW(RIO_CM_IOC_MAGIC, 6, __u16) 72 + #define RIO_CM_CHAN_ACCEPT _IOWR(RIO_CM_IOC_MAGIC, 7, struct rio_cm_accept) 73 + #define RIO_CM_CHAN_CONNECT _IOW(RIO_CM_IOC_MAGIC, 8, struct rio_cm_channel) 74 + #define RIO_CM_CHAN_SEND _IOW(RIO_CM_IOC_MAGIC, 9, struct rio_cm_msg) 75 + #define RIO_CM_CHAN_RECEIVE _IOWR(RIO_CM_IOC_MAGIC, 10, struct rio_cm_msg) 76 + #define RIO_CM_MPORT_GET_LIST _IOWR(RIO_CM_IOC_MAGIC, 11, __u32) 77 + 78 + #endif /* _RIO_CM_CDEV_H_ */
-2
include/uapi/linux/sysctl.h
··· 26 26 #include <linux/types.h> 27 27 #include <linux/compiler.h> 28 28 29 - struct completion; 30 - 31 29 #define CTL_MAXNAME 10 /* how many path components do we allow in a 32 30 call to sysctl? In other words, what is 33 31 the largest acceptable value for the nlen
+5 -3
init/Kconfig
··· 55 55 56 56 config COMPILE_TEST 57 57 bool "Compile also drivers which will not load" 58 + depends on !UML 58 59 default n 59 60 help 60 61 Some drivers can be compiled on a different platform than they are ··· 81 80 config LOCALVERSION_AUTO 82 81 bool "Automatically append version information to the version string" 83 82 default y 83 + depends on !COMPILE_TEST 84 84 help 85 85 This will try to automatically determine if the current tree is a 86 86 release tree by looking for git tags that belong to the current ··· 954 952 controls or device isolation. 955 953 See 956 954 - Documentation/scheduler/sched-design-CFS.txt (CFS) 957 - - Documentation/cgroups/ (features for grouping, isolation 955 + - Documentation/cgroup-v1/ (features for grouping, isolation 958 956 and resource control) 959 957 960 958 Say N if unsure. ··· 1011 1009 CONFIG_CFQ_GROUP_IOSCHED=y; for enabling throttling policy, set 1012 1010 CONFIG_BLK_DEV_THROTTLING=y. 1013 1011 1014 - See Documentation/cgroups/blkio-controller.txt for more information. 1012 + See Documentation/cgroup-v1/blkio-controller.txt for more information. 1015 1013 1016 1014 config DEBUG_BLK_CGROUP 1017 1015 bool "IO controller debugging" ··· 2080 2078 (especially when using LTO) for optimizing the code and reducing 2081 2079 binary size. This might have some security advantages as well. 2082 2080 2083 - If unsure say N. 2081 + If unsure, or if you need to build out-of-tree modules, say N. 2084 2082 2085 2083 endif # MODULES 2086 2084
+7 -1
init/main.c
··· 380 380 381 381 static __initdata DECLARE_COMPLETION(kthreadd_done); 382 382 383 - static noinline void __init_refok rest_init(void) 383 + static noinline void __ref rest_init(void) 384 384 { 385 385 int pid; 386 386 ··· 715 715 716 716 addr = (unsigned long) dereference_function_descriptor(fn); 717 717 sprint_symbol_no_offset(fn_name, addr); 718 + 719 + /* 720 + * fn will be "function_name [module_name]" where [module_name] is not 721 + * displayed for built-in init functions. Strip off the [module_name]. 722 + */ 723 + strreplace(fn_name, ' ', '\0'); 718 724 719 725 list_for_each_entry(entry, &blacklisted_initcalls, next) { 720 726 if (!strcmp(fn_name, entry->buf)) {
+1 -1
ipc/msg.c
··· 680 680 rcu_read_lock(); 681 681 ipc_lock_object(&msq->q_perm); 682 682 683 - ipc_rcu_putref(msq, ipc_rcu_free); 683 + ipc_rcu_putref(msq, msg_rcu_free); 684 684 /* raced with RMID? */ 685 685 if (!ipc_valid_object(&msq->q_perm)) { 686 686 err = -EIDRM;
-2
ipc/msgutil.c
··· 37 37 #endif 38 38 }; 39 39 40 - atomic_t nr_ipc_ns = ATOMIC_INIT(1); 41 - 42 40 struct msg_msgseg { 43 41 struct msg_msgseg *next; 44 42 /* the next part of the message follows immediately */
-2
ipc/namespace.c
··· 43 43 kfree(ns); 44 44 return ERR_PTR(err); 45 45 } 46 - atomic_inc(&nr_ipc_ns); 47 46 48 47 sem_init_ns(ns); 49 48 msg_init_ns(ns); ··· 95 96 sem_exit_ns(ns); 96 97 msg_exit_ns(ns); 97 98 shm_exit_ns(ns); 98 - atomic_dec(&nr_ipc_ns); 99 99 100 100 put_user_ns(ns->user_ns); 101 101 ns_free_inum(&ns->ns);
+6 -6
ipc/sem.c
··· 438 438 static inline void sem_lock_and_putref(struct sem_array *sma) 439 439 { 440 440 sem_lock(sma, NULL, -1); 441 - ipc_rcu_putref(sma, ipc_rcu_free); 441 + ipc_rcu_putref(sma, sem_rcu_free); 442 442 } 443 443 444 444 static inline void sem_rmid(struct ipc_namespace *ns, struct sem_array *s) ··· 1381 1381 rcu_read_unlock(); 1382 1382 sem_io = ipc_alloc(sizeof(ushort)*nsems); 1383 1383 if (sem_io == NULL) { 1384 - ipc_rcu_putref(sma, ipc_rcu_free); 1384 + ipc_rcu_putref(sma, sem_rcu_free); 1385 1385 return -ENOMEM; 1386 1386 } 1387 1387 ··· 1415 1415 if (nsems > SEMMSL_FAST) { 1416 1416 sem_io = ipc_alloc(sizeof(ushort)*nsems); 1417 1417 if (sem_io == NULL) { 1418 - ipc_rcu_putref(sma, ipc_rcu_free); 1418 + ipc_rcu_putref(sma, sem_rcu_free); 1419 1419 return -ENOMEM; 1420 1420 } 1421 1421 } 1422 1422 1423 1423 if (copy_from_user(sem_io, p, nsems*sizeof(ushort))) { 1424 - ipc_rcu_putref(sma, ipc_rcu_free); 1424 + ipc_rcu_putref(sma, sem_rcu_free); 1425 1425 err = -EFAULT; 1426 1426 goto out_free; 1427 1427 } 1428 1428 1429 1429 for (i = 0; i < nsems; i++) { 1430 1430 if (sem_io[i] > SEMVMX) { 1431 - ipc_rcu_putref(sma, ipc_rcu_free); 1431 + ipc_rcu_putref(sma, sem_rcu_free); 1432 1432 err = -ERANGE; 1433 1433 goto out_free; 1434 1434 } ··· 1720 1720 /* step 2: allocate new undo structure */ 1721 1721 new = kzalloc(sizeof(struct sem_undo) + sizeof(short)*nsems, GFP_KERNEL); 1722 1722 if (!new) { 1723 - ipc_rcu_putref(sma, ipc_rcu_free); 1723 + ipc_rcu_putref(sma, sem_rcu_free); 1724 1724 return ERR_PTR(-ENOMEM); 1725 1725 } 1726 1726
+152
kernel/configs/android-base.config
··· 1 + # KEEP ALPHABETICALLY SORTED 2 + # CONFIG_DEVKMEM is not set 3 + # CONFIG_DEVMEM is not set 4 + # CONFIG_INET_LRO is not set 5 + # CONFIG_MODULES is not set 6 + # CONFIG_OABI_COMPAT is not set 7 + # CONFIG_SYSVIPC is not set 8 + CONFIG_ANDROID=y 9 + CONFIG_ANDROID_BINDER_IPC=y 10 + CONFIG_ANDROID_LOW_MEMORY_KILLER=y 11 + CONFIG_ARMV8_DEPRECATED=y 12 + CONFIG_ASHMEM=y 13 + CONFIG_AUDIT=y 14 + CONFIG_BLK_DEV_DM=y 15 + CONFIG_BLK_DEV_INITRD=y 16 + CONFIG_CGROUPS=y 17 + CONFIG_CGROUP_CPUACCT=y 18 + CONFIG_CGROUP_DEBUG=y 19 + CONFIG_CGROUP_FREEZER=y 20 + CONFIG_CGROUP_SCHED=y 21 + CONFIG_CP15_BARRIER_EMULATION=y 22 + CONFIG_DM_CRYPT=y 23 + CONFIG_DM_VERITY=y 24 + CONFIG_DM_VERITY_FEC=y 25 + CONFIG_EMBEDDED=y 26 + CONFIG_FB=y 27 + CONFIG_HIGH_RES_TIMERS=y 28 + CONFIG_INET6_AH=y 29 + CONFIG_INET6_ESP=y 30 + CONFIG_INET6_IPCOMP=y 31 + CONFIG_INET=y 32 + CONFIG_INET_DIAG_DESTROY=y 33 + CONFIG_INET_ESP=y 34 + CONFIG_INET_XFRM_MODE_TUNNEL=y 35 + CONFIG_IP6_NF_FILTER=y 36 + CONFIG_IP6_NF_IPTABLES=y 37 + CONFIG_IP6_NF_MANGLE=y 38 + CONFIG_IP6_NF_RAW=y 39 + CONFIG_IP6_NF_TARGET_REJECT=y 40 + CONFIG_IPV6=y 41 + CONFIG_IPV6_MIP6=y 42 + CONFIG_IPV6_MULTIPLE_TABLES=y 43 + CONFIG_IPV6_OPTIMISTIC_DAD=y 44 + CONFIG_IPV6_PRIVACY=y 45 + CONFIG_IPV6_ROUTER_PREF=y 46 + CONFIG_IPV6_ROUTE_INFO=y 47 + CONFIG_IP_ADVANCED_ROUTER=y 48 + CONFIG_IP_MULTICAST=y 49 + CONFIG_IP_MULTIPLE_TABLES=y 50 + CONFIG_IP_NF_ARPFILTER=y 51 + CONFIG_IP_NF_ARPTABLES=y 52 + CONFIG_IP_NF_ARP_MANGLE=y 53 + CONFIG_IP_NF_FILTER=y 54 + CONFIG_IP_NF_IPTABLES=y 55 + CONFIG_IP_NF_MANGLE=y 56 + CONFIG_IP_NF_MATCH_AH=y 57 + CONFIG_IP_NF_MATCH_ECN=y 58 + CONFIG_IP_NF_MATCH_TTL=y 59 + CONFIG_IP_NF_NAT=y 60 + CONFIG_IP_NF_RAW=y 61 + CONFIG_IP_NF_SECURITY=y 62 + CONFIG_IP_NF_TARGET_MASQUERADE=y 63 + CONFIG_IP_NF_TARGET_NETMAP=y 64 + CONFIG_IP_NF_TARGET_REDIRECT=y 65 + CONFIG_IP_NF_TARGET_REJECT=y 66 + CONFIG_NET=y 67 + CONFIG_NETDEVICES=y 68 + CONFIG_NETFILTER=y 69 + CONFIG_NETFILTER_TPROXY=y 70 + 
CONFIG_NETFILTER_XT_MATCH_COMMENT=y 71 + CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y 72 + CONFIG_NETFILTER_XT_MATCH_CONNMARK=y 73 + CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y 74 + CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y 75 + CONFIG_NETFILTER_XT_MATCH_HELPER=y 76 + CONFIG_NETFILTER_XT_MATCH_IPRANGE=y 77 + CONFIG_NETFILTER_XT_MATCH_LENGTH=y 78 + CONFIG_NETFILTER_XT_MATCH_LIMIT=y 79 + CONFIG_NETFILTER_XT_MATCH_MAC=y 80 + CONFIG_NETFILTER_XT_MATCH_MARK=y 81 + CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y 82 + CONFIG_NETFILTER_XT_MATCH_POLICY=y 83 + CONFIG_NETFILTER_XT_MATCH_QUOTA=y 84 + CONFIG_NETFILTER_XT_MATCH_SOCKET=y 85 + CONFIG_NETFILTER_XT_MATCH_STATE=y 86 + CONFIG_NETFILTER_XT_MATCH_STATISTIC=y 87 + CONFIG_NETFILTER_XT_MATCH_STRING=y 88 + CONFIG_NETFILTER_XT_MATCH_TIME=y 89 + CONFIG_NETFILTER_XT_MATCH_U32=y 90 + CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y 91 + CONFIG_NETFILTER_XT_TARGET_CONNMARK=y 92 + CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y 93 + CONFIG_NETFILTER_XT_TARGET_IDLETIMER=y 94 + CONFIG_NETFILTER_XT_TARGET_MARK=y 95 + CONFIG_NETFILTER_XT_TARGET_NFLOG=y 96 + CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y 97 + CONFIG_NETFILTER_XT_TARGET_SECMARK=y 98 + CONFIG_NETFILTER_XT_TARGET_TCPMSS=y 99 + CONFIG_NETFILTER_XT_TARGET_TPROXY=y 100 + CONFIG_NETFILTER_XT_TARGET_TRACE=y 101 + CONFIG_NET_CLS_ACT=y 102 + CONFIG_NET_CLS_U32=y 103 + CONFIG_NET_EMATCH=y 104 + CONFIG_NET_EMATCH_U32=y 105 + CONFIG_NET_KEY=y 106 + CONFIG_NET_SCHED=y 107 + CONFIG_NET_SCH_HTB=y 108 + CONFIG_NF_CONNTRACK=y 109 + CONFIG_NF_CONNTRACK_AMANDA=y 110 + CONFIG_NF_CONNTRACK_EVENTS=y 111 + CONFIG_NF_CONNTRACK_FTP=y 112 + CONFIG_NF_CONNTRACK_H323=y 113 + CONFIG_NF_CONNTRACK_IPV4=y 114 + CONFIG_NF_CONNTRACK_IPV6=y 115 + CONFIG_NF_CONNTRACK_IRC=y 116 + CONFIG_NF_CONNTRACK_NETBIOS_NS=y 117 + CONFIG_NF_CONNTRACK_PPTP=y 118 + CONFIG_NF_CONNTRACK_SANE=y 119 + CONFIG_NF_CONNTRACK_SECMARK=y 120 + CONFIG_NF_CONNTRACK_TFTP=y 121 + CONFIG_NF_CT_NETLINK=y 122 + CONFIG_NF_CT_PROTO_DCCP=y 123 + CONFIG_NF_CT_PROTO_SCTP=y 124 + 
CONFIG_NF_CT_PROTO_UDPLITE=y 125 + CONFIG_NF_NAT=y 126 + CONFIG_NO_HZ=y 127 + CONFIG_PACKET=y 128 + CONFIG_PM_AUTOSLEEP=y 129 + CONFIG_PM_WAKELOCKS=y 130 + CONFIG_PPP=y 131 + CONFIG_PPP_BSDCOMP=y 132 + CONFIG_PPP_DEFLATE=y 133 + CONFIG_PPP_MPPE=y 134 + CONFIG_PREEMPT=y 135 + CONFIG_QUOTA=y 136 + CONFIG_RTC_CLASS=y 137 + CONFIG_RT_GROUP_SCHED=y 138 + CONFIG_SECURITY=y 139 + CONFIG_SECURITY_NETWORK=y 140 + CONFIG_SECURITY_SELINUX=y 141 + CONFIG_SETEND_EMULATION=y 142 + CONFIG_STAGING=y 143 + CONFIG_SWP_EMULATION=y 144 + CONFIG_SYNC=y 145 + CONFIG_TUN=y 146 + CONFIG_UNIX=y 147 + CONFIG_USB_GADGET=y 148 + CONFIG_USB_CONFIGFS=y 149 + CONFIG_USB_CONFIGFS_F_FS=y 150 + CONFIG_USB_CONFIGFS_F_MIDI=y 151 + CONFIG_USB_OTG_WAKELOCK=y 152 + CONFIG_XFRM_USER=y
+121
kernel/configs/android-recommended.config
··· 1 + # KEEP ALPHABETICALLY SORTED 2 + # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 3 + # CONFIG_INPUT_MOUSE is not set 4 + # CONFIG_LEGACY_PTYS is not set 5 + # CONFIG_NF_CONNTRACK_SIP is not set 6 + # CONFIG_PM_WAKELOCKS_GC is not set 7 + # CONFIG_VT is not set 8 + CONFIG_BACKLIGHT_LCD_SUPPORT=y 9 + CONFIG_BLK_DEV_LOOP=y 10 + CONFIG_BLK_DEV_RAM=y 11 + CONFIG_BLK_DEV_RAM_SIZE=8192 12 + CONFIG_COMPACTION=y 13 + CONFIG_DEBUG_RODATA=y 14 + CONFIG_DM_UEVENT=y 15 + CONFIG_DRAGONRISE_FF=y 16 + CONFIG_ENABLE_DEFAULT_TRACERS=y 17 + CONFIG_EXT4_FS=y 18 + CONFIG_EXT4_FS_SECURITY=y 19 + CONFIG_FUSE_FS=y 20 + CONFIG_GREENASIA_FF=y 21 + CONFIG_HIDRAW=y 22 + CONFIG_HID_A4TECH=y 23 + CONFIG_HID_ACRUX=y 24 + CONFIG_HID_ACRUX_FF=y 25 + CONFIG_HID_APPLE=y 26 + CONFIG_HID_BELKIN=y 27 + CONFIG_HID_CHERRY=y 28 + CONFIG_HID_CHICONY=y 29 + CONFIG_HID_CYPRESS=y 30 + CONFIG_HID_DRAGONRISE=y 31 + CONFIG_HID_ELECOM=y 32 + CONFIG_HID_EMS_FF=y 33 + CONFIG_HID_EZKEY=y 34 + CONFIG_HID_GREENASIA=y 35 + CONFIG_HID_GYRATION=y 36 + CONFIG_HID_HOLTEK=y 37 + CONFIG_HID_KENSINGTON=y 38 + CONFIG_HID_KEYTOUCH=y 39 + CONFIG_HID_KYE=y 40 + CONFIG_HID_LCPOWER=y 41 + CONFIG_HID_LOGITECH=y 42 + CONFIG_HID_LOGITECH_DJ=y 43 + CONFIG_HID_MAGICMOUSE=y 44 + CONFIG_HID_MICROSOFT=y 45 + CONFIG_HID_MONTEREY=y 46 + CONFIG_HID_MULTITOUCH=y 47 + CONFIG_HID_NTRIG=y 48 + CONFIG_HID_ORTEK=y 49 + CONFIG_HID_PANTHERLORD=y 50 + CONFIG_HID_PETALYNX=y 51 + CONFIG_HID_PICOLCD=y 52 + CONFIG_HID_PRIMAX=y 53 + CONFIG_HID_PRODIKEYS=y 54 + CONFIG_HID_ROCCAT=y 55 + CONFIG_HID_SAITEK=y 56 + CONFIG_HID_SAMSUNG=y 57 + CONFIG_HID_SMARTJOYPLUS=y 58 + CONFIG_HID_SONY=y 59 + CONFIG_HID_SPEEDLINK=y 60 + CONFIG_HID_SUNPLUS=y 61 + CONFIG_HID_THRUSTMASTER=y 62 + CONFIG_HID_TIVO=y 63 + CONFIG_HID_TOPSEED=y 64 + CONFIG_HID_TWINHAN=y 65 + CONFIG_HID_UCLOGIC=y 66 + CONFIG_HID_WACOM=y 67 + CONFIG_HID_WALTOP=y 68 + CONFIG_HID_WIIMOTE=y 69 + CONFIG_HID_ZEROPLUS=y 70 + CONFIG_HID_ZYDACRON=y 71 + CONFIG_INPUT_EVDEV=y 72 + CONFIG_INPUT_GPIO=y 73 + 
CONFIG_INPUT_JOYSTICK=y 74 + CONFIG_INPUT_MISC=y 75 + CONFIG_INPUT_TABLET=y 76 + CONFIG_INPUT_UINPUT=y 77 + CONFIG_ION=y 78 + CONFIG_JOYSTICK_XPAD=y 79 + CONFIG_JOYSTICK_XPAD_FF=y 80 + CONFIG_JOYSTICK_XPAD_LEDS=y 81 + CONFIG_KALLSYMS_ALL=y 82 + CONFIG_KSM=y 83 + CONFIG_LOGIG940_FF=y 84 + CONFIG_LOGIRUMBLEPAD2_FF=y 85 + CONFIG_LOGITECH_FF=y 86 + CONFIG_MD=y 87 + CONFIG_MEDIA_SUPPORT=y 88 + CONFIG_MSDOS_FS=y 89 + CONFIG_PANIC_TIMEOUT=5 90 + CONFIG_PANTHERLORD_FF=y 91 + CONFIG_PERF_EVENTS=y 92 + CONFIG_PM_DEBUG=y 93 + CONFIG_PM_RUNTIME=y 94 + CONFIG_PM_WAKELOCKS_LIMIT=0 95 + CONFIG_POWER_SUPPLY=y 96 + CONFIG_PSTORE=y 97 + CONFIG_PSTORE_CONSOLE=y 98 + CONFIG_PSTORE_RAM=y 99 + CONFIG_SCHEDSTATS=y 100 + CONFIG_SMARTJOYPLUS_FF=y 101 + CONFIG_SND=y 102 + CONFIG_SOUND=y 103 + CONFIG_SUSPEND_TIME=y 104 + CONFIG_TABLET_USB_ACECAD=y 105 + CONFIG_TABLET_USB_AIPTEK=y 106 + CONFIG_TABLET_USB_GTCO=y 107 + CONFIG_TABLET_USB_HANWANG=y 108 + CONFIG_TABLET_USB_KBTAB=y 109 + CONFIG_TASKSTATS=y 110 + CONFIG_TASK_DELAY_ACCT=y 111 + CONFIG_TASK_IO_ACCOUNTING=y 112 + CONFIG_TASK_XACCT=y 113 + CONFIG_TIMER_STATS=y 114 + CONFIG_TMPFS=y 115 + CONFIG_TMPFS_POSIX_ACL=y 116 + CONFIG_UHID=y 117 + CONFIG_USB_ANNOUNCE_NEW_DEVICES=y 118 + CONFIG_USB_EHCI_HCD=y 119 + CONFIG_USB_HIDDEV=y 120 + CONFIG_USB_USBNET=y 121 + CONFIG_VFAT_FS=y
+1 -1
kernel/exit.c
··· 715 715 716 716 spin_lock(&low_water_lock); 717 717 if (free < lowest_to_date) { 718 - pr_warn("%s (%d) used greatest stack depth: %lu bytes left\n", 718 + pr_info("%s (%d) used greatest stack depth: %lu bytes left\n", 719 719 current->comm, task_pid_nr(current), free); 720 720 lowest_to_date = free; 721 721 }
+2 -1
kernel/kexec.c
··· 48 48 49 49 if (kexec_on_panic) { 50 50 /* Verify we have a valid entry point */ 51 - if ((entry < crashk_res.start) || (entry > crashk_res.end)) 51 + if ((entry < phys_to_boot_phys(crashk_res.start)) || 52 + (entry > phys_to_boot_phys(crashk_res.end))) 52 53 return -EADDRNOTAVAIL; 53 54 } 54 55
+45 -24
kernel/kexec_core.c
··· 95 95 return 0; 96 96 } 97 97 98 + int kexec_crash_loaded(void) 99 + { 100 + return !!kexec_crash_image; 101 + } 102 + EXPORT_SYMBOL_GPL(kexec_crash_loaded); 103 + 98 104 /* 99 105 * When kexec transitions to the new kernel there is a one-to-one 100 106 * mapping between physical and virtual addresses. On processors ··· 146 140 * allocating pages whose destination address we do not care about. 147 141 */ 148 142 #define KIMAGE_NO_DEST (-1UL) 143 + #define PAGE_COUNT(x) (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT) 149 144 150 145 static struct page *kimage_alloc_page(struct kimage *image, 151 146 gfp_t gfp_mask, ··· 154 147 155 148 int sanity_check_segment_list(struct kimage *image) 156 149 { 157 - int result, i; 150 + int i; 158 151 unsigned long nr_segments = image->nr_segments; 152 + unsigned long total_pages = 0; 159 153 160 154 /* 161 155 * Verify we have good destination addresses. The caller is ··· 171 163 * simply because addresses are changed to page size 172 164 * granularity. 173 165 */ 174 - result = -EADDRNOTAVAIL; 175 166 for (i = 0; i < nr_segments; i++) { 176 167 unsigned long mstart, mend; 177 168 178 169 mstart = image->segment[i].mem; 179 170 mend = mstart + image->segment[i].memsz; 171 + if (mstart > mend) 172 + return -EADDRNOTAVAIL; 180 173 if ((mstart & ~PAGE_MASK) || (mend & ~PAGE_MASK)) 181 - return result; 174 + return -EADDRNOTAVAIL; 182 175 if (mend >= KEXEC_DESTINATION_MEMORY_LIMIT) 183 - return result; 176 + return -EADDRNOTAVAIL; 184 177 } 185 178 186 179 /* Verify our destination addresses do not overlap. ··· 189 180 * through very weird things can happen with no 190 181 * easy explanation as one segment stops on another. 191 182 */ 192 - result = -EINVAL; 193 183 for (i = 0; i < nr_segments; i++) { 194 184 unsigned long mstart, mend; 195 185 unsigned long j; ··· 202 194 pend = pstart + image->segment[j].memsz; 203 195 /* Do the segments overlap ? 
*/ 204 196 if ((mend > pstart) && (mstart < pend)) 205 - return result; 197 + return -EINVAL; 206 198 } 207 199 } 208 200 ··· 211 203 * and it is easier to check up front than to be surprised 212 204 * later on. 213 205 */ 214 - result = -EINVAL; 215 206 for (i = 0; i < nr_segments; i++) { 216 207 if (image->segment[i].bufsz > image->segment[i].memsz) 217 - return result; 208 + return -EINVAL; 218 209 } 210 + 211 + /* 212 + * Verify that no more than half of memory will be consumed. If the 213 + * request from userspace is too large, a large amount of time will be 214 + * wasted allocating pages, which can cause a soft lockup. 215 + */ 216 + for (i = 0; i < nr_segments; i++) { 217 + if (PAGE_COUNT(image->segment[i].memsz) > totalram_pages / 2) 218 + return -EINVAL; 219 + 220 + total_pages += PAGE_COUNT(image->segment[i].memsz); 221 + } 222 + 223 + if (total_pages > totalram_pages / 2) 224 + return -EINVAL; 219 225 220 226 /* 221 227 * Verify we have good destination addresses. Normally ··· 242 220 */ 243 221 244 222 if (image->type == KEXEC_TYPE_CRASH) { 245 - result = -EADDRNOTAVAIL; 246 223 for (i = 0; i < nr_segments; i++) { 247 224 unsigned long mstart, mend; 248 225 249 226 mstart = image->segment[i].mem; 250 227 mend = mstart + image->segment[i].memsz - 1; 251 228 /* Ensure we are within the crash kernel limits */ 252 - if ((mstart < crashk_res.start) || 253 - (mend > crashk_res.end)) 254 - return result; 229 + if ((mstart < phys_to_boot_phys(crashk_res.start)) || 230 + (mend > phys_to_boot_phys(crashk_res.end))) 231 + return -EADDRNOTAVAIL; 255 232 } 256 233 } 257 234 ··· 373 352 pages = kimage_alloc_pages(KEXEC_CONTROL_MEMORY_GFP, order); 374 353 if (!pages) 375 354 break; 376 - pfn = page_to_pfn(pages); 355 + pfn = page_to_boot_pfn(pages); 377 356 epfn = pfn + count; 378 357 addr = pfn << PAGE_SHIFT; 379 358 eaddr = epfn << PAGE_SHIFT; ··· 499 478 return -ENOMEM; 500 479 501 480 ind_page = page_address(page); 502 - *image->entry = virt_to_phys(ind_page) | 
IND_INDIRECTION; 481 + *image->entry = virt_to_boot_phys(ind_page) | IND_INDIRECTION; 503 482 image->entry = ind_page; 504 483 image->last_entry = ind_page + 505 484 ((PAGE_SIZE/sizeof(kimage_entry_t)) - 1); ··· 554 533 #define for_each_kimage_entry(image, ptr, entry) \ 555 534 for (ptr = &image->head; (entry = *ptr) && !(entry & IND_DONE); \ 556 535 ptr = (entry & IND_INDIRECTION) ? \ 557 - phys_to_virt((entry & PAGE_MASK)) : ptr + 1) 536 + boot_phys_to_virt((entry & PAGE_MASK)) : ptr + 1) 558 537 559 538 static void kimage_free_entry(kimage_entry_t entry) 560 539 { 561 540 struct page *page; 562 541 563 - page = pfn_to_page(entry >> PAGE_SHIFT); 542 + page = boot_pfn_to_page(entry >> PAGE_SHIFT); 564 543 kimage_free_pages(page); 565 544 } 566 545 ··· 654 633 * have a match. 655 634 */ 656 635 list_for_each_entry(page, &image->dest_pages, lru) { 657 - addr = page_to_pfn(page) << PAGE_SHIFT; 636 + addr = page_to_boot_pfn(page) << PAGE_SHIFT; 658 637 if (addr == destination) { 659 638 list_del(&page->lru); 660 639 return page; ··· 669 648 if (!page) 670 649 return NULL; 671 650 /* If the page cannot be used file it away */ 672 - if (page_to_pfn(page) > 651 + if (page_to_boot_pfn(page) > 673 652 (KEXEC_SOURCE_MEMORY_LIMIT >> PAGE_SHIFT)) { 674 653 list_add(&page->lru, &image->unusable_pages); 675 654 continue; 676 655 } 677 - addr = page_to_pfn(page) << PAGE_SHIFT; 656 + addr = page_to_boot_pfn(page) << PAGE_SHIFT; 678 657 679 658 /* If it is the destination page we want use it */ 680 659 if (addr == destination) ··· 697 676 struct page *old_page; 698 677 699 678 old_addr = *old & PAGE_MASK; 700 - old_page = pfn_to_page(old_addr >> PAGE_SHIFT); 679 + old_page = boot_pfn_to_page(old_addr >> PAGE_SHIFT); 701 680 copy_highpage(page, old_page); 702 681 *old = addr | (*old & ~PAGE_MASK); 703 682 ··· 753 732 result = -ENOMEM; 754 733 goto out; 755 734 } 756 - result = kimage_add_page(image, page_to_pfn(page) 735 + result = kimage_add_page(image, page_to_boot_pfn(page) 757 
736 << PAGE_SHIFT); 758 737 if (result < 0) 759 738 goto out; ··· 814 793 char *ptr; 815 794 size_t uchunk, mchunk; 816 795 817 - page = pfn_to_page(maddr >> PAGE_SHIFT); 796 + page = boot_pfn_to_page(maddr >> PAGE_SHIFT); 818 797 if (!page) { 819 798 result = -ENOMEM; 820 799 goto out; ··· 942 921 unsigned long addr; 943 922 944 923 for (addr = begin; addr < end; addr += PAGE_SIZE) 945 - free_reserved_page(pfn_to_page(addr >> PAGE_SHIFT)); 924 + free_reserved_page(boot_pfn_to_page(addr >> PAGE_SHIFT)); 946 925 } 947 926 948 927 int crash_shrink_memory(unsigned long new_size) ··· 1395 1374 void __weak arch_crash_save_vmcoreinfo(void) 1396 1375 {} 1397 1376 1398 - unsigned long __weak paddr_vmcoreinfo_note(void) 1377 + phys_addr_t __weak paddr_vmcoreinfo_note(void) 1399 1378 { 1400 1379 return __pa((unsigned long)(char *)&vmcoreinfo_note); 1401 1380 }
+3 -3
kernel/ksysfs.c
··· 101 101 static ssize_t kexec_crash_loaded_show(struct kobject *kobj, 102 102 struct kobj_attribute *attr, char *buf) 103 103 { 104 - return sprintf(buf, "%d\n", !!kexec_crash_image); 104 + return sprintf(buf, "%d\n", kexec_crash_loaded()); 105 105 } 106 106 KERNEL_ATTR_RO(kexec_crash_loaded); 107 107 ··· 128 128 static ssize_t vmcoreinfo_show(struct kobject *kobj, 129 129 struct kobj_attribute *attr, char *buf) 130 130 { 131 - return sprintf(buf, "%lx %x\n", 132 - paddr_vmcoreinfo_note(), 131 + phys_addr_t vmcore_base = paddr_vmcoreinfo_note(); 132 + return sprintf(buf, "%pa %x\n", &vmcore_base, 133 133 (unsigned int)sizeof(vmcoreinfo_note)); 134 134 } 135 135 KERNEL_ATTR_RO(vmcoreinfo);
+1
kernel/module.c
··· 60 60 #include <linux/jump_label.h> 61 61 #include <linux/pfn.h> 62 62 #include <linux/bsearch.h> 63 + #include <linux/dynamic_debug.h> 63 64 #include <uapi/linux/module.h> 64 65 #include "module-internal.h" 65 66
+4 -9
kernel/panic.c
··· 108 108 long i, i_next = 0; 109 109 int state = 0; 110 110 int old_cpu, this_cpu; 111 + bool _crash_kexec_post_notifiers = crash_kexec_post_notifiers; 111 112 112 113 /* 113 114 * Disable local interrupts. This will prevent panic_smp_self_stop ··· 161 160 * 162 161 * Bypass the panic_cpu check and call __crash_kexec directly. 163 162 */ 164 - if (!crash_kexec_post_notifiers) { 163 + if (!_crash_kexec_post_notifiers) { 165 164 printk_nmi_flush_on_panic(); 166 165 __crash_kexec(NULL); 167 166 } ··· 192 191 * 193 192 * Bypass the panic_cpu check and call __crash_kexec directly. 194 193 */ 195 - if (crash_kexec_post_notifiers) 194 + if (_crash_kexec_post_notifiers) 196 195 __crash_kexec(NULL); 197 196 198 197 bust_spinlocks(0); ··· 572 571 core_param(panic, panic_timeout, int, 0644); 573 572 core_param(pause_on_oops, pause_on_oops, int, 0644); 574 573 core_param(panic_on_warn, panic_on_warn, int, 0644); 575 - 576 - static int __init setup_crash_kexec_post_notifiers(char *s) 577 - { 578 - crash_kexec_post_notifiers = true; 579 - return 0; 580 - } 581 - early_param("crash_kexec_post_notifiers", setup_crash_kexec_post_notifiers); 574 + core_param(crash_kexec_post_notifiers, crash_kexec_post_notifiers, bool, 0644); 582 575 583 576 static int __init oops_setup(char *s) 584 577 {
+10 -6
kernel/printk/internal.h
··· 16 16 */ 17 17 #include <linux/percpu.h> 18 18 19 - typedef __printf(1, 0) int (*printk_func_t)(const char *fmt, va_list args); 19 + typedef __printf(2, 0) int (*printk_func_t)(int level, const char *fmt, 20 + va_list args); 20 21 21 - int __printf(1, 0) vprintk_default(const char *fmt, va_list args); 22 + __printf(2, 0) 23 + int vprintk_default(int level, const char *fmt, va_list args); 22 24 23 25 #ifdef CONFIG_PRINTK_NMI 24 26 ··· 33 31 * via per-CPU variable. 34 32 */ 35 33 DECLARE_PER_CPU(printk_func_t, printk_func); 36 - static inline __printf(1, 0) int vprintk_func(const char *fmt, va_list args) 34 + __printf(2, 0) 35 + static inline int vprintk_func(int level, const char *fmt, va_list args) 37 36 { 38 - return this_cpu_read(printk_func)(fmt, args); 37 + return this_cpu_read(printk_func)(level, fmt, args); 39 38 } 40 39 41 40 extern atomic_t nmi_message_lost; ··· 47 44 48 45 #else /* CONFIG_PRINTK_NMI */ 49 46 50 - static inline __printf(1, 0) int vprintk_func(const char *fmt, va_list args) 47 + __printf(2, 0) 48 + static inline int vprintk_func(int level, const char *fmt, va_list args) 51 49 { 52 - return vprintk_default(fmt, args); 50 + return vprintk_default(level, fmt, args); 53 51 } 54 52 55 53 static inline int get_nmi_message_lost(void)
+11 -2
kernel/printk/nmi.c
··· 58 58 * one writer running. But the buffer might get flushed from another 59 59 * CPU, so we need to be careful. 60 60 */ 61 - static int vprintk_nmi(const char *fmt, va_list args) 61 + static int vprintk_nmi(int level, const char *fmt, va_list args) 62 62 { 63 63 struct nmi_seq_buf *s = this_cpu_ptr(&nmi_print_seq); 64 64 int add = 0; ··· 79 79 if (!len) 80 80 smp_rmb(); 81 81 82 - add = vsnprintf(s->buffer + len, sizeof(s->buffer) - len, fmt, args); 82 + if (level != LOGLEVEL_DEFAULT) { 83 + add = snprintf(s->buffer + len, sizeof(s->buffer) - len, 84 + KERN_SOH "%c", '0' + level); 85 + add += vsnprintf(s->buffer + len + add, 86 + sizeof(s->buffer) - len - add, 87 + fmt, args); 88 + } else { 89 + add = vsnprintf(s->buffer + len, sizeof(s->buffer) - len, 90 + fmt, args); 91 + } 83 92 84 93 /* 85 94 * Do it once again if the buffer has been flushed in the meantime.
+178 -19
kernel/printk/printk.c
··· 26 26 #include <linux/nmi.h> 27 27 #include <linux/module.h> 28 28 #include <linux/moduleparam.h> 29 - #include <linux/interrupt.h> /* For in_interrupt() */ 30 29 #include <linux/delay.h> 31 30 #include <linux/smp.h> 32 31 #include <linux/security.h> ··· 47 48 #include <linux/uio.h> 48 49 49 50 #include <asm/uaccess.h> 50 - #include <asm-generic/sections.h> 51 + #include <asm/sections.h> 51 52 52 53 #define CREATE_TRACE_POINTS 53 54 #include <trace/events/printk.h> ··· 84 85 .name = "console_lock" 85 86 }; 86 87 #endif 88 + 89 + enum devkmsg_log_bits { 90 + __DEVKMSG_LOG_BIT_ON = 0, 91 + __DEVKMSG_LOG_BIT_OFF, 92 + __DEVKMSG_LOG_BIT_LOCK, 93 + }; 94 + 95 + enum devkmsg_log_masks { 96 + DEVKMSG_LOG_MASK_ON = BIT(__DEVKMSG_LOG_BIT_ON), 97 + DEVKMSG_LOG_MASK_OFF = BIT(__DEVKMSG_LOG_BIT_OFF), 98 + DEVKMSG_LOG_MASK_LOCK = BIT(__DEVKMSG_LOG_BIT_LOCK), 99 + }; 100 + 101 + /* Keep both the 'on' and 'off' bits clear, i.e. ratelimit by default: */ 102 + #define DEVKMSG_LOG_MASK_DEFAULT 0 103 + 104 + static unsigned int __read_mostly devkmsg_log = DEVKMSG_LOG_MASK_DEFAULT; 105 + 106 + static int __control_devkmsg(char *str) 107 + { 108 + if (!str) 109 + return -EINVAL; 110 + 111 + if (!strncmp(str, "on", 2)) { 112 + devkmsg_log = DEVKMSG_LOG_MASK_ON; 113 + return 2; 114 + } else if (!strncmp(str, "off", 3)) { 115 + devkmsg_log = DEVKMSG_LOG_MASK_OFF; 116 + return 3; 117 + } else if (!strncmp(str, "ratelimit", 9)) { 118 + devkmsg_log = DEVKMSG_LOG_MASK_DEFAULT; 119 + return 9; 120 + } 121 + return -EINVAL; 122 + } 123 + 124 + static int __init control_devkmsg(char *str) 125 + { 126 + if (__control_devkmsg(str) < 0) 127 + return 1; 128 + 129 + /* 130 + * Set sysctl string accordingly: 131 + */ 132 + if (devkmsg_log == DEVKMSG_LOG_MASK_ON) { 133 + memset(devkmsg_log_str, 0, DEVKMSG_STR_MAX_SIZE); 134 + strncpy(devkmsg_log_str, "on", 2); 135 + } else if (devkmsg_log == DEVKMSG_LOG_MASK_OFF) { 136 + memset(devkmsg_log_str, 0, DEVKMSG_STR_MAX_SIZE); 137 + 
strncpy(devkmsg_log_str, "off", 3); 138 + } 139 + /* else "ratelimit" which is set by default. */ 140 + 141 + /* 142 + * Sysctl cannot change it anymore. The kernel command line setting of 143 + * this parameter is to force the setting to be permanent throughout the 144 + * runtime of the system. This is a precation measure against userspace 145 + * trying to be a smarta** and attempting to change it up on us. 146 + */ 147 + devkmsg_log |= DEVKMSG_LOG_MASK_LOCK; 148 + 149 + return 0; 150 + } 151 + __setup("printk.devkmsg=", control_devkmsg); 152 + 153 + char devkmsg_log_str[DEVKMSG_STR_MAX_SIZE] = "ratelimit"; 154 + 155 + int devkmsg_sysctl_set_loglvl(struct ctl_table *table, int write, 156 + void __user *buffer, size_t *lenp, loff_t *ppos) 157 + { 158 + char old_str[DEVKMSG_STR_MAX_SIZE]; 159 + unsigned int old; 160 + int err; 161 + 162 + if (write) { 163 + if (devkmsg_log & DEVKMSG_LOG_MASK_LOCK) 164 + return -EINVAL; 165 + 166 + old = devkmsg_log; 167 + strncpy(old_str, devkmsg_log_str, DEVKMSG_STR_MAX_SIZE); 168 + } 169 + 170 + err = proc_dostring(table, write, buffer, lenp, ppos); 171 + if (err) 172 + return err; 173 + 174 + if (write) { 175 + err = __control_devkmsg(devkmsg_log_str); 176 + 177 + /* 178 + * Do not accept an unknown string OR a known string with 179 + * trailing crap... 180 + */ 181 + if (err < 0 || (err + 1 != *lenp)) { 182 + 183 + /* ... and restore old setting. */ 184 + devkmsg_log = old; 185 + strncpy(devkmsg_log_str, old_str, DEVKMSG_STR_MAX_SIZE); 186 + 187 + return -EINVAL; 188 + } 189 + } 190 + 191 + return 0; 192 + } 87 193 88 194 /* 89 195 * Number of registered extended console drivers. 
··· 718 614 u64 seq; 719 615 u32 idx; 720 616 enum log_flags prev; 617 + struct ratelimit_state rs; 721 618 struct mutex lock; 722 619 char buf[CONSOLE_EXT_LOG_MAX]; 723 620 }; ··· 728 623 char *buf, *line; 729 624 int level = default_message_loglevel; 730 625 int facility = 1; /* LOG_USER */ 626 + struct file *file = iocb->ki_filp; 627 + struct devkmsg_user *user = file->private_data; 731 628 size_t len = iov_iter_count(from); 732 629 ssize_t ret = len; 733 630 734 - if (len > LOG_LINE_MAX) 631 + if (!user || len > LOG_LINE_MAX) 735 632 return -EINVAL; 633 + 634 + /* Ignore when user logging is disabled. */ 635 + if (devkmsg_log & DEVKMSG_LOG_MASK_OFF) 636 + return len; 637 + 638 + /* Ratelimit when not explicitly enabled. */ 639 + if (!(devkmsg_log & DEVKMSG_LOG_MASK_ON)) { 640 + if (!___ratelimit(&user->rs, current->comm)) 641 + return ret; 642 + } 643 + 736 644 buf = kmalloc(len+1, GFP_KERNEL); 737 645 if (buf == NULL) 738 646 return -ENOMEM; ··· 918 800 struct devkmsg_user *user; 919 801 int err; 920 802 921 - /* write-only does not need any file context */ 922 - if ((file->f_flags & O_ACCMODE) == O_WRONLY) 923 - return 0; 803 + if (devkmsg_log & DEVKMSG_LOG_MASK_OFF) 804 + return -EPERM; 924 805 925 - err = check_syslog_permissions(SYSLOG_ACTION_READ_ALL, 926 - SYSLOG_FROM_READER); 927 - if (err) 928 - return err; 806 + /* write-only does not need any file context */ 807 + if ((file->f_flags & O_ACCMODE) != O_WRONLY) { 808 + err = check_syslog_permissions(SYSLOG_ACTION_READ_ALL, 809 + SYSLOG_FROM_READER); 810 + if (err) 811 + return err; 812 + } 929 813 930 814 user = kmalloc(sizeof(struct devkmsg_user), GFP_KERNEL); 931 815 if (!user) 932 816 return -ENOMEM; 817 + 818 + ratelimit_default_init(&user->rs); 819 + ratelimit_set_flags(&user->rs, RATELIMIT_MSG_ON_RELEASE); 933 820 934 821 mutex_init(&user->lock); 935 822 ··· 953 830 954 831 if (!user) 955 832 return 0; 833 + 834 + ratelimit_state_exit(&user->rs); 956 835 957 836 mutex_destroy(&user->lock); 958 837 
kfree(user); ··· 1111 986 MODULE_PARM_DESC(ignore_loglevel, 1112 987 "ignore loglevel setting (prints all kernel messages to the console)"); 1113 988 989 + static bool suppress_message_printing(int level) 990 + { 991 + return (level >= console_loglevel && !ignore_loglevel); 992 + } 993 + 1114 994 #ifdef CONFIG_BOOT_PRINTK_DELAY 1115 995 1116 996 static int boot_delay; /* msecs delay after each printk during bootup */ ··· 1145 1015 unsigned long timeout; 1146 1016 1147 1017 if ((boot_delay == 0 || system_state != SYSTEM_BOOTING) 1148 - || (level >= console_loglevel && !ignore_loglevel)) { 1018 + || suppress_message_printing(level)) { 1149 1019 return; 1150 1020 } 1151 1021 ··· 1569 1439 1570 1440 trace_console(text, len); 1571 1441 1572 - if (level >= console_loglevel && !ignore_loglevel) 1573 - return; 1574 1442 if (!console_drivers) 1575 1443 return; 1576 1444 ··· 1930 1802 } 1931 1803 EXPORT_SYMBOL(printk_emit); 1932 1804 1933 - int vprintk_default(const char *fmt, va_list args) 1805 + #ifdef CONFIG_PRINTK 1806 + #define define_pr_level(func, loglevel) \ 1807 + asmlinkage __visible void func(const char *fmt, ...) 
\ 1808 + { \ 1809 + va_list args; \ 1810 + \ 1811 + va_start(args, fmt); \ 1812 + vprintk_default(loglevel, fmt, args); \ 1813 + va_end(args); \ 1814 + } \ 1815 + EXPORT_SYMBOL(func) 1816 + 1817 + define_pr_level(__pr_emerg, LOGLEVEL_EMERG); 1818 + define_pr_level(__pr_alert, LOGLEVEL_ALERT); 1819 + define_pr_level(__pr_crit, LOGLEVEL_CRIT); 1820 + define_pr_level(__pr_err, LOGLEVEL_ERR); 1821 + define_pr_level(__pr_warn, LOGLEVEL_WARNING); 1822 + define_pr_level(__pr_notice, LOGLEVEL_NOTICE); 1823 + define_pr_level(__pr_info, LOGLEVEL_INFO); 1824 + #endif 1825 + 1826 + int vprintk_default(int level, const char *fmt, va_list args) 1934 1827 { 1935 1828 int r; 1936 1829 ··· 1961 1812 return r; 1962 1813 } 1963 1814 #endif 1964 - r = vprintk_emit(0, LOGLEVEL_DEFAULT, NULL, 0, fmt, args); 1815 + r = vprintk_emit(0, level, NULL, 0, fmt, args); 1965 1816 1966 1817 return r; 1967 1818 } ··· 1994 1845 int r; 1995 1846 1996 1847 va_start(args, fmt); 1997 - r = vprintk_func(fmt, args); 1848 + r = vprintk_func(LOGLEVEL_DEFAULT, fmt, args); 1998 1849 va_end(args); 1999 1850 2000 1851 return r; ··· 2037 1888 static size_t msg_print_text(const struct printk_log *msg, enum log_flags prev, 2038 1889 bool syslog, char *buf, size_t size) { return 0; } 2039 1890 static size_t cont_print_text(char *text, size_t size) { return 0; } 1891 + static bool suppress_message_printing(int level) { return false; } 2040 1892 2041 1893 /* Still needs to be defined for users */ 2042 1894 DEFINE_PER_CPU(printk_func_t, printk_func); ··· 2317 2167 if (!cont.len) 2318 2168 goto out; 2319 2169 2170 + if (suppress_message_printing(cont.level)) { 2171 + cont.cons = cont.len; 2172 + if (cont.flushed) 2173 + cont.len = 0; 2174 + goto out; 2175 + } 2176 + 2320 2177 /* 2321 2178 * We still queue earlier records, likely because the console was 2322 2179 * busy. 
The earlier ones need to be printed before this one, we ··· 2427 2270 break; 2428 2271 2429 2272 msg = log_from_idx(console_idx); 2430 - if (msg->flags & LOG_NOCONS) { 2273 + level = msg->level; 2274 + if ((msg->flags & LOG_NOCONS) || 2275 + suppress_message_printing(level)) { 2431 2276 /* 2432 2277 * Skip record we have buffered and already printed 2433 - * directly to the console when we received it. 2278 + * directly to the console when we received it, and 2279 + * record that has level above the console loglevel. 2434 2280 */ 2435 2281 console_idx = log_next(console_idx); 2436 2282 console_seq++; ··· 2447 2287 goto skip; 2448 2288 } 2449 2289 2450 - level = msg->level; 2451 2290 len += msg_print_text(msg, console_prev, false, 2452 2291 text + len, sizeof(text) - len); 2453 2292 if (nr_ext_console_drivers) {
+32 -2
kernel/relay.c
··· 451 451 if (!dentry) 452 452 goto free_buf; 453 453 relay_set_buf_dentry(buf, dentry); 454 + } else { 455 + /* Only retrieve global info, nothing more, nothing less */ 456 + dentry = chan->cb->create_buf_file(NULL, NULL, 457 + S_IRUSR, buf, 458 + &chan->is_global); 459 + if (WARN_ON(dentry)) 460 + goto free_buf; 454 461 } 455 462 456 463 buf->cpu = cpu; ··· 569 562 * attributes specified. The created channel buffer files 570 563 * will be named base_filename0...base_filenameN-1. File 571 564 * permissions will be %S_IRUSR. 565 + * 566 + * If opening a buffer (@parent = NULL) that you later wish to register 567 + * in a filesystem, call relay_late_setup_files() once the @parent dentry 568 + * is available. 572 569 */ 573 570 struct rchan *relay_open(const char *base_filename, 574 571 struct dentry *parent, ··· 651 640 * 652 641 * Returns 0 if successful, non-zero otherwise. 653 642 * 654 - * Use to setup files for a previously buffer-only channel. 655 - * Useful to do early tracing in kernel, before VFS is up, for example. 643 + * Use to setup files for a previously buffer-only channel created 644 + * by relay_open() with a NULL parent dentry. 645 + * 646 + * For example, this is useful for performing early tracing in the kernel, 647 + * before VFS is up and then exposing the early results once the dentry 648 + * is available. 
656 649 */ 657 650 int relay_late_setup_files(struct rchan *chan, 658 651 const char *base_filename, ··· 681 666 } 682 667 chan->has_base_filename = 1; 683 668 chan->parent = parent; 669 + 670 + if (chan->is_global) { 671 + err = -EINVAL; 672 + if (!WARN_ON_ONCE(!chan->buf[0])) { 673 + dentry = relay_create_buf_file(chan, chan->buf[0], 0); 674 + if (dentry && !WARN_ON_ONCE(!chan->is_global)) { 675 + relay_set_buf_dentry(chan->buf[0], dentry); 676 + err = 0; 677 + } 678 + } 679 + mutex_unlock(&relay_channels_mutex); 680 + return err; 681 + } 682 + 684 683 curr_cpu = get_cpu(); 685 684 /* 686 685 * The CPU hotplug notifier ran before us and created buffers with ··· 735 706 736 707 return err; 737 708 } 709 + EXPORT_SYMBOL_GPL(relay_late_setup_files); 738 710 739 711 /** 740 712 * relay_switch_subbuf - switch to a new sub-buffer
+7
kernel/sysctl.c
··· 814 814 .extra2 = &ten_thousand, 815 815 }, 816 816 { 817 + .procname = "printk_devkmsg", 818 + .data = devkmsg_log_str, 819 + .maxlen = DEVKMSG_STR_MAX_SIZE, 820 + .mode = 0644, 821 + .proc_handler = devkmsg_sysctl_set_loglvl, 822 + }, 823 + { 817 824 .procname = "dmesg_restrict", 818 825 .data = &dmesg_restrict, 819 826 .maxlen = sizeof(int),
+6 -4
kernel/task_work.c
··· 29 29 struct callback_head *head; 30 30 31 31 do { 32 - head = ACCESS_ONCE(task->task_works); 32 + head = READ_ONCE(task->task_works); 33 33 if (unlikely(head == &work_exited)) 34 34 return -ESRCH; 35 35 work->next = head; ··· 57 57 struct callback_head **pprev = &task->task_works; 58 58 struct callback_head *work; 59 59 unsigned long flags; 60 + 61 + if (likely(!task->task_works)) 62 + return NULL; 60 63 /* 61 64 * If cmpxchg() fails we continue without updating pprev. 62 65 * Either we raced with task_work_add() which added the ··· 67 64 * we raced with task_work_run(), *pprev == NULL/exited. 68 65 */ 69 66 raw_spin_lock_irqsave(&task->pi_lock, flags); 70 - while ((work = ACCESS_ONCE(*pprev))) { 71 - smp_read_barrier_depends(); 67 + while ((work = lockless_dereference(*pprev))) { 72 68 if (work->func != func) 73 69 pprev = &work->next; 74 70 else if (cmpxchg(pprev, work, work->next) == work) ··· 97 95 * work_exited unless the list is empty. 98 96 */ 99 97 do { 100 - work = ACCESS_ONCE(task->task_works); 98 + work = READ_ONCE(task->task_works); 101 99 head = !work && (task->flags & PF_EXITING) ? 102 100 &work_exited : NULL; 103 101 } while (cmpxchg(&task->task_works, work, head) != work);
+11
lib/Kconfig.debug
··· 721 721 722 722 For more details, see Documentation/kcov.txt. 723 723 724 + config KCOV_INSTRUMENT_ALL 725 + bool "Instrument all code by default" 726 + depends on KCOV 727 + default y if KCOV 728 + help 729 + If you are doing generic system call fuzzing (like e.g. syzkaller), 730 + then you will want to instrument the whole kernel and you should 731 + say y here. If you are doing more targeted fuzzing (like e.g. 732 + filesystem fuzzing with AFL) then you will want to enable coverage 733 + for more specific subsets of files, and should say n here. 734 + 724 735 config DEBUG_SHIRQ 725 736 bool "Debug shared IRQ handlers" 726 737 depends on DEBUG_KERNEL
+4 -12
lib/crc32.c
··· 979 979 int i; 980 980 int errors = 0; 981 981 int bytes = 0; 982 - struct timespec start, stop; 983 982 u64 nsec; 984 983 unsigned long flags; 985 984 ··· 998 999 local_irq_save(flags); 999 1000 local_irq_disable(); 1000 1001 1001 - getnstimeofday(&start); 1002 + nsec = ktime_get_ns(); 1002 1003 for (i = 0; i < 100; i++) { 1003 1004 if (test[i].crc32c_le != __crc32c_le(test[i].crc, test_buf + 1004 1005 test[i].start, test[i].length)) 1005 1006 errors++; 1006 1007 } 1007 - getnstimeofday(&stop); 1008 + nsec = ktime_get_ns() - nsec; 1008 1009 1009 1010 local_irq_restore(flags); 1010 1011 local_irq_enable(); 1011 - 1012 - nsec = stop.tv_nsec - start.tv_nsec + 1013 - 1000000000 * (stop.tv_sec - start.tv_sec); 1014 1012 1015 1013 pr_info("crc32c: CRC_LE_BITS = %d\n", CRC_LE_BITS); 1016 1014 ··· 1061 1065 int i; 1062 1066 int errors = 0; 1063 1067 int bytes = 0; 1064 - struct timespec start, stop; 1065 1068 u64 nsec; 1066 1069 unsigned long flags; 1067 1070 ··· 1083 1088 local_irq_save(flags); 1084 1089 local_irq_disable(); 1085 1090 1086 - getnstimeofday(&start); 1091 + nsec = ktime_get_ns(); 1087 1092 for (i = 0; i < 100; i++) { 1088 1093 if (test[i].crc_le != crc32_le(test[i].crc, test_buf + 1089 1094 test[i].start, test[i].length)) ··· 1093 1098 test[i].start, test[i].length)) 1094 1099 errors++; 1095 1100 } 1096 - getnstimeofday(&stop); 1101 + nsec = ktime_get_ns() - nsec; 1097 1102 1098 1103 local_irq_restore(flags); 1099 1104 local_irq_enable(); 1100 - 1101 - nsec = stop.tv_nsec - start.tv_nsec + 1102 - 1000000000 * (stop.tv_sec - start.tv_sec); 1103 1105 1104 1106 pr_info("crc32: CRC_LE_BITS = %d, CRC_BE BITS = %d\n", 1105 1107 CRC_LE_BITS, CRC_BE_BITS);
+1 -2
lib/iommu-helper.c
··· 29 29 index = bitmap_find_next_zero_area(map, size, start, nr, align_mask); 30 30 if (index < size) { 31 31 if (iommu_is_span_boundary(index, nr, shift, boundary_size)) { 32 - /* we could do more effectively */ 33 - start = index + 1; 32 + start = ALIGN(shift + index, boundary_size) - shift; 34 33 goto again; 35 34 } 36 35 bitmap_set(map, index, nr);
+10 -4
lib/radix-tree.c
··· 277 277 278 278 /* 279 279 * Even if the caller has preloaded, try to allocate from the 280 - * cache first for the new node to get accounted. 280 + * cache first for the new node to get accounted to the memory 281 + * cgroup. 281 282 */ 282 283 ret = kmem_cache_alloc(radix_tree_node_cachep, 283 - gfp_mask | __GFP_ACCOUNT | __GFP_NOWARN); 284 + gfp_mask | __GFP_NOWARN); 284 285 if (ret) 285 286 goto out; 286 287 ··· 304 303 kmemleak_update_trace(ret); 305 304 goto out; 306 305 } 307 - ret = kmem_cache_alloc(radix_tree_node_cachep, 308 - gfp_mask | __GFP_ACCOUNT); 306 + ret = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask); 309 307 out: 310 308 BUG_ON(radix_tree_is_internal_node(ret)); 311 309 return ret; ··· 350 350 struct radix_tree_preload *rtp; 351 351 struct radix_tree_node *node; 352 352 int ret = -ENOMEM; 353 + 354 + /* 355 + * Nodes preloaded by one cgroup can be used by another cgroup, so 356 + * they should never be accounted to any particular memory cgroup. 357 + */ 358 + gfp_mask &= ~__GFP_ACCOUNT; 353 359 354 360 preempt_disable(); 355 361 rtp = this_cpu_ptr(&radix_tree_preloads);
+6 -4
lib/ratelimit.c
··· 46 46 rs->begin = jiffies; 47 47 48 48 if (time_is_before_jiffies(rs->begin + rs->interval)) { 49 - if (rs->missed) 50 - printk(KERN_WARNING "%s: %d callbacks suppressed\n", 51 - func, rs->missed); 49 + if (rs->missed) { 50 + if (!(rs->flags & RATELIMIT_MSG_ON_RELEASE)) { 51 + pr_warn("%s: %d callbacks suppressed\n", func, rs->missed); 52 + rs->missed = 0; 53 + } 54 + } 52 55 rs->begin = jiffies; 53 56 rs->printed = 0; 54 - rs->missed = 0; 55 57 } 56 58 if (rs->burst && rs->burst > rs->printed) { 57 59 rs->printed++;
+1 -1
lib/ubsan.c
··· 308 308 return; 309 309 310 310 ubsan_prologue(&data->location, &flags); 311 - pr_err("%s address %pk with insufficient space\n", 311 + pr_err("%s address %p with insufficient space\n", 312 312 type_check_kinds[data->type_check_kind], 313 313 (void *) ptr); 314 314 pr_err("for an object of type %s\n", data->type->type_name);
+5 -1
mm/hugetlb.c
··· 2216 2216 * and reducing the surplus. 2217 2217 */ 2218 2218 spin_unlock(&hugetlb_lock); 2219 + 2220 + /* yield cpu to avoid soft lockup */ 2221 + cond_resched(); 2222 + 2219 2223 if (hstate_is_gigantic(h)) 2220 2224 ret = alloc_fresh_gigantic_page(h, nodes_allowed); 2221 2225 else ··· 4310 4306 pte = (pte_t *)pmd_alloc(mm, pud, addr); 4311 4307 } 4312 4308 } 4313 - BUG_ON(pte && !pte_none(*pte) && !pte_huge(*pte)); 4309 + BUG_ON(pte && pte_present(*pte) && !pte_huge(*pte)); 4314 4310 4315 4311 return pte; 4316 4312 }
+30 -37
mm/kasan/kasan.c
··· 442 442 kasan_poison_shadow(object, 443 443 round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE), 444 444 KASAN_KMALLOC_REDZONE); 445 - if (cache->flags & SLAB_KASAN) { 446 - struct kasan_alloc_meta *alloc_info = 447 - get_alloc_info(cache, object); 448 - alloc_info->state = KASAN_STATE_INIT; 449 - } 450 445 } 451 446 452 447 static inline int in_irqentry_text(unsigned long ptr) ··· 505 510 return (void *)object + cache->kasan_info.free_meta_offset; 506 511 } 507 512 513 + void kasan_init_slab_obj(struct kmem_cache *cache, const void *object) 514 + { 515 + struct kasan_alloc_meta *alloc_info; 516 + 517 + if (!(cache->flags & SLAB_KASAN)) 518 + return; 519 + 520 + alloc_info = get_alloc_info(cache, object); 521 + __memset(alloc_info, 0, sizeof(*alloc_info)); 522 + } 523 + 508 524 void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags) 509 525 { 510 526 kasan_kmalloc(cache, object, cache->object_size, flags); ··· 535 529 536 530 bool kasan_slab_free(struct kmem_cache *cache, void *object) 537 531 { 532 + s8 shadow_byte; 533 + 538 534 /* RCU slabs could be legally used after free within the RCU period */ 539 535 if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU)) 540 536 return false; 541 537 542 - if (likely(cache->flags & SLAB_KASAN)) { 543 - struct kasan_alloc_meta *alloc_info; 544 - struct kasan_free_meta *free_info; 545 - 546 - alloc_info = get_alloc_info(cache, object); 547 - free_info = get_free_info(cache, object); 548 - 549 - switch (alloc_info->state) { 550 - case KASAN_STATE_ALLOC: 551 - alloc_info->state = KASAN_STATE_QUARANTINE; 552 - quarantine_put(free_info, cache); 553 - set_track(&free_info->track, GFP_NOWAIT); 554 - kasan_poison_slab_free(cache, object); 555 - return true; 556 - case KASAN_STATE_QUARANTINE: 557 - case KASAN_STATE_FREE: 558 - pr_err("Double free"); 559 - dump_stack(); 560 - break; 561 - default: 562 - break; 563 - } 538 + shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object)); 539 + if (shadow_byte < 0 || 
shadow_byte >= KASAN_SHADOW_SCALE_SIZE) { 540 + kasan_report_double_free(cache, object, shadow_byte); 541 + return true; 564 542 } 565 - return false; 543 + 544 + kasan_poison_slab_free(cache, object); 545 + 546 + if (unlikely(!(cache->flags & SLAB_KASAN))) 547 + return false; 548 + 549 + set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT); 550 + quarantine_put(get_free_info(cache, object), cache); 551 + return true; 566 552 } 567 553 568 554 void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size, ··· 563 565 unsigned long redzone_start; 564 566 unsigned long redzone_end; 565 567 566 - if (flags & __GFP_RECLAIM) 568 + if (gfpflags_allow_blocking(flags)) 567 569 quarantine_reduce(); 568 570 569 571 if (unlikely(object == NULL)) ··· 577 579 kasan_unpoison_shadow(object, size); 578 580 kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start, 579 581 KASAN_KMALLOC_REDZONE); 580 - if (cache->flags & SLAB_KASAN) { 581 - struct kasan_alloc_meta *alloc_info = 582 - get_alloc_info(cache, object); 583 582 584 - alloc_info->state = KASAN_STATE_ALLOC; 585 - alloc_info->alloc_size = size; 586 - set_track(&alloc_info->track, flags); 587 - } 583 + if (cache->flags & SLAB_KASAN) 584 + set_track(&get_alloc_info(cache, object)->alloc_track, flags); 588 585 } 589 586 EXPORT_SYMBOL(kasan_kmalloc); 590 587 ··· 589 596 unsigned long redzone_start; 590 597 unsigned long redzone_end; 591 598 592 - if (flags & __GFP_RECLAIM) 599 + if (gfpflags_allow_blocking(flags)) 593 600 quarantine_reduce(); 594 601 595 602 if (unlikely(ptr == NULL))
+4 -11
mm/kasan/kasan.h
··· 59 59 * Structures to keep alloc and free tracks * 60 60 */ 61 61 62 - enum kasan_state { 63 - KASAN_STATE_INIT, 64 - KASAN_STATE_ALLOC, 65 - KASAN_STATE_QUARANTINE, 66 - KASAN_STATE_FREE 67 - }; 68 - 69 62 #define KASAN_STACK_DEPTH 64 70 63 71 64 struct kasan_track { ··· 67 74 }; 68 75 69 76 struct kasan_alloc_meta { 70 - struct kasan_track track; 71 - u32 state : 2; /* enum kasan_state */ 72 - u32 alloc_size : 30; 77 + struct kasan_track alloc_track; 78 + struct kasan_track free_track; 73 79 }; 74 80 75 81 struct qlist_node { ··· 79 87 * Otherwise it might be used for the allocator freelist. 80 88 */ 81 89 struct qlist_node quarantine_link; 82 - struct kasan_track track; 83 90 }; 84 91 85 92 struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache, ··· 99 108 100 109 void kasan_report(unsigned long addr, size_t size, 101 110 bool is_write, unsigned long ip); 111 + void kasan_report_double_free(struct kmem_cache *cache, void *object, 112 + s8 shadow); 102 113 103 114 #if defined(CONFIG_SLAB) || defined(CONFIG_SLUB) 104 115 void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
+13 -6
mm/kasan/quarantine.c
··· 144 144 static void qlink_free(struct qlist_node *qlink, struct kmem_cache *cache) 145 145 { 146 146 void *object = qlink_to_object(qlink, cache); 147 - struct kasan_alloc_meta *alloc_info = get_alloc_info(cache, object); 148 147 unsigned long flags; 149 148 150 - local_irq_save(flags); 151 - alloc_info->state = KASAN_STATE_FREE; 149 + if (IS_ENABLED(CONFIG_SLAB)) 150 + local_irq_save(flags); 151 + 152 152 ___cache_free(cache, object, _THIS_IP_); 153 - local_irq_restore(flags); 153 + 154 + if (IS_ENABLED(CONFIG_SLAB)) 155 + local_irq_restore(flags); 154 156 } 155 157 156 158 static void qlist_free_all(struct qlist_head *q, struct kmem_cache *cache) ··· 198 196 199 197 void quarantine_reduce(void) 200 198 { 201 - size_t new_quarantine_size; 199 + size_t new_quarantine_size, percpu_quarantines; 202 200 unsigned long flags; 203 201 struct qlist_head to_free = QLIST_INIT; 204 202 size_t size_to_free = 0; ··· 216 214 */ 217 215 new_quarantine_size = (READ_ONCE(totalram_pages) << PAGE_SHIFT) / 218 216 QUARANTINE_FRACTION; 219 - new_quarantine_size -= QUARANTINE_PERCPU_SIZE * num_online_cpus(); 217 + percpu_quarantines = QUARANTINE_PERCPU_SIZE * num_online_cpus(); 218 + if (WARN_ONCE(new_quarantine_size < percpu_quarantines, 219 + "Too little memory, disabling global KASAN quarantine.\n")) 220 + new_quarantine_size = 0; 221 + else 222 + new_quarantine_size -= percpu_quarantines; 220 223 WRITE_ONCE(quarantine_size, new_quarantine_size); 221 224 222 225 last = global_quarantine.head;
+46 -39
mm/kasan/report.c
··· 116 116 sizeof(init_thread_union.stack)); 117 117 } 118 118 119 + static DEFINE_SPINLOCK(report_lock); 120 + 121 + static void kasan_start_report(unsigned long *flags) 122 + { 123 + /* 124 + * Make sure we don't end up in a loop. 125 + */ 126 + kasan_disable_current(); 127 + spin_lock_irqsave(&report_lock, *flags); 128 + pr_err("==================================================================\n"); 129 + } 130 + 131 + static void kasan_end_report(unsigned long *flags) 132 + { 133 + pr_err("==================================================================\n"); 134 + add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); 135 + spin_unlock_irqrestore(&report_lock, *flags); 136 + kasan_enable_current(); 137 + } 138 + 139 + static void print_track(struct kasan_track *track) 140 + { 141 + pr_err("PID = %u\n", track->pid); ··· 149 129 } 150 130 }
pr_err("Allocation:\n"); 178 - print_track(&alloc_info->track); 179 - pr_err("Deallocation:\n"); 180 - print_track(&free_info->track); 181 - break; 182 - } 142 + 143 + pr_err("Allocated:\n"); 144 + print_track(&alloc_info->alloc_track); 145 + pr_err("Freed:\n"); 146 + print_track(&alloc_info->free_track); 147 + } 148 + 149 + void kasan_report_double_free(struct kmem_cache *cache, void *object, 150 + s8 shadow) 151 + { 152 + unsigned long flags; 153 + 154 + kasan_start_report(&flags); 155 + pr_err("BUG: Double free or freeing an invalid pointer\n"); 156 + pr_err("Unexpected shadow byte: 0x%hhX\n", shadow); 157 + kasan_object_err(cache, object); 158 + kasan_end_report(&flags); 183 159 } 184 160 185 161 static void print_address_description(struct kasan_access_info *info) ··· 191 175 struct kmem_cache *cache = page->slab_cache; 192 176 object = nearest_obj(cache, page, 193 177 (void *)info->access_addr); 194 - kasan_object_err(cache, page, object, 195 - "kasan: bad access detected"); 178 + kasan_object_err(cache, object); 196 179 return; 197 180 } 198 181 dump_page(page, "kasan: bad access detected"); ··· 256 241 } 257 242 } 258 243 259 - static DEFINE_SPINLOCK(report_lock); 260 - 261 244 static void kasan_report_error(struct kasan_access_info *info) 262 245 { 263 246 unsigned long flags; 264 247 const char *bug_type; 265 248 266 - /* 267 - * Make sure we don't end up in loop. 
268 - */ 269 - kasan_disable_current(); 270 - spin_lock_irqsave(&report_lock, flags); 271 - pr_err("==================================================================\n"); 249 + kasan_start_report(&flags); 250 + 272 251 if (info->access_addr < 273 252 kasan_shadow_to_mem((void *)KASAN_SHADOW_START)) { 274 253 if ((unsigned long)info->access_addr < PAGE_SIZE) ··· 283 274 print_address_description(info); 284 275 print_shadow_for_address(info->first_bad_addr); 285 276 } 286 - pr_err("==================================================================\n"); 287 - add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); 288 - spin_unlock_irqrestore(&report_lock, flags); 289 - kasan_enable_current(); 277 + 278 + kasan_end_report(&flags); 290 279 } 291 280 292 281 void kasan_report(unsigned long addr, size_t size,
+9
mm/memcontrol.c
··· 2559 2559 return 0; 2560 2560 2561 2561 mctz = soft_limit_tree_node(pgdat->node_id); 2562 + 2563 + /* 2564 + * Do not even bother to check the largest node if the root 2565 + * is empty. Do it lockless to prevent lock bouncing. Races 2566 + * are acceptable as soft limit is best effort anyway. 2567 + */ 2568 + if (RB_EMPTY_ROOT(&mctz->rb_root)) 2569 + return 0; 2570 + 2562 2571 /* 2563 2572 * This loop can run a while, specially if mem_cgroup's continuously 2564 2573 * keep exceeding their soft limit and putting the system under
+3
mm/memory.c
··· 2642 2642 if (page == swapcache) { 2643 2643 do_page_add_anon_rmap(page, vma, fe->address, exclusive); 2644 2644 mem_cgroup_commit_charge(page, memcg, true, false); 2645 + activate_page(page); 2645 2646 } else { /* ksm created a completely new copy */ 2646 2647 page_add_new_anon_rmap(page, vma, fe->address, false); 2647 2648 mem_cgroup_commit_charge(page, memcg, false, false); ··· 3134 3133 3135 3134 if (pmd_none(*fe->pmd)) { 3136 3135 fe->prealloc_pte = pte_alloc_one(fe->vma->vm_mm, fe->address); 3136 + if (!fe->prealloc_pte) 3137 + goto out; 3137 3138 smp_wmb(); /* See comment in __pte_alloc() */ 3138 3139 } 3139 3140
+5 -3
mm/mmap.c
··· 2653 2653 * anonymous maps. eventually we may be able to do some 2654 2654 * brk-specific accounting here. 2655 2655 */ 2656 - static int do_brk(unsigned long addr, unsigned long len) 2656 + static int do_brk(unsigned long addr, unsigned long request) 2657 2657 { 2658 2658 struct mm_struct *mm = current->mm; 2659 2659 struct vm_area_struct *vma, *prev; 2660 - unsigned long flags; 2660 + unsigned long flags, len; 2661 2661 struct rb_node **rb_link, *rb_parent; 2662 2662 pgoff_t pgoff = addr >> PAGE_SHIFT; 2663 2663 int error; 2664 2664 2665 - len = PAGE_ALIGN(len); 2665 + len = PAGE_ALIGN(request); 2666 + if (len < request) 2667 + return -ENOMEM; 2666 2668 if (!len) 2667 2669 return 0; 2668 2670
+2 -2
mm/page_alloc.c
··· 5276 5276 setup_zone_pageset(zone); 5277 5277 } 5278 5278 5279 - static noinline __init_refok 5279 + static noinline __ref 5280 5280 int zone_wait_table_init(struct zone *zone, unsigned long zone_size_pages) 5281 5281 { 5282 5282 int i; ··· 5903 5903 } 5904 5904 } 5905 5905 5906 - static void __init_refok alloc_node_mem_map(struct pglist_data *pgdat) 5906 + static void __ref alloc_node_mem_map(struct pglist_data *pgdat) 5907 5907 { 5908 5908 unsigned long __maybe_unused start = 0; 5909 5909 unsigned long __maybe_unused offset = 0;
+4 -2
mm/slab.c
··· 1877 1877 return cpu_cache; 1878 1878 } 1879 1879 1880 - static int __init_refok setup_cpu_cache(struct kmem_cache *cachep, gfp_t gfp) 1880 + static int __ref setup_cpu_cache(struct kmem_cache *cachep, gfp_t gfp) 1881 1881 { 1882 1882 if (slab_state >= FULL) 1883 1883 return enable_cpucache(cachep, gfp); ··· 2604 2604 } 2605 2605 2606 2606 for (i = 0; i < cachep->num; i++) { 2607 + objp = index_to_obj(cachep, page, i); 2608 + kasan_init_slab_obj(cachep, objp); 2609 + 2607 2610 /* constructor could break poison info */ 2608 2611 if (DEBUG == 0 && cachep->ctor) { 2609 - objp = index_to_obj(cachep, page, i); 2610 2612 kasan_unpoison_object_data(cachep, objp); 2611 2613 cachep->ctor(objp); 2612 2614 kasan_poison_object_data(cachep, objp);
+1
mm/slub.c
··· 1384 1384 void *object) 1385 1385 { 1386 1386 setup_object_debug(s, page, object); 1387 + kasan_init_slab_obj(s, object); 1387 1388 if (unlikely(s->ctor)) { 1388 1389 kasan_unpoison_object_data(s, object); 1389 1390 s->ctor(object);
+1 -1
mm/sparse-vmemmap.c
··· 36 36 * Uses the main allocators if they are available, else bootmem. 37 37 */ 38 38 39 - static void * __init_refok __earlyonly_bootmem_alloc(int node, 39 + static void * __ref __earlyonly_bootmem_alloc(int node, 40 40 unsigned long size, 41 41 unsigned long align, 42 42 unsigned long goal)
+1 -1
mm/sparse.c
··· 59 59 #endif 60 60 61 61 #ifdef CONFIG_SPARSEMEM_EXTREME 62 - static struct mem_section noinline __init_refok *sparse_index_alloc(int nid) 62 + static noinline struct mem_section __ref *sparse_index_alloc(int nid) 63 63 { 64 64 struct mem_section *section = NULL; 65 65 unsigned long array_size = SECTIONS_PER_ROOT *
+1 -1
mm/vmscan.c
··· 2561 2561 shrink_node_memcg(pgdat, memcg, sc, &lru_pages); 2562 2562 node_lru_pages += lru_pages; 2563 2563 2564 - if (!global_reclaim(sc)) 2564 + if (memcg) 2565 2565 shrink_slab(sc->gfp_mask, pgdat->node_id, 2566 2566 memcg, sc->nr_scanned - scanned, 2567 2567 lru_pages);
+1 -1
scripts/Makefile.lib
··· 138 138 139 139 ifeq ($(CONFIG_KCOV),y) 140 140 _c_flags += $(if $(patsubst n%,, \ 141 - $(KCOV_INSTRUMENT_$(basetarget).o)$(KCOV_INSTRUMENT)y), \ 141 + $(KCOV_INSTRUMENT_$(basetarget).o)$(KCOV_INSTRUMENT)$(CONFIG_KCOV_INSTRUMENT_ALL)), \ 142 142 $(CFLAGS_KCOV)) 143 143 endif 144 144
+21 -8
scripts/checkpatch.pl
··· 55 55 my $codespell = 0;
56 56 my $codespellfile = "/usr/share/codespell/dictionary.txt";
57 57 my $color = 1;
58 + my $allow_c99_comments = 1;
58 59 
59 60 sub help {
60 61 	my ($exitcode) = @_;
··· 228 227 }
229 228 }
230 229 
230 + #if no filenames are given, push '-' to read patch from stdin
231 231 if ($#ARGV < 0) {
232 - 	print "$P: no input files\n";
233 - 	exit(1);
232 + 	push(@ARGV, '-');
234 233 }
235 234 
236 235 sub hash_save_array_words {
··· 1143 1142 } elsif ($res =~ /^.\s*\#\s*(?:error|warning)\s+(.*)\b/) {
1144 1143 	my $clean = 'X' x length($1);
1145 1144 	$res =~ s@(\#\s*(?:error|warning)\s+).*@$1$clean@;
1145 + }
1146 + 
1147 + if ($allow_c99_comments && $res =~ m@(//.*$)@) {
1148 + 	my $match = $1;
1149 + 	$res =~ s/\Q$match\E/"$;" x length($match)/e;
1146 1150 }
1147 1151 
1148 1152 return $res;
··· 2069 2063 my $is_patch = 0;
2070 2064 my $in_header_lines = $file ? 0 : 1;
2071 2065 my $in_commit_log = 0; #Scanning lines before patch
2066 + my $has_commit_log = 0; #Encountered lines before patch
2072 2067 my $commit_log_possible_stack_dump = 0;
2073 2068 my $commit_log_long_line = 0;
2074 2069 my $commit_log_has_diff = 0;
··· 2460 2453 
2461 2454 # Check for git id commit length and improperly formed commit descriptions
2462 2455 if ($in_commit_log && !$commit_log_possible_stack_dump &&
2463 - 	    $line !~ /^\s*(?:Link|Patchwork|http|BugLink):/i &&
2456 + 	    $line !~ /^\s*(?:Link|Patchwork|http|https|BugLink):/i &&
2464 2457 	    ($line =~ /\bcommit\s+[0-9a-f]{5,}\b/i ||
2465 - 	     ($line =~ /\b[0-9a-f]{12,40}\b/i &&
2458 + 	     ($line =~ /(?:\s|^)[0-9a-f]{12,40}(?:[\s"'\(\[]|$)/i &&
2466 2459 	      $line !~ /[\<\[][0-9a-f]{12,40}[\>\]]/i &&
2467 2460 	      $line !~ /\bfixes:\s*[0-9a-f]{12,40}/i))) {
2468 2461 	my $init_char = "c";
··· 2567 2560 $rawline =~ /^(commit\b|from\b|[\w-]+:).*$/i)) {
2568 2561 	$in_header_lines = 0;
2569 2562 	$in_commit_log = 1;
2563 + 	$has_commit_log = 1;
2570 2564 }
2571 2565 
2572 2566 # Check if there is UTF-8 in a commit log when a mail header has explicitly
··· 2769 2761 # #defines with only strings
2770 2762 } elsif ($line =~ /^\+\s*$String\s*(?:\s*|,|\)\s*;)\s*$/ ||
2771 2763 	   $line =~ /^\+\s*#\s*define\s+\w+\s+$String$/) {
2764 + 	$msg_type = "";
2765 + 
2766 + # EFI_GUID is another special case
2767 + } elsif ($line =~ /^\+.*\bEFI_GUID\s*\(/) {
2772 2768 	$msg_type = "";
2773 2769 
2774 2770 # Otherwise set the alternate message types
··· 3349 3337 next if ($line =~ /^[^\+]/);
3350 3338 
3351 3339 # check for declarations of signed or unsigned without int
3352 - while ($line =~ m{($Declare)\s*(?!char\b|short\b|int\b|long\b)\s*($Ident)?\s*[=,;\[\)\(]}g) {
3340 + while ($line =~ m{\b($Declare)\s*(?!char\b|short\b|int\b|long\b)\s*($Ident)?\s*[=,;\[\)\(]}g) {
3353 3341 	my $type = $1;
3354 3342 	my $var = $2;
3355 3343 	$var = "" if (!defined $var);
··· 5734 5722 }
5735 5723 }
5736 5724 
5737 - # check for #defines like: 1 << <digit> that could be BIT(digit)
5738 - if ($line =~ /#\s*define\s+\w+\s+\(?\s*1\s*([ulUL]*)\s*\<\<\s*(?:\d+|$Ident)\s*\)?/) {
5725 + # check for #defines like: 1 << <digit> that could be BIT(digit), it is not exported to uapi
5726 + if ($realfile !~ m@^include/uapi/@ &&
5727 + 	    $line =~ /#\s*define\s+\w+\s+\(?\s*1\s*([ulUL]*)\s*\<\<\s*(?:\d+|$Ident)\s*\)?/) {
5739 5728 	my $ull = "";
5740 5729 	$ull = "_ULL" if (defined($1) && $1 =~ /ll/i);
5741 5730 	if (CHK("BIT_MACRO",
··· 6057 6044 ERROR("NOT_UNIFIED_DIFF",
6058 6045 	"Does not appear to be a unified-diff format patch\n");
6059 6046 }
6060 - if ($is_patch && $filename ne '-' && $chk_signoff && $signoff == 0) {
6047 + if ($is_patch && $has_commit_log && $chk_signoff && $signoff == 0) {
6061 6048 	ERROR("MISSING_SIGN_OFF",
6062 6049 	"Missing Signed-off-by: line(s)\n");
6063 6050 }
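The `$allow_c99_comments` hunk in the checkpatch.pl diff masks a trailing `//` comment with filler bytes of the same length before the later pattern checks run, so those checks never match inside comment text. A rough Python sketch of just that masking step (the function name and filler byte are illustrative; the Perl code uses `$;`, the subscript separator, as filler):

```python
import re

def mask_c99_comment(line, placeholder="\x1c"):
    """Replace a trailing C99 '//' comment with same-length filler so
    later checks cannot match inside comment text.  Rough sketch of
    checkpatch's $allow_c99_comments branch; the name and filler byte
    are made up here, the Perl code uses the $; separator."""
    m = re.search(r"//.*$", line)
    if m is None:
        return line
    return line[:m.start()] + placeholder * (m.end() - m.start())
```

Keeping the replacement the same length matters: later checks report column positions against the sanitised line, so offsets must still line up with the raw source line.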
+19 -1
scripts/get_maintainer.pl
··· 133 133 "author_pattern" => "^GitAuthor: (.*)",
134 134 "subject_pattern" => "^GitSubject: (.*)",
135 135 "stat_pattern" => "^(\\d+)\\t(\\d+)\\t\$file\$",
136 + "file_exists_cmd" => "git ls-files \$file",
136 137 );
137 138 
138 139 my %VCS_cmds_hg = (
··· 162 161 "author_pattern" => "^HgAuthor: (.*)",
163 162 "subject_pattern" => "^HgSubject: (.*)",
164 163 "stat_pattern" => "^(\\d+)\t(\\d+)\t\$file\$",
164 + "file_exists_cmd" => "hg files \$file",
165 165 );
166 166 
167 167 my $conf = which_conf(".get_maintainer.conf");
··· 432 430 	die "$P: file '${file}' not found\n";
433 431 }
434 432 }
435 - if ($from_filename) {
433 + if ($from_filename || vcs_file_exists($file)) {
436 434 	$file =~ s/^\Q${cur_path}\E//; #strip any absolute path
437 435 	$file =~ s/^\Q${lk_path}\E//; #or the path to the lk tree
438 436 	push(@files, $file);
··· 2124 2122 	}
2125 2123 	vcs_assign("modified commits", $total_commits, @signers);
2126 2124 }
2125 + }
2126 + 
2127 + sub vcs_file_exists {
2128 + 	my ($file) = @_;
2129 + 
2130 + 	my $exists;
2131 + 
2132 + 	my $vcs_used = vcs_exists();
2133 + 	return 0 if (!$vcs_used);
2134 + 
2135 + 	my $cmd = $VCS_cmds{"file_exists_cmd"};
2136 + 	$cmd =~ s/(\$\w+)/$1/eeg; # interpolate $cmd
2137 + 
2138 + 	$exists = &{$VCS_cmds{"execute_cmd"}}($cmd);
2139 + 
2140 + 	return $exists;
2127 2141 }
2128 2142 
2129 2143 sub uniq {
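The new `vcs_file_exists` helper in get_maintainer.pl expands the per-VCS `file_exists_cmd` template before executing it. A small Python sketch of just that template expansion (the function name and dict are illustrative; the Perl version does it with a double-eval substitution, `s/(\$\w+)/$1/eeg`):

```python
import re

# Per-VCS command templates, mirroring the new "file_exists_cmd" entries.
VCS_CMDS = {
    "git": "git ls-files $file",
    "hg": "hg files $file",
}

def build_file_exists_cmd(vcs, file):
    r"""Expand $file in the chosen template, like get_maintainer.pl's
    $cmd =~ s/(\$\w+)/$1/eeg double-eval trick (sketch only; the real
    script then runs the command and treats any output as 'exists')."""
    variables = {"file": file}
    return re.sub(r"\$(\w+)", lambda m: variables[m.group(1)], VCS_CMDS[vcs])
```

Asking the VCS whether it tracks the file is what lets `get_maintainer.pl some/path` work on deleted or not-yet-created paths without the explicit `-f` flag.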
+1 -1
tools/testing/radix-tree/linux/gfp.h
··· 1 1 #ifndef _GFP_H
2 2 #define _GFP_H
3 3 
4 - #define __GFP_BITS_SHIFT 22
4 + #define __GFP_BITS_SHIFT 26
5 5 #define __GFP_BITS_MASK ((gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
6 6 #define __GFP_WAIT 1
7 7 #define __GFP_ACCOUNT 0
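Raising `__GFP_BITS_SHIFT` from 22 to 26 widens the derived `__GFP_BITS_MASK` from 22 to 26 low bits, keeping the radix-tree test harness in step with the number of gfp flag bits. A quick Python check of that mask arithmetic (function name is illustrative):

```python
def gfp_bits_mask(shift):
    """Mirror the header's ((1 << __GFP_BITS_SHIFT) - 1) derivation:
    a mask with the low 'shift' bits set."""
    return (1 << shift) - 1

old_mask = gfp_bits_mask(22)  # 0x3fffff, before the change
new_mask = gfp_bits_mask(26)  # 0x3ffffff, after the change
```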