Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'akpm' (patches from Andrew Morton) into next

Merge more updates from Andrew Morton:

- Most of the rest of MM.

This includes "mark remap_file_pages syscall as deprecated" but the
actual "replace remap_file_pages syscall with emulation" is held
back. I guess we'll need to work out when to pull the trigger on
that one.

- various minor cleanups to obscure filesystems

- the drivers/rtc queue

- hfsplus updates

- ufs, hpfs, fatfs, affs, reiserfs

- Documentation/

- signals

- procfs

- cpu hotplug

- lib/idr.c

- rapidio

- sysctl

- ipc updates

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (171 commits)
ufs: sb mutex merge + mutex_destroy
powerpc: update comments for generic idle conversion
cris: update comments for generic idle conversion
idle: remove cpu_idle() forward declarations
nbd: zero from and len fields in NBD_CMD_DISCONNECT.
mm: convert some level-less printks to pr_*
MAINTAINERS: adi-buildroot-devel is moderated
MAINTAINERS: add linux-api for review of API/ABI changes
mm/kmemleak-test.c: use pr_fmt for logging
fs/dlm/debug_fs.c: replace seq_printf by seq_puts
fs/dlm/lockspace.c: convert simple_str to kstr
fs/dlm/config.c: convert simple_str to kstr
mm: mark remap_file_pages() syscall as deprecated
mm: memcontrol: remove unnecessary memcg argument from soft limit functions
mm: memcontrol: clean up memcg zoneinfo lookup
mm/memblock.c: call kmemleak directly from memblock_(alloc|free)
mm/mempool.c: update the kmemleak stack trace for mempool allocations
lib/radix-tree.c: update the kmemleak stack trace for radix tree allocations
mm: introduce kmemleak_update_trace()
mm/kmemleak.c: use %u to print ->checksum
...

+4204 -1799
+21 -1
Documentation/SubmittingPatches
··· 132 132 platform_set_drvdata(), but left the variable "dev" unused, 133 133 delete it. 134 134 135 + If your patch fixes a bug in a specific commit, e.g. you found an issue using 136 + git-bisect, please use the 'Fixes:' tag with the first 12 characters of the 137 + SHA-1 ID, and the one line summary. 138 + Example: 139 + 140 + Fixes: e21d2170f366 ("video: remove unnecessary platform_set_drvdata()") 141 + 142 + The following git-config settings can be used to add a pretty format for 143 + outputting the above style in the git log or git show commands 144 + 145 + [core] 146 + abbrev = 12 147 + [pretty] 148 + fixes = Fixes: %h (\"%s\") 135 149 136 150 3) Separate your changes. 137 151 ··· 457 443 have been included in the discussion 458 444 459 445 460 - 14) Using Reported-by:, Tested-by:, Reviewed-by: and Suggested-by: 446 + 14) Using Reported-by:, Tested-by:, Reviewed-by:, Suggested-by: and Fixes: 461 447 462 448 If this patch fixes a problem reported by somebody else, consider adding a 463 449 Reported-by: tag to credit the reporter for their contribution. Please ··· 511 497 idea was not posted in a public forum. That said, if we diligently credit our 512 498 idea reporters, they will, hopefully, be inspired to help us again in the 513 499 future. 500 + 501 + A Fixes: tag indicates that the patch fixes an issue in a previous commit. It 502 + is used to make it easy to determine where a bug originated, which can help 503 + review a bug fix. This tag also assists the stable kernel team in determining 504 + which stable kernel versions should receive your fix. This is the preferred 505 + method for indicating a bug fixed by the patch. See #2 above for more details. 514 506 515 507 516 508 15) The canonical patch format
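The git-config snippet above can be tried end-to-end in a throwaway repository; everything below (temporary directory, identity, file name) is placeholder demo data, and the commit subject is the example from the text:

```shell
# Throwaway repo to exercise the 'fixes' pretty format described above.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com        # placeholder identity for the demo
git config user.name "Demo User"
git config core.abbrev 12                    # [core] abbrev = 12
git config pretty.fixes 'Fixes: %h ("%s")'   # [pretty] fixes = ...
echo dummy > f.c
git add f.c
git commit -qm 'video: remove unnecessary platform_set_drvdata()'
# Prints a ready-to-paste Fixes: line with a 12-character abbreviated SHA-1:
git log -1 --pretty=fixes
```

`%h` honors `core.abbrev`, so the one command emits the tag in exactly the style SubmittingPatches asks for.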
+6 -7
Documentation/cgroups/memory.txt
··· 540 540 541 541 5.3 swappiness 542 542 543 - Similar to /proc/sys/vm/swappiness, but only affecting reclaim that is 544 - triggered by this cgroup's hard limit. The tunable in the root cgroup 545 - corresponds to the global swappiness setting. 543 + Overrides /proc/sys/vm/swappiness for the particular group. The tunable 544 + in the root cgroup corresponds to the global swappiness setting. 546 545 547 - Please note that unlike the global swappiness, memcg knob set to 0 548 - really prevents from any swapping even if there is a swap storage 549 - available. This might lead to memcg OOM killer if there are no file 550 - pages to reclaim. 546 + Please note that unlike global reclaim, limit reclaim enforces that 547 + swappiness 0 really prevents any swapping even if swap storage is 548 + available. This might trigger the memcg OOM killer if there are no 549 + file pages to reclaim. 551 550 552 551 5.4 failcnt 553 552
+3
Documentation/devicetree/bindings/rtc/haoyu,hym8563.txt
··· 9 9 - interrupts: rtc alarm/event interrupt 10 10 - #clock-cells: the value should be 0 11 11 12 + Optional properties: 13 + - clock-output-names: From common clock binding 14 + 12 15 Example: 13 16 14 17 hym8563: hym8563@51 {
+28
Documentation/devicetree/bindings/rtc/xgene-rtc.txt
··· 1 + * APM X-Gene Real Time Clock 2 + 3 + RTC controller for the APM X-Gene Real Time Clock 4 + 5 + Required properties: 6 + - compatible : Should be "apm,xgene-rtc" 7 + - reg: physical base address of the controller and length of memory mapped 8 + region. 9 + - interrupts: IRQ line for the RTC. 10 + - #clock-cells: Should be 1. 11 + - clocks: Reference to the clock entry. 12 + 13 + Example: 14 + 15 + rtcclk: rtcclk { 16 + compatible = "fixed-clock"; 17 + #clock-cells = <1>; 18 + clock-frequency = <100000000>; 19 + clock-output-names = "rtcclk"; 20 + }; 21 + 22 + rtc: rtc@10510000 { 23 + compatible = "apm,xgene-rtc"; 24 + reg = <0x0 0x10510000 0x0 0x400>; 25 + interrupts = <0x0 0x46 0x4>; 26 + #clock-cells = <1>; 27 + clocks = <&rtcclk 0>; 28 + };
+9
Documentation/filesystems/seq_file.txt
··· 54 54 wanting to see the full source for this module can find it at 55 55 http://lwn.net/Articles/22359/). 56 56 57 + Deprecated create_proc_entry 58 + 59 + Note that the above article uses create_proc_entry which was removed in 60 + kernel 3.10. Current versions require the following update 61 + 62 + - entry = create_proc_entry("sequence", 0, NULL); 63 + - if (entry) 64 + - entry->proc_fops = &ct_file_ops; 65 + + entry = proc_create("sequence", 0, NULL, &ct_file_ops); 57 66 58 67 The iterator interface 59 68
+5
Documentation/filesystems/vfat.txt
··· 172 172 To maintain backward compatibility, '-o nfs' is also accepted, 173 173 defaulting to stale_rw 174 174 175 + dos1xfloppy -- If set, use a fallback default BIOS Parameter Block 176 + configuration, determined by backing device size. These static 177 + parameters match defaults assumed by DOS 1.x for 160 kiB, 178 + 180 kiB, 320 kiB, and 360 kiB floppies and floppy images. 179 + 175 180 176 181 <bool>: 0,1,yes,no,true,false 177 182
+8
Documentation/kernel-parameters.txt
··· 2361 2361 timeout < 0: reboot immediately 2362 2362 Format: <timeout> 2363 2363 2364 + crash_kexec_post_notifiers 2365 + Run kdump after running panic-notifiers and dumping 2366 + kmsg. This is only for users who doubt that kdump always 2367 + succeeds in any situation. 2368 + Note that this also increases the risk of kdump failure, 2369 + because some panic notifiers can make the crashed 2370 + kernel more unstable. 2371 + 2364 2372 parkbd.port= [HW] Parallel port number the keyboard adapter is 2365 2373 connected to, default is 0. 2366 2374 Format: <parport#>
+1
Documentation/kmemleak.txt
··· 142 142 kmemleak_free - notify of a memory block freeing 143 143 kmemleak_free_part - notify of a partial memory block freeing 144 144 kmemleak_free_percpu - notify of a percpu memory block freeing 145 + kmemleak_update_trace - update object allocation stack trace 145 146 kmemleak_not_leak - mark an object as not a leak 146 147 kmemleak_ignore - do not scan or report an object as leak 147 148 kmemleak_scan_area - add scan areas inside a memory block
+2 -2
Documentation/memory-barriers.txt
··· 115 115 CPU 1 CPU 2 116 116 =============== =============== 117 117 { A == 1; B == 2 } 118 - A = 3; x = A; 119 - B = 4; y = B; 118 + A = 3; x = B; 119 + B = 4; y = A; 120 120 121 121 The set of accesses as seen by the memory system in the middle can be arranged 122 122 in 24 different combinations:
+21
Documentation/sysctl/kernel.txt
··· 77 77 - shmmni 78 78 - stop-a [ SPARC only ] 79 79 - sysrq ==> Documentation/sysrq.txt 80 + - sysctl_writes_strict 80 81 - tainted 81 82 - threads-max 82 83 - unknown_nmi_panic ··· 760 759 761 760 Note that if you change this from 0 to 1, already created segments 762 761 without users and with a dead originative process will be destroyed. 762 + 763 + ============================================================== 764 + 765 + sysctl_writes_strict: 766 + 767 + Control how file position affects the behavior of updating sysctl values 768 + via the /proc/sys interface: 769 + 770 + -1 - Legacy per-write sysctl value handling, with no printk warnings. 771 + Each write syscall must fully contain the sysctl value to be 772 + written, and multiple writes on the same sysctl file descriptor 773 + will rewrite the sysctl value, regardless of file position. 774 + 0 - (default) Same behavior as above, but warn about processes that 775 + perform writes to a sysctl file descriptor when the file position 776 + is not 0. 777 + 1 - Respect file position when writing sysctl strings. Multiple writes 778 + will append to the sysctl value buffer. Anything past the max length 779 + of the sysctl value buffer will be ignored. Writes to numeric sysctl 780 + entries must always be at file position 0 and the value must be 781 + fully contained in the buffer sent in the write syscall. 763 782 764 783 ============================================================== 765 784
+28
Documentation/vm/remap_file_pages.txt
··· 1 + The remap_file_pages() system call is used to create a nonlinear mapping, 2 + that is, a mapping in which the pages of the file are mapped into a 3 + nonsequential order in memory. The advantage of using remap_file_pages() 4 + over repeated calls to mmap(2) is that the former approach does not 5 + require the kernel to create additional VMA (Virtual Memory Area) data 6 + structures. 7 + 8 + Supporting nonlinear mappings requires a significant amount of non-trivial 9 + code in the kernel virtual memory subsystem, including hot paths. Also, to 10 + make nonlinear mappings work, the kernel needs a way to distinguish normal 11 + page table entries from entries with a file offset (pte_file), so a flag is 12 + reserved in the PTE for this purpose. PTE flags are a scarce resource, 13 + especially on some CPU architectures, and it would be nice to free the flag up for other uses. 14 + 15 + Fortunately, there are not many users of remap_file_pages() in the wild. 16 + The only known user is one enterprise RDBMS implementation, which uses the 17 + syscall on 32-bit systems to map files bigger than can linearly fit into 18 + the 32-bit virtual address space. This use-case is no longer critical now 19 + that 64-bit systems are widely available. 20 + 21 + The plan is to deprecate the syscall and replace it with an emulation. 22 + The emulation will create new VMAs instead of nonlinear mappings. It will 23 + be slower for the rare users of remap_file_pages(), but the ABI is 24 + preserved. 25 + 26 + One side effect of the emulation (apart from performance) is that a user 27 + can hit the vm.max_map_count limit more easily due to the additional VMAs. 28 + See the comment for DEFAULT_MAX_MAP_COUNT for more details on the limit.
+16 -9
MAINTAINERS
··· 210 210 F: Documentation/scsi/aacraid.txt 211 211 F: drivers/scsi/aacraid/ 212 212 213 + ABI/API 214 + L: linux-api@vger.kernel.org 215 + F: Documentation/ABI/ 216 + F: include/linux/syscalls.h 217 + F: include/uapi/ 218 + F: kernel/sys_ni.c 219 + 213 220 ABIT UGURU 1,2 HARDWARE MONITOR DRIVER 214 221 M: Hans de Goede <hdegoede@redhat.com> 215 222 L: lm-sensors@lm-sensors.org ··· 654 647 F: sound/soc/codecs/sigmadsp.* 655 648 656 649 ANALOG DEVICES INC ASOC DRIVERS 657 - L: adi-buildroot-devel@lists.sourceforge.net 650 + L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) 658 651 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 659 652 W: http://blackfin.uclinux.org/ 660 653 S: Supported ··· 1765 1758 1766 1759 BLACKFIN ARCHITECTURE 1767 1760 M: Steven Miao <realmz6@gmail.com> 1768 - L: adi-buildroot-devel@lists.sourceforge.net 1761 + L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) 1769 1762 T: git git://git.code.sf.net/p/adi-linux/code 1770 1763 W: http://blackfin.uclinux.org 1771 1764 S: Supported 1772 1765 F: arch/blackfin/ 1773 1766 1774 1767 BLACKFIN EMAC DRIVER 1775 - L: adi-buildroot-devel@lists.sourceforge.net 1768 + L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) 1776 1769 W: http://blackfin.uclinux.org 1777 1770 S: Supported 1778 1771 F: drivers/net/ethernet/adi/ 1779 1772 1780 1773 BLACKFIN RTC DRIVER 1781 - L: adi-buildroot-devel@lists.sourceforge.net 1774 + L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) 1782 1775 W: http://blackfin.uclinux.org 1783 1776 S: Supported 1784 1777 F: drivers/rtc/rtc-bfin.c 1785 1778 1786 1779 BLACKFIN SDH DRIVER 1787 1780 M: Sonic Zhang <sonic.zhang@analog.com> 1788 - L: adi-buildroot-devel@lists.sourceforge.net 1781 + L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) 1789 1782 W: http://blackfin.uclinux.org 1790 1783 S: Supported 1791 1784 F: 
drivers/mmc/host/bfin_sdh.c 1792 1785 1793 1786 BLACKFIN SERIAL DRIVER 1794 1787 M: Sonic Zhang <sonic.zhang@analog.com> 1795 - L: adi-buildroot-devel@lists.sourceforge.net 1788 + L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) 1796 1789 W: http://blackfin.uclinux.org 1797 1790 S: Supported 1798 1791 F: drivers/tty/serial/bfin_uart.c 1799 1792 1800 1793 BLACKFIN WATCHDOG DRIVER 1801 - L: adi-buildroot-devel@lists.sourceforge.net 1794 + L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) 1802 1795 W: http://blackfin.uclinux.org 1803 1796 S: Supported 1804 1797 F: drivers/watchdog/bfin_wdt.c 1805 1798 1806 1799 BLACKFIN I2C TWI DRIVER 1807 1800 M: Sonic Zhang <sonic.zhang@analog.com> 1808 - L: adi-buildroot-devel@lists.sourceforge.net 1801 + L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) 1809 1802 W: http://blackfin.uclinux.org/ 1810 1803 S: Supported 1811 1804 F: drivers/i2c/busses/i2c-bfin-twi.c 1812 1805 1813 1806 BLACKFIN MEDIA DRIVER 1814 1807 M: Scott Jiang <scott.jiang.linux@gmail.com> 1815 - L: adi-buildroot-devel@lists.sourceforge.net 1808 + L: adi-buildroot-devel@lists.sourceforge.net (moderated for non-subscribers) 1816 1809 W: http://blackfin.uclinux.org/ 1817 1810 S: Supported 1818 1811 F: drivers/media/platform/blackfin/
+3 -3
arch/arm/kernel/isa.c
··· 20 20 21 21 static unsigned int isa_membase, isa_portbase, isa_portshift; 22 22 23 - static ctl_table ctl_isa_vars[4] = { 23 + static struct ctl_table ctl_isa_vars[4] = { 24 24 { 25 25 .procname = "membase", 26 26 .data = &isa_membase, ··· 44 44 45 45 static struct ctl_table_header *isa_sysctl_header; 46 46 47 - static ctl_table ctl_isa[2] = { 47 + static struct ctl_table ctl_isa[2] = { 48 48 { 49 49 .procname = "isa", 50 50 .mode = 0555, ··· 52 52 }, {} 53 53 }; 54 54 55 - static ctl_table ctl_bus[2] = { 55 + static struct ctl_table ctl_bus[2] = { 56 56 { 57 57 .procname = "bus", 58 58 .mode = 0555,
+13 -9
arch/arm/mach-at91/sysirq_mask.c
··· 25 25 26 26 #include "generic.h" 27 27 28 - #define AT91_RTC_IDR 0x24 /* Interrupt Disable Register */ 29 - #define AT91_RTC_IMR 0x28 /* Interrupt Mask Register */ 28 + #define AT91_RTC_IDR 0x24 /* Interrupt Disable Register */ 29 + #define AT91_RTC_IMR 0x28 /* Interrupt Mask Register */ 30 + #define AT91_RTC_IRQ_MASK 0x1f /* Available IRQs mask */ 30 31 31 32 void __init at91_sysirq_mask_rtc(u32 rtc_base) 32 33 { 33 34 void __iomem *base; 34 - u32 mask; 35 35 36 36 base = ioremap(rtc_base, 64); 37 37 if (!base) 38 38 return; 39 39 40 - mask = readl_relaxed(base + AT91_RTC_IMR); 41 - if (mask) { 42 - pr_info("AT91: Disabling rtc irq\n"); 43 - writel_relaxed(mask, base + AT91_RTC_IDR); 44 - (void)readl_relaxed(base + AT91_RTC_IMR); /* flush */ 45 - } 40 + /* 41 + * sam9x5 SoCs have the following errata: 42 + * "RTC: Interrupt Mask Register cannot be used 43 + * Interrupt Mask Register read always returns 0." 44 + * 45 + * Hence we're not relying on IMR values to disable 46 + * interrupts. 47 + */ 48 + writel_relaxed(AT91_RTC_IRQ_MASK, base + AT91_RTC_IDR); 49 + (void)readl_relaxed(base + AT91_RTC_IMR); /* flush */ 46 50 47 51 iounmap(base); 48 52 }
+21
arch/arm64/boot/dts/apm-storm.dtsi
··· 257 257 enable-offset = <0x0>; 258 258 enable-mask = <0x39>; 259 259 }; 260 + 261 + rtcclk: rtcclk@17000000 { 262 + compatible = "apm,xgene-device-clock"; 263 + #clock-cells = <1>; 264 + clocks = <&socplldiv2 0>; 265 + reg = <0x0 0x17000000 0x0 0x2000>; 266 + reg-names = "csr-reg"; 267 + csr-offset = <0xc>; 268 + csr-mask = <0x2>; 269 + enable-offset = <0x10>; 270 + enable-mask = <0x2>; 271 + clock-output-names = "rtcclk"; 272 + }; 260 273 }; 261 274 262 275 serial0: serial@1c020000 { ··· 354 341 clocks = <&sata45clk 0>; 355 342 phys = <&phy3 0>; 356 343 phy-names = "sata-phy"; 344 + }; 345 + 346 + rtc: rtc@10510000 { 347 + compatible = "apm,xgene-rtc"; 348 + reg = <0x0 0x10510000 0x0 0x400>; 349 + interrupts = <0x0 0x46 0x4>; 350 + #clock-cells = <1>; 351 + clocks = <&rtcclk 0>; 357 352 }; 358 353 }; 359 354 };
+6 -2
arch/blackfin/kernel/ptrace.c
··· 117 117 int 118 118 is_user_addr_valid(struct task_struct *child, unsigned long start, unsigned long len) 119 119 { 120 + bool valid; 120 121 struct vm_area_struct *vma; 121 122 struct sram_list_struct *sraml; 122 123 ··· 125 124 if (start + len < start) 126 125 return -EIO; 127 126 127 + down_read(&child->mm->mmap_sem); 128 128 vma = find_vma(child->mm, start); 129 - if (vma && start >= vma->vm_start && start + len <= vma->vm_end) 130 - return 0; 129 + valid = vma && start >= vma->vm_start && start + len <= vma->vm_end; 130 + up_read(&child->mm->mmap_sem); 131 + if (valid) 132 + return 0; 131 133 132 134 for (sraml = child->mm->context.sram_list; sraml; sraml = sraml->next) 133 135 if (start >= (unsigned long)sraml->addr
+2 -2
arch/cris/arch-v10/drivers/gpio.c
··· 833 833 printk(KERN_INFO "ETRAX 100LX GPIO driver v2.5, (c) 2001-2008 " 834 834 "Axis Communications AB\n"); 835 835 /* We call etrax_gpio_wake_up_check() from timer interrupt and 836 - * from cpu_idle() in kernel/process.c 837 - * The check in cpu_idle() reduces latency from ~15 ms to ~6 ms 836 + * from default_idle() in kernel/process.c 837 + * The check in default_idle() reduces latency from ~15 ms to ~6 ms 838 838 * in some tests. 839 839 */ 840 840 res = request_irq(TIMER0_IRQ_NBR, gpio_poll_timer_interrupt,
+1 -5
arch/cris/arch-v32/drivers/mach-fs/gpio.c
··· 958 958 959 959 printk(KERN_INFO "ETRAX FS GPIO driver v2.5, (c) 2003-2007 " 960 960 "Axis Communications AB\n"); 961 - /* We call etrax_gpio_wake_up_check() from timer interrupt and 962 - * from cpu_idle() in kernel/process.c 963 - * The check in cpu_idle() reduces latency from ~15 ms to ~6 ms 964 - * in some tests. 965 - */ 961 + /* We call etrax_gpio_wake_up_check() from timer interrupt */ 966 962 if (request_irq(TIMER0_INTR_VECT, gpio_poll_timer_interrupt, 967 963 IRQF_SHARED, "gpio poll", &alarmlist)) 968 964 printk(KERN_ERR "timer0 irq for gpio\n");
+2 -2
arch/ia64/kernel/crash.c
··· 237 237 } 238 238 239 239 #ifdef CONFIG_SYSCTL 240 - static ctl_table kdump_ctl_table[] = { 240 + static struct ctl_table kdump_ctl_table[] = { 241 241 { 242 242 .procname = "kdump_on_init", 243 243 .data = &kdump_on_init, ··· 255 255 { } 256 256 }; 257 257 258 - static ctl_table sys_table[] = { 258 + static struct ctl_table sys_table[] = { 259 259 { 260 260 .procname = "kernel", 261 261 .mode = 0555,
+3 -3
arch/ia64/kernel/perfmon.c
··· 521 521 pfm_sysctl_t pfm_sysctl; 522 522 EXPORT_SYMBOL(pfm_sysctl); 523 523 524 - static ctl_table pfm_ctl_table[]={ 524 + static struct ctl_table pfm_ctl_table[] = { 525 525 { 526 526 .procname = "debug", 527 527 .data = &pfm_sysctl.debug, ··· 552 552 }, 553 553 {} 554 554 }; 555 - static ctl_table pfm_sysctl_dir[] = { 555 + static struct ctl_table pfm_sysctl_dir[] = { 556 556 { 557 557 .procname = "perfmon", 558 558 .mode = 0555, ··· 560 560 }, 561 561 {} 562 562 }; 563 - static ctl_table pfm_sysctl_root[] = { 563 + static struct ctl_table pfm_sysctl_root[] = { 564 564 { 565 565 .procname = "kernel", 566 566 .mode = 0555,
-9
arch/m68k/include/asm/signal.h
··· 60 60 __const_sigismember(set,sig) : \ 61 61 __gen_sigismember(set,sig)) 62 62 63 - static inline int sigfindinword(unsigned long word) 64 - { 65 - asm ("bfffo %1{#0,#0},%0" 66 - : "=d" (word) 67 - : "d" (word & -word) 68 - : "cc"); 69 - return word ^ 31; 70 - } 71 - 72 63 #endif /* !CONFIG_CPU_HAS_NO_BITFIELDS */ 73 64 74 65 #ifndef __uClinux__
+1 -1
arch/mips/dec/Makefile
··· 3 3 # 4 4 5 5 obj-y := ecc-berr.o int-handler.o ioasic-irq.o kn01-berr.o \ 6 - kn02-irq.o kn02xa-berr.o reset.o setup.o time.o 6 + kn02-irq.o kn02xa-berr.o platform.o reset.o setup.o time.o 7 7 8 8 obj-$(CONFIG_TC) += tc.o 9 9 obj-$(CONFIG_CPU_HAS_WB) += wbflush.o
+44
arch/mips/dec/platform.c
··· 1 + /* 2 + * DEC platform devices. 3 + * 4 + * Copyright (c) 2014 Maciej W. Rozycki 5 + * 6 + * This program is free software; you can redistribute it and/or 7 + * modify it under the terms of the GNU General Public License 8 + * as published by the Free Software Foundation; either version 9 + * 2 of the License, or (at your option) any later version. 10 + */ 11 + 12 + #include <linux/ioport.h> 13 + #include <linux/kernel.h> 14 + #include <linux/mc146818rtc.h> 15 + #include <linux/platform_device.h> 16 + 17 + static struct resource dec_rtc_resources[] = { 18 + { 19 + .name = "rtc", 20 + .flags = IORESOURCE_MEM, 21 + }, 22 + }; 23 + 24 + static struct cmos_rtc_board_info dec_rtc_info = { 25 + .flags = CMOS_RTC_FLAGS_NOFREQ, 26 + .address_space = 64, 27 + }; 28 + 29 + static struct platform_device dec_rtc_device = { 30 + .name = "rtc_cmos", 31 + .id = PLATFORM_DEVID_NONE, 32 + .dev.platform_data = &dec_rtc_info, 33 + .resource = dec_rtc_resources, 34 + .num_resources = ARRAY_SIZE(dec_rtc_resources), 35 + }; 36 + 37 + static int __init dec_add_devices(void) 38 + { 39 + dec_rtc_resources[0].start = RTC_PORT(0); 40 + dec_rtc_resources[0].end = RTC_PORT(0) + dec_kn_slot_size - 1; 41 + return platform_device_register(&dec_rtc_device); 42 + } 43 + 44 + device_initcall(dec_add_devices);
+1 -1
arch/powerpc/kernel/irq.c
··· 304 304 * being re-enabled and generally sanitized the lazy irq state, 305 305 * and in the latter case it will leave with interrupts hard 306 306 * disabled and marked as such, so the local_irq_enable() call 307 - * in cpu_idle() will properly re-enable everything. 307 + * in arch_cpu_idle() will properly re-enable everything. 308 308 */ 309 309 bool prep_irq_for_idle(void) 310 310 {
+2 -2
arch/tile/kernel/proc.c
··· 113 113 * Support /proc/sys/tile directory 114 114 */ 115 115 116 - static ctl_table unaligned_subtable[] = { 116 + static struct ctl_table unaligned_subtable[] = { 117 117 { 118 118 .procname = "enabled", 119 119 .data = &unaligned_fixup, ··· 138 138 {} 139 139 }; 140 140 141 - static ctl_table unaligned_table[] = { 141 + static struct ctl_table unaligned_table[] = { 142 142 { 143 143 .procname = "unaligned_fixup", 144 144 .mode = 0555,
-6
arch/x86/include/asm/signal.h
··· 92 92 ? __const_sigismember((set), (sig)) \ 93 93 : __gen_sigismember((set), (sig))) 94 94 95 - static inline int sigfindinword(unsigned long word) 96 - { 97 - asm("bsfl %1,%0" : "=r"(word) : "rm"(word) : "cc"); 98 - return word; 99 - } 100 - 101 95 struct pt_regs; 102 96 103 97 #else /* __i386__ */
+5 -2
block/bounce.c
··· 3 3 * - Split from highmem.c 4 4 */ 5 5 6 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 7 + 6 8 #include <linux/mm.h> 7 9 #include <linux/export.h> 8 10 #include <linux/swap.h> ··· 17 15 #include <linux/hash.h> 18 16 #include <linux/highmem.h> 19 17 #include <linux/bootmem.h> 18 + #include <linux/printk.h> 20 19 #include <asm/tlbflush.h> 21 20 22 21 #include <trace/events/block.h> ··· 37 34 38 35 page_pool = mempool_create_page_pool(POOL_SIZE, 0); 39 36 BUG_ON(!page_pool); 40 - printk("bounce pool size: %d pages\n", POOL_SIZE); 37 + pr_info("pool size: %d pages\n", POOL_SIZE); 41 38 42 39 return 0; 43 40 } ··· 89 86 mempool_free_pages, (void *) 0); 90 87 BUG_ON(!isa_page_pool); 91 88 92 - printk("isa bounce pool size: %d pages\n", ISA_POOL_SIZE); 89 + pr_info("isa pool size: %d pages\n", ISA_POOL_SIZE); 93 90 return 0; 94 91 } 95 92
+2 -5
drivers/block/nbd.c
··· 243 243 struct nbd_request request; 244 244 unsigned long size = blk_rq_bytes(req); 245 245 246 + memset(&request, 0, sizeof(request)); 246 247 request.magic = htonl(NBD_REQUEST_MAGIC); 247 248 request.type = htonl(nbd_cmd(req)); 248 249 249 - if (nbd_cmd(req) == NBD_CMD_FLUSH) { 250 - /* Other values are reserved for FLUSH requests. */ 251 - request.from = 0; 252 - request.len = 0; 253 - } else { 250 + if (nbd_cmd(req) != NBD_CMD_FLUSH && nbd_cmd(req) != NBD_CMD_DISC) { 254 251 request.from = cpu_to_be64((u64)blk_rq_pos(req) << 9); 255 252 request.len = htonl(size); 256 253 }
+5 -5
drivers/cdrom/cdrom.c
··· 3470 3470 return 0; 3471 3471 } 3472 3472 3473 - static int cdrom_sysctl_info(ctl_table *ctl, int write, 3473 + static int cdrom_sysctl_info(struct ctl_table *ctl, int write, 3474 3474 void __user *buffer, size_t *lenp, loff_t *ppos) 3475 3475 { 3476 3476 int pos; ··· 3583 3583 mutex_unlock(&cdrom_mutex); 3584 3584 } 3585 3585 3586 - static int cdrom_sysctl_handler(ctl_table *ctl, int write, 3586 + static int cdrom_sysctl_handler(struct ctl_table *ctl, int write, 3587 3587 void __user *buffer, size_t *lenp, loff_t *ppos) 3588 3588 { 3589 3589 int ret; ··· 3609 3609 } 3610 3610 3611 3611 /* Place files in /proc/sys/dev/cdrom */ 3612 - static ctl_table cdrom_table[] = { 3612 + static struct ctl_table cdrom_table[] = { 3613 3613 { 3614 3614 .procname = "info", 3615 3615 .data = &cdrom_sysctl_settings.info, ··· 3655 3655 { } 3656 3656 }; 3657 3657 3658 - static ctl_table cdrom_cdrom_table[] = { 3658 + static struct ctl_table cdrom_cdrom_table[] = { 3659 3659 { 3660 3660 .procname = "cdrom", 3661 3661 .maxlen = 0, ··· 3666 3666 }; 3667 3667 3668 3668 /* Make sure that /proc/sys/dev is there */ 3669 - static ctl_table cdrom_root_table[] = { 3669 + static struct ctl_table cdrom_root_table[] = { 3670 3670 { 3671 3671 .procname = "dev", 3672 3672 .maxlen = 0,
+2 -2
drivers/char/random.c
··· 1582 1582 /* 1583 1583 * Return entropy available scaled to integral bits 1584 1584 */ 1585 - static int proc_do_entropy(ctl_table *table, int write, 1585 + static int proc_do_entropy(struct ctl_table *table, int write, 1586 1586 void __user *buffer, size_t *lenp, loff_t *ppos) 1587 1587 { 1588 - ctl_table fake_table; 1588 + struct ctl_table fake_table; 1589 1589 int entropy_count; 1590 1590 1591 1591 entropy_count = *(int *)table->data >> ENTROPY_SHIFT;
+29 -29
drivers/parport/procfs.c
··· 31 31 #define PARPORT_MIN_SPINTIME_VALUE 1 32 32 #define PARPORT_MAX_SPINTIME_VALUE 1000 33 33 34 - static int do_active_device(ctl_table *table, int write, 34 + static int do_active_device(struct ctl_table *table, int write, 35 35 void __user *result, size_t *lenp, loff_t *ppos) 36 36 { 37 37 struct parport *port = (struct parport *)table->extra1; ··· 68 68 } 69 69 70 70 #ifdef CONFIG_PARPORT_1284 71 - static int do_autoprobe(ctl_table *table, int write, 71 + static int do_autoprobe(struct ctl_table *table, int write, 72 72 void __user *result, size_t *lenp, loff_t *ppos) 73 73 { 74 74 struct parport_device_info *info = table->extra2; ··· 110 110 } 111 111 #endif /* IEEE1284.3 support. */ 112 112 113 - static int do_hardware_base_addr (ctl_table *table, int write, 114 - void __user *result, 115 - size_t *lenp, loff_t *ppos) 113 + static int do_hardware_base_addr(struct ctl_table *table, int write, 114 + void __user *result, 115 + size_t *lenp, loff_t *ppos) 116 116 { 117 117 struct parport *port = (struct parport *)table->extra1; 118 118 char buffer[20]; ··· 138 138 return copy_to_user(result, buffer, len) ? -EFAULT : 0; 139 139 } 140 140 141 - static int do_hardware_irq (ctl_table *table, int write, 142 - void __user *result, 143 - size_t *lenp, loff_t *ppos) 141 + static int do_hardware_irq(struct ctl_table *table, int write, 142 + void __user *result, 143 + size_t *lenp, loff_t *ppos) 144 144 { 145 145 struct parport *port = (struct parport *)table->extra1; 146 146 char buffer[20]; ··· 166 166 return copy_to_user(result, buffer, len) ? 
-EFAULT : 0; 167 167 } 168 168 169 - static int do_hardware_dma (ctl_table *table, int write, 170 - void __user *result, 171 - size_t *lenp, loff_t *ppos) 169 + static int do_hardware_dma(struct ctl_table *table, int write, 170 + void __user *result, 171 + size_t *lenp, loff_t *ppos) 172 172 { 173 173 struct parport *port = (struct parport *)table->extra1; 174 174 char buffer[20]; ··· 194 194 return copy_to_user(result, buffer, len) ? -EFAULT : 0; 195 195 } 196 196 197 - static int do_hardware_modes (ctl_table *table, int write, 198 - void __user *result, 199 - size_t *lenp, loff_t *ppos) 197 + static int do_hardware_modes(struct ctl_table *table, int write, 198 + void __user *result, 199 + size_t *lenp, loff_t *ppos) 200 200 { 201 201 struct parport *port = (struct parport *)table->extra1; 202 202 char buffer[40]; ··· 255 255 256 256 struct parport_sysctl_table { 257 257 struct ctl_table_header *sysctl_header; 258 - ctl_table vars[12]; 259 - ctl_table device_dir[2]; 260 - ctl_table port_dir[2]; 261 - ctl_table parport_dir[2]; 262 - ctl_table dev_dir[2]; 258 + struct ctl_table vars[12]; 259 + struct ctl_table device_dir[2]; 260 + struct ctl_table port_dir[2]; 261 + struct ctl_table parport_dir[2]; 262 + struct ctl_table dev_dir[2]; 263 263 }; 264 264 265 265 static const struct parport_sysctl_table parport_sysctl_template = { ··· 369 369 struct parport_device_sysctl_table 370 370 { 371 371 struct ctl_table_header *sysctl_header; 372 - ctl_table vars[2]; 373 - ctl_table device_dir[2]; 374 - ctl_table devices_root_dir[2]; 375 - ctl_table port_dir[2]; 376 - ctl_table parport_dir[2]; 377 - ctl_table dev_dir[2]; 372 + struct ctl_table vars[2]; 373 + struct ctl_table device_dir[2]; 374 + struct ctl_table devices_root_dir[2]; 375 + struct ctl_table port_dir[2]; 376 + struct ctl_table parport_dir[2]; 377 + struct ctl_table dev_dir[2]; 378 378 }; 379 379 380 380 static const struct parport_device_sysctl_table ··· 422 422 struct parport_default_sysctl_table 423 423 { 424 424 
struct ctl_table_header *sysctl_header; 425 - ctl_table vars[3]; 426 - ctl_table default_dir[2]; 427 - ctl_table parport_dir[2]; 428 - ctl_table dev_dir[2]; 425 + struct ctl_table vars[3]; 426 + struct ctl_table default_dir[2]; 427 + struct ctl_table parport_dir[2]; 428 + struct ctl_table dev_dir[2]; 429 429 }; 430 430 431 431 static struct parport_default_sysctl_table
+3 -8
drivers/rapidio/devices/tsi721.c
··· 768 768 } 769 769 #endif /* CONFIG_RAPIDIO_DMA_ENGINE */ 770 770 771 - err = pci_enable_msix(priv->pdev, entries, ARRAY_SIZE(entries)); 771 + err = pci_enable_msix_exact(priv->pdev, entries, ARRAY_SIZE(entries)); 772 772 if (err) { 773 - if (err > 0) 774 - dev_info(&priv->pdev->dev, 775 - "Only %d MSI-X vectors available, " 776 - "not using MSI-X\n", err); 777 - else 778 - dev_err(&priv->pdev->dev, 779 - "Failed to enable MSI-X (err=%d)\n", err); 773 + dev_err(&priv->pdev->dev, 774 + "Failed to enable MSI-X (err=%d)\n", err); 780 775 return err; 781 776 } 782 777
+40 -2
drivers/rtc/Kconfig
··· 386 386 will be called rtc-pcf8583. 387 387 388 388 config RTC_DRV_M41T80 389 - tristate "ST M41T62/65/M41T80/81/82/83/84/85/87" 389 + tristate "ST M41T62/65/M41T80/81/82/83/84/85/87 and compatible" 390 390 help 391 391 If you say Y here you will get support for the ST M41T60 392 392 and M41T80 RTC chips series. Currently, the following chips are 393 393 supported: M41T62, M41T65, M41T80, M41T81, M41T82, M41T83, M41ST84, 394 - M41ST85, and M41ST87. 394 + M41ST85, M41ST87, and MicroCrystal RV4162. 395 395 396 396 This driver can also be built as a module. If so, the module 397 397 will be called rtc-m41t80. ··· 573 573 This driver can also be built as a module. If so, the module 574 574 will be called rtc-ds1305. 575 575 576 + config RTC_DRV_DS1343 577 + select REGMAP_SPI 578 + tristate "Dallas/Maxim DS1343/DS1344" 579 + help 580 + If you say yes here you get support for the 581 + Dallas/Maxim DS1343 and DS1344 real time clock chips. 582 + Support for trickle charger, alarm is provided. 583 + 584 + This driver can also be built as a module. If so, the module 585 + will be called rtc-ds1343. 586 + 576 587 config RTC_DRV_DS1347 577 588 tristate "Dallas/Maxim DS1347" 578 589 help ··· 660 649 661 650 This driver can also be built as a module. If so the module 662 651 will be called rtc-rx4581. 652 + 653 + config RTC_DRV_MCP795 654 + tristate "Microchip MCP795" 655 + help 656 + If you say yes here you will get support for the Microchip MCP795. 657 + 658 + This driver can also be built as a module. If so the module 659 + will be called rtc-mcp795. 663 660 664 661 endif # SPI_MASTER 665 662 ··· 776 757 777 758 This driver can also be built as a module. If so, the module 778 759 will be called rtc-da9055 760 + 761 + config RTC_DRV_DA9063 762 + tristate "Dialog Semiconductor DA9063 RTC" 763 + depends on MFD_DA9063 764 + help 765 + If you say yes here you will get support for the RTC subsystem 766 + of the Dialog Semiconductor DA9063. 
767 + 768 + This driver can also be built as a module. If so, the module 769 + will be called "rtc-da9063". 779 770 780 771 config RTC_DRV_EFI 781 772 tristate "EFI RTC" ··· 1355 1326 1356 1327 This driver can also be built as a module. If so, the module 1357 1328 will be called rtc-moxart 1329 + 1330 + config RTC_DRV_XGENE 1331 + tristate "APM X-Gene RTC" 1332 + help 1333 + If you say yes here you get support for the APM X-Gene SoC real time 1334 + clock. 1335 + 1336 + This driver can also be built as a module, if so, the module 1337 + will be called "rtc-xgene". 1358 1338 1359 1339 comment "HID Sensor RTC drivers" 1360 1340
+4
drivers/rtc/Makefile
··· 32 32 obj-$(CONFIG_RTC_DRV_COH901331) += rtc-coh901331.o 33 33 obj-$(CONFIG_RTC_DRV_DA9052) += rtc-da9052.o 34 34 obj-$(CONFIG_RTC_DRV_DA9055) += rtc-da9055.o 35 + obj-$(CONFIG_RTC_DRV_DA9063) += rtc-da9063.o 35 36 obj-$(CONFIG_RTC_DRV_DAVINCI) += rtc-davinci.o 36 37 obj-$(CONFIG_RTC_DRV_DM355EVM) += rtc-dm355evm.o 37 38 obj-$(CONFIG_RTC_DRV_VRTC) += rtc-mrst.o ··· 41 40 obj-$(CONFIG_RTC_DRV_DS1302) += rtc-ds1302.o 42 41 obj-$(CONFIG_RTC_DRV_DS1305) += rtc-ds1305.o 43 42 obj-$(CONFIG_RTC_DRV_DS1307) += rtc-ds1307.o 43 + obj-$(CONFIG_RTC_DRV_DS1343) += rtc-ds1343.o 44 44 obj-$(CONFIG_RTC_DRV_DS1347) += rtc-ds1347.o 45 45 obj-$(CONFIG_RTC_DRV_DS1374) += rtc-ds1374.o 46 46 obj-$(CONFIG_RTC_DRV_DS1390) += rtc-ds1390.o ··· 82 80 obj-$(CONFIG_RTC_DRV_MAX6902) += rtc-max6902.o 83 81 obj-$(CONFIG_RTC_DRV_MAX77686) += rtc-max77686.o 84 82 obj-$(CONFIG_RTC_DRV_MC13XXX) += rtc-mc13xxx.o 83 + obj-$(CONFIG_RTC_DRV_MCP795) += rtc-mcp795.o 85 84 obj-$(CONFIG_RTC_DRV_MSM6242) += rtc-msm6242.o 86 85 obj-$(CONFIG_RTC_DRV_MPC5121) += rtc-mpc5121.o 87 86 obj-$(CONFIG_RTC_DRV_MV) += rtc-mv.o ··· 138 135 obj-$(CONFIG_RTC_DRV_WM831X) += rtc-wm831x.o 139 136 obj-$(CONFIG_RTC_DRV_WM8350) += rtc-wm8350.o 140 137 obj-$(CONFIG_RTC_DRV_X1205) += rtc-x1205.o 138 + obj-$(CONFIG_RTC_DRV_XGENE) += rtc-xgene.o 141 139 obj-$(CONFIG_RTC_DRV_SIRFSOC) += rtc-sirfsoc.o 142 140 obj-$(CONFIG_RTC_DRV_MOXART) += rtc-moxart.o
+12 -2
drivers/rtc/interface.c
··· 292 292 dev_dbg(&rtc->dev, "alarm rollover: %s\n", "year"); 293 293 do { 294 294 alarm->time.tm_year++; 295 - } while (rtc_valid_tm(&alarm->time) != 0); 295 + } while (!is_leap_year(alarm->time.tm_year + 1900) 296 + && rtc_valid_tm(&alarm->time) != 0); 296 297 break; 297 298 298 299 default: ··· 301 300 } 302 301 303 302 done: 304 - return 0; 303 + err = rtc_valid_tm(&alarm->time); 304 + 305 + if (err) { 306 + dev_warn(&rtc->dev, "invalid alarm value: %d-%d-%d %d:%d:%d\n", 307 + alarm->time.tm_year + 1900, alarm->time.tm_mon + 1, 308 + alarm->time.tm_mday, alarm->time.tm_hour, alarm->time.tm_min, 309 + alarm->time.tm_sec); 310 + } 311 + 312 + return err; 305 313 } 306 314 307 315 int rtc_read_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
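The rollover hunk above advances a Feb 29 alarm only as far as the next leap year, instead of looping through years where the date never validates. A standalone sketch of that logic (plain C for illustration; it encodes the same Gregorian rule as the kernel's `is_leap_year()`, but `next_leap_year_after()` is a hypothetical helper, not driver code):

```c
#include <assert.h>

/* Gregorian leap-year rule: every 4th year, except centuries,
 * except every 400th year. Same rule as the kernel's is_leap_year(). */
static int is_leap_year(unsigned int year)
{
	return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

/* What the patched rollover loop effectively computes for a Feb 29
 * alarm: the next year in which that date is valid again. */
static unsigned int next_leap_year_after(unsigned int year)
{
	do {
		year++;
	} while (!is_leap_year(year));
	return year;
}
```

For example, a Feb 29 alarm set in 2013 rolls straight to 2016 rather than being rejected one year at a time.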
+2 -1
drivers/rtc/rtc-88pm860x.c
··· 293 293 int ret; 294 294 if (!np) 295 295 return -ENODEV; 296 - np = of_find_node_by_name(np, "rtc"); 296 + np = of_get_child_by_name(np, "rtc"); 297 297 if (!np) { 298 298 dev_err(&pdev->dev, "failed to find rtc node\n"); 299 299 return -ENODEV; ··· 301 301 ret = of_property_read_u32(np, "marvell,88pm860x-vrtc", &info->vrtc); 302 302 if (ret) 303 303 info->vrtc = 0; 304 + of_node_put(np); 304 305 return 0; 305 306 } 306 307 #else
+14 -2
drivers/rtc/rtc-at91rm9200.c
··· 48 48 49 49 static const struct at91_rtc_config *at91_rtc_config; 50 50 static DECLARE_COMPLETION(at91_rtc_updated); 51 + static DECLARE_COMPLETION(at91_rtc_upd_rdy); 51 52 static unsigned int at91_alarm_year = AT91_RTC_EPOCH; 52 53 static void __iomem *at91_rtc_regs; 53 54 static int irq; ··· 162 161 1900 + tm->tm_year, tm->tm_mon, tm->tm_mday, 163 162 tm->tm_hour, tm->tm_min, tm->tm_sec); 164 163 164 + wait_for_completion(&at91_rtc_upd_rdy); 165 + 165 166 /* Stop Time/Calendar from counting */ 166 167 cr = at91_rtc_read(AT91_RTC_CR); 167 168 at91_rtc_write(AT91_RTC_CR, cr | AT91_RTC_UPDCAL | AT91_RTC_UPDTIM); ··· 186 183 187 184 /* Restart Time/Calendar */ 188 185 cr = at91_rtc_read(AT91_RTC_CR); 186 + at91_rtc_write(AT91_RTC_SCCR, AT91_RTC_SECEV); 189 187 at91_rtc_write(AT91_RTC_CR, cr & ~(AT91_RTC_UPDCAL | AT91_RTC_UPDTIM)); 188 + at91_rtc_write_ier(AT91_RTC_SECEV); 190 189 191 190 return 0; 192 191 } ··· 295 290 if (rtsr) { /* this interrupt is shared! Is it ours? */ 296 291 if (rtsr & AT91_RTC_ALARM) 297 292 events |= (RTC_AF | RTC_IRQF); 298 - if (rtsr & AT91_RTC_SECEV) 299 - events |= (RTC_UF | RTC_IRQF); 293 + if (rtsr & AT91_RTC_SECEV) { 294 + complete(&at91_rtc_upd_rdy); 295 + at91_rtc_write_idr(AT91_RTC_SECEV); 296 + } 300 297 if (rtsr & AT91_RTC_ACKUPD) 301 298 complete(&at91_rtc_updated); 302 299 ··· 419 412 if (IS_ERR(rtc)) 420 413 return PTR_ERR(rtc); 421 414 platform_set_drvdata(pdev, rtc); 415 + 416 + /* enable SECEV interrupt in order to initialize at91_rtc_upd_rdy 417 + * completion. 418 + */ 419 + at91_rtc_write_ier(AT91_RTC_SECEV); 422 420 423 421 dev_info(&pdev->dev, "AT91 Real Time Clock driver.\n"); 424 422 return 0;
+7 -9
drivers/rtc/rtc-bfin.c
··· 346 346 { 347 347 struct bfin_rtc *rtc; 348 348 struct device *dev = &pdev->dev; 349 - int ret = 0; 349 + int ret; 350 350 unsigned long timeout = jiffies + HZ; 351 351 352 352 dev_dbg_stamp(dev); ··· 361 361 /* Register our RTC with the RTC framework */ 362 362 rtc->rtc_dev = devm_rtc_device_register(dev, pdev->name, &bfin_rtc_ops, 363 363 THIS_MODULE); 364 - if (unlikely(IS_ERR(rtc->rtc_dev))) { 365 - ret = PTR_ERR(rtc->rtc_dev); 366 - goto err; 367 - } 364 + if (unlikely(IS_ERR(rtc->rtc_dev))) 365 + return PTR_ERR(rtc->rtc_dev); 368 366 369 367 /* Grab the IRQ and init the hardware */ 370 368 ret = devm_request_irq(dev, IRQ_RTC, bfin_rtc_interrupt, 0, 371 369 pdev->name, dev); 372 370 if (unlikely(ret)) 373 - goto err; 371 + dev_err(&pdev->dev, 372 + "unable to request IRQ; alarm won't work, " 373 + "and writes will be delayed\n"); 374 + 374 375 /* sometimes the bootloader touched things, but the write complete was not 375 376 * enabled, so let's just do a quick timeout here since the IRQ will not fire ... 376 377 */ ··· 382 381 bfin_write_RTC_SWCNT(0); 383 382 384 383 return 0; 385 - 386 - err: 387 - return ret; 388 384 } 389 385 390 386 static int bfin_rtc_remove(struct platform_device *pdev)
+60 -25
drivers/rtc/rtc-cmos.c
··· 647 647 int retval = 0; 648 648 unsigned char rtc_control; 649 649 unsigned address_space; 650 + u32 flags = 0; 650 651 651 652 /* there can be only one ... */ 652 653 if (cmos_rtc.dev) ··· 661 660 * REVISIT non-x86 systems may instead use memory space resources 662 661 * (needing ioremap etc), not i/o space resources like this ... 663 662 */ 664 - ports = request_region(ports->start, 665 - resource_size(ports), 666 - driver_name); 663 + if (RTC_IOMAPPED) 664 + ports = request_region(ports->start, resource_size(ports), 665 + driver_name); 666 + else 667 + ports = request_mem_region(ports->start, resource_size(ports), 668 + driver_name); 667 669 if (!ports) { 668 670 dev_dbg(dev, "i/o registers already in use\n"); 669 671 return -EBUSY; ··· 703 699 * expect CMOS_READ and friends to handle. 704 700 */ 705 701 if (info) { 702 + if (info->flags) 703 + flags = info->flags; 704 + if (info->address_space) 705 + address_space = info->address_space; 706 + 706 707 if (info->rtc_day_alarm && info->rtc_day_alarm < 128) 707 708 cmos_rtc.day_alrm = info->rtc_day_alarm; 708 709 if (info->rtc_mon_alarm && info->rtc_mon_alarm < 128) ··· 735 726 736 727 spin_lock_irq(&rtc_lock); 737 728 738 - /* force periodic irq to CMOS reset default of 1024Hz; 739 - * 740 - * REVISIT it's been reported that at least one x86_64 ALI mobo 741 - * doesn't use 32KHz here ... for portability we might need to 742 - * do something about other clock frequencies. 743 - */ 744 - cmos_rtc.rtc->irq_freq = 1024; 745 - hpet_set_periodic_freq(cmos_rtc.rtc->irq_freq); 746 - CMOS_WRITE(RTC_REF_CLCK_32KHZ | 0x06, RTC_FREQ_SELECT); 729 + if (!(flags & CMOS_RTC_FLAGS_NOFREQ)) { 730 + /* force periodic irq to CMOS reset default of 1024Hz; 731 + * 732 + * REVISIT it's been reported that at least one x86_64 ALI 733 + * mobo doesn't use 32KHz here ... for portability we might 734 + * need to do something about other clock frequencies. 
735 + */ 736 + cmos_rtc.rtc->irq_freq = 1024; 737 + hpet_set_periodic_freq(cmos_rtc.rtc->irq_freq); 738 + CMOS_WRITE(RTC_REF_CLCK_32KHZ | 0x06, RTC_FREQ_SELECT); 739 + } 747 740 748 741 /* disable irqs */ 749 - cmos_irq_disable(&cmos_rtc, RTC_PIE | RTC_AIE | RTC_UIE); 742 + if (is_valid_irq(rtc_irq)) 743 + cmos_irq_disable(&cmos_rtc, RTC_PIE | RTC_AIE | RTC_UIE); 750 744 751 745 rtc_control = CMOS_READ(RTC_CONTROL); 752 746 ··· 814 802 cmos_rtc.dev = NULL; 815 803 rtc_device_unregister(cmos_rtc.rtc); 816 804 cleanup0: 817 - release_region(ports->start, resource_size(ports)); 805 + if (RTC_IOMAPPED) 806 + release_region(ports->start, resource_size(ports)); 807 + else 808 + release_mem_region(ports->start, resource_size(ports)); 818 809 return retval; 819 810 } 820 811 821 - static void cmos_do_shutdown(void) 812 + static void cmos_do_shutdown(int rtc_irq) 822 813 { 823 814 spin_lock_irq(&rtc_lock); 824 - cmos_irq_disable(&cmos_rtc, RTC_IRQMASK); 815 + if (is_valid_irq(rtc_irq)) 816 + cmos_irq_disable(&cmos_rtc, RTC_IRQMASK); 825 817 spin_unlock_irq(&rtc_lock); 826 818 } 827 819 ··· 834 818 struct cmos_rtc *cmos = dev_get_drvdata(dev); 835 819 struct resource *ports; 836 820 837 - cmos_do_shutdown(); 821 + cmos_do_shutdown(cmos->irq); 838 822 839 823 sysfs_remove_bin_file(&dev->kobj, &nvram); 840 824 ··· 847 831 cmos->rtc = NULL; 848 832 849 833 ports = cmos->iomem; 850 - release_region(ports->start, resource_size(ports)); 834 + if (RTC_IOMAPPED) 835 + release_region(ports->start, resource_size(ports)); 836 + else 837 + release_mem_region(ports->start, resource_size(ports)); 851 838 cmos->iomem = NULL; 852 839 853 840 cmos->dev = NULL; ··· 1084 1065 1085 1066 static void cmos_pnp_shutdown(struct pnp_dev *pnp) 1086 1067 { 1087 - if (system_state == SYSTEM_POWER_OFF && !cmos_poweroff(&pnp->dev)) 1068 + struct device *dev = &pnp->dev; 1069 + struct cmos_rtc *cmos = dev_get_drvdata(dev); 1070 + 1071 + if (system_state == SYSTEM_POWER_OFF && !cmos_poweroff(dev)) 1088 1072 
return; 1089 1073 1090 - cmos_do_shutdown(); 1074 + cmos_do_shutdown(cmos->irq); 1091 1075 } 1092 1076 1093 1077 static const struct pnp_device_id rtc_ids[] = { ··· 1165 1143 1166 1144 static int __init cmos_platform_probe(struct platform_device *pdev) 1167 1145 { 1146 + struct resource *resource; 1147 + int irq; 1148 + 1168 1149 cmos_of_init(pdev); 1169 1150 cmos_wake_setup(&pdev->dev); 1170 - return cmos_do_probe(&pdev->dev, 1171 - platform_get_resource(pdev, IORESOURCE_IO, 0), 1172 - platform_get_irq(pdev, 0)); 1151 + 1152 + if (RTC_IOMAPPED) 1153 + resource = platform_get_resource(pdev, IORESOURCE_IO, 0); 1154 + else 1155 + resource = platform_get_resource(pdev, IORESOURCE_MEM, 0); 1156 + irq = platform_get_irq(pdev, 0); 1157 + if (irq < 0) 1158 + irq = -1; 1159 + 1160 + return cmos_do_probe(&pdev->dev, resource, irq); 1173 1161 } 1174 1162 1175 1163 static int __exit cmos_platform_remove(struct platform_device *pdev) ··· 1190 1158 1191 1159 static void cmos_platform_shutdown(struct platform_device *pdev) 1192 1160 { 1193 - if (system_state == SYSTEM_POWER_OFF && !cmos_poweroff(&pdev->dev)) 1161 + struct device *dev = &pdev->dev; 1162 + struct cmos_rtc *cmos = dev_get_drvdata(dev); 1163 + 1164 + if (system_state == SYSTEM_POWER_OFF && !cmos_poweroff(dev)) 1194 1165 return; 1195 1166 1196 - cmos_do_shutdown(); 1167 + cmos_do_shutdown(cmos->irq); 1197 1168 } 1198 1169 1199 1170 /* work with hotplug and coldplug */
+66 -56
drivers/rtc/rtc-da9052.c
··· 20 20 #include <linux/mfd/da9052/da9052.h> 21 21 #include <linux/mfd/da9052/reg.h> 22 22 23 - #define rtc_err(da9052, fmt, ...) \ 24 - dev_err(da9052->dev, "%s: " fmt, __func__, ##__VA_ARGS__) 23 + #define rtc_err(rtc, fmt, ...) \ 24 + dev_err(rtc->da9052->dev, "%s: " fmt, __func__, ##__VA_ARGS__) 25 25 26 26 struct da9052_rtc { 27 27 struct rtc_device *rtc; 28 28 struct da9052 *da9052; 29 29 }; 30 30 31 - static int da9052_rtc_enable_alarm(struct da9052 *da9052, bool enable) 31 + static int da9052_rtc_enable_alarm(struct da9052_rtc *rtc, bool enable) 32 32 { 33 33 int ret; 34 34 if (enable) { 35 - ret = da9052_reg_update(da9052, DA9052_ALARM_Y_REG, 36 - DA9052_ALARM_Y_ALARM_ON, 37 - DA9052_ALARM_Y_ALARM_ON); 35 + ret = da9052_reg_update(rtc->da9052, DA9052_ALARM_Y_REG, 36 + DA9052_ALARM_Y_ALARM_ON|DA9052_ALARM_Y_TICK_ON, 37 + DA9052_ALARM_Y_ALARM_ON); 38 38 if (ret != 0) 39 - rtc_err(da9052, "Failed to enable ALM: %d\n", ret); 39 + rtc_err(rtc, "Failed to enable ALM: %d\n", ret); 40 40 } else { 41 - ret = da9052_reg_update(da9052, DA9052_ALARM_Y_REG, 42 - DA9052_ALARM_Y_ALARM_ON, 0); 41 + ret = da9052_reg_update(rtc->da9052, DA9052_ALARM_Y_REG, 42 + DA9052_ALARM_Y_ALARM_ON|DA9052_ALARM_Y_TICK_ON, 0); 43 43 if (ret != 0) 44 - rtc_err(da9052, "Write error: %d\n", ret); 44 + rtc_err(rtc, "Write error: %d\n", ret); 45 45 } 46 46 return ret; 47 47 } ··· 49 49 static irqreturn_t da9052_rtc_irq(int irq, void *data) 50 50 { 51 51 struct da9052_rtc *rtc = data; 52 - int ret; 53 52 54 - ret = da9052_reg_read(rtc->da9052, DA9052_ALARM_MI_REG); 55 - if (ret < 0) { 56 - rtc_err(rtc->da9052, "Read error: %d\n", ret); 57 - return IRQ_NONE; 58 - } 59 - 60 - if (ret & DA9052_ALARMMI_ALARMTYPE) { 61 - da9052_rtc_enable_alarm(rtc->da9052, 0); 62 - rtc_update_irq(rtc->rtc, 1, RTC_IRQF | RTC_AF); 63 - } else 64 - rtc_update_irq(rtc->rtc, 1, RTC_IRQF | RTC_PF); 53 + rtc_update_irq(rtc->rtc, 1, RTC_IRQF | RTC_AF); 65 54 66 55 return IRQ_HANDLED; 67 56 } 68 57 69 - static int 
da9052_read_alarm(struct da9052 *da9052, struct rtc_time *rtc_tm) 58 + static int da9052_read_alarm(struct da9052_rtc *rtc, struct rtc_time *rtc_tm) 70 59 { 71 60 int ret; 72 61 uint8_t v[5]; 73 62 74 - ret = da9052_group_read(da9052, DA9052_ALARM_MI_REG, 5, v); 63 + ret = da9052_group_read(rtc->da9052, DA9052_ALARM_MI_REG, 5, v); 75 64 if (ret != 0) { 76 - rtc_err(da9052, "Failed to group read ALM: %d\n", ret); 65 + rtc_err(rtc, "Failed to group read ALM: %d\n", ret); 77 66 return ret; 78 67 } 79 68 ··· 73 84 rtc_tm->tm_min = v[0] & DA9052_RTC_MIN; 74 85 75 86 ret = rtc_valid_tm(rtc_tm); 76 - if (ret != 0) 77 - return ret; 78 87 return ret; 79 88 } 80 89 81 - static int da9052_set_alarm(struct da9052 *da9052, struct rtc_time *rtc_tm) 90 + static int da9052_set_alarm(struct da9052_rtc *rtc, struct rtc_time *rtc_tm) 82 91 { 92 + struct da9052 *da9052 = rtc->da9052; 93 + unsigned long alm_time; 83 94 int ret; 84 95 uint8_t v[3]; 96 + 97 + ret = rtc_tm_to_time(rtc_tm, &alm_time); 98 + if (ret != 0) 99 + return ret; 100 + 101 + if (rtc_tm->tm_sec > 0) { 102 + alm_time += 60 - rtc_tm->tm_sec; 103 + rtc_time_to_tm(alm_time, rtc_tm); 104 + } 105 + BUG_ON(rtc_tm->tm_sec); /* it will cause repeated irqs if not zero */ 85 106 86 107 rtc_tm->tm_year -= 100; 87 108 rtc_tm->tm_mon += 1; ··· 99 100 ret = da9052_reg_update(da9052, DA9052_ALARM_MI_REG, 100 101 DA9052_RTC_MIN, rtc_tm->tm_min); 101 102 if (ret != 0) { 102 - rtc_err(da9052, "Failed to write ALRM MIN: %d\n", ret); 103 + rtc_err(rtc, "Failed to write ALRM MIN: %d\n", ret); 103 104 return ret; 104 105 } 105 106 ··· 114 115 ret = da9052_reg_update(da9052, DA9052_ALARM_Y_REG, 115 116 DA9052_RTC_YEAR, rtc_tm->tm_year); 116 117 if (ret != 0) 117 - rtc_err(da9052, "Failed to write ALRM YEAR: %d\n", ret); 118 + rtc_err(rtc, "Failed to write ALRM YEAR: %d\n", ret); 118 119 119 120 return ret; 120 121 } 121 122 122 - static int da9052_rtc_get_alarm_status(struct da9052 *da9052) 123 + static int 
da9052_rtc_get_alarm_status(struct da9052_rtc *rtc) 123 124 { 124 125 int ret; 125 126 126 - ret = da9052_reg_read(da9052, DA9052_ALARM_Y_REG); 127 + ret = da9052_reg_read(rtc->da9052, DA9052_ALARM_Y_REG); 127 128 if (ret < 0) { 128 - rtc_err(da9052, "Failed to read ALM: %d\n", ret); 129 + rtc_err(rtc, "Failed to read ALM: %d\n", ret); 129 130 return ret; 130 131 } 131 - ret &= DA9052_ALARM_Y_ALARM_ON; 132 - return (ret > 0) ? 1 : 0; 132 + 133 + return !!(ret&DA9052_ALARM_Y_ALARM_ON); 133 134 } 134 135 135 136 static int da9052_rtc_read_time(struct device *dev, struct rtc_time *rtc_tm) ··· 140 141 141 142 ret = da9052_group_read(rtc->da9052, DA9052_COUNT_S_REG, 6, v); 142 143 if (ret < 0) { 143 - rtc_err(rtc->da9052, "Failed to read RTC time : %d\n", ret); 144 + rtc_err(rtc, "Failed to read RTC time : %d\n", ret); 144 145 return ret; 145 146 } 146 147 ··· 152 153 rtc_tm->tm_sec = v[0] & DA9052_RTC_SEC; 153 154 154 155 ret = rtc_valid_tm(rtc_tm); 155 - if (ret != 0) { 156 - rtc_err(rtc->da9052, "rtc_valid_tm failed: %d\n", ret); 157 - return ret; 158 - } 159 - 160 - return 0; 156 + return ret; 161 157 } 162 158 163 159 static int da9052_rtc_set_time(struct device *dev, struct rtc_time *tm) 164 160 { 165 161 struct da9052_rtc *rtc; 166 162 uint8_t v[6]; 163 + int ret; 167 164 168 165 rtc = dev_get_drvdata(dev); 169 166 ··· 170 175 v[4] = tm->tm_mon + 1; 171 176 v[5] = tm->tm_year - 100; 172 177 173 - return da9052_group_write(rtc->da9052, DA9052_COUNT_S_REG, 6, v); 178 + ret = da9052_group_write(rtc->da9052, DA9052_COUNT_S_REG, 6, v); 179 + if (ret < 0) 180 + rtc_err(rtc, "failed to set RTC time: %d\n", ret); 181 + return ret; 174 182 } 175 183 176 184 static int da9052_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm) ··· 182 184 struct rtc_time *tm = &alrm->time; 183 185 struct da9052_rtc *rtc = dev_get_drvdata(dev); 184 186 185 - ret = da9052_read_alarm(rtc->da9052, tm); 186 - 187 - if (ret) 187 + ret = da9052_read_alarm(rtc, tm); 188 + if (ret < 0) { 
189 + rtc_err(rtc, "failed to read RTC alarm: %d\n", ret);
 188 190 return ret;
 191 + }
 189 192 
 190 - alrm->enabled = da9052_rtc_get_alarm_status(rtc->da9052);
 191 - 
 193 + alrm->enabled = da9052_rtc_get_alarm_status(rtc);
 192 194 return 0;
 193 195 }
 194 196 
 ··· 198 200 struct rtc_time *tm = &alrm->time;
 199 201 struct da9052_rtc *rtc = dev_get_drvdata(dev);
 200 202 
 201 - ret = da9052_rtc_enable_alarm(rtc->da9052, 0);
 203 + ret = da9052_rtc_enable_alarm(rtc, 0);
 202 204 if (ret < 0)
 203 205 return ret;
 204 206 
 205 - ret = da9052_set_alarm(rtc->da9052, tm);
 206 - if (ret)
 207 + ret = da9052_set_alarm(rtc, tm);
 208 + if (ret < 0)
 207 209 return ret;
 208 210 
 209 - ret = da9052_rtc_enable_alarm(rtc->da9052, 1);
 210 - 
 211 + ret = da9052_rtc_enable_alarm(rtc, 1);
 211 212 return ret;
 212 213 }
 213 214 
 ··· 214 217 {
 215 218 struct da9052_rtc *rtc = dev_get_drvdata(dev);
 216 219 
 217 - return da9052_rtc_enable_alarm(rtc->da9052, enabled);
 220 + return da9052_rtc_enable_alarm(rtc, enabled);
 218 221 }
 219 222 
 220 223 static const struct rtc_class_ops da9052_rtc_ops = {
 ··· 236 239 
 237 240 rtc->da9052 = dev_get_drvdata(pdev->dev.parent);
 238 241 platform_set_drvdata(pdev, rtc);
 242 + 
 243 + ret = da9052_reg_write(rtc->da9052, DA9052_BBAT_CONT_REG, 0xFE);
 244 + if (ret < 0) {
 245 + rtc_err(rtc,
 246 + "Failed to setup RTC battery charging: %d\n", ret);
 247 + return ret;
 248 + }
 249 + 
 250 + ret = da9052_reg_update(rtc->da9052, DA9052_ALARM_Y_REG,
 251 + DA9052_ALARM_Y_TICK_ON, 0);
 252 + if (ret != 0)
 253 + rtc_err(rtc, "Failed to disable TICKS: %d\n", ret);
 254 + 
 239 255 ret = da9052_request_irq(rtc->da9052, DA9052_IRQ_ALARM, "ALM",
 240 256 da9052_rtc_irq, rtc);
 241 257 if (ret != 0) {
 242 - rtc_err(rtc->da9052, "irq registration failed: %d\n", ret);
 258 + rtc_err(rtc, "irq registration failed: %d\n", ret);
 243 259 return ret;
 244 260 }
 ··· 271 261 
 272 262 module_platform_driver(da9052_rtc_driver);
 273 263 
 274 - MODULE_AUTHOR("David Dajun Chen <dchen@diasemi.com>");
 264 + MODULE_AUTHOR("Anthony Olech <Anthony.Olech@diasemi.com>");
 275 265 MODULE_DESCRIPTION("RTC driver for Dialog DA9052 PMIC");
 276 266 MODULE_LICENSE("GPL");
 277 267 MODULE_ALIAS("platform:da9052-rtc");
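The `da9052_set_alarm()` change above rounds a requested alarm with a nonzero seconds field up to the next full minute, because the DA9052 alarm registers have only minute resolution (a leftover seconds value would retrigger the interrupt, hence the `BUG_ON`). A minimal sketch of that rounding on a plain epoch-seconds value (hypothetical helper for illustration, not driver code):

```c
#include <time.h>

/* Round an epoch-seconds timestamp up to the next full minute,
 * mirroring the "alm_time += 60 - tm_sec" adjustment in the patch. */
static time_t round_up_to_minute(time_t t)
{
	time_t rem = t % 60;

	return rem ? t + (60 - rem) : t;
}
```

A timestamp already on a minute boundary is returned unchanged; anything else moves forward, never backward, so the alarm never fires before the time the caller asked for.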
+333
drivers/rtc/rtc-da9063.c
··· 1 + /* rtc-da9063.c - Real time clock device driver for DA9063 2 + * Copyright (C) 2013-14 Dialog Semiconductor Ltd. 3 + * 4 + * This library is free software; you can redistribute it and/or 5 + * modify it under the terms of the GNU Library General Public 6 + * License as published by the Free Software Foundation; either 7 + * version 2 of the License, or (at your option) any later version. 8 + * 9 + * This library is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 12 + * Library General Public License for more details. 13 + */ 14 + 15 + #include <linux/kernel.h> 16 + #include <linux/module.h> 17 + #include <linux/init.h> 18 + #include <linux/platform_device.h> 19 + #include <linux/interrupt.h> 20 + #include <linux/rtc.h> 21 + #include <linux/slab.h> 22 + #include <linux/delay.h> 23 + #include <linux/regmap.h> 24 + #include <linux/mfd/da9063/registers.h> 25 + #include <linux/mfd/da9063/core.h> 26 + 27 + #define YEARS_TO_DA9063(year) ((year) - 100) 28 + #define MONTHS_TO_DA9063(month) ((month) + 1) 29 + #define YEARS_FROM_DA9063(year) ((year) + 100) 30 + #define MONTHS_FROM_DA9063(month) ((month) - 1) 31 + 32 + #define RTC_DATA_LEN (DA9063_REG_COUNT_Y - DA9063_REG_COUNT_S + 1) 33 + #define RTC_SEC 0 34 + #define RTC_MIN 1 35 + #define RTC_HOUR 2 36 + #define RTC_DAY 3 37 + #define RTC_MONTH 4 38 + #define RTC_YEAR 5 39 + 40 + struct da9063_rtc { 41 + struct rtc_device *rtc_dev; 42 + struct da9063 *hw; 43 + struct rtc_time alarm_time; 44 + bool rtc_sync; 45 + }; 46 + 47 + static void da9063_data_to_tm(u8 *data, struct rtc_time *tm) 48 + { 49 + tm->tm_sec = data[RTC_SEC] & DA9063_COUNT_SEC_MASK; 50 + tm->tm_min = data[RTC_MIN] & DA9063_COUNT_MIN_MASK; 51 + tm->tm_hour = data[RTC_HOUR] & DA9063_COUNT_HOUR_MASK; 52 + tm->tm_mday = data[RTC_DAY] & DA9063_COUNT_DAY_MASK; 53 + tm->tm_mon = MONTHS_FROM_DA9063(data[RTC_MONTH] & 54 
+ DA9063_COUNT_MONTH_MASK); 55 + tm->tm_year = YEARS_FROM_DA9063(data[RTC_YEAR] & 56 + DA9063_COUNT_YEAR_MASK); 57 + } 58 + 59 + static void da9063_tm_to_data(struct rtc_time *tm, u8 *data) 60 + { 61 + data[RTC_SEC] &= ~DA9063_COUNT_SEC_MASK; 62 + data[RTC_SEC] |= tm->tm_sec & DA9063_COUNT_SEC_MASK; 63 + 64 + data[RTC_MIN] &= ~DA9063_COUNT_MIN_MASK; 65 + data[RTC_MIN] |= tm->tm_min & DA9063_COUNT_MIN_MASK; 66 + 67 + data[RTC_HOUR] &= ~DA9063_COUNT_HOUR_MASK; 68 + data[RTC_HOUR] |= tm->tm_hour & DA9063_COUNT_HOUR_MASK; 69 + 70 + data[RTC_DAY] &= ~DA9063_COUNT_DAY_MASK; 71 + data[RTC_DAY] |= tm->tm_mday & DA9063_COUNT_DAY_MASK; 72 + 73 + data[RTC_MONTH] &= ~DA9063_COUNT_MONTH_MASK; 74 + data[RTC_MONTH] |= MONTHS_TO_DA9063(tm->tm_mon) & 75 + DA9063_COUNT_MONTH_MASK; 76 + 77 + data[RTC_YEAR] &= ~DA9063_COUNT_YEAR_MASK; 78 + data[RTC_YEAR] |= YEARS_TO_DA9063(tm->tm_year) & 79 + DA9063_COUNT_YEAR_MASK; 80 + } 81 + 82 + static int da9063_rtc_stop_alarm(struct device *dev) 83 + { 84 + struct da9063_rtc *rtc = dev_get_drvdata(dev); 85 + 86 + return regmap_update_bits(rtc->hw->regmap, DA9063_REG_ALARM_Y, 87 + DA9063_ALARM_ON, 0); 88 + } 89 + 90 + static int da9063_rtc_start_alarm(struct device *dev) 91 + { 92 + struct da9063_rtc *rtc = dev_get_drvdata(dev); 93 + 94 + return regmap_update_bits(rtc->hw->regmap, DA9063_REG_ALARM_Y, 95 + DA9063_ALARM_ON, DA9063_ALARM_ON); 96 + } 97 + 98 + static int da9063_rtc_read_time(struct device *dev, struct rtc_time *tm) 99 + { 100 + struct da9063_rtc *rtc = dev_get_drvdata(dev); 101 + unsigned long tm_secs; 102 + unsigned long al_secs; 103 + u8 data[RTC_DATA_LEN]; 104 + int ret; 105 + 106 + ret = regmap_bulk_read(rtc->hw->regmap, DA9063_REG_COUNT_S, 107 + data, RTC_DATA_LEN); 108 + if (ret < 0) { 109 + dev_err(dev, "Failed to read RTC time data: %d\n", ret); 110 + return ret; 111 + } 112 + 113 + if (!(data[RTC_SEC] & DA9063_RTC_READ)) { 114 + dev_dbg(dev, "RTC not yet ready to be read by the host\n"); 115 + return -EINVAL; 116 + } 117 + 
118 + da9063_data_to_tm(data, tm); 119 + 120 + rtc_tm_to_time(tm, &tm_secs); 121 + rtc_tm_to_time(&rtc->alarm_time, &al_secs); 122 + 123 + /* handle the rtc synchronisation delay */ 124 + if (rtc->rtc_sync == true && al_secs - tm_secs == 1) 125 + memcpy(tm, &rtc->alarm_time, sizeof(struct rtc_time)); 126 + else 127 + rtc->rtc_sync = false; 128 + 129 + return rtc_valid_tm(tm); 130 + } 131 + 132 + static int da9063_rtc_set_time(struct device *dev, struct rtc_time *tm) 133 + { 134 + struct da9063_rtc *rtc = dev_get_drvdata(dev); 135 + u8 data[RTC_DATA_LEN]; 136 + int ret; 137 + 138 + da9063_tm_to_data(tm, data); 139 + ret = regmap_bulk_write(rtc->hw->regmap, DA9063_REG_COUNT_S, 140 + data, RTC_DATA_LEN); 141 + if (ret < 0) 142 + dev_err(dev, "Failed to set RTC time data: %d\n", ret); 143 + 144 + return ret; 145 + } 146 + 147 + static int da9063_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm) 148 + { 149 + struct da9063_rtc *rtc = dev_get_drvdata(dev); 150 + u8 data[RTC_DATA_LEN]; 151 + int ret; 152 + unsigned int val; 153 + 154 + ret = regmap_bulk_read(rtc->hw->regmap, DA9063_REG_ALARM_S, 155 + &data[RTC_SEC], RTC_DATA_LEN); 156 + if (ret < 0) 157 + return ret; 158 + 159 + da9063_data_to_tm(data, &alrm->time); 160 + 161 + alrm->enabled = !!(data[RTC_YEAR] & DA9063_ALARM_ON); 162 + 163 + ret = regmap_read(rtc->hw->regmap, DA9063_REG_EVENT_A, &val); 164 + if (ret < 0) 165 + return ret; 166 + 167 + if (val & (DA9063_E_ALARM)) 168 + alrm->pending = 1; 169 + else 170 + alrm->pending = 0; 171 + 172 + return 0; 173 + } 174 + 175 + static int da9063_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alrm) 176 + { 177 + struct da9063_rtc *rtc = dev_get_drvdata(dev); 178 + u8 data[RTC_DATA_LEN]; 179 + int ret; 180 + 181 + da9063_tm_to_data(&alrm->time, data); 182 + 183 + ret = da9063_rtc_stop_alarm(dev); 184 + if (ret < 0) { 185 + dev_err(dev, "Failed to stop alarm: %d\n", ret); 186 + return ret; 187 + } 188 + 189 + ret = regmap_bulk_write(rtc->hw->regmap, 
DA9063_REG_ALARM_S, 190 + data, RTC_DATA_LEN); 191 + if (ret < 0) { 192 + dev_err(dev, "Failed to write alarm: %d\n", ret); 193 + return ret; 194 + } 195 + 196 + rtc->alarm_time = alrm->time; 197 + 198 + if (alrm->enabled) { 199 + ret = da9063_rtc_start_alarm(dev); 200 + if (ret < 0) { 201 + dev_err(dev, "Failed to start alarm: %d\n", ret); 202 + return ret; 203 + } 204 + } 205 + 206 + return ret; 207 + } 208 + 209 + static int da9063_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled) 210 + { 211 + if (enabled) 212 + return da9063_rtc_start_alarm(dev); 213 + else 214 + return da9063_rtc_stop_alarm(dev); 215 + } 216 + 217 + static irqreturn_t da9063_alarm_event(int irq, void *data) 218 + { 219 + struct da9063_rtc *rtc = data; 220 + 221 + regmap_update_bits(rtc->hw->regmap, DA9063_REG_ALARM_Y, 222 + DA9063_ALARM_ON, 0); 223 + 224 + rtc->rtc_sync = true; 225 + rtc_update_irq(rtc->rtc_dev, 1, RTC_IRQF | RTC_AF); 226 + 227 + return IRQ_HANDLED; 228 + } 229 + 230 + static const struct rtc_class_ops da9063_rtc_ops = { 231 + .read_time = da9063_rtc_read_time, 232 + .set_time = da9063_rtc_set_time, 233 + .read_alarm = da9063_rtc_read_alarm, 234 + .set_alarm = da9063_rtc_set_alarm, 235 + .alarm_irq_enable = da9063_rtc_alarm_irq_enable, 236 + }; 237 + 238 + static int da9063_rtc_probe(struct platform_device *pdev) 239 + { 240 + struct da9063 *da9063 = dev_get_drvdata(pdev->dev.parent); 241 + struct da9063_rtc *rtc; 242 + int irq_alarm; 243 + u8 data[RTC_DATA_LEN]; 244 + int ret; 245 + 246 + ret = regmap_update_bits(da9063->regmap, DA9063_REG_CONTROL_E, 247 + DA9063_RTC_EN, DA9063_RTC_EN); 248 + if (ret < 0) { 249 + dev_err(&pdev->dev, "Failed to enable RTC\n"); 250 + goto err; 251 + } 252 + 253 + ret = regmap_update_bits(da9063->regmap, DA9063_REG_EN_32K, 254 + DA9063_CRYSTAL, DA9063_CRYSTAL); 255 + if (ret < 0) { 256 + dev_err(&pdev->dev, "Failed to run 32kHz oscillator\n"); 257 + goto err; 258 + } 259 + 260 + ret = regmap_update_bits(da9063->regmap, 
DA9063_REG_ALARM_S, 
 261 + DA9063_ALARM_STATUS_TICK | DA9063_ALARM_STATUS_ALARM,
 262 + 0);
 263 + if (ret < 0) {
 264 + dev_err(&pdev->dev, "Failed to access RTC alarm register\n");
 265 + goto err;
 266 + }
 267 + 
 268 + ret = regmap_update_bits(da9063->regmap, DA9063_REG_ALARM_S,
 269 + DA9063_ALARM_STATUS_ALARM,
 270 + DA9063_ALARM_STATUS_ALARM);
 271 + if (ret < 0) {
 272 + dev_err(&pdev->dev, "Failed to access RTC alarm register\n");
 273 + goto err;
 274 + }
 275 + 
 276 + ret = regmap_update_bits(da9063->regmap, DA9063_REG_ALARM_Y,
 277 + DA9063_TICK_ON, 0);
 278 + if (ret < 0) {
 279 + dev_err(&pdev->dev, "Failed to disable TICKs\n");
 280 + goto err;
 281 + }
 282 + 
 283 + ret = regmap_bulk_read(da9063->regmap, DA9063_REG_ALARM_S,
 284 + data, RTC_DATA_LEN);
 285 + if (ret < 0) {
 286 + dev_err(&pdev->dev, "Failed to read initial alarm data: %d\n",
 287 + ret);
 288 + goto err;
 289 + }
 290 + 
 291 + rtc = devm_kzalloc(&pdev->dev, sizeof(*rtc), GFP_KERNEL);
 292 + if (!rtc)
 293 + return -ENOMEM;
 294 + 
 295 + platform_set_drvdata(pdev, rtc);
 296 + 
 297 + irq_alarm = platform_get_irq_byname(pdev, "ALARM");
 298 + ret = devm_request_threaded_irq(&pdev->dev, irq_alarm, NULL,
 299 + da9063_alarm_event,
 300 + IRQF_TRIGGER_LOW | IRQF_ONESHOT,
 301 + "ALARM", rtc);
 302 + if (ret) {
 303 + dev_err(&pdev->dev, "Failed to request ALARM IRQ %d: %d\n",
 304 + irq_alarm, ret);
 305 + goto err;
 306 + }
 307 + 
 308 + rtc->hw = da9063;
 309 + rtc->rtc_dev = devm_rtc_device_register(&pdev->dev, DA9063_DRVNAME_RTC,
 310 + &da9063_rtc_ops, THIS_MODULE);
 311 + if (IS_ERR(rtc->rtc_dev))
 312 + return PTR_ERR(rtc->rtc_dev);
 313 + 
 314 + da9063_data_to_tm(data, &rtc->alarm_time);
 315 + rtc->rtc_sync = false;
 316 + err:
 317 + return ret;
 318 + }
 319 + 
 320 + static struct platform_driver da9063_rtc_driver = {
 321 + .probe = da9063_rtc_probe,
 322 + .driver = {
 323 + .name = DA9063_DRVNAME_RTC,
 324 + .owner = THIS_MODULE,
 325 + },
 326 + };
 327 + 
 328 + module_platform_driver(da9063_rtc_driver);
 329 + 
 330 + MODULE_AUTHOR("S Twiss <stwiss.opensource@diasemi.com>");
 331 + MODULE_DESCRIPTION("Real time clock device driver for Dialog DA9063");
 332 + MODULE_LICENSE("GPL v2");
 333 + MODULE_ALIAS("platform:" DA9063_DRVNAME_RTC);
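The conversion macros at the top of the new rtc-da9063.c only shift between the two calendars involved: `struct rtc_time` counts years from 1900 and months from 0, while the DA9063 registers count years from 2000 and months from 1. The macro bodies below are taken verbatim from the diff above and shown standalone:

```c
/* struct rtc_time: years since 1900, months 0-11.
 * DA9063 registers: years since 2000, months 1-12. */
#define YEARS_TO_DA9063(year)		((year) - 100)
#define MONTHS_TO_DA9063(month)		((month) + 1)
#define YEARS_FROM_DA9063(year)		((year) + 100)
#define MONTHS_FROM_DA9063(month)	((month) - 1)
```

So calendar year 2014 (`tm_year == 114`) is stored as 14 in the chip's year register, and January (`tm_mon == 0`) as month 1.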
+689
drivers/rtc/rtc-ds1343.c
··· 1 + /* rtc-ds1343.c 2 + * 3 + * Driver for Dallas Semiconductor DS1343 Low Current, SPI Compatible 4 + * Real Time Clock 5 + * 6 + * Author : Raghavendra Chandra Ganiga <ravi23ganiga@gmail.com> 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License version 2 as 10 + * published by the Free Software Foundation. 11 + * 12 + */ 13 + 14 + #include <linux/init.h> 15 + #include <linux/module.h> 16 + #include <linux/interrupt.h> 17 + #include <linux/device.h> 18 + #include <linux/spi/spi.h> 19 + #include <linux/regmap.h> 20 + #include <linux/rtc.h> 21 + #include <linux/bcd.h> 22 + #include <linux/pm.h> 23 + #include <linux/slab.h> 24 + 25 + #define DS1343_DRV_VERSION "01.00" 26 + #define DALLAS_MAXIM_DS1343 0 27 + #define DALLAS_MAXIM_DS1344 1 28 + 29 + /* RTC DS1343 Registers */ 30 + #define DS1343_SECONDS_REG 0x00 31 + #define DS1343_MINUTES_REG 0x01 32 + #define DS1343_HOURS_REG 0x02 33 + #define DS1343_DAY_REG 0x03 34 + #define DS1343_DATE_REG 0x04 35 + #define DS1343_MONTH_REG 0x05 36 + #define DS1343_YEAR_REG 0x06 37 + #define DS1343_ALM0_SEC_REG 0x07 38 + #define DS1343_ALM0_MIN_REG 0x08 39 + #define DS1343_ALM0_HOUR_REG 0x09 40 + #define DS1343_ALM0_DAY_REG 0x0A 41 + #define DS1343_ALM1_SEC_REG 0x0B 42 + #define DS1343_ALM1_MIN_REG 0x0C 43 + #define DS1343_ALM1_HOUR_REG 0x0D 44 + #define DS1343_ALM1_DAY_REG 0x0E 45 + #define DS1343_CONTROL_REG 0x0F 46 + #define DS1343_STATUS_REG 0x10 47 + #define DS1343_TRICKLE_REG 0x11 48 + 49 + /* DS1343 Control Registers bits */ 50 + #define DS1343_EOSC 0x80 51 + #define DS1343_DOSF 0x20 52 + #define DS1343_EGFIL 0x10 53 + #define DS1343_SQW 0x08 54 + #define DS1343_INTCN 0x04 55 + #define DS1343_A1IE 0x02 56 + #define DS1343_A0IE 0x01 57 + 58 + /* DS1343 Status Registers bits */ 59 + #define DS1343_OSF 0x80 60 + #define DS1343_IRQF1 0x02 61 + #define DS1343_IRQF0 0x01 62 + 63 + /* DS1343 Trickle Charger Registers bits */ 64 + #define 
DS1343_TRICKLE_MAGIC 0xa0 65 + #define DS1343_TRICKLE_DS1 0x08 66 + #define DS1343_TRICKLE_1K 0x01 67 + #define DS1343_TRICKLE_2K 0x02 68 + #define DS1343_TRICKLE_4K 0x03 69 + 70 + static const struct spi_device_id ds1343_id[] = { 71 + { "ds1343", DALLAS_MAXIM_DS1343 }, 72 + { "ds1344", DALLAS_MAXIM_DS1344 }, 73 + { } 74 + }; 75 + MODULE_DEVICE_TABLE(spi, ds1343_id); 76 + 77 + struct ds1343_priv { 78 + struct spi_device *spi; 79 + struct rtc_device *rtc; 80 + struct regmap *map; 81 + struct mutex mutex; 82 + unsigned int irqen; 83 + int irq; 84 + int alarm_sec; 85 + int alarm_min; 86 + int alarm_hour; 87 + int alarm_mday; 88 + }; 89 + 90 + static int ds1343_ioctl(struct device *dev, unsigned int cmd, unsigned long arg) 91 + { 92 + switch (cmd) { 93 + #ifdef RTC_SET_CHARGE 94 + case RTC_SET_CHARGE: 95 + { 96 + int val; 97 + 98 + if (copy_from_user(&val, (int __user *)arg, sizeof(int))) 99 + return -EFAULT; 100 + 101 + return regmap_write(priv->map, DS1343_TRICKLE_REG, val); 102 + } 103 + break; 104 + #endif 105 + } 106 + 107 + return -ENOIOCTLCMD; 108 + } 109 + 110 + static ssize_t ds1343_show_glitchfilter(struct device *dev, 111 + struct device_attribute *attr, char *buf) 112 + { 113 + struct ds1343_priv *priv = dev_get_drvdata(dev); 114 + int glitch_filt_status, data; 115 + 116 + regmap_read(priv->map, DS1343_CONTROL_REG, &data); 117 + 118 + glitch_filt_status = !!(data & DS1343_EGFIL); 119 + 120 + if (glitch_filt_status) 121 + return sprintf(buf, "enabled\n"); 122 + else 123 + return sprintf(buf, "disabled\n"); 124 + } 125 + 126 + static ssize_t ds1343_store_glitchfilter(struct device *dev, 127 + struct device_attribute *attr, 128 + const char *buf, size_t count) 129 + { 130 + struct ds1343_priv *priv = dev_get_drvdata(dev); 131 + int data; 132 + 133 + regmap_read(priv->map, DS1343_CONTROL_REG, &data); 134 + 135 + if (strncmp(buf, "enabled", 7) == 0) 136 + data |= DS1343_EGFIL; 137 + 138 + else if (strncmp(buf, "disabled", 8) == 0) 139 + data &= ~(DS1343_EGFIL); 
140 + 141 + else 142 + return -EINVAL; 143 + 144 + regmap_write(priv->map, DS1343_CONTROL_REG, data); 145 + 146 + return count; 147 + } 148 + 149 + static DEVICE_ATTR(glitch_filter, S_IRUGO | S_IWUSR, ds1343_show_glitchfilter, 150 + ds1343_store_glitchfilter); 151 + 152 + static ssize_t ds1343_show_alarmstatus(struct device *dev, 153 + struct device_attribute *attr, char *buf) 154 + { 155 + struct ds1343_priv *priv = dev_get_drvdata(dev); 156 + int alarmstatus, data; 157 + 158 + regmap_read(priv->map, DS1343_CONTROL_REG, &data); 159 + 160 + alarmstatus = !!(data & DS1343_A0IE); 161 + 162 + if (alarmstatus) 163 + return sprintf(buf, "enabled\n"); 164 + else 165 + return sprintf(buf, "disabled\n"); 166 + } 167 + 168 + static DEVICE_ATTR(alarm_status, S_IRUGO, ds1343_show_alarmstatus, NULL); 169 + 170 + static ssize_t ds1343_show_alarmmode(struct device *dev, 171 + struct device_attribute *attr, char *buf) 172 + { 173 + struct ds1343_priv *priv = dev_get_drvdata(dev); 174 + int alarm_mode, data; 175 + char *alarm_str; 176 + 177 + regmap_read(priv->map, DS1343_ALM0_SEC_REG, &data); 178 + alarm_mode = (data & 0x80) >> 4; 179 + 180 + regmap_read(priv->map, DS1343_ALM0_MIN_REG, &data); 181 + alarm_mode |= (data & 0x80) >> 5; 182 + 183 + regmap_read(priv->map, DS1343_ALM0_HOUR_REG, &data); 184 + alarm_mode |= (data & 0x80) >> 6; 185 + 186 + regmap_read(priv->map, DS1343_ALM0_DAY_REG, &data); 187 + alarm_mode |= (data & 0x80) >> 7; 188 + 189 + switch (alarm_mode) { 190 + case 15: 191 + alarm_str = "each second"; 192 + break; 193 + 194 + case 7: 195 + alarm_str = "seconds match"; 196 + break; 197 + 198 + case 3: 199 + alarm_str = "minutes and seconds match"; 200 + break; 201 + 202 + case 1: 203 + alarm_str = "hours, minutes and seconds match"; 204 + break; 205 + 206 + case 0: 207 + alarm_str = "day, hours, minutes and seconds match"; 208 + break; 209 + 210 + default: 211 + alarm_str = "invalid"; 212 + break; 213 + } 214 + 215 + return sprintf(buf, "%s\n", alarm_str); 216 + } 
217 + 218 + static DEVICE_ATTR(alarm_mode, S_IRUGO, ds1343_show_alarmmode, NULL); 219 + 220 + static ssize_t ds1343_show_tricklecharger(struct device *dev, 221 + struct device_attribute *attr, char *buf) 222 + { 223 + struct ds1343_priv *priv = dev_get_drvdata(dev); 224 + int data; 225 + char *diodes = "disabled", *resistors = " "; 226 + 227 + regmap_read(priv->map, DS1343_TRICKLE_REG, &data); 228 + 229 + if ((data & 0xf0) == DS1343_TRICKLE_MAGIC) { 230 + switch (data & 0x0c) { 231 + case DS1343_TRICKLE_DS1: 232 + diodes = "one diode,"; 233 + break; 234 + 235 + default: 236 + diodes = "no diode,"; 237 + break; 238 + } 239 + 240 + switch (data & 0x03) { 241 + case DS1343_TRICKLE_1K: 242 + resistors = "1k Ohm"; 243 + break; 244 + 245 + case DS1343_TRICKLE_2K: 246 + resistors = "2k Ohm"; 247 + break; 248 + 249 + case DS1343_TRICKLE_4K: 250 + resistors = "4k Ohm"; 251 + break; 252 + 253 + default: 254 + diodes = "disabled"; 255 + break; 256 + } 257 + } 258 + 259 + return sprintf(buf, "%s %s\n", diodes, resistors); 260 + } 261 + 262 + static DEVICE_ATTR(trickle_charger, S_IRUGO, ds1343_show_tricklecharger, NULL); 263 + 264 + static int ds1343_sysfs_register(struct device *dev) 265 + { 266 + struct ds1343_priv *priv = dev_get_drvdata(dev); 267 + int err; 268 + 269 + err = device_create_file(dev, &dev_attr_glitch_filter); 270 + if (err) 271 + return err; 272 + 273 + err = device_create_file(dev, &dev_attr_trickle_charger); 274 + if (err) 275 + goto error1; 276 + 277 + if (priv->irq <= 0) 278 + return err; 279 + 280 + err = device_create_file(dev, &dev_attr_alarm_mode); 281 + if (err) 282 + goto error2; 283 + 284 + err = device_create_file(dev, &dev_attr_alarm_status); 285 + if (!err) 286 + return err; 287 + 288 + device_remove_file(dev, &dev_attr_alarm_mode); 289 + 290 + error2: 291 + device_remove_file(dev, &dev_attr_trickle_charger); 292 + 293 + error1: 294 + device_remove_file(dev, &dev_attr_glitch_filter); 295 + 296 + return err; 297 + } 298 + 299 + static void 
ds1343_sysfs_unregister(struct device *dev) 300 + { 301 + struct ds1343_priv *priv = dev_get_drvdata(dev); 302 + 303 + device_remove_file(dev, &dev_attr_glitch_filter); 304 + device_remove_file(dev, &dev_attr_trickle_charger); 305 + 306 + if (priv->irq <= 0) 307 + return; 308 + 309 + device_remove_file(dev, &dev_attr_alarm_status); 310 + device_remove_file(dev, &dev_attr_alarm_mode); 311 + } 312 + 313 + static int ds1343_read_time(struct device *dev, struct rtc_time *dt) 314 + { 315 + struct ds1343_priv *priv = dev_get_drvdata(dev); 316 + unsigned char buf[7]; 317 + int res; 318 + 319 + res = regmap_bulk_read(priv->map, DS1343_SECONDS_REG, buf, 7); 320 + if (res) 321 + return res; 322 + 323 + dt->tm_sec = bcd2bin(buf[0]); 324 + dt->tm_min = bcd2bin(buf[1]); 325 + dt->tm_hour = bcd2bin(buf[2] & 0x3F); 326 + dt->tm_wday = bcd2bin(buf[3]) - 1; 327 + dt->tm_mday = bcd2bin(buf[4]); 328 + dt->tm_mon = bcd2bin(buf[5] & 0x1F) - 1; 329 + dt->tm_year = bcd2bin(buf[6]) + 100; /* year offset from 1900 */ 330 + 331 + return rtc_valid_tm(dt); 332 + } 333 + 334 + static int ds1343_set_time(struct device *dev, struct rtc_time *dt) 335 + { 336 + struct ds1343_priv *priv = dev_get_drvdata(dev); 337 + int res; 338 + 339 + res = regmap_write(priv->map, DS1343_SECONDS_REG, 340 + bin2bcd(dt->tm_sec)); 341 + if (res) 342 + return res; 343 + 344 + res = regmap_write(priv->map, DS1343_MINUTES_REG, 345 + bin2bcd(dt->tm_min)); 346 + if (res) 347 + return res; 348 + 349 + res = regmap_write(priv->map, DS1343_HOURS_REG, 350 + bin2bcd(dt->tm_hour) & 0x3F); 351 + if (res) 352 + return res; 353 + 354 + res = regmap_write(priv->map, DS1343_DAY_REG, 355 + bin2bcd(dt->tm_wday + 1)); 356 + if (res) 357 + return res; 358 + 359 + res = regmap_write(priv->map, DS1343_DATE_REG, 360 + bin2bcd(dt->tm_mday)); 361 + if (res) 362 + return res; 363 + 364 + res = regmap_write(priv->map, DS1343_MONTH_REG, 365 + bin2bcd(dt->tm_mon + 1)); 366 + if (res) 367 + return res; 368 + 369 + dt->tm_year %= 100; 370 + 371 + 
res = regmap_write(priv->map, DS1343_YEAR_REG, 372 + bin2bcd(dt->tm_year)); 373 + if (res) 374 + return res; 375 + 376 + return 0; 377 + } 378 + 379 + static int ds1343_update_alarm(struct device *dev) 380 + { 381 + struct ds1343_priv *priv = dev_get_drvdata(dev); 382 + unsigned int control, stat; 383 + unsigned char buf[4]; 384 + int res = 0; 385 + 386 + res = regmap_read(priv->map, DS1343_CONTROL_REG, &control); 387 + if (res) 388 + return res; 389 + 390 + res = regmap_read(priv->map, DS1343_STATUS_REG, &stat); 391 + if (res) 392 + return res; 393 + 394 + control &= ~(DS1343_A0IE); 395 + stat &= ~(DS1343_IRQF0); 396 + 397 + res = regmap_write(priv->map, DS1343_CONTROL_REG, control); 398 + if (res) 399 + return res; 400 + 401 + res = regmap_write(priv->map, DS1343_STATUS_REG, stat); 402 + if (res) 403 + return res; 404 + 405 + buf[0] = priv->alarm_sec < 0 || (priv->irqen & RTC_UF) ? 406 + 0x80 : bin2bcd(priv->alarm_sec) & 0x7F; 407 + buf[1] = priv->alarm_min < 0 || (priv->irqen & RTC_UF) ? 408 + 0x80 : bin2bcd(priv->alarm_min) & 0x7F; 409 + buf[2] = priv->alarm_hour < 0 || (priv->irqen & RTC_UF) ? 410 + 0x80 : bin2bcd(priv->alarm_hour) & 0x3F; 411 + buf[3] = priv->alarm_mday < 0 || (priv->irqen & RTC_UF) ? 
412 + 0x80 : bin2bcd(priv->alarm_mday) & 0x7F; 413 + 414 + res = regmap_bulk_write(priv->map, DS1343_ALM0_SEC_REG, buf, 4); 415 + if (res) 416 + return res; 417 + 418 + if (priv->irqen) { 419 + control |= DS1343_A0IE; 420 + res = regmap_write(priv->map, DS1343_CONTROL_REG, control); 421 + } 422 + 423 + return res; 424 + } 425 + 426 + static int ds1343_read_alarm(struct device *dev, struct rtc_wkalrm *alarm) 427 + { 428 + struct ds1343_priv *priv = dev_get_drvdata(dev); 429 + int res = 0; 430 + unsigned int stat; 431 + 432 + if (priv->irq <= 0) 433 + return -EINVAL; 434 + 435 + mutex_lock(&priv->mutex); 436 + 437 + res = regmap_read(priv->map, DS1343_STATUS_REG, &stat); 438 + if (res) 439 + goto out; 440 + 441 + alarm->enabled = !!(priv->irqen & RTC_AF); 442 + alarm->pending = !!(stat & DS1343_IRQF0); 443 + 444 + alarm->time.tm_sec = priv->alarm_sec < 0 ? 0 : priv->alarm_sec; 445 + alarm->time.tm_min = priv->alarm_min < 0 ? 0 : priv->alarm_min; 446 + alarm->time.tm_hour = priv->alarm_hour < 0 ? 0 : priv->alarm_hour; 447 + alarm->time.tm_mday = priv->alarm_mday < 0 ? 
0 : priv->alarm_mday; 448 + 449 + alarm->time.tm_mon = -1; 450 + alarm->time.tm_year = -1; 451 + alarm->time.tm_wday = -1; 452 + alarm->time.tm_yday = -1; 453 + alarm->time.tm_isdst = -1; 454 + 455 + out: 456 + mutex_unlock(&priv->mutex); 457 + return res; 458 + } 459 + 460 + static int ds1343_set_alarm(struct device *dev, struct rtc_wkalrm *alarm) 461 + { 462 + struct ds1343_priv *priv = dev_get_drvdata(dev); 463 + int res = 0; 464 + 465 + if (priv->irq <= 0) 466 + return -EINVAL; 467 + 468 + mutex_lock(&priv->mutex); 469 + 470 + priv->alarm_sec = alarm->time.tm_sec; 471 + priv->alarm_min = alarm->time.tm_min; 472 + priv->alarm_hour = alarm->time.tm_hour; 473 + priv->alarm_mday = alarm->time.tm_mday; 474 + 475 + if (alarm->enabled) 476 + priv->irqen |= RTC_AF; 477 + 478 + res = ds1343_update_alarm(dev); 479 + 480 + mutex_unlock(&priv->mutex); 481 + 482 + return res; 483 + } 484 + 485 + static int ds1343_alarm_irq_enable(struct device *dev, unsigned int enabled) 486 + { 487 + struct ds1343_priv *priv = dev_get_drvdata(dev); 488 + int res = 0; 489 + 490 + if (priv->irq <= 0) 491 + return -EINVAL; 492 + 493 + mutex_lock(&priv->mutex); 494 + 495 + if (enabled) 496 + priv->irqen |= RTC_AF; 497 + else 498 + priv->irqen &= ~RTC_AF; 499 + 500 + res = ds1343_update_alarm(dev); 501 + 502 + mutex_unlock(&priv->mutex); 503 + 504 + return res; 505 + } 506 + 507 + static irqreturn_t ds1343_thread(int irq, void *dev_id) 508 + { 509 + struct ds1343_priv *priv = dev_id; 510 + unsigned int stat, control; 511 + int res = 0; 512 + 513 + mutex_lock(&priv->mutex); 514 + 515 + res = regmap_read(priv->map, DS1343_STATUS_REG, &stat); 516 + if (res) 517 + goto out; 518 + 519 + if (stat & DS1343_IRQF0) { 520 + stat &= ~DS1343_IRQF0; 521 + regmap_write(priv->map, DS1343_STATUS_REG, stat); 522 + 523 + res = regmap_read(priv->map, DS1343_CONTROL_REG, &control); 524 + if (res) 525 + goto out; 526 + 527 + control &= ~DS1343_A0IE; 528 + regmap_write(priv->map, DS1343_CONTROL_REG, control); 529 + 
530 + rtc_update_irq(priv->rtc, 1, RTC_AF | RTC_IRQF); 531 + } 532 + 533 + out: 534 + mutex_unlock(&priv->mutex); 535 + return IRQ_HANDLED; 536 + } 537 + 538 + static const struct rtc_class_ops ds1343_rtc_ops = { 539 + .ioctl = ds1343_ioctl, 540 + .read_time = ds1343_read_time, 541 + .set_time = ds1343_set_time, 542 + .read_alarm = ds1343_read_alarm, 543 + .set_alarm = ds1343_set_alarm, 544 + .alarm_irq_enable = ds1343_alarm_irq_enable, 545 + }; 546 + 547 + static int ds1343_probe(struct spi_device *spi) 548 + { 549 + struct ds1343_priv *priv; 550 + struct regmap_config config; 551 + unsigned int data; 552 + int res; 553 + 554 + memset(&config, 0, sizeof(config)); 555 + config.reg_bits = 8; 556 + config.val_bits = 8; 557 + config.write_flag_mask = 0x80; 558 + 559 + priv = devm_kzalloc(&spi->dev, sizeof(struct ds1343_priv), GFP_KERNEL); 560 + if (!priv) 561 + return -ENOMEM; 562 + 563 + priv->spi = spi; 564 + mutex_init(&priv->mutex); 565 + 566 + /* RTC DS1347 works in spi mode 3 and 567 + * its chip select is active high 568 + */ 569 + spi->mode = SPI_MODE_3 | SPI_CS_HIGH; 570 + spi->bits_per_word = 8; 571 + res = spi_setup(spi); 572 + if (res) 573 + return res; 574 + 575 + spi_set_drvdata(spi, priv); 576 + 577 + priv->map = devm_regmap_init_spi(spi, &config); 578 + 579 + if (IS_ERR(priv->map)) { 580 + dev_err(&spi->dev, "spi regmap init failed for rtc ds1343\n"); 581 + return PTR_ERR(priv->map); 582 + } 583 + 584 + res = regmap_read(priv->map, DS1343_SECONDS_REG, &data); 585 + if (res) 586 + return res; 587 + 588 + regmap_read(priv->map, DS1343_CONTROL_REG, &data); 589 + data |= DS1343_INTCN; 590 + data &= ~(DS1343_EOSC | DS1343_A1IE | DS1343_A0IE); 591 + regmap_write(priv->map, DS1343_CONTROL_REG, data); 592 + 593 + regmap_read(priv->map, DS1343_STATUS_REG, &data); 594 + data &= ~(DS1343_OSF | DS1343_IRQF1 | DS1343_IRQF0); 595 + regmap_write(priv->map, DS1343_STATUS_REG, data); 596 + 597 + priv->rtc = devm_rtc_device_register(&spi->dev, "ds1343", 598 + 
&ds1343_rtc_ops, THIS_MODULE); 599 + if (IS_ERR(priv->rtc)) { 600 + dev_err(&spi->dev, "unable to register rtc ds1343\n"); 601 + return PTR_ERR(priv->rtc); 602 + } 603 + 604 + priv->irq = spi->irq; 605 + 606 + if (priv->irq >= 0) { 607 + res = devm_request_threaded_irq(&spi->dev, spi->irq, NULL, 608 + ds1343_thread, 609 + IRQF_NO_SUSPEND | IRQF_ONESHOT, 610 + "ds1343", priv); 611 + if (res) { 612 + priv->irq = -1; 613 + dev_err(&spi->dev, 614 + "unable to request irq for rtc ds1343\n"); 615 + } else { 616 + device_set_wakeup_capable(&spi->dev, 1); 617 + } 618 + } 619 + 620 + res = ds1343_sysfs_register(&spi->dev); 621 + if (res) 622 + dev_err(&spi->dev, 623 + "unable to create sysfs entries for rtc ds1343\n"); 624 + 625 + return 0; 626 + } 627 + 628 + static int ds1343_remove(struct spi_device *spi) 629 + { 630 + struct ds1343_priv *priv = spi_get_drvdata(spi); 631 + 632 + if (spi->irq) { 633 + mutex_lock(&priv->mutex); 634 + priv->irqen &= ~RTC_AF; 635 + mutex_unlock(&priv->mutex); 636 + 637 + devm_free_irq(&spi->dev, spi->irq, priv); 638 + } 639 + 640 + spi_set_drvdata(spi, NULL); 641 + 642 + ds1343_sysfs_unregister(&spi->dev); 643 + 644 + return 0; 645 + } 646 + 647 + #ifdef CONFIG_PM_SLEEP 648 + 649 + static int ds1343_suspend(struct device *dev) 650 + { 651 + struct spi_device *spi = to_spi_device(dev); 652 + 653 + if (spi->irq >= 0 && device_may_wakeup(dev)) 654 + enable_irq_wake(spi->irq); 655 + 656 + return 0; 657 + } 658 + 659 + static int ds1343_resume(struct device *dev) 660 + { 661 + struct spi_device *spi = to_spi_device(dev); 662 + 663 + if (spi->irq >= 0 && device_may_wakeup(dev)) 664 + disable_irq_wake(spi->irq); 665 + 666 + return 0; 667 + } 668 + 669 + #endif 670 + 671 + static SIMPLE_DEV_PM_OPS(ds1343_pm, ds1343_suspend, ds1343_resume); 672 + 673 + static struct spi_driver ds1343_driver = { 674 + .driver = { 675 + .name = "ds1343", 676 + .owner = THIS_MODULE, 677 + .pm = &ds1343_pm, 678 + }, 679 + .probe = ds1343_probe, 680 + .remove = 
ds1343_remove, 681 + .id_table = ds1343_id, 682 + }; 683 + 684 + module_spi_driver(ds1343_driver); 685 + 686 + MODULE_DESCRIPTION("DS1343 RTC SPI Driver"); 687 + MODULE_AUTHOR("Raghavendra Chandra Ganiga <ravi23ganiga@gmail.com>"); 688 + MODULE_LICENSE("GPL v2"); 689 + MODULE_VERSION(DS1343_DRV_VERSION);
+1 -1
drivers/rtc/rtc-ds1742.c
··· 219 219 return 0; 220 220 } 221 221 222 - static struct of_device_id __maybe_unused ds1742_rtc_of_match[] = { 222 + static const struct of_device_id __maybe_unused ds1742_rtc_of_match[] = { 223 223 { .compatible = "maxim,ds1742", }, 224 224 { } 225 225 };
+1 -1
drivers/rtc/rtc-efi.c
··· 35 35 compute_yday(efi_time_t *eft) 36 36 { 37 37 /* efi_time_t.month is in the [1-12] so, we need -1 */ 38 - return rtc_year_days(eft->day - 1, eft->month - 1, eft->year); 38 + return rtc_year_days(eft->day, eft->month - 1, eft->year); 39 39 } 40 40 /* 41 41 * returns day of the week [0-6] 0=Sunday
+4 -1
drivers/rtc/rtc-hym8563.c
··· 418 418 init.num_parents = 0; 419 419 hym8563->clkout_hw.init = &init; 420 420 421 + /* optional override of the clockname */ 422 + of_property_read_string(node, "clock-output-names", &init.name); 423 + 421 424 /* register the clock */ 422 425 clk = clk_register(&client->dev, &hym8563->clkout_hw); 423 426 ··· 588 585 }; 589 586 MODULE_DEVICE_TABLE(i2c, hym8563_id); 590 587 591 - static struct of_device_id hym8563_dt_idtable[] = { 588 + static const struct of_device_id hym8563_dt_idtable[] = { 592 589 { .compatible = "haoyu,hym8563" }, 593 590 {}, 594 591 };
+1 -1
drivers/rtc/rtc-isl12057.c
··· 278 278 } 279 279 280 280 #ifdef CONFIG_OF 281 - static struct of_device_id isl12057_dt_match[] = { 281 + static const struct of_device_id isl12057_dt_match[] = { 282 282 { .compatible = "isl,isl12057" }, 283 283 { }, 284 284 };
+45 -59
drivers/rtc/rtc-m41t80.c
··· 66 66 #define M41T80_FEATURE_WD (1 << 3) /* Extra watchdog resolution */ 67 67 #define M41T80_FEATURE_SQ_ALT (1 << 4) /* RSx bits are in reg 4 */ 68 68 69 - #define DRV_VERSION "0.05" 70 - 71 69 static DEFINE_MUTEX(m41t80_rtc_mutex); 72 70 static const struct i2c_device_id m41t80_id[] = { 73 71 { "m41t62", M41T80_FEATURE_SQ | M41T80_FEATURE_SQ_ALT }, ··· 78 80 { "m41st84", M41T80_FEATURE_HT | M41T80_FEATURE_BL | M41T80_FEATURE_SQ }, 79 81 { "m41st85", M41T80_FEATURE_HT | M41T80_FEATURE_BL | M41T80_FEATURE_SQ }, 80 82 { "m41st87", M41T80_FEATURE_HT | M41T80_FEATURE_BL | M41T80_FEATURE_SQ }, 83 + { "rv4162", M41T80_FEATURE_SQ | M41T80_FEATURE_WD | M41T80_FEATURE_SQ_ALT }, 81 84 { } 82 85 }; 83 86 MODULE_DEVICE_TABLE(i2c, m41t80_id); ··· 231 232 232 233 val = i2c_smbus_read_byte_data(client, M41T80_REG_FLAGS); 233 234 if (val < 0) 234 - return -EIO; 235 + return val; 235 236 return sprintf(buf, "%#x\n", val); 236 237 } 237 238 static DEVICE_ATTR(flags, S_IRUGO, m41t80_sysfs_show_flags, NULL); ··· 251 252 reg_sqw = M41T80_REG_WDAY; 252 253 val = i2c_smbus_read_byte_data(client, reg_sqw); 253 254 if (val < 0) 254 - return -EIO; 255 + return val; 255 256 val = (val >> 4) & 0xf; 256 257 switch (val) { 257 258 case 0: ··· 270 271 { 271 272 struct i2c_client *client = to_i2c_client(dev); 272 273 struct m41t80_data *clientdata = i2c_get_clientdata(client); 273 - int almon, sqw, reg_sqw; 274 + int almon, sqw, reg_sqw, rc; 274 275 int val = simple_strtoul(buf, NULL, 0); 275 276 276 277 if (!(clientdata->features & M41T80_FEATURE_SQ)) ··· 290 291 /* disable SQW, set SQW frequency & re-enable */ 291 292 almon = i2c_smbus_read_byte_data(client, M41T80_REG_ALARM_MON); 292 293 if (almon < 0) 293 - return -EIO; 294 + return almon; 294 295 reg_sqw = M41T80_REG_SQW; 295 296 if (clientdata->features & M41T80_FEATURE_SQ_ALT) 296 297 reg_sqw = M41T80_REG_WDAY; 297 298 sqw = i2c_smbus_read_byte_data(client, reg_sqw); 298 299 if (sqw < 0) 299 - return -EIO; 300 + return sqw; 300 301 
sqw = (sqw & 0x0f) | (val << 4); 301 - if (i2c_smbus_write_byte_data(client, M41T80_REG_ALARM_MON, 302 - almon & ~M41T80_ALMON_SQWE) < 0 || 303 - i2c_smbus_write_byte_data(client, reg_sqw, sqw) < 0) 304 - return -EIO; 305 - if (val && i2c_smbus_write_byte_data(client, M41T80_REG_ALARM_MON, 306 - almon | M41T80_ALMON_SQWE) < 0) 307 - return -EIO; 302 + 303 + rc = i2c_smbus_write_byte_data(client, M41T80_REG_ALARM_MON, 304 + almon & ~M41T80_ALMON_SQWE); 305 + if (rc < 0) 306 + return rc; 307 + 308 + if (val) { 309 + rc = i2c_smbus_write_byte_data(client, reg_sqw, sqw); 310 + if (rc < 0) 311 + return rc; 312 + 313 + rc = i2c_smbus_write_byte_data(client, M41T80_REG_ALARM_MON, 314 + almon | M41T80_ALMON_SQWE); 315 + if (rc <0) 316 + return rc; 317 + } 308 318 return count; 309 319 } 310 320 static DEVICE_ATTR(sqwfreq, S_IRUGO | S_IWUSR, ··· 637 629 struct m41t80_data *clientdata = NULL; 638 630 639 631 if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C 640 - | I2C_FUNC_SMBUS_BYTE_DATA)) { 641 - rc = -ENODEV; 642 - goto exit; 643 - } 644 - 645 - dev_info(&client->dev, 646 - "chip found, driver version " DRV_VERSION "\n"); 632 + | I2C_FUNC_SMBUS_BYTE_DATA)) 633 + return -ENODEV; 647 634 648 635 clientdata = devm_kzalloc(&client->dev, sizeof(*clientdata), 649 636 GFP_KERNEL); 650 - if (!clientdata) { 651 - rc = -ENOMEM; 652 - goto exit; 653 - } 637 + if (!clientdata) 638 + return -ENOMEM; 654 639 655 640 clientdata->features = id->driver_data; 656 641 i2c_set_clientdata(client, clientdata); 657 642 658 643 rtc = devm_rtc_device_register(&client->dev, client->name, 659 644 &m41t80_rtc_ops, THIS_MODULE); 660 - if (IS_ERR(rtc)) { 661 - rc = PTR_ERR(rtc); 662 - rtc = NULL; 663 - goto exit; 664 - } 645 + if (IS_ERR(rtc)) 646 + return PTR_ERR(rtc); 665 647 666 648 clientdata->rtc = rtc; 667 649 668 650 /* Make sure HT (Halt Update) bit is cleared */ 669 651 rc = i2c_smbus_read_byte_data(client, M41T80_REG_ALARM_HOUR); 670 - if (rc < 0) 671 - goto ht_err; 672 652 673 - 
if (rc & M41T80_ALHOUR_HT) { 653 + if (rc >= 0 && rc & M41T80_ALHOUR_HT) { 674 654 if (clientdata->features & M41T80_FEATURE_HT) { 675 655 m41t80_get_datetime(client, &tm); 676 656 dev_info(&client->dev, "HT bit was set!\n"); ··· 669 673 tm.tm_mon + 1, tm.tm_mday, tm.tm_hour, 670 674 tm.tm_min, tm.tm_sec); 671 675 } 672 - if (i2c_smbus_write_byte_data(client, 673 - M41T80_REG_ALARM_HOUR, 674 - rc & ~M41T80_ALHOUR_HT) < 0) 675 - goto ht_err; 676 + rc = i2c_smbus_write_byte_data(client, M41T80_REG_ALARM_HOUR, 677 + rc & ~M41T80_ALHOUR_HT); 678 + } 679 + 680 + if (rc < 0) { 681 + dev_err(&client->dev, "Can't clear HT bit\n"); 682 + return rc; 676 683 } 677 684 678 685 /* Make sure ST (stop) bit is cleared */ 679 686 rc = i2c_smbus_read_byte_data(client, M41T80_REG_SEC); 680 - if (rc < 0) 681 - goto st_err; 682 687 683 - if (rc & M41T80_SEC_ST) { 684 - if (i2c_smbus_write_byte_data(client, M41T80_REG_SEC, 685 - rc & ~M41T80_SEC_ST) < 0) 686 - goto st_err; 688 + if (rc >= 0 && rc & M41T80_SEC_ST) 689 + rc = i2c_smbus_write_byte_data(client, M41T80_REG_SEC, 690 + rc & ~M41T80_SEC_ST); 691 + if (rc < 0) { 692 + dev_err(&client->dev, "Can't clear ST bit\n"); 693 + return rc; 687 694 } 688 695 689 696 rc = m41t80_sysfs_register(&client->dev); 690 697 if (rc) 691 - goto exit; 698 + return rc; 692 699 693 700 #ifdef CONFIG_RTC_DRV_M41T80_WDT 694 701 if (clientdata->features & M41T80_FEATURE_HT) { 695 702 save_client = client; 696 703 rc = misc_register(&wdt_dev); 697 704 if (rc) 698 - goto exit; 705 + return rc; 699 706 rc = register_reboot_notifier(&wdt_notifier); 700 707 if (rc) { 701 708 misc_deregister(&wdt_dev); 702 - goto exit; 709 + return rc; 703 710 } 704 711 } 705 712 #endif 706 713 return 0; 707 - 708 - st_err: 709 - rc = -EIO; 710 - dev_err(&client->dev, "Can't clear ST bit\n"); 711 - goto exit; 712 - ht_err: 713 - rc = -EIO; 714 - dev_err(&client->dev, "Can't clear HT bit\n"); 715 - goto exit; 716 - 717 - exit: 718 - return rc; 719 714 } 720 715 721 716 static 
int m41t80_remove(struct i2c_client *client) ··· 737 750 MODULE_AUTHOR("Alexander Bigga <ab@mycable.de>"); 738 751 MODULE_DESCRIPTION("ST Microelectronics M41T80 series RTC I2C Client Driver"); 739 752 MODULE_LICENSE("GPL"); 740 - MODULE_VERSION(DRV_VERSION);
+199
drivers/rtc/rtc-mcp795.c
··· 1 + /* 2 + * SPI Driver for Microchip MCP795 RTC 3 + * 4 + * Copyright (C) Josef Gajdusek <atx@atx.name> 5 + * 6 + * based on other Linux RTC drivers 7 + * 8 + * Device datasheet: 9 + * http://ww1.microchip.com/downloads/en/DeviceDoc/22280A.pdf 10 + * 11 + * This program is free software; you can redistribute it and/or modify 12 + * it under the terms of the GNU General Public License version 2 as 13 + * published by the Free Software Foundation. 14 + * 15 + * */ 16 + 17 + #include <linux/module.h> 18 + #include <linux/kernel.h> 19 + #include <linux/device.h> 20 + #include <linux/printk.h> 21 + #include <linux/spi/spi.h> 22 + #include <linux/rtc.h> 23 + 24 + /* MCP795 Instructions, see datasheet table 3-1 */ 25 + #define MCP795_EEREAD 0x03 26 + #define MCP795_EEWRITE 0x02 27 + #define MCP795_EEWRDI 0x04 28 + #define MCP795_EEWREN 0x06 29 + #define MCP795_SRREAD 0x05 30 + #define MCP795_SRWRITE 0x01 31 + #define MCP795_READ 0x13 32 + #define MCP795_WRITE 0x12 33 + #define MCP795_UNLOCK 0x14 34 + #define MCP795_IDWRITE 0x32 35 + #define MCP795_IDREAD 0x33 36 + #define MCP795_CLRWDT 0x44 37 + #define MCP795_CLRRAM 0x54 38 + 39 + #define MCP795_ST_BIT 0x80 40 + #define MCP795_24_BIT 0x40 41 + 42 + static int mcp795_rtcc_read(struct device *dev, u8 addr, u8 *buf, u8 count) 43 + { 44 + struct spi_device *spi = to_spi_device(dev); 45 + int ret; 46 + u8 tx[2]; 47 + 48 + tx[0] = MCP795_READ; 49 + tx[1] = addr; 50 + ret = spi_write_then_read(spi, tx, sizeof(tx), buf, count); 51 + 52 + if (ret) 53 + dev_err(dev, "Failed reading %d bytes from address %x.\n", 54 + count, addr); 55 + 56 + return ret; 57 + } 58 + 59 + static int mcp795_rtcc_write(struct device *dev, u8 addr, u8 *data, u8 count) 60 + { 61 + struct spi_device *spi = to_spi_device(dev); 62 + int ret; 63 + u8 tx[2 + count]; 64 + 65 + tx[0] = MCP795_WRITE; 66 + tx[1] = addr; 67 + memcpy(&tx[2], data, count); 68 + 69 + ret = spi_write(spi, tx, 2 + count); 70 + 71 + if (ret) 72 + dev_err(dev, "Failed to write %d 
bytes to address %x.\n", 73 + count, addr); 74 + 75 + return ret; 76 + } 77 + 78 + static int mcp795_rtcc_set_bits(struct device *dev, u8 addr, u8 mask, u8 state) 79 + { 80 + int ret; 81 + u8 tmp; 82 + 83 + ret = mcp795_rtcc_read(dev, addr, &tmp, 1); 84 + if (ret) 85 + return ret; 86 + 87 + if ((tmp & mask) != state) { 88 + tmp = (tmp & ~mask) | state; 89 + ret = mcp795_rtcc_write(dev, addr, &tmp, 1); 90 + } 91 + 92 + return ret; 93 + } 94 + 95 + static int mcp795_set_time(struct device *dev, struct rtc_time *tim) 96 + { 97 + int ret; 98 + u8 data[7]; 99 + 100 + /* Read first, so we can leave config bits untouched */ 101 + ret = mcp795_rtcc_read(dev, 0x01, data, sizeof(data)); 102 + 103 + if (ret) 104 + return ret; 105 + 106 + data[0] = (data[0] & 0x80) | ((tim->tm_sec / 10) << 4) | (tim->tm_sec % 10); 107 + data[1] = (data[1] & 0x80) | ((tim->tm_min / 10) << 4) | (tim->tm_min % 10); 108 + data[2] = ((tim->tm_hour / 10) << 4) | (tim->tm_hour % 10); 109 + data[4] = ((tim->tm_mday / 10) << 4) | ((tim->tm_mday) % 10); 110 + data[5] = (data[5] & 0x10) | (tim->tm_mon / 10) | (tim->tm_mon % 10); 111 + 112 + if (tim->tm_year > 100) 113 + tim->tm_year -= 100; 114 + 115 + data[6] = ((tim->tm_year / 10) << 4) | (tim->tm_year % 10); 116 + 117 + ret = mcp795_rtcc_write(dev, 0x01, data, sizeof(data)); 118 + 119 + if (ret) 120 + return ret; 121 + 122 + dev_dbg(dev, "Set mcp795: %04d-%02d-%02d %02d:%02d:%02d\n", 123 + tim->tm_year + 1900, tim->tm_mon, tim->tm_mday, 124 + tim->tm_hour, tim->tm_min, tim->tm_sec); 125 + 126 + return 0; 127 + } 128 + 129 + static int mcp795_read_time(struct device *dev, struct rtc_time *tim) 130 + { 131 + int ret; 132 + u8 data[7]; 133 + 134 + ret = mcp795_rtcc_read(dev, 0x01, data, sizeof(data)); 135 + 136 + if (ret) 137 + return ret; 138 + 139 + tim->tm_sec = ((data[0] & 0x70) >> 4) * 10 + (data[0] & 0x0f); 140 + tim->tm_min = ((data[1] & 0x70) >> 4) * 10 + (data[1] & 0x0f); 141 + tim->tm_hour = ((data[2] & 0x30) >> 4) * 10 + (data[2] & 0x0f); 142 
+ tim->tm_mday = ((data[4] & 0x30) >> 4) * 10 + (data[4] & 0x0f); 143 + tim->tm_mon = ((data[5] & 0x10) >> 4) * 10 + (data[5] & 0x0f); 144 + tim->tm_year = ((data[6] & 0xf0) >> 4) * 10 + (data[6] & 0x0f) + 100; /* Assume we are in 20xx */ 145 + 146 + dev_dbg(dev, "Read from mcp795: %04d-%02d-%02d %02d:%02d:%02d\n", 147 + tim->tm_year + 1900, tim->tm_mon, tim->tm_mday, 148 + tim->tm_hour, tim->tm_min, tim->tm_sec); 149 + 150 + return rtc_valid_tm(tim); 151 + } 152 + 153 + static struct rtc_class_ops mcp795_rtc_ops = { 154 + .read_time = mcp795_read_time, 155 + .set_time = mcp795_set_time 156 + }; 157 + 158 + static int mcp795_probe(struct spi_device *spi) 159 + { 160 + struct rtc_device *rtc; 161 + int ret; 162 + 163 + spi->mode = SPI_MODE_0; 164 + spi->bits_per_word = 8; 165 + ret = spi_setup(spi); 166 + if (ret) { 167 + dev_err(&spi->dev, "Unable to setup SPI\n"); 168 + return ret; 169 + } 170 + 171 + /* Start the oscillator */ 172 + mcp795_rtcc_set_bits(&spi->dev, 0x01, MCP795_ST_BIT, MCP795_ST_BIT); 173 + /* Clear the 12 hour mode flag*/ 174 + mcp795_rtcc_set_bits(&spi->dev, 0x03, MCP795_24_BIT, 0); 175 + 176 + rtc = devm_rtc_device_register(&spi->dev, "rtc-mcp795", 177 + &mcp795_rtc_ops, THIS_MODULE); 178 + if (IS_ERR(rtc)) 179 + return PTR_ERR(rtc); 180 + 181 + spi_set_drvdata(spi, rtc); 182 + 183 + return 0; 184 + } 185 + 186 + static struct spi_driver mcp795_driver = { 187 + .driver = { 188 + .name = "rtc-mcp795", 189 + .owner = THIS_MODULE, 190 + }, 191 + .probe = mcp795_probe, 192 + }; 193 + 194 + module_spi_driver(mcp795_driver); 195 + 196 + MODULE_DESCRIPTION("MCP795 RTC SPI Driver"); 197 + MODULE_AUTHOR("Josef Gajdusek <atx@atx.name>"); 198 + MODULE_LICENSE("GPL"); 199 + MODULE_ALIAS("spi:mcp795");
+1 -1
drivers/rtc/rtc-mv.c
··· 319 319 } 320 320 321 321 #ifdef CONFIG_OF 322 - static struct of_device_id rtc_mv_of_match_table[] = { 322 + static const struct of_device_id rtc_mv_of_match_table[] = { 323 323 { .compatible = "marvell,orion-rtc", }, 324 324 {} 325 325 };
+78 -60
drivers/rtc/rtc-omap.c
··· 73 73 #define OMAP_RTC_IRQWAKEEN 0x7c 74 74 75 75 /* OMAP_RTC_CTRL_REG bit fields: */ 76 - #define OMAP_RTC_CTRL_SPLIT (1<<7) 77 - #define OMAP_RTC_CTRL_DISABLE (1<<6) 78 - #define OMAP_RTC_CTRL_SET_32_COUNTER (1<<5) 79 - #define OMAP_RTC_CTRL_TEST (1<<4) 80 - #define OMAP_RTC_CTRL_MODE_12_24 (1<<3) 81 - #define OMAP_RTC_CTRL_AUTO_COMP (1<<2) 82 - #define OMAP_RTC_CTRL_ROUND_30S (1<<1) 83 - #define OMAP_RTC_CTRL_STOP (1<<0) 76 + #define OMAP_RTC_CTRL_SPLIT BIT(7) 77 + #define OMAP_RTC_CTRL_DISABLE BIT(6) 78 + #define OMAP_RTC_CTRL_SET_32_COUNTER BIT(5) 79 + #define OMAP_RTC_CTRL_TEST BIT(4) 80 + #define OMAP_RTC_CTRL_MODE_12_24 BIT(3) 81 + #define OMAP_RTC_CTRL_AUTO_COMP BIT(2) 82 + #define OMAP_RTC_CTRL_ROUND_30S BIT(1) 83 + #define OMAP_RTC_CTRL_STOP BIT(0) 84 84 85 85 /* OMAP_RTC_STATUS_REG bit fields: */ 86 - #define OMAP_RTC_STATUS_POWER_UP (1<<7) 87 - #define OMAP_RTC_STATUS_ALARM (1<<6) 88 - #define OMAP_RTC_STATUS_1D_EVENT (1<<5) 89 - #define OMAP_RTC_STATUS_1H_EVENT (1<<4) 90 - #define OMAP_RTC_STATUS_1M_EVENT (1<<3) 91 - #define OMAP_RTC_STATUS_1S_EVENT (1<<2) 92 - #define OMAP_RTC_STATUS_RUN (1<<1) 93 - #define OMAP_RTC_STATUS_BUSY (1<<0) 86 + #define OMAP_RTC_STATUS_POWER_UP BIT(7) 87 + #define OMAP_RTC_STATUS_ALARM BIT(6) 88 + #define OMAP_RTC_STATUS_1D_EVENT BIT(5) 89 + #define OMAP_RTC_STATUS_1H_EVENT BIT(4) 90 + #define OMAP_RTC_STATUS_1M_EVENT BIT(3) 91 + #define OMAP_RTC_STATUS_1S_EVENT BIT(2) 92 + #define OMAP_RTC_STATUS_RUN BIT(1) 93 + #define OMAP_RTC_STATUS_BUSY BIT(0) 94 94 95 95 /* OMAP_RTC_INTERRUPTS_REG bit fields: */ 96 - #define OMAP_RTC_INTERRUPTS_IT_ALARM (1<<3) 97 - #define OMAP_RTC_INTERRUPTS_IT_TIMER (1<<2) 96 + #define OMAP_RTC_INTERRUPTS_IT_ALARM BIT(3) 97 + #define OMAP_RTC_INTERRUPTS_IT_TIMER BIT(2) 98 + 99 + /* OMAP_RTC_OSC_REG bit fields: */ 100 + #define OMAP_RTC_OSC_32KCLK_EN BIT(6) 98 101 99 102 /* OMAP_RTC_IRQWAKEEN bit fields: */ 100 - #define OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN (1<<1) 103 + #define 
OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN BIT(1) 101 104 102 105 /* OMAP_RTC_KICKER values */ 103 106 #define KICK0_VALUE 0x83e70b13 104 107 #define KICK1_VALUE 0x95a4f1e0 105 108 106 - #define OMAP_RTC_HAS_KICKER 0x1 109 + #define OMAP_RTC_HAS_KICKER BIT(0) 107 110 108 111 /* 109 112 * Few RTC IP revisions has special WAKE-EN Register to enable Wakeup 110 113 * generation for event Alarm. 111 114 */ 112 - #define OMAP_RTC_HAS_IRQWAKEEN 0x2 115 + #define OMAP_RTC_HAS_IRQWAKEEN BIT(1) 116 + 117 + /* 118 + * Some RTC IP revisions (like those in AM335x and DRA7x) need 119 + * the 32KHz clock to be explicitly enabled. 120 + */ 121 + #define OMAP_RTC_HAS_32KCLK_EN BIT(2) 113 122 114 123 static void __iomem *rtc_base; 115 124 ··· 171 162 172 163 static int omap_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled) 173 164 { 174 - u8 reg; 165 + u8 reg, irqwake_reg = 0; 166 + struct platform_device *pdev = to_platform_device(dev); 167 + const struct platform_device_id *id_entry = 168 + platform_get_device_id(pdev); 175 169 176 170 local_irq_disable(); 177 171 rtc_wait_not_busy(); 178 172 reg = rtc_read(OMAP_RTC_INTERRUPTS_REG); 179 - if (enabled) 173 + if (id_entry->driver_data & OMAP_RTC_HAS_IRQWAKEEN) 174 + irqwake_reg = rtc_read(OMAP_RTC_IRQWAKEEN); 175 + 176 + if (enabled) { 180 177 reg |= OMAP_RTC_INTERRUPTS_IT_ALARM; 181 - else 178 + irqwake_reg |= OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN; 179 + } else { 182 180 reg &= ~OMAP_RTC_INTERRUPTS_IT_ALARM; 181 + irqwake_reg &= ~OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN; 182 + } 183 183 rtc_wait_not_busy(); 184 184 rtc_write(reg, OMAP_RTC_INTERRUPTS_REG); 185 + if (id_entry->driver_data & OMAP_RTC_HAS_IRQWAKEEN) 186 + rtc_write(irqwake_reg, OMAP_RTC_IRQWAKEEN); 185 187 local_irq_enable(); 186 188 187 189 return 0; ··· 292 272 293 273 static int omap_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alm) 294 274 { 295 - u8 reg; 275 + u8 reg, irqwake_reg = 0; 276 + struct platform_device *pdev = to_platform_device(dev); 277 + const struct 
platform_device_id *id_entry = 278 + platform_get_device_id(pdev); 296 279 297 280 if (tm2bcd(&alm->time) < 0) 298 281 return -EINVAL; ··· 311 288 rtc_write(alm->time.tm_sec, OMAP_RTC_ALARM_SECONDS_REG); 312 289 313 290 reg = rtc_read(OMAP_RTC_INTERRUPTS_REG); 314 - if (alm->enabled) 291 + if (id_entry->driver_data & OMAP_RTC_HAS_IRQWAKEEN) 292 + irqwake_reg = rtc_read(OMAP_RTC_IRQWAKEEN); 293 + 294 + if (alm->enabled) { 315 295 reg |= OMAP_RTC_INTERRUPTS_IT_ALARM; 316 - else 296 + irqwake_reg |= OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN; 297 + } else { 317 298 reg &= ~OMAP_RTC_INTERRUPTS_IT_ALARM; 299 + irqwake_reg &= ~OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN; 300 + } 318 301 rtc_write(reg, OMAP_RTC_INTERRUPTS_REG); 302 + if (id_entry->driver_data & OMAP_RTC_HAS_IRQWAKEEN) 303 + rtc_write(irqwake_reg, OMAP_RTC_IRQWAKEEN); 319 304 320 305 local_irq_enable(); 321 306 ··· 350 319 }, 351 320 [OMAP_RTC_DATA_AM3352_IDX] = { 352 321 .name = "am3352-rtc", 353 - .driver_data = OMAP_RTC_HAS_KICKER | OMAP_RTC_HAS_IRQWAKEEN, 322 + .driver_data = OMAP_RTC_HAS_KICKER | OMAP_RTC_HAS_IRQWAKEEN | 323 + OMAP_RTC_HAS_32KCLK_EN, 354 324 }, 355 325 [OMAP_RTC_DATA_DA830_IDX] = { 356 326 .name = "da830-rtc", ··· 384 352 if (of_id) 385 353 pdev->id_entry = of_id->data; 386 354 355 + id_entry = platform_get_device_id(pdev); 356 + if (!id_entry) { 357 + dev_err(&pdev->dev, "no matching device entry\n"); 358 + return -ENODEV; 359 + } 360 + 387 361 omap_rtc_timer = platform_get_irq(pdev, 0); 388 362 if (omap_rtc_timer <= 0) { 389 363 pr_debug("%s: no update irq?\n", pdev->name); ··· 411 373 pm_runtime_enable(&pdev->dev); 412 374 pm_runtime_get_sync(&pdev->dev); 413 375 414 - id_entry = platform_get_device_id(pdev); 415 - if (id_entry && (id_entry->driver_data & OMAP_RTC_HAS_KICKER)) { 376 + if (id_entry->driver_data & OMAP_RTC_HAS_KICKER) { 416 377 rtc_writel(KICK0_VALUE, OMAP_RTC_KICK0_REG); 417 378 rtc_writel(KICK1_VALUE, OMAP_RTC_KICK1_REG); 418 379 } ··· 429 392 * which we'll use instead of update irqs 430 
393 */ 431 394 rtc_write(0, OMAP_RTC_INTERRUPTS_REG); 395 + 396 + /* enable RTC functional clock */ 397 + if (id_entry->driver_data & OMAP_RTC_HAS_32KCLK_EN) 398 + rtc_writel(OMAP_RTC_OSC_32KCLK_EN, OMAP_RTC_OSC_REG); 432 399 433 400 /* clear old status */ 434 401 reg = rtc_read(OMAP_RTC_STATUS_REG); ··· 493 452 return 0; 494 453 495 454 fail0: 496 - if (id_entry && (id_entry->driver_data & OMAP_RTC_HAS_KICKER)) 455 + if (id_entry->driver_data & OMAP_RTC_HAS_KICKER) 497 456 rtc_writel(0, OMAP_RTC_KICK0_REG); 498 457 pm_runtime_put_sync(&pdev->dev); 499 458 pm_runtime_disable(&pdev->dev); ··· 510 469 /* leave rtc running, but disable irqs */ 511 470 rtc_write(0, OMAP_RTC_INTERRUPTS_REG); 512 471 513 - if (id_entry && (id_entry->driver_data & OMAP_RTC_HAS_KICKER)) 472 + if (id_entry->driver_data & OMAP_RTC_HAS_KICKER) 514 473 rtc_writel(0, OMAP_RTC_KICK0_REG); 515 474 516 475 /* Disable the clock/module */ ··· 525 484 526 485 static int omap_rtc_suspend(struct device *dev) 527 486 { 528 - u8 irqwake_stat; 529 - struct platform_device *pdev = to_platform_device(dev); 530 - const struct platform_device_id *id_entry = 531 - platform_get_device_id(pdev); 532 - 533 487 irqstat = rtc_read(OMAP_RTC_INTERRUPTS_REG); 534 488 535 489 /* FIXME the RTC alarm is not currently acting as a wakeup event 536 490 * source on some platforms, and in fact this enable() call is just 537 491 * saving a flag that's never used... 
538 492 */ 539 - if (device_may_wakeup(dev)) { 493 + if (device_may_wakeup(dev)) 540 494 enable_irq_wake(omap_rtc_alarm); 541 - 542 - if (id_entry->driver_data & OMAP_RTC_HAS_IRQWAKEEN) { 543 - irqwake_stat = rtc_read(OMAP_RTC_IRQWAKEEN); 544 - irqwake_stat |= OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN; 545 - rtc_write(irqwake_stat, OMAP_RTC_IRQWAKEEN); 546 - } 547 - } else { 495 + else 548 496 rtc_write(0, OMAP_RTC_INTERRUPTS_REG); 549 - } 550 497 551 498 /* Disable the clock/module */ 552 499 pm_runtime_put_sync(dev); ··· 544 515 545 516 static int omap_rtc_resume(struct device *dev) 546 517 { 547 - u8 irqwake_stat; 548 - struct platform_device *pdev = to_platform_device(dev); 549 - const struct platform_device_id *id_entry = 550 - platform_get_device_id(pdev); 551 - 552 518 /* Enable the clock/module so that we can access the registers */ 553 519 pm_runtime_get_sync(dev); 554 520 555 - if (device_may_wakeup(dev)) { 521 + if (device_may_wakeup(dev)) 556 522 disable_irq_wake(omap_rtc_alarm); 557 - 558 - if (id_entry->driver_data & OMAP_RTC_HAS_IRQWAKEEN) { 559 - irqwake_stat = rtc_read(OMAP_RTC_IRQWAKEEN); 560 - irqwake_stat &= ~OMAP_RTC_IRQWAKEEN_ALARM_WAKEEN; 561 - rtc_write(irqwake_stat, OMAP_RTC_IRQWAKEEN); 562 - } 563 - } else { 523 + else 564 524 rtc_write(irqstat, OMAP_RTC_INTERRUPTS_REG); 565 - } 525 + 566 526 return 0; 567 527 } 568 528 #endif
+1 -1
drivers/rtc/rtc-palmas.c
··· 352 352 palmas_rtc_resume); 353 353 354 354 #ifdef CONFIG_OF 355 - static struct of_device_id of_palmas_rtc_match[] = { 355 + static const struct of_device_id of_palmas_rtc_match[] = { 356 356 { .compatible = "ti,palmas-rtc"}, 357 357 { }, 358 358 };
+1 -1
drivers/rtc/rtc-pxa.c
··· 389 389 } 390 390 391 391 #ifdef CONFIG_OF 392 - static struct of_device_id pxa_rtc_dt_ids[] = { 392 + static const struct of_device_id pxa_rtc_dt_ids[] = { 393 393 { .compatible = "marvell,pxa-rtc" }, 394 394 {} 395 395 };
+1 -1
drivers/rtc/rtc-sa1100.c
··· 338 338 sa1100_rtc_resume); 339 339 340 340 #ifdef CONFIG_OF 341 - static struct of_device_id sa1100_rtc_dt_ids[] = { 341 + static const struct of_device_id sa1100_rtc_dt_ids[] = { 342 342 { .compatible = "mrvl,sa1100-rtc", }, 343 343 { .compatible = "mrvl,mmp-rtc", }, 344 344 {}
+278
drivers/rtc/rtc-xgene.c
··· 1 + /* 2 + * APM X-Gene SoC Real Time Clock Driver 3 + * 4 + * Copyright (c) 2014, Applied Micro Circuits Corporation 5 + * Author: Rameshwar Prasad Sahu <rsahu@apm.com> 6 + * Loc Ho <lho@apm.com> 7 + * 8 + * This program is free software; you can redistribute it and/or modify it 9 + * under the terms of the GNU General Public License as published by the 10 + * Free Software Foundation; either version 2 of the License, or (at your 11 + * option) any later version. 12 + * 13 + * This program is distributed in the hope that it will be useful, 14 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 + * GNU General Public License for more details. 17 + * 18 + * You should have received a copy of the GNU General Public License 19 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 20 + * 21 + */ 22 + 23 + #include <linux/init.h> 24 + #include <linux/module.h> 25 + #include <linux/of.h> 26 + #include <linux/platform_device.h> 27 + #include <linux/io.h> 28 + #include <linux/slab.h> 29 + #include <linux/clk.h> 30 + #include <linux/delay.h> 31 + #include <linux/rtc.h> 32 + 33 + /* RTC CSR Registers */ 34 + #define RTC_CCVR 0x00 35 + #define RTC_CMR 0x04 36 + #define RTC_CLR 0x08 37 + #define RTC_CCR 0x0C 38 + #define RTC_CCR_IE BIT(0) 39 + #define RTC_CCR_MASK BIT(1) 40 + #define RTC_CCR_EN BIT(2) 41 + #define RTC_CCR_WEN BIT(3) 42 + #define RTC_STAT 0x10 43 + #define RTC_STAT_BIT BIT(0) 44 + #define RTC_RSTAT 0x14 45 + #define RTC_EOI 0x18 46 + #define RTC_VER 0x1C 47 + 48 + struct xgene_rtc_dev { 49 + struct rtc_device *rtc; 50 + struct device *dev; 51 + unsigned long alarm_time; 52 + void __iomem *csr_base; 53 + struct clk *clk; 54 + unsigned int irq_wake; 55 + }; 56 + 57 + static int xgene_rtc_read_time(struct device *dev, struct rtc_time *tm) 58 + { 59 + struct xgene_rtc_dev *pdata = dev_get_drvdata(dev); 60 + 61 + rtc_time_to_tm(readl(pdata->csr_base + 
RTC_CCVR), tm); 62 + return rtc_valid_tm(tm); 63 + } 64 + 65 + static int xgene_rtc_set_mmss(struct device *dev, unsigned long secs) 66 + { 67 + struct xgene_rtc_dev *pdata = dev_get_drvdata(dev); 68 + 69 + /* 70 + * NOTE: After the following write, the RTC_CCVR is only reflected 71 + * after the update cycle of 1 seconds. 72 + */ 73 + writel((u32) secs, pdata->csr_base + RTC_CLR); 74 + readl(pdata->csr_base + RTC_CLR); /* Force a barrier */ 75 + 76 + return 0; 77 + } 78 + 79 + static int xgene_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm) 80 + { 81 + struct xgene_rtc_dev *pdata = dev_get_drvdata(dev); 82 + 83 + rtc_time_to_tm(pdata->alarm_time, &alrm->time); 84 + alrm->enabled = readl(pdata->csr_base + RTC_CCR) & RTC_CCR_IE; 85 + 86 + return 0; 87 + } 88 + 89 + static int xgene_rtc_alarm_irq_enable(struct device *dev, u32 enabled) 90 + { 91 + struct xgene_rtc_dev *pdata = dev_get_drvdata(dev); 92 + u32 ccr; 93 + 94 + ccr = readl(pdata->csr_base + RTC_CCR); 95 + if (enabled) { 96 + ccr &= ~RTC_CCR_MASK; 97 + ccr |= RTC_CCR_IE; 98 + } else { 99 + ccr &= ~RTC_CCR_IE; 100 + ccr |= RTC_CCR_MASK; 101 + } 102 + writel(ccr, pdata->csr_base + RTC_CCR); 103 + 104 + return 0; 105 + } 106 + 107 + static int xgene_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alrm) 108 + { 109 + struct xgene_rtc_dev *pdata = dev_get_drvdata(dev); 110 + unsigned long rtc_time; 111 + unsigned long alarm_time; 112 + 113 + rtc_time = readl(pdata->csr_base + RTC_CCVR); 114 + rtc_tm_to_time(&alrm->time, &alarm_time); 115 + 116 + pdata->alarm_time = alarm_time; 117 + writel((u32) pdata->alarm_time, pdata->csr_base + RTC_CMR); 118 + 119 + xgene_rtc_alarm_irq_enable(dev, alrm->enabled); 120 + 121 + return 0; 122 + } 123 + 124 + static const struct rtc_class_ops xgene_rtc_ops = { 125 + .read_time = xgene_rtc_read_time, 126 + .set_mmss = xgene_rtc_set_mmss, 127 + .read_alarm = xgene_rtc_read_alarm, 128 + .set_alarm = xgene_rtc_set_alarm, 129 + .alarm_irq_enable = 
xgene_rtc_alarm_irq_enable, 130 + }; 131 + 132 + static irqreturn_t xgene_rtc_interrupt(int irq, void *id) 133 + { 134 + struct xgene_rtc_dev *pdata = (struct xgene_rtc_dev *) id; 135 + 136 + /* Check if interrupt asserted */ 137 + if (!(readl(pdata->csr_base + RTC_STAT) & RTC_STAT_BIT)) 138 + return IRQ_NONE; 139 + 140 + /* Clear interrupt */ 141 + readl(pdata->csr_base + RTC_EOI); 142 + 143 + rtc_update_irq(pdata->rtc, 1, RTC_IRQF | RTC_AF); 144 + 145 + return IRQ_HANDLED; 146 + } 147 + 148 + static int xgene_rtc_probe(struct platform_device *pdev) 149 + { 150 + struct xgene_rtc_dev *pdata; 151 + struct resource *res; 152 + int ret; 153 + int irq; 154 + 155 + pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); 156 + if (!pdata) 157 + return -ENOMEM; 158 + platform_set_drvdata(pdev, pdata); 159 + pdata->dev = &pdev->dev; 160 + 161 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 162 + pdata->csr_base = devm_ioremap_resource(&pdev->dev, res); 163 + if (IS_ERR(pdata->csr_base)) 164 + return PTR_ERR(pdata->csr_base); 165 + 166 + irq = platform_get_irq(pdev, 0); 167 + if (irq < 0) { 168 + dev_err(&pdev->dev, "No IRQ resource\n"); 169 + return irq; 170 + } 171 + ret = devm_request_irq(&pdev->dev, irq, xgene_rtc_interrupt, 0, 172 + dev_name(&pdev->dev), pdata); 173 + if (ret) { 174 + dev_err(&pdev->dev, "Could not request IRQ\n"); 175 + return ret; 176 + } 177 + 178 + pdata->clk = devm_clk_get(&pdev->dev, NULL); 179 + if (IS_ERR(pdata->clk)) { 180 + dev_err(&pdev->dev, "Couldn't get the clock for RTC\n"); 181 + return -ENODEV; 182 + } 183 + clk_prepare_enable(pdata->clk); 184 + 185 + /* Turn on the clock and the crystal */ 186 + writel(RTC_CCR_EN, pdata->csr_base + RTC_CCR); 187 + 188 + device_init_wakeup(&pdev->dev, 1); 189 + 190 + pdata->rtc = devm_rtc_device_register(&pdev->dev, pdev->name, 191 + &xgene_rtc_ops, THIS_MODULE); 192 + if (IS_ERR(pdata->rtc)) { 193 + clk_disable_unprepare(pdata->clk); 194 + return PTR_ERR(pdata->rtc); 195 + } 196 + 197 + 
/* HW does not support update faster than 1 seconds */ 198 + pdata->rtc->uie_unsupported = 1; 199 + 200 + return 0; 201 + } 202 + 203 + static int xgene_rtc_remove(struct platform_device *pdev) 204 + { 205 + struct xgene_rtc_dev *pdata = platform_get_drvdata(pdev); 206 + 207 + xgene_rtc_alarm_irq_enable(&pdev->dev, 0); 208 + device_init_wakeup(&pdev->dev, 0); 209 + clk_disable_unprepare(pdata->clk); 210 + return 0; 211 + } 212 + 213 + #ifdef CONFIG_PM_SLEEP 214 + static int xgene_rtc_suspend(struct device *dev) 215 + { 216 + struct platform_device *pdev = to_platform_device(dev); 217 + struct xgene_rtc_dev *pdata = platform_get_drvdata(pdev); 218 + int irq; 219 + 220 + irq = platform_get_irq(pdev, 0); 221 + if (device_may_wakeup(&pdev->dev)) { 222 + if (!enable_irq_wake(irq)) 223 + pdata->irq_wake = 1; 224 + } else { 225 + xgene_rtc_alarm_irq_enable(dev, 0); 226 + clk_disable(pdata->clk); 227 + } 228 + 229 + return 0; 230 + } 231 + 232 + static int xgene_rtc_resume(struct device *dev) 233 + { 234 + struct platform_device *pdev = to_platform_device(dev); 235 + struct xgene_rtc_dev *pdata = platform_get_drvdata(pdev); 236 + int irq; 237 + 238 + irq = platform_get_irq(pdev, 0); 239 + if (device_may_wakeup(&pdev->dev)) { 240 + if (pdata->irq_wake) { 241 + disable_irq_wake(irq); 242 + pdata->irq_wake = 0; 243 + } 244 + } else { 245 + clk_enable(pdata->clk); 246 + xgene_rtc_alarm_irq_enable(dev, 1); 247 + } 248 + 249 + return 0; 250 + } 251 + #endif 252 + 253 + static SIMPLE_DEV_PM_OPS(xgene_rtc_pm_ops, xgene_rtc_suspend, xgene_rtc_resume); 254 + 255 + #ifdef CONFIG_OF 256 + static const struct of_device_id xgene_rtc_of_match[] = { 257 + {.compatible = "apm,xgene-rtc" }, 258 + { } 259 + }; 260 + MODULE_DEVICE_TABLE(of, xgene_rtc_of_match); 261 + #endif 262 + 263 + static struct platform_driver xgene_rtc_driver = { 264 + .probe = xgene_rtc_probe, 265 + .remove = xgene_rtc_remove, 266 + .driver = { 267 + .owner = THIS_MODULE, 268 + .name = "xgene-rtc", 269 + .pm = 
&xgene_rtc_pm_ops, 270 + .of_match_table = of_match_ptr(xgene_rtc_of_match), 271 + }, 272 + }; 273 + 274 + module_platform_driver(xgene_rtc_driver); 275 + 276 + MODULE_DESCRIPTION("APM X-Gene SoC RTC driver"); 277 + MODULE_AUTHOR("Rameshwar Sahu <rsahu@apm.com>"); 278 + MODULE_LICENSE("GPL");
+3 -3
drivers/scsi/scsi_sysctl.c
··· 12 12 #include "scsi_priv.h" 13 13 14 14 15 - static ctl_table scsi_table[] = { 15 + static struct ctl_table scsi_table[] = { 16 16 { .procname = "logging_level", 17 17 .data = &scsi_logging_level, 18 18 .maxlen = sizeof(scsi_logging_level), ··· 21 21 { } 22 22 }; 23 23 24 - static ctl_table scsi_dir_table[] = { 24 + static struct ctl_table scsi_dir_table[] = { 25 25 { .procname = "scsi", 26 26 .mode = 0555, 27 27 .child = scsi_table }, 28 28 { } 29 29 }; 30 30 31 - static ctl_table scsi_root_table[] = { 31 + static struct ctl_table scsi_root_table[] = { 32 32 { .procname = "dev", 33 33 .mode = 0555, 34 34 .child = scsi_dir_table },
+15 -6
drivers/tty/sysrq.c
··· 46 46 #include <linux/jiffies.h> 47 47 #include <linux/syscalls.h> 48 48 #include <linux/of.h> 49 + #include <linux/rcupdate.h> 49 50 50 51 #include <asm/ptrace.h> 51 52 #include <asm/irq_regs.h> ··· 511 510 struct sysrq_key_op *op_p; 512 511 int orig_log_level; 513 512 int i; 514 - unsigned long flags; 515 513 516 - spin_lock_irqsave(&sysrq_key_table_lock, flags); 514 + rcu_sysrq_start(); 515 + rcu_read_lock(); 517 516 /* 518 517 * Raise the apparent loglevel to maximum so that the sysrq header 519 518 * is shown to provide the user with positive feedback. We do not ··· 555 554 printk("\n"); 556 555 console_loglevel = orig_log_level; 557 556 } 558 - spin_unlock_irqrestore(&sysrq_key_table_lock, flags); 557 + rcu_read_unlock(); 558 + rcu_sysrq_end(); 559 559 } 560 560 561 561 void handle_sysrq(int key) ··· 1045 1043 struct sysrq_key_op *remove_op_p) 1046 1044 { 1047 1045 int retval; 1048 - unsigned long flags; 1049 1046 1050 - spin_lock_irqsave(&sysrq_key_table_lock, flags); 1047 + spin_lock(&sysrq_key_table_lock); 1051 1048 if (__sysrq_get_key_op(key) == remove_op_p) { 1052 1049 __sysrq_put_key_op(key, insert_op_p); 1053 1050 retval = 0; 1054 1051 } else { 1055 1052 retval = -1; 1056 1053 } 1057 - spin_unlock_irqrestore(&sysrq_key_table_lock, flags); 1054 + spin_unlock(&sysrq_key_table_lock); 1055 + 1056 + /* 1057 + * A concurrent __handle_sysrq either got the old op or the new op. 1058 + * Wait for it to go away before returning, so the code for an old 1059 + * op is not freed (eg. on module unload) while it is in use. 1060 + */ 1061 + synchronize_rcu(); 1062 + 1058 1063 return retval; 1059 1064 } 1060 1065
+11 -5
fs/affs/affs.h
··· 1 + #ifdef pr_fmt 2 + #undef pr_fmt 3 + #endif 4 + 5 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 6 + 1 7 #include <linux/types.h> 2 8 #include <linux/fs.h> 3 9 #include <linux/buffer_head.h> ··· 212 206 static inline struct buffer_head * 213 207 affs_bread(struct super_block *sb, int block) 214 208 { 215 - pr_debug("affs_bread: %d\n", block); 209 + pr_debug("%s: %d\n", __func__, block); 216 210 if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size) 217 211 return sb_bread(sb, block); 218 212 return NULL; ··· 220 214 static inline struct buffer_head * 221 215 affs_getblk(struct super_block *sb, int block) 222 216 { 223 - pr_debug("affs_getblk: %d\n", block); 217 + pr_debug("%s: %d\n", __func__, block); 224 218 if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size) 225 219 return sb_getblk(sb, block); 226 220 return NULL; ··· 229 223 affs_getzeroblk(struct super_block *sb, int block) 230 224 { 231 225 struct buffer_head *bh; 232 - pr_debug("affs_getzeroblk: %d\n", block); 226 + pr_debug("%s: %d\n", __func__, block); 233 227 if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size) { 234 228 bh = sb_getblk(sb, block); 235 229 lock_buffer(bh); ··· 244 238 affs_getemptyblk(struct super_block *sb, int block) 245 239 { 246 240 struct buffer_head *bh; 247 - pr_debug("affs_getemptyblk: %d\n", block); 241 + pr_debug("%s: %d\n", __func__, block); 248 242 if (block >= AFFS_SB(sb)->s_reserved && block < AFFS_SB(sb)->s_partition_size) { 249 243 bh = sb_getblk(sb, block); 250 244 wait_on_buffer(bh); ··· 257 251 affs_brelse(struct buffer_head *bh) 258 252 { 259 253 if (bh) 260 - pr_debug("affs_brelse: %lld\n", (long long) bh->b_blocknr); 254 + pr_debug("%s: %lld\n", __func__, (long long) bh->b_blocknr); 261 255 brelse(bh); 262 256 } 263 257
+8 -7
fs/affs/amigaffs.c
··· 34 34 ino = bh->b_blocknr; 35 35 offset = affs_hash_name(sb, AFFS_TAIL(sb, bh)->name + 1, AFFS_TAIL(sb, bh)->name[0]); 36 36 37 - pr_debug("AFFS: insert_hash(dir=%u, ino=%d)\n", (u32)dir->i_ino, ino); 37 + pr_debug("%s(dir=%u, ino=%d)\n", __func__, (u32)dir->i_ino, ino); 38 38 39 39 dir_bh = affs_bread(sb, dir->i_ino); 40 40 if (!dir_bh) ··· 84 84 sb = dir->i_sb; 85 85 rem_ino = rem_bh->b_blocknr; 86 86 offset = affs_hash_name(sb, AFFS_TAIL(sb, rem_bh)->name+1, AFFS_TAIL(sb, rem_bh)->name[0]); 87 - pr_debug("AFFS: remove_hash(dir=%d, ino=%d, hashval=%d)\n", (u32)dir->i_ino, rem_ino, offset); 87 + pr_debug("%s(dir=%d, ino=%d, hashval=%d)\n", 88 + __func__, (u32)dir->i_ino, rem_ino, offset); 88 89 89 90 bh = affs_bread(sb, dir->i_ino); 90 91 if (!bh) ··· 148 147 u32 link_ino, ino; 149 148 int retval; 150 149 151 - pr_debug("AFFS: remove_link(key=%ld)\n", inode->i_ino); 150 + pr_debug("%s(key=%ld)\n", __func__, inode->i_ino); 152 151 retval = -EIO; 153 152 bh = affs_bread(sb, inode->i_ino); 154 153 if (!bh) ··· 280 279 if (!inode) 281 280 goto done; 282 281 283 - pr_debug("AFFS: remove_header(key=%ld)\n", inode->i_ino); 282 + pr_debug("%s(key=%ld)\n", __func__, inode->i_ino); 284 283 retval = -EIO; 285 284 bh = affs_bread(sb, (u32)(long)dentry->d_fsdata); 286 285 if (!bh) ··· 452 451 vsnprintf(ErrorBuffer,sizeof(ErrorBuffer),fmt,args); 453 452 va_end(args); 454 453 455 - printk(KERN_CRIT "AFFS error (device %s): %s(): %s\n", sb->s_id, 454 + pr_crit("error (device %s): %s(): %s\n", sb->s_id, 456 455 function,ErrorBuffer); 457 456 if (!(sb->s_flags & MS_RDONLY)) 458 - printk(KERN_WARNING "AFFS: Remounting filesystem read-only\n"); 457 + pr_warn("Remounting filesystem read-only\n"); 459 458 sb->s_flags |= MS_RDONLY; 460 459 } 461 460 ··· 468 467 vsnprintf(ErrorBuffer,sizeof(ErrorBuffer),fmt,args); 469 468 va_end(args); 470 469 471 - printk(KERN_WARNING "AFFS warning (device %s): %s(): %s\n", sb->s_id, 470 + pr_warn("(device %s): %s(): %s\n", sb->s_id, 472 471 
function,ErrorBuffer); 473 472 } 474 473
+10 -11
fs/affs/bitmap.c
··· 17 17 u32 free; 18 18 int i; 19 19 20 - pr_debug("AFFS: count_free_blocks()\n"); 20 + pr_debug("%s()\n", __func__); 21 21 22 22 if (sb->s_flags & MS_RDONLY) 23 23 return 0; ··· 43 43 u32 blk, bmap, bit, mask, tmp; 44 44 __be32 *data; 45 45 46 - pr_debug("AFFS: free_block(%u)\n", block); 46 + pr_debug("%s(%u)\n", __func__, block); 47 47 48 48 if (block > sbi->s_partition_size) 49 49 goto err_range; ··· 125 125 sb = inode->i_sb; 126 126 sbi = AFFS_SB(sb); 127 127 128 - pr_debug("AFFS: balloc(inode=%lu,goal=%u): ", inode->i_ino, goal); 128 + pr_debug("balloc(inode=%lu,goal=%u): ", inode->i_ino, goal); 129 129 130 130 if (AFFS_I(inode)->i_pa_cnt) { 131 131 pr_debug("%d\n", AFFS_I(inode)->i_lastalloc+1); ··· 254 254 return 0; 255 255 256 256 if (!AFFS_ROOT_TAIL(sb, sbi->s_root_bh)->bm_flag) { 257 - printk(KERN_NOTICE "AFFS: Bitmap invalid - mounting %s read only\n", 258 - sb->s_id); 257 + pr_notice("Bitmap invalid - mounting %s read only\n", sb->s_id); 259 258 *flags |= MS_RDONLY; 260 259 return 0; 261 260 } ··· 267 268 size = sbi->s_bmap_count * sizeof(*bm); 268 269 bm = sbi->s_bitmap = kzalloc(size, GFP_KERNEL); 269 270 if (!sbi->s_bitmap) { 270 - printk(KERN_ERR "AFFS: Bitmap allocation failed\n"); 271 + pr_err("Bitmap allocation failed\n"); 271 272 return -ENOMEM; 272 273 } 273 274 ··· 281 282 bm->bm_key = be32_to_cpu(bmap_blk[blk]); 282 283 bh = affs_bread(sb, bm->bm_key); 283 284 if (!bh) { 284 - printk(KERN_ERR "AFFS: Cannot read bitmap\n"); 285 + pr_err("Cannot read bitmap\n"); 285 286 res = -EIO; 286 287 goto out; 287 288 } 288 289 if (affs_checksum_block(sb, bh)) { 289 - printk(KERN_WARNING "AFFS: Bitmap %u invalid - mounting %s read only.\n", 290 - bm->bm_key, sb->s_id); 290 + pr_warn("Bitmap %u invalid - mounting %s read only.\n", 291 + bm->bm_key, sb->s_id); 291 292 *flags |= MS_RDONLY; 292 293 goto out; 293 294 } 294 - pr_debug("AFFS: read bitmap block %d: %d\n", blk, bm->bm_key); 295 + pr_debug("read bitmap block %d: %d\n", blk, bm->bm_key); 295 296 
bm->bm_free = memweight(bh->b_data + 4, sb->s_blocksize - 4); 296 297 297 298 /* Don't try read the extension if this is the last block, ··· 303 304 affs_brelse(bmap_bh); 304 305 bmap_bh = affs_bread(sb, be32_to_cpu(bmap_blk[blk])); 305 306 if (!bmap_bh) { 306 - printk(KERN_ERR "AFFS: Cannot read bitmap extension\n"); 307 + pr_err("Cannot read bitmap extension\n"); 307 308 res = -EIO; 308 309 goto out; 309 310 }
+4 -4
fs/affs/dir.c
··· 54 54 u32 ino; 55 55 int error = 0; 56 56 57 - pr_debug("AFFS: readdir(ino=%lu,f_pos=%lx)\n", 58 - inode->i_ino, (unsigned long)ctx->pos); 57 + pr_debug("%s(ino=%lu,f_pos=%lx)\n", 58 + __func__, inode->i_ino, (unsigned long)ctx->pos); 59 59 60 60 if (ctx->pos < 2) { 61 61 file->private_data = (void *)0; ··· 81 81 */ 82 82 ino = (u32)(long)file->private_data; 83 83 if (ino && file->f_version == inode->i_version) { 84 - pr_debug("AFFS: readdir() left off=%d\n", ino); 84 + pr_debug("readdir() left off=%d\n", ino); 85 85 goto inside; 86 86 } 87 87 ··· 117 117 118 118 namelen = min(AFFS_TAIL(sb, fh_bh)->name[0], (u8)30); 119 119 name = AFFS_TAIL(sb, fh_bh)->name + 1; 120 - pr_debug("AFFS: readdir(): dir_emit(\"%.*s\", " 120 + pr_debug("readdir(): dir_emit(\"%.*s\", " 121 121 "ino=%u), hash=%d, f_pos=%x\n", 122 122 namelen, name, ino, hash_pos, (u32)ctx->pos); 123 123
+26 -20
fs/affs/file.c
··· 45 45 static int 46 46 affs_file_open(struct inode *inode, struct file *filp) 47 47 { 48 - pr_debug("AFFS: open(%lu,%d)\n", 48 + pr_debug("open(%lu,%d)\n", 49 49 inode->i_ino, atomic_read(&AFFS_I(inode)->i_opencnt)); 50 50 atomic_inc(&AFFS_I(inode)->i_opencnt); 51 51 return 0; ··· 54 54 static int 55 55 affs_file_release(struct inode *inode, struct file *filp) 56 56 { 57 - pr_debug("AFFS: release(%lu, %d)\n", 57 + pr_debug("release(%lu, %d)\n", 58 58 inode->i_ino, atomic_read(&AFFS_I(inode)->i_opencnt)); 59 59 60 60 if (atomic_dec_and_test(&AFFS_I(inode)->i_opencnt)) { ··· 324 324 struct buffer_head *ext_bh; 325 325 u32 ext; 326 326 327 - pr_debug("AFFS: get_block(%u, %lu)\n", (u32)inode->i_ino, (unsigned long)block); 327 + pr_debug("%s(%u, %lu)\n", 328 + __func__, (u32)inode->i_ino, (unsigned long)block); 328 329 329 330 BUG_ON(block > (sector_t)0x7fffffffUL); 330 331 ··· 499 498 } 500 499 501 500 static int 502 - affs_do_readpage_ofs(struct file *file, struct page *page, unsigned from, unsigned to) 501 + affs_do_readpage_ofs(struct page *page, unsigned to) 503 502 { 504 503 struct inode *inode = page->mapping->host; 505 504 struct super_block *sb = inode->i_sb; 506 505 struct buffer_head *bh; 507 506 char *data; 507 + unsigned pos = 0; 508 508 u32 bidx, boff, bsize; 509 509 u32 tmp; 510 510 511 - pr_debug("AFFS: read_page(%u, %ld, %d, %d)\n", (u32)inode->i_ino, page->index, from, to); 512 - BUG_ON(from > to || to > PAGE_CACHE_SIZE); 511 + pr_debug("%s(%u, %ld, 0, %d)\n", __func__, (u32)inode->i_ino, 512 + page->index, to); 513 + BUG_ON(to > PAGE_CACHE_SIZE); 513 514 kmap(page); 514 515 data = page_address(page); 515 516 bsize = AFFS_SB(sb)->s_data_blksize; 516 - tmp = (page->index << PAGE_CACHE_SHIFT) + from; 517 + tmp = page->index << PAGE_CACHE_SHIFT; 517 518 bidx = tmp / bsize; 518 519 boff = tmp % bsize; 519 520 520 - while (from < to) { 521 + while (pos < to) { 521 522 bh = affs_bread_ino(inode, bidx, 0); 522 523 if (IS_ERR(bh)) 523 524 return 
PTR_ERR(bh); 524 - tmp = min(bsize - boff, to - from); 525 - BUG_ON(from + tmp > to || tmp > bsize); 526 - memcpy(data + from, AFFS_DATA(bh) + boff, tmp); 525 + tmp = min(bsize - boff, to - pos); 526 + BUG_ON(pos + tmp > to || tmp > bsize); 527 + memcpy(data + pos, AFFS_DATA(bh) + boff, tmp); 527 528 affs_brelse(bh); 528 529 bidx++; 529 - from += tmp; 530 + pos += tmp; 530 531 boff = 0; 531 532 } 532 533 flush_dcache_page(page); ··· 545 542 u32 size, bsize; 546 543 u32 tmp; 547 544 548 - pr_debug("AFFS: extent_file(%u, %d)\n", (u32)inode->i_ino, newsize); 545 + pr_debug("%s(%u, %d)\n", __func__, (u32)inode->i_ino, newsize); 549 546 bsize = AFFS_SB(sb)->s_data_blksize; 550 547 bh = NULL; 551 548 size = AFFS_I(inode)->mmu_private; ··· 611 608 u32 to; 612 609 int err; 613 610 614 - pr_debug("AFFS: read_page(%u, %ld)\n", (u32)inode->i_ino, page->index); 611 + pr_debug("%s(%u, %ld)\n", __func__, (u32)inode->i_ino, page->index); 615 612 to = PAGE_CACHE_SIZE; 616 613 if (((page->index + 1) << PAGE_CACHE_SHIFT) > inode->i_size) { 617 614 to = inode->i_size & ~PAGE_CACHE_MASK; 618 615 memset(page_address(page) + to, 0, PAGE_CACHE_SIZE - to); 619 616 } 620 617 621 - err = affs_do_readpage_ofs(file, page, 0, to); 618 + err = affs_do_readpage_ofs(page, to); 622 619 if (!err) 623 620 SetPageUptodate(page); 624 621 unlock_page(page); ··· 634 631 pgoff_t index; 635 632 int err = 0; 636 633 637 - pr_debug("AFFS: write_begin(%u, %llu, %llu)\n", (u32)inode->i_ino, (unsigned long long)pos, (unsigned long long)pos + len); 634 + pr_debug("%s(%u, %llu, %llu)\n", __func__, (u32)inode->i_ino, 635 + (unsigned long long)pos, (unsigned long long)pos + len); 638 636 if (pos > AFFS_I(inode)->mmu_private) { 639 637 /* XXX: this probably leaves a too-big i_size in case of 640 638 * failure. 
Should really be updating i_size at write_end time ··· 655 651 return 0; 656 652 657 653 /* XXX: inefficient but safe in the face of short writes */ 658 - err = affs_do_readpage_ofs(file, page, 0, PAGE_CACHE_SIZE); 654 + err = affs_do_readpage_ofs(page, PAGE_CACHE_SIZE); 659 655 if (err) { 660 656 unlock_page(page); 661 657 page_cache_release(page); ··· 684 680 * due to write_begin. 685 681 */ 686 682 687 - pr_debug("AFFS: write_begin(%u, %llu, %llu)\n", (u32)inode->i_ino, (unsigned long long)pos, (unsigned long long)pos + len); 683 + pr_debug("%s(%u, %llu, %llu)\n", 684 + __func__, (u32)inode->i_ino, (unsigned long long)pos, 685 + (unsigned long long)pos + len); 688 686 bsize = AFFS_SB(sb)->s_data_blksize; 689 687 data = page_address(page); 690 688 ··· 808 802 { 809 803 struct super_block *sb = inode->i_sb; 810 804 811 - pr_debug("AFFS: free_prealloc(ino=%lu)\n", inode->i_ino); 805 + pr_debug("free_prealloc(ino=%lu)\n", inode->i_ino); 812 806 813 807 while (AFFS_I(inode)->i_pa_cnt) { 814 808 AFFS_I(inode)->i_pa_cnt--; ··· 828 822 struct buffer_head *ext_bh; 829 823 int i; 830 824 831 - pr_debug("AFFS: truncate(inode=%d, oldsize=%u, newsize=%u)\n", 825 + pr_debug("truncate(inode=%d, oldsize=%u, newsize=%u)\n", 832 826 (u32)inode->i_ino, (u32)AFFS_I(inode)->mmu_private, (u32)inode->i_size); 833 827 834 828 last_blk = 0;
+8 -6
fs/affs/inode.c
··· 34 34 if (!(inode->i_state & I_NEW)) 35 35 return inode; 36 36 37 - pr_debug("AFFS: affs_iget(%lu)\n", inode->i_ino); 37 + pr_debug("affs_iget(%lu)\n", inode->i_ino); 38 38 39 39 block = inode->i_ino; 40 40 bh = affs_bread(sb, block); ··· 175 175 uid_t uid; 176 176 gid_t gid; 177 177 178 - pr_debug("AFFS: write_inode(%lu)\n",inode->i_ino); 178 + pr_debug("write_inode(%lu)\n", inode->i_ino); 179 179 180 180 if (!inode->i_nlink) 181 181 // possibly free block ··· 220 220 struct inode *inode = dentry->d_inode; 221 221 int error; 222 222 223 - pr_debug("AFFS: notify_change(%lu,0x%x)\n",inode->i_ino,attr->ia_valid); 223 + pr_debug("notify_change(%lu,0x%x)\n", inode->i_ino, attr->ia_valid); 224 224 225 225 error = inode_change_ok(inode,attr); 226 226 if (error) ··· 258 258 affs_evict_inode(struct inode *inode) 259 259 { 260 260 unsigned long cache_page; 261 - pr_debug("AFFS: evict_inode(ino=%lu, nlink=%u)\n", inode->i_ino, inode->i_nlink); 261 + pr_debug("evict_inode(ino=%lu, nlink=%u)\n", 262 + inode->i_ino, inode->i_nlink); 262 263 truncate_inode_pages_final(&inode->i_data); 263 264 264 265 if (!inode->i_nlink) { ··· 272 271 affs_free_prealloc(inode); 273 272 cache_page = (unsigned long)AFFS_I(inode)->i_lc; 274 273 if (cache_page) { 275 - pr_debug("AFFS: freeing ext cache\n"); 274 + pr_debug("freeing ext cache\n"); 276 275 AFFS_I(inode)->i_lc = NULL; 277 276 AFFS_I(inode)->i_ac = NULL; 278 277 free_page(cache_page); ··· 351 350 u32 block = 0; 352 351 int retval; 353 352 354 - pr_debug("AFFS: add_entry(dir=%u, inode=%u, \"%*s\", type=%d)\n", (u32)dir->i_ino, 353 + pr_debug("%s(dir=%u, inode=%u, \"%*s\", type=%d)\n", 354 + __func__, (u32)dir->i_ino, 355 355 (u32)inode->i_ino, (int)dentry->d_name.len, dentry->d_name.name, type); 356 356 357 357 retval = -EIO;
+23 -16
fs/affs/namei.c
··· 190 190 toupper_t toupper = affs_get_toupper(sb); 191 191 u32 key; 192 192 193 - pr_debug("AFFS: find_entry(\"%.*s\")\n", (int)dentry->d_name.len, dentry->d_name.name); 193 + pr_debug("%s(\"%.*s\")\n", 194 + __func__, (int)dentry->d_name.len, dentry->d_name.name); 194 195 195 196 bh = affs_bread(sb, dir->i_ino); 196 197 if (!bh) ··· 219 218 struct buffer_head *bh; 220 219 struct inode *inode = NULL; 221 220 222 - pr_debug("AFFS: lookup(\"%.*s\")\n",(int)dentry->d_name.len,dentry->d_name.name); 221 + pr_debug("%s(\"%.*s\")\n", 222 + __func__, (int)dentry->d_name.len, dentry->d_name.name); 223 223 224 224 affs_lock_dir(dir); 225 225 bh = affs_find_entry(dir, dentry); ··· 250 248 int 251 249 affs_unlink(struct inode *dir, struct dentry *dentry) 252 250 { 253 - pr_debug("AFFS: unlink(dir=%d, %lu \"%.*s\")\n", (u32)dir->i_ino, 254 - dentry->d_inode->i_ino, 255 - (int)dentry->d_name.len, dentry->d_name.name); 251 + pr_debug("%s(dir=%d, %lu \"%.*s\")\n", 252 + __func__, (u32)dir->i_ino, dentry->d_inode->i_ino, 253 + (int)dentry->d_name.len, dentry->d_name.name); 256 254 257 255 return affs_remove_header(dentry); 258 256 } ··· 264 262 struct inode *inode; 265 263 int error; 266 264 267 - pr_debug("AFFS: create(%lu,\"%.*s\",0%ho)\n",dir->i_ino,(int)dentry->d_name.len, 265 + pr_debug("%s(%lu,\"%.*s\",0%ho)\n", 266 + __func__, dir->i_ino, (int)dentry->d_name.len, 268 267 dentry->d_name.name,mode); 269 268 270 269 inode = affs_new_inode(dir); ··· 294 291 struct inode *inode; 295 292 int error; 296 293 297 - pr_debug("AFFS: mkdir(%lu,\"%.*s\",0%ho)\n",dir->i_ino, 298 - (int)dentry->d_name.len,dentry->d_name.name,mode); 294 + pr_debug("%s(%lu,\"%.*s\",0%ho)\n", 295 + __func__, dir->i_ino, (int)dentry->d_name.len, 296 + dentry->d_name.name, mode); 299 297 300 298 inode = affs_new_inode(dir); 301 299 if (!inode) ··· 321 317 int 322 318 affs_rmdir(struct inode *dir, struct dentry *dentry) 323 319 { 324 - pr_debug("AFFS: rmdir(dir=%u, %lu \"%.*s\")\n", (u32)dir->i_ino, 325 - dentry->d_inode->i_ino, 320 + pr_debug("%s(dir=%u, %lu \"%.*s\")\n", 321 + __func__, (u32)dir->i_ino, dentry->d_inode->i_ino, 326 322 (int)dentry->d_name.len, dentry->d_name.name); 327 323 328 324 return affs_remove_header(dentry); ··· 338 334 int i, maxlen, error; 339 335 char c, lc; 340 336 341 - pr_debug("AFFS: symlink(%lu,\"%.*s\" -> \"%s\")\n",dir->i_ino, 342 - (int)dentry->d_name.len,dentry->d_name.name,symname); 337 + pr_debug("%s(%lu,\"%.*s\" -> \"%s\")\n", 338 + __func__, dir->i_ino, (int)dentry->d_name.len, 339 + dentry->d_name.name, symname); 343 340 344 341 maxlen = AFFS_SB(sb)->s_hashsize * sizeof(u32) - 1; 345 342 inode = affs_new_inode(dir); ··· 409 404 { 410 405 struct inode *inode = old_dentry->d_inode; 411 406 412 - pr_debug("AFFS: link(%u, %u, \"%.*s\")\n", (u32)inode->i_ino, (u32)dir->i_ino, 407 + pr_debug("%s(%u, %u, \"%.*s\")\n", 408 + __func__, (u32)inode->i_ino, (u32)dir->i_ino, 413 409 (int)dentry->d_name.len,dentry->d_name.name); 414 410 415 411 return affs_add_entry(dir, inode, dentry, ST_LINKFILE); ··· 424 418 struct buffer_head *bh = NULL; 425 419 int retval; 426 420 427 - pr_debug("AFFS: rename(old=%u,\"%*s\" to new=%u,\"%*s\")\n", 428 - (u32)old_dir->i_ino, (int)old_dentry->d_name.len, old_dentry->d_name.name, 429 - (u32)new_dir->i_ino, (int)new_dentry->d_name.len, new_dentry->d_name.name); 421 + pr_debug("%s(old=%u,\"%*s\" to new=%u,\"%*s\")\n", 422 + __func__, (u32)old_dir->i_ino, (int)old_dentry->d_name.len, 423 + old_dentry->d_name.name, (u32)new_dir->i_ino, 424 + (int)new_dentry->d_name.len, new_dentry->d_name.name); 430 425 431 426 retval = affs_check_name(new_dentry->d_name.name, 432 427 new_dentry->d_name.len,
+21 -22
fs/affs/super.c
··· 46 46 affs_put_super(struct super_block *sb) 47 47 { 48 48 struct affs_sb_info *sbi = AFFS_SB(sb); 49 - pr_debug("AFFS: put_super()\n"); 49 + pr_debug("%s()\n", __func__); 50 50 51 51 cancel_delayed_work_sync(&sbi->sb_work); 52 52 } ··· 220 220 return 0; 221 221 if (n != 512 && n != 1024 && n != 2048 222 222 && n != 4096) { 223 - printk ("AFFS: Invalid blocksize (512, 1024, 2048, 4096 allowed)\n"); 223 + pr_warn("Invalid blocksize (512, 1024, 2048, 4096 allowed)\n"); 224 224 return 0; 225 225 } 226 226 *blocksize = n; ··· 285 285 /* Silently ignore the quota options */ 286 286 break; 287 287 default: 288 - printk("AFFS: Unrecognized mount option \"%s\" " 289 - "or missing value\n", p); 288 + pr_warn("Unrecognized mount option \"%s\" or missing value\n", 289 + p); 290 290 return 0; 291 291 } 292 292 } ··· 319 319 320 320 save_mount_options(sb, data); 321 321 322 - pr_debug("AFFS: read_super(%s)\n",data ? (const char *)data : "no options"); 322 + pr_debug("read_super(%s)\n", data ? (const char *)data : "no options"); 323 323 324 324 sb->s_magic = AFFS_SUPER_MAGIC; 325 325 sb->s_op = &affs_sops; ··· 339 339 if (!parse_options(data,&uid,&gid,&i,&reserved,&root_block, 340 340 &blocksize,&sbi->s_prefix, 341 341 sbi->s_volume, &mount_flags)) { 342 - printk(KERN_ERR "AFFS: Error parsing options\n"); 342 + pr_err("Error parsing options\n"); 343 343 return -EINVAL; 344 344 } 345 345 /* N.B. after this point s_prefix must be released */ ··· 356 356 */ 357 357 358 358 size = sb->s_bdev->bd_inode->i_size >> 9; 359 - pr_debug("AFFS: initial blocksize=%d, #blocks=%d\n", 512, size); 359 + pr_debug("initial blocksize=%d, #blocks=%d\n", 512, size); 360 360 361 361 affs_set_blocksize(sb, PAGE_SIZE); 362 362 /* Try to find root block. Its location depends on the block size. */ ··· 371 371 sbi->s_root_block = root_block; 372 372 if (root_block < 0) 373 373 sbi->s_root_block = (reserved + size - 1) / 2; 374 - pr_debug("AFFS: setting blocksize to %d\n", blocksize); 374 + pr_debug("setting blocksize to %d\n", blocksize); 375 375 affs_set_blocksize(sb, blocksize); 376 376 sbi->s_partition_size = size; 377 377 ··· 386 386 * block behind the calculated one. So we check this one, too. 387 387 */ 388 388 for (num_bm = 0; num_bm < 2; num_bm++) { 389 - pr_debug("AFFS: Dev %s, trying root=%u, bs=%d, " 389 + pr_debug("Dev %s, trying root=%u, bs=%d, " 390 390 "size=%d, reserved=%d\n", 391 391 sb->s_id, 392 392 sbi->s_root_block + num_bm, ··· 407 407 } 408 408 } 409 409 if (!silent) 410 - printk(KERN_ERR "AFFS: No valid root block on device %s\n", 411 - sb->s_id); 410 + pr_err("No valid root block on device %s\n", sb->s_id); 412 411 return -EINVAL; 413 412 414 413 /* N.B. after this point bh must be released */ ··· 419 420 /* Find out which kind of FS we have */ 420 421 boot_bh = sb_bread(sb, 0); 421 422 if (!boot_bh) { 422 - printk(KERN_ERR "AFFS: Cannot read boot block\n"); 423 + pr_err("Cannot read boot block\n"); 423 424 return -EINVAL; 424 425 } 425 426 memcpy(sig, boot_bh->b_data, 4); ··· 432 433 */ 433 434 if ((chksum == FS_DCFFS || chksum == MUFS_DCFFS || chksum == FS_DCOFS 434 435 || chksum == MUFS_DCOFS) && !(sb->s_flags & MS_RDONLY)) { 435 - printk(KERN_NOTICE "AFFS: Dircache FS - mounting %s read only\n", 436 - sb->s_id); 436 + pr_notice("Dircache FS - mounting %s read only\n", sb->s_id); 437 437 sb->s_flags |= MS_RDONLY; 438 438 } 439 439 switch (chksum) { ··· 466 468 sb->s_flags |= MS_NOEXEC; 467 469 break; 468 470 default: 469 - printk(KERN_ERR "AFFS: Unknown filesystem on device %s: %08X\n", 470 - sb->s_id, chksum); 471 + pr_err("Unknown filesystem on device %s: %08X\n", 472 + sb->s_id, chksum); 471 473 return -EINVAL; 472 474 } 473 475 474 476 if (mount_flags & SF_VERBOSE) { 475 477 u8 len = AFFS_ROOT_TAIL(sb, root_bh)->disk_name[0]; 476 - printk(KERN_NOTICE "AFFS: Mounting volume \"%.*s\": Type=%.3s\\%c, Blocksize=%d\n", 478 + pr_notice("Mounting volume \"%.*s\": Type=%.3s\\%c, Blocksize=%d\n", 477 479 len > 31 ? 31 : len, 478 480 AFFS_ROOT_TAIL(sb, root_bh)->disk_name + 1, 479 481 sig, sig[3] + '0', blocksize); ··· 504 506 505 507 sb->s_root = d_make_root(root_inode); 506 508 if (!sb->s_root) { 507 - printk(KERN_ERR "AFFS: Get root inode failed\n"); 509 + pr_err("AFFS: Get root inode failed\n"); 508 510 return -ENOMEM; 509 511 } 510 512 511 - pr_debug("AFFS: s_flags=%lX\n",sb->s_flags); 513 + pr_debug("s_flags=%lX\n", sb->s_flags); 512 514 return 0; 513 515 } ··· 528 530 char volume[32]; 529 531 char *prefix = NULL; 530 532 531 - pr_debug("AFFS: remount(flags=0x%x,opts=\"%s\")\n",*flags,data); 533 + pr_debug("%s(flags=0x%x,opts=\"%s\")\n", __func__, *flags, data); 532 534 533 535 sync_filesystem(sb); 534 536 *flags |= MS_NODIRATIME; ··· 576 578 int free; 577 579 u64 id = huge_encode_dev(sb->s_bdev->bd_dev); 578 580 579 - pr_debug("AFFS: statfs() partsize=%d, reserved=%d\n",AFFS_SB(sb)->s_partition_size, 580 - AFFS_SB(sb)->s_reserved); 581 + pr_debug("%s() partsize=%d, reserved=%d\n", 582 + __func__, AFFS_SB(sb)->s_partition_size, 583 + AFFS_SB(sb)->s_reserved); 581 584 582 585 free = affs_count_free_blocks(sb); 583 586 buf->f_type = AFFS_SUPER_MAGIC;
+1 -1
fs/affs/symlink.c
··· 21 21 char c; 22 22 char lc; 23 23 24 - pr_debug("AFFS: follow_link(ino=%lu)\n",inode->i_ino); 24 + pr_debug("follow_link(ino=%lu)\n", inode->i_ino); 25 25 26 26 err = -EIO; 27 27 bh = affs_bread(inode->i_sb, inode->i_ino);
+5 -6
fs/befs/btree.c
··· 318 318 * befs_find_key - Search for a key within a node 319 319 * @sb: Filesystem superblock 320 320 * @node: Node to find the key within 321 - * @key: Keystring to search for 321 + * @findkey: Keystring to search for 322 322 * @value: If key is found, the value stored with the key is put here 323 323 * 324 324 * finds exact match if one exists, and returns BEFS_BT_MATCH ··· 405 405 * Heres how it works: Key_no is the index of the key/value pair to 406 406 * return in keybuf/value. 407 407 * Bufsize is the size of keybuf (BEFS_NAME_LEN+1 is a good size). Keysize is 408 - * the number of charecters in the key (just a convenience). 408 + * the number of characters in the key (just a convenience). 409 409 * 410 410 * Algorithm: 411 411 * Get the first leafnode of the tree. See if the requested key is in that ··· 502 502 "for key of size %d", __func__, bufsize, keylen); 503 503 brelse(this_node->bh); 504 504 goto error_alloc; 505 - }; 505 + } 506 506 507 - strncpy(keybuf, keystart, keylen); 507 + strlcpy(keybuf, keystart, keylen + 1); 508 508 *value = fs64_to_cpu(sb, valarray[cur_key]); 509 509 *keysize = keylen; 510 - keybuf[keylen] = '\0'; 511 510 512 511 befs_debug(sb, "Read [%llu,%d]: Key \"%.*s\", Value %llu", node_off, 513 512 cur_key, keylen, keybuf, *value); ··· 706 707 * @key1: pointer to the first key to be compared 707 708 * @keylen1: length in bytes of key1 708 709 * @key2: pointer to the second key to be compared 709 - * @kelen2: length in bytes of key2 710 + * @keylen2: length in bytes of key2 710 711 * 711 712 * Returns 0 if @key1 and @key2 are equal. 712 713 * Returns >0 if @key1 is greater.
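The strncpy()-to-strlcpy() conversion in the befs hunk above also drops the manual `keybuf[keylen] = '\0'`: unlike strncpy(), strlcpy() guarantees NUL termination. The kernel's strlcpy() lives in lib/string.c, not libc, so as a sketch of its semantics, a minimal reimplementation:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of strlcpy() semantics: copy at most size-1 bytes, always
 * NUL-terminate (when size > 0), and return the full source length so
 * callers can detect truncation. */
static size_t sketch_strlcpy(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size) {
		size_t n = len >= size ? size - 1 : len;

		memcpy(dst, src, n);
		dst[n] = '\0';
	}
	return len;
}
```

Passing `keylen + 1` as the size, as the patched code does, copies exactly `keylen` bytes and writes the terminator, matching the old two-step strncpy() sequence in one call.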
+1 -1
fs/befs/datastream.c
··· 116 116 * befs_read_lsmylink - read long symlink from datastream. 117 117 * @sb: Filesystem superblock 118 118 * @ds: Datastrem to read from 119 - * @buf: Buffer in which to place long symlink data 119 + * @buff: Buffer in which to place long symlink data 120 120 * @len: Length of the long symlink in bytes 121 121 * 122 122 * Returns the number of bytes read
+10 -19
fs/befs/linuxvfs.c
··· 133 133 134 134 befs_debug(sb, "---> befs_get_block() for inode %lu, block %ld", 135 135 (unsigned long)inode->i_ino, (long)block); 136 - 137 - if (block < 0) { 138 - befs_error(sb, "befs_get_block() was asked for a block " 139 - "number less than zero: block %ld in inode %lu", 140 - (long)block, (unsigned long)inode->i_ino); 141 - return -EIO; 142 - } 143 - 144 136 if (create) { 145 137 befs_error(sb, "befs_get_block() was asked to write to " 146 138 "block %ld in inode %lu", (long)block, ··· 388 396 if (S_ISLNK(inode->i_mode) && !(befs_ino->i_flags & BEFS_LONG_SYMLINK)){ 389 397 inode->i_size = 0; 390 398 inode->i_blocks = befs_sb->block_size / VFS_BLOCK_SIZE; 391 - strncpy(befs_ino->i_data.symlink, raw_inode->data.symlink, 392 - BEFS_SYMLINK_LEN - 1); 393 - befs_ino->i_data.symlink[BEFS_SYMLINK_LEN - 1] = '\0'; 399 + strlcpy(befs_ino->i_data.symlink, raw_inode->data.symlink, 400 + BEFS_SYMLINK_LEN); 394 401 } else { 395 402 int num_blks; 396 403 ··· 582 591 /** 583 592 * befs_nls2utf - Convert NLS string to utf8 encodeing 584 593 * @sb: Superblock 585 - * @src: Input string buffer in NLS format 586 - * @srclen: Length of input string in bytes 587 - * @dest: The output string in UTF-8 format 588 - * @destlen: Length of the output buffer 594 + * @in: Input string buffer in NLS format 595 + * @in_len: Length of input string in bytes 596 + * @out: The output string in UTF-8 format 597 + * @out_len: Length of the output buffer 589 598 * 590 - * Converts input string @src, which is in the format of the loaded NLS map, 599 + * Converts input string @in, which is in the format of the loaded NLS map, 591 600 * into a utf8 string. 592 601 * 593 - * The destination string @dest is allocated by this function and the caller is 602 + * The destination string @out is allocated by this function and the caller is 594 603 * responsible for freeing it with kfree() 595 604 * 596 - * On return, *@destlen is the length of @dest in bytes. 605 + * On return, *@out_len is the length of @out in bytes. 597 606 * 598 607 * On success, the return value is the number of utf8 characters written to 599 - * the output buffer @dest. 608 + * the output buffer @out. 600 609 * 601 610 * On Failure, a negative number coresponding to the error code is returned. 602 611 */
+7 -10
fs/cachefiles/bind.c
··· 50 50 cache->brun_percent < 100); 51 51 52 52 if (*args) { 53 - kerror("'bind' command doesn't take an argument"); 53 + pr_err("'bind' command doesn't take an argument"); 54 54 return -EINVAL; 55 55 } 56 56 57 57 if (!cache->rootdirname) { 58 - kerror("No cache directory specified"); 58 + pr_err("No cache directory specified"); 59 59 return -EINVAL; 60 60 } 61 61 62 62 /* don't permit already bound caches to be re-bound */ 63 63 if (test_bit(CACHEFILES_READY, &cache->flags)) { 64 - kerror("Cache already bound"); 64 + pr_err("Cache already bound"); 65 65 return -EBUSY; 66 66 } 67 67 ··· 228 228 set_bit(CACHEFILES_READY, &cache->flags); 229 229 dput(root); 230 230 231 - printk(KERN_INFO "CacheFiles:" 232 - " File cache on %s registered\n", 233 - cache->cache.identifier); 231 + pr_info("File cache on %s registered\n", cache->cache.identifier); 234 232 235 233 /* check how much space the cache has */ 236 234 cachefiles_has_space(cache, 0, 0); ··· 248 250 kmem_cache_free(cachefiles_object_jar, fsdef); 249 251 error_root_object: 250 252 cachefiles_end_secure(cache, saved_cred); 251 - kerror("Failed to register: %d", ret); 253 + pr_err("Failed to register: %d", ret); 252 254 return ret; 253 255 } 254 256 ··· 260 262 _enter(""); 261 263 262 264 if (test_bit(CACHEFILES_READY, &cache->flags)) { 263 - printk(KERN_INFO "CacheFiles:" 264 - " File cache on %s unregistering\n", 265 - cache->cache.identifier); 265 + pr_info("File cache on %s unregistering\n", 266 + cache->cache.identifier); 266 267 267 268 fscache_withdraw_cache(&cache->cache); 268 269 }
+15 -16
fs/cachefiles/daemon.c
··· 315 315 static int cachefiles_daemon_range_error(struct cachefiles_cache *cache, 316 316 char *args) 317 317 { 318 - kerror("Free space limits must be in range" 319 - " 0%%<=stop<cull<run<100%%"); 318 + pr_err("Free space limits must be in range 0%%<=stop<cull<run<100%%"); 320 319 321 320 return -EINVAL; 322 321 } ··· 475 476 _enter(",%s", args); 476 477 477 478 if (!*args) { 478 - kerror("Empty directory specified"); 479 + pr_err("Empty directory specified"); 479 480 return -EINVAL; 480 481 } 481 482 482 483 if (cache->rootdirname) { 483 - kerror("Second cache directory specified"); 484 + pr_err("Second cache directory specified"); 484 485 return -EEXIST; 485 486 } 486 487 ··· 503 504 _enter(",%s", args); 504 505 505 506 if (!*args) { 506 - kerror("Empty security context specified"); 507 + pr_err("Empty security context specified"); 507 508 return -EINVAL; 508 509 } 509 510 510 511 if (cache->secctx) { 511 - kerror("Second security context specified"); 512 + pr_err("Second security context specified"); 512 513 return -EINVAL; 513 514 } 514 515 ··· 531 532 _enter(",%s", args); 532 533 533 534 if (!*args) { 534 - kerror("Empty tag specified"); 535 + pr_err("Empty tag specified"); 535 536 return -EINVAL; 536 537 } 537 538 ··· 562 563 goto inval; 563 564 564 565 if (!test_bit(CACHEFILES_READY, &cache->flags)) { 565 - kerror("cull applied to unready cache"); 566 + pr_err("cull applied to unready cache"); 566 567 return -EIO; 567 568 } 568 569 569 570 if (test_bit(CACHEFILES_DEAD, &cache->flags)) { 570 - kerror("cull applied to dead cache"); 571 + pr_err("cull applied to dead cache"); 571 572 return -EIO; 572 573 } 573 574 ··· 587 588 588 589 notdir: 589 590 path_put(&path); 590 - kerror("cull command requires dirfd to be a directory"); 591 + pr_err("cull command requires dirfd to be a directory"); 591 592 return -ENOTDIR; 592 593 593 594 inval: 594 - kerror("cull command requires dirfd and filename"); 595 + pr_err("cull command requires dirfd and filename"); 595 596 return -EINVAL; 596 597 } 597 598 ··· 614 615 return 0; 615 616 616 617 inval: 617 - kerror("debug command requires mask"); 618 + pr_err("debug command requires mask"); 618 619 return -EINVAL; 619 620 } 620 621 ··· 634 635 goto inval; 635 636 636 637 if (!test_bit(CACHEFILES_READY, &cache->flags)) { 637 - kerror("inuse applied to unready cache"); 638 + pr_err("inuse applied to unready cache"); 638 639 return -EIO; 639 640 } 640 641 641 642 if (test_bit(CACHEFILES_DEAD, &cache->flags)) { 642 - kerror("inuse applied to dead cache"); 643 + pr_err("inuse applied to dead cache"); 643 644 return -EIO; 644 645 } 645 646 ··· 659 660 660 661 notdir: 661 662 path_put(&path); 662 - kerror("inuse command requires dirfd to be a directory"); 663 + pr_err("inuse command requires dirfd to be a directory"); 663 664 return -ENOTDIR; 664 665 665 666 inval: 666 - kerror("inuse command requires dirfd and filename"); 667 + pr_err("inuse command requires dirfd and filename"); 667 668 return -EINVAL; 668 669 } 669 670
+1 -2
fs/cachefiles/interface.c
··· 146 146 147 147 if (ret < 0 && ret != -ETIMEDOUT) { 148 148 if (ret != -ENOBUFS) 149 - printk(KERN_WARNING 150 - "CacheFiles: Lookup failed error %d\n", ret); 149 + pr_warn("Lookup failed error %d\n", ret); 151 150 fscache_object_lookup_error(&object->fscache); 152 151 } 153 152
+18 -12
fs/cachefiles/internal.h
··· 9 9 * 2 of the Licence, or (at your option) any later version. 10 10 */ 11 11 12 + #ifdef pr_fmt 13 + #undef pr_fmt 14 + #endif 15 + 16 + #define pr_fmt(fmt) "CacheFiles: " fmt 17 + 18 + 12 19 #include <linux/fscache-cache.h> 13 20 #include <linux/timer.h> 14 21 #include <linux/wait.h> ··· 252 245 /* 253 246 * error handling 254 247 */ 255 - #define kerror(FMT, ...) printk(KERN_ERR "CacheFiles: "FMT"\n", ##__VA_ARGS__) 256 248 257 249 #define cachefiles_io_error(___cache, FMT, ...) \ 258 250 do { \ 259 - kerror("I/O Error: " FMT, ##__VA_ARGS__); \ 251 + pr_err("I/O Error: " FMT, ##__VA_ARGS__); \ 260 252 fscache_io_error(&(___cache)->cache); \ 261 253 set_bit(CACHEFILES_DEAD, &(___cache)->flags); \ 262 254 } while (0) ··· 316 310 #define ASSERT(X) \ 317 311 do { \ 318 312 if (unlikely(!(X))) { \ 319 - printk(KERN_ERR "\n"); \ 320 - printk(KERN_ERR "CacheFiles: Assertion failed\n"); \ 313 + pr_err("\n"); \ 314 + pr_err("Assertion failed\n"); \ 321 315 BUG(); \ 322 316 } \ 323 317 } while (0) ··· 325 319 #define ASSERTCMP(X, OP, Y) \ 326 320 do { \ 327 321 if (unlikely(!((X) OP (Y)))) { \ 328 - printk(KERN_ERR "\n"); \ 329 - printk(KERN_ERR "CacheFiles: Assertion failed\n"); \ 330 - printk(KERN_ERR "%lx " #OP " %lx is false\n", \ 322 + pr_err("\n"); \ 323 + pr_err("Assertion failed\n"); \ 324 + pr_err("%lx " #OP " %lx is false\n", \ 331 325 (unsigned long)(X), (unsigned long)(Y)); \ 332 326 BUG(); \ 333 327 } \ ··· 336 330 #define ASSERTIF(C, X) \ 337 331 do { \ 338 332 if (unlikely((C) && !(X))) { \ 339 - printk(KERN_ERR "\n"); \ 340 - printk(KERN_ERR "CacheFiles: Assertion failed\n"); \ 333 + pr_err("\n"); \ 334 + pr_err("Assertion failed\n"); \ 341 335 BUG(); \ 342 336 } \ 343 337 } while (0) ··· 345 339 #define ASSERTIFCMP(C, X, OP, Y) \ 346 340 do { \ 347 341 if (unlikely((C) && !((X) OP (Y)))) { \ 348 - printk(KERN_ERR "\n"); \ 349 - printk(KERN_ERR "CacheFiles: Assertion failed\n"); \ 350 - printk(KERN_ERR "%lx " #OP " %lx is false\n", \ 342 + pr_err("\n"); \ 343 + pr_err("Assertion failed\n"); \ 344 + pr_err("%lx " #OP " %lx is false\n", \ 351 345 (unsigned long)(X), (unsigned long)(Y)); \ 352 346 BUG(); \ 353 347 } \
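The pr_fmt() definition added at the top of internal.h is what lets every literal "CacheFiles: " prefix be dropped from the call sites: each pr_* macro pastes pr_fmt() around its format string at preprocessing time. A userspace sketch of the mechanism, with snprintf into a buffer as a hypothetical stand-in for printk:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* One #define per subsystem prefixes every message: since the caller's
 * format is a string literal, "CacheFiles: " fmt concatenates at compile
 * time, exactly as the kernel's pr_err()/pr_warn() family does. */
#define pr_fmt(fmt) "CacheFiles: " fmt
#define pr_err_demo(buf, fmt, ...) \
	snprintf(buf, sizeof(buf), pr_fmt(fmt), __VA_ARGS__)
```

This is also why the kerror() wrapper could be deleted outright in the hunk above: pr_err() now produces the same prefixed output.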
+4 -5
fs/cachefiles/main.c
··· 68 68 SLAB_HWCACHE_ALIGN, 69 69 cachefiles_object_init_once); 70 70 if (!cachefiles_object_jar) { 71 - printk(KERN_NOTICE 72 - "CacheFiles: Failed to allocate an object jar\n"); 71 + pr_notice("Failed to allocate an object jar\n"); 73 72 goto error_object_jar; 74 73 } 75 74 ··· 76 77 if (ret < 0) 77 78 goto error_proc; 78 79 79 - printk(KERN_INFO "CacheFiles: Loaded\n"); 80 + pr_info("Loaded\n"); 80 81 return 0; 81 82 82 83 error_proc: ··· 84 85 error_object_jar: 85 86 misc_deregister(&cachefiles_dev); 86 87 error_dev: 87 - kerror("failed to register: %d", ret); 88 + pr_err("failed to register: %d", ret); 88 89 return ret; 89 90 } 90 91 ··· 95 96 */ 96 97 static void __exit cachefiles_exit(void) 97 98 { 98 - printk(KERN_INFO "CacheFiles: Unloading\n"); 99 + pr_info("Unloading\n"); 99 100 100 101 cachefiles_proc_cleanup(); 101 102 kmem_cache_destroy(cachefiles_object_jar);
+24 -29
fs/cachefiles/namei.c
··· 35 35 struct fscache_cookie *cookie; 36 36 unsigned keylen, loop; 37 37 38 - printk(KERN_ERR "%sobject: OBJ%x\n", 39 - prefix, object->fscache.debug_id); 40 - printk(KERN_ERR "%sobjstate=%s fl=%lx wbusy=%x ev=%lx[%lx]\n", 38 + pr_err("%sobject: OBJ%x\n", prefix, object->fscache.debug_id); 39 + pr_err("%sobjstate=%s fl=%lx wbusy=%x ev=%lx[%lx]\n", 41 40 prefix, object->fscache.state->name, 42 41 object->fscache.flags, work_busy(&object->fscache.work), 43 42 object->fscache.events, object->fscache.event_mask); 44 - printk(KERN_ERR "%sops=%u inp=%u exc=%u\n", 43 + pr_err("%sops=%u inp=%u exc=%u\n", 45 44 prefix, object->fscache.n_ops, object->fscache.n_in_progress, 46 45 object->fscache.n_exclusive); 47 - printk(KERN_ERR "%sparent=%p\n", 46 + pr_err("%sparent=%p\n", 48 47 prefix, object->fscache.parent); 49 48 50 49 spin_lock(&object->fscache.lock); 51 50 cookie = object->fscache.cookie; 52 51 if (cookie) { 53 - printk(KERN_ERR "%scookie=%p [pr=%p nd=%p fl=%lx]\n", 52 + pr_err("%scookie=%p [pr=%p nd=%p fl=%lx]\n", 54 53 prefix, 55 54 object->fscache.cookie, 56 55 object->fscache.cookie->parent, ··· 61 62 else 62 63 keylen = 0; 63 64 } else { 64 - printk(KERN_ERR "%scookie=NULL\n", prefix); 65 + pr_err("%scookie=NULL\n", prefix); 65 66 keylen = 0; 66 67 } 67 68 spin_unlock(&object->fscache.lock); 68 69 69 70 if (keylen) { 70 - printk(KERN_ERR "%skey=[%u] '", prefix, keylen); 71 + pr_err("%skey=[%u] '", prefix, keylen); 71 72 for (loop = 0; loop < keylen; loop++) 72 - printk("%02x", keybuf[loop]); 73 - printk("'\n"); 73 + pr_cont("%02x", keybuf[loop]); 74 + pr_cont("'\n"); 74 75 } 75 76 } 76 77 ··· 130 131 dentry); 131 132 132 133 if (fscache_object_is_live(&object->fscache)) { 133 - printk(KERN_ERR "\n"); 134 - printk(KERN_ERR "CacheFiles: Error:" 135 - " Can't preemptively bury live object\n"); 134 + pr_err("\n"); 135 + pr_err("Error: Can't preemptively bury live object\n"); 136 136 cachefiles_printk_object(object, NULL); 137 137 } else if (test_and_set_bit(CACHEFILES_OBJECT_BURIED, &object->flags)) { 138 - printk(KERN_ERR "CacheFiles: Error:" 139 - " Object already preemptively buried\n"); 138 + pr_err("Error: Object already preemptively buried\n"); 140 139 } 141 140 142 141 write_unlock(&cache->active_lock); ··· 157 160 write_lock(&cache->active_lock); 158 161 159 162 if (test_and_set_bit(CACHEFILES_OBJECT_ACTIVE, &object->flags)) { 160 - printk(KERN_ERR "CacheFiles: Error: Object already active\n"); 163 + pr_err("Error: Object already active\n"); 161 164 cachefiles_printk_object(object, NULL); 162 165 BUG(); 163 166 } ··· 190 193 * need to wait for it to be destroyed */ 191 194 wait_for_old_object: 192 195 if (fscache_object_is_live(&object->fscache)) { 193 - printk(KERN_ERR "\n"); 194 - printk(KERN_ERR "CacheFiles: Error:" 195 - " Unexpected object collision\n"); 196 + pr_err("\n"); 197 + pr_err("Error: Unexpected object collision\n"); 196 198 cachefiles_printk_object(object, xobject); 197 199 BUG(); 198 200 } ··· 237 241 } 238 242 239 243 if (timeout <= 0) { 240 - printk(KERN_ERR "\n"); 241 - printk(KERN_ERR "CacheFiles: Error: Overlong" 242 - " wait for old active object to go away\n"); 244 + pr_err("\n"); 245 + pr_err("Error: Overlong wait for old active object to go away\n"); 243 246 cachefiles_printk_object(object, xobject); 244 247 goto requeue; 245 248 } ··· 543 548 next, next->d_inode, next->d_inode->i_ino); 544 549 545 550 } else if (!S_ISDIR(next->d_inode->i_mode)) { 546 - kerror("inode %lu is not a directory", 551 + pr_err("inode %lu is not a directory", 547 552 next->d_inode->i_ino); 548 553 ret = -ENOBUFS; 549 554 goto error; ··· 574 579 } else if (!S_ISDIR(next->d_inode->i_mode) && 575 580 !S_ISREG(next->d_inode->i_mode) 576 581 ) { 577 - kerror("inode %lu is not a file or directory", 582 + pr_err("inode %lu is not a file or directory", 578 583 next->d_inode->i_ino); 579 584 ret = -ENOBUFS; 580 585 goto error; ··· 768 773 ASSERT(subdir->d_inode); 769 774 770 775 if (!S_ISDIR(subdir->d_inode->i_mode)) { 771 - kerror("%s is not a directory", dirname); 776 + pr_err("%s is not a directory", dirname); 772 777 ret = -EIO; 773 778 goto check_error; 774 779 } ··· 795 800 mkdir_error: 796 801 mutex_unlock(&dir->d_inode->i_mutex); 797 802 dput(subdir); 798 - kerror("mkdir %s failed with error %d", dirname, ret); 803 + pr_err("mkdir %s failed with error %d", dirname, ret); 799 804 return ERR_PTR(ret); 800 805 801 806 lookup_error: 802 807 mutex_unlock(&dir->d_inode->i_mutex); 803 808 ret = PTR_ERR(subdir); 804 - kerror("Lookup %s failed with error %d", dirname, ret); 809 + pr_err("Lookup %s failed with error %d", dirname, ret); 805 810 return ERR_PTR(ret); 806 811 807 812 nomem_d_alloc: ··· 891 896 if (ret == -EIO) { 892 897 cachefiles_io_error(cache, "Lookup failed"); 893 898 } else if (ret != -ENOMEM) { 894 - kerror("Internal error: %d", ret); 899 + pr_err("Internal error: %d", ret); 895 900 ret = -EIO; 896 901 } ··· 950 955 } 951 956 952 957 if (ret != -ENOMEM) { 953 - kerror("Internal error: %d", ret); 958 + pr_err("Internal error: %d", ret); 954 959 ret = -EIO; 955 960 } 956 961
+3 -7
fs/cachefiles/security.c
··· 34 34 ret = set_security_override_from_ctx(new, cache->secctx); 35 35 if (ret < 0) { 36 36 put_cred(new); 37 - printk(KERN_ERR "CacheFiles:" 38 - " Security denies permission to nominate" 39 - " security context: error %d\n", 37 + pr_err("Security denies permission to nominate security context: error %d\n", 40 38 ret); 41 39 goto error; 42 40 } ··· 57 59 58 60 ret = security_inode_mkdir(root->d_inode, root, 0); 59 61 if (ret < 0) { 60 - printk(KERN_ERR "CacheFiles:" 61 - " Security denies permission to make dirs: error %d", 62 + pr_err("Security denies permission to make dirs: error %d", 62 63 ret); 63 64 return ret; 64 65 } 65 66 66 67 ret = security_inode_create(root->d_inode, root, 0); 67 68 if (ret < 0) 68 - printk(KERN_ERR "CacheFiles:" 69 - " Security denies permission to create files: error %d", 69 + pr_err("Security denies permission to create files: error %d", 70 70 ret); 71 71 72 72 return ret;
+5 -5
fs/cachefiles/xattr.c
··· 51 51 } 52 52 53 53 if (ret != -EEXIST) { 54 - kerror("Can't set xattr on %*.*s [%lu] (err %d)", 54 + pr_err("Can't set xattr on %*.*s [%lu] (err %d)", 55 55 dentry->d_name.len, dentry->d_name.len, 56 56 dentry->d_name.name, dentry->d_inode->i_ino, 57 57 -ret); ··· 64 64 if (ret == -ERANGE) 65 65 goto bad_type_length; 66 66 67 - kerror("Can't read xattr on %*.*s [%lu] (err %d)", 67 + pr_err("Can't read xattr on %*.*s [%lu] (err %d)", 68 68 dentry->d_name.len, dentry->d_name.len, 69 69 dentry->d_name.name, dentry->d_inode->i_ino, 70 70 -ret); ··· 85 85 return ret; 86 86 87 87 bad_type_length: 88 - kerror("Cache object %lu type xattr length incorrect", 88 + pr_err("Cache object %lu type xattr length incorrect", 89 89 dentry->d_inode->i_ino); 90 90 ret = -EIO; 91 91 goto error; 92 92 93 93 bad_type: 94 94 xtype[2] = 0; 95 - kerror("Cache object %*.*s [%lu] type %s not %s", 95 + pr_err("Cache object %*.*s [%lu] type %s not %s", 96 96 dentry->d_name.len, dentry->d_name.len, 97 97 dentry->d_name.name, dentry->d_inode->i_ino, 98 98 xtype, type); ··· 293 293 return ret; 294 294 295 295 bad_type_length: 296 - kerror("Cache object %lu xattr length incorrect", 296 + pr_err("Cache object %lu xattr length incorrect", 297 297 dentry->d_inode->i_ino); 298 298 ret = -EIO; 299 299 goto error;
+1 -1
fs/ceph/addr.c
··· 694 694 (wbc->sync_mode == WB_SYNC_ALL ? "ALL" : "HOLD")); 695 695 696 696 if (fsc->mount_state == CEPH_MOUNT_SHUTDOWN) { 697 - pr_warning("writepage_start %p on forced umount\n", inode); 697 + pr_warn("writepage_start %p on forced umount\n", inode); 698 698 return -EIO; /* we're in a forced umount, don't write! */ 699 699 } 700 700 if (fsc->mount_options->wsize && fsc->mount_options->wsize < wsize)
+3 -3
fs/ceph/debugfs.c
··· 71 71 seq_printf(s, "%s", ceph_mds_op_name(req->r_op)); 72 72 73 73 if (req->r_got_unsafe) 74 - seq_printf(s, "\t(unsafe)"); 74 + seq_puts(s, "\t(unsafe)"); 75 75 else 76 - seq_printf(s, "\t"); 76 + seq_puts(s, "\t"); 77 77 78 78 if (req->r_inode) { 79 79 seq_printf(s, " #%llx", ceph_ino(req->r_inode)); ··· 119 119 seq_printf(s, " %s", req->r_path2); 120 120 } 121 121 122 - seq_printf(s, "\n"); 122 + seq_puts(s, "\n"); 123 123 } 124 124 mutex_unlock(&mdsc->mutex); 125 125
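The seq_printf()-to-seq_puts() swaps above follow a general rule: a constant string with no conversions should go through a puts-style helper, which copies it verbatim, rather than a printf-style one, which re-scans it for '%' specifiers. A userspace illustration of the difference (seq_file is kernel-only, so snprintf stands in here):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* printf-style: the argument is parsed as a format, so "%%" collapses
 * to "%" and any stray conversion would consume (missing) arguments. */
static void emit_printf(char *buf, size_t n, const char *fmt)
{
	snprintf(buf, n, fmt);
}

/* puts-style: the argument is emitted byte-for-byte, no parsing. */
static void emit_puts(char *buf, size_t n, const char *s)
{
	snprintf(buf, n, "%s", s);
}
```

Besides skipping the pointless format scan, the puts-style call removes the risk of a literal that accidentally contains '%' being misinterpreted.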
+1 -1
fs/ceph/inode.c
··· 821 821 spin_unlock(&ci->i_ceph_lock); 822 822 } 823 823 } else if (cap_fmode >= 0) { 824 - pr_warning("mds issued no caps on %llx.%llx\n", 824 + pr_warn("mds issued no caps on %llx.%llx\n", 825 825 ceph_vinop(inode)); 826 826 __ceph_get_fmode(ci, cap_fmode); 827 827 }
+3 -3
fs/ceph/mds_client.c
··· 2218 2218 /* dup? */ 2219 2219 if ((req->r_got_unsafe && !head->safe) || 2220 2220 (req->r_got_safe && head->safe)) { 2221 - pr_warning("got a dup %s reply on %llu from mds%d\n", 2221 + pr_warn("got a dup %s reply on %llu from mds%d\n", 2222 2222 head->safe ? "safe" : "unsafe", tid, mds); 2223 2223 mutex_unlock(&mdsc->mutex); 2224 2224 goto out; 2225 2225 } 2226 2226 if (req->r_got_safe && !head->safe) { 2227 - pr_warning("got unsafe after safe on %llu from mds%d\n", 2227 + pr_warn("got unsafe after safe on %llu from mds%d\n", 2228 2228 tid, mds); 2229 2229 mutex_unlock(&mdsc->mutex); 2230 2230 goto out; ··· 3525 3525 struct ceph_mds_session *s = con->private; 3526 3526 struct ceph_mds_client *mdsc = s->s_mdsc; 3527 3527 3528 - pr_warning("mds%d closed our session\n", s->s_mds); 3528 + pr_warn("mds%d closed our session\n", s->s_mds); 3529 3529 send_mds_reconnect(mdsc, s); 3530 3530 } 3531 3531
+1 -1
fs/ceph/mdsmap.c
··· 62 62 63 63 ceph_decode_16_safe(p, end, version, bad); 64 64 if (version > 3) { 65 - pr_warning("got mdsmap version %d > 3, failing", version); 65 + pr_warn("got mdsmap version %d > 3, failing", version); 66 66 goto bad; 67 67 } 68 68
+2 -2
fs/coda/cnode.c
··· 101 101 102 102 inode = coda_iget(sb, fid, &attr); 103 103 if (IS_ERR(inode)) 104 - printk("coda_cnode_make: coda_iget failed\n"); 104 + pr_warn("%s: coda_iget failed\n", __func__); 105 105 return inode; 106 106 } 107 107 ··· 137 137 unsigned long hash = coda_f2i(fid); 138 138 139 139 if ( !sb ) { 140 - printk("coda_fid_to_inode: no sb!\n"); 140 + pr_warn("%s: no sb!\n", __func__); 141 141 return NULL; 142 142 } 143 143
+7 -1
fs/coda/coda_linux.h
··· 12 12 #ifndef _LINUX_CODA_FS 13 13 #define _LINUX_CODA_FS 14 14 15 + #ifdef pr_fmt 16 + #undef pr_fmt 17 + #endif 18 + 19 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 20 + 15 21 #include <linux/kernel.h> 16 22 #include <linux/param.h> 17 23 #include <linux/mm.h> ··· 69 63 else \ 70 64 ptr = (cast)vzalloc((unsigned long) size); \ 71 65 if (!ptr) \ 72 - printk("kernel malloc returns 0 at %s:%d\n", __FILE__, __LINE__); \ 66 + pr_warn("kernel malloc returns 0 at %s:%d\n", __FILE__, __LINE__); \ 73 67 } while (0) 74 68 75 69
+9 -9
fs/coda/dir.c
··· 102 102 int type = 0; 103 103 104 104 if (length > CODA_MAXNAMLEN) { 105 - printk(KERN_ERR "name too long: lookup, %s (%*s)\n", 105 + pr_err("name too long: lookup, %s (%*s)\n", 106 106 coda_i2s(dir), (int)length, name); 107 107 return ERR_PTR(-ENAMETOOLONG); 108 108 } ··· 453 453 ret = kernel_read(host_file, ctx->pos - 2, (char *)vdir, 454 454 sizeof(*vdir)); 455 455 if (ret < 0) { 456 - printk(KERN_ERR "coda readdir: read dir %s failed %d\n", 457 - coda_f2s(&cii->c_fid), ret); 456 + pr_err("%s: read dir %s failed %d\n", 457 + __func__, coda_f2s(&cii->c_fid), ret); 458 458 break; 459 459 } 460 460 if (ret == 0) break; /* end of directory file reached */ 461 461 462 462 /* catch truncated reads */ 463 463 if (ret < vdir_size || ret < vdir_size + vdir->d_namlen) { 464 - printk(KERN_ERR "coda readdir: short read on %s\n", 465 - coda_f2s(&cii->c_fid)); 464 + pr_err("%s: short read on %s\n", 465 + __func__, coda_f2s(&cii->c_fid)); 466 466 ret = -EBADF; 467 467 break; 468 468 } 469 469 /* validate whether the directory file actually makes sense */ 470 470 if (vdir->d_reclen < vdir_size + vdir->d_namlen) { 471 - printk(KERN_ERR "coda readdir: invalid dir %s\n", 472 - coda_f2s(&cii->c_fid)); 471 + pr_err("%s: invalid dir %s\n", 472 + __func__, coda_f2s(&cii->c_fid)); 473 473 ret = -EBADF; 474 474 break; 475 475 } ··· 589 589 coda_vattr_to_iattr(inode, &attr); 590 590 591 591 if ((old_mode & S_IFMT) != (inode->i_mode & S_IFMT)) { 592 - printk("Coda: inode %ld, fid %s changed type!\n", 593 - inode->i_ino, coda_f2s(&(cii->c_fid))); 592 + pr_warn("inode %ld, fid %s changed type!\n", 593 + inode->i_ino, coda_f2s(&(cii->c_fid))); 594 594 } 595 595 596 596 /* the following can happen when a local fid is replaced
+15 -14
fs/coda/inode.c
··· 119 119 int idx; 120 120 121 121 if (data == NULL) { 122 - printk("coda_read_super: Bad mount data\n"); 122 + pr_warn("%s: Bad mount data\n", __func__); 123 123 return -1; 124 124 } 125 125 126 126 if (data->version != CODA_MOUNT_VERSION) { 127 - printk("coda_read_super: Bad mount version\n"); 127 + pr_warn("%s: Bad mount version\n", __func__); 128 128 return -1; 129 129 } 130 130 ··· 141 141 fdput(f); 142 142 143 143 if (idx < 0 || idx >= MAX_CODADEVS) { 144 - printk("coda_read_super: Bad minor number\n"); 144 + pr_warn("%s: Bad minor number\n", __func__); 145 145 return -1; 146 146 } 147 147 148 148 return idx; 149 149 Ebadf: 150 - printk("coda_read_super: Bad file\n"); 150 + pr_warn("%s: Bad file\n", __func__); 151 151 return -1; 152 152 } 153 153 ··· 168 168 if(idx == -1) 169 169 idx = 0; 170 170 171 - printk(KERN_INFO "coda_read_super: device index: %i\n", idx); 171 + pr_info("%s: device index: %i\n", __func__, idx); 172 172 173 173 vc = &coda_comms[idx]; 174 174 mutex_lock(&vc->vc_mutex); 175 175 176 176 if (!vc->vc_inuse) { 177 - printk("coda_read_super: No pseudo device\n"); 177 + pr_warn("%s: No pseudo device\n", __func__); 178 178 error = -EINVAL; 179 179 goto unlock_out; 180 180 } 181 181 182 182 if (vc->vc_sb) { 183 - printk("coda_read_super: Device already mounted\n"); 183 + pr_warn("%s: Device already mounted\n", __func__); 184 184 error = -EBUSY; 185 185 goto unlock_out; 186 186 } ··· 204 204 /* get root fid from Venus: this needs the root inode */ 205 205 error = venus_rootfid(sb, &fid); 206 206 if ( error ) { 207 - printk("coda_read_super: coda_get_rootfid failed with %d\n", 208 - error); 207 + pr_warn("%s: coda_get_rootfid failed with %d\n", 208 + __func__, error); 209 209 goto error; 210 210 } 211 - printk("coda_read_super: rootfid is %s\n", coda_f2s(&fid)); 211 + pr_info("%s: rootfid is %s\n", __func__, coda_f2s(&fid)); 212 212 213 213 /* make root inode */ 214 214 root = coda_cnode_make(&fid, sb); 215 215 if (IS_ERR(root)) { 216 216 error = 
PTR_ERR(root); 217 - printk("Failure of coda_cnode_make for root: error %d\n", error); 217 + pr_warn("Failure of coda_cnode_make for root: error %d\n", 218 + error); 218 219 goto error; 219 220 } 220 221 221 - printk("coda_read_super: rootinode is %ld dev %s\n", 222 - root->i_ino, root->i_sb->s_id); 222 + pr_info("%s: rootinode is %ld dev %s\n", 223 + __func__, root->i_ino, root->i_sb->s_id); 223 224 sb->s_root = d_make_root(root); 224 225 if (!sb->s_root) { 225 226 error = -EINVAL; ··· 247 246 sb->s_fs_info = NULL; 248 247 mutex_unlock(&vcp->vc_mutex); 249 248 250 - printk("Coda: Bye bye.\n"); 249 + pr_info("Bye bye.\n"); 251 250 } 252 251 253 252 static void coda_evict_inode(struct inode *inode)
+20 -19
fs/coda/psdev.c
··· 114 114 int size = sizeof(*dcbuf); 115 115 116 116 if ( nbytes < sizeof(struct coda_out_hdr) ) { 117 - printk("coda_downcall opc %d uniq %d, not enough!\n", 118 - hdr.opcode, hdr.unique); 117 + pr_warn("coda_downcall opc %d uniq %d, not enough!\n", 118 + hdr.opcode, hdr.unique); 119 119 count = nbytes; 120 120 goto out; 121 121 } 122 122 if ( nbytes > size ) { 123 - printk("Coda: downcall opc %d, uniq %d, too much!", 124 - hdr.opcode, hdr.unique); 123 + pr_warn("downcall opc %d, uniq %d, too much!", 124 + hdr.opcode, hdr.unique); 125 125 nbytes = size; 126 126 } 127 127 CODA_ALLOC(dcbuf, union outputArgs *, nbytes); ··· 136 136 137 137 CODA_FREE(dcbuf, nbytes); 138 138 if (error) { 139 - printk("psdev_write: coda_downcall error: %d\n", error); 139 + pr_warn("%s: coda_downcall error: %d\n", 140 + __func__, error); 140 141 retval = error; 141 142 goto out; 142 143 } ··· 158 157 mutex_unlock(&vcp->vc_mutex); 159 158 160 159 if (!req) { 161 - printk("psdev_write: msg (%d, %d) not found\n", 162 - hdr.opcode, hdr.unique); 160 + pr_warn("%s: msg (%d, %d) not found\n", 161 + __func__, hdr.opcode, hdr.unique); 163 162 retval = -ESRCH; 164 163 goto out; 165 164 } 166 165 167 166 /* move data into response buffer. */ 168 167 if (req->uc_outSize < nbytes) { 169 - printk("psdev_write: too much cnt: %d, cnt: %ld, opc: %d, uniq: %d.\n", 170 - req->uc_outSize, (long)nbytes, hdr.opcode, hdr.unique); 168 + pr_warn("%s: too much cnt: %d, cnt: %ld, opc: %d, uniq: %d.\n", 169 + __func__, req->uc_outSize, (long)nbytes, 170 + hdr.opcode, hdr.unique); 171 171 nbytes = req->uc_outSize; /* don't have more space! 
*/ 172 172 } 173 173 if (copy_from_user(req->uc_data, buf, nbytes)) { ··· 242 240 /* Move the input args into userspace */ 243 241 count = req->uc_inSize; 244 242 if (nbytes < req->uc_inSize) { 245 - printk ("psdev_read: Venus read %ld bytes of %d in message\n", 246 - (long)nbytes, req->uc_inSize); 243 + pr_warn("%s: Venus read %ld bytes of %d in message\n", 244 + __func__, (long)nbytes, req->uc_inSize); 247 245 count = nbytes; 248 246 } 249 247 ··· 307 305 struct upc_req *req, *tmp; 308 306 309 307 if (!vcp || !vcp->vc_inuse ) { 310 - printk("psdev_release: Not open.\n"); 308 + pr_warn("%s: Not open.\n", __func__); 311 309 return -1; 312 310 } 313 311 ··· 356 354 { 357 355 int i, err = 0; 358 356 if (register_chrdev(CODA_PSDEV_MAJOR, "coda", &coda_psdev_fops)) { 359 - printk(KERN_ERR "coda_psdev: unable to get major %d\n", 360 - CODA_PSDEV_MAJOR); 357 + pr_err("%s: unable to get major %d\n", 358 + __func__, CODA_PSDEV_MAJOR); 361 359 return -EIO; 362 360 } 363 361 coda_psdev_class = class_create(THIS_MODULE, "coda"); ··· 395 393 goto out2; 396 394 status = init_coda_psdev(); 397 395 if ( status ) { 398 - printk("Problem (%d) in init_coda_psdev\n", status); 396 + pr_warn("Problem (%d) in init_coda_psdev\n", status); 399 397 goto out1; 400 398 } 401 399 402 400 status = register_filesystem(&coda_fs_type); 403 401 if (status) { 404 - printk("coda: failed to register filesystem!\n"); 402 + pr_warn("failed to register filesystem!\n"); 405 403 goto out; 406 404 } 407 405 return 0; ··· 422 420 int err, i; 423 421 424 422 err = unregister_filesystem(&coda_fs_type); 425 - if ( err != 0 ) { 426 - printk("coda: failed to unregister filesystem\n"); 427 - } 423 + if (err != 0) 424 + pr_warn("failed to unregister filesystem\n"); 428 425 for (i = 0; i < MAX_CODADEVS; i++) 429 426 device_destroy(coda_psdev_class, MKDEV(CODA_PSDEV_MAJOR, i)); 430 427 class_destroy(coda_psdev_class);
+2 -2
fs/coda/sysctl.c
··· 14 14 #ifdef CONFIG_SYSCTL 15 15 static struct ctl_table_header *fs_table_header; 16 16 17 - static ctl_table coda_table[] = { 17 + static struct ctl_table coda_table[] = { 18 18 { 19 19 .procname = "timeout", 20 20 .data = &coda_timeout, ··· 39 39 {} 40 40 }; 41 41 42 - static ctl_table fs_table[] = { 42 + static struct ctl_table fs_table[] = { 43 43 { 44 44 .procname = "coda", 45 45 .mode = 0555,
+7 -7
fs/coda/upcall.c
··· 508 508 inp->coda_ioctl.data = (char *)(INSIZE(ioctl)); 509 509 510 510 /* get the data out of user space */ 511 - if ( copy_from_user((char*)inp + (long)inp->coda_ioctl.data, 512 - data->vi.in, data->vi.in_size) ) { 511 + if (copy_from_user((char *)inp + (long)inp->coda_ioctl.data, 512 + data->vi.in, data->vi.in_size)) { 513 513 error = -EINVAL; 514 514 goto exit; 515 515 } ··· 518 518 &outsize, inp); 519 519 520 520 if (error) { 521 - printk("coda_pioctl: Venus returns: %d for %s\n", 522 - error, coda_f2s(fid)); 521 + pr_warn("%s: Venus returns: %d for %s\n", 522 + __func__, error, coda_f2s(fid)); 523 523 goto exit; 524 524 } 525 525 ··· 675 675 mutex_lock(&vcp->vc_mutex); 676 676 677 677 if (!vcp->vc_inuse) { 678 - printk(KERN_NOTICE "coda: Venus dead, not sending upcall\n"); 678 + pr_notice("Venus dead, not sending upcall\n"); 679 679 error = -ENXIO; 680 680 goto exit; 681 681 } ··· 725 725 726 726 error = -EINTR; 727 727 if ((req->uc_flags & CODA_REQ_ABORT) || !signal_pending(current)) { 728 - printk(KERN_WARNING "coda: Unexpected interruption.\n"); 728 + pr_warn("Unexpected interruption.\n"); 729 729 goto exit; 730 730 } 731 731 ··· 735 735 736 736 /* Venus saw the upcall, make sure we can send interrupt signal */ 737 737 if (!vcp->vc_inuse) { 738 - printk(KERN_INFO "coda: Venus dead, not sending signal.\n"); 738 + pr_info("Venus dead, not sending signal.\n"); 739 739 goto exit; 740 740 } 741 741
+1 -1
fs/dcache.c
··· 150 150 return sum < 0 ? 0 : sum; 151 151 } 152 152 153 - int proc_nr_dentry(ctl_table *table, int write, void __user *buffer, 153 + int proc_nr_dentry(struct ctl_table *table, int write, void __user *buffer, 154 154 size_t *lenp, loff_t *ppos) 155 155 { 156 156 dentry_stat.nr_dentry = get_nr_dentry();
+15 -11
fs/devpts/inode.c
··· 10 10 * 11 11 * ------------------------------------------------------------------------- */ 12 12 13 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 14 + 13 15 #include <linux/module.h> 14 16 #include <linux/init.h> 15 17 #include <linux/fs.h> ··· 150 148 151 149 /* 152 150 * parse_mount_options(): 153 - * Set @opts to mount options specified in @data. If an option is not 154 - * specified in @data, set it to its default value. The exception is 155 - * 'newinstance' option which can only be set/cleared on a mount (i.e. 156 - * cannot be changed during remount). 151 + * Set @opts to mount options specified in @data. If an option is not 152 + * specified in @data, set it to its default value. The exception is 153 + * 'newinstance' option which can only be set/cleared on a mount (i.e. 154 + * cannot be changed during remount). 157 155 * 158 156 * Note: @data may be NULL (in which case all options are set to default). 159 157 */ ··· 227 225 break; 228 226 #endif 229 227 default: 230 - printk(KERN_ERR "devpts: called with bogus options\n"); 228 + pr_err("called with bogus options\n"); 231 229 return -EINVAL; 232 230 } 233 231 } ··· 263 261 264 262 dentry = d_alloc_name(root, "ptmx"); 265 263 if (!dentry) { 266 - printk(KERN_NOTICE "Unable to alloc dentry for ptmx node\n"); 264 + pr_err("Unable to alloc dentry for ptmx node\n"); 267 265 goto out; 268 266 } 269 267 ··· 272 270 */ 273 271 inode = new_inode(sb); 274 272 if (!inode) { 275 - printk(KERN_ERR "Unable to alloc inode for ptmx node\n"); 273 + pr_err("Unable to alloc inode for ptmx node\n"); 276 274 dput(dentry); 277 275 goto out; 278 276 } ··· 305 303 #else 306 304 static inline void update_ptmx_mode(struct pts_fs_info *fsi) 307 305 { 308 - return; 306 + return; 309 307 } 310 308 #endif 311 309 ··· 335 333 struct pts_mount_opts *opts = &fsi->mount_opts; 336 334 337 335 if (opts->setuid) 338 - seq_printf(seq, ",uid=%u", from_kuid_munged(&init_user_ns, opts->uid)); 336 + seq_printf(seq, ",uid=%u", 337 + 
from_kuid_munged(&init_user_ns, opts->uid)); 339 338 if (opts->setgid) 340 - seq_printf(seq, ",gid=%u", from_kgid_munged(&init_user_ns, opts->gid)); 339 + seq_printf(seq, ",gid=%u", 340 + from_kgid_munged(&init_user_ns, opts->gid)); 341 341 seq_printf(seq, ",mode=%03o", opts->mode); 342 342 #ifdef CONFIG_DEVPTS_MULTIPLE_INSTANCES 343 343 seq_printf(seq, ",ptmxmode=%03o", opts->ptmxmode); ··· 400 396 if (s->s_root) 401 397 return 0; 402 398 403 - printk(KERN_ERR "devpts: get root dentry failed\n"); 399 + pr_err("get root dentry failed\n"); 404 400 405 401 fail: 406 402 return -ENOMEM;
+20 -6
fs/dlm/config.c
··· 157 157 const char *buf, size_t len) 158 158 { 159 159 unsigned int x; 160 + int rc; 160 161 161 162 if (!capable(CAP_SYS_ADMIN)) 162 163 return -EPERM; 163 - 164 - x = simple_strtoul(buf, NULL, 0); 164 + rc = kstrtouint(buf, 0, &x); 165 + if (rc) 166 + return rc; 165 167 166 168 if (check_zero && !x) 167 169 return -EINVAL; ··· 732 730 static ssize_t comm_nodeid_write(struct dlm_comm *cm, const char *buf, 733 731 size_t len) 734 732 { 735 - cm->nodeid = simple_strtol(buf, NULL, 0); 733 + int rc = kstrtoint(buf, 0, &cm->nodeid); 734 + 735 + if (rc) 736 + return rc; 736 737 return len; 737 738 } 738 739 ··· 747 742 static ssize_t comm_local_write(struct dlm_comm *cm, const char *buf, 748 743 size_t len) 749 744 { 750 - cm->local= simple_strtol(buf, NULL, 0); 745 + int rc = kstrtoint(buf, 0, &cm->local); 746 + 747 + if (rc) 748 + return rc; 751 749 if (cm->local && !local_comm) 752 750 local_comm = cm; 753 751 return len; ··· 854 846 size_t len) 855 847 { 856 848 uint32_t seq = 0; 857 - nd->nodeid = simple_strtol(buf, NULL, 0); 849 + int rc = kstrtoint(buf, 0, &nd->nodeid); 850 + 851 + if (rc) 852 + return rc; 858 853 dlm_comm_seq(nd->nodeid, &seq); 859 854 nd->comm_seq = seq; 860 855 return len; ··· 871 860 static ssize_t node_weight_write(struct dlm_node *nd, const char *buf, 872 861 size_t len) 873 862 { 874 - nd->weight = simple_strtol(buf, NULL, 0); 863 + int rc = kstrtoint(buf, 0, &nd->weight); 864 + 865 + if (rc) 866 + return rc; 875 867 return len; 876 868 } 877 869
+17 -17
fs/dlm/debug_fs.c
··· 68 68 if (lkb->lkb_wait_type) 69 69 seq_printf(s, " wait_type: %d", lkb->lkb_wait_type); 70 70 71 - return seq_printf(s, "\n"); 71 + return seq_puts(s, "\n"); 72 72 } 73 73 74 74 static int print_format1(struct dlm_rsb *res, struct seq_file *s) ··· 92 92 } 93 93 94 94 if (res->res_nodeid > 0) 95 - rv = seq_printf(s, "\" \nLocal Copy, Master is node %d\n", 95 + rv = seq_printf(s, "\"\nLocal Copy, Master is node %d\n", 96 96 res->res_nodeid); 97 97 else if (res->res_nodeid == 0) 98 - rv = seq_printf(s, "\" \nMaster Copy\n"); 98 + rv = seq_puts(s, "\"\nMaster Copy\n"); 99 99 else if (res->res_nodeid == -1) 100 - rv = seq_printf(s, "\" \nLooking up master (lkid %x)\n", 100 + rv = seq_printf(s, "\"\nLooking up master (lkid %x)\n", 101 101 res->res_first_lkid); 102 102 else 103 - rv = seq_printf(s, "\" \nInvalid master %d\n", 103 + rv = seq_printf(s, "\"\nInvalid master %d\n", 104 104 res->res_nodeid); 105 105 if (rv) 106 106 goto out; 107 107 108 108 /* Print the LVB: */ 109 109 if (res->res_lvbptr) { 110 - seq_printf(s, "LVB: "); 110 + seq_puts(s, "LVB: "); 111 111 for (i = 0; i < lvblen; i++) { 112 112 if (i == lvblen / 2) 113 - seq_printf(s, "\n "); 113 + seq_puts(s, "\n "); 114 114 seq_printf(s, "%02x ", 115 115 (unsigned char) res->res_lvbptr[i]); 116 116 } 117 117 if (rsb_flag(res, RSB_VALNOTVALID)) 118 - seq_printf(s, " (INVALID)"); 119 - rv = seq_printf(s, "\n"); 118 + seq_puts(s, " (INVALID)"); 119 + rv = seq_puts(s, "\n"); 120 120 if (rv) 121 121 goto out; 122 122 } ··· 133 133 } 134 134 135 135 /* Print the locks attached to this resource */ 136 - seq_printf(s, "Granted Queue\n"); 136 + seq_puts(s, "Granted Queue\n"); 137 137 list_for_each_entry(lkb, &res->res_grantqueue, lkb_statequeue) { 138 138 rv = print_format1_lock(s, lkb, res); 139 139 if (rv) 140 140 goto out; 141 141 } 142 142 143 - seq_printf(s, "Conversion Queue\n"); 143 + seq_puts(s, "Conversion Queue\n"); 144 144 list_for_each_entry(lkb, &res->res_convertqueue, lkb_statequeue) { 145 145 rv = 
print_format1_lock(s, lkb, res); 146 146 if (rv) 147 147 goto out; 148 148 } 149 149 150 - seq_printf(s, "Waiting Queue\n"); 150 + seq_puts(s, "Waiting Queue\n"); 151 151 list_for_each_entry(lkb, &res->res_waitqueue, lkb_statequeue) { 152 152 rv = print_format1_lock(s, lkb, res); 153 153 if (rv) ··· 157 157 if (list_empty(&res->res_lookup)) 158 158 goto out; 159 159 160 - seq_printf(s, "Lookup Queue\n"); 160 + seq_puts(s, "Lookup Queue\n"); 161 161 list_for_each_entry(lkb, &res->res_lookup, lkb_rsb_lookup) { 162 162 rv = seq_printf(s, "%08x %s", lkb->lkb_id, 163 163 print_lockmode(lkb->lkb_rqmode)); 164 164 if (lkb->lkb_wait_type) 165 165 seq_printf(s, " wait_type: %d", lkb->lkb_wait_type); 166 - rv = seq_printf(s, "\n"); 166 + rv = seq_puts(s, "\n"); 167 167 } 168 168 out: 169 169 unlock_rsb(res); ··· 300 300 else 301 301 seq_printf(s, " %02x", (unsigned char)r->res_name[i]); 302 302 } 303 - rv = seq_printf(s, "\n"); 303 + rv = seq_puts(s, "\n"); 304 304 if (rv) 305 305 goto out; 306 306 ··· 311 311 312 312 for (i = 0; i < lvblen; i++) 313 313 seq_printf(s, " %02x", (unsigned char)r->res_lvbptr[i]); 314 - rv = seq_printf(s, "\n"); 314 + rv = seq_puts(s, "\n"); 315 315 if (rv) 316 316 goto out; 317 317 ··· 377 377 else 378 378 seq_printf(s, " %02x", (unsigned char)r->res_name[i]); 379 379 } 380 - rv = seq_printf(s, "\n"); 380 + rv = seq_puts(s, "\n"); 381 381 out: 382 382 unlock_rsb(r); 383 383 return rv;
+17 -4
fs/dlm/lockspace.c
··· 35 35 static ssize_t dlm_control_store(struct dlm_ls *ls, const char *buf, size_t len) 36 36 { 37 37 ssize_t ret = len; 38 - int n = simple_strtol(buf, NULL, 0); 38 + int n; 39 + int rc = kstrtoint(buf, 0, &n); 39 40 41 + if (rc) 42 + return rc; 40 43 ls = dlm_find_lockspace_local(ls->ls_local_handle); 41 44 if (!ls) 42 45 return -EINVAL; ··· 60 57 61 58 static ssize_t dlm_event_store(struct dlm_ls *ls, const char *buf, size_t len) 62 59 { 63 - ls->ls_uevent_result = simple_strtol(buf, NULL, 0); 60 + int rc = kstrtoint(buf, 0, &ls->ls_uevent_result); 61 + 62 + if (rc) 63 + return rc; 64 64 set_bit(LSFL_UEVENT_WAIT, &ls->ls_flags); 65 65 wake_up(&ls->ls_uevent_wait); 66 66 return len; ··· 76 70 77 71 static ssize_t dlm_id_store(struct dlm_ls *ls, const char *buf, size_t len) 78 72 { 79 - ls->ls_global_id = simple_strtoul(buf, NULL, 0); 73 + int rc = kstrtouint(buf, 0, &ls->ls_global_id); 74 + 75 + if (rc) 76 + return rc; 80 77 return len; 81 78 } 82 79 ··· 90 81 91 82 static ssize_t dlm_nodir_store(struct dlm_ls *ls, const char *buf, size_t len) 92 83 { 93 - int val = simple_strtoul(buf, NULL, 0); 84 + int val; 85 + int rc = kstrtoint(buf, 0, &val); 86 + 87 + if (rc) 88 + return rc; 94 89 if (val == 1) 95 90 set_bit(LSFL_NODIR, &ls->ls_flags); 96 91 return len;
+1 -1
fs/drop_caches.c
··· 50 50 } while (nr_objects > 10); 51 51 } 52 52 53 - int drop_caches_sysctl_handler(ctl_table *table, int write, 53 + int drop_caches_sysctl_handler(struct ctl_table *table, int write, 54 54 void __user *buffer, size_t *length, loff_t *ppos) 55 55 { 56 56 int ret;
+1 -1
fs/eventpoll.c
··· 293 293 static long zero; 294 294 static long long_max = LONG_MAX; 295 295 296 - ctl_table epoll_table[] = { 296 + struct ctl_table epoll_table[] = { 297 297 { 298 298 .procname = "max_user_watches", 299 299 .data = &max_user_watches,
+2 -1
fs/fat/fat.h
··· 52 52 usefree:1, /* Use free_clusters for FAT32 */ 53 53 tz_set:1, /* Filesystem timestamps' offset set */ 54 54 rodir:1, /* allow ATTR_RO for directory */ 55 - discard:1; /* Issue discard requests on deletions */ 55 + discard:1, /* Issue discard requests on deletions */ 56 + dos1xfloppy:1; /* Assume default BPB for DOS 1.x floppies */ 56 57 }; 57 58 58 59 #define FAT_HASH_BITS 8
+268 -77
fs/fat/inode.c
··· 35 35 #define CONFIG_FAT_DEFAULT_IOCHARSET "" 36 36 #endif 37 37 38 + #define KB_IN_SECTORS 2 39 + 40 + /* 41 + * A deserialized copy of the on-disk structure laid out in struct 42 + * fat_boot_sector. 43 + */ 44 + struct fat_bios_param_block { 45 + u16 fat_sector_size; 46 + u8 fat_sec_per_clus; 47 + u16 fat_reserved; 48 + u8 fat_fats; 49 + u16 fat_dir_entries; 50 + u16 fat_sectors; 51 + u16 fat_fat_length; 52 + u32 fat_total_sect; 53 + 54 + u8 fat16_state; 55 + u32 fat16_vol_id; 56 + 57 + u32 fat32_length; 58 + u32 fat32_root_cluster; 59 + u16 fat32_info_sector; 60 + u8 fat32_state; 61 + u32 fat32_vol_id; 62 + }; 63 + 38 64 static int fat_default_codepage = CONFIG_FAT_DEFAULT_CODEPAGE; 39 65 static char fat_default_iocharset[] = CONFIG_FAT_DEFAULT_IOCHARSET; 40 66 67 + static struct fat_floppy_defaults { 68 + unsigned nr_sectors; 69 + unsigned sec_per_clus; 70 + unsigned dir_entries; 71 + unsigned media; 72 + unsigned fat_length; 73 + } floppy_defaults[] = { 74 + { 75 + .nr_sectors = 160 * KB_IN_SECTORS, 76 + .sec_per_clus = 1, 77 + .dir_entries = 64, 78 + .media = 0xFE, 79 + .fat_length = 1, 80 + }, 81 + { 82 + .nr_sectors = 180 * KB_IN_SECTORS, 83 + .sec_per_clus = 1, 84 + .dir_entries = 64, 85 + .media = 0xFC, 86 + .fat_length = 2, 87 + }, 88 + { 89 + .nr_sectors = 320 * KB_IN_SECTORS, 90 + .sec_per_clus = 2, 91 + .dir_entries = 112, 92 + .media = 0xFF, 93 + .fat_length = 1, 94 + }, 95 + { 96 + .nr_sectors = 360 * KB_IN_SECTORS, 97 + .sec_per_clus = 2, 98 + .dir_entries = 112, 99 + .media = 0xFD, 100 + .fat_length = 2, 101 + }, 102 + }; 41 103 42 104 static int fat_add_cluster(struct inode *inode) 43 105 { ··· 421 359 422 360 static int is_exec(unsigned char *extension) 423 361 { 424 - unsigned char *exe_extensions = "EXECOMBAT", *walk; 362 + unsigned char exe_extensions[] = "EXECOMBAT", *walk; 425 363 426 364 for (walk = exe_extensions; *walk; walk += 3) 427 365 if (!strncmp(extension, walk, 3)) ··· 915 853 seq_puts(m, ",nfs=stale_rw"); 916 854 if 
(opts->discard) 917 855 seq_puts(m, ",discard"); 856 + if (opts->dos1xfloppy) 857 + seq_puts(m, ",dos1xfloppy"); 918 858 919 859 return 0; 920 860 } ··· 931 867 Opt_uni_xl_no, Opt_uni_xl_yes, Opt_nonumtail_no, Opt_nonumtail_yes, 932 868 Opt_obsolete, Opt_flush, Opt_tz_utc, Opt_rodir, Opt_err_cont, 933 869 Opt_err_panic, Opt_err_ro, Opt_discard, Opt_nfs, Opt_time_offset, 934 - Opt_nfs_stale_rw, Opt_nfs_nostale_ro, Opt_err, 870 + Opt_nfs_stale_rw, Opt_nfs_nostale_ro, Opt_err, Opt_dos1xfloppy, 935 871 }; 936 872 937 873 static const match_table_t fat_tokens = { ··· 964 900 {Opt_nfs_stale_rw, "nfs"}, 965 901 {Opt_nfs_stale_rw, "nfs=stale_rw"}, 966 902 {Opt_nfs_nostale_ro, "nfs=nostale_ro"}, 903 + {Opt_dos1xfloppy, "dos1xfloppy"}, 967 904 {Opt_obsolete, "conv=binary"}, 968 905 {Opt_obsolete, "conv=text"}, 969 906 {Opt_obsolete, "conv=auto"}, ··· 1167 1102 case Opt_nfs_nostale_ro: 1168 1103 opts->nfs = FAT_NFS_NOSTALE_RO; 1169 1104 break; 1105 + case Opt_dos1xfloppy: 1106 + opts->dos1xfloppy = 1; 1107 + break; 1170 1108 1171 1109 /* msdos specific */ 1172 1110 case Opt_dots: ··· 1315 1247 return sbi->fat_length * sb->s_blocksize * 8 / sbi->fat_bits; 1316 1248 } 1317 1249 1250 + static bool fat_bpb_is_zero(struct fat_boot_sector *b) 1251 + { 1252 + if (get_unaligned_le16(&b->sector_size)) 1253 + return false; 1254 + if (b->sec_per_clus) 1255 + return false; 1256 + if (b->reserved) 1257 + return false; 1258 + if (b->fats) 1259 + return false; 1260 + if (get_unaligned_le16(&b->dir_entries)) 1261 + return false; 1262 + if (get_unaligned_le16(&b->sectors)) 1263 + return false; 1264 + if (b->media) 1265 + return false; 1266 + if (b->fat_length) 1267 + return false; 1268 + if (b->secs_track) 1269 + return false; 1270 + if (b->heads) 1271 + return false; 1272 + return true; 1273 + } 1274 + 1275 + static int fat_read_bpb(struct super_block *sb, struct fat_boot_sector *b, 1276 + int silent, struct fat_bios_param_block *bpb) 1277 + { 1278 + int error = -EINVAL; 1279 + 1280 + /* 
Read in BPB ... */ 1281 + memset(bpb, 0, sizeof(*bpb)); 1282 + bpb->fat_sector_size = get_unaligned_le16(&b->sector_size); 1283 + bpb->fat_sec_per_clus = b->sec_per_clus; 1284 + bpb->fat_reserved = le16_to_cpu(b->reserved); 1285 + bpb->fat_fats = b->fats; 1286 + bpb->fat_dir_entries = get_unaligned_le16(&b->dir_entries); 1287 + bpb->fat_sectors = get_unaligned_le16(&b->sectors); 1288 + bpb->fat_fat_length = le16_to_cpu(b->fat_length); 1289 + bpb->fat_total_sect = le32_to_cpu(b->total_sect); 1290 + 1291 + bpb->fat16_state = b->fat16.state; 1292 + bpb->fat16_vol_id = get_unaligned_le32(b->fat16.vol_id); 1293 + 1294 + bpb->fat32_length = le32_to_cpu(b->fat32.length); 1295 + bpb->fat32_root_cluster = le32_to_cpu(b->fat32.root_cluster); 1296 + bpb->fat32_info_sector = le16_to_cpu(b->fat32.info_sector); 1297 + bpb->fat32_state = b->fat32.state; 1298 + bpb->fat32_vol_id = get_unaligned_le32(b->fat32.vol_id); 1299 + 1300 + /* Validate this looks like a FAT filesystem BPB */ 1301 + if (!bpb->fat_reserved) { 1302 + if (!silent) 1303 + fat_msg(sb, KERN_ERR, 1304 + "bogus number of reserved sectors"); 1305 + goto out; 1306 + } 1307 + if (!bpb->fat_fats) { 1308 + if (!silent) 1309 + fat_msg(sb, KERN_ERR, "bogus number of FAT structure"); 1310 + goto out; 1311 + } 1312 + 1313 + /* 1314 + * Earlier we checked here that b->secs_track and b->head are nonzero, 1315 + * but it turns out valid FAT filesystems can have zero there. 
1316 + */ 1317 + 1318 + if (!fat_valid_media(b->media)) { 1319 + if (!silent) 1320 + fat_msg(sb, KERN_ERR, "invalid media value (0x%02x)", 1321 + (unsigned)b->media); 1322 + goto out; 1323 + } 1324 + 1325 + if (!is_power_of_2(bpb->fat_sector_size) 1326 + || (bpb->fat_sector_size < 512) 1327 + || (bpb->fat_sector_size > 4096)) { 1328 + if (!silent) 1329 + fat_msg(sb, KERN_ERR, "bogus logical sector size %u", 1330 + (unsigned)bpb->fat_sector_size); 1331 + goto out; 1332 + } 1333 + 1334 + if (!is_power_of_2(bpb->fat_sec_per_clus)) { 1335 + if (!silent) 1336 + fat_msg(sb, KERN_ERR, "bogus sectors per cluster %u", 1337 + (unsigned)bpb->fat_sec_per_clus); 1338 + goto out; 1339 + } 1340 + 1341 + error = 0; 1342 + 1343 + out: 1344 + return error; 1345 + } 1346 + 1347 + static int fat_read_static_bpb(struct super_block *sb, 1348 + struct fat_boot_sector *b, int silent, 1349 + struct fat_bios_param_block *bpb) 1350 + { 1351 + static const char *notdos1x = "This doesn't look like a DOS 1.x volume"; 1352 + 1353 + struct fat_floppy_defaults *fdefaults = NULL; 1354 + int error = -EINVAL; 1355 + sector_t bd_sects; 1356 + unsigned i; 1357 + 1358 + bd_sects = i_size_read(sb->s_bdev->bd_inode) / SECTOR_SIZE; 1359 + 1360 + /* 16-bit DOS 1.x reliably wrote bootstrap short-jmp code */ 1361 + if (b->ignored[0] != 0xeb || b->ignored[2] != 0x90) { 1362 + if (!silent) 1363 + fat_msg(sb, KERN_ERR, 1364 + "%s; no bootstrapping code", notdos1x); 1365 + goto out; 1366 + } 1367 + 1368 + /* 1369 + * If any value in this region is non-zero, it isn't archaic 1370 + * DOS. 
1371 + */ 1372 + if (!fat_bpb_is_zero(b)) { 1373 + if (!silent) 1374 + fat_msg(sb, KERN_ERR, 1375 + "%s; DOS 2.x BPB is non-zero", notdos1x); 1376 + goto out; 1377 + } 1378 + 1379 + for (i = 0; i < ARRAY_SIZE(floppy_defaults); i++) { 1380 + if (floppy_defaults[i].nr_sectors == bd_sects) { 1381 + fdefaults = &floppy_defaults[i]; 1382 + break; 1383 + } 1384 + } 1385 + 1386 + if (fdefaults == NULL) { 1387 + if (!silent) 1388 + fat_msg(sb, KERN_WARNING, 1389 + "This looks like a DOS 1.x volume, but isn't a recognized floppy size (%llu sectors)", 1390 + (u64)bd_sects); 1391 + goto out; 1392 + } 1393 + 1394 + if (!silent) 1395 + fat_msg(sb, KERN_INFO, 1396 + "This looks like a DOS 1.x volume; assuming default BPB values"); 1397 + 1398 + memset(bpb, 0, sizeof(*bpb)); 1399 + bpb->fat_sector_size = SECTOR_SIZE; 1400 + bpb->fat_sec_per_clus = fdefaults->sec_per_clus; 1401 + bpb->fat_reserved = 1; 1402 + bpb->fat_fats = 2; 1403 + bpb->fat_dir_entries = fdefaults->dir_entries; 1404 + bpb->fat_sectors = fdefaults->nr_sectors; 1405 + bpb->fat_fat_length = fdefaults->fat_length; 1406 + 1407 + error = 0; 1408 + 1409 + out: 1410 + return error; 1411 + } 1412 + 1318 1413 /* 1319 1414 * Read the super block of an MS-DOS FS. 
1320 1415 */ ··· 1487 1256 struct inode *root_inode = NULL, *fat_inode = NULL; 1488 1257 struct inode *fsinfo_inode = NULL; 1489 1258 struct buffer_head *bh; 1490 - struct fat_boot_sector *b; 1259 + struct fat_bios_param_block bpb; 1491 1260 struct msdos_sb_info *sbi; 1492 1261 u16 logical_sector_size; 1493 1262 u32 total_sectors, total_clusters, fat_clusters, rootdir_sectors; 1494 1263 int debug; 1495 - unsigned int media; 1496 1264 long error; 1497 1265 char buf[50]; 1498 1266 ··· 1528 1298 goto out_fail; 1529 1299 } 1530 1300 1531 - b = (struct fat_boot_sector *) bh->b_data; 1532 - if (!b->reserved) { 1533 - if (!silent) 1534 - fat_msg(sb, KERN_ERR, "bogus number of reserved sectors"); 1535 - brelse(bh); 1536 - goto out_invalid; 1537 - } 1538 - if (!b->fats) { 1539 - if (!silent) 1540 - fat_msg(sb, KERN_ERR, "bogus number of FAT structure"); 1541 - brelse(bh); 1542 - goto out_invalid; 1543 - } 1301 + error = fat_read_bpb(sb, (struct fat_boot_sector *)bh->b_data, silent, 1302 + &bpb); 1303 + if (error == -EINVAL && sbi->options.dos1xfloppy) 1304 + error = fat_read_static_bpb(sb, 1305 + (struct fat_boot_sector *)bh->b_data, silent, &bpb); 1306 + brelse(bh); 1544 1307 1545 - /* 1546 - * Earlier we checked here that b->secs_track and b->head are nonzero, 1547 - * but it turns out valid FAT filesystems can have zero there. 
1548 - */ 1308 + if (error == -EINVAL) 1309 + goto out_invalid; 1310 + else if (error) 1311 + goto out_fail; 1549 1312 1550 - media = b->media; 1551 - if (!fat_valid_media(media)) { 1552 - if (!silent) 1553 - fat_msg(sb, KERN_ERR, "invalid media value (0x%02x)", 1554 - media); 1555 - brelse(bh); 1556 - goto out_invalid; 1557 - } 1558 - logical_sector_size = get_unaligned_le16(&b->sector_size); 1559 - if (!is_power_of_2(logical_sector_size) 1560 - || (logical_sector_size < 512) 1561 - || (logical_sector_size > 4096)) { 1562 - if (!silent) 1563 - fat_msg(sb, KERN_ERR, "bogus logical sector size %u", 1564 - logical_sector_size); 1565 - brelse(bh); 1566 - goto out_invalid; 1567 - } 1568 - sbi->sec_per_clus = b->sec_per_clus; 1569 - if (!is_power_of_2(sbi->sec_per_clus)) { 1570 - if (!silent) 1571 - fat_msg(sb, KERN_ERR, "bogus sectors per cluster %u", 1572 - sbi->sec_per_clus); 1573 - brelse(bh); 1574 - goto out_invalid; 1575 - } 1313 + logical_sector_size = bpb.fat_sector_size; 1314 + sbi->sec_per_clus = bpb.fat_sec_per_clus; 1576 1315 1316 + error = -EIO; 1577 1317 if (logical_sector_size < sb->s_blocksize) { 1578 1318 fat_msg(sb, KERN_ERR, "logical sector size too small for device" 1579 1319 " (logical sector size = %u)", logical_sector_size); 1580 - brelse(bh); 1581 1320 goto out_fail; 1582 1321 } 1322 + 1583 1323 if (logical_sector_size > sb->s_blocksize) { 1584 - brelse(bh); 1324 + struct buffer_head *bh_resize; 1585 1325 1586 1326 if (!sb_set_blocksize(sb, logical_sector_size)) { 1587 1327 fat_msg(sb, KERN_ERR, "unable to set blocksize %u", 1588 1328 logical_sector_size); 1589 1329 goto out_fail; 1590 1330 } 1591 - bh = sb_bread(sb, 0); 1592 - if (bh == NULL) { 1331 + 1332 + /* Verify that the larger boot sector is fully readable */ 1333 + bh_resize = sb_bread(sb, 0); 1334 + if (bh_resize == NULL) { 1593 1335 fat_msg(sb, KERN_ERR, "unable to read boot sector" 1594 1336 " (logical sector size = %lu)", 1595 1337 sb->s_blocksize); 1596 1338 goto out_fail; 1597 1339 
} 1598 - b = (struct fat_boot_sector *) bh->b_data; 1340 + brelse(bh_resize); 1599 1341 } 1600 1342 1601 1343 mutex_init(&sbi->s_lock); 1602 1344 sbi->cluster_size = sb->s_blocksize * sbi->sec_per_clus; 1603 1345 sbi->cluster_bits = ffs(sbi->cluster_size) - 1; 1604 - sbi->fats = b->fats; 1346 + sbi->fats = bpb.fat_fats; 1605 1347 sbi->fat_bits = 0; /* Don't know yet */ 1606 - sbi->fat_start = le16_to_cpu(b->reserved); 1607 - sbi->fat_length = le16_to_cpu(b->fat_length); 1348 + sbi->fat_start = bpb.fat_reserved; 1349 + sbi->fat_length = bpb.fat_fat_length; 1608 1350 sbi->root_cluster = 0; 1609 1351 sbi->free_clusters = -1; /* Don't know yet */ 1610 1352 sbi->free_clus_valid = 0; 1611 1353 sbi->prev_free = FAT_START_ENT; 1612 1354 sb->s_maxbytes = 0xffffffff; 1613 1355 1614 - if (!sbi->fat_length && b->fat32.length) { 1356 + if (!sbi->fat_length && bpb.fat32_length) { 1615 1357 struct fat_boot_fsinfo *fsinfo; 1616 1358 struct buffer_head *fsinfo_bh; 1617 1359 1618 1360 /* Must be FAT32 */ 1619 1361 sbi->fat_bits = 32; 1620 - sbi->fat_length = le32_to_cpu(b->fat32.length); 1621 - sbi->root_cluster = le32_to_cpu(b->fat32.root_cluster); 1362 + sbi->fat_length = bpb.fat32_length; 1363 + sbi->root_cluster = bpb.fat32_root_cluster; 1622 1364 1623 1365 /* MC - if info_sector is 0, don't multiply by 0 */ 1624 - sbi->fsinfo_sector = le16_to_cpu(b->fat32.info_sector); 1366 + sbi->fsinfo_sector = bpb.fat32_info_sector; 1625 1367 if (sbi->fsinfo_sector == 0) 1626 1368 sbi->fsinfo_sector = 1; 1627 1369 ··· 1601 1399 if (fsinfo_bh == NULL) { 1602 1400 fat_msg(sb, KERN_ERR, "bread failed, FSINFO block" 1603 1401 " (sector = %lu)", sbi->fsinfo_sector); 1604 - brelse(bh); 1605 1402 goto out_fail; 1606 1403 } 1607 1404 ··· 1623 1422 1624 1423 /* interpret volume ID as a little endian 32 bit integer */ 1625 1424 if (sbi->fat_bits == 32) 1626 - sbi->vol_id = (((u32)b->fat32.vol_id[0]) | 1627 - ((u32)b->fat32.vol_id[1] << 8) | 1628 - ((u32)b->fat32.vol_id[2] << 16) | 1629 - 
((u32)b->fat32.vol_id[3] << 24)); 1425 + sbi->vol_id = bpb.fat32_vol_id; 1630 1426 else /* fat 16 or 12 */ 1631 - sbi->vol_id = (((u32)b->fat16.vol_id[0]) | 1632 - ((u32)b->fat16.vol_id[1] << 8) | 1633 - ((u32)b->fat16.vol_id[2] << 16) | 1634 - ((u32)b->fat16.vol_id[3] << 24)); 1427 + sbi->vol_id = bpb.fat16_vol_id; 1635 1428 1636 1429 sbi->dir_per_block = sb->s_blocksize / sizeof(struct msdos_dir_entry); 1637 1430 sbi->dir_per_block_bits = ffs(sbi->dir_per_block) - 1; 1638 1431 1639 1432 sbi->dir_start = sbi->fat_start + sbi->fats * sbi->fat_length; 1640 - sbi->dir_entries = get_unaligned_le16(&b->dir_entries); 1433 + sbi->dir_entries = bpb.fat_dir_entries; 1641 1434 if (sbi->dir_entries & (sbi->dir_per_block - 1)) { 1642 1435 if (!silent) 1643 1436 fat_msg(sb, KERN_ERR, "bogus directory-entries per block" 1644 1437 " (%u)", sbi->dir_entries); 1645 - brelse(bh); 1646 1438 goto out_invalid; 1647 1439 } 1648 1440 1649 1441 rootdir_sectors = sbi->dir_entries 1650 1442 * sizeof(struct msdos_dir_entry) / sb->s_blocksize; 1651 1443 sbi->data_start = sbi->dir_start + rootdir_sectors; 1652 - total_sectors = get_unaligned_le16(&b->sectors); 1444 + total_sectors = bpb.fat_sectors; 1653 1445 if (total_sectors == 0) 1654 - total_sectors = le32_to_cpu(b->total_sect); 1446 + total_sectors = bpb.fat_total_sect; 1655 1447 1656 1448 total_clusters = (total_sectors - sbi->data_start) / sbi->sec_per_clus; 1657 1449 ··· 1653 1459 1654 1460 /* some OSes set FAT_STATE_DIRTY and clean it on unmount. 
*/ 1655 1461 if (sbi->fat_bits == 32) 1656 - sbi->dirty = b->fat32.state & FAT_STATE_DIRTY; 1462 + sbi->dirty = bpb.fat32_state & FAT_STATE_DIRTY; 1657 1463 else /* fat 16 or 12 */ 1658 - sbi->dirty = b->fat16.state & FAT_STATE_DIRTY; 1464 + sbi->dirty = bpb.fat16_state & FAT_STATE_DIRTY; 1659 1465 1660 1466 /* check that FAT table does not overflow */ 1661 1467 fat_clusters = calc_fat_clusters(sb); ··· 1664 1470 if (!silent) 1665 1471 fat_msg(sb, KERN_ERR, "count of clusters too big (%u)", 1666 1472 total_clusters); 1667 - brelse(bh); 1668 1473 goto out_invalid; 1669 1474 } 1670 1475 ··· 1675 1482 sbi->prev_free %= sbi->max_cluster; 1676 1483 if (sbi->prev_free < FAT_START_ENT) 1677 1484 sbi->prev_free = FAT_START_ENT; 1678 - 1679 - brelse(bh); 1680 1485 1681 1486 /* set up enough so that it can read an inode */ 1682 1487 fat_hash_init(sb);
+2 -2
fs/file_table.c
··· 76 76 * Handle nr_files sysctl 77 77 */ 78 78 #if defined(CONFIG_SYSCTL) && defined(CONFIG_PROC_FS) 79 - int proc_nr_files(ctl_table *table, int write, 79 + int proc_nr_files(struct ctl_table *table, int write, 80 80 void __user *buffer, size_t *lenp, loff_t *ppos) 81 81 { 82 82 files_stat.nr_files = get_nr_files(); 83 83 return proc_doulongvec_minmax(table, write, buffer, lenp, ppos); 84 84 } 85 85 #else 86 - int proc_nr_files(ctl_table *table, int write, 86 + int proc_nr_files(struct ctl_table *table, int write, 87 87 void __user *buffer, size_t *lenp, loff_t *ppos) 88 88 { 89 89 return -ENOSYS;
+2 -2
fs/fscache/main.c
··· 67 67 return ret; 68 68 } 69 69 70 - ctl_table fscache_sysctls[] = { 70 + struct ctl_table fscache_sysctls[] = { 71 71 { 72 72 .procname = "object_max_active", 73 73 .data = &fscache_object_max_active, ··· 87 87 {} 88 88 }; 89 89 90 - ctl_table fscache_sysctls_root[] = { 90 + struct ctl_table fscache_sysctls_root[] = { 91 91 { 92 92 .procname = "fscache", 93 93 .mode = 0555,
+4 -32
fs/hfsplus/attributes.c
··· 54 54 memset(key, 0, sizeof(struct hfsplus_attr_key)); 55 55 key->attr.cnid = cpu_to_be32(cnid); 56 56 if (name) { 57 - len = strlen(name); 58 - if (len > HFSPLUS_ATTR_MAX_STRLEN) { 59 - pr_err("invalid xattr name's length\n"); 60 - return -EINVAL; 61 - } 62 - hfsplus_asc2uni(sb, 57 + int res = hfsplus_asc2uni(sb, 63 58 (struct hfsplus_unistr *)&key->attr.key_name, 64 - HFSPLUS_ATTR_MAX_STRLEN, name, len); 59 + HFSPLUS_ATTR_MAX_STRLEN, name, strlen(name)); 60 + if (res) 61 + return res; 65 62 len = be16_to_cpu(key->attr.key_name.length); 66 63 } else { 67 64 key->attr.key_name.length = 0; ··· 77 80 2 * len); 78 81 79 82 return 0; 80 - } 81 - 82 - void hfsplus_attr_build_key_uni(hfsplus_btree_key *key, 83 - u32 cnid, 84 - struct hfsplus_attr_unistr *name) 85 - { 86 - int ustrlen; 87 - 88 - memset(key, 0, sizeof(struct hfsplus_attr_key)); 89 - ustrlen = be16_to_cpu(name->length); 90 - key->attr.cnid = cpu_to_be32(cnid); 91 - key->attr.key_name.length = cpu_to_be16(ustrlen); 92 - ustrlen *= 2; 93 - memcpy(key->attr.key_name.unicode, name->unicode, ustrlen); 94 - 95 - /* The length of the key, as stored in key_len field, does not include 96 - * the size of the key_len field itself. 97 - * So, offsetof(hfsplus_attr_key, key_name) is a trick because 98 - * it takes into consideration key_len field (__be16) of 99 - * hfsplus_attr_key structure instead of length field (__be16) of 100 - * hfsplus_attr_unistr structure. 101 - */ 102 - key->key_len = 103 - cpu_to_be16(offsetof(struct hfsplus_attr_key, key_name) + 104 - ustrlen); 105 83 } 106 84 107 85 hfsplus_attr_entry *hfsplus_alloc_attr_entry(void)
+30 -19
fs/hfsplus/bnode.c
··· 27 27 pagep = node->page + (off >> PAGE_CACHE_SHIFT); 28 28 off &= ~PAGE_CACHE_MASK; 29 29 30 - l = min(len, (int)PAGE_CACHE_SIZE - off); 30 + l = min_t(int, len, PAGE_CACHE_SIZE - off); 31 31 memcpy(buf, kmap(*pagep) + off, l); 32 32 kunmap(*pagep); 33 33 34 34 while ((len -= l) != 0) { 35 35 buf += l; 36 - l = min(len, (int)PAGE_CACHE_SIZE); 36 + l = min_t(int, len, PAGE_CACHE_SIZE); 37 37 memcpy(buf, kmap(*++pagep), l); 38 38 kunmap(*pagep); 39 39 } ··· 80 80 pagep = node->page + (off >> PAGE_CACHE_SHIFT); 81 81 off &= ~PAGE_CACHE_MASK; 82 82 83 - l = min(len, (int)PAGE_CACHE_SIZE - off); 83 + l = min_t(int, len, PAGE_CACHE_SIZE - off); 84 84 memcpy(kmap(*pagep) + off, buf, l); 85 85 set_page_dirty(*pagep); 86 86 kunmap(*pagep); 87 87 88 88 while ((len -= l) != 0) { 89 89 buf += l; 90 - l = min(len, (int)PAGE_CACHE_SIZE); 90 + l = min_t(int, len, PAGE_CACHE_SIZE); 91 91 memcpy(kmap(*++pagep), buf, l); 92 92 set_page_dirty(*pagep); 93 93 kunmap(*pagep); ··· 110 110 pagep = node->page + (off >> PAGE_CACHE_SHIFT); 111 111 off &= ~PAGE_CACHE_MASK; 112 112 113 - l = min(len, (int)PAGE_CACHE_SIZE - off); 113 + l = min_t(int, len, PAGE_CACHE_SIZE - off); 114 114 memset(kmap(*pagep) + off, 0, l); 115 115 set_page_dirty(*pagep); 116 116 kunmap(*pagep); 117 117 118 118 while ((len -= l) != 0) { 119 - l = min(len, (int)PAGE_CACHE_SIZE); 119 + l = min_t(int, len, PAGE_CACHE_SIZE); 120 120 memset(kmap(*++pagep), 0, l); 121 121 set_page_dirty(*pagep); 122 122 kunmap(*pagep); ··· 142 142 dst &= ~PAGE_CACHE_MASK; 143 143 144 144 if (src == dst) { 145 - l = min(len, (int)PAGE_CACHE_SIZE - src); 145 + l = min_t(int, len, PAGE_CACHE_SIZE - src); 146 146 memcpy(kmap(*dst_page) + src, kmap(*src_page) + src, l); 147 147 kunmap(*src_page); 148 148 set_page_dirty(*dst_page); 149 149 kunmap(*dst_page); 150 150 151 151 while ((len -= l) != 0) { 152 - l = min(len, (int)PAGE_CACHE_SIZE); 152 + l = min_t(int, len, PAGE_CACHE_SIZE); 153 153 memcpy(kmap(*++dst_page), kmap(*++src_page), 
l); 154 154 kunmap(*src_page); 155 155 set_page_dirty(*dst_page); ··· 251 251 dst &= ~PAGE_CACHE_MASK; 252 252 253 253 if (src == dst) { 254 - l = min(len, (int)PAGE_CACHE_SIZE - src); 254 + l = min_t(int, len, PAGE_CACHE_SIZE - src); 255 255 memmove(kmap(*dst_page) + src, 256 256 kmap(*src_page) + src, l); 257 257 kunmap(*src_page); ··· 259 259 kunmap(*dst_page); 260 260 261 261 while ((len -= l) != 0) { 262 - l = min(len, (int)PAGE_CACHE_SIZE); 262 + l = min_t(int, len, PAGE_CACHE_SIZE); 263 263 memmove(kmap(*++dst_page), 264 264 kmap(*++src_page), l); 265 265 kunmap(*src_page); ··· 386 386 struct hfs_bnode *node; 387 387 388 388 if (cnid >= tree->node_count) { 389 - pr_err("request for non-existent node " 390 - "%d in B*Tree\n", 391 - cnid); 389 + pr_err("request for non-existent node %d in B*Tree\n", 390 + cnid); 392 391 return NULL; 393 392 } 394 393 ··· 408 409 loff_t off; 409 410 410 411 if (cnid >= tree->node_count) { 411 - pr_err("request for non-existent node " 412 - "%d in B*Tree\n", 413 - cnid); 412 + pr_err("request for non-existent node %d in B*Tree\n", 413 + cnid); 414 414 return NULL; 415 415 } 416 416 ··· 600 602 601 603 pagep = node->page; 602 604 memset(kmap(*pagep) + node->page_offset, 0, 603 - min((int)PAGE_CACHE_SIZE, (int)tree->node_size)); 605 + min_t(int, PAGE_CACHE_SIZE, tree->node_size)); 604 606 set_page_dirty(*pagep); 605 607 kunmap(*pagep); 606 608 for (i = 1; i < tree->pages_per_bnode; i++) { ··· 646 648 if (test_bit(HFS_BNODE_DELETED, &node->flags)) { 647 649 hfs_bnode_unhash(node); 648 650 spin_unlock(&tree->hash_lock); 649 - hfs_bnode_clear(node, 0, 650 - PAGE_CACHE_SIZE * tree->pages_per_bnode); 651 + if (hfs_bnode_need_zeroout(tree)) 652 + hfs_bnode_clear(node, 0, tree->node_size); 651 653 hfs_bmap_free(node); 652 654 hfs_bnode_free(node); 653 655 return; ··· 656 658 } 657 659 } 658 660 661 + /* 662 + * Unused nodes have to be zeroed if this is the catalog tree and 663 + * a corresponding flag in the volume header is set. 
664 + */ 665 + bool hfs_bnode_need_zeroout(struct hfs_btree *tree) 666 + { 667 + struct super_block *sb = tree->inode->i_sb; 668 + struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb); 669 + const u32 volume_attr = be32_to_cpu(sbi->s_vhdr->attributes); 670 + 671 + return tree->cnid == HFSPLUS_CAT_CNID && 672 + volume_attr & HFSPLUS_VOL_UNUSED_NODE_FIX; 673 + }
+1 -1
fs/hfsplus/btree.c
··· 358 358 u32 count; 359 359 int res; 360 360 361 - res = hfsplus_file_extend(inode); 361 + res = hfsplus_file_extend(inode, hfs_bnode_need_zeroout(tree)); 362 362 if (res) 363 363 return ERR_PTR(res); 364 364 hip->phys_size = inode->i_size =
+28 -3
fs/hfsplus/dir.c
··· 12 12 #include <linux/fs.h> 13 13 #include <linux/slab.h> 14 14 #include <linux/random.h> 15 + #include <linux/nls.h> 15 16 16 17 #include "hfsplus_fs.h" 17 18 #include "hfsplus_raw.h" ··· 128 127 struct inode *inode = file_inode(file); 129 128 struct super_block *sb = inode->i_sb; 130 129 int len, err; 131 - char strbuf[HFSPLUS_MAX_STRLEN + 1]; 130 + char *strbuf; 132 131 hfsplus_cat_entry entry; 133 132 struct hfs_find_data fd; 134 133 struct hfsplus_readdir_data *rd; ··· 140 139 err = hfs_find_init(HFSPLUS_SB(sb)->cat_tree, &fd); 141 140 if (err) 142 141 return err; 142 + strbuf = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_MAX_STRLEN + 1, GFP_KERNEL); 143 + if (!strbuf) { 144 + err = -ENOMEM; 145 + goto out; 146 + } 143 147 hfsplus_cat_build_key(sb, fd.search_key, inode->i_ino, NULL); 144 148 err = hfs_brec_find(&fd, hfs_find_rec_by_key); 145 149 if (err) ··· 199 193 hfs_bnode_read(fd.bnode, &entry, fd.entryoffset, 200 194 fd.entrylength); 201 195 type = be16_to_cpu(entry.type); 202 - len = HFSPLUS_MAX_STRLEN; 196 + len = NLS_MAX_CHARSET_SIZE * HFSPLUS_MAX_STRLEN; 203 197 err = hfsplus_uni2asc(sb, &fd.key->cat.name, strbuf, &len); 204 198 if (err) 205 199 goto out; ··· 218 212 be32_to_cpu(entry.folder.id), DT_DIR)) 219 213 break; 220 214 } else if (type == HFSPLUS_FILE) { 215 + u16 mode; 216 + unsigned type = DT_UNKNOWN; 217 + 221 218 if (fd.entrylength < sizeof(struct hfsplus_cat_file)) { 222 219 pr_err("small file entry\n"); 223 220 err = -EIO; 224 221 goto out; 225 222 } 223 + 224 + mode = be16_to_cpu(entry.file.permissions.mode); 225 + if (S_ISREG(mode)) 226 + type = DT_REG; 227 + else if (S_ISLNK(mode)) 228 + type = DT_LNK; 229 + else if (S_ISFIFO(mode)) 230 + type = DT_FIFO; 231 + else if (S_ISCHR(mode)) 232 + type = DT_CHR; 233 + else if (S_ISBLK(mode)) 234 + type = DT_BLK; 235 + else if (S_ISSOCK(mode)) 236 + type = DT_SOCK; 237 + 226 238 if (!dir_emit(ctx, strbuf, len, 227 - be32_to_cpu(entry.file.id), DT_REG)) 239 + be32_to_cpu(entry.file.id), type)) 
228 240 break; 229 241 } else { 230 242 pr_err("bad catalog entry type\n"); ··· 270 246 } 271 247 memcpy(&rd->key, fd.key, sizeof(struct hfsplus_cat_key)); 272 248 out: 249 + kfree(strbuf); 273 250 hfs_find_exit(&fd); 274 251 return err; 275 252 }
+11 -6
fs/hfsplus/extents.c
··· 235 235 if (iblock > hip->fs_blocks || !create) 236 236 return -EIO; 237 237 if (ablock >= hip->alloc_blocks) { 238 - res = hfsplus_file_extend(inode); 238 + res = hfsplus_file_extend(inode, false); 239 239 if (res) 240 240 return res; 241 241 } ··· 425 425 return res; 426 426 } 427 427 428 - int hfsplus_file_extend(struct inode *inode) 428 + int hfsplus_file_extend(struct inode *inode, bool zeroout) 429 429 { 430 430 struct super_block *sb = inode->i_sb; 431 431 struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb); ··· 436 436 if (sbi->alloc_file->i_size * 8 < 437 437 sbi->total_blocks - sbi->free_blocks + 8) { 438 438 /* extend alloc file */ 439 - pr_err("extend alloc file! " 440 - "(%llu,%u,%u)\n", 441 - sbi->alloc_file->i_size * 8, 442 - sbi->total_blocks, sbi->free_blocks); 439 + pr_err("extend alloc file! (%llu,%u,%u)\n", 440 + sbi->alloc_file->i_size * 8, 441 + sbi->total_blocks, sbi->free_blocks); 443 442 return -ENOSPC; 444 443 } 445 444 ··· 460 461 res = -ENOSPC; 461 462 goto out; 462 463 } 464 + } 465 + 466 + if (zeroout) { 467 + res = sb_issue_zeroout(sb, start, len, GFP_NOFS); 468 + if (res) 469 + goto out; 463 470 } 464 471 465 472 hfs_dbg(EXTENT, "extend %lu: %u,%u\n", inode->i_ino, start, len);
+107 -98
fs/hfsplus/hfsplus_fs.h
··· 369 369 /* attributes.c */ 370 370 int __init hfsplus_create_attr_tree_cache(void); 371 371 void hfsplus_destroy_attr_tree_cache(void); 372 + int hfsplus_attr_bin_cmp_key(const hfsplus_btree_key *k1, 373 + const hfsplus_btree_key *k2); 374 + int hfsplus_attr_build_key(struct super_block *sb, hfsplus_btree_key *key, 375 + u32 cnid, const char *name); 372 376 hfsplus_attr_entry *hfsplus_alloc_attr_entry(void); 373 - void hfsplus_destroy_attr_entry(hfsplus_attr_entry *entry_p); 374 - int hfsplus_attr_bin_cmp_key(const hfsplus_btree_key *, 375 - const hfsplus_btree_key *); 376 - int hfsplus_attr_build_key(struct super_block *, hfsplus_btree_key *, 377 - u32, const char *); 378 - void hfsplus_attr_build_key_uni(hfsplus_btree_key *key, 379 - u32 cnid, 380 - struct hfsplus_attr_unistr *name); 381 - int hfsplus_find_attr(struct super_block *, u32, 382 - const char *, struct hfs_find_data *); 377 + void hfsplus_destroy_attr_entry(hfsplus_attr_entry *entry); 378 + int hfsplus_find_attr(struct super_block *sb, u32 cnid, const char *name, 379 + struct hfs_find_data *fd); 383 380 int hfsplus_attr_exists(struct inode *inode, const char *name); 384 - int hfsplus_create_attr(struct inode *, const char *, const void *, size_t); 385 - int hfsplus_delete_attr(struct inode *, const char *); 381 + int hfsplus_create_attr(struct inode *inode, const char *name, 382 + const void *value, size_t size); 383 + int hfsplus_delete_attr(struct inode *inode, const char *name); 386 384 int hfsplus_delete_all_attrs(struct inode *dir, u32 cnid); 387 385 388 386 /* bitmap.c */ 389 - int hfsplus_block_allocate(struct super_block *, u32, u32, u32 *); 390 - int hfsplus_block_free(struct super_block *, u32, u32); 387 + int hfsplus_block_allocate(struct super_block *sb, u32 size, u32 offset, 388 + u32 *max); 389 + int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count); 391 390 392 391 /* btree.c */ 393 - u32 hfsplus_calc_btree_clump_size(u32, u32, u64, int); 394 - struct hfs_btree 
*hfs_btree_open(struct super_block *, u32); 395 - void hfs_btree_close(struct hfs_btree *); 396 - int hfs_btree_write(struct hfs_btree *); 397 - struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *); 398 - void hfs_bmap_free(struct hfs_bnode *); 392 + u32 hfsplus_calc_btree_clump_size(u32 block_size, u32 node_size, u64 sectors, 393 + int file_id); 394 + struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id); 395 + void hfs_btree_close(struct hfs_btree *tree); 396 + int hfs_btree_write(struct hfs_btree *tree); 397 + struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree); 398 + void hfs_bmap_free(struct hfs_bnode *node); 399 399 400 400 /* bnode.c */ 401 - void hfs_bnode_read(struct hfs_bnode *, void *, int, int); 402 - u16 hfs_bnode_read_u16(struct hfs_bnode *, int); 403 - u8 hfs_bnode_read_u8(struct hfs_bnode *, int); 404 - void hfs_bnode_read_key(struct hfs_bnode *, void *, int); 405 - void hfs_bnode_write(struct hfs_bnode *, void *, int, int); 406 - void hfs_bnode_write_u16(struct hfs_bnode *, int, u16); 407 - void hfs_bnode_clear(struct hfs_bnode *, int, int); 408 - void hfs_bnode_copy(struct hfs_bnode *, int, 409 - struct hfs_bnode *, int, int); 410 - void hfs_bnode_move(struct hfs_bnode *, int, int, int); 411 - void hfs_bnode_dump(struct hfs_bnode *); 412 - void hfs_bnode_unlink(struct hfs_bnode *); 413 - struct hfs_bnode *hfs_bnode_findhash(struct hfs_btree *, u32); 414 - struct hfs_bnode *hfs_bnode_find(struct hfs_btree *, u32); 415 - void hfs_bnode_unhash(struct hfs_bnode *); 416 - void hfs_bnode_free(struct hfs_bnode *); 417 - struct hfs_bnode *hfs_bnode_create(struct hfs_btree *, u32); 418 - void hfs_bnode_get(struct hfs_bnode *); 419 - void hfs_bnode_put(struct hfs_bnode *); 401 + void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len); 402 + u16 hfs_bnode_read_u16(struct hfs_bnode *node, int off); 403 + u8 hfs_bnode_read_u8(struct hfs_bnode *node, int off); 404 + void hfs_bnode_read_key(struct hfs_bnode *node, void *key, int 
off); 405 + void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len); 406 + void hfs_bnode_write_u16(struct hfs_bnode *node, int off, u16 data); 407 + void hfs_bnode_clear(struct hfs_bnode *node, int off, int len); 408 + void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst, 409 + struct hfs_bnode *src_node, int src, int len); 410 + void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len); 411 + void hfs_bnode_dump(struct hfs_bnode *node); 412 + void hfs_bnode_unlink(struct hfs_bnode *node); 413 + struct hfs_bnode *hfs_bnode_findhash(struct hfs_btree *tree, u32 cnid); 414 + void hfs_bnode_unhash(struct hfs_bnode *node); 415 + struct hfs_bnode *hfs_bnode_find(struct hfs_btree *tree, u32 num); 416 + void hfs_bnode_free(struct hfs_bnode *node); 417 + struct hfs_bnode *hfs_bnode_create(struct hfs_btree *tree, u32 num); 418 + void hfs_bnode_get(struct hfs_bnode *node); 419 + void hfs_bnode_put(struct hfs_bnode *node); 420 + bool hfs_bnode_need_zeroout(struct hfs_btree *tree); 420 421 421 422 /* brec.c */ 422 - u16 hfs_brec_lenoff(struct hfs_bnode *, u16, u16 *); 423 - u16 hfs_brec_keylen(struct hfs_bnode *, u16); 424 - int hfs_brec_insert(struct hfs_find_data *, void *, int); 425 - int hfs_brec_remove(struct hfs_find_data *); 423 + u16 hfs_brec_lenoff(struct hfs_bnode *node, u16 rec, u16 *off); 424 + u16 hfs_brec_keylen(struct hfs_bnode *node, u16 rec); 425 + int hfs_brec_insert(struct hfs_find_data *fd, void *entry, int entry_len); 426 + int hfs_brec_remove(struct hfs_find_data *fd); 426 427 427 428 /* bfind.c */ 428 - int hfs_find_init(struct hfs_btree *, struct hfs_find_data *); 429 - void hfs_find_exit(struct hfs_find_data *); 430 - int hfs_find_1st_rec_by_cnid(struct hfs_bnode *, 431 - struct hfs_find_data *, 432 - int *, int *, int *); 433 - int hfs_find_rec_by_key(struct hfs_bnode *, 434 - struct hfs_find_data *, 435 - int *, int *, int *); 436 - int __hfs_brec_find(struct hfs_bnode *, struct hfs_find_data *, 437 - 
search_strategy_t); 438 - int hfs_brec_find(struct hfs_find_data *, search_strategy_t); 439 - int hfs_brec_read(struct hfs_find_data *, void *, int); 440 - int hfs_brec_goto(struct hfs_find_data *, int); 429 + int hfs_find_init(struct hfs_btree *tree, struct hfs_find_data *fd); 430 + void hfs_find_exit(struct hfs_find_data *fd); 431 + int hfs_find_1st_rec_by_cnid(struct hfs_bnode *bnode, struct hfs_find_data *fd, 432 + int *begin, int *end, int *cur_rec); 433 + int hfs_find_rec_by_key(struct hfs_bnode *bnode, struct hfs_find_data *fd, 434 + int *begin, int *end, int *cur_rec); 435 + int __hfs_brec_find(struct hfs_bnode *bnode, struct hfs_find_data *fd, 436 + search_strategy_t rec_found); 437 + int hfs_brec_find(struct hfs_find_data *fd, search_strategy_t do_key_compare); 438 + int hfs_brec_read(struct hfs_find_data *fd, void *rec, int rec_len); 439 + int hfs_brec_goto(struct hfs_find_data *fd, int cnt); 441 440 442 441 /* catalog.c */ 443 - int hfsplus_cat_case_cmp_key(const hfsplus_btree_key *, 444 - const hfsplus_btree_key *); 445 - int hfsplus_cat_bin_cmp_key(const hfsplus_btree_key *, 446 - const hfsplus_btree_key *); 447 - void hfsplus_cat_build_key(struct super_block *sb, 448 - hfsplus_btree_key *, u32, struct qstr *); 449 - int hfsplus_find_cat(struct super_block *, u32, struct hfs_find_data *); 450 - int hfsplus_create_cat(u32, struct inode *, struct qstr *, struct inode *); 451 - int hfsplus_delete_cat(u32, struct inode *, struct qstr *); 452 - int hfsplus_rename_cat(u32, struct inode *, struct qstr *, 453 - struct inode *, struct qstr *); 442 + int hfsplus_cat_case_cmp_key(const hfsplus_btree_key *k1, 443 + const hfsplus_btree_key *k2); 444 + int hfsplus_cat_bin_cmp_key(const hfsplus_btree_key *k1, 445 + const hfsplus_btree_key *k2); 446 + void hfsplus_cat_build_key(struct super_block *sb, hfsplus_btree_key *key, 447 + u32 parent, struct qstr *str); 454 448 void hfsplus_cat_set_perms(struct inode *inode, struct hfsplus_perm *perms); 449 + int 
hfsplus_find_cat(struct super_block *sb, u32 cnid, 450 + struct hfs_find_data *fd); 451 + int hfsplus_create_cat(u32 cnid, struct inode *dir, struct qstr *str, 452 + struct inode *inode); 453 + int hfsplus_delete_cat(u32 cnid, struct inode *dir, struct qstr *str); 454 + int hfsplus_rename_cat(u32 cnid, struct inode *src_dir, struct qstr *src_name, 455 + struct inode *dst_dir, struct qstr *dst_name); 455 456 456 457 /* dir.c */ 457 458 extern const struct inode_operations hfsplus_dir_inode_operations; 458 459 extern const struct file_operations hfsplus_dir_operations; 459 460 460 461 /* extents.c */ 461 - int hfsplus_ext_cmp_key(const hfsplus_btree_key *, const hfsplus_btree_key *); 462 - int hfsplus_ext_write_extent(struct inode *); 463 - int hfsplus_get_block(struct inode *, sector_t, struct buffer_head *, int); 464 - int hfsplus_free_fork(struct super_block *, u32, 465 - struct hfsplus_fork_raw *, int); 466 - int hfsplus_file_extend(struct inode *); 467 - void hfsplus_file_truncate(struct inode *); 462 + int hfsplus_ext_cmp_key(const hfsplus_btree_key *k1, 463 + const hfsplus_btree_key *k2); 464 + int hfsplus_ext_write_extent(struct inode *inode); 465 + int hfsplus_get_block(struct inode *inode, sector_t iblock, 466 + struct buffer_head *bh_result, int create); 467 + int hfsplus_free_fork(struct super_block *sb, u32 cnid, 468 + struct hfsplus_fork_raw *fork, int type); 469 + int hfsplus_file_extend(struct inode *inode, bool zeroout); 470 + void hfsplus_file_truncate(struct inode *inode); 468 471 469 472 /* inode.c */ 470 473 extern const struct address_space_operations hfsplus_aops; 471 474 extern const struct address_space_operations hfsplus_btree_aops; 472 475 extern const struct dentry_operations hfsplus_dentry_operations; 473 476 474 - void hfsplus_inode_read_fork(struct inode *, struct hfsplus_fork_raw *); 475 - void hfsplus_inode_write_fork(struct inode *, struct hfsplus_fork_raw *); 476 - int hfsplus_cat_read_inode(struct inode *, struct hfs_find_data *); 
477 - int hfsplus_cat_write_inode(struct inode *); 478 - struct inode *hfsplus_new_inode(struct super_block *, umode_t); 479 - void hfsplus_delete_inode(struct inode *); 477 + struct inode *hfsplus_new_inode(struct super_block *sb, umode_t mode); 478 + void hfsplus_delete_inode(struct inode *inode); 479 + void hfsplus_inode_read_fork(struct inode *inode, 480 + struct hfsplus_fork_raw *fork); 481 + void hfsplus_inode_write_fork(struct inode *inode, 482 + struct hfsplus_fork_raw *fork); 483 + int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd); 484 + int hfsplus_cat_write_inode(struct inode *inode); 480 485 int hfsplus_file_fsync(struct file *file, loff_t start, loff_t end, 481 486 int datasync); 482 487 ··· 489 484 long hfsplus_ioctl(struct file *filp, unsigned int cmd, unsigned long arg); 490 485 491 486 /* options.c */ 492 - int hfsplus_parse_options(char *, struct hfsplus_sb_info *); 487 + void hfsplus_fill_defaults(struct hfsplus_sb_info *opts); 493 488 int hfsplus_parse_options_remount(char *input, int *force); 494 - void hfsplus_fill_defaults(struct hfsplus_sb_info *); 495 - int hfsplus_show_options(struct seq_file *, struct dentry *); 489 + int hfsplus_parse_options(char *input, struct hfsplus_sb_info *sbi); 490 + int hfsplus_show_options(struct seq_file *seq, struct dentry *root); 491 + 492 + /* part_tbl.c */ 493 + int hfs_part_find(struct super_block *sb, sector_t *part_start, 494 + sector_t *part_size); 496 495 497 496 /* super.c */ 498 - struct inode *hfsplus_iget(struct super_block *, unsigned long); 497 + struct inode *hfsplus_iget(struct super_block *sb, unsigned long ino); 499 498 void hfsplus_mark_mdb_dirty(struct super_block *sb); 500 499 501 500 /* tables.c */ ··· 508 499 extern u16 hfsplus_compose_table[]; 509 500 510 501 /* unicode.c */ 511 - int hfsplus_strcasecmp(const struct hfsplus_unistr *, 512 - const struct hfsplus_unistr *); 513 - int hfsplus_strcmp(const struct hfsplus_unistr *, 514 - const struct hfsplus_unistr *); 
515 - int hfsplus_uni2asc(struct super_block *, 516 - const struct hfsplus_unistr *, char *, int *); 517 - int hfsplus_asc2uni(struct super_block *, 518 - struct hfsplus_unistr *, int, const char *, int); 502 + int hfsplus_strcasecmp(const struct hfsplus_unistr *s1, 503 + const struct hfsplus_unistr *s2); 504 + int hfsplus_strcmp(const struct hfsplus_unistr *s1, 505 + const struct hfsplus_unistr *s2); 506 + int hfsplus_uni2asc(struct super_block *sb, const struct hfsplus_unistr *ustr, 507 + char *astr, int *len_p); 508 + int hfsplus_asc2uni(struct super_block *sb, struct hfsplus_unistr *ustr, 509 + int max_unistr_len, const char *astr, int len); 519 510 int hfsplus_hash_dentry(const struct dentry *dentry, struct qstr *str); 520 - int hfsplus_compare_dentry(const struct dentry *parent, const struct dentry *dentry, 521 - unsigned int len, const char *str, const struct qstr *name); 511 + int hfsplus_compare_dentry(const struct dentry *parent, 512 + const struct dentry *dentry, unsigned int len, 513 + const char *str, const struct qstr *name); 522 514 523 515 /* wrapper.c */ 524 - int hfsplus_read_wrapper(struct super_block *); 525 - int hfs_part_find(struct super_block *, sector_t *, sector_t *); 526 - int hfsplus_submit_bio(struct super_block *sb, sector_t sector, 527 - void *buf, void **data, int rw); 516 + int hfsplus_submit_bio(struct super_block *sb, sector_t sector, void *buf, 517 + void **data, int rw); 518 + int hfsplus_read_wrapper(struct super_block *sb); 528 519 529 520 /* time macros */ 530 521 #define __hfsp_mt2ut(t) (be32_to_cpu(t) - 2082844800U)
+1
fs/hfsplus/hfsplus_raw.h
··· 144 144 #define HFSPLUS_VOL_NODEID_REUSED (1 << 12) 145 145 #define HFSPLUS_VOL_JOURNALED (1 << 13) 146 146 #define HFSPLUS_VOL_SOFTLOCK (1 << 15) 147 + #define HFSPLUS_VOL_UNUSED_NODE_FIX (1 << 31) 147 148 148 149 /* HFS+ BTree node descriptor */ 149 150 struct hfs_bnode_desc {
+4 -5
fs/hfsplus/options.c
··· 173 173 if (p) 174 174 sbi->nls = load_nls(p); 175 175 if (!sbi->nls) { 176 - pr_err("unable to load " 177 - "nls mapping \"%s\"\n", 178 - p); 176 + pr_err("unable to load nls mapping \"%s\"\n", 177 + p); 179 178 kfree(p); 180 179 return 0; 181 180 } ··· 231 232 if (sbi->nls) 232 233 seq_printf(seq, ",nls=%s", sbi->nls->charset); 233 234 if (test_bit(HFSPLUS_SB_NODECOMPOSE, &sbi->flags)) 234 - seq_printf(seq, ",nodecompose"); 235 + seq_puts(seq, ",nodecompose"); 235 236 if (test_bit(HFSPLUS_SB_NOBARRIER, &sbi->flags)) 236 - seq_printf(seq, ",nobarrier"); 237 + seq_puts(seq, ",nobarrier"); 237 238 return 0; 238 239 }
+2 -1
fs/hfsplus/super.c
··· 131 131 hfsplus_inode_write_fork(inode, fork); 132 132 if (tree) { 133 133 int err = hfs_btree_write(tree); 134 + 134 135 if (err) { 135 136 pr_err("b-tree write err: %d, ino %lu\n", 136 - err, inode->i_ino); 137 + err, inode->i_ino); 137 138 return err; 138 139 } 139 140 }
+4 -6
fs/hfsplus/wrapper.c
··· 24 24 u16 embed_count; 25 25 }; 26 26 27 - /* 28 - * hfsplus_submit_bio - Perfrom block I/O 27 + /** 28 + * hfsplus_submit_bio - Perform block I/O 29 29 * @sb: super block of volume for I/O 30 30 * @sector: block to read or write, for blocks of HFSPLUS_SECTOR_SIZE bytes 31 31 * @buf: buffer for I/O ··· 231 231 if (blocksize < HFSPLUS_SECTOR_SIZE || ((blocksize - 1) & blocksize)) 232 232 goto out_free_backup_vhdr; 233 233 sbi->alloc_blksz = blocksize; 234 - sbi->alloc_blksz_shift = 0; 235 - while ((blocksize >>= 1) != 0) 236 - sbi->alloc_blksz_shift++; 237 - blocksize = min(sbi->alloc_blksz, (u32)PAGE_SIZE); 234 + sbi->alloc_blksz_shift = ilog2(blocksize); 235 + blocksize = min_t(u32, sbi->alloc_blksz, PAGE_SIZE); 238 236 239 237 /* 240 238 * Align block size to block offset.
+37 -20
fs/hfsplus/xattr.c
··· 8 8 9 9 #include "hfsplus_fs.h" 10 10 #include <linux/posix_acl_xattr.h> 11 + #include <linux/nls.h> 11 12 #include "xattr.h" 12 13 #include "acl.h" 13 14 ··· 67 66 char *bmp; 68 67 u32 used_nodes; 69 68 u32 used_bmp_bytes; 70 - loff_t tmp; 69 + u64 tmp; 71 70 72 71 hfs_dbg(ATTR_MOD, "init_hdr_attr_file: clump %u, node_size %u\n", 73 - clump_size, node_size); 72 + clump_size, node_size); 74 73 75 74 /* The end of the node contains list of record offsets */ 76 75 rec_offsets = (__be16 *)(buf + node_size); ··· 196 195 } 197 196 198 197 while (hip->alloc_blocks < hip->clump_blocks) { 199 - err = hfsplus_file_extend(attr_file); 198 + err = hfsplus_file_extend(attr_file, false); 200 199 if (unlikely(err)) { 201 200 pr_err("failed to extend attributes file\n"); 202 201 goto end_attr_file_creation; ··· 646 645 struct hfs_find_data fd; 647 646 u16 key_len = 0; 648 647 struct hfsplus_attr_key attr_key; 649 - char strbuf[HFSPLUS_ATTR_MAX_STRLEN + 650 - XATTR_MAC_OSX_PREFIX_LEN + 1] = {0}; 648 + char *strbuf; 651 649 int xattr_name_len; 652 650 653 651 if ((!S_ISREG(inode->i_mode) && ··· 664 664 if (err) { 665 665 pr_err("can't init xattr find struct\n"); 666 666 return err; 667 + } 668 + 669 + strbuf = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 670 + XATTR_MAC_OSX_PREFIX_LEN + 1, GFP_KERNEL); 671 + if (!strbuf) { 672 + res = -ENOMEM; 673 + goto out; 667 674 } 668 675 669 676 err = hfsplus_find_attr(inode->i_sb, inode->i_ino, NULL, &fd); ··· 699 692 if (be32_to_cpu(attr_key.cnid) != inode->i_ino) 700 693 goto end_listxattr; 701 694 702 - xattr_name_len = HFSPLUS_ATTR_MAX_STRLEN; 695 + xattr_name_len = NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN; 703 696 if (hfsplus_uni2asc(inode->i_sb, 704 697 (const struct hfsplus_unistr *)&fd.key->attr.key_name, 705 698 strbuf, &xattr_name_len)) { ··· 725 718 } 726 719 727 720 end_listxattr: 721 + kfree(strbuf); 722 + out: 728 723 hfs_find_exit(&fd); 729 724 return res; 730 725 } ··· 806 797 static int 
hfsplus_osx_getxattr(struct dentry *dentry, const char *name, 807 798 void *buffer, size_t size, int type) 808 799 { 809 - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 810 - XATTR_MAC_OSX_PREFIX_LEN + 1] = {0}; 811 - size_t len = strlen(name); 800 + char *xattr_name; 801 + int res; 812 802 813 803 if (!strcmp(name, "")) 814 804 return -EINVAL; 815 - 816 - if (len > HFSPLUS_ATTR_MAX_STRLEN) 817 - return -EOPNOTSUPP; 818 805 819 806 /* 820 807 * Don't allow retrieving properly prefixed attributes ··· 818 813 */ 819 814 if (is_known_namespace(name)) 820 815 return -EOPNOTSUPP; 816 + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN 817 + + XATTR_MAC_OSX_PREFIX_LEN + 1, GFP_KERNEL); 818 + if (!xattr_name) 819 + return -ENOMEM; 820 + strcpy(xattr_name, XATTR_MAC_OSX_PREFIX); 821 + strcpy(xattr_name + XATTR_MAC_OSX_PREFIX_LEN, name); 821 822 822 - return hfsplus_getxattr(dentry, xattr_name, buffer, size); 823 + res = hfsplus_getxattr(dentry, xattr_name, buffer, size); 824 + kfree(xattr_name); 825 + return res; 823 826 } 824 827 825 828 static int hfsplus_osx_setxattr(struct dentry *dentry, const char *name, 826 829 const void *buffer, size_t size, int flags, int type) 827 830 { 828 - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 829 - XATTR_MAC_OSX_PREFIX_LEN + 1] = {0}; 830 - size_t len = strlen(name); 831 + char *xattr_name; 832 + int res; 831 833 832 834 if (!strcmp(name, "")) 833 835 return -EINVAL; 834 - 835 - if (len > HFSPLUS_ATTR_MAX_STRLEN) 836 - return -EOPNOTSUPP; 837 836 838 837 /* 839 838 * Don't allow setting properly prefixed attributes ··· 845 836 */ 846 837 if (is_known_namespace(name)) 847 838 return -EOPNOTSUPP; 839 + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN 840 + + XATTR_MAC_OSX_PREFIX_LEN + 1, GFP_KERNEL); 841 + if (!xattr_name) 842 + return -ENOMEM; 843 + strcpy(xattr_name, XATTR_MAC_OSX_PREFIX); 844 + strcpy(xattr_name + XATTR_MAC_OSX_PREFIX_LEN, name); 848 845 849 - return hfsplus_setxattr(dentry, xattr_name, 
buffer, size, flags); 846 + res = hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); 847 + kfree(xattr_name); 848 + return res; 850 849 } 851 850 852 851 static size_t hfsplus_osx_listxattr(struct dentry *dentry, char *list,
+28 -21
fs/hfsplus/xattr_security.c
··· 7 7 */ 8 8 9 9 #include <linux/security.h> 10 + #include <linux/nls.h> 11 + 10 12 #include "hfsplus_fs.h" 11 13 #include "xattr.h" 12 14 #include "acl.h" ··· 16 14 static int hfsplus_security_getxattr(struct dentry *dentry, const char *name, 17 15 void *buffer, size_t size, int type) 18 16 { 19 - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 1] = {0}; 20 - size_t len = strlen(name); 17 + char *xattr_name; 18 + int res; 21 19 22 20 if (!strcmp(name, "")) 23 21 return -EINVAL; 24 22 25 - if (len + XATTR_SECURITY_PREFIX_LEN > HFSPLUS_ATTR_MAX_STRLEN) 26 - return -EOPNOTSUPP; 27 - 23 + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 1, 24 + GFP_KERNEL); 25 + if (!xattr_name) 26 + return -ENOMEM; 28 27 strcpy(xattr_name, XATTR_SECURITY_PREFIX); 29 28 strcpy(xattr_name + XATTR_SECURITY_PREFIX_LEN, name); 30 29 31 - return hfsplus_getxattr(dentry, xattr_name, buffer, size); 30 + res = hfsplus_getxattr(dentry, xattr_name, buffer, size); 31 + kfree(xattr_name); 32 + return res; 32 33 } 33 34 34 35 static int hfsplus_security_setxattr(struct dentry *dentry, const char *name, 35 36 const void *buffer, size_t size, int flags, int type) 36 37 { 37 - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 1] = {0}; 38 - size_t len = strlen(name); 38 + char *xattr_name; 39 + int res; 39 40 40 41 if (!strcmp(name, "")) 41 42 return -EINVAL; 42 43 43 - if (len + XATTR_SECURITY_PREFIX_LEN > HFSPLUS_ATTR_MAX_STRLEN) 44 - return -EOPNOTSUPP; 45 - 44 + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 1, 45 + GFP_KERNEL); 46 + if (!xattr_name) 47 + return -ENOMEM; 46 48 strcpy(xattr_name, XATTR_SECURITY_PREFIX); 47 49 strcpy(xattr_name + XATTR_SECURITY_PREFIX_LEN, name); 48 50 49 - return hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); 51 + res = hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); 52 + kfree(xattr_name); 53 + return res; 50 54 } 51 55 52 56 static size_t hfsplus_security_listxattr(struct dentry *dentry, char *list, ··· 70 
62 void *fs_info) 71 63 { 72 64 const struct xattr *xattr; 73 - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 1] = {0}; 74 - size_t xattr_name_len; 65 + char *xattr_name; 75 66 int err = 0; 76 67 68 + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 1, 69 + GFP_KERNEL); 70 + if (!xattr_name) 71 + return -ENOMEM; 77 72 for (xattr = xattr_array; xattr->name != NULL; xattr++) { 78 - xattr_name_len = strlen(xattr->name); 79 73 80 - if (xattr_name_len == 0) 74 + if (!strcmp(xattr->name, "")) 81 75 continue; 82 - 83 - if (xattr_name_len + XATTR_SECURITY_PREFIX_LEN > 84 - HFSPLUS_ATTR_MAX_STRLEN) 85 - return -EOPNOTSUPP; 86 76 87 77 strcpy(xattr_name, XATTR_SECURITY_PREFIX); 88 78 strcpy(xattr_name + 89 79 XATTR_SECURITY_PREFIX_LEN, xattr->name); 90 80 memset(xattr_name + 91 - XATTR_SECURITY_PREFIX_LEN + xattr_name_len, 0, 1); 81 + XATTR_SECURITY_PREFIX_LEN + strlen(xattr->name), 0, 1); 92 82 93 83 err = __hfsplus_setxattr(inode, xattr_name, 94 84 xattr->value, xattr->value_len, 0); 95 85 if (err) 96 86 break; 97 87 } 88 + kfree(xattr_name); 98 89 return err; 99 90 } 100 91
+20 -12
fs/hfsplus/xattr_trusted.c
··· 6 6 * Handler for trusted extended attributes. 7 7 */ 8 8 9 + #include <linux/nls.h> 10 + 9 11 #include "hfsplus_fs.h" 10 12 #include "xattr.h" 11 13 12 14 static int hfsplus_trusted_getxattr(struct dentry *dentry, const char *name, 13 15 void *buffer, size_t size, int type) 14 16 { 15 - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 1] = {0}; 16 - size_t len = strlen(name); 17 + char *xattr_name; 18 + int res; 17 19 18 20 if (!strcmp(name, "")) 19 21 return -EINVAL; 20 22 21 - if (len + XATTR_TRUSTED_PREFIX_LEN > HFSPLUS_ATTR_MAX_STRLEN) 22 - return -EOPNOTSUPP; 23 - 23 + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 1, 24 + GFP_KERNEL); 25 + if (!xattr_name) 26 + return -ENOMEM; 24 27 strcpy(xattr_name, XATTR_TRUSTED_PREFIX); 25 28 strcpy(xattr_name + XATTR_TRUSTED_PREFIX_LEN, name); 26 29 27 - return hfsplus_getxattr(dentry, xattr_name, buffer, size); 30 + res = hfsplus_getxattr(dentry, xattr_name, buffer, size); 31 + kfree(xattr_name); 32 + return res; 28 33 } 29 34 30 35 static int hfsplus_trusted_setxattr(struct dentry *dentry, const char *name, 31 36 const void *buffer, size_t size, int flags, int type) 32 37 { 33 - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 1] = {0}; 34 - size_t len = strlen(name); 38 + char *xattr_name; 39 + int res; 35 40 36 41 if (!strcmp(name, "")) 37 42 return -EINVAL; 38 43 39 - if (len + XATTR_TRUSTED_PREFIX_LEN > HFSPLUS_ATTR_MAX_STRLEN) 40 - return -EOPNOTSUPP; 41 - 44 + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 1, 45 + GFP_KERNEL); 46 + if (!xattr_name) 47 + return -ENOMEM; 42 48 strcpy(xattr_name, XATTR_TRUSTED_PREFIX); 43 49 strcpy(xattr_name + XATTR_TRUSTED_PREFIX_LEN, name); 44 50 45 - return hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); 51 + res = hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); 52 + kfree(xattr_name); 53 + return res; 46 54 } 47 55 48 56 static size_t hfsplus_trusted_listxattr(struct dentry *dentry, char *list,
+20 -12
fs/hfsplus/xattr_user.c
··· 6 6 * Handler for user extended attributes. 7 7 */ 8 8 9 + #include <linux/nls.h> 10 + 9 11 #include "hfsplus_fs.h" 10 12 #include "xattr.h" 11 13 12 14 static int hfsplus_user_getxattr(struct dentry *dentry, const char *name, 13 15 void *buffer, size_t size, int type) 14 16 { 15 - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 1] = {0}; 16 - size_t len = strlen(name); 17 + char *xattr_name; 18 + int res; 17 19 18 20 if (!strcmp(name, "")) 19 21 return -EINVAL; 20 22 21 - if (len + XATTR_USER_PREFIX_LEN > HFSPLUS_ATTR_MAX_STRLEN) 22 - return -EOPNOTSUPP; 23 - 23 + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 1, 24 + GFP_KERNEL); 25 + if (!xattr_name) 26 + return -ENOMEM; 24 27 strcpy(xattr_name, XATTR_USER_PREFIX); 25 28 strcpy(xattr_name + XATTR_USER_PREFIX_LEN, name); 26 29 27 - return hfsplus_getxattr(dentry, xattr_name, buffer, size); 30 + res = hfsplus_getxattr(dentry, xattr_name, buffer, size); 31 + kfree(xattr_name); 32 + return res; 28 33 } 29 34 30 35 static int hfsplus_user_setxattr(struct dentry *dentry, const char *name, 31 36 const void *buffer, size_t size, int flags, int type) 32 37 { 33 - char xattr_name[HFSPLUS_ATTR_MAX_STRLEN + 1] = {0}; 34 - size_t len = strlen(name); 38 + char *xattr_name; 39 + int res; 35 40 36 41 if (!strcmp(name, "")) 37 42 return -EINVAL; 38 43 39 - if (len + XATTR_USER_PREFIX_LEN > HFSPLUS_ATTR_MAX_STRLEN) 40 - return -EOPNOTSUPP; 41 - 44 + xattr_name = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_ATTR_MAX_STRLEN + 1, 45 + GFP_KERNEL); 46 + if (!xattr_name) 47 + return -ENOMEM; 42 48 strcpy(xattr_name, XATTR_USER_PREFIX); 43 49 strcpy(xattr_name + XATTR_USER_PREFIX_LEN, name); 44 50 45 - return hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); 51 + res = hfsplus_setxattr(dentry, xattr_name, buffer, size, flags); 52 + kfree(xattr_name); 53 + return res; 46 54 } 47 55 48 56 static size_t hfsplus_user_listxattr(struct dentry *dentry, char *list,
+1 -1
fs/hpfs/alloc.c
··· 316 316 struct quad_buffer_head qbh; 317 317 __le32 *bmp; 318 318 struct hpfs_sb_info *sbi = hpfs_sb(s); 319 - /*printk("2 - ");*/ 319 + /*pr_info("2 - ");*/ 320 320 if (!n) return; 321 321 if (sec < 0x12) { 322 322 hpfs_error(s, "Trying to free reserved sector %08x", sec);
+6 -6
fs/hpfs/buffer.c
··· 55 55 if (bh != NULL) 56 56 return bh->b_data; 57 57 else { 58 - printk("HPFS: hpfs_map_sector: read error\n"); 58 + pr_err("%s(): read error\n", __func__); 59 59 return NULL; 60 60 } 61 61 } ··· 76 76 set_buffer_uptodate(bh); 77 77 return bh->b_data; 78 78 } else { 79 - printk("HPFS: hpfs_get_sector: getblk failed\n"); 79 + pr_err("%s(): getblk failed\n", __func__); 80 80 return NULL; 81 81 } 82 82 } ··· 93 93 cond_resched(); 94 94 95 95 if (secno & 3) { 96 - printk("HPFS: hpfs_map_4sectors: unaligned read\n"); 96 + pr_err("%s(): unaligned read\n", __func__); 97 97 return NULL; 98 98 } 99 99 ··· 112 112 113 113 qbh->data = data = kmalloc(2048, GFP_NOFS); 114 114 if (!data) { 115 - printk("HPFS: hpfs_map_4sectors: out of memory\n"); 115 + pr_err("%s(): out of memory\n", __func__); 116 116 goto bail4; 117 117 } 118 118 ··· 145 145 hpfs_lock_assert(s); 146 146 147 147 if (secno & 3) { 148 - printk("HPFS: hpfs_get_4sectors: unaligned read\n"); 148 + pr_err("%s(): unaligned read\n", __func__); 149 149 return NULL; 150 150 } 151 151 ··· 161 161 } 162 162 163 163 if (!(qbh->data = kmalloc(2048, GFP_NOFS))) { 164 - printk("HPFS: hpfs_get_4sectors: out of memory\n"); 164 + pr_err("%s(): out of memory\n", __func__); 165 165 goto bail4; 166 166 } 167 167 return qbh->data;
+3 -3
fs/hpfs/dir.c
··· 36 36 mutex_lock(&i->i_mutex); 37 37 hpfs_lock(s); 38 38 39 - /*printk("dir lseek\n");*/ 39 + /*pr_info("dir lseek\n");*/ 40 40 if (new_off == 0 || new_off == 1 || new_off == 11 || new_off == 12 || new_off == 13) goto ok; 41 41 pos = ((loff_t) hpfs_de_as_down_as_possible(s, hpfs_inode->i_dno) << 4) + 1; 42 42 while (pos != new_off) { ··· 51 51 mutex_unlock(&i->i_mutex); 52 52 return new_off; 53 53 fail: 54 - /*printk("illegal lseek: %016llx\n", new_off);*/ 54 + /*pr_warn("illegal lseek: %016llx\n", new_off);*/ 55 55 hpfs_unlock(s); 56 56 mutex_unlock(&i->i_mutex); 57 57 return -ESPIPE; ··· 127 127 if (ctx->pos == 12) 128 128 goto out; 129 129 if (ctx->pos == 3 || ctx->pos == 4 || ctx->pos == 5) { 130 - printk("HPFS: warning: pos==%d\n",(int)ctx->pos); 130 + pr_err("pos==%d\n", (int)ctx->pos); 131 131 goto out; 132 132 } 133 133 if (ctx->pos == 0) {
+26 -18
fs/hpfs/dnode.c
··· 17 17 if (de == fde) return ((loff_t) le32_to_cpu(d->self) << 4) | (loff_t)i; 18 18 i++; 19 19 } 20 - printk("HPFS: get_pos: not_found\n"); 20 + pr_info("%s(): not_found\n", __func__); 21 21 return ((loff_t)le32_to_cpu(d->self) << 4) | (loff_t)1; 22 22 } 23 23 ··· 32 32 if (hpfs_inode->i_rddir_off[i] == pos) return; 33 33 if (!(i&0x0f)) { 34 34 if (!(ppos = kmalloc((i+0x11) * sizeof(loff_t*), GFP_NOFS))) { 35 - printk("HPFS: out of memory for position list\n"); 35 + pr_err("out of memory for position list\n"); 36 36 return; 37 37 } 38 38 if (hpfs_inode->i_rddir_off) { ··· 63 63 } 64 64 return; 65 65 not_f: 66 - /*printk("HPFS: warning: position pointer %p->%08x not found\n", pos, (int)*pos);*/ 66 + /*pr_warn("position pointer %p->%08x not found\n", 67 + pos, (int)*pos);*/ 67 68 return; 68 69 } 69 70 ··· 93 92 { 94 93 if ((*p & ~0x3f) == (d & ~0x3f) && (*p & 0x3f) >= (d & 0x3f)) { 95 94 int n = (*p & 0x3f) + c; 96 - if (n > 0x3f) printk("HPFS: hpfs_pos_ins: %08x + %d\n", (int)*p, (int)c >> 8); 97 - else *p = (*p & ~0x3f) | n; 95 + if (n > 0x3f) 96 + pr_err("%s(): %08x + %d\n", 97 + __func__, (int)*p, (int)c >> 8); 98 + else 99 + *p = (*p & ~0x3f) | n; 98 100 } 99 101 } 100 102 ··· 105 101 { 106 102 if ((*p & ~0x3f) == (d & ~0x3f) && (*p & 0x3f) >= (d & 0x3f)) { 107 103 int n = (*p & 0x3f) - c; 108 - if (n < 1) printk("HPFS: hpfs_pos_ins: %08x - %d\n", (int)*p, (int)c >> 8); 109 - else *p = (*p & ~0x3f) | n; 104 + if (n < 1) 105 + pr_err("%s(): %08x - %d\n", 106 + __func__, (int)*p, (int)c >> 8); 107 + else 108 + *p = (*p & ~0x3f) | n; 110 109 } 111 110 } 112 111 ··· 246 239 struct fnode *fnode; 247 240 int c1, c2 = 0; 248 241 if (!(nname = kmalloc(256, GFP_NOFS))) { 249 - printk("HPFS: out of memory, can't add to dnode\n"); 242 + pr_err("out of memory, can't add to dnode\n"); 250 243 return 1; 251 244 } 252 245 go_up: 253 246 if (namelen >= 256) { 254 - hpfs_error(i->i_sb, "hpfs_add_to_dnode: namelen == %d", namelen); 247 + hpfs_error(i->i_sb, "%s(): namelen == 
%d", __func__, namelen); 255 248 kfree(nd); 256 249 kfree(nname); 257 250 return 1; ··· 288 281 not be any error while splitting dnodes, otherwise the 289 282 whole directory, not only file we're adding, would 290 283 be lost. */ 291 - printk("HPFS: out of memory for dnode splitting\n"); 284 + pr_err("out of memory for dnode splitting\n"); 292 285 hpfs_brelse4(&qbh); 293 286 kfree(nname); 294 287 return 1; ··· 604 597 if (!de_next->down) goto endm; 605 598 ndown = de_down_pointer(de_next); 606 599 if (!(de_cp = kmalloc(le16_to_cpu(de->length), GFP_NOFS))) { 607 - printk("HPFS: out of memory for dtree balancing\n"); 600 + pr_err("out of memory for dtree balancing\n"); 608 601 goto endm; 609 602 } 610 603 memcpy(de_cp, de, le16_to_cpu(de->length)); ··· 619 612 hpfs_brelse4(&qbh1); 620 613 } 621 614 hpfs_add_to_dnode(i, ndown, de_cp->name, de_cp->namelen, de_cp, de_cp->down ? de_down_pointer(de_cp) : 0); 622 - /*printk("UP-TO-DNODE: %08x (ndown = %08x, down = %08x, dno = %08x)\n", up, ndown, down, dno);*/ 615 + /*pr_info("UP-TO-DNODE: %08x (ndown = %08x, down = %08x, dno = %08x)\n", 616 + up, ndown, down, dno);*/ 623 617 dno = up; 624 618 kfree(de_cp); 625 619 goto try_it_again; ··· 645 637 if (!dlp && down) { 646 638 if (le32_to_cpu(d1->first_free) > 2044) { 647 639 if (hpfs_sb(i->i_sb)->sb_chk >= 2) { 648 - printk("HPFS: warning: unbalanced dnode tree, see hpfs.txt 4 more info\n"); 649 - printk("HPFS: warning: terminating balancing operation\n"); 640 + pr_err("unbalanced dnode tree, see hpfs.txt 4 more info\n"); 641 + pr_err("terminating balancing operation\n"); 650 642 } 651 643 hpfs_brelse4(&qbh1); 652 644 goto endm; 653 645 } 654 646 if (hpfs_sb(i->i_sb)->sb_chk >= 2) { 655 - printk("HPFS: warning: unbalanced dnode tree, see hpfs.txt 4 more info\n"); 656 - printk("HPFS: warning: goin'on\n"); 647 + pr_err("unbalanced dnode tree, see hpfs.txt 4 more info\n"); 648 + pr_err("goin'on\n"); 657 649 } 658 650 le16_add_cpu(&del->length, 4); 659 651 del->down = 1; ··· 667 
659 *(__le32 *) ((void *) del + le16_to_cpu(del->length) - 4) = cpu_to_le32(down); 668 660 } else goto endm; 669 661 if (!(de_cp = kmalloc(le16_to_cpu(de_prev->length), GFP_NOFS))) { 670 - printk("HPFS: out of memory for dtree balancing\n"); 662 + pr_err("out of memory for dtree balancing\n"); 671 663 hpfs_brelse4(&qbh1); 672 664 goto endm; 673 665 } ··· 1008 1000 int d1, d2 = 0; 1009 1001 name1 = f->name; 1010 1002 if (!(name2 = kmalloc(256, GFP_NOFS))) { 1011 - printk("HPFS: out of memory, can't map dirent\n"); 1003 + pr_err("out of memory, can't map dirent\n"); 1012 1004 return NULL; 1013 1005 } 1014 1006 if (f->len <= 15)
+3 -3
fs/hpfs/ea.c
··· 51 51 { 52 52 char *ret; 53 53 if (!(ret = kmalloc(size + 1, GFP_NOFS))) { 54 - printk("HPFS: out of memory for EA\n"); 54 + pr_err("out of memory for EA\n"); 55 55 return NULL; 56 56 } 57 57 if (hpfs_ea_read(s, a, ano, 0, size, ret)) { ··· 139 139 if (ea_indirect(ea)) 140 140 return get_indirect_ea(s, ea_in_anode(ea), ea_sec(ea), *size = ea_len(ea)); 141 141 if (!(ret = kmalloc((*size = ea_valuelen(ea)) + 1, GFP_NOFS))) { 142 - printk("HPFS: out of memory for EA\n"); 142 + pr_err("out of memory for EA\n"); 143 143 return NULL; 144 144 } 145 145 memcpy(ret, ea_data(ea), ea_valuelen(ea)); ··· 165 165 if (ea_indirect(ea)) 166 166 return get_indirect_ea(s, ea_in_anode(ea), ea_sec(ea), *size = ea_len(ea)); 167 167 if (!(ret = kmalloc((*size = ea_valuelen(ea)) + 1, GFP_NOFS))) { 168 - printk("HPFS: out of memory for EA\n"); 168 + pr_err("out of memory for EA\n"); 169 169 return NULL; 170 170 } 171 171 if (hpfs_ea_read(s, a, ano, pos + 4 + ea->namelen + 1, ea_valuelen(ea), ret)) {
+5
fs/hpfs/hpfs_fn.h
··· 8 8 9 9 //#define DBG 10 10 //#define DEBUG_LOCKS 11 + #ifdef pr_fmt 12 + #undef pr_fmt 13 + #endif 14 + 15 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 11 16 12 17 #include <linux/mutex.h> 13 18 #include <linux/pagemap.h>
+2 -1
fs/hpfs/inode.c
··· 183 183 struct inode *parent; 184 184 if (i->i_ino == hpfs_sb(i->i_sb)->sb_root) return; 185 185 if (hpfs_inode->i_rddir_off && !atomic_read(&i->i_count)) { 186 - if (*hpfs_inode->i_rddir_off) printk("HPFS: write_inode: some position still there\n"); 186 + if (*hpfs_inode->i_rddir_off) 187 + pr_err("write_inode: some position still there\n"); 187 188 kfree(hpfs_inode->i_rddir_off); 188 189 hpfs_inode->i_rddir_off = NULL; 189 190 }
+10 -7
fs/hpfs/map.c
··· 65 65 struct code_page_directory *cp = hpfs_map_sector(s, cps, &bh, 0); 66 66 if (!cp) return NULL; 67 67 if (le32_to_cpu(cp->magic) != CP_DIR_MAGIC) { 68 - printk("HPFS: Code page directory magic doesn't match (magic = %08x)\n", le32_to_cpu(cp->magic)); 68 + pr_err("Code page directory magic doesn't match (magic = %08x)\n", 69 + le32_to_cpu(cp->magic)); 69 70 brelse(bh); 70 71 return NULL; 71 72 } 72 73 if (!le32_to_cpu(cp->n_code_pages)) { 73 - printk("HPFS: n_code_pages == 0\n"); 74 + pr_err("n_code_pages == 0\n"); 74 75 brelse(bh); 75 76 return NULL; 76 77 } ··· 80 79 brelse(bh); 81 80 82 81 if (cpi >= 3) { 83 - printk("HPFS: Code page index out of array\n"); 82 + pr_err("Code page index out of array\n"); 84 83 return NULL; 85 84 } 86 85 87 86 if (!(cpd = hpfs_map_sector(s, cpds, &bh, 0))) return NULL; 88 87 if (le16_to_cpu(cpd->offs[cpi]) > 0x178) { 89 - printk("HPFS: Code page index out of sector\n"); 88 + pr_err("Code page index out of sector\n"); 90 89 brelse(bh); 91 90 return NULL; 92 91 } 93 92 ptr = (unsigned char *)cpd + le16_to_cpu(cpd->offs[cpi]) + 6; 94 93 if (!(cp_table = kmalloc(256, GFP_KERNEL))) { 95 - printk("HPFS: out of memory for code page table\n"); 94 + pr_err("out of memory for code page table\n"); 96 95 brelse(bh); 97 96 return NULL; 98 97 } ··· 115 114 int i; 116 115 __le32 *b; 117 116 if (!(b = kmalloc(n * 512, GFP_KERNEL))) { 118 - printk("HPFS: can't allocate memory for bitmap directory\n"); 117 + pr_err("can't allocate memory for bitmap directory\n"); 119 118 return NULL; 120 119 } 121 120 for (i=0;i<n;i++) { ··· 282 281 hpfs_error(s, "dnode %08x does not end with \\377 entry", secno); 283 282 goto bail; 284 283 } 285 - if (b == 3) printk("HPFS: warning: unbalanced dnode tree, dnode %08x; see hpfs.txt 4 more info\n", secno); 284 + if (b == 3) 285 + pr_err("unbalanced dnode tree, dnode %08x; see hpfs.txt 4 more info\n", 286 + secno); 286 287 } 287 288 return dnode; 288 289 bail:
+6 -5
fs/hpfs/name.c
··· 56 56 unsigned char *to; 57 57 int i; 58 58 if (hpfs_sb(s)->sb_chk >= 2) if (hpfs_is_name_long(from, len) != lng) { 59 - printk("HPFS: Long name flag mismatch - name "); 60 - for (i=0; i<len; i++) printk("%c", from[i]); 61 - printk(" misidentified as %s.\n", lng ? "short" : "long"); 62 - printk("HPFS: It's nothing serious. It could happen because of bug in OS/2.\nHPFS: Set checks=normal to disable this message.\n"); 59 + pr_err("Long name flag mismatch - name "); 60 + for (i = 0; i < len; i++) 61 + pr_cont("%c", from[i]); 62 + pr_cont(" misidentified as %s.\n", lng ? "short" : "long"); 63 + pr_err("It's nothing serious. It could happen because of bug in OS/2.\nSet checks=normal to disable this message.\n"); 63 64 } 64 65 if (!lc) return from; 65 66 if (!(to = kmalloc(len, GFP_KERNEL))) { 66 - printk("HPFS: can't allocate memory for name conversion buffer\n"); 67 + pr_err("can't allocate memory for name conversion buffer\n"); 67 68 return from; 68 69 } 69 70 for (i = 0; i < len; i++) to[i] = locase(hpfs_sb(s)->sb_cp_table,from[i]);
+1 -1
fs/hpfs/namei.c
··· 404 404 d_rehash(dentry); 405 405 } else { 406 406 struct iattr newattrs; 407 - /*printk("HPFS: truncating file before delete.\n");*/ 407 + /*pr_info("truncating file before delete.\n");*/ 408 408 newattrs.ia_size = 0; 409 409 newattrs.ia_valid = ATTR_SIZE | ATTR_CTIME; 410 410 err = notify_change(dentry, &newattrs, NULL);
+32 -23
fs/hpfs/super.c
··· 62 62 vsnprintf(err_buf, sizeof(err_buf), fmt, args); 63 63 va_end(args); 64 64 65 - printk("HPFS: filesystem error: %s", err_buf); 65 + pr_err("filesystem error: %s", err_buf); 66 66 if (!hpfs_sb(s)->sb_was_error) { 67 67 if (hpfs_sb(s)->sb_err == 2) { 68 - printk("; crashing the system because you wanted it\n"); 68 + pr_cont("; crashing the system because you wanted it\n"); 69 69 mark_dirty(s, 0); 70 70 panic("HPFS panic"); 71 71 } else if (hpfs_sb(s)->sb_err == 1) { 72 - if (s->s_flags & MS_RDONLY) printk("; already mounted read-only\n"); 72 + if (s->s_flags & MS_RDONLY) 73 + pr_cont("; already mounted read-only\n"); 73 74 else { 74 - printk("; remounting read-only\n"); 75 + pr_cont("; remounting read-only\n"); 75 76 mark_dirty(s, 0); 76 77 s->s_flags |= MS_RDONLY; 77 78 } 78 - } else if (s->s_flags & MS_RDONLY) printk("; going on - but anything won't be destroyed because it's read-only\n"); 79 - else printk("; corrupted filesystem mounted read/write - your computer will explode within 20 seconds ... but you wanted it so!\n"); 80 - } else printk("\n"); 79 + } else if (s->s_flags & MS_RDONLY) 80 + pr_cont("; going on - but anything won't be destroyed because it's read-only\n"); 81 + else 82 + pr_cont("; corrupted filesystem mounted read/write - your computer will explode within 20 seconds ... 
but you wanted it so!\n"); 83 + } else 84 + pr_cont("\n"); 81 85 hpfs_sb(s)->sb_was_error = 1; 82 86 } 83 87 ··· 296 292 if (!opts) 297 293 return 1; 298 294 299 - /*printk("Parsing opts: '%s'\n",opts);*/ 295 + /*pr_info("Parsing opts: '%s'\n",opts);*/ 300 296 301 297 while ((p = strsep(&opts, ",")) != NULL) { 302 298 substring_t args[MAX_OPT_ARGS]; ··· 391 387 392 388 static inline void hpfs_help(void) 393 389 { 394 - printk("\n\ 390 + pr_info("\n\ 395 391 HPFS filesystem options:\n\ 396 392 help do not mount and display this text\n\ 397 393 uid=xxx set uid of files that don't have uid specified in eas\n\ ··· 438 434 439 435 if (!(o = parse_opts(data, &uid, &gid, &umask, &lowercase, 440 436 &eas, &chk, &errs, &chkdsk, &timeshift))) { 441 - printk("HPFS: bad mount options.\n"); 437 + pr_err("bad mount options.\n"); 442 438 goto out_err; 443 439 } 444 440 if (o == 2) { ··· 446 442 goto out_err; 447 443 } 448 444 if (timeshift != sbi->sb_timeshift) { 449 - printk("HPFS: timeshift can't be changed using remount.\n"); 445 + pr_err("timeshift can't be changed using remount.\n"); 450 446 goto out_err; 451 447 } 452 448 ··· 527 523 528 524 if (!(o = parse_opts(options, &uid, &gid, &umask, &lowercase, 529 525 &eas, &chk, &errs, &chkdsk, &timeshift))) { 530 - printk("HPFS: bad mount options.\n"); 526 + pr_err("bad mount options.\n"); 531 527 goto bail0; 532 528 } 533 529 if (o==2) { ··· 546 542 if (/*le16_to_cpu(bootblock->magic) != BB_MAGIC 547 543 ||*/ le32_to_cpu(superblock->magic) != SB_MAGIC 548 544 || le32_to_cpu(spareblock->magic) != SP_MAGIC) { 549 - if (!silent) printk("HPFS: Bad magic ... probably not HPFS\n"); 545 + if (!silent) 546 + pr_err("Bad magic ... probably not HPFS\n"); 550 547 goto bail4; 551 548 } 552 549 553 550 /* Check version */ 554 551 if (!(s->s_flags & MS_RDONLY) && 555 552 superblock->funcversion != 2 && superblock->funcversion != 3) { 556 - printk("HPFS: Bad version %d,%d. Mount readonly to go around\n", 553 + pr_err("Bad version %d,%d. 
Mount readonly to go around\n", 557 554 (int)superblock->version, (int)superblock->funcversion); 558 - printk("HPFS: please try recent version of HPFS driver at http://artax.karlin.mff.cuni.cz/~mikulas/vyplody/hpfs/index-e.cgi and if it still can't understand this format, contact author - mikulas@artax.karlin.mff.cuni.cz\n"); 555 + pr_err("please try recent version of HPFS driver at http://artax.karlin.mff.cuni.cz/~mikulas/vyplody/hpfs/index-e.cgi and if it still can't understand this format, contact author - mikulas@artax.karlin.mff.cuni.cz\n"); 559 556 goto bail4; 560 557 } 561 558 ··· 602 597 /* Check for general fs errors*/ 603 598 if (spareblock->dirty && !spareblock->old_wrote) { 604 599 if (errs == 2) { 605 - printk("HPFS: Improperly stopped, not mounted\n"); 600 + pr_err("Improperly stopped, not mounted\n"); 606 601 goto bail4; 607 602 } 608 603 hpfs_error(s, "improperly stopped"); ··· 616 611 617 612 if (spareblock->hotfixes_used || spareblock->n_spares_used) { 618 613 if (errs >= 2) { 619 - printk("HPFS: Hotfixes not supported here, try chkdsk\n"); 614 + pr_err("Hotfixes not supported here, try chkdsk\n"); 620 615 mark_dirty(s, 0); 621 616 goto bail4; 622 617 } 623 618 hpfs_error(s, "hotfixes not supported here, try chkdsk"); 624 - if (errs == 0) printk("HPFS: Proceeding, but your filesystem will be probably corrupted by this driver...\n"); 625 - else printk("HPFS: This driver may read bad files or crash when operating on disk with hotfixes.\n"); 619 + if (errs == 0) 620 + pr_err("Proceeding, but your filesystem will be probably corrupted by this driver...\n"); 621 + else 622 + pr_err("This driver may read bad files or crash when operating on disk with hotfixes.\n"); 626 623 } 627 624 if (le32_to_cpu(spareblock->n_dnode_spares) != le32_to_cpu(spareblock->n_dnode_spares_free)) { 628 625 if (errs >= 2) { 629 - printk("HPFS: Spare dnodes used, try chkdsk\n"); 626 + pr_err("Spare dnodes used, try chkdsk\n"); 630 627 mark_dirty(s, 0); 631 628 goto bail4; 632 
629 } 633 630 hpfs_error(s, "warning: spare dnodes used, try chkdsk"); 634 - if (errs == 0) printk("HPFS: Proceeding, but your filesystem could be corrupted if you delete files or directories\n"); 631 + if (errs == 0) 632 + pr_err("Proceeding, but your filesystem could be corrupted if you delete files or directories\n"); 635 633 } 636 634 if (chk) { 637 635 unsigned a; ··· 653 645 goto bail4; 654 646 } 655 647 sbi->sb_dirband_size = a; 656 - } else printk("HPFS: You really don't want any checks? You are crazy...\n"); 648 + } else 649 + pr_err("You really don't want any checks? You are crazy...\n"); 657 650 658 651 /* Load code page table */ 659 652 if (le32_to_cpu(spareblock->n_code_pages)) 660 653 if (!(sbi->sb_cp_table = hpfs_load_code_page(s, le32_to_cpu(spareblock->code_page_dir)))) 661 - printk("HPFS: Warning: code page support is disabled\n"); 654 + pr_err("code page support is disabled\n"); 662 655 663 656 brelse(bh2); 664 657 brelse(bh1);
+1 -1
fs/inode.c
··· 105 105 * Handle nr_inode sysctl 106 106 */ 107 107 #ifdef CONFIG_SYSCTL 108 - int proc_nr_inodes(ctl_table *table, int write, 108 + int proc_nr_inodes(struct ctl_table *table, int write, 109 109 void __user *buffer, size_t *lenp, loff_t *ppos) 110 110 { 111 111 inodes_stat.nr_inodes = get_nr_inodes();
+7 -5
fs/jffs2/background.c
··· 75 75 static int jffs2_garbage_collect_thread(void *_c) 76 76 { 77 77 struct jffs2_sb_info *c = _c; 78 + sigset_t hupmask; 78 79 80 + siginitset(&hupmask, sigmask(SIGHUP)); 79 81 allow_signal(SIGKILL); 80 82 allow_signal(SIGSTOP); 81 83 allow_signal(SIGCONT); 84 + allow_signal(SIGHUP); 82 85 83 86 c->gc_task = current; 84 87 complete(&c->gc_thread_start); ··· 90 87 91 88 set_freezable(); 92 89 for (;;) { 93 - allow_signal(SIGHUP); 90 + sigprocmask(SIG_UNBLOCK, &hupmask, NULL); 94 91 again: 95 92 spin_lock(&c->erase_completion_lock); 96 93 if (!jffs2_thread_should_wake(c)) { ··· 98 95 spin_unlock(&c->erase_completion_lock); 99 96 jffs2_dbg(1, "%s(): sleeping...\n", __func__); 100 97 schedule(); 101 - } else 98 + } else { 102 99 spin_unlock(&c->erase_completion_lock); 103 - 104 - 100 + } 105 101 /* Problem - immediately after bootup, the GCD spends a lot 106 102 * of time in places like jffs2_kill_fragtree(); so much so 107 103 * that userspace processes (like gdm and X) are starved ··· 152 150 } 153 151 } 154 152 /* We don't want SIGHUP to interrupt us. STOP and KILL are OK though. */ 155 - disallow_signal(SIGHUP); 153 + sigprocmask(SIG_BLOCK, &hupmask, NULL); 156 154 157 155 jffs2_dbg(1, "%s(): pass\n", __func__); 158 156 if (jffs2_garbage_collect_pass(c) == -ENOSPC) {
+3 -3
fs/lockd/svc.c
··· 436 436 * Sysctl parameters (same as module parameters, different interface). 437 437 */ 438 438 439 - static ctl_table nlm_sysctls[] = { 439 + static struct ctl_table nlm_sysctls[] = { 440 440 { 441 441 .procname = "nlm_grace_period", 442 442 .data = &nlm_grace_period, ··· 490 490 { } 491 491 }; 492 492 493 - static ctl_table nlm_sysctl_dir[] = { 493 + static struct ctl_table nlm_sysctl_dir[] = { 494 494 { 495 495 .procname = "nfs", 496 496 .mode = 0555, ··· 499 499 { } 500 500 }; 501 501 502 - static ctl_table nlm_sysctl_root[] = { 502 + static struct ctl_table nlm_sysctl_root[] = { 503 503 { 504 504 .procname = "fs", 505 505 .mode = 0555,
+3 -3
fs/nfs/nfs4sysctl.c
··· 16 16 static const int nfs_set_port_max = 65535; 17 17 static struct ctl_table_header *nfs4_callback_sysctl_table; 18 18 19 - static ctl_table nfs4_cb_sysctls[] = { 19 + static struct ctl_table nfs4_cb_sysctls[] = { 20 20 { 21 21 .procname = "nfs_callback_tcpport", 22 22 .data = &nfs_callback_set_tcpport, ··· 36 36 { } 37 37 }; 38 38 39 - static ctl_table nfs4_cb_sysctl_dir[] = { 39 + static struct ctl_table nfs4_cb_sysctl_dir[] = { 40 40 { 41 41 .procname = "nfs", 42 42 .mode = 0555, ··· 45 45 { } 46 46 }; 47 47 48 - static ctl_table nfs4_cb_sysctl_root[] = { 48 + static struct ctl_table nfs4_cb_sysctl_root[] = { 49 49 { 50 50 .procname = "fs", 51 51 .mode = 0555,
+3 -3
fs/nfs/sysctl.c
··· 13 13 14 14 static struct ctl_table_header *nfs_callback_sysctl_table; 15 15 16 - static ctl_table nfs_cb_sysctls[] = { 16 + static struct ctl_table nfs_cb_sysctls[] = { 17 17 { 18 18 .procname = "nfs_mountpoint_timeout", 19 19 .data = &nfs_mountpoint_expiry_timeout, ··· 31 31 { } 32 32 }; 33 33 34 - static ctl_table nfs_cb_sysctl_dir[] = { 34 + static struct ctl_table nfs_cb_sysctl_dir[] = { 35 35 { 36 36 .procname = "nfs", 37 37 .mode = 0555, ··· 40 40 { } 41 41 }; 42 42 43 - static ctl_table nfs_cb_sysctl_root[] = { 43 + static struct ctl_table nfs_cb_sysctl_root[] = { 44 44 { 45 45 .procname = "fs", 46 46 .mode = 0555,
+1 -1
fs/notify/inotify/inotify_user.c
··· 57 57 58 58 static int zero; 59 59 60 - ctl_table inotify_table[] = { 60 + struct ctl_table inotify_table[] = { 61 61 { 62 62 .procname = "max_user_instances", 63 63 .data = &inotify_max_user_instances,
+2 -2
fs/ntfs/sysctl.c
··· 34 34 #include "debug.h" 35 35 36 36 /* Definition of the ntfs sysctl. */ 37 - static ctl_table ntfs_sysctls[] = { 37 + static struct ctl_table ntfs_sysctls[] = { 38 38 { 39 39 .procname = "ntfs-debug", 40 40 .data = &debug_msgs, /* Data pointer and size. */ ··· 46 46 }; 47 47 48 48 /* Define the parent directory /proc/sys/fs. */ 49 - static ctl_table sysctls_root[] = { 49 + static struct ctl_table sysctls_root[] = { 50 50 { 51 51 .procname = "fs", 52 52 .mode = 0555,
+4 -4
fs/proc/task_mmu.c
··· 1418 1418 seq_printf(m, "%08lx %s", vma->vm_start, buffer); 1419 1419 1420 1420 if (file) { 1421 - seq_printf(m, " file="); 1421 + seq_puts(m, " file="); 1422 1422 seq_path(m, &file->f_path, "\n\t= "); 1423 1423 } else if (vma->vm_start <= mm->brk && vma->vm_end >= mm->start_brk) { 1424 - seq_printf(m, " heap"); 1424 + seq_puts(m, " heap"); 1425 1425 } else { 1426 1426 pid_t tid = vm_is_stack(task, vma, is_pid); 1427 1427 if (tid != 0) { ··· 1431 1431 */ 1432 1432 if (!is_pid || (vma->vm_start <= mm->start_stack && 1433 1433 vma->vm_end >= mm->start_stack)) 1434 - seq_printf(m, " stack"); 1434 + seq_puts(m, " stack"); 1435 1435 else 1436 1436 seq_printf(m, " stack:%d", tid); 1437 1437 } 1438 1438 } 1439 1439 1440 1440 if (is_vm_hugetlb_page(vma)) 1441 - seq_printf(m, " huge"); 1441 + seq_puts(m, " huge"); 1442 1442 1443 1443 walk_page_range(vma->vm_start, vma->vm_end, &walk); 1444 1444
+1 -1
fs/proc/vmcore.c
··· 42 42 /* Total size of vmcore file. */ 43 43 static u64 vmcore_size; 44 44 45 - static struct proc_dir_entry *proc_vmcore = NULL; 45 + static struct proc_dir_entry *proc_vmcore; 46 46 47 47 /* 48 48 * Returns > 0 for RAM pages, 0 for non-RAM pages, < 0 on error
+9 -10
fs/pstore/platform.c
··· 18 18 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 19 19 */ 20 20 21 + #define pr_fmt(fmt) "pstore: " fmt 22 + 21 23 #include <linux/atomic.h> 22 24 #include <linux/types.h> 23 25 #include <linux/errno.h> ··· 226 224 zlib_inflate_workspacesize()); 227 225 stream.workspace = kmalloc(size, GFP_KERNEL); 228 226 if (!stream.workspace) { 229 - pr_err("pstore: No memory for compression workspace; " 230 - "skipping compression\n"); 227 + pr_err("No memory for compression workspace; skipping compression\n"); 231 228 kfree(big_oops_buf); 232 229 big_oops_buf = NULL; 233 230 } 234 231 } else { 235 - pr_err("No memory for uncompressed data; " 236 - "skipping compression\n"); 232 + pr_err("No memory for uncompressed data; skipping compression\n"); 237 233 stream.workspace = NULL; 238 234 } 239 235 ··· 455 455 add_timer(&pstore_timer); 456 456 } 457 457 458 - pr_info("pstore: Registered %s as persistent store backend\n", 459 - psi->name); 458 + pr_info("Registered %s as persistent store backend\n", psi->name); 460 459 461 460 return 0; 462 461 } ··· 501 502 size = unzipped_len; 502 503 compressed = false; 503 504 } else { 504 - pr_err("pstore: decompression failed;" 505 - "returned %d\n", unzipped_len); 505 + pr_err("decompression failed;returned %d\n", 506 + unzipped_len); 506 507 compressed = true; 507 508 } 508 509 } ··· 523 524 mutex_unlock(&psi->read_mutex); 524 525 525 526 if (failed) 526 - printk(KERN_WARNING "pstore: failed to load %d record(s) from '%s'\n", 527 - failed, psi->name); 527 + pr_warn("failed to load %d record(s) from '%s'\n", 528 + failed, psi->name); 528 529 } 529 530 530 531 static void pstore_dowork(struct work_struct *work)
+17 -19
fs/pstore/ram_core.c
··· 12 12 * 13 13 */ 14 14 15 + #define pr_fmt(fmt) "persistent_ram: " fmt 16 + 15 17 #include <linux/device.h> 16 18 #include <linux/err.h> 17 19 #include <linux/errno.h> ··· 207 205 size = buffer->data + prz->buffer_size - block; 208 206 numerr = persistent_ram_decode_rs8(prz, block, size, par); 209 207 if (numerr > 0) { 210 - pr_devel("persistent_ram: error in block %p, %d\n", 211 - block, numerr); 208 + pr_devel("error in block %p, %d\n", block, numerr); 212 209 prz->corrected_bytes += numerr; 213 210 } else if (numerr < 0) { 214 - pr_devel("persistent_ram: uncorrectable error in block %p\n", 215 - block); 211 + pr_devel("uncorrectable error in block %p\n", block); 216 212 prz->bad_blocks++; 217 213 } 218 214 block += prz->ecc_info.block_size; ··· 257 257 prz->rs_decoder = init_rs(prz->ecc_info.symsize, prz->ecc_info.poly, 258 258 0, 1, prz->ecc_info.ecc_size); 259 259 if (prz->rs_decoder == NULL) { 260 - pr_info("persistent_ram: init_rs failed\n"); 260 + pr_info("init_rs failed\n"); 261 261 return -EINVAL; 262 262 } 263 263 ··· 267 267 numerr = persistent_ram_decode_rs8(prz, buffer, sizeof(*buffer), 268 268 prz->par_header); 269 269 if (numerr > 0) { 270 - pr_info("persistent_ram: error in header, %d\n", numerr); 270 + pr_info("error in header, %d\n", numerr); 271 271 prz->corrected_bytes += numerr; 272 272 } else if (numerr < 0) { 273 - pr_info("persistent_ram: uncorrectable error in header\n"); 273 + pr_info("uncorrectable error in header\n"); 274 274 prz->bad_blocks++; 275 275 } 276 276 ··· 317 317 prz->old_log = kmalloc(size, GFP_KERNEL); 318 318 } 319 319 if (!prz->old_log) { 320 - pr_err("persistent_ram: failed to allocate buffer\n"); 320 + pr_err("failed to allocate buffer\n"); 321 321 return; 322 322 } 323 323 ··· 396 396 397 397 pages = kmalloc(sizeof(struct page *) * page_count, GFP_KERNEL); 398 398 if (!pages) { 399 - pr_err("%s: Failed to allocate array for %u pages\n", __func__, 400 - page_count); 399 + pr_err("%s: Failed to allocate array for %u pages\n", 400 + __func__, page_count); 401 401 return NULL; 402 402 } 403 403 ··· 462 462 if (prz->buffer->sig == sig) { 463 463 if (buffer_size(prz) > prz->buffer_size || 464 464 buffer_start(prz) > buffer_size(prz)) 465 - pr_info("persistent_ram: found existing invalid buffer," 466 - " size %zu, start %zu\n", 467 - buffer_size(prz), buffer_start(prz)); 465 + pr_info("found existing invalid buffer, size %zu, start %zu\n", 466 + buffer_size(prz), buffer_start(prz)); 468 467 else { 469 - pr_debug("persistent_ram: found existing buffer," 470 - " size %zu, start %zu\n", 471 - buffer_size(prz), buffer_start(prz)); 468 + pr_debug("found existing buffer, size %zu, start %zu\n", 469 + buffer_size(prz), buffer_start(prz)); 472 470 persistent_ram_save_old(prz); 473 471 return 0; 474 472 } 475 473 } else { 476 - pr_debug("persistent_ram: no valid data in buffer" 477 - " (sig = 0x%08x)\n", prz->buffer->sig); 474 + pr_debug("no valid data in buffer (sig = 0x%08x)\n", 475 + prz->buffer->sig); 478 476 } 479 477 480 478 prz->buffer->sig = sig; ··· 507 509 508 510 prz = kzalloc(sizeof(struct persistent_ram_zone), GFP_KERNEL); 509 511 if (!prz) { 510 - pr_err("persistent_ram: failed to allocate persistent ram zone\n"); 512 + pr_err("failed to allocate persistent ram zone\n"); 511 513 goto err; 512 514 }
+6 -7
fs/reiserfs/bitmap.c
··· 142 142 int org = *beg; 143 143 144 144 BUG_ON(!th->t_trans_id); 145 - 146 145 RFALSE(bmap_n >= reiserfs_bmap_count(s), "Bitmap %u is out of " 147 146 "range (0..%u)", bmap_n, reiserfs_bmap_count(s) - 1); 148 147 PROC_INFO_INC(s, scan_bitmap.bmap); ··· 320 321 unsigned int off_max = s->s_blocksize << 3; 321 322 322 323 BUG_ON(!th->t_trans_id); 323 - 324 324 PROC_INFO_INC(s, scan_bitmap.call); 325 325 if (SB_FREE_BLOCKS(s) <= 0) 326 326 return 0; // No point in looking for more free blocks ··· 386 388 unsigned int nr, offset; 387 389 388 390 BUG_ON(!th->t_trans_id); 389 - 390 391 PROC_INFO_INC(s, free_block); 391 - 392 392 rs = SB_DISK_SUPER_BLOCK(s); 393 393 sbh = SB_BUFFER_WITH_SB(s); 394 394 apbi = SB_AP_BITMAP(s); ··· 431 435 int for_unformatted) 432 436 { 433 437 struct super_block *s = th->t_super; 434 - BUG_ON(!th->t_trans_id); 435 438 439 + BUG_ON(!th->t_trans_id); 436 440 RFALSE(!s, "vs-4061: trying to free block on nonexistent device"); 437 441 if (!is_reusable(s, block, 1)) 438 442 return; ··· 467 471 unsigned long save = ei->i_prealloc_block; 468 472 int dirty = 0; 469 473 struct inode *inode = &ei->vfs_inode; 474 + 470 475 BUG_ON(!th->t_trans_id); 471 476 #ifdef CONFIG_REISERFS_CHECK 472 477 if (ei->i_prealloc_count < 0) ··· 491 494 struct inode *inode) 492 495 { 493 496 struct reiserfs_inode_info *ei = REISERFS_I(inode); 497 + 494 498 BUG_ON(!th->t_trans_id); 495 499 if (ei->i_prealloc_count) 496 500 __discard_prealloc(th, ei); ··· 502 504 struct list_head *plist = &SB_JOURNAL(th->t_super)->j_prealloc_list; 503 505 504 506 BUG_ON(!th->t_trans_id); 505 - 506 507 while (!list_empty(plist)) { 507 508 struct reiserfs_inode_info *ei; 508 509 ei = list_entry(plist->next, struct reiserfs_inode_info, ··· 559 562 if (!strcmp(this_char, "displacing_new_packing_localities")) { 560 563 SET_OPTION(displacing_new_packing_localities); 561 564 continue; 562 - }; 565 + } 563 566 564 567 if (!strcmp(this_char, "old_hashed_relocation")) { 565 568 SET_OPTION(old_hashed_relocation); ··· 726 729 static inline void new_hashed_relocation(reiserfs_blocknr_hint_t * hint) 727 730 { 728 731 char *hash_in; 732 + 729 733 if (hint->formatted_node) { 730 734 hash_in = (char *)&hint->key.k_dir_id; 731 735 } else { ··· 755 757 __u32 dirid = 0; 756 758 int bm = 0; 757 759 struct super_block *sb = hint->th->t_super; 760 + 758 761 if (hint->inode) 759 762 dirid = le32_to_cpu(INODE_PKEY(hint->inode)->k_dir_id); 760 763 else if (hint->formatted_node)
+4 -4
fs/reiserfs/stree.c
··· 228 228 229 229 /* Maximal possible key. It is never in the tree. */ 230 230 static const struct reiserfs_key MAX_KEY = { 231 - __constant_cpu_to_le32(0xffffffff), 232 - __constant_cpu_to_le32(0xffffffff), 233 - {{__constant_cpu_to_le32(0xffffffff), 234 - __constant_cpu_to_le32(0xffffffff)},} 231 + cpu_to_le32(0xffffffff), 232 + cpu_to_le32(0xffffffff), 233 + {{cpu_to_le32(0xffffffff), 234 + cpu_to_le32(0xffffffff)},} 235 235 }; 236 236 237 237 /* Get delimiting key of the buffer by looking for it in the buffers in the path, starting from the bottom
+18 -18
fs/ufs/balloc.c
··· 24 24 25 25 #define INVBLOCK ((u64)-1L) 26 26 27 - static u64 ufs_add_fragments(struct inode *, u64, unsigned, unsigned, int *); 27 + static u64 ufs_add_fragments(struct inode *, u64, unsigned, unsigned); 28 28 static u64 ufs_alloc_fragments(struct inode *, unsigned, u64, unsigned, int *); 29 29 static u64 ufs_alloccg_block(struct inode *, struct ufs_cg_private_info *, u64, int *); 30 30 static u64 ufs_bitmap_search (struct super_block *, struct ufs_cg_private_info *, u64, unsigned); ··· 52 52 if (ufs_fragnum(fragment) + count > uspi->s_fpg) 53 53 ufs_error (sb, "ufs_free_fragments", "internal error"); 54 54 55 - mutex_lock(&UFS_SB(sb)->s_lock); 55 + lock_ufs(sb); 56 56 57 57 cgno = ufs_dtog(uspi, fragment); 58 58 bit = ufs_dtogd(uspi, fragment); ··· 116 116 ubh_sync_block(UCPI_UBH(ucpi)); 117 117 ufs_mark_sb_dirty(sb); 118 118 119 - mutex_unlock(&UFS_SB(sb)->s_lock); 119 + unlock_ufs(sb); 120 120 UFSD("EXIT\n"); 121 121 return; 122 122 123 123 failed: 124 - mutex_unlock(&UFS_SB(sb)->s_lock); 124 + unlock_ufs(sb); 125 125 UFSD("EXIT (FAILED)\n"); 126 126 return; 127 127 } ··· 151 151 goto failed; 152 152 } 153 153 154 - mutex_lock(&UFS_SB(sb)->s_lock); 154 + lock_ufs(sb); 155 155 156 156 do_more: 157 157 overflow = 0; ··· 211 211 } 212 212 213 213 ufs_mark_sb_dirty(sb); 214 - mutex_unlock(&UFS_SB(sb)->s_lock); 214 + unlock_ufs(sb); 215 215 UFSD("EXIT\n"); 216 216 return; 217 217 218 218 failed_unlock: 219 - mutex_unlock(&UFS_SB(sb)->s_lock); 219 + unlock_ufs(sb); 220 220 failed: 221 221 UFSD("EXIT (FAILED)\n"); 222 222 return; ··· 357 357 usb1 = ubh_get_usb_first(uspi); 358 358 *err = -ENOSPC; 359 359 360 - mutex_lock(&UFS_SB(sb)->s_lock); 360 + lock_ufs(sb); 361 361 tmp = ufs_data_ptr_to_cpu(sb, p); 362 362 363 363 if (count + ufs_fragnum(fragment) > uspi->s_fpb) { ··· 378 378 "fragment %llu, tmp %llu\n", 379 379 (unsigned long long)fragment, 380 380 (unsigned long long)tmp); 381 - mutex_unlock(&UFS_SB(sb)->s_lock); 381 + unlock_ufs(sb); 382 382 return INVBLOCK; 383 383 } 384 384 if (fragment < UFS_I(inode)->i_lastfrag) { 385 385 UFSD("EXIT (ALREADY ALLOCATED)\n"); 386 - mutex_unlock(&UFS_SB(sb)->s_lock); 386 + unlock_ufs(sb); 387 387 return 0; 388 388 } 389 389 } 390 390 else { 391 391 if (tmp) { 392 392 UFSD("EXIT (ALREADY ALLOCATED)\n"); 393 - mutex_unlock(&UFS_SB(sb)->s_lock); 393 + unlock_ufs(sb); 394 394 return 0; 395 395 } 396 396 } ··· 399 399 * There is not enough space for user on the device 400 400 */ 401 401 if (!capable(CAP_SYS_RESOURCE) && ufs_freespace(uspi, UFS_MINFREE) <= 0) { 402 - mutex_unlock(&UFS_SB(sb)->s_lock); 402 + unlock_ufs(sb); 403 403 UFSD("EXIT (FAILED)\n"); 404 404 return 0; 405 405 } ··· 424 424 ufs_clear_frags(inode, result + oldcount, 425 425 newcount - oldcount, locked_page != NULL); 426 426 } 427 - mutex_unlock(&UFS_SB(sb)->s_lock); 427 + unlock_ufs(sb); 428 428 UFSD("EXIT, result %llu\n", (unsigned long long)result); 429 429 return result; 430 430 } ··· 432 432 /* 433 433 * resize block 434 434 */ 435 - result = ufs_add_fragments (inode, tmp, oldcount, newcount, err); 435 + result = ufs_add_fragments(inode, tmp, oldcount, newcount); 436 436 if (result) { 437 437 *err = 0; 438 438 UFS_I(inode)->i_lastfrag = max(UFS_I(inode)->i_lastfrag, 439 439 fragment + count); 440 440 ufs_clear_frags(inode, result + oldcount, newcount - oldcount, 441 441 locked_page != NULL); 442 - mutex_unlock(&UFS_SB(sb)->s_lock); 442 + unlock_ufs(sb); 443 443 UFSD("EXIT, result %llu\n", (unsigned long long)result); 444 444 return result; 445 445 } ··· 477 477 *err = 0; 478 478 UFS_I(inode)->i_lastfrag = max(UFS_I(inode)->i_lastfrag, 479 479 fragment + count); 480 - mutex_unlock(&UFS_SB(sb)->s_lock); 480 + unlock_ufs(sb); 481 481 if (newcount < request) 482 482 ufs_free_fragments (inode, result + newcount, request - newcount); 483 483 ufs_free_fragments (inode, tmp, oldcount); ··· 485 485 return result; 486 486 } 487 487 488 - mutex_unlock(&UFS_SB(sb)->s_lock); 488 + unlock_ufs(sb); 489 489 UFSD("EXIT (FAILED)\n"); 490 490 return 0; 491 491 } 492 492 493 493 static u64 ufs_add_fragments(struct inode *inode, u64 fragment, 494 - unsigned oldcount, unsigned newcount, int *err) 494 + unsigned oldcount, unsigned newcount) 495 495 { 496 496 struct super_block * sb; 497 497 struct ufs_sb_private_info * uspi;
+8 -9
fs/ufs/ialloc.c
··· 69 69 70 70 ino = inode->i_ino; 71 71 72 - mutex_lock(&UFS_SB(sb)->s_lock); 72 + lock_ufs(sb); 73 73 74 74 if (!((ino > 1) && (ino < (uspi->s_ncg * uspi->s_ipg )))) { 75 75 ufs_warning(sb, "ufs_free_inode", "reserved inode or nonexistent inode %u\n", ino); 76 - mutex_unlock(&UFS_SB(sb)->s_lock); 76 + unlock_ufs(sb); 77 77 return; 78 78 } 79 79 ··· 81 81 bit = ufs_inotocgoff (ino); 82 82 ucpi = ufs_load_cylinder (sb, cg); 83 83 if (!ucpi) { 84 - mutex_unlock(&UFS_SB(sb)->s_lock); 84 + unlock_ufs(sb); 85 85 return; 86 86 } 87 87 ucg = ubh_get_ucg(UCPI_UBH(ucpi)); ··· 115 115 ubh_sync_block(UCPI_UBH(ucpi)); 116 116 117 117 ufs_mark_sb_dirty(sb); 118 - mutex_unlock(&UFS_SB(sb)->s_lock); 118 + unlock_ufs(sb); 119 119 UFSD("EXIT\n"); 120 120 } 121 121 ··· 193 193 sbi = UFS_SB(sb); 194 194 uspi = sbi->s_uspi; 195 195 196 - mutex_lock(&sbi->s_lock); 196 + lock_ufs(sb); 197 197 198 198 /* 199 199 * Try to place the inode in its parent directory ··· 328 328 sync_dirty_buffer(bh); 329 329 brelse(bh); 330 330 } 331 - 332 - mutex_unlock(&sbi->s_lock); 331 + unlock_ufs(sb); 333 332 334 333 UFSD("allocating inode %lu\n", inode->i_ino); 335 334 UFSD("EXIT\n"); 336 335 return inode; 337 336 338 337 fail_remove_inode: 339 - mutex_unlock(&sbi->s_lock); 338 + unlock_ufs(sb); 340 339 clear_nlink(inode); 341 340 iput(inode); 342 341 UFSD("EXIT (FAILED): err %d\n", err); 343 342 return ERR_PTR(err); 344 343 failed: 345 - mutex_unlock(&sbi->s_lock); 344 + unlock_ufs(sb); 346 345 make_bad_inode(inode); 347 346 iput (inode); 348 347 UFSD("EXIT (FAILED): err %d\n", err);
+10 -18
fs/ufs/super.c
··· 697 697 unsigned flags; 698 698 699 699 lock_ufs(sb); 700 - mutex_lock(&UFS_SB(sb)->s_lock); 701 700 702 701 UFSD("ENTER\n"); 703 702 ··· 714 715 ufs_put_cstotal(sb); 715 716 716 717 UFSD("EXIT\n"); 717 - mutex_unlock(&UFS_SB(sb)->s_lock); 718 718 unlock_ufs(sb); 719 719 720 720 return 0; ··· 758 760 759 761 ubh_brelse_uspi (sbi->s_uspi); 760 762 kfree (sbi->s_uspi); 763 + mutex_destroy(&sbi->mutex); 761 764 kfree (sbi); 762 765 sb->s_fs_info = NULL; 763 766 UFSD("EXIT\n"); ··· 785 786 flags = 0; 786 787 787 788 UFSD("ENTER\n"); 789 + 790 + #ifndef CONFIG_UFS_FS_WRITE 791 + if (!(sb->s_flags & MS_RDONLY)) { 792 + printk("ufs was compiled with read-only support, " 793 + "can't be mounted as read-write\n"); 794 + return -EROFS; 795 + } 796 + #endif 788 797 789 798 sbi = kzalloc(sizeof(struct ufs_sb_info), GFP_KERNEL); 790 799 if (!sbi) ··· 802 795 803 796 UFSD("flag %u\n", (int)(sb->s_flags & MS_RDONLY)); 804 797 805 - #ifndef CONFIG_UFS_FS_WRITE 806 - if (!(sb->s_flags & MS_RDONLY)) { 807 - printk("ufs was compiled with read-only support, " 808 - "can't be mounted as read-write\n"); 809 - goto failed; 810 - } 811 - #endif 812 798 mutex_init(&sbi->mutex); 813 - mutex_init(&sbi->s_lock); 814 799 spin_lock_init(&sbi->work_lock); 815 800 INIT_DELAYED_WORK(&sbi->sync_work, delayed_sync_fs); 816 801 /* ··· 1256 1257 return 0; 1257 1258 1258 1259 failed: 1260 + mutex_destroy(&sbi->mutex); 1259 1261 if (ubh) 1260 1262 ubh_brelse_uspi (uspi); 1261 1263 kfree (uspi); ··· 1280 1280 1281 1281 sync_filesystem(sb); 1282 1282 lock_ufs(sb); 1283 - mutex_lock(&UFS_SB(sb)->s_lock); 1284 1283 uspi = UFS_SB(sb)->s_uspi; 1285 1284 flags = UFS_SB(sb)->s_flags; 1286 1285 usb1 = ubh_get_usb_first(uspi); ··· 1293 1294 new_mount_opt = 0; 1294 1295 ufs_set_opt (new_mount_opt, ONERROR_LOCK); 1295 1296 if (!ufs_parse_options (data, &new_mount_opt)) { 1296 - mutex_unlock(&UFS_SB(sb)->s_lock); 1297 1297 unlock_ufs(sb); 1298 1298 return -EINVAL; 1299 1299 } ··· 1300 1302 new_mount_opt |= ufstype; 1301 1303 } else if ((new_mount_opt & UFS_MOUNT_UFSTYPE) != ufstype) { 1302 1304 printk("ufstype can't be changed during remount\n"); 1303 - mutex_unlock(&UFS_SB(sb)->s_lock); 1304 1305 unlock_ufs(sb); 1305 1306 return -EINVAL; 1306 1307 } 1307 1308 1308 1309 if ((*mount_flags & MS_RDONLY) == (sb->s_flags & MS_RDONLY)) { 1309 1310 UFS_SB(sb)->s_mount_opt = new_mount_opt; 1310 - mutex_unlock(&UFS_SB(sb)->s_lock); 1311 1311 unlock_ufs(sb); 1312 1312 return 0; 1313 1313 } ··· 1330 1334 #ifndef CONFIG_UFS_FS_WRITE 1331 1335 printk("ufs was compiled with read-only support, " 1332 1336 "can't be mounted as read-write\n"); 1333 - mutex_unlock(&UFS_SB(sb)->s_lock); 1334 1337 unlock_ufs(sb); 1335 1338 return -EINVAL; 1336 1339 #else ··· 1339 1344 ufstype != UFS_MOUNT_UFSTYPE_SUNx86 && 1340 1345 ufstype != UFS_MOUNT_UFSTYPE_UFS2) { 1341 1346 printk("this ufstype is read-only supported\n"); 1342 - mutex_unlock(&UFS_SB(sb)->s_lock); 1343 1347 unlock_ufs(sb); 1344 1348 return -EINVAL; 1345 1349 } 1346 1350 if (!ufs_read_cylinder_structures(sb)) { 1347 1351 printk("failed during remounting\n"); 1348 - mutex_unlock(&UFS_SB(sb)->s_lock); 1349 1352 unlock_ufs(sb); 1350 1353 return -EPERM; 1351 1354 } ··· 1351 1358 #endif 1352 1359 } 1353 1360 UFS_SB(sb)->s_mount_opt = new_mount_opt; 1354 - mutex_unlock(&UFS_SB(sb)->s_lock); 1355 1361 unlock_ufs(sb); 1356 1362 return 0; 1357 1363 }
-1
fs/ufs/ufs.h
··· 24 24 int work_queued; /* non-zero if the delayed work is queued */ 25 25 struct delayed_work sync_work; /* FS sync delayed work */ 26 26 spinlock_t work_lock; /* protects sync_work and work_queued */ 27 - struct mutex s_lock; 28 27 }; 29 28 30 29 struct ufs_inode_info {
+5
include/asm-generic/ioctl.h
··· 3 3 4 4 #include <uapi/asm-generic/ioctl.h> 5 5 6 + #ifdef __CHECKER__ 7 + #define _IOC_TYPECHECK(t) (sizeof(t)) 8 + #else 6 9 /* provoke compile error for invalid uses of size argument */ 7 10 extern unsigned int __invalid_size_argument_for_IOC; 8 11 #define _IOC_TYPECHECK(t) \ 9 12 ((sizeof(t) == sizeof(t[1]) && \ 10 13 sizeof(t) < (1 << _IOC_SIZEBITS)) ? \ 11 14 sizeof(t) : __invalid_size_argument_for_IOC) 15 + #endif 16 + 12 17 #endif /* _ASM_GENERIC_IOCTL_H */
-1
include/linux/cpu.h
··· 256 256 }; 257 257 258 258 void cpu_startup_entry(enum cpuhp_state state); 259 - void cpu_idle(void); 260 259 261 260 void cpu_idle_poll_ctrl(bool enable); 262 261
+8 -5
include/linux/idr.h
··· 29 29 30 30 struct idr_layer { 31 31 int prefix; /* the ID prefix of this idr_layer */ 32 - DECLARE_BITMAP(bitmap, IDR_SIZE); /* A zero bit means "space here" */ 32 + int layer; /* distance from leaf */ 33 33 struct idr_layer __rcu *ary[1<<IDR_BITS]; 34 34 int count; /* When zero, we can release it */ 35 - int layer; /* distance from leaf */ 36 - struct rcu_head rcu_head; 35 + union { 36 + /* A zero bit means "space here" */ 37 + DECLARE_BITMAP(bitmap, IDR_SIZE); 38 + struct rcu_head rcu_head; 39 + }; 37 40 }; 38 41 39 42 struct idr { 40 43 struct idr_layer __rcu *hint; /* the last layer allocated from */ 41 44 struct idr_layer __rcu *top; 42 - struct idr_layer *id_free; 43 45 int layers; /* only valid w/o concurrent changes */ 44 - int id_free_cnt; 45 46 int cur; /* current pos for cyclic allocation */ 46 47 spinlock_t lock; 48 + int id_free_cnt; 49 + struct idr_layer *id_free; 47 50 }; 48 51 49 52 #define IDR_INIT(name) \
+1 -1
include/linux/key.h
··· 332 332 } while (0) 333 333 334 334 #ifdef CONFIG_SYSCTL 335 - extern ctl_table key_sysctls[]; 335 + extern struct ctl_table key_sysctls[]; 336 336 #endif 337 337 /* 338 338 * the userspace interface
+4
include/linux/kmemleak.h
··· 30 30 extern void kmemleak_free(const void *ptr) __ref; 31 31 extern void kmemleak_free_part(const void *ptr, size_t size) __ref; 32 32 extern void kmemleak_free_percpu(const void __percpu *ptr) __ref; 33 + extern void kmemleak_update_trace(const void *ptr) __ref; 33 34 extern void kmemleak_not_leak(const void *ptr) __ref; 34 35 extern void kmemleak_ignore(const void *ptr) __ref; 35 36 extern void kmemleak_scan_area(const void *ptr, size_t size, gfp_t gfp) __ref; ··· 82 81 { 83 82 } 84 83 static inline void kmemleak_free_percpu(const void __percpu *ptr) 84 + { 85 + } 86 + static inline void kmemleak_update_trace(const void *ptr) 85 87 { 86 88 } 87 89 static inline void kmemleak_not_leak(const void *ptr)
+4
include/linux/mc146818rtc.h
··· 31 31 void (*wake_on)(struct device *dev); 32 32 void (*wake_off)(struct device *dev); 33 33 34 + u32 flags; 35 + #define CMOS_RTC_FLAGS_NOFREQ (1 << 0) 36 + int address_space; 37 + 34 38 u8 rtc_day_alarm; /* zero, or register index */ 35 39 u8 rtc_mon_alarm; /* zero, or register index */ 36 40 u8 rtc_century; /* zero, or register index */
+32
include/linux/ptrace.h
··· 5 5 #include <linux/sched.h> /* For struct task_struct. */ 6 6 #include <linux/err.h> /* for IS_ERR_VALUE */ 7 7 #include <linux/bug.h> /* For BUG_ON. */ 8 + #include <linux/pid_namespace.h> /* For task_active_pid_ns. */ 8 9 #include <uapi/linux/ptrace.h> 9 10 10 11 /* ··· 127 126 if ((current->ptrace & (PT_PTRACED|PT_SEIZED)) == PT_PTRACED) 128 127 send_sig(SIGTRAP, current, 0); 129 128 } 129 + } 130 + 131 + /** 132 + * ptrace_event_pid - possibly stop for a ptrace event notification 133 + * @event: %PTRACE_EVENT_* value to report 134 + * @pid: process identifier for %PTRACE_GETEVENTMSG to return 135 + * 136 + * Check whether @event is enabled and, if so, report @event and @pid 137 + * to the ptrace parent. @pid is reported as the pid_t seen from the 138 + * the ptrace parent's pid namespace. 139 + * 140 + * Called without locks. 141 + */ 142 + static inline void ptrace_event_pid(int event, struct pid *pid) 143 + { 144 + /* 145 + * FIXME: There's a potential race if a ptracer in a different pid 146 + * namespace than parent attaches between computing message below and 147 + * when we acquire tasklist_lock in ptrace_stop(). If this happens, 148 + * the ptracer will get a bogus pid from PTRACE_GETEVENTMSG. 149 + */ 150 + unsigned long message = 0; 151 + struct pid_namespace *ns; 152 + 153 + rcu_read_lock(); 154 + ns = task_active_pid_ns(rcu_dereference(current->parent)); 155 + if (ns) 156 + message = pid_nr_ns(pid, ns); 157 + rcu_read_unlock(); 158 + 159 + ptrace_event(event, message); 130 160 } 131 161 132 162 /**
-3
include/linux/sched.h
··· 2414 2414 2415 2415 extern void do_group_exit(int); 2416 2416 2417 - extern int allow_signal(int); 2418 - extern int disallow_signal(int); 2419 - 2420 2417 extern int do_execve(struct filename *, 2421 2418 const char __user * const __user *, 2422 2419 const char __user * const __user *);
+1 -2
include/linux/shm.h
··· 3 3 4 4 #include <asm/page.h> 5 5 #include <uapi/linux/shm.h> 6 - 7 - #define SHMALL (SHMMAX/PAGE_SIZE*(SHMMNI/16)) /* max shm system wide (pages) */ 8 6 #include <asm/shmparam.h> 7 + 9 8 struct shmid_kernel /* private to the kernel */ 10 9 { 11 10 struct kern_ipc_perm shm_perm;
+16 -5
include/linux/signal.h
··· 63 63 return 1 & (set->sig[sig / _NSIG_BPW] >> (sig % _NSIG_BPW)); 64 64 } 65 65 66 - static inline int sigfindinword(unsigned long word) 67 - { 68 - return ffz(~word); 69 - } 70 - 71 66 #endif /* __HAVE_ARCH_SIG_BITOPS */ 72 67 73 68 static inline int sigisemptyset(sigset_t *set) ··· 284 289 extern void signal_setup_done(int failed, struct ksignal *ksig, int stepping); 285 290 extern void signal_delivered(int sig, siginfo_t *info, struct k_sigaction *ka, struct pt_regs *regs, int stepping); 286 291 extern void exit_signals(struct task_struct *tsk); 292 + extern void kernel_sigaction(int, __sighandler_t); 293 + 294 + static inline void allow_signal(int sig) 295 + { 296 + /* 297 + * Kernel threads handle their own signals. Let the signal code 298 + * know it'll be handled, so that they don't get converted to 299 + * SIGKILL or just silently dropped. 300 + */ 301 + kernel_sigaction(sig, (__force __sighandler_t)2); 302 + } 303 + 304 + static inline void disallow_signal(int sig) 305 + { 306 + kernel_sigaction(sig, SIG_IGN); 307 + } 287 308 288 309 /* 289 310 * Eventually that'll replace get_signal_to_deliver(); macro for now,
-2
include/linux/smp.h
··· 13 13 #include <linux/init.h> 14 14 #include <linux/llist.h> 15 15 16 - extern void cpu_idle(void); 17 - 18 16 typedef void (*smp_call_func_t)(void *info); 19 17 struct call_single_data { 20 18 struct llist_node llist;
+9 -8
include/uapi/linux/shm.h
··· 8 8 #endif 9 9 10 10 /* 11 - * SHMMAX, SHMMNI and SHMALL are upper limits are defaults which can 12 - * be increased by sysctl 11 + * SHMMNI, SHMMAX and SHMALL are default upper limits which can be 12 + * modified by sysctl. The SHMMAX and SHMALL values have been chosen to 13 + * be as large possible without facilitating scenarios where userspace 14 + * causes overflows when adjusting the limits via operations of the form 15 + * "retrieve current limit; add X; update limit". It is therefore not 16 + * advised to make SHMMAX and SHMALL any larger. These limits are 17 + * suitable for both 32 and 64-bit systems. 13 18 */ 14 - 15 - #define SHMMAX 0x2000000 /* max shared seg size (bytes) */ 16 19 #define SHMMIN 1 /* min shared seg size (bytes) */ 17 20 #define SHMMNI 4096 /* max num of segs system wide */ 18 - #ifndef __KERNEL__ 19 - #define SHMALL (SHMMAX/getpagesize()*(SHMMNI/16)) 20 - #endif 21 + #define SHMMAX (ULONG_MAX - (1UL << 24)) /* max shared seg size (bytes) */ 22 + #define SHMALL (ULONG_MAX - (1UL << 24)) /* max shm system wide (pages) */ 21 23 #define SHMSEG SHMMNI /* max shared segs per process */ 22 - 23 24 24 25 /* Obsolete, used only for backwards compatibility and libc5 compiles */ 25 26 struct shmid_ds {
+1 -1
ipc/compat.c
··· 30 30 #include <linux/ptrace.h> 31 31 32 32 #include <linux/mutex.h> 33 - #include <asm/uaccess.h> 33 + #include <linux/uaccess.h> 34 34 35 35 #include "util.h" 36 36
+1 -1
ipc/compat_mq.c
··· 12 12 #include <linux/mqueue.h> 13 13 #include <linux/syscalls.h> 14 14 15 - #include <asm/uaccess.h> 15 + #include <linux/uaccess.h> 16 16 17 17 struct compat_mq_attr { 18 18 compat_long_t mq_flags; /* message queue flags */
+7 -7
ipc/ipc_sysctl.c
··· 18 18 #include <linux/msg.h> 19 19 #include "util.h" 20 20 21 - static void *get_ipc(ctl_table *table) 21 + static void *get_ipc(struct ctl_table *table) 22 22 { 23 23 char *which = table->data; 24 24 struct ipc_namespace *ipc_ns = current->nsproxy->ipc_ns; ··· 27 27 } 28 28 29 29 #ifdef CONFIG_PROC_SYSCTL 30 - static int proc_ipc_dointvec(ctl_table *table, int write, 30 + static int proc_ipc_dointvec(struct ctl_table *table, int write, 31 31 void __user *buffer, size_t *lenp, loff_t *ppos) 32 32 { 33 33 struct ctl_table ipc_table; ··· 38 38 return proc_dointvec(&ipc_table, write, buffer, lenp, ppos); 39 39 } 40 40 41 - static int proc_ipc_dointvec_minmax(ctl_table *table, int write, 41 + static int proc_ipc_dointvec_minmax(struct ctl_table *table, int write, 42 42 void __user *buffer, size_t *lenp, loff_t *ppos) 43 43 { 44 44 struct ctl_table ipc_table; ··· 49 49 return proc_dointvec_minmax(&ipc_table, write, buffer, lenp, ppos); 50 50 } 51 51 52 - static int proc_ipc_dointvec_minmax_orphans(ctl_table *table, int write, 52 + static int proc_ipc_dointvec_minmax_orphans(struct ctl_table *table, int write, 53 53 void __user *buffer, size_t *lenp, loff_t *ppos) 54 54 { 55 55 struct ipc_namespace *ns = current->nsproxy->ipc_ns; ··· 62 62 return err; 63 63 } 64 64 65 - static int proc_ipc_callback_dointvec_minmax(ctl_table *table, int write, 65 + static int proc_ipc_callback_dointvec_minmax(struct ctl_table *table, int write, 66 66 void __user *buffer, size_t *lenp, loff_t *ppos) 67 67 { 68 68 struct ctl_table ipc_table; ··· 85 85 return rc; 86 86 } 87 87 88 - static int proc_ipc_doulongvec_minmax(ctl_table *table, int write, 88 + static int proc_ipc_doulongvec_minmax(struct ctl_table *table, int write, 89 89 void __user *buffer, size_t *lenp, loff_t *ppos) 90 90 { 91 91 struct ctl_table ipc_table; ··· 119 119 } 120 120 } 121 121 122 - static int proc_ipcauto_dointvec_minmax(ctl_table *table, int write, 122 + static int proc_ipcauto_dointvec_minmax(struct ctl_table *table, int write, 123 123 void __user *buffer, size_t *lenp, loff_t *ppos) 124 124 { 125 125 struct ctl_table ipc_table;
+6 -6
ipc/mq_sysctl.c
··· 14 14 #include <linux/sysctl.h> 15 15 16 16 #ifdef CONFIG_PROC_SYSCTL 17 - static void *get_mq(ctl_table *table) 17 + static void *get_mq(struct ctl_table *table) 18 18 { 19 19 char *which = table->data; 20 20 struct ipc_namespace *ipc_ns = current->nsproxy->ipc_ns; ··· 22 22 return which; 23 23 } 24 24 25 - static int proc_mq_dointvec(ctl_table *table, int write, 25 + static int proc_mq_dointvec(struct ctl_table *table, int write, 26 26 void __user *buffer, size_t *lenp, loff_t *ppos) 27 27 { 28 28 struct ctl_table mq_table; ··· 32 32 return proc_dointvec(&mq_table, write, buffer, lenp, ppos); 33 33 } 34 34 35 - static int proc_mq_dointvec_minmax(ctl_table *table, int write, 35 + static int proc_mq_dointvec_minmax(struct ctl_table *table, int write, 36 36 void __user *buffer, size_t *lenp, loff_t *ppos) 37 37 { 38 38 struct ctl_table mq_table; ··· 53 53 static int msg_maxsize_limit_min = MIN_MSGSIZEMAX; 54 54 static int msg_maxsize_limit_max = HARD_MSGSIZEMAX; 55 55 56 - static ctl_table mq_sysctls[] = { 56 + static struct ctl_table mq_sysctls[] = { 57 57 { 58 58 .procname = "queues_max", 59 59 .data = &init_ipc_ns.mq_queues_max, ··· 100 100 {} 101 101 }; 102 102 103 - static ctl_table mq_sysctl_dir[] = { 103 + static struct ctl_table mq_sysctl_dir[] = { 104 104 { 105 105 .procname = "mqueue", 106 106 .mode = 0555, ··· 109 109 {} 110 110 }; 111 111 112 - static ctl_table mq_sysctl_root[] = { 112 + static struct ctl_table mq_sysctl_root[] = { 113 113 { 114 114 .procname = "fs", 115 115 .mode = 0555,
+92 -96
ipc/msg.c
··· 39 39 #include <linux/ipc_namespace.h> 40 40 41 41 #include <asm/current.h> 42 - #include <asm/uaccess.h> 42 + #include <linux/uaccess.h> 43 43 #include "util.h" 44 44 45 - /* 46 - * one msg_receiver structure for each sleeping receiver: 47 - */ 45 + /* one msg_receiver structure for each sleeping receiver */ 48 46 struct msg_receiver { 49 47 struct list_head r_list; 50 48 struct task_struct *r_tsk; ··· 51 53 long r_msgtype; 52 54 long r_maxsize; 53 55 56 + /* 57 + * Mark r_msg volatile so that the compiler 58 + * does not try to get smart and optimize 59 + * it. We rely on this for the lockless 60 + * receive algorithm. 61 + */ 54 62 struct msg_msg *volatile r_msg; 55 63 }; 56 64 ··· 73 69 #define SEARCH_NUMBER 5 74 70 75 71 #define msg_ids(ns) ((ns)->ids[IPC_MSG_IDS]) 76 - 77 - static void freeque(struct ipc_namespace *, struct kern_ipc_perm *); 78 - static int newque(struct ipc_namespace *, struct ipc_params *); 79 - #ifdef CONFIG_PROC_FS 80 - static int sysvipc_msg_proc_show(struct seq_file *s, void *it); 81 - #endif 82 - 83 - /* 84 - * Scale msgmni with the available lowmem size: the memory dedicated to msg 85 - * queues should occupy at most 1/MSG_MEM_SCALE of lowmem. 86 - * Also take into account the number of nsproxies created so far. 87 - * This should be done staying within the (MSGMNI , IPCMNI/nr_ipc_ns) range. 88 - */ 89 - void recompute_msgmni(struct ipc_namespace *ns) 90 - { 91 - struct sysinfo i; 92 - unsigned long allowed; 93 - int nb_ns; 94 - 95 - si_meminfo(&i); 96 - allowed = (((i.totalram - i.totalhigh) / MSG_MEM_SCALE) * i.mem_unit) 97 - / MSGMNB; 98 - nb_ns = atomic_read(&nr_ipc_ns); 99 - allowed /= nb_ns; 100 - 101 - if (allowed < MSGMNI) { 102 - ns->msg_ctlmni = MSGMNI; 103 - return; 104 - } 105 - 106 - if (allowed > IPCMNI / nb_ns) { 107 - ns->msg_ctlmni = IPCMNI / nb_ns; 108 - return; 109 - } 110 - 111 - ns->msg_ctlmni = allowed; 112 - } 113 - 114 - void msg_init_ns(struct ipc_namespace *ns) 115 - { 116 - ns->msg_ctlmax = MSGMAX; 117 - ns->msg_ctlmnb = MSGMNB; 118 - 119 - recompute_msgmni(ns); 120 - 121 - atomic_set(&ns->msg_bytes, 0); 122 - atomic_set(&ns->msg_hdrs, 0); 123 - ipc_init_ids(&ns->ids[IPC_MSG_IDS]); 124 - } 125 - 126 - #ifdef CONFIG_IPC_NS 127 - void msg_exit_ns(struct ipc_namespace *ns) 128 - { 129 - free_ipcs(ns, &msg_ids(ns), freeque); 130 - idr_destroy(&ns->ids[IPC_MSG_IDS].ipcs_idr); 131 - } 132 - #endif 133 - 134 - void __init msg_init(void) 135 - { 136 - msg_init_ns(&init_ipc_ns); 137 - 138 - printk(KERN_INFO "msgmni has been set to %d\n", 139 - init_ipc_ns.msg_ctlmni); 140 - 141 - ipc_init_proc_interface("sysvipc/msg", 142 - " key msqid perms cbytes qnum lspid lrpid uid gid cuid cgid stime rtime ctime\n", 143 - IPC_MSG_IDS, sysvipc_msg_proc_show); 144 - } 145 72 146 73 static inline struct msg_queue *msq_obtain_object(struct ipc_namespace *ns, int id) 147 74 { ··· 162 227 static inline void ss_add(struct msg_queue *msq, struct msg_sender *mss) 163 228 { 164 229 mss->tsk = current; 165 - current->state = TASK_INTERRUPTIBLE; 230 + __set_current_state(TASK_INTERRUPTIBLE); 166 231 list_add_tail(&mss->list, &msq->q_senders); 167 232 } 168 233 ··· 241 306 SYSCALL_DEFINE2(msgget, key_t, key, int, msgflg) 242 307 { 243 308 struct ipc_namespace *ns; 244 - struct ipc_ops msg_ops; 309 + static const struct ipc_ops msg_ops = { 310 + .getnew = newque, 311 + .associate = msg_security, 312 + }; 245 313 struct ipc_params msg_params; 246 314 247 315 ns = current->nsproxy->ipc_ns; 248 - 249 - msg_ops.getnew = newque; 250 - msg_ops.associate = msg_security; 251 - msg_ops.more_checks = NULL; 252 316 253 317 msg_params.key = key; 254 318 msg_params.flg = msgflg; ··· 546 612 547 613 static int testmsg(struct msg_msg *msg, long type, int mode) 548 614 { 549 - switch (mode) 550 - { 551 - case SEARCH_ANY: 552 - case SEARCH_NUMBER: 615 + switch (mode) { 616 + case SEARCH_ANY: 617 + case SEARCH_NUMBER: 618 + return 1; 619 + case SEARCH_LESSEQUAL: 620 + if (msg->m_type <= type) 553 621 return 1; 554 - case SEARCH_LESSEQUAL: 555 - if (msg->m_type <= type) 556 - return 1; 557 - break; 558 - case SEARCH_EQUAL: 559 - if (msg->m_type == type) 560 - return 1; 561 - break; 562 - case SEARCH_NOTEQUAL: 563 - if (msg->m_type != type) 564 - return 1; 565 - break; 622 + break; 623 + case SEARCH_EQUAL: 624 + if (msg->m_type == type) 625 + return 1; 626 + break; 627 + case SEARCH_NOTEQUAL: 628 + if (msg->m_type != type) 629 + return 1; 630 + break; 566 631 } 567 632 return 0; 568 633 } ··· 911 978 else 912 979 msr_d.r_maxsize = bufsz; 913 980 msr_d.r_msg = ERR_PTR(-EAGAIN); 914 - current->state = TASK_INTERRUPTIBLE; 981 + __set_current_state(TASK_INTERRUPTIBLE); 915 982 916 983 ipc_unlock_object(&msq->q_perm); 917 984 rcu_read_unlock(); ··· 989 1056 return do_msgrcv(msqid, msgp, msgsz, msgtyp, msgflg, do_msg_fill); 990 1057 } 991 1058 1059 + /* 1060 + * Scale msgmni with the available lowmem size: the memory dedicated to msg 1061 + * queues should occupy at most 1/MSG_MEM_SCALE of lowmem. 1062 + * Also take into account the number of nsproxies created so far. 1063 + * This should be done staying within the (MSGMNI , IPCMNI/nr_ipc_ns) range. 1064 + */ 1065 + void recompute_msgmni(struct ipc_namespace *ns) 1066 + { 1067 + struct sysinfo i; 1068 + unsigned long allowed; 1069 + int nb_ns; 1070 + 1071 + si_meminfo(&i); 1072 + allowed = (((i.totalram - i.totalhigh) / MSG_MEM_SCALE) * i.mem_unit) 1073 + / MSGMNB; 1074 + nb_ns = atomic_read(&nr_ipc_ns); 1075 + allowed /= nb_ns; 1076 + 1077 + if (allowed < MSGMNI) { 1078 + ns->msg_ctlmni = MSGMNI; 1079 + return; 1080 + } 1081 + 1082 + if (allowed > IPCMNI / nb_ns) { 1083 + ns->msg_ctlmni = IPCMNI / nb_ns; 1084 + return; 1085 + } 1086 + 1087 + ns->msg_ctlmni = allowed; 1088 + } 1089 + 1090 + void msg_init_ns(struct ipc_namespace *ns) 1091 + { 1092 + ns->msg_ctlmax = MSGMAX; 1093 + ns->msg_ctlmnb = MSGMNB; 1094 + 1095 + recompute_msgmni(ns); 1096 + 1097 + atomic_set(&ns->msg_bytes, 0); 1098 + atomic_set(&ns->msg_hdrs, 0); 1099 + ipc_init_ids(&ns->ids[IPC_MSG_IDS]); 1100 + } 1101 + 1102 + #ifdef CONFIG_IPC_NS 1103 + void msg_exit_ns(struct ipc_namespace *ns) 1104 + { 1105 + free_ipcs(ns, &msg_ids(ns), freeque); 1106 + idr_destroy(&ns->ids[IPC_MSG_IDS].ipcs_idr); 1107 + } 1108 + #endif 1109 + 992 1110 #ifdef CONFIG_PROC_FS 993 1111 static int sysvipc_msg_proc_show(struct seq_file *s, void *it) 994 1112 { ··· 1064 1080 msq->q_ctime); 1065 1081 } 1066 1082 #endif 1083 + 1084 + void __init msg_init(void) 1085 + { 1086 + msg_init_ns(&init_ipc_ns); 1087 + 1088 + printk(KERN_INFO "msgmni has been set to %d\n", 1089 + init_ipc_ns.msg_ctlmni); 1090 + 1091 + ipc_init_proc_interface("sysvipc/msg", 1092 + " key msqid perms cbytes qnum lspid lrpid uid gid cuid cgid stime rtime ctime\n", 1093 + IPC_MSG_IDS, sysvipc_msg_proc_show); 1094 + }
+91 -80
ipc/sem.c
··· 47 47 * Thus: Perfect SMP scaling between independent semaphore arrays. 48 48 * If multiple semaphores in one array are used, then cache line 49 49 * trashing on the semaphore array spinlock will limit the scaling. 50 - * - semncnt and semzcnt are calculated on demand in count_semncnt() and 51 - * count_semzcnt() 50 + * - semncnt and semzcnt are calculated on demand in count_semcnt() 52 51 * - the task that performs a successful semop() scans the list of all 53 52 * sleeping tasks and completes any pending operations that can be fulfilled. 54 53 * Semaphores are actively given to waiting tasks (necessary for FIFO). ··· 86 87 #include <linux/nsproxy.h> 87 88 #include <linux/ipc_namespace.h> 88 89 89 - #include <asm/uaccess.h> 90 + #include <linux/uaccess.h> 90 91 #include "util.h" 91 92 92 93 /* One semaphore structure for each semaphore in the system. */ ··· 109 110 int pid; /* process id of requesting process */ 110 111 int status; /* completion status of operation */ 111 112 struct sembuf *sops; /* array of pending operations */ 113 + struct sembuf *blocking; /* the operation that blocked */ 112 114 int nsops; /* number of operations */ 113 115 int alter; /* does *sops alter the array? */ 114 116 }; ··· 160 160 * sem_array.pending{_alter,_cont}, 161 161 * sem_array.sem_undo: global sem_lock() for read/write 162 162 * sem_undo.proc_next: only "current" is allowed to read/write that field. 
163 - * 163 + * 164 164 * sem_array.sem_base[i].pending_{const,alter}: 165 165 * global or semaphore sem_lock() for read/write 166 166 */ ··· 564 564 SYSCALL_DEFINE3(semget, key_t, key, int, nsems, int, semflg) 565 565 { 566 566 struct ipc_namespace *ns; 567 - struct ipc_ops sem_ops; 567 + static const struct ipc_ops sem_ops = { 568 + .getnew = newary, 569 + .associate = sem_security, 570 + .more_checks = sem_more_checks, 571 + }; 568 572 struct ipc_params sem_params; 569 573 570 574 ns = current->nsproxy->ipc_ns; 571 575 572 576 if (nsems < 0 || nsems > ns->sc_semmsl) 573 577 return -EINVAL; 574 - 575 - sem_ops.getnew = newary; 576 - sem_ops.associate = sem_security; 577 - sem_ops.more_checks = sem_more_checks; 578 578 579 579 sem_params.key = key; 580 580 sem_params.flg = semflg; ··· 586 586 /** 587 587 * perform_atomic_semop - Perform (if possible) a semaphore operation 588 588 * @sma: semaphore array 589 - * @sops: array with operations that should be checked 590 - * @nsops: number of operations 591 - * @un: undo array 592 - * @pid: pid that did the change 589 + * @q: struct sem_queue that describes the operation 593 590 * 594 591 * Returns 0 if the operation was possible. 595 592 * Returns 1 if the operation is impossible, the caller must sleep. 596 593 * Negative values are error codes. 
597 594 */ 598 - static int perform_atomic_semop(struct sem_array *sma, struct sembuf *sops, 599 - int nsops, struct sem_undo *un, int pid) 595 + static int perform_atomic_semop(struct sem_array *sma, struct sem_queue *q) 600 596 { 601 - int result, sem_op; 597 + int result, sem_op, nsops, pid; 602 598 struct sembuf *sop; 603 599 struct sem *curr; 600 + struct sembuf *sops; 601 + struct sem_undo *un; 602 + 603 + sops = q->sops; 604 + nsops = q->nsops; 605 + un = q->undo; 604 606 605 607 for (sop = sops; sop < sops + nsops; sop++) { 606 608 curr = sma->sem_base + sop->sem_num; ··· 630 628 } 631 629 632 630 sop--; 631 + pid = q->pid; 633 632 while (sop >= sops) { 634 633 sma->sem_base[sop->sem_num].sempid = pid; 635 634 sop--; ··· 643 640 goto undo; 644 641 645 642 would_block: 643 + q->blocking = sop; 644 + 646 645 if (sop->sem_flg & IPC_NOWAIT) 647 646 result = -EAGAIN; 648 647 else ··· 785 780 q = container_of(walk, struct sem_queue, list); 786 781 walk = walk->next; 787 782 788 - error = perform_atomic_semop(sma, q->sops, q->nsops, 789 - q->undo, q->pid); 783 + error = perform_atomic_semop(sma, q); 790 784 791 785 if (error <= 0) { 792 786 /* operation completed, remove from queue & wakeup */ ··· 897 893 if (semnum != -1 && sma->sem_base[semnum].semval == 0) 898 894 break; 899 895 900 - error = perform_atomic_semop(sma, q->sops, q->nsops, 901 - q->undo, q->pid); 896 + error = perform_atomic_semop(sma, q); 902 897 903 898 /* Does q->sleeper still need to sleep? */ 904 899 if (error > 0) ··· 992 989 set_semotime(sma, sops); 993 990 } 994 991 992 + /* 993 + * check_qop: Test if a queued operation sleeps on the semaphore semnum 994 + */ 995 + static int check_qop(struct sem_array *sma, int semnum, struct sem_queue *q, 996 + bool count_zero) 997 + { 998 + struct sembuf *sop = q->blocking; 999 + 1000 + /* 1001 + * Linux always (since 0.99.10) reported a task as sleeping on all 1002 + * semaphores. 
This violates SUS, therefore it was changed to the 1003 + * standard compliant behavior. 1004 + * Give the administrators a chance to notice that an application 1005 + * might misbehave because it relies on the Linux behavior. 1006 + */ 1007 + pr_info_once("semctl(GETNCNT/GETZCNT) is since 3.16 Single Unix Specification compliant.\n" 1008 + "The task %s (%d) triggered the difference, watch for misbehavior.\n", 1009 + current->comm, task_pid_nr(current)); 1010 + 1011 + if (sop->sem_num != semnum) 1012 + return 0; 1013 + 1014 + if (count_zero && sop->sem_op == 0) 1015 + return 1; 1016 + if (!count_zero && sop->sem_op < 0) 1017 + return 1; 1018 + 1019 + return 0; 1020 + } 1021 + 995 1022 /* The following counts are associated to each semaphore: 996 1023 * semncnt number of tasks waiting on semval being nonzero 997 1024 * semzcnt number of tasks waiting on semval being zero 998 - * This model assumes that a task waits on exactly one semaphore. 999 - * Since semaphore operations are to be performed atomically, tasks actually 1000 - * wait on a whole sequence of semaphores simultaneously. 1001 - * The counts we return here are a rough approximation, but still 1002 - * warrant that semncnt+semzcnt>0 if the task is on the pending queue. 1025 + * 1026 + * Per definition, a task waits only on the semaphore of the first semop 1027 + * that cannot proceed, even if additional operation would block, too. 
1003 1028 */ 1004 - static int count_semncnt(struct sem_array *sma, ushort semnum) 1029 + static int count_semcnt(struct sem_array *sma, ushort semnum, 1030 + bool count_zero) 1005 1031 { 1006 - int semncnt; 1032 + struct list_head *l; 1007 1033 struct sem_queue *q; 1034 + int semcnt; 1008 1035 1009 - semncnt = 0; 1010 - list_for_each_entry(q, &sma->sem_base[semnum].pending_alter, list) { 1011 - struct sembuf *sops = q->sops; 1012 - BUG_ON(sops->sem_num != semnum); 1013 - if ((sops->sem_op < 0) && !(sops->sem_flg & IPC_NOWAIT)) 1014 - semncnt++; 1036 + semcnt = 0; 1037 + /* First: check the simple operations. They are easy to evaluate */ 1038 + if (count_zero) 1039 + l = &sma->sem_base[semnum].pending_const; 1040 + else 1041 + l = &sma->sem_base[semnum].pending_alter; 1042 + 1043 + list_for_each_entry(q, l, list) { 1044 + /* all task on a per-semaphore list sleep on exactly 1045 + * that semaphore 1046 + */ 1047 + semcnt++; 1015 1048 } 1016 1049 1050 + /* Then: check the complex operations. 
*/ 1017 1051 list_for_each_entry(q, &sma->pending_alter, list) { 1018 - struct sembuf *sops = q->sops; 1019 - int nsops = q->nsops; 1020 - int i; 1021 - for (i = 0; i < nsops; i++) 1022 - if (sops[i].sem_num == semnum 1023 - && (sops[i].sem_op < 0) 1024 - && !(sops[i].sem_flg & IPC_NOWAIT)) 1025 - semncnt++; 1052 + semcnt += check_qop(sma, semnum, q, count_zero); 1026 1053 } 1027 - return semncnt; 1028 - } 1029 - 1030 - static int count_semzcnt(struct sem_array *sma, ushort semnum) 1031 - { 1032 - int semzcnt; 1033 - struct sem_queue *q; 1034 - 1035 - semzcnt = 0; 1036 - list_for_each_entry(q, &sma->sem_base[semnum].pending_const, list) { 1037 - struct sembuf *sops = q->sops; 1038 - BUG_ON(sops->sem_num != semnum); 1039 - if ((sops->sem_op == 0) && !(sops->sem_flg & IPC_NOWAIT)) 1040 - semzcnt++; 1054 + if (count_zero) { 1055 + list_for_each_entry(q, &sma->pending_const, list) { 1056 + semcnt += check_qop(sma, semnum, q, count_zero); 1057 + } 1041 1058 } 1042 - 1043 - list_for_each_entry(q, &sma->pending_const, list) { 1044 - struct sembuf *sops = q->sops; 1045 - int nsops = q->nsops; 1046 - int i; 1047 - for (i = 0; i < nsops; i++) 1048 - if (sops[i].sem_num == semnum 1049 - && (sops[i].sem_op == 0) 1050 - && !(sops[i].sem_flg & IPC_NOWAIT)) 1051 - semzcnt++; 1052 - } 1053 - return semzcnt; 1059 + return semcnt; 1054 1060 } 1055 1061 1056 1062 /* Free a semaphore set. freeary() is called with sem_ids.rwsem locked ··· 1173 1161 err = security_sem_semctl(NULL, cmd); 1174 1162 if (err) 1175 1163 return err; 1176 - 1164 + 1177 1165 memset(&seminfo, 0, sizeof(seminfo)); 1178 1166 seminfo.semmni = ns->sc_semmni; 1179 1167 seminfo.semmns = ns->sc_semmns; ··· 1193 1181 } 1194 1182 max_id = ipc_get_maxid(&sem_ids(ns)); 1195 1183 up_read(&sem_ids(ns).rwsem); 1196 - if (copy_to_user(p, &seminfo, sizeof(struct seminfo))) 1184 + if (copy_to_user(p, &seminfo, sizeof(struct seminfo))) 1197 1185 return -EFAULT; 1198 1186 return (max_id < 0) ? 
0 : max_id; 1199 1187 } ··· 1461 1449 err = curr->sempid; 1462 1450 goto out_unlock; 1463 1451 case GETNCNT: 1464 - err = count_semncnt(sma, semnum); 1452 + err = count_semcnt(sma, semnum, 0); 1465 1453 goto out_unlock; 1466 1454 case GETZCNT: 1467 - err = count_semzcnt(sma, semnum); 1455 + err = count_semcnt(sma, semnum, 1); 1468 1456 goto out_unlock; 1469 1457 } 1470 1458 ··· 1878 1866 if (un && un->semid == -1) 1879 1867 goto out_unlock_free; 1880 1868 1881 - error = perform_atomic_semop(sma, sops, nsops, un, 1882 - task_tgid_vnr(current)); 1869 + queue.sops = sops; 1870 + queue.nsops = nsops; 1871 + queue.undo = un; 1872 + queue.pid = task_tgid_vnr(current); 1873 + queue.alter = alter; 1874 + 1875 + error = perform_atomic_semop(sma, &queue); 1883 1876 if (error == 0) { 1884 1877 /* If the operation was successful, then do 1885 1878 * the required updates. ··· 1900 1883 /* We need to sleep on this operation, so we put the current 1901 1884 * task into the pending queue and go to sleep. 1902 1885 */ 1903 - 1904 - queue.sops = sops; 1905 - queue.nsops = nsops; 1906 - queue.undo = un; 1907 - queue.pid = task_tgid_vnr(current); 1908 - queue.alter = alter; 1909 1886 1910 1887 if (nsops == 1) { 1911 1888 struct sem *curr; ··· 2027 2016 return error; 2028 2017 atomic_inc(&undo_list->refcnt); 2029 2018 tsk->sysvsem.undo_list = undo_list; 2030 - } else 2019 + } else 2031 2020 tsk->sysvsem.undo_list = NULL; 2032 2021 2033 2022 return 0;
+15 -8
ipc/shm.c
··· 43 43 #include <linux/mount.h> 44 44 #include <linux/ipc_namespace.h> 45 45 46 - #include <asm/uaccess.h> 46 + #include <linux/uaccess.h> 47 47 48 48 #include "util.h" 49 49 ··· 493 493 if (size < SHMMIN || size > ns->shm_ctlmax) 494 494 return -EINVAL; 495 495 496 - if (ns->shm_tot + numpages > ns->shm_ctlall) 496 + if (numpages << PAGE_SHIFT < size) 497 + return -ENOSPC; 498 + 499 + if (ns->shm_tot + numpages < ns->shm_tot || 500 + ns->shm_tot + numpages > ns->shm_ctlall) 497 501 return -ENOSPC; 498 502 499 503 shp = ipc_rcu_alloc(sizeof(*shp)); ··· 613 609 SYSCALL_DEFINE3(shmget, key_t, key, size_t, size, int, shmflg) 614 610 { 615 611 struct ipc_namespace *ns; 616 - struct ipc_ops shm_ops; 612 + static const struct ipc_ops shm_ops = { 613 + .getnew = newseg, 614 + .associate = shm_security, 615 + .more_checks = shm_more_checks, 616 + }; 617 617 struct ipc_params shm_params; 618 618 619 619 ns = current->nsproxy->ipc_ns; 620 - 621 - shm_ops.getnew = newseg; 622 - shm_ops.associate = shm_security; 623 - shm_ops.more_checks = shm_more_checks; 624 620 625 621 shm_params.key = key; 626 622 shm_params.flg = shmflg; ··· 698 694 out.shmmin = in->shmmin; 699 695 out.shmmni = in->shmmni; 700 696 out.shmseg = in->shmseg; 701 - out.shmall = in->shmall; 697 + out.shmall = in->shmall; 702 698 703 699 return copy_to_user(buf, &out, sizeof(out)); 704 700 } ··· 1164 1160 down_write(&current->mm->mmap_sem); 1165 1161 if (addr && !(shmflg & SHM_REMAP)) { 1166 1162 err = -EINVAL; 1163 + if (addr + size < addr) 1164 + goto invalid; 1165 + 1167 1166 if (find_vma_intersection(current->mm, addr, addr + size)) 1168 1167 goto invalid; 1169 1168 /*
+6 -6
ipc/util.c
··· 183 183 * ipc_findkey - find a key in an ipc identifier set 184 184 * @ids: ipc identifier set 185 185 * @key: key to find 186 - * 186 + * 187 187 * Returns the locked pointer to the ipc structure if found or NULL 188 188 * otherwise. If key is found ipc points to the owning ipc structure 189 189 * ··· 317 317 * when the key is IPC_PRIVATE. 318 318 */ 319 319 static int ipcget_new(struct ipc_namespace *ns, struct ipc_ids *ids, 320 - struct ipc_ops *ops, struct ipc_params *params) 320 + const struct ipc_ops *ops, struct ipc_params *params) 321 321 { 322 322 int err; 323 323 ··· 344 344 */ 345 345 static int ipc_check_perms(struct ipc_namespace *ns, 346 346 struct kern_ipc_perm *ipcp, 347 - struct ipc_ops *ops, 347 + const struct ipc_ops *ops, 348 348 struct ipc_params *params) 349 349 { 350 350 int err; ··· 375 375 * On success, the ipc id is returned. 376 376 */ 377 377 static int ipcget_public(struct ipc_namespace *ns, struct ipc_ids *ids, 378 - struct ipc_ops *ops, struct ipc_params *params) 378 + const struct ipc_ops *ops, struct ipc_params *params) 379 379 { 380 380 struct kern_ipc_perm *ipcp; 381 381 int flg = params->flg; ··· 538 538 else if (in_group_p(ipcp->cgid) || in_group_p(ipcp->gid)) 539 539 granted_mode >>= 3; 540 540 /* is there some bit set in requested_mode but not in granted_mode? */ 541 - if ((requested_mode & ~granted_mode & 0007) && 541 + if ((requested_mode & ~granted_mode & 0007) && 542 542 !ns_capable(ns->user_ns, CAP_IPC_OWNER)) 543 543 return -1; 544 544 ··· 678 678 * Common routine called by sys_msgget(), sys_semget() and sys_shmget(). 679 679 */ 680 680 int ipcget(struct ipc_namespace *ns, struct ipc_ids *ids, 681 - struct ipc_ops *ops, struct ipc_params *params) 681 + const struct ipc_ops *ops, struct ipc_params *params) 682 682 { 683 683 if (params->key == IPC_PRIVATE) 684 684 return ipcget_new(ns, ids, ops, params);
+5 -5
ipc/util.h
··· 78 78 * . routine to call for an extra check if needed 79 79 */ 80 80 struct ipc_ops { 81 - int (*getnew) (struct ipc_namespace *, struct ipc_params *); 82 - int (*associate) (struct kern_ipc_perm *, int); 83 - int (*more_checks) (struct kern_ipc_perm *, struct ipc_params *); 81 + int (*getnew)(struct ipc_namespace *, struct ipc_params *); 82 + int (*associate)(struct kern_ipc_perm *, int); 83 + int (*more_checks)(struct kern_ipc_perm *, struct ipc_params *); 84 84 }; 85 85 86 86 struct seq_file; ··· 142 142 struct ipc64_perm *perm, int extra_perm); 143 143 144 144 #ifndef CONFIG_ARCH_WANT_IPC_PARSE_VERSION 145 - /* On IA-64, we always use the "64-bit version" of the IPC structures. */ 145 + /* On IA-64, we always use the "64-bit version" of the IPC structures. */ 146 146 # define ipc_parse_version(cmd) IPC_64 147 147 #else 148 148 int ipc_parse_version(int *cmd); ··· 201 201 202 202 struct kern_ipc_perm *ipc_obtain_object_check(struct ipc_ids *ids, int id); 203 203 int ipcget(struct ipc_namespace *ns, struct ipc_ids *ids, 204 - struct ipc_ops *ops, struct ipc_params *params); 204 + const struct ipc_ops *ops, struct ipc_params *params); 205 205 void free_ipcs(struct ipc_namespace *ns, struct ipc_ids *ids, 206 206 void (*free)(struct ipc_namespace *, struct kern_ipc_perm *)); 207 207 #endif
+3 -3
kernel/acct.c
··· 55 55 #include <linux/times.h> 56 56 #include <linux/syscalls.h> 57 57 #include <linux/mount.h> 58 - #include <asm/uaccess.h> 58 + #include <linux/uaccess.h> 59 59 #include <asm/div64.h> 60 60 #include <linux/blkdev.h> /* sector_div */ 61 61 #include <linux/pid_namespace.h> ··· 134 134 spin_lock(&acct_lock); 135 135 if (file != acct->file) { 136 136 if (act) 137 - res = act>0; 137 + res = act > 0; 138 138 goto out; 139 139 } 140 140 ··· 262 262 if (name) { 263 263 struct filename *tmp = getname(name); 264 264 if (IS_ERR(tmp)) 265 - return (PTR_ERR(tmp)); 265 + return PTR_ERR(tmp); 266 266 error = acct_on(tmp); 267 267 putname(tmp); 268 268 } else {
+1 -1
kernel/audit.c
··· 44 44 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 45 45 46 46 #include <linux/init.h> 47 - #include <asm/types.h> 47 + #include <linux/types.h> 48 48 #include <linux/atomic.h> 49 49 #include <linux/mm.h> 50 50 #include <linux/export.h>
-39
kernel/exit.c
··· 313 313 } 314 314 } 315 315 316 - /* 317 - * Let kernel threads use this to say that they allow a certain signal. 318 - * Must not be used if kthread was cloned with CLONE_SIGHAND. 319 - */ 320 - int allow_signal(int sig) 321 - { 322 - if (!valid_signal(sig) || sig < 1) 323 - return -EINVAL; 324 - 325 - spin_lock_irq(&current->sighand->siglock); 326 - /* This is only needed for daemonize()'ed kthreads */ 327 - sigdelset(&current->blocked, sig); 328 - /* 329 - * Kernel threads handle their own signals. Let the signal code 330 - * know it'll be handled, so that they don't get converted to 331 - * SIGKILL or just silently dropped. 332 - */ 333 - current->sighand->action[(sig)-1].sa.sa_handler = (void __user *)2; 334 - recalc_sigpending(); 335 - spin_unlock_irq(&current->sighand->siglock); 336 - return 0; 337 - } 338 - 339 - EXPORT_SYMBOL(allow_signal); 340 - 341 - int disallow_signal(int sig) 342 - { 343 - if (!valid_signal(sig) || sig < 1) 344 - return -EINVAL; 345 - 346 - spin_lock_irq(&current->sighand->siglock); 347 - current->sighand->action[(sig)-1].sa.sa_handler = SIG_IGN; 348 - recalc_sigpending(); 349 - spin_unlock_irq(&current->sighand->siglock); 350 - return 0; 351 - } 352 - 353 - EXPORT_SYMBOL(disallow_signal); 354 - 355 316 #ifdef CONFIG_MEMCG 356 317 /* 357 318 * A task is exiting. If it owned this mm, find a new owner for the mm.
+7 -3
kernel/fork.c
··· 1606 1606 */ 1607 1607 if (!IS_ERR(p)) { 1608 1608 struct completion vfork; 1609 + struct pid *pid; 1609 1610 1610 1611 trace_sched_process_fork(current, p); 1611 1612 1612 - nr = task_pid_vnr(p); 1613 + pid = get_task_pid(p, PIDTYPE_PID); 1614 + nr = pid_vnr(pid); 1613 1615 1614 1616 if (clone_flags & CLONE_PARENT_SETTID) 1615 1617 put_user(nr, parent_tidptr); ··· 1626 1624 1627 1625 /* forking complete and child started to run, tell ptracer */ 1628 1626 if (unlikely(trace)) 1629 - ptrace_event(trace, nr); 1627 + ptrace_event_pid(trace, pid); 1630 1628 1631 1629 if (clone_flags & CLONE_VFORK) { 1632 1630 if (!wait_for_vfork_done(p, &vfork)) 1633 - ptrace_event(PTRACE_EVENT_VFORK_DONE, nr); 1631 + ptrace_event_pid(PTRACE_EVENT_VFORK_DONE, pid); 1634 1632 } 1633 + 1634 + put_pid(pid); 1635 1635 } else { 1636 1636 nr = PTR_ERR(p); 1637 1637 }
+32 -37
kernel/kexec.c
··· 125 125 unsigned long dest); 126 126 127 127 static int do_kimage_alloc(struct kimage **rimage, unsigned long entry, 128 - unsigned long nr_segments, 129 - struct kexec_segment __user *segments) 128 + unsigned long nr_segments, 129 + struct kexec_segment __user *segments) 130 130 { 131 131 size_t segment_bytes; 132 132 struct kimage *image; ··· 257 257 image->control_code_page = kimage_alloc_control_pages(image, 258 258 get_order(KEXEC_CONTROL_PAGE_SIZE)); 259 259 if (!image->control_code_page) { 260 - printk(KERN_ERR "Could not allocate control_code_buffer\n"); 260 + pr_err("Could not allocate control_code_buffer\n"); 261 261 goto out_free; 262 262 } 263 263 264 264 image->swap_page = kimage_alloc_control_pages(image, 0); 265 265 if (!image->swap_page) { 266 - printk(KERN_ERR "Could not allocate swap buffer\n"); 266 + pr_err("Could not allocate swap buffer\n"); 267 267 goto out_free; 268 268 } 269 269 ··· 332 332 image->control_code_page = kimage_alloc_control_pages(image, 333 333 get_order(KEXEC_CONTROL_PAGE_SIZE)); 334 334 if (!image->control_code_page) { 335 - printk(KERN_ERR "Could not allocate control_code_buffer\n"); 335 + pr_err("Could not allocate control_code_buffer\n"); 336 336 goto out_free; 337 337 } 338 338 ··· 621 621 622 622 #define for_each_kimage_entry(image, ptr, entry) \ 623 623 for (ptr = &image->head; (entry = *ptr) && !(entry & IND_DONE); \ 624 - ptr = (entry & IND_INDIRECTION)? \ 625 - phys_to_virt((entry & PAGE_MASK)): ptr +1) 624 + ptr = (entry & IND_INDIRECTION) ? \ 625 + phys_to_virt((entry & PAGE_MASK)) : ptr + 1) 626 626 627 627 static void kimage_free_entry(kimage_entry_t entry) 628 628 { ··· 650 650 * done with it. 
651 651 */ 652 652 ind = entry; 653 - } 654 - else if (entry & IND_SOURCE) 653 + } else if (entry & IND_SOURCE) 655 654 kimage_free_entry(entry); 656 655 } 657 656 /* Free the final indirection page */ ··· 773 774 addr = old_addr; 774 775 page = old_page; 775 776 break; 776 - } 777 - else { 777 + } else { 778 778 /* Place the page on the destination list I 779 779 * will use it later. 780 780 */ ··· 1057 1059 return -EINVAL; 1058 1060 1059 1061 ksegments = compat_alloc_user_space(nr_segments * sizeof(out)); 1060 - for (i=0; i < nr_segments; i++) { 1062 + for (i = 0; i < nr_segments; i++) { 1061 1063 result = copy_from_user(&in, &segments[i], sizeof(in)); 1062 1064 if (result) 1063 1065 return -EFAULT; ··· 1212 1214 * squirrelled away. ELF notes happen to provide 1213 1215 * all of that, so there is no need to invent something new. 1214 1216 */ 1215 - buf = (u32*)per_cpu_ptr(crash_notes, cpu); 1217 + buf = (u32 *)per_cpu_ptr(crash_notes, cpu); 1216 1218 if (!buf) 1217 1219 return; 1218 1220 memset(&prstatus, 0, sizeof(prstatus)); 1219 1221 prstatus.pr_pid = current->pid; 1220 1222 elf_core_copy_kernel_regs(&prstatus.pr_reg, regs); 1221 1223 buf = append_elf_note(buf, KEXEC_CORE_NOTE_NAME, NT_PRSTATUS, 1222 - &prstatus, sizeof(prstatus)); 1224 + &prstatus, sizeof(prstatus)); 1223 1225 final_note(buf); 1224 1226 } 1225 1227 ··· 1228 1230 /* Allocate memory for saving cpu registers. */ 1229 1231 crash_notes = alloc_percpu(note_buf_t); 1230 1232 if (!crash_notes) { 1231 - printk("Kexec: Memory allocation for saving cpu register" 1232 - " states failed\n"); 1233 + pr_warn("Kexec: Memory allocation for saving cpu register states failed\n"); 1233 1234 return -ENOMEM; 1234 1235 } 1235 1236 return 0; ··· 1250 1253 * 1251 1254 * The function returns 0 on success and -EINVAL on failure. 
1252 1255 */ 1253 - static int __init parse_crashkernel_mem(char *cmdline, 1254 - unsigned long long system_ram, 1255 - unsigned long long *crash_size, 1256 - unsigned long long *crash_base) 1256 + static int __init parse_crashkernel_mem(char *cmdline, 1257 + unsigned long long system_ram, 1258 + unsigned long long *crash_size, 1259 + unsigned long long *crash_base) 1257 1260 { 1258 1261 char *cur = cmdline, *tmp; 1259 1262 ··· 1264 1267 /* get the start of the range */ 1265 1268 start = memparse(cur, &tmp); 1266 1269 if (cur == tmp) { 1267 - pr_warning("crashkernel: Memory value expected\n"); 1270 + pr_warn("crashkernel: Memory value expected\n"); 1268 1271 return -EINVAL; 1269 1272 } 1270 1273 cur = tmp; 1271 1274 if (*cur != '-') { 1272 - pr_warning("crashkernel: '-' expected\n"); 1275 + pr_warn("crashkernel: '-' expected\n"); 1273 1276 return -EINVAL; 1274 1277 } 1275 1278 cur++; ··· 1278 1281 if (*cur != ':') { 1279 1282 end = memparse(cur, &tmp); 1280 1283 if (cur == tmp) { 1281 - pr_warning("crashkernel: Memory " 1282 - "value expected\n"); 1284 + pr_warn("crashkernel: Memory value expected\n"); 1283 1285 return -EINVAL; 1284 1286 } 1285 1287 cur = tmp; 1286 1288 if (end <= start) { 1287 - pr_warning("crashkernel: end <= start\n"); 1289 + pr_warn("crashkernel: end <= start\n"); 1288 1290 return -EINVAL; 1289 1291 } 1290 1292 } 1291 1293 1292 1294 if (*cur != ':') { 1293 - pr_warning("crashkernel: ':' expected\n"); 1295 + pr_warn("crashkernel: ':' expected\n"); 1294 1296 return -EINVAL; 1295 1297 } 1296 1298 cur++; 1297 1299 1298 1300 size = memparse(cur, &tmp); 1299 1301 if (cur == tmp) { 1300 - pr_warning("Memory value expected\n"); 1302 + pr_warn("Memory value expected\n"); 1301 1303 return -EINVAL; 1302 1304 } 1303 1305 cur = tmp; 1304 1306 if (size >= system_ram) { 1305 - pr_warning("crashkernel: invalid size\n"); 1307 + pr_warn("crashkernel: invalid size\n"); 1306 1308 return -EINVAL; 1307 1309 } 1308 1310 ··· 1319 1323 cur++; 1320 1324 *crash_base = 
memparse(cur, &tmp); 1321 1325 if (cur == tmp) { 1322 - pr_warning("Memory value expected " 1323 - "after '@'\n"); 1326 + pr_warn("Memory value expected after '@'\n"); 1324 1327 return -EINVAL; 1325 1328 } 1326 1329 } ··· 1331 1336 /* 1332 1337 * That function parses "simple" (old) crashkernel command lines like 1333 1338 * 1334 - * crashkernel=size[@offset] 1339 + * crashkernel=size[@offset] 1335 1340 * 1336 1341 * It returns 0 on success and -EINVAL on failure. 1337 1342 */ 1338 - static int __init parse_crashkernel_simple(char *cmdline, 1339 - unsigned long long *crash_size, 1340 - unsigned long long *crash_base) 1343 + static int __init parse_crashkernel_simple(char *cmdline, 1344 + unsigned long long *crash_size, 1345 + unsigned long long *crash_base) 1341 1346 { 1342 1347 char *cur = cmdline; 1343 1348 1344 1349 *crash_size = memparse(cmdline, &cur); 1345 1350 if (cmdline == cur) { 1346 - pr_warning("crashkernel: memory value expected\n"); 1351 + pr_warn("crashkernel: memory value expected\n"); 1347 1352 return -EINVAL; 1348 1353 } 1349 1354 1350 1355 if (*cur == '@') 1351 1356 *crash_base = memparse(cur+1, &cur); 1352 1357 else if (*cur != ' ' && *cur != '\0') { 1353 - pr_warning("crashkernel: unrecognized char\n"); 1358 + pr_warn("crashkernel: unrecognized char\n"); 1354 1359 return -EINVAL; 1355 1360 } 1356 1361 ··· 1686 1691 * CPU hotplug again; so re-enable it here. 1687 1692 */ 1688 1693 cpu_hotplug_enable(); 1689 - printk(KERN_EMERG "Starting new kernel\n"); 1694 + pr_emerg("Starting new kernel\n"); 1690 1695 machine_shutdown(); 1691 1696 } 1692 1697
+1 -4
kernel/kmod.c
··· 285 285 pid_t pid; 286 286 287 287 /* If SIGCLD is ignored sys_wait4 won't populate the status. */ 288 - spin_lock_irq(&current->sighand->siglock); 289 - current->sighand->action[SIGCHLD-1].sa.sa_handler = SIG_DFL; 290 - spin_unlock_irq(&current->sighand->siglock); 291 - 288 + kernel_sigaction(SIGCHLD, SIG_DFL); 292 289 pid = kernel_thread(____call_usermodehelper, sub_info, SIGCHLD); 293 290 if (pid < 0) { 294 291 sub_info->retval = pid;
+21 -2
kernel/panic.c
··· 32 32 static int pause_on_oops; 33 33 static int pause_on_oops_flag; 34 34 static DEFINE_SPINLOCK(pause_on_oops_lock); 35 + static bool crash_kexec_post_notifiers; 35 36 36 37 int panic_timeout = CONFIG_PANIC_TIMEOUT; 37 38 EXPORT_SYMBOL_GPL(panic_timeout); ··· 113 112 /* 114 113 * If we have crashed and we have a crash kernel loaded let it handle 115 114 * everything else. 116 - * Do we want to call this before we try to display a message? 115 + * If we want to run this after calling panic_notifiers, pass 116 + * the "crash_kexec_post_notifiers" option to the kernel. 117 117 */ 118 - crash_kexec(NULL); 118 + if (!crash_kexec_post_notifiers) 119 + crash_kexec(NULL); 119 120 120 121 /* 121 122 * Note smp_send_stop is the usual smp shutdown function, which ··· 133 130 atomic_notifier_call_chain(&panic_notifier_list, 0, buf); 134 131 135 132 kmsg_dump(KMSG_DUMP_PANIC); 133 + 134 + /* 135 + * If you doubt kdump always works fine in any situation, 136 + * "crash_kexec_post_notifiers" offers you a chance to run 137 + * panic_notifiers and dumping kmsg before kdump. 138 + * Note: since some panic_notifiers can make crashed kernel 139 + * more unstable, it can increase risks of the kdump failure too. 140 + */ 141 + crash_kexec(NULL); 136 142 137 143 bust_spinlocks(0); 138 144 ··· 483 471 484 472 core_param(panic, panic_timeout, int, 0644); 485 473 core_param(pause_on_oops, pause_on_oops, int, 0644); 474 + 475 + static int __init setup_crash_kexec_post_notifiers(char *s) 476 + { 477 + crash_kexec_post_notifiers = true; 478 + return 0; 479 + } 480 + early_param("crash_kexec_post_notifiers", setup_crash_kexec_post_notifiers); 486 481 487 482 static int __init oops_setup(char *s) 488 483 {
+8 -12
kernel/profile.c
··· 52 52 53 53 int profile_setup(char *str) 54 54 { 55 - static char schedstr[] = "schedule"; 56 - static char sleepstr[] = "sleep"; 57 - static char kvmstr[] = "kvm"; 55 + static const char schedstr[] = "schedule"; 56 + static const char sleepstr[] = "sleep"; 57 + static const char kvmstr[] = "kvm"; 58 58 int par; 59 59 60 60 if (!strncmp(str, sleepstr, strlen(sleepstr))) { ··· 64 64 str += strlen(sleepstr) + 1; 65 65 if (get_option(&str, &par)) 66 66 prof_shift = par; 67 - printk(KERN_INFO 68 - "kernel sleep profiling enabled (shift: %ld)\n", 67 + pr_info("kernel sleep profiling enabled (shift: %ld)\n", 69 68 prof_shift); 70 69 #else 71 - printk(KERN_WARNING 72 - "kernel sleep profiling requires CONFIG_SCHEDSTATS\n"); 70 + pr_warn("kernel sleep profiling requires CONFIG_SCHEDSTATS\n"); 73 71 #endif /* CONFIG_SCHEDSTATS */ 74 72 } else if (!strncmp(str, schedstr, strlen(schedstr))) { 75 73 prof_on = SCHED_PROFILING; ··· 75 77 str += strlen(schedstr) + 1; 76 78 if (get_option(&str, &par)) 77 79 prof_shift = par; 78 - printk(KERN_INFO 79 - "kernel schedule profiling enabled (shift: %ld)\n", 80 + pr_info("kernel schedule profiling enabled (shift: %ld)\n", 80 81 prof_shift); 81 82 } else if (!strncmp(str, kvmstr, strlen(kvmstr))) { 82 83 prof_on = KVM_PROFILING; ··· 83 86 str += strlen(kvmstr) + 1; 84 87 if (get_option(&str, &par)) 85 88 prof_shift = par; 86 - printk(KERN_INFO 87 - "kernel KVM profiling enabled (shift: %ld)\n", 89 + pr_info("kernel KVM profiling enabled (shift: %ld)\n", 88 90 prof_shift); 89 91 } else if (get_option(&str, &par)) { 90 92 prof_shift = par; 91 93 prof_on = CPU_PROFILING; 92 - printk(KERN_INFO "kernel profiling enabled (shift: %ld)\n", 94 + pr_info("kernel profiling enabled (shift: %ld)\n", 93 95 prof_shift); 94 96 } 95 97 return 1;
+2 -2
kernel/seccomp.c
··· 39 39 * is only needed for handling filters shared across tasks. 40 40 * @prev: points to a previously installed, or inherited, filter 41 41 * @len: the number of instructions in the program 42 - * @insns: the BPF program instructions to evaluate 42 + * @insnsi: the BPF program instructions to evaluate 43 43 * 44 44 * seccomp_filter objects are organized in a tree linked via the @prev 45 45 * pointer. For any task, it appears to be a singly-linked list starting ··· 220 220 return -ENOMEM; 221 221 222 222 /* 223 - * Installing a seccomp filter requires that the task have 223 + * Installing a seccomp filter requires that the task has 224 224 * CAP_SYS_ADMIN in its namespace or be running with no_new_privs. 225 225 * This avoids scenarios where unprivileged tasks can affect the 226 226 * behavior of privileged children.
+43 -48
kernel/signal.c
··· 277 277 { 278 278 if (unlikely(task->jobctl & JOBCTL_TRAPPING)) { 279 279 task->jobctl &= ~JOBCTL_TRAPPING; 280 + smp_mb(); /* advised by wake_up_bit() */ 280 281 wake_up_bit(&task->jobctl, JOBCTL_TRAPPING_BIT); 281 282 } 282 283 } ··· 706 705 * Returns 1 if any signals were found. 707 706 * 708 707 * All callers must be holding the siglock. 709 - * 710 - * This version takes a sigset mask and looks at all signals, 711 - * not just those in the first mask word. 712 708 */ 713 - static int rm_from_queue_full(sigset_t *mask, struct sigpending *s) 709 + static int flush_sigqueue_mask(sigset_t *mask, struct sigpending *s) 714 710 { 715 711 struct sigqueue *q, *n; 716 712 sigset_t m; ··· 719 721 sigandnsets(&s->signal, &s->signal, mask); 720 722 list_for_each_entry_safe(q, n, &s->list, list) { 721 723 if (sigismember(mask, q->info.si_signo)) { 722 - list_del_init(&q->list); 723 - __sigqueue_free(q); 724 - } 725 - } 726 - return 1; 727 - } 728 - /* 729 - * Remove signals in mask from the pending set and queue. 730 - * Returns 1 if any signals were found. 731 - * 732 - * All callers must be holding the siglock. 733 - */ 734 - static int rm_from_queue(unsigned long mask, struct sigpending *s) 735 - { 736 - struct sigqueue *q, *n; 737 - 738 - if (!sigtestsetmask(&s->signal, mask)) 739 - return 0; 740 - 741 - sigdelsetmask(&s->signal, mask); 742 - list_for_each_entry_safe(q, n, &s->list, list) { 743 - if (q->info.si_signo < SIGRTMIN && 744 - (mask & sigmask(q->info.si_signo))) { 745 724 list_del_init(&q->list); 746 725 __sigqueue_free(q); 747 726 } ··· 836 861 { 837 862 struct signal_struct *signal = p->signal; 838 863 struct task_struct *t; 864 + sigset_t flush; 839 865 840 866 if (signal->flags & (SIGNAL_GROUP_EXIT | SIGNAL_GROUP_COREDUMP)) { 841 867 if (signal->flags & SIGNAL_GROUP_COREDUMP) ··· 848 872 /* 849 873 * This is a stop signal. Remove SIGCONT from all queues. 
850 874 */ 851 - rm_from_queue(sigmask(SIGCONT), &signal->shared_pending); 852 - t = p; 853 - do { 854 - rm_from_queue(sigmask(SIGCONT), &t->pending); 855 - } while_each_thread(p, t); 875 + siginitset(&flush, sigmask(SIGCONT)); 876 + flush_sigqueue_mask(&flush, &signal->shared_pending); 877 + for_each_thread(p, t) 878 + flush_sigqueue_mask(&flush, &t->pending); 856 879 } else if (sig == SIGCONT) { 857 880 unsigned int why; 858 881 /* 859 882 * Remove all stop signals from all queues, wake all threads. 860 883 */ 861 - rm_from_queue(SIG_KERNEL_STOP_MASK, &signal->shared_pending); 862 - t = p; 863 - do { 884 + siginitset(&flush, SIG_KERNEL_STOP_MASK); 885 + flush_sigqueue_mask(&flush, &signal->shared_pending); 886 + for_each_thread(p, t) { 887 + flush_sigqueue_mask(&flush, &t->pending); 864 888 task_clear_jobctl_pending(t, JOBCTL_STOP_PENDING); 865 - rm_from_queue(SIG_KERNEL_STOP_MASK, &t->pending); 866 889 if (likely(!(t->ptrace & PT_SEIZED))) 867 890 wake_up_state(t, __TASK_STOPPED); 868 891 else 869 892 ptrace_trap_notify(t); 870 - } while_each_thread(p, t); 893 + } 871 894 872 895 /* 873 896 * Notify the parent with CLD_CONTINUED if we were stopped. 
··· 2829 2854 2830 2855 spin_lock_irq(&tsk->sighand->siglock); 2831 2856 __set_task_blocked(tsk, &tsk->real_blocked); 2832 - siginitset(&tsk->real_blocked, 0); 2857 + sigemptyset(&tsk->real_blocked); 2833 2858 sig = dequeue_signal(tsk, &mask, info); 2834 2859 } 2835 2860 spin_unlock_irq(&tsk->sighand->siglock); ··· 3066 3091 } 3067 3092 #endif 3068 3093 3094 + /* 3095 + * For kthreads only, must not be used if cloned with CLONE_SIGHAND 3096 + */ 3097 + void kernel_sigaction(int sig, __sighandler_t action) 3098 + { 3099 + spin_lock_irq(&current->sighand->siglock); 3100 + current->sighand->action[sig - 1].sa.sa_handler = action; 3101 + if (action == SIG_IGN) { 3102 + sigset_t mask; 3103 + 3104 + sigemptyset(&mask); 3105 + sigaddset(&mask, sig); 3106 + 3107 + flush_sigqueue_mask(&mask, &current->signal->shared_pending); 3108 + flush_sigqueue_mask(&mask, &current->pending); 3109 + recalc_sigpending(); 3110 + } 3111 + spin_unlock_irq(&current->sighand->siglock); 3112 + } 3113 + EXPORT_SYMBOL(kernel_sigaction); 3114 + 3069 3115 int do_sigaction(int sig, struct k_sigaction *act, struct k_sigaction *oact) 3070 3116 { 3071 - struct task_struct *t = current; 3117 + struct task_struct *p = current, *t; 3072 3118 struct k_sigaction *k; 3073 3119 sigset_t mask; 3074 3120 3075 3121 if (!valid_signal(sig) || sig < 1 || (act && sig_kernel_only(sig))) 3076 3122 return -EINVAL; 3077 3123 3078 - k = &t->sighand->action[sig-1]; 3124 + k = &p->sighand->action[sig-1]; 3079 3125 3080 - spin_lock_irq(&current->sighand->siglock); 3126 + spin_lock_irq(&p->sighand->siglock); 3081 3127 if (oact) 3082 3128 *oact = *k; 3083 3129 ··· 3117 3121 * (for example, SIGCHLD), shall cause the pending signal to 3118 3122 * be discarded, whether or not it is blocked" 3119 3123 */ 3120 - if (sig_handler_ignored(sig_handler(t, sig), sig)) { 3124 + if (sig_handler_ignored(sig_handler(p, sig), sig)) { 3121 3125 sigemptyset(&mask); 3122 3126 sigaddset(&mask, sig); 3123 - rm_from_queue_full(&mask, 
&t->signal->shared_pending); 3124 - do { 3125 - rm_from_queue_full(&mask, &t->pending); 3126 - } while_each_thread(current, t); 3127 + flush_sigqueue_mask(&mask, &p->signal->shared_pending); 3128 + for_each_thread(p, t) 3129 + flush_sigqueue_mask(&mask, &t->pending); 3127 3130 } 3128 3131 } 3129 3132 3130 - spin_unlock_irq(&current->sighand->siglock); 3133 + spin_unlock_irq(&p->sighand->siglock); 3131 3134 return 0; 3132 3135 } 3133 3136 3134 - static int 3137 + static int 3135 3138 do_sigaltstack (const stack_t __user *uss, stack_t __user *uoss, unsigned long sp) 3136 3139 { 3137 3140 stack_t oss;
+15 -3
kernel/smp.c
··· 185 185 { 186 186 struct llist_node *entry; 187 187 struct call_single_data *csd, *csd_next; 188 + static bool warned; 189 + 190 + entry = llist_del_all(&__get_cpu_var(call_single_queue)); 191 + entry = llist_reverse_order(entry); 188 192 189 193 /* 190 194 * Shouldn't receive this interrupt on a cpu that is not yet online. 191 195 */ 192 - WARN_ON_ONCE(!cpu_online(smp_processor_id())); 196 + if (unlikely(!cpu_online(smp_processor_id()) && !warned)) { 197 + warned = true; 198 + WARN(1, "IPI on offline CPU %d\n", smp_processor_id()); 193 199 194 - entry = llist_del_all(&__get_cpu_var(call_single_queue)); 195 - entry = llist_reverse_order(entry); 200 + /* 201 + * We don't have to use the _safe() variant here 202 + * because we are not invoking the IPI handlers yet. 203 + */ 204 + llist_for_each_entry(csd, entry, llist) 205 + pr_warn("IPI callback %pS sent to offline CPU\n", 206 + csd->func); 207 + } 196 208 197 209 llist_for_each_entry_safe(csd, csd_next, entry, llist) { 198 210 csd->func(csd->info);
+78 -16
kernel/sysctl.c
··· 173 173 #endif 174 174 175 175 #ifdef CONFIG_PROC_SYSCTL 176 + 177 + #define SYSCTL_WRITES_LEGACY -1 178 + #define SYSCTL_WRITES_WARN 0 179 + #define SYSCTL_WRITES_STRICT 1 180 + 181 + static int sysctl_writes_strict = SYSCTL_WRITES_WARN; 182 + 176 183 static int proc_do_cad_pid(struct ctl_table *table, int write, 177 184 void __user *buffer, size_t *lenp, loff_t *ppos); 178 185 static int proc_taint(struct ctl_table *table, int write, ··· 202 195 /* Note: sysrq code uses it's own private copy */ 203 196 static int __sysrq_enabled = CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE; 204 197 205 - static int sysrq_sysctl_handler(ctl_table *table, int write, 198 + static int sysrq_sysctl_handler(struct ctl_table *table, int write, 206 199 void __user *buffer, size_t *lenp, 207 200 loff_t *ppos) 208 201 { ··· 501 494 .maxlen = sizeof(long), 502 495 .mode = 0644, 503 496 .proc_handler = proc_taint, 497 + }, 498 + { 499 + .procname = "sysctl_writes_strict", 500 + .data = &sysctl_writes_strict, 501 + .maxlen = sizeof(int), 502 + .mode = 0644, 503 + .proc_handler = proc_dointvec_minmax, 504 + .extra1 = &neg_one, 505 + .extra2 = &one, 504 506 }, 505 507 #endif 506 508 #ifdef CONFIG_LATENCYTOP ··· 1719 1703 1720 1704 #ifdef CONFIG_PROC_SYSCTL 1721 1705 1722 - static int _proc_do_string(void* data, int maxlen, int write, 1723 - void __user *buffer, 1706 + static int _proc_do_string(char *data, int maxlen, int write, 1707 + char __user *buffer, 1724 1708 size_t *lenp, loff_t *ppos) 1725 1709 { 1726 1710 size_t len; ··· 1733 1717 } 1734 1718 1735 1719 if (write) { 1736 - len = 0; 1720 + if (sysctl_writes_strict == SYSCTL_WRITES_STRICT) { 1721 + /* Only continue writes not past the end of buffer. */ 1722 + len = strlen(data); 1723 + if (len > maxlen - 1) 1724 + len = maxlen - 1; 1725 + 1726 + if (*ppos > len) 1727 + return 0; 1728 + len = *ppos; 1729 + } else { 1730 + /* Start writing from beginning of buffer. 
*/ 1731 + len = 0; 1732 + } 1733 + 1734 + *ppos += *lenp; 1737 1735 p = buffer; 1738 - while (len < *lenp) { 1736 + while ((p - buffer) < *lenp && len < maxlen - 1) { 1739 1737 if (get_user(c, p++)) 1740 1738 return -EFAULT; 1741 1739 if (c == 0 || c == '\n') 1742 1740 break; 1743 - len++; 1741 + data[len++] = c; 1744 1742 } 1745 - if (len >= maxlen) 1746 - len = maxlen-1; 1747 - if(copy_from_user(data, buffer, len)) 1748 - return -EFAULT; 1749 - ((char *) data)[len] = 0; 1750 - *ppos += *lenp; 1743 + data[len] = 0; 1751 1744 } else { 1752 1745 len = strlen(data); 1753 1746 if (len > maxlen) ··· 1773 1748 if (len > *lenp) 1774 1749 len = *lenp; 1775 1750 if (len) 1776 - if(copy_to_user(buffer, data, len)) 1751 + if (copy_to_user(buffer, data, len)) 1777 1752 return -EFAULT; 1778 1753 if (len < *lenp) { 1779 - if(put_user('\n', ((char __user *) buffer) + len)) 1754 + if (put_user('\n', buffer + len)) 1780 1755 return -EFAULT; 1781 1756 len++; 1782 1757 } ··· 1784 1759 *ppos += len; 1785 1760 } 1786 1761 return 0; 1762 + } 1763 + 1764 + static void warn_sysctl_write(struct ctl_table *table) 1765 + { 1766 + pr_warn_once("%s wrote to %s when file position was not 0!\n" 1767 + "This will not be supported in the future. 
To silence this\n" 1768 + "warning, set kernel.sysctl_writes_strict = -1\n", 1769 + current->comm, table->procname); 1787 1770 } 1788 1771 1789 1772 /** ··· 1814 1781 int proc_dostring(struct ctl_table *table, int write, 1815 1782 void __user *buffer, size_t *lenp, loff_t *ppos) 1816 1783 { 1817 - return _proc_do_string(table->data, table->maxlen, write, 1818 - buffer, lenp, ppos); 1784 + if (write && *ppos && sysctl_writes_strict == SYSCTL_WRITES_WARN) 1785 + warn_sysctl_write(table); 1786 + 1787 + return _proc_do_string((char *)(table->data), table->maxlen, write, 1788 + (char __user *)buffer, lenp, ppos); 1819 1789 } 1820 1790 1821 1791 static size_t proc_skip_spaces(char **buf) ··· 1992 1956 conv = do_proc_dointvec_conv; 1993 1957 1994 1958 if (write) { 1959 + if (*ppos) { 1960 + switch (sysctl_writes_strict) { 1961 + case SYSCTL_WRITES_STRICT: 1962 + goto out; 1963 + case SYSCTL_WRITES_WARN: 1964 + warn_sysctl_write(table); 1965 + break; 1966 + default: 1967 + break; 1968 + } 1969 + } 1970 + 1995 1971 if (left > PAGE_SIZE - 1) 1996 1972 left = PAGE_SIZE - 1; 1997 1973 page = __get_free_page(GFP_TEMPORARY); ··· 2061 2013 return err ? : -EINVAL; 2062 2014 } 2063 2015 *lenp -= left; 2016 + out: 2064 2017 *ppos += *lenp; 2065 2018 return err; 2066 2019 } ··· 2254 2205 left = *lenp; 2255 2206 2256 2207 if (write) { 2208 + if (*ppos) { 2209 + switch (sysctl_writes_strict) { 2210 + case SYSCTL_WRITES_STRICT: 2211 + goto out; 2212 + case SYSCTL_WRITES_WARN: 2213 + warn_sysctl_write(table); 2214 + break; 2215 + default: 2216 + break; 2217 + } 2218 + } 2219 + 2257 2220 if (left > PAGE_SIZE - 1) 2258 2221 left = PAGE_SIZE - 1; 2259 2222 page = __get_free_page(GFP_TEMPORARY); ··· 2321 2260 return err ? : -EINVAL; 2322 2261 } 2323 2262 *lenp -= left; 2263 + out: 2324 2264 *ppos += *lenp; 2325 2265 return err; 2326 2266 }
+20 -13
kernel/user_namespace.c
··· 286 286 /** 287 287 * make_kgid - Map a user-namespace gid pair into a kgid. 288 288 * @ns: User namespace that the gid is in 289 - * @uid: group identifier 289 + * @gid: group identifier 290 290 * 291 291 * Maps a user-namespace gid pair into a kernel internal kgid, 292 292 * and returns that kgid. ··· 482 482 return 0; 483 483 } 484 484 485 - static void *m_start(struct seq_file *seq, loff_t *ppos, struct uid_gid_map *map) 485 + static void *m_start(struct seq_file *seq, loff_t *ppos, 486 + struct uid_gid_map *map) 486 487 { 487 488 struct uid_gid_extent *extent = NULL; 488 489 loff_t pos = *ppos; ··· 547 546 .show = projid_m_show, 548 547 }; 549 548 550 - static bool mappings_overlap(struct uid_gid_map *new_map, struct uid_gid_extent *extent) 549 + static bool mappings_overlap(struct uid_gid_map *new_map, 550 + struct uid_gid_extent *extent) 551 551 { 552 552 u32 upper_first, lower_first, upper_last, lower_last; 553 553 unsigned idx; ··· 655 653 ret = -EINVAL; 656 654 pos = kbuf; 657 655 new_map.nr_extents = 0; 658 - for (;pos; pos = next_line) { 656 + for (; pos; pos = next_line) { 659 657 extent = &new_map.extent[new_map.nr_extents]; 660 658 661 659 /* Find the end of line and ensure I don't look past it */ ··· 689 687 690 688 /* Verify we have been given valid starting values */ 691 689 if ((extent->first == (u32) -1) || 692 - (extent->lower_first == (u32) -1 )) 690 + (extent->lower_first == (u32) -1)) 693 691 goto out; 694 692 695 - /* Verify count is not zero and does not cause the extent to wrap */ 693 + /* Verify count is not zero and does not cause the 694 + * extent to wrap 695 + */ 696 696 if ((extent->first + extent->count) <= extent->first) 697 697 goto out; 698 - if ((extent->lower_first + extent->count) <= extent->lower_first) 698 + if ((extent->lower_first + extent->count) <= 699 + extent->lower_first) 699 700 goto out; 700 701 701 702 /* Do the ranges in extent overlap any previous extents? 
*/ ··· 756 751 return ret; 757 752 } 758 753 759 - ssize_t proc_uid_map_write(struct file *file, const char __user *buf, size_t size, loff_t *ppos) 754 + ssize_t proc_uid_map_write(struct file *file, const char __user *buf, 755 + size_t size, loff_t *ppos) 760 756 { 761 757 struct seq_file *seq = file->private_data; 762 758 struct user_namespace *ns = seq->private; ··· 773 767 &ns->uid_map, &ns->parent->uid_map); 774 768 } 775 769 776 - ssize_t proc_gid_map_write(struct file *file, const char __user *buf, size_t size, loff_t *ppos) 770 + ssize_t proc_gid_map_write(struct file *file, const char __user *buf, 771 + size_t size, loff_t *ppos) 777 772 { 778 773 struct seq_file *seq = file->private_data; 779 774 struct user_namespace *ns = seq->private; ··· 790 783 &ns->gid_map, &ns->parent->gid_map); 791 784 } 792 785 793 - ssize_t proc_projid_map_write(struct file *file, const char __user *buf, size_t size, loff_t *ppos) 786 + ssize_t proc_projid_map_write(struct file *file, const char __user *buf, 787 + size_t size, loff_t *ppos) 794 788 { 795 789 struct seq_file *seq = file->private_data; 796 790 struct user_namespace *ns = seq->private; ··· 808 800 &ns->projid_map, &ns->parent->projid_map); 809 801 } 810 802 811 - static bool new_idmap_permitted(const struct file *file, 803 + static bool new_idmap_permitted(const struct file *file, 812 804 struct user_namespace *ns, int cap_setid, 813 805 struct uid_gid_map *new_map) 814 806 { ··· 819 811 kuid_t uid = make_kuid(ns->parent, id); 820 812 if (uid_eq(uid, file->f_cred->fsuid)) 821 813 return true; 822 - } 823 - else if (cap_setid == CAP_SETGID) { 814 + } else if (cap_setid == CAP_SETGID) { 824 815 kgid_t gid = make_kgid(ns->parent, id); 825 816 if (gid_eq(gid, file->f_cred->fsgid)) 826 817 return true;
+3 -3
kernel/utsname_sysctl.c
··· 17 17 18 18 #ifdef CONFIG_PROC_SYSCTL 19 19 20 - static void *get_uts(ctl_table *table, int write) 20 + static void *get_uts(struct ctl_table *table, int write) 21 21 { 22 22 char *which = table->data; 23 23 struct uts_namespace *uts_ns; ··· 32 32 return which; 33 33 } 34 34 35 - static void put_uts(ctl_table *table, int write, void *which) 35 + static void put_uts(struct ctl_table *table, int write, void *which) 36 36 { 37 37 if (!write) 38 38 up_read(&uts_sem); ··· 44 44 * Special case of dostring for the UTS structure. This has locks 45 45 * to observe. Should this be in kernel/sys.c ???? 46 46 */ 47 - static int proc_do_uts_string(ctl_table *table, int write, 47 + static int proc_do_uts_string(struct ctl_table *table, int write, 48 48 void __user *buffer, size_t *lenp, loff_t *ppos) 49 49 { 50 50 struct ctl_table uts_table;
+15 -25
lib/idr.c
··· 18 18 * pointer or what ever, we treat it as a (void *). You can pass this 19 19 * id to a user for him to pass back at a later time. You then pass 20 20 * that id to this code and it returns your pointer. 21 - 22 - * You can release ids at any time. When all ids are released, most of 23 - * the memory is returned (we keep MAX_IDR_FREE) in a local pool so we 24 - * don't need to go to the memory "store" during an id allocate, just 25 - * so you don't need to be too concerned about locking and conflicts 26 - * with the slab allocator. 27 21 */ 28 22 29 23 #ifndef TEST // to test in user space... ··· 145 151 146 152 static inline void free_layer(struct idr *idr, struct idr_layer *p) 147 153 { 148 - if (idr->hint && idr->hint == p) 154 + if (idr->hint == p) 149 155 RCU_INIT_POINTER(idr->hint, NULL); 150 156 call_rcu(&p->rcu_head, idr_layer_rcu_free); 151 157 } ··· 243 249 id = (id | ((1 << (IDR_BITS * l)) - 1)) + 1; 244 250 245 251 /* if already at the top layer, we need to grow */ 246 - if (id >= 1 << (idp->layers * IDR_BITS)) { 252 + if (id > idr_max(idp->layers)) { 247 253 *starting_id = id; 248 254 return -EAGAIN; 249 255 } ··· 556 562 if (id < 0) 557 563 return; 558 564 565 + if (id > idr_max(idp->layers)) { 566 + idr_remove_warning(id); 567 + return; 568 + } 569 + 559 570 sub_remove(idp, (idp->layers - 1) * IDR_BITS, id); 560 571 if (idp->top && idp->top->count == 1 && (idp->layers > 1) && 561 572 idp->top->ary[0]) { ··· 578 579 bitmap_clear(to_free->bitmap, 0, IDR_SIZE); 579 580 free_layer(idp, to_free); 580 581 } 581 - while (idp->id_free_cnt >= MAX_IDR_FREE) { 582 - p = get_from_free_list(idp); 583 - /* 584 - * Note: we don't call the rcu callback here, since the only 585 - * layers that fall into the freelist are those that have been 586 - * preallocated. 
587 - */ 588 - kmem_cache_free(idr_layer_cache, p); 589 - } 590 - return; 591 582 } 592 583 EXPORT_SYMBOL(idr_remove); 593 584 ··· 798 809 799 810 p = idp->top; 800 811 if (!p) 801 - return ERR_PTR(-EINVAL); 812 + return ERR_PTR(-ENOENT); 802 813 803 - n = (p->layer+1) * IDR_BITS; 814 + if (id > idr_max(p->layer + 1)) 815 + return ERR_PTR(-ENOENT); 804 816 805 - if (id >= (1 << n)) 806 - return ERR_PTR(-EINVAL); 807 - 808 - n -= IDR_BITS; 817 + n = p->layer * IDR_BITS; 809 818 while ((n > 0) && p) { 810 819 p = p->ary[(id >> n) & IDR_MASK]; 811 820 n -= IDR_BITS; ··· 1014 1027 int n; 1015 1028 struct ida_bitmap *bitmap; 1016 1029 1030 + if (idr_id > idr_max(ida->idr.layers)) 1031 + goto err; 1032 + 1017 1033 /* clear full bits while looking up the leaf idr_layer */ 1018 1034 while ((shift > 0) && p) { 1019 1035 n = (idr_id >> shift) & IDR_MASK; ··· 1032 1042 __clear_bit(n, p->bitmap); 1033 1043 1034 1044 bitmap = (void *)p->ary[n]; 1035 - if (!test_bit(offset, bitmap->bitmap)) 1045 + if (!bitmap || !test_bit(offset, bitmap->bitmap)) 1036 1046 goto err; 1037 1047 1038 1048 /* update bitmap and remove it if empty */
+6
lib/radix-tree.c
··· 27 27 #include <linux/radix-tree.h> 28 28 #include <linux/percpu.h> 29 29 #include <linux/slab.h> 30 + #include <linux/kmemleak.h> 30 31 #include <linux/notifier.h> 31 32 #include <linux/cpu.h> 32 33 #include <linux/string.h> ··· 201 200 rtp->nodes[rtp->nr - 1] = NULL; 202 201 rtp->nr--; 203 202 } 203 + /* 204 + * Update the allocation stack trace as this is more useful 205 + * for debugging. 206 + */ 207 + kmemleak_update_trace(ret); 204 208 } 205 209 if (ret == NULL) 206 210 ret = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask);
+4
mm/fremap.c
··· 149 149 int has_write_lock = 0; 150 150 vm_flags_t vm_flags = 0; 151 151 152 + pr_warn_once("%s (%d) uses deprecated remap_file_pages() syscall. " 153 + "See Documentation/vm/remap_file_pages.txt.\n", 154 + current->comm, current->pid); 155 + 152 156 if (prot) 153 157 return err; 154 158 /*
+19 -17
mm/kmemleak-test.c
··· 18 18 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 19 19 */ 20 20 21 + #define pr_fmt(fmt) "kmemleak: " fmt 22 + 21 23 #include <linux/init.h> 22 24 #include <linux/kernel.h> 23 25 #include <linux/module.h> ··· 52 50 printk(KERN_INFO "Kmemleak testing\n"); 53 51 54 52 /* make some orphan objects */ 55 - pr_info("kmemleak: kmalloc(32) = %p\n", kmalloc(32, GFP_KERNEL)); 56 - pr_info("kmemleak: kmalloc(32) = %p\n", kmalloc(32, GFP_KERNEL)); 57 - pr_info("kmemleak: kmalloc(1024) = %p\n", kmalloc(1024, GFP_KERNEL)); 58 - pr_info("kmemleak: kmalloc(1024) = %p\n", kmalloc(1024, GFP_KERNEL)); 59 - pr_info("kmemleak: kmalloc(2048) = %p\n", kmalloc(2048, GFP_KERNEL)); 60 - pr_info("kmemleak: kmalloc(2048) = %p\n", kmalloc(2048, GFP_KERNEL)); 61 - pr_info("kmemleak: kmalloc(4096) = %p\n", kmalloc(4096, GFP_KERNEL)); 62 - pr_info("kmemleak: kmalloc(4096) = %p\n", kmalloc(4096, GFP_KERNEL)); 53 + pr_info("kmalloc(32) = %p\n", kmalloc(32, GFP_KERNEL)); 54 + pr_info("kmalloc(32) = %p\n", kmalloc(32, GFP_KERNEL)); 55 + pr_info("kmalloc(1024) = %p\n", kmalloc(1024, GFP_KERNEL)); 56 + pr_info("kmalloc(1024) = %p\n", kmalloc(1024, GFP_KERNEL)); 57 + pr_info("kmalloc(2048) = %p\n", kmalloc(2048, GFP_KERNEL)); 58 + pr_info("kmalloc(2048) = %p\n", kmalloc(2048, GFP_KERNEL)); 59 + pr_info("kmalloc(4096) = %p\n", kmalloc(4096, GFP_KERNEL)); 60 + pr_info("kmalloc(4096) = %p\n", kmalloc(4096, GFP_KERNEL)); 63 61 #ifndef CONFIG_MODULES 64 - pr_info("kmemleak: kmem_cache_alloc(files_cachep) = %p\n", 62 + pr_info("kmem_cache_alloc(files_cachep) = %p\n", 65 63 kmem_cache_alloc(files_cachep, GFP_KERNEL)); 66 - pr_info("kmemleak: kmem_cache_alloc(files_cachep) = %p\n", 64 + pr_info("kmem_cache_alloc(files_cachep) = %p\n", 67 65 kmem_cache_alloc(files_cachep, GFP_KERNEL)); 68 66 #endif 69 - pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64)); 70 - pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64)); 71 - pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64)); 72 - 
pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64)); 73 - pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64)); 67 + pr_info("vmalloc(64) = %p\n", vmalloc(64)); 68 + pr_info("vmalloc(64) = %p\n", vmalloc(64)); 69 + pr_info("vmalloc(64) = %p\n", vmalloc(64)); 70 + pr_info("vmalloc(64) = %p\n", vmalloc(64)); 71 + pr_info("vmalloc(64) = %p\n", vmalloc(64)); 74 72 75 73 /* 76 74 * Add elements to a list. They should only appear as orphan ··· 78 76 */ 79 77 for (i = 0; i < 10; i++) { 80 78 elem = kzalloc(sizeof(*elem), GFP_KERNEL); 81 - pr_info("kmemleak: kzalloc(sizeof(*elem)) = %p\n", elem); 79 + pr_info("kzalloc(sizeof(*elem)) = %p\n", elem); 82 80 if (!elem) 83 81 return -ENOMEM; 84 82 INIT_LIST_HEAD(&elem->list); ··· 87 85 88 86 for_each_possible_cpu(i) { 89 87 per_cpu(kmemleak_test_pointer, i) = kmalloc(129, GFP_KERNEL); 90 - pr_info("kmemleak: kmalloc(129) = %p\n", 88 + pr_info("kmalloc(129) = %p\n", 91 89 per_cpu(kmemleak_test_pointer, i)); 92 90 } 93 91
+35 -1
mm/kmemleak.c
··· 387 387 pr_notice(" min_count = %d\n", object->min_count); 388 388 pr_notice(" count = %d\n", object->count); 389 389 pr_notice(" flags = 0x%lx\n", object->flags); 390 - pr_notice(" checksum = %d\n", object->checksum); 390 + pr_notice(" checksum = %u\n", object->checksum); 391 391 pr_notice(" backtrace:\n"); 392 392 print_stack_trace(&trace, 4); 393 393 } ··· 988 988 log_early(KMEMLEAK_FREE_PERCPU, ptr, 0, 0); 989 989 } 990 990 EXPORT_SYMBOL_GPL(kmemleak_free_percpu); 991 + 992 + /** 993 + * kmemleak_update_trace - update object allocation stack trace 994 + * @ptr: pointer to beginning of the object 995 + * 996 + * Override the object allocation stack trace for cases where the actual 997 + * allocation place is not always useful. 998 + */ 999 + void __ref kmemleak_update_trace(const void *ptr) 1000 + { 1001 + struct kmemleak_object *object; 1002 + unsigned long flags; 1003 + 1004 + pr_debug("%s(0x%p)\n", __func__, ptr); 1005 + 1006 + if (!kmemleak_enabled || IS_ERR_OR_NULL(ptr)) 1007 + return; 1008 + 1009 + object = find_and_get_object((unsigned long)ptr, 1); 1010 + if (!object) { 1011 + #ifdef DEBUG 1012 + kmemleak_warn("Updating stack trace for unknown object at %p\n", 1013 + ptr); 1014 + #endif 1015 + return; 1016 + } 1017 + 1018 + spin_lock_irqsave(&object->lock, flags); 1019 + object->trace_len = __save_stack_trace(object->trace); 1020 + spin_unlock_irqrestore(&object->lock, flags); 1021 + 1022 + put_object(object); 1023 + } 1024 + EXPORT_SYMBOL(kmemleak_update_trace); 991 1025 992 1026 /** 993 1027 * kmemleak_not_leak - mark an allocated object as false positive
+8 -2
mm/memblock.c
··· 691 691 (unsigned long long)base + size - 1, 692 692 (void *)_RET_IP_); 693 693 694 + kmemleak_free_part(__va(base), size); 694 695 return memblock_remove_range(&memblock.reserved, base, size); 695 696 } 696 697 ··· 1044 1043 align = SMP_CACHE_BYTES; 1045 1044 1046 1045 found = memblock_find_in_range_node(size, align, start, end, nid); 1047 - if (found && !memblock_reserve(found, size)) 1046 + if (found && !memblock_reserve(found, size)) { 1047 + /* 1048 + * The min_count is set to 0 so that memblock allocations are 1049 + * never reported as leaks. 1050 + */ 1051 + kmemleak_alloc(__va(found), size, 0, 0); 1048 1052 return found; 1049 - 1053 + } 1050 1054 return 0; 1051 1055 } 1052 1056
+54 -71
mm/memcontrol.c
··· 676 676 static void drain_all_stock_async(struct mem_cgroup *memcg); 677 677 678 678 static struct mem_cgroup_per_zone * 679 - mem_cgroup_zoneinfo(struct mem_cgroup *memcg, int nid, int zid) 679 + mem_cgroup_zone_zoneinfo(struct mem_cgroup *memcg, struct zone *zone) 680 680 { 681 - VM_BUG_ON((unsigned)nid >= nr_node_ids); 681 + int nid = zone_to_nid(zone); 682 + int zid = zone_idx(zone); 683 + 682 684 return &memcg->nodeinfo[nid]->zoneinfo[zid]; 683 685 } 684 686 ··· 690 688 } 691 689 692 690 static struct mem_cgroup_per_zone * 693 - page_cgroup_zoneinfo(struct mem_cgroup *memcg, struct page *page) 691 + mem_cgroup_page_zoneinfo(struct mem_cgroup *memcg, struct page *page) 694 692 { 695 693 int nid = page_to_nid(page); 696 694 int zid = page_zonenum(page); 697 695 698 - return mem_cgroup_zoneinfo(memcg, nid, zid); 696 + return &memcg->nodeinfo[nid]->zoneinfo[zid]; 699 697 } 700 698 701 699 static struct mem_cgroup_tree_per_zone * ··· 713 711 return &soft_limit_tree.rb_tree_per_node[nid]->rb_tree_per_zone[zid]; 714 712 } 715 713 716 - static void 717 - __mem_cgroup_insert_exceeded(struct mem_cgroup *memcg, 718 - struct mem_cgroup_per_zone *mz, 719 - struct mem_cgroup_tree_per_zone *mctz, 720 - unsigned long long new_usage_in_excess) 714 + static void __mem_cgroup_insert_exceeded(struct mem_cgroup_per_zone *mz, 715 + struct mem_cgroup_tree_per_zone *mctz, 716 + unsigned long long new_usage_in_excess) 721 717 { 722 718 struct rb_node **p = &mctz->rb_root.rb_node; 723 719 struct rb_node *parent = NULL; ··· 745 745 mz->on_tree = true; 746 746 } 747 747 748 - static void 749 - __mem_cgroup_remove_exceeded(struct mem_cgroup *memcg, 750 - struct mem_cgroup_per_zone *mz, 751 - struct mem_cgroup_tree_per_zone *mctz) 748 + static void __mem_cgroup_remove_exceeded(struct mem_cgroup_per_zone *mz, 749 + struct mem_cgroup_tree_per_zone *mctz) 752 750 { 753 751 if (!mz->on_tree) 754 752 return; ··· 754 756 mz->on_tree = false; 755 757 } 756 758 757 - static void 758 - 
mem_cgroup_remove_exceeded(struct mem_cgroup *memcg, 759 - struct mem_cgroup_per_zone *mz, 760 - struct mem_cgroup_tree_per_zone *mctz) 759 + static void mem_cgroup_remove_exceeded(struct mem_cgroup_per_zone *mz, 760 + struct mem_cgroup_tree_per_zone *mctz) 761 761 { 762 762 spin_lock(&mctz->lock); 763 - __mem_cgroup_remove_exceeded(memcg, mz, mctz); 763 + __mem_cgroup_remove_exceeded(mz, mctz); 764 764 spin_unlock(&mctz->lock); 765 765 } 766 766 ··· 768 772 unsigned long long excess; 769 773 struct mem_cgroup_per_zone *mz; 770 774 struct mem_cgroup_tree_per_zone *mctz; 771 - int nid = page_to_nid(page); 772 - int zid = page_zonenum(page); 773 - mctz = soft_limit_tree_from_page(page); 774 775 776 + mctz = soft_limit_tree_from_page(page); 775 777 /* 776 778 * Necessary to update all ancestors when hierarchy is used. 777 779 * because their event counter is not touched. 778 780 */ 779 781 for (; memcg; memcg = parent_mem_cgroup(memcg)) { 780 - mz = mem_cgroup_zoneinfo(memcg, nid, zid); 782 + mz = mem_cgroup_page_zoneinfo(memcg, page); 781 783 excess = res_counter_soft_limit_excess(&memcg->res); 782 784 /* 783 785 * We have to update the tree if mz is on RB-tree or ··· 785 791 spin_lock(&mctz->lock); 786 792 /* if on-tree, remove it */ 787 793 if (mz->on_tree) 788 - __mem_cgroup_remove_exceeded(memcg, mz, mctz); 794 + __mem_cgroup_remove_exceeded(mz, mctz); 789 795 /* 790 796 * Insert again. mz->usage_in_excess will be updated. 791 797 * If excess is 0, no tree ops. 
792 798 */ 793 - __mem_cgroup_insert_exceeded(memcg, mz, mctz, excess); 799 + __mem_cgroup_insert_exceeded(mz, mctz, excess); 794 800 spin_unlock(&mctz->lock); 795 801 } 796 802 } ··· 798 804 799 805 static void mem_cgroup_remove_from_trees(struct mem_cgroup *memcg) 800 806 { 801 - int node, zone; 802 - struct mem_cgroup_per_zone *mz; 803 807 struct mem_cgroup_tree_per_zone *mctz; 808 + struct mem_cgroup_per_zone *mz; 809 + int nid, zid; 804 810 805 - for_each_node(node) { 806 - for (zone = 0; zone < MAX_NR_ZONES; zone++) { 807 - mz = mem_cgroup_zoneinfo(memcg, node, zone); 808 - mctz = soft_limit_tree_node_zone(node, zone); 809 - mem_cgroup_remove_exceeded(memcg, mz, mctz); 811 + for_each_node(nid) { 812 + for (zid = 0; zid < MAX_NR_ZONES; zid++) { 813 + mz = &memcg->nodeinfo[nid]->zoneinfo[zid]; 814 + mctz = soft_limit_tree_node_zone(nid, zid); 815 + mem_cgroup_remove_exceeded(mz, mctz); 810 816 } 811 817 } 812 818 } ··· 829 835 * we will to add it back at the end of reclaim to its correct 830 836 * position in the tree. 
831 837 */ 832 - __mem_cgroup_remove_exceeded(mz->memcg, mz, mctz); 838 + __mem_cgroup_remove_exceeded(mz, mctz); 833 839 if (!res_counter_soft_limit_excess(&mz->memcg->res) || 834 840 !css_tryget(&mz->memcg->css)) 835 841 goto retry; ··· 940 946 __this_cpu_add(memcg->stat->nr_page_events, nr_pages); 941 947 } 942 948 943 - unsigned long 944 - mem_cgroup_get_lru_size(struct lruvec *lruvec, enum lru_list lru) 949 + unsigned long mem_cgroup_get_lru_size(struct lruvec *lruvec, enum lru_list lru) 945 950 { 946 951 struct mem_cgroup_per_zone *mz; 947 952 ··· 948 955 return mz->lru_size[lru]; 949 956 } 950 957 951 - static unsigned long 952 - mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg, int nid, int zid, 953 - unsigned int lru_mask) 958 + static unsigned long mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg, 959 + int nid, 960 + unsigned int lru_mask) 954 961 { 955 - struct mem_cgroup_per_zone *mz; 956 - enum lru_list lru; 957 - unsigned long ret = 0; 958 - 959 - mz = mem_cgroup_zoneinfo(memcg, nid, zid); 960 - 961 - for_each_lru(lru) { 962 - if (BIT(lru) & lru_mask) 963 - ret += mz->lru_size[lru]; 964 - } 965 - return ret; 966 - } 967 - 968 - static unsigned long 969 - mem_cgroup_node_nr_lru_pages(struct mem_cgroup *memcg, 970 - int nid, unsigned int lru_mask) 971 - { 972 - u64 total = 0; 962 + unsigned long nr = 0; 973 963 int zid; 974 964 975 - for (zid = 0; zid < MAX_NR_ZONES; zid++) 976 - total += mem_cgroup_zone_nr_lru_pages(memcg, 977 - nid, zid, lru_mask); 965 + VM_BUG_ON((unsigned)nid >= nr_node_ids); 978 966 979 - return total; 967 + for (zid = 0; zid < MAX_NR_ZONES; zid++) { 968 + struct mem_cgroup_per_zone *mz; 969 + enum lru_list lru; 970 + 971 + for_each_lru(lru) { 972 + if (!(BIT(lru) & lru_mask)) 973 + continue; 974 + mz = &memcg->nodeinfo[nid]->zoneinfo[zid]; 975 + nr += mz->lru_size[lru]; 976 + } 977 + } 978 + return nr; 980 979 } 981 980 982 981 static unsigned long mem_cgroup_nr_lru_pages(struct mem_cgroup *memcg, 983 982 unsigned int 
lru_mask) 984 983 { 984 + unsigned long nr = 0; 985 985 int nid; 986 - u64 total = 0; 987 986 988 987 for_each_node_state(nid, N_MEMORY) 989 - total += mem_cgroup_node_nr_lru_pages(memcg, nid, lru_mask); 990 - return total; 988 + nr += mem_cgroup_node_nr_lru_pages(memcg, nid, lru_mask); 989 + return nr; 991 990 } 992 991 993 992 static bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg, ··· 1227 1242 int uninitialized_var(seq); 1228 1243 1229 1244 if (reclaim) { 1230 - int nid = zone_to_nid(reclaim->zone); 1231 - int zid = zone_idx(reclaim->zone); 1232 1245 struct mem_cgroup_per_zone *mz; 1233 1246 1234 - mz = mem_cgroup_zoneinfo(root, nid, zid); 1247 + mz = mem_cgroup_zone_zoneinfo(root, reclaim->zone); 1235 1248 iter = &mz->reclaim_iter[reclaim->priority]; 1236 1249 if (prev && reclaim->generation != iter->generation) { 1237 1250 iter->last_visited = NULL; ··· 1336 1353 goto out; 1337 1354 } 1338 1355 1339 - mz = mem_cgroup_zoneinfo(memcg, zone_to_nid(zone), zone_idx(zone)); 1356 + mz = mem_cgroup_zone_zoneinfo(memcg, zone); 1340 1357 lruvec = &mz->lruvec; 1341 1358 out: 1342 1359 /* ··· 1395 1412 if (!PageLRU(page) && !PageCgroupUsed(pc) && memcg != root_mem_cgroup) 1396 1413 pc->mem_cgroup = memcg = root_mem_cgroup; 1397 1414 1398 - mz = page_cgroup_zoneinfo(memcg, page); 1415 + mz = mem_cgroup_page_zoneinfo(memcg, page); 1399 1416 lruvec = &mz->lruvec; 1400 1417 out: 1401 1418 /* ··· 1533 1550 int mem_cgroup_swappiness(struct mem_cgroup *memcg) 1534 1551 { 1535 1552 /* root ? 
*/ 1536 - if (!css_parent(&memcg->css)) 1553 + if (mem_cgroup_disabled() || !css_parent(&memcg->css)) 1537 1554 return vm_swappiness; 1538 1555 1539 1556 return memcg->swappiness; ··· 4580 4597 break; 4581 4598 } while (1); 4582 4599 } 4583 - __mem_cgroup_remove_exceeded(mz->memcg, mz, mctz); 4600 + __mem_cgroup_remove_exceeded(mz, mctz); 4584 4601 excess = res_counter_soft_limit_excess(&mz->memcg->res); 4585 4602 /* 4586 4603 * One school of thought says that we should not add ··· 4591 4608 * term TODO. 4592 4609 */ 4593 4610 /* If excess == 0, no tree ops */ 4594 - __mem_cgroup_insert_exceeded(mz->memcg, mz, mctz, excess); 4611 + __mem_cgroup_insert_exceeded(mz, mctz, excess); 4595 4612 spin_unlock(&mctz->lock); 4596 4613 css_put(&mz->memcg->css); 4597 4614 loop++; ··· 5288 5305 5289 5306 for_each_online_node(nid) 5290 5307 for (zid = 0; zid < MAX_NR_ZONES; zid++) { 5291 - mz = mem_cgroup_zoneinfo(memcg, nid, zid); 5308 + mz = &memcg->nodeinfo[nid]->zoneinfo[zid]; 5292 5309 rstat = &mz->lruvec.reclaim_stat; 5293 5310 5294 5311 recent_rotated[0] += rstat->recent_rotated[0];
+4 -1
mm/mempolicy.c
··· 65 65 kernel is not always grateful with that. 66 66 */ 67 67 68 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 69 + 68 70 #include <linux/mempolicy.h> 69 71 #include <linux/mm.h> 70 72 #include <linux/highmem.h> ··· 93 91 #include <linux/ctype.h> 94 92 #include <linux/mm_inline.h> 95 93 #include <linux/mmu_notifier.h> 94 + #include <linux/printk.h> 96 95 97 96 #include <asm/tlbflush.h> 98 97 #include <asm/uaccess.h> ··· 2648 2645 node_set(prefer, interleave_nodes); 2649 2646 2650 2647 if (do_set_mempolicy(MPOL_INTERLEAVE, 0, &interleave_nodes)) 2651 - printk("numa_policy_init: interleaving failed\n"); 2648 + pr_err("%s: interleaving failed\n", __func__); 2652 2649 2653 2650 check_numabalancing_enable(); 2654 2651 }
+6
mm/mempool.c
··· 10 10 11 11 #include <linux/mm.h> 12 12 #include <linux/slab.h> 13 + #include <linux/kmemleak.h> 13 14 #include <linux/export.h> 14 15 #include <linux/mempool.h> 15 16 #include <linux/blkdev.h> ··· 223 222 spin_unlock_irqrestore(&pool->lock, flags); 224 223 /* paired with rmb in mempool_free(), read comment there */ 225 224 smp_wmb(); 225 + /* 226 + * Update the allocation stack trace as this is more useful 227 + * for debugging. 228 + */ 229 + kmemleak_update_trace(element); 226 230 return element; 227 231 } 228 232
+12 -9
mm/mmap.c
··· 6 6 * Address space accounting code <alan@lxorguk.ukuu.org.uk> 7 7 */ 8 8 9 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 10 + 9 11 #include <linux/kernel.h> 10 12 #include <linux/slab.h> 11 13 #include <linux/backing-dev.h> ··· 39 37 #include <linux/sched/sysctl.h> 40 38 #include <linux/notifier.h> 41 39 #include <linux/memory.h> 40 + #include <linux/printk.h> 42 41 43 42 #include <asm/uaccess.h> 44 43 #include <asm/cacheflush.h> ··· 364 361 struct vm_area_struct *vma; 365 362 vma = rb_entry(nd, struct vm_area_struct, vm_rb); 366 363 if (vma->vm_start < prev) { 367 - printk("vm_start %lx prev %lx\n", vma->vm_start, prev); 364 + pr_info("vm_start %lx prev %lx\n", vma->vm_start, prev); 368 365 bug = 1; 369 366 } 370 367 if (vma->vm_start < pend) { 371 - printk("vm_start %lx pend %lx\n", vma->vm_start, pend); 368 + pr_info("vm_start %lx pend %lx\n", vma->vm_start, pend); 372 369 bug = 1; 373 370 } 374 371 if (vma->vm_start > vma->vm_end) { 375 - printk("vm_end %lx < vm_start %lx\n", 372 + pr_info("vm_end %lx < vm_start %lx\n", 376 373 vma->vm_end, vma->vm_start); 377 374 bug = 1; 378 375 } 379 376 if (vma->rb_subtree_gap != vma_compute_subtree_gap(vma)) { 380 - printk("free gap %lx, correct %lx\n", 377 + pr_info("free gap %lx, correct %lx\n", 381 378 vma->rb_subtree_gap, 382 379 vma_compute_subtree_gap(vma)); 383 380 bug = 1; ··· 391 388 for (nd = pn; nd; nd = rb_prev(nd)) 392 389 j++; 393 390 if (i != j) { 394 - printk("backwards %d, forwards %d\n", j, i); 391 + pr_info("backwards %d, forwards %d\n", j, i); 395 392 bug = 1; 396 393 } 397 394 return bug ? 
-1 : i; ··· 426 423 i++; 427 424 } 428 425 if (i != mm->map_count) { 429 - printk("map_count %d vm_next %d\n", mm->map_count, i); 426 + pr_info("map_count %d vm_next %d\n", mm->map_count, i); 430 427 bug = 1; 431 428 } 432 429 if (highest_address != mm->highest_vm_end) { 433 - printk("mm->highest_vm_end %lx, found %lx\n", 430 + pr_info("mm->highest_vm_end %lx, found %lx\n", 434 431 mm->highest_vm_end, highest_address); 435 432 bug = 1; 436 433 } 437 434 i = browse_rb(&mm->mm_rb); 438 435 if (i != mm->map_count) { 439 - printk("map_count %d rb %d\n", mm->map_count, i); 436 + pr_info("map_count %d rb %d\n", mm->map_count, i); 440 437 bug = 1; 441 438 } 442 439 BUG_ON(bug); ··· 3283 3280 static int __meminit init_reserve_notifier(void) 3284 3281 { 3285 3282 if (register_hotmemory_notifier(&reserve_mem_nb)) 3286 - printk("Failed registering memory add/remove notifier for admin reserve"); 3283 + pr_err("Failed registering memory add/remove notifier for admin reserve\n"); 3287 3284 3288 3285 return 0; 3289 3286 }
-2
mm/nobootmem.c
··· 197 197 void __init free_bootmem_node(pg_data_t *pgdat, unsigned long physaddr, 198 198 unsigned long size) 199 199 { 200 - kmemleak_free_part(__va(physaddr), size); 201 200 memblock_free(physaddr, size); 202 201 } 203 202 ··· 211 212 */ 212 213 void __init free_bootmem(unsigned long addr, unsigned long size) 213 214 { 214 - kmemleak_free_part(__va(addr), size); 215 215 memblock_free(addr, size); 216 216 } 217 217
+4 -1
mm/nommu.c
··· 13 13 * Copyright (c) 2007-2010 Paul Mundt <lethal@linux-sh.org> 14 14 */ 15 15 16 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 17 + 16 18 #include <linux/export.h> 17 19 #include <linux/mm.h> 18 20 #include <linux/vmacache.h> ··· 34 32 #include <linux/syscalls.h> 35 33 #include <linux/audit.h> 36 34 #include <linux/sched/sysctl.h> 35 + #include <linux/printk.h> 37 36 38 37 #include <asm/uaccess.h> 39 38 #include <asm/tlb.h> ··· 1249 1246 return ret; 1250 1247 1251 1248 enomem: 1252 - printk("Allocation of length %lu from process %d (%s) failed\n", 1249 + pr_err("Allocation of length %lu from process %d (%s) failed\n", 1253 1250 len, current->pid, current->comm); 1254 1251 show_free_areas(0); 1255 1252 return -ENOMEM;
+1 -1
mm/page-writeback.c
··· 1664 1664 /* 1665 1665 * sysctl handler for /proc/sys/vm/dirty_writeback_centisecs 1666 1666 */ 1667 - int dirty_writeback_centisecs_handler(ctl_table *table, int write, 1667 + int dirty_writeback_centisecs_handler(struct ctl_table *table, int write, 1668 1668 void __user *buffer, size_t *length, loff_t *ppos) 1669 1669 { 1670 1670 proc_dointvec(table, write, buffer, length, ppos);
+6 -6
mm/page_alloc.c
··· 3389 3389 /* 3390 3390 * sysctl handler for numa_zonelist_order 3391 3391 */ 3392 - int numa_zonelist_order_handler(ctl_table *table, int write, 3392 + int numa_zonelist_order_handler(struct ctl_table *table, int write, 3393 3393 void __user *buffer, size_t *length, 3394 3394 loff_t *ppos) 3395 3395 { ··· 5805 5805 * that we can call two helper functions whenever min_free_kbytes 5806 5806 * changes. 5807 5807 */ 5808 - int min_free_kbytes_sysctl_handler(ctl_table *table, int write, 5808 + int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write, 5809 5809 void __user *buffer, size_t *length, loff_t *ppos) 5810 5810 { 5811 5811 int rc; ··· 5822 5822 } 5823 5823 5824 5824 #ifdef CONFIG_NUMA 5825 - int sysctl_min_unmapped_ratio_sysctl_handler(ctl_table *table, int write, 5825 + int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write, 5826 5826 void __user *buffer, size_t *length, loff_t *ppos) 5827 5827 { 5828 5828 struct zone *zone; ··· 5838 5838 return 0; 5839 5839 } 5840 5840 5841 - int sysctl_min_slab_ratio_sysctl_handler(ctl_table *table, int write, 5841 + int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write, 5842 5842 void __user *buffer, size_t *length, loff_t *ppos) 5843 5843 { 5844 5844 struct zone *zone; ··· 5864 5864 * minimum watermarks. The lowmem reserve ratio can only make sense 5865 5865 * if in function of the boot time zone sizes. 5866 5866 */ 5867 - int lowmem_reserve_ratio_sysctl_handler(ctl_table *table, int write, 5867 + int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write, 5868 5868 void __user *buffer, size_t *length, loff_t *ppos) 5869 5869 { 5870 5870 proc_dointvec_minmax(table, write, buffer, length, ppos); ··· 5877 5877 * cpu. It is the fraction of total pages in each zone that a hot per cpu 5878 5878 * pagelist can have before it gets flushed back to buddy allocator. 
5879 5879 */ 5880 - int percpu_pagelist_fraction_sysctl_handler(ctl_table *table, int write, 5880 + int percpu_pagelist_fraction_sysctl_handler(struct ctl_table *table, int write, 5881 5881 void __user *buffer, size_t *length, loff_t *ppos) 5882 5882 { 5883 5883 struct zone *zone;
+1 -1
mm/slub.c
··· 1726 1726 struct kmem_cache_cpu *c) 1727 1727 { 1728 1728 void *object; 1729 - int searchnode = (node == NUMA_NO_NODE) ? numa_node_id() : node; 1729 + int searchnode = (node == NUMA_NO_NODE) ? numa_mem_id() : node; 1730 1730 1731 1731 object = get_partial_node(s, get_node(s, searchnode), c, flags); 1732 1732 if (object || node != NUMA_NO_NODE)
+15 -11
mm/vmscan.c
··· 11 11 * Multiqueue VM started 5.8.00, Rik van Riel. 12 12 */ 13 13 14 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 15 + 14 16 #include <linux/mm.h> 15 17 #include <linux/module.h> 16 18 #include <linux/gfp.h> ··· 45 43 #include <linux/sysctl.h> 46 44 #include <linux/oom.h> 47 45 #include <linux/prefetch.h> 46 + #include <linux/printk.h> 48 47 49 48 #include <asm/tlbflush.h> 50 49 #include <asm/div64.h> ··· 85 82 86 83 /* Scan (total_size >> priority) pages at once */ 87 84 int priority; 85 + 86 + /* anon vs. file LRUs scanning "ratio" */ 87 + int swappiness; 88 88 89 89 /* 90 90 * The memory cgroup that hit its limit and as a result is the ··· 483 477 if (page_has_private(page)) { 484 478 if (try_to_free_buffers(page)) { 485 479 ClearPageDirty(page); 486 - printk("%s: orphaned page\n", __func__); 480 + pr_info("%s: orphaned page\n", __func__); 487 481 return PAGE_CLEAN; 488 482 } 489 483 } ··· 1851 1845 return shrink_inactive_list(nr_to_scan, lruvec, sc, lru); 1852 1846 } 1853 1847 1854 - static int vmscan_swappiness(struct scan_control *sc) 1855 - { 1856 - if (global_reclaim(sc)) 1857 - return vm_swappiness; 1858 - return mem_cgroup_swappiness(sc->target_mem_cgroup); 1859 - } 1860 - 1861 1848 enum scan_balance { 1862 1849 SCAN_EQUAL, 1863 1850 SCAN_FRACT, ··· 1911 1912 * using the memory controller's swap limit feature would be 1912 1913 * too expensive. 1913 1914 */ 1914 - if (!global_reclaim(sc) && !vmscan_swappiness(sc)) { 1915 + if (!global_reclaim(sc) && !sc->swappiness) { 1915 1916 scan_balance = SCAN_FILE; 1916 1917 goto out; 1917 1918 } ··· 1921 1922 * system is close to OOM, scan both anon and file equally 1922 1923 * (unless the swappiness setting disagrees with swapping). 1923 1924 */ 1924 - if (!sc->priority && vmscan_swappiness(sc)) { 1925 + if (!sc->priority && sc->swappiness) { 1925 1926 scan_balance = SCAN_EQUAL; 1926 1927 goto out; 1927 1928 } ··· 1964 1965 * With swappiness at 100, anonymous and file have the same priority. 
1965 1966 * This scanning priority is essentially the inverse of IO cost. 1966 1967 */ 1967 - anon_prio = vmscan_swappiness(sc); 1968 + anon_prio = sc->swappiness; 1968 1969 file_prio = 200 - anon_prio; 1969 1970 1970 1971 /* ··· 2264 2265 2265 2266 lruvec = mem_cgroup_zone_lruvec(zone, memcg); 2266 2267 2268 + sc->swappiness = mem_cgroup_swappiness(memcg); 2267 2269 shrink_lruvec(lruvec, sc); 2268 2270 2269 2271 /* ··· 2731 2731 .may_swap = !noswap, 2732 2732 .order = 0, 2733 2733 .priority = 0, 2734 + .swappiness = mem_cgroup_swappiness(memcg), 2734 2735 .target_mem_cgroup = memcg, 2735 2736 }; 2736 2737 struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg); ··· 3373 3372 } 3374 3373 } 3375 3374 3375 + tsk->flags &= ~(PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD); 3376 3376 current->reclaim_state = NULL; 3377 + lockdep_clear_current_reclaim_state(); 3378 + 3377 3379 return 0; 3378 3380 } 3379 3381
+1
tools/testing/selftests/Makefile
··· 10 10 TARGETS += vm 11 11 TARGETS += powerpc 12 12 TARGETS += user 13 + TARGETS += sysctl 13 14 14 15 all: 15 16 for TARGET in $(TARGETS); do \
+19
tools/testing/selftests/sysctl/Makefile
··· 1 + # Makefile for sysctl selftests. 2 + # Expects kernel.sysctl_writes_strict=1. 3 + 4 + # No binaries, but make sure arg-less "make" doesn't trigger "run_tests". 5 + all: 6 + 7 + # Allow specific tests to be selected. 8 + test_num: 9 + @/bin/sh ./run_numerictests 10 + 11 + test_string: 12 + @/bin/sh ./run_stringtests 13 + 14 + run_tests: all test_num test_string 15 + 16 + # Nothing to clean up. 17 + clean: 18 + 19 + .PHONY: all run_tests clean test_num test_string
+109
tools/testing/selftests/sysctl/common_tests
··· 1 + #!/bin/sh 2 + 3 + TEST_FILE=$(mktemp) 4 + 5 + echo "== Testing sysctl behavior against ${TARGET} ==" 6 + 7 + set_orig() 8 + { 9 + echo "${ORIG}" > "${TARGET}" 10 + } 11 + 12 + set_test() 13 + { 14 + echo "${TEST_STR}" > "${TARGET}" 15 + } 16 + 17 + verify() 18 + { 19 + local seen 20 + seen=$(cat "$1") 21 + if [ "${seen}" != "${TEST_STR}" ]; then 22 + return 1 23 + fi 24 + return 0 25 + } 26 + 27 + trap 'set_orig; rm -f "${TEST_FILE}"' EXIT 28 + 29 + rc=0 30 + 31 + echo -n "Writing test file ... " 32 + echo "${TEST_STR}" > "${TEST_FILE}" 33 + if ! verify "${TEST_FILE}"; then 34 + echo "FAIL" >&2 35 + exit 1 36 + else 37 + echo "ok" 38 + fi 39 + 40 + echo -n "Checking sysctl is not set to test value ... " 41 + if verify "${TARGET}"; then 42 + echo "FAIL" >&2 43 + exit 1 44 + else 45 + echo "ok" 46 + fi 47 + 48 + echo -n "Writing sysctl from shell ... " 49 + set_test 50 + if ! verify "${TARGET}"; then 51 + echo "FAIL" >&2 52 + exit 1 53 + else 54 + echo "ok" 55 + fi 56 + 57 + echo -n "Resetting sysctl to original value ... " 58 + set_orig 59 + if verify "${TARGET}"; then 60 + echo "FAIL" >&2 61 + exit 1 62 + else 63 + echo "ok" 64 + fi 65 + 66 + # Now that we've validated the sanity of "set_test" and "set_orig", 67 + # we can use those functions to set starting states before running 68 + # specific behavioral tests. 69 + 70 + echo -n "Writing entire sysctl in single write ... " 71 + set_orig 72 + dd if="${TEST_FILE}" of="${TARGET}" bs=4096 2>/dev/null 73 + if ! verify "${TARGET}"; then 74 + echo "FAIL" >&2 75 + rc=1 76 + else 77 + echo "ok" 78 + fi 79 + 80 + echo -n "Writing middle of sysctl after synchronized seek ... " 81 + set_test 82 + dd if="${TEST_FILE}" of="${TARGET}" bs=1 seek=1 skip=1 2>/dev/null 83 + if ! verify "${TARGET}"; then 84 + echo "FAIL" >&2 85 + rc=1 86 + else 87 + echo "ok" 88 + fi 89 + 90 + echo -n "Writing beyond end of sysctl ... 
" 91 + set_orig 92 + dd if="${TEST_FILE}" of="${TARGET}" bs=20 seek=2 2>/dev/null 93 + if verify "${TARGET}"; then 94 + echo "FAIL" >&2 95 + rc=1 96 + else 97 + echo "ok" 98 + fi 99 + 100 + echo -n "Writing sysctl with multiple long writes ... " 101 + set_orig 102 + (perl -e 'print "A" x 50;'; echo "${TEST_STR}") | \ 103 + dd of="${TARGET}" bs=50 2>/dev/null 104 + if verify "${TARGET}"; then 105 + echo "FAIL" >&2 106 + rc=1 107 + else 108 + echo "ok" 109 + fi
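The `dd` invocations that `common_tests` drives against `${TARGET}` can be tried out against an ordinary temp file, which stands in for a `/proc/sys` entry. This is a simplified sketch of two of the write patterns (whole-value write, then a seek into the middle); it relies on GNU dd's default behavior of truncating the output at the seek offset unless `conv=notrunc` is given:

```shell
# Stand-in for a sysctl file; paths and values here are illustrative.
TARGET=$(mktemp)
echo "original" > "$TARGET"

# Whole value in one write, as in "entire sysctl in single write"
# (dd truncates the output first, so the old contents are replaced):
echo "replaced" | dd of="$TARGET" bs=4096 2>/dev/null
cat "$TARGET"    # -> replaced

# Write into the middle after a seek; conv=notrunc keeps the
# surrounding bytes in place instead of truncating at the offset:
printf 'X' | dd of="$TARGET" bs=1 seek=1 conv=notrunc 2>/dev/null
cat "$TARGET"    # -> rXplaced

rm -f "$TARGET"
```

Against a real proc sysctl with `kernel.sysctl_writes_strict=1`, the seek variants are exactly what the test expects the kernel to honor or reject, which a plain file cannot demonstrate — hence the selftest.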
+10
tools/testing/selftests/sysctl/run_numerictests
··· 1 + #!/bin/sh 2 + 3 + SYSCTL="/proc/sys" 4 + TARGET="${SYSCTL}/vm/swappiness" 5 + ORIG=$(cat "${TARGET}") 6 + TEST_STR=$(( $ORIG + 1 )) 7 + 8 + . ./common_tests 9 + 10 + exit $rc
+77
tools/testing/selftests/sysctl/run_stringtests
··· 1 + #!/bin/sh 2 + 3 + SYSCTL="/proc/sys" 4 + TARGET="${SYSCTL}/kernel/domainname" 5 + ORIG=$(cat "${TARGET}") 6 + TEST_STR="Testing sysctl" 7 + 8 + . ./common_tests 9 + 10 + # Only string sysctls support seeking/appending. 11 + MAXLEN=65 12 + 13 + echo -n "Writing entire sysctl in short writes ... " 14 + set_orig 15 + dd if="${TEST_FILE}" of="${TARGET}" bs=1 2>/dev/null 16 + if ! verify "${TARGET}"; then 17 + echo "FAIL" >&2 18 + rc=1 19 + else 20 + echo "ok" 21 + fi 22 + 23 + echo -n "Writing middle of sysctl after unsynchronized seek ... " 24 + set_test 25 + dd if="${TEST_FILE}" of="${TARGET}" bs=1 seek=1 2>/dev/null 26 + if verify "${TARGET}"; then 27 + echo "FAIL" >&2 28 + rc=1 29 + else 30 + echo "ok" 31 + fi 32 + 33 + echo -n "Checking sysctl maxlen is at least $MAXLEN ... " 34 + set_orig 35 + perl -e 'print "A" x ('"${MAXLEN}"'-2), "B";' | \ 36 + dd of="${TARGET}" bs="${MAXLEN}" 2>/dev/null 37 + if ! grep -q B "${TARGET}"; then 38 + echo "FAIL" >&2 39 + rc=1 40 + else 41 + echo "ok" 42 + fi 43 + 44 + echo -n "Checking sysctl keeps original string on overflow append ... " 45 + set_orig 46 + perl -e 'print "A" x ('"${MAXLEN}"'-1), "B";' | \ 47 + dd of="${TARGET}" bs=$(( MAXLEN - 1 )) 2>/dev/null 48 + if grep -q B "${TARGET}"; then 49 + echo "FAIL" >&2 50 + rc=1 51 + else 52 + echo "ok" 53 + fi 54 + 55 + echo -n "Checking sysctl stays NULL terminated on write ... " 56 + set_orig 57 + perl -e 'print "A" x ('"${MAXLEN}"'-1), "B";' | \ 58 + dd of="${TARGET}" bs="${MAXLEN}" 2>/dev/null 59 + if grep -q B "${TARGET}"; then 60 + echo "FAIL" >&2 61 + rc=1 62 + else 63 + echo "ok" 64 + fi 65 + 66 + echo -n "Checking sysctl stays NULL terminated on overwrite ... " 67 + set_orig 68 + perl -e 'print "A" x ('"${MAXLEN}"'-1), "BB";' | \ 69 + dd of="${TARGET}" bs=$(( $MAXLEN + 1 )) 2>/dev/null 70 + if grep -q B "${TARGET}"; then 71 + echo "FAIL" >&2 72 + rc=1 73 + else 74 + echo "ok" 75 + fi 76 + 77 + exit $rc
-77
usr/Kconfig
··· 98 98 help 99 99 Support loading of a LZ4 encoded initial ramdisk or cpio buffer 100 100 If unsure, say N. 101 - 102 - choice 103 - prompt "Built-in initramfs compression mode" if INITRAMFS_SOURCE!="" 104 - help 105 - This option decides by which algorithm the builtin initramfs 106 - will be compressed. Several compression algorithms are 107 - available, which differ in efficiency, compression and 108 - decompression speed. Compression speed is only relevant 109 - when building a kernel. Decompression speed is relevant at 110 - each boot. 111 - 112 - If you have any problems with bzip2 or LZMA compressed 113 - initramfs, mail me (Alain Knaff) <alain@knaff.lu>. 114 - 115 - High compression options are mostly useful for users who are 116 - low on RAM, since it reduces the memory consumption during 117 - boot. 118 - 119 - If in doubt, select 'gzip' 120 - 121 - config INITRAMFS_COMPRESSION_NONE 122 - bool "None" 123 - help 124 - Do not compress the built-in initramfs at all. This may 125 - sound wasteful in space, but, you should be aware that the 126 - built-in initramfs will be compressed at a later stage 127 - anyways along with the rest of the kernel, on those 128 - architectures that support this. 129 - However, not compressing the initramfs may lead to slightly 130 - higher memory consumption during a short time at boot, while 131 - both the cpio image and the unpacked filesystem image will 132 - be present in memory simultaneously 133 - 134 - config INITRAMFS_COMPRESSION_GZIP 135 - bool "Gzip" 136 - depends on RD_GZIP 137 - help 138 - The old and tried gzip compression. It provides a good balance 139 - between compression ratio and decompression speed. 140 - 141 - config INITRAMFS_COMPRESSION_BZIP2 142 - bool "Bzip2" 143 - depends on RD_BZIP2 144 - help 145 - Its compression ratio and speed is intermediate. 146 - Decompression speed is slowest among the choices. The initramfs 147 - size is about 10% smaller with bzip2, in comparison to gzip. 
148 - Bzip2 uses a large amount of memory. For modern kernels you 149 - will need at least 8MB RAM or more for booting. 150 - 151 - config INITRAMFS_COMPRESSION_LZMA 152 - bool "LZMA" 153 - depends on RD_LZMA 154 - help 155 - This algorithm's compression ratio is best. 156 - Decompression speed is between the other choices. 157 - Compression is slowest. The initramfs size is about 33% 158 - smaller with LZMA in comparison to gzip. 159 - 160 - config INITRAMFS_COMPRESSION_XZ 161 - bool "XZ" 162 - depends on RD_XZ 163 - help 164 - XZ uses the LZMA2 algorithm. The initramfs size is about 30% 165 - smaller with XZ in comparison to gzip. Decompression speed 166 - is better than that of bzip2 but worse than gzip and LZO. 167 - Compression is slow. 168 - 169 - config INITRAMFS_COMPRESSION_LZO 170 - bool "LZO" 171 - depends on RD_LZO 172 - help 173 - Its compression ratio is the poorest among the choices. The kernel 174 - size is about 10% bigger than gzip; however its speed 175 - (both compression and decompression) is the fastest. 176 - 177 - endchoice