
Merge tag 'docs-6.18' of git://git.lwn.net/linux

Pull documentation updates from Jonathan Corbet:
"It has been a relatively busy cycle in docsland, with changes all
over:

- Bring the kernel memory-model docs into the Sphinx build in the
"literal include" mode.

- Lots of build-infrastructure work, further cleaning up long-term
kernel-doc technical debt. The sphinx-pre-install tool has been
converted to Python and updated for current systems.

- A new tool to detect when documents have been moved and generate
HTML redirects; this can be used on kernel.org (or any other site
hosting the rendered docs) to avoid breaking links.
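The redirect mechanism can be sketched with a small script: given a file of "old-path new-path" pairs, emit one HTML stub per old location that forwards the browser to the new one. The file format, paths, and helper below are assumptions for illustration; the actual in-tree tool may differ in details.

```python
#!/usr/bin/env python3
# Sketch: generate HTML redirect stubs from a rename map.
# The "old-path new-path" line format and output layout here are
# assumptions for illustration, not the in-tree tool's exact behavior.

from pathlib import Path

TEMPLATE = """<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="refresh" content="0; url={target}.html">
    <link rel="canonical" href="{target}.html">
  </head>
  <body>
    <p>This page has moved to <a href="{target}.html">{target}</a>.</p>
  </body>
</html>
"""

def make_redirects(renames: str, outdir: str) -> int:
    """Each line of `renames` holds "old-path new-path"; write one stub per entry."""
    count = 0
    for line in renames.splitlines():
        fields = line.split()
        if len(fields) != 2:
            continue  # skip blank or malformed lines
        old, new = fields
        stub = Path(outdir) / (old + ".html")
        stub.parent.mkdir(parents=True, exist_ok=True)
        # Relative link: climb out of the old page's directory, then
        # descend to the new location, so the stubs work on any host.
        target = "../" * old.count("/") + new
        stub.write_text(TEMPLATE.format(target=target))
        count += 1
    return count
```

A site hosting the rendered docs would run this once per build and serve the stubs alongside the real pages, so stale bookmarks land on the moved document instead of a 404.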

- Automated processing of the YAML files describing the netlink
protocol.
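The idea behind that automation is that each netlink family is described once in YAML and the documentation is rendered from it. A minimal sketch, assuming the spec has already been parsed into a dict (the real generator works from Documentation/netlink/specs/ and handles far more fields; the structure below is a simplified, hypothetical stand-in):

```python
# Sketch: render a (pre-parsed) netlink family spec as reStructuredText.
# `spec` is a simplified, hypothetical structure -- the in-tree YAML
# specs carry many more fields (attribute sets, mcast groups, etc.).

spec = {
    "name": "dummy",
    "doc": "Example netlink family.",
    "operations": [
        {"name": "dev-get", "doc": "Query device state."},
        {"name": "dev-set", "doc": "Change device state."},
    ],
}

def render_family(spec: dict) -> str:
    """Emit an rST section: a title, the family doc, then one
    definition-list entry per operation."""
    title = f"Family ``{spec['name']}``"
    lines = [title, "=" * len(title), "", spec["doc"], ""]
    for op in spec["operations"]:
        lines.append(f"``{op['name']}``")
        lines.append(f"    {op['doc']}")
    return "\n".join(lines)
```

Generating the prose from the same YAML that drives code generation keeps the uAPI documentation from drifting out of sync with the protocol definition.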

- A significant update of the maintainer's PGP guide.

... and a seemingly endless series of typo fixes, build-problem fixes,
etc"

* tag 'docs-6.18' of git://git.lwn.net/linux: (193 commits)
Documentation/features: Update feature lists for 6.17-rc7
docs: remove cdomain.py
Documentation/process: submitting-patches: fix typo in "were do"
docs: dev-tools/lkmm: Fix typo of missing file extension
Documentation: trace: histogram: Convert ftrace docs cross-reference
Documentation: trace: histogram-design: Wrap introductory note in note:: directive
Documentation: trace: historgram-design: Separate sched_waking histogram section heading and the following diagram
Documentation: trace: histogram-design: Trim trailing vertices in diagram explanation text
Documentation: trace: histogram: Fix histogram trigger subsection number order
docs: driver-api: fix spelling of "buses".
Documentation: fbcon: Use admonition directives
Documentation: fbcon: Reindent 8th step of attach/detach/unload
Documentation: fbcon: Add boot options and attach/detach/unload section headings
docs: filesystems: sysfs: add remaining top level sysfs directory descriptions
docs: filesystems: sysfs: clarify symlink destinations in dev and bus/devices descriptions
docs: filesystems: sysfs: remove top level sysfs net directory
docs: maintainer: Fix ambiguous subheading formatting
docs: kdoc: a few more dump_typedef() tweaks
docs: kdoc: remove redundant comment stripping in dump_typedef()
docs: kdoc: remove some dead code in dump_typedef()
...

Diffstat: +7098 -3612 (truncated; two files shown)

.pylintrc (+1 -1):

 [MASTER]
-init-hook='import sys; sys.path += ["scripts/lib/kdoc", "scripts/lib/abi"]'
+init-hook='import sys; sys.path += ["scripts/lib/kdoc", "scripts/lib/abi", "tools/docs/lib"]'
+1191
Documentation/.renames.txt
··· 1 + 80211/cfg80211 driver-api/80211/cfg80211 2 + 80211/index driver-api/80211/index 3 + 80211/introduction driver-api/80211/introduction 4 + 80211/mac80211 driver-api/80211/mac80211 5 + 80211/mac80211-advanced driver-api/80211/mac80211-advanced 6 + EDID/howto admin-guide/edid 7 + PCI/picebus-howto PCI/pciebus-howto 8 + RAS/address-translation admin-guide/RAS/address-translation 9 + RAS/error-decoding admin-guide/RAS/error-decoding 10 + RAS/ras admin-guide/RAS/error-decoding 11 + accelerators/ocxl userspace-api/accelerators/ocxl 12 + admin-guide/gpio/sysfs userspace-api/gpio/sysfs 13 + admin-guide/l1tf admin-guide/hw-vuln/l1tf 14 + admin-guide/media/v4l-with-ir admin-guide/media/remote-controller 15 + admin-guide/ras admin-guide/RAS/main 16 + admin-guide/security-bugs process/security-bugs 17 + aoe/aoe admin-guide/aoe/aoe 18 + aoe/examples admin-guide/aoe/examples 19 + aoe/index admin-guide/aoe/index 20 + aoe/todo admin-guide/aoe/todo 21 + arc/arc arch/arc/arc 22 + arc/features arch/arc/features 23 + arc/index arch/arc/index 24 + arch/x86/resctrl filesystems/resctrl 25 + arm/arm arch/arm/arm 26 + arm/booting arch/arm/booting 27 + arm/cluster-pm-race-avoidance arch/arm/cluster-pm-race-avoidance 28 + arm/features arch/arm/features 29 + arm/firmware arch/arm/firmware 30 + arm/google/chromebook-boot-flow arch/arm/google/chromebook-boot-flow 31 + arm/index arch/arm/index 32 + arm/interrupts arch/arm/interrupts 33 + arm/ixp4xx arch/arm/ixp4xx 34 + arm/kernel_mode_neon arch/arm/kernel_mode_neon 35 + arm/kernel_user_helpers arch/arm/kernel_user_helpers 36 + arm/keystone/knav-qmss arch/arm/keystone/knav-qmss 37 + arm/keystone/overview arch/arm/keystone/overview 38 + arm/marvel arch/arm/marvell 39 + arm/marvell arch/arm/marvell 40 + arm/mem_alignment arch/arm/mem_alignment 41 + arm/memory arch/arm/memory 42 + arm/microchip arch/arm/microchip 43 + arm/netwinder arch/arm/netwinder 44 + arm/nwfpe/index arch/arm/nwfpe/index 45 + arm/nwfpe/netwinder-fpe 
arch/arm/nwfpe/netwinder-fpe 46 + arm/nwfpe/notes arch/arm/nwfpe/notes 47 + arm/nwfpe/nwfpe arch/arm/nwfpe/nwfpe 48 + arm/nwfpe/todo arch/arm/nwfpe/todo 49 + arm/omap/dss arch/arm/omap/dss 50 + arm/omap/index arch/arm/omap/index 51 + arm/omap/omap arch/arm/omap/omap 52 + arm/omap/omap_pm arch/arm/omap/omap_pm 53 + arm/porting arch/arm/porting 54 + arm/pxa/mfp arch/arm/pxa/mfp 55 + arm/sa1100/assabet arch/arm/sa1100/assabet 56 + arm/sa1100/cerf arch/arm/sa1100/cerf 57 + arm/sa1100/index arch/arm/sa1100/index 58 + arm/sa1100/lart arch/arm/sa1100/lart 59 + arm/sa1100/serial_uart arch/arm/sa1100/serial_uart 60 + arm/samsung/bootloader-interface arch/arm/samsung/bootloader-interface 61 + arm/samsung/gpio arch/arm/samsung/gpio 62 + arm/samsung/index arch/arm/samsung/index 63 + arm/samsung/overview arch/arm/samsung/overview 64 + arm/setup arch/arm/setup 65 + arm/spear/overview arch/arm/spear/overview 66 + arm/sti/overview arch/arm/sti/overview 67 + arm/sti/stih407-overview arch/arm/sti/stih407-overview 68 + arm/sti/stih418-overview arch/arm/sti/stih418-overview 69 + arm/stm32/overview arch/arm/stm32/overview 70 + arm/stm32/stm32-dma-mdma-chaining arch/arm/stm32/stm32-dma-mdma-chaining 71 + arm/stm32/stm32f429-overview arch/arm/stm32/stm32f429-overview 72 + arm/stm32/stm32f746-overview arch/arm/stm32/stm32f746-overview 73 + arm/stm32/stm32f769-overview arch/arm/stm32/stm32f769-overview 74 + arm/stm32/stm32h743-overview arch/arm/stm32/stm32h743-overview 75 + arm/stm32/stm32h750-overview arch/arm/stm32/stm32h750-overview 76 + arm/stm32/stm32mp13-overview arch/arm/stm32/stm32mp13-overview 77 + arm/stm32/stm32mp151-overview arch/arm/stm32/stm32mp151-overview 78 + arm/stm32/stm32mp157-overview arch/arm/stm32/stm32mp157-overview 79 + arm/sunxi arch/arm/sunxi 80 + arm/sunxi/clocks arch/arm/sunxi/clocks 81 + arm/swp_emulation arch/arm/swp_emulation 82 + arm/tcm arch/arm/tcm 83 + arm/uefi arch/arm/uefi 84 + arm/vfp/release-notes arch/arm/vfp/release-notes 85 + arm/vlocks 
arch/arm/vlocks 86 + arm64/acpi_object_usage arch/arm64/acpi_object_usage 87 + arm64/amu arch/arm64/amu 88 + arm64/arm-acpi arch/arm64/arm-acpi 89 + arm64/asymmetric-32bit arch/arm64/asymmetric-32bit 90 + arm64/booting arch/arm64/booting 91 + arm64/cpu-feature-registers arch/arm64/cpu-feature-registers 92 + arm64/elf_hwcaps arch/arm64/elf_hwcaps 93 + arm64/features arch/arm64/features 94 + arm64/hugetlbpage arch/arm64/hugetlbpage 95 + arm64/index arch/arm64/index 96 + arm64/legacy_instructions arch/arm64/legacy_instructions 97 + arm64/memory arch/arm64/memory 98 + arm64/memory-tagging-extension arch/arm64/memory-tagging-extension 99 + arm64/perf arch/arm64/perf 100 + arm64/pointer-authentication arch/arm64/pointer-authentication 101 + arm64/silicon-errata arch/arm64/silicon-errata 102 + arm64/sme arch/arm64/sme 103 + arm64/sve arch/arm64/sve 104 + arm64/tagged-address-abi arch/arm64/tagged-address-abi 105 + arm64/tagged-pointers arch/arm64/tagged-pointers 106 + asm-annotations core-api/asm-annotations 107 + auxdisplay/lcd-panel-cgram admin-guide/lcd-panel-cgram 108 + backlight/lp855x-driver driver-api/backlight/lp855x-driver 109 + blockdev/drbd/data-structure-v9 admin-guide/blockdev/drbd/data-structure-v9 110 + blockdev/drbd/figures admin-guide/blockdev/drbd/figures 111 + blockdev/drbd/index admin-guide/blockdev/drbd/index 112 + blockdev/floppy admin-guide/blockdev/floppy 113 + blockdev/index admin-guide/blockdev/index 114 + blockdev/nbd admin-guide/blockdev/nbd 115 + blockdev/paride admin-guide/blockdev/paride 116 + blockdev/ramdisk admin-guide/blockdev/ramdisk 117 + blockdev/zram admin-guide/blockdev/zram 118 + bpf/README bpf/index 119 + bpf/bpf_lsm bpf/prog_lsm 120 + bpf/instruction-set bpf/standardization/instruction-set 121 + bpf/libbpf/libbpf bpf/libbpf/index 122 + bpf/standardization/linux-notes bpf/linux-notes 123 + bus-devices/ti-gpmc driver-api/memory-devices/ti-gpmc 124 + cgroup-v1/blkio-controller admin-guide/cgroup-v1/blkio-controller 125 + 
cgroup-v1/cgroups admin-guide/cgroup-v1/cgroups 126 + cgroup-v1/cpuacct admin-guide/cgroup-v1/cpuacct 127 + cgroup-v1/cpusets admin-guide/cgroup-v1/cpusets 128 + cgroup-v1/devices admin-guide/cgroup-v1/devices 129 + cgroup-v1/freezer-subsystem admin-guide/cgroup-v1/freezer-subsystem 130 + cgroup-v1/hugetlb admin-guide/cgroup-v1/hugetlb 131 + cgroup-v1/index admin-guide/cgroup-v1/index 132 + cgroup-v1/memcg_test admin-guide/cgroup-v1/memcg_test 133 + cgroup-v1/memory admin-guide/cgroup-v1/memory 134 + cgroup-v1/net_cls admin-guide/cgroup-v1/net_cls 135 + cgroup-v1/net_prio admin-guide/cgroup-v1/net_prio 136 + cgroup-v1/pids admin-guide/cgroup-v1/pids 137 + cgroup-v1/rdma admin-guide/cgroup-v1/rdma 138 + cma/debugfs admin-guide/mm/cma_debugfs 139 + connector/connector driver-api/connector 140 + console/console driver-api/console 141 + core-api/gcc-plugins kbuild/gcc-plugins 142 + core-api/ioctl driver-api/ioctl 143 + core-api/memory-hotplug-notifier core-api/memory-hotplug 144 + dev-tools/gdb-kernel-debugging process/debugging/gdb-kernel-debugging 145 + dev-tools/kgdb process/debugging/kgdb 146 + dev-tools/tools dev-tools/index 147 + development-process/1.Intro process/1.Intro 148 + development-process/2.Process process/2.Process 149 + development-process/3.Early-stage process/3.Early-stage 150 + development-process/4.Coding process/4.Coding 151 + development-process/5.Posting process/5.Posting 152 + development-process/6.Followthrough process/6.Followthrough 153 + development-process/7.AdvancedTopics process/7.AdvancedTopics 154 + development-process/8.Conclusion process/8.Conclusion 155 + development-process/development-process process/development-process 156 + development-process/index process/index 157 + device-mapper/cache admin-guide/device-mapper/cache 158 + device-mapper/cache-policies admin-guide/device-mapper/cache-policies 159 + device-mapper/delay admin-guide/device-mapper/delay 160 + device-mapper/dm-crypt admin-guide/device-mapper/dm-crypt 161 + 
device-mapper/dm-flakey admin-guide/device-mapper/dm-flakey 162 + device-mapper/dm-init admin-guide/device-mapper/dm-init 163 + device-mapper/dm-integrity admin-guide/device-mapper/dm-integrity 164 + device-mapper/dm-io admin-guide/device-mapper/dm-io 165 + device-mapper/dm-log admin-guide/device-mapper/dm-log 166 + device-mapper/dm-queue-length admin-guide/device-mapper/dm-queue-length 167 + device-mapper/dm-raid admin-guide/device-mapper/dm-raid 168 + device-mapper/dm-service-time admin-guide/device-mapper/dm-service-time 169 + device-mapper/dm-uevent admin-guide/device-mapper/dm-uevent 170 + device-mapper/dm-zoned admin-guide/device-mapper/dm-zoned 171 + device-mapper/era admin-guide/device-mapper/era 172 + device-mapper/index admin-guide/device-mapper/index 173 + device-mapper/kcopyd admin-guide/device-mapper/kcopyd 174 + device-mapper/linear admin-guide/device-mapper/linear 175 + device-mapper/log-writes admin-guide/device-mapper/log-writes 176 + device-mapper/persistent-data admin-guide/device-mapper/persistent-data 177 + device-mapper/snapshot admin-guide/device-mapper/snapshot 178 + device-mapper/statistics admin-guide/device-mapper/statistics 179 + device-mapper/striped admin-guide/device-mapper/striped 180 + device-mapper/switch admin-guide/device-mapper/switch 181 + device-mapper/thin-provisioning admin-guide/device-mapper/thin-provisioning 182 + device-mapper/unstriped admin-guide/device-mapper/unstriped 183 + device-mapper/verity admin-guide/device-mapper/verity 184 + device-mapper/writecache admin-guide/device-mapper/writecache 185 + device-mapper/zero admin-guide/device-mapper/zero 186 + devicetree/writing-schema devicetree/bindings/writing-schema 187 + driver-api/bt8xxgpio driver-api/gpio/bt8xxgpio 188 + driver-api/cxl/access-coordinates driver-api/cxl/linux/access-coordinates 189 + driver-api/cxl/memory-devices driver-api/cxl/theory-of-operation 190 + driver-api/dcdbas userspace-api/dcdbas 191 + driver-api/dell_rbu admin-guide/dell_rbu 192 + 
driver-api/edid admin-guide/edid 193 + driver-api/gpio driver-api/gpio/index 194 + driver-api/hte/tegra194-hte driver-api/hte/tegra-hte 195 + driver-api/isapnp userspace-api/isapnp 196 + driver-api/media/drivers/v4l-drivers/zoran driver-api/media/drivers/zoran 197 + driver-api/mtd/intel-spi driver-api/mtd/spi-intel 198 + driver-api/pci driver-api/pci/pci 199 + driver-api/pinctl driver-api/pin-control 200 + driver-api/rapidio admin-guide/rapidio 201 + driver-api/serial/moxa-smartio driver-api/tty/moxa-smartio 202 + driver-api/serial/n_gsm driver-api/tty/n_gsm 203 + driver-api/serial/tty driver-api/tty/tty_ldisc 204 + driver-api/thermal/intel_powerclamp admin-guide/thermal/intel_powerclamp 205 + driver-api/usb driver-api/usb/usb 206 + driver-model/binding driver-api/driver-model/binding 207 + driver-model/bus driver-api/driver-model/bus 208 + driver-model/design-patterns driver-api/driver-model/design-patterns 209 + driver-model/device driver-api/driver-model/device 210 + driver-model/devres driver-api/driver-model/devres 211 + driver-model/driver driver-api/driver-model/driver 212 + driver-model/index driver-api/driver-model/index 213 + driver-model/overview driver-api/driver-model/overview 214 + driver-model/platform driver-api/driver-model/platform 215 + driver-model/porting driver-api/driver-model/porting 216 + early-userspace/buffer-format driver-api/early-userspace/buffer-format 217 + early-userspace/early_userspace_support driver-api/early-userspace/early_userspace_support 218 + early-userspace/index driver-api/early-userspace/index 219 + errseq core-api/errseq 220 + filesystems/binderfs admin-guide/binderfs 221 + filesystems/cifs/cifsd filesystems/smb/ksmbd 222 + filesystems/cifs/cifsroot filesystems/smb/cifsroot 223 + filesystems/cifs/index filesystems/smb/index 224 + filesystems/cifs/ksmbd filesystems/smb/ksmbd 225 + filesystems/ext4/ext4 admin-guide/ext4 226 + filesystems/ext4/ondisk/about filesystems/ext4/about 227 + filesystems/ext4/ondisk/allocators 
filesystems/ext4/allocators 228 + filesystems/ext4/ondisk/attributes filesystems/ext4/attributes 229 + filesystems/ext4/ondisk/bigalloc filesystems/ext4/bigalloc 230 + filesystems/ext4/ondisk/bitmaps filesystems/ext4/bitmaps 231 + filesystems/ext4/ondisk/blockgroup filesystems/ext4/blockgroup 232 + filesystems/ext4/ondisk/blockmap filesystems/ext4/blockmap 233 + filesystems/ext4/ondisk/blocks filesystems/ext4/blocks 234 + filesystems/ext4/ondisk/checksums filesystems/ext4/checksums 235 + filesystems/ext4/ondisk/directory filesystems/ext4/directory 236 + filesystems/ext4/ondisk/dynamic filesystems/ext4/dynamic 237 + filesystems/ext4/ondisk/eainode filesystems/ext4/eainode 238 + filesystems/ext4/ondisk/globals filesystems/ext4/globals 239 + filesystems/ext4/ondisk/group_descr filesystems/ext4/group_descr 240 + filesystems/ext4/ondisk/ifork filesystems/ext4/ifork 241 + filesystems/ext4/ondisk/inlinedata filesystems/ext4/inlinedata 242 + filesystems/ext4/ondisk/inodes filesystems/ext4/inodes 243 + filesystems/ext4/ondisk/journal filesystems/ext4/journal 244 + filesystems/ext4/ondisk/mmp filesystems/ext4/mmp 245 + filesystems/ext4/ondisk/overview filesystems/ext4/overview 246 + filesystems/ext4/ondisk/special_inodes filesystems/ext4/special_inodes 247 + filesystems/ext4/ondisk/super filesystems/ext4/super 248 + filesystems/sysfs-pci PCI/sysfs-pci 249 + filesystems/sysfs-tagging networking/sysfs-tagging 250 + filesystems/xfs-delayed-logging-design filesystems/xfs/xfs-delayed-logging-design 251 + filesystems/xfs-maintainer-entry-profile filesystems/xfs/xfs-maintainer-entry-profile 252 + filesystems/xfs-online-fsck-design filesystems/xfs/xfs-online-fsck-design 253 + filesystems/xfs-self-describing-metadata filesystems/xfs/xfs-self-describing-metadata 254 + gpio/index admin-guide/gpio/index 255 + gpio/sysfs userspace-api/gpio/sysfs 256 + gpu/amdgpu gpu/amdgpu/index 257 + hte/hte driver-api/hte/hte 258 + hte/index driver-api/hte/index 259 + hte/tegra194-hte 
driver-api/hte/tegra-hte 260 + input/alps input/devices/alps 261 + input/amijoy input/devices/amijoy 262 + input/appletouch input/devices/appletouch 263 + input/atarikbd input/devices/atarikbd 264 + input/bcm5974 input/devices/bcm5974 265 + input/cma3000_d0x input/devices/cma3000_d0x 266 + input/cs461x input/devices/cs461x 267 + input/edt-ft5x06 input/devices/edt-ft5x06 268 + input/elantech input/devices/elantech 269 + input/iforce-protocol input/devices/iforce-protocol 270 + input/joystick input/joydev/joystick 271 + input/joystick-api input/joydev/joystick-api 272 + input/joystick-parport input/devices/joystick-parport 273 + input/ntrig input/devices/ntrig 274 + input/rotary-encoder input/devices/rotary-encoder 275 + input/sentelic input/devices/sentelic 276 + input/walkera0701 input/devices/walkera0701 277 + input/xpad input/devices/xpad 278 + input/yealink input/devices/yealink 279 + interconnect/interconnect driver-api/interconnect 280 + ioctl/botching-up-ioctls process/botching-up-ioctls 281 + ioctl/cdrom userspace-api/ioctl/cdrom 282 + ioctl/hdio userspace-api/ioctl/hdio 283 + ioctl/index userspace-api/ioctl/index 284 + ioctl/ioctl-decoding userspace-api/ioctl/ioctl-decoding 285 + ioctl/ioctl-number userspace-api/ioctl/ioctl-number 286 + kbuild/namespaces core-api/symbol-namespaces 287 + kdump/index admin-guide/kdump/index 288 + kdump/kdump admin-guide/kdump/kdump 289 + kdump/vmcoreinfo admin-guide/kdump/vmcoreinfo 290 + kernel-documentation doc-guide/kernel-doc 291 + laptops/asus-laptop admin-guide/laptops/asus-laptop 292 + laptops/disk-shock-protection admin-guide/laptops/disk-shock-protection 293 + laptops/index admin-guide/laptops/index 294 + laptops/laptop-mode admin-guide/laptops/laptop-mode 295 + laptops/lg-laptop admin-guide/laptops/lg-laptop 296 + laptops/sony-laptop admin-guide/laptops/sony-laptop 297 + laptops/sonypi admin-guide/laptops/sonypi 298 + laptops/thinkpad-acpi admin-guide/laptops/thinkpad-acpi 299 + laptops/toshiba_haps 
admin-guide/laptops/toshiba_haps 300 + loongarch/booting arch/loongarch/booting 301 + loongarch/features arch/loongarch/features 302 + loongarch/index arch/loongarch/index 303 + loongarch/introduction arch/loongarch/introduction 304 + loongarch/irq-chip-model arch/loongarch/irq-chip-model 305 + m68k/buddha-driver arch/m68k/buddha-driver 306 + m68k/features arch/m68k/features 307 + m68k/index arch/m68k/index 308 + m68k/kernel-options arch/m68k/kernel-options 309 + md/index driver-api/md/index 310 + md/md-cluster driver-api/md/md-cluster 311 + md/raid5-cache driver-api/md/raid5-cache 312 + md/raid5-ppl driver-api/md/raid5-ppl 313 + media/dvb-drivers/avermedia admin-guide/media/avermedia 314 + media/dvb-drivers/bt8xx admin-guide/media/bt8xx 315 + media/dvb-drivers/ci admin-guide/media/ci 316 + media/dvb-drivers/contributors driver-api/media/drivers/contributors 317 + media/dvb-drivers/dvb-usb driver-api/media/drivers/dvb-usb 318 + media/dvb-drivers/faq admin-guide/media/faq 319 + media/dvb-drivers/frontends driver-api/media/drivers/frontends 320 + media/dvb-drivers/index driver-api/media/drivers/index 321 + media/dvb-drivers/lmedm04 admin-guide/media/lmedm04 322 + media/dvb-drivers/opera-firmware admin-guide/media/opera-firmware 323 + media/dvb-drivers/technisat admin-guide/media/technisat 324 + media/dvb-drivers/ttusb-dec admin-guide/media/ttusb-dec 325 + media/intro userspace-api/media/intro 326 + media/kapi/cec-core driver-api/media/cec-core 327 + media/kapi/dtv-ca driver-api/media/dtv-ca 328 + media/kapi/dtv-common driver-api/media/dtv-common 329 + media/kapi/dtv-core driver-api/media/dtv-core 330 + media/kapi/dtv-demux driver-api/media/dtv-demux 331 + media/kapi/dtv-frontend driver-api/media/dtv-frontend 332 + media/kapi/dtv-net driver-api/media/dtv-net 333 + media/kapi/mc-core driver-api/media/mc-core 334 + media/kapi/rc-core driver-api/media/rc-core 335 + media/kapi/v4l2-async driver-api/media/v4l2-async 336 + media/kapi/v4l2-common driver-api/media/v4l2-common 
337 + media/kapi/v4l2-controls driver-api/media/v4l2-controls 338 + media/kapi/v4l2-core driver-api/media/v4l2-core 339 + media/kapi/v4l2-dev driver-api/media/v4l2-dev 340 + media/kapi/v4l2-device driver-api/media/v4l2-device 341 + media/kapi/v4l2-dv-timings driver-api/media/v4l2-dv-timings 342 + media/kapi/v4l2-event driver-api/media/v4l2-event 343 + media/kapi/v4l2-fh driver-api/media/v4l2-fh 344 + media/kapi/v4l2-flash-led-class driver-api/media/v4l2-flash-led-class 345 + media/kapi/v4l2-fwnode driver-api/media/v4l2-fwnode 346 + media/kapi/v4l2-intro driver-api/media/v4l2-intro 347 + media/kapi/v4l2-mc driver-api/media/v4l2-mc 348 + media/kapi/v4l2-mediabus driver-api/media/v4l2-mediabus 349 + media/kapi/v4l2-mem2mem driver-api/media/v4l2-mem2mem 350 + media/kapi/v4l2-rect driver-api/media/v4l2-rect 351 + media/kapi/v4l2-subdev driver-api/media/v4l2-subdev 352 + media/kapi/v4l2-tuner driver-api/media/v4l2-tuner 353 + media/kapi/v4l2-tveeprom driver-api/media/v4l2-tveeprom 354 + media/kapi/v4l2-videobuf2 driver-api/media/v4l2-videobuf2 355 + media/media_kapi driver-api/media/index 356 + media/media_uapi userspace-api/media/index 357 + media/uapi/cec/cec-api userspace-api/media/cec/cec-api 358 + media/uapi/cec/cec-func-close userspace-api/media/cec/cec-func-close 359 + media/uapi/cec/cec-func-ioctl userspace-api/media/cec/cec-func-ioctl 360 + media/uapi/cec/cec-func-open userspace-api/media/cec/cec-func-open 361 + media/uapi/cec/cec-func-poll userspace-api/media/cec/cec-func-poll 362 + media/uapi/cec/cec-funcs userspace-api/media/cec/cec-funcs 363 + media/uapi/cec/cec-header userspace-api/media/cec/cec-header 364 + media/uapi/cec/cec-intro userspace-api/media/cec/cec-intro 365 + media/uapi/cec/cec-ioc-adap-g-caps userspace-api/media/cec/cec-ioc-adap-g-caps 366 + media/uapi/cec/cec-ioc-adap-g-conn-info userspace-api/media/cec/cec-ioc-adap-g-conn-info 367 + media/uapi/cec/cec-ioc-adap-g-log-addrs userspace-api/media/cec/cec-ioc-adap-g-log-addrs 368 + 
media/uapi/cec/cec-ioc-adap-g-phys-addr userspace-api/media/cec/cec-ioc-adap-g-phys-addr 369 + media/uapi/cec/cec-ioc-dqevent userspace-api/media/cec/cec-ioc-dqevent 370 + media/uapi/cec/cec-ioc-g-mode userspace-api/media/cec/cec-ioc-g-mode 371 + media/uapi/cec/cec-ioc-receive userspace-api/media/cec/cec-ioc-receive 372 + media/uapi/cec/cec-pin-error-inj userspace-api/media/cec/cec-pin-error-inj 373 + media/uapi/dvb/ca userspace-api/media/dvb/ca 374 + media/uapi/dvb/ca-fclose userspace-api/media/dvb/ca-fclose 375 + media/uapi/dvb/ca-fopen userspace-api/media/dvb/ca-fopen 376 + media/uapi/dvb/ca-get-cap userspace-api/media/dvb/ca-get-cap 377 + media/uapi/dvb/ca-get-descr-info userspace-api/media/dvb/ca-get-descr-info 378 + media/uapi/dvb/ca-get-msg userspace-api/media/dvb/ca-get-msg 379 + media/uapi/dvb/ca-get-slot-info userspace-api/media/dvb/ca-get-slot-info 380 + media/uapi/dvb/ca-reset userspace-api/media/dvb/ca-reset 381 + media/uapi/dvb/ca-send-msg userspace-api/media/dvb/ca-send-msg 382 + media/uapi/dvb/ca-set-descr userspace-api/media/dvb/ca-set-descr 383 + media/uapi/dvb/ca_data_types userspace-api/media/dvb/ca_data_types 384 + media/uapi/dvb/ca_function_calls userspace-api/media/dvb/ca_function_calls 385 + media/uapi/dvb/ca_high_level userspace-api/media/dvb/ca_high_level 386 + media/uapi/dvb/demux userspace-api/media/dvb/demux 387 + media/uapi/dvb/dmx-add-pid userspace-api/media/dvb/dmx-add-pid 388 + media/uapi/dvb/dmx-expbuf userspace-api/media/dvb/dmx-expbuf 389 + media/uapi/dvb/dmx-fclose userspace-api/media/dvb/dmx-fclose 390 + media/uapi/dvb/dmx-fopen userspace-api/media/dvb/dmx-fopen 391 + media/uapi/dvb/dmx-fread userspace-api/media/dvb/dmx-fread 392 + media/uapi/dvb/dmx-fwrite userspace-api/media/dvb/dmx-fwrite 393 + media/uapi/dvb/dmx-get-pes-pids userspace-api/media/dvb/dmx-get-pes-pids 394 + media/uapi/dvb/dmx-get-stc userspace-api/media/dvb/dmx-get-stc 395 + media/uapi/dvb/dmx-mmap userspace-api/media/dvb/dmx-mmap 396 + 
media/uapi/dvb/dmx-munmap userspace-api/media/dvb/dmx-munmap 397 + media/uapi/dvb/dmx-qbuf userspace-api/media/dvb/dmx-qbuf 398 + media/uapi/dvb/dmx-querybuf userspace-api/media/dvb/dmx-querybuf 399 + media/uapi/dvb/dmx-remove-pid userspace-api/media/dvb/dmx-remove-pid 400 + media/uapi/dvb/dmx-reqbufs userspace-api/media/dvb/dmx-reqbufs 401 + media/uapi/dvb/dmx-set-buffer-size userspace-api/media/dvb/dmx-set-buffer-size 402 + media/uapi/dvb/dmx-set-filter userspace-api/media/dvb/dmx-set-filter 403 + media/uapi/dvb/dmx-set-pes-filter userspace-api/media/dvb/dmx-set-pes-filter 404 + media/uapi/dvb/dmx-start userspace-api/media/dvb/dmx-start 405 + media/uapi/dvb/dmx-stop userspace-api/media/dvb/dmx-stop 406 + media/uapi/dvb/dmx_fcalls userspace-api/media/dvb/dmx_fcalls 407 + media/uapi/dvb/dmx_types userspace-api/media/dvb/dmx_types 408 + media/uapi/dvb/dvb-fe-read-status userspace-api/media/dvb/dvb-fe-read-status 409 + media/uapi/dvb/dvb-frontend-event userspace-api/media/dvb/dvb-frontend-event 410 + media/uapi/dvb/dvb-frontend-parameters userspace-api/media/dvb/dvb-frontend-parameters 411 + media/uapi/dvb/dvbapi userspace-api/media/dvb/dvbapi 412 + media/uapi/dvb/dvbproperty userspace-api/media/dvb/dvbproperty 413 + media/uapi/dvb/examples userspace-api/media/dvb/examples 414 + media/uapi/dvb/fe-bandwidth-t userspace-api/media/dvb/fe-bandwidth-t 415 + media/uapi/dvb/fe-diseqc-recv-slave-reply userspace-api/media/dvb/fe-diseqc-recv-slave-reply 416 + media/uapi/dvb/fe-diseqc-reset-overload userspace-api/media/dvb/fe-diseqc-reset-overload 417 + media/uapi/dvb/fe-diseqc-send-burst userspace-api/media/dvb/fe-diseqc-send-burst 418 + media/uapi/dvb/fe-diseqc-send-master-cmd userspace-api/media/dvb/fe-diseqc-send-master-cmd 419 + media/uapi/dvb/fe-dishnetwork-send-legacy-cmd userspace-api/media/dvb/fe-dishnetwork-send-legacy-cmd 420 + media/uapi/dvb/fe-enable-high-lnb-voltage userspace-api/media/dvb/fe-enable-high-lnb-voltage 421 + media/uapi/dvb/fe-get-event 
userspace-api/media/dvb/fe-get-event 422 + media/uapi/dvb/fe-get-frontend userspace-api/media/dvb/fe-get-frontend 423 + media/uapi/dvb/fe-get-info userspace-api/media/dvb/fe-get-info 424 + media/uapi/dvb/fe-get-property userspace-api/media/dvb/fe-get-property 425 + media/uapi/dvb/fe-read-ber userspace-api/media/dvb/fe-read-ber 426 + media/uapi/dvb/fe-read-signal-strength userspace-api/media/dvb/fe-read-signal-strength 427 + media/uapi/dvb/fe-read-snr userspace-api/media/dvb/fe-read-snr 428 + media/uapi/dvb/fe-read-status userspace-api/media/dvb/fe-read-status 429 + media/uapi/dvb/fe-read-uncorrected-blocks userspace-api/media/dvb/fe-read-uncorrected-blocks 430 + media/uapi/dvb/fe-set-frontend userspace-api/media/dvb/fe-set-frontend 431 + media/uapi/dvb/fe-set-frontend-tune-mode userspace-api/media/dvb/fe-set-frontend-tune-mode 432 + media/uapi/dvb/fe-set-tone userspace-api/media/dvb/fe-set-tone 433 + media/uapi/dvb/fe-set-voltage userspace-api/media/dvb/fe-set-voltage 434 + media/uapi/dvb/fe-type-t userspace-api/media/dvb/fe-type-t 435 + media/uapi/dvb/fe_property_parameters userspace-api/media/dvb/fe_property_parameters 436 + media/uapi/dvb/frontend userspace-api/media/dvb/frontend 437 + media/uapi/dvb/frontend-header userspace-api/media/dvb/frontend-header 438 + media/uapi/dvb/frontend-property-cable-systems userspace-api/media/dvb/frontend-property-cable-systems 439 + media/uapi/dvb/frontend-property-satellite-systems userspace-api/media/dvb/frontend-property-satellite-systems 440 + media/uapi/dvb/frontend-property-terrestrial-systems userspace-api/media/dvb/frontend-property-terrestrial-systems 441 + media/uapi/dvb/frontend-stat-properties userspace-api/media/dvb/frontend-stat-properties 442 + media/uapi/dvb/frontend_f_close userspace-api/media/dvb/frontend_f_close 443 + media/uapi/dvb/frontend_f_open userspace-api/media/dvb/frontend_f_open 444 + media/uapi/dvb/frontend_fcalls userspace-api/media/dvb/frontend_fcalls 445 + media/uapi/dvb/frontend_legacy_api 
userspace-api/media/dvb/frontend_legacy_api 446 + media/uapi/dvb/frontend_legacy_dvbv3_api userspace-api/media/dvb/frontend_legacy_dvbv3_api 447 + media/uapi/dvb/headers userspace-api/media/dvb/headers 448 + media/uapi/dvb/intro userspace-api/media/dvb/intro 449 + media/uapi/dvb/legacy_dvb_apis userspace-api/media/dvb/legacy_dvb_apis 450 + media/uapi/dvb/net userspace-api/media/dvb/net 451 + media/uapi/dvb/net-add-if userspace-api/media/dvb/net-add-if 452 + media/uapi/dvb/net-get-if userspace-api/media/dvb/net-get-if 453 + media/uapi/dvb/net-remove-if userspace-api/media/dvb/net-remove-if 454 + media/uapi/dvb/net-types userspace-api/media/dvb/net-types 455 + media/uapi/dvb/query-dvb-frontend-info userspace-api/media/dvb/query-dvb-frontend-info 456 + media/uapi/fdl-appendix userspace-api/media/fdl-appendix 457 + media/uapi/gen-errors userspace-api/media/gen-errors 458 + media/uapi/mediactl/media-controller userspace-api/media/mediactl/media-controller 459 + media/uapi/mediactl/media-controller-intro userspace-api/media/mediactl/media-controller-intro 460 + media/uapi/mediactl/media-controller-model userspace-api/media/mediactl/media-controller-model 461 + media/uapi/mediactl/media-func-close userspace-api/media/mediactl/media-func-close 462 + media/uapi/mediactl/media-func-ioctl userspace-api/media/mediactl/media-func-ioctl 463 + media/uapi/mediactl/media-func-open userspace-api/media/mediactl/media-func-open 464 + media/uapi/mediactl/media-funcs userspace-api/media/mediactl/media-funcs 465 + media/uapi/mediactl/media-header userspace-api/media/mediactl/media-header 466 + media/uapi/mediactl/media-ioc-device-info userspace-api/media/mediactl/media-ioc-device-info 467 + media/uapi/mediactl/media-ioc-enum-entities userspace-api/media/mediactl/media-ioc-enum-entities 468 + media/uapi/mediactl/media-ioc-enum-links userspace-api/media/mediactl/media-ioc-enum-links 469 + media/uapi/mediactl/media-ioc-g-topology userspace-api/media/mediactl/media-ioc-g-topology 470 + 
+ media/uapi/mediactl/media-ioc-request-alloc userspace-api/media/mediactl/media-ioc-request-alloc
+ media/uapi/mediactl/media-ioc-setup-link userspace-api/media/mediactl/media-ioc-setup-link
+ media/uapi/mediactl/media-request-ioc-queue userspace-api/media/mediactl/media-request-ioc-queue
+ media/uapi/mediactl/media-request-ioc-reinit userspace-api/media/mediactl/media-request-ioc-reinit
+ media/uapi/mediactl/media-types userspace-api/media/mediactl/media-types
+ media/uapi/mediactl/request-api userspace-api/media/mediactl/request-api
+ media/uapi/mediactl/request-func-close userspace-api/media/mediactl/request-func-close
+ media/uapi/mediactl/request-func-ioctl userspace-api/media/mediactl/request-func-ioctl
+ media/uapi/mediactl/request-func-poll userspace-api/media/mediactl/request-func-poll
+ media/uapi/rc/keytable.c userspace-api/media/rc/keytable.c
+ media/uapi/rc/lirc-dev userspace-api/media/rc/lirc-dev
+ media/uapi/rc/lirc-dev-intro userspace-api/media/rc/lirc-dev-intro
+ media/uapi/rc/lirc-func userspace-api/media/rc/lirc-func
+ media/uapi/rc/lirc-get-features userspace-api/media/rc/lirc-get-features
+ media/uapi/rc/lirc-get-rec-mode userspace-api/media/rc/lirc-get-rec-mode
+ media/uapi/rc/lirc-get-rec-resolution userspace-api/media/rc/lirc-get-rec-resolution
+ media/uapi/rc/lirc-get-send-mode userspace-api/media/rc/lirc-get-send-mode
+ media/uapi/rc/lirc-get-timeout userspace-api/media/rc/lirc-get-timeout
+ media/uapi/rc/lirc-header userspace-api/media/rc/lirc-header
+ media/uapi/rc/lirc-read userspace-api/media/rc/lirc-read
+ media/uapi/rc/lirc-set-measure-carrier-mode userspace-api/media/rc/lirc-set-measure-carrier-mode
+ media/uapi/rc/lirc-set-rec-carrier userspace-api/media/rc/lirc-set-rec-carrier
+ media/uapi/rc/lirc-set-rec-carrier-range userspace-api/media/rc/lirc-set-rec-carrier-range
+ media/uapi/rc/lirc-set-rec-timeout userspace-api/media/rc/lirc-set-rec-timeout
+ media/uapi/rc/lirc-set-send-carrier userspace-api/media/rc/lirc-set-send-carrier
+ media/uapi/rc/lirc-set-send-duty-cycle userspace-api/media/rc/lirc-set-send-duty-cycle
+ media/uapi/rc/lirc-set-transmitter-mask userspace-api/media/rc/lirc-set-transmitter-mask
+ media/uapi/rc/lirc-set-wideband-receiver userspace-api/media/rc/lirc-set-wideband-receiver
+ media/uapi/rc/lirc-write userspace-api/media/rc/lirc-write
+ media/uapi/rc/rc-intro userspace-api/media/rc/rc-intro
+ media/uapi/rc/rc-protos userspace-api/media/rc/rc-protos
+ media/uapi/rc/rc-sysfs-nodes userspace-api/media/rc/rc-sysfs-nodes
+ media/uapi/rc/rc-table-change userspace-api/media/rc/rc-table-change
+ media/uapi/rc/rc-tables userspace-api/media/rc/rc-tables
+ media/uapi/rc/remote_controllers userspace-api/media/rc/remote_controllers
+ media/uapi/v4l/app-pri userspace-api/media/v4l/app-pri
+ media/uapi/v4l/audio userspace-api/media/v4l/audio
+ media/uapi/v4l/biblio userspace-api/media/v4l/biblio
+ media/uapi/v4l/buffer userspace-api/media/v4l/buffer
+ media/uapi/v4l/capture-example userspace-api/media/v4l/capture-example
+ media/uapi/v4l/capture.c userspace-api/media/v4l/capture.c
+ media/uapi/v4l/colorspaces userspace-api/media/v4l/colorspaces
+ media/uapi/v4l/colorspaces-defs userspace-api/media/v4l/colorspaces-defs
+ media/uapi/v4l/colorspaces-details userspace-api/media/v4l/colorspaces-details
+ media/uapi/v4l/common userspace-api/media/v4l/common
+ media/uapi/v4l/common-defs userspace-api/media/v4l/common-defs
+ media/uapi/v4l/compat userspace-api/media/v4l/compat
+ media/uapi/v4l/control userspace-api/media/v4l/control
+ media/uapi/v4l/crop userspace-api/media/v4l/crop
+ media/uapi/v4l/depth-formats userspace-api/media/v4l/depth-formats
+ media/uapi/v4l/dev-capture userspace-api/media/v4l/dev-capture
+ media/uapi/v4l/dev-codec userspace-api/media/v4l/dev-mem2mem
+ media/uapi/v4l/dev-decoder userspace-api/media/v4l/dev-decoder
+ media/uapi/v4l/dev-event userspace-api/media/v4l/dev-event
+ media/uapi/v4l/dev-mem2mem userspace-api/media/v4l/dev-mem2mem
+ media/uapi/v4l/dev-meta userspace-api/media/v4l/dev-meta
+ media/uapi/v4l/dev-osd userspace-api/media/v4l/dev-osd
+ media/uapi/v4l/dev-output userspace-api/media/v4l/dev-output
+ media/uapi/v4l/dev-overlay userspace-api/media/v4l/dev-overlay
+ media/uapi/v4l/dev-radio userspace-api/media/v4l/dev-radio
+ media/uapi/v4l/dev-raw-vbi userspace-api/media/v4l/dev-raw-vbi
+ media/uapi/v4l/dev-rds userspace-api/media/v4l/dev-rds
+ media/uapi/v4l/dev-sdr userspace-api/media/v4l/dev-sdr
+ media/uapi/v4l/dev-sliced-vbi userspace-api/media/v4l/dev-sliced-vbi
+ media/uapi/v4l/dev-stateless-decoder userspace-api/media/v4l/dev-stateless-decoder
+ media/uapi/v4l/dev-subdev userspace-api/media/v4l/dev-subdev
+ media/uapi/v4l/dev-touch userspace-api/media/v4l/dev-touch
+ media/uapi/v4l/devices userspace-api/media/v4l/devices
+ media/uapi/v4l/diff-v4l userspace-api/media/v4l/diff-v4l
+ media/uapi/v4l/dmabuf userspace-api/media/v4l/dmabuf
+ media/uapi/v4l/dv-timings userspace-api/media/v4l/dv-timings
+ media/uapi/v4l/ext-ctrls-camera userspace-api/media/v4l/ext-ctrls-camera
+ media/uapi/v4l/ext-ctrls-codec userspace-api/media/v4l/ext-ctrls-codec
+ media/uapi/v4l/ext-ctrls-detect userspace-api/media/v4l/ext-ctrls-detect
+ media/uapi/v4l/ext-ctrls-dv userspace-api/media/v4l/ext-ctrls-dv
+ media/uapi/v4l/ext-ctrls-flash userspace-api/media/v4l/ext-ctrls-flash
+ media/uapi/v4l/ext-ctrls-fm-rx userspace-api/media/v4l/ext-ctrls-fm-rx
+ media/uapi/v4l/ext-ctrls-fm-tx userspace-api/media/v4l/ext-ctrls-fm-tx
+ media/uapi/v4l/ext-ctrls-image-process userspace-api/media/v4l/ext-ctrls-image-process
+ media/uapi/v4l/ext-ctrls-image-source userspace-api/media/v4l/ext-ctrls-image-source
+ media/uapi/v4l/ext-ctrls-jpeg userspace-api/media/v4l/ext-ctrls-jpeg
+ media/uapi/v4l/ext-ctrls-rf-tuner userspace-api/media/v4l/ext-ctrls-rf-tuner
+ media/uapi/v4l/extended-controls userspace-api/media/v4l/extended-controls
+ media/uapi/v4l/field-order userspace-api/media/v4l/field-order
+ media/uapi/v4l/format userspace-api/media/v4l/format
+ media/uapi/v4l/func-close userspace-api/media/v4l/func-close
+ media/uapi/v4l/func-ioctl userspace-api/media/v4l/func-ioctl
+ media/uapi/v4l/func-mmap userspace-api/media/v4l/func-mmap
+ media/uapi/v4l/func-munmap userspace-api/media/v4l/func-munmap
+ media/uapi/v4l/func-open userspace-api/media/v4l/func-open
+ media/uapi/v4l/func-poll userspace-api/media/v4l/func-poll
+ media/uapi/v4l/func-read userspace-api/media/v4l/func-read
+ media/uapi/v4l/func-select userspace-api/media/v4l/func-select
+ media/uapi/v4l/func-write userspace-api/media/v4l/func-write
+ media/uapi/v4l/hist-v4l2 userspace-api/media/v4l/hist-v4l2
+ media/uapi/v4l/hsv-formats userspace-api/media/v4l/hsv-formats
+ media/uapi/v4l/io userspace-api/media/v4l/io
+ media/uapi/v4l/libv4l userspace-api/media/v4l/libv4l
+ media/uapi/v4l/libv4l-introduction userspace-api/media/v4l/libv4l-introduction
+ media/uapi/v4l/meta-formats userspace-api/media/v4l/meta-formats
+ media/uapi/v4l/mmap userspace-api/media/v4l/mmap
+ media/uapi/v4l/open userspace-api/media/v4l/open
+ media/uapi/v4l/pixfmt userspace-api/media/v4l/pixfmt
+ media/uapi/v4l/pixfmt-002 userspace-api/media/v4l/pixfmt-v4l2
+ media/uapi/v4l/pixfmt-003 userspace-api/media/v4l/pixfmt-v4l2-mplane
+ media/uapi/v4l/pixfmt-004 userspace-api/media/v4l/pixfmt-intro
+ media/uapi/v4l/pixfmt-006 userspace-api/media/v4l/colorspaces-defs
+ media/uapi/v4l/pixfmt-007 userspace-api/media/v4l/colorspaces-details
+ media/uapi/v4l/pixfmt-013 userspace-api/media/v4l/pixfmt-compressed
+ media/uapi/v4l/pixfmt-bayer userspace-api/media/v4l/pixfmt-bayer
+ media/uapi/v4l/pixfmt-cnf4 userspace-api/media/v4l/pixfmt-cnf4
+ media/uapi/v4l/pixfmt-compressed userspace-api/media/v4l/pixfmt-compressed
+ media/uapi/v4l/pixfmt-indexed userspace-api/media/v4l/pixfmt-indexed
+ media/uapi/v4l/pixfmt-intro userspace-api/media/v4l/pixfmt-intro
+ media/uapi/v4l/pixfmt-inzi userspace-api/media/v4l/pixfmt-inzi
+ media/uapi/v4l/pixfmt-m420 userspace-api/media/v4l/pixfmt-m420
+ media/uapi/v4l/pixfmt-meta-d4xx userspace-api/media/v4l/metafmt-d4xx
+ media/uapi/v4l/pixfmt-meta-intel-ipu3 userspace-api/media/v4l/metafmt-intel-ipu3
+ media/uapi/v4l/pixfmt-meta-uvc userspace-api/media/v4l/metafmt-uvc
+ media/uapi/v4l/pixfmt-meta-vivid userspace-api/media/v4l/metafmt-vivid
+ media/uapi/v4l/pixfmt-meta-vsp1-hgo userspace-api/media/v4l/metafmt-vsp1-hgo
+ media/uapi/v4l/pixfmt-meta-vsp1-hgt userspace-api/media/v4l/metafmt-vsp1-hgt
+ media/uapi/v4l/pixfmt-packed-hsv userspace-api/media/v4l/pixfmt-packed-hsv
+ media/uapi/v4l/pixfmt-packed-yuv userspace-api/media/v4l/pixfmt-packed-yuv
+ media/uapi/v4l/pixfmt-reserved userspace-api/media/v4l/pixfmt-reserved
+ media/uapi/v4l/pixfmt-rgb userspace-api/media/v4l/pixfmt-rgb
+ media/uapi/v4l/pixfmt-sbggr16 userspace-api/media/v4l/pixfmt-srggb16
+ media/uapi/v4l/pixfmt-sdr-cs08 userspace-api/media/v4l/pixfmt-sdr-cs08
+ media/uapi/v4l/pixfmt-sdr-cs14le userspace-api/media/v4l/pixfmt-sdr-cs14le
+ media/uapi/v4l/pixfmt-sdr-cu08 userspace-api/media/v4l/pixfmt-sdr-cu08
+ media/uapi/v4l/pixfmt-sdr-cu16le userspace-api/media/v4l/pixfmt-sdr-cu16le
+ media/uapi/v4l/pixfmt-sdr-pcu16be userspace-api/media/v4l/pixfmt-sdr-pcu16be
+ media/uapi/v4l/pixfmt-sdr-pcu18be userspace-api/media/v4l/pixfmt-sdr-pcu18be
+ media/uapi/v4l/pixfmt-sdr-pcu20be userspace-api/media/v4l/pixfmt-sdr-pcu20be
+ media/uapi/v4l/pixfmt-sdr-ru12le userspace-api/media/v4l/pixfmt-sdr-ru12le
+ media/uapi/v4l/pixfmt-srggb10 userspace-api/media/v4l/pixfmt-srggb10
+ media/uapi/v4l/pixfmt-srggb10-ipu3 userspace-api/media/v4l/pixfmt-srggb10-ipu3
+ media/uapi/v4l/pixfmt-srggb10alaw8 userspace-api/media/v4l/pixfmt-srggb10alaw8
+ media/uapi/v4l/pixfmt-srggb10dpcm8 userspace-api/media/v4l/pixfmt-srggb10dpcm8
+ media/uapi/v4l/pixfmt-srggb10p userspace-api/media/v4l/pixfmt-srggb10p
+ media/uapi/v4l/pixfmt-srggb12 userspace-api/media/v4l/pixfmt-srggb12
+ media/uapi/v4l/pixfmt-srggb12p userspace-api/media/v4l/pixfmt-srggb12p
+ media/uapi/v4l/pixfmt-srggb14 userspace-api/media/v4l/pixfmt-srggb14
+ media/uapi/v4l/pixfmt-srggb14p userspace-api/media/v4l/pixfmt-srggb14p
+ media/uapi/v4l/pixfmt-srggb16 userspace-api/media/v4l/pixfmt-srggb16
+ media/uapi/v4l/pixfmt-srggb8 userspace-api/media/v4l/pixfmt-srggb8
+ media/uapi/v4l/pixfmt-tch-td08 userspace-api/media/v4l/pixfmt-tch-td08
+ media/uapi/v4l/pixfmt-tch-td16 userspace-api/media/v4l/pixfmt-tch-td16
+ media/uapi/v4l/pixfmt-tch-tu08 userspace-api/media/v4l/pixfmt-tch-tu08
+ media/uapi/v4l/pixfmt-tch-tu16 userspace-api/media/v4l/pixfmt-tch-tu16
+ media/uapi/v4l/pixfmt-uv8 userspace-api/media/v4l/pixfmt-uv8
+ media/uapi/v4l/pixfmt-v4l2 userspace-api/media/v4l/pixfmt-v4l2
+ media/uapi/v4l/pixfmt-v4l2-mplane userspace-api/media/v4l/pixfmt-v4l2-mplane
+ media/uapi/v4l/pixfmt-y12i userspace-api/media/v4l/pixfmt-y12i
+ media/uapi/v4l/pixfmt-y8i userspace-api/media/v4l/pixfmt-y8i
+ media/uapi/v4l/pixfmt-z16 userspace-api/media/v4l/pixfmt-z16
+ media/uapi/v4l/planar-apis userspace-api/media/v4l/planar-apis
+ media/uapi/v4l/querycap userspace-api/media/v4l/querycap
+ media/uapi/v4l/rw userspace-api/media/v4l/rw
+ media/uapi/v4l/sdr-formats userspace-api/media/v4l/sdr-formats
+ media/uapi/v4l/selection-api userspace-api/media/v4l/selection-api
+ media/uapi/v4l/selection-api-002 userspace-api/media/v4l/selection-api-intro
+ media/uapi/v4l/selection-api-003 userspace-api/media/v4l/selection-api-targets
+ media/uapi/v4l/selection-api-004 userspace-api/media/v4l/selection-api-configuration
+ media/uapi/v4l/selection-api-005 userspace-api/media/v4l/selection-api-vs-crop-api
+ media/uapi/v4l/selection-api-006 userspace-api/media/v4l/selection-api-examples
+ media/uapi/v4l/selection-api-configuration userspace-api/media/v4l/selection-api-configuration
+ media/uapi/v4l/selection-api-examples userspace-api/media/v4l/selection-api-examples
+ media/uapi/v4l/selection-api-intro userspace-api/media/v4l/selection-api-intro
+ media/uapi/v4l/selection-api-targets userspace-api/media/v4l/selection-api-targets
+ media/uapi/v4l/selection-api-vs-crop-api userspace-api/media/v4l/selection-api-vs-crop-api
+ media/uapi/v4l/selections-common userspace-api/media/v4l/selections-common
+ media/uapi/v4l/standard userspace-api/media/v4l/standard
+ media/uapi/v4l/streaming-par userspace-api/media/v4l/streaming-par
+ media/uapi/v4l/subdev-formats userspace-api/media/v4l/subdev-formats
+ media/uapi/v4l/tch-formats userspace-api/media/v4l/tch-formats
+ media/uapi/v4l/tuner userspace-api/media/v4l/tuner
+ media/uapi/v4l/user-func userspace-api/media/v4l/user-func
+ media/uapi/v4l/userp userspace-api/media/v4l/userp
+ media/uapi/v4l/v4l2 userspace-api/media/v4l/v4l2
+ media/uapi/v4l/v4l2-selection-flags userspace-api/media/v4l/v4l2-selection-flags
+ media/uapi/v4l/v4l2-selection-targets userspace-api/media/v4l/v4l2-selection-targets
+ media/uapi/v4l/v4l2grab-example userspace-api/media/v4l/v4l2grab-example
+ media/uapi/v4l/v4l2grab.c userspace-api/media/v4l/v4l2grab.c
+ media/uapi/v4l/video userspace-api/media/v4l/video
+ media/uapi/v4l/videodev userspace-api/media/v4l/videodev
+ media/uapi/v4l/vidioc-create-bufs userspace-api/media/v4l/vidioc-create-bufs
+ media/uapi/v4l/vidioc-cropcap userspace-api/media/v4l/vidioc-cropcap
+ media/uapi/v4l/vidioc-dbg-g-chip-info userspace-api/media/v4l/vidioc-dbg-g-chip-info
+ media/uapi/v4l/vidioc-dbg-g-register userspace-api/media/v4l/vidioc-dbg-g-register
+ media/uapi/v4l/vidioc-decoder-cmd userspace-api/media/v4l/vidioc-decoder-cmd
+ media/uapi/v4l/vidioc-dqevent userspace-api/media/v4l/vidioc-dqevent
+ media/uapi/v4l/vidioc-dv-timings-cap userspace-api/media/v4l/vidioc-dv-timings-cap
+ media/uapi/v4l/vidioc-encoder-cmd userspace-api/media/v4l/vidioc-encoder-cmd
+ media/uapi/v4l/vidioc-enum-dv-timings userspace-api/media/v4l/vidioc-enum-dv-timings
+ media/uapi/v4l/vidioc-enum-fmt userspace-api/media/v4l/vidioc-enum-fmt
+ media/uapi/v4l/vidioc-enum-frameintervals userspace-api/media/v4l/vidioc-enum-frameintervals
+ media/uapi/v4l/vidioc-enum-framesizes userspace-api/media/v4l/vidioc-enum-framesizes
+ media/uapi/v4l/vidioc-enum-freq-bands userspace-api/media/v4l/vidioc-enum-freq-bands
+ media/uapi/v4l/vidioc-enumaudio userspace-api/media/v4l/vidioc-enumaudio
+ media/uapi/v4l/vidioc-enumaudioout userspace-api/media/v4l/vidioc-enumaudioout
+ media/uapi/v4l/vidioc-enuminput userspace-api/media/v4l/vidioc-enuminput
+ media/uapi/v4l/vidioc-enumoutput userspace-api/media/v4l/vidioc-enumoutput
+ media/uapi/v4l/vidioc-enumstd userspace-api/media/v4l/vidioc-enumstd
+ media/uapi/v4l/vidioc-expbuf userspace-api/media/v4l/vidioc-expbuf
+ media/uapi/v4l/vidioc-g-audio userspace-api/media/v4l/vidioc-g-audio
+ media/uapi/v4l/vidioc-g-audioout userspace-api/media/v4l/vidioc-g-audioout
+ media/uapi/v4l/vidioc-g-crop userspace-api/media/v4l/vidioc-g-crop
+ media/uapi/v4l/vidioc-g-ctrl userspace-api/media/v4l/vidioc-g-ctrl
+ media/uapi/v4l/vidioc-g-dv-timings userspace-api/media/v4l/vidioc-g-dv-timings
+ media/uapi/v4l/vidioc-g-edid userspace-api/media/v4l/vidioc-g-edid
+ media/uapi/v4l/vidioc-g-enc-index userspace-api/media/v4l/vidioc-g-enc-index
+ media/uapi/v4l/vidioc-g-ext-ctrls userspace-api/media/v4l/vidioc-g-ext-ctrls
+ media/uapi/v4l/vidioc-g-fbuf userspace-api/media/v4l/vidioc-g-fbuf
+ media/uapi/v4l/vidioc-g-fmt userspace-api/media/v4l/vidioc-g-fmt
+ media/uapi/v4l/vidioc-g-frequency userspace-api/media/v4l/vidioc-g-frequency
+ media/uapi/v4l/vidioc-g-input userspace-api/media/v4l/vidioc-g-input
+ media/uapi/v4l/vidioc-g-jpegcomp userspace-api/media/v4l/vidioc-g-jpegcomp
+ media/uapi/v4l/vidioc-g-modulator userspace-api/media/v4l/vidioc-g-modulator
+ media/uapi/v4l/vidioc-g-output userspace-api/media/v4l/vidioc-g-output
+ media/uapi/v4l/vidioc-g-parm userspace-api/media/v4l/vidioc-g-parm
+ media/uapi/v4l/vidioc-g-priority userspace-api/media/v4l/vidioc-g-priority
+ media/uapi/v4l/vidioc-g-selection userspace-api/media/v4l/vidioc-g-selection
+ media/uapi/v4l/vidioc-g-sliced-vbi-cap userspace-api/media/v4l/vidioc-g-sliced-vbi-cap
+ media/uapi/v4l/vidioc-g-std userspace-api/media/v4l/vidioc-g-std
+ media/uapi/v4l/vidioc-g-tuner userspace-api/media/v4l/vidioc-g-tuner
+ media/uapi/v4l/vidioc-log-status userspace-api/media/v4l/vidioc-log-status
+ media/uapi/v4l/vidioc-overlay userspace-api/media/v4l/vidioc-overlay
+ media/uapi/v4l/vidioc-prepare-buf userspace-api/media/v4l/vidioc-prepare-buf
+ media/uapi/v4l/vidioc-qbuf userspace-api/media/v4l/vidioc-qbuf
+ media/uapi/v4l/vidioc-query-dv-timings userspace-api/media/v4l/vidioc-query-dv-timings
+ media/uapi/v4l/vidioc-querybuf userspace-api/media/v4l/vidioc-querybuf
+ media/uapi/v4l/vidioc-querycap userspace-api/media/v4l/vidioc-querycap
+ media/uapi/v4l/vidioc-queryctrl userspace-api/media/v4l/vidioc-queryctrl
+ media/uapi/v4l/vidioc-querystd userspace-api/media/v4l/vidioc-querystd
+ media/uapi/v4l/vidioc-reqbufs userspace-api/media/v4l/vidioc-reqbufs
+ media/uapi/v4l/vidioc-s-hw-freq-seek userspace-api/media/v4l/vidioc-s-hw-freq-seek
+ media/uapi/v4l/vidioc-streamon userspace-api/media/v4l/vidioc-streamon
+ media/uapi/v4l/vidioc-subdev-enum-frame-interval userspace-api/media/v4l/vidioc-subdev-enum-frame-interval
+ media/uapi/v4l/vidioc-subdev-enum-frame-size userspace-api/media/v4l/vidioc-subdev-enum-frame-size
+ media/uapi/v4l/vidioc-subdev-enum-mbus-code userspace-api/media/v4l/vidioc-subdev-enum-mbus-code
+ media/uapi/v4l/vidioc-subdev-g-crop userspace-api/media/v4l/vidioc-subdev-g-crop
+ media/uapi/v4l/vidioc-subdev-g-fmt userspace-api/media/v4l/vidioc-subdev-g-fmt
+ media/uapi/v4l/vidioc-subdev-g-frame-interval userspace-api/media/v4l/vidioc-subdev-g-frame-interval
+ media/uapi/v4l/vidioc-subdev-g-selection userspace-api/media/v4l/vidioc-subdev-g-selection
+ media/uapi/v4l/vidioc-subscribe-event userspace-api/media/v4l/vidioc-subscribe-event
+ media/uapi/v4l/yuv-formats userspace-api/media/v4l/yuv-formats
+ media/v4l-drivers/au0828-cardlist admin-guide/media/au0828-cardlist
+ media/v4l-drivers/bttv admin-guide/media/bttv
+ media/v4l-drivers/bttv-cardlist admin-guide/media/bttv-cardlist
+ media/v4l-drivers/bttv-devel driver-api/media/drivers/bttv-devel
+ media/v4l-drivers/cafe_ccic admin-guide/media/cafe_ccic
+ media/v4l-drivers/cardlist admin-guide/media/cardlist
+ media/v4l-drivers/cx2341x driver-api/media/drivers/cx2341x-devel
+ media/v4l-drivers/cx2341x-devel driver-api/media/drivers/cx2341x-devel
+ media/v4l-drivers/cx2341x-uapi userspace-api/media/drivers/cx2341x-uapi
+ media/v4l-drivers/cx23885-cardlist admin-guide/media/cx23885-cardlist
+ media/v4l-drivers/cx88 admin-guide/media/cx88
+ media/v4l-drivers/cx88-cardlist admin-guide/media/cx88-cardlist
+ media/v4l-drivers/cx88-devel driver-api/media/drivers/cx88-devel
+ media/v4l-drivers/em28xx-cardlist admin-guide/media/em28xx-cardlist
+ media/v4l-drivers/fimc admin-guide/media/fimc
+ media/v4l-drivers/fimc-devel driver-api/media/drivers/fimc-devel
+ media/v4l-drivers/fourcc userspace-api/media/v4l/fourcc
+ media/v4l-drivers/gspca-cardlist admin-guide/media/gspca-cardlist
+ media/v4l-drivers/imx admin-guide/media/imx
+ media/v4l-drivers/imx-uapi userspace-api/media/drivers/imx-uapi
+ media/v4l-drivers/imx7 admin-guide/media/imx7
+ media/v4l-drivers/index userspace-api/media/drivers/index
+ media/v4l-drivers/ipu3 admin-guide/media/ipu3
+ media/v4l-drivers/ivtv admin-guide/media/ivtv
+ media/v4l-drivers/ivtv-cardlist admin-guide/media/ivtv-cardlist
+ media/v4l-drivers/max2175 userspace-api/media/drivers/max2175
+ media/v4l-drivers/omap3isp admin-guide/media/omap3isp
+ media/v4l-drivers/omap3isp-uapi userspace-api/media/drivers/omap3isp-uapi
+ media/v4l-drivers/philips admin-guide/media/philips
+ media/v4l-drivers/pvrusb2 driver-api/media/drivers/pvrusb2
+ media/v4l-drivers/pxa_camera driver-api/media/drivers/pxa_camera
+ media/v4l-drivers/qcom_camss admin-guide/media/qcom_camss
+ media/v4l-drivers/radiotrack driver-api/media/drivers/radiotrack
+ media/v4l-drivers/rcar-fdp1 admin-guide/media/rcar-fdp1
+ media/v4l-drivers/saa7134 admin-guide/media/saa7134
+ media/v4l-drivers/saa7134-cardlist admin-guide/media/saa7134-cardlist
+ media/v4l-drivers/saa7134-devel driver-api/media/drivers/saa7134-devel
+ media/v4l-drivers/saa7164-cardlist admin-guide/media/saa7164-cardlist
+ media/v4l-drivers/sh_mobile_ceu_camera driver-api/media/drivers/sh_mobile_ceu_camera
+ media/v4l-drivers/si470x admin-guide/media/si470x
+ media/v4l-drivers/si4713 admin-guide/media/si4713
+ media/v4l-drivers/si476x admin-guide/media/si476x
+ media/v4l-drivers/tuner-cardlist admin-guide/media/tuner-cardlist
+ media/v4l-drivers/tuners driver-api/media/drivers/tuners
+ media/v4l-drivers/uvcvideo userspace-api/media/drivers/uvcvideo
+ media/v4l-drivers/v4l-with-ir admin-guide/media/remote-controller
+ media/v4l-drivers/vimc admin-guide/media/vimc
+ media/v4l-drivers/vimc-devel driver-api/media/drivers/vimc-devel
+ media/v4l-drivers/vivid admin-guide/media/vivid
+ media/v4l-drivers/zoran driver-api/media/drivers/zoran
+ memory-devices/ti-emif driver-api/memory-devices/ti-emif
+ mips/booting arch/mips/booting
+ mips/features arch/mips/features
+ mips/index arch/mips/index
+ mips/ingenic-tcu arch/mips/ingenic-tcu
+ mm/slub admin-guide/mm/slab
+ mmc/index driver-api/mmc/index
+ mmc/mmc-async-req driver-api/mmc/mmc-async-req
+ mmc/mmc-dev-attrs driver-api/mmc/mmc-dev-attrs
+ mmc/mmc-dev-parts driver-api/mmc/mmc-dev-parts
+ mmc/mmc-tools driver-api/mmc/mmc-tools
+ mtd/index driver-api/mtd/index
+ mtd/intel-spi driver-api/mtd/spi-intel
+ mtd/nand_ecc driver-api/mtd/nand_ecc
+ mtd/spi-nor driver-api/mtd/spi-nor
+ namespaces/compatibility-list admin-guide/namespaces/compatibility-list
+ namespaces/index admin-guide/namespaces/index
+ namespaces/resource-control admin-guide/namespaces/resource-control
+ networking/altera_tse networking/device_drivers/ethernet/altera/altera_tse
+ networking/baycom networking/device_drivers/hamradio/baycom
+ networking/bpf_flow_dissector bpf/prog_flow_dissector
+ networking/cxacru networking/device_drivers/atm/cxacru
+ networking/defza networking/device_drivers/fddi/defza
+ networking/device_drivers/3com/3c509 networking/device_drivers/ethernet/3com/3c509
+ networking/device_drivers/3com/vortex networking/device_drivers/ethernet/3com/vortex
+ networking/device_drivers/amazon/ena networking/device_drivers/ethernet/amazon/ena
+ networking/device_drivers/aquantia/atlantic networking/device_drivers/ethernet/aquantia/atlantic
+ networking/device_drivers/chelsio/cxgb networking/device_drivers/ethernet/chelsio/cxgb
+ networking/device_drivers/cirrus/cs89x0 networking/device_drivers/ethernet/cirrus/cs89x0
+ networking/device_drivers/davicom/dm9000 networking/device_drivers/ethernet/davicom/dm9000
+ networking/device_drivers/dec/dmfe networking/device_drivers/ethernet/dec/dmfe
+ networking/device_drivers/dlink/dl2k networking/device_drivers/ethernet/dlink/dl2k
+ networking/device_drivers/freescale/dpaa networking/device_drivers/ethernet/freescale/dpaa
+ networking/device_drivers/freescale/dpaa2/dpio-driver networking/device_drivers/ethernet/freescale/dpaa2/dpio-driver
+ networking/device_drivers/freescale/dpaa2/ethernet-driver networking/device_drivers/ethernet/freescale/dpaa2/ethernet-driver
+ networking/device_drivers/freescale/dpaa2/index networking/device_drivers/ethernet/freescale/dpaa2/index
+ networking/device_drivers/freescale/dpaa2/mac-phy-support networking/device_drivers/ethernet/freescale/dpaa2/mac-phy-support
+ networking/device_drivers/freescale/dpaa2/overview networking/device_drivers/ethernet/freescale/dpaa2/overview
+ networking/device_drivers/freescale/gianfar networking/device_drivers/ethernet/freescale/gianfar
+ networking/device_drivers/google/gve networking/device_drivers/ethernet/google/gve
+ networking/device_drivers/intel/e100 networking/device_drivers/ethernet/intel/e100
+ networking/device_drivers/intel/e1000 networking/device_drivers/ethernet/intel/e1000
+ networking/device_drivers/intel/e1000e networking/device_drivers/ethernet/intel/e1000e
+ networking/device_drivers/intel/fm10k networking/device_drivers/ethernet/intel/fm10k
+ networking/device_drivers/intel/i40e networking/device_drivers/ethernet/intel/i40e
+ networking/device_drivers/intel/iavf networking/device_drivers/ethernet/intel/iavf
+ networking/device_drivers/intel/ice networking/device_drivers/ethernet/intel/ice
+ networking/device_drivers/intel/igb networking/device_drivers/ethernet/intel/igb
+ networking/device_drivers/intel/igbvf networking/device_drivers/ethernet/intel/igbvf
+ networking/device_drivers/intel/ipw2100 networking/device_drivers/wifi/intel/ipw2100
+ networking/device_drivers/intel/ipw2200 networking/device_drivers/wifi/intel/ipw2200
+ networking/device_drivers/intel/ixgbe networking/device_drivers/ethernet/intel/ixgbe
+ networking/device_drivers/intel/ixgbevf networking/device_drivers/ethernet/intel/ixgbevf
+ networking/device_drivers/marvell/octeontx2 networking/device_drivers/ethernet/marvell/octeontx2
+ networking/device_drivers/microsoft/netvsc networking/device_drivers/ethernet/microsoft/netvsc
+ networking/device_drivers/neterion/s2io networking/device_drivers/ethernet/neterion/s2io
+ networking/device_drivers/netronome/nfp networking/device_drivers/ethernet/netronome/nfp
+ networking/device_drivers/pensando/ionic networking/device_drivers/ethernet/pensando/ionic
+ networking/device_drivers/qualcomm/rmnet networking/device_drivers/cellular/qualcomm/rmnet
+ networking/device_drivers/smsc/smc9 networking/device_drivers/ethernet/smsc/smc9
+ networking/device_drivers/stmicro/stmmac networking/device_drivers/ethernet/stmicro/stmmac
+ networking/device_drivers/ti/cpsw networking/device_drivers/ethernet/ti/cpsw
+ networking/device_drivers/ti/cpsw_switchdev networking/device_drivers/ethernet/ti/cpsw_switchdev
+ networking/device_drivers/ti/tlan networking/device_drivers/ethernet/ti/tlan
+ networking/devlink-trap networking/devlink/devlink-trap
+ networking/dpaa2/dpio-driver networking/device_drivers/ethernet/freescale/dpaa2/dpio-driver
+ networking/dpaa2/ethernet-driver networking/device_drivers/ethernet/freescale/dpaa2/ethernet-driver
+ networking/dpaa2/index networking/device_drivers/ethernet/freescale/dpaa2/index
+ networking/dpaa2/overview networking/device_drivers/ethernet/freescale/dpaa2/overview
+ networking/e100 networking/device_drivers/ethernet/intel/e100
+ networking/e1000 networking/device_drivers/ethernet/intel/e1000
+ networking/e1000e networking/device_drivers/ethernet/intel/e1000e
+ networking/fm10k networking/device_drivers/ethernet/intel/fm10k
+ networking/fore200e networking/device_drivers/atm/fore200e
+ networking/hinic networking/device_drivers/ethernet/huawei/hinic
+ networking/i40e networking/device_drivers/ethernet/intel/i40e
+ networking/iavf networking/device_drivers/ethernet/intel/iavf
+ networking/ice networking/device_drivers/ethernet/intel/ice
+ networking/igb networking/device_drivers/ethernet/intel/igb
+ networking/igbvf networking/device_drivers/ethernet/intel/igbvf
+ networking/iphase networking/device_drivers/atm/iphase
+ networking/ixgbe networking/device_drivers/ethernet/intel/ixgbe
+ networking/ixgbevf networking/device_drivers/ethernet/intel/ixgbevf
+ networking/netdev-FAQ process/maintainer-netdev
+ networking/skfp networking/device_drivers/fddi/skfp
+ networking/z8530drv networking/device_drivers/hamradio/z8530drv
+ nfc/index driver-api/nfc/index
+ nfc/nfc-hci driver-api/nfc/nfc-hci
+ nfc/nfc-pn544 driver-api/nfc/nfc-pn544
+ nios2/features arch/nios2/features
+ nios2/index arch/nios2/index
+ nios2/nios2 arch/nios2/nios2
+ nvdimm/btt driver-api/nvdimm/btt
+ nvdimm/index driver-api/nvdimm/index
+ nvdimm/nvdimm driver-api/nvdimm/nvdimm
+ nvdimm/security driver-api/nvdimm/security
+ nvmem/nvmem driver-api/nvmem
+ openrisc/features arch/openrisc/features
+ openrisc/index arch/openrisc/index
+ openrisc/openrisc_port arch/openrisc/openrisc_port
+ openrisc/todo arch/openrisc/todo
+ parisc/debugging arch/parisc/debugging
+ parisc/features arch/parisc/features
+ parisc/index arch/parisc/index
+ parisc/registers arch/parisc/registers
+ perf/arm-ccn admin-guide/perf/arm-ccn
+ perf/arm_dsu_pmu admin-guide/perf/arm_dsu_pmu
+ perf/hisi-pmu admin-guide/perf/hisi-pmu
+ perf/index admin-guide/perf/index
+ perf/qcom_l2_pmu admin-guide/perf/qcom_l2_pmu
+ perf/qcom_l3_pmu admin-guide/perf/qcom_l3_pmu
+ perf/thunderx2-pmu admin-guide/perf/thunderx2-pmu
+ perf/xgene-pmu admin-guide/perf/xgene-pmu
+ phy/samsung-usb2 driver-api/phy/samsung-usb2
+ powerpc/associativity arch/powerpc/associativity
+ powerpc/booting arch/powerpc/booting
+ powerpc/bootwrapper arch/powerpc/bootwrapper
+ powerpc/cpu_families arch/powerpc/cpu_families
+ powerpc/cpu_features arch/powerpc/cpu_features
+ powerpc/dawr-power9 arch/powerpc/dawr-power9
+ powerpc/dexcr arch/powerpc/dexcr
+ powerpc/dscr arch/powerpc/dscr
+ powerpc/eeh-pci-error-recovery arch/powerpc/eeh-pci-error-recovery
+ powerpc/elf_hwcaps arch/powerpc/elf_hwcaps
+ powerpc/elfnote arch/powerpc/elfnote
+ powerpc/features arch/powerpc/features
+ powerpc/firmware-assisted-dump arch/powerpc/firmware-assisted-dump
+ powerpc/hvcs arch/powerpc/hvcs
+ powerpc/imc arch/powerpc/imc
+ powerpc/index arch/powerpc/index
+ powerpc/isa-versions arch/powerpc/isa-versions
+ powerpc/kaslr-booke32 arch/powerpc/kaslr-booke32
+ powerpc/mpc52xx arch/powerpc/mpc52xx
+ powerpc/papr_hcalls arch/powerpc/papr_hcalls
+ powerpc/pci_iov_resource_on_powernv arch/powerpc/pci_iov_resource_on_powernv
+ powerpc/pmu-ebb arch/powerpc/pmu-ebb
+ powerpc/ptrace arch/powerpc/ptrace
+ powerpc/qe_firmware arch/powerpc/qe_firmware
+ powerpc/syscall64-abi arch/powerpc/syscall64-abi
+ powerpc/transactional_memory arch/powerpc/transactional_memory
+ powerpc/ultravisor arch/powerpc/ultravisor
+ powerpc/vas-api arch/powerpc/vas-api
+ powerpc/vcpudispatch_stats arch/powerpc/vcpudispatch_stats
+ powerpc/vmemmap_dedup arch/powerpc/vmemmap_dedup
+ process/clang-format dev-tools/clang-format
+ process/magic-number staging/magic-number
+ process/unaligned-memory-access core-api/unaligned-memory-access
+ rapidio/index driver-api/rapidio/index
+ rapidio/mport_cdev driver-api/rapidio/mport_cdev
+ rapidio/rapidio driver-api/rapidio/rapidio
+ rapidio/rio_cm driver-api/rapidio/rio_cm
+ rapidio/sysfs driver-api/rapidio/sysfs
+ rapidio/tsi721 driver-api/rapidio/tsi721
+ riscv/acpi arch/riscv/acpi
+ riscv/boot arch/riscv/boot
+ riscv/boot-image-header arch/riscv/boot-image-header
+ riscv/features arch/riscv/features
+ riscv/hwprobe arch/riscv/hwprobe
+ riscv/index arch/riscv/index
+ riscv/patch-acceptance arch/riscv/patch-acceptance
+ riscv/uabi arch/riscv/uabi
+ riscv/vector arch/riscv/vector
+ riscv/vm-layout arch/riscv/vm-layout
+ s390/3270 arch/s390/3270
+ s390/cds arch/s390/cds
+ s390/common_io arch/s390/common_io
+ s390/driver-model arch/s390/driver-model
+ s390/features arch/s390/features
+ s390/index arch/s390/index
+ s390/monreader arch/s390/monreader
+ s390/pci arch/s390/pci
+ s390/qeth arch/s390/qeth
+ s390/s390dbf arch/s390/s390dbf
+ s390/text_files arch/s390/text_files
+ s390/vfio-ap arch/s390/vfio-ap
+ s390/vfio-ap-locking arch/s390/vfio-ap-locking
+ s390/vfio-ccw arch/s390/vfio-ccw
+ s390/zfcpdump arch/s390/zfcpdump
+ security/LSM security/lsm-development
+ security/LSM-sctp security/SCTP
+ serial/driver driver-api/serial/driver
+ serial/index driver-api/serial/index
+ serial/moxa-smartio driver-api/tty/moxa-smartio
+ serial/n_gsm driver-api/tty/n_gsm
+ serial/serial-iso7816 driver-api/serial/serial-iso7816
+ serial/serial-rs485 driver-api/serial/serial-rs485
+ serial/tty driver-api/tty/tty_ldisc
+ sh/booting arch/sh/booting
+ sh/features arch/sh/features
+ sh/index arch/sh/index
+ sh/new-machine arch/sh/new-machine
+ sh/register-banks arch/sh/register-banks
+ sparc/adi arch/sparc/adi
+ sparc/console arch/sparc/console
+ sparc/features arch/sparc/features
+ sparc/index arch/sparc/index
+ sparc/oradax/oracle-dax arch/sparc/oradax/oracle-dax
+ staging/kprobes trace/kprobes
+ sysctl/abi admin-guide/sysctl/abi
+ sysctl/fs admin-guide/sysctl/fs
+ sysctl/index admin-guide/sysctl/index
+ sysctl/kernel admin-guide/sysctl/kernel
+ sysctl/net admin-guide/sysctl/net
+ sysctl/sunrpc admin-guide/sysctl/sunrpc
+ sysctl/user admin-guide/sysctl/user
+ sysctl/vm admin-guide/sysctl/vm
+ thermal/cpu-cooling-api driver-api/thermal/cpu-cooling-api
+ thermal/exynos_thermal driver-api/thermal/exynos_thermal
+ thermal/exynos_thermal_emulation driver-api/thermal/exynos_thermal_emulation
+ thermal/index driver-api/thermal/index
+ thermal/intel_powerclamp admin-guide/thermal/intel_powerclamp
+ thermal/nouveau_thermal driver-api/thermal/nouveau_thermal
+ thermal/power_allocator driver-api/thermal/power_allocator
+ thermal/sysfs-api driver-api/thermal/sysfs-api
+ thermal/x86_pkg_temperature_thermal driver-api/thermal/x86_pkg_temperature_thermal
+ tpm/index security/tpm/index
+ tpm/tpm_vtpm_proxy security/tpm/tpm_vtpm_proxy
+ trace/coresight trace/coresight/coresight
+ trace/coresight-cpu-debug trace/coresight/coresight-cpu-debug
+ trace/rv/da_monitor_synthesis trace/rv/monitor_synthesis
+ translations/it_IT/admin-guide/security-bugs translations/it_IT/process/security-bugs
+ translations/it_IT/process/clang-format translations/it_IT/dev-tools/clang-format
+ translations/it_IT/process/magic-number translations/it_IT/staging/magic-number
+ translations/it_IT/riscv/patch-acceptance translations/it_IT/arch/riscv/patch-acceptance
+ translations/ja_JP/howto translations/ja_JP/process/howto
+ translations/ko_KR/howto translations/ko_KR/process/howto
+ translations/sp_SP/howto translations/sp_SP/process/howto
translations/sp_SP/submitting-patches translations/sp_SP/process/submitting-patches 995 + translations/zh_CN/admin-guide/security-bugs translations/zh_CN/process/security-bugs 996 + translations/zh_CN/arch translations/zh_CN/arch/index 997 + translations/zh_CN/arm64/amu translations/zh_CN/arch/arm64/amu 998 + translations/zh_CN/arm64/elf_hwcaps translations/zh_CN/arch/arm64/elf_hwcaps 999 + translations/zh_CN/arm64/hugetlbpage translations/zh_CN/arch/arm64/hugetlbpage 1000 + translations/zh_CN/arm64/index translations/zh_CN/arch/arm64/index 1001 + translations/zh_CN/arm64/perf translations/zh_CN/arch/arm64/perf 1002 + translations/zh_CN/coding-style translations/zh_CN/process/coding-style 1003 + translations/zh_CN/loongarch/booting translations/zh_CN/arch/loongarch/booting 1004 + translations/zh_CN/loongarch/features translations/zh_CN/arch/loongarch/features 1005 + translations/zh_CN/loongarch/index translations/zh_CN/arch/loongarch/index 1006 + translations/zh_CN/loongarch/introduction translations/zh_CN/arch/loongarch/introduction 1007 + translations/zh_CN/loongarch/irq-chip-model translations/zh_CN/arch/loongarch/irq-chip-model 1008 + translations/zh_CN/mips/booting translations/zh_CN/arch/mips/booting 1009 + translations/zh_CN/mips/features translations/zh_CN/arch/mips/features 1010 + translations/zh_CN/mips/index translations/zh_CN/arch/mips/index 1011 + translations/zh_CN/mips/ingenic-tcu translations/zh_CN/arch/mips/ingenic-tcu 1012 + translations/zh_CN/openrisc/index translations/zh_CN/arch/openrisc/index 1013 + translations/zh_CN/openrisc/openrisc_port translations/zh_CN/arch/openrisc/openrisc_port 1014 + translations/zh_CN/openrisc/todo translations/zh_CN/arch/openrisc/todo 1015 + translations/zh_CN/parisc/debugging translations/zh_CN/arch/parisc/debugging 1016 + translations/zh_CN/parisc/index translations/zh_CN/arch/parisc/index 1017 + translations/zh_CN/parisc/registers translations/zh_CN/arch/parisc/registers 1018 + 
translations/zh_CN/riscv/boot-image-header translations/zh_CN/arch/riscv/boot-image-header 1019 + translations/zh_CN/riscv/index translations/zh_CN/arch/riscv/index 1020 + translations/zh_CN/riscv/patch-acceptance translations/zh_CN/arch/riscv/patch-acceptance 1021 + translations/zh_CN/riscv/vm-layout translations/zh_CN/arch/riscv/vm-layout 1022 + translations/zh_CN/vm/active_mm translations/zh_CN/mm/active_mm 1023 + translations/zh_CN/vm/balance translations/zh_CN/mm/balance 1024 + translations/zh_CN/vm/damon/api translations/zh_CN/mm/damon/api 1025 + translations/zh_CN/vm/damon/design translations/zh_CN/mm/damon/design 1026 + translations/zh_CN/vm/damon/faq translations/zh_CN/mm/damon/faq 1027 + translations/zh_CN/vm/damon/index translations/zh_CN/mm/damon/index 1028 + translations/zh_CN/vm/free_page_reporting translations/zh_CN/mm/free_page_reporting 1029 + translations/zh_CN/vm/highmem translations/zh_CN/mm/highmem 1030 + translations/zh_CN/vm/hmm translations/zh_CN/mm/hmm 1031 + translations/zh_CN/vm/hugetlbfs_reserv translations/zh_CN/mm/hugetlbfs_reserv 1032 + translations/zh_CN/vm/hwpoison translations/zh_CN/mm/hwpoison 1033 + translations/zh_CN/vm/index translations/zh_CN/mm/index 1034 + translations/zh_CN/vm/ksm translations/zh_CN/mm/ksm 1035 + translations/zh_CN/vm/memory-model translations/zh_CN/mm/memory-model 1036 + translations/zh_CN/vm/mmu_notifier translations/zh_CN/mm/mmu_notifier 1037 + translations/zh_CN/vm/numa translations/zh_CN/mm/numa 1038 + translations/zh_CN/vm/overcommit-accounting translations/zh_CN/mm/overcommit-accounting 1039 + translations/zh_CN/vm/page_frags translations/zh_CN/mm/page_frags 1040 + translations/zh_CN/vm/page_owner translations/zh_CN/mm/page_owner 1041 + translations/zh_CN/vm/page_table_check translations/zh_CN/mm/page_table_check 1042 + translations/zh_CN/vm/remap_file_pages translations/zh_CN/mm/remap_file_pages 1043 + translations/zh_CN/vm/split_page_table_lock translations/zh_CN/mm/split_page_table_lock 1044 + 
translations/zh_CN/vm/zsmalloc translations/zh_CN/mm/zsmalloc 1045 + translations/zh_TW/arm64/amu translations/zh_TW/arch/arm64/amu 1046 + translations/zh_TW/arm64/elf_hwcaps translations/zh_TW/arch/arm64/elf_hwcaps 1047 + translations/zh_TW/arm64/hugetlbpage translations/zh_TW/arch/arm64/hugetlbpage 1048 + translations/zh_TW/arm64/index translations/zh_TW/arch/arm64/index 1049 + translations/zh_TW/arm64/perf translations/zh_TW/arch/arm64/perf 1050 + tty/device_drivers/oxsemi-tornado misc-devices/oxsemi-tornado 1051 + tty/index driver-api/tty/index 1052 + tty/n_tty driver-api/tty/n_tty 1053 + tty/tty_buffer driver-api/tty/tty_buffer 1054 + tty/tty_driver driver-api/tty/tty_driver 1055 + tty/tty_internals driver-api/tty/tty_internals 1056 + tty/tty_ldisc driver-api/tty/tty_ldisc 1057 + tty/tty_port driver-api/tty/tty_port 1058 + tty/tty_struct driver-api/tty/tty_struct 1059 + usb/typec driver-api/usb/typec 1060 + usb/usb3-debug-port driver-api/usb/usb3-debug-port 1061 + userspace-api/media/drivers/st-vgxy61 userspace-api/media/drivers/vgxy61 1062 + userspace-api/media/v4l/pixfmt-meta-d4xx userspace-api/media/v4l/metafmt-d4xx 1063 + userspace-api/media/v4l/pixfmt-meta-intel-ipu3 userspace-api/media/v4l/metafmt-intel-ipu3 1064 + userspace-api/media/v4l/pixfmt-meta-rkisp1 userspace-api/media/v4l/metafmt-rkisp1 1065 + userspace-api/media/v4l/pixfmt-meta-uvc userspace-api/media/v4l/metafmt-uvc 1066 + userspace-api/media/v4l/pixfmt-meta-vivid userspace-api/media/v4l/metafmt-vivid 1067 + userspace-api/media/v4l/pixfmt-meta-vsp1-hgo userspace-api/media/v4l/metafmt-vsp1-hgo 1068 + userspace-api/media/v4l/pixfmt-meta-vsp1-hgt userspace-api/media/v4l/metafmt-vsp1-hgt 1069 + virt/coco/sevguest virt/coco/sev-guest 1070 + virt/kvm/amd-memory-encryption virt/kvm/x86/amd-memory-encryption 1071 + virt/kvm/arm/psci virt/kvm/arm/fw-pseudo-registers 1072 + virt/kvm/cpuid virt/kvm/x86/cpuid 1073 + virt/kvm/hypercalls virt/kvm/x86/hypercalls 1074 + virt/kvm/mmu virt/kvm/x86/mmu 1075 + 
virt/kvm/msr virt/kvm/x86/msr 1076 + virt/kvm/nested-vmx virt/kvm/x86/nested-vmx 1077 + virt/kvm/running-nested-guests virt/kvm/x86/running-nested-guests 1078 + virt/kvm/s390-diag virt/kvm/s390/s390-diag 1079 + virt/kvm/s390-pv virt/kvm/s390/s390-pv 1080 + virt/kvm/s390-pv-boot virt/kvm/s390/s390-pv-boot 1081 + virt/kvm/timekeeping virt/kvm/x86/timekeeping 1082 + virt/kvm/x86/halt-polling virt/kvm/halt-polling 1083 + virtual/index virt/index 1084 + virtual/kvm/amd-memory-encryption virt/kvm/x86/amd-memory-encryption 1085 + virtual/kvm/cpuid virt/kvm/x86/cpuid 1086 + virtual/kvm/index virt/kvm/index 1087 + virtual/kvm/vcpu-requests virt/kvm/vcpu-requests 1088 + virtual/paravirt_ops virt/paravirt_ops 1089 + vm/active_mm mm/active_mm 1090 + vm/arch_pgtable_helpers mm/arch_pgtable_helpers 1091 + vm/balance mm/balance 1092 + vm/bootmem mm/bootmem 1093 + vm/damon/api mm/damon/api 1094 + vm/damon/design mm/damon/design 1095 + vm/damon/faq mm/damon/faq 1096 + vm/damon/index mm/damon/index 1097 + vm/free_page_reporting mm/free_page_reporting 1098 + vm/highmem mm/highmem 1099 + vm/hmm mm/hmm 1100 + vm/hugetlbfs_reserv mm/hugetlbfs_reserv 1101 + vm/hugetlbpage admin-guide/mm/hugetlbpage 1102 + vm/hwpoison mm/hwpoison 1103 + vm/idle_page_tracking admin-guide/mm/idle_page_tracking 1104 + vm/index mm/index 1105 + vm/ksm mm/ksm 1106 + vm/memory-model mm/memory-model 1107 + vm/mmu_notifier mm/mmu_notifier 1108 + vm/numa mm/numa 1109 + vm/numa_memory_policy admin-guide/mm/numa_memory_policy 1110 + vm/oom mm/oom 1111 + vm/overcommit-accounting mm/overcommit-accounting 1112 + vm/page_allocation mm/page_allocation 1113 + vm/page_cache mm/page_cache 1114 + vm/page_frags mm/page_frags 1115 + vm/page_migration mm/page_migration 1116 + vm/page_owner mm/page_owner 1117 + vm/page_reclaim mm/page_reclaim 1118 + vm/page_table_check mm/page_table_check 1119 + vm/page_tables mm/page_tables 1120 + vm/pagemap admin-guide/mm/pagemap 1121 + vm/physical_memory mm/physical_memory 1122 + 
vm/process_addrs mm/process_addrs 1123 + vm/remap_file_pages mm/remap_file_pages 1124 + vm/shmfs mm/shmfs 1125 + vm/slab mm/slab 1126 + vm/slub admin-guide/mm/slab 1127 + vm/soft-dirty admin-guide/mm/soft-dirty 1128 + vm/split_page_table_lock mm/split_page_table_lock 1129 + vm/swap mm/swap 1130 + vm/swap_numa admin-guide/mm/swap_numa 1131 + vm/transhuge mm/transhuge 1132 + vm/unevictable-lru mm/unevictable-lru 1133 + vm/userfaultfd admin-guide/mm/userfaultfd 1134 + vm/vmalloc mm/vmalloc 1135 + vm/vmalloced-kernel-stacks mm/vmalloced-kernel-stacks 1136 + vm/vmemmap_dedup mm/vmemmap_dedup 1137 + vm/zsmalloc mm/zsmalloc 1138 + vm/zswap admin-guide/mm/zswap 1139 + watch_queue core-api/watch_queue 1140 + x86/amd-memory-encryption arch/x86/amd-memory-encryption 1141 + x86/amd_hsmp arch/x86/amd_hsmp 1142 + x86/boot arch/x86/boot 1143 + x86/booting-dt arch/x86/booting-dt 1144 + x86/buslock arch/x86/buslock 1145 + x86/cpuinfo arch/x86/cpuinfo 1146 + x86/earlyprintk arch/x86/earlyprintk 1147 + x86/elf_auxvec arch/x86/elf_auxvec 1148 + x86/entry_64 arch/x86/entry_64 1149 + x86/exception-tables arch/x86/exception-tables 1150 + x86/features arch/x86/features 1151 + x86/i386/IO-APIC arch/x86/i386/IO-APIC 1152 + x86/i386/index arch/x86/i386/index 1153 + x86/ifs arch/x86/ifs 1154 + x86/index arch/x86/index 1155 + x86/intel-hfi arch/x86/intel-hfi 1156 + x86/intel_txt arch/x86/intel_txt 1157 + x86/iommu arch/x86/iommu 1158 + x86/kernel-stacks arch/x86/kernel-stacks 1159 + x86/mds arch/x86/mds 1160 + x86/microcode arch/x86/microcode 1161 + x86/mtrr arch/x86/mtrr 1162 + x86/orc-unwinder arch/x86/orc-unwinder 1163 + x86/pat arch/x86/pat 1164 + x86/protection-keys core-api/protection-keys 1165 + x86/pti arch/x86/pti 1166 + x86/resctrl filesystems/resctrl 1167 + x86/resctrl_ui filesystems/resctrl 1168 + x86/sgx arch/x86/sgx 1169 + x86/sva arch/x86/sva 1170 + x86/tdx arch/x86/tdx 1171 + x86/tlb arch/x86/tlb 1172 + x86/topology arch/x86/topology 1173 + x86/tsx_async_abort 
arch/x86/tsx_async_abort 1174 + x86/usb-legacy-support arch/x86/usb-legacy-support 1175 + x86/x86_64/5level-paging arch/x86/x86_64/5level-paging 1176 + x86/x86_64/cpu-hotplug-spec arch/x86/x86_64/cpu-hotplug-spec 1177 + x86/x86_64/fake-numa-for-cpusets arch/x86/x86_64/fake-numa-for-cpusets 1178 + x86/x86_64/fsgs arch/x86/x86_64/fsgs 1179 + x86/x86_64/index arch/x86/x86_64/index 1180 + x86/x86_64/machinecheck arch/x86/x86_64/machinecheck 1181 + x86/x86_64/mm arch/x86/x86_64/mm 1182 + x86/x86_64/uefi arch/x86/x86_64/uefi 1183 + x86/xstate arch/x86/xstate 1184 + x86/zero-page arch/x86/zero-page 1185 + xilinx/eemi driver-api/xilinx/eemi 1186 + xilinx/index driver-api/xilinx/index 1187 + xtensa/atomctl arch/xtensa/atomctl 1188 + xtensa/booting arch/xtensa/booting 1189 + xtensa/features arch/xtensa/features 1190 + xtensa/index arch/xtensa/index 1191 + xtensa/mmu arch/xtensa/mmu
+7 -4
Documentation/Makefile
···
60 60 endif #HAVE_LATEXMK
61 61
62 62 # Internal variables.
63 - PAPEROPT_a4 = -D latex_paper_size=a4
64 - PAPEROPT_letter = -D latex_paper_size=letter
63 + PAPEROPT_a4 = -D latex_elements.papersize=a4paper
64 + PAPEROPT_letter = -D latex_elements.papersize=letterpaper
65 65 ALLSPHINXOPTS = -D kerneldoc_srctree=$(srctree) -D kerneldoc_bin=$(KERNELDOC)
66 66 ALLSPHINXOPTS += $(PAPEROPT_$(PAPER)) $(SPHINXOPTS)
67 67 ifneq ($(wildcard $(srctree)/.config),)
···
87 87 PYTHONPYCACHEPREFIX ?= $(abspath $(BUILDDIR)/__pycache__)
88 88
89 89 quiet_cmd_sphinx = SPHINX $@ --> file://$(abspath $(BUILDDIR)/$3/$4)
90 - cmd_sphinx = $(MAKE) BUILDDIR=$(abspath $(BUILDDIR)) $(build)=Documentation/userspace-api/media $2 && \
90 + cmd_sphinx = \
91 91 PYTHONPYCACHEPREFIX="$(PYTHONPYCACHEPREFIX)" \
92 92 BUILDDIR=$(abspath $(BUILDDIR)) SPHINX_CONF=$(abspath $(src)/$5/$(SPHINX_CONF)) \
93 93 $(PYTHON3) $(srctree)/scripts/jobserver-exec \
···
107 107 htmldocs:
108 108 @$(srctree)/scripts/sphinx-pre-install --version-check
109 109 @+$(foreach var,$(SPHINXDIRS),$(call loop_cmd,sphinx,html,$(var),,$(var)))
110 +
111 + htmldocs-redirects: $(srctree)/Documentation/.renames.txt
112 + @tools/docs/gen-redirects.py --output $(BUILDDIR) < $<
110 113
111 114 # If Rust support is available and .config exists, add rustdoc generated contents.
112 115 # If there are any, the errors from this make rustdoc will be displayed but
···
174 171
175 172 cleandocs:
176 173 $(Q)rm -rf $(BUILDDIR)
177 - $(Q)$(MAKE) BUILDDIR=$(abspath $(BUILDDIR)) $(build)=Documentation/userspace-api/media clean
178 174
179 175 dochelp:
180 176 @echo ' Linux kernel internal documentation in different formats from ReST:'
181 177 @echo ' htmldocs - HTML'
178 + @echo ' htmldocs-redirects - generate HTML redirects for moved pages'
182 179 @echo ' texinfodocs - Texinfo'
183 180 @echo ' infodocs - Info'
184 181 @echo ' latexdocs - LaTeX'
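The new `htmldocs-redirects` target above pipes the old-path/new-path pairs from Documentation/.renames.txt (the long list earlier in this diff) into a redirect generator. The actual `tools/docs/gen-redirects.py` is not reproduced in this diff; as a rough sketch of the idea only — the function name, HTML template, and file layout here are assumptions, not the kernel tool's real behavior — such a generator can emit one meta-refresh stub per moved page:

```python
import os

# Minimal sketch (not the kernel's gen-redirects.py): turn "old new" path
# pairs into HTML stubs that redirect the old URL to the new one.
TEMPLATE = """<!DOCTYPE html>
<html>
<head>
<meta http-equiv="refresh" content="0; url={target}">
<link rel="canonical" href="{target}">
</head>
<body><p>This page has moved to <a href="{target}">{target}</a>.</p></body>
</html>
"""

def gen_redirects(renames_text, outdir):
    """For each "old new" pair, write outdir/<old>.html pointing at <new>.html."""
    written = []
    for line in renames_text.splitlines():
        fields = line.split()
        if len(fields) != 2:
            continue  # skip blank or malformed lines
        old, new = fields
        path = os.path.join(outdir, old + ".html")
        os.makedirs(os.path.dirname(path) or outdir, exist_ok=True)
        # The redirect target is resolved relative to the old page's directory.
        rel = os.path.relpath(new + ".html", os.path.dirname(old) or ".")
        with open(path, "w") as f:
            f.write(TEMPLATE.format(target=rel))
        written.append(path)
    return written
```

For instance, feeding it the pair `vm/slab mm/slab` writes `vm/slab.html` under the output directory, redirecting to `../mm/slab.html`.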
+2 -2
Documentation/PCI/endpoint/pci-endpoint-cfs.rst
···
86 86 be created by the user to represent the virtual functions that are bound to
87 87 the physical function. In the above directory structure <EPF Device 11> is a
88 88 physical function and <EPF Device 31> is a virtual function. An EPF device once
89 - it's linked to another EPF device, cannot be linked to a EPC device.
89 + it's linked to another EPF device, cannot be linked to an EPC device.
90 90
91 91 EPC Device
92 92 ==========
···
108 108 The <EPC Device> directory will have a list of symbolic links to
109 109 <EPF Device>. These symbolic links should be created by the user to
110 110 represent the functions present in the endpoint device. Only <EPF Device>
111 - that represents a physical function can be linked to a EPC device.
111 + that represents a physical function can be linked to an EPC device.
112 112
113 113 The <EPC Device> directory will also have a *start* field. Once
114 114 "1" is written to this field, the endpoint device will be ready to
+3 -3
Documentation/PCI/endpoint/pci-endpoint.rst
···
197 197 * pci_epf_register_driver()
198 198
199 199 The PCI Endpoint Function driver should implement the following ops:
200 - * bind: ops to perform when a EPC device has been bound to EPF device
201 - * unbind: ops to perform when a binding has been lost between a EPC
200 + * bind: ops to perform when an EPC device has been bound to EPF device
201 + * unbind: ops to perform when a binding has been lost between an EPC
202 202 device and EPF device
203 203 * add_cfs: optional ops to create function specific configfs
204 204 attributes
···
251 251 * pci_epf_bind()
252 252
253 253 pci_epf_bind() should be invoked when the EPF device has been bound to
254 - a EPC device.
254 + an EPC device.
255 255
256 256 * pci_epf_unbind()
257 257
+1 -1
Documentation/RCU/lockdep.rst
···
106 106 Like rcu_dereference(), when lockdep is enabled, RCU list and hlist
107 107 traversal primitives check for being called from within an RCU read-side
108 108 critical section. However, a lockdep expression can be passed to them
109 - as a additional optional argument. With this lockdep expression, these
109 + as an additional optional argument. With this lockdep expression, these
110 110 traversal primitives will complain only if the lockdep expression is
111 111 false and they are called from outside any RCU read-side critical section.
112 112
+1 -1
Documentation/RCU/stallwarn.rst
···
119 119 uncommon in large datacenter. In one memorable case some decades
120 120 back, a CPU failed in a running system, becoming unresponsive,
121 121 but not causing an immediate crash. This resulted in a series
122 - of RCU CPU stall warnings, eventually leading the realization
122 + of RCU CPU stall warnings, eventually leading to the realization
123 123 that the CPU had failed.
124 124
125 125 The RCU, RCU-sched, RCU-tasks, and RCU-tasks-trace implementations have
+1 -1
Documentation/admin-guide/LSM/SafeSetID.rst
···
41 41 services without having to give out CAP_SETUID all over the place just so that
42 42 non-root programs can drop to even-lesser-privileged uids. This is especially
43 43 relevant when one non-root daemon on the system should be allowed to spawn other
44 - processes as different uids, but its undesirable to give the daemon a
44 + processes as different uids, but it's undesirable to give the daemon a
45 45 basically-root-equivalent CAP_SETUID.
46 46
47 47
+1 -1
Documentation/admin-guide/RAS/main.rst
···
253 253 Some architectures have ECC detectors for L1, L2 and L3 caches,
254 254 along with DMA engines, fabric switches, main data path switches,
255 255 interconnections, and various other hardware data paths. If the hardware
256 - reports it, then a edac_device device probably can be constructed to
256 + reports it, then an edac_device device probably can be constructed to
257 257 harvest and present that to userspace.
258 258
259 259
+3 -3
Documentation/admin-guide/aoe/udev.txt
···
2 2 # They may be installed along the following lines. Check the section
3 3 # 8 udev manpage to see whether your udev supports SUBSYSTEM, and
4 4 # whether it uses one or two equal signs for SUBSYSTEM and KERNEL.
5 - #
5 + #
6 6 # ecashin@makki ~$ su
7 7 # Password:
8 8 # bash# find /etc -type f -name udev.conf
···
13 13 # 10-wacom.rules 50-udev.rules
14 14 # bash# cp /path/to/linux/Documentation/admin-guide/aoe/udev.txt \
15 15 # /etc/udev/rules.d/60-aoe.rules
16 - #
16 + #
17 17
18 18 # aoe char devices
19 19 SUBSYSTEM=="aoe", KERNEL=="discover", NAME="etherd/%k", GROUP="disk", MODE="0220"
···
22 22 SUBSYSTEM=="aoe", KERNEL=="revalidate", NAME="etherd/%k", GROUP="disk", MODE="0220"
23 23 SUBSYSTEM=="aoe", KERNEL=="flush", NAME="etherd/%k", GROUP="disk", MODE="0220"
24 24
25 - # aoe block devices
25 + # aoe block devices
26 26 KERNEL=="etherd*", GROUP="disk"
+1 -1
Documentation/admin-guide/blockdev/paride.rst
···
118 118 ================ ============ ========
119 119
120 120 All parports and all protocol drivers are probed automatically unless probe=0
121 - parameter is used. So just "modprobe epat" is enough for a Imation SuperDisk
121 + parameter is used. So just "modprobe epat" is enough for an Imation SuperDisk
122 122 drive to work.
123 123
124 124 Manual device creation::
+1 -1
Documentation/admin-guide/device-mapper/vdo-design.rst
···
600 600 All storage within vdo is managed as 4KB blocks, but it can accept writes
601 601 as small as 512 bytes. Processing a write that is smaller than 4K requires
602 602 a read-modify-write operation that reads the relevant 4K block, copies the
603 - new data over the approriate sectors of the block, and then launches a
603 + new data over the appropriate sectors of the block, and then launches a
604 604 write operation for the modified data block. The read and write stages of
605 605 this operation are nearly identical to the normal read and write
606 606 operations, and a single data_vio is used throughout this operation.
+1 -1
Documentation/admin-guide/ext4.rst
···
398 398 * writeback mode
399 399
400 400 In data=writeback mode, ext4 does not journal data at all. This mode provides
401 - a similar level of journaling as that of XFS, JFS, and ReiserFS in its default
401 + a similar level of journaling as that of XFS and JFS in its default
402 402 mode - metadata journaling. A crash+recovery can cause incorrect data to
403 403 appear in files which were written shortly before the crash. This mode will
404 404 typically provide the best ext4 performance.
+1 -1
Documentation/admin-guide/hw-vuln/mds.rst
···
214 214 command line with the 'ring3mwait=disable' command line option.
215 215
216 216 XEON PHI is not affected by the other MDS variants and MSBDS is mitigated
217 - before the CPU enters a idle state. As XEON PHI is not affected by L1TF
217 + before the CPU enters an idle state. As XEON PHI is not affected by L1TF
218 218 either disabling SMT is not required for full protection.
219 219
220 220 .. _mds_smt_control:
+3 -3
Documentation/admin-guide/hw-vuln/spectre.rst
···
664 664
665 665 .. _spec_ref1:
666 666
667 - [1] `Intel analysis of speculative execution side channels <https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/Intel-Analysis-of-Speculative-Execution-Side-Channels.pdf>`_.
667 + [1] `Intel analysis of speculative execution side channels <https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/analysis-of-speculative-execution-side-channels-white-paper.pdf>`_.
668 668
669 669 .. _spec_ref2:
670 670
···
682 682
683 683 .. _spec_ref5:
684 684
685 - [5] `AMD64 technology indirect branch control extension <https://developer.amd.com/wp-content/resources/Architecture_Guidelines_Update_Indirect_Branch_Control.pdf>`_.
685 + [5] `AMD64 technology indirect branch control extension <https://www.amd.com/content/dam/amd/en/documents/processor-tech-docs/white-papers/111006-architecture-guidelines-update-amd64-technology-indirect-branch-control-extension.pdf>`_.
686 686
···
708 708
709 709 .. _spec_ref10:
710 710
711 - [10] `MIPS: response on speculative execution and side channel vulnerabilities <https://www.mips.com/blog/mips-response-on-speculative-execution-and-side-channel-vulnerabilities/>`_.
711 + [10] `MIPS: response on speculative execution and side channel vulnerabilities <https://web.archive.org/web/20220512003005if_/https://www.mips.com/blog/mips-response-on-speculative-execution-and-side-channel-vulnerabilities/>`_.
712 712
713 713 Academic papers:
714 714
+1 -1
Documentation/admin-guide/kdump/kdump.rst
···
471 471 performance degradation. To enable multi-cpu support, you should bring up an
472 472 SMP dump-capture kernel and specify maxcpus/nr_cpus options while loading it.
473 473
474 - * For s390x there are two kdump modes: If a ELF header is specified with
474 + * For s390x there are two kdump modes: If an ELF header is specified with
475 475 the elfcorehdr= kernel parameter, it is used by the kdump kernel as it
476 476 is done on all other architectures. If no elfcorehdr= kernel parameter is
477 477 specified, the s390x kdump kernel dynamically creates the header. The
+3 -1
Documentation/admin-guide/kernel-parameters.rst
···
1 + .. SPDX-License-Identifier: GPL-2.0
2 +
1 3 .. _kernelparameters:
2 4
3 5 The kernel's command-line parameters
···
215 213 There are also arch-specific kernel-parameters not documented here.
216 214
217 215 Note that ALL kernel parameters listed below are CASE SENSITIVE, and that
218 - a trailing = on the name of any parameter states that that parameter will
216 + a trailing = on the name of any parameter states that the parameter will
219 217 be entered as an environment variable, whereas its absence indicates that
220 218 it will appear as a kernel argument readable via /proc/cmdline by programs
221 219 running once the system is up.
+6 -6
Documentation/admin-guide/kernel-parameters.txt
···
3705 3705 looking for corruption. Enabling this will
3706 3706 both detect corruption and prevent the kernel
3707 3707 from using the memory being corrupted.
3708 - However, its intended as a diagnostic tool; if
3708 + However, it's intended as a diagnostic tool; if
3709 3709 repeatable BIOS-originated corruption always
3710 3710 affects the same memory, you can use memmap=
3711 3711 to prevent the kernel from using that memory.
···
7400 7400 (converted into nanoseconds). Fast, but
7401 7401 depending on the architecture, may not be
7402 7402 in sync between CPUs.
7403 - global - Event time stamps are synchronize across
7403 + global - Event time stamps are synchronized across
7404 7404 CPUs. May be slower than the local clock,
7405 7405 but better for some race conditions.
7406 7406 counter - Simple counting of events (1, 2, ..)
···
7520 7520 section.
7521 7521
7522 7522 trace_trigger=[trigger-list]
7523 - [FTRACE] Add a event trigger on specific events.
7523 + [FTRACE] Add an event trigger on specific events.
7524 7524 Set a trigger on top of a specific event, with an optional
7525 7525 filter.
7526 7526
7527 - The format is is "trace_trigger=<event>.<trigger>[ if <filter>],..."
7528 - Where more than one trigger may be specified that are comma deliminated.
7527 + The format is "trace_trigger=<event>.<trigger>[ if <filter>],..."
7528 + Where more than one trigger may be specified that are comma delimited.
7529 7529
7530 7530 For example:
7531 7531
···
7533 7533
7534 7534 The above will enable the "stacktrace" trigger on the "sched_switch"
7535 7535 event but only trigger it if the "prev_state" of the "sched_switch"
7536 - event is "2" (TASK_UNINTERUPTIBLE).
7536 + event is "2" (TASK_UNINTERRUPTIBLE).
7537 7537
7538 7538 See also "Event triggers" in Documentation/trace/events.rst
7539 7539
+4 -4
Documentation/admin-guide/laptops/laptop-mode.rst
···
61 61 Check your drive's rating, and don't wear down your drive's lifetime if you
62 62 don't need to.
63 63
64 - * If you mount some of your ext3/reiserfs filesystems with the -n option, then
64 + * If you mount some of your ext3 filesystems with the -n option, then
65 65 the control script will not be able to remount them correctly. You must set
66 66 DO_REMOUNTS=0 in the control script, otherwise it will remount them with the
67 67 wrong options -- or it will fail because it cannot write to /etc/mtab.
···
96 96 dirtied are not forced to be written to disk as often. The control script also
97 97 changes the dirty background ratio, so that background writeback of dirty pages
98 98 is not done anymore. Combined with a higher commit value (also 10 minutes) for
99 - ext3 or ReiserFS filesystems (also done automatically by the control script),
99 + ext3 filesystem (also done automatically by the control script),
100 100 this results in concentration of disk activity in a small time interval which
101 101 occurs only once every 10 minutes, or whenever the disk is forced to spin up by
102 102 a cache miss. The disk can then be spun down in the periods of inactivity.
···
587 587 FST=$(deduce_fstype $MP)
588 588 fi
589 589 case "$FST" in
590 - "ext3"|"reiserfs")
590 + "ext3")
591 591 PARSEDOPTS="$(parse_mount_opts commit "$OPTS")"
592 592 mount $DEV -t $FST $MP -o remount,$PARSEDOPTS,commit=$MAX_AGE$NOATIME_OPT
593 593 ;;
···
647 647 FST=$(deduce_fstype $MP)
648 648 fi
649 649 case "$FST" in
650 - "ext3"|"reiserfs")
650 + "ext3")
651 651 PARSEDOPTS="$(parse_mount_opts_wfstab $DEV commit $OPTS)"
652 652 PARSEDOPTS="$(parse_yesno_opts_wfstab $DEV atime atime $PARSEDOPTS)"
653 653 mount $DEV -t $FST $MP -o remount,$PARSEDOPTS
+1 -1
Documentation/admin-guide/laptops/sonypi.rst
···
25 25 (when available)
26 26
27 27 Those events (see linux/sonypi.h) can be polled using the character device node
28 - /dev/sonypi (major 10, minor auto allocated or specified as a option).
28 + /dev/sonypi (major 10, minor auto allocated or specified as an option).
29 29 A simple daemon which translates the jogdial movements into mouse wheel events
30 30 can be downloaded at: <http://popies.net/sonypi/>
31 31
+1 -1
Documentation/admin-guide/md.rst
···
794 794
795 795 journal_mode (currently raid5 only)
796 796 The cache mode for raid5. raid5 could include an extra disk for
797 - caching. The mode can be "write-throuth" and "write-back". The
797 + caching. The mode can be "write-through" or "write-back". The
798 798 default is "write-through".
799 799
800 800 ppl_write_hint
+1 -1
Documentation/admin-guide/media/imx.rst
···
96 96 motion compensation modes: low, medium, and high motion. Pipelines are
97 97 defined that allow sending frames to the VDIC subdev directly from the
98 98 CSI. There is also support in the future for sending frames to the
99 - VDIC from memory buffers via a output/mem2mem devices.
99 + VDIC from memory buffers via output/mem2mem devices.
100 100
101 101 - Includes a Frame Interval Monitor (FIM) that can correct vertical sync
102 102 problems with the ADV718x video decoders.
+3 -3
Documentation/admin-guide/media/si4713.rst
···
13 13 Information about the Device
14 14 ----------------------------
15 15
16 - This chip is a Silicon Labs product. It is a I2C device, currently on 0x63 address.
16 + This chip is a Silicon Labs product. It is an I2C device, currently on 0x63 address.
17 17 Basically, it has transmission and signal noise level measurement features.
18 18
19 19 The Si4713 integrates transmit functions for FM broadcast stereo transmission.
···
28 28 Device driver description
29 29 -------------------------
30 30
31 - There are two modules to handle this device. One is a I2C device driver
31 + There are two modules to handle this device. One is an I2C device driver
32 32 and the other is a platform driver.
33 33
34 34 The I2C device driver exports a v4l2-subdev interface to the kernel.
···
113 113 - acomp_attack_time - Sets the attack time for audio dynamic range control.
114 114 - acomp_release_time - Sets the release time for audio dynamic range control.
115 115
116 - * Limiter setups audio deviation limiter feature. Once a over deviation occurs,
116 + * Limiter sets up the audio deviation limiter feature. Once an over deviation occurs,
117 117 it is possible to adjust the front-end gain of the audio input and always
118 118 prevent over deviation.
119 119
+1 -1
Documentation/admin-guide/mm/damon/usage.rst
···
360 360 DAMON-based operation scheme.
361 361
362 362 Under ``quotas`` directory, four files (``ms``, ``bytes``,
363 - ``reset_interval_ms``, ``effective_bytes``) and two directores (``weights`` and
363 + ``reset_interval_ms``, ``effective_bytes``) and two directories (``weights`` and
364 364 ``goals``) exist.
365 365
366 366 You can set the ``time quota`` in milliseconds, ``size quota`` in bytes, and
+1 -1
Documentation/admin-guide/nfs/nfsroot.rst
···
342 342 When using pxelinux, the kernel image is specified using
343 343 "kernel <relative-path-below /tftpboot>". The nfsroot parameters
344 344 are passed to the kernel by adding them to the "append" line.
345 - It is common to use serial console in conjunction with pxeliunx,
345 + It is common to use serial console in conjunction with pxelinux,
346 346 see Documentation/admin-guide/serial-console.rst for more information.
347 347
348 348 For more information on isolinux, including how to create bootdisks
+2 -2
Documentation/admin-guide/perf/hisi-pmu.rst
···
110 110 - 2'b11: count the events which sent to the uring_ext (MATA) channel;
111 111 - 2'b01: is the same as 2'b11;
112 112 - 2'b10: count the events which sent to the uring (non-MATA) channel;
113 - - 2'b00: default value, count the events which sent to the both uring and
114 - uring_ext channel;
113 + - 2'b00: default value, count the events which sent to both uring and
114 + uring_ext channels;
115 115
116 116 6. ch: NoC PMU supports filtering the event counts of certain transaction
117 117 channel with this option. The current supported channels are as follows:
+2 -2
Documentation/admin-guide/quickly-build-trimmed-linux.rst
···
273 273 does nothing at all; in that case you have to manually install your kernel,
274 274 as outlined in the reference section.
275 275
276 - If you are running a immutable Linux distribution, check its documentation
276 + If you are running an immutable Linux distribution, check its documentation
277 277 and the web to find out how to install your own kernel there.
278 278
279 279 [:ref:`details<install>`]
···
884 884 setup that often can be fixed quickly; other times though the problem lies in
885 885 the code and can only be fixed by a developer. A close examination of the
886 886 failure messages coupled with some research on the internet will often tell you
887 - which of the two it is. To perform such a investigation, restart the build
887 + which of the two it is. To perform such an investigation, restart the build
888 888 process like this::
889 889
890 890 make V=1
+2 -2
Documentation/admin-guide/reporting-issues.rst
···
611 611
612 612 How to read the MAINTAINERS file
613 613 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
614 - To illustrate how to use the :ref:`MAINTAINERS <maintainers>` file, lets assume
614 + To illustrate how to use the :ref:`MAINTAINERS <maintainers>` file, let's assume
615 615 the WiFi in your Laptop suddenly misbehaves after updating the kernel. In that
616 616 case it's likely an issue in the WiFi driver. Obviously it could also be some
617 617 code it builds upon, but unless you suspect something like that stick to the
···
1543 1543
1544 1544 And note, it helps developers a great deal if you can specify the exact version
1545 1545 that introduced the problem. Hence if possible within a reasonable time frame,
1546 - try to find that version using vanilla kernels. Lets assume something broke when
1546 + try to find that version using vanilla kernels. Let's assume something broke when
1547 1547 your distributor released a update from Linux kernel 5.10.5 to 5.10.8. Then as
1548 1548 instructed above go and check the latest kernel from that version line, say
1549 1549 5.10.9. If it shows the problem, try a vanilla 5.10.5 to ensure that no patches
+2 -2
Documentation/admin-guide/sysctl/fs.rst
··· 164 164 -------------------- 165 165 166 166 Maximum total number of pages a non-privileged user may allocate for pipes 167 - before the pipe size gets limited to a single page. Once this limit is reached, 168 - new pipes will be limited to a single page in size for this user in order to 167 + before the pipe size gets limited to two pages. Once this limit is reached, 168 + new pipes will be limited to two pages in size for this user in order to 169 169 limit total memory usage, and trying to increase them using ``fcntl()`` will be 170 170 denied until usage goes below the limit again. The default value allows 171 171 allocating up to 1024 pipes at their default size. When set to 0, no limit is
+12 -6
Documentation/admin-guide/sysctl/index.rst
··· 66 66 67 67 =============== =============================================================== 68 68 abi/ execution domains & personalities 69 - debug/ <empty> 70 - dev/ device specific information (eg dev/cdrom/info) 69 + <$ARCH> tuning controls for various CPU architectures (e.g. csky, s390) 70 + crypto/ <undocumented> 71 + debug/ <undocumented> 72 + dev/ device specific information (e.g. dev/cdrom/info) 71 73 fs/ specific filesystems 72 74 filehandle, inode, dentry and quota tuning 73 75 binfmt_misc <Documentation/admin-guide/binfmt-misc.rst> 74 76 kernel/ global kernel info / tuning 75 77 miscellaneous stuff 78 + some architecture-specific controls 79 + security (LSM) stuff 76 80 net/ networking stuff, for documentation look in: 77 81 <Documentation/networking/> 78 82 proc/ <empty> 79 83 sunrpc/ SUN Remote Procedure Call (NFS) 84 + user/ Per user namespace limits 80 85 vm/ memory management tuning 81 86 buffer and cache management 82 - user/ Per user per user namespace limits 87 + xen/ <undocumented> 83 88 =============== =============================================================== 84 89 85 - These are the subdirs I have on my system. There might be more 86 - or other subdirs in another setup. If you see another dir, I'd 87 - really like to hear about it :-) 90 + These are the subdirs I have on my system or that have been discovered by 91 + searching through the source code. There might be more or other subdirs 92 + in another setup. If you see another dir, I'd really like to hear about 93 + it :-) 88 94 89 95 .. toctree:: 90 96 :maxdepth: 1
+1 -1
Documentation/admin-guide/verify-bugs-and-bisect-regressions.rst
··· 1757 1757 to your bootloader's configuration. 1758 1758 1759 1759 You have to take care of some or all of the tasks yourself, if your 1760 - distribution lacks a installkernel script or does only handle part of them. 1760 + distribution lacks an installkernel script or only handles some of them. 1761 1761 Consult the distribution's documentation for details. If in doubt, install the 1762 1762 kernel manually:: 1763 1763
+1 -1
Documentation/arch/arm/stm32/stm32f746-overview.rst
··· 15 15 - SD/MMC/SDIO support 16 16 - Ethernet controller 17 17 - USB OTG FS & HS controllers 18 - - I2C, SPI, CAN busses support 18 + - I2C, SPI, CAN buses support 19 19 - Several 16 & 32 bits general purpose timers 20 20 - Serial Audio interface 21 21 - LCD controller
+1 -1
Documentation/arch/arm/stm32/stm32f769-overview.rst
··· 15 15 - SD/MMC/SDIO support*2 16 16 - Ethernet controller 17 17 - USB OTG FS & HS controllers 18 - - I2C*4, SPI*6, CAN*3 busses support 18 + - I2C*4, SPI*6, CAN*3 buses support 19 19 - Several 16 & 32 bits general purpose timers 20 20 - Serial Audio interface*2 21 21 - LCD controller
+1 -1
Documentation/arch/arm/stm32/stm32h743-overview.rst
··· 15 15 - SD/MMC/SDIO support 16 16 - Ethernet controller 17 17 - USB OTG FS & HS controllers 18 - - I2C, SPI, CAN busses support 18 + - I2C, SPI, CAN buses support 19 19 - Several 16 & 32 bits general purpose timers 20 20 - Serial Audio interface 21 21 - LCD controller
+1 -1
Documentation/arch/arm/stm32/stm32h750-overview.rst
··· 15 15 - SD/MMC/SDIO support 16 16 - Ethernet controller 17 17 - USB OTG FS & HS controllers 18 - - I2C, SPI, CAN busses support 18 + - I2C, SPI, CAN buses support 19 19 - Several 16 & 32 bits general purpose timers 20 20 - Serial Audio interface 21 21 - LCD controller
+1 -1
Documentation/arch/arm/stm32/stm32mp13-overview.rst
··· 24 24 - ADC/DAC 25 25 - USB EHCI/OHCI controllers 26 26 - USB OTG 27 - - I2C, SPI, CAN busses support 27 + - I2C, SPI, CAN buses support 28 28 - Several general purpose timers 29 29 - Serial Audio interface 30 30 - LCD controller
+1 -1
Documentation/arch/arm/stm32/stm32mp151-overview.rst
··· 23 23 - ADC/DAC 24 24 - USB EHCI/OHCI controllers 25 25 - USB OTG 26 - - I2C, SPI busses support 26 + - I2C, SPI buses support 27 27 - Several general purpose timers 28 28 - Serial Audio interface 29 29 - LCD-TFT controller
+2 -2
Documentation/arch/loongarch/irq-chip-model.rst
··· 139 139 indicates that CPU Interrupt Pin selection can be normal method rather than 140 140 bitmap method, so interrupt can be routed to IP0 - IP15. 141 141 142 - Feature EXTIOI_HAS_CPU_ENCODE is entension of V-EIOINTC. If it is 1, it 142 + Feature EXTIOI_HAS_CPU_ENCODE is an extension of V-EIOINTC. If it is 1, it 143 143 indicates that CPU selection can be normal method rather than bitmap method, 144 144 so interrupt can be routed to CPU0 - CPU255. 145 145 146 146 EXTIOI_VIRT_CONFIG 147 147 ------------------ 148 - This register is read-write register, for compatibility intterupt routed uses 148 + This register is a read-write register; for compatibility, interrupt routing uses 149 149 the default method, which is the same as standard EIOINTC. If the bit is set 150 150 to 1, it indicates HW uses the normal method rather than the bitmap method. 151 151
-1
Documentation/arch/powerpc/eeh-pci-error-recovery.rst
··· 315 315 ideally, the reset should happen at or below the block layer, 316 316 so that the file systems are not disturbed. 317 317 318 - Reiserfs does not tolerate errors returned from the block device. 319 318 Ext3fs seems to be tolerant, retrying reads/writes until it does 320 319 succeed. It has been only lightly tested in this scenario. 321 320
+1 -1
Documentation/arch/x86/cpuinfo.rst
··· 11 11 represents an ill-fated attempt from long time ago to put feature flags 12 12 in an easy to find place for userspace. 13 13 14 - However, the amount of feature flags is growing by the CPU generation, 14 + However, the number of feature flags is growing with each CPU generation, 15 15 leading to unparseable and unwieldy /proc/cpuinfo. 16 16 17 17 What is more, those feature flags do not even need to be in that file
+61 -45
Documentation/conf.py
··· 9 9 import shutil 10 10 import sys 11 11 12 + from textwrap import dedent 13 + 12 14 import sphinx 13 15 14 16 # If extensions (or modules to document with autodoc) are in another directory, ··· 53 51 dyn_exclude_patterns.append("devicetree/bindings/**.yaml") 54 52 dyn_exclude_patterns.append("core-api/kho/bindings/**.yaml") 55 53 56 - # Properly handle include/exclude patterns 57 - # ---------------------------------------- 54 + # Properly handle directory patterns and LaTeX docs 55 + # ------------------------------------------------- 58 56 59 - def update_patterns(app, config): 57 + def config_init(app, config): 60 58 """ 59 + Initialize path-dependent variables 60 + 61 61 On Sphinx, all directories are relative to what is passed as the 62 62 SOURCEDIR parameter for sphinx-build. Due to that, all patterns 63 63 that have directory names in them need to be dynamically set, after ··· 90 86 91 87 config.exclude_patterns.append(rel_path) 92 88 89 + # LaTeX and PDF output require a list of documents which are dependent 90 + on the app.srcdir.
Add them here 91 + 92 + # When SPHINXDIRS is used, we just need to get index.rst, if it exists 93 + if not os.path.samefile(doctree, app.srcdir): 94 + doc = os.path.basename(app.srcdir) 95 + fname = "index" 96 + if os.path.exists(os.path.join(app.srcdir, fname + ".rst")): 97 + latex_documents.append((fname, doc + ".tex", 98 + "Linux %s Documentation" % doc.capitalize(), 99 + "The kernel development community", 100 + "manual")) 101 + return 102 + 103 + # When building all docs, or when a main index.rst doesn't exist, seek 104 + # for it on subdirectories 105 + for doc in os.listdir(app.srcdir): 106 + fname = os.path.join(doc, "index") 107 + if not os.path.exists(os.path.join(app.srcdir, fname + ".rst")): 108 + continue 109 + 110 + has = False 111 + for l in latex_documents: 112 + if l[0] == fname: 113 + has = True 114 + break 115 + 116 + if not has: 117 + latex_documents.append((fname, doc + ".tex", 118 + "Linux %s Documentation" % doc.capitalize(), 119 + "The kernel development community", 120 + "manual")) 93 121 94 122 # helper 95 123 # ------ ··· 270 234 # |version| and |release|, also used in various other places throughout the 271 235 # built documents. 272 236 # 273 - # In a normal build, version and release are are set to KERNELVERSION and 237 + # In a normal build, version and release are set to KERNELVERSION and 274 238 # KERNELRELEASE, respectively, from the Makefile via Sphinx command line 275 239 # arguments. 276 240 # ··· 456 420 latex_elements = { 457 421 # The paper size ('letterpaper' or 'a4paper'). 458 422 "papersize": "a4paper", 423 + "passoptionstopackages": dedent(r""" 424 + \PassOptionsToPackage{svgnames}{xcolor} 425 + """), 459 426 # The font size ('10pt', '11pt' or '12pt'). 
460 427 "pointsize": "11pt", 428 + # Needed to generate a .ind file 429 + "printindex": r"\footnotesize\raggedright\printindex", 461 430 # Latex figure (float) alignment 462 431 # 'figure_align': 'htbp', 463 432 # Don't mangle with UTF-8 chars 433 + "fontenc": "", 464 434 "inputenc": "", 465 435 "utf8extra": "", 466 436 # Set document margins 467 - "sphinxsetup": """ 437 + "sphinxsetup": dedent(r""" 468 438 hmargin=0.5in, vmargin=1in, 469 439 parsedliteralwraps=true, 470 440 verbatimhintsturnover=false, 471 - """, 441 + """), 472 442 # 473 443 # Some of our authors are fond of deep nesting; tell latex to 474 444 # cope. ··· 482 440 "maxlistdepth": "10", 483 441 # For CJK One-half spacing, need to be in front of hyperref 484 442 "extrapackages": r"\usepackage{setspace}", 485 - # Additional stuff for the LaTeX preamble. 486 - "preamble": """ 487 - % Use some font with UTF-8 support with XeLaTeX 488 - \\usepackage{fontspec} 489 - \\setsansfont{DejaVu Sans} 490 - \\setromanfont{DejaVu Serif} 491 - \\setmonofont{DejaVu Sans Mono} 492 - """, 443 + "fontpkg": dedent(r""" 444 + \usepackage{fontspec} 445 + \setmainfont{DejaVu Serif} 446 + \setsansfont{DejaVu Sans} 447 + \setmonofont{DejaVu Sans Mono} 448 + \newfontfamily\headingfont{DejaVu Serif} 449 + """), 450 + "preamble": dedent(r""" 451 + % Load kerneldoc specific LaTeX settings 452 + \input{kerneldoc-preamble.sty} 453 + """) 493 454 } 494 455 495 - # Load kerneldoc specific LaTeX settings 496 - latex_elements["preamble"] += """ 497 - % Load kerneldoc specific LaTeX settings 498 - \\input{kerneldoc-preamble.sty} 499 - """ 500 - 501 - # Grouping the document tree into LaTeX files. List of tuples 502 - # (source start file, target name, title, 503 - # author, documentclass [howto, manual, or own class]). 
504 - # Sorted in alphabetical order 456 + # This will be filled up by config-inited event 505 457 latex_documents = [] 506 - 507 - # Add all other index files from Documentation/ subdirectories 508 - for fn in os.listdir("."): 509 - doc = os.path.join(fn, "index") 510 - if os.path.exists(doc + ".rst"): 511 - has = False 512 - for l in latex_documents: 513 - if l[0] == doc: 514 - has = True 515 - break 516 - if not has: 517 - latex_documents.append( 518 - ( 519 - doc, 520 - fn + ".tex", 521 - "Linux %s Documentation" % fn.capitalize(), 522 - "The kernel development community", 523 - "manual", 524 - ) 525 - ) 526 458 527 459 # The name of an image file (relative to this directory) to place at the top of 528 460 # the title page. ··· 593 577 def setup(app): 594 578 """Patterns need to be updated at init time on older Sphinx versions""" 595 579 596 - app.connect('config-inited', update_patterns) 580 + app.connect('config-inited', config_init)
+1 -1
Documentation/core-api/folio_queue.rst
··· 44 44 * the size of each folio and 45 45 * three 1-bit marks per folio, 46 46 47 - but hese should not be accessed directly as the underlying data structure may 47 + but these should not be accessed directly, as the underlying data structure may 48 48 change; rather, the access functions outlined below should be used. 49 49 50 50 The facility can be made accessible by::
+1
Documentation/core-api/index.rst
··· 24 24 printk-index 25 25 symbol-namespaces 26 26 asm-annotations 27 + real-time/index 27 28 28 29 Data structures and low-level utilities 29 30 =======================================
+3 -3
Documentation/core-api/irq/irq-affinity.rst
··· 9 9 10 10 /proc/irq/IRQ#/smp_affinity and /proc/irq/IRQ#/smp_affinity_list specify 11 11 which target CPUs are permitted for a given IRQ source. It's a bitmask 12 - (smp_affinity) or cpu list (smp_affinity_list) of allowed CPUs. It's not 12 + (smp_affinity) or CPU list (smp_affinity_list) of allowed CPUs. It's not 13 13 allowed to turn off all CPUs, and if an IRQ controller does not support 14 - IRQ affinity then the value will not change from the default of all cpus. 14 + IRQ affinity then the value will not change from the default of all CPUs. 15 15 16 16 /proc/irq/default_smp_affinity specifies the default affinity mask that applies 17 17 to all non-active IRQs. Once an IRQ is allocated/activated its affinity bitmask ··· 60 60 This time around IRQ44 was delivered only to the last four processors. 61 61 i.e. counters for CPUs 0-3 did not change. 62 62 63 - Here is an example of limiting that same irq (44) to cpus 1024 to 1031:: 63 + Here is an example of limiting that same IRQ (44) to CPUs 1024 to 1031:: 64 64 65 65 [root@moon 44]# echo 1024-1031 > smp_affinity_list 66 66 [root@moon 44]# cat smp_affinity_list
+19 -19
Documentation/core-api/irq/irq-domain.rst
··· 18 18 So in the past, IRQ numbers could be chosen so that they match the 19 19 hardware IRQ line into the root interrupt controller (i.e. the 20 20 component actually firing the interrupt line to the CPU). Nowadays, 21 - this number is just a number and the number loose all kind of 22 - correspondence to hardware interrupt numbers. 21 + this number is just a number and the number has no 22 + relationship to hardware interrupt numbers. 23 23 24 24 For this reason, we need a mechanism to separate controller-local 25 25 interrupt numbers, called hardware IRQs, from Linux IRQ numbers. ··· 77 77 variety of methods: 78 78 79 79 - irq_resolve_mapping() returns a pointer to the irq_desc structure 80 - for a given domain and hwirq number, and NULL if there was no 80 + for a given domain and hwirq number, or NULL if there was no 81 81 mapping. 82 82 - irq_find_mapping() returns a Linux IRQ number for a given domain and 83 - hwirq number, and 0 if there was no mapping 83 + hwirq number, or 0 if there was no mapping 84 84 - generic_handle_domain_irq() handles an interrupt described by a 85 85 domain and a hwirq number 86 86 87 - Note that irq domain lookups must happen in contexts that are 88 - compatible with a RCU read-side critical section. 87 + Note that irq_domain lookups must happen in contexts that are 88 + compatible with an RCU read-side critical section. 89 89 90 90 The irq_create_mapping() function must be called *at least once* 91 91 before any call to irq_find_mapping(), lest the descriptor will not ··· 100 100 ============================ 101 101 102 102 There are several mechanisms available for reverse mapping from hwirq 103 - to Linux irq, and each mechanism uses a different allocation function. 103 + to Linux IRQ, and each mechanism uses a different allocation function. 104 104 Which reverse map type should be used depends on the use case. 
Each 105 105 of the reverse map types is described below: 106 106 ··· 111 111 112 112 irq_domain_create_linear() 113 113 114 - The linear reverse map maintains a fixed size table indexed by the 114 + The linear reverse map maintains a fixed-size table indexed by the 115 115 hwirq number. When a hwirq is mapped, an irq_desc is allocated for 116 116 the hwirq, and the IRQ number is stored in the table. 117 117 118 118 The Linear map is a good choice when the maximum number of hwirqs is 119 119 fixed and a relatively small number (~ < 256). The advantages of this 120 - map are fixed time lookup for IRQ numbers, and irq_descs are only 120 + map are fixed-time lookup for IRQ numbers, and irq_descs are only 121 121 allocated for in-use IRQs. The disadvantage is that the table must be 122 122 as large as the largest possible hwirq number. 123 123 ··· 134 134 IRQs. When an hwirq is mapped, an irq_desc is allocated and the 135 135 hwirq is used as the lookup key for the radix tree. 136 136 137 - The tree map is a good choice if the hwirq number can be very large 137 + The Tree map is a good choice if the hwirq number can be very large 138 138 since it doesn't need to allocate a table as large as the largest 139 139 hwirq number. The disadvantage is that hwirq to IRQ number lookup is 140 140 dependent on how many entries are in the table. ··· 169 169 170 170 The Legacy mapping is a special case for drivers that already have a 171 171 range of irq_descs allocated for the hwirqs. It is used when the 172 - driver cannot be immediately converted to use the linear mapping. For 172 + driver cannot be immediately converted to use the Linear mapping. For 173 173 example, many embedded system board support files use a set of #defines 174 174 for IRQ numbers that are passed to struct device registrations.
In that 175 - case the Linux IRQ numbers cannot be dynamically assigned and the legacy 175 + case the Linux IRQ numbers cannot be dynamically assigned and the Legacy 176 176 mapping should be used. 177 177 178 178 As the name implies, the \*_legacy() functions are deprecated and only ··· 180 180 added. Same goes for the \*_simple() functions when their use results 181 181 in the legacy behaviour. 182 182 183 - The legacy map assumes a contiguous range of IRQ numbers has already 183 + The Legacy map assumes a contiguous range of IRQ numbers has already 184 184 been allocated for the controller and that the IRQ number can be 185 185 calculated by adding a fixed offset to the hwirq number, and 186 186 vice versa. The disadvantage is that it requires the interrupt 187 187 controller to manage IRQ allocations and it requires an irq_desc to be 188 188 allocated for every hwirq, even if it is unused. 189 189 190 - The legacy map should only be used if fixed IRQ mappings must be 191 - supported. For example, ISA controllers would use the legacy map for 190 + The Legacy map should only be used if fixed IRQ mappings must be 191 + supported. For example, ISA controllers would use the Legacy map for 192 192 mapping Linux IRQs 0-15 so that existing ISA drivers get the correct IRQ 193 193 numbers. 194 194 ··· 197 197 system and will otherwise use a linear domain mapping. The semantics of 198 198 this call are such that if an IRQ range is specified then descriptors 199 199 will be allocated on-the-fly for it, and if no range is specified it 200 - will fall through to irq_domain_create_linear() which means *no* irq 200 + will fall through to irq_domain_create_linear() which means *no* IRQ 201 201 descriptors will be allocated. 202 202 203 203 A typical use case for simple domains is where an irqchip provider
217 - Let's look at a typical interrupt delivering path on x86 platforms:: 217 + Let's look at a typical interrupt delivery path on x86 platforms:: 218 218 219 219 Device --> IOAPIC -> Interrupt remapping Controller -> Local APIC -> CPU 220 220 ··· 227 227 To support such a hardware topology and make software architecture match 228 228 hardware architecture, an irq_domain data structure is built for each 229 229 interrupt controller and those irq_domains are organized into a hierarchy. 230 - When building irq_domain hierarchy, the irq_domain near to the device is 231 - child and the irq_domain near to CPU is parent. So a hierarchy structure 230 + When building the irq_domain hierarchy, the irq_domain nearest the device is 231 + the child and the irq_domain nearest the CPU is the parent. So a hierarchy structure 232 232 as below will be built for the example above:: 233 233 234 234 CPU Vector irq_domain (root irq_domain to manage CPU vectors)
+1 -1
Documentation/core-api/printk-formats.rst
··· 521 521 522 522 %pfw[fP] 523 523 524 - For printing information on fwnode handles. The default is to print the full 524 + For printing information on an fwnode_handle. The default is to print the full 525 525 node name, including the path. The modifiers are functionally equivalent to 526 526 %pOF above. 527 527
+109
Documentation/core-api/real-time/architecture-porting.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ============================================= 4 + Porting an architecture to support PREEMPT_RT 5 + ============================================= 6 + 7 + :Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de> 8 + 9 + This list outlines the architecture-specific requirements that must be 10 + implemented in order to enable PREEMPT_RT. Once all required features are 11 + implemented, ARCH_SUPPORTS_RT can be selected in the architecture’s Kconfig to 12 + make PREEMPT_RT selectable. 13 + Many prerequisites (genirq support for example) are enforced by the common code 14 + and are omitted here. 15 + 16 + The optional features are not strictly required, but they are worth 17 + considering. 18 + 19 + Requirements 20 + ------------ 21 + 22 + Forced threaded interrupts 23 + CONFIG_IRQ_FORCED_THREADING must be selected. Any interrupts that must 24 + remain in hard-IRQ context must be marked with IRQF_NO_THREAD. This 25 + requirement applies for instance to clocksource event interrupts, 26 + perf interrupts and cascading interrupt-controller handlers. 27 + 28 + PREEMPTION support 29 + Kernel preemption must be supported and requires that 30 + CONFIG_ARCH_NO_PREEMPT remain unselected. Scheduling requests, such as those 31 + issued from an interrupt or other exception handler, must be processed 32 + immediately. 33 + 34 + POSIX CPU timers and KVM 35 + POSIX CPU timers must expire from thread context rather than directly within 36 + the timer interrupt. This behavior is enabled by setting the configuration 37 + option CONFIG_HAVE_POSIX_CPU_TIMERS_TASK_WORK. 38 + When KVM is enabled, CONFIG_KVM_XFER_TO_GUEST_WORK must also be set to ensure 39 + that any pending work, such as POSIX timer expiration, is handled before 40 + transitioning into guest mode. 41 + 42 + Hard-IRQ and Soft-IRQ stacks 43 + Soft interrupts are handled in the thread context in which they are raised.
If 44 + a soft interrupt is triggered from hard-IRQ context, its execution is deferred 45 + to the ksoftirqd thread. Preemption is never disabled during soft interrupt 46 + handling, which makes soft interrupts preemptible. 47 + If an architecture provides a custom __do_softirq() implementation that uses a 48 + separate stack, it must select CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK. The 49 + functionality should only be enabled when CONFIG_SOFTIRQ_ON_OWN_STACK is set. 50 + 51 + FPU and SIMD access in kernel mode 52 + FPU and SIMD registers are typically not used in kernel mode and are therefore 53 + not saved during kernel preemption. As a result, any kernel code that uses 54 + these registers must be enclosed within a kernel_fpu_begin() and 55 + kernel_fpu_end() section. 56 + The kernel_fpu_begin() function usually invokes local_bh_disable() to prevent 57 + interruptions from softirqs and to disable regular preemption. This allows the 58 + protected code to run safely in both thread and softirq contexts. 59 + On PREEMPT_RT kernels, however, kernel_fpu_begin() must not call 60 + local_bh_disable(). Instead, it should use preempt_disable(), since softirqs 61 + are always handled in thread context under PREEMPT_RT. In this case, disabling 62 + preemption alone is sufficient. 63 + The crypto subsystem operates on memory pages and requires users to "walk and 64 + map" these pages while processing a request. This operation must occur outside 65 + the kernel_fpu_begin()/ kernel_fpu_end() section because it requires preemption 66 + to be enabled. These preemption points are generally sufficient to avoid 67 + excessive scheduling latency. 68 + 69 + Exception handlers 70 + Exception handlers, such as the page fault handler, typically enable interrupts 71 + early, before invoking any generic code to process the exception. This is 72 + necessary because handling a page fault may involve operations that can sleep. 
73 + Enabling interrupts is especially important on PREEMPT_RT, where certain 74 + locks, such as spinlock_t, become sleepable. For example, handling an 75 + invalid opcode may result in sending a SIGILL signal to the user task. A 76 + debug exception will send a SIGTRAP signal. 77 + In both cases, if the exception occurred in user space, it is safe to enable 78 + interrupts early. Sending a signal requires both interrupts and kernel 79 + preemption to be enabled. 80 + 81 + Optional features 82 + ----------------- 83 + 84 + Timer and clocksource 85 + A high-resolution clocksource and clockevents device are recommended. The 86 + clockevents device should support the CLOCK_EVT_FEAT_ONESHOT feature for 87 + optimal timer behavior. In most cases, microsecond-level accuracy is 88 + sufficient. 89 + 90 + Lazy preemption 91 + This mechanism allows an in-kernel scheduling request for non-real-time tasks 92 + to be delayed until the task is about to return to user space. It helps avoid 93 + preempting a task that holds a sleeping lock at the time of the scheduling 94 + request. 95 + With CONFIG_GENERIC_IRQ_ENTRY enabled, supporting this feature requires 96 + defining a bit for TIF_NEED_RESCHED_LAZY, preferably near TIF_NEED_RESCHED. 97 + 98 + Serial console with NBCON 99 + With PREEMPT_RT enabled, all console output is handled by a dedicated thread 100 + rather than directly from the context in which printk() is invoked. This design 101 + allows printk() to be safely used in atomic contexts. 102 + However, this also means that if the kernel crashes and cannot switch to the 103 + printing thread, no output will be visible, preventing the system from printing 104 + its final messages. 105 + There are exceptions for immediate output, such as during panic() handling. To 106 + support this, the console driver must implement new-style lock handling.
This 107 + involves setting the CON_NBCON flag in console::flags and providing 108 + implementations for the write_atomic, write_thread, device_lock, and 109 + device_unlock callbacks.
+242
Documentation/core-api/real-time/differences.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + =========================== 4 + How realtime kernels differ 5 + =========================== 6 + 7 + :Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de> 8 + 9 + Preface 10 + ======= 11 + 12 + With forced-threaded interrupts and sleeping spin locks, code paths that 13 + previously caused long scheduling latencies have been made preemptible and 14 + moved into process context. This allows the scheduler to manage them more 15 + effectively and respond to higher-priority tasks with reduced latency. 16 + 17 + The following chapters provide an overview of key differences between a 18 + PREEMPT_RT kernel and a standard, non-PREEMPT_RT kernel. 19 + 20 + Locking 21 + ======= 22 + 23 + Spinning locks such as spinlock_t are used to provide synchronization for data 24 + structures accessed from both interrupt context and process context. For this 25 + reason, locking functions are also available with the _irq() or _irqsave() 26 + suffixes, which disable interrupts before acquiring the lock. This ensures that 27 + the lock can be safely acquired in process context when interrupts are enabled. 28 + 29 + However, on a PREEMPT_RT system, interrupts are forced-threaded and no longer 30 + run in hard IRQ context. As a result, there is no need to disable interrupts as 31 + part of the locking procedure when using spinlock_t. 32 + 33 + For low-level core components such as interrupt handling, the scheduler, or the 34 + timer subsystem the kernel uses raw_spinlock_t. This lock type preserves 35 + traditional semantics: it disables preemption and, when used with _irq() or 36 + _irqsave(), also disables interrupts. This ensures proper synchronization in 37 + critical sections that must remain non-preemptible or with interrupts disabled. 38 + 39 + Execution context 40 + ================= 41 + 42 + Interrupt handling in a PREEMPT_RT system is invoked in process context through 43 + the use of threaded interrupts. 
Other parts of the kernel also shift their 44 + execution into threaded context by different mechanisms. The goal is to keep 45 + execution paths preemptible, allowing the scheduler to interrupt them when a 46 + higher-priority task needs to run. 47 + 48 + Below is an overview of the kernel subsystems involved in this transition to 49 + threaded, preemptible execution. 50 + 51 + Interrupt handling 52 + ------------------ 53 + 54 + All interrupts are forced-threaded in a PREEMPT_RT system. The exceptions are 55 + interrupts that are requested with the IRQF_NO_THREAD, IRQF_PERCPU, or 56 + IRQF_ONESHOT flags. 57 + 58 + The IRQF_ONESHOT flag is used together with threaded interrupts, meaning those 59 + registered using request_threaded_irq() and providing only a threaded handler. 60 + Its purpose is to keep the interrupt line masked until the threaded handler has 61 + completed. 62 + 63 + If a primary handler is also provided in this case, it is essential that the 64 + handler does not acquire any sleeping locks, as it will not be threaded. The 65 + handler should be minimal and must avoid introducing delays, such as 66 + busy-waiting on hardware registers. 67 + 68 + 69 + Soft interrupts, bottom half handling 70 + ------------------------------------- 71 + 72 + Soft interrupts are raised by the interrupt handler and are executed after the 73 + handler returns. Since they run in thread context, they can be preempted by 74 + other threads. Do not assume that softirq context runs with preemption 75 + disabled. This means you must not rely on mechanisms like local_bh_disable() in 76 + process context to protect per-CPU variables. Because softirq handlers are 77 + preemptible under PREEMPT_RT, this approach does not provide reliable 78 + synchronization. 79 + 80 + If this kind of protection is required for performance reasons, consider using 81 + local_lock_nested_bh(). On non-PREEMPT_RT kernels, this allows lockdep to 82 + verify that bottom halves are disabled. 
On PREEMPT_RT systems, it adds the 83 + necessary locking to ensure proper protection. 84 + 85 + Using local_lock_nested_bh() also makes the locking scope explicit and easier 86 + for readers and maintainers to understand. 87 + 88 + 89 + per-CPU variables 90 + ----------------- 91 + 92 + Protecting access to per-CPU variables solely by using preempt_disable() should 93 + be avoided, especially if the critical section has unbounded runtime or may 94 + call APIs that can sleep. 95 + 96 + If using a spinlock_t is considered too costly for performance reasons, 97 + consider using local_lock_t. On non-PREEMPT_RT configurations, this introduces 98 + no runtime overhead when lockdep is disabled. With lockdep enabled, it verifies 99 + that the lock is only acquired in process context and never from softirq or 100 + hard IRQ context. 101 + 102 + On a PREEMPT_RT kernel, local_lock_t is implemented using a per-CPU spinlock_t, 103 + which provides safe local protection for per-CPU data while keeping the system 104 + preemptible. 105 + 106 + Because spinlock_t on PREEMPT_RT does not disable preemption, it cannot be used 107 + to protect per-CPU data by relying on implicit preemption disabling. If this 108 + inherited preemption disabling is essential and if local_lock_t cannot be used 109 + due to performance constraints, brevity of the code, or abstraction boundaries 110 + within an API then preempt_disable_nested() may be a suitable alternative. On 111 + non-PREEMPT_RT kernels, it verifies with lockdep that preemption is already 112 + disabled. On PREEMPT_RT, it explicitly disables preemption. 113 + 114 + Timers 115 + ------ 116 + 117 + By default, an hrtimer is executed in hard interrupt context. The exception is 118 + timers initialized with the HRTIMER_MODE_SOFT flag, which are executed in 119 + softirq context. 120 + 121 + On a PREEMPT_RT kernel, this behavior is reversed: hrtimers are executed in 122 + softirq context by default, typically within the ktimersd thread. 
This thread 123 + runs at the lowest real-time priority, ensuring it executes before any 124 + SCHED_OTHER tasks but does not interfere with higher-priority real-time 125 + threads. To explicitly request execution in hard interrupt context on 126 + PREEMPT_RT, the timer must be marked with the HRTIMER_MODE_HARD flag. 127 + 128 + Memory allocation 129 + ----------------- 130 + 131 + The memory allocation APIs, such as kmalloc() and alloc_pages(), require a 132 + gfp_t flag to indicate the allocation context. On non-PREEMPT_RT kernels, it is 133 + necessary to use GFP_ATOMIC when allocating memory from interrupt context or 134 + from sections where preemption is disabled. This is because the allocator must 135 + not sleep in these contexts waiting for memory to become available. 136 + 137 + However, this approach does not work on PREEMPT_RT kernels. The memory 138 + allocator in PREEMPT_RT uses sleeping locks internally, which cannot be 139 + acquired when preemption is disabled. Fortunately, this is generally not a 140 + problem, because PREEMPT_RT moves most contexts that would traditionally run 141 + with preemption or interrupts disabled into threaded context, where sleeping is 142 + allowed. 143 + 144 + What remains problematic is code that explicitly disables preemption or 145 + interrupts. In such cases, memory allocation must be performed outside the 146 + critical section. 147 + 148 + This restriction also applies to memory deallocation routines such as kfree() 149 + and free_pages(), which may also involve internal locking and must not be 150 + called from non-preemptible contexts. 151 + 152 + IRQ work 153 + -------- 154 + 155 + The irq_work API provides a mechanism to schedule a callback in interrupt 156 + context. It is designed for use in contexts where traditional scheduling is not 157 + possible, such as from within NMI handlers or from inside the scheduler, where 158 + using a workqueue would be unsafe. 
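
A minimal irq_work user might look like the following sketch (illustrative names, not code from this patch); the callback runs in hard interrupt context on non-PREEMPT_RT kernels and in the per-CPU irq_work thread on PREEMPT_RT::

    #include <linux/irq_work.h>

    /* Illustrative deferred callback. */
    static void my_irq_work_func(struct irq_work *work)
    {
            /* IRQ context, or the irq_work/ thread on PREEMPT_RT */
    }

    static struct irq_work my_work = IRQ_WORK_INIT(my_irq_work_func);

    /* May be called where scheduling is impossible, e.g. from NMI. */
    irq_work_queue(&my_work);

Initializing the item with IRQ_WORK_INIT_HARD() instead would keep the callback in hard interrupt context even on PREEMPT_RT kernels.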
159 + 160 + On non-PREEMPT_RT systems, all irq_work items are executed immediately in 161 + interrupt context. Items marked with IRQ_WORK_LAZY are deferred until the next 162 + timer tick but are still executed in interrupt context. 163 + 164 + On PREEMPT_RT systems, the execution model changes. Because irq_work callbacks 165 + may acquire sleeping locks or have unbounded execution time, they are handled 166 + in thread context by a per-CPU irq_work kernel thread. This thread runs at the 167 + lowest real-time priority, ensuring it executes before any SCHED_OTHER tasks 168 + but does not interfere with higher-priority real-time threads. 169 + 170 + The exceptions are work items marked with IRQ_WORK_HARD_IRQ, which are still 171 + executed in hard interrupt context. Lazy items (IRQ_WORK_LAZY) continue to be 172 + deferred until the next timer tick and are also executed by the irq_work/ 173 + thread. 174 + 175 + RCU callbacks 176 + ------------- 177 + 178 + RCU callbacks are invoked by default in softirq context. Their execution is 179 + important because, depending on the use case, they either free memory or ensure 180 + progress in state transitions. Running these callbacks as part of the softirq 181 + chain can lead to undesired situations, such as contention for CPU resources 182 + with other SCHED_OTHER tasks when executed within ksoftirqd. 183 + 184 + To avoid running callbacks in softirq context, the RCU subsystem provides a 185 + mechanism to execute them in process context instead. This behavior can be 186 + enabled by setting the boot command-line parameter rcutree.use_softirq=0. This 187 + setting is enforced in kernels configured with PREEMPT_RT. 188 + 189 + Spin until ready 190 + ================ 191 + 192 + The "spin until ready" pattern involves repeatedly checking (spinning on) the 193 + state of a data structure until it becomes available. This pattern assumes that 194 + preemption, soft interrupts, or interrupts are disabled.
If the data structure 195 + is marked busy, it is presumed to be in use by another CPU, and spinning should 196 + eventually succeed as that CPU makes progress. 197 + 198 + Examples include hrtimer_cancel() and timer_delete_sync(). These functions 199 + cancel timers that execute with interrupts or soft interrupts disabled. If a 200 + thread attempts to cancel a timer and finds it active, spinning until the 201 + callback completes is safe because the callback can only run on another CPU and 202 + will eventually finish. 203 + 204 + On PREEMPT_RT kernels, however, timer callbacks run in thread context. This 205 + introduces a challenge: a higher-priority thread attempting to cancel the timer 206 + may preempt the timer callback thread. Since the scheduler cannot migrate the 207 + callback thread to another CPU due to affinity constraints, spinning can result 208 + in livelock even on multiprocessor systems. 209 + 210 + To avoid this, both the canceling and callback sides must use a handshake 211 + mechanism that supports priority inheritance. This allows the canceling thread 212 + to suspend until the callback completes, ensuring forward progress without 213 + risking livelock. 214 + 215 + In order to solve the problem at the API level, the sequence locks were extended 216 + to allow a proper handover between the spinning reader and the possibly 217 + blocked writer. 218 + 219 + Sequence locks 220 + -------------- 221 + 222 + Sequence counters and sequential locks are documented in 223 + Documentation/locking/seqlock.rst. 224 + 225 + The interface has been extended to ensure proper preemption states for the 226 + writer and spinning reader contexts. This is achieved by embedding the writer 227 + serialization lock directly into the sequence counter type, resulting in 228 + composite types such as seqcount_spinlock_t or seqcount_mutex_t.
229 + 230 + These composite types allow readers to detect an ongoing write and actively 231 + boost the writer’s priority to help it complete its update instead of spinning 232 + and waiting for its completion. 233 + 234 + If the plain seqcount_t is used, extra care must be taken to synchronize the 235 + reader with the writer during updates. The writer must ensure its update is 236 + serialized and non-preemptible relative to the reader. This cannot be achieved 237 + using a regular spinlock_t because spinlock_t on PREEMPT_RT does not disable 238 + preemption. In such cases, using seqcount_spinlock_t is the preferred solution. 239 + 240 + However, if there is no spinning involved, i.e., if the reader only needs to 241 + detect whether a write has started and does not need to serialize against it, 242 + then using seqcount_t is reasonable.
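
As a sketch of how the composite type is used (illustrative names, not code from this patch), the writer's serialization lock is named when the counter is initialized, which is what lets a reader locate and boost a preempted writer::

    #include <linux/seqlock.h>

    static DEFINE_SPINLOCK(foo_lock);
    static seqcount_spinlock_t foo_seq =
            SEQCNT_SPINLOCK_ZERO(foo_seq, &foo_lock);
    static u64 foo_a, foo_b;

    /* Writer: serialized by, and on PREEMPT_RT boostable via, foo_lock. */
    static void foo_update(u64 a, u64 b)
    {
            spin_lock(&foo_lock);
            write_seqcount_begin(&foo_seq);
            foo_a = a;
            foo_b = b;
            write_seqcount_end(&foo_seq);
            spin_unlock(&foo_lock);
    }

    /* Reader: retries while an update is in flight. */
    static void foo_read(u64 *a, u64 *b)
    {
            unsigned int seq;

            do {
                    seq = read_seqcount_begin(&foo_seq);
                    *a = foo_a;
                    *b = foo_b;
            } while (read_seqcount_retry(&foo_seq, seq));
    }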
+16
Documentation/core-api/real-time/index.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ===================== 4 + Real-time preemption 5 + ===================== 6 + 7 + This documentation is intended for Linux kernel developers and contributors 8 + interested in the inner workings of PREEMPT_RT. It explains key concepts and 9 + the required changes compared to a non-PREEMPT_RT configuration. 10 + 11 + .. toctree:: 12 + :maxdepth: 2 13 + 14 + theory 15 + differences 16 + architecture-porting
+116
Documentation/core-api/real-time/theory.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ===================== 4 + Theory of operation 5 + ===================== 6 + 7 + :Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de> 8 + 9 + Preface 10 + ======= 11 + 12 + PREEMPT_RT transforms the Linux kernel into a real-time kernel. It achieves 13 + this by replacing locking primitives, such as spinlock_t, with a preemptible 14 + and priority-inheritance aware implementation known as rtmutex, and by enforcing 15 + the use of threaded interrupts. As a result, the kernel becomes fully 16 + preemptible, with the exception of a few critical code paths, including entry 17 + code, the scheduler, and low-level interrupt handling routines. 18 + 19 + This transformation places the majority of kernel execution contexts under the 20 + control of the scheduler and significantly increases the number of preemption 21 + points. Consequently, it reduces the latency between a high-priority task 22 + becoming runnable and its actual execution on the CPU. 23 + 24 + Scheduling 25 + ========== 26 + 27 + The core principles of Linux scheduling and the associated user-space API are 28 + documented in the 29 + `sched(7) <https://man7.org/linux/man-pages/man7/sched.7.html>`_ man page. 30 + By default, the Linux kernel uses the SCHED_OTHER scheduling policy. Under 31 + this policy, a task is preempted when the scheduler determines that it has 32 + consumed a fair share of CPU time relative to other runnable tasks. However, 33 + the policy does not guarantee immediate preemption when a new SCHED_OTHER task 34 + becomes runnable. The currently running task may continue executing. 35 + 36 + This behavior differs from that of real-time scheduling policies such as 37 + SCHED_FIFO. When a task with a real-time policy becomes runnable, the 38 + scheduler immediately selects it for execution if it has a higher priority than 39 + the currently running task.
The task continues to run until it voluntarily 40 + yields the CPU, typically by blocking on an event. 41 + 42 + Sleeping spin locks 43 + =================== 44 + 45 + The various lock types and their behavior under real-time configurations are 46 + described in detail in Documentation/locking/locktypes.rst. 47 + In a non-PREEMPT_RT configuration, a spinlock_t is acquired by first disabling 48 + preemption and then actively spinning until the lock becomes available. Once 49 + the lock is released, preemption is enabled. From a real-time perspective, 50 + this approach is undesirable because disabling preemption prevents the 51 + scheduler from switching to a higher-priority task, potentially increasing 52 + latency. 53 + 54 + To address this, PREEMPT_RT replaces spinning locks with sleeping spin locks 55 + that do not disable preemption. On PREEMPT_RT, spinlock_t is implemented using 56 + rtmutex. Instead of spinning, a task attempting to acquire a contended lock 57 + disables CPU migration, donates its priority to the lock owner (priority 58 + inheritance), and voluntarily schedules out while waiting for the lock to 59 + become available. 60 + 61 + Disabling CPU migration provides the same effect as disabling preemption, while 62 + still allowing preemption and ensuring that the task continues to run on the 63 + same CPU while holding a sleeping lock. 64 + 65 + Priority inheritance 66 + ==================== 67 + 68 + Lock types such as spinlock_t and mutex_t in a PREEMPT_RT enabled kernel are 69 + implemented on top of rtmutex, which provides support for priority inheritance 70 + (PI). When a task blocks on such a lock, the PI mechanism temporarily 71 + propagates the blocked task’s scheduling parameters to the lock owner. 72 + 73 + For example, if a SCHED_FIFO task A blocks on a lock currently held by a 74 + SCHED_OTHER task B, task A’s scheduling policy and priority are temporarily 75 + inherited by task B. 
After this inheritance, task A is put to sleep while 76 + waiting for the lock, and task B effectively becomes the highest-priority task 77 + in the system. This allows B to continue executing, make progress, and 78 + eventually release the lock. 79 + 80 + Once B releases the lock, it reverts to its original scheduling parameters, and 81 + task A can resume execution. 82 + 83 + Threaded interrupts 84 + =================== 85 + 86 + Interrupt handlers are another source of code that executes with preemption 87 + disabled and outside the control of the scheduler. To bring interrupt handling 88 + under scheduler control, PREEMPT_RT enforces threaded interrupt handlers. 89 + 90 + With forced threading, interrupt handling is split into two stages. The first 91 + stage, the primary handler, is executed in IRQ context with interrupts disabled. 92 + Its sole responsibility is to wake the associated threaded handler. The second 93 + stage, the threaded handler, is the function passed to request_irq() as the 94 + interrupt handler. It runs in process context, scheduled by the kernel. 95 + 96 + From waking the interrupt thread until threaded handling is completed, the 97 + interrupt source is masked in the interrupt controller. This ensures that the 98 + device interrupt remains pending but does not retrigger the CPU, allowing the 99 + system to exit IRQ context and handle the interrupt in a scheduled thread. 100 + 101 + By default, the threaded handler executes with the SCHED_FIFO scheduling policy 102 + and a priority of 50 (MAX_RT_PRIO / 2), which is midway between the minimum and 103 + maximum real-time priorities. 104 + 105 + If the threaded interrupt handler raises any soft interrupts during its 106 + execution, those soft interrupt routines are invoked after the threaded handler 107 + completes, within the same thread. Preemption remains enabled during the 108 + execution of the soft interrupt handler. 
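
The registration of a purely threaded handler can be sketched as follows (illustrative names, not code from this patch); passing NULL as the primary handler installs the default primary handler, which only wakes the interrupt thread::

    #include <linux/interrupt.h>

    /* Process context: sleeping locks and preemption are fine here. */
    static irqreturn_t my_dev_thread_fn(int irq, void *dev_id)
    {
            /* ... service the device ... */
            return IRQ_HANDLED;
    }

    /* IRQF_ONESHOT keeps the line masked until the thread completes. */
    err = request_threaded_irq(irq, NULL, my_dev_thread_fn,
                               IRQF_ONESHOT, "my-dev", dev);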
109 + 110 + Summary 111 + ======= 112 + 113 + By using sleeping locks and forced-threaded interrupts, PREEMPT_RT 114 + significantly reduces sections of code where interrupts or preemption is 115 + disabled, allowing the scheduler to preempt the current execution context and 116 + switch to a higher-priority task.
+2 -2
Documentation/dev-tools/autofdo.rst
··· 131 131 132 132 For Zen3:: 133 133 134 - $ cat proc/cpuinfo | grep " brs" 134 + $ cat /proc/cpuinfo | grep " brs" 135 135 136 136 For Zen4:: 137 137 138 - $ cat proc/cpuinfo | grep amd_lbr_v2 138 + $ cat /proc/cpuinfo | grep amd_lbr_v2 139 139 140 140 The following command generated the perf data file:: 141 141
+1
Documentation/dev-tools/index.rst
··· 29 29 ubsan 30 30 kmemleak 31 31 kcsan 32 + lkmm/index 32 33 kfence 33 34 kselftest 34 35 kunit/index
+4 -1
Documentation/dev-tools/ktap.rst
··· 5 5 =================================================== 6 6 7 7 TAP, or the Test Anything Protocol is a format for specifying test results used 8 - by a number of projects. It's website and specification are found at this `link 8 + by a number of projects. Its website and specification are found at this `link 9 9 <https://testanything.org/>`_. The Linux Kernel largely uses TAP output for test 10 10 results. However, Kernel testing frameworks have special needs for test results 11 11 which don't align with the original TAP specification. Thus, a "Kernel TAP" ··· 20 20 aid human debugging. 21 21 22 22 KTAP output is built from four different types of lines: 23 + 23 24 - Version lines 24 25 - Plan lines 25 26 - Test case result lines ··· 39 38 version of the (K)TAP standard the result is compliant with. 40 39 41 40 For example: 41 + 42 42 - "KTAP version 1" 43 43 - "TAP version 13" 44 44 - "TAP version 14" ··· 278 276 This output defines the following hierarchy: 279 277 280 278 A single test called "main_test", which fails, and has three subtests: 279 + 281 280 - "example_test_1", which passes, and has one subtest: 282 281 283 282 - "test_1", which passes, and outputs the diagnostic message "test_1: initializing test_1"
+11
Documentation/dev-tools/lkmm/docs/access-marking.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Access Marking 4 + -------------- 5 + 6 + Literal include of ``tools/memory-model/Documentation/access-marking.txt``. 7 + 8 + ------------------------------------------------------------------ 9 + 10 + .. kernel-include:: tools/memory-model/Documentation/access-marking.txt 11 + :literal:
+11
Documentation/dev-tools/lkmm/docs/cheatsheet.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Cheatsheet 4 + ---------- 5 + 6 + Literal include of ``tools/memory-model/Documentation/cheatsheet.txt``. 7 + 8 + ------------------------------------------------------------------ 9 + 10 + .. kernel-include:: tools/memory-model/Documentation/cheatsheet.txt 11 + :literal:
+11
Documentation/dev-tools/lkmm/docs/control-dependencies.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Control Dependencies 4 + -------------------- 5 + 6 + Literal include of ``tools/memory-model/Documentation/control-dependencies.txt``. 7 + 8 + ------------------------------------------------------------------ 9 + 10 + .. kernel-include:: tools/memory-model/Documentation/control-dependencies.txt 11 + :literal:
+11
Documentation/dev-tools/lkmm/docs/explanation.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Explanation 4 + ----------- 5 + 6 + Literal include of ``tools/memory-model/Documentation/explanation.txt``. 7 + 8 + ------------------------------------------------------------------ 9 + 10 + .. kernel-include:: tools/memory-model/Documentation/explanation.txt 11 + :literal:
+11
Documentation/dev-tools/lkmm/docs/glossary.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Glossary 4 + -------- 5 + 6 + Literal include of ``tools/memory-model/Documentation/glossary.txt``. 7 + 8 + ------------------------------------------------------------------ 9 + 10 + .. kernel-include:: tools/memory-model/Documentation/glossary.txt 11 + :literal:
+11
Documentation/dev-tools/lkmm/docs/herd-representation.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + herd-representation 4 + ------------------- 5 + 6 + Literal include of ``tools/memory-model/Documentation/herd-representation.txt``. 7 + 8 + ------------------------------------------------------------------ 9 + 10 + .. kernel-include:: tools/memory-model/Documentation/herd-representation.txt 11 + :literal:
+21
Documentation/dev-tools/lkmm/docs/index.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Documentation 4 + ============= 5 + 6 + .. toctree:: 7 + :maxdepth: 1 8 + 9 + readme 10 + simple 11 + ordering 12 + litmus-tests 13 + locking 14 + recipes 15 + control-dependencies 16 + access-marking 17 + cheatsheet 18 + explanation 19 + herd-representation 20 + glossary 21 + references
+11
Documentation/dev-tools/lkmm/docs/litmus-tests.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Litmus Tests 4 + ------------ 5 + 6 + Literal include of ``tools/memory-model/Documentation/litmus-tests.txt``. 7 + 8 + ------------------------------------------------------------------ 9 + 10 + .. kernel-include:: tools/memory-model/Documentation/litmus-tests.txt 11 + :literal:
+11
Documentation/dev-tools/lkmm/docs/locking.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Locking 4 + ------- 5 + 6 + Literal include of ``tools/memory-model/Documentation/locking.txt``. 7 + 8 + ------------------------------------------------------------------ 9 + 10 + .. kernel-include:: tools/memory-model/Documentation/locking.txt 11 + :literal:
+11
Documentation/dev-tools/lkmm/docs/ordering.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Ordering 4 + -------- 5 + 6 + Literal include of ``tools/memory-model/Documentation/ordering.txt``. 7 + 8 + ------------------------------------------------------------------ 9 + 10 + .. kernel-include:: tools/memory-model/Documentation/ordering.txt 11 + :literal:
+11
Documentation/dev-tools/lkmm/docs/readme.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + README (for LKMM Documentation) 4 + ------------------------------- 5 + 6 + Literal include of ``tools/memory-model/Documentation/README``. 7 + 8 + ------------------------------------------------------------------ 9 + 10 + .. kernel-include:: tools/memory-model/Documentation/README 11 + :literal:
+11
Documentation/dev-tools/lkmm/docs/recipes.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Recipes 4 + ------- 5 + 6 + Literal include of ``tools/memory-model/Documentation/recipes.txt``. 7 + 8 + ------------------------------------------------------------------ 9 + 10 + .. kernel-include:: tools/memory-model/Documentation/recipes.txt 11 + :literal:
+11
Documentation/dev-tools/lkmm/docs/references.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + References 4 + ---------- 5 + 6 + Literal include of ``tools/memory-model/Documentation/references.txt``. 7 + 8 + ------------------------------------------------------------------ 9 + 10 + .. kernel-include:: tools/memory-model/Documentation/references.txt 11 + :literal:
+11
Documentation/dev-tools/lkmm/docs/simple.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Simple 4 + ------ 5 + 6 + Literal include of ``tools/memory-model/Documentation/simple.txt``. 7 + 8 + ------------------------------------------------------------------ 9 + 10 + .. kernel-include:: tools/memory-model/Documentation/simple.txt 11 + :literal:
+15
Documentation/dev-tools/lkmm/index.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ============================================ 4 + Linux Kernel Memory Consistency Model (LKMM) 5 + ============================================ 6 + 7 + This section literally renders documents under ``tools/memory-model/`` 8 + and ``tools/memory-model/Documentation/``, which are maintained in 9 + the *pure* plain text form. 10 + 11 + .. toctree:: 12 + :maxdepth: 2 13 + 14 + readme 15 + docs/index
+11
Documentation/dev-tools/lkmm/readme.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + README (for LKMM) 4 + ================= 5 + 6 + Literal include of ``tools/memory-model/README``. 7 + 8 + ------------------------------------------------------------ 9 + 10 + .. kernel-include:: tools/memory-model/README 11 + :literal:
+1 -1
Documentation/devicetree/bindings/submitting-patches.rst
··· 66 66 any DTS patches, regardless whether using existing or new bindings, should 67 67 be placed at the end of patchset to indicate no dependency of drivers on 68 68 the DTS. DTS will be anyway applied through separate tree or branch, so 69 - different order would indicate the serie is non-bisectable. 69 + different order would indicate the series is non-bisectable. 70 70 71 71 If a driver subsystem maintainer prefers to apply entire set, instead of 72 72 their relevant portion of patchset, please split the DTS patches into
+1 -1
Documentation/driver-api/cxl/devices/device-types.rst
··· 22 22 Typically used for initialization, configuration, and I/O access for anything 23 23 other than memory (CXL.mem) or cache (CXL.cache) operations. 24 24 25 - The Linux CXL driver exposes access to .io functionalty via the various sysfs 25 + The Linux CXL driver exposes access to .io functionality via the various sysfs 26 26 interfaces and /dev/cxl/ devices (which exposes direct access to device 27 27 mailboxes). 28 28
+1 -1
Documentation/driver-api/cxl/platform/example-configurations/one-dev-per-hb.rst
··· 10 10 Things to note: 11 11 12 12 * Cross-Bridge interleave is not being used. 13 - * The expanders are in two separate but adjascent memory regions. 13 + * The expanders are in two separate but adjacent memory regions. 14 14 * This CEDT/SRAT describes one node per device 15 15 * The expanders have the same performance and will be in the same memory tier. 16 16
+2 -2
Documentation/driver-api/device-io.rst
··· 16 16 Introduction 17 17 ============ 18 18 19 - Linux provides an API which abstracts performing IO across all busses 19 + Linux provides an API which abstracts performing IO across all buses 20 20 and devices, allowing device drivers to be written independently of bus 21 21 type. 22 22 ··· 71 71 indicate the relaxed ordering. Use this with care. 72 72 73 73 While the basic functions are defined to be synchronous with respect to 74 - each other and ordered with respect to each other the busses the devices 74 + each other and ordered with respect to each other the buses the devices 75 75 sit on may themselves have asynchronicity. In particular many authors 76 76 are burned by the fact that PCI bus writes are posted asynchronously. A 77 77 driver author must issue a read from the same device to ensure that
+1 -1
Documentation/driver-api/driver-model/overview.rst
··· 22 22 23 23 The current driver model provides a common, uniform data model for describing 24 24 a bus and the devices that can appear under the bus. The unified bus 25 - model includes a set of common attributes which all busses carry, and a set 25 + model includes a set of common attributes which all buses carry, and a set 26 26 of common callbacks, such as device discovery during bus probing, bus 27 27 shutdown, bus power management, etc. 28 28
+1 -1
Documentation/driver-api/driver-model/platform.rst
··· 4 4 5 5 See <linux/platform_device.h> for the driver model interface to the 6 6 platform bus: platform_device, and platform_driver. This pseudo-bus 7 - is used to connect devices on busses with minimal infrastructure, 7 + is used to connect devices on buses with minimal infrastructure, 8 8 like those used to integrate peripherals on many system-on-chip 9 9 processors, or some "legacy" PC interconnects; as opposed to large 10 10 formally specified ones like PCI or USB.
+3 -3
Documentation/driver-api/eisa.rst
··· 8 8 new EISA/sysfs API. 9 9 10 10 Starting from version 2.5.59, the EISA bus is almost given the same 11 - status as other much more mainstream busses such as PCI or USB. This 11 + status as other much more mainstream buses such as PCI or USB. This 12 12 has been possible through sysfs, which defines a nice enough set of 13 - abstractions to manage busses, devices and drivers. 13 + abstractions to manage buses, devices and drivers. 14 14 15 15 Although the new API is quite simple to use, converting existing 16 16 drivers to the new infrastructure is not an easy task (mostly because ··· 205 205 Converting an EISA driver to the new API mostly involves *deleting* 206 206 code (since probing is now in the core EISA code). Unfortunately, most 207 207 drivers share their probing routine between ISA, and EISA. Special 208 - care must be taken when ripping out the EISA code, so other busses 208 + care must be taken when ripping out the EISA code, so other buses 209 209 won't suffer from these surgical strikes... 210 210 211 211 You *must not* expect any EISA device to be detected when returning
+2 -2
Documentation/driver-api/i3c/protocol.rst
··· 165 165 for more details): 166 166 167 167 * HDR-DDR: Double Data Rate mode 168 - * HDR-TSP: Ternary Symbol Pure. Only usable on busses with no I2C devices 169 - * HDR-TSL: Ternary Symbol Legacy. Usable on busses with I2C devices 168 + * HDR-TSP: Ternary Symbol Pure. Only usable on buses with no I2C devices 169 + * HDR-TSL: Ternary Symbol Legacy. Usable on buses with I2C devices 170 170 171 171 When sending an HDR command, the whole bus has to enter HDR mode, which is done 172 172 using a broadcast CCC command.
+2 -2
Documentation/driver-api/ipmi.rst
··· 617 617 address. So if you want your MC address to be 0x60, you put 0x30 618 618 here. See the I2C driver info for more details. 619 619 620 - Command bridging to other IPMB busses through this interface does not 620 + Command bridging to other IPMB buses through this interface does not 621 621 work. The receive message queue is not implemented, by design. There 622 622 is only one receive message queue on a BMC, and that is meant for the 623 623 host drivers, not something on the IPMB bus. 624 624 625 - A BMC may have multiple IPMB busses, which bus your device sits on 625 + A BMC may have multiple IPMB buses, which bus your device sits on 626 626 depends on how the system is wired. You can fetch the channels with 627 627 "ipmitool channel info <n>" where <n> is the channel, with the 628 628 channels being 0-7 and try the IPMB channels.
+2 -2
Documentation/driver-api/media/tx-rx.rst
··· 12 12 Bus types 13 13 --------- 14 14 15 - The following busses are the most common. This section discusses these two only. 15 + The following buses are the most common. This section discusses these two only. 16 16 17 17 MIPI CSI-2 18 18 ^^^^^^^^^^ ··· 36 36 37 37 Transmitter drivers generally need to provide the receiver drivers with the 38 38 configuration of the transmitter. What is required depends on the type of the 39 - bus. These are common for both busses. 39 + bus. These are common for both buses. 40 40 41 41 Media bus pixel code 42 42 ^^^^^^^^^^^^^^^^^^^^
+1 -1
Documentation/driver-api/nvdimm/nvdimm.rst
··· 230 230 A bus has a 1:1 relationship with an NFIT. The current expectation for 231 231 ACPI based systems is that there is only ever one platform-global NFIT. 232 232 That said, it is trivial to register multiple NFITs, the specification 233 - does not preclude it. The infrastructure supports multiple busses and 233 + does not preclude it. The infrastructure supports multiple buses and 234 234 we use this capability to test multiple NFIT configurations in the unit 235 235 test. 236 236
+6 -4
Documentation/driver-api/pin-control.rst
··· 1202 1202 { 1203 1203 /* Allocate a state holder named "foo" etc */ 1204 1204 struct foo_state *foo = ...; 1205 + int ret; 1205 1206 1206 1207 foo->p = devm_pinctrl_get(&device); 1207 1208 if (IS_ERR(foo->p)) { 1208 - /* FIXME: clean up "foo" here */ 1209 - return PTR_ERR(foo->p); 1209 + ret = PTR_ERR(foo->p); 1210 + foo->p = NULL; 1211 + return ret; 1210 1212 } 1211 1213 1212 1214 foo->s = pinctrl_lookup_state(foo->p, PINCTRL_STATE_DEFAULT); 1213 1215 if (IS_ERR(foo->s)) { 1214 - /* FIXME: clean up "foo" here */ 1216 + devm_pinctrl_put(foo->p); 1215 1217 return PTR_ERR(foo->s); 1216 1218 } 1217 1219 1218 1220 ret = pinctrl_select_state(foo->p, foo->s); 1219 1221 if (ret < 0) { 1220 - /* FIXME: clean up "foo" here */ 1222 + devm_pinctrl_put(foo->p); 1221 1223 return ret; 1222 1224 } 1223 1225 }
+2 -2
Documentation/driver-api/pm/devices.rst
··· 255 255 its parent; and can't be removed or suspended after that parent. 256 256 257 257 The policy is that the device hierarchy should match hardware bus topology. 258 - [Or at least the control bus, for devices which use multiple busses.] 258 + [Or at least the control bus, for devices which use multiple buses.] 259 259 In particular, this means that a device registration may fail if the parent of 260 260 the device is suspending (i.e. has been chosen by the PM core as the next 261 261 device to suspend) or has already suspended, as well as after all of the other ··· 493 493 494 494 Drivers must also be prepared to notice that the device has been removed 495 495 while the system was powered down, whenever that's physically possible. 496 - PCMCIA, MMC, USB, Firewire, SCSI, and even IDE are common examples of busses 496 + PCMCIA, MMC, USB, Firewire, SCSI, and even IDE are common examples of buses 497 497 where common Linux platforms will see such removal. Details of how drivers 498 498 will notice and handle such removals are currently bus-specific, and often 499 499 involve a separate thread.
+2 -2
Documentation/driver-api/scsi.rst
··· 18 18 19 19 Although the old parallel (fast/wide/ultra) SCSI bus has largely fallen 20 20 out of use, the SCSI command set is more widely used than ever to 21 - communicate with devices over a number of different busses. 21 + communicate with devices over a number of different buses. 22 22 23 23 The `SCSI protocol <https://www.t10.org/scsi-3.htm>`__ is a big-endian 24 24 peer-to-peer packet based protocol. SCSI commands are 6, 10, 12, or 16 ··· 286 286 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 287 287 288 288 The file drivers/scsi/scsi_transport_spi.c defines transport 289 - attributes for traditional (fast/wide/ultra) SCSI busses. 289 + attributes for traditional (fast/wide/ultra) SCSI buses. 290 290 291 291 .. kernel-doc:: drivers/scsi/scsi_transport_spi.c 292 292 :export:
+1 -1
Documentation/driver-api/spi.rst
··· 13 13 normally used for each peripheral, plus sometimes an interrupt. 14 14 15 15 The SPI bus facilities listed here provide a generalized interface to 16 - declare SPI busses and devices, manage them according to the standard 16 + declare SPI buses and devices, manage them according to the standard 17 17 Linux driver model, and perform input/output operations. At this time, 18 18 only "master" side interfaces are supported, where Linux talks to SPI 19 19 peripherals and does not implement such a peripheral itself. (Interfaces
+7 -7
Documentation/driver-api/thermal/exynos_thermal_emulation.rst
··· 28 28 delay of changing temperature. However, this node only uses same delay 29 29 of real sensing time, 938us.) 30 30 31 - Exynos emulation mode requires synchronous of value changing and 32 - enabling. It means when you want to update the any value of delay or 33 - next temperature, then you have to enable emulation mode at the same 34 - time. (Or you have to keep the mode enabling.) If you don't, it fails to 35 - change the value to updated one and just use last succeessful value 36 - repeatedly. That's why this node gives users the right to change 37 - termerpature only. Just one interface makes it more simply to use. 31 + Exynos emulation mode requires that value changes and enabling are performed 32 + synchronously. This means that when you want to update any value, such as the 33 + delay or the next temperature, you must enable emulation mode at the same 34 + time (or keep the mode enabled). If you do not, the value will fail to update 35 + and the last successful value will continue to be used. For this reason, 36 + this node only allows users to change the temperature. Providing a single 37 + interface makes it simpler to use. 38 38 39 39 Disabling emulation mode only requires writing value 0 to sysfs node. 40 40
+1 -1
Documentation/driver-api/usb/hotplug.rst
··· 5 5 ================= 6 6 7 7 8 - In hotpluggable busses like USB (and Cardbus PCI), end-users plug devices 8 + In hotpluggable buses like USB (and Cardbus PCI), end-users plug devices 9 9 into the bus with power on. In most cases, users expect the devices to become 10 10 immediately usable. That means the system must do many things, including: 11 11
+1
Documentation/driver-api/usb/index.rst
··· 3 3 ============= 4 4 5 5 .. toctree:: 6 + :maxdepth: 1 6 7 7 8 usb 8 9 gadget
+2 -2
Documentation/driver-api/usb/usb.rst
··· 13 13 interior nodes, and peripherals as leaves (and slaves). Modern PCs 14 14 support several such trees of USB devices, usually 15 15 a few USB 3.0 (5 GBit/s) or USB 3.1 (10 GBit/s) and some legacy 16 - USB 2.0 (480 MBit/s) busses just in case. 16 + USB 2.0 (480 MBit/s) buses just in case. 17 17 18 18 That master/slave asymmetry was designed-in for a number of reasons, one 19 19 being ease of use. It is not physically possible to mistake upstream and ··· 42 42 driver frameworks), and the other is for drivers that are *part of the 43 43 core*. Such core drivers include the *hub* driver (which manages trees 44 44 of USB devices) and several different kinds of *host controller 45 - drivers*, which control individual busses. 45 + drivers*, which control individual buses. 46 46 47 47 The device model seen by USB drivers is relatively complex. 48 48
+24 -18
Documentation/fb/fbcon.rst
··· 39 39 you don't do anything, the kernel configuration tool will select one for you, 40 40 usually an 8x16 font. 41 41 42 - GOTCHA: A common bug report is enabling the framebuffer without enabling the 43 - framebuffer console. Depending on the driver, you may get a blanked or 44 - garbled display, but the system still boots to completion. If you are 45 - fortunate to have a driver that does not alter the graphics chip, then you 46 - will still get a VGA console. 42 + .. admonition:: GOTCHA 43 + 44 + A common bug report is enabling the framebuffer without enabling the 45 + framebuffer console. Depending on the driver, you may get a blanked or 46 + garbled display, but the system still boots to completion. If you are 47 + fortunate to have a driver that does not alter the graphics chip, then you 48 + will still get a VGA console. 47 49 48 50 B. Loading 49 51 ========== ··· 76 74 over the console. 77 75 78 76 C. Boot options 77 + =============== 79 78 80 79 The framebuffer console has several, largely unknown, boot options 81 80 that can change its behavior. ··· 119 116 outside the given range will still be controlled by the standard 120 117 console driver. 121 118 122 - NOTE: For x86 machines, the standard console is the VGA console which 123 - is typically located on the same video card. Thus, the consoles that 124 - are controlled by the VGA console will be garbled. 119 + .. note:: 120 + For x86 machines, the standard console is the VGA console which 121 + is typically located on the same video card. Thus, the consoles that 122 + are controlled by the VGA console will be garbled. 125 123 126 124 4. fbcon=rotate:<n> 127 125 ··· 144 140 Console rotation will only become available if Framebuffer Console 145 141 Rotation support is compiled in your kernel. 146 142 147 - NOTE: This is purely console rotation. Any other applications that 148 - use the framebuffer will remain at their 'normal' orientation. 
149 - Actually, the underlying fb driver is totally ignorant of console 150 - rotation. 143 + .. note:: 144 + This is purely console rotation. Any other applications that 145 + use the framebuffer will remain at their 'normal' orientation. 146 + Actually, the underlying fb driver is totally ignorant of console 147 + rotation. 151 148 152 149 5. fbcon=margin:<color> 153 150 ··· 177 172 The value 'n' overrides the number of bootup logos. 0 disables the 178 173 logo, and -1 gives the default which is the number of online CPUs. 179 174 180 - C. Attaching, Detaching and Unloading 175 + D. Attaching, Detaching and Unloading 176 + ===================================== 181 177 182 178 Before going on to how to attach, detach and unload the framebuffer console, an 183 179 illustration of the dependencies may help. ··· 255 249 echo 1 > /sys/class/vtconsole/vtcon1/bind 256 250 257 251 8. Once fbcon is unbound, all drivers registered to the system will also 258 - become unbound. This means that fbcon and individual framebuffer drivers 259 - can be unloaded or reloaded at will. Reloading the drivers or fbcon will 260 - automatically bind the console, fbcon and the drivers together. Unloading 261 - all the drivers without unloading fbcon will make it impossible for the 262 - console to bind fbcon. 252 + become unbound. This means that fbcon and individual framebuffer drivers 253 + can be unloaded or reloaded at will. Reloading the drivers or fbcon will 254 + automatically bind the console, fbcon and the drivers together. Unloading 255 + all the drivers without unloading fbcon will make it impossible for the 256 + console to bind fbcon. 263 257 264 258 Notes for vesafb users: 265 259 =======================
+2 -2
Documentation/features/core/eBPF-JIT/arch-support.txt
··· 7 7 | arch |status| 8 8 ----------------------- 9 9 | alpha: | TODO | 10 - | arc: | TODO | 10 + | arc: | ok | 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | csky: | TODO | ··· 18 18 | mips: | ok | 19 19 | nios2: | TODO | 20 20 | openrisc: | TODO | 21 - | parisc: | TODO | 21 + | parisc: | ok | 22 22 | powerpc: | ok | 23 23 | riscv: | ok | 24 24 | s390: | ok |
+1 -1
Documentation/features/core/mseal_sys_mappings/arch-support.txt
··· 20 20 | openrisc: | N/A | 21 21 | parisc: | TODO | 22 22 | powerpc: | TODO | 23 - | riscv: | TODO | 23 + | riscv: | ok | 24 24 | s390: | ok | 25 25 | sh: | N/A | 26 26 | sparc: | TODO |
+1 -1
Documentation/features/core/thread-info-in-task/arch-support.txt
··· 24 24 | s390: | ok | 25 25 | sh: | TODO | 26 26 | sparc: | TODO | 27 - | um: | TODO | 27 + | um: | ok | 28 28 | x86: | ok | 29 29 | xtensa: | TODO | 30 30 -----------------------
+1 -1
Documentation/features/core/tracehook/arch-support.txt
··· 24 24 | s390: | ok | 25 25 | sh: | ok | 26 26 | sparc: | ok | 27 - | um: | TODO | 27 + | um: | ok | 28 28 | x86: | ok | 29 29 | xtensa: | ok | 30 30 -----------------------
+1 -1
Documentation/features/perf/kprobes-event/arch-support.txt
··· 17 17 | microblaze: | TODO | 18 18 | mips: | ok | 19 19 | nios2: | TODO | 20 - | openrisc: | TODO | 20 + | openrisc: | ok | 21 21 | parisc: | ok | 22 22 | powerpc: | ok | 23 23 | riscv: | ok |
+1 -1
Documentation/features/time/clockevents/arch-support.txt
··· 18 18 | mips: | ok | 19 19 | nios2: | ok | 20 20 | openrisc: | ok | 21 - | parisc: | TODO | 21 + | parisc: | ok | 22 22 | powerpc: | ok | 23 23 | riscv: | ok | 24 24 | s390: | ok |
+1 -1
Documentation/filesystems/erofs.rst
··· 116 116 cluster for further reading. It still does 117 117 in-place I/O decompression for the rest 118 118 compressed physical clusters; 119 - readaround Cache the both ends of incomplete compressed 119 + readaround Cache both ends of incomplete compressed 120 120 physical clusters for further reading. 121 121 It still does in-place I/O decompression 122 122 for the rest compressed physical clusters.
+3 -3
Documentation/filesystems/ext4/atomic_writes.rst
··· 14 14 supports hardware atomic writes. This is supported in the following two ways: 15 15 16 16 1. **Single-fsblock Atomic Writes**: 17 - EXT4's supports atomic write operations with a single filesystem block since 17 + EXT4 supports atomic write operations with a single filesystem block since 18 18 v6.13. In this the atomic write unit minimum and maximum sizes are both set 19 19 to filesystem blocksize. 20 20 e.g. doing atomic write of 16KB with 16KB filesystem blocksize on 64KB ··· 50 50 51 51 The bigalloc feature changes ext4 to allocate in units of multiple filesystem 52 52 blocks, also known as clusters. With bigalloc each bit within block bitmap 53 - represents cluster (power of 2 number of blocks) rather than individual 53 + represents a cluster (power of 2 number of blocks) rather than individual 54 54 filesystem blocks. 55 55 EXT4 supports multi-fsblock atomic writes with bigalloc, subject to the 56 56 following constraints. The minimum atomic write size is the larger of the fs ··· 189 189 filesystem's maximum atomic write unit size. 190 190 See ``generic_atomic_write_valid()`` for more details. 191 191 192 - ``statx()`` system call with ``STATX_WRITE_ATOMIC`` flag can provides following 192 + ``statx()`` system call with ``STATX_WRITE_ATOMIC`` flag can provide the following 193 193 details: 194 194 195 195 * ``stx_atomic_write_unit_min``: Minimum size of an atomic write request.
+1 -1
Documentation/filesystems/gfs2-glocks.rst
··· 105 105 Operations must not drop either the bit lock or the spinlock 106 106 if its held on entry. go_dump and do_demote_ok must never block. 107 107 Note that go_dump will only be called if the glock's state 108 - indicates that it is caching uptodate data. 108 + indicates that it is caching up-to-date data. 109 109 110 110 Glock locking order within GFS2: 111 111
+1 -1
Documentation/filesystems/hpfs.rst
··· 65 65 'cat FOO', 'cat Foo', 'cat foo' or 'cat F*' but not 'cat f*'. Note, that you 66 66 also won't be able to compile linux kernel (and maybe other things) on HPFS 67 67 because kernel creates different files with names like bootsect.S and 68 - bootsect.s. When searching for file thats name has characters >= 128, codepages 68 + bootsect.s. When searching for file whose name has characters >= 128, codepages 69 69 are used - see below. 70 70 OS/2 ignores dots and spaces at the end of file name, so this driver does as 71 71 well. If you create 'a. ...', the file 'a' will be created, but you can still
+1 -1
Documentation/filesystems/iomap/operations.rst
··· 321 321 - ``writeback_submit``: Submit the previous built writeback context. 322 322 Block based file systems should use the iomap_ioend_writeback_submit 323 323 helper, other file system can implement their own. 324 - File systems can optionall to hook into writeback bio submission. 324 + File systems can optionally hook into writeback bio submission. 325 325 This might include pre-write space accounting updates, or installing 326 326 a custom ``->bi_end_io`` function for internal purposes, such as 327 327 deferring the ioend completion to a workqueue to run metadata update
+10 -10
Documentation/filesystems/ocfs2-online-filecheck.rst
··· 58 58 # echo "<inode>" > /sys/fs/ocfs2/<devname>/filecheck/check 59 59 # cat /sys/fs/ocfs2/<devname>/filecheck/check 60 60 61 - The output is like this:: 61 + The output is like this:: 62 62 63 63 INO DONE ERROR 64 64 39502 1 GENERATION 65 65 66 - <INO> lists the inode numbers. 67 - <DONE> indicates whether the operation has been finished. 68 - <ERROR> says what kind of errors was found. For the detailed error numbers, 69 - please refer to the file linux/fs/ocfs2/filecheck.h. 66 + <INO> lists the inode numbers. 67 + <DONE> indicates whether the operation has been finished. 68 + <ERROR> says what kind of error was found. For the detailed error numbers, 69 + please refer to the file linux/fs/ocfs2/filecheck.h. 70 70 71 71 2. If you determine to fix this inode, do:: 72 72 73 73 # echo "<inode>" > /sys/fs/ocfs2/<devname>/filecheck/fix 74 74 # cat /sys/fs/ocfs2/<devname>/filecheck/fix 75 75 76 - The output is like this::: 76 + The output is like this:: 77 77 78 78 INO DONE ERROR 79 79 39502 1 SUCCESS 80 80 81 - This time, the <ERROR> column indicates whether this fix is successful or not. 81 + This time, the <ERROR> column indicates whether this fix is successful or not. 82 82 83 83 3. The record cache is used to store the history of check/fix results. It's 84 - default size is 10, and can be adjust between the range of 10 ~ 100. You can 85 - adjust the size like this:: 84 + default size is 10, and can be adjusted within the range of 10 ~ 100. You can 85 + adjust the size like this:: 86 86 87 - # echo "<size>" > /sys/fs/ocfs2/<devname>/filecheck/set 87 + # echo "<size>" > /sys/fs/ocfs2/<devname>/filecheck/set 88 88 89 89 Fixing stuff 90 90 ============
-21
Documentation/filesystems/proc.rst
··· 61 61 0.1 Introduction/Credits 62 62 ------------------------ 63 63 64 - This documentation is part of a soon (or so we hope) to be released book on 65 - the SuSE Linux distribution. As there is no complete documentation for the 66 - /proc file system and we've used many freely available sources to write these 67 - chapters, it seems only fair to give the work back to the Linux community. 68 - This work is based on the 2.2.* kernel version and the upcoming 2.4.*. I'm 69 - afraid it's still far from complete, but we hope it will be useful. As far as 70 - we know, it is the first 'all-in-one' document about the /proc file system. It 71 - is focused on the Intel x86 hardware, so if you are looking for PPC, ARM, 72 - SPARC, AXP, etc., features, you probably won't find what you are looking for. 73 - It also only covers IPv4 networking, not IPv6 nor other protocols - sorry. But 74 - additions and patches are welcome and will be added to this document if you 75 - mail them to Bodo. 76 - 77 64 We'd like to thank Alan Cox, Rik van Riel, and Alexey Kuznetsov and a lot of 78 65 other people for help compiling this documentation. We'd also like to extend a 79 66 special thank you to Andi Kleen for documentation, which we relied on heavily ··· 68 81 Thanks to everybody else who contributed source or docs to the Linux kernel 69 82 and helped create a great piece of software... :) 70 83 71 - If you have any comments, corrections or additions, please don't hesitate to 72 - contact Bodo Bauer at bb@ricochet.net. We'll be happy to add them to this 73 - document. 74 - 75 84 The latest version of this document is available online at 76 85 https://www.kernel.org/doc/html/latest/filesystems/proc.html 77 - 78 - If the above direction does not works for you, you could try the kernel 79 - mailing list at linux-kernel@vger.kernel.org and/or try to reach me at 80 - comandante@zaralinux.com. 81 86 82 87 0.2 Legal Stuff 83 88 ---------------
+3 -3
Documentation/filesystems/propagate_umount.txt
··· 286 286 strip the "seen by Trim_ancestors" mark from m 287 287 remove m from the Candidates list 288 288 return 289 - 289 + 290 290 remove_this = false 291 291 found = false 292 292 for each n in children(m) ··· 312 312 } 313 313 314 314 Terminating condition in the loop in Trim_ancestors() is correct, 315 - since that that loop will never run into p belonging to U - p is always 315 + since that loop will never run into p belonging to U - p is always 316 316 an ancestor of argument of Trim_one() and since U is closed, the argument 317 317 of Trim_one() would also have to belong to U. But Trim_one() is never 318 318 called for elements of U. In other words, p belongs to S if and only ··· 361 361 Proof: suppose S was non-shifting, x is a locked element of S, parent of x 362 362 is not in S and S - {x} is not non-shifting. Then there is an element m 363 363 in S - {x} and a subtree mounted strictly inside m, such that m contains 364 - an element not in in S - {x}. Since S is non-shifting, everything in 364 + an element not in S - {x}. Since S is non-shifting, everything in 365 365 that subtree must belong to S. But that means that this subtree must 366 366 contain x somewhere *and* that parent of x either belongs that subtree 367 367 or is equal to m. Either way it must belong to S. Contradiction.
+1 -1
Documentation/filesystems/resctrl.rst
··· 769 769 depending on # of threads: 770 770 771 771 For the same SKU in #1, a 'single thread, with 10% bandwidth' and '4 772 - thread, with 10% bandwidth' can consume upto 10GBps and 40GBps although 772 + thread, with 10% bandwidth' can consume up to 10GBps and 40GBps although 773 773 they have same percentage bandwidth of 10%. This is simply because as 774 774 threads start using more cores in an rdtgroup, the actual bandwidth may 775 775 increase or vary although user specified bandwidth percentage is same.
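The resctrl hunk above makes a purely arithmetic point: the same percentage throttle permits total bandwidth proportional to the number of threads, because the percentage is enforced per core. A minimal sketch of that arithmetic (illustrative only, not kernel code; the 100 GBps per-core peak is an assumed figure chosen so the results match the 10 GBps / 40 GBps numbers quoted in the text):

```python
# Toy model of the resctrl memory-bandwidth observation: a bandwidth
# percentage caps each thread's traffic independently, so the total
# consumable bandwidth of an rdtgroup grows with its thread count.
PER_CORE_PEAK_GBPS = 100  # assumed per-core peak bandwidth for this sketch


def max_bandwidth_gbps(threads: int, percent: int) -> float:
    """Upper bound on total bandwidth for `threads` threads at `percent`%."""
    return threads * PER_CORE_PEAK_GBPS * percent / 100


print(max_bandwidth_gbps(1, 10))  # one thread at 10%  -> 10.0 GBps
print(max_bandwidth_gbps(4, 10))  # four threads at 10% -> 40.0 GBps
```

Both calls use the same 10% setting; only the thread count changes, which is exactly why the document warns that a percentage is not an absolute bandwidth guarantee.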
+728 -733
Documentation/filesystems/sharedsubtree.rst
··· 31 31 ----------- 32 32 33 33 Shared subtree provides four different flavors of mounts; struct vfsmount to be 34 - precise 35 - 36 - a. shared mount 37 - b. slave mount 38 - c. private mount 39 - d. unbindable mount 34 + precise: 40 35 41 36 42 - 2a) A shared mount can be replicated to as many mountpoints and all the 43 - replicas continue to be exactly same. 37 + a) A **shared mount** can be replicated to as many mountpoints as needed and all 38 + the replicas continue to be exactly the same. 44 39 45 - Here is an example: 40 + Here is an example: 46 41 47 - Let's say /mnt has a mount that is shared:: 42 + Let's say /mnt has a mount that is shared:: 48 43 49 - mount --make-shared /mnt 44 + # mount --make-shared /mnt 50 45 51 - Note: mount(8) command now supports the --make-shared flag, 52 - so the sample 'smount' program is no longer needed and has been 53 - removed. 46 + .. note:: 47 + mount(8) command now supports the --make-shared flag, 48 + so the sample 'smount' program is no longer needed and has been 49 + removed. 54 50 55 - :: 51 + :: 56 52 57 - # mount --bind /mnt /tmp 53 + # mount --bind /mnt /tmp 58 54 59 - The above command replicates the mount at /mnt to the mountpoint /tmp 60 - and the contents of both the mounts remain identical. 55 + The above command replicates the mount at /mnt to the mountpoint /tmp 56 + and the contents of both the mounts remain identical. 61 57 62 - :: 58 + :: 63 59 64 - #ls /mnt 65 - a b c 60 + # ls /mnt 61 + a b c 66 62 67 - #ls /tmp 68 - a b c 63 + # ls /tmp 64 + a b c 69 65 70 - Now let's say we mount a device at /tmp/a:: 66 + Now let's say we mount a device at /tmp/a:: 71 67 72 - # mount /dev/sd0 /tmp/a 68 + # mount /dev/sd0 /tmp/a 73 69 74 - #ls /tmp/a 75 - t1 t2 t3 70 + # ls /tmp/a 71 + t1 t2 t3 76 72 77 - #ls /mnt/a 78 - t1 t2 t3 73 + # ls /mnt/a 74 + t1 t2 t3 79 75 80 - Note that the mount has propagated to the mount at /mnt as well. 76 + Note that the mount has propagated to the mount at /mnt as well. 
81 77 82 - And the same is true even when /dev/sd0 is mounted on /mnt/a. The 83 - contents will be visible under /tmp/a too. 84 - 85 - 86 - 2b) A slave mount is like a shared mount except that mount and umount events 87 - only propagate towards it. 88 - 89 - All slave mounts have a master mount which is a shared. 90 - 91 - Here is an example: 92 - 93 - Let's say /mnt has a mount which is shared. 94 - # mount --make-shared /mnt 95 - 96 - Let's bind mount /mnt to /tmp 97 - # mount --bind /mnt /tmp 98 - 99 - the new mount at /tmp becomes a shared mount and it is a replica of 100 - the mount at /mnt. 101 - 102 - Now let's make the mount at /tmp; a slave of /mnt 103 - # mount --make-slave /tmp 104 - 105 - let's mount /dev/sd0 on /mnt/a 106 - # mount /dev/sd0 /mnt/a 107 - 108 - #ls /mnt/a 109 - t1 t2 t3 110 - 111 - #ls /tmp/a 112 - t1 t2 t3 113 - 114 - Note the mount event has propagated to the mount at /tmp 115 - 116 - However let's see what happens if we mount something on the mount at /tmp 117 - 118 - # mount /dev/sd1 /tmp/b 119 - 120 - #ls /tmp/b 121 - s1 s2 s3 122 - 123 - #ls /mnt/b 124 - 125 - Note how the mount event has not propagated to the mount at 126 - /mnt 78 + And the same is true even when /dev/sd0 is mounted on /mnt/a. The 79 + contents will be visible under /tmp/a too. 127 80 128 81 129 - 2c) A private mount does not forward or receive propagation. 82 + b) A **slave mount** is like a shared mount except that mount and umount events 83 + only propagate towards it. 130 84 131 - This is the mount we are familiar with. Its the default type. 85 + All slave mounts have a master mount which is shared. 86 + 87 + Here is an example: 88 + 89 + Let's say /mnt has a mount which is shared:: 90 + 91 + # mount --make-shared /mnt 92 + 93 + Let's bind mount /mnt to /tmp:: 94 + 95 + # mount --bind /mnt /tmp 96 + 97 + The new mount at /tmp becomes a shared mount and it is a replica of 98 + the mount at /mnt. 
99 + 100 + Now let's make the mount at /tmp a slave of /mnt:: 101 + 102 + # mount --make-slave /tmp 103 + 104 + Let's mount /dev/sd0 on /mnt/a:: 105 + 106 + # mount /dev/sd0 /mnt/a 107 + 108 + # ls /mnt/a 109 + t1 t2 t3 110 + 111 + # ls /tmp/a 112 + t1 t2 t3 113 + 114 + Note the mount event has propagated to the mount at /tmp 115 + 116 + However let's see what happens if we mount something on the mount at 117 + /tmp:: 118 + 119 + # mount /dev/sd1 /tmp/b 120 + 121 + # ls /tmp/b 122 + s1 s2 s3 123 + 124 + # ls /mnt/b 125 + 126 + Note how the mount event has not propagated to the mount at 127 + /mnt 132 128 133 129 134 - 2d) A unbindable mount is a unbindable private mount 130 + c) A **private mount** does not forward or receive propagation. 135 131 136 - let's say we have a mount at /mnt and we make it unbindable:: 132 + This is the mount we are familiar with. It's the default type. 137 133 138 - # mount --make-unbindable /mnt 134 139 - 135 + d) An **unbindable mount** is, as the name suggests, an unbindable private 140 - Let's try to bind mount this mount somewhere else:: 136 + mount. 141 137 142 - # mount --bind /mnt /tmp 143 - mount: wrong fs type, bad option, bad superblock on /mnt, 144 - or too many mounted file systems 138 + Let's say we have a mount at /mnt and we make it unbindable:: 145 139 146 - Binding a unbindable mount is a invalid operation. 140 + # mount --make-unbindable /mnt 141 + 142 + Let's try to bind mount this mount somewhere else:: 143 + 144 + # mount --bind /mnt /tmp 145 + mount: wrong fs type, bad option, bad superblock on /mnt, or too many mounted file systems 146 + 147 + Binding an unbindable mount is an invalid operation. 
147 148 148 149 149 150 3) Setting mount states 150 151 ----------------------- 151 152 152 - The mount command (util-linux package) can be used to set mount 153 - states:: 153 + The mount command (util-linux package) can be used to set mount 154 + states:: 154 155 155 - mount --make-shared mountpoint 156 - mount --make-slave mountpoint 157 - mount --make-private mountpoint 158 - mount --make-unbindable mountpoint 156 + mount --make-shared mountpoint 157 + mount --make-slave mountpoint 158 + mount --make-private mountpoint 159 + mount --make-unbindable mountpoint 159 160 160 161 161 162 4) Use cases 162 163 ------------ 163 164 164 - A) A process wants to clone its own namespace, but still wants to 165 - access the CD that got mounted recently. 165 + A) A process wants to clone its own namespace, but still wants to 166 + access the CD that got mounted recently. 166 167 167 - Solution: 168 + Solution: 168 169 169 - The system administrator can make the mount at /cdrom shared:: 170 + The system administrator can make the mount at /cdrom shared:: 170 171 171 - mount --bind /cdrom /cdrom 172 - mount --make-shared /cdrom 172 + mount --bind /cdrom /cdrom 173 + mount --make-shared /cdrom 173 174 174 - Now any process that clones off a new namespace will have a 175 - mount at /cdrom which is a replica of the same mount in the 176 - parent namespace. 175 + Now any process that clones off a new namespace will have a 176 + mount at /cdrom which is a replica of the same mount in the 177 + parent namespace. 177 178 178 - So when a CD is inserted and mounted at /cdrom that mount gets 179 - propagated to the other mount at /cdrom in all the other clone 180 - namespaces. 179 + So when a CD is inserted and mounted at /cdrom that mount gets 180 + propagated to the other mount at /cdrom in all the other clone 181 + namespaces. 181 182 182 - B) A process wants its mounts invisible to any other process, but 183 - still be able to see the other system mounts. 
183 + B) A process wants its mounts invisible to any other process, but 184 + still be able to see the other system mounts. 184 185 185 - Solution: 186 + Solution: 186 187 187 - To begin with, the administrator can mark the entire mount tree 188 - as shareable:: 188 + To begin with, the administrator can mark the entire mount tree 189 + as shareable:: 189 190 190 - mount --make-rshared / 191 + mount --make-rshared / 191 192 192 - A new process can clone off a new namespace. And mark some part 193 - of its namespace as slave:: 193 + A new process can clone off a new namespace. And mark some part 194 + of its namespace as slave:: 194 195 195 - mount --make-rslave /myprivatetree 196 + mount --make-rslave /myprivatetree 196 197 197 - Hence forth any mounts within the /myprivatetree done by the 198 - process will not show up in any other namespace. However mounts 199 - done in the parent namespace under /myprivatetree still shows 200 - up in the process's namespace. 198 + Henceforth any mounts within the /myprivatetree done by the 199 + process will not show up in any other namespace. However mounts 200 + done in the parent namespace under /myprivatetree still show 201 + up in the process's namespace. 201 202 202 203 203 - Apart from the above semantics this feature provides the 204 - building blocks to solve the following problems: 204 + Apart from the above semantics this feature provides the 205 + building blocks to solve the following problems: 205 206 206 - C) Per-user namespace 207 + C) Per-user namespace 207 208 208 - The above semantics allows a way to share mounts across 209 - namespaces. But namespaces are associated with processes. If 210 - namespaces are made first class objects with user API to 211 - associate/disassociate a namespace with userid, then each user 212 - could have his/her own namespace and tailor it to his/her 213 - requirements. This needs to be supported in PAM. 
209 + The above semantics allows a way to share mounts across 210 + namespaces. But namespaces are associated with processes. If 211 + namespaces are made first class objects with user API to 212 + associate/disassociate a namespace with userid, then each user 213 + could have his/her own namespace and tailor it to his/her 214 + requirements. This needs to be supported in PAM. 214 215 215 - D) Versioned files 216 + D) Versioned files 216 217 217 - If the entire mount tree is visible at multiple locations, then 218 - an underlying versioning file system can return different 219 - versions of the file depending on the path used to access that 220 - file. 218 + If the entire mount tree is visible at multiple locations, then 219 + an underlying versioning file system can return different 220 + versions of the file depending on the path used to access that 221 + file. 221 222 222 - An example is:: 223 + An example is:: 223 224 224 - mount --make-shared / 225 - mount --rbind / /view/v1 226 - mount --rbind / /view/v2 227 - mount --rbind / /view/v3 228 - mount --rbind / /view/v4 225 + mount --make-shared / 226 + mount --rbind / /view/v1 227 + mount --rbind / /view/v2 228 + mount --rbind / /view/v3 229 + mount --rbind / /view/v4 229 230 230 - and if /usr has a versioning filesystem mounted, then that 231 - mount appears at /view/v1/usr, /view/v2/usr, /view/v3/usr and 232 - /view/v4/usr too 231 + and if /usr has a versioning filesystem mounted, then that 232 + mount appears at /view/v1/usr, /view/v2/usr, /view/v3/usr and 233 + /view/v4/usr too 233 234 234 - A user can request v3 version of the file /usr/fs/namespace.c 235 - by accessing /view/v3/usr/fs/namespace.c . The underlying 236 - versioning filesystem can then decipher that v3 version of the 237 - filesystem is being requested and return the corresponding 238 - inode. 235 + A user can request v3 version of the file /usr/fs/namespace.c 236 + by accessing /view/v3/usr/fs/namespace.c . 
The underlying 237 + versioning filesystem can then decipher that v3 version of the 238 + filesystem is being requested and return the corresponding 239 + inode. 239 240 240 241 5) Detailed semantics 241 242 --------------------- 242 - The section below explains the detailed semantics of 243 - bind, rbind, move, mount, umount and clone-namespace operations. 243 + The section below explains the detailed semantics of 244 + bind, rbind, move, mount, umount and clone-namespace operations. 244 245 245 - Note: the word 'vfsmount' and the noun 'mount' have been used 246 - to mean the same thing, throughout this document. 246 + .. note:: 247 + the word 'vfsmount' and the noun 'mount' have been used 248 + to mean the same thing, throughout this document. 247 249 248 - 5a) Mount states 250 + a) Mount states 249 251 250 - A given mount can be in one of the following states 252 + A **propagation event** is defined as an event generated on a vfsmount 253 + that leads to mount or unmount actions in other vfsmounts. 251 254 252 - 1) shared 253 - 2) slave 254 - 3) shared and slave 255 - 4) private 256 - 5) unbindable 255 + A **peer group** is defined as a group of vfsmounts that propagate 256 + events to each other. 257 257 258 - A 'propagation event' is defined as event generated on a vfsmount 259 - that leads to mount or unmount actions in other vfsmounts. 258 + A given mount can be in one of the following states: 260 259 261 - A 'peer group' is defined as a group of vfsmounts that propagate 262 - events to each other. 260 + (1) Shared mounts 263 261 264 - (1) Shared mounts 262 + A **shared mount** is defined as a vfsmount that belongs to a 263 + peer group. 
265 + For example:: 268 266 269 - For example:: 267 + mount --make-shared /mnt 268 + mount --bind /mnt /tmp 270 269 271 - mount --make-shared /mnt 272 - mount --bind /mnt /tmp 270 + The mount at /mnt and that at /tmp are both shared and belong 271 + to the same peer group. Anything mounted or unmounted under 272 + /mnt or /tmp reflects in all the other mounts of its peer 273 + group. 273 274 274 - The mount at /mnt and that at /tmp are both shared and belong 275 - to the same peer group. Anything mounted or unmounted under 276 - /mnt or /tmp reflect in all the other mounts of its peer 277 - group. 278 275 276 + (2) Slave mounts 279 277 280 - (2) Slave mounts 278 + A **slave mount** is defined as a vfsmount that receives 279 + propagation events and does not forward propagation events. 281 280 282 - A 'slave mount' is defined as a vfsmount that receives 283 - propagation events and does not forward propagation events. 281 + A slave mount as the name implies has a master mount from which 282 + mount/unmount events are received. Events do not propagate from 283 + the slave mount to the master. Only a shared mount can be made 284 + a slave by executing the following command:: 284 285 285 - A slave mount as the name implies has a master mount from which 286 - mount/unmount events are received. Events do not propagate from 287 - the slave mount to the master. Only a shared mount can be made 288 - a slave by executing the following command:: 286 + mount --make-slave mount 289 287 290 - mount --make-slave mount 288 + A shared mount that is made as a slave is no longer shared unless 289 + modified to become shared. 291 290 292 - A shared mount that is made as a slave is no more shared unless 293 - modified to become shared. 291 + (3) Shared and Slave 294 292 295 - (3) Shared and Slave 293 + A vfsmount can be both **shared** as well as **slave**. This state 294 + indicates that the mount is a slave of some vfsmount, and 295 + has its own peer group too. 
       This vfsmount receives propagation
       events from its master vfsmount, and also forwards propagation
       events to its peer group and to its slave vfsmounts.

       Strictly speaking, the vfsmount is shared, having its own
       peer group, and this peer group is a slave of some other
       peer group.

       Only a slave vfsmount can be made 'shared and slave', either by
       executing the following command::

           mount --make-shared mount

       or by moving the slave vfsmount under a shared vfsmount.

   (4) Private mount

       A **private mount** is defined as a vfsmount that does not
       receive or forward any propagation events.

   (5) Unbindable mount

       An **unbindable mount** is defined as a vfsmount that does not
       receive or forward any propagation events and cannot
       be bind mounted.
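   These states can be observed from userspace. The sketch below is a
   minimal, hypothetical demo (paths under /tmp are made up for
   illustration); it assumes util-linux ``unshare`` and a kernel with
   unprivileged user namespaces enabled, so nothing on the host is
   modified. It creates a shared mount, reads its propagation tag from
   /proc/self/mountinfo, then slaves it:

   .. code-block:: shell

      # Runs in a throwaway user + mount namespace; no root needed if
      # unprivileged user namespaces are available.
      unshare -rm sh -c '
        mkdir -p /tmp/s
        mount --bind /tmp/s /tmp/s          # starts out private (no tags)
        mount --make-shared /tmp/s          # joins a fresh peer group
        grep " /tmp/s " /proc/self/mountinfo | grep -o "shared:[0-9]*"
        mount --make-slave /tmp/s           # sole member of its peer group,
                                            # so it falls back to private
        grep " /tmp/s " /proc/self/mountinfo | grep -cE "shared:|master:" || true
      '

   The first grep prints the peer-group tag (e.g. ``shared:42``); after
   ``--make-slave`` the mount has no master left, so the second grep
   counts zero propagation tags.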
   State diagram:

   The state diagram below explains the state transition of a mount,
   in response to various commands::

     ------------------------------------------------------------------------
     |             |make-shared | make-slave   | make-private |make-unbindable|
     |-------------|------------|--------------|--------------|---------------|
     |shared       |shared      |*slave/private|   private    |  unbindable   |
     |             |            |              |              |               |
     |-------------|------------|--------------|--------------|---------------|
     |slave        |shared      |   **slave    |   private    |  unbindable   |
     |             |and slave   |              |              |               |
     |-------------|------------|--------------|--------------|---------------|
     |shared       |shared      |    slave     |   private    |  unbindable   |
     |and slave    |and slave   |              |              |               |
     |-------------|------------|--------------|--------------|---------------|
     |private      |shared      |  **private   |   private    |  unbindable   |
     |-------------|------------|--------------|--------------|---------------|
     |unbindable   |shared      | **unbindable |   private    |  unbindable   |
     ------------------------------------------------------------------------
   * If the shared mount is the only mount in its peer group, making it
     a slave makes it private automatically. Note that there is no master
     to which it can be slaved.

   ** Slaving a non-shared mount has no effect on the mount.

   Apart from the commands listed above, the 'move' operation also changes
   the state of a mount depending on the type of the destination mount. It
   is explained in section d) below.

b) Bind semantics

   Consider the following command::

       mount --bind A/a B/b

   where 'A' is the source mount, 'a' is the dentry in the mount 'A', 'B'
   is the destination mount and 'b' is the dentry in the destination mount.

   The outcome depends on the type of mount of 'A' and 'B'. The table
   below is a quick reference::

     --------------------------------------------------------------------------
     |                        BIND MOUNT OPERATION                            |
     |************************************************************************|
     |source(A)->| shared      |    private    |     slave      | unbindable  |
     | dest(B)   |             |               |                |             |
     |    |      |             |               |                |             |
     |    v      |             |               |                |             |
     |************************************************************************|
     |  shared   | shared      |    shared     | shared & slave |   invalid   |
     |           |             |               |                |             |
     |non-shared | shared      |    private    |     slave      |   invalid   |
     **************************************************************************

   Details:
   1. 'A' is a shared mount and 'B' is a shared mount. A new mount 'C',
      which is a clone of 'A', is created. Its root dentry is 'a'. 'C' is
      mounted on mount 'B' at dentry 'b'. Also new mounts 'C1', 'C2', 'C3' ...
      are created and mounted at the dentry 'b' on all mounts to which 'B'
      propagates. A new propagation tree containing 'C1', ..., 'Cn' is
      created. This propagation tree is identical to the propagation tree of
      'B'. And finally the peer group of 'C' is merged with the peer group
      of 'A'.

   2. 'A' is a private mount and 'B' is a shared mount. A new mount 'C',
      which is a clone of 'A', is created. Its root dentry is 'a'. 'C' is
      mounted on mount 'B' at dentry 'b'. Also new mounts 'C1', 'C2', 'C3' ...
      are created and mounted at the dentry 'b' on all mounts to which 'B'
      propagates. A new propagation tree is set up containing all new mounts
      'C', 'C1', ..., 'Cn' with exactly the same configuration as the
      propagation tree for 'B'.

   3. 'A' is a slave mount of mount 'Z' and 'B' is a shared mount. A new
      mount 'C', which is a clone of 'A', is created. Its root dentry is 'a'.
      'C' is mounted on mount 'B' at dentry 'b'. Also new mounts 'C1', 'C2',
      'C3' ... are created and mounted at the dentry 'b' on all mounts to
      which 'B' propagates. A new propagation tree containing the new mounts
      'C', 'C1', ..., 'Cn' is created. This propagation tree is identical to
      the propagation tree for 'B'. And finally the mount 'C' and its peer
      group are made the slave of mount 'Z'. In other words, mount 'C' is in
      the state 'slave and shared'.

   4. 'A' is an unbindable mount and 'B' is a shared mount. This is an
      invalid operation.

   5. 'A' is a private mount and 'B' is a non-shared (private or slave or
      unbindable) mount. A new mount 'C', which is a clone of 'A', is created.
      Its root dentry is 'a'. 'C' is mounted on mount 'B' at dentry 'b'.

   6. 'A' is a shared mount and 'B' is a non-shared mount. A new mount 'C',
      which is a clone of 'A', is created. Its root dentry is 'a'. 'C' is
      mounted on mount 'B' at dentry 'b'. 'C' is made a member of the
      peer group of 'A'.

   7. 'A' is a slave mount of mount 'Z' and 'B' is a non-shared mount. A
      new mount 'C', which is a clone of 'A', is created. Its root dentry is
      'a'. 'C' is mounted on mount 'B' at dentry 'b'. Also 'C' is made a
      slave mount of 'Z'. In other words, 'A' and 'C' are both slave mounts
      of 'Z'. All mount/unmount events on 'Z' propagate to 'A' and 'C', but
      mounts/unmounts on 'A' do not propagate anywhere else, and similarly
      mounts/unmounts on 'C' do not propagate anywhere else.

   8. 'A' is an unbindable mount and 'B' is a non-shared mount. This is an
      invalid operation. An unbindable mount cannot be bind mounted.

c) Rbind semantics

   Rbind is the same as bind, except that bind replicates only the
   specified mount, while rbind replicates all the mounts in the tree
   belonging to the specified mount. An rbind mount is a bind mount
   applied to all the mounts in the tree.

   If the source tree being rbound contains unbindable mounts,
   then the subtree under each unbindable mount is pruned in the new
   location.

   For example, let's say we have the following mount tree::

                A
               / \
              B   C
             / \ / \
            D  E F  G

   Let's say all the mounts except mount C in the tree are
   of a type other than unbindable.

   If this tree is rbound to, say, Z, we will have the following tree
   at the new location::

                Z
                |
                A'
               /
              B'          Note how the tree under C is pruned
             / \          in the new location.
            D' E'
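   This pruning can be reproduced in a scratch mount namespace. The sketch
   below uses made-up paths under /tmp/demo and assumes ``unshare -rm``
   (unprivileged user namespaces); mount "C" is made unbindable, and after
   the rbind only "B" shows up at the new location:

   .. code-block:: shell

      unshare -rm sh -c '
        mkdir -p /tmp/demo/A/B /tmp/demo/A/C /tmp/demo/Z
        mount --bind /tmp/demo/A /tmp/demo/A      # tree root "A"
        mount --bind /tmp/demo/A/B /tmp/demo/A/B  # child mount "B"
        mount --bind /tmp/demo/A/C /tmp/demo/A/C  # child mount "C"
        mount --make-unbindable /tmp/demo/A/C
        mount --rbind /tmp/demo/A /tmp/demo/Z
        # Count replicas at the new location: B is replicated,
        # the unbindable C is pruned (field 5 is the mount point).
        awk "\$5 == \"/tmp/demo/Z/B\" { b++ }
             \$5 == \"/tmp/demo/Z/C\" { c++ }
             END { print b+0, c+0 }" /proc/self/mountinfo
      '

   The expected output is ``1 0``: one mount replicated under Z/B, none
   under Z/C.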
d) Move semantics

   Consider the following command::

       mount --move A B/b

   where 'A' is the source mount, 'B' is the destination mount and 'b' is
   the dentry in the destination mount.

   The outcome depends on the type of the mount of 'A' and 'B'. The table
   below is a quick reference::

     ---------------------------------------------------------------------------
     |                        MOVE MOUNT OPERATION                             |
     |**************************************************************************
     |source(A)->| shared      |    private    |      slave     | unbindable   |
     | dest(B)   |             |               |                |              |
     |    |      |             |               |                |              |
     |    v      |             |               |                |              |
     |**************************************************************************
     |  shared   | shared      |    shared     |shared and slave|   invalid    |
     |           |             |               |                |              |
     |non-shared | shared      |    private    |      slave     |  unbindable  |
     ***************************************************************************

   .. Note:: Moving a mount residing under a shared mount is invalid.

   Details follow:
   1. 'A' is a shared mount and 'B' is a shared mount. The mount 'A' is
      mounted on mount 'B' at dentry 'b'. Also new mounts 'A1', 'A2' ... 'An'
      are created and mounted at dentry 'b' on all mounts that receive
      propagation from mount 'B'. A new propagation tree is created in the
      exact same configuration as that of 'B'. This new propagation tree
      contains all the new mounts 'A1', 'A2' ... 'An'. And this new
      propagation tree is appended to the already existing propagation tree
      of 'A'.

   2. 'A' is a private mount and 'B' is a shared mount. The mount 'A' is
      mounted on mount 'B' at dentry 'b'. Also new mounts 'A1', 'A2' ... 'An'
      are created and mounted at dentry 'b' on all mounts that receive
      propagation from mount 'B'. The mount 'A' becomes a shared mount and a
      propagation tree is created which is identical to that of 'B'. This
      new propagation tree contains all the new mounts 'A1', 'A2' ... 'An'.

   3. 'A' is a slave mount of mount 'Z' and 'B' is a shared mount. The
      mount 'A' is mounted on mount 'B' at dentry 'b'. Also new mounts 'A1',
      'A2' ... 'An' are created and mounted at dentry 'b' on all mounts that
      receive propagation from mount 'B'. A new propagation tree is created
      in the exact same configuration as that of 'B'. This new propagation
      tree contains all the new mounts 'A1', 'A2' ... 'An'. And this new
      propagation tree is appended to the already existing propagation tree
      of 'A'. Mount 'A' continues to be the slave mount of 'Z', but it also
      becomes 'shared'.

   4. 'A' is an unbindable mount and 'B' is a shared mount. The operation
      is invalid, because mounting anything on the shared mount 'B' can
      create new mounts that get mounted on the mounts that receive
      propagation from 'B', and since the mount 'A' is unbindable, cloning
      it to mount at other mountpoints is not possible.

   5. 'A' is a private mount and 'B' is a non-shared (private or slave or
      unbindable) mount. The mount 'A' is mounted on mount 'B' at dentry 'b'.

   6. 'A' is a shared mount and 'B' is a non-shared mount. The mount 'A'
      is mounted on mount 'B' at dentry 'b'. Mount 'A' continues to be a
      shared mount.

   7. 'A' is a slave mount of mount 'Z' and 'B' is a non-shared mount.
      The mount 'A' is mounted on mount 'B' at dentry 'b'. Mount 'A'
      continues to be a slave mount of mount 'Z'.

   8. 'A' is an unbindable mount and 'B' is a non-shared mount. The mount
      'A' is mounted on mount 'B' at dentry 'b'. Mount 'A' continues to be
      an unbindable mount.

e) Mount semantics

   Consider the following command::

       mount device B/b

   'B' is the destination mount and 'b' is the dentry in the destination
   mount.

   The above operation is the same as a bind operation, with the exception
   that the source mount is always a private mount.


f) Unmount semantics

   Consider the following command::

       umount A

   where 'A' is a mount mounted on mount 'B' at dentry 'b'.

   If mount 'B' is shared, then all most-recently-mounted mounts at dentry
   'b' on mounts that receive propagation from mount 'B', and that do not
   have sub-mounts within them, are unmounted.

   Example: Let's say 'B1', 'B2', 'B3' are shared mounts that propagate to
   each other.

   Let's say 'A1', 'A2', 'A3' are first mounted at dentry 'b' on mounts
   'B1', 'B2' and 'B3' respectively.
   Let's say 'C1', 'C2', 'C3' are next mounted at the same dentry 'b' on
   mounts 'B1', 'B2' and 'B3' respectively.

   If 'C1' is unmounted, all the mounts that are most recently mounted on
   'B1' and on the mounts that 'B1' propagates to are unmounted.

   'B1' propagates to 'B2' and 'B3'. And the most recently mounted mount
   on 'B2' at dentry 'b' is 'C2', and that of mount 'B3' is 'C3'.

   So 'C1', 'C2' and 'C3' should all be unmounted.

   If 'C2' or 'C3' has some child mounts, then that mount is not
   unmounted, but all the other mounts are unmounted. However, if 'C1' is
   told to be unmounted and 'C1' has some sub-mounts, the umount operation
   fails entirely.

g) Clone Namespace

   A cloned namespace contains the same mounts as its parent
   namespace.

   Let's say 'A' and 'B' are the corresponding mounts in the parent and
   the child namespace.

   If 'A' is shared, then 'B' is also shared, and 'A' and 'B' propagate to
   each other.

   If 'A' is a slave mount of 'Z', then 'B' is also a slave mount of
   'Z'.

   If 'A' is a private mount, then 'B' is a private mount too.

   If 'A' is an unbindable mount, then 'B' is an unbindable mount too.


6) Quiz
-------

A. What is the result of the following command sequence?

   ::

       mount --bind /mnt /mnt
       mount --make-shared /mnt
       mount --bind /mnt /tmp
       mount --move /tmp /mnt/1

   What should the contents of /mnt, /mnt/1 and /mnt/1/1 be?
   Should they all be identical? Or should only /mnt and /mnt/1 be
   identical?


B. What is the result of the following command sequence?

   ::

       mount --make-rshared /
       mkdir -p /v/1
       mount --rbind / /v/1

   What should the content of /v/1/v/1 be?


C. What is the result of the following command sequence?
   ::

       mount --bind /mnt /mnt
       mount --make-shared /mnt
       mkdir -p /mnt/1/2/3 /mnt/1/test
       mount --bind /mnt/1 /tmp
       mount --make-slave /mnt
       mount --make-shared /mnt
       mount --bind /mnt/1/2 /tmp1
       mount --make-slave /mnt

   At this point we have the first mount at /tmp, and
   its root dentry is 1. Let's call this mount 'A'.
   Then we have a second mount at /tmp1 with root
   dentry 2. Let's call this mount 'B'.
   Next we have a third mount at /mnt with root dentry
   mnt. Let's call this mount 'C'.

   'B' is the slave of 'A', and 'C' is a slave of 'B':
   A -> B -> C

   At this point if we execute the following command::

       mount --bind /bin /tmp/test

   the mount is attempted on 'A'.

   Will the mount propagate to 'B' and 'C'?

   What would the contents of /mnt/1/test be?

7) FAQ
------

1. Why is bind mount needed? How is it different from symbolic links?

   Symbolic links can get stale if the destination mount gets
   unmounted or moved. Bind mounts continue to exist even if the
   other mount is unmounted or moved.

2. Why can't the shared subtree be implemented using exportfs?

   exportfs is a heavyweight way of accomplishing part of what
   a shared subtree can do. I cannot imagine a way to implement the
   semantics of slave mounts using exportfs.

3. Why is an unbindable mount needed?

   Let's say we want to replicate the mount tree at multiple
   locations within the same subtree.

   If one rbind mounts a tree within the same subtree 'n' times,
   the number of mounts created grows factorially with 'n'.
   Having unbindable mounts can help prune the unneeded bind
   mounts. Here is an example.
704 697 705 - root 706 - / \ 707 - tmp usr 698 + step 1: 699 + let's say the root tree has just two directories with 700 + one vfsmount:: 708 701 709 - And we want to replicate the tree at multiple 710 - mountpoints under /root/tmp 702 + root 703 + / \ 704 + tmp usr 711 705 712 - step 2: 713 - :: 706 + And we want to replicate the tree at multiple 707 + mountpoints under /root/tmp 714 708 715 - 716 - mount --make-shared /root 717 - 718 - mkdir -p /tmp/m1 719 - 720 - mount --rbind /root /tmp/m1 721 - 722 - the new tree now looks like this:: 723 - 724 - root 725 - / \ 726 - tmp usr 727 - / 728 - m1 729 - / \ 730 - tmp usr 731 - / 732 - m1 733 - 734 - it has two vfsmounts 735 - 736 - step 3: 737 - :: 738 - 739 - mkdir -p /tmp/m2 740 - mount --rbind /root /tmp/m2 741 - 742 - the new tree now looks like this:: 743 - 744 - root 745 - / \ 746 - tmp usr 747 - / \ 748 - m1 m2 749 - / \ / \ 750 - tmp usr tmp usr 751 - / \ / 752 - m1 m2 m1 753 - / \ / \ 754 - tmp usr tmp usr 755 - / / \ 756 - m1 m1 m2 757 - / \ 758 - tmp usr 759 - / \ 760 - m1 m2 761 - 762 - it has 6 vfsmounts 763 - 764 - step 4: 765 - :: 766 - mkdir -p /tmp/m3 767 - mount --rbind /root /tmp/m3 768 - 769 - I won't draw the tree..but it has 24 vfsmounts 709 + step 2: 710 + :: 770 711 771 712 772 - at step i the number of vfsmounts is V[i] = i*V[i-1]. 773 - This is an exponential function. And this tree has way more 774 - mounts than what we really needed in the first place. 713 + mount --make-shared /root 775 714 776 - One could use a series of umount at each step to prune 777 - out the unneeded mounts. But there is a better solution. 778 - Unclonable mounts come in handy here. 
715 + mkdir -p /tmp/m1 779 716 780 - step 1: 781 - let's say the root tree has just two directories with 782 - one vfsmount:: 717 + mount --rbind /root /tmp/m1 783 718 784 - root 785 - / \ 786 - tmp usr 719 + the new tree now looks like this:: 787 720 788 - How do we set up the same tree at multiple locations under 789 - /root/tmp 721 + root 722 + / \ 723 + tmp usr 724 + / 725 + m1 726 + / \ 727 + tmp usr 728 + / 729 + m1 790 730 791 - step 2: 792 - :: 731 + it has two vfsmounts 732 + 733 + step 3: 734 + :: 735 + 736 + mkdir -p /tmp/m2 737 + mount --rbind /root /tmp/m2 738 + 739 + the new tree now looks like this:: 740 + 741 + root 742 + / \ 743 + tmp usr 744 + / \ 745 + m1 m2 746 + / \ / \ 747 + tmp usr tmp usr 748 + / \ / 749 + m1 m2 m1 750 + / \ / \ 751 + tmp usr tmp usr 752 + / / \ 753 + m1 m1 m2 754 + / \ 755 + tmp usr 756 + / \ 757 + m1 m2 758 + 759 + it has 6 vfsmounts 760 + 761 + step 4: 762 + :: 763 + 764 + mkdir -p /tmp/m3 765 + mount --rbind /root /tmp/m3 766 + 767 + I won't draw the tree..but it has 24 vfsmounts 793 768 794 769 795 - mount --bind /root/tmp /root/tmp 770 + at step i the number of vfsmounts is V[i] = i*V[i-1]. 771 + This is an exponential function. And this tree has way more 772 + mounts than what we really needed in the first place. 796 773 797 - mount --make-rshared /root 798 - mount --make-unbindable /root/tmp 774 + One could use a series of umount at each step to prune 775 + out the unneeded mounts. But there is a better solution. 776 + Unclonable mounts come in handy here. 
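The growth claimed in the walkthrough above (2, 6, then 24 vfsmounts, i.e. V[i] = i*V[i-1]) can be sanity-checked with a tiny recurrence; this is an illustrative Python model of the counting argument, not kernel code:

```python
# Model of the vfsmount blow-up described in the sharedsubtree text:
# each "mount --rbind /root /tmp/mN" step multiplies the existing
# mount count by the step number, so V[i] = i * V[i-1] -- i.e. i!.
from math import factorial

def vfsmounts_after(steps: int) -> int:
    v = 1  # step 1: the lone root vfsmount
    for i in range(2, steps + 1):
        v = i * v  # V[i] = i * V[i-1]
    return v

counts = [vfsmounts_after(i) for i in (2, 3, 4)]
print(counts)  # matches the walkthrough: 2, 6 and 24 vfsmounts
assert counts == [2, 6, 24]
assert vfsmounts_after(10) == factorial(10)  # V[i] is simply i!
```

Ten replication steps would already create 3,628,800 mounts, which is why pruning via unbindable mounts matters.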
 
-		mkdir -p /tmp/m1
+	step 1:
+	   let's say the root tree has just two directories with
+	   one vfsmount::
 
-		mount --rbind /root /tmp/m1
+		    root
+		   /    \
+		  tmp    usr
 
-	the new tree now looks like this::
+	How do we set up the same tree at multiple locations under
+	/root/tmp
 
-		    root
-		   /    \
-		  tmp    usr
-		 /
-		m1
-	       /  \
-	      tmp  usr
+	step 2:
+	   ::
 
-	step 3:
-	   ::
 
-	    mkdir -p /tmp/m2
-	    mount --rbind /root /tmp/m2
+		mount --bind /root/tmp /root/tmp
 
-	the new tree now looks like this::
+		mount --make-rshared /root
+		mount --make-unbindable /root/tmp
 
-		    root
-		   /    \
-		  tmp    usr
-		 /    \
-		m1     m2
-	       /  \   /  \
-	     tmp  usr tmp usr
+		mkdir -p /tmp/m1
 
-	step 4:
-	   ::
+		mount --rbind /root /tmp/m1
 
-	    mkdir -p /tmp/m3
-	    mount --rbind /root /tmp/m3
+	the new tree now looks like this::
 
-	the new tree now looks like this::
+		    root
+		   /    \
+		  tmp    usr
+		 /
+		m1
+	       /  \
+	      tmp  usr
 
-		    root
-		   /    \
-		  tmp    usr
-		 /   \     \
-		m1    m2    m3
-	       /  \  /  \  /  \
-	     tmp usr tmp usr tmp usr
+	step 3:
+	   ::
+
+	    mkdir -p /tmp/m2
+	    mount --rbind /root /tmp/m2
+
+	the new tree now looks like this::
+
+		    root
+		   /    \
+		  tmp    usr
+		 /    \
+		m1     m2
+	       /  \   /  \
+	     tmp  usr tmp usr
+
+	step 4:
+	   ::
+
+	    mkdir -p /tmp/m3
+	    mount --rbind /root /tmp/m3
+
+	the new tree now looks like this::
+
+		    root
+		   /    \
+		  tmp    usr
+		 /   \     \
+		m1    m2    m3
+	       /  \  /  \  /  \
+	     tmp usr tmp usr tmp usr
 
 8) Implementation
 -----------------
 
-8A) Datastructure
+A) Datastructure
 
-4 new fields are introduced to struct vfsmount:
+Several new fields are introduced to struct vfsmount:
 
-	* ->mnt_share
-	* ->mnt_slave_list
-	* ->mnt_slave
-	* ->mnt_master
+	->mnt_share
+		Links together all the mount to/from which this vfsmount
+		send/receives propagation events.
 
-	->mnt_share
-		links together all the mount to/from which this vfsmount
-		send/receives propagation events.
+	->mnt_slave_list
+		Links all the mounts to which this vfsmount propagates
+		to.
 
-	->mnt_slave_list
-		links all the mounts to which this vfsmount propagates
-		to.
+	->mnt_slave
+		Links together all the slaves that its master vfsmount
+		propagates to.
 
-	->mnt_slave
-		links together all the slaves that its master vfsmount
-		propagates to.
+	->mnt_master
+		Points to the master vfsmount from which this vfsmount
+		receives propagation.
 
-	->mnt_master
-		points to the master vfsmount from which this vfsmount
-		receives propagation.
+	->mnt_flags
+		Takes two more flags to indicate the propagation status of
+		the vfsmount. MNT_SHARE indicates that the vfsmount is a shared
+		vfsmount. MNT_UNCLONABLE indicates that the vfsmount cannot be
+		replicated.
 
-	->mnt_flags
-		takes two more flags to indicate the propagation status of
-		the vfsmount. MNT_SHARE indicates that the vfsmount is a shared
-		vfsmount. MNT_UNCLONABLE indicates that the vfsmount cannot be
-		replicated.
+All the shared vfsmounts in a peer group form a cyclic list through
+->mnt_share.
 
-All the shared vfsmounts in a peer group form a cyclic list through
-->mnt_share.
+All vfsmounts with the same ->mnt_master form on a cyclic list anchored
+in ->mnt_master->mnt_slave_list and going through ->mnt_slave.
 
-All vfsmounts with the same ->mnt_master form on a cyclic list anchored
-in ->mnt_master->mnt_slave_list and going through ->mnt_slave.
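The unbindable-mount recipe shown earlier (bind /root/tmp onto itself, then mark it unbindable) keeps the per-step trees from recursing, so each rbind adds exactly one copy. A hedged Python sketch contrasting the two growth curves; the counting model is mine, not taken from the kernel:

```python
# Mount-count comparison for the two sharedsubtree examples above.
# Without unbindable: every existing replica is replicated again.
# With /root/tmp unbindable: the copy is pruned at tmp, one new copy per step.
def mounts_without_unbindable(steps: int) -> int:
    v = 1
    for i in range(2, steps + 1):
        v *= i  # V[i] = i * V[i-1], factorial growth
    return v

def mounts_with_unbindable(steps: int) -> int:
    return steps  # V[i] = i, linear growth

for i in (2, 3, 4):
    assert mounts_with_unbindable(i) == i
assert mounts_without_unbindable(4) == 24
assert mounts_with_unbindable(4) == 4
```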
+->mnt_master can point to arbitrary (and possibly different) members
+of master peer group. To find all immediate slaves of a peer group
+you need to go through _all_ ->mnt_slave_list of its members.
+Conceptually it's just a single set - distribution among the
+individual lists does not affect propagation or the way propagation
+tree is modified by operations.
 
-->mnt_master can point to arbitrary (and possibly different) members
-of master peer group. To find all immediate slaves of a peer group
-you need to go through _all_ ->mnt_slave_list of its members.
-Conceptually it's just a single set - distribution among the
-individual lists does not affect propagation or the way propagation
-tree is modified by operations.
+All vfsmounts in a peer group have the same ->mnt_master. If it is
+non-NULL, they form a contiguous (ordered) segment of slave list.
 
-All vfsmounts in a peer group have the same ->mnt_master. If it is
-non-NULL, they form a contiguous (ordered) segment of slave list.
+A example propagation tree looks as shown in the figure below.
 
-A example propagation tree looks as shown in the figure below.
-[ NOTE: Though it looks like a forest, if we consider all the shared
-mounts as a conceptual entity called 'pnode', it becomes a tree]::
+.. note::
+   Though it looks like a forest, if we consider all the shared
+   mounts as a conceptual entity called 'pnode', it becomes a tree.
 
-
-	  A <--> B <--> C <---> D
-	 /|\           /|       |\
-	/ F G         J K       H I
-       /
-      E<-->K
-	 /|\
-	M L N
-
-In the above figure A,B,C and D all are shared and propagate to each
-other. 'A' has got 3 slave mounts 'E' 'F' and 'G' 'C' has got 2 slave
-mounts 'J' and 'K' and 'D' has got two slave mounts 'H' and 'I'.
-'E' is also shared with 'K' and they propagate to each other. And
-'K' has 3 slaves 'M', 'L' and 'N'
-
-A's ->mnt_share links with the ->mnt_share of 'B' 'C' and 'D'
-
-A's ->mnt_slave_list links with ->mnt_slave of 'E', 'K', 'F' and 'G'
-
-E's ->mnt_share links with ->mnt_share of K
-
-'E', 'K', 'F', 'G' have their ->mnt_master point to struct vfsmount of 'A'
-
-'M', 'L', 'N' have their ->mnt_master point to struct vfsmount of 'K'
-
-K's ->mnt_slave_list links with ->mnt_slave of 'M', 'L' and 'N'
-
-C's ->mnt_slave_list links with ->mnt_slave of 'J' and 'K'
-
-J and K's ->mnt_master points to struct vfsmount of C
-
-and finally D's ->mnt_slave_list links with ->mnt_slave of 'H' and 'I'
-
-'H' and 'I' have their ->mnt_master pointing to struct vfsmount of 'D'.
+::
 
 
-NOTE: The propagation tree is orthogonal to the mount tree.
+	  A <--> B <--> C <---> D
+	 /|\           /|       |\
+	/ F G         J K       H I
+       /
+      E<-->K
+	 /|\
+	M L N
 
-8B Locking:
+In the above figure A,B,C and D all are shared and propagate to each
+other. 'A' has got 3 slave mounts 'E' 'F' and 'G' 'C' has got 2 slave
+mounts 'J' and 'K' and 'D' has got two slave mounts 'H' and 'I'.
+'E' is also shared with 'K' and they propagate to each other. And
+'K' has 3 slaves 'M', 'L' and 'N'
 
-->mnt_share, ->mnt_slave, ->mnt_slave_list, ->mnt_master are protected
-by namespace_sem (exclusive for modifications, shared for reading).
+A's ->mnt_share links with the ->mnt_share of 'B' 'C' and 'D'
 
-Normally we have ->mnt_flags modifications serialized by vfsmount_lock.
-There are two exceptions: do_add_mount() and clone_mnt().
-The former modifies a vfsmount that has not been visible in any shared
-data structures yet.
-The latter holds namespace_sem and the only references to vfsmount
-are in lists that can't be traversed without namespace_sem.
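The peer-group/slave linkage in the figure can be modelled in a few lines. This is a simplified Python subset of the diagram (only A's peer group, its slaves, and K's slaves; names and helper functions are invented here, nothing below is kernel code):

```python
# Toy model of propagation linkage: peers correspond to the ->mnt_share
# cycle, .master to ->mnt_master. Propagation from a mount reaches its
# peers plus, recursively, every slave of the peer group.
class Mount:
    def __init__(self, name, master=None):
        self.name, self.master, self.peers = name, master, set()

def make_peer_group(*mounts):
    for m in mounts:
        m.peers = {p for p in mounts if p is not m}

def propagation_targets(m):
    group = {m} | m.peers
    out = group - {m}
    for s in [s for s in ALL if s.master in group]:
        out |= {s} | propagation_targets(s)
    return out

A, B, C, D = (Mount(n) for n in "ABCD")
make_peer_group(A, B, C, D)
E, K = Mount("E", master=A), Mount("K", master=A)
make_peer_group(E, K)
F, G = Mount("F", master=A), Mount("G", master=A)
M, L, N = (Mount(n, master=K) for n in "MLN")
ALL = [A, B, C, D, E, K, F, G, M, L, N]

names = sorted(t.name for t in propagation_targets(A))
print(names)  # everything except A itself receives propagation from A
assert names == list("BCDEFGKLMN")
```

Note how M, L and N are reached only through the E/K peer group, mirroring the text's point that you must walk the slave lists of all members of a peer group.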
+A's ->mnt_slave_list links with ->mnt_slave of 'E', 'K', 'F' and 'G'
 
-8C Algorithm:
+E's ->mnt_share links with ->mnt_share of K
 
-The crux of the implementation resides in rbind/move operation.
+'E', 'K', 'F', 'G' have their ->mnt_master point to struct vfsmount of 'A'
 
-The overall algorithm breaks the operation into 3 phases: (look at
-attach_recursive_mnt() and propagate_mnt())
+'M', 'L', 'N' have their ->mnt_master point to struct vfsmount of 'K'
 
-1. prepare phase.
-2. commit phases.
-3. abort phases.
+K's ->mnt_slave_list links with ->mnt_slave of 'M', 'L' and 'N'
 
-Prepare phase:
+C's ->mnt_slave_list links with ->mnt_slave of 'J' and 'K'
 
-for each mount in the source tree:
+J and K's ->mnt_master points to struct vfsmount of C
 
-   a) Create the necessary number of mount trees to
-      be attached to each of the mounts that receive
-      propagation from the destination mount.
-   b) Do not attach any of the trees to its destination.
-      However note down its ->mnt_parent and ->mnt_mountpoint
-   c) Link all the new mounts to form a propagation tree that
-      is identical to the propagation tree of the destination
-      mount.
+and finally D's ->mnt_slave_list links with ->mnt_slave of 'H' and 'I'
 
-   If this phase is successful, there should be 'n' new
-   propagation trees; where 'n' is the number of mounts in the
-   source tree. Go to the commit phase
+'H' and 'I' have their ->mnt_master pointing to struct vfsmount of 'D'.
 
-   Also there should be 'm' new mount trees, where 'm' is
-   the number of mounts to which the destination mount
-   propagates to.
 
-   if any memory allocations fail, go to the abort phase.
+NOTE: The propagation tree is orthogonal to the mount tree.
 
-Commit phase
-   attach each of the mount trees to their corresponding
-   destination mounts.
+B) Locking:
 
-Abort phase
-   delete all the newly created trees.
+->mnt_share, ->mnt_slave, ->mnt_slave_list, ->mnt_master are protected
+by namespace_sem (exclusive for modifications, shared for reading).
 
-.. Note::
-   all the propagation related functionality resides in the file pnode.c
+Normally we have ->mnt_flags modifications serialized by vfsmount_lock.
+There are two exceptions: do_add_mount() and clone_mnt().
+The former modifies a vfsmount that has not been visible in any shared
+data structures yet.
+The latter holds namespace_sem and the only references to vfsmount
+are in lists that can't be traversed without namespace_sem.
+
+C) Algorithm:
+
+The crux of the implementation resides in rbind/move operation.
+
+The overall algorithm breaks the operation into 3 phases: (look at
+attach_recursive_mnt() and propagate_mnt())
+
+1. Prepare phase.
+
+   For each mount in the source tree:
+
+   a) Create the necessary number of mount trees to
+      be attached to each of the mounts that receive
+      propagation from the destination mount.
+   b) Do not attach any of the trees to its destination.
+      However note down its ->mnt_parent and ->mnt_mountpoint
+   c) Link all the new mounts to form a propagation tree that
+      is identical to the propagation tree of the destination
+      mount.
+
+   If this phase is successful, there should be 'n' new
+   propagation trees; where 'n' is the number of mounts in the
+   source tree. Go to the commit phase
+
+   Also there should be 'm' new mount trees, where 'm' is
+   the number of mounts to which the destination mount
+   propagates to.
+
+   If any memory allocations fail, go to the abort phase.
+
+2. Commit phase.
+
+   Attach each of the mount trees to their corresponding
+   destination mounts.
+
+3. Abort phase.
+
+   Delete all the newly created trees.
+
+.. Note::
+   all the propagation related functionality resides in the file pnode.c
 
 
 ------------------------------------------------------------------------
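The prepare/commit/abort structure described for attach_recursive_mnt()/propagate_mnt() is a classic all-or-nothing pattern: allocate everything first, attach only once nothing can fail. A hedged Python sketch of the pattern (the function name and failure injection are invented for illustration; the real code lives in fs/pnode.c and fs/namespace.c):

```python
# Three-phase sketch: prepare (clone but do not attach), commit (attach
# all clones), abort (drop everything if any allocation failed).
def attach_with_propagation(source_mounts, propagation_targets, fail_at=None):
    clones = []
    # prepare phase: build every clone, remembering its destination,
    # but attach nothing yet
    for m in source_mounts:
        for dest in propagation_targets:
            if fail_at is not None and len(clones) == fail_at:
                return None  # abort phase: discard clones, tree untouched
            clones.append((m, dest))
    # commit phase: every allocation succeeded, attach each clone
    return [f"{m}@{dest}" for m, dest in clones]

ok = attach_with_propagation(["a", "b"], ["d1", "d2"])
assert ok == ["a@d1", "a@d2", "b@d1", "b@d2"]
# simulated allocation failure: nothing is attached at all
assert attach_with_propagation(["a", "b"], ["d1", "d2"], fail_at=3) is None
```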
+19 -6
Documentation/filesystems/sysfs.rst
···
 - show() methods should return the number of bytes printed into the
   buffer.
 
-- show() should only use sysfs_emit() or sysfs_emit_at() when formatting
-  the value to be returned to user space.
+- New implementations of show() methods should only use sysfs_emit() or
+  sysfs_emit_at() when formatting the value to be returned to user space.
 
 - store() should return the number of bytes used from the buffer. If the
   entire buffer has been used, just return the count argument.
···
     hypervisor/
     kernel/
     module/
-    net/
     power/
 
 devices/ contains a filesystem representation of the device tree. It maps
···
     drivers/
 
 devices/ contains symlinks for each device discovered in the system
-that point to the device's directory under root/.
+that point to the device's directory under /sys/devices.
 
 drivers/ contains a directory for each device driver that is loaded
 for devices on that particular bus (this assumes that drivers do not
···
 
 dev/ contains two directories: char/ and block/. Inside these two
 directories there are symlinks named <major>:<minor>. These symlinks
-point to the sysfs directory for the given device. /sys/dev provides a
+point to the directories under /sys/devices for each device. /sys/dev provides a
 quick way to lookup the sysfs interface for a device from the result of
 a stat(2) operation.
 
 More information on driver-model specific features can be found in
 Documentation/driver-api/driver-model/.
 
+block/ contains symlinks to all the block devices discovered on the system.
+These symlinks point to directories under /sys/devices.
 
-TODO: Finish this section.
+class/ contains a directory for each device class, grouped by functional type.
+Each directory in class/ contains symlinks to devices in the /sys/devices directory.
+
+firmware/ contains system firmware data and configuration such as firmware tables,
+ACPI information, and device tree data.
+
+hypervisor/ contains virtualization platform information and provides an interface to
+the underlying hypervisor. It is only present when running on a virtual machine.
+
+kernel/ contains runtime kernel parameters, configuration settings, and status.
+
+power/ contains power management subsystem information including
+sleep states, suspend/resume capabilities, and policies.
 
 
 Current Interfaces
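The clarified /sys/dev wording above is easy to demonstrate: the symlink name is simply "<major>:<minor>" of the device number returned by stat(2). A small stdlib-only Python sketch (the /sys paths themselves are of course Linux-specific):

```python
# Build the /sys/dev lookup path for a device number, as described in
# sysfs.rst: symlinks under /sys/dev/{char,block} are named <major>:<minor>
# and point into /sys/devices.
import os

def sysfs_dev_link(rdev: int, char: bool = True) -> str:
    kind = "char" if char else "block"
    return f"/sys/dev/{kind}/{os.major(rdev)}:{os.minor(rdev)}"

# e.g. a character device with major 1, minor 3 (on Linux, /dev/null):
rdev = os.makedev(1, 3)
assert sysfs_dev_link(rdev) == "/sys/dev/char/1:3"
```

On a Linux system, `os.readlink("/sys/dev/char/1:3")` then resolves to the device's directory under /sys/devices, which is the quick stat(2)-to-sysfs lookup the document describes.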
+4 -4
Documentation/filesystems/xfs/xfs-online-fsck-design.rst
···
 information.
 Once the scan is done, the owning object is re-locked, the live data is used to
 write a new ondisk structure, and the repairs are committed atomically.
-The hooks are disabled and the staging staging area is freed.
+The hooks are disabled and the staging area is freed.
 Finally, the storage from the old data structure are carefully reaped.
 
 Introducing concurrency helps online repair avoid various locking problems, but
···
 shutdown.
 
 Inspiration for the secondary metadata repair strategy was drawn from section
-2.4 of Srinivasan above, and sections 2 ("NSF: Inded Build Without Side-File")
+2.4 of Srinivasan above, and sections 2 ("NSF: Index Build Without Side-File")
 and 3.1.1 ("Duplicate Key Insert Problem") in C. Mohan, `"Algorithms for
 Creating Indexes for Very Large Tables Without Quiescing Updates"
 <https://dl.acm.org/doi/10.1145/130283.130337>`_, 1992.
···
 checking and repairing of secondary metadata commonly requires coordination
 between a live metadata scan of the filesystem and writer threads that are
 updating that metadata.
-Keeping the scan data up to date requires requires the ability to propagate
+Keeping the scan data up to date requires the ability to propagate
 metadata updates from the filesystem into the data being collected by the scan.
 This *can* be done by appending concurrent updates into a separate log file and
 applying them before writing the new metadata to disk, but this leads to
···
 This will be discussed in more detail in subsequent sections.
 
 If the filesystem goes down in the middle of an operation, log recovery will
-find the most recent unfinished maping exchange log intent item and restart
+find the most recent unfinished mapping exchange log intent item and restart
 from there.
 This is how atomic file mapping exchanges guarantees that an outside observer
 will either see the old broken structure or the new one, and never a mismash of
+21
Documentation/locking/locktypes.rst
···
 local_lock is not suitable to protect against preemption or interrupts on a
 PREEMPT_RT kernel due to the PREEMPT_RT specific spinlock_t semantics.
 
+CPU local scope and bottom-half
+-------------------------------
+
+Per-CPU variables that are accessed only in softirq context should not rely on
+the assumption that this context is implicitly protected due to being
+non-preemptible. In a PREEMPT_RT kernel, softirq context is preemptible, and
+synchronizing every bottom-half-disabled section via implicit context results
+in an implicit per-CPU "big kernel lock."
+
+A local_lock_t together with local_lock_nested_bh() and
+local_unlock_nested_bh() for locking operations help to identify the locking
+scope.
+
+When lockdep is enabled, these functions verify that data structure access
+occurs within softirq context.
+Unlike local_lock(), local_unlock_nested_bh() does not disable preemption and
+does not add overhead when used without lockdep.
+
+On a PREEMPT_RT kernel, local_lock_t behaves as a real lock and
+local_unlock_nested_bh() serializes access to the data structure, which allows
+removal of serialization via local_bh_disable().
 
 raw_spinlock_t and spinlock_t
 =============================
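The design point of the new section, pairing each piece of per-CPU data with a lock that explicitly names its scope instead of relying on one implicit serialization, can be illustrated outside the kernel. This is a Python threading analogy only, not the local_lock C API:

```python
# Analogy for the locktypes.rst addition: each data instance carries its
# own lock (like local_lock_t next to the per-CPU variable it guards),
# so the protected scope is visible in the code rather than implied by
# "softirq is non-preemptible".
import threading

class PerCPUData:
    """Pairs the data with the lock that protects it."""
    def __init__(self):
        self.lock = threading.Lock()  # stand-in for local_lock_t
        self.counter = 0

cpus = [PerCPUData() for _ in range(4)]

def bump(cpu: PerCPUData, times: int):
    for _ in range(times):
        with cpu.lock:  # explicit, scoped analogue of local_lock_nested_bh()
            cpu.counter += 1

threads = [threading.Thread(target=bump, args=(c, 1000))
           for c in cpus for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert [c.counter for c in cpus] == [2000] * 4
```

Because each lock covers only its own structure, two "CPUs" never serialize against each other, which is exactly what the document contrasts with an implicit per-CPU big kernel lock.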
+2
Documentation/locking/seqlock.rst
···
+.. SPDX-License-Identifier: GPL-2.0
+
 ======================================
 Sequence counters and sequential locks
 ======================================
-28
Documentation/maintainer/configure-git.rst
···
 rc file)::
 
     export GPG_TTY=$(tty)
-
-
-Creating commit links to lore.kernel.org
-----------------------------------------
-
-The web site https://lore.kernel.org is meant as a grand archive of all mail
-list traffic concerning or influencing the kernel development. Storing archives
-of patches here is a recommended practice, and when a maintainer applies a
-patch to a subsystem tree, it is a good idea to provide a Link: tag with a
-reference back to the lore archive so that people that browse the commit
-history can find related discussions and rationale behind a certain change.
-The link tag will look like this::
-
-    Link: https://lore.kernel.org/r/<message-id>
-
-This can be configured to happen automatically any time you issue ``git am``
-by adding the following hook into your git::
-
-    $ git config am.messageid true
-    $ cat >.git/hooks/applypatch-msg <<'EOF'
-    #!/bin/sh
-    . git-sh-setup
-    perl -pi -e 's|^Message-I[dD]:\s*<?([^>]+)>?$|Link: https://lore.kernel.org/r/$1|g;' "$1"
-    test -x "$GIT_DIR/hooks/commit-msg" &&
-    exec "$GIT_DIR/hooks/commit-msg" ${1+"$@"}
-    :
-    EOF
-    $ chmod a+x .git/hooks/applypatch-msg
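For readers curious what the removed hook actually did: its perl one-liner rewrote a Message-Id trailer into a lore.kernel.org Link trailer. The same transformation rendered in Python, purely for reference (the docs no longer recommend installing this hook):

```python
# Equivalent of the deleted applypatch-msg hook's perl substitution:
# turn "Message-Id: <id>" into "Link: https://lore.kernel.org/r/<id>".
import re

def messageid_to_link(commit_msg: str) -> str:
    return re.sub(
        r"^Message-I[dD]:\s*<?([^>\n]+)>?$",
        r"Link: https://lore.kernel.org/r/\1",
        commit_msg,
        flags=re.MULTILINE,
    )

msg = "Fix the frobnicator\n\nMessage-Id: <20240101.1234-1@example.org>\n"
out = messageid_to_link(msg)
assert "Link: https://lore.kernel.org/r/20240101.1234-1@example.org" in out
assert "Message-Id:" not in out
```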
+2
Documentation/maintainer/maintainer-entry-profile.rst
···
 wait for the next -rc. At a minimum:
 
 - Last -rc for new feature submissions:
+
   New feature submissions targeting the next merge window should have
   their first posting for consideration before this point. Patches that
   are submitted after this point should be clear that they are targeting
···
   submissions should appear before -rc5.
 
 - Last -rc to merge features: Deadline for merge decisions
+
   Indicate to contributors the point at which an as yet un-applied patch
   set will need to wait for the NEXT+1 merge window. Of course there is no
   obligation to ever accept any given patchset, but if the review has not
+2
Documentation/mm/physical_memory.rst
···
     The node has memory(regular, high, movable)
 ``N_CPU``
     The node has one or more CPUs
+``N_GENERIC_INITIATOR``
+    The node has one or more Generic Initiators
 
 For each node that has a property described above, the bit corresponding to the
 node ID in the ``node_states[<property>]`` bitmask is set.
+1 -1
Documentation/networking/can.rst
···
 The CAN filters are processed in per-device filter lists at CAN frame
 reception time. To reduce the number of checks that need to be performed
 while walking through the filter lists the CAN core provides an optimized
-filter handling when the filter subscription focusses on a single CAN ID.
+filter handling when the filter subscription focuses on a single CAN ID.
 
 For the possible 2048 SFF CAN identifiers the identifier is used as an index
 to access the corresponding subscription list without any further checks.
+1 -1
Documentation/networking/device_drivers/ethernet/ti/am65_nuss_cpsw_switchdev.rst
···
 overwriting of bridge configuration as CPSW switch driver completely reloads its
 configuration when first port changes its state to UP.
 
-When the both interfaces joined the bridge - CPSW switch driver will enable
+When both interfaces have joined the bridge - CPSW switch driver will enable
 marking packets with offload_fwd_mark flag.
 
 All configuration is implemented via switchdev API.
+1 -1
Documentation/networking/device_drivers/ethernet/ti/cpsw_switchdev.rst
···
 overwriting of bridge configuration as CPSW switch driver copletly reloads its
 configuration when first Port changes its state to UP.
 
-When the both interfaces joined the bridge - CPSW switch driver will enable
+When both interfaces have joined the bridge - CPSW switch driver will enable
 marking packets with offload_fwd_mark flag unless "ale_bypass=0"
 
 All configuration is implemented via switchdev API.
+1 -1
Documentation/networking/rds.rst
···
 rds_sendmsg()
     - struct rds_message built from incoming data
     - CMSGs parsed (e.g. RDMA ops)
-    - transport connection alloced and connected if not already
+    - transport connection allocated and connected if not already
     - rds_message placed on send queue
     - send worker awoken
+2 -2
Documentation/power/pci.rst
···
 The pci_pm_suspend_noirq() routine is executed after suspend_device_irqs() has
 been called, which means that the device driver's interrupt handler won't be
 invoked while this routine is running. It first checks if the device's driver
-implements legacy PCI suspends routines (Section 3), in which case the legacy
+implements legacy PCI suspend routines (Section 3), in which case the legacy
 late suspend routine is called and its result is returned (the standard
 configuration registers of the device are saved if the driver's callback hasn't
 done that). Second, if the device driver's struct dev_pm_ops object is not
···
 The resume phase is carried out asynchronously for PCI devices, like the
 suspend phase described above, which means that if two PCI devices don't depend
 on each other in a known way, the pci_pm_resume() routine may be executed for
-the both of them in parallel.
+both of them in parallel.
 
 The pci_pm_complete() routine only executes the device driver's pm->complete()
 callback, if defined.
+1 -1
Documentation/power/suspend-and-cpuhotplug.rst
···
 
 Well, a picture is worth a thousand words... So ASCII art follows :-)
 
-[This depicts the current design in the kernel, and focusses only on the
+[This depicts the current design in the kernel, and focuses only on the
 interactions involving the freezer and CPU hotplug and also tries to explain
 the locking involved. It outlines the notifications involved as well.
 But please note that here, only the call paths are illustrated, with the aim
+3 -4
Documentation/process/5.Posting.rst
···
 
     Link: https://example.com/somewhere.html optional-other-stuff
 
-Many maintainers when applying a patch also add this tag to link to the
-latest public review posting of the patch; often this is automatically done
-by tools like b4 or a git hook like the one described in
-'Documentation/maintainer/configure-git.rst'.
+As per guidance from the Chief Penguin, a Link: tag should only be added to
+a commit if it leads to useful information that is not found in the commit
+itself.
 
 If the URL points to a public bug report being fixed by the patch, use the
 "Closes:" tag instead::
+8 -1
Documentation/process/changes.rst
···
 GNU tar                1.28             tar --version
 gtags (optional)       6.6.5            gtags --version
 mkimage (optional)     2017.01          mkimage --version
-Python (optional)      3.9.x            python3 --version
+Python                 3.9.x            python3 --version
 GNU AWK (optional)     5.1.0            gawk --version
 ====================== =============== ========================================
···
 
 You will need perl 5 and the following modules: ``Getopt::Long``,
 ``Getopt::Std``, ``File::Basename``, and ``File::Find`` to build the kernel.
+
+Python
+------
+
+Several config options require it: it is required for arm/arm64
+default configs, CONFIG_LTO_CLANG, some DRM optional configs,
+the kernel-doc tool, and docs build (Sphinx), among others.
 
 BC
 --
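With Python now listed as a hard requirement (minimum 3.9), a helper script can gate on the interpreter version up front. A minimal sketch of such a check, not taken from the kernel tree:

```python
# Version gate matching the changes.rst minimum of Python 3.9.
import sys

MIN_PYTHON = (3, 9)

def check_python(version=sys.version_info) -> bool:
    """True if the given version meets the documented minimum."""
    return tuple(version[:2]) >= MIN_PYTHON

assert check_python((3, 9, 0))
assert not check_python((3, 8, 18))
assert check_python()  # the interpreter running this check
```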
+75 -83
Documentation/process/maintainer-pgp-guide.rst
···
 for the latter may be.
 
 The above guiding principle is the reason why this guide is needed. We
-want to make sure that by placing trust into developers we do not simply
+want to make sure that by placing trust into developers we do not merely
 shift the blame for potential future security incidents to someone else.
 The goal is to provide a set of guidelines developers can use to create
 a secure working environment and safeguard the PGP keys used to
···
 PGP tools
 =========
 
-Use GnuPG 2.2 or later
+Use GnuPG 2.4 or later
 ----------------------
 
 Your distro should already have GnuPG installed by default, you just
···
 
     $ gpg --version | head -n1
 
-If you have version 2.2 or above, then you are good to go. If you have a
-version that is prior than 2.2, then some commands from this guide may
-not work.
+If you have version 2.4 or above, then you are good to go. If you have
+an earlier version, then you are using a release of GnuPG that is no
+longer maintained and some commands from this guide may not work.
 
 Configure gpg-agent options
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
···
 
     $ gpg --quick-addkey [fpr] ed25519 sign
 
-.. note:: ECC support in GnuPG
-
-    Note, that if you intend to use a hardware token that does not
-    support ED25519 ECC keys, you should choose "nistp256" instead or
-    "ed25519." See the section below on recommended hardware devices.
-
-
 Back up your Certify key for disaster recovery
 ----------------------------------------------
···
 more reasons you have to create a backup version that lives on something
 other than digital media, for disaster recovery reasons.
 
-The best way to create a printable hardcopy of your private key is by
+A good way to create a printable hardcopy of your private key is by
 using the ``paperkey`` software written for this very purpose. See ``man
 paperkey`` for more details on the output format and its benefits over
 other solutions. Paperkey should already be packaged for most
···
 
     $ gpg --export-secret-key [fpr] | paperkey -o /tmp/key-backup.txt
 
-Print out that file (or pipe the output straight to lpr), then take a
-pen and write your passphrase on the margin of the paper. **This is
-strongly recommended** because the key printout is still encrypted with
-that passphrase, and if you ever change it you will not remember what it
-used to be when you had created the backup -- *guaranteed*.
+Print out that file, then take a pen and write your passphrase on the
+margin of the paper. **This is strongly recommended** because the key
+printout is still encrypted with that passphrase, and if you ever change
+it you will not remember what it used to be when you had created the
+backup -- *guaranteed*.
 
 Put the resulting printout and the hand-written passphrase into an envelope
 and store in a secure and well-protected place, preferably away from your
···
 
 .. note::
 
-    Your printer is probably no longer a simple dumb device connected to
-    your parallel port, but since the output is still encrypted with
-    your passphrase, printing out even to "cloud-integrated" modern
-    printers should remain a relatively safe operation.
+    The key is still encrypted with your passphrase, so printing out
+    even to "cloud-integrated" modern printers should remain a
+    relatively safe operation.
 
 Back up your whole GnuPG directory
 ----------------------------------
···
 such as when making changes to your own key or signing other people's
 keys after conferences and summits.
 
-Start by getting a small USB "thumb" drive (preferably two!) that you
-will use for backup purposes. You will need to encrypt them using LUKS
--- refer to your distro's documentation on how to accomplish this.
+Start by getting an external media card (preferably two!) that you will
+use for backup purposes. You will need to create an encrypted partition
+on this device using LUKS -- refer to your distro's documentation on how
+to accomplish this.
 
 For the encryption passphrase, you can use the same one as on your
 PGP key.
 
-Once the encryption process is over, re-insert the USB drive and make
-sure it gets properly mounted. Copy your entire ``.gnupg`` directory
-over to the encrypted storage::
+Once the encryption process is over, re-insert your device and make sure
+it gets properly mounted. Copy your entire ``.gnupg`` directory over to
+the encrypted storage::
 
     $ cp -a ~/.gnupg /media/disk/foo/gnupg-backup
 
···
 
     $ gpg --homedir=/media/disk/foo/gnupg-backup --list-key [fpr]
 
 If you don't get any errors, then you should be good to go. Unmount the
-USB drive, distinctly label it so you don't blow it away next time you
-need to use a random USB drive, and put in a safe place -- but not too
-far away, because you'll need to use it every now and again for things
-like editing identities, adding or revoking subkeys, or signing other
-people's keys.
+  device, distinctly label it so you don't overwrite it by accident, and
+  put in a safe place -- but not too far away, because you'll need to use
+  it every now and again for things like editing identities, adding or
+  revoking subkeys, or signing other people's keys.

   Remove the Certify key from your homedir
   ----------------------------------------
···
   your GnuPG directory in its entirety. What we are about to do will
   render your key useless if you do not have a usable backup!

-  First, identify the keygrip of your Certify key::
+  First, identify the "keygrip" of your Certify key::

       $ gpg --with-keygrip --list-key [fpr]
···
       2222000000000000000000000000000000000000.key
       3333000000000000000000000000000000000000.key

-  All you have to do is simply remove the .key file that corresponds to
-  the Certify key keygrip::
+  It is sufficient to remove the .key file that corresponds to the Certify
+  key keygrip::

       $ cd ~/.gnupg/private-keys-v1.d
       $ rm 1111000000000000000000000000000000000000.key
···
   can be stolen from there by sufficiently advanced malware (think
   Meltdown and Spectre).

-  The best way to completely protect your keys is to move them to a
+  A good way to completely protect your keys is to move them to a
   specialized hardware device that is capable of smartcard operations.

   The benefits of smartcards
···
   itself. Because the key contents never leave the smartcard, the
   operating system of the computer into which you plug in the hardware
   device is not able to retrieve the private keys themselves.
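Picking the right keygrip out of the ``gpg --with-keygrip --list-key`` output can be mechanized: the first ``Keygrip = …`` line in the listing belongs to the primary (Certify) key, and the later ones belong to subkeys. A hedged sketch, assuming the output format shown in the guide (the helper name is made up):

```python
def certify_keygrip(listing: str) -> str:
    """Return the keygrip of the primary (Certify) key from the text
    output of `gpg --with-keygrip --list-key [fpr]`.

    The first "Keygrip = ..." line follows the pub entry, so it is the
    one whose .key file lives under ~/.gnupg/private-keys-v1.d and can
    be removed once a usable backup exists.
    """
    for line in listing.splitlines():
        line = line.strip()
        if line.startswith("Keygrip = "):
            return line.split(" = ", 1)[1]
    raise ValueError("no keygrip found in listing")
```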
   This is very
-  different from the encrypted USB storage device we used earlier for
-  backup purposes -- while that USB device is plugged in and mounted, the
+  different from the encrypted media storage device we used earlier for
+  backup purposes -- while that device is plugged in and mounted, the
   operating system is able to access the private key contents.

-  Using external encrypted USB media is not a substitute to having a
+  Using external encrypted media is not a substitute for having a
   smartcard-capable device.

   Available smartcard devices
···
   functionality. There are several options available:

   - `Nitrokey Start`_: Open hardware and Free Software, based on FSI
-    Japan's `Gnuk`_. One of the few available commercial devices that
-    support ED25519 ECC keys, but offer fewest security features (such as
-    resistance to tampering or some side-channel attacks).
-  - `Nitrokey Pro 2`_: Similar to the Nitrokey Start, but more
-    tamper-resistant and offers more security features. Pro 2 supports ECC
-    cryptography (NISTP).
+    Japan's `Gnuk`_. One of the cheapest options, but offers the fewest
+    security features (such as resistance to tampering or some
+    side-channel attacks).
+  - `Nitrokey 3`_: Similar to the Nitrokey Start, but more
+    tamper-resistant and offers more security features and USB
+    form-factors. Supports ECC cryptography (ED25519 and NISTP).
   - `Yubikey 5`_: proprietary hardware and software, but cheaper than
-    Nitrokey Pro and comes available in the USB-C form that is more useful
-    with newer laptops. Offers additional security features such as FIDO
-    U2F, among others, and now finally supports NISTP and ED25519 ECC
-    keys.
+    Nitrokey with a similar set of features. Supports ECC cryptography
+    (ED25519 and NISTP).

   Your choice will depend on cost, shipping availability in your
   geographical region, and open/proprietary hardware considerations.
···
   you `qualify for a free Nitrokey Start`_ courtesy of The Linux
   Foundation.

-  .. _`Nitrokey Start`: https://shop.nitrokey.com/shop/product/nitrokey-start-6
-  .. _`Nitrokey Pro 2`: https://shop.nitrokey.com/shop/product/nkpr2-nitrokey-pro-2-3
+  .. _`Nitrokey Start`: https://www.nitrokey.com/products/nitrokeys
+  .. _`Nitrokey 3`: https://www.nitrokey.com/products/nitrokeys
   .. _`Yubikey 5`: https://www.yubico.com/products/yubikey-5-overview/
   .. _Gnuk: https://www.fsij.org/doc-gnuk/
   .. _`qualify for a free Nitrokey Start`: https://www.kernel.org/nitrokey-digital-tokens-for-kernel-developers.html
···
   inevitably forget what it is if you do not record it.

   Getting back to the main card menu, you can also set other values (such
-  as name, sex, login data, etc), but it's not necessary and will
+  as name, gender, login data, etc), but it's not necessary and will
   additionally leak information about your smartcard should you lose it.

   .. note::
···
   You can also use a specific date if that is easier to remember (e.g.
   your birthday, January 1st, or Canada Day)::

-      $ gpg --quick-set-expire [fpr] 2025-07-01
+      $ gpg --quick-set-expire [fpr] 2038-07-01

   Remember to send the updated key back to keyservers::
···
   that their copy of linux.git has not been tampered with by a malicious
   third party?

-  Or what happens if a backdoor is discovered in the code and the "Author"
-  line in the commit says it was done by you, while you're pretty sure you
-  had `nothing to do with it`_?
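Since the expiration date has to be bumped every year or so, computing the new date argument can be scripted. A small sketch for producing the value passed to ``--quick-set-expire`` (the helper name is invented; note that gpg also accepts relative values such as ``1y`` directly, so this is purely illustrative):

```python
from datetime import date

def next_expiry(today: date, years: int = 1) -> str:
    """Return an ISO date string `years` ahead of `today`, suitable as
    the date argument to `gpg --quick-set-expire [fpr] <date>`."""
    try:
        bumped = today.replace(year=today.year + years)
    except ValueError:
        # Feb 29 bumped to a non-leap year: fall back to Feb 28
        bumped = today.replace(year=today.year + years, day=28)
    return bumped.isoformat()
```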
+  Or what happens if malicious code is discovered in the kernel and the
+  "Author" line in the commit says it was done by you, while you're pretty
+  sure you had `nothing to do with it`_?

   To address both of these issues, Git introduced PGP integration. Signed
   tags prove the repository integrity by assuring that its contents are
···
   How to work with signed tags
   ----------------------------

-  To create a signed tag, simply pass the ``-s`` switch to the tag
-  command::
+  To create a signed tag, pass the ``-s`` switch to the tag command::

       $ git tag -s [tagname]
···
   How to verify signed tags
   ~~~~~~~~~~~~~~~~~~~~~~~~~

-  To verify a signed tag, simply use the ``verify-tag`` command::
+  To verify a signed tag, use the ``verify-tag`` command::

       $ git verify-tag [tagname]
···
       # gpg: Signature made [...]
       # gpg: Good signature from [...]

-  If you are verifying someone else's git tag, then you will need to
-  import their PGP key. Please refer to the
-  ":ref:`verify_identities`" section below.
+  If you are verifying someone else's git tag, you will first need to
+  import their PGP key. Please refer to the ":ref:`verify_identities`"
+  section below.

   Configure git to always sign annotated tags
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
···
   How to work with signed commits
   -------------------------------

-  It is easy to create signed commits, but it is much more difficult to
-  use them in Linux kernel development, since it relies on patches sent to
-  the mailing list, and this workflow does not preserve PGP commit
-  signatures. Furthermore, when rebasing your repository to match
-  upstream, even your own PGP commit signatures will end up discarded.
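Under the hood, ``git verify-tag`` checks a detached PGP signature stored inside the tag object itself. A quick way to see whether a tag carries a signature at all is to look for the armor markers in the raw object. A sketch under assumptions: ``tag_is_pgp_signed`` is an illustrative name, and the raw text would come from something like ``git cat-file tag <tagname>``; this checks only presence, not validity:

```python
def tag_is_pgp_signed(raw_tag: str) -> bool:
    """Return True if a raw annotated-tag object embeds a detached
    PGP signature block (git appends it after the tag message)."""
    return ("-----BEGIN PGP SIGNATURE-----" in raw_tag
            and "-----END PGP SIGNATURE-----" in raw_tag)
```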
-  For
-  this reason, most kernel developers don't bother signing their commits
-  and will ignore signed commits in any external repositories that they
-  rely upon in their work.
+  It is also possible to create signed commits, but they have limited
+  usefulness in Linux kernel development. The kernel contribution workflow
+  relies on sending in patches, and converting commits to patches does not
+  preserve git commit signatures. Furthermore, when rebasing your own
+  repository on a newer upstream, PGP commit signatures will end up
+  discarded. For this reason, most kernel developers don't bother signing
+  their commits and will ignore signed commits in any external
+  repositories that they rely upon in their work.

-  However, if you have your working git tree publicly available at some
+  That said, if you have your working git tree publicly available at some
   git hosting service (kernel.org, infradead.org, ozlabs.org, or others),
   then the recommendation is that you sign all your git commits even if
   upstream developers do not directly benefit from this practice.
···
      provenance, even externally maintained trees carrying PGP commit
      signatures will be valuable for such purposes.
   2. If you ever need to re-clone your local repository (for example,
-     after a disk failure), this lets you easily verify the repository
+     after reinstalling your system), this lets you verify the repository
      integrity before resuming your work.
   3. If someone needs to cherry-pick your commits, this allows them to
      quickly verify their integrity before applying them.
···
   Creating signed commits
   ~~~~~~~~~~~~~~~~~~~~~~~

-  To create a signed commit, you just need to pass the ``-S`` flag to the
-  ``git commit`` command (it's capital ``-S`` due to collision with
-  another flag)::
+  To create a signed commit, pass the ``-S`` flag to the ``git commit``
+  command (it's capital ``-S`` due to collision with another flag)::

       $ git commit -S
···
   Make sure you configure ``gpg-agent`` before you turn this on.

   .. _verify_identities:
-

   How to work with signed patches
   -------------------------------
···
   Installing and configuring patatt
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+  .. note::
+
+     If you use B4 to send in your patches, patatt is already installed
+     and integrated into your workflow.

   Patatt is packaged for many distributions already, so please check there
   first. You can also install it from pypi using "``pip install patatt``".
···
   How to verify kernel developer identities
   =========================================

-  Signing tags and commits is easy, but how does one go about verifying
-  that the key used to sign something belongs to the actual kernel
-  developer and not to a malicious imposter?
+  Signing tags and commits is straightforward, but how does one go about
+  verifying that the key used to sign something belongs to the actual
+  kernel developer and not to a malicious imposter?

   Configure auto-key-retrieval using WKD and DANE
   -----------------------------------------------
···
   entity, PGP leaves this responsibility to each user.

   Unfortunately, very few people understand how the Web of Trust works.
-  While it remains an important aspect of the OpenPGP specification,
+  While it is still an important part of the OpenPGP specification,
   recent versions of GnuPG (2.2 and above) have implemented an alternative
   mechanism called "Trust on First Use" (TOFU). You can think of TOFU as
   "the SSH-like approach to trust." With SSH, the first time you connect
···
   trust the changed key or not. Similarly, the first time you import
   someone's PGP key, it is assumed to be valid. If at any point in the
   future GnuPG comes across another key with the same identity, both the
-  previously imported key and the new key will be marked as invalid and
-  you will need to manually figure out which one to keep.
+  previously imported key and the new key will be marked for verification
+  and you will need to manually figure out which one to keep.

   We recommend that you use the combined TOFU+PGP trust model (which is
   the new default in GnuPG v2). To set it, add (or modify) the
+3 -3
Documentation/process/submitting-patches.rst
···
   As is frequently quoted on the mailing list::

       A: http://en.wikipedia.org/wiki/Top_post
-      Q: Were do I find info about this thing called top-posting?
+      Q: Where do I find info about this thing called top-posting?
       A: Because it messes up the order in which people normally read text.
       Q: Why is top-posting such a bad thing?
       A: Top-posting.
···
   explicit permission of the person named (see 'Tagging people requires
   permission' below for details).

-  A Fixes: tag indicates that the patch fixes an issue in a previous commit. It
-  is used to make it easy to determine where a bug originated, which can help
+  A Fixes: tag indicates that the patch fixes a bug in a previous commit. It
+  is used to make it easy to determine where an issue originated, which can help
   review a bug fix. This tag also assists the stable kernel team in determining
   which stable kernel versions should receive your fix. This is the preferred
   method for indicating a bug fixed by the patch. See :ref:`describe_changes`
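A Fixes: line follows a fixed shape: an abbreviated commit SHA-1 of at least 12 hex characters, then the quoted commit subject in parentheses. That makes it easy to lint mechanically; a hedged sketch (this is not the checkpatch.pl implementation, just a regex matching the convention described above):

```python
import re

# Convention: Fixes: <12-40 hex chars> ("<commit subject>")
# e.g.  Fixes: 54a4f0239f2e ("KVM: MMU: make kvm_mmu_zap_page() ...")
FIXES_RE = re.compile(r'^Fixes:\s+[0-9a-f]{12,40}\s+\("[^"]+"\)$')

def valid_fixes_tag(line: str) -> bool:
    """Return True if `line` looks like a well-formed Fixes: tag."""
    return bool(FIXES_RE.match(line.strip()))
```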
+1 -1
Documentation/sphinx/automarkup.py
···
       return contnode

   #
-  # Variant of markup_abi_ref() that warns whan a reference is not found
+  # Variant of markup_abi_ref() that warns when a reference is not found
   #
   def markup_abi_file_ref(docname, app, match):
       return markup_abi_ref(docname, app, match, warning=True)
-247
Documentation/sphinx/cdomain.py
···
-  # -*- coding: utf-8; mode: python -*-
-  # SPDX-License-Identifier: GPL-2.0
-  # pylint: disable=W0141,C0113,C0103,C0325
-  """
-      cdomain
-      ~~~~~~~
-
-      Replacement for the sphinx c-domain.
-
-      :copyright:  Copyright (C) 2016  Markus Heiser
-      :license:    GPL Version 2, June 1991 see Linux/COPYING for details.
-
-      List of customizations:
-
-      * Moved the *duplicate C object description* warnings for function
-        declarations in the nitpicky mode. See Sphinx documentation for
-        the config values for ``nitpick`` and ``nitpick_ignore``.
-
-      * Add option 'name' to the "c:function:" directive. With option 'name' the
-        ref-name of a function can be modified. E.g.::
-
-            .. c:function:: int ioctl( int fd, int request )
-               :name: VIDIOC_LOG_STATUS
-
-        The func-name (e.g. ioctl) remains in the output but the ref-name changed
-        from 'ioctl' to 'VIDIOC_LOG_STATUS'. The function is referenced by::
-
-            * :c:func:`VIDIOC_LOG_STATUS` or
-            * :any:`VIDIOC_LOG_STATUS` (``:any:`` needs sphinx 1.3)
-
-      * Handle signatures of function-like macros well. Don't try to deduce
-        arguments types of function-like macros.
-
-  """
-
-  from docutils import nodes
-  from docutils.parsers.rst import directives
-
-  import sphinx
-  from sphinx import addnodes
-  from sphinx.domains.c import c_funcptr_sig_re, c_sig_re
-  from sphinx.domains.c import CObject as Base_CObject
-  from sphinx.domains.c import CDomain as Base_CDomain
-  from itertools import chain
-  import re
-
-  __version__ = '1.1'
-
-  # Namespace to be prepended to the full name
-  namespace = None
-
-  #
-  # Handle trivial newer c domain tags that are part of Sphinx 3.1 c domain tags
-  # - Store the namespace if ".. c:namespace::" tag is found
-  #
-  RE_namespace = re.compile(r'^\s*..\s*c:namespace::\s*(\S+)\s*$')
-
-  def markup_namespace(match):
-      global namespace
-
-      namespace = match.group(1)
-
-      return ""
-
-  #
-  # Handle c:macro for function-style declaration
-  #
-  RE_macro = re.compile(r'^\s*..\s*c:macro::\s*(\S+)\s+(\S.*)\s*$')
-  def markup_macro(match):
-      return ".. c:function:: " + match.group(1) + ' ' + match.group(2)
-
-  #
-  # Handle newer c domain tags that are evaluated as .. c:type: for
-  # backward-compatibility with Sphinx < 3.0
-  #
-  RE_ctype = re.compile(r'^\s*..\s*c:(struct|union|enum|enumerator|alias)::\s*(.*)$')
-
-  def markup_ctype(match):
-      return ".. c:type:: " + match.group(2)
-
-  #
-  # Handle newer c domain tags that are evaluated as :c:type: for
-  # backward-compatibility with Sphinx < 3.0
-  #
-  RE_ctype_refs = re.compile(r':c:(var|struct|union|enum|enumerator)::`([^\`]+)`')
-  def markup_ctype_refs(match):
-      return ":c:type:`" + match.group(2) + '`'
-
-  #
-  # Simply convert :c:expr: and :c:texpr: into a literal block.
-  #
-  RE_expr = re.compile(r':c:(expr|texpr):`([^\`]+)`')
-  def markup_c_expr(match):
-      return '\\ ``' + match.group(2) + '``\\ '
-
-  #
-  # Parse Sphinx 3.x C markups, replacing them by backward-compatible ones
-  #
-  def c_markups(app, docname, source):
-      result = ""
-      markup_func = {
-          RE_namespace: markup_namespace,
-          RE_expr: markup_c_expr,
-          RE_macro: markup_macro,
-          RE_ctype: markup_ctype,
-          RE_ctype_refs: markup_ctype_refs,
-      }
-
-      lines = iter(source[0].splitlines(True))
-      for n in lines:
-          match_iterators = [regex.finditer(n) for regex in markup_func]
-          matches = sorted(chain(*match_iterators), key=lambda m: m.start())
-          for m in matches:
-              n = n[:m.start()] + markup_func[m.re](m) + n[m.end():]
-
-          result = result + n
-
-      source[0] = result
-
-  #
-  # Now implements support for the cdomain namespacing logic
-  #
-
-  def setup(app):
-
-      # Handle easy Sphinx 3.1+ simple new tags: :c:expr and .. c:namespace::
-      app.connect('source-read', c_markups)
-      app.add_domain(CDomain, override=True)
-
-      return dict(
-          version = __version__,
-          parallel_read_safe = True,
-          parallel_write_safe = True
-      )
-
-  class CObject(Base_CObject):
-
-      """
-      Description of a C language object.
-      """
-      option_spec = {
-          "name" : directives.unchanged
-      }
-
-      def handle_func_like_macro(self, sig, signode):
-          """Handles signatures of function-like macros.
-
-          If the objtype is 'function' and the signature ``sig`` is a
-          function-like macro, the name of the macro is returned. Otherwise
-          ``False`` is returned.
-          """
-
-          global namespace
-
-          if not self.objtype == 'function':
-              return False
-
-          m = c_funcptr_sig_re.match(sig)
-          if m is None:
-              m = c_sig_re.match(sig)
-              if m is None:
-                  raise ValueError('no match')
-
-          rettype, fullname, arglist, _const = m.groups()
-          arglist = arglist.strip()
-          if rettype or not arglist:
-              return False
-
-          arglist = arglist.replace('`', '').replace('\\ ', '')  # remove markup
-          arglist = [a.strip() for a in arglist.split(",")]
-
-          # has the first argument a type?
-          if len(arglist[0].split(" ")) > 1:
-              return False
-
-          # This is a function-like macro, its arguments are typeless!
-          signode += addnodes.desc_name(fullname, fullname)
-          paramlist = addnodes.desc_parameterlist()
-          signode += paramlist
-
-          for argname in arglist:
-              param = addnodes.desc_parameter('', '', noemph=True)
-              # separate by non-breaking space in the output
-              param += nodes.emphasis(argname, argname)
-              paramlist += param
-
-          if namespace:
-              fullname = namespace + "." + fullname
-
-          return fullname
-
-      def handle_signature(self, sig, signode):
-          """Transform a C signature into RST nodes."""
-
-          global namespace
-
-          fullname = self.handle_func_like_macro(sig, signode)
-          if not fullname:
-              fullname = super(CObject, self).handle_signature(sig, signode)
-
-          if "name" in self.options:
-              if self.objtype == 'function':
-                  fullname = self.options["name"]
-              else:
-                  # FIXME: handle :name: value of other declaration types?
-                  pass
-          else:
-              if namespace:
-                  fullname = namespace + "." + fullname
-
-          return fullname
-
-      def add_target_and_index(self, name, sig, signode):
-          # for C API items we add a prefix since names are usually not qualified
-          # by a module name and so easily clash with e.g. section titles
-          targetname = 'c.' + name
-          if targetname not in self.state.document.ids:
-              signode['names'].append(targetname)
-              signode['ids'].append(targetname)
-              signode['first'] = (not self.names)
-              self.state.document.note_explicit_target(signode)
-              inv = self.env.domaindata['c']['objects']
-              if (name in inv and self.env.config.nitpicky):
-                  if self.objtype == 'function':
-                      if ('c:func', name) not in self.env.config.nitpick_ignore:
-                          self.state_machine.reporter.warning(
-                              'duplicate C object description of %s, ' % name +
-                              'other instance in ' + self.env.doc2path(inv[name][0]),
-                              line=self.lineno)
-              inv[name] = (self.env.docname, self.objtype)
-
-          indextext = self.get_index_text(name)
-          if indextext:
-              self.indexnode['entries'].append(
-                  ('single', indextext, targetname, '', None))
-
-  class CDomain(Base_CDomain):
-
-      """C language domain."""
-      name = 'c'
-      label = 'C'
-      directives = {
-          'function': CObject,
-          'member': CObject,
-          'macro': CObject,
-          'type': CObject,
-          'var': CObject,
-      }
+3 -1
Documentation/sphinx/kernel_feat.py
···
   from docutils import nodes, statemachine
   from docutils.statemachine import ViewList
   from docutils.parsers.rst import directives, Directive
-  from docutils.utils.error_reporting import ErrorString
   from sphinx.util.docutils import switch_source_input
+
+  def ErrorString(exc):  # Shamelessly stolen from docutils
+      return f'{exc.__class__.__name__}: {exc}'

   __version__ = '1.0'

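The replacement ``ErrorString`` helper simply renders an exception as ``ClassName: message``, matching what the removed ``docutils.utils.error_reporting`` import provided. A standalone sketch of the same behavior (with the dunder attribute written out in full as ``__name__``):

```python
def ErrorString(exc):
    # Render an exception the way docutils' error_reporting helper did:
    # the exception class name, a colon, then the stringified message.
    return f'{exc.__class__.__name__}: {exc}'
```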
+395 -139
Documentation/sphinx/kernel_include.py
···
   #!/usr/bin/env python3
-  # -*- coding: utf-8; mode: python -*-
   # SPDX-License-Identifier: GPL-2.0
-  # pylint: disable=R0903, C0330, R0914, R0912, E0401
+  # pylint: disable=R0903, R0912, R0914, R0915, C0209,W0707
+

   """
-  kernel-include
-  ~~~~~~~~~~~~~~
-
-  Implementation of the ``kernel-include`` reST-directive.
-
-  :copyright: Copyright (C) 2016 Markus Heiser
-  :license: GPL Version 2, June 1991 see linux/COPYING for details.
-
-  The ``kernel-include`` reST-directive is a replacement for the ``include``
-  directive. The ``kernel-include`` directive expand environment variables in
-  the path name and allows to include files from arbitrary locations.
-
-  .. hint::
-
-     Including files from arbitrary locations (e.g. from ``/etc``) is a
-     security risk for builders. This is why the ``include`` directive from
-     docutils *prohibit* pathnames pointing to locations *above* the filesystem
-     tree where the reST document with the include directive is placed.
-
-  Substrings of the form $name or ${name} are replaced by the value of
-  environment variable name. Malformed variable names and references to
-  non-existing variables are left unchanged.
+  Implementation of the ``kernel-include`` reST-directive.
+
+  :copyright: Copyright (C) 2016 Markus Heiser
+  :license: GPL Version 2, June 1991 see linux/COPYING for details.
+
+  The ``kernel-include`` reST-directive is a replacement for the ``include``
+  directive. The ``kernel-include`` directive expands environment variables in
+  the path name and allows including files from arbitrary locations.
+
+  .. hint::
+
+     Including files from arbitrary locations (e.g. from ``/etc``) is a
+     security risk for builders. This is why the ``include`` directive from
+     docutils *prohibits* pathnames pointing to locations *above* the
+     filesystem tree where the reST document with the include directive is
+     placed.
+
+     Substrings of the form $name or ${name} are replaced by the value of
+     environment variable name. Malformed variable names and references to
+     non-existing variables are left unchanged.
+
+  **Supported Sphinx Include Options**:
+
+  :param literal:
+      If present, the included file is inserted as a literal block.
+
+  :param code:
+      Specify the language for syntax highlighting (e.g., 'c', 'python').
+
+  :param encoding:
+      Specify the encoding of the included file (default: 'utf-8').
+
+  :param tab-width:
+      Specify the number of spaces that a tab represents.
+
+  :param start-line:
+      Line number at which to start including the file (1-based).
+
+  :param end-line:
+      Line number at which to stop including the file (inclusive).
+
+  :param start-after:
+      Include lines after the first line matching this text.
+
+  :param end-before:
+      Include lines before the first line matching this text.
+
+  :param number-lines:
+      Number the included lines (integer specifies start number).
+      Only effective with 'literal' or 'code' options.
+
+  :param class:
+      Specify HTML class attribute for the included content.
+
+  **Kernel-specific Extensions**:
+
+  :param generate-cross-refs:
+      If present, instead of directly including the file, it calls
+      ParseDataStructs() to convert C data structures into cross-references
+      that link to comprehensive documentation in other ReST files.
+
+  :param exception-file:
+      (Used with generate-cross-refs)
+
+      Path to a file containing rules for handling special cases:
+      - Ignore specific C data structures
+      - Use alternative reference names
+      - Specify different reference types
+
+  :param warn-broken:
+      (Used with generate-cross-refs)
+
+      Enables warnings when auto-generated cross-references don't point to
+      existing documentation targets.
   """

   # ==============================================================================
···
   # ==============================================================================

   import os.path
+  import re
+  import sys

   from docutils import io, nodes, statemachine
-  from docutils.utils.error_reporting import SafeString, ErrorString
-  from docutils.parsers.rst import directives
+  from docutils.statemachine import ViewList
+  from docutils.parsers.rst import Directive, directives
   from docutils.parsers.rst.directives.body import CodeBlock, NumberLines
-  from docutils.parsers.rst.directives.misc import Include
-
-  __version__ = '1.0'
+  from sphinx.util import logging
+
+  srctree = os.path.abspath(os.environ["srctree"])
+  sys.path.insert(0, os.path.join(srctree, "tools/docs/lib"))
+
+  from parse_data_structs import ParseDataStructs
+
+  __version__ = "1.0"
+  logger = logging.getLogger(__name__)
+
+  RE_DOMAIN_REF = re.compile(r'\\ :(ref|c:type|c:func):`([^<`]+)(?:<([^>]+)>)?`\\')
+  RE_SIMPLE_REF = re.compile(r'`([^`]+)`')
+
+  def ErrorString(exc):  # Shamelessly stolen from docutils
+      return f'{exc.__class__.__name__}: {exc}'
+

   # ==============================================================================
-  def setup(app):
-  # ==============================================================================
-
-      app.add_directive("kernel-include", KernelInclude)
-      return dict(
-          version = __version__,
-          parallel_read_safe = True,
-          parallel_write_safe = True
-      )
-
-  # ==============================================================================
-  class KernelInclude(Include):
-  # ==============================================================================
-
-      """KernelInclude (``kernel-include``) directive"""
-
-      def run(self):
-          env = self.state.document.settings.env
-          path = os.path.realpath(
-              os.path.expandvars(self.arguments[0]))
+  class KernelInclude(Directive):
+      """
+      KernelInclude (``kernel-include``) directive
+
+      Most of the stuff here came from the Include directive defined at:
+      docutils/parsers/rst/directives/misc.py
+
+      Yet, overriding that class doesn't have any benefits: the original
+      class only has run() and an argument list. Not all of its arguments
+      are implemented here, when checked against the latest Sphinx version,
+      as more arguments were added over time.
+
+      So, keep its own list of supported arguments
+      """
+
+      required_arguments = 1
+      optional_arguments = 0
+      final_argument_whitespace = True
+      option_spec = {
+          'literal': directives.flag,
+          'code': directives.unchanged,
+          'encoding': directives.encoding,
+          'tab-width': int,
+          'start-line': int,
+          'end-line': int,
+          'start-after': directives.unchanged_required,
+          'end-before': directives.unchanged_required,
+          # ignored except for 'literal' or 'code':
+          'number-lines': directives.unchanged,  # integer or None
+          'class': directives.class_option,

-          # to get a bit security back, prohibit /etc:
-          if path.startswith(os.sep + "etc"):
-              raise self.severe(
-                  'Problems with "%s" directive, prohibited path: %s'
-                  % (self.name, path))
+          # Arguments that aren't from Sphinx Include directive
+          'generate-cross-refs': directives.flag,
+          'warn-broken': directives.flag,
+          'toc': directives.flag,
+          'exception-file': directives.unchanged,
+      }

-          self.arguments[0] = path
+      def read_rawtext(self, path, encoding):
+          """Read and process file content with error handling"""
+          try:
+              self.state.document.settings.record_dependencies.add(path)
+              include_file = io.FileInput(source_path=path,
+                                          encoding=encoding,
+                                          error_handler=self.state.document.settings.input_encoding_error_handler)
+          except UnicodeEncodeError:
+              raise self.severe('Problems with directive path:\n'
+                                'Cannot encode input file path "%s" '
+                                '(wrong locale?).' % path)
+          except IOError as error:
+              raise self.severe('Problems with directive path:\n%s.' % ErrorString(error))

-          env.note_dependency(os.path.abspath(path))
+          try:
+              return include_file.read()
+          except UnicodeError as error:
+              raise self.severe('Problem with directive:\n%s' % ErrorString(error))

-          #return super(KernelInclude, self).run() # won't work, see HINTs in _run()
-          return self._run()
+      def apply_range(self, rawtext):
+          """
+          Handles start-line, end-line, start-after and end-before parameters
+          """

-      def _run(self):
-          """Include a file as part of the content of this reST file."""
-
-          # HINT: I had to copy&paste the whole Include.run method. I'm not happy
-          # with this, but due to security reasons, the Include.run method does
-          # not allow absolute or relative pathnames pointing to locations *above*
-          # the filesystem tree where the reST document is placed.
-
-          if not self.state.document.settings.file_insertion_enabled:
-              raise self.warning('"%s" directive disabled.' % self.name)
-          source = self.state_machine.input_lines.source(
-              self.lineno - self.state_machine.input_offset - 1)
-          source_dir = os.path.dirname(os.path.abspath(source))
-          path = directives.path(self.arguments[0])
-          if path.startswith('<') and path.endswith('>'):
-              path = os.path.join(self.standard_include_path, path[1:-1])
-          path = os.path.normpath(os.path.join(source_dir, path))
-
-          # HINT: this is the only line I had to change / commented out:
-          #path = utils.relative_path(None, path)
-
-          encoding = self.options.get(
-              'encoding', self.state.document.settings.input_encoding)
-          e_handler = self.state.document.settings.input_encoding_error_handler
-          tab_width = self.options.get(
-              'tab-width', self.state.document.settings.tab_width)
-          try:
-              self.state.document.settings.record_dependencies.add(path)
-              include_file = io.FileInput(source_path=path,
-                                          encoding=encoding,
-                                          error_handler=e_handler)
-          except UnicodeEncodeError as error:
-              raise self.severe('Problems with "%s" directive path:\n'
-                                'Cannot encode input file path "%s" '
-                                '(wrong locale?).' %
-                                (self.name, SafeString(path)))
-          except IOError as error:
-              raise self.severe('Problems with "%s" directive path:\n%s.' %
-                                (self.name, ErrorString(error)))
+          # Get to-be-included content
           startline = self.options.get('start-line', None)
           endline = self.options.get('end-line', None)
           try:
               if startline or (endline is not None):
-                  lines = include_file.readlines()
-                  rawtext = ''.join(lines[startline:endline])
-              else:
-                  rawtext = include_file.read()
+                  lines = rawtext.splitlines()
+                  rawtext = '\n'.join(lines[startline:endline])
           except UnicodeError as error:
-              raise self.severe('Problem with "%s" directive:\n%s' %
-                                (self.name, ErrorString(error)))
+              raise self.severe(f'Problem with "{self.name}" directive:\n'
+                                + io.error_string(error))
           # start-after/end-before: no restrictions on newlines in match-text,
           # and no restrictions on matching inside lines vs. line boundaries
-          after_text = self.options.get('start-after', None)
+          after_text = self.options.get("start-after", None)
           if after_text:
               # skip content in rawtext before *and incl.* a matching text
               after_index = rawtext.find(after_text)
               if after_index < 0:
                   raise self.severe('Problem with "start-after" option of "%s" '
-                                    'directive:\nText not found.' % self.name)
-              rawtext = rawtext[after_index + len(after_text):]
-          before_text = self.options.get('end-before', None)
+                                    "directive:\nText not found." % self.name)
+              rawtext = rawtext[after_index + len(after_text) :]
+          before_text = self.options.get("end-before", None)
           if before_text:
               # skip content in rawtext after *and incl.* a matching text
               before_index = rawtext.find(before_text)
               if before_index < 0:
                   raise self.severe('Problem with "end-before" option of "%s" '
-                                    'directive:\nText not found.' % self.name)
+                                    "directive:\nText not found." % self.name)
               rawtext = rawtext[:before_index]
+
+          return rawtext
+
+      def xref_text(self, env, path, tab_width):
+          """
+          Read and add contents from a C file parsed to have cross references.
+
+          There are two types of supported output here:
+          - A C source code with cross-references;
+          - a TOC table containing cross references.
+          """
+          parser = ParseDataStructs()
+          parser.parse_file(path)
+
+          if 'exception-file' in self.options:
+              source_dir = os.path.dirname(os.path.abspath(
+                  self.state_machine.input_lines.source(
+                      self.lineno - self.state_machine.input_offset - 1)))
+              exceptions_file = os.path.join(source_dir, self.options['exception-file'])
+              parser.process_exceptions(exceptions_file)
+
+          # Store references on a symbol dict to be used at check time
+          if 'warn-broken' in self.options:
+              env._xref_files.add(path)
+
+          if "toc" not in self.options:
+
+              rawtext = ".. parsed-literal::\n\n" + parser.gen_output()
+              self.apply_range(rawtext)
+
+              include_lines = statemachine.string2lines(rawtext, tab_width,
+                                                        convert_whitespace=True)
+
+              # Sphinx always blames the ".. <directive>", so placing
+              # line numbers here won't make any difference
+
+              self.state_machine.insert_input(include_lines, path)
+              return []
+
+          # TOC output is a ReST file, not a literal. So, we can add line
+          # numbers
+
+          rawtext = parser.gen_toc()

           include_lines = statemachine.string2lines(rawtext, tab_width,
                                                     convert_whitespace=True)
-          if 'literal' in self.options:
-              # Convert tabs to spaces,
-            if tab_width >= 0:
-                text = rawtext.expandtabs(tab_width)
-            else:
-                text = rawtext
-            literal_block = nodes.literal_block(rawtext, source=path,
-                                                classes=self.options.get('class', []))
-            literal_block.line = 1
-            self.add_name(literal_block)
-            if 'number-lines' in self.options:
-                try:
-                    startline = int(self.options['number-lines'] or 1)
-                except ValueError:
-                    raise self.error(':number-lines: with non-integer '
-                                     'start value')
-                endline = startline + len(include_lines)
-                if text.endswith('\n'):
-                    text = text[:-1]
-                tokens = NumberLines([([], text)], startline, endline)
-                for classes, value in tokens:
-                    if classes:
-                        literal_block += nodes.inline(value, value,
-                                                      classes=classes)
-                    else:
-                        literal_block += nodes.Text(value, value)
-            else:
-                literal_block += nodes.Text(text, text)
-            return [literal_block]
-        if 'code' in self.options:
-            self.options['source'] = path
-            codeblock = CodeBlock(self.name,
-                                  [self.options.pop('code')], # arguments
-                                  self.options,
-                                  include_lines, # content
-                                  self.lineno,
-                                  self.content_offset,
-                                  self.block_text,
-                                  self.state,
-                                  self.state_machine)
-            return codeblock.run()
-        self.state_machine.insert_input(include_lines, path)
+
+        # Append line numbers data
+
+        startline = self.options.get('start-line', None)
+
+        result = ViewList()
+        if startline and startline > 0:
+            offset = startline - 1
+        else:
+            offset = 0
+
+        for ln, line in enumerate(include_lines, start=offset):
+            result.append(line, path, ln)
+
+        self.state_machine.insert_input(result, path)
+
         return []
+
+    def literal(self, path, tab_width, rawtext):
+        """Output a literal block"""
+
+        # Convert tabs to spaces, if `tab_width` is positive.
+        if tab_width >= 0:
+            text = rawtext.expandtabs(tab_width)
+        else:
+            text = rawtext
+        literal_block = nodes.literal_block(rawtext, source=path,
+                                            classes=self.options.get("class", []))
+        literal_block.line = 1
+        self.add_name(literal_block)
+        if "number-lines" in self.options:
+            try:
+                startline = int(self.options["number-lines"] or 1)
+            except ValueError:
+                raise self.error(":number-lines: with non-integer start value")
+            endline = startline + len(text.splitlines())
+            if text.endswith("\n"):
+                text = text[:-1]
+            tokens = NumberLines([([], text)], startline, endline)
+            for classes, value in tokens:
+                if classes:
+                    literal_block += nodes.inline(value, value,
+                                                  classes=classes)
+                else:
+                    literal_block += nodes.Text(value, value)
+        else:
+            literal_block += nodes.Text(text, text)
+        return [literal_block]
+
+    def code(self, path, tab_width, rawtext):
+        """Output a code block"""
+
+        include_lines = statemachine.string2lines(rawtext, tab_width,
+                                                  convert_whitespace=True)
+
+        self.options["source"] = path
+        codeblock = CodeBlock(self.name,
+                              [self.options.pop("code")], # arguments
+                              self.options,
+                              include_lines,
+                              self.lineno,
+                              self.content_offset,
+                              self.block_text,
+                              self.state,
+                              self.state_machine)
+        return codeblock.run()
+
+    def run(self):
+        """Include a file as part of the content of this reST file."""
+        env = self.state.document.settings.env
+
+        #
+        # The include logic accepts only paths relative to the
+        # kernel source tree. The logic checks them to prevent
+        # directory traversal issues.
+        #
+
+        srctree = os.path.abspath(os.environ["srctree"])
+
+        path = os.path.expandvars(self.arguments[0])
+        src_path = os.path.join(srctree, path)
+
+        if os.path.isfile(src_path):
+            base = srctree
+            path = src_path
+        else:
+            raise self.warning(f'File "{path}" doesn\'t exist')
+
+        abs_base = os.path.abspath(base)
+        abs_full_path = os.path.abspath(os.path.join(base, path))
+
+        try:
+            if os.path.commonpath([abs_full_path, abs_base]) != abs_base:
+                raise self.severe('Problems with "%s" directive, prohibited path: %s' %
+                                  (self.name, path))
+        except ValueError:
+            # Paths don't have the same drive (Windows) or other incompatibility
+            raise self.severe('Problems with "%s" directive, invalid path: %s' %
+                              (self.name, path))
+
+        self.arguments[0] = path
+
+        #
+        # Add path location to Sphinx dependencies to ensure proper cache
+        # invalidation check.
+        #
+
+        env.note_dependency(os.path.abspath(path))
+
+        if not self.state.document.settings.file_insertion_enabled:
+            raise self.warning('"%s" directive disabled.' % self.name)
+        source = self.state_machine.input_lines.source(self.lineno -
+                                                       self.state_machine.input_offset - 1)
+        source_dir = os.path.dirname(os.path.abspath(source))
+        path = directives.path(self.arguments[0])
+        if path.startswith("<") and path.endswith(">"):
+            path = os.path.join(self.standard_include_path, path[1:-1])
+        path = os.path.normpath(os.path.join(source_dir, path))
+
+        # HINT: this is the only line I had to change / commented out:
+        # path = utils.relative_path(None, path)
+
+        encoding = self.options.get("encoding",
+                                    self.state.document.settings.input_encoding)
+        tab_width = self.options.get("tab-width",
+                                     self.state.document.settings.tab_width)
+
+        # Get optional arguments related to cross-references generation
+        if "generate-cross-refs" in self.options:
+            return self.xref_text(env, path, tab_width)
+
+        rawtext = self.read_rawtext(path, encoding)
+        rawtext = self.apply_range(rawtext)
+
+        if "code" in self.options:
+            return self.code(path, tab_width, rawtext)
+
+        return self.literal(path, tab_width, rawtext)
+
+# ==============================================================================
+
+reported = set()
+
+def check_missing_refs(app, env, node, contnode):
+    """Check broken refs for the files it creates xrefs"""
+    if not node.source:
+        return None
+
+    try:
+        xref_files = env._xref_files
+    except AttributeError:
+        logger.critical("FATAL: _xref_files not initialized!")
+        raise
+
+    # Only show missing references for kernel-include reference-parsed files
+    if node.source not in xref_files:
+        return None
+
+    target = node.get('reftarget', '')
+    domain = node.get('refdomain', 'std')
+    reftype = node.get('reftype', '')
+
+    msg = f"can't link to: {domain}:{reftype}:: {target}"
+
+    # Don't duplicate warnings
+    data = (node.source, msg)
+    if data in reported:
+        return None
+    reported.add(data)
+
+    logger.warning(msg, location=node, type='ref', subtype='missing')
+
+    return None
+
+def merge_xref_info(app, env, docnames, other):
+    """
+    As each process modifies env._xref_files, we need to merge them back.
+    """
+    if not hasattr(other, "_xref_files"):
+        return
+    env._xref_files.update(getattr(other, "_xref_files", set()))
+
+def init_xref_docs(app, env, docnames):
+    """Initialize a list of files for which we're generating cross references"""
+    app.env._xref_files = set()
+
+# ==============================================================================
+
+def setup(app):
+    """Setup Sphinx extension"""
+
+    app.connect("env-before-read-docs", init_xref_docs)
+    app.connect("env-merge-info", merge_xref_info)
+    app.add_directive("kernel-include", KernelInclude)
+    app.connect("missing-reference", check_missing_refs)
+
+    return {
+        "version": __version__,
+        "parallel_read_safe": True,
+        "parallel_write_safe": True,
+    }
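The run() method above anchors every include under the kernel source tree and rejects anything that escapes it via os.path.commonpath(). The confinement check can be sketched in isolation like this (the function name is ours, not part of the extension):

```python
import os.path

def is_confined(base, candidate):
    """Return True if candidate resolves to a location inside base.

    Mirrors the commonpath() check used by KernelInclude.run():
    resolving both paths to absolute form first defeats "../" tricks.
    """
    abs_base = os.path.abspath(base)
    abs_path = os.path.abspath(os.path.join(base, candidate))
    try:
        return os.path.commonpath([abs_path, abs_base]) == abs_base
    except ValueError:
        # Different drives on Windows, or otherwise incomparable paths
        return False
```

Note that commonpath() raises ValueError for incomparable paths, which is why the real directive wraps the comparison in try/except as well.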
+3 -1
Documentation/sphinx/maintainers_include.py
···
 import os.path
 
 from docutils import statemachine
-from docutils.utils.error_reporting import ErrorString
 from docutils.parsers.rst import Directive
 from docutils.parsers.rst.directives.misc import Include
+
+def ErrorString(exc):  # Shamelessly stolen from docutils
+    return f'{exc.__class__.__name__}: {exc}'
 
 __version__ = '1.0'
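The inlined ErrorString() replacement above reproduces the formatting of the removed docutils helper: the exception's class name (via the class's `__name__` attribute), a colon, then the message. A standalone sketch of the same helper:

```python
def error_string(exc):
    """Format an exception as docutils' ErrorString did: 'ClassName: message'."""
    # It is __class__.__name__ (not __class__.__name) that yields the class name.
    return f'{exc.__class__.__name__}: {exc}'

error_string(ValueError("bad value"))  # → 'ValueError: bad value'
```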
-404
Documentation/sphinx/parse-headers.pl
···
-#!/usr/bin/env perl
-# SPDX-License-Identifier: GPL-2.0
-# Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab@kernel.org>.
-
-use strict;
-use Text::Tabs;
-use Getopt::Long;
-use Pod::Usage;
-
-my $debug;
-my $help;
-my $man;
-
-GetOptions(
-    "debug" => \$debug,
-    'usage|?' => \$help,
-    'help' => \$man
-) or pod2usage(2);
-
-pod2usage(1) if $help;
-pod2usage(-exitstatus => 0, -verbose => 2) if $man;
-pod2usage(2) if (scalar @ARGV < 2 || scalar @ARGV > 3);
-
-my ($file_in, $file_out, $file_exceptions) = @ARGV;
-
-my $data;
-my %ioctls;
-my %defines;
-my %typedefs;
-my %enums;
-my %enum_symbols;
-my %structs;
-
-require Data::Dumper if ($debug);
-
-#
-# read the file and get identifiers
-#
-
-my $is_enum = 0;
-my $is_comment = 0;
-open IN, $file_in or die "Can't open $file_in";
-while (<IN>) {
-    $data .= $_;
-
-    my $ln = $_;
-    if (!$is_comment) {
-        $ln =~ s,/\*.*(\*/),,g;
-
-        $is_comment = 1 if ($ln =~ s,/\*.*,,);
-    } else {
-        if ($ln =~ s,^(.*\*/),,) {
-            $is_comment = 0;
-        } else {
-            next;
-        }
-    }
-
-    if ($is_enum && $ln =~ m/^\s*([_\w][\w\d_]+)\s*[\,=]?/) {
-        my $s = $1;
-        my $n = $1;
-        $n =~ tr/A-Z/a-z/;
-        $n =~ tr/_/-/;
-
-        $enum_symbols{$s} = "\\ :ref:`$s <$n>`\\ ";
-
-        $is_enum = 0 if ($is_enum && m/\}/);
-        next;
-    }
-    $is_enum = 0 if ($is_enum && m/\}/);
-
-    if ($ln =~ m/^\s*#\s*define\s+([_\w][\w\d_]+)\s+_IO/) {
-        my $s = $1;
-        my $n = $1;
-        $n =~ tr/A-Z/a-z/;
-
-        $ioctls{$s} = "\\ :ref:`$s <$n>`\\ ";
-        next;
-    }
-
-    if ($ln =~ m/^\s*#\s*define\s+([_\w][\w\d_]+)\s+/) {
-        my $s = $1;
-        my $n = $1;
-        $n =~ tr/A-Z/a-z/;
-        $n =~ tr/_/-/;
-
-        $defines{$s} = "\\ :ref:`$s <$n>`\\ ";
-        next;
-    }
-
-    if ($ln =~ m/^\s*typedef\s+([_\w][\w\d_]+)\s+(.*)\s+([_\w][\w\d_]+);/) {
-        my $s = $2;
-        my $n = $3;
-
-        $typedefs{$n} = "\\ :c:type:`$n <$s>`\\ ";
-        next;
-    }
-    if ($ln =~ m/^\s*enum\s+([_\w][\w\d_]+)\s+\{/
-        || $ln =~ m/^\s*enum\s+([_\w][\w\d_]+)$/
-        || $ln =~ m/^\s*typedef\s*enum\s+([_\w][\w\d_]+)\s+\{/
-        || $ln =~ m/^\s*typedef\s*enum\s+([_\w][\w\d_]+)$/) {
-        my $s = $1;
-
-        $enums{$s} = "enum :c:type:`$s`\\ ";
-
-        $is_enum = $1;
-        next;
-    }
-    if ($ln =~ m/^\s*struct\s+([_\w][\w\d_]+)\s+\{/
-        || $ln =~ m/^\s*struct\s+([[_\w][\w\d_]+)$/
-        || $ln =~ m/^\s*typedef\s*struct\s+([_\w][\w\d_]+)\s+\{/
-        || $ln =~ m/^\s*typedef\s*struct\s+([[_\w][\w\d_]+)$/
-    ) {
-        my $s = $1;
-
-        $structs{$s} = "struct $s\\ ";
-        next;
-    }
-}
-close IN;
-
-#
-# Handle multi-line typedefs
-#
-
-my @matches = ($data =~ m/typedef\s+struct\s+\S+?\s*\{[^\}]+\}\s*(\S+)\s*\;/g,
-               $data =~ m/typedef\s+enum\s+\S+?\s*\{[^\}]+\}\s*(\S+)\s*\;/g,);
-foreach my $m (@matches) {
-    my $s = $m;
-
-    $typedefs{$s} = "\\ :c:type:`$s`\\ ";
-    next;
-}
-
-#
-# Handle exceptions, if any
-#
-
-my %def_reftype = (
-    "ioctl"   => ":ref",
-    "define"  => ":ref",
-    "symbol"  => ":ref",
-    "typedef" => ":c:type",
-    "enum"    => ":c:type",
-    "struct"  => ":c:type",
-);
-
-if ($file_exceptions) {
-    open IN, $file_exceptions or die "Can't read $file_exceptions";
-    while (<IN>) {
-        next if (m/^\s*$/ || m/^\s*#/);
-
-        # Parsers to ignore a symbol
-
-        if (m/^ignore\s+ioctl\s+(\S+)/) {
-            delete $ioctls{$1} if (exists($ioctls{$1}));
-            next;
-        }
-        if (m/^ignore\s+define\s+(\S+)/) {
-            delete $defines{$1} if (exists($defines{$1}));
-            next;
-        }
-        if (m/^ignore\s+typedef\s+(\S+)/) {
-            delete $typedefs{$1} if (exists($typedefs{$1}));
-            next;
-        }
-        if (m/^ignore\s+enum\s+(\S+)/) {
-            delete $enums{$1} if (exists($enums{$1}));
-            next;
-        }
-        if (m/^ignore\s+struct\s+(\S+)/) {
-            delete $structs{$1} if (exists($structs{$1}));
-            next;
-        }
-        if (m/^ignore\s+symbol\s+(\S+)/) {
-            delete $enum_symbols{$1} if (exists($enum_symbols{$1}));
-            next;
-        }
-
-        # Parsers to replace a symbol
-        my ($type, $old, $new, $reftype);
-
-        if (m/^replace\s+(\S+)\s+(\S+)\s+(\S+)/) {
-            $type = $1;
-            $old = $2;
-            $new = $3;
-        } else {
-            die "Can't parse $file_exceptions: $_";
-        }
-
-        if ($new =~ m/^\:c\:(data|func|macro|type)\:\`(.+)\`/) {
-            $reftype = ":c:$1";
-            $new = $2;
-        } elsif ($new =~ m/\:ref\:\`(.+)\`/) {
-            $reftype = ":ref";
-            $new = $1;
-        } else {
-            $reftype = $def_reftype{$type};
-        }
-        $new = "$reftype:`$old <$new>`";
-
-        if ($type eq "ioctl") {
-            $ioctls{$old} = $new if (exists($ioctls{$old}));
-            next;
-        }
-        if ($type eq "define") {
-            $defines{$old} = $new if (exists($defines{$old}));
-            next;
-        }
-        if ($type eq "symbol") {
-            $enum_symbols{$old} = $new if (exists($enum_symbols{$old}));
-            next;
-        }
-        if ($type eq "typedef") {
-            $typedefs{$old} = $new if (exists($typedefs{$old}));
-            next;
-        }
-        if ($type eq "enum") {
-            $enums{$old} = $new if (exists($enums{$old}));
-            next;
-        }
-        if ($type eq "struct") {
-            $structs{$old} = $new if (exists($structs{$old}));
-            next;
-        }
-
-        die "Can't parse $file_exceptions: $_";
-    }
-}
-
-if ($debug) {
-    print Data::Dumper->Dump([\%ioctls], [qw(*ioctls)]) if (%ioctls);
-    print Data::Dumper->Dump([\%typedefs], [qw(*typedefs)]) if (%typedefs);
-    print Data::Dumper->Dump([\%enums], [qw(*enums)]) if (%enums);
-    print Data::Dumper->Dump([\%structs], [qw(*structs)]) if (%structs);
-    print Data::Dumper->Dump([\%defines], [qw(*defines)]) if (%defines);
-    print Data::Dumper->Dump([\%enum_symbols], [qw(*enum_symbols)]) if (%enum_symbols);
-}
-
-#
-# Align block
-#
-$data = expand($data);
-$data = "    " . $data;
-$data =~ s/\n/\n    /g;
-$data =~ s/\n\s+$/\n/g;
-$data =~ s/\n\s+\n/\n\n/g;
-
-#
-# Add escape codes for special characters
-#
-$data =~ s,([\_\`\*\<\>\&\\\\:\/\|\%\$\#\{\}\~\^]),\\$1,g;
-
-$data =~ s,DEPRECATED,**DEPRECATED**,g;
-
-#
-# Add references
-#
-
-my $start_delim = "[ \n\t\(\=\*\@]";
-my $end_delim = "(\\s|,|\\\\=|\\\\:|\\;|\\\)|\\}|\\{)";
-
-foreach my $r (keys %ioctls) {
-    my $s = $ioctls{$r};
-
-    $r =~ s,([\_\`\*\<\>\&\\\\:\/]),\\\\$1,g;
-
-    print "$r -> $s\n" if ($debug);
-
-    $data =~ s/($start_delim)($r)$end_delim/$1$s$3/g;
-}
-
-foreach my $r (keys %defines) {
-    my $s = $defines{$r};
-
-    $r =~ s,([\_\`\*\<\>\&\\\\:\/]),\\\\$1,g;
-
-    print "$r -> $s\n" if ($debug);
-
-    $data =~ s/($start_delim)($r)$end_delim/$1$s$3/g;
-}
-
-foreach my $r (keys %enum_symbols) {
-    my $s = $enum_symbols{$r};
-
-    $r =~ s,([\_\`\*\<\>\&\\\\:\/]),\\\\$1,g;
-
-    print "$r -> $s\n" if ($debug);
-
-    $data =~ s/($start_delim)($r)$end_delim/$1$s$3/g;
-}
-
-foreach my $r (keys %enums) {
-    my $s = $enums{$r};
-
-    $r =~ s,([\_\`\*\<\>\&\\\\:\/]),\\\\$1,g;
-
-    print "$r -> $s\n" if ($debug);
-
-    $data =~ s/enum\s+($r)$end_delim/$s$2/g;
-}
-
-foreach my $r (keys %structs) {
-    my $s = $structs{$r};
-
-    $r =~ s,([\_\`\*\<\>\&\\\\:\/]),\\\\$1,g;
-
-    print "$r -> $s\n" if ($debug);
-
-    $data =~ s/struct\s+($r)$end_delim/$s$2/g;
-}
-
-foreach my $r (keys %typedefs) {
-    my $s = $typedefs{$r};
-
-    $r =~ s,([\_\`\*\<\>\&\\\\:\/]),\\\\$1,g;
-
-    print "$r -> $s\n" if ($debug);
-    $data =~ s/($start_delim)($r)$end_delim/$1$s$3/g;
-}
-
-$data =~ s/\\ ([\n\s])/\1/g;
-
-#
-# Generate output file
-#
-
-my $title = $file_in;
-$title =~ s,.*/,,;
-
-open OUT, "> $file_out" or die "Can't open $file_out";
-print OUT ".. -*- coding: utf-8; mode: rst -*-\n\n";
-print OUT "$title\n";
-print OUT "=" x length($title);
-print OUT "\n\n.. parsed-literal::\n\n";
-print OUT $data;
-close OUT;
-
-__END__
-
-=head1 NAME
-
-parse_headers.pl - parse a C file, in order to identify functions, structs,
-enums and defines and create cross-references to a Sphinx book.
-
-=head1 SYNOPSIS
-
-B<parse_headers.pl> [<options>] <C_FILE> <OUT_FILE> [<EXCEPTIONS_FILE>]
-
-Where <options> can be: --debug, --help or --usage.
-
-=head1 OPTIONS
-
-=over 8
-
-=item B<--debug>
-
-Put the script in verbose mode, useful for debugging.
-
-=item B<--usage>
-
-Prints a brief help message and exits.
-
-=item B<--help>
-
-Prints a more detailed help message and exits.
-
-=back
-
-=head1 DESCRIPTION
-
-Convert a C header or source file (C_FILE), into a ReStructured Text
-included via ..parsed-literal block with cross-references for the
-documentation files that describe the API. It accepts an optional
-EXCEPTIONS_FILE with describes what elements will be either ignored or
-be pointed to a non-default reference.
-
-The output is written at the (OUT_FILE).
-
-It is capable of identifying defines, functions, structs, typedefs,
-enums and enum symbols and create cross-references for all of them.
-It is also capable of distinguish #define used for specifying a Linux
-ioctl.
-
-The EXCEPTIONS_FILE contain two rules to allow ignoring a symbol or
-to replace the default references by a custom one.
-
-Please read Documentation/doc-guide/parse-headers.rst at the Kernel's
-tree for more details.
-
-=head1 BUGS
-
-Report bugs to Mauro Carvalho Chehab <mchehab@kernel.org>
-
-=head1 COPYRIGHT
-
-Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab@kernel.org>.
-
-License GPLv2: GNU GPL version 2 <https://gnu.org/licenses/gpl.html>.
-
-This is free software: you are free to change and redistribute it.
-There is NO WARRANTY, to the extent permitted by law.
-
-=cut
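The removed Perl script's core job — scan a C header, classify identifiers, and emit cross-references — now lives on the Python side (the ParseDataStructs class used by kernel-include above). A much-reduced sketch of that scanning idea, with regexes simplified from the Perl above (the real Python class almost certainly differs in detail):

```python
import re

# Simplified from the Perl patterns above: classify a few identifier kinds.
DEFINE_RE = re.compile(r'^\s*#\s*define\s+(\w+)\s+(\S+)')
STRUCT_RE = re.compile(r'^\s*struct\s+(\w+)\s*\{')

def scan_identifiers(text):
    """Return (defines, ioctls, structs) found in C source text.

    A #define whose body starts with _IO is treated as an ioctl,
    mirroring the Perl script's special case for _IO/_IOR/_IOW/_IOWR.
    """
    defines, ioctls, structs = {}, {}, set()
    for line in text.splitlines():
        m = DEFINE_RE.match(line)
        if m:
            name, body = m.groups()
            if body.startswith('_IO'):
                ioctls[name] = body
            else:
                defines[name] = body
            continue
        m = STRUCT_RE.match(line)
        if m:
            structs.add(m.group(1))
    return defines, ioctls, structs
```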
+2 -1
Documentation/sphinx/templates/kernel-toc.html
···
-<!-- SPDX-License-Identifier: GPL-2.0 -->
+{# SPDX-License-Identifier: GPL-2.0 #}
+
 {# Create a local TOC the kernel way #}
 <p>
 <h3 class="kernel-toc-contents">Contents</h3>
+2 -2
Documentation/sphinx/templates/translations.html
···
-<!-- SPDX-License-Identifier: GPL-2.0 -->
-<!-- Copyright © 2023, Oracle and/or its affiliates. -->
+{# SPDX-License-Identifier: GPL-2.0 #}
+{# Copyright © 2023, Oracle and/or its affiliates. #}
 
 {# Create a language menu for translations #}
 {% if languages|length > 0: %}
+1 -1
Documentation/staging/remoteproc.rst
···
     rproc_shutdown(my_rproc);
 }
 
-API for implementors
+API for implementers
 ====================
 
 ::
+1 -1
Documentation/trace/boottime-trace.rst
···
 Options in the Boot Config
 ==========================
 
-Here is the list of available options list for boot time tracing in
+Here is the list of available options for boot time tracing in
 boot config file [1]_. All options are under "ftrace." or "kernel."
 prefix. See kernel parameters for the options which starts
 with "kernel." prefix [2]_.
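As a concrete illustration, a boot config fragment using the "ftrace." prefix described above might look like the following (a sketch: ftrace.tracer and per-event enable keys are documented options, but verify the exact names against the option list in this file before relying on them):

```
ftrace.tracer = function
ftrace.event.sched.sched_process_exec.enable
```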
+1 -1
Documentation/trace/debugging.rst
···
 crash occurs. This could be from the oops message in printk, or one could
 use kexec/kdump. But these just show what happened at the time of the crash.
 It can be very useful in knowing what happened up to the point of the crash.
-The tracing ring buffer, by default, is a circular buffer than will
+The tracing ring buffer, by default, is a circular buffer that will
 overwrite older events with newer ones. When a crash happens, the content of
 the ring buffer will be all the events that lead up to the crash.
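The overwrite behavior described here is exactly that of a fixed-size circular buffer: once full, each new event evicts the oldest, so what remains after a crash is the tail of history leading up to it. In Python terms (an illustration of the semantics only, not kernel code):

```python
from collections import deque

# A 4-slot "ring buffer": when full, appending drops the oldest event,
# just as the tracing ring buffer overwrites older events by default.
ring = deque(maxlen=4)
for event in ["open", "read", "write", "close", "crash"]:
    ring.append(event)

# Only the last 4 events leading up to "crash" remain.
print(list(ring))  # → ['read', 'write', 'close', 'crash']
```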
+4 -4
Documentation/trace/events.rst
···
 - tracing synthetic events from in-kernel code
 - the low-level "dynevent_cmd" API
 
-7.1 Dyamically creating synthetic event definitions
----------------------------------------------------
+7.1 Dynamically creating synthetic event definitions
+----------------------------------------------------
 
 There are a couple ways to create a new synthetic event from a kernel
 module or other kernel code.
···
 of whether any of the add calls failed (say due to a bad field name
 being passed in).
 
-7.3 Dyamically creating kprobe and kretprobe event definitions
---------------------------------------------------------------
+7.3 Dynamically creating kprobe and kretprobe event definitions
+---------------------------------------------------------------
 
 To create a kprobe or kretprobe trace event from kernel code, the
 kprobe_event_gen_cmd_start() or kretprobe_event_gen_cmd_start()
+1 -1
Documentation/trace/fprobe.rst
···
 after the register_fprobe() is called and before it returns. See
 :file:`Documentation/trace/ftrace.rst`.
 
-Also, the unregister_fprobe() will guarantee that the both enter and exit
+Also, the unregister_fprobe() will guarantee that both enter and exit
 handlers are no longer being called by functions after unregister_fprobe()
 returns as same as unregister_ftrace_function().
+1 -1
Documentation/trace/ftrace-uses.rst
···
 Note, if this flag is set, then the callback will always be called
 with preemption disabled. If it is not set, then it is possible
 (but not guaranteed) that the callback will be called in
-preemptable context.
+preemptible context.
 
 FTRACE_OPS_FL_IPMODIFY
 	Requires FTRACE_OPS_FL_SAVE_REGS set. If the callback is to "hijack"
+7 -7
Documentation/trace/ftrace.rst
···
 a task is woken to the task is actually scheduled in.
 
 One of the most common uses of ftrace is the event tracing.
-Throughout the kernel is hundreds of static event points that
+Throughout the kernel are hundreds of static event points that
 can be enabled via the tracefs file system to see what is
 going on in certain parts of the kernel.
···
 not be listed in this count.
 
 If the callback registered to be traced by a function with
-the "save regs" attribute (thus even more overhead), a 'R'
+the "save regs" attribute (thus even more overhead), an 'R'
 will be displayed on the same line as the function that
 is returning registers.
···
 an 'I' will be displayed on the same line as the function that
 can be overridden.
 
-If a non ftrace trampoline is attached (BPF) a 'D' will be displayed.
+If a non-ftrace trampoline is attached (BPF) a 'D' will be displayed.
 Note, normal ftrace trampolines can also be attached, but only one
 "direct" trampoline can be attached to a given function at a time.
···
 If a function had either the "ip modify" or a "direct" call attached to
 it in the past, a 'M' will be shown. This flag is never cleared. It is
-used to know if a function was every modified by the ftrace infrastructure,
+used to know if a function was ever modified by the ftrace infrastructure,
 and can be used for debugging.
 
 If the architecture supports it, it will also show what callback
···
 This file contains all the functions that ever had a function callback
 to it via the ftrace infrastructure. It has the same format as
-enabled_functions but shows all functions that have every been
+enabled_functions but shows all functions that have ever been
 traced.
 
 To see any function that has every been modified by "ip modify" or a
···
 Whenever an event is recorded into the ring buffer, a
 "timestamp" is added. This stamp comes from a specified
 clock. By default, ftrace uses the "local" clock. This
-clock is very fast and strictly per cpu, but on some
+clock is very fast and strictly per CPU, but on some
 systems it may not be monotonic with respect to other
 CPUs. In other words, the local clocks may not be in sync
 with local clocks on other CPUs.
···
 "mmiotrace"
 
-	A special tracer that is used to trace binary module.
+	A special tracer that is used to trace binary modules.
 	It will trace all the calls that a module makes to the
 	hardware. Everything it writes and reads from the I/O
 	as well.
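The warning about the "local" clock can be made concrete: if each CPU stamps events with its own unsynchronized clock, a timestamp merge of the per-CPU buffers can show an effect before its cause. A toy illustration (pure Python, invented numbers):

```python
# Two per-CPU event buffers; CPU1's local clock lags CPU0's by 50 units.
skew = -50
cpu0 = [(100, "cpu0: wakeup sent")]
# Handled at real time 110, but stamped with CPU1's lagging clock:
cpu1 = [(110 + skew, "cpu1: wakeup handled")]

# Merging by timestamp now orders the effect before its cause.
merged = [msg for _, msg in sorted(cpu0 + cpu1)]
print(merged)  # → ['cpu1: wakeup handled', 'cpu0: wakeup sent']
```

This is why ftrace offers alternative clocks (selectable via trace_clock) when cross-CPU ordering matters more than per-event overhead.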
+77 -74
Documentation/trace/histogram-design.rst
··· 11 11 structures used to implement them in trace_events_hist.c and 12 12 tracing_map.c. 13 13 14 - Note: All the ftrace histogram command examples assume the working 15 - directory is the ftrace /tracing directory. For example:: 14 + .. note:: 15 + All the ftrace histogram command examples assume the working 16 + directory is the ftrace /tracing directory. For example:: 16 17 17 18 # cd /sys/kernel/tracing 18 19 19 - Also, the histogram output displayed for those commands will be 20 - generally be truncated - only enough to make the point is displayed. 20 + Also, the histogram output displayed for those commands will be 21 + generally be truncated - only enough to make the point is displayed. 21 22 22 23 'hist_debug' trace event files 23 24 ============================== ··· 143 142 +--------------+ | | 144 143 n_keys = n_fields - n_vals | | 145 144 146 - The hist_data n_vals and n_fields delineate the extent of the fields[] | | 147 - array and separate keys from values for the rest of the code. | | 145 + The hist_data n_vals and n_fields delineate the extent of the fields[] 146 + array and separate keys from values for the rest of the code. 148 147 149 - Below is a run-time representation of the tracing_map part of the | | 150 - histogram, with pointers from various parts of the fields[] array | | 151 - to corresponding parts of the tracing_map. | | 148 + Below is a run-time representation of the tracing_map part of the 149 + histogram, with pointers from various parts of the fields[] array 150 + to corresponding parts of the tracing_map. 152 151 153 - The tracing_map consists of an array of tracing_map_entrys and a set | | 154 - of preallocated tracing_map_elts (abbreviated below as map_entry and | | 155 - map_elt). The total number of map_entrys in the hist_data.map array = | | 156 - map->max_elts (actually map->map_size but only max_elts of those are | | 157 - used. This is a property required by the map_insert() algorithm). 
| | 152 + The tracing_map consists of an array of tracing_map_entrys and a set 153 + of preallocated tracing_map_elts (abbreviated below as map_entry and 154 + map_elt). The total number of map_entrys in the hist_data.map array = 155 + map->max_elts (actually map->map_size but only max_elts of those are 156 + used. This is a property required by the map_insert() algorithm). 158 157 159 - If a map_entry is unused, meaning no key has yet hashed into it, its | | 160 - .key value is 0 and its .val pointer is NULL. Once a map_entry has | | 161 - been claimed, the .key value contains the key's hash value and the | | 162 - .val member points to a map_elt containing the full key and an entry | | 163 - for each key or value in the map_elt.fields[] array. There is an | | 164 - entry in the map_elt.fields[] array corresponding to each hist_field | | 165 - in the histogram, and this is where the continually aggregated sums | | 166 - corresponding to each histogram value are kept. | | 158 + If a map_entry is unused, meaning no key has yet hashed into it, its 159 + .key value is 0 and its .val pointer is NULL. Once a map_entry has 160 + been claimed, the .key value contains the key's hash value and the 161 + .val member points to a map_elt containing the full key and an entry 162 + for each key or value in the map_elt.fields[] array. There is an 163 + entry in the map_elt.fields[] array corresponding to each hist_field 164 + in the histogram, and this is where the continually aggregated sums 165 + corresponding to each histogram value are kept. 167 166 168 - The diagram attempts to show the relationship between the | | 169 - hist_data.fields[] and the map_elt.fields[] with the links drawn | | 167 + The diagram attempts to show the relationship between the 168 + hist_data.fields[] and the map_elt.fields[] with the links drawn 170 169 between diagrams:: 171 170 172 171 +-----------+ | | ··· 381 380 trigger above. 
382 381 383 382 sched_waking histogram 384 - ----------------------:: 383 + ---------------------- 384 + 385 + .. code-block:: 385 386 386 387 +------------------+ 387 388 | hist_data |<-------------------------------------------------------+ ··· 443 440 n_keys = n_fields - n_vals | | | 444 441 | | | 445 442 446 - This is very similar to the basic case. In the above diagram, we can | | | 447 - see a new .flags member has been added to the struct hist_field | | | 448 - struct, and a new entry added to hist_data.fields representing the ts0 | | | 449 - variable. For a normal val hist_field, .flags is just 0 (modulo | | | 450 - modifier flags), but if the value is defined as a variable, the .flags | | | 451 - contains a set FL_VAR bit. | | | 443 + This is very similar to the basic case. In the above diagram, we can 444 + see a new .flags member has been added to the struct hist_field 445 + struct, and a new entry added to hist_data.fields representing the ts0 446 + variable. For a normal val hist_field, .flags is just 0 (modulo 447 + modifier flags), but if the value is defined as a variable, the .flags 448 + contains a set FL_VAR bit. 452 449 453 - As you can see, the ts0 entry's .var.idx member contains the index | | | 454 - into the tracing_map_elts' .vars[] array containing variable values. | | | 455 - This idx is used whenever the value of the variable is set or read. | | | 456 - The map_elt.vars idx assigned to the given variable is assigned and | | | 457 - saved in .var.idx by create_tracing_map_fields() after it calls | | | 458 - tracing_map_add_var(). | | | 450 + As you can see, the ts0 entry's .var.idx member contains the index 451 + into the tracing_map_elts' .vars[] array containing variable values. 452 + This idx is used whenever the value of the variable is set or read. 453 + The map_elt.vars idx assigned to the given variable is assigned and 454 + saved in .var.idx by create_tracing_map_fields() after it calls 455 + tracing_map_add_var(). 
459 456 460 - Below is a representation of the histogram at run-time, which | | | 461 - populates the map, along with correspondence to the above hist_data and | | | 462 - hist_field data structures. | | | 457 + Below is a representation of the histogram at run-time, which 458 + populates the map, along with correspondence to the above hist_data and 459 + hist_field data structures. 463 460 464 - The diagram attempts to show the relationship between the | | | 465 - hist_data.fields[] and the map_elt.fields[] and map_elt.vars[] with | | | 466 - the links drawn between diagrams. For each of the map_elts, you can | | | 467 - see that the .fields[] members point to the .sum or .offset of a key | | | 468 - or val and the .vars[] members point to the value of a variable. The | | | 469 - arrows between the two diagrams show the linkages between those | | | 470 - tracing_map members and the field definitions in the corresponding | | | 461 + The diagram attempts to show the relationship between the 462 + hist_data.fields[] and the map_elt.fields[] and map_elt.vars[] with 463 + the links drawn between diagrams. For each of the map_elts, you can 464 + see that the .fields[] members point to the .sum or .offset of a key 465 + or val and the .vars[] members point to the value of a variable. The 466 + arrows between the two diagrams show the linkages between those 467 + tracing_map members and the field definitions in the corresponding 471 468 hist_data fields[] members.:: 472 469 473 470 +-----------+ | | | ··· 568 565 | | | | 569 566 +---------------+ | | 570 567 571 - For each used map entry, there's a map_elt pointing to an array of | | 572 - .vars containing the current value of the variables associated with | | 573 - that histogram entry. So in the above, the timestamp associated with | | 574 - pid 999 is 113345679876, and the timestamp variable in the same | | 575 - .var.idx for pid 4444 is 213499240729. 
| | 568 + For each used map entry, there's a map_elt pointing to an array of 569 + .vars containing the current value of the variables associated with 570 + that histogram entry. So in the above, the timestamp associated with 571 + pid 999 is 113345679876, and the timestamp variable in the same 572 + .var.idx for pid 4444 is 213499240729. 576 573 577 - sched_switch histogram | | 578 - ---------------------- | | 574 + sched_switch histogram 575 + ---------------------- 579 576 580 - The sched_switch histogram paired with the above sched_waking | | 581 - histogram is shown below. The most important aspect of the | | 582 - sched_switch histogram is that it references a variable on the | | 583 - sched_waking histogram above. | | 577 + The sched_switch histogram paired with the above sched_waking 578 + histogram is shown below. The most important aspect of the 579 + sched_switch histogram is that it references a variable on the 580 + sched_waking histogram above. 584 581 585 - The histogram diagram is very similar to the others so far displayed, | | 586 - but it adds variable references. You can see the normal hitcount and | | 587 - key fields along with a new wakeup_lat variable implemented in the | | 588 - same way as the sched_waking ts0 variable, but in addition there's an | | 589 - entry with the new FL_VAR_REF (short for HIST_FIELD_FL_VAR_REF) flag. | | 582 + The histogram diagram is very similar to the others so far displayed, 583 + but it adds variable references. You can see the normal hitcount and 584 + key fields along with a new wakeup_lat variable implemented in the 585 + same way as the sched_waking ts0 variable, but in addition there's an 586 + entry with the new FL_VAR_REF (short for HIST_FIELD_FL_VAR_REF) flag. 590 587 591 - Associated with the new var ref field are a couple of new hist_field | | 592 - members, var.hist_data and var_ref_idx. 
For a variable reference, the | | 593 - var.hist_data goes with the var.idx, which together uniquely identify | | 594 - a particular variable on a particular histogram. The var_ref_idx is | | 595 - just the index into the var_ref_vals[] array that caches the values of | | 596 - each variable whenever a hist trigger is updated. Those resulting | | 597 - values are then finally accessed by other code such as trace action | | 598 - code that uses the var_ref_idx values to assign param values. | | 588 + Associated with the new var ref field are a couple of new hist_field 589 + members, var.hist_data and var_ref_idx. For a variable reference, the 590 + var.hist_data goes with the var.idx, which together uniquely identify 591 + a particular variable on a particular histogram. The var_ref_idx is 592 + just the index into the var_ref_vals[] array that caches the values of 593 + each variable whenever a hist trigger is updated. Those resulting 594 + values are then finally accessed by other code such as trace action 595 + code that uses the var_ref_idx values to assign param values. 599 596 600 - The diagram below describes the situation for the sched_switch | | 597 + The diagram below describes the situation for the sched_switch 601 598 histogram referred to before:: 602 599 603 - # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0' >> | | 604 - events/sched/sched_switch/trigger | | 600 + # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0' >> 601 + events/sched/sched_switch/trigger 605 602 | | 606 603 +------------------+ | | 607 604 | hist_data | | |
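The tracing_map behaviour spelled out in the histogram-design.rst text above (an unused map_entry has a .key of 0 and a NULL .val; a claimed entry stores the key's hash and points to a preallocated map_elt holding per-field sums and per-variable slots) can be sketched as a toy model. This is an illustrative Python sketch, not the kernel's code; the names and the probing details are simplified assumptions:

```python
# Toy model of the tracing_map described above: an open-addressed array
# of entries, each claimed entry pointing at a preallocated element that
# holds the full key, the aggregated sums, and the variable slots.
class MapElt:
    def __init__(self, n_fields, n_vars):
        self.key = None
        self.fields = [0] * n_fields   # aggregated sums (e.g. hitcount)
        self.vars = [0] * n_vars       # variable values (e.g. ts0)

class ToyTracingMap:
    def __init__(self, map_size, n_fields, n_vars):
        self.entries = [{"key": 0, "val": None} for _ in range(map_size)]
        # Elements are preallocated up front, mirroring the text's point
        # that insertion never allocates while events are being traced.
        self.free_elts = [MapElt(n_fields, n_vars) for _ in range(map_size)]

    def insert(self, key):
        h = hash(key) | 1              # never 0, so 0 can mean "unused"
        idx = h % len(self.entries)
        while True:
            entry = self.entries[idx]
            if entry["key"] == 0:      # unused slot: claim it
                entry["key"] = h
                entry["val"] = self.free_elts.pop()
                entry["val"].key = key
                return entry["val"]
            if entry["key"] == h and entry["val"].key == key:
                return entry["val"]    # already claimed by this key
            idx = (idx + 1) % len(self.entries)

m = ToyTracingMap(map_size=8, n_fields=2, n_vars=1)
elt = m.insert(("pid", 999))
elt.fields[0] += 1                     # bump the hitcount sum
elt.vars[0] = 113345679876             # set a ts0-style variable
```

Re-inserting the same key returns the same element, which is exactly what lets the .fields[] sums aggregate across events.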
+20 -20
Documentation/trace/histogram.rst
··· 186 186 The examples below provide a more concrete illustration of the 187 187 concepts and typical usage patterns discussed above. 188 188 189 - 'special' event fields 190 - ------------------------ 189 + 2.1. 'special' event fields 190 + --------------------------- 191 191 192 192 There are a number of 'special event fields' available for use as 193 193 keys or values in a hist trigger. These look like and behave as if ··· 204 204 common_cpu int the cpu on which the event occurred. 205 205 ====================== ==== ======================================= 206 206 207 - Extended error information 208 - -------------------------- 207 + 2.2. Extended error information 208 + ------------------------------- 209 209 210 210 For some error conditions encountered when invoking a hist trigger 211 211 command, extended error information is available via the 212 - tracing/error_log file. See Error Conditions in 213 - :file:`Documentation/trace/ftrace.rst` for details. 212 + tracing/error_log file. See "Error conditions" section in 213 + Documentation/trace/ftrace.rst for details. 214 214 215 - 6.2 'hist' trigger examples 216 - --------------------------- 215 + 2.3. 'hist' trigger examples 216 + ---------------------------- 217 217 218 218 The first set of examples creates aggregations using the kmalloc 219 219 event. The fields that can be used for the hist trigger are listed ··· 840 840 841 841 The compound key examples used a key and a sum value (hitcount) to 842 842 sort the output, but we can just as easily use two keys instead. 843 - Here's an example where we use a compound key composed of the the 843 + Here's an example where we use a compound key composed of the 844 844 common_pid and size event fields. 
Sorting with pid as the primary 845 845 key and 'size' as the secondary key allows us to display an 846 846 ordered summary of the recvfrom sizes, with counts, received by ··· 1608 1608 Entries: 7 1609 1609 Dropped: 0 1610 1610 1611 - 2.2 Inter-event hist triggers 1612 - ----------------------------- 1611 + 2.4. Inter-event hist triggers 1612 + ------------------------------ 1613 1613 1614 1614 Inter-event hist triggers are hist triggers that combine values from 1615 1615 one or more other events and create a histogram using that data. Data ··· 1685 1685 1686 1686 These features are described in more detail in the following sections. 1687 1687 1688 - 2.2.1 Histogram Variables 1689 - ------------------------- 1688 + 2.5. Histogram Variables 1689 + ------------------------ 1690 1690 1691 1691 Variables are simply named locations used for saving and retrieving 1692 1692 values between matching events. A 'matching' event is defined as an ··· 1789 1789 1790 1790 Variables can even hold stacktraces, which are useful with synthetic events. 1791 1791 1792 - 2.2.2 Synthetic Events 1793 - ---------------------- 1792 + 2.6. Synthetic Events 1793 + --------------------- 1794 1794 1795 1795 Synthetic events are user-defined events generated from hist trigger 1796 1796 variables or fields associated with one or more other events. Their ··· 1846 1846 At this point, there isn't yet an actual 'wakeup_latency' event 1847 1847 instantiated in the event subsystem - for this to happen, a 'hist 1848 1848 trigger action' needs to be instantiated and bound to actual fields 1849 - and variables defined on other events (see Section 2.2.3 below on 1849 + and variables defined on other events (see Section 2.7. below on 1850 1850 how that is done using hist trigger 'onmatch' action). Once that is 1851 1851 done, the 'wakeup_latency' synthetic event instance is created. 
1852 1852 ··· 2094 2094 Entries: 7 2095 2095 Dropped: 0 2096 2096 2097 - 2.2.3 Hist trigger 'handlers' and 'actions' 2098 - ------------------------------------------- 2097 + 2.7. Hist trigger 'handlers' and 'actions' 2098 + ------------------------------------------ 2099 2099 2100 2100 A hist trigger 'action' is a function that's executed (in most cases 2101 2101 conditionally) whenever a histogram entry is added or updated. ··· 2526 2526 kworker/3:2-135 [003] d..3 49.823123: sched_switch: prev_comm=kworker/3:2 prev_pid=135 prev_prio=120 prev_state=T ==> next_comm=swapper/3 next_pid=0 next_prio=120 2527 2527 <idle>-0 [004] ..s7 49.823798: tcp_probe: src=10.0.0.10:54326 dest=23.215.104.193:80 mark=0x0 length=32 snd_nxt=0xe3ae2ff5 snd_una=0xe3ae2ecd snd_cwnd=10 ssthresh=2147483647 snd_wnd=28960 srtt=19604 rcv_wnd=29312 2528 2528 2529 - 3. User space creating a trigger 2530 - -------------------------------- 2529 + 2.8. User space creating a trigger 2530 + ---------------------------------- 2531 2531 2532 2532 Writing into /sys/kernel/tracing/trace_marker writes into the ftrace 2533 2533 ring buffer. This can also act like an event, by writing into the trigger
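The histogram.rst hunks above all revolve around trigger strings of the shape `hist:keys=...:vals=...:sort=...` echoed into an event's trigger file. A hypothetical helper (not part of the kernel or its tooling) makes that grammar explicit:

```python
# Hypothetical helper assembling a hist trigger string of the form used
# throughout the histogram documentation: hist:keys=K:vals=V:sort=S
def hist_trigger(keys, vals=None, sort=None, extra=None):
    parts = ["keys=" + ",".join(keys)]
    if vals:
        parts.append("vals=" + ",".join(vals))
    if sort:
        parts.append("sort=" + sort)
    if extra:                          # e.g. a variable assignment
        parts.append(extra)
    return "hist:" + ":".join(parts)

# Compound key of common_pid and size, as in the recvfrom example:
cmd = hist_trigger(keys=["common_pid.execname", "size"],
                   sort="common_pid,size")
# The resulting string would then be written to a trigger file such as
# /sys/kernel/tracing/events/syscalls/sys_enter_recvfrom/trigger
```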
+1 -1
Documentation/trace/rv/monitor_synthesis.rst
··· 181 181 functions interacting with the Buchi automaton. 182 182 183 183 While generating code, `rvgen` cannot understand the meaning of the atomic 184 - propositions. Thus, that task is left for manual work. The recommended pratice 184 + propositions. Thus, that task is left for manual work. The recommended practice 185 185 is adding tracepoints to places where the atomic propositions change; and in the 186 186 tracepoints' handlers: the Buchi automaton is executed using:: 187 187
-14
Documentation/translations/it_IT/process/changes.rst
··· 46 46 kmod 13 depmod -V 47 47 e2fsprogs 1.41.4 e2fsck -V 48 48 jfsutils 1.1.3 fsck.jfs -V 49 - reiserfsprogs 3.6.3 reiserfsck -V 50 49 xfsprogs 2.6.0 xfs_db -V 51 50 squashfs-tools 4.0 mksquashfs -version 52 51 btrfs-progs 0.18 btrfsck ··· 259 260 260 261 - sono disponibili altri strumenti per il file-system. 261 262 262 - Reiserfsprogs 263 - ------------- 264 - 265 - Il pacchetto reiserfsprogs dovrebbe essere usato con reiserfs-3.6.x (Linux 266 - kernel 2.4.x). Questo è un pacchetto combinato che contiene versioni 267 - funzionanti di ``mkreiserfs``, ``resize_reiserfs``, ``debugreiserfs`` e 268 - ``reiserfsck``. Questi programmi funzionano sulle piattaforme i386 e alpha. 269 - 270 263 Xfsprogs 271 264 -------- 272 265 ··· 469 478 -------- 470 479 471 480 - <https://jfs.sourceforge.net/> 472 - 473 - Reiserfsprogs 474 - ------------- 475 - 476 - - <https://git.kernel.org/pub/scm/linux/kernel/git/jeffm/reiserfsprogs.git/> 477 481 478 482 Xfsprogs 479 483 --------
-64
Documentation/userspace-api/media/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0 2 - 3 - # Rules to convert a .h file to inline RST documentation 4 - 5 - SRC_DIR=$(srctree)/Documentation/userspace-api/media 6 - PARSER = $(srctree)/Documentation/sphinx/parse-headers.pl 7 - UAPI = $(srctree)/include/uapi/linux 8 - KAPI = $(srctree)/include/linux 9 - 10 - FILES = ca.h.rst dmx.h.rst frontend.h.rst net.h.rst \ 11 - videodev2.h.rst media.h.rst cec.h.rst lirc.h.rst 12 - 13 - TARGETS := $(addprefix $(BUILDDIR)/, $(FILES)) 14 - 15 - gen_rst = \ 16 - echo ${PARSER} $< $@ $(SRC_DIR)/$(notdir $@).exceptions; \ 17 - ${PARSER} $< $@ $(SRC_DIR)/$(notdir $@).exceptions 18 - 19 - quiet_gen_rst = echo ' PARSE $(patsubst $(srctree)/%,%,$<)'; \ 20 - ${PARSER} $< $@ $(SRC_DIR)/$(notdir $@).exceptions 21 - 22 - silent_gen_rst = ${gen_rst} 23 - 24 - $(BUILDDIR)/ca.h.rst: ${UAPI}/dvb/ca.h ${PARSER} $(SRC_DIR)/ca.h.rst.exceptions 25 - @$($(quiet)gen_rst) 26 - 27 - $(BUILDDIR)/dmx.h.rst: ${UAPI}/dvb/dmx.h ${PARSER} $(SRC_DIR)/dmx.h.rst.exceptions 28 - @$($(quiet)gen_rst) 29 - 30 - $(BUILDDIR)/frontend.h.rst: ${UAPI}/dvb/frontend.h ${PARSER} $(SRC_DIR)/frontend.h.rst.exceptions 31 - @$($(quiet)gen_rst) 32 - 33 - $(BUILDDIR)/net.h.rst: ${UAPI}/dvb/net.h ${PARSER} $(SRC_DIR)/net.h.rst.exceptions 34 - @$($(quiet)gen_rst) 35 - 36 - $(BUILDDIR)/videodev2.h.rst: ${UAPI}/videodev2.h ${PARSER} $(SRC_DIR)/videodev2.h.rst.exceptions 37 - @$($(quiet)gen_rst) 38 - 39 - $(BUILDDIR)/media.h.rst: ${UAPI}/media.h ${PARSER} $(SRC_DIR)/media.h.rst.exceptions 40 - @$($(quiet)gen_rst) 41 - 42 - $(BUILDDIR)/cec.h.rst: ${UAPI}/cec.h ${PARSER} $(SRC_DIR)/cec.h.rst.exceptions 43 - @$($(quiet)gen_rst) 44 - 45 - $(BUILDDIR)/lirc.h.rst: ${UAPI}/lirc.h ${PARSER} $(SRC_DIR)/lirc.h.rst.exceptions 46 - @$($(quiet)gen_rst) 47 - 48 - # Media build rules 49 - 50 - .PHONY: all html texinfo epub xml latex 51 - 52 - all: $(IMGDOT) $(BUILDDIR) ${TARGETS} 53 - html: all 54 - texinfo: all 55 - epub: all 56 - xml: all 57 - latex: $(IMGPDF) all 58 - linkcheck: 59 - 60 - 
clean: 61 - -rm -f $(DOTTGT) $(IMGTGT) ${TARGETS} 2>/dev/null 62 - 63 - $(BUILDDIR): 64 - $(Q)mkdir -p $@
Documentation/userspace-api/media/ca.h.rst.exceptions → Documentation/userspace-api/media/dvb/ca.h.rst.exceptions
Documentation/userspace-api/media/cec.h.rst.exceptions → Documentation/userspace-api/media/cec/cec.h.rst.exceptions
+3 -2
Documentation/userspace-api/media/cec/cec-header.rst
··· 6 6 CEC Header File 7 7 *************** 8 8 9 - .. kernel-include:: $BUILDDIR/cec.h.rst 10 - 9 + .. kernel-include:: include/uapi/linux/cec.h 10 + :generate-cross-refs: 11 + :exception-file: cec.h.rst.exceptions
Documentation/userspace-api/media/dmx.h.rst.exceptions → Documentation/userspace-api/media/dvb/dmx.h.rst.exceptions
+13 -4
Documentation/userspace-api/media/dvb/headers.rst
··· 7 7 Digital TV uAPI headers 8 8 *********************** 9 9 10 - .. kernel-include:: $BUILDDIR/frontend.h.rst 10 + .. kernel-include:: include/uapi/linux/dvb/frontend.h 11 + :generate-cross-refs: 12 + :exception-file: frontend.h.rst.exceptions 11 13 12 - .. kernel-include:: $BUILDDIR/dmx.h.rst 14 + .. kernel-include:: include/uapi/linux/dvb/dmx.h 15 + :generate-cross-refs: 16 + :exception-file: dmx.h.rst.exceptions 13 17 14 - .. kernel-include:: $BUILDDIR/ca.h.rst 18 + .. kernel-include:: include/uapi/linux/dvb/ca.h 19 + :generate-cross-refs: 20 + :exception-file: ca.h.rst.exceptions 15 21 16 - .. kernel-include:: $BUILDDIR/net.h.rst 22 + .. kernel-include:: include/uapi/linux/dvb/net.h 23 + :generate-cross-refs: 24 + :exception-file: net.h.rst.exceptions 25 +
Documentation/userspace-api/media/frontend.h.rst.exceptions → Documentation/userspace-api/media/dvb/frontend.h.rst.exceptions
Documentation/userspace-api/media/lirc.h.rst.exceptions → Documentation/userspace-api/media/rc/lirc.h.rst.exceptions
Documentation/userspace-api/media/media.h.rst.exceptions → Documentation/userspace-api/media/mediactl/media.h.rst.exceptions
+3 -2
Documentation/userspace-api/media/mediactl/media-header.rst
··· 6 6 Media Controller Header File 7 7 **************************** 8 8 9 - .. kernel-include:: $BUILDDIR/media.h.rst 10 - 9 + .. kernel-include:: include/uapi/linux/media.h 10 + :generate-cross-refs: 11 + :exception-file: media.h.rst.exceptions
Documentation/userspace-api/media/net.h.rst.exceptions → Documentation/userspace-api/media/dvb/net.h.rst.exceptions
+3 -1
Documentation/userspace-api/media/rc/lirc-header.rst
··· 6 6 LIRC Header File 7 7 **************** 8 8 9 - .. kernel-include:: $BUILDDIR/lirc.h.rst 9 + .. kernel-include:: include/uapi/linux/lirc.h 10 + :generate-cross-refs: 11 + :exception-file: lirc.h.rst.exceptions 10 12
+3 -1
Documentation/userspace-api/media/v4l/videodev.rst
··· 6 6 Video For Linux Two Header File 7 7 ******************************* 8 8 9 - .. kernel-include:: $BUILDDIR/videodev2.h.rst 9 + .. kernel-include:: include/uapi/linux/videodev2.h 10 + :generate-cross-refs: 11 + :exception-file: videodev2.h.rst.exceptions
Documentation/userspace-api/media/videodev2.h.rst.exceptions → Documentation/userspace-api/media/v4l/videodev2.h.rst.exceptions
+1 -1
Documentation/virt/kvm/review-checklist.rst
··· 98 98 It is important to demonstrate your use case. This can be as simple as 99 99 explaining that the feature is already in use on bare metal, or it can be 100 100 a proof-of-concept implementation in userspace. The latter need not be 101 - open source, though that is of course preferrable for easier testing. 101 + open source, though that is of course preferable for easier testing. 102 102 Selftests should test corner cases of the APIs, and should also cover 103 103 basic host and guest operation if no open source VMM uses the feature. 104 104
+1 -1
Documentation/w1/masters/ds2482.rst
··· 22 22 ----------- 23 23 24 24 The Maxim/Dallas Semiconductor DS2482 is a I2C device that provides 25 - one (DS2482-100) or eight (DS2482-800) 1-wire busses. 25 + one (DS2482-100) or eight (DS2482-800) 1-wire buses. 26 26 27 27 28 28 General Remarks
+1 -1
Documentation/w1/masters/index.rst
··· 1 - . SPDX-License-Identifier: GPL-2.0 1 + .. SPDX-License-Identifier: GPL-2.0 2 2 3 3 ===================== 4 4 1-wire Master Drivers
+1 -1
Documentation/w1/slaves/index.rst
··· 1 - . SPDX-License-Identifier: GPL-2.0 1 + .. SPDX-License-Identifier: GPL-2.0 2 2 3 3 ==================== 4 4 1-wire Slave Drivers
+3
MAINTAINERS
··· 7384 7384 T: git git://git.lwn.net/linux.git docs-next 7385 7385 F: Documentation/ 7386 7386 F: scripts/check-variable-fonts.sh 7387 + F: scripts/checktransupdate.py 7387 7388 F: scripts/documentation-file-ref-check 7388 7389 F: scripts/get_abi.py 7389 7390 F: scripts/kernel-doc* 7390 7391 F: scripts/lib/abi/* 7391 7392 F: scripts/lib/kdoc/* 7393 + F: tools/docs/* 7392 7394 F: tools/net/ynl/pyynl/lib/doc_generator.py 7393 7395 F: scripts/sphinx-pre-install 7394 7396 X: Documentation/ABI/ ··· 14325 14323 F: Documentation/atomic_bitops.txt 14326 14324 F: Documentation/atomic_t.txt 14327 14325 F: Documentation/core-api/refcount-vs-atomic.rst 14326 + F: Documentation/dev-tools/lkmm/ 14328 14327 F: Documentation/litmus-tests/ 14329 14328 F: Documentation/memory-barriers.txt 14330 14329 F: tools/memory-model/
+3 -2
Makefile
··· 1797 1797 1798 1798 # Documentation targets 1799 1799 # --------------------------------------------------------------------------- 1800 - DOC_TARGETS := xmldocs latexdocs pdfdocs htmldocs epubdocs cleandocs \ 1801 - linkcheckdocs dochelp refcheckdocs texinfodocs infodocs 1800 + DOC_TARGETS := xmldocs latexdocs pdfdocs htmldocs htmldocs-redirects \ 1801 + epubdocs cleandocs linkcheckdocs dochelp refcheckdocs \ 1802 + texinfodocs infodocs 1802 1803 PHONY += $(DOC_TARGETS) 1803 1804 $(DOC_TARGETS): 1804 1805 $(Q)$(MAKE) $(build)=Documentation $@
+1 -1
fs/btrfs/tree-log.c
··· 5829 5829 * See process_dir_items_leaf() for details about why it is needed. 5830 5830 * This is a recursive operation - if an existing dentry corresponds to a 5831 5831 * directory, that directory's new entries are logged too (same behaviour as 5832 - * ext3/4, xfs, f2fs, reiserfs, nilfs2). Note that when logging the inodes 5832 + * ext3/4, xfs, f2fs, nilfs2). Note that when logging the inodes 5833 5833 * the dentries point to we do not acquire their VFS lock, otherwise lockdep 5834 5834 * complains about the following circular lock dependency / possible deadlock: 5835 5835 *
+24 -10
scripts/kernel-doc.py
··· 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # Copyright(c) 2025: Mauro Carvalho Chehab <mchehab@kernel.org>. 4 4 # 5 - # pylint: disable=C0103,R0915 6 - # 5 + # pylint: disable=C0103,R0912,R0914,R0915 6 + 7 + # NOTE: While kernel-doc requires at least version 3.6 to run, the 8 + # command line should work with Python 3.2+ (tested with 3.4). 9 + # The rationale is that it shall fail gracefully during Kernel 10 + # compilation with older Kernel versions. Due to that: 11 + # - encoding line is needed here; 12 + # - no f-strings can be used on this file. 13 + # - the libraries that require newer versions can only be included 14 + # after Python version is checked. 15 + 7 16 # Converted from the kernel-doc script originally written in Perl 8 17 # under GPLv2, copyrighted since 1998 by the following authors: 9 18 # ··· 115 106 SRC_DIR = os.path.dirname(os.path.realpath(__file__)) 116 107 117 108 sys.path.insert(0, os.path.join(SRC_DIR, LIB_DIR)) 118 - 119 - from kdoc_files import KernelFiles # pylint: disable=C0413 120 - from kdoc_output import RestFormat, ManFormat # pylint: disable=C0413 121 109 122 110 DESC = """ 123 111 Read C language source or header FILEs, extract embedded documentation comments, ··· 279 273 280 274 python_ver = sys.version_info[:2] 281 275 if python_ver < (3,6): 282 - logger.warning("Python 3.6 or later is required by kernel-doc") 276 + # Depending on Kernel configuration, kernel-doc --none is called at 277 + # build time. As we don't want to break compilation due to the 278 + # usage of an old Python version, return 0 here. 279 + if args.none: 280 + logger.error("Python 3.6 or later is required by kernel-doc. skipping checks") 281 + sys.exit(0) 283 282 284 - # Return 0 here to avoid breaking compilation 285 - sys.exit(0) 283 + sys.exit("Python 3.6 or later is required by kernel-doc. 
\nAborting.") 286 284 287 285 if python_ver < (3,7): 288 286 logger.warning("Python 3.7 or later is required for correct results") 287 + 288 + # Import kernel-doc libraries only after checking Python version 289 + from kdoc_files import KernelFiles # pylint: disable=C0415 290 + from kdoc_output import RestFormat, ManFormat # pylint: disable=C0415 289 291 290 292 if args.man: 291 293 out_style = ManFormat(modulename=args.modulename) ··· 322 308 sys.exit(0) 323 309 324 310 if args.werror: 325 - print(f"{error_count} warnings as errors") 311 + print("%s warnings as errors" % error_count) # pylint: disable=C0209 326 312 sys.exit(error_count) 327 313 328 314 if args.verbose: 329 - print(f"{error_count} errors") 315 + print("%s errors" % error_count) # pylint: disable=C0209 330 316 331 317 if args.none: 332 318 sys.exit(0)
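The version check rewritten in the kernel-doc.py hunk above degrades gracefully: on an old interpreter, `--none` exits 0 so a kernel build is not broken, while any other mode aborts (sys.exit() with a string message exits with status 1). That decision can be condensed into a small function; the function name here is ours, not the script's:

```python
def version_gate(python_ver, none_mode):
    """Mimic the kernel-doc.py check above: on Python < 3.6, --none
    exits 0 (don't fail the kernel build), other modes abort with
    status 1; on newer interpreters, proceed (return None) -- only
    then is it safe to import the newer-syntax kdoc libraries."""
    if python_ver < (3, 6):
        return 0 if none_mode else 1
    return None

assert version_gate((3, 4), none_mode=True) == 0   # build keeps going
assert version_gate((3, 4), none_mode=False) == 1  # hard failure
assert version_gate((3, 11), none_mode=False) is None
```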
+412 -432
scripts/lib/kdoc/kdoc_parser.py
··· 46 46 known_section_names = 'description|context|returns?|notes?|examples?' 47 47 known_sections = KernRe(known_section_names, flags = re.I) 48 48 doc_sect = doc_com + \ 49 - KernRe(r'\s*(\@[.\w]+|\@\.\.\.|' + known_section_names + r')\s*:([^:].*)?$', 49 + KernRe(r'\s*(@[.\w]+|@\.\.\.|' + known_section_names + r')\s*:([^:].*)?$', 50 50 flags=re.I, cache=False) 51 51 52 52 doc_content = doc_com_body + KernRe(r'(.*)', cache=False) ··· 54 54 doc_inline_sect = KernRe(r'\s*\*\s*(@\s*[\w][\w\.]*\s*):(.*)', cache=False) 55 55 doc_inline_end = KernRe(r'^\s*\*/\s*$', cache=False) 56 56 doc_inline_oneline = KernRe(r'^\s*/\*\*\s*(@[\w\s]+):\s*(.*)\s*\*/\s*$', cache=False) 57 - attribute = KernRe(r"__attribute__\s*\(\([a-z0-9,_\*\s\(\)]*\)\)", 58 - flags=re.I | re.S, cache=False) 59 57 60 58 export_symbol = KernRe(r'^\s*EXPORT_SYMBOL(_GPL)?\s*\(\s*(\w+)\s*\)\s*', cache=False) 61 59 export_symbol_ns = KernRe(r'^\s*EXPORT_SYMBOL_NS(_GPL)?\s*\(\s*(\w+)\s*,\s*"\S+"\)\s*', cache=False) 62 60 63 - type_param = KernRe(r"\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)", cache=False) 61 + type_param = KernRe(r"@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)", cache=False) 64 62 65 63 # 66 64 # Tests for the beginning of a kerneldoc block in its various forms. ··· 73 75 cache = False) 74 76 75 77 # 78 + # Here begins a long set of transformations to turn structure member prefixes 79 + # and macro invocations into something we can parse and generate kdoc for. 
80 + # 81 + struct_args_pattern = r'([^,)]+)' 82 + 83 + struct_xforms = [ 84 + # Strip attributes 85 + (KernRe(r"__attribute__\s*\(\([a-z0-9,_\*\s\(\)]*\)\)", flags=re.I | re.S, cache=False), ' '), 86 + (KernRe(r'\s*__aligned\s*\([^;]*\)', re.S), ' '), 87 + (KernRe(r'\s*__counted_by\s*\([^;]*\)', re.S), ' '), 88 + (KernRe(r'\s*__counted_by_(le|be)\s*\([^;]*\)', re.S), ' '), 89 + (KernRe(r'\s*__packed\s*', re.S), ' '), 90 + (KernRe(r'\s*CRYPTO_MINALIGN_ATTR', re.S), ' '), 91 + (KernRe(r'\s*____cacheline_aligned_in_smp', re.S), ' '), 92 + (KernRe(r'\s*____cacheline_aligned', re.S), ' '), 93 + (KernRe(r'\s*__cacheline_group_(begin|end)\([^\)]+\);'), ''), 94 + # 95 + # Unwrap struct_group macros based on this definition: 96 + # __struct_group(TAG, NAME, ATTRS, MEMBERS...) 97 + # which has variants like: struct_group(NAME, MEMBERS...) 98 + # Only MEMBERS arguments require documentation. 99 + # 100 + # Parsing them happens on two steps: 101 + # 102 + # 1. drop struct group arguments that aren't at MEMBERS, 103 + # storing them as STRUCT_GROUP(MEMBERS) 104 + # 105 + # 2. remove STRUCT_GROUP() ancillary macro. 106 + # 107 + # The original logic used to remove STRUCT_GROUP() using an 108 + # advanced regex: 109 + # 110 + # \bSTRUCT_GROUP(\(((?:(?>[^)(]+)|(?1))*)\))[^;]*; 111 + # 112 + # with two patterns that are incompatible with 113 + # Python re module, as it has: 114 + # 115 + # - a recursive pattern: (?1) 116 + # - an atomic grouping: (?>...) 117 + # 118 + # I tried a simpler version: but it didn't work either: 119 + # \bSTRUCT_GROUP\(([^\)]+)\)[^;]*; 120 + # 121 + # As it doesn't properly match the end parenthesis on some cases. 122 + # 123 + # So, a better solution was crafted: there's now a NestedMatch 124 + # class that ensures that delimiters after a search are properly 125 + # matched. So, the implementation to drop STRUCT_GROUP() will be 126 + # handled in separate. 
127 + # 128 + (KernRe(r'\bstruct_group\s*\(([^,]*,)', re.S), r'STRUCT_GROUP('), 129 + (KernRe(r'\bstruct_group_attr\s*\(([^,]*,){2}', re.S), r'STRUCT_GROUP('), 130 + (KernRe(r'\bstruct_group_tagged\s*\(([^,]*),([^,]*),', re.S), r'struct \1 \2; STRUCT_GROUP('), 131 + (KernRe(r'\b__struct_group\s*\(([^,]*,){3}', re.S), r'STRUCT_GROUP('), 132 + # 133 + # Replace macros 134 + # 135 + # TODO: use NestedMatch for FOO($1, $2, ...) matches 136 + # 137 + # it is better to also move those to the NestedMatch logic, 138 + # to ensure that parenthesis will be properly matched. 139 + # 140 + (KernRe(r'__ETHTOOL_DECLARE_LINK_MODE_MASK\s*\(([^\)]+)\)', re.S), 141 + r'DECLARE_BITMAP(\1, __ETHTOOL_LINK_MODE_MASK_NBITS)'), 142 + (KernRe(r'DECLARE_PHY_INTERFACE_MASK\s*\(([^\)]+)\)', re.S), 143 + r'DECLARE_BITMAP(\1, PHY_INTERFACE_MODE_MAX)'), 144 + (KernRe(r'DECLARE_BITMAP\s*\(' + struct_args_pattern + r',\s*' + struct_args_pattern + r'\)', 145 + re.S), r'unsigned long \1[BITS_TO_LONGS(\2)]'), 146 + (KernRe(r'DECLARE_HASHTABLE\s*\(' + struct_args_pattern + r',\s*' + struct_args_pattern + r'\)', 147 + re.S), r'unsigned long \1[1 << ((\2) - 1)]'), 148 + (KernRe(r'DECLARE_KFIFO\s*\(' + struct_args_pattern + r',\s*' + struct_args_pattern + 149 + r',\s*' + struct_args_pattern + r'\)', re.S), r'\2 *\1'), 150 + (KernRe(r'DECLARE_KFIFO_PTR\s*\(' + struct_args_pattern + r',\s*' + 151 + struct_args_pattern + r'\)', re.S), r'\2 *\1'), 152 + (KernRe(r'(?:__)?DECLARE_FLEX_ARRAY\s*\(' + struct_args_pattern + r',\s*' + 153 + struct_args_pattern + r'\)', re.S), r'\1 \2[]'), 154 + (KernRe(r'DEFINE_DMA_UNMAP_ADDR\s*\(' + struct_args_pattern + r'\)', re.S), r'dma_addr_t \1'), 155 + (KernRe(r'DEFINE_DMA_UNMAP_LEN\s*\(' + struct_args_pattern + r'\)', re.S), r'__u32 \1'), 156 + ] 157 + # 158 + # Regexes here are guaranteed to have the end limiter matching 159 + # the start delimiter. Yet, right now, only one replace group 160 + # is allowed. 
161 + # 162 + struct_nested_prefixes = [ 163 + (re.compile(r'\bSTRUCT_GROUP\('), r'\1'), 164 + ] 165 + 166 + # 167 + # Transforms for function prototypes 168 + # 169 + function_xforms = [ 170 + (KernRe(r"^static +"), ""), 171 + (KernRe(r"^extern +"), ""), 172 + (KernRe(r"^asmlinkage +"), ""), 173 + (KernRe(r"^inline +"), ""), 174 + (KernRe(r"^__inline__ +"), ""), 175 + (KernRe(r"^__inline +"), ""), 176 + (KernRe(r"^__always_inline +"), ""), 177 + (KernRe(r"^noinline +"), ""), 178 + (KernRe(r"^__FORTIFY_INLINE +"), ""), 179 + (KernRe(r"__init +"), ""), 180 + (KernRe(r"__init_or_module +"), ""), 181 + (KernRe(r"__deprecated +"), ""), 182 + (KernRe(r"__flatten +"), ""), 183 + (KernRe(r"__meminit +"), ""), 184 + (KernRe(r"__must_check +"), ""), 185 + (KernRe(r"__weak +"), ""), 186 + (KernRe(r"__sched +"), ""), 187 + (KernRe(r"_noprof"), ""), 188 + (KernRe(r"__printf\s*\(\s*\d*\s*,\s*\d*\s*\) +"), ""), 189 + (KernRe(r"__(?:re)?alloc_size\s*\(\s*\d+\s*(?:,\s*\d+\s*)?\) +"), ""), 190 + (KernRe(r"__diagnose_as\s*\(\s*\S+\s*(?:,\s*\d+\s*)*\) +"), ""), 191 + (KernRe(r"DECL_BUCKET_PARAMS\s*\(\s*(\S+)\s*,\s*(\S+)\s*\)"), r"\1, \2"), 192 + (KernRe(r"__attribute_const__ +"), ""), 193 + (KernRe(r"__attribute__\s*\(\((?:[\w\s]+(?:\([^)]*\))?\s*,?)+\)\)\s+"), ""), 194 + ] 195 + 196 + # 197 + # Apply a set of transforms to a block of text. 198 + # 199 + def apply_transforms(xforms, text): 200 + for search, subst in xforms: 201 + text = search.sub(subst, text) 202 + return text 203 + 204 + # 76 205 # A little helper to get rid of excess white space 77 206 # 78 207 multi_space = KernRe(r'\s\s+') 79 208 def trim_whitespace(s): 80 209 return multi_space.sub(' ', s.strip()) 210 + 211 + # 212 + # Remove struct/enum members that have been marked "private". 213 + # 214 + def trim_private_members(text): 215 + # 216 + # First look for a "public:" block that ends a private region, then 217 + # handle the "private until the end" case. 
218 + # 219 + text = KernRe(r'/\*\s*private:.*?/\*\s*public:.*?\*/', flags=re.S).sub('', text) 220 + text = KernRe(r'/\*\s*private:.*', flags=re.S).sub('', text) 221 + # 222 + # We needed the comments to do the above, but now we can take them out. 223 + # 224 + return KernRe(r'\s*/\*.*?\*/\s*', flags=re.S).sub('', text).strip() 81 225 82 226 class state: 83 227 """ ··· 458 318 459 319 param = KernRe(r'[\[\)].*').sub('', param, count=1) 460 320 461 - if dtype == "" and param.endswith("..."): 462 - if KernRe(r'\w\.\.\.$').search(param): 463 - # For named variable parameters of the form `x...`, 464 - # remove the dots 465 - param = param[:-3] 466 - else: 467 - # Handles unnamed variable parameters 468 - param = "..." 321 + # 322 + # Look at various "anonymous type" cases. 323 + # 324 + if dtype == '': 325 + if param.endswith("..."): 326 + if len(param) > 3: # there is a name provided, use that 327 + param = param[:-3] 328 + if not self.entry.parameterdescs.get(param): 329 + self.entry.parameterdescs[param] = "variable arguments" 469 330 470 - if param not in self.entry.parameterdescs or \ 471 - not self.entry.parameterdescs[param]: 331 + elif (not param) or param == "void": 332 + param = "void" 333 + self.entry.parameterdescs[param] = "no arguments" 472 334 473 - self.entry.parameterdescs[param] = "variable arguments" 474 - 475 - elif dtype == "" and (not param or param == "void"): 476 - param = "void" 477 - self.entry.parameterdescs[param] = "no arguments" 478 - 479 - elif dtype == "" and param in ["struct", "union"]: 480 - # Handle unnamed (anonymous) union or struct 481 - dtype = param 482 - param = "{unnamed_" + param + "}" 483 - self.entry.parameterdescs[param] = "anonymous\n" 484 - self.entry.anon_struct_union = True 485 - 486 - # Handle cache group enforcing variables: they do not need 487 - # to be described in header files 488 - elif "__cacheline_group" in param: 489 - # Ignore __cacheline_group_begin and __cacheline_group_end 490 - return 335 + elif param in 
["struct", "union"]: 336 + # Handle unnamed (anonymous) union or struct 337 + dtype = param 338 + param = "{unnamed_" + param + "}" 339 + self.entry.parameterdescs[param] = "anonymous\n" 340 + self.entry.anon_struct_union = True 491 341 492 342 # Warn if parameter has no description 493 343 # (but ignore ones starting with # as these are not parameters ··· 519 389 args = arg_expr.sub(r"\1#", args) 520 390 521 391 for arg in args.split(splitter): 522 - # Strip comments 523 - arg = KernRe(r'\/\*.*\*\/').sub('', arg) 524 - 525 392 # Ignore argument attributes 526 393 arg = KernRe(r'\sPOS0?\s').sub(' ', arg) 527 394 ··· 534 407 # Treat preprocessor directive as a typeless variable 535 408 self.push_parameter(ln, decl_type, arg, "", 536 409 "", declaration_name) 537 - 410 + # 411 + # The pointer-to-function case. 412 + # 538 413 elif KernRe(r'\(.+\)\s*\(').search(arg): 539 - # Pointer-to-function 540 - 541 414 arg = arg.replace('#', ',') 542 - 543 - r = KernRe(r'[^\(]+\(\*?\s*([\w\[\]\.]*)\s*\)') 415 + r = KernRe(r'[^\(]+\(\*?\s*' # Everything up to "(*" 416 + r'([\w\[\].]*)' # Capture the name and possible [array] 417 + r'\s*\)') # Make sure the trailing ")" is there 544 418 if r.match(arg): 545 419 param = r.group(1) 546 420 else: 547 421 self.emit_msg(ln, f"Invalid param: {arg}") 548 422 param = arg 549 - 550 - dtype = KernRe(r'([^\(]+\(\*?)\s*' + re.escape(param)).sub(r'\1', arg) 551 - self.push_parameter(ln, decl_type, param, dtype, 552 - arg, declaration_name) 553 - 423 + dtype = arg.replace(param, '') 424 + self.push_parameter(ln, decl_type, param, dtype, arg, declaration_name) 425 + # 426 + # The array-of-pointers case. Dig the parameter name out from the middle 427 + # of the declaration. 
428 + # 554 429 elif KernRe(r'\(.+\)\s*\[').search(arg): 555 - # Array-of-pointers 556 - 557 - arg = arg.replace('#', ',') 558 - r = KernRe(r'[^\(]+\(\s*\*\s*([\w\[\]\.]*?)\s*(\s*\[\s*[\w]+\s*\]\s*)*\)') 430 + r = KernRe(r'[^\(]+\(\s*\*\s*' # Up to "(" and maybe "*" 431 + r'([\w.]*?)' # The actual pointer name 432 + r'\s*(\[\s*\w+\s*\]\s*)*\)') # The [array portion] 559 433 if r.match(arg): 560 434 param = r.group(1) 561 435 else: 562 436 self.emit_msg(ln, f"Invalid param: {arg}") 563 437 param = arg 564 - 565 - dtype = KernRe(r'([^\(]+\(\*?)\s*' + re.escape(param)).sub(r'\1', arg) 566 - 567 - self.push_parameter(ln, decl_type, param, dtype, 568 - arg, declaration_name) 569 - 438 + dtype = arg.replace(param, '') 439 + self.push_parameter(ln, decl_type, param, dtype, arg, declaration_name) 570 440 elif arg: 441 + # 442 + # Clean up extraneous spaces and split the string at commas; the first 443 + # element of the resulting list will also include the type information. 444 + # 571 445 arg = KernRe(r'\s*:\s*').sub(":", arg) 572 446 arg = KernRe(r'\s*\[').sub('[', arg) 573 - 574 447 args = KernRe(r'\s*,\s*').split(arg) 575 - if args[0] and '*' in args[0]: 576 - args[0] = re.sub(r'(\*+)\s*', r' \1', args[0]) 577 - 578 - first_arg = [] 579 - r = KernRe(r'^(.*\s+)(.*?\[.*\].*)$') 580 - if args[0] and r.match(args[0]): 581 - args.pop(0) 582 - first_arg.extend(r.group(1)) 583 - first_arg.append(r.group(2)) 448 + args[0] = re.sub(r'(\*+)\s*', r' \1', args[0]) 449 + # 450 + # args[0] has a string of "type a". If "a" includes an [array] 451 + # declaration, we want to not be fooled by any white space inside 452 + # the brackets, so detect and handle that case specially. 
453 + # 454 + r = KernRe(r'^([^[\]]*\s+)(.*)$') 455 + if r.match(args[0]): 456 + args[0] = r.group(2) 457 + dtype = r.group(1) 584 458 else: 585 - first_arg = KernRe(r'\s+').split(args.pop(0)) 459 + # No space in args[0]; this seems wrong but preserves previous behavior 460 + dtype = '' 586 461 587 - args.insert(0, first_arg.pop()) 588 - dtype = ' '.join(first_arg) 589 - 462 + bitfield_re = KernRe(r'(.*?):(\w+)') 590 463 for param in args: 591 - if KernRe(r'^(\*+)\s*(.*)').match(param): 592 - r = KernRe(r'^(\*+)\s*(.*)') 593 - if not r.match(param): 594 - self.emit_msg(ln, f"Invalid param: {param}") 595 - continue 596 - 597 - param = r.group(1) 598 - 464 + # 465 + # For pointers, shift the star(s) from the variable name to the 466 + # type declaration. 467 + # 468 + r = KernRe(r'^(\*+)\s*(.*)') 469 + if r.match(param): 599 470 self.push_parameter(ln, decl_type, r.group(2), 600 471 f"{dtype} {r.group(1)}", 601 472 arg, declaration_name) 602 - 603 - elif KernRe(r'(.*?):(\w+)').search(param): 604 - r = KernRe(r'(.*?):(\w+)') 605 - if not r.match(param): 606 - self.emit_msg(ln, f"Invalid param: {param}") 607 - continue 608 - 473 + # 474 + # Perform a similar shift for bitfields. 
475 + # 476 + elif bitfield_re.search(param): 609 477 if dtype != "": # Skip unnamed bit-fields 610 - self.push_parameter(ln, decl_type, r.group(1), 611 - f"{dtype}:{r.group(2)}", 478 + self.push_parameter(ln, decl_type, bitfield_re.group(1), 479 + f"{dtype}:{bitfield_re.group(2)}", 612 480 arg, declaration_name) 613 481 else: 614 482 self.push_parameter(ln, decl_type, param, dtype, ··· 642 520 self.emit_msg(ln, 643 521 f"No description found for return value of '{declaration_name}'") 644 522 645 - def dump_struct(self, ln, proto): 646 - """ 647 - Store an entry for an struct or union 648 - """ 649 - 523 + # 524 + # Split apart a structure prototype; returns (struct|union, name, members) or None 525 + # 526 + def split_struct_proto(self, proto): 650 527 type_pattern = r'(struct|union)' 651 - 652 528 qualifiers = [ 653 529 "__attribute__", 654 530 "__packed", ··· 654 534 "____cacheline_aligned_in_smp", 655 535 "____cacheline_aligned", 656 536 ] 657 - 658 537 definition_body = r'\{(.*)\}\s*' + "(?:" + '|'.join(qualifiers) + ")?" 
659 - struct_members = KernRe(type_pattern + r'([^\{\};]+)(\{)([^\{\}]*)(\})([^\{\}\;]*)(\;)') 660 - 661 - # Extract struct/union definition 662 - members = None 663 - declaration_name = None 664 - decl_type = None 665 538 666 539 r = KernRe(type_pattern + r'\s+(\w+)\s*' + definition_body) 667 540 if r.search(proto): 668 - decl_type = r.group(1) 669 - declaration_name = r.group(2) 670 - members = r.group(3) 541 + return (r.group(1), r.group(2), r.group(3)) 671 542 else: 672 543 r = KernRe(r'typedef\s+' + type_pattern + r'\s*' + definition_body + r'\s*(\w+)\s*;') 673 - 674 544 if r.search(proto): 675 - decl_type = r.group(1) 676 - declaration_name = r.group(3) 677 - members = r.group(2) 678 - 679 - if not members: 680 - self.emit_msg(ln, f"{proto} error: Cannot parse struct or union!") 681 - return 682 - 683 - if self.entry.identifier != declaration_name: 684 - self.emit_msg(ln, 685 - f"expecting prototype for {decl_type} {self.entry.identifier}. Prototype was for {decl_type} {declaration_name} instead\n") 686 - return 687 - 688 - args_pattern = r'([^,)]+)' 689 - 690 - sub_prefixes = [ 691 - (KernRe(r'\/\*\s*private:.*?\/\*\s*public:.*?\*\/', re.S | re.I), ''), 692 - (KernRe(r'\/\*\s*private:.*', re.S | re.I), ''), 693 - 694 - # Strip comments 695 - (KernRe(r'\/\*.*?\*\/', re.S), ''), 696 - 697 - # Strip attributes 698 - (attribute, ' '), 699 - (KernRe(r'\s*__aligned\s*\([^;]*\)', re.S), ' '), 700 - (KernRe(r'\s*__counted_by\s*\([^;]*\)', re.S), ' '), 701 - (KernRe(r'\s*__counted_by_(le|be)\s*\([^;]*\)', re.S), ' '), 702 - (KernRe(r'\s*__packed\s*', re.S), ' '), 703 - (KernRe(r'\s*CRYPTO_MINALIGN_ATTR', re.S), ' '), 704 - (KernRe(r'\s*____cacheline_aligned_in_smp', re.S), ' '), 705 - (KernRe(r'\s*____cacheline_aligned', re.S), ' '), 706 - 707 - # Unwrap struct_group macros based on this definition: 708 - # __struct_group(TAG, NAME, ATTRS, MEMBERS...) 709 - # which has variants like: struct_group(NAME, MEMBERS...) 710 - # Only MEMBERS arguments require documentation. 
711 - # 712 - # Parsing them happens on two steps: 713 - # 714 - # 1. drop struct group arguments that aren't at MEMBERS, 715 - # storing them as STRUCT_GROUP(MEMBERS) 716 - # 717 - # 2. remove STRUCT_GROUP() ancillary macro. 718 - # 719 - # The original logic used to remove STRUCT_GROUP() using an 720 - # advanced regex: 721 - # 722 - # \bSTRUCT_GROUP(\(((?:(?>[^)(]+)|(?1))*)\))[^;]*; 723 - # 724 - # with two patterns that are incompatible with 725 - # Python re module, as it has: 726 - # 727 - # - a recursive pattern: (?1) 728 - # - an atomic grouping: (?>...) 729 - # 730 - # I tried a simpler version: but it didn't work either: 731 - # \bSTRUCT_GROUP\(([^\)]+)\)[^;]*; 732 - # 733 - # As it doesn't properly match the end parenthesis on some cases. 734 - # 735 - # So, a better solution was crafted: there's now a NestedMatch 736 - # class that ensures that delimiters after a search are properly 737 - # matched. So, the implementation to drop STRUCT_GROUP() will be 738 - # handled in separate. 739 - 740 - (KernRe(r'\bstruct_group\s*\(([^,]*,)', re.S), r'STRUCT_GROUP('), 741 - (KernRe(r'\bstruct_group_attr\s*\(([^,]*,){2}', re.S), r'STRUCT_GROUP('), 742 - (KernRe(r'\bstruct_group_tagged\s*\(([^,]*),([^,]*),', re.S), r'struct \1 \2; STRUCT_GROUP('), 743 - (KernRe(r'\b__struct_group\s*\(([^,]*,){3}', re.S), r'STRUCT_GROUP('), 744 - 745 - # Replace macros 746 - # 747 - # TODO: use NestedMatch for FOO($1, $2, ...) matches 748 - # 749 - # it is better to also move those to the NestedMatch logic, 750 - # to ensure that parenthesis will be properly matched. 
751 - 752 - (KernRe(r'__ETHTOOL_DECLARE_LINK_MODE_MASK\s*\(([^\)]+)\)', re.S), r'DECLARE_BITMAP(\1, __ETHTOOL_LINK_MODE_MASK_NBITS)'), 753 - (KernRe(r'DECLARE_PHY_INTERFACE_MASK\s*\(([^\)]+)\)', re.S), r'DECLARE_BITMAP(\1, PHY_INTERFACE_MODE_MAX)'), 754 - (KernRe(r'DECLARE_BITMAP\s*\(' + args_pattern + r',\s*' + args_pattern + r'\)', re.S), r'unsigned long \1[BITS_TO_LONGS(\2)]'), 755 - (KernRe(r'DECLARE_HASHTABLE\s*\(' + args_pattern + r',\s*' + args_pattern + r'\)', re.S), r'unsigned long \1[1 << ((\2) - 1)]'), 756 - (KernRe(r'DECLARE_KFIFO\s*\(' + args_pattern + r',\s*' + args_pattern + r',\s*' + args_pattern + r'\)', re.S), r'\2 *\1'), 757 - (KernRe(r'DECLARE_KFIFO_PTR\s*\(' + args_pattern + r',\s*' + args_pattern + r'\)', re.S), r'\2 *\1'), 758 - (KernRe(r'(?:__)?DECLARE_FLEX_ARRAY\s*\(' + args_pattern + r',\s*' + args_pattern + r'\)', re.S), r'\1 \2[]'), 759 - (KernRe(r'DEFINE_DMA_UNMAP_ADDR\s*\(' + args_pattern + r'\)', re.S), r'dma_addr_t \1'), 760 - (KernRe(r'DEFINE_DMA_UNMAP_LEN\s*\(' + args_pattern + r'\)', re.S), r'__u32 \1'), 761 - (KernRe(r'VIRTIO_DECLARE_FEATURES\s*\(' + args_pattern + r'\)', re.S), r'u64 \1; u64 \1_array[VIRTIO_FEATURES_DWORDS]'), 762 - ] 763 - 764 - # Regexes here are guaranteed to have the end limiter matching 765 - # the start delimiter. Yet, right now, only one replace group 766 - # is allowed. 767 - 768 - sub_nested_prefixes = [ 769 - (re.compile(r'\bSTRUCT_GROUP\('), r'\1'), 770 - ] 771 - 772 - for search, sub in sub_prefixes: 773 - members = search.sub(sub, members) 774 - 775 - nested = NestedMatch() 776 - 777 - for search, sub in sub_nested_prefixes: 778 - members = nested.sub(search, sub, members) 779 - 780 - # Keeps the original declaration as-is 781 - declaration = members 782 - 783 - # Split nested struct/union elements 545 + return (r.group(1), r.group(3), r.group(2)) 546 + return None 547 + # 548 + # Rewrite the members of a structure or union for easier formatting later on. 
549 + # Among other things, this function will turn a member like: 550 + # 551 + # struct { inner_members; } foo; 552 + # 553 + # into: 554 + # 555 + # struct foo; inner_members; 556 + # 557 + def rewrite_struct_members(self, members): 784 558 # 785 - # This loop was simpler at the original kernel-doc perl version, as 786 - # while ($members =~ m/$struct_members/) { ... } 787 - # reads 'members' string on each interaction. 559 + # Process struct/union members from the most deeply nested outward. The 560 + # trick is in the ^{ below - it prevents a match of an outer struct/union 561 + # until the inner one has been munged (removing the "{" in the process). 788 562 # 789 - # Python behavior is different: it parses 'members' only once, 790 - # creating a list of tuples from the first interaction. 791 - # 792 - # On other words, this won't get nested structs. 793 - # 794 - # So, we need to have an extra loop on Python to override such 795 - # re limitation. 796 - 797 - while True: 798 - tuples = struct_members.findall(members) 799 - if not tuples: 800 - break 801 - 563 + struct_members = KernRe(r'(struct|union)' # 0: declaration type 564 + r'([^\{\};]+)' # 1: possible name 565 + r'(\{)' 566 + r'([^\{\}]*)' # 3: Contents of declaration 567 + r'(\})' 568 + r'([^\{\};]*)(;)') # 5: Remaining stuff after declaration 569 + tuples = struct_members.findall(members) 570 + while tuples: 802 571 for t in tuples: 803 572 newmember = "" 804 - maintype = t[0] 805 - s_ids = t[5] 806 - content = t[3] 807 - 808 - oldmember = "".join(t) 809 - 810 - for s_id in s_ids.split(','): 573 + oldmember = "".join(t) # Reconstruct the original formatting 574 + dtype, name, lbr, content, rbr, rest, semi = t 575 + # 576 + # Pass through each field name, normalizing the form and formatting. 
577 + # 578 + for s_id in rest.split(','): 811 579 s_id = s_id.strip() 812 - 813 - newmember += f"{maintype} {s_id}; " 580 + newmember += f"{dtype} {s_id}; " 581 + # 582 + # Remove bitfield/array/pointer info, getting the bare name. 583 + # 814 584 s_id = KernRe(r'[:\[].*').sub('', s_id) 815 585 s_id = KernRe(r'^\s*\**(\S+)\s*').sub(r'\1', s_id) 816 - 586 + # 587 + # Pass through the members of this inner structure/union. 588 + # 817 589 for arg in content.split(';'): 818 590 arg = arg.strip() 819 - 820 - if not arg: 821 - continue 822 - 823 - r = KernRe(r'^([^\(]+\(\*?\s*)([\w\.]*)(\s*\).*)') 591 + # 592 + # Look for (type)(*name)(args) - pointer to function 593 + # 594 + r = KernRe(r'^([^\(]+\(\*?\s*)([\w.]*)(\s*\).*)') 824 595 if r.match(arg): 596 + dtype, name, extra = r.group(1), r.group(2), r.group(3) 825 597 # Pointer-to-function 826 - dtype = r.group(1) 827 - name = r.group(2) 828 - extra = r.group(3) 829 - 830 - if not name: 831 - continue 832 - 833 598 if not s_id: 834 599 # Anonymous struct/union 835 600 newmember += f"{dtype}{name}{extra}; " 836 601 else: 837 602 newmember += f"{dtype}{s_id}.{name}{extra}; " 838 - 603 + # 604 + # Otherwise a non-function member. 
605 + # 839 606 else: 840 - arg = arg.strip() 841 - # Handle bitmaps 607 + # 608 + # Remove bitmap and array portions and spaces around commas 609 + # 842 610 arg = KernRe(r':\s*\d+\s*').sub('', arg) 843 - 844 - # Handle arrays 845 611 arg = KernRe(r'\[.*\]').sub('', arg) 846 - 847 - # Handle multiple IDs 848 612 arg = KernRe(r'\s*,\s*').sub(',', arg) 849 - 613 + # 614 + # Look for a normal decl - "type name[,name...]" 615 + # 850 616 r = KernRe(r'(.*)\s+([\S+,]+)') 851 - 852 617 if r.search(arg): 853 - dtype = r.group(1) 854 - names = r.group(2) 618 + for name in r.group(2).split(','): 619 + name = KernRe(r'^\s*\**(\S+)\s*').sub(r'\1', name) 620 + if not s_id: 621 + # Anonymous struct/union 622 + newmember += f"{r.group(1)} {name}; " 623 + else: 624 + newmember += f"{r.group(1)} {s_id}.{name}; " 855 625 else: 856 626 newmember += f"{arg}; " 857 - continue 858 - 859 - for name in names.split(','): 860 - name = KernRe(r'^\s*\**(\S+)\s*').sub(r'\1', name).strip() 861 - 862 - if not name: 863 - continue 864 - 865 - if not s_id: 866 - # Anonymous struct/union 867 - newmember += f"{dtype} {name}; " 868 - else: 869 - newmember += f"{dtype} {s_id}.{name}; " 870 - 627 + # 628 + # At the end of the s_id loop, replace the original declaration with 629 + # the munged version. 630 + # 871 631 members = members.replace(oldmember, newmember) 632 + # 633 + # End of the tuple loop - search again and see if there are outer members 634 + # that now turn up. 635 + # 636 + tuples = struct_members.findall(members) 637 + return members 872 638 873 - # Ignore other nested elements, like enums 874 - members = re.sub(r'(\{[^\{\}]*\})', '', members) 875 - 876 - self.create_parameter_list(ln, decl_type, members, ';', 877 - declaration_name) 878 - self.check_sections(ln, declaration_name, decl_type) 879 - 880 - # Adjust declaration for better display 639 + # 640 + # Format the struct declaration into a standard form for inclusion in the 641 + # resulting docs. 
642 + # 643 + def format_struct_decl(self, declaration): 644 + # 645 + # Insert newlines, get rid of extra spaces. 646 + # 881 647 declaration = KernRe(r'([\{;])').sub(r'\1\n', declaration) 882 648 declaration = KernRe(r'\}\s+;').sub('};', declaration) 883 - 884 - # Better handle inlined enums 885 - while True: 886 - r = KernRe(r'(enum\s+\{[^\}]+),([^\n])') 887 - if not r.search(declaration): 888 - break 889 - 649 + # 650 + # Format inline enums with each member on its own line. 651 + # 652 + r = KernRe(r'(enum\s+\{[^\}]+),([^\n])') 653 + while r.search(declaration): 890 654 declaration = r.sub(r'\1,\n\2', declaration) 891 - 655 + # 656 + # Now go through and supply the right number of tabs 657 + # for each line. 658 + # 892 659 def_args = declaration.split('\n') 893 660 level = 1 894 661 declaration = "" 895 662 for clause in def_args: 663 + clause = KernRe(r'\s+').sub(' ', clause.strip(), count=1) 664 + if clause: 665 + if '}' in clause and level > 1: 666 + level -= 1 667 + if not clause.startswith('#'): 668 + declaration += "\t" * level 669 + declaration += "\t" + clause + "\n" 670 + if "{" in clause and "}" not in clause: 671 + level += 1 672 + return declaration 896 673 897 - clause = clause.strip() 898 - clause = KernRe(r'\s+').sub(' ', clause, count=1) 899 674 900 - if not clause: 901 - continue 675 + def dump_struct(self, ln, proto): 676 + """ 677 + Store an entry for an struct or union 678 + """ 679 + # 680 + # Do the basic parse to get the pieces of the declaration. 681 + # 682 + struct_parts = self.split_struct_proto(proto) 683 + if not struct_parts: 684 + self.emit_msg(ln, f"{proto} error: Cannot parse struct or union!") 685 + return 686 + decl_type, declaration_name, members = struct_parts 902 687 903 - if '}' in clause and level > 1: 904 - level -= 1 688 + if self.entry.identifier != declaration_name: 689 + self.emit_msg(ln, f"expecting prototype for {decl_type} {self.entry.identifier}. 
" 690 + f"Prototype was for {decl_type} {declaration_name} instead\n") 691 + return 692 + # 693 + # Go through the list of members applying all of our transformations. 694 + # 695 + members = trim_private_members(members) 696 + members = apply_transforms(struct_xforms, members) 905 697 906 - if not KernRe(r'^\s*#').match(clause): 907 - declaration += "\t" * level 908 - 909 - declaration += "\t" + clause + "\n" 910 - if "{" in clause and "}" not in clause: 911 - level += 1 912 - 698 + nested = NestedMatch() 699 + for search, sub in struct_nested_prefixes: 700 + members = nested.sub(search, sub, members) 701 + # 702 + # Deal with embedded struct and union members, and drop enums entirely. 703 + # 704 + declaration = members 705 + members = self.rewrite_struct_members(members) 706 + members = re.sub(r'(\{[^\{\}]*\})', '', members) 707 + # 708 + # Output the result and we are done. 709 + # 710 + self.create_parameter_list(ln, decl_type, members, ';', 711 + declaration_name) 712 + self.check_sections(ln, declaration_name, decl_type) 913 713 self.output_declaration(decl_type, declaration_name, 914 - definition=declaration, 714 + definition=self.format_struct_decl(declaration), 915 715 purpose=self.entry.declaration_purpose) 916 716 917 717 def dump_enum(self, ln, proto): 918 718 """ 919 719 Stores an enum inside self.entries array. 920 720 """ 921 - 922 - # Ignore members marked private 923 - proto = KernRe(r'\/\*\s*private:.*?\/\*\s*public:.*?\*\/', flags=re.S).sub('', proto) 924 - proto = KernRe(r'\/\*\s*private:.*}', flags=re.S).sub('}', proto) 925 - 926 - # Strip comments 927 - proto = KernRe(r'\/\*.*?\*\/', flags=re.S).sub('', proto) 928 - 929 - # Strip #define macros inside enums 721 + # 722 + # Strip preprocessor directives. Note that this depends on the 723 + # trailing semicolon we added in process_proto_type(). 
724 + # 930 725 proto = KernRe(r'#\s*((define|ifdef|if)\s+|endif)[^;]*;', flags=re.S).sub('', proto) 931 - 932 726 # 933 727 # Parse out the name and members of the enum. Typedef form first. 934 728 # 935 729 r = KernRe(r'typedef\s+enum\s*\{(.*)\}\s*(\w*)\s*;') 936 730 if r.search(proto): 937 731 declaration_name = r.group(2) 938 - members = r.group(1).rstrip() 732 + members = trim_private_members(r.group(1)) 939 733 # 940 734 # Failing that, look for a straight enum 941 735 # ··· 857 823 r = KernRe(r'enum\s+(\w*)\s*\{(.*)\}') 858 824 if r.match(proto): 859 825 declaration_name = r.group(1) 860 - members = r.group(2).rstrip() 826 + members = trim_private_members(r.group(2)) 861 827 # 862 828 # OK, this isn't going to work. 863 829 # ··· 926 892 Stores a function of function macro inside self.entries array. 927 893 """ 928 894 929 - func_macro = False 895 + found = func_macro = False 930 896 return_type = '' 931 897 decl_type = 'function' 932 - 933 - # Prefixes that would be removed 934 - sub_prefixes = [ 935 - (r"^static +", "", 0), 936 - (r"^extern +", "", 0), 937 - (r"^asmlinkage +", "", 0), 938 - (r"^inline +", "", 0), 939 - (r"^__inline__ +", "", 0), 940 - (r"^__inline +", "", 0), 941 - (r"^__always_inline +", "", 0), 942 - (r"^noinline +", "", 0), 943 - (r"^__FORTIFY_INLINE +", "", 0), 944 - (r"__init +", "", 0), 945 - (r"__init_or_module +", "", 0), 946 - (r"__deprecated +", "", 0), 947 - (r"__flatten +", "", 0), 948 - (r"__meminit +", "", 0), 949 - (r"__must_check +", "", 0), 950 - (r"__weak +", "", 0), 951 - (r"__sched +", "", 0), 952 - (r"_noprof", "", 0), 953 - (r"__printf\s*\(\s*\d*\s*,\s*\d*\s*\) +", "", 0), 954 - (r"__(?:re)?alloc_size\s*\(\s*\d+\s*(?:,\s*\d+\s*)?\) +", "", 0), 955 - (r"__diagnose_as\s*\(\s*\S+\s*(?:,\s*\d+\s*)*\) +", "", 0), 956 - (r"DECL_BUCKET_PARAMS\s*\(\s*(\S+)\s*,\s*(\S+)\s*\)", r"\1, \2", 0), 957 - (r"__attribute_const__ +", "", 0), 958 - 959 - # It seems that Python support for re.X is broken: 960 - # At least for me (Python 
3.13), this didn't work 961 - # (r""" 962 - # __attribute__\s*\(\( 963 - # (?: 964 - # [\w\s]+ # attribute name 965 - # (?:\([^)]*\))? # attribute arguments 966 - # \s*,? # optional comma at the end 967 - # )+ 968 - # \)\)\s+ 969 - # """, "", re.X), 970 - 971 - # So, remove whitespaces and comments from it 972 - (r"__attribute__\s*\(\((?:[\w\s]+(?:\([^)]*\))?\s*,?)+\)\)\s+", "", 0), 973 - ] 974 - 975 - for search, sub, flags in sub_prefixes: 976 - prototype = KernRe(search, flags).sub(sub, prototype) 977 - 978 - # Macros are a special case, as they change the prototype format 898 + # 899 + # Apply the initial transformations. 900 + # 901 + prototype = apply_transforms(function_xforms, prototype) 902 + # 903 + # If we have a macro, remove the "#define" at the front. 904 + # 979 905 new_proto = KernRe(r"^#\s*define\s+").sub("", prototype) 980 906 if new_proto != prototype: 981 - is_define_proto = True 982 907 prototype = new_proto 983 - else: 984 - is_define_proto = False 908 + # 909 + # Dispense with the simple "#define A B" case here; the key 910 + # is the space after the name of the symbol being defined. 911 + # NOTE that the seemingly misnamed "func_macro" indicates a 912 + # macro *without* arguments. 913 + # 914 + r = KernRe(r'^(\w+)\s+') 915 + if r.search(prototype): 916 + return_type = '' 917 + declaration_name = r.group(1) 918 + func_macro = True 919 + found = True 985 920 986 921 # Yes, this truly is vile. We are looking for: 987 922 # 1. Return type (may be nothing if we're looking at a macro) ··· 968 965 # - atomic_set (macro) 969 966 # - pci_match_device, __copy_to_user (long return type) 970 967 971 - name = r'[a-zA-Z0-9_~:]+' 972 - prototype_end1 = r'[^\(]*' 973 - prototype_end2 = r'[^\{]*' 974 - prototype_end = fr'\(({prototype_end1}|{prototype_end2})\)' 975 - 976 - # Besides compiling, Perl qr{[\w\s]+} works as a non-capturing group. 977 - # So, this needs to be mapped in Python with (?:...)? 
or (?:...)+ 978 - 968 + name = r'\w+' 979 969 type1 = r'(?:[\w\s]+)?' 980 970 type2 = r'(?:[\w\s]+\*+)+' 981 - 982 - found = False 983 - 984 - if is_define_proto: 985 - r = KernRe(r'^()(' + name + r')\s+') 986 - 987 - if r.search(prototype): 988 - return_type = '' 989 - declaration_name = r.group(2) 990 - func_macro = True 991 - 992 - found = True 993 - 971 + # 972 + # Attempt to match first on (args) with no internal parentheses; this 973 + # lets us easily filter out __acquires() and other post-args stuff. If 974 + # that fails, just grab the rest of the line to the last closing 975 + # parenthesis. 976 + # 977 + proto_args = r'\(([^\(]*|.*)\)' 978 + # 979 + # (Except for the simple macro case) attempt to split up the prototype 980 + # in the various ways we understand. 981 + # 994 982 if not found: 995 983 patterns = [ 996 - rf'^()({name})\s*{prototype_end}', 997 - rf'^({type1})\s+({name})\s*{prototype_end}', 998 - rf'^({type2})\s*({name})\s*{prototype_end}', 984 + rf'^()({name})\s*{proto_args}', 985 + rf'^({type1})\s+({name})\s*{proto_args}', 986 + rf'^({type2})\s*({name})\s*{proto_args}', 999 987 ] 1000 988 1001 989 for p in patterns: 1002 990 r = KernRe(p) 1003 - 1004 991 if r.match(prototype): 1005 - 1006 992 return_type = r.group(1) 1007 993 declaration_name = r.group(2) 1008 994 args = r.group(3) 1009 - 1010 995 self.create_parameter_list(ln, decl_type, args, ',', 1011 996 declaration_name) 1012 - 1013 997 found = True 1014 998 break 999 + # 1000 + # Parsing done; make sure that things are as we expect. 1001 + # 1015 1002 if not found: 1016 1003 self.emit_msg(ln, 1017 1004 f"cannot understand function prototype: '{prototype}'") 1018 1005 return 1019 - 1020 1006 if self.entry.identifier != declaration_name: 1021 - self.emit_msg(ln, 1022 - f"expecting prototype for {self.entry.identifier}(). Prototype was for {declaration_name}() instead") 1007 + self.emit_msg(ln, f"expecting prototype for {self.entry.identifier}(). 
" 1008 + f"Prototype was for {declaration_name}() instead") 1023 1009 return 1024 - 1025 1010 self.check_sections(ln, declaration_name, "function") 1026 - 1027 1011 self.check_return_section(ln, declaration_name, return_type) 1012 + # 1013 + # Store the result. 1014 + # 1015 + self.output_declaration(decl_type, declaration_name, 1016 + typedef=('typedef' in return_type), 1017 + functiontype=return_type, 1018 + purpose=self.entry.declaration_purpose, 1019 + func_macro=func_macro) 1028 1020 1029 - if 'typedef' in return_type: 1030 - self.output_declaration(decl_type, declaration_name, 1031 - typedef=True, 1032 - functiontype=return_type, 1033 - purpose=self.entry.declaration_purpose, 1034 - func_macro=func_macro) 1035 - else: 1036 - self.output_declaration(decl_type, declaration_name, 1037 - typedef=False, 1038 - functiontype=return_type, 1039 - purpose=self.entry.declaration_purpose, 1040 - func_macro=func_macro) 1041 1021 1042 1022 def dump_typedef(self, ln, proto): 1043 1023 """ 1044 1024 Stores a typedef inside self.entries array. 1045 1025 """ 1046 - 1047 - typedef_type = r'((?:\s+[\w\*]+\b){0,7}\s+(?:\w+\b|\*+))\s*' 1026 + # 1027 + # We start by looking for function typedefs. 1028 + # 1029 + typedef_type = r'typedef((?:\s+[\w*]+\b){0,7}\s+(?:\w+\b|\*+))\s*' 1048 1030 typedef_ident = r'\*?\s*(\w\S+)\s*' 1049 1031 typedef_args = r'\s*\((.*)\);' 1050 1032 1051 - typedef1 = KernRe(r'typedef' + typedef_type + r'\(' + typedef_ident + r'\)' + typedef_args) 1052 - typedef2 = KernRe(r'typedef' + typedef_type + typedef_ident + typedef_args) 1053 - 1054 - # Strip comments 1055 - proto = KernRe(r'/\*.*?\*/', flags=re.S).sub('', proto) 1033 + typedef1 = KernRe(typedef_type + r'\(' + typedef_ident + r'\)' + typedef_args) 1034 + typedef2 = KernRe(typedef_type + typedef_ident + typedef_args) 1056 1035 1057 1036 # Parse function typedef prototypes 1058 1037 for r in [typedef1, typedef2]: ··· 1050 1065 f"expecting prototype for typedef {self.entry.identifier}. 
Prototype was for typedef {declaration_name} instead\n") 1051 1066 return 1052 1067 1053 - decl_type = 'function' 1054 - self.create_parameter_list(ln, decl_type, args, ',', declaration_name) 1068 + self.create_parameter_list(ln, 'function', args, ',', declaration_name) 1055 1069 1056 - self.output_declaration(decl_type, declaration_name, 1070 + self.output_declaration('function', declaration_name, 1057 1071 typedef=True, 1058 1072 functiontype=return_type, 1059 1073 purpose=self.entry.declaration_purpose) 1060 1074 return 1061 - 1062 - # Handle nested parentheses or brackets 1063 - r = KernRe(r'(\(*.\)\s*|\[*.\]\s*);$') 1064 - while r.search(proto): 1065 - proto = r.sub('', proto) 1066 - 1067 - # Parse simple typedefs 1075 + # 1076 + # Not a function, try to parse a simple typedef. 1077 + # 1068 1078 r = KernRe(r'typedef.*\s+(\w+)\s*;') 1069 1079 if r.match(proto): 1070 1080 declaration_name = r.group(1) ··· 1242 1262 self.dump_section() 1243 1263 1244 1264 # Look for doc_com + <text> + doc_end: 1245 - r = KernRe(r'\s*\*\s*[a-zA-Z_0-9:\.]+\*/') 1265 + r = KernRe(r'\s*\*\s*[a-zA-Z_0-9:.]+\*/') 1246 1266 if r.match(line): 1247 1267 self.emit_msg(ln, f"suspicious ending line: {line}") 1248 1268 ··· 1453 1473 """Ancillary routine to process a function prototype""" 1454 1474 1455 1475 # strip C99-style comments to end of line 1456 - line = KernRe(r"\/\/.*$", re.S).sub('', line) 1476 + line = KernRe(r"//.*$", re.S).sub('', line) 1457 1477 # 1458 1478 # Soak up the line's worth of prototype text, stopping at { or ; if present. 1459 1479 #
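The core pattern this refactor introduces is the `apply_transforms()` helper: an ordered table of (regex, substitution) pairs run over a prototype string. A minimal standalone sketch of the same idea, using plain `re.compile` in place of the kernel's `KernRe` wrapper and a shortened, illustrative transform table (not the exact tables from the patch):

```python
import re

# Hypothetical, simplified transform table in the style of function_xforms:
# each entry is (compiled_regex, replacement), applied in table order.
function_xforms = [
    (re.compile(r"^static +"), ""),
    (re.compile(r"^inline +"), ""),
    (re.compile(r"__init +"), ""),
]

def apply_transforms(xforms, text):
    """Run every (search, subst) pair over the text, in table order."""
    for search, subst in xforms:
        text = search.sub(subst, text)
    return text

proto = "static inline int __init foo(int bar)"
print(apply_transforms(function_xforms, proto))  # -> int foo(int bar)
```

Note that order matters: the anchored `^static` and `^inline` patterns only strip qualifiers once they reach the front of the string, which is why each earlier substitution exposes the next.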
+1 -1
scripts/selinux/install_policy.sh
···
 74  74 	$SF -F file_contexts /
 75  75 
 76  76 	mounts=`cat /proc/$$/mounts | \
 77     - 		grep -E "ext[234]|jfs|xfs|reiserfs|jffs2|gfs2|btrfs|f2fs|ocfs2" | \
     77 + 		grep -E "ext[234]|jfs|xfs|jffs2|gfs2|btrfs|f2fs|ocfs2" | \
 78  78 		awk '{ print $2 '}`
 79  79 	$SF -F file_contexts $mounts
 80  80 
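The pipeline being patched here filters `/proc/$$/mounts` down to filesystems that support security labeling and prints their mount points (field two); the change simply drops `reiserfs` from the alternation. A rough Python equivalent of the `grep -E ... | awk '{ print $2 }'` step, with made-up sample data for illustration:

```python
import re

# Same alternation as the patched grep -E, with reiserfs dropped
LABELED_FS = re.compile(r"ext[234]|jfs|xfs|jffs2|gfs2|btrfs|f2fs|ocfs2")

def labeled_mount_points(mounts_text):
    """Return the mount point (second field) of each matching mounts line."""
    return [line.split()[1]
            for line in mounts_text.splitlines()
            if line.strip() and LABELED_FS.search(line)]

sample = """\
/dev/sda1 / ext4 rw,relatime 0 0
proc /proc proc rw 0 0
/dev/sda2 /home btrfs rw 0 0
/dev/sda3 /old reiserfs rw 0 0
"""
print(labeled_mount_points(sample))  # -> ['/', '/home']
```

As with the shell version, the pattern matches anywhere on the line, so it inherits the same caveat: a device path that happens to contain one of these substrings would also match.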
+719
scripts/sphinx-build-wrapper
···
  1 + #!/usr/bin/env python3
  2 + # SPDX-License-Identifier: GPL-2.0
  3 + # Copyright (C) 2025 Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  4 + #
  5 + # pylint: disable=R0902, R0912, R0913, R0914, R0915, R0917, C0103
  6 + #
  7 + # Converted from docs Makefile and parallel-wrapper.sh, both under
  8 + # GPLv2, copyrighted since 2008 by the following authors:
  9 + #
 10 + # Akira Yokosawa <akiyks@gmail.com>
 11 + # Arnd Bergmann <arnd@arndb.de>
 12 + # Breno Leitao <leitao@debian.org>
 13 + # Carlos Bilbao <carlos.bilbao@amd.com>
 14 + # Dave Young <dyoung@redhat.com>
 15 + # Donald Hunter <donald.hunter@gmail.com>
 16 + # Geert Uytterhoeven <geert+renesas@glider.be>
 17 + # Jani Nikula <jani.nikula@intel.com>
 18 + # Jan Stancek <jstancek@redhat.com>
 19 + # Jonathan Corbet <corbet@lwn.net>
 20 + # Joshua Clayton <stillcompiling@gmail.com>
 21 + # Kees Cook <keescook@chromium.org>
 22 + # Linus Torvalds <torvalds@linux-foundation.org>
 23 + # Magnus Damm <damm+renesas@opensource.se>
 24 + # Masahiro Yamada <masahiroy@kernel.org>
 25 + # Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
 26 + # Maxim Cournoyer <maxim.cournoyer@gmail.com>
 27 + # Peter Foley <pefoley2@pefoley.com>
 28 + # Randy Dunlap <rdunlap@infradead.org>
 29 + # Rob Herring <robh@kernel.org>
 30 + # Shuah Khan <shuahkh@osg.samsung.com>
 31 + # Thorsten Blum <thorsten.blum@toblux.com>
 32 + # Tomas Winkler <tomas.winkler@intel.com>
 33 + 
 34 + 
 35 + """
 36 + Sphinx build wrapper that handles Kernel-specific business rules:
 37 + 
 38 + - it gets the Kernel build environment vars;
 39 + - it determines what's the best parallelism;
 40 + - it handles SPHINXDIRS
 41 + 
 42 + This tool ensures that MIN_PYTHON_VERSION is satisfied. If version is
 43 + below that, it seeks for a new Python version. If found, it re-runs using
 44 + the newer version.
45 + """ 46 + 47 + import argparse 48 + import locale 49 + import os 50 + import re 51 + import shlex 52 + import shutil 53 + import subprocess 54 + import sys 55 + 56 + from concurrent import futures 57 + from glob import glob 58 + 59 + LIB_DIR = "lib" 60 + SRC_DIR = os.path.dirname(os.path.realpath(__file__)) 61 + 62 + sys.path.insert(0, os.path.join(SRC_DIR, LIB_DIR)) 63 + 64 + from jobserver import JobserverExec # pylint: disable=C0413 65 + 66 + 67 + def parse_version(version): 68 + """Convert a major.minor.patch version into a tuple""" 69 + return tuple(int(x) for x in version.split(".")) 70 + 71 + def ver_str(version): 72 + """Returns a version tuple as major.minor.patch""" 73 + 74 + return ".".join([str(x) for x in version]) 75 + 76 + # Minimal supported Python version needed by Sphinx and its extensions 77 + MIN_PYTHON_VERSION = parse_version("3.7") 78 + 79 + # Default value for --venv parameter 80 + VENV_DEFAULT = "sphinx_latest" 81 + 82 + # List of make targets and its corresponding builder and output directory 83 + TARGETS = { 84 + "cleandocs": { 85 + "builder": "clean", 86 + }, 87 + "htmldocs": { 88 + "builder": "html", 89 + }, 90 + "epubdocs": { 91 + "builder": "epub", 92 + "out_dir": "epub", 93 + }, 94 + "texinfodocs": { 95 + "builder": "texinfo", 96 + "out_dir": "texinfo", 97 + }, 98 + "infodocs": { 99 + "builder": "texinfo", 100 + "out_dir": "texinfo", 101 + }, 102 + "latexdocs": { 103 + "builder": "latex", 104 + "out_dir": "latex", 105 + }, 106 + "pdfdocs": { 107 + "builder": "latex", 108 + "out_dir": "latex", 109 + }, 110 + "xmldocs": { 111 + "builder": "xml", 112 + "out_dir": "xml", 113 + }, 114 + "linkcheckdocs": { 115 + "builder": "linkcheck" 116 + }, 117 + } 118 + 119 + # Paper sizes. An empty value will pick the default 120 + PAPER = ["", "a4", "letter"] 121 + 122 + class SphinxBuilder: 123 + """ 124 + Handles a sphinx-build target, adding needed arguments to build 125 + with the Kernel. 
126 + """ 127 + 128 + def is_rust_enabled(self): 129 + """Check if rust is enabled at .config""" 130 + config_path = os.path.join(self.srctree, ".config") 131 + if os.path.isfile(config_path): 132 + with open(config_path, "r", encoding="utf-8") as f: 133 + return "CONFIG_RUST=y" in f.read() 134 + return False 135 + 136 + def get_path(self, path, abs_path=False): 137 + """ 138 + Ancillary routine to handle paths the right way, as a shell does. 139 + 140 + It first expands "~" and "~user". Then, if the path is not absolute, 141 + join self.srctree. Finally, if requested, convert to abspath. 142 + """ 143 + 144 + path = os.path.expanduser(path) 145 + if not path.startswith("/"): 146 + path = os.path.join(self.srctree, path) 147 + 148 + if abs_path: 149 + return os.path.abspath(path) 150 + 151 + return path 152 + 153 + def __init__(self, venv=None, verbose=False, n_jobs=None, interactive=None): 154 + """Initialize internal variables""" 155 + self.venv = venv 156 + self.verbose = None 157 + 158 + # Normal variables passed from Kernel's makefile 159 + self.kernelversion = os.environ.get("KERNELVERSION", "unknown") 160 + self.kernelrelease = os.environ.get("KERNELRELEASE", "unknown") 161 + self.pdflatex = os.environ.get("PDFLATEX", "xelatex") 162 + 163 + if not interactive: 164 + self.latexopts = os.environ.get("LATEXOPTS", "-interaction=batchmode -no-shell-escape") 165 + else: 166 + self.latexopts = os.environ.get("LATEXOPTS", "") 167 + 168 + if not verbose: 169 + verbose = bool(os.environ.get("KBUILD_VERBOSE", "") != "") 170 + 171 + # Handle SPHINXOPTS environment 172 + sphinxopts = shlex.split(os.environ.get("SPHINXOPTS", "")) 173 + 174 + # As we handle the number of jobs and quiet separately, we need to pick 175 + # them the same way as sphinx-build would, so let's use argparse 176 + # to do the right argument expansion 177 + parser = argparse.ArgumentParser() 178 + parser.add_argument('-j', '--jobs', type=int) 179 + parser.add_argument('-q', '--quiet', type=int) 180 +
181 + # Other sphinx-build arguments go as-is, so place them 182 + # at self.sphinxopts 183 + sphinx_args, self.sphinxopts = parser.parse_known_args(sphinxopts) 184 + if sphinx_args.quiet == True: 185 + self.verbose = False 186 + 187 + self.n_jobs = sphinx_args.jobs 188 + 189 + # Command line arguments were passed; they override SPHINXOPTS 190 + if verbose is not None: 191 + self.verbose = verbose 192 + 193 + if n_jobs: 194 + self.n_jobs = n_jobs 195 + 196 + # Source tree directory. This needs to be at os.environ, as 197 + # Sphinx extensions and the media uAPI makefile need it 198 + self.srctree = os.environ.get("srctree") 199 + if not self.srctree: 200 + self.srctree = "." 201 + os.environ["srctree"] = self.srctree 202 + 203 + # Now that we can expand srctree, get other directories as well 204 + self.sphinxbuild = os.environ.get("SPHINXBUILD", "sphinx-build") 205 + self.kerneldoc = self.get_path(os.environ.get("KERNELDOC", 206 + "scripts/kernel-doc.py")) 207 + self.obj = os.environ.get("obj", "Documentation") 208 + self.builddir = self.get_path(os.path.join(self.obj, "output"), 209 + abs_path=True) 210 + 211 + # Media uAPI needs it 212 + os.environ["BUILDDIR"] = self.builddir 213 + 214 + # Detect if rust is enabled 215 + self.config_rust = self.is_rust_enabled() 216 + 217 + # Get directory locations for LaTeX build toolchain 218 + self.pdflatex_cmd = shutil.which(self.pdflatex) 219 + self.latexmk_cmd = shutil.which("latexmk") 220 + 221 + self.env = os.environ.copy() 222 + 223 + # If venv parameter is specified, run Sphinx from venv 224 + if venv: 225 + bin_dir = os.path.join(venv, "bin") 226 + if os.path.isfile(os.path.join(bin_dir, "activate")): 227 + # "activate" virtual env 228 + self.env["PATH"] = bin_dir + ":" + self.env["PATH"] 229 + self.env["VIRTUAL_ENV"] = venv 230 + if "PYTHONHOME" in self.env: 231 + del self.env["PYTHONHOME"] 232 + print(f"Setting venv to {venv}") 233 + else: 234 + sys.exit(f"Venv {venv} not found.") 235 + 236 + def run_sphinx(self,
sphinx_build, build_args, *args, **pwargs): 237 + """ 238 + Execute sphinx-build using the current python3 command, setting 239 + the -j parameter when possible so the build runs in parallel. 240 + """ 241 + 242 + with JobserverExec() as jobserver: 243 + if jobserver.claim: 244 + n_jobs = str(jobserver.claim) 245 + else: 246 + n_jobs = "auto" # Supported since Sphinx 1.7 247 + 248 + cmd = [] 249 + 250 + if self.venv: 251 + cmd.append("python") 252 + else: 253 + cmd.append(sys.executable) 254 + 255 + cmd.append(sphinx_build) 256 + 257 + # if present, SPHINXOPTS or command line --jobs overrides default 258 + if self.n_jobs: 259 + n_jobs = str(self.n_jobs) 260 + 261 + if n_jobs: 262 + cmd += [f"-j{n_jobs}"] 263 + 264 + if not self.verbose: 265 + cmd.append("-q") 266 + 267 + cmd += self.sphinxopts 268 + 269 + cmd += build_args 270 + 271 + if self.verbose: 272 + print(" ".join(cmd)) 273 + 274 + rc = subprocess.call(cmd, *args, **pwargs) 275 + if rc: 276 + sys.exit(rc) 277 + 278 + def handle_html(self, css, output_dir): 279 + """ 280 + Extra steps for HTML and epub output. 281 + 282 + For such targets, we need to ensure that CSS will be properly 283 + copied to the output _static directory 284 + """ 285 + if not css: 286 + return 287 + 288 + css = os.path.expanduser(css) 289 + if not css.startswith("/"): 290 + css = os.path.join(self.srctree, css) 291 + 292 + static_dir = os.path.join(output_dir, "_static") 293 + os.makedirs(static_dir, exist_ok=True) 294 + 295 + try: 296 + shutil.copy2(css, static_dir) 297 + except (OSError, IOError) as e: 298 + print(f"Warning: Failed to copy CSS: {e}", file=sys.stderr) 299 + 300 + def build_pdf_file(self, latex_cmd, from_dir, path): 301 + """Builds a single pdf file using latex_cmd""" 302 + try: 303 + subprocess.run(latex_cmd + [path], 304 + cwd=from_dir, check=True) 305 + return True 306 + except subprocess.CalledProcessError: 307 + # LaTeX PDF error code is almost useless: it returns 308 + # error codes even when build succeeds but has warnings.
309 + # So, we'll ignore the results 310 + return False 311 + 312 + def pdf_parallel_build(self, tex_suffix, latex_cmd, tex_files, n_jobs): 313 + """Build PDF files in parallel if possible""" 314 + builds = {} 315 + build_failed = False 316 + max_len = 0 317 + has_tex = False 318 + 319 + # Process files in parallel 320 + with futures.ThreadPoolExecutor(max_workers=n_jobs) as executor: 321 + jobs = {} 322 + 323 + for from_dir, pdf_dir, entry in tex_files: 324 + name = entry.name 325 + 326 + if not name.endswith(tex_suffix): 327 + continue 328 + 329 + name = name[:-len(tex_suffix)] 330 + 331 + max_len = max(max_len, len(name)) 332 + 333 + has_tex = True 334 + 335 + future = executor.submit(self.build_pdf_file, latex_cmd, 336 + from_dir, entry.path) 337 + jobs[future] = (from_dir, name, entry.path) 338 + 339 + for future in futures.as_completed(jobs): 340 + from_dir, name, path = jobs[future] 341 + 342 + pdf_name = name + ".pdf" 343 + pdf_from = os.path.join(from_dir, pdf_name) 344 + 345 + try: 346 + success = future.result() 347 + 348 + if success and os.path.exists(pdf_from): 349 + pdf_to = os.path.join(pdf_dir, pdf_name) 350 + 351 + os.rename(pdf_from, pdf_to) 352 + builds[name] = os.path.relpath(pdf_to, self.builddir) 353 + else: 354 + builds[name] = "FAILED" 355 + build_failed = True 356 + except Exception as e: 357 + builds[name] = f"FAILED ({str(e)})" 358 + build_failed = True 359 + 360 + # Handle case where no .tex files were found 361 + if not has_tex: 362 + name = "Sphinx LaTeX builder" 363 + max_len = max(max_len, len(name)) 364 + builds[name] = "FAILED (no .tex file was generated)" 365 + build_failed = True 366 + 367 + return builds, build_failed, max_len 368 + 369 + def handle_pdf(self, output_dirs): 370 + """ 371 + Extra steps for PDF output. 372 + 373 + As PDF is handled via a LaTeX output, after building the .tex file, 374 + a new build is needed to create the PDF output from the latex 375 + directory. 
376 + """ 377 + builds = {} 378 + max_len = 0 379 + tex_suffix = ".tex" 380 + 381 + # Get all tex files that will be used for PDF build 382 + tex_files = [] 383 + for from_dir in output_dirs: 384 + pdf_dir = os.path.join(from_dir, "../pdf") 385 + os.makedirs(pdf_dir, exist_ok=True) 386 + 387 + if self.latexmk_cmd: 388 + latex_cmd = [self.latexmk_cmd, f"-{self.pdflatex}"] 389 + else: 390 + latex_cmd = [self.pdflatex] 391 + 392 + latex_cmd.extend(shlex.split(self.latexopts)) 393 + 394 + # Get a list of tex files to process 395 + with os.scandir(from_dir) as it: 396 + for entry in it: 397 + if entry.name.endswith(tex_suffix): 398 + tex_files.append((from_dir, pdf_dir, entry)) 399 + 400 + # When using make, this won't be used, as the number of jobs comes 401 + # from POSIX jobserver. So, this covers the case where build comes 402 + # from command line. On such case, serialize by default, except if 403 + # the user explicitly sets the number of jobs. 404 + n_jobs = 1 405 + 406 + # n_jobs is either an integer or "auto". 
Only use it if it is a number 407 + if self.n_jobs: 408 + try: 409 + n_jobs = int(self.n_jobs) 410 + except ValueError: 411 + pass 412 + 413 + # When using make, jobserver.claim is the number of jobs that were 414 + # used with "-j" and that aren't used by other make targets 415 + with JobserverExec() as jobserver: 416 + n_jobs = 1 417 + 418 + # Handle the case when a parameter is passed via command line, 419 + # using it as default, if jobserver doesn't claim anything 420 + if self.n_jobs: 421 + try: 422 + n_jobs = int(self.n_jobs) 423 + except ValueError: 424 + pass 425 + 426 + if jobserver.claim: 427 + n_jobs = jobserver.claim 428 + 429 + # Build files in parallel 430 + builds, build_failed, max_len = self.pdf_parallel_build(tex_suffix, 431 + latex_cmd, 432 + tex_files, 433 + n_jobs) 434 + 435 + msg = "Summary" 436 + msg += "\n" + "=" * len(msg) 437 + print() 438 + print(msg) 439 + 440 + for pdf_name, pdf_file in builds.items(): 441 + print(f"{pdf_name:<{max_len}}: {pdf_file}") 442 + 443 + print() 444 + 445 + # return an error if a PDF file is missing 446 + 447 + if build_failed: 448 + sys.exit("PDF build failed: not all PDF files were created.") 449 + else: 450 + print("All PDF files were built.") 451 + 452 + def handle_info(self, output_dirs): 453 + """ 454 + Extra steps for Info output. 455 + 456 + For texinfo generation, an additional make is needed from the 457 + texinfo directory. 458 + """ 459 + 460 + for output_dir in output_dirs: 461 + try: 462 + subprocess.run(["make", "info"], cwd=output_dir, check=True) 463 + except subprocess.CalledProcessError as e: 464 + sys.exit(f"Error generating info docs: {e}") 465 + 466 + def cleandocs(self, builder): 467 + 468 + shutil.rmtree(self.builddir, ignore_errors=True) 469 + 470 + def build(self, target, sphinxdirs=None, conf="conf.py", 471 + theme=None, css=None, paper=None): 472 + """ 473 + Build documentation using Sphinx. This is the core function of this 474 + module.
It prepares all arguments required by sphinx-build. 475 + """ 476 + 477 + builder = TARGETS[target]["builder"] 478 + out_dir = TARGETS[target].get("out_dir", "") 479 + 480 + # Cleandocs doesn't require sphinx-build 481 + if target == "cleandocs": 482 + self.cleandocs(builder) 483 + return 484 + 485 + # Other targets require sphinx-build 486 + sphinxbuild = shutil.which(self.sphinxbuild, path=self.env["PATH"]) 487 + if not sphinxbuild: 488 + sys.exit(f"Error: {self.sphinxbuild} not found in PATH.\n") 489 + 490 + if builder == "latex": 491 + if not self.pdflatex_cmd and not self.latexmk_cmd: 492 + sys.exit("Error: pdflatex or latexmk required for PDF generation") 493 + 494 + docs_dir = os.path.abspath(os.path.join(self.srctree, "Documentation")) 495 + 496 + # Prepare base arguments for Sphinx build 497 + kerneldoc = self.kerneldoc 498 + if kerneldoc.startswith(self.srctree): 499 + kerneldoc = os.path.relpath(kerneldoc, self.srctree) 500 + 501 + # Prepare common Sphinx options 502 + args = [ 503 + "-b", builder, 504 + "-c", docs_dir, 505 + ] 506 + 507 + if builder == "latex": 508 + if not paper: 509 + paper = PAPER[1] 510 + 511 + args.extend(["-D", f"latex_elements.papersize={paper}paper"]) 512 + 513 + if self.config_rust: 514 + args.extend(["-t", "rustdoc"]) 515 + 516 + if conf: 517 + self.env["SPHINX_CONF"] = self.get_path(conf, abs_path=True) 518 + 519 + if not sphinxdirs: 520 + sphinxdirs = os.environ.get("SPHINXDIRS", ".") 521 + 522 + # The sphinx-build tool has a bug: internally, it tries to set 523 + # locale with locale.setlocale(locale.LC_ALL, ''). This causes a 524 + # crash if language is not set. Detect and fix it. 
525 + try: 526 + locale.setlocale(locale.LC_ALL, '') 527 + except Exception: 528 + self.env["LC_ALL"] = "C" 529 + self.env["LANG"] = "C" 530 + 531 + # sphinxdirs can be a list or a whitespace-separated string 532 + sphinxdirs_list = [] 533 + for sphinxdir in sphinxdirs: 534 + if isinstance(sphinxdir, list): 535 + sphinxdirs_list += sphinxdir 536 + else: 537 + for name in sphinxdir.split(" "): 538 + sphinxdirs_list.append(name) 539 + 540 + # Build each directory 541 + output_dirs = [] 542 + for sphinxdir in sphinxdirs_list: 543 + src_dir = os.path.join(docs_dir, sphinxdir) 544 + doctree_dir = os.path.join(self.builddir, ".doctrees") 545 + output_dir = os.path.join(self.builddir, sphinxdir, out_dir) 546 + 547 + # Make directory names canonical 548 + src_dir = os.path.normpath(src_dir) 549 + doctree_dir = os.path.normpath(doctree_dir) 550 + output_dir = os.path.normpath(output_dir) 551 + 552 + os.makedirs(doctree_dir, exist_ok=True) 553 + os.makedirs(output_dir, exist_ok=True) 554 + 555 + output_dirs.append(output_dir) 556 + 557 + build_args = args + [ 558 + "-d", doctree_dir, 559 + "-D", f"kerneldoc_bin={kerneldoc}", 560 + "-D", f"version={self.kernelversion}", 561 + "-D", f"release={self.kernelrelease}", 562 + "-D", f"kerneldoc_srctree={self.srctree}", 563 + src_dir, 564 + output_dir, 565 + ] 566 + 567 + # Execute sphinx-build 568 + try: 569 + self.run_sphinx(sphinxbuild, build_args, env=self.env) 570 + except Exception as e: 571 + sys.exit(f"Build failed: {e}") 572 + 573 + # Ensure that html/epub will have needed static files 574 + if target in ["htmldocs", "epubdocs"]: 575 + self.handle_html(css, output_dir) 576 + 577 + # PDF and Info require a second build step 578 + if target == "pdfdocs": 579 + self.handle_pdf(output_dirs) 580 + elif target == "infodocs": 581 + self.handle_info(output_dirs) 582 + 583 + @staticmethod 584 + def get_python_version(cmd): 585 + """ 586 + Get python version from a Python binary. 
As we need to detect 587 + whether newer python binaries are available, we can't rely on sys.version_info here. 588 + """ 589 + 590 + result = subprocess.run([cmd, "--version"], check=True, 591 + stdout=subprocess.PIPE, stderr=subprocess.PIPE, 592 + universal_newlines=True) 593 + version = result.stdout.strip() 594 + 595 + match = re.search(r"(\d+\.\d+\.\d+)", version) 596 + if match: 597 + return parse_version(match.group(1)) 598 + 599 + print(f"Can't parse version {version}") 600 + return (0, 0, 0) 601 + 602 + @staticmethod 603 + def find_python(): 604 + """ 605 + Detect whether any python 3.xy version newer than the 606 + current one is available. 607 + 608 + Note: this routine is limited to two digits for the python3 minor 609 + version. We may need to update it one day, hopefully in a distant future. 610 + """ 611 + patterns = [ 612 + "python3.[0-9]", 613 + "python3.[0-9][0-9]", 614 + ] 615 + 616 + # Look for a python binary newer than MIN_PYTHON_VERSION 617 + for path in os.getenv("PATH", "").split(":"): 618 + for pattern in patterns: 619 + for cmd in glob(os.path.join(path, pattern)): 620 + if os.path.isfile(cmd) and os.access(cmd, os.X_OK): 621 + version = SphinxBuilder.get_python_version(cmd) 622 + if version >= MIN_PYTHON_VERSION: 623 + return cmd 624 + 625 + return None 626 + 627 + @staticmethod 628 + def check_python(): 629 + """ 630 + Check if the current python binary satisfies our minimal requirement 631 + for the Sphinx build. If not, re-run with a newer version if found. 632 + """ 633 + cur_ver = sys.version_info[:3] 634 + if cur_ver >= MIN_PYTHON_VERSION: 635 + return 636 + 637 + python_ver = ver_str(cur_ver) 638 + 639 + new_python_cmd = SphinxBuilder.find_python() 640 + if not new_python_cmd: 641 + sys.exit(f"Python version {python_ver} is not supported anymore.") 642 + 643 + # Restart script using the newer version 644 + script_path = os.path.abspath(sys.argv[0]) 645 + args = [new_python_cmd, script_path] + sys.argv[1:] 646 + 647 + print(f"Python {python_ver} not supported.
Changing to {new_python_cmd}") 648 + 649 + try: 650 + os.execv(new_python_cmd, args) 651 + except OSError as e: 652 + sys.exit(f"Failed to restart with {new_python_cmd}: {e}") 653 + 654 + def jobs_type(value): 655 + """ 656 + Handle valid values for -j. Accepts Sphinx "-jauto", plus an integer 657 + greater than or equal to one. 658 + """ 659 + if value is None: 660 + return None 661 + 662 + if value.lower() == 'auto': 663 + return value.lower() 664 + 665 + try: 666 + if int(value) >= 1: 667 + return value 668 + 669 + raise argparse.ArgumentTypeError(f"Minimum jobs is 1, got {value}") 670 + except ValueError: 671 + raise argparse.ArgumentTypeError(f"Must be 'auto' or positive integer, got {value}") 672 + 673 + def main(): 674 + """ 675 + Main function. The only mandatory argument is the target. The 676 + remaining arguments use their default values unless overridden 677 + via the command line or os.environ. 678 + """ 679 + parser = argparse.ArgumentParser(description="Kernel documentation builder") 680 + 681 + parser.add_argument("target", choices=list(TARGETS.keys()), 682 + help="Documentation target to build") 683 + parser.add_argument("--sphinxdirs", nargs="+", 684 + help="Specific directories to build") 685 + parser.add_argument("--conf", default="conf.py", 686 + help="Sphinx configuration file") 687 + 688 + parser.add_argument("--theme", help="Sphinx theme to use") 689 + 690 + parser.add_argument("--css", help="Custom CSS file for HTML/EPUB") 691 + 692 + parser.add_argument("--paper", choices=PAPER, default=PAPER[0], 693 + help="Paper size for LaTeX/PDF output") 694 + 695 + parser.add_argument("-v", "--verbose", action='store_true', 696 + help="Run the build in verbose mode") 697 + 698 + parser.add_argument('-j', '--jobs', type=jobs_type, 699 + help="Sets number of jobs to use with sphinx-build") 700 + 701 + parser.add_argument('-i', '--interactive', action='store_true', 702 + help="Change latex default to run in interactive mode") 703 + 704 + parser.add_argument("-V",
"--venv", nargs='?', const=f'{VENV_DEFAULT}', 705 + default=None, 706 + help=f'If used, run Sphinx from a venv dir (default dir: {VENV_DEFAULT})') 707 + 708 + args = parser.parse_args() 709 + 710 + SphinxBuilder.check_python() 711 + 712 + builder = SphinxBuilder(venv=args.venv, verbose=args.verbose, 713 + n_jobs=args.jobs, interactive=args.interactive) 714 + 715 + builder.build(args.target, sphinxdirs=args.sphinxdirs, conf=args.conf, 716 + theme=args.theme, css=args.css, paper=args.paper) 717 + 718 + if __name__ == "__main__": 719 + main()
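A note on the wrapper's MIN_PYTHON_VERSION check: it works because Python compares tuples element-wise and numerically, which a plain string comparison would get wrong for two-digit minor versions. A standalone sketch, with the two helpers mirrored from the script above:

```python
# Version helpers mirrored from sphinx-build-wrapper: a version string
# becomes a tuple of ints, so comparisons are numeric per component.

def parse_version(version):
    """Convert a major.minor.patch version into a tuple."""
    return tuple(int(x) for x in version.split("."))

def ver_str(version):
    """Return a version tuple as major.minor.patch."""
    return ".".join(str(x) for x in version)

MIN_PYTHON_VERSION = parse_version("3.7")

# (3, 10, 0) >= (3, 7) compares 3 == 3, then 10 > 7: correct. The string
# comparison "3.10.0" >= "3.7" is False, because "1" sorts before "7".
# A shorter tuple like (3, 7) also compares sanely against
# sys.version_info[:3], which is why the script can mix tuple lengths.
print(parse_version("3.10.0") >= MIN_PYTHON_VERSION)  # True
print("3.10.0" >= "3.7")                              # False
```

This is the reason the wrapper never compares version strings directly, and why `get_python_version()` feeds its regex match through `parse_version()` before comparing.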
+1619 -1054
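The sphinx-pre-install conversion below keeps the Perl script's approach of probing for Python modules by spawning `python -c 'import …'` and checking the exit status. A minimal Python rendering of that probe (the function is illustrative, not the converted script's actual code):

```python
import subprocess
import sys

def check_python_module(python_cmd, module):
    """Probe whether `module` can be imported by `python_cmd`.

    Mirrors the Perl check_python_module() below, which runs
    `python -c 'import <module>'` with output discarded and treats a
    zero exit status as "module present".
    """
    result = subprocess.run([python_cmd, "-c", f"import {module}"],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL,
                            check=False)
    return result.returncode == 0

print(check_python_module(sys.executable, "os"))    # True
print(check_python_module(sys.executable, "no_such_module_xyz"))  # False
```

Spawning a subprocess rather than importing in-process matters here: the interpreter being probed may be a different binary (e.g. one inside a venv), and a failed import cannot pollute the checker's own module state.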
scripts/sphinx-pre-install
··· 1 - #!/usr/bin/env perl 1 + #!/usr/bin/env python3 2 2 # SPDX-License-Identifier: GPL-2.0-or-later 3 - use strict; 4 - 5 - # Copyright (c) 2017-2020 Mauro Carvalho Chehab <mchehab@kernel.org> 3 + # Copyright (c) 2017-2025 Mauro Carvalho Chehab <mchehab+huawei@kernel.org> 6 4 # 7 - 8 - my $prefix = "./"; 9 - $prefix = "$ENV{'srctree'}/" if ($ENV{'srctree'}); 10 - 11 - my $conf = $prefix . "Documentation/conf.py"; 12 - my $requirement_file = $prefix . "Documentation/sphinx/requirements.txt"; 13 - my $virtenv_prefix = "sphinx_"; 14 - 15 - # 16 - # Static vars 17 - # 18 - 19 - my %missing; 20 - my $system_release; 21 - my $need = 0; 22 - my $optional = 0; 23 - my $need_symlink = 0; 24 - my $need_sphinx = 0; 25 - my $need_pip = 0; 26 - my $need_virtualenv = 0; 27 - my $rec_sphinx_upgrade = 0; 28 - my $verbose_warn_install = 1; 29 - my $install = ""; 30 - my $virtenv_dir = ""; 31 - my $python_cmd = ""; 32 - my $activate_cmd; 33 - my $min_version; 34 - my $cur_version; 35 - my $rec_version = "3.4.3"; 36 - my $latest_avail_ver; 37 - 38 - # 39 - # Command line arguments 40 - # 41 - 42 - my $pdf = 1; 43 - my $virtualenv = 1; 44 - my $version_check = 0; 45 - 46 - # 47 - # List of required texlive packages on Fedora and OpenSuse 48 - # 49 - 50 - my %texlive = ( 51 - 'amsfonts.sty' => 'texlive-amsfonts', 52 - 'amsmath.sty' => 'texlive-amsmath', 53 - 'amssymb.sty' => 'texlive-amsfonts', 54 - 'amsthm.sty' => 'texlive-amscls', 55 - 'anyfontsize.sty' => 'texlive-anyfontsize', 56 - 'atbegshi.sty' => 'texlive-oberdiek', 57 - 'bm.sty' => 'texlive-tools', 58 - 'capt-of.sty' => 'texlive-capt-of', 59 - 'cmap.sty' => 'texlive-cmap', 60 - 'ecrm1000.tfm' => 'texlive-ec', 61 - 'eqparbox.sty' => 'texlive-eqparbox', 62 - 'eu1enc.def' => 'texlive-euenc', 63 - 'fancybox.sty' => 'texlive-fancybox', 64 - 'fancyvrb.sty' => 'texlive-fancyvrb', 65 - 'float.sty' => 'texlive-float', 66 - 'fncychap.sty' => 'texlive-fncychap', 67 - 'footnote.sty' => 'texlive-mdwtools', 68 - 'framed.sty' => 
'texlive-framed', 69 - 'luatex85.sty' => 'texlive-luatex85', 70 - 'multirow.sty' => 'texlive-multirow', 71 - 'needspace.sty' => 'texlive-needspace', 72 - 'palatino.sty' => 'texlive-psnfss', 73 - 'parskip.sty' => 'texlive-parskip', 74 - 'polyglossia.sty' => 'texlive-polyglossia', 75 - 'tabulary.sty' => 'texlive-tabulary', 76 - 'threeparttable.sty' => 'texlive-threeparttable', 77 - 'titlesec.sty' => 'texlive-titlesec', 78 - 'ucs.sty' => 'texlive-ucs', 79 - 'upquote.sty' => 'texlive-upquote', 80 - 'wrapfig.sty' => 'texlive-wrapfig', 81 - 'ctexhook.sty' => 'texlive-ctex', 82 - ); 83 - 84 - # 85 - # Subroutines that checks if a feature exists 86 - # 87 - 88 - sub check_missing(%) 89 - { 90 - my %map = %{$_[0]}; 91 - 92 - foreach my $prog (sort keys %missing) { 93 - my $is_optional = $missing{$prog}; 94 - 95 - # At least on some LTS distros like CentOS 7, texlive doesn't 96 - # provide all packages we need. When such distros are 97 - # detected, we have to disable PDF output. 98 - # 99 - # So, we need to ignore the packages that distros would 100 - # need for LaTeX to work 101 - if ($is_optional == 2 && !$pdf) { 102 - $optional--; 103 - next; 104 - } 105 - 106 - if ($verbose_warn_install) { 107 - if ($is_optional) { 108 - print "Warning: better to also install \"$prog\".\n"; 109 - } else { 110 - print "ERROR: please install \"$prog\", otherwise, build won't work.\n"; 111 - } 112 - } 113 - if (defined($map{$prog})) { 114 - $install .= " " . $map{$prog}; 115 - } else { 116 - $install .= " " . 
$prog; 117 - } 118 - } 119 - 120 - $install =~ s/^\s//; 121 - } 122 - 123 - sub add_package($$) 124 - { 125 - my $package = shift; 126 - my $is_optional = shift; 127 - 128 - $missing{$package} = $is_optional; 129 - if ($is_optional) { 130 - $optional++; 131 - } else { 132 - $need++; 133 - } 134 - } 135 - 136 - sub check_missing_file($$$) 137 - { 138 - my $files = shift; 139 - my $package = shift; 140 - my $is_optional = shift; 141 - 142 - for (@$files) { 143 - return if(-e $_); 144 - } 145 - 146 - add_package($package, $is_optional); 147 - } 148 - 149 - sub findprog($) 150 - { 151 - foreach(split(/:/, $ENV{PATH})) { 152 - return "$_/$_[0]" if(-x "$_/$_[0]"); 153 - } 154 - } 155 - 156 - sub find_python_no_venv() 157 - { 158 - my $prog = shift; 159 - 160 - my $cur_dir = qx(pwd); 161 - $cur_dir =~ s/\s+$//; 162 - 163 - foreach my $dir (split(/:/, $ENV{PATH})) { 164 - next if ($dir =~ m,($cur_dir)/sphinx,); 165 - return "$dir/python3" if(-x "$dir/python3"); 166 - } 167 - foreach my $dir (split(/:/, $ENV{PATH})) { 168 - next if ($dir =~ m,($cur_dir)/sphinx,); 169 - return "$dir/python" if(-x "$dir/python"); 170 - } 171 - return "python"; 172 - } 173 - 174 - sub check_program($$) 175 - { 176 - my $prog = shift; 177 - my $is_optional = shift; 178 - 179 - return $prog if findprog($prog); 180 - 181 - add_package($prog, $is_optional); 182 - } 183 - 184 - sub check_perl_module($$) 185 - { 186 - my $prog = shift; 187 - my $is_optional = shift; 188 - 189 - my $err = system("perl -M$prog -e 1 2>/dev/null /dev/null"); 190 - return if ($err == 0); 191 - 192 - add_package($prog, $is_optional); 193 - } 194 - 195 - sub check_python_module($$) 196 - { 197 - my $prog = shift; 198 - my $is_optional = shift; 199 - 200 - return if (!$python_cmd); 201 - 202 - my $err = system("$python_cmd -c 'import $prog' 2>/dev/null /dev/null"); 203 - return if ($err == 0); 204 - 205 - add_package($prog, $is_optional); 206 - } 207 - 208 - sub check_rpm_missing($$) 209 - { 210 - my @pkgs = @{$_[0]}; 211 - 
my $is_optional = $_[1]; 212 - 213 - foreach my $prog(@pkgs) { 214 - my $err = system("rpm -q '$prog' 2>/dev/null >/dev/null"); 215 - add_package($prog, $is_optional) if ($err); 216 - } 217 - } 218 - 219 - sub check_pacman_missing($$) 220 - { 221 - my @pkgs = @{$_[0]}; 222 - my $is_optional = $_[1]; 223 - 224 - foreach my $prog(@pkgs) { 225 - my $err = system("pacman -Q '$prog' 2>/dev/null >/dev/null"); 226 - add_package($prog, $is_optional) if ($err); 227 - } 228 - } 229 - 230 - sub check_missing_tex($) 231 - { 232 - my $is_optional = shift; 233 - my $kpsewhich = findprog("kpsewhich"); 234 - 235 - foreach my $prog(keys %texlive) { 236 - my $package = $texlive{$prog}; 237 - if (!$kpsewhich) { 238 - add_package($package, $is_optional); 239 - next; 240 - } 241 - my $file = qx($kpsewhich $prog); 242 - add_package($package, $is_optional) if ($file =~ /^\s*$/); 243 - } 244 - } 245 - 246 - sub get_sphinx_fname() 247 - { 248 - if ($ENV{'SPHINXBUILD'}) { 249 - return $ENV{'SPHINXBUILD'}; 250 - } 251 - 252 - my $fname = "sphinx-build"; 253 - return $fname if findprog($fname); 254 - 255 - $fname = "sphinx-build-3"; 256 - if (findprog($fname)) { 257 - $need_symlink = 1; 258 - return $fname; 259 - } 260 - 261 - return ""; 262 - } 263 - 264 - sub get_sphinx_version($) 265 - { 266 - my $cmd = shift; 267 - my $ver; 268 - 269 - open IN, "$cmd --version 2>&1 |"; 270 - while (<IN>) { 271 - if (m/^\s*sphinx-build\s+([\d\.]+)((\+\/[\da-f]+)|(b\d+))?$/) { 272 - $ver=$1; 273 - last; 274 - } 275 - # Sphinx 1.2.x uses a different format 276 - if (m/^\s*Sphinx.*\s+([\d\.]+)$/) { 277 - $ver=$1; 278 - last; 279 - } 280 - } 281 - close IN; 282 - return $ver; 283 - } 284 - 285 - sub check_sphinx() 286 - { 287 - open IN, $conf or die "Can't open $conf"; 288 - while (<IN>) { 289 - if (m/^\s*needs_sphinx\s*=\s*[\'\"]([\d\.]+)[\'\"]/) { 290 - $min_version=$1; 291 - last; 292 - } 293 - } 294 - close IN; 295 - 296 - die "Can't get needs_sphinx version from $conf" if (!$min_version); 297 - 298 - 
$virtenv_dir = $virtenv_prefix . "latest"; 299 - 300 - my $sphinx = get_sphinx_fname(); 301 - if ($sphinx eq "") { 302 - $need_sphinx = 1; 303 - return; 304 - } 305 - 306 - $cur_version = get_sphinx_version($sphinx); 307 - die "$sphinx didn't return its version" if (!$cur_version); 308 - 309 - if ($cur_version lt $min_version) { 310 - printf "ERROR: Sphinx version is %s. It should be >= %s\n", 311 - $cur_version, $min_version; 312 - $need_sphinx = 1; 313 - return; 314 - } 315 - 316 - return if ($cur_version lt $rec_version); 317 - 318 - # On version check mode, just assume Sphinx has all mandatory deps 319 - exit (0) if ($version_check); 320 - } 321 - 322 - # 323 - # Ancillary subroutines 324 - # 325 - 326 - sub catcheck($) 327 - { 328 - my $res = ""; 329 - $res = qx(cat $_[0]) if (-r $_[0]); 330 - return $res; 331 - } 332 - 333 - sub which($) 334 - { 335 - my $file = shift; 336 - my @path = split ":", $ENV{PATH}; 337 - 338 - foreach my $dir(@path) { 339 - my $name = $dir.'/'.$file; 340 - return $name if (-x $name ); 341 - } 342 - return undef; 343 - } 344 - 345 - # 346 - # Subroutines that check distro-specific hints 347 - # 348 - 349 - sub give_debian_hints() 350 - { 351 - my %map = ( 352 - "python-sphinx" => "python3-sphinx", 353 - "yaml" => "python3-yaml", 354 - "ensurepip" => "python3-venv", 355 - "virtualenv" => "virtualenv", 356 - "dot" => "graphviz", 357 - "convert" => "imagemagick", 358 - "Pod::Usage" => "perl-modules", 359 - "xelatex" => "texlive-xetex", 360 - "rsvg-convert" => "librsvg2-bin", 361 - ); 362 - 363 - if ($pdf) { 364 - check_missing_file(["/usr/share/texlive/texmf-dist/tex/latex/ctex/ctexhook.sty"], 365 - "texlive-lang-chinese", 2); 366 - 367 - check_missing_file(["/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf"], 368 - "fonts-dejavu", 2); 369 - 370 - check_missing_file(["/usr/share/fonts/noto-cjk/NotoSansCJK-Regular.ttc", 371 - "/usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc", 372 - 
"/usr/share/fonts/opentype/noto/NotoSerifCJK-Regular.ttc"], 373 - "fonts-noto-cjk", 2); 374 - } 375 - 376 - check_program("dvipng", 2) if ($pdf); 377 - check_missing(\%map); 378 - 379 - return if (!$need && !$optional); 380 - printf("You should run:\n") if ($verbose_warn_install); 381 - printf("\n\tsudo apt-get install $install\n"); 382 - } 383 - 384 - sub give_redhat_hints() 385 - { 386 - my %map = ( 387 - "python-sphinx" => "python3-sphinx", 388 - "yaml" => "python3-pyyaml", 389 - "virtualenv" => "python3-virtualenv", 390 - "dot" => "graphviz", 391 - "convert" => "ImageMagick", 392 - "Pod::Usage" => "perl-Pod-Usage", 393 - "xelatex" => "texlive-xetex-bin", 394 - "rsvg-convert" => "librsvg2-tools", 395 - ); 396 - 397 - my @fedora26_opt_pkgs = ( 398 - "graphviz-gd", # Fedora 26: needed for PDF support 399 - ); 400 - 401 - my @fedora_tex_pkgs = ( 402 - "texlive-collection-fontsrecommended", 403 - "texlive-collection-latex", 404 - "texlive-xecjk", 405 - "dejavu-sans-fonts", 406 - "dejavu-serif-fonts", 407 - "dejavu-sans-mono-fonts", 408 - ); 409 - 410 - # 411 - # Checks valid for RHEL/CentOS version 7.x. 412 - # 413 - my $old = 0; 414 - my $rel; 415 - my $noto_sans_redhat = "google-noto-sans-cjk-ttc-fonts"; 416 - $rel = $1 if ($system_release =~ /(release|Linux)\s+(\d+)/); 417 - 418 - if (!($system_release =~ /Fedora/)) { 419 - $map{"virtualenv"} = "python-virtualenv"; 420 - 421 - if ($rel && $rel < 8) { 422 - $old = 1; 423 - $pdf = 0; 424 - 425 - printf("Note: texlive packages on RHEL/CENTOS <= 7 are incomplete. 
Can't support PDF output\n"); 426 - printf("If you want to build PDF, please read:\n"); 427 - printf("\thttps://www.systutorials.com/241660/how-to-install-tex-live-on-centos-7-linux/\n"); 428 - } 429 - } else { 430 - if ($rel && $rel < 26) { 431 - $old = 1; 432 - } 433 - if ($rel && $rel >= 38) { 434 - $noto_sans_redhat = "google-noto-sans-cjk-fonts"; 435 - } 436 - } 437 - if (!$rel) { 438 - printf("Couldn't identify release number\n"); 439 - $old = 1; 440 - $pdf = 0; 441 - } 442 - 443 - if ($pdf) { 444 - check_missing_file(["/usr/share/fonts/google-noto-cjk/NotoSansCJK-Regular.ttc", 445 - "/usr/share/fonts/google-noto-sans-cjk-fonts/NotoSansCJK-Regular.ttc"], 446 - $noto_sans_redhat, 2); 447 - } 448 - 449 - check_rpm_missing(\@fedora26_opt_pkgs, 2) if ($pdf && !$old); 450 - check_rpm_missing(\@fedora_tex_pkgs, 2) if ($pdf); 451 - check_missing_tex(2) if ($pdf); 452 - check_missing(\%map); 453 - 454 - return if (!$need && !$optional); 455 - 456 - if (!$old) { 457 - # dnf, for Fedora 18+ 458 - printf("You should run:\n") if ($verbose_warn_install); 459 - printf("\n\tsudo dnf install -y $install\n"); 460 - } else { 461 - # yum, for RHEL (and clones) or Fedora version < 18 462 - printf("You should run:\n") if ($verbose_warn_install); 463 - printf("\n\tsudo yum install -y $install\n"); 464 - } 465 - } 466 - 467 - sub give_opensuse_hints() 468 - { 469 - my %map = ( 470 - "python-sphinx" => "python3-sphinx", 471 - "yaml" => "python3-pyyaml", 472 - "virtualenv" => "python3-virtualenv", 473 - "dot" => "graphviz", 474 - "convert" => "ImageMagick", 475 - "Pod::Usage" => "perl-Pod-Usage", 476 - "xelatex" => "texlive-xetex-bin", 477 - ); 478 - 479 - # On Tumbleweed, this package is also named rsvg-convert 480 - $map{"rsvg-convert"} = "rsvg-view" if (!($system_release =~ /Tumbleweed/)); 481 - 482 - my @suse_tex_pkgs = ( 483 - "texlive-babel-english", 484 - "texlive-caption", 485 - "texlive-colortbl", 486 - "texlive-courier", 487 - "texlive-dvips", 488 - "texlive-helvetic", 489 
- "texlive-makeindex", 490 - "texlive-metafont", 491 - "texlive-metapost", 492 - "texlive-palatino", 493 - "texlive-preview", 494 - "texlive-times", 495 - "texlive-zapfchan", 496 - "texlive-zapfding", 497 - ); 498 - 499 - $map{"latexmk"} = "texlive-latexmk-bin"; 500 - 501 - # FIXME: add support for installing CJK fonts 502 - # 503 - # I tried hard, but was unable to find a way to install 504 - # "Noto Sans CJK SC" on openSUSE 505 - 506 - check_rpm_missing(\@suse_tex_pkgs, 2) if ($pdf); 507 - check_missing_tex(2) if ($pdf); 508 - check_missing(\%map); 509 - 510 - return if (!$need && !$optional); 511 - printf("You should run:\n") if ($verbose_warn_install); 512 - printf("\n\tsudo zypper install --no-recommends $install\n"); 513 - } 514 - 515 - sub give_mageia_hints() 516 - { 517 - my %map = ( 518 - "python-sphinx" => "python3-sphinx", 519 - "yaml" => "python3-yaml", 520 - "virtualenv" => "python3-virtualenv", 521 - "dot" => "graphviz", 522 - "convert" => "ImageMagick", 523 - "Pod::Usage" => "perl-Pod-Usage", 524 - "xelatex" => "texlive", 525 - "rsvg-convert" => "librsvg2", 526 - ); 527 - 528 - my @tex_pkgs = ( 529 - "texlive-fontsextra", 530 - ); 531 - 532 - $map{"latexmk"} = "texlive-collection-basic"; 533 - 534 - my $packager_cmd; 535 - my $noto_sans; 536 - if ($system_release =~ /OpenMandriva/) { 537 - $packager_cmd = "dnf install"; 538 - $noto_sans = "noto-sans-cjk-fonts"; 539 - @tex_pkgs = ( "texlive-collection-fontsextra" ); 540 - } else { 541 - $packager_cmd = "urpmi"; 542 - $noto_sans = "google-noto-sans-cjk-ttc-fonts"; 543 - } 544 - 545 - 546 - if ($pdf) { 547 - check_missing_file(["/usr/share/fonts/google-noto-cjk/NotoSansCJK-Regular.ttc", 548 - "/usr/share/fonts/TTF/NotoSans-Regular.ttf"], 549 - $noto_sans, 2); 550 - } 551 - 552 - check_rpm_missing(\@tex_pkgs, 2) if ($pdf); 553 - check_missing(\%map); 554 - 555 - return if (!$need && !$optional); 556 - printf("You should run:\n") if ($verbose_warn_install); 557 - printf("\n\tsudo $packager_cmd 
$install\n"); 558 - } 559 - 560 - sub give_arch_linux_hints() 561 - { 562 - my %map = ( 563 - "yaml" => "python-yaml", 564 - "virtualenv" => "python-virtualenv", 565 - "dot" => "graphviz", 566 - "convert" => "imagemagick", 567 - "xelatex" => "texlive-xetex", 568 - "latexmk" => "texlive-core", 569 - "rsvg-convert" => "extra/librsvg", 570 - ); 571 - 572 - my @archlinux_tex_pkgs = ( 573 - "texlive-core", 574 - "texlive-latexextra", 575 - "ttf-dejavu", 576 - ); 577 - check_pacman_missing(\@archlinux_tex_pkgs, 2) if ($pdf); 578 - 579 - if ($pdf) { 580 - check_missing_file(["/usr/share/fonts/noto-cjk/NotoSansCJK-Regular.ttc"], 581 - "noto-fonts-cjk", 2); 582 - } 583 - 584 - check_missing(\%map); 585 - 586 - return if (!$need && !$optional); 587 - printf("You should run:\n") if ($verbose_warn_install); 588 - printf("\n\tsudo pacman -S $install\n"); 589 - } 590 - 591 - sub give_gentoo_hints() 592 - { 593 - my %map = ( 594 - "yaml" => "dev-python/pyyaml", 595 - "virtualenv" => "dev-python/virtualenv", 596 - "dot" => "media-gfx/graphviz", 597 - "convert" => "media-gfx/imagemagick", 598 - "xelatex" => "dev-texlive/texlive-xetex media-fonts/dejavu", 599 - "rsvg-convert" => "gnome-base/librsvg", 600 - ); 601 - 602 - check_missing_file(["/usr/share/fonts/dejavu/DejaVuSans.ttf"], 603 - "media-fonts/dejavu", 2) if ($pdf); 604 - 605 - if ($pdf) { 606 - check_missing_file(["/usr/share/fonts/noto-cjk/NotoSansCJKsc-Regular.otf", 607 - "/usr/share/fonts/noto-cjk/NotoSerifCJK-Regular.ttc"], 608 - "media-fonts/noto-cjk", 2); 609 - } 610 - 611 - check_missing(\%map); 612 - 613 - return if (!$need && !$optional); 614 - 615 - printf("You should run:\n") if ($verbose_warn_install); 616 - printf("\n"); 617 - 618 - my $imagemagick = "media-gfx/imagemagick svg png"; 619 - my $cairo = "media-gfx/graphviz cairo pdf"; 620 - my $portage_imagemagick = "/etc/portage/package.use/imagemagick"; 621 - my $portage_cairo = "/etc/portage/package.use/graphviz"; 622 - 623 - if (qx(grep imagemagick 
$portage_imagemagick 2>/dev/null) eq "") { 624 - printf("\tsudo su -c 'echo \"$imagemagick\" > $portage_imagemagick'\n") 625 - } 626 - if (qx(grep graphviz $portage_cairo 2>/dev/null) eq "") { 627 - printf("\tsudo su -c 'echo \"$cairo\" > $portage_cairo'\n"); 628 - } 629 - 630 - printf("\tsudo emerge --ask $install\n"); 631 - 632 - } 633 - 634 - sub check_distros() 635 - { 636 - # Distro-specific hints 637 - if ($system_release =~ /Red Hat Enterprise Linux/) { 638 - give_redhat_hints; 639 - return; 640 - } 641 - if ($system_release =~ /CentOS/) { 642 - give_redhat_hints; 643 - return; 644 - } 645 - if ($system_release =~ /Scientific Linux/) { 646 - give_redhat_hints; 647 - return; 648 - } 649 - if ($system_release =~ /Oracle Linux Server/) { 650 - give_redhat_hints; 651 - return; 652 - } 653 - if ($system_release =~ /Fedora/) { 654 - give_redhat_hints; 655 - return; 656 - } 657 - if ($system_release =~ /Ubuntu/) { 658 - give_debian_hints; 659 - return; 660 - } 661 - if ($system_release =~ /Debian/) { 662 - give_debian_hints; 663 - return; 664 - } 665 - if ($system_release =~ /openSUSE/) { 666 - give_opensuse_hints; 667 - return; 668 - } 669 - if ($system_release =~ /Mageia/) { 670 - give_mageia_hints; 671 - return; 672 - } 673 - if ($system_release =~ /OpenMandriva/) { 674 - give_mageia_hints; 675 - return; 676 - } 677 - if ($system_release =~ /Arch Linux/) { 678 - give_arch_linux_hints; 679 - return; 680 - } 681 - if ($system_release =~ /Gentoo/) { 682 - give_gentoo_hints; 683 - return; 684 - } 685 - 686 - # 687 - # Fall-back to generic hint code for other distros 688 - # That's far from ideal, specially for LaTeX dependencies. 
689 - # 690 - my %map = ( 691 - "sphinx-build" => "sphinx" 692 - ); 693 - check_missing_tex(2) if ($pdf); 694 - check_missing(\%map); 695 - print "I don't know distro $system_release.\n"; 696 - print "So, I can't provide you a hint with the install procedure.\n"; 697 - print "There are likely missing dependencies.\n"; 698 - } 699 - 700 - # 701 - # Common dependencies 702 - # 703 - 704 - sub deactivate_help() 705 - { 706 - printf "\n If you want to exit the virtualenv, you can use:\n"; 707 - printf "\tdeactivate\n"; 708 - } 709 - 710 - sub get_virtenv() 711 - { 712 - my $ver; 713 - my $min_activate = "$ENV{'PWD'}/${virtenv_prefix}${min_version}/bin/activate"; 714 - my @activates = glob "$ENV{'PWD'}/${virtenv_prefix}*/bin/activate"; 715 - 716 - @activates = sort {$b cmp $a} @activates; 717 - 718 - foreach my $f (@activates) { 719 - next if ($f lt $min_activate); 720 - 721 - my $sphinx_cmd = $f; 722 - $sphinx_cmd =~ s/activate/sphinx-build/; 723 - next if (! -f $sphinx_cmd); 724 - 725 - my $ver = get_sphinx_version($sphinx_cmd); 726 - 727 - if (!$ver) { 728 - $f =~ s#/bin/activate##; 729 - print("Warning: virtual environment $f is not working.\nPython version upgrade? 
Remove it with:\n\n\trm -rf $f\n\n"); 730 - } 731 - 732 - if ($need_sphinx && ($ver ge $min_version)) { 733 - return ($f, $ver); 734 - } elsif ($ver gt $cur_version) { 735 - return ($f, $ver); 736 - } 737 - } 738 - return ("", ""); 739 - } 740 - 741 - sub recommend_sphinx_upgrade() 742 - { 743 - my $venv_ver; 744 - 745 - # Avoid running sphinx-builds from venv if $cur_version is good 746 - if ($cur_version && ($cur_version ge $rec_version)) { 747 - $latest_avail_ver = $cur_version; 748 - return; 749 - } 750 - 751 - # Get the highest version from sphinx_*/bin/sphinx-build and the 752 - # corresponding command to activate the venv/virtenv 753 - ($activate_cmd, $venv_ver) = get_virtenv(); 754 - 755 - # Store the highest version from Sphinx existing virtualenvs 756 - if (($activate_cmd ne "") && ($venv_ver gt $cur_version)) { 757 - $latest_avail_ver = $venv_ver; 758 - } else { 759 - $latest_avail_ver = $cur_version if ($cur_version); 760 - } 761 - 762 - # As we don't know package version of Sphinx, and there's no 763 - # virtual environments, don't check if upgrades are needed 764 - if (!$virtualenv) { 765 - return if (!$latest_avail_ver); 766 - } 767 - 768 - # Either there are already a virtual env or a new one should be created 769 - $need_pip = 1; 770 - 771 - return if (!$latest_avail_ver); 772 - 773 - # Return if the reason is due to an upgrade or not 774 - if ($latest_avail_ver lt $rec_version) { 775 - $rec_sphinx_upgrade = 1; 776 - } 777 - 778 - return $latest_avail_ver; 779 - } 780 - 781 - # 782 - # The logic here is complex, as it have to deal with different versions: 783 - # - minimal supported version; 784 - # - minimal PDF version; 785 - # - recommended version. 786 - # It also needs to work fine with both distro's package and venv/virtualenv 787 - sub recommend_sphinx_version($) 788 - { 789 - my $virtualenv_cmd = shift; 790 - 791 - # Version is OK. Nothing to do. 
792 - if ($cur_version && ($cur_version ge $rec_version)) { 793 - return; 794 - }; 795 - 796 - if (!$need_sphinx) { 797 - # sphinx-build is present and its version is >= $min_version 798 - 799 - #only recommend enabling a newer virtenv version if makes sense. 800 - if ($latest_avail_ver gt $cur_version) { 801 - printf "\nYou may also use the newer Sphinx version $latest_avail_ver with:\n"; 802 - printf "\tdeactivate\n" if ($ENV{'PWD'} =~ /${virtenv_prefix}/); 803 - printf "\t. $activate_cmd\n"; 804 - deactivate_help(); 805 - 806 - return; 807 - } 808 - return if ($latest_avail_ver ge $rec_version); 809 - } 810 - 811 - if (!$virtualenv) { 812 - # No sphinx either via package or via virtenv. As we can't 813 - # Compare the versions here, just return, recommending the 814 - # user to install it from the package distro. 815 - return if (!$latest_avail_ver); 816 - 817 - # User doesn't want a virtenv recommendation, but he already 818 - # installed one via virtenv with a newer version. 819 - # So, print commands to enable it 820 - if ($latest_avail_ver gt $cur_version) { 821 - printf "\nYou may also use the Sphinx virtualenv version $latest_avail_ver with:\n"; 822 - printf "\tdeactivate\n" if ($ENV{'PWD'} =~ /${virtenv_prefix}/); 823 - printf "\t. $activate_cmd\n"; 824 - deactivate_help(); 825 - 826 - return; 827 - } 828 - print "\n"; 829 - } else { 830 - $need++ if ($need_sphinx); 831 - } 832 - 833 - # Suggest newer versions if current ones are too old 834 - if ($latest_avail_ver && $latest_avail_ver ge $min_version) { 835 - # If there's a good enough version, ask the user to enable it 836 - if ($latest_avail_ver ge $rec_version) { 837 - printf "\nNeed to activate Sphinx (version $latest_avail_ver) on virtualenv with:\n"; 838 - printf "\t. $activate_cmd\n"; 839 - deactivate_help(); 840 - 841 - return; 842 - } 843 - 844 - # Version is above the minimal required one, but may be 845 - # below the recommended one. 
So, print warnings/notes 846 - 847 - if ($latest_avail_ver lt $rec_version) { 848 - print "Warning: It is recommended at least Sphinx version $rec_version.\n"; 849 - } 850 - } 851 - 852 - # At this point, either it needs Sphinx or upgrade is recommended, 853 - # both via pip 854 - 855 - if ($rec_sphinx_upgrade) { 856 - if (!$virtualenv) { 857 - print "Instead of install/upgrade Python Sphinx pkg, you could use pip/pypi with:\n\n"; 858 - } else { 859 - print "To upgrade Sphinx, use:\n\n"; 860 - } 861 - } else { 862 - print "\nSphinx needs to be installed either:\n1) via pip/pypi with:\n\n"; 863 - } 864 - 865 - $python_cmd = find_python_no_venv(); 866 - 867 - printf "\t$virtualenv_cmd $virtenv_dir\n"; 868 - 869 - printf "\t. $virtenv_dir/bin/activate\n"; 870 - printf "\tpip install -r $requirement_file\n"; 871 - deactivate_help(); 872 - 873 - printf "\n2) As a package with:\n"; 874 - 875 - my $old_need = $need; 876 - my $old_optional = $optional; 877 - %missing = (); 878 - $pdf = 0; 879 - $optional = 0; 880 - $install = ""; 881 - $verbose_warn_install = 0; 882 - 883 - add_package("python-sphinx", 0); 884 - 885 - check_distros(); 886 - 887 - $need = $old_need; 888 - $optional = $old_optional; 889 - 890 - printf "\n Please note that Sphinx >= 3.0 will currently produce false-positive\n"; 891 - printf " warning when the same name is used for more than one type (functions,\n"; 892 - printf " structs, enums,...). This is known Sphinx bug. 
For more details, see:\n"; 893 - printf "\thttps://github.com/sphinx-doc/sphinx/pull/8313\n"; 894 - } 895 - 896 - sub check_needs() 897 - { 898 - # Check if Sphinx is already accessible from current environment 899 - check_sphinx(); 900 - 901 - if ($system_release) { 902 - print "Detected OS: $system_release.\n"; 903 - } else { 904 - print "Unknown OS\n"; 905 - } 906 - printf "Sphinx version: %s\n\n", $cur_version if ($cur_version); 907 - 908 - # Check python command line, trying first python3 909 - $python_cmd = findprog("python3"); 910 - $python_cmd = check_program("python", 0) if (!$python_cmd); 911 - 912 - # Check the type of virtual env, depending on Python version 913 - if ($python_cmd) { 914 - if ($virtualenv) { 915 - my $tmp = qx($python_cmd --version 2>&1); 916 - if ($tmp =~ m/(\d+\.)(\d+\.)/) { 917 - if ($1 < 3) { 918 - # Fail if it finds python2 (or worse) 919 - die "Python 3 is required to build the kernel docs\n"; 920 - } 921 - if ($1 == 3 && $2 < 3) { 922 - # Need Python 3.3 or upper for venv 923 - $need_virtualenv = 1; 924 - } 925 - } else { 926 - die "Warning: couldn't identify $python_cmd version!"; 927 - } 928 - } else { 929 - add_package("python-sphinx", 0); 930 - } 931 - } 932 - 933 - my $venv_ver = recommend_sphinx_upgrade(); 934 - 935 - my $virtualenv_cmd; 936 - 937 - if ($need_pip) { 938 - # Set virtualenv command line, if python < 3.3 939 - if ($need_virtualenv) { 940 - $virtualenv_cmd = findprog("virtualenv-3"); 941 - $virtualenv_cmd = findprog("virtualenv-3.5") if (!$virtualenv_cmd); 942 - if (!$virtualenv_cmd) { 943 - check_program("virtualenv", 0); 944 - $virtualenv_cmd = "virtualenv"; 945 - } 946 - } else { 947 - $virtualenv_cmd = "$python_cmd -m venv"; 948 - check_python_module("ensurepip", 0); 949 - } 950 - } 951 - 952 - # Check for needed programs/tools 953 - check_perl_module("Pod::Usage", 0); 954 - check_python_module("yaml", 0); 955 - check_program("make", 0); 956 - check_program("gcc", 0); 957 - check_program("dot", 1); 958 - 
check_program("convert", 1); 959 - 960 - # Extra PDF files - should use 2 for is_optional 961 - check_program("xelatex", 2) if ($pdf); 962 - check_program("rsvg-convert", 2) if ($pdf); 963 - check_program("latexmk", 2) if ($pdf); 964 - 965 - # Do distro-specific checks and output distro-install commands 966 - check_distros(); 967 - 968 - if (!$python_cmd) { 969 - if ($need == 1) { 970 - die "Can't build as $need mandatory dependency is missing"; 971 - } elsif ($need) { 972 - die "Can't build as $need mandatory dependencies are missing"; 973 - } 974 - } 975 - 976 - # Check if sphinx-build is called sphinx-build-3 977 - if ($need_symlink) { 978 - printf "\tsudo ln -sf %s /usr/bin/sphinx-build\n\n", 979 - which("sphinx-build-3"); 980 - } 981 - 982 - recommend_sphinx_version($virtualenv_cmd); 983 - printf "\n"; 984 - 985 - print "All optional dependencies are met.\n" if (!$optional); 986 - 987 - if ($need == 1) { 988 - die "Can't build as $need mandatory dependency is missing"; 989 - } elsif ($need) { 990 - die "Can't build as $need mandatory dependencies are missing"; 991 - } 992 - 993 - print "Needed package dependencies are met.\n"; 994 - } 995 - 996 - # 997 - # Main 998 - # 999 - 1000 - while (@ARGV) { 1001 - my $arg = shift(@ARGV); 1002 - 1003 - if ($arg eq "--no-virtualenv") { 1004 - $virtualenv = 0; 1005 - } elsif ($arg eq "--no-pdf"){ 1006 - $pdf = 0; 1007 - } elsif ($arg eq "--version-check"){ 1008 - $version_check = 1; 1009 - } else { 1010 - print "Usage:\n\t$0 <--no-virtualenv> <--no-pdf> <--version-check>\n\n"; 1011 - print "Where:\n"; 1012 - print "\t--no-virtualenv\t- Recommend installing Sphinx instead of using a virtualenv\n"; 1013 - print "\t--version-check\t- if version is compatible, don't check for missing dependencies\n"; 1014 - print "\t--no-pdf\t- don't check for dependencies required to build PDF docs\n\n"; 1015 - exit -1; 1016 - } 1017 - } 1018 - 1019 - # 1020 - # Determine the system type. 
There's no standard unique way that would 1021 - # work with all distros with a minimal package install. So, several 1022 - # methods are used here. 1023 - # 1024 - # By default, it will use lsb_release function. If not available, it will 1025 - # fail back to reading the known different places where the distro name 1026 - # is stored 1027 - # 1028 - 1029 - $system_release = qx(lsb_release -d) if which("lsb_release"); 1030 - $system_release =~ s/Description:\s*// if ($system_release); 1031 - $system_release = catcheck("/etc/system-release") if !$system_release; 1032 - $system_release = catcheck("/etc/redhat-release") if !$system_release; 1033 - $system_release = catcheck("/etc/lsb-release") if !$system_release; 1034 - $system_release = catcheck("/etc/gentoo-release") if !$system_release; 1035 - 1036 - # This seems more common than LSB these days 1037 - if (!$system_release) { 1038 - my %os_var; 1039 - if (open IN, "cat /etc/os-release|") { 1040 - while (<IN>) { 1041 - if (m/^([\w\d\_]+)=\"?([^\"]*)\"?\n/) { 1042 - $os_var{$1}=$2; 1043 - } 1044 - } 1045 - $system_release = $os_var{"NAME"}; 1046 - if (defined($os_var{"VERSION_ID"})) { 1047 - $system_release .= " " . $os_var{"VERSION_ID"} if (defined($os_var{"VERSION_ID"})); 1048 - } else { 1049 - $system_release .= " " . $os_var{"VERSION"}; 1050 - } 1051 - } 1052 - } 1053 - $system_release = catcheck("/etc/issue") if !$system_release; 1054 - $system_release =~ s/\s+$//; 1055 - 1056 - check_needs; 5 + # pylint: disable=C0103,C0114,C0115,C0116,C0301,C0302 6 + # pylint: disable=R0902,R0904,R0911,R0912,R0914,R0915,R1705,R1710,E1121 7 + 8 + # Note: this script requires at least Python 3.6 to run. 9 + # Don't add changes not compatible with it, it is meant to report 10 + # incompatible python versions. 11 + 12 + """ 13 + Dependency checker for Sphinx documentation Kernel build. 
14 +
15 + This module provides tools to check for all required dependencies needed to
16 + build documentation using Sphinx, including system packages, Python modules
17 + and LaTeX packages for PDF generation.
18 +
19 + It detects packages for a subset of Linux distributions used by Kernel
20 + maintainers, showing hints and missing dependencies.
21 +
22 + The main class SphinxDependencyChecker handles the dependency checking logic
23 + and provides recommendations for installing missing packages. It supports both
24 + system package installations and Python virtual environments. By default,
25 + a system package install is recommended.
26 + """
27 +
28 + import argparse
29 + import os
30 + import re
31 + import subprocess
32 + import sys
33 + from glob import glob
34 +
35 +
36 + def parse_version(version):
37 +     """Convert a major.minor.patch version into a tuple"""
38 +     return tuple(int(x) for x in version.split("."))
39 +
40 +
41 + def ver_str(version):
42 +     """Return a version tuple as major.minor.patch"""
43 +
44 +     return ".".join([str(x) for x in version])
45 +
46 +
47 + RECOMMENDED_VERSION = parse_version("3.4.3")
48 + MIN_PYTHON_VERSION = parse_version("3.7")
49 +
50 +
51 + class DepManager:
52 +     """
53 +     Manage package dependencies. There are three types of dependencies:
54 +
55 +     - System: dependencies required for docs build;
56 +     - Python: python dependencies for a native distro Sphinx install;
57 +     - PDF: dependencies needed by PDF builds.
58 +
59 +     Each dependency can be mandatory or optional. Not installing an optional
60 +     dependency won't break the build, but will cause degradation in the
61 +     docs output.
62 +     """
63 +
64 +     # Internal types of dependencies. Don't use them outside the DepManager class.
65 +     _SYS_TYPE = 0
66 +     _PHY_TYPE = 1
67 +     _PDF_TYPE = 2
68 +
69 +     # Dependencies visible outside the class.
70 +     # The keys are tuples with: (type, is_mandatory flag).
71 +     #
72 +     # Currently we're not using all optional dep types. Yet, we'll keep all
73 +     # possible combinations here. They're not many, and that makes it easier
74 +     # if they're later needed, and for the name() method below.
75 +
76 +     SYSTEM_MANDATORY = (_SYS_TYPE, True)
77 +     PYTHON_MANDATORY = (_PHY_TYPE, True)
78 +     PDF_MANDATORY = (_PDF_TYPE, True)
79 +
80 +     SYSTEM_OPTIONAL = (_SYS_TYPE, False)
81 +     PYTHON_OPTIONAL = (_PHY_TYPE, False)
82 +     PDF_OPTIONAL = (_PDF_TYPE, False)
83 +
84 +     def __init__(self, pdf):
85 +         """
86 +         Initialize internal vars:
87 +
88 +         - missing: missing dependencies list, containing a distro-independent
89 +           name for a missing dependency and its type.
90 +         - missing_pkg: ancillary dict containing missing dependencies in
91 +           the distro namespace, organized by type.
92 +         - need: total number of needed dependencies. Never cleaned.
93 +         - optional: total number of optional dependencies. Never cleaned.
94 +         - pdf: Is PDF support enabled?
95 +         """
96 +         self.missing = {}
97 +         self.missing_pkg = {}
98 +         self.need = 0
99 +         self.optional = 0
100 +         self.pdf = pdf
101 +
102 +     @staticmethod
103 +     def name(dtype):
104 +         """
105 +         Ancillary routine to output a warn/error message reporting
106 +         missing dependencies.
107 +         """
108 +         if dtype[0] == DepManager._SYS_TYPE:
109 +             msg = "build"
110 +         elif dtype[0] == DepManager._PHY_TYPE:
111 +             msg = "Python"
112 +         else:
113 +             msg = "PDF"
114 +
115 +         if dtype[1]:
116 +             return f"ERROR: {msg} mandatory deps missing"
117 +         else:
118 +             return f"Warning: {msg} optional deps missing"
119 +
120 +     @staticmethod
121 +     def is_optional(dtype):
122 +         """Ancillary routine to report if a dependency is optional"""
123 +         return not dtype[1]
124 +
125 +     @staticmethod
126 +     def is_pdf(dtype):
127 +         """Ancillary routine to report if a dependency is for PDF generation"""
128 +         if dtype[0] == DepManager._PDF_TYPE:
129 +             return True
130 +
131 +         return False
132 +
133 +     def add_package(self, package, dtype):
134 +         """
135 +         Add a package to the self.missing dictionary.
136 +         Doesn't update missing_pkg.
137 +         """
138 +         is_optional = DepManager.is_optional(dtype)
139 +         self.missing[package] = dtype
140 +         if is_optional:
141 +             self.optional += 1
142 +         else:
143 +             self.need += 1
144 +
145 +     def del_package(self, package):
146 +         """
147 +         Remove a package from the self.missing dictionary.
148 +         Doesn't update missing_pkg.
149 +         """
150 +         if package in self.missing:
151 +             del self.missing[package]
152 +
153 +     def clear_deps(self):
154 +         """
155 +         Clear dependencies without changing needed/optional.
156 +
157 +         This is an awkward way to have a separate section to recommend
158 +         a package after the main system dependencies.
159 +
160 +         TODO: rework the logic to prevent needing it.
161 +         """
162 +
163 +         self.missing = {}
164 +         self.missing_pkg = {}
165 +
166 +     def check_missing(self, progs):
167 +         """
168 +         Update self.missing_pkg, using the progs dict to convert from the
169 +         agnostic package name to the distro-specific one.
170 +
171 +         Returns a string with the packages to be installed, sorted and
172 +         with any duplicates removed.
173 +         """
174 +
175 +         self.missing_pkg = {}
176 +
177 +         for prog, dtype in sorted(self.missing.items()):
178 +             # At least on some LTS distros like CentOS 7, texlive doesn't
179 +             # provide all packages we need. When such distros are
180 +             # detected, we have to disable PDF output.
181 +             #
182 +             # So, we need to ignore the packages that distros would
183 +             # need for LaTeX to work
184 +             if DepManager.is_pdf(dtype) and not self.pdf:
185 +                 self.optional -= 1
186 +                 continue
187 +
188 +             if dtype not in self.missing_pkg:
189 +                 self.missing_pkg[dtype] = []
190 +
191 +             self.missing_pkg[dtype].append(progs.get(prog, prog))
192 +
193 +         install = []
194 +         for dtype, pkgs in self.missing_pkg.items():
195 +             install += pkgs
196 +
197 +         return " ".join(sorted(set(install)))
198 +
199 +     def warn_install(self):
200 +         """
201 +         Emit warnings/errors related to missing packages.
202 +         """
203 +
204 +         output_msg = ""
205 +
206 +         for dtype in sorted(self.missing_pkg.keys()):
207 +             progs = " ".join(sorted(set(self.missing_pkg[dtype])))
208 +
209 +             try:
210 +                 name = DepManager.name(dtype)
211 +                 output_msg += f'{name}:\t{progs}\n'
212 +             except KeyError:
213 +                 raise KeyError(f"ERROR!!!: invalid dtype for {progs}: {dtype}")
214 +
215 +         if output_msg:
216 +             print(f"\n{output_msg}")
217 +
218 + class AncillaryMethods:
219 +     """
220 +     Ancillary methods that check for missing dependencies of different
221 +     types of artifacts, like binaries, python modules, rpm deps, etc.
222 +     """
223 +
224 +     @staticmethod
225 +     def which(prog):
226 +         """
227 +         Our own implementation of which(). We could instead use
228 +         shutil.which(), but this function is simple enough.
229 +         Probably faster to use this implementation than to import shutil.
230 +         """
231 +         for path in os.environ.get("PATH", "").split(":"):
232 +             full_path = os.path.join(path, prog)
233 +             if os.access(full_path, os.X_OK):
234 +                 return full_path
235 +
236 +         return None
237 +
238 +     @staticmethod
239 +     def get_python_version(cmd):
240 +         """
241 +         Get the Python version from a Python binary. As we need to detect
242 +         whether newer python binaries are out there, we can't rely on sys.version here.
243 +         """
244 +
245 +         result = SphinxDependencyChecker.run([cmd, "--version"],
246 +                                              capture_output=True, text=True)
247 +         version = result.stdout.strip()
248 +
249 +         match = re.search(r"(\d+\.\d+\.\d+)", version)
250 +         if match:
251 +             return parse_version(match.group(1))
252 +
253 +         print(f"Can't parse version {version}")
254 +         return (0, 0, 0)
255 +
256 +     @staticmethod
257 +     def find_python():
258 +         """
259 +         Detect whether there is any python 3.xy version out there newer than
260 +         the current one.
261 +
262 +         Note: this routine is limited to up to 2 digits for python3. We
263 +         may need to update it one day, hopefully in a distant future.
264 +         """
265 +         patterns = [
266 +             "python3.[0-9]",
267 +             "python3.[0-9][0-9]",
268 +         ]
269 +
270 +         # Search for a python binary newer than MIN_PYTHON_VERSION
271 +         for path in os.getenv("PATH", "").split(":"):
272 +             for pattern in patterns:
273 +                 for cmd in glob(os.path.join(path, pattern)):
274 +                     if os.path.isfile(cmd) and os.access(cmd, os.X_OK):
275 +                         version = SphinxDependencyChecker.get_python_version(cmd)
276 +                         if version >= MIN_PYTHON_VERSION:
277 +                             return cmd
278 +
279 +     @staticmethod
280 +     def check_python():
281 +         """
282 +         Check if the current python binary satisfies our minimal requirement
283 +         for Sphinx build. If not, re-run with a newer version if found.
284 +         """
285 +         cur_ver = sys.version_info[:3]
286 +         if cur_ver >= MIN_PYTHON_VERSION:
287 +             ver = ver_str(cur_ver)
288 +             print(f"Python version: {ver}")
289 +
290 +             # This could be useful for debugging purposes
291 +             if SphinxDependencyChecker.which("docutils"):
292 +                 result = SphinxDependencyChecker.run(["docutils", "--version"],
293 +                                                      capture_output=True, text=True)
294 +                 ver = result.stdout.strip()
295 +                 match = re.search(r"(\d+\.\d+\.\d+)", ver)
296 +                 if match:
297 +                     ver = match.group(1)
298 +
299 +                 print(f"Docutils version: {ver}")
300 +
301 +             return
302 +
303 +         python_ver = ver_str(cur_ver)
304 +
305 +         new_python_cmd = SphinxDependencyChecker.find_python()
306 +         if not new_python_cmd:
307 +             print(f"ERROR: Python version {python_ver} is not supported anymore\n")
308 +             print("       Can't find a new version. This script may fail")
309 +             return
310 +
311 +         # Restart script using the newer version
312 +         script_path = os.path.abspath(sys.argv[0])
313 +         args = [new_python_cmd, script_path] + sys.argv[1:]
314 +
315 +         print(f"Python {python_ver} not supported. Changing to {new_python_cmd}")
316 +
317 +         try:
318 +             os.execv(new_python_cmd, args)
319 +         except OSError as e:
320 +             sys.exit(f"Failed to restart with {new_python_cmd}: {e}")
321 +
322 +     @staticmethod
323 +     def run(*args, **kwargs):
324 +         """
325 +         Execute a command, hiding its output by default.
326 +         Preserve compatibility with older Python versions.
327 +         """
328 +
329 +         capture_output = kwargs.pop('capture_output', False)
330 +
331 +         if capture_output:
332 +             if 'stdout' not in kwargs:
333 +                 kwargs['stdout'] = subprocess.PIPE
334 +             if 'stderr' not in kwargs:
335 +                 kwargs['stderr'] = subprocess.PIPE
336 +         else:
337 +             if 'stdout' not in kwargs:
338 +                 kwargs['stdout'] = subprocess.DEVNULL
339 +             if 'stderr' not in kwargs:
340 +                 kwargs['stderr'] = subprocess.DEVNULL
341 +
342 +         # Don't break with older Python versions
343 +         if 'text' in kwargs and sys.version_info < (3, 7):
344 +             kwargs['universal_newlines'] = kwargs.pop('text')
345 +
346 +         return subprocess.run(*args, **kwargs)
347 +
348 + class MissingCheckers(AncillaryMethods):
349 +     """
350 +     Contains some ancillary checkers for different types of binaries and
351 +     package managers.
352 +     """
353 +
354 +     def __init__(self, args, texlive):
355 +         """
356 +         Initialize its internal variables
357 +         """
358 +         self.pdf = args.pdf
359 +         self.virtualenv = args.virtualenv
360 +         self.version_check = args.version_check
361 +         self.texlive = texlive
362 +
363 +         self.min_version = (0, 0, 0)
364 +         self.cur_version = (0, 0, 0)
365 +
366 +         self.deps = DepManager(self.pdf)
367 +
368 +         self.need_symlink = 0
369 +         self.need_sphinx = 0
370 +
371 +         self.verbose_warn_install = 1
372 +
373 +         self.virtenv_dir = ""
374 +         self.install = ""
375 +         self.python_cmd = ""
376 +
377 +         self.virtenv_prefix = ["sphinx_", "Sphinx_"]
378 +
379 +     def check_missing_file(self, files, package, dtype):
380 +         """
381 +         Does the file exist? If not, add it to missing dependencies.
382 +         """
383 +         for f in files:
384 +             if os.path.exists(f):
385 +                 return
386 +         self.deps.add_package(package, dtype)
387 +
388 +     def check_program(self, prog, dtype):
389 +         """
390 +         Does the program exist, and is it in the PATH?
391 +         If not, add it to missing dependencies.
392 +         """
393 +         found = self.which(prog)
394 +         if found:
395 +             return found
396 +
397 +         self.deps.add_package(prog, dtype)
398 +
399 +         return None
400 +
401 +     def check_perl_module(self, prog, dtype):
402 +         """
403 +         Is the given Perl module available?
404 +         If not, add it to missing dependencies.
405 +
406 +         Right now, we still need Perl for the docs build, as it is required
407 +         by some tools called at docs or kernel build time, like:
408 +
409 +             scripts/documentation-file-ref-check
410 +
411 +         Also, checkpatch is written in Perl.
412 +         """
413 +
414 +         # While testing with the lxc download template, one of the
415 +         # distros (Oracle) didn't have perl - nor even an option to install
416 +         # it before installing the oraclelinux-release-el9 package.
417 +         #
418 +         # Check for it before emitting an error. If perl is not there,
419 +         # add it as a mandatory package, as some parts of the doc builder
420 +         # need it.
421 +         if not self.which("perl"):
422 +             self.deps.add_package("perl", DepManager.SYSTEM_MANDATORY)
423 +             self.deps.add_package(prog, dtype)
424 +             return
425 +
426 +         try:
427 +             self.run(["perl", f"-M{prog}", "-e", "1"], check=True)
428 +         except subprocess.CalledProcessError:
429 +             self.deps.add_package(prog, dtype)
430 +
431 +     def check_python_module(self, module, is_optional=False):
432 +         """
433 +         Does a Python module exist outside a venv? If not, add it to missing
434 +         dependencies.
435 + """ 436 + if is_optional: 437 + dtype = DepManager.PYTHON_OPTIONAL 438 + else: 439 + dtype = DepManager.PYTHON_MANDATORY 440 + 441 + try: 442 + self.run([self.python_cmd, "-c", f"import {module}"], check=True) 443 + except subprocess.CalledProcessError: 444 + self.deps.add_package(module, dtype) 445 + 446 + def check_rpm_missing(self, pkgs, dtype): 447 + """ 448 + Does a rpm package exists? If not, add it to missing dependencies. 449 + """ 450 + for prog in pkgs: 451 + try: 452 + self.run(["rpm", "-q", prog], check=True) 453 + except subprocess.CalledProcessError: 454 + self.deps.add_package(prog, dtype) 455 + 456 + def check_pacman_missing(self, pkgs, dtype): 457 + """ 458 + Does a pacman package exists? If not, add it to missing dependencies. 459 + """ 460 + for prog in pkgs: 461 + try: 462 + self.run(["pacman", "-Q", prog], check=True) 463 + except subprocess.CalledProcessError: 464 + self.deps.add_package(prog, dtype) 465 + 466 + def check_missing_tex(self, is_optional=False): 467 + """ 468 + Does a LaTeX package exists? If not, add it to missing dependencies. 469 + """ 470 + if is_optional: 471 + dtype = DepManager.PDF_OPTIONAL 472 + else: 473 + dtype = DepManager.PDF_MANDATORY 474 + 475 + kpsewhich = self.which("kpsewhich") 476 + for prog, package in self.texlive.items(): 477 + 478 + # If kpsewhich is not there, just add it to deps 479 + if not kpsewhich: 480 + self.deps.add_package(package, dtype) 481 + continue 482 + 483 + # Check if the package is needed 484 + try: 485 + result = self.run( 486 + [kpsewhich, prog], stdout=subprocess.PIPE, text=True, check=True 487 + ) 488 + 489 + # Didn't find. Add it 490 + if not result.stdout.strip(): 491 + self.deps.add_package(package, dtype) 492 + 493 + except subprocess.CalledProcessError: 494 + # kpsewhich returned an error. Add it, just in case 495 + self.deps.add_package(package, dtype) 496 + 497 + def get_sphinx_fname(self): 498 + """ 499 + Gets the binary filename for sphinx-build. 
        """
        if "SPHINXBUILD" in os.environ:
            return os.environ["SPHINXBUILD"]

        fname = "sphinx-build"
        if self.which(fname):
            return fname

        fname = "sphinx-build-3"
        if self.which(fname):
            self.need_symlink = 1
            return fname

        return ""

    def get_sphinx_version(self, cmd):
        """
        Gets the sphinx-build version.
        """
        try:
            result = self.run([cmd, "--version"],
                              stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT,
                              text=True, check=True)
        except (subprocess.CalledProcessError, FileNotFoundError):
            return None

        for line in result.stdout.split("\n"):
            match = re.match(r"^sphinx-build\s+([\d\.]+)(?:\+(?:/[\da-f]+)|b\d+)?\s*$", line)
            if match:
                return parse_version(match.group(1))

            match = re.match(r"^Sphinx.*\s+([\d\.]+)\s*$", line)
            if match:
                return parse_version(match.group(1))

    def check_sphinx(self, conf):
        """
        Checks Sphinx minimal requirements.
        """
        try:
            with open(conf, "r", encoding="utf-8") as f:
                for line in f:
                    match = re.match(r"^\s*needs_sphinx\s*=\s*[\'\"]([\d\.]+)[\'\"]", line)
                    if match:
                        self.min_version = parse_version(match.group(1))
                        break
        except IOError:
            sys.exit(f"Can't open {conf}")

        if not self.min_version:
            sys.exit(f"Can't get needs_sphinx version from {conf}")

        self.virtenv_dir = self.virtenv_prefix[0] + "latest"

        sphinx = self.get_sphinx_fname()
        if not sphinx:
            self.need_sphinx = 1
            return

        self.cur_version = self.get_sphinx_version(sphinx)
        if not self.cur_version:
            sys.exit(f"{sphinx} didn't return its version")

        if self.cur_version < self.min_version:
            curver = ver_str(self.cur_version)
            minver = ver_str(self.min_version)

            print(f"ERROR: Sphinx version is {curver}. It should be >= {minver}")
            self.need_sphinx = 1
            return

        # In version check mode, just assume Sphinx has all mandatory deps
        if self.version_check and self.cur_version >= RECOMMENDED_VERSION:
            sys.exit(0)

    def catcheck(self, filename):
        """
        Reads a file if it exists, returning its contents as a string.
        If not found, returns an empty string.
        """
        if os.path.exists(filename):
            with open(filename, "r", encoding="utf-8") as f:
                return f.read().strip()
        return ""

    def get_system_release(self):
        """
        Determine the system type. There's no unique way that would work
        with all distros with a minimal package install. So, several
        methods are used here.

        By default, it will use the lsb_release tool. If not available, it
        will fall back to reading the known different places where the
        distro name is stored.

        Several modern distros now have /etc/os-release, which usually has
        decent coverage.
        """

        system_release = ""

        if self.which("lsb_release"):
            result = self.run(["lsb_release", "-d"], capture_output=True, text=True)
            system_release = result.stdout.replace("Description:", "").strip()

        release_files = [
            "/etc/system-release",
            "/etc/redhat-release",
            "/etc/lsb-release",
            "/etc/gentoo-release",
        ]

        if not system_release:
            for f in release_files:
                system_release = self.catcheck(f)
                if system_release:
                    break

        # This seems more common than LSB these days
        if not system_release:
            os_var = {}
            try:
                with open("/etc/os-release", "r", encoding="utf-8") as f:
                    for line in f:
                        match = re.match(r"^([\w\d\_]+)=\"?([^\"]*)\"?\n", line)
                        if match:
                            os_var[match.group(1)] = match.group(2)

                system_release = os_var.get("NAME", "")
                if "VERSION_ID" in os_var:
                    system_release += " " + os_var["VERSION_ID"]
                elif "VERSION" in os_var:
                    system_release += " " + os_var["VERSION"]
            except IOError:
                pass

        if not system_release:
            system_release = self.catcheck("/etc/issue")

        system_release = system_release.strip()

        return system_release


class SphinxDependencyChecker(MissingCheckers):
    """
    Main class for checking Sphinx documentation build dependencies.

    - Check for missing system packages;
    - Check for missing Python modules;
    - Check for missing LaTeX packages needed by PDF generation;
    - Propose a Sphinx install via a Python virtual environment;
    - Propose a Sphinx install via distro-specific packages.
    """
    def __init__(self, args):
        """Initialize checker variables"""

        # List of required texlive packages on Fedora and OpenSuse
        texlive = {
            "amsfonts.sty": "texlive-amsfonts",
            "amsmath.sty": "texlive-amsmath",
            "amssymb.sty": "texlive-amsfonts",
            "amsthm.sty": "texlive-amscls",
            "anyfontsize.sty": "texlive-anyfontsize",
            "atbegshi.sty": "texlive-oberdiek",
            "bm.sty": "texlive-tools",
            "capt-of.sty": "texlive-capt-of",
            "cmap.sty": "texlive-cmap",
            "ctexhook.sty": "texlive-ctex",
            "ecrm1000.tfm": "texlive-ec",
            "eqparbox.sty": "texlive-eqparbox",
            "eu1enc.def": "texlive-euenc",
            "fancybox.sty": "texlive-fancybox",
            "fancyvrb.sty": "texlive-fancyvrb",
            "float.sty": "texlive-float",
            "fncychap.sty": "texlive-fncychap",
            "footnote.sty": "texlive-mdwtools",
            "framed.sty": "texlive-framed",
            "luatex85.sty": "texlive-luatex85",
            "multirow.sty": "texlive-multirow",
            "needspace.sty": "texlive-needspace",
            "palatino.sty": "texlive-psnfss",
            "parskip.sty": "texlive-parskip",
            "polyglossia.sty": "texlive-polyglossia",
            "tabulary.sty": "texlive-tabulary",
            "threeparttable.sty": "texlive-threeparttable",
            "titlesec.sty": "texlive-titlesec",
            "ucs.sty": "texlive-ucs",
            "upquote.sty": "texlive-upquote",
            "wrapfig.sty": "texlive-wrapfig",
        }

        super().__init__(args, texlive)

        self.need_pip = False
        self.rec_sphinx_upgrade = 0

        self.system_release = self.get_system_release()
        self.activate_cmd = ""

        # Some distros may not have a Sphinx shipped package compatible with
        # our minimal requirements
        self.package_supported = True

        # Recommend a new python version
        self.recommend_python = None

        # Certain hints are meant to be shown only once
        self.distro_msg = None

        self.latest_avail_ver = (0, 0, 0)
        self.venv_ver = (0, 0, 0)

        prefix = os.environ.get("srctree", ".") + "/"

        self.conf = prefix + "Documentation/conf.py"
        self.requirement_file = prefix + "Documentation/sphinx/requirements.txt"

    def get_install_progs(self, progs, cmd, extra=None):
        """
        Check for missing dependencies using the provided program mapping.

        The actual distro-specific programs are mapped via the progs
        argument.
        """
        install = self.deps.check_missing(progs)

        if self.verbose_warn_install:
            self.deps.warn_install()

        if not install:
            return

        if cmd:
            if self.verbose_warn_install:
                msg = "You should run:"
            else:
                msg = ""

            if extra:
                msg += "\n\t" + extra.replace("\n", "\n\t")

            return msg + "\n\tsudo " + cmd + " " + install

        return None

    #
    # Distro-specific hints methods
    #

    def give_debian_hints(self):
        """
        Provide package installation hints for Debian-based distros.
        """
        progs = {
            "Pod::Usage": "perl-modules",
            "convert": "imagemagick",
            "dot": "graphviz",
            "ensurepip": "python3-venv",
            "python-sphinx": "python3-sphinx",
            "rsvg-convert": "librsvg2-bin",
            "virtualenv": "virtualenv",
            "xelatex": "texlive-xetex",
            "yaml": "python3-yaml",
        }

        if self.pdf:
            pdf_pkgs = {
                "fonts-dejavu": [
                    "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf",
                ],
                "fonts-noto-cjk": [
                    "/usr/share/fonts/noto-cjk/NotoSansCJK-Regular.ttc",
                    "/usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc",
                    "/usr/share/fonts/opentype/noto/NotoSerifCJK-Regular.ttc",
                ],
                "tex-gyre": [
                    "/usr/share/texmf/tex/latex/tex-gyre/tgtermes.sty"
                ],
                "texlive-fonts-recommended": [
                    "/usr/share/texlive/texmf-dist/fonts/tfm/adobe/zapfding/pzdr.tfm",
                ],
                "texlive-lang-chinese": [
                    "/usr/share/texlive/texmf-dist/tex/latex/ctex/ctexhook.sty",
                ],
            }

            for package, files in pdf_pkgs.items():
                self.check_missing_file(files, package, DepManager.PDF_MANDATORY)

            self.check_program("dvipng", DepManager.PDF_MANDATORY)

        if not self.distro_msg:
            self.distro_msg = \
                "Note: ImageMagick is broken on some distros, affecting PDF output. For more details:\n" \
                "\thttps://askubuntu.com/questions/1158894/imagemagick-still-broken-using-with-usr-bin-convert"

        return self.get_install_progs(progs, "apt-get install")

    def give_redhat_hints(self):
        """
        Provide package installation hints for RedHat-based distros
        (Fedora, RHEL and RHEL-based variants).
        """
        progs = {
            "Pod::Usage": "perl-Pod-Usage",
            "convert": "ImageMagick",
            "dot": "graphviz",
            "python-sphinx": "python3-sphinx",
            "rsvg-convert": "librsvg2-tools",
            "virtualenv": "python3-virtualenv",
            "xelatex": "texlive-xetex-bin",
            "yaml": "python3-pyyaml",
        }

        fedora_tex_pkgs = [
            "dejavu-sans-fonts",
            "dejavu-sans-mono-fonts",
            "dejavu-serif-fonts",
            "texlive-collection-fontsrecommended",
            "texlive-collection-latex",
            "texlive-xecjk",
        ]

        fedora = False
        rel = None

        match = re.search(r"(release|Linux)\s+(\d+)", self.system_release)
        if match:
            rel = int(match.group(2))

        if not rel:
            print("Couldn't identify release number")
            noto_sans_redhat = None
            self.pdf = False
        elif re.search("Fedora", self.system_release):
            # Fedora 38 and upper use this CJK font

            noto_sans_redhat = "google-noto-sans-cjk-fonts"
            fedora = True
        else:
            # Almalinux, CentOS, RHEL, ...

            # at least up to version 9 (and Fedora < 38), that's the CJK font
            noto_sans_redhat = "google-noto-sans-cjk-ttc-fonts"

            progs["virtualenv"] = "python-virtualenv"

        if not rel or rel < 8:
            print("ERROR: Distro not supported. Too old?")
            return

        # RHEL 8 uses Python 3.6, which is not compatible with
        # the build system anymore. Suggest Python 3.9
        if rel == 8:
            self.check_program("python3.9", DepManager.SYSTEM_MANDATORY)
            progs["python3.9"] = "python39"
            progs["yaml"] = "python39-pyyaml"

            self.recommend_python = True

            # There's no python39-sphinx package. Only pip is supported
            self.package_supported = False

        if not self.distro_msg:
            self.distro_msg = \
                "Note: RHEL-based distros typically require extra repositories.\n" \
                "For most, enabling epel and crb are enough:\n" \
                "\tsudo dnf install -y epel-release\n" \
                "\tsudo dnf config-manager --set-enabled crb\n" \
                "Yet, some may have other required repositories. Those commands could be useful:\n" \
                "\tsudo dnf repolist all\n" \
                "\tsudo dnf repoquery --available --info <pkgs>\n" \
                "\tsudo dnf config-manager --set-enabled '*' # enable all - probably not what you want"

        if self.pdf:
            pdf_pkgs = [
                "/usr/share/fonts/google-noto-cjk/NotoSansCJK-Regular.ttc",
                "/usr/share/fonts/google-noto-sans-cjk-fonts/NotoSansCJK-Regular.ttc",
            ]

            self.check_missing_file(pdf_pkgs, noto_sans_redhat, DepManager.PDF_MANDATORY)

            self.check_rpm_missing(fedora_tex_pkgs, DepManager.PDF_MANDATORY)

            self.check_missing_tex(DepManager.PDF_MANDATORY)

            # There's no texlive-ctex on RHEL 8 repositories. This will
            # likely affect CJK pdf builds only.
            if not fedora and rel == 8:
                self.deps.del_package("texlive-ctex")

        return self.get_install_progs(progs, "dnf install")

    def give_opensuse_hints(self):
        """
        Provide package installation hints for openSUSE-based distros
        (Leap and Tumbleweed).
        """
        progs = {
            "Pod::Usage": "perl-Pod-Usage",
            "convert": "ImageMagick",
            "dot": "graphviz",
            "python-sphinx": "python3-sphinx",
            "virtualenv": "python3-virtualenv",
            "xelatex": "texlive-xetex-bin texlive-dejavu",
            "yaml": "python3-pyyaml",
        }

        suse_tex_pkgs = [
            "texlive-babel-english",
            "texlive-caption",
            "texlive-colortbl",
            "texlive-courier",
            "texlive-dvips",
            "texlive-helvetic",
            "texlive-makeindex",
            "texlive-metafont",
            "texlive-metapost",
            "texlive-palatino",
            "texlive-preview",
            "texlive-times",
            "texlive-zapfchan",
            "texlive-zapfding",
        ]

        progs["latexmk"] = "texlive-latexmk-bin"

        match = re.search(r"(Leap)\s+(\d+)\.(\d)", self.system_release)
        if match:
            rel = int(match.group(2))

            # Leap 15.x uses Python 3.6, which is not compatible with
            # the build system anymore. Suggest Python 3.11
            if rel == 15:
                if not self.which(self.python_cmd):
                    self.check_program("python3.11", DepManager.SYSTEM_MANDATORY)
                progs["python3.11"] = "python311"
                self.recommend_python = True

                progs.update({
                    "python-sphinx": "python311-Sphinx python311-Sphinx-latex",
                    "virtualenv": "python311-virtualenv",
                    "yaml": "python311-PyYAML",
                })
        else:
            # Tumbleweed defaults to Python 3.13

            progs.update({
                "python-sphinx": "python313-Sphinx python313-Sphinx-latex",
                "virtualenv": "python313-virtualenv",
                "yaml": "python313-PyYAML",
            })

        # FIXME: add support for installing CJK fonts
        #
        # I tried hard, but was unable to find a way to install
        # "Noto Sans CJK SC" on openSUSE

        if self.pdf:
            self.check_rpm_missing(suse_tex_pkgs, DepManager.PDF_MANDATORY)
            self.check_missing_tex()

        return self.get_install_progs(progs, "zypper install --no-recommends")

    def give_mageia_hints(self):
        """
        Provide package installation hints for Mageia and OpenMandriva.
        """
        progs = {
            "Pod::Usage": "perl-Pod-Usage",
            "convert": "ImageMagick",
            "dot": "graphviz",
            "python-sphinx": "python3-sphinx",
            "rsvg-convert": "librsvg2",
            "virtualenv": "python3-virtualenv",
            "xelatex": "texlive",
            "yaml": "python3-yaml",
        }

        tex_pkgs = [
            "texlive-fontsextra",
            "texlive-fonts-asian",
            "fonts-ttf-dejavu",
        ]

        if re.search(r"OpenMandriva", self.system_release):
            packager_cmd = "dnf install"
            noto_sans = "noto-sans-cjk-fonts"
            tex_pkgs = [
                "texlive-collection-basic",
                "texlive-collection-langcjk",
                "texlive-collection-fontsextra",
                "texlive-collection-fontsrecommended"
            ]

            # Tested on OpenMandriva Lx 4.3
            progs["convert"] = "imagemagick"
            progs["yaml"] = "python-pyyaml"
            progs["python-virtualenv"] = "python-virtualenv"
            progs["python-sphinx"] = "python-sphinx"
            progs["xelatex"] = "texlive"

            self.check_program("python-virtualenv", DepManager.PYTHON_MANDATORY)

            # On my tests with the OpenMandriva Lx 4.0 docker image, upgraded
            # to 4.3, the python-virtualenv package is broken: it is missing
            # ensurepip. Without it, the alternative would be to run:
            # python3 -m venv --without-pip ~/sphinx_latest, but running
            # pip there won't install sphinx in the venv.
            #
            # Add a note about that.

            if not self.distro_msg:
                self.distro_msg = \
                    "Notes:\n" \
                    "1. for venv, ensurepip could be broken, preventing its install method.\n" \
                    "2. at least on OpenMandriva Lx 4.3, texlive packages seem broken"

        else:
            packager_cmd = "urpmi"
            noto_sans = "google-noto-sans-cjk-ttc-fonts"

            progs["latexmk"] = "texlive-collection-basic"

        if self.pdf:
            pdf_pkgs = [
                "/usr/share/fonts/google-noto-cjk/NotoSansCJK-Regular.ttc",
                "/usr/share/fonts/TTF/NotoSans-Regular.ttf",
            ]

            self.check_missing_file(pdf_pkgs, noto_sans, DepManager.PDF_MANDATORY)
            self.check_rpm_missing(tex_pkgs, DepManager.PDF_MANDATORY)

        return self.get_install_progs(progs, packager_cmd)

    def give_arch_linux_hints(self):
        """
        Provide package installation hints for ArchLinux.
        """
        progs = {
            "convert": "imagemagick",
            "dot": "graphviz",
            "latexmk": "texlive-core",
            "rsvg-convert": "extra/librsvg",
            "virtualenv": "python-virtualenv",
            "xelatex": "texlive-xetex",
            "yaml": "python-yaml",
        }

        archlinux_tex_pkgs = [
            "texlive-basic",
            "texlive-binextra",
            "texlive-core",
            "texlive-fontsrecommended",
            "texlive-langchinese",
            "texlive-langcjk",
            "texlive-latexextra",
            "ttf-dejavu",
        ]

        if self.pdf:
            self.check_pacman_missing(archlinux_tex_pkgs,
                                      DepManager.PDF_MANDATORY)

            self.check_missing_file(["/usr/share/fonts/noto-cjk/NotoSansCJK-Regular.ttc"],
                                    "noto-fonts-cjk",
                                    DepManager.PDF_MANDATORY)

        return self.get_install_progs(progs, "pacman -S")

    def give_gentoo_hints(self):
        """
        Provide package installation hints for Gentoo.
        """
        texlive_deps = [
            "dev-texlive/texlive-fontsrecommended",
            "dev-texlive/texlive-latexextra",
            "dev-texlive/texlive-xetex",
            "media-fonts/dejavu",
        ]

        progs = {
            "convert": "media-gfx/imagemagick",
            "dot": "media-gfx/graphviz",
            "rsvg-convert": "gnome-base/librsvg",
            "virtualenv": "dev-python/virtualenv",
            "xelatex": " ".join(texlive_deps),
            "yaml": "dev-python/pyyaml",
            "python-sphinx": "dev-python/sphinx",
        }

        if self.pdf:
            pdf_pkgs = {
                "media-fonts/dejavu": [
                    "/usr/share/fonts/dejavu/DejaVuSans.ttf",
                ],
                "media-fonts/noto-cjk": [
                    "/usr/share/fonts/noto-cjk/NotoSansCJKsc-Regular.otf",
                    "/usr/share/fonts/noto-cjk/NotoSerifCJK-Regular.ttc",
                ],
            }
            for package, files in pdf_pkgs.items():
                self.check_missing_file(files, package, DepManager.PDF_MANDATORY)

        # Handling dependencies is a nightmare, as Gentoo refuses to emerge
        # some packages if there's no package.use file describing them.
        # To make it worse, compilation flags shall also be present there
        # for some packages. If USE is not perfect, error/warning messages
        # like those are shown:
        #
        # !!! The following binary packages have been ignored due to non matching USE:
        #
        #    =media-gfx/graphviz-12.2.1-r1 X pdf -python_single_target_python3_13 qt6 svg
        #    =media-gfx/graphviz-12.2.1-r1 X pdf python_single_target_python3_12 -python_single_target_python3_13 qt6 svg
        #    =media-gfx/graphviz-12.2.1-r1 X pdf qt6 svg
        #    =media-gfx/graphviz-12.2.1-r1 X pdf -python_single_target_python3_10 qt6 svg
        #    =media-gfx/graphviz-12.2.1-r1 X pdf -python_single_target_python3_10 python_single_target_python3_12 -python_single_target_python3_13 qt6 svg
        #    =media-fonts/noto-cjk-20190416 X
        #    =app-text/texlive-core-2024-r1 X cjk -xetex
        #    =app-text/texlive-core-2024-r1 X -xetex
        #    =app-text/texlive-core-2024-r1 -xetex
        #    =dev-libs/zziplib-0.13.79-r1 sdl
        #
        # And it will ignore such packages, installing the remaining ones.
        # That affects mostly the image extension and PDF generation.

        # Package dependencies and the minimal needed args:
        portages = {
            "graphviz": "media-gfx/graphviz",
            "imagemagick": "media-gfx/imagemagick",
            "media-libs": "media-libs/harfbuzz icu",
            "media-fonts": "media-fonts/noto-cjk",
            "texlive": "app-text/texlive-core xetex",
            "zziblib": "dev-libs/zziplib sdl",
        }

        extra_cmds = ""
        if not self.distro_msg:
            self.distro_msg = "Note: Gentoo requires package.use to be adjusted before emerging packages"

        use_base = "/etc/portage/package.use"
        files = glob(f"{use_base}/*")

        for fname, portage in portages.items():
            install = False

            while install is False:
                if not files:
                    # No files under package.use. Install all
                    install = True
                    break

                args = portage.split(" ")

                name = args.pop(0)

                cmd = ["grep", "-l", "-E", rf"^{name}\b"] + files
                result = self.run(cmd, stdout=subprocess.PIPE, text=True)
                if result.returncode or not result.stdout.strip():
                    # File containing the portage name not found
                    install = True
                    break

                # Ensure that the needed USE flags are present
                if args:
                    match_fname = result.stdout.strip()
                    with open(match_fname, 'r', encoding='utf8',
                              errors='backslashreplace') as fp:
                        for line in fp:
                            for arg in args:
                                if arg.startswith("-"):
                                    continue

                                if not re.search(rf"\s*{arg}\b", line):
                                    # Needed USE flag not found
                                    install = True
                                    break

                # Everything looks ok, don't install
                break

            # Emit a command to set up the missing USE flags
            if install:
                extra_cmds += f"sudo su -c 'echo \"{portage}\" > {use_base}/{fname}'\n"

        # Now, we can use emerge and let it respect USE
        return self.get_install_progs(progs,
                                      "emerge --ask --changed-use --binpkg-respect-use=y",
                                      extra_cmds)

    def get_install(self):
        """
        OS-specific hints logic. Searches for a hinter. If found, uses it to
        provide package-manager specific install commands.

        Otherwise, outputs install instructions for the meta-packages.

        Returns a string with the command to be executed to install the
        needed packages, if the distro was found. Otherwise, returns just a
        list of packages that require installation.
        """
        os_hints = {
            re.compile("Red Hat Enterprise Linux"): self.give_redhat_hints,
            re.compile("Fedora"): self.give_redhat_hints,
            re.compile("AlmaLinux"): self.give_redhat_hints,
            re.compile("Amazon Linux"): self.give_redhat_hints,
            re.compile("CentOS"): self.give_redhat_hints,
            re.compile("openEuler"): self.give_redhat_hints,
            re.compile("Oracle Linux Server"): self.give_redhat_hints,
            re.compile("Rocky Linux"): self.give_redhat_hints,
            re.compile("Springdale Open Enterprise"): self.give_redhat_hints,

            re.compile("Ubuntu"): self.give_debian_hints,
            re.compile("Debian"): self.give_debian_hints,
            re.compile("Devuan"): self.give_debian_hints,
            re.compile("Kali"): self.give_debian_hints,
            re.compile("Mint"): self.give_debian_hints,

            re.compile("openSUSE"): self.give_opensuse_hints,

            re.compile("Mageia"): self.give_mageia_hints,
            re.compile("OpenMandriva"): self.give_mageia_hints,

            re.compile("Arch Linux"): self.give_arch_linux_hints,
            re.compile("Gentoo"): self.give_gentoo_hints,
        }

        # If the OS is detected, use per-OS hint logic
        for regex, os_hint in os_hints.items():
            if regex.search(self.system_release):
                return os_hint()

        #
        # Fall back to generic hint code for other distros.
        # That's far from ideal, especially for LaTeX dependencies.
        #
        progs = {"sphinx-build": "sphinx"}
        if self.pdf:
            self.check_missing_tex()

        self.distro_msg = \
            f"I don't know the distro {self.system_release}.\n" \
            "So, I can't provide you a hint with the install procedure.\n" \
            "There are likely missing dependencies."

        return self.get_install_progs(progs, None)

    #
    # Common dependencies
    #
    def deactivate_help(self):
        """
        Print a helper message about how to leave a virtual environment.
        """

        print("\n If you want to exit the virtualenv, you can use:")
        print("\tdeactivate")

    def get_virtenv(self):
        """
        Give a hint about how to activate an already-existing virtual
        environment containing sphinx-build.

        Returns a tuple with (activate_cmd_path, sphinx_version) for
        the newest available virtual env.
        """

        cwd = os.getcwd()

        activates = []

        # Add all sphinx prefixes with possible version numbers
        for p in self.virtenv_prefix:
            activates += glob(f"{cwd}/{p}[0-9]*/bin/activate")

        activates.sort(reverse=True, key=str.lower)

        # Place sphinx_latest first, if it exists
        for p in self.virtenv_prefix:
            activates = glob(f"{cwd}/{p}*latest/bin/activate") + activates

        ver = (0, 0, 0)
        for f in activates:
            # Discard too old Sphinx virtual environments
            match = re.search(r"(\d+)\.(\d+)\.(\d+)", f)
            if match:
                ver = (int(match.group(1)), int(match.group(2)), int(match.group(3)))

                if ver < self.min_version:
                    continue

            sphinx_cmd = f.replace("activate", "sphinx-build")
            if not os.path.isfile(sphinx_cmd):
                continue

            ver = self.get_sphinx_version(sphinx_cmd)

            if not ver:
                venv_dir = f.replace("/bin/activate", "")
                print(f"Warning: virtual environment {venv_dir} is not working.\n"
                      "Python version upgrade? Remove it with:\n\n"
                      f"\trm -rf {venv_dir}\n\n")
            else:
                if self.need_sphinx and ver >= self.min_version:
                    return (f, ver)
                elif ver > self.cur_version:
                    return (f, ver)

        return ("", ver)

    def recommend_sphinx_upgrade(self):
        """
        Check if Sphinx needs to be upgraded.

        Returns a tuple with the highest available Sphinx version if found.
        Otherwise, returns None to indicate either that no upgrade is needed
        or that no venv was found.
        """

        # Avoid running sphinx-builds from venv if cur_version is good
        if self.cur_version and self.cur_version >= RECOMMENDED_VERSION:
            self.latest_avail_ver = self.cur_version
            return None

        # Get the highest version from sphinx_*/bin/sphinx-build and the
        # corresponding command to activate the venv/virtenv
        self.activate_cmd, self.venv_ver = self.get_virtenv()

        # Store the highest version from the existing Sphinx virtualenvs
        if self.activate_cmd and self.venv_ver > self.cur_version:
            self.latest_avail_ver = self.venv_ver
        else:
            if self.cur_version:
                self.latest_avail_ver = self.cur_version
            else:
                self.latest_avail_ver = (0, 0, 0)

        # As we don't know the package version of Sphinx, and there are no
        # virtual environments, don't check if upgrades are needed
        if not self.virtualenv:
            if not self.latest_avail_ver:
                return None

            return self.latest_avail_ver

        # Either there is already a virtual env or a new one should be created
        self.need_pip = True

        if not self.latest_avail_ver:
            return None

        # Record whether the reason is an upgrade or not
        if self.latest_avail_ver != (0, 0, 0):
            if self.latest_avail_ver < RECOMMENDED_VERSION:
                self.rec_sphinx_upgrade = 1

        return self.latest_avail_ver
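All of the version juggling above works because versions are kept as plain tuples of ints, which Python compares element by element. The real `parse_version()` and `ver_str()` helpers are defined earlier in this script (outside this hunk); the stand-ins below are assumptions written only to illustrate why tuple comparison is a valid version check:

```python
# Hypothetical stand-ins for the script's parse_version()/ver_str()
# helpers, for illustration only.
def parse_version(version):
    """Convert a dotted version string into a comparable tuple of ints."""
    return tuple(int(x) for x in version.split("."))

def ver_str(ver_tuple):
    """Convert a version tuple back into a dotted string."""
    return ".".join(str(x) for x in ver_tuple)

RECOMMENDED_VERSION = parse_version("3.4.3")   # assumed value

cur_version = parse_version("2.4.4")
# Tuples compare element by element, so '<' is a true version ordering:
print(cur_version < RECOMMENDED_VERSION)   # True: 2.4.4 predates 3.4.3
```

This is why `recommend_sphinx_upgrade()` can compare `self.venv_ver`, `self.cur_version` and `RECOMMENDED_VERSION` directly with `<` and `>=` without any extra parsing step.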
+ 1360 + def recommend_package(self): 1361 + """ 1362 + Recommend installing Sphinx as a distro-specific package. 1363 + """ 1364 + 1365 + print("\n2) As a package with:") 1366 + 1367 + old_need = self.deps.need 1368 + old_optional = self.deps.optional 1369 + 1370 + self.pdf = False 1371 + self.deps.optional = 0 1372 + old_verbose = self.verbose_warn_install 1373 + self.verbose_warn_install = 0 1374 + 1375 + self.deps.clear_deps() 1376 + 1377 + self.deps.add_package("python-sphinx", DepManager.PYTHON_MANDATORY) 1378 + 1379 + cmd = self.get_install() 1380 + if cmd: 1381 + print(cmd) 1382 + 1383 + self.deps.need = old_need 1384 + self.deps.optional = old_optional 1385 + self.verbose_warn_install = old_verbose 1386 + 1387 + def recommend_sphinx_version(self, virtualenv_cmd): 1388 + """ 1389 + Provide recommendations for installing or upgrading Sphinx based 1390 + on current version. 1391 + 1392 + The logic here is complex, as it have to deal with different versions: 1393 + 1394 + - minimal supported version; 1395 + - minimal PDF version; 1396 + - recommended version. 1397 + 1398 + It also needs to work fine with both distro's package and 1399 + venv/virtualenv 1400 + """ 1401 + 1402 + if self.recommend_python: 1403 + cur_ver = sys.version_info[:3] 1404 + if cur_ver < MIN_PYTHON_VERSION: 1405 + print(f"\nPython version {cur_ver} is incompatible with doc build.\n" \ 1406 + "Please upgrade it and re-run.\n") 1407 + return 1408 + 1409 + # Version is OK. Nothing to do. 1410 + if self.cur_version != (0, 0, 0) and self.cur_version >= RECOMMENDED_VERSION: 1411 + return 1412 + 1413 + if self.latest_avail_ver: 1414 + latest_avail_ver = ver_str(self.latest_avail_ver) 1415 + 1416 + if not self.need_sphinx: 1417 + # sphinx-build is present and its version is >= $min_version 1418 + 1419 + # only recommend enabling a newer virtenv version if makes sense. 
1420 + if self.latest_avail_ver and self.latest_avail_ver > self.cur_version: 1421 + print(f"\nYou may also use the newer Sphinx version {latest_avail_ver} with:") 1422 + if f"{self.virtenv_prefix}" in os.getcwd(): 1423 + print("\tdeactivate") 1424 + print(f"\t. {self.activate_cmd}") 1425 + self.deactivate_help() 1426 + return 1427 + 1428 + if self.latest_avail_ver and self.latest_avail_ver >= RECOMMENDED_VERSION: 1429 + return 1430 + 1431 + if not self.virtualenv: 1432 + # No Sphinx either via package or via virtenv. As we can't 1433 + # compare the versions here, just return, recommending that the 1434 + # user install it from the distro package. 1435 + if not self.latest_avail_ver or self.latest_avail_ver == (0, 0, 0): 1436 + return 1437 + 1438 + # The user doesn't want a virtenv recommendation, but a newer 1439 + # version is already installed via virtenv. 1440 + # So, print the commands to enable it 1441 + if self.latest_avail_ver > self.cur_version: 1442 + print(f"\nYou may also use the Sphinx virtualenv version {latest_avail_ver} with:") 1443 + if f"{self.virtenv_prefix}" in os.getcwd(): 1444 + print("\tdeactivate") 1445 + print(f"\t. {self.activate_cmd}") 1446 + self.deactivate_help() 1447 + return 1448 + print("\n") 1449 + else: 1450 + if self.need_sphinx: 1451 + self.deps.need += 1 1452 + 1453 + # Suggest newer versions if current ones are too old 1454 + if self.latest_avail_ver and self.latest_avail_ver >= self.min_version: 1455 + if self.latest_avail_ver >= RECOMMENDED_VERSION: 1456 + print(f"\nNeed to activate Sphinx (version {latest_avail_ver}) on virtualenv with:") 1457 + print(f"\t. {self.activate_cmd}") 1458 + self.deactivate_help() 1459 + return 1460 + 1461 + # Version is above the minimal required one, but may be 1462 + # below the recommended one.
So, print warnings/notes 1463 + if self.latest_avail_ver < RECOMMENDED_VERSION: 1464 + print(f"Warning: Sphinx version {RECOMMENDED_VERSION} or higher is recommended.") 1465 + 1466 + # At this point, either Sphinx is needed or an upgrade is recommended, 1467 + # both via pip 1468 + 1469 + if self.rec_sphinx_upgrade: 1470 + if not self.virtualenv: 1471 + print("Instead of installing/upgrading the Python Sphinx package, you could use pip/pypi with:\n\n") 1472 + else: 1473 + print("To upgrade Sphinx, use:\n\n") 1474 + else: 1475 + print("\nSphinx needs to be installed either:\n1) via pip/pypi with:\n") 1476 + 1477 + if not virtualenv_cmd: 1478 + print(" Currently not possible.\n") 1479 + print(" Please upgrade Python to a newer version and run this script again") 1480 + else: 1481 + print(f"\t{virtualenv_cmd} {self.virtenv_dir}") 1482 + print(f"\t. {self.virtenv_dir}/bin/activate") 1483 + print(f"\tpip install -r {self.requirement_file}") 1484 + self.deactivate_help() 1485 + 1486 + if self.package_supported: 1487 + self.recommend_package() 1488 + 1489 + print("\n" \ 1490 + " Please note that Sphinx currently produces false-positive\n" \ 1491 + " warnings when the same name is used for more than one type (functions,\n" \ 1492 + " structs, enums, ...). This is a known Sphinx bug. For more details, see:\n" \ 1493 + "\thttps://github.com/sphinx-doc/sphinx/pull/8313") 1494 + 1495 + def check_needs(self): 1496 + """ 1497 + Main method that checks needed dependencies and provides 1498 + recommendations.
1499 + """ 1500 + self.python_cmd = sys.executable 1501 + 1502 + # Check if Sphinx is already accessible from current environment 1503 + self.check_sphinx(self.conf) 1504 + 1505 + if self.system_release: 1506 + print(f"Detected OS: {self.system_release}.") 1507 + else: 1508 + print("Unknown OS") 1509 + if self.cur_version != (0, 0, 0): 1510 + ver = ver_str(self.cur_version) 1511 + print(f"Sphinx version: {ver}\n") 1512 + 1513 + # Check the type of virtual env, depending on Python version 1514 + virtualenv_cmd = None 1515 + 1516 + if sys.version_info < MIN_PYTHON_VERSION: 1517 + min_ver = ver_str(MIN_PYTHON_VERSION) 1518 + print(f"ERROR: at least python {min_ver} is required to build the kernel docs") 1519 + self.need_sphinx = 1 1520 + 1521 + self.venv_ver = self.recommend_sphinx_upgrade() 1522 + 1523 + if self.need_pip: 1524 + if sys.version_info < MIN_PYTHON_VERSION: 1525 + self.need_pip = False 1526 + print("Warning: python version is not supported.") 1527 + else: 1528 + virtualenv_cmd = f"{self.python_cmd} -m venv" 1529 + self.check_python_module("ensurepip") 1530 + 1531 + # Check for needed programs/tools 1532 + self.check_perl_module("Pod::Usage", DepManager.SYSTEM_MANDATORY) 1533 + 1534 + self.check_program("make", DepManager.SYSTEM_MANDATORY) 1535 + self.check_program("which", DepManager.SYSTEM_MANDATORY) 1536 + 1537 + self.check_program("dot", DepManager.SYSTEM_OPTIONAL) 1538 + self.check_program("convert", DepManager.SYSTEM_OPTIONAL) 1539 + 1540 + self.check_python_module("yaml") 1541 + 1542 + if self.pdf: 1543 + self.check_program("xelatex", DepManager.PDF_MANDATORY) 1544 + self.check_program("rsvg-convert", DepManager.PDF_MANDATORY) 1545 + self.check_program("latexmk", DepManager.PDF_MANDATORY) 1546 + 1547 + # Do distro-specific checks and output distro-install commands 1548 + cmd = self.get_install() 1549 + if cmd: 1550 + print(cmd) 1551 + 1552 + # If distro requires some special instructions, print here. 
1553 + # Please notice that get_install() needs to be called first. 1554 + if self.distro_msg: 1555 + print("\n" + self.distro_msg) 1556 + 1557 + if not self.python_cmd: 1558 + if self.need == 1: 1559 + sys.exit("Can't build as 1 mandatory dependency is missing") 1560 + elif self.need: 1561 + sys.exit(f"Can't build as {self.need} mandatory dependencies are missing") 1562 + 1563 + # Check if sphinx-build is called sphinx-build-3 1564 + if self.need_symlink: 1565 + sphinx_path = self.which("sphinx-build-3") 1566 + if sphinx_path: 1567 + print(f"\tsudo ln -sf {sphinx_path} /usr/bin/sphinx-build\n") 1568 + 1569 + self.recommend_sphinx_version(virtualenv_cmd) 1570 + print("") 1571 + 1572 + if not self.deps.optional: 1573 + print("All optional dependencies are met.") 1574 + 1575 + if self.deps.need == 1: 1576 + sys.exit("Can't build as 1 mandatory dependency is missing") 1577 + elif self.deps.need: 1578 + sys.exit(f"Can't build as {self.deps.need} mandatory dependencies are missing") 1579 + 1580 + print("Needed package dependencies are met.") 1581 + 1582 + DESCRIPTION = """ 1583 + Process some flags related to Sphinx installation and documentation build. 
1584 + """ 1585 + 1586 + 1587 + def main(): 1588 + """Main function""" 1589 + parser = argparse.ArgumentParser(description=DESCRIPTION) 1590 + 1591 + parser.add_argument( 1592 + "--no-virtualenv", 1593 + action="store_false", 1594 + dest="virtualenv", 1595 + help="Recommend installing Sphinx instead of using a virtualenv", 1596 + ) 1597 + 1598 + parser.add_argument( 1599 + "--no-pdf", 1600 + action="store_false", 1601 + dest="pdf", 1602 + help="Don't check for dependencies required to build PDF docs", 1603 + ) 1604 + 1605 + parser.add_argument( 1606 + "--version-check", 1607 + action="store_true", 1608 + dest="version_check", 1609 + help="If version is compatible, don't check for missing dependencies", 1610 + ) 1611 + 1612 + args = parser.parse_args() 1613 + 1614 + checker = SphinxDependencyChecker(args) 1615 + 1616 + checker.check_python() 1617 + checker.check_needs() 1618 + 1619 + # Call main if not used as module 1620 + if __name__ == "__main__": 1621 + main()
+54
tools/docs/gen-redirects.py
··· 1 + #! /usr/bin/env python3 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Copyright © 2025, Oracle and/or its affiliates. 5 + # Author: Vegard Nossum <vegard.nossum@oracle.com> 6 + 7 + """Generate HTML redirects for renamed Documentation/**.rst files using 8 + the output of tools/docs/gen-renames.py. 9 + 10 + Example: 11 + 12 + tools/docs/gen-redirects.py --output Documentation/output/ < Documentation/.renames.txt 13 + """ 14 + 15 + import argparse 16 + import os 17 + import sys 18 + 19 + parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter) 20 + parser.add_argument('-o', '--output', help='output directory') 21 + 22 + args = parser.parse_args() 23 + 24 + for line in sys.stdin: 25 + line = line.rstrip('\n') 26 + 27 + old_name, new_name = line.split(' ', 2) 28 + 29 + old_html_path = os.path.join(args.output, old_name + '.html') 30 + new_html_path = os.path.join(args.output, new_name + '.html') 31 + 32 + if not os.path.exists(new_html_path): 33 + print(f"warning: target does not exist: {new_html_path} (redirect from {old_html_path})") 34 + continue 35 + 36 + old_html_dir = os.path.dirname(old_html_path) 37 + if not os.path.exists(old_html_dir): 38 + os.makedirs(old_html_dir) 39 + 40 + relpath = os.path.relpath(new_name, os.path.dirname(old_name)) + '.html' 41 + 42 + with open(old_html_path, 'w') as f: 43 + print(f"""\ 44 + <!DOCTYPE html> 45 + 46 + <html lang="en"> 47 + <head> 48 + <title>This page has moved</title> 49 + <meta http-equiv="refresh" content="0; url={relpath}"> 50 + </head> 51 + <body> 52 + <p>This page has moved to <a href="{relpath}">{new_name}</a>.</p> 53 + </body> 54 + </html>""", file=f)
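The key detail in gen-redirects.py is that the refresh URL in the stub page must be relative to the directory of the *old* page, which `os.path.relpath` computes. The same calculation, standalone (document names hypothetical):

```python
import os

def redirect_target(old_name, new_name):
    # The stub page is written at old_name + ".html", so the redirect
    # URL must be expressed relative to the old page's directory.
    return os.path.relpath(new_name, os.path.dirname(old_name)) + ".html"
```

A rename within one directory yields a bare file name; a move across directories walks up with `..` components as needed.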
+130
tools/docs/gen-renames.py
··· 1 + #! /usr/bin/env python3 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Copyright © 2025, Oracle and/or its affiliates. 5 + # Author: Vegard Nossum <vegard.nossum@oracle.com> 6 + 7 + """Trawl repository history for renames of Documentation/**.rst files. 8 + 9 + Example: 10 + 11 + tools/docs/gen-renames.py --rev HEAD > Documentation/.renames.txt 12 + """ 13 + 14 + import argparse 15 + import itertools 16 + import os 17 + import subprocess 18 + import sys 19 + 20 + parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter) 21 + parser.add_argument('--rev', default='HEAD', help='generate renames up to this revision') 22 + 23 + args = parser.parse_args() 24 + 25 + def normalize(path): 26 + prefix = 'Documentation/' 27 + suffix = '.rst' 28 + 29 + assert path.startswith(prefix) 30 + assert path.endswith(suffix) 31 + 32 + return path[len(prefix):-len(suffix)] 33 + 34 + class Name(object): 35 + def __init__(self, name): 36 + self.names = [name] 37 + 38 + def rename(self, new_name): 39 + self.names.append(new_name) 40 + 41 + names = { 42 + } 43 + 44 + for line in subprocess.check_output([ 45 + 'git', 'log', 46 + '--reverse', 47 + '--oneline', 48 + '--find-renames', 49 + '--diff-filter=RD', 50 + '--name-status', 51 + '--format=commit %H', 52 + # ~v4.8-ish is when Sphinx/.rst was added in the first place 53 + f'v4.8..{args.rev}', 54 + '--', 55 + 'Documentation/' 56 + ], text=True).splitlines(): 57 + # rename 58 + if line.startswith('R'): 59 + _, old, new = line[1:].split('\t', 2) 60 + 61 + if old.endswith('.rst') and new.endswith('.rst'): 62 + old = normalize(old) 63 + new = normalize(new) 64 + 65 + name = names.get(old) 66 + if name is None: 67 + name = Name(old) 68 + else: 69 + del names[old] 70 + 71 + name.rename(new) 72 + names[new] = name 73 + 74 + continue 75 + 76 + # delete 77 + if line.startswith('D'): 78 + _, old = line.split('\t', 1) 79 + 80 + if old.endswith('.rst'): 81 + old = normalize(old) 82 + 83 + # 
TODO: we could save added/modified files as well and propose 84 + # them as alternatives 85 + name = names.get(old) 86 + if name is None: 87 + pass 88 + else: 89 + del names[old] 90 + 91 + continue 92 + 93 + # 94 + # Get the set of current files so we can sanity check that we aren't 95 + # redirecting any of those 96 + # 97 + 98 + current_files = set() 99 + for line in subprocess.check_output([ 100 + 'git', 'ls-tree', 101 + '-r', 102 + '--name-only', 103 + args.rev, 104 + 'Documentation/', 105 + ], text=True).splitlines(): 106 + if line.endswith('.rst'): 107 + current_files.add(normalize(line)) 108 + 109 + # 110 + # Format/group/output result 111 + # 112 + 113 + result = [] 114 + for _, v in names.items(): 115 + old_names = v.names[:-1] 116 + new_name = v.names[-1] 117 + 118 + for old_name in old_names: 119 + if old_name == new_name: 120 + # A file was renamed to its new name twice; don't redirect that 121 + continue 122 + 123 + if old_name in current_files: 124 + # A file was recreated with a former name; don't redirect those 125 + continue 126 + 127 + result.append((old_name, new_name)) 128 + 129 + for old_name, new_name in sorted(result): 130 + print(f"{old_name} {new_name}")
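The `Name` class above threads a file's whole rename history into a single chain, so every historical name maps to the final one. The same idea in a self-contained sketch (event data hypothetical, and simplified: deletions are not handled here):

```python
def chase_renames(events):
    """Fold chronological (old, new) rename events into a redirect map
    from every historical name to the final name, mirroring the
    name-chaining done in gen-renames.py."""
    histories = {}  # current name -> all names the file has had
    for old, new in events:
        history = histories.pop(old, [old])
        history.append(new)
        histories[new] = history

    redirects = {}
    for history in histories.values():
        final = history[-1]
        for former in history[:-1]:
            if former != final:       # renamed back to itself: skip
                redirects[former] = final
    return redirects
```

Chained renames collapse transitively: with events `a -> b` then `b -> c`, both `a` and `b` redirect straight to `c`.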
tools/docs/lib/__init__.py
+70
tools/docs/lib/enrich_formatter.py
··· 1 + #!/usr/bin/env python3 2 + # SPDX-License-Identifier: GPL-2.0 3 + # Copyright (c) 2025 by Mauro Carvalho Chehab <mchehab@kernel.org>. 4 + 5 + """ 6 + Ancillary argparse HelpFormatter class that works in a similar way to 7 + argparse.RawDescriptionHelpFormatter, i.e. the description maintains line 8 + breaks, but it also implements transformations to the help text. The 9 + actual transformations are given by enrich_text(), if the output is a tty. 10 + 11 + Currently, the following transformations are done: 12 + 13 + - Positional arguments are shown in upper case; 14 + - if the output is a TTY, ``var`` and positional arguments are prepended 15 + with an ANSI SGR code. This is usually rendered as bold. On some 16 + terminals, like konsole, it is rendered as colored bold text. 17 + """ 18 + 19 + import argparse 20 + import re 21 + import sys 22 + 23 + class EnrichFormatter(argparse.HelpFormatter): 24 + """ 25 + Better format the output, making it easier to identify the positional 26 + args and how they're used in the __doc__ description.
27 + """ 28 + def __init__(self, *args, **kwargs): 29 + """Initialize the class and check if stdout is a TTY""" 30 + super().__init__(*args, **kwargs) 31 + self._tty = sys.stdout.isatty() 32 + 33 + def enrich_text(self, text): 34 + """Handle ReST markups (currently, only ``foo``)""" 35 + if self._tty and text: 36 + # Replace ``text`` with ANSI SGR (bold) 37 + return re.sub(r'\`\`(.+?)\`\`', 38 + lambda m: f'\033[1m{m.group(1)}\033[0m', text) 39 + return text 40 + 41 + def _fill_text(self, text, width, indent): 42 + """Enrich descriptions that have markup in them""" 43 + enriched = self.enrich_text(text) 44 + return "\n".join(indent + line for line in enriched.splitlines()) 45 + 46 + def _format_usage(self, usage, actions, groups, prefix): 47 + """Enrich positional arguments at the usage: line""" 48 + 49 + prog = self._prog 50 + parts = [] 51 + 52 + for action in actions: 53 + if action.option_strings: 54 + opt = action.option_strings[0] 55 + if action.nargs != 0: 56 + opt += f" {action.dest.upper()}" 57 + parts.append(f"[{opt}]") 58 + else: 59 + # Positional argument 60 + parts.append(self.enrich_text(f"``{action.dest.upper()}``")) 61 + 62 + usage_text = f"{prefix or 'usage: '} {prog} {' '.join(parts)}\n" 63 + return usage_text 64 + 65 + def _format_action_invocation(self, action): 66 + """Enrich argument names""" 67 + if not action.option_strings: 68 + return self.enrich_text(f"``{action.dest.upper()}``") 69 + 70 + return ", ".join(action.option_strings)
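The core of EnrichFormatter is a single regex substitution from ReST ``literal`` spans to ANSI SGR bold. The transformation, reproduced standalone without the formatter-class plumbing:

```python
import re

BOLD, RESET = "\033[1m", "\033[0m"

def enrich(text, tty):
    """Replace ``foo`` spans with ANSI-bold foo when writing to a TTY;
    leave the text untouched when output is redirected or empty."""
    if not tty or not text:
        return text
    # Non-greedy match so adjacent spans are handled independently
    return re.sub(r"``(.+?)``", lambda m: f"{BOLD}{m.group(1)}{RESET}", text)
```

The non-greedy `(.+?)` matters: with a greedy match, `see ``A`` and ``B``` would be collapsed into one bold span spanning from `A` to `B`.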
+452
tools/docs/lib/parse_data_structs.py
··· 1 + #!/usr/bin/env python3 2 + # SPDX-License-Identifier: GPL-2.0 3 + # Copyright (c) 2016-2025 by Mauro Carvalho Chehab <mchehab@kernel.org>. 4 + # pylint: disable=R0912,R0915 5 + 6 + """ 7 + Parse a source file or header, creating ReStructured Text cross references. 8 + 9 + It accepts an optional file to change the default symbol reference or to 10 + suppress symbols from the output. 11 + 12 + It is capable of identifying defines, functions, structs, typedefs, 13 + enums and enum symbols, creating cross-references for all of them. 14 + It is also capable of distinguishing a #define used for specifying a Linux 15 + ioctl. 16 + 17 + The optional rules file contains a set of rules like: 18 + 19 + ignore ioctl VIDIOC_ENUM_FMT 20 + replace ioctl VIDIOC_DQBUF vidioc_qbuf 21 + replace define V4L2_EVENT_MD_FL_HAVE_FRAME_SEQ :c:type:`v4l2_event_motion_det` 22 + """ 23 + 24 + import os 25 + import re 26 + import sys 27 + 28 + 29 + class ParseDataStructs: 30 + """ 31 + Creates an enriched version of a Kernel header file with cross-links 32 + to each C data structure type. 33 + 34 + It is meant to allow more comprehensive documentation, where 35 + uAPI headers create cross-reference links to the code. 36 + 37 + It is capable of identifying defines, functions, structs, typedefs, 38 + enums and enum symbols, creating cross-references for all of them. 39 + It is also capable of distinguishing a #define used for specifying a Linux 40 + ioctl. 41 + 42 + By default, it creates rules for all symbols and defines, but it also 43 + allows parsing an exceptions file. Such a file contains a set of rules 44 + using the syntax below: 45 + 46 + 1. Ignore rules: 47 + 48 + ignore <type> <symbol> 49 + 50 + Removes the symbol from reference generation. 51 + 52 + 2. Replace rules: 53 + 54 + replace <type> <old_symbol> <new_reference> 55 + 56 + Replaces the reference for old_symbol with new_reference, which can be: 57 + - A simple symbol name; 58 + - A full Sphinx reference.
59 + 60 + In both cases, <type> can be: 61 + - ioctl: for defines that end with _IO*, e.g. ioctl definitions 62 + - define: for other defines 63 + - symbol: for symbols defined within enums; 64 + - typedef: for typedefs; 65 + - enum: for the name of a non-anonymous enum; 66 + - struct: for structs. 67 + 68 + Examples: 69 + 70 + ignore define __LINUX_MEDIA_H 71 + ignore ioctl VIDIOC_ENUM_FMT 72 + replace ioctl VIDIOC_DQBUF vidioc_qbuf 73 + replace define V4L2_EVENT_MD_FL_HAVE_FRAME_SEQ :c:type:`v4l2_event_motion_det` 74 + """ 75 + 76 + # Parser regexes with multiple ways to capture enums and structs 77 + RE_ENUMS = [ 78 + re.compile(r"^\s*enum\s+([\w_]+)\s*\{"), 79 + re.compile(r"^\s*enum\s+([\w_]+)\s*$"), 80 + re.compile(r"^\s*typedef\s*enum\s+([\w_]+)\s*\{"), 81 + re.compile(r"^\s*typedef\s*enum\s+([\w_]+)\s*$"), 82 + ] 83 + RE_STRUCTS = [ 84 + re.compile(r"^\s*struct\s+([_\w][\w\d_]+)\s*\{"), 85 + re.compile(r"^\s*struct\s+([_\w][\w\d_]+)$"), 86 + re.compile(r"^\s*typedef\s*struct\s+([_\w][\w\d_]+)\s*\{"), 87 + re.compile(r"^\s*typedef\s*struct\s+([_\w][\w\d_]+)$"), 88 + ] 89 + 90 + # FIXME: the original code was written long before the Sphinx C 91 + # domain gained multiple namespaces. To avoid too much churn in the 92 + # existing hyperlinks, the code kept using "c:type" instead of the 93 + # right types. To change that, we need to change the types not only 94 + # here, but also in the uAPI media documentation.
95 + DEF_SYMBOL_TYPES = { 96 + "ioctl": { 97 + "prefix": "\\ ", 98 + "suffix": "\\ ", 99 + "ref_type": ":ref", 100 + "description": "IOCTL Commands", 101 + }, 102 + "define": { 103 + "prefix": "\\ ", 104 + "suffix": "\\ ", 105 + "ref_type": ":ref", 106 + "description": "Macros and Definitions", 107 + }, 108 + # We're calling each definition inside an enum as "symbol" 109 + "symbol": { 110 + "prefix": "\\ ", 111 + "suffix": "\\ ", 112 + "ref_type": ":ref", 113 + "description": "Enumeration values", 114 + }, 115 + "typedef": { 116 + "prefix": "\\ ", 117 + "suffix": "\\ ", 118 + "ref_type": ":c:type", 119 + "description": "Type Definitions", 120 + }, 121 + # This is the description of the enum itself 122 + "enum": { 123 + "prefix": "\\ ", 124 + "suffix": "\\ ", 125 + "ref_type": ":c:type", 126 + "description": "Enumerations", 127 + }, 128 + "struct": { 129 + "prefix": "\\ ", 130 + "suffix": "\\ ", 131 + "ref_type": ":c:type", 132 + "description": "Structures", 133 + }, 134 + } 135 + 136 + def __init__(self, debug: bool = False): 137 + """Initialize internal vars""" 138 + self.debug = debug 139 + self.data = "" 140 + 141 + self.symbols = {} 142 + 143 + for symbol_type in self.DEF_SYMBOL_TYPES: 144 + self.symbols[symbol_type] = {} 145 + 146 + def store_type(self, symbol_type: str, symbol: str, 147 + ref_name: str = None, replace_underscores: bool = True): 148 + """ 149 + Stores a new symbol at self.symbols under symbol_type. 
150 + 151 + By default, underscores are replaced by "-" 152 + """ 153 + defs = self.DEF_SYMBOL_TYPES[symbol_type] 154 + 155 + prefix = defs.get("prefix", "") 156 + suffix = defs.get("suffix", "") 157 + ref_type = defs.get("ref_type") 158 + 159 + # Determine ref_link based on symbol type 160 + if ref_type: 161 + if symbol_type == "enum": 162 + ref_link = f"{ref_type}:`{symbol}`" 163 + else: 164 + if not ref_name: 165 + ref_name = symbol.lower() 166 + 167 + # c-type references don't support hash 168 + if ref_type == ":ref" and replace_underscores: 169 + ref_name = ref_name.replace("_", "-") 170 + 171 + ref_link = f"{ref_type}:`{symbol} <{ref_name}>`" 172 + else: 173 + ref_link = symbol 174 + 175 + self.symbols[symbol_type][symbol] = f"{prefix}{ref_link}{suffix}" 176 + 177 + def store_line(self, line): 178 + """Stores a line at self.data, properly indented""" 179 + line = " " + line.expandtabs() 180 + self.data += line.rstrip(" ") 181 + 182 + def parse_file(self, file_in: str): 183 + """Reads a C source file and gets identifiers""" 184 + self.data = "" 185 + is_enum = False 186 + is_comment = False 187 + multiline = "" 188 + 189 + with open(file_in, "r", 190 + encoding="utf-8", errors="backslashreplace") as f: 191 + for line_no, line in enumerate(f): 192 + self.store_line(line) 193 + line = line.strip("\n") 194 + 195 + # Handle continuation lines: strip the trailing backslash 196 + if line.endswith("\\"): 197 + multiline += line[:-1] 198 + continue 199 + 200 + if multiline: 201 + line = multiline + line 202 + multiline = "" 203 + 204 + # Handle comments.
They can be multi-line 205 + if not is_comment: 206 + if re.search(r"/\*.*", line): 207 + is_comment = True 208 + else: 209 + # Strip C99-style comments 210 + line = re.sub(r"(//.*)", "", line) 211 + 212 + if is_comment: 213 + if re.search(r".*\*/", line): 214 + is_comment = False 215 + else: 216 + multiline = line 217 + continue 218 + 219 + # At this point, the line variable may be a multi-line statement, 220 + # if lines end with \ or if they have multi-line comments. 221 + # With that, we can safely remove entire comments, 222 + # and there's no need to use re.DOTALL for the logic below 223 + 224 + line = re.sub(r"(/\*.*\*/)", "", line) 225 + if not line.strip(): 226 + continue 227 + 228 + # It can be useful for debug purposes to print the file after 229 + # comments are stripped and multi-line statements grouped. 230 + if self.debug > 1: 231 + print(f"line {line_no + 1}: {line}") 232 + 233 + # Now the fun begins: parse each type and store it. 234 + 235 + # We opted for a two-pass parsing logic here because: 236 + # 1. it makes it easier to debug symbols that fail to parse; 237 + # 2. we want symbol replacement applied across the entire content, 238 + # not just where the symbol is detected. 239 + 240 + if is_enum: 241 + match = re.match(r"^\s*([_\w][\w\d_]+)\s*[\,=]?", line) 242 + if match: 243 + self.store_type("symbol", match.group(1)) 244 + if "}" in line: 245 + is_enum = False 246 + continue 247 + 248 + match = re.match(r"^\s*#\s*define\s+([\w_]+)\s+_IO", line) 249 + if match: 250 + self.store_type("ioctl", match.group(1), 251 + replace_underscores=False) 252 + continue 253 + 254 + match = re.match(r"^\s*#\s*define\s+([\w_]+)(\s+|$)", line) 255 + if match: 256 + self.store_type("define", match.group(1)) 257 + continue 258 + 259 + match = re.match(r"^\s*typedef\s+([_\w][\w\d_]+)\s+(.*)\s+([_\w][\w\d_]+);", 260 + line) 261 + if match: 262 + name = match.group(2).strip() 263 + symbol = match.group(3) 264 + self.store_type("typedef", symbol, ref_name=name) 265 + continue 266 + 267 + for re_enum in self.RE_ENUMS: 268 + match = re_enum.match(line) 269 + if match: 270 + self.store_type("enum", match.group(1)) 271 + is_enum = True 272 + break 273 + 274 + for re_struct in self.RE_STRUCTS: 275 + match = re_struct.match(line) 276 + if match: 277 + self.store_type("struct", match.group(1)) 278 + break 279 + 280 + def process_exceptions(self, fname: str): 281 + """ 282 + Process an exceptions file with rules to ignore or replace references.
283 + """ 284 + if not fname: 285 + return 286 + 287 + name = os.path.basename(fname) 288 + 289 + with open(fname, "r", encoding="utf-8", errors="backslashreplace") as f: 290 + for ln, line in enumerate(f): 291 + ln += 1 292 + line = line.strip() 293 + if not line or line.startswith("#"): 294 + continue 295 + 296 + # Handle ignore rules 297 + match = re.match(r"^ignore\s+(\w+)\s+(\S+)", line) 298 + if match: 299 + c_type = match.group(1) 300 + symbol = match.group(2) 301 + 302 + if c_type not in self.DEF_SYMBOL_TYPES: 303 + sys.exit(f"{name}:{ln}: {c_type} is invalid") 304 + 305 + d = self.symbols[c_type] 306 + if symbol in d: 307 + del d[symbol] 308 + 309 + continue 310 + 311 + # Handle replace rules 312 + match = re.match(r"^replace\s+(\S+)\s+(\S+)\s+(\S+)", line) 313 + if not match: 314 + sys.exit(f"{name}:{ln}: invalid line: {line}") 315 + 316 + c_type, old, new = match.groups() 317 + 318 + if c_type not in self.DEF_SYMBOL_TYPES: 319 + sys.exit(f"{name}:{ln}: {c_type} is invalid") 320 + 321 + reftype = None 322 + 323 + # Parse reference type when the type is specified 324 + 325 + match = re.match(r"^\:c\:(data|func|macro|type)\:\`(.+)\`", new) 326 + if match: 327 + reftype = f":c:{match.group(1)}" 328 + new = match.group(2) 329 + else: 330 + match = re.search(r"(\:ref)\:\`(.+)\`", new) 331 + if match: 332 + reftype = match.group(1) 333 + new = match.group(2) 334 + 335 + # If the replacement rule doesn't have a type, get default 336 + if not reftype: 337 + reftype = self.DEF_SYMBOL_TYPES[c_type].get("ref_type") 338 + if not reftype: 339 + reftype = self.DEF_SYMBOL_TYPES[c_type].get("real_type") 340 + 341 + new_ref = f"{reftype}:`{old} <{new}>`" 342 + 343 + # Change self.symbols to use the replacement rule 344 + if old in self.symbols[c_type]: 345 + self.symbols[c_type][old] = new_ref 346 + else: 347 + print(f"{name}:{ln}: Warning: can't find {old} {c_type}") 348 + 349 + def debug_print(self): 350 + """ 351 + Print debug information containing the replacement 
rules per symbol. 352 + To make them easier to check, group them per type. 353 + """ 354 + if not self.debug: 355 + return 356 + 357 + for c_type, refs in self.symbols.items(): 358 + if not refs: # Skip empty dictionaries 359 + continue 360 + 361 + print(f"{c_type}:") 362 + 363 + for symbol, ref in sorted(refs.items()): 364 + print(f" {symbol} -> {ref}") 365 + 366 + print() 367 + 368 + def gen_output(self): 369 + """Generate the formatted output text.""" 370 + 371 + # Avoid extra blank lines 372 + text = re.sub(r"\s+$", "", self.data) + "\n" 373 + text = re.sub(r"\n\s+\n", "\n\n", text) 374 + 375 + # Escape Sphinx special characters 376 + text = re.sub(r"([\_\`\*\<\>\&\\\\:\/\|\%\$\#\{\}\~\^])", r"\\\1", text) 377 + 378 + # Source uAPI files may have special notes. Use bold font for them 379 + text = re.sub(r"DEPRECATED", "**DEPRECATED**", text) 380 + 381 + # Delimiters to catch the entire symbol after it is escaped 382 + start_delim = r"([ \n\t\(=\*\@])" 383 + end_delim = r"(\s|,|\\=|\\:|\;|\)|\}|\{)" 384 + 385 + # Process all reference types 386 + for ref_dict in self.symbols.values(): 387 + for symbol, replacement in ref_dict.items(): 388 + symbol = re.escape(re.sub(r"([\_\`\*\<\>\&\\\\:\/])", r"\\\1", symbol)) 389 + text = re.sub(fr'{start_delim}{symbol}{end_delim}', 390 + fr'\1{replacement}\2', text) 391 + 392 + # Remove "\ " where not needed: before spaces and at the end of lines 393 + text = re.sub(r"\\ ([\n ])", r"\1", text) 394 + text = re.sub(r" \\ ", " ", text) 395 + 396 + return text 397 + 398 + def gen_toc(self): 399 + """ 400 + Create a TOC table pointing to each symbol from the header 401 + """ 402 + text = [] 403 + 404 + # Add header 405 + text.append("..
contents:: Table of Contents") 406 + text.append(" :depth: 2") 407 + text.append(" :local:") 408 + text.append("") 409 + 410 + # Sort symbol types per description 411 + symbol_descriptions = [] 412 + for k, v in self.DEF_SYMBOL_TYPES.items(): 413 + symbol_descriptions.append((v['description'], k)) 414 + 415 + symbol_descriptions.sort() 416 + 417 + # Process each category 418 + for description, c_type in symbol_descriptions: 419 + 420 + refs = self.symbols[c_type] 421 + if not refs: # Skip empty categories 422 + continue 423 + 424 + text.append(f"{description}") 425 + text.append("-" * len(description)) 426 + text.append("") 427 + 428 + # Sort symbols alphabetically 429 + for symbol, ref in sorted(refs.items()): 430 + text.append(f"* :{ref}:") 431 + 432 + text.append("") # Add empty line between categories 433 + 434 + return "\n".join(text) 435 + 436 + def write_output(self, file_in: str, file_out: str, toc: bool): 437 + title = os.path.basename(file_in) 438 + 439 + if toc: 440 + text = self.gen_toc() 441 + else: 442 + text = self.gen_output() 443 + 444 + with open(file_out, "w", encoding="utf-8", errors="backslashreplace") as f: 445 + f.write(".. -*- coding: utf-8; mode: rst -*-\n\n") 446 + f.write(f"{title}\n") 447 + f.write("=" * len(title) + "\n\n") 448 + 449 + if not toc: 450 + f.write(".. parsed-literal::\n\n") 451 + 452 + f.write(text)
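The exceptions-file grammar handled by process_exceptions() is small enough to sketch on its own. A simplified parser for the two rule kinds, with error handling reduced to a ValueError (the real code exits with a file:line message):

```python
import re

def parse_rule(line):
    """Parse one exceptions-file rule into ("ignore", type, symbol) or
    ("replace", type, old, new); comments and blank lines yield None."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    m = re.match(r"^ignore\s+(\w+)\s+(\S+)", line)
    if m:
        return ("ignore", m.group(1), m.group(2))
    m = re.match(r"^replace\s+(\S+)\s+(\S+)\s+(\S+)", line)
    if m:
        return ("replace", m.group(1), m.group(2), m.group(3))
    raise ValueError(f"invalid rule: {line}")
```

Note that the replacement field is a single non-whitespace token, so a full Sphinx reference like `` :c:type:`v4l2_event_motion_det` `` is captured whole and inspected for an explicit reference type afterwards.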
+60
tools/docs/parse-headers.py
··· 1 + #!/usr/bin/env python3 2 + # SPDX-License-Identifier: GPL-2.0 3 + # Copyright (c) 2016, 2025 by Mauro Carvalho Chehab <mchehab@kernel.org>. 4 + # pylint: disable=C0103 5 + 6 + """ 7 + Convert a C header or source file ``FILE_IN`` into ReStructured Text, 8 + included via a .. parsed-literal:: block, with cross-references to the 9 + documentation files that describe the API. It accepts an optional 10 + ``FILE_RULES`` file to describe which elements will either be ignored or 11 + be pointed to a non-default reference type/name. 12 + 13 + The output is written to ``FILE_OUT``. 14 + 15 + It is capable of identifying defines, functions, structs, typedefs, 16 + enums and enum symbols, creating cross-references for all of them. 17 + It is also capable of distinguishing a #define used for specifying a Linux 18 + ioctl. 19 + 20 + The optional ``FILE_RULES`` contains a set of rules like: 21 + 22 + ignore ioctl VIDIOC_ENUM_FMT 23 + replace ioctl VIDIOC_DQBUF vidioc_qbuf 24 + replace define V4L2_EVENT_MD_FL_HAVE_FRAME_SEQ :c:type:`v4l2_event_motion_det` 25 + """ 26 + 27 + import argparse 28 + 29 + from lib.parse_data_structs import ParseDataStructs 30 + from lib.enrich_formatter import EnrichFormatter 31 + 32 + def main(): 33 + """Main function""" 34 + parser = argparse.ArgumentParser(description=__doc__, 35 + formatter_class=EnrichFormatter) 36 + 37 + parser.add_argument("-d", "--debug", action="count", default=0, 38 + help="Increase debug level.
Can be used multiple times") 39 + parser.add_argument("-t", "--toc", action="store_true", 40 + help="instead of a literal block, output a TOC table in the RST file") 41 + 42 + parser.add_argument("file_in", help="Input C file") 43 + parser.add_argument("file_out", help="Output RST file") 44 + parser.add_argument("file_rules", nargs="?", 45 + help="Exceptions file (optional)") 46 + 47 + args = parser.parse_args() 48 + 49 + parser = ParseDataStructs(debug=args.debug) 50 + parser.parse_file(args.file_in) 51 + 52 + if args.file_rules: 53 + parser.process_exceptions(args.file_rules) 54 + 55 + parser.debug_print() 56 + parser.write_output(args.file_in, args.file_out, args.toc) 57 + 58 + 59 + if __name__ == "__main__": 60 + main()