Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'docs-6.17' of git://git.lwn.net/linux

Pull documentation updates from Jonathan Corbet:
"It has been a relatively busy cycle for docs, especially the build
system:

- The Perl kernel-doc script was added to 2.3.52pre1 just after the
turn of the millennium. Over the following 25 years, it accumulated
a vast amount of cruft, all in a language few people want to deal
with anymore. Mauro's Python replacement in 6.16 faithfully
reproduced all of the cruft in the hope of avoiding regressions.

Now that we have a more reasonable code base, though, we can work
on cleaning it up; many of the changes this time around are toward
that end.

- A reorganization of the ext4 docs into the usual TOC format.

- Various Chinese translations and updates.

- A new script from Mauro to help with docs-build testing.

- A new document for linked lists.

- A sweep through MAINTAINERS fixing broken GitHub git:// repository
links.

...and lots of fixes and updates"

* tag 'docs-6.17' of git://git.lwn.net/linux: (147 commits)
scripts: add origin commit identification based on specific patterns
sphinx: kernel_abi: fix performance regression with O=<dir>
Documentation: core-api: entry: Replace deprecated KVM entry/exit functions
docs: fault-injection: drop reference to md-faulty
docs: document linked lists
scripts: kdoc: make it backward-compatible with Python 3.7
docs: kernel-doc: emit warnings for ancient versions of Python
Documentation/rtla: Describe exit status
Documentation/rtla: Add include common_appendix.rst
docs: kernel: Clarify printk_ratelimit_burst reset behavior
Documentation: ioctl-number: Don't repeat macro names
Documentation: ioctl-number: Shorten macros table
Documentation: ioctl-number: Correct full path to papr-physical-attestation.h
Documentation: ioctl-number: Extend "Include File" column width
Documentation: ioctl-number: Fix linuxppc-dev mailto link
overlayfs.rst: fix typos
docs: kdoc: emit a warning for ancient versions of Python
docs: kdoc: clean up check_sections()
docs: kdoc: directly access the always-there KdocItem fields
docs: kdoc: straighten up dump_declaration()
...

+3754 -1445
.gitignore (+1)

···
 !.gitignore
 !.kunitconfig
 !.mailmap
+!.pylintrc
 !.rustfmt.toml
 
 #
Documentation/ABI/README (+3 -1)

···
 
 What:           Short description of the interface
 Date:           Date created
-KernelVersion:  Kernel version this feature first showed up in.
+KernelVersion:  (Optional) Kernel version this feature first showed up in.
+                Note: git history often provides more accurate version
+                info, so this field may be omitted.
 Contact:        Primary contact for this interface (may be a mailing list)
 Description:    Long description of the interface and how to use it.
 Users:          All users of this interface who wish to be notified when
Documentation/Makefile (+2)

···
 # for cleaning
 subdir- := devicetree/bindings
 
+ifneq ($(MAKECMDGOALS),cleandocs)
 # Check for broken documentation file references
 ifeq ($(CONFIG_WARN_MISSING_DOCUMENTS),y)
 $(shell $(srctree)/scripts/documentation-file-ref-check --warn)
···
 # Check for broken ABI files
 ifeq ($(CONFIG_WARN_ABI_ERRORS),y)
 $(shell $(srctree)/scripts/get_abi.py --dir $(srctree)/Documentation/ABI validate)
+endif
 endif
 
 # You can set these variables from the command line.
Documentation/admin-guide/bootconfig.rst (+1 -1)

···
 Config File Limitation
 ======================
 
-Currently the maximum config size size is 32KB and the total key-words (not
+Currently the maximum config size is 32KB and the total key-words (not
 key-value entries) must be under 1024 nodes.
 Note: this is not the number of entries but nodes, an entry must consume
 more than 2 nodes (a key-word and a value). So theoretically, it will be
Documentation/admin-guide/sysctl/kernel.rst (+3 -1)

···
 %E      executable path
 %c      maximum size of core file by resource limit RLIMIT_CORE
 %C      CPU the task ran on
+%F      pidfd number
 %<OTHER> both are dropped
 ======== ==========================================
 
···
 While long term we enforce one message per `printk_ratelimit`_
 seconds, we do allow a burst of messages to pass through.
 ``printk_ratelimit_burst`` specifies the number of messages we can
-send before ratelimiting kicks in.
+send before ratelimiting kicks in. After `printk_ratelimit`_ seconds
+have elapsed, another burst of messages may be sent.
 
 The default value is 10 messages.
 
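The clarified ``printk_ratelimit_burst`` semantics above amount to a simple windowed rate limiter: up to ``burst`` messages per interval, with the allowance renewed once the interval has elapsed. A minimal userspace sketch of that behavior (illustrative only, not the kernel's implementation; the ``RateLimit`` class and its ``allow()`` method are hypothetical names):

```python
import time


class RateLimit:
    """Illustrative model of printk ratelimiting: allow up to `burst`
    messages per `interval` seconds; once the interval has elapsed,
    another burst may pass."""

    def __init__(self, interval=5, burst=10):
        self.interval = interval
        self.burst = burst
        self.begin = 0.0
        self.printed = 0

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if now >= self.begin + self.interval:
            self.begin = now     # interval elapsed: a new burst is allowed
            self.printed = 0
        if self.printed < self.burst:
            self.printed += 1
            return True
        return False             # suppressed until the interval elapses


rl = RateLimit(interval=5, burst=10)
results = [rl.allow(now=0.0) for _ in range(12)]  # first 10 pass, last 2 suppressed
later = rl.allow(now=6.0)                         # interval elapsed: allowed again
print(results.count(True), later)                 # → 10 True
```

The kernel additionally logs how many callbacks were suppressed when a new burst window opens; this sketch omits that detail.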
Documentation/arch/powerpc/index.rst (+1)

···
    elf_hwcaps
    elfnote
    firmware-assisted-dump
+   htm
    hvcs
    imc
    isa-versions
Documentation/conf.py (+226 -174)

···
-# -*- coding: utf-8 -*-
-#
-# The Linux Kernel documentation build configuration file, created by
-# sphinx-quickstart on Fri Feb 12 13:51:46 2016.
-#
-# This file is execfile()d with the current directory set to its
-# containing dir.
-#
-# Note that not all possible configuration values are present in this
-# autogenerated file.
-#
-# All configuration values have a default; values that are commented out
-# serve to show the default.
+# SPDX-License-Identifier: GPL-2.0-only
+# pylint: disable=C0103,C0209
 
-import sys
+"""
+The Linux Kernel documentation build configuration file.
+"""
+
 import os
-import sphinx
 import shutil
+import sys
+
+import sphinx
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+sys.path.insert(0, os.path.abspath("sphinx"))
+
+from load_config import loadConfig  # pylint: disable=C0413,E0401
+
+# Minimal supported version
+needs_sphinx = "3.4.3"
+
+# Get Sphinx version
+major, minor, patch = sphinx.version_info[:3]  # pylint: disable=I1101
+
+# Include_patterns were added on Sphinx 5.1
+if (major < 5) or (major == 5 and minor < 1):
+    has_include_patterns = False
+else:
+    has_include_patterns = True
+    # Include patterns that don't contain directory names, in glob format
+    include_patterns = ["**.rst"]
+
+# Location of Documentation/ directory
+doctree = os.path.abspath(".")
+
+# Exclude of patterns that don't contain directory names, in glob format.
+exclude_patterns = []
+
+# List of patterns that contain directory names in glob format.
+dyn_include_patterns = []
+dyn_exclude_patterns = ["output"]
+
+# Properly handle include/exclude patterns
+# ----------------------------------------
+
+def update_patterns(app, config):
+    """
+    On Sphinx, all directories are relative to what it is passed as
+    SOURCEDIR parameter for sphinx-build. Due to that, all patterns
+    that have directory names on it need to be dynamically set, after
+    converting them to a relative patch.
+
+    As Sphinx doesn't include any patterns outside SOURCEDIR, we should
+    exclude relative patterns that start with "../".
+    """
+
+    # setup include_patterns dynamically
+    if has_include_patterns:
+        for p in dyn_include_patterns:
+            full = os.path.join(doctree, p)
+
+            rel_path = os.path.relpath(full, start=app.srcdir)
+            if rel_path.startswith("../"):
+                continue
+
+            config.include_patterns.append(rel_path)
+
+    # setup exclude_patterns dynamically
+    for p in dyn_exclude_patterns:
+        full = os.path.join(doctree, p)
+
+        rel_path = os.path.relpath(full, start=app.srcdir)
+        if rel_path.startswith("../"):
+            continue
+
+        config.exclude_patterns.append(rel_path)
+
 
 # helper
 # ------
+
 
 def have_command(cmd):
     """Search ``cmd`` in the ``PATH`` environment.
···
     """
     return shutil.which(cmd) is not None
 
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-sys.path.insert(0, os.path.abspath('sphinx'))
-from load_config import loadConfig
 
 # -- General configuration ------------------------------------------------
 
-# If your documentation needs a minimal Sphinx version, state it here.
-needs_sphinx = '3.4.3'
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-extensions = ['kerneldoc', 'rstFlatTable', 'kernel_include',
-              'kfigure', 'sphinx.ext.ifconfig', 'automarkup',
-              'maintainers_include', 'sphinx.ext.autosectionlabel',
-              'kernel_abi', 'kernel_feat', 'translations']
+# Add any Sphinx extensions in alphabetic order
+extensions = [
+    "automarkup",
+    "kernel_abi",
+    "kerneldoc",
+    "kernel_feat",
+    "kernel_include",
+    "kfigure",
+    "maintainers_include",
+    "rstFlatTable",
+    "sphinx.ext.autosectionlabel",
+    "sphinx.ext.ifconfig",
+    "translations",
+]
 
 # Since Sphinx version 3, the C function parser is more pedantic with regards
 # to type checking. Due to that, having macros at c:function cause problems.
···
 # Load math renderer:
 # For html builder, load imgmath only when its dependencies are met.
 # mathjax is the default math renderer since Sphinx 1.8.
-have_latex = have_command('latex')
-have_dvipng = have_command('dvipng')
+have_latex = have_command("latex")
+have_dvipng = have_command("dvipng")
 load_imgmath = have_latex and have_dvipng
 
 # Respect SPHINX_IMGMATH (for html docs only)
-if 'SPHINX_IMGMATH' in os.environ:
-    env_sphinx_imgmath = os.environ['SPHINX_IMGMATH']
-    if 'yes' in env_sphinx_imgmath:
+if "SPHINX_IMGMATH" in os.environ:
+    env_sphinx_imgmath = os.environ["SPHINX_IMGMATH"]
+    if "yes" in env_sphinx_imgmath:
         load_imgmath = True
-    elif 'no' in env_sphinx_imgmath:
+    elif "no" in env_sphinx_imgmath:
         load_imgmath = False
     else:
         sys.stderr.write("Unknown env SPHINX_IMGMATH=%s ignored.\n" % env_sphinx_imgmath)
 
 if load_imgmath:
     extensions.append("sphinx.ext.imgmath")
-    math_renderer = 'imgmath'
+    math_renderer = "imgmath"
 else:
-    math_renderer = 'mathjax'
+    math_renderer = "mathjax"
 
 # Add any paths that contain templates here, relative to this directory.
-templates_path = ['sphinx/templates']
+templates_path = ["sphinx/templates"]
 
 # The suffix(es) of source filenames.
 # You can specify multiple suffix as a list of string:
···
 source_suffix = '.rst'
 
 # The encoding of source files.
-#source_encoding = 'utf-8-sig'
+# source_encoding = 'utf-8-sig'
 
 # The master toctree document.
-master_doc = 'index'
+master_doc = "index"
 
 # General information about the project.
-project = 'The Linux Kernel'
-copyright = 'The kernel development community'
-author = 'The kernel development community'
+project = "The Linux Kernel"
+copyright = "The kernel development community"  # pylint: disable=W0622
+author = "The kernel development community"
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
···
 try:
     makefile_version = None
     makefile_patchlevel = None
-    for line in open('../Makefile'):
-        key, val = [x.strip() for x in line.split('=', 2)]
-        if key == 'VERSION':
-            makefile_version = val
-        elif key == 'PATCHLEVEL':
-            makefile_patchlevel = val
-        if makefile_version and makefile_patchlevel:
-            break
-except:
+    with open("../Makefile", encoding="utf=8") as fp:
+        for line in fp:
+            key, val = [x.strip() for x in line.split("=", 2)]
+            if key == "VERSION":
+                makefile_version = val
+            elif key == "PATCHLEVEL":
+                makefile_patchlevel = val
+            if makefile_version and makefile_patchlevel:
+                break
+except Exception:
     pass
 finally:
     if makefile_version and makefile_patchlevel:
-        version = release = makefile_version + '.' + makefile_patchlevel
+        version = release = makefile_version + "." + makefile_patchlevel
     else:
         version = release = "unknown version"
 
-#
-# HACK: there seems to be no easy way for us to get at the version and
-# release information passed in from the makefile...so go pawing through the
-# command-line options and find it for ourselves.
-#
+
 def get_cline_version():
-    c_version = c_release = ''
+    """
+    HACK: There seems to be no easy way for us to get at the version and
+    release information passed in from the makefile...so go pawing through the
+    command-line options and find it for ourselves.
+    """
+
+    c_version = c_release = ""
     for arg in sys.argv:
-        if arg.startswith('version='):
+        if arg.startswith("version="):
             c_version = arg[8:]
-        elif arg.startswith('release='):
+        elif arg.startswith("release="):
             c_release = arg[8:]
     if c_version:
         if c_release:
-            return c_version + '-' + c_release
+            return c_version + "-" + c_release
         return c_version
-    return version # Whatever we came up with before
+    return version  # Whatever we came up with before
+
 
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.
 #
 # This is also used if you do content translation via gettext catalogs.
 # Usually you set "language" from the command line for these cases.
-language = 'en'
+language = "en"
 
 # There are two options for replacing |today|: either, you set today to some
 # non-false value, then it is used:
-#today = ''
+# today = ''
 # Else, today_fmt is used as the format for a strftime call.
-#today_fmt = '%B %d, %Y'
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-exclude_patterns = ['output']
+# today_fmt = '%B %d, %Y'
 
 # The reST default role (used for this markup: `text`) to use for all
 # documents.
-#default_role = None
+# default_role = None
 
 # If true, '()' will be appended to :func: etc. cross-reference text.
-#add_function_parentheses = True
+# add_function_parentheses = True
 
 # If true, the current module name will be prepended to all description
 # unit titles (such as .. function::).
-#add_module_names = True
+# add_module_names = True
 
 # If true, sectionauthor and moduleauthor directives will be shown in the
 # output. They are ignored by default.
-#show_authors = False
+# show_authors = False
 
 # The name of the Pygments (syntax highlighting) style to use.
-pygments_style = 'sphinx'
+pygments_style = "sphinx"
 
 # A list of ignored prefixes for module index sorting.
-#modindex_common_prefix = []
+# modindex_common_prefix = []
 
 # If true, keep warnings as "system message" paragraphs in the built documents.
-#keep_warnings = False
+# keep_warnings = False
 
 # If true, `todo` and `todoList` produce output, else they produce nothing.
 todo_include_todos = False
 
-primary_domain = 'c'
-highlight_language = 'none'
+primary_domain = "c"
+highlight_language = "none"
 
 # -- Options for HTML output ----------------------------------------------
 
···
 # a list of builtin themes.
 
 # Default theme
-html_theme = 'alabaster'
+html_theme = "alabaster"
 html_css_files = []
 
 if "DOCS_THEME" in os.environ:
     html_theme = os.environ["DOCS_THEME"]
 
-if html_theme == 'sphinx_rtd_theme' or html_theme == 'sphinx_rtd_dark_mode':
+if html_theme in ["sphinx_rtd_theme", "sphinx_rtd_dark_mode"]:
     # Read the Docs theme
     try:
         import sphinx_rtd_theme
+
         html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
 
         # Add any paths that contain custom static files (such as style sheets) here,
         # relative to this directory. They are copied after the builtin static files,
         # so a file named "default.css" will overwrite the builtin "default.css".
         html_css_files = [
-            'theme_overrides.css',
+            "theme_overrides.css",
         ]
 
         # Read the Docs dark mode override theme
-        if html_theme == 'sphinx_rtd_dark_mode':
+        if html_theme == "sphinx_rtd_dark_mode":
             try:
-                import sphinx_rtd_dark_mode
-                extensions.append('sphinx_rtd_dark_mode')
-            except ImportError:
-                html_theme == 'sphinx_rtd_theme'
+                import sphinx_rtd_dark_mode  # pylint: disable=W0611
 
-        if html_theme == 'sphinx_rtd_theme':
-            # Add color-specific RTD normal mode
-            html_css_files.append('theme_rtd_colors.css')
+                extensions.append("sphinx_rtd_dark_mode")
+            except ImportError:
+                html_theme = "sphinx_rtd_theme"
+
+        if html_theme == "sphinx_rtd_theme":
+            # Add color-specific RTD normal mode
+            html_css_files.append("theme_rtd_colors.css")
 
         html_theme_options = {
-            'navigation_depth': -1,
+            "navigation_depth": -1,
         }
 
     except ImportError:
-        html_theme = 'alabaster'
+        html_theme = "alabaster"
 
 if "DOCS_CSS" in os.environ:
     css = os.environ["DOCS_CSS"].split(" ")
···
     for l in css:
         html_css_files.append(l)
 
-if html_theme == 'alabaster':
+if html_theme == "alabaster":
     html_theme_options = {
-        'description': get_cline_version(),
-        'page_width': '65em',
-        'sidebar_width': '15em',
-        'fixed_sidebar': 'true',
-        'font_size': 'inherit',
-        'font_family': 'serif',
+        "description": get_cline_version(),
+        "page_width": "65em",
+        "sidebar_width": "15em",
+        "fixed_sidebar": "true",
+        "font_size": "inherit",
+        "font_family": "serif",
     }
 
 sys.stderr.write("Using %s theme\n" % html_theme)
···
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['sphinx-static']
+html_static_path = ["sphinx-static"]
 
 # If true, Docutils "smart quotes" will be used to convert quotes and dashes
 # to typographically correct entities. However, conversion of "--" to "—"
 # is not always what we want, so enable only quotes.
-smartquotes_action = 'q'
+smartquotes_action = "q"
 
 # Custom sidebar templates, maps document names to template names.
 # Note that the RTD theme ignores this
-html_sidebars = { '**': ['searchbox.html', 'kernel-toc.html', 'sourcelink.html']}
+html_sidebars = {"**": ["searchbox.html",
+                        "kernel-toc.html",
+                        "sourcelink.html"]}
 
 # about.html is available for alabaster theme. Add it at the front.
-if html_theme == 'alabaster':
-    html_sidebars['**'].insert(0, 'about.html')
+if html_theme == "alabaster":
+    html_sidebars["**"].insert(0, "about.html")
 
 # The name of an image file (relative to this directory) to place at the top
 # of the sidebar.
-html_logo = 'images/logo.svg'
+html_logo = "images/logo.svg"
 
 # Output file base name for HTML help builder.
-htmlhelp_basename = 'TheLinuxKerneldoc'
+htmlhelp_basename = "TheLinuxKerneldoc"
 
 # -- Options for LaTeX output ---------------------------------------------
 
 latex_elements = {
     # The paper size ('letterpaper' or 'a4paper').
-    'papersize': 'a4paper',
-
+    "papersize": "a4paper",
     # The font size ('10pt', '11pt' or '12pt').
-    'pointsize': '11pt',
-
+    "pointsize": "11pt",
     # Latex figure (float) alignment
-    #'figure_align': 'htbp',
-
+    # 'figure_align': 'htbp',
     # Don't mangle with UTF-8 chars
-    'inputenc': '',
-    'utf8extra': '',
-
+    "inputenc": "",
+    "utf8extra": "",
     # Set document margins
-    'sphinxsetup': '''
+    "sphinxsetup": """
         hmargin=0.5in, vmargin=1in,
         parsedliteralwraps=true,
         verbatimhintsturnover=false,
-    ''',
-
+    """,
     #
     # Some of our authors are fond of deep nesting; tell latex to
     # cope.
     #
-    'maxlistdepth': '10',
-
+    "maxlistdepth": "10",
     # For CJK One-half spacing, need to be in front of hyperref
-    'extrapackages': r'\usepackage{setspace}',
-
+    "extrapackages": r"\usepackage{setspace}",
     # Additional stuff for the LaTeX preamble.
-    'preamble': '''
+    "preamble": """
         % Use some font with UTF-8 support with XeLaTeX
         \\usepackage{fontspec}
         \\setsansfont{DejaVu Sans}
         \\setromanfont{DejaVu Serif}
         \\setmonofont{DejaVu Sans Mono}
-    ''',
+    """,
 }
 
 # Load kerneldoc specific LaTeX settings
-latex_elements['preamble'] += '''
+latex_elements["preamble"] += """
         % Load kerneldoc specific LaTeX settings
-        \\input{kerneldoc-preamble.sty}
-'''
-
-# With Sphinx 1.6, it is possible to change the Bg color directly
-# by using:
-#       \definecolor{sphinxnoteBgColor}{RGB}{204,255,255}
-#       \definecolor{sphinxwarningBgColor}{RGB}{255,204,204}
-#       \definecolor{sphinxattentionBgColor}{RGB}{255,255,204}
-#       \definecolor{sphinximportantBgColor}{RGB}{192,255,204}
-#
-# However, it require to use sphinx heavy box with:
-#
-#       \renewenvironment{sphinxlightbox} {%
-#               \\begin{sphinxheavybox}
-#       }
-#               \\end{sphinxheavybox}
-#       }
-#
-# Unfortunately, the implementation is buggy: if a note is inside a
-# table, it isn't displayed well. So, for now, let's use boring
-# black and white notes.
+        \\input{kerneldoc-preamble.sty}
+"""
 
 # Grouping the document tree into LaTeX files. List of tuples
 # (source start file, target name, title,
 # author, documentclass [howto, manual, or own class]).
 # Sorted in alphabetical order
-latex_documents = [
-]
+latex_documents = []
 
 # Add all other index files from Documentation/ subdirectories
-for fn in os.listdir('.'):
+for fn in os.listdir("."):
     doc = os.path.join(fn, "index")
     if os.path.exists(doc + ".rst"):
         has = False
···
                 has = True
                 break
         if not has:
-            latex_documents.append((doc, fn + '.tex',
-                                    'Linux %s Documentation' % fn.capitalize(),
-                                    'The kernel development community',
-                                    'manual'))
+            latex_documents.append(
+                (
+                    doc,
+                    fn + ".tex",
+                    "Linux %s Documentation" % fn.capitalize(),
+                    "The kernel development community",
+                    "manual",
+                )
+            )
 
 # The name of an image file (relative to this directory) to place at the top of
 # the title page.
-#latex_logo = None
+# latex_logo = None
 
 # For "manual" documents, if this is true, then toplevel headings are parts,
 # not chapters.
-#latex_use_parts = False
+# latex_use_parts = False
 
 # If true, show page references after internal links.
-#latex_show_pagerefs = False
+# latex_show_pagerefs = False
 
 # If true, show URL addresses after external links.
-#latex_show_urls = False
+# latex_show_urls = False
 
 # Documents to append as an appendix to all manuals.
-#latex_appendices = []
+# latex_appendices = []
 
 # If false, no module index is generated.
-#latex_domain_indices = True
+# latex_domain_indices = True
 
 # Additional LaTeX stuff to be copied to build directory
 latex_additional_files = [
-    'sphinx/kerneldoc-preamble.sty',
+    "sphinx/kerneldoc-preamble.sty",
 ]
 
···
 # One entry per manual page. List of tuples
 # (source start file, name, description, authors, manual section).
 man_pages = [
-    (master_doc, 'thelinuxkernel', 'The Linux Kernel Documentation',
-     [author], 1)
+    (master_doc, "thelinuxkernel", "The Linux Kernel Documentation", [author], 1)
 ]
 
 # If true, show URL addresses after external links.
-#man_show_urls = False
+# man_show_urls = False
 
 
 # -- Options for Texinfo output -------------------------------------------
···
 # Grouping the document tree into Texinfo files. List of tuples
 # (source start file, target name, title, author,
 #  dir menu entry, description, category)
-texinfo_documents = [
-    (master_doc, 'TheLinuxKernel', 'The Linux Kernel Documentation',
-     author, 'TheLinuxKernel', 'One line description of project.',
-     'Miscellaneous'),
-]
+texinfo_documents = [(
+    master_doc,
+    "TheLinuxKernel",
+    "The Linux Kernel Documentation",
+    author,
+    "TheLinuxKernel",
+    "One line description of project.",
+    "Miscellaneous",
+),]
 
 # -- Options for Epub output ----------------------------------------------
 
···
 epub_copyright = copyright
 
 # A list of files that should not be packed into the epub file.
-epub_exclude_files = ['search.html']
+epub_exclude_files = ["search.html"]
 
-#=======
+# =======
 # rst2pdf
 #
 # Grouping the document tree into PDF files. List of tuples
···
 # multiple PDF files here actually tries to get the cross-referencing right
 # *between* PDF files.
 pdf_documents = [
-    ('kernel-documentation', u'Kernel', u'Kernel', u'J. Random Bozo'),
+    ("kernel-documentation", "Kernel", "Kernel", "J. Random Bozo"),
 ]
 
 # kernel-doc extension configuration for running Sphinx directly (e.g. by Read
 # the Docs). In a normal build, these are supplied from the Makefile via command
 # line arguments.
-kerneldoc_bin = '../scripts/kernel-doc.py'
-kerneldoc_srctree = '..'
+kerneldoc_bin = "../scripts/kernel-doc.py"
+kerneldoc_srctree = ".."
 
 # ------------------------------------------------------------------------------
 # Since loadConfig overwrites settings from the global namespace, it has to be
 # the last statement in the conf.py file
 # ------------------------------------------------------------------------------
 loadConfig(globals())
+
+
+def setup(app):
+    """Patterns need to be updated at init time on older Sphinx versions"""
+
+    app.connect('config-inited', update_patterns)
Documentation/core-api/dma-api-howto.rst (+18 -18)

···
 
 Special note about PCI: PCI-X specification requires PCI-X devices to support
 64-bit addressing (DAC) for all transactions. And at least one platform (SGI
-SN2) requires 64-bit consistent allocations to operate correctly when the IO
+SN2) requires 64-bit coherent allocations to operate correctly when the IO
 bus is in PCI-X mode.
 
 For correct operation, you must set the DMA mask to inform the kernel about
···
 
     int dma_set_mask(struct device *dev, u64 mask);
 
-The setup for consistent allocations is performed via a call
+The setup for coherent allocations is performed via a call
 to dma_set_coherent_mask()::
 
     int dma_set_coherent_mask(struct device *dev, u64 mask);
···
 
 The coherent mask will always be able to set the same or a smaller mask as
 the streaming mask. However for the rare case that a device driver only
-uses consistent allocations, one would have to check the return value from
+uses coherent allocations, one would have to check the return value from
 dma_set_coherent_mask().
 
 Finally, if your device can only drive the low 24-bits of
···
 
 There are two types of DMA mappings:
 
-- Consistent DMA mappings which are usually mapped at driver
+- Coherent DMA mappings which are usually mapped at driver
   initialization, unmapped at the end and for which the hardware should
   guarantee that the device and the CPU can access the data
   in parallel and will see updates made by each other without any
   explicit software flushing.
 
-  Think of "consistent" as "synchronous" or "coherent".
+  Think of "coherent" as "synchronous".
 
-  The current default is to return consistent memory in the low 32
+  The current default is to return coherent memory in the low 32
   bits of the DMA space. However, for future compatibility you should
-  set the consistent mask even if this default is fine for your
+  set the coherent mask even if this default is fine for your
   driver.
 
-  Good examples of what to use consistent mappings for are:
+  Good examples of what to use coherent mappings for are:
 
   - Network card DMA ring descriptors.
   - SCSI adapter mailbox command data structures.
···
 
   The invariant these examples all require is that any CPU store
   to memory is immediately visible to the device, and vice
-  versa. Consistent mappings guarantee this.
+  versa. Coherent mappings guarantee this.
 
   .. important::
 
-           Consistent DMA memory does not preclude the usage of
+           Coherent DMA memory does not preclude the usage of
            proper memory barriers. The CPU may reorder stores to
-           consistent memory just as it may normal memory. Example:
+           coherent memory just as it may normal memory. Example:
            if it is important for the device to see the first word
            of a descriptor updated before the second, you must do
            something like::
···
 when the underlying buffers don't share cache lines with other data.
 
 
-Using Consistent DMA mappings
-=============================
+Using Coherent DMA mappings
+===========================
 
-To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
+To allocate and map large (PAGE_SIZE or so) coherent DMA regions,
 you should do::
 
     dma_addr_t dma_handle;
···
 driver needs regions sized smaller than a page, you may prefer using
 the dma_pool interface, described below.
 
-The consistent DMA mapping interfaces, will by default return a DMA address
+The coherent DMA mapping interfaces, will by default return a DMA address
 which is 32-bit addressable. Even if the device indicates (via the DMA mask)
-that it may address the upper 32-bits, consistent allocation will only
-return > 32-bit addresses for DMA if the consistent DMA mask has been
+that it may address the upper 32-bits, coherent allocation will only
+return > 32-bit addresses for DMA if the coherent DMA mask has been
 explicitly changed via dma_set_coherent_mask(). This is true of the
 dma_pool interface as well.
···
 kernel logs when the DMA controller hardware detects violation of the
 permission setting.
 
-Only streaming mappings specify a direction, consistent mappings
+Only streaming mappings specify a direction, coherent mappings
 implicitly have a direction attribute setting of
 DMA_BIDIRECTIONAL.
+74 -123
Documentation/core-api/dma-api.rst
··· 8 8 of the API (and actual examples), see Documentation/core-api/dma-api-howto.rst. 9 9 10 10 This API is split into two pieces. Part I describes the basic API. 11 - Part II describes extensions for supporting non-consistent memory 11 + Part II describes extensions for supporting non-coherent memory 12 12 machines. Unless you know that your driver absolutely has to support 13 - non-consistent platforms (this is usually only legacy platforms) you 13 + non-coherent platforms (this is usually only legacy platforms) you 14 14 should only use the API described in part I. 15 15 16 - Part I - dma_API 16 + Part I - DMA API 17 17 ---------------- 18 18 19 - To get the dma_API, you must #include <linux/dma-mapping.h>. This 19 + To get the DMA API, you must #include <linux/dma-mapping.h>. This 20 20 provides dma_addr_t and the interfaces described below. 21 21 22 22 A dma_addr_t can hold any valid DMA address for the platform. It can be ··· 33 33 dma_alloc_coherent(struct device *dev, size_t size, 34 34 dma_addr_t *dma_handle, gfp_t flag) 35 35 36 - Consistent memory is memory for which a write by either the device or 36 + Coherent memory is memory for which a write by either the device or 37 37 the processor can immediately be read by the processor or device 38 38 without having to worry about caching effects. (You may however need 39 39 to make sure to flush the processor's write buffers before telling 40 40 devices to read that memory.) 41 41 42 - This routine allocates a region of <size> bytes of consistent memory. 42 + This routine allocates a region of <size> bytes of coherent memory. 43 43 44 44 It returns a pointer to the allocated region (in the processor's virtual 45 45 address space) or NULL if the allocation failed. ··· 48 48 same width as the bus and given to the device as the DMA address base of 49 49 the region. 
50 50 51 - Note: consistent memory can be expensive on some platforms, and the 51 + Note: coherent memory can be expensive on some platforms, and the 52 52 minimum allocation length may be as big as a page, so you should 53 - consolidate your requests for consistent memory as much as possible. 53 + consolidate your requests for coherent memory as much as possible. 54 54 The simplest way to do that is to use the dma_pool calls (see below). 55 55 56 - The flag parameter (dma_alloc_coherent() only) allows the caller to 57 - specify the ``GFP_`` flags (see kmalloc()) for the allocation (the 58 - implementation may choose to ignore flags that affect the location of 59 - the returned memory, like GFP_DMA). 56 + The flag parameter allows the caller to specify the ``GFP_`` flags (see 57 + kmalloc()) for the allocation (the implementation may ignore flags that affect 58 + the location of the returned memory, like GFP_DMA). 60 59 61 60 :: 62 61 ··· 63 64 dma_free_coherent(struct device *dev, size_t size, void *cpu_addr, 64 65 dma_addr_t dma_handle) 65 66 66 - Free a region of consistent memory you previously allocated. dev, 67 - size and dma_handle must all be the same as those passed into 68 - dma_alloc_coherent(). cpu_addr must be the virtual address returned by 69 - the dma_alloc_coherent(). 67 + Free a previously allocated region of coherent memory. dev, size and dma_handle 68 + must all be the same as those passed into dma_alloc_coherent(). cpu_addr must 69 + be the virtual address returned by dma_alloc_coherent(). 70 70 71 - Note that unlike their sibling allocation calls, these routines 72 - may only be called with IRQs enabled. 71 + Note that unlike the sibling allocation call, this routine may only be called 72 + with IRQs enabled. 
73 73 74 74 75 75 Part Ib - Using small DMA-coherent buffers 76 76 ------------------------------------------ 77 77 78 - To get this part of the dma_API, you must #include <linux/dmapool.h> 78 + To get this part of the DMA API, you must #include <linux/dmapool.h> 79 79 80 80 Many drivers need lots of small DMA-coherent memory regions for DMA 81 81 descriptors or I/O buffers. Rather than allocating in units of a page ··· 83 85 not __get_free_pages(). Also, they understand common hardware constraints 84 86 for alignment, like queue heads needing to be aligned on N-byte boundaries. 85 87 88 + .. kernel-doc:: mm/dmapool.c 89 + :export: 86 90 87 - :: 88 - 89 - struct dma_pool * 90 - dma_pool_create(const char *name, struct device *dev, 91 - size_t size, size_t align, size_t alloc); 92 - 93 - dma_pool_create() initializes a pool of DMA-coherent buffers 94 - for use with a given device. It must be called in a context which 95 - can sleep. 96 - 97 - The "name" is for diagnostics (like a struct kmem_cache name); dev and size 98 - are like what you'd pass to dma_alloc_coherent(). The device's hardware 99 - alignment requirement for this type of data is "align" (which is expressed 100 - in bytes, and must be a power of two). If your device has no boundary 101 - crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated 102 - from this pool must not cross 4KByte boundaries. 103 - 104 - :: 105 - 106 - void * 107 - dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags, 108 - dma_addr_t *handle) 109 - 110 - Wraps dma_pool_alloc() and also zeroes the returned memory if the 111 - allocation attempt succeeded. 112 - 113 - 114 - :: 115 - 116 - void * 117 - dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags, 118 - dma_addr_t *dma_handle); 119 - 120 - This allocates memory from the pool; the returned memory will meet the 121 - size and alignment requirements specified at creation time. 
Pass 122 - GFP_ATOMIC to prevent blocking, or if it's permitted (not 123 - in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow 124 - blocking. Like dma_alloc_coherent(), this returns two values: an 125 - address usable by the CPU, and the DMA address usable by the pool's 126 - device. 127 - 128 - :: 129 - 130 - void 131 - dma_pool_free(struct dma_pool *pool, void *vaddr, 132 - dma_addr_t addr); 133 - 134 - This puts memory back into the pool. The pool is what was passed to 135 - dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what 136 - were returned when that routine allocated the memory being freed. 137 - 138 - :: 139 - 140 - void 141 - dma_pool_destroy(struct dma_pool *pool); 142 - 143 - dma_pool_destroy() frees the resources of the pool. It must be 144 - called in a context which can sleep. Make sure you've freed all allocated 145 - memory back to the pool before you destroy it. 91 + .. kernel-doc:: include/linux/dmapool.h 146 92 147 93 148 94 Part Ic - DMA addressing limitations 149 95 ------------------------------------ 96 + 97 + DMA mask is a bit mask of the addressable region for the device. In other words, 98 + if applying the DMA mask (a bitwise AND operation) to the DMA address of a 99 + memory region does not clear any bits in the address, then the device can 100 + perform DMA to that memory region. 101 + 102 + All the below functions which set a DMA mask may fail if the requested mask 103 + cannot be used with the device, or if the device is not capable of doing DMA. 150 104 151 105 :: 152 106 153 107 int 154 108 dma_set_mask_and_coherent(struct device *dev, u64 mask) 155 109 156 - Checks to see if the mask is possible and updates the device 157 - streaming and coherent DMA mask parameters if it is. 110 + Updates both streaming and coherent DMA masks. 158 111 159 112 Returns: 0 if successful and a negative error if not. 
160 113 ··· 114 165 int 115 166 dma_set_mask(struct device *dev, u64 mask) 116 167 117 - Checks to see if the mask is possible and updates the device 118 - parameters if it is. 168 + Updates only the streaming DMA mask. 119 169 120 170 Returns: 0 if successful and a negative error if not. 121 171 ··· 123 175 int 124 176 dma_set_coherent_mask(struct device *dev, u64 mask) 125 177 126 - Checks to see if the mask is possible and updates the device 127 - parameters if it is. 178 + Updates only the coherent DMA mask. 128 179 129 180 Returns: 0 if successful and a negative error if not. 130 181 ··· 178 231 unsigned long 179 232 dma_get_merge_boundary(struct device *dev); 180 233 181 - Returns the DMA merge boundary. If the device cannot merge any the DMA address 234 + Returns the DMA merge boundary. If the device cannot merge any DMA address 182 235 segments, the function returns 0. 183 236 184 237 Part Id - Streaming DMA mappings 185 238 -------------------------------- 239 + 240 + Streaming DMA allows to map an existing buffer for DMA transfers and then 241 + unmap it when finished. Map functions are not guaranteed to succeed, so the 242 + return value must be checked. 243 + 244 + .. note:: 245 + 246 + In particular, mapping may fail for memory not addressable by the 247 + device, e.g. if it is not within the DMA mask of the device and/or a 248 + connecting bus bridge. Streaming DMA functions try to overcome such 249 + addressing constraints, either by using an IOMMU (a device which maps 250 + I/O DMA addresses to physical memory addresses), or by copying the 251 + data to/from a bounce buffer if the kernel is configured with a 252 + :doc:`SWIOTLB <swiotlb>`. However, these methods are not always 253 + available, and even if they are, they may still fail for a number of 254 + reasons. 255 + 256 + In short, a device driver may need to be wary of where buffers are 257 + located in physical memory, especially if the DMA mask is less than 32 258 + bits. 
186 259 187 260 :: 188 261 ··· 213 246 Maps a piece of processor virtual memory so it can be accessed by the 214 247 device and returns the DMA address of the memory. 215 248 216 - The direction for both APIs may be converted freely by casting. 217 - However the dma_API uses a strongly typed enumerator for its 218 - direction: 249 + The DMA API uses a strongly typed enumerator for its direction: 219 250 220 251 ======================= ============================================= 221 252 DMA_NONE no direction (used for debugging) ··· 224 259 225 260 .. note:: 226 261 227 - Not all memory regions in a machine can be mapped by this API. 228 - Further, contiguous kernel virtual space may not be contiguous as 262 + Contiguous kernel virtual space may not be contiguous as 229 263 physical memory. Since this API does not provide any scatter/gather 230 264 capability, it will fail if the user tries to map a non-physically 231 265 contiguous piece of memory. For this reason, memory to be mapped by 232 266 this API should be obtained from sources which guarantee it to be 233 267 physically contiguous (like kmalloc). 234 - 235 - Further, the DMA address of the memory must be within the 236 - dma_mask of the device (the dma_mask is a bit mask of the 237 - addressable region for the device, i.e., if the DMA address of 238 - the memory ANDed with the dma_mask is still equal to the DMA 239 - address, then the device can perform DMA to the memory). To 240 - ensure that the memory allocated by kmalloc is within the dma_mask, 241 - the driver may specify various platform-dependent flags to restrict 242 - the DMA address range of the allocation (e.g., on x86, GFP_DMA 243 - guarantees to be within the first 16MB of available DMA addresses, 244 - as required by ISA devices). 245 - 246 - Note also that the above constraints on physical contiguity and 247 - dma_mask may not apply if the platform has an IOMMU (a device which 248 - maps an I/O DMA address to a physical memory address). 
However, to be 249 - portable, device driver writers may *not* assume that such an IOMMU 250 - exists. 251 268 252 269 .. warning:: 253 270 ··· 272 325 enum dma_data_direction direction) 273 326 274 327 Unmaps the region previously mapped. All the parameters passed in 275 - must be identical to those passed in (and returned) by the mapping 276 - API. 328 + must be identical to those passed to (and returned by) dma_map_single(). 277 329 278 330 :: 279 331 ··· 322 376 dma_map_sg(struct device *dev, struct scatterlist *sg, 323 377 int nents, enum dma_data_direction direction) 324 378 325 - Returns: the number of DMA address segments mapped (this may be shorter 326 - than <nents> passed in if some elements of the scatter/gather list are 327 - physically or virtually adjacent and an IOMMU maps them with a single 328 - entry). 379 + Maps a scatter/gather list for DMA. Returns the number of DMA address segments 380 + mapped, which may be smaller than <nents> passed in if several consecutive 381 + sglist entries are merged (e.g. with an IOMMU, or if some adjacent segments 382 + just happen to be physically contiguous). 329 383 330 384 Please note that the sg cannot be mapped again if it has been mapped once. 331 385 The mapping process is allowed to destroy information in the sg. ··· 349 403 where nents is the number of entries in the sglist. 350 404 351 405 The implementation is free to merge several consecutive sglist entries 352 - into one (e.g. with an IOMMU, or if several pages just happen to be 353 - physically contiguous) and returns the actual number of sg entries it 354 - mapped them to. On failure 0, is returned. 406 + into one. The returned number is the actual number of sg entries it 407 + mapped them to. On failure, 0 is returned. 355 408 356 409 Then you should loop count times (note: this can be less than nents times) 357 410 and use sg_dma_address() and sg_dma_len() macros where you previously ··· 720 775 of two for easy alignment. 
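A typical use of the streaming calls above, including the mandatory failure check, looks roughly like the fragment below. The surrounding function and buffer handling are invented for illustration; only dma_map_single(), dma_mapping_error() and dma_unmap_single() are the real API, and the fragment is not buildable outside a driver:

```c
/* Hypothetical transmit path; only the dma_* calls are the real API. */
static int foo_tx_one(struct device *dev, void *buf, size_t len)
{
	dma_addr_t dma;
	int ret;

	dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	ret = dma_mapping_error(dev, dma);
	if (ret)
		return ret;	/* mapping may legitimately fail; see above */

	/* ... hand 'dma' to the hardware and wait for completion ... */

	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
	return 0;
}
```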
721 776 722 777 723 - Part III - Debug drivers use of the DMA-API 778 + Part III - Debug drivers use of the DMA API 724 779 ------------------------------------------- 725 780 726 - The DMA-API as described above has some constraints. DMA addresses must be 781 + The DMA API as described above has some constraints. DMA addresses must be 727 782 released with the corresponding function with the same size for example. With 728 783 the advent of hardware IOMMUs it becomes more and more important that drivers 729 784 do not violate those constraints. In the worst case such a violation can 730 785 result in data corruption up to destroyed filesystems. 731 786 732 - To debug drivers and find bugs in the usage of the DMA-API checking code can 787 + To debug drivers and find bugs in the usage of the DMA API checking code can 733 788 be compiled into the kernel which will tell the developer about those 734 789 violations. If your architecture supports it you can select the "Enable 735 - debugging of DMA-API usage" option in your kernel configuration. Enabling this 790 + debugging of DMA API usage" option in your kernel configuration. Enabling this 736 791 option has a performance impact. Do not enable it in production kernels. 737 792 738 793 If you boot the resulting kernel, it will contain code which does some bookkeeping ··· 771 826 <EOI> <4>---[ end trace f6435a98e2a38c0e ]--- 772 827 773 828 The driver developer can find the driver and the device including a stacktrace 774 - of the DMA-API call which caused this warning. 829 + of the DMA API call which caused this warning. 775 830 776 831 By default, only the first error will result in a warning message. All other 777 832 errors will only be silently counted. This limitation exists to prevent the code 779 834 be disabled via debugfs. See the debugfs interface documentation below for 780 835 details. 781 836 782 - The debugfs directory for the DMA-API debugging code is called dma-api/. 
In 837 + The debugfs directory for the DMA API debugging code is called dma-api/. In 783 838 this directory the following files can currently be found: 784 839 785 840 =============================== =============================================== ··· 827 882 828 883 If you have this code compiled into your kernel it will be enabled by default. 829 884 If you want to boot without the bookkeeping anyway you can provide 830 - 'dma_debug=off' as a boot parameter. This will disable DMA-API debugging. 885 + 'dma_debug=off' as a boot parameter. This will disable DMA API debugging. 831 886 Notice that you can not enable it again at runtime. You have to reboot to do 832 887 so. 833 888 ··· 860 915 this flag is still set, prints warning message that includes call trace that 861 916 leads up to the unmap. This interface can be called from dma_mapping_error() 862 917 routines to enable DMA mapping error check debugging. 918 + 919 + Functions and structures 920 + ======================== 921 + 922 + .. kernel-doc:: include/linux/scatterlist.h 923 + .. kernel-doc:: lib/scatterlist.c
+3 -3
Documentation/core-api/entry.rst
··· 105 105 ensure that enter_from_user_mode() is called first on entry and 106 106 exit_to_user_mode() is called last on exit. 107 107 108 - Do not nest syscalls. Nested systcalls will cause RCU and/or context tracking 108 + Do not nest syscalls. Nested syscalls will cause RCU and/or context tracking 109 109 to print a warning. 110 110 111 111 KVM ··· 115 115 kernel point of view the CPU goes off into user space when entering the 116 116 guest and returns to the kernel on exit. 117 117 118 - kvm_guest_enter_irqoff() is a KVM-specific variant of exit_to_user_mode() 119 - and kvm_guest_exit_irqoff() is the KVM variant of enter_from_user_mode(). 118 + guest_state_enter_irqoff() is a KVM-specific variant of exit_to_user_mode() 119 + and guest_state_exit_irqoff() is the KVM variant of enter_from_user_mode(). 120 120 The state operations have the same ordering. 121 121 122 122 Task work handling is done separately for guest at the boundary of the
+1
Documentation/core-api/index.rst
··· 54 54 union_find 55 55 min_heap 56 56 parser 57 + list 57 58 58 59 Low level entry and exit 59 60 ========================
-6
Documentation/core-api/kernel-api.rst
··· 3 3 ==================== 4 4 5 5 6 - List Management Functions 7 - ========================= 8 - 9 - .. kernel-doc:: include/linux/list.h 10 - :internal: 11 - 12 6 Basic C Library Functions 13 7 ========================= 14 8
+776
Documentation/core-api/list.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0+ 2 + 3 + ===================== 4 + Linked Lists in Linux 5 + ===================== 6 + 7 + :Author: Nicolas Frattaroli <nicolas.frattaroli@collabora.com> 8 + 9 + .. contents:: 10 + 11 + Introduction 12 + ============ 13 + 14 + Linked lists are one of the most basic data structures used in many programs. 15 + The Linux kernel implements several different flavours of linked lists. The 16 + purpose of this document is not to explain linked lists in general, but to show 17 + new kernel developers how to use the Linux kernel implementations of linked 18 + lists. 19 + 20 + Please note that while linked lists certainly are ubiquitous, they are rarely 21 + the best data structure to use in cases where a simple array doesn't already 22 + suffice. In particular, due to their poor data locality, linked lists are a bad 23 + choice in situations where performance may be of consideration. Familiarizing 24 + oneself with other in-kernel generic data structures, especially for concurrent 25 + accesses, is highly encouraged. 26 + 27 + Linux implementation of doubly linked lists 28 + =========================================== 29 + 30 + Linux's linked list implementations can be used by including the header file 31 + ``<linux/list.h>``. 32 + 33 + The doubly-linked list will likely be the most familiar to many readers. It's a 34 + list that can efficiently be traversed forwards and backwards. 35 + 36 + The Linux kernel's doubly-linked list is circular in nature. This means that to 37 + get from the head node to the tail, we can just travel one edge backwards. 38 + Similarly, to get from the tail node to the head, we can simply travel forwards 39 + "beyond" the tail and arrive back at the head. 40 + 41 + Declaring a node 42 + ---------------- 43 + 44 + A node in a doubly-linked list is declared by adding a struct list_head 45 + member to the data structure you wish to be contained in the list: 46 + 47 + .. 
code-block:: c 48 + 49 + struct clown { 50 + unsigned long long shoe_size; 51 + const char *name; 52 + struct list_head node; /* the aforementioned member */ 53 + }; 54 + 55 + This may be an unfamiliar approach to some, as the classical explanation of a 56 + linked list is a list node data structure with pointers to the previous and next 57 + list node, as well the payload data. Linux chooses this approach because it 58 + allows for generic list modification code regardless of what data structure is 59 + contained within the list. Since the struct list_head member is not a pointer 60 + but part of the data structure proper, the container_of() pattern can be used by 61 + the list implementation to access the payload data regardless of its type, while 62 + staying oblivious to what said type actually is. 63 + 64 + Declaring and initializing a list 65 + --------------------------------- 66 + 67 + A doubly-linked list can then be declared as just another struct list_head, 68 + and initialized with the LIST_HEAD_INIT() macro during initial assignment, or 69 + with the INIT_LIST_HEAD() function later: 70 + 71 + .. code-block:: c 72 + 73 + struct clown_car { 74 + int tyre_pressure[4]; 75 + struct list_head clowns; /* Looks like a node! */ 76 + }; 77 + 78 + /* ... Somewhere later in our driver ... */ 79 + 80 + static int circus_init(struct circus_priv *circus) 81 + { 82 + struct clown_car other_car = { 83 + .tyre_pressure = {10, 12, 11, 9}, 84 + .clowns = LIST_HEAD_INIT(other_car.clowns) 85 + }; 86 + 87 + INIT_LIST_HEAD(&circus->car.clowns); 88 + 89 + return 0; 90 + } 91 + 92 + A further point of confusion to some may be that the list itself doesn't really 93 + have its own type. The concept of the entire linked list and a 94 + struct list_head member that points to other entries in the list are one and 95 + the same. 96 + 97 + Adding nodes to the list 98 + ------------------------ 99 + 100 + Adding a node to the linked list is done through the list_add() macro. 
101 + 102 + We'll return to our clown car example to illustrate how nodes get added to the 103 + list: 104 + 105 + .. code-block:: c 106 + 107 + static int circus_fill_car(struct circus_priv *circus) 108 + { 109 + struct clown_car *car = &circus->car; 110 + struct clown *grock; 111 + struct clown *dimitri; 112 + 113 + /* State 1 */ 114 + 115 + grock = kzalloc(sizeof(*grock), GFP_KERNEL); 116 + if (!grock) 117 + return -ENOMEM; 118 + grock->name = "Grock"; 119 + grock->shoe_size = 1000; 120 + 121 + /* Note that we're adding the "node" member */ 122 + list_add(&grock->node, &car->clowns); 123 + 124 + /* State 2 */ 125 + 126 + dimitri = kzalloc(sizeof(*dimitri), GFP_KERNEL); 127 + if (!dimitri) 128 + return -ENOMEM; 129 + dimitri->name = "Dimitri"; 130 + dimitri->shoe_size = 50; 131 + 132 + list_add(&dimitri->node, &car->clowns); 133 + 134 + /* State 3 */ 135 + 136 + return 0; 137 + } 138 + 139 + In State 1, our list of clowns is still empty:: 140 + 141 + .------. 142 + v | 143 + .--------. | 144 + | clowns |--' 145 + '--------' 146 + 147 + This diagram shows the singular "clowns" node pointing at itself. In this 148 + diagram, and all following diagrams, only the forward edges are shown, to aid in 149 + clarity. 150 + 151 + In State 2, we've added Grock after the list head:: 152 + 153 + .--------------------. 154 + v | 155 + .--------. .-------. | 156 + | clowns |---->| Grock |--' 157 + '--------' '-------' 158 + 159 + This diagram shows the "clowns" node pointing at a new node labeled "Grock". 160 + The Grock node is pointing back at the "clowns" node. 161 + 162 + In State 3, we've added Dimitri after the list head, resulting in the following:: 163 + 164 + .------------------------------------. 165 + v | 166 + .--------. .---------. .-------. 
| 167 + | clowns |---->| Dimitri |---->| Grock |--' 168 + '--------' '---------' '-------' 169 + 170 + This diagram shows the "clowns" node pointing at a new node labeled "Dimitri", 171 + which then points at the node labeled "Grock". The "Grock" node still points 172 + back at the "clowns" node. 173 + 174 + If we wanted to have Dimitri inserted at the end of the list instead, we'd use 175 + list_add_tail(). Our code would then look like this: 176 + 177 + .. code-block:: c 178 + 179 + static int circus_fill_car(struct circus_priv *circus) 180 + { 181 + /* ... */ 182 + 183 + list_add_tail(&dimitri->node, &car->clowns); 184 + 185 + /* State 3b */ 186 + 187 + return 0; 188 + } 189 + 190 + This results in the following list:: 191 + 192 + .------------------------------------. 193 + v | 194 + .--------. .-------. .---------. | 195 + | clowns |---->| Grock |---->| Dimitri |--' 196 + '--------' '-------' '---------' 197 + 198 + This diagram shows the "clowns" node pointing at the node labeled "Grock", 199 + which points at the new node labeled "Dimitri". The node labeled "Dimitri" 200 + points back at the "clowns" node. 201 + 202 + Traversing the list 203 + ------------------- 204 + 205 + To iterate the list, we can loop through all nodes within the list with 206 + list_for_each(). 207 + 208 + In our clown example, this results in the following somewhat awkward code: 209 + 210 + .. code-block:: c 211 + 212 + static unsigned long long circus_get_max_shoe_size(struct circus_priv *circus) 213 + { 214 + unsigned long long res = 0; 215 + struct clown *e; 216 + struct list_head *cur; 217 + 218 + list_for_each(cur, &circus->car.clowns) { 219 + e = list_entry(cur, struct clown, node); 220 + if (e->shoe_size > res) 221 + res = e->shoe_size; 222 + } 223 + 224 + return res; 225 + } 226 + 227 + The list_entry() macro internally uses the aforementioned container_of() to 228 + retrieve the data structure instance that ``node`` is a member of. 
229 + 230 + Note how the additional list_entry() call is a little awkward here. It's only 231 + there because we're iterating through the ``node`` members, but we really want 232 + to iterate through the payload, i.e. the ``struct clown`` that contains each 233 + node's struct list_head. For this reason, there is a second macro: 234 + list_for_each_entry() 235 + 236 + Using it would change our code to something like this: 237 + 238 + .. code-block:: c 239 + 240 + static unsigned long long circus_get_max_shoe_size(struct circus_priv *circus) 241 + { 242 + unsigned long long res = 0; 243 + struct clown *e; 244 + 245 + list_for_each_entry(e, &circus->car.clowns, node) { 246 + if (e->shoe_size > res) 247 + res = e->shoe_size; 248 + } 249 + 250 + return res; 251 + } 252 + 253 + This eliminates the need for the list_entry() step, and our loop cursor is now 254 + of the type of our payload. The macro is given the member name that corresponds 255 + to the list's struct list_head within the clown data structure so that it can 256 + still walk the list. 257 + 258 + Removing nodes from the list 259 + ---------------------------- 260 + 261 + The list_del() function can be used to remove entries from the list. It not only 262 + removes the given entry from the list, but poisons the entry's ``prev`` and 263 + ``next`` pointers, so that unintended use of the entry after removal does not 264 + go unnoticed. 265 + 266 + We can extend our previous example to remove one of the entries: 267 + 268 + .. code-block:: c 269 + 270 + static int circus_fill_car(struct circus_priv *circus) 271 + { 272 + /* ... */ 273 + 274 + list_add(&dimitri->node, &car->clowns); 275 + 276 + /* State 3 */ 277 + 278 + list_del(&dimitri->node); 279 + 280 + /* State 4 */ 281 + 282 + return 0; 283 + } 284 + 285 + The result of this would be this:: 286 + 287 + .--------------------. 288 + v | 289 + .--------. .-------. | .---------. 
290 + | clowns |---->| Grock |--' | Dimitri | 291 + '--------' '-------' '---------' 292 + 293 + This diagram shows the "clowns" node pointing at the node labeled "Grock", 294 + which points back at the "clowns" node. Off to the side is a lone node labeled 295 + "Dimitri", which has no arrows pointing anywhere. 296 + 297 + Note how the Dimitri node does not point to itself; its pointers are 298 + intentionally set to a "poison" value that the list code refuses to traverse. 299 + 300 + If we wanted to reinitialize the removed node instead to make it point at itself 301 + again like an empty list head, we can use list_del_init() instead: 302 + 303 + .. code-block:: c 304 + 305 + static int circus_fill_car(struct circus_priv *circus) 306 + { 307 + /* ... */ 308 + 309 + list_add(&dimitri->node, &car->clowns); 310 + 311 + /* State 3 */ 312 + 313 + list_del_init(&dimitri->node); 314 + 315 + /* State 4b */ 316 + 317 + return 0; 318 + } 319 + 320 + This results in the deleted node pointing to itself again:: 321 + 322 + .--------------------. .-------. 323 + v | v | 324 + .--------. .-------. | .---------. | 325 + | clowns |---->| Grock |--' | Dimitri |--' 326 + '--------' '-------' '---------' 327 + 328 + This diagram shows the "clowns" node pointing at the node labeled "Grock", 329 + which points back at the "clowns" node. Off to the side is a lone node labeled 330 + "Dimitri", which points to itself. 331 + 332 + Traversing whilst removing nodes 333 + -------------------------------- 334 + 335 + Deleting entries while we're traversing the list will cause problems if we use 336 + list_for_each() and list_for_each_entry(), as deleting the current entry would 337 + modify the ``next`` pointer of it, which means the traversal can't properly 338 + advance to the next list entry. 339 + 340 + There is a solution to this however: list_for_each_safe() and 341 + list_for_each_entry_safe(). 
These take an additional parameter of a pointer to 342 + a struct list_head to use as temporary storage for the next entry during 343 + iteration, solving the issue. 344 + 345 + An example of how to use it: 346 + 347 + .. code-block:: c 348 + 349 + static void circus_eject_insufficient_clowns(struct circus_priv *circus) 350 + { 351 + struct clown *e; 352 + struct clown *n; /* temporary storage for safe iteration */ 353 + 354 + list_for_each_entry_safe(e, n, &circus->car.clowns, node) { 355 + if (e->shoe_size < 500) 356 + list_del(&e->node); 357 + } 358 + } 359 + 360 + Proper memory management (i.e. freeing the deleted node while making sure 361 + nothing still references it) in this case is left as an exercise to the reader. 362 + 363 + Cutting a list 364 + -------------- 365 + 366 + There are two helper functions to cut lists with. Both take elements from the 367 + list ``head``, and replace the contents of the list ``list``. 368 + 369 + The first such function is list_cut_position(). It removes all list entries from 370 + ``head`` up to and including ``entry``, placing them in ``list`` instead. 371 + 372 + In this example, it's assumed we start with the following list:: 373 + 374 + .----------------------------------------------------------------. 375 + v | 376 + .--------. .-------. .---------. .-----. .---------. | 377 + | clowns |---->| Grock |---->| Dimitri |---->| Pic |---->| Alfredo |--' 378 + '--------' '-------' '---------' '-----' '---------' 379 + 380 + With the following code, every clown up to and including "Pic" is moved from 381 + the "clowns" list head to a separate struct list_head initialized at local 382 + stack variable ``retirement``: 383 + 384 + .. code-block:: c 385 + 386 + static void circus_retire_clowns(struct circus_priv *circus) 387 + { 388 + struct list_head retirement = LIST_HEAD_INIT(retirement); 389 + struct clown *grock, *dimitri, *pic, *alfredo; 390 + struct clown_car *car = &circus->car; 391 + 392 + /* ... 
clown initialization, list adding ... */ 393 + 394 + list_cut_position(&retirement, &car->clowns, &pic->node); 395 + 396 + /* State 1 */ 397 + } 398 + 399 + The resulting ``car->clowns`` list would be this:: 400 + 401 + .----------------------. 402 + v | 403 + .--------. .---------. | 404 + | clowns |---->| Alfredo |--' 405 + '--------' '---------' 406 + 407 + Meanwhile, the ``retirement`` list is transformed to the following:: 408 + 409 + .--------------------------------------------------. 410 + v | 411 + .------------. .-------. .---------. .-----. | 412 + | retirement |---->| Grock |---->| Dimitri |---->| Pic |--' 413 + '------------' '-------' '---------' '-----' 414 + 415 + The second function, list_cut_before(), is much the same, except it cuts before 416 + the ``entry`` node, i.e. it removes all list entries from ``head`` up to but 417 + excluding ``entry``, placing them in ``list`` instead. This example assumes the 418 + same initial starting list as the previous example: 419 + 420 + .. code-block:: c 421 + 422 + static void circus_retire_clowns(struct circus_priv *circus) 423 + { 424 + struct list_head retirement = LIST_HEAD_INIT(retirement); 425 + struct clown *grock, *dimitri, *pic, *alfredo; 426 + struct clown_car *car = &circus->car; 427 + 428 + /* ... clown initialization, list adding ... */ 429 + 430 + list_cut_before(&retirement, &car->clowns, &pic->node); 431 + 432 + /* State 1b */ 433 + } 434 + 435 + The resulting ``car->clowns`` list would be this:: 436 + 437 + .----------------------------------. 438 + v | 439 + .--------. .-----. .---------. | 440 + | clowns |---->| Pic |---->| Alfredo |--' 441 + '--------' '-----' '---------' 442 + 443 + Meanwhile, the ``retirement`` list is transformed to the following:: 444 + 445 + .--------------------------------------. 446 + v | 447 + .------------. .-------. .---------. 
| 448 + | retirement |---->| Grock |---->| Dimitri |--' 449 + '------------' '-------' '---------' 450 + 451 + It should be noted that both functions will destroy links to any existing nodes 452 + in the destination ``struct list_head *list``. 453 + 454 + Moving entries and partial lists 455 + -------------------------------- 456 + 457 + The list_move() and list_move_tail() functions can be used to move an entry 458 + from one list to another, to either the start or end respectively. 459 + 460 + In the following example, we'll assume we start with two lists ("clowns" and 461 + "sidewalk") in the following initial state "State 0":: 462 + 463 + .----------------------------------------------------------------. 464 + v | 465 + .--------. .-------. .---------. .-----. .---------. | 466 + | clowns |---->| Grock |---->| Dimitri |---->| Pic |---->| Alfredo |--' 467 + '--------' '-------' '---------' '-----' '---------' 468 + 469 + .-------------------. 470 + v | 471 + .----------. .-----. | 472 + | sidewalk |---->| Pio |--' 473 + '----------' '-----' 474 + 475 + We apply the following example code to the two lists: 476 + 477 + .. code-block:: c 478 + 479 + static void circus_clowns_exit_car(struct circus_priv *circus) 480 + { 481 + struct list_head sidewalk = LIST_HEAD_INIT(sidewalk); 482 + struct clown *grock, *dimitri, *pic, *alfredo, *pio; 483 + struct clown_car *car = &circus->car; 484 + 485 + /* ... clown initialization, list adding ... */ 486 + 487 + /* State 0 */ 488 + 489 + list_move(&pic->node, &sidewalk); 490 + 491 + /* State 1 */ 492 + 493 + list_move_tail(&dimitri->node, &sidewalk); 494 + 495 + /* State 2 */ 496 + } 497 + 498 + In State 1, we arrive at the following situation:: 499 + 500 + .-----------------------------------------------------. 501 + | | 502 + v | 503 + .--------. .-------. .---------. .---------. 
| 504 + | clowns |---->| Grock |---->| Dimitri |---->| Alfredo |--' 505 + '--------' '-------' '---------' '---------' 506 + 507 + .-------------------------------. 508 + v | 509 + .----------. .-----. .-----. | 510 + | sidewalk |---->| Pic |---->| Pio |--' 511 + '----------' '-----' '-----' 512 + 513 + In State 2, after we've moved Dimitri to the tail of sidewalk, the situation 514 + changes as follows:: 515 + 516 + .-------------------------------------. 517 + | | 518 + v | 519 + .--------. .-------. .---------. | 520 + | clowns |---->| Grock |---->| Alfredo |--' 521 + '--------' '-------' '---------' 522 + 523 + .-----------------------------------------------. 524 + v | 525 + .----------. .-----. .-----. .---------. | 526 + | sidewalk |---->| Pic |---->| Pio |---->| Dimitri |--' 527 + '----------' '-----' '-----' '---------' 528 + 529 + As long as the source and destination list head are part of the same list, we 530 + can also efficiently bulk move a segment of the list to the tail end of the 531 + list. We continue the previous example by adding a list_bulk_move_tail() after 532 + State 2, moving Pic and Pio to the tail end of the sidewalk list. 533 + 534 + .. code-block:: c 535 + 536 + static void circus_clowns_exit_car(struct circus_priv *circus) 537 + { 538 + struct list_head sidewalk = LIST_HEAD_INIT(sidewalk); 539 + struct clown *grock, *dimitri, *pic, *alfredo, *pio; 540 + struct clown_car *car = &circus->car; 541 + 542 + /* ... clown initialization, list adding ... */ 543 + 544 + /* State 0 */ 545 + 546 + list_move(&pic->node, &sidewalk); 547 + 548 + /* State 1 */ 549 + 550 + list_move_tail(&dimitri->node, &sidewalk); 551 + 552 + /* State 2 */ 553 + 554 + list_bulk_move_tail(&sidewalk, &pic->node, &pio->node); 555 + 556 + /* State 3 */ 557 + } 558 + 559 + For the sake of brevity, only the altered "sidewalk" list at State 3 is depicted 560 + in the following diagram:: 561 + 562 + .-----------------------------------------------. 
563 + v | 564 + .----------. .---------. .-----. .-----. | 565 + | sidewalk |---->| Dimitri |---->| Pic |---->| Pio |--' 566 + '----------' '---------' '-----' '-----' 567 + 568 + Do note that list_bulk_move_tail() does not do any checking as to whether all 569 + three supplied ``struct list_head *`` parameters really do belong to the same 570 + list. If you use it outside the constraints the documentation gives, then the 571 + result is a matter between you and the implementation. 572 + 573 + Rotating entries 574 + ---------------- 575 + 576 + A common write operation on lists, especially when using them as queues, is 577 + to rotate them. A list rotation means entries at the front are sent to the back. 578 + 579 + For rotation, Linux provides us with two functions: list_rotate_left() and 580 + list_rotate_to_front(). The former can be pictured like a bicycle chain, taking 581 + the entry after the supplied ``struct list_head *`` and moving it to the tail, 582 + which in essence means the entire list, due to its circular nature, rotates by 583 + one position. 584 + 585 + The latter, list_rotate_to_front(), takes the same concept one step further: 586 + instead of advancing the list by one entry, it advances it *until* the specified 587 + entry is the new front. 588 + 589 + In the following example, our starting state, State 0, is the following:: 590 + 591 + .-----------------------------------------------------------------. 592 + v | 593 + .--------. .-------. .---------. .-----. .---------. .-----. | 594 + | clowns |-->| Grock |-->| Dimitri |-->| Pic |-->| Alfredo |-->| Pio |-' 595 + '--------' '-------' '---------' '-----' '---------' '-----' 596 + 597 + The example code being used to demonstrate list rotations is the following: 598 + 599 + .. code-block:: c 600 + 601 + static void circus_clowns_rotate(struct circus_priv *circus) 602 + { 603 + struct clown *grock, *dimitri, *pic, *alfredo, *pio; 604 + struct clown_car *car = &circus->car; 605 + 606 + /* ... 
clown initialization, list adding ... */ 607 + 608 + /* State 0 */ 609 + 610 + list_rotate_left(&car->clowns); 611 + 612 + /* State 1 */ 613 + 614 + list_rotate_to_front(&alfredo->node, &car->clowns); 615 + 616 + /* State 2 */ 617 + 618 + } 619 + 620 + In State 1, we arrive at the following situation:: 621 + 622 + .-----------------------------------------------------------------. 623 + v | 624 + .--------. .---------. .-----. .---------. .-----. .-------. | 625 + | clowns |-->| Dimitri |-->| Pic |-->| Alfredo |-->| Pio |-->| Grock |-' 626 + '--------' '---------' '-----' '---------' '-----' '-------' 627 + 628 + Next, after the list_rotate_to_front() call, we arrive in the following 629 + State 2:: 630 + 631 + .-----------------------------------------------------------------. 632 + v | 633 + .--------. .---------. .-----. .-------. .---------. .-----. | 634 + | clowns |-->| Alfredo |-->| Pio |-->| Grock |-->| Dimitri |-->| Pic |-' 635 + '--------' '---------' '-----' '-------' '---------' '-----' 636 + 637 + As is hopefully evident from the diagrams, the entries in front of "Alfredo" 638 + were cycled to the tail end of the list. 639 + 640 + Swapping entries 641 + ---------------- 642 + 643 + Another common operation is that two entries need to be swapped with each other. 644 + 645 + For this, Linux provides us with list_swap(). 646 + 647 + In the following example, we have a list with three entries, and swap two of 648 + them. This is our starting state in "State 0":: 649 + 650 + .-----------------------------------------. 651 + v | 652 + .--------. .-------. .---------. .-----. | 653 + | clowns |-->| Grock |-->| Dimitri |-->| Pic |-' 654 + '--------' '-------' '---------' '-----' 655 + 656 + .. code-block:: c 657 + 658 + static void circus_clowns_swap(struct circus_priv *circus) 659 + { 660 + struct clown *grock, *dimitri, *pic; 661 + struct clown_car *car = &circus->car; 662 + 663 + /* ... clown initialization, list adding ... 
*/ 664 + 665 + /* State 0 */ 666 + 667 + list_swap(&dimitri->node, &pic->node); 668 + 669 + /* State 1 */ 670 + } 671 + 672 + The resulting list at State 1 is the following:: 673 + 674 + .-----------------------------------------. 675 + v | 676 + .--------. .-------. .-----. .---------. | 677 + | clowns |-->| Grock |-->| Pic |-->| Dimitri |-' 678 + '--------' '-------' '-----' '---------' 679 + 680 + As is evident by comparing the diagrams, the "Pic" and "Dimitri" nodes have 681 + traded places. 682 + 683 + Splicing two lists together 684 + --------------------------- 685 + 686 + Say we have two lists, in the following example one represented by a list head 687 + we call "knie" and one we call "stey". In a hypothetical circus acquisition, 688 + the two lists of clowns should be spliced together. The following is our 689 + situation in "State 0":: 690 + 691 + .-----------------------------------------. 692 + | | 693 + v | 694 + .------. .-------. .---------. .-----. | 695 + | knie |-->| Grock |-->| Dimitri |-->| Pic |--' 696 + '------' '-------' '---------' '-----' 697 + 698 + .-----------------------------. 699 + v | 700 + .------. .---------. .-----. | 701 + | stey |-->| Alfredo |-->| Pio |--' 702 + '------' '---------' '-----' 703 + 704 + The function to splice these two lists together is list_splice(). Our example 705 + code is as follows: 706 + 707 + .. code-block:: c 708 + 709 + static void circus_clowns_splice(void) 710 + { 711 + struct clown *grock, *dimitri, *pic, *alfredo, *pio; 712 + struct list_head knie = LIST_HEAD_INIT(knie); 713 + struct list_head stey = LIST_HEAD_INIT(stey); 714 + 715 + /* ... Clown allocation and initialization here ... 
*/ 716 + 717 + list_add_tail(&grock->node, &knie); 718 + list_add_tail(&dimitri->node, &knie); 719 + list_add_tail(&pic->node, &knie); 720 + list_add_tail(&alfredo->node, &stey); 721 + list_add_tail(&pio->node, &stey); 722 + 723 + /* State 0 */ 724 + 725 + list_splice(&stey, &dimitri->node); 726 + 727 + /* State 1 */ 728 + } 729 + 730 + The list_splice() call here adds all the entries in ``stey`` to the list 731 + ``dimitri``'s ``node`` list_head is in, after the ``node`` of ``dimitri``. A 732 + somewhat surprising diagram of the resulting "State 1" follows:: 733 + 734 + .-----------------------------------------------------------------. 735 + | | 736 + v | 737 + .------. .-------. .---------. .---------. .-----. .-----. | 738 + | knie |-->| Grock |-->| Dimitri |-->| Alfredo |-->| Pio |-->| Pic |--' 739 + '------' '-------' '---------' '---------' '-----' '-----' 740 + ^ 741 + .-------------------------------' 742 + | 743 + .------. | 744 + | stey |--' 745 + '------' 746 + 747 + Traversing the ``stey`` list no longer results in correct behavior. A call of 748 + list_for_each() on ``stey`` results in an infinite loop, as it never returns 749 + back to the ``stey`` list head. 750 + 751 + This is because list_splice() did not reinitialize the list_head it took 752 + entries from, leaving its pointer pointing into what is now a different list. 753 + 754 + If we want to avoid this situation, list_splice_init() can be used. It does the 755 + same thing as list_splice(), except it reinitializes the donor list_head after the 756 + transplant. 757 + 758 + Concurrency considerations 759 + -------------------------- 760 + 761 + Concurrent access and modification of a list needs to be protected with a lock 762 + in most cases. Alternatively and preferably, one may use the RCU primitives for 763 + lists in read-mostly use-cases, where read accesses to the list are common but 764 + modifications to the list less so. See Documentation/RCU/listRCU.rst for more 765 + details. 
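The locking pattern described in the concurrency section can be sketched in a self-contained userspace model. Note this is an illustrative approximation, not kernel code: a pthread mutex stands in for a kernel spinlock, ``struct list_head`` and list_add_tail() are re-implemented just enough to compile outside the kernel, and the ``clown_enter_car()``/``count_clowns()`` helpers are hypothetical names for this sketch:

```c
#include <pthread.h>
#include <stddef.h>

/* Minimal userspace model of the kernel's circular doubly linked list;
 * the real definitions live in include/linux/list.h. */
struct list_head {
	struct list_head *next, *prev;
};

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	/* Link "new" in just before the head, i.e. at the tail. */
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

struct clown {
	const char *name;
	struct list_head node;
};

/* One lock covers every reader and writer of the list; without RCU,
 * plain iteration must hold it too. */
static pthread_mutex_t clowns_lock = PTHREAD_MUTEX_INITIALIZER;
static struct list_head clowns = LIST_HEAD_INIT(clowns);

static void clown_enter_car(struct clown *c)
{
	pthread_mutex_lock(&clowns_lock);
	list_add_tail(&c->node, &clowns);
	pthread_mutex_unlock(&clowns_lock);
}

static int count_clowns(void)
{
	struct list_head *p;
	int n = 0;

	pthread_mutex_lock(&clowns_lock);
	for (p = clowns.next; p != &clowns; p = p->next)
		n++;
	pthread_mutex_unlock(&clowns_lock);
	return n;
}
```

In actual kernel code the same shape appears with spin_lock()/spin_unlock() (or a mutex) wrapped around list_add_tail() and list_for_each_entry(), unless the RCU list primitives are used instead.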
766 + 767 + Further reading 768 + --------------- 769 + 770 + * `How does the kernel implements Linked Lists? - KernelNewbies <https://kernelnewbies.org/FAQ/LinkedLists>`_ 771 + 772 + Full List API 773 + ============= 774 + 775 + .. kernel-doc:: include/linux/list.h 776 + :internal:
-6
Documentation/core-api/mm-api.rst
··· 91 91 .. kernel-doc:: mm/mempool.c 92 92 :export: 93 93 94 - DMA pools 95 - ========= 96 - 97 - .. kernel-doc:: mm/dmapool.c 98 - :export: 99 - 100 94 More Memory Management Functions 101 95 ================================ 102 96
+1 -1
Documentation/core-api/packing.rst
··· 319 319 320 320 #define SIZE 13 321 321 322 - typdef struct __packed { u8 buf[SIZE]; } packed_buf_t; 322 + typedef struct __packed { u8 buf[SIZE]; } packed_buf_t; 323 323 324 324 static const struct packed_field_u8 fields[] = { 325 325 PACKED_FIELD(100, 90, struct data, field1),
+23
Documentation/doc-guide/sphinx.rst
··· 131 131 ``--no-virtualenv`` 132 132 Use OS packaging for Sphinx instead of Python virtual environment. 133 133 134 + Installing Sphinx Minimal Version 135 + --------------------------------- 136 + 137 + When changing the Sphinx build system, it is important to ensure that 138 + the minimal version will still be supported. Nowadays, it is 139 + becoming harder to do that on modern distributions, as the minimal 140 + version cannot be installed with Python 3.13 and above. 141 + 142 + Testing with the lowest supported Python version as defined at 143 + Documentation/process/changes.rst can be done by creating 144 + a venv with it and installing the minimal requirements with:: 145 + 146 + /usr/bin/python3.9 -m venv sphinx_min 147 + . sphinx_min/bin/activate 148 + pip install -r Documentation/sphinx/min_requirements.txt 149 + 150 + A more comprehensive test can be done by using: 151 + 152 + scripts/test_doc_build.py 153 + 154 + This script creates one Python venv per supported version, 155 + optionally building documentation for a range of Sphinx versions. 156 + 134 157 135 158 Sphinx Build 136 159 ============
+1 -1
Documentation/driver-api/gpio/driver.rst
··· 750 750 - Test your driver with the appropriate in-kernel real-time test cases for both 751 751 level and edge IRQs 752 752 753 - * [1] http://www.spinics.net/lists/linux-omap/msg120425.html 753 + * [1] https://lore.kernel.org/r/1437496011-11486-1-git-send-email-bigeasy@linutronix.de/ 754 754 * [2] https://lore.kernel.org/r/1443209283-20781-2-git-send-email-grygorii.strashko@ti.com 755 755 * [3] https://lore.kernel.org/r/1443209283-20781-3-git-send-email-grygorii.strashko@ti.com 756 756
+1 -1
Documentation/fault-injection/fault-injection.rst
··· 2 2 Fault injection capabilities infrastructure 3 3 =========================================== 4 4 5 - See also drivers/md/md-faulty.c and "every_nth" module option for scsi_debug. 5 + See also "every_nth" module option for scsi_debug. 6 6 7 7 8 8 Available fault injection capabilities
-1
Documentation/filesystems/dax.rst
··· 206 206 implement direct_access. 207 207 208 208 These block devices may be used for inspiration: 209 - - brd: RAM backed block device driver 210 209 - pmem: NVDIMM persistent memory driver 211 210 212 211
+5 -5
Documentation/filesystems/ext4/atomic_writes.rst
··· 148 148 only required to handle a split extent across leaf blocks. 149 149 150 150 How to 151 - ------ 151 + ~~~~~~ 152 152 153 153 Creating Filesystems with Atomic Write Support 154 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 154 + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 155 155 156 156 First check the atomic write units supported by block device. 157 157 See :ref:`atomic_write_bdev_support` for more details. ··· 176 176 and ``-O bigalloc`` enables the bigalloc feature. 177 177 178 178 Application Interface 179 - ~~~~~~~~~~~~~~~~~~~~~ 179 + ^^^^^^^^^^^^^^^^^^^^^ 180 180 181 181 Applications can use the ``pwritev2()`` system call with the ``RWF_ATOMIC`` flag 182 182 to perform atomic writes: ··· 204 204 .. _atomic_write_bdev_support: 205 205 206 206 Hardware Support 207 - ---------------- 207 + ~~~~~~~~~~~~~~~~ 208 208 209 209 The underlying storage device must support atomic write operations. 210 210 Modern NVMe and SCSI devices often provide this capability. ··· 217 217 atomic writes. 218 218 219 219 See Also 220 - -------- 220 + ~~~~~~~~ 221 221 222 222 * :doc:`bigalloc` - Documentation on the bigalloc feature 223 223 * :doc:`allocators` - Documentation on block allocation in ext4
-7
Documentation/filesystems/ext4/bitmaps.rst
··· 19 19 the bitmaps and group descriptor live inside the group. Unfortunately, 20 20 ext2fs_test_block_bitmap2() will return '0' for those locations, 21 21 which produces confusing debugfs output. 22 - 23 - Inode Table 24 - ----------- 25 - Inode tables are statically allocated at mkfs time. Each block group 26 - descriptor points to the start of the table, and the superblock records 27 - the number of inodes per group. See the section on inodes for more 28 - information.
+7 -4
Documentation/filesystems/ext4/blockgroup.rst
··· 1 1 .. SPDX-License-Identifier: GPL-2.0 2 2 3 + Block Groups 4 + ------------ 5 + 3 6 Layout 4 - ------ 7 + ~~~~~~ 5 8 6 9 The layout of a standard block group is approximately as follows (each 7 10 of these fields is discussed in a separate section below): ··· 63 60 block maps, extent tree blocks, and extended attributes. 64 61 65 62 Flexible Block Groups 66 - --------------------- 63 + ~~~~~~~~~~~~~~~~~~~~~ 67 64 68 65 Starting in ext4, there is a new feature called flexible block groups 69 66 (flex_bg). In a flex_bg, several block groups are tied together as one ··· 81 78 flex_bg is given by 2 ^ ``sb.s_log_groups_per_flex``. 82 79 83 80 Meta Block Groups 84 - ----------------- 81 + ~~~~~~~~~~~~~~~~~ 85 82 86 83 Without the option META_BG, for safety concerns, all block group 87 84 descriptors copies are kept in the first block group. Given the default ··· 120 117 block and inode bitmaps. 121 118 122 119 Lazy Block Group Initialization 123 - ------------------------------- 120 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 124 121 125 122 A new feature for ext4 are three block group descriptor flags that 126 123 enable mkfs to skip initializing other parts of the block group
+6 -4
Documentation/filesystems/ext4/dynamic.rst
··· 6 6 Dynamic metadata are created on the fly when files and blocks are 7 7 allocated to files. 8 8 9 - .. include:: inodes.rst 10 - .. include:: ifork.rst 11 - .. include:: directory.rst 12 - .. include:: attributes.rst 9 + .. toctree:: 10 + 11 + inodes 12 + ifork 13 + directory 14 + attributes
+9 -6
Documentation/filesystems/ext4/globals.rst
··· 6 6 The filesystem is sharded into a number of block groups, each of which 7 7 have static metadata at fixed locations. 8 8 9 - .. include:: super.rst 10 - .. include:: group_descr.rst 11 - .. include:: bitmaps.rst 12 - .. include:: mmp.rst 13 - .. include:: journal.rst 14 - .. include:: orphan.rst 9 + .. toctree:: 10 + 11 + super 12 + group_descr 13 + bitmaps 14 + inode_table 15 + mmp 16 + journal 17 + orphan
+1 -1
Documentation/filesystems/ext4/index.rst
··· 5 5 =================================== 6 6 7 7 .. toctree:: 8 - :maxdepth: 6 8 + :maxdepth: 2 9 9 :numbered: 10 10 11 11 about
+9
Documentation/filesystems/ext4/inode_table.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Inode Table 4 + ----------- 5 + 6 + Inode tables are statically allocated at mkfs time. Each block group 7 + descriptor points to the start of the table, and the superblock records 8 + the number of inodes per group. See :doc:`inode documentation <inodes>` 9 + for more information on inode table layout.
+12 -10
Documentation/filesystems/ext4/overview.rst
··· 16 16 all fields in jbd2 (the journal) are written to disk in big-endian 17 17 order. 18 18 19 - .. include:: blocks.rst 20 - .. include:: blockgroup.rst 21 - .. include:: special_inodes.rst 22 - .. include:: allocators.rst 23 - .. include:: checksums.rst 24 - .. include:: bigalloc.rst 25 - .. include:: inlinedata.rst 26 - .. include:: eainode.rst 27 - .. include:: verity.rst 28 - .. include:: atomic_writes.rst 19 + .. toctree:: 20 + 21 + blocks 22 + blockgroup 23 + special_inodes 24 + allocators 25 + checksums 26 + bigalloc 27 + inlinedata 28 + eainode 29 + verity 30 + atomic_writes
+2 -2
Documentation/filesystems/f2fs.rst
··· 218 218 fragmentation/after-GC situation itself. The developers use these 219 219 modes to understand filesystem fragmentation/after-GC condition well, 220 220 and eventually get some insights to handle them better. 221 - In "fragment:segment", f2fs allocates a new segment in ramdom 221 + In "fragment:segment", f2fs allocates a new segment in random 222 222 position. With this, we can simulate the after-GC condition. 223 223 In "fragment:block", we can scatter block allocation with 224 224 "max_fragment_chunk" and "max_fragment_hole" sysfs nodes. ··· 261 261 The argument may be either "v1" or "v2", in order to 262 262 select the corresponding fscrypt policy version. 263 263 checkpoint=%s[:%u[%]] Set to "disable" to turn off checkpointing. Set to "enable" 264 - to reenable checkpointing. Is enabled by default. While 264 + to re-enable checkpointing. Is enabled by default. While 265 265 disabled, any unmounting or unexpected shutdowns will cause 266 266 the filesystem contents to appear as they did when the 267 267 filesystem was mounted with that option.
+13 -13
Documentation/filesystems/overlayfs.rst
··· 9 9 This document describes a prototype for a new approach to providing 10 10 overlay-filesystem functionality in Linux (sometimes referred to as 11 11 union-filesystems). An overlay-filesystem tries to present a 12 - filesystem which is the result over overlaying one filesystem on top 12 + filesystem which is the result of overlaying one filesystem on top 13 13 of the other. 14 14 15 15 ··· 61 61 |Configuration | Persistent | Uniform | st_ino == d_ino | d_ino == i_ino | 62 62 | | st_ino | st_dev | | [*] | 63 63 +==============+=====+======+=====+======+========+========+========+=======+ 64 - | | dir | !dir | dir | !dir | dir + !dir | dir | !dir | 64 + | | dir | !dir | dir | !dir | dir | !dir | dir | !dir | 65 65 +--------------+-----+------+-----+------+--------+--------+--------+-------+ 66 66 | All layers | Y | Y | Y | Y | Y | Y | Y | Y | 67 67 | on same fs | | | | | | | | | ··· 425 425 The "lower data" file can be on any lower layer, except from the top most 426 426 lower layer. 427 427 428 - Below the top most lower layer, any number of lower most layers may be defined 428 + Below the topmost lower layer, any number of lowermost layers may be defined 429 429 as "data-only" lower layers, using double colon ("::") separators. 430 430 A normal lower layer is not allowed to be below a data-only layer, so single 431 431 colon separators are not allowed to the right of double colon ("::") separators. ··· 445 445 446 446 Instead of explicitly enabling "metacopy=on" it is sufficient to specify at 447 447 least one data-only layer to enable redirection of data to a data-only layer. 448 - In this case other forms of metacopy are rejected. Note: this way data-only 449 - layers may be used toghether with "userxattr", in which case careful attention 448 + In this case other forms of metacopy are rejected. 
Note: this way, data-only 449 + layers may be used together with "userxattr", in which case careful attention 450 450 must be given to privileges needed to change the "user.overlay.redirect" xattr 451 451 to prevent misuse. 452 452 ··· 515 515 The metacopy digest is never generated or used. This is the 516 516 default if verity option is not specified. 517 517 - "on": 518 - Whenever a metacopy files specifies an expected digest, the 518 + Whenever a metacopy file specifies an expected digest, the 519 519 corresponding data file must match the specified digest. When 520 520 generating a metacopy file the verity digest will be set in it 521 521 based on the source file (if it has one). ··· 537 537 another overlay mount is not allowed and may fail with EBUSY. Using 538 538 partially overlapping paths is not allowed and may fail with EBUSY. 539 539 If files are accessed from two overlayfs mounts which share or overlap the 540 - upper layer and/or workdir path the behavior of the overlay is undefined, 540 + upper layer and/or workdir path, the behavior of the overlay is undefined, 541 541 though it will not result in a crash or deadlock. 542 542 543 543 Mounting an overlay using an upper layer path, where the upper layer path ··· 778 778 - "auto": (default) 779 779 UUID is taken from xattr "trusted.overlay.uuid" if it exists. 780 780 Upgrade to "uuid=on" on first time mount of new overlay filesystem that 781 - meets the prerequites. 781 + meets the prerequisites. 782 782 Downgrade to "uuid=null" for existing overlay filesystems that were never 783 783 mounted with "uuid=on". 784 784 ··· 794 794 The advantage of mounting with the "volatile" option is that all forms of 795 795 sync calls to the upper filesystem are omitted. 
796 796 797 - In order to avoid a giving a false sense of safety, the syncfs (and fsync) 797 + In order to avoid giving a false sense of safety, the syncfs (and fsync) 798 798 semantics of volatile mounts are slightly different than that of the rest of 799 799 VFS. If any writeback error occurs on the upperdir's filesystem after a 800 800 volatile mount takes place, all sync functions will return an error. Once this 801 801 condition is reached, the filesystem will not recover, and every subsequent sync 802 - call will return an error, even if the upperdir has not experience a new error 802 + call will return an error, even if the upperdir has not experienced a new error 803 803 since the last sync call. 804 804 805 805 When overlay is mounted with "volatile" option, the directory 806 806 "$workdir/work/incompat/volatile" is created. During next mount, overlay 807 807 checks for this directory and refuses to mount if present. This is a strong 808 - indicator that user should throw away upper and work directories and create 809 - fresh one. In very limited cases where the user knows that the system has 810 - not crashed and contents of upperdir are intact, The "volatile" directory 808 + indicator that the user should discard upper and work directories and create 809 + fresh ones. In very limited cases where the user knows that the system has 810 + not crashed and contents of upperdir are intact, the "volatile" directory 811 811 can be removed. 812 812 813 813
+1 -1
Documentation/filesystems/ubifs-authentication.rst
··· 443 443 444 444 [DM-VERITY] https://www.kernel.org/doc/Documentation/device-mapper/verity.rst 445 445 446 - [FSCRYPT-POLICY2] https://www.spinics.net/lists/linux-ext4/msg58710.html 446 + [FSCRYPT-POLICY2] https://lore.kernel.org/r/20171023214058.128121-1-ebiggers3@gmail.com/ 447 447 448 448 [UBIFS-WP] http://www.linux-mtd.infradead.org/doc/ubifs_whitepaper.pdf
+3 -3
Documentation/networking/device_drivers/ethernet/ti/cpsw.rst
··· 268 268 269 269 // Run your appropriate tools with socket option "SO_PRIORITY" 270 270 // to 3 for class A and/or to 2 for class B 271 - // (I took at https://www.spinics.net/lists/netdev/msg460869.html) 271 + // (I took at https://lore.kernel.org/r/20171017010128.22141-1-vinicius.gomes@intel.com/) 272 272 ./tsn_talker -d 18:03:73:66:87:42 -i eth0.100 -p3 -s 1500& 273 273 ./tsn_talker -d 18:03:73:66:87:42 -i eth0.100 -p2 -s 1500& 274 274 275 275 13) :: 276 276 277 277 // run your listener on workstation (should be in same vlan) 278 - // (I took at https://www.spinics.net/lists/netdev/msg460869.html) 278 + // (I took at https://lore.kernel.org/r/20171017010128.22141-1-vinicius.gomes@intel.com/) 279 279 ./tsn_listener -d 18:03:73:66:87:42 -i enp5s0 -s 1500 280 280 Receiving data rate: 39012 kbps 281 281 Receiving data rate: 39012 kbps ··· 555 555 20) :: 556 556 557 557 // run your listener on workstation (should be in same vlan) 558 - // (I took at https://www.spinics.net/lists/netdev/msg460869.html) 558 + // (I took at https://lore.kernel.org/r/20171017010128.22141-1-vinicius.gomes@intel.com/) 559 559 ./tsn_listener -d 18:03:73:66:87:42 -i enp5s0 -s 1500 560 560 Receiving data rate: 39012 kbps 561 561 Receiving data rate: 39012 kbps
-14
Documentation/process/changes.rst
··· 43 43 kmod 13 depmod -V 44 44 e2fsprogs 1.41.4 e2fsck -V 45 45 jfsutils 1.1.3 fsck.jfs -V 46 - reiserfsprogs 3.6.3 reiserfsck -V 47 46 xfsprogs 2.6.0 xfs_db -V 48 47 squashfs-tools 4.0 mksquashfs -version 49 48 btrfs-progs 0.18 btrfs --version ··· 260 261 - ``mkfs.jfs`` - create a JFS formatted partition. 261 262 262 263 - other file system utilities are also available in this package. 263 - 264 - Reiserfsprogs 265 - ------------- 266 - 267 - The reiserfsprogs package should be used for reiserfs-3.6.x 268 - (Linux kernels 2.4.x). It is a combined package and contains working 269 - versions of ``mkreiserfs``, ``resize_reiserfs``, ``debugreiserfs`` and 270 - ``reiserfsck``. These utils work on both i386 and alpha platforms. 271 264 272 265 Xfsprogs 273 266 -------- ··· 483 492 -------- 484 493 485 494 - <https://jfs.sourceforge.net/> 486 - 487 - Reiserfsprogs 488 - ------------- 489 - 490 - - <https://git.kernel.org/pub/scm/linux/kernel/git/jeffm/reiserfsprogs.git/> 491 495 492 496 Xfsprogs 493 497 --------
+4 -1
Documentation/process/coding-style.rst
··· 614 614 615 615 When commenting the kernel API functions, please use the kernel-doc format. 616 616 See the files at :ref:`Documentation/doc-guide/ <doc_guide>` and 617 - ``scripts/kernel-doc`` for details. 617 + ``scripts/kernel-doc`` for details. Note that the danger of over-commenting 618 + applies to kernel-doc comments all the same. Do not add boilerplate 619 + kernel-doc which simply reiterates what's obvious from the signature 620 + of the function. 618 621 619 622 The preferred style for long (multi-line) comments is: 620 623
+53 -24
Documentation/scheduler/sched-deadline.rst
··· 20 20 4.3 Default behavior 21 21 4.4 Behavior of sched_yield() 22 22 5. Tasks CPU affinity 23 - 5.1 SCHED_DEADLINE and cpusets HOWTO 23 + 5.1 Using cgroup v1 cpuset controller 24 + 5.2 Using cgroup v2 cpuset controller 24 25 6. Future plans 25 26 A. Test suite 26 27 B. Minimal main() ··· 672 671 5. Tasks CPU affinity 673 672 ===================== 674 673 675 - -deadline tasks cannot have an affinity mask smaller that the entire 676 - root_domain they are created on. However, affinities can be specified 677 - through the cpuset facility (Documentation/admin-guide/cgroup-v1/cpusets.rst). 674 + Deadline tasks cannot have a cpu affinity mask smaller than the root domain they 675 + are created on. So, using ``sched_setaffinity(2)`` won't work. Instead, 676 + the deadline task should be created in a restricted root domain. This can be 677 + done using the cpuset controller of either cgroup v1 (deprecated) or cgroup v2. 678 + See :ref:`Documentation/admin-guide/cgroup-v1/cpusets.rst <cpusets>` and 679 + :ref:`Documentation/admin-guide/cgroup-v2.rst <cgroup-v2>` for more information. 
678 680 679 - 5.1 SCHED_DEADLINE and cpusets HOWTO 680 - ------------------------------------ 681 + 5.1 Using cgroup v1 cpuset controller 682 + ------------------------------------- 681 683 682 - An example of a simple configuration (pin a -deadline task to CPU0) 683 - follows (rt-app is used to create a -deadline task):: 684 + An example of a simple configuration (pin a -deadline task to CPU0) follows:: 684 685 685 686 mkdir /dev/cpuset 686 687 mount -t cgroup -o cpuset cpuset /dev/cpuset ··· 695 692 echo 1 > cpu0/cpuset.cpu_exclusive 696 693 echo 1 > cpu0/cpuset.mem_exclusive 697 694 echo $$ > cpu0/tasks 698 - rt-app -t 100000:10000:d:0 -D5 # it is now actually superfluous to specify 699 - # task affinity 695 + chrt --sched-runtime 100000 --sched-period 200000 --deadline 0 yes > /dev/null 696 + 697 + 5.2 Using cgroup v2 cpuset controller 698 + ------------------------------------- 699 + 700 + Assuming the cgroup v2 root is mounted at ``/sys/fs/cgroup``. 701 + 702 + cd /sys/fs/cgroup 703 + echo '+cpuset' > cgroup.subtree_control 704 + mkdir deadline_group 705 + echo 0 > deadline_group/cpuset.cpus 706 + echo 'root' > deadline_group/cpuset.cpus.partition 707 + echo $$ > deadline_group/cgroup.procs 708 + chrt --sched-runtime 100000 --sched-period 200000 --deadline 0 yes > /dev/null 700 709 701 710 6. Future plans 702 711 =============== ··· 746 731 behaves under such workloads. In this way, results are easily reproducible. 747 732 rt-app is available at: https://github.com/scheduler-tools/rt-app. 748 733 749 - Thread parameters can be specified from the command line, with something like 750 - this:: 734 + rt-app does not accept command line arguments, and instead reads from a JSON 735 + configuration file. Here is an example ``config.json``: 751 736 752 - # rt-app -t 100000:10000:d -t 150000:20000:f:10 -D5 737 + .. code-block:: json 753 738 754 - The above creates 2 threads. The first one, scheduled by SCHED_DEADLINE, 755 - executes for 10ms every 100ms. 
The second one, scheduled at SCHED_FIFO 756 - priority 10, executes for 20ms every 150ms. The test will run for a total 757 - of 5 seconds. 739 + { 740 + "tasks": { 741 + "dl_task": { 742 + "policy": "SCHED_DEADLINE", 743 + "priority": 0, 744 + "dl-runtime": 10000, 745 + "dl-period": 100000, 746 + "dl-deadline": 100000 747 + }, 748 + "fifo_task": { 749 + "policy": "SCHED_FIFO", 750 + "priority": 10, 751 + "runtime": 20000, 752 + "sleep": 130000 753 + } 754 + }, 755 + "global": { 756 + "duration": 5 757 + } 758 + } 758 759 759 - More interestingly, configurations can be described with a json file that 760 - can be passed as input to rt-app with something like this:: 760 + On running ``rt-app config.json``, it creates 2 threads. The first one, 761 + scheduled by SCHED_DEADLINE, executes for 10ms every 100ms. The second one, 762 + scheduled at SCHED_FIFO priority 10, executes for 20ms every 150ms. The test 763 + will run for a total of 5 seconds. 761 764 762 - # rt-app my_config.json 763 - 764 - The parameters that can be specified with the second method are a superset 765 - of the command line options. Please refer to rt-app documentation for more 766 - details (`<rt-app-sources>/doc/*.json`). 765 + Please refer to the rt-app documentation for the JSON schema and more examples. 767 766 768 767 The second testing application is done using chrt which has support 769 768 for SCHED_DEADLINE.
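The runtime and period values passed to ``chrt`` in the hunks above are also what the scheduler's admission test consumes: the summed bandwidth of the -deadline tasks on a root domain must stay below a cap (95% by default, from ``sched_rt_runtime_us``/``sched_rt_period_us``). A toy Python model of that utilization check, with purely illustrative task parameters:

```python
# Toy model of the SCHED_DEADLINE admission test: the sum of
# runtime/period bandwidths on a root domain must stay under a cap
# (default 95%, from sched_rt_runtime_us / sched_rt_period_us).
# Task parameters are illustrative, not taken from a real system.

def admits(tasks, cap=0.95):
    """tasks: iterable of (runtime, period) pairs in the same time unit."""
    total = sum(runtime / period for runtime, period in tasks)
    return total <= cap

# The chrt example above: runtime 100000 every 200000 -> 50% bandwidth.
print(admits([(100000, 200000)]))                    # admitted
print(admits([(100000, 200000), (100000, 200000)]))  # 100% > 95% -> rejected
```

Note this ignores the deadline parameter and per-CPU scaling; it only illustrates the bandwidth bound the documentation describes.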
+31 -22
Documentation/scheduler/sched-stats.rst
··· 86 86 ----------------- 87 87 One of these is produced per domain for each cpu described. (Note that if 88 88 CONFIG_SMP is not defined, *no* domains are utilized and these lines 89 - will not appear in the output. <name> is an extension to the domain field 90 - that prints the name of the corresponding sched domain. It can appear in 91 - schedstat version 17 and above. 89 + will not appear in the output.) 92 90 93 91 domain<N> <name> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 94 92 95 - The first field is a bit mask indicating what cpus this domain operates over. 93 + The <name> field prints the name of the sched domain and is only supported 94 + with schedstat version >= 17. On previous versions, <cpumask> is the first 95 + field. 96 + 97 + The <cpumask> field is a bit mask indicating what cpus this domain operates 98 + over. 96 99 97 100 The next 33 are a variety of sched_balance_rq() statistics in grouped into types 98 101 of idleness (busy, idle and newly idle): ··· 106 103 load did not require balancing when busy 107 104 3) # of times in this domain sched_balance_rq() tried to move one or 108 105 more tasks and failed, when the cpu was busy 109 - 4) Total imbalance in load when the cpu was busy 110 - 5) Total imbalance in utilization when the cpu was busy 111 - 6) Total imbalance in number of tasks when the cpu was busy 112 - 7) Total imbalance due to misfit tasks when the cpu was busy 113 - 8) # of times in this domain pull_task() was called when busy 114 - 9) # of times in this domain pull_task() was called even though the 106 + 4) Total imbalance in load in this domain when the cpu was busy 107 + 5) Total imbalance in utilization in this domain when the cpu was busy 108 + 6) Total imbalance in number of tasks in this domain when the cpu was busy 109 + 7) Total imbalance due to misfit tasks in this domain when the cpu was 110 + busy 111 + 8) # of times in this 
domain detach_task() was called when busy 112 + 9) # of times in this domain detach_task() was called even though the 115 113 target task was cache-hot when busy 116 114 10) # of times in this domain sched_balance_rq() was called but did not 117 115 find a busier queue while the cpu was busy ··· 125 121 the load did not require balancing when the cpu was idle 126 122 14) # of times in this domain sched_balance_rq() tried to move one or 127 123 more tasks and failed, when the cpu was idle 128 - 15) Total imbalance in load when the cpu was idle 129 - 16) Total imbalance in utilization when the cpu was idle 130 - 17) Total imbalance in number of tasks when the cpu was idle 131 - 18) Total imbalance due to misfit tasks when the cpu was idle 132 - 19) # of times in this domain pull_task() was called when the cpu 124 + 15) Total imbalance in load in this domain when the cpu was idle 125 + 16) Total imbalance in utilization in this domain when the cpu was idle 126 + 17) Total imbalance in number of tasks in this domain when the cpu was idle 127 + 18) Total imbalance due to misfit tasks in this domain when the cpu was 128 + idle 129 + 19) # of times in this domain detach_task() was called when the cpu 133 130 was idle 134 - 20) # of times in this domain pull_task() was called even though 131 + 20) # of times in this domain detach_task() was called even though 135 132 the target task was cache-hot when idle 136 133 21) # of times in this domain sched_balance_rq() was called but did 137 134 not find a busier queue while the cpu was idle ··· 145 140 load did not require balancing when the cpu was just becoming idle 146 141 25) # of times in this domain sched_balance_rq() tried to move one or more 147 142 tasks and failed, when the cpu was just becoming idle 148 - 26) Total imbalance in load when the cpu was just becoming idle 149 - 27) Total imbalance in utilization when the cpu was just becoming idle 150 - 28) Total imbalance in number of tasks when the cpu was just becoming 
idle 151 - 29) Total imbalance due to misfit tasks when the cpu was just becoming idle 152 - 30) # of times in this domain pull_task() was called when newly idle 153 - 31) # of times in this domain pull_task() was called even though the 143 + 26) Total imbalance in load in this domain when the cpu was just becoming 144 + idle 145 + 27) Total imbalance in utilization in this domain when the cpu was just 146 + becoming idle 147 + 28) Total imbalance in number of tasks in this domain when the cpu was just 148 + becoming idle 149 + 29) Total imbalance due to misfit tasks in this domain when the cpu was 150 + just becoming idle 151 + 30) # of times in this domain detach_task() was called when newly idle 152 + 31) # of times in this domain detach_task() was called even though the 154 153 target task was cache-hot when just becoming idle 155 154 32) # of times in this domain sched_balance_rq() was called but did not 156 155 find a busier queue while the cpu was just becoming idle
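A small, hypothetical Python helper illustrating the version-dependent layout described above: with schedstat version >= 17 the <name> field precedes the cpumask, while on older versions the cpumask is the first field after the domain label (the sample lines below are invented):

```python
# Hypothetical parser for a /proc/schedstat "domain" line, honouring
# the <name> field that only exists in schedstat version >= 17.

def parse_domain_line(line, version):
    parts = line.split()
    assert parts[0].startswith("domain")
    if version >= 17:            # domain<N> <name> <cpumask> counters...
        name, cpumask = parts[1], parts[2]
        counters = [int(x) for x in parts[3:]]
    else:                        # domain<N> <cpumask> counters...
        name, cpumask = None, parts[1]
        counters = [int(x) for x in parts[2:]]
    return name, cpumask, counters

# Invented sample: a version-17 line with a named MC domain.
name, mask, ctrs = parse_domain_line("domain0 MC ff 1 2 3 4 5", version=17)
```

A real line carries 45 counters; the parser above is agnostic to the count.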
+15
Documentation/sphinx-static/custom.css
··· 136 136 div.language-selection ul li:hover { 137 137 background: #dddddd; 138 138 } 139 + 140 + /* Make xrefs more universally visible */ 141 + a.reference, a.reference:hover { 142 + border-bottom: none; 143 + text-decoration: underline; 144 + text-underline-offset: 0.3em; 145 + } 146 + 147 + /* Slightly different style for sidebar links */ 148 + div.sphinxsidebar a { border-bottom: none; } 149 + div.sphinxsidebar a:hover { 150 + border-bottom: none; 151 + text-decoration: underline; 152 + text-underline-offset: 0.3em; 153 + }
+9 -18
Documentation/sphinx/automarkup.py
··· 23 23 RE_function = re.compile(r'\b(([a-zA-Z_]\w+)\(\))', flags=re.ASCII) 24 24 25 25 # 26 - # Sphinx 2 uses the same :c:type role for struct, union, enum and typedef 27 - # 28 - RE_generic_type = re.compile(r'\b(struct|union|enum|typedef)\s+([a-zA-Z_]\w+)', 29 - flags=re.ASCII) 30 - 31 - # 32 26 # Sphinx 3 uses a different C role for each one of struct, union, enum and 33 27 # typedef 34 28 # ··· 144 150 return target_text 145 151 146 152 def markup_c_ref(docname, app, match): 147 - class_str = {# Sphinx 2 only 148 - RE_function: 'c-func', 149 - RE_generic_type: 'c-type', 150 - # Sphinx 3+ only 151 - RE_struct: 'c-struct', 153 + class_str = {RE_struct: 'c-struct', 152 154 RE_union: 'c-union', 153 155 RE_enum: 'c-enum', 154 156 RE_typedef: 'c-type', 155 157 } 156 - reftype_str = {# Sphinx 2 only 157 - RE_function: 'function', 158 - RE_generic_type: 'type', 159 - # Sphinx 3+ only 160 - RE_struct: 'struct', 158 + reftype_str = {RE_struct: 'struct', 161 159 RE_union: 'union', 162 160 RE_enum: 'enum', 163 161 RE_typedef: 'type', ··· 235 249 236 250 if xref: 237 251 return xref 238 - 239 - return None 252 + # 253 + # We didn't find the xref; if a container node was supplied, 254 + # mark it as a broken xref 255 + # 256 + if contnode: 257 + contnode['classes'].append("broken_xref") 258 + return contnode 240 259 241 260 # 242 261 # Variant of markup_abi_ref() that warns when a reference is not found
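As a rough illustration of how these per-construct patterns work, here is a simplified stand-in for the ``RE_struct`` regex retained above (the real table maps each matching regex to its ``c-struct``/``c-union``/``c-enum``/``c-type`` role):

```python
import re

# Simplified stand-in for automarkup's per-construct patterns: match
# "struct <identifier>" so it can be turned into a c-struct xref.
RE_struct = re.compile(r'\bstruct\s+([a-zA-Z_]\w+)', flags=re.ASCII)

m = RE_struct.search("see struct napi_struct for details")
print(m.group(1))   # napi_struct
```

The actual patterns also capture the keyword itself; only the captured identifier matters for building the cross-reference target.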
+1
Documentation/sphinx/cdomain.py
··· 1 1 # -*- coding: utf-8; mode: python -*- 2 + # SPDX-License-Identifier: GPL-2.0 2 3 # pylint: disable=W0141,C0113,C0103,C0325 3 4 """ 4 5 cdomain
+4 -2
Documentation/sphinx/kernel_abi.py
··· 146 146 n += 1 147 147 148 148 if f != old_f: 149 - # Add the file to Sphinx build dependencies 150 - env.note_dependency(os.path.abspath(f)) 149 + # Add the file to Sphinx build dependencies if the file exists 150 + fname = os.path.join(srctree, f) 151 + if os.path.isfile(fname): 152 + env.note_dependency(fname) 151 153 152 154 old_f = f 153 155
+1
Documentation/sphinx/kernel_include.py
··· 1 1 #!/usr/bin/env python3 2 2 # -*- coding: utf-8; mode: python -*- 3 + # SPDX-License-Identifier: GPL-2.0 3 4 # pylint: disable=R0903, C0330, R0914, R0912, E0401 4 5 5 6 """
+1 -2
Documentation/sphinx/kerneldoc.py
··· 1 1 # coding=utf-8 2 + # SPDX-License-Identifier: MIT 2 3 # 3 4 # Copyright © 2016 Intel Corporation 4 5 # ··· 24 23 # 25 24 # Authors: 26 25 # Jani Nikula <jani.nikula@intel.com> 27 - # 28 - # Please make sure this works on both python2 and python3. 29 26 # 30 27 31 28 import codecs
+1
Documentation/sphinx/kfigure.py
··· 1 1 # -*- coding: utf-8; mode: python -*- 2 + # SPDX-License-Identifier: GPL-2.0 2 3 # pylint: disable=C0103, R0903, R0912, R0915 3 4 """ 4 5 scalable figure and image handling
+1
Documentation/sphinx/load_config.py
··· 1 1 # -*- coding: utf-8; mode: python -*- 2 + # SPDX-License-Identifier: GPL-2.0 2 3 # pylint: disable=R0903, C0330, R0914, R0912, E0401 3 4 4 5 import os
+11
Documentation/sphinx/min_requirements.txt
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + alabaster >=0.7,<0.8 3 + docutils>=0.15,<0.18 4 + jinja2>=2.3,<3.1 5 + PyYAML>=5.1,<6.1 6 + Sphinx==3.4.3 7 + sphinxcontrib-applehelp==1.0.2 8 + sphinxcontrib-devhelp==1.0.1 9 + sphinxcontrib-htmlhelp==1.0.3 10 + sphinxcontrib-qthelp==1.0.2 11 + sphinxcontrib-serializinghtml==1.1.4
+4 -1
Documentation/sphinx/parse-headers.pl
··· 1 1 #!/usr/bin/env perl 2 + # SPDX-License-Identifier: GPL-2.0 3 + # Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab@kernel.org>. 4 + 2 5 use strict; 3 6 use Text::Tabs; 4 7 use Getopt::Long; ··· 394 391 395 392 =head1 COPYRIGHT 396 393 397 - Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab+samsung@kernel.org>. 394 + Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab@kernel.org>. 398 395 399 396 License GPLv2: GNU GPL version 2 <https://gnu.org/licenses/gpl.html>. 400 397
+1
Documentation/sphinx/requirements.txt
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 alabaster 2 3 Sphinx 3 4 pyyaml
+1
Documentation/sphinx/rstFlatTable.py
··· 1 1 #!/usr/bin/env python3 2 2 # -*- coding: utf-8; mode: python -*- 3 + # SPDX-License-Identifier: GPL-2.0 3 4 # pylint: disable=C0330, R0903, R0912 4 5 5 6 """
+11
Documentation/tools/rtla/common_appendix.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + EXIT STATUS 4 + =========== 5 + 6 + :: 7 + 8 + 0 Passed: the test did not hit the stop tracing condition 9 + 1 Error: invalid argument 10 + 2 Failed: the test hit the stop tracing condition 11 + 1 12 REPORTING BUGS 2 13 ============== 3 14 Report bugs to <linux-kernel@vger.kernel.org>
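The exit codes documented in the new appendix lend themselves to simple scripting. A hedged sketch (the wrapper function name is made up; it only maps the documented codes to text):

```shell
# Hypothetical helper mapping rtla's documented exit codes to text;
# the meanings (0 passed, 1 error, 2 failed) come from the table above.
interpret_rtla_exit() {
    case "$1" in
        0) echo "passed" ;;
        1) echo "invalid argument" ;;
        2) echo "hit the stop tracing condition" ;;
        *) echo "unknown" ;;
    esac
}

# e.g. after running an rtla command: interpret_rtla_exit "$?"
interpret_rtla_exit 2
```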
+2
Documentation/tools/rtla/rtla-timerlat-hist.rst
··· 107 107 AUTHOR 108 108 ====== 109 109 Written by Daniel Bristot de Oliveira <bristot@kernel.org> 110 + 111 + .. include:: common_appendix.rst
+2 -2
Documentation/trace/boottime-trace.rst
··· 198 198 after that (arch_initcall or subsys_initcall). Thus, you can trace those with 199 199 boot-time tracing. 200 200 If you want to trace events before core_initcall, you can use the options 201 - starting with ``kernel``. Some of them will be enabled eariler than the initcall 202 - processing (for example,. ``kernel.ftrace=function`` and ``kernel.trace_event`` 201 + starting with ``kernel``. Some of them will be enabled earlier than the initcall 202 + processing (for example, ``kernel.ftrace=function`` and ``kernel.trace_event`` 203 203 will start before the initcall.) 204 204 205 205
+1 -1
Documentation/trace/histogram.rst
··· 249 249 table, it should keep a running total of the number of bytes 250 250 requested by that call_site. 251 251 252 - We'll let it run for awhile and then dump the contents of the 'hist' 252 + We'll let it run for a while and then dump the contents of the 'hist' 253 253 file in the kmalloc event's subdirectory (for readability, a number 254 254 of entries have been omitted):: 255 255
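The running total this hist trigger keeps — bytes_req summed per call_site — can be modelled in a few lines of Python (the event tuples below are invented, not real trace output):

```python
# Toy model of the kmalloc hist trigger described above: accumulate a
# running total of bytes_req keyed by call_site.  Events are made up.
from collections import defaultdict

events = [                 # (call_site, bytes_req) -- hypothetical
    (0xdeadbee0, 32),
    (0xdeadbee0, 64),
    (0xc0ffee00, 128),
]

hist = defaultdict(int)
for call_site, bytes_req in events:
    hist[call_site] += bytes_req

print(hist[0xdeadbee0])    # 96
```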
+61 -47
Documentation/translations/zh_CN/how-to.rst
··· 1 1 .. SPDX-License-Identifier: GPL-2.0 2 2 3 - ========================= 4 - Linux内核中文文档翻译规范 5 - ========================= 3 + ========================== 4 + Linux 内核中文文档翻译规范 5 + ========================== 6 6 7 7 修订记录: 8 - - v1.0 2025年3月28日,司延腾、慕冬亮共同编写了该规范。 8 + - v1.0 2025 年 3 月 28 日,司延腾、慕冬亮共同编写了该规范。 9 9 10 10 制定规范的背景 11 11 ============== 12 12 13 13 过去几年,在广大社区爱好者的友好合作下,Linux 内核中文文档迎来了蓬勃的发 14 14 展。在翻译的早期,一切都是混乱的,社区对译稿只有一个准确翻译的要求,以鼓 15 - 励更多的开发者参与进来,这是从0到1的必然过程,所以早期的中文文档目录更加 16 - 具有多样性,不过好在文档不多,维护上并没有过大的压力。 15 + 励更多的开发者参与进来,这是从 0 到 1 的必然过程,所以早期的中文文档目录 16 + 更加具有多样性,不过好在文档不多,维护上并没有过大的压力。 17 17 18 18 然而,世事变幻,不觉有年,现在内核中文文档在前进的道路上越走越远,很多潜 19 19 在的问题逐渐浮出水面,而且随着中文文档数量的增加,翻译更多的文档与提高中 ··· 34 34 ======== 35 35 36 36 工欲善其事必先利其器,如果您目前对内核文档翻译满怀热情,并且会独立地安装 37 - linux 发行版和简单地使用 linux 命令行,那么可以迅速开始了。若您尚不具备该 37 + Linux 发行版和简单地使用 Linux 命令行,那么可以迅速开始了。若您尚不具备该 38 38 能力,很多网站上会有详细的手把手教程,最多一个上午,您应该就能掌握对应技 39 39 能。您需要注意的一点是,请不要使用 root 用户进行后续步骤和文档翻译。 40 40 ··· 66 66 cd linux 67 67 ./scripts/sphinx-pre-install 68 68 69 - 以Fedora为例,它的输出是这样的:: 69 + 以 Fedora 为例,它的输出是这样的:: 70 70 71 71 You should run: 72 72 73 - sudo dnf install -y dejavu-sans-fonts dejavu-sans-mono-fonts dejavu-serif-fonts google-noto-sans-cjk-fonts graphviz-gd latexmk librsvg2-tools texlive-anyfontsize texlive-capt-of texlive-collection-fontsrecommended texlive-ctex texlive-eqparbox texlive-fncychap texlive-framed texlive-luatex85 texlive-multirow texlive-needspace texlive-tabulary texlive-threeparttable texlive-upquote texlive-wrapfig texlive-xecjk 73 + sudo dnf install -y dejavu-sans-fonts dejavu-sans-mono-fonts \ 74 + dejavu-serif-fonts google-noto-sans-cjk-fonts graphviz-gd \ 75 + latexmk librsvg2-tools texlive-anyfontsize texlive-capt-of \ 76 + texlive-collection-fontsrecommended texlive-ctex \ 77 + texlive-eqparbox texlive-fncychap texlive-framed \ 78 + texlive-luatex85 texlive-multirow texlive-needspace \ 79 + texlive-tabulary texlive-threeparttable texlive-upquote \ 80 + texlive-wrapfig texlive-xecjk 74 81 75 82 
Sphinx needs to be installed either: 76 83 1) via pip/pypi with: ··· 99 92 https://github.com/sphinx-doc/sphinx/pull/8313 100 93 101 94 请您按照提示复制打印的命令到命令行执行,您必须具备 root 权限才能执行 sudo 102 - 开头的命令。 95 + 开头的命令。**请注意**,最新版本 Sphinx 的文档编译速度有极大提升,强烈建议 96 + 您通过 pip/pypi 安装最新版本 Sphinx。 103 97 104 98 如果您处于一个多用户环境中,为了避免对其他人造成影响,建议您配置单用户 105 99 sphinx 虚拟环境,即只需要执行:: ··· 134 126 检查编译结果 135 127 ------------ 136 128 137 - 编译输出在Documentation/output/目录下,请用浏览器打开该目录下对应 129 + 编译输出在 Documentation/output/ 目录下,请用浏览器打开该目录下对应 138 130 的文件进行检查。 139 131 140 - git和邮箱配置 141 - ------------- 132 + Git 和邮箱配置 133 + -------------- 142 134 143 135 打开命令行执行:: 144 136 ··· 158 150 smtpencryption = ssl 159 151 smtpserver = smtp.migadu.com 160 152 smtpuser = si.yanteng@linux.dev 161 - smtppass = <passwd> # 建议使用第三方客户端专用密码 153 + smtppass = <passwd> # 建议使用第三方客户端专用密码 162 154 chainreplyto = false 163 155 smtpserverport = 465 164 156 165 - 关于邮件客户端的配置,请查阅Documentation/translations/zh_CN/process/email-clients.rst。 157 + 关于邮件客户端的配置,请查阅 Documentation/translations/zh_CN/process/email-clients.rst。 166 158 167 159 开始翻译文档 168 160 ============ ··· 170 162 文档索引结构 171 163 ------------ 172 164 173 - 目前中文文档是在Documentation/translations/zh_CN/目录下进行,该 174 - 目录结构最终会与Documentation/结构一致,所以您只需要将您感兴趣的英文 165 + 目前中文文档是在 Documentation/translations/zh_CN/ 目录下进行,该 166 + 目录结构最终会与 Documentation/ 结构一致,所以您只需要将您感兴趣的英文 175 167 文档文件和对应的 index.rst 复制到 zh_CN 目录下对应的位置,然后修改更 176 168 上一级的 index 即可开始您的翻译。 177 169 ··· 185 177 请执行以下命令,新建开发分支:: 186 178 187 179 git checkout docs-next 188 - git branch my-trans 189 - git checkout my-trans 180 + git checkout -b my-trans 190 181 191 182 译文格式要求 192 183 ------------ 193 184 194 - - 每行长度最多不超过40个字符 185 + - 每行长度最多不超过 40 个字符 195 186 - 每行长度请保持一致 196 187 - 标题的下划线长度请按照一个英文一个字符、一个中文两个字符与标题对齐 197 188 - 其它的修饰符请与英文文档保持一致 ··· 199 192 200 193 .. SPDX-License-Identifier: GPL-2.0 201 194 .. 
include:: ../disclaimer-zh_CN.rst #您需要了解该文件的路径,根 202 - 据您实际翻译的文档灵活调整 195 + 据您实际翻译的文档灵活调整 203 196 204 197 :Original: Documentation/xxx/xxx.rst #替换为您翻译的英文文档路径 205 198 ··· 210 203 翻译技巧 211 204 -------- 212 205 213 - 中文文档有每行40字符限制,因为一个中文字符等于2个英文字符。但是社区并没有 214 - 那么严格,一个诀窍是将您的翻译的内容与英文原文的每行长度对齐即可,这样, 206 + 中文文档有每行 40 字符限制,因为一个中文字符等于 2 个英文字符。但是社区并 207 + 没有那么严格,一个诀窍是将您的翻译的内容与英文原文的每行长度对齐即可,这样, 215 208 您也不必总是检查有没有超限。 216 209 217 - 如果您的英文阅读能力有限,可以考虑使用辅助翻译工具,例如 deepseek 。但是您 210 + 如果您的英文阅读能力有限,可以考虑使用辅助翻译工具,例如 deepseek。但是您 218 211 必须仔细地打磨,使译文达到“信达雅”的标准。 219 212 220 213 **请注意** 社区不接受纯机器翻译的文档,社区工作建立在信任的基础上,请认真对待。 ··· 255 248 256 249 Translate .../security/self-protection.rst into Chinese. 257 250 258 - Update the translation through commit b080e52110ea #请执行git log <您翻译的英文文档路径> 复制最顶部第一个补丁的sha值的前12位,替换掉12位sha值。 251 + Update the translation through commit b080e52110ea 259 252 ("docs: update self-protection __ro_after_init status") 253 + # 请执行 git log --oneline <您翻译的英文文档路径>,并替换上述内容 260 254 261 - Signed-off-by: Yanteng Si <si.yanteng@linux.dev> #如果您前面的步骤正确执行,该行会自动显示,否则请检查gitconfig文件。 255 + Signed-off-by: Yanteng Si <si.yanteng@linux.dev> 256 + # 如果您前面的步骤正确执行,该行会自动显示,否则请检查 gitconfig 文件 262 257 263 258 保存并退出。 264 259 265 - **请注意** 以上四行,缺少任何一行,您都将会在第一轮审阅后返工,如果您需要一个更加明确的示例,请对 zh_CN 目录执行 git log。 260 + **请注意** 以上四行,缺少任何一行,您都将会在第一轮审阅后返工,如果您需要一个 261 + 更加明确的示例,请对 zh_CN 目录执行 git log。 266 262 267 263 导出补丁和制作封面 268 264 ------------------ ··· 273 263 这个时候,可以导出补丁,做发送邮件列表最后的准备了。命令行执行:: 274 264 275 265 git format-patch -N 266 + # N 要替换为补丁数量,一般 N 大于等于 1 276 267 277 268 然后命令行会输出类似下面的内容:: 278 269 ··· 297 286 然后执行以下命令为补丁追加更改:: 298 287 299 288 git checkout docs-next 300 - git branch test-trans 289 + git checkout -b test-trans-new 301 290 git am 0001-xxxxx.patch 302 291 ./scripts/checkpatch.pl 0001-xxxxx.patch 303 - 直接修改您的翻译 292 + # 直接修改您的翻译 304 293 git add . 
305 294 git commit --amend 306 - 保存退出 295 + # 保存退出 307 296 git am 0002-xxxxx.patch 308 297 …… 309 298 ··· 312 301 最后,如果检测时没有 warning 和 error 需要被处理或者您只有一个补丁,请跳 313 302 过下面这个步骤,否则请重新导出补丁制作封面:: 314 303 315 - git format-patch -N --cover-letter --thread=shallow #N为您的补丁数量,N一般要大于1。 304 + git format-patch -N --cover-letter --thread=shallow 305 + # N 要替换为补丁数量,一般 N 大于 1 316 306 317 307 然后命令行会输出类似下面的内容:: 318 308 319 309 0000-cover-letter.patch 320 310 0001-docs-zh_CN-add-xxxxxxxx.patch 321 311 0002-docs-zh_CN-add-xxxxxxxx.patch 312 + …… 322 313 323 - 您需要用编辑器打开0号补丁,修改两处内容:: 314 + 您需要用编辑器打开 0 号补丁,修改两处内容:: 324 315 325 316 vim 0000-cover-letter.patch 326 317 327 318 ... 328 - Subject: [PATCH 0/1] *** SUBJECT HERE *** #修改该字段,概括您的补丁集都做了哪些事情 319 + Subject: [PATCH 0/N] *** SUBJECT HERE *** #修改该字段,概括您的补丁集都做了哪些事情 329 320 330 - *** BLURB HERE *** #修改该字段,详细描述您的补丁集做了哪些事情 321 + *** BLURB HERE *** #修改该字段,详细描述您的补丁集做了哪些事情 331 322 332 323 Yanteng Si (1): 333 324 docs/zh_CN: add xxxxx 334 325 ... 335 326 336 - 如果您只有一个补丁,则可以不制作封面,即0号补丁,只需要执行:: 327 + 如果您只有一个补丁,则可以不制作封面,即 0 号补丁,只需要执行:: 337 328 338 329 git format-patch -1 339 330 ··· 358 345 359 346 打开上面您保存的邮件地址,执行:: 360 347 361 - git send-email *.patch --to <maintainer email addr> --cc <others addr> #一个to对应一个地址,一个cc对应一个地址,有几个就写几个。 348 + git send-email *.patch --to <maintainer email addr> --cc <others addr> 349 + # 一个 to 对应一个地址,一个 cc 对应一个地址,有几个就写几个 362 350 363 - 执行该命令时,请确保网络通常,邮件发送成功一般会返回250。 351 + 执行该命令时,请确保网络通畅,邮件发送成功一般会返回 250。 364 352 365 353 您可以先发送给自己,尝试发出的 patch 是否可以用 'git am' 工具正常打上。 366 354 如果检查正常, 您就可以放心的发送到社区评审了。 ··· 396 382 每次迭代一个补丁,不要一次多个:: 397 383 398 384 git am <您要修改的补丁> 399 - 直接对文件进行您的修改 385 + # 直接对文件进行您的修改 400 386 git add . 
401 387 git commit --amend 402 388 403 389 当您将所有的评论落实到位后,导出第二版补丁,并修改封面:: 404 390 405 - git format-patch -N -v 2 --cover-letter --thread=shallow 391 + git format-patch -N -v 2 --cover-letter --thread=shallow 406 392 407 - 打开0号补丁,在 BLURB HERE 处编写相较于上个版本,您做了哪些改动。 393 + 打开 0 号补丁,在 BLURB HERE 处编写相较于上个版本,您做了哪些改动。 408 394 409 395 然后执行:: 410 396 ··· 428 414 如果您发送到邮件列表之后。发现发错了补丁集,尤其是在多个版本迭代的过程中; 429 415 自己发现了一些不妥的翻译;发送错了邮件列表…… 430 416 431 - git email默认会抄送给您一份,所以您可以切换为审阅者的角色审查自己的补丁, 417 + git email 默认会抄送给您一份,所以您可以切换为审阅者的角色审查自己的补丁, 432 418 并留下评论,描述有何不妥,将在下个版本怎么改,并付诸行动,重新提交,但是 433 419 注意频率,每天提交的次数不要超过两次。 434 420 ··· 439 425 440 426 ./script/checktransupdate.py -l zh_CN`` 441 427 442 - 该命令会列出需要翻译或更新的英文文档。 428 + 该命令会列出需要翻译或更新的英文文档,结果同时保存在 checktransupdate.log 中。 443 429 444 - 关于详细操作说明,请参考: Documentation/translations/zh_CN/doc-guide/checktransupdate.rst\ 430 + 关于详细操作说明,请参考:Documentation/translations/zh_CN/doc-guide/checktransupdate.rst。 445 431 446 432 进阶 447 433 ---- ··· 453 439 常见的问题 454 440 ========== 455 441 456 - Maintainer回复补丁不能正常apply 457 - ------------------------------- 442 + Maintainer 回复补丁不能正常 apply 443 + --------------------------------- 458 444 459 445 这通常是因为您的补丁与邮件列表其他人的补丁产生了冲突,别人的补丁先被 apply 了, 460 446 您的补丁集就无法成功 apply 了,这需要您更新本地分支,在本地解决完冲突后再次提交。 ··· 469 455 大部分情况下,是由于您发送了非纯文本格式的信件,请尽量避免使用 webmail,推荐 470 456 使用邮件客户端,比如 thunderbird,记得在设置中的回信配置那改为纯文本发送。 471 457 472 - 如果超过了24小时,您依旧没有在<https://lore.kernel.org/linux-doc/>发现您的邮 473 - 件,请联系您的网络管理员帮忙解决。 458 + 如果超过了 24 小时,您依旧没有在<https://lore.kernel.org/linux-doc/>发现您的 459 + 邮件,请联系您的网络管理员帮忙解决。
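上文描述的导出补丁流程,可以用下面的脚本在一个临时仓库中演示(仓库内容与提交信息均为虚构示例,仅用于说明 ``git format-patch`` 的基本用法):

```shell
# 在临时 git 仓库中演示 git format-patch(内容为虚构示例)。
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name You
echo doc > how-to.rst
git add how-to.rst
git commit -q -m "docs/zh_CN: add how-to"
# 导出最近 1 个提交为补丁文件,文件名由提交标题生成
git format-patch -1
ls 0001-*.patch
```

真实流程中,提交标题与补丁数量按上文规范填写即可。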
+56
Documentation/translations/zh_CN/networking/alias.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + .. include:: ../disclaimer-zh_CN.rst 4 + 5 + :Original: Documentation/networking/alias.rst 6 + 7 + :翻译: 8 + 9 + 邱禹潭 Qiu Yutan <qiu.yutan@zte.com.cn> 10 + 11 + :校译: 12 + 13 + ====== 14 + IP别名 15 + ====== 16 + 17 + IP别名是管理每个接口存在多个IP地址/子网掩码的一种过时方法。 18 + 虽然更新的工具如iproute2支持每个接口多个地址/前缀, 19 + 但为了向后兼容性,别名仍被支持。 20 + 21 + 别名通过在使用 ifconfig 时在接口名后添加冒号和一个字符串来创建。 22 + 这个字符串通常是数字,但并非必须。 23 + 24 + 25 + 别名创建 26 + ======== 27 + 28 + 别名的创建是通过“特殊的”接口命名机制完成的:例如, 29 + 要为eth0创建一个 200.1.1.1 的别名... 30 + :: 31 + 32 + # ifconfig eth0:0 200.1.1.1 等等 33 + ~~ -> 请求为eth0创建别名#0(如果尚不存在) 34 + 35 + 该命令也会设置相应的路由表项。请注意:路由表项始终指向基础接口。 36 + 37 + 38 + 别名删除 39 + ======== 40 + 41 + 通过关闭别名即可将其删除:: 42 + 43 + # ifconfig eth0:0 down 44 + ~~~~~~~~~~ -> 将删除别名 45 + 46 + 47 + 别名(重新)配置 48 + ================ 49 + 50 + 别名不是真实的设备,但程序应该能够正常配置和引用它们(ifconfig、route等)。 51 + 52 + 53 + 与主设备的关系 54 + ============== 55 + 56 + 如果基础设备被关闭,则其上添加的所有别名也将被删除。
+6 -6
Documentation/translations/zh_CN/networking/index.rst
··· 21 21 :maxdepth: 1 22 22 23 23 msg_zerocopy 24 + napi 25 + vxlan 26 + netif-msg 27 + xfrm_proc 28 + netmem 29 + alias 24 30 25 31 Todolist: 26 32 ··· 51 45 * page_pool 52 46 * phy 53 47 * sfp-phylink 54 - * alias 55 48 * bridge 56 49 * snmp_counter 57 50 * checksum-offloads ··· 99 94 * mptcp-sysctl 100 95 * multiqueue 101 96 * multi-pf-netdev 102 - * napi 103 97 * net_cachelines/index 104 98 * netconsole 105 99 * netdev-features 106 100 * netdevices 107 101 * netfilter-sysctl 108 - * netif-msg 109 - * netmem 110 102 * nexthop-group-resilient 111 103 * nf_conntrack-sysctl 112 104 * nf_flowtable ··· 144 142 * tuntap 145 143 * udplite 146 144 * vrf 147 - * vxlan 148 145 * x25 149 146 * x25-iface 150 147 * xfrm_device 151 - * xfrm_proc 152 148 * xfrm_sync 153 149 * xfrm_sysctl 154 150 * xdp-rx-metadata
+362
Documentation/translations/zh_CN/networking/napi.rst
··· 1 + .. SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + .. include:: ../disclaimer-zh_CN.rst 3 + 4 + :Original: Documentation/networking/napi.rst 5 + 6 + :翻译: 7 + 8 + 王亚鑫 Yaxin Wang <wang.yaxin@zte.com.cn> 9 + 10 + ==== 11 + NAPI 12 + ==== 13 + 14 + NAPI 是 Linux 网络堆栈中使用的事件处理机制。NAPI 的名称现在不再代表任何特定含义 [#]_。 15 + 16 + 在基本操作中,设备通过中断通知主机有新事件发生。主机随后调度 NAPI 实例来处理这些事件。 17 + 该设备也可以通过 NAPI 进行事件轮询,而无需先接收中断信号(:ref:`忙轮询<poll_zh_CN>`)。 18 + 19 + NAPI 处理通常发生在软中断上下文中,但有一个选项,可以使用 :ref:`单独的内核线程<threaded_zh_CN>` 20 + 来进行 NAPI 处理。 21 + 22 + 总的来说,NAPI 为驱动程序抽象了事件(数据包接收和发送)处理的上下文环境和配置情况。 23 + 24 + 驱动程序API 25 + =========== 26 + 27 + NAPI 最重要的两个元素是 struct napi_struct 和关联的 poll 方法。struct napi_struct 28 + 持有 NAPI 实例的状态,而方法则是与驱动程序相关的事件处理器。该方法通常会释放已传输的发送 29 + (Tx)数据包并处理新接收的数据包。 30 + 31 + .. _drv_ctrl_zh_CN: 32 + 33 + 控制API 34 + ------- 35 + 36 + netif_napi_add() 和 netif_napi_del() 用于向系统中添加/删除一个 NAPI 实例。实例会被 37 + 附加到作为参数传递的 netdevice上(并在 netdevice 注销时自动删除)。实例在添加时处于禁 38 + 用状态。 39 + 40 + napi_enable() 和 napi_disable() 管理禁用状态。禁用的 NAPI 不会被调度,并且保证其 41 + poll 方法不会被调用。napi_disable() 会等待 NAPI 实例的所有权被释放。 42 + 43 + 这些控制 API 并非幂等的。控制 API 调用在面对数据路径 API 的并发使用时是安全的,但控制 44 + API 调用顺序错误可能会导致系统崩溃、死锁或竞态条件。例如,连续多次调用 napi_disable() 45 + 会造成死锁。 46 + 47 + 数据路径API 48 + ----------- 49 + 50 + napi_schedule() 是调度 NAPI 轮询的基本方法。驱动程序应在其中断处理程序中调用此函数 51 + (更多信息请参见 :ref:`drv_sched_zh_CN`)。成功的 napi_schedule() 调用将获得 NAPI 实例 52 + 的所有权。 53 + 54 + 之后,在 NAPI 被调度后,驱动程序的 poll 方法将被调用以处理事件/数据包。该方法接受一个 55 + ``budget`` 参数 - 驱动程序可以处理任意数量的发送 (Tx) 数据包完成,但处理最多处理 56 + ``budget`` 个接收 (Rx) 数据包。处理接收数据包通常开销更大。 57 + 58 + 换句话说,对于接收数据包的处理,``budget`` 参数限制了驱动程序在单次轮询中能够处理的数 59 + 据包数量。当 ``budget`` 为 0 时,像页面池或 XDP 这类专门用于接收的 API 根本无法使用。 60 + 无论 ``budget`` 的值是多少,skb 的发送处理都应该进行,但是如果 ``budget`` 参数为 0, 61 + 驱动程序就不能调用任何 XDP(或页面池)API。 62 + 63 + .. 
warning:: 64 + 65 + 如果内核仅尝试处理skb的发送完成情况,而不处理接收 (Rx) 或 XDP 数据包,那么 ``budget`` 66 + 参数可能为 0。 67 + 68 + 轮询方法会返回已完成的工作量。如果驱动程序仍有未完成的工作(例如,``budget`` 已用完), 69 + 轮询方法应精确返回 ``budget`` 的值。在这种情况下,NAPI 实例将再次被处理 / 轮询(无需 70 + 重新调度)。 71 + 72 + 如果事件处理已完成(所有未处理的数据包都已处理完毕),轮询方法在返回之前应调用 napi_complete_done()。 73 + napi_complete_done() 会释放实例的所有权。 74 + 75 + .. warning:: 76 + 77 + 当出现既完成了所有事件处理,又恰好达到了 ``budget`` 数量的情况时,必须谨慎处理。因为没 78 + 有办法将这种(很少出现的)情况报告给协议栈,所以驱动程序要么不调用 napi_complete_done() 79 + 并等待再次被调用,要么返回 ``budget - 1``。 80 + 81 + 当 ``budget`` 为 0 时,napi_complete_done() 绝对不能被调用。 82 + 83 + 调用序列 84 + -------- 85 + 86 + 驱动程序不应假定调用的顺序是固定不变的。即使驱动程序没有调度该实例,轮询方法也可能会被调用 87 + (除非该实例处于禁用状态)。同样,即便 napi_schedule() 调用成功,也不能保证轮询方法一定 88 + 会被调用(例如,如果该实例被禁用)。 89 + 90 + 正如在 :ref:`drv_ctrl_zh_CN` 部分所提到的,napi_disable() 以及后续对轮询方法的调用, 91 + 仅会等待该实例的所有权被释放,而不会等待轮询方法退出。这意味着,驱动程序在调用 napi_complete_done() 92 + 之后,应避免访问任何数据结构。 93 + 94 + .. _drv_sched_zh_CN: 95 + 96 + 调度与IRQ屏蔽 97 + ------------- 98 + 99 + 驱动程序应在调度 NAPI 实例后保持中断屏蔽 - 直到 NAPI 轮询完成,任何进一步的中断都是不必要的。 100 + 101 + 显式屏蔽中断的驱动程序(而非设备自动屏蔽 IRQ)应使用 napi_schedule_prep() 和 102 + __napi_schedule() 调用: 103 + 104 + .. code-block:: c 105 + 106 + if (napi_schedule_prep(&v->napi)) { 107 + mydrv_mask_rxtx_irq(v->idx); 108 + /* 在屏蔽后调度以避免竞争 */ 109 + __napi_schedule(&v->napi); 110 + } 111 + 112 + IRQ 仅应在成功调用 napi_complete_done() 后取消屏蔽: 113 + 114 + .. 
code-block:: c 115 + 116 + if (budget && napi_complete_done(&v->napi, work_done)) { 117 + mydrv_unmask_rxtx_irq(v->idx); 118 + return min(work_done, budget - 1); 119 + } 120 + 121 + napi_schedule_irqoff() 是 napi_schedule() 的一个变体,它利用了在中断请求(IRQ)上下文 122 + 环境中调用所带来的特性(无需屏蔽中断)。如果中断请求(IRQ)是通过线程处理的(例如启用了 123 + ``PREEMPT_RT`` 时的情况),napi_schedule_irqoff() 会回退为使用 napi_schedule() 。 124 + 125 + 实例到队列的映射 126 + ---------------- 127 + 128 + 现代设备每个接口有多个 NAPI 实例(struct napi_struct)。关于实例如何映射到队列和中断没有 129 + 严格要求。NAPI 主要是事件处理/轮询抽象,没有用户可见的语义。也就是说,大多数网络设备最终以 130 + 非常相似的方式使用 NAPI。 131 + 132 + NAPI 实例最常以 1:1:1 映射到中断和队列对(队列对是由一个接收队列和一个发送队列组成的一组 133 + 队列)。 134 + 135 + 在不太常见的情况下,一个 NAPI 实例可能会用于处理多个队列,或者在单个内核上,接收(Rx)队列 136 + 和发送(Tx)队列可以由不同的 NAPI 实例来处理。不过,无论队列如何分配,通常 NAPI 实例和中断 137 + 之间仍然保持一一对应的关系。 138 + 139 + 值得注意的是,ethtool API 使用了 “通道” 这一术语,每个通道可以是 ``rx`` (接收)、``tx`` 140 + (发送)或 ``combined`` (组合)类型。目前尚不清楚一个通道具体由什么构成,建议的理解方式是 141 + 将一个通道视为一个为特定类型队列提供服务的 IRQ(中断请求)/ NAPI 实例。例如,配置为 1 个 142 + ``rx`` 通道、1 个 ``tx`` 通道和 1 个 ``combined`` 通道的情况下,预计会使用 3 个中断、 143 + 2 个接收队列和 2 个发送队列。 144 + 145 + 持久化NAPI配置 146 + -------------- 147 + 148 + 驱动程序常常会动态地分配和释放 NAPI 实例。这就导致每当 NAPI 实例被重新分配时,与 NAPI 相关 149 + 的用户配置就会丢失。netif_napi_add_config() API接口通过将每个 NAPI 实例与基于驱动程序定义 150 + 的索引值(如队列编号)的持久化 NAPI 配置相关联,从而避免了这种配置丢失的情况。 151 + 152 + 使用此 API 可实现持久化的 NAPI 标识符(以及其他设置),这对于使用 ``SO_INCOMING_NAPI_ID`` 153 + 的用户空间程序来说是有益的。有关其他 NAPI 配置的设置,请参阅以下章节。 154 + 155 + 驱动程序应尽可能尝试使用 netif_napi_add_config()。 156 + 157 + 用户API 158 + ======= 159 + 160 + 用户与 NAPI 的交互依赖于 NAPI 实例 ID。这些实例 ID 仅通过 ``SO_INCOMING_NAPI_ID`` 套接字 161 + 选项对用户可见。 162 + 163 + 用户可以使用 Netlink 来查询某个设备或设备队列的 NAPI 标识符。这既可以在用户应用程序中通过编程 164 + 方式实现,也可以使用内核源代码树中包含的一个脚本:tools/net/ynl/pyynl/cli.py 来完成。 165 + 166 + 例如,使用该脚本转储某个设备的所有队列(这将显示每个队列的 NAPI 标识符): 167 + 168 + 169 + .. 
code-block:: bash 170 + 171 + $ kernel-source/tools/net/ynl/pyynl/cli.py \ 172 + --spec Documentation/netlink/specs/netdev.yaml \ 173 + --dump queue-get \ 174 + --json='{"ifindex": 2}' 175 + 176 + 有关可用操作和属性的更多详细信息,请参阅 ``Documentation/netlink/specs/netdev.yaml``。 177 + 178 + 软件IRQ合并 179 + ----------- 180 + 181 + 默认情况下,NAPI 不执行任何显式的事件合并。在大多数场景中,数据包的批量处理得益于设备进行 182 + 的中断请求(IRQ)合并。不过,在某些情况下,软件层面的合并操作也很有帮助。 183 + 184 + 可以将 NAPI 配置为设置一个重新轮询定时器,而不是在处理完所有数据包后立即取消屏蔽硬件中断。 185 + 网络设备的 ``gro_flush_timeout`` sysfs 配置项可用于控制该定时器的延迟时间,而 ``napi_defer_hard_irqs`` 186 + 则用于控制在 NAPI 放弃并重新启用硬件中断之前,连续进行空轮询的次数。 187 + 188 + 上述参数也可以通过 Netlink 的 netdev-genl 接口,基于每个 NAPI 实例进行设置。当通过 189 + Netlink 进行配置且是基于每个 NAPI 实例设置时,上述参数使用连字符(-)而非下划线(_) 190 + 来命名,即 ``gro-flush-timeout`` 和 ``napi-defer-hard-irqs``。 191 + 192 + 基于每个 NAPI 实例的配置既可以在用户应用程序中通过编程方式完成,也可以使用内核源代码树中的 193 + 一个脚本实现,该脚本为 ``tools/net/ynl/pyynl/cli.py``。 194 + 195 + 例如,通过如下方式使用该脚本: 196 + 197 + .. code-block:: bash 198 + 199 + $ kernel-source/tools/net/ynl/pyynl/cli.py \ 200 + --spec Documentation/netlink/specs/netdev.yaml \ 201 + --do napi-set \ 202 + --json='{"id": 345, 203 + "defer-hard-irqs": 111, 204 + "gro-flush-timeout": 11111}' 205 + 206 + 类似地,参数 ``irq-suspend-timeout`` 也可以通过 netlink 的 netdev-genl 设置。没有全局 207 + 的 sysfs 参数可用于设置这个值。 208 + 209 + ``irq-suspend-timeout`` 用于确定应用程序可以完全挂起 IRQ 的时长。与 SO_PREFER_BUSY_POLL 210 + 结合使用,后者可以通过 ``EPIOCSPARAMS`` ioctl 在每个 epoll 上下文中设置。 211 + 212 + .. 
_poll_zh_CN: 213 + 214 + 忙轮询 215 + ------ 216 + 217 + 忙轮询允许用户进程在设备中断触发前检查传入的数据包。与其他忙轮询一样,它以 CPU 周期换取更低 218 + 的延迟(生产环境中 NAPI 忙轮询的使用尚不明确)。 219 + 220 + 通过在选定套接字上设置 ``SO_BUSY_POLL`` 或使用全局 ``net.core.busy_poll`` 和 ``net.core.busy_read`` 221 + 等 sysctls 启用忙轮询。还存在基于 io_uring 的 NAPI 忙轮询 API 可使用。 222 + 223 + 基于epoll的忙轮询 224 + ----------------- 225 + 226 + 可以从 ``epoll_wait`` 调用直接触发数据包处理。为了使用此功能,用户应用程序必须确保添加到 227 + epoll 上下文的所有文件描述符具有相同的 NAPI ID。 228 + 229 + 如果应用程序使用专用的 acceptor 线程,那么该应用程序可以获取传入连接的 NAPI ID(使用 230 + SO_INCOMING_NAPI_ID)然后将该文件描述符分发给工作线程。工作线程将该文件描述符添加到其 231 + epoll 上下文。这确保了每个工作线程的 epoll 上下文中所包含的文件描述符具有相同的 NAPI ID。 232 + 233 + 或者,如果应用程序使用 SO_REUSEPORT,可以插入 bpf 或 ebpf 程序来分发传入连接,使得每个 234 + 线程只接收具有相同 NAPI ID 的连接。但是必须谨慎处理系统中可能存在多个网卡的情况。 235 + 236 + 为了启用忙轮询,有两种选择: 237 + 238 + 1. ``/proc/sys/net/core/busy_poll`` 可以设置为微秒数以在忙循环中等待事件。这是一个系统 239 + 范围的设置,将导致所有基于 epoll 的应用程序在调用 epoll_wait 时忙轮询。这可能不是理想 240 + 的情况,因为许多应用程序可能不需要忙轮询。 241 + 242 + 2. 使用最新内核的应用程序可以在 epoll 上下文的文件描述符上发出 ioctl 来设置(``EPIOCSPARAMS``) 243 + 或获取(``EPIOCGPARAMS``) ``struct epoll_params``,用户程序定义如下: 244 + 245 + .. 
code-block:: c 246 + 247 + struct epoll_params { 248 + uint32_t busy_poll_usecs; 249 + uint16_t busy_poll_budget; 250 + uint8_t prefer_busy_poll; 251 + 252 + /* 将结构填充到 64 位的倍数 */ 253 + uint8_t __pad; 254 + }; 255 + 256 + IRQ缓解 257 + ------- 258 + 259 + 虽然忙轮询旨在用于低延迟应用,但类似的机制可用于减少中断请求。 260 + 261 + 每秒高请求的应用程序(尤其是路由/转发应用程序和特别使用 AF_XDP 套接字的应用程序) 262 + 可能希望在处理完一个请求或一批数据包之前不被中断。 263 + 264 + 此类应用程序可以向内核承诺会定期执行忙轮询操作,而驱动程序应将设备的中断请求永久屏蔽。 265 + 通过使用 ``SO_PREFER_BUSY_POLL`` 套接字选项可启用此模式。为避免系统出现异常,如果 266 + 在 ``gro_flush_timeout`` 时间内没有进行任何忙轮询调用,该承诺将被撤销。对于基于 267 + epoll 的忙轮询应用程序,可以将 ``struct epoll_params`` 结构体中的 ``prefer_busy_poll`` 268 + 字段设置为 1,并使用 ``EPIOCSPARAMS`` 输入 / 输出控制(ioctl)操作来启用此模式。 269 + 更多详情请参阅上述章节。 270 + 271 + NAPI 忙轮询的 budget 低于默认值(这符合正常忙轮询的低延迟意图)。减少中断请求的场景中 272 + 并非如此,因此 budget 可以通过 ``SO_BUSY_POLL_BUDGET`` 套接字选项进行调整。对于基于 273 + epoll 的忙轮询应用程序,可以通过调整 ``struct epoll_params`` 中的 ``busy_poll_budget`` 274 + 字段为特定值,并使用 ``EPIOCSPARAMS`` ioctl 在特定 epoll 上下文中设置。更多详细信 275 + 息请参见上述部分。 276 + 277 + 需要注意的是,为 ``gro_flush_timeout`` 选择较大的值会延迟中断请求,以实现更好的批 278 + 量处理,但在系统未满载时会增加延迟。为 ``gro_flush_timeout`` 选择较小的值可能会因 279 + 设备中断请求和软中断处理而干扰尝试进行忙轮询的用户应用程序。应权衡这些因素后谨慎选择 280 + 该值。基于 epoll 的忙轮询应用程序可以通过为 ``maxevents`` 选择合适的值来减少用户 281 + 处理的干扰。 282 + 283 + 用户可能需要考虑使用另一种方法,IRQ 挂起,以帮助应对这些权衡问题。 284 + 285 + IRQ挂起 286 + ------- 287 + 288 + IRQ 挂起是一种机制,其中设备 IRQ 在 epoll 触发 NAPI 数据包处理期间被屏蔽。 289 + 290 + 只要应用程序对 epoll_wait 的调用成功获取事件,内核就会推迟 IRQ 挂起定时器。如果 291 + 在忙轮询期间没有获取任何事件(例如,因为网络流量减少),则会禁用IRQ挂起功能,并启 292 + 用上述减少中断请求的策略。 293 + 294 + 这允许用户在 CPU 消耗和网络处理效率之间取得平衡。 295 + 296 + 要使用此机制: 297 + 298 + 1. 每个 NAPI 的配置参数 ``irq-suspend-timeout`` 应设置为应用程序可以挂起 299 + IRQ 的最大时间(纳秒)。这通过 netlink 完成,如上所述。此超时时间作为一 300 + 种安全机制,如果应用程序停滞,将重新启动中断驱动程序的中断处理。此值应选择 301 + 为覆盖用户应用程序调用 epoll_wait 处理数据所需的时间,需注意的是,应用程 302 + 序可通过在调用 epoll_wait 时设置 ``max_events`` 来控制获取的数据量。 303 + 304 + 2. sysfs 参数或每个 NAPI 的配置参数 ``gro_flush_timeout`` 和 ``napi_defer_hard_irqs`` 305 + 可以设置为较低值。它们将用于在忙轮询未找到数据时延迟 IRQs。 306 + 307 + 3. 
必须将 ``prefer_busy_poll`` 标志设置为 true。如前文所述,可使用 ``EPIOCSPARAMS`` 308 + ioctl操作来完成此设置。 309 + 310 + 4. 应用程序按照上述方式使用 epoll 触发 NAPI 数据包处理。 311 + 312 + 如上所述,只要后续对 epoll_wait 的调用向用户空间返回事件,``irq-suspend-timeout`` 313 + 就会被推迟并且 IRQ 会被禁用。这允许应用程序在无干扰的情况下处理数据。 314 + 315 + 一旦 epoll_wait 的调用没有找到任何事件,IRQ 挂起会被自动禁用,并且 ``gro_flush_timeout`` 316 + 和 ``napi_defer_hard_irqs`` 缓解机制将开始起作用。 317 + 318 + 预期是 ``irq-suspend-timeout`` 的设置值会远大于 ``gro_flush_timeout``,因为 ``irq-suspend-timeout`` 319 + 应在一个用户空间处理周期内暂停中断请求。 320 + 321 + 虽然严格来说不必通过 ``napi_defer_hard_irqs`` 和 ``gro_flush_timeout`` 来执行 IRQ 挂起, 322 + 但强烈建议这样做。 323 + 324 + 中断请求挂起会使系统在轮询模式和由中断驱动的数据包传输模式之间切换。在网络繁忙期间,``irq-suspend-timeout`` 325 + 会覆盖 ``gro_flush_timeout``,使系统保持忙轮询状态,但是当 epoll 未发现任何事件时,``gro_flush_timeout`` 326 + 和 ``napi_defer_hard_irqs`` 的设置将决定下一步的操作。 327 + 328 + 有三种可能的网络处理和数据包交付循环: 329 + 330 + 1) 硬中断 -> 软中断 -> NAPI 轮询;基本中断交付 331 + 2) 定时器 -> 软中断 -> NAPI 轮询;延迟的 IRQ 处理 332 + 3) epoll -> 忙轮询 -> NAPI 轮询;忙循环 333 + 334 + 循环 2 可以接管循环 1,如果设置了 ``gro_flush_timeout`` 和 ``napi_defer_hard_irqs``。 335 + 336 + 如果设置了 ``gro_flush_timeout`` 和 ``napi_defer_hard_irqs``,循环 2 和 3 将互相“争夺”控制权。 337 + 338 + 在繁忙时期,``irq-suspend-timeout`` 用作循环 2 的定时器,这基本上使网络处理倾向于循环 3。 339 + 340 + 如果不设置 ``gro_flush_timeout`` 和 ``napi_defer_hard_irqs``,循环 3 无法从循环 1 接管。 341 + 342 + 因此,建议设置 ``gro_flush_timeout`` 和 ``napi_defer_hard_irqs``,因为若不这样做,设置 343 + ``irq-suspend-timeout`` 可能不会有明显效果。 344 + 345 + .. _threaded_zh_CN: 346 + 347 + 线程化NAPI 348 + ---------- 349 + 350 + 线程化 NAPI 是一种操作模式,它使用专用的内核线程而非软件中断上下文来进行 NAPI 处理。这种配置 351 + 是针对每个网络设备的,并且会影响该设备的所有 NAPI 实例。每个 NAPI 实例将生成一个单独的线程 352 + (称为 ``napi/${ifc-name}-${napi-id}`` )。 353 + 354 + 建议将每个内核线程固定到单个 CPU 上,这个 CPU 与处理中断的 CPU 相同。请注意,中断请求(IRQ) 355 + 和 NAPI 实例之间的映射关系可能并不简单(并且取决于驱动程序)。NAPI 实例 ID 的分配顺序将与内 356 + 核线程的进程 ID 顺序相反。 357 + 358 + 线程化 NAPI 是通过向网络设备的 sysfs 目录中的 ``threaded`` 文件写入 0 或 1 来控制的。 359 + 360 + .. rubric:: 脚注 361 + 362 + .. [#] NAPI 最初在 2.4 Linux 中被称为 New API。
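The ``EPIOCSPARAMS`` flow described above can be sketched from user space roughly as follows. This is a hedged sketch, not the canonical usage: ``struct busy_poll_params`` mirrors the ``struct epoll_params`` layout shown in the document, the fallback ioctl definition assumes the numbering in ``include/uapi/linux/eventpoll.h`` (only used when toolchain headers predate the feature), and ``set_busy_poll`` is an illustrative helper name. On kernels without this support the ioctl simply fails.

```c
/* Sketch: enable epoll-based busy polling via the EPIOCSPARAMS ioctl
 * described above.  struct busy_poll_params mirrors the UAPI
 * struct epoll_params layout from the document. */
#include <stdint.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/ioctl.h>

struct busy_poll_params {
	uint32_t busy_poll_usecs;   /* how long to busy poll, in usec  */
	uint16_t busy_poll_budget;  /* packets to process per attempt  */
	uint8_t  prefer_busy_poll;  /* 1 = request IRQ mitigation mode */
	uint8_t  __pad;             /* pad struct to a multiple of u64 */
};

#ifndef EPIOCSPARAMS
/* Assumed fallback, mirroring linux/eventpoll.h on recent kernels. */
#define EPOLL_IOC_TYPE 0x8A
#define EPIOCSPARAMS _IOW(EPOLL_IOC_TYPE, 0x01, struct busy_poll_params)
#endif

/* Returns 0 on success, -1 if the kernel (or fd) rejects the request. */
static int set_busy_poll(int epfd, uint32_t usecs, uint16_t budget,
			 uint8_t prefer)
{
	struct busy_poll_params p;

	memset(&p, 0, sizeof(p));
	p.busy_poll_usecs = usecs;
	p.busy_poll_budget = budget;
	p.prefer_busy_poll = prefer;
	return ioctl(epfd, EPIOCSPARAMS, &p) == 0 ? 0 : -1;
}
```

On a kernel with this support, ``set_busy_poll(epfd, 64, 8, 1)`` would request up to 64 microseconds of busy polling with a budget of 8 packets and the IRQ-mitigation (``prefer_busy_poll``) behaviour described above.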
Documentation/translations/zh_CN/networking/netif-msg.rst (+92)
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + .. include:: ../disclaimer-zh_CN.rst 4 + 5 + :Original: Documentation/networking/netif-msg.rst 6 + 7 + :翻译: 8 + 9 + 王亚鑫 Wang Yaxin <wang.yaxin@zte.com.cn> 10 + 11 + ================ 12 + 网络接口消息级别 13 + ================ 14 + 15 + 网络接口消息级别设置的设计方案。 16 + 17 + 历史背景 18 + -------- 19 + 20 + 调试消息接口的设计遵循并受制于向后兼容性及历史实践。理解其发展历史有助于把握 21 + 当前实践,并将其与旧版驱动代码相关联。 22 + 23 + 自Linux诞生之初,每个网络设备驱动均包含一个本地整型变量以控制调试消息级别。 24 + 消息级别范围为0至7,数值越大表示输出越详细。 25 + 26 + 消息级别的定义在3级之后未明确细化,但实际实现通常与指定级别相差±1。驱动程序 27 + 成熟后,冗余的详细级别消息常被移除。 28 + 29 + - 0 最简消息,仅显示致命错误的关键信息。 30 + - 1 标准消息,初始化状态。无运行时消息。 31 + - 2 特殊介质选择消息,通常由定时器驱动。 32 + - 3 接口开启和停止消息,包括正常状态信息。 33 + - 4 Tx/Rx帧错误消息及异常驱动操作。 34 + - 5 Tx数据包队列信息、中断事件。 35 + - 6 每个完成的Tx数据包和接收的Rx数据包状态。 36 + - 7 Tx/Rx数据包初始内容。 37 + 38 + 最初,该消息级别变量在各驱动中具有唯一名称(如"lance_debug"),便于通过 39 + 内核符号调试器定位和修改其设置。模块化内核出现后,变量统一重命名为"debug", 40 + 并作为模块参数设置。 41 + 42 + 这种方法效果良好。然而,人们始终对附加功能存在需求。多年来,以下功能逐渐 43 + 成为合理且易于实现的增强方案: 44 + 45 + - 通过ioctl()调用修改消息级别。 46 + - 按接口而非驱动设置消息级别。 47 + - 对发出的消息类型进行更具选择性的控制。 48 + 49 + netif_msg 建议添加了这些功能,仅带来了轻微的复杂性增加和代码规模增长。 50 + 51 + 推荐方案如下: 52 + 53 + - 保留驱动级整型变量"debug"作为模块参数,默认值为'1'。 54 + 55 + - 添加一个名为 "msg_enable" 的接口私有变量。该变量是位图而非级别, 56 + 并按如下方式初始化:: 57 + 58 + 1 << debug 59 + 60 + 或更精确地说:: 61 + 62 + debug < 0 ? 0 : 1 << min(sizeof(int)-1, debug) 63 + 64 + 消息应从以下形式更改:: 65 + 66 + if (debug > 1) 67 + printk(MSG_DEBUG "%s: ... 68 + 69 + 改为:: 70 + 71 + if (np->msg_enable & NETIF_MSG_LINK) 72 + printk(MSG_DEBUG "%s: ... 
73 + 74 + 消息级别命名对应关系 75 + 76 + 77 + ========= =================== ============ 78 + 旧级别 名称 位位置 79 + ========= =================== ============ 80 + 1 NETIF_MSG_PROBE 0x0002 81 + 2 NETIF_MSG_LINK 0x0004 82 + 2 NETIF_MSG_TIMER 0x0004 83 + 3 NETIF_MSG_IFDOWN 0x0008 84 + 3 NETIF_MSG_IFUP 0x0008 85 + 4 NETIF_MSG_RX_ERR 0x0010 86 + 4 NETIF_MSG_TX_ERR 0x0010 87 + 5 NETIF_MSG_TX_QUEUED 0x0020 88 + 5 NETIF_MSG_INTR 0x0020 89 + 6 NETIF_MSG_TX_DONE 0x0040 90 + 6 NETIF_MSG_RX_STATUS 0x0040 91 + 7 NETIF_MSG_PKTDATA 0x0080 92 + ========= =================== ============
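The level-to-bitmap scheme above can be sketched as follows, using the document's own "more precise" initialization formula and the bit values from the mapping table. The function names are illustrative, not from the kernel:

```c
/* Sketch of the msg_enable scheme above: convert the module-level
 * "debug" integer into the per-interface bitmap, then gate messages
 * on NETIF_MSG_* bits instead of comparing raw levels. */
#include <stddef.h>

#define NETIF_MSG_PROBE  0x0002
#define NETIF_MSG_LINK   0x0004
#define NETIF_MSG_IFUP   0x0008
#define NETIF_MSG_RX_ERR 0x0010

/* debug < 0 ? 0 : 1 << min(sizeof(int)-1, debug), as given above. */
static unsigned int msg_enable_from_debug(int debug)
{
	size_t cap = sizeof(int) - 1;

	if (debug < 0)
		return 0;
	return 1u << ((size_t)debug < cap ? (size_t)debug : cap);
}

/* A driver tests the bitmap rather than comparing debug levels. */
static int msg_link_enabled(unsigned int msg_enable)
{
	return (msg_enable & NETIF_MSG_LINK) != 0;
}
```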
Documentation/translations/zh_CN/networking/netmem.rst (+92)
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + .. include:: ../disclaimer-zh_CN.rst 4 + 5 + :Original: Documentation/networking/netmem.rst 6 + 7 + :翻译: 8 + 9 + 王亚鑫 Wang Yaxin <wang.yaxin@zte.com.cn> 10 + 11 + ================== 12 + 网络驱动支持Netmem 13 + ================== 14 + 15 + 本文档概述了网络驱动支持netmem(一种抽象内存类型)的要求,该内存类型 16 + 支持设备内存 TCP 等功能。通过支持netmem,驱动可以灵活适配不同底层内 17 + 存类型(如设备内存TCP),且无需或仅需少量修改。 18 + 19 + Netmem的优势: 20 + 21 + * 灵活性:netmem 可由不同内存类型(如 struct page、DMA-buf)支持, 22 + 使驱动程序能够支持设备内存 TCP 等各种用例。 23 + * 前瞻性:支持netmem的驱动可无缝适配未来依赖此功能的新特性。 24 + * 简化开发:驱动通过统一API与netmem交互,无需关注底层内存的实现差异。 25 + 26 + 驱动RX要求 27 + ========== 28 + 29 + 1. 驱动必须支持page_pool。 30 + 31 + 2. 驱动必须支持tcp-data-split ethtool选项。 32 + 33 + 3. 驱动必须使用page_pool netmem API处理有效载荷内存。当前netmem API 34 + 与page API一一对应。转换时需要将page API替换为netmem API,并用驱动 35 + 中的netmem_refs跟踪内存而非 `struct page *`: 36 + 37 + - page_pool_alloc -> page_pool_alloc_netmem 38 + - page_pool_get_dma_addr -> page_pool_get_dma_addr_netmem 39 + - page_pool_put_page -> page_pool_put_netmem 40 + 41 + 目前并非所有页 pageAPI 都有对应的 netmem 等效接口。如果你的驱动程序 42 + 依赖某个尚未实现的 netmem API,请直接实现并提交至 netdev@邮件列表, 43 + 或联系维护者及 almasrymina@google.com 协助添加该 netmem API。 44 + 45 + 4. 驱动必须设置以下PP_FLAGS: 46 + 47 + - PP_FLAG_DMA_MAP:驱动程序无法对 netmem 执行 DMA 映射。此时驱动 48 + 程序必须将 DMA 映射操作委托给 page_pool,由其判断何时适合(或不适合) 49 + 进行 DMA 映射。 50 + - PP_FLAG_DMA_SYNC_DEV:驱动程序无法保证 netmem 的 DMA 地址一定能 51 + 完成 DMA 同步。此时驱动程序必须将 DMA 同步操作委托给 page_pool,由 52 + 其判断何时适合(或不适合)进行 DMA 同步。 53 + - PP_FLAG_ALLOW_UNREADABLE_NETMEM:仅当启用 tcp-data-split 时, 54 + 驱动程序必须显式设置此标志。 55 + 56 + 5. 驱动不得假设netmem可读或基于页。当netmem_address()返回NULL时,表示 57 + 内存不可读。驱动需正确处理不可读的netmem,例如,当netmem_address()返回 58 + NULL时,避免访问内容。 59 + 60 + 理想情况下,驱动程序不应通过netmem_is_net_iov()等辅助函数检查底层 61 + netmem 类型,也不应通过netmem_to_page()或netmem_to_net_iov()将 62 + netmem 转换为其底层类型。在大多数情况下,系统会提供抽象这些复杂性的 63 + netmem 或 page_pool 辅助函数(并可根据需要添加更多)。 64 + 65 + 6. 
驱动程序必须使用page_pool_dma_sync_netmem_for_cpu()代替dma_sync_single_range_for_cpu()。 66 + 对于某些内存提供者,CPU 的 DMA 同步将由 page_pool 完成;而对于其他提供者 67 + (特别是 dmabuf 内存提供者),CPU 的 DMA 同步由使用 dmabuf API 的用户空 68 + 间负责。驱动程序必须将整个 DMA 同步操作委托给 page_pool,以确保操作正确执行。 69 + 70 + 7. 避免在 page_pool 之上实现特定于驱动程序内存回收机制。由于 netmem 可能 71 + 不由struct page支持,驱动程序不能保留struct page来进行自定义回收。不过, 72 + 可为此目的通过page_pool_fragment_netmem()或page_pool_ref_netmem()保留 73 + page_pool 引用,但需注意某些 netmem 类型的循环时间可能更长(例如零拷贝场景 74 + 下用户空间持有引用的情况)。 75 + 76 + 驱动TX要求 77 + ========== 78 + 79 + 1. 驱动程序绝对不能直接把 netmem 的 dma_addr 传递给任何 dma-mapping API。这 80 + 是由于 netmem 的 dma_addr 可能源自 dma-buf 这类和 dma-mapping API 不兼容的 81 + 源头。 82 + 83 + 应当使用netmem_dma_unmap_page_attrs()和netmem_dma_unmap_addr_set()等辅助 84 + 函数来替代dma_unmap_page[_attrs]()、dma_unmap_addr_set()。不管 dma_addr 85 + 来源如何,netmem 的这些变体都能正确处理 netmem dma_addr,在合适的时候会委托给 86 + dma-mapping API 去处理。 87 + 88 + 目前,并非所有的 dma-mapping API 都有对应的 netmem 版本。要是你的驱动程序需要 89 + 使用某个还不存在的 netmem API,你可以自行添加并提交到 netdev@,也可以联系维护 90 + 人员或者发送邮件至 almasrymina@google.com 寻求帮助。 91 + 92 + 2. 驱动程序应通过设置 netdev->netmem_tx = true 来表明自身支持 netmem 功能。
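Requirement 5 above (never touch payload memory without checking readability) can be illustrated with a user-space mock. ``netmem_ref`` and ``netmem_address()`` here are stand-ins for the in-kernel page_pool netmem API, which is not available outside the kernel; only the guard pattern itself is the point:

```c
/* Mock sketch of the "unreadable netmem" guard described above.
 * netmem_ref / netmem_address() are stand-ins: in a real driver,
 * netmem_address() returns NULL for unreadable (e.g. device) memory. */
#include <stddef.h>
#include <stdint.h>

typedef uintptr_t netmem_ref;   /* stand-in for the kernel type */

/* Stand-in: NULL means the payload must not be dereferenced. */
static void *netmem_address(netmem_ref netmem)
{
	return (void *)netmem;
}

/* Copy up to len payload bytes, or -1 if the netmem is unreadable. */
static int copy_payload(netmem_ref netmem, uint8_t *dst, size_t len)
{
	const uint8_t *src = netmem_address(netmem);
	size_t i;

	if (!src)               /* unreadable netmem: don't touch it */
		return -1;
	for (i = 0; i < len; i++)
		dst[i] = src[i];
	return 0;
}
```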
Documentation/translations/zh_CN/networking/vxlan.rst (+85)
··· 1 + .. SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + .. include:: ../disclaimer-zh_CN.rst 3 + 4 + :Original: Documentation/networking/vxlan.rst 5 + 6 + :翻译: 7 + 8 + 范雨 Fan Yu <fan.yu9@zte.com.cn> 9 + 10 + :校译: 11 + 12 + - 邱禹潭 Qiu Yutan <qiu.yutan@zte.com.cn> 13 + - 徐鑫 xu xin <xu.xin16@zte.com.cn> 14 + 15 + ========================== 16 + 虚拟扩展本地局域网协议文档 17 + ========================== 18 + 19 + VXLAN 协议是一种隧道协议,旨在解决 IEEE 802.1q 中 VLAN ID(4096)有限的问题。 20 + VXLAN 将标识符的大小扩展到 24 位(16777216)。 21 + 22 + VXLAN 在 IETF RFC 7348 中进行了描述,并已由多家供应商设计实现。 23 + 该协议通过 UDP 协议运行,并使用特定目的端口。 24 + 本文档介绍了 Linux 内核隧道设备,Openvswitch 也有单独的 VXLAN 实现。 25 + 26 + 与大多数隧道不同,VXLAN 是 1 对 N 的网络,而不仅仅是点对点网络。 27 + VXLAN 设备可以通过类似于学习桥接器的方式动态学习另一端点的 IP 地址,也可以利用静态配置的转发条目。 28 + 29 + VXLAN 的管理方式与它的两个近邻 GRE 和 VLAN 相似。 30 + 配置 VXLAN 需要 iproute2 的版本与 VXLAN 首次向上游合并的内核版本相匹配。 31 + 32 + 1. 创建 vxlan 设备:: 33 + 34 + # ip link add vxlan0 type vxlan id 42 group 239.1.1.1 dev eth1 dstport 4789 35 + 36 + 这将创建一个名为 vxlan0 的网络设备,该设备通过 eth1 使用组播组 239.1.1.1 处理转发表中没有对应条目的流量。 37 + 目标端口号设置为 IANA 分配的值 4789,VXLAN 的 Linux 实现早于 IANA 选择标准目的端口号的时间。 38 + 因此默认使用 Linux 选择的值,以保持向后兼容性。 39 + 40 + 2. 删除 vxlan 设备:: 41 + 42 + # ip link delete vxlan0 43 + 44 + 3. 查看 vxlan 设备信息:: 45 + 46 + # ip -d link show vxlan0 47 + 48 + 使用新的 bridge 命令可以创建、销毁和显示 vxlan 转发表。 49 + 50 + 1. 创建vxlan转发表项:: 51 + 52 + # bridge fdb add to 00:17:42:8a:b4:05 dst 192.19.0.2 dev vxlan0 53 + 54 + 2. 删除vxlan转发表项:: 55 + 56 + # bridge fdb delete 00:17:42:8a:b4:05 dev vxlan0 57 + 58 + 3. 
显示vxlan转发表项:: 59 + 60 + # bridge fdb show dev vxlan0 61 + 62 + 以下网络接口控制器特性可能表明对 UDP 隧道相关的卸载支持(最常见的是 VXLAN 功能, 63 + 但是对特定封装协议的支持取决于网络接口控制器): 64 + 65 + - `tx-udp_tnl-segmentation` 66 + - `tx-udp_tnl-csum-segmentation` 67 + 对 UDP 封装帧执行 TCP 分段卸载的能力 68 + 69 + - `rx-udp_tunnel-port-offload` 70 + 在接收端解析 UDP 封装帧,使网络接口控制器能够执行协议感知卸载, 71 + 例如内部帧的校验和验证卸载(只有不带协议感知卸载的网络接口控制器才需要) 72 + 73 + 对于支持 `rx-udp_tunnel-port-offload` 的设备,可使用 `ethtool` 查询当前卸载端口的列表:: 74 + 75 + $ ethtool --show-tunnels eth0 76 + Tunnel information for eth0: 77 + UDP port table 0: 78 + Size: 4 79 + Types: vxlan 80 + No entries 81 + UDP port table 1: 82 + Size: 4 83 + Types: geneve, vxlan-gpe 84 + Entries (1): 85 + port 1230, vxlan-gpe
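The encapsulation described above can be sketched concretely: VXLAN prepends an 8-byte header carrying a 24-bit VNI inside a UDP datagram (IANA port 4789), with the layout defined by RFC 7348. The helper names below are illustrative:

```c
/* Sketch of the VXLAN header (RFC 7348): 8 bytes, flags + 24-bit VNI,
 * carried over UDP (IANA-assigned destination port 4789). */
#include <stdint.h>

#define VXLAN_PORT_IANA 4789
#define VXLAN_FLAG_VNI  0x08   /* "I" flag: VNI field is valid */

/* Write the 8-byte VXLAN header for the given 24-bit VNI. */
static void vxlan_build_header(uint8_t hdr[8], uint32_t vni)
{
	hdr[0] = VXLAN_FLAG_VNI;           /* flags              */
	hdr[1] = hdr[2] = hdr[3] = 0;      /* reserved           */
	hdr[4] = (vni >> 16) & 0xff;       /* VNI, network order */
	hdr[5] = (vni >> 8) & 0xff;
	hdr[6] = vni & 0xff;
	hdr[7] = 0;                        /* reserved           */
}

/* Read the VNI back out of a header. */
static uint32_t vxlan_header_vni(const uint8_t hdr[8])
{
	return ((uint32_t)hdr[4] << 16) | ((uint32_t)hdr[5] << 8) | hdr[6];
}
```

For example, the ``ip link add ... id 42 ... dstport 4789`` command shown above results in headers whose VNI field encodes 42.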
Documentation/translations/zh_CN/networking/xfrm_proc.rst (+126)
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + .. include:: ../disclaimer-zh_CN.rst 4 + 5 + :Original: Documentation/networking/xfrm_proc.rst 6 + 7 + :翻译: 8 + 9 + 王亚鑫 Wang Yaxin <wang.yaxin@zte.com.cn> 10 + 11 + ================================= 12 + XFRM proc - /proc/net/xfrm_* 文件 13 + ================================= 14 + 15 + 作者:Masahide NAKAMURA <nakam@linux-ipv6.org> 16 + 17 + 18 + 转换统计信息 19 + ------------ 20 + 21 + `xfrm_proc` 提供一组统计计数器,显示转换过程中丢弃的数据包及其原因。 22 + 这些计数器属于Linux私有MIB的一部分,可通过 `/proc/net/xfrm_stat` 23 + 查看。 24 + 25 + 入站错误 26 + ~~~~~~~~ 27 + 28 + XfrmInError: 29 + 未匹配其他类别的所有错误 30 + 31 + XfrmInBufferError: 32 + 缓冲区不足 33 + 34 + XfrmInHdrError: 35 + 头部错误 36 + 37 + XfrmInNoStates: 38 + 未找到状态 39 + (入站SPI、地址或SA的IPsec协议不匹配) 40 + 41 + XfrmInStateProtoError: 42 + 转换协议相关的错误 43 + (如SA密钥错误) 44 + 45 + XfrmInStateModeError: 46 + 转换模式相关的错误 47 + 48 + XfrmInStateSeqError: 49 + 序列号错误 50 + 序列号超出窗口范围 51 + 52 + XfrmInStateExpired: 53 + 状态已过期 54 + 55 + XfrmInStateMismatch: 56 + 状态选项不匹配 57 + (如UDP封装类型不匹配) 58 + 59 + XfrmInStateInvalid: 60 + 无效状态 61 + 62 + XfrmInTmplMismatch: 63 + 状态模板不匹配 64 + (如入站SA正确但SP规则错误) 65 + 66 + XfrmInNoPols: 67 + 未找到状态的对应策略 68 + (如入站SA正确但无SP规则) 69 + 70 + XfrmInPolBlock: 71 + 丢弃的策略 72 + 73 + XfrmInPolError: 74 + 错误的策略 75 + 76 + XfrmAcquireError: 77 + 状态未完全获取即被使用 78 + 79 + XfrmFwdHdrError: 80 + 转发路由禁止 81 + 82 + XfrmInStateDirError: 83 + 状态方向不匹配 84 + (输入路径查找到输出状态,预期是输入状态或者无方向) 85 + 86 + 出站错误 87 + ~~~~~~~~ 88 + XfrmOutError: 89 + 未匹配其他类别的所有错误 90 + 91 + XfrmOutBundleGenError: 92 + 捆绑包生成错误 93 + 94 + XfrmOutBundleCheckError: 95 + 捆绑包校验错误 96 + 97 + XfrmOutNoStates: 98 + 未找到状态 99 + 100 + XfrmOutStateProtoError: 101 + 转换协议特定错误 102 + 103 + XfrmOutStateModeError: 104 + 转换模式特定错误 105 + 106 + XfrmOutStateSeqError: 107 + 序列号错误 108 + (序列号溢出) 109 + 110 + XfrmOutStateExpired: 111 + 状态已过期 112 + 113 + XfrmOutPolBlock: 114 + 丢弃策略 115 + 116 + XfrmOutPolDead: 117 + 失效策略 118 + 119 + XfrmOutPolError: 120 + 错误策略 121 + 122 + XfrmOutStateInvalid: 123 + 无效状态(可能已过期) 124 + 
125 + XfrmOutStateDirError: 126 + 状态方向不匹配(输出路径查找到输入状态,预期为输出状态或无方向)
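``/proc/net/xfrm_stat`` is a flat list of ``CounterName value`` lines, so reading one of the counters documented above reduces to a simple text lookup. The helper below is an illustrative sketch (the sample input used in testing is invented, not real output):

```c
/* Sketch: look up one counter in /proc/net/xfrm_stat-style text,
 * i.e. lines of the form "CounterName value". */
#include <stdio.h>
#include <string.h>

/* Return the counter's value, or -1 if it is not present. */
static long xfrm_stat_lookup(const char *text, const char *name)
{
	const char *p = text;
	char key[64];
	long val;

	while (p && *p) {
		if (sscanf(p, "%63s %ld", key, &val) == 2 &&
		    strcmp(key, name) == 0)
			return val;
		p = strchr(p, '\n');   /* advance to the next line */
		if (p)
			p++;
	}
	return -1;
}
```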
Documentation/translations/zh_CN/process/1.Intro.rst (+5 -5)
··· 182 182 可以获得所有版权所有者的同意(或者从内核中删除他们的代码)。因此,尤其是在 183 183 可预见的将来,许可证不大可能迁移到GPL的版本3。 184 184 185 - 所有贡献给内核的代码都必须是合法的免费软件。因此,不接受匿名(或化名)贡献 186 - 者的代码。所有贡献者都需要在他们的代码上“sign off(签发)”,声明代码可以 187 - 在GPL下与内核一起分发。无法提供未被其所有者许可为免费软件的代码,或可能为 188 - 内核造成版权相关问题的代码(例如,由缺乏适当保护的反向工程工作派生的代码) 189 - 不能被接受。 185 + 所有贡献给内核的代码都必须是合法的免费软件。因此,出于这个原因,身份不明的 186 + 贡献者或匿名贡献者提交的代码将不予接受。所有贡献者都需要在他们的代码上 187 + “sign off(签发)”,声明代码可以在GPL下与内核一起分发。无法提供未被其所有者 188 + 许可为免费软件的代码,或可能为内核造成版权相关问题的代码(例如,由缺乏适当 189 + 保护的反向工程工作派生的代码)不能被接受。 190 190 191 191 有关版权问题的提问在Linux开发邮件列表中很常见。这样的问题通常会得到不少答案, 192 192 但请记住,回答这些问题的人不是律师,不能提供法律咨询。如果您有关于Linux源代码
Documentation/translations/zh_CN/process/2.Process.rst (+3 -4)
··· 292 292 一个潜在的危险,他们可能会被一堆电子邮件淹没、违反Linux列表上使用的约定, 293 293 或者两者兼而有之。 294 294 295 - 大多数内核邮件列表都在vger.kernel.org上运行;主列表位于: 295 + 大多数内核邮件列表都托管在 kernel.org;主列表位于: 296 296 297 - http://vger.kernel.org/vger-lists.html 297 + https://subspace.kernel.org 298 298 299 - 不过,也有一些列表托管在别处;其中一些列表位于 300 - redhat.com/mailman/listinfo。 299 + 其他地方也有邮件列表;请查看 MAINTAINERS 文件,获取与特定子系统相关的列表。 301 300 302 301 当然,内核开发的核心邮件列表是linux-kernel。这个列表是一个令人生畏的地方: 303 302 每天的信息量可以达到500条,噪音很高,谈话技术性很强,且参与者并不总是表现出
Documentation/translations/zh_CN/process/5.Posting.rst (+11)
··· 177 177 178 178 - Reported-by: 指定报告此补丁修复的问题的用户;此标记用于表示感谢。 179 179 180 + - Suggested-by: 表示该补丁思路由所提及的人提出,确保其创意贡献获得认可。 181 + 这有望激励他们在未来继续提供帮助。 182 + 180 183 - Cc:指定某人收到了补丁的副本,并有机会对此发表评论。 181 184 182 185 在补丁中添加标签时要小心:只有Cc:才适合在没有指定人员明确许可的情况下添加。 186 + 187 + 在补丁中添加上述标签时需谨慎,因为除了 Cc:、Reported-by: 和 Suggested-by:, 188 + 所有其他标签都需要被提及者的明确许可。对于这三个标签,若根据 lore 归档或提交 189 + 历史记录,相关人员使用该姓名和电子邮件地址为 Linux 内核做出过贡献,则隐含许可 190 + 已足够 -- 对于 Reported-by: 和 Suggested-by:,需确保报告或建议是公开进行的。 191 + 请注意,从这个意义上讲,bugzilla.kernel.org 属于公开场合,但其使用的电子邮件地址 192 + 属于私人信息;因此,除非相关人员曾在早期贡献中使用过这些邮箱,否则请勿在标签中 193 + 公开它们。 183 194 184 195 寄送补丁 185 196 --------
Documentation/translations/zh_CN/process/6.Followthrough.rst (+5)
··· 49 49 变。他们真的,几乎毫无例外地,致力于创造他们所能做到的最好的内核;他们并 50 50 没有试图给雇主的竞争对手造成不适。 51 51 52 + - 请准备好应对看似“愚蠢”的代码风格修改请求,以及将部分代码拆分到内核 53 + 共享模块的要求。维护者的职责之一是保持整体风格的一致性。有时这意味着, 54 + 你在驱动中为解决某一问题而采用的巧妙取巧方案,实际上需要被提炼为通用的 55 + 内核特性,以便未来复用。 56 + 52 57 所有这些归根结底就是,当审阅者向您发送评论时,您需要注意他们正在进行的技术 53 58 评论。不要让他们的表达方式或你自己的骄傲阻止此事。当你在一个补丁上得到评论 54 59 时,花点时间去理解评论人想说什么。如果可能的话,请修复审阅者要求您修复的内
Documentation/translations/zh_CN/process/7.AdvancedTopics.rst (+14)
··· 113 113 更改。在这方面 git request-pull 命令非常有用;它将按照其他开发人员所期望的 114 114 格式化请求,并检查以确保您已记得将这些更改推送到公共服务器。 115 115 116 + .. _cn_development_advancedtopics_reviews: 117 + 116 118 审阅补丁 117 119 -------- 118 120 ··· 128 126 的建议是:把审阅评论当成问题而不是批评。询问“在这条路径中如何释放锁?” 129 127 总是比说“这里的锁是错误的”更好。 130 128 129 + 当出现分歧时,另一个有用的技巧是邀请他人参与讨论。如果交流数次后讨论陷入僵局, 130 + 可征求其他评审者或维护者的意见。通常,与某一评审者意见一致的人往往会保持沉默, 131 + 除非被主动询问。众人意见会产生成倍的影响力。 132 + 131 133 不同的开发人员将从不同的角度审查代码。部分人会主要关注代码风格以及代码行是 132 134 否有尾随空格。其他人会主要关注补丁作为一个整体实现的变更是否对内核有好处。 133 135 同时也有人会检查是否存在锁问题、堆栈使用过度、可能的安全问题、在其他地方 134 136 发现的代码重复、足够的文档、对性能的不利影响、用户空间ABI更改等。所有类型 135 137 的检查,只要它们能引导更好的代码进入内核,都是受欢迎和值得的。 138 + 139 + 使用诸如 ``Reviewed-by`` 这类特定标签并无严格要求。事实上,即便提供了标签,也 140 + 更鼓励用平实的英文撰写评审意见,因为这样的内容信息量更大,例如,“我查看了此次 141 + 提交中 A、B、C 等方面的内容,认为没有问题。”显然,以某种形式提供评审信息或回复 142 + 是必要的,否则维护者将完全无法知晓评审者是否已查看过补丁! 143 + 144 + 最后但同样重要的是,补丁评审可能会变成一个聚焦于指出问题的负面过程。请偶尔给予 145 + 称赞,尤其是对新手贡献者!
Documentation/translations/zh_CN/staging/index.rst (+1 -1)
··· 13 13 .. toctree:: 14 14 :maxdepth: 2 15 15 16 + speculation 16 17 xz 17 18 18 19 TODOList: ··· 22 21 * lzo 23 22 * remoteproc 24 23 * rpmsg 25 - * speculation 26 24 * static-keys 27 25 * tee
Documentation/translations/zh_CN/staging/speculation.rst (+85)
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + .. include:: ../disclaimer-zh_CN.rst 3 + 4 + :Original: Documentation/staging/speculation.rst 5 + 6 + :翻译: 7 + 8 + 崔巍 Cui Wei <chris.wei.cui@gmail.com> 9 + 10 + ======== 11 + 推测执行 12 + ======== 13 + 14 + 本文档解释了推测执行的潜在影响,以及如何使用通用API来减轻不良影响。 15 + 16 + ------------------------------------------------------------------------------ 17 + 18 + 为提高性能并减少平均延迟,许多现代处理器都采用分支预测等推测执行技术,执行结果 19 + 可能在后续阶段被丢弃。 20 + 21 + 通常情况下,我们无法从架构状态(如寄存器内容)观察到推测执行。然而,在某些情况 22 + 下从微架构状态观察其影响是可能的,例如数据是否存在于缓存中。这种状态可能会形成 23 + 侧信道,通过观察侧信道可以提取秘密信息。 24 + 25 + 例如,在分支预测存在的情况下,边界检查可能被推测执行的代码忽略。考虑以下代码:: 26 + 27 + int load_array(int *array, unsigned int index) 28 + { 29 + if (index >= MAX_ARRAY_ELEMS) 30 + return 0; 31 + else 32 + return array[index]; 33 + } 34 + 35 + 在arm64上,可以编译成如下汇编序列:: 36 + 37 + CMP <index>, #MAX_ARRAY_ELEMS 38 + B.LT less 39 + MOV <returnval>, #0 40 + RET 41 + less: 42 + LDR <returnval>, [<array>, <index>] 43 + RET 44 + 45 + 处理器有可能误预测条件分支,并推测性装载array[index],即使index >= MAX_ARRAY_ELEMS。 46 + 这个值随后会被丢弃,但推测的装载可能会影响微架构状态,随后可被测量到。 47 + 48 + 涉及多个依赖内存访问的更复杂序列可能会导致敏感信息泄露。以前面的示例为基础,考虑 49 + 以下代码:: 50 + 51 + int load_dependent_arrays(int *arr1, int *arr2, int index) 52 + { 53 + int val1, val2, 54 + 55 + val1 = load_array(arr1, index); 56 + val2 = load_array(arr2, val1); 57 + 58 + return val2; 59 + } 60 + 61 + 根据推测,对load_array()的第一次调用可能会返回一个越界地址的值,而第二次调用将影响 62 + 依赖于该值的微架构状态。这可能会提供一个任意读取原语。 63 + 64 + 缓解推测执行侧信道 65 + ================== 66 + 67 + 内核提供了一个通用API以确保即使在推测情况下也能遵守边界检查。受推测执行侧信道影响 68 + 的架构应当实现这些原语。 69 + 70 + <linux/nospec.h>中的array_index_nospec()辅助函数可用于防止信息通过侧信道泄漏。 71 + 72 + 调用array_index_nospec(index, size)将返回一个经过净化的索引值,即使在CPU推测执行 73 + 条件下,该值也会被严格限制在[0, size)范围内。 74 + 75 + 这可以用来保护前面的load_array()示例:: 76 + 77 + int load_array(int *array, unsigned int index) 78 + { 79 + if (index >= MAX_ARRAY_ELEMS) 80 + return 0; 81 + else { 82 + index = array_index_nospec(index, MAX_ARRAY_ELEMS); 83 + return array[index]; 84 + } 85 + }
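The masking trick behind ``array_index_nospec()`` in the document above can be sketched portably. This mirrors the generic ``array_index_mask_nospec()`` fallback in ``include/linux/nospec.h`` (architectures may override it); kernel code should include that header rather than open-coding the mask:

```c
/* Sketch of array_index_nospec(): compute a branchless mask that is
 * all-ones when index < size and zero otherwise, so an out-of-bounds
 * index collapses to 0 even under speculative execution. */
#include <stdint.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* All-ones if index < size, 0 otherwise, with no conditional branch:
 * (size - 1 - index) underflows (sets the top bit) iff index >= size. */
static unsigned long index_mask_nospec(unsigned long index,
				       unsigned long size)
{
	return ~(long)(index | (size - 1UL - index)) >> (BITS_PER_LONG - 1);
}

/* Sanitized index, clamped to [0, size) even under misprediction. */
static unsigned long index_nospec(unsigned long index, unsigned long size)
{
	return index & index_mask_nospec(index, size);
}
```

An in-bounds index passes through unchanged; an out-of-bounds one is forced to 0, so the speculative load in the earlier ``load_array()`` example can no longer reach attacker-controlled addresses.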
Documentation/usb/gadget-testing.rst (+1 -1)
··· 874 874 875 875 with these patches: 876 876 877 - http://www.spinics.net/lists/linux-usb/msg99220.html 877 + https://lore.kernel.org/r/1386675637-18243-1-git-send-email-r.baldyga@samsung.com/ 878 878 879 879 host:: 880 880
Documentation/userspace-api/fwctl/fwctl.rst (+15 -15)
··· 54 54 construction of drives within the HW RAID. 55 55 56 56 In the past when devices were more single function, individual subsystems would 57 - grow different approaches to solving some of these common problems. For instance 57 + grow different approaches to solving some of these common problems. For instance, 58 58 monitoring device health, manipulating its FLASH, debugging the FW, 59 59 provisioning, all have various unique interfaces across the kernel. 60 60 ··· 87 87 3. Multiple VM functions tightly scoped within the VM 88 88 89 89 The device may create a logical parent/child relationship between these scopes. 90 - For instance a child VM's FW may be within the scope of the hypervisor FW. It is 90 + For instance, a child VM's FW may be within the scope of the hypervisor FW. It is 91 91 quite common in the VFIO world that the hypervisor environment has a complex 92 92 provisioning/profiling/configuration responsibility for the function VFIO 93 93 assigns to the VM. ··· 105 105 106 106 3. Write access to function & child debug information strictly compatible with 107 107 the principles of kernel lockdown and kernel integrity protection. Triggers 108 - a kernel Taint. 108 + a kernel taint. 109 109 110 - 4. Full debug device access. Triggers a kernel Taint, requires CAP_SYS_RAWIO. 110 + 4. Full debug device access. Triggers a kernel taint, requires CAP_SYS_RAWIO. 111 111 112 112 User space will provide a scope label on each RPC and the kernel must enforce the 113 113 above CAPs and taints based on that scope. A combination of kernel and FW can 114 114 enforce that RPCs are placed in the correct scope by user space. 115 115 116 - Denied behavior 117 - --------------- 116 + Disallowed behavior 117 + ------------------- 118 118 119 119 There are many things this interface must not allow user space to do (without a 120 - Taint or CAP), broadly derived from the principles of kernel lockdown. 
Some 120 + taint or CAP), broadly derived from the principles of kernel lockdown. Some 121 121 examples: 122 122 123 123 1. DMA to/from arbitrary memory, hang the system, compromise FW integrity with ··· 138 138 fwctl is not a replacement for device direct access subsystems like uacce or 139 139 VFIO. 140 140 141 - Operations exposed through fwctl's non-taining interfaces should be fully 142 - sharable with other users of the device. For instance exposing a RPC through 141 + Operations exposed through fwctl's non-tainting interfaces should be fully 142 + sharable with other users of the device. For instance, exposing a RPC through 143 143 fwctl should never prevent a kernel subsystem from also concurrently using that 144 144 same RPC or hardware unit down the road. In such cases fwctl will be less 145 145 important than proper kernel subsystems that eventually emerge. Mistakes in this ··· 225 225 226 226 Each device type must be mindful of Linux's philosophy for stable ABI. The FW 227 227 RPC interface does not have to meet a strictly stable ABI, but it does need to 228 - meet an expectation that userspace tools that are deployed and in significant 228 + meet an expectation that user space tools that are deployed and in significant 229 229 use don't needlessly break. FW upgrade and kernel upgrade should keep widely 230 230 deployed tooling working. 231 231 232 232 Development and debugging focused RPCs under more permissive scopes can have 233 - less stabilitiy if the tools using them are only run under exceptional 233 + less stability if the tools using them are only run under exceptional 234 234 circumstances and not for every day use of the device. Debugging tools may even 235 235 require exact version matching as they may require something similar to DWARF 236 236 debug information from the FW binary. ··· 261 261 - HW RAID controllers. 
This includes RPCs to do things like compose drives into 262 262 a RAID volume, configure RAID parameters, monitor the HW and more. 263 263 264 - - Baseboard managers. RPCs for configuring settings in the device and more 264 + - Baseboard managers. RPCs for configuring settings in the device and more. 265 265 266 266 - NVMe vendor command capsules. nvme-cli provides access to some monitoring 267 267 functions that different products have defined, but more exist. ··· 269 269 - CXL also has a NVMe-like vendor command system. 270 270 271 271 - DRM allows user space drivers to send commands to the device via kernel 272 - mediation 272 + mediation. 273 273 274 274 - RDMA allows user space drivers to directly push commands to the device 275 - without kernel involvement 275 + without kernel involvement. 276 276 277 277 - Various “raw” APIs, raw HID (SDL2), raw USB, NVMe Generic Interface, etc. 278 278 279 279 The first 4 are examples of areas that fwctl intends to cover. The latter three 280 - are examples of denied behavior as they fully overlap with the primary purpose 280 + are examples of disallowed behavior as they fully overlap with the primary purpose 281 281 of a kernel subsystem. 282 282 283 283 Some key lessons learned from these past efforts are the importance of having a
Documentation/userspace-api/ioctl/ioctl-number.rst (+277 -273)
··· 10 10 If you are adding new ioctl's to the kernel, you should use the _IO 11 11 macros defined in <linux/ioctl.h>: 12 12 13 - ====== == ============================================ 14 - _IO an ioctl with no parameters 15 - _IOW an ioctl with write parameters (copy_from_user) 16 - _IOR an ioctl with read parameters (copy_to_user) 17 - _IOWR an ioctl with both write and read parameters. 18 - ====== == ============================================ 13 + ====== =========================== 14 + macro parameters 15 + ====== =========================== 16 + _IO none 17 + _IOW write (read from userspace) 18 + _IOR read (write to userpace) 19 + _IOWR write and read 20 + ====== =========================== 19 21 20 22 'Write' and 'read' are from the user's point of view, just like the 21 23 system calls 'write' and 'read'. For example, a SET_FOO ioctl would ··· 25 23 a GET_FOO ioctl would be _IOR, although the kernel would actually write 26 24 data to user space. 27 25 28 - The first argument to _IO, _IOW, _IOR, or _IOWR is an identifying letter 29 - or number from the table below. Because of the large number of drivers, 30 - many drivers share a partial letter with other drivers. 26 + The first argument to the macros is an identifying letter or number from 27 + the table below. Because of the large number of drivers, many drivers 28 + share a partial letter with other drivers. 31 29 32 30 If you are writing a driver for a new device and need a letter, pick an 33 31 unused block with enough room for expansion: 32 to 256 ioctl commands ··· 35 33 submitting the patch through :doc:`usual patch submission process 36 34 </process/submitting-patches>`. 37 35 38 - The second argument to _IO, _IOW, _IOR, or _IOWR is a sequence number 39 - to distinguish ioctls from each other. The third argument to _IOW, 40 - _IOR, or _IOWR is the type of the data going into the kernel or coming 41 - out of the kernel (e.g. 'int' or 'struct foo'). NOTE! 
Do NOT use 42 - sizeof(arg) as the third argument as this results in your ioctl thinking 43 - it passes an argument of type size_t. 36 + The second argument is a sequence number to distinguish ioctls from each 37 + other. The third argument (not applicable to _IO) is the type of the data 38 + going into the kernel or coming out of the kernel (e.g. 'int' or 39 + 'struct foo'). 40 + 41 + .. note:: 42 + Do NOT use sizeof(arg) as the third argument as this results in your 43 + ioctl thinking it passes an argument of type size_t. 44 44 45 45 Some devices use their major number as the identifier; this is OK, as 46 46 long as it is unique. Some devices are irregular and don't follow any ··· 55 51 error rather than some unexpected behaviour. 56 52 57 53 (2) The 'strace' build procedure automatically finds ioctl numbers 58 - defined with _IO, _IOW, _IOR, or _IOWR. 54 + defined with the macros. 59 55 60 56 (3) 'strace' can decode numbers back into useful names when the 61 57 numbers are unique. ··· 69 65 This table lists ioctls visible from userland, excluding ones from 70 66 drivers/staging/. 71 67 72 - ==== ===== ======================================================= ================================================================ 73 - Code Seq# Include File Comments 68 + ==== ===== ========================================================= ================================================================ 69 + Code Seq# Include File Comments 74 70 (hex) 75 - ==== ===== ======================================================= ================================================================ 76 - 0x00 00-1F linux/fs.h conflict! 77 - 0x00 00-1F scsi/scsi_ioctl.h conflict! 78 - 0x00 00-1F linux/fb.h conflict! 79 - 0x00 00-1F linux/wavefront.h conflict! 71 + ==== ===== ========================================================= ================================================================ 72 + 0x00 00-1F linux/fs.h conflict! 73 + 0x00 00-1F scsi/scsi_ioctl.h conflict! 
74 + 0x00 00-1F linux/fb.h conflict! 75 + 0x00 00-1F linux/wavefront.h conflict! 80 76 0x02 all linux/fd.h 81 77 0x03 all linux/hdreg.h 82 - 0x04 D2-DC linux/umsdos_fs.h Dead since 2.6.11, but don't reuse these. 78 + 0x04 D2-DC linux/umsdos_fs.h Dead since 2.6.11, but don't reuse these. 83 79 0x06 all linux/lp.h 84 80 0x07 9F-D0 linux/vmw_vmci_defs.h, uapi/linux/vm_sockets.h 85 81 0x09 all linux/raid/md_u.h 86 82 0x10 00-0F drivers/char/s390/vmcp.h 87 83 0x10 10-1F arch/s390/include/uapi/sclp_ctl.h 88 84 0x10 20-2F arch/s390/include/uapi/asm/hypfs.h 89 - 0x12 all linux/fs.h BLK* ioctls 85 + 0x12 all linux/fs.h BLK* ioctls 90 86 linux/blkpg.h 91 87 linux/blkzoned.h 92 88 linux/blk-crypto.h 93 - 0x15 all linux/fs.h FS_IOC_* ioctls 94 - 0x1b all InfiniBand Subsystem 95 - <http://infiniband.sourceforge.net/> 89 + 0x15 all linux/fs.h FS_IOC_* ioctls 90 + 0x1b all InfiniBand Subsystem 91 + <http://infiniband.sourceforge.net/> 96 92 0x20 all drivers/cdrom/cm206.h 97 93 0x22 all scsi/sg.h 98 - 0x3E 00-0F linux/counter.h <mailto:linux-iio@vger.kernel.org> 94 + 0x3E 00-0F linux/counter.h <mailto:linux-iio@vger.kernel.org> 99 95 '!' 
00-1F uapi/linux/seccomp.h 100 - '#' 00-3F IEEE 1394 Subsystem 101 - Block for the entire subsystem 96 + '#' 00-3F IEEE 1394 Subsystem 97 + Block for the entire subsystem 102 98 '$' 00-0F linux/perf_counter.h, linux/perf_event.h 103 - '%' 00-0F include/uapi/linux/stm.h System Trace Module subsystem 104 - <mailto:alexander.shishkin@linux.intel.com> 99 + '%' 00-0F include/uapi/linux/stm.h System Trace Module subsystem 100 + <mailto:alexander.shishkin@linux.intel.com> 105 101 '&' 00-07 drivers/firewire/nosy-user.h 106 - '*' 00-1F uapi/linux/user_events.h User Events Subsystem 107 - <mailto:linux-trace-kernel@vger.kernel.org> 108 - '1' 00-1F linux/timepps.h PPS kit from Ulrich Windl 109 - <ftp://ftp.de.kernel.org/pub/linux/daemons/ntp/PPS/> 102 + '*' 00-1F uapi/linux/user_events.h User Events Subsystem 103 + <mailto:linux-trace-kernel@vger.kernel.org> 104 + '1' 00-1F linux/timepps.h PPS kit from Ulrich Windl 105 + <ftp://ftp.de.kernel.org/pub/linux/daemons/ntp/PPS/> 110 106 '2' 01-04 linux/i2o.h 111 - '3' 00-0F drivers/s390/char/raw3270.h conflict! 112 - '3' 00-1F linux/suspend_ioctls.h, conflict! 107 + '3' 00-0F drivers/s390/char/raw3270.h conflict! 108 + '3' 00-1F linux/suspend_ioctls.h, conflict! 113 109 kernel/power/user.c 114 - '8' all SNP8023 advanced NIC card 115 - <mailto:mcr@solidum.com> 110 + '8' all SNP8023 advanced NIC card 111 + <mailto:mcr@solidum.com> 116 112 ';' 64-7F linux/vfio.h 117 113 ';' 80-FF linux/iommufd.h 118 - '=' 00-3f uapi/linux/ptp_clock.h <mailto:richardcochran@gmail.com> 119 - '@' 00-0F linux/radeonfb.h conflict! 120 - '@' 00-0F drivers/video/aty/aty128fb.c conflict! 121 - 'A' 00-1F linux/apm_bios.h conflict! 122 - 'A' 00-0F linux/agpgart.h, conflict! 114 + '=' 00-3f uapi/linux/ptp_clock.h <mailto:richardcochran@gmail.com> 115 + '@' 00-0F linux/radeonfb.h conflict! 116 + '@' 00-0F drivers/video/aty/aty128fb.c conflict! 117 + 'A' 00-1F linux/apm_bios.h conflict! 118 + 'A' 00-0F linux/agpgart.h, conflict! 
123 119 drivers/char/agp/compat_ioctl.h
124     - 'A' 00-7F sound/asound.h conflict!
125     - 'B' 00-1F linux/cciss_ioctl.h conflict!
126     - 'B' 00-0F include/linux/pmu.h conflict!
127     - 'B' C0-FF advanced bbus <mailto:maassen@uni-freiburg.de>
128     - 'B' 00-0F xen/xenbus_dev.h conflict!
129     - 'C' all linux/soundcard.h conflict!
130     - 'C' 01-2F linux/capi.h conflict!
131     - 'C' F0-FF drivers/net/wan/cosa.h conflict!
120     + 'A' 00-7F sound/asound.h conflict!
121     + 'B' 00-1F linux/cciss_ioctl.h conflict!
122     + 'B' 00-0F include/linux/pmu.h conflict!
123     + 'B' C0-FF advanced bbus <mailto:maassen@uni-freiburg.de>
124     + 'B' 00-0F xen/xenbus_dev.h conflict!
125     + 'C' all linux/soundcard.h conflict!
126     + 'C' 01-2F linux/capi.h conflict!
127     + 'C' F0-FF drivers/net/wan/cosa.h conflict!
132 128 'D' all arch/s390/include/asm/dasd.h
133     - 'D' 40-5F drivers/scsi/dpt/dtpi_ioctl.h Dead since 2022
129     + 'D' 40-5F drivers/scsi/dpt/dtpi_ioctl.h Dead since 2022
134 130 'D' 05 drivers/scsi/pmcraid.h
135     - 'E' all linux/input.h conflict!
136     - 'E' 00-0F xen/evtchn.h conflict!
137     - 'F' all linux/fb.h conflict!
138     - 'F' 01-02 drivers/scsi/pmcraid.h conflict!
139     - 'F' 20 drivers/video/fsl-diu-fb.h conflict!
140     - 'F' 20 linux/ivtvfb.h conflict!
141     - 'F' 20 linux/matroxfb.h conflict!
142     - 'F' 20 drivers/video/aty/atyfb_base.c conflict!
143     - 'F' 00-0F video/da8xx-fb.h conflict!
144     - 'F' 80-8F linux/arcfb.h conflict!
145     - 'F' DD video/sstfb.h conflict!
146     - 'G' 00-3F drivers/misc/sgi-gru/grulib.h conflict!
147     - 'G' 00-0F xen/gntalloc.h, xen/gntdev.h conflict!
148     - 'H' 00-7F linux/hiddev.h conflict!
149     - 'H' 00-0F linux/hidraw.h conflict!
150     - 'H' 01 linux/mei.h conflict!
151     - 'H' 02 linux/mei.h conflict!
152     - 'H' 03 linux/mei.h conflict!
153     - 'H' 00-0F sound/asound.h conflict!
154     - 'H' 20-40 sound/asound_fm.h conflict!
155     - 'H' 80-8F sound/sfnt_info.h conflict!
156     - 'H' 10-8F sound/emu10k1.h conflict!
157     - 'H' 10-1F sound/sb16_csp.h conflict!
158     - 'H' 10-1F sound/hda_hwdep.h conflict!
159     - 'H' 40-4F sound/hdspm.h conflict!
160     - 'H' 40-4F sound/hdsp.h conflict!
131     + 'E' all linux/input.h conflict!
132     + 'E' 00-0F xen/evtchn.h conflict!
133     + 'F' all linux/fb.h conflict!
134     + 'F' 01-02 drivers/scsi/pmcraid.h conflict!
135     + 'F' 20 drivers/video/fsl-diu-fb.h conflict!
136     + 'F' 20 linux/ivtvfb.h conflict!
137     + 'F' 20 linux/matroxfb.h conflict!
138     + 'F' 20 drivers/video/aty/atyfb_base.c conflict!
139     + 'F' 00-0F video/da8xx-fb.h conflict!
140     + 'F' 80-8F linux/arcfb.h conflict!
141     + 'F' DD video/sstfb.h conflict!
142     + 'G' 00-3F drivers/misc/sgi-gru/grulib.h conflict!
143     + 'G' 00-0F xen/gntalloc.h, xen/gntdev.h conflict!
144     + 'H' 00-7F linux/hiddev.h conflict!
145     + 'H' 00-0F linux/hidraw.h conflict!
146     + 'H' 01 linux/mei.h conflict!
147     + 'H' 02 linux/mei.h conflict!
148     + 'H' 03 linux/mei.h conflict!
149     + 'H' 00-0F sound/asound.h conflict!
150     + 'H' 20-40 sound/asound_fm.h conflict!
151     + 'H' 80-8F sound/sfnt_info.h conflict!
152     + 'H' 10-8F sound/emu10k1.h conflict!
153     + 'H' 10-1F sound/sb16_csp.h conflict!
154     + 'H' 10-1F sound/hda_hwdep.h conflict!
155     + 'H' 40-4F sound/hdspm.h conflict!
156     + 'H' 40-4F sound/hdsp.h conflict!
161 157 'H' 90 sound/usb/usx2y/usb_stream.h
162     - 'H' 00-0F uapi/misc/habanalabs.h conflict!
158     + 'H' 00-0F uapi/misc/habanalabs.h conflict!
163 159 'H' A0 uapi/linux/usb/cdc-wdm.h
164     - 'H' C0-F0 net/bluetooth/hci.h conflict!
165     - 'H' C0-DF net/bluetooth/hidp/hidp.h conflict!
166     - 'H' C0-DF net/bluetooth/cmtp/cmtp.h conflict!
167     - 'H' C0-DF net/bluetooth/bnep/bnep.h conflict!
168     - 'H' F1 linux/hid-roccat.h <mailto:erazor_de@users.sourceforge.net>
160     + 'H' C0-F0 net/bluetooth/hci.h conflict!
161     + 'H' C0-DF net/bluetooth/hidp/hidp.h conflict!
162     + 'H' C0-DF net/bluetooth/cmtp/cmtp.h conflict!
163     + 'H' C0-DF net/bluetooth/bnep/bnep.h conflict!
164     + 'H' F1 linux/hid-roccat.h <mailto:erazor_de@users.sourceforge.net>
169 165 'H' F8-FA sound/firewire.h
170     - 'I' all linux/isdn.h conflict!
171     - 'I' 00-0F drivers/isdn/divert/isdn_divert.h conflict!
172     - 'I' 40-4F linux/mISDNif.h conflict!
166     + 'I' all linux/isdn.h conflict!
167     + 'I' 00-0F drivers/isdn/divert/isdn_divert.h conflict!
168     + 'I' 40-4F linux/mISDNif.h conflict!
173 169 'K' all linux/kd.h
174     - 'L' 00-1F linux/loop.h conflict!
175     - 'L' 10-1F drivers/scsi/mpt3sas/mpt3sas_ctl.h conflict!
176     - 'L' E0-FF linux/ppdd.h encrypted disk device driver
177     -         <http://linux01.gwdg.de/~alatham/ppdd.html>
178     - 'M' all linux/soundcard.h conflict!
179     - 'M' 01-16 mtd/mtd-abi.h conflict!
170     + 'L' 00-1F linux/loop.h conflict!
171     + 'L' 10-1F drivers/scsi/mpt3sas/mpt3sas_ctl.h conflict!
172     + 'L' E0-FF linux/ppdd.h encrypted disk device driver
173     +         <http://linux01.gwdg.de/~alatham/ppdd.html>
174     + 'M' all linux/soundcard.h conflict!
175     + 'M' 01-16 mtd/mtd-abi.h conflict!
180 176 and drivers/mtd/mtdchar.c
181 177 'M' 01-03 drivers/scsi/megaraid/megaraid_sas.h
182     - 'M' 00-0F drivers/video/fsl-diu-fb.h conflict!
178     + 'M' 00-0F drivers/video/fsl-diu-fb.h conflict!
183 179 'N' 00-1F drivers/usb/scanner.h
184 180 'N' 40-7F drivers/block/nvme.c
185     - 'N' 80-8F uapi/linux/ntsync.h NT synchronization primitives
186     -         <mailto:wine-devel@winehq.org>
187     - 'O' 00-06 mtd/ubi-user.h UBI
188     - 'P' all linux/soundcard.h conflict!
189     - 'P' 60-6F sound/sscape_ioctl.h conflict!
190     - 'P' 00-0F drivers/usb/class/usblp.c conflict!
191     - 'P' 01-09 drivers/misc/pci_endpoint_test.c conflict!
192     - 'P' 00-0F xen/privcmd.h conflict!
193     - 'P' 00-05 linux/tps6594_pfsm.h conflict!
181     + 'N' 80-8F uapi/linux/ntsync.h NT synchronization primitives
182     +         <mailto:wine-devel@winehq.org>
183     + 'O' 00-06 mtd/ubi-user.h UBI
184     + 'P' all linux/soundcard.h conflict!
185     + 'P' 60-6F sound/sscape_ioctl.h conflict!
186     + 'P' 00-0F drivers/usb/class/usblp.c conflict!
187     + 'P' 01-09 drivers/misc/pci_endpoint_test.c conflict!
188     + 'P' 00-0F xen/privcmd.h conflict!
189     + 'P' 00-05 linux/tps6594_pfsm.h conflict!
194 190 'Q' all linux/soundcard.h
195     - 'R' 00-1F linux/random.h conflict!
196     - 'R' 01 linux/rfkill.h conflict!
191     + 'R' 00-1F linux/random.h conflict!
192     + 'R' 01 linux/rfkill.h conflict!
197 193 'R' 20-2F linux/trace_mmap.h
198 194 'R' C0-DF net/bluetooth/rfcomm.h
199 195 'R' E0 uapi/linux/fsl_mc.h
200     - 'S' all linux/cdrom.h conflict!
201     - 'S' 80-81 scsi/scsi_ioctl.h conflict!
202     - 'S' 82-FF scsi/scsi.h conflict!
203     - 'S' 00-7F sound/asequencer.h conflict!
204     - 'T' all linux/soundcard.h conflict!
205     - 'T' 00-AF sound/asound.h conflict!
206     - 'T' all arch/x86/include/asm/ioctls.h conflict!
207     - 'T' C0-DF linux/if_tun.h conflict!
208     - 'U' all sound/asound.h conflict!
209     - 'U' 00-CF linux/uinput.h conflict!
196     + 'S' all linux/cdrom.h conflict!
197     + 'S' 80-81 scsi/scsi_ioctl.h conflict!
198     + 'S' 82-FF scsi/scsi.h conflict!
199     + 'S' 00-7F sound/asequencer.h conflict!
200     + 'T' all linux/soundcard.h conflict!
201     + 'T' 00-AF sound/asound.h conflict!
202     + 'T' all arch/x86/include/asm/ioctls.h conflict!
203     + 'T' C0-DF linux/if_tun.h conflict!
204     + 'U' all sound/asound.h conflict!
205     + 'U' 00-CF linux/uinput.h conflict!
210 206 'U' 00-EF linux/usbdevice_fs.h
211 207 'U' C0-CF drivers/bluetooth/hci_uart.h
212     - 'V' all linux/vt.h conflict!
213     - 'V' all linux/videodev2.h conflict!
214     - 'V' C0 linux/ivtvfb.h conflict!
215     - 'V' C0 linux/ivtv.h conflict!
216     - 'V' C0 media/si4713.h conflict!
217     - 'W' 00-1F linux/watchdog.h conflict!
218     - 'W' 00-1F linux/wanrouter.h conflict! (pre 3.9)
219     - 'W' 00-3F sound/asound.h conflict!
208     + 'V' all linux/vt.h conflict!
209     + 'V' all linux/videodev2.h conflict!
210     + 'V' C0 linux/ivtvfb.h conflict!
211     + 'V' C0 linux/ivtv.h conflict!
212     + 'V' C0 media/si4713.h conflict!
213     + 'W' 00-1F linux/watchdog.h conflict!
214     + 'W' 00-1F linux/wanrouter.h conflict! (pre 3.9)
215     + 'W' 00-3F sound/asound.h conflict!
220 216 'W' 40-5F drivers/pci/switch/switchtec.c
221 217 'W' 60-61 linux/watch_queue.h
222     - 'X' all fs/xfs/xfs_fs.h, conflict!
218     + 'X' all fs/xfs/xfs_fs.h, conflict!
223 219 fs/xfs/linux-2.6/xfs_ioctl32.h,
224 220 include/linux/falloc.h,
225 221 linux/fs.h,
226     - 'X' all fs/ocfs2/ocfs_fs.h conflict!
222     + 'X' all fs/ocfs2/ocfs_fs.h conflict!
227 223 'Z' 14-15 drivers/message/fusion/mptctl.h
228     - '[' 00-3F linux/usb/tmc.h USB Test and Measurement Devices
229     -         <mailto:gregkh@linuxfoundation.org>
230     - 'a' all linux/atm*.h, linux/sonet.h ATM on linux
231     -         <http://lrcwww.epfl.ch/>
232     - 'a' 00-0F drivers/crypto/qat/qat_common/adf_cfg_common.h conflict! qat driver
233     - 'b' 00-FF conflict! bit3 vme host bridge
234     -         <mailto:natalia@nikhefk.nikhef.nl>
235     - 'b' 00-0F linux/dma-buf.h conflict!
236     - 'c' 00-7F linux/comstats.h conflict!
237     - 'c' 00-7F linux/coda.h conflict!
238     - 'c' 00-1F linux/chio.h conflict!
239     - 'c' 80-9F arch/s390/include/asm/chsc.h conflict!
224     + '[' 00-3F linux/usb/tmc.h USB Test and Measurement Devices
225     +         <mailto:gregkh@linuxfoundation.org>
226     + 'a' all linux/atm*.h, linux/sonet.h ATM on linux
227     +         <http://lrcwww.epfl.ch/>
228     + 'a' 00-0F drivers/crypto/qat/qat_common/adf_cfg_common.h conflict! qat driver
229     + 'b' 00-FF conflict! bit3 vme host bridge
230     +         <mailto:natalia@nikhefk.nikhef.nl>
231     + 'b' 00-0F linux/dma-buf.h conflict!
232     + 'c' 00-7F linux/comstats.h conflict!
233     + 'c' 00-7F linux/coda.h conflict!
234     + 'c' 00-1F linux/chio.h conflict!
235     + 'c' 80-9F arch/s390/include/asm/chsc.h conflict!
240 236 'c' A0-AF arch/x86/include/asm/msr.h conflict!
241     - 'd' 00-FF linux/char/drm/drm.h conflict!
242     - 'd' 02-40 pcmcia/ds.h conflict!
237     + 'd' 00-FF linux/char/drm/drm.h conflict!
238     + 'd' 02-40 pcmcia/ds.h conflict!
243 239 'd' F0-FF linux/digi1.h
244     - 'e' all linux/digi1.h conflict!
245     - 'f' 00-1F linux/ext2_fs.h conflict!
246     - 'f' 00-1F linux/ext3_fs.h conflict!
247     - 'f' 00-0F fs/jfs/jfs_dinode.h conflict!
248     - 'f' 00-0F fs/ext4/ext4.h conflict!
249     - 'f' 00-0F linux/fs.h conflict!
250     - 'f' 00-0F fs/ocfs2/ocfs2_fs.h conflict!
240     + 'e' all linux/digi1.h conflict!
241     + 'f' 00-1F linux/ext2_fs.h conflict!
242     + 'f' 00-1F linux/ext3_fs.h conflict!
243     + 'f' 00-0F fs/jfs/jfs_dinode.h conflict!
244     + 'f' 00-0F fs/ext4/ext4.h conflict!
245     + 'f' 00-0F linux/fs.h conflict!
246     + 'f' 00-0F fs/ocfs2/ocfs2_fs.h conflict!
251 247 'f' 13-27 linux/fscrypt.h
252 248 'f' 81-8F linux/fsverity.h
253 249 'g' 00-0F linux/usb/gadgetfs.h
254 250 'g' 20-2F linux/usb/g_printer.h
255     - 'h' 00-7F conflict! Charon filesystem
256     -         <mailto:zapman@interlan.net>
257     - 'h' 00-1F linux/hpet.h conflict!
251     + 'h' 00-7F conflict! Charon filesystem
252     +         <mailto:zapman@interlan.net>
253     + 'h' 00-1F linux/hpet.h conflict!
258 254 'h' 80-8F fs/hfsplus/ioctl.c
259     - 'i' 00-3F linux/i2o-dev.h conflict!
260     - 'i' 0B-1F linux/ipmi.h conflict!
255     + 'i' 00-3F linux/i2o-dev.h conflict!
256     + 'i' 0B-1F linux/ipmi.h conflict!
261 257 'i' 80-8F linux/i8k.h
262     - 'i' 90-9F `linux/iio/*.h` IIO
258     + 'i' 90-9F `linux/iio/*.h` IIO
263 259 'j' 00-3F linux/joystick.h
264     - 'k' 00-0F linux/spi/spidev.h conflict!
265     - 'k' 00-05 video/kyro.h conflict!
266     - 'k' 10-17 linux/hsi/hsi_char.h HSI character device
267     - 'l' 00-3F linux/tcfs_fs.h transparent cryptographic file system
268     -         <http://web.archive.org/web/%2A/http://mikonos.dia.unisa.it/tcfs>
269     - 'l' 40-7F linux/udf_fs_i.h in development:
270     -         <https://github.com/pali/udftools>
271     - 'm' 00-09 linux/mmtimer.h conflict!
272     - 'm' all linux/mtio.h conflict!
273     - 'm' all linux/soundcard.h conflict!
274     - 'm' all linux/synclink.h conflict!
275     - 'm' 00-19 drivers/message/fusion/mptctl.h conflict!
276     - 'm' 00 drivers/scsi/megaraid/megaraid_ioctl.h conflict!
260     + 'k' 00-0F linux/spi/spidev.h conflict!
261     + 'k' 00-05 video/kyro.h conflict!
262     + 'k' 10-17 linux/hsi/hsi_char.h HSI character device
263     + 'l' 00-3F linux/tcfs_fs.h transparent cryptographic file system
264     +         <http://web.archive.org/web/%2A/http://mikonos.dia.unisa.it/tcfs>
265     + 'l' 40-7F linux/udf_fs_i.h in development:
266     +         <https://github.com/pali/udftools>
267     + 'm' 00-09 linux/mmtimer.h conflict!
268     + 'm' all linux/mtio.h conflict!
269     + 'm' all linux/soundcard.h conflict!
270     + 'm' all linux/synclink.h conflict!
271     + 'm' 00-19 drivers/message/fusion/mptctl.h conflict!
272     + 'm' 00 drivers/scsi/megaraid/megaraid_ioctl.h conflict!
277 273 'n' 00-7F linux/ncp_fs.h and fs/ncpfs/ioctl.c
278     - 'n' 80-8F uapi/linux/nilfs2_api.h NILFS2
279     - 'n' E0-FF linux/matroxfb.h matroxfb
280     - 'o' 00-1F fs/ocfs2/ocfs2_fs.h OCFS2
281     - 'o' 00-03 mtd/ubi-user.h conflict! (OCFS2 and UBI overlaps)
282     - 'o' 40-41 mtd/ubi-user.h UBI
283     - 'o' 01-A1 `linux/dvb/*.h` DVB
284     - 'p' 00-0F linux/phantom.h conflict! (OpenHaptics needs this)
285     - 'p' 00-1F linux/rtc.h conflict!
274     + 'n' 80-8F uapi/linux/nilfs2_api.h NILFS2
275     + 'n' E0-FF linux/matroxfb.h matroxfb
276     + 'o' 00-1F fs/ocfs2/ocfs2_fs.h OCFS2
277     + 'o' 00-03 mtd/ubi-user.h conflict! (OCFS2 and UBI overlaps)
278     + 'o' 40-41 mtd/ubi-user.h UBI
279     + 'o' 01-A1 `linux/dvb/*.h` DVB
280     + 'p' 00-0F linux/phantom.h conflict! (OpenHaptics needs this)
281     + 'p' 00-1F linux/rtc.h conflict!
286 282 'p' 40-7F linux/nvram.h
287     - 'p' 80-9F linux/ppdev.h user-space parport
288     -         <mailto:tim@cyberelk.net>
289     - 'p' A1-A5 linux/pps.h LinuxPPS
290     - 'p' B1-B3 linux/pps_gen.h LinuxPPS
291     -         <mailto:giometti@linux.it>
283     + 'p' 80-9F linux/ppdev.h user-space parport
284     +         <mailto:tim@cyberelk.net>
285     + 'p' A1-A5 linux/pps.h LinuxPPS
286     + 'p' B1-B3 linux/pps_gen.h LinuxPPS
287     +         <mailto:giometti@linux.it>
292 288 'q' 00-1F linux/serio.h
293     - 'q' 80-FF linux/telephony.h Internet PhoneJACK, Internet LineJACK
294     -         linux/ixjuser.h <http://web.archive.org/web/%2A/http://www.quicknet.net>
289     + 'q' 80-FF linux/telephony.h Internet PhoneJACK, Internet LineJACK
290     +         linux/ixjuser.h <http://web.archive.org/web/%2A/http://www.quicknet.net>
295 291 'r' 00-1F linux/msdos_fs.h and fs/fat/dir.c
296 292 's' all linux/cdk.h
297 293 't' 00-7F linux/ppp-ioctl.h
298 294 't' 80-8F linux/isdn_ppp.h
299     - 't' 90-91 linux/toshiba.h toshiba and toshiba_acpi SMM
300     - 'u' 00-1F linux/smb_fs.h gone
301     - 'u' 00-2F linux/ublk_cmd.h conflict!
302     - 'u' 20-3F linux/uvcvideo.h USB video class host driver
303     - 'u' 40-4f linux/udmabuf.h userspace dma-buf misc device
304     - 'v' 00-1F linux/ext2_fs.h conflict!
305     - 'v' 00-1F linux/fs.h conflict!
306     - 'v' 00-0F linux/sonypi.h conflict!
307     - 'v' 00-0F media/v4l2-subdev.h conflict!
308     - 'v' 20-27 arch/powerpc/include/uapi/asm/vas-api.h VAS API
309     - 'v' C0-FF linux/meye.h conflict!
310     - 'w' all CERN SCI driver
311     - 'y' 00-1F packet based user level communications
312     -         <mailto:zapman@interlan.net>
313     - 'z' 00-3F CAN bus card conflict!
314     -         <mailto:hdstich@connectu.ulm.circular.de>
315     - 'z' 40-7F CAN bus card conflict!
316     -         <mailto:oe@port.de>
317     - 'z' 10-4F drivers/s390/crypto/zcrypt_api.h conflict!
295     + 't' 90-91 linux/toshiba.h toshiba and toshiba_acpi SMM
296     + 'u' 00-1F linux/smb_fs.h gone
297     + 'u' 00-2F linux/ublk_cmd.h conflict!
298     + 'u' 20-3F linux/uvcvideo.h USB video class host driver
299     + 'u' 40-4f linux/udmabuf.h userspace dma-buf misc device
300     + 'v' 00-1F linux/ext2_fs.h conflict!
301     + 'v' 00-1F linux/fs.h conflict!
302     + 'v' 00-0F linux/sonypi.h conflict!
303     + 'v' 00-0F media/v4l2-subdev.h conflict!
304     + 'v' 20-27 arch/powerpc/include/uapi/asm/vas-api.h VAS API
305     + 'v' C0-FF linux/meye.h conflict!
306     + 'w' all CERN SCI driver
307     + 'y' 00-1F packet based user level communications
308     +         <mailto:zapman@interlan.net>
309     + 'z' 00-3F CAN bus card conflict!
310     +         <mailto:hdstich@connectu.ulm.circular.de>
311     + 'z' 40-7F CAN bus card conflict!
312     +         <mailto:oe@port.de>
313     + 'z' 10-4F drivers/s390/crypto/zcrypt_api.h conflict!
318 314 '|' 00-7F linux/media.h
319     - '|' 80-9F samples/ Any sample and example drivers
315     + '|' 80-9F samples/ Any sample and example drivers
320 316 0x80 00-1F linux/fb.h
321 317 0x81 00-1F linux/vduse.h
322 318 0x89 00-06 arch/x86/include/asm/sockios.h
323 319 0x89 0B-DF linux/sockios.h
324     - 0x89 E0-EF linux/sockios.h SIOCPROTOPRIVATE range
325     - 0x89 F0-FF linux/sockios.h SIOCDEVPRIVATE range
320     + 0x89 E0-EF linux/sockios.h SIOCPROTOPRIVATE range
321     + 0x89 F0-FF linux/sockios.h SIOCDEVPRIVATE range
326 322 0x8A 00-1F linux/eventpoll.h
327 323 0x8B all linux/wireless.h
328     - 0x8C 00-3F WiNRADiO driver
329     -         <http://www.winradio.com.au/>
324     + 0x8C 00-3F WiNRADiO driver
325     +         <http://www.winradio.com.au/>
330 326 0x90 00 drivers/cdrom/sbpcd.h
331 327 0x92 00-0F drivers/usb/mon/mon_bin.c
332 328 0x93 60-7F linux/auto_fs.h
333     - 0x94 all fs/btrfs/ioctl.h Btrfs filesystem
334     -         and linux/fs.h some lifted to vfs/generic
335     - 0x97 00-7F fs/ceph/ioctl.h Ceph file system
336     - 0x99 00-0F 537-Addinboard driver
337     -         <mailto:buk@buks.ipn.de>
329     + 0x94 all fs/btrfs/ioctl.h Btrfs filesystem
330     +         and linux/fs.h some lifted to vfs/generic
331     + 0x97 00-7F fs/ceph/ioctl.h Ceph file system
332     + 0x99 00-0F 537-Addinboard driver
333     +         <mailto:buk@buks.ipn.de>
338 334 0x9A 00-0F include/uapi/fwctl/fwctl.h
339     - 0xA0 all linux/sdp/sdp.h Industrial Device Project
340     -         <mailto:kenji@bitgate.com>
341     - 0xA1 0 linux/vtpm_proxy.h TPM Emulator Proxy Driver
342     - 0xA2 all uapi/linux/acrn.h ACRN hypervisor
343     - 0xA3 80-8F Port ACL in development:
344     -         <mailto:tlewis@mindspring.com>
335     + 0xA0 all linux/sdp/sdp.h Industrial Device Project
336     +         <mailto:kenji@bitgate.com>
337     + 0xA1 0 linux/vtpm_proxy.h TPM Emulator Proxy Driver
338     + 0xA2 all uapi/linux/acrn.h ACRN hypervisor
339     + 0xA3 80-8F Port ACL in development:
340     +         <mailto:tlewis@mindspring.com>
345 341 0xA3 90-9F linux/dtlk.h
346     - 0xA4 00-1F uapi/linux/tee.h Generic TEE subsystem
347     - 0xA4 00-1F uapi/asm/sgx.h <mailto:linux-sgx@vger.kernel.org>
348     - 0xA5 01-05 linux/surface_aggregator/cdev.h Microsoft Surface Platform System Aggregator
349     -         <mailto:luzmaximilian@gmail.com>
350     - 0xA5 20-2F linux/surface_aggregator/dtx.h Microsoft Surface DTX driver
351     -         <mailto:luzmaximilian@gmail.com>
342     + 0xA4 00-1F uapi/linux/tee.h Generic TEE subsystem
343     + 0xA4 00-1F uapi/asm/sgx.h <mailto:linux-sgx@vger.kernel.org>
344     + 0xA5 01-05 linux/surface_aggregator/cdev.h Microsoft Surface Platform System Aggregator
345     +         <mailto:luzmaximilian@gmail.com>
346     + 0xA5 20-2F linux/surface_aggregator/dtx.h Microsoft Surface DTX driver
347     +         <mailto:luzmaximilian@gmail.com>
352 348 0xAA 00-3F linux/uapi/linux/userfaultfd.h
353 349 0xAB 00-1F linux/nbd.h
354 350 0xAC 00-1F linux/raw.h
355     - 0xAD 00 Netfilter device in development:
356     -         <mailto:rusty@rustcorp.com.au>
357     - 0xAE 00-1F linux/kvm.h Kernel-based Virtual Machine
358     -         <mailto:kvm@vger.kernel.org>
359     - 0xAE 40-FF linux/kvm.h Kernel-based Virtual Machine
360     -         <mailto:kvm@vger.kernel.org>
361     - 0xAE 20-3F linux/nitro_enclaves.h Nitro Enclaves
362     - 0xAF 00-1F linux/fsl_hypervisor.h Freescale hypervisor
363     - 0xB0 all RATIO devices in development:
364     -         <mailto:vgo@ratio.de>
365     - 0xB1 00-1F PPPoX
366     -         <mailto:mostrows@styx.uwaterloo.ca>
367     - 0xB2 00 arch/powerpc/include/uapi/asm/papr-vpd.h powerpc/pseries VPD API
368     -         <mailto:linuxppc-dev>
369     - 0xB2 01-02 arch/powerpc/include/uapi/asm/papr-sysparm.h powerpc/pseries system parameter API
370     -         <mailto:linuxppc-dev>
371     - 0xB2 03-05 arch/powerpc/include/uapi/asm/papr-indices.h powerpc/pseries indices API
372     -         <mailto:linuxppc-dev>
373     - 0xB2 06-07 arch/powerpc/include/uapi/asm/papr-platform-dump.h powerpc/pseries Platform Dump API
374     -         <mailto:linuxppc-dev>
375     - 0xB2 08 powerpc/include/uapi/asm/papr-physical-attestation.h powerpc/pseries Physical Attestation API
376     -         <mailto:linuxppc-dev>
351     + 0xAD 00 Netfilter device in development:
352     +         <mailto:rusty@rustcorp.com.au>
353     + 0xAE 00-1F linux/kvm.h Kernel-based Virtual Machine
354     +         <mailto:kvm@vger.kernel.org>
355     + 0xAE 40-FF linux/kvm.h Kernel-based Virtual Machine
356     +         <mailto:kvm@vger.kernel.org>
357     + 0xAE 20-3F linux/nitro_enclaves.h Nitro Enclaves
358     + 0xAF 00-1F linux/fsl_hypervisor.h Freescale hypervisor
359     + 0xB0 all RATIO devices in development:
360     +         <mailto:vgo@ratio.de>
361     + 0xB1 00-1F PPPoX
362     +         <mailto:mostrows@styx.uwaterloo.ca>
363     + 0xB2 00 arch/powerpc/include/uapi/asm/papr-vpd.h powerpc/pseries VPD API
364     +         <mailto:linuxppc-dev@lists.ozlabs.org>
365     + 0xB2 01-02 arch/powerpc/include/uapi/asm/papr-sysparm.h powerpc/pseries system parameter API
366     +         <mailto:linuxppc-dev@lists.ozlabs.org>
367     + 0xB2 03-05 arch/powerpc/include/uapi/asm/papr-indices.h powerpc/pseries indices API
368     +         <mailto:linuxppc-dev@lists.ozlabs.org>
369     + 0xB2 06-07 arch/powerpc/include/uapi/asm/papr-platform-dump.h powerpc/pseries Platform Dump API
370     +         <mailto:linuxppc-dev@lists.ozlabs.org>
371     + 0xB2 08 arch/powerpc/include/uapi/asm/papr-physical-attestation.h powerpc/pseries Physical Attestation API
372     +         <mailto:linuxppc-dev@lists.ozlabs.org>
377 373 0xB3 00 linux/mmc/ioctl.h
378     - 0xB4 00-0F linux/gpio.h <mailto:linux-gpio@vger.kernel.org>
379     - 0xB5 00-0F uapi/linux/rpmsg.h <mailto:linux-remoteproc@vger.kernel.org>
374     + 0xB4 00-0F linux/gpio.h <mailto:linux-gpio@vger.kernel.org>
375     + 0xB5 00-0F uapi/linux/rpmsg.h <mailto:linux-remoteproc@vger.kernel.org>
380 376 0xB6 all linux/fpga-dfl.h
381     - 0xB7 all uapi/linux/remoteproc_cdev.h <mailto:linux-remoteproc@vger.kernel.org>
382     - 0xB7 all uapi/linux/nsfs.h <mailto:Andrei Vagin <avagin@openvz.org>>
383     - 0xB8 01-02 uapi/misc/mrvl_cn10k_dpi.h Marvell CN10K DPI driver
384     - 0xB8 all uapi/linux/mshv.h Microsoft Hyper-V /dev/mshv driver
385     -         <mailto:linux-hyperv@vger.kernel.org>
377     + 0xB7 all uapi/linux/remoteproc_cdev.h <mailto:linux-remoteproc@vger.kernel.org>
378     + 0xB7 all uapi/linux/nsfs.h <mailto:Andrei Vagin <avagin@openvz.org>>
379     + 0xB8 01-02 uapi/misc/mrvl_cn10k_dpi.h Marvell CN10K DPI driver
380     + 0xB8 all uapi/linux/mshv.h Microsoft Hyper-V /dev/mshv driver
381     +         <mailto:linux-hyperv@vger.kernel.org>
386 382 0xC0 00-0F linux/usb/iowarrior.h
387     - 0xCA 00-0F uapi/misc/cxl.h Dead since 6.15
383     + 0xCA 00-0F uapi/misc/cxl.h Dead since 6.15
388 384 0xCA 10-2F uapi/misc/ocxl.h
389     - 0xCA 80-BF uapi/scsi/cxlflash_ioctl.h Dead since 6.15
390     - 0xCB 00-1F CBM serial IEC bus in development:
391     -         <mailto:michael.klein@puffin.lb.shuttle.de>
392     - 0xCC 00-0F drivers/misc/ibmvmc.h pseries VMC driver
393     - 0xCD 01 linux/reiserfs_fs.h Dead since 6.13
394     - 0xCE 01-02 uapi/linux/cxl_mem.h Compute Express Link Memory Devices
385     + 0xCA 80-BF uapi/scsi/cxlflash_ioctl.h Dead since 6.15
386     + 0xCB 00-1F CBM serial IEC bus in development:
387     +         <mailto:michael.klein@puffin.lb.shuttle.de>
388     + 0xCC 00-0F drivers/misc/ibmvmc.h pseries VMC driver
389     + 0xCD 01 linux/reiserfs_fs.h Dead since 6.13
390     + 0xCE 01-02 uapi/linux/cxl_mem.h Compute Express Link Memory Devices
395 391 0xCF 02 fs/smb/client/cifs_ioctl.h
396 392 0xDB 00-0F drivers/char/mwave/mwavepub.h
397     - 0xDD 00-3F ZFCP device driver see drivers/s390/scsi/
398     -         <mailto:aherrman@de.ibm.com>
393     + 0xDD 00-3F ZFCP device driver see drivers/s390/scsi/
394     +         <mailto:aherrman@de.ibm.com>
399 395 0xE5 00-3F linux/fuse.h
400     - 0xEC 00-01 drivers/platform/chrome/cros_ec_dev.h ChromeOS EC driver
401     - 0xEE 00-09 uapi/linux/pfrut.h Platform Firmware Runtime Update and Telemetry
402     - 0xF3 00-3F drivers/usb/misc/sisusbvga/sisusb.h sisfb (in development)
403     -         <mailto:thomas@winischhofer.net>
404     - 0xF6 all LTTng Linux Trace Toolkit Next Generation
405     -         <mailto:mathieu.desnoyers@efficios.com>
406     - 0xF8 all arch/x86/include/uapi/asm/amd_hsmp.h AMD HSMP EPYC system management interface driver
407     -         <mailto:nchatrad@amd.com>
408     - 0xF9 00-0F uapi/misc/amd-apml.h AMD side band system management interface driver
409     -         <mailto:naveenkrishna.chatradhi@amd.com>
396     + 0xEC 00-01 drivers/platform/chrome/cros_ec_dev.h ChromeOS EC driver
397     + 0xEE 00-09 uapi/linux/pfrut.h Platform Firmware Runtime Update and Telemetry
398     + 0xF3 00-3F drivers/usb/misc/sisusbvga/sisusb.h sisfb (in development)
399     +         <mailto:thomas@winischhofer.net>
400     + 0xF6 all LTTng Linux Trace Toolkit Next Generation
401     +         <mailto:mathieu.desnoyers@efficios.com>
402     + 0xF8 all arch/x86/include/uapi/asm/amd_hsmp.h AMD HSMP EPYC system management interface driver
403     +         <mailto:nchatrad@amd.com>
404     + 0xF9 00-0F uapi/misc/amd-apml.h AMD side band system management interface driver
405     +         <mailto:naveenkrishna.chatradhi@amd.com>
410 406 0xFD all linux/dm-ioctl.h
411 407 0xFE all linux/isst_if.h
412     - ==== ===== ======================================================= ================================================================
408     + ==== ===== ========================================================= ================================================================
+3 -3
Documentation/userspace-api/sysfs-platform_profile.rst
···
18 18 Note that this API is only for selecting the platform profile, it is
19 19 NOT a goal of this API to allow monitoring the resulting performance
20 20 characteristics. Monitoring performance is best done with device/vendor
21    - specific tools such as e.g. turbostat.
21    + specific tools, e.g. turbostat.
22 22
23    - Specifically when selecting a high performance profile the actual achieved
23    + Specifically, when selecting a high performance profile the actual achieved
24 24 performance may be limited by various factors such as: the heat generated
25 25 by other components, room temperature, free air flow at the bottom of a
26 26 laptop, etc. It is explicitly NOT a goal of this API to let userspace know
···
44 44 "Custom" profile support
45 45 ========================
46 46 The platform_profile class also supports profiles advertising a "custom"
47    - profile. This is intended to be set by drivers when the setttings in the
47    + profile. This is intended to be set by drivers when the settings in the
48 48 driver have been modified in a way that a standard profile doesn't represent
49 49 the current state.
50 50
+12 -12
MAINTAINERS
···
158 158 W: http://github.com/v9fs
159 159 Q: http://patchwork.kernel.org/project/v9fs-devel/list/
160 160 T: git git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs.git
161     - T: git git://github.com/martinetd/linux.git
161     + T: git https://github.com/martinetd/linux.git
162 162 F: Documentation/filesystems/9p.rst
163 163 F: fs/9p/
164 164 F: include/net/9p/
···
2598 2598 M: Linus Walleij <linus.walleij@linaro.org>
2599 2599 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
2600 2600 S: Maintained
2601      - T: git git://github.com/ulli-kroll/linux.git
2601      + T: git https://github.com/ulli-kroll/linux.git
2602 2602 F: Documentation/devicetree/bindings/arm/gemini.yaml
2603 2603 F: Documentation/devicetree/bindings/net/cortina,gemini-ethernet.yaml
2604 2604 F: Documentation/devicetree/bindings/pinctrl/cortina,gemini-pinctrl.txt
···
2805 2805 M: Piotr Wojtaszczyk <piotr.wojtaszczyk@timesys.com>
2806 2806 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
2807 2807 S: Maintained
2808      - T: git git://github.com/vzapolskiy/linux-lpc32xx.git
2808      + T: git https://github.com/vzapolskiy/linux-lpc32xx.git
2809 2809 F: Documentation/devicetree/bindings/i2c/nxp,pnx-i2c.yaml
2810 2810 F: arch/arm/boot/dts/nxp/lpc/lpc32*
2811 2811 F: arch/arm/mach-lpc32xx/
···
2979 2979 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
2980 2980 S: Maintained
2981 2981 W: http://linux-chenxing.org/
2982      - T: git git://github.com/linux-chenxing/linux.git
2982      + T: git https://github.com/linux-chenxing/linux.git
2983 2983 F: Documentation/devicetree/bindings/arm/mstar/*
2984 2984 F: Documentation/devicetree/bindings/clock/mstar,msc313-mpll.yaml
2985 2985 F: Documentation/devicetree/bindings/gpio/mstar,msc313-gpio.yaml
···
3909 3909 M: Alban Bedel <albeu@free.fr>
3910 3910 S: Maintained
3911 3911 W: https://github.com/AlbanBedel/linux
3912      - T: git git://github.com/AlbanBedel/linux
3912      + T: git https://github.com/AlbanBedel/linux.git
3913 3913 F: Documentation/devicetree/bindings/gpio/qca,ar7100-gpio.yaml
3914 3914 F: drivers/gpio/gpio-ath79.c
3915 3915
···
3917 3917 M: Alban Bedel <albeu@free.fr>
3918 3918 S: Maintained
3919 3919 W: https://github.com/AlbanBedel/linux
3920      - T: git git://github.com/AlbanBedel/linux
3920      + T: git https://github.com/AlbanBedel/linux.git
3921 3921 F: Documentation/devicetree/bindings/phy/phy-ath79-usb.txt
3922 3922 F: drivers/phy/qualcomm/phy-ath79-usb.c
3923 3923
···
3982 3982 ATMEL MAXTOUCH DRIVER
3983 3983 M: Nick Dyer <nick@shmanahar.org>
3984 3984 S: Maintained
3985      - T: git git://github.com/ndyer/linux.git
3985      + T: git https://github.com/ndyer/linux.git
3986 3986 F: Documentation/devicetree/bindings/input/atmel,maxtouch.yaml
3987 3987 F: drivers/input/touchscreen/atmel_mxt_ts.c
3988 3988
···
19919 19919 S: Supported
19920 19920 W: https://01.org/pm-graph
19921 19921 B: https://bugzilla.kernel.org/buglist.cgi?component=pm-graph&product=Tools
19922       - T: git git://github.com/intel/pm-graph
19922       + T: git https://github.com/intel/pm-graph.git
19923 19923 F: tools/power/pm-graph
19924 19924
19925 19925 PM6764TR DRIVER
···
20310 20310 M: Robert Jarzmik <robert.jarzmik@free.fr>
20311 20311 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
20312 20312 S: Maintained
20313       - T: git git://github.com/hzhuang1/linux.git
20314       - T: git git://github.com/rjarzmik/linux.git
20313       + T: git https://github.com/hzhuang1/linux.git
20314       + T: git https://github.com/rjarzmik/linux.git
20315 20315 F: arch/arm/boot/dts/intel/pxa/
20316 20316 F: arch/arm/mach-pxa/
20317 20317 F: drivers/dma/pxa*
···
23117 23117 L: linux-security-module@vger.kernel.org
23118 23118 S: Maintained
23119 23119 W: http://schaufler-ca.com
23120       - T: git git://github.com/cschaufler/smack-next
23120       + T: git https://github.com/cschaufler/smack-next.git
23121 23121 F: Documentation/admin-guide/LSM/Smack.rst
23122 23122 F: security/smack/
23123 23123
···
25458 25458 M: Hu Haowen <2023002089@link.tyut.edu.cn>
25459 25459 S: Maintained
25460 25460 W: https://github.com/srcres258/linux-doc
25461       - T: git git://github.com/srcres258/linux-doc.git doc-zh-tw
25461       + T: git https://github.com/srcres258/linux-doc.git doc-zh-tw
25462 25462 F: Documentation/translations/zh_TW/
25463 25463
25464 25464 TRIGGER SOURCE - ADI UTIL SIGMA DELTA SPI
+8
include/linux/dmapool.h
···
60 60         NUMA_NO_NODE);
61 61 }
62 62
63    + /**
64    +  * dma_pool_zalloc - Get a zero-initialized block of DMA coherent memory.
65    +  * @pool: dma pool that will produce the block
66    +  * @mem_flags: GFP_* bitmask
67    +  * @handle: pointer to dma address of block
68    +  *
69    +  * Same as dma_pool_alloc(), but the returned memory is zeroed.
70    +  */
63 71 static inline void *dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
64 72                                     dma_addr_t *handle)
65 73 {
+3 -3
mm/dmapool.c
···
200 200
201 201
202 202 /**
203     -  * dma_pool_create_node - Creates a pool of consistent memory blocks, for dma.
203     +  * dma_pool_create_node - Creates a pool of coherent DMA memory blocks.
204 204  * @name: name of pool, for diagnostics
205 205  * @dev: device that will be doing the DMA
206 206  * @size: size of the blocks in this pool.
···
210 210  * Context: not in_interrupt()
211 211  *
212 212  * Given one of these pools, dma_pool_alloc()
213     -  * may be used to allocate memory. Such memory will all have "consistent"
213     +  * may be used to allocate memory. Such memory will all have coherent
214 214  * DMA mappings, accessible by the device and its driver without using
215 215  * cache flushing primitives. The actual size of blocks allocated may be
216 216  * larger than requested because of alignment.
···
395 395 EXPORT_SYMBOL(dma_pool_destroy);
396 396
397 397 /**
398     -  * dma_pool_alloc - get a block of consistent memory
398     +  * dma_pool_alloc - get a block of coherent memory
399 399  * @pool: dma pool that will produce the block
400 400  * @mem_flags: GFP_* bitmask
401 401  * @handle: pointer to dma address of block
+37 -1
scripts/checktransupdate.py
···
24 24 """
25 25
26 26 import os
27    + import re
27 28 import time
28 29 import logging
29 30 from argparse import ArgumentParser, ArgumentTypeError, BooleanOptionalAction
···
70 69     return o_from_t
71 70
72 71
72    + def get_origin_from_trans_smartly(origin_path, t_from_head):
73    +     """Get the latest origin commit from the formatted translation commit:
74    +     (1) update to commit HASH (TITLE)
75    +     (2) Update the translation through commit HASH (TITLE)
76    +     """
77    +     # capture group for a 12-character abbreviated commit hash
78    +     HASH = r'([0-9a-f]{12})'
79    +     # pattern 1: contains "update to commit HASH"
80    +     pat_update_to = re.compile(rf'update to commit {HASH}')
81    +     # pattern 2: contains "Update the translation through commit HASH"
82    +     pat_update_translation = re.compile(rf'Update the translation through commit {HASH}')
83    +
84    +     origin_commit_hash = None
85    +     for line in t_from_head["message"]:
86    +         # check if the line matches the first pattern
87    +         match = pat_update_to.search(line)
88    +         if match:
89    +             origin_commit_hash = match.group(1)
90    +             break
91    +         # check if the line matches the second pattern
92    +         match = pat_update_translation.search(line)
93    +         if match:
94    +             origin_commit_hash = match.group(1)
95    +             break
96    +     if origin_commit_hash is None:
97    +         return None
98    +     o_from_t = get_latest_commit_from(origin_path, origin_commit_hash)
99    +     if o_from_t is not None:
100   +         logging.debug("tracked origin commit id: %s", o_from_t["hash"])
101   +     return o_from_t
102   +
103   +
73 104 def get_commits_count_between(opath, commit1, commit2):
74 105     """Get the commits count between two commits for the specified file"""
75 106     command = f"git log --pretty=format:%H {commit1}...{commit2} -- {opath}"
···
141 108     logging.error("Cannot find the latest commit for %s", file_path)
142 109     return
143 110
144     -     o_from_t = get_origin_from_trans(opath, t_from_head)
111     +     o_from_t = get_origin_from_trans_smartly(opath, t_from_head)
112     +     # note: o_from_t from get_*_smartly() is always more accurate than from get_*()
113     +     if o_from_t is None:
114     +         o_from_t = get_origin_from_trans(opath, t_from_head)
145 115
146 116     if o_from_t is None:
147 117         logging.error("Error: Cannot find the latest origin commit for %s", file_path)
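The pattern matching added above can be exercised in isolation. Below is a minimal, self-contained sketch: the `find_origin_hash` helper and the sample commit-message lines are hypothetical illustrations, not part of the kernel script, but the two regular expressions mirror the ones used by `get_origin_from_trans_smartly()`.

```python
import re

# Mirror of the two patterns used by get_origin_from_trans_smartly();
# HASH captures a 12-character abbreviated commit hash.
HASH = r'([0-9a-f]{12})'
PATTERNS = (
    re.compile(rf'update to commit {HASH}'),
    re.compile(rf'Update the translation through commit {HASH}'),
)

def find_origin_hash(message_lines):
    """Return the first origin commit hash referenced by a translation
    commit message, or None if neither pattern matches."""
    for line in message_lines:
        for pat in PATTERNS:
            match = pat.search(line)
            if match:
                return match.group(1)
    return None

lines = [
    'docs/zh_CN: howto: update translation',
    'update to commit 0123456789ab ("docs: some change")',
]
print(find_origin_hash(lines))            # 0123456789ab
print(find_origin_hash(['unrelated']))    # None
```

Matching line by line, first pattern first, reproduces the script's precedence: an "update to commit" reference wins over an "Update the translation through commit" reference appearing later in the message.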
+10
scripts/kernel-doc.py
···
271 271
272 272     logger.addHandler(handler)
273 273
274    +     python_ver = sys.version_info[:2]
275    +     if python_ver < (3,6):
276    +         logger.warning("Python 3.6 or later is required by kernel-doc")
277    +
278    +         # Return 0 here to avoid breaking compilation
279    +         sys.exit(0)
280    +
281    +     if python_ver < (3,7):
282    +         logger.warning("Python 3.7 or later is required for correct results")
283    +
274 284     if args.man:
275 285         out_style = ManFormat(modulename=args.modulename)
276 286     elif args.none:
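The version gate above relies on Python's element-wise tuple comparison: slicing `sys.version_info` to its first two fields yields a value that orders naturally against a `(major, minor)` threshold. A small standalone sketch (the printed comparisons are illustrative, not kernel-doc code):

```python
import sys

# Tuples compare element by element, so a plain < against (3, 7)
# implements a minimum-version check without string parsing pitfalls
# (e.g. "3.10" < "3.7" would be True as strings, but (3, 10) < (3, 7)
# is correctly False as tuples).
python_ver = sys.version_info[:2]

print((3, 6) < (3, 7))     # True: majors equal, 6 < 7
print((3, 10) < (3, 7))    # False: 10 > 7
print(python_ver >= (3, 0))
```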
+2 -2
scripts/lib/kdoc/kdoc_files.py
···
275 275         self.config.log.warning("No kernel-doc for file %s", fname)
276 276         continue
277 277
278     -     for name, arg in self.results[fname]:
279     -         m = self.out_msg(fname, name, arg)
278     +     for arg in self.results[fname]:
279     +         m = self.out_msg(fname, arg.name, arg)
280 280
281 281         if m is None:
282 282             ln = arg.get("ln", 0)
+42
scripts/lib/kdoc/kdoc_item.py
···
1     + # SPDX-License-Identifier: GPL-2.0
2     + #
3     + # A class that will, eventually, encapsulate all of the parsed data that we
4     + # then pass into the output modules.
5     + #
6     +
7     + class KdocItem:
8     +     def __init__(self, name, type, start_line, **other_stuff):
9     +         self.name = name
10    +         self.type = type
11    +         self.declaration_start_line = start_line
12    +         self.sections = {}
13    +         self.section_start_lines = {}
14    +         self.parameterlist = []
15    +         self.parameterdesc_start_lines = []
16    +         self.parameterdescs = {}
17    +         self.parametertypes = {}
18    +         #
19    +         # Just save everything else into our own dict so that the output
20    +         # side can grab it directly as before.  As we move things into more
21    +         # structured data, this will, hopefully, fade away.
22    +         #
23    +         self.other_stuff = other_stuff
24    +
25    +     def get(self, key, default = None):
26    +         return self.other_stuff.get(key, default)
27    +
28    +     def __getitem__(self, key):
29    +         return self.get(key)
30    +
31    +     #
32    +     # Tracking of section and parameter information.
33    +     #
34    +     def set_sections(self, sections, start_lines):
35    +         self.sections = sections
36    +         self.section_start_lines = start_lines
37    +
38    +     def set_params(self, names, descs, types, starts):
39    +         self.parameterlist = names
40    +         self.parameterdescs = descs
41    +         self.parametertypes = types
42    +         self.parameterdesc_start_lines = starts
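The point of KdocItem is that structured fields become attributes while everything else remains reachable through the old dict-style interface, so the output modules keep working during the migration. A trimmed stand-in (the `dma_pool_alloc` sample values are hypothetical usage, not kernel code) illustrates the dual access paths:

```python
# Trimmed stand-in for the KdocItem class above: structured data is
# exposed as attributes; leftover parse results stay behind the legacy
# dict-style get()/[] interface.
class KdocItem:
    def __init__(self, name, type, start_line, **other_stuff):
        self.name = name
        self.type = type
        self.declaration_start_line = start_line
        self.other_stuff = other_stuff

    def get(self, key, default=None):
        return self.other_stuff.get(key, default)

    def __getitem__(self, key):
        return self.get(key)

item = KdocItem('dma_pool_alloc', 'function', 398, functiontype='void *')
print(item.name)                   # attribute access for structured data
print(item['functiontype'])        # legacy dict-style access still works
print(item.get('missing', 'n/a'))  # defaulted lookup for absent keys
```

Note that `__getitem__` delegates to `get()`, so a missing key yields `None` rather than raising `KeyError`; that preserves the forgiving lookups the output side previously did with `args.get(...)`.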
+64 -108
scripts/lib/kdoc/kdoc_output.py
··· 124 124 Output warnings for identifiers that will be displayed. 125 125 """ 126 126 127 - warnings = args.get('warnings', []) 128 - 129 - for log_msg in warnings: 127 + for log_msg in args.warnings: 130 128 self.config.warning(log_msg) 131 129 132 130 def check_doc(self, name, args): ··· 182 184 183 185 self.data = "" 184 186 185 - dtype = args.get('type', "") 187 + dtype = args.type 186 188 187 189 if dtype == "doc": 188 190 self.out_doc(fname, name, args) ··· 336 338 starts by putting out the name of the doc section itself, but that 337 339 tends to duplicate a header already in the template file. 338 340 """ 339 - 340 - sectionlist = args.get('sectionlist', []) 341 - sections = args.get('sections', {}) 342 - section_start_lines = args.get('section_start_lines', {}) 343 - 344 - for section in sectionlist: 341 + for section, text in args.sections.items(): 345 342 # Skip sections that are in the nosymbol_table 346 343 if section in self.nosymbol: 347 344 continue ··· 348 355 else: 349 356 self.data += f'{self.lineprefix}**{section}**\n\n' 350 357 351 - self.print_lineno(section_start_lines.get(section, 0)) 352 - self.output_highlight(sections[section]) 358 + self.print_lineno(args.section_start_lines.get(section, 0)) 359 + self.output_highlight(text) 353 360 self.data += "\n" 354 361 self.data += "\n" 355 362 ··· 365 372 366 373 func_macro = args.get('func_macro', False) 367 374 if func_macro: 368 - signature = args['function'] 375 + signature = name 369 376 else: 370 377 if args.get('functiontype'): 371 378 signature = args['functiontype'] + " " 372 - signature += args['function'] + " (" 379 + signature += name + " (" 373 380 374 - parameterlist = args.get('parameterlist', []) 375 - parameterdescs = args.get('parameterdescs', {}) 376 - parameterdesc_start_lines = args.get('parameterdesc_start_lines', {}) 377 - 378 - ln = args.get('declaration_start_line', 0) 379 - 381 + ln = args.declaration_start_line 380 382 count = 0 381 - for parameter in parameterlist: 
383 + for parameter in args.parameterlist: 382 384 if count != 0: 383 385 signature += ", " 384 386 count += 1 385 - dtype = args['parametertypes'].get(parameter, "") 387 + dtype = args.parametertypes.get(parameter, "") 386 388 387 389 if function_pointer.search(dtype): 388 390 signature += function_pointer.group(1) + parameter + function_pointer.group(3) ··· 389 401 390 402 self.print_lineno(ln) 391 403 if args.get('typedef') or not args.get('functiontype'): 392 - self.data += f".. c:macro:: {args['function']}\n\n" 404 + self.data += f".. c:macro:: {name}\n\n" 393 405 394 406 if args.get('typedef'): 395 407 self.data += " **Typedef**: " ··· 412 424 # function prototypes apart 413 425 self.lineprefix = " " 414 426 415 - if parameterlist: 427 + if args.parameterlist: 416 428 self.data += ".. container:: kernelindent\n\n" 417 429 self.data += f"{self.lineprefix}**Parameters**\n\n" 418 430 419 - for parameter in parameterlist: 431 + for parameter in args.parameterlist: 420 432 parameter_name = KernRe(r'\[.*').sub('', parameter) 421 - dtype = args['parametertypes'].get(parameter, "") 433 + dtype = args.parametertypes.get(parameter, "") 422 434 423 435 if dtype: 424 436 self.data += f"{self.lineprefix}``{dtype}``\n" 425 437 else: 426 438 self.data += f"{self.lineprefix}``{parameter}``\n" 427 439 428 - self.print_lineno(parameterdesc_start_lines.get(parameter_name, 0)) 440 + self.print_lineno(args.parameterdesc_start_lines.get(parameter_name, 0)) 429 441 430 442 self.lineprefix = " " 431 - if parameter_name in parameterdescs and \ 432 - parameterdescs[parameter_name] != KernelDoc.undescribed: 443 + if parameter_name in args.parameterdescs and \ 444 + args.parameterdescs[parameter_name] != KernelDoc.undescribed: 433 445 434 - self.output_highlight(parameterdescs[parameter_name]) 446 + self.output_highlight(args.parameterdescs[parameter_name]) 435 447 self.data += "\n" 436 448 else: 437 449 self.data += f"{self.lineprefix}*undescribed*\n\n" ··· 443 455 def out_enum(self, 
fname, name, args): 444 456 445 457 oldprefix = self.lineprefix 446 - name = args.get('enum', '') 447 - parameterlist = args.get('parameterlist', []) 448 - parameterdescs = args.get('parameterdescs', {}) 449 - ln = args.get('declaration_start_line', 0) 458 + ln = args.declaration_start_line 450 459 451 460 self.data += f"\n\n.. c:enum:: {name}\n\n" 452 461 ··· 457 472 self.lineprefix = outer + " " 458 473 self.data += f"{outer}**Constants**\n\n" 459 474 460 - for parameter in parameterlist: 475 + for parameter in args.parameterlist: 461 476 self.data += f"{outer}``{parameter}``\n" 462 477 463 - if parameterdescs.get(parameter, '') != KernelDoc.undescribed: 464 - self.output_highlight(parameterdescs[parameter]) 478 + if args.parameterdescs.get(parameter, '') != KernelDoc.undescribed: 479 + self.output_highlight(args.parameterdescs[parameter]) 465 480 else: 466 481 self.data += f"{self.lineprefix}*undescribed*\n\n" 467 482 self.data += "\n" ··· 472 487 def out_typedef(self, fname, name, args): 473 488 474 489 oldprefix = self.lineprefix 475 - name = args.get('typedef', '') 476 - ln = args.get('declaration_start_line', 0) 490 + ln = args.declaration_start_line 477 491 478 492 self.data += f"\n\n.. c:type:: {name}\n\n" 479 493 ··· 488 504 489 505 def out_struct(self, fname, name, args): 490 506 491 - name = args.get('struct', "") 492 507 purpose = args.get('purpose', "") 493 508 declaration = args.get('definition', "") 494 - dtype = args.get('type', "struct") 495 - ln = args.get('declaration_start_line', 0) 496 - 497 - parameterlist = args.get('parameterlist', []) 498 - parameterdescs = args.get('parameterdescs', {}) 499 - parameterdesc_start_lines = args.get('parameterdesc_start_lines', {}) 509 + dtype = args.type 510 + ln = args.declaration_start_line 500 511 501 512 self.data += f"\n\n.. 
c:{dtype}:: {name}\n\n" 502 513 ··· 515 536 516 537 self.lineprefix = " " 517 538 self.data += f"{self.lineprefix}**Members**\n\n" 518 - for parameter in parameterlist: 539 + for parameter in args.parameterlist: 519 540 if not parameter or parameter.startswith("#"): 520 541 continue 521 542 522 543 parameter_name = parameter.split("[", maxsplit=1)[0] 523 544 524 - if parameterdescs.get(parameter_name) == KernelDoc.undescribed: 545 + if args.parameterdescs.get(parameter_name) == KernelDoc.undescribed: 525 546 continue 526 547 527 - self.print_lineno(parameterdesc_start_lines.get(parameter_name, 0)) 548 + self.print_lineno(args.parameterdesc_start_lines.get(parameter_name, 0)) 528 549 529 550 self.data += f"{self.lineprefix}``{parameter}``\n" 530 551 531 552 self.lineprefix = " " 532 - self.output_highlight(parameterdescs[parameter_name]) 553 + self.output_highlight(args.parameterdescs[parameter_name]) 533 554 self.lineprefix = " " 534 555 535 556 self.data += "\n" ··· 615 636 self.data += line + "\n" 616 637 617 638 def out_doc(self, fname, name, args): 618 - sectionlist = args.get('sectionlist', []) 619 - sections = args.get('sections', {}) 620 - 621 639 if not self.check_doc(name, args): 622 640 return 623 641 624 642 self.data += f'.TH "{self.modulename}" 9 "{self.modulename}" "{self.man_date}" "API Manual" LINUX' + "\n" 625 643 626 - for section in sectionlist: 644 + for section, text in args.sections.items(): 627 645 self.data += f'.SH "{section}"' + "\n" 628 - self.output_highlight(sections.get(section)) 646 + self.output_highlight(text) 629 647 630 648 def out_function(self, fname, name, args): 631 649 """output function in man""" 632 650 633 - parameterlist = args.get('parameterlist', []) 634 - parameterdescs = args.get('parameterdescs', {}) 635 - sectionlist = args.get('sectionlist', []) 636 - sections = args.get('sections', {}) 637 - 638 - self.data += f'.TH "{args["function"]}" 9 "{args["function"]}" "{self.man_date}" "Kernel Hacker\'s Manual" LINUX' + 
"\n" 651 + self.data += f'.TH "{name}" 9 "{name}" "{self.man_date}" "Kernel Hacker\'s Manual" LINUX' + "\n" 639 652 640 653 self.data += ".SH NAME\n" 641 - self.data += f"{args['function']} \\- {args['purpose']}\n" 654 + self.data += f"{name} \\- {args['purpose']}\n" 642 655 643 656 self.data += ".SH SYNOPSIS\n" 644 657 if args.get('functiontype', ''): 645 - self.data += f'.B "{args["functiontype"]}" {args["function"]}' + "\n" 658 + self.data += f'.B "{args["functiontype"]}" {name}' + "\n" 646 659 else: 647 - self.data += f'.B "{args["function"]}' + "\n" 660 + self.data += f'.B "{name}' + "\n" 648 661 649 662 count = 0 650 663 parenth = "(" 651 664 post = "," 652 665 653 - for parameter in parameterlist: 654 - if count == len(parameterlist) - 1: 666 + for parameter in args.parameterlist: 667 + if count == len(args.parameterlist) - 1: 655 668 post = ");" 656 669 657 - dtype = args['parametertypes'].get(parameter, "") 670 + dtype = args.parametertypes.get(parameter, "") 658 671 if function_pointer.match(dtype): 659 672 # Pointer-to-function 660 673 self.data += f'".BI "{parenth}{function_pointer.group(1)}" " ") ({function_pointer.group(2)}){post}"' + "\n" ··· 657 686 count += 1 658 687 parenth = "" 659 688 660 - if parameterlist: 689 + if args.parameterlist: 661 690 self.data += ".SH ARGUMENTS\n" 662 691 663 - for parameter in parameterlist: 692 + for parameter in args.parameterlist: 664 693 parameter_name = re.sub(r'\[.*', '', parameter) 665 694 666 695 self.data += f'.IP "{parameter}" 12' + "\n" 667 - self.output_highlight(parameterdescs.get(parameter_name, "")) 696 + self.output_highlight(args.parameterdescs.get(parameter_name, "")) 668 697 669 - for section in sectionlist: 698 + for section, text in args.sections.items(): 670 699 self.data += f'.SH "{section.upper()}"' + "\n" 671 - self.output_highlight(sections[section]) 700 + self.output_highlight(text) 672 701 673 702 def out_enum(self, fname, name, args): 674 - 675 - name = args.get('enum', '') 676 - 
parameterlist = args.get('parameterlist', []) 677 - sectionlist = args.get('sectionlist', []) 678 - sections = args.get('sections', {}) 679 - 680 - self.data += f'.TH "{self.modulename}" 9 "enum {args["enum"]}" "{self.man_date}" "API Manual" LINUX' + "\n" 703 + self.data += f'.TH "{self.modulename}" 9 "enum {name}" "{self.man_date}" "API Manual" LINUX' + "\n" 681 704 682 705 self.data += ".SH NAME\n" 683 - self.data += f"enum {args['enum']} \\- {args['purpose']}\n" 706 + self.data += f"enum {name} \\- {args['purpose']}\n" 684 707 685 708 self.data += ".SH SYNOPSIS\n" 686 - self.data += f"enum {args['enum']}" + " {\n" 709 + self.data += f"enum {name}" + " {\n" 687 710 688 711 count = 0 689 - for parameter in parameterlist: 712 + for parameter in args.parameterlist: 690 713 self.data += f'.br\n.BI " {parameter}"' + "\n" 691 - if count == len(parameterlist) - 1: 714 + if count == len(args.parameterlist) - 1: 692 715 self.data += "\n};\n" 693 716 else: 694 717 self.data += ", \n.br\n" ··· 691 726 692 727 self.data += ".SH Constants\n" 693 728 694 - for parameter in parameterlist: 729 + for parameter in args.parameterlist: 695 730 parameter_name = KernRe(r'\[.*').sub('', parameter) 696 731 self.data += f'.IP "{parameter}" 12' + "\n" 697 - self.output_highlight(args['parameterdescs'].get(parameter_name, "")) 732 + self.output_highlight(args.parameterdescs.get(parameter_name, "")) 698 733 699 - for section in sectionlist: 734 + for section, text in args.sections.items(): 700 735 self.data += f'.SH "{section}"' + "\n" 701 - self.output_highlight(sections[section]) 736 + self.output_highlight(text) 702 737 703 738 def out_typedef(self, fname, name, args): 704 739 module = self.modulename 705 - typedef = args.get('typedef') 706 740 purpose = args.get('purpose') 707 - sectionlist = args.get('sectionlist', []) 708 - sections = args.get('sections', {}) 709 741 710 - self.data += f'.TH "{module}" 9 "{typedef}" "{self.man_date}" "API Manual" LINUX' + "\n" 742 + self.data += f'.TH 
"{module}" 9 "{name}" "{self.man_date}" "API Manual" LINUX' + "\n" 711 743 712 744 self.data += ".SH NAME\n" 713 - self.data += f"typedef {typedef} \\- {purpose}\n" 745 + self.data += f"typedef {name} \\- {purpose}\n" 714 746 715 - for section in sectionlist: 747 + for section, text in args.sections.items(): 716 748 self.data += f'.SH "{section}"' + "\n" 717 - self.output_highlight(sections.get(section)) 749 + self.output_highlight(text) 718 750 719 751 def out_struct(self, fname, name, args): 720 752 module = self.modulename 721 - struct_type = args.get('type') 722 - struct_name = args.get('struct') 723 753 purpose = args.get('purpose') 724 754 definition = args.get('definition') 725 - sectionlist = args.get('sectionlist', []) 726 - parameterlist = args.get('parameterlist', []) 727 - sections = args.get('sections', {}) 728 - parameterdescs = args.get('parameterdescs', {}) 729 755 730 - self.data += f'.TH "{module}" 9 "{struct_type} {struct_name}" "{self.man_date}" "API Manual" LINUX' + "\n" 756 + self.data += f'.TH "{module}" 9 "{args.type} {name}" "{self.man_date}" "API Manual" LINUX' + "\n" 731 757 732 758 self.data += ".SH NAME\n" 733 - self.data += f"{struct_type} {struct_name} \\- {purpose}\n" 759 + self.data += f"{args.type} {name} \\- {purpose}\n" 734 760 735 761 # Replace tabs with two spaces and handle newlines 736 762 declaration = definition.replace("\t", " ") 737 763 declaration = KernRe(r"\n").sub('"\n.br\n.BI "', declaration) 738 764 739 765 self.data += ".SH SYNOPSIS\n" 740 - self.data += f"{struct_type} {struct_name} " + "{" + "\n.br\n" 766 + self.data += f"{args.type} {name} " + "{" + "\n.br\n" 741 767 self.data += f'.BI "{declaration}\n' + "};\n.br\n\n" 742 768 743 769 self.data += ".SH Members\n" 744 - for parameter in parameterlist: 770 + for parameter in args.parameterlist: 745 771 if parameter.startswith("#"): 746 772 continue 747 773 748 774 parameter_name = re.sub(r"\[.*", "", parameter) 749 775 750 - if parameterdescs.get(parameter_name) 
== KernelDoc.undescribed: 776 + if args.parameterdescs.get(parameter_name) == KernelDoc.undescribed: 751 777 continue 752 778 753 779 self.data += f'.IP "{parameter}" 12' + "\n" 754 - self.output_highlight(parameterdescs.get(parameter_name)) 780 + self.output_highlight(args.parameterdescs.get(parameter_name)) 755 781 756 - for section in sectionlist: 782 + for section, text in args.sections.items(): 757 783 self.data += f'.SH "{section}"' + "\n" 758 - self.output_highlight(sections.get(section)) 784 + self.output_highlight(text)
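Most hunks in kdoc_output.py follow one pattern: the parallel `sectionlist` list plus `sections` dict collapses into a single insertion-ordered dict iterated with `.items()`. Since Python 3.7 guarantees dict insertion order (the reason the parser now warns on older interpreters), the separate ordering list is redundant. A minimal illustration (section contents invented):

```python
# Old shape: order tracked in a list, content in a dict, kept in sync
# by hand.
sectionlist = ["Description", "Context", "Return"]
sections = {
    "Return": "0 on success, negative errno on failure.",
    "Description": "Does the thing.",
    "Context": "May sleep.",
}
old_order = [(s, sections[s]) for s in sectionlist]

# New shape: one dict built in document order. Python >= 3.7
# guarantees iteration follows insertion order.
sections_ordered = {}
for name in sectionlist:
    sections_ordered[name] = sections[name]

new_order = list(sections_ordered.items())
assert new_order == old_order   # same pairs, same order, one structure
```

This is why loops like `for section in sectionlist:` followed by `sections[section]` become the single `for section, text in args.sections.items():` throughout the output modules.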
+399 -476
scripts/lib/kdoc/kdoc_parser.py
··· 12 12 documentation comments 13 13 """ 14 14 15 + import sys 15 16 import re 16 17 from pprint import pformat 17 18 18 19 from kdoc_re import NestedMatch, KernRe 19 - 20 + from kdoc_item import KdocItem 20 21 21 22 # 22 23 # Regular expressions used to parse kernel-doc markups at KernelDoc class. ··· 43 42 # @{section-name}: 44 43 # while trying to not match literal block starts like "example::" 45 44 # 45 + known_section_names = 'description|context|returns?|notes?|examples?' 46 + known_sections = KernRe(known_section_names, flags = re.I) 46 47 doc_sect = doc_com + \ 47 - KernRe(r'\s*(\@[.\w]+|\@\.\.\.|description|context|returns?|notes?|examples?)\s*:([^:].*)?$', 48 - flags=re.I, cache=False) 48 + KernRe(r'\s*(\@[.\w]+|\@\.\.\.|' + known_section_names + r')\s*:([^:].*)?$', 49 + flags=re.I, cache=False) 49 50 50 51 doc_content = doc_com_body + KernRe(r'(.*)', cache=False) 51 - doc_block = doc_com + KernRe(r'DOC:\s*(.*)?', cache=False) 52 52 doc_inline_start = KernRe(r'^\s*/\*\*\s*$', cache=False) 53 53 doc_inline_sect = KernRe(r'\s*\*\s*(@\s*[\w][\w\.]*\s*):(.*)', cache=False) 54 54 doc_inline_end = KernRe(r'^\s*\*/\s*$', cache=False) ··· 62 60 63 61 type_param = KernRe(r"\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)", cache=False) 64 62 63 + # 64 + # Tests for the beginning of a kerneldoc block in its various forms. 65 + # 66 + doc_block = doc_com + KernRe(r'DOC:\s*(.*)?', cache=False) 67 + doc_begin_data = KernRe(r"^\s*\*?\s*(struct|union|enum|typedef)\b\s*(\w*)", cache = False) 68 + doc_begin_func = KernRe(str(doc_com) + # initial " * ' 69 + r"(?:\w+\s*\*\s*)?" + # type (not captured) 70 + r'(?:define\s+)?' 
+ # possible "define" (not captured) 71 + r'(\w+)\s*(?:\(\w*\))?\s*' + # name and optional "(...)" 72 + r'(?:[-:].*)?$', # description (not captured) 73 + cache = False) 74 + 75 + # 76 + # A little helper to get rid of excess white space 77 + # 78 + multi_space = KernRe(r'\s\s+') 79 + def trim_whitespace(s): 80 + return multi_space.sub(' ', s.strip()) 81 + 65 82 class state: 66 83 """ 67 84 State machine enums ··· 89 68 # Parser states 90 69 NORMAL = 0 # normal code 91 70 NAME = 1 # looking for function name 92 - BODY_MAYBE = 2 # body - or maybe more description 71 + DECLARATION = 2 # We have seen a declaration which might not be done 93 72 BODY = 3 # the body of the comment 94 - BODY_WITH_BLANK_LINE = 4 # the body which has a blank line 73 + SPECIAL_SECTION = 4 # doc section ending with a blank line 95 74 PROTO = 5 # scanning prototype 96 75 DOCBLOCK = 6 # documentation block 97 - INLINE = 7 # gathering doc outside main block 76 + INLINE_NAME = 7 # gathering doc outside main block 77 + INLINE_TEXT = 8 # reading the body of inline docs 98 78 99 79 name = [ 100 80 "NORMAL", 101 81 "NAME", 102 - "BODY_MAYBE", 82 + "DECLARATION", 103 83 "BODY", 104 - "BODY_WITH_BLANK_LINE", 84 + "SPECIAL_SECTION", 105 85 "PROTO", 106 86 "DOCBLOCK", 107 - "INLINE", 87 + "INLINE_NAME", 88 + "INLINE_TEXT", 108 89 ] 109 90 110 - # Inline documentation state 111 - INLINE_NA = 0 # not applicable ($state != INLINE) 112 - INLINE_NAME = 1 # looking for member name (@foo:) 113 - INLINE_TEXT = 2 # looking for member documentation 114 - INLINE_END = 3 # done 115 - INLINE_ERROR = 4 # error - Comment without header was found. 116 - # Spit a warning as it's not 117 - # proper kernel-doc and ignore the rest. 
118 - 119 - inline_name = [ 120 - "", 121 - "_NAME", 122 - "_TEXT", 123 - "_END", 124 - "_ERROR", 125 - ] 126 91 127 92 SECTION_DEFAULT = "Description" # default section 128 93 ··· 117 110 def __init__(self, config, ln): 118 111 self.config = config 119 112 120 - self.contents = "" 121 - self.function = "" 122 - self.sectcheck = "" 123 - self.struct_actual = "" 113 + self._contents = [] 124 114 self.prototype = "" 125 115 126 116 self.warnings = [] ··· 128 124 self.parameterdesc_start_lines = {} 129 125 130 126 self.section_start_lines = {} 131 - self.sectionlist = [] 132 127 self.sections = {} 133 128 134 129 self.anon_struct_union = False ··· 136 133 137 134 # State flags 138 135 self.brcount = 0 139 - 140 - self.in_doc_sect = False 141 136 self.declaration_start_line = ln + 1 137 + 138 + # 139 + # Management of section contents 140 + # 141 + def add_text(self, text): 142 + self._contents.append(text) 143 + 144 + def contents(self): 145 + return '\n'.join(self._contents) + '\n' 142 146 143 147 # TODO: rename to emit_message after removal of kernel-doc.pl 144 148 def emit_msg(self, log_msg, warning=True): ··· 161 151 self.warnings.append(log_msg) 162 152 return 163 153 154 + # 155 + # Begin a new section. 156 + # 157 + def begin_section(self, line_no, title = SECTION_DEFAULT, dump = False): 158 + if dump: 159 + self.dump_section(start_new = True) 160 + self.section = title 161 + self.new_start_line = line_no 162 + 164 163 def dump_section(self, start_new=True): 165 164 """ 166 165 Dumps section contents to arrays/hashes intended for that purpose. 167 166 """ 168 - 167 + # 168 + # If we have accumulated no contents in the default ("description") 169 + # section, don't bother. 
170 + # 171 + if self.section == SECTION_DEFAULT and not self._contents: 172 + return 169 173 name = self.section 170 - contents = self.contents 174 + contents = self.contents() 171 175 172 176 if type_param.match(name): 173 177 name = type_param.group(1) ··· 189 165 self.parameterdescs[name] = contents 190 166 self.parameterdesc_start_lines[name] = self.new_start_line 191 167 192 - self.sectcheck += name + " " 193 - self.new_start_line = 0 194 - 195 - elif name == "@...": 196 - name = "..." 197 - self.parameterdescs[name] = contents 198 - self.sectcheck += name + " " 199 - self.parameterdesc_start_lines[name] = self.new_start_line 200 168 self.new_start_line = 0 201 169 202 170 else: ··· 197 181 if name != SECTION_DEFAULT: 198 182 self.emit_msg(self.new_start_line, 199 183 f"duplicate section name '{name}'\n") 200 - self.sections[name] += contents 184 + # Treat as a new paragraph - add a blank line 185 + self.sections[name] += '\n' + contents 201 186 else: 202 187 self.sections[name] = contents 203 - self.sectionlist.append(name) 204 188 self.section_start_lines[name] = self.new_start_line 205 189 self.new_start_line = 0 206 190 ··· 208 192 209 193 if start_new: 210 194 self.section = SECTION_DEFAULT 211 - self.contents = "" 195 + self._contents = [] 212 196 213 197 214 198 class KernelDoc: ··· 219 203 220 204 # Section names 221 205 222 - section_intro = "Introduction" 223 206 section_context = "Context" 224 207 section_return = "Return" 225 208 ··· 232 217 233 218 # Initial state for the state machines 234 219 self.state = state.NORMAL 235 - self.inline_doc_state = state.INLINE_NA 236 220 237 221 # Store entry currently being processed 238 222 self.entry = None 239 223 240 224 # Place all potential outputs into an array 241 225 self.entries = [] 226 + 227 + # 228 + # We need Python 3.7 for its "dicts remember the insertion 229 + # order" guarantee 230 + # 231 + if sys.version_info.major == 3 and sys.version_info.minor < 7: 232 + self.emit_msg(0, 233 + 'Python 
3.7 or later is required for correct results') 242 234 243 235 def emit_msg(self, ln, msg, warning=True): 244 236 """Emit a message""" ··· 277 255 The actual output and output filters will be handled elsewhere 278 256 """ 279 257 280 - # The implementation here is different than the original kernel-doc: 281 - # instead of checking for output filters or actually output anything, 282 - # it just stores the declaration content at self.entries, as the 283 - # output will happen on a separate class. 284 - # 285 - # For now, we're keeping the same name of the function just to make 286 - # easier to compare the source code of both scripts 287 - 288 - args["declaration_start_line"] = self.entry.declaration_start_line 289 - args["type"] = dtype 290 - args["warnings"] = self.entry.warnings 291 - 292 - # TODO: use colletions.OrderedDict to remove sectionlist 293 - 294 - sections = args.get('sections', {}) 295 - sectionlist = args.get('sectionlist', []) 258 + item = KdocItem(name, dtype, self.entry.declaration_start_line, **args) 259 + item.warnings = self.entry.warnings 296 260 297 261 # Drop empty sections 298 262 # TODO: improve empty sections logic to emit warnings 263 + sections = self.entry.sections 299 264 for section in ["Description", "Return"]: 300 - if section in sectionlist: 301 - if not sections[section].rstrip(): 302 - del sections[section] 303 - sectionlist.remove(section) 304 - 305 - self.entries.append((name, args)) 265 + if section in sections and not sections[section].rstrip(): 266 + del sections[section] 267 + item.set_sections(sections, self.entry.section_start_lines) 268 + item.set_params(self.entry.parameterlist, self.entry.parameterdescs, 269 + self.entry.parametertypes, 270 + self.entry.parameterdesc_start_lines) 271 + self.entries.append(item) 306 272 307 273 self.config.log.debug("Output: %s:%s = %s", dtype, name, pformat(args)) 308 274 ··· 304 294 305 295 # State flags 306 296 self.state = state.NORMAL 307 - self.inline_doc_state = state.INLINE_NA 
308 297 309 298 def push_parameter(self, ln, decl_type, param, dtype, 310 299 org_arg, declaration_name): ··· 376 367 org_arg = KernRe(r'\s\s+').sub(' ', org_arg) 377 368 self.entry.parametertypes[param] = org_arg 378 369 379 - def save_struct_actual(self, actual): 380 - """ 381 - Strip all spaces from the actual param so that it looks like 382 - one string item. 383 - """ 384 - 385 - actual = KernRe(r'\s*').sub("", actual, count=1) 386 - 387 - self.entry.struct_actual += actual + " " 388 370 389 371 def create_parameter_list(self, ln, decl_type, args, 390 372 splitter, declaration_name): ··· 421 421 param = arg 422 422 423 423 dtype = KernRe(r'([^\(]+\(\*?)\s*' + re.escape(param)).sub(r'\1', arg) 424 - self.save_struct_actual(param) 425 424 self.push_parameter(ln, decl_type, param, dtype, 426 425 arg, declaration_name) 427 426 ··· 437 438 438 439 dtype = KernRe(r'([^\(]+\(\*?)\s*' + re.escape(param)).sub(r'\1', arg) 439 440 440 - self.save_struct_actual(param) 441 441 self.push_parameter(ln, decl_type, param, dtype, 442 442 arg, declaration_name) 443 443 ··· 469 471 470 472 param = r.group(1) 471 473 472 - self.save_struct_actual(r.group(2)) 473 474 self.push_parameter(ln, decl_type, r.group(2), 474 475 f"{dtype} {r.group(1)}", 475 476 arg, declaration_name) ··· 480 483 continue 481 484 482 485 if dtype != "": # Skip unnamed bit-fields 483 - self.save_struct_actual(r.group(1)) 484 486 self.push_parameter(ln, decl_type, r.group(1), 485 487 f"{dtype}:{r.group(2)}", 486 488 arg, declaration_name) 487 489 else: 488 - self.save_struct_actual(param) 489 490 self.push_parameter(ln, decl_type, param, dtype, 490 491 arg, declaration_name) 491 492 492 - def check_sections(self, ln, decl_name, decl_type, sectcheck, prmscheck): 493 + def check_sections(self, ln, decl_name, decl_type): 493 494 """ 494 495 Check for errors inside sections, emitting warnings if not found 495 496 parameters are described. 
496 497 """ 497 - 498 - sects = sectcheck.split() 499 - prms = prmscheck.split() 500 - err = False 501 - 502 - for sx in range(len(sects)): # pylint: disable=C0200 503 - err = True 504 - for px in range(len(prms)): # pylint: disable=C0200 505 - prm_clean = prms[px] 506 - prm_clean = KernRe(r'\[.*\]').sub('', prm_clean) 507 - prm_clean = attribute.sub('', prm_clean) 508 - 509 - # ignore array size in a parameter string; 510 - # however, the original param string may contain 511 - # spaces, e.g.: addr[6 + 2] 512 - # and this appears in @prms as "addr[6" since the 513 - # parameter list is split at spaces; 514 - # hence just ignore "[..." for the sections check; 515 - prm_clean = KernRe(r'\[.*').sub('', prm_clean) 516 - 517 - if prm_clean == sects[sx]: 518 - err = False 519 - break 520 - 521 - if err: 498 + for section in self.entry.sections: 499 + if section not in self.entry.parameterlist and \ 500 + not known_sections.search(section): 522 501 if decl_type == 'function': 523 502 dname = f"{decl_type} parameter" 524 503 else: 525 504 dname = f"{decl_type} member" 526 - 527 505 self.emit_msg(ln, 528 - f"Excess {dname} '{sects[sx]}' description in '{decl_name}'") 506 + f"Excess {dname} '{section}' description in '{decl_name}'") 529 507 530 508 def check_return_section(self, ln, declaration_name, return_type): 531 509 """ ··· 755 783 756 784 self.create_parameter_list(ln, decl_type, members, ';', 757 785 declaration_name) 758 - self.check_sections(ln, declaration_name, decl_type, 759 - self.entry.sectcheck, self.entry.struct_actual) 786 + self.check_sections(ln, declaration_name, decl_type) 760 787 761 788 # Adjust declaration for better display 762 789 declaration = KernRe(r'([\{;])').sub(r'\1\n', declaration) ··· 791 820 level += 1 792 821 793 822 self.output_declaration(decl_type, declaration_name, 794 - struct=declaration_name, 795 823 definition=declaration, 796 - parameterlist=self.entry.parameterlist, 797 - parameterdescs=self.entry.parameterdescs, 798 - 
parametertypes=self.entry.parametertypes, 799 - parameterdesc_start_lines=self.entry.parameterdesc_start_lines, 800 - sectionlist=self.entry.sectionlist, 801 - sections=self.entry.sections, 802 - section_start_lines=self.entry.section_start_lines, 803 824 purpose=self.entry.declaration_purpose) 804 825 805 826 def dump_enum(self, ln, proto): ··· 809 846 # Strip #define macros inside enums 810 847 proto = KernRe(r'#\s*((define|ifdef|if)\s+|endif)[^;]*;', flags=re.S).sub('', proto) 811 848 812 - members = None 813 - declaration_name = None 814 - 849 + # 850 + # Parse out the name and members of the enum. Typedef form first. 851 + # 815 852 r = KernRe(r'typedef\s+enum\s*\{(.*)\}\s*(\w*)\s*;') 816 853 if r.search(proto): 817 854 declaration_name = r.group(2) 818 855 members = r.group(1).rstrip() 856 + # 857 + # Failing that, look for a straight enum 858 + # 819 859 else: 820 860 r = KernRe(r'enum\s+(\w*)\s*\{(.*)\}') 821 861 if r.match(proto): 822 862 declaration_name = r.group(1) 823 863 members = r.group(2).rstrip() 824 - 825 - if not members: 826 - self.emit_msg(ln, f"{proto}: error: Cannot parse enum!") 827 - return 828 - 864 + # 865 + # OK, this isn't going to work. 866 + # 867 + else: 868 + self.emit_msg(ln, f"{proto}: error: Cannot parse enum!") 869 + return 870 + # 871 + # Make sure we found what we were expecting. 872 + # 829 873 if self.entry.identifier != declaration_name: 830 874 if self.entry.identifier == "": 831 875 self.emit_msg(ln, 832 876 f"{proto}: wrong kernel-doc identifier on prototype") 833 877 else: 834 878 self.emit_msg(ln, 835 - f"expecting prototype for enum {self.entry.identifier}. Prototype was for enum {declaration_name} instead") 879 + f"expecting prototype for enum {self.entry.identifier}. 
" 880 + f"Prototype was for enum {declaration_name} instead") 836 881 return 837 882 838 883 if not declaration_name: 839 884 declaration_name = "(anonymous)" 840 - 885 + # 886 + # Parse out the name of each enum member, and verify that we 887 + # have a description for it. 888 + # 841 889 member_set = set() 842 - 843 - members = KernRe(r'\([^;]*?[\)]').sub('', members) 844 - 890 + members = KernRe(r'\([^;)]*\)').sub('', members) 845 891 for arg in members.split(','): 846 892 if not arg: 847 893 continue ··· 861 889 self.emit_msg(ln, 862 890 f"Enum value '{arg}' not described in enum '{declaration_name}'") 863 891 member_set.add(arg) 864 - 892 + # 893 + # Ensure that every described member actually exists in the enum. 894 + # 865 895 for k in self.entry.parameterdescs: 866 896 if k not in member_set: 867 897 self.emit_msg(ln, 868 898 f"Excess enum value '%{k}' description in '{declaration_name}'") 869 899 870 900 self.output_declaration('enum', declaration_name, 871 - enum=declaration_name, 872 - parameterlist=self.entry.parameterlist, 873 - parameterdescs=self.entry.parameterdescs, 874 - parameterdesc_start_lines=self.entry.parameterdesc_start_lines, 875 - sectionlist=self.entry.sectionlist, 876 - sections=self.entry.sections, 877 - section_start_lines=self.entry.section_start_lines, 878 901 purpose=self.entry.declaration_purpose) 879 902 880 903 def dump_declaration(self, ln, prototype): ··· 879 912 880 913 if self.entry.decl_type == "enum": 881 914 self.dump_enum(ln, prototype) 882 - return 883 - 884 - if self.entry.decl_type == "typedef": 915 + elif self.entry.decl_type == "typedef": 885 916 self.dump_typedef(ln, prototype) 886 - return 887 - 888 - if self.entry.decl_type in ["union", "struct"]: 917 + elif self.entry.decl_type in ["union", "struct"]: 889 918 self.dump_struct(ln, prototype) 890 - return 891 - 892 - self.output_declaration(self.entry.decl_type, prototype, 893 - entry=self.entry) 919 + else: 920 + # This would be a bug 921 + self.emit_message(ln, 
f'Unknown declaration type: {self.entry.decl_type}') 894 922 895 923 def dump_function(self, ln, prototype): 896 924 """ ··· 1019 1057 f"expecting prototype for {self.entry.identifier}(). Prototype was for {declaration_name}() instead") 1020 1058 return 1021 1059 1022 - prms = " ".join(self.entry.parameterlist) 1023 - self.check_sections(ln, declaration_name, "function", 1024 - self.entry.sectcheck, prms) 1060 + self.check_sections(ln, declaration_name, "function") 1025 1061 1026 1062 self.check_return_section(ln, declaration_name, return_type) 1027 1063 1028 1064 if 'typedef' in return_type: 1029 1065 self.output_declaration(decl_type, declaration_name, 1030 - function=declaration_name, 1031 1066 typedef=True, 1032 1067 functiontype=return_type, 1033 - parameterlist=self.entry.parameterlist, 1034 - parameterdescs=self.entry.parameterdescs, 1035 - parametertypes=self.entry.parametertypes, 1036 - parameterdesc_start_lines=self.entry.parameterdesc_start_lines, 1037 - sectionlist=self.entry.sectionlist, 1038 - sections=self.entry.sections, 1039 - section_start_lines=self.entry.section_start_lines, 1040 1068 purpose=self.entry.declaration_purpose, 1041 1069 func_macro=func_macro) 1042 1070 else: 1043 1071 self.output_declaration(decl_type, declaration_name, 1044 - function=declaration_name, 1045 1072 typedef=False, 1046 1073 functiontype=return_type, 1047 - parameterlist=self.entry.parameterlist, 1048 - parameterdescs=self.entry.parameterdescs, 1049 - parametertypes=self.entry.parametertypes, 1050 - parameterdesc_start_lines=self.entry.parameterdesc_start_lines, 1051 - sectionlist=self.entry.sectionlist, 1052 - sections=self.entry.sections, 1053 - section_start_lines=self.entry.section_start_lines, 1054 1074 purpose=self.entry.declaration_purpose, 1055 1075 func_macro=func_macro) 1056 1076 ··· 1069 1125 self.create_parameter_list(ln, decl_type, args, ',', declaration_name) 1070 1126 1071 1127 self.output_declaration(decl_type, declaration_name, 1072 - 
function=declaration_name, 1073 1128 typedef=True, 1074 1129 functiontype=return_type, 1075 - parameterlist=self.entry.parameterlist, 1076 - parameterdescs=self.entry.parameterdescs, 1077 - parametertypes=self.entry.parametertypes, 1078 - parameterdesc_start_lines=self.entry.parameterdesc_start_lines, 1079 - sectionlist=self.entry.sectionlist, 1080 - sections=self.entry.sections, 1081 - section_start_lines=self.entry.section_start_lines, 1082 1130 purpose=self.entry.declaration_purpose) 1083 1131 return 1084 1132 ··· 1090 1154 return 1091 1155 1092 1156 self.output_declaration('typedef', declaration_name, 1093 - typedef=declaration_name, 1094 - sectionlist=self.entry.sectionlist, 1095 - sections=self.entry.sections, 1096 - section_start_lines=self.entry.section_start_lines, 1097 1157 purpose=self.entry.declaration_purpose) 1098 1158 return 1099 1159 ··· 1104 1172 with a staticmethod decorator. 1105 1173 """ 1106 1174 1175 + # We support documenting some exported symbols with different 1176 + # names. A horrible hack. 1177 + suffixes = [ '_noprof' ] 1178 + 1107 1179 # Note: it accepts only one EXPORT_SYMBOL* per line, as having 1108 1180 # multiple export lines would violate Kernel coding style. 
1109 1181 1110 1182 if export_symbol.search(line): 1111 1183 symbol = export_symbol.group(2) 1112 - function_set.add(symbol) 1113 - return 1114 - 1115 - if export_symbol_ns.search(line): 1184 + elif export_symbol_ns.search(line): 1116 1185 symbol = export_symbol_ns.group(2) 1117 - function_set.add(symbol) 1186 + else: 1187 + return False 1188 + # 1189 + # Found an export, trim out any special suffixes 1190 + # 1191 + for suffix in suffixes: 1192 + # Be backward compatible with Python < 3.9 1193 + if symbol.endswith(suffix): 1194 + symbol = symbol[:-len(suffix)] 1195 + function_set.add(symbol) 1196 + return True 1118 1197 1119 1198 def process_normal(self, ln, line): 1120 1199 """ ··· 1137 1194 1138 1195 # start a new entry 1139 1196 self.reset_state(ln) 1140 - self.entry.in_doc_sect = False 1141 1197 1142 1198 # next line is always the function name 1143 1199 self.state = state.NAME ··· 1145 1203 """ 1146 1204 STATE_NAME: Looking for the "name - description" line 1147 1205 """ 1148 - 1206 + # 1207 + # Check for a DOC: block and handle them specially. 1208 + # 1149 1209 if doc_block.search(line): 1150 - self.entry.new_start_line = ln 1151 1210 1152 1211 if not doc_block.group(1): 1153 - self.entry.section = self.section_intro 1212 + self.entry.begin_section(ln, "Introduction") 1154 1213 else: 1155 - self.entry.section = doc_block.group(1) 1214 + self.entry.begin_section(ln, doc_block.group(1)) 1156 1215 1157 1216 self.entry.identifier = self.entry.section 1158 1217 self.state = state.DOCBLOCK 1159 - return 1160 - 1161 - if doc_decl.search(line): 1218 + # 1219 + # Otherwise we're looking for a normal kerneldoc declaration line. 1220 + # 1221 + elif doc_decl.search(line): 1162 1222 self.entry.identifier = doc_decl.group(1) 1163 - self.entry.is_kernel_comment = False 1164 - 1165 - decl_start = str(doc_com) # comment block asterisk 1166 - fn_type = r"(?:\w+\s*\*\s*)?" # type (for non-functions) 1167 - parenthesis = r"(?:\(\w*\))?" 
# optional parenthesis on function 1168 - decl_end = r"(?:[-:].*)" # end of the name part 1169 - 1170 - # test for pointer declaration type, foo * bar() - desc 1171 - r = KernRe(fr"^{decl_start}([\w\s]+?){parenthesis}?\s*{decl_end}?$") 1172 - if r.search(line): 1173 - self.entry.identifier = r.group(1) 1174 1223 1175 1224 # Test for data declaration 1176 - r = KernRe(r"^\s*\*?\s*(struct|union|enum|typedef)\b\s*(\w*)") 1177 - if r.search(line): 1178 - self.entry.decl_type = r.group(1) 1179 - self.entry.identifier = r.group(2) 1180 - self.entry.is_kernel_comment = True 1225 + if doc_begin_data.search(line): 1226 + self.entry.decl_type = doc_begin_data.group(1) 1227 + self.entry.identifier = doc_begin_data.group(2) 1228 + # 1229 + # Look for a function description 1230 + # 1231 + elif doc_begin_func.search(line): 1232 + self.entry.identifier = doc_begin_func.group(1) 1233 + self.entry.decl_type = "function" 1234 + # 1235 + # We struck out. 1236 + # 1181 1237 else: 1182 - # Look for foo() or static void foo() - description; 1183 - # or misspelt identifier 1184 - 1185 - r1 = KernRe(fr"^{decl_start}{fn_type}(\w+)\s*{parenthesis}\s*{decl_end}?$") 1186 - r2 = KernRe(fr"^{decl_start}{fn_type}(\w+[^-:]*){parenthesis}\s*{decl_end}$") 1187 - 1188 - for r in [r1, r2]: 1189 - if r.search(line): 1190 - self.entry.identifier = r.group(1) 1191 - self.entry.decl_type = "function" 1192 - 1193 - r = KernRe(r"define\s+") 1194 - self.entry.identifier = r.sub("", self.entry.identifier) 1195 - self.entry.is_kernel_comment = True 1196 - break 1197 - 1198 - self.entry.identifier = self.entry.identifier.strip(" ") 1199 - 1200 - self.state = state.BODY 1201 - 1202 - # if there's no @param blocks need to set up default section here 1203 - self.entry.section = SECTION_DEFAULT 1204 - self.entry.new_start_line = ln + 1 1205 - 1206 - r = KernRe("[-:](.*)") 1207 - if r.search(line): 1208 - # strip leading/trailing/multiple spaces 1209 - self.entry.descr = r.group(1).strip(" ") 1210 - 1211 - r = 
KernRe(r"\s+") 1212 - self.entry.descr = r.sub(" ", self.entry.descr) 1213 - self.entry.declaration_purpose = self.entry.descr 1214 - self.state = state.BODY_MAYBE 1215 - else: 1216 - self.entry.declaration_purpose = "" 1217 - 1218 - if not self.entry.is_kernel_comment: 1219 1238 self.emit_msg(ln, 1220 1239 f"This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst\n{line}") 1221 1240 self.state = state.NORMAL 1241 + return 1242 + # 1243 + # OK, set up for a new kerneldoc entry. 1244 + # 1245 + self.state = state.BODY 1246 + self.entry.identifier = self.entry.identifier.strip(" ") 1247 + # if there's no @param blocks need to set up default section here 1248 + self.entry.begin_section(ln + 1) 1249 + # 1250 + # Find the description portion, which *should* be there but 1251 + # isn't always. 1252 + # (We should be able to capture this from the previous parsing - someday) 1253 + # 1254 + r = KernRe("[-:](.*)") 1255 + if r.search(line): 1256 + self.entry.declaration_purpose = trim_whitespace(r.group(1)) 1257 + self.state = state.DECLARATION 1258 + else: 1259 + self.entry.declaration_purpose = "" 1222 1260 1223 1261 if not self.entry.declaration_purpose and self.config.wshort_desc: 1224 1262 self.emit_msg(ln, ··· 1213 1291 self.emit_msg(ln, 1214 1292 f"Scanning doc for {self.entry.decl_type} {self.entry.identifier}", 1215 1293 warning=False) 1216 - 1217 - return 1218 - 1294 + # 1219 1295 # Failed to find an identifier. Emit a warning 1220 - self.emit_msg(ln, f"Cannot find identifier on line:\n{line}") 1296 + # 1297 + else: 1298 + self.emit_msg(ln, f"Cannot find identifier on line:\n{line}") 1221 1299 1222 - def process_body(self, ln, line): 1223 - """ 1224 - STATE_BODY and STATE_BODY_MAYBE: the bulk of a kerneldoc comment. 
1225 - """ 1226 - 1227 - if self.state == state.BODY_WITH_BLANK_LINE: 1228 - r = KernRe(r"\s*\*\s?\S") 1229 - if r.match(line): 1230 - self.dump_section() 1231 - self.entry.section = SECTION_DEFAULT 1232 - self.entry.new_start_line = ln 1233 - self.entry.contents = "" 1234 - 1300 + # 1301 + # Helper function to determine if a new section is being started. 1302 + # 1303 + def is_new_section(self, ln, line): 1235 1304 if doc_sect.search(line): 1236 - self.entry.in_doc_sect = True 1305 + self.state = state.BODY 1306 + # 1307 + # Pick out the name of our new section, tweaking it if need be. 1308 + # 1237 1309 newsection = doc_sect.group(1) 1238 - 1239 - if newsection.lower() in ["description", "context"]: 1240 - newsection = newsection.title() 1241 - 1242 - # Special case: @return is a section, not a param description 1243 - if newsection.lower() in ["@return", "@returns", 1244 - "return", "returns"]: 1310 + if newsection.lower() == 'description': 1311 + newsection = 'Description' 1312 + elif newsection.lower() == 'context': 1313 + newsection = 'Context' 1314 + self.state = state.SPECIAL_SECTION 1315 + elif newsection.lower() in ["@return", "@returns", 1316 + "return", "returns"]: 1245 1317 newsection = "Return" 1246 - 1247 - # Perl kernel-doc has a check here for contents before sections. 1248 - # the logic there is always false, as in_doc_sect variable is 1249 - # always true. So, just don't implement Wcontents_before_sections 1250 - 1251 - # .title() 1318 + self.state = state.SPECIAL_SECTION 1319 + elif newsection[0] == '@': 1320 + self.state = state.SPECIAL_SECTION 1321 + # 1322 + # Initialize the contents, and get the new section going. 
1323 + # 1252 1324 newcontents = doc_sect.group(2) 1253 1325 if not newcontents: 1254 1326 newcontents = "" 1255 - 1256 - if self.entry.contents.strip("\n"): 1257 - self.dump_section() 1258 - 1259 - self.entry.new_start_line = ln 1260 - self.entry.section = newsection 1327 + self.dump_section() 1328 + self.entry.begin_section(ln, newsection) 1261 1329 self.entry.leading_space = None 1262 1330 1263 - self.entry.contents = newcontents.lstrip() 1264 - if self.entry.contents: 1265 - self.entry.contents += "\n" 1331 + self.entry.add_text(newcontents.lstrip()) 1332 + return True 1333 + return False 1266 1334 1267 - self.state = state.BODY 1268 - return 1269 - 1335 + # 1336 + # Helper function to detect (and effect) the end of a kerneldoc comment. 1337 + # 1338 + def is_comment_end(self, ln, line): 1270 1339 if doc_end.search(line): 1271 1340 self.dump_section() 1272 1341 ··· 1270 1357 self.entry.new_start_line = ln + 1 1271 1358 1272 1359 self.state = state.PROTO 1360 + return True 1361 + return False 1362 + 1363 + 1364 + def process_decl(self, ln, line): 1365 + """ 1366 + STATE_DECLARATION: We've seen the beginning of a declaration 1367 + """ 1368 + if self.is_new_section(ln, line) or self.is_comment_end(ln, line): 1369 + return 1370 + # 1371 + # Look for anything with the " * " line beginning. 1372 + # 1373 + if doc_content.search(line): 1374 + cont = doc_content.group(1) 1375 + # 1376 + # A blank line means that we have moved out of the declaration 1377 + # part of the comment (without any "special section" parameter 1378 + # descriptions). 1379 + # 1380 + if cont == "": 1381 + self.state = state.BODY 1382 + # 1383 + # Otherwise we have more of the declaration section to soak up. 
1384 + # 1385 + else: 1386 + self.entry.declaration_purpose = \ 1387 + trim_whitespace(self.entry.declaration_purpose + ' ' + cont) 1388 + else: 1389 + # Unknown line, ignore 1390 + self.emit_msg(ln, f"bad line: {line}") 1391 + 1392 + 1393 + def process_special(self, ln, line): 1394 + """ 1395 + STATE_SPECIAL_SECTION: a section ending with a blank line 1396 + """ 1397 + # 1398 + # If we have hit a blank line (only the " * " marker), then this 1399 + # section is done. 1400 + # 1401 + if KernRe(r"\s*\*\s*$").match(line): 1402 + self.entry.begin_section(ln, dump = True) 1403 + self.state = state.BODY 1404 + return 1405 + # 1406 + # Not a blank line, look for the other ways to end the section. 1407 + # 1408 + if self.is_new_section(ln, line) or self.is_comment_end(ln, line): 1409 + return 1410 + # 1411 + # OK, we should have a continuation of the text for this section. 1412 + # 1413 + if doc_content.search(line): 1414 + cont = doc_content.group(1) 1415 + # 1416 + # If the lines of text after the first in a special section have 1417 + # leading white space, we need to trim it out or Sphinx will get 1418 + # confused. For the second line (the None case), see what we 1419 + # find there and remember it. 1420 + # 1421 + if self.entry.leading_space is None: 1422 + r = KernRe(r'^(\s+)') 1423 + if r.match(cont): 1424 + self.entry.leading_space = len(r.group(1)) 1425 + else: 1426 + self.entry.leading_space = 0 1427 + # 1428 + # Otherwise, before trimming any leading chars, be *sure* 1429 + # that they are white space. We should maybe warn if this 1430 + # isn't the case. 1431 + # 1432 + for i in range(0, self.entry.leading_space): 1433 + if cont[i] != " ": 1434 + self.entry.leading_space = i 1435 + break 1436 + # 1437 + # Add the trimmed result to the section and we're done. 
1438 + # 1439 + self.entry.add_text(cont[self.entry.leading_space:]) 1440 + else: 1441 + # Unknown line, ignore 1442 + self.emit_msg(ln, f"bad line: {line}") 1443 + 1444 + def process_body(self, ln, line): 1445 + """ 1446 + STATE_BODY: the bulk of a kerneldoc comment. 1447 + """ 1448 + if self.is_new_section(ln, line) or self.is_comment_end(ln, line): 1273 1449 return 1274 1450 1275 1451 if doc_content.search(line): 1276 1452 cont = doc_content.group(1) 1453 + self.entry.add_text(cont) 1454 + else: 1455 + # Unknown line, ignore 1456 + self.emit_msg(ln, f"bad line: {line}") 1277 1457 1278 - if cont == "": 1279 - if self.entry.section == self.section_context: 1280 - self.dump_section() 1458 + def process_inline_name(self, ln, line): 1459 + """STATE_INLINE_NAME: beginning of docbook comments within a prototype.""" 1281 1460 1282 - self.entry.new_start_line = ln 1283 - self.state = state.BODY 1284 - else: 1285 - if self.entry.section != SECTION_DEFAULT: 1286 - self.state = state.BODY_WITH_BLANK_LINE 1287 - else: 1288 - self.state = state.BODY 1461 + if doc_inline_sect.search(line): 1462 + self.entry.begin_section(ln, doc_inline_sect.group(1)) 1463 + self.entry.add_text(doc_inline_sect.group(2).lstrip()) 1464 + self.state = state.INLINE_TEXT 1465 + elif doc_inline_end.search(line): 1466 + self.dump_section() 1467 + self.state = state.PROTO 1468 + elif doc_content.search(line): 1469 + self.emit_msg(ln, f"Incorrect use of kernel-doc format: {line}") 1470 + self.state = state.PROTO 1471 + # else ... ?? 
1289 1472 1290 - self.entry.contents += "\n" 1291 - 1292 - elif self.state == state.BODY_MAYBE: 1293 - 1294 - # Continued declaration purpose 1295 - self.entry.declaration_purpose = self.entry.declaration_purpose.rstrip() 1296 - self.entry.declaration_purpose += " " + cont 1297 - 1298 - r = KernRe(r"\s+") 1299 - self.entry.declaration_purpose = r.sub(' ', 1300 - self.entry.declaration_purpose) 1301 - 1302 - else: 1303 - if self.entry.section.startswith('@') or \ 1304 - self.entry.section == self.section_context: 1305 - if self.entry.leading_space is None: 1306 - r = KernRe(r'^(\s+)') 1307 - if r.match(cont): 1308 - self.entry.leading_space = len(r.group(1)) 1309 - else: 1310 - self.entry.leading_space = 0 1311 - 1312 - # Double-check if leading space are realy spaces 1313 - pos = 0 1314 - for i in range(0, self.entry.leading_space): 1315 - if cont[i] != " ": 1316 - break 1317 - pos += 1 1318 - 1319 - cont = cont[pos:] 1320 - 1321 - # NEW LOGIC: 1322 - # In case it is different, update it 1323 - if self.entry.leading_space != pos: 1324 - self.entry.leading_space = pos 1325 - 1326 - self.entry.contents += cont + "\n" 1327 - return 1328 - 1329 - # Unknown line, ignore 1330 - self.emit_msg(ln, f"bad line: {line}") 1331 - 1332 - def process_inline(self, ln, line): 1333 - """STATE_INLINE: docbook comments within a prototype.""" 1334 - 1335 - if self.inline_doc_state == state.INLINE_NAME and \ 1336 - doc_inline_sect.search(line): 1337 - self.entry.section = doc_inline_sect.group(1) 1338 - self.entry.new_start_line = ln 1339 - 1340 - self.entry.contents = doc_inline_sect.group(2).lstrip() 1341 - if self.entry.contents != "": 1342 - self.entry.contents += "\n" 1343 - 1344 - self.inline_doc_state = state.INLINE_TEXT 1345 - # Documentation block end */ 1346 - return 1473 + def process_inline_text(self, ln, line): 1474 + """STATE_INLINE_TEXT: docbook comments within a prototype.""" 1347 1475 1348 1476 if doc_inline_end.search(line): 1349 - if self.entry.contents not in ["", 
"\n"]: 1350 - self.dump_section() 1351 - 1477 + self.dump_section() 1352 1478 self.state = state.PROTO 1353 - self.inline_doc_state = state.INLINE_NA 1354 - return 1355 - 1356 - if doc_content.search(line): 1357 - if self.inline_doc_state == state.INLINE_TEXT: 1358 - self.entry.contents += doc_content.group(1) + "\n" 1359 - if not self.entry.contents.strip(" ").rstrip("\n"): 1360 - self.entry.contents = "" 1361 - 1362 - elif self.inline_doc_state == state.INLINE_NAME: 1363 - self.emit_msg(ln, 1364 - f"Incorrect use of kernel-doc format: {line}") 1365 - 1366 - self.inline_doc_state = state.INLINE_ERROR 1479 + elif doc_content.search(line): 1480 + self.entry.add_text(doc_content.group(1)) 1481 + # else ... ?? 1367 1482 1368 1483 def syscall_munge(self, ln, proto): # pylint: disable=W0613 1369 1484 """ ··· 1473 1532 """Ancillary routine to process a function prototype""" 1474 1533 1475 1534 # strip C99-style comments to end of line 1476 - r = KernRe(r"\/\/.*$", re.S) 1477 - line = r.sub('', line) 1478 - 1535 + line = KernRe(r"\/\/.*$", re.S).sub('', line) 1536 + # 1537 + # Soak up the line's worth of prototype text, stopping at { or ; if present. 1538 + # 1479 1539 if KernRe(r'\s*#\s*define').match(line): 1480 1540 self.entry.prototype = line 1481 - elif line.startswith('#'): 1482 - # Strip other macros like #ifdef/#ifndef/#endif/... 1483 - pass 1484 - else: 1541 + elif not line.startswith('#'): # skip other preprocessor stuff 1485 1542 r = KernRe(r'([^\{]*)') 1486 1543 if r.match(line): 1487 1544 self.entry.prototype += r.group(1) + " " 1488 - 1545 + # 1546 + # If we now have the whole prototype, clean it up and declare victory. 
1547 + # 1489 1548 if '{' in line or ';' in line or KernRe(r'\s*#\s*define').match(line): 1490 - # strip comments 1491 - r = KernRe(r'/\*.*?\*/') 1492 - self.entry.prototype = r.sub('', self.entry.prototype) 1493 - 1494 - # strip newlines/cr's 1495 - r = KernRe(r'[\r\n]+') 1496 - self.entry.prototype = r.sub(' ', self.entry.prototype) 1497 - 1498 - # strip leading spaces 1499 - r = KernRe(r'^\s+') 1500 - self.entry.prototype = r.sub('', self.entry.prototype) 1501 - 1549 + # strip comments and surrounding spaces 1550 + self.entry.prototype = KernRe(r'/\*.*\*/').sub('', self.entry.prototype).strip() 1551 + # 1502 1552 # Handle self.entry.prototypes for function pointers like: 1503 1553 # int (*pcs_config)(struct foo) 1504 - 1554 + # by turning it into 1555 + # int pcs_config(struct foo) 1556 + # 1505 1557 r = KernRe(r'^(\S+\s+)\(\s*\*(\S+)\)') 1506 1558 self.entry.prototype = r.sub(r'\1\2', self.entry.prototype) 1507 - 1559 + # 1560 + # Handle special declaration syntaxes 1561 + # 1508 1562 if 'SYSCALL_DEFINE' in self.entry.prototype: 1509 1563 self.entry.prototype = self.syscall_munge(ln, 1510 1564 self.entry.prototype) 1511 - 1512 - r = KernRe(r'TRACE_EVENT|DEFINE_EVENT|DEFINE_SINGLE_EVENT') 1513 - if r.search(self.entry.prototype): 1514 - self.entry.prototype = self.tracepoint_munge(ln, 1515 - self.entry.prototype) 1516 - 1565 + else: 1566 + r = KernRe(r'TRACE_EVENT|DEFINE_EVENT|DEFINE_SINGLE_EVENT') 1567 + if r.search(self.entry.prototype): 1568 + self.entry.prototype = self.tracepoint_munge(ln, 1569 + self.entry.prototype) 1570 + # 1571 + # ... and we're done 1572 + # 1517 1573 self.dump_function(ln, self.entry.prototype) 1518 1574 self.reset_state(ln) 1519 1575 1520 1576 def process_proto_type(self, ln, line): 1521 1577 """Ancillary routine to process a type""" 1522 1578 1523 - # Strip newlines/cr's. 
1524 - line = KernRe(r'[\r\n]+', re.S).sub(' ', line) 1525 - 1526 - # Strip leading spaces 1527 - line = KernRe(r'^\s+', re.S).sub('', line) 1528 - 1529 - # Strip trailing spaces 1530 - line = KernRe(r'\s+$', re.S).sub('', line) 1531 - 1532 - # Strip C99-style comments to the end of the line 1533 - line = KernRe(r"\/\/.*$", re.S).sub('', line) 1579 + # Strip C99-style comments and surrounding whitespace 1580 + line = KernRe(r"//.*$", re.S).sub('', line).strip() 1581 + if not line: 1582 + return # nothing to see here 1534 1583 1535 1584 # To distinguish preprocessor directive from regular declaration later. 1536 1585 if line.startswith('#'): 1537 1586 line += ";" 1538 - 1539 - r = KernRe(r'([^\{\};]*)([\{\};])(.*)') 1540 - while True: 1541 - if r.search(line): 1542 - if self.entry.prototype: 1543 - self.entry.prototype += " " 1544 - self.entry.prototype += r.group(1) + r.group(2) 1545 - 1546 - self.entry.brcount += r.group(2).count('{') 1547 - self.entry.brcount -= r.group(2).count('}') 1548 - 1549 - self.entry.brcount = max(self.entry.brcount, 0) 1550 - 1551 - if r.group(2) == ';' and self.entry.brcount == 0: 1587 + # 1588 + # Split the declaration on any of { } or ;, and accumulate pieces 1589 + # until we hit a semicolon while not inside {brackets} 1590 + # 1591 + r = KernRe(r'(.*?)([{};])') 1592 + for chunk in r.split(line): 1593 + if chunk: # Ignore empty matches 1594 + self.entry.prototype += chunk 1595 + # 1596 + # This cries out for a match statement ... someday after we can 1597 + # drop Python 3.9 ... 
1598 + # 1599 + if chunk == '{': 1600 + self.entry.brcount += 1 1601 + elif chunk == '}': 1602 + self.entry.brcount -= 1 1603 + elif chunk == ';' and self.entry.brcount <= 0: 1552 1604 self.dump_declaration(ln, self.entry.prototype) 1553 1605 self.reset_state(ln) 1554 - break 1555 - 1556 - line = r.group(3) 1557 - else: 1558 - self.entry.prototype += line 1559 - break 1606 + return 1607 + # 1608 + # We hit the end of the line while still in the declaration; put 1609 + # in a space to represent the newline. 1610 + # 1611 + self.entry.prototype += ' ' 1560 1612 1561 1613 def process_proto(self, ln, line): 1562 1614 """STATE_PROTO: reading a function/whatever prototype.""" 1563 1615 1564 1616 if doc_inline_oneline.search(line): 1565 - self.entry.section = doc_inline_oneline.group(1) 1566 - self.entry.contents = doc_inline_oneline.group(2) 1567 - 1568 - if self.entry.contents != "": 1569 - self.entry.contents += "\n" 1570 - self.dump_section(start_new=False) 1617 + self.entry.begin_section(ln, doc_inline_oneline.group(1)) 1618 + self.entry.add_text(doc_inline_oneline.group(2)) 1619 + self.dump_section() 1571 1620 1572 1621 elif doc_inline_start.search(line): 1573 - self.state = state.INLINE 1574 - self.inline_doc_state = state.INLINE_NAME 1622 + self.state = state.INLINE_NAME 1575 1623 1576 1624 elif self.entry.decl_type == 'function': 1577 1625 self.process_proto_function(ln, line) ··· 1573 1643 1574 1644 if doc_end.search(line): 1575 1645 self.dump_section() 1576 - self.output_declaration("doc", self.entry.identifier, 1577 - sectionlist=self.entry.sectionlist, 1578 - sections=self.entry.sections, 1579 - section_start_lines=self.entry.section_start_lines) 1646 + self.output_declaration("doc", self.entry.identifier) 1580 1647 self.reset_state(ln) 1581 1648 1582 1649 elif doc_content.search(line): 1583 - self.entry.contents += doc_content.group(1) + "\n" 1650 + self.entry.add_text(doc_content.group(1)) 1584 1651 1585 1652 def parse_export(self): 1586 1653 """ ··· 1598 
1671 1599 1672 return export_table 1600 1673 1674 + # 1675 + # The state/action table telling us which function to invoke in 1676 + # each state. 1677 + # 1678 + state_actions = { 1679 + state.NORMAL: process_normal, 1680 + state.NAME: process_name, 1681 + state.BODY: process_body, 1682 + state.DECLARATION: process_decl, 1683 + state.SPECIAL_SECTION: process_special, 1684 + state.INLINE_NAME: process_inline_name, 1685 + state.INLINE_TEXT: process_inline_text, 1686 + state.PROTO: process_proto, 1687 + state.DOCBLOCK: process_docblock, 1688 + } 1689 + 1601 1690 def parse_kdoc(self): 1602 1691 """ 1603 1692 Open and process each line of a C source file. ··· 1624 1681 Besides parsing kernel-doc tags, it also parses export symbols. 1625 1682 """ 1626 1683 1627 - cont = False 1628 1684 prev = "" 1629 1685 prev_ln = None 1630 1686 export_table = set() ··· 1639 1697 if self.state == state.PROTO: 1640 1698 if line.endswith("\\"): 1641 1699 prev += line.rstrip("\\") 1642 - cont = True 1643 - 1644 1700 if not prev_ln: 1645 1701 prev_ln = ln 1646 - 1647 1702 continue 1648 1703 1649 - if cont: 1704 + if prev: 1650 1705 ln = prev_ln 1651 1706 line = prev + line 1652 1707 prev = "" 1653 - cont = False 1654 1708 prev_ln = None 1655 1709 1656 - self.config.log.debug("%d %s%s: %s", 1710 + self.config.log.debug("%d %s: %s", 1657 1711 ln, state.name[self.state], 1658 - state.inline_name[self.inline_doc_state], 1659 1712 line) 1660 1713 1661 1714 # This is an optimization over the original script. ··· 1658 1721 # it was read twice. Here, we use the already-existing 1659 1722 # loop to parse exported symbols as well. 1660 1723 # 1661 - # TODO: It should be noticed that not all states are 1662 - # needed here. On a future cleanup, process export only 1663 - # at the states that aren't handling comment markups. 
1664 - self.process_export(export_table, line) 1724 + if (self.state != state.NORMAL) or \ 1725 + not self.process_export(export_table, line): 1726 + # Hand this line to the appropriate state handler 1727 + self.state_actions[self.state](self, ln, line) 1665 1728 1666 - # Hand this line to the appropriate state handler 1667 - if self.state == state.NORMAL: 1668 - self.process_normal(ln, line) 1669 - elif self.state == state.NAME: 1670 - self.process_name(ln, line) 1671 - elif self.state in [state.BODY, state.BODY_MAYBE, 1672 - state.BODY_WITH_BLANK_LINE]: 1673 - self.process_body(ln, line) 1674 - elif self.state == state.INLINE: # scanning for inline parameters 1675 - self.process_inline(ln, line) 1676 - elif self.state == state.PROTO: 1677 - self.process_proto(ln, line) 1678 - elif self.state == state.DOCBLOCK: 1679 - self.process_docblock(ln, line) 1680 1729 except OSError: 1681 1730 self.config.log.error(f"Error: Cannot open file {self.fname}") 1682 1731
+2 -5
scripts/lib/kdoc/kdoc_re.py
··· 29 29 """ 30 30 Adds a new regex or re-use it from the cache. 31 31 """ 32 - 33 - if string in re_cache: 34 - self.regex = re_cache[string] 35 - else: 32 + self.regex = re_cache.get(string, None) 33 + if not self.regex: 36 34 self.regex = re.compile(string, flags=flags) 37 - 38 35 if self.cache: 39 36 re_cache[string] = self.regex 40 37
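The kdoc_re.py hunk above tightens the compiled-pattern cache by using `dict.get()` instead of an explicit membership test. A small standalone sketch of the same caching idiom (the `cached_compile` name is hypothetical); note that, as in `KernRe`, the cache is keyed on the pattern string alone, so flags only take effect the first time a given pattern is compiled:

```python
import re

# Module-level cache shared by all callers, mirroring kdoc_re's re_cache.
_re_cache = {}

def cached_compile(pattern, flags=0):
    """Return a compiled regex, reusing an earlier compilation when possible."""
    regex = _re_cache.get(pattern)
    if not regex:
        regex = re.compile(pattern, flags=flags)
        _re_cache[pattern] = regex
    return regex

a = cached_compile(r"\d+")
b = cached_compile(r"\d+")
print(a is b)  # the second call returns the cached object
```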
+5 -1
scripts/sphinx-pre-install
··· 245 245 246 246 sub get_sphinx_fname() 247 247 { 248 + if ($ENV{'SPHINXBUILD'}) { 249 + return $ENV{'SPHINXBUILD'}; 250 + } 251 + 248 252 my $fname = "sphinx-build"; 249 253 return $fname if findprog($fname); 250 254 ··· 413 409 my $old = 0; 414 410 my $rel; 415 411 my $noto_sans_redhat = "google-noto-sans-cjk-ttc-fonts"; 416 - $rel = $1 if ($system_release =~ /release\s+(\d+)/); 412 + $rel = $2 if ($system_release =~ /(release|Linux)\s+(\d+)/); 417 413 418 414 if (!($system_release =~ /Fedora/)) { 419 415 $map{"virtualenv"} = "python-virtualenv";
+513
scripts/test_doc_build.py
··· 1 + #!/usr/bin/env python3 2 + # SPDX-License-Identifier: GPL-2.0 3 + # Copyright(c) 2025: Mauro Carvalho Chehab <mchehab+huawei@kernel.org> 4 + # 5 + # pylint: disable=R0903,R0912,R0913,R0914,R0917,C0301 6 + 7 + """ 8 + Install minimal supported requirements for different Sphinx versions 9 + and optionally test the build. 10 + """ 11 + 12 + import argparse 13 + import asyncio 14 + import os.path 15 + import shutil 16 + import sys 17 + import time 18 + import subprocess 19 + 20 + # Minimal python version supported by the building system. 21 + 22 + PYTHON = os.path.basename(sys.executable) 23 + 24 + min_python_bin = None 25 + 26 + for i in range(9, 13): 27 + p = f"python3.{i}" 28 + if shutil.which(p): 29 + min_python_bin = p 30 + break 31 + 32 + if not min_python_bin: 33 + min_python_bin = PYTHON 34 + 35 + # Starting from 8.0, Python 3.9 is not supported anymore. 36 + PYTHON_VER_CHANGES = {(8, 0, 0): PYTHON} 37 + 38 + DEFAULT_VERSIONS_TO_TEST = [ 39 + (3, 4, 3), # Minimal supported version 40 + (5, 3, 0), # CentOS Stream 9 / AlmaLinux 9 41 + (6, 1, 1), # Debian 12 42 + (7, 2, 1), # openSUSE Leap 15.6 43 + (7, 2, 6), # Ubuntu 24.04 LTS 44 + (7, 4, 7), # Ubuntu 24.10 45 + (7, 3, 0), # openSUSE Tumbleweed 46 + (8, 1, 3), # Fedora 42 47 + (8, 2, 3) # Latest version - covers rolling distros 48 + ] 49 + 50 + # Sphinx versions to be installed and their incremental requirements 51 + SPHINX_REQUIREMENTS = { 52 + # Oldest versions we support for each package required by Sphinx 3.4.3 53 + (3, 4, 3): { 54 + "docutils": "0.16", 55 + "alabaster": "0.7.12", 56 + "babel": "2.8.0", 57 + "certifi": "2020.6.20", 58 + 59 + "idna": "2.10", 60 + "imagesize": "1.2.0", 61 + "Jinja2": "2.11.2", 62 + "MarkupSafe": "1.1.1", 63 + "packaging": "20.4", 64 + "Pygments": "2.6.1", 65 + "PyYAML": "5.1", 66 + "requests": "2.24.0", 67 + "snowballstemmer": "2.0.0", 68 + "sphinxcontrib-applehelp": "1.0.2", 69 + "sphinxcontrib-devhelp": "1.0.2", 70 + "sphinxcontrib-htmlhelp": 
"1.0.3", 71 + "sphinxcontrib-jsmath": "1.0.1", 72 + "sphinxcontrib-qthelp": "1.0.3", 73 + "sphinxcontrib-serializinghtml": "1.1.4", 74 + "urllib3": "1.25.9", 75 + }, 76 + 77 + # Update package dependencies to a more modern base. The goal here 78 + # is to avoid to many incremental changes for the next entries 79 + (3, 5, 0): { 80 + "alabaster": "0.7.13", 81 + "babel": "2.17.0", 82 + "certifi": "2025.6.15", 83 + "idna": "3.10", 84 + "imagesize": "1.4.1", 85 + "packaging": "25.0", 86 + "Pygments": "2.8.1", 87 + "requests": "2.32.4", 88 + "snowballstemmer": "3.0.1", 89 + "sphinxcontrib-applehelp": "1.0.4", 90 + "sphinxcontrib-htmlhelp": "2.0.1", 91 + "sphinxcontrib-serializinghtml": "1.1.5", 92 + "urllib3": "2.0.0", 93 + }, 94 + 95 + # Starting from here, ensure all docutils versions are covered with 96 + # supported Sphinx versions. Other packages are upgraded only when 97 + # required by pip 98 + (4, 0, 0): { 99 + "PyYAML": "5.1", 100 + }, 101 + (4, 1, 0): { 102 + "docutils": "0.17", 103 + "Pygments": "2.19.1", 104 + "Jinja2": "3.0.3", 105 + "MarkupSafe": "2.0", 106 + }, 107 + (4, 3, 0): {}, 108 + (4, 4, 0): {}, 109 + (4, 5, 0): { 110 + "docutils": "0.17.1", 111 + }, 112 + (5, 0, 0): {}, 113 + (5, 1, 0): {}, 114 + (5, 2, 0): { 115 + "docutils": "0.18", 116 + "Jinja2": "3.1.2", 117 + "MarkupSafe": "2.0", 118 + "PyYAML": "5.3.1", 119 + }, 120 + (5, 3, 0): { 121 + "docutils": "0.18.1", 122 + }, 123 + (6, 0, 0): {}, 124 + (6, 1, 0): {}, 125 + (6, 2, 0): { 126 + "PyYAML": "5.4.1", 127 + }, 128 + (7, 0, 0): {}, 129 + (7, 1, 0): {}, 130 + (7, 2, 0): { 131 + "docutils": "0.19", 132 + "PyYAML": "6.0.1", 133 + "sphinxcontrib-serializinghtml": "1.1.9", 134 + }, 135 + (7, 2, 6): { 136 + "docutils": "0.20", 137 + }, 138 + (7, 3, 0): { 139 + "alabaster": "0.7.14", 140 + "PyYAML": "6.0.1", 141 + "tomli": "2.0.1", 142 + }, 143 + (7, 4, 0): { 144 + "docutils": "0.20.1", 145 + "PyYAML": "6.0.1", 146 + }, 147 + (8, 0, 0): { 148 + "docutils": "0.21", 149 + }, 150 + (8, 1, 0): { 151 + 
"docutils": "0.21.1", 152 + "PyYAML": "6.0.1", 153 + "sphinxcontrib-applehelp": "1.0.7", 154 + "sphinxcontrib-devhelp": "1.0.6", 155 + "sphinxcontrib-htmlhelp": "2.0.6", 156 + "sphinxcontrib-qthelp": "1.0.6", 157 + }, 158 + (8, 2, 0): { 159 + "docutils": "0.21.2", 160 + "PyYAML": "6.0.1", 161 + "sphinxcontrib-serializinghtml": "1.1.9", 162 + }, 163 + } 164 + 165 + 166 + class AsyncCommands: 167 + """Excecute command synchronously""" 168 + 169 + def __init__(self, fp=None): 170 + 171 + self.stdout = None 172 + self.stderr = None 173 + self.output = None 174 + self.fp = fp 175 + 176 + def log(self, out, verbose, is_info=True): 177 + out = out.removesuffix('\n') 178 + 179 + if verbose: 180 + if is_info: 181 + print(out) 182 + else: 183 + print(out, file=sys.stderr) 184 + 185 + if self.fp: 186 + self.fp.write(out + "\n") 187 + 188 + async def _read(self, stream, verbose, is_info): 189 + """Ancillary routine to capture while displaying""" 190 + 191 + while stream is not None: 192 + line = await stream.readline() 193 + if line: 194 + out = line.decode("utf-8", errors="backslashreplace") 195 + self.log(out, verbose, is_info) 196 + if is_info: 197 + self.stdout += out 198 + else: 199 + self.stderr += out 200 + else: 201 + break 202 + 203 + async def run(self, cmd, capture_output=False, check=False, 204 + env=None, verbose=True): 205 + 206 + """ 207 + Execute an arbitrary command, handling errors. 
208 + 209 + Note that this class is not thread-safe 210 + """ 211 + 212 + self.stdout = "" 213 + self.stderr = "" 214 + 215 + self.log("$ " + " ".join(cmd), verbose) 216 + 217 + proc = await asyncio.create_subprocess_exec(cmd[0], 218 + *cmd[1:], 219 + env=env, 220 + stdout=asyncio.subprocess.PIPE, 221 + stderr=asyncio.subprocess.PIPE) 222 + 223 + # Handle input and output in realtime 224 + await asyncio.gather( 225 + self._read(proc.stdout, verbose, True), 226 + self._read(proc.stderr, verbose, False), 227 + ) 228 + 229 + await proc.wait() 230 + 231 + if check and proc.returncode > 0: 232 + raise subprocess.CalledProcessError(returncode=proc.returncode, 233 + cmd=" ".join(cmd), 234 + output=self.stdout, 235 + stderr=self.stderr) 236 + 237 + if capture_output: 238 + if proc.returncode > 0: 239 + self.log(f"Error {proc.returncode}", verbose=True, is_info=False) 240 + return "" 241 + 242 + return self.output 243 + 244 + ret = subprocess.CompletedProcess(args=cmd, 245 + returncode=proc.returncode, 246 + stdout=self.stdout, 247 + stderr=self.stderr) 248 + 249 + return ret 250 + 251 + 252 + class SphinxVenv: 253 + """ 254 + Installs Sphinx on one virtual env per Sphinx version with a minimal 255 + set of dependencies, adjusting them to each specific version. 
256 + """ 257 + 258 + def __init__(self): 259 + """Initialize instance variables""" 260 + 261 + self.built_time = {} 262 + self.first_run = True 263 + 264 + async def _handle_version(self, args, fp, 265 + cur_ver, cur_requirements, python_bin): 266 + """Handle a single Sphinx version""" 267 + 268 + cmd = AsyncCommands(fp) 269 + 270 + ver = ".".join(map(str, cur_ver)) 271 + 272 + if not self.first_run and args.wait_input and args.build: 273 + ret = input("Press Enter to continue or 'a' to abort: ").strip().lower() 274 + if ret == "a": 275 + print("Aborted.") 276 + sys.exit() 277 + else: 278 + self.first_run = False 279 + 280 + venv_dir = f"Sphinx_{ver}" 281 + req_file = f"requirements_{ver}.txt" 282 + 283 + cmd.log(f"\nSphinx {ver} with {python_bin}", verbose=True) 284 + 285 + # Create venv 286 + await cmd.run([python_bin, "-m", "venv", venv_dir], 287 + verbose=args.verbose, check=True) 288 + pip = os.path.join(venv_dir, "bin/pip") 289 + 290 + # Create install list 291 + reqs = [] 292 + for pkg, verstr in cur_requirements.items(): 293 + reqs.append(f"{pkg}=={verstr}") 294 + 295 + reqs.append(f"Sphinx=={ver}") 296 + 297 + await cmd.run([pip, "install"] + reqs, check=True, verbose=args.verbose) 298 + 299 + # Freeze environment 300 + result = await cmd.run([pip, "freeze"], verbose=False, check=True) 301 + 302 + # Pip install succeeded. 
Write requirements file 303 + if args.req_file: 304 + with open(req_file, "w", encoding="utf-8") as fp: 305 + fp.write(result.stdout) 306 + 307 + if args.build: 308 + start_time = time.time() 309 + 310 + # Prepare a venv environment 311 + env = os.environ.copy() 312 + bin_dir = os.path.join(venv_dir, "bin") 313 + env["PATH"] = bin_dir + ":" + env["PATH"] 314 + env["VIRTUAL_ENV"] = venv_dir 315 + if "PYTHONHOME" in env: 316 + del env["PYTHONHOME"] 317 + 318 + # Test doc build 319 + await cmd.run(["make", "cleandocs"], env=env, check=True) 320 + make = ["make"] 321 + 322 + if args.output: 323 + sphinx_build = os.path.realpath(f"{bin_dir}/sphinx-build") 324 + make += [f"O={args.output}", f"SPHINXBUILD={sphinx_build}"] 325 + 326 + if args.make_args: 327 + make += args.make_args 328 + 329 + make += args.targets 330 + 331 + if args.verbose: 332 + cmd.log(f". {bin_dir}/activate", verbose=True) 333 + await cmd.run(make, env=env, check=True, verbose=True) 334 + if args.verbose: 335 + cmd.log("deactivate", verbose=True) 336 + 337 + end_time = time.time() 338 + elapsed_time = end_time - start_time 339 + hours, minutes = divmod(elapsed_time, 3600) 340 + minutes, seconds = divmod(minutes, 60) 341 + 342 + hours = int(hours) 343 + minutes = int(minutes) 344 + seconds = int(seconds) 345 + 346 + self.built_time[ver] = f"{hours:02d}:{minutes:02d}:{seconds:02d}" 347 + 348 + cmd.log(f"Finished doc build for Sphinx {ver}. Elapsed time: {self.built_time[ver]}", verbose=True) 349 + 350 + async def run(self, args): 351 + """ 352 + Navigate though multiple Sphinx versions, handling each of them 353 + on a loop. 
354 + """ 355 + 356 + if args.log: 357 + fp = open(args.log, "w", encoding="utf-8") 358 + if not args.verbose: 359 + args.verbose = False 360 + else: 361 + fp = None 362 + if not args.verbose: 363 + args.verbose = True 364 + 365 + cur_requirements = {} 366 + python_bin = min_python_bin 367 + 368 + vers = set(SPHINX_REQUIREMENTS.keys()) | set(args.versions) 369 + 370 + for cur_ver in sorted(vers): 371 + if cur_ver in SPHINX_REQUIREMENTS: 372 + new_reqs = SPHINX_REQUIREMENTS[cur_ver] 373 + cur_requirements.update(new_reqs) 374 + 375 + if cur_ver in PYTHON_VER_CHANGES: # pylint: disable=R1715 376 + python_bin = PYTHON_VER_CHANGES[cur_ver] 377 + 378 + if cur_ver not in args.versions: 379 + continue 380 + 381 + if args.min_version: 382 + if cur_ver < args.min_version: 383 + continue 384 + 385 + if args.max_version: 386 + if cur_ver > args.max_version: 387 + break 388 + 389 + await self._handle_version(args, fp, cur_ver, cur_requirements, 390 + python_bin) 391 + 392 + if args.build: 393 + cmd = AsyncCommands(fp) 394 + cmd.log("\nSummary:", verbose=True) 395 + for ver, elapsed_time in sorted(self.built_time.items()): 396 + cmd.log(f"\tSphinx {ver} elapsed time: {elapsed_time}", 397 + verbose=True) 398 + 399 + if fp: 400 + fp.close() 401 + 402 + def parse_version(ver_str): 403 + """Convert a version string into a tuple.""" 404 + 405 + return tuple(map(int, ver_str.split("."))) 406 + 407 + 408 + DEFAULT_VERS = " - " 409 + DEFAULT_VERS += "\n - ".join(map(lambda v: f"{v[0]}.{v[1]}.{v[2]}", 410 + DEFAULT_VERSIONS_TO_TEST)) 411 + 412 + SCRIPT = os.path.relpath(__file__) 413 + 414 + DESCRIPTION = f""" 415 + This tool allows creating Python virtual environments for different 416 + Sphinx versions that are supported by the Linux Kernel build system. 417 + 418 + Besides creating the virtual environment, it can also test building 419 + the documentation using "make htmldocs" (and/or other doc targets). 
420 + 421 + If called without "--versions" argument, it covers the versions shipped 422 + on major distros, plus the lowest supported version: 423 + 424 + {DEFAULT_VERS} 425 + 426 + A typical usage is to run: 427 + 428 + {SCRIPT} -m -l sphinx_builds.log 429 + 430 + This will create one virtual env for the default version set and run 431 + "make htmldocs" for each version, creating a log file with the 432 + excecuted commands on it. 433 + 434 + NOTE: The build time can be very long, specially on old versions. Also, there 435 + is a known bug with Sphinx version 6.0.x: each subprocess uses a lot of 436 + memory. That, together with "-jauto" may cause OOM killer to cause 437 + failures at the doc generation. To minimize the risk, you may use the 438 + "-a" command line parameter to constrain the built directories and/or 439 + reduce the number of threads from "-jauto" to, for instance, "-j4": 440 + 441 + {SCRIPT} -m -V 6.0.1 -a "SPHINXDIRS=process" "SPHINXOPTS='-j4'" 442 + 443 + """ 444 + 445 + MAKE_TARGETS = [ 446 + "htmldocs", 447 + "texinfodocs", 448 + "infodocs", 449 + "latexdocs", 450 + "pdfdocs", 451 + "epubdocs", 452 + "xmldocs", 453 + ] 454 + 455 + async def main(): 456 + """Main program""" 457 + 458 + parser = argparse.ArgumentParser(description=DESCRIPTION, 459 + formatter_class=argparse.RawDescriptionHelpFormatter) 460 + 461 + ver_group = parser.add_argument_group("Version range options") 462 + 463 + ver_group.add_argument('-V', '--versions', nargs="*", 464 + default=DEFAULT_VERSIONS_TO_TEST,type=parse_version, 465 + help='Sphinx versions to test') 466 + ver_group.add_argument('--min-version', "--min", type=parse_version, 467 + help='Sphinx minimal version') 468 + ver_group.add_argument('--max-version', "--max", type=parse_version, 469 + help='Sphinx maximum version') 470 + ver_group.add_argument('-f', '--full', action='store_true', 471 + help='Add all Sphinx (major,minor) supported versions to the version range') 472 + 473 + build_group = 
parser.add_argument_group("Build options") 474 + 475 + build_group.add_argument('-b', '--build', action='store_true', 476 + help='Build documentation') 477 + build_group.add_argument('-a', '--make-args', nargs="*", 478 + help='extra arguments for make, like SPHINXDIRS=netlink/specs', 479 + ) 480 + build_group.add_argument('-t', '--targets', nargs="+", choices=MAKE_TARGETS, 481 + default=[MAKE_TARGETS[0]], 482 + help="make build targets. Default: htmldocs.") 483 + build_group.add_argument("-o", '--output', 484 + help="output directory for the make O=OUTPUT") 485 + 486 + other_group = parser.add_argument_group("Other options") 487 + 488 + other_group.add_argument('-r', '--req-file', action='store_true', 489 + help='write a requirements.txt file') 490 + other_group.add_argument('-l', '--log', 491 + help='Log command output on a file') 492 + other_group.add_argument('-v', '--verbose', action='store_true', 493 + help='Verbose all commands') 494 + other_group.add_argument('-i', '--wait-input', action='store_true', 495 + help='Wait for an enter before going to the next version') 496 + 497 + args = parser.parse_args() 498 + 499 + if not args.make_args: 500 + args.make_args = [] 501 + 502 + sphinx_versions = sorted(list(SPHINX_REQUIREMENTS.keys())) 503 + 504 + if args.full: 505 + args.versions += list(SPHINX_REQUIREMENTS.keys()) 506 + 507 + venv = SphinxVenv() 508 + await venv.run(args) 509 + 510 + 511 + # Call main method 512 + if __name__ == "__main__": 513 + asyncio.run(main())
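The AsyncCommands class above drains a subprocess's stdout and stderr concurrently with `asyncio.gather()`, which avoids the classic deadlock where one pipe buffer fills while the other is being read. A minimal standalone sketch of that pattern (the `run_and_capture` helper name is illustrative, not part of the patch):

```python
import asyncio

async def run_and_capture(cmd):
    """Run a command, draining stdout and stderr concurrently."""
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )

    async def drain(stream):
        # Read line by line, so output could also be echoed in real time
        chunks = []
        while True:
            line = await stream.readline()
            if not line:
                break
            chunks.append(line.decode("utf-8", errors="backslashreplace"))
        return "".join(chunks)

    # Reading both pipes in parallel prevents blocking when one buffer fills
    out, err = await asyncio.gather(drain(proc.stdout), drain(proc.stderr))
    await proc.wait()
    return proc.returncode, out, err

rc, out, err = asyncio.run(run_and_capture(["echo", "hello"]))
```

The same structure scales to the script's use case, where each line is additionally logged to a file as it arrives.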
scripts/ver_linux
 printversion("Module-init-tools", version("depmod -V"))
 printversion("E2fsprogs", version("tune2fs"))
 printversion("Jfsutils", version("fsck.jfs -V"))
-printversion("Reiserfsprogs", version("reiserfsck -V"))
-printversion("Reiser4fsprogs", version("fsck.reiser4 -V"))
 printversion("Xfsprogs", version("xfs_db -V"))
 printversion("Pcmciautils", version("pccardctl -V"))
 printversion("Pcmcia-cs", version("cardmgr -V"))