Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'mauro' into docs-mw

Mauro's work to include documentation from our Python modules. His cover
letter follows:

This is an extended version of:
https://lore.kernel.org/linux-doc/cover.1768488832.git.mchehab+huawei@kernel.org/

It basically adds everything we currently have inside tools/lib/python
to the "tools" book inside the documentation.

This version should be independent of the other series yet to be merged
(including the jobserver one).

The vast majority of the changes here are docstring cleanups and
additions. They mainly consist of:

- ensuring that every phrase ends with a period, making it uniform
across all files;
- cleanups to make docstrings more uniform;
- variable descriptions now use the "#:" markup, as it allows autodoc to
add them inside the documentation;
- added some missing docstrings;
- some new blank lines in comments to make the ReST syntax parser happy;
- added a couple of Sphinx markups (mainly, code blocks).
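As a quick sketch of those conventions (a hypothetical class, not code
taken from this series), a cleaned-up module would look like:

```python
class SymbolTable:
    """Map ABI symbols to the files that document them."""

    #: autodoc picks this comment up as the attribute's description.
    SEPARATOR = "; "

    def __init__(self):
        """Initialize the symbol map."""
        self.symbols = {}

    def add(self, what, fname):
        """Record that ``what`` is documented inside ``fname``."""
        self.symbols.setdefault(what, set()).add(fname)
```

Note that every docstring ends with a period and the "#:" comment sits
right above the variable it describes.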

Most of those are minor changes, affecting only comments.

It also has one patch per library type, adding them to the docs.

For kernel-doc, I did the cleanups first, as there is one code block
inside tools/lib/python/kdoc/latex_fonts.py that would cause a Sphinx
crash without such markups.
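The markup needed there is the ReST literal-block one; roughly something
like this (a hypothetical docstring, not the actual latex_fonts.py
contents):

```python
def latex_preamble():
    r"""
    Return a LaTeX preamble fragment for CJK font setup.

    The generated fragment looks like::

        \usepackage{luatexja-fontspec}

    Without the ``::`` marker plus the blank line and indentation,
    Sphinx would try to parse the raw LaTeX as ReST and could fail.
    """
    return r"\usepackage{luatexja-fontspec}"
```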

The series actually starts with 3 fixes:

- avoid "*" markups on indexes with depth > 3 overriding text
- a variable rename to stop abusing the doctree name
- don't rely on the cwd to get the Documentation/ location

Patch 4 adds support for documenting scripts at either:
- tools/
- scripts/

Patch 5 contains a CSS file to better display the autodoc HTML output.

For those who want to play with the documentation, documenting a Python
file is very simple. All it takes is to use:

.. automodule:: lib.python.<dir+name>

Usually, we add a couple of control options to it to adjust the desired
documentation scope (adding/removing members, showing class inheritance,
showing members that currently don't have docstrings, etc.). That's why
we're using:

.. automodule:: lib.python.kdoc.enrich_formatter
:members:
:show-inheritance:
:undoc-members:

(and similar) inside tools/kdoc*.rst.

autodoc allows filtering members in or out, documenting file docstrings, etc.

It also allows documenting just some members or functions with
directives like:

.. autofunction::
.. automethod::
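For instance, something like this would document a single function and a
single method (the module and names below are purely illustrative, not
real files from this series):

```rst
.. autofunction:: lib.python.mymodule.my_helper

.. automethod:: lib.python.mymodule.MyClass.run
```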

Sphinx also has a helper script to generate .rst files with
documentation:

$ sphinx-apidoc -o foobar tools/lib/python/

which can be helpful to discover what should be documented,
although changes are needed to use what it produces.

+607 -264
+13 -10
Documentation/conf.py
··· 13 13 14 14 import sphinx 15 15 16 - # If extensions (or modules to document with autodoc) are in another directory, 17 - # add these directories to sys.path here. If the directory is relative to the 18 - # documentation root, use os.path.abspath to make it absolute, like shown here. 19 - sys.path.insert(0, os.path.abspath("sphinx")) 16 + # Location of Documentation/ directory 17 + kern_doc_dir = os.path.dirname(os.path.abspath(__file__)) 18 + 19 + # Add location of Sphinx extensions 20 + sys.path.insert(0, os.path.join(kern_doc_dir, "sphinx")) 21 + 22 + # Allow sphinx.ext.autodoc to document files at tools and scripts 23 + sys.path.append(os.path.join(kern_doc_dir, "..", "tools")) 24 + sys.path.append(os.path.join(kern_doc_dir, "..", "scripts")) 20 25 21 26 # Minimal supported version 22 27 needs_sphinx = "3.4.3" ··· 36 31 has_include_patterns = True 37 32 # Include patterns that don't contain directory names, in glob format 38 33 include_patterns = ["**.rst"] 39 - 40 - # Location of Documentation/ directory 41 - doctree = os.path.abspath(".") 42 34 43 35 # Exclude of patterns that don't contain directory names, in glob format. 44 36 exclude_patterns = [] ··· 75 73 # setup include_patterns dynamically 76 74 if has_include_patterns: 77 75 for p in dyn_include_patterns: 78 - full = os.path.join(doctree, p) 76 + full = os.path.join(kern_doc_dir, p) 79 77 80 78 rel_path = os.path.relpath(full, start=app.srcdir) 81 79 if rel_path.startswith("../"): ··· 85 83 86 84 # setup exclude_patterns dynamically 87 85 for p in dyn_exclude_patterns: 88 - full = os.path.join(doctree, p) 86 + full = os.path.join(kern_doc_dir, p) 89 87 90 88 rel_path = os.path.relpath(full, start=app.srcdir) 91 89 if rel_path.startswith("../"): ··· 97 95 # of the app.srcdir. 
Add them here 98 96 99 97 # Handle the case where SPHINXDIRS is used 100 - if not os.path.samefile(doctree, app.srcdir): 98 + if not os.path.samefile(kern_doc_dir, app.srcdir): 101 99 # Add a tag to mark that the build is actually a subproject 102 100 tags.add("subproject") 103 101 ··· 156 154 "maintainers_include", 157 155 "parser_yaml", 158 156 "rstFlatTable", 157 + "sphinx.ext.autodoc", 159 158 "sphinx.ext.autosectionlabel", 160 159 "sphinx.ext.ifconfig", 161 160 "translations",
+12
Documentation/sphinx-static/custom.css
··· 30 30 margin-bottom: 20px; 31 31 } 32 32 33 + /* The default is to use -1em, which makes it override text */ 34 + li { text-indent: 0em; } 35 + 33 36 /* 34 37 * Parameters for the display of function prototypes and such included 35 38 * from C source files. ··· 42 39 dl.function dt { margin-left: 10em; text-indent: -10em; } 43 40 dt.sig-object { font-size: larger; } 44 41 div.kernelindent { margin-left: 2em; margin-right: 4em; } 42 + 43 + /* 44 + * Parameters for the display of function prototypes and such included 45 + * from Python source files. 46 + */ 47 + dl.py { margin-top: 2em; background-color: #ecf0f3; } 48 + dl.py.class { margin-left: 2em; text-indent: -2em; padding-left: 2em; } 49 + dl.py.method, dl.py.attribute { margin-left: 2em; text-indent: -2em; } 50 + dl.py li, pre { text-indent: 0em; padding-left: 0 !important; } 45 51 46 52 /* 47 53 * Tweaks for our local TOC
+10
Documentation/tools/feat.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ==================================== 4 + Documentation features parser module 5 + ==================================== 6 + 7 + .. automodule:: lib.python.feat.parse_features 8 + :members: 9 + :show-inheritance: 10 + :undoc-members:
+1
Documentation/tools/index.rst
··· 12 12 13 13 rtla/index 14 14 rv/index 15 + python 15 16 16 17 .. only:: subproject and html 17 18
+10
Documentation/tools/jobserver.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ================= 4 + Job server module 5 + ================= 6 + 7 + .. automodule:: lib.python.jobserver 8 + :members: 9 + :show-inheritance: 10 + :undoc-members:
+13
Documentation/tools/kabi.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ===================================== 4 + Kernel ABI documentation tool modules 5 + ===================================== 6 + 7 + .. toctree:: 8 + :maxdepth: 2 9 + 10 + kabi_parser 11 + kabi_regex 12 + kabi_symbols 13 + kabi_helpers
+11
Documentation/tools/kabi_helpers.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ================= 4 + Ancillary classes 5 + ================= 6 + 7 + .. automodule:: lib.python.abi.helpers 8 + :members: 9 + :member-order: bysource 10 + :show-inheritance: 11 + :undoc-members:
+10
Documentation/tools/kabi_parser.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ===================================== 4 + Kernel ABI documentation parser class 5 + ===================================== 6 + 7 + .. automodule:: lib.python.abi.abi_parser 8 + :members: 9 + :show-inheritance: 10 + :undoc-members:
+10
Documentation/tools/kabi_regex.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ============================= 4 + ABI regex search symbol class 5 + ============================= 6 + 7 + .. automodule:: lib.python.abi.abi_regex 8 + :members: 9 + :show-inheritance: 10 + :undoc-members:
+10
Documentation/tools/kabi_symbols.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ========================================= 4 + System ABI documentation validation class 5 + ========================================= 6 + 7 + .. automodule:: lib.python.abi.system_symbols 8 + :members: 9 + :show-inheritance: 10 + :undoc-members:
+12
Documentation/tools/kdoc.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ================== 4 + Kernel-doc modules 5 + ================== 6 + 7 + .. toctree:: 8 + :maxdepth: 2 9 + 10 + kdoc_parser 11 + kdoc_output 12 + kdoc_ancillary
+46
Documentation/tools/kdoc_ancillary.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ================= 4 + Ancillary classes 5 + ================= 6 + 7 + Argparse formatter class 8 + ======================== 9 + 10 + .. automodule:: lib.python.kdoc.enrich_formatter 11 + :members: 12 + :show-inheritance: 13 + :undoc-members: 14 + 15 + Regular expression class handler 16 + ================================ 17 + 18 + .. automodule:: lib.python.kdoc.kdoc_re 19 + :members: 20 + :show-inheritance: 21 + :undoc-members: 22 + 23 + 24 + Chinese, Japanese and Korean variable fonts handler 25 + =================================================== 26 + 27 + .. automodule:: lib.python.kdoc.latex_fonts 28 + :members: 29 + :show-inheritance: 30 + :undoc-members: 31 + 32 + Kernel C file include logic 33 + =========================== 34 + 35 + .. automodule:: lib.python.kdoc.parse_data_structs 36 + :members: 37 + :show-inheritance: 38 + :undoc-members: 39 + 40 + Python version ancillary methods 41 + ================================ 42 + 43 + .. automodule:: lib.python.kdoc.python_version 44 + :members: 45 + :show-inheritance: 46 + :undoc-members:
+14
Documentation/tools/kdoc_output.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ======================= 4 + Kernel-doc output stage 5 + ======================= 6 + 7 + Output handler for man pages and ReST 8 + ===================================== 9 + 10 + .. automodule:: lib.python.kdoc.kdoc_output 11 + :members: 12 + :show-inheritance: 13 + :undoc-members: 14 +
+29
Documentation/tools/kdoc_parser.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ======================= 4 + Kernel-doc parser stage 5 + ======================= 6 + 7 + File handler classes 8 + ==================== 9 + 10 + .. automodule:: lib.python.kdoc.kdoc_files 11 + :members: 12 + :show-inheritance: 13 + :undoc-members: 14 + 15 + Parsed item data class 16 + ====================== 17 + 18 + .. automodule:: lib.python.kdoc.kdoc_item 19 + :members: 20 + :show-inheritance: 21 + :undoc-members: 22 + 23 + Parser classes and methods 24 + ========================== 25 + 26 + .. automodule:: lib.python.kdoc.kdoc_parser 27 + :members: 28 + :show-inheritance: 29 + :undoc-members:
+13
Documentation/tools/python.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ================ 4 + Python libraries 5 + ================ 6 + 7 + .. toctree:: 8 + :maxdepth: 4 9 + 10 + jobserver 11 + feat 12 + kdoc 13 + kabi
+18 -15
tools/lib/python/abi/abi_parser.py
··· 21 21 22 22 23 23 class AbiParser: 24 - """Main class to parse ABI files""" 24 + """Main class to parse ABI files.""" 25 25 26 + #: Valid tags at Documentation/ABI. 26 27 TAGS = r"(what|where|date|kernelversion|contact|description|users)" 28 + 29 + #: ABI elements that will auto-generate cross-references. 27 30 XREF = r"(?:^|\s|\()(\/(?:sys|config|proc|dev|kvd)\/[^,.:;\)\s]+)(?:[,.:;\)\s]|\Z)" 28 31 29 32 def __init__(self, directory, logger=None, 30 33 enable_lineno=False, show_warnings=True, debug=0): 31 - """Stores arguments for the class and initialize class vars""" 34 + """Stores arguments for the class and initialize class vars.""" 32 35 33 36 self.directory = directory 34 37 self.enable_lineno = enable_lineno ··· 68 65 self.re_xref_node = re.compile(self.XREF) 69 66 70 67 def warn(self, fdata, msg, extra=None): 71 - """Displays a parse error if warning is enabled""" 68 + """Displays a parse error if warning is enabled.""" 72 69 73 70 if not self.show_warnings: 74 71 return ··· 80 77 self.log.warning(msg) 81 78 82 79 def add_symbol(self, what, fname, ln=None, xref=None): 83 - """Create a reference table describing where each 'what' is located""" 80 + """Create a reference table describing where each 'what' is located.""" 84 81 85 82 if what not in self.what_symbols: 86 83 self.what_symbols[what] = {"file": {}} ··· 95 92 self.what_symbols[what]["xref"] = xref 96 93 97 94 def _parse_line(self, fdata, line): 98 - """Parse a single line of an ABI file""" 95 + """Parse a single line of an ABI file.""" 99 96 100 97 new_what = False 101 98 new_tag = False ··· 267 264 self.warn(fdata, "Unexpected content", line) 268 265 269 266 def parse_readme(self, nametag, fname): 270 - """Parse ABI README file""" 267 + """Parse ABI README file.""" 271 268 272 269 nametag["what"] = ["Introduction"] 273 270 nametag["path"] = "README" ··· 285 282 nametag["description"] += line 286 283 287 284 def parse_file(self, fname, path, basename): 288 - """Parse a single file""" 285 + 
"""Parse a single file.""" 289 286 290 287 ref = f"abi_file_{path}_{basename}" 291 288 ref = self.re_unprintable.sub("_", ref).strip("_") ··· 351 348 self.add_symbol(what=w, fname=fname, xref=fdata.key) 352 349 353 350 def _parse_abi(self, root=None): 354 - """Internal function to parse documentation ABI recursively""" 351 + """Internal function to parse documentation ABI recursively.""" 355 352 356 353 if not root: 357 354 root = self.directory ··· 380 377 self.parse_file(name, path, basename) 381 378 382 379 def parse_abi(self, root=None): 383 - """Parse documentation ABI""" 380 + """Parse documentation ABI.""" 384 381 385 382 self._parse_abi(root) 386 383 ··· 388 385 self.log.debug(pformat(self.data)) 389 386 390 387 def desc_txt(self, desc): 391 - """Print description as found inside ABI files""" 388 + """Print description as found inside ABI files.""" 392 389 393 390 desc = desc.strip(" \t\n") 394 391 ··· 396 393 397 394 def xref(self, fname): 398 395 """ 399 - Converts a Documentation/ABI + basename into a ReST cross-reference 396 + Converts a Documentation/ABI + basename into a ReST cross-reference. 
400 397 """ 401 398 402 399 xref = self.file_refs.get(fname) ··· 406 403 return xref 407 404 408 405 def desc_rst(self, desc): 409 - """Enrich ReST output by creating cross-references""" 406 + """Enrich ReST output by creating cross-references.""" 410 407 411 408 # Remove title markups from the description 412 409 # Having titles inside ABI files will only work if extra ··· 462 459 463 460 def doc(self, output_in_txt=False, show_symbols=True, show_file=True, 464 461 filter_path=None): 465 - """Print ABI at stdout""" 462 + """Print ABI at stdout.""" 466 463 467 464 part = None 468 465 for key, v in sorted(self.data.items(), ··· 552 549 yield (msg, file_ref[0][0], ln) 553 550 554 551 def check_issues(self): 555 - """Warn about duplicated ABI entries""" 552 + """Warn about duplicated ABI entries.""" 556 553 557 554 for what, v in self.what_symbols.items(): 558 555 files = v.get("file") ··· 578 575 self.log.warning("%s is defined %d times: %s", what, len(f), "; ".join(f)) 579 576 580 577 def search_symbols(self, expr): 581 - """ Searches for ABI symbols """ 578 + """ Searches for ABI symbols.""" 582 579 583 580 regex = re.compile(expr, re.I) 584 581
+20 -6
tools/lib/python/abi/abi_regex.py
··· 16 16 from abi.helpers import AbiDebug 17 17 18 18 class AbiRegex(AbiParser): 19 - """Extends AbiParser to search ABI nodes with regular expressions""" 19 + """ 20 + Extends AbiParser to search ABI nodes with regular expressions. 20 21 21 - # Escape only ASCII visible characters 22 + There are some optimizations here to allow a quick symbol search: 23 + instead of trying to place all symbols altogether and doing a linear 24 + search which is very time consuming, create a tree with one depth, 25 + grouping similar symbols altogether. 26 + 27 + Yet, sometimes a full search will be needed, so we have a special branch 28 + on such group tree where other symbols are placed. 29 + """ 30 + 31 + #: Escape only ASCII visible characters. 22 32 escape_symbols = r"([\x21-\x29\x2b-\x2d\x3a-\x40\x5c\x60\x7b-\x7e])" 33 + 34 + #: Special group for other nodes. 23 35 leave_others = "others" 24 36 25 37 # Tuples with regular expressions to be compiled and replacement data ··· 100 88 # Recover plus characters 101 89 (re.compile(r"\xf7"), "+"), 102 90 ] 91 + 92 + #: Regex to check if the symbol name has a number on it. 103 93 re_has_num = re.compile(r"\\d") 104 94 105 - # Symbol name after escape_chars that are considered a devnode basename 95 + #: Symbol name after escape_chars that are considered a devnode basename. 106 96 re_symbol_name = re.compile(r"(\w|\\[\.\-\:])+$") 107 97 108 - # List of popular group names to be skipped to minimize regex group size 109 - # Use AbiDebug.SUBGROUP_SIZE to detect those 98 + #: List of popular group names to be skipped to minimize regex group size 99 + #: Use AbiDebug.SUBGROUP_SIZE to detect those.
110 100 skip_names = set(["devices", "hwmon"]) 111 101 112 102 def regex_append(self, what, new): ··· 162 148 def get_regexes(self, what): 163 149 """ 164 150 Given an ABI devnode, return a list of all regular expressions that 165 - may match it, based on the sub-groups created by regex_append() 151 + may match it, based on the sub-groups created by regex_append(). 166 152 """ 167 153 168 154 re_list = []
+22 -20
tools/lib/python/abi/helpers.py
··· 13 13 class AbiDebug: 14 14 """Debug levels""" 15 15 16 - WHAT_PARSING = 1 17 - WHAT_OPEN = 2 18 - DUMP_ABI_STRUCTS = 4 19 - UNDEFINED = 8 20 - REGEX = 16 21 - SUBGROUP_MAP = 32 22 - SUBGROUP_DICT = 64 23 - SUBGROUP_SIZE = 128 24 - GRAPH = 256 16 + WHAT_PARSING = 1 #: Enable debug parsing logic. 17 + WHAT_OPEN = 2 #: Enable debug messages on file open. 18 + DUMP_ABI_STRUCTS = 4 #: Enable debug for ABI parse data. 19 + UNDEFINED = 8 #: Enable extra undefined symbol data. 20 + REGEX = 16 #: Enable debug for what to regex conversion. 21 + SUBGROUP_MAP = 32 #: Enable debug for symbol regex subgroups 22 + SUBGROUP_DICT = 64 #: Enable debug for sysfs graph tree variable. 23 + SUBGROUP_SIZE = 128 #: Enable debug of search groups. 24 + GRAPH = 256 #: Display ref tree graph for undefined symbols. 25 25 26 - 26 + #: Helper messages for each debug variable 27 27 DEBUG_HELP = """ 28 - 1 - enable debug parsing logic 29 - 2 - enable debug messages on file open 30 - 4 - enable debug for ABI parse data 31 - 8 - enable extra debug information to identify troubles 32 - with ABI symbols found at the local machine that 33 - weren't found on ABI documentation (used only for 34 - undefined subcommand) 35 - 16 - enable debug for what to regex conversion 36 - 32 - enable debug for symbol regex subgroups 37 - 64 - enable debug for sysfs graph tree variable 28 + 1 - enable debug parsing logic 29 + 2 - enable debug messages on file open 30 + 4 - enable debug for ABI parse data 31 + 8 - enable extra debug information to identify troubles 32 + with ABI symbols found at the local machine that 33 + weren't found on ABI documentation (used only for 34 + undefined subcommand) 35 + 16 - enable debug for what to regex conversion 36 + 32 - enable debug for symbol regex subgroups 37 + 64 - enable debug for sysfs graph tree variable 38 + 128 - enable debug of search groups 39 + 256 - enable displaying reference tree graphs for undefined symbols. 38 40 """
+7 -7
tools/lib/python/abi/system_symbols.py
··· 18 18 from abi.helpers import AbiDebug 19 19 20 20 class SystemSymbols: 21 - """Stores arguments for the class and initialize class vars""" 21 + """Stores arguments for the class and initialize class vars.""" 22 22 23 23 def graph_add_file(self, path, link=None): 24 24 """ 25 - add a file path to the sysfs graph stored at self.root 25 + add a file path to the sysfs graph stored at self.root. 26 26 """ 27 27 28 28 if path in self.files: ··· 43 43 self.files.add(path) 44 44 45 45 def print_graph(self, root_prefix="", root=None, level=0): 46 - """Prints a reference tree graph using UTF-8 characters""" 46 + """Prints a reference tree graph using UTF-8 characters.""" 47 47 48 48 if not root: 49 49 root = self.root ··· 173 173 self._walk(sysfs) 174 174 175 175 def check_file(self, refs, found): 176 - """Check missing ABI symbols for a given sysfs file""" 176 + """Check missing ABI symbols for a given sysfs file.""" 177 177 178 178 res_list = [] 179 179 ··· 214 214 return res_list 215 215 216 216 def _ref_interactor(self, root): 217 - """Recursive function to interact over the sysfs tree""" 217 + """Recursive function to interact over the sysfs tree.""" 218 218 219 219 for k, v in root.items(): 220 220 if isinstance(v, dict): ··· 232 232 233 233 234 234 def get_fileref(self, all_refs, chunk_size): 235 - """Interactor to group refs into chunks""" 235 + """Interactor to group refs into chunks.""" 236 236 237 237 n = 0 238 238 refs = [] ··· 250 250 251 251 def check_undefined_symbols(self, max_workers=None, chunk_size=50, 252 252 found=None, dry_run=None): 253 - """Seach ABI for sysfs symbols missing documentation""" 253 + """Search ABI for sysfs symbols missing documentation.""" 254 254 255 255 self.abi.parse_abi() 256 256
+20 -7
tools/lib/python/feat/parse_features.py
··· 21 21 from it. 22 22 """ 23 23 24 + #: feature header string. 24 25 h_name = "Feature" 26 + 27 + #: Kernel config header string. 25 28 h_kconfig = "Kconfig" 29 + 30 + #: description header string. 26 31 h_description = "Description" 32 + 33 + #: subsystem header string. 27 34 h_subsys = "Subsystem" 35 + 36 + #: status header string. 28 37 h_status = "Status" 38 + 39 + #: architecture header string. 29 40 h_arch = "Architecture" 30 41 31 - # Sort order for status. Others will be mapped at the end. 42 + #: Sort order for status. Others will be mapped at the end. 32 43 status_map = { 33 44 "ok": 0, 34 45 "TODO": 1, ··· 51 40 52 41 def __init__(self, prefix, debug=0, enable_fname=False): 53 42 """ 54 - Sets internal variables 43 + Sets internal variables. 55 44 """ 56 45 57 46 self.prefix = prefix ··· 74 63 self.msg = "" 75 64 76 65 def emit(self, msg="", end="\n"): 66 + """Helper function to append a new message for feature output.""" 67 + 77 68 self.msg += msg + end 78 69 79 70 def parse_error(self, fname, ln, msg, data=None): 80 71 """ 81 - Displays an error message, printing file name and line 72 + Displays an error message, printing file name and line. 82 73 """ 83 74 84 75 if ln: ··· 95 82 print("", file=sys.stderr) 96 83 97 84 def parse_feat_file(self, fname): 98 - """Parses a single arch-support.txt feature file""" 85 + """Parses a single arch-support.txt feature file.""" 99 86 100 87 if os.path.isdir(fname): 101 88 return ··· 217 204 self.max_size_arch_with_header = self.max_size_arch + len(self.h_arch) 218 205 219 206 def parse(self): 220 - """Parses all arch-support.txt feature files inside self.prefix""" 207 + """Parses all arch-support.txt feature files inside self.prefix.""" 221 208 222 209 path = os.path.expanduser(self.prefix) 223 210 ··· 294 281 295 282 def output_feature(self, feat): 296 283 """ 297 - Output a feature on all architectures 284 + Output a feature on all architectures. 
298 285 """ 299 286 300 287 title = f"Feature {feat}" ··· 344 331 345 332 def matrix_lines(self, desc_size, max_size_status, header): 346 333 """ 347 - Helper function to split element tables at the output matrix 334 + Helper function to split element tables at the output matrix. 348 335 """ 349 336 350 337 if header:
+12 -8
tools/lib/python/jobserver.py
··· 11 11 A "normal" jobserver task, like the one initiated by a make subrocess would do: 12 12 13 13 - open read/write file descriptors to communicate with the job server; 14 - - ask for one slot by calling: 14 + - ask for one slot by calling:: 15 + 15 16 claim = os.read(reader, 1) 16 - - when the job finshes, call: 17 + 18 + - when the job finishes, call:: 19 + 17 20 os.write(writer, b"+") # os.write(writer, claim) 18 21 19 22 Here, the goal is different: This script aims to get the remaining number 20 23 of slots available, using all of them to run a command which handle tasks in 21 24 parallel. To to that, it has a loop that ends only after there are no 22 25 slots left. It then increments the number by one, in order to allow a 23 26 call equivalent to ``make -j$((claim+1))``, e.g. having a parent make creating 24 27 $claim child to do the actual work. 25 28 26 29 The end goal here is to keep the total number of build tasks under the 27 30 limit established by the initial ``make -j$n_proc`` call. 28 31 29 32 See: 30 33 https://www.gnu.org/software/make/manual/html_node/POSIX-Jobserver.html#POSIX-Jobserver ··· 46 43 Claim all slots from make using POSIX Jobserver. 47 44 48 45 The main methods here are: 46 + 49 47 - open(): reserves all slots; 50 48 - close(): method returns all used slots back to make; 51 - - run(): executes a command setting PARALLELISM=<available slots jobs + 1> 49 + - run(): executes a command setting PARALLELISM=<available slots jobs + 1>.
52 50 """ 53 51 54 52 def __init__(self): 55 - """Initialize internal vars""" 53 + """Initialize internal vars.""" 56 54 self.claim = 0 57 55 self.jobs = b"" 58 56 self.reader = None ··· 61 57 self.is_open = False 62 58 63 59 def open(self): 64 - """Reserve all available slots to be claimed later on""" 60 + """Reserve all available slots to be claimed later on.""" 65 61 66 62 if self.is_open: 67 63 return ··· 159 155 self.claim = len(self.jobs) + 1 160 156 161 157 def close(self): 162 - """Return all reserved slots to Jobserver""" 158 + """Return all reserved slots to Jobserver.""" 163 159 164 160 if not self.is_open: 165 161 return
+15 -5
tools/lib/python/kdoc/enrich_formatter.py
··· 26 26 and how they're used at the __doc__ description. 27 27 """ 28 28 def __init__(self, *args, **kwargs): 29 - """Initialize class and check if is TTY""" 29 + """ 30 + Initialize class and check if is TTY. 31 + """ 30 32 super().__init__(*args, **kwargs) 31 33 self._tty = sys.stdout.isatty() 32 34 33 35 def enrich_text(self, text): 34 - """Handle ReST markups (currently, only ``foo``)""" 36 + r""" 37 + Handle ReST markups (currently, only \`\`text\`\` markups). 38 + """ 35 39 if self._tty and text: 36 40 # Replace ``text`` with ANSI SGR (bold) 37 41 return re.sub(r'\`\`(.+?)\`\`', ··· 43 39 return text 44 40 45 41 def _fill_text(self, text, width, indent): 46 - """Enrich descriptions with markups on it""" 42 + """ 43 + Enrich descriptions with markups on it. 44 + """ 47 45 enriched = self.enrich_text(text) 48 46 return "\n".join(indent + line for line in enriched.splitlines()) 49 47 50 48 def _format_usage(self, usage, actions, groups, prefix): 51 - """Enrich positional arguments at usage: line""" 49 + """ 50 + Enrich positional arguments at usage: line. 51 + """ 52 52 53 53 prog = self._prog 54 54 parts = [] ··· 71 63 return usage_text 72 64 73 65 def _format_action_invocation(self, action): 74 - """Enrich argument names""" 66 + """ 67 + Enrich argument names. 68 + """ 75 69 if not action.option_strings: 76 70 return self.enrich_text(f"``{action.dest.upper()}``") 77 71
+12 -11
tools/lib/python/kdoc/kdoc_files.py
··· 5 5 # pylint: disable=R0903,R0913,R0914,R0917 6 6 7 7 """ 8 - Parse lernel-doc tags on multiple kernel source files. 8 + Classes for navigating through the files that kernel-doc needs to handle 9 + to generate documentation. 9 10 """ 10 11 11 12 import argparse ··· 44 43 self.srctree = srctree 45 44 46 45 def _parse_dir(self, dirname): 47 - """Internal function to parse files recursively""" 46 + """Internal function to parse files recursively.""" 48 47 49 48 with os.scandir(dirname) as obj: 50 49 for entry in obj: ··· 66 65 def parse_files(self, file_list, file_not_found_cb): 67 66 """ 68 67 Define an iterator to parse all source files from file_list, 69 - handling directories if any 68 + handling directories if any. 70 69 """ 71 70 72 71 if not file_list: ··· 92 91 93 92 There are two type of parsers defined here: 94 93 - self.parse_file(): parses both kernel-doc markups and 95 - EXPORT_SYMBOL* macros; 96 - - self.process_export_file(): parses only EXPORT_SYMBOL* macros. 94 + ``EXPORT_SYMBOL*`` macros; 95 + - self.process_export_file(): parses only ``EXPORT_SYMBOL*`` macros. 97 96 """ 98 97 99 98 def warning(self, msg): 100 - """Ancillary routine to output a warning and increment error count""" 99 + """Ancillary routine to output a warning and increment error count.""" 101 100 102 101 self.config.log.warning(msg) 103 102 self.errors += 1 104 103 105 104 def error(self, msg): 106 - """Ancillary routine to output an error and increment error count""" 105 + """Ancillary routine to output an error and increment error count.""" 107 106 108 107 self.config.log.error(msg) 109 108 self.errors += 1 ··· 129 128 130 129 def process_export_file(self, fname): 131 130 """ 132 - Parses EXPORT_SYMBOL* macros from a single Kernel source file. 131 + Parses ``EXPORT_SYMBOL*`` macros from a single Kernel source file. 
133 132 """ 134 133 135 134 # Prevent parsing the same file twice if results are cached ··· 158 157 wcontents_before_sections=False, 159 158 logger=None): 160 159 """ 161 - Initialize startup variables and parse all files 160 + Initialize startup variables and parse all files. 162 161 """ 163 162 164 163 if not verbose: ··· 214 213 215 214 def parse(self, file_list, export_file=None): 216 215 """ 217 - Parse all files 216 + Parse all files. 218 217 """ 219 218 220 219 glob = GlobSourceFiles(srctree=self.config.src_tree) ··· 243 242 filenames=None, export_file=None): 244 243 """ 245 244 Interacts over the kernel-doc results and output messages, 246 - returning kernel-doc markups on each interaction 245 + returning kernel-doc markups on each interaction. 247 246 """ 248 247 249 248 self.out_style.set_config(self.config)
+18
tools/lib/python/kdoc/kdoc_item.py
··· 4 4 # then pass into the output modules. 5 5 # 6 6 7 + """ 8 + Data class to store a kernel-doc Item. 9 + """ 10 + 7 11 class KdocItem: 12 + """ 13 + A class that will, eventually, encapsulate all of the parsed data that we 14 + then pass into the output modules. 15 + """ 16 + 8 17 def __init__(self, name, fname, type, start_line, **other_stuff): 9 18 self.name = name 10 19 self.fname = fname ··· 33 24 self.other_stuff = other_stuff 34 25 35 26 def get(self, key, default = None): 27 + """ 28 + Get a value from optional keys. 29 + """ 36 30 return self.other_stuff.get(key, default) 37 31 38 32 def __getitem__(self, key): ··· 45 33 # Tracking of section and parameter information. 46 34 # 47 35 def set_sections(self, sections, start_lines): 36 + """ 37 + Set sections and start lines. 38 + """ 48 39 self.sections = sections 49 40 self.section_start_lines = start_lines 50 41 51 42 def set_params(self, names, descs, types, starts): 43 + """ 44 + Set parameter list: names, descriptions, types and start lines. 45 + """ 52 46 self.parameterlist = names 53 47 self.parameterdescs = descs 54 48 self.parametertypes = types
+35 -25
tools/lib/python/kdoc/kdoc_output.py
··· 5 5 # pylint: disable=C0301,R0902,R0911,R0912,R0913,R0914,R0915,R0917 6 6 7 7 """ 8 - Implement output filters to print kernel-doc documentation. 8 + Classes to implement output filters to print kernel-doc documentation. 9 9 10 - The implementation uses a virtual base class (OutputFormat) which 10 + The implementation uses a virtual base class ``OutputFormat``. It 11 11 contains dispatches to virtual methods, and some code to filter 12 12 out output messages. 13 13 14 14 The actual implementation is done on one separate class per each type 15 - of output. Currently, there are output classes for ReST and man/troff. 15 + of output, e.g. ``RestFormat`` and ``ManFormat`` classes. 16 + 17 + Currently, there are output classes for ReST and man/troff. 16 18 """ 17 19 18 20 import os ··· 56 54 """ 57 55 58 56 # output mode. 59 - OUTPUT_ALL = 0 # output all symbols and doc sections 60 - OUTPUT_INCLUDE = 1 # output only specified symbols 61 - OUTPUT_EXPORTED = 2 # output exported symbols 62 - OUTPUT_INTERNAL = 3 # output non-exported symbols 57 + OUTPUT_ALL = 0 #: Output all symbols and doc sections. 58 + OUTPUT_INCLUDE = 1 #: Output only specified symbols. 59 + OUTPUT_EXPORTED = 2 #: Output exported symbols. 60 + OUTPUT_INTERNAL = 3 #: Output non-exported symbols. 63 61 64 - # Virtual member to be overridden at the inherited classes 62 + #: Highlights to be used in ReST format. 65 63 highlights = [] 66 64 65 + #: Blank line character. 
66 + blankline = "" 67 + 67 68 def __init__(self): 68 - """Declare internal vars and set mode to OUTPUT_ALL""" 69 + """Declare internal vars and set mode to ``OUTPUT_ALL``.""" 69 70 70 71 self.out_mode = self.OUTPUT_ALL 71 72 self.enable_lineno = None ··· 133 128 self.config.warning(log_msg) 134 129 135 130 def check_doc(self, name, args): 136 - """Check if DOC should be output""" 131 + """Check if DOC should be output.""" 137 132 138 133 if self.no_doc_sections: 139 134 return False ··· 182 177 183 178 def msg(self, fname, name, args): 184 179 """ 185 - Handles a single entry from kernel-doc parser 180 + Handles a single entry from kernel-doc parser. 186 181 """ 187 182 188 183 self.data = "" ··· 225 220 # Virtual methods to be overridden by inherited classes 226 221 # At the base class, those do nothing. 227 222 def set_symbols(self, symbols): 228 - """Get a list of all symbols from kernel_doc""" 223 + """Get a list of all symbols from kernel_doc.""" 229 224 230 225 def out_doc(self, fname, name, args): 231 - """Outputs a DOC block""" 226 + """Outputs a DOC block.""" 232 227 233 228 def out_function(self, fname, name, args): 234 - """Outputs a function""" 229 + """Outputs a function.""" 235 230 236 231 def out_enum(self, fname, name, args): 237 - """Outputs an enum""" 232 + """Outputs an enum.""" 238 233 239 234 def out_var(self, fname, name, args): 240 - """Outputs a variable""" 235 + """Outputs a variable.""" 241 236 242 237 def out_typedef(self, fname, name, args): 243 - """Outputs a typedef""" 238 + """Outputs a typedef.""" 244 239 245 240 def out_struct(self, fname, name, args): 246 - """Outputs a struct""" 241 + """Outputs a struct.""" 247 242 248 243 249 244 class RestFormat(OutputFormat): 250 - """Consts and functions used by ReST output""" 245 + """Consts and functions used by ReST output.""" 251 246 247 + #: Highlights to be used in ReST format 252 248 highlights = [ 253 249 (type_constant, r"``\1``"), 254 250 (type_constant2, r"``\1``"), ··· 269 263 
(type_fallback, r":c:type:`\1`"), 270 264 (type_param_ref, r"**\1\2**") 271 265 ] 266 + 272 267 blankline = "\n" 273 268 269 + #: Sphinx literal block regex. 274 270 sphinx_literal = KernRe(r'^[^.].*::$', cache=False) 271 + 272 + #: Sphinx code block regex. 275 273 sphinx_cblock = KernRe(r'^\.\.\ +code-block::', cache=False) 276 274 277 275 def __init__(self): ··· 290 280 self.lineprefix = "" 291 281 292 282 def print_lineno(self, ln): 293 - """Outputs a line number""" 283 + """Outputs a line number.""" 294 284 295 285 if self.enable_lineno and ln is not None: 296 286 ln += 1 ··· 299 289 def output_highlight(self, args): 300 290 """ 301 291 Outputs a C symbol that may require being converted to ReST using 302 - the self.highlights variable 292 + the self.highlights variable. 303 293 """ 304 294 305 295 input_text = args ··· 580 570 581 571 582 572 class ManFormat(OutputFormat): 583 - """Consts and functions used by man pages output""" 573 + """Consts and functions used by man pages output.""" 584 574 585 575 highlights = ( 586 576 (type_constant, r"\1"), ··· 597 587 ) 598 588 blankline = "" 599 589 590 + #: Allowed timestamp formats. 600 591 date_formats = [ 601 592 "%a %b %d %H:%M:%S %Z %Y", 602 593 "%a %b %d %H:%M:%S %Y", ··· 664 653 self.symbols = symbols 665 654 666 655 def out_tail(self, fname, name, args): 667 - """Adds a tail for all man pages""" 656 + """Adds a tail for all man pages.""" 668 657 669 658 # SEE ALSO section 670 659 self.data += f'.SH "SEE ALSO"' + "\n.PP\n" ··· 700 689 def output_highlight(self, block): 701 690 """ 702 691 Outputs a C symbol that may require being highlighted with 703 - self.highlights variable using troff syntax 692 + self.highlights variable using troff syntax. 704 693 """ 705 694 706 695 contents = self.highlight_block(block) ··· 731 720 self.output_highlight(text) 732 721 733 722 def out_function(self, fname, name, args): 734 - """output function in man""" 735 723 736 724 out_name = self.arg_name(args, name) 737 725
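The `highlights` tables above are ordered lists of (regex, substitution) pairs applied to each text block before output. A simplified sketch of that mechanism follows; the two patterns are stand-ins, not the actual `type_constant`/`type_fallback` regexes:

```python
import re

# Each output class carries its own table; ReST wraps constants in ``...``
# and turns &name references into cross-references, while man output
# would carry plainer substitutions instead.
rest_highlights = [
    (re.compile(r"%([A-Z_]+)"), r"``\1``"),    # %CONST  -> ``CONST``
    (re.compile(r"&(\w+)"), r":c:type:`\1`"),  # &name   -> cross-reference
]

def highlight_block(block, highlights):
    """Apply every (regex, substitution) pair, in order, to a text block."""
    for pattern, subst in highlights:
        block = pattern.sub(subst, block)
    return block
```

For example, `highlight_block("returns %NULL on error", rest_highlights)` yields ``returns ``NULL`` on error``; applying the pairs in a fixed order is what lets later patterns safely assume earlier ones have already run.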
+92 -77
tools/lib/python/kdoc/kdoc_parser.py
··· 5 5 # pylint: disable=C0301,C0302,R0904,R0912,R0913,R0914,R0915,R0917,R1702 6 6 7 7 """ 8 - kdoc_parser 9 - =========== 10 - 11 - Read a C language source or header FILE and extract embedded 12 - documentation comments 8 + Classes and functions related to reading a C language source or header FILE 9 + and extracting embedded documentation comments from it. 13 10 """ 14 11 15 12 import sys ··· 192 195 ] 193 196 194 197 # 195 - # Apply a set of transforms to a block of text. 198 + # Ancillary functions 196 199 # 200 + 197 201 def apply_transforms(xforms, text): 202 + """ 203 + Apply a set of transforms to a block of text. 204 + """ 198 205 for search, subst in xforms: 199 206 text = search.sub(subst, text) 200 207 return text 201 208 202 - # 203 - # A little helper to get rid of excess white space 204 - # 205 209 multi_space = KernRe(r'\s\s+') 206 210 def trim_whitespace(s): 211 + """ 212 + A little helper to get rid of excess white space. 213 + """ 207 214 return multi_space.sub(' ', s.strip()) 208 215 209 - # 210 - # Remove struct/enum members that have been marked "private". 211 - # 212 216 def trim_private_members(text): 213 - # 217 + """ 218 + Remove ``struct``/``enum`` members that have been marked "private". 219 + """ 214 220 # First look for a "public:" block that ends a private region, then 215 221 # handle the "private until the end" case. 216 222 # ··· 226 226 227 227 class state: 228 228 """ 229 - State machine enums 229 + States used by the parser's state machine.
230 230 """ 231 231 232 232 # Parser states 233 - NORMAL = 0 # normal code 234 - NAME = 1 # looking for function name 235 - DECLARATION = 2 # We have seen a declaration which might not be done 236 - BODY = 3 # the body of the comment 237 - SPECIAL_SECTION = 4 # doc section ending with a blank line 238 - PROTO = 5 # scanning prototype 239 - DOCBLOCK = 6 # documentation block 240 - INLINE_NAME = 7 # gathering doc outside main block 241 - INLINE_TEXT = 8 # reading the body of inline docs 233 + NORMAL = 0 #: Normal code. 234 + NAME = 1 #: Looking for function name. 235 + DECLARATION = 2 #: We have seen a declaration which might not be done. 236 + BODY = 3 #: The body of the comment. 237 + SPECIAL_SECTION = 4 #: Doc section ending with a blank line. 238 + PROTO = 5 #: Scanning prototype. 239 + DOCBLOCK = 6 #: Documentation block. 240 + INLINE_NAME = 7 #: Gathering doc outside main block. 241 + INLINE_TEXT = 8 #: Reading the body of inline docs. 242 242 243 + #: Names for each parser state. 243 244 name = [ 244 245 "NORMAL", 245 246 "NAME", ··· 254 253 ] 255 254 256 255 257 - SECTION_DEFAULT = "Description" # default section 256 + SECTION_DEFAULT = "Description" #: Default section. 258 257 259 258 class KernelEntry: 259 + """ 260 + Encapsulates a Kernel documentation entry. 261 + """ 260 262 261 263 def __init__(self, config, fname, ln): 262 264 self.config = config ··· 292 288 # Management of section contents 293 289 # 294 290 def add_text(self, text): 291 + """Add a new text to the entry contents list.""" 295 292 self._contents.append(text) 296 293 297 294 def contents(self): 295 + """Returns a string with all content texts that were added.""" 298 296 return '\n'.join(self._contents) + '\n' 299 297 300 298 # TODO: rename to emit_message after removal of kernel-doc.pl ··· 315 309 self.warnings.append(log_msg) 316 310 return 317 311 318 - # 319 - # Begin a new section. 
320 - # 321 312 def begin_section(self, line_no, title = SECTION_DEFAULT, dump = False): 313 + """ 314 + Begin a new section. 315 + """ 322 316 if dump: 323 317 self.dump_section(start_new = True) 324 318 self.section = title ··· 372 366 documentation comments. 373 367 """ 374 368 375 - # Section names 376 - 369 + #: Name of context section. 377 370 section_context = "Context" 371 + 372 + #: Name of return section. 378 373 section_return = "Return" 379 374 375 + #: String to write when a parameter is not described. 380 376 undescribed = "-- undescribed --" 381 377 382 378 def __init__(self, config, fname): ··· 424 416 425 417 def dump_section(self, start_new=True): 426 418 """ 427 - Dumps section contents to arrays/hashes intended for that purpose. 419 + Dump section contents to arrays/hashes intended for that purpose. 428 420 """ 429 421 430 422 if self.entry: ··· 433 425 # TODO: rename it to store_declaration after removal of kernel-doc.pl 434 426 def output_declaration(self, dtype, name, **args): 435 427 """ 436 - Stores the entry into an entry array. 428 + Store the entry into an entry array. 437 429 438 - The actual output and output filters will be handled elsewhere 430 + The actual output and output filters will be handled elsewhere. 439 431 """ 440 432 441 433 item = KdocItem(name, self.fname, dtype, ··· 690 682 self.emit_msg(ln, 691 683 f"No description found for return value of '{declaration_name}'") 692 684 693 - # 694 - # Split apart a structure prototype; returns (struct|union, name, members) or None 695 - # 696 685 def split_struct_proto(self, proto): 686 + """ 687 + Split apart a structure prototype; returns (struct|union, name, 688 + members) or ``None``. 
689 + """ 690 + 697 691 type_pattern = r'(struct|union)' 698 692 qualifiers = [ 699 693 "__attribute__", ··· 714 704 if r.search(proto): 715 705 return (r.group(1), r.group(3), r.group(2)) 716 706 return None 717 - # 718 - # Rewrite the members of a structure or union for easier formatting later on. 719 - # Among other things, this function will turn a member like: 720 - # 721 - # struct { inner_members; } foo; 722 - # 723 - # into: 724 - # 725 - # struct foo; inner_members; 726 - # 707 + 727 708 def rewrite_struct_members(self, members): 709 + """ 710 + Process ``struct``/``union`` members from the most deeply nested 711 + outward. 712 + 713 + Rewrite the members of a ``struct`` or ``union`` for easier formatting 714 + later on. Among other things, this function will turn a member like:: 715 + 716 + struct { inner_members; } foo; 717 + 718 + into:: 719 + 720 + struct foo; inner_members; 721 + """ 722 + 728 723 # 729 - # Process struct/union members from the most deeply nested outward. The 730 - # trick is in the ^{ below - it prevents a match of an outer struct/union 731 - # until the inner one has been munged (removing the "{" in the process). 724 + # The trick is in the ``^{`` below - it prevents a match of an outer 725 + # ``struct``/``union`` until the inner one has been munged 726 + # (removing the ``{`` in the process). 732 727 # 733 728 struct_members = KernRe(r'(struct|union)' # 0: declaration type 734 729 r'([^\{\};]+)' # 1: possible name ··· 811 796 tuples = struct_members.findall(members) 812 797 return members 813 798 814 - # 815 - # Format the struct declaration into a standard form for inclusion in the 816 - # resulting docs. 817 - # 818 799 def format_struct_decl(self, declaration): 800 + """ 801 + Format the ``struct`` declaration into a standard form for inclusion 802 + in the resulting docs. 803 + """ 804 + 819 805 # 820 806 # Insert newlines, get rid of extra spaces. 
821 807 # ··· 850 834 851 835 def dump_struct(self, ln, proto): 852 836 """ 853 - Store an entry for a struct or union 837 + Store an entry for a ``struct`` or ``union``. 854 838 """ 855 839 # 856 840 # Do the basic parse to get the pieces of the declaration. ··· 892 876 893 877 def dump_enum(self, ln, proto): 894 878 """ 895 - Stores an enum inside self.entries array. 879 + Store an ``enum`` inside self.entries array. 896 880 """ 897 881 # 898 882 # Strip preprocessor directives. Note that this depends on the ··· 1039 1023 1040 1024 def dump_declaration(self, ln, prototype): 1041 1025 """ 1042 - Stores a data declaration inside self.entries array. 1026 + Store a data declaration inside self.entries array. 1043 1027 """ 1044 1028 1045 1029 if self.entry.decl_type == "enum": ··· 1056 1040 1057 1041 def dump_function(self, ln, prototype): 1058 1042 """ 1059 - Stores a function or function macro inside self.entries array. 1043 + Store a function or function macro inside self.entries array. 1060 1044 """ 1061 1045 1062 1046 found = func_macro = False ··· 1157 1141 1158 1142 def dump_typedef(self, ln, proto): 1159 1143 """ 1160 - Stores a typedef inside self.entries array. 1144 + Store a ``typedef`` inside self.entries array. 1161 1145 """ 1162 1146 # 1163 1147 # We start by looking for function typedefs. ··· 1211 1195 @staticmethod 1212 1196 def process_export(function_set, line): 1213 1197 """ 1214 - process EXPORT_SYMBOL* tags 1198 + Process ``EXPORT_SYMBOL*`` tags. 1215 1199 1216 1200 This method doesn't use any variable from the class, so declare it 1217 1201 with a staticmethod decorator. ··· 1242 1226 1243 1227 def process_normal(self, ln, line): 1244 1228 """ 1245 - STATE_NORMAL: looking for the /** to begin everything. 1229 + STATE_NORMAL: looking for the ``/**`` to begin everything.
1246 1230 """ 1247 1231 1248 1232 if not doc_start.match(line): ··· 1332 1316 else: 1333 1317 self.emit_msg(ln, f"Cannot find identifier on line:\n{line}") 1334 1318 1335 - # 1336 - # Helper function to determine if a new section is being started. 1337 - # 1338 1319 def is_new_section(self, ln, line): 1320 + """ 1321 + Helper function to determine if a new section is being started. 1322 + """ 1339 1323 if doc_sect.search(line): 1340 1324 self.state = state.BODY 1341 1325 # ··· 1367 1351 return True 1368 1352 return False 1369 1353 1370 - # 1371 - # Helper function to detect (and effect) the end of a kerneldoc comment. 1372 - # 1373 1354 def is_comment_end(self, ln, line): 1355 + """ 1356 + Helper function to detect (and effect) the end of a kerneldoc comment. 1357 + """ 1374 1358 if doc_end.search(line): 1375 1359 self.dump_section() 1376 1360 ··· 1389 1373 1390 1374 def process_decl(self, ln, line): 1391 1375 """ 1392 - STATE_DECLARATION: We've seen the beginning of a declaration 1376 + STATE_DECLARATION: We've seen the beginning of a declaration. 1393 1377 """ 1394 1378 if self.is_new_section(ln, line) or self.is_comment_end(ln, line): 1395 1379 return ··· 1418 1402 1419 1403 def process_special(self, ln, line): 1420 1404 """ 1421 - STATE_SPECIAL_SECTION: a section ending with a blank line 1405 + STATE_SPECIAL_SECTION: a section ending with a blank line. 1422 1406 """ 1423 1407 # 1424 1408 # If we have hit a blank line (only the " * " marker), then this ··· 1508 1492 1509 1493 def syscall_munge(self, ln, proto): # pylint: disable=W0613 1510 1494 """ 1511 - Handle syscall definitions 1495 + Handle syscall definitions. 1512 1496 """ 1513 1497 1514 1498 is_void = False ··· 1547 1531 1548 1532 def tracepoint_munge(self, ln, proto): 1549 1533 """ 1550 - Handle tracepoint definitions 1534 + Handle tracepoint definitions. 
1551 1535 """ 1552 1536 1553 1537 tracepointname = None ··· 1583 1567 return proto 1584 1568 1585 1569 def process_proto_function(self, ln, line): 1586 - """Ancillary routine to process a function prototype""" 1570 + """Ancillary routine to process a function prototype.""" 1587 1571 1588 1572 # strip C99-style comments to end of line 1589 1573 line = KernRe(r"//.*$", re.S).sub('', line) ··· 1628 1612 self.reset_state(ln) 1629 1613 1630 1614 def process_proto_type(self, ln, line): 1631 - """Ancillary routine to process a type""" 1615 + """ 1616 + Ancillary routine to process a type. 1617 + """ 1632 1618 1633 1619 # Strip C99-style comments and surrounding whitespace 1634 1620 line = KernRe(r"//.*$", re.S).sub('', line).strip() ··· 1684 1666 self.process_proto_type(ln, line) 1685 1667 1686 1668 def process_docblock(self, ln, line): 1687 - """STATE_DOCBLOCK: within a DOC: block.""" 1669 + """STATE_DOCBLOCK: within a ``DOC:`` block.""" 1688 1670 1689 1671 if doc_end.search(line): 1690 1672 self.dump_section() ··· 1696 1678 1697 1679 def parse_export(self): 1698 1680 """ 1699 - Parses EXPORT_SYMBOL* macros from a single Kernel source file. 1681 + Parses ``EXPORT_SYMBOL*`` macros from a single Kernel source file. 1700 1682 """ 1701 1683 1702 1684 export_table = set() ··· 1713 1695 1714 1696 return export_table 1715 1697 1716 - # 1717 - # The state/action table telling us which function to invoke in 1718 - # each state. 1719 - # 1698 + #: The state/action table telling us which function to invoke in each state. 1720 1699 state_actions = { 1721 1700 state.NORMAL: process_normal, 1722 1701 state.NAME: process_name,
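The `state_actions` dispatch table documented above maps each parser state to its handler function. The same pattern in miniature, as a hypothetical two-state comment scanner rather than the kernel parser itself:

```python
from enum import IntEnum

class State(IntEnum):
    NORMAL = 0  #: Normal code.
    BODY = 1    #: Inside a comment body.

class MiniParser:
    def process_normal(self, line):
        """Look for the comment opener; emit nothing."""
        if line.startswith("/**"):
            self.state = State.BODY
        return None

    def process_body(self, line):
        """Emit comment text until the closer is seen."""
        if line.strip() == "*/":
            self.state = State.NORMAL
            return None
        return line.strip(" *")

    #: State/action table: parse_line() just indexes it, as kdoc does.
    state_actions = {
        State.NORMAL: process_normal,
        State.BODY: process_body,
    }

    def __init__(self):
        self.state = State.NORMAL

    def parse_line(self, line):
        return self.state_actions[self.state](self, line)
```

Keeping the transitions in a table rather than an if/elif chain makes it easy to see, in one place, which handler runs in which state.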
+11 -7
tools/lib/python/kdoc/kdoc_re.py
··· 51 51 """ 52 52 return self.regex.pattern 53 53 54 + def __repr__(self): 55 + return f're.compile("{self.regex.pattern}")' 56 + 54 57 def __add__(self, other): 55 58 """ 56 59 Allows adding two regular expressions into one. ··· 64 61 65 62 def match(self, string): 66 63 """ 67 - Handles a re.match storing its results 64 + Handles a re.match storing its results. 68 65 """ 69 66 70 67 self.last_match = self.regex.match(string) ··· 72 69 73 70 def search(self, string): 74 71 """ 75 - Handles a re.search storing its results 72 + Handles a re.search storing its results. 76 73 """ 77 74 78 75 self.last_match = self.regex.search(string) ··· 80 77 81 78 def findall(self, string): 82 79 """ 83 - Alias to re.findall 80 + Alias to re.findall. 84 81 """ 85 82 86 83 return self.regex.findall(string) 87 84 88 85 def split(self, string): 89 86 """ 90 - Alias to re.split 87 + Alias to re.split. 91 88 """ 92 89 93 90 return self.regex.split(string) 94 91 95 92 def sub(self, sub, string, count=0): 96 93 """ 97 - Alias to re.sub 94 + Alias to re.sub. 98 95 """ 99 96 100 97 return self.regex.sub(sub, string, count=count) 101 98 102 99 def group(self, num): 103 100 """ 104 - Returns the group results of the last match 101 + Returns the group results of the last match. 105 102 """ 106 103 107 104 return self.last_match.group(num) ··· 113 110 even harder on Python with its normal re module, as there are several 114 111 advanced regular expressions that are missing. 115 112 116 - This is the case of this pattern: 113 + This is the case of this pattern:: 117 114 118 115 '\\bSTRUCT_GROUP(\\(((?:(?>[^)(]+)|(?1))*)\\))[^;]*;' 119 116 ··· 124 121 replace nested expressions. 125 122 126 123 The original approach was suggested by: 124 + 127 125 https://stackoverflow.com/questions/5454322/python-how-to-match-nested-parentheses-with-regex 128 126 129 127 Although I re-implemented it to make it more generic and match 3 types
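`KernRe` wraps `re` and remembers the last match so `group()` can be called right after a successful test. A minimal sketch of that convenience pattern; `Re` here is a hypothetical reimplementation, not the actual class:

```python
import re

class Re:
    """Tiny re wrapper that caches the last match, like kdoc's KernRe."""

    def __init__(self, pattern, flags=0):
        self.regex = re.compile(pattern, flags)
        self.last_match = None

    def search(self, string):
        """Handle a re.search, storing its result for later group() calls."""
        self.last_match = self.regex.search(string)
        return self.last_match

    def group(self, num):
        # Only valid after a successful search(); mirrors the Perl-ish
        # "if (r.search(s)): use r.group(n)" idiom the parser relies on.
        return self.last_match.group(num)
```

This keeps call sites short: test the match and read its groups from the same object, without threading a match object through.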
+56 -39
tools/lib/python/kdoc/latex_fonts.py
··· 5 5 # Ported to Python by (c) Mauro Carvalho Chehab, 2025 6 6 7 7 """ 8 - Detect problematic Noto CJK variable fonts. 8 + Detect problematic Noto CJK variable fonts 9 + ========================================== 9 10 10 - For "make pdfdocs", reports of build errors of translations.pdf started 11 - arriving early 2024 [1, 2]. It turned out that Fedora and openSUSE 12 - tumbleweed have started deploying variable-font [3] format of "Noto CJK" 13 - fonts [4, 5]. For PDF, a LaTeX package named xeCJK is used for CJK 11 + For ``make pdfdocs``, reports of build errors of translations.pdf started 12 + arriving early 2024 [1]_ [2]_. It turned out that Fedora and openSUSE 13 + tumbleweed have started deploying variable-font [3]_ format of "Noto CJK" 14 + fonts [4]_ [5]_. For PDF, a LaTeX package named xeCJK is used for CJK 14 15 (Chinese, Japanese, Korean) pages. xeCJK requires XeLaTeX/XeTeX, which 15 16 does not (and likely never will) understand variable fonts for historical 16 17 reasons. ··· 26 25 suggestions if variable-font files of "Noto CJK" fonts are in the list of 27 26 fonts accessible from XeTeX. 28 27 29 - References: 30 - [1]: https://lore.kernel.org/r/8734tqsrt7.fsf@meer.lwn.net/ 31 - [2]: https://lore.kernel.org/r/1708585803.600323099@f111.i.mail.ru/ 32 - [3]: https://en.wikipedia.org/wiki/Variable_font 33 - [4]: https://fedoraproject.org/wiki/Changes/Noto_CJK_Variable_Fonts 34 - [5]: https://build.opensuse.org/request/show/1157217 28 + .. [1] https://lore.kernel.org/r/8734tqsrt7.fsf@meer.lwn.net/ 29 + .. [2] https://lore.kernel.org/r/1708585803.600323099@f111.i.mail.ru/ 30 + .. [3] https://en.wikipedia.org/wiki/Variable_font 31 + .. [4] https://fedoraproject.org/wiki/Changes/Noto_CJK_Variable_Fonts 32 + .. 
[5] https://build.opensuse.org/request/show/1157217 35 33 36 - #=========================================================================== 37 34 Workarounds for building translations.pdf 38 - #=========================================================================== 35 + ----------------------------------------- 39 36 40 37 * Denylist "variable font" Noto CJK fonts. 38 + 41 39 - Create $HOME/deny-vf/fontconfig/fonts.conf from template below, with 42 40 tweaks if necessary. Remove leading "". 41 + 43 42 - Path of fontconfig/fonts.conf can be overridden by setting an env 44 43 variable FONTS_CONF_DENY_VF. 45 44 46 - * Template: 47 - ----------------------------------------------------------------- 48 - <?xml version="1.0"?> 49 - <!DOCTYPE fontconfig SYSTEM "urn:fontconfig:fonts.dtd"> 50 - <fontconfig> 51 - <!-- 52 - Ignore variable-font glob (not to break xetex) 53 - --> 54 - <selectfont> 55 - <rejectfont> 56 - <!-- 57 - for Fedora 58 - --> 59 - <glob>/usr/share/fonts/google-noto-*-cjk-vf-fonts</glob> 60 - <!-- 61 - for openSUSE tumbleweed 62 - --> 63 - <glob>/usr/share/fonts/truetype/Noto*CJK*-VF.otf</glob> 64 - </rejectfont> 65 - </selectfont> 66 - </fontconfig> 67 - ----------------------------------------------------------------- 45 + * Template:: 46 + 47 + <?xml version="1.0"?> 48 + <!DOCTYPE fontconfig SYSTEM "urn:fontconfig:fonts.dtd"> 49 + <fontconfig> 50 + <!-- 51 + Ignore variable-font glob (not to break xetex) 52 + --> 53 + <selectfont> 54 + <rejectfont> 55 + <!-- 56 + for Fedora 57 + --> 58 + <glob>/usr/share/fonts/google-noto-*-cjk-vf-fonts</glob> 59 + <!-- 60 + for openSUSE tumbleweed 61 + --> 62 + <glob>/usr/share/fonts/truetype/Noto*CJK*-VF.otf</glob> 63 + </rejectfont> 64 + </selectfont> 65 + </fontconfig> 68 66 69 67 The denylisting is activated for "make pdfdocs". 70 68 71 69 * For skipping CJK pages in PDF 70 + 72 71 - Uninstall texlive-xecjk. 73 72 Denylisting is not needed in this case. 
74 73 75 74 * For printing CJK pages in PDF 75 + 76 76 - Need non-variable "Noto CJK" fonts. 77 + 77 78 * Fedora 79 + 78 80 - google-noto-sans-cjk-fonts 79 81 - google-noto-serif-cjk-fonts 82 + 80 83 * openSUSE tumbleweed 84 + 81 85 - Non-variable "Noto CJK" fonts are not available as distro packages 82 86 as of April, 2024. Fetch a set of font files from upstream Noto 83 87 CJK Font released at: 88 + 84 89 https://github.com/notofonts/noto-cjk/tree/main/Sans#super-otc 90 + 85 91 and at: 92 + 86 93 https://github.com/notofonts/noto-cjk/tree/main/Serif#super-otc 87 - , then uncompress and deploy them. 94 + 95 + then uncompress and deploy them. 88 96 - Remember to update fontconfig cache by running fc-cache. 89 97 90 - !!! Caution !!! 98 + .. caution:: 91 99 Uninstalling "variable font" packages can be dangerous. 92 100 They might be depended upon by other packages important for your work. 93 101 Denylisting should be less invasive, as it is effective only while ··· 125 115 self.re_cjk = re.compile(r"([^:]+):\s*Noto\s+(Sans|Sans Mono|Serif) CJK") 126 116 127 117 def description(self): 118 + """ 119 + Returns module description. 120 + """ 128 121 return __doc__ 129 122 130 123 def get_noto_cjk_vf_fonts(self): 131 - """Get Noto CJK fonts""" 124 + """ 125 + Get Noto CJK fonts. 126 + """ 132 127 133 128 cjk_fonts = set() 134 129 cmd = ["fc-list", ":", "file", "family", "variable"] ··· 158 143 return sorted(cjk_fonts) 159 144 160 145 def check(self): 161 - """Check for problems with CJK fonts""" 146 + """ 147 + Check for problems with CJK fonts. 148 + """ 162 149 163 150 fonts = textwrap.indent("\n".join(self.get_noto_cjk_vf_fonts()), " ") 164 151 if not fonts:
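The checker's `re_cjk` pattern shown above is matched against lines of `fc-list : file family variable` output. A sketch of how such lines would be filtered; the sample line format below is an assumption about fc-list output, not captured from a real system:

```python
import re

# Same pattern the LaTeX font checker compiles for fc-list output lines.
re_cjk = re.compile(r"([^:]+):\s*Noto\s+(Sans|Sans Mono|Serif) CJK")

def noto_cjk_files(fc_list_lines):
    """Return the font file paths whose family matched a Noto CJK variant."""
    files = []
    for line in fc_list_lines:
        m = re_cjk.match(line)
        if m:
            files.append(m.group(1).strip())
    return files
```

The leading `([^:]+)` captures the file path (everything up to the first colon), so a variable-font hit can be reported with the offending file, as the suggestions above require.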
+39 -23
tools/lib/python/kdoc/parse_data_structs.py
··· 9 9 It accepts an optional file to change the default symbol reference or to 10 10 suppress symbols from the output. 11 11 12 - It is capable of identifying defines, functions, structs, typedefs, 13 - enums and enum symbols and create cross-references for all of them. 12 + It is capable of identifying ``define``, function, ``struct``, ``typedef``, 13 + ``enum`` and ``enum`` symbols and creating cross-references for all of them. 14 14 It is also capable of distinguish #define used for specifying a Linux 15 15 ioctl. 16 16 17 - The optional rules file contains a set of rules like: 17 + The optional rules file contains a set of rules like:: 18 18 19 19 ignore ioctl VIDIOC_ENUM_FMT 20 20 replace ioctl VIDIOC_DQBUF vidioc_qbuf ··· 34 34 It is meant to allow having a more comprehensive documentation, where 35 35 uAPI headers will create cross-reference links to the code. 36 36 37 - It is capable of identifying defines, functions, structs, typedefs, 38 - enums and enum symbols and create cross-references for all of them. 37 + It is capable of identifying ``define``, function, ``struct``, ``typedef``, 38 + ``enum`` and ``enum`` symbols and creating cross-references for all of them. 39 39 It is also capable of distinguish #define used for specifying a Linux 40 40 ioctl. 41 41 ··· 43 43 allows parsing an exception file. Such file contains a set of rules 44 44 using the syntax below: 45 45 46 - 1. Ignore rules: 46 + 1. Ignore rules:: 47 47 48 48 ignore <type> <symbol>` 49 49 50 50 Removes the symbol from reference generation. 51 51 52 - 2. Replace rules: 52 + 2. Replace rules:: 53 53 54 54 replace <type> <old_symbol> <new_reference> 55 55 ··· 58 58 - A simple symbol name; 59 59 - A full Sphinx reference. 60 60 61 - 3. Namespace rules 61 + 3. Namespace rules:: 62 62 63 63 namespace <namespace> 64 64 65 65 Sets C namespace to be used during cross-reference generation. Can 66 66 be overridden by replace rules.
67 67 68 - On ignore and replace rules, <type> can be: 69 - - ioctl: for defines that end with _IO*, e.g. ioctl definitions 70 - - define: for other defines 71 - - symbol: for symbols defined within enums; 72 - - typedef: for typedefs; 73 - - enum: for the name of a non-anonymous enum; 74 - - struct: for structs. 68 + On ignore and replace rules, ``<type>`` can be: 69 + - ``ioctl``: for defines that end with ``_IO*``, e.g. ioctl definitions 70 + - ``define``: for other defines 71 + - ``symbol``: for symbols defined within enums; 72 + - ``typedef``: for typedefs; 73 + - ``enum``: for the name of a non-anonymous enum; 74 + - ``struct``: for structs. 75 75 76 - Examples: 76 + Examples:: 77 77 78 78 ignore define __LINUX_MEDIA_H 79 79 ignore ioctl VIDIOC_ENUM_FMT ··· 83 83 namespace MC 84 84 """ 85 85 86 - # Parser regexes with multiple ways to capture enums and structs 86 + #: Parser regex with multiple ways to capture enums. 87 87 RE_ENUMS = [ 88 88 re.compile(r"^\s*enum\s+([\w_]+)\s*\{"), 89 89 re.compile(r"^\s*enum\s+([\w_]+)\s*$"), 90 90 re.compile(r"^\s*typedef\s*enum\s+([\w_]+)\s*\{"), 91 91 re.compile(r"^\s*typedef\s*enum\s+([\w_]+)\s*$"), 92 92 ] 93 + 94 + #: Parser regex with multiple ways to capture structs. 93 95 RE_STRUCTS = [ 94 96 re.compile(r"^\s*struct\s+([_\w][\w\d_]+)\s*\{"), 95 97 re.compile(r"^\s*struct\s+([_\w][\w\d_]+)$"), ··· 99 97 re.compile(r"^\s*typedef\s*struct\s+([_\w][\w\d_]+)$"), 100 98 ] 101 99 102 - # FIXME: the original code was written a long time before Sphinx C 100 + # NOTE: the original code was written a long time before Sphinx C 103 101 # domain to have multiple namespaces. To avoid to much turn at the 104 102 # existing hyperlinks, the code kept using "c:type" instead of the 105 103 # right types. To change that, we need to change the types not only 106 104 # here, but also at the uAPI media documentation. 105 + 106 + #: Dictionary containing C type identifiers to be transformed. 
107 107 DEF_SYMBOL_TYPES = { 108 108 "ioctl": { 109 109 "prefix": "\\ ", ··· 162 158 self.symbols[symbol_type] = {} 163 159 164 160 def read_exceptions(self, fname: str): 161 + """ 162 + Read an optional exceptions file, used to override defaults. 163 + """ 164 + 165 165 if not fname: 166 166 return 167 167 ··· 250 242 def store_type(self, ln, symbol_type: str, symbol: str, 251 243 ref_name: str = None, replace_underscores: bool = True): 252 244 """ 253 - Stores a new symbol at self.symbols under symbol_type. 245 + Store a new symbol at self.symbols under symbol_type. 254 246 255 - By default, underscores are replaced by "-" 247 + By default, underscores are replaced by ``-``. 256 248 """ 257 249 defs = self.DEF_SYMBOL_TYPES[symbol_type] 258 250 ··· 284 276 self.symbols[symbol_type][symbol] = (f"{prefix}{ref_link}{suffix}", ln) 285 277 286 278 def store_line(self, line): 287 - """Stores a line at self.data, properly indented""" 279 + """ 280 + Store a line at self.data, properly indented. 281 + """ 288 282 line = " " + line.expandtabs() 289 283 self.data += line.rstrip(" ") 290 284 291 285 def parse_file(self, file_in: str, exceptions: str = None): 292 - """Reads a C source file and get identifiers""" 286 + """ 287 + Read a C source file and get identifiers. 288 + """ 293 289 self.data = "" 294 290 is_enum = False 295 291 is_comment = False ··· 445 433 446 434 def gen_toc(self): 447 435 """ 448 - Create a list of symbols to be part of a TOC contents table 436 + Create a list of symbols to be part of a TOC contents table. 449 437 """ 450 438 text = [] 451 439 ··· 476 464 return "\n".join(text) 477 465 478 466 def write_output(self, file_in: str, file_out: str, toc: bool): 467 + """ 468 + Write a ReST output file. 469 + """ 470 + 479 471 title = os.path.basename(file_in) 480 472 481 473 if toc:
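The ignore/replace/namespace rule syntax documented above is simple to tokenize. A simplified sketch of such a parser (an illustration of the rule grammar, not the module's actual `read_exceptions()` logic):

```python
def parse_rules(text):
    """Parse ignore/replace/namespace rules from an exceptions file body.

    Returns (ignore set, replace dict, namespace or None).
    """
    ignore, replace, namespace = set(), {}, None
    for line in text.splitlines():
        fields = line.split()
        if not fields or fields[0].startswith("#"):
            continue                      # blank line or comment
        if fields[0] == "ignore" and len(fields) == 3:
            ignore.add((fields[1], fields[2]))          # (type, symbol)
        elif fields[0] == "replace" and len(fields) == 4:
            replace[(fields[1], fields[2])] = fields[3]  # -> new reference
        elif fields[0] == "namespace" and len(fields) == 2:
            namespace = fields[1]
    return ignore, replace, namespace
```

Keying ignore and replace entries by the (type, symbol) pair matters because the same identifier can legitimately appear as, say, both a `define` and an `ioctl`.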
+16 -4
tools/lib/python/kdoc/python_version.py
··· 33 33 """ 34 34 35 35 def __init__(self, version): 36 - """Ïnitialize self.version tuple from a version string""" 36 + """ 37 + Initialize self.version tuple from a version string. 38 + """ 37 39 self.version = self.parse_version(version) 38 40 39 41 @staticmethod 40 42 def parse_version(version): 41 - """Convert a major.minor.patch version into a tuple""" 43 + """ 44 + Convert a major.minor.patch version into a tuple. 45 + """ 42 46 return tuple(int(x) for x in version.split(".")) 43 47 44 48 @staticmethod 45 49 def ver_str(version): 46 - """Returns a version tuple as major.minor.patch""" 50 + """ 51 + Returns a version tuple as major.minor.patch. 52 + """ 47 53 return ".".join([str(x) for x in version]) 48 54 49 55 @staticmethod 50 56 def cmd_print(cmd, max_len=80): 57 + """ 58 + Outputs a command line, respecting the maximum width. 59 + """ 60 + 51 61 cmd_line = [] 52 62 53 63 for w in cmd: ··· 76 66 return "\n ".join(cmd_line) 77 67 78 68 def __str__(self): 79 - """Returns a version tuple as major.minor.patch from self.version""" 69 + """ 70 + Return a version tuple as major.minor.patch from self.version. 71 + """ 80 72 return self.ver_str(self.version) 81 73 82 74 @staticmethod
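The tuple representation that `parse_version()` produces is what makes version comparison correct: integer tuples order numerically, where plain strings do not. The two helpers shown in the hunk above boil down to:

```python
def parse_version(version):
    """Convert a "major.minor.patch" string into a comparable int tuple."""
    return tuple(int(x) for x in version.split("."))

def ver_str(version):
    """Format a version tuple back as "major.minor.patch"."""
    return ".".join(str(x) for x in version)
```

Comparing `parse_version("3.10.0") > parse_version("3.9.2")` is `True`, whereas the raw string comparison `"3.10" > "3.9"` is `False`, which is exactly the pitfall the tuple form avoids.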