Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

docs: riscv: convert docs to ReST and rename to *.rst

The conversion here is trivial:
- Adjust the document title's markup;
- Do some whitespace alignment;
- Mark literal blocks;
- Use the ReST way to mark up indented lists.

In the new index.rst, add an :orphan: marker while the file is not yet
linked from the main index.rst, in order to avoid build warnings.
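To illustrate the literal-block change described above, here is a paraphrased before/after sketch (illustrative, not a verbatim excerpt from the patch). In ReST, a paragraph ending in `::` turns the following indented block into a literal block that Sphinx renders as preformatted text:

```rst
.. Before: plain text, rendered by Sphinx as an ordinary paragraph
practice, this should be a code segment like

    int x86_reserve_hardware(void)

.. After: the trailing "::" marks the indented text as a literal block
practice, this should be a code segment like::

    int x86_reserve_hardware(void)
```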

Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>

Authored by Mauro Carvalho Chehab, committed by Jonathan Corbet
bdf3a950 329f0041

+69 -46
Documentation/riscv/index.rst (new file, +17)
···
+:orphan:
+
+===================
+RISC-V architecture
+===================
+
+.. toctree::
+    :maxdepth: 1
+
+    pmu
+
+.. only:: subproject and html
+
+   Indices
+   =======
+
+   * :ref:`genindex`
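For context, the :orphan: marker is a stopgap: once the page is wired into the top-level TOC, the marker can be dropped and the build warning never returns. A hypothetical follow-up (illustrative only, not part of this commit) would look like:

```rst
.. In Documentation/index.rst (hypothetical follow-up):

.. toctree::
   :maxdepth: 2

   riscv/index

.. ...and the ":orphan:" line is then removed from
.. Documentation/riscv/index.rst.
```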
Documentation/riscv/pmu.txt → Documentation/riscv/pmu.rst (renamed, +52 -46)
···
+===================================
 Supporting PMUs on RISC-V platforms
-==========================================
+===================================
+
 Alan Kao <alankao@andestech.com>, Mar 2018

 Introduction
···
 (2) privilege level setting (user space only, kernel space only, both);
 (3) destructor setting. Normally it is sufficient to apply *riscv_destroy_event*;
 (4) tweaks for non-sampling events, which will be utilized by functions such as
-*perf_adjust_period*, usually something like the follows:
+*perf_adjust_period*, usually something like the follows::

-if (!is_sampling_event(event)) {
-        hwc->sample_period = x86_pmu.max_period;
-        hwc->last_period = hwc->sample_period;
-        local64_set(&hwc->period_left, hwc->sample_period);
-}
+  if (!is_sampling_event(event)) {
+          hwc->sample_period = x86_pmu.max_period;
+          hwc->last_period = hwc->sample_period;
+          local64_set(&hwc->period_left, hwc->sample_period);
+  }

 In the case of *riscv_base_pmu*, only (3) is provided for now.
···
 3.1. Interrupt Initialization

 This often occurs at the beginning of the *event_init* method. In common
-practice, this should be a code segment like
+practice, this should be a code segment like::

-int x86_reserve_hardware(void)
-{
+  int x86_reserve_hardware(void)
+  {
        int err = 0;

        if (!atomic_inc_not_zero(&pmc_refcount)) {
···
        }

        return err;
-}
+  }

 And the magic is in *reserve_pmc_hardware*, which usually does atomic
 operations to make implemented IRQ accessible from some global function pointer.
···
 3.2. IRQ Structure

-Basically, a IRQ runs the following pseudo code:
+Basically, a IRQ runs the following pseudo code::

-for each hardware counter that triggered this overflow
+  for each hardware counter that triggered this overflow

-    get the event of this counter
+      get the event of this counter

-    // following two steps are defined as *read()*,
-    // check the section Reading/Writing Counters for details.
-    count the delta value since previous interrupt
-    update the event->count (# event occurs) by adding delta, and
-    event->hw.period_left by subtracting delta
+      // following two steps are defined as *read()*,
+      // check the section Reading/Writing Counters for details.
+      count the delta value since previous interrupt
+      update the event->count (# event occurs) by adding delta, and
+      event->hw.period_left by subtracting delta

-    if the event overflows
-        sample data
-        set the counter appropriately for the next overflow
+      if the event overflows
+          sample data
+          set the counter appropriately for the next overflow

-        if the event overflows again
-        too frequently, throttle this event
-        fi
-    fi
+          if the event overflows again
+          too frequently, throttle this event
+          fi
+      fi

-end for
+  end for

 However as of this writing, none of the RISC-V implementations have designed an
 interrupt for perf, so the details are to be completed in the future.
···
 At this stage, a general event is bound to a physical counter, if any.
 The state changes to PERF_HES_STOPPED and PERF_HES_UPTODATE, because it is now
 stopped, and the (software) event count does not need updating.
-** *start* is then called, and the counter is enabled.
-With flag PERF_EF_RELOAD, it writes an appropriate value to the counter (check
-previous section for detail).
-Nothing is written if the flag does not contain PERF_EF_RELOAD.
-The state now is reset to none, because it is neither stopped nor updated
-(the counting already started)
+
+- *start* is then called, and the counter is enabled.
+  With flag PERF_EF_RELOAD, it writes an appropriate value to the counter (check
+  previous section for detail).
+  Nothing is written if the flag does not contain PERF_EF_RELOAD.
+  The state now is reset to none, because it is neither stopped nor updated
+  (the counting already started)
+
 * When being context-switched out, *del* is called. It then checks out all the
 events in the PMU and calls *stop* to update their counts.
-** *stop* is called by *del*
-and the perf core with flag PERF_EF_UPDATE, and it often shares the same
-subroutine as *read* with the same logic.
-The state changes to PERF_HES_STOPPED and PERF_HES_UPTODATE, again.

-** Life cycle of these two pairs: *add* and *del* are called repeatedly as
-tasks switch in-and-out; *start* and *stop* is also called when the perf core
-needs a quick stop-and-start, for instance, when the interrupt period is being
-adjusted.
+
+- *stop* is called by *del*
+  and the perf core with flag PERF_EF_UPDATE, and it often shares the same
+  subroutine as *read* with the same logic.
+  The state changes to PERF_HES_STOPPED and PERF_HES_UPTODATE, again.
+
+- Life cycle of these two pairs: *add* and *del* are called repeatedly as
+  tasks switch in-and-out; *start* and *stop* is also called when the perf core
+  needs a quick stop-and-start, for instance, when the interrupt period is being
+  adjusted.

 Current implementation is sufficient for now and can be easily extended to
 features in the future.
···
 Both structures are designed to be read-only.

 *struct pmu* defines some function pointer interfaces, and most of them take
-*struct perf_event* as a main argument, dealing with perf events according to
-perf's internal state machine (check kernel/events/core.c for details).
+  *struct perf_event* as a main argument, dealing with perf events according to
+  perf's internal state machine (check kernel/events/core.c for details).

 *struct riscv_pmu* defines PMU-specific parameters. The naming follows the
-convention of all other architectures.
+  convention of all other architectures.

 * struct perf_event: include/linux/perf_event.h
 * struct hw_perf_event

 The generic structure that represents perf events, and the hardware-related
-details.
+  details.

 * struct riscv_hw_events: arch/riscv/include/asm/perf_event.h

 The structure that holds the status of events, has two fixed members:
-the number of events and the array of the events.
+  the number of events and the array of the events.

 References
 ----------

 [1] https://github.com/riscv/riscv-linux/pull/124
+
 [2] https://groups.google.com/a/groups.riscv.org/forum/#!topic/sw-dev/f19TmCNP6yA