1.. SPDX-License-Identifier: GPL-2.0
2.. include:: <isonum.txt>
3
4===========================================
5User Interface for Resource Control feature
6===========================================
7
8:Copyright: |copy| 2016 Intel Corporation
9:Authors: - Fenghua Yu <fenghua.yu@intel.com>
10 - Tony Luck <tony.luck@intel.com>
11 - Vikas Shivappa <vikas.shivappa@intel.com>
12
13
Intel refers to this feature as Intel Resource Director Technology (Intel(R) RDT).
AMD refers to this feature as AMD Platform Quality of Service (AMD QoS).
16
This feature is enabled by the CONFIG_X86_CPU_RESCTRL kernel configuration
option and by the following x86 /proc/cpuinfo flag bits:
19
20=============================================== ================================
21RDT (Resource Director Technology) Allocation "rdt_a"
22CAT (Cache Allocation Technology) "cat_l3", "cat_l2"
23CDP (Code and Data Prioritization) "cdp_l3", "cdp_l2"
24CQM (Cache QoS Monitoring) "cqm_llc", "cqm_occup_llc"
25MBM (Memory Bandwidth Monitoring) "cqm_mbm_total", "cqm_mbm_local"
26MBA (Memory Bandwidth Allocation) "mba"
27SMBA (Slow Memory Bandwidth Allocation) ""
28BMEC (Bandwidth Monitoring Event Configuration) ""
29=============================================== ================================
30
31Historically, new features were made visible by default in /proc/cpuinfo. This
32resulted in the feature flags becoming hard to parse by humans. Adding a new
33flag to /proc/cpuinfo should be avoided if user space can obtain information
34about the feature from resctrl's info directory.
35
36To use the feature mount the file system::
37
38 # mount -t resctrl resctrl [-o cdp[,cdpl2][,mba_MBps][,debug]] /sys/fs/resctrl
39
40mount options are:
41
42"cdp":
43 Enable code/data prioritization in L3 cache allocations.
44"cdpl2":
45 Enable code/data prioritization in L2 cache allocations.
46"mba_MBps":
47 Enable the MBA Software Controller(mba_sc) to specify MBA
48 bandwidth in MiBps
49"debug":
50 Make debug files accessible. Available debug files are annotated with
51 "Available only with debug option".
52
53L2 and L3 CDP are controlled separately.
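
As an illustration, on a system that supports L3 CDP, MBA and L3 monitoring,
a session enabling CDP at mount time might look like the following; the
entries under "info" depend on which resources the system actually supports::

    # mount -t resctrl -o cdp resctrl /sys/fs/resctrl
    # ls /sys/fs/resctrl/info
    L3CODE  L3DATA  L3_MON  MB  last_cmd_status
    # umount /sys/fs/resctrl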
54
55RDT features are orthogonal. A particular system may support only
56monitoring, only control, or both monitoring and control. Cache
57pseudo-locking is a unique way of using cache control to "pin" or
58"lock" data in the cache. Details can be found in
59"Cache Pseudo-Locking".
60
61
The mount succeeds if either allocation or monitoring is present, but
only those files and directories supported by the system will be created.
64For more details on the behavior of the interface during monitoring
65and allocation, see the "Resource alloc and monitor groups" section.
66
67Info directory
68==============
69
70The 'info' directory contains information about the enabled
71resources. Each resource has its own subdirectory. The subdirectory
72names reflect the resource names.
73
74Each subdirectory contains the following files with respect to
75allocation:
76
77Cache resource(L3/L2) subdirectory contains the following files
78related to allocation:
79
80"num_closids":
        The number of CLOSIDs which are valid for this
        resource. The kernel uses the smallest number of
        CLOSIDs of all enabled resources as the limit.
84"cbm_mask":
85 The bitmask which is valid for this resource.
86 This mask is equivalent to 100%.
87"min_cbm_bits":
88 The minimum number of consecutive bits which
89 must be set when writing a mask.
90
91"shareable_bits":
92 Bitmask of shareable resource with other executing
93 entities (e.g. I/O). User can use this when
94 setting up exclusive cache partitions. Note that
95 some platforms support devices that have their
96 own settings for cache use which can over-ride
97 these bits.
98"bit_usage":
99 Annotated capacity bitmasks showing how all
100 instances of the resource are used. The legend is:
101
102 "0":
103 Corresponding region is unused. When the system's
104 resources have been allocated and a "0" is found
105 in "bit_usage" it is a sign that resources are
106 wasted.
107
108 "H":
109 Corresponding region is used by hardware only
110 but available for software use. If a resource
111 has bits set in "shareable_bits" but not all
112 of these bits appear in the resource groups'
                schemata, then the bits appearing in
                "shareable_bits" but in no resource group will
                be marked as "H".
116 "X":
117 Corresponding region is available for sharing and
118 used by hardware and software. These are the
119 bits that appear in "shareable_bits" as
120 well as a resource group's allocation.
121 "S":
122 Corresponding region is used by software
123 and available for sharing.
124 "E":
125 Corresponding region is used exclusively by
126 one resource group. No sharing allowed.
127 "P":
128 Corresponding region is pseudo-locked. No
129 sharing allowed.
130"sparse_masks":
131 Indicates if non-contiguous 1s value in CBM is supported.
132
133 "0":
134 Only contiguous 1s value in CBM is supported.
135 "1":
136 Non-contiguous 1s value in CBM is supported.
137
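A quick way to inspect these files is to read them directly. The values
below are purely illustrative and differ between CPU models::

    # cd /sys/fs/resctrl/info/L3
    # cat num_closids cbm_mask min_cbm_bits shareable_bits sparse_masks
    16
    fffff
    1
    c0000
    0
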
138Memory bandwidth(MB) subdirectory contains the following files
139with respect to allocation:
140
141"min_bandwidth":
142 The minimum memory bandwidth percentage which
143 user can request.
144
145"bandwidth_gran":
146 The granularity in which the memory bandwidth
147 percentage is allocated. The allocated
148 b/w percentage is rounded off to the next
149 control step available on the hardware. The
150 available bandwidth control steps are:
151 min_bandwidth + N * bandwidth_gran.
152
153"delay_linear":
154 Indicates if the delay scale is linear or
155 non-linear. This field is purely informational
156 only.
157
158"thread_throttle_mode":
159 Indicator on Intel systems of how tasks running on threads
160 of a physical core are throttled in cases where they
161 request different memory bandwidth percentages:
162
163 "max":
164 the smallest percentage is applied
165 to all threads
166 "per-thread":
167 bandwidth percentages are directly applied to
168 the threads running on the core
169
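These files can be read directly as well. The values shown here are just an
example (chosen to match the 10%/10% examples later in this document) and
the "thread_throttle_mode" file may not be present on all systems::

    # cd /sys/fs/resctrl/info/MB
    # cat min_bandwidth bandwidth_gran delay_linear thread_throttle_mode
    10
    10
    1
    max
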
170If RDT monitoring is available there will be an "L3_MON" directory
171with the following files:
172
173"num_rmids":
174 The number of RMIDs available. This is the
175 upper bound for how many "CTRL_MON" + "MON"
176 groups can be created.
177
178"mon_features":
179 Lists the monitoring events if
180 monitoring is enabled for the resource.
181 Example::
182
183 # cat /sys/fs/resctrl/info/L3_MON/mon_features
184 llc_occupancy
185 mbm_total_bytes
186 mbm_local_bytes
187
188 If the system supports Bandwidth Monitoring Event
189 Configuration (BMEC), then the bandwidth events will
190 be configurable. The output will be::
191
192 # cat /sys/fs/resctrl/info/L3_MON/mon_features
193 llc_occupancy
194 mbm_total_bytes
195 mbm_total_bytes_config
196 mbm_local_bytes
197 mbm_local_bytes_config
198
199"mbm_total_bytes_config", "mbm_local_bytes_config":
200 Read/write files containing the configuration for the mbm_total_bytes
201 and mbm_local_bytes events, respectively, when the Bandwidth
202 Monitoring Event Configuration (BMEC) feature is supported.
203 The event configuration settings are domain specific and affect
204 all the CPUs in the domain. When either event configuration is
205 changed, the bandwidth counters for all RMIDs of both events
206 (mbm_total_bytes as well as mbm_local_bytes) are cleared for that
207 domain. The next read for every RMID will report "Unavailable"
208 and subsequent reads will report the valid value.
209
210 Following are the types of events supported:
211
212 ==== ========================================================
213 Bits Description
214 ==== ========================================================
215 6 Dirty Victims from the QOS domain to all types of memory
216 5 Reads to slow memory in the non-local NUMA domain
217 4 Reads to slow memory in the local NUMA domain
218 3 Non-temporal writes to non-local NUMA domain
219 2 Non-temporal writes to local NUMA domain
220 1 Reads to memory in the non-local NUMA domain
221 0 Reads to memory in the local NUMA domain
222 ==== ========================================================
223
224 By default, the mbm_total_bytes configuration is set to 0x7f to count
225 all the event types and the mbm_local_bytes configuration is set to
226 0x15 to count all the local memory events.
227
228 Examples:
229
        * To view the current configuration:
          ::
232
233 # cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
234 0=0x7f;1=0x7f;2=0x7f;3=0x7f
235
236 # cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
237 0=0x15;1=0x15;3=0x15;4=0x15
238
        * To change the mbm_total_bytes to count only reads on domain 0,
          the bits 0, 1, 4 and 5 need to be set, which is 110011b in binary
          (in hexadecimal 0x33):
242 ::
243
244 # echo "0=0x33" > /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
245
246 # cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
247 0=0x33;1=0x7f;2=0x7f;3=0x7f
248
        * To change the mbm_local_bytes to count all the slow memory reads on
          domain 0 and 1, the bits 4 and 5 need to be set, which is 110000b
          in binary (in hexadecimal 0x30):
252 ::
253
254 # echo "0=0x30;1=0x30" > /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
255
256 # cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
257 0=0x30;1=0x30;3=0x15;4=0x15
258
259"max_threshold_occupancy":
260 Read/write file provides the largest value (in
261 bytes) at which a previously used LLC_occupancy
262 counter can be considered for re-use.
263
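Being a read/write file it can be tuned at runtime, for example (the value
shown is illustrative; the default depends on the system)::

    # cat /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy
    540672
    # echo 1048576 > /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy
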
264Finally, in the top level of the "info" directory there is a file
265named "last_cmd_status". This is reset with every "command" issued
266via the file system (making new directories or writing to any of the
267control files). If the command was successful, it will read as "ok".
If the command failed, it will provide more information than can be
conveyed in the error returns from file operations. E.g.
270::
271
272 # echo L3:0=f7 > schemata
273 bash: echo: write error: Invalid argument
274 # cat info/last_cmd_status
275 mask f7 has non-consecutive 1-bits
276
277Resource alloc and monitor groups
278=================================
279
280Resource groups are represented as directories in the resctrl file
281system. The default group is the root directory which, immediately
282after mounting, owns all the tasks and cpus in the system and can make
283full use of all resources.
284
285On a system with RDT control features additional directories can be
286created in the root directory that specify different amounts of each
287resource (see "schemata" below). The root and these additional top level
288directories are referred to as "CTRL_MON" groups below.
289
290On a system with RDT monitoring the root directory and other top level
291directories contain a directory named "mon_groups" in which additional
292directories can be created to monitor subsets of tasks in the CTRL_MON
293group that is their ancestor. These are called "MON" groups in the rest
294of this document.
295
296Removing a directory will move all tasks and cpus owned by the group it
297represents to the parent. Removing one of the created CTRL_MON groups
298will automatically remove all MON groups below it.
299
300Moving MON group directories to a new parent CTRL_MON group is supported
301for the purpose of changing the resource allocations of a MON group
302without impacting its monitoring data or assigned tasks. This operation
303is not allowed for MON groups which monitor CPUs. No other move
304operation is currently allowed other than simply renaming a CTRL_MON or
305MON group.
306
307All groups contain the following files:
308
309"tasks":
310 Reading this file shows the list of all tasks that belong to
311 this group. Writing a task id to the file will add a task to the
312 group. Multiple tasks can be added by separating the task ids
        with commas (see the example following this list). Tasks are
        assigned sequentially and multiple failures are not supported: a
        single failure encountered while attempting to assign a task will
        cause the operation to abort, and tasks already added before the
        failure will remain in the group.
        Failures will be logged to /sys/fs/resctrl/info/last_cmd_status.
318
319 If the group is a CTRL_MON group the task is removed from
320 whichever previous CTRL_MON group owned the task and also from
321 any MON group that owned the task. If the group is a MON group,
322 then the task must already belong to the CTRL_MON parent of this
323 group. The task is removed from any previous MON group.
324
325
326"cpus":
327 Reading this file shows a bitmask of the logical CPUs owned by
328 this group. Writing a mask to this file will add and remove
329 CPUs to/from this group. As with the tasks file a hierarchy is
330 maintained where MON groups may only include CPUs owned by the
331 parent CTRL_MON group.
332 When the resource group is in pseudo-locked mode this file will
333 only be readable, reflecting the CPUs associated with the
334 pseudo-locked region.
335
336
337"cpus_list":
338 Just like "cpus", only using ranges of CPUs instead of bitmasks.
339
340
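For example, assuming a group "p0" exists, several tasks can be added in one
write, and the two CPU files show the same ownership in different formats
(task ids, CPU numbers and mask width are illustrative)::

    # echo "1234,5678" > p0/tasks
    # echo f0 > p0/cpus
    # cat p0/cpus
    f0
    # cat p0/cpus_list
    4-7
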
341When control is enabled all CTRL_MON groups will also contain:
342
343"schemata":
344 A list of all the resources available to this group.
345 Each resource has its own line and format - see below for details.
346
347"size":
348 Mirrors the display of the "schemata" file to display the size in
349 bytes of each allocation instead of the bits representing the
350 allocation.
351
352"mode":
353 The "mode" of the resource group dictates the sharing of its
354 allocations. A "shareable" resource group allows sharing of its
355 allocations while an "exclusive" resource group does not. A
356 cache pseudo-locked region is created by first writing
357 "pseudo-locksetup" to the "mode" file before writing the cache
358 pseudo-locked region's schemata to the resource group's "schemata"
359 file. On successful pseudo-locked region creation the mode will
360 automatically change to "pseudo-locked".
361
362"ctrl_hw_id":
363 Available only with debug option. The identifier used by hardware
364 for the control group. On x86 this is the CLOSID.
365
366When monitoring is enabled all MON groups will also contain:
367
368"mon_data":
369 This contains a set of files organized by L3 domain and by
370 RDT event. E.g. on a system with two L3 domains there will
371 be subdirectories "mon_L3_00" and "mon_L3_01". Each of these
372 directories have one file per event (e.g. "llc_occupancy",
373 "mbm_total_bytes", and "mbm_local_bytes"). In a MON group these
374 files provide a read out of the current value of the event for
375 all tasks in the group. In CTRL_MON groups these files provide
376 the sum for all tasks in the CTRL_MON group and all tasks in
        MON groups. Please see the example section, and the short listing
        below, for more details on usage.
378 On systems with Sub-NUMA Cluster (SNC) enabled there are extra
379 directories for each node (located within the "mon_L3_XX" directory
380 for the L3 cache they occupy). These are named "mon_sub_L3_YY"
381 where "YY" is the node number.
382
383"mon_hw_id":
384 Available only with debug option. The identifier used by hardware
385 for the monitor group. On x86 this is the RMID.
386
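For instance, the layout of the "mon_data" directory mentioned above might
look as follows on a hypothetical system with two L3 domains and a group "p0"::

    # ls /sys/fs/resctrl/p0/mon_data/
    mon_L3_00  mon_L3_01
    # ls /sys/fs/resctrl/p0/mon_data/mon_L3_00/
    llc_occupancy  mbm_local_bytes  mbm_total_bytes
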
387When the "mba_MBps" mount option is used all CTRL_MON groups will also contain:
388
389"mba_MBps_event":
390 Reading this file shows which memory bandwidth event is used
391 as input to the software feedback loop that keeps memory bandwidth
392 below the value specified in the schemata file. Writing the
393 name of one of the supported memory bandwidth events found in
394 /sys/fs/resctrl/info/L3_MON/mon_features changes the input
395 event.
396
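A hypothetical session switching the feedback event for a group "p0" might
look like this (the default event shown is illustrative)::

    # cat /sys/fs/resctrl/p0/mba_MBps_event
    mbm_local_bytes
    # echo mbm_total_bytes > /sys/fs/resctrl/p0/mba_MBps_event
    # cat /sys/fs/resctrl/p0/mba_MBps_event
    mbm_total_bytes
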
397Resource allocation rules
398-------------------------
399
400When a task is running the following rules define which resources are
401available to it:
402
4031) If the task is a member of a non-default group, then the schemata
404 for that group is used.
405
4062) Else if the task belongs to the default group, but is running on a
407 CPU that is assigned to some specific group, then the schemata for the
408 CPU's group is used.
409
4103) Otherwise the schemata for the default group is used.
411
412Resource monitoring rules
413-------------------------
4141) If a task is a member of a MON group, or non-default CTRL_MON group
415 then RDT events for the task will be reported in that group.
416
4172) If a task is a member of the default CTRL_MON group, but is running
418 on a CPU that is assigned to some specific group, then the RDT events
419 for the task will be reported in that group.
420
4213) Otherwise RDT events for the task will be reported in the root level
422 "mon_data" group.
423
424
425Notes on cache occupancy monitoring and control
426===============================================
427When moving a task from one group to another you should remember that
428this only affects *new* cache allocations by the task. E.g. you may have
a task in a monitor group showing 3 MB of cache occupancy. If you move it
430to a new group and immediately check the occupancy of the old and new
431groups you will likely see that the old group is still showing 3 MB and
432the new group zero. When the task accesses locations still in cache from
433before the move, the h/w does not update any counters. On a busy system
434you will likely see the occupancy in the old group go down as cache lines
435are evicted and re-used while the occupancy in the new group rises as
436the task accesses memory and loads into the cache are counted based on
437membership in the new group.
438
439The same applies to cache allocation control. Moving a task to a group
440with a smaller cache partition will not evict any cache lines. The
441process may continue to use them from the old partition.
442
Hardware uses a CLOSID (Class Of Service ID) and an RMID (Resource Monitoring ID)
to identify a control group and a monitoring group respectively. Each of
the resource groups is mapped to these IDs based on the kind of group. The
number of CLOSIDs and RMIDs is limited by the hardware and hence the creation
of a "CTRL_MON" directory may fail if we run out of either CLOSIDs or RMIDs,
and creation of a "MON" group may fail if we run out of RMIDs.
449
450max_threshold_occupancy - generic concepts
451------------------------------------------
452
Note that an RMID once freed may not be immediately available for use as
the RMID is still tagged to the cache lines of its previous user. Hence
such RMIDs are placed on a limbo list and checked back in once the cache
occupancy has gone down. If at some point the system has many limbo RMIDs
that are not yet ready to be used, the user may see an -EBUSY during mkdir.
459
460max_threshold_occupancy is a user configurable value to determine the
461occupancy at which an RMID can be freed.
462
463The mon_llc_occupancy_limbo tracepoint gives the precise occupancy in bytes
for a subset of RMIDs that are not immediately available for allocation.
465This can't be relied on to produce output every second, it may be necessary
466to attempt to create an empty monitor group to force an update. Output may
467only be produced if creation of a control or monitor group fails.
468
469Schemata files - general concepts
470---------------------------------
471Each line in the file describes one resource. The line starts with
472the name of the resource, followed by specific values to be applied
473in each of the instances of that resource on the system.
474
475Cache IDs
476---------
477On current generation systems there is one L3 cache per socket and L2
478caches are generally just shared by the hyperthreads on a core, but this
isn't an architectural requirement. We could have multiple separate L3
caches on a socket, or multiple cores could share an L2 cache. So instead
481of using "socket" or "core" to define the set of logical cpus sharing
482a resource we use a "Cache ID". At a given cache level this will be a
483unique number across the whole system (but it isn't guaranteed to be a
484contiguous sequence, there may be gaps). To find the ID for each logical
CPU look in /sys/devices/system/cpu/cpu*/cache/index*/id.
486
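For example, on a hypothetical two socket system the L3 (index3) IDs of two
CPUs on different sockets could be read like this (CPU numbers are
illustrative)::

    # cat /sys/devices/system/cpu/cpu0/cache/index3/id
    0
    # cat /sys/devices/system/cpu/cpu24/cache/index3/id
    1
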
487Cache Bit Masks (CBM)
488---------------------
489For cache resources we describe the portion of the cache that is available
490for allocation using a bitmask. The maximum value of the mask is defined
491by each cpu model (and may be different for different cache levels). It
492is found using CPUID, but is also provided in the "info" directory of
493the resctrl file system in "info/{resource}/cbm_mask". Some Intel hardware
494requires that these masks have all the '1' bits in a contiguous block. So
4950x3, 0x6 and 0xC are legal 4-bit masks with two bits set, but 0x5, 0x9
and 0xA are not. Check /sys/fs/resctrl/info/{resource}/sparse_masks to see
whether non-contiguous 1s values are supported. On a system with a 20-bit mask
498each bit represents 5% of the capacity of the cache. You could partition
499the cache into four equal parts with masks: 0x1f, 0x3e0, 0x7c00, 0xf8000.
500
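As a sketch, assuming a group "p0" exists, checking "sparse_masks" before
writing a non-contiguous mask avoids surprises; on hardware that only
supports contiguous masks the write is rejected::

    # cat /sys/fs/resctrl/info/L3/sparse_masks
    0
    # echo "L3:0=5" > /sys/fs/resctrl/p0/schemata
    -sh: echo: write error: Invalid argument
    # cat /sys/fs/resctrl/info/last_cmd_status
    mask 5 has non-consecutive 1-bits
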
501Notes on Sub-NUMA Cluster mode
502==============================
503When SNC mode is enabled, Linux may load balance tasks between Sub-NUMA
504nodes much more readily than between regular NUMA nodes since the CPUs
505on Sub-NUMA nodes share the same L3 cache and the system may report
506the NUMA distance between Sub-NUMA nodes with a lower value than used
507for regular NUMA nodes.
508
509The top-level monitoring files in each "mon_L3_XX" directory provide
510the sum of data across all SNC nodes sharing an L3 cache instance.
511Users who bind tasks to the CPUs of a specific Sub-NUMA node can read
512the "llc_occupancy", "mbm_total_bytes", and "mbm_local_bytes" in the
513"mon_sub_L3_YY" directories to get node local data.
514
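For example, with two SNC nodes per L3 cache the per-node files appear
inside each "mon_L3_XX" directory (values and layout shown are
illustrative)::

    # ls /sys/fs/resctrl/mon_data/mon_L3_00/
    llc_occupancy  mbm_local_bytes  mbm_total_bytes  mon_sub_L3_00  mon_sub_L3_01
    # cat /sys/fs/resctrl/mon_data/mon_L3_00/mon_sub_L3_01/llc_occupancy
    1128448
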
515Memory bandwidth allocation is still performed at the L3 cache
516level. I.e. throttling controls are applied to all SNC nodes.
517
518L3 cache allocation bitmaps also apply to all SNC nodes. But note that
519the amount of L3 cache represented by each bit is divided by the number
520of SNC nodes per L3 cache. E.g. with a 100MB cache on a system with 10-bit
521allocation masks each bit normally represents 10MB. With SNC mode enabled
522with two SNC nodes per L3 cache, each bit only represents 5MB.
523
524Memory bandwidth Allocation and monitoring
525==========================================
526
527For Memory bandwidth resource, by default the user controls the resource
528by indicating the percentage of total memory bandwidth.
529
530The minimum bandwidth percentage value for each cpu model is predefined
531and can be looked up through "info/MB/min_bandwidth". The bandwidth
532granularity that is allocated is also dependent on the cpu model and can
533be looked up at "info/MB/bandwidth_gran". The available bandwidth
534control steps are: min_bw + N * bw_gran. Intermediate values are rounded
535to the next control step available on the hardware.
536
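As an illustration, with min_bandwidth of 10 and bandwidth_gran of 10 (the
values used in the examples later in this document) a request of 35% would
be rounded to the 40% control step (group name is illustrative and the
output shown is trimmed to the MB line)::

    # echo "MB:0=35" > p0/schemata
    # cat p0/schemata
    MB:0=40
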
Bandwidth throttling is a core-specific mechanism on some Intel
SKUs. Using a high bandwidth and a low bandwidth setting on two threads
539sharing a core may result in both threads being throttled to use the
540low bandwidth (see "thread_throttle_mode").
541
The fact that Memory Bandwidth Allocation (MBA) may be a core-specific
mechanism whereas Memory Bandwidth Monitoring (MBM) is done at the
package level may lead to confusion when users try to apply control
via MBA and then monitor the bandwidth to see if the controls are
effective. Below are such scenarios:
547
1. Users may *not* see an increase in actual bandwidth when percentage
   values are increased:
550
551This can occur when aggregate L2 external bandwidth is more than L3
552external bandwidth. Consider an SKL SKU with 24 cores on a package and
553where L2 external is 10GBps (hence aggregate L2 external bandwidth is
554240GBps) and L3 external bandwidth is 100GBps. Now a workload with '20
555threads, having 50% bandwidth, each consuming 5GBps' consumes the max L3
556bandwidth of 100GBps although the percentage value specified is only 50%
557<< 100%. Hence increasing the bandwidth percentage will not yield any
558more bandwidth. This is because although the L2 external bandwidth still
559has capacity, the L3 external bandwidth is fully used. Also note that
560this would be dependent on number of cores the benchmark is run on.
561
5622. Same bandwidth percentage may mean different actual bandwidth
563 depending on # of threads:
564
For the same SKU in #1, a 'single thread, with 10% bandwidth' and '4
thread, with 10% bandwidth' can consume up to 10GBps and 40GBps although
they have the same percentage bandwidth of 10%. This is simply because as
threads start using more cores in an rdtgroup, the actual bandwidth may
increase or vary although the user-specified bandwidth percentage is the same.
570
In order to mitigate this and make the interface more user friendly,
resctrl added support for specifying the bandwidth in MiBps as well. The
kernel underneath would use a software feedback mechanism or a "Software
Controller (mba_sc)" which reads the actual bandwidth using MBM counters
and adjusts the memory bandwidth percentages to ensure::

    "actual bandwidth < user specified bandwidth".

By default, the schemata would take the bandwidth percentage values
whereas the user can switch to the "MBA software controller" mode using
the mount option 'mba_MBps'. The schemata format is specified in the
sections below.
583
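A minimal sketch of switching to this mode (any existing mount has to be
unmounted first; group name and value are illustrative)::

    # umount /sys/fs/resctrl
    # mount -t resctrl -o mba_MBps resctrl /sys/fs/resctrl
    # mkdir /sys/fs/resctrl/p0
    # echo "MB:0=1024" > /sys/fs/resctrl/p0/schemata
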
584L3 schemata file details (code and data prioritization disabled)
585----------------------------------------------------------------
586With CDP disabled the L3 schemata format is::
587
588 L3:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
589
590L3 schemata file details (CDP enabled via mount option to resctrl)
591------------------------------------------------------------------
592When CDP is enabled L3 control is split into two separate resources
593so you can specify independent masks for code and data like this::
594
595 L3DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
596 L3CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
597
598L2 schemata file details
599------------------------
600CDP is supported at L2 using the 'cdpl2' mount option. The schemata
601format is either::
602
603 L2:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
604
or::
606
607 L2DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
608 L2CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
609
610
611Memory bandwidth Allocation (default mode)
612------------------------------------------
613
614Memory b/w domain is L3 cache.
615::
616
617 MB:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...
618
619Memory bandwidth Allocation specified in MiBps
620----------------------------------------------
621
622Memory bandwidth domain is L3 cache.
623::
624
625 MB:<cache_id0>=bw_MiBps0;<cache_id1>=bw_MiBps1;...
626
627Slow Memory Bandwidth Allocation (SMBA)
628---------------------------------------
629AMD hardware supports Slow Memory Bandwidth Allocation (SMBA).
630CXL.memory is the only supported "slow" memory device. With the
631support of SMBA, the hardware enables bandwidth allocation on
632the slow memory devices. If there are multiple such devices in
633the system, the throttling logic groups all the slow sources
634together and applies the limit on them as a whole.
635
The presence of SMBA (with CXL.memory) is independent of the presence of
slow memory devices. If there are no such devices on the system, then
configuring SMBA will have no impact on the performance of the system.
639
640The bandwidth domain for slow memory is L3 cache. Its schemata file
641is formatted as:
642::
643
644 SMBA:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...
645
646Reading/writing the schemata file
647---------------------------------
648Reading the schemata file will show the state of all resources
649on all domains. When writing you only need to specify those values
650which you wish to change. E.g.
651::
652
653 # cat schemata
654 L3DATA:0=fffff;1=fffff;2=fffff;3=fffff
655 L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
656 # echo "L3DATA:2=3c0;" > schemata
657 # cat schemata
658 L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
659 L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
660
661Reading/writing the schemata file (on AMD systems)
662--------------------------------------------------
Reading the schemata file will show the current bandwidth limit on all
domains. The allocated resources are in multiples of one eighth GB/s.
When writing to the file, you need to specify the cache id for which you
wish to configure the bandwidth limit.

For example, to allocate a 2GB/s limit on cache id 1:
669
670::
671
672 # cat schemata
673 MB:0=2048;1=2048;2=2048;3=2048
674 L3:0=ffff;1=ffff;2=ffff;3=ffff
675
676 # echo "MB:1=16" > schemata
677 # cat schemata
678 MB:0=2048;1= 16;2=2048;3=2048
679 L3:0=ffff;1=ffff;2=ffff;3=ffff
680
681Reading/writing the schemata file (on AMD systems) with SMBA feature
682--------------------------------------------------------------------
Reading and writing the schemata file is the same as described in the
section above, without SMBA.

For example, to allocate an 8GB/s limit on cache id 1:
687
688::
689
690 # cat schemata
691 SMBA:0=2048;1=2048;2=2048;3=2048
692 MB:0=2048;1=2048;2=2048;3=2048
693 L3:0=ffff;1=ffff;2=ffff;3=ffff
694
695 # echo "SMBA:1=64" > schemata
696 # cat schemata
697 SMBA:0=2048;1= 64;2=2048;3=2048
698 MB:0=2048;1=2048;2=2048;3=2048
699 L3:0=ffff;1=ffff;2=ffff;3=ffff
700
701Cache Pseudo-Locking
702====================
703CAT enables a user to specify the amount of cache space that an
704application can fill. Cache pseudo-locking builds on the fact that a
705CPU can still read and write data pre-allocated outside its current
706allocated area on a cache hit. With cache pseudo-locking, data can be
707preloaded into a reserved portion of cache that no application can
708fill, and from that point on will only serve cache hits. The cache
709pseudo-locked memory is made accessible to user space where an
710application can map it into its virtual address space and thus have
711a region of memory with reduced average read latency.
712
713The creation of a cache pseudo-locked region is triggered by a request
714from the user to do so that is accompanied by a schemata of the region
715to be pseudo-locked. The cache pseudo-locked region is created as follows:
716
717- Create a CAT allocation CLOSNEW with a CBM matching the schemata
718 from the user of the cache region that will contain the pseudo-locked
719 memory. This region must not overlap with any current CAT allocation/CLOS
720 on the system and no future overlap with this cache region is allowed
721 while the pseudo-locked region exists.
722- Create a contiguous region of memory of the same size as the cache
723 region.
724- Flush the cache, disable hardware prefetchers, disable preemption.
725- Make CLOSNEW the active CLOS and touch the allocated memory to load
726 it into the cache.
727- Set the previous CLOS as active.
728- At this point the closid CLOSNEW can be released - the cache
729 pseudo-locked region is protected as long as its CBM does not appear in
730 any CAT allocation. Even though the cache pseudo-locked region will from
731 this point on not appear in any CBM of any CLOS an application running with
732 any CLOS will be able to access the memory in the pseudo-locked region since
733 the region continues to serve cache hits.
734- The contiguous region of memory loaded into the cache is exposed to
735 user-space as a character device.
736
737Cache pseudo-locking increases the probability that data will remain
738in the cache via carefully configuring the CAT feature and controlling
739application behavior. There is no guarantee that data is placed in
740cache. Instructions like INVD, WBINVD, CLFLUSH, etc. can still evict
741“locked” data from cache. Power management C-states may shrink or
742power off cache. Deeper C-states will automatically be restricted on
743pseudo-locked region creation.
744
745It is required that an application using a pseudo-locked region runs
746with affinity to the cores (or a subset of the cores) associated
747with the cache on which the pseudo-locked region resides. A sanity check
748within the code will not allow an application to map pseudo-locked memory
749unless it runs with affinity to cores associated with the cache on which the
750pseudo-locked region resides. The sanity check is only done during the
initial mmap() handling, there is no enforcement afterwards and the
application itself needs to ensure it remains affine to the correct cores.
753
754Pseudo-locking is accomplished in two stages:
755
7561) During the first stage the system administrator allocates a portion
757 of cache that should be dedicated to pseudo-locking. At this time an
758 equivalent portion of memory is allocated, loaded into allocated
759 cache portion, and exposed as a character device.
7602) During the second stage a user-space application maps (mmap()) the
761 pseudo-locked memory into its address space.
762
763Cache Pseudo-Locking Interface
764------------------------------
765A pseudo-locked region is created using the resctrl interface as follows:
766
7671) Create a new resource group by creating a new directory in /sys/fs/resctrl.
7682) Change the new resource group's mode to "pseudo-locksetup" by writing
769 "pseudo-locksetup" to the "mode" file.
7703) Write the schemata of the pseudo-locked region to the "schemata" file. All
771 bits within the schemata should be "unused" according to the "bit_usage"
772 file.
773
774On successful pseudo-locked region creation the "mode" file will contain
775"pseudo-locked" and a new character device with the same name as the resource
776group will exist in /dev/pseudo_lock. This character device can be mmap()'ed
777by user space in order to obtain access to the pseudo-locked memory region.
778
779An example of cache pseudo-locked region creation and usage can be found below.
780
781Cache Pseudo-Locking Debugging Interface
782----------------------------------------
783The pseudo-locking debugging interface is enabled by default (if
784CONFIG_DEBUG_FS is enabled) and can be found in /sys/kernel/debug/resctrl.
785
786There is no explicit way for the kernel to test if a provided memory
787location is present in the cache. The pseudo-locking debugging interface uses
788the tracing infrastructure to provide two ways to measure cache residency of
789the pseudo-locked region:
790
7911) Memory access latency using the pseudo_lock_mem_latency tracepoint. Data
792 from these measurements are best visualized using a hist trigger (see
793 example below). In this test the pseudo-locked region is traversed at
794 a stride of 32 bytes while hardware prefetchers and preemption
795 are disabled. This also provides a substitute visualization of cache
796 hits and misses.
7972) Cache hit and miss measurements using model specific precision counters if
798 available. Depending on the levels of cache on the system the pseudo_lock_l2
799 and pseudo_lock_l3 tracepoints are available.
800
801When a pseudo-locked region is created a new debugfs directory is created for
802it in debugfs as /sys/kernel/debug/resctrl/<newdir>. A single
803write-only file, pseudo_lock_measure, is present in this directory. The
804measurement of the pseudo-locked region depends on the number written to this
805debugfs file:
806
8071:
808 writing "1" to the pseudo_lock_measure file will trigger the latency
809 measurement captured in the pseudo_lock_mem_latency tracepoint. See
810 example below.
8112:
812 writing "2" to the pseudo_lock_measure file will trigger the L2 cache
813 residency (cache hits and misses) measurement captured in the
814 pseudo_lock_l2 tracepoint. See example below.
8153:
816 writing "3" to the pseudo_lock_measure file will trigger the L3 cache
817 residency (cache hits and misses) measurement captured in the
818 pseudo_lock_l3 tracepoint.
819
820All measurements are recorded with the tracing infrastructure. This requires
821the relevant tracepoints to be enabled before the measurement is triggered.
822
823Example of latency debugging interface
824~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
825In this example a pseudo-locked region named "newlock" was created. Here is
826how we can measure the latency in cycles of reading from this region and
827visualize this data with a histogram that is available if CONFIG_HIST_TRIGGERS
828is set::
829
830 # :> /sys/kernel/tracing/trace
831 # echo 'hist:keys=latency' > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/trigger
832 # echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/enable
833 # echo 1 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
834 # echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/enable
835 # cat /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/hist
836
837 # event histogram
838 #
839 # trigger info: hist:keys=latency:vals=hitcount:sort=hitcount:size=2048 [active]
840 #
841
842 { latency: 456 } hitcount: 1
843 { latency: 50 } hitcount: 83
844 { latency: 36 } hitcount: 96
845 { latency: 44 } hitcount: 174
846 { latency: 48 } hitcount: 195
847 { latency: 46 } hitcount: 262
848 { latency: 42 } hitcount: 693
849 { latency: 40 } hitcount: 3204
850 { latency: 38 } hitcount: 3484
851
852 Totals:
853 Hits: 8192
854 Entries: 9
855 Dropped: 0
856
857Example of cache hits/misses debugging
858~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
859In this example a pseudo-locked region named "newlock" was created on the L2
860cache of a platform. Here is how we can obtain details of the cache hits
861and misses using the platform's precision counters.
862::
863
864 # :> /sys/kernel/tracing/trace
865 # echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
866 # echo 2 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
867 # echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
868 # cat /sys/kernel/tracing/trace
869
870 # tracer: nop
871 #
872 # _-----=> irqs-off
873 # / _----=> need-resched
874 # | / _---=> hardirq/softirq
875 # || / _--=> preempt-depth
876 # ||| / delay
877 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
878 # | | | |||| | |
879 pseudo_lock_mea-1672 [002] .... 3132.860500: pseudo_lock_l2: hits=4097 miss=0
880
881
882Examples for RDT allocation usage
883~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
884
8851) Example 1
886
887On a two socket machine (one L3 cache per socket) with just four bits
888for cache bit masks, minimum b/w of 10% with a memory bandwidth
889granularity of 10%.
890::
891
892 # mount -t resctrl resctrl /sys/fs/resctrl
893 # cd /sys/fs/resctrl
894 # mkdir p0 p1
895 # echo "L3:0=3;1=c\nMB:0=50;1=50" > /sys/fs/resctrl/p0/schemata
896 # echo "L3:0=3;1=3\nMB:0=50;1=50" > /sys/fs/resctrl/p1/schemata
897
898The default resource group is unmodified, so we have access to all parts
899of all caches (its schemata file reads "L3:0=f;1=f").
900
901Tasks that are under the control of group "p0" may only allocate from the
902"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
903Tasks in group "p1" use the "lower" 50% of cache on both sockets.
904
905Similarly, tasks that are under the control of group "p0" may use a
906maximum memory b/w of 50% on socket0 and 50% on socket 1.
907Tasks in group "p1" may also use 50% memory b/w on both sockets.
908Note that unlike cache masks, memory b/w cannot specify whether these
allocations can overlap or not. The allocation specifies the maximum
910b/w that the group may be able to use and the system admin can configure
911the b/w accordingly.
912
If resctrl is using the software controller (mba_sc) then the user can
enter the maximum bandwidth in MiBps rather than the percentage values.
915::
916
917 # echo "L3:0=3;1=c\nMB:0=1024;1=500" > /sys/fs/resctrl/p0/schemata
918 # echo "L3:0=3;1=3\nMB:0=1024;1=500" > /sys/fs/resctrl/p1/schemata
919
In the above example the tasks in "p1" and "p0" on socket 0 would use a
maximum bandwidth of 1024 MiBps whereas on socket 1 they would use 500 MiBps.
922
9232) Example 2
924
925Again two sockets, but this time with a more realistic 20-bit mask.
926
Two real time tasks, pid=1234 running on processor 0 and pid=5678 running on
processor 1 on socket 0 of a 2-socket, dual-core machine. To avoid noisy
929neighbors, each of the two real-time tasks exclusively occupies one quarter
930of L3 cache on socket 0.
931::
932
933 # mount -t resctrl resctrl /sys/fs/resctrl
934 # cd /sys/fs/resctrl
935
936First we reset the schemata for the default group so that the "upper"
93750% of the L3 cache on socket 0 and 50% of memory b/w cannot be used by
938ordinary tasks::
939
940 # echo "L3:0=3ff;1=fffff\nMB:0=50;1=100" > schemata
941
942Next we make a resource group for our first real time task and give
943it access to the "top" 25% of the cache on socket 0.
944::
945
946 # mkdir p0
947 # echo "L3:0=f8000;1=fffff" > p0/schemata
948
949Finally we move our first real time task into this resource group. We
950also use taskset(1) to ensure the task always runs on a dedicated CPU
951on socket 0. Most uses of resource groups will also constrain which
952processors tasks run on.
953::
954
955 # echo 1234 > p0/tasks
956 # taskset -cp 1 1234
957
958Ditto for the second real time task (with the remaining 25% of cache)::
959
960 # mkdir p1
961 # echo "L3:0=7c00;1=fffff" > p1/schemata
962 # echo 5678 > p1/tasks
963 # taskset -cp 2 5678
964
For the same 2 socket system with memory b/w resource and CAT L3 the
schemata would look like the following (assume min_bandwidth is 10 and
bandwidth_gran is 10):
968
969For our first real time task this would request 20% memory b/w on socket 0.
970::
971
972 # echo -e "L3:0=f8000;1=fffff\nMB:0=20;1=100" > p0/schemata
973
For our second real time task this would request another 20% memory b/w
on socket 0.
::

    # echo -e "L3:0=7c00;1=fffff\nMB:0=20;1=100" > p1/schemata
979
9803) Example 3
981
A single socket system which has real-time tasks running on cores 4-7 and
a non real-time workload assigned to cores 0-3. The real-time tasks share text
and data, so a per-task association is not required and due to interaction
with the kernel it's desired that the kernel on these cores shares L3 with
the tasks.
987::
988
989 # mount -t resctrl resctrl /sys/fs/resctrl
990 # cd /sys/fs/resctrl
991
992First we reset the schemata for the default group so that the "upper"
99350% of the L3 cache on socket 0, and 50% of memory bandwidth on socket 0
994cannot be used by ordinary tasks::
995
996 # echo "L3:0=3ff\nMB:0=50" > schemata
997
998Next we make a resource group for our real time cores and give it access
999to the "top" 50% of the cache on socket 0 and 50% of memory bandwidth on
1000socket 0.
1001::
1002
1003 # mkdir p0
1004 # echo "L3:0=ffc00\nMB:0=50" > p0/schemata
1005
Finally we move cores 4-7 over to the new group and make sure that the
kernel and the tasks running there get 50% of the cache. They should
also get 50% of memory bandwidth assuming that cores 4-7 are SMT
siblings and only the real time threads are scheduled on cores 4-7.
1010::
1011
1012 # echo F0 > p0/cpus
1013
10144) Example 4
1015
1016The resource groups in previous examples were all in the default "shareable"
1017mode allowing sharing of their cache allocations. If one resource group
configures a cache allocation then nothing prevents another resource group
from overlapping with that allocation.
1020
1021In this example a new exclusive resource group will be created on a L2 CAT
1022system with two L2 cache instances that can be configured with an 8-bit
1023capacity bitmask. The new exclusive resource group will be configured to use
102425% of each cache instance.
1025::
1026
1027 # mount -t resctrl resctrl /sys/fs/resctrl/
1028 # cd /sys/fs/resctrl
1029
1030First, we observe that the default group is configured to allocate to all L2
1031cache::
1032
1033 # cat schemata
1034 L2:0=ff;1=ff
1035
1036We could attempt to create the new resource group at this point, but it will
1037fail because of the overlap with the schemata of the default group::
1038
1039 # mkdir p0
1040 # echo 'L2:0=0x3;1=0x3' > p0/schemata
1041 # cat p0/mode
1042 shareable
1043 # echo exclusive > p0/mode
1044 -sh: echo: write error: Invalid argument
1045 # cat info/last_cmd_status
1046 schemata overlaps
1047
1048To ensure that there is no overlap with another resource group the default
1049resource group's schemata has to change, making it possible for the new
1050resource group to become exclusive.
1051::
1052
1053 # echo 'L2:0=0xfc;1=0xfc' > schemata
1054 # echo exclusive > p0/mode
1055 # grep . p0/*
1056 p0/cpus:0
1057 p0/mode:exclusive
1058 p0/schemata:L2:0=03;1=03
1059 p0/size:L2:0=262144;1=262144
1060
A newly created resource group will not overlap with an exclusive resource
group::
1063
1064 # mkdir p1
1065 # grep . p1/*
1066 p1/cpus:0
1067 p1/mode:shareable
1068 p1/schemata:L2:0=fc;1=fc
1069 p1/size:L2:0=786432;1=786432
1070
1071The bit_usage will reflect how the cache is used::
1072
1073 # cat info/L2/bit_usage
1074 0=SSSSSSEE;1=SSSSSSEE
1075
1076A resource group cannot be forced to overlap with an exclusive resource group::
1077
1078 # echo 'L2:0=0x1;1=0x1' > p1/schemata
1079 -sh: echo: write error: Invalid argument
1080 # cat info/last_cmd_status
1081 overlaps with exclusive group
1082
1083Example of Cache Pseudo-Locking
1084~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lock a portion of the L2 cache from cache id 1 using CBM 0x3. The
pseudo-locked region is exposed at /dev/pseudo_lock/newlock and can be
provided to an application as an argument to mmap().
1088::
1089
1090 # mount -t resctrl resctrl /sys/fs/resctrl/
1091 # cd /sys/fs/resctrl
1092
Ensure that there are bits available that can be pseudo-locked. Since only
unused bits can be pseudo-locked, the bits to be pseudo-locked need to be
removed from the default resource group's schemata::
1096
1097 # cat info/L2/bit_usage
1098 0=SSSSSSSS;1=SSSSSSSS
1099 # echo 'L2:1=0xfc' > schemata
1100 # cat info/L2/bit_usage
1101 0=SSSSSSSS;1=SSSSSS00
1102
1103Create a new resource group that will be associated with the pseudo-locked
1104region, indicate that it will be used for a pseudo-locked region, and
1105configure the requested pseudo-locked region capacity bitmask::
1106
1107 # mkdir newlock
1108 # echo pseudo-locksetup > newlock/mode
1109 # echo 'L2:1=0x3' > newlock/schemata
1110
1111On success the resource group's mode will change to pseudo-locked, the
1112bit_usage will reflect the pseudo-locked region, and the character device
1113exposing the pseudo-locked region will exist::
1114
1115 # cat newlock/mode
1116 pseudo-locked
1117 # cat info/L2/bit_usage
1118 0=SSSSSSSS;1=SSSSSSPP
1119 # ls -l /dev/pseudo_lock/newlock
1120 crw------- 1 root root 243, 0 Apr 3 05:01 /dev/pseudo_lock/newlock
1121
1122::
1123
1124 /*
1125 * Example code to access one page of pseudo-locked cache region
1126 * from user space.
1127 */
1128 #define _GNU_SOURCE
1129 #include <fcntl.h>
1130 #include <sched.h>
1131 #include <stdio.h>
1132 #include <stdlib.h>
1133 #include <unistd.h>
1134 #include <sys/mman.h>
1135
1136 /*
1137 * It is required that the application runs with affinity to only
1138 * cores associated with the pseudo-locked region. Here the cpu
1139 * is hardcoded for convenience of example.
1140 */
1141 static int cpuid = 2;
1142
1143 int main(int argc, char *argv[])
1144 {
1145 cpu_set_t cpuset;
1146 long page_size;
1147 void *mapping;
1148 int dev_fd;
1149 int ret;
1150
1151 page_size = sysconf(_SC_PAGESIZE);
1152
1153 CPU_ZERO(&cpuset);
1154 CPU_SET(cpuid, &cpuset);
1155 ret = sched_setaffinity(0, sizeof(cpuset), &cpuset);
1156 if (ret < 0) {
1157 perror("sched_setaffinity");
1158 exit(EXIT_FAILURE);
1159 }
1160
1161 dev_fd = open("/dev/pseudo_lock/newlock", O_RDWR);
1162 if (dev_fd < 0) {
1163 perror("open");
1164 exit(EXIT_FAILURE);
1165 }
1166
1167 mapping = mmap(0, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
1168 dev_fd, 0);
1169 if (mapping == MAP_FAILED) {
1170 perror("mmap");
1171 close(dev_fd);
1172 exit(EXIT_FAILURE);
1173 }
1174
1175 /* Application interacts with pseudo-locked memory @mapping */
1176
1177 ret = munmap(mapping, page_size);
1178 if (ret < 0) {
1179 perror("munmap");
1180 close(dev_fd);
1181 exit(EXIT_FAILURE);
1182 }
1183
1184 close(dev_fd);
1185 exit(EXIT_SUCCESS);
1186 }
1187
1188Locking between applications
1189----------------------------
1190
1191Certain operations on the resctrl filesystem, composed of read/writes
1192to/from multiple files, must be atomic.
1193
1194As an example, the allocation of an exclusive reservation of L3 cache
1195involves:
1196
1197 1. Read the cbmmasks from each directory or the per-resource "bit_usage"
1198 2. Find a contiguous set of bits in the global CBM bitmask that is clear
1199 in any of the directory cbmmasks
1200 3. Create a new directory
1201 4. Set the bits found in step 2 to the new directory "schemata" file
1202
1203If two applications attempt to allocate space concurrently then they can
1204end up allocating the same bits so the reservations are shared instead of
1205exclusive.
1206
1207To coordinate atomic operations on the resctrlfs and to avoid the problem
1208above, the following locking procedure is recommended:
1209
1210Locking is based on flock, which is available in libc and also as a shell
script command.
1212
1213Write lock:
1214
1215 A) Take flock(LOCK_EX) on /sys/fs/resctrl
1216 B) Read/write the directory structure.
 C) Release the lock with flock(LOCK_UN)
1218
1219Read lock:
1220
1221 A) Take flock(LOCK_SH) on /sys/fs/resctrl
1222 B) If success read the directory structure.
 C) Release the lock with flock(LOCK_UN)
1224
1225Example with bash::
1226
1227 # Atomically read directory structure
1228 $ flock -s /sys/fs/resctrl/ find /sys/fs/resctrl
1229
1230 # Read directory contents and create new subdirectory
1231
1232 $ cat create-dir.sh
1233 find /sys/fs/resctrl/ > output.txt
    mask=$(function-of output.txt)
    mkdir /sys/fs/resctrl/newres/
    echo "$mask" > /sys/fs/resctrl/newres/schemata
1237
1238 $ flock /sys/fs/resctrl/ ./create-dir.sh
1239
1240Example with C::
1241
1242 /*
     * Example code to take advisory locks
1244 * before accessing resctrl filesystem
1245 */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/file.h>
1248
1249 void resctrl_take_shared_lock(int fd)
1250 {
1251 int ret;
1252
1253 /* take shared lock on resctrl filesystem */
1254 ret = flock(fd, LOCK_SH);
1255 if (ret) {
1256 perror("flock");
1257 exit(-1);
1258 }
1259 }
1260
1261 void resctrl_take_exclusive_lock(int fd)
1262 {
1263 int ret;
1264
        /* take exclusive lock on resctrl filesystem */
1266 ret = flock(fd, LOCK_EX);
1267 if (ret) {
1268 perror("flock");
1269 exit(-1);
1270 }
1271 }
1272
1273 void resctrl_release_lock(int fd)
1274 {
1275 int ret;
1276
        /* release lock on resctrl filesystem */
1278 ret = flock(fd, LOCK_UN);
1279 if (ret) {
1280 perror("flock");
1281 exit(-1);
1282 }
1283 }
1284
    int main(void)
    {
        int fd;
1288
1289 fd = open("/sys/fs/resctrl", O_DIRECTORY);
1290 if (fd == -1) {
1291 perror("open");
1292 exit(-1);
1293 }
1294 resctrl_take_shared_lock(fd);
1295 /* code to read directory contents */
1296 resctrl_release_lock(fd);
1297
1298 resctrl_take_exclusive_lock(fd);
1299 /* code to read and write directory contents */
1300 resctrl_release_lock(fd);

        return 0;
    }
1302
1303Examples for RDT Monitoring along with allocation usage
1304=======================================================
1305Reading monitored data
1306----------------------
1307Reading an event file (for ex: mon_data/mon_L3_00/llc_occupancy) would
1308show the current snapshot of LLC occupancy of the corresponding MON
1309group or CTRL_MON group.
1310
1311
1312Example 1 (Monitor CTRL_MON group and subset of tasks in CTRL_MON group)
1313------------------------------------------------------------------------
1314On a two socket machine (one L3 cache per socket) with just four bits
1315for cache bit masks::
1316
1317 # mount -t resctrl resctrl /sys/fs/resctrl
1318 # cd /sys/fs/resctrl
1319 # mkdir p0 p1
1320 # echo "L3:0=3;1=c" > /sys/fs/resctrl/p0/schemata
1321 # echo "L3:0=3;1=3" > /sys/fs/resctrl/p1/schemata
1322 # echo 5678 > p1/tasks
1323 # echo 5679 > p1/tasks
1324
1325The default resource group is unmodified, so we have access to all parts
1326of all caches (its schemata file reads "L3:0=f;1=f").
1327
1328Tasks that are under the control of group "p0" may only allocate from the
1329"lower" 50% on cache ID 0, and the "upper" 50% of cache ID 1.
1330Tasks in group "p1" use the "lower" 50% of cache on both sockets.
1331
1332Create monitor groups and assign a subset of tasks to each monitor group.
1333::
1334
1335 # cd /sys/fs/resctrl/p1/mon_groups
1336 # mkdir m11 m12
1337 # echo 5678 > m11/tasks
1338 # echo 5679 > m12/tasks
1339
1340fetch data (data shown in bytes)
1341::
1342
1343 # cat m11/mon_data/mon_L3_00/llc_occupancy
1344 16234000
1345 # cat m11/mon_data/mon_L3_01/llc_occupancy
1346 14789000
1347 # cat m12/mon_data/mon_L3_00/llc_occupancy
1348 16789000
1349
The parent CTRL_MON group shows the aggregated data.
::

    # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
    31234000
1355
1356Example 2 (Monitor a task from its creation)
1357--------------------------------------------
1358On a two socket machine (one L3 cache per socket)::
1359
1360 # mount -t resctrl resctrl /sys/fs/resctrl
1361 # cd /sys/fs/resctrl
1362 # mkdir p0 p1
1363
An RMID is allocated to the group once it is created and hence the <cmd>
below is monitored from its creation.
1366::
1367
1368 # echo $$ > /sys/fs/resctrl/p1/tasks
1369 # <cmd>
1370
1371Fetch the data::
1372
    # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
1374 31789000
1375
1376Example 3 (Monitor without CAT support or before creating CAT groups)
1377---------------------------------------------------------------------
1378
Assume a system like HSW has only CQM and no CAT support. In this case
resctrl will still mount but cannot create CTRL_MON directories.
But the user can create different MON groups within the root group and
thereby monitor all tasks, including kernel threads.

This can also be used to profile jobs' cache size footprints before they
are assigned to different allocation groups.
1386::
1387
1388 # mount -t resctrl resctrl /sys/fs/resctrl
1389 # cd /sys/fs/resctrl
1390 # mkdir mon_groups/m01
1391 # mkdir mon_groups/m02
1392
1393 # echo 3478 > /sys/fs/resctrl/mon_groups/m01/tasks
1394 # echo 2467 > /sys/fs/resctrl/mon_groups/m02/tasks
1395
Monitor the groups separately and also get per domain data. From the
output below it is apparent that the tasks are mostly doing work on
domain (socket) 0.
1399::
1400
    # cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_00/llc_occupancy
    31234000
    # cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_01/llc_occupancy
    34555
    # cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_00/llc_occupancy
    31234000
    # cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_01/llc_occupancy
    32789
1409
1410
1411Example 4 (Monitor real time tasks)
1412-----------------------------------
1413
1414A single socket system which has real time tasks running on cores 4-7
1415and non real time tasks on other cpus. We want to monitor the cache
1416occupancy of the real time threads on these cores.
1417::
1418
1419 # mount -t resctrl resctrl /sys/fs/resctrl
1420 # cd /sys/fs/resctrl
1421 # mkdir p1
1422
1423Move the cpus 4-7 over to p1::
1424
1425 # echo f0 > p1/cpus
1426
1427View the llc occupancy snapshot::
1428
1429 # cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
1430 11234000
1431
1432Intel RDT Errata
1433================
1434
1435Intel MBM Counters May Report System Memory Bandwidth Incorrectly
1436-----------------------------------------------------------------
1437
1438Errata SKX99 for Skylake server and BDF102 for Broadwell server.
1439
1440Problem: Intel Memory Bandwidth Monitoring (MBM) counters track metrics
1441according to the assigned Resource Monitor ID (RMID) for that logical
1442core. The IA32_QM_CTR register (MSR 0xC8E), used to report these
1443metrics, may report incorrect system bandwidth for certain RMID values.
1444
1445Implication: Due to the errata, system memory bandwidth may not match
1446what is reported.
1447
1448Workaround: MBM total and local readings are corrected according to the
1449following correction factor table:
1450
1451+---------------+---------------+---------------+-----------------+
1452|core count |rmid count |rmid threshold |correction factor|
1453+---------------+---------------+---------------+-----------------+
1454|1 |8 |0 |1.000000 |
1455+---------------+---------------+---------------+-----------------+
1456|2 |16 |0 |1.000000 |
1457+---------------+---------------+---------------+-----------------+
1458|3 |24 |15 |0.969650 |
1459+---------------+---------------+---------------+-----------------+
1460|4 |32 |0 |1.000000 |
1461+---------------+---------------+---------------+-----------------+
1462|6 |48 |31 |0.969650 |
1463+---------------+---------------+---------------+-----------------+
1464|7 |56 |47 |1.142857 |
1465+---------------+---------------+---------------+-----------------+
1466|8 |64 |0 |1.000000 |
1467+---------------+---------------+---------------+-----------------+
1468|9 |72 |63 |1.185115 |
1469+---------------+---------------+---------------+-----------------+
1470|10 |80 |63 |1.066553 |
1471+---------------+---------------+---------------+-----------------+
1472|11 |88 |79 |1.454545 |
1473+---------------+---------------+---------------+-----------------+
1474|12 |96 |0 |1.000000 |
1475+---------------+---------------+---------------+-----------------+
1476|13 |104 |95 |1.230769 |
1477+---------------+---------------+---------------+-----------------+
1478|14 |112 |95 |1.142857 |
1479+---------------+---------------+---------------+-----------------+
1480|15 |120 |95 |1.066667 |
1481+---------------+---------------+---------------+-----------------+
1482|16 |128 |0 |1.000000 |
1483+---------------+---------------+---------------+-----------------+
1484|17 |136 |127 |1.254863 |
1485+---------------+---------------+---------------+-----------------+
1486|18 |144 |127 |1.185255 |
1487+---------------+---------------+---------------+-----------------+
1488|19 |152 |0 |1.000000 |
1489+---------------+---------------+---------------+-----------------+
1490|20 |160 |127 |1.066667 |
1491+---------------+---------------+---------------+-----------------+
1492|21 |168 |0 |1.000000 |
1493+---------------+---------------+---------------+-----------------+
1494|22 |176 |159 |1.454334 |
1495+---------------+---------------+---------------+-----------------+
1496|23 |184 |0 |1.000000 |
1497+---------------+---------------+---------------+-----------------+
1498|24 |192 |127 |0.969744 |
1499+---------------+---------------+---------------+-----------------+
1500|25 |200 |191 |1.280246 |
1501+---------------+---------------+---------------+-----------------+
1502|26 |208 |191 |1.230921 |
1503+---------------+---------------+---------------+-----------------+
1504|27 |216 |0 |1.000000 |
1505+---------------+---------------+---------------+-----------------+
1506|28 |224 |191 |1.143118 |
1507+---------------+---------------+---------------+-----------------+
1508
If rmid > rmid threshold, MBM total and local values should be multiplied
by the correction factor. For example, on a 28-core part an RMID of 200 is
above the threshold of 191, so its MBM readings are multiplied by 1.143118.
1511
1512See:
1513
15141. Erratum SKX99 in Intel Xeon Processor Scalable Family Specification Update:
1515http://web.archive.org/web/20200716124958/https://www.intel.com/content/www/us/en/processors/xeon/scalable/xeon-scalable-spec-update.html
1516
15172. Erratum BDF102 in Intel Xeon E5-2600 v4 Processor Product Family Specification Update:
1518http://web.archive.org/web/20191125200531/https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-e5-v4-spec-update.pdf
1519
15203. The errata in Intel Resource Director Technology (Intel RDT) on 2nd Generation Intel Xeon Scalable Processors Reference Manual:
1521https://software.intel.com/content/www/us/en/develop/articles/intel-resource-director-technology-rdt-reference-manual.html
1522
1523for further information.