Merge tag 'perf-urgent-for-mingo-5.1-20190329' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent

Pull perf/urgent fixes from Arnaldo:

Core libraries:
Jiri Olsa:
- Fix max perf_event_attr.precise_ip detection (see the sketch after this list).

Kan Liang:
- Fix parser error for uncore event alias.

Wei Li:
- Fixup ordering of kernel maps after obtaining the main kernel map address.
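
A hedged sketch of the probing idea behind the precise_ip fix (illustrative
only, not the actual tools/perf code): ask the kernel directly by attempting
perf_event_open() with a simple attribute and walking precise_ip down from
the architectural maximum of 3 until the call succeeds.

    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int detect_max_precise_ip(void)
    {
            struct perf_event_attr attr = {
                    .type           = PERF_TYPE_HARDWARE,
                    .config         = PERF_COUNT_HW_CPU_CYCLES,
                    .size           = sizeof(struct perf_event_attr),
                    .exclude_kernel = 1,
            };
            int precise;

            for (precise = 3; precise >= 0; precise--) {
                    int fd;

                    attr.precise_ip = precise;
                    /* pid 0, cpu -1: measure the calling thread anywhere */
                    fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
                    if (fd >= 0) {
                            close(fd);      /* this precision is accepted */
                            break;
                    }
            }
            return precise; /* -1 if even precise_ip == 0 is rejected */
    }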

Intel PT:
Adrian Hunter:
- Fix TSC slip, where a TSC packet can slip past MTC packets so that the
  timestamp appears to go backwards (the monotonic-clamp idea is sketched
  after this list).

- Fixes for exported-sql-viewer GUI conversion to python3.
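
The TSC fix's general idea, as a hedged sketch with illustrative names (not
the actual Intel PT decoder structures): when a TSC packet carries a time
earlier than the estimate already advanced by MTC packets, clamp rather than
letting the reported timestamp move backwards.

    #include <stdint.h>

    struct pt_time_state {
            uint64_t timestamp;     /* decoder's current time estimate */
    };

    static uint64_t pt_update_from_tsc(struct pt_time_state *s,
                                       uint64_t tsc_time)
    {
            if (tsc_time < s->timestamp)
                    return s->timestamp;    /* TSC slipped past MTC */

            s->timestamp = tsc_time;
            return s->timestamp;
    }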

ARM coresight:
Solomon Tan:
- Fix the build by adding a missing case for an enumeration value introduced
  in a newer library version, which is now the required one.

tool headers:
Arnaldo Carvalho de Melo:
- Synchronize the tools' copies of kernel headers with the kernel, getting
  the new io_uring and pidfd_send_signal syscalls so that 'perf trace' can
  handle them (a usage sketch follows).
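
As a hedged usage sketch (pidfd_kill() is a hypothetical helper, not perf
code), the freshly synced numbers let tooling invoke the new syscalls
directly, e.g. pidfd_send_signal (number 424 per the unistd.h update below):

    #include <signal.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef __NR_pidfd_send_signal
    #define __NR_pidfd_send_signal 424
    #endif

    static int pidfd_kill(int pidfd)
    {
            /* pidfd_send_signal(pidfd, sig, siginfo, flags) */
            return syscall(__NR_pidfd_send_signal, pidfd, SIGKILL, NULL, 0);
    }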

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

Changed files: +719 -392
+194 -171
Documentation/filesystems/mount_api.txt
··· 12 12 13 13 (4) Filesystem context security. 14 14 15 - (5) VFS filesystem context operations. 15 + (5) VFS filesystem context API. 16 16 17 - (6) Parameter description. 17 + (6) Superblock creation helpers. 18 18 19 - (7) Parameter helper functions. 19 + (7) Parameter description. 20 + 21 + (8) Parameter helper functions. 20 22 21 23 22 24 ======== ··· 43 41 44 42 (7) Destroy the context. 45 43 46 - To support this, the file_system_type struct gains a new field: 44 + To support this, the file_system_type struct gains two new fields: 47 45 48 46 int (*init_fs_context)(struct fs_context *fc); 47 + const struct fs_parameter_description *parameters; 49 48 50 - which is invoked to set up the filesystem-specific parts of a filesystem 51 - context, including the additional space. 49 + The first is invoked to set up the filesystem-specific parts of a filesystem 50 + context, including the additional space, and the second points to the 51 + parameter description for validation at registration time and querying by a 52 + future system call. 52 53 53 54 Note that security initialisation is done *after* the filesystem is called so 54 55 that the namespaces may be adjusted first. ··· 78 73 void *s_fs_info; 79 74 unsigned int sb_flags; 80 75 unsigned int sb_flags_mask; 76 + unsigned int s_iflags; 77 + unsigned int lsm_flags; 81 78 enum fs_context_purpose purpose:8; 82 - bool sloppy:1; 83 - bool silent:1; 84 79 ... 85 80 }; 86 81 ··· 146 141 147 142 Which bits SB_* flags are to be set/cleared in super_block::s_flags. 148 143 144 + (*) unsigned int s_iflags 145 + 146 + These will be bitwise-OR'd with s->s_iflags when a superblock is created. 147 + 149 148 (*) enum fs_context_purpose 150 149 151 150 This indicates the purpose for which the context is intended. The ··· 158 149 FS_CONTEXT_FOR_MOUNT, -- New superblock for explicit mount 159 150 FS_CONTEXT_FOR_SUBMOUNT -- New automatic submount of extant mount 160 151 FS_CONTEXT_FOR_RECONFIGURE -- Change an existing mount 161 - 162 - (*) bool sloppy 163 - (*) bool silent 164 - 165 - These are set if the sloppy or silent mount options are given. 166 - 167 - [NOTE] sloppy is probably unnecessary when userspace passes over one 168 - option at a time since the error can just be ignored if userspace deems it 169 - to be unimportant. 170 - 171 - [NOTE] silent is probably redundant with sb_flags & SB_SILENT. 172 152 173 153 The mount context is created by calling vfs_new_fs_context() or 174 154 vfs_dup_fs_context() and is destroyed with put_fs_context(). Note that the ··· 340 342 It should return 0 on success or a negative error code on failure. 341 343 342 344 343 - ================================= 344 - VFS FILESYSTEM CONTEXT OPERATIONS 345 - ================================= 345 + ========================== 346 + VFS FILESYSTEM CONTEXT API 347 + ========================== 346 348 347 - There are four operations for creating a filesystem context and 348 - one for destroying a context: 349 + There are four operations for creating a filesystem context and one for 350 + destroying a context: 349 351 350 - (*) struct fs_context *vfs_new_fs_context(struct file_system_type *fs_type, 351 - struct dentry *reference, 352 - unsigned int sb_flags, 353 - unsigned int sb_flags_mask, 354 - enum fs_context_purpose purpose); 352 + (*) struct fs_context *fs_context_for_mount( 353 + struct file_system_type *fs_type, 354 + unsigned int sb_flags); 355 355 356 - Create a filesystem context for a given filesystem type and purpose. 
This 357 - allocates the filesystem context, sets the superblock flags, initialises 358 - the security and calls fs_type->init_fs_context() to initialise the 359 - filesystem private data. 356 + Allocate a filesystem context for the purpose of setting up a new mount, 357 + whether that be with a new superblock or sharing an existing one. This 358 + sets the superblock flags, initialises the security and calls 359 + fs_type->init_fs_context() to initialise the filesystem private data. 360 360 361 - reference can be NULL or it may indicate the root dentry of a superblock 362 - that is going to be reconfigured (FS_CONTEXT_FOR_RECONFIGURE) or 363 - the automount point that triggered a submount (FS_CONTEXT_FOR_SUBMOUNT). 364 - This is provided as a source of namespace information. 361 + fs_type specifies the filesystem type that will manage the context and 362 + sb_flags presets the superblock flags stored therein. 363 + 364 + (*) struct fs_context *fs_context_for_reconfigure( 365 + struct dentry *dentry, 366 + unsigned int sb_flags, 367 + unsigned int sb_flags_mask); 368 + 369 + Allocate a filesystem context for the purpose of reconfiguring an 370 + existing superblock. dentry provides a reference to the superblock to be 371 + configured. sb_flags and sb_flags_mask indicate which superblock flags 372 + need changing and to what. 373 + 374 + (*) struct fs_context *fs_context_for_submount( 375 + struct file_system_type *fs_type, 376 + struct dentry *reference); 377 + 378 + Allocate a filesystem context for the purpose of creating a new mount for 379 + an automount point or other derived superblock. fs_type specifies the 380 + filesystem type that will manage the context and the reference dentry 381 + supplies the parameters. Namespaces are propagated from the reference 382 + dentry's superblock also. 383 + 384 + Note that it's not a requirement that the reference dentry be of the same 385 + filesystem type as fs_type. 365 386 366 387 (*) struct fs_context *vfs_dup_fs_context(struct fs_context *src_fc); 367 388 ··· 406 389 407 390 For the remaining operations, if an error occurs, a negative error code will be 408 391 returned. 409 - 410 - (*) int vfs_get_tree(struct fs_context *fc); 411 - 412 - Get or create the mountable root and superblock, using the parameters in 413 - the filesystem context to select/configure the superblock. This invokes 414 - the ->validate() op and then the ->get_tree() op. 415 - 416 - [NOTE] ->validate() could perhaps be rolled into ->get_tree() and 417 - ->reconfigure(). 418 - 419 - (*) struct vfsmount *vfs_create_mount(struct fs_context *fc); 420 - 421 - Create a mount given the parameters in the specified filesystem context. 422 - Note that this does not attach the mount to anything. 423 392 424 393 (*) int vfs_parse_fs_param(struct fs_context *fc, 425 394 struct fs_parameter *param); ··· 435 432 clear the pointer, but then becomes responsible for disposing of the 436 433 object. 437 434 438 - (*) int vfs_parse_fs_string(struct fs_context *fc, char *key, 435 + (*) int vfs_parse_fs_string(struct fs_context *fc, const char *key, 439 436 const char *value, size_t v_size); 440 437 441 - A wrapper around vfs_parse_fs_param() that just passes a constant string. 438 + A wrapper around vfs_parse_fs_param() that copies the value string it is 439 + passed. 
442 440 443 441 (*) int generic_parse_monolithic(struct fs_context *fc, void *data); 444 442 445 443 Parse a sys_mount() data page, assuming the form to be a text list 446 444 consisting of key[=val] options separated by commas. Each item in the 447 445 list is passed to vfs_mount_option(). This is the default when the 448 - ->parse_monolithic() operation is NULL. 446 + ->parse_monolithic() method is NULL. 447 + 448 + (*) int vfs_get_tree(struct fs_context *fc); 449 + 450 + Get or create the mountable root and superblock, using the parameters in 451 + the filesystem context to select/configure the superblock. This invokes 452 + the ->get_tree() method. 453 + 454 + (*) struct vfsmount *vfs_create_mount(struct fs_context *fc); 455 + 456 + Create a mount given the parameters in the specified filesystem context. 457 + Note that this does not attach the mount to anything. 458 + 459 + 460 + =========================== 461 + SUPERBLOCK CREATION HELPERS 462 + =========================== 463 + 464 + A number of VFS helpers are available for use by filesystems for the creation 465 + or looking up of superblocks. 466 + 467 + (*) struct super_block * 468 + sget_fc(struct fs_context *fc, 469 + int (*test)(struct super_block *sb, struct fs_context *fc), 470 + int (*set)(struct super_block *sb, struct fs_context *fc)); 471 + 472 + This is the core routine. If test is non-NULL, it searches for an 473 + existing superblock matching the criteria held in the fs_context, using 474 + the test function to match them. If no match is found, a new superblock 475 + is created and the set function is called to set it up. 476 + 477 + Prior to the set function being called, fc->s_fs_info will be transferred 478 + to sb->s_fs_info - and fc->s_fs_info will be cleared if set returns 479 + success (ie. 0). 480 + 481 + The following helpers all wrap sget_fc(): 482 + 483 + (*) int vfs_get_super(struct fs_context *fc, 484 + enum vfs_get_super_keying keying, 485 + int (*fill_super)(struct super_block *sb, 486 + struct fs_context *fc)) 487 + 488 + This creates/looks up a deviceless superblock. The keying indicates how 489 + many superblocks of this type may exist and in what manner they may be 490 + shared: 491 + 492 + (1) vfs_get_single_super 493 + 494 + Only one such superblock may exist in the system. Any further 495 + attempt to get a new superblock gets this one (and any parameter 496 + differences are ignored). 497 + 498 + (2) vfs_get_keyed_super 499 + 500 + Multiple superblocks of this type may exist and they're keyed on 501 + their s_fs_info pointer (for example this may refer to a 502 + namespace). 503 + 504 + (3) vfs_get_independent_super 505 + 506 + Multiple independent superblocks of this type may exist. This 507 + function never matches an existing one and always creates a new 508 + one. 
449 509 450 510 451 511 ===================== ··· 520 454 521 455 struct fs_parameter_description { 522 456 const char name[16]; 523 - u8 nr_params; 524 - u8 nr_alt_keys; 525 - u8 nr_enums; 526 - bool ignore_unknown; 527 - bool no_source; 528 - const char *const *keys; 529 - const struct constant_table *alt_keys; 530 457 const struct fs_parameter_spec *specs; 531 458 const struct fs_parameter_enum *enums; 532 459 }; 533 460 534 461 For example: 535 462 536 - enum afs_param { 463 + enum { 537 464 Opt_autocell, 538 465 Opt_bar, 539 466 Opt_dyn, 540 467 Opt_foo, 541 468 Opt_source, 542 - nr__afs_params 543 469 }; 544 470 545 471 static const struct fs_parameter_description afs_fs_parameters = { 546 472 .name = "kAFS", 547 - .nr_params = nr__afs_params, 548 - .nr_alt_keys = ARRAY_SIZE(afs_param_alt_keys), 549 - .nr_enums = ARRAY_SIZE(afs_param_enums), 550 - .keys = afs_param_keys, 551 - .alt_keys = afs_param_alt_keys, 552 473 .specs = afs_param_specs, 553 474 .enums = afs_param_enums, 554 475 }; ··· 547 494 The name to be used in error messages generated by the parse helper 548 495 functions. 549 496 550 - (2) u8 nr_params; 497 + (2) const struct fs_parameter_specification *specs; 551 498 552 - The number of discrete parameter identifiers. This indicates the number 553 - of elements in the ->types[] array and also limits the values that may be 554 - used in the values that the ->keys[] array maps to. 499 + Table of parameter specifications, terminated with a null entry, where the 500 + entries are of type: 555 501 556 - It is expected that, for example, two parameters that are related, say 557 - "acl" and "noacl" with have the same ID, but will be flagged to indicate 558 - that one is the inverse of the other. The value can then be picked out 559 - from the parse result. 560 - 561 - (3) const struct fs_parameter_specification *specs; 562 - 563 - Table of parameter specifications, where the entries are of type: 564 - 565 - struct fs_parameter_type { 566 - enum fs_parameter_spec type:8; 567 - u8 flags; 502 + struct fs_parameter_spec { 503 + const char *name; 504 + u8 opt; 505 + enum fs_parameter_type type:8; 506 + unsigned short flags; 568 507 }; 569 508 570 - and the parameter identifier is the index to the array. 'type' indicates 571 - the desired value type and must be one of: 509 + The 'name' field is a string to match exactly to the parameter key (no 510 + wildcards, patterns and no case-independence) and 'opt' is the value that 511 + will be returned by the fs_parser() function in the case of a successful 512 + match. 
513 + 514 + The 'type' field indicates the desired value type and must be one of: 572 515 573 516 TYPE NAME EXPECTED VALUE RESULT IN 574 517 ======================= ======================= ===================== ··· 574 525 fs_param_is_u32_octal 32-bit octal int result->uint_32 575 526 fs_param_is_u32_hex 32-bit hex int result->uint_32 576 527 fs_param_is_s32 32-bit signed int result->int_32 528 + fs_param_is_u64 64-bit unsigned int result->uint_64 577 529 fs_param_is_enum Enum value name result->uint_32 578 530 fs_param_is_string Arbitrary string param->string 579 531 fs_param_is_blob Binary blob param->blob 580 532 fs_param_is_blockdev Blockdev path * Needs lookup 581 533 fs_param_is_path Path * Needs lookup 582 - fs_param_is_fd File descriptor param->file 583 - 584 - And each parameter can be qualified with 'flags': 585 - 586 - fs_param_v_optional The value is optional 587 - fs_param_neg_with_no If key name is prefixed with "no", it is false 588 - fs_param_neg_with_empty If value is "", it is false 589 - fs_param_deprecated The parameter is deprecated. 590 - 591 - For example: 592 - 593 - static const struct fs_parameter_spec afs_param_specs[nr__afs_params] = { 594 - [Opt_autocell] = { fs_param_is flag }, 595 - [Opt_bar] = { fs_param_is_enum }, 596 - [Opt_dyn] = { fs_param_is flag }, 597 - [Opt_foo] = { fs_param_is_bool, fs_param_neg_with_no }, 598 - [Opt_source] = { fs_param_is_string }, 599 - }; 534 + fs_param_is_fd File descriptor result->int_32 600 535 601 536 Note that if the value is of fs_param_is_bool type, fs_parse() will try 602 537 to match any string value against "0", "1", "no", "yes", "false", "true". 603 538 604 - [!] NOTE that the table must be sorted according to primary key name so 605 - that ->keys[] is also sorted. 539 + Each parameter can also be qualified with 'flags': 606 540 607 - (4) const char *const *keys; 541 + fs_param_v_optional The value is optional 542 + fs_param_neg_with_no result->negated set if key is prefixed with "no" 543 + fs_param_neg_with_empty result->negated set if value is "" 544 + fs_param_deprecated The parameter is deprecated. 608 545 609 - Table of primary key names for the parameters. There must be one entry 610 - per defined parameter. The table is optional if ->nr_params is 0. 
The 611 - table is just an array of names e.g.: 546 + These are wrapped with a number of convenience wrappers: 612 547 613 - static const char *const afs_param_keys[nr__afs_params] = { 614 - [Opt_autocell] = "autocell", 615 - [Opt_bar] = "bar", 616 - [Opt_dyn] = "dyn", 617 - [Opt_foo] = "foo", 618 - [Opt_source] = "source", 548 + MACRO SPECIFIES 549 + ======================= =============================================== 550 + fsparam_flag() fs_param_is_flag 551 + fsparam_flag_no() fs_param_is_flag, fs_param_neg_with_no 552 + fsparam_bool() fs_param_is_bool 553 + fsparam_u32() fs_param_is_u32 554 + fsparam_u32oct() fs_param_is_u32_octal 555 + fsparam_u32hex() fs_param_is_u32_hex 556 + fsparam_s32() fs_param_is_s32 557 + fsparam_u64() fs_param_is_u64 558 + fsparam_enum() fs_param_is_enum 559 + fsparam_string() fs_param_is_string 560 + fsparam_blob() fs_param_is_blob 561 + fsparam_bdev() fs_param_is_blockdev 562 + fsparam_path() fs_param_is_path 563 + fsparam_fd() fs_param_is_fd 564 + 565 + all of which take two arguments, name string and option number - for 566 + example: 567 + 568 + static const struct fs_parameter_spec afs_param_specs[] = { 569 + fsparam_flag ("autocell", Opt_autocell), 570 + fsparam_flag ("dyn", Opt_dyn), 571 + fsparam_string ("source", Opt_source), 572 + fsparam_flag_no ("foo", Opt_foo), 573 + {} 619 574 }; 620 575 621 - [!] NOTE that the table must be sorted such that the table can be searched 622 - with bsearch() using strcmp(). This means that the Opt_* values must 623 - correspond to the entries in this table. 624 - 625 - (5) const struct constant_table *alt_keys; 626 - u8 nr_alt_keys; 627 - 628 - Table of additional key names and their mappings to parameter ID plus the 629 - number of elements in the table. This is optional. The table is just an 630 - array of { name, integer } pairs, e.g.: 631 - 632 - static const struct constant_table afs_param_keys[] = { 633 - { "baz", Opt_bar }, 634 - { "dynamic", Opt_dyn }, 635 - }; 636 - 637 - [!] NOTE that the table must be sorted such that strcmp() can be used with 638 - bsearch() to search the entries. 639 - 640 - The parameter ID can also be fs_param_key_removed to indicate that a 641 - deprecated parameter has been removed and that an error will be given. 642 - This differs from fs_param_deprecated where the parameter may still have 643 - an effect. 644 - 645 - Further, the behaviour of the parameter may differ when an alternate name 646 - is used (for instance with NFS, "v3", "v4.2", etc. are alternate names). 576 + An addition macro, __fsparam() is provided that takes an additional pair 577 + of arguments to specify the type and the flags for anything that doesn't 578 + match one of the above macros. 647 579 648 580 (6) const struct fs_parameter_enum *enums; 649 - u8 nr_enums; 650 581 651 - Table of enum value names to integer mappings and the number of elements 652 - stored therein. This is of type: 582 + Table of enum value names to integer mappings, terminated with a null 583 + entry. This is of type: 653 584 654 585 struct fs_parameter_enum { 655 - u8 param_id; 586 + u8 opt; 656 587 char name[14]; 657 588 u8 value; 658 589 }; ··· 649 620 If a parameter of type fs_param_is_enum is encountered, fs_parse() will 650 621 try to look the value up in the enum table and the result will be stored 651 622 in the parse result. 652 - 653 - (7) bool no_source; 654 - 655 - If this is set, fs_parse() will ignore any "source" parameter and not 656 - pass it to the filesystem. 
657 623 658 624 The parser should be pointed to by the parser pointer in the file_system_type 659 625 struct as this will provide validation on registration (if ··· 674 650 int value; 675 651 }; 676 652 677 - and it must be sorted such that it can be searched using bsearch() using 678 - strcmp(). If a match is found, the corresponding value is returned. If a 679 - match isn't found, the not_found value is returned instead. 653 + If a match is found, the corresponding value is returned. If a match 654 + isn't found, the not_found value is returned instead. 680 655 681 656 (*) bool validate_constant_table(const struct constant_table *tbl, 682 657 size_t tbl_size, ··· 688 665 should just be set to lie inside the low-to-high range. 689 666 690 667 If all is good, true is returned. If the table is invalid, errors are 691 - logged to dmesg, the stack is dumped and false is returned. 668 + logged to dmesg and false is returned. 669 + 670 + (*) bool fs_validate_description(const struct fs_parameter_description *desc); 671 + 672 + This performs some validation checks on a parameter description. It 673 + returns true if the description is good and false if it is not. It will 674 + log errors to dmesg if validation fails. 692 675 693 676 (*) int fs_parse(struct fs_context *fc, 694 - const struct fs_param_parser *parser, 677 + const struct fs_parameter_description *desc, 695 678 struct fs_parameter *param, 696 - struct fs_param_parse_result *result); 679 + struct fs_parse_result *result); 697 680 698 681 This is the main interpreter of parameters. It uses the parameter 699 - description (parser) to look up the name of the parameter to use and to 700 - convert that to a parameter ID (stored in result->key). 682 + description to look up a parameter by key name and to convert that to an 683 + option number (which it returns). 701 684 702 685 If successful, and if the parameter type indicates the result is a 703 686 boolean, integer or enum type, the value is converted by this function and 704 - the result stored in result->{boolean,int_32,uint_32}. 687 + the result stored in result->{boolean,int_32,uint_32,uint_64}. 705 688 706 689 If a match isn't initially made, the key is prefixed with "no" and no 707 690 value is present then an attempt will be made to look up the key with the 708 691 prefix removed. If this matches a parameter for which the type has flag 709 - fs_param_neg_with_no set, then a match will be made and the value will be 710 - set to false/0/NULL. 692 + fs_param_neg_with_no set, then a match will be made and result->negated 693 + will be set to true. 711 694 712 - If the parameter is successfully matched and, optionally, parsed 713 - correctly, 1 is returned. If the parameter isn't matched and 714 - parser->ignore_unknown is set, then 0 is returned. Otherwise -EINVAL is 715 - returned. 716 - 717 - (*) bool fs_validate_description(const struct fs_parameter_description *desc); 718 - 719 - This is validates the parameter description. It returns true if the 720 - description is good and false if it is not. 695 + If the parameter isn't matched, -ENOPARAM will be returned; if the 696 + parameter is matched, but the value is erroneous, -EINVAL will be 697 + returned; otherwise the parameter's option number will be returned. 721 698 722 699 (*) int fs_lookup_param(struct fs_context *fc, 723 700 struct fs_parameter *value,
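
A minimal sketch of the parameter API the mount_api.txt text above now
documents, for a hypothetical "myfs" filesystem (adapted from the kAFS
example; none of the myfs_* names are real kernel code):

    #include <linux/fs_context.h>
    #include <linux/fs_parser.h>

    enum { Opt_autocell, Opt_dyn, Opt_foo, Opt_source };

    static const struct fs_parameter_spec myfs_param_specs[] = {
            fsparam_flag   ("autocell", Opt_autocell),
            fsparam_flag   ("dyn",      Opt_dyn),
            fsparam_flag_no("foo",      Opt_foo),   /* also matches "nofoo" */
            fsparam_string ("source",   Opt_source),
            {}
    };

    static const struct fs_parameter_description myfs_fs_parameters = {
            .name  = "myfs",            /* used in error messages */
            .specs = myfs_param_specs,
    };

    static int myfs_parse_param(struct fs_context *fc,
                                struct fs_parameter *param)
    {
            struct fs_parse_result result;
            int opt;

            opt = fs_parse(fc, &myfs_fs_parameters, param, &result);
            if (opt < 0)
                    return opt;     /* -ENOPARAM unknown, -EINVAL bad value */

            switch (opt) {
            case Opt_foo:
                    /* result.negated is true when "nofoo" was given */
                    break;
            default:
                    break;
            }
            return 0;
    }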
+1
arch/arm/Kconfig
··· 596 596 select HAVE_IDE 597 597 select PM_GENERIC_DOMAINS if PM 598 598 select PM_GENERIC_DOMAINS_OF if PM && OF 599 + select REGMAP_MMIO 599 600 select RESET_CONTROLLER 600 601 select SPARSE_IRQ 601 602 select USE_OF
+1 -1
arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts
··· 93 93 }; 94 94 95 95 &hdmi { 96 - hpd-gpios = <&gpio 46 GPIO_ACTIVE_LOW>; 96 + hpd-gpios = <&gpio 46 GPIO_ACTIVE_HIGH>; 97 97 }; 98 98 99 99 &pwm {
+3 -3
arch/arm/boot/dts/imx6dl-yapp4-common.dtsi
··· 114 114 reg = <2>; 115 115 }; 116 116 117 - switch@0 { 117 + switch@10 { 118 118 compatible = "qca,qca8334"; 119 - reg = <0>; 119 + reg = <10>; 120 120 121 121 switch_ports: ports { 122 122 #address-cells = <1>; ··· 125 125 ethphy0: port@0 { 126 126 reg = <0>; 127 127 label = "cpu"; 128 - phy-mode = "rgmii"; 128 + phy-mode = "rgmii-id"; 129 129 ethernet = <&fec>; 130 130 131 131 fixed-link {
+2 -2
arch/arm/boot/dts/imx6qdl-icore-rqs.dtsi
··· 264 264 pinctrl-2 = <&pinctrl_usdhc3_200mhz>; 265 265 vmcc-supply = <&reg_sd3_vmmc>; 266 266 cd-gpios = <&gpio1 1 GPIO_ACTIVE_LOW>; 267 - bus-witdh = <4>; 267 + bus-width = <4>; 268 268 no-1-8-v; 269 269 status = "okay"; 270 270 }; ··· 275 275 pinctrl-1 = <&pinctrl_usdhc4_100mhz>; 276 276 pinctrl-2 = <&pinctrl_usdhc4_200mhz>; 277 277 vmcc-supply = <&reg_sd4_vmmc>; 278 - bus-witdh = <8>; 278 + bus-width = <8>; 279 279 no-1-8-v; 280 280 non-removable; 281 281 status = "okay";
+1
arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
··· 91 91 pinctrl-0 = <&pinctrl_enet>; 92 92 phy-handle = <&ethphy>; 93 93 phy-mode = "rgmii"; 94 + phy-reset-duration = <10>; /* in msecs */ 94 95 phy-reset-gpios = <&gpio3 23 GPIO_ACTIVE_LOW>; 95 96 phy-supply = <&vdd_eth_io_reg>; 96 97 status = "disabled";
+1 -1
arch/arm/boot/dts/imx6ull-pinfunc-snvs.h
··· 1 - // SPDX-License-Identifier: GPL-2.0 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 3 * Copyright (C) 2016 Freescale Semiconductor, Inc. 4 4 * Copyright (C) 2017 NXP
+5 -4
arch/arm/boot/dts/ste-nomadik-nhk15.dts
··· 213 213 gpio-sck = <&gpio0 5 GPIO_ACTIVE_HIGH>; 214 214 gpio-mosi = <&gpio0 4 GPIO_ACTIVE_HIGH>; 215 215 /* 216 - * It's not actually active high, but the frameworks assume 217 - * the polarity of the passed-in GPIO is "normal" (active 218 - * high) then actively drives the line low to select the 219 - * chip. 216 + * This chipselect is active high. Just setting the flags 217 + * to GPIO_ACTIVE_HIGH is not enough for the SPI DT bindings, 218 + * it will be ignored, only the special "spi-cs-high" flag 219 + * really counts. 220 220 */ 221 221 cs-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>; 222 + spi-cs-high; 222 223 num-chipselects = <1>; 223 224 224 225 /*
+3
arch/arm/configs/imx_v4_v5_defconfig
··· 170 170 # CONFIG_IOMMU_SUPPORT is not set 171 171 CONFIG_IIO=y 172 172 CONFIG_FSL_MX25_ADC=y 173 + CONFIG_PWM=y 174 + CONFIG_PWM_IMX1=y 175 + CONFIG_PWM_IMX27=y 173 176 CONFIG_EXT4_FS=y 174 177 # CONFIG_DNOTIFY is not set 175 178 CONFIG_VFAT_FS=y
+1 -1
arch/arm/configs/imx_v6_v7_defconfig
··· 398 398 CONFIG_MPL3115=y 399 399 CONFIG_PWM=y 400 400 CONFIG_PWM_FSL_FTM=y 401 - CONFIG_PWM_IMX=y 401 + CONFIG_PWM_IMX27=y 402 402 CONFIG_NVMEM_IMX_OCOTP=y 403 403 CONFIG_NVMEM_VF610_OCOTP=y 404 404 CONFIG_TEE=y
+10 -17
arch/arm/mach-imx/cpuidle-imx6q.c
··· 16 16 #include "cpuidle.h" 17 17 #include "hardware.h" 18 18 19 - static atomic_t master = ATOMIC_INIT(0); 20 - static DEFINE_SPINLOCK(master_lock); 19 + static int num_idle_cpus = 0; 20 + static DEFINE_SPINLOCK(cpuidle_lock); 21 21 22 22 static int imx6q_enter_wait(struct cpuidle_device *dev, 23 23 struct cpuidle_driver *drv, int index) 24 24 { 25 - if (atomic_inc_return(&master) == num_online_cpus()) { 26 - /* 27 - * With this lock, we prevent other cpu to exit and enter 28 - * this function again and become the master. 29 - */ 30 - if (!spin_trylock(&master_lock)) 31 - goto idle; 25 + spin_lock(&cpuidle_lock); 26 + if (++num_idle_cpus == num_online_cpus()) 32 27 imx6_set_lpm(WAIT_UNCLOCKED); 33 - cpu_do_idle(); 34 - imx6_set_lpm(WAIT_CLOCKED); 35 - spin_unlock(&master_lock); 36 - goto done; 37 - } 28 + spin_unlock(&cpuidle_lock); 38 29 39 - idle: 40 30 cpu_do_idle(); 41 - done: 42 - atomic_dec(&master); 31 + 32 + spin_lock(&cpuidle_lock); 33 + if (num_idle_cpus-- == num_online_cpus()) 34 + imx6_set_lpm(WAIT_CLOCKED); 35 + spin_unlock(&cpuidle_lock); 43 36 44 37 return index; 45 38 }
+1
arch/arm/mach-imx/mach-imx51.c
··· 59 59 return; 60 60 61 61 m4if_base = of_iomap(np, 0); 62 + of_node_put(np); 62 63 if (!m4if_base) { 63 64 pr_err("Unable to map M4IF registers\n"); 64 65 return;
+1
arch/arm64/Kconfig.platforms
··· 27 27 bool "Broadcom BCM2835 family" 28 28 select TIMER_OF 29 29 select GPIOLIB 30 + select MFD_CORE 30 31 select PINCTRL 31 32 select PINCTRL_BCM2835 32 33 select ARM_AMBA
-1
arch/arm64/boot/dts/nvidia/tegra186.dtsi
··· 321 321 nvidia,default-trim = <0x9>; 322 322 nvidia,dqs-trim = <63>; 323 323 mmc-hs400-1_8v; 324 - supports-cqe; 325 324 status = "disabled"; 326 325 }; 327 326
+3 -4
arch/arm64/boot/dts/renesas/r8a774c0.dtsi
··· 2 2 /* 3 3 * Device Tree Source for the RZ/G2E (R8A774C0) SoC 4 4 * 5 - * Copyright (C) 2018 Renesas Electronics Corp. 5 + * Copyright (C) 2018-2019 Renesas Electronics Corp. 6 6 */ 7 7 8 8 #include <dt-bindings/clock/r8a774c0-cpg-mssr.h> ··· 1150 1150 <&cpg CPG_CORE R8A774C0_CLK_S3D1C>, 1151 1151 <&scif_clk>; 1152 1152 clock-names = "fck", "brg_int", "scif_clk"; 1153 - dmas = <&dmac1 0x5b>, <&dmac1 0x5a>, 1154 - <&dmac2 0x5b>, <&dmac2 0x5a>; 1155 - dma-names = "tx", "rx", "tx", "rx"; 1153 + dmas = <&dmac0 0x5b>, <&dmac0 0x5a>; 1154 + dma-names = "tx", "rx"; 1156 1155 power-domains = <&sysc R8A774C0_PD_ALWAYS_ON>; 1157 1156 resets = <&cpg 202>; 1158 1157 status = "disabled";
+3 -4
arch/arm64/boot/dts/renesas/r8a77990.dtsi
··· 2 2 /* 3 3 * Device Tree Source for the R-Car E3 (R8A77990) SoC 4 4 * 5 - * Copyright (C) 2018 Renesas Electronics Corp. 5 + * Copyright (C) 2018-2019 Renesas Electronics Corp. 6 6 */ 7 7 8 8 #include <dt-bindings/clock/r8a77990-cpg-mssr.h> ··· 1067 1067 <&cpg CPG_CORE R8A77990_CLK_S3D1C>, 1068 1068 <&scif_clk>; 1069 1069 clock-names = "fck", "brg_int", "scif_clk"; 1070 - dmas = <&dmac1 0x5b>, <&dmac1 0x5a>, 1071 - <&dmac2 0x5b>, <&dmac2 0x5a>; 1072 - dma-names = "tx", "rx", "tx", "rx"; 1070 + dmas = <&dmac0 0x5b>, <&dmac0 0x5a>; 1071 + dma-names = "tx", "rx"; 1073 1072 power-domains = <&sysc R8A77990_PD_ALWAYS_ON>; 1074 1073 resets = <&cpg 202>; 1075 1074 status = "disabled";
+11
arch/s390/include/asm/ap.h
··· 360 360 return reg1; 361 361 } 362 362 363 + /* 364 + * Interface to tell the AP bus code that a configuration 365 + * change has happened. The bus code should at least do 366 + * an ap bus resource rescan. 367 + */ 368 + #if IS_ENABLED(CONFIG_ZCRYPT) 369 + void ap_bus_cfg_chg(void); 370 + #else 371 + static inline void ap_bus_cfg_chg(void){}; 372 + #endif 373 + 363 374 #endif /* _ASM_S390_AP_H_ */
+7 -4
arch/s390/include/asm/elf.h
··· 252 252 253 253 /* 254 254 * Cache aliasing on the latest machines calls for a mapping granularity 255 - * of 512KB. For 64-bit processes use a 512KB alignment and a randomization 256 - * of up to 1GB. For 31-bit processes the virtual address space is limited, 257 - * use no alignment and limit the randomization to 8MB. 255 + * of 512KB for the anonymous mapping base. For 64-bit processes use a 256 + * 512KB alignment and a randomization of up to 1GB. For 31-bit processes 257 + * the virtual address space is limited, use no alignment and limit the 258 + * randomization to 8MB. 259 + * For the additional randomization of the program break use 32MB for 260 + * 64-bit and 8MB for 31-bit. 258 261 */ 259 - #define BRK_RND_MASK (is_compat_task() ? 0x7ffUL : 0x3ffffUL) 262 + #define BRK_RND_MASK (is_compat_task() ? 0x7ffUL : 0x1fffUL) 260 263 #define MMAP_RND_MASK (is_compat_task() ? 0x7ffUL : 0x3ff80UL) 261 264 #define MMAP_ALIGN_MASK (is_compat_task() ? 0 : 0x7fUL) 262 265 #define STACK_RND_MASK MMAP_RND_MASK
+31 -30
arch/s390/include/asm/lowcore.h
··· 91 91 __u64 hardirq_timer; /* 0x02e8 */ 92 92 __u64 softirq_timer; /* 0x02f0 */ 93 93 __u64 steal_timer; /* 0x02f8 */ 94 - __u64 last_update_timer; /* 0x0300 */ 95 - __u64 last_update_clock; /* 0x0308 */ 96 - __u64 int_clock; /* 0x0310 */ 97 - __u64 mcck_clock; /* 0x0318 */ 98 - __u64 clock_comparator; /* 0x0320 */ 99 - __u64 boot_clock[2]; /* 0x0328 */ 94 + __u64 avg_steal_timer; /* 0x0300 */ 95 + __u64 last_update_timer; /* 0x0308 */ 96 + __u64 last_update_clock; /* 0x0310 */ 97 + __u64 int_clock; /* 0x0318*/ 98 + __u64 mcck_clock; /* 0x0320 */ 99 + __u64 clock_comparator; /* 0x0328 */ 100 + __u64 boot_clock[2]; /* 0x0330 */ 100 101 101 102 /* Current process. */ 102 - __u64 current_task; /* 0x0338 */ 103 - __u64 kernel_stack; /* 0x0340 */ 103 + __u64 current_task; /* 0x0340 */ 104 + __u64 kernel_stack; /* 0x0348 */ 104 105 105 106 /* Interrupt, DAT-off and restartstack. */ 106 - __u64 async_stack; /* 0x0348 */ 107 - __u64 nodat_stack; /* 0x0350 */ 108 - __u64 restart_stack; /* 0x0358 */ 107 + __u64 async_stack; /* 0x0350 */ 108 + __u64 nodat_stack; /* 0x0358 */ 109 + __u64 restart_stack; /* 0x0360 */ 109 110 110 111 /* Restart function and parameter. */ 111 - __u64 restart_fn; /* 0x0360 */ 112 - __u64 restart_data; /* 0x0368 */ 113 - __u64 restart_source; /* 0x0370 */ 112 + __u64 restart_fn; /* 0x0368 */ 113 + __u64 restart_data; /* 0x0370 */ 114 + __u64 restart_source; /* 0x0378 */ 114 115 115 116 /* Address space pointer. */ 116 - __u64 kernel_asce; /* 0x0378 */ 117 - __u64 user_asce; /* 0x0380 */ 118 - __u64 vdso_asce; /* 0x0388 */ 117 + __u64 kernel_asce; /* 0x0380 */ 118 + __u64 user_asce; /* 0x0388 */ 119 + __u64 vdso_asce; /* 0x0390 */ 119 120 120 121 /* 121 122 * The lpp and current_pid fields form a 122 123 * 64-bit value that is set as program 123 124 * parameter with the LPP instruction. 124 125 */ 125 - __u32 lpp; /* 0x0390 */ 126 - __u32 current_pid; /* 0x0394 */ 126 + __u32 lpp; /* 0x0398 */ 127 + __u32 current_pid; /* 0x039c */ 127 128 128 129 /* SMP info area */ 129 - __u32 cpu_nr; /* 0x0398 */ 130 - __u32 softirq_pending; /* 0x039c */ 131 - __u32 preempt_count; /* 0x03a0 */ 132 - __u32 spinlock_lockval; /* 0x03a4 */ 133 - __u32 spinlock_index; /* 0x03a8 */ 134 - __u32 fpu_flags; /* 0x03ac */ 135 - __u64 percpu_offset; /* 0x03b0 */ 136 - __u64 vdso_per_cpu_data; /* 0x03b8 */ 137 - __u64 machine_flags; /* 0x03c0 */ 138 - __u64 gmap; /* 0x03c8 */ 139 - __u8 pad_0x03d0[0x0400-0x03d0]; /* 0x03d0 */ 130 + __u32 cpu_nr; /* 0x03a0 */ 131 + __u32 softirq_pending; /* 0x03a4 */ 132 + __u32 preempt_count; /* 0x03a8 */ 133 + __u32 spinlock_lockval; /* 0x03ac */ 134 + __u32 spinlock_index; /* 0x03b0 */ 135 + __u32 fpu_flags; /* 0x03b4 */ 136 + __u64 percpu_offset; /* 0x03b8 */ 137 + __u64 vdso_per_cpu_data; /* 0x03c0 */ 138 + __u64 machine_flags; /* 0x03c8 */ 139 + __u64 gmap; /* 0x03d0 */ 140 + __u8 pad_0x03d8[0x0400-0x03d8]; /* 0x03d8 */ 140 141 141 142 /* br %r1 trampoline */ 142 143 __u16 br_r1_trampoline; /* 0x0400 */
+13 -6
arch/s390/kernel/perf_cpum_cf_diag.c
··· 196 196 */ 197 197 static int __hw_perf_event_init(struct perf_event *event) 198 198 { 199 - struct cpu_cf_events *cpuhw = this_cpu_ptr(&cpu_cf_events); 200 199 struct perf_event_attr *attr = &event->attr; 200 + struct cpu_cf_events *cpuhw; 201 201 enum cpumf_ctr_set i; 202 202 int err = 0; 203 203 204 - debug_sprintf_event(cf_diag_dbg, 5, 205 - "%s event %p cpu %d authorized %#x\n", __func__, 206 - event, event->cpu, cpuhw->info.auth_ctl); 204 + debug_sprintf_event(cf_diag_dbg, 5, "%s event %p cpu %d\n", __func__, 205 + event, event->cpu); 207 206 208 207 event->hw.config = attr->config; 209 208 event->hw.config_base = 0; 210 - local64_set(&event->count, 0); 211 209 212 - /* Add all authorized counter sets to config_base */ 210 + /* Add all authorized counter sets to config_base. The 211 + * the hardware init function is either called per-cpu or just once 212 + * for all CPUS (event->cpu == -1). This depends on the whether 213 + * counting is started for all CPUs or on a per workload base where 214 + * the perf event moves from one CPU to another CPU. 215 + * Checking the authorization on any CPU is fine as the hardware 216 + * applies the same authorization settings to all CPUs. 217 + */ 218 + cpuhw = &get_cpu_var(cpu_cf_events); 213 219 for (i = CPUMF_CTR_SET_BASIC; i < CPUMF_CTR_SET_MAX; ++i) 214 220 if (cpuhw->info.auth_ctl & cpumf_ctr_ctl[i]) 215 221 event->hw.config_base |= cpumf_ctr_ctl[i]; 222 + put_cpu_var(cpu_cf_events); 216 223 217 224 /* No authorized counter sets, nothing to count/sample */ 218 225 if (!event->hw.config_base) {
+2 -1
arch/s390/kernel/smp.c
··· 266 266 lc->percpu_offset = __per_cpu_offset[cpu]; 267 267 lc->kernel_asce = S390_lowcore.kernel_asce; 268 268 lc->machine_flags = S390_lowcore.machine_flags; 269 - lc->user_timer = lc->system_timer = lc->steal_timer = 0; 269 + lc->user_timer = lc->system_timer = 270 + lc->steal_timer = lc->avg_steal_timer = 0; 270 271 __ctl_store(lc->cregs_save_area, 0, 15); 271 272 save_access_regs((unsigned int *) lc->access_regs_save_area); 272 273 memcpy(lc->stfle_fac_list, S390_lowcore.stfle_fac_list,
+12 -7
arch/s390/kernel/vtime.c
··· 124 124 */ 125 125 static int do_account_vtime(struct task_struct *tsk) 126 126 { 127 - u64 timer, clock, user, guest, system, hardirq, softirq, steal; 127 + u64 timer, clock, user, guest, system, hardirq, softirq; 128 128 129 129 timer = S390_lowcore.last_update_timer; 130 130 clock = S390_lowcore.last_update_clock; ··· 182 182 if (softirq) 183 183 account_system_index_scaled(tsk, softirq, CPUTIME_SOFTIRQ); 184 184 185 - steal = S390_lowcore.steal_timer; 186 - if ((s64) steal > 0) { 187 - S390_lowcore.steal_timer = 0; 188 - account_steal_time(cputime_to_nsecs(steal)); 189 - } 190 - 191 185 return virt_timer_forward(user + guest + system + hardirq + softirq); 192 186 } 193 187 ··· 207 213 */ 208 214 void vtime_flush(struct task_struct *tsk) 209 215 { 216 + u64 steal, avg_steal; 217 + 210 218 if (do_account_vtime(tsk)) 211 219 virt_timer_expire(); 220 + 221 + steal = S390_lowcore.steal_timer; 222 + avg_steal = S390_lowcore.avg_steal_timer / 2; 223 + if ((s64) steal > 0) { 224 + S390_lowcore.steal_timer = 0; 225 + account_steal_time(steal); 226 + avg_steal += steal; 227 + } 228 + S390_lowcore.avg_steal_timer = avg_steal; 212 229 } 213 230 214 231 /*
+13
drivers/s390/cio/chsc.c
··· 24 24 #include <asm/crw.h> 25 25 #include <asm/isc.h> 26 26 #include <asm/ebcdic.h> 27 + #include <asm/ap.h> 27 28 28 29 #include "css.h" 29 30 #include "cio.h" ··· 587 586 " failed (rc=%d).\n", ret); 588 587 } 589 588 589 + static void chsc_process_sei_ap_cfg_chg(struct chsc_sei_nt0_area *sei_area) 590 + { 591 + CIO_CRW_EVENT(3, "chsc: ap config changed\n"); 592 + if (sei_area->rs != 5) 593 + return; 594 + 595 + ap_bus_cfg_chg(); 596 + } 597 + 590 598 static void chsc_process_sei_nt2(struct chsc_sei_nt2_area *sei_area) 591 599 { 592 600 switch (sei_area->cc) { ··· 621 611 break; 622 612 case 2: /* i/o resource accessibility */ 623 613 chsc_process_sei_res_acc(sei_area); 614 + break; 615 + case 3: /* ap config changed */ 616 + chsc_process_sei_ap_cfg_chg(sei_area); 624 617 break; 625 618 case 7: /* channel-path-availability information */ 626 619 chsc_process_sei_chp_avail(sei_area);
+6 -2
drivers/s390/cio/vfio_ccw_drv.c
··· 72 72 { 73 73 struct vfio_ccw_private *private; 74 74 struct irb *irb; 75 + bool is_final; 75 76 76 77 private = container_of(work, struct vfio_ccw_private, io_work); 77 78 irb = &private->irb; 78 79 80 + is_final = !(scsw_actl(&irb->scsw) & 81 + (SCSW_ACTL_DEVACT | SCSW_ACTL_SCHACT)); 79 82 if (scsw_is_solicited(&irb->scsw)) { 80 83 cp_update_scsw(&private->cp, &irb->scsw); 81 - cp_free(&private->cp); 84 + if (is_final) 85 + cp_free(&private->cp); 82 86 } 83 87 memcpy(private->io_region->irb_area, irb, sizeof(*irb)); 84 88 85 89 if (private->io_trigger) 86 90 eventfd_signal(private->io_trigger, 1); 87 91 88 - if (private->mdev) 92 + if (private->mdev && is_final) 89 93 private->state = VFIO_CCW_STATE_IDLE; 90 94 } 91 95
+18 -1
drivers/s390/crypto/ap_bus.c
··· 810 810 struct ap_device *ap_dev = to_ap_dev(dev); 811 811 struct ap_driver *ap_drv = ap_dev->drv; 812 812 813 + /* prepare ap queue device removal */ 813 814 if (is_queue_dev(dev)) 814 - ap_queue_remove(to_ap_queue(dev)); 815 + ap_queue_prepare_remove(to_ap_queue(dev)); 816 + 817 + /* driver's chance to clean up gracefully */ 815 818 if (ap_drv->remove) 816 819 ap_drv->remove(ap_dev); 820 + 821 + /* now do the ap queue device remove */ 822 + if (is_queue_dev(dev)) 823 + ap_queue_remove(to_ap_queue(dev)); 817 824 818 825 /* Remove queue/card from list of active queues/cards */ 819 826 spin_lock_bh(&ap_list_lock); ··· 866 859 flush_work(&ap_scan_work); 867 860 } 868 861 EXPORT_SYMBOL(ap_bus_force_rescan); 862 + 863 + /* 864 + * A config change has happened, force an ap bus rescan. 865 + */ 866 + void ap_bus_cfg_chg(void) 867 + { 868 + AP_DBF(DBF_INFO, "%s config change, forcing bus rescan\n", __func__); 869 + 870 + ap_bus_force_rescan(); 871 + } 869 872 870 873 /* 871 874 * hex2bitmap() - parse hex mask string and set bitmap.
+2
drivers/s390/crypto/ap_bus.h
··· 91 91 AP_STATE_WORKING, 92 92 AP_STATE_QUEUE_FULL, 93 93 AP_STATE_SUSPEND_WAIT, 94 + AP_STATE_REMOVE, /* about to be removed from driver */ 94 95 AP_STATE_UNBOUND, /* momentary not bound to a driver */ 95 96 AP_STATE_BORKED, /* broken */ 96 97 NR_AP_STATES ··· 253 252 254 253 void ap_queue_init_reply(struct ap_queue *aq, struct ap_message *ap_msg); 255 254 struct ap_queue *ap_queue_create(ap_qid_t qid, int device_type); 255 + void ap_queue_prepare_remove(struct ap_queue *aq); 256 256 void ap_queue_remove(struct ap_queue *aq); 257 257 void ap_queue_suspend(struct ap_device *ap_dev); 258 258 void ap_queue_resume(struct ap_device *ap_dev);
+22 -6
drivers/s390/crypto/ap_queue.c
··· 420 420 [AP_EVENT_POLL] = ap_sm_suspend_read, 421 421 [AP_EVENT_TIMEOUT] = ap_sm_nop, 422 422 }, 423 + [AP_STATE_REMOVE] = { 424 + [AP_EVENT_POLL] = ap_sm_nop, 425 + [AP_EVENT_TIMEOUT] = ap_sm_nop, 426 + }, 423 427 [AP_STATE_UNBOUND] = { 424 428 [AP_EVENT_POLL] = ap_sm_nop, 425 429 [AP_EVENT_TIMEOUT] = ap_sm_nop, ··· 744 740 } 745 741 EXPORT_SYMBOL(ap_flush_queue); 746 742 743 + void ap_queue_prepare_remove(struct ap_queue *aq) 744 + { 745 + spin_lock_bh(&aq->lock); 746 + /* flush queue */ 747 + __ap_flush_queue(aq); 748 + /* set REMOVE state to prevent new messages are queued in */ 749 + aq->state = AP_STATE_REMOVE; 750 + del_timer_sync(&aq->timeout); 751 + spin_unlock_bh(&aq->lock); 752 + } 753 + 747 754 void ap_queue_remove(struct ap_queue *aq) 748 755 { 749 - ap_flush_queue(aq); 750 - del_timer_sync(&aq->timeout); 751 - 752 - /* reset with zero, also clears irq registration */ 756 + /* 757 + * all messages have been flushed and the state is 758 + * AP_STATE_REMOVE. Now reset with zero which also 759 + * clears the irq registration and move the state 760 + * to AP_STATE_UNBOUND to signal that this queue 761 + * is not used by any driver currently. 762 + */ 753 763 spin_lock_bh(&aq->lock); 754 764 ap_zapq(aq->qid); 755 765 aq->state = AP_STATE_UNBOUND; 756 766 spin_unlock_bh(&aq->lock); 757 767 } 758 - EXPORT_SYMBOL(ap_queue_remove); 759 768 760 769 void ap_queue_reinit_state(struct ap_queue *aq) 761 770 { ··· 777 760 ap_wait(ap_sm_event(aq, AP_EVENT_POLL)); 778 761 spin_unlock_bh(&aq->lock); 779 762 } 780 - EXPORT_SYMBOL(ap_queue_reinit_state);
+18 -12
drivers/s390/crypto/zcrypt_api.c
··· 586 586 587 587 static inline struct zcrypt_queue *zcrypt_pick_queue(struct zcrypt_card *zc, 588 588 struct zcrypt_queue *zq, 589 + struct module **pmod, 589 590 unsigned int weight) 590 591 { 591 592 if (!zq || !try_module_get(zq->queue->ap_dev.drv->driver.owner)) ··· 596 595 atomic_add(weight, &zc->load); 597 596 atomic_add(weight, &zq->load); 598 597 zq->request_count++; 598 + *pmod = zq->queue->ap_dev.drv->driver.owner; 599 599 return zq; 600 600 } 601 601 602 602 static inline void zcrypt_drop_queue(struct zcrypt_card *zc, 603 603 struct zcrypt_queue *zq, 604 + struct module *mod, 604 605 unsigned int weight) 605 606 { 606 - struct module *mod = zq->queue->ap_dev.drv->driver.owner; 607 - 608 607 zq->request_count--; 609 608 atomic_sub(weight, &zc->load); 610 609 atomic_sub(weight, &zq->load); ··· 654 653 unsigned int weight, pref_weight; 655 654 unsigned int func_code; 656 655 int qid = 0, rc = -ENODEV; 656 + struct module *mod; 657 657 658 658 trace_s390_zcrypt_req(mex, TP_ICARSAMODEXPO); 659 659 ··· 708 706 pref_weight = weight; 709 707 } 710 708 } 711 - pref_zq = zcrypt_pick_queue(pref_zc, pref_zq, weight); 709 + pref_zq = zcrypt_pick_queue(pref_zc, pref_zq, &mod, weight); 712 710 spin_unlock(&zcrypt_list_lock); 713 711 714 712 if (!pref_zq) { ··· 720 718 rc = pref_zq->ops->rsa_modexpo(pref_zq, mex); 721 719 722 720 spin_lock(&zcrypt_list_lock); 723 - zcrypt_drop_queue(pref_zc, pref_zq, weight); 721 + zcrypt_drop_queue(pref_zc, pref_zq, mod, weight); 724 722 spin_unlock(&zcrypt_list_lock); 725 723 726 724 out: ··· 737 735 unsigned int weight, pref_weight; 738 736 unsigned int func_code; 739 737 int qid = 0, rc = -ENODEV; 738 + struct module *mod; 740 739 741 740 trace_s390_zcrypt_req(crt, TP_ICARSACRT); 742 741 ··· 791 788 pref_weight = weight; 792 789 } 793 790 } 794 - pref_zq = zcrypt_pick_queue(pref_zc, pref_zq, weight); 791 + pref_zq = zcrypt_pick_queue(pref_zc, pref_zq, &mod, weight); 795 792 spin_unlock(&zcrypt_list_lock); 796 793 797 794 if (!pref_zq) { ··· 803 800 rc = pref_zq->ops->rsa_modexpo_crt(pref_zq, crt); 804 801 805 802 spin_lock(&zcrypt_list_lock); 806 - zcrypt_drop_queue(pref_zc, pref_zq, weight); 803 + zcrypt_drop_queue(pref_zc, pref_zq, mod, weight); 807 804 spin_unlock(&zcrypt_list_lock); 808 805 809 806 out: ··· 822 819 unsigned int func_code; 823 820 unsigned short *domain; 824 821 int qid = 0, rc = -ENODEV; 822 + struct module *mod; 825 823 826 824 trace_s390_zcrypt_req(xcRB, TB_ZSECSENDCPRB); 827 825 ··· 869 865 pref_weight = weight; 870 866 } 871 867 } 872 - pref_zq = zcrypt_pick_queue(pref_zc, pref_zq, weight); 868 + pref_zq = zcrypt_pick_queue(pref_zc, pref_zq, &mod, weight); 873 869 spin_unlock(&zcrypt_list_lock); 874 870 875 871 if (!pref_zq) { ··· 885 881 rc = pref_zq->ops->send_cprb(pref_zq, xcRB, &ap_msg); 886 882 887 883 spin_lock(&zcrypt_list_lock); 888 - zcrypt_drop_queue(pref_zc, pref_zq, weight); 884 + zcrypt_drop_queue(pref_zc, pref_zq, mod, weight); 889 885 spin_unlock(&zcrypt_list_lock); 890 886 891 887 out: ··· 936 932 unsigned int func_code; 937 933 struct ap_message ap_msg; 938 934 int qid = 0, rc = -ENODEV; 935 + struct module *mod; 939 936 940 937 trace_s390_zcrypt_req(xcrb, TP_ZSENDEP11CPRB); 941 938 ··· 1005 1000 pref_weight = weight; 1006 1001 } 1007 1002 } 1008 - pref_zq = zcrypt_pick_queue(pref_zc, pref_zq, weight); 1003 + pref_zq = zcrypt_pick_queue(pref_zc, pref_zq, &mod, weight); 1009 1004 spin_unlock(&zcrypt_list_lock); 1010 1005 1011 1006 if (!pref_zq) { ··· 1017 1012 rc = pref_zq->ops->send_ep11_cprb(pref_zq, xcrb, 
&ap_msg); 1018 1013 1019 1014 spin_lock(&zcrypt_list_lock); 1020 - zcrypt_drop_queue(pref_zc, pref_zq, weight); 1015 + zcrypt_drop_queue(pref_zc, pref_zq, mod, weight); 1021 1016 spin_unlock(&zcrypt_list_lock); 1022 1017 1023 1018 out_free: ··· 1038 1033 struct ap_message ap_msg; 1039 1034 unsigned int domain; 1040 1035 int qid = 0, rc = -ENODEV; 1036 + struct module *mod; 1041 1037 1042 1038 trace_s390_zcrypt_req(buffer, TP_HWRNGCPRB); 1043 1039 ··· 1070 1064 pref_weight = weight; 1071 1065 } 1072 1066 } 1073 - pref_zq = zcrypt_pick_queue(pref_zc, pref_zq, weight); 1067 + pref_zq = zcrypt_pick_queue(pref_zc, pref_zq, &mod, weight); 1074 1068 spin_unlock(&zcrypt_list_lock); 1075 1069 1076 1070 if (!pref_zq) { ··· 1082 1076 rc = pref_zq->ops->rng(pref_zq, buffer, &ap_msg); 1083 1077 1084 1078 spin_lock(&zcrypt_list_lock); 1085 - zcrypt_drop_queue(pref_zc, pref_zq, weight); 1079 + zcrypt_drop_queue(pref_zc, pref_zq, mod, weight); 1086 1080 spin_unlock(&zcrypt_list_lock); 1087 1081 1088 1082 out:
+42 -7
drivers/soc/bcm/bcm2835-power.c
··· 150 150 151 151 static int bcm2835_asb_enable(struct bcm2835_power *power, u32 reg) 152 152 { 153 - u64 start = ktime_get_ns(); 153 + u64 start; 154 + 155 + if (!reg) 156 + return 0; 157 + 158 + start = ktime_get_ns(); 154 159 155 160 /* Enable the module's async AXI bridges. */ 156 161 ASB_WRITE(reg, ASB_READ(reg) & ~ASB_REQ_STOP); ··· 170 165 171 166 static int bcm2835_asb_disable(struct bcm2835_power *power, u32 reg) 172 167 { 173 - u64 start = ktime_get_ns(); 168 + u64 start; 169 + 170 + if (!reg) 171 + return 0; 172 + 173 + start = ktime_get_ns(); 174 174 175 175 /* Enable the module's async AXI bridges. */ 176 176 ASB_WRITE(reg, ASB_READ(reg) | ASB_REQ_STOP); ··· 485 475 } 486 476 } 487 477 488 - static void 478 + static int 489 479 bcm2835_init_power_domain(struct bcm2835_power *power, 490 480 int pd_xlate_index, const char *name) 491 481 { ··· 493 483 struct bcm2835_power_domain *dom = &power->domains[pd_xlate_index]; 494 484 495 485 dom->clk = devm_clk_get(dev->parent, name); 486 + if (IS_ERR(dom->clk)) { 487 + int ret = PTR_ERR(dom->clk); 488 + 489 + if (ret == -EPROBE_DEFER) 490 + return ret; 491 + 492 + /* Some domains don't have a clk, so make sure that we 493 + * don't deref an error pointer later. 494 + */ 495 + dom->clk = NULL; 496 + } 496 497 497 498 dom->base.name = name; 498 499 dom->base.power_on = bcm2835_power_pd_power_on; ··· 516 495 pm_genpd_init(&dom->base, NULL, true); 517 496 518 497 power->pd_xlate.domains[pd_xlate_index] = &dom->base; 498 + 499 + return 0; 519 500 } 520 501 521 502 /** bcm2835_reset_reset - Resets a block that has a reset line in the ··· 615 592 { BCM2835_POWER_DOMAIN_IMAGE_PERI, BCM2835_POWER_DOMAIN_CAM0 }, 616 593 { BCM2835_POWER_DOMAIN_IMAGE_PERI, BCM2835_POWER_DOMAIN_CAM1 }, 617 594 }; 618 - int ret, i; 595 + int ret = 0, i; 619 596 u32 id; 620 597 621 598 power = devm_kzalloc(dev, sizeof(*power), GFP_KERNEL); ··· 642 619 643 620 power->pd_xlate.num_domains = ARRAY_SIZE(power_domain_names); 644 621 645 - for (i = 0; i < ARRAY_SIZE(power_domain_names); i++) 646 - bcm2835_init_power_domain(power, i, power_domain_names[i]); 622 + for (i = 0; i < ARRAY_SIZE(power_domain_names); i++) { 623 + ret = bcm2835_init_power_domain(power, i, power_domain_names[i]); 624 + if (ret) 625 + goto fail; 626 + } 647 627 648 628 for (i = 0; i < ARRAY_SIZE(domain_deps); i++) { 649 629 pm_genpd_add_subdomain(&power->domains[domain_deps[i].parent].base, ··· 660 634 661 635 ret = devm_reset_controller_register(dev, &power->reset); 662 636 if (ret) 663 - return ret; 637 + goto fail; 664 638 665 639 of_genpd_add_provider_onecell(dev->parent->of_node, &power->pd_xlate); 666 640 667 641 dev_info(dev, "Broadcom BCM2835 power domains driver"); 668 642 return 0; 643 + 644 + fail: 645 + for (i = 0; i < ARRAY_SIZE(power_domain_names); i++) { 646 + struct generic_pm_domain *dom = &power->domains[i].base; 647 + 648 + if (dom->name) 649 + pm_genpd_remove(dom); 650 + } 651 + return ret; 669 652 } 670 653 671 654 static int bcm2835_power_remove(struct platform_device *pdev)
+3 -3
fs/afs/fsclient.c
··· 1515 1515 1516 1516 xdr_encode_AFS_StoreStatus(&bp, attr); 1517 1517 1518 - *bp++ = 0; /* position of start of write */ 1519 - *bp++ = 0; 1518 + *bp++ = htonl(attr->ia_size >> 32); /* position of start of write */ 1519 + *bp++ = htonl((u32) attr->ia_size); 1520 1520 *bp++ = 0; /* size of write */ 1521 1521 *bp++ = 0; 1522 1522 *bp++ = htonl(attr->ia_size >> 32); /* new file length */ ··· 1564 1564 1565 1565 xdr_encode_AFS_StoreStatus(&bp, attr); 1566 1566 1567 - *bp++ = 0; /* position of start of write */ 1567 + *bp++ = htonl(attr->ia_size); /* position of start of write */ 1568 1568 *bp++ = 0; /* size of write */ 1569 1569 *bp++ = htonl(attr->ia_size); /* new file length */ 1570 1570
+1 -1
fs/afs/yfsclient.c
··· 1514 1514 bp = xdr_encode_u32(bp, 0); /* RPC flags */ 1515 1515 bp = xdr_encode_YFSFid(bp, &vnode->fid); 1516 1516 bp = xdr_encode_YFS_StoreStatus(bp, attr); 1517 - bp = xdr_encode_u64(bp, 0); /* position of start of write */ 1517 + bp = xdr_encode_u64(bp, attr->ia_size); /* position of start of write */ 1518 1518 bp = xdr_encode_u64(bp, 0); /* size of write */ 1519 1519 bp = xdr_encode_u64(bp, attr->ia_size); /* new file length */ 1520 1520 yfs_check_req(call, bp);
-2
tools/arch/alpha/include/uapi/asm/mman.h
··· 27 27 #define MAP_NONBLOCK 0x40000 28 28 #define MAP_NORESERVE 0x10000 29 29 #define MAP_POPULATE 0x20000 30 - #define MAP_PRIVATE 0x02 31 - #define MAP_SHARED 0x01 32 30 #define MAP_STACK 0x80000 33 31 #define PROT_EXEC 0x4 34 32 #define PROT_GROWSDOWN 0x01000000
-2
tools/arch/mips/include/uapi/asm/mman.h
··· 28 28 #define MAP_NONBLOCK 0x20000 29 29 #define MAP_NORESERVE 0x0400 30 30 #define MAP_POPULATE 0x10000 31 - #define MAP_PRIVATE 0x002 32 - #define MAP_SHARED 0x001 33 31 #define MAP_STACK 0x40000 34 32 #define PROT_EXEC 0x04 35 33 #define PROT_GROWSDOWN 0x01000000
-2
tools/arch/parisc/include/uapi/asm/mman.h
··· 27 27 #define MAP_NONBLOCK 0x20000 28 28 #define MAP_NORESERVE 0x4000 29 29 #define MAP_POPULATE 0x10000 30 - #define MAP_PRIVATE 0x02 31 - #define MAP_SHARED 0x01 32 30 #define MAP_STACK 0x40000 33 31 #define PROT_EXEC 0x4 34 32 #define PROT_GROWSDOWN 0x01000000
+2
tools/arch/powerpc/include/uapi/asm/kvm.h
··· 463 463 #define KVM_PPC_CPU_CHAR_BR_HINT_HONOURED (1ULL << 58) 464 464 #define KVM_PPC_CPU_CHAR_MTTRIG_THR_RECONF (1ULL << 57) 465 465 #define KVM_PPC_CPU_CHAR_COUNT_CACHE_DIS (1ULL << 56) 466 + #define KVM_PPC_CPU_CHAR_BCCTR_FLUSH_ASSIST (1ull << 54) 466 467 467 468 #define KVM_PPC_CPU_BEHAV_FAVOUR_SECURITY (1ULL << 63) 468 469 #define KVM_PPC_CPU_BEHAV_L1D_FLUSH_PR (1ULL << 62) 469 470 #define KVM_PPC_CPU_BEHAV_BNDS_CHK_SPEC_BAR (1ULL << 61) 471 + #define KVM_PPC_CPU_BEHAV_FLUSH_COUNT_CACHE (1ull << 58) 470 472 471 473 /* Per-vcpu XICS interrupt controller state */ 472 474 #define KVM_REG_PPC_ICP_STATE (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x8c)
+1
tools/arch/x86/include/asm/cpufeatures.h
··· 344 344 /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */ 345 345 #define X86_FEATURE_AVX512_4VNNIW (18*32+ 2) /* AVX-512 Neural Network Instructions */ 346 346 #define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */ 347 + #define X86_FEATURE_TSX_FORCE_ABORT (18*32+13) /* "" TSX_FORCE_ABORT */ 347 348 #define X86_FEATURE_PCONFIG (18*32+18) /* Intel PCONFIG */ 348 349 #define X86_FEATURE_SPEC_CTRL (18*32+26) /* "" Speculation Control (IBRS + IBPB) */ 349 350 #define X86_FEATURE_INTEL_STIBP (18*32+27) /* "" Single Thread Indirect Branch Predictors */
-2
tools/arch/xtensa/include/uapi/asm/mman.h
··· 27 27 #define MAP_NONBLOCK 0x20000 28 28 #define MAP_NORESERVE 0x0400 29 29 #define MAP_POPULATE 0x10000 30 - #define MAP_PRIVATE 0x002 31 - #define MAP_SHARED 0x001 32 30 #define MAP_STACK 0x40000 33 31 #define PROT_EXEC 0x4 34 32 #define PROT_GROWSDOWN 0x01000000
+2 -2
tools/build/feature/test-libopencsd.c
··· 4 4 /* 5 5 * Check OpenCSD library version is sufficient to provide required features 6 6 */ 7 - #define OCSD_MIN_VER ((0 << 16) | (10 << 8) | (0)) 7 + #define OCSD_MIN_VER ((0 << 16) | (11 << 8) | (0)) 8 8 #if !defined(OCSD_VER_NUM) || (OCSD_VER_NUM < OCSD_MIN_VER) 9 - #error "OpenCSD >= 0.10.0 is required" 9 + #error "OpenCSD >= 0.11.0 is required" 10 10 #endif 11 11 12 12 int main(void)
+23
tools/include/uapi/asm-generic/mman-common-tools.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 + #ifndef __ASM_GENERIC_MMAN_COMMON_TOOLS_ONLY_H 3 + #define __ASM_GENERIC_MMAN_COMMON_TOOLS_ONLY_H 4 + 5 + #include <asm-generic/mman-common.h> 6 + 7 + /* We need this because we need to have tools/include/uapi/ included in the tools 8 + * header search path to get access to stuff that is not yet in the system's 9 + * copy of the files in that directory, but since this cset: 10 + * 11 + * 746c9398f5ac ("arch: move common mmap flags to linux/mman.h") 12 + * 13 + * We end up making sys/mman.h, that is in the system headers, to not find the 14 + * MAP_SHARED and MAP_PRIVATE defines because they are not anymore in our copy 15 + * of asm-generic/mman-common.h. So we define them here and include this header 16 + * from each of the per arch mman.h headers. 17 + */ 18 + #ifndef MAP_SHARED 19 + #define MAP_SHARED 0x01 /* Share changes */ 20 + #define MAP_PRIVATE 0x02 /* Changes are private */ 21 + #define MAP_SHARED_VALIDATE 0x03 /* share + validate extension flags */ 22 + #endif 23 + #endif // __ASM_GENERIC_MMAN_COMMON_TOOLS_ONLY_H
+1 -3
tools/include/uapi/asm-generic/mman-common.h
··· 15 15 #define PROT_GROWSDOWN 0x01000000 /* mprotect flag: extend change to start of growsdown vma */ 16 16 #define PROT_GROWSUP 0x02000000 /* mprotect flag: extend change to end of growsup vma */ 17 17 18 - #define MAP_SHARED 0x01 /* Share changes */ 19 - #define MAP_PRIVATE 0x02 /* Changes are private */ 20 - #define MAP_SHARED_VALIDATE 0x03 /* share + validate extension flags */ 18 + /* 0x01 - 0x03 are defined in linux/mman.h */ 21 19 #define MAP_TYPE 0x0f /* Mask for type of mapping */ 22 20 #define MAP_FIXED 0x10 /* Interpret addr exactly */ 23 21 #define MAP_ANONYMOUS 0x20 /* don't use a file */
+1 -1
tools/include/uapi/asm-generic/mman.h
··· 2 2 #ifndef __ASM_GENERIC_MMAN_H 3 3 #define __ASM_GENERIC_MMAN_H 4 4 5 - #include <asm-generic/mman-common.h> 5 + #include <asm-generic/mman-common-tools.h> 6 6 7 7 #define MAP_GROWSDOWN 0x0100 /* stack-like segment */ 8 8 #define MAP_DENYWRITE 0x0800 /* ETXTBSY */
+10 -1
tools/include/uapi/asm-generic/unistd.h
··· 824 824 __SYSCALL(__NR_sched_rr_get_interval_time64, sys_sched_rr_get_interval) 825 825 #endif 826 826 827 + #define __NR_pidfd_send_signal 424 828 + __SYSCALL(__NR_pidfd_send_signal, sys_pidfd_send_signal) 829 + #define __NR_io_uring_setup 425 830 + __SYSCALL(__NR_io_uring_setup, sys_io_uring_setup) 831 + #define __NR_io_uring_enter 426 832 + __SYSCALL(__NR_io_uring_enter, sys_io_uring_enter) 833 + #define __NR_io_uring_register 427 834 + __SYSCALL(__NR_io_uring_register, sys_io_uring_register) 835 + 827 836 #undef __NR_syscalls 828 - #define __NR_syscalls 424 837 + #define __NR_syscalls 428 829 838 830 839 /* 831 840 * 32 bit systems traditionally used different
+64
tools/include/uapi/drm/i915_drm.h
··· 1486 1486 #define I915_CONTEXT_MAX_USER_PRIORITY 1023 /* inclusive */
1487 1487 #define I915_CONTEXT_DEFAULT_PRIORITY 0
1488 1488 #define I915_CONTEXT_MIN_USER_PRIORITY -1023 /* inclusive */
1489 + /*
1490 + * When using the following param, value should be a pointer to
1491 + * drm_i915_gem_context_param_sseu.
1492 + */
1493 + #define I915_CONTEXT_PARAM_SSEU 0x7
1489 1494 __u64 value;
1495 + };
1496 +
1497 + /**
1498 + * Context SSEU programming
1499 + *
1500 + * It may be necessary for either functional or performance reasons to configure
1501 + * a context to run with a reduced number of SSEU (where SSEU stands for Slice/
1502 + * Sub-slice/EU).
1503 + *
1504 + * This is done by providing an SSEU configuration, using the below
1505 + * @struct drm_i915_gem_context_param_sseu, for every supported engine which
1506 + * userspace intends to use.
1507 + *
1508 + * Not all GPUs or engines support this functionality, in which case an error
1509 + * code of -ENODEV will be returned.
1510 + *
1511 + * Also, the flexibility of possible SSEU configuration permutations varies
1512 + * between GPU generations and is further limited by software. Requesting an
1513 + * unsupported combination will return an error code of -EINVAL.
1514 + *
1515 + * NOTE: When perf/OA is active the context's SSEU configuration is ignored in
1516 + * favour of a single global setting.
1517 + */
1518 + struct drm_i915_gem_context_param_sseu {
1519 + /*
1520 + * Engine class & instance to be configured or queried.
1521 + */
1522 + __u16 engine_class;
1523 + __u16 engine_instance;
1524 +
1525 + /*
1526 + * Unused for now. Must be cleared to zero.
1527 + */
1528 + __u32 flags;
1529 +
1530 + /*
1531 + * Mask of slices to enable for the context. Valid values are a subset
1532 + * of the bitmask value returned by I915_PARAM_SLICE_MASK.
1533 + */
1534 + __u64 slice_mask;
1535 +
1536 + /*
1537 + * Mask of subslices to enable for the context. Valid values are a
1538 + * subset of the bitmask value returned by I915_PARAM_SUBSLICE_MASK.
1539 + */
1540 + __u64 subslice_mask;
1541 +
1542 + /*
1543 + * Minimum/Maximum number of EUs to enable per subslice for the
1544 + * context. min_eus_per_subslice must be less than or equal to
1545 + * max_eus_per_subslice.
1546 + */
1547 + __u16 min_eus_per_subslice;
1548 + __u16 max_eus_per_subslice;
1549 +
1550 + /*
1551 + * Unused for now. Must be cleared to zero.
1552 + */
1553 + __u32 rsvd;
1490 1554 };
1491 1555
1492 1556 enum drm_i915_oa_format {
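A hedged sketch of how userspace might program the new parameter, assuming kernel uapi headers in the include path, an open DRM fd and an existing context. The engine and mask values below are placeholders, since valid masks come from the corresponding I915_PARAM_* queries:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

/* Illustrative only: real mask values come from I915_PARAM_SLICE_MASK etc. */
static int set_context_sseu(int drm_fd, uint32_t ctx_id,
			    uint64_t slice_mask, uint64_t subslice_mask)
{
	struct drm_i915_gem_context_param_sseu sseu;
	struct drm_i915_gem_context_param arg;

	memset(&sseu, 0, sizeof(sseu));
	sseu.engine_class = 0;         /* placeholder: render engine class */
	sseu.engine_instance = 0;
	sseu.slice_mask = slice_mask;
	sseu.subslice_mask = subslice_mask;
	sseu.min_eus_per_subslice = 1; /* placeholder values */
	sseu.max_eus_per_subslice = 8;

	memset(&arg, 0, sizeof(arg));
	arg.ctx_id = ctx_id;
	arg.param = I915_CONTEXT_PARAM_SSEU;
	arg.size = sizeof(sseu);
	arg.value = (uintptr_t)&sseu;

	/* Unsupported combinations fail with -ENODEV or -EINVAL, as noted above. */
	return ioctl(drm_fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &arg);
}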
+1
tools/include/uapi/linux/fcntl.h
··· 41 41 #define F_SEAL_SHRINK 0x0002 /* prevent file from shrinking */ 42 42 #define F_SEAL_GROW 0x0004 /* prevent file from growing */ 43 43 #define F_SEAL_WRITE 0x0008 /* prevent writes */ 44 + #define F_SEAL_FUTURE_WRITE 0x0010 /* prevent future writes while mapped */ 44 45 /* (1U << 31) is reserved for signed error codes */ 45 46 46 47 /*
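F_SEAL_FUTURE_WRITE complements F_SEAL_WRITE: existing writable mappings keep working, but any future write access is denied. A small sketch, assuming a glibc exposing memfd_create() and headers new enough to carry the new seal:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = memfd_create("demo", MFD_ALLOW_SEALING);
	void *p;

	if (fd < 0 || ftruncate(fd, 4096) < 0)
		return 1;
	if (fcntl(fd, F_ADD_SEALS, F_SEAL_FUTURE_WRITE) < 0)
		perror("F_ADD_SEALS");
	/* A fresh shared writable mapping should now fail (EPERM). */
	p = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, 0);
	printf("%s\n", p == MAP_FAILED ? "future writes blocked" : "mapped?!");
	close(fd);
	return 0;
}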
+4
tools/include/uapi/linux/mman.h
··· 12 12 #define OVERCOMMIT_ALWAYS 1 13 13 #define OVERCOMMIT_NEVER 2 14 14 15 + #define MAP_SHARED 0x01 /* Share changes */ 16 + #define MAP_PRIVATE 0x02 /* Changes are private */ 17 + #define MAP_SHARED_VALIDATE 0x03 /* share + validate extension flags */ 18 + 15 19 /* 16 20 * Huge page size encoding when MAP_HUGETLB is specified, and a huge page 17 21 * size other than the default is desired. See hugetlb_encode.h.
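MAP_SHARED_VALIDATE (0x03, i.e. MAP_SHARED|MAP_PRIVATE reused as a third mapping type) asks the kernel to reject unknown flag bits instead of silently ignoring them. A probe sketch; 0x08000000 is only an assumed-unallocated bit for demonstration, and the headers are assumed to define MAP_SHARED_VALIDATE:

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	const int bogus = 0x08000000; /* assumed-unallocated flag bit */
	void *p;

	/* Plain MAP_SHARED silently ignores unknown bits... */
	p = mmap(NULL, 4096, PROT_READ, MAP_SHARED | MAP_ANONYMOUS | bogus, -1, 0);
	printf("MAP_SHARED: %s\n", p == MAP_FAILED ? strerror(errno) : "mapped");

	/* ...while MAP_SHARED_VALIDATE fails them with EOPNOTSUPP. */
	p = mmap(NULL, 4096, PROT_READ, MAP_SHARED_VALIDATE | MAP_ANONYMOUS | bogus, -1, 0);
	printf("MAP_SHARED_VALIDATE: %s\n", p == MAP_FAILED ? strerror(errno) : "mapped");
	return 0;
}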
+2 -2
tools/perf/Makefile.perf
··· 481 481 mmap_flags_array := $(beauty_outdir)/mmap_flags_array.c 482 482 mmap_flags_tbl := $(srctree)/tools/perf/trace/beauty/mmap_flags.sh 483 483 484 - $(mmap_flags_array): $(asm_generic_uapi_dir)/mman.h $(asm_generic_uapi_dir)/mman-common.h $(mmap_flags_tbl) 485 - $(Q)$(SHELL) '$(mmap_flags_tbl)' $(asm_generic_uapi_dir) $(arch_asm_uapi_dir) > $@ 484 + $(mmap_flags_array): $(linux_uapi_dir)/mman.h $(asm_generic_uapi_dir)/mman.h $(asm_generic_uapi_dir)/mman-common.h $(mmap_flags_tbl) 485 + $(Q)$(SHELL) '$(mmap_flags_tbl)' $(linux_uapi_dir) $(asm_generic_uapi_dir) $(arch_asm_uapi_dir) > $@ 486 486 487 487 mount_flags_array := $(beauty_outdir)/mount_flags_array.c 488 488 mount_flags_tbl := $(srctree)/tools/perf/trace/beauty/mount_flags.sh
+4
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
··· 345 345 334 common rseq __x64_sys_rseq 346 346 # don't use numbers 387 through 423, add new calls after the last 347 347 # 'common' entry 348 + 424 common pidfd_send_signal __x64_sys_pidfd_send_signal 349 + 425 common io_uring_setup __x64_sys_io_uring_setup 350 + 426 common io_uring_enter __x64_sys_io_uring_enter 351 + 427 common io_uring_register __x64_sys_io_uring_register 348 352 349 353 # 350 354 # x32-specific system call numbers start at 512 to avoid cache impact
+1 -1
tools/perf/check-headers.sh
··· 103 103 # diff with extra ignore lines 104 104 check arch/x86/lib/memcpy_64.S '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>"' 105 105 check arch/x86/lib/memset_64.S '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>"' 106 - check include/uapi/asm-generic/mman.h '-I "^#include <\(uapi/\)*asm-generic/mman-common.h>"' 106 + check include/uapi/asm-generic/mman.h '-I "^#include <\(uapi/\)*asm-generic/mman-common\(-tools\)*.h>"' 107 107 check include/uapi/linux/mman.h '-I "^#include <\(uapi/\)*asm/mman.h>"' 108 108 109 109 # diff non-symmetric files
+63 -14
tools/perf/scripts/python/exported-sql-viewer.py
··· 107 107 from PySide.QtCore import *
108 108 from PySide.QtGui import *
109 109 from PySide.QtSql import *
110 + pyside_version_1 = True
110 111 from decimal import *
111 112 from ctypes import *
112 113 from multiprocessing import Process, Array, Value, Event
··· 1527 1526 " (" + dsoname(query.value(15)) + ")")
1528 1527 return data
1529 1528
1529 + def BranchDataPrepWA(query):
1530 + data = []
1531 + data.append(query.value(0))
1532 + # Workaround pyside failing to handle large integers (i.e. time) in python3 by converting to a string
1533 + data.append("{:>19}".format(query.value(1)))
1534 + for i in xrange(2, 8):
1535 + data.append(query.value(i))
1536 + data.append(tohex(query.value(8)).rjust(16) + " " + query.value(9) + offstr(query.value(10)) +
1537 + " (" + dsoname(query.value(11)) + ")" + " -> " +
1538 + tohex(query.value(12)) + " " + query.value(13) + offstr(query.value(14)) +
1539 + " (" + dsoname(query.value(15)) + ")")
1540 + return data
1541 +
1530 1542 # Branch data model
1531 1543
1532 1544 class BranchModel(TreeModel):
··· 1567 1553 " AND evsel_id = " + str(self.event_id) +
1568 1554 " ORDER BY samples.id"
1569 1555 " LIMIT " + str(glb_chunk_sz))
1570 - self.fetcher = SQLFetcher(glb, sql, BranchDataPrep, self.AddSample)
1556 + if pyside_version_1 and sys.version_info[0] == 3:
1557 + prep = BranchDataPrepWA
1558 + else:
1559 + prep = BranchDataPrep
1560 + self.fetcher = SQLFetcher(glb, sql, prep, self.AddSample)
1571 1561 self.fetcher.done.connect(self.Update)
1572 1562 self.fetcher.Fetch(glb_chunk_sz)
··· 2097 2079 return False
2098 2080 return True
2099 2081
2100 - # SQL data preparation
2101 -
2102 - def SQLTableDataPrep(query, count):
2103 - data = []
2104 - for i in xrange(count):
2105 - data.append(query.value(i))
2106 - return data
2107 -
2108 2082 # SQL table data model item
2109 2083
2110 2084 class SQLTableItem():
··· 2120 2110 self.more = True
2121 2111 self.populated = 0
2122 2112 self.column_headers = column_headers
2123 - self.fetcher = SQLFetcher(glb, sql, lambda x, y=len(column_headers): SQLTableDataPrep(x, y), self.AddSample)
2113 + self.fetcher = SQLFetcher(glb, sql, lambda x, y=len(column_headers): self.SQLTableDataPrep(x, y), self.AddSample)
2124 2114 self.fetcher.done.connect(self.Update)
2125 2115 self.fetcher.Fetch(glb_chunk_sz)
··· 2164 2154 def columnHeader(self, column):
2165 2155 return self.column_headers[column]
2166 2156
2157 + def SQLTableDataPrep(self, query, count):
2158 + data = []
2159 + for i in xrange(count):
2160 + data.append(query.value(i))
2161 + return data
2162 +
2167 2163 # SQL automatic table data model
2168 2164
2169 2165 class SQLAutoTableModel(SQLTableModel):
··· 2198 2182 QueryExec(query, "SELECT column_name FROM information_schema.columns WHERE table_schema = '" + schema + "' and table_name = '" + select_table_name + "'")
2199 2183 while query.next():
2200 2184 column_headers.append(query.value(0))
2185 + if pyside_version_1 and sys.version_info[0] == 3:
2186 + if table_name == "samples_view":
2187 + self.SQLTableDataPrep = self.samples_view_DataPrep
2188 + if table_name == "samples":
2189 + self.SQLTableDataPrep = self.samples_DataPrep
2201 2190 super(SQLAutoTableModel, self).__init__(glb, sql, column_headers, parent)
2191 +
2192 + def samples_view_DataPrep(self, query, count):
2193 + data = []
2194 + data.append(query.value(0))
2195 + # Workaround pyside failing to handle large integers (i.e. time) in python3 by converting to a string
2196 + data.append("{:>19}".format(query.value(1)))
2197 + for i in xrange(2, count):
2198 + data.append(query.value(i))
2199 + return data
2200 +
2201 + def samples_DataPrep(self, query, count):
2202 + data = []
2203 + for i in xrange(9):
2204 + data.append(query.value(i))
2205 + # Workaround pyside failing to handle large integers (i.e. time) in python3 by converting to a string
2206 + data.append("{:>19}".format(query.value(9)))
2207 + for i in xrange(10, count):
2208 + data.append(query.value(i))
2209 + return data
2202 2210
2203 2211 # Base class for custom ResizeColumnsToContents
··· 2908 2868 ok = self.xed_format_context(2, inst.xedp, inst.bufferp, sizeof(inst.buffer), ip, 0, 0)
2909 2869 if not ok:
2910 2870 return 0, ""
2871 + if sys.version_info[0] == 2:
2872 + result = inst.buffer.value
2873 + else:
2874 + result = inst.buffer.value.decode()
2911 2875 # Return instruction length and the disassembled instruction text
2912 2876 # For now, assume the length is in byte 166
2913 - return inst.xedd[166], inst.buffer.value
2877 + return inst.xedd[166], result
2914 2878
2915 2879 def TryOpen(file_name):
2916 2880 try:
··· 2930 2886 header = f.read(7)
2931 2887 f.seek(pos)
2932 2888 magic = header[0:4]
2933 - eclass = ord(header[4])
2934 - encoding = ord(header[5])
2935 - version = ord(header[6])
2889 + if sys.version_info[0] == 2:
2890 + eclass = ord(header[4])
2891 + encoding = ord(header[5])
2892 + version = ord(header[6])
2893 + else:
2894 + eclass = header[4]
2895 + encoding = header[5]
2896 + version = header[6]
2936 2897 if magic == chr(127) + "ELF" and eclass > 0 and eclass < 3 and encoding > 0 and encoding < 3 and version == 1:
2937 2898 result = True if eclass == 2 else False
2938 2899 return result
+11 -3
tools/perf/trace/beauty/mmap_flags.sh
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: LGPL-2.1 3 3 4 - if [ $# -ne 2 ] ; then 4 + if [ $# -ne 3 ] ; then 5 5 [ $# -eq 1 ] && hostarch=$1 || hostarch=`uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/` 6 + linux_header_dir=tools/include/uapi/linux 6 7 header_dir=tools/include/uapi/asm-generic 7 8 arch_header_dir=tools/arch/${hostarch}/include/uapi/asm 8 9 else 9 - header_dir=$1 10 - arch_header_dir=$2 10 + linux_header_dir=$1 11 + header_dir=$2 12 + arch_header_dir=$3 11 13 fi 12 14 15 + linux_mman=${linux_header_dir}/mman.h 13 16 arch_mman=${arch_header_dir}/mman.h 14 17 15 18 # those in egrep -vw are flags, we want just the bits ··· 21 18 regex='^[[:space:]]*#[[:space:]]*define[[:space:]]+MAP_([[:alnum:]_]+)[[:space:]]+(0x[[:xdigit:]]+)[[:space:]]*.*' 22 19 egrep -q $regex ${arch_mman} && \ 23 20 (egrep $regex ${arch_mman} | \ 21 + sed -r "s/$regex/\2 \1/g" | \ 22 + xargs printf "\t[ilog2(%s) + 1] = \"%s\",\n") 23 + egrep -q $regex ${linux_mman} && \ 24 + (egrep $regex ${linux_mman} | \ 25 + egrep -vw 'MAP_(UNINITIALIZED|TYPE|SHARED_VALIDATE)' | \ 24 26 sed -r "s/$regex/\2 \1/g" | \ 25 27 xargs printf "\t[ilog2(%s) + 1] = \"%s\",\n") 26 28 ([ ! -f ${arch_mman} ] || egrep -q '#[[:space:]]*include[[:space:]]+<uapi/asm-generic/mman.*' ${arch_mman}) &&
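The script emits C array initializers of the form [ilog2(flag) + 1] = "NAME", so the 'perf trace' beautifier can translate a flags word bit by bit. A sketch of how such a table is consumed; the table below is an abbreviated, hand-written stand-in for the generated one:

#include <stdio.h>

/* Abbreviated stand-in for the generated array: name at index ilog2(bit) + 1. */
static const char *mmap_flags[64] = {
	[1] = "SHARED",    /* ilog2(0x01) + 1 */
	[2] = "PRIVATE",   /* ilog2(0x02) + 1 */
	[5] = "FIXED",     /* ilog2(0x10) + 1 */
	[6] = "ANONYMOUS", /* ilog2(0x20) + 1 */
};

int main(void)
{
	unsigned long flags = 0x01 | 0x10 | 0x20;
	int i;

	for (i = 0; i < 63; i++)
		if ((flags & (1UL << i)) && mmap_flags[i + 1])
			printf("%s|", mmap_flags[i + 1]);
	printf("\n"); /* prints: SHARED|FIXED|ANONYMOUS| */
	return 0;
}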
+1
tools/perf/util/cs-etm-decoder/cs-etm-decoder.c
··· 387 387 break; 388 388 case OCSD_INSTR_ISB: 389 389 case OCSD_INSTR_DSB_DMB: 390 + case OCSD_INSTR_WFI_WFE: 390 391 case OCSD_INSTR_OTHER: 391 392 default: 392 393 packet->last_instr_taken_branch = false;
-29
tools/perf/util/evlist.c
··· 231 231 } 232 232 } 233 233 234 - void perf_event_attr__set_max_precise_ip(struct perf_event_attr *pattr) 235 - { 236 - struct perf_event_attr attr = { 237 - .type = PERF_TYPE_HARDWARE, 238 - .config = PERF_COUNT_HW_CPU_CYCLES, 239 - .exclude_kernel = 1, 240 - .precise_ip = 3, 241 - }; 242 - 243 - event_attr_init(&attr); 244 - 245 - /* 246 - * Unnamed union member, not supported as struct member named 247 - * initializer in older compilers such as gcc 4.4.7 248 - */ 249 - attr.sample_period = 1; 250 - 251 - while (attr.precise_ip != 0) { 252 - int fd = sys_perf_event_open(&attr, 0, -1, -1, 0); 253 - if (fd != -1) { 254 - close(fd); 255 - break; 256 - } 257 - --attr.precise_ip; 258 - } 259 - 260 - pattr->precise_ip = attr.precise_ip; 261 - } 262 - 263 234 int __perf_evlist__add_default(struct perf_evlist *evlist, bool precise) 264 235 { 265 236 struct perf_evsel *evsel = perf_evsel__new_cycles(precise);
-2
tools/perf/util/evlist.h
··· 315 315 void perf_evlist__set_tracking_event(struct perf_evlist *evlist, 316 316 struct perf_evsel *tracking_evsel); 317 317 318 - void perf_event_attr__set_max_precise_ip(struct perf_event_attr *attr); 319 - 320 318 struct perf_evsel * 321 319 perf_evlist__find_evsel_by_str(struct perf_evlist *evlist, const char *str); 322 320
+59 -13
tools/perf/util/evsel.c
··· 295 295 if (!precise) 296 296 goto new_event; 297 297 298 - perf_event_attr__set_max_precise_ip(&attr); 299 298 /* 300 299 * Now let the usual logic to set up the perf_event_attr defaults 301 300 * to kick in when we return and before perf_evsel__open() is called. ··· 303 304 evsel = perf_evsel__new(&attr); 304 305 if (evsel == NULL) 305 306 goto out; 307 + 308 + evsel->precise_max = true; 306 309 307 310 /* use asprintf() because free(evsel) assumes name is allocated */ 308 311 if (asprintf(&evsel->name, "cycles%s%s%.*s", ··· 1084 1083 } 1085 1084 1086 1085 if (evsel->precise_max) 1087 - perf_event_attr__set_max_precise_ip(attr); 1086 + attr->precise_ip = 3; 1088 1087 1089 1088 if (opts->all_user) { 1090 1089 attr->exclude_kernel = 1; ··· 1750 1749 return true; 1751 1750 } 1752 1751 1752 + static void display_attr(struct perf_event_attr *attr) 1753 + { 1754 + if (verbose >= 2) { 1755 + fprintf(stderr, "%.60s\n", graph_dotted_line); 1756 + fprintf(stderr, "perf_event_attr:\n"); 1757 + perf_event_attr__fprintf(stderr, attr, __open_attr__fprintf, NULL); 1758 + fprintf(stderr, "%.60s\n", graph_dotted_line); 1759 + } 1760 + } 1761 + 1762 + static int perf_event_open(struct perf_evsel *evsel, 1763 + pid_t pid, int cpu, int group_fd, 1764 + unsigned long flags) 1765 + { 1766 + int precise_ip = evsel->attr.precise_ip; 1767 + int fd; 1768 + 1769 + while (1) { 1770 + pr_debug2("sys_perf_event_open: pid %d cpu %d group_fd %d flags %#lx", 1771 + pid, cpu, group_fd, flags); 1772 + 1773 + fd = sys_perf_event_open(&evsel->attr, pid, cpu, group_fd, flags); 1774 + if (fd >= 0) 1775 + break; 1776 + 1777 + /* 1778 + * Do quick precise_ip fallback if: 1779 + * - there is precise_ip set in perf_event_attr 1780 + * - maximum precise is requested 1781 + * - sys_perf_event_open failed with ENOTSUP error, 1782 + * which is associated with wrong precise_ip 1783 + */ 1784 + if (!precise_ip || !evsel->precise_max || (errno != ENOTSUP)) 1785 + break; 1786 + 1787 + /* 1788 + * We tried all the precise_ip values, and it's 1789 + * still failing, so leave it to standard fallback. 
1790 + */ 1791 + if (!evsel->attr.precise_ip) { 1792 + evsel->attr.precise_ip = precise_ip; 1793 + break; 1794 + } 1795 + 1796 + pr_debug2("\nsys_perf_event_open failed, error %d\n", -ENOTSUP); 1797 + evsel->attr.precise_ip--; 1798 + pr_debug2("decreasing precise_ip by one (%d)\n", evsel->attr.precise_ip); 1799 + display_attr(&evsel->attr); 1800 + } 1801 + 1802 + return fd; 1803 + } 1804 + 1753 1805 int perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus, 1754 1806 struct thread_map *threads) 1755 1807 { ··· 1878 1824 if (perf_missing_features.sample_id_all) 1879 1825 evsel->attr.sample_id_all = 0; 1880 1826 1881 - if (verbose >= 2) { 1882 - fprintf(stderr, "%.60s\n", graph_dotted_line); 1883 - fprintf(stderr, "perf_event_attr:\n"); 1884 - perf_event_attr__fprintf(stderr, &evsel->attr, __open_attr__fprintf, NULL); 1885 - fprintf(stderr, "%.60s\n", graph_dotted_line); 1886 - } 1827 + display_attr(&evsel->attr); 1887 1828 1888 1829 for (cpu = 0; cpu < cpus->nr; cpu++) { 1889 1830 ··· 1890 1841 1891 1842 group_fd = get_group_fd(evsel, cpu, thread); 1892 1843 retry_open: 1893 - pr_debug2("sys_perf_event_open: pid %d cpu %d group_fd %d flags %#lx", 1894 - pid, cpus->map[cpu], group_fd, flags); 1895 - 1896 1844 test_attr__ready(); 1897 1845 1898 - fd = sys_perf_event_open(&evsel->attr, pid, cpus->map[cpu], 1899 - group_fd, flags); 1846 + fd = perf_event_open(evsel, pid, cpus->map[cpu], 1847 + group_fd, flags); 1900 1848 1901 1849 FD(evsel, cpu, thread) = fd; 1902 1850
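The same quick back-off can be reproduced by any direct perf_event_open() user; a standalone sketch of the idea, using the raw syscall outside perf's helpers:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
	struct perf_event_attr attr;
	long fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.exclude_kernel = 1;
	attr.sample_period = 1;
	attr.precise_ip = 3; /* ask for the maximum, then back off */

	for (;;) {
		fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
		if (fd >= 0 || !attr.precise_ip || errno != ENOTSUP)
			break;
		attr.precise_ip--; /* same fallback as in the hunk above */
	}
	printf("precise_ip settled at %d\n", attr.precise_ip);
	if (fd >= 0)
		close(fd);
	return 0;
}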
+8 -12
tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
··· 251 251 if (!(decoder->tsc_ctc_ratio_n % decoder->tsc_ctc_ratio_d))
252 252 decoder->tsc_ctc_mult = decoder->tsc_ctc_ratio_n /
253 253 decoder->tsc_ctc_ratio_d;
254 -
255 - /*
256 - * Allow for timestamps appearing to backwards because a TSC
257 - * packet has slipped past a MTC packet, so allow 2 MTC ticks
258 - * or ...
259 - */
260 - decoder->tsc_slip = multdiv(2 << decoder->mtc_shift,
261 - decoder->tsc_ctc_ratio_n,
262 - decoder->tsc_ctc_ratio_d);
263 254 }
264 - /* ... or 0x100 paranoia */
265 - if (decoder->tsc_slip < 0x100)
266 - decoder->tsc_slip = 0x100;
255
256 + /*
257 + * A TSC packet can slip past MTC packets so that the timestamp appears
258 + * to go backwards. One estimate is that it can be up to about 40 CPU
259 + * cycles, which is certainly less than 0x1000 TSC ticks, but accept
260 + * slippage an order of magnitude more to be on the safe side.
261 + */
262 + decoder->tsc_slip = 0x10000;
267 263
268 264 intel_pt_log("timestamp: mtc_shift %u\n", decoder->mtc_shift);
269 265 intel_pt_log("timestamp: tsc_ctc_ratio_n %u\n", decoder->tsc_ctc_ratio_n);
+20 -12
tools/perf/util/machine.c
··· 1421 1421 machine->vmlinux_map->end = ~0ULL; 1422 1422 } 1423 1423 1424 + static void machine__update_kernel_mmap(struct machine *machine, 1425 + u64 start, u64 end) 1426 + { 1427 + struct map *map = machine__kernel_map(machine); 1428 + 1429 + map__get(map); 1430 + map_groups__remove(&machine->kmaps, map); 1431 + 1432 + machine__set_kernel_mmap(machine, start, end); 1433 + 1434 + map_groups__insert(&machine->kmaps, map); 1435 + map__put(map); 1436 + } 1437 + 1424 1438 int machine__create_kernel_maps(struct machine *machine) 1425 1439 { 1426 1440 struct dso *kernel = machine__get_kernel(machine); ··· 1467 1453 goto out_put; 1468 1454 } 1469 1455 1470 - /* we have a real start address now, so re-order the kmaps */ 1471 - map = machine__kernel_map(machine); 1472 - 1473 - map__get(map); 1474 - map_groups__remove(&machine->kmaps, map); 1475 - 1476 - /* assume it's the last in the kmaps */ 1477 - machine__set_kernel_mmap(machine, addr, ~0ULL); 1478 - 1479 - map_groups__insert(&machine->kmaps, map); 1480 - map__put(map); 1456 + /* 1457 + * we have a real start address now, so re-order the kmaps 1458 + * assume it's the last in the kmaps 1459 + */ 1460 + machine__update_kernel_mmap(machine, addr, ~0ULL); 1481 1461 } 1482 1462 1483 1463 if (machine__create_extra_kernel_maps(machine, kernel)) ··· 1607 1599 if (strstr(kernel->long_name, "vmlinux")) 1608 1600 dso__set_short_name(kernel, "[kernel.vmlinux]", false); 1609 1601 1610 - machine__set_kernel_mmap(machine, event->mmap.start, 1602 + machine__update_kernel_mmap(machine, event->mmap.start, 1611 1603 event->mmap.start + event->mmap.len); 1612 1604 1613 1605 /*
+10
tools/perf/util/pmu.c
··· 732 732
733 733 if (!is_arm_pmu_core(name)) {
734 734 pname = pe->pmu ? pe->pmu : "cpu";
735 +
736 + /*
737 + * uncore alias may be from a different PMU
738 + * with a common prefix
739 + */
740 + if (pmu_is_uncore(name) &&
741 + !strncmp(pname, name, strlen(pname)))
742 + goto new_alias;
743 +
735 744 if (strcmp(pname, name))
736 745 continue;
737 746 }
738 747
748 + new_alias:
739 749 /* need type casts to override 'const' */
740 750 __perf_pmu__new_alias(head, NULL, (char *)pe->name,
741 751 (char *)pe->desc, (char *)pe->event,
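For illustration, the prefix test lets one event-table alias attach to every matching uncore PMU instance; the PMU names below are hypothetical:

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *pname = "uncore_cbox"; /* hypothetical event-table PMU name */
	const char *sysfs[] = { "uncore_cbox_0", "uncore_cbox_1", "uncore_imc" };
	unsigned int i;

	/* Matches the cbox instances but not uncore_imc. */
	for (i = 0; i < sizeof(sysfs) / sizeof(sysfs[0]); i++)
		if (!strncmp(pname, sysfs[i], strlen(pname)))
			printf("alias attaches to %s\n", sysfs[i]);
	return 0;
}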