Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

scsi: ufs: core: Remove HPB support

Interest among UFS users in HPB has declined significantly, and I am not
aware of any current users of the HPB functionality. Hence, remove HPB
support from the kernel.

A note: the JEDEC work on a successor for HPB is nearing completion:
Zoned Storage for UFS (ZUFS), which combines the UFS standard with ZBC-2.

Acked-by: Avri Altman <avri.altman@wdc.com>
Reviewed-by: Bean Huo <beanhuo@micron.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: ChanWoo Lee <cw9316.lee@samsung.com>
Cc: Daejun Park <daejun7.park@samsung.com>
Cc: Keoseong Park <keosung.park@samsung.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20230719165758.2787573-1-bvanassche@acm.org
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

Authored by Bart Van Assche, committed by Martin K. Petersen (7e9609d2 f669b8a6)

+1 -3405

Documentation/ABI/testing/sysfs-driver-ufs  (-247)
··· 1437 1437 If avail_wb_buff < wb_flush_threshold, it indicates that WriteBooster buffer needs to 1438 1438 be flushed, otherwise it is not necessary. 1439 1439 1440 - What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/hpb_version 1441 - What: /sys/bus/platform/devices/*.ufs/device_descriptor/hpb_version 1442 - Date: June 2021 1443 - Contact: Daejun Park <daejun7.park@samsung.com> 1444 - Description: This entry shows the HPB specification version. 1445 - The full information about the descriptor can be found in the UFS 1446 - HPB (Host Performance Booster) Extension specifications. 1447 - Example: version 1.2.3 = 0123h 1448 - 1449 - The file is read only. 1450 - 1451 - What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/hpb_control 1452 - What: /sys/bus/platform/devices/*.ufs/device_descriptor/hpb_control 1453 - Date: June 2021 1454 - Contact: Daejun Park <daejun7.park@samsung.com> 1455 - Description: This entry shows an indication of the HPB control mode. 1456 - 00h: Host control mode 1457 - 01h: Device control mode 1458 - 1459 - The file is read only. 1460 - 1461 - What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_region_size 1462 - What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/hpb_region_size 1463 - Date: June 2021 1464 - Contact: Daejun Park <daejun7.park@samsung.com> 1465 - Description: This entry shows the bHPBRegionSize which can be calculated 1466 - as in the following (in bytes): 1467 - HPB Region size = 512B * 2^bHPBRegionSize 1468 - 1469 - The file is read only. 1470 - 1471 - What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_number_lu 1472 - What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/hpb_number_lu 1473 - Date: June 2021 1474 - Contact: Daejun Park <daejun7.park@samsung.com> 1475 - Description: This entry shows the maximum number of HPB LU supported by 1476 - the device. 1477 - 00h: HPB is not supported by the device. 
1478 - 01h ~ 20h: Maximum number of HPB LU supported by the device 1479 - 1480 - The file is read only. 1481 - 1482 - What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_subregion_size 1483 - What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/hpb_subregion_size 1484 - Date: June 2021 1485 - Contact: Daejun Park <daejun7.park@samsung.com> 1486 - Description: This entry shows the bHPBSubRegionSize, which can be 1487 - calculated as in the following (in bytes) and shall be a multiple of 1488 - logical block size: 1489 - HPB Sub-Region size = 512B x 2^bHPBSubRegionSize 1490 - bHPBSubRegionSize shall not exceed bHPBRegionSize. 1491 - 1492 - The file is read only. 1493 - 1494 - What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_max_active_regions 1495 - What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/hpb_max_active_regions 1496 - Date: June 2021 1497 - Contact: Daejun Park <daejun7.park@samsung.com> 1498 - Description: This entry shows the maximum number of active HPB regions that 1499 - is supported by the device. 1500 - 1501 - The file is read only. 1502 - 1503 - What: /sys/class/scsi_device/*/device/unit_descriptor/hpb_lu_max_active_regions 1504 - Date: June 2021 1505 - Contact: Daejun Park <daejun7.park@samsung.com> 1506 - Description: This entry shows the maximum number of HPB regions assigned to 1507 - the HPB logical unit. 1508 - 1509 - The file is read only. 1510 - 1511 - What: /sys/class/scsi_device/*/device/unit_descriptor/hpb_pinned_region_start_offset 1512 - Date: June 2021 1513 - Contact: Daejun Park <daejun7.park@samsung.com> 1514 - Description: This entry shows the start offset of HPB pinned region. 1515 - 1516 - The file is read only. 1517 - 1518 - What: /sys/class/scsi_device/*/device/unit_descriptor/hpb_number_pinned_regions 1519 - Date: June 2021 1520 - Contact: Daejun Park <daejun7.park@samsung.com> 1521 - Description: This entry shows the number of HPB pinned regions assigned to 1522 - the HPB logical unit. 
1523 - 1524 - The file is read only. 1525 - 1526 - What: /sys/class/scsi_device/*/device/hpb_stats/hit_cnt 1527 - Date: June 2021 1528 - Contact: Daejun Park <daejun7.park@samsung.com> 1529 - Description: This entry shows the number of reads that changed to HPB read. 1530 - 1531 - The file is read only. 1532 - 1533 - What: /sys/class/scsi_device/*/device/hpb_stats/miss_cnt 1534 - Date: June 2021 1535 - Contact: Daejun Park <daejun7.park@samsung.com> 1536 - Description: This entry shows the number of reads that cannot be changed to 1537 - HPB read. 1538 - 1539 - The file is read only. 1540 - 1541 - What: /sys/class/scsi_device/*/device/hpb_stats/rcmd_noti_cnt 1542 - Date: June 2021 1543 - Contact: Daejun Park <daejun7.park@samsung.com> 1544 - Description: This entry shows the number of response UPIUs that has 1545 - recommendations for activating sub-regions and/or inactivating region. 1546 - 1547 - The file is read only. 1548 - 1549 - What: /sys/class/scsi_device/*/device/hpb_stats/rcmd_active_cnt 1550 - Date: June 2021 1551 - Contact: Daejun Park <daejun7.park@samsung.com> 1552 - Description: For the HPB device control mode, this entry shows the number of 1553 - active sub-regions recommended by response UPIUs. For the HPB host control 1554 - mode, this entry shows the number of active sub-regions recommended by the 1555 - HPB host control mode heuristic algorithm. 1556 - 1557 - The file is read only. 1558 - 1559 - What: /sys/class/scsi_device/*/device/hpb_stats/rcmd_inactive_cnt 1560 - Date: June 2021 1561 - Contact: Daejun Park <daejun7.park@samsung.com> 1562 - Description: For the HPB device control mode, this entry shows the number of 1563 - inactive regions recommended by response UPIUs. For the HPB host control 1564 - mode, this entry shows the number of inactive regions recommended by the 1565 - HPB host control mode heuristic algorithm. 1566 - 1567 - The file is read only. 
1568 - 1569 - What: /sys/class/scsi_device/*/device/hpb_stats/map_req_cnt 1570 - Date: June 2021 1571 - Contact: Daejun Park <daejun7.park@samsung.com> 1572 - Description: This entry shows the number of read buffer commands for 1573 - activating sub-regions recommended by response UPIUs. 1574 - 1575 - The file is read only. 1576 - 1577 - What: /sys/class/scsi_device/*/device/hpb_params/requeue_timeout_ms 1578 - Date: June 2021 1579 - Contact: Daejun Park <daejun7.park@samsung.com> 1580 - Description: This entry shows the requeue timeout threshold for write buffer 1581 - command in ms. The value can be changed by writing an integer to 1582 - this entry. 1583 - 1584 - What: /sys/bus/platform/drivers/ufshcd/*/attributes/max_data_size_hpb_single_cmd 1585 - What: /sys/bus/platform/devices/*.ufs/attributes/max_data_size_hpb_single_cmd 1586 - Date: June 2021 1587 - Contact: Daejun Park <daejun7.park@samsung.com> 1588 - Description: This entry shows the maximum HPB data size for using a single HPB 1589 - command. 1590 - 1591 - === ======== 1592 - 00h 4KB 1593 - 01h 8KB 1594 - 02h 12KB 1595 - ... 1596 - FFh 1024KB 1597 - === ======== 1598 - 1599 - The file is read only. 1600 - 1601 - What: /sys/bus/platform/drivers/ufshcd/*/flags/hpb_enable 1602 - What: /sys/bus/platform/devices/*.ufs/flags/hpb_enable 1603 - Date: June 2021 1604 - Contact: Daejun Park <daejun7.park@samsung.com> 1605 - Description: This entry shows the status of HPB. 1606 - 1607 - == ============================ 1608 - 0 HPB is not enabled. 1609 - 1 HPB is enabled 1610 - == ============================ 1611 - 1612 - The file is read only. 1613 - 1614 1440 Contact: Daniil Lunev <dlunev@chromium.org> 1615 1441 What: /sys/bus/platform/drivers/ufshcd/*/capabilities/ 1616 1442 What: /sys/bus/platform/devices/*.ufs/capabilities/ ··· 1474 1648 1475 1649 The file is read only. 
1476 1650 1477 - What: /sys/class/scsi_device/*/device/hpb_param_sysfs/activation_thld 1478 - Date: February 2021 1479 - Contact: Avri Altman <avri.altman@wdc.com> 1480 - Description: In host control mode, reads are the major source of activation 1481 - trials. Once this threshold hs met, the region is added to the 1482 - "to-be-activated" list. Since we reset the read counter upon 1483 - write, this include sending a rb command updating the region 1484 - ppn as well. 1485 - 1486 - What: /sys/class/scsi_device/*/device/hpb_param_sysfs/normalization_factor 1487 - Date: February 2021 1488 - Contact: Avri Altman <avri.altman@wdc.com> 1489 - Description: In host control mode, we think of the regions as "buckets". 1490 - Those buckets are being filled with reads, and emptied on write. 1491 - We use entries_per_srgn - the amount of blocks in a subregion as 1492 - our bucket size. This applies because HPB1.0 only handles 1493 - single-block reads. Once the bucket size is crossed, we trigger 1494 - a normalization work - not only to avoid overflow, but mainly 1495 - because we want to keep those counters normalized, as we are 1496 - using those reads as a comparative score, to make various decisions. 1497 - The normalization is dividing (shift right) the read counter by 1498 - the normalization_factor. If during consecutive normalizations 1499 - an active region has exhausted its reads - inactivate it. 1500 - 1501 - What: /sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_enter 1502 - Date: February 2021 1503 - Contact: Avri Altman <avri.altman@wdc.com> 1504 - Description: Region deactivation is often due to the fact that eviction took 1505 - place: A region becomes active at the expense of another. This is 1506 - happening when the max-active-regions limit has been crossed. 1507 - In host mode, eviction is considered an extreme measure. We 1508 - want to verify that the entering region has enough reads, and 1509 - the exiting region has much fewer reads. 
eviction_thld_enter is 1510 - the min reads that a region must have in order to be considered 1511 - a candidate for evicting another region. 1512 - 1513 - What: /sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_exit 1514 - Date: February 2021 1515 - Contact: Avri Altman <avri.altman@wdc.com> 1516 - Description: Same as above for the exiting region. A region is considered to 1517 - be a candidate for eviction only if it has fewer reads than 1518 - eviction_thld_exit. 1519 - 1520 - What: /sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_ms 1521 - Date: February 2021 1522 - Contact: Avri Altman <avri.altman@wdc.com> 1523 - Description: In order not to hang on to "cold" regions, we inactivate 1524 - a region that has no READ access for a predefined amount of 1525 - time - read_timeout_ms. If read_timeout_ms has expired, and the 1526 - region is dirty, it is less likely that we can make any use of 1527 - HPB reading it so we inactivate it. Still, deactivation has 1528 - its overhead, and we may still benefit from HPB reading this 1529 - region if it is clean - see read_timeout_expiries. 1530 - 1531 - What: /sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_expiries 1532 - Date: February 2021 1533 - Contact: Avri Altman <avri.altman@wdc.com> 1534 - Description: If the region read timeout has expired, but the region is clean, 1535 - just re-wind its timer for another spin. Do that as long as it 1536 - is clean and did not exhaust its read_timeout_expiries threshold. 1537 - 1538 - What: /sys/class/scsi_device/*/device/hpb_param_sysfs/timeout_polling_interval_ms 1539 - Date: February 2021 1540 - Contact: Avri Altman <avri.altman@wdc.com> 1541 - Description: The frequency with which the delayed worker that checks the 1542 - read_timeouts is awakened. 
1543 - 1544 - What: /sys/class/scsi_device/*/device/hpb_param_sysfs/inflight_map_req 1545 - Date: February 2021 1546 - Contact: Avri Altman <avri.altman@wdc.com> 1547 - Description: In host control mode the host is the originator of map requests. 1548 - To avoid flooding the device with map requests, use a simple throttling 1549 - mechanism that limits the number of inflight map requests.
drivers/ufs/core/Kconfig  (-8)
···
 	  capabilities of the UFS device (if present) to perform crypto
 	  operations on data being transferred to/from the device.
 
-config SCSI_UFS_HPB
-	bool "Support UFS Host Performance Booster"
-	help
-	  The UFS HPB feature improves random read performance. It caches
-	  L2P (logical to physical) map of UFS to host DRAM. The driver uses HPB
-	  read command by piggybacking physical page number for bypassing FTL (flash
-	  translation layer)'s L2P address translation.
-
 config SCSI_UFS_FAULT_INJECTION
 	bool "UFS Fault Injection Support"
 	depends on FAULT_INJECTION
drivers/ufs/core/Makefile  (-1)
···
 ufshcd-core-$(CONFIG_DEBUG_FS)			+= ufs-debugfs.o
 ufshcd-core-$(CONFIG_SCSI_UFS_BSG)		+= ufs_bsg.o
 ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO)		+= ufshcd-crypto.o
-ufshcd-core-$(CONFIG_SCSI_UFS_HPB)		+= ufshpb.o
 ufshcd-core-$(CONFIG_SCSI_UFS_FAULT_INJECTION)	+= ufs-fault-injection.o
 ufshcd-core-$(CONFIG_SCSI_UFS_HWMON)		+= ufs-hwmon.o
drivers/ufs/core/ufs-sysfs.c  (-22)
···
 UFS_DEVICE_DESC_PARAM(number_of_secure_wpa, _NUM_SEC_WPA, 1);
 UFS_DEVICE_DESC_PARAM(psa_max_data_size, _PSA_MAX_DATA, 4);
 UFS_DEVICE_DESC_PARAM(psa_state_timeout, _PSA_TMT, 1);
-UFS_DEVICE_DESC_PARAM(hpb_version, _HPB_VER, 2);
-UFS_DEVICE_DESC_PARAM(hpb_control, _HPB_CONTROL, 1);
 UFS_DEVICE_DESC_PARAM(ext_feature_sup, _EXT_UFS_FEATURE_SUP, 4);
 UFS_DEVICE_DESC_PARAM(wb_presv_us_en, _WB_PRESRV_USRSPC_EN, 1);
 UFS_DEVICE_DESC_PARAM(wb_type, _WB_TYPE, 1);
···
 	&dev_attr_number_of_secure_wpa.attr,
 	&dev_attr_psa_max_data_size.attr,
 	&dev_attr_psa_state_timeout.attr,
-	&dev_attr_hpb_version.attr,
-	&dev_attr_hpb_control.attr,
 	&dev_attr_ext_feature_sup.attr,
 	&dev_attr_wb_presv_us_en.attr,
 	&dev_attr_wb_type.attr,
···
 		_ENM4_MAX_NUM_UNITS, 4);
 UFS_GEOMETRY_DESC_PARAM(enh4_memory_capacity_adjustment_factor,
 		_ENM4_CAP_ADJ_FCTR, 2);
-UFS_GEOMETRY_DESC_PARAM(hpb_region_size, _HPB_REGION_SIZE, 1);
-UFS_GEOMETRY_DESC_PARAM(hpb_number_lu, _HPB_NUMBER_LU, 1);
-UFS_GEOMETRY_DESC_PARAM(hpb_subregion_size, _HPB_SUBREGION_SIZE, 1);
-UFS_GEOMETRY_DESC_PARAM(hpb_max_active_regions, _HPB_MAX_ACTIVE_REGS, 2);
 UFS_GEOMETRY_DESC_PARAM(wb_max_alloc_units, _WB_MAX_ALLOC_UNITS, 4);
 UFS_GEOMETRY_DESC_PARAM(wb_max_wb_luns, _WB_MAX_WB_LUNS, 1);
 UFS_GEOMETRY_DESC_PARAM(wb_buff_cap_adj, _WB_BUFF_CAP_ADJ, 1);
···
 	&dev_attr_enh3_memory_capacity_adjustment_factor.attr,
 	&dev_attr_enh4_memory_max_alloc_units.attr,
 	&dev_attr_enh4_memory_capacity_adjustment_factor.attr,
-	&dev_attr_hpb_region_size.attr,
-	&dev_attr_hpb_number_lu.attr,
-	&dev_attr_hpb_subregion_size.attr,
-	&dev_attr_hpb_max_active_regions.attr,
 	&dev_attr_wb_max_alloc_units.attr,
 	&dev_attr_wb_max_wb_luns.attr,
 	&dev_attr_wb_buff_cap_adj.attr,
···
 UFS_FLAG(wb_enable, _WB_EN);
 UFS_FLAG(wb_flush_en, _WB_BUFF_FLUSH_EN);
 UFS_FLAG(wb_flush_during_h8, _WB_BUFF_FLUSH_DURING_HIBERN8);
-UFS_FLAG(hpb_enable, _HPB_EN);
 
 static struct attribute *ufs_sysfs_device_flags[] = {
 	&dev_attr_device_init.attr,
···
 	&dev_attr_wb_enable.attr,
 	&dev_attr_wb_flush_en.attr,
 	&dev_attr_wb_flush_during_h8.attr,
-	&dev_attr_hpb_enable.attr,
 	NULL,
 };
···
 	static DEVICE_ATTR_RO(_name)
 
 UFS_ATTRIBUTE(boot_lun_enabled, _BOOT_LU_EN);
-UFS_ATTRIBUTE(max_data_size_hpb_single_cmd, _MAX_HPB_SINGLE_CMD);
 UFS_ATTRIBUTE(current_power_mode, _POWER_MODE);
 UFS_ATTRIBUTE(active_icc_level, _ACTIVE_ICC_LVL);
 UFS_ATTRIBUTE(ooo_data_enabled, _OOO_DATA_EN);
···
 
 static struct attribute *ufs_sysfs_attributes[] = {
 	&dev_attr_boot_lun_enabled.attr,
-	&dev_attr_max_data_size_hpb_single_cmd.attr,
 	&dev_attr_current_power_mode.attr,
 	&dev_attr_active_icc_level.attr,
 	&dev_attr_ooo_data_enabled.attr,
···
 UFS_UNIT_DESC_PARAM(physical_memory_resourse_count, _PHY_MEM_RSRC_CNT, 8);
 UFS_UNIT_DESC_PARAM(context_capabilities, _CTX_CAPABILITIES, 2);
 UFS_UNIT_DESC_PARAM(large_unit_granularity, _LARGE_UNIT_SIZE_M1, 1);
-UFS_UNIT_DESC_PARAM(hpb_lu_max_active_regions, _HPB_LU_MAX_ACTIVE_RGNS, 2);
-UFS_UNIT_DESC_PARAM(hpb_pinned_region_start_offset, _HPB_PIN_RGN_START_OFF, 2);
-UFS_UNIT_DESC_PARAM(hpb_number_pinned_regions, _HPB_NUM_PIN_RGNS, 2);
 UFS_UNIT_DESC_PARAM(wb_buf_alloc_units, _WB_BUF_ALLOC_UNITS, 4);
 
 static struct attribute *ufs_sysfs_unit_descriptor[] = {
···
 	&dev_attr_physical_memory_resourse_count.attr,
 	&dev_attr_context_capabilities.attr,
 	&dev_attr_large_unit_granularity.attr,
-	&dev_attr_hpb_lu_max_active_regions.attr,
-	&dev_attr_hpb_pinned_region_start_offset.attr,
-	&dev_attr_hpb_number_pinned_regions.attr,
 	&dev_attr_wb_buf_alloc_units.attr,
 	NULL,
 };
drivers/ufs/core/ufshcd.c  (+1 -66)
···
 #include "ufs-fault-injection.h"
 #include "ufs_bsg.h"
 #include "ufshcd-crypto.h"
-#include "ufshpb.h"
 #include <asm/unaligned.h>
 
 #define CREATE_TRACE_POINTS
···
 	/* UFS cards deviations table */
 	{ .wmanufacturerid = UFS_VENDOR_MICRON,
 	  .model = UFS_ANY_MODEL,
-	  .quirk = UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM |
-		UFS_DEVICE_QUIRK_SWAP_L2P_ENTRY_FOR_HPB_READ },
+	  .quirk = UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM },
 	{ .wmanufacturerid = UFS_VENDOR_SAMSUNG,
 	  .model = UFS_ANY_MODEL,
 	  .quirk = UFS_DEVICE_QUIRK_DELAY_BEFORE_LPM |
···
 	lrbp->req_abort_skip = false;
 
-	ufshpb_prep(hba, lrbp);
-
 	ufshcd_comp_scsi_upiu(hba, lrbp);
 
 	err = ufshcd_map_sg(hba, lrbp);
···
 	return scsi_change_queue_depth(sdev, min(depth, sdev->host->can_queue));
 }
 
-static void ufshcd_hpb_destroy(struct ufs_hba *hba, struct scsi_device *sdev)
-{
-	/* skip well-known LU */
-	if ((sdev->lun >= UFS_UPIU_MAX_UNIT_NUM_ID) ||
-	    !(hba->dev_info.hpb_enabled) || !ufshpb_is_allowed(hba))
-		return;
-
-	ufshpb_destroy_lu(hba, sdev);
-}
-
-static void ufshcd_hpb_configure(struct ufs_hba *hba, struct scsi_device *sdev)
-{
-	/* skip well-known LU */
-	if ((sdev->lun >= UFS_UPIU_MAX_UNIT_NUM_ID) ||
-	    !(hba->dev_info.hpb_enabled) || !ufshpb_is_allowed(hba))
-		return;
-
-	ufshpb_init_hpb_lu(hba, sdev);
-}
-
 /**
  * ufshcd_slave_configure - adjust SCSI device configurations
  * @sdev: pointer to SCSI device
···
 {
 	struct ufs_hba *hba = shost_priv(sdev->host);
 	struct request_queue *q = sdev->request_queue;
-
-	ufshcd_hpb_configure(hba, sdev);
 
 	blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
 	if (hba->quirks & UFSHCD_QUIRK_4KB_DMA_ALIGNMENT)
···
 	unsigned long flags;
 
 	hba = shost_priv(sdev->host);
-
-	ufshcd_hpb_destroy(hba, sdev);
 
 	/* Drop the reference as it won't be needed anymore */
 	if (ufshcd_scsi_to_upiu_lun(sdev->lun) == UFS_UPIU_UFS_DEVICE_WLUN) {
···
 		    ufshcd_is_exception_event(lrbp->ucd_rsp_ptr))
 			/* Flushed in suspend */
 			schedule_work(&hba->eeh_work);
-
-		if (scsi_status == SAM_STAT_GOOD)
-			ufshpb_rsp_upiu(hba, lrbp);
 		break;
 	case UPIU_TRANSACTION_REJECT_UPIU:
 		/* TODO: handle Reject UPIU Response */
···
 	 * Stop the host controller and complete the requests
 	 * cleared by h/w
 	 */
-	ufshpb_toggle_state(hba, HPB_PRESENT, HPB_RESET);
 	ufshcd_hba_stop(hba);
 	hba->silence_err_logs = true;
 	ufshcd_complete_requests(hba, true);
···
 {
 	int err;
 	u8 model_index;
-	u8 b_ufs_feature_sup;
 	u8 *desc_buf;
 	struct ufs_dev_info *dev_info = &hba->dev_info;
···
 	dev_info->wspecversion = desc_buf[DEVICE_DESC_PARAM_SPEC_VER] << 8 |
 				      desc_buf[DEVICE_DESC_PARAM_SPEC_VER + 1];
 	dev_info->bqueuedepth = desc_buf[DEVICE_DESC_PARAM_Q_DPTH];
-	b_ufs_feature_sup = desc_buf[DEVICE_DESC_PARAM_UFS_FEAT];
 
 	model_index = desc_buf[DEVICE_DESC_PARAM_PRDCT_NAME];
-
-	if (dev_info->wspecversion >= UFS_DEV_HPB_SUPPORT_VERSION &&
-	    (b_ufs_feature_sup & UFS_DEV_HPB_SUPPORT)) {
-		bool hpb_en = false;
-
-		ufshpb_get_dev_info(hba, desc_buf);
-
-		if (!ufshpb_is_legacy(hba))
-			err = ufshcd_query_flag_retry(hba,
-						      UPIU_QUERY_OPCODE_READ_FLAG,
-						      QUERY_FLAG_IDN_HPB_EN, 0,
-						      &hpb_en);
-
-		if (ufshpb_is_legacy(hba) || (!err && hpb_en))
-			dev_info->hpb_enabled = true;
-	}
 
 	err = ufshcd_read_string_desc(hba, model_index,
 				      &dev_info->model, SD_ASCII_STD);
···
 	else if (desc_buf[GEOMETRY_DESC_PARAM_MAX_NUM_LUN] == 0)
 		hba->dev_info.max_lu_supported = 8;
 
-	if (desc_buf[QUERY_DESC_LENGTH_OFFSET] >=
-	    GEOMETRY_DESC_PARAM_HPB_MAX_ACTIVE_REGS)
-		ufshpb_get_geo_info(hba, desc_buf);
-
 out:
 	kfree(desc_buf);
 	return err;
···
 	}
 
 	ufs_bsg_probe(hba);
-	ufshpb_init(hba);
 	scsi_scan_host(hba->host);
 	pm_runtime_put_sync(hba->dev);
···
 	/* Enable Auto-Hibernate if configured */
 	ufshcd_auto_hibern8_enable(hba);
 
-	ufshpb_toggle_state(hba, HPB_RESET, HPB_PRESENT);
 out:
 	spin_lock_irqsave(hba->host->host_lock, flags);
 	if (ret)
···
 static const struct attribute_group *ufshcd_driver_groups[] = {
 	&ufs_sysfs_unit_descriptor_group,
 	&ufs_sysfs_lun_attributes_group,
-#ifdef CONFIG_SCSI_UFS_HPB
-	&ufs_sysfs_hpb_stat_group,
-	&ufs_sysfs_hpb_param_group,
-#endif
 	NULL,
 };
···
 		req_link_state = UIC_LINK_OFF_STATE;
 	}
 
-	ufshpb_suspend(hba);
-
 	/*
 	 * If we can't transition into any of the low power modes
 	 * just gate the clocks.
···
 		ufshcd_update_evt_hist(hba, UFS_EVT_WL_SUSP_ERR, (u32)ret);
 		hba->clk_gating.is_suspended = false;
 		ufshcd_release(hba);
-		ufshpb_resume(hba);
 	}
 	hba->pm_op_in_progress = false;
 	return ret;
···
 	/* Enable Auto-Hibernate if configured */
 	ufshcd_auto_hibern8_enable(hba);
 
-	ufshpb_resume(hba);
 	goto out;
 
 set_old_link_state:
···
 	ufshcd_rpm_get_sync(hba);
 	ufs_hwmon_remove(hba);
 	ufs_bsg_remove(hba);
-	ufshpb_remove(hba);
 	ufs_sysfs_remove_nodes(hba->dev);
 	blk_mq_destroy_queue(hba->tmf_queue);
 	blk_put_queue(hba->tmf_queue);
drivers/ufs/core/ufshpb.c  (-2668)
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* 3 - * Universal Flash Storage Host Performance Booster 4 - * 5 - * Copyright (C) 2017-2021 Samsung Electronics Co., Ltd. 6 - * 7 - * Authors: 8 - * Yongmyung Lee <ymhungry.lee@samsung.com> 9 - * Jinyoung Choi <j-young.choi@samsung.com> 10 - */ 11 - 12 - #include <asm/unaligned.h> 13 - #include <linux/delay.h> 14 - #include <linux/device.h> 15 - #include <linux/module.h> 16 - #include <scsi/scsi_cmnd.h> 17 - 18 - #include "ufshcd-priv.h" 19 - #include "ufshpb.h" 20 - #include "../../scsi/sd.h" 21 - 22 - #define ACTIVATION_THRESHOLD 8 /* 8 IOs */ 23 - #define READ_TO_MS 1000 24 - #define READ_TO_EXPIRIES 100 25 - #define POLLING_INTERVAL_MS 200 26 - #define THROTTLE_MAP_REQ_DEFAULT 1 27 - 28 - /* memory management */ 29 - static struct kmem_cache *ufshpb_mctx_cache; 30 - static mempool_t *ufshpb_mctx_pool; 31 - static mempool_t *ufshpb_page_pool; 32 - /* A cache size of 2MB can cache ppn in the 1GB range. */ 33 - static unsigned int ufshpb_host_map_kbytes = SZ_2K; 34 - static int tot_active_srgn_pages; 35 - 36 - static struct workqueue_struct *ufshpb_wq; 37 - 38 - static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx, 39 - int srgn_idx); 40 - 41 - bool ufshpb_is_allowed(struct ufs_hba *hba) 42 - { 43 - return !(hba->ufshpb_dev.hpb_disabled); 44 - } 45 - 46 - /* HPB version 1.0 is called as legacy version. 
*/ 47 - bool ufshpb_is_legacy(struct ufs_hba *hba) 48 - { 49 - return hba->ufshpb_dev.is_legacy; 50 - } 51 - 52 - static struct ufshpb_lu *ufshpb_get_hpb_data(struct scsi_device *sdev) 53 - { 54 - return sdev->hostdata; 55 - } 56 - 57 - static int ufshpb_get_state(struct ufshpb_lu *hpb) 58 - { 59 - return atomic_read(&hpb->hpb_state); 60 - } 61 - 62 - static void ufshpb_set_state(struct ufshpb_lu *hpb, int state) 63 - { 64 - atomic_set(&hpb->hpb_state, state); 65 - } 66 - 67 - static int ufshpb_is_valid_srgn(struct ufshpb_region *rgn, 68 - struct ufshpb_subregion *srgn) 69 - { 70 - return rgn->rgn_state != HPB_RGN_INACTIVE && 71 - srgn->srgn_state == HPB_SRGN_VALID; 72 - } 73 - 74 - static bool ufshpb_is_read_cmd(struct scsi_cmnd *cmd) 75 - { 76 - return req_op(scsi_cmd_to_rq(cmd)) == REQ_OP_READ; 77 - } 78 - 79 - static bool ufshpb_is_write_or_discard(struct scsi_cmnd *cmd) 80 - { 81 - return op_is_write(req_op(scsi_cmd_to_rq(cmd))) || 82 - op_is_discard(req_op(scsi_cmd_to_rq(cmd))); 83 - } 84 - 85 - static bool ufshpb_is_supported_chunk(struct ufshpb_lu *hpb, int transfer_len) 86 - { 87 - return transfer_len <= hpb->pre_req_max_tr_len; 88 - } 89 - 90 - static bool ufshpb_is_general_lun(int lun) 91 - { 92 - return lun < UFS_UPIU_MAX_UNIT_NUM_ID; 93 - } 94 - 95 - static bool ufshpb_is_pinned_region(struct ufshpb_lu *hpb, int rgn_idx) 96 - { 97 - return hpb->lu_pinned_end != PINNED_NOT_SET && 98 - rgn_idx >= hpb->lu_pinned_start && rgn_idx <= hpb->lu_pinned_end; 99 - } 100 - 101 - static void ufshpb_kick_map_work(struct ufshpb_lu *hpb) 102 - { 103 - bool ret = false; 104 - unsigned long flags; 105 - 106 - if (ufshpb_get_state(hpb) != HPB_PRESENT) 107 - return; 108 - 109 - spin_lock_irqsave(&hpb->rsp_list_lock, flags); 110 - if (!list_empty(&hpb->lh_inact_rgn) || !list_empty(&hpb->lh_act_srgn)) 111 - ret = true; 112 - spin_unlock_irqrestore(&hpb->rsp_list_lock, flags); 113 - 114 - if (ret) 115 - queue_work(ufshpb_wq, &hpb->map_work); 116 - } 117 - 118 - static bool 
ufshpb_is_hpb_rsp_valid(struct ufs_hba *hba, 119 - struct ufshcd_lrb *lrbp, 120 - struct utp_hpb_rsp *rsp_field) 121 - { 122 - /* Check HPB_UPDATE_ALERT */ 123 - if (!(lrbp->ucd_rsp_ptr->header.dword_2 & 124 - upiu_header_dword(0, 2, 0, 0))) 125 - return false; 126 - 127 - if (be16_to_cpu(rsp_field->sense_data_len) != DEV_SENSE_SEG_LEN || 128 - rsp_field->desc_type != DEV_DES_TYPE || 129 - rsp_field->additional_len != DEV_ADDITIONAL_LEN || 130 - rsp_field->active_rgn_cnt > MAX_ACTIVE_NUM || 131 - rsp_field->inactive_rgn_cnt > MAX_INACTIVE_NUM || 132 - rsp_field->hpb_op == HPB_RSP_NONE || 133 - (rsp_field->hpb_op == HPB_RSP_REQ_REGION_UPDATE && 134 - !rsp_field->active_rgn_cnt && !rsp_field->inactive_rgn_cnt)) 135 - return false; 136 - 137 - if (!ufshpb_is_general_lun(rsp_field->lun)) { 138 - dev_warn(hba->dev, "ufshpb: lun(%d) not supported\n", 139 - lrbp->lun); 140 - return false; 141 - } 142 - 143 - return true; 144 - } 145 - 146 - static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx, 147 - int srgn_offset, int cnt, bool set_dirty) 148 - { 149 - struct ufshpb_region *rgn; 150 - struct ufshpb_subregion *srgn, *prev_srgn = NULL; 151 - int set_bit_len; 152 - int bitmap_len; 153 - unsigned long flags; 154 - 155 - next_srgn: 156 - rgn = hpb->rgn_tbl + rgn_idx; 157 - srgn = rgn->srgn_tbl + srgn_idx; 158 - 159 - if (likely(!srgn->is_last)) 160 - bitmap_len = hpb->entries_per_srgn; 161 - else 162 - bitmap_len = hpb->last_srgn_entries; 163 - 164 - if ((srgn_offset + cnt) > bitmap_len) 165 - set_bit_len = bitmap_len - srgn_offset; 166 - else 167 - set_bit_len = cnt; 168 - 169 - spin_lock_irqsave(&hpb->rgn_state_lock, flags); 170 - if (rgn->rgn_state != HPB_RGN_INACTIVE) { 171 - if (set_dirty) { 172 - if (srgn->srgn_state == HPB_SRGN_VALID) 173 - bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, 174 - set_bit_len); 175 - } else if (hpb->is_hcm) { 176 - /* rewind the read timer for lru regions */ 177 - rgn->read_timeout = ktime_add_ms(ktime_get(), 178 
- rgn->hpb->params.read_timeout_ms); 179 - rgn->read_timeout_expiries = 180 - rgn->hpb->params.read_timeout_expiries; 181 - } 182 - } 183 - spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); 184 - 185 - if (hpb->is_hcm && prev_srgn != srgn) { 186 - bool activate = false; 187 - 188 - spin_lock(&rgn->rgn_lock); 189 - if (set_dirty) { 190 - rgn->reads -= srgn->reads; 191 - srgn->reads = 0; 192 - set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); 193 - } else { 194 - srgn->reads++; 195 - rgn->reads++; 196 - if (srgn->reads == hpb->params.activation_thld) 197 - activate = true; 198 - } 199 - spin_unlock(&rgn->rgn_lock); 200 - 201 - if (activate || 202 - test_and_clear_bit(RGN_FLAG_UPDATE, &rgn->rgn_flags)) { 203 - spin_lock_irqsave(&hpb->rsp_list_lock, flags); 204 - ufshpb_update_active_info(hpb, rgn_idx, srgn_idx); 205 - spin_unlock_irqrestore(&hpb->rsp_list_lock, flags); 206 - dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, 207 - "activate region %d-%d\n", rgn_idx, srgn_idx); 208 - } 209 - 210 - prev_srgn = srgn; 211 - } 212 - 213 - srgn_offset = 0; 214 - if (++srgn_idx == hpb->srgns_per_rgn) { 215 - srgn_idx = 0; 216 - rgn_idx++; 217 - } 218 - 219 - cnt -= set_bit_len; 220 - if (cnt > 0) 221 - goto next_srgn; 222 - } 223 - 224 - static bool ufshpb_test_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx, 225 - int srgn_idx, int srgn_offset, int cnt) 226 - { 227 - struct ufshpb_region *rgn; 228 - struct ufshpb_subregion *srgn; 229 - int bitmap_len; 230 - int bit_len; 231 - 232 - next_srgn: 233 - rgn = hpb->rgn_tbl + rgn_idx; 234 - srgn = rgn->srgn_tbl + srgn_idx; 235 - 236 - if (!ufshpb_is_valid_srgn(rgn, srgn)) 237 - return true; 238 - 239 - /* 240 - * If the region state is active, mctx must be allocated. 241 - * In this case, check whether the region is evicted or 242 - * mctx allocation fail. 
243 - */ 244 - if (unlikely(!srgn->mctx)) { 245 - dev_err(&hpb->sdev_ufs_lu->sdev_dev, 246 - "no mctx in region %d subregion %d.\n", 247 - srgn->rgn_idx, srgn->srgn_idx); 248 - return true; 249 - } 250 - 251 - if (likely(!srgn->is_last)) 252 - bitmap_len = hpb->entries_per_srgn; 253 - else 254 - bitmap_len = hpb->last_srgn_entries; 255 - 256 - if ((srgn_offset + cnt) > bitmap_len) 257 - bit_len = bitmap_len - srgn_offset; 258 - else 259 - bit_len = cnt; 260 - 261 - if (find_next_bit(srgn->mctx->ppn_dirty, bit_len + srgn_offset, 262 - srgn_offset) < bit_len + srgn_offset) 263 - return true; 264 - 265 - srgn_offset = 0; 266 - if (++srgn_idx == hpb->srgns_per_rgn) { 267 - srgn_idx = 0; 268 - rgn_idx++; 269 - } 270 - 271 - cnt -= bit_len; 272 - if (cnt > 0) 273 - goto next_srgn; 274 - 275 - return false; 276 - } 277 - 278 - static inline bool is_rgn_dirty(struct ufshpb_region *rgn) 279 - { 280 - return test_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); 281 - } 282 - 283 - static int ufshpb_fill_ppn_from_page(struct ufshpb_lu *hpb, 284 - struct ufshpb_map_ctx *mctx, int pos, 285 - int len, __be64 *ppn_buf) 286 - { 287 - struct page *page; 288 - int index, offset; 289 - int copied; 290 - 291 - index = pos / (PAGE_SIZE / HPB_ENTRY_SIZE); 292 - offset = pos % (PAGE_SIZE / HPB_ENTRY_SIZE); 293 - 294 - if ((offset + len) <= (PAGE_SIZE / HPB_ENTRY_SIZE)) 295 - copied = len; 296 - else 297 - copied = (PAGE_SIZE / HPB_ENTRY_SIZE) - offset; 298 - 299 - page = mctx->m_page[index]; 300 - if (unlikely(!page)) { 301 - dev_err(&hpb->sdev_ufs_lu->sdev_dev, 302 - "error. 
cannot find page in mctx\n"); 303 - return -ENOMEM; 304 - } 305 - 306 - memcpy(ppn_buf, page_address(page) + (offset * HPB_ENTRY_SIZE), 307 - copied * HPB_ENTRY_SIZE); 308 - 309 - return copied; 310 - } 311 - 312 - static void 313 - ufshpb_get_pos_from_lpn(struct ufshpb_lu *hpb, unsigned long lpn, int *rgn_idx, 314 - int *srgn_idx, int *offset) 315 - { 316 - int rgn_offset; 317 - 318 - *rgn_idx = lpn >> hpb->entries_per_rgn_shift; 319 - rgn_offset = lpn & hpb->entries_per_rgn_mask; 320 - *srgn_idx = rgn_offset >> hpb->entries_per_srgn_shift; 321 - *offset = rgn_offset & hpb->entries_per_srgn_mask; 322 - } 323 - 324 - static void 325 - ufshpb_set_hpb_read_to_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp, 326 - __be64 ppn, u8 transfer_len) 327 - { 328 - unsigned char *cdb = lrbp->cmd->cmnd; 329 - __be64 ppn_tmp = ppn; 330 - cdb[0] = UFSHPB_READ; 331 - 332 - if (hba->dev_quirks & UFS_DEVICE_QUIRK_SWAP_L2P_ENTRY_FOR_HPB_READ) 333 - ppn_tmp = (__force __be64)swab64((__force u64)ppn); 334 - 335 - /* ppn value is stored as big-endian in the host memory */ 336 - memcpy(&cdb[6], &ppn_tmp, sizeof(__be64)); 337 - cdb[14] = transfer_len; 338 - cdb[15] = 0; 339 - 340 - lrbp->cmd->cmd_len = UFS_CDB_SIZE; 341 - } 342 - 343 - /* 344 - * This function will set up HPB read command using host-side L2P map data. 
345 - */ 346 - int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) 347 - { 348 - struct ufshpb_lu *hpb; 349 - struct ufshpb_region *rgn; 350 - struct ufshpb_subregion *srgn; 351 - struct scsi_cmnd *cmd = lrbp->cmd; 352 - u32 lpn; 353 - __be64 ppn; 354 - unsigned long flags; 355 - int transfer_len, rgn_idx, srgn_idx, srgn_offset; 356 - int err = 0; 357 - 358 - hpb = ufshpb_get_hpb_data(cmd->device); 359 - if (!hpb) 360 - return -ENODEV; 361 - 362 - if (ufshpb_get_state(hpb) == HPB_INIT) 363 - return -ENODEV; 364 - 365 - if (ufshpb_get_state(hpb) != HPB_PRESENT) { 366 - dev_notice(&hpb->sdev_ufs_lu->sdev_dev, 367 - "%s: ufshpb state is not PRESENT", __func__); 368 - return -ENODEV; 369 - } 370 - 371 - if (blk_rq_is_passthrough(scsi_cmd_to_rq(cmd)) || 372 - (!ufshpb_is_write_or_discard(cmd) && 373 - !ufshpb_is_read_cmd(cmd))) 374 - return 0; 375 - 376 - transfer_len = sectors_to_logical(cmd->device, 377 - blk_rq_sectors(scsi_cmd_to_rq(cmd))); 378 - if (unlikely(!transfer_len)) 379 - return 0; 380 - 381 - lpn = sectors_to_logical(cmd->device, blk_rq_pos(scsi_cmd_to_rq(cmd))); 382 - ufshpb_get_pos_from_lpn(hpb, lpn, &rgn_idx, &srgn_idx, &srgn_offset); 383 - rgn = hpb->rgn_tbl + rgn_idx; 384 - srgn = rgn->srgn_tbl + srgn_idx; 385 - 386 - /* If command type is WRITE or DISCARD, set bitmap as dirty */ 387 - if (ufshpb_is_write_or_discard(cmd)) { 388 - ufshpb_iterate_rgn(hpb, rgn_idx, srgn_idx, srgn_offset, 389 - transfer_len, true); 390 - return 0; 391 - } 392 - 393 - if (!ufshpb_is_supported_chunk(hpb, transfer_len)) 394 - return 0; 395 - 396 - if (hpb->is_hcm) { 397 - /* 398 - * in host control mode, reads are the main source for 399 - * activation trials. 
400 - */ 401 - ufshpb_iterate_rgn(hpb, rgn_idx, srgn_idx, srgn_offset, 402 - transfer_len, false); 403 - 404 - /* keep those counters normalized */ 405 - if (rgn->reads > hpb->entries_per_srgn) 406 - schedule_work(&hpb->ufshpb_normalization_work); 407 - } 408 - 409 - spin_lock_irqsave(&hpb->rgn_state_lock, flags); 410 - if (ufshpb_test_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset, 411 - transfer_len)) { 412 - hpb->stats.miss_cnt++; 413 - spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); 414 - return 0; 415 - } 416 - 417 - err = ufshpb_fill_ppn_from_page(hpb, srgn->mctx, srgn_offset, 1, &ppn); 418 - spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); 419 - if (unlikely(err < 0)) { 420 - /* 421 - * In this case, the region state is active, 422 - * but the ppn table is not allocated. 423 - * Make sure that ppn table must be allocated on 424 - * active state. 425 - */ 426 - dev_err(hba->dev, "get ppn failed. err %d\n", err); 427 - return err; 428 - } 429 - 430 - ufshpb_set_hpb_read_to_upiu(hba, lrbp, ppn, transfer_len); 431 - 432 - hpb->stats.hit_cnt++; 433 - return 0; 434 - } 435 - 436 - static struct ufshpb_req *ufshpb_get_req(struct ufshpb_lu *hpb, int rgn_idx, 437 - enum req_op op, bool atomic) 438 - { 439 - struct ufshpb_req *rq; 440 - struct request *req; 441 - int retries = HPB_MAP_REQ_RETRIES; 442 - 443 - rq = kmem_cache_alloc(hpb->map_req_cache, GFP_KERNEL); 444 - if (!rq) 445 - return NULL; 446 - 447 - retry: 448 - req = blk_mq_alloc_request(hpb->sdev_ufs_lu->request_queue, op, 449 - BLK_MQ_REQ_NOWAIT); 450 - 451 - if (!atomic && (PTR_ERR(req) == -EWOULDBLOCK) && (--retries > 0)) { 452 - usleep_range(3000, 3100); 453 - goto retry; 454 - } 455 - 456 - if (IS_ERR(req)) 457 - goto free_rq; 458 - 459 - rq->hpb = hpb; 460 - rq->req = req; 461 - rq->rb.rgn_idx = rgn_idx; 462 - 463 - return rq; 464 - 465 - free_rq: 466 - kmem_cache_free(hpb->map_req_cache, rq); 467 - return NULL; 468 - } 469 - 470 - static void ufshpb_put_req(struct ufshpb_lu *hpb, struct 
ufshpb_req *rq) 471 - { 472 - blk_mq_free_request(rq->req); 473 - kmem_cache_free(hpb->map_req_cache, rq); 474 - } 475 - 476 - static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb, 477 - struct ufshpb_subregion *srgn) 478 - { 479 - struct ufshpb_req *map_req; 480 - struct bio *bio; 481 - unsigned long flags; 482 - 483 - if (hpb->is_hcm && 484 - hpb->num_inflight_map_req >= hpb->params.inflight_map_req) { 485 - dev_info(&hpb->sdev_ufs_lu->sdev_dev, 486 - "map_req throttle. inflight %d throttle %d", 487 - hpb->num_inflight_map_req, 488 - hpb->params.inflight_map_req); 489 - return NULL; 490 - } 491 - 492 - map_req = ufshpb_get_req(hpb, srgn->rgn_idx, REQ_OP_DRV_IN, false); 493 - if (!map_req) 494 - return NULL; 495 - 496 - bio = bio_alloc(NULL, hpb->pages_per_srgn, 0, GFP_KERNEL); 497 - if (!bio) { 498 - ufshpb_put_req(hpb, map_req); 499 - return NULL; 500 - } 501 - 502 - map_req->bio = bio; 503 - 504 - map_req->rb.srgn_idx = srgn->srgn_idx; 505 - map_req->rb.mctx = srgn->mctx; 506 - 507 - spin_lock_irqsave(&hpb->param_lock, flags); 508 - hpb->num_inflight_map_req++; 509 - spin_unlock_irqrestore(&hpb->param_lock, flags); 510 - 511 - return map_req; 512 - } 513 - 514 - static void ufshpb_put_map_req(struct ufshpb_lu *hpb, 515 - struct ufshpb_req *map_req) 516 - { 517 - unsigned long flags; 518 - 519 - bio_put(map_req->bio); 520 - ufshpb_put_req(hpb, map_req); 521 - 522 - spin_lock_irqsave(&hpb->param_lock, flags); 523 - hpb->num_inflight_map_req--; 524 - spin_unlock_irqrestore(&hpb->param_lock, flags); 525 - } 526 - 527 - static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb, 528 - struct ufshpb_subregion *srgn) 529 - { 530 - struct ufshpb_region *rgn; 531 - u32 num_entries = hpb->entries_per_srgn; 532 - 533 - if (!srgn->mctx) { 534 - dev_err(&hpb->sdev_ufs_lu->sdev_dev, 535 - "no mctx in region %d subregion %d.\n", 536 - srgn->rgn_idx, srgn->srgn_idx); 537 - return -1; 538 - } 539 - 540 - if (unlikely(srgn->is_last)) 541 - num_entries = 
hpb->last_srgn_entries; 542 - 543 - bitmap_zero(srgn->mctx->ppn_dirty, num_entries); 544 - 545 - rgn = hpb->rgn_tbl + srgn->rgn_idx; 546 - clear_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); 547 - 548 - return 0; 549 - } 550 - 551 - static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx, 552 - int srgn_idx) 553 - { 554 - struct ufshpb_region *rgn; 555 - struct ufshpb_subregion *srgn; 556 - 557 - rgn = hpb->rgn_tbl + rgn_idx; 558 - srgn = rgn->srgn_tbl + srgn_idx; 559 - 560 - list_del_init(&rgn->list_inact_rgn); 561 - 562 - if (list_empty(&srgn->list_act_srgn)) 563 - list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn); 564 - 565 - hpb->stats.rcmd_active_cnt++; 566 - } 567 - 568 - static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx) 569 - { 570 - struct ufshpb_region *rgn; 571 - struct ufshpb_subregion *srgn; 572 - int srgn_idx; 573 - 574 - rgn = hpb->rgn_tbl + rgn_idx; 575 - 576 - for_each_sub_region(rgn, srgn_idx, srgn) 577 - list_del_init(&srgn->list_act_srgn); 578 - 579 - if (list_empty(&rgn->list_inact_rgn)) 580 - list_add_tail(&rgn->list_inact_rgn, &hpb->lh_inact_rgn); 581 - 582 - hpb->stats.rcmd_inactive_cnt++; 583 - } 584 - 585 - static void ufshpb_activate_subregion(struct ufshpb_lu *hpb, 586 - struct ufshpb_subregion *srgn) 587 - { 588 - struct ufshpb_region *rgn; 589 - 590 - /* 591 - * If there is no mctx in subregion 592 - * after I/O progress for HPB_READ_BUFFER, the region to which the 593 - * subregion belongs was evicted. 
594 - * Make sure the region is not evicted while I/O is in progress 595 - */ 596 - if (!srgn->mctx) { 597 - dev_err(&hpb->sdev_ufs_lu->sdev_dev, 598 - "no mctx in region %d subregion %d.\n", 599 - srgn->rgn_idx, srgn->srgn_idx); 600 - srgn->srgn_state = HPB_SRGN_INVALID; 601 - return; 602 - } 603 - 604 - rgn = hpb->rgn_tbl + srgn->rgn_idx; 605 - 606 - if (unlikely(rgn->rgn_state == HPB_RGN_INACTIVE)) { 607 - dev_err(&hpb->sdev_ufs_lu->sdev_dev, 608 - "region %d subregion %d evicted\n", 609 - srgn->rgn_idx, srgn->srgn_idx); 610 - srgn->srgn_state = HPB_SRGN_INVALID; 611 - return; 612 - } 613 - srgn->srgn_state = HPB_SRGN_VALID; 614 - } 615 - 616 - static enum rq_end_io_ret ufshpb_umap_req_compl_fn(struct request *req, 617 - blk_status_t error) 618 - { 619 - struct ufshpb_req *umap_req = req->end_io_data; 620 - 621 - ufshpb_put_req(umap_req->hpb, umap_req); 622 - return RQ_END_IO_NONE; 623 - } 624 - 625 - static enum rq_end_io_ret ufshpb_map_req_compl_fn(struct request *req, 626 - blk_status_t error) 627 - { 628 - struct ufshpb_req *map_req = req->end_io_data; 629 - struct ufshpb_lu *hpb = map_req->hpb; 630 - struct ufshpb_subregion *srgn; 631 - unsigned long flags; 632 - 633 - srgn = hpb->rgn_tbl[map_req->rb.rgn_idx].srgn_tbl + 634 - map_req->rb.srgn_idx; 635 - 636 - ufshpb_clear_dirty_bitmap(hpb, srgn); 637 - spin_lock_irqsave(&hpb->rgn_state_lock, flags); 638 - ufshpb_activate_subregion(hpb, srgn); 639 - spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); 640 - 641 - ufshpb_put_map_req(map_req->hpb, map_req); 642 - return RQ_END_IO_NONE; 643 - } 644 - 645 - static void ufshpb_set_unmap_cmd(unsigned char *cdb, struct ufshpb_region *rgn) 646 - { 647 - cdb[0] = UFSHPB_WRITE_BUFFER; 648 - cdb[1] = rgn ?
UFSHPB_WRITE_BUFFER_INACT_SINGLE_ID : 649 - UFSHPB_WRITE_BUFFER_INACT_ALL_ID; 650 - if (rgn) 651 - put_unaligned_be16(rgn->rgn_idx, &cdb[2]); 652 - cdb[9] = 0x00; 653 - } 654 - 655 - static void ufshpb_set_read_buf_cmd(unsigned char *cdb, int rgn_idx, 656 - int srgn_idx, int srgn_mem_size) 657 - { 658 - cdb[0] = UFSHPB_READ_BUFFER; 659 - cdb[1] = UFSHPB_READ_BUFFER_ID; 660 - 661 - put_unaligned_be16(rgn_idx, &cdb[2]); 662 - put_unaligned_be16(srgn_idx, &cdb[4]); 663 - put_unaligned_be24(srgn_mem_size, &cdb[6]); 664 - 665 - cdb[9] = 0x00; 666 - } 667 - 668 - static void ufshpb_execute_umap_req(struct ufshpb_lu *hpb, 669 - struct ufshpb_req *umap_req, 670 - struct ufshpb_region *rgn) 671 - { 672 - struct request *req = umap_req->req; 673 - struct scsi_cmnd *scmd = blk_mq_rq_to_pdu(req); 674 - 675 - req->timeout = 0; 676 - req->end_io_data = umap_req; 677 - req->end_io = ufshpb_umap_req_compl_fn; 678 - 679 - ufshpb_set_unmap_cmd(scmd->cmnd, rgn); 680 - scmd->cmd_len = HPB_WRITE_BUFFER_CMD_LENGTH; 681 - 682 - blk_execute_rq_nowait(req, true); 683 - 684 - hpb->stats.umap_req_cnt++; 685 - } 686 - 687 - static int ufshpb_execute_map_req(struct ufshpb_lu *hpb, 688 - struct ufshpb_req *map_req, bool last) 689 - { 690 - struct request_queue *q; 691 - struct request *req; 692 - struct scsi_cmnd *scmd; 693 - int mem_size = hpb->srgn_mem_size; 694 - int ret = 0; 695 - int i; 696 - 697 - q = hpb->sdev_ufs_lu->request_queue; 698 - for (i = 0; i < hpb->pages_per_srgn; i++) { 699 - ret = bio_add_pc_page(q, map_req->bio, map_req->rb.mctx->m_page[i], 700 - PAGE_SIZE, 0); 701 - if (ret != PAGE_SIZE) { 702 - dev_err(&hpb->sdev_ufs_lu->sdev_dev, 703 - "bio_add_pc_page fail %d - %d\n", 704 - map_req->rb.rgn_idx, map_req->rb.srgn_idx); 705 - return ret; 706 - } 707 - } 708 - 709 - req = map_req->req; 710 - 711 - blk_rq_append_bio(req, map_req->bio); 712 - 713 - req->end_io_data = map_req; 714 - req->end_io = ufshpb_map_req_compl_fn; 715 - 716 - if (unlikely(last)) 717 - mem_size = 
hpb->last_srgn_entries * HPB_ENTRY_SIZE; 718 - 719 - scmd = blk_mq_rq_to_pdu(req); 720 - ufshpb_set_read_buf_cmd(scmd->cmnd, map_req->rb.rgn_idx, 721 - map_req->rb.srgn_idx, mem_size); 722 - scmd->cmd_len = HPB_READ_BUFFER_CMD_LENGTH; 723 - 724 - blk_execute_rq_nowait(req, true); 725 - 726 - hpb->stats.map_req_cnt++; 727 - return 0; 728 - } 729 - 730 - static struct ufshpb_map_ctx *ufshpb_get_map_ctx(struct ufshpb_lu *hpb, 731 - bool last) 732 - { 733 - struct ufshpb_map_ctx *mctx; 734 - u32 num_entries = hpb->entries_per_srgn; 735 - int i, j; 736 - 737 - mctx = mempool_alloc(ufshpb_mctx_pool, GFP_KERNEL); 738 - if (!mctx) 739 - return NULL; 740 - 741 - mctx->m_page = kmem_cache_alloc(hpb->m_page_cache, GFP_KERNEL); 742 - if (!mctx->m_page) 743 - goto release_mctx; 744 - 745 - if (unlikely(last)) 746 - num_entries = hpb->last_srgn_entries; 747 - 748 - mctx->ppn_dirty = bitmap_zalloc(num_entries, GFP_KERNEL); 749 - if (!mctx->ppn_dirty) 750 - goto release_m_page; 751 - 752 - for (i = 0; i < hpb->pages_per_srgn; i++) { 753 - mctx->m_page[i] = mempool_alloc(ufshpb_page_pool, GFP_KERNEL); 754 - if (!mctx->m_page[i]) { 755 - for (j = 0; j < i; j++) 756 - mempool_free(mctx->m_page[j], ufshpb_page_pool); 757 - goto release_ppn_dirty; 758 - } 759 - clear_page(page_address(mctx->m_page[i])); 760 - } 761 - 762 - return mctx; 763 - 764 - release_ppn_dirty: 765 - bitmap_free(mctx->ppn_dirty); 766 - release_m_page: 767 - kmem_cache_free(hpb->m_page_cache, mctx->m_page); 768 - release_mctx: 769 - mempool_free(mctx, ufshpb_mctx_pool); 770 - return NULL; 771 - } 772 - 773 - static void ufshpb_put_map_ctx(struct ufshpb_lu *hpb, 774 - struct ufshpb_map_ctx *mctx) 775 - { 776 - int i; 777 - 778 - for (i = 0; i < hpb->pages_per_srgn; i++) 779 - mempool_free(mctx->m_page[i], ufshpb_page_pool); 780 - 781 - bitmap_free(mctx->ppn_dirty); 782 - kmem_cache_free(hpb->m_page_cache, mctx->m_page); 783 - mempool_free(mctx, ufshpb_mctx_pool); 784 - } 785 - 786 - static int 
ufshpb_check_srgns_issue_state(struct ufshpb_lu *hpb, 787 - struct ufshpb_region *rgn) 788 - { 789 - struct ufshpb_subregion *srgn; 790 - int srgn_idx; 791 - 792 - for_each_sub_region(rgn, srgn_idx, srgn) 793 - if (srgn->srgn_state == HPB_SRGN_ISSUED) 794 - return -EPERM; 795 - 796 - return 0; 797 - } 798 - 799 - static void ufshpb_read_to_handler(struct work_struct *work) 800 - { 801 - struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, 802 - ufshpb_read_to_work.work); 803 - struct victim_select_info *lru_info = &hpb->lru_info; 804 - struct ufshpb_region *rgn, *next_rgn; 805 - unsigned long flags; 806 - unsigned int poll; 807 - LIST_HEAD(expired_list); 808 - 809 - if (test_and_set_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits)) 810 - return; 811 - 812 - spin_lock_irqsave(&hpb->rgn_state_lock, flags); 813 - 814 - list_for_each_entry_safe(rgn, next_rgn, &lru_info->lh_lru_rgn, 815 - list_lru_rgn) { 816 - bool timedout = ktime_after(ktime_get(), rgn->read_timeout); 817 - 818 - if (timedout) { 819 - rgn->read_timeout_expiries--; 820 - if (is_rgn_dirty(rgn) || 821 - rgn->read_timeout_expiries == 0) 822 - list_add(&rgn->list_expired_rgn, &expired_list); 823 - else 824 - rgn->read_timeout = ktime_add_ms(ktime_get(), 825 - hpb->params.read_timeout_ms); 826 - } 827 - } 828 - 829 - spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); 830 - 831 - list_for_each_entry_safe(rgn, next_rgn, &expired_list, 832 - list_expired_rgn) { 833 - list_del_init(&rgn->list_expired_rgn); 834 - spin_lock_irqsave(&hpb->rsp_list_lock, flags); 835 - ufshpb_update_inactive_info(hpb, rgn->rgn_idx); 836 - spin_unlock_irqrestore(&hpb->rsp_list_lock, flags); 837 - } 838 - 839 - ufshpb_kick_map_work(hpb); 840 - 841 - clear_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits); 842 - 843 - poll = hpb->params.timeout_polling_interval_ms; 844 - schedule_delayed_work(&hpb->ufshpb_read_to_work, 845 - msecs_to_jiffies(poll)); 846 - } 847 - 848 - static void ufshpb_add_lru_info(struct victim_select_info 
*lru_info, 849 - struct ufshpb_region *rgn) 850 - { 851 - rgn->rgn_state = HPB_RGN_ACTIVE; 852 - list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn); 853 - atomic_inc(&lru_info->active_cnt); 854 - if (rgn->hpb->is_hcm) { 855 - rgn->read_timeout = 856 - ktime_add_ms(ktime_get(), 857 - rgn->hpb->params.read_timeout_ms); 858 - rgn->read_timeout_expiries = 859 - rgn->hpb->params.read_timeout_expiries; 860 - } 861 - } 862 - 863 - static void ufshpb_hit_lru_info(struct victim_select_info *lru_info, 864 - struct ufshpb_region *rgn) 865 - { 866 - list_move_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn); 867 - } 868 - 869 - static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb) 870 - { 871 - struct victim_select_info *lru_info = &hpb->lru_info; 872 - struct ufshpb_region *rgn, *victim_rgn = NULL; 873 - 874 - list_for_each_entry(rgn, &lru_info->lh_lru_rgn, list_lru_rgn) { 875 - if (ufshpb_check_srgns_issue_state(hpb, rgn)) 876 - continue; 877 - 878 - /* 879 - * in host control mode, verify that the exiting region 880 - * has fewer reads 881 - */ 882 - if (hpb->is_hcm && 883 - rgn->reads > hpb->params.eviction_thld_exit) 884 - continue; 885 - 886 - victim_rgn = rgn; 887 - break; 888 - } 889 - 890 - if (!victim_rgn) 891 - dev_err(&hpb->sdev_ufs_lu->sdev_dev, 892 - "%s: no region allocated\n", 893 - __func__); 894 - 895 - return victim_rgn; 896 - } 897 - 898 - static void ufshpb_cleanup_lru_info(struct victim_select_info *lru_info, 899 - struct ufshpb_region *rgn) 900 - { 901 - list_del_init(&rgn->list_lru_rgn); 902 - rgn->rgn_state = HPB_RGN_INACTIVE; 903 - atomic_dec(&lru_info->active_cnt); 904 - } 905 - 906 - static void ufshpb_purge_active_subregion(struct ufshpb_lu *hpb, 907 - struct ufshpb_subregion *srgn) 908 - { 909 - if (srgn->srgn_state != HPB_SRGN_UNUSED) { 910 - ufshpb_put_map_ctx(hpb, srgn->mctx); 911 - srgn->srgn_state = HPB_SRGN_UNUSED; 912 - srgn->mctx = NULL; 913 - } 914 - } 915 - 916 - static int ufshpb_issue_umap_req(struct ufshpb_lu 
*hpb, 917 - struct ufshpb_region *rgn, 918 - bool atomic) 919 - { 920 - struct ufshpb_req *umap_req; 921 - int rgn_idx = rgn ? rgn->rgn_idx : 0; 922 - 923 - umap_req = ufshpb_get_req(hpb, rgn_idx, REQ_OP_DRV_OUT, atomic); 924 - if (!umap_req) 925 - return -ENOMEM; 926 - 927 - ufshpb_execute_umap_req(hpb, umap_req, rgn); 928 - 929 - return 0; 930 - } 931 - 932 - static int ufshpb_issue_umap_single_req(struct ufshpb_lu *hpb, 933 - struct ufshpb_region *rgn) 934 - { 935 - return ufshpb_issue_umap_req(hpb, rgn, true); 936 - } 937 - 938 - static void __ufshpb_evict_region(struct ufshpb_lu *hpb, 939 - struct ufshpb_region *rgn) 940 - { 941 - struct victim_select_info *lru_info; 942 - struct ufshpb_subregion *srgn; 943 - int srgn_idx; 944 - 945 - lru_info = &hpb->lru_info; 946 - 947 - dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "evict region %d\n", rgn->rgn_idx); 948 - 949 - ufshpb_cleanup_lru_info(lru_info, rgn); 950 - 951 - for_each_sub_region(rgn, srgn_idx, srgn) 952 - ufshpb_purge_active_subregion(hpb, srgn); 953 - } 954 - 955 - static int ufshpb_evict_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn) 956 - { 957 - unsigned long flags; 958 - int ret = 0; 959 - 960 - spin_lock_irqsave(&hpb->rgn_state_lock, flags); 961 - if (rgn->rgn_state == HPB_RGN_PINNED) { 962 - dev_warn(&hpb->sdev_ufs_lu->sdev_dev, 963 - "pinned region cannot drop-out. 
region %d\n", 964 - rgn->rgn_idx); 965 - goto out; 966 - } 967 - 968 - if (!list_empty(&rgn->list_lru_rgn)) { 969 - if (ufshpb_check_srgns_issue_state(hpb, rgn)) { 970 - ret = -EBUSY; 971 - goto out; 972 - } 973 - 974 - if (hpb->is_hcm) { 975 - spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); 976 - ret = ufshpb_issue_umap_single_req(hpb, rgn); 977 - spin_lock_irqsave(&hpb->rgn_state_lock, flags); 978 - if (ret) 979 - goto out; 980 - } 981 - 982 - __ufshpb_evict_region(hpb, rgn); 983 - } 984 - out: 985 - spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); 986 - return ret; 987 - } 988 - 989 - static int ufshpb_issue_map_req(struct ufshpb_lu *hpb, 990 - struct ufshpb_region *rgn, 991 - struct ufshpb_subregion *srgn) 992 - { 993 - struct ufshpb_req *map_req; 994 - unsigned long flags; 995 - int ret; 996 - int err = -EAGAIN; 997 - bool alloc_required = false; 998 - enum HPB_SRGN_STATE state = HPB_SRGN_INVALID; 999 - 1000 - spin_lock_irqsave(&hpb->rgn_state_lock, flags); 1001 - 1002 - if (ufshpb_get_state(hpb) != HPB_PRESENT) { 1003 - dev_notice(&hpb->sdev_ufs_lu->sdev_dev, 1004 - "%s: ufshpb state is not PRESENT\n", __func__); 1005 - goto unlock_out; 1006 - } 1007 - 1008 - if ((rgn->rgn_state == HPB_RGN_INACTIVE) && 1009 - (srgn->srgn_state == HPB_SRGN_INVALID)) { 1010 - err = 0; 1011 - goto unlock_out; 1012 - } 1013 - 1014 - if (srgn->srgn_state == HPB_SRGN_UNUSED) 1015 - alloc_required = true; 1016 - 1017 - /* 1018 - * If the subregion is already ISSUED state, 1019 - * a specific event (e.g., GC or wear-leveling, etc.) occurs in 1020 - * the device and HPB response for map loading is received. 1021 - * In this case, after finishing the HPB_READ_BUFFER, 1022 - * the next HPB_READ_BUFFER is performed again to obtain the latest 1023 - * map data. 
1024 - */ 1025 - if (srgn->srgn_state == HPB_SRGN_ISSUED) 1026 - goto unlock_out; 1027 - 1028 - srgn->srgn_state = HPB_SRGN_ISSUED; 1029 - spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); 1030 - 1031 - if (alloc_required) { 1032 - srgn->mctx = ufshpb_get_map_ctx(hpb, srgn->is_last); 1033 - if (!srgn->mctx) { 1034 - dev_err(&hpb->sdev_ufs_lu->sdev_dev, 1035 - "get map_ctx failed. region %d - %d\n", 1036 - rgn->rgn_idx, srgn->srgn_idx); 1037 - state = HPB_SRGN_UNUSED; 1038 - goto change_srgn_state; 1039 - } 1040 - } 1041 - 1042 - map_req = ufshpb_get_map_req(hpb, srgn); 1043 - if (!map_req) 1044 - goto change_srgn_state; 1045 - 1046 - 1047 - ret = ufshpb_execute_map_req(hpb, map_req, srgn->is_last); 1048 - if (ret) { 1049 - dev_err(&hpb->sdev_ufs_lu->sdev_dev, 1050 - "%s: issue map_req failed: %d, region %d - %d\n", 1051 - __func__, ret, srgn->rgn_idx, srgn->srgn_idx); 1052 - goto free_map_req; 1053 - } 1054 - return 0; 1055 - 1056 - free_map_req: 1057 - ufshpb_put_map_req(hpb, map_req); 1058 - change_srgn_state: 1059 - spin_lock_irqsave(&hpb->rgn_state_lock, flags); 1060 - srgn->srgn_state = state; 1061 - unlock_out: 1062 - spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); 1063 - return err; 1064 - } 1065 - 1066 - static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn) 1067 - { 1068 - struct ufshpb_region *victim_rgn = NULL; 1069 - struct victim_select_info *lru_info = &hpb->lru_info; 1070 - unsigned long flags; 1071 - int ret = 0; 1072 - 1073 - spin_lock_irqsave(&hpb->rgn_state_lock, flags); 1074 - /* 1075 - * If region belongs to lru_list, just move the region 1076 - * to the front of lru list because the state of the region 1077 - * is already active-state. 
1078 - */ 1079 - if (!list_empty(&rgn->list_lru_rgn)) { 1080 - ufshpb_hit_lru_info(lru_info, rgn); 1081 - goto out; 1082 - } 1083 - 1084 - if (rgn->rgn_state == HPB_RGN_INACTIVE) { 1085 - if (atomic_read(&lru_info->active_cnt) == 1086 - lru_info->max_lru_active_cnt) { 1087 - /* 1088 - * If the maximum number of active regions 1089 - * is exceeded, evict the least recently used region. 1090 - * This case may occur when the device responds 1091 - * to the eviction information late. 1092 - * It is okay to evict the least recently used region, 1093 - * because the device could detect this region 1094 - * by not issuing HPB_READ 1095 - * 1096 - * in host control mode, verify that the entering 1097 - * region has enough reads 1098 - */ 1099 - if (hpb->is_hcm && 1100 - rgn->reads < hpb->params.eviction_thld_enter) { 1101 - ret = -EACCES; 1102 - goto out; 1103 - } 1104 - 1105 - victim_rgn = ufshpb_victim_lru_info(hpb); 1106 - if (!victim_rgn) { 1107 - dev_warn(&hpb->sdev_ufs_lu->sdev_dev, 1108 - "cannot get victim region %s\n", 1109 - hpb->is_hcm ? "" : "error"); 1110 - ret = -ENOMEM; 1111 - goto out; 1112 - } 1113 - 1114 - dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, 1115 - "LRU full (%d), choose victim %d\n", 1116 - atomic_read(&lru_info->active_cnt), 1117 - victim_rgn->rgn_idx); 1118 - 1119 - if (hpb->is_hcm) { 1120 - spin_unlock_irqrestore(&hpb->rgn_state_lock, 1121 - flags); 1122 - ret = ufshpb_issue_umap_single_req(hpb, 1123 - victim_rgn); 1124 - spin_lock_irqsave(&hpb->rgn_state_lock, 1125 - flags); 1126 - if (ret) 1127 - goto out; 1128 - } 1129 - 1130 - __ufshpb_evict_region(hpb, victim_rgn); 1131 - } 1132 - 1133 - /* 1134 - * When a region is added to lru_info list_head, 1135 - * it is guaranteed that the subregion has been 1136 - * assigned all mctx. 
If that failed, try to receive the mctx again 1137 - * without being added to lru_info list_head 1138 - */ 1139 - ufshpb_add_lru_info(lru_info, rgn); 1140 - } 1141 - out: 1142 - spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); 1143 - return ret; 1144 - } 1145 - /** 1146 - * ufshpb_submit_region_inactive() - submit a region to be inactivated later 1147 - * @hpb: per-LU HPB instance 1148 - * @region_index: the index associated with the region that will be inactivated later 1149 - */ 1150 - static void ufshpb_submit_region_inactive(struct ufshpb_lu *hpb, int region_index) 1151 - { 1152 - int subregion_index; 1153 - struct ufshpb_region *rgn; 1154 - struct ufshpb_subregion *srgn; 1155 - 1156 - /* 1157 - * Remove this region from the active region list and add it to the inactive list 1158 - */ 1159 - spin_lock(&hpb->rsp_list_lock); 1160 - ufshpb_update_inactive_info(hpb, region_index); 1161 - spin_unlock(&hpb->rsp_list_lock); 1162 - 1163 - rgn = hpb->rgn_tbl + region_index; 1164 - 1165 - /* 1166 - * Set the subregion state to HPB_SRGN_INVALID so there will be no HPB reads on this subregion 1167 - */ 1168 - spin_lock(&hpb->rgn_state_lock); 1169 - if (rgn->rgn_state != HPB_RGN_INACTIVE) { 1170 - for (subregion_index = 0; subregion_index < rgn->srgn_cnt; subregion_index++) { 1171 - srgn = rgn->srgn_tbl + subregion_index; 1172 - if (srgn->srgn_state == HPB_SRGN_VALID) 1173 - srgn->srgn_state = HPB_SRGN_INVALID; 1174 - } 1175 - } 1176 - spin_unlock(&hpb->rgn_state_lock); 1177 - } 1178 - 1179 - static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb, 1180 - struct utp_hpb_rsp *rsp_field) 1181 - { 1182 - struct ufshpb_region *rgn; 1183 - struct ufshpb_subregion *srgn; 1184 - int i, rgn_i, srgn_i; 1185 - 1186 - BUILD_BUG_ON(sizeof(struct ufshpb_active_field) != HPB_ACT_FIELD_SIZE); 1187 - /* 1188 - * If the active region and the inactive region are the same, 1189 - * we will inactivate this region.
1190 - * The device could check this (region inactivated) and 1191 - will respond with the proper active region information 1192 - */ 1193 - for (i = 0; i < rsp_field->active_rgn_cnt; i++) { 1194 - rgn_i = 1195 - be16_to_cpu(rsp_field->hpb_active_field[i].active_rgn); 1196 - srgn_i = 1197 - be16_to_cpu(rsp_field->hpb_active_field[i].active_srgn); 1198 - 1199 - rgn = hpb->rgn_tbl + rgn_i; 1200 - if (hpb->is_hcm && 1201 - (rgn->rgn_state != HPB_RGN_ACTIVE || is_rgn_dirty(rgn))) { 1202 - /* 1203 - * In host control mode, subregion activation 1204 - * recommendations are only honored for active regions. 1205 - * Also, ignore recommendations for dirty regions - the 1206 - * host makes decisions concerning those by itself 1207 - */ 1208 - continue; 1209 - } 1210 - 1211 - dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, 1212 - "activate(%d) region %d - %d\n", i, rgn_i, srgn_i); 1213 - 1214 - spin_lock(&hpb->rsp_list_lock); 1215 - ufshpb_update_active_info(hpb, rgn_i, srgn_i); 1216 - spin_unlock(&hpb->rsp_list_lock); 1217 - 1218 - srgn = rgn->srgn_tbl + srgn_i; 1219 - 1220 - /* blocking HPB_READ */ 1221 - spin_lock(&hpb->rgn_state_lock); 1222 - if (srgn->srgn_state == HPB_SRGN_VALID) 1223 - srgn->srgn_state = HPB_SRGN_INVALID; 1224 - spin_unlock(&hpb->rgn_state_lock); 1225 - } 1226 - 1227 - if (hpb->is_hcm) { 1228 - /* 1229 - * In host control mode the device is not allowed to inactivate 1230 - regions 1231 - */ 1232 - goto out; 1233 - } 1234 - 1235 - for (i = 0; i < rsp_field->inactive_rgn_cnt; i++) { 1236 - rgn_i = be16_to_cpu(rsp_field->hpb_inactive_field[i]); 1237 - dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "inactivate(%d) region %d\n", i, rgn_i); 1238 - ufshpb_submit_region_inactive(hpb, rgn_i); 1239 - } 1240 - 1241 - out: 1242 - dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "Noti: #ACT %u #INACT %u\n", 1243 - rsp_field->active_rgn_cnt, rsp_field->inactive_rgn_cnt); 1244 - 1245 - if (ufshpb_get_state(hpb) == HPB_PRESENT) 1246 - queue_work(ufshpb_wq, &hpb->map_work); 1247 - } 1248 - 1249 -
/* 1250 - * Set the flags of all active regions to RGN_FLAG_UPDATE to let the host side reload L2P entries later 1251 - */ 1252 - static void ufshpb_set_regions_update(struct ufshpb_lu *hpb) 1253 - { 1254 - struct victim_select_info *lru_info = &hpb->lru_info; 1255 - struct ufshpb_region *rgn; 1256 - unsigned long flags; 1257 - 1258 - spin_lock_irqsave(&hpb->rgn_state_lock, flags); 1259 - 1260 - list_for_each_entry(rgn, &lru_info->lh_lru_rgn, list_lru_rgn) 1261 - set_bit(RGN_FLAG_UPDATE, &rgn->rgn_flags); 1262 - 1263 - spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); 1264 - } 1265 - 1266 - static void ufshpb_dev_reset_handler(struct ufs_hba *hba) 1267 - { 1268 - struct scsi_device *sdev; 1269 - struct ufshpb_lu *hpb; 1270 - 1271 - __shost_for_each_device(sdev, hba->host) { 1272 - hpb = ufshpb_get_hpb_data(sdev); 1273 - if (!hpb) 1274 - continue; 1275 - 1276 - if (hpb->is_hcm) { 1277 - /* 1278 - * In HPB host control mode, if the device powered up and lost its HPB 1279 - * information, set the region flag to RGN_FLAG_UPDATE so that the host 1280 - * reloads its L2P entries (reactivates the region in the UFS device). 1281 - */ 1282 - ufshpb_set_regions_update(hpb); 1283 - } else { 1284 - /* 1285 - * In HPB device control mode, if the host receives 02h (HPB Operation) 1286 - * in a UPIU response, the device recommends that the host 1287 - * inactivate all active regions. Here we add all active regions to the inactive 1288 - * list; they will be inactivated later in ufshpb_map_work_handler().
1289 - */ 1290 - struct victim_select_info *lru_info = &hpb->lru_info; 1291 - struct ufshpb_region *rgn; 1292 - 1293 - list_for_each_entry(rgn, &lru_info->lh_lru_rgn, list_lru_rgn) 1294 - ufshpb_submit_region_inactive(hpb, rgn->rgn_idx); 1295 - 1296 - if (ufshpb_get_state(hpb) == HPB_PRESENT) 1297 - queue_work(ufshpb_wq, &hpb->map_work); 1298 - } 1299 - } 1300 - } 1301 - 1302 - /* 1303 - * This function will parse recommended active subregion information in sense 1304 - * data field of response UPIU with SAM_STAT_GOOD state. 1305 - */ 1306 - void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) 1307 - { 1308 - struct ufshpb_lu *hpb = ufshpb_get_hpb_data(lrbp->cmd->device); 1309 - struct utp_hpb_rsp *rsp_field = &lrbp->ucd_rsp_ptr->hr; 1310 - int data_seg_len; 1311 - 1312 - data_seg_len = be32_to_cpu(lrbp->ucd_rsp_ptr->header.dword_2) 1313 - & MASK_RSP_UPIU_DATA_SEG_LEN; 1314 - 1315 - /* If data segment length is zero, rsp_field is not valid */ 1316 - if (!data_seg_len) 1317 - return; 1318 - 1319 - if (unlikely(lrbp->lun != rsp_field->lun)) { 1320 - struct scsi_device *sdev; 1321 - bool found = false; 1322 - 1323 - __shost_for_each_device(sdev, hba->host) { 1324 - hpb = ufshpb_get_hpb_data(sdev); 1325 - 1326 - if (!hpb) 1327 - continue; 1328 - 1329 - if (rsp_field->lun == hpb->lun) { 1330 - found = true; 1331 - break; 1332 - } 1333 - } 1334 - 1335 - if (!found) 1336 - return; 1337 - } 1338 - 1339 - if (!hpb) 1340 - return; 1341 - 1342 - if (ufshpb_get_state(hpb) == HPB_INIT) 1343 - return; 1344 - 1345 - if ((ufshpb_get_state(hpb) != HPB_PRESENT) && 1346 - (ufshpb_get_state(hpb) != HPB_SUSPEND)) { 1347 - dev_notice(&hpb->sdev_ufs_lu->sdev_dev, 1348 - "%s: ufshpb state is not PRESENT/SUSPEND\n", 1349 - __func__); 1350 - return; 1351 - } 1352 - 1353 - BUILD_BUG_ON(sizeof(struct utp_hpb_rsp) != UTP_HPB_RSP_SIZE); 1354 - 1355 - if (!ufshpb_is_hpb_rsp_valid(hba, lrbp, rsp_field)) 1356 - return; 1357 - 1358 - hpb->stats.rcmd_noti_cnt++; 1359 - 1360 - switch 
(rsp_field->hpb_op) {
1361 - 	case HPB_RSP_REQ_REGION_UPDATE:
1362 - 		if (data_seg_len != DEV_DATA_SEG_LEN)
1363 - 			dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
1364 - 				 "%s: data seg length is not same.\n",
1365 - 				 __func__);
1366 - 		ufshpb_rsp_req_region_update(hpb, rsp_field);
1367 - 		break;
1368 - 	case HPB_RSP_DEV_RESET:
1369 - 		dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
1370 - 			 "UFS device lost HPB information during PM.\n");
1371 - 		ufshpb_dev_reset_handler(hba);
1372 -
1373 - 		break;
1374 - 	default:
1375 - 		dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
1376 - 			   "hpb_op is not available: %d\n",
1377 - 			   rsp_field->hpb_op);
1378 - 		break;
1379 - 	}
1380 - }
1381 -
1382 - static void ufshpb_add_active_list(struct ufshpb_lu *hpb,
1383 - 				   struct ufshpb_region *rgn,
1384 - 				   struct ufshpb_subregion *srgn)
1385 - {
1386 - 	if (!list_empty(&rgn->list_inact_rgn))
1387 - 		return;
1388 -
1389 - 	if (!list_empty(&srgn->list_act_srgn)) {
1390 - 		list_move(&srgn->list_act_srgn, &hpb->lh_act_srgn);
1391 - 		return;
1392 - 	}
1393 -
1394 - 	list_add(&srgn->list_act_srgn, &hpb->lh_act_srgn);
1395 - }
1396 -
1397 - static void ufshpb_add_pending_evict_list(struct ufshpb_lu *hpb,
1398 - 					  struct ufshpb_region *rgn,
1399 - 					  struct list_head *pending_list)
1400 - {
1401 - 	struct ufshpb_subregion *srgn;
1402 - 	int srgn_idx;
1403 -
1404 - 	if (!list_empty(&rgn->list_inact_rgn))
1405 - 		return;
1406 -
1407 - 	for_each_sub_region(rgn, srgn_idx, srgn)
1408 - 		if (!list_empty(&srgn->list_act_srgn))
1409 - 			return;
1410 -
1411 - 	list_add_tail(&rgn->list_inact_rgn, pending_list);
1412 - }
1413 -
1414 - static void ufshpb_run_active_subregion_list(struct ufshpb_lu *hpb)
1415 - {
1416 - 	struct ufshpb_region *rgn;
1417 - 	struct ufshpb_subregion *srgn;
1418 - 	unsigned long flags;
1419 - 	int ret = 0;
1420 -
1421 - 	spin_lock_irqsave(&hpb->rsp_list_lock, flags);
1422 - 	while ((srgn = list_first_entry_or_null(&hpb->lh_act_srgn,
1423 - 						struct ufshpb_subregion,
1424 - 						list_act_srgn))) {
1425 - 		if (ufshpb_get_state(hpb) == HPB_SUSPEND)
1426 - 			break;
1427 -
1428 - 		list_del_init(&srgn->list_act_srgn);
1429 - 		spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
1430 -
1431 - 		rgn = hpb->rgn_tbl + srgn->rgn_idx;
1432 - 		ret = ufshpb_add_region(hpb, rgn);
1433 - 		if (ret)
1434 - 			goto active_failed;
1435 -
1436 - 		ret = ufshpb_issue_map_req(hpb, rgn, srgn);
1437 - 		if (ret) {
1438 - 			dev_err(&hpb->sdev_ufs_lu->sdev_dev,
1439 - 				"issue map_req failed. ret %d, region %d - %d\n",
1440 - 				ret, rgn->rgn_idx, srgn->srgn_idx);
1441 - 			goto active_failed;
1442 - 		}
1443 - 		spin_lock_irqsave(&hpb->rsp_list_lock, flags);
1444 - 	}
1445 - 	spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
1446 - 	return;
1447 -
1448 - active_failed:
1449 - 	dev_err(&hpb->sdev_ufs_lu->sdev_dev, "failed to activate region %d - %d, will retry\n",
1450 - 		rgn->rgn_idx, srgn->srgn_idx);
1451 - 	spin_lock_irqsave(&hpb->rsp_list_lock, flags);
1452 - 	ufshpb_add_active_list(hpb, rgn, srgn);
1453 - 	spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
1454 - }
1455 -
1456 - static void ufshpb_run_inactive_region_list(struct ufshpb_lu *hpb)
1457 - {
1458 - 	struct ufshpb_region *rgn;
1459 - 	unsigned long flags;
1460 - 	int ret;
1461 - 	LIST_HEAD(pending_list);
1462 -
1463 - 	spin_lock_irqsave(&hpb->rsp_list_lock, flags);
1464 - 	while ((rgn = list_first_entry_or_null(&hpb->lh_inact_rgn,
1465 - 					       struct ufshpb_region,
1466 - 					       list_inact_rgn))) {
1467 - 		if (ufshpb_get_state(hpb) == HPB_SUSPEND)
1468 - 			break;
1469 -
1470 - 		list_del_init(&rgn->list_inact_rgn);
1471 - 		spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
1472 -
1473 - 		ret = ufshpb_evict_region(hpb, rgn);
1474 - 		if (ret) {
1475 - 			spin_lock_irqsave(&hpb->rsp_list_lock, flags);
1476 - 			ufshpb_add_pending_evict_list(hpb, rgn, &pending_list);
1477 - 			spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
1478 - 		}
1479 -
1480 - 		spin_lock_irqsave(&hpb->rsp_list_lock, flags);
1481 - 	}
1482 -
1483 - 	list_splice(&pending_list, &hpb->lh_inact_rgn);
1484 - 	spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
1485 - }
1486 -
1487 - static
void ufshpb_normalization_work_handler(struct work_struct *work)
1488 - {
1489 - 	struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu,
1490 - 					     ufshpb_normalization_work);
1491 - 	int rgn_idx;
1492 - 	u8 factor = hpb->params.normalization_factor;
1493 -
1494 - 	for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
1495 - 		struct ufshpb_region *rgn = hpb->rgn_tbl + rgn_idx;
1496 - 		int srgn_idx;
1497 -
1498 - 		spin_lock(&rgn->rgn_lock);
1499 - 		rgn->reads = 0;
1500 - 		for (srgn_idx = 0; srgn_idx < hpb->srgns_per_rgn; srgn_idx++) {
1501 - 			struct ufshpb_subregion *srgn = rgn->srgn_tbl + srgn_idx;
1502 -
1503 - 			srgn->reads >>= factor;
1504 - 			rgn->reads += srgn->reads;
1505 - 		}
1506 - 		spin_unlock(&rgn->rgn_lock);
1507 -
1508 - 		if (rgn->rgn_state != HPB_RGN_ACTIVE || rgn->reads)
1509 - 			continue;
1510 -
1511 - 		/* if region is active but has no reads - inactivate it */
1512 - 		spin_lock(&hpb->rsp_list_lock);
1513 - 		ufshpb_update_inactive_info(hpb, rgn->rgn_idx);
1514 - 		spin_unlock(&hpb->rsp_list_lock);
1515 - 	}
1516 - }
1517 -
1518 - static void ufshpb_map_work_handler(struct work_struct *work)
1519 - {
1520 - 	struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, map_work);
1521 -
1522 - 	if (ufshpb_get_state(hpb) != HPB_PRESENT) {
1523 - 		dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
1524 - 			   "%s: ufshpb state is not PRESENT\n", __func__);
1525 - 		return;
1526 - 	}
1527 -
1528 - 	ufshpb_run_inactive_region_list(hpb);
1529 - 	ufshpb_run_active_subregion_list(hpb);
1530 - }
1531 -
1532 - /*
1533 -  * this function doesn't need to hold lock due to be called in init.
1534 -  * (rgn_state_lock, rsp_list_lock, etc..)
1535 -  */
1536 - static int ufshpb_init_pinned_active_region(struct ufs_hba *hba,
1537 - 					    struct ufshpb_lu *hpb,
1538 - 					    struct ufshpb_region *rgn)
1539 - {
1540 - 	struct ufshpb_subregion *srgn;
1541 - 	int srgn_idx, i;
1542 - 	int err = 0;
1543 -
1544 - 	for_each_sub_region(rgn, srgn_idx, srgn) {
1545 - 		srgn->mctx = ufshpb_get_map_ctx(hpb, srgn->is_last);
1546 - 		srgn->srgn_state = HPB_SRGN_INVALID;
1547 - 		if (!srgn->mctx) {
1548 - 			err = -ENOMEM;
1549 - 			dev_err(hba->dev,
1550 - 				"alloc mctx for pinned region failed\n");
1551 - 			goto release;
1552 - 		}
1553 -
1554 - 		list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
1555 - 	}
1556 -
1557 - 	rgn->rgn_state = HPB_RGN_PINNED;
1558 - 	return 0;
1559 -
1560 - release:
1561 - 	for (i = 0; i < srgn_idx; i++) {
1562 - 		srgn = rgn->srgn_tbl + i;
1563 - 		ufshpb_put_map_ctx(hpb, srgn->mctx);
1564 - 	}
1565 - 	return err;
1566 - }
1567 -
1568 - static void ufshpb_init_subregion_tbl(struct ufshpb_lu *hpb,
1569 - 				      struct ufshpb_region *rgn, bool last)
1570 - {
1571 - 	int srgn_idx;
1572 - 	struct ufshpb_subregion *srgn;
1573 -
1574 - 	for_each_sub_region(rgn, srgn_idx, srgn) {
1575 - 		INIT_LIST_HEAD(&srgn->list_act_srgn);
1576 -
1577 - 		srgn->rgn_idx = rgn->rgn_idx;
1578 - 		srgn->srgn_idx = srgn_idx;
1579 - 		srgn->srgn_state = HPB_SRGN_UNUSED;
1580 - 	}
1581 -
1582 - 	if (unlikely(last && hpb->last_srgn_entries))
1583 - 		srgn->is_last = true;
1584 - }
1585 -
1586 - static int ufshpb_alloc_subregion_tbl(struct ufshpb_lu *hpb,
1587 - 				      struct ufshpb_region *rgn, int srgn_cnt)
1588 - {
1589 - 	rgn->srgn_tbl = kvcalloc(srgn_cnt, sizeof(struct ufshpb_subregion),
1590 - 				 GFP_KERNEL);
1591 - 	if (!rgn->srgn_tbl)
1592 - 		return -ENOMEM;
1593 -
1594 - 	rgn->srgn_cnt = srgn_cnt;
1595 - 	return 0;
1596 - }
1597 -
1598 - static void ufshpb_lu_parameter_init(struct ufs_hba *hba,
1599 - 				     struct ufshpb_lu *hpb,
1600 - 				     struct ufshpb_dev_info *hpb_dev_info,
1601 - 				     struct ufshpb_lu_info *hpb_lu_info)
1602 - {
1603 - 	u32 entries_per_rgn;
1604 - 	u64 rgn_mem_size, tmp;
1605 -
1606 - 	if (ufshpb_is_legacy(hba))
1607 - 		hpb->pre_req_max_tr_len = HPB_LEGACY_CHUNK_HIGH;
1608 - 	else
1609 - 		hpb->pre_req_max_tr_len = hpb_dev_info->max_hpb_single_cmd;
1610 -
1611 - 	hpb->lu_pinned_start = hpb_lu_info->pinned_start;
1612 - 	hpb->lu_pinned_end = hpb_lu_info->num_pinned ?
1613 - 		(hpb_lu_info->pinned_start + hpb_lu_info->num_pinned - 1)
1614 - 		: PINNED_NOT_SET;
1615 - 	hpb->lru_info.max_lru_active_cnt =
1616 - 		hpb_lu_info->max_active_rgns - hpb_lu_info->num_pinned;
1617 -
1618 - 	rgn_mem_size = (1ULL << hpb_dev_info->rgn_size) * HPB_RGN_SIZE_UNIT
1619 - 			* HPB_ENTRY_SIZE;
1620 - 	do_div(rgn_mem_size, HPB_ENTRY_BLOCK_SIZE);
1621 - 	hpb->srgn_mem_size = (1ULL << hpb_dev_info->srgn_size)
1622 - 		* HPB_RGN_SIZE_UNIT / HPB_ENTRY_BLOCK_SIZE * HPB_ENTRY_SIZE;
1623 -
1624 - 	tmp = rgn_mem_size;
1625 - 	do_div(tmp, HPB_ENTRY_SIZE);
1626 - 	entries_per_rgn = (u32)tmp;
1627 - 	hpb->entries_per_rgn_shift = ilog2(entries_per_rgn);
1628 - 	hpb->entries_per_rgn_mask = entries_per_rgn - 1;
1629 -
1630 - 	hpb->entries_per_srgn = hpb->srgn_mem_size / HPB_ENTRY_SIZE;
1631 - 	hpb->entries_per_srgn_shift = ilog2(hpb->entries_per_srgn);
1632 - 	hpb->entries_per_srgn_mask = hpb->entries_per_srgn - 1;
1633 -
1634 - 	tmp = rgn_mem_size;
1635 - 	do_div(tmp, hpb->srgn_mem_size);
1636 - 	hpb->srgns_per_rgn = (int)tmp;
1637 -
1638 - 	hpb->rgns_per_lu = DIV_ROUND_UP(hpb_lu_info->num_blocks,
1639 - 				entries_per_rgn);
1640 - 	hpb->srgns_per_lu = DIV_ROUND_UP(hpb_lu_info->num_blocks,
1641 - 				(hpb->srgn_mem_size / HPB_ENTRY_SIZE));
1642 - 	hpb->last_srgn_entries = hpb_lu_info->num_blocks
1643 - 				 % (hpb->srgn_mem_size / HPB_ENTRY_SIZE);
1644 -
1645 - 	hpb->pages_per_srgn = DIV_ROUND_UP(hpb->srgn_mem_size, PAGE_SIZE);
1646 -
1647 - 	if (hpb_dev_info->control_mode == HPB_HOST_CONTROL)
1648 - 		hpb->is_hcm = true;
1649 - }
1650 -
1651 - static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
1652 - {
1653 - 	struct ufshpb_region *rgn_table, *rgn;
1654 - 	int rgn_idx, i;
1655 - 	int ret = 0;
1656 -
1657 - 	rgn_table = kvcalloc(hpb->rgns_per_lu, sizeof(struct ufshpb_region),
1658 - 			     GFP_KERNEL);
1659 - 	if (!rgn_table)
1660 - 		return -ENOMEM;
1661 -
1662 - 	for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
1663 - 		int srgn_cnt = hpb->srgns_per_rgn;
1664 - 		bool last_srgn = false;
1665 -
1666 - 		rgn = rgn_table + rgn_idx;
1667 - 		rgn->rgn_idx = rgn_idx;
1668 -
1669 - 		spin_lock_init(&rgn->rgn_lock);
1670 -
1671 - 		INIT_LIST_HEAD(&rgn->list_inact_rgn);
1672 - 		INIT_LIST_HEAD(&rgn->list_lru_rgn);
1673 - 		INIT_LIST_HEAD(&rgn->list_expired_rgn);
1674 -
1675 - 		if (rgn_idx == hpb->rgns_per_lu - 1) {
1676 - 			srgn_cnt = ((hpb->srgns_per_lu - 1) %
1677 - 				    hpb->srgns_per_rgn) + 1;
1678 - 			last_srgn = true;
1679 - 		}
1680 -
1681 - 		ret = ufshpb_alloc_subregion_tbl(hpb, rgn, srgn_cnt);
1682 - 		if (ret)
1683 - 			goto release_srgn_table;
1684 - 		ufshpb_init_subregion_tbl(hpb, rgn, last_srgn);
1685 -
1686 - 		if (ufshpb_is_pinned_region(hpb, rgn_idx)) {
1687 - 			ret = ufshpb_init_pinned_active_region(hba, hpb, rgn);
1688 - 			if (ret)
1689 - 				goto release_srgn_table;
1690 - 		} else {
1691 - 			rgn->rgn_state = HPB_RGN_INACTIVE;
1692 - 		}
1693 -
1694 - 		rgn->rgn_flags = 0;
1695 - 		rgn->hpb = hpb;
1696 - 	}
1697 -
1698 - 	hpb->rgn_tbl = rgn_table;
1699 -
1700 - 	return 0;
1701 -
1702 - release_srgn_table:
1703 - 	for (i = 0; i <= rgn_idx; i++)
1704 - 		kvfree(rgn_table[i].srgn_tbl);
1705 -
1706 - 	kvfree(rgn_table);
1707 - 	return ret;
1708 - }
1709 -
1710 - static void ufshpb_destroy_subregion_tbl(struct ufshpb_lu *hpb,
1711 - 					 struct ufshpb_region *rgn)
1712 - {
1713 - 	int srgn_idx;
1714 - 	struct ufshpb_subregion *srgn;
1715 -
1716 - 	for_each_sub_region(rgn, srgn_idx, srgn)
1717 - 		if (srgn->srgn_state != HPB_SRGN_UNUSED) {
1718 - 			srgn->srgn_state = HPB_SRGN_UNUSED;
1719 - 			ufshpb_put_map_ctx(hpb, srgn->mctx);
1720 - 		}
1721 - }
1722 -
1723 - static void ufshpb_destroy_region_tbl(struct ufshpb_lu *hpb)
1724 - {
1725 - 	int rgn_idx;
1726 -
1727 - 	for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
1728 - 		struct ufshpb_region *rgn;
1729 -
1730 - 		rgn = hpb->rgn_tbl + rgn_idx;
1731 - 		if (rgn->rgn_state != HPB_RGN_INACTIVE) {
1732 - 			rgn->rgn_state = HPB_RGN_INACTIVE;
1733 -
1734 - 			ufshpb_destroy_subregion_tbl(hpb, rgn);
1735 - 		}
1736 -
1737 - 		kvfree(rgn->srgn_tbl);
1738 - 	}
1739 -
1740 - 	kvfree(hpb->rgn_tbl);
1741 - }
1742 -
1743 - /* SYSFS functions */
1744 - #define ufshpb_sysfs_attr_show_func(__name)				\
1745 - static ssize_t __name##_show(struct device *dev,			\
1746 - 			     struct device_attribute *attr, char *buf)	\
1747 - {									\
1748 - 	struct scsi_device *sdev = to_scsi_device(dev);			\
1749 - 	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);		\
1750 - 									\
1751 - 	if (!hpb)							\
1752 - 		return -ENODEV;						\
1753 - 									\
1754 - 	return sysfs_emit(buf, "%llu\n", hpb->stats.__name);		\
1755 - }									\
1756 - 									\
1757 - static DEVICE_ATTR_RO(__name)
1758 -
1759 - ufshpb_sysfs_attr_show_func(hit_cnt);
1760 - ufshpb_sysfs_attr_show_func(miss_cnt);
1761 - ufshpb_sysfs_attr_show_func(rcmd_noti_cnt);
1762 - ufshpb_sysfs_attr_show_func(rcmd_active_cnt);
1763 - ufshpb_sysfs_attr_show_func(rcmd_inactive_cnt);
1764 - ufshpb_sysfs_attr_show_func(map_req_cnt);
1765 - ufshpb_sysfs_attr_show_func(umap_req_cnt);
1766 -
1767 - static struct attribute *hpb_dev_stat_attrs[] = {
1768 - 	&dev_attr_hit_cnt.attr,
1769 - 	&dev_attr_miss_cnt.attr,
1770 - 	&dev_attr_rcmd_noti_cnt.attr,
1771 - 	&dev_attr_rcmd_active_cnt.attr,
1772 - 	&dev_attr_rcmd_inactive_cnt.attr,
1773 - 	&dev_attr_map_req_cnt.attr,
1774 - 	&dev_attr_umap_req_cnt.attr,
1775 - 	NULL,
1776 - };
1777 -
1778 - struct attribute_group ufs_sysfs_hpb_stat_group = {
1779 - 	.name = "hpb_stats",
1780 - 	.attrs = hpb_dev_stat_attrs,
1781 - };
1782 -
1783 - /* SYSFS functions */
1784 - #define ufshpb_sysfs_param_show_func(__name)				\
1785 - static ssize_t __name##_show(struct device *dev,			\
1786 - 			     struct device_attribute *attr, char *buf)	\
1787 - {									\
1788 - 	struct scsi_device *sdev = to_scsi_device(dev);			\
1789 - 	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);		\
1790 - 									\
1791 - 	if (!hpb)							\
1792 - 		return -ENODEV;						\
1793 - 									\
1794 - 	return sysfs_emit(buf, "%d\n", hpb->params.__name);		\
1795 - }
1796 -
1797 - ufshpb_sysfs_param_show_func(requeue_timeout_ms);
1798 - static ssize_t
1799 - requeue_timeout_ms_store(struct device *dev, struct device_attribute *attr,
1800 - 			 const char *buf, size_t count)
1801 - {
1802 - 	struct scsi_device *sdev = to_scsi_device(dev);
1803 - 	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
1804 - 	int val;
1805 -
1806 - 	if (!hpb)
1807 - 		return -ENODEV;
1808 -
1809 - 	if (kstrtouint(buf, 0, &val))
1810 - 		return -EINVAL;
1811 -
1812 - 	if (val < 0)
1813 - 		return -EINVAL;
1814 -
1815 - 	hpb->params.requeue_timeout_ms = val;
1816 -
1817 - 	return count;
1818 - }
1819 - static DEVICE_ATTR_RW(requeue_timeout_ms);
1820 -
1821 - ufshpb_sysfs_param_show_func(activation_thld);
1822 - static ssize_t
1823 - activation_thld_store(struct device *dev, struct device_attribute *attr,
1824 - 		      const char *buf, size_t count)
1825 - {
1826 - 	struct scsi_device *sdev = to_scsi_device(dev);
1827 - 	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
1828 - 	int val;
1829 -
1830 - 	if (!hpb)
1831 - 		return -ENODEV;
1832 -
1833 - 	if (!hpb->is_hcm)
1834 - 		return -EOPNOTSUPP;
1835 -
1836 - 	if (kstrtouint(buf, 0, &val))
1837 - 		return -EINVAL;
1838 -
1839 - 	if (val <= 0)
1840 - 		return -EINVAL;
1841 -
1842 - 	hpb->params.activation_thld = val;
1843 -
1844 - 	return count;
1845 - }
1846 - static DEVICE_ATTR_RW(activation_thld);
1847 -
1848 - ufshpb_sysfs_param_show_func(normalization_factor);
1849 - static ssize_t
1850 - normalization_factor_store(struct device *dev, struct device_attribute *attr,
1851 - 			   const char *buf, size_t count)
1852 - {
1853 - 	struct scsi_device *sdev = to_scsi_device(dev);
1854 - 	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
1855 - 	int val;
1856 -
1857 - 	if (!hpb)
1858 - 		return -ENODEV;
1859 -
1860 - 	if (!hpb->is_hcm)
1861 - 		return -EOPNOTSUPP;
1862 -
1863 - 	if (kstrtouint(buf, 0, &val))
1864 - 		return -EINVAL;
1865 -
1866 - 	if
(val <= 0 || val > ilog2(hpb->entries_per_srgn))
1867 - 		return -EINVAL;
1868 -
1869 - 	hpb->params.normalization_factor = val;
1870 -
1871 - 	return count;
1872 - }
1873 - static DEVICE_ATTR_RW(normalization_factor);
1874 -
1875 - ufshpb_sysfs_param_show_func(eviction_thld_enter);
1876 - static ssize_t
1877 - eviction_thld_enter_store(struct device *dev, struct device_attribute *attr,
1878 - 			  const char *buf, size_t count)
1879 - {
1880 - 	struct scsi_device *sdev = to_scsi_device(dev);
1881 - 	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
1882 - 	int val;
1883 -
1884 - 	if (!hpb)
1885 - 		return -ENODEV;
1886 -
1887 - 	if (!hpb->is_hcm)
1888 - 		return -EOPNOTSUPP;
1889 -
1890 - 	if (kstrtouint(buf, 0, &val))
1891 - 		return -EINVAL;
1892 -
1893 - 	if (val <= hpb->params.eviction_thld_exit)
1894 - 		return -EINVAL;
1895 -
1896 - 	hpb->params.eviction_thld_enter = val;
1897 -
1898 - 	return count;
1899 - }
1900 - static DEVICE_ATTR_RW(eviction_thld_enter);
1901 -
1902 - ufshpb_sysfs_param_show_func(eviction_thld_exit);
1903 - static ssize_t
1904 - eviction_thld_exit_store(struct device *dev, struct device_attribute *attr,
1905 - 			 const char *buf, size_t count)
1906 - {
1907 - 	struct scsi_device *sdev = to_scsi_device(dev);
1908 - 	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
1909 - 	int val;
1910 -
1911 - 	if (!hpb)
1912 - 		return -ENODEV;
1913 -
1914 - 	if (!hpb->is_hcm)
1915 - 		return -EOPNOTSUPP;
1916 -
1917 - 	if (kstrtouint(buf, 0, &val))
1918 - 		return -EINVAL;
1919 -
1920 - 	if (val <= hpb->params.activation_thld)
1921 - 		return -EINVAL;
1922 -
1923 - 	hpb->params.eviction_thld_exit = val;
1924 -
1925 - 	return count;
1926 - }
1927 - static DEVICE_ATTR_RW(eviction_thld_exit);
1928 -
1929 - ufshpb_sysfs_param_show_func(read_timeout_ms);
1930 - static ssize_t
1931 - read_timeout_ms_store(struct device *dev, struct device_attribute *attr,
1932 - 		      const char *buf, size_t count)
1933 - {
1934 - 	struct scsi_device *sdev = to_scsi_device(dev);
1935 - 	struct ufshpb_lu *hpb =
ufshpb_get_hpb_data(sdev);
1936 - 	int val;
1937 -
1938 - 	if (!hpb)
1939 - 		return -ENODEV;
1940 -
1941 - 	if (!hpb->is_hcm)
1942 - 		return -EOPNOTSUPP;
1943 -
1944 - 	if (kstrtouint(buf, 0, &val))
1945 - 		return -EINVAL;
1946 -
1947 - 	/* read_timeout >> timeout_polling_interval */
1948 - 	if (val < hpb->params.timeout_polling_interval_ms * 2)
1949 - 		return -EINVAL;
1950 -
1951 - 	hpb->params.read_timeout_ms = val;
1952 -
1953 - 	return count;
1954 - }
1955 - static DEVICE_ATTR_RW(read_timeout_ms);
1956 -
1957 - ufshpb_sysfs_param_show_func(read_timeout_expiries);
1958 - static ssize_t
1959 - read_timeout_expiries_store(struct device *dev, struct device_attribute *attr,
1960 - 			    const char *buf, size_t count)
1961 - {
1962 - 	struct scsi_device *sdev = to_scsi_device(dev);
1963 - 	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
1964 - 	int val;
1965 -
1966 - 	if (!hpb)
1967 - 		return -ENODEV;
1968 -
1969 - 	if (!hpb->is_hcm)
1970 - 		return -EOPNOTSUPP;
1971 -
1972 - 	if (kstrtouint(buf, 0, &val))
1973 - 		return -EINVAL;
1974 -
1975 - 	if (val <= 0)
1976 - 		return -EINVAL;
1977 -
1978 - 	hpb->params.read_timeout_expiries = val;
1979 -
1980 - 	return count;
1981 - }
1982 - static DEVICE_ATTR_RW(read_timeout_expiries);
1983 -
1984 - ufshpb_sysfs_param_show_func(timeout_polling_interval_ms);
1985 - static ssize_t
1986 - timeout_polling_interval_ms_store(struct device *dev,
1987 - 				  struct device_attribute *attr,
1988 - 				  const char *buf, size_t count)
1989 - {
1990 - 	struct scsi_device *sdev = to_scsi_device(dev);
1991 - 	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
1992 - 	int val;
1993 -
1994 - 	if (!hpb)
1995 - 		return -ENODEV;
1996 -
1997 - 	if (!hpb->is_hcm)
1998 - 		return -EOPNOTSUPP;
1999 -
2000 - 	if (kstrtouint(buf, 0, &val))
2001 - 		return -EINVAL;
2002 -
2003 - 	/* timeout_polling_interval << read_timeout */
2004 - 	if (val <= 0 || val > hpb->params.read_timeout_ms / 2)
2005 - 		return -EINVAL;
2006 -
2007 - 	hpb->params.timeout_polling_interval_ms = val;
2008 -
2009 - 	return count;
2010 - }
2011 - static DEVICE_ATTR_RW(timeout_polling_interval_ms);
2012 -
2013 - ufshpb_sysfs_param_show_func(inflight_map_req);
2014 - static ssize_t inflight_map_req_store(struct device *dev,
2015 - 				      struct device_attribute *attr,
2016 - 				      const char *buf, size_t count)
2017 - {
2018 - 	struct scsi_device *sdev = to_scsi_device(dev);
2019 - 	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
2020 - 	int val;
2021 -
2022 - 	if (!hpb)
2023 - 		return -ENODEV;
2024 -
2025 - 	if (!hpb->is_hcm)
2026 - 		return -EOPNOTSUPP;
2027 -
2028 - 	if (kstrtouint(buf, 0, &val))
2029 - 		return -EINVAL;
2030 -
2031 - 	if (val <= 0 || val > hpb->sdev_ufs_lu->queue_depth - 1)
2032 - 		return -EINVAL;
2033 -
2034 - 	hpb->params.inflight_map_req = val;
2035 -
2036 - 	return count;
2037 - }
2038 - static DEVICE_ATTR_RW(inflight_map_req);
2039 -
2040 - static void ufshpb_hcm_param_init(struct ufshpb_lu *hpb)
2041 - {
2042 - 	hpb->params.activation_thld = ACTIVATION_THRESHOLD;
2043 - 	hpb->params.normalization_factor = 1;
2044 - 	hpb->params.eviction_thld_enter = (ACTIVATION_THRESHOLD << 5);
2045 - 	hpb->params.eviction_thld_exit = (ACTIVATION_THRESHOLD << 4);
2046 - 	hpb->params.read_timeout_ms = READ_TO_MS;
2047 - 	hpb->params.read_timeout_expiries = READ_TO_EXPIRIES;
2048 - 	hpb->params.timeout_polling_interval_ms = POLLING_INTERVAL_MS;
2049 - 	hpb->params.inflight_map_req = THROTTLE_MAP_REQ_DEFAULT;
2050 - }
2051 -
2052 - static struct attribute *hpb_dev_param_attrs[] = {
2053 - 	&dev_attr_requeue_timeout_ms.attr,
2054 - 	&dev_attr_activation_thld.attr,
2055 - 	&dev_attr_normalization_factor.attr,
2056 - 	&dev_attr_eviction_thld_enter.attr,
2057 - 	&dev_attr_eviction_thld_exit.attr,
2058 - 	&dev_attr_read_timeout_ms.attr,
2059 - 	&dev_attr_read_timeout_expiries.attr,
2060 - 	&dev_attr_timeout_polling_interval_ms.attr,
2061 - 	&dev_attr_inflight_map_req.attr,
2062 - 	NULL,
2063 - };
2064 -
2065 - struct attribute_group ufs_sysfs_hpb_param_group = {
2066 - 	.name = "hpb_params",
2067 - 	.attrs = hpb_dev_param_attrs,
2068 - };
2069 -
2070 - static int ufshpb_pre_req_mempool_init(struct ufshpb_lu *hpb)
2071 - {
2072 - 	struct ufshpb_req *pre_req = NULL, *t;
2073 - 	int qd = hpb->sdev_ufs_lu->queue_depth / 2;
2074 - 	int i;
2075 -
2076 - 	INIT_LIST_HEAD(&hpb->lh_pre_req_free);
2077 -
2078 - 	hpb->pre_req = kcalloc(qd, sizeof(struct ufshpb_req), GFP_KERNEL);
2079 - 	hpb->throttle_pre_req = qd;
2080 - 	hpb->num_inflight_pre_req = 0;
2081 -
2082 - 	if (!hpb->pre_req)
2083 - 		goto release_mem;
2084 -
2085 - 	for (i = 0; i < qd; i++) {
2086 - 		pre_req = hpb->pre_req + i;
2087 - 		INIT_LIST_HEAD(&pre_req->list_req);
2088 - 		pre_req->req = NULL;
2089 -
2090 - 		pre_req->bio = bio_alloc(NULL, 1, 0, GFP_KERNEL);
2091 - 		if (!pre_req->bio)
2092 - 			goto release_mem;
2093 -
2094 - 		pre_req->wb.m_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
2095 - 		if (!pre_req->wb.m_page) {
2096 - 			bio_put(pre_req->bio);
2097 - 			goto release_mem;
2098 - 		}
2099 -
2100 - 		list_add_tail(&pre_req->list_req, &hpb->lh_pre_req_free);
2101 - 	}
2102 -
2103 - 	return 0;
2104 - release_mem:
2105 - 	list_for_each_entry_safe(pre_req, t, &hpb->lh_pre_req_free, list_req) {
2106 - 		list_del_init(&pre_req->list_req);
2107 - 		bio_put(pre_req->bio);
2108 - 		__free_page(pre_req->wb.m_page);
2109 - 	}
2110 -
2111 - 	kfree(hpb->pre_req);
2112 - 	return -ENOMEM;
2113 - }
2114 -
2115 - static void ufshpb_pre_req_mempool_destroy(struct ufshpb_lu *hpb)
2116 - {
2117 - 	struct ufshpb_req *pre_req = NULL;
2118 - 	int i;
2119 -
2120 - 	for (i = 0; i < hpb->throttle_pre_req; i++) {
2121 - 		pre_req = hpb->pre_req + i;
2122 - 		bio_put(hpb->pre_req[i].bio);
2123 - 		if (!pre_req->wb.m_page)
2124 - 			__free_page(hpb->pre_req[i].wb.m_page);
2125 - 		list_del_init(&pre_req->list_req);
2126 - 	}
2127 -
2128 - 	kfree(hpb->pre_req);
2129 - }
2130 -
2131 - static void ufshpb_stat_init(struct ufshpb_lu *hpb)
2132 - {
2133 - 	hpb->stats.hit_cnt = 0;
2134 - 	hpb->stats.miss_cnt = 0;
2135 - 	hpb->stats.rcmd_noti_cnt = 0;
2136 - 	hpb->stats.rcmd_active_cnt = 0;
2137 - 	hpb->stats.rcmd_inactive_cnt = 0;
2138 - 	hpb->stats.map_req_cnt = 0;
2139 - 	hpb->stats.umap_req_cnt = 0;
2140 - }
2141 -
2142 - static void ufshpb_param_init(struct ufshpb_lu *hpb)
2143 - {
2144 - 	hpb->params.requeue_timeout_ms = HPB_REQUEUE_TIME_MS;
2145 - 	if (hpb->is_hcm)
2146 - 		ufshpb_hcm_param_init(hpb);
2147 - }
2148 -
2149 - static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
2150 - {
2151 - 	int ret;
2152 -
2153 - 	spin_lock_init(&hpb->rgn_state_lock);
2154 - 	spin_lock_init(&hpb->rsp_list_lock);
2155 - 	spin_lock_init(&hpb->param_lock);
2156 -
2157 - 	INIT_LIST_HEAD(&hpb->lru_info.lh_lru_rgn);
2158 - 	INIT_LIST_HEAD(&hpb->lh_act_srgn);
2159 - 	INIT_LIST_HEAD(&hpb->lh_inact_rgn);
2160 - 	INIT_LIST_HEAD(&hpb->list_hpb_lu);
2161 -
2162 - 	INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
2163 - 	if (hpb->is_hcm) {
2164 - 		INIT_WORK(&hpb->ufshpb_normalization_work,
2165 - 			  ufshpb_normalization_work_handler);
2166 - 		INIT_DELAYED_WORK(&hpb->ufshpb_read_to_work,
2167 - 				  ufshpb_read_to_handler);
2168 - 	}
2169 -
2170 - 	hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
2171 - 			  sizeof(struct ufshpb_req), 0, 0, NULL);
2172 - 	if (!hpb->map_req_cache) {
2173 - 		dev_err(hba->dev, "ufshpb(%d) ufshpb_req_cache create fail",
2174 - 			hpb->lun);
2175 - 		return -ENOMEM;
2176 - 	}
2177 -
2178 - 	hpb->m_page_cache = kmem_cache_create("ufshpb_m_page_cache",
2179 - 			  sizeof(struct page *) * hpb->pages_per_srgn,
2180 - 			  0, 0, NULL);
2181 - 	if (!hpb->m_page_cache) {
2182 - 		dev_err(hba->dev, "ufshpb(%d) ufshpb_m_page_cache create fail",
2183 - 			hpb->lun);
2184 - 		ret = -ENOMEM;
2185 - 		goto release_req_cache;
2186 - 	}
2187 -
2188 - 	ret = ufshpb_pre_req_mempool_init(hpb);
2189 - 	if (ret) {
2190 - 		dev_err(hba->dev, "ufshpb(%d) pre_req_mempool init fail",
2191 - 			hpb->lun);
2192 - 		goto release_m_page_cache;
2193 - 	}
2194 -
2195 - 	ret = ufshpb_alloc_region_tbl(hba, hpb);
2196 - 	if (ret)
2197 - 		goto release_pre_req_mempool;
2198 -
2199 - 	ufshpb_stat_init(hpb);
2200 - 	ufshpb_param_init(hpb);
2201 -
2202 - 	if (hpb->is_hcm) {
2203 - 		unsigned int poll;
2204 -
2205 - 		poll = hpb->params.timeout_polling_interval_ms;
2206 - 		schedule_delayed_work(&hpb->ufshpb_read_to_work,
2207 - 				      msecs_to_jiffies(poll));
2208 - 	}
2209 -
2210 - 	return 0;
2211 -
2212 - release_pre_req_mempool:
2213 - 	ufshpb_pre_req_mempool_destroy(hpb);
2214 - release_m_page_cache:
2215 - 	kmem_cache_destroy(hpb->m_page_cache);
2216 - release_req_cache:
2217 - 	kmem_cache_destroy(hpb->map_req_cache);
2218 - 	return ret;
2219 - }
2220 -
2221 - static struct ufshpb_lu *
2222 - ufshpb_alloc_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev,
2223 - 		    struct ufshpb_dev_info *hpb_dev_info,
2224 - 		    struct ufshpb_lu_info *hpb_lu_info)
2225 - {
2226 - 	struct ufshpb_lu *hpb;
2227 - 	int ret;
2228 -
2229 - 	hpb = kzalloc(sizeof(struct ufshpb_lu), GFP_KERNEL);
2230 - 	if (!hpb)
2231 - 		return NULL;
2232 -
2233 - 	hpb->lun = sdev->lun;
2234 - 	hpb->sdev_ufs_lu = sdev;
2235 -
2236 - 	ufshpb_lu_parameter_init(hba, hpb, hpb_dev_info, hpb_lu_info);
2237 -
2238 - 	ret = ufshpb_lu_hpb_init(hba, hpb);
2239 - 	if (ret) {
2240 - 		dev_err(hba->dev, "hpb lu init failed. ret %d", ret);
2241 - 		goto release_hpb;
2242 - 	}
2243 -
2244 - 	sdev->hostdata = hpb;
2245 - 	return hpb;
2246 -
2247 - release_hpb:
2248 - 	kfree(hpb);
2249 - 	return NULL;
2250 - }
2251 -
2252 - static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)
2253 - {
2254 - 	struct ufshpb_region *rgn, *next_rgn;
2255 - 	struct ufshpb_subregion *srgn, *next_srgn;
2256 - 	unsigned long flags;
2257 -
2258 - 	/*
2259 - 	 * If the device reset occurred, the remaining HPB region information
2260 - 	 * may be stale. Therefore, by discarding the lists of HPB response
2261 - 	 * that remained after reset, we prevent unnecessary work.
2262 - 	 */
2263 - 	spin_lock_irqsave(&hpb->rsp_list_lock, flags);
2264 - 	list_for_each_entry_safe(rgn, next_rgn, &hpb->lh_inact_rgn,
2265 - 				 list_inact_rgn)
2266 - 		list_del_init(&rgn->list_inact_rgn);
2267 -
2268 - 	list_for_each_entry_safe(srgn, next_srgn, &hpb->lh_act_srgn,
2269 - 				 list_act_srgn)
2270 - 		list_del_init(&srgn->list_act_srgn);
2271 - 	spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
2272 - }
2273 -
2274 - static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
2275 - {
2276 - 	if (hpb->is_hcm) {
2277 - 		cancel_delayed_work_sync(&hpb->ufshpb_read_to_work);
2278 - 		cancel_work_sync(&hpb->ufshpb_normalization_work);
2279 - 	}
2280 - 	cancel_work_sync(&hpb->map_work);
2281 - }
2282 -
2283 - static bool ufshpb_check_hpb_reset_query(struct ufs_hba *hba)
2284 - {
2285 - 	int err = 0;
2286 - 	bool flag_res = true;
2287 - 	int try;
2288 -
2289 - 	/* wait for the device to complete HPB reset query */
2290 - 	for (try = 0; try < HPB_RESET_REQ_RETRIES; try++) {
2291 - 		dev_dbg(hba->dev,
2292 - 			"%s: start flag reset polling %d times\n",
2293 - 			__func__, try);
2294 -
2295 - 		/* Poll fHpbReset flag to be cleared */
2296 - 		err = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_READ_FLAG,
2297 - 				QUERY_FLAG_IDN_HPB_RESET, 0, &flag_res);
2298 -
2299 - 		if (err) {
2300 - 			dev_err(hba->dev,
2301 - 				"%s: reading fHpbReset flag failed with error %d\n",
2302 - 				__func__, err);
2303 - 			return flag_res;
2304 - 		}
2305 -
2306 - 		if (!flag_res)
2307 - 			goto out;
2308 -
2309 - 		usleep_range(1000, 1100);
2310 - 	}
2311 - 	if (flag_res) {
2312 - 		dev_err(hba->dev,
2313 - 			"%s: fHpbReset was not cleared by the device\n",
2314 - 			__func__);
2315 - 	}
2316 - out:
2317 - 	return flag_res;
2318 - }
2319 -
2320 - /**
2321 -  * ufshpb_toggle_state - switch HPB state of all LUs
2322 -  * @hba: per-adapter instance
2323 -  * @src: expected current HPB state
2324 -  * @dest: target HPB state to switch to
2325 -  */
2326 - void ufshpb_toggle_state(struct ufs_hba *hba, enum UFSHPB_STATE src, enum UFSHPB_STATE dest)
2327 - {
2328 - 	struct
ufshpb_lu *hpb;
2329 - 	struct scsi_device *sdev;
2330 -
2331 - 	shost_for_each_device(sdev, hba->host) {
2332 - 		hpb = ufshpb_get_hpb_data(sdev);
2333 -
2334 - 		if (!hpb || ufshpb_get_state(hpb) != src)
2335 - 			continue;
2336 - 		ufshpb_set_state(hpb, dest);
2337 -
2338 - 		if (dest == HPB_RESET) {
2339 - 			ufshpb_cancel_jobs(hpb);
2340 - 			ufshpb_discard_rsp_lists(hpb);
2341 - 		}
2342 - 	}
2343 - }
2344 -
2345 - void ufshpb_suspend(struct ufs_hba *hba)
2346 - {
2347 - 	struct ufshpb_lu *hpb;
2348 - 	struct scsi_device *sdev;
2349 -
2350 - 	shost_for_each_device(sdev, hba->host) {
2351 - 		hpb = ufshpb_get_hpb_data(sdev);
2352 - 		if (!hpb || ufshpb_get_state(hpb) != HPB_PRESENT)
2353 - 			continue;
2354 -
2355 - 		ufshpb_set_state(hpb, HPB_SUSPEND);
2356 - 		ufshpb_cancel_jobs(hpb);
2357 - 	}
2358 - }
2359 -
2360 - void ufshpb_resume(struct ufs_hba *hba)
2361 - {
2362 - 	struct ufshpb_lu *hpb;
2363 - 	struct scsi_device *sdev;
2364 -
2365 - 	shost_for_each_device(sdev, hba->host) {
2366 - 		hpb = ufshpb_get_hpb_data(sdev);
2367 - 		if (!hpb || ufshpb_get_state(hpb) != HPB_SUSPEND)
2368 - 			continue;
2369 -
2370 - 		ufshpb_set_state(hpb, HPB_PRESENT);
2371 - 		ufshpb_kick_map_work(hpb);
2372 - 		if (hpb->is_hcm) {
2373 - 			unsigned int poll = hpb->params.timeout_polling_interval_ms;
2374 -
2375 - 			schedule_delayed_work(&hpb->ufshpb_read_to_work, msecs_to_jiffies(poll));
2376 - 		}
2377 - 	}
2378 - }
2379 -
2380 - static int ufshpb_get_lu_info(struct ufs_hba *hba, int lun,
2381 - 			      struct ufshpb_lu_info *hpb_lu_info)
2382 - {
2383 - 	u16 max_active_rgns;
2384 - 	u8 lu_enable;
2385 - 	int size = QUERY_DESC_MAX_SIZE;
2386 - 	int ret;
2387 - 	char desc_buf[QUERY_DESC_MAX_SIZE];
2388 -
2389 - 	ufshcd_rpm_get_sync(hba);
2390 - 	ret = ufshcd_query_descriptor_retry(hba, UPIU_QUERY_OPCODE_READ_DESC,
2391 - 					    QUERY_DESC_IDN_UNIT, lun, 0,
2392 - 					    desc_buf, &size);
2393 - 	ufshcd_rpm_put_sync(hba);
2394 -
2395 - 	if (ret) {
2396 - 		dev_err(hba->dev,
2397 - 			"%s: idn: %d lun: %d query request failed",
2398 - 			__func__, QUERY_DESC_IDN_UNIT,
lun);
2399 - 		return ret;
2400 - 	}
2401 -
2402 - 	lu_enable = desc_buf[UNIT_DESC_PARAM_LU_ENABLE];
2403 - 	if (lu_enable != LU_ENABLED_HPB_FUNC)
2404 - 		return -ENODEV;
2405 -
2406 - 	max_active_rgns = get_unaligned_be16(
2407 - 			desc_buf + UNIT_DESC_PARAM_HPB_LU_MAX_ACTIVE_RGNS);
2408 - 	if (!max_active_rgns) {
2409 - 		dev_err(hba->dev,
2410 - 			"lun %d wrong number of max active regions\n", lun);
2411 - 		return -ENODEV;
2412 - 	}
2413 -
2414 - 	hpb_lu_info->num_blocks = get_unaligned_be64(
2415 - 			desc_buf + UNIT_DESC_PARAM_LOGICAL_BLK_COUNT);
2416 - 	hpb_lu_info->pinned_start = get_unaligned_be16(
2417 - 			desc_buf + UNIT_DESC_PARAM_HPB_PIN_RGN_START_OFF);
2418 - 	hpb_lu_info->num_pinned = get_unaligned_be16(
2419 - 			desc_buf + UNIT_DESC_PARAM_HPB_NUM_PIN_RGNS);
2420 - 	hpb_lu_info->max_active_rgns = max_active_rgns;
2421 -
2422 - 	return 0;
2423 - }
2424 -
2425 - void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev)
2426 - {
2427 - 	struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev);
2428 -
2429 - 	if (!hpb)
2430 - 		return;
2431 -
2432 - 	ufshpb_set_state(hpb, HPB_FAILED);
2433 -
2434 - 	sdev = hpb->sdev_ufs_lu;
2435 - 	sdev->hostdata = NULL;
2436 -
2437 - 	ufshpb_cancel_jobs(hpb);
2438 -
2439 - 	ufshpb_pre_req_mempool_destroy(hpb);
2440 - 	ufshpb_destroy_region_tbl(hpb);
2441 -
2442 - 	kmem_cache_destroy(hpb->map_req_cache);
2443 - 	kmem_cache_destroy(hpb->m_page_cache);
2444 -
2445 - 	list_del_init(&hpb->list_hpb_lu);
2446 -
2447 - 	kfree(hpb);
2448 - }
2449 -
2450 - static void ufshpb_hpb_lu_prepared(struct ufs_hba *hba)
2451 - {
2452 - 	int pool_size;
2453 - 	struct ufshpb_lu *hpb;
2454 - 	struct scsi_device *sdev;
2455 - 	bool init_success;
2456 -
2457 - 	if (tot_active_srgn_pages == 0) {
2458 - 		ufshpb_remove(hba);
2459 - 		return;
2460 - 	}
2461 -
2462 - 	init_success = !ufshpb_check_hpb_reset_query(hba);
2463 -
2464 - 	pool_size = PAGE_ALIGN(ufshpb_host_map_kbytes * SZ_1K) / PAGE_SIZE;
2465 - 	if (pool_size > tot_active_srgn_pages) {
2466 - 		mempool_resize(ufshpb_mctx_pool,
tot_active_srgn_pages); 2467 - mempool_resize(ufshpb_page_pool, tot_active_srgn_pages); 2468 - } 2469 - 2470 - shost_for_each_device(sdev, hba->host) { 2471 - hpb = ufshpb_get_hpb_data(sdev); 2472 - if (!hpb) 2473 - continue; 2474 - 2475 - if (init_success) { 2476 - ufshpb_set_state(hpb, HPB_PRESENT); 2477 - if ((hpb->lu_pinned_end - hpb->lu_pinned_start) > 0) 2478 - queue_work(ufshpb_wq, &hpb->map_work); 2479 - } else { 2480 - dev_err(hba->dev, "destroy HPB lu %d\n", hpb->lun); 2481 - ufshpb_destroy_lu(hba, sdev); 2482 - } 2483 - } 2484 - 2485 - if (!init_success) 2486 - ufshpb_remove(hba); 2487 - } 2488 - 2489 - void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev) 2490 - { 2491 - struct ufshpb_lu *hpb; 2492 - int ret; 2493 - struct ufshpb_lu_info hpb_lu_info = { 0 }; 2494 - int lun = sdev->lun; 2495 - 2496 - if (lun >= hba->dev_info.max_lu_supported) 2497 - goto out; 2498 - 2499 - ret = ufshpb_get_lu_info(hba, lun, &hpb_lu_info); 2500 - if (ret) 2501 - goto out; 2502 - 2503 - hpb = ufshpb_alloc_hpb_lu(hba, sdev, &hba->ufshpb_dev, 2504 - &hpb_lu_info); 2505 - if (!hpb) 2506 - goto out; 2507 - 2508 - tot_active_srgn_pages += hpb_lu_info.max_active_rgns * 2509 - hpb->srgns_per_rgn * hpb->pages_per_srgn; 2510 - 2511 - out: 2512 - /* All LUs are initialized */ 2513 - if (atomic_dec_and_test(&hba->ufshpb_dev.slave_conf_cnt)) 2514 - ufshpb_hpb_lu_prepared(hba); 2515 - } 2516 - 2517 - static int ufshpb_init_mem_wq(struct ufs_hba *hba) 2518 - { 2519 - int ret; 2520 - unsigned int pool_size; 2521 - 2522 - ufshpb_mctx_cache = kmem_cache_create("ufshpb_mctx_cache", 2523 - sizeof(struct ufshpb_map_ctx), 2524 - 0, 0, NULL); 2525 - if (!ufshpb_mctx_cache) { 2526 - dev_err(hba->dev, "ufshpb: cannot init mctx cache\n"); 2527 - return -ENOMEM; 2528 - } 2529 - 2530 - pool_size = PAGE_ALIGN(ufshpb_host_map_kbytes * SZ_1K) / PAGE_SIZE; 2531 - dev_info(hba->dev, "%s:%d ufshpb_host_map_kbytes %u pool_size %u\n", 2532 - __func__, __LINE__, ufshpb_host_map_kbytes, 
pool_size); 2533 - 2534 - ufshpb_mctx_pool = mempool_create_slab_pool(pool_size, 2535 - ufshpb_mctx_cache); 2536 - if (!ufshpb_mctx_pool) { 2537 - dev_err(hba->dev, "ufshpb: cannot init mctx pool\n"); 2538 - ret = -ENOMEM; 2539 - goto release_mctx_cache; 2540 - } 2541 - 2542 - ufshpb_page_pool = mempool_create_page_pool(pool_size, 0); 2543 - if (!ufshpb_page_pool) { 2544 - dev_err(hba->dev, "ufshpb: cannot init page pool\n"); 2545 - ret = -ENOMEM; 2546 - goto release_mctx_pool; 2547 - } 2548 - 2549 - ufshpb_wq = alloc_workqueue("ufshpb-wq", 2550 - WQ_UNBOUND | WQ_MEM_RECLAIM, 0); 2551 - if (!ufshpb_wq) { 2552 - dev_err(hba->dev, "ufshpb: alloc workqueue failed\n"); 2553 - ret = -ENOMEM; 2554 - goto release_page_pool; 2555 - } 2556 - 2557 - return 0; 2558 - 2559 - release_page_pool: 2560 - mempool_destroy(ufshpb_page_pool); 2561 - release_mctx_pool: 2562 - mempool_destroy(ufshpb_mctx_pool); 2563 - release_mctx_cache: 2564 - kmem_cache_destroy(ufshpb_mctx_cache); 2565 - return ret; 2566 - } 2567 - 2568 - void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf) 2569 - { 2570 - struct ufshpb_dev_info *hpb_info = &hba->ufshpb_dev; 2571 - int max_active_rgns = 0; 2572 - int hpb_num_lu; 2573 - 2574 - hpb_num_lu = geo_buf[GEOMETRY_DESC_PARAM_HPB_NUMBER_LU]; 2575 - if (hpb_num_lu == 0) { 2576 - dev_err(hba->dev, "No HPB LU supported\n"); 2577 - hpb_info->hpb_disabled = true; 2578 - return; 2579 - } 2580 - 2581 - hpb_info->rgn_size = geo_buf[GEOMETRY_DESC_PARAM_HPB_REGION_SIZE]; 2582 - hpb_info->srgn_size = geo_buf[GEOMETRY_DESC_PARAM_HPB_SUBREGION_SIZE]; 2583 - max_active_rgns = get_unaligned_be16(geo_buf + 2584 - GEOMETRY_DESC_PARAM_HPB_MAX_ACTIVE_REGS); 2585 - 2586 - if (hpb_info->rgn_size == 0 || hpb_info->srgn_size == 0 || 2587 - max_active_rgns == 0) { 2588 - dev_err(hba->dev, "No HPB supported device\n"); 2589 - hpb_info->hpb_disabled = true; 2590 - return; 2591 - } 2592 - } 2593 - 2594 - void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) 2595 - { 2596 - 
struct ufshpb_dev_info *hpb_dev_info = &hba->ufshpb_dev; 2597 - int version, ret; 2598 - int max_single_cmd; 2599 - 2600 - hpb_dev_info->control_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL]; 2601 - 2602 - version = get_unaligned_be16(desc_buf + DEVICE_DESC_PARAM_HPB_VER); 2603 - if ((version != HPB_SUPPORT_VERSION) && 2604 - (version != HPB_SUPPORT_LEGACY_VERSION)) { 2605 - dev_err(hba->dev, "%s: HPB %x version is not supported.\n", 2606 - __func__, version); 2607 - hpb_dev_info->hpb_disabled = true; 2608 - return; 2609 - } 2610 - 2611 - if (version == HPB_SUPPORT_LEGACY_VERSION) 2612 - hpb_dev_info->is_legacy = true; 2613 - 2614 - /* 2615 - * Get the number of user logical unit to check whether all 2616 - * scsi_device finish initialization 2617 - */ 2618 - hpb_dev_info->num_lu = desc_buf[DEVICE_DESC_PARAM_NUM_LU]; 2619 - 2620 - if (hpb_dev_info->is_legacy) 2621 - return; 2622 - 2623 - ret = ufshcd_query_attr_retry(hba, UPIU_QUERY_OPCODE_READ_ATTR, 2624 - QUERY_ATTR_IDN_MAX_HPB_SINGLE_CMD, 0, 0, &max_single_cmd); 2625 - 2626 - if (ret) 2627 - hpb_dev_info->max_hpb_single_cmd = HPB_LEGACY_CHUNK_HIGH; 2628 - else 2629 - hpb_dev_info->max_hpb_single_cmd = min(max_single_cmd + 1, HPB_MULTI_CHUNK_HIGH); 2630 - } 2631 - 2632 - void ufshpb_init(struct ufs_hba *hba) 2633 - { 2634 - struct ufshpb_dev_info *hpb_dev_info = &hba->ufshpb_dev; 2635 - int try; 2636 - int ret; 2637 - 2638 - if (!ufshpb_is_allowed(hba) || !hba->dev_info.hpb_enabled) 2639 - return; 2640 - 2641 - if (ufshpb_init_mem_wq(hba)) { 2642 - hpb_dev_info->hpb_disabled = true; 2643 - return; 2644 - } 2645 - 2646 - atomic_set(&hpb_dev_info->slave_conf_cnt, hpb_dev_info->num_lu); 2647 - tot_active_srgn_pages = 0; 2648 - /* issue HPB reset query */ 2649 - for (try = 0; try < HPB_RESET_REQ_RETRIES; try++) { 2650 - ret = ufshcd_query_flag(hba, UPIU_QUERY_OPCODE_SET_FLAG, 2651 - QUERY_FLAG_IDN_HPB_RESET, 0, NULL); 2652 - if (!ret) 2653 - break; 2654 - } 2655 - } 2656 - 2657 - void ufshpb_remove(struct ufs_hba 
*hba) 2658 - { 2659 - mempool_destroy(ufshpb_page_pool); 2660 - mempool_destroy(ufshpb_mctx_pool); 2661 - kmem_cache_destroy(ufshpb_mctx_cache); 2662 - 2663 - destroy_workqueue(ufshpb_wq); 2664 - } 2665 - 2666 - module_param(ufshpb_host_map_kbytes, uint, 0644); 2667 - MODULE_PARM_DESC(ufshpb_host_map_kbytes, 2668 - "ufshpb host mapping memory kilo-bytes for ufshpb memory-pool");
-318
drivers/ufs/core/ufshpb.h
···
1 - /* SPDX-License-Identifier: GPL-2.0 */
2 - /*
3 - * Universal Flash Storage Host Performance Booster
4 - *
5 - * Copyright (C) 2017-2021 Samsung Electronics Co., Ltd.
6 - *
7 - * Authors:
8 - * Yongmyung Lee <ymhungry.lee@samsung.com>
9 - * Jinyoung Choi <j-young.choi@samsung.com>
10 - */
11 -
12 - #ifndef _UFSHPB_H_
13 - #define _UFSHPB_H_
14 -
15 - /* hpb response UPIU macro */
16 - #define HPB_RSP_NONE 0x0
17 - #define HPB_RSP_REQ_REGION_UPDATE 0x1
18 - #define HPB_RSP_DEV_RESET 0x2
19 - #define MAX_ACTIVE_NUM 2
20 - #define MAX_INACTIVE_NUM 2
21 - #define DEV_DATA_SEG_LEN 0x14
22 - #define DEV_SENSE_SEG_LEN 0x12
23 - #define DEV_DES_TYPE 0x80
24 - #define DEV_ADDITIONAL_LEN 0x10
25 -
26 - /* hpb map & entries macro */
27 - #define HPB_RGN_SIZE_UNIT 512
28 - #define HPB_ENTRY_BLOCK_SIZE SZ_4K
29 - #define HPB_ENTRY_SIZE 0x8
30 - #define PINNED_NOT_SET U32_MAX
31 -
32 - /* hpb support chunk size */
33 - #define HPB_LEGACY_CHUNK_HIGH 1
34 - #define HPB_MULTI_CHUNK_HIGH 255
35 -
36 - /* hpb vender defined opcode */
37 - #define UFSHPB_READ 0xF8
38 - #define UFSHPB_READ_BUFFER 0xF9
39 - #define UFSHPB_READ_BUFFER_ID 0x01
40 - #define UFSHPB_WRITE_BUFFER 0xFA
41 - #define UFSHPB_WRITE_BUFFER_INACT_SINGLE_ID 0x01
42 - #define UFSHPB_WRITE_BUFFER_PREFETCH_ID 0x02
43 - #define UFSHPB_WRITE_BUFFER_INACT_ALL_ID 0x03
44 - #define HPB_WRITE_BUFFER_CMD_LENGTH 10
45 - #define MAX_HPB_READ_ID 0x7F
46 - #define HPB_READ_BUFFER_CMD_LENGTH 10
47 - #define LU_ENABLED_HPB_FUNC 0x02
48 -
49 - #define HPB_RESET_REQ_RETRIES 10
50 - #define HPB_MAP_REQ_RETRIES 5
51 - #define HPB_REQUEUE_TIME_MS 0
52 -
53 - #define HPB_SUPPORT_VERSION 0x200
54 - #define HPB_SUPPORT_LEGACY_VERSION 0x100
55 -
56 - enum UFSHPB_MODE {
57 - HPB_HOST_CONTROL,
58 - HPB_DEVICE_CONTROL,
59 - };
60 -
61 - enum UFSHPB_STATE {
62 - HPB_INIT,
63 - HPB_PRESENT,
64 - HPB_SUSPEND,
65 - HPB_FAILED,
66 - HPB_RESET,
67 - };
68 -
69 - enum HPB_RGN_STATE {
70 - HPB_RGN_INACTIVE,
71 - HPB_RGN_ACTIVE,
72 - /* pinned regions are always active */
73 - HPB_RGN_PINNED,
74 - };
75 -
76 - enum HPB_SRGN_STATE {
77 - HPB_SRGN_UNUSED,
78 - HPB_SRGN_INVALID,
79 - HPB_SRGN_VALID,
80 - HPB_SRGN_ISSUED,
81 - };
82 -
83 - /**
84 - * struct ufshpb_lu_info - UFSHPB logical unit related info
85 - * @num_blocks: the number of logical block
86 - * @pinned_start: the start region number of pinned region
87 - * @num_pinned: the number of pinned regions
88 - * @max_active_rgns: maximum number of active regions
89 - */
90 - struct ufshpb_lu_info {
91 - int num_blocks;
92 - int pinned_start;
93 - int num_pinned;
94 - int max_active_rgns;
95 - };
96 -
97 - struct ufshpb_map_ctx {
98 - struct page **m_page;
99 - unsigned long *ppn_dirty;
100 - };
101 -
102 - struct ufshpb_subregion {
103 - struct ufshpb_map_ctx *mctx;
104 - enum HPB_SRGN_STATE srgn_state;
105 - int rgn_idx;
106 - int srgn_idx;
107 - bool is_last;
108 -
109 - /* subregion reads - for host mode */
110 - unsigned int reads;
111 -
112 - /* below information is used by rsp_list */
113 - struct list_head list_act_srgn;
114 - };
115 -
116 - struct ufshpb_region {
117 - struct ufshpb_lu *hpb;
118 - struct ufshpb_subregion *srgn_tbl;
119 - enum HPB_RGN_STATE rgn_state;
120 - int rgn_idx;
121 - int srgn_cnt;
122 -
123 - /* below information is used by rsp_list */
124 - struct list_head list_inact_rgn;
125 -
126 - /* below information is used by lru */
127 - struct list_head list_lru_rgn;
128 - unsigned long rgn_flags;
129 - #define RGN_FLAG_DIRTY 0
130 - #define RGN_FLAG_UPDATE 1
131 -
132 - /* region reads - for host mode */
133 - spinlock_t rgn_lock;
134 - unsigned int reads;
135 - /* region "cold" timer - for host mode */
136 - ktime_t read_timeout;
137 - unsigned int read_timeout_expiries;
138 - struct list_head list_expired_rgn;
139 - };
140 -
141 - #define for_each_sub_region(rgn, i, srgn) \
142 - for ((i) = 0; \
143 - ((i) < (rgn)->srgn_cnt) && ((srgn) = &(rgn)->srgn_tbl[i]); \
144 - (i)++)
145 -
146 - /**
147 - * struct ufshpb_req - HPB related request structure (write/read buffer)
148 - * @req: block layer request structure
149 - * @bio: bio for this request
150 - * @hpb: ufshpb_lu structure that related to
151 - * @list_req: ufshpb_req mempool list
152 - * @sense: store its sense data
153 - * @mctx: L2P map information
154 - * @rgn_idx: target region index
155 - * @srgn_idx: target sub-region index
156 - * @lun: target logical unit number
157 - * @m_page: L2P map information data for pre-request
158 - * @len: length of host-side cached L2P map in m_page
159 - * @lpn: start LPN of L2P map in m_page
160 - */
161 - struct ufshpb_req {
162 - struct request *req;
163 - struct bio *bio;
164 - struct ufshpb_lu *hpb;
165 - struct list_head list_req;
166 - union {
167 - struct {
168 - struct ufshpb_map_ctx *mctx;
169 - unsigned int rgn_idx;
170 - unsigned int srgn_idx;
171 - unsigned int lun;
172 - } rb;
173 - struct {
174 - struct page *m_page;
175 - unsigned int len;
176 - unsigned long lpn;
177 - } wb;
178 - };
179 - };
180 -
181 - struct victim_select_info {
182 - struct list_head lh_lru_rgn; /* LRU list of regions */
183 - int max_lru_active_cnt; /* supported hpb #region - pinned #region */
184 - atomic_t active_cnt;
185 - };
186 -
187 - /**
188 - * ufshpb_params - ufs hpb parameters
189 - * @requeue_timeout_ms - requeue threshold of wb command (0x2)
190 - * @activation_thld - min reads [IOs] to activate/update a region
191 - * @normalization_factor - shift right the region's reads
192 - * @eviction_thld_enter - min reads [IOs] for the entering region in eviction
193 - * @eviction_thld_exit - max reads [IOs] for the exiting region in eviction
194 - * @read_timeout_ms - timeout [ms] from the last read IO to the region
195 - * @read_timeout_expiries - amount of allowable timeout expireis
196 - * @timeout_polling_interval_ms - frequency in which timeouts are checked
197 - * @inflight_map_req - number of inflight map requests
198 - */
199 - struct ufshpb_params {
200 - unsigned int requeue_timeout_ms;
201 - unsigned int activation_thld;
202 - unsigned int normalization_factor;
203 - unsigned int eviction_thld_enter;
204 - unsigned int eviction_thld_exit;
205 - unsigned int read_timeout_ms;
206 - unsigned int read_timeout_expiries;
207 - unsigned int timeout_polling_interval_ms;
208 - unsigned int inflight_map_req;
209 - };
210 -
211 - struct ufshpb_stats {
212 - u64 hit_cnt;
213 - u64 miss_cnt;
214 - u64 rcmd_noti_cnt;
215 - u64 rcmd_active_cnt;
216 - u64 rcmd_inactive_cnt;
217 - u64 map_req_cnt;
218 - u64 pre_req_cnt;
219 - u64 umap_req_cnt;
220 - };
221 -
222 - struct ufshpb_lu {
223 - int lun;
224 - struct scsi_device *sdev_ufs_lu;
225 -
226 - spinlock_t rgn_state_lock; /* for protect rgn/srgn state */
227 - struct ufshpb_region *rgn_tbl;
228 -
229 - atomic_t hpb_state;
230 -
231 - spinlock_t rsp_list_lock;
232 - struct list_head lh_act_srgn; /* hold rsp_list_lock */
233 - struct list_head lh_inact_rgn; /* hold rsp_list_lock */
234 -
235 - /* pre request information */
236 - struct ufshpb_req *pre_req;
237 - int num_inflight_pre_req;
238 - int throttle_pre_req;
239 - int num_inflight_map_req; /* hold param_lock */
240 - spinlock_t param_lock;
241 -
242 - struct list_head lh_pre_req_free;
243 - int pre_req_max_tr_len;
244 -
245 - /* cached L2P map management worker */
246 - struct work_struct map_work;
247 -
248 - /* for selecting victim */
249 - struct victim_select_info lru_info;
250 - struct work_struct ufshpb_normalization_work;
251 - struct delayed_work ufshpb_read_to_work;
252 - unsigned long work_data_bits;
253 - #define TIMEOUT_WORK_RUNNING 0
254 -
255 - /* pinned region information */
256 - u32 lu_pinned_start;
257 - u32 lu_pinned_end;
258 -
259 - /* HPB related configuration */
260 - u32 rgns_per_lu;
261 - u32 srgns_per_lu;
262 - u32 last_srgn_entries;
263 - int srgns_per_rgn;
264 - u32 srgn_mem_size;
265 - u32 entries_per_rgn_mask;
266 - u32 entries_per_rgn_shift;
267 - u32 entries_per_srgn;
268 - u32 entries_per_srgn_mask;
269 - u32 entries_per_srgn_shift;
270 - u32 pages_per_srgn;
271 -
272 - bool is_hcm;
273 -
274 - struct ufshpb_stats stats;
275 - struct ufshpb_params params;
276 -
277 - struct kmem_cache *map_req_cache;
278 - struct kmem_cache *m_page_cache;
279 -
280 - struct list_head list_hpb_lu;
281 - };
282 -
283 - struct ufs_hba;
284 - struct ufshcd_lrb;
285 -
286 - #ifndef CONFIG_SCSI_UFS_HPB
287 - static int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) { return 0; }
288 - static void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
289 - static void ufshpb_resume(struct ufs_hba *hba) {}
290 - static void ufshpb_suspend(struct ufs_hba *hba) {}
291 - static void ufshpb_toggle_state(struct ufs_hba *hba, enum UFSHPB_STATE src, enum UFSHPB_STATE dest) {}
292 - static void ufshpb_init(struct ufs_hba *hba) {}
293 - static void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev) {}
294 - static void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev) {}
295 - static void ufshpb_remove(struct ufs_hba *hba) {}
296 - static bool ufshpb_is_allowed(struct ufs_hba *hba) { return false; }
297 - static void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf) {}
298 - static void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) {}
299 - static bool ufshpb_is_legacy(struct ufs_hba *hba) { return false; }
300 - #else
301 - int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
302 - void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
303 - void ufshpb_resume(struct ufs_hba *hba);
304 - void ufshpb_suspend(struct ufs_hba *hba);
305 - void ufshpb_toggle_state(struct ufs_hba *hba, enum UFSHPB_STATE src, enum UFSHPB_STATE dest);
306 - void ufshpb_init(struct ufs_hba *hba);
307 - void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev);
308 - void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev);
309 - void ufshpb_remove(struct ufs_hba *hba);
310 - bool ufshpb_is_allowed(struct ufs_hba *hba);
311 - void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf);
312 - void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf);
313 - bool ufshpb_is_legacy(struct ufs_hba *hba);
314 - extern struct attribute_group ufs_sysfs_hpb_stat_group;
315 - extern struct attribute_group ufs_sysfs_hpb_param_group;
316 - #endif
317 -
318 - #endif /* End of Header */
-39
include/ufs/ufs.h
···
517 517 u8 sense_data[UFS_SENSE_SIZE];
518 518 };
519 519
520 - struct ufshpb_active_field {
521 - __be16 active_rgn;
522 - __be16 active_srgn;
523 - };
524 - #define HPB_ACT_FIELD_SIZE 4
525 -
526 - /**
527 - * struct utp_hpb_rsp - Response UPIU structure
528 - * @residual_transfer_count: Residual transfer count DW-3
529 - * @reserved1: Reserved double words DW-4 to DW-7
530 - * @sense_data_len: Sense data length DW-8 U16
531 - * @desc_type: Descriptor type of sense data
532 - * @additional_len: Additional length of sense data
533 - * @hpb_op: HPB operation type
534 - * @lun: LUN of response UPIU
535 - * @active_rgn_cnt: Active region count
536 - * @inactive_rgn_cnt: Inactive region count
537 - * @hpb_active_field: Recommended to read HPB region and subregion
538 - * @hpb_inactive_field: To be inactivated HPB region and subregion
539 - */
540 - struct utp_hpb_rsp {
541 - __be32 residual_transfer_count;
542 - __be32 reserved1[4];
543 - __be16 sense_data_len;
544 - u8 desc_type;
545 - u8 additional_len;
546 - u8 hpb_op;
547 - u8 lun;
548 - u8 active_rgn_cnt;
549 - u8 inactive_rgn_cnt;
550 - struct ufshpb_active_field hpb_active_field[2];
551 - __be16 hpb_inactive_field[2];
552 - };
553 - #define UTP_HPB_RSP_SIZE 40
554 -
555 520 /**
556 521 * struct utp_upiu_rsp - general upiu response structure
557 522 * @header: UPIU header structure DW-0 to DW-2
···
527 562 struct utp_upiu_header header;
528 563 union {
529 564 struct utp_cmd_rsp sr;
530 - struct utp_hpb_rsp hr;
531 565 struct utp_upiu_query qr;
532 566 };
533 567 };
···
585 621 u32 clk_gating_wait_us;
586 622 /* Stores the depth of queue in UFS device */
587 623 u8 bqueuedepth;
588 -
589 - /* UFS HPB related flag */
590 - bool hpb_enabled;
591 624
592 625 /* UFS WB related flags */
593 626 bool wb_enabled;
-6
include/ufs/ufs_quirks.h
···
107 107 */
108 108 #define UFS_DEVICE_QUIRK_DELAY_AFTER_LPM (1 << 11)
109 109
110 - /*
111 - * Some UFS devices require L2P entry should be swapped before being sent to the
112 - * UFS device for HPB READ command.
113 - */
114 - #define UFS_DEVICE_QUIRK_SWAP_L2P_ENTRY_FOR_HPB_READ (1 << 12)
115 -
116 110 #endif /* UFS_QUIRKS_H_ */
-30
include/ufs/ufshcd.h
···
709 709 u32 wb_flush_threshold;
710 710 };
711 711
712 - #ifdef CONFIG_SCSI_UFS_HPB
713 - /**
714 - * struct ufshpb_dev_info - UFSHPB device related info
715 - * @num_lu: the number of user logical unit to check whether all lu finished
716 - * initialization
717 - * @rgn_size: device reported HPB region size
718 - * @srgn_size: device reported HPB sub-region size
719 - * @slave_conf_cnt: counter to check all lu finished initialization
720 - * @hpb_disabled: flag to check if HPB is disabled
721 - * @max_hpb_single_cmd: device reported bMAX_DATA_SIZE_FOR_SINGLE_CMD value
722 - * @is_legacy: flag to check HPB 1.0
723 - * @control_mode: either host or device
724 - */
725 - struct ufshpb_dev_info {
726 - int num_lu;
727 - int rgn_size;
728 - int srgn_size;
729 - atomic_t slave_conf_cnt;
730 - bool hpb_disabled;
731 - u8 max_hpb_single_cmd;
732 - bool is_legacy;
733 - u8 control_mode;
734 - };
735 - #endif
736 -
737 712 struct ufs_hba_monitor {
738 713 unsigned long chunk_size;
739 714
···
869 894 * @rpm_dev_flush_recheck_work: used to suspend from RPM (runtime power
870 895 * management) after the UFS device has finished a WriteBooster buffer
871 896 * flush or auto BKOP.
872 - * @ufshpb_dev: information related to HPB (Host Performance Booster).
873 897 * @monitor: statistics about UFS commands
874 898 * @crypto_capabilities: Content of crypto capabilities register (0x100)
875 899 * @crypto_cap_array: Array of crypto capabilities
···
1023 1049 struct device bsg_dev;
1024 1050 struct request_queue *bsg_queue;
1025 1051 struct delayed_work rpm_dev_flush_recheck_work;
1026 -
1027 - #ifdef CONFIG_SCSI_UFS_HPB
1028 - struct ufshpb_dev_info ufshpb_dev;
1029 - #endif
1030 1052
1031 1053 struct ufs_hba_monitor monitor;
1032 1054