···
       description:
         A port node pointing to the HDMI-TX port node.

+      port@2:
+        $ref: /schemas/graph.yaml#/properties/port
+        description:
+          A port node pointing to the DPI port node (e.g. DSI or LVDS transceiver).
+
   "#address-cells":
     const: 1
···
   samsung,burst-clock-frequency:
     $ref: /schemas/types.yaml#/definitions/uint32
     description:
-      DSIM high speed burst mode frequency.
+      DSIM high speed burst mode frequency. If absent,
+      the pixel clock from the attached device or bridge
+      will be used instead.

   samsung,esc-clock-frequency:
     $ref: /schemas/types.yaml#/definitions/uint32
···
   samsung,pll-clock-frequency:
     $ref: /schemas/types.yaml#/definitions/uint32
     description:
-      DSIM oscillator clock frequency.
+      DSIM oscillator clock frequency. If absent, the clock frequency
+      of sclk_mipi will be used instead.

   phys:
     maxItems: 1
···
       specified.

     port@1:
-      $ref: /schemas/graph.yaml#/properties/port
+      $ref: /schemas/graph.yaml#/$defs/port-base
+      unevaluatedProperties: false
       description:
         DSI output port node to the panel or the next bridge
         in the chain.
···
   - compatible
   - interrupts
   - reg
-  - samsung,burst-clock-frequency
   - samsung,esc-clock-frequency
-  - samsung,pll-clock-frequency

allOf:
  - $ref: ../dsi-controller.yaml#
···
     description: GPIO signal to enable DDC bus
     maxItems: 1

+  hdmi-pwr-supply:
+    description: Power supply for the HDMI +5V Power pin
+
   port:
     $ref: /schemas/graph.yaml#/properties/port
     description: Connection to controller providing HDMI signals
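For context, a hedged sketch of how a board devicetree might use the new optional property (the node label `reg_hdmi_5v`, the endpoint names, and the remote endpoint are invented for illustration; only `hdmi-pwr-supply` comes from the binding change above):

```dts
hdmi-connector {
	compatible = "hdmi-connector";
	type = "a";

	/* New optional property: powers the HDMI +5V Power pin */
	hdmi-pwr-supply = <&reg_hdmi_5v>;

	port {
		hdmi_connector_in: endpoint {
			remote-endpoint = <&hdmi_tx_out>;
		};
	};
};
```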
···
 - All keys shall be prefixed with `drm-`.
 - Whitespace between the delimiter and first non-whitespace character shall be
   ignored when parsing.
-- Neither keys or values are allowed to contain whitespace characters.
+- Keys are not allowed to contain whitespace characters.
 - Numerical key value pairs can end with optional unit string.
 - Data type of the value is fixed as defined in the specification.
···
 ----------

 - <uint> - Unsigned integer without defining the maximum value.
-- <str> - String excluding any above defined reserved characters or whitespace.
+- <keystr> - String excluding any above defined reserved characters or whitespace.
+- <valstr> - String.

 Mandatory fully standardised keys
 ---------------------------------

-- drm-driver: <str>
+- drm-driver: <valstr>

 String shall contain the name this driver registered as via the respective
 `struct drm_driver` data structure.

 Optional fully standardised keys
 --------------------------------
+
+Identification
+^^^^^^^^^^^^^^

 - drm-pdev: <aaaa:bb.cc.d>
···
 Userspace should make sure to not double account any usage statistics by using
 the above described criteria in order to associate data to individual clients.

-- drm-engine-<str>: <uint> ns
+Utilization
+^^^^^^^^^^^
+
+- drm-engine-<keystr>: <uint> ns

 GPUs usually contain multiple execution engines. Each shall be given a stable
-and unique name (str), with possible values documented in the driver specific
+and unique name (keystr), with possible values documented in the driver specific
 documentation.

 Value shall be in specified time units which the respective GPU engine spent
···
 was previously read, userspace is expected to stay with that larger previous
 value until a monotonic update is seen.

-- drm-engine-capacity-<str>: <uint>
+- drm-engine-capacity-<keystr>: <uint>

 Engine identifier string must be the same as the one specified in the
-drm-engine-<str> tag and shall contain a greater than zero number in case the
+drm-engine-<keystr> tag and shall contain a greater than zero number in case the
 exported engine corresponds to a group of identical hardware engines.

 In the absence of this tag parser shall assume capacity of one. Zero capacity
 is not allowed.

-- drm-memory-<str>: <uint> [KiB|MiB]
-
-Each possible memory type which can be used to store buffer objects by the
-GPU in question shall be given a stable and unique name to be returned as the
-string here.
-
-Value shall reflect the amount of storage currently consumed by the buffer
-object belong to this client, in the respective memory region.
-
-Default unit shall be bytes with optional unit specifiers of 'KiB' or 'MiB'
-indicating kibi- or mebi-bytes.
-
-- drm-cycles-<str> <uint>
+- drm-cycles-<keystr>: <uint>

 Engine identifier string must be the same as the one specified in the
-drm-engine-<str> tag and shall contain the number of busy cycles for the given
+drm-engine-<keystr> tag and shall contain the number of busy cycles for the given
 engine.

 Values are not required to be constantly monotonic if it makes the driver
···
 was previously read, userspace is expected to stay with that larger previous
 value until a monotonic update is seen.

-- drm-maxfreq-<str> <uint> [Hz|MHz|KHz]
+- drm-maxfreq-<keystr>: <uint> [Hz|MHz|KHz]

 Engine identifier string must be the same as the one specified in the
-drm-engine-<str> tag and shall contain the maximum frequency for the given
-engine. Taken together with drm-cycles-<str>, this can be used to calculate
-percentage utilization of the engine, whereas drm-engine-<str> only reflects
+drm-engine-<keystr> tag and shall contain the maximum frequency for the given
+engine. Taken together with drm-cycles-<keystr>, this can be used to calculate
+percentage utilization of the engine, whereas drm-engine-<keystr> only reflects
 time active without considering what frequency the engine is operating as a
 percentage of its maximum frequency.

+Memory
+^^^^^^
+
+- drm-memory-<region>: <uint> [KiB|MiB]
+
+Each possible memory type which can be used to store buffer objects by the
+GPU in question shall be given a stable and unique name to be returned as the
+string here. The name "memory" is reserved to refer to normal system memory.
+
+Value shall reflect the amount of storage currently consumed by the buffer
+objects belonging to this client, in the respective memory region.
+
+Default unit shall be bytes with optional unit specifiers of 'KiB' or 'MiB'
+indicating kibi- or mebi-bytes.
+
+- drm-shared-<region>: <uint> [KiB|MiB]
+
+The total size of buffers that are shared with another file (i.e. have more
+than a single handle).
+
+- drm-total-<region>: <uint> [KiB|MiB]
+
+The total size of buffers, including shared and private memory.
+
+- drm-resident-<region>: <uint> [KiB|MiB]
+
+The total size of buffers that are resident in the specified region.
+
+- drm-purgeable-<region>: <uint> [KiB|MiB]
+
+The total size of buffers that are purgeable.
+
+- drm-active-<region>: <uint> [KiB|MiB]
+
+The total size of buffers that are active on one or more engines.
+
+Implementation Details
+======================
+
+Drivers should use drm_show_fdinfo() in their `struct file_operations`, and
+implement &drm_driver.show_fdinfo if they wish to provide any stats which
+are not provided by drm_show_fdinfo(). But even driver specific stats should
+be documented above and where possible, aligned with other drivers.
+
 Driver specific implementations
-===============================
+-------------------------------

 :ref:`i915-usage-stats`
MAINTAINERS (+3 -2)
···
 F:	Documentation/devicetree/bindings/display/bridge/renesas,dw-hdmi.yaml
 F:	Documentation/devicetree/bindings/display/bridge/renesas,lvds.yaml
 F:	Documentation/devicetree/bindings/display/renesas,du.yaml
-F:	drivers/gpu/drm/rcar-du/
-F:	drivers/gpu/drm/shmobile/
+F:	drivers/gpu/drm/renesas/
 F:	include/linux/platform_data/shmob_drm.h

 DRM DRIVERS FOR ROCKCHIP
···
 QUALCOMM CLOUD AI (QAIC) DRIVER
 M:	Jeffrey Hugo <quic_jhugo@quicinc.com>
+R:	Carl Vanderlip <quic_carlv@quicinc.com>
+R:	Pranjal Ramajor Asha Kanojiya <quic_pkanojiy@quicinc.com>
 L:	linux-arm-msm@vger.kernel.org
 L:	dri-devel@lists.freedesktop.org
 S:	Supported
drivers/accel/habanalabs/common/command_buffer.c (-6)
···
 		return -EINVAL;
 	}

-	if (!hdev->mmu_enable) {
-		dev_err_ratelimited(hdev->dev,
-			"Cannot map CB because MMU is disabled\n");
-		return -EINVAL;
-	}
-
 	if (cb->is_mmu_mapped)
 		return 0;
···
 static bool is_cb_patched(struct hl_device *hdev, struct hl_cs_job *job)
 {
-	/*
-	 * Patched CB is created for external queues jobs, and for H/W queues
-	 * jobs if the user CB was allocated by driver and MMU is disabled.
-	 */
-	return (job->queue_type == QUEUE_TYPE_EXT ||
-			(job->queue_type == QUEUE_TYPE_HW &&
-					job->is_kernel_allocated_cb &&
-					!hdev->mmu_enable));
+	/* Patched CB is created for external queues jobs */
+	return (job->queue_type == QUEUE_TYPE_EXT);
 }

 /*
···
 		}
 	}

-	/* For H/W queue jobs, if a user CB was allocated by driver and MMU is
-	 * enabled, the user CB isn't released in cs_parser() and thus should be
+	/* For H/W queue jobs, if a user CB was allocated by driver,
+	 * the user CB isn't released in cs_parser() and thus should be
 	 * released here. This is also true for INT queues jobs which were
 	 * allocated by driver.
 	 */
-	if ((job->is_kernel_allocated_cb &&
-			((job->queue_type == QUEUE_TYPE_HW && hdev->mmu_enable) ||
-					job->queue_type == QUEUE_TYPE_INT))) {
+	if (job->is_kernel_allocated_cb &&
+			(job->queue_type == QUEUE_TYPE_HW || job->queue_type == QUEUE_TYPE_INT)) {
 		atomic_dec(&job->user_cb->cs_cnt);
 		hl_cb_put(job->user_cb);
 	}
···
 static void cs_timedout(struct work_struct *work)
 {
+	struct hl_cs *cs = container_of(work, struct hl_cs, work_tdr.work);
+	bool skip_reset_on_timeout, device_reset = false;
 	struct hl_device *hdev;
 	u64 event_mask = 0x0;
+	uint timeout_sec;
 	int rc;
-	struct hl_cs *cs = container_of(work, struct hl_cs,
-						work_tdr.work);
-	bool skip_reset_on_timeout = cs->skip_reset_on_timeout, device_reset = false;
+
+	skip_reset_on_timeout = cs->skip_reset_on_timeout;

 	rc = cs_get_unless_zero(cs);
 	if (!rc)
···
 		event_mask |= HL_NOTIFIER_EVENT_CS_TIMEOUT;
 	}

+	timeout_sec = jiffies_to_msecs(hdev->timeout_jiffies) / 1000;
+
 	switch (cs->type) {
 	case CS_TYPE_SIGNAL:
 		dev_err(hdev->dev,
-			"Signal command submission %llu has not finished in time!\n",
-			cs->sequence);
+			"Signal command submission %llu has not finished in %u seconds!\n",
+			cs->sequence, timeout_sec);
 		break;

 	case CS_TYPE_WAIT:
 		dev_err(hdev->dev,
-			"Wait command submission %llu has not finished in time!\n",
-			cs->sequence);
+			"Wait command submission %llu has not finished in %u seconds!\n",
+			cs->sequence, timeout_sec);
 		break;

 	case CS_TYPE_COLLECTIVE_WAIT:
 		dev_err(hdev->dev,
-			"Collective Wait command submission %llu has not finished in time!\n",
-			cs->sequence);
+			"Collective Wait command submission %llu has not finished in %u seconds!\n",
+			cs->sequence, timeout_sec);
 		break;

 	default:
 		dev_err(hdev->dev,
-			"Command submission %llu has not finished in time!\n",
-			cs->sequence);
+			"Command submission %llu has not finished in %u seconds!\n",
+			cs->sequence, timeout_sec);
 		break;
 	}
···
 	spin_unlock(&hdev->cs_mirror_lock);
 }

-void hl_abort_waitings_for_completion(struct hl_device *hdev)
+void hl_abort_waiting_for_cs_completions(struct hl_device *hdev)
 {
 	force_complete_cs(hdev);
 	force_complete_multi_cs(hdev);
-	hl_release_pending_user_interrupts(hdev);
 }

 static void job_wq_completion(struct work_struct *work)
···
 	else
 		cb_size = hdev->asic_funcs->get_signal_cb_size(hdev);

-	cb = hl_cb_kernel_create(hdev, cb_size,
-			q_type == QUEUE_TYPE_HW && hdev->mmu_enable);
+	cb = hl_cb_kernel_create(hdev, cb_size, q_type == QUEUE_TYPE_HW);
 	if (!cb) {
 		atomic64_inc(&ctx->cs_counters.out_of_mem_drop_cnt);
 		atomic64_inc(&cntr->out_of_mem_drop_cnt);
···
 		hdev->asic_funcs->hw_queues_unlock(hdev);
 		rc = -EINVAL;
-		goto out;
+		goto out_unlock;
 	}

 	/*
···
 		/* Release the id and free allocated memory of the handle */
 		idr_remove(&mgr->handles, handle_id);
+
+		/* unlock before calling ctx_put, where we might sleep */
+		spin_unlock(&mgr->lock);
 		hl_ctx_put(encaps_sig_hdl->ctx);
 		kfree(encaps_sig_hdl);
+		goto out;
 	} else {
 		rc = -EINVAL;
 		dev_err(hdev->dev, "failed to unreserve signals, cannot find handler\n");
 	}
-out:
+
+out_unlock:
 	spin_unlock(&mgr->lock);

+out:
 	return rc;
 }
···
 	return 0;
 }

-static int device_cdev_sysfs_add(struct hl_device *hdev)
+static int cdev_sysfs_debugfs_add(struct hl_device *hdev)
 {
 	int rc;
···
 		goto delete_ctrl_cdev_device;
 	}

-	hdev->cdev_sysfs_created = true;
+	hl_debugfs_add_device(hdev);
+
+	hdev->cdev_sysfs_debugfs_created = true;

 	return 0;
···
 	return rc;
 }

-static void device_cdev_sysfs_del(struct hl_device *hdev)
+static void cdev_sysfs_debugfs_remove(struct hl_device *hdev)
 {
-	if (!hdev->cdev_sysfs_created)
+	if (!hdev->cdev_sysfs_debugfs_created)
 		goto put_devices;

+	hl_debugfs_remove_device(hdev);
 	hl_sysfs_fini(hdev);
 	cdev_device_del(&hdev->cdev_ctrl, hdev->dev_ctrl);
 	cdev_device_del(&hdev->cdev, hdev->dev);
···
 	hdev->asic_funcs->early_fini(hdev);
 }

+static bool is_pci_link_healthy(struct hl_device *hdev)
+{
+	u16 vendor_id;
+
+	if (!hdev->pdev)
+		return false;
+
+	pci_read_config_word(hdev->pdev, PCI_VENDOR_ID, &vendor_id);
+
+	return (vendor_id == PCI_VENDOR_ID_HABANALABS);
+}
+
 static void hl_device_heartbeat(struct work_struct *work)
 {
 	struct hl_device *hdev = container_of(work, struct hl_device,
···
 		goto reschedule;

 	if (hl_device_operational(hdev, NULL))
-		dev_err(hdev->dev, "Device heartbeat failed!\n");
+		dev_err(hdev->dev, "Device heartbeat failed! PCI link is %s\n",
+			is_pci_link_healthy(hdev) ? "healthy" : "broken");

 	info.err_type = HL_INFO_FW_HEARTBEAT_ERR;
 	info.event_mask = &event_mask;
···
 	mutex_unlock(&hdev->fpriv_ctrl_list_lock);
 }

+static void hl_abort_waiting_for_completions(struct hl_device *hdev)
+{
+	hl_abort_waiting_for_cs_completions(hdev);
+
+	/* Release all pending user interrupts, each pending user interrupt
+	 * holds a reference to a user context.
+	 */
+	hl_release_pending_user_interrupts(hdev);
+}
+
 static void cleanup_resources(struct hl_device *hdev, bool hard_reset, bool fw_reset,
 				bool skip_wq_flush)
 {
···
 	/* flush the MMU prefetch workqueue */
 	flush_workqueue(hdev->prefetch_wq);

-	/* Release all pending user interrupts, each pending user interrupt
-	 * holds a reference to user context
-	 */
-	hl_release_pending_user_interrupts(hdev);
+	hl_abort_waiting_for_completions(hdev);
 }

 /*
···
 	hl_ctx_put(ctx);

-	hl_abort_waitings_for_completion(hdev);
+	hl_abort_waiting_for_completions(hdev);

 	return 0;
···
 int hl_device_init(struct hl_device *hdev)
 {
 	int i, rc, cq_cnt, user_interrupt_cnt, cq_ready_cnt;
-	bool add_cdev_sysfs_on_err = false;
+	bool expose_interfaces_on_err = false;

 	rc = create_cdev(hdev);
 	if (rc)
···
 	hdev->device_release_watchdog_timeout_sec = HL_DEVICE_RELEASE_WATCHDOG_TIMEOUT_SEC;

 	hdev->memory_scrub_val = MEM_SCRUB_DEFAULT_VAL;
-	hl_debugfs_add_device(hdev);

-	/* debugfs nodes are created in hl_ctx_init so it must be called after
-	 * hl_debugfs_add_device.
+	rc = hl_debugfs_device_init(hdev);
+	if (rc) {
+		dev_err(hdev->dev, "failed to initialize debugfs entry structure\n");
+		kfree(hdev->kernel_ctx);
+		goto mmu_fini;
+	}
+
+	/* The debugfs entry structure is accessed in hl_ctx_init(), so it must be called after
+	 * hl_debugfs_device_init().
 	 */
 	rc = hl_ctx_init(hdev, hdev->kernel_ctx, true);
 	if (rc) {
 		dev_err(hdev->dev, "failed to initialize kernel context\n");
 		kfree(hdev->kernel_ctx);
-		goto remove_device_from_debugfs;
+		goto debugfs_device_fini;
 	}

 	rc = hl_cb_pool_init(hdev);
···
 	}

 	/*
-	 * From this point, override rc (=0) in case of an error to allow
-	 * debugging (by adding char devices and create sysfs nodes as part of
-	 * the error flow).
+	 * From this point, override rc (=0) in case of an error to allow debugging
+	 * (by adding char devices and creating sysfs/debugfs files as part of the error flow).
 	 */
-	add_cdev_sysfs_on_err = true;
+	expose_interfaces_on_err = true;

 	/* Device is now enabled as part of the initialization requires
 	 * communication with the device firmware to get information that
···
 	}

 	/*
-	 * Expose devices and sysfs nodes to user.
-	 * From here there is no need to add char devices and create sysfs nodes
-	 * in case of an error.
+	 * Expose devices and sysfs/debugfs files to user.
+	 * From here there is no need to expose them in case of an error.
 	 */
-	add_cdev_sysfs_on_err = false;
-	rc = device_cdev_sysfs_add(hdev);
+	expose_interfaces_on_err = false;
+	rc = cdev_sysfs_debugfs_add(hdev);
 	if (rc) {
-		dev_err(hdev->dev,
-			"Failed to add char devices and sysfs nodes\n");
+		dev_err(hdev->dev, "Failed to add char devices and sysfs/debugfs files\n");
 		rc = 0;
 		goto out_disabled;
 	}
···
 	if (hl_ctx_put(hdev->kernel_ctx) != 1)
 		dev_err(hdev->dev,
 			"kernel ctx is still alive on initialization failure\n");
-remove_device_from_debugfs:
-	hl_debugfs_remove_device(hdev);
+debugfs_device_fini:
+	hl_debugfs_device_fini(hdev);
 mmu_fini:
 	hl_mmu_fini(hdev);
 eq_fini:
···
 	put_device(hdev->dev);
 out_disabled:
 	hdev->disabled = true;
-	if (add_cdev_sysfs_on_err)
-		device_cdev_sysfs_add(hdev);
-	if (hdev->pdev)
-		dev_err(&hdev->pdev->dev,
-			"Failed to initialize hl%d. Device %s is NOT usable !\n",
-			hdev->cdev_idx, dev_name(&(hdev)->pdev->dev));
-	else
-		pr_err("Failed to initialize hl%d. Device %s is NOT usable !\n",
-			hdev->cdev_idx, dev_name(&(hdev)->pdev->dev));
+	if (expose_interfaces_on_err)
+		cdev_sysfs_debugfs_add(hdev);
+	dev_err(&hdev->pdev->dev,
+		"Failed to initialize hl%d. Device %s is NOT usable !\n",
+		hdev->cdev_idx, dev_name(&hdev->pdev->dev));

 	return rc;
 }
···
 	if ((hdev->kernel_ctx) && (hl_ctx_put(hdev->kernel_ctx) != 1))
 		dev_err(hdev->dev, "kernel ctx is still alive\n");

-	hl_debugfs_remove_device(hdev);
-
 	hl_dec_fini(hdev);

 	hl_vm_fini(hdev);
···
 	device_early_fini(hdev);

-	/* Hide devices and sysfs nodes from user */
-	device_cdev_sysfs_del(hdev);
+	/* Hide devices and sysfs/debugfs files from user */
+	cdev_sysfs_debugfs_remove(hdev);
+
+	hl_debugfs_device_fini(hdev);

 	pr_info("removed device successfully\n");
 }
···
 	if (info->event_mask)
 		*info->event_mask |= HL_NOTIFIER_EVENT_CRITICL_FW_ERR;
+}
+
+void hl_enable_err_info_capture(struct hl_error_info *captured_err_info)
+{
+	vfree(captured_err_info->page_fault_info.user_mappings);
+	memset(captured_err_info, 0, sizeof(struct hl_error_info));
+	atomic_set(&captured_err_info->cs_timeout.write_enable, 1);
+	captured_err_info->undef_opcode.write_enable = true;
 }
drivers/accel/habanalabs/common/firmware_if.c (+166 -52)
···
 	return NULL;
 }

+/**
+ * extract_u32_until_given_char() - given a string of the format "<u32><char>*", extract the u32.
+ * @str: the given string
+ * @ver_num: the pointer to the extracted u32 to be returned to the caller.
+ * @given_char: the given char at the end of the u32 in the string
+ *
+ * Return: Upon success, return a pointer to the given_char in the string. Upon failure, return NULL
+ */
+static char *extract_u32_until_given_char(char *str, u32 *ver_num, char given_char)
+{
+	char num_str[8] = {}, *ch;
+
+	ch = strchrnul(str, given_char);
+	if (*ch == '\0' || ch == str || ch - str >= sizeof(num_str))
+		return NULL;
+
+	memcpy(num_str, str, ch - str);
+	if (kstrtou32(num_str, 10, ver_num))
+		return NULL;
+	return ch;
+}
+
+/**
+ * hl_get_sw_major_minor_subminor() - extract the FW's SW version major, minor, sub-minor
+ *				      from the version string
+ * @hdev: pointer to the hl_device
+ * @fw_str: the FW's version string
+ *
+ * The extracted version is set in the hdev fields: fw_sw_{major/minor/sub_minor}_ver.
+ *
+ * fw_str is expected to have one of two possible formats, examples:
+ * 1) 'Preboot version hl-gaudi2-1.9.0-fw-42.0.1-sec-3'
+ * 2) 'Preboot version hl-gaudi2-1.9.0-rc-fw-42.0.1-sec-3'
+ * In those examples, the SW major,minor,subminor are correspondingly: 1,9,0.
+ *
+ * Return: 0 for success or a negative error code for failure.
+ */
+static int hl_get_sw_major_minor_subminor(struct hl_device *hdev, const char *fw_str)
+{
+	char *end, *start;
+
+	end = strnstr(fw_str, "-rc-", VERSION_MAX_LEN);
+	if (end == fw_str)
+		return -EINVAL;
+
+	if (!end)
+		end = strnstr(fw_str, "-fw-", VERSION_MAX_LEN);
+
+	if (end == fw_str)
+		return -EINVAL;
+
+	if (!end)
+		return -EINVAL;
+
+	for (start = end - 1; start != fw_str; start--) {
+		if (*start == '-')
+			break;
+	}
+
+	if (start == fw_str)
+		return -EINVAL;
+
+	/* start/end point each to the starting and ending hyphen of the sw version e.g. -1.9.0- */
+	start++;
+	start = extract_u32_until_given_char(start, &hdev->fw_sw_major_ver, '.');
+	if (!start)
+		goto err_zero_ver;
+
+	start++;
+	start = extract_u32_until_given_char(start, &hdev->fw_sw_minor_ver, '.');
+	if (!start)
+		goto err_zero_ver;
+
+	start++;
+	start = extract_u32_until_given_char(start, &hdev->fw_sw_sub_minor_ver, '-');
+	if (!start)
+		goto err_zero_ver;
+
+	return 0;
+
+err_zero_ver:
+	hdev->fw_sw_major_ver = 0;
+	hdev->fw_sw_minor_ver = 0;
+	hdev->fw_sw_sub_minor_ver = 0;
+	return -EINVAL;
+}
+
+/**
+ * hl_get_preboot_major_minor() - extract the FW's version major, minor from the version string.
+ * @hdev: pointer to the hl_device
+ * @preboot_ver: the FW's version string
+ *
+ * preboot_ver is expected to be the format of <major>.<minor>.<sub minor>*, e.g: 42.0.1-sec-3
+ * The extracted version is set in the hdev fields: fw_inner_{major/minor}_ver.
+ *
+ * Return: 0 on success, negative error code for failure.
+ */
 static int hl_get_preboot_major_minor(struct hl_device *hdev, char *preboot_ver)
 {
-	char major[8], minor[8], *first_dot, *second_dot;
-	int rc;
-
-	first_dot = strnstr(preboot_ver, ".", 10);
-	if (first_dot) {
-		strscpy(major, preboot_ver, first_dot - preboot_ver + 1);
-		rc = kstrtou32(major, 10, &hdev->fw_major_version);
-	} else {
-		rc = -EINVAL;
+	preboot_ver = extract_u32_until_given_char(preboot_ver, &hdev->fw_inner_major_ver, '.');
+	if (!preboot_ver) {
+		dev_err(hdev->dev, "Error parsing preboot major version\n");
+		goto err_zero_ver;
 	}

-	if (rc) {
-		dev_err(hdev->dev, "Error %d parsing preboot major version\n", rc);
-		return rc;
+	preboot_ver++;
+
+	preboot_ver = extract_u32_until_given_char(preboot_ver, &hdev->fw_inner_minor_ver, '.');
+	if (!preboot_ver) {
+		dev_err(hdev->dev, "Error parsing preboot minor version\n");
+		goto err_zero_ver;
 	}
+	return 0;

-	/* skip the first dot */
-	first_dot++;
-
-	second_dot = strnstr(first_dot, ".", 10);
-	if (second_dot) {
-		strscpy(minor, first_dot, second_dot - first_dot + 1);
-		rc = kstrtou32(minor, 10, &hdev->fw_minor_version);
-	} else {
-		rc = -EINVAL;
-	}
-
-	if (rc)
-		dev_err(hdev->dev, "Error %d parsing preboot minor version\n", rc);
-	return rc;
+err_zero_ver:
+	hdev->fw_inner_major_ver = 0;
+	hdev->fw_inner_minor_ver = 0;
+	return -EINVAL;
 }

 static int hl_request_fw(struct hl_device *hdev,
···
 {
 	gen_pool_free(hdev->cpu_accessible_dma_pool, (u64) (uintptr_t) vaddr,
 			size);
+}
+
+int hl_fw_send_soft_reset(struct hl_device *hdev)
+{
+	struct cpucp_packet pkt;
+	int rc;
+
+	memset(&pkt, 0, sizeof(pkt));
+	pkt.ctl = cpu_to_le32(CPUCP_PACKET_SOFT_RESET << CPUCP_PKT_CTL_OPCODE_SHIFT);
+	rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 0, NULL);
+	if (rc)
+		dev_err(hdev->dev, "failed to send soft-reset msg (err = %d)\n", rc);
+
+	return rc;
 }

 int hl_fw_send_device_activity(struct hl_device *hdev, bool open)
···
 void hl_fw_ask_halt_machine_without_linux(struct hl_device *hdev)
 {
-	struct static_fw_load_mgr *static_loader =
-			&hdev->fw_loader.static_loader;
+	struct fw_load_mgr *fw_loader = &hdev->fw_loader;
+	u32 status, cpu_boot_status_reg, cpu_timeout;
+	struct static_fw_load_mgr *static_loader;
+	struct pre_fw_load_props *pre_fw_load;
 	int rc;

 	if (hdev->device_cpu_is_halted)
···
 	/* Stop device CPU to make sure nothing bad happens */
 	if (hdev->asic_prop.dynamic_fw_load) {
+		pre_fw_load = &fw_loader->pre_fw_load;
+		cpu_timeout = fw_loader->cpu_timeout;
+		cpu_boot_status_reg = pre_fw_load->cpu_boot_status_reg;
+
 		rc = hl_fw_dynamic_send_protocol_cmd(hdev, &hdev->fw_loader,
-				COMMS_GOTO_WFE, 0, false,
-				hdev->fw_loader.cpu_timeout);
-		if (rc)
+				COMMS_GOTO_WFE, 0, false, cpu_timeout);
+		if (rc) {
 			dev_err(hdev->dev, "Failed sending COMMS_GOTO_WFE\n");
+		} else {
+			rc = hl_poll_timeout(
+				hdev,
+				cpu_boot_status_reg,
+				status,
+				status == CPU_BOOT_STATUS_IN_WFE,
+				hdev->fw_poll_interval_usec,
+				cpu_timeout);
+			if (rc)
+				dev_err(hdev->dev, "Current status=%u. Timed-out updating to WFE\n",
+					status);
+		}
 	} else {
+		static_loader = &hdev->fw_loader.static_loader;
 		WREG32(static_loader->kmd_msg_to_cpu_reg, KMD_MSG_GOTO_WFE);
 		msleep(static_loader->cpu_reset_wait_msec);
···
 	struct asic_fixed_properties *prop = &hdev->asic_prop;
 	char *preboot_ver, *boot_ver;
 	char btl_ver[32];
+	int rc;

 	switch (fwc) {
 	case FW_COMP_BOOT_FIT:
···
 		break;
 	case FW_COMP_PREBOOT:
 		strscpy(prop->preboot_ver, fw_version, VERSION_MAX_LEN);
-		preboot_ver = strnstr(prop->preboot_ver, "Preboot",
-						VERSION_MAX_LEN);
+		preboot_ver = strnstr(prop->preboot_ver, "Preboot", VERSION_MAX_LEN);
+		dev_info(hdev->dev, "preboot full version: '%s'\n", preboot_ver);
+
 		if (preboot_ver && preboot_ver != prop->preboot_ver) {
 			strscpy(btl_ver, prop->preboot_ver,
 				min((int) (preboot_ver - prop->preboot_ver), 31));
 			dev_info(hdev->dev, "%s\n", btl_ver);
 		}

+		rc = hl_get_sw_major_minor_subminor(hdev, preboot_ver);
+		if (rc)
+			return rc;
 		preboot_ver = extract_fw_ver_from_str(prop->preboot_ver);
 		if (preboot_ver) {
-			int rc;
-
-			dev_info(hdev->dev, "preboot version %s\n", preboot_ver);
-
 			rc = hl_get_preboot_major_minor(hdev, preboot_ver);
 			kfree(preboot_ver);
 			if (rc)
···
 			fw_loader->dynamic_loader.comm_desc.cur_fw_ver);
 	if (rc)
 		goto release_fw;
-
-	/* update state according to boot stage */
-	if (cur_fwc == FW_COMP_BOOT_FIT) {
-		struct cpu_dyn_regs *dyn_regs;
-
-		dyn_regs = &fw_loader->dynamic_loader.comm_desc.cpu_dyn_regs;
-		hl_fw_boot_fit_update_state(hdev,
-			le32_to_cpu(dyn_regs->cpu_boot_dev_sts0),
-			le32_to_cpu(dyn_regs->cpu_boot_dev_sts1));
-	}

 	/* copy boot fit to space allocated by FW */
 	rc = hl_fw_dynamic_copy_image(hdev, fw, fw_loader);
···
 		goto protocol_err;
 	}

+	rc = hl_fw_dynamic_wait_for_boot_fit_active(hdev, fw_loader);
+	if (rc)
+		goto protocol_err;
+
+	hl_fw_boot_fit_update_state(hdev,
+			le32_to_cpu(dyn_regs->cpu_boot_dev_sts0),
+			le32_to_cpu(dyn_regs->cpu_boot_dev_sts1));
+
 	/*
 	 * when testing FW load (without Linux) on PLDM we don't want to
 	 * wait until boot fit is active as it may take several hours.
···
 	 */
 	if (hdev->pldm && !(hdev->fw_components & FW_TYPE_LINUX))
 		return 0;
-
-	rc = hl_fw_dynamic_wait_for_boot_fit_active(hdev, fw_loader);
-	if (rc)
-		goto protocol_err;

 	/* Enable DRAM scrambling before Linux boot and after successful
 	 * UBoot
···
 	if (rc)
 		goto protocol_err;

-	hl_fw_linux_update_state(hdev, le32_to_cpu(dyn_regs->cpu_boot_dev_sts0),
+	hl_fw_linux_update_state(hdev,
+				le32_to_cpu(dyn_regs->cpu_boot_dev_sts0),
 				le32_to_cpu(dyn_regs->cpu_boot_dev_sts1));

 	hl_fw_dynamic_update_linux_interrupt_if(hdev);
drivers/accel/habanalabs/common/habanalabs.h (+33 -44)
···
 struct hl_device;
 struct hl_fpriv;

+#define PCI_VENDOR_ID_HABANALABS        0x1da3
+
 /* Use upper bits of mmap offset to store habana driver specific information.
  * bits[63:59] - Encode mmap type
  * bits[45:0]  - mmap offset value
···
         MMU_DR_PGT = 0, /* device-dram-resident MMU PGT */
         MMU_HR_PGT,     /* host resident MMU PGT */
         MMU_NUM_PGT_LOCATIONS   /* num of PGT locations */
-};
-
-/**
- * enum hl_mmu_enablement - what mmu modules to enable
- * @MMU_EN_NONE: mmu disabled.
- * @MMU_EN_ALL: enable all.
- * @MMU_EN_PMMU_ONLY: Enable only the PMMU leaving the DMMU disabled.
- */
-enum hl_mmu_enablement {
-        MMU_EN_NONE = 0,
-        MMU_EN_ALL = 1,
-        MMU_EN_PMMU_ONLY = 3,   /* N/A for Goya/Gaudi */
 };

 /*
···
         ktime_t __timeout; \
         u32 __elbi_read; \
         int __rc = 0; \
-        if (hdev->pdev) \
-                __timeout = ktime_add_us(ktime_get(), timeout_us); \
-        else \
-                __timeout = ktime_add_us(ktime_get(),\
-                                min((u64)(timeout_us * 10), \
-                                        (u64) HL_SIM_MAX_TIMEOUT_US)); \
+        __timeout = ktime_add_us(ktime_get(), timeout_us); \
         might_sleep_if(sleep_us); \
         for (;;) { \
                 if (elbi) { \
···
         u8 __arr_idx; \
         int __rc = 0; \
 \
-        if (hdev->pdev) \
-                __timeout = ktime_add_us(ktime_get(), timeout_us); \
-        else \
-                __timeout = ktime_add_us(ktime_get(),\
-                                min(((u64)timeout_us * 10), \
-                                        (u64) HL_SIM_MAX_TIMEOUT_US)); \
-        \
+        __timeout = ktime_add_us(ktime_get(), timeout_us); \
         might_sleep_if(sleep_us); \
         if (arr_size >= 64) \
                 __rc = -EINVAL; \
···
                 mem_written_by_device) \
 ({ \
         ktime_t __timeout; \
-        if (hdev->pdev) \
-                __timeout = ktime_add_us(ktime_get(), timeout_us); \
-        else \
-                __timeout = ktime_add_us(ktime_get(),\
-                                min((u64)(timeout_us * 100), \
-                                        (u64) HL_SIM_MAX_TIMEOUT_US)); \
+ \
+        __timeout = ktime_add_us(ktime_get(), timeout_us); \
         might_sleep_if(sleep_us); \
         for (;;) { \
                 /* Verify we read updates done by other cores or by device */ \
···
  * @captured_err_info: holds information about errors.
  * @reset_info: holds current device reset information.
  * @stream_master_qid_arr: pointer to array with QIDs of master streams.
- * @fw_major_version: major version of current loaded preboot.
- * @fw_minor_version: minor version of current loaded preboot.
+ * @fw_inner_major_ver: the major of current loaded preboot inner version.
+ * @fw_inner_minor_ver: the minor of current loaded preboot inner version.
+ * @fw_sw_major_ver: the major of current loaded preboot SW version.
+ * @fw_sw_minor_ver: the minor of current loaded preboot SW version.
+ * @fw_sw_sub_minor_ver: the sub-minor of current loaded preboot SW version.
  * @dram_used_mem: current DRAM memory consumption.
  * @memory_scrub_val: the value to which the dram will be scrubbed to using cb scrub_device_dram
  * @timeout_jiffies: device CS timeout value.
···
  * @in_debug: whether the device is in a state where the profiling/tracing infrastructure
  *            can be used. This indication is needed because in some ASICs we need to do
  *            specific operations to enable that infrastructure.
- * @cdev_sysfs_created: were char devices and sysfs nodes created.
+ * @cdev_sysfs_debugfs_created: were char devices and sysfs/debugfs files created.
  * @stop_on_err: true if engines should stop on error.
  * @supports_sync_stream: is sync stream supported.
  * @sync_stream_queue_idx: helper index for sync stream queues initialization.
···
  * @nic_ports_mask: Controls which NIC ports are enabled. Used only for testing.
  * @fw_components: Controls which f/w components to load to the device. There are multiple f/w
  *                 stages and sometimes we want to stop at a certain stage. Used only for testing.
- * @mmu_enable: Whether to enable or disable the device MMU(s). Used only for testing.
+ * @mmu_disable: Disable the device MMU(s). Used only for testing.
  * @cpu_queues_enable: Whether to enable queues communication vs. the f/w. Used only for testing.
  * @pldm: Whether we are running in Palladium environment. Used only for testing.
  * @hard_reset_on_fw_events: Whether to do device hard-reset when a fatal event is received from
···
         struct hl_reset_info            reset_info;

         u32                             *stream_master_qid_arr;
-        u32                             fw_major_version;
-        u32                             fw_minor_version;
+        u32                             fw_inner_major_ver;
+        u32                             fw_inner_minor_ver;
+        u32                             fw_sw_major_ver;
+        u32                             fw_sw_minor_ver;
+        u32                             fw_sw_sub_minor_ver;
         atomic64_t                      dram_used_mem;
         u64                             memory_scrub_val;
         u64                             timeout_jiffies;
···
         u8                              init_done;
         u8                              device_cpu_disabled;
         u8                              in_debug;
-        u8                              cdev_sysfs_created;
+        u8                              cdev_sysfs_debugfs_created;
         u8                              stop_on_err;
         u8                              supports_sync_stream;
         u8                              sync_stream_queue_idx;
···
         /* Parameters for bring-up to be upstreamed */
         u64                             nic_ports_mask;
         u64                             fw_components;
-        u8                              mmu_enable;
+        u8                              mmu_disable;
         u8                              cpu_queues_enable;
         u8                              pldm;
         u8                              hard_reset_on_fw_events;
···
         hl_ioctl_t *func;
 };

-static inline bool hl_is_fw_ver_below_1_9(struct hl_device *hdev)
+static inline bool hl_is_fw_sw_ver_below(struct hl_device *hdev, u32 fw_sw_major, u32 fw_sw_minor)
 {
-        return (hdev->fw_major_version < 42);
+        if (hdev->fw_sw_major_ver < fw_sw_major)
+                return true;
+        if (hdev->fw_sw_major_ver > fw_sw_major)
+                return false;
+        if (hdev->fw_sw_minor_ver < fw_sw_minor)
+                return true;
+        return false;
 }

 /*
···
                         u64 curr_pte, bool *is_new_hop);
 int hl_mmu_hr_get_tlb_info(struct hl_ctx *ctx, u64 virt_addr, struct hl_mmu_hop_info *hops,
                         struct hl_hr_mmu_funcs *hr_func);
-void hl_mmu_swap_out(struct hl_ctx *ctx);
-void hl_mmu_swap_in(struct hl_ctx *ctx);
 int hl_mmu_if_set_funcs(struct hl_device *hdev);
 void hl_mmu_v1_set_funcs(struct hl_device *hdev, struct hl_mmu_funcs *mmu);
 void hl_mmu_v2_hr_set_funcs(struct hl_device *hdev, struct hl_mmu_funcs *mmu);
···
 int hl_fw_dram_pending_row_get(struct hl_device *hdev, u32 *pend_rows_num);
 int hl_fw_cpucp_engine_core_asid_set(struct hl_device *hdev, u32 asid);
 int hl_fw_send_device_activity(struct hl_device *hdev, bool open);
+int hl_fw_send_soft_reset(struct hl_device *hdev);
 int hl_pci_bars_map(struct hl_device *hdev, const char * const name[3],
                         bool is_wc[3]);
 int hl_pci_elbi_read(struct hl_device *hdev, u64 addr, u32 *data);
···
 void hl_dec_ctx_fini(struct hl_ctx *ctx);

 void hl_release_pending_user_interrupts(struct hl_device *hdev);
-void hl_abort_waitings_for_completion(struct hl_device *hdev);
+void hl_abort_waiting_for_cs_completions(struct hl_device *hdev);
 int hl_cs_signal_sob_wraparound_handler(struct hl_device *hdev, u32 q_idx,
                         struct hl_hw_sob **hw_sob, u32 count, bool encaps_sig);
···
                         u64 *event_mask);
 void hl_handle_critical_hw_err(struct hl_device *hdev, u16 event_id, u64 *event_mask);
 void hl_handle_fw_err(struct hl_device *hdev, struct hl_info_fw_err_info *info);
+void hl_enable_err_info_capture(struct hl_error_info *captured_err_info);

 #ifdef CONFIG_DEBUG_FS

 void hl_debugfs_init(void);
 void hl_debugfs_fini(void);
+int hl_debugfs_device_init(struct hl_device *hdev);
+void hl_debugfs_device_fini(struct hl_device *hdev);
 void hl_debugfs_add_device(struct hl_device *hdev);
 void hl_debugfs_remove_device(struct hl_device *hdev);
 void hl_debugfs_add_file(struct hl_fpriv *hpriv);
+2-7
drivers/accel/habanalabs/common/habanalabs_drv.c
···
 #include <linux/pci.h>
 #include <linux/module.h>
+#include <linux/vmalloc.h>

 #define CREATE_TRACE_POINTS
 #include <trace/events/habanalabs.h>
···
 module_param(boot_error_status_mask, ulong, 0444);
 MODULE_PARM_DESC(boot_error_status_mask,
         "Mask of the error status during device CPU boot (If bitX is cleared then error X is masked. Default all 1's)");
-
-#define PCI_VENDOR_ID_HABANALABS        0x1da3

 #define PCI_IDS_GOYA                    0x0001
 #define PCI_IDS_GAUDI                   0x1000
···
         hl_debugfs_add_file(hpriv);

-        memset(&hdev->captured_err_info, 0, sizeof(hdev->captured_err_info));
-        atomic_set(&hdev->captured_err_info.cs_timeout.write_enable, 1);
-        hdev->captured_err_info.undef_opcode.write_enable = true;
+        hl_enable_err_info_capture(&hdev->captured_err_info);

         hdev->open_counter++;
         hdev->last_successful_open_jif = jiffies;
···
 {
         hdev->nic_ports_mask = 0;
         hdev->fw_components = FW_TYPE_ALL_TYPES;
-        hdev->mmu_enable = MMU_EN_ALL;
         hdev->cpu_queues_enable = 1;
         hdev->pldm = 0;
         hdev->hard_reset_on_fw_events = 1;
···
         /* If CPU queues not enabled, no way to do heartbeat */
         if (!hdev->cpu_queues_enable)
                 hdev->heartbeat = 0;
-
         fixup_device_params_per_asic(hdev, tmp_timeout);

         return 0;
···
         cur_eqe_index = FIELD_GET(EQ_CTL_INDEX_MASK, cur_eqe);
         if ((hdev->event_queue.check_eqe_index) &&
                         (((eq->prev_eqe_index + 1) & EQ_CTL_INDEX_MASK) != cur_eqe_index)) {
-                dev_dbg(hdev->dev,
+                dev_err(hdev->dev,
                         "EQE %#x in queue is ready but index does not match %d!=%d",
                         cur_eqe,
                         ((eq->prev_eqe_index + 1) & EQ_CTL_INDEX_MASK),
+2-102
drivers/accel/habanalabs/common/memory.c
···
         }
 }

-static int get_paddr_from_handle(struct hl_ctx *ctx, struct hl_mem_in *args,
-                        u64 *paddr)
-{
-        struct hl_device *hdev = ctx->hdev;
-        struct hl_vm *vm = &hdev->vm;
-        struct hl_vm_phys_pg_pack *phys_pg_pack;
-        u32 handle;
-
-        handle = lower_32_bits(args->map_device.handle);
-        spin_lock(&vm->idr_lock);
-        phys_pg_pack = idr_find(&vm->phys_pg_pack_handles, handle);
-        if (!phys_pg_pack) {
-                spin_unlock(&vm->idr_lock);
-                dev_err(hdev->dev, "no match for handle %u\n", handle);
-                return -EINVAL;
-        }
-
-        *paddr = phys_pg_pack->pages[0];
-
-        spin_unlock(&vm->idr_lock);
-
-        return 0;
-}
-
 /**
  * map_device_va() - map the given memory.
  * @ctx: pointer to the context structure.
···
         return rc;
 }

-static int mem_ioctl_no_mmu(struct hl_fpriv *hpriv, union hl_mem_args *args)
-{
-        struct hl_device *hdev = hpriv->hdev;
-        u64 block_handle, device_addr = 0;
-        struct hl_ctx *ctx = hpriv->ctx;
-        u32 handle = 0, block_size;
-        int rc;
-
-        switch (args->in.op) {
-        case HL_MEM_OP_ALLOC:
-                if (args->in.alloc.mem_size == 0) {
-                        dev_err(hdev->dev, "alloc size must be larger than 0\n");
-                        rc = -EINVAL;
-                        goto out;
-                }
-
-                /* Force contiguous as there are no real MMU
-                 * translations to overcome physical memory gaps
-                 */
-                args->in.flags |= HL_MEM_CONTIGUOUS;
-                rc = alloc_device_memory(ctx, &args->in, &handle);
-
-                memset(args, 0, sizeof(*args));
-                args->out.handle = (__u64) handle;
-                break;
-
-        case HL_MEM_OP_FREE:
-                rc = free_device_memory(ctx, &args->in);
-                break;
-
-        case HL_MEM_OP_MAP:
-                if (args->in.flags & HL_MEM_USERPTR) {
-                        dev_err(hdev->dev, "Failed to map host memory when MMU is disabled\n");
-                        rc = -EPERM;
-                } else {
-                        rc = get_paddr_from_handle(ctx, &args->in, &device_addr);
-                        memset(args, 0, sizeof(*args));
-                        args->out.device_virt_addr = device_addr;
-                }
-
-                break;
-
-        case HL_MEM_OP_UNMAP:
-                rc = 0;
-                break;
-
-        case HL_MEM_OP_MAP_BLOCK:
-                rc = map_block(hdev, args->in.map_block.block_addr, &block_handle, &block_size);
-                args->out.block_handle = block_handle;
-                args->out.block_size = block_size;
-                break;
-
-        case HL_MEM_OP_EXPORT_DMABUF_FD:
-                dev_err(hdev->dev, "Failed to export dma-buf object when MMU is disabled\n");
-                rc = -EPERM;
-                break;
-
-        case HL_MEM_OP_TS_ALLOC:
-                rc = allocate_timestamps_buffers(hpriv, &args->in, &args->out.handle);
-                break;
-        default:
-                dev_err(hdev->dev, "Unknown opcode for memory IOCTL\n");
-                rc = -EINVAL;
-                break;
-        }
-
-out:
-        return rc;
-}
-
 static void ts_buff_release(struct hl_mmap_mem_buf *buf)
 {
         struct hl_ts_buff *ts_buff = buf->private;
···
                         hdev->status[status]);
                 return -EBUSY;
         }
-
-        if (!hdev->mmu_enable)
-                return mem_ioctl_no_mmu(hpriv, args);

         switch (args->in.op) {
         case HL_MEM_OP_ALLOC:
···
         atomic64_set(&ctx->dram_phys_mem, 0);

         /*
-         * - If MMU is enabled, init the ranges as usual.
-         * - If MMU is disabled, in case of host mapping, the returned address
-         *   is the given one.
          * In case of DRAM mapping, the returned address is the physical
          * address of the memory related to the given handle.
          */
-        if (!ctx->hdev->mmu_enable)
+        if (ctx->hdev->mmu_disable)
                 return 0;

         dram_range_start = prop->dmmu.start_addr;
···
         struct hl_mem_in args;
         int i;

-        if (!hdev->mmu_enable)
+        if (hdev->mmu_disable)
                 return;

         hl_debugfs_remove_ctx_mem_hash(hdev, ctx);
+8-48
drivers/accel/habanalabs/common/mmu/mmu.c
···
 {
         int rc = -EOPNOTSUPP;

-        if (!hdev->mmu_enable)
+        if (hdev->mmu_disable)
                 return 0;

         mutex_init(&hdev->mmu_lock);
···
  */
 void hl_mmu_fini(struct hl_device *hdev)
 {
-        if (!hdev->mmu_enable)
+        if (hdev->mmu_disable)
                 return;

         if (hdev->mmu_func[MMU_DR_PGT].fini != NULL)
···
         struct hl_device *hdev = ctx->hdev;
         int rc = -EOPNOTSUPP;

-        if (!hdev->mmu_enable)
+        if (hdev->mmu_disable)
                 return 0;

         if (hdev->mmu_func[MMU_DR_PGT].ctx_init != NULL) {
···
 {
         struct hl_device *hdev = ctx->hdev;

-        if (!hdev->mmu_enable)
+        if (hdev->mmu_disable)
                 return;

         if (hdev->mmu_func[MMU_DR_PGT].ctx_fini != NULL)
···
         u64 real_virt_addr;
         bool is_dram_addr;

-        if (!hdev->mmu_enable)
+        if (hdev->mmu_disable)
                 return 0;

         is_dram_addr = hl_is_dram_va(hdev, virt_addr);
···
         bool is_dram_addr;


-        if (!hdev->mmu_enable)
+        if (hdev->mmu_disable)
                 return 0;

         is_dram_addr = hl_is_dram_va(hdev, virt_addr);
···
         return rc;
 }

-/*
- * hl_mmu_swap_out - marks all mapping of the given ctx as swapped out
- *
- * @ctx: pointer to the context structure
- *
- */
-void hl_mmu_swap_out(struct hl_ctx *ctx)
-{
-        struct hl_device *hdev = ctx->hdev;
-
-        if (!hdev->mmu_enable)
-                return;
-
-        if (hdev->mmu_func[MMU_DR_PGT].swap_out != NULL)
-                hdev->mmu_func[MMU_DR_PGT].swap_out(ctx);
-
-        if (hdev->mmu_func[MMU_HR_PGT].swap_out != NULL)
-                hdev->mmu_func[MMU_HR_PGT].swap_out(ctx);
-}
-
-/*
- * hl_mmu_swap_in - marks all mapping of the given ctx as swapped in
- *
- * @ctx: pointer to the context structure
- *
- */
-void hl_mmu_swap_in(struct hl_ctx *ctx)
-{
-        struct hl_device *hdev = ctx->hdev;
-
-        if (!hdev->mmu_enable)
-                return;
-
-        if (hdev->mmu_func[MMU_DR_PGT].swap_in != NULL)
-                hdev->mmu_func[MMU_DR_PGT].swap_in(ctx);
-
-        if (hdev->mmu_func[MMU_HR_PGT].swap_in != NULL)
-                hdev->mmu_func[MMU_HR_PGT].swap_in(ctx);
-}
-
 static void hl_mmu_pa_page_with_offset(struct hl_ctx *ctx, u64 virt_addr,
                                         struct hl_mmu_hop_info *hops,
                                         u64 *phys_addr)
···
         int pgt_residency, rc;
         bool is_dram_addr;

-        if (!hdev->mmu_enable)
+        if (hdev->mmu_disable)
                 return -EOPNOTSUPP;

         prop = &hdev->asic_prop;
···

 int hl_mmu_if_set_funcs(struct hl_device *hdev)
 {
-        if (!hdev->mmu_enable)
+        if (hdev->mmu_disable)
                 return 0;

         switch (hdev->asic_type) {
···
         union {
                 __le64 data_placeholder;
                 struct hl_eq_ecc_data ecc_data;
-                struct hl_eq_hbm_ecc_data hbm_ecc_data; /* Gaudi1 HBM */
+                struct hl_eq_hbm_ecc_data hbm_ecc_data; /* Obsolete */
                 struct hl_eq_sm_sei_data sm_sei_data;
                 struct cpucp_pkt_sync_err pkt_sync_err;
                 struct hl_eq_fw_alive fw_alive;
···
  *       which address is passed via the CpuCp packet. In addition, the host's driver
  *       passes the max size it allows the CpuCP to write to the structure, to prevent
  *       data corruption in case of mismatched driver/FW versions.
- *       Relevant only to Gaudi.
+ *       Obsolete.
  *
  * CPUCP_PACKET_GENERIC_PASSTHROUGH -
  *       Generic opcode for all firmware info that is only passed to host
···
  *
  * CPUCP_PACKET_REGISTER_INTERRUPTS -
  *       Packet to register interrupts indicating LKD is ready to receive events from FW.
+ *
+ * CPUCP_PACKET_SOFT_RESET -
+ *       Packet to perform soft-reset.
  */

 enum cpucp_packet_id {
···
         CPUCP_PACKET_RESERVED11,                /* not used */
         CPUCP_PACKET_RESERVED12,                /* internal */
         CPUCP_PACKET_REGISTER_INTERRUPTS,       /* internal */
+        CPUCP_PACKET_SOFT_RESET,                /* internal */
         CPUCP_PACKET_ID_MAX                     /* must be last */
 };
···
 enum cpucp_led_index {
         CPUCP_LED0_INDEX = 0,
         CPUCP_LED1_INDEX,
-        CPUCP_LED2_INDEX
+        CPUCP_LED2_INDEX,
+        CPUCP_LED_MAX_INDEX = CPUCP_LED2_INDEX
 };

 /*
  * enum cpucp_packet_rc - Error return code
  * @cpucp_packet_success -> in case of success.
- * @cpucp_packet_invalid -> this is to support Goya and Gaudi platform.
+ * @cpucp_packet_invalid -> this is to support first generation platforms.
  * @cpucp_packet_fault -> in case of processing error like failing to
  *                        get device binding or semaphore etc.
- * @cpucp_packet_invalid_pkt -> when cpucp packet is un-supported. This is
- *                              supported Greco onwards.
+ * @cpucp_packet_invalid_pkt -> when cpucp packet is un-supported.
  * @cpucp_packet_invalid_params -> when checking parameter like length of buffer
- *                                 or attribute value etc. Supported Greco onwards.
+ *                                 or attribute value etc.
  * @cpucp_packet_rc_max -> It indicates size of enum so should be at last.
  */
 enum cpucp_packet_rc {
···
 #define DCORE_MON_REGS_SZ       512
 /*
  * struct dcore_monitor_regs_data - DCORE monitor regs data.
- * the structure follows sync manager block layout. relevant only to Gaudi.
+ * the structure follows sync manager block layout. Obsolete.
  * @mon_pay_addrl: array of payload address low bits.
  * @mon_pay_addrh: array of payload address high bits.
  * @mon_pay_data: array of payload data.
···
         __le32 mon_status[DCORE_MON_REGS_SZ];
 };

-/* contains SM data for each SYNC_MNGR (relevant only to Gaudi) */
+/* contains SM data for each SYNC_MNGR (Obsolete) */
 struct cpucp_monitor_dump {
         struct dcore_monitor_regs_data sync_mngr_w_s;
         struct dcore_monitor_regs_data sync_mngr_e_s;
···
 config DRM_KMS_HELPER
         tristate
         depends on DRM
+        select FB_SYS_HELPERS_DEFERRED if DRM_FBDEV_EMULATION
         help
           CRTC helpers for KMS drivers.
···
         bool "Enable legacy fbdev support for your modesetting driver"
         depends on DRM_KMS_HELPER
         depends on FB=y || FB=DRM_KMS_HELPER
-        select FB_CFB_FILLRECT
-        select FB_CFB_COPYAREA
-        select FB_CFB_IMAGEBLIT
-        select FB_DEFERRED_IO
-        select FB_SYS_FOPS
-        select FB_SYS_FILLRECT
-        select FB_SYS_COPYAREA
-        select FB_SYS_IMAGEBLIT
         select FRAMEBUFFER_CONSOLE if !EXPERT
         select FRAMEBUFFER_CONSOLE_DETECT_PRIMARY if FRAMEBUFFER_CONSOLE
         default y
···
 config DRM_GEM_DMA_HELPER
         tristate
         depends on DRM
+        select FB_SYS_HELPERS if DRM_FBDEV_EMULATION
         help
           Choose this if you need the GEM DMA helper functions
···

 source "drivers/gpu/drm/atmel-hlcdc/Kconfig"

-source "drivers/gpu/drm/rcar-du/Kconfig"
-
-source "drivers/gpu/drm/shmobile/Kconfig"
+source "drivers/gpu/drm/renesas/Kconfig"

 source "drivers/gpu/drm/sun4i/Kconfig"
···
         tristate "DRM support for Marvell Armada SoCs"
         depends on DRM && HAVE_CLK && ARM && MMU
         select DRM_KMS_HELPER
+        select FB_IO_HELPERS if DRM_FBDEV_EMULATION
         help
           Support the "LCD" controllers found on the Marvell Armada 510
           devices.  There are two controllers on the device, each controller
···
         select DRM_KMS_HELPER
         select DRM_MIPI_DSI
         select DRM_PANEL_BRIDGE
+        select GENERIC_PHY_MIPI_DPHY
         help
           The Samsung MIPI DSIM bridge controller driver.
           This MIPI DSIM bridge can be found it on Exynos SoCs and
···
         struct gpio_desc *hpd_gpio;
         int hpd_irq;

-        struct regulator *dp_pwr;
+        struct regulator *supply;
         struct gpio_desc *ddc_en;
 };
···
         return IRQ_HANDLED;
 }

+static int display_connector_get_supply(struct platform_device *pdev,
+                                        struct display_connector *conn,
+                                        const char *name)
+{
+        conn->supply = devm_regulator_get_optional(&pdev->dev, name);
+
+        if (conn->supply == ERR_PTR(-ENODEV))
+                conn->supply = NULL;
+
+        return PTR_ERR_OR_ZERO(conn->supply);
+}
+
 static int display_connector_probe(struct platform_device *pdev)
 {
         struct display_connector *conn;
···
         if (type == DRM_MODE_CONNECTOR_DisplayPort) {
                 int ret;

-                conn->dp_pwr = devm_regulator_get_optional(&pdev->dev, "dp-pwr");
-
-                if (IS_ERR(conn->dp_pwr)) {
-                        ret = PTR_ERR(conn->dp_pwr);
-
-                        switch (ret) {
-                        case -ENODEV:
-                                conn->dp_pwr = NULL;
-                                break;
-
-                        case -EPROBE_DEFER:
-                                return -EPROBE_DEFER;
-
-                        default:
-                                dev_err(&pdev->dev, "failed to get DP PWR regulator: %d\n", ret);
-                                return ret;
-                        }
-                }
-
-                if (conn->dp_pwr) {
-                        ret = regulator_enable(conn->dp_pwr);
-                        if (ret) {
-                                dev_err(&pdev->dev, "failed to enable DP PWR regulator: %d\n", ret);
-                                return ret;
-                        }
-                }
+                ret = display_connector_get_supply(pdev, conn, "dp-pwr");
+                if (ret < 0)
+                        return dev_err_probe(&pdev->dev, ret, "failed to get DP PWR regulator\n");
         }

         /* enable DDC */
         if (type == DRM_MODE_CONNECTOR_HDMIA) {
+                int ret;
+
                 conn->ddc_en = devm_gpiod_get_optional(&pdev->dev, "ddc-en",
                                                        GPIOD_OUT_HIGH);

                 if (IS_ERR(conn->ddc_en)) {
                         dev_err(&pdev->dev, "Couldn't get ddc-en gpio\n");
                         return PTR_ERR(conn->ddc_en);
+                }
+
+                ret = display_connector_get_supply(pdev, conn, "hdmi-pwr");
+                if (ret < 0)
+                        return dev_err_probe(&pdev->dev, ret, "failed to get HDMI +5V Power regulator\n");
+        }
+
+        if (conn->supply) {
+                ret = regulator_enable(conn->supply);
+                if (ret) {
+                        dev_err(&pdev->dev, "failed to enable PWR regulator: %d\n", ret);
+                        return ret;
                 }
         }
···
         if (conn->ddc_en)
                 gpiod_set_value(conn->ddc_en, 0);

-        if (conn->dp_pwr)
-                regulator_disable(conn->dp_pwr);
+        if (conn->supply)
+                regulator_disable(conn->supply);

         drm_bridge_remove(&conn->bridge);
+5
drivers/gpu/drm/bridge/imx/Kconfig
···
 if ARCH_MXC || COMPILE_TEST

+config DRM_IMX_LDB_HELPER
+        tristate
+
 config DRM_IMX8QM_LDB
         tristate "Freescale i.MX8QM LVDS display bridge"
         depends on OF
         depends on COMMON_CLK
+        select DRM_IMX_LDB_HELPER
         select DRM_KMS_HELPER
         help
           Choose this to enable the internal LVDS Display Bridge(LDB) found in
···
         tristate "Freescale i.MX8QXP LVDS display bridge"
         depends on OF
         depends on COMMON_CLK
+        select DRM_IMX_LDB_HELPER
         select DRM_KMS_HELPER
         help
           Choose this to enable the internal LVDS Display Bridge(LDB) found in
···
  */

 #include <linux/delay.h>
+#include <linux/gpio/consumer.h>
 #include <linux/mod_devicetable.h>
 #include <linux/module.h>
 #include <linux/of_graph.h>
···
         struct drm_bridge bridge;
         struct regulator *regulator;
         struct drm_bridge *panel_bridge;
+        struct gpio_desc *reset_gpio;
         bool pre_enabled;
         int error;
 };
···
         ctx->pre_enabled = false;

+        if (ctx->reset_gpio)
+                gpiod_set_value_cansleep(ctx->reset_gpio, 0);
+
         ret = regulator_disable(ctx->regulator);
         if (ret < 0)
                 dev_err(ctx->dev, "error disabling regulators (%d)\n", ret);
···
         ret = regulator_enable(ctx->regulator);
         if (ret < 0)
                 dev_err(ctx->dev, "error enabling regulators (%d)\n", ret);
+
+        if (ctx->reset_gpio) {
+                gpiod_set_value_cansleep(ctx->reset_gpio, 1);
+                usleep_range(5000, 10000);
+        }

         ret = tc358762_init(ctx);
         if (ret < 0)
···
                 return PTR_ERR(panel_bridge);

         ctx->panel_bridge = panel_bridge;
+
+        /* Reset GPIO is optional */
+        ctx->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
+        if (IS_ERR(ctx->reset_gpio))
+                return PTR_ERR(ctx->reset_gpio);

         return 0;
 }
+195-2
drivers/gpu/drm/bridge/tc358767.c
···

 static bool tc_readable_reg(struct device *dev, unsigned int reg)
 {
-        return reg != SYSCTRL;
+        switch (reg) {
+        /* DSI D-PHY Layer */
+        case 0x004:
+        case 0x020:
+        case 0x024:
+        case 0x028:
+        case 0x02c:
+        case 0x030:
+        case 0x038:
+        case 0x040:
+        case 0x044:
+        case 0x048:
+        case 0x04c:
+        case 0x050:
+        case 0x054:
+        /* DSI PPI Layer */
+        case PPI_STARTPPI:
+        case 0x108:
+        case 0x110:
+        case PPI_LPTXTIMECNT:
+        case PPI_LANEENABLE:
+        case PPI_TX_RX_TA:
+        case 0x140:
+        case PPI_D0S_ATMR:
+        case PPI_D1S_ATMR:
+        case 0x14c:
+        case 0x150:
+        case PPI_D0S_CLRSIPOCOUNT:
+        case PPI_D1S_CLRSIPOCOUNT:
+        case PPI_D2S_CLRSIPOCOUNT:
+        case PPI_D3S_CLRSIPOCOUNT:
+        case 0x180:
+        case 0x184:
+        case 0x188:
+        case 0x18c:
+        case 0x190:
+        case 0x1a0:
+        case 0x1a4:
+        case 0x1a8:
+        case 0x1ac:
+        case 0x1b0:
+        case 0x1c0:
+        case 0x1c4:
+        case 0x1c8:
+        case 0x1cc:
+        case 0x1d0:
+        case 0x1e0:
+        case 0x1e4:
+        case 0x1f0:
+        case 0x1f4:
+        /* DSI Protocol Layer */
+        case DSI_STARTDSI:
+        case 0x208:
+        case DSI_LANEENABLE:
+        case 0x214:
+        case 0x218:
+        case 0x220:
+        case 0x224:
+        case 0x228:
+        case 0x230:
+        /* DSI General */
+        case 0x300:
+        /* DSI Application Layer */
+        case 0x400:
+        case 0x404:
+        /* DPI */
+        case DPIPXLFMT:
+        /* Parallel Output */
+        case POCTRL:
+        /* Video Path0 Configuration */
+        case VPCTRL0:
+        case HTIM01:
+        case HTIM02:
+        case VTIM01:
+        case VTIM02:
+        case VFUEN0:
+        /* System */
+        case TC_IDREG:
+        case 0x504:
+        case SYSSTAT:
+        case SYSRSTENB:
+        case SYSCTRL:
+        /* I2C */
+        case 0x520:
+        /* GPIO */
+        case GPIOM:
+        case GPIOC:
+        case GPIOO:
+        case GPIOI:
+        /* Interrupt */
+        case INTCTL_G:
+        case INTSTS_G:
+        case 0x570:
+        case 0x574:
+        case INT_GP0_LCNT:
+        case INT_GP1_LCNT:
+        /* DisplayPort Control */
+        case DP0CTL:
+        /* DisplayPort Clock */
+        case DP0_VIDMNGEN0:
+        case DP0_VIDMNGEN1:
+        case DP0_VMNGENSTATUS:
+        case 0x628:
+        case 0x62c:
+        case 0x630:
+        /* DisplayPort Main Channel */
+        case DP0_SECSAMPLE:
+        case DP0_VIDSYNCDELAY:
+        case DP0_TOTALVAL:
+        case DP0_STARTVAL:
+        case DP0_ACTIVEVAL:
+        case DP0_SYNCVAL:
+        case DP0_MISC:
+        /* DisplayPort Aux Channel */
+        case DP0_AUXCFG0:
+        case DP0_AUXCFG1:
+        case DP0_AUXADDR:
+        case 0x66c:
+        case 0x670:
+        case 0x674:
+        case 0x678:
+        case 0x67c:
+        case 0x680:
+        case 0x684:
+        case 0x688:
+        case DP0_AUXSTATUS:
+        case DP0_AUXI2CADR:
+        /* DisplayPort Link Training */
+        case DP0_SRCCTRL:
+        case DP0_LTSTAT:
+        case DP0_SNKLTCHGREQ:
+        case DP0_LTLOOPCTRL:
+        case DP0_SNKLTCTRL:
+        case 0x6e8:
+        case 0x6ec:
+        case 0x6f0:
+        case 0x6f4:
+        /* DisplayPort Audio */
+        case 0x700:
+        case 0x704:
+        case 0x708:
+        case 0x70c:
+        case 0x710:
+        case 0x714:
+        case 0x718:
+        case 0x71c:
+        case 0x720:
+        /* DisplayPort Source Control */
+        case DP1_SRCCTRL:
+        /* DisplayPort PHY */
+        case DP_PHY_CTRL:
+        case 0x810:
+        case 0x814:
+        case 0x820:
+        case 0x840:
+        /* I2S */
+        case 0x880:
+        case 0x888:
+        case 0x88c:
+        case 0x890:
+        case 0x894:
+        case 0x898:
+        case 0x89c:
+        case 0x8a0:
+        case 0x8a4:
+        case 0x8a8:
+        case 0x8ac:
+        case 0x8b0:
+        case 0x8b4:
+        /* PLL */
+        case DP0_PLLCTRL:
+        case DP1_PLLCTRL:
+        case PXL_PLLCTRL:
+        case PXL_PLLPARAM:
+        case SYS_PLLPARAM:
+        /* HDCP */
+        case 0x980:
+        case 0x984:
+        case 0x988:
+        case 0x98c:
+        case 0x990:
+        case 0x994:
+        case 0x998:
+        case 0x99c:
+        case 0x9a0:
+        case 0x9a4:
+        case 0x9a8:
+        case 0x9ac:
+        /* Debug */
+        case TSTCTL:
+        case PLL_DBG:
+                return true;
+        }
+        return false;
 }

 static const struct regmap_range tc_volatile_ranges[] = {
···
                 .of_match_table = tc358767_of_ids,
         },
         .id_table = tc358767_i2c_ids,
-        .probe_new = tc_probe,
+        .probe = tc_probe,
         .remove = tc_remove,
 };
 module_i2c_driver(tc358767_driver);
···
 #include <drm/drm_client.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_file.h>
+#include <drm/drm_gem.h>
 #include <drm/drm_print.h>

 #include "drm_crtc_internal.h"
···
  */
 struct drm_file *drm_file_alloc(struct drm_minor *minor)
 {
+        static atomic64_t ident = ATOMIC_INIT(0);
         struct drm_device *dev = minor->dev;
         struct drm_file *file;
         int ret;
···
         if (!file)
                 return ERR_PTR(-ENOMEM);

+        /* Get a unique identifier for fdinfo: */
+        file->client_id = atomic64_inc_return(&ident);
         file->pid = get_pid(task_tgid(current));
         file->minor = minor;
···
         spin_unlock_irqrestore(&dev->event_lock, irqflags);
 }
 EXPORT_SYMBOL(drm_send_event);
+
+static void print_size(struct drm_printer *p, const char *stat,
+                       const char *region, u64 sz)
+{
+        const char *units[] = {"", " KiB", " MiB"};
+        unsigned u;
+
+        for (u = 0; u < ARRAY_SIZE(units) - 1; u++) {
+                if (sz < SZ_1K)
+                        break;
+                sz = div_u64(sz, SZ_1K);
+        }
+
+        drm_printf(p, "drm-%s-%s:\t%llu%s\n", stat, region, sz, units[u]);
+}
+
+/**
+ * drm_print_memory_stats - A helper to print memory stats
+ * @p: The printer to print output to
+ * @stats: The collected memory stats
+ * @supported_status: Bitmask of optional stats which are available
+ * @region: The memory region
+ *
+ */
+void drm_print_memory_stats(struct drm_printer *p,
+                            const struct drm_memory_stats *stats,
+                            enum drm_gem_object_status supported_status,
+                            const char *region)
+{
+        print_size(p, "total", region, stats->private + stats->shared);
+        print_size(p, "shared", region, stats->shared);
+        print_size(p, "active", region, stats->active);
+
+        if (supported_status & DRM_GEM_OBJECT_RESIDENT)
+                print_size(p, "resident", region, stats->resident);
+
+        if (supported_status & DRM_GEM_OBJECT_PURGEABLE)
+                print_size(p, "purgeable", region, stats->purgeable);
+}
+EXPORT_SYMBOL(drm_print_memory_stats);
+
+/**
+ * drm_show_memory_stats - Helper to collect and show standard fdinfo memory stats
+ * @p: the printer to print output to
+ * @file: the DRM file
+ *
+ * Helper to iterate over GEM objects with a handle allocated in the specified
+ * file.
+ */
+void drm_show_memory_stats(struct drm_printer *p, struct drm_file *file)
+{
+        struct drm_gem_object *obj;
+        struct drm_memory_stats status = {};
+        enum drm_gem_object_status supported_status;
+        int id;
+
+        spin_lock(&file->table_lock);
+        idr_for_each_entry (&file->object_idr, obj, id) {
+                enum drm_gem_object_status s = 0;
+
+                if (obj->funcs && obj->funcs->status) {
+                        s = obj->funcs->status(obj);
+                        supported_status = DRM_GEM_OBJECT_RESIDENT |
+                                        DRM_GEM_OBJECT_PURGEABLE;
+                }
+
+                if (obj->handle_count > 1) {
+                        status.shared += obj->size;
+                } else {
+                        status.private += obj->size;
+                }
+
+                if (s & DRM_GEM_OBJECT_RESIDENT) {
+                        status.resident += obj->size;
+                } else {
+                        /* If already purged or not yet backed by pages, don't
+                         * count it as purgeable:
+                         */
+                        s &= ~DRM_GEM_OBJECT_PURGEABLE;
+                }
+
+                if (!dma_resv_test_signaled(obj->resv, dma_resv_usage_rw(true))) {
+                        status.active += obj->size;
+
+                        /* If still active, don't count as purgeable: */
+                        s &= ~DRM_GEM_OBJECT_PURGEABLE;
+                }
+
+                if (s & DRM_GEM_OBJECT_PURGEABLE)
+                        status.purgeable += obj->size;
+        }
+        spin_unlock(&file->table_lock);
+
+        drm_print_memory_stats(p, &status, supported_status, "memory");
+}
+EXPORT_SYMBOL(drm_show_memory_stats);
+
+/**
+ * drm_show_fdinfo - helper for drm file fops
+ * @m: output stream
+ * @f: the device file instance
+ *
+ * Helper to implement fdinfo, for userspace to query usage stats, etc, of a
+ * process using the GPU. See also &drm_driver.show_fdinfo.
+ *
+ * For text output format description please see Documentation/gpu/drm-usage-stats.rst
+ */
+void drm_show_fdinfo(struct seq_file *m, struct file *f)
+{
+        struct drm_file *file = f->private_data;
+        struct drm_device *dev = file->minor->dev;
+        struct drm_printer p = drm_seq_file_printer(m);
+
+        drm_printf(&p, "drm-driver:\t%s\n", dev->driver->name);
+        drm_printf(&p, "drm-client-id:\t%llu\n", file->client_id);
+
+        if (dev_is_pci(dev->dev)) {
+                struct pci_dev *pdev = to_pci_dev(dev->dev);
+
+                drm_printf(&p, "drm-pdev:\t%04x:%02x:%02x.%d\n",
+                           pci_domain_nr(pdev->bus), pdev->bus->number,
+                           PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
+        }
+
+        if (dev->driver->show_fdinfo)
+                dev->driver->show_fdinfo(&p, file);
+}
+EXPORT_SYMBOL(drm_show_fdinfo);

 /**
  * mock_drm_getfile - Create a new struct file for the drm device
drivers/gpu/drm/exynos/Kconfig | +1
···
 	select DRM_DISPLAY_HELPER if DRM_EXYNOS_DP
 	select DRM_KMS_HELPER
 	select VIDEOMODE_HELPERS
+	select FB_IO_HELPERS if DRM_FBDEV_EMULATION
 	select SND_SOC_HDMI_CODEC if SND_SOC
 	help
 	  Choose this option if you have a Samsung SoC Exynos chipset.
···
 	tristate "Intel GMA500/600/3600/3650 KMS Framebuffer"
 	depends on DRM && PCI && X86 && MMU
 	select DRM_KMS_HELPER
+	select FB_IO_HELPERS if DRM_FBDEV_EMULATION
 	select I2C
 	select I2C_ALGOBIT
 	# GMA500 depends on ACPI_VIDEO when ACPI is enabled, just like i915
···
 config DRM_I915_DEBUG_GUC
 	bool "Enable additional driver debugging for GuC"
 	depends on DRM_I915
+	select STACKDEPOT
 	default n
 	help
 	  Choose this option to turn on extra driver debugging that may affect
···
 	unsigned int n_placements;
 	unsigned int placement_mask;
 	unsigned long flags;
+	unsigned int pat_index;
 };

 static void repr_placements(char *buf, size_t size,
···
 	return 0;
 }

+static int ext_set_pat(struct i915_user_extension __user *base, void *data)
+{
+	struct create_ext *ext_data = data;
+	struct drm_i915_private *i915 = ext_data->i915;
+	struct drm_i915_gem_create_ext_set_pat ext;
+	unsigned int max_pat_index;
+
+	BUILD_BUG_ON(sizeof(struct drm_i915_gem_create_ext_set_pat) !=
+		     offsetofend(struct drm_i915_gem_create_ext_set_pat, rsvd));
+
+	/* Limiting the extension only to Meteor Lake */
+	if (!IS_METEORLAKE(i915))
+		return -ENODEV;
+
+	if (copy_from_user(&ext, base, sizeof(ext)))
+		return -EFAULT;
+
+	max_pat_index = INTEL_INFO(i915)->max_pat_index;
+
+	if (ext.pat_index > max_pat_index) {
+		drm_dbg(&i915->drm, "PAT index is invalid: %u\n",
+			ext.pat_index);
+		return -EINVAL;
+	}
+
+	ext_data->pat_index = ext.pat_index;
+
+	return 0;
+}
+
 static const i915_user_extension_fn create_extensions[] = {
 	[I915_GEM_CREATE_EXT_MEMORY_REGIONS] = ext_set_placements,
 	[I915_GEM_CREATE_EXT_PROTECTED_CONTENT] = ext_set_protected,
+	[I915_GEM_CREATE_EXT_SET_PAT] = ext_set_pat,
 };

+#define PAT_INDEX_NOT_SET 0xffff
 /**
  * i915_gem_create_ext_ioctl - Creates a new mm object and returns a handle to it.
  * @dev: drm device pointer
···
 	if (args->flags & ~I915_GEM_CREATE_EXT_FLAG_NEEDS_CPU_ACCESS)
 		return -EINVAL;

+	ext_data.pat_index = PAT_INDEX_NOT_SET;
 	ret = i915_user_extensions(u64_to_user_ptr(args->extensions),
 				   create_extensions,
 				   ARRAY_SIZE(create_extensions),
···
 			      ext_data.flags);
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
+
+	if (ext_data.pat_index != PAT_INDEX_NOT_SET) {
+		i915_gem_object_set_pat_index(obj, ext_data.pat_index);
+		/* Mark pat_index is set by UMD */
+		obj->pat_set_by_user = true;
+	}

 	return i915_gem_publish(obj, file, &args->size, &args->handle);
 }
drivers/gpu/drm/i915/gem/i915_gem_object.c | +6
···
 		return false;

 	/*
+	 * Always flush cache for UMD objects at creation time.
+	 */
+	if (obj->pat_set_by_user)
+		return true;
+
+	/*
 	 * EHL and JSL add the 'Bypass LLC' MOCS entry, which should make it
 	 * possible for userspace to bypass the GTT caching bits set by the
 	 * kernel, as per the given object cache_level. This is troublesome
···
 	struct drm_i915_gem_object *obj;
 	struct i915_vma *vma;
 	enum intel_engine_id id;
-	int err = -ENOMEM;
 	u32 *map;
+	int err;

 	/*
 	 * Verify that even without HAS_LOGICAL_RING_PREEMPTION, we can
···
 	 */

 	ctx_hi = kernel_context(gt->i915, NULL);
-	if (!ctx_hi)
-		return -ENOMEM;
+	if (IS_ERR(ctx_hi))
+		return PTR_ERR(ctx_hi);
+
 	ctx_hi->sched.priority = I915_CONTEXT_MAX_USER_PRIORITY;

 	ctx_lo = kernel_context(gt->i915, NULL);
-	if (!ctx_lo)
+	if (IS_ERR(ctx_lo)) {
+		err = PTR_ERR(ctx_lo);
 		goto err_ctx_hi;
+	}
+
 	ctx_lo->sched.priority = I915_CONTEXT_MIN_USER_PRIORITY;

 	obj = i915_gem_object_create_internal(gt->i915, PAGE_SIZE);
drivers/gpu/drm/i915/gt/selftest_tlb.c | +9 -2
···
 static struct drm_i915_gem_object *create_lmem(struct intel_gt *gt)
 {
+	struct intel_memory_region *mr = gt->i915->mm.regions[INTEL_REGION_LMEM_0];
+	resource_size_t size = SZ_1G;
+
 	/*
 	 * Allocation of largest possible page size allows to test all types
-	 * of pages.
+	 * of pages. To succeed with both allocations, especially in case of Small
+	 * BAR, try to allocate no more than quarter of mappable memory.
 	 */
-	return i915_gem_object_create_lmem(gt->i915, SZ_1G, I915_BO_ALLOC_CONTIGUOUS);
+	if (mr && size > mr->io_size / 4)
+		size = mr->io_size / 4;
+
+	return i915_gem_object_create_lmem(gt->i915, size, I915_BO_ALLOC_CONTIGUOUS);
 }

 static struct drm_i915_gem_object *create_smem(struct intel_gt *gt)
···
 	if (actions & GSC_ACTION_FW_LOAD) {
 		ret = intel_gsc_uc_fw_upload(gsc);
-		if (ret == -EEXIST) /* skip proxy if not a new load */
-			actions &= ~GSC_ACTION_FW_LOAD;
-		else if (ret)
+		if (!ret)
+			/* setup proxy on a new load */
+			actions |= GSC_ACTION_SW_PROXY;
+		else if (ret != -EEXIST)
 			goto out_put;
+
+		/*
+		 * The HuC auth can be done both before or after the proxy init;
+		 * if done after, a proxy request will be issued and must be
+		 * serviced before the authentication can complete.
+		 * Since this worker also handles proxy requests, we can't
+		 * perform an action that requires the proxy from within it and
+		 * then stall waiting for it, because we'd be blocking the
+		 * service path. Therefore, it is easier for us to load HuC
+		 * first and do proxy later. The GSC will ack the HuC auth and
+		 * then send the HuC proxy request as part of the proxy init
+		 * flow.
+		 * Note that we can only do the GSC auth if the GuC auth was
+		 * successful.
+		 */
+		if (intel_uc_uses_huc(&gt->uc) &&
+		    intel_huc_is_authenticated(&gt->uc.huc, INTEL_HUC_AUTH_BY_GUC))
+			intel_huc_auth(&gt->uc.huc, INTEL_HUC_AUTH_BY_GSC);
 	}

-	if (actions & (GSC_ACTION_FW_LOAD | GSC_ACTION_SW_PROXY)) {
+	if (actions & GSC_ACTION_SW_PROXY) {
 		if (!intel_gsc_uc_fw_init_done(gsc)) {
 			gt_err(gt, "Proxy request received with GSC not loaded!\n");
 			goto out_put;
···
 {
 	struct intel_gt *gt = gsc_uc_to_gt(gsc);

-	intel_uc_fw_init_early(&gsc->fw, INTEL_UC_FW_TYPE_GSC);
+	/*
+	 * GSC FW needs to be copied to a dedicated memory allocation for
+	 * loading (see gsc->local), so we don't need to GGTT map the FW image
+	 * itself into GGTT.
+	 */
+	intel_uc_fw_init_early(&gsc->fw, INTEL_UC_FW_TYPE_GSC, false);
 	INIT_WORK(&gsc->work, gsc_work);

 	/* we can arrive here from i915_driver_early_probe for primary
···
 #include <linux/types.h>

 #include "gt/intel_gt.h"
-#include "gt/intel_gt_print.h"
 #include "intel_guc_reg.h"
 #include "intel_huc.h"
+#include "intel_huc_print.h"
 #include "i915_drv.h"
+#include "i915_reg.h"
+#include "pxp/intel_pxp_cmd_interface_43.h"

 #include <linux/device/bus.h>
 #include <linux/mei_aux.h>
-
-#define huc_printk(_huc, _level, _fmt, ...) \
-	gt_##_level(huc_to_gt(_huc), "HuC: " _fmt, ##__VA_ARGS__)
-#define huc_err(_huc, _fmt, ...)	huc_printk((_huc), err, _fmt, ##__VA_ARGS__)
-#define huc_warn(_huc, _fmt, ...)	huc_printk((_huc), warn, _fmt, ##__VA_ARGS__)
-#define huc_notice(_huc, _fmt, ...)	huc_printk((_huc), notice, _fmt, ##__VA_ARGS__)
-#define huc_info(_huc, _fmt, ...)	huc_printk((_huc), info, _fmt, ##__VA_ARGS__)
-#define huc_dbg(_huc, _fmt, ...)	huc_printk((_huc), dbg, _fmt, ##__VA_ARGS__)
-#define huc_probe_error(_huc, _fmt, ...) huc_printk((_huc), probe_error, _fmt, ##__VA_ARGS__)

 /**
  * DOC: HuC
···
  * capabilities by adding HuC specific commands to batch buffers.
  *
  * The kernel driver is only responsible for loading the HuC firmware and
- * triggering its security authentication, which is performed by the GuC on
- * older platforms and by the GSC on newer ones. For the GuC to correctly
- * perform the authentication, the HuC binary must be loaded before the GuC one.
+ * triggering its security authentication. This is done differently depending
+ * on the platform:
+ * - older platforms (from Gen9 to most Gen12s): the load is performed via DMA
+ *   and the authentication via GuC
+ * - DG2: load and authentication are both performed via GSC.
+ * - MTL and newer platforms: the load is performed via DMA (same as with
+ *   not-DG2 older platforms), while the authentication is done in 2 steps,
+ *   a first auth for clear-media workloads via GuC and a second one for all
+ *   workloads via GSC.
+ * On platforms where the GuC does the authentication, to correctly do so the
+ * HuC binary must be loaded before the GuC one.
  * Loading the HuC is optional; however, not using the HuC might negatively
  * impact power usage and/or performance of media workloads, depending on the
  * use-cases.
  * HuC must be reloaded on events that cause the WOPCM to lose its contents
- * (S3/S4, FLR); GuC-authenticated HuC must also be reloaded on GuC/GT reset,
- * while GSC-managed HuC will survive that.
+ * (S3/S4, FLR); on older platforms the HuC must also be reloaded on GuC/GT
+ * reset, while on newer ones it will survive that.
  *
  * See https://github.com/intel/media-driver for the latest details on HuC
  * functionality.
···
 {
 	struct intel_huc *huc = container_of(hrtimer, struct intel_huc, delayed_load.timer);

-	if (!intel_huc_is_authenticated(huc)) {
+	if (!intel_huc_is_authenticated(huc, INTEL_HUC_AUTH_BY_GSC)) {
 		if (huc->delayed_load.status == INTEL_HUC_WAITING_ON_GSC)
 			huc_notice(huc, "timed out waiting for MEI GSC\n");
 		else if (huc->delayed_load.status == INTEL_HUC_WAITING_ON_PXP)
···
 {
 	ktime_t delay;

-	GEM_BUG_ON(intel_huc_is_authenticated(huc));
+	GEM_BUG_ON(intel_huc_is_authenticated(huc, INTEL_HUC_AUTH_BY_GSC));

 	/*
 	 * On resume we don't have to wait for MEI-GSC to be re-probed, but we
···
 	struct drm_i915_private *i915 = huc_to_gt(huc)->i915;
 	struct intel_gt *gt = huc_to_gt(huc);

-	intel_uc_fw_init_early(&huc->fw, INTEL_UC_FW_TYPE_HUC);
+	intel_uc_fw_init_early(&huc->fw, INTEL_UC_FW_TYPE_HUC, true);

 	/*
 	 * we always init the fence as already completed, even if HuC is not
···
 	}

 	if (GRAPHICS_VER(i915) >= 11) {
-		huc->status.reg = GEN11_HUC_KERNEL_LOAD_INFO;
-		huc->status.mask = HUC_LOAD_SUCCESSFUL;
-		huc->status.value = HUC_LOAD_SUCCESSFUL;
+		huc->status[INTEL_HUC_AUTH_BY_GUC].reg = GEN11_HUC_KERNEL_LOAD_INFO;
+		huc->status[INTEL_HUC_AUTH_BY_GUC].mask = HUC_LOAD_SUCCESSFUL;
+		huc->status[INTEL_HUC_AUTH_BY_GUC].value = HUC_LOAD_SUCCESSFUL;
 	} else {
-		huc->status.reg = HUC_STATUS2;
-		huc->status.mask = HUC_FW_VERIFIED;
-		huc->status.value = HUC_FW_VERIFIED;
+		huc->status[INTEL_HUC_AUTH_BY_GUC].reg = HUC_STATUS2;
+		huc->status[INTEL_HUC_AUTH_BY_GUC].mask = HUC_FW_VERIFIED;
+		huc->status[INTEL_HUC_AUTH_BY_GUC].value = HUC_FW_VERIFIED;
+	}
+
+	if (IS_DG2(i915)) {
+		huc->status[INTEL_HUC_AUTH_BY_GSC].reg = GEN11_HUC_KERNEL_LOAD_INFO;
+		huc->status[INTEL_HUC_AUTH_BY_GSC].mask = HUC_LOAD_SUCCESSFUL;
+		huc->status[INTEL_HUC_AUTH_BY_GSC].value = HUC_LOAD_SUCCESSFUL;
+	} else {
+		huc->status[INTEL_HUC_AUTH_BY_GSC].reg = HECI_FWSTS5(MTL_GSC_HECI1_BASE);
+		huc->status[INTEL_HUC_AUTH_BY_GSC].mask = HECI_FWSTS5_HUC_AUTH_DONE;
+		huc->status[INTEL_HUC_AUTH_BY_GSC].value = HECI_FWSTS5_HUC_AUTH_DONE;
 	}
 }
···
 static int check_huc_loading_mode(struct intel_huc *huc)
 {
 	struct intel_gt *gt = huc_to_gt(huc);
-	bool fw_needs_gsc = intel_huc_is_loaded_by_gsc(huc);
-	bool hw_uses_gsc = false;
+	bool gsc_enabled = huc->fw.has_gsc_headers;

 	/*
 	 * The fuse for HuC load via GSC is only valid on platforms that have
 	 * GuC deprivilege.
 	 */
 	if (HAS_GUC_DEPRIVILEGE(gt->i915))
-		hw_uses_gsc = intel_uncore_read(gt->uncore, GUC_SHIM_CONTROL2) &
-			      GSC_LOADS_HUC;
+		huc->loaded_via_gsc = intel_uncore_read(gt->uncore, GUC_SHIM_CONTROL2) &
+				      GSC_LOADS_HUC;

-	if (fw_needs_gsc != hw_uses_gsc) {
-		huc_err(huc, "mismatch between FW (%s) and HW (%s) load modes\n",
-			HUC_LOAD_MODE_STRING(fw_needs_gsc), HUC_LOAD_MODE_STRING(hw_uses_gsc));
+	if (huc->loaded_via_gsc && !gsc_enabled) {
+		huc_err(huc, "HW requires a GSC-enabled blob, but we found a legacy one\n");
 		return -ENOEXEC;
 	}

-	/* make sure we can access the GSC via the mei driver if we need it */
-	if (!(IS_ENABLED(CONFIG_INTEL_MEI_PXP) && IS_ENABLED(CONFIG_INTEL_MEI_GSC)) &&
-	    fw_needs_gsc) {
-		huc_info(huc, "can't load due to missing MEI modules\n");
-		return -EIO;
+	/*
+	 * On newer platforms we have GSC-enabled binaries but we load the HuC
+	 * via DMA. To do so we need to find the location of the legacy-style
+	 * binary inside the GSC-enabled one, which we do at fetch time. Make
+	 * sure that we were able to do so if the fuse says we need to load via
+	 * DMA and the binary is GSC-enabled.
+	 */
+	if (!huc->loaded_via_gsc && gsc_enabled && !huc->fw.dma_start_offset) {
+		huc_err(huc, "HW in DMA mode, but we have an incompatible GSC-enabled blob\n");
+		return -ENOEXEC;
 	}

-	huc_dbg(huc, "loaded by GSC = %s\n", str_yes_no(fw_needs_gsc));
+	/*
+	 * If the HuC is loaded via GSC, we need to be able to access the GSC.
+	 * On DG2 this is done via the mei components, while on newer platforms
+	 * it is done via the GSCCS.
+	 */
+	if (huc->loaded_via_gsc) {
+		if (IS_DG2(gt->i915)) {
+			if (!IS_ENABLED(CONFIG_INTEL_MEI_PXP) ||
+			    !IS_ENABLED(CONFIG_INTEL_MEI_GSC)) {
+				huc_info(huc, "can't load due to missing mei modules\n");
+				return -EIO;
+			}
+		} else {
+			if (!HAS_ENGINE(gt, GSC0)) {
+				huc_info(huc, "can't load due to missing GSCCS\n");
+				return -EIO;
+			}
+		}
+	}
+
+	huc_dbg(huc, "loaded by GSC = %s\n", str_yes_no(huc->loaded_via_gsc));

 	return 0;
 }

 int intel_huc_init(struct intel_huc *huc)
 {
+	struct intel_gt *gt = huc_to_gt(huc);
 	int err;

 	err = check_huc_loading_mode(huc);
 	if (err)
 		goto out;

+	if (HAS_ENGINE(gt, GSC0)) {
+		struct i915_vma *vma;
+
+		vma = intel_guc_allocate_vma(&gt->uc.guc, PXP43_HUC_AUTH_INOUT_SIZE * 2);
+		if (IS_ERR(vma)) {
+			err = PTR_ERR(vma);
+			huc_info(huc, "Failed to allocate heci pkt\n");
+			goto out;
+		}
+
+		huc->heci_pkt = vma;
+	}
+
 	err = intel_uc_fw_init(&huc->fw);
 	if (err)
-		goto out;
+		goto out_pkt;

 	intel_uc_fw_change_status(&huc->fw, INTEL_UC_FIRMWARE_LOADABLE);

 	return 0;

+out_pkt:
+	if (huc->heci_pkt)
+		i915_vma_unpin_and_release(&huc->heci_pkt, 0);
 out:
 	intel_uc_fw_change_status(&huc->fw, INTEL_UC_FIRMWARE_INIT_FAIL);
 	huc_info(huc, "initialization failed %pe\n", ERR_PTR(err));
···
 	 * even if HuC loading is off.
 	 */
 	delayed_huc_load_fini(huc);
+
+	if (huc->heci_pkt)
+		i915_vma_unpin_and_release(&huc->heci_pkt, 0);

 	if (intel_uc_fw_is_loadable(&huc->fw))
 		intel_uc_fw_fini(&huc->fw);
···
 	delayed_huc_load_complete(huc);
 }

-int intel_huc_wait_for_auth_complete(struct intel_huc *huc)
+static const char *auth_mode_string(struct intel_huc *huc,
+				    enum intel_huc_authentication_type type)
+{
+	bool partial = huc->fw.has_gsc_headers && type == INTEL_HUC_AUTH_BY_GUC;
+
+	return partial ? "clear media" : "all workloads";
+}
+
+int intel_huc_wait_for_auth_complete(struct intel_huc *huc,
+				     enum intel_huc_authentication_type type)
 {
 	struct intel_gt *gt = huc_to_gt(huc);
 	int ret;

 	ret = __intel_wait_for_register(gt->uncore,
-					huc->status.reg,
-					huc->status.mask,
-					huc->status.value,
+					huc->status[type].reg,
+					huc->status[type].mask,
+					huc->status[type].value,
 					2, 50, NULL);

 	/* mark the load process as complete even if the wait failed */
 	delayed_huc_load_complete(huc);

 	if (ret) {
-		huc_err(huc, "firmware not verified %pe\n", ERR_PTR(ret));
+		huc_err(huc, "firmware not verified for %s: %pe\n",
+			auth_mode_string(huc, type), ERR_PTR(ret));
 		intel_uc_fw_change_status(&huc->fw, INTEL_UC_FIRMWARE_LOAD_FAIL);
 		return ret;
 	}

 	intel_uc_fw_change_status(&huc->fw, INTEL_UC_FIRMWARE_RUNNING);
-	huc_info(huc, "authenticated!\n");
+	huc_info(huc, "authenticated for %s\n", auth_mode_string(huc, type));
 	return 0;
 }

 /**
  * intel_huc_auth() - Authenticate HuC uCode
  * @huc: intel_huc structure
+ * @type: authentication type (via GuC or via GSC)
  *
  * Called after HuC and GuC firmware loading during intel_uc_init_hw().
  *
···
  * passing the offset of the RSA signature to intel_guc_auth_huc(). It then
  * waits for up to 50ms for firmware verification ACK.
  */
-int intel_huc_auth(struct intel_huc *huc)
+int intel_huc_auth(struct intel_huc *huc, enum intel_huc_authentication_type type)
 {
 	struct intel_gt *gt = huc_to_gt(huc);
 	struct intel_guc *guc = &gt->uc.guc;
···
 	if (!intel_uc_fw_is_loaded(&huc->fw))
 		return -ENOEXEC;

-	/* GSC will do the auth */
+	/* GSC will do the auth with the load */
 	if (intel_huc_is_loaded_by_gsc(huc))
 		return -ENODEV;
+
+	if (intel_huc_is_authenticated(huc, type))
+		return -EEXIST;

 	ret = i915_inject_probe_error(gt->i915, -ENXIO);
 	if (ret)
 		goto fail;

-	GEM_BUG_ON(intel_uc_fw_is_running(&huc->fw));
-
-	ret = intel_guc_auth_huc(guc, intel_guc_ggtt_offset(guc, huc->fw.rsa_data));
-	if (ret) {
-		huc_err(huc, "authentication by GuC failed %pe\n", ERR_PTR(ret));
-		goto fail;
+	switch (type) {
+	case INTEL_HUC_AUTH_BY_GUC:
+		ret = intel_guc_auth_huc(guc, intel_guc_ggtt_offset(guc, huc->fw.rsa_data));
+		break;
+	case INTEL_HUC_AUTH_BY_GSC:
+		ret = intel_huc_fw_auth_via_gsccs(huc);
+		break;
+	default:
+		MISSING_CASE(type);
+		ret = -EINVAL;
 	}
+	if (ret)
+		goto fail;

 	/* Check authentication status, it should be done by now */
-	ret = intel_huc_wait_for_auth_complete(huc);
+	ret = intel_huc_wait_for_auth_complete(huc, type);
 	if (ret)
 		goto fail;

 	return 0;

 fail:
-	huc_probe_error(huc, "authentication failed %pe\n", ERR_PTR(ret));
+	huc_probe_error(huc, "%s authentication failed %pe\n",
+			auth_mode_string(huc, type), ERR_PTR(ret));
 	return ret;
 }

-bool intel_huc_is_authenticated(struct intel_huc *huc)
+bool intel_huc_is_authenticated(struct intel_huc *huc,
+				enum intel_huc_authentication_type type)
 {
 	struct intel_gt *gt = huc_to_gt(huc);
 	intel_wakeref_t wakeref;
 	u32 status = 0;

 	with_intel_runtime_pm(gt->uncore->rpm, wakeref)
-		status = intel_uncore_read(gt->uncore, huc->status.reg);
+		status = intel_uncore_read(gt->uncore, huc->status[type].reg);

-	return (status & huc->status.mask) == huc->status.value;
+	return (status & huc->status[type].mask) == huc->status[type].value;
+}
+
+static bool huc_is_fully_authenticated(struct intel_huc *huc)
+{
+	struct intel_uc_fw *huc_fw = &huc->fw;
+
+	if (!huc_fw->has_gsc_headers)
+		return intel_huc_is_authenticated(huc, INTEL_HUC_AUTH_BY_GUC);
+	else if (intel_huc_is_loaded_by_gsc(huc) || HAS_ENGINE(huc_to_gt(huc), GSC0))
+		return intel_huc_is_authenticated(huc, INTEL_HUC_AUTH_BY_GSC);
+	else
+		return false;
 }

 /**
···
  */
 int intel_huc_check_status(struct intel_huc *huc)
 {
-	switch (__intel_uc_fw_status(&huc->fw)) {
+	struct intel_uc_fw *huc_fw = &huc->fw;
+
+	switch (__intel_uc_fw_status(huc_fw)) {
 	case INTEL_UC_FIRMWARE_NOT_SUPPORTED:
 		return -ENODEV;
 	case INTEL_UC_FIRMWARE_DISABLED:
···
 		break;
 	}

-	return intel_huc_is_authenticated(huc);
+	/*
+	 * GSC-enabled binaries loaded via DMA are first partially
+	 * authenticated by GuC and then fully authenticated by GSC
+	 */
+	if (huc_is_fully_authenticated(huc))
+		return 1; /* full auth */
+	else if (huc_fw->has_gsc_headers && !intel_huc_is_loaded_by_gsc(huc) &&
+		 intel_huc_is_authenticated(huc, INTEL_HUC_AUTH_BY_GUC))
+		return 2; /* clear media only */
+	else
+		return 0;
 }

 static bool huc_has_delayed_load(struct intel_huc *huc)
···
 	if (!intel_uc_fw_is_loadable(&huc->fw))
 		return;

-	if (intel_huc_is_authenticated(huc))
+	if (!huc->fw.has_gsc_headers)
+		return;
+
+	if (huc_is_fully_authenticated(huc))
 		intel_uc_fw_change_status(&huc->fw,
 					  INTEL_UC_FIRMWARE_RUNNING);
 	else if (huc_has_delayed_load(huc))
···
 	with_intel_runtime_pm(gt->uncore->rpm, wakeref)
 		drm_printf(p, "HuC status: 0x%08x\n",
-			   intel_uncore_read(gt->uncore, huc->status.reg));
+			   intel_uncore_read(gt->uncore, huc->status[INTEL_HUC_AUTH_BY_GUC].reg));
 }
···
 	struct drm_i915_gem_object *obj;

 	/**
-	 * @dummy: A vma used in binding the uc fw to ggtt. We can't define this
-	 * vma on the stack as it can lead to a stack overflow, so we define it
-	 * here. Safe to have 1 copy per uc fw because the binding is single
-	 * threaded as it done during driver load (inherently single threaded)
-	 * or during a GT reset (mutex guarantees single threaded).
+	 * @needs_ggtt_mapping: indicates whether the fw object needs to be
+	 * pinned to ggtt. If true, the fw is pinned at init time and unpinned
+	 * during driver unload.
 	 */
-	struct i915_vma_resource dummy;
+	bool needs_ggtt_mapping;
+
+	/**
+	 * @vma_res: A vma resource used in binding the uc fw to ggtt. The fw is
+	 * pinned in a reserved area of the ggtt (above the maximum address
+	 * usable by GuC); therefore, we can't use the normal vma functions to
+	 * do the pinning and we instead use this resource to do so.
+	 */
+	struct i915_vma_resource vma_res;
 	struct i915_vma *rsa_data;

 	u32 rsa_size;
 	u32 ucode_size;
 	u32 private_data_size;

-	bool loaded_via_gsc;
+	u32 dma_start_offset;
+
+	bool has_gsc_headers;
 };

 /*
···
 }

 void intel_uc_fw_init_early(struct intel_uc_fw *uc_fw,
-			    enum intel_uc_fw_type type);
+			    enum intel_uc_fw_type type,
+			    bool needs_ggtt_mapping);
 int intel_uc_fw_fetch(struct intel_uc_fw *uc_fw);
 void intel_uc_fw_cleanup_fetch(struct intel_uc_fw *uc_fw);
 int intel_uc_fw_upload(struct intel_uc_fw *uc_fw, u32 offset, u32 dma_flags);
 int intel_uc_fw_init(struct intel_uc_fw *uc_fw);
 void intel_uc_fw_fini(struct intel_uc_fw *uc_fw);
+void intel_uc_fw_resume_mapping(struct intel_uc_fw *uc_fw);
 size_t intel_uc_fw_copy_rsa(struct intel_uc_fw *uc_fw, void *dst, u32 max_len);
 int intel_uc_fw_mark_load_failed(struct intel_uc_fw *uc_fw, int err);
 void intel_uc_fw_dump(const struct intel_uc_fw *uc_fw, struct drm_printer *p);
···
 	if (!file_priv)
 		goto err_alloc;

-	client = i915_drm_client_add(&i915->clients);
-	if (IS_ERR(client)) {
-		ret = PTR_ERR(client);
+	client = i915_drm_client_alloc();
+	if (!client)
 		goto err_client;
-	}

 	file->driver_priv = file_priv;
 	file_priv->i915 = i915;
drivers/gpu/drm/i915/i915_getparam.c | +5 -1
···
 		value = sseu->min_eu_in_pool;
 		break;
 	case I915_PARAM_HUC_STATUS:
-		value = intel_huc_check_status(&to_gt(i915)->uc.huc);
+		/* On platform with a media GT, the HuC is on that GT */
+		if (i915->media_gt)
+			value = intel_huc_check_status(&i915->media_gt->uc.huc);
+		else
+			value = intel_huc_check_status(&to_gt(i915)->uc.huc);
 		if (value < 0)
 			return value;
 		break;
drivers/gpu/drm/i915/i915_perf.c | +48 -63
···
  * (See description of OA_TAIL_MARGIN_NSEC above for further details.)
  *
  * Besides returning true when there is data available to read() this function
- * also updates the tail, aging_tail and aging_timestamp in the oa_buffer
- * object.
+ * also updates the tail in the oa_buffer object.
  *
  * Note: It's safe to read OA config state here unlocked, assuming that this is
  * only called while the stream is enabled, while the global OA configuration
···
 {
 	u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
 	int report_size = stream->oa_buffer.format->size;
+	u32 head, tail, read_tail;
 	unsigned long flags;
 	bool pollin;
 	u32 hw_tail;
-	u64 now;
 	u32 partial_report_size;

 	/* We have to consider the (unlikely) possibility that read() errors
···
 	partial_report_size %= report_size;

 	/* Subtract partial amount off the tail */
-	hw_tail = gtt_offset + OA_TAKEN(hw_tail, partial_report_size);
+	hw_tail = OA_TAKEN(hw_tail, partial_report_size);

-	now = ktime_get_mono_fast_ns();
-
-	if (hw_tail == stream->oa_buffer.aging_tail &&
-	    (now - stream->oa_buffer.aging_timestamp) > OA_TAIL_MARGIN_NSEC) {
-		/* If the HW tail hasn't moved since the last check and the HW
-		 * tail has been aging for long enough, declare it the new
-		 * tail.
-		 */
-		stream->oa_buffer.tail = stream->oa_buffer.aging_tail;
-	} else {
-		u32 head, tail, aged_tail;
-
-		/* NB: The head we observe here might effectively be a little
-		 * out of date. If a read() is in progress, the head could be
-		 * anywhere between this head and stream->oa_buffer.tail.
-		 */
-		head = stream->oa_buffer.head - gtt_offset;
-		aged_tail = stream->oa_buffer.tail - gtt_offset;
-
-		hw_tail -= gtt_offset;
-		tail = hw_tail;
-
-		/* Walk the stream backward until we find a report with report
-		 * id and timestamp not at 0.
-		 */
-		while (OA_TAKEN(tail, aged_tail) >= report_size) {
-			void *report = stream->oa_buffer.vaddr + tail;
-
-			if (oa_report_id(stream, report) ||
-			    oa_timestamp(stream, report))
-				break;
-
-			tail = (tail - report_size) & (OA_BUFFER_SIZE - 1);
-		}
-
-		if (OA_TAKEN(hw_tail, tail) > report_size &&
-		    __ratelimit(&stream->perf->tail_pointer_race))
-			drm_notice(&stream->uncore->i915->drm,
-				   "unlanded report(s) head=0x%x tail=0x%x hw_tail=0x%x\n",
-				   head, tail, hw_tail);
-
-		stream->oa_buffer.tail = gtt_offset + tail;
-		stream->oa_buffer.aging_tail = gtt_offset + hw_tail;
-		stream->oa_buffer.aging_timestamp = now;
+	/* NB: The head we observe here might effectively be a little
+	 * out of date. If a read() is in progress, the head could be
+	 * anywhere between this head and stream->oa_buffer.tail.
+	 */
+	head = stream->oa_buffer.head - gtt_offset;
+	read_tail = stream->oa_buffer.tail - gtt_offset;
+
+	tail = hw_tail;
+
+	/* Walk the stream backward until we find a report with report
+	 * id and timestamp not at 0. Since the circular buffer pointers
+	 * progress by increments of 64 bytes and reports can be up
+	 * to 256 bytes long, we can't tell whether a report has fully
+	 * landed in memory before the report id and timestamp of the
+	 * following report have effectively landed.
+	 *
+	 * This is assuming that the writes of the OA unit land in
+	 * memory in the order they were written to.
+	 * If not : (╯°□°)╯︵ ┻━┻
+	 */
+	while (OA_TAKEN(tail, read_tail) >= report_size) {
+		void *report = stream->oa_buffer.vaddr + tail;
+
+		if (oa_report_id(stream, report) ||
+		    oa_timestamp(stream, report))
+			break;
+
+		tail = (tail - report_size) & (OA_BUFFER_SIZE - 1);
 	}

-	pollin = OA_TAKEN(stream->oa_buffer.tail - gtt_offset,
-			  stream->oa_buffer.head - gtt_offset) >= report_size;
+	if (OA_TAKEN(hw_tail, tail) > report_size &&
+	    __ratelimit(&stream->perf->tail_pointer_race))
+		drm_notice(&stream->uncore->i915->drm,
+			   "unlanded report(s) head=0x%x tail=0x%x hw_tail=0x%x\n",
+			   head, tail, hw_tail);
+
+	stream->oa_buffer.tail = gtt_offset + tail;
+
+	pollin = OA_TAKEN(stream->oa_buffer.tail,
+			  stream->oa_buffer.head) >= report_size;

 	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);
···
 		stream->oa_buffer.last_ctx_id = ctx_id;
 	}

-	/*
-	 * Clear out the report id and timestamp as a means to detect unlanded
-	 * reports.
-	 */
-	oa_report_id_clear(stream, report32);
-	oa_timestamp_clear(stream, report32);
+	if (is_power_of_2(report_size)) {
+		/*
+		 * Clear out the report id and timestamp as a means
+		 * to detect unlanded reports.
+		 */
+		oa_report_id_clear(stream, report32);
+		oa_timestamp_clear(stream, report32);
+	} else {
+		/* Zero out the entire report */
+		memset(report32, 0, report_size);
+	}
 	}

 	if (start_offset != *offset) {
···
 			   gtt_offset | OABUFFER_SIZE_16M);

 	/* Mark that we need updated tail pointers to read from... */
-	stream->oa_buffer.aging_tail = INVALID_TAIL_PTR;
 	stream->oa_buffer.tail = gtt_offset;

 	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);
···
 	intel_uncore_write(uncore, GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);

 	/* Mark that we need updated tail pointers to read from... */
-	stream->oa_buffer.aging_tail = INVALID_TAIL_PTR;
 	stream->oa_buffer.tail = gtt_offset;

 	/*
···
 			   gtt_offset & GEN12_OAG_OATAILPTR_MASK);

 	/* Mark that we need updated tail pointers to read from... */
-	stream->oa_buffer.aging_tail = INVALID_TAIL_PTR;
 	stream->oa_buffer.tail = gtt_offset;

 	/*
-12
drivers/gpu/drm/i915/i915_perf_types.h
···313313 spinlock_t ptr_lock;314314315315 /**316316- * @aging_tail: The last HW tail reported by HW. The data317317- * might not have made it to memory yet though.318318- */319319- u32 aging_tail;320320-321321- /**322322- * @aging_timestamp: A monotonic timestamp for when the current aging tail pointer323323- * was read; used to determine when it is old enough to trust.324324- */325325- u64 aging_timestamp;326326-327327- /**328316 * @head: Although we can always read back the head pointer register,329317 * we prefer to avoid trusting the HW state, just to avoid any330318 * risk that some hardware condition could * somehow bump the
+9-25
drivers/gpu/drm/i915/i915_pmu.c
···132132 unsigned int i;133133 u32 mask = 0;134134135135- for (i = 0; i < I915_PMU_MAX_GTS; i++)135135+ for (i = 0; i < I915_PMU_MAX_GT; i++)136136 mask |= config_mask(__I915_PMU_ACTUAL_FREQUENCY(i)) |137137 config_mask(__I915_PMU_REQUESTED_FREQUENCY(i));138138139139 return mask;140140}141141142142-static bool pmu_needs_timer(struct i915_pmu *pmu, bool gpu_active)142142+static bool pmu_needs_timer(struct i915_pmu *pmu)143143{144144 struct drm_i915_private *i915 = container_of(pmu, typeof(*i915), pmu);145145 u32 enable;···158158 enable &= frequency_enabled_mask() | ENGINE_SAMPLE_MASK;159159160160 /*161161- * When the GPU is idle per-engine counters do not need to be162162- * running so clear those bits out.163163- */164164- if (!gpu_active)165165- enable &= ~ENGINE_SAMPLE_MASK;166166- /*167161 * Also there is software busyness tracking available we do not168162 * need the timer for I915_SAMPLE_BUSY counter.169163 */170170- else if (i915->caps.scheduler & I915_SCHEDULER_CAP_ENGINE_BUSY_STATS)164164+ if (i915->caps.scheduler & I915_SCHEDULER_CAP_ENGINE_BUSY_STATS)171165 enable &= ~BIT(I915_SAMPLE_BUSY);172166173167 /*···191197 return ktime_to_ns(ktime_sub(ktime_get_raw(), kt));192198}193199194194-static unsigned int195195-__sample_idx(struct i915_pmu *pmu, unsigned int gt_id, int sample)196196-{197197- unsigned int idx = gt_id * __I915_NUM_PMU_SAMPLERS + sample;198198-199199- GEM_BUG_ON(idx >= ARRAY_SIZE(pmu->sample));200200-201201- return idx;202202-}203203-204200static u64 read_sample(struct i915_pmu *pmu, unsigned int gt_id, int sample)205201{206206- return pmu->sample[__sample_idx(pmu, gt_id, sample)].cur;202202+ return pmu->sample[gt_id][sample].cur;207203}208204209205static void210206store_sample(struct i915_pmu *pmu, unsigned int gt_id, int sample, u64 val)211207{212212- pmu->sample[__sample_idx(pmu, gt_id, sample)].cur = val;208208+ pmu->sample[gt_id][sample].cur = val;213209}214210215211static void216212add_sample_mult(struct i915_pmu *pmu, unsigned int 
gt_id, int sample, u32 val, u32 mul)217213{218218- pmu->sample[__sample_idx(pmu, gt_id, sample)].cur += mul_u32_u32(val, mul);214214+ pmu->sample[gt_id][sample].cur += mul_u32_u32(val, mul);219215}220216221217static u64 get_rc6(struct intel_gt *gt)···279295280296static void __i915_pmu_maybe_start_timer(struct i915_pmu *pmu)281297{282282- if (!pmu->timer_enabled && pmu_needs_timer(pmu, true)) {298298+ if (!pmu->timer_enabled && pmu_needs_timer(pmu)) {283299 pmu->timer_enabled = true;284300 pmu->timer_last = ktime_get();285301 hrtimer_start_range_ns(&pmu->timer,···305321 */306322 pmu->unparked &= ~BIT(gt->info.id);307323 if (pmu->unparked == 0)308308- pmu->timer_enabled = pmu_needs_timer(pmu, false);324324+ pmu->timer_enabled = false;309325310326 spin_unlock_irq(&pmu->lock);311327}···811827 */812828 if (--pmu->enable_count[bit] == 0) {813829 pmu->enable &= ~BIT(bit);814814- pmu->timer_enabled &= pmu_needs_timer(pmu, true);830830+ pmu->timer_enabled &= pmu_needs_timer(pmu);815831 }816832817833 spin_unlock_irqrestore(&pmu->lock, flags);
+4-4
drivers/gpu/drm/i915/i915_pmu.h
···3838 __I915_NUM_PMU_SAMPLERS3939};40404141-#define I915_PMU_MAX_GTS 24141+#define I915_PMU_MAX_GT 242424343/*4444 * How many different events we track in the global PMU mask.···4747 */4848#define I915_PMU_MASK_BITS \4949 (I915_ENGINE_SAMPLE_COUNT + \5050- I915_PMU_MAX_GTS * __I915_PMU_TRACKED_EVENT_COUNT)5050+ I915_PMU_MAX_GT * __I915_PMU_TRACKED_EVENT_COUNT)51515252#define I915_ENGINE_SAMPLE_COUNT (I915_SAMPLE_SEMA + 1)5353···127127 * Only global counters are held here, while the per-engine ones are in128128 * struct intel_engine_cs.129129 */130130- struct i915_pmu_sample sample[I915_PMU_MAX_GTS * __I915_NUM_PMU_SAMPLERS];130130+ struct i915_pmu_sample sample[I915_PMU_MAX_GT][__I915_NUM_PMU_SAMPLERS];131131 /**132132 * @sleep_last: Last time GT parked for RC6 estimation.133133 */134134- ktime_t sleep_last[I915_PMU_MAX_GTS];134134+ ktime_t sleep_last[I915_PMU_MAX_GT];135135 /**136136 * @irq_count: Number of interrupts137137 *
···143143144144 reply_size = header->message_size - sizeof(*header);145145 if (reply_size > msg_out_size_max) {146146- drm_warn(&i915->drm, "caller with insufficient PXP reply size %u (%ld)\n",146146+ drm_warn(&i915->drm, "caller with insufficient PXP reply size %u (%zu)\n",147147 reply_size, msg_out_size_max);148148 reply_size = msg_out_size_max;149149 }···196196 * gsc-proxy init flow (the last set of dependencies that197197 * are out of order) will suffice.198198 */199199- if (intel_huc_is_authenticated(&pxp->ctrl_gt->uc.huc) &&199199+ if (intel_huc_is_authenticated(&pxp->ctrl_gt->uc.huc, INTEL_HUC_AUTH_BY_GSC) &&200200 intel_gsc_uc_fw_proxy_init_done(&pxp->ctrl_gt->uc.gsc))201201 return true;202202
···1717 default y if DRM_MESON1818 select DRM_DW_HDMI1919 imply DRM_DW_HDMI_I2S_AUDIO2020+2121+config DRM_MESON_DW_MIPI_DSI2222+ tristate "MIPI DSI Synopsys Controller support for Amlogic Meson Display"2323+ depends on DRM_MESON2424+ default y if DRM_MESON2525+ select DRM_DW_MIPI_DSI2626+ select GENERIC_PHY_MIPI_DPHY
···223223 * DU channels that have a display PLL can't use the internal224224 * system clock, and have no internal clock divider.225225 */226226-227227- /*228228- * The H3 ES1.x exhibits dot clock duty cycle stability issues.229229- * We can work around them by configuring the DPLL to twice the230230- * desired frequency, coupled with a /2 post-divider. Restrict231231- * the workaround to H3 ES1.x as ES2.0 and all other SoCs have232232- * no post-divider when a display PLL is present (as shown by233233- * the workaround breaking HDMI output on M3-W during testing).234234- */235235- if (rcdu->info->quirks & RCAR_DU_QUIRK_H3_ES1_PCLK_STABILITY) {236236- target *= 2;237237- div = 1;238238- }239239-240226 extclk = clk_get_rate(rcrtc->extclock);241227 rcar_du_dpll_divider(rcrtc, &dpll, extclk, target);242228···231245 | DPLLCR_N(dpll.n) | DPLLCR_M(dpll.m)232246 | DPLLCR_STBY;233247234234- if (rcrtc->index == 1) {248248+ if (rcrtc->index == 1)235249 dpllcr |= DPLLCR_PLCS1236250 | DPLLCR_INCS_DOTCLKIN1;237237- } else {238238- dpllcr |= DPLLCR_PLCS0_PLL251251+ else252252+ dpllcr |= DPLLCR_PLCS0239253 | DPLLCR_INCS_DOTCLKIN0;240240-241241- /*242242- * On ES2.x we have a single mux controlled via bit 21,243243- * which selects between DCLKIN source (bit 21 = 0) and244244- * a PLL source (bit 21 = 1), where the PLL is always245245- * PLL1.246246- *247247- * On ES1.x we have an additional mux, controlled248248- * via bit 20, for choosing between PLL0 (bit 20 = 0)249249- * and PLL1 (bit 20 = 1). We always want to use PLL1,250250- * so on ES1.x, in addition to setting bit 21, we need251251- * to set the bit 20.252252- */253253-254254- if (rcdu->info->quirks & RCAR_DU_QUIRK_H3_ES1_PLL)255255- dpllcr |= DPLLCR_PLCS0_H3ES1X_PLL1;256256- }257254258255 rcar_du_group_write(rcrtc->group, DPLLCR, dpllcr);259256
···4141struct dma_fence;4242struct drm_file;4343struct drm_device;4444+struct drm_printer;4445struct device;4546struct file;4647···259258 /** @pid: Process that opened this file. */260259 struct pid *pid;261260261261+ /** @client_id: A unique id for fdinfo */262262+ u64 client_id;263263+262264 /** @magic: Authentication magic, see @authenticated. */263265 drm_magic_t magic;264266···442438void drm_send_event_timestamp_locked(struct drm_device *dev,443439 struct drm_pending_event *e,444440 ktime_t timestamp);441441+442442+/**443443+ * struct drm_memory_stats - GEM object stats associated444444+ * @shared: Total size of GEM objects shared between processes445445+ * @private: Total size of GEM objects446446+ * @resident: Total size of GEM objects backing pages447447+ * @purgeable: Total size of GEM objects that can be purged (resident and not active)448448+ * @active: Total size of GEM objects active on one or more engines449449+ *450450+ * Used by drm_print_memory_stats()451451+ */452452+struct drm_memory_stats {453453+ u64 shared;454454+ u64 private;455455+ u64 resident;456456+ u64 purgeable;457457+ u64 active;458458+};459459+460460+enum drm_gem_object_status;461461+462462+void drm_print_memory_stats(struct drm_printer *p,463463+ const struct drm_memory_stats *stats,464464+ enum drm_gem_object_status supported_status,465465+ const char *region);466466+467467+void drm_show_memory_stats(struct drm_printer *p, struct drm_file *file);468468+void drm_show_fdinfo(struct seq_file *m, struct file *f);445469446470struct file *mock_drm_getfile(struct drm_minor *minor, unsigned int flags);447471
+32
include/drm/drm_gem.h
···4343struct drm_gem_object;44444545/**4646+ * enum drm_gem_object_status - bitmask of object state for fdinfo reporting4747+ * @DRM_GEM_OBJECT_RESIDENT: object is resident in memory (ie. not unpinned)4848+ * @DRM_GEM_OBJECT_PURGEABLE: object marked as purgeable by userspace4949+ *5050+ * Bitmask of status used for fdinfo memory stats, see &drm_gem_object_funcs.status5151+ * and drm_show_fdinfo(). Note that an object can be DRM_GEM_OBJECT_PURGEABLE while5252+ * it is still active or not resident, in which case drm_show_fdinfo() will not5353+ * account for it as purgeable. So drivers do not need to check if the buffer5454+ * is idle and resident to return this bit. (Ie. userspace can mark a buffer5555+ * as purgeable even while it is still busy on the GPU; it does not _actually_5656+ * become purgeable until it becomes idle. The status gem object func does5757+ * not need to consider this.)5858+ */5959+enum drm_gem_object_status {6060+ DRM_GEM_OBJECT_RESIDENT = BIT(0),6161+ DRM_GEM_OBJECT_PURGEABLE = BIT(1),6262+};6363+6464+/**4665 * struct drm_gem_object_funcs - GEM object functions4766 */4867struct drm_gem_object_funcs {···192173 * This callback is optional.193174 */194175 int (*evict)(struct drm_gem_object *obj);176176+177177+ /**178178+ * @status:179179+ *180180+ * The optional status callback can return additional object state181181+ * which determines which stats the object is counted against. The182182+ * callback is called under table_lock. Racing against object status183183+ * change is "harmless", and the callback can expect to not race184184+ * against object destruction.185185+ *186186+ * Called by drm_show_memory_stats().187187+ */188188+ enum drm_gem_object_status (*status)(struct drm_gem_object *obj);195189196190 /**197191 * @vm_ops:
···787787 * The address which accessing it caused the razwi.788788 * Razwi initiator.789789 * Razwi cause, was it a page fault or MMU access error.790790+ * May return 0 even though no new data is available; in that case791791+ * timestamp will be 0.790792 * HL_INFO_DEV_MEM_ALLOC_PAGE_SIZES - Retrieve valid page sizes for device memory allocation791793 * HL_INFO_SECURED_ATTESTATION - Retrieve attestation report of the boot.792794 * HL_INFO_REGISTER_EVENTFD - Register eventfd for event notifications.793795 * HL_INFO_UNREGISTER_EVENTFD - Unregister eventfd794796 * HL_INFO_GET_EVENTS - Retrieve the last occurred events795797 * HL_INFO_UNDEFINED_OPCODE_EVENT - Retrieve last undefined opcode error information.798798+ * May return 0 even though no new data is available; in that case799799+ * timestamp will be 0.796800 * HL_INFO_ENGINE_STATUS - Retrieve the status of all the h/w engines in the asic.797801 * HL_INFO_PAGE_FAULT_EVENT - Retrieve parameters of captured page fault.802802+ * May return 0 even though no new data is available; in that case803803+ * timestamp will be 0.798804 * HL_INFO_USER_MAPPINGS - Retrieve user mappings, captured after page fault event.799805 * HL_INFO_FW_GENERIC_REQ - Send generic request to FW.800806 * HL_INFO_HW_ERR_EVENT - Retrieve information on the reported HW error.807807+ * May return 0 even though no new data is available; in that case808808+ * timestamp will be 0.801809 * HL_INFO_FW_ERR_EVENT - Retrieve information on the reported FW error.810810+ * May return 0 even though no new data is available; in that case811811+ * timestamp will be 0.802812 */803813#define HL_INFO_HW_IP_INFO 0804814#define HL_INFO_HW_EVENTS 1
+43-1
include/uapi/drm/i915_drm.h
···674674 * If the IOCTL is successful, the returned parameter will be set to one of the675675 * following values:676676 * * 0 if HuC firmware load is not complete,677677- * * 1 if HuC firmware is authenticated and running.677677+ * * 1 if HuC firmware is loaded and fully authenticated,678678+ * * 2 if HuC firmware is loaded and authenticated for clear media only.678679 */679680#define I915_PARAM_HUC_STATUS 42680681···36803679 *36813680 * For I915_GEM_CREATE_EXT_PROTECTED_CONTENT usage see36823681 * struct drm_i915_gem_create_ext_protected_content.36823682+ *36833683+ * For I915_GEM_CREATE_EXT_SET_PAT usage see36843684+ * struct drm_i915_gem_create_ext_set_pat.36833685 */36843686#define I915_GEM_CREATE_EXT_MEMORY_REGIONS 036853687#define I915_GEM_CREATE_EXT_PROTECTED_CONTENT 136883688+#define I915_GEM_CREATE_EXT_SET_PAT 236863689 __u64 extensions;36873690};36883691···37993794 struct i915_user_extension base;38003795 /** @flags: reserved for future usage, currently MBZ */38013796 __u32 flags;37973797+};37983798+37993799+/**38003800+ * struct drm_i915_gem_create_ext_set_pat - The38013801+ * I915_GEM_CREATE_EXT_SET_PAT extension.38023802+ *38033803+ * If this extension is provided, the specified caching policy (PAT index) is38043804+ * applied to the buffer object.38053805+ *38063806+ * Below is an example of how to create an object with a specific caching policy:38073807+ *38083808+ * .. code-block:: C38093809+ *38103810+ * struct drm_i915_gem_create_ext_set_pat set_pat_ext = {38113811+ * .base = { .name = I915_GEM_CREATE_EXT_SET_PAT },38123812+ * .pat_index = 0,38133813+ * };38143814+ * struct drm_i915_gem_create_ext create_ext = {38153815+ * .size = PAGE_SIZE,38163816+ * .extensions = (uintptr_t)&set_pat_ext,38173817+ * };38183818+ *38193819+ * int err = ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create_ext);38203820+ * if (err) ...38213821+ */38223822+struct drm_i915_gem_create_ext_set_pat {38233823+ /** @base: Extension link. See struct i915_user_extension.
*/38243824+ struct i915_user_extension base;38253825+ /**38263826+ * @pat_index: PAT index to be set.38273827+ * PAT index is a bit field in the Page Table Entry to control caching38283828+ * behaviors for GPU accesses. The definition of PAT index is38293829+ * platform dependent and can be found in hardware specifications.38303830+ */38313831+ __u32 pat_index;38323832+ /** @rsvd: reserved for future use */38333833+ __u32 rsvd;38023834};3803383538043836/* ID of the protected content session managed by i915 when PXP is active */