x86/sev: Fix operator precedence in GHCB_MSR_VMPL_REQ_LEVEL macro

The GHCB_MSR_VMPL_REQ_LEVEL macro lacked parentheses around the bitmask
expression, causing the shift operation to bind too early. As a result,
when requesting VMPL1 (e.g., GHCB_MSR_VMPL_REQ_LEVEL(1)), incorrect
values such as 0x000000016 were generated instead of the intended
0x100000016 (the requested VMPL level is specified in GHCBData[39:32]).

Fix the precedence issue by grouping the masked value before applying
the shift.

[ bp: Massage commit message. ]

Fixes: 34ff65901735 ("x86/sev: Use kernel provided SVSM Calling Areas")
Signed-off-by: Seongman Lee <augustus92@kaist.ac.kr>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/20250511092329.12680-1-cloudlee1719@gmail.com

authored by Seongman Lee and committed by Borislav Petkov (AMD) f7387eff 5214a9f6

+1 -1
arch/x86/include/asm/sev-common.h
@@ -116,7 +116,7 @@
 #define GHCB_MSR_VMPL_REQ		0x016
 #define GHCB_MSR_VMPL_REQ_LEVEL(v)			\
 	/* GHCBData[39:32] */				\
-	(((u64)(v) & GENMASK_ULL(7, 0) << 32) |		\
+	((((u64)(v) & GENMASK_ULL(7, 0)) << 32) |	\
 	/* GHCBDdata[11:0] */				\
 	GHCB_MSR_VMPL_REQ)