Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'release' of master.kernel.org:/pub/scm/linux/kernel/git/aegl/linux-2.6

+292 -74
+194
Documentation/ia64/mca.txt
An ad-hoc collection of notes on IA64 MCA and INIT processing. Feel
free to update it with notes about any area that is not clear.

---

MCA/INIT are completely asynchronous. They can occur at any time, when
the OS is in any state, including when one of the cpus is already
holding a spinlock. Trying to get any lock from MCA/INIT state is
asking for deadlock. Also the state of structures that are protected
by locks is indeterminate, including linked lists.

---

The complicated ia64 MCA process. All of this is mandated by Intel's
specification for ia64 SAL, error recovery and unwind; it is not as if
we have a choice here.

* MCA occurs on one cpu, usually due to a double bit memory error.
  This is the monarch cpu.

* SAL sends an MCA rendezvous interrupt (which is a normal interrupt)
  to all the other cpus, the slaves.

* Slave cpus that receive the MCA interrupt call down into SAL; they
  end up spinning disabled while the MCA is being serviced.

* If any slave cpu was already spinning disabled when the MCA occurred
  then it cannot service the MCA interrupt. SAL waits ~20 seconds then
  sends an unmaskable INIT event to the slave cpus that have not
  already rendezvoused.

* Because MCA/INIT can be delivered at any time, including when the cpu
  is down in PAL in physical mode, the registers at the time of the
  event are _completely_ undefined. In particular the MCA/INIT
  handlers cannot rely on the thread pointer; PAL physical mode can
  (and does) modify TP. It is allowed to do that as long as it resets
  TP on return. However MCA/INIT events expose us to these PAL
  internal TP changes. Hence curr_task().

* If an MCA/INIT event occurs while the kernel was running (not user
  space) and the kernel has called PAL, then the MCA/INIT handler
  cannot assume that the kernel stack is in a fit state to be used,
  mainly because PAL may or may not maintain the stack pointer
  internally. Because the MCA/INIT handlers cannot trust the kernel
  stack, they have to use their own, per-cpu stacks. The MCA/INIT
  stacks are preformatted with just enough task state to let the
  relevant handlers do their job.

* Unlike most other architectures, the ia64 struct task is embedded in
  the kernel stack[1]. So switching to a new kernel stack means that
  we switch to a new task as well. Because various bits of the kernel
  assume that current points into the struct task, switching to a new
  stack also means a new value for current.

* Once all slaves have rendezvoused and are spinning disabled, the
  monarch is entered. The monarch now tries to diagnose the problem
  and decide if it can recover or not.

* Part of the monarch's job is to look at the state of all the other
  tasks. The only way to do that on ia64 is to call the unwinder, as
  mandated by Intel.

* The starting point for the unwind depends on whether a task is
  running or not. That is, whether it is on a cpu or is blocked. The
  monarch has to determine whether or not a task is on a cpu before it
  knows how to start unwinding it. The tasks that received an MCA or
  INIT event are no longer running; they have been converted to blocked
  tasks. But (and it's a big but), the cpus that received the MCA
  rendezvous interrupt are still running on their normal kernel stacks!

* To distinguish between these two cases, the monarch must know which
  tasks are on a cpu and which are not. Hence each slave cpu that
  switches to an MCA/INIT stack registers its new stack using
  set_curr_task(), so the monarch can tell that the _original_ task is
  no longer running on that cpu. That gives us a decent chance of
  getting a valid backtrace of the _original_ task.

* MCA/INIT can be nested, to a depth of 2 on any cpu. In the case of a
  nested error, we want diagnostics on the MCA/INIT handler that
  failed, not on the task that was originally running. Again this
  requires set_curr_task() so the MCA/INIT handlers can register their
  own stack as running on that cpu. Then a recursive error gets a
  trace of the failing handler's "task".

[1] My (Keith Owens) original design called for ia64 to separate its
    struct task and the kernel stacks. Then the MCA/INIT data would be
    chained stacks like i386 interrupt stacks. But that required
    radical surgery on the rest of ia64, plus extra hard wired TLB
    entries with its associated performance degradation. David
    Mosberger vetoed that approach, which meant that separate kernel
    stacks meant separate "tasks" for the MCA/INIT handlers.

---

INIT is less complicated than MCA. Pressing the nmi button or using
the equivalent command on the management console sends INIT to all
cpus. SAL picks one of the cpus as the monarch and the rest are
slaves. All the OS INIT handlers are entered at approximately the same
time. The OS monarch prints the state of all tasks and returns, after
which the slaves return and the system resumes.

At least that is what is supposed to happen. Alas there are broken
versions of SAL out there. Some drive all the cpus as monarchs. Some
drive them all as slaves. Some drive one cpu as monarch, wait for that
cpu to return from the OS then drive the rest as slaves. Some versions
of SAL cannot even cope with returning from the OS; they spin inside
SAL on resume. The OS INIT code has workarounds for some of these
broken SAL symptoms, but some simply cannot be fixed from the OS side.

---

The scheduler hooks used by ia64 (curr_task, set_curr_task) are layer
violations. Unfortunately MCA/INIT start off as massive layer
violations (they can occur at _any_ time) and they build from there.

At least ia64 makes an attempt at recovering from hardware errors, but
it is a difficult problem because of the asynchronous nature of these
errors. When processing an unmaskable interrupt we sometimes need
special code to cope with our inability to take any locks.

---

How is ia64 MCA/INIT different from x86 NMI?

* x86 NMI typically gets delivered to one cpu. MCA/INIT gets sent to
  all cpus.

* x86 NMI cannot be nested. MCA/INIT can be nested, to a depth of 2
  per cpu.

* x86 has a separate struct task which points to one of multiple kernel
  stacks. ia64 has the struct task embedded in the single kernel
  stack, so switching stack means switching task.

* x86 does not call the BIOS, so the NMI handler does not have to worry
  about any registers having changed. MCA/INIT can occur while the cpu
  is in PAL in physical mode, with undefined registers and an undefined
  kernel stack.

* i386 backtrace is not very sensitive to whether a process is running
  or not. ia64 unwind is very, very sensitive to whether a process is
  running or not.

---

What happens when MCA/INIT is delivered while a cpu is running user
space code?

The user mode registers are stored in the RSE area of the MCA/INIT
stack on entry to the OS and are restored from there on return to SAL,
so user mode registers are preserved across a recoverable MCA/INIT.
Since the OS has no idea what unwind data is available for the user
space stack, MCA/INIT never tries to backtrace user space, which means
that the OS does not bother making the user space process look like a
blocked task, i.e. the OS does not copy pt_regs and switch_stack to
the user space stack. Also the OS has no idea how big the user space
RSE and memory stacks are, which makes it too risky to copy the saved
state to a user mode stack.

---

How do we get a backtrace on the tasks that were running when MCA/INIT
was delivered?

mca.c::ia64_mca_modify_original_stack(). That identifies and verifies
the original kernel stack, copies the dirty registers from the
MCA/INIT stack's RSE to the original stack's RSE, copies the skeleton
struct pt_regs and switch_stack to the original stack, fills in the
skeleton structures from the PAL minstate area and updates the
original stack's thread.ksp. That makes the original stack look
exactly like any other blocked task, i.e. it now appears to be
sleeping. To get a backtrace, just start with thread.ksp for the
original task and unwind like any other sleeping task.

---

How do we identify the tasks that were running when MCA/INIT was
delivered?

If the previous task has been verified and converted to a blocked
state, then sos->prev_task on the MCA/INIT stack is updated to point
to the previous task. You can look at that field in dumps or
debuggers. To help distinguish between the handler and the original
tasks, handlers have _TIF_MCA_INIT set in thread_info.flags.

The sos data is always in the MCA/INIT handler stack, at offset
MCA_SOS_OFFSET. You can get that value from mca_asm.h or calculate it
as KERNEL_STACK_SIZE - sizeof(struct pt_regs) - sizeof(struct
ia64_sal_os_state), with 16 byte alignment for all structures; the
sketch after these notes walks through that arithmetic.

Also the comm field of the MCA/INIT task is modified to include the
pid of the original task, for humans to use. For example, a comm field
of 'MCA 12159' means that pid 12159 was running when the MCA was
delivered.
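For illustration, the MCA_SOS_OFFSET arithmetic above can be checked
with a small standalone program. This is a sketch only: the stack and
struct sizes below are placeholders, not the real ia64 layouts, and
the authoritative constant lives in mca_asm.h.

/* Sketch of the MCA_SOS_OFFSET calculation described above.
 * All sizes are placeholders; the real values come from the ia64
 * headers.  Each structure is aligned down to a 16 byte boundary.
 */
#include <stdio.h>

#define KERNEL_STACK_SIZE (32UL * 1024)  /* placeholder stack size */
#define ALIGN16_DOWN(x)   ((x) & ~15UL)  /* 16 byte alignment */

int main(void)
{
        unsigned long pt_regs_size = 400; /* placeholder sizeof(struct pt_regs) */
        unsigned long sos_size     = 376; /* placeholder sizeof(struct ia64_sal_os_state) */

        /* pt_regs sits at the top of the MCA/INIT stack, sos just below it */
        unsigned long pt_regs_off = ALIGN16_DOWN(KERNEL_STACK_SIZE - pt_regs_size);
        unsigned long sos_off     = ALIGN16_DOWN(pt_regs_off - sos_size);

        printf("MCA_SOS_OFFSET = 0x%lx\n", sos_off);
        return 0;
}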
+1 -1
arch/ia64/kernel/acpi.c
···
899 899 	if ((err = iosapic_init(phys_addr, gsi_base)))
900 900 		return err;
901 901 
902     -#if CONFIG_ACPI_NUMA
    902 +#ifdef CONFIG_ACPI_NUMA
903 903 	acpi_map_iosapic(handle, 0, NULL, NULL);
904 904 #endif /* CONFIG_ACPI_NUMA */
905 905 
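The acpi.c fix swaps #if for #ifdef. Kconfig defines a bool symbol to
1 when the option is enabled and leaves it undefined when it is off,
so the two forms differ in how the off case is handled. A minimal
sketch (CONFIG_ON and CONFIG_OFF are made-up names, not kconfig
symbols):

/* Sketch: #ifdef tests definedness, #if evaluates a value. */
#define CONFIG_ON 1          /* stand-in for an enabled kconfig bool */
                             /* CONFIG_OFF deliberately not defined  */

#ifdef CONFIG_ON             /* taken: the macro is defined */
static int on_path;
#endif

#if CONFIG_OFF               /* not taken, but only because an undefined
                              * name evaluates to 0 in #if; this trips
                              * -Wundef, and a macro defined to nothing
                              * here would be a preprocessor error */
static int off_path;
#endif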
+1 -1
arch/ia64/kernel/entry.S
···
491 491 	;;
492 492 	lfetch.fault [r16], 128
493 493 	br.ret.sptk.many rp
494     -END(prefetch_switch_stack)
    494 +END(prefetch_stack)
495 495 
496 496 GLOBAL_ENTRY(execve)
497 497 	mov r15=__NR_execve	// put syscall number in place
+69 -45
arch/ia64/kernel/mca_drv.c
···
 84  84 	struct page *p;
 85  85 
 86  86 	/* whether physical address is valid or not */
 87     -	if ( !ia64_phys_addr_valid(paddr) )
     87 +	if (!ia64_phys_addr_valid(paddr))
 88  88 		return ISOLATE_NG;
 89  89 
 90  90 	/* convert physical address to physical page number */
 91  91 	p = pfn_to_page(paddr>>PAGE_SHIFT);
 92  92 
 93  93 	/* check whether a page number have been already registered or not */
 94     -	for( i = 0; i < num_page_isolate; i++ )
     94 +	for (i = 0; i < num_page_isolate; i++)
 95     -		if( page_isolate[i] == p )
     95 +		if (page_isolate[i] == p)
 96  96 			return ISOLATE_OK; /* already listed */
 97  97 
 98  98 	/* limitation check */
 99     -	if( num_page_isolate == MAX_PAGE_ISOLATE )
     99 +	if (num_page_isolate == MAX_PAGE_ISOLATE)
100 100 		return ISOLATE_NG;
101 101 
102 102 	/* kick pages having attribute 'SLAB' or 'Reserved' */
103     -	if( PageSlab(p) || PageReserved(p) )
    103 +	if (PageSlab(p) || PageReserved(p))
104 104 		return ISOLATE_NG;
105 105 
106 106 	/* add attribute 'Reserved' and register the page */
···
139 139  * @peidx: pointer to index of processor error section
140 140  */
141 141 
142     -static void 
    142 +static void
143 143 mca_make_peidx(sal_log_processor_info_t *slpi, peidx_table_t *peidx)
144 144 {
145     -	/* 
    145 +	/*
146 146 	 * calculate the start address of
147 147 	 *  "struct cpuid_info" and "sal_processor_static_info_t".
148 148 	 */
···
164 164 }
165 165 
166 166 /**
167     - * mca_make_slidx - Make index of SAL error record 
    167 + * mca_make_slidx - Make index of SAL error record
168 168  * @buffer: pointer to SAL error record
169 169  * @slidx: pointer to index of SAL error record
170 170  *
···
172 172  *	1 if record has platform error / 0 if not
173 173  */
174 174 #define LOG_INDEX_ADD_SECT_PTR(sect, ptr) \
175     -	{ slidx_list_t *hl = &slidx_pool.buffer[slidx_pool.cur_idx]; \
176     -	hl->hdr = ptr; \
177     -	list_add(&hl->list, &(sect)); \
178     -	slidx_pool.cur_idx = (slidx_pool.cur_idx + 1)%slidx_pool.max_idx; }
    175 +	{slidx_list_t *hl = &slidx_pool.buffer[slidx_pool.cur_idx]; \
    176 +	hl->hdr = ptr; \
    177 +	list_add(&hl->list, &(sect)); \
    178 +	slidx_pool.cur_idx = (slidx_pool.cur_idx + 1)%slidx_pool.max_idx; }
179 179 
180     -static int 
    180 +static int
181 181 mca_make_slidx(void *buffer, slidx_table_t *slidx)
182 182 {
183 183 	int platform_err = 0;
···
214 214 		sp = (sal_log_section_hdr_t *)((char*)buffer + ercd_pos);
215 215 		if (!efi_guidcmp(sp->guid, SAL_PROC_DEV_ERR_SECT_GUID)) {
216 216 			LOG_INDEX_ADD_SECT_PTR(slidx->proc_err, sp);
217     -		} else if (!efi_guidcmp(sp->guid, SAL_PLAT_MEM_DEV_ERR_SECT_GUID)) {
    217 +		} else if (!efi_guidcmp(sp->guid,
    218 +				SAL_PLAT_MEM_DEV_ERR_SECT_GUID)) {
218 219 			platform_err = 1;
219 220 			LOG_INDEX_ADD_SECT_PTR(slidx->mem_dev_err, sp);
220     -		} else if (!efi_guidcmp(sp->guid, SAL_PLAT_SEL_DEV_ERR_SECT_GUID)) {
    221 +		} else if (!efi_guidcmp(sp->guid,
    222 +				SAL_PLAT_SEL_DEV_ERR_SECT_GUID)) {
221 223 			platform_err = 1;
222 224 			LOG_INDEX_ADD_SECT_PTR(slidx->sel_dev_err, sp);
223     -		} else if (!efi_guidcmp(sp->guid, SAL_PLAT_PCI_BUS_ERR_SECT_GUID)) {
    225 +		} else if (!efi_guidcmp(sp->guid,
    226 +				SAL_PLAT_PCI_BUS_ERR_SECT_GUID)) {
224 227 			platform_err = 1;
225 228 			LOG_INDEX_ADD_SECT_PTR(slidx->pci_bus_err, sp);
226     -		} else if (!efi_guidcmp(sp->guid, SAL_PLAT_SMBIOS_DEV_ERR_SECT_GUID)) {
    229 +		} else if (!efi_guidcmp(sp->guid,
    230 +				SAL_PLAT_SMBIOS_DEV_ERR_SECT_GUID)) {
227 231 			platform_err = 1;
228 232 			LOG_INDEX_ADD_SECT_PTR(slidx->smbios_dev_err, sp);
229     -		} else if (!efi_guidcmp(sp->guid, SAL_PLAT_PCI_COMP_ERR_SECT_GUID)) {
    233 +		} else if (!efi_guidcmp(sp->guid,
    234 +				SAL_PLAT_PCI_COMP_ERR_SECT_GUID)) {
230 235 			platform_err = 1;
231 236 			LOG_INDEX_ADD_SECT_PTR(slidx->pci_comp_err, sp);
232     -		} else if (!efi_guidcmp(sp->guid, SAL_PLAT_SPECIFIC_ERR_SECT_GUID)) {
    237 +		} else if (!efi_guidcmp(sp->guid,
    238 +				SAL_PLAT_SPECIFIC_ERR_SECT_GUID)) {
233 239 			platform_err = 1;
234 240 			LOG_INDEX_ADD_SECT_PTR(slidx->plat_specific_err, sp);
235     -		} else if (!efi_guidcmp(sp->guid, SAL_PLAT_HOST_CTLR_ERR_SECT_GUID)) {
    241 +		} else if (!efi_guidcmp(sp->guid,
    242 +				SAL_PLAT_HOST_CTLR_ERR_SECT_GUID)) {
236 243 			platform_err = 1;
237 244 			LOG_INDEX_ADD_SECT_PTR(slidx->host_ctlr_err, sp);
238     -		} else if (!efi_guidcmp(sp->guid, SAL_PLAT_BUS_ERR_SECT_GUID)) {
    245 +		} else if (!efi_guidcmp(sp->guid,
    246 +				SAL_PLAT_BUS_ERR_SECT_GUID)) {
239 247 			platform_err = 1;
240 248 			LOG_INDEX_ADD_SECT_PTR(slidx->plat_bus_err, sp);
241 249 		} else {
···
261 253  * Return value:
262 254  *	0 on Success / -ENOMEM on Failure
263 255  */
264     -static int 
    256 +static int
265 257 init_record_index_pools(void)
266 258 {
267 259 	int i;
268 260 	int rec_max_size;	/* Maximum size of SAL error records */
269 261 	int sect_min_size;	/* Minimum size of SAL error sections */
270 262 	/* minimum size table of each section */
271     -	static int sal_log_sect_min_sizes[] = { 
272     -		sizeof(sal_log_processor_info_t) + sizeof(sal_processor_static_info_t),
    263 +	static int sal_log_sect_min_sizes[] = {
    264 +		sizeof(sal_log_processor_info_t)
    265 +		+ sizeof(sal_processor_static_info_t),
273 266 		sizeof(sal_log_mem_dev_err_info_t),
274 267 		sizeof(sal_log_sel_dev_err_info_t),
275 268 		sizeof(sal_log_pci_bus_err_info_t),
···
303 294 
304 295 	/* - 3 - */
305 296 	slidx_pool.max_idx = (rec_max_size/sect_min_size) * 2 + 1;
306     -	slidx_pool.buffer = (slidx_list_t *) kmalloc(slidx_pool.max_idx * sizeof(slidx_list_t), GFP_KERNEL);
    297 +	slidx_pool.buffer = (slidx_list_t *)
    298 +		kmalloc(slidx_pool.max_idx * sizeof(slidx_list_t), GFP_KERNEL);
307 299 
308 300 	return slidx_pool.buffer ? 0 : -ENOMEM;
309 301 }
···
318 308  * is_mca_global - Check whether this MCA is global or not
319 309  * @peidx: pointer of index of processor error section
320 310  * @pbci: pointer to pal_bus_check_info_t
    311 + * @sos: pointer to hand off struct between SAL and OS
321 312  *
322 313  * Return value:
323 314  *	MCA_IS_LOCAL / MCA_IS_GLOBAL
···
328 317 is_mca_global(peidx_table_t *peidx, pal_bus_check_info_t *pbci,
329 318 	      struct ia64_sal_os_state *sos)
330 319 {
331     -	pal_processor_state_info_t *psp = (pal_processor_state_info_t*)peidx_psp(peidx);
    320 +	pal_processor_state_info_t *psp =
    321 +		(pal_processor_state_info_t*)peidx_psp(peidx);
332 322 
333     -	/* 
    323 +	/*
334 324 	 * PAL can request a rendezvous, if the MCA has a global scope.
335     -	 * If "rz_always" flag is set, SAL requests MCA rendezvous 
    325 +	 * If "rz_always" flag is set, SAL requests MCA rendezvous
336 326 	 * in spite of global MCA.
337 327 	 * Therefore it is local MCA when rendezvous has not been requested.
338 328 	 * Failed to rendezvous, the system must be down.
···
393 381  * @slidx: pointer of index of SAL error record
394 382  * @peidx: pointer of index of processor error section
395 383  * @pbci: pointer of pal_bus_check_info
    384 + * @sos: pointer to hand off struct between SAL and OS
396 385  *
397 386  * Return value:
398 387  *	1 on Success / 0 on Failure
399 388  */
400 389 
401 390 static int
402     -recover_from_read_error(slidx_table_t *slidx, peidx_table_t *peidx, pal_bus_check_info_t *pbci,
    391 +recover_from_read_error(slidx_table_t *slidx,
    392 +			peidx_table_t *peidx, pal_bus_check_info_t *pbci,
403 393 			struct ia64_sal_os_state *sos)
404 394 {
405 395 	sal_log_mod_error_info_t *smei;
···
467 453  * @slidx: pointer of index of SAL error record
468 454  * @peidx: pointer of index of processor error section
469 455  * @pbci: pointer of pal_bus_check_info
    456 + * @sos: pointer to hand off struct between SAL and OS
470 457  *
471 458  * Return value:
472 459  *	1 on Success / 0 on Failure
473 460  */
474 461 
475 462 static int
476     -recover_from_platform_error(slidx_table_t *slidx, peidx_table_t *peidx, pal_bus_check_info_t *pbci,
    463 +recover_from_platform_error(slidx_table_t *slidx, peidx_table_t *peidx,
    464 +			    pal_bus_check_info_t *pbci,
477 465 			    struct ia64_sal_os_state *sos)
478 466 {
479 467 	int status = 0;
480     -	pal_processor_state_info_t *psp = (pal_processor_state_info_t*)peidx_psp(peidx);
    468 +	pal_processor_state_info_t *psp =
    469 +		(pal_processor_state_info_t*)peidx_psp(peidx);
481 470 
482 471 	if (psp->bc && pbci->eb && pbci->bsi == 0) {
483 472 		switch(pbci->type) {
484 473 		case 1: /* partial read */
485 474 		case 3: /* full line(cpu) read */
486 475 		case 9: /* I/O space read */
487     -			status = recover_from_read_error(slidx, peidx, pbci, sos);
    476 +			status = recover_from_read_error(slidx, peidx, pbci,
    477 +							 sos);
488 478 			break;
489 479 		case 0: /* unknown */
490 480 		case 2: /* partial write */
···
499 481 		case 8: /* write coalescing transactions */
500 482 		case 10: /* I/O space write */
501 483 		case 11: /* inter-processor interrupt message(IPI) */
502     -		case 12: /* interrupt acknowledge or external task priority cycle */
    484 +		case 12: /* interrupt acknowledge or
    485 +			     external task priority cycle */
503 486 		default:
504 487 			break;
505 488 		}
···
515 496  * @slidx: pointer of index of SAL error record
516 497  * @peidx: pointer of index of processor error section
517 498  * @pbci: pointer of pal_bus_check_info
    499 + * @sos: pointer to hand off struct between SAL and OS
518 500  *
519 501  * Return value:
520 502  *	1 on Success / 0 on Failure
···
529 509  */
530 510 
531 511 static int
532     -recover_from_processor_error(int platform, slidx_table_t *slidx, peidx_table_t *peidx, pal_bus_check_info_t *pbci,
    512 +recover_from_processor_error(int platform, slidx_table_t *slidx,
    513 +			     peidx_table_t *peidx, pal_bus_check_info_t *pbci,
533 514 			     struct ia64_sal_os_state *sos)
534 515 {
535     -	pal_processor_state_info_t *psp = (pal_processor_state_info_t*)peidx_psp(peidx);
    516 +	pal_processor_state_info_t *psp =
    517 +		(pal_processor_state_info_t*)peidx_psp(peidx);
536 518 
537     -	/* 
    519 +	/*
538 520 	 * We cannot recover errors with other than bus_check.
539 521 	 */
540     -	if (psp->cc || psp->rc || psp->uc) 
    522 +	if (psp->cc || psp->rc || psp->uc)
541 523 		return 0;
542 524 
543 525 	/*
···
568 546 	 * (e.g. a load from poisoned memory)
569 547 	 * This means "there are some platform errors".
570 548 	 */
571     -	if (platform) 
    549 +	if (platform)
572 550 		return recover_from_platform_error(slidx, peidx, pbci, sos);
573     -	/* 
574     -	 * On account of strange SAL error record, we cannot recover. 
    551 +	/*
    552 +	 * On account of strange SAL error record, we cannot recover.
575 553 	 */
576 554 	return 0;
577 555 }
···
579 557 /**
580 558  * mca_try_to_recover - Try to recover from MCA
581 559  * @rec: pointer to a SAL error record
    560 + * @sos: pointer to hand off struct between SAL and OS
582 561  *
583 562  * Return value:
584 563  *	1 on Success / 0 on Failure
585 564  */
586 565 
587 566 static int
588     -mca_try_to_recover(void *rec,
589     -	struct ia64_sal_os_state *sos)
    567 +mca_try_to_recover(void *rec, struct ia64_sal_os_state *sos)
590 568 {
591 569 	int platform_err;
592 570 	int n_proc_err;
···
610 588 	}
611 589 
612 590 	/* Make index of processor error section */
613     -	mca_make_peidx((sal_log_processor_info_t*)slidx_first_entry(&slidx.proc_err)->hdr, &peidx);
    591 +	mca_make_peidx((sal_log_processor_info_t*)
    592 +		slidx_first_entry(&slidx.proc_err)->hdr, &peidx);
614 593 
615 594 	/* Extract Processor BUS_CHECK[0] */
616 595 	*((u64*)&pbci) = peidx_check_info(&peidx, bus_check, 0);
···
621 598 		return 0;
622 599 
623 600 	/* Try to recover a processor error */
624     -	return recover_from_processor_error(platform_err, &slidx, &peidx, &pbci, sos);
    601 +	return recover_from_processor_error(platform_err, &slidx, &peidx,
    602 +					    &pbci, sos);
625 603 }
626 604 
627 605 /*
···
635 611 		return -ENOMEM;
636 612 
637 613 	/* register external mca handlers */
638     -	if (ia64_reg_MCA_extension(mca_try_to_recover)){
    614 +	if (ia64_reg_MCA_extension(mca_try_to_recover)) {
639 615 		printk(KERN_ERR "ia64_reg_MCA_extension failed.\n");
640 616 		kfree(slidx_pool.buffer);
641 617 		return -EFAULT;
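For context on the LOG_INDEX_ADD_SECT_PTR hunks above: the macro hands
out slidx_list_t entries from a fixed pool, wrapping cur_idx modulo
max_idx, where init_record_index_pools() sizes the pool as
(rec_max_size/sect_min_size) * 2 + 1. A minimal userspace sketch of
that ring-pool pattern follows; the types and MAX_IDX value are
simplified stand-ins, not the kernel's.

#include <stdio.h>

#define MAX_IDX 8                     /* placeholder pool size */

struct slidx_entry { void *hdr; };    /* simplified slidx_list_t */

static struct slidx_entry pool[MAX_IDX];
static int cur_idx;

/* Hand out the next pool slot, wrapping like LOG_INDEX_ADD_SECT_PTR. */
static struct slidx_entry *slidx_get(void *hdr)
{
        struct slidx_entry *hl = &pool[cur_idx];

        hl->hdr = hdr;
        cur_idx = (cur_idx + 1) % MAX_IDX;
        return hl;
}

int main(void)
{
        int section;

        /* more requests than slots: indices wrap back to 0 */
        for (int i = 0; i < 10; i++)
                printf("slot %d\n", (int)(slidx_get(&section) - pool));
        return 0;
}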
+1 -1
arch/ia64/kernel/mca_drv.h
···
  6   6  * Copyright (C) Hidetoshi Seto (seto.hidetoshi@jp.fujitsu.com)
  7   7  */
  8   8 /*
  9     - * Processor error section: 
      9 + * Processor error section:
 10  10  *
 11  11  *  +-sal_log_processor_info_t *info-------------+
 12  12  *  | sal_log_section_hdr_t header;              |
+24 -24
arch/ia64/kernel/mca_drv_asm.S
···
13 13 #include <asm/ptrace.h>
14 14 
15 15 GLOBAL_ENTRY(mca_handler_bhhook)
16    -	invala	// clear RSE ?
17    -	;;	//
18    -	cover	//
19    -	;;	//
20    -	clrrrb	//
   16 +	invala				// clear RSE ?
   17 +	;;
   18 +	cover
   19 +	;;
   20 +	clrrrb
21 21 	;;
22    -	alloc	r16=ar.pfs,0,2,1,0	// make a new frame
   22 +	alloc		r16=ar.pfs,0,2,1,0 // make a new frame
23 23 	;;
24    -	mov	ar.rsc=0
   24 +	mov		ar.rsc=0
25 25 	;;
26    -	mov	r13=IA64_KR(CURRENT)	// current task pointer
   26 +	mov		r13=IA64_KR(CURRENT) // current task pointer
27 27 	;;
28    -	mov	r2=r13
   28 +	mov		r2=r13
29 29 	;;
30    -	addl	r22=IA64_RBS_OFFSET,r2
   30 +	addl		r22=IA64_RBS_OFFSET,r2
31 31 	;;
32    -	mov	ar.bspstore=r22
   32 +	mov		ar.bspstore=r22
33 33 	;;
34    -	addl	sp=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r2
   34 +	addl		sp=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r2
35 35 	;;
36    -	adds	r2=IA64_TASK_THREAD_ON_USTACK_OFFSET,r13
   36 +	adds		r2=IA64_TASK_THREAD_ON_USTACK_OFFSET,r13
37 37 	;;
38    -	st1	[r2]=r0		// clear current->thread.on_ustack flag
39    -	mov	loc0=r16
40    -	movl	loc1=mca_handler_bh	// recovery C function
   38 +	st1		[r2]=r0	// clear current->thread.on_ustack flag
   39 +	mov		loc0=r16
   40 +	movl		loc1=mca_handler_bh // recovery C function
41 41 	;;
42    -	mov	out0=r8		// poisoned address
43    -	mov	b6=loc1
   42 +	mov		out0=r8	// poisoned address
   43 +	mov		b6=loc1
44 44 	;;
45    -	mov	loc1=rp
   45 +	mov		loc1=rp
46 46 	;;
47    -	ssm	psr.i
   47 +	ssm		psr.i
48 48 	;;
49    -	br.call.sptk.many rp=b6	// does not return ...
   49 +	br.call.sptk.many rp=b6 // does not return ...
50 50 	;;
51    -	mov	ar.pfs=loc0
52    -	mov	rp=loc1
   51 +	mov		ar.pfs=loc0
   52 +	mov		rp=loc1
53 53 	;;
54    -	mov	r8=r0
   54 +	mov		r8=r0
55 55 	br.ret.sptk.many rp
56 56 	;;
57 57 END(mca_handler_bhhook)
+1 -1
arch/ia64/kernel/perfmon.c
···
574 574 	return 0UL;
575 575 }
576 576 
577     -static inline unsigned long
    577 +static inline void
578 578 pfm_unprotect_ctx_ctxsw(pfm_context_t *x, unsigned long f)
579 579 {
580 580 	spin_unlock(&(x)->ctx_lock);
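The perfmon.c hunk corrects a function that was declared to return
unsigned long but contains no return statement; using such a "result"
in a caller is undefined behavior, and gcc flags the missing return.
A sketch of the bug class, with made-up names rather than the perfmon
API:

#include <stdio.h>

struct ctx { int lock; };

/* Before: declared non-void, falls off the end without a value.
 * Any caller that reads the result gets garbage. */
static unsigned long broken_unlock(struct ctx *x)
{
        x->lock = 0;
}       /* gcc: "no return statement in function returning non-void" */

/* After: the function returns nothing, so say so. */
static void fixed_unlock(struct ctx *x)
{
        x->lock = 0;
}

int main(void)
{
        struct ctx c = { 1 };

        broken_unlock(&c);  /* calling is fine; using the value is not */
        c.lock = 1;
        fixed_unlock(&c);
        printf("lock = %d\n", c.lock);
        return 0;
}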
+1 -1
drivers/char/agp/hp-agp.c
···
252 252 	readl(hp->ioc_regs+HP_ZX1_PDIR_BASE);
253 253 	writel(hp->io_tlb_ps, hp->ioc_regs+HP_ZX1_TCNFG);
254 254 	readl(hp->ioc_regs+HP_ZX1_TCNFG);
255     -	writel(~(HP_ZX1_IOVA_SIZE-1), hp->ioc_regs+HP_ZX1_IMASK);
    255 +	writel((unsigned int)(~(HP_ZX1_IOVA_SIZE-1)), hp->ioc_regs+HP_ZX1_IMASK);
256 256 	readl(hp->ioc_regs+HP_ZX1_IMASK);
257 257 	writel(hp->iova_base|1, hp->ioc_regs+HP_ZX1_IBASE);
258 258 	readl(hp->ioc_regs+HP_ZX1_IBASE);
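The hp-agp.c change makes a width truncation explicit. Assuming
HP_ZX1_IOVA_SIZE is a long-typed constant on 64-bit ia64,
~(HP_ZX1_IOVA_SIZE-1) carries set high bits, while writel() only
stores 32 bits; the value written is the same either way, and the cast
documents the narrowing and quiets the compiler. A userspace sketch
with a placeholder aperture size:

#include <stdio.h>

#define IOVA_SIZE (1UL << 30)   /* placeholder 1 GB aperture, long-typed */

int main(void)
{
        unsigned long mask64 = ~(IOVA_SIZE - 1);     /* 0xffffffffc0000000 on LP64 */
        unsigned int  mask32 = (unsigned int)mask64; /* 0xc0000000: the 32 bits a
                                                      * writel()-style store keeps */

        printf("full mask:      %#lx\n", mask64);
        printf("truncated mask: %#x\n", mask32);
        return 0;
}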