* 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/kyle/parisc-2.6:
[PARISC] make ptr_to_pide() static
[PARISC] head.S: section mismatch fixes
[PARISC] add back Crestone Peak cpu
[PARISC] futex: special case cmpxchg NULL in kernel space
[PARISC] clean up show_stack
[PARISC] add pa8900 CPUs to hardware inventory
[PARISC] clean up include/asm-parisc/elf.h
[PARISC] move defconfig to arch/parisc/configs/
[PARISC] add back AD1889 MAINTAINERS entry
[PARISC] pdc_console: fix bizarre panic on boot
[PARISC] dump_stack in show_regs
[PARISC] pdc_stable: fix compile errors
[PARISC] remove unused pdc_iodc_printf function
[PARISC] bump __NR_syscalls
[PARISC] unbreak pgalloc.h
[PARISC] move VMALLOC_* definitions to fixmap.h
[PARISC] wire up timerfd syscalls
[PARISC] remove old timerfd syscall
This essentially reverts commit 71fc47a9adf8ee89e5c96a47222915c5485ac437
("ACPI: basic initramfs DSDT override support"), because the code simply
isn't ready.
It did ugly things to the init sequence to populate the rootfs image
early, but that just ended up showing other problems with the whole
approach. The fact is, the VFS layer simply isn't initialized this
early, and the relevant ACPI code should either run much later, or this
shouldn't be done at all.
For 2.6.25, we'll just pick the latter option. We can revisit this
concept later if necessary.
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Tilman Schmidt <tilman@imap.cc>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Renninger <trenn@suse.de>
Cc: Eric Piel <eric.piel@tremplin-utc.net>
Cc: Len Brown <len.brown@intel.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Markus Gaugusch <dsdt@gaugusch.at>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
DATA_CARRY is not boolean
Signed-off-by: Roel Kluin <12o3l@tiscali.nl>
Signed-off-by: Pierre Ossman <drzeus@drzeus.cx>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- move boot_args[] into the init section
- move $global$ into the read_mostly section
- fix the following two section mismatches:
WARNING: vmlinux.o(.text+0x9c): Section mismatch: reference to .init.text:start_kernel (between '$pgt_fill_loop' and '$is_pa20')
WARNING: vmlinux.o(.text+0xa0): Section mismatch: reference to .init.text:start_kernel (between '$pgt_fill_loop' and '$is_pa20')
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6:
[NET]: Fix tbench regression in 2.6.25-rc1
Crestone Peak Slow is the 800MHz PA-8800 cpu in the C8000.
0x88B is probably the Crestone Peak Fast.
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
Use the existing calc_delta_mine() calculation for sched_slice(). This
saves a divide and simplifies the code because we share it with the
other /cfs_rq->load users.
It also improves code size:
text data bss dec hex filename
42659 2740 144 45543 b1e7 sched.o.before
42093 2740 144 44977 afb1 sched.o.after
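For illustration, a minimal user-space sketch of the shared calculation (delta scaled by the entity weight over the runqueue load); the function and constants below are stand-ins, not the upstream sched.c code:

#include <stdio.h>
#include <stdint.h>

/* stand-in for the kernel's calc_delta_mine(): delta * weight / lw->weight */
static uint64_t calc_delta_mine(uint64_t delta, unsigned long weight,
                                unsigned long lw_weight)
{
	return delta * weight / lw_weight;
}

int main(void)
{
	uint64_t period = 20000000ULL;	/* 20ms scheduling period, illustrative */
	unsigned long se_weight = 1024;	/* one nice-0 task */
	unsigned long rq_weight = 3072;	/* three nice-0 tasks on the runqueue */

	/* sched_slice(): this entity's share of the period */
	printf("slice = %llu ns\n", (unsigned long long)
	       calc_delta_mine(period, se_weight, rq_weight));
	return 0;
}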
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Compared with kernel 2.6.24, the tbench result has a regression with
2.6.25-rc1:
1) On the 2 quad-core processor stoakley: 4%.
2) On the 4 quad-core processor tigerton: more than 30%.
A bisect located the patch below.
b4ce92775c2e7ff9cf79cca4e0a19c8c5fd6287b is first bad commit
commit b4ce92775c2e7ff9cf79cca4e0a19c8c5fd6287b
Author: Herbert Xu <herbert@gondor.apana.org.au>
Date: Tue Nov 13 21:33:32 2007 -0800
[IPV6]: Move nfheader_len into rt6_info
The dst member nfheader_len is only used by IPv6. It's also currently
creating a rather ugly alignment hole in struct dst. Therefore this patch
moves it from there into struct rt6_info.
The above patch changes the cache line alignment, especially of member
__refcnt. I did a test by adding 2 unsigned long paddings before
lastuse, so the 3 members, lastuse/__refcnt/__use, are moved to the next
cache line. The performance is recovered.
I created a patch to rearrange the members in struct dst_entry.
With Eric's and Valdis Kletnieks's suggestions, I made a finer arrangement.
1) Move tclassid under ops when CONFIG_NET_CLS_ROUTE=y, so
sizeof(dst_entry)=200 regardless of whether CONFIG_NET_CLS_ROUTE=y/n. I
tested many patches on my 16-core tigerton by moving tclassid to
different places. It looks like tclassid could also have an impact on
performance: if tclassid is moved before metrics, or not moved at all,
the performance isn't good, so I move it behind metrics.
2) Add comments before __refcnt.
On 16-core tigerton:
If CONFIG_NET_CLS_ROUTE=y, the result with the patch below is about 18%
better than the one without it;
if CONFIG_NET_CLS_ROUTE=n, the result with the patch below is about 30%
better than the one without it.
With 32-bit 2.6.25-rc1 on the 8-core stoakley, the new patch doesn't
introduce a regression.
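As a side note, member placement like this can be checked with a tiny user-space program; the struct below is only a toy stand-in for struct dst_entry, used to show how offsets map onto 64-byte cache lines:

#include <stdio.h>
#include <stddef.h>

/* toy stand-in for struct dst_entry, just to show the offset/cache-line math */
struct toy_dst {
	void *ops;
	unsigned long tclassid;		/* kept next to ops, as described above */
	unsigned long metrics[15];
	/* stand-ins for the hot __refcnt/__use/lastuse trio discussed above */
	int refcnt;
	int use;
	unsigned long lastuse;
};

#define SHOW(m) printf("%-8s offset %3zu -> cache line %zu\n", #m, \
		       offsetof(struct toy_dst, m), \
		       offsetof(struct toy_dst, m) / 64)

int main(void)
{
	SHOW(ops);
	SHOW(tclassid);
	SHOW(refcnt);
	SHOW(use);
	SHOW(lastuse);
	return 0;
}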
Thanks to Eric, Valdis, and David!
Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
Acked-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit a0c1e9073ef7428a14309cba010633a6cd6719ea added code to futex.c
to detect whether futex_atomic_cmpxchg_inatomic was implemented at run
time:
+	curval = cmpxchg_futex_value_locked(NULL, 0, 0);
+	if (curval == -EFAULT)
+		futex_cmpxchg_enabled = 1;
This is bogus on parisc, since page zero in kernel virtual space is the
gateway page for syscall entry, and should not be read from the kernel.
(That, and we really don't like the kernel faulting on its own address
space...)
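A minimal sketch of the special case this implies, as a user-space stand-in rather than the real parisc asm/futex.h code; the function name and success-path details are illustrative:

#include <stdio.h>

#define EFAULT 14

/* stand-in for cmpxchg_futex_value_locked()/futex_atomic_cmpxchg_inatomic() */
static int futex_cmpxchg_probe(int *uaddr, int oldval, int newval)
{
	int old;

	/* The runtime-detection probe passes uaddr == NULL.  On parisc,
	 * kernel-virtual page zero is the syscall gateway page, so never
	 * dereference it: report -EFAULT directly, which is exactly what
	 * the detection code in futex.c is looking for. */
	if (uaddr == NULL)
		return -EFAULT;

	old = *uaddr;	/* a real implementation uses an atomic user-space cmpxchg */
	if (old == oldval)
		*uaddr = newval;
	return old;
}

int main(void)
{
	int val = 1;

	printf("NULL probe: %d\n", futex_cmpxchg_probe(NULL, 0, 0));	/* -14 */
	printf("cmpxchg: %d, val=%d\n", futex_cmpxchg_probe(&val, 1, 2), val);
	return 0;
}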
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
Fair sleepers need to scale their latency target down by runqueue
weight. Otherwise busy systems will gain ever larger sleep bonus.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Since the lists are circular, we need to explicitly tag
the address to be deleted since we might end up freeing
the list head instead. This fixes some interesting SCTP
crashes.
Signed-off-by: Chidambar 'ilLogict' Zinnoury <illogict@online.fr>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When we show_regs, we obviously have a struct pt_regs of the calling
frame. Use these in show_stack so we don't have the entire bogus call trace
up to the show_stack call.
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
Currently we schedule to the leftmost task in the runqueue. When the
runtimes are very short because of some server/client ping-pong,
especially in over-saturated workloads, this will cycle through all
tasks, thrashing the cache.
Reduce cache thrashing by keeping dependent tasks together by running
newly woken tasks first. However, by not running the leftmost task first
we could starve tasks because the wakee can gain unlimited runtime.
Therefore we only run the wakee if it's within a small
(wakeup_granularity) window of the leftmost task. This preserves
fairness, but does alternate server/client task groups.
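A hedged, user-space sketch of that policy; the threshold value and names are made up for illustration and are not taken from sched_fair.c:

#include <stdio.h>
#include <stdint.h>

/* run the freshly woken task only if it is within wakeup_granularity
 * of the leftmost task, so the leftmost task cannot be starved */
static int run_wakee_first(uint64_t leftmost_vruntime, uint64_t wakee_vruntime,
                           uint64_t wakeup_granularity)
{
	return wakee_vruntime <= leftmost_vruntime + wakeup_granularity;
}

int main(void)
{
	uint64_t gran = 10000000;	/* 10ms, illustrative */

	printf("%d\n", run_wakee_first(100000000, 105000000, gran)); /* 1: run wakee */
	printf("%d\n", run_wakee_first(100000000, 150000000, gran)); /* 0: run leftmost */
	return 0;
}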
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
fs/built-in.o:(.rodata+0x1134): undefined reference to `proc_net_inode_operations'
fs/built-in.o:(.rodata+0x1138): undefined reference to `proc_net_operations'
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds the known pa8900 CPUs to the inventory list and removes
the Crestone Peak one which apparently never escaped into the wild.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
lw->weight can be 0 for a short time during bootup.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
With TSO it was possible to send past the receiver window when the skb
to be sent was the last in the write queue while the receiver window
is the limiting factor. There is a loophole in tcp_mss_split_point: it
lacked a receiver window check for the tcp_write_queue_tail() skb when
cwnd was also smaller than the full skb.
Noticed by Thomas Gleixner <tglx@linutronix.de> in form of "Treason
uncloaked! Peer ... shrinks window .... Repaired." messages (the peer
didn't actually shrink its window as the message suggests, we had just
sent something past it without permission to do so).
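A toy sketch of the missing clamp, purely to illustrate the arithmetic; this is not the upstream tcp_mss_split_point():

#include <stdio.h>
#include <stdint.h>

/* size the chunk by both the congestion window and the receiver window */
static uint32_t split_point(uint32_t skb_len, uint32_t cwnd_left,
                            uint32_t rwnd_left)
{
	uint32_t allowed = cwnd_left < rwnd_left ? cwnd_left : rwnd_left;

	return skb_len < allowed ? skb_len : allowed;
}

int main(void)
{
	/* tail skb of 30000 bytes, cwnd allows 20000, receiver only 8000 */
	printf("send %u bytes\n", split_point(30000, 20000, 8000)); /* 8000 */
	return 0;
}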
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Tested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Clean up some cruft. No functional changes.
Signed-off-by: Randolph Chung <tausq@parisc-linux.org>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
Clear the cached inverse value when updating load. This is needed for
calc_delta_mine() to work correctly when using the rq load.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ivo van Doorn <IvDoorn@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
This patch moves the default parisc defconfig to
arch/parisc/configs/generic_defconfig where it belongs and selects it as
the default defconfig through KBUILD_DEFCONFIG.
Signed-off-by: Adrian Bunk <adrian.bunk@movial.fi>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
Current min_vruntime tracking is incorrect and will cause serious
problems when we don't run the leftmost task for some reason.
min_vruntime does two things: 1) it's used to determine a forward
direction when the u64 vruntime wraps, and 2) it's used to track the
leftmost vruntime to position newly enqueued tasks from.
The current logic advances min_vruntime whenever the current task's
vruntime advances. Because the current task may pass the leftmost task
still waiting, we're failing the second goal. This causes new tasks to be
placed too far ahead and thus penalizes their runtime.
Fix this by making min_vruntime track the minimum vruntime of the waiting
tasks, updating it in enqueue/dequeue, and comparing against current's
vruntime to obtain the absolute minimum when placing new tasks.
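A small user-space sketch of that tracking rule, with the wrap-safe comparison made explicit; the names are illustrative, not the sched_fair.c implementation:

#include <stdio.h>
#include <stdint.h>

/* wrap-safe "a comes before b" comparison for u64 vruntime values */
static int before(uint64_t a, uint64_t b)
{
	return (int64_t)(a - b) < 0;
}

/* track the min of curr and the leftmost waiter, never moving backwards */
static uint64_t update_min_vruntime(uint64_t min_vruntime, uint64_t curr,
                                    uint64_t leftmost)
{
	uint64_t vruntime = before(curr, leftmost) ? curr : leftmost;

	return before(min_vruntime, vruntime) ? vruntime : min_vruntime;
}

int main(void)
{
	/* curr ran ahead (5000) while the leftmost waiter is still at 1200 */
	printf("%llu\n", (unsigned long long)
	       update_min_vruntime(1000, 5000, 1200));	/* 1200, not 5000 */
	return 0;
}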
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
On rt73 and rt61 disabling reception of multicast packets also disables
broadcast traffic which we never want to do. Therefore we should never
disable multicast.
Signed-off-by: Adam Baker <linux@baker-net.org.uk>
Signed-off-by: Ivo van Doorn <IvDoorn@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Thibaut VARENE <T-Bone@parisc-linux.org>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
Fix a hard to trigger crash seen in the -rt kernel that also affects
the vanilla scheduler.
There is a race condition between schedule() and some dequeue/enqueue
functions: rt_mutex_setprio(), __setscheduler() and sched_move_task().
When scheduling to idle, idle_balance() is called to pull tasks from
other busy processors, and it might drop the rq lock. That means those 3
functions can encounter on_rq=0 and running=1; the current task should be
put while it is running.
Here is a possible scenario:
    CPU0                                  CPU1
                                     |  schedule()
                                     |  ->deactivate_task()
                                     |  ->idle_balance()
                                     |  -->load_balance_newidle()
    rt_mutex_setprio()               |
                                     |  --->double_lock_balance()
    *get lock                            *rel lock
    * on_rq=0, running=1             |
    * sched_class is changed         |
    *rel lock                            *get lock
       :                             |
                                     :
                                     ->put_prev_task_rt()
                                     ->pick_next_task_fair()
                                         => panic
The current process of CPU1 (P1) is scheduling. P1 is deactivated, and the
scheduler looks for another process on another CPU's runqueue because CPU1
will be idle. idle_balance(), load_balance_newidle() and
double_lock_balance() are called, and double_lock_balance() could drop
the rq lock. On the other hand, CPU0 is trying to boost the priority of
P1. As a result of the boosting, only P1's prio and sched_class are changed
to RT; the sched entities of P1 and P1's group are never put. This makes the
cfs_rq invalid, because the cfs_rq has a curr but no leaf, so when
pick_next_task_fair() is called, the kernel panics.
Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|libertas: Invalid CMD_RESP 8012 to command 50!
The special case got mixed up in 8a96df80b3.
Signed-off-by: Sebastian Siewior <bigeasy@linutronix.de>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Commit 721fdf34167580ff98263c74cead8871d76936e6 introduced a subtle bug
by accidentally removing the "static" from iodc_dbuf. This resulted in what
appeared to be a trap without *current set to a task, probably the result of
a trap in real mode while calling firmware.
Also do other misc clean-ups. Since the only input from firmware is
non-blocking, share iodc_dbuf between input and output, and spinlock the
only callers.
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6:
firewire: fw-ohci: shut up false compiler warning on PPC32
firewire: fw-ohci: use dma_alloc_coherent for ar_buffer
ieee1394: sbp2: fix for SYM13FW500 bridge (Datafab disk)
firewire: fw-sbp2: fix for SYM13FW500 bridge (Datafab disk)
firewire: update Kconfig help text
firewire: warn on fatal condition in topology code
firewire: fw-sbp2: set single-phase retry_limit
firewire: fw-ohci: Apple UniNorth 1st generation support
firewire: fw-ohci: PPC PMac platform code
firewire: endianess annotations
firewire: endianess fix
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Originally, show_stack was used in BUG() output. However, a recent commit
changed it to print register state (no idea what that's supposed to help,
really...) and parisc was missing a backtrace because of it.
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
This bug was always here, but before my commit 6fa02839bf9412e18e77
("recheck for secure ports in fh_verify"), it could only be triggered by
failure of a kmalloc(). After that commit it could be triggered by a
client making a request from a non-reserved port for access to an export
marked "secure". (Exports are "secure" by default.)
The result is a struct svc_export with a reference count one too low,
resulting in likely oopses next time the export is accessed.
The reference counting here is not straightforward; a later patch will
clean up fh_verify().
Thanks to Lukas Hejtmanek for the bug report and followup.
Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Cc: Lukas Hejtmanek <xhejtman@ics.muni.cz>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Shut up "may be used uninitialised in this function" warnings due to
PPC32's implementation of dma_alloc_coherent().
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Commit ce7663d84:
[NETFILTER]: nfnetlink_queue: don't unregister handler of other subsystem
changed nf_unregister_queue_handler to return an error when attempting to
unregister a queue handler that is not identical to the one passed in.
This is correct in the case where we really do have a different queue handler already
registered, but some existing userspace code always does an unbind before
bind and aborts if that fails, so try to be nice and return success in
that case.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Joel Soete <rubisher@scarlet.be>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
The comments in the definition of struct export_operations don't match the
current members.
Add a comment for the 2 new functions and remove 2 comments for unused ones.
Signed-off-by: Marc Dionne <marc.c.dionne@gmail.com>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Currently, we do nothing to guarantee we have a consistent DMA buffer for
asynchronous receive packets. Rather than doing several syncs following a
dma_map_single() to get consistent buffers, just switch to using
dma_alloc_coherent().
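A minimal kernel-style sketch of the resulting allocation, assuming a per-context buffer; the context struct, function name and size below are illustrative, not the actual fw-ohci code:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/errno.h>

#define AR_BUFFER_SIZE	(16 * 1024)	/* illustrative size */

struct ar_ctx {
	void *buffer;			/* CPU address of the AR buffer */
	dma_addr_t buffer_bus;		/* bus address handed to the controller */
};

static int ar_buffer_alloc(struct device *dev, struct ar_ctx *ctx)
{
	/* coherent memory: CPU and device share a consistent view, so no
	 * per-packet dma_sync_single_for_cpu()/_for_device() calls needed */
	ctx->buffer = dma_alloc_coherent(dev, AR_BUFFER_SIZE,
					 &ctx->buffer_bus, GFP_KERNEL);
	if (!ctx->buffer)
		return -ENOMEM;
	return 0;
}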
This resolves constant buffer failures on my own x86_64 laptop with 4GB of
RAM and is likely to fix a number of other failures witnessed on x86_64 systems with
4GB of RAM or more.
Signed-off-by: Jarod Wilson <jwilson@redhat.com>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Similar to the nfnetlink_log problem, nfnetlink_queue incorrectly
returns -EPERM when binding or unbinding to an address family and
queueing instance 0 exists and is owned by a different process. Unlike
nfnetlink_log, it previously completed the operation, but it is still
incorrect.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband:
IPoIB: Allocate priv->tx_ring with vmalloc()
IPoIB/cm: Set tx_wr.num_sge in connected mode post_send()
IPoIB: Don't drop multicast sends when they can be queued
IB/ipath: Reset the retry counter for RDMA_READ_RESPONSE_MIDDLE packets
IB/ipath: Fix error completion put on send CQ instead of recv CQ
IB/ipath: Fix RC QP initialization
IB/ipath: Fix potentially wrong RNR retry counter returned in ipath_query_qp()
IB/ipath: Fix IB compliance problems with link state vs physical state
Fix I/O errors due to SYM13FW500's inability to handle larger request
sizes. Reported by Piergiorgio Sartor <piergiorgio.sartor@nexgo.de> for
firewire-sbp2 in https://bugzilla.redhat.com/show_bug.cgi?id=436879
This fix is necessary because sbp2's default request size limit has been
lifted since 2.6.25-rc1.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Jarod Wilson <jwilson@redhat.com>
When binding or unbinding to an address family, the res_id is usually set
to zero. When logging instance 0 already exists and is owned by a different
process, this makes nfulnl_recv_config return -EPERM without performing
the bind operation.
Since no operation on the foreign logging instance itself was requested,
this is incorrect. Move the bind/unbind commands before the instance
permission checks.
Also remove an incorrect comment.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
oops, forgot this in the previous commit.
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
async_tx: checkpatch says s/__FUNCTION__/__func__/g
iop-adma.c: replace remaining __FUNCTION__ occurrences
fsldma: Add a completed cookie updated action in DMA finish interrupt.
fsldma: Add device_prep_dma_interrupt support to fsldma.c
dmaengine: Fix a bug about BUG_ON() on DMA engine capability DMA_INTERRUPT.
fsldma: Fix fsldma.c warning messages when it's compiled under PPC64.
Commit 7143740d ("IPoIB: Add send gather support") made struct
ipoib_tx_buf significantly larger, since the mapping member changed
from a single u64 to an array with MAX_SKB_FRAGS + 1 entries. This
means that allocating tx_rings with kzalloc() may fail because there
is not enough contiguous memory for the new, much bigger size. Fix
this regression by allocating the rings with vmalloc() instead.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Fix I/O errors due to SYM13FW500's inability to handle larger request
sizes. Reported by Piergiorgio Sartor <piergiorgio.sartor@nexgo.de> in
https://bugzilla.redhat.com/show_bug.cgi?id=436879
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Jarod Wilson <jwilson@redhat.com>
There's a horrible slab abuse in net/netfilter/nf_conntrack_extend.c
that can be replaced with a call to ksize().
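For reference, a kernel-style sketch of the replacement idea; the helper name is made up, only ksize() itself is the real slab API:

#include <linux/slab.h>

/* does the existing allocation already have room for 'needed' bytes? */
static bool ext_fits_in_allocation(const void *ext, size_t needed)
{
	/* ksize() reports the usable size of the underlying slab object,
	 * which may be larger than the size originally requested */
	return needed <= ksize(ext);
}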
Cc: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 2f569afd9ced9ebec9a6eb3dbf6f83429be0a7b4 broke the compile
rather spectacularly. Fix code errors.
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
* git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/pci-2.6:
PCI: fix issue with busses registering multiple times in sysfs
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Commit 7143740d ("IPoIB: Add send gather support") made it possible
for tx_wr.num_sge to be != 1 -- this happens if send gather support is
enabled. However, the code in the connected mode post_send() function
assumes the old invariant, namely that tx_wr.num_sge is always 1. Fix
this by explicitly setting tx_wr.num_sge to 1 in the CM post_send().
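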
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Remove some less necessary information, and point out that video1394 and
dv1394 should be blacklisted along with ohci1394.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Alexey Dobriyan <adobriyan@sw.ru>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
They make way more sense here, really...
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
Commit 681cc5cd3efbeafca6386114070e0bfb5012e249 ("iommu sg merging:
swiotlb: respect the segment boundary limits") introduced two
possibilities for entering an endless loop in lib/swiotlb.c:
- if max_slots is zero (possible if mask is ~0UL)
- if the number of slots requested fits into a swiotlb segment, but is
too large for the part of a segment which remains after considering
offset_slots
This fixes both issues.
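A user-space sketch of the first hazard and a guard against it; the constants loosely mirror lib/swiotlb.c, but this is illustrative, not the upstream fix:

#include <stdio.h>

#define IO_TLB_SHIFT	11
#define BITS_PER_LONG	((int)(8 * sizeof(unsigned long)))
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((unsigned long)(a) - 1))

/* guard against mask == ~0UL, where mask + 1 overflows to 0 and a
 * max_slots value derived from it would be 0, wedging the search loop */
static unsigned long compute_max_slots(unsigned long mask)
{
	if (mask + 1 == 0)
		return 1UL << (BITS_PER_LONG - IO_TLB_SHIFT);
	return ALIGN(mask + 1, 1UL << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
}

int main(void)
{
	printf("max_slots(0xffffff) = %lu\n", compute_max_slots(0xffffffUL));
	printf("max_slots(~0UL)     = %lu\n", compute_max_slots(~0UL));
	return 0;
}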
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
PCI busses can be registered multiple times, so we need to detect if we
have registered our bus structure in sysfs already. If so, don't do it
again.
Thanks to Guennadi Liakhovetski <g.liakhovetski@gmx.de> for reporting
the problem, and to Linus for poking me to get me to believe that it was
a real problem.
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
__FUNCTION__ is gcc-specific, use __func__
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
When set_multicast_list() is called the multicast task is restarted
and the IPOIB_MCAST_STARTED bit is cleared. As a result, for some
window of time, multicast packets are neither transmitted nor queued but
rather dropped by ipoib_mcast_send(). These dropped packets are
painful in two cases:
- bonding fail-over which both calls set_multicast_list() on the new
active slave and sends Gratuitous ARP through that slave.
- IP_DROP_MEMBERSHIP code which both calls set_multicast_list() on the
device and issues IGMP leave.
In both these cases, depending on the scheduling of the IPoIB
multicast task, the packets would be dropped. As a result, in the
bonding case, the failover would not be detected by the peers until
their neighbour entry is renewed (which takes a few tens of
seconds). In the IGMP case, the IP router doesn't get an IGMP leave
and would only learn about it from further probes on the group (also a
delay of at least a few tens of seconds).
Fix this by allowing transmission (or queuing) depending on the
IPOIB_FLAG_OPER_UP flag instead of the IPOIB_MCAST_STARTED flag.
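A kernel-style sketch of the gating check described above; the flag's bit value and the helper function are illustrative, only test_bit() and the flag name come from the driver:

#include <linux/bitops.h>

#define IPOIB_FLAG_OPER_UP	0	/* illustrative bit index */

/* allow transmission (or queuing) while the interface is operationally up,
 * even if the multicast task is being restarted */
static int ipoib_mcast_may_send(const unsigned long *flags)
{
	return test_bit(IPOIB_FLAG_OPER_UP, flags);
}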
Signed-off-by: Olga Shern <olgas@voltaire.com>
Signed-off-by: Or Gerlitz <ogerlitz@voltaire.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
If this ever happens to anybody, we want to have it in his log.
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>