Commit Graph

131577 Commits

john stultz
b13e24644c x86, hpet: fix for LS21 + HPET = boot hang
Between 2.6.23 and 2.6.24-rc1 a change was made that broke IBM LS21
systems that had the HPET enabled in the BIOS, resulting in boot hangs
for x86_64.

Specifically commit b8ce335906, which
merges the i386 and x86_64 HPET code.

Prior to this commit, when we set up the HPET timers on x86_64, we did
the following:

	hpet_writel(HPET_TN_ENABLE | HPET_TN_PERIODIC | HPET_TN_SETVAL |
                    HPET_TN_32BIT, HPET_T0_CFG);

However, after the i386/x86_64 HPET merge, we do the following:

	cfg = hpet_readl(HPET_Tn_CFG(timer));
	cfg |= HPET_TN_ENABLE | HPET_TN_PERIODIC |
			HPET_TN_SETVAL | HPET_TN_32BIT;
	hpet_writel(cfg, HPET_Tn_CFG(timer));

However, on LS21s with HPET enabled in the BIOS, the HPET_T0_CFG register
boots with level-triggered interrupts (HPET_TN_LEVEL) enabled.  This
causes the periodic interrupt to be not so periodic, and that results in
the boot-time hang I reported earlier in the delay calibration.

My fix: Always disable HPET_TN_LEVEL when setting up periodic mode.
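
In sketch form, the fixed setup path looks like this (an illustration of the
idea, not the literal upstream diff):

	cfg = hpet_readl(HPET_Tn_CFG(timer));
	cfg &= ~HPET_TN_LEVEL;	/* never inherit BIOS-enabled level triggering */
	cfg |= HPET_TN_ENABLE | HPET_TN_PERIODIC |
			HPET_TN_SETVAL | HPET_TN_32BIT;
	hpet_writel(cfg, HPET_Tn_CFG(timer));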

Signed-off-by: John Stultz <johnstul@us.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-02-13 09:15:46 +01:00
Herton Ronaldo Krzesinski
92258a3ed2 ALSA: hda - Change HP dv7 (103c:30f4) quirk from hp-m4 to hp-dv5 model
Change the HP dv7 quirk: although it was reported to work with the hp-m4
model (https://bugzilla.novell.com/show_bug.cgi?id=445321), the original
report doesn't say whether the internal microphone was tested.

Recently I received a report about the internal mic not working
(https://qa.mandriva.com/show_bug.cgi?id=44855#c193); this must be
related to the forced line-in on pin 0x0e done with the hp-m4 model.
Thus change the current quirk from STAC_HP_M4 to STAC_HP_DV5, which was
later reported to fix the issue on a kernel provided with this change
(https://qa.mandriva.com/show_bug.cgi?id=44855#c196).

Signed-off-by: Herton Ronaldo Krzesinski <herton@mandriva.com.br>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
2009-02-13 08:41:00 +01:00
Michael Neuling
26456dcfb8 powerpc/vsx: Fix VSX alignment handler for regs 32-63
Fix the VSX alignment handler for VSX registers 32-63, which are stored
in the VMX part of the thread_struct, not the FPR part.
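
An illustrative sketch of the lookup described above (the field names and the
reg variable are assumptions for illustration, not the handler's exact code):

	if (reg < 32)
		/* FP/VSX registers 0-31 are backed by the FPR area */
		ptr = (char *)&current->thread.fpr[reg];
	else
		/* VSX registers 32-63 are backed by the VMX/Altivec area */
		ptr = (char *)&current->thread.vr[reg - 32];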

Signed-off-by: Michael Neuling <mikey@neuling.org>
CC: stable@kernel.org (2.6.27 & .28 please)
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-02-13 16:37:45 +11:00
Geoff Levand
0047656e2a powerpc/ps3: Move ps3_mm_add_memory to device_initcall
Change the PS3 hotplug memory routine ps3_mm_add_memory() from
a core_initcall to a device_initcall.

core_initcall routines run before the powerpc topology_init()
startup routine, which is a subsys_initcall, resulting in
failure of ps3_mm_add_memory() when CONFIG_NUMA=y.  When
ps3_mm_add_memory() fails, the system will boot with just the
128 MiB of boot memory.
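
In sketch form the change is just the initcall level (the idea, not the
literal diff):

	/* was core_initcall(ps3_mm_add_memory); device_initcall runs after
	 * topology_init() (a subsys_initcall), so the NUMA structures are
	 * ready when the hotplug memory is added */
	device_initcall(ps3_mm_add_memory);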

Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-02-13 16:37:45 +11:00
Dave Hansen
06eccea6c3 powerpc/mm: Fix numa reserve bootmem page selection
Fix the powerpc NUMA reserve bootmem page selection logic.

commit 8f64e1f2d1 (powerpc: Reserve
in bootmem lmb reserved regions that cross NUMA nodes) changed
the logic for how the powerpc LMB reserved regions were converted
to bootmem reserved regions.  As the following discussion reports,
the new logic was not correct.

mark_reserved_regions_for_nid() goes through each LMB on the
system that specifies a reserved area.  It searches for
active regions that intersect with that LMB and are on the
specified node.  It attempts to bootmem-reserve only the area
where the active region and the reserved LMB intersect.  We
can not reserve things on other nodes as they may not have
bootmem structures allocated, yet.

We base the size of the bootmem reservation on two possible
things.  Normally, we just make the reservation start and
stop exactly at the start and end of the LMB.

However, the LMB reservations are not aware of NUMA nodes and
on occasion a single LMB may cross into several adjacent
active regions.  Those may even be on different NUMA nodes
and will require separate calls to the bootmem reserve
functions.  So, the bootmem reservation must be trimmed to
fit inside the current active region.

That's all fine and dandy, but we trim the reservation
in a page-aligned fashion.  That's bad because we start the
reservation at a non-page-aligned address: physbase.

The reservation may only span 2 bytes, but those bytes
may span two pfns and cause a reserve_size of 2*PAGE_SIZE.

Take the case where you reserve 0x2 bytes at 0x0fff and
where the active region ends at 0x1000.  You'll jump into
that if() statement, but node_ar.end_pfn=0x1 and
start_pfn=0x0.  You'll end up with a reserve_size=0x1000,
and then call

  reserve_bootmem_node(node, physbase=0xfff, size=0x1000);

0x1000 may not be on the same node as 0xfff.  Oops.

In almost all the vm code, end_<anything> is not inclusive.
If you have an end_pfn of 0x1234, page 0x1234 is not
included in the range.  Using PFN_UP() instead of the
(>> PAGE_SHIFT) will make this consistent with the other VM
code.

We also need to do math for the reserved size with physbase
instead of start_pfn.  node_ar.end_pfn << PAGE_SHIFT is
*precisely* the end of the node.  However,
(start_pfn << PAGE_SHIFT) is *NOT* precisely the beginning
of the reserved area.  That is, of course, physbase.
If we don't use physbase here, the reserve_size can be
made too large.
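
A sketch of the corrected size math, using the names from the discussion
above (assumed form, not the literal diff):

	/* trim to the current active region, measuring from physbase (the
	 * real, possibly unaligned start) rather than start_pfn << PAGE_SHIFT */
	if (end_pfn > node_ar.end_pfn)
		reserve_size = (node_ar.end_pfn << PAGE_SHIFT) - physbase;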

From: Dave Hansen <dave@linux.vnet.ibm.com>
Tested-by: Geoff Levand <geoffrey.levand@am.sony.com>  Tested on PS3.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-02-13 16:37:45 +11:00
Philippe Gerum
fbc78b07ba powerpc/mm: Fix _PAGE_CHG_MASK to protect _PAGE_SPECIAL
Fix _PAGE_CHG_MASK so that pte_modify() does not affect the _PAGE_SPECIAL bit.
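
In sketch form (the companion bits vary by sub-arch and are assumed here; the
point is that _PAGE_SPECIAL joins the set of bits pte_modify() preserves):

	#define _PAGE_CHG_MASK	(PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \
				 _PAGE_SPECIAL)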

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-02-13 16:37:44 +11:00
Linus Torvalds
37bed90094 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6: (32 commits)
  wimax: fix oops in wimax_dev_get_by_genl_info() when looking up non-wimax iface
  net: 4 bytes kernel memory disclosure in SO_BSDCOMPAT gsopt try #2
  netxen: fix compile warning "label ‘set_32_bit_mask’ defined but not used" on IA64 platform
  bnx2: Update version to 1.9.2 and copyright.
  bnx2: Fix jumbo frames error handling.
  bnx2: Update 5709 firmware.
  bnx2: Update 5706/5708 firmware.
  3c505: do not set pcb->data.raw beyond its size
  Documentation/connector/cn_test.c: don't use gfp_any()
  net: don't use in_atomic() in gfp_any()
  IRDA: cnt is off by 1
  netxen: remove pcie workaround
  sun3: print when lance_open() fails
  qlge: bugfix: Add missing rx buf clean index on early exit.
  qlge: bugfix: Fix RX scaling values.
  qlge: bugfix: Fix TSO breakage.
  qlge: bugfix: Add missing dev_kfree_skb_any() call.
  qlge: bugfix: Add missing put_page() call.
  qlge: bugfix: Fix fatal error recovery hang.
  qlge: bugfix: Use netif_receive_skb() and vlan_hwaccel_receive_skb().
  ...
2009-02-12 17:47:15 -08:00
Inaky Perez-Gonzalez
1d7b33f77b wimax: fix oops in wimax_dev_get_by_genl_info() when looking up non-wimax iface
When a non-wimax interface is looked up by the stack, a bad pointer is
returned when the looked-up interface is not found in the list (of
registered WiMAX interfaces). This causes an oops in the caller when
trying to use the pointer.

Fix by properly setting the pointer to NULL if we don't exit from the
list_for_each() with a found entry.

Signed-off-by: Inaky Perez-Gonzalez <inaky@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 17:00:20 -08:00
Clément Lecigne
df0bca049d net: 4 bytes kernel memory disclosure in SO_BSDCOMPAT gsopt try #2
In sock_getsockopt() in net/core/sock.c, the option value v.val is not
correctly initialized and is returned directly to userland when the
SO_BSDCOMPAT option is queried.

This dummy code should trigger the bug:

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
	unsigned char buf[4] = { 0, 0, 0, 0 };
	socklen_t len = sizeof(buf);	/* was uninitialized */
	int sock;

	sock = socket(33, 2, 2);
	getsockopt(sock, SOL_SOCKET, SO_BSDCOMPAT, buf, &len);
	printf("%x%x%x%x\n", buf[0], buf[1], buf[2], buf[3]);
	close(sock);
	return 0;
}

Here is a patch that fixes this bug by initializing v.val just after its
declaration.
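
A sketch of the kind of initialization described (the union members shown are
an assumption for illustration):

	union {
		int val;
		struct linger ling;
		struct timeval tm;
	} v;

	/* zero the option value up front so cases that never write to it
	 * (such as SO_BSDCOMPAT) cannot return stack garbage to userland */
	memset(&v, 0, sizeof(v));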

Signed-off-by: Clément Lecigne <clement.lecigne@netasq.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:59:09 -08:00
Yang Hongyang
354b45fff9 netxen: fix compile warning "label ‘set_32_bit_mask’ defined but not used" on IA64 platform
When compiling the latest kernel on the IA64 platform, I got a warning:
drivers/net/netxen/netxen_nic_main.c:203: warning: label ‘set_32_bit_mask’ 
defined but not used

We do not need the label ‘set_32_bit_mask’ on the IA64 platform, so move it into the #else branch.

Signed-off-by: Yang Hongyang <yanghy@cn.fujitsu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:57:12 -08:00
Michael Chan
a6952b5299 bnx2: Update version to 1.9.2 and copyright.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:54:48 -08:00
Michael Chan
990ec3804b bnx2: Fix jumbo frames error handling.
If errors are reported on a frame descriptor, we need to
account for the buffer pages that may have been used for this
error packet and recycle them.  Otherwise, we may get the wrong
pages for the next packet.

Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Signed-off-by: Benjamin Li <benli@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:54:13 -08:00
Michael Chan
259436a505 bnx2: Update 5709 firmware.
New firmware fixes a data corruption issue when receiving and
placing jumbo frames into host buffers.  In some cases, the
buffer descriptor is not updated correctly and this will lead
to the driver linking the wrong number of pages into the SKB.

Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:53:48 -08:00
Michael Chan
86f95f9eac bnx2: Update 5706/5708 firmware.
New firmware fixes a data corruption issue when receiving and
placing jumbo frames into host buffers.  In some cases, the
buffer descriptor is not updated correctly and this will lead
to the driver linking the wrong number of pages into the SKB.

Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:53:22 -08:00
Roel Kluin
501aa061bd 3c505: do not set pcb->data.raw beyond its size
Ensure that we do not set pcb->data.raw beyond its size; print an error message
and return false if we attempt to.  A timeout message was printed one iteration
too early.

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:52:31 -08:00
Andrew Morton
f82da72339 Documentation/connector/cn_test.c: don't use gfp_any()
cn_test_timer_func() is a timer handler and can never use GFP_KERNEL -
there's no point in using gfp_any() here.

Also, use setup_timer().

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:47:01 -08:00
Andrew Morton
9970937273 net: don't use in_atomic() in gfp_any()
The problem is that in_atomic() will return false inside spinlocks if
CONFIG_PREEMPT=n.  This will lead to deadlockable GFP_KERNEL allocations
from spinlocked regions.

Secondly, if CONFIG_PREEMPT=y, this bug solves itself because networking
will instead use GFP_ATOMIC from this callsite.  Hence we won't get the
might_sleep() debugging warnings which would have informed us of the buggy
callsites.

Solve both these problems by switching to in_interrupt().  Now, if someone
runs a gfp_any() allocation from inside a spinlock we will get the warning
if CONFIG_PREEMPT=y.
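
The resulting helper, in sketch form (assuming the usual one-line definition):

	static inline gfp_t gfp_any(void)
	{
		/* in_interrupt() is true in hardirq and softirq context,
		 * where sleeping allocations are never allowed */
		return in_interrupt() ? GFP_ATOMIC : GFP_KERNEL;
	}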

I reviewed all callsites and most of them were too complex for my little
brain and none of them documented their interface requirements.  I have no
idea what this patch will do.

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:43:17 -08:00
Roel Kluin
fb0886745a IRDA: cnt is off by 1
If no prior break occurs, cnt reaches 101 after the loop, so we are still able
to change speed when cnt has become 100.

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:42:31 -08:00
Dhananjay Phadke
acdb602fb3 netxen: remove pcie workaround
Remove workaround for pcie bug in early revisions of NX3031
(rev 41 or earlier). This is taken care of during firmware init.

The workaround required writing pcie config reg of every
pcie function on a card, not all of which are enabled.

Signed-off-by: Dhananjay Phadke <dhananjay@netxen.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:41:14 -08:00
Roel Kluin
cff71e89a8 sun3: print when lance_open() fails
With while (--i > 0) { ... }, i reaches 0 when the loop runs out; print a
message when lance_open() fails.

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:40:20 -08:00
Ron Mercer
79d2b29e8a qlge: bugfix: Add missing rx buf clean index on early exit.
The large receive buffer queue is not properly tracking the current
index in the case where an early exit occurs.  This can happen when a
page alloc or dma mapping fails.  If this occurs the queue will get
out of sync and invalid indexes can be written to the hardware.

Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:38:34 -08:00
Ron Mercer
def48b6e32 qlge: bugfix: Fix RX scaling values.
Receive packets were only scaling across 2 of the receive queues. The
value was hardcoded to 2 instead of being based on how many rx queues
were running.

Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:38:18 -08:00
Ron Mercer
0d979f74a9 qlge: bugfix: Fix TSO breakage.
Moved the buffer mapping to a point after TSO logic has modified the
iph->check field. We were seeing stale data on the PCIe bus.

Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:38:03 -08:00
Ron Mercer
06a3d510fa qlge: bugfix: Add missing dev_kfree_skb_any() call.
We put the skb back if we can't get mapping for it.  We don't
want unmapped buffers on our receive buffer queue.

Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:37:48 -08:00
Ron Mercer
f2603c2c57 qlge: bugfix: Add missing put_page() call.
We put the page back if we can't get mapping for it.  We don't
want unmapped buffers on our receive buffer queue.

Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:37:32 -08:00
Ron Mercer
6497b607fb qlge: bugfix: Fix fatal error recovery hang.
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:37:13 -08:00
Ron Mercer
7a9deb661f qlge: bugfix: Use netif_receive_skb() and vlan_hwaccel_receive_skb().
Replace calls to vlan_hwaccel_rx() and netif_rx().
Thanks to Dave Miller for pointing out that the driver was making
the wrong upcall for passing packets into the stack.

Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:36:50 -08:00
Roel Kluin
d4675b52a9 TG3: limit reaches -1
With while (limit--) { ... }, limit reaches -1 after the loop, so 0 means success.
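
An illustrative sketch of the pattern (the loop body and names are
placeholders, not the driver's code):

	int limit = 100;

	while (limit--) {
		if (hw_ready())		/* placeholder exit condition */
			break;
		udelay(10);
	}
	/* if the loop exhausts every iteration, limit ends at -1, not 0;
	 * 0 still means the break fired on the last pass, i.e. success */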

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Acked-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-12 16:33:27 -08:00
Thomas Gleixner
7ad9de6ac8 x86: CPA avoid repeated lazy mmu flush
Impact: Flush the lazy MMU only once

Pending mmu updates only need to be flushed once to bring the
in-memory pagetable state up to date.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2009-02-12 23:11:58 +01:00
Thomas Gleixner
34b0900d32 x86: warn if arch_flush_lazy_mmu_cpu is called in preemptible context
Impact: Catch cases where lazy MMU state is active in a preemptible context

arch_flush_lazy_mmu_cpu() has been changed to disable preemption so
the checks in enter/leave will never trigger.  Put the preemptible()
check into arch_flush_lazy_mmu_cpu() to catch such cases.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2009-02-12 23:11:58 +01:00
Jeremy Fitzhardinge
d85cf93da6 x86/paravirt: make arch_flush_lazy_mmu/cpu disable preemption
Impact: avoid access to percpu vars in preemptible context

They are intended to be used whenever there's the possibility
that there's some stale state which is going to be overwritten
with a queued update, or to force a state change when we may be
in lazy mode.  Either way, we could end up calling it with
preemption enabled, so wrap the functions in their own little
preempt-disable section so they can be safely called in any
context (though preemption should never be enabled if we're actually
in a lazy state).

(Move out of line to avoid #include dependencies.)
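
A minimal sketch of the wrapped flush (assumed form, not the literal patch):

	void arch_flush_lazy_mmu_mode(void)
	{
		preempt_disable();

		if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) {
			/* leaving and re-entering lazy mode flushes the
			 * queued updates */
			arch_leave_lazy_mmu_mode();
			arch_enter_lazy_mmu_mode();
		}

		preempt_enable();
	}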
    
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2009-02-12 23:11:58 +01:00
Yan Zheng
2456242530 Btrfs: hold trans_mutex when using btrfs_record_root_in_trans
btrfs_record_root_in_trans needs the trans_mutex held to make sure two
callers don't race to set up the root in a given transaction.  This adds
it to all the places that were missing it.

Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
2009-02-12 14:14:53 -05:00
Chris Mason
4008c04a07 Btrfs: make a lockdep class for the extent buffer locks
Btrfs is currently using spin_lock_nested with a nested value based
on the tree depth of the block.  But, this doesn't quite work because
the max tree depth is bigger than what spin_lock_nested can deal with,
and because locks are sometimes taken before the level field is filled in.

The solution here is to use lockdep_set_class_and_name instead, and to
set the class before unlocking the pages when the block is read from the
disk and just after init of a freshly allocated tree block.
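
An illustrative sketch of the per-level annotation (the class array, helper
name and lock field are assumptions):

	static struct lock_class_key btrfs_eb_class[BTRFS_MAX_LEVEL + 1];

	/* called after reading a block from disk or right after allocating a
	 * fresh tree block, before anyone else can take the lock */
	static void btrfs_set_buffer_lockdep_class(struct extent_buffer *eb,
						   int level)
	{
		lockdep_set_class_and_name(&eb->lock, &btrfs_eb_class[level],
					   "btrfs-extent-buffer");
	}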

btrfs_clear_path_blocking is also changed to take the locks in the proper
order, and it also makes sure all the locks currently held are properly
set to blocking before it tries to retake the spinlocks.  Otherwise, lockdep
gets upset about bad lock ordering.

The lockdep magic came from Peter Zijlstra <peterz@infradead.org>.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-02-12 14:09:45 -05:00
Linus Torvalds
071a0bc2ce Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6:
  mm: Export symbol ksize()
2009-02-12 09:56:14 -08:00
Nick Piggin
3a4c6800f3 Fix page writeback thinko, causing Berkeley DB slowdown
A bug was introduced into write_cache_pages cyclic writeout by commit
31a12666d8 ("mm: write_cache_pages cyclic
fix").  The intention (and comments) is that we should cycle back and
look for more dirty pages at the beginning of the file if there is no
more work to be done.

But the !done condition was dropped from the test.  This means that any
time the page writeout loop breaks (e.g. due to nr_to_write == 0), we
will set index to 0, then goto again.  This will set done_index to
index, then find done is set, so will proceed to the end of the
function.  When updating mapping->writeback_index for cyclic writeout,
we now use done_index == 0, so we're always cycling back to 0.
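
In sketch form, the retry test with the missing condition restored
(simplified from write_cache_pages(), not the literal diff):

	if (wbc->range_cyclic && !done && !cycled) {
		/* we hit the end of the file without finishing; wrap once */
		cycled = 1;
		index = 0;
		end = writeback_index - 1;
		goto retry;
	}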

This seemed to be causing random mmap writes (slapadd and iozone) to
start writing more pages from the LRU and writeout would slow down, and
caused bugzilla entry

	http://bugzilla.kernel.org/show_bug.cgi?id=12604

about Berkeley DB slowing down dramatically.

With this patch, iozone random write performance is increased nearly
5x on my system (iozone -B -r 4k -s 64k -s 512m -s 1200m on ext2).

Signed-off-by: Nick Piggin <npiggin@suse.de>
Reported-and-tested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-02-12 08:10:53 -08:00
Kirill A. Shutemov
b1aabecd55 mm: Export symbol ksize()
Commit 7b2cd92adc ("crypto: api - Fix
zeroing on free") added a modular user of ksize().  Export it to fix
crypto.ko compilation.
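
The fix in sketch form, next to ksize()'s definition in the slab allocators:

	EXPORT_SYMBOL(ksize);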

Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
2009-02-12 17:50:46 +02:00
Julia Lawall
3f3420df50 Btrfs: fs/btrfs/volumes.c: remove useless kzalloc
The call to kzalloc is followed by a kmalloc whose result is stored in the
same variable.

The semantic match that finds the problem is as follows:
(http://www.emn.fr/x-info/coccinelle/)

// <smpl>
@r exists@
local idexpression x;
statement S;
expression E;
identifier f,l;
position p1,p2;
expression *ptr != NULL;
@@

(
if ((x@p1 = \(kmalloc\|kzalloc\|kcalloc\)(...)) == NULL) S
|
x@p1 = \(kmalloc\|kzalloc\|kcalloc\)(...);
...
if (x == NULL) S
)
<... when != x
     when != if (...) { <+...x...+> }
x->f = E
...>
(
 return \(0\|<+...x...+>\|ptr\);
|
 return@p2 ...;
)

@script:python@
p1 << r.p1;
p2 << r.p2;
@@

print "* file: %s kmalloc %s return %s" % (p1[0].file,p1[0].line,p2[0].line)
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-02-12 10:16:03 -05:00
Qinghuang Feng
a48ddf08ba Btrfs: remove unused code in split_state()
These two lines are not used, remove them.

Signed-off-by: Qinghuang Feng <qhfeng.kernel@gmail.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-02-12 14:25:23 -05:00
Jeff Mahoney
e00f730865 Btrfs: remove btrfs_init_path
btrfs_init_path was initially used when the path objects were on the
stack.  Now all the work is done by btrfs_alloc_path and btrfs_init_path
isn't required.

This patch removes it, and just uses kmem_cache_zalloc to zero out the object.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-02-12 14:11:25 -05:00
Jeff Mahoney
7951f3cefb Btrfs: balance_level checks !child after access
The BUG_ON() is in the wrong spot.

Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-02-12 10:06:15 -05:00
Yan Zheng
b335b0034e Btrfs: Avoid using __GFP_HIGHMEM with slab allocator
btrfs_releasepage may call kmem_cache_alloc indirectly,
passing along the same GFP flags it receives.  So it's
possible for __GFP_HIGHMEM to reach the slab allocator,
which cannot handle highmem allocations.

Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
2009-02-12 10:06:04 -05:00
Chris Mason
e1df36d2f1 Btrfs: don't clean old snapshots on sync(1)
Cleaning old snapshots can make sync(1) somewhat slow, and some users
and applications still use it in a global fsync kind of workload.

This patch changes btrfs not to clean old snapshots during sync, which is
safe from a FS consistency point of view.  The major downside is that it
makes it difficult to tell when old snapshots have been reaped and
the space they were using has been reclaimed.  A new ioctl will be added
for this purpose instead.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-02-12 09:45:08 -05:00
Chris Mason
536ac8ae86 Btrfs: use larger metadata clusters in ssd mode
Larger metadata clusters can significantly improve writeback performance
on ssd drives with large erasure blocks.  The larger clusters make it
more likely a given IO will completely overwrite the ssd block, so it
doesn't have to do an internal read-modify-write cycle.

On spinning media, larger metadata clusters end up spreading out the
metadata more over time, which makes fsck slower, so we don't want this
to be the default.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-02-12 09:41:38 -05:00
Chris Mason
b288052e17 Btrfs: process mount options on mount -o remount,
Btrfs wasn't parsing any new mount options during remount, making it
difficult to set mount options on a root drive.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
2009-02-12 09:37:35 -05:00
Josef Bacik
eb09967089 Btrfs: make sure all pending extent operations are complete
There's a slight problem with finish_current_insert: if we set all to 1 and then
go through and don't actually skip any of the extents on the pending list, we
could exit right after we've added new extents.

This is a problem because by inserting the new extents we could have gotten new
COW's to happen and such, so we may have some pending updates to do or even
more inserts to do after that.

So this patch will only exit if we have never skipped any of the extents in the
pending list and we have no extents to insert; this will make sure that all of
the pending work is truly done before we return.  I've been running with this
patch for a few days with all of my other testing and have not seen issues.
Thanks,

Signed-off-by: Josef Bacik <jbacik@redhat.com>
2009-02-12 09:27:38 -05:00
Ingo Molnar
a0490fa35d sched: cpu hotplug fix
rq_attach_root() does a kfree() with the runqueue lock held.

That's not a very wise move; fix it.
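
An illustrative sketch of the kind of fix described (names assumed; the point
is deferring the kfree() until the runqueue lock is dropped):

	struct root_domain *old_rd = NULL;

	spin_lock_irqsave(&rq->lock, flags);
	if (rq->rd)
		old_rd = rq->rd;	/* remember it, free it later */
	rq->rd = rd;
	spin_unlock_irqrestore(&rq->lock, flags);

	if (old_rd)
		kfree(old_rd);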

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-02-12 11:57:36 +01:00
Suresh Siddha
be03d9e802 x86, pat: fix warn_on_once() while mapping 0-1MB range with /dev/mem
Jeff Mahoney reported:

> With Suse's hwinfo tool, on -tip:
> WARNING: at arch/x86/mm/pat.c:637 reserve_pfn_range+0x5b/0x26d()

reserve_pfn_range() is not tracking the memory range below 1MB
as non-RAM, and as such is inconsistent with similar checks in
reserve_memtype() and free_memtype().

Rename pagerange_is_ram() to pat_pagerange_is_ram() and add the
"track legacy 1MB region as non RAM" condition.

Also fix reserve_pfn_range() to return -EINVAL when the pfn
range is RAM, to be consistent with the API design.

Reported-and-tested-by: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-02-12 08:27:27 +01:00
Jeremy Fitzhardinge
4f06b0436b x86/cpa: make sure cpa is safe to call in lazy mmu mode
Impact: fix race leading to crash under KVM and Xen

The CPA code may be called while we're in lazy mmu update mode - for
example, when using DEBUG_PAGEALLOC and doing a slab allocation
in an interrupt handler which interrupted a lazy mmu update.  In this
case, the in-memory pagetable state may be out of date due to pending
queued updates.  We need to flush any pending updates before inspecting
the page table.  Similarly, we must explicitly flush any modifications
CPA may have made (which comes down to flushing queued operations when
flushing the TLB).

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Acked-by: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Stable Kernel <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-02-12 08:27:26 +01:00
Linus Torvalds
b578f3fcca Merge git://git.infradead.org/users/cbou/battery-2.6.29
* git://git.infradead.org/users/cbou/battery-2.6.29:
  pcf50633_charger: Fix typo
2009-02-11 16:28:08 -08:00
Takashi Iwai
26a74f1f61 ALSA: hda - Register (new) devices at reconfig
The devices that have been newly added during reconfig must be
registered.  Otherwise they won't be visible to user-space.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
2009-02-12 00:13:19 +01:00