Most SH platforms aren't going to need more than a single active
region; ones that need more can pad this out as necessary.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This moves SH over to the generic quicklists. As per x86_64,
we have special mappings for the PGDs, so these go on their
own list.
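As a rough sketch (not lifted from the patch; the list indices, ctor and
GFP flags are assumptions), the PGD allocator under quicklists ends up
looking something like:

#include <linux/quicklist.h>

#define QUICK_PGD	0	/* PGDs, which need the kernel mappings preset */
#define QUICK_PT	1	/* everything else */

static void pgd_ctor(void *pgd)
{
	/* would copy the kernel portion of swapper_pg_dir into the new pgd */
}

static inline pgd_t *pgd_alloc(struct mm_struct *mm)
{
	return quicklist_alloc(QUICK_PGD, GFP_KERNEL, pgd_ctor);
}

static inline void pgd_free(pgd_t *pgd)
{
	quicklist_free(QUICK_PGD, NULL, pgd);
}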
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Remove includes of <linux/smp_lock.h> where it is not used/needed.
Suggested by Al Viro.
Builds cleanly on x86_64, i386, alpha, ia64, powerpc, sparc,
sparc64, and arm (all 59 defconfigs).
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Use SLAB_PANIC and delete duplicated panic().
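The conversion boils down to this sort of before/after (hypothetical
cache name, using the six-argument kmem_cache_create() of this era):

static struct kmem_cache *foo_cachep;	/* hypothetical cache */

static void __init foo_cache_init(void)
{
	/*
	 * Previously:
	 *	foo_cachep = kmem_cache_create("foo_cache", sizeof(struct foo),
	 *				       0, 0, NULL, NULL);
	 *	if (!foo_cachep)
	 *		panic("Could not create foo_cache");
	 *
	 * With SLAB_PANIC, the slab core performs the panic itself:
	 */
	foo_cachep = kmem_cache_create("foo_cache", sizeof(struct foo),
				       0, SLAB_PANIC, NULL, NULL);
}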
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Ian Molton <spyro@f2s.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This fixes up SH7705 CPU support and the SE7705 board
for some of the recent changes.
Signed-off-by: Nobuhiro Iwamatsu <nobuhiro.iwamatsu.zh@hitachi.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This reworks some of the node 0 bootmem initialization in
preparation for discontigmem and sparsemem support.
As a result, we switch over to ARCH_POPULATES_NODE_MAP.
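With ARCH_POPULATES_NODE_MAP the architecture only has to register its
memory ranges and hand the zone limits to the generic code; a minimal
sketch of the shape of it (the function name and pfn arguments here are
placeholders, not the actual SH code):

/* Hypothetical outline; start_pfn/end_pfn come from the platform. */
static void __init register_node0_mem(unsigned long start_pfn, unsigned long end_pfn)
{
	unsigned long max_zone_pfns[MAX_NR_ZONES];

	/* Hand node 0's usable range to the generic active-region map. */
	add_active_range(0, start_pfn, end_pfn);

	/* Let the core size and initialize the zones from that map. */
	memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
	max_zone_pfns[ZONE_NORMAL] = end_pfn;
	free_area_init_nodes(max_zone_pfns);
}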
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This ended up causing problems for older parts (particularly ones
using PTEA). Revert this for now; it can be added back in once it's
had some more testing.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Convert some of the global flush users over to the local variants
in places where the global routines aren't needed.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Rename the existing flush routines to local_ variants for use by
the IPI-backed global flush routines on SMP.
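The intent is for the global entry points to become thin wrappers that
run the local variants on every CPU via IPI; roughly like the sketch
below (note that on_each_cpu()'s argument list has varied across kernel
versions):

static void flush_tlb_all_ipi(void *unused)
{
	local_flush_tlb_all();
}

void flush_tlb_all(void)
{
	/* Run the (renamed) local flush on every CPU and wait for completion. */
	on_each_cpu(flush_tlb_all_ipi, NULL, 1, 1);
}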
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There are a lot of bogus cpu_data-> references that only end up working
for the boot CPU; convert these to current_cpu_data to fix up SMP.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Previously this was implemented using a global cache; cache this
per-CPU instead and bump up the number of context IDs to
match NR_CPUS.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Only SH-4 needs to set _PAGE_WT when using write-through caching;
don't attempt to set it on SH-3, where it ends up being a reserved
bit.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This converts the lazy dcache handling to the model described in
Documentation/cachetlb.txt and drops the ptep_get_and_clear() hacks
used for the aliasing dcaches on SH-4 and SH7705 in 32kB mode. As a
bonus, this slightly cuts down on the cache flushing frequency.
With that and the PTEA handling out of the way, the update_mmu_cache()
implementations can be consolidated, and we no longer have to worry
about which configuration the cache is in for the SH7705 case.
And finally, explicitly disable the lazy writeback on SMP (SH-4A).
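For reference, the cachetlb.txt model being adopted defers the flush
when a page has no user mappings yet and completes it in
update_mmu_cache(); a simplified sketch of that pattern (error/pfn
checks trimmed, PG_arch_1 used as the arch-private dirty bit, and
__flush_dcache_page() standing in for the alias-aware flush primitive):

void flush_dcache_page(struct page *page)
{
	struct address_space *mapping = page_mapping(page);

	/* No user mappings yet: just note that the kernel dirtied it. */
	if (mapping && !mapping_mapped(mapping))
		set_bit(PG_arch_1, &page->flags);
	else
		__flush_dcache_page(page);	/* alias-aware writeback/invalidate */
}

void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t pte)
{
	struct page *page = pte_page(pte);

	/* The deferred flush runs once the page is actually mapped to userspace. */
	if (test_and_clear_bit(PG_arch_1, &page->flags))
		__flush_dcache_page(page);
}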
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes up shmin (and SH7706/SH7708) IPR support for some of the
recent API changes.
Signed-off-by: Takashi YOSHII <takasi-y@ops.dti.ne.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Many struct file_operations in the kernel can be "const". Marking them const
moves these to the .rodata section, which avoids false sharing with potential
dirty data. In addition, it catches accidental writes to these shared
resources at compile time.
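The conversion itself is mechanical, e.g. (hypothetical driver and
handlers):

/*
 * Before: "static struct file_operations foo_fops = { ... };" lives in
 * .data and can be scribbled on at runtime. After:
 */
static const struct file_operations foo_fops = {
	.owner	= THIS_MODULE,
	.open	= foo_open,		/* hypothetical handlers */
	.read	= foo_read,
};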
[akpm@osdl.org: sparc64 fix]
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
sh / sh64: Remove ZONE_DMA remains.
Neither arch needs ZONE_DMA.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Convert SH to use generic ioremap_page_range()
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Replace all uses of kmem_cache_t with struct kmem_cache.
The patch was generated using the following script:
#!/bin/sh
#
# Replace one string by another in all the kernel sources.
#
set -e
for file in `find * -name "*.c" -o -name "*.h"|xargs grep -l $1`; do
quilt add $file
sed -e "1,\$s/$1/$2/g" $file >/tmp/$$
mv /tmp/$$ $file
quilt refresh
done
The script was run like this:
sh replace kmem_cache_t "struct kmem_cache"
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Following up on the shared page table work done by Dave McCracken, this
set of patches targets shared page tables for hugetlb memory only.
Shared page tables are particularly useful when a large number of
independent processes share large shared memory segments. In the normal
page case, the amount of memory saved from each process's page tables is
quite significant. For hugetlb, the saving on page table memory is not
the primary objective (as hugetlb itself already cuts down page table
overhead significantly); instead, the purpose of using shared page tables
for hugetlb is to allow faster TLB refill and less cache pollution on a
TLB miss.
With PT sharing, pte entries are shared among hundreds of processes, the
cache footprint of the page tables is smaller, and in return the
application gets a much higher cache hit ratio. Another effect is that
the hardware page walker is more likely to hit ptes that are already in
the cache, which helps reduce TLB miss latency. These two effects
contribute to higher application performance.
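For a rough sense of scale (illustrative numbers, not measurements from
this patch set): a 1GB shared segment mapped with 4kB pages needs about
256k last-level PTEs, i.e. roughly 2MB of page tables per process with
8-byte PTEs, so 500 processes spend around 1GB on page tables alone.
With 2MB huge pages the same segment needs only 512 PTEs (about 4kB) per
process, and with sharing that single 4kB table is allocated once, so
every process's hardware page walk hits the same small set of cachelines.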
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Acked-by: Hugh Dickins <hugh@veritas.com>
Cc: Dave McCracken <dmccr@us.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Adam Litke <agl@us.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The following moves the creation of IPR interrupts into setup-7750.c
and updates a few other things to make it all work after the "Drop
CPU subtype IRQ headers" commit. It boots and runs fine on my titan
board.
- adds an ipr_idx to the ipr_data and uses a function in the subtype
  code to calculate the address of the IPR registers
- adds a function to enable individual interrupt mode for externals
  in the subtype code and calls that from the titan board code
  instead of doing it directly.
- changes the shift in the ipr_data to be the actual number of bits to
  shift, instead of the number / 4, which makes it easier to match
  against the manual.
Signed-off-by: Jamie Lenehan <lenehan@twibble.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Previously this was using a static pgd shift in the reporting
code; simply flip this to PGDIR_SHIFT, which does the right
thing depending on the varying PTE magnitudes on the SH-X2 MMU.
While we're at it, and since it's been recently added, use
get_TTB() for fetching the TTB rather than the open-coded
instructions.
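In other words, the reporting fragment becomes something like this
sketch ('address' being the faulting address; the function name and
output format are illustrative):

static void show_pde(unsigned long address)
{
	pgd_t *pgd = (pgd_t *)get_TTB();		/* rather than the open-coded TTB read */
	pgd_t entry = pgd[address >> PGDIR_SHIFT];	/* no hard-coded shift */

	printk(KERN_ALERT "*pde = %08lx\n", (unsigned long)pgd_val(entry));
}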
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There were a number of places that made evil PAGE_SIZE == 4k
assumptions that ended up breaking when trying to play with
8k and 64k page sizes; this fixes those up.
The most significant change is the way we load THREAD_SIZE;
previously this was done via:
mov #(THREAD_SIZE >> 8), reg
shll8 reg
to avoid a memory access and allow the immediate load. With
a 64k PAGE_SIZE, we're out of range for the immediate load
size without resorting to special instructions available in
later ISAs (movi20s and so on). The "workaround" for this is
to bump up the shift to 10 and insert a shll2, which gives a
bit more flexibility while still being much cheaper than a
memory access.
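To make the arithmetic concrete (the THREAD_SIZE values here are
illustrative, and the shll ordering is immaterial):

/*
 * "mov #imm, Rn" only encodes an 8-bit signed immediate (-128..127),
 * so the value being loaded has to stay in that range:
 *
 *   4kB pages,  THREAD_SIZE = 8kB:    8192 >> 8  =  32  -> fits
 *   64kB pages, THREAD_SIZE = 64kB:  65536 >> 8  = 256  -> too big
 *                                    65536 >> 10 =  64  -> fits again
 *
 * Hence the immediate shift becomes 10, with the extra shll2 making
 * up the difference at run time:
 *
 *   mov   #(THREAD_SIZE >> 10), reg
 *   shll8 reg
 *   shll2 reg
 */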
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Handle simple TLB miss faults, which can be resolved completely
from the page table, in assembler.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Remove extra bits from the pmd structure and store a kernel logical
address rather than a physical address. This allows it to be directly
dereferenced. Another piece of weirdness inherited from x86.
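The payoff shows up in the pte-table accessors; roughly (a sketch,
with masking details omitted):

/*
 * With a physical address in the pmd, every page table walk needed a
 * phys-to-virt conversion before the pte page could be touched. With a
 * kernel logical address stored instead, the dereference is direct:
 */
#define pmd_page_vaddr(pmd)	((unsigned long)pmd_val(pmd))

static inline pte_t *pte_offset_kernel(pmd_t *dir, unsigned long address)
{
	return (pte_t *)pmd_page_vaddr(*dir) + pte_index(address);
}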
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Add TTB accessor functions and give it a sensible default
value. We will use this later for optimizing the fault
path.
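A sketch of what the accessors look like on an SH-4 style MMU (treat
the details as illustrative):

/* The TTB register holds the base address of the current pgd. */
static inline void set_TTB(pgd_t *pgd)
{
	ctrl_outl((unsigned long)pgd, MMU_TTB);
}

static inline pgd_t *get_TTB(void)
{
	return (pgd_t *)ctrl_inl(MMU_TTB);
}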
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Remove the previous saving of fault codes into the thread_struct,
as they are never used and appear to have been inherited from x86.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds some preliminary support for the SH-X2 MMU, used by
newer SH-4A parts (particularly SH7785).
This MMU implements a 'compat' mode with SH-X MMUs and an
'extended' mode for SH-X2 extended features. Extended features
include additional page sizes (8kB, 4MB, 64MB), as well as the
addition of page execute permissions.
The extended mode attributes are placed in a second data array,
which requires us to switch to 64-bit PTEs when in X2 mode.
With the addition of the exec perms, we also overhaul the mmap
prots somewhat, now that it's possible to handle them more
intelligently.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements initial support for the SH7206 (SH-2A) and SH7619
(SH-2) MMU-less CPUs.
Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This is an updated version of Eric Biederman's is_init() patch.
(http://lkml.org/lkml/2006/2/6/280). It applies cleanly to 2.6.18-rc3 and
replaces a few more instances of ->pid == 1 with is_init().
Further, is_init() checks pid and thus removes dependency on Eric's other
patches for now.
Eric's original description:
There are a lot of places in the kernel where we test for init
because we give it special properties. Most significantly init
must not die. This results in code all over the kernel testing
->pid == 1.
Introduce is_init to capture this case.
With multiple pid spaces, in all of the cases affected we are
looking for only the first process on the system, not some other
process that has pid == 1.
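For reference, the helper itself is tiny, roughly:

/* include/linux/sched.h at the time; later superseded by pid namespaces. */
static inline int is_init(struct task_struct *tsk)
{
	return tsk->pid == 1;
}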
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Sukadev Bhattiprolu <sukadev@us.ibm.com>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Serge Hallyn <serue@us.ibm.com>
Cc: Cedric Le Goater <clg@fr.ibm.com>
Cc: <lxc-devel@lists.sourceforge.net>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Make PROT_WRITE imply PROT_READ for a number of architectures which don't
support write only in hardware.
While looking at this, I noticed that some architectures which do not
support write only mappings already take the exact same approach. For
example, in arch/alpha/mm/fault.c:
"
if (cause < 0) {
if (!(vma->vm_flags & VM_EXEC))
goto bad_area;
} else if (!cause) {
/* Allow reads even for write-only mappings */
if (!(vma->vm_flags & (VM_READ | VM_WRITE)))
goto bad_area;
} else {
if (!(vma->vm_flags & VM_WRITE))
goto bad_area;
}
"
Thus, this patch brings other architectures which do not support write-only
mappings in line and consistent with the rest. I've verified the patch on
ia64, x86_64 and x86.
Additional discussion:
Several architectures, including x86, can not support write-only mappings.
The pte for x86 reserves a single bit for protection and its two states are
read only or read/write. Thus, write only is not supported in h/w.
Currently, if I mmap a page write-only, the first read attempt on that page
creates a page fault and will SEGV. That check is enforced in
arch/blah/mm/fault.c. However, if I first write to that page, it will fault in
and the pte will be set to read/write. Thus, any subsequent reads of the page
will succeed. It is this inconsistency in behavior that this patch is
attempting to address. Furthermore, if the page is swapped out and then
brought back, the first read will also cause a SEGV. Thus, any arbitrary read
of such a page can potentially result in a SEGV.
According to the SUSv3 spec, "if the application requests only PROT_WRITE, the
implementation may also allow read access." Also, as mentioned, some
architectures, such as alpha (shown above), already take the approach that I am
suggesting.
The counter-argument to this, raised by Arjan, is that the kernel is enforcing
the write-only mapping as best it can given the h/w limitations. This is
true; however, Alan Cox and I would argue that the inconsistency in
behavior, i.e. that applications sometimes work and sometimes fail, is highly
undesirable. If you read through the thread, I think people came to an
agreement on the last patch I posted, as nobody has objected to it...
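The inconsistency is easy to demonstrate from userspace; a small test
along these lines (illustrative, not part of the patch) SEGVs or
succeeds on an unpatched kernel depending purely on whether the write
or the read happens first:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	volatile char *p = mmap(NULL, 4096, PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	/*
	 * On an unpatched kernel (on hardware without write-only support):
	 * comment out the store below and the load SEGVs; leave it in and
	 * the page faults in read/write, so the load then succeeds.
	 */
	p[0] = 1;
	printf("read back: %d\n", p[0]);
	return 0;
}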
Signed-off-by: Jason Baron <jbaron@redhat.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Andi Kleen <ak@muc.de>
Acked-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
Cc: Ian Molton <spyro@f2s.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Disable IRQs in flush_cache_4096 for the cache purge. Under certain
workloads we would get an IRQ in the middle of a purge operation,
and the cachelines would remain in an inconsistent state, leading
to occasional stack corruption.
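The fix amounts to bracketing the purge with a local IRQ disable;
schematically (argument list from memory, treat it as illustrative):

static void __flush_cache_4096(unsigned long addr, unsigned long phys,
			       unsigned long exec_offset)
{
	unsigned long flags;

	/*
	 * The purge loop must run to completion; an IRQ arriving mid-purge
	 * can leave the cachelines in an inconsistent state.
	 */
	local_irq_save(flags);

	/* ... existing 4096-byte purge ... */

	local_irq_restore(flags);
}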
Signed-off-by: Takeo Takahashi <takahashi.takeo@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements initial support for the vsyscall page on SH.
At the moment we leave it configurable, since nommu has to be
supported from the same code base. We hook it up for the
signal trampoline return at present, with more to be added
later, once uClibc catches up.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
flush_cache_mm() wraps into flush_cache_all(), which is rather
excessive given that the number of PTEs within the specified context
is generally quite low. Optimize this by walking the mm's VMA list and
selectively flushing the VMA ranges from the dcache. Invalidate the
icache only if a VMA sets VM_EXEC.
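Schematically, the new path walks the VMAs rather than nuking the whole
cache; in the sketch below the two flush helpers are placeholders for
the arch's alias-aware dcache/icache routines:

void flush_cache_mm(struct mm_struct *mm)
{
	struct vm_area_struct *vma;

	/* Previously: flush_cache_all(), regardless of how little was mapped. */
	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		/* Write back/invalidate only this VMA's range from the dcache. */
		__flush_dcache_vma_range(vma->vm_start, vma->vm_end);

		/* The icache only needs invalidating for executable mappings. */
		if (vma->vm_flags & VM_EXEC)
			__flush_icache_vma_range(vma->vm_start, vma->vm_end);
	}
}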
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds support for the aforementioned CPU subtypes, and cleans
up some build issues encountered as a result.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>