/*
 * Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#ifndef _ASM_ARC_TLB_H
#define _ASM_ARC_TLB_H
#ifdef __KERNEL__
#include <asm/pgtable.h>
/* Masks for actual TLB "PD"s */
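/*
 * An ARC700 TLB entry is written as a pair of descriptors: PD0 carries the
 * virtual side (VPN, ASID, Global/Present bits), PD1 the physical side
 * (PFN, cacheability and the per-page Kr/Kw/Kx, Ur/Uw/Ux permission bits).
 * The masks below select which software PTE bits end up in each descriptor.
 */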
#define PTE_BITS_IN_PD0 (_PAGE_GLOBAL | _PAGE_PRESENT)

/*
 * Why the kernel-mode permission bits in PD1 mirror the user-mode ones
 * (from "ARC: copy_(to|from)_user() to honor usermode-access permissions"):
 *
 * This manifested as grep failing pseudo-randomly:
 *
 * -------------->8---------------------
 * [ARCLinux]$ ip address show lo | grep inet
 * [ARCLinux]$ ip address show lo | grep inet
 * [ARCLinux]$ ip address show lo | grep inet
 * [ARCLinux]$
 * [ARCLinux]$ ip address show lo | grep inet
 * inet 127.0.0.1/8 scope host lo
 * -------------->8---------------------
 *
 * ARC700 MMU provides fully orthogonal permission bits per page:
 * Ur, Uw, Ux, Kr, Kw, Kx
 *
 * The user-mode page permission templates used to have all kernel-mode
 * access bits enabled. This caused a tricky race condition, observed with
 * uClibc buffered file reads and UNIX pipes:
 *
 * 1. Read access to an anon mapped page in libc .bss: the write-protected
 *    zero_page gets mapped, and a TLB entry is installed with Ur + K[rwx].
 *
 * 2. grep calls libc getc() -> the buffered read layer calls read(2) with
 *    its internal read buffer in the same .bss page. The read() is on
 *    STDIN, which has been redirected to a pipe:
 *    read(2) => sys_read() => pipe_read() => copy_to_user()
 *
 * 3. Since the page has kernel-write permission (despite being user-mode
 *    write-protected), copy_to_user() succeeds w/o taking an MMU TLB-Miss
 *    Exception (page fault for ARC). Core MM is unaware that the kernel
 *    erroneously wrote to the reserved read-only zero-page (BUG #1).
 *
 * 4. Control returns to userspace, which now writes to the same .bss page.
 *    Since Linux MM is not aware that the page has been modified by the
 *    kernel, it simply assigns a new writable zero-init page to the
 *    mapping, losing the prior write by the kernel - effectively zeroing
 *    out the libc read buffer under the hood - hence grep doesn't see the
 *    right data (BUG #2).
 *
 * The fix is to make all kernel-mode access permissions mirror the
 * user-mode ones. Note that the kernel still has full access to pages when
 * accessed directly (w/o the MMU) - this fix ensures that kernel-mode
 * access in the copy_{to,from}_user() path uses the same faulting access
 * model as pure user accesses, to keep MM fully aware of page state.
 *
 * The issue is pseudo-random because it only shows up if the TLB entry
 * installed in #1 is still present at the time of #3. If it has been
 * evicted, due to TLB pressure or some such, then copy_to_user() does take
 * a TLB Miss Exception, and routine write-to-anon COW processing installs
 * a fresh page, written by the kernel and also usable as-is in userspace.
 *
 * Further, the issue was dormant for so long because it depends on where
 * the libc internal read buffer (in .bss) is mapped at runtime. If it
 * happens to reside in the file-backed data mapping of libc (in the
 * page-aligned slack space trailing the file-backed data), the loader's
 * zero-padding of that slack space does the early COW page replacement,
 * setting things up correctly from the very beginning. With gcc 4.8 based
 * builds, the libc buffer got pushed out to a real anon mapping, which
 * triggers the issue.
 *
 * Reported-by: Anton Kolesov <akolesov@synopsys.com>
 * Cc: <stable@vger.kernel.org> # 3.9
 * Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
 */
#define PTE_BITS_IN_PD1	(PAGE_MASK | _PAGE_CACHEABLE | \
			 _PAGE_U_EXECUTE | _PAGE_U_WRITE | _PAGE_U_READ | \
			 _PAGE_K_EXECUTE | _PAGE_K_WRITE | _PAGE_K_READ)
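
/*
 * A minimal userspace sketch of the failure pattern described above (not
 * part of the kernel API; buffer/variable names are illustrative only, and
 * it only misbehaves on a pre-fix kernel). It mimics what uClibc's buffered
 * read did: touch a .bss page read-only first, have the kernel write into
 * it via read(2) from a pipe, then write to it from user mode.
 */
#if 0	/* illustration only, never compiled */
#include <string.h>
#include <unistd.h>

static char buf[4096];		/* .bss -> anon mapping, initially the shared zero page */

int main(void)
{
	int fds[2];
	char first = buf[0];	/* step 1: user read maps zero_page with Ur + K[rwx] */

	pipe(fds);
	write(fds[1], "inet", 4);
	read(fds[0], buf, 4);	/* steps 2-3: pipe_read() -> copy_to_user() hits stale Kw */

	buf[64] = first;	/* step 4: user write -> COW installs a fresh zeroed page */

	/* on a pre-fix kernel the data read from the pipe may have vanished */
	return memcmp(buf, "inet", 4) ? 1 : 0;
}
#endif
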
#ifndef __ASSEMBLY__
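
/*
 * A full flush of the whole address space is only needed when the entire mm
 * is going away (exit/execve); partial teardown relies on tlb_end_vma()
 * below to flush the affected range.
 */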
#define tlb_flush(tlb)				\
do {						\
	if (tlb->fullmm)			\
		flush_tlb_mm((tlb)->mm);	\
} while (0)

/*
 * This pair is called at the time of munmap/exit to flush cache and TLB
 * entries for mappings being torn down:
 * 1) cache-flush part - implemented via tlb_start_vma() for VIPT aliasing D$
 * 2) tlb-flush part - implemented via tlb_end_vma(), which flushes the TLB
 *    range
 *
 * Note, read http://lkml.org/lkml/2004/1/15/6
 */
#ifndef CONFIG_ARC_CACHE_VIPT_ALIASING
#define tlb_start_vma(tlb, vma)
#else
#define tlb_start_vma(tlb, vma)						\
do {									\
	if (!tlb->fullmm)						\
		flush_cache_range(vma, vma->vm_start, vma->vm_end);	\
} while (0)
#endif

#define tlb_end_vma(tlb, vma)						\
do {									\
	if (!tlb->fullmm)						\
		flush_tlb_range(vma, vma->vm_start, vma->vm_end);	\
} while (0)

#define __tlb_remove_tlb_entry(tlb, ptep, address)
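
/*
 * Rough calling sequence for the hooks above (driven by the generic
 * mmu_gather code pulled in from <asm-generic/tlb.h>; a sketch only,
 * details vary with kernel version):
 *
 *	tlb_gather_mmu()
 *	  per vma being unmapped:
 *	    tlb_start_vma()           - cache flush (VIPT aliasing D$ only)
 *	    __tlb_remove_tlb_entry()  - per-PTE hook, a no-op on ARC
 *	    tlb_end_vma()             - TLB range flush
 *	tlb_finish_mmu()
 *	    tlb_flush()               - whole-mm flush, only if tlb->fullmm
 */
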
#include <linux/pagemap.h>
#include <asm-generic/tlb.h>
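
/*
 * Debug aid: cross-checks the software-maintained ASID (pid_sw) against what
 * is currently programmed into the MMU PID register when handling a fault at
 * the given address; a mismatch points at broken ASID/TLB bookkeeping.
 * (Descriptive summary only - the actual check lives in the ARC mm code.)
 */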
#ifdef CONFIG_ARC_DBG_TLB_PARANOIA
void tlb_paranoid_check(unsigned int pid_sw, unsigned long address);
#else
#define tlb_paranoid_check(a, b)
#endif
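
/*
 * MMU bring-up and identification helpers: arc_mmu_init() sets up the MMU at
 * boot, read_decode_mmu_bcr() decodes the MMU Build Configuration Register,
 * and arc_mmu_mumbojumbo() formats a human-readable MMU description into buf
 * (roughly what the names and signatures suggest; summary only).
 */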
void arc_mmu_init(void);
extern char *arc_mmu_mumbojumbo(int cpu_id, char *buf, int len);
void __init read_decode_mmu_bcr(void);
#endif /* __ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_ARC_TLB_H */